repo_name: string (length 4–136)
issue_id: string (length 5–10)
text: string (length 37–4.84M)
loomnetwork/dashboard
435118477
Title: DPOSv3 Withdrawal asks to change account even though I am logged in with the correct account Question: username_0: When I am withdrawing, I get the error about having to change accounts, even though I am logged in with the correct account in MetaMask. ```A pending withdraw requires you to switch ETH accounts to: 0x6AE1C0890E9CF295FE164EDE614AB684175E68C0. Please change your account and then click OK``` (I am already logged in with that account in MetaMask.) But the first part, where it initializes the withdrawal, works. Afterwards, when I click `OK`, I get the following: `Uncaught (in promise) TypeError: this.switchDposUser is not a function` Answers: username_1: Should be fixed in [95ec528] Status: Issue closed
salesforce/vulnreport
251432012
Title: Fails to Build in Heroku Question: username_0: ``` -----> Ruby app detected -----> Compiling Ruby/Rack Command: 'set -o pipefail; curl -L --fail --retry 5 --retry-delay 1 --connect-timeout 3 --max-time 30 https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/heroku-16/ruby-2.1.2.tgz -s -o - | tar zxf - ' failed on attempt 1 of 3. Command: 'set -o pipefail; curl -L --fail --retry 5 --retry-delay 1 --connect-timeout 3 --max-time 30 https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/heroku-16/ruby-2.1.2.tgz -s -o - | tar zxf - ' failed on attempt 2 of 3. ! ! An error occurred while installing ruby-2.1.2 ! ! Heroku recommends you use the latest supported Ruby version listed here: ! https://devcenter.heroku.com/articles/ruby-support#supported-runtimes ! ! For more information on syntax for declaring a Ruby version see: ! https://devcenter.heroku.com/articles/ruby-versions ! ! ! Debug InformationCommand: 'set -o pipefail; curl -L --fail --retry 5 --retry-delay 1 --connect-timeout 3 --max-time 30 https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/heroku-16/ruby-2.1.2.tgz -s -o - | tar zxf - ' failed unexpectedly: ! ! gzip: stdin: unexpected end of file ! tar: Child returned status 1 ! tar: Error is not recoverable: exiting now ! ! Push rejected, failed to compile Ruby app. ! Push failed ``` Answers: username_1: Heroku has stopped supporting Ruby version 2.1.2. There is a goal to bump the version up to a supported Ruby version. username_2: Any update on this ? username_3: We are working on it. The issue is that the datamapper gem is not supported for modern ruby versions so we are working on a significant rewrite internally for those components. However, we have been able to get docker working for local development and putting the finishing touches on production docker deploys. Hopefully, that should be a good stopgap solution in the meantime. username_4: Hello, has there been any updates on this? username_5: Can you please fix the [_Deploy to Heroku_ Button](https://heroku.com/deploy?template=https://github.com/salesforce/vulnreport) too? I have reproduced the screenshot below: ![image](https://user-images.githubusercontent.com/136826/51588566-b82b8400-1f38-11e9-9d75-3870218bf799.png) This _Build Log_ in its entirety is: `-----> Ruby app detected` `-----> Compiling Ruby/Rack` ` Command: 'set -o pipefail; curl -L --fail --retry 5 --retry-delay 1 --connect-timeout 3 --max-time 30 https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/heroku-18/ruby-2.1.2.tgz -s -o - | tar zxf - ' failed on attempt 1 of 3.` ` Command: 'set -o pipefail; curl -L --fail --retry 5 --retry-delay 1 --connect-timeout 3 --max-time 30 https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/heroku-18/ruby-2.1.2.tgz -s -o - | tar zxf - ' failed on attempt 2 of 3.` ` !` ` ! An error occurred while installing ruby-2.1.2` ` ! ` ` ! Heroku recommends you use the latest supported Ruby version listed here:` ` ! https://devcenter.heroku.com/articles/ruby-support#supported-runtimes` ` ! ` ` ! For more information on syntax for declaring a Ruby version see:` ` ! https://devcenter.heroku.com/articles/ruby-versions` ` ! ` ` ! ` ` ! Debug InformationCommand: 'set -o pipefail; curl -L --fail --retry 5 --retry-delay 1 --connect-timeout 3 --max-time 30 https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/heroku-18/ruby-2.1.2.tgz -s -o - | tar zxf - ' failed unexpectedly:` ` ! ` ` ! gzip: stdin: unexpected end of file` ` ! tar: Child returned status 1` ` ! tar: Error is not recoverable: exiting now` ` !` ` ! 
Push rejected, failed to compile Ruby app.` ` ! Push failed` https://devcenter.heroku.com/articles/ruby-support#ruby-versions lists the versions of each [Ruby] runtime supported by Heroku.
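For context, Heroku resolves the runtime from the app's Gemfile, so a build like the one above needs a currently supported Ruby declared there. A minimal sketch of the mechanism (the version shown is a placeholder to check against Heroku's supported-runtimes page, and per the maintainers above the datamapper dependency still blocks modern Rubies, so this alone won't make vulnreport build):

```ruby
# Gemfile — hypothetical runtime pin; pick a release from
# https://devcenter.heroku.com/articles/ruby-support#supported-runtimes
source 'https://rubygems.org'

ruby '2.6.6' # replaces the unsupported 2.1.2 the buildpack fails to fetch
```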
meateam/api-gateway
433327562
Title: Get all files route Question: username_0: We need to implement a route for retrieving all of a user's files without taking folders and hierarchy into consideration; if there is no folder id in the request, the root folder's files are returned. ## API `GET /files?parent=<folderId>` Returns a list of file objects. Status: Issue closed Answers: username_0: Done in #14
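A sketch of a client call against the route described above; the host, error handling, and response typing are assumptions, not part of the issue:

```typescript
// Hypothetical client for GET /files?parent=<folderId>.
// Omitting `parent` should return the root folder's files.
async function listFiles(parent?: string): Promise<unknown[]> {
  const url = parent ? `/files?parent=${encodeURIComponent(parent)}` : "/files";
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`GET ${url} failed with status ${res.status}`);
  }
  return res.json(); // a list of file objects, per the API description
}
```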
cekit/cekit
753452361
Title: Add support for Cachito Question: username_0: **Describe the solution you'd like** https://osbs.readthedocs.io/en/latest/users.html#fetching-source-code-from-external-source-using-cachito When this artifact type is used, the following would be added to the Dockerfile: ``` COPY $REMOTE_SOURCE $REMOTE_SOURCE_DIR ``` We should clean up the `$REMOTE_SOURCE_DIR` at the end. It should be supported only on OSBS. Answers: username_0: Once this is done, it would be good to create a new document that explains how multi-stage builds with golang and cachito should work together: https://docs.cekit.io/en/latest/handbook/index.html. username_0: We could also add a new switch to the OSBS builder, `--with-cachito`, that would modify the generated Dockerfile. This would probably be a much cleaner implementation. username_1: So currently CEKit already supports the `container.yaml` file ( https://docs.cekit.io/en/latest/descriptor/image.html#osbs-configuration ) where the cachito configuration is stored (https://osbs.readthedocs.io/en/latest/users.html#fetching-source-code-from-external-source-using-cachito), so by detecting that we can probably activate the integration without the need for any new flags. username_1: PR: https://github.com/cekit/cekit/pull/665 Status: Issue closed
hulop/NavCogIOS
274335187
Title: Not being able to go back to the main screen when the location gets lost Question: username_0: If a user tries to select a destination when the location is lost, everything is greyed out and the user cannot go back to the previous screen (the <NavCog button in the top left bar disappears) Answers: username_1: Hi @username_0, I found that your issues are related to [NavCogIOSv3](https://github.com/hulop/NavCogIOSv3/issues). Could you please close the issues here and repost them in the correct repository? Thanks
rokups/ImNodes
464289938
Title: cmake version required Question: username_0: I don't think you're using any features of cmake 3.14, are you? So it would help if you would lower it to prevent unnecessary errors. I just lowered it to 3.12 (on a Linux machine) and it runs fine. Answers: username_1: It could probably go as low as 2.xx-something. I will address it whenever I get around to making it build on Windows. Status: Issue closed
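For illustration, the one-line change being discussed (3.12 is the version the reporter verified on Linux; per the maintainer, the true floor may be even lower):

```cmake
# CMakeLists.txt — require only the CMake version the project actually needs
cmake_minimum_required(VERSION 3.12)
```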
crowbarz/node-red-contrib-mqtt-dynamicsub
950824381
Title: Dependency Issue with agent-base <6.0.0 and Node-RED 2.0 Question: username_0: Hi, We've detected that your node has a dependency on an old version of `agent-base (<6.0.0)` , These old versions were patching a core node.js function in a way that could break other libraries - including one we started using in Node-RED 2.0 for the HTTP Request node. Therefore any users that upgrade to Node-RED 2.0 and have your node installed (or later try to install it) will get errors when using the http-request node. Could you please take a look at your dependencies and see if you can update the versions so that you are no longer dependent on agent-base before version 6.0.0 Note this could be a module that you are using has a dependency on agent-base so you might need to check for updates to that module, to help you we've attached your nodes dependency tree below More details on this issue and the warning message that is now displayed in Node-RED 2.0.2 are on the forum at link https://discourse.nodered.org/t/node-red-2-0-2-released/48767 ``` └─ [email protected] ├─ [email protected] ├─ [email protected] │ ├─ [email protected] │ └─ [email protected] ├─ [email protected] │ ├─ [email protected] │ │ └─ [email protected] │ │ └─ [email protected] │ └─ [email protected] │ └─ [email protected] ├─ [email protected] │ ├─ [email protected] │ │ └─ [email protected] │ ├─ [email protected] │ ├─ [email protected] │ │ ├─ [email protected] │ │ ├─ [email protected] │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ └─ [email protected] │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ │ ├─ [email protected] │ │ │ │ ├─ [email protected] │ │ │ │ └─ [email protected] │ │ │ │ └─ [email protected] │ │ │ ├─ [email protected] │ │ │ │ ├─ [email protected] │ │ │ │ ├─ [email protected] │ │ │ │ └─ [email protected] │ │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ │ └─ [email protected] │ │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ │ ├─ [email protected] │ │ │ │ └─ [email protected] │ │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ ├─ [email protected] │ │ │ │ ├─ [email protected] │ │ │ │ └─ [email protected] [Truncated] │ ├─ [email protected] │ └─ [email protected] └─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] │ └─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] └─ [email protected] ``` Thanks in advance for looking into this. Sam PS Sorry for the templated issue but we've got a number of nodes with the issue so I'm automating the issue creation. Answers: username_1: Fixed in 0.0.10. Status: Issue closed
an-ju/projectscope
315970561
Title: Reduce response time and JS errors by logging plotted svg Question: username_0: Generating SVG files on the fly has caused long response times and some display bugs. To fix these problems, we can reuse SVG outputs that have already been generated. Each metric sample has an extra field to hold SVG output. If the field is empty, the JS will be called and the graph will be updated. Otherwise, the saved SVG output will be applied directly. This doesn't seem like a good implementation, though, since the JS is only called once for each graph. What we should really do is call the JS within the server and save the SVG output in the DB.
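A sketch of the caching idea in Rails terms; the model and method names (`MetricSample`, `svg_cache`, `render_svg`) are assumptions for illustration, not this codebase's actual API:

```ruby
# Hypothetical memoization of the plotted SVG on the metric sample record.
class MetricSample < ApplicationRecord
  def svg
    return svg_cache if svg_cache.present?

    rendered = render_svg           # assumed server-side plotting hook
    update(svg_cache: rendered)     # persist so the graph is never re-rendered
    rendered
  end
end
```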
MysticMods/MysticalWorld
762631028
Title: lava cats and hell sprouts not spawning in 1.16.4-0.3.1. Question: username_0: As above. We started the server several times to check if it's repeatable. In fact, while scavenging the Nether there was not a single lava cat. Same with the hell sprouts. After two attempts I changed the spawn chance in the config to check whether it's just that rare, and I had no luck, or it's just not working. Another two clean worlds were created and I also did not find them in the Nether. Endermini seems to work fine and spawns naturally in single player (not tested on a server). We're using Forge 35.1.4 Answers: username_1: Okay, that's interesting, I thought I'd confirmed it. username_0: Patchouli is installed from the link without MysticalLib (because it says there is no need) - I'm just saying in case it is helpful username_1: Thanks! Yeah, the spawn rates might be too rare. I need to triple-check it. Status: Issue closed username_1: This should be fixed in the next version.
cloudinary/cloudinary_android
250392021
Title: How to remove an image client-side? Thanks Answers: username_1: @username_0 In the upload-preset edit page (on your account's upload settings page), click on the "Advanced Options" button and set "Return delete token" to "YES". This will tell Cloudinary to return the delete token within the response JSON, which can then be used to remove the uploaded image in an unsigned manner (note the token is only valid for 10 minutes). More information here: https://support.cloudinary.com/hc/en-us/articles/202521132-How-to-delete-an-image-from-the-client-side- username_2: Closing this issue due to the time elapsed. Please feel free to either re-open the issue, contact our support at http://support.cloudinary.com, or create a new ticket if you have any additional issues. Status: Issue closed
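In code, the flow described above amounts to POSTing the returned token to the delete_by_token endpoint from the linked article. A minimal Java sketch (the cloud name and token handling are placeholders; check the article for the exact contract):

```java
// Hypothetical client-side deletion using the delete token returned by an
// unsigned upload; the token is only valid for about 10 minutes.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public final class DeleteByToken {
    public static int delete(String cloudName, String deleteToken) throws Exception {
        URL url = new URL("https://api.cloudinary.com/v1_1/" + cloudName + "/delete_by_token");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        byte[] body = ("token=" + deleteToken).getBytes(StandardCharsets.UTF_8);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        return conn.getResponseCode(); // 200 indicates the image was removed
    }
}
```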
LoneGazebo/Community-Patch-DLL
153358453
Title: Towns not connecting underlying resource Question: username_0: Bug Report Template 1. Mod Version (i.e. Date - (4/23b)): 4/23b with DLL hotfix 2. Mod List (if using standard CPP set, leave blank): 3. Type of error (i.e. crash, interface bug, AI quirk): Town created by Great Merchant is not connecting the resource (2x Iron) on the tile it is constructed on. 4. Steps to reproduce: Generate a GM and use it to create a "Town" improvement on a resource tile. 5. Additional information: The only UI element that correctly updates to show the resource is connected is the new "Corporations and Monopolies" panel. The TopPanel and Economic Overview both show totals without this new connection. It isn't an update delay, as even after several turns and a save/reload the results are the same. --------------------------- Supporting information: Please note that you can attach .zip files to Issues by dragging-and-dropping them. If possible, zip up all supporting data and post that way. 1. Log files: Database.log and Lua.log needed 2. Minidump file (located in your Civ5.exe directory) 3. Screenshots (if needed) ---------------------------- Answers: username_1: can't reproduce, sorry. Status: Issue closed
Zrips/CMI
940854021
Title: Disabling Alias doesn't work Question: username_0: **Description of issue:** I have attempted to disable the /pay alias, but the server still uses it. --- **CMI Version (using `/cmi version`):** 192.168.3.11 **Server Type (Spigot/Paperspigot/etc):** Paperspigot **Server Version (using `/ver`):** 1.16.5 Status: Issue closed Answers: username_1: Disabling a default alias from the alias.yml file requires a full server restart to take effect. After the restart it should be properly disabled and CMI will not do anything with that command.
Kaldeera/ng-SharePoint
125057733
Title: getListItems $expand is adding a space resulting in a bad request Question: username_0: The space should be removed when appending configured $expand fields: https://github.com/Kaldeera/ng-SharePoint/blob/master/src/sharepoint/services/splist.js#L188 And I think the defaultExpandedProperties should only be used if no $expand is defined on the query object. Status: Issue closed Answers: username_1: Solved in v0.6.1
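A sketch of the intended join logic: comma-separated with no space, and falling back to the default expanded properties only when the query defines no $expand of its own. The function and variable names are illustrative, not the library's actual code:

```javascript
// Hypothetical fix: build $expand without the stray space, and use the
// defaults only when the query object did not specify $expand itself.
function buildExpand(queryExpand, defaultExpandedProperties) {
    var fields = (queryExpand && queryExpand.length)
        ? queryExpand
        : defaultExpandedProperties;
    return '$expand=' + fields.join(','); // no space after the comma
}
```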
bisq-network/support
860656685
Title: Fee reimbursement for trade asVAkd Question: username_0: ![image](https://user-images.githubusercontent.com/82761317/115145874-d5822680-a04b-11eb-9ef3-09c1530f6259.png) Maker: 0862e95b869770ac725402e079e7e103d0d4df1b2260f53172dd55e9517d31b7 Taker: 29b9bb52de22aa4f5690ad79deda20fe33c04de44137b650692ce467d3da679e Deposit: 978b4351d90311e9f043f106e534874201c9d358b233565c3dc65e7cf85161f7 Bisq version: 1.6.2 BTC address for reimbursement: bc1q06lfu54q97zj4p73dt6hfa3qvy9wv260fe3zwq Answers: username_0: This trade seems to be working now after having deleted spv and resyncing. username_0: Trade is now finished and everything looks to be okay. Apologies to the other guy involved if there was any fault on my side. I was just following what it said to do on screen by making this thread Status: Issue closed
jorgearanda/fish
60192351
Title: Disconnected participants are not being shown 'correctly' to other remaining participants Question: username_0: When a participant disconnects midway, the disconnected participant seems to be 'replaced' by one of the remaining participants (unsure whether the 'You' entry can appear as the replacement for the disconnected participant) on the screens of the participants that remain.
Optinomic/apps
311902336
Title: Klinikstichproben Question: username_0: ## Klinikstichproben Recompute via: http://optinomic.cust.local/client.new/#/user/app/com.optinomic.user.apps.klinikstichproben/template/klinikstichproben - [ ] ISK - [ ] BSCL - [ ] TMT - [ ] WHOQOL #### Create minified Klinikstichproben Note to myself: local "Calculations-Script". - [ ] ISK - [ ] BSCL - [ ] TMT - [ ] WHOQOL #### Publish minified Klinikstichproben - [ ] Push the generated files to the corresponding apps / Fallkonferenz. ## Quickfix - Issue In the meantime, the existing files, e.g. `ks_bscl.json`, could be overwritten with the new labels `EAS => QuEA`. Answers: username_0: ### Issue Note to myself: `BSCL` & `WHOQOL` = JavascriptError ### BSCL http://optinomic.cust.local/api/modules/ch.suedhang.user.apps.ks_bscl/view/score_overview#patient_id=0,token=<PASSWORD>,app_name=Klinische_Stichprobe_BSCL,app_id=ch.suedhang.user.apps.ks_bscl,api_url=http://optinomic.cust.local/api/,user_id=2 #### Calculations Triggered `RECOMPUTE`: Patient app: `ch.suedhang.apps.bscl_anq.production` | `scores_calculation` User app: `ch.suedhang.user.apps.ks_bscl` | `bscl_klinikstichprobe_new` ### WHOQOL http://optinomic.cust.local/api/modules/ch.suedhang.user.apps.ks_whoqol/view/score_overview#patient_id=0,token=<PASSWORD>,app_name=Klinische_Stichprobe_WHOQOL-BREF,app_id=ch.suedhang.user.apps.ks_whoqol,api_url=http://optinomic.cust.local/api/,user_id=2 #### Calculations Triggered `RECOMPUTE`: Patient app: `ch.suedhang.apps.whoqol.production` | `phys_psych_calculation` User app: `ch.suedhang.user.apps.ks_whoqol` | `whoqolbref_klinikstichprobe` username_0: DONE: Except for BSCL! Ugh, we're running into MEMORY issues again!
reactjs/react-transition-group
1042355069
Title: Docs for `unmountOnExit` should mention lazy mount behaviour Question: username_0: https://reactcommunity.org/react-transition-group/transition#Transition-prop-unmountOnExit Reduced test case: https://stackblitz.com/edit/react-ts-gd8g4q Related: https://github.com/reactjs/react-transition-group/issues/765
pope410211/TIY-Assignments
98255092
Title: GITHUB POPE410211/OCTOCAT Question: username_0: ![mobile-octocat-basic-view](https://cloud.githubusercontent.com/assets/12172658/8994054/03226292-36d7-11e5-81a1-7f797faf4a38.png) ![profile-1-boxes](https://cloud.githubusercontent.com/assets/12172658/8994055/03236624-36d7-11e5-8871-950a41247daf.png) ![profile-newsfeed-text](https://cloud.githubusercontent.com/assets/12172658/8994056/03284978-36d7-11e5-8acd-1fc585489ea6.png)
openclassify/openclassify
523517861
Title: DemoData Error on fresh install Question: username_0: On a fresh install of OC: + sudo php artisan install --ready In Container.php line 752: Class Visiosoft\DemodataModule\DemodataModule does not exist Status: Issue closed
symfony/symfony
124052891
Title: MergeDoctrineCollectionListener breaks use of Entity Listener with many-to-many associations Question: username_0: I want to use Doctrine's functionality called Entity Listener for tracking some entity changes. I have two entities named Sensors and Parameters. Each Sensor can have multiple Parameters and the same Parameters can also be used with different Sensors, so I use a many-to-many association between them. Next, I would like to allow adding/removing some/all Parameters from a Sensor at any time and track the history of these associations. An Entity Listener is the perfect solution in this situation. For this I've created an Entity Listener called SensorsListener and added the event preUpdate for catching changes made on the Sensor entity. But... a many-to-many association is a collection type, and if it has the field property named "multiple" set to true, then Symfony's MergeDoctrineCollectionListener is added as an event subscriber (DoctrineType class): ```php public function buildForm(FormBuilderInterface $builder, array $options) { if ($options['multiple']) { $builder ->addEventSubscriber(new MergeDoctrineCollectionListener()) ->addViewTransformer(new CollectionToArrayTransformer(), true) ; } } ``` It checks if all items from the collection were removed, and if so, then instead of "raw" deleting of items, it calls the clear method for optimization purposes: ```php class MergeDoctrineCollectionListener implements EventSubscriberInterface { public static function getSubscribedEvents() { // Higher priority than core MergeCollectionListener so that this one // is called before return array(FormEvents::SUBMIT => array('onBind', 10)); } public function onBind(FormEvent $event) { $collection = $event->getForm()->getData(); $data = $event->getData(); // If all items were removed, call clear which has a higher // performance on persistent collections if ($collection instanceof Collection && count($data) === 0) { $collection->clear(); } } } ``` If I have, for example, some Sensor with 5 associated Parameters and I delete 3 of them, it's all OK, but when I remove all Parameters then the preUpdate event isn't triggered. Because the collection is cleared, Doctrine doesn't know what changes were actually made and doesn't mark the Sensor entity as updated in the unit of work. A workaround for this issue is to add a postLoad event to grab the actual associations and compare them with the associations in the preFlush state, but from an optimization point of view it doesn't make sense, because now everywhere in the application where Sensors are used, all associated Parameters are loaded as well. If I comment out the line `$collection->clear();` then preUpdate is triggered properly. I would like to suggest adding a new boolean option to the EntityType field, like raw_update or no_optimizations, that blocks this use of the clear method when deleting all associations. It would make Entity Listeners work correctly with collection types without any workarounds. Answers: username_1: Same issue here, I have tried to use a Doctrine Event Listener with onFlush. Once I comment out the $collection->clear() line inside the listener, the removed association appears in the unitOfWork, and can be queried by $uow->getScheduledCollectionUpdates(), getDeleteDiff() as an entity, when a form is posted with empty association data. username_2: Have you tried to set the `empty_data` option to add an empty value? That way the count wouldn't be "0" and the `allow_delete` option should take care of it. username_0: Yes, I've tried but it's not working. I also tried to add a new empty entity as empty_data, but I don't see any change.
username_2: Would this be acceptable to fix your use case and keep performance: ```php public function onBind(FormEvent $event) { $collection = $event->getForm()->getData(); $data = $event->getData(); // If all items were removed, call clear which has a higher // performance on persistent collections if ($collection instanceof Collection && count($data) === 0) { $collection->map(function ($v) { return null; }); } } ``` ?? username_1: Yes, it's working with the $collection->map(.... username_2: @username_1 Thank you for the feedback :) username_2: @username_3 Any lead here? Is there a problem of configuration? This should be handled by the `allow_delete` option, right? Maybe we could pass this option to the constructor as for `MergeCollectionListener` and apply my fix dynamically? username_3: As far as I understand, this is nothing we can (should?) fix. Calling `clear()` seems like the proper thing to do if all items of a collection were removed. I think you are looking for a solution to catch `clear()` events, but that seems more of a Doctrine issue. /cc @beberlei username_4: @username_2 Would you like to open a PR with the fix you suggested? username_2: Ok, I've got some time until the next patch release. username_5: @username_2 Time is up :) username_0: @username_5: I almost forgot about this, because I'm using the patch in every project that uses Symfony. :) The patch just comments out lines 63-65 @username_3 I think that is not a Doctrine issue, because simply clearing the Collection with the "clear" method should work exactly like it works now - without checking the state of the Collection, whether it was empty or not. username_6: I'd need feedback on https://github.com/doctrine/doctrine2/pull/7120 too - to me, the change looks correct, but possibly a massive BC break due to the change in behavior. Anyone having a better overview of the problem here is welcome to provide their opinion there. username_4: I am closing here as this seems to be addressed in Doctrine itself (see doctrine/doctrine2#7120). Status: Issue closed
stripe/stripe-android
491838554
Title: abstract method "void com.stripe.android.ApiResultCallback.onSuccess(java.lang.Object)" Question: username_0: ## Summary This error occurs when using 3D secure 1 with sources. Once we try to invoke the method to move the user into the webview (I believe!) then we get this crash. ``` --------- beginning of crash 2019-09-10 19:39:14.338 3147-3147/com.redu.Ashleigh.debug E/AndroidRuntime: FATAL EXCEPTION: main Process: com.redu.Ashleigh.debug, PID: 3147 java.lang.AbstractMethodError: abstract method "void com.stripe.android.ApiResultCallback.onSuccess(java.lang.Object)" at com.stripe.android.ApiOperation.onPostExecute(ApiOperation.java:32) at com.stripe.android.ApiOperation.onPostExecute(ApiOperation.java:11) at android.os.AsyncTask.finish(AsyncTask.java:660) at android.os.AsyncTask.-wrap1(AsyncTask.java) at android.os.AsyncTask$InternalHandler.handleMessage(AsyncTask.java:677) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:154) at android.app.ActivityThread.main(ActivityThread.java:6077) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:865) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:755) ``` ## Code to reproduce This seemed to occur after installing the OneSignal & CodePush SDK (so it could be something to do with those, any ideas?) but we can't be certain. ## Installation method Using react-native tipsi-stripe and Android Studio. ## SDK version com.stripe:stripe-android:8.1.0 Answers: username_1: @username_0 `ApiResultCallback` was introduced in `9.1.0`, yet the version of `stripe-android` that you're using is `8.1.0`. It seems that there's some incompatibility, likely introduced by the libraries you mentioned. username_0: @username_1 thanks for the quick response! Could you give me some more details about how this method is usually utilised? I've tried downgrading the tipsi-stripe SDK by two major versions (7.0.0, 6.0.0, 5.0.0), but this error is still the same on all of them. The strange thing is that we currently have this functionality working in production using tipsi-stripe 7.5.0, which uses 8.1.0 under the hood; this error seemed to appear out of thin air (without changing anything to do with our Stripe integration, the only thing which changed was that we integrated OneSignal and CodePush) username_1: @username_0 `ApiResultCallback` is the interface for callbacks from Stripe API requests. So something is expecting that class to exist, even though it doesn't exist in 8.1.0. Is it possible that the SDKs you're referencing are doing this? username_1: What method are you invoking here? username_0: @username_1 sorry for the late reply! (timezone difference: I'm in the UK, GMT+1). I'm using tipsi-stripe `createSourceWithParams`, which handles the redirect flow and invokes the method once the 3D secure 1 has been completed. ``` const result = await stripe.createSourceWithParams({ type: "threeDSecure", amount: chargeAmount, currency: "GBP", flow: "redirect", returnURL: returnUrl, card: source }) ``` Here is a [link to the Java method from tipsi-stripe line 219 ](https://github.com/tipsi/tipsi-stripe/blob/3917085761a54555e3421d8c371204e4e42200c7/android/src/main/java/com/gettipsi/stripe/StripeModule.java#L219). Seems to be using `SourceParams.createThreeDSecureParams` under the hood, which looks like a method from stripe-android.
I'm going to do more digging this morning and try our application without CodePush and OneSignal, as this was working before with the same version (7.5.0) we are using now, and only seemed to be an issue on Android after integrating OneSignal and CodePush. I'll keep you updated on my progress. Thanks for the help! username_0: @username_1 Found the issue. We were building out our own implementation of Stripe 3D Secure 2 and that module was using Stripe 9+ which was overwriting the "8.1.0" required in tipsi-stripe. Thanks for your help! Status: Issue closed username_1: @username_0 FYI the official Stripe client library for React Native is now in public beta. https://github.com/stripe/stripe-react-native
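For anyone debugging a similar mismatch, Gradle's built-in dependency report can show which library drags in the conflicting stripe-android version (the `app` module name and the configuration name are assumptions about a typical Android project):

```
./gradlew :app:dependencies --configuration releaseRuntimeClasspath | grep stripe
```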
guigolab/ggsashimi
493639193
Title: What's the difference between the flag -S OUT_STRAND and -s STRAND? Question: username_0: Hi, I am using ggsashimi to generate sashimi plots for my results. The alternative splicing events I am interested in are located on either the + or - strand. I checked the help info of ggsashimi, and the -S OUT_STRAND and -s STRAND are the two flags I can set to focus on only one strand. But I am not sure how to correctly set the two flags. To my understanding, if the alternative splicing event I am interested in is located on the - strand, -s would be set to ANTISENSE. How about -S? Should I set it to minus, to be compatible with the -s flag? What does the out strand mean? thanks, Shan Answers: username_1: Hi @username_0, the ```-s``` flag refers to the strandness of the RNA-seq protocol. If you just want to show the reads mapping to one of the strands (provided that the RNA-seq protocol is stranded), you can do it using ```-S```. Does it make sense? Maybe we can make the help message clearer username_0: Hi @username_1, thanks for your reply. But there are two flags, -s and -S, as shown in the ggsashimi plot help message: -s is strand and -S is out_strand. I understood that -s refers to the strandness of the RNA-seq. But how about the capitalized -S flag? What is out_strand? username_1: As I said in the previous message, capital ```-S``` refers to whether you want to show in the plot only the plus or the minus strand. Imagine you have a stranded protocol (e.g. you set ```-s``` to ```MATE1_SENSE```), but you are looking at an exon inclusion event within a gene on the minus strand, which overlaps an antisense lncRNA. If you just want to see the reads corresponding to the minus strand, you should set ```-S minus```. Is it clearer now? username_0: Thanks! I fully understand now. Status: Issue closed
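Putting the two flags together, a hypothetical invocation for the scenario above; the -b/-c/-o flags and the file and region names are placeholders based on the tool's usual usage, so check the README for your setup:

```
./ggsashimi.py -b input_bams.tsv -c chr10:27035000-27050000 -s MATE1_SENSE -S minus -o sashimi_minus
```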
tensorflow/tensorflow
325025170
Title: Feature Request: Checkpointable slot_variables Question: username_0: When using `tf.train.Checkpoint()` in combination with the `AdamOptimizer` the following error is thrown. ``` load_status.initialize_or_restore(self.session) File "/home/lib/python3.5/site-packages/tensorflow/python/training/checkpointable_utils.py", line 677, in initialize_or_restore checkpointable_objects = list_objects(self._root_checkpointable) File "/home/lib/python3.5/site-packages/tensorflow/python/training/checkpointable_utils.py", line 472, in list_objects object_names=object_names) File "/home/lib/python3.5/site-packages/tensorflow/python/training/checkpointable_utils.py", line 291, in _serialize_slot_variables "A slot variable was re-used as a dependency of a " NotImplementedError: A slot variable was re-used as a dependency of a Checkpointable object. This is not currently allowed. File a feature request if this limitation bothers you ``` See also: https://github.com/tensorflow/tensorflow/issues/19208 Answers: username_1: Adam works fine in general (there are a bunch of unit tests). Do you have a snippet which reproduces this error? username_0: I'll see what I can come up with. I have a lot of code. username_1: Well, from the error something is adding a dependency on one of Adam's slot variables; do you use the output of `optimizer.variables()` somewhere? Or `tf.global_variables()`? Assigning one of those to an attribute could add the dependency. username_0: Why could a `slot_variable` be in `node_ids`? username_1: To clarify, it works if you take out this bit? ``` for var in self.optimizer.variables(): print('Adding {}'.format(var.name)) setattr(self, var.name, var) ``` I don't see the purpose of that. There's already a dependency on the optimizer, right? Is this so they'll show up in `model.variables`? username_0: I think I just hacked around the problem of lists of checkpointables not being supported. I wanted to build a solution that did not require me to change model code. But I now see that my attempt backfired. Status: Issue closed username_1: Checkpointable data structures are coming. We're just ~~arguing over~~ working on a few final details.
OAuth3/oauth3-cli
147145854
Title: Unrecognized type "ANAME" Question: username_0: When trying to add a DNS entry, its not recognizing ANAME as a valid -t type value. ``` node bin/daplie dns:set -n trave.orb.zone -t ANAME -a 172.16.31.10 opts2: { name: 'trave.orb.zone', type: 'ANAME', value: '172.16.31.10', ttl: 600, priority: 10, sub: 'trave', sld: 'orb', tld: 'zone' } Possibly Unhandled Rejection at: Promise: Promise { _bitField: 152305664, _fulfillmentHandler0: { message: 'unrecognized type \'ANAME\'' }, _rejectionHandler0: undefined, _promise0: undefined, _receiver0: undefined } undefined Reason: { message: 'unrecognized type \'ANAME\'' } ``` Other invalid attempts throw: ``` Possibly Unhandled Rejection at: Promise: Promise { _bitField: 152305664, _fulfillmentHandler0: { name: 'TypeError', message: 'r.type.toUpperCase is not a function' }, _rejectionHandler0: undefined, _promise0: undefined, _receiver0: undefined } ``` Answers: username_1: 1. ANAME isn't implemented yet (but that's a fairly quick fix when I get to it). 2. An ANAME record is CNAME for which the recursion to A, or AAAA happens on the server (so by putting an A record, you're using it incorrectly). 3. ANAMEs are primarily used for the root, which is not allowed to have a CNAME (so you would put an ANAME or A/AAAA on orb.zone, but a CNAME or A/AAAA on trave.orb.zone) 4. That's odd. I thought I had a regex to reject unknown types in the cli. 5. Obviously, I need to fix that on the server too. :-) What are you trying to accomplish? username_0: Sorry to be a newb on the DNS side of things. Here's what Ive got now: <img width="661" alt="screen shot 2016-04-09 at 11 57 14" src="https://cloud.githubusercontent.com/assets/370488/14405357/40003528-fe4a-11e5-9f8b-27e7e4528dea.png"> I bet I am overlooking a command that links my "Pi2" device to the trave.orb.zone DNS record, like how when I registered trave.daplie.me username_1: Haha. Looks like I need to check the type on CNAME, because A/AAAA records aren't valid CNAMEs either. ``` A 127.0.0.1 AAAA ::1 CNAME example.com ``` Yes, you must attach a device to the domain like you did with orb-droplet and Pi2. You'll need to unset the CNAME for trave.orb.zone and re-add like you did with the others. This is a little bit different from every other DNS service in that we require you to establish a device (because we fundamentally consider IPs to be ephemeral and non-reliable) and do not allow direct addition of A and AAAA records and it's important for the management of multiple domains and multiple devices. Obviously these needs to be in the documentation, but there should also be an easier way to simply discover the right thing to do as you're using the tool. What do you think would be the best way to go about that? username_0: Perhaps having the :attach method also exist from the domains: action set, or having a helpful comment below the DNS :list --all table of results, to mention where to go if you want to bind a domain to a device.
binary-com/binary-bot
544120995
Title: good strategy but not finished, help me masters: bot over digit 4 based on the percentage of outgoing digits Question: username_0: ![issues over](https://user-images.githubusercontent.com/59257049/71614474-b7e03380-2b60-11ea-9dc0-355f932cf0e6.jpg) If the 1st list is 8%, the 2nd list 12%, the 3rd list 0%, and the 4th list 4%, that's 24% total, and if the total is <25% then the execution is "over digit 4". If possible this could be a safe and consistent profit; please help, masters. Answers: username_0: @fruittella : here is the bot, sir [bot over not yet.zip](https://github.com/binary-com/binary-bot/files/4012183/bot.over.not.yet.zip) username_1: [bot-over-4.zip](https://github.com/binary-com/binary-bot/files/4012389/bot-over-4.zip) username_0: @username_1 : oh sir, thank you so much... it's an amazing bot, I'm really sure this bot is very safe and profitable.. thank you and thank you, God bless you sir... thank you for your kindness.. username_2: @username_0 please send the bot from the picture. username_3: how's the bot been treating you? Status: Issue closed username_5: @username_1 Can you please explain the logic?
nguyenhoanglam/ImagePicker
193103112
Title: I want to get the picture directly after taking a picture from the camera. Question: username_0: If I select the option to take a picture from the camera, it just takes the picture and then I have to select that picture from the gallery. I want the picture returned right after I take it from the camera; I don't want to have to select the same picture after taking it. Please suggest a solution. Answers: username_1: @username_0 For me it's working in the sample app. @username_2 Close this issue? username_2: @username_1 If you use CameraOnly mode, the image is returned immediately as @username_0 expected. However, if you tap the camera button (on the toolbar, CameraOnly mode = FALSE), the captured image will not be selected automatically. So @username_0 asked me to make it auto-selected. I'll consider this option if I find it useful and convenient for users. username_1: @username_2 Ah ok, I understand. Then YES - this is really a big PLUS for your lib! :-) username_2: @username_1 You think it's a really nice feature, don't you? username_1: @username_2 Of course. If the user wants a picture from the camera - he just wants THIS, and that's it. I would use this feature! username_2: @username_1 That's great! I will add this feature in the next release. Thank you so much! username_2: Hi guys, version 1.4.0 is now available. Please check it out and close this issue if the error's gone away! Thank you so much! Status: Issue closed
swagger-api/swagger-codegen
535684685
Title: expose parentContainer to templates Question: username_0: ##### Description I'm extending the models generated from Swagger definitions with a deepCopy() method for Java. For this I need to generate code based on the property types, but in case the base type is List or Map, the detailed type information is lost to the templates. I propose exposing the "parentContainer" type in these cases, similar to how "items" are exposed for container properties. The proof-of-concept code is ready; I'll file a PR soon. ##### Swagger-codegen version 2.4.10, 2.4.11-SNAPSHOT (HEAD) ##### Swagger declaration file content or url ```yaml swagger: "2.0" info: version: "1.0.0" title: "test" paths: {} definitions: ArrType: type: array items: type: string format: date-time ``` ##### Command line used for generation java -jar ~/.m2/repository/io/swagger/swagger-codegen-cli/2.4.10/swagger-codegen-cli-2.4.10.jar generate -i testarr.yaml -l java -o /tmp/test -DdebugModels=true > logarr-orig.txt java -jar ~/src/swagger-codegen/modules/swagger-codegen-cli/target/swagger-codegen-cli.jar generate -i testarr.yaml -l java -o /tmp/test -DdebugModels=true > logarr.txt ##### Steps to reproduce N/A ##### Related issues/PRs The PR will include the fix for #9918 ##### Suggest a fix/enhancement Will file a PR
artyom-beilis/cppcms
330906187
Title: Multi core async application? Question: username_0: How do I write a multi core async application? Can you please give an example? Answers: username_1: It is on the road-map - implementation of multiple event loops. Currently all async apps run in a single thread; you can post heavy jobs to the thread pool and wait for the response in an asynchronous way. username_0: Why can't we model it using the reactor pattern? One event loop and then the rest of the worker cores, much like the Seastar framework? username_1: Currently CppCMS uses many worker cores for synchronous apps - a single event loop distributes the jobs between workers. Async apps live in the event loop and should not do heavy jobs. username_0: Closing it. Status: Issue closed
rust-lang/miri
640731543
Title: cargo-miri panics on no_main binaries Question: username_0: While there are many other questions about what miri should do for binaries with `#![no_main]`, the current behavior is to panic. In the spirit of #1001, it would be nice to explicitly indicate that miri doesn't support testing freestanding binaries at the moment. <details> <summary>An example backtrace</summary> <pre> [-bash] Wed 17 Jun 16:40 [[master $] /Volumes/code/helena-project/tock/boards/hail] $ cargo miri test Checking hail v0.1.0 (/Volumes/code/helena-project/tock/boards/hail) warning: unused import: `core::panic::PanicInfo` --> boards/hail/src/io.rs:2:5 | 2 | use core::panic::PanicInfo; | ^^^^^^^^^^^^^^^^^^^^^^ | = note: `#[warn(unused_imports)]` on by default warning: unused import: `cortexm4` --> boards/hail/src/io.rs:4:5 | 4 | use cortexm4; | ^^^^^^^^ warning: unused import: `kernel::debug` --> boards/hail/src/io.rs:5:5 | 5 | use kernel::debug; | ^^^^^^^^^^^^^ warning: unused import: `kernel::hil::led` --> boards/hail/src/io.rs:7:5 | 7 | use kernel::hil::led; | ^^^^^^^^^^^^^^^^ warning: unused import: `crate::CHIP` --> boards/hail/src/io.rs:10:5 | 10 | use crate::CHIP; | ^^^^^^^^^^^ warning: unused import: `crate::PROCESSES` --> boards/hail/src/io.rs:11:5 | 11 | use crate::PROCESSES; | ^^^^^^^^^^^^^^^^ warning: static is never used: `WRITER` --> boards/hail/src/io.rs:17:1 | 17 | static mut WRITER: Writer = Writer { initialized: false }; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: `#[warn(dead_code)]` on by default thread 'rustc' panicked at 'no main function found!', src/tools/miri/src/bin/miri.rs:35:37 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports note: rustc 1.45.0-nightly (fe10f1a49 2020-06-02) running on x86_64-apple-darwin note: compiler flags: -C opt-level=z -C embed-bitcode=no -C debuginfo=2 -C debug-assertions=on -C incremental note: some of the compiler flags provided by cargo are hidden warning: 7 warnings emitted error: could not compile `hail`. To learn more, run the command again with --verbose. </pre> </details> I've attached a backtrace, but any `no_main` binary will crash in a similar manner right now. CC https://github.com/tock/tock/issues/1713 Answers: username_1: Thanks! I agree. This is an easy bug to fix, well-suited as a first contribution. The panic arises here: https://github.com/rust-lang/miri/blob/146ee66268bb38a226eb9737d1a595bcb2ac1dbd/src/bin/miri.rs#L35 And all it takes is replacing it with a better error message. rustc has some APIs for fatal errors; `early_error` might be suited here. Status: Issue closed
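A sketch of what that replacement could look like inside miri's driver; `early_error` is the rustc-internal helper suggested above, and its exact path and signature vary across rustc versions, so treat this as illustrative rather than the actual patch:

```rust
// Before (roughly): the driver panics with "no main function found!" when
// the crate has #![no_main]. After: emit a proper user-facing error instead.
let (entry_def_id, _) = tcx.entry_fn(LOCAL_CRATE).unwrap_or_else(|| {
    // `early_error` diverges, so this type-checks as the closure's result.
    rustc_session::early_error(
        rustc_session::config::ErrorOutputType::default(),
        "miri can only run programs that have a main function, \
         so #![no_main] (freestanding) binaries are not supported",
    )
});
```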
pypa/pip
440901708
Title: Validate VCS urls in hash-checking mode using their commit hashes Question: username_0: Since then, there's been a lot more discussion of the risks of using sha1 for refspecs following the [ShAttered attack](https://shattered.io/) (where Google paid to create a PDF hash collision, and could have done the same for a git commit). After that attack: * Github [guarded against sha1 collisions](https://github.blog/2017-03-20-sha-1-collision-detection-on-github-com/) * git [hardened its sha1 implementation](https://github.com/git/git/blob/master/Documentation/technical/hash-function-transition.txt) and is [implementing sha256](https://stackoverflow.com/a/47838703/307769) * Mercurial added a page [discussing limitations on the attack](https://www.mercurial-scm.org/wiki/mpm/SHA1). In short, a sha1 collision attack requires attackers to first issue a specially-crafted, suspicious benign commit, at great expense, and then replace it with a malicious one. So a security model that is concerned with sha1 attacks assumes that an attacker has already crafted a seemingly-benign commit and had it distributed -- by which point, the Mercurial page argues, there are cheaper, easier, and more deniable ways to succeed in the attacker's aim. That argument is certainly debatable! So I'm hoping to discuss it here: would it be worth counting sha1 refspecs as "hashed" for now, in order to encourage use of `pip install --require-hashes` and `pip-compile --generate-hashes`? Answers: username_1: I think you (and others) make a good case that stonewalling SHA1 VCS URLs as unhashable is counterproductive in a lot of practical cases, like when using secure internal git servers. The obvious options I see are… * **Just letting them through.** Not really my favorite, as it downgrades security silently. But maybe no users care. * **Adding an --almost-require-hashes option** or similar. Seems okay on the face of it. However, I will put this here for everyone's evaluation. Seems like SHA1 collisions are about to get another notch cheaper: https://www.zdnet.com/article/sha-1-collision-attacks-are-now-actually-practical-and-a-looming-danger/. In my ideal world, git would release SHA256 support, and we'd have no reservations at all. username_0: How about a special --hash value, like: ``` # accept the hash value provided in the VCS url: git+https://github.com/requests/requests.git@e52932c427438c30c3600a690fb8093a1d643ef3#egg=requests --hash=vcs ``` or ``` # perform no hash validation on this line: git+https://localhost:9000/requests/requests.git@e52932c427438c30c3600a690fb8093a1d643ef3#egg=requests --hash=skip ``` username_1: That also occurred to me, but I thought it'd be best to make it clear to the runner of pip rather than just the author of the requirements file. But, now that you mention it, a package author doesn't (ideally) include hashes, as that would frustrate downstream integrators. So it's fine. username_2: hello pleas help what the hash is hier ??????? [18:21:27] [WARNING] there has been a problem while writing to the session file ('OperationalError: attempt to write a readonly database') [18:21:30] [INFO] retrieved: 07b1e0439ba077bc7bdfbbf2341b1d1f:dc [18:21:30] [DEBUG] performed 249 queries in 380.19 seconds [18:21:30] [INFO] retrieving the length of query output [18:21:30] [INFO] retrieved: 21 [18:23:18] [DEBUG] performed 10 queries in 107.48 seconds [18:23:18] [DEBUG] starting 8 threads [18:24:14] [INFO] retrieved: o_i________________ 2/21 (9%) username_3: I've run into this same problem. 
I just want to point out another limitation with using an artifact url like `https://github.com/requests/requests/archive/e52932c427438c30c3600a690fb8093a1d643ef3.zip#egg=requests`: as far as I can tell, there isn't good support by pip/pip-tools for working with this style of url for a private repo. (https://github.com/jazzband/pip-tools/issues/947#issuecomment-542387335) username_4: I would be in favor of having `--require-hashes` accept such VCS URLs too. In #6851 I added `is_immutable_rev_checkout` that could help implement this. username_5: Why not just hash the whole VCS directory and use whatever hash algorithm the user wants? This would probably have to exclude things like the `.git` directory which may(?) contain different files depending on the version of git used. username_6: Yes, let's just fix this in a backwards compatible way by starting to support a hash of a VCS checked-out directory. I do not see why it is so hard to recursively compute the hash of a directory in a deterministic way, and that's it. username_7: It is not hard to compute a directory's hash, but to compute it *reliably*. `.git` is already mentioned as a problem, and there are many other platform- and tool-specific files that can also affect this (`.DS_Store` comes to mind), and pip would be stuck in maintenance hell if it starts adding specific rules for those obscure edge cases. Checking against the commit ID is a much more practical approach IMO. username_7: Another way to do this is to stick with only checking artifact hashes. There has been talk that pip should build an sdist for source tree installations (to solve unrelated issues), and pip can use the built sdist's hash here. That would be consistent. username_6: +1 on that one. Yes, hashes should probably not be computed on the repository directory, but on what is seen as the source to be installed from. username_1: In reply to @username_7, that's been brought up before, but we'd have to do some work to make sure the building of sdists is deterministic. For starters, modification timestamps make their way into tarballs. username_8: This is preventing us in moneymeets/python-poetry-buildpack#19 from enabling hashes - people use private repositories as requirements a lot for deployment - I'm not really concerned about SHA1 weaknesses, as it seems to me that not using hashes at all is less secure than using SHA1 for private VCS dependencies, but I like the idea of `git archive | sha256`. Isn't it a perfect solution that alleviates all previous concerns (security, reproducibility, maintenance burden)? username_9: We would like to migrate away from Pipenv to pip-compile from pip-tools with --generate-hashes, which uses normal pip under the hood, but we have a number of git dependencies due to forking libraries and making custom patches. Those git dependencies prevent us from enabling --generate-hashes without doing some of the acrobatics described in the initial post. Has anything changed since the issue was created two years ago that could allow this feature to move forward now? The current situation is unfortunate, especially compared to other package managers like cargo and yarn that just support git deps out of the box together with version locking. username_10: Not that I'm aware of. It would need someone to come up with a proposal and PR, and so far no-one has been sufficiently motivated to do so. username_4: I'd personally be happy to review a PR doing that.
I think it will involve our existing `is_immutable_rev_checkout()` function to reject branches that look like hashes. Also, VCS references that point at an immutable ref cause wheels built from them to be cached, and subsequently reused. So the question of whether the pip wheel cache can be trusted might come up, as we can't hash-check wheels coming from there. username_0: Seems like handling all the edge cases has made this bug prohibitive for anyone to take on. Given how long it's been open, I wonder if anyone is up for trying a simpler PR that just supports the per-requirement opt-out I suggested up-thread: ``` git+https://localhost:9000/requests/requests.git@e52932c427438c30c3600a690fb8093a1d643ef3#egg=requests --hash=skip ``` I imagine that's a much simpler PR to write, and it lets projects move forward that are better off with 99% hashing than none (in particular because the requirements where they're skipping hashing are likely to be ones they control). username_9: This is our case! Our git deps are mainly for forked repos where we have a handful of patches. Right now I've worked around this restriction by adding patch files directly to our project, and git apply'ing them into our virtualenv after install has completed.
elixirlabsinc/pangeanetwork
432894072
Title: Add unit tests Question: username_0: Test cases should be added for each of the main actions throughout the app. We can probably use [flask-testing](https://pythonhosted.org/Flask-Testing/) for this. - [ ] POST request with command 'LOAN', from_user is co-op leader - [ ] POST request with any command, from_user is not in db (should return error message) - [ ] POST request with command 'Y' (confirming transaction) - [ ] POST request with command 'N' (denying transaction) - [ ] POST request with parsing error future tests: - [ ] adding a new user - [ ] updating users - [ ] creating/updating co-ops
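A sketch of what the first case could look like with flask-testing; the endpoint path, payload field names, and the `create_app` factory import are assumptions about this codebase rather than its actual API:

```python
# Hypothetical flask-testing case for: POST with command 'LOAN' from a co-op leader.
from flask_testing import TestCase

from app import create_app  # assumed application factory


class SmsCommandTests(TestCase):
    def create_app(self):
        app = create_app()
        app.config["TESTING"] = True
        return app

    def test_loan_command_from_coop_leader(self):
        response = self.client.post("/sms", data={
            "From": "+15550000001",  # seeded as a co-op leader in the test db
            "Body": "LOAN 100",
        })
        self.assert200(response)
```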
geneontology/go-ontology
257305352
Title: NTR - GOBP term for Interleukin-10 signaling Question: username_0: Could you please create a term Interleukin-10-mediated signaling pathway, as child of GO:0019221 cytokine-mediated signaling pathway. IL10 has a well-established receptor and associated signaling mechanism, see PMID:11244051 Answers: username_1: Hi @username_0 Here's the new term: +id: GO:0140105 +name: interleukin-10-mediated signaling pathway +namespace: biological_process +def: "A series of molecular signals initiated by the binding of interleukin-10 to a receptor on the surface of a cell, and ending with regulation of a downstream cellular process, e.g. transcription." [PMID:11244051] +synonym: "IL-10-mediated signaling pathway" EXACT [] +synonym: "interleukin-10-mediated signalling pathway" EXACT [] +is_a: GO:0019221 ! cytokine-mediated signaling pathway +created_by: pg +creation_date: 2017-09-22T16:04:42Z Thanks, Pascale Status: Issue closed
fizyr/keras-retinanet
314311168
Title: How to restart training from the next epoch after a keyboard interrupt by mistake Question: username_0: After 5 epochs, I entered "Ctrl C" by mistake and training stopped. Is there a way to restart training from the 6th epoch? The model has saved 5 snapshots, one per epoch. Answers: username_1: You should be able to resume from a snapshot using the --snapshot argument. Status: Issue closed
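Resuming from the fifth snapshot might then look like the following; the script name, snapshot filename, and CSV arguments are placeholders to adapt to your dataset type per the project's README:

```
retinanet-train --snapshot snapshots/resnet50_csv_05.h5 csv annotations.csv classes.csv
```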
spatie/http-status-check
185176872
Title: Build PHAR Question: username_0: Hi, First, thanks for this good & useful package. I used it in different applications and it helped me find broken pages. I am now moving to Docker and would like to include it in my image. Is it possible to distribute this package as a PHAR file so it is easier to deploy? Thanks, Answers: username_1: I'm glad you find the package useful 😄 I've got no experience with PHAR files, so this is something you'll have to find out on your own. Status: Issue closed
safetymonkey/asq
711353137
Title: None Question: username_0: Hey, Jan from @depfu here. This error message is unfortunately misleading, because of a rather weird Bundler bug (which I've already [reported](https://github.com/rubygems/rubygems/issues/3935)). The real issue is that your locked version of codecov is yanked. Unfortunately we can't really detect that, and thus can't send an automated update to fix it. Manually running `bundle update codecov` should fix it, I think. Status: Issue closed
jlippold/tweakCompatible
354910623
Title: `YouPIP` not working on iOS 11.3.1 Question: username_0: ``` { "packageId": "com.spicat.youpip", "action": "notworking", "userInfo": { "arch32": false, "packageId": "com.spicat.youpip", "deviceId": "iPad6,4", "url": "http://cydia.saurik.com/package/com.spicat.youpip/", "iOSVersion": "11.3.1", "packageVersionIndexed": true, "packageName": "YouPIP", "category": "Tweaks", "repository": "Spica T Repo", "name": "YouPIP", "installed": "0.0.4.2", "packageIndexed": true, "packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 3 working reports.", "id": "com.spicat.youpip", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.0", "shortDescription": "Enable PIP in native YouTube app.", "latest": "0.0.4.3", "author": "SpicaT", "packageStatus": "Working" }, "base64": "<KEY>", "chosenStatus": "not working", "notes": "Latest version not working :(" } ```
piotrwitek/utility-types
438082299
Title: 3.6.0 bug Question: username_0: ``` const foo: LinkProps = { prefetch: true, href: { pathname: '/' } }; ``` ``` Type '{ prefetch: true; href: { pathname: "/"; }; }' is missing the following properties from type 'Pick<Pick<Pick<Pick<Omit<NextLinkProps2, "passHref">, "prefetch" | "shallow" | "scroll" | "replace" | "onError" | "as"> & Pick<{ children: ReactNode; href: { pathname: "/"; } | { pathname: "/me"; } | { ...; } | { ...; } | ({ ...; } & { ...; }); }, "href" | "children">, "prefetch" | ``` Answers: username_1: Hey @username_0 New `Omit` is not preserving an optional property modifier. I'll fall back to my previous implementation to fix the issue. Now the other problem is that the new Omit is a built-in TypeScript type coming in v3.5 which is apparently broken and I wonder if there is a bug filed for it. username_1: @username_0 Also there is a bug in your code snippet: Property 'children' is missing in type '{ prefetch: true; href: { pathname: "/"; }; }' but required username_1: Fixed by https://github.com/username_1/utility-types/releases/tag/v3.6.1 Status: Issue closed
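The failure mode the maintainer describes (an Omit that drops the `?` modifier) can be sidestepped with the classic Pick/Exclude formulation, since Pick-based mapped types preserve property modifiers. A minimal sketch, using an illustrative interface rather than the reporter's actual LinkProps:

```typescript
// Pick-based Omit keeps `?` and `readonly` modifiers intact.
type OmitSafe<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>;

interface NextLinkProps {
  href: string;
  passHref: boolean;
  prefetch?: boolean; // the optional modifier that must survive the Omit
}

// `prefetch` stays optional, so leaving it out still type-checks.
const link: OmitSafe<NextLinkProps, "passHref"> = { href: "/" };
```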
perevoznyk/swelio-pdf
336946607
Title: Signed PDF size? Question: username_0: Hi, I noticed the signed file is ~1MB bigger than the original. Is this normal, or is there a way to produce a lighter version? Thanks, Regards, Michael Answers: username_1: PDFSigner implements the long-term signature standard for archiving purposes. It is normal to see the size increase by about 1MB. It is possible to make it smaller, but the implemented standard will then be different, for example simple CMS or CAdES-B. This depends on the purpose of the signature, how long you plan to keep the document, etc... username_0: Thanks. This is very helpful. It would be nice to have this as an output option. We might have different planned durations depending on the document type. username_1: The purpose of the swelio-pdf project is to provide a library for developers, not a final product. PDFSigner is just a simple example of how to integrate the library into a product. username_0: OK. I might fork the sample into a full tool then. Thanks for your help. Status: Issue closed
googleads/googleads-mobile-ios-examples
242894099
Title: Failed to load: Request Error: No ad to show. Question: username_0: I have tried using the sample adUnit ID in my app and it works very well; I can display the video ads from your server. This is the sample ID I used: ca-app-pub-3940256099942544/1712485313. But I really need to try out my own adUnitID before I release it; I just need to check it, not click it. So I replaced the adUnitID with my own.
AppDelegate
```swift
GADMobileAds.configure(withApplicationID: "ca-app-pub-8783572725852328~7539766498")
```
HomeViewController
```swift
if !adRequestInProgress && rewardBasedVideo?.isReady == false {
    rewardBasedVideo?.load(GADRequest(), withAdUnitID: "ca-app-pub-8783572725852328/901649969")
    adRequestInProgress = true
}
```
Log: `Reward based video ad failed to load: Request Error: No ad to show.`
Answers: username_1: I tried your ad unit and it works on my end. When did you create the ad unit? It typically takes about an hour for newly created ad units to start serving. Status: Issue closed username_0: Hi @username_1 I also submitted my issue on the Google AdMob forum; they said I hadn't filled in any basic payment info, so I went and filled it all in. Now everything is okay; I released it to the App Store yesterday. username_1: Glad to hear it's working for you! username_2: Facing the same issue, but only with my live adUnitId "ca-app-pub-2205403669616327/4045941432"; the development id is working fine. Nothing I have tried has helped.
mockito/mockito-scala
592233405
Title: compile failing for wasNever calledAgain syntax in certain cases Question: username_0: Version observed: Anything post 1.13.1 Problem: In certain cases, a test class fails to compile when using syntax like: `helloServer wasNever calledAgain`. It seems a problem occurs at least when the mock instance is declared at the top level of the test class, but not when it is declared inside an individual test. Error observed: ``` $ sbt test [info] Loading global plugins from /Users/jlofgren/.sbt/1.0/plugins [info] Loading project definition from /Users/jlofgren/Development/git/github.com/username_0/mockito-scala-wasNever-calledAgain-bug/project [info] Loading settings for project root from build.sbt ... [info] Set current project to mockito-scala-wasNever-calledAgain-bug (in build file:/Users/jlofgren/Development/git/github.com/username_0/mockito-scala-wasNever-calledAgain-bug/) [info] Compiling 1 Scala source to /Users/jlofgren/Development/git/github.com/username_0/mockito-scala-wasNever-calledAgain-bug/target/scala-2.12/test-classes ... [error] /Users/jlofgren/Development/git/github.com/username_0/mockito-scala-wasNever-calledAgain-bug/src/test/scala/example/MainSpec.scala:20:17: value verifyWithMode is not a member of example.HelloServer <:< example.HelloServer [error] helloServer wasNever calledAgain [error] ^ [error] one error found [error] (Test / compileIncremental) Compilation failed [error] Total time: 2 s, completed Apr 1, 2020 5:08:12 PM ``` Reproduced here: https://github.com/username_0/mockito-scala-wasNever-calledAgain-bug Answers: username_1: @username_0 gotcha, working on it Status: Issue closed
GoogleCloudPlatform/google-cloud-eclipse
219134039
Title: Internal web browser caches http://localhost:8080/ and serves previous page, but only initially Question: username_0: A very minor nuisance.
1. Create a HelloWorld project.
2. _Run As_ > _App Engine_
3. An internal browser serves `index.html` at `http://localhost:8080/`
4. Stop the server.
5. Modify `src/main/webapp/index.html`.
6. _Run As_ > _App Engine_
7. Observe that the same internal browser still serves the previous `index.html`.
At this point, if you reload the browser, it will correctly serve the new `index.html`.
apache/camel-website
1066580817
Title: /docs/ page should direct to latest, not next, versions of docs Question: username_0: Directing users to the 'next' documentation implies that the unreleased 'next' version of code can be used for purposes other than development of camel, against Apache policy. Status: Issue closed Answers: username_0: Fixed by #700. If anyone has a better idea on how to implement this please reopen!
eggjs/egg
226521484
Title: How to use transactions (Transaction) with egg-sequelize Question: username_0: Could someone explain how to use transactions (Transaction) with egg-sequelize? The egg-sequelize examples don't cover transactions at all. Answers: username_1: Check the sequelize official site; it has the corresponding documentation. Status: Issue closed username_0: @username_1 I did look at the official site, but when using egg-sequelize I get an error:
```js
const result = yield this.ctx.model.Sequelize.Transaction(function* (t) {
});
```
The error is `Cannot read property 'QueryGenerator' of undefined`. Could you tell me whether this is the right way to use it inside egg? username_3: @username_0 this.ctx.model.Transaction
lektor/lektor
1034005493
Title: rsync --delete-delay is not supported on macOS Question: username_0: https://github.com/lektor/lektor/blob/0ce8fd42affd53c1e45a8aa25c76f1554e3bb2c8/lektor/publisher.py#L187-L188
```man
--delete delete extraneous files from dest dirs
--delete-before receiver deletes before transfer (default)
--delete-during receiver deletes during xfer, not before
--delete-after receiver deletes after transfer, not before
--delete-excluded also delete excluded files from dest dirs
```
... and since we are on the topic of rsync: I'd like to suggest adding support for local filesystem sync (e.g., if you are working on the same machine on which the web service is running). Currently, providing a `rsync:///absolut/path` will crash Lektor. (haha, I have the same scenario as #699, currently using a custom plugin for that)
... and yet another request: I would welcome the addition of a dry-run option to see what changes exist. :)
... also: add `--no-motd` to omit the first few lines? =)<issue_closed> Status: Issue closed
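A minimal sketch of the portability fallback implied by the man page excerpt above: probe the local rsync for `--delete-delay` support and fall back to `--delete-after`, which the older rsync 2.6.9 shipped with macOS does list. The helper name and its placement are illustrative, not Lektor's actual API.

```python
# Hypothetical helper, not part of Lektor: pick a delete flag the local
# rsync actually supports (macOS's rsync 2.6.9 lacks --delete-delay).
import subprocess

def pick_delete_flag():
    help_text = subprocess.run(
        ["rsync", "--help"], capture_output=True, text=True
    ).stdout
    return "--delete-delay" if "--delete-delay" in help_text else "--delete-after"
```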
mbrandau/css-shortener
794478946
Title: Cannot find module '../index' Question: username_0: Hey! I was trying out this plugin, but couldn't use it as this error always came up: ``` Error: Cannot find module '../index' Require stack: - C:\Users\augus\AppData\Roaming\npm\node_modules\css-shortener\bin\cli.js at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15) at Function.Module._load (internal/modules/cjs/loader.js:725:27) at Module.require (internal/modules/cjs/loader.js:952:19) at require (internal/modules/cjs/helpers.js:88:18) at Object.<anonymous> (C:\Users\augus\AppData\Roaming\npm\node_modules\css-shortener\bin\cli.js:3:22) at Module._compile (internal/modules/cjs/loader.js:1063:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1092:10) at Module.load (internal/modules/cjs/loader.js:928:32) at Function.Module._load (internal/modules/cjs/loader.js:769:14) at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12) { code: 'MODULE_NOT_FOUND', requireStack: [ 'C:\\Users\\augus\\AppData\\Roaming\\npm\\node_modules\\css-shortener\\bin\\cli.js' ] } ``` When I went to check the file, it indeed was requiring an '../index', but I couldn't find it either... ``` const CssShortener = require('../index'); const fs = require('fs'); ``` Answers: username_0: Just replaced '../index' with 'css-shortener' and it works! Status: Issue closed
mifi/editly
715731569
Title: Uncaught [TypeError: Cannot read property 'clearRect' of null] with fabric js canvas.loadFromJSON Question: username_0: I am using fabric js and want to use editly for mp4 generate. I am doing as suggested here: [Custom Fabric](https://github.com/mifi/editly/blob/master/examples/customFabric.js) and trying to load the complete fabric js in JSON. Here is my fiddle link: http://jsfiddle.net/6w8m1n4q/ The above link works well for the frontend(angular). The problem is that the same thing is giving an error in node.js with editly ``` async function func({ width=800, height=800, fabric }) { async function onRender(progress, canvas) { // var canvas = new fabric.Canvas('c'); var json = '{"version":"4.2.0","objects":[{"type":"image","version":"4.2.0","originX":"left","originY":"top","left":372.5,"top":329,"width":195,"height":130,"fill":"rgb(0,0,0)","stroke":null,"strokeWidth":0,"strokeDashArray":null,"strokeLineCap":"butt","strokeDashOffset":0,"strokeLineJoin":"miter","strokeMiterLimit":4,"scaleX":1,"scaleY":1,"angle":0,"flipX":false,"flipY":false,"opacity":1,"shadow":null,"visible":true,"backgroundColor":"","fillRule":"nonzero","paintFirst":"fill","globalCompositeOperation":"source-over","skewX":0,"skewY":0,"cropX":0,"cropY":0,"src":"https://images.pexels.com/photos/3975634/pexels-photo-3975634.jpeg?auto=compress&cs=tinysrgb&h=130","crossOrigin":null,"filters":[],"id":590718},{"type":"textbox","version":"4.2.0","originX":"left","originY":"top","left":10,"top":678,"width":920,"height":35.41,"fill":"#ffffff","stroke":"","strokeWidth":1,"strokeDashArray":null,"strokeLineCap":"butt","strokeDashOffset":0,"strokeLineJoin":"miter","strokeMiterLimit":4,"scaleX":1,"scaleY":1,"angle":0,"flipX":false,"flipY":false,"opacity":1,"shadow":null,"visible":true,"backgroundColor":"","fillRule":"nonzero","paintFirst":"fill","globalCompositeOperation":"source-over","skewX":0,"skewY":0,"text":"7. 
Unproductive screen time22","fontSize":31.333333333333332,"fontWeight":"normal","fontFamily":"Arial","fontStyle":"normal","lineHeight":1.16,"underline":false,"overline":false,"linethrough":false,"textAlign":"left","textBackgroundColor":"rgba(0, 0, 0, 0.5)","charSpacing":0,"minWidth":20,"splitByGrapheme":false,"styles":{},"id":402722}]}'; canvas.loadFromJSON(json, canvas.renderAll.bind(canvas), function(o, object) { fabric.log(o, object); }); // canvas.add(text); } function onClose() { // Cleanup if you initialized anything } return { onRender, onClose }; } editly({ // fast: true, outPath: './customFabric.mp4', // outPath: './customFabric.mp4', clips: [ { duration: 2, layers: [{ type: 'fabric', func }] }, ], }).catch(console.error); ``` The error is: ``` Error: Uncaught [TypeError: Cannot read property 'clearRect' of null] at reportException (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/jsdom/lib/jsdom/living/helpers/runtime-script-errors.js:62:24) at innerInvokeEventListeners (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/jsdom/lib/jsdom/living/events/EventTarget-impl.js:332:9) at invokeEventListeners (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/jsdom/lib/jsdom/living/events/EventTarget-impl.js:267:3) at HTMLImageElementImpl._dispatch (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/jsdom/lib/jsdom/living/events/EventTarget-impl.js:214:9) at fireAnEvent (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/jsdom/lib/jsdom/living/helpers/events.js:17:36) at Promise.resolve.then (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/jsdom/lib/jsdom/browser/resources/per-document-resource-loader.js:57:13) at process._tickCallback (internal/process/next_tick.js:68:7) TypeError: Cannot read property 'clearRect' of null at klass.clearContext (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:8903:11) at klass.clear (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:8931:12) at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:13449:13 at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:13569:19 at onLoaded (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:1019:23) at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:1041:11 at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:20852:13 at onLoaded (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:1019:23) at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:1034:11 at Array.forEach (<anonymous>) at Object.enlivenObjects (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:1031:15) at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:20849:23 at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:20731:23 at onLoaded (/Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:1019:23) at /Users/divi/Documents/development/learning/divisocial/backend/node_modules/fabric/dist/fabric.js:1034:11 at 
Array.forEach (<anonymous>) ``` Any help is highly appreciated!<issue_closed> Status: Issue closed
gee1k/uPic
646582992
Title: [BUG] Not able to launch on mac os big sur Question: username_0: **Describe the bug** I upgraded my OS to Big Sur yesterday and found that uPic is not able to start. I had it installed on my previous OS version. **Screenshots** ![image](https://user-images.githubusercontent.com/34391735/85911704-a1a3ef00-b7f4-11ea-92fc-b743c82c5f02.png) Answers: username_0: update: I removed and reinstalled uPic and now it's working properly. Status: Issue closed
DantSu/ESCPOS-ThermalPrinter-Android
755761025
Title: Print big or tall font why cropped Question: username_0: I tried to print with a big or tall font on the M300EL and the output gets cropped like this
![gambar](https://user-images.githubusercontent.com/21106287/100955879-f2f59e80-3549-11eb-9ac3-bbc5fe0b78fa.png)
Answers: username_1: I think it is a printer problem. I have never had an issue like this. Status: Issue closed
LaurentMazare/tch-rs
850874864
Title: Make COptimizer public Question: username_0: `nn::OptimizerConfig::build_copt` is a public method that creates a `COptimizer` but since `COptimizer` is private the result can't really be used. I am interested in having access to COptimizer because I prefer its API compared to Optimizer (see #335, also if I know the variables will not change I would rather avoid the overhead of calling `add_missing_variables` on every method). Status: Issue closed Answers: username_1: Just pushed some changes that should expose `COptimizer` so that it's easier to use it. If this still doesn't work for you, could you please provide an example that doesn't compile? username_0: Thanks for the rapid response and fix!
EricSchles/fact_checker_website
193312295
Title: Other text for potential "About" page, FAQ, etc. Question: username_0: **Tips to spot a fake news article:** (This could go on the page where someone files a fake news article or on a FAQ page) - Look for unusual URLs (example: abc.com versus abc.com.co) - Look for author of the article. If there isn’t one, that can be a red flag - Is it a known unreliable source? - Consider relevant sources in the article - are they authoritative or not? - Double check the date - Keep in mind satire is a thing ----
kubernetes/kops
307014736
Title: how to find ca.crt for cluster Question: username_0: using kops Version 1.8.1 (git-94ef202), ubuntu 16.04 client and server. I created a cluster on GCE with kops. It works correctly. I want to add a new k8s user following the tutorial at https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/ In order to create new user certs, I need to have the **cluster ca.crt and ca.key** (step 2 in the howto). Can anyone tell me where I can find these files for the cluster I just created with kops? I thought I could find these files in my kops state storage (for me it is gs://rnmclusterkops-state) but I couldn't find them there. I looked on the cluster controller and couldn't find them there either. Any idea where to find the ca.crt and ca.key which were used when kops created the cluster? thanks Status: Issue closed Answers: username_0: I found the answer for kops on AWS here https://stackoverflow.com/questions/44820100/how-do-i-get-the-certificate-authority-certificate-key-from-a-cluster-created-by so I just translated it to the GCP storage implementation. username_1: how to find the client-certificate-data/client-key-data ?
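For readers landing here, a sketch of what "translating the S3 answer to GCP storage" can look like, assuming the usual kops state-store layout under `pki/`; the bucket name is the one mentioned in this issue and the cluster name is a placeholder, so adjust both.

```sh
# Hypothetical paths based on the standard kops state-store layout;
# substitute your own cluster name.
gsutil cp "gs://rnmclusterkops-state/<cluster-name>/pki/issued/ca/*.crt" ca.crt
gsutil cp "gs://rnmclusterkops-state/<cluster-name>/pki/private/ca/*.key" ca.key
```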
jupyter-widgets/ipywidgets
469985324
Title: Displaying ANSI escape characters in widgets Question: username_0: Is there a way to display ANSI escape sequences properly in widgets? I have some code which used the ```colored``` method from the ```termcolor``` library to insert ANSI escape sequences do to things like adding color, underline, bold, etc and I used ```print``` to display the output text, for example: ```print(colored('This is red', color='red'))```. This works fine and outputs the text with the proper color. However, I cannot seem to get this output in a widget like a Text widget. I can obviously use HTML instead to add the color but I was wondering if there was an easier way to take ANSI escaped text and allow it to be put into widgets. Answers: username_1: You could use an output widget. username_0: That was exactly what I was looking for. Brilliant and thank you! Status: Issue closed
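To make the accepted suggestion concrete, a minimal sketch (assuming a Jupyter notebook with ipywidgets and termcolor installed): the Output widget captures stdout, and the notebook's output area renders the ANSI escape sequences.

```python
# Route ANSI-colored prints through an Output widget instead of a Text widget.
import ipywidgets as widgets
from IPython.display import display
from termcolor import colored

out = widgets.Output()
with out:  # anything printed here is captured by the widget
    print(colored('This is red', color='red'))
display(out)
```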
schmerl/LLStaging
209844053
Title: mini TH Question: username_0: rel. to https://github.com/schmerl/LLStaging/issues/18, make a mini TH to make sure we can hit at least some of the execution paths we think they're going to try Answers: username_0: see https://github.com/joshsunshine/mock_test_harness username_0: I'm writing a really bare-bones version of something like this in https://github.com/username_0/brasshacktest. It is not pretty.
OCR-D/core
705486880
Title: Docker image should include git and ssh Question: username_0: While debugging the failure to build ocrd_fileformat, I found the problem was missing `ssh` and `git`. I think the additional space requirement is worth the out-of-box usability of the ocrd/core and derived images.
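A hedged sketch of the proposed addition; the exact base image and layer ordering in the real ocrd/core Dockerfile may differ, so treat this as illustrative.

```dockerfile
# Illustrative only: install the missing tools in a Debian/Ubuntu-based image.
RUN apt-get update \
    && apt-get install -y --no-install-recommends git ssh \
    && rm -rf /var/lib/apt/lists/*
```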
ethereum/pyethereum
239594339
Title: Error on installation on Osx Sierra Question: username_0:
```
Processing dependencies for ethereum==2.0.4
Searching for py-ecc
Reading https://pypi.python.org/simple/py_ecc/
Reading https://pypi.python.org/simple/py-ecc/
Couldn't find index page for 'py_ecc' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.python.org/simple/
No local packages or download links found for py-ecc
error: Could not find suitable distribution for Requirement.parse('py-ecc')
```
Answers: username_1: That's because the last commit https://github.com/ethereum/pyethereum/commit/ecb14c937a0b6cb0a0dc4f06be3a88e6d53dcce3 substituted the dependency on the 'bitcoin' module with a new module, 'py_ecc', but py_ecc is not uploaded to PyPI yet. username_2: See: https://github.com/ethereum/py_pairing username_3: I went ahead and [published](https://pypi.python.org/pypi/py_ecc/1.0.0) `py_ecc` to PyPI (and made @vbuterin an owner) so this can be closed ✨ Status: Issue closed
meggart/SentinelMissings.jl
1058952410
Title: Constructors compatible with Setfield.jl/Flatten.jl etc Question: username_0: As usual, I'm looking to reconstruct the `SentinelMissings` array object using ConstructionBase.jl, but it has type parameters that are not known from the fields. I'm wondering if it would be possible to:
a. use a second field that holds a `Val{SV}()` object where `SV` is the sentinel value.
b. add a ConstructionBase.jl dep to add `constructorof`
Probably the first is preferable, as this package is so tiny. Answers: username_1: Yes, I think option 1 would be no problem to implement. I can try next week.
oleksandr/bonjour
212394493
Title: Is it dead? Question: username_0: Is this project dead? Answers: username_1: Kind of. I'm out of time and work in a completely different domain now. Gladly would transfer the project to anyone interested. username_2: As an alternative, one can use the zeroconf package that can be found at https://github.com/grandcat/zeroconf It has been inspired by username_1's bonjour, works pretty well and is currently active. Status: Issue closed
abulka/pynsource
115125101
Title: wxPyDeprecationWarning: Call to deprecated item. Question: username_0: # Description The error occurred after I run the `rungui.sh`. # Complete message ```shell Gtk-Message: Failed to load module "canberra-gtk-module" pyNsourceGui.py:53: wxPyDeprecationWarning: Call to deprecated item. wx.InitAllImageHandlers() Traceback (most recent call last): File "pyNsourceGui.py", line 647, in <module> main() File "pyNsourceGui.py", line 634, in main application = MainApp(0) File "/usr/lib/python2.7/dist-packages/wx-3.0-gtk2/wx/_core.py", line 8628, in __init__ self._BootstrapApp() File "/usr/lib/python2.7/dist-packages/wx-3.0-gtk2/wx/_core.py", line 8196, in _BootstrapApp return _core_.PyApp__BootstrapApp(*args, **kwargs) File "pyNsourceGui.py", line 142, in OnInit self.umlwin.InitSizeAndObjs() # Now that frame is visible and calculated, there should be sensible world coords to use File "/home/yudi/Downloads/pyNsource-umlgenerator/pynsource/src/gui/uml_canvas.py", line 71, in InitSizeAndObjs assert not self.canvas_resizer.canvas_too_small(), "InitSizeAndObjs being called too early - please set up enclosing frame size first" AssertionError: InitSizeAndObjs being called too early - please set up enclosing frame size first ``` # Using: * Debian testing (last upgrade: 03 november 2015) * Python 2.7.10+ * `python-wxgtk` 2.8.12.1-12 * `python-wxtools` 3.0.2.0+dfsg-1 * `wxwidgets` 2.8.12.1-12 Answers: username_1: Thanks for reporting this. I'll try it on ubuntu some time to compare behaviours... username_1: Confirmed. Seems wxpython3 has slightly different behaviour than wxpython2..8 with regards to the timing of canvas size changes in response to a frame/window size change. Fixed in latest master. Status: Issue closed
Catalyst-Swarm/Catalyst-Beehive
986337403
Title: Core Swarm Activities - Thursday, 2nd September 2021 Question: username_0: ## Core Swarm Activities - Thursday, 2nd September 2021 - Previous Day - https://github.com/Catalyst-Swarm/Catalyst-Beehive/issues/175 - Next Day - This Issue is intended as a daily record of Core Swarm Activities. It includes a checklist of who contributed to Core Swarm on this day and a "Today's Activities" section where links to this day's major events and tasks can be linked. This approach is still a work in progress and is intended as a prototype for tracking Core Swarm ## Today's Contributors ? Checked Names confirmed as contributors. - [ ] DominikTilman - [ ] FelixWeber - [ ] Filip - [ ] JakobD - [ ] Tevo - [ ] <NAME> - [ ] Seomon ## Documentation - [x] @username_0 ## Today's Activities - Activities Answers: username_1: Organizing the School & Swarm CA - Workshops with @victorcorcino https://docs.google.com/spreadsheets/d/1JVszJfcrhDGK3mBINn6Bpd553AwHUEaNEn_qJarSctc/edit?usp=sharing Organizing a continuation for the ATH Proposal Promotion with Phil.k & Randall Follow up promotion for Idea Fest Fund 6.- Discord announcements & DM responses Status: Issue closed
godotengine/godot
667797967
Title: [macOS] MoltenVK Metal shader compilation warnings Question: username_0: **Godot version:** `master`, 157958c77c **OS/device including version:** macOS 10.15.6 (19G73) MoltenVK version 1.0.44, Vulkan version 1.0.148. <details> <summary>Full device info</summary> ``` [mvk-info] MoltenVK version 1.0.44. Vulkan version 1.0.148. The following 54 Vulkan extensions are supported: VK_KHR_16bit_storage v1 VK_KHR_8bit_storage v1 VK_KHR_bind_memory2 v1 VK_KHR_dedicated_allocation v3 VK_KHR_descriptor_update_template v1 VK_KHR_device_group v4 VK_KHR_device_group_creation v1 VK_KHR_driver_properties v1 VK_KHR_external_memory v1 VK_KHR_external_memory_capabilities v1 VK_KHR_get_memory_requirements2 v1 VK_KHR_get_physical_device_properties2 v2 VK_KHR_get_surface_capabilities2 v1 VK_KHR_image_format_list v1 VK_KHR_maintenance1 v2 VK_KHR_maintenance2 v1 VK_KHR_maintenance3 v1 VK_KHR_push_descriptor v2 VK_KHR_relaxed_block_layout v1 VK_KHR_sampler_mirror_clamp_to_edge v3 VK_KHR_sampler_ycbcr_conversion v14 VK_KHR_shader_draw_parameters v1 VK_KHR_shader_float16_int8 v1 VK_KHR_storage_buffer_storage_class v1 VK_KHR_surface v25 VK_KHR_swapchain v70 VK_KHR_swapchain_mutable_format v1 VK_KHR_uniform_buffer_standard_layout v1 VK_KHR_variable_pointers v1 VK_EXT_debug_marker v4 VK_EXT_debug_report v9 VK_EXT_debug_utils v2 VK_EXT_fragment_shader_interlock v1 VK_EXT_hdr_metadata v2 VK_EXT_host_query_reset v1 VK_EXT_inline_uniform_block v1 VK_EXT_memory_budget v1 VK_EXT_metal_surface v1 VK_EXT_robustness2 v1 VK_EXT_scalar_block_layout v1 VK_EXT_shader_stencil_export v1 VK_EXT_shader_viewport_index_layer v1 VK_EXT_swapchain_colorspace v4 VK_EXT_texel_buffer_alignment v1 VK_EXT_vertex_attribute_divisor v3 VK_EXTX_portability_subset v1 VK_MVK_macos_surface v2 VK_MVK_moltenvk v27 [Truncated] ^ warning: unused variable 'screen_uv' float2 screen_uv = float2(0.0); ^ warning: unused variable 'light_vertex' float3 light_vertex = float3(vertex0, 0.0); ^ warning: unused variable 'shadow_vertex' float2 shadow_vertex = vertex0; ^ warning: unused variable 'normal_depth' float normal_depth = 1.0; ^ warning: unused variable 'base_color' float4 base_color = color; ^ ``` **Steps to reproduce:** Run Godot with debug version of MoltenVK library, and `MVK_DEBUG` environment variable set `1`.<issue_closed> Status: Issue closed
hugsy/gef
1110135458
Title: context failed NoneType object has no attribute sizeof Question: username_0: * [x] Did you use the latest version of GEF from `dev` branch? * [x] Is your bug specific to GEF (not GDB)? - Try to reproduce it running `gdb -nx` * [x] Did you read the [documentation](https://gef.readthedocs.org/en/latest/) first? * [x] Did you check [issues](https://github.com/username_2/gef/issues) (including the closed ones) - and the [PR](https://github.com/username_2/gef/pulls)? ### Step 1: Describe your environment * Operating System / Distribution: WSL2 under WIndows 11 * Architecture: x86_64 * Target Architecture: thumbv7em-none-eabihf via openocd * GEF version (including the Python library version) run `version` in GEF. ``` GEF: (Standalone) Blob Hash(/home/marcel/.gef.py): 612300df463844ee76d5dcf244053fb75b0c8b78 SHA1(/home/marcel/.gef.py): 470e64dd79a138efe82ee0b59616e04a43742d72 GDB: 11.1 GDB-Python: 3.9 ``` ### Step 2: Describe your problem When I try to debug my application remotely I'm seeing a lot of `Command 'context' failed to execute proplery` messages. Directly after reset I see registers and the stack, but as soon as I run to main, I don't see them anymore: ![grafik](https://user-images.githubusercontent.com/921462/150481977-574d48e0-c809-4a2a-a4a4-686461fb72fd.png) ![grafik](https://user-images.githubusercontent.com/921462/150482019-162f6804-51cf-4194-a97e-e4b2ef5e408e.png) #### Steps to reproduce 1. Get yourself a stm32 dev board 2. With the help of https://github.com/rust-embedded/cortex-m-quickstart/ build a example for your board (with an empty main). 3. use `cargo build` and afterwards `gdb-multiarch` to connect to your target #### Minimalist test case ```rust #![no_std] #![no_main] use cortex_m::asm; use cortex_m_rt::entry; use stm32l4 as _; use panic_halt as _; #[entry] fn main() -> ! { asm::nop(); // To not have main optimize to abort in release mode, remove when you add code loop { // your code goes here } } ``` #### Traces [Truncated] 251 version 252 version 253 si 254 break main 255 c 256 !cat src/main.rs 257 gef config gef.debug 1 258 break main 259 c ───────────────────────────── Runtime environment ────────────────────────────── * GDB: 11.1 * Python: 3.9.7 - final * OS: Linux - 5.10.60.1-microsoft-standard-WSL2 (x86_64) No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 21.10 Release: 21.10 Codename: impish ──────────────────────────────────────────────────────────────────────────────── ``` Answers: username_1: thank you for the issue @username_0 ! I can't find your GEF version anywhere in our git history, did you modify any files? for us to be able to help with this issue you have to use a clean, unmodified GEF version from `dev` branch. Also have you tried if the issue also exists with a rust binary for x86/x86_64? that would greatly improve our possibilities to understand the issue. Because unfortunately I don't have any stm32 lying around so I cant follow along :/ username_0: I took the latest dev. Maybe vim added a newline? x86(_64) works fine, for Linux and for windows. I'm more than willing to help you and resolve this issue. Maybe using qemu will have the same effect, let me try that real quick username_1: are you sure? on the latest `dev` e.g. the `version` command should return a `SHA256` instead of `SHA1` yeah, a working qemu setup would be great. looking at your code snippet I guess you took it from the [Embedded Rust Book](https://docs.rust-embedded.org/book/start/qemu.html#program-overview)? 
If so I can try the qemu setup they describe there... username_0: [fuzz.exe.zip](https://github.com/username_2/gef/files/7913155/fuzz.exe.zip) Here's the arm binary, the command line and another dump from gef: <details><summary>Trace</summary> ``` Remote debugging using :1234 Reset () at asm.S:44 44 in asm.S [ Legend: Modified register | Code | Heap | Stack | String ] ───────────────────────────────────────────────────────────────────────────────────── registers ──── ─────────────────────────────── Exception raised ─────────────────────────────── TypeError: unsupported operand type(s) for &: 'NoneType' and 'int' ───────────────────────────── Detailed stacktrace ────────────────────────────── ↳ File "/home/marcel/.gef.py", line 2365, in is_thumb() → return is_alive() and gef.arch.register(self.flag_register) & (1 << 5) ↳ File "/home/marcel/.gef.py", line 2385, in ptrsize() → return 2 if self.is_thumb() else 4 ↳ File "/home/marcel/.gef.py", line 7416, in do_invoke() → memsize = gef.arch.ptrsize ↳ File "/home/marcel/.gef.py", line 508, in wrapper() → return f(*args, **kwargs) ↳ File "/home/marcel/.gef.py", line 358, in wrapper() → return f(*args, **kwargs) ↳ File "/home/marcel/.gef.py", line 251, in wrapper() → return f(*args, **kwargs) ↳ File "/home/marcel/.gef.py", line 4518, in invoke() → bufferize(self.do_invoke)(argv) ─────────────────────────────────── Version ──────────────────────────────────── GEF: (Standalone) Blob Hash(/home/marcel/.gef.py): d146bead0373345349301b57d7940e96182d9793 SHA256(/home/marcel/.gef.py): 9c3886071c2ba2cb6459b15e3ca20e5bf7314665d0725f4f79ca8f9cc2f8861f GDB: 11.2 GDB-Python: 3.10 Loaded commands: $, aliases, aliases add, aliases ls, aliases rm, aslr, canary, capstone-disassemble, checksec, context, dereference, edit-flags, elf-info, entry-break, format-string-helper, functions, gef-remote, got, heap, heap arenas, heap bins, heap bins fast, heap bins large, heap bins small, heap bins tcache, heap bins unsorted, heap chunk, heap chunks, heap set-arena, heap-analysis-helper, hexdump, hexdump byte, hexdump dword, hexdump qword, hexdump word, highlight, highlight add, highlight clear, highlight list, highlight remove, hijack-fd, ida-interact, is-syscall, ksymaddr, memory, memory list, memory reset, memory unwatch, memory watch, name-break, nop, patch, patch byte, patch dword, patch qword, patch string, patch word, pattern, pattern create, pattern search, pcustom, pcustom edit, pcustom list, pcustom show, pie, pie attach, pie breakpoint, pie delete, pie info, pie remote, pie run, print-format, process-search, process-status, registers, reset-cache, ropper, scan, search-pattern, shellcode, shellcode get, shellcode search, stub, syscall-args, theme, trace-run, unicorn-emulate, version, vmmap, xfiles, xinfo, xor-memory, xor-memory display, xor-memory patch ───────────────────────────── Last 10 GDB commands ───────────────────────────── 129 target ext :3333 130 load 131 c 132 bt 133 target ext :1234 134 c 135 target ext :1234 136 gef config gef.debug 1 137 si 138 target ext :1234 ───────────────────────────── Runtime environment ────────────────────────────── * GDB: 11.2 * Python: 3.10.2 - final * OS: Linux - 5.16.1-arch1-1 (x86_64) LSB Version: 1.4 Distributor ID: Arch Description: Arch Linux Release: rolling Codename: n/a ──────────────────────────────────────────────────────────────────────────────── [Truncated] 136 gef config gef.debug 1 137 si 138 target ext :1234 ───────────────────────────── Runtime environment ────────────────────────────── * GDB: 11.2 * Python: 
3.10.2 - final * OS: Linux - 5.16.1-arch1-1 (x86_64) LSB Version: 1.4 Distributor ID: Arch Description: Arch Linux Release: rolling Codename: n/a ──────────────────────────────────────────────────────────────────────────────── ``` <details> qemu-system-arm -kernel fuzz.exe --machine stm32vldiscovery -S -s Hope this helps! username_2: username_2/gef-extras#45 was just merged, you can either try again with the version on the `dev` branch or wait for this weekend for the next relese that will include those changes. I think they should fix your issue: <img width="359" alt="image" src="https://user-images.githubusercontent.com/590234/150573091-c624f51b-8048-4dbb-a909-bdcd25bc06b0.png"> username_0: ![1643132102_grim](https://user-images.githubusercontent.com/921462/151029120-b5928fbd-d374-4a54-aa7e-a9e3761c65a5.png) @username_2 still no luck. Also I would like that this works out of the box without me running `pi reset_architecture...`. Is that possible? :/ username_2: It can be automatical if you add `pi reset_architecture("ARM-M")` to your gdbinit after gef.py is sourced.
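Concretely, the tip above would make the gdbinit look something like this (the gef.py path is a placeholder):

```
# ~/.gdbinit (sketch)
source ~/.gef.py
pi reset_architecture("ARM-M")
```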
dogpakk/roadmap
772161627
Title: Autogenerate Brand Category Question: username_0: - many stores like to feature products by brand - we already have 'brand' as a field on products - so we could easily auto-generate a "brand store"/category - need to put under an option flag (as some stores won't want it) - where to put link? Answers: username_0: Done Status: Issue closed
intuit/auto
633770085
Title: "dryRun" has inconsistent/unclear semantics in plugins Question: username_0: ## Description In the documentation, `--dry-run` is repeatedly described as, essentially, "Report what command will do but do not actually do anything." This is almost universally incorrect when it comes to the packaging plugins. ### Documentation When looking at the documentation at the areas where the `dryRun` option is available to the hooks, it appears it is only available during `makeRelease`. This turns out to NOT be true, as `auto.options.dryRun` will exist, regardless of the hook, if the command-line switch is included in the command-line. It is unclear what the guidance is for how `--dry-run` plays out with respect to versions in package management files (`gemspec`, `gradle.properties`, `package.json`, `pom.xml`). ### Plugins Undoubtedly, there are plugins that make changes to the packaged versions and push those changes to the git repository, regardless of whether `--dry-run` is used or not. From what I can tell, the only plugin that attempts to honor that switch is the `npm` plugin: ```console 2020-06-07 10:44:20 (⎈ |eks_eks-dev-cluster:default) username_0@eanna i ~/Development/Terradatum/auto/plugins remove-maven-release-plugin-requirement % grep -R "dryRun" * npm/src/index.ts: "dryRun" in auto.options && auto.options.dryRun npm/src/index.ts: if (!options.dryRun && isIndependent) { npm/dist/index.js: "dryRun" in auto.options && auto.options.dryRun npm/dist/index.js: if (!options.dryRun && isIndependent) { ``` ## `dryRun` vs ??? How **should** a plugin work with `dryRun` semantics? Should it make any changes at all to the file system, or just report those changes it would make if it were to make them? This then begs the question of whether or not you can easily show all the changes you would be making or the artifacts you would be building without actually making some changes. For instance: "Show me the artifacts I would create with the new version." That's a very difficult task if you aren't able to make changes to the package management files that are used to drive the artifact's version. Answers: username_1: Very true. `auto`'s dry runs involve lots of guessing done by core and not the plugins. --- As I see more package plugins come to auto I'm realizing there are two package publishing scenarios that are common 1. build before publish 2. build during publish `auto` until this point has been primarily serving `1` but `2` is becoming more prominent. I'm open to calling the hooks in different way. for v10 I think taking a look at the hooks system to see what we can improve. You point of view helps a lot! Maybe we could move the dry run checking more to the hooks to get a better experience. username_0: What caught my attention was my work on the `maven` plugin - and the fact that during `--dry-run` the `afterShipIt` hook is called, and for both `maven` and `gradle` this hook is used to increment a SNAPSHOT version and then push the change to the central repo. In other words, during a `--dry-run`, when using `gradle` or `maven`, you get changes to disk AND pushes to your central repository. I would elaborate on your two scenarios, and for each include "inspection", "computation", "changes made to state before publish" and then finally, "pushing changes to state". 1. inspection - what is the version, is it a monorepo, are there properties and options which will affect how I read that version (e.g. for `gradle` and `maven` - is it a SNAPSHOT release?). 2. 
computation - compute the new version, compute the changelog, etc. 3. changes made to state before publish - updating the package management files to reflect the newly computed version, committing those changes to git, tagging those commits, etc. 4. build 5. push changes to state - publish, push to central repo, github release, etc. One thing that I'm still challenged by is the order and likelihood of a hook being run. For instance, I was surprised to learn that `--dry-run` does NOT run the `version` hook. username_1: Making a graph to hopefully clear it up. Just found a bug too in doing it username_1: ![Untitled-2020-04-22-1020](https://user-images.githubusercontent.com/1192452/83996362-ccdfa080-a910-11ea-9afc-c15ee5ea01d0.png) username_0: That's super helpful! username_2: <!-- GITHUB_RELEASE COMMENT: released --> :rocket: Issue was released in [`v10.0.0-next.1`](https://github.com/intuit/auto/releases/tag/v10.0.0-next.1) :rocket: username_2: <!-- GITHUB_RELEASE COMMENT: released --> :rocket: Issue was released in [`v10.0.0`](https://github.com/intuit/auto/releases/tag/v10.0.0) :rocket: Status: Issue closed
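For readers skimming the thread, a hedged, language-agnostic sketch of the dry-run semantics discussed above: the read-only stages (inspection, computation) always run, and the mutating stages (write versions, build, publish) are skipped under dry run. All names are illustrative, not auto's real hook API.

```python
# Illustrative only: how the five stages map onto a dry-run guard.
from dataclasses import dataclass

@dataclass
class Plan:
    version: str

def release(dry_run: bool = False) -> None:
    plan = Plan(version="1.2.3")  # stages 1-2: inspect + compute (read-only)
    if dry_run:
        print(f"would bump to {plan.version} and publish")
        return  # stages 3-5 (all mutations) never run
    print(f"writing {plan.version} to package files, building, publishing")

release(dry_run=True)
```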
ccyang/ysx-stackdriver-alerts
377685376
Title: [ALERT] GKE Container - CPU utilization for mongo on ysx-cloud mongo Question: username_0: Date: November 06, 2018 at 12:03PM
<EMAIL>

**Alert firing: GKE Container - CPU utilization for mongo**
CPU request utilization for ysx-cloud mongo is above the threshold of 0.5 with a value of 0.532.

Summary:
- Start time: Nov 6, 2018 at 3:59AM UTC (~3 min, 45 sec ago)
- Project: ysx-cloud
- Policy: [CPU Utilization (mongo)](https://app.google.stackdriver.com/policy-advanced/3667601285820380244?project=ysx-cloud)
- Condition: GKE Container - CPU utilization for mongo
- Metric: kubernetes.io/container/cpu/request_utilization
- Threshold: above 0.5
- Observed: 0.532
- Incident details: https://app.google.stackdriver.com/incidents/0.kyw8xfeo59n5?project=ysx-cloud
alexandrevicenzi/Flex
599065071
Title: Update documentation for JINJA_EXTENSIONS Question: username_0: JINJA_EXTENSIONS has been deprecated and replaced by JINJA_ENVIRONMENT. It is used like this:
```python
JINJA_ENVIRONMENT = {
    'extensions': ['jinja2.ext.i18n']
}
```
Answers: username_1: The wiki is publicly editable. You can update the wiki with the correct configuration. username_2: Wiki was updated. Status: Issue closed
grails/grails-data-mapping
376536235
Title: GRAILS-5208: Discriminator used in table-per-subclass not implemented Question: username_0: Original Reporter: frankly.watson Environment: Linux Version: 1.2-M3 Migrated From: http://jira.grails.org/browse/GRAILS-5208
I have spent quite some time trying to get a discriminator applied to a table-per-subclass hierarchy using GORM domain objects, but nothing appears to work. Judging by the Hibernate docs (http://docs.jboss.org/hibernate/core/3.3/reference/en/html/inheritance.html#inheritance-tablepersubclass-discriminator) this should be possible.
My example classes, with my various attempts in comments (the documentation on how to achieve this does not appear up to date either: http://jira.codehaus.org/browse/GRAILS-5168):
```groovy
package foo
class Parent {
    String name
    int counter
    static mapping = {
        tablePerHierarchy false // also tried to replace with 'tablePerSubclass true'
        //(1) discriminator column: [name: "product_type", sqlType: "varchar"], value: "parent"
        //(2) discriminator column: "product_type", value: "parent"
        //(3) discriminator value: "parent"
        discriminator "parent"
    }
    static constraints = {
    }
}
```
and
```groovy
package foo
class Child extends Parent {
    String color
    static mapping = {
        discriminator "child"
    }
}
```
Some degree of discriminator support appears by commenting out
```groovy
tablePerHierarchy false
```
In this instance I see a single PARENT table and a column called CLASS, the default discriminator. However, in table-per-subclass I never see either a CLASS column or any custom column name I have specified.
The following bug (FIXED in 1.1RC1) suggests that I should be able to specify a user-defined discriminator value (http://jira.codehaus.org/browse/GRAILS-3913), although it makes no comment on whether this is only applicable to tablePerHierarchy. I do see my custom discriminator column name appear in table-per-hierarchy, but not in table-per-subclass.
The following also suggests there is something working, although not completely: http://jira.codehaus.org/browse/GRAILS-4487
I am supposing that the logic I have so far uncovered which does work is the conclusion of this discussion: http://www.nabble.com/Hierarchy---Discriminator-td19615294.html
Based on the above, and others observing the same since May 09 (http://www.pubbs.net/grails/200905/68819/), I conclude that support is only partially there; it exists for tablePerHierarchy but not at all for tablePerSubclass
wantedly/frontend-night
762239166
Title: [2020/12/18] Frontend Night Question: username_0:
- Don't sweat preparing materials
- Feel free to join, and to leave partway through
- We don't skip

Write down topics you'd like to talk about or hear about ✏️
Hashtag [#frontend_night](https://twitter.com/hashtag/frontend_night?src=hash)
_Created from https://github.com/wantedly/frontend-night/issues/65 by [issue-creator](https://github.com/rerost/issue-creator)_<issue_closed> Status: Issue closed
npo6ka/FNEI
321764565
Title: Request: show something-from-nothing resources Question: username_0: Water comes from an offshore pump. There's no way to see this in FNEI currently. With Bobs+Angels, there's a "sea floor pump" that produces "viscous mud water", and I had a hell of a time figuring that out, because FNEI showed me recipes to produce viscous mud water but they all went in circles. I would like for the "recipe" view to also show things like offshore pumps that produce a fluid from nothing.<issue_closed> Status: Issue closed
thiviyanT/torch-rgcn
1042947158
Title: Error when running the link prediction experiment Question: username_0: I was able to run the node classification on all datasets; unfortunately, the link prediction experiment always throws an exception. I've tried changing the configurations but no luck. Below is the error:
```
...
WARNING - root - Added new config entry: "training.use_cuda"
WARNING - R-GCN Link Prediction - No observers have been added to this run
INFO - R-GCN Link Prediction - Running command 'train'
INFO - R-GCN Link Prediction - Started
ERROR - R-GCN Link Prediction - Failed after 0:00:00!
Traceback (most recent calls WITHOUT Sacred internals):
File "experiments/predict_links.py", line 88, in train
decoder_config=decoder
File "/home/raftel/torch-rgcn/torch_rgcn/models.py", line 224, in __init__
.__init__(nnodes, nrel, nfeat, encoder_config, decoder_config)
File "/home/raftel/torch-rgcn/torch_rgcn/models.py", line 58, in __init__
init(self.node_embeddings)
TypeError: schlichtkrull_normal_() missing 1 required positional argument: 'shape'
```
Answers: username_1: Hello, I also encountered this problem. Can I ask if you found a solution? username_2: Hello, I also encountered this problem. It seems like there is a bug in both schlichtkrull_normal_ and schlichtkrull_uniform_ in utils.misc
```python
def schlichtkrull_normal_(tensor, shape, gain=1.):
    """Fill the input `Tensor` with values according to the Schlichtkrull method, using a normal distribution."""
    std = schlichtkrull_std(shape, gain)
    with torch.no_grad():
        return tensor.normal_(0.0, std)

def schlichtkrull_uniform_(tensor, gain=1.):
    """Fill the input `Tensor` with values according to the Schlichtkrull method, using a uniform distribution."""
    std = schlichtkrull_std(tensor, gain)
    with torch.no_grad():
        return tensor.uniform_(-std, std)
```
because in torch_rgcn.models, lines 55-56, you do not pass the 'shape' value:
```python
init = select_w_init(encoder_w_init)
init(self.node_embeddings)  # here, shape is missing
```
I think, in addition, there is something wrong with `schlichtkrull_uniform_`: it passes the `tensor` to schlichtkrull_std, which is also wrong in my opinion (should be `shape` instead, no?)
Workaround: I am able to get rid of that error by changing, in the respective yaml file, the `weight_init` for the encoder from schlichtkrull-normal to xavier-normal (e.g., line 30 in conf.yaml): `weight_init: xavier-normal # schlichtkrull-normal`
This of course means that a different function is used for initialising biases; however, it at least gets rid of the error, so maybe this helps you for now @username_1 :) username_3: Thanks @username_2 for this. I will look into it. Strange that I didn't encounter the problem. Which version of PyTorch were you using? username_2: Hi @username_3, thanks for your reply and for looking into this! I'm using pytorch 1.11.0. I assume it has nothing to do with pytorch though, because I get the error message about the missing argument (schlichtkrull_normal_() missing 1 required positional argument: 'shape'), which actually makes sense given that only one argument is passed here: init(self.node_embeddings) :) Best Julia username_4: I also encounter this problem. I guess the shape argument of the weight init function is the same as the tensor shape, so I looked up the shapes and entered them. But whether I use my way or the method Julia mentions, the code still doesn't work.
The error log is as follows:
```
INFO - R-GCN Link Prediction - Running command 'train'
INFO - R-GCN Link Prediction - Started
nodes padded to 280 to make it divisible by 5.0 (added 0 null nodes).
Start training...
min tensor(-5.51041, device='cuda:0', grad_fn=<MinBackward1>)
max tensor(4.98508, device='cuda:0', grad_fn=<MaxBackward1>)
mean tensor(0.21224, device='cuda:0', grad_fn=<MeanBackward0>)
std tensor(1.49450, device='cuda:0', grad_fn=<StdBackward0>)
size torch.Size([3300])
ERROR - R-GCN Link Prediction - Failed after 0:00:03!
```
Both methods got this result. I created the environment the same as environment.yml, except sacred is 0.8.2 because the package is missing from all my channels, but I think that doesn't matter. username_4: I also encounter this problem. The classification part works fine for me. And I have created the environment the same as environment.yml, except sacred=0.8.2 because 0.8.1 is missing from all my channels, but I think it doesn't matter. In my opinion, the shape argument of the init function is the shape of the tensor that needs to be initialized. But whether I try my way or use the method Julia mentions, the program still doesn't work. The error log is as follows:
```
INFO - R-GCN Link Prediction - Running command 'train'
INFO - R-GCN Link Prediction - Started
nodes padded to 280 to make it divisible by 5.0 (added 0 null nodes).
Start training...
min tensor(-5.51041, device='cuda:0', grad_fn=<MinBackward1>)
max tensor(4.98508, device='cuda:0', grad_fn=<MaxBackward1>)
mean tensor(0.21224, device='cuda:0', grad_fn=<MeanBackward0>)
std tensor(1.49450, device='cuda:0', grad_fn=<StdBackward0>)
size torch.Size([3300])
ERROR - R-GCN Link Prediction - Failed after 0:00:03!
```
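Pulling the thread together, a hedged sketch of one way the initializers quoted above could be repaired: derive the std from the tensor's own shape, so the extra `shape` argument missing at the `init(self.node_embeddings)` call site disappears. Untested against the repo; `schlichtkrull_std` is the helper already defined in utils.misc and is assumed to accept any shape-like sequence.

```python
import torch

def schlichtkrull_normal_(tensor, gain=1.):
    """Schlichtkrull init (normal); std derived from the tensor's own shape."""
    std = schlichtkrull_std(tensor.shape, gain)  # helper from utils.misc
    with torch.no_grad():
        return tensor.normal_(0.0, std)

def schlichtkrull_uniform_(tensor, gain=1.):
    """Schlichtkrull init (uniform); fixes the tensor-vs-shape mixup noted above."""
    std = schlichtkrull_std(tensor.shape, gain)
    with torch.no_grad():
        return tensor.uniform_(-std, std)
```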
duetosymmetry/qnm
1028453079
Title: Add a test to validate conventions for mirror modes Question: username_0: The test will go something like this: ```python # These should be inputs to the test s, l, m, n = (-2, 4, 3, 5) a=0.7 # Actual body of the test below mode = qnm.modes_cache(s=s, l=l, m=m, n=n) om, A, C = mode(a=a) solver = copy.deepcopy(mode.solver) solver.clear_results() solver.set_params(a=a, m=-m, A_closest_to=A.conj(), omega_guess=-om.conj()) om_prime = solver.do_solve() assert np.allclose(-om.conj() , solver.omega) assert np.allclose(A.conj(), solver.A) assert np.allclose((-1)**(l + qnm.angular.ells(s, m, mode.l_max)) * C.conj(), solver.C) ```<issue_closed> Status: Issue closed
FormidableLabs/formidable-react-native-app-boilerplate
193039289
Title: Unable to add a Button Question: username_0: I tried adding a button and it causes an error with the message "Invariant Violation: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but got: undefined. Check the render method". You can easily reproduce this issue by adding a Button to app.js. Am I missing something?
svaarala/duktape
98964354
Title: Duktape.dec('jx', '1Infinity') parses as -Infinity Question: username_0: In Duktape 1.2.2 and prior, `1Infinity` will incorrectly parse as `-Infinity`, which is caused by an error in the parsing condition which will parse anything beginning with a minus or a number followed by an `I` as -Infinity. The impact is that inputs that should cause SyntaxError are accepted, there should be no effect for valid inputs. Answers: username_0: Will be fixed by #209 merge. username_1: I get the `I` thing (opportunistically treating a number starting with I as infinity), but how does it end up treating a leading digit as a negation? This seems like a weird bug. :) username_0: The bug is here: https://github.com/username_0/duktape/blob/master/src/duk_bi_json.c#L655-L660 There's a shared parent if, and the child if checking for -Infinity ignores the leading character. Status: Issue closed username_0: Fixed in master.
pandas-dev/pandas
319211375
Title: Explicit signature for NDFrame.to_hdf Question: username_0: Currently we have `to_hdf(self, path_or_buf, key, **kwargs)` It'd be good to replace that with the actual signature (unless I'm missing a reason for the current implementation) and document it fully. Answers: username_1: I would love to help. How can I get started on this? username_0: Great! General contributing guidelines are at http://pandas-docs.github.io/pandas-docs-travis/contributing.html For this specific one, I would look at `to_hdf` in `pandas/pandas/io/pytables.py`. Anywhere we pass `kwargs` currently (`store.append` and `store.put`) we'll need to figure out the correct set of keyword arguments expected. Some like `mode`, `format`, etc. will be straightforward. Others will take some digging. Let us know if you get stuck. username_1: Cool. On it now. Status: Issue closed username_2: Closed by #29957.
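As a hedged illustration of what "the actual signature" might look like once the kwargs passed through to `HDFStore.put`/`HDFStore.append` are enumerated (the exact parameter set below is a guess, not the final API):

```python
# Illustrative sketch only; parameter names and defaults would need to be
# confirmed against HDFStore.put and HDFStore.append.
def to_hdf(self, path_or_buf, key, mode='a', complevel=None, complib=None,
           append=False, format=None, index=True, min_itemsize=None,
           nan_rep=None, dropna=None, data_columns=None, errors='strict',
           encoding='UTF-8'):
    ...
```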
kyma-project/kyma
371906343
Title: Number of core-catalog-etcd-stateful-backup pods on Error even though Backup is disabled Question: username_0: **Description** Kyma is deployed with __ENABLE_ETCD_BACKUP__ set to `false`. Still, we observe the pod situation below on the cluster:
```
kyma-system core-catalog-etcd-stateful-backup-1539702000-4r294 0/1 Error 0 1h
kyma-system core-catalog-etcd-stateful-backup-1539702000-7hgjk 0/1 Error 0 1h
kyma-system core-catalog-etcd-stateful-backup-1539702000-j8vzn 0/1 Error 0 1h
kyma-system core-catalog-etcd-stateful-backup-1539702000-vnngg 0/1 Error 0 1h
kyma-system core-catalog-etcd-stateful-backup-1539702000-vtxkq 0/1 Error 0 1h
kyma-system core-catalog-etcd-stateful-backup-1539703800-5d7lb 0/1 Error 0 45m
kyma-system core-catalog-etcd-stateful-backup-1539703800-b4tj5 0/1 Error 0 42m
kyma-system core-catalog-etcd-stateful-backup-1539703800-b67f5 0/1 Error 0 44m
kyma-system core-catalog-etcd-stateful-backup-1539703800-p6ffq 0/1 Error 0 45m
kyma-system core-catalog-etcd-stateful-backup-1539703800-vlcx8 0/1 Error 0 44m
kyma-system core-catalog-etcd-stateful-backup-1539705600-2r2xl 0/1 Error 0 15m
kyma-system core-catalog-etcd-stateful-backup-1539705600-8974c 0/1 Error 0 14m
kyma-system core-catalog-etcd-stateful-backup-1539705600-9b9vv 0/1 Error 0 14m
kyma-system core-catalog-etcd-stateful-backup-1539705600-dt7xf 0/1 Error 0 15m
kyma-system core-catalog-etcd-stateful-backup-1539705600-frb4m 0/1 Error 0 12m
kyma-system core-catalog-etcd-stateful-backup-1539705600-rlf8r 0/1
```
Answers: username_1: Hi, I've checked that, and it is not possible with the current Kyma installation. The CronJob from your logs is installed only if `.Values.global.etcdBackup.enabled` is set to **true**, see: https://github.com/kyma-project/kyma/blob/master/resources/core/charts/service-catalog/charts/etcd-stateful/templates/05-backup-job.yaml#L1 This value is set to false by default in the [values.yaml](https://github.com/kyma-project/kyma/blob/master/resources/core/values.yaml#L25) file. In a cluster installation you can override that, as you mention, by changing the __ENABLE_ETCD_BACKUP__ placeholder. The installer then overrides `.Values.global.etcdBackup.enabled` in **values.yaml**. I've tried to reproduce your issue on our cluster but I cannot, because of the workflow I described above. I can only help you if you provide information on how you installed your cluster. Cheers :) Status: Issue closed
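For reference, the gate described in the answer boils down to this default (paraphrased from the linked values.yaml):

```yaml
# resources/core/values.yaml (excerpt, paraphrased)
global:
  etcdBackup:
    enabled: false  # the backup CronJob is templated only when this is true
```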
opencirclesolutions/dynamo
155046900
Title: Improve progress update mechanic for ProgressForm Question: username_0: The ProgressForm currently uses a fairly ugly mechanism that requires the user to pass along and update an atomic integer. This can easily be improved by using an observer-like construction (have the progress form implement an interface, pass this interface along to the batch process, and call the "increment" method on the interface).<issue_closed> Status: Issue closed
neo4j-contrib/neovis.js
370430179
Title: 1 Error - Cannot read property 'constructor' of null while using Neovis.js Question: username_0: Information: neo4j version - 3.4.7 Enterprise, browser version - Version 3.4.7 what kind of API / driver do you use - I am using Neovis.js With cypher query - var cypherQuery = "MATCH (n) OPTIONAL MATCH (n)-[r]->(m) RETURN n,r,m LIMIT 10"; **I get proper response -** ![neovis-1](https://user-images.githubusercontent.com/37062796/46991410-7b731400-d123-11e8-9e0f-eb9df0025914.png) **Issue is - I get an error message** ERROR TypeError: Cannot read property 'constructor' of null when my Limit in the cypher query increases beyond 10. (say )- var cypherQuery = "MATCH (n) OPTIONAL MATCH (n)-[r]->(m) RETURN n,r,m LIMIT 15"; **complete log of error -** ERROR TypeError: Cannot read property 'constructor' of null at neovis.js:36514 at Record.forEach (neovis.js:33630) at Object.onNext (neovis.js:36511) at _RunObserver.onNext (neovis.js:33014) at Connection._handleMessage (neovis.js:31442) at Dechunker.Connection._dechunker.onmessage (neovis.js:31392) at Dechunker._onHeader (neovis.js:30537) at Dechunker.AWAITING_CHUNK (neovis.js:30490) at Dechunker.write (neovis.js:30548) at WebSocketChannel.self._ch.onmessage (neovis.js:31365) Please help. **Source code -** const url = 'bolt://localhost:11001'; const username = 'neo4j'; const password = '<PASSWORD>'; const encrypted = true; MATCH (n) OPTIONAL MATCH (n)-[r]->(m) RETURN n,r,m LIMIT 30 var cypherQuery = ""; var config = { container_id: "viz", server_url: "bolt://localhost:11001", server_user: "neo4j", server_password: "<PASSWORD>", labels: { "Banking": { [Truncated] }, relationships: { "cc": { "thickness": "weight", "caption": false } }, RETURN cc", initial_cypher: cypherQuery , arrows: true, hierarchical_layout:true, hierarchical_sort_method:"directed", }; this.viz = new NeoVis.default(config); this.viz.render(); console.log(this.viz); } Answers: username_1: OPTIONAL MATCH is returning 'null' if dont have match, and the lib dont treat it. username_2: Should be fixed after the newer version will be released username_3: Is this fixed yet? @username_2 username_2: Yes, it should have been fixed by the last update, try updating (if you are using npm/yarn) and see for yourself username_4: Hello, I've been updating with npm to [email protected] (using Neo4j 3.5 community edition) and I always have the same error with null properties from Cypher OPTIONAL query. 
Thanks username_2: I will test that later today Status: Issue closed username_6: This is my query -
```
match (n:Report)
with n
optional match (m)-[r]->(n)
optional match (n)-[r1]->(o)
return *
```
And this is the error I'm getting on the console -
```
web.dom-collections.for-each.js (13,1) [object Error]: {description: "Unable to get property 'constructor' of undefined or null reference", message: "Unable to get property 'constructor' of undefined or null reference", number: -2146823281, stack: "TypeError: Unable to get property 'constructor' of undefined or null reference at Anonymous function (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:32:102848) at d (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:30:309371) at Anonymous function (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:30:309159) at e[t] (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:30:309787) at h (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:32:98691) at s (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:32:98903) at Anonymous function (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:32:98962) at N (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:30:303200) at Anonymous function (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:32:98837) at Anonymous function (https://cdn.neo4jlabs.com/neovis.js/master/neovis.js:32:104968)"}
```
username_2: I'll test it later today; this shouldn't happen on the latest version of neovis. username_2: It looks like cdn/master is not the latest (can't fully confirm because I'm not part of neo4j); try to use https://cdn.neo4jlabs.com/neovis.js/v1.2.1/neovis.js instead. username_6: Thank you very much! This works!
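To make the null-handling point above concrete, here is a hedged JavaScript sketch of a defensive per-record loop; the function name and the `constructor.name` dispatch are illustrative, mirroring the failing call site rather than the library's exact code, and `record` stands for the neo4j-driver `Record` seen in the stack traces:
```js
// Skip null fields produced by OPTIONAL MATCH instead of dereferencing
// them: reading `value.constructor` on null is exactly the crash above.
function collectGraphItems(record, onNode, onRelationship) {
  record.forEach((value) => {
    if (value === null || value === undefined) {
      return; // OPTIONAL MATCH with no match yields null, so nothing to draw
    }
    if (value.constructor.name === 'Node') {
      onNode(value); // e.g. build a vis.js node from it
    } else if (value.constructor.name === 'Relationship') {
      onRelationship(value); // e.g. build a vis.js edge from it
    }
  });
}
```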
TeamSeekers/WheresMyStuff
231668010
Title: M2 - Create original branch Question: username_0: Create a branch called original. Each team member should clone the M2 project. BE SURE THAT original REMAINS AS THE UNMODIFIED VERSION and that changes to the repository are done on the main (master) branch.
troykinsella/concourse-ansible-playbook-resource
844423303
Title: SSH name resolution fails in latest image version Question: username_0: It looks like the behaviour of SSH has changed in the latest image `28239a6eb5b4`. It defaults to using IPv6, which causes lookups that previously worked to fail. A workaround for this issue is to add the following to the source configuration of the resource:
```
source:
  ssh_common_args: "-4"
```
Answers: username_1: Could it be because your network is configured with IPv6, but unable to route? Anyway, this doesn't seem like a bug with this resource. username_0: I agree, not a bug here, just a change in behaviour :+1:
nunit/nunit
369815224
Title: Add <test-output> and <test-message> to XSD and documentation Question: username_0: Anything else I missed that you can think of? https://github.com/nunit/docs/wiki/Test-Result-XML-Format Answers: username_0: I was confused: `<test-output>` and `<test-message>` don't go in the NUnit3 test results, so we'd need a separate XSD for each ITestEventListener event. I'll close this until there is demand. Status: Issue closed username_0: I apologize, I don't follow. The point of using XML fragments as an application interface is so that we don't need to use XSD to document the XML part of the interface? username_1: It needs to be documented. XSD is not used principally to document and is not, in fact, very useful as documentation for programmers. Its primary purpose is to permit automated validation of documents and, in some cases, automatic code generation for programs that process the documents. These fragments are not documents. They are message content between program components and are changeable. There is no restriction on what elements are used, and the attributes and inner XML of any message may change. Driver programs and any others that choose to handle them (like our own console runner) need to be aware of that and be written accordingly. I agree wholeheartedly that that last fact needs to be documented, along with the currently known formats (because we don't know all of them). I just don't think XSD is very useful. Bluntly, if I were a programmer trying to use this, XSD would be the worst possible documentation for me - worse even than searching the source code itself. What I would like to see is a list of known elements, their attributes and any inner elements. A set of general recommendations like "ignore any elements you don't recognize" would be useful to beginners as well.
gottfrois/link_thumbnailer
239959407
Title: Basic instructions needed Question: username_0: I installed `link_thumbnailer` as described, adding the gem in the Gemfile and then running `bundle install`. Then I generated the configuration file as suggested with: `rails g link_thumbnailer:install`. However, I did not change the default configuration values. I would like to use `link_thumbnailer` in the application I created starting from <NAME>'s Ruby on Rails tutorial, which is a Twitter-like application that allows users to post microposts with images. What I expected is the gem to generate an image thumbnail from a given URL copied and pasted in the textarea once I created the micropost; that is, the URL string with the thumbnail below it. However, the newly created micropost shows only the URL string. I wonder if I am missing something. Should I add `require 'link_thumbnailer'` or anything else somewhere? Status: Issue closed Answers: username_1: I don't know how you implemented your application. When you generate a thumbnail using LinkThumbnailer, you will get an object in response with `URL`, `title`, and `images` attributes. I invite you to take a look at the readme.
terraform-aws-modules/terraform-aws-security-group
721867863
Title: Error: One of ['cidr_blocks', ... and Error: [WARN] A duplicate Security Group Question: username_0: **THREE** things (1) Is there some timing issue with creation of security groups? It seems as if it errors, creates it, then tries to create it again. (2) If I'm using the nfs module why do I have to specify the nfs rule again? ``` ### This is what I __expected__ would work ### module "nfs_security_group" { source = "terraform-aws-modules/security-group/aws//modules/nfs" version = "~> 3.0" name = "${local.namespace}-nfs" vpc_id = var.vpc_id source_security_group_id = "${module.grafana_security_group.this_security_group_id}" tags = local.default_tags } ``` (3) I've looked at the examples already and I still don't fully understand. New to this so please be patient with me 😅 This is related to #191 which is closed but I think should still be open ``` Error: One of ['cidr_blocks', 'ipv6_cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] must be set to create an AWS Security Group Rule on .terraform/modules/grafana.nfs_security_group/main.tf line 58, in resource "aws_security_group_rule" "ingress_rules": 58: resource "aws_security_group_rule" "ingress_rules" { Error: [WARN] A duplicate Security Group rule was found on (sg-0a99bd4d0b4992780). This may be a side effect of a now-fixed Terraform issue causing two security groups with identical attributes but different source_security_group_ids to overwrite each other in the state. See https://github.com/hashicorp/terraform/pull/2376 for more information and instructions for recovery. Error message: the specified rule "peer: sg-0a99bd4d0b4992780, ALL, ALLOW" already exists on .terraform/modules/grafana.nfs_security_group/main.tf line 363, in resource "aws_security_group_rule" "ingress_with_self": 363: resource "aws_security_group_rule" "ingress_with_self" { ``` ``` ### This is what produced the above ^ errors ### module "nfs_security_group" { source = "terraform-aws-modules/security-group/aws//modules/nfs" version = "~> 3.0" name = "${local.namespace}-nfs" description = "NFS security group for EFS mounted to ECS" vpc_id = var.vpc_id ingress_with_self = [{ rule = "all-all" }] ingress_with_source_security_group_id = [ { rule = "nfs-tcp", source_security_group_id = "${module.grafana_security_group.this_security_group_id}" } ] tags = local.default_tags } ``` Answers: username_1: I also face the same issue , please advise. I cannot provide CIDR at all for a RDS DB. **module.sg.aws_security_group_rule.ingress_rules[0]: Creating... Error: One of ['cidr_blocks', 'ipv6_cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] must be set to create an AWS Security Group Rule** module "sg" { source = "../terraform-aws-security-group-master" name = "${var.identifier}-sg" description = "Security group for RDS" vpc_id = var.vpc_id ingress_rules = ["postgresql-tcp"] ingress_with_source_security_group_id =[{ rule = "postgresql-tcp" source_security_group_id = sg }] egress_rules = ["all-tcp"] tags = var.tags } username_1: ****UPDATE - Issue resolved**** This issue resolved after removing ingress_rules & ingress_cidr_blocks both , earlier i gave ingress_rules which required One of ['cidr_blocks', 'ipv6_cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] #ingress_cidr_blocks = ["10.0.0.0/24"] #ingress_rules = ["postgresql-tcp"] username_0: @username_1 Can you paste your working code? 
username_2: I was testing out this module but kept running into this problem whenever I tried adding another CIDR Block to "ingress_cidr_blocks". I've reverted to just using the Terraform resource. username_3: I had a similar issue ``` Error: One of ['cidr_blocks', 'ipv6_cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] must be set to create an AWS Security Group Rule on ../../main.tf line 58, in resource "aws_security_group_rule" "ingress_rules": 58: resource "aws_security_group_rule" "ingress_rules" { ``` Except I shouldn't go in the mentioned code starting at line 58, I don't have a ingress_rules set, only ingress_with_source_security_group_id. I worked around this setting a ingress_cidr_blocks = ["my_vpc_cidr"] username_4: ## TL;DR This issue occurs for me when using a ***submodule*** in conjunction with `ingress_with_source_security_group_id`. Changing the code slightly to use the **main module** rather than a submodule allows Terraform to create the security groups without error. ## Example Code that Causes an Error The following code causes the error to occur. **Note that the code uses the `http-80` submodule**: ```hcl module "sg_webserver" { source = "terraform-aws-modules/security-group/aws//modules/http-80" version = "4.3.0" name = "webserver-http" description = "Allow http requests to webservers from loadbalancer" vpc_id = var.vpc_id ingress_with_source_security_group_id = [ { rule = "http-80-tcp" source_security_group_id = module.sg_loadbalancer.security_group_id } ] } ``` ```bash $ terraform apply # # Output omitted for brevity # ╷ │ Error: One of ['cidr_blocks', 'ipv6_cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] must be set to create an AWS Security Group Rule │ │ with module.sg_webserver.module.sg.aws_security_group_rule.ingress_rules[0], │ on .terraform/modules/sg_webserver/main.tf line 54, in resource "aws_security_group_rule" "ingress_rules": │ 54: resource "aws_security_group_rule" "ingress_rules" { │ ╵ ``` Note that no error is seen when running `terraform plan`. The error only occurs when executing `terraform apply`. Like @v-stickykeys I felt the above code would work. Put another way, there is nothing in the modules documentation to suggest that this shouldn't work. ## Example Working Code Changing things around slightly to use the ***main module*** rather than a submodule, allows Terraform to create the security groups **without error**. **The following code works as expected.** Again, note that the code uses the main module - **not a submodule**. ```hcl module "sg_webserver" { source = "terraform-aws-modules/security-group/aws" version = "4.3.0" name = "webserver-http" description = "Allow http requests to webservers from loadbalancer" vpc_id = var.vpc_id ingress_with_source_security_group_id = [ { rule = "http-80-tcp" source_security_group_id = module.sg_loadbalancer.security_group_id } ] egress_rules = ["all-all"] } ``` The inclusion of `egress_rules` causes the code above to work the same way as the submodule (the http-80 submodule creates egress rules automatically; the main module does not). username_5: Thank you @username_4 for the hint with _do not use the submodule_. I am using `ingress_with_source_security_group_id` as well as `egress_with_source_security_group_id` and I spent a lot of time investigating the problem! After you suggestion everything is working now as expected!
pepepper/Othhelo
422857850
Title: Linux network problem Question: username_0: On Linux, network mode doesn't work correctly. For example, a guest can't connect to a room even though the room number and password are correct. When debugging with the server, the server accepted two guest sockets from the same client.<issue_closed> Status: Issue closed
stephenslab/mixsqp
434946263
Title: Add EM "pre-fitting" step to mixsqp method Question: username_0: Running a very small number of EM updates before running mix-SQP can help mix-SQP converge more reliably to the solution. This is an example where mix-SQP on its own fails to find the correct solution, but running a few EM iterations first allows mix-SQP to easily converge to the solution.
```R
load("ashr.RData")
out1 <- ash(x,s,"+uniform",method="shrink",outputlevel=3,optmethod="mixIP")
out2 <- ash(x,s,"+uniform",method="shrink",outputlevel=3,optmethod="mixEM")

# Fails to converge.
out3 <- suppressWarnings(
  ash(x,s,"+uniform",method="shrink",outputlevel=3,optmethod="mixSQP",
      control = list(maxiter.sqp = 40,verbose = TRUE)))

# Converges easily: run a few EM iterations first, then pass the EM
# pre-fit (out4a) to mix-SQP as its starting point.
out4a <- suppressWarnings(
  ash(x,s,"+uniform",method="shrink",outputlevel=3,optmethod="mixEM",
      control = list(maxiter = 4)))
out4b <- ash(x,s,"+uniform",method="shrink",outputlevel=3,optmethod="mixSQP",
             g = out4a$fitted_g,fixg = FALSE,control = list(verbose = TRUE))
```
[ashr.RData.gz](https://github.com/stephenslab/mixsqp/files/3096062/ashr.RData.gz)
Answers: username_0: Implemented in commit `36a37b8`. Status: Issue closed
dart-lang/dart_ci
615348925
Title: Allow flaky tests to be manually changed to not flaky Question: username_0: I want to get this feature added as soon as possible. Proposed implementation:
* Add flakiness status to the new current_status service
* Add an API to write a changed modified_flaky.json to the latest build for that configuration.
* Builds are still using the per-build latest results.json and flaky.json from latest, not per-configuration
* Get a tool (command line?) for that API asap.

Relevant to #53 and #4. Answers: username_1: We can also recognize special lines in the CL description, something like
```
Fixes language_2/foo/bar_test
Fixes language_2/foo/bar_test in dartk-linux-release-x64
Fixes language_2/foo/bar_test in dartk-*-x64, dartkp-*
```
and reset the flakiness status and all previous approvals for the mentioned test and configurations before interpreting the results. This would work both for CQ and CI cases and would make sure that if CQ jobs are green then the test was actually fixed (and its failure is not hidden by either flakiness or approvals). username_2: I like @username_1's suggestion. Seems straightforward and clearly shows the intent to the reviewer. username_0: Work on this issue has started by adding an "inactive" field to records in flaky.json that have been marked as not flaky, and by changing the current removal of flaky records that have been stable for 100 runs to marking the records as inactive. When an inactive flaky record becomes flaky again, mark the field as no longer inactive, and increment a "reactivated_count" field. Make tools that read flaky.json recognize the "inactive" field.
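As an illustration of the proposal, here is a hypothetical flaky.json record after the change, written as a JavaScript object literal; only the `inactive` and `reactivated_count` fields come from the comment above, and every other field name is an assumption:
```js
// Hypothetical shape of one record in flaky.json after this change.
const flakyRecord = {
  name: 'language_2/foo/bar_test',           // illustrative test name
  configuration: 'dartk-linux-release-x64',  // illustrative configuration
  outcomes: ['Pass', 'RuntimeError'],        // illustrative observed outcomes
  inactive: true,        // set after 100 stable runs, or set manually
  reactivated_count: 1,  // bumped each time an inactive record turns flaky again
};
```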
Automattic/mongoose
461952437
Title: Projection ignored for properties with getters Question: username_0: **What is the current behavior?** If you use projection to exclude a property and that property has a defined getter on the schema, and at the same time your schema has specified `{ toObject : { getters: true } }`, then the projection seems to be ignored and the property is included in the response even though it should be excluded according to the projection. **If the current behavior is a bug, please provide the steps to reproduce.** ``` const mongoose = require('mongoose'); mongoose.set('debug', true); const GITHUB_ISSUE = `gh7940`; const connectionString = `mongodb://localhost:27017/${ GITHUB_ISSUE }`; const { Schema } = mongoose; run().then(() => console.log('done')).catch(error => console.error(error.stack)); async function run() { await mongoose.connect(connectionString); await mongoose.connection.dropDatabase(); const schema = new Schema({ foo: String, bar: { type: String, get: () => 'getter value' } }, { toObject : { getters: true } }); const Model = mongoose.model('Test', schema); await Model.create({ foo: 'test', bar: 'baz' }); const doc = await Model.findOne({ foo: 'test' }, 'foo'); console.log(doc); } ``` **What is the expected behavior?** The property excluded by the projection is actually excluded from the response, even if it has a getter defined. Not sure if this is the expected behavior when you set `{ getters: true }` but I would have expected the getter to be applied only when you actually want to get the value of the property. **What are the versions of Node.js, Mongoose and MongoDB you are using? Note that "latest" is not a version.** 5.6.1<issue_closed> Status: Issue closed
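As a possible mitigation while a fix lands (a hedged sketch, not a verified workaround): make the getter pass missing values through instead of always fabricating one, so a field excluded by the projection is not invented. Whether the key is then omitted from the `toObject()` output depends on how mongoose treats undefined paths.
```js
const mongoose = require('mongoose');
const { Schema } = mongoose;

// Same schema as the repro above, but with a null-safe getter that only
// transforms values that actually exist.
const schema = new Schema({
  foo: String,
  bar: {
    type: String,
    get: v => (v == null ? v : 'getter value')
  }
}, { toObject: { getters: true } });
```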
department-of-veterans-affairs/va.gov-team
846980747
Title: Tools FE VFS Support - PR Reviews Sprint 48 Question: username_0: ## Details Add PRs to this epic during Sprint 48. Any time you ... * Approve a PR * Leave comments on a PR * Request changes on a PR Add PRs from ... * VFS teams * Other VSP or Platform Crew teams DO NOT add PRs from ... * FE Tools team<issue_closed> Status: Issue closed
evanw/esbuild
727721869
Title: Files containing unicode characters may lead to malformed output Question: username_0: Please see [this repro](https://repl.it/@username_0/esbuild-unicode-demo#output.js). In it, you'll see an input file such as this: ```javascript export const unicodeWhitespaceCharacters = '\u0009\u000A\u000B\u000C\u000D\u0020\u00A0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200A\u202F\u205F\u3000\u2028\u2029\uFEFF'; ``` Running `esbuild` with no configuration on it results in the file `output.js` which produces a different declaration of `unicodeWhitespaceCharacters` as can be seen in the repro, one that leads to `Unterminated string constant` exceptions in IE 11. I ran into this problem specifically while bundling `core-js` (the declaration from the example is located within `core-js/internals/whitespaces.js` with `esbuild` and testing it across browsers. Answers: username_1: I don't think the default behavior should be to assume the target environment doesn't support UTF-8. Partially because IE is well on its way out, and because other commonly-used minifiers such as [Terser](https://github.com/terser/terser) and [UglifyJS](https://github.com/mishoo/UglifyJS) also have the same default behavior as esbuild here (assuming that UTF-8 is supported). So I think the way to solve this is an opt-in ASCII-only output mode (issue #70). I would have done this a long time ago except this unfortunately isn't as simple as it sounds because it involves parsing regular expressions to strip UTF-8 from them too, and esbuild doesn't currently parse regular expressions at all. However, it's pretty trivial to move forward with an ASCII-only output option with the caveat that regular expressions containing non-ASCII characters will continue to be unescaped in the output for now. So right now I'm thinking of doing that. username_2: Something else to consider as well is that many browsers have an optimized scanning path for ASCII code (even when marked as UTF-8). Converting to UTF-8 may provide a decrease in file size but could end up causing increased execution time and cancel out any improvements in transfer time. Chromium code reference: https://source.chromium.org/chromium/v8/v8.git/+/master:src/parsing/scanner-character-streams.cc;l=669?originalUrl=https:%2F%2Fcs.chromium.org%2F username_1: It turns out that esbuild also does this kind of optimization itself when mapping from byte offsets to column numbers for source maps. It was a noticeable speedup to avoid doing this for ASCII-only lines (chunking happens at the line level): https://github.com/username_1/esbuild/blob/da972afbbb1b3fb5e2fb5e67eb54e6c350e1d141/internal/js_printer/js_printer.go#L677-L689 I still think this isn't that strong a reason though, since these cases are relatively rare. A given chunk of JavaScript code will be ASCII the vast majority of the time because minifiers use ASCII identifiers for variable names. To me a stronger reason is the convenience of not having to deal with UTF-8 encoding issues when using esbuild. Some of my goals for esbuild are a) to follow ecosystem conventions and b) to be easy to use without too much configuration. Those two goals are in conflict in this situation and I'm still trying to figure out what to do about it. It's why I haven't released this new flag yet. username_0: I did the same experiment as you and actually found out some really interesting things that are relevant to this conversation. 
Here's the output of that `unicodeWhitespaceCharacters` constant from the original example across bundlers: **webpack** (without minification) ```js "\u0009\u000A\u000B\u000C\u000D\u0020\u00A0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200A\u202F\u205F\u3000\u2028\u2029\uFEFF"; ``` **webpack** (with minification) ```js "\t\n\v\f\r                 \u2028\u2029\ufeff"; ``` **parcel** (without minification) ```js "\u0009\u000A\u000B\u000C\u000D\u0020\u00A0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200A\u202F\u205F\u3000\u2028\u2029\uFEFF"; ``` **parcel** (with minification) ```js "\t\n\v\f\r                 \u2028\u2029\ufeff"; ``` **rollup** ```js "\u0009\u000A\u000B\u000C\u000D\u0020\u00A0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200A\u202F\u205F\u3000\u2028\u2029\uFEFF"; ``` **esbuild**: Now this one actually produces something quite different from the others: ```js " \n\v\f\r                 \u2028\u2029" ``` I can't actually copy the whole thing over, because it looks like this in vscode: <img width="350" alt="Screen Shot 2020-10-25 at 12 07 32 AM" src="https://user-images.githubusercontent.com/20454213/97094569-16c5ea80-1656-11eb-8a3b-0c80b7833c1c.png"> Apparently, for some reason `\ufeff` is lost in the output from esbuild, and it produces what appears to be an invalid character that can't be copied as seen in the image above. That same character is the one leading to a crash in IE 11. Also, there are other differences between Parcel, Webpack, and esbuild. The former two produces a `\t` that isn't present in the esbuild output. So, I guess there's _something_ there worth looking into. But then another observation is that these bundlers actually preserve the unicode escape sequence unless they are instructed to produce minification. That's interesting, because both of them use `terser` for minification, and as you also mentioned yourself, `terser` is the one applying this transformation of the escape sequence as an optimization. Status: Issue closed
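For readers following the ASCII-only discussion, here is a hedged JavaScript sketch of the kind of escaping such an output mode performs on string contents; it illustrates the idea and is not esbuild's actual implementation:
```js
// Return the source-literal text for a string with every code point
// outside printable ASCII escaped as \uXXXX, so parsers that mishandle
// raw non-ASCII bytes (e.g. IE 11 here) never see them.
function toAsciiOnly(str) {
  let out = '';
  for (const ch of str) {
    const cp = ch.codePointAt(0);
    if (cp >= 0x20 && cp < 0x7f) {
      out += ch; // printable ASCII passes through unchanged
    } else if (cp <= 0xffff) {
      out += '\\u' + cp.toString(16).padStart(4, '0');
    } else {
      // Astral code points become a surrogate pair of \uXXXX escapes
      // so the output also parses on pre-ES2015 targets.
      const hi = 0xd800 + ((cp - 0x10000) >> 10);
      const lo = 0xdc00 + ((cp - 0x10000) & 0x3ff);
      out += '\\u' + hi.toString(16) + '\\u' + lo.toString(16);
    }
  }
  return out;
}

// Example: the problem characters from this issue all get escaped.
console.log(toAsciiOnly('\u2028\u2029\uFEFF')); // \u2028\u2029\ufeff
```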
ilios/frontend
114086827
Title: Session Objective MeSH button goes to incorrect page focus Question: username_0: When clicking on a session objective's MeSH button, the page focus is sent to the top of the course objectives (as it is for the course objective MeSH button); these need a new target so that focus lands in the correct page context.<issue_closed> Status: Issue closed
rfxcom/node-rfxcom
759345079
Title: Scan failed - spawn udevadm ENOENT Question: username_0: Scanning for RFXCOM devices... Scan failed - spawn udevadm ENOENT /dev # ls -al /dev/tty* crw-rw-rw- 1 root root 5, 0 Dec 8 10:48 /dev/tty crwxrwxrwx 1 root root 188, 0 Dec 8 11:15 /dev/ttyUSB0 ``` rfxcom 2.3.1 node v12.18.4 os Alpine Linux (3.12.0) (Homebridge docker) Answers: username_0: Scanning for RFXCOM devices... None found ``` username_0: ``` /homebridge/test/node_modules/rfxcom # udevadm info -n /dev/ttyUSB0 P: /devices/pci0000:00/0000:00:04.0/0000:04:00.0/usb2/2-1/2-1:1.0/ttyUSB0/tty/ttyUSB0 N: ttyUSB0 E: DEVNAME=/dev/ttyUSB0 E: DEVPATH=/devices/pci0000:00/0000:00:04.0/0000:04:00.0/usb2/2-1/2-1:1.0/ttyUSB0/tty/ttyUSB0 E: MAJOR=188 E: MINOR=0 E: PHYSDEVBUS=usb-serial E: PHYSDEVDRIVER=ftdi_sio E: PHYSDEVPATH=/devices/pci0000:00/0000:00:04.0/0000:04:00.0/usb2/2-1/2-1:1.0/ttyUSB0 E: SUBSYSTEM=tty ``` username_0: Seems to be caused by simultaneous usage from openhab. Didn't know that was not possible. Status: Issue closed username_1: Serial ports are generally only accessible by one process at a time
kadikraman/draftjs-md-converter
318642960
Title: Bug with code block Question: username_0: Hey, I have a value
```
some text
\```
some code
\```
```
and draftjsToMd returns everything wrapped in ```. I think it happens here: https://github.com/username_1/draftjs-md-converter/blob/master/src/draftjsToMd.js#L180 For now I fixed it with
```
let newString = block.text.split...
....
newString = applyWrappingBlockStyle(block.type, newString);
newString = applyAtomicStyle(block, raw.entityMap, newString);
returnString += newString;
```
Answers: username_1: Hi @username_0 so sorry, I don't know how I missed this issue! I'd love to fix it, but I can't seem to write a test to replicate it. Am I understanding correctly that inputting
```
some text
\```
some code
\```
```
will return the equivalent of this when rendered in draft.js?
```
\```
some text
some code
\```
```
username_2: Created https://github.com/username_1/draftjs-md-converter/pull/49 with test case and fix Status: Issue closed
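For the test that was hard to write, a hedged minimal reproduction may help; it assumes the package exports `draftjsToMd` under that name, and the block keys and empty style ranges are illustrative:
```js
const { draftjsToMd } = require('draftjs-md-converter');

// Two blocks: plain text followed by a code block.
const raw = {
  blocks: [
    { key: 'a1', text: 'some text', type: 'unstyled',
      depth: 0, inlineStyleRanges: [], entityRanges: [], data: {} },
    { key: 'a2', text: 'some code', type: 'code-block',
      depth: 0, inlineStyleRanges: [], entityRanges: [], data: {} },
  ],
  entityMap: {},
};

console.log(draftjsToMd(raw));
// Expected: only "some code" is fenced.
// Observed before the fix: both blocks end up wrapped in the fence.
```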
wongjiahau/TTAP-Bug-Report
277429537
Title: Bug report #-532467849 Question: username_0: Object reference not set to an instance of an object. ==================== at Time_Table_Arranging_Program.Pages.Page_Login.<<Browser_OnLoadCompleted>g__ExtractData14_3>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.AsyncMethodBuilderCore.<>c.<ThrowAsync>b__6_0(Object state) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler) ==================== <HEAD><TITLE>myUTAR - The Universiti Tunku Abdul Rahman Web Portal</TITLE> <SCRIPT language=javascript> function MM_openBrWindow(theURL,winName,features) { window.open(theURL,winName,features); } function mypopup(url, sbar, resize, width, height, top, left){ tit='' reWin=window.open(url, tit, 'toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=' + sbar + ',resizable=' + resize + ',width=' + width + ',height=' + height + ',top=' + top + ',left=' + left) } function checkPhone(evt){ evt = (evt) ? evt : window.event var charCode = (evt.which) ? evt.which : evt.keyCode if ((charCode > 46 && charCode < 58) || charCode==45 || charCode==13){ return true } else{ alert("You can only key in numeric number") return false } } function checkNumeric(evt){ evt = (evt) ? evt : window.event var charCode = (evt.which) ? evt.which : evt.keyCode if((charCode > 47 && charCode < 58) || charCode==13){ return true } else{ alert("You can only key in numeric number") return false } } function IsNumeric(strString) { var strValidChars = "0123456789.-/"; var strChar; var blnResult = true; if (strString.length == 0) return false; for (i = 0; i < strString.length && blnResult == true; i++) { strChar = strString.charAt(i); if (strValidChars.indexOf(strChar) == -1) { blnResult = false; } } return blnResult; } function logout(myPath,logoutURL){ //alert(myPath+logoutURL) [Truncated] <TD>P103A(LAB)</TD> <TD>PRY1T2</TD></TR> <TR align=center> <TD>272</TD> <TD>T</TD> <TD>2</TD> <TD align=right>25</TD> <TD>Wed</TD> <TD>03:00 PM - 04:00 PM</TD> <TD>1.0</TD> <TD>1-14</TD> <TD>P103A(LAB)</TD> <TD>PRY1T3</TD></TR></FORM></TBODY></TABLE><BR><BR><BR></DIV> <FORM method=get name=frmRefresh action=masterSchedule.jsp><INPUT type=hidden value=2 name=reqCPage> <INPUT type=hidden name=reqUnit> <INPUT type=hidden name=reqDay> <INPUT type=hidden value=Any name=reqFrom> <INPUT type=hidden value=Any name=reqTo> </FORM> <DIV><FONT id=notDisplay color=black>Page Loaded In 15 miliseconds </FONT></DIV><!-- End Content --></TD></TR></TBODY></TABLE></TD> <TD rowSpan=2 width=10><IMG src="https://unitreg.utar.edu.my/portal/courseRegStu/images/clear.gif" width=10></TD></TR></TBODY></TABLE></TD></TR><!--<script src="https://unitreg.utar.edu.my/portal/publicFunction.js"></script>--> <TR id=notDisplay> <TD class=footerFont vAlign=top> <HR align=center SIZE=1 width="99%" noShade> Copyright © 2017, <NAME>. All rights reserved. <BR>Info Optimized for Internet Explorer 5.0 and above. Best viewed with 1024 x 768 pixels.<BR>Terms of Usage </TD></TR></TBODY></TABLE></BODY>
OpenDroneMap/presentations
161621311
Title: Giving a Presentation on what we are doing with Drones and ODM Question: username_0: Howdy, I have had an abstract picked up for a conference and I am going to be talking about how we use ODM, MeshLab, etc. Wondered if there is any specific way I should reference ODM and its creators, and if there is anything I should be saying other than how freaking awesome it is 👍 Thanks in advance, Leith Answers: username_1: @username_2 can answer that, he's given a few talks before :smile: What conference, if I may ask? username_2: Hi @leithhawkins. The main thing is to point folks to all the resources for the project. I usually close with the following: https://github.com/OpenDroneMap/OpenDroneMap https://github.com/OpenDroneMap/OpenDroneMap/wiki https://lists.osgeo.org/cgi-bin/mailman/listinfo/opendronemap-users https://lists.osgeo.org/cgi-bin/mailman/listinfo/opendronemap-dev https://gitter.im/OpenDroneMap/OpenDroneMap https://github.com/OpenDroneMap/OpenDroneMap/wiki/Roadmap username_0: I am conscious of giving projects like this the credit they deserve. A small number of people put in significant work so that the larger community benefits (I am one of the people riding on the coattails of this work). @username_1 it is our New South Wales State Government Cluster conference about enabling spatial technology across our 800+ GIS users. I'll make sure to put any links up here. @username_2 thanks for the links, I'll be sure to include them. Status: Issue closed
ShyHi/ShyHi_Android
62584261
Title: Create Convo Class Question: username_0: Convos will have the following attributes: Unique Identifier - Int Users - Array[Int] Foreign Keyed to Shys Thread - Array of messages. Each Message will have - Message content - could be text, image, etc. Only text will be implemented to start - Sender - UID of sender - Timestamp - Read Receipt - likely not a first cycle feature Convos will have the following methods: Create a new convo - This will take in a user ID, find a user nearby that the user is not already chatting with and create a new chat. Delete Convo - Takes in a convo ID and removes convo entirely for both users. Update Convo - Takes in a message, 'sends' and updates the conversation on both clients and the server
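A hedged sketch of the data model described above, written as a JavaScript object literal purely for illustration (the app itself is Android, and every field name here is an assumption drawn from the attribute list):
```js
// Hypothetical Convo shape; text-only messages in the first cycle.
const convo = {
  id: 42,          // unique identifier (Int)
  users: [7, 13],  // Array[Int], foreign-keyed to Shys
  thread: [
    {
      content: 'hello',       // only text implemented to start
      sender: 7,              // UID of the sender
      timestamp: 1428000000,  // e.g. epoch seconds
      // read receipt intentionally omitted: not a first-cycle feature
    },
  ],
};
```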
hyb1996-guest/AutoJsIssueReport
234325635
Title: java.lang.RuntimeException: Unable to resume activity {com.stardust.scriptdroid/com.stardust.scriptdroid.ui.edit.EditActivity}: java.lang.IllegalArgumentException Question: username_0: Description: --- java.lang.RuntimeException: Unable to resume activity {com.stardust.scriptdroid/com.stardust.scriptdroid.ui.edit.EditActivity}: java.lang.IllegalArgumentException at android.app.ActivityThread.performResumeActivity(ActivityThread.java:3832) at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3872) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1700) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:154) at android.app.ActivityThread.main(ActivityThread.java:6688) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1468) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1358) Caused by: java.lang.IllegalArgumentException at android.os.Parcel.readException(Parcel.java:1697) at android.os.Parcel.readException(Parcel.java:1646) at android.app.ActivityManagerProxy.isTopOfTask(ActivityManagerNative.java:6600) at android.app.Activity.isTopOfTask(Activity.java:6142) at android.app.Activity.onResume(Activity.java:1331) at android.support.v4.app.FragmentActivity.onResume(FragmentActivity.java:485) at android.app.Instrumentation.callActivityOnResume(Instrumentation.java:1277) at android.app.Activity.performResume(Activity.java:7058) at android.app.ActivityThread.performResumeActivity(ActivityThread.java:3809) ... 8 more Device info: --- <table> <tr><td>App version</td><td>2.0.12 Beta</td></tr> <tr><td>App version code</td><td>137</td></tr> <tr><td>Android build version</td><td>G9350ZCU2BQC1</td></tr> <tr><td>Android release version</td><td>7.0</td></tr> <tr><td>Android SDK version</td><td>24</td></tr> <tr><td>Android build ID</td><td>G9350ZCU2BQC1.AURORAROM.V1.0</td></tr> <tr><td>Device brand</td><td>samsung</td></tr> <tr><td>Device manufacturer</td><td>samsung</td></tr> <tr><td>Device name</td><td>hero2qltechn</td></tr> <tr><td>Device model</td><td>SM-G9350</td></tr> <tr><td>Device product name</td><td>hero2qltezc</td></tr> <tr><td>Device hardware name</td><td>qcom</td></tr> <tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr> </table>
EDCD/EDDI
695444176
Title: Using EDDBID for station and star system lookups on EDDB is no longer required Question: username_0: ## EDDI version in which issue is found 3.7.0 ## VoiceAttack version in which issue is found (as applicable) N/A (there are commands affected but the VoiceAttack version is irrelevant) ## Investigation As per https://github.com/EDCD/EDMarketConnector/issues/512, new endpoints have been added which allow looking up a star system or station by SystemAddress or MarketId rather than by EDDBID.<issue_closed> Status: Issue closed
dandavison/delta
627671920
Title: 🐛 Crash searching a long diff Question: username_0: `delta --version`: 0.1.1 `git config core.pager`: delta --dark I have a 25MB diff that unfortunately I can't share. It's a mixture of various code, XML, PDFs, etc. I'm sorry if that leaves this a little lean on detail. My main lead is that the diff has a lot of inexact renames, but that could be a red herring. Two bad behaviors happen with delta. 1. I can `git show` and hit `G` to jump to the end, but I only get to line 184,066, with the rest of the diff cut off. The same with `git show | less`: I get to 547,184. 2. If I kick off a search that doesn't match instead, I get an out-of-bounds error with the following trace:
```
thread 'main' panicked at 'byte index 2 is out of bounds of `&`', src/libcore/str/mod.rs:2131:9
stack backtrace:
   0: 0x10732cac5 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h30b85a1761190f28
   1: 0x1073431ce - core::fmt::write::h5b0722e6ee659e34
   2: 0x10732bce9 - std::io::Write::write_fmt::hf468289e762fa2f9
   3: 0x10731d86a - std::panicking::default_hook::{{closure}}::h836d46ca6b872224
   4: 0x10731d58f - std::panicking::default_hook::h2afcf1998cd93f8c
   5: 0x10731de4d - std::panicking::rust_panic_with_hook::he4f5d8b43533efd5
   6: 0x10731da12 - rust_begin_unwind
   7: 0x10734f32f - core::panicking::panic_fmt::h3559129da805eab4
   8: 0x10734f0ae - core::str::slice_error_fail::hd6f6d2e5a693e978
   9: 0x1071d84c4 - core::str::traits::<impl core::slice::SliceIndex<str> for core::ops::range::Range<usize>>::index::{{closure}}::h24851af3c1dab157
  10: 0x1071da39f - delta::parse::get_file_extension_from_diff_line::h02d721288dfd8178
  11: 0x1071cd157 - delta::delta::delta::hef7771a97efa631e
  12: 0x1071d67fe - delta::main::hfdd82967b734b176
  13: 0x1071d7505 - std::rt::lang_start::{{closure}}::h5d2da361401c3ca8
  14: 0x10731d928 - std::panicking::try::do_call::h29bd6a8b4eb65398
  15: 0x10733025b - __rust_maybe_catch_panic
  16: 0x107322389 - std::rt::lang_start_internal::h1cbb853ed77189ce
  17: 0x1071d74d9 - main
```
The same with `git show | less` spits out the following (on stderr, `CTRL-L` repaints without the lines) but then `G` does get to the end of the diff.
```
warning: inexact rename detection was skipped due to too many files.
warning: you may want to set your diff.renameLimit variable to at least 4020 and retry the command.
```
Answers: username_1: Hi @username_0, thanks for reporting and no worries, happy to try to get to the bottom of it without the original problematic diff. Are you able to build delta from the git repo and try the version in `master`? The function that's crashing for you has changed so it would be great if we could debug this against master. https://github.com/username_1/delta#build-delta-from-source username_0: It looks like it's fixed on master! Thanks! The whole diff paged. I'm impressed with how simple the whole installation of rust and the build was. Just another reminder I should give rust a try. A huge contrast to a ruby environment I had to set up the other day. Of course, I also realized that the two behaviors I was seeing were probably an artifact of `G` causing a repaint, but a failed search causing a read without a repaint of the screen, letting stderr show. username_1: Great. I'm aiming to get a new release out soon, but a lot of bug fixes, features and refactoring got bundled together this time so it's taking a while! Status: Issue closed
MicrosoftDocs/windows-itpro-docs
347750111
Title: Data Storage Location Question: username_0: This article says you can't change the data storage location. The portal (securitycenter.windows.com/preferences2/general) says "This option cannot be changed without completely offboarding from Windows Defender ATP and completing a new enrollment process". Which one is correct? What is the process to move from Europe to the US?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8444bd2d-358c-0f37-5e23-1a9b6165dfa2
* Version Independent ID: 870991ef-02ac-d15a-c287-b7801c793781
* Content: [Windows Defender ATP data storage and privacy](https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-atp/data-storage-privacy-windows-defender-advanced-threat-protection#do-i-have-the-flexibility-to-select-where-to-store-my-data)
* Content Source: [windows/security/threat-protection/windows-defender-atp/data-storage-privacy-windows-defender-advanced-threat-protection.md](https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/windows-defender-atp/data-storage-privacy-windows-defender-advanced-threat-protection.md)
* Product: **w10**
* GitHub Login: @mjcaparas
* Microsoft Alias: **macapara**
Answers: username_1: I too am having the same issue. I haven't uploaded anything to Defender Security Center but there was already an account provisioned to Europe. I need to switch that to the US before I proceed. username_0: I confirmed this was the process with support as well. Thanks for the update. username_2: Is the only way to resolve this by opening a support ticket? I am not sure if we have any credits left. We just want to start over but got stuck: if you start using Azure Security Center first, it puts everything in the EU region automatically. This was done in an MS demo by our MS account team without evaluating the implications; they just clicked through the defaults on our account. Now WDATP is tied to the region. I don't mind starting over but I feel trapped. username_0: Yes, support is required. You shouldn't need to pay for anything though. Just log the case in the Office 365 portal. This is a product defect anyway, which is never billable.
mondediefr/docker-flarum
223900222
Title: Error 500 Installation Flarum Question: username_0: I get a 500 error when I try to install Flarum:
![image](https://cloud.githubusercontent.com/assets/5891788/25345251/e07839d8-2914-11e7-94b3-3de22664e772.png)
My conf file:
```
flarum:
  image: mondedie/docker-flarum:0.1.0-beta.6-stable
  container_name: flarum
  links:
    - mariadb:mariadb
  environment:
    - FORUM_URL=https://discuss.domain.fr
    - DB_PASS=<PASSWORD>
  volumes:
    - /mnt/docker/flarum/assets:/flarum/app/assets
    - /mnt/docker/flarum/extensions:/flarum/app/extensions

mariadb:
  image: mariadb:10.1
  container_name: mariadb
  volumes:
    - /mnt/docker/mysql/db:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=<PASSWORD>
    - MYSQL_DATABASE=flarum
    - MYSQL_USER=flarum
    - MYSQL_PASSWORD=<PASSWORD>
```
docker ps
```
root@vps284653:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5075fab18ed5 mondedie/docker-flarum:0.1.0-beta.6-stable "run.sh" 7 minutes ago Up 7 minutes 8888/tcp flarum
8f68c0f40ed9 mariadb:10.1 "docker-entrypoint.sh" 7 minutes ago Up 7 minutes 3306/tcp mariadb
```
Nginx
```
server {
  listen 80;
  listen [::]:80;
  server_name discuss.domain.fr; # <-- change this
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name discuss.domain.fr; # <-- change this

  ssl on;
  ssl_certificate /etc/nginx/ssl/public.pem;
  ssl_certificate_key /etc/nginx/ssl/private.pem;

  location / {
    proxy_pass http://172.17.0.3:8888;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
```
Any ideas? Answers: username_1: Hi ^^ Do you still have your issue? Your password must contain a minimum of 8 characters, otherwise it produces a 500 error. username_1: @username_0 any news? username_2: same as #4 Status: Issue closed username_3: Same problem... I filled everything in correctly and the user password is more than 8 characters long. username_2: The MariaDB version must be 10.1; be sure to specify the **.1**. Flarum must have an incompatibility with the new 10.3, which is in alpha, mind you. Same for 10.2, it doesn't work either. That's something I will have to report to the Flarum developers. username_3: I'll test that and let you know! I find it dangerous that the `10` tag of `mariadb` points to alpha versions... I innocently assumed it was always the latest stable 10.x. username_3: Indeed, it works. Sorry for the trouble; the instructions were not at fault, my modification was. username_4: The MariaDB version problem is known: flarum/core#1211. However, to be able to debug, you need to be able to display the content of the red message on the site. That is possible by commenting out `error_page 500 /500.html;` in the nginx conf. Indeed, nginx returns nothing in the docker logs, because it is supposed to send the error directly to the web page, namely in the red _Something went wrong_ box. username_5: I used @username_1's config file, but it does not work for me. 500 code here.
```
docker -v
Docker version 17.03.1-ce, build c6d412e
```
```
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2165ad6850a7 mondedie/docker-flarum:0.1.0-beta.7-stable "run.sh" 14 minutes ago Up 14 minutes 0.0.0.0:80->8888/tcp flarum
a94e633ec6c5 mariadb:10.1 "docker-entrypoint..." 21 minutes ago Up 14 minutes 3306/tcp mariadb
```
username_2: @username_5 Switch `DEBUG` to true to see the error message. And please, use the `0.1.0-beta.7.1-stable`.
https://github.com/mondediefr/docker-flarum/releases/tag/0.1.0-beta.7.1 username_2: No information in `docker exec -ti flarum cat /tmp/ngx_error.log`. username_5: Hello, @username_1 @username_2, I updated the config file like this:
```
flarum:
  image: mondedie/docker-flarum:0.1.0-beta.7.1-stable
  container_name: flarum
  links:
    - mariadb:mariadb
  ports:
    - 80:8888
  environment:
    - DEBUG=true
    - FORUM_URL=http://demoflarum.com
    - DB_PASS=<PASSWORD>
  volumes:
    - ./flarum/assets:/flarum/app/assets
    - ./flarum/extensions:/flarum/app/extensions

mariadb:
  image: mariadb:10.1
  container_name: mariadb
  volumes:
    - ./mariadb:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=<PASSWORD>
    - MYSQL_DATABASE=flarum
    - MYSQL_USER=flarum
    - MYSQL_PASSWORD=<PASSWORD>
```
But there is no log information in any of these: `docker logs flarum`, `docker exec -ti flarum tail -f /tmp/ngx_error.log`, `docker exec -ti flarum tail -f /tmp/php_error.log`. Thank you very much. username_2: And after "Something went wrong", do you have any error message? username_5: @username_2 Cool, I saw something in the browser: `Something went wrong: SQLSTATE[HY000] [2002] Connection refused`. username_2: Create a new volume for MariaDB: `./mariadb-flarum:/var/lib/mysql` and retry. Check the mariadb logs too. username_5: @username_2 Does this mean it is unable to connect to the mariadb port, or something else? username_2: MariaDB is running on port 3306? username_5: @username_2 Yes, telnet works well. Should I configure the mysql auth IP, or something else? username_2: No, the mariadb container works out of the box. username_5: @username_2 But I found a problem: I can't use mariadb from the host, only from the docker container.
```
$ mysql -u flarum -p
Enter password:
ERROR 1045 (28000): Access denied for user 'flarum'@'localhost' (using password: YES)
```
```
$ docker exec -ti mariadb mysql -u flarum -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.1.30-MariaDB-1~jessie mariadb.org binary distribution

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
```
username_5: But the host config is `%`.
```
$ docker exec -ti mariadb mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.1.30-MariaDB-1~jessie mariadb.org binary distribution

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [mysql]> select host, user from user;
+-----------+--------+
| host      | user   |
+-----------+--------+
| %         | flarum |
| %         | root   |
| localhost | root   |
+-----------+--------+
3 rows in set (0.00 sec)

MariaDB [mysql]>
```
username_2:
```
mysql -u flarum -p
docker exec -ti mariadb mysql -u root -p
```
These two commands do not query the same database: the first queries your host's database, and I think you do not have a flarum user there. The second queries the mariadb database (inside docker), which does contain the flarum user. You're confusing this with the [remote access host setting in the user database](https://mariadb.com/kb/en/library/configuring-mariadb-for-remote-client-access/#granting-user-connections-from-remote-hosts). The mariadb container works out of the box. Use docker-compose to set up this database and link it to the flarum container. I do not see where you have a problem. Status: Issue closed
voxpupuli/puppet-splunk
864646930
Title: Corrupt MSI installer Question: username_0: Pulling the windows MSI installer for the forwarder from a puppet module as per the documentation... class { '::splunk::params': version => $version, build => $build, src_root => $src_root, manage_net_tools => false, } This ends up creating c:\programdata\staging\splunk\universalforwarder-x-x.msi However, the file is slightly larger than what's in the puppet module repo and corrupted, meaning it can't be run to install the forwarder. Confirmed the version checked out onto the puppet master is the same as what went into the initial repo; it can be downloaded from gitlab and runs fine. I've also tried doing a standard puppet file copy of the same source, which then downloads with the correct size and can be executed. Not sure if this is related to this module or the archive module. This all works for linux; not sure why the msi ends up corrupted. Answers: username_0: Using modules splunk 8.0.0 & archive 5.0.0 username_0: Issue in puppet-archive, pull request raised to fix username_0: Closing as patch got merged in puppet-archive (not yet released)