emory-libraries/eulfedora
60013532
Title: Missing Content-Disposition in raw_datastream view

Question: username_0: When I download a file directly from Fedora, I get the Content-Disposition header set. When I download it through the raw_datastream view, I do not see this header. The header is useful when you want to make sure a datastream is downloaded with a specific filename, as passed through from Fedora. This made a difference when returning Digital Negative (.dng) files, which end up with a mimetype of image/tiff: I received a file named download.tif instead of ORIGINAL_NAME.dng. I'm willing to work on this but not sure where to start.

Answers:

username_1: Thanks for reporting the issue. Somehow this use case hasn't come up for us, and I think we're generally setting download filenames in the Django views that call raw_datastream. The header will have to be pulled from the Fedora response and added to the HttpResponse that's created here: https://github.com/emory-libraries/eulfedora/blob/master/eulfedora/views.py#L213 .. but getting the header from the Fedora response might be a little tricky, since the code currently uses the DatastreamObject get_chunked_content method to avoid reading a whole datastream into memory at once. I presume the header you want is available on the response object returned by the API, but I don't see an obvious way to get it, since get_chunked_content is a generator. The best I can currently think of is to store relevant header information somewhere on the DatastreamObject so it can be picked up in the view. Hopefully we could store it in a meaningful way rather than just "last datastream response headers". Thoughts?

username_1: It occurs to me that newer versions of Fedora than what we're currently running probably also provide additional headers that the raw_datastream view is currently calculating, and if we could just pass those through, that should be more efficient and accurate than computing them in eulfedora.

username_1: It just occurred to me that there is an easier way to fix this: rather than the view calling the DatastreamObject get_chunked_content method, it should just use the API getDatastreamDissemination method directly, which returns a response object that has all the headers as well as access to the content. I think that would actually be pretty straightforward. @username_0, do you want to try fixing this?

username_0: I will take a look at this on Monday.

username_1: @username_0 Is this still an issue for you? And are you on Fedora 3.7 or later?

Status: Issue closed
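The header pass-through that username_1 proposes can be sketched roughly as below. This is a hypothetical helper, not actual eulfedora code; the header list and the plain-dict shape of `fedora_headers` are assumptions for illustration.

```python
# Hypothetical sketch of the pass-through idea discussed above -- NOT the
# eulfedora implementation. It assumes we have the raw response headers
# from Fedora's getDatastreamDissemination call available as a dict.

# Headers worth copying verbatim from the Fedora response onto the
# outgoing HTTP response instead of recomputing them in the view.
PASS_THROUGH_HEADERS = (
    "Content-Disposition",
    "Content-Type",
    "Content-Length",
    "ETag",
)

def passthrough_headers(fedora_headers):
    """Return the subset of Fedora response headers to copy verbatim."""
    # HTTP header names are case-insensitive, so normalize keys first.
    lower = {k.lower(): v for k, v in fedora_headers.items()}
    return {
        name: lower[name.lower()]
        for name in PASS_THROUGH_HEADERS
        if name.lower() in lower
    }
```

In a Django view, these headers would then be set on the streaming response built from the dissemination content, so a .dng datastream keeps its original filename instead of becoming download.tif.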
vpsfreecz/vpsadminos
372350689
Title: docker mysql reports "mbind: Operation not permitted"

Question: username_0: Hi, I'm testing Docker on vpsadminos and a fairly basic container (mysql) reports: `mbind: Operation not permitted`, which is similar to the "permission denied" mentioned in the KB as something worth reporting. If it's unrelated to vpsadminos, I'll keep investigating on my end.

Command to reproduce: `docker run --env MYSQL_ALLOW_EMPTY_PASSWORD=1 --network bridge-coi mysql:latest`

Otherwise Docker runs quite well so far (better than on OpenVZ); it's just a shame about that VFS ;-(

Status: Issue closed
Wynncraft/Issues
123318636
Title: Bug with VIP+ and pets.

Question: username_0: Today I bought VIP+, and whenever I right-click with my pet out, I keep on riding it, forcing me to get rid of my pet whenever I want to fight. I would suggest adding a command to toggle riding your pet, because it gets quite annoying.

Answers: username_1: This isn't really a bug and more of a suggestion. I suggest closing this issue and posting here instead: https://forums.wynncraft.com/forums/general-suggestions.69/ Thanks!

Status: Issue closed
rust-lang/rust-by-example
797081750
Title: squash `gh-pages` branch

Question: username_0: Currently, `rust-by-example` takes up over 146 MB just for the git history. It would be nice to reduce this so contributors to rust-lang/rust don't have to download as much data (https://github.com/rust-lang/rust/issues/76653#issuecomment-691696545). You can squash the branch with (https://stackoverflow.com/questions/1657017/how-to-squash-all-git-commits-into-one/1661283#1661283)

```sh
git checkout gh-pages
git update-ref -d refs/heads/gh-pages
git commit -m "Redirect to doc.rust-lang.org"
```

then compact the repository with `git gc --aggressive`. I verified locally that this reduces the size of .git to 84 MB. I'm not sure how to get GitHub to run gc; I think you may have to wait for them to do it automatically (https://stackoverflow.com/a/9138899/7669110).

Answers:

username_0: @marioidival I saw you added a +1 - does that mean you squashed the branch? Can I make it easier somehow?

username_0: bump - who should I talk to about this?

username_0: Actually, it looks like the `gh-pages` branch has been unused for a while; the last pages deploy was in 2018: https://github.com/rust-lang/rust-by-example/deployments/activity_log?environment=github-pages. I think just deleting the branch altogether should be fine.

username_1: Pushed, I believe, the squashed version of gh-pages. Seems to have cut the size down to ~4.7MB on a fresh clone.

Status: Issue closed

username_0: Thank you!

username_0: That worked like a charm, thank you so much!

```
$ x check
Updating only changed submodules
Updating submodule src/doc/rust-by-example
Submodule 'src/doc/rust-by-example' (https://github.com/rust-lang/rust-by-example.git) registered for path 'src/doc/rust-by-example'
Cloning into '/home/joshua/src/rust/rustc2/src/doc/rust-by-example'...
remote: Enumerating objects: 12809, done.
remote: Counting objects: 100% (119/119), done.
remote: Compressing objects: 100% (110/110), done.
remote: Total 12809 (delta 59), reused 13 (delta 8), pack-reused 12690
Receiving objects: 100% (12809/12809), 3.06 MiB | 2.33 MiB/s, done.
Resolving deltas: 100% (6985/6985), done.
Submodule path 'src/doc/rust-by-example': checked out 'c80f0b09fc15b9251825343be910c08531938ab2'
Submodules updated in 1.98 seconds
```
algolia/algoliasearch-helper-js
205435098
Title: Provide a way to determine if there is a pending search

Question: username_0: Determining whether there is a pending search can be useful in the case of a bad network, where we want to let the user know that their request is still being processed. Right now this requires manually keeping track of the search and result events and maintaining a counter. This can go wrong if any search is dropped because of response ordering. This could be done internally, and we could provide two new entries in the API:

- `isSearching()` on the helper, which returns `true` if the helper has any pending search (taking discarded searches into account as well)
- a `noMoreSearchPending` event to indicate that all pending searches have been completed.

Answers: username_0: Feedback from the rest of the team is that the searchOnce and searchForFacetValues methods should be taken into account. :)

Status: Issue closed
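The counter bookkeeping the issue asks the helper to do internally can be sketched language-agnostically. Below is a rough Python sketch; the `PendingSearchTracker` class and its method names are hypothetical illustrations of the pattern, not part of the algoliasearch-helper (JavaScript) API.

```python
class PendingSearchTracker:
    """Rough sketch of the pending-search bookkeeping described above.

    The class and method names are hypothetical; the real helper is a
    JavaScript library, and this only illustrates the counter pattern.
    """

    def __init__(self, on_idle=None):
        self._pending = 0          # searches started but not yet settled
        self._on_idle = on_idle    # called when no search remains pending

    def search_started(self):
        self._pending += 1

    def search_settled(self):
        # Count results, errors, and discarded (out-of-order) responses
        # alike, so dropped searches don't leave the counter stuck.
        self._pending = max(0, self._pending - 1)
        if self._pending == 0 and self._on_idle:
            self._on_idle()  # analogous to a noMoreSearchPending event

    def is_searching(self):
        return self._pending > 0
```

The key point the issue makes is that the decrement must also fire for discarded searches; otherwise `is_searching()` can stay stuck at `True` forever.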
bobbylight/RSyntaxTextArea
267431576
Title: Scrollbars Don't Work (RTextScrollPane)

Question: username_0: I'm not able to get the RTextScrollPane to work on even a simple example of RSyntaxTextArea. I get no scrollbars when the text is added, something like 50% of the time. The text is clearly further down in the RTextScrollPane than the visible portion of the document. I've tried adding the text with setText and with setDocument, and neither seems to work.

```
this.rstextpane = new RSTextPane_000(this);
this.rstextpane.setSyntaxEditingStyle(SyntaxConstants.SYNTAX_STYLE_JAVA);
this.rstextpane.setCodeFoldingEnabled(true);
this.rstextpane.setBackground(new Color(240,240,240));
this.rstextpane.setCurrentLineHighlightColor(new Color(180,180,180));
this.rstextpane.setAutoscrolls(true);
//
this.rstextscrollpane = new RTextScrollPane();
this.rstextscrollpane.setViewportView(this.rstextpane);
this.rstextscrollpane.setHorizontalScrollBarPolicy(RTextScrollPane.HORIZONTAL_SCROLLBAR_ALWAYS);
this.rstextscrollpane.setVerticalScrollBarPolicy(RTextScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
```

The (barely) customized subclass has:

```
package org.widgets;

import apml.system.Apmlbasesystem;
import apml.system.bodi.Bodi;
import org.events.LoadApmlDocumentEvent;
import org.fife.ui.rsyntaxtextarea.RSyntaxDocument;
import org.fife.ui.rsyntaxtextarea.RSyntaxTextArea;
import javax.swing.*;
import javax.swing.text.DefaultStyledDocument;
import javax.swing.text.Document;
import javax.swing.text.SimpleAttributeSet;
import java.awt.*;
import java.io.BufferedReader;
import java.io.FileReader;

public class RSTextPane_000 extends RSyntaxTextArea {
    public String bodi = "//ui/editor/rstextpane_000";
    public Component parent;
    //
    public RSTextPane_000(Component parent) {
        this.parent = parent;
        try {
[Truncated]
            doc = new RSyntaxDocument(SYNTAX_STYLE_XML);
            doc.insertString(0, text.toString(), new SimpleAttributeSet());
            //
            this.setDocument(doc);
        } catch(Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public Dimension getPreferredSize() {
        return new Dimension(this.parent.getWidth(), this.parent.getHeight());
    }
}
```

Answers:

username_0: Could we close this up soon? Thx

username_0: I *think* I have this solved. Could you carefully *double check* whether RTextScrollPane *explicitly requires* an overridden getPreferredSize? It appears this is the issue with Swing repainting. Thanks.

username_0: The issue I found (quickly looking back on it) was whether a parent class was or was not supposed to override getPreferredSize. My current code (as reference) has the JPanel parent with an overridden implementation.

username_1: There shouldn't be a need to override `getPreferredSize()`, or call `setPreferredSize()`, on any RSTA-related component. In general, overriding `getPreferredSize()` in Swing is a bad idea; it's a code smell and can lead to weird issues, and perhaps that's what you're seeing here. The easiest way to set a standard size for your text area is to use the constructor that takes a row count and column count, then put that directly into an `RTextScrollPane`, like the example in [README.md](https://github.com/username_1/RSyntaxTextArea/blob/master/README.md). Hope this helps!

username_1: Closing as this is an old question, and it looks to be a layout/application issue and not an issue with this library.

Status: Issue closed
eBay/skin
465965022
Title: Button: separate expand-button and cta-button into own modules

Question: username_0: For pages that only need an expand-button or cta-button, all of the extra button-related styles add up to a lot of additional overhead. Also, they have enough peculiarities of their own to warrant a separate module, and this will help make the regular button source code more manageable. We can leverage mixins for shared styles across button types. Going forward, as we explore web components, keeping the payload as small as possible per custom tag makes sense.

Answers: username_0: Bringing this forward to 8.0.0.

Status: Issue closed
gradle/gradle
356838317
Title: Checkstyle plugin with Task Configuration Avoidance problem

Question: username_0: I am not sure if it is a bug or if I have to configure it differently ...

Status: Issue closed

Answers:

username_1: I can understand the confusion, and we hope in the future to make everything more discoverable. To start with, it's the extension that is named `checkstyle`, so you will not find any tasks with that name. The task names are [derived from the source set name](https://github.com/gradle/gradle/blob/master/subprojects/code-quality/src/main/groovy/org/gradle/api/plugins/quality/internal/AbstractCodeQualityPlugin.java#L172). [`Checkstyle#setConfigDir` takes a `Provider<File>`](https://github.com/gradle/gradle/blob/master/subprojects/code-quality/src/main/groovy/org/gradle/api/plugins/quality/Checkstyle.java#L268-L271) and `project.file` returns a `File`. You can either change to [`Checkstyle#setConfigFile` using `configFile`](https://github.com/gradle/gradle/blob/master/subprojects/code-quality/src/main/groovy/org/gradle/api/plugins/quality/Checkstyle.java#L81-L83), if that was the original intent, or use [`ProjectLayout` methods](https://github.com/gradle/gradle/blob/master/subprojects/core-api/src/main/java/org/gradle/api/file/ProjectLayout.java) to get a `Provider<File>` to pass to `Checkstyle#setConfigDir`. I hope this answers your question. The Provider API is still under review for improvements before being made stable; if you have a suggestion for improvement, please search for an issue on this repository or open a new one. Keep in mind this repository is for feature and bug requests; all questions should be directed to the [user forum](https://discuss.gradle.org/).

username_0: A working version using the _ProjectLayout_ methods looks just awful:

```
tasks.withType(Checkstyle).configureEach {
    configFile = project.file('gradle/conf/checkstyle.xml')
    configDir = project.layout.directoryProperty(project.provider({ project.layout.projectDirectory }))
        .dir('gradle/conf/')
        .map({ it.getAsFile() } as Transformer<Directory, File>)
}
```

This one works too:

```
tasks.withType(Checkstyle).configureEach {
    configFile = project.file('gradle/conf/checkstyle.xml')
    configDir = project.provider({ project.file('gradle/conf/') })
}
```

The _gradle/conf_ directory contains a _checkstyle-suppressions.xml_, and _checkstyle.xml_ contains the following snippet:

```
<module name="SuppressionFilter">
    <property name="file" value="${config_loc}/checkstyle-suppressions.xml"/>
</module>
```
wavded/ogr2ogr
344623439
Title: ENAMETOOLONG - When converting large GeoJSON to Shapefile

Question: username_0: Thanks for this awesome package. It works very well for the things that I need it to do, except I have run into an issue, and am hopeful that I can get it resolved. Running on Windows, GDAL 2.1.3, released 2017/20/01. This works no problem with all my GeoJSON:

```
var newShape = ogr2ogr(currjson).format('ESRI Shapefile').skipfailures().stream();
newShape.pipe(fs.createWriteStream(filename + '.zip'));
```

However, there are a couple of instances lately where it is failing and I am getting an ENAMETOOLONG error. At first I thought that it was something in the GeoJSON, so I cleaned a few files and it worked no problem; however, today no cleaning is possible. (Other files had a ridiculous number of properties that were not important, whereas today's file has over 2250 coordinate pairs!) I've spent the last few hours looking for solutions and can't seem to find one that works. I've updated my code as follows:

```
let newShape = ogr2ogr(currjson)
    .format('ESRI Shapefile')
    .skipfailures()
    .stream();
let outputStream = fs.createWriteStream(filename + '.zip');
newShape.pipe(outputStream)
newShape.on('error', function(){
    console.log('Well thats messed up');
})
```

The on-error handler doesn't fire, and I still get the ENAMETOOLONG error, which is coming from the .stream() portion (as far as I can tell). Any pointers would be much appreciated.

Answers:

username_1: I've never encountered that error; the only lead I've found is that the file name created is literally too long for the underlying OS. Is there any logic in your application that could be naming things too long? Also, if you have some sample code that can reproduce it, that would help.

username_0: Well, that's the thing. It works for _most_ of the data. I have the GeoJSON stored in the DB, and I am just passing the data to ogr2ogr to convert to KML and Shape. If the GeoJSON package is large (over 1200 or so coordinate pairs), the temp file name (or so I've come to think) is too long. It looks like a temp file is created when making the shapes or KMLs, and that its naming/location (if the GeoJSON is large) causes this error. Is there any way that I can pass a temp directory to the code? It looks like it creates the folder and destroys it, and that it is appended to the location of the GDAL code. If not, I guess I will have to move the GDAL stuff to the root level so that temp file names can be longer.

username_1: Closing due to age; reopen if the issue persists.

Status: Issue closed
OpenGovLD/specs
53007491
Title: `Content-MD5` message digest Question: username_0: http://tools.ietf.org/html/rfc2616#page-121 Originally suggested in https://github.com/OParl/specs/issues/31#issuecomment-39186002 Answers: username_0: This might be an alternative: Linked Data Signatures https://web-payments.org/specs/source/ld-signatures/
team-digital-couch/digital-couch
563564793
Title: database will no longer open in pg-web

Question: username_0: Heroku updated our credentials and the database will no longer open. We are waiting to see if there is an overnight update that fixes the issue; if not, we will have to create a new database.

Answers: username_1: Credentials have been updated and distributed to the team.

Status: Issue closed
kubernetes/kops
507897255
Title: addon yaml file saved to s3 as invalid yaml Question: username_0: Looking at previous versions (kops 1.14), there used to be 'null' written at the top of the file, which will pass the kubectl validation checking. That 'null' has been replaced with '{}' and now it fails to validate and the daemonset is never created.
HEADS-project/ApamaComponents
200173021
Title: How to select the kev-script with -Dnode.bootstrap?

Question: username_0: @username_1, according to https://github.com/dukeboard/kevoree/wiki/Kevoree-System-Properties the following should work, but I get the error below. Kevoree searches for src\main\kevs\main.kevs.

```
C:\Workspaces\HEADS ATC Use Case\kevoreeNewsAssetCep>mvn kev:run -Dnode.bootstrap=src/main/kevs/inject.kevs
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building News Asset CEP Kevoree Scripts to run ApamaComponents 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- org.kevoree.tools.mavenplugin:5.4.0-SNAPSHOT:run (default-cli) @ kevoreeNewsAssetCep ---
[INFO] Generating a Kevoree model by reflection...
[INFO] Model saved at target\classes\KEV-INF\kevlib.json
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.388 s
[INFO] Finished at: 2017-01-11T19:29:46+01:00
[INFO] Final Memory: 10M/307M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.kevoree.tools:org.kevoree.tools.mavenplugin:5.4.0-SNAPSHOT:run (default-cli) on project kevoreeNewsAssetCep: Unable to read KevScript file at src\main\kevs\main.kevs: C:\Workspaces\HEADS ATC Use Case\kevoreeNewsAssetCep\src\main\kevs\main.kevs -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
```

Answers:

username_1: @username_0 you are using the Maven plugin to start Kevoree, not the executable `.jar`. When using the Maven plugin, you should configure the path to the KevScript using the `pom.xml`. Because it is convenient to override that value from the command line, you can specify:

```xml
<configuration>
  <namespace>yournamespacehere</namespace>
  <kevscript>${env.KEVS}</kevscript>
</configuration>
```

By doing so, you can override the KevScript file using an environment variable:

```sh
KEVS=/some/path/to/your.kevs mvn kev:run
```

The nice thing is that, by default, if you do not have the `KEVS` variable defined, it falls back to the standard `src/main/kevs/main.kevs` file.

Status: Issue closed
BobbyWibowo/lolisafe
1033455849
Title: How to access the API

Question: username_0: Hi! I wanted to use https://zz.ht/ as a basis for uploading content, and to implement it in pyupload, but I don't seem to understand how the API responds. Is there any documentation about what kind of API response I get and what I would need to send to use it properly? Thanks in advance, and sorry for any inconvenience caused by this ticket.

Status: Issue closed
oppia/oppia
157583163
Title: We need a team page

Question: username_0: Oppia really needs a page showing the breadth and interesting backgrounds of our team.

Answers:

username_1: We can totally include that in the credits page, like having our names clickable so that a modal shows up with a couple of sentences about each of us. I'm totally up for it! :+1:

username_2: This will be incorporated in oppia/foundation 👍

Status: Issue closed
jelastic-jps/lets-encrypt
196257093
Title: Untrusted certificate

Question: username_0: I am using Jelastic 4.9, nginx 1.10.1. https://www.ssllabs.com/ssltest/analyze.html?d=env-3564948.mircloud.host Also in the nginx logs:

```
2016/12/18 04:05:06 [crit] 5829#0: *252 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 192.168.127.12, server: 0.0.0.0:443
```

![image](https://cloud.githubusercontent.com/assets/21218721/21291463/395549e6-c4e9-11e6-98ae-4faf86684e09.png)

Answers:

username_1: Hi @username_0, it was designed like that; please check the Readme on the main page: `Environment Name Domain - creates a dummy (invalid) SSL certificate for your environment internal URL (env_name.{hoster_domain}) to be used in testing`. A valid certificate can be issued for a Custom Domain only.

Status: Issue closed

username_0: @username_1 thanks! Although it is unexpected: I can generate a self-signed cert without any add-ons.

username_0: @username_1 oh, I got it, never mind :+1:

username_1: @username_0 OK, no problem, let me know if you have any other questions! Have a successful and enjoyable coding :)

username_2: I don't really understand why it's creating an invalid cert if you can create a valid one using Let's Encrypt itself (without the add-on), even for the environment name?
ueberdosis/tiptap
803365279
Title: RangeError: Duplicate use of selection JSON ID cell Question: username_0: vue-router.esm.js?8c4f:2208 RangeError: Duplicate use of selection JSON ID cell at Function.jsonID (index.es.js?5313:188) at eval (index.es.js?41dd:708) at Module../node_modules/tiptap-extensions/node_modules/prosemirror-tables/dist/index.es.js (chunk-vendors.js:7647) at __webpack_require__ (app.js:785) at fn (app.js:151) at eval (extensions.esm.js?f23d:1) at Module../node_modules/tiptap-extensions/dist/extensions.esm.js (chunk-vendors.js:7635) at __webpack_require__ (app.js:785) at fn (app.js:151) at eval (article-basic.vue?bda8:14) Answers: username_0: This bug is related to the extension of table username_1: Any solution? Status: Issue closed username_2: Thanks for reporting! See the related issue: #316 I’ll release a version with updated to dependencies to fix this.
SomeSN/url-shortener
320357111
Title: Client: Display Success Message

Question: username_0: @username_1 @username_2: Once Issue #15 is complete, update `main.js` (the client-side code) so that it displays a nice success message in the DOM.

Answers:

username_1: Done

username_2: @username_0 @ayyrickay we done.

username_2: @username_0 @ayyrickay @username_1 we done!

username_0: Completed in PR #29

Status: Issue closed
broadinstitute/gatk
288241115
Title: SV pipeline failure on CHM WGS1 with "two input alignments' overlap on read consumes completely one of them."

Question: username_0: I just ran the pipeline from master on the WGS1 BAM file with the following parameters:

```
/Users/username_0/Documents/code/gatk/gatk StructuralVariationDiscoveryPipelineSpark \
    -I hdfs://cw-test-m:8020/data/G94794.CHMI_CHMI3_WGS1.cram.bam \
    -O hdfs://cw-test-m:8020/output/variants/inv_del_ins.vcf \
    -R hdfs://cw-test-m:8020/reference/Homo_sapiens_assembly38.2bit \
    --aligner-index-image /mnt/1/reference/Homo_sapiens_assembly38.fasta.img \
    --exclusion-intervals hdfs://cw-test-m:8020/reference/Homo_sapiens_assembly38.kill.intervals \
    --kmers-to-ignore hdfs://cw-test-m:8020/reference/Homo_sapiens_assembly38.kill.kmers \
    --cross-contigs-to-ignore hdfs://cw-test-m:8020/reference/Homo_sapiens_assembly38.kill.alts \
    --breakpoint-intervals hdfs://cw-test-m:8020/output/intervals \
    --fastq-dir hdfs://cw-test-m:8020/output/fastq \
    --contig-sam-file hdfs://cw-test-m:8020/output/assemblies.sam \
    --target-link-file hdfs://cw-test-m:8020/output/target_links.bedpe \
    --exp-variants-out-dir hdfs://cw-test-m:8020/output/experimentalVariantInterpretations \
    -- --spark-runner GCS --cluster cw-test --num-executors 20 --driver-memory 30G --executor-memory 30G \
    --conf spark.yarn.executor.memoryOverhead=5000 --conf spark.network.timeout=600 \
    --conf spark.executor.heartbeatInterval=120 --conf spark.driver.userClassPathFirst=false
```

It failed near the end of the pipeline. Here is the tail of the log:

```
20:38:14.368 INFO  StructuralVariationDiscoveryPipelineSpark - Used 3549 evidence target links to annotate assembled breakpoints
20:38:14.462 INFO  StructuralVariationDiscoveryPipelineSpark - Called 662 imprecise deletion variants
20:38:14.492 INFO  StructuralVariationDiscoveryPipelineSpark - Discovered 7234 variants.
20:38:14.506 INFO  StructuralVariationDiscoveryPipelineSpark - INV: 184
20:38:14.506 INFO  StructuralVariationDiscoveryPipelineSpark - DEL: 4486
20:38:14.506 INFO  StructuralVariationDiscoveryPipelineSpark - DUP: 1170
20:38:14.506 INFO  StructuralVariationDiscoveryPipelineSpark - INS: 1394
18/01/12 20:38:16 WARN org.apache.spark.scheduler.TaskSetManager: Stage 17 contains a task of very large size (2518 KB). The maximum recommended task size is 100 KB.
18/01/12 20:38:22 WARN org.apache.spark.scheduler.TaskSetManager: Stage 18 contains a task of very large size (2307 KB). The maximum recommended task size is 100 KB.
20:38:27.207 INFO  StructuralVariationDiscoveryPipelineSpark - Processing 501267 raw alignments from 426041 contigs.
18/01/12 20:38:27 WARN org.apache.spark.scheduler.TaskSetManager: Stage 20 contains a task of very large size (2518 KB). The maximum recommended task size is 100 KB.
20:38:35.835 INFO  StructuralVariationDiscoveryPipelineSpark - Primitive filtering based purely on MQ left 339065 contigs.
20:38:37.378 INFO  StructuralVariationDiscoveryPipelineSpark - 17574 contigs with chimeric alignments potentially giving SV signals.
18/01/12 20:38:37 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 284.0 in stage 25.0 (TID 43189, cw-test-w-6.c.broad-dsde-methods.internal, executor 7): java.lang.IllegalArgumentException: two input alignments' overlap on read consumes completely one of them.
1_1097_chrUn_JTFH01000492v1_decoy:501-1597_+_1097M6H_60_1_1092_O
483_612_chr17:26962677-26962806_-_482S130M491S_60_-1_281_S
	at org.broadinstitute.hellbender.utils.Utils.validateArg(Utils.java:681)
	at org.broadinstitute.hellbender.tools.spark.sv.discovery.prototype.ContigAlignmentsModifier.removeOverlap(ContigAlignmentsModifier.java:36)
	at org.broadinstitute.hellbender.tools.spark.sv.discovery.prototype.AssemblyContigAlignmentSignatureClassifier.lambda$processContigsWithTwoAlignments$e28aa838$1(AssemblyContigAlignmentSignatureClassifier.java:114)
	at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
18/01/12 20:38:37 ERROR org.apache.spark.scheduler.TaskSetManager: Task 284 in stage 25.0 failed 4 times; aborting job
18/01/12 20:38:37 INFO org.spark_project.jetty.server.AbstractConnector: Stopped Spark@23007ed{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/01/12 20:38:37 ERROR org.apache.spark.scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorMetricsUpdate(50,WrappedArray())
18/01/12 20:38:37 ERROR org.apache.spark.scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorMetricsUpdate(52,WrappedArray())
18/01/12 20:38:37 ERROR org.apache.spark.scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorMetricsUpdate(34,WrappedArray())
18/01/12 20:38:37 ERROR org.apache.spark.scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorMetricsUpdate(60,WrappedArray())
20:38:37.897 INFO  StructuralVariationDiscoveryPipelineSpark - Shutting down engine
[January 12, 2018 8:38:37 PM UTC] org.broadinstitute.hellbender.tools.spark.sv.StructuralVariationDiscoveryPipelineSpark done. Elapsed time: 42.74 minutes.
Runtime.totalMemory()=16692805632
org.apache.spark.SparkException: Job aborted due to stage failure: Task 284 in stage 25.0 failed 4 times, most recent failure: Lost task 284.3 in stage 25.0 (TID 43224, cw-test-w-6.c.broad-dsde-methods.internal, executor 7): java.lang.IllegalArgumentException: two input alignments' overlap on read consumes completely one of them.
1_1097_chrUn_JTFH01000492v1_decoy:501-1597_+_1097M6H_60_1_1092_O
483_612_chr17:26962677-26962806_-_482S130M491S_60_-1_281_S
	at org.broadinstitute.hellbender.utils.Utils.validateArg(Utils.java:681)
	at org.broadinstitute.hellbender.tools.spark.sv.discovery.prototype.ContigAlignmentsModifier.removeOverlap(ContigAlignmentsModifier.java:36)
	at org.broadinstitute.hellbender.tools.spark.sv.discovery.prototype.AssemblyContigAlignmentSignatureClassifier.lambda$processContigsWithTwoAlignments$e28aa838$1(AssemblyContigAlignmentSignatureClassifier.java:114)
	at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
[Truncated]
	at org.broadinstitute.hellbender.tools.spark.sv.discovery.prototype.AssemblyContigAlignmentSignatureClassifier.lambda$processContigsWithTwoAlignments$e28aa838$1(AssemblyContigAlignmentSignatureClassifier.java:114)
	at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
ERROR: (gcloud.dataproc.jobs.submit.spark) Job [a85f28df-e6b8-4f64-bafb-c0f195dcd4d5] entered state [ERROR] while waiting for [DONE].
```

Answers:

username_0: @username_1 this looks like something in your code

username_1: Yes, I saw that too and have a fix planned.
Result: The "ins_del_inv.vcf" is still produced, but the experimental features are not available due to the exception.
Cause: a gapped alignment is split, and after the split one of the child alignments is contained by another gap-free alignment in their read span.
The particular contig's alignment ([ribbon](http://genomeribbon.com/?perma=dUYn5Xecbc)):

```
asm024831:tig00025	16	chr17	26962248	60	121M1D142M1I165M62I130M482S	*	0	0	CAAATGTGAACATACAAAAAACAAATCAGAATGTGCCATTCTGATTTAAACTGCTTATTAGTTAATACCCTCAAGATAACATCTGGGTTCTTAGCTGCACTGAGTTAAGCCTACTTACATCTTTTTTGTCTTCCACTGCACTTTTCCTATCACATTACACTCCAGCAATACCAAGCTGTGCCGCCTTCTACCCCATTTCCACTATTTTGCCCCCGCCGCCGAGGCTTTTTGCCCCCGCTGCCGCGGCTTTCTCCCACAGCGGCTTTTTGCCCCCGCAGATGCAGCTTTCTCCTACCTCGCCTTTTTGCCCCCGCCGCCGCGGCTTTCTGCCGCCGCGGCTTTTTCCCCCACCGCCGTGGCTATTTACGGCTTTTTTCCCCCTGCTGCCGCGGCTTTTTGCCCCCTGCCGCCGCGACCTTTTGCCCCCGCCACCGCAGCTTTTTGCCCGCGCCTCCACGGCTTTTTGTCCCCGCCGCCACGGCTTTCGCCGACGCGGCTTTTTACCCCCGCCGCCACGGGTTTTGGCCGCCGCGGCTTTTTGCCCCCTCCCCCACGGCTTTTGCCTATGCGGCTTCTTGCCCCCGCCGCCGCGGGCTTTTTCCCCCGGCCGCGGCTTTTTGCTGCCGCTGCCGTGGTTTTTTGTCCCCGCTGCCGAAGCTTTTTGCCACCGCCGCCACTGCTTTTTGCGACTTTTTGCTCCCCCCACCGGGGCTTTTTACCTCCGTCGCCGCGGCTTTTTCCCCCACCACCGCGGCTTTTTGCCCCCGCCGCCGCTGCTTTTTGCAGCTTTTTGCCCCTGCCACCGCGGCTTTATGTGGTTTTTTGCCCGCGCCGCCGCGGCTTTTTGCCCACGCCGCCGCGGCTTTTAGCGGCTTTTTGCTCCTGCCGCCGCGGCTTTTTGCTCCTGCCGCCGCGGCTTTTTGCCCCCCGCCTCAGCGGCTTTCGGCCACCGCGTCTTTTTGCCCCCGCGGCCGCGGCTTTCTCCCACCGCGGCTTTTTGCCCTCGCCGCCGCAACTTTTTGCCTCCGCCG
```
CCGAGTCTTTTTGCCCCCGCCACCATGGCTTTTTGCCCTCGCCGCTGCGCCTTTTTCCCTCCACGGCTTTTT * SA:Z:chr17,26962689,-,503S109M44I92M22D110M10I44M12D120M71S,60,123;chrUn_JTFH01000492v1_decoy,501,+,1097M6S,60,1; MD:Z:101A11T7^T12G21C13G68C5A12C1T15C2C15T0G10T3T2T11T29C4T4T6C7C0A1C24T4G18A1C4G13A1A47C27T26 RG:Z:GATKSVContigAlignments NM:i:97 AS:i:281 XS:i:182 asm024831:tig00025 2064 chr17 26962689 60 503H109M44I92M22D110M10I44M12D120M71H * 0 0 CCCCCGCCGCCACGGGTTTTGGCCGCCGCGGCTTTTTGCCCCCTCCCCCACGGCTTTTGCCTATGCGGCTTCTTGCCCCCGCCGCCGCGGGCTTTTTCCCCCGGCCGCGGCTTTTTGCTGCCGCTGCCGTGGTTTTTTGTCCCCGCTGCCGAAGCTTTTTGCCACCGCCGCCACTGCTTTTTGCGACTTTTTGCTCCCCCCACCGGGGCTTTTTACCTCCGTCGCCGCGGCTTTTTCCCCCACCACCGCGGCTTTTTGCCCCCGCCGCCGCTGCTTTTTGCAGCTTTTTGCCCCTGCCACCGCGGCTTTATGTGGTTTTTTGCCCGCGCCGCCGCGGCTTTTTGCCCACGCCGCCGCGGCTTTTAGCGGCTTTTTGCTCCTGCCGCCGCGGCTTTTTGCTCCTGCCGCCGCGGCTTTTTGCCCCCCGCCTCAGCGGCTTTCGGCCACCGCGTCTTTTTGCCCCCGCGGCCGCGGCTTTCTCCCACCGCGGCTTTTTGCCCTCGCCGCCGCAACTTTTTGCCTCCGCCGC * SA:Z:chr17,26962248,-,121M1D142M1I165M62I130M482S,60,97;chrUn_JTFH01000492v1_decoy,501,+,1097M6S,60,1; MD:Z:13A1A47C27T27C8G0A0G12C43A4A8^CCGCGGCTTTCTGCTCCCGCCG26A26G2T7T9C3G0T22C2T16C0G0G0C18C6A2^GCCGCGGCTTTT5C26T24C0A17T4T9C9G0T17 RG:Z:GATKSVContigAlignments NM:i:123 AS:i:148 XS:i:71 asm024831:tig00025 2048 chrUn_JTFH01000492v1_decoy 501 60 1097M6H * 0 0 
AAAAAGCCGTGGAGGGAAAAAGGCGCAGCGGCGAGGGCAAAAAGCCATGGTGGCGGGGGCAAAAAGACTCGGCGGCGGAGGCAAAAAGTTGCGGCGGCGAGGGCAAAAAGCCGCGGTGGGAGAAAGCCGCGGCCGCGGGGGCAAAAAGACGCGGTGGCCGAAAGCCGCTGAGGCGGGGGGCAAAAAGCCGCGGCGGCAGGAGCAAAAAGCCGCGGCGGCAGGAGCAAAAAGCCGCTAAAAGCCGCGGCGGCGTGGGCAAAAAGCCGCGGCGGCGCGGGCAAAAAACCACATAAAGCCGCGGTGGCAGGGGCAAAAAGCTGCAAAAAGCAGCGGCGGCGGGGGCAAAAAGCCGCGGTGGTGGGGGAAAAAGCCGCGGCGACGGAGGTAAAAAGCCCCGGTGGGGGGAGCAAAAAGTCGCAAAAAGCAGTGGCGGCGGTGGCAAAAAGCTTCGGCAGCGGGGACAAAAAACCACGGCAGCGGCAGCAAAAAGCCGCGGCCGGGGGAAAAAGCCCGCGGCGGCGGGGGCAAGAAGCCGCATAGGCAAAAGCCGTGGGGGAGGGGGCAAAAAGCCGCGGCGGCCAAAACCCGTGGCGGCGGGGGTAAAAAGCCGCGTCGGCGAAAGCCGTGGCGGCGGGGACAAAAAGCCGTGGAGGCGCGGGCAAAAAGCTGCGGTGGCGGGGGCAAAAGGTCGCGGCGGCAGGGGGCAAAAAGCCGCGGCAGCAGGGGGAAAAAAGCCGTAAATAGCCACGGCGGTGGGGGAAAAAGCCGCGGCGGCAGAAAGCCGCGGCGGCGGGGGCAAAAAGGCGAGGTAGGAGAAAGCTGCATCTGCGGGGGCAAAAAGCCGCTGTGGGAGAAAGCCGCGGCAGCGGGGGCAAAAAGCCTCGGCGGCGGGGGCAAAATAGTGGAAATGGGGTAGAAGGCGGCACAGCTTGGTATTGCTGGAGTGTAATGTGATAGGAAAAGTGCAGTGGAAGACAAAAAAGATGTAAGTAGGCTTAACTCAGTGCAGCTAAGAACCCAGATGTTATCTTGAGGGTATTAACTAATAAGCAGTTTAAATCAGAATGGCACATTCTGATTTGTTTTTTGTATGTTCA * SA:Z:chr17,26962248,-,121M1D142M1I165M62I130M482S,60,97;chr17,26962689,-,503S109M44I92M22D110M10I44M12D120M71S,60,123; MD:Z:374A722 RG:Z:GATKSVContigAlignments NM:i:1 AS:i:1092 XS:i:281 ``` Planned fix: push the gap-split to an ever later stage. Temporary fix: Run the pipeline with the line `--exp-variants-out-dir` in "scripts/sv/runWholePipeline.sh" deleted, this turns off the experimental feature. username_0: Let's remove the option from the script, then, if it's causing a failure on our primary benchmarking sample. username_1: temp fix posted in #4146 username_1: closed by #4159 Status: Issue closed
ghettovoice/vuelayers
716682012
Title: ol-ext feature animation Question: username_0: Hi, I tried to use ol-ext library to animate a feature, I set everything up but it's not work, it says animation is playing but I can't see anything on map This is my code: ``` <vl-map ref="map" :load-tiles-while-animating="true" :load-tiles-while-interacting="true" :data-projection="'EPSG:4326'"> <vl-view :enableRotation="false" :center="center" :zoom="zoom" :max-zoom="maxZoom"></vl-view> <vl-layer-tile> <vl-source-osm></vl-source-osm> </vl-layer-tile> <vl-layer-vector ref="featuresLayer"> <vl-source-vector :features.sync="myFeatures"></vl-source-vector> </vl-layer-vector> </vl-map> ``` and the js: ``` <script> import { Feature } from 'ol' import { Point, LineString } from 'ol/geom' import { Icon, Style, Fill, Stroke, RegularShape } from 'ol/style' import featureAnimationPath from 'ol-ext/featureanimation/Path' import _ from 'lodash' export default { data() { return { center: [0, 0], zoom: 5.5, maxZoom: 19, myFeatures: [] } }, methods: { animate() { // feature with id=0 is a line string feature let coordinates = this.myFeatures.find(x => x.id === 0).geometry.coordinates let featurePoint = new Point(coordinates[0]) let feature = new Feature(featurePoint) feature.setStyle(new Style({ image: new RegularShape({ radius: 14, points: 3, fill: new Fill({ color: '#00f' }), stroke: new Stroke({ color: '#fff', width: 2 }) }), stroke: new Stroke({ [Truncated] //you may ask why I creating line string again, using the already created one won't work let lineString = new Feature({ geometry: new LineString(coordinates) }) let animation = new featureAnimationPath({ path: lineString, rotate: true, duration: 10 * 1000, }) // let result = this.$refs.featuresLayer.$olObject.animateFeature(feature, animation) let result = this.$refs.featuresLayer.$layer.animateFeature(feature, animation) console.log(result.isPlaying()) //returns true but nothing happens or at least I can't see } } } </script> ``` The style I set on animating feature 
is what they used on ol-ext example, a triangle. It's seems we can't use images, curious why?! Answers: username_1: Your example not working because you re-create new path feature (linestring) that is not bound to the vector layer on which you run animateFeature. Actually there is no need to re-create it at all, you should use features from the vector source. Check this updated demo https://jsfiddle.net/username_1/jf41enh8/. As for using image icons, I force icon load with `icon.load()`. It seems that without it ol-ext animation don't know icon resolution and can't render it correctly. username_0: Wow! Why it didn't came to my mind?! Actually here in this line: `let coordinates = this.myFeatures.find(x => x.id === 0).geometry.coordinates` The feature it finds is the line string I'm showing on the map but as I commented It didn't work that way the actual code was like this: ``` let lineString = this.myFeatures.find(x => x.id === 0) . . . let animation = new featureAnimationPath({ path: lineString, rotate: true, duration: 10 * 1000, }) ``` And about the `icon.load()` thanks! It works!! Thank you for your efforts dear, your library and replies really helped me username_1: let coordinates = this.myFeatures.find(x => x.id === 0).geometry.coordinates The feature it finds is the line string I'm showing on the map but as I commented It didn't work that way the actual code was like this: Because `this.myFeatures` is just a serialized representation of OpenLayers features, not the actual ol/Feature instances that was created inside vector source username_0: Genius! You're right, I got that wrong username_0: And another question, Is it possible to do seeking in the animation? username_1: I don't know. But I found that call `vectorLayer.animateFeature(feature, fanim, useFilter)` returns an object with animation control methods. 
So I think you can stop the current animation (this will drop the current animating marker) and re-call the animation method with a marker at another coordinate along the linestring https://viglino.github.io/ol-ext/doc/doc-pages/ol.layer.Base.html
username_0: Thank you, I've been away for a while. I did some investigating, and the way to do it is to use another lib named ol-games, which supports pause & resume actions
Status: Issue closed
ThinkR-open/golem
823732646
Title: Location for static HTML files to deploy with golem app Question: username_0: When I install the package from GitHub, I get the following error:
```
warning in file(con, "r") :
  cannot open file 'C:/Users/pun009/AppData/Local/Temp/RtmpI5OJuG/R.INSTALL2ef859b84601/mmrefpoints/R/Documentation/ProjectionModel_en.html': No such file or directory
Warning: Error in file: cannot open the connection
```
I imagine that other Shiny developers have HTML files they want to insert in their apps using `shiny::includeHTML()`. Where should I put the files? Thank you!
Answers: username_1: Hi. Don't create that subdirectory in the R folder; the R folder is only for R functions (and their roxygen2 documentation). You can put your HTML file into the inst/ folder and then use the app_sys() function to access it. Is your application open source? Can you share a link? Regards
username_0: Hi Vincent, thank you. I see, I'll put it in inst/ instead. [Here](https://github.com/username_0/mmrefpoints) is the link to my app on GitHub. I know there are many other issues that remain, but I think this one is the most important!
username_1: Please have a look at my fork here: https://github.com/username_1/mmrefpoints https://github.com/username_0/mmrefpoints/compare/master...username_1:master And because your app_server file is 1500 lines long, please use Shiny modules :) (I know it can be scary at first, but once you've started you can't do without it).
Status: Issue closed
username_0: Excellent! That worked wonderfully, thank you. I also moved my report.Rmd (an HTML report generated by the app) to inst/ and it works great. Lots of other issues to sort out, but at least it is functional enough to run the app when it installs. And yes, re: modules, I know... I learned Shiny as I was writing it and some pieces are very inefficient. I'll work on that! Thanks again.
stlehmann/Flask-MQTT
1122034097
Title: High CPU usage on Raspberry Pi 3 Model B Question: username_0: I'm deploying a Flask app with flask_socketio and flask_mqtt on a Raspberry Pi, but it seems to be using 90% of the CPU. Meanwhile, when I remove flask_mqtt and use only flask_socketio, the CPU usage drops to 20~30%.
Additional information: Mosquitto is installed on the Raspberry Pi, so flask_mqtt connects to "localhost"; the keepalive parameter is set as below:
```python
app.config['MQTT_BROKER_URL'] = 'localhost'
app.config['MQTT_BROKER_PORT'] = 1883
app.config['MQTT_USERNAME'] = ''
app.config['MQTT_PASSWORD'] = ''
app.config['MQTT_KEEPALIVE'] = 30
app.config['MQTT_TLS_ENABLED'] = False
```
Answers: username_0: I just found out that the problem occurs when using eventlet, but I still haven't found another approach.
Status: Issue closed
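The reporter traced the extra load to eventlet. As a configuration sketch of one possible workaround (an assumption, not a confirmed fix; whether it helps depends on the installed Flask-SocketIO version), forcing Flask-SocketIO onto the threading backend keeps eventlet out of the picture:

```python
from flask import Flask
from flask_mqtt import Mqtt
from flask_socketio import SocketIO

app = Flask(__name__)
app.config['MQTT_BROKER_URL'] = 'localhost'
app.config['MQTT_BROKER_PORT'] = 1883
app.config['MQTT_KEEPALIVE'] = 30
app.config['MQTT_TLS_ENABLED'] = False

mqtt = Mqtt(app)
# async_mode='threading' selects the plain-threads backend instead of eventlet
socketio = SocketIO(app, async_mode='threading')

if __name__ == '__main__':
    # use socketio.run rather than app.run so the chosen backend is honored
    socketio.run(app, host='0.0.0.0', port=5000)
```

If CPU usage stayed high even with the threading backend, that would point at the MQTT polling itself rather than eventlet.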
liamchoi943/Study
897656756
Title: Java 24강 오버라이딩과 다형성(3) Question: username_0: 메서드 오버라이딩 (overriding) 하위클래스에 메서드 (method)을 오버라이딩 할수있다! 상위클래스에 있는 메서드를 똑같은 이름으로 하위클래스에 정의하고, 내용만 다르게하면 그 하위클래스에서 그 method을 호출할때 상위클래스가 아닌 하위클래스 메서드를 호출하게된다! 오버라이딩 후 상위클래스로 하위클래스를 선언했을때는 어떻게 되는지? Customer customerwho = new VipCustomer(); << vipcustomer의 메서드가 호출된다. 가상 메서드 (virtual method) 프로그램에서 어떤 객체의 변수나 메서드의 참조는 그 타입에 따라 이루어짐. 가상 메서드의 경우는 타입과 상관없이 실제 생성된 인스턴스의 메서드가 호출 되는 원리. Customer vc = new VIPCustomer(); vc.calcPRice(10000); vc타입은 customer. 실제 생성된 인스턴스의 vipCustomer 클래스의 calcprice() 메서드가 호출됨 ![image](https://user-images.githubusercontent.com/84316417/119079300-cf101180-ba32-11eb-9dc9-bf0116bca5f4.png) 다형성 (polymorphism) 다형성: 하나의 코드가 여러가지 자료형으로 구현되어 실행되는것. 정보은닉, 상속과 더불어 객체지향 프로그래밍의 가장 큰 특징 중 하나. 객체 지향 프로그래밍의 유연성, 재활용성, 유지보수성에 기본이 되는 특집. Animal이 상위 클래스 human.move = "W" eagle.move = "F" Animal Test class의 함수 public void moveAnimal(Animal animal) { animal.move}; Animal Test class의 main void에서 moveAnimal(human와 eagle)를 했을때 W와 F가 뜬다. << 즉 하나의 코드로 다른 결과를 가져오는것.! 이것이 다형성 상속을 언제 사용할까? 여려 클래스를 생성하지 않고 하나의 클래스의 공통적인 요소를 모으고 나머지 클래스는 이를 상속받음 다음 각각 필요한 특성과 메서드를 구현하는 방법. 이렇게 안하면... 많은 if문이 생긴다...;;
taniman/profit-trailer-enhancements
419071864
Title: Coin or coins accumulation Question: username_0: I would like to have the following feature in order to accumulate one or more coins: in the market setting I'd like to place the coin(s) that I want to accumulate, and in enabled_pairs the markets where they are located.
One coin example:
market = EOS
enabled_pairs = BTC, USDT
Multiple coins:
market = EOS, TRON
enabled_pairs = BTC, USDT
Answers: username_1: This is not possible with the current bot framework.
Status: Issue closed
TobiasNickel/teditor
698282973
Title: How to install extensions Question: username_0: Would you please let me know how to install extensions? Thanks
Answers: username_1: Currently I don't know that myself. teditor currently only works without extensions. For adding new features:
1. start teditor,
2. open it in the browser,
3. try some feature,
4. see if there is an error message,
5. implement an alternative.
teditor is mostly a proof of concept, and it might motivate you to clone vscode yourself and do something with it. If you find a feature you would like to see implemented and added to teditor, PRs are very welcome.
team-durumi/didkorea
590218600
Title: Remove SN Treatment Chain (SN) Question: username_0: <img width="1217" alt="스크린샷 2020-03-30 오후 8 32 49" src="https://user-images.githubusercontent.com/39326928/77908068-c22bde00-72c5-11ea-99a6-88d10b4b0a26.png">
Please delete the SN Treatment Chain (SN) from the http://didkorea.co.kr/product/04-environmental-resistant-chain-series page.
There are 7 items here, so only one sits in the bottom row, and the client says all the empty space makes it look odd, haha. So trimming it to 6 items and aligning the columns should do it.
Status: Issue closed
happy-se-life/kanban
1053437903
Title: Is "unspecified" appropriate in English? Question: username_0: Is "unspecified" appropriate in English? Answers: username_1: As a word it is correct. Whether or not it is appropriate to use depends on the context. For example, `There was an unspecified number of people in the room` is correct in its context. But in `The house is unspecified`, though grammatically correct, the context makes it weird, and there it would not be appropriate to use that word.
username_0: Hello, thank you for your reply. I understand. "unspecified" may look strange in the ticket filter choices. What would be a natural word in this case?
KarimTazmi/NR-S6-Portfolio
550231511
Title: Padding or row height? Question: username_0: https://github.com/username_1/NR-S6-Portfolio/blob/99419b4f8d0fb3b9780cce0b2e4aa0476d15d533/css/style.css#L175 Padding is pretty forceful, you might want to achieve the visual effect using stretching the items over the rows and center the contents. Answers: username_1: Ik wilde inderdaad content verticaal centreren alleen lukt het me niet bij .project__item. Ik heb wel de padding kunnen verwijderen en stretch gebruikt. username_0: https://codepen.io/henjohoeksma/pen/LYEBapp username_0: align-content: center werkt alleen als de container een hoogte heeft, daar moet je dus ff op letten.
NativeScript/docs
267090348
Title: Add IIS server to the Vagrant setup Question: username_0: Currently, when using Vagrant locally, the documentation is served using an NGINX server. This makes it impossible to test the IIS configuration that is part of the documentation - redirects, for example. We should add another Windows machine to the Vagrant configuration, which will be used for serving the documentation produced by the build scripts.
Answers: username_0: This is no longer required, as we switched the hosting servers to NGINX.
Status: Issue closed
AdobeDocs/adobeio-codelabs-ci-cd
709116113
Title: Add additional details to the CI/CD setup Question: It would be great to add additional details to this codelab, including but not limited to
- Where do I find the secrets for PROD and Stage?
- Our git.corp.adobe.com account does not seem to have GitHub Actions. So to use CI/CD we need a GitHub account outside of the one provided by Adobe?

### Existing response as starting point:
**Where do I find the secrets for PROD and Stage?**
Users will need to go through the console setup of the app in Stage and PROD manually. The secrets for PROD and Stage deployments are available within the JSON file that you download either from the Console UI or from the CLI for your PROD and Stage workspaces. They can:
- Download the console.json file from the Console UI for the workspace to which you want to deploy. Then set the secrets in your secret vault (e.g. GH Secrets in our OOTB CI/CD support) and use the CLI in your CI/CD scripts to build and deploy the app (e.g. what we do in our OOTB GH actions)
- Download the console.json from the CLI (I'll come back to that right after) and either do the same as above, or deploy directly to your targeted environment using the CLI yourself (not recommended for prod deployments, of course).
to download the appropriate console.json from the CLI:
```
aio where shows you where your CLI config points to in terms of org/project/workspace
aio console org list and aio console org select let you list and select which org you want to work with
aio console project list and aio console project select let you list and select which project you want to work with from the org that you've chosen
aio console workspace list and aio console workspace select let you list and select which workspace you want to work with from the org and project that you've chosen
```
once you've selected the right org/project/workspace combo, go to your application folder and: this will download the json file for this combo and reconfigure your .env and .aio accordingly.
**Our git.corp.adobe.com account does not seem to have GitHub Actions. So to use CI/CD we need a GitHub account outside of the one provided by Adobe?**
Yes, GitHub Actions are not supported on git.corp yet. You must push your app to github.com to leverage the OOTB CI/CD support.
**Additional info**
You're not forced to use the GH Actions. If you prefer to push your app to git.corp, you can leverage Jenkins or Travis to deploy your app. https://github.com/AdobeDocs/project-firefly/blob/master/guides/ci_cd_for_firefly_apps.md#bring-your-own-cicd-pipeline
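Once the console.json for the right org/project/workspace combo is downloaded, the secrets can be read out programmatically before being stored in a vault. A rough sketch (the field names below are assumptions about the file layout, not a documented schema):

```python
import json


def extract_credentials(console_json_text):
    """Pull the workspace name and its credential details out of a console.json dump."""
    data = json.loads(console_json_text)
    workspace = data["project"]["workspace"]
    return workspace["name"], workspace.get("details", {})


# toy payload standing in for a real console.json download
sample = json.dumps(
    {"project": {"workspace": {"name": "Stage", "details": {"runtime": {}}}}}
)
name, details = extract_credentials(sample)
print(name)  # Stage
```

In a CI/CD script, the returned details would then be written into the secret store (e.g. GH Secrets) rather than printed.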
matrix-org/synapse
616520440
Title: Deactivating an account with "delete messages" checked doesn't work Question: username_0:
### Description
Deactivating an account with "Please forget all messages I have sent when my account is deactivated" checked does not work. You get the error "There was a problem communicating with the server. Please try again." The network tab shows 401 for `deactivate`, then 200, then 403. The spinner goes on forever (at least a couple of hours before I gave up). I tested this with an almost new account with very few messages. A customer also reported this; awaiting a rageshake from them. My rageshake is linked: https://newvector.zammad.com/#ticket/zoom/3216
### Steps to reproduce
- Create a matrix.org account
- Settings - Deactivate Account
- Check "Please forget all messages I have sent when my account is deactivated"
- Enter password
- Click Continue
### Version information
Matrix.org; account deletion does work on my Modular server
<!-- Was this issue identified on matrix.org or another homeserver?
--> - **Homeserver**: matrix.org <!-- What version of Synapse is running? You can find the Synapse version with this command: $ curl http://localhost:8008/_synapse/admin/v1/server_version (You may need to replace `localhost:8008` if Synapse is not configured to listen on that port.) --> - **Version**: "1.12.4 (b=matrix-org-hotfixes,309e30bae)" - **Install method**: <!-- examples: package manager/git clone/pip --> - **Platform**: <!-- Tell us about the environment in which your homeserver is operating distro, hardware, if it's running in a vm/container, etc. --> Answers: username_0: Synapse logs added to rageshake username_1: So this is happening because Riot starts the User-Interactive Authentication Session with: ```json {"erase": false} ``` then, after the erase checkbox is checked and the user enters their password, ```json { "auth": { "session": "yyy", "type": "m.login.password", "user": "@xxx:matrix.org", "identifier": { "type": "m.id.user", "user": "@xxx:matrix.org" }, "password": "xxx" }, "erase": true } ``` Now that `erase` is `true`, Synapse complains with: ```json { "errcode": "M_UNKNOWN", "error": "Requested operation has changed during the UI authentication session." } ``` I'm not sure if this should be initially fixed from the Synapse or Riot side, but we'll need to inform clients that there's a potential breaking change if they were relying on changing the UIAA parameters mid-way through an authentication session, as Riot is doing here. This check was introduced in https://github.com/matrix-org/synapse/pull/7068. Unfortunately, even after https://github.com/matrix-org/synapse/pull/7455 which relaxed the requirements a bit, this is still a problem in certain cases (but again, maybe a client problem rather than a server one). This occurs on matrix.org as it is running v1.13.0rc - the check is not a part of v1.12.4. 
username_2: I feel pretty strongly that riot is at fault for (a) moving the goalposts mid-operation and (b) not giving the user a better error username_3: See also #7452. username_3: This should be fixed in 1.13.0 as of #7483. Status: Issue closed
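The check introduced in #7068 that produces the error above boils down to comparing the body of the first call in a UIA session with later calls. A simplified sketch of that comparison (an illustration of the behavior, not Synapse's actual implementation):

```python
class UIAuthSession:
    """Toy model of the user-interactive auth consistency check."""

    def __init__(self):
        self._initial_params = None

    def check(self, body):
        # everything except the "auth" dict must stay constant mid-session
        params = {k: v for k, v in body.items() if k != "auth"}
        if self._initial_params is None:
            self._initial_params = params
        elif params != self._initial_params:
            raise ValueError(
                "Requested operation has changed during the UI authentication session."
            )


session = UIAuthSession()
session.check({"erase": False})  # first call: session parameters are recorded
try:
    session.check({"erase": True, "auth": {"type": "m.login.password"}})
except ValueError as e:
    print(e)  # the error Riot receives when the erase flag flips
```

With this model, Riot's flow fails exactly because `erase` flips from `False` to `True` between the session-starting request and the authenticated one.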
spring-projects/spring-data-jpa
776429593
Title: Working with JSON(B) in Postgres [DATAJPA-1738] Question: username_0: **[VladislavK777](https://jira.spring.io/secure/ViewProfile.jspa?name=JIRAUSER49649)** opened **[DATAJPA-1738](https://jira.spring.io/browse/DATAJPA-1738?redirect=false)** and commented

Hello. I have problems with Spring JPA and JSONB (select, update, delete, insert). I need to use another library to work with JSONB (com.vladmihalcea.hibernate.type.json), and this lib is not popular. Maybe you could add a new feature. For example, I have this entity:

```java
import com.vladmihalcea.hibernate.type.json.JsonBinaryType;

@Entity
@Table(name = "Table")
@TypeDefs({
    @TypeDef(name = "jsonb", typeClass = JsonBinaryType.class)
})
public class MyTable {
    @Id
    private UUID id;

    @Type(type = "jsonb")
    @Column(columnDefinition = "jsonb")
    @Basic(fetch = FetchType.LAZY)
    private List<MyModel> models;
}
```

models is an array of objects. Currently I search like this:

```java
@Query(value = "select e.* from table t, jsonb_array_elements(t.models) e where e ->> 'numConnect' = ?1", nativeQuery = true)
List<Object> findByObject(String num);
```

Next I convert the result via ObjectMapper to List\<MyModel>. I would like to search data inside the JSON and return List\<MyModel> directly:

```java
@Query(value = "select e.* from table t, jsonb_array_elements(t.models) e where e ->> 'numConnect' = ?1", nativeQuery = true, jsonMode = true)
List<MyModel> findByObject(String num);
```

Maybe add a new param "jsonMode = true" to `@Query`? It would be very useful for developers.

---
No further details from [DATAJPA-1738](https://jira.spring.io/browse/DATAJPA-1738?redirect=false)
Answers: username_1: Closing since the request is not clear.
Status: Issue closed
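The manual conversion step described above (a native jsonb query returns raw JSON elements that then have to be mapped onto model objects by hand) is what the requested `jsonMode` flag would remove. That deserialization step, sketched in Python with a stand-in model (the `numConnect` field name comes from the example query; everything else is illustrative):

```python
import json
from dataclasses import dataclass


@dataclass
class MyModel:
    numConnect: str


def rows_to_models(rows):
    # each row is the raw JSON text a native jsonb_array_elements query hands back
    return [MyModel(**json.loads(row)) for row in rows]


models = rows_to_models(['{"numConnect": "42"}'])
print(models[0].numConnect)  # 42
```

With a hypothetical `jsonMode` option, this boilerplate would move into the repository layer instead of every caller.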
skygear-demo/skygear-react-google-login
337767482
Title: Auto login if not logged out last time Question: username_0: I think it would be good to demonstrate how to implement auto login.
### Auto login flow
1. In `componentDidMount` of `App`, check `skygear.auth.accessToken`.
1. If the access token exists, call `skygear.auth.whoami()` and `skygear.auth.getOAuthProviderProfiles()`.
1. If the access token does not exist, just display `not yet signed in` like the current one.
1. If either of the calls `skygear.auth.whoami()` or `skygear.auth.getOAuthProviderProfiles()` gets an error, call `skygear.auth.logout()` for the user.
### Code that needs to be changed
With the current implementation, if you call `skygear.auth.accessToken` in `componentDidMount` of `App`, it will always return `null`. The reason is that `skygear.config` in `index.js` is asynchronous; the easiest fix is to render the React app AFTER `skygear.config` has finished, so that you can always assume `skygear` is ready to use in your components.
Answers: username_0: Try this:
```
skygear
  .config(...)
  .then(() => {
    ReactDOM.render(...);
  });
```
Status: Issue closed
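The four-step flow above reduces to: attempt a session restore when a token is present, and fall back to logout on any failure. A language-agnostic sketch of that decision logic (written in Python for brevity; the real Skygear SDK calls are JavaScript, so they are represented here by injected callables):

```python
def auto_login(access_token, whoami, get_profiles, logout):
    """Return True when the stored session is still valid, else clean up."""
    if access_token is None:
        return False  # no stored token: show "not yet signed in"
    try:
        whoami()        # stands in for skygear.auth.whoami()
        get_profiles()  # stands in for skygear.auth.getOAuthProviderProfiles()
        return True
    except Exception:
        logout()        # stands in for skygear.auth.logout()
        return False
```

Running this only after configuration has finished mirrors the "render after `skygear.config` resolves" fix from the thread.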
chef-partners/azure-chef-extension
676585575
Title: Testing azure-chef-extension for chef-16 on Windows 2019 Answers: username_1: Tested azure chef extension in windows 2019 with bootstrap options -[chef_package_path, chef_package_url, run-list, policyfile, daemon, chef_License] with following commands. Windows 2019 version 1809: - az vm extension set --resource-group "sanga-resource" --vm-name "sanga-win19" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'', 'client_key':'', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga2' }, 'chef_package_url': 'https://packages.chef.io/files/stable/chef/16.3.45/windows/2019/chef-client-16.3.45-1-x64.msi', 'daemon':'service','CHEF_LICENSE':'accept'}" - az vm extension set --resource-group "sanga-resource" --vm-name "sanga-win19" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'', 'client_key':'', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga3' }, 'chef_package_url': 'https://packages.chef.io/files/stable/chef/16.3.45/windows/2019/chef-client-16.3.45-1-x64.msi', 'daemon':'service','CHEF_LICENSE':'accept', 'runlist':'[recipe[starter]]'}" - az vm extension set --resource-group "sanga" --vm-name "sanga-win19" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'', 'client_key':'', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga2'}, 'chef_package_url' : 
'https://packages.chef.io/files/stable/chef/16.3.45/windows/2019/chef-client-16.3.45-1-x64.msi', 'daemon':'service', 'CHEF_LICENSE':'accept', 'custom_json_attr': { 'policy_group': 'sanga-group', 'policy_name': 'sangapolicy' }}" - az vm extension set --resource-group "sanga" --vm-name "sanga4" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'', 'client_key':'', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga3'}, 'chef_package_url' : 'https://packages.chef.io/files/stable/chef/16.3.45/windows/2019/chef-client-16.3.45-1-x64.msi', 'daemon':'task', 'CHEF_LICENSE':'accept', 'custom_json_attr': { 'policy_group': 'sanga-group', 'policy_name': 'sangapolicy' }}" - az vm extension set --resource-group "sanga-resource" --vm-name "sanga-win19" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'', 'client_key':'', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga4'}, 'chef_package_path' : 'C:/sanga/chef-client-16.3.45-1-x64.msi', 'daemon':'task', 'CHEF_LICENSE':'accept'}" - az vm extension set --resource-group "sanga-resource" --vm-name "sanga-19" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'', 'client_key':'', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga4'}, 'chef_package_path' : 'C:/sanga/chef-client-16.3.45-1-x64.msi', 'daemon':'task', 'CHEF_LICENSE':'accept', 'custom_json_attr': { 'policy_group': 'sanga-group', 'policy_name': 
'sangapolicy' }}" - az vm extension set --resource-group "sanga-resource" --vm-name "sanga-win19" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'', 'client_key':'', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga2'}, 'chef_package_url' : 'https://packages.chef.io/files/stable/chef/16.3.45/windows/2019/chef-client-16.3.45-1-x64.msi', 'daemon':'task', 'CHEF_LICENSE':'accept', 'environment_variables': {'Path': 'C:/chef','MESSAGE': 'Testing environment variable'}}" - az vm extension set --resource-group "sanga-resource" --vm-name "sanga-win19" --name "ChefClient" --publisher Chef.Bootstrap.WindowsAzure --version 1210.13.4.1 --no-auto-upgrade true --protected-settings "{'validation_key':'-----BEGIN RSA PRIVATE KEY----------END RSA PRIVATE KEY-----', 'client_key':'-----BEGIN RSA PRIVATE KEY-----END RSA PRIVATE KEY-----', 'client_rb': '~/chef-repo/.chef/knife.rb'}" --settings "{ 'bootstrap_options': { 'chef_server_url': 'https://api.chef.io/organizations/sanga', 'chef_node_name': 'sanga3'}, 'chef_package_url' : 'https://packages.chef.io/files/stable/chef/15.11.8/windows/2019/chef-client-15.11.8-1-x64.msi', 'daemon':'service', 'CHEF_LICENSE':'accept', 'environment_variables': {'Path': 'C:/chef','MESSAGE': 'Testing environment variable'}}" Status: Issue closed
MyWebIntelligence/MyWebIntelligence
58734519
Title: Find all Twitter accounts related to certain search criteria Question: username_0: https://twitter.com/JeremiePat/status/570198724237881344
Answers: username_0: This is just to keep this in mind for one day, not for now.
username_1: This issue has its place in the TA project; keep in mind our roadmap for this project. Social networks are not for the next release, even if we do everything we can to prepare for this issue.
Status: Issue closed
pomodorozhong/personal-research
971938125
Title: Software versioning Question: username_0: ## Reference + [ZeroVer: 0-based Versioning : programming](https://www.reddit.com/r/programming/comments/p2xcpn/zerover_0based_versioning/) + [語意化版本 2.0.0 | Semantic Versioning](https://semver.org/lang/zh-TW/)
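Since the second reference covers Semantic Versioning, a tiny illustration of core-version precedence may be useful: `MAJOR.MINOR.PATCH` components compare numerically, left to right. This sketch handles only the core version and deliberately ignores SemVer's pre-release and build-metadata precedence rules:

```python
def parse_semver(version):
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple (core version only)."""
    core = version.split("+")[0].split("-")[0]  # drop build metadata / pre-release
    major, minor, patch = (int(part) for part in core.split("."))
    return (major, minor, patch)

# Tuples compare element-wise, which matches SemVer core precedence.
assert parse_semver("1.10.0") > parse_semver("1.9.9")
assert parse_semver("2.0.0-rc.1") == (2, 0, 0)
```

Note that full SemVer additionally ranks a pre-release (`2.0.0-rc.1`) below its release (`2.0.0`), which a tuple of three integers cannot express.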
ARMmbed/nanostack-hal-mbed-cmsis-rtos
161446071
Title: Dependency on atmel-rf-driver Question: username_0: This module's implementation of arm_hal_random depends on atmel-rf-driver providing rf_read_random and rf_read_mac_address. Since it is nowhere documented in the porting guide that a driver needs to be called 'driverRFPhy', nor that it has to provide the functions mentioned above, it gets pretty confusing when trying to get a different radio up and running with Nanostack. Status: Issue closed Answers: username_1: Was merged in https://github.com/ARMmbed/nanostack-hal-mbed-cmsis-rtos/pull/8 Hence closing.
RomeoDespres/reapy
501336737
Title: Any way to reset the connection? Question: username_0: I'm writing an external tool that occasionally polls Reaper about its active project tab, whether it's rendering, stuff like that. It seems whenever Reaper throws an error box, the reapy "host" gets stuck in a state where my running code throws `ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host` and I have to restart the tool. Given that the tool is designed to be fire-and-forget, I'd like to avoid this. Any pointers? Any way of reinitializing reapy without restarting my tool? Answers: username_1: Could you give me an example of such an error box that generates this state? username_0: The most reliable way is to try and switch projects.

```
Deferred script execution error
Traceback (most recent call last):
  File "defer", line 1, in <module>
  File "C:\...\Python37\lib\site-packages\reapy\core\reaper\defer.py", line 48, in run
    callback(*args, **kwargs)
  File "activate_reapy_server.py", line 24, in run_main_loop
    SERVER.send_results(results)
  File "C:\...\Python37\lib\site-packages\reapy\tools\network\server.py", line 111, in send_results
    self._send_result(connection, result)
  File "C:\...\Python37\lib\site-packages\reapy\tools\network\server.py", line 77, in _send_result
    result = json.dumps(result).encode()
  File "C:\...\Python37\lib\site-packages\reapy\tools\json.py", line 46, in dumps
    return json.dumps(x, cls=ReapyEncoder)
  File "c:\...\Python37\Lib\json\__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "c:\...\Python37\Lib\json\encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "c:\...\Python37\Lib\json\encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "C:\...\Python37\lib\site-packages\reapy\tools\json.py", line 38, in default
    return json.JSONEncoder.default(self, x)
  File "c:\...\Python37\Lib\json\encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type _MakeCurrentProject is not JSON serializable
```

username_1: Ok, this looks like it's a bug inside reapy. I mean normally you don't have errors, so you don't have to reset it. I'll try to fix that soon! username_0: Right, however I tried to use `reapy.reascript_api.SelectProjectInstance()`, and I can still get issues (just not as reliably). Additionally, terminating my tool at just the wrong moment also seems to cause Reaper to throw an error message. I hope you can find some time and investigate exactly how that situation could be dealt with. Meanwhile, I'll try and rewrite my code to avoid e.g. calling for a project tab switch while in the middle of checking the current project. I suspect Reaper might not handle that too well. username_1: Hm it's definitely weird if you get errors while using `SelectProjectInstance`. If it happens again I'd be very interested if you could provide me with the error message. Similarly, if you get an error when terminating your tool, please post the error. It may help to understand where the problem comes from. Concerning your first post, what you were asking for was some function, say `reapy.reconnect`, that you would call whenever you encounter a problem? It may be an interesting feature if we don't manage to get rid of all your errors. username_0: I think it'd be valuable. If nothing else for extra safety in production-critical environments. username_0: For example, here's an error that Reaper can throw, about 1 in 20 times I terminate my script. Once the user has dismissed the error box in Reaper, the tool can start normally. I currently have no way for my tool to wait until the user does so, since reapy will simply hang, and then throw an exception. It would be preferable for reapy to give up and throw an exception. I could then wait for the user to close the error box, then re-initialize reapy.
```
Deferred script execution error
Traceback (most recent call last):
  File "defer", line 1, in <module>
  File "C:\...\Python37\lib\site-packages\reapy\core\reaper\defer.py", line 48, in run
    callback(*args, **kwargs)
  File "activate_reapy_server.py", line 24, in run_main_loop
    SERVER.send_results(results)
  File "C:\...\Python37\lib\site-packages\reapy\tools\network\server.py", line 111, in send_results
    self._send_result(connection, result)
  File "C:\...\Python37\lib\site-packages\reapy\tools\network\server.py", line 78, in _send_result
    connection.send(result)
  File "C:\...\Python37\lib\site-packages\reapy\tools\network\socket.py", line 59, in send
    self._socket.sendall(length)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
```

username_1: Then navigate there, make a copy of `server.py` as e.g. `server_backup.py` (just to back up the original file). Then please replace line 112 of `server.py`, which contains: `except (KeyError, BrokenPipeError):` with: `except (KeyError, BrokenPipeError, ConnectionAbortedError, ConnectionResetError):` You can now restart REAPER and hopefully it will be enough to remove the last error you reported... (the first one is a `reapy` bug and will soon be fixed). Please let me know if it works! P.S. Still, I'll work on that `reconnect` function when I get time, cause I think it's a nice idea. username_0: Hey, so I've been playing around a bit more with all this, and I'll just post some thoughts. High-level, I'm trying to automate the installation and initialization of Reapy as much as possible, since my external tool depends on it for parts of its functionality. This involves some hacking; manipulating Reaper config files directly, adding the Reapy init command to `__startup.lua` and so on. Where I feel I have the most issues right now is that Reapy cannot be imported without also trying to connect to the Reapy server.
Furthermore, it can only be done once, and that's by doing `import reapy` inside a thread and waiting for it to either succeed or freeze. Having frozen once, it cannot be re-attempted without restarting my application. For that purpose, it would be incredibly helpful to be able to import Reapy without side effects, call something like `reapy.canConnect()` safely and *then* do `reapy.initialize()` or something similar if `canConnect()` returns True. Secondly, anything you can do / explain to make the Reapy installation process as hassle-free as possible for third parties would be a great bonus, like a `reapy.installScripts()` that can be called from the outside, or a `reapy.activateServer()` that can be run from the aforementioned `__startup.lua` (again, without side effects if something fails) to facilitate bootstrapping Reapy without user interaction. Again, just keeping this high-level, but anything that cuts down on the hacks I mentioned above would be very welcome. Thanks again for an invaluable tool! username_1: My feeling is that reapy has to try to connect on first import, because for most use cases, it would be super annoying to have to write

```python
import reapy

if reapy.can_connect:
    reapy.connect()
```

at the top of every script using reapy. Would it work for you if reapy provided a `reconnect` function, while still attempting to connect automatically at first import? Regarding your questions about installation: I don't know Lua and the way it integrates with REAPER at all. I feel `reapy` installation process is already quite simple: one `pip install` and one manual configuration step. If you find any way to remove this manual step, maybe by integrating it into a Lua script, it would be amazing and I'd be glad to integrate it into the project! But I don't think I would know how to do that myself... username_0: Oh, absolutely.
I'd probably want to use it like this:

```python
import time
import reapy  # no side-effects if it fails

def main():
    while not reapy.can_connect():
        # notify user
        time.sleep(10)
        reapy.reconnect(timeout=3)
    do_reapy_stuff()
```

Just from memory, here's how I did the install (will edit with accurate info later)

* Do some black magic $%^&ery to get Reaper to run redistributable Python builds. Can give you details on this somewhere else if you're interested, I'm available at my user name @ gmail.
* Hack `reaper.ini` to set up a web interface, kinda what your installer does.
* Hack `reaper-kb.ini` to add the custom action (again, this is just replicating what your installer does)
* Add a tiny `__startup.lua` script (Reaper looks for and runs it on every launch) that checks for the `server_port` ext_state that Reapy sets. If no such state is found, trigger the Reapy installer action we hacked into `reaper_kb.ini` earlier.

On that note, it would be kinda neat if there was something else I could import rather than just `reapy`, let's call it `reapy.setup`, that contains all the things you do during the user-initialized installation. It would make me duplicate way less code, and we could put all the connection checkers etc in there. I'd imagine the API would look something like:

* `reapy.setup.add_web_port()` (and of course the `remove` function as well)
* `reapy.setup.add_custom_action()`
* `reapy.setup.verify()`

username_1: Ok thanks! I'll try to work on it this weekend! username_1: Hello again, the `reconnect` function has been added to the `master` and will be released as soon as we fix all other points in this issue. Regarding those:

- There already is a `can_connect` function that is actually called `reapy.dist_api_is_enabled()`.
- There already is a module `reapy.config` that contains `create_new_web_interface(port)` and `delete_web_interface(port)`.
- There also are functions `reapy.add_reascript` and `reapy.remove_reascript`.
However, these currently use the ReaScript function `RPR_AddRemoveReaScript`, which means they can not be used from the outside before `reapy` has been installed... This is actually the only thing that prevents a full automatic setup process. Hence I would be super interested in knowing how you manually hack into `reaper-kb.ini` (I didn't know this file existed before!). More specifically, how do you generate the codes for ReaScripts (you know these `RS1ee9bb229dabffe151848d7efa3c10f748e1a1cf `)? Do you just use any random value? Oh and also, you have been mentioning several times that `reapy` has side effects when imported, but I don't think it does, does it? It simply tries to connect, and raises a warning if it fails. username_0: I think I have to go back to the code to verify all this, but essentially I can only `import reapy` once in a project. If it fails, I can never import it again for the lifetime of the session. I also have several instances of `import reapy` freezing up the tool, and requiring me to restart it; I think it's when Reaper is showing an error dialog. You mentioned above that reapy catching more types of exceptions might prevent this, and I think it's a good idea. Again; for my purposes, `import reapy` isn't necessarily the point at which I need to make a connection. A failure to connect is not relevant until I need to ask something from Reaper, at which point the try - fail - reconnect pattern I described above would kick in. The `reconnect()` function is perfect for that! username_1: Thanks! There is still something I don't understand. If I'm not mistaken, you hack into `reaper.ini` and `reaper-kb.ini` during the installation process of your tool. Is this installer a Python program? And do you run it from inside or outside REAPER? If from inside REAPER, then you can directly use `reapy.config.enable_dist_api()` to avoid those two hacks. If from outside REAPER, then how do you locate the two files? 
That's actually the only blocking point to make the whole installation of `reapy` doable from outside REAPER. Using `RPR_get_ini_file()` (only available from inside REAPER as long as reapy distant API hasn't been enabled) is the only way I've found to do it so far. username_0: It's run from outside Reaper, and I just assume them to be in `~/AppData/Roaming/REAPER/` since I don't recall that being something user configurable. My users are all on Windows, but I hope it'd be doable to figure out the corresponding folders on other platforms. username_1: Hey! You can now upgrade reapy and have the `reapy.reconnect()` function... Sorry about the delay! I'm also trying to make the whole installation automatic from outside REAPER, with a post-pip install hook, but this is probably gonna take some more time... username_0: Fantastic! Yeah, I switched to running from the github repo recently, the new functionality definitely helped stabilize the connection. Good luck with the installer, don't forget that someone in my position will wanna run that installer on my client machines (meaning from something like `reapy.config`), not just in my dev environment =) Thanks for the great support! Status: Issue closed username_1: I'll close this one now that #61 deals with command-line installation.
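On the "just assume `~/AppData/Roaming/REAPER/`" point, the locating step can be sketched from outside REAPER as a best-effort guess per platform. This is only a sketch of the assumption discussed above (it ignores portable installs, which is exactly the limitation that `RPR_get_ini_file()` avoids); the function name is hypothetical:

```python
import os
import sys
from pathlib import Path

def guess_reaper_resource_dir():
    """Best-effort guess at REAPER's resource path; portable installs differ."""
    if os.name == "nt":  # Windows: %APPDATA%\REAPER
        base = os.environ.get("APPDATA", str(Path.home() / "AppData" / "Roaming"))
        return Path(base) / "REAPER"
    if sys.platform == "darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / "REAPER"
    return Path.home() / ".config" / "REAPER"  # Linux default
```

An installer running outside REAPER could then look for `reaper.ini` and `reaper-kb.ini` under the returned directory, falling back to asking the user when they are not found.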
HumanCellAtlas/ingest-central
346208369
Title: Dev Ingest UI HttpErrorResponse. Question: username_0: **Spreadsheet not recognized by Ingest upload.** "HttpErrorResponse: Http failure response for (unknown url): 0 Unknown Error" **To Reproduce** Please explain as much as you can about what you were doing when the bug was observed (to help us reproduce the error for testing). Steps to reproduce the behavior: 1. Go to http://ui.ingest.dev.data.humancellatlas.org/submissions/new/metadata 2. Upload TM spreadsheet ( I can provide upon request) 3. Click on Submit 4. See error **Expected behavior** I expect the spreadsheet to upload. **Please explain what actually happened.** I get this error: HttpErrorResponse: Http failure response for (unknown url): 0 Unknown Error **Impact** We won't be able to upload spreadsheets. We also need more informative error messages. Answers: username_1: Hey @username_0, can you send a link to the spreadsheet? username_0: Sent to you via Slack. Thanks, @username_1 username_1: @username_0 Looks like this is an HTTP timeout during the pre-processing of the spreadsheet. Despite the HTTP timeout, the spreadsheet eventually gets uploaded and processed: http://ui.ingest.dev.data.humancellatlas.org/submissions/detail/5b630ddfe6294900068d0d4f/overview @rdgoite @aaclan-ebi We need to discuss a mode of background pre-processing the spreadsheets and reporting the errors to ingest. Large spreadsheets like this won't get processed in <60s username_0: Good to know. Thanks, Rolando! Closing this ticket. Status: Issue closed
numpy/numpy
238944592
Title: `isnat` seems to be missing from the online documentation Question: username_0: I can't find `isnat` [in the list of logic functions in the documentation](https://docs.scipy.org/doc/numpy/reference/routines.logic.html), nor does its page exist [at the expected URL](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isnat.html). In contrast, `isin` and `heaviside` which are also new ufuncs, can be found in their respective expected locations. Answers: username_1: Fixed in #9253 Status: Issue closed username_1: I can't find `isnat` [in the list of logic functions in the documentation](https://docs.scipy.org/doc/numpy/reference/routines.logic.html), nor does its page exist [at the expected URL](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isnat.html). In contrast, `isin` and `heaviside` which are also new ufuncs, can be found in their respective expected locations. username_1: Actually, leaving this open to raise that we need to backport that username_0: Just in case someone stumbles upon the other half of the issue: the referenced PR fixes the docs of both `isnat` and `positive`. username_2: Backport is up, so closing this. Status: Issue closed
felangel/bloc
459933596
Title: When adding Navigator.pushNamed I get this: BlocProvider.of() called with a context that does not contain a Bloc of type ActivationBloc. Question: username_0: Hello sir, I have an issue with a `redirect` to another page and I have no idea what I'm doing wrong. I've already searched for solutions, however nothing has helped me yet. My solution works correctly; however, when I tried to add a redirect to the `login` screen, `Navigator.of(context).pushNamed('/login');`, which is triggered in `BlocListener` when the state is `ActivationStateActivated`, I got this error:

```
flutter: The following assertion was thrown building ActivationScreen(dirty):
flutter: BlocProvider.of() called with a context that does not contain a Bloc of type ActivationBloc.
flutter: No ancestor could be found starting from the context that was passed to
flutter: BlocProvider.of<ActivationBloc>().
flutter: This can happen if the context you use comes from a widget above the BlocProvider.
flutter: This can also happen if you used BlocProviderTree and didn't explicity provide
flutter: the BlocProvider types: BlocProvider(bloc: ActivationBloc()) instead of
flutter: BlocProvider<ActivationBloc>(bloc: ActivationBloc()).
flutter: The context used was: ActivationScreen(dirty)
```

I have this Bloc class

```dart
class ActivationBloc extends Bloc<ActivationEvent, ActivationState> {
  final TerminalRepository terminalRepository;

  ActivationBloc({@required this.terminalRepository})
      : assert(terminalRepository != null);

  @override
  ActivationState get initialState => ActivationStateInit();

  @override
  Stream<ActivationState> mapEventToState(ActivationEvent event) async* {
    if (event is ActivationEventInit) {
      final token = await terminalRepository.getToken();
      print(token);
      if (token != null) {
        yield ActivationStateActivated();
      } else {
        yield ActivationStateInit();
      }
    }
    if (event is ActivationEventActivate) {
      yield ActivationStateLoading();
      try {
        final terminal = await terminalRepository.activate(event.pin);
        await terminalRepository.storeToken(terminal.token);
        yield ActivationStateActivated();
      } catch (err) {
        yield ActivationStateError(error: err.toString());
      }
    }
  }
}
```

Router class

```dart
class Router {
[Truncated]
        title: Text("Activation"),
      ),
      body: BlocBuilder(
          bloc: activationBloc,
          builder: (BuildContext context, ActivationState state) {
            if (state is ActivationStateInit) {
              activationBloc.dispatch(ActivationEventInit());
            }
            if (state is ActivationStateLoading) {
              return Loading();
            }
            return buildBody(context, activationBloc, state);
          }),
    ));
  }
}
```

Do you have any idea what's wrong there? Thank you Answers: username_1: Hi @username_0 👋 Thanks for opening an issue! Are you able to share a link to the full source code? It would be much easier for me to help if I can run and debug the app locally. Thanks! username_0: Hello @username_1, thank you for the quick response! Here is a dummy repo with the example; I've just stubbed some meaningless data. https://github.com/username_0/fl_bloc_playground If you comment out `line 23` in `lib/src/activation/activation_screen.dart` with the Navigator stuff then there won't be any error ¯\_(ツ)_/¯ Thank you username_1: Hey @username_0 no problem!
I took a quick look and it looks like the issue is you're not providing an `ActivationBloc` in the `/login` route. I updated the route to look like:

```dart
Route screenLogin() {
  return MaterialPageRoute(
    builder: (BuildContext context) {
      return BlocProvider(
        builder: (context) =>
            ActivationBloc(terminalRepository: terminalRepository),
        child: ActivationScreen(),
      );
    },
  );
}
```

and `BlocProvider.of<ActivationBloc>(context)` resolves the `ActivationBloc` properly. In the sample you sent, that leads to an infinite loop because `ActivationScreen` always dispatches the init event which triggers the `Navigation.pushNamed`, but hopefully that answers your question. 👍 Closing for now but feel free to comment with additional information/questions and I'm happy to continue the conversation 😄 Status: Issue closed
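Framework aside, the error in this issue boils down to an ancestor lookup failing: `BlocProvider.of<T>(context)` walks up the widget tree from the given context and throws when no ancestor provides `T` — and a newly pushed route starts a subtree with no such ancestor unless the route wraps its screen in a provider. A tiny language-agnostic sketch of that mechanism (all names hypothetical, not Flutter's actual implementation):

```python
class Context:
    """Minimal stand-in for a widget's BuildContext with a parent chain."""
    def __init__(self, parent=None, provided=None):
        self.parent = parent
        self.provided = provided or {}  # type -> instance, like a BlocProvider

    def of(self, cls):
        ctx = self
        while ctx is not None:          # walk up the ancestor chain
            if cls in ctx.provided:
                return ctx.provided[cls]
            ctx = ctx.parent
        raise LookupError(f"no ancestor context provides {cls.__name__}")

class ActivationBloc:
    pass

root = Context(provided={ActivationBloc: ActivationBloc()})
child = Context(parent=root)   # lookup succeeds via the ancestor
orphan = Context()             # e.g. a new route pushed without its own provider

assert isinstance(child.of(ActivationBloc), ActivationBloc)
```

Calling `orphan.of(ActivationBloc)` raises, which is the Python analogue of the assertion in the Flutter log above.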
NuGet/NuGetGallery
482531561
Title: Display exact match search result at the top and with clear label Question: username_0: Thanks @username_1 for the hint at the demo day a while back. We have deprioritized exact match in some cases but we could bring it back with some UI tweaks like npm:

![a](https://user-images.githubusercontent.com/94054/63299806-8048ad00-c28b-11e9-9311-a871f0078c4d.png)

We would need to think through this analogous UI on nuget.org and on VS client. Might need a protocol enhancement. Answers: username_1:

## crates.io

I particularly like the way crates.io does it, because they make it clear something is top-of-the-list because it's an -exact match-, not because it ranked better. It's common to want that exact match, after all.

![{7D437DB1-1A2C-4855-80DE-86BD718D52C8}](https://user-images.githubusercontent.com/17535/63300109-488e3500-c28c-11e9-9a4a-2e35399cefce.png)

## rubygems.org

rubygems.org does this too, but I'm less of a fan of displaying it twice:

![{B431FC9A-A8DE-4CEE-B685-9173CF5933F9}](https://user-images.githubusercontent.com/17535/63300256-9b67ec80-c28c-11e9-9c01-5664ad17ea7e.png)

username_2: I like this idea! I prefer the label (npm style) username_3: Here is another example that would be helped by this work item: https://github.com/NuGet/NuGetGallery/issues/7518 username_4: Or the 500-pound gorilla of popularity-ranking-spam, "Microsoft". Yesterday I had to search for `Microsoft.AspNetCore.Components.Authorization` which currently produces 58,680 hits, and the exact match doesn't show up until page 7 on NuGet.org ... no idea where it shows up in the VS GUI, I didn't have that kind of patience. username_0: I've shipped the first step in this work to our primary region. This means searches on nuget.org and in Visual Studio in non-China should have this first fix. When a search query has an exact match _and_ the package ID contains symbols (dot `.`, underscore `_`, hyphen `-`) then that package is at the top.
Regarding package IDs without symbols, we chose not to put that one always at the top yet because:

1. A lot of our top queries have low download count exact match "false positives". Without clear UI saying "this is at the top because exact match only" our telemetry and user feedback indicates that we shouldn't do this one yet.
1. We don't have our A/B testing infrastructure for search changes ready yet so we wanted to avoid changes that we are less confident about.
1. All of the exact match issues reported since we shipped new search were concerning package IDs with dots. In other words, this small fix covers the reported pain points.

To be clear, exact match means that the search query exactly matches a package ID. If the query is "system text json", System.Text.Json isn't _necessarily_ at the top but, in this case, it is for other scoring reasons. If there are two package IDs in the search query or a prefix of a package ID the improvement I made does not apply. Other relevancy changes are planned to improve these non-exact match cases but they are less urgent.

As always, we warmly welcome input on how we are doing. For the exact match case, please feel free to chime in here. For other search relevancy issues, please respond to https://github.com/NuGet/NuGetGallery/issues/4124. This issue tracks the UI work and putting all exact matches at the top so I will leave it open until we do that. Note the UI is a little challenging because we are thinking about all of the other "markers" or "badges" that may be on a search result so we want to make a UI change that gels well. username_5: Sorry I don't have time to read about the problem you are trying to fix, but I wanted to let you know that I find current search results confusing and quite hard to understand. Totally unrelated packages appear higher in the result without any obvious reason.
Documented in [Screenshot_2019-09-19 NuGet Gallery Packages matching Pkcs11Interop](https://user-images.githubusercontent.com/1163818/65214189-514f7200-daa9-11e9-9290-8933aab07052.png). username_6: I can confirm our 2 most problematic packages Amazon.AspNetCore.Identity.Cognito and Amazon.AspNetCore.DataProtection.SSM are showing up at the top of this list now. username_5: @karann-msft judging solely by your reaction to my previous comment, I'm guessing I wasn't communicating the problem clearly enough. Let me try again. When I searched for `Pkcs11Interop` before this change, I got the `Pkcs11Interop` package as the 1st result and the related package `Pkcs11Interop.X509Store` as 2nd or at most 3rd. When I search for `Pkcs11Interop` now, I get the `Pkcs11Interop` package as the 1st result (no change here) but the related package `Pkcs11Interop.X509Store` is 9th, and there are many unrelated packages between the 1st and 9th place. So from my point of view, this change may have fixed exact match but it messed up the rest of the results. username_2: PS. There is a comparison page (old/new search results) here: https://www.nuget.org/experiments/search-sxs username_3: Hey @username_5, the search service splits inputs like `Pkcs11Interop` into tokens like `Pkcs`, `11`, and `Interop`. All of the "unrelated" results you mention match `Pkcs` or `Interop`, so this behavior is expected. What's happening here is that we're favoring downloads a little too much for this specific query. This is tricky to balance, but we have a few improvements in the works that should help this query. Stay tuned! username_3: This would also help clarify the [`json.net` search](https://www.nuget.org/packages?q=json.net): the first result is the package [`Json.Net`](https://www.nuget.org/packages/Json.Net/) as it is an exact match on the package ID, the second result is the package [`Newtonsoft.Json`](https://www.nuget.org/packages/Newtonsoft.Json/) as its package title is `Json.NET`. /cc @anangaur
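The tokenization username_3 describes can be sketched as follows — a rough approximation of symbol, camelCase, and digit splitting, not the search service's actual analyzer:

```python
import re

def tokenize(package_id):
    """Split a package ID on symbols, then on camelCase and digit boundaries."""
    tokens = []
    for part in re.split(r"[._\-]+", package_id):
        tokens += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]+|[a-z]+|\d+", part)
    return tokens

print(tokenize("Pkcs11Interop"))    # ['Pkcs', '11', 'Interop']
print(tokenize("Newtonsoft.Json"))  # ['Newtonsoft', 'Json']
```

Under such a scheme any package containing a `Pkcs` or `Interop` token matches the `Pkcs11Interop` query, which is why download-count weighting then decides the ordering between them.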
shinbyh/kaist-placeness-api
241728628
Title: Required packages for sub-place placeness extraction Question: username_0: Several packages are used for sub-place placeness extraction. To run everything, the two packages below are required.

1) python-firebase (`$ pip install python-firebase`)
2) konlpy (install following http://konlpy-ko.readthedocs.io/ko/v0.4.3/)

If you want to run only the parts you need without installing them, in main.py:

1) Comment out `import category_classifier`.
2) Comment out `class HotspotPlaceness` and `class FeatureExtraction`.

In that case, no problems related to the two packages above should occur. Status: Issue closed Answers: username_1: This has been reflected in the README.
AzerothShard/AzerothShard-Issues-ita
239476510
Title: Feral Charge - Bear Question: username_0: Hello everyone!

**Description**: Feral Charge - Bear works correctly, but it applies DR (Diminishing Returns) to Entangling Roots.

**Current behavior**: To explain: using Feral Charge - Bear immobilizes the enemy for 4 seconds. If you then try to use Entangling Roots, it lasts only 5 seconds instead of 10 (as if the Bear Charge effect were a kind of root). The same happens the other way around: if I cast Roots first, it lasts 10 seconds, and then the Bear Charge effect lasts only 2 seconds.

**Expected behavior**: There should be no DR between Feral Charge - Bear and Roots.

**Steps to reproduce the problem**: Use Roots first and then Feral Charge - Bear, or vice versa.

**On which realm was it tested:** Main realm!

**When was it tested**: On 28/06/2017 Answers: username_1: Tested and verified the bug; thanks for the report.
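The numbers in the report (10s → 5s, 4s → 2s) are consistent with the standard WotLK-style diminishing-returns progression for effects that wrongly share one DR category; a toy model of that progression (the fix, per the report, is that these two spells should not share a category at all):

```python
# Successive applications within one shared DR category: 100%, 50%, 25%, immune.
DR_FACTORS = [1.0, 0.5, 0.25, 0.0]

def effect_duration(base_duration, prior_applications):
    """Duration after `prior_applications` earlier effects from the same category."""
    factor = DR_FACTORS[min(prior_applications, len(DR_FACTORS) - 1)]
    return base_duration * factor

# Roots after one Feral Charge (if they wrongly share DR): 10s -> 5s
print(effect_duration(10, 1))  # 5.0
# Feral Charge root after Roots: 4s -> 2s
print(effect_duration(4, 1))   # 2.0
```

With the correct behavior each spell would be in its own category, so `prior_applications` for the other spell stays 0 and both effects keep their full duration.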
sp614x/optifine
172463257
Title: Shaders UINT buffers not working Question: username_0: The UINT buffers generate "GL error 0x0506: Invalid framebuffer operation (Fb status 0x8CD6) at pre-useProgram". They may need some program support: https://www.opengl.org/registry/specs/EXT/gpu_shader4.txt Answers: username_1: I figured out how to effectively use floating point buffers as uint buffers: https://github.com/username_1/Ebin-Shaders/blob/829a1d90713bf7773a6c489389f567036f892d5b/shaders/lib/Utility/encoding.glsl#L1 These functions operate identically to the packUnorm functions, however they output a bitwise floating-point cast. In fact, they are slightly faster than the built-in packUnorm functions. One thing to note is that encoded floating point buffers must have filtering 100% disabled. There seems to be some implicit filtering on all screen textures, even when they are not lodded or offset whatsoever. In order to look up encoded 32-bit floating point buffers, you must use texelFetch(). I do not know if uint buffers would have this filtering issue. As far as I'm concerned, this issue is solved. Status: Issue closed username_0: Nice. username_1: It appears there is always some implicit filtering with translucent surfaces. So far it only happens when the fourth component of the encoded vec4 is either 0.0 or 1.0, so I believe it's possible to work around at the moment. The ability to disable alpha blending for certain attachments in gbuffers_water would fix this annoyance.
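The trick username_1 describes — packing four 8-bit channels into one 32-bit word and bit-casting that word to a float — can be reproduced on the CPU with `struct` for illustration. This is a sketch of the idea, not the linked GLSL; note that words whose exponent bits are all ones decode as NaN/Inf, which is one reason filtering and blending on such buffers must be fully disabled:

```python
import struct

def pack_unorm4x8(r, g, b, a):
    """Pack four [0,1] channels into a 32-bit word, bit-cast to float32."""
    word = 0
    for i, channel in enumerate((r, g, b, a)):
        byte = int(round(max(0.0, min(1.0, channel)) * 255.0))
        word |= byte << (8 * i)
    # Reinterpret the integer bits as an IEEE-754 float32 (no conversion).
    return struct.unpack("<f", struct.pack("<I", word))[0]

def unpack_unorm4x8(value):
    """Bit-cast the float back to a word and split out the four channels."""
    word = struct.unpack("<I", struct.pack("<f", value))[0]
    return tuple(((word >> (8 * i)) & 0xFF) / 255.0 for i in range(4))
```

Any filtering between write and read would blend the raw bit patterns of neighboring texels and corrupt every decoded channel — hence the `texelFetch()` requirement.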
kino-ngoo/Instapaper_reading
793192049
Title: Deep work keeps you focused and turns busyness into real productivity Question: username_0: **Deep work keeps you focused and turns busyness into real productivity**

Ruthlessly cut out shallow content, and spare no effort to strengthen depth.

* Deep Work refers to professional activities performed in a distraction-free state of concentration that pushes one's cognitive abilities to their limit; it creates new value for the world, improves personal skills, and is hard to replicate.
* Shallow work (Shallow…

January 25, 2021 at 05:10PM
via Instapaper https://ift.tt/3eMH6bM
api-platform/api-platform
473751829
Title: [GraphQL] does not respect serialized_name option? Question: username_0: I have a Restaurant resource that has some renamed fields in the serialization yaml:

```yaml
App\Entity\Restaurant:
  attributes:
    ...
    pictureUrl:
      serialized_name: picture
      groups: ['restaurant:read', 'restaurant:write']
    openPeriods:
      serialized_name: workHours
      groups: ['restaurant:read', 'restaurant:write']
      max_depth: 1
```

Rest queries are entirely ok with that, but the graphql schema builder does not generate types for those properties. And I cannot use workHours, picture etc in my graphql queries. I tried adding the following config to resource.yaml:

```yaml
resources:
  App\Entity\Restaurant:
    properties:
      workHours:
        subresource: {collection: true, resourceClass: App\Entity\OpenPeriod, maxDepth: 1}
        description: 'The dummy foo'
        readable: true
        writable: true
        readableLink: false
        writableLink: false
        required: false
```

Then propertyNameCollectionFactory was able to recognize the new property, but propertyMetadataFactory returned an empty type in the getResourceObjectTypeFields method. So, the questions are:

1. Is this a correct behavior?
2. What is the preferred way of dealing with such properties?

Answers: username_1: Yes, the GraphQL schema is not built by using the serialization configuration but by using the resource metadata. Maybe you could do what you want by decorating the `PropertyNameCollectionFactory` which is used to determine the properties of a resource. Status: Issue closed username_1: Fixed in 2.6.
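The suggestion above is the standard decoration pattern: wrap the built-in factory and rewrite the property names it yields. In API Platform that would be a PHP service decorating the property-name collection factory; here is just the shape of the idea, sketched in Python with entirely hypothetical names:

```python
class SerializedNameFactory:
    """Decorates an inner property-name factory, mapping to serialized names."""
    def __init__(self, decorated, serialized_names):
        self._decorated = decorated
        self._serialized_names = serialized_names  # e.g. {'pictureUrl': 'picture'}

    def create(self, resource_class):
        # Delegate to the decorated factory, then substitute renamed properties.
        for name in self._decorated.create(resource_class):
            yield self._serialized_names.get(name, name)

class InnerFactory:
    def create(self, resource_class):
        return iter(["id", "pictureUrl", "openPeriods"])

factory = SerializedNameFactory(
    InnerFactory(), {"pictureUrl": "picture", "openPeriods": "workHours"})
print(list(factory.create("App\\Entity\\Restaurant")))
# ['id', 'picture', 'workHours']
```

The decorator keeps the inner factory as the single source of truth and only post-processes its output, which is why it composes cleanly with the framework's own metadata pipeline.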
rails/webpacker
248717506
Title: webpacker is serving stale assets to subsequent runs of test suite Question: username_0: I'm using cucumber paired with capybara for running tests against my Rails app. I've noticed that webpacker is only compiling assets on the first run of the test suite, placing them in `public/packs-test`. On subsequent runs of the test suite, when I change code, I would expect that the results of my tests would change in the event that I introduce some breaking change in my app code that should cause tests to fail. I'm working around this by creating the following initializer code to wipe out the `public/packs-test` directory when the app loads: ``` require "fileutils" if Rails.env.test? config = YAML.load_file(File.join(Rails.root, "config/webpacker.yml")) packs_path = config[Rails.env]['public_output_path'] packs_dir = File.join(Rails.root, "public", packs_path) FileUtils.rm_rf(packs_dir) end ``` Now, when the test suite runs, webpacker will recompile the assets and I get the results that I would expect from my test suite. I think this is something that probably could/should be integrated into webpacker. I haven't looked at *how* yet, but I can figure it out if this issue is accepted. Thanks! Answers: username_1: @username_0 The assets should be recompiled if files that are under tracked paths are changed. Are you using latest master? username_0: @username_1 yes, I am using the latest master. In `Rails.env == 'development'`, where I'm running the webpack dev server, assets do get recompiled after a change. However, in test, where webpack dev server is not being used, it doesn't happen. It seems that all that is being checked for is whether something exists in `public/packs-test`, and a check isn't being made to see whether recompilation is necessary. 
username_1: @username_0 We do perform a check to see if an asset has changed and then perform recompilation - https://github.com/rails/webpacker/blob/master/lib/webpacker/compiler.rb#L14 username_2: I can confirm the problem occurs in the latest release: https://rubygems.org/gems/webpacker/versions/2.0. Using the latest master solved the problem in my case. username_0: Ah, wonderful. @username_2 thank you for confirming that. I made the wrong assumption that the version on rubygems was on par with what's in master. I'm consuming this library via the gem, so that would explain things. Thanks for clarifying and taking a look. Status: Issue closed username_3: I am experiencing this problem using the latest version of master. The only way I can get it to recompile `public/packs-test/application.js` is if I delete `tmp/.last-compilation-digest`. The files I am working with are located `app/javascript/.../...`. There must be something wrong in the config, but I can't figure out what it is. username_3: By the way, I am using the `rails test:system` command.
carloscuesta/gitmoji-cli
641065601
Title: [Feature discussion] Rebase and --no-edit behaviour Question: username_0: Hello @username_1! I would like to reopen issue #197 with a larger discussion because, unfortunately, the current behaviour makes it difficult to use git workflows which are semi-linear or linear and require rebasing. *Use-case:* Usually, my workflow requires branching from master and then rebasing before opening my MR, both to compact my history and to make sure the current changes do not conflict with my branch. Then, before merging, I rebase one last time so that the history stays semi-linear. *Possible solutions:* 1. Detecting --no-edit 2. An option to disable gitmoji temporarily, or during rebases. 3. Pre-filling the CLI with the current commit message if detected, making pressing enter 3 times all that is needed. *My thoughts on each solution:* 1. Seems impossible currently, but maybe it is a use-case for improving mainstream git? Knowing whether the --no-edit flag is on from the prepare-commit-msg hook may be a legitimate question. 2. Nice, but does not make the workflow that much better. 3. Seems the easiest solution, and is more elegant than CTRL+C, but not by a lot. Answers: username_1: The problem is specifically with the hook, right? I also use a rebase workflow and have no problems using gitmoji-cli as I would normally do. username_1: We have a PR for this but it is currently blocked due to: https://github.com/username_1/gitmoji-cli/pull/366#issuecomment-626140318 Status: Issue closed username_1: Solved by @Jeremie-Chauvel, I'm releasing it now!
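For solution 2, one common heuristic (a Python sketch, not gitmoji-cli's actual implementation) is to treat the presence of git's rebase state directories as "a rebase is in progress" and skip the commit-message prompt:

```python
import os

def in_rebase(git_dir=".git"):
    """Heuristic: git creates one of these directories inside .git
    while a rebase is in progress, and removes it when the rebase ends."""
    return any(
        os.path.isdir(os.path.join(git_dir, d))
        for d in ("rebase-merge", "rebase-apply")
    )
```

A prepare-commit-msg hook could call something like this and exit early when it returns true, leaving rebased commit messages untouched.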
ranjian0/building_tools
1163816855
Title: Clamp value Question: username_0: Hello, I've been using the addon for a bit and the clamp on the depth value was bothering me, so I tried to change the max value in the btool scripts to a higher value, but it doesn't seem to fix the problem. I'm wondering if there might be something I haven't taken into consideration other than changing the max value.
![Capture](https://user-images.githubusercontent.com/93084943/157432738-a1ad00ee-11f8-4ea8-b0bb-17ac41b5d664.PNG)
Answers: username_1: What was the problem with the old clamp value? Can you elaborate a bit more? Most of the values in building tools have extra clamping logic in the operators to make them work well with each other. This is mostly done to maintain reasonable proportionality for all the tools/elements. If you could point to your use case, that could help a bit more in tracking down what you are struggling with.
wbazant/wbazant.github.io
346683759
Title: NGINX config for development Question: username_0: 
```
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8080;
        server_name localhost;

        location /jbrowse-data {
            location ~ .*trackList.json {
                alias /Users/wb4/nginx/test.json;
            }
            proxy_pass http://test.parasite.wormbase.org/jbrowse-data;
        }

        location /jbrowse {
            location /jbrowse/dist {
                root /Users/wb4/dev;
            }
            proxy_pass http://test.parasite.wormbase.org/jbrowse;
        }
    }

    include servers/*;
}
```
poooi/poi
396217393
Title: Battle record questions Question: username_0: Windows 10. Is the node view in the battle records still unusable? If so, how can I generate KC3 replay records? I know there are tools that can convert poi records to KC3, but I'd like to ask first whether I should wait for an update or whether this feature is no longer planned. Also, saving screenshots of battle records seems to have problems too: https://wx4.sinaimg.cn/orj360/82e3931bly1fywldqr4u6j20u01xgdnp.jpg Answers: username_1: close as duplicate to https://github.com/poooi/plugin-battle-detail/issues/58 https://github.com/poooi/plugin-battle-detail/issues/59 Status: Issue closed
ManageIQ/container-httpd
667689233
Title: Enable ppc64le support Question: username_0: The Dockerfile has x86_64 references and currently fails at https://github.com/ManageIQ/container-httpd/blob/797e493cca45af2628c372ccd943dd7767b1c5d3/Dockerfile#L3 Proposed Solution: We can probably have ARCH as a build-arg defaulting to x86_64. Also, this image is based on ubi:8.1; should it be updated to use ubi8.2 with centos-8 repositories like, say, https://github.com/ManageIQ/manageiq-pods/blob/d479be07d23825f183f4c8b3416ee4a847239bbf/images/manageiq-base/Dockerfile#L48<issue_closed> Status: Issue closed
Neos21/Neos21
413205597
Title: OCI Icon Name Question: username_0: - The three-dot (ellipsis) icon is called the "Action icon"
  - https://docs.oracle.com/cd/E97706_01/Content/GSG/Tasks/terminating_resources.htm
- The four-horizontal-line icon where only the top line is long is called the "Menu icon"
  - https://docs.oracle.com/en/cloud/paas/psmon/access-administration-console-platform-service-software.html
  - https://docs.oracle.com/en/cloud/paas/psmon/view-association-details.html<issue_closed> Status: Issue closed
mapbox/tilejson-spec
359252053
Title: Include source and source_name vector_layers keys Question: username_0: When more than one tileset is composited, vector_layers get merged into a single list. `source` and `source_name` fields are useful for identifying which vector_layers belong to which member of a composite source. ```js
// An example vector layer entry with `source` and `source_name`:
{
  description: string,
  fields: { [string]: string },
  id: string,
  maxzoom?: number,
  minzoom?: number,
  source: string,      // Matches id of parent tileset
  source_name?: string // Matches name of parent tileset
}
``` This does bring up a bigger question for me. I find the tileJSON spec's ability to successfully describe composited tileJSON to be lacking. I want to know more details about each tileset in my composite. A nested data structure for composited tileJSON that includes the full original tileJSON for each composited tileset would be a much more convenient document to work with, for example.
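The proposed `source`/`source_name` tagging could be implemented by whatever code composites the tilesets. A hedged sketch (the function name and input shape are illustrative, not part of the TileJSON spec):

```python
def merge_vector_layers(tilesets):
    """Flatten vector_layers from several tilesets into one list, tagging each
    entry with the id/name of the tileset it came from (the proposed keys)."""
    merged = []
    for ts in tilesets:
        for layer in ts.get("vector_layers", []):
            entry = dict(layer)                  # don't mutate the original
            entry["source"] = ts["id"]           # matches id of parent tileset
            entry["source_name"] = ts.get("name")  # matches name of parent tileset
            merged.append(entry)
    return merged
```

With this, a consumer of the composited document can still answer "which member did this layer come from?" without the nested structure the issue asks about.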
Innologica/vue2-daterange-picker
992882902
Title: I get an error "Missing required prop: DateRange" Question: username_0: I tried the code in the documentation in this section: https://innologica.github.io/vue2-daterange-picker/advanced/#slots-demo . But I got an error "Missing required prop: DateRange". Then, I changed **v-model** with **date-range**, but it gives me an error "cannot read property ‘_c’ of undefined". Is there something I've missed? ``` <template> <date-range-picker v-model="dateRange"> <!-- header slot--> <div slot="header" slot-scope="header" class="slot"> <h3>Calendar header</h3> <span v-if="header.in_selection"> - in selection</span> </div> <!-- input slot (new slot syntax)--> <template #input="picker" style="min-width: 350px;"> {{ picker.startDate | date }} - {{ picker.endDate | date }} </template> <!-- date slot--> <template #date="data"> <span class="small">{{ data.date | dateCell }}</span> </template> <!-- ranges (new slot syntax) --> <template #ranges="ranges"> <div class="ranges"> <ul> <li v-for="(range, name) in ranges.ranges" :key="name" @click="ranges.clickRange(range)"> <b>{{ name }}</b> <small class="text-muted">{{ range[0].toDateString() }} - {{ range[1].toDateString() }}</small> </li> </ul> </div> </template> <!-- footer slot--> <div slot="footer" slot-scope="data" class="slot"> <div> <b class="text-black">Calendar footer</b> {{ data.rangeText }} </div> <div style="margin-left: auto"> <a @click="data.clickApply" v-if="!data.in_selection" class="btn btn-primary btn-sm">Choose current</a> </div> </div> </date-range-picker> </template> <script> import DateRangePicker from "../../../src/components/DateRangePicker"; export default { name: "SlotsDemo", components: {DateRangePicker}, data () { let startDate = new Date(); let endDate = new Date(); endDate.setDate(endDate.getDate() + 6) return { dateRange: {startDate, endDate} } }, filters: { dateCell (value) { let dt = new Date(value) return dt.getDate() }, [Truncated] } } } </script> <style scoped> .slot { background-color: #aaa; 
padding: 0.5rem;
color: white;
display: flex;
align-items: center;
justify-content: space-between;
}
.text-black {
color: #000;
}
</style>
```
Answers: username_0: I just realised that my project is built with Vue 3. Does this component only support projects built with Vue 2? username_1: Yes, currently only Vue 2 is supported. I'm trying to make it work on both Vue 2 and 3, but currently it is not working.
dalingk/incursion-tracker
429012778
Title: Unable to use in Firefox private mode Question: username_0: When trying to use it in a Firefox private window, the application gives a `Failed to connect to cache database.` error. This happens because IndexedDB is [disabled in private windows](https://bugzilla.mozilla.org/show_bug.cgi?id=781982) and this application requires IndexedDB to cache requests to Eve's [ESI API](https://esi.evetech.net/ui/).<issue_closed> Status: Issue closed
pydata/numexpr
151348677
Title: numexpr fails to install if current path contains `%` in its name -- breaks git flow Question: username_0: Git flow branch naming conventions use branches like `feature/xxx`, `release/xxx`, and CI systems will usually check out the branch as a folder, converting it using HTTP escaping like `feature%2Fxxx`. Surprise: now if you want to install `numexpr` it will fail with
```
File "/usr/lib/python3.4/configparser.py", line 371, in before_get
self._interpolate_some(parser, option, L, value, section, defaults, 1)
File "/usr/lib/python3.4/configparser.py", line 421, in _interpolate_some
"found: %r" % (rest,))
configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%2Fdevops/workspace/env/lib:/usr/local/lib:/usr/lib:/usr/lib/x86_64-linux-gnu'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-3fjvjwcl/numexpr/
```
Full log https://gist.github.com/username_0/b373062dd45de92735c7482b2735c5fb Answers: username_0: Here is a very easy way to replicate the bug:
```
virtualenv -p python3 --no-site-packages env
source env/bin/activate
pip install numexpr
```
The problem cannot be reproduced with other packages, like numpy or pandas. username_0: And my impression is that this answer provides the solution to this bug http://stackoverflow.com/questions/14340366/configparser-and-string-with -- use RawConfigParser instead of ConfigParser. username_0: Or even easier, check out the code somewhere where you have a `%2F` as part of the full path and run:
```
python setup.py egg_info
```
username_1: +1 to changing to RawConfigParser - it seems unnecessary to add yet another variable namespace when we have so many already! username_2: +1 for changing that too, but I don't see any ConfigParser in setup.py. @username_0 would you like to provide a PR? Thanks. username_0: I am still investigating the bug and looking for the root cause and maybe ways to work around it.
So far the only package that has encountered it is numexpr. username_0: As you can see, I made two PRs: one against master and another against the latest maintenance branch. Initially I wanted to create a patched numpy source tar.gz file myself and use it, but somehow it seems that it doesn't want to build the source distro - see https://github.com/numpy/numpy/issues/7574 username_3: Closing as fixed. Status: Issue closed
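The RawConfigParser suggestion from this thread is easy to verify in isolation. This sketch reproduces the exact `InterpolationSyntaxError` from the report (the path below is made up for illustration):

```python
from configparser import ConfigParser, RawConfigParser, InterpolationSyntaxError

# A config value containing a bare '%', as produced by CI checkouts of
# branches like feature/xxx -> feature%2Fxxx.
cfg_text = "[dirs]\nlibrary_dirs = /workspace/feature%2Fxxx/env/lib\n"

raw = RawConfigParser()
raw.read_string(cfg_text)
# RawConfigParser performs no interpolation, so '%' is just a character:
assert raw.get("dirs", "library_dirs") == "/workspace/feature%2Fxxx/env/lib"

interp = ConfigParser()  # uses BasicInterpolation by default
interp.read_string(cfg_text)
try:
    interp.get("dirs", "library_dirs")
    raised = False
except InterpolationSyntaxError:  # "'%' must be followed by '%' or '('"
    raised = True
```

So switching the setup script to `RawConfigParser` (or escaping `%` as `%%`) sidesteps the interpolation step that chokes on the escaped branch name.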
bartekch/saos
57857818
Title: Should empty columns be always extracted? Question: username_0: Consider this example
```r
foo1 <- search_judgments(courtType = "SUPREME", limit = 1, judgmentDateFrom = "2014-01-01")
foo2 <- search_judgments(courtType = "COMMON", limit = 1, judgmentDateFrom = "2014-01-01")

extract(foo1, "division")
extract(foo2, "division")

foo3 <- c(foo1, foo2)
extract(foo3, "division")
```
In each case different columns are returned, because the sources of the judgments differ (common vs. supreme). Should this be unified, i.e., should we always return a data frame with all columns (as in the third case)?
pytorch/pytorch
1035085860
Title: Exponential distribution constraint should be non-negative rather than positive Question: username_0: ## 🐛 Bug The exponential distribution has a constraint of `support = constraints.positive`, which doesn't allow for a value of zero, even though zero should be a valid value. Instead, the constraint should be `support = constraints.nonnegative` ## To Reproduce Steps to reproduce the behavior: `torch.distributions.Exponential(0.5).log_prob(torch.Tensor([0]))` The above line of code produces an error: ``` ValueError: Expected value argument (Tensor of shape (1,)) to be within the support (GreaterThan(lower_bound=0.0)) of the distribution Exponential(rate: 0.5), but found invalid values: tensor([0.]) ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> Rather than produce the error, the output should be: `tensor([-0.6931])` <!-- A clear and concise description of what you expected to happen. --> ## Environment PyTorch version: 1.10.0+cpu Is debug build: False CUDA used to build PyTorch: Could not collect ROCM used to build PyTorch: N/A OS: Microsoft Windows 10 Enterprise GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.8.6rc1 (tags/v3.8.6rc1:08bd63d, Sep 7 2020, 23:10:23) [MSC v.1927 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.19041-SP0 Is CUDA available: False CUDA runtime version: 10.2.89 GPU models and configuration: GPU 0: GeForce GTX 1650 Nvidia driver version: 462.31 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.21.3 [pip3] torch==1.10.0 [pip3] torchvision==0.11.1 [conda] Could not collect cc @fritzo @neerajprad @alicanb @nikitaved
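Independent of PyTorch, a quick sanity check that the exponential log-density is perfectly well-defined at zero (a plain-math sketch with a made-up helper name, not the library's implementation):

```python
import math

def exponential_log_prob(rate, x):
    """Log-density of the exponential distribution: log(rate) - rate * x.
    Well-defined for any x >= 0, including x == 0."""
    if x < 0:
        raise ValueError("exponential support is [0, inf)")
    return math.log(rate) - rate * x

print(round(exponential_log_prob(0.5, 0.0), 4))  # -0.6931
```

At x = 0 this reduces to log(rate), which for rate = 0.5 is the `-0.6931` the report expects, supporting the argument that the support constraint should admit zero.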
goaop/framework
163527311
Title: __PHP_Incomplete_Class sometimes instead of annotation Question: username_0: I've been trying to deal with annotation values in an around aspect and got strange behaviour. In my aspect:
```php
<?php

namespace Stom\Api\Util\Aop\Aspect;

use Annotation\Cacheable;
use Go\Aop\Aspect;
use Go\Aop\Intercept\MethodInvocation;
use Stom\Api\Config\Application;
use Exception;
use Go\Lang\Annotation\Around;

/**
 * Caching aspect
 */
class CachingAspect implements Aspect
{
    /**
     * This advice intercepts the execution of cacheable methods
     *
     * The logic is pretty simple: we look for the value in the cache and if we have a cache miss
     * we then invoke original method and store its result in the cache.
     * @see Around
     * @param MethodInvocation $invocation Invocation
     *
     * @Around("@execution(Annotation\Cacheable)")
     *
     * @return mixed
     */
    public function aroundCacheable(MethodInvocation $invocation)
    {
        $cache = null;
        try {
            $cache = Application::getMemCache();
        } catch (Exception $ex) {
            var_dump('NO_CACHE');
        }
        if ($cache !== null) {
            $obj = $invocation->getThis();
            $class = is_object($obj) ? get_class($obj) : $obj;
            $key = $class . ':' . $invocation->getMethod()->name;
            $key .= ':' . serialize($invocation->getArguments());

            $result = $cache->get($key);
            if ($result === false) {
                var_dump($invocation->getMethod()->getAnnotations());
                $time = $invocation->getMethod()->getAnnotation('Annotation\Cacheable');
                $result = $invocation->proceed();
                var_dump($time);
                $cache->set($key, $result);
                echo('no');
            }
            echo ('cache');
            return $result;
        } else {
            echo ('nullcache');
            return $invocation->proceed();
[Truncated]
```
The first time it dumps as expected, but on subsequent runs the annotation is wrapped in __PHP_Incomplete_Class:
```php
array(1) {
  [0]=>
  object(__PHP_Incomplete_Class)#394 (3) {
    ["__PHP_Incomplete_Class_Name"]=>
    string(20) "Annotation\Cacheable"
    ["time"]=>
    int(10)
    ["value"]=>
    NULL
  }
}
```
Why does this happen?
ubuntu 16.04 x64 PHP 7.0.4-7ubuntu2.1 (cli) ( NTS ) Answers: username_1: Hello, try to set the unserialize_callback_func setting to your composer class loader callback. In this case, PHP will load missing classes before unserializing data. username_0: Thank you for the fast reply! But could you provide details, please? I don't know about unserialize_callback_func or how I should use it. username_1: If you use only composer, then add this line somewhere at the front-controller/bootstrap file:
```php
// load default composer loader
$loader = include __DIR__.'/../vendor/autoload.php';
ini_set('unserialize_callback_func', $loader);
```
username_1: Additional information is available on the php.net site: http://php.net/manual/en/var.configuration.php#unserialize-callback-func username_0: Ok. I'm using composer only. I tried your suggestion and it doesn't help. It still works the first time only; all other times it returns Incomplete_Class. username_1: Could I ask you to step into the annotation class loading/unserialization? Is the autoloader triggered for your class or not? username_0: I found the problem. It was in my project folder structure. The folder path for the annotation was: Stom\Api\Util\Aop\Annotation\Cacheable.php But I had declared the namespace as "Annotation". After I changed it to "Stom\Api\Util\Aop\Annotation", the problem was solved. Is strictly following the folder structure mandatory for unserialization? Status: Issue closed username_1: Sorry for the late reply. I think your question isn't related directly to the folder structure and the unserialization process; it's more related to class loading: if a class could not be loaded, then it will be replaced by an incomplete class. That's it )
jlippold/tweakCompatible
491430919
Title: `Safari Plus` working on iOS 12.1.1 Question: username_0: ``` { "packageId": "com.opa334.safariplus", "action": "working", "userInfo": { "arch32": false, "packageId": "com.opa334.safariplus", "deviceId": "iPhone9,4", "url": "http://cydia.saurik.com/package/com.opa334.safariplus/", "iOSVersion": "12.1.1", "packageVersionIndexed": true, "packageName": "Safari Plus", "category": "Tweaks", "repository": "BigBoss", "name": "Safari Plus", "installed": "1.7.5-3", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "com.opa334.safariplus", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Various enhancements to Safari!", "latest": "1.7.5-3", "author": "opa334", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed
kquiet/auto-browser
401164284
Title: Return CompletableFuture for executeAction() and executeComposer() of ActionRunner Question: username_0: [CompletableFuture](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html) is much more usable than [Future](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Future.html). Answers: username_0: ActionComposer & ActionRunner have been refactored. Status: Issue closed
danielgindi/Charts
412775727
Title: Have 2 or more thresholds for one graph Question: username_0: ![schwellwert-diagramm_app](https://user-images.githubusercontent.com/35031111/53149445-b4d8f600-35d3-11e9-823d-db302f018696.JPG)
I have uploaded an image of the chart I want to implement. I have already done everything; all that remains is the chart's LimitLine colour with the range (dynamic value), so please help me.
## Charts Environment
**Charts version/Branch/Commit Number:**
**Xcode version:** 10.1
**Swift version:** 4
**Platform(s) running Charts:**
**macOS version running Xcode:**
## Demo Project
ℹ Please link to or upload a project we can download that reproduces the issue.
Answers: username_1: ask on stack overflow if it's a 'how-to' question. I can barely understand. Status: Issue closed
metatron-app/metatron-discovery
703342857
Title: Change temporary datasource spec Question: username_0: **Describe the bug** A new version needs to set the "temporary" parameter in the request body, not as a URL parameter. **To Reproduce** Steps to reproduce the behavior: Metatron Discovery should create a permanent datasource for HA reasons even when you create a temporary datasource. Metatron Discovery currently creates a temporary datasource. **Expected behavior** **Screenshots** **Desktop (please complete the following information):** **Additional context**<issue_closed> Status: Issue closed
fabricjs/fabric.js
960100
Title: Disable dragging globally Question: username_0: Is there a way to toggle whether selection/dragging is enabled? Answers: username_1: 
```js
var fcanvas = new fabric.Canvas(canvasId);
var oElements = fcanvas.getObjects();
$.each(oElements, function(index, element) {
  element.lockMovementX = true;
  element.lockMovementY = true;
  element.lockScalingX = true;
  element.lockScalingY = true;
  element.lockRotation = true;
  element.hasControls = element.hasBorders = false;
});
```
username_2: thx bud, that works, also the vanilla JS equivalent:
```js
const allObjects = fabricRef.current.getObjects();
allObjects.forEach((object) => {
  object.lockMovementX = true;
  object.lockMovementY = true;
  object.lockScalingX = true;
  object.lockScalingY = true;
  object.lockRotation = true;
  object.hasControls = object.hasBorders = false;
});
```
ProcessMaker/screen-builder
478570997
Title: Record List Question: username_0: Steps to Reproduce: Create a screen with a record list. Configure it as specified in the documentation. Current Behavior: The record list won't work; 2 warnings appear in the console: ![image](https://user-images.githubusercontent.com/535300/62720083-21974f80-b9be-11e9-8dae-bd4adabd1cd6.png)
Expected Behavior: The record list should work as specified https://www.dropbox.com/s/2ajqpns0satjpyl/RecordController.mp4?dl=0 Answers: username_1: @username_0 can you post a link to the documentation? I'm not having any clear issues with the record list. Here is a video of it working for me: https://www.dropbox.com/s/7z3vr0jtu5e9ux8/record_list_working_03.mov?dl=0. **Note:** The record list was missing a couple of inspector fields, so I had to test with the latest changes from https://github.com/ProcessMaker/screen-builder/pull/329. The deprecation warning is just a warning for when/if we upgrade the component; it's not currently the cause of any errors. username_0: @username_1 the record list should not contain the data source options. It should not have been changed and needs to be reverted. Status: Issue closed
gitbucket/gitbucket
477502799
Title: Invalid ssh key message when trying to add a key Question: username_0: ### Before submitting an issue to GitBucket I have first: ## Issue **Impacted version**: GITBUCKET_VERSION | 4.31.2 **Deployment mode**: Apache HTTPD AJP <-> Tomcat 8 **Problem description**: Unable to add a public SSH key to my profile. It keeps giving me "invalid key" with no indication as to the nature of the actual error. My key from id_rsa.pub is in the form of: ssh-rsa AAAA...XULJ m<PASSWORD>@m<PASSWORD>-desktop Answers: username_1: You can see debug messages by changing the log level of GitBucket: https://github.com/gitbucket/gitbucket/blob/master/src/main/scala/gitbucket/core/ssh/SshUtil.scala See the following documentation to configure GitBucket's logging system: https://github.com/gitbucket/gitbucket/wiki/Tracing-and-logging username_2: same error here username_0: We ended up restarting our full Tomcat server, as the Bouncy Castle stuff wasn't loading properly. We are considering switching to the Docker version to deal with the classpath issues and Tomcat. Status: Issue closed
department-of-veterans-affairs/vets.gov-designpattrns
193887045
Title: Reclaim the H2! Question: username_0: H2 was formerly reserved for the page headings on facility locator, playbook, blog, etc. It would be nice to have an H2 for use in distinguishing content hierarchy, both from a visual and accessibility standpoint. This will likely require running through content templates and identifying how the hierarchy will need to change to accommodate H2, as well as apps (Rx, SM, etc) where it makes sense.
konishi-project/konishi
398553789
Title: Security of Konishi Question: username_0: On GitHub, the system complains about insecure packages. In npm, there are complaints. If I look at how the password is plain text at the registration step, I'm internally complaining. And a /p/ user looked through the site and said that within minutes he found all kinds of vulnerabilities. I'm reading up on security, but I'm just generally a noob in all tangent fields here. We need to fix a lot of this before we release an actual beta. Just making this issue so it's on everyone's radar. If anyone wants to do a scan and report the security issues (possibly with solutions for how to fix them), that'd be great. Answers: username_1: Look up HTTPS. Status: Issue closed username_2: One minor thing, but since I ran into it right away I'll just throw it out there - when trying to log in with incorrect credentials, you return an error that's too specific. When I input an e-mail address that isn't yet in the system, you specifically return an error saying: "The email you have entered does not match any account.", while using a correct address with a wrong password yields a "Failed to log in, password may be incorrect." error.
This opens you up a bit to exposing user creds, since I am able to tell whether a person does indeed have an account on the page, which in turn drastically reduces my attack time should I try doing something funny. A better error is more general, along the lines of "The e-mail or password you entered is incorrect.". This makes the output fuzzier, and while it does slightly hinder ease-of-use for the end user, it's a common practice. I'll try to help out more on security, but I'm also just learning about that stuff, so it's going to be a fun ride. username_1: We've discussed a similar issue before, but the thing here is that an attacker can simply go to the registration page, fill out the details, and check whether that email is used. This is done for UX reasons. Ultimately, it's still the backend's job to make sure that the security is tight. You can also use other tools to call the API endpoints to check whether an email is used, etc.
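The generic-error advice in this thread can be sketched as a tiny helper (illustrative only — this is not Konishi's actual login code):

```python
def login_error_message(user_exists, password_ok):
    """Return one generic message for every failure mode so the response
    doesn't reveal whether the account exists (prevents user enumeration)."""
    if user_exists and password_ok:
        return None  # success: no error message
    return "The e-mail or password you entered is incorrect."
```

Both "unknown e-mail" and "wrong password" produce byte-identical responses, so an attacker can no longer distinguish them from the login endpoint alone.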
saltstack/salt
73641300
Title: grains.append should work at any level of a grain Question: username_0: I think we should be able to append values to a list at any level of a grain. Don't you? An example: Provided this grain:
```
list:
  - val1
  - val2
```
The command `salt-call grains.append 'list' 'val3'` will update the grain to:
```
list:
  - val1
  - val2
  - val3
```
With this grain, however:
```
mygrain:
  var: somevalue
  list:
    - val1
    - val2
```
the command `salt-call grains.append 'mygrain:list' 'val3'` will give this output, which is correct, IMO:
```
mygrain:list:
  - val1
  - val2
  - val3
```
but the grain `mygrain` is still:
```
mygrain:
  var: somevalue
  list:
    - val1
    - val2
```
There is no new grain `mygrain:list`, which is a good thing. Answers: username_0: As a side question: maybe I shouldn't use nested grains? username_1: @username_0, nested grains should be supported. I think you need to do a [module refresh](https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.refresh_modules) after updating grains so that `grains.get` will be updated. I do not know why `grains.item` does not need this or even why both these redundant functions exist. :-) username_0: Well, the fact is that the `/etc/salt/grains` file is updated and contains:
```
mygrain:
  list:
  - val1
  - val2
  var: somevalue
mygrain:list:
- val1
- val2
- val3
```
So I don't think it's a matter of a module refresh. I have to add that I'm using salt 2014.7.5. username_0: The difference between `get` and `item` is that the latter doesn't seem to be aware of nested grains. Actually, I can obtain much the same results with `get` and the `delimiter` parameter:
```
# salt-call grains.get 'mygrain:list' '' ','
local:
    - val1
    - val2
    - val3
# salt-call grains.get 'mygrain,list' '' ','
local:
    - val1
    - val2
```
username_1: @username_0, thanks for your work on this. username_2: @username_0 If I'm reading this all correctly, it looks like you've got this all solved. (And thank you for your work!)
I'll go ahead and close this since the PR has been merged. If that's not correct, let us know and we'll re-open this. Thanks! Status: Issue closed username_0: That's good for me. However, reading http://docs.saltstack.com/en/latest/topics/development/contributing.html I understand that the change will be merged to the develop branch at some point. Is that right? Would it not be best to merge it before closing the issue? Or do you merge the release branch into `develop` in batches from time to time? username_2: We forward-merge from release branches into develop and we generally close issues when the PR against the release branch is merged. We merge-forward periodically, usually once a day. I'd expect to see this in develop by tomorrow at the latest. username_0: OK, that's cool.
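The behaviour requested in this issue — appending at any level of a grain via a colon-delimited path — can be sketched in plain Python (illustrative; this is not Salt's actual implementation):

```python
def nested_append(grains, path, value, delimiter=":"):
    """Append `value` to the list found at a delimiter-separated path,
    mutating the nested dict in place instead of creating a flat
    'mygrain:list' top-level key. Intermediate keys are created as dicts;
    the final key must be (or becomes) a list."""
    keys = path.split(delimiter)
    node = grains
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node.setdefault(keys[-1], [])
    node[keys[-1]].append(value)
    return grains
```

Applied to the example above, `nested_append(grains, "mygrain:list", "val3")` extends `mygrain.list` rather than writing a literal `mygrain:list` key.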
stomp-js/ng2-stompjs
396625577
Title: Stomp service not connecting to Spring. It's trying to connect and remains in the connecting state, not moving to connected. Please check images. Question: username_0: I am following the Angular 6 demo with Spring Boot, but the console shows it trying to connect and remaining in the connecting state, never moving to the connected state.
![1](https://user-images.githubusercontent.com/16814388/50789968-70bdba80-12df-11e9-935d-df9f3763de0c.PNG)
![2](https://user-images.githubusercontent.com/16814388/50789991-7fa46d00-12df-11e9-8df6-8fbdc6ba3795.PNG)
Answers: username_0: fixed thanks Status: Issue closed
telstra/open-kilda
391706654
Title: Document meter setting logic for data and default flows on all kinds of switches Question: username_0: As the metering logic has grown quite complex and is affected by multiple hardware constraints, we need some kind of source of truth here. A baseline doc may include the following topics: - Re-calculation of default flow rates and burst rates on Centec switches - Limitations of the max burst size on Centec switches - "Fuzzy" burst size limits on Noviflow switches - Expected behaviour on Open vSwitch
VSCodium/vscodium
580772588
Title: [MacOS] Random crashes sometimes Question: username_0: I recently switched over from VSCode and the app crashes sometimes, which wasn't happening with the original (MS) version. I have exactly the same settings and extensions installed as in VSCode, which makes me think it's a VSCodium issue. It doesn't happen often so it's not that much big of a deal, but a crash is still a crash. This last time it happened I used the crash report function in macOS and here's some info from it, don't know if that tells you something useful: ``` Crashed Thread: 0 CrBrowserMain Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000a20 Exception Note: EXC_CORPSE_NOTIFY Termination Signal: Segmentation fault: 11 Termination Reason: Namespace SIGNAL, Code 0xb Terminating Process: exc handler [87806] ``` Ask for any info and I'll provide if I know where to find it and also if there's some way to get more information about the crash after it happens so I can provide it the next time it happens. Answers: username_0: ``` Process: Electron [20658] Path: /Applications/VSCodium.app/Contents/MacOS/Electron Identifier: com.visualstudio.code.oss Version: 1.43.0 (1.43.0) Code Type: X86-64 (Native) Parent Process: ??? 
[1] Responsible: Electron [20658] User ID: 501 Date/Time: 2020-03-16 00:18:43.789 +0100 OS Version: Mac OS X 10.14.6 (18G87) Report Version: 12 Anonymous UUID: E115DC13-87E2-5EDC-EBD5-D97C08EEF60E Sleep/Wake UUID: 24704FC5-47B6-47E5-88F7-5DC7597838C9 Time Awake Since Boot: 980000 seconds Time Since Wake: 130000 seconds System Integrity Protection: enabled Crashed Thread: 0 CrBrowserMain Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000 Exception Note: EXC_CORPSE_NOTIFY Termination Signal: Segmentation fault: 11 Termination Reason: Namespace SIGNAL, Code 0xb Terminating Process: exc handler [20658] VM Regions Near 0: --> __TEXT 000000010993f000-0000000109968000 [ 164K] r-x/rwx SM=COW /Applications/VSCodium.app/Contents/MacOS/Electron Thread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread 0 ??? 000000000000000000 0 + 0 1 com.github.Electron.framework 0x000000010b4c1acd 0x109973000 + 28633805 2 com.github.Electron.framework 0x0000000109a800fd 0x109973000 + 1102077 3 com.github.Electron.framework 0x000000010b4c1b76 0x109973000 + 28633974 4 com.github.Electron.framework 0x000000010b971f60 0x109973000 + 33550176 5 com.github.Electron.framework 0x000000010ba56892 0x109973000 + 34486418 6 com.github.Electron.framework 0x000000010ba6e002 0x109973000 + 34582530 7 com.github.Electron.framework 0x000000010ca5bdae 0x109973000 + 51285422 8 com.github.Electron.framework 0x000000010ca5d76c 0x109973000 + 51292012 9 com.github.Electron.framework 0x000000010bd02fce 0x109973000 + 37289934 10 com.github.Electron.framework 0x000000010bcd5d02 0x109973000 + 37104898 11 com.github.Electron.framework 0x000000010bce5c94 0x109973000 + 37170324 12 com.github.Electron.framework 0x000000010bce6167 0x109973000 + 37171559 13 com.github.Electron.framework 0x000000010bd427a3 0x109973000 + 37549987 14 com.github.Electron.framework 0x000000010bc7e34a 0x109973000 + 36746058 15 
com.github.Electron.framework 0x000000010bd4210f 0x109973000 + 37548303 16 com.apple.CoreFoundation 0x00007fff4fad7683 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17 17 com.apple.CoreFoundation 0x00007fff4fad7629 __CFRunLoopDoSource0 + 108 18 com.apple.CoreFoundation 0x00007fff4fabafeb __CFRunLoopDoSources0 + 195 19 com.apple.CoreFoundation 0x00007fff4faba5b5 __CFRunLoopRun + 1189 20 com.apple.CoreFoundation 0x00007fff4fab9ebe CFRunLoopRunSpecific + 455 21 com.apple.HIToolbox 0x00007fff4ed191ab RunCurrentEventLoopInMode + 292 22 com.apple.HIToolbox 0x00007fff4ed18ee5 ReceiveNextEventCommon + 603 [Truncated] shared memory 3872K 14 =========== ======= ======= TOTAL 1.7G 1088 TOTAL, minus reserved VM space 1.3G 1088 Model: MacBookPro11,1, BootROM 156.0.0.0.0, 2 processors, Intel Core i7, 3 GHz, 16 GB, SMC 2.16f68 Graphics: kHW_IntelIrisItem, Intel Iris, spdisplays_builtin Memory Module: BANK 0/DIMM0, 8 GB, DDR3, 1600 MHz, 0x80AD, 0x484D54343147533641465238412D50422020 Memory Module: BANK 1/DIMM0, 8 GB, DDR3, 1600 MHz, 0x80AD, 0x484D54343147533641465238412D50422020 AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x112), Broadcom BCM43xx 1.0 (172.16.58.3 AirPortDriverBrcmNIC-1305.8) Bluetooth: Version 6.0.14d3, 3 services, 27 devices, 1 incoming serial ports Network Service: Wi-Fi, AirPort, en0 Serial ATA Device: APPLE SSD SM0256F, 251 GB USB Device: USB 3.0 Bus USB Device: Apple Internal Keyboard / Trackpad USB Device: BRCM20702 Hub USB Device: Bluetooth USB Host Controller USB Device: G3 Thunderbolt Bus: MacBook Pro, Apple Inc., 17.2 ``` username_1: Same here, upgrade from 1.42.1 to 1.43.0 and now it randomly crashes. Still on macOS 10.14.6. username_2: Same for me on 1.43.1 it happens when i press esc key Process: Electron [27320] Path: /Applications/VSCodium.app/Contents/MacOS/Electron Identifier: com.visualstudio.code.oss Version: 1.43.1 (1.43.1) Code Type: X86-64 (Native) Parent Process: ??? 
[1] Responsible: Electron [27320] User ID: 502 Date/Time: 2020-03-19 18:28:01.565 +0100 OS Version: Mac OS X 10.15.2 (19C57) Report Version: 12 Anonymous UUID: 58EA46D5-4934-4B79-BA6D-250546312913 Sleep/Wake UUID: 53225FB0-73A5-4223-A8C0-7E6E82792A10 Time Awake Since Boot: 510000 seconds Time Since Wake: 34000 seconds System Integrity Protection: enabled Crashed Thread: 0 CrBrowserMain Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000069 Exception Note: EXC_CORPSE_NOTIFY Termination Signal: Segmentation fault: 11 Termination Reason: Namespace SIGNAL, Code 0xb Terminating Process: exc handler [27320] VM Regions Near 0x69: --> __TEXT 00000001041dc000-0000000104205000 [ 164K] r-x/rwx SM=COW /Applications/VSCodium.app/Contents/MacOS/Electron Thread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread 0 com.github.Electron.framework 0x0000000105d65325 0x104216000 + 28635941 1 com.github.Electron.framework 0x00000001043230fd 0x104216000 + 1102077 2 com.github.Electron.framework 0x0000000105d652e9 0x104216000 + 28635881 3 com.github.Electron.framework 0x0000000106210751 0x104216000 + 33531729 4 com.github.Electron.framework 0x00000001060ed8f6 0x104216000 + 32340214 5 com.github.Electron.framework 0x0000000106053c41 0x104216000 + 31710273 6 com.github.Electron.framework 0x000000010607a037 0x104216000 + 31866935 7 com.github.Electron.framework 0x000000010486298f 0x104216000 + 6605199 8 com.github.Electron.framework 0x0000000106890057 0x104216000 + 40345687 9 com.github.Electron.framework 0x0000000106894e2e 0x104216000 + 40365614 10 com.github.Electron.framework 0x00000001068946b4 0x104216000 + 40363700 11 com.github.Electron.framework 0x000000010688d093 0x104216000 + 40333459 12 com.github.Electron.framework 0x000000010688d90a 0x104216000 + 40335626 13 com.github.Electron.framework 0x00000001068a3cde 0x104216000 + 40426718 14 com.github.Electron.framework 0x0000000106578d02 
0x104216000 + 37104898 15 com.github.Electron.framework 0x0000000106588c94 0x104216000 + 37170324 16 com.github.Electron.framework 0x0000000106589167 0x104216000 + 37171559 17 com.github.Electron.framework 0x00000001065e57a3 0x104216000 + 37549987 18 com.github.Electron.framework 0x000000010652134a 0x104216000 + 36746058 19 com.github.Electron.framework 0x00000001065e510f 0x104216000 + 37548303 20 com.apple.CoreFoundation 0x00007fff371beb21 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17 [Truncated] Memory Tag 255 139.0M 65 PROTECTED_MEMORY 4K 1 SQLite page cache 64K 1 STACK GUARD 56.1M 31 Stack 186.8M 31 VM_ALLOCATE 9588K 29 __DATA 48.1M 424 __DATA_CONST 36K 4 __FONT_DATA 4K 1 __LINKEDIT 358.3M 15 __OBJC_RO 32.0M 1 __OBJC_RW 1776K 1 __TEXT 394.4M 411 __UNICODE 564K 1 libnetwork 128K 8 mapped file 72.5M 32 shared memory 664K 21 =========== ======= ======= TOTAL 1.9G 1234 TOTAL, minus reserved VM space 1.5G 1234 username_3: @username_2 I recommend that you use Gist her on GitHub to paste such long log files. Hell to navigate when inline. https://gist.github.com/ username_4: I can confirm I have this issue too, randomly crush on pressing ESC key. will provide the crash log when it happens again username_5: I'm experiencing crashes too. In my case they always happen on moving or resizing the window. On my system, I can reproduce it quite consistently by (un)installing a bunch of extensions (e.g. the first 5-7 extensions after searching for JavaScript) and then trying to move or resize the screen. However, the crashes also occur when doing other work for a while and then trying to move/resize. 
Crashlog: - https://gist.github.com/username_5/3ea92bf20c4929f522b1dbaf2cc27dba username_0: Oh yeah, when I think of it, the crash happens almost always, when I move to another desktop (on mac, the three-finger swipe to move between full-screen windows) username_6: This happens to me too, with version 1.44.2 For me it's easiest to reproduce by installing/uninstalling a bunch of extensions. username_7: Me too, experienced this crash since version 1.42 when i'm press ESC (randomly), reinstall but still got the same crash log. This crash didn't happen on original VSCode. tried with macOS Catalina 10.15.4, downgraded to 10.14.6 but still exist this is the crash log [Crash log VSCodium on macOS](https://pastebin.com/nffPp2Ld) hope this help username_4: I have just updated VSCodium and it crashes after using RubyTestRunner this is the crash log [VSCodium on macOS](https://pastebin.com/NAnDbJc9) username_1: Same here, reproducible crashes when hitting the Esc key. Version: 1.45.1 Commit: <PASSWORD> Date: 2020-05-15T11:07:54.108Z Electron: 7.2.4 Chrome: 78.0.3904.130 Node.js: 12.8.1 V8: 7.8.279.23-electron.0 OS: Darwin x64 18.7.0 username_8: Same problem, when I use the escape or caps-lock (remapped to escape) quite often but not reproducibly crashes VSCodium. Quite a pickle since I use the neovim plugin. Status: Issue closed
Nevcairiel/Mapster
516859909
Title: Unexplored parts of the map not revealed Question: username_0: The title says it all. I'm using the latest Mapster version downloaded from the Twitch app. None of the fog of war is cleared even though the option to clear it is checked. I've tried reinstalling the addon multiple times. Status: Issue closed Answers: username_1: I've confirmed that it appears to be fully functional. It might be another map-related addon interfering here.
99designs/gqlgen
410037254
Title: Docs just show "what" without "how" or "why" Question: username_0: I'm reading the docs on Resolvers and Dataloaders. The docs for Resolver says "bind to a method" and shows some code that uses a go-defined struct with a method name. However, it doesn't tell me what specific rule the generator uses to decide to use the method -- "if the GQL schema name matches a member function, the function is called" might be the rule? If so, if I want a gqlgen-generated model to generate functions for some properties, how do I do that? Or is that only possible with go-defined structs? The docs for Dataloaders say that `our todo.user resolver looks like this` but if I generate a model from gqlgen, it doesn't generally generate those Resolvers for the structures; just for the Query root. So, how do I get a Resolver for the Todo_user, and what's the rule for how the generator decides to use it? Can I make this happen using gqlgen generated models? It would be great if the documentation filled in these nuts-and-bolts somewhere. Answers: username_1: In some ways yes, model generation is intended to get you up and running quickly with compatible types. I've found model generation really handy for enums and input objects, but over time I find myself binding directly to user defined structs, since they will often have additional presentation or business logic on them. [Configuration](https://gqlgen.com/config/) describes the shape of the config file. If you think it's out of date, or has missing descriptions then feel free to point them out or submit a PR. username_0: Thanks for the answer! Regarding Configuration, that looks like an example file to me. At a minimum, explaining that it represents the sum total of the configurable bits would be clarifying! username_1: True, but also I don't see any reason to think that you couldn't do that. It's not much work to give it a shot! But I agree the config documentation could be clearer. So I think this issue should track: 1. 
Improvements to documentation explaining how gqlgen decides when it should generate resolvers. 2. Clearer configuration documentation.
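For reference, binding a GraphQL type to a user-defined struct, as username_1 suggests, goes through the `models` section of the config file. A minimal sketch (the Go package path is a made-up example):

```yaml
# gqlgen.yml sketch: bind the schema's Todo type to a hand-written Go
# struct instead of a generated model. gqlgen then matches schema fields
# to struct fields or methods, and generates resolvers for whatever it
# cannot match.
models:
  Todo:
    model: github.com/example/myapp/model.Todo
```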
PaddlePaddle/PaddleDetection
976706781
Title: Many incorrect detection boxes when training my own VOC dataset from a pretrained model Question: username_0: I used the PaddleDetection YOLOv3 fruit-detection model from AI Studio to train my own dataset; when running detection on images, many incorrect boxes and labels appear.
Answers: username_0: How should I change the configuration?
username_1: If you are training your own dataset, check the number-of-classes setting as well as the pretrained-weights setting; you can load COCO-pretrained weights for transfer learning.
username_0: The number of classes is set correctly. How should I set the pretrained weights?
username_1: Set the `pretrain_weights` field to a detection model already trained on COCO. For example, for the YOLOv3 algorithm you can use one of the links here: https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.2/configs/yolov3#yolov3
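As a sketch, the two settings discussed above (class count and pretrained weights) live in the config YAML. Field names follow the thread and PaddleDetection's release/2.2 config conventions; the values here are placeholders, not real paths:

```yaml
# Sketch of the relevant fields in a PaddleDetection YOLOv3 config.
# num_classes must match your own dataset's label count;
# pretrain_weights should point at a COCO-pretrained model.
num_classes: 5
pretrain_weights: <path-or-url-to-coco-pretrained-yolov3-weights>
```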
pmd/pmd
626132328
Title: [Java] Auxclasspath in PMD CLI does not support relative file paths in file URLs Question: username_0: **Affects PMD Version:** 6.24.0 onwards.
**Description:** The auxclasspath does not support a relative file path when specifying the classpath file in the PMD CLI, namely file URLs of the form `file:./fname`.
**Running PMD through:** *[CLI]*
Answers: username_1: Yes, that seems to be a bug. I looked at the code: the auxclasspath should work (otherwise you would have seen a FileNotFoundException somewhere). The warning message comes from the incremental cache, which calculates the checksum of the auxclasspath, and there we are missing support for (PMD's own special) `file:/` notation. https://github.com/pmd/pmd/blob/bc4a1d67eb091eeebcad26983eeeafbc68aa2698/pmd-core/src/main/java/net/sourceforge/pmd/cache/AbstractAnalysisCache.java#L225 Something weird is going on. Is `file:/data/data/com.termux/files/home/LearnJava/Unsafe/file` the file with the additional paths, or is it one of the paths inside that file? (AbstractAnalysisCache shouldn't see the original text file with the paths anymore, only its content, so I guess it's the latter case.) Would you mind sharing the `file:./fname` file you used? I guess there are two cases to be checked:
* the file URL in the auxclasspath on the CLI can be relative
* the content of the entries in such a file can be relative paths
Both cases should be handled.
username_0: The file referred to was pmdaux.cp, as mentioned in the other bug.
```
$ cat pmdaux.cp
/data/data/com.termux/files/home/LearnJava/lib/junit-4.13.jar
.
```
There's one relative path and one full path in the file. In the above case, the file is never read when specified as `file:./pmdaux.cp` to the auxclasspath option. Wouldn't it be better to read this as @filename, with the jars and directories separated by : or ; instead of a line separator?
username_1: I'm not sure I understand exactly what you want. I assume that with "@filename" you are referring to the Java command-line feature [command line argument files](https://docs.oracle.com/en/java/javase/14/docs/specs/man/java.html#java-command-line-argument-files). I've decided against introducing this for now, but we should consider it as part of the PMD 7 CLI.
username_0: I'm still encountering the warning. You can check out my script pmdauxrel in my repository, which reproduces the error. Run it as `bash -x ./pmdauxrel Unsafe` in the root directory to reproduce the warning.
username_0: I look forward to @-file expansion in PMD 7. The argument for it, in this case, is that PMD will no longer have to reconstruct a classpath from the input file and can use its contents as-is.
username_0: From the documentation: https://pmd.github.io/latest/pmd_userdocs_cli_reference.html#options

| -auxclasspath <cp> | Specifies the classpath for libraries used by the source code. This is used to resolve types in source files. Alternatively, a file:// URL to a text file containing path elements on consecutive lines can be specified. |
| -- | -- |

username_1: Sure you do - this issue will only be fixed with the next PMD version.
username_0: This is what should have been happening. Additionally, my compile.cp (as it currently exists in my repo) does not contain line separators, so I had to create a file that did, specifically for this input. A line-separated classpath is not what's expected on the command line, and that's my point about using @ expansion: no reconstruction of the classpath would be required at all. Status: Issue closed
username_0: Just a note: a relative file path is accepted for the ruleset in the Maven plugin.
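The missing normalization discussed in this thread boils down to resolving a relative path inside PMD's `file:` notation against the working directory. PMD itself is Java; the snippet below is only a Python sketch of that path logic, and the function name is made up:

```python
import os

def resolve_file_url(url, cwd):
    """Turn a PMD-style 'file:RELATIVE_OR_ABSOLUTE' entry into an
    absolute path, resolving relative paths against `cwd`."""
    assert url.startswith("file:")
    path = url[len("file:"):]
    # os.path.join keeps an absolute second argument as-is, so
    # 'file:/abs/path' passes through unchanged.
    return os.path.normpath(os.path.join(cwd, path))
```

Applying it to the report's example, `file:./pmdaux.cp` run from the project root would resolve to `<root>/pmdaux.cp` instead of being silently skipped.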
fabric8io/docker-maven-plugin
160714701
Title: Trouble with Environment Variables and Assembly Question: username_0: First off, I'm new to the plugin and to Docker, but Maven and I are old friends. This plugin is exactly what I am looking for and is very intuitive to use. I'm trying to create a container to house Postgres for integration tests. The postgres image has some env params to control the admin username/password, and if you add a script to its init dir it will be run on startup. I attached a test project that exposes the issues I am facing. When you run `mvn integration-test`, two things don't appear to be happening:
1. The environment variables are not getting applied - please note the error listing the databases.
2. The assembly does not seem to be applied to the image - change the username to postgres and run again, and note the databases.
I added a Dockerfile which, when built, works exactly like I'd expect the plugin to behave. I also included the docker command I am using, which shows that the env variables are applied and the DBs were created. Thanks in advance, - Joe [docker-plugin-tester.zip](https://github.com/fabric8io/docker-maven-plugin/files/318958/docker-plugin-tester.zip) Answers: username_1: First of all, apologies for the late answer; I was away at some conferences and busy otherwise. To your issue: the environment variables are applied, that's not the problem. Instead there are two problems:
* When you use an assembly it will be put by default in `/maven` within the image.
You can change this directory by setting the `<basedir>` to `/` like here:
```xml
<assembly>
  <basedir>/</basedir>
  <descriptor>${basedir}/src/test/resources/docker/assemble-postgres.xml</descriptor>
</assembly>
```
* The other problem is the `<wait>` section:
```xml
<wait>
  <log>database system is ready to accept connections</log>
  <time>20000</time>
  <exec>
    <postStart>psql -U ${admin.user} -l</postStart>
  </exec>
</wait>
```
If you look closely at the output of the postgres DB, you will see that the log line *database system is ready to accept connections* appears multiple times. Since the postStart command kicks in right after the first appearance, when the DB is not yet fully initialised, you get this error. So either wait only for a certain time (without checking the `<log>` line), or better, check for something unique (like 'PostgreSQL init process complete'). Hope this helps. If so, feel free to close the issue ;-) username_0: Thanks Rowland. I figured it was a noob issue. Your suggestions worked perfectly. Thanks! Status: Issue closed username_1: np, you are always welcome ;-)
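Putting the suggestion together, the adjusted `<wait>` section might look like this. This is a sketch: 'PostgreSQL init process complete' is the one-time message the official postgres image logs after running its init scripts, unlike the "ready to accept connections" line, which appears more than once:

```xml
<wait>
  <!-- Match the one-time init-complete message instead of the
       repeated "ready to accept connections" line. -->
  <log>PostgreSQL init process complete</log>
  <time>20000</time>
  <exec>
    <postStart>psql -U ${admin.user} -l</postStart>
  </exec>
</wait>
```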
freight-team/freight
327182812
Title: InRelease.gpg is missing Question: username_0: When trying to use this commit: https://github.com/freight-team/freight/commit/0a8c99a761e08e1aac8fd702338afe75f6d892a7 the InRelease.gpg file is not created, so when running `sudo apt-get update` we get an error: `bionic InRelease: The following signatures couldn't be verified because the public key is not available:` Answers: username_1: AFAICT, that's fine; http://archive.ubuntu.com/ubuntu/dists/bionic/ also doesn't have it. InRelease is the Release file plus the GPG signature in one plaintext file. username_1: BTW, reading the message again, I think you're missing the repo's key on the client. username_0: Sorry, you were right. According to https://wiki.debian.org/DebianRepository/Format: "...InRelease files are signed in-line while Release files should have an accompanying Release.gpg file." Status: Issue closed
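The point about the file format can be illustrated directly: an InRelease file is a clearsigned Release, so the payload and the armored signature sit in one plaintext file and can be pulled apart with simple text handling. This is only a sketch to show the layout, not how apt actually verifies it:

```python
def split_inrelease(text):
    """Split a clearsigned InRelease into (release_body, armored_sig)."""
    header = "-----BEGIN PGP SIGNED MESSAGE-----"
    # The body starts after the blank line that ends the hash header.
    body_start = text.index("\n\n", text.index(header)) + 2
    sig_start = text.index("-----BEGIN PGP SIGNATURE-----")
    return text[body_start:sig_start], text[sig_start:]
```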
Zrips/Residence
619310203
Title: [BUG] Levitating with blocks Question: username_0: Hi, I recently found a bug in the Residence plugin; I'll try to describe it as best as I can... :) When a player is in a residence where he can't build, he is able to jump and quickly place blocks under his feet and stand on them. He can see the ghost blocks in the air and stand on them, but other players can't, so to them it looks like he is levitating/flying. I think that's quite important and should be fixed as fast as possible. I'm sorry if I described it badly. Thank you for reading :) Answers: username_1: It's not a bug in itself, at least not on Residence's side. It looks more like a client/server miscommunication: when a player places a block, the placement is cancelled, but the data between player and server doesn't sync properly, possibly due to delay between them. Slightly moving around should bring the player down. Status: Issue closed
lumien231/Random-Things
236498994
Title: [1.10.2] Feature request: Shield Slot on Player Interface Question: username_0: The shield/offhand slot exists in MC 1.10.2, but the Player Interface is missing the capability to interact with it that it has in 1.11. Is there a reason for this, or can it be added? Currently it's the only slot in the player's inventory that can't be interacted with. Answers: username_1: 1.10.2 doesn't get gameplay updates anymore, so that's why it's not available there. Status: Issue closed
sounix/ControlCasos
440363621
Title: Review of offers for May 4-6, 2019 Question: username_0:

| Item | Name | Stock | Discount | Price level | Cost | Price | Margin | Offer | Profit |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| **OLUTA** | DELETE | | | | | | | | |
| 0114117 | Deter. Ace Regular 900gr | 30 | 4.5 | 1 | 24.7312 | 30 | 0.17562667 | 25.5 | 0.03014902 |
| 0117388 | Hig Petalo Ultra Resist 400h C/4 | 43 | 4.1 | 1 | 22.504 | 28 | 0.19628571 | 23.9 | 0.05841004 |
| **JALTIPAN** | DELETE | | | | | | | | |
| 0124223 | Kotex Manzan Noct Alas C/10 | 12 | 2.6 | 1 | 17.22455 | 20.5 | 0.15977805 | 17.9 | 0.03773464 |
| 0117388 | Hig Petalo Ultra Resist 400h C/4 | 22 | 3.6 | 1 | 22.504 | 27.500001 | 0.18167276 | 23.900001 | 0.05841008 |
| **VICTORIA** | DELETE | | | | | | | | |
| 0124223 | Kotex Manzan Noct Alas C/10 | 85 | 5.1 | 1 | 17.22455 | 23 | 0.25110652 | 17.9 | 0.03773464 |
| **ZARAGOZA** | DELETE | | | | | | | | |
| 0124223 | Kotex Manzan Noct Alas C/10 | 15 | 2.1 | 1 | 19.97984 | 23.8 | 0.16051092 | 21.7 | 0.07927005 |
kubernetes/kops
395757166
Title: kops update cluster wants to create already-existing shared subnets if KubernetesCluster tag isn't present Question: username_0: **1. What `kops` version are you running? The command `kops version`, will display this information.** Owner cluster was last updated with 1.10.0. Shared cluster was created with 1.10.0. The issue is present when trying to update the owner cluster with kops 1.10.0, 1.10.1, and 1.11.1. **2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.** 1.10.3 (owned) & 1.10.11 (shared) **3. What cloud provider are you using?** AWS **4. What commands did you run? What is the simplest way to reproduce this issue?** `kops update cluster owner.cluster` **5. What happened after the commands executed?** If `KubernetesCluster` tag is present and set to the name of the owner cluster, I see `No changes need to be applied` If `KubernetesCluster` tag is not present (which is presently the advice [in the docs](https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md#subnet-tags): ``` Will create resources: RouteTableAssociation/private-us-west-2a.owner.cluster RouteTable name:private-us-west-2a.owner.cluster id:rtb-XXXXXXXX Subnet name:us-west-2a.owner.cluster RouteTableAssociation/private-us-west-2b.owner.cluster RouteTable name:private-us-west-2b.owner.cluster id:rtb-XXXXXXXX Subnet name:us-west-2b.owner.cluster RouteTableAssociation/utility-us-west-2a.owner.cluster RouteTable name:owner.cluster id:rtb-XXXXXXXX Subnet name:utility-us-west-2a.owner.cluster RouteTableAssociation/utility-us-west-2b.owner.cluster RouteTable name:owner.cluster id:rtb-XXXXXXXX Subnet name:utility-us-west-2b.owner.cluster Subnet/us-west-2a.owner.cluster ShortName us-west-2a VPC name:owner.cluster id:vpc-XXXXXXXX AvailabilityZone us-west-2a CIDR 10.40.32.0/19 Shared false Tags {Name: us-west-2a.owner.cluster, KubernetesCluster: owner.cluster, 
kubernetes.io/cluster/owner.cluster: owned, kubernetes.io/role/internal-elb: 1, SubnetType: Private} Subnet/us-west-2b.owner.cluster ShortName us-west-2b VPC name:owner.cluster id:vpc-XXXXXXXX AvailabilityZone us-west-2b CIDR 10.40.64.0/19 Shared false Tags {Name: us-west-2b.owner.cluster, KubernetesCluster: owner.cluster, kubernetes.io/cluster/owner.cluster: owned, kubernetes.io/role/internal-elb: 1, SubnetType: Private} [Truncated] nodeLabels: XXXXXXXX/node-util: "true" role: Node rootVolumeSize: 100 rootVolumeType: gp2 subnets: - us-west-2a - us-west-2b ``` **8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.** https://gist.github.com/username_0/dfc2145dc0ea75296e8aaab12cc5b31c **9. Anything else do we need to know?** `kubernetes.io/cluster/owner.cluster: owned` and `kubernetes.io/cluster/shared.cluster: shared` tags are present on the subnets. Both clusters were created an a preexisting VPC, which is not owned by any cluster. The subnets/route tables/etc. were created by the owner cluster. Answers: username_1: As explained in the ["run in existing VPC" docs](https://github.com/kubernetes/kops/blob/737a7a2cb81b70b558095ba1261a0898cc2bd168/docs/run_in_existing_vpc.md#shared-subnets), if you want `kops` to use existing subnets rather than create its own, you must specify the subnet ID to use as well as the CIDRs. Similarly, if you want to share gateways, you must also provide gateway IDs. 
So your Cluster.spec.subnets should look like: ``` subnets: - cidr: 10.40.32.0/19 egress: nat-0XXXXXXXXXXXXXXXX id: subnet-0XXXXXXXXXXXXXXXX name: us-west-2a type: Private zone: us-west-2a - cidr: 10.40.64.0/19 egress: nat-0XXXXXXXXXXXXXXXX id: subnet-0XXXXXXXXXXXXXXXX name: us-west-2b type: Private zone: us-west-2b - cidr: 10.40.96.0/22 id: subnet-0XXXXXXXXXXXXXXXX name: utility-us-west-2a type: Utility zone: us-west-2a - cidr: 10.40.100.0/22 id: subnet-0XXXXXXXXXXXXXXXX name: utility-us-west-2b type: Utility zone: us-west-2b ``` If you specify subnets and gateways, `kops` will not create them, and also will not create route tables.
x751685875/ZhengZhou-Coach
518552697
Title: Zhengzhou intercity coach QR codes Question: username_0: ![IMG_0798](https://user-images.githubusercontent.com/23094123/68312827-96c60e00-00ee-11ea-8ea8-20bf4ebcbd70.JPG) Answers: username_0: ![IMG_0806_meitu_1](https://user-images.githubusercontent.com/23094123/68316786-ffb08480-00f4-11ea-8e85-4e6574908091.jpg) username_0: ![IMG_0807_meitu_2](https://user-images.githubusercontent.com/23094123/68317181-a7c64d80-00f5-11ea-8ae1-ab8ad4ed60c8.jpg)
opendistro/for-elasticsearch-docs
810315037
Title: Make a trigger fire multiple times Question: username_0: In my use case I'm grouping the monitor's Elasticsearch query results into buckets. In the trigger I iterate over the buckets and return true if the condition is met. After that the trigger stops, but I want it to continue and fire **again** for each condition met. Is this possible? Thanks in advance. Answers: username_1: Hey @calipee, evaluation scripts can only return true or false, and once they return, that's the end of the script. If I'm understanding your use case properly, though, I think you can achieve your desired outcome by adding multiple triggers to your monitors and using a piece of your evaluation logic in each trigger. Monitors can have lots of triggers, and if more than one meets its condition, you should get your multiple alerts. Let me know if that's helpful or if I'm misunderstanding something. 👍 Status: Issue closed
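The suggestion above, one trigger per piece of the evaluation logic, might look roughly like this in a monitor definition. This is only a sketch: the field names follow the Open Distro Alerting API's monitor shape, and the Painless condition scripts are placeholders for your per-bucket checks:

```json
{
  "triggers": [
    {
      "name": "bucket-0-over-threshold",
      "severity": "1",
      "condition": {
        "script": {
          "source": "ctx.results[0].aggregations.my_buckets.buckets[0].doc_count > 10",
          "lang": "painless"
        }
      }
    },
    {
      "name": "bucket-1-over-threshold",
      "severity": "1",
      "condition": {
        "script": {
          "source": "ctx.results[0].aggregations.my_buckets.buckets[1].doc_count > 10",
          "lang": "painless"
        }
      }
    }
  ]
}
```

Each trigger that evaluates to true produces its own alert, which approximates "fire again for each condition met".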
travis-ci/dpl
306899174
Title: dpl 1.9.2 uploads S3 artefacts to an incorrect path Question: username_0: The following config
```yml
deploy:
  # main branch builds
  - provider: s3
    skip_cleanup: true
    upload-dir: main-branch-builds
    local_dir: artifact
```
under `dpl@1.9.1` uploads `my-file.tar.gz` to `/main-branch-builds/my-file.tar.gz`
```sh
uploading "my-file.tar.gz" with {:content_type=>"application/gzip"}
```
under `dpl@1.9.2` uploads `my-file.tar.gz` to `/main-branch-builds/artifact/my-file.tar.gz`
```sh
uploading "artifact/my-file.tar.gz" with {:content_type=>"application/gzip"}
```
The 1.9.1 behaviour was the expected one for us. Answers: username_1: A potential fix is in the `releases-local_dir` branch. Could you try this:
```yaml
deploy:
  - provider: s3
    edge:
      branch: releases-local_dir
    ⋮ # rest
```
and report the result? We'll have a 1.9.4 release shortly. username_2: +1 username_3: Same issue, I'm trying the fix suggested above. username_1: @username_2 Did the proposed fix work for you, or are you just indicating that you are seeing the same problem? username_4: The fix above worked for us. username_0: @username_1 I just ran a build using that branch of dpl and it worked :-) username_5: Confirmed the fix worked for us as well. Status: Issue closed username_1: Resolved by https://github.com/travis-ci/dpl/pull/787, which is in 1.9.4. We thank you all for the report and patience, and we apologize for the inconvenience.
VHDLTool/Sonarqube-Rulechecker-Demo
514554061
Title: Improve design example Question: username_0: Add a step to the Plasma example to display the evolution of technical debt (a new stage of the Plasma project to be imported). Also add an example for coverage with GHDL or ModelSim. Answers: username_0: "Add a step to the Plasma example to display the evolution of technical debt (a new stage of the Plasma project to be imported)" is now implemented. An example of coverage still needs to be done.
PokemonGoers/Catch-em-all
173954697
Title: First deployment Question: username_0: After we've decided all hosting related choices in #20, we'll be able to do our first deployment. - [ ] Make our app a docker container - [ ] Setup SSL and domain - [ ] Deploy to chair's docker node - [ ] Add link to our README Answers: username_1: I started working on the Docker topic. See branch `deployment` for further details. The current setup looks as follows. There will be 3 docker containers 1. **app** Web app of Project E which also includes the two PokeMaps as npm requirements. Within the repository of Project E there is also a small node server which serves static assets and redirects HTTP requests to the `/api` route to the **api** docker container (see #28). 2. **api** The PokeData project which serves as a backend for the **app** container. Luckily, the guys from the PokeData team already created a Dockerfile which we will use. Currently we assume that the work of Project B and Project C will be somehow included into this container already. 3. **db** MongoDB container. The 3 containers are linked together using Docker Compose. So ideally we will just need to run `docker compose up` which should produce a running app including backend and database. The Travis CI build is also already set up and running (unfortunately we have no test cases yet). @username_2 How exactly will the deployment to the chair's docker platform work? And are we just talking about the final deployment (when the seminar is over) or can we also use it as CI server during the development? username_2: @username_1 would you care to look if it is possible to user docker compose with [docker hub'd automatic builds](https://docs.docker.com/docker-hub/builds/) and docker cloud automatic deployment (which is just spawning the built from docker hub, so if you solve docker hub, docker cloud should be ok with it)? SOOOOOO.. there's still no chair :D it's my servers, still. 
And for now it's like this: PR gets approved to develop/master --> automatic build on docker hub to tags develop/latest --> automatic redeploy on docker cloud (which uses my server as a node). I can set the container on the node (my server, in the future the chair's server) to always redeploy on the same **port**, and then use nginx on the server to route a specific DNS entry to that port (aka request to `example.com` --> DNS resolves to `172.16.31.10` --> reverse-proxy to `localhost:3030`) via internal reverse-proxy (I'm doing this for http://cell.dallago.us , which is actually http://cellmap-9af2e29c-1.835b2249.cont.dockerapp.io:3030 ). Find out the docker compose story and we can look for the next step username_2: Actually: why would you need to compose Mongo, API + website?
- Mongo: we will use a dedicated server, and you won't need mongo because the API will use mongo, but the website will use the API
- API: will always be up and independent of the website

So ultimately the only container you need is for the website. I suggest dropping the compose idea username_1: Thanks for clearing things up @username_2. The idea of using nginx as a reverse proxy sounds reasonable! Why compose? Why three containers?
- **Configuration.** Compose handles the dependencies between the containers. You can define through which ports the three containers communicate with each other while only exposing one port to the outside world.
- **Consistency.** I feel like if we're already using docker, why not use it for mongo as well? After all, everything is a container, right? ;)
- **Testing.** Using compose everyone can set up their isolated test environment. If we plan to have some integration tests that require both the frontend web app and the backend api to be set up, this can be easily done.

Don't get me wrong here. That's what I had in mind when I started using docker compose. But I'm always a big fan of dropping stuff that's not needed (anymore).
So if you think that the web app container is all we need and you'll be able to set up the entire project from that, then I'm happy to take back that compose idea. username_2: I agree on all points: the compose idea is great in dev, but for the pipeline we have (already set up) it's superfluous :) For me all I need is the dockerfile to build the webapp (so that it can automatically build on hub and deploy on cloud). But in general having a one-click solution with API, storage and webapp is definitely good for other people to try out or (as already said) dev. I'm just trying to be [lean](http://hackerchick.com/agile-vs-lean-yeah-yeah-whats-the-difference/) username_1: @username_2 I made some good progress today in terms of deployment.
- The Ionic app will now be packaged for use as a web app.
- A docker container is created containing the packaged web app and a node server that serves the app on port 8080. The PokeData backend can be reached at the `/api` route.

What else do you need me to do? How will the docker image be published to Docker Hub? Automatically on every commit/PR or do we need to trigger the build explicitly? If you want to add me to the pokemongoers organization my username is username_1. Waiting for instructions :) username_2: @username_1 1. Added 2. Triggered first build https://hub.docker.com/r/pokemongoers/catch-em-all/builds/ 3. What do I need to run it, actually (env variables)? username_2: P.S.: Builds on merged PRs on develop and master. You are gonna have two endpoints, the master to be used as production, the develop for testing.. Like in real life :D username_1: Thanks for adding me! The necessary changes are on the ionic2 branch and need to be merged to develop first (PR #33 has been opened just a few minutes ago). Sorry, I should've probably mentioned that. The docker container exposes port 8080, hence it can be run as follows: `docker run -p 80:8080 pokemongoers/catch-em-all` on port 80.
By default it uses the API endpoint at `pokedata.c4e3f8c7.svc.dockerapp.io:65014`. This can be changed like so: `docker run -p 80:8080 -e API_HOST=api.pokedata.com -e API_PORT=80 pokemongoers/catch-em-all` username_2: @username_1 nice! I fucked up the chair's servers this morning (not like I did anything extraordinarily dangerous... but it feels like having a Ferrari with a Fiat 500 engine :dash: ), so now they need to fix and implement the changes that I asked for :D hopefully we have a nice and clean solution soon, then I'll send you the links to the various things username_1: @username_2 The ionic branch with the required deployment changes has just been merged. This means we now have a [successfully built docker container](https://hub.docker.com/r/pokemongoers/catch-em-all/builds/) :) Is there anything else you need me to do? Or are we just waiting for the server to be fixed? :D username_2: @username_1 just wait :) I'll update everyone once everything is up and running username_2: The chair's VM is giving me a hard, hard time. So in the meantime, Christian's servers it is: http://catch-em-all-0a600d65.c65978f4.svc.dockerapp.io:4898/ For now only develop, as latest/master doesn't exist as a tag anyway :) Status: Issue closed username_1: Since we have a running app I will close this now. For all topics related to hosting (e.g. domain and SSL certificate) please see #20.
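For reference, the three-container Compose layout discussed (and ultimately dropped) earlier in the thread could be sketched as below. This is a hypothetical docker-compose.yml, not the project's actual file: the image names, the mongo version, and every value other than the `API_HOST`/`API_PORT` variable names mentioned in the thread are assumptions.

```yaml
# hypothetical docker-compose.yml; only the app container is exposed
version: "2"
services:
  app:
    build: .                  # web app + small node server
    ports:
      - "80:8080"
    environment:
      API_HOST: api           # env var names taken from the thread
      API_PORT: "80"
    depends_on:
      - api
  api:
    image: pokemongoers/pokedata   # assumed image name for the PokeData backend
    depends_on:
      - db
  db:
    image: mongo:3.2               # assumed MongoDB version
```

Running `docker compose up` from the repository root would then start all three containers, with only the web app reachable from the outside.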
rapid7/cps
502583958
Title: Move to modules Question: username_0: All of my Go projects are now on Go modules; this is the last one remaining. Not enjoying setting my $GOPATH and going to a different location to build it. This comes right after #7 is done. Answers: username_1: This was completed in #33 Status: Issue closed
lijinyan89/emotion-notebook
270039684
Title: Discussion about the pages Question: username_0: ![2017-10-31 11 06 35](https://user-images.githubusercontent.com/17683823/32237460-013b0688-be33-11e7-9deb-2525b798c882.png) ![2017-10-31 11 06 47](https://user-images.githubusercontent.com/17683823/32237461-016ddc3e-be33-11e7-812a-d1f7f588d085.png) ![2017-10-31 11 07 34](https://user-images.githubusercontent.com/17683823/32237462-01a5b244-be33-11e7-9def-af44ed941c3a.png) ![2017-10-31 11 09 37](https://user-images.githubusercontent.com/17683823/32237464-0232a74e-be33-11e7-8dd3-c8f5e62c95c6.png) ![2017-10-31 11 12 10](https://user-images.githubusercontent.com/17683823/32237467-038ad256-be33-11e7-91ef-fc2a3cbf27b5.png) ![2017-10-31 11 12 42](https://user-images.githubusercontent.com/17683823/32237468-03bf8dc0-be33-11e7-8612-3ec96bce601a.png) ![2017-10-31 11 12 54](https://user-images.githubusercontent.com/17683823/32237470-043368e4-be33-11e7-9721-166135065661.png) ![2017-10-31 11 14 06](https://user-images.githubusercontent.com/17683823/32237473-052733c0-be33-11e7-9c35-3db5427799c1.png) ![2017-10-31 11 14 26](https://user-images.githubusercontent.com/17683823/32237475-057b98b6-be33-11e7-9138-5b81f40b65af.png) ![2017-10-31 11 14 57](https://user-images.githubusercontent.com/17683823/32237477-05cfc490-be33-11e7-8498-f0708b8f376e.png) ![2017-10-31 11 15 03](https://user-images.githubusercontent.com/17683823/32237479-0668fbe2-be33-11e7-8794-a5d6611a4439.png) ![2017-10-31 11 15 13](https://user-images.githubusercontent.com/17683823/32237481-06d829c2-be33-11e7-843d-76faad4215be.png) Answers: username_0: Results of the discussion on the evening of 10/27. Changes to make:

Pages:
1. All pages: make the content area narrower, 2/3 of its original width.
2. Bad-mood page: merge the "record emotion" and "regulate emotion" pages into one.
3. Return page: replace the return page with a random sentence (see note (1) for the random sentences); it has a background image; the random sentence is centered; there is a return button; move the whole section below it to the end of the bad-mood page (change "feel again the percentage the emotion takes up" to "re-evaluate your emotion"; change the large box at the bottom to a small blank for entering a number, followed by a "%" sign).
4. Navigation page: the bad-mood button links to the bad-mood page.

Three-column table:
1. Table title: change "the reason behind the emotion" to "thoughts determine emotions"; use a larger font.
2. Automatic thoughts: five long fill-in boxes; a text hint after them (see note (2) for the content); a hyperlink (see note (3) for the content).
3. Cognitive distortions: change from 11 checkboxes to 12, splitting "magnification" and "minimization" apart; a text hint (see note (4)); a hyperlink (see note (5)).
4. Rational response (similar to 2, automatic thoughts): five long fill-in boxes; a text hint after them (see note (6) for the content);
a hyperlink (see note (7) for the content).

Other page content:
1. Below the three-column table, delete "Friend, your understanding of the reasons behind the emotion" together with its box.
2. Emotions: merge the small boxes after each emotion (including "other emotions") into one small box where a number can be entered, followed by a "%" sign.
3. Feelings: change "feelings" to "bodily sensations".

Content to cut:
1. Setting: public places.
2. Object: keep one small box as a fill-in.
3. Event: shrink the large box to a long box about two lines tall.
jaredpalmer/formik
302048094
Title: Test for value change in a Field Question: username_0: ## Bug, Feature, or Question? **Question** First of all, I'd like to apologize for my bad English. I'm trying to write a test where I can verify if a button is enabled or not, but I'm not managing to make it work. The component that I have:
```javascript
// TodoForm.js
import React from 'react';
import { withFormik, Form, Field } from 'formik';
import yup from 'yup';

const TodoForm = ({ isValid }) => (
  <Form>
    <Field name="todo" type="text" placeholder="Type your todo here..." />
    <button disabled={!isValid}>Add ToDo</button>
  </Form>
);

const EnhancedTodoForm = withFormik({
  mapPropsToValues: () => ({ todo: '' }),
  validationSchema: yup.object().shape({
    todo: yup.string().required(),
  }),
  handleSubmit: (values, { props }) => {},
})(TodoForm);

export default EnhancedTodoForm;
```
I want to change the 'todo' value (preferably via simulate) and see if the button gets enabled, but I'm not even able to change the value of Field. Does anyone have any ideas? I'd love it if it comes with an example <3. ## Additional Information This is the piece of code that I have tried so far without success.
```javascript
// TodoForm.test.js
import React from 'react';
import { shallow } from 'enzyme';

import TodoForm from './TodoForm';

describe('<TodoForm />', () => {
  let enhancedTodoForm;

  beforeEach(() => {
    enhancedTodoForm = shallow(<TodoForm />);
  });

  describe('interactions', () => {
    describe('user types something', () => {
      const newTodo = 'my new todo';
[Truncated]
      });

      // this also doesn't change the values prop inside the component
      enhancedTodoForm.dive().setProps({ values: { todo: newTodo } });

      expect(todoFormComponent.find('button').props().disabled).toBe(false);
    });
  });
});
});
```

---

- Formik Version: 0.11.11
- React Version: 16.2.0
- OS: Ubuntu 16.04
- Node Version: 8.9.4
- Package Manager and Version: yarn 1.3.2

Answers: username_0: For the people that could be facing the same problem, I managed to solve this by mounting my component instead of shallow rendering it. I also replaced the Field component with an input wired up to the Formik functions. [Part 7 of this article](https://semaphoreci.com/community/tutorials/getting-started-with-tdd-in-react) can explain it better than I can. PS: If someone knows a better way of doing this, please share. username_1: Hmm, I'm having a similar issue where the value of the Field isn't getting updated when I simulate a change event. I get this error in my terminal output:
```
Warning: `handleChange` has triggered and you forgot to pass an `id` or `name` attribute to your input: undefined
Formik cannot determine which value to update.
For more info see https://github.com/jaredpalmer/formik#handlechange-e-reactchangeeventany--void
```
But when I `console.log(component.find('.email-field').first().debug())` it shows the input field as having a `name` attribute:
```jsx
<input
  value={[undefined]}
  name="email"
  onChange={[Function: onChange]}
  onBlur={[Function]}
  autoFocus={true}
  className="text-input"
  className="email-field"
  placeholder="<EMAIL>"
  type="email"
  onKeyUp={[Function]}
/>
```
username_1: Haha, after typing that out I realized my problem: I had a `name` attribute on the element, but I was assuming Enzyme was going to pass that element along as the `event.target` like in the real DOM, but that's not true. Adding an explicit `event.target.name` in the simulated `onChange` fixed the problem. username_2: Fully mounting solved my issue. I'm guessing that `.find('Form').dive().find('form')` doesn't trigger whatever sets `context.formik` username_3: @username_1 could you show what you did to get this working? username_1: @username_3 yep, sure. The reason it wasn't working for me was that I didn't realize I needed to explicitly provide a `name` property on the event object I was passing in, like this:
```js
component.find('[data-test-id="email-field"]').simulate('change', { target: { name: 'email', value: '<EMAIL>' } })
```
When your code runs in the browser, the native event will send the _actual_ element as the target, in which case the `name` will be set already.
wilcoxc26149/cspath-datastructures-capstone-project
378105237
Title: Really cool custom parameter for your stack Question: username_0: https://github.com/wilcoxc26149/cspath-datastructures-capstone-project/blob/80096e5b2be102269d0af872fbd7bc07c3a114d4/6a7c40eff7fb2e039ec8f03fd6264fc5-0400e90305744f7bb2f79b33369f1deaae522be5/script.py#L44 This is a really cool way to automatically fill your stack with minimal effort. Nice job!
cakephp/cakephp
562353378
Title: Error: CakephpController could not be found. Question: username_0: Error: Create the class CakephpController below in file: app/Controller/CakephpController.php <?php class CakephpController extends AppController { } Notice: If you want to customize this error message, create app/View/Errors/missing_controller.ctp Answers: username_1: This is not a help forum. The ticket tracker is reserved for possible bugs and feature enhancements to the CakePHP framework. If you are looking for help on how to implement a feature or to better understand how to use the framework correctly, please visit one of the following: [The CakePHP Manual](http://book.cakephp.org) [The CakePHP online API](http://api.cakephp.org) [CakePHP Official Forum](http://discourse.cakephp.org) [Stackoverflow](http://stackoverflow.com/questions/tagged/cakephp) [Slack channel](http://cakesf.herokuapp.com/) or the #cakephp channel on irc.freenode.net, where we will be more than happy to help answer your questions. Thanks! Status: Issue closed
facebook/react-native
279320175
Title: E/unknown: Reactions: Got DOWN touch before receiving or CANCEL UP from last gesture Question: username_0: Description: Anything related to an overlay breaking touches on Android. Even when the yes/no dialog is in use I have the same problem: E/unknown: Reactions: Got DOWN touch before receiving or CANCEL UP from last gesture Solution: JSTouchDispatcher.java is causing this handler message Additional Information: React Native version: 0.48.4 Platform: Android Status: Issue closed
apache/trafficcontrol
342429078
Title: Traffic Portal not validating 'routerPortName' on the Server page Question: username_0: When entering or updating the RouterPortName field for a Server, TP should validate the input and remove extra spaces and/or hidden characters like tabs. In the output of the JSON API we are showing servers which have \t and spaces in the RouterPortName field. Answers: username_1: @username_0 - any idea if this creates any downstream (or is it upstream) problems? I'm not sure where the value of server.router_port_name (the database column) is actually used. If it's actually used by other components of TC (as opposed to being purely informational) and whitespace or hidden characters can cause significant problems, then the severity of this issue will be raised. Status: Issue closed
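The normalization requested above can be sketched in a few lines. This is an illustrative Python sketch of the rule described (strip edges, collapse runs of spaces, tabs, and other hidden whitespace), not Traffic Portal's actual validation, and the function name and sample port names are made up for the example:

```python
import re

def sanitize_router_port_name(value: str) -> str:
    """Collapse runs of whitespace (spaces, tabs and other hidden
    whitespace characters) to a single space and trim both ends."""
    return re.sub(r"\s+", " ", value).strip()

# e.g. a pasted value with leading padding and a trailing tab:
print(sanitize_router_port_name("  ae-1/2/0\t"))  # prints: ae-1/2/0
```

Whether internal whitespace should be collapsed to one space or rejected outright is a design choice the sketch leaves open; the key point is that the cleanup happens before the value is persisted.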
whatisachicken/flixapp
407972327
Title: Project Feedback! Question: username_0: 👍 Nice work! The point of this homework was to get a chance to implement a TableView (one of the most common views in iOS) and to work with real data over the network (in this case from the Movies Database API). A key part of these projects is that you add additional features and tweak the UI / UX because that will provide the most learning opportunities. We encourage you to complete the projects early each week with the required stories and then spend time adding your own UI elements and experimenting with optional extensions that will improve the user experience. We have a detailed [Assignment 1 Feedback Guide](https://courses.codepath.org/snippets/ios_university/feedback_guides/project_1_feedback.md) which covers the best practices for implementing this assignment. Read through the feedback guide point-by-point to determine ways you might be able to improve your submission. You should consider going back and implementing these improvements as well. Keep in mind that one of the most important parts of iOS development is learning the correct patterns and conventions. Check out the [project rubric](https://courses.codepath.org/snippets/ios_university/project_rubric.md) for a breakdown of how submissions are scored. If you have any technical questions about the project or concepts covered this week, post a question on our [Discussions Forum](https://discussions.codepath.org) and mark the question as type, "Curiosity". For general questions email us at, <<EMAIL>>. Answers: username_0: 👍 Nice work! The point of this homework was to get familiar with two common forms of navigation in iOS (push and tab bar). It also provided a chance to extend your Flicks app in new ways. We have a detailed [Project 2 Feedback Guide](http://courses.codepath.org/snippets/ios_university/feedback_guides/project_2_feedback.md) which covers the best practices for implementing this assignment. 
Read through the feedback guide point-by-point to determine ways you might be able to improve your submission. You should consider going back and implementing these improvements as well. Keep in mind that one of the most important parts of iOS development is learning the correct patterns and conventions. Check out the [project rubric](https://courses.codepath.org/snippets/ios_university/project_rubric.md) for a breakdown of how submissions are scored. If you have any technical questions about the project or concepts covered this week, post a question on our [Discussions Forum](https://discussions.codepath.org) and mark the question as type, "Curiosity". For general questions email us at, <<EMAIL>>. /cc @username_0 username_0: :+1: Nice work! This week, we continued to explore how to build apps that use an API (like Twitter). Unlike the movies app, we created a new class called TwitterAPICaller to help us interact with the API. We're also starting to introduce Auto Layout, which is how you make your app work for different phone sizes. Now that you've finished the app for the week, it's good to reflect on a few things:
- Manual segue for the login button. Remember that we couldn't create a segue directly from the login button because we have to check the user's credentials. If they enter the wrong password (or the login fails), you don't want to segue to the next screen.
- UserDefaults. We used UserDefaults to keep track of whether the user was logged in or not. If they were already logged in, we went directly to the tweets screen. UserDefaults is a great place to keep track of things you want to save locally, but not save on the server. For example, if you want to show a popup message one time only, you could use UserDefaults to keep track of whether you've shown the popup message already.
- TwitterAPICaller. Go back to the project and look through this file that we provided. There are some functions related to authentication that you can ignore.
Twitter uses OAuth 1.0a for authentication, which is an old standard. Most new APIs will use something similar to OAuth 2. Other than the authentication functions, the class is pretty simple, and you can create something similar to interact with other APIs. Check out the [assignment grading page](https://courses.codepath.org/snippets/ios_university/grading_spring_19) for a breakdown of how submissions are scored. If you have any technical questions about the project or concepts covered this week, post a question on our [Discussions Forum](https://discussions.codepath.org) and mark the question as type, "Curiosity". For general questions email us at, <<EMAIL>>. /cc @username_0
itspladd/scheduler
864995755
Title: Websocket implementation doesn't support simultaneous editing Question: username_0: Currently, if two users are editing the same appointment slot, the second user to submit their appointment will overwrite the first user's submission. The app should automatically cancel the second user's edit if another user updates that slot while they have the Form window open.
nancho17/Programa_Medidor
598165603
Title: Modify the User Menu, Alarm, option L Question: username_0: Modify 4.2.5 OPTION 4: Alarm. Replace option L with: L1: Minimum limit for activation: 100 L2: Maximum limit for de-activation: 105 Answers: username_0: Added in the latest version. Status: Issue closed
csernazs/pytest-httpserver
672045783
Title: Allow using wildcard in request URI Question: username_0: Currently, when using the `httpserver.expect_request` function, I am only allowed to use the full URI string. It would be nice if it also accepted URIs with wildcards or a regex. For instance, I have an endpoint called `GET /users/{USER_ID}/role` that will return the role of a user with a given USER_ID. If I wanted to mock this endpoint, I would have to specify an `httpserver.expect_request` for each of the users I am using in my tests: ```httpserver.expect_request("/users/1/role").respond_with_json("admin")``` ```httpserver.expect_request("/users/2/role").respond_with_json("admin")``` ```httpserver.expect_request("/users/3/role").respond_with_json("admin")``` It would be way more convenient if we could just pass in a wildcard instead: ```httpserver.expect_request("/users/*/role").respond_with_json("admin")``` Answers: username_1: Hi, This is a duplicate of #34. Please read my answer there. As it came up from two different people, I think I'll add some basic support for this - however I still do not want to add any routing implementation, as that would be a reimplementation of Flask. It means that the handlers will be looked up in sequential order (in the same order as they were defined) with no regard to how narrow the match is. What I mean by "basic" is the following:
- accepting a regexp object (e.g. the object returned by `re.compile()`); in such a case a regexp match will be performed.
- accepting a URIPattern object, which would be defined as an abstract class with a `match()` method (i.e. it would be handled in the same way as the regexp object). The library itself would have no implementation of the URIPattern class; it would be up to the developer to implement something (e.g. prefix/suffix matches, glob matching, etc).

username_0: Thanks for your answer! Just read your answer in #34 and it's great that we are able to do this 👍.
But it might indeed be good to also implement a way to accept a regex directly, as that seems a bit more user-friendly. If you need any help I'll be happy to help implement it 👌 username_1: cc: @username_2 I've submitted PR #37 to implement pattern matching. Could you please have a look at it? Obviously I accept any comments on the implementation, and I'm also interested in whether this would make your life easier regarding this library. Having an isinstance check on the object returned by re.compile is quite awkward, but I could not find anything better. The key points are:
- regexps can be provided by re.compile (this is good for performance, but also to distinguish a string from a regexp)
- URIPattern is defined as an abstract class, so it cannot be instantiated directly, but if you create a subclass and define the match() method, it will be used to match the URI. I was thinking about allowing any callable to be provided, but this URIPattern can be parametrized (which will be true in 99% of cases, such as prefix match or glob match), and it is more type safe (in Python terms: easier to specify a type hint, better IDE support, etc)

I also kept the possibility to specify any object whose `__eq__` is properly written, as I suggested in #34, since I don't want to break any code (unless it is absolutely necessary). username_1: I've merged the PR, thanks for your comments! I think I'll do a release later today. Status: Issue closed username_1: I've released 0.3.5. I'm closing this issue in the hope that the packages on PyPI are also usable (I've already checked, but who knows..). :) If you see any problem, feel free to open a new issue. Thanks for using this library! username_2: Hey, thanks for getting back to this, that's cool! I will probably try to upgrade and battle-test your change in the upcoming months :) Thanks for developing this great library, Artur
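The URIPattern design described in this thread can be sketched in a few lines. The sketch below mirrors the idea (an abstract class whose `match()` method decides whether a URI matches) but is not the library's actual source; the `PrefixPattern` subclass is an invented example of the kind of matcher a developer might write:

```python
import re
from abc import ABC, abstractmethod

class URIPattern(ABC):
    """Abstract matcher: subclasses decide what counts as a matching URI."""

    @abstractmethod
    def match(self, uri: str) -> bool:
        ...

class PrefixPattern(URIPattern):
    """Example user-defined pattern matching any URI with a given prefix."""

    def __init__(self, prefix: str):
        self.prefix = prefix

    def match(self, uri: str) -> bool:
        return uri.startswith(self.prefix)

# A compiled regexp covers the /users/{USER_ID}/role case from the question:
user_role = re.compile(r"^/users/\d+/role$")
print(bool(user_role.match("/users/42/role")))  # prints: True
```

With pytest-httpserver 0.3.5 as released above, the object returned by `re.compile()` can be passed to `expect_request` directly, and a `URIPattern` subclass is matched through its `match()` method, so a single handler can serve all of the `/users/<id>/role` requests from the original question.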