repo_name: string (length 4 to 136)
issue_id: string (length 5 to 10)
text: string (length 37 to 4.84M)
kalexmills/github-vet-tests-dec2020
758431313
Title: gaganhegde/Tasks38: vendor/github.com/tektoncd/pipeline/pkg/pullrequest/disk_test.go; 3 LoC Question: username_0: [Click here to see the code in its original context.](https://github.com/gaganhegde/Tasks38/blob/8dda5753dd8209c870228c4058180b1ee8f59e3a/vendor/github.com/tektoncd/pipeline/pkg/pullrequest/disk_test.go#L297-L299) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, s := range statuses { writeFile(filepath.Join(d, "status", s.Label+".json"), &s) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 8dda5753dd8209c870228c4058180b1ee8f59e3a<issue_closed> Status: Issue closed
france-connect/service-provider-example
424944796
Title: Add http://localhost:3000/logout-callback to the authorized redirect URIs Question: username_0: When trying to log out of a local session, we get an error message. URL where the error is raised: https://fcp.integ01.dev-franceconnect.fr/api/v1/logout?id_token_hint=xxx&state=customState11&post_logout_redirect_uri=http://localhost:3000/logout-callback Error message: Logout redirect uri does not match one of registered redirect uris. In app.js, line 57, you defined logout-callback: `app.get('/logout-callback', oauthLogoutCallback);` but the URI is not recognized. Answers: username_1: Thanks for the report. The problem has been fixed. Status: Issue closed
WarEmu/WarBugs
188424863
Title: No item stats display Question: username_0: Just as the title says: I created a character, and I can't see any stats on his gear. The tooltip for item stats (or comparison with currently equipped gear) doesn't even appear. This happens all the time, from the inventory or from the character page. I will try these tests to see where this issue comes from: _ Create another character: this one may be bugged from the start? _ Change resolution _ Deactivate all add-ons, all at once and/or one by one Answers: username_1: This should solve it: https://github.com/WarEmu/WarBugs/issues/8287 Status: Issue closed
pywbem/pywbemtools
882211306
Title: Add 'listener send' command Question: username_0: Add support for a 'send' command in the 'listener' command group, which would send an indication to a listener. The targeted listener may be a pywbemcli listener on the same system, on a different system, or a different type of listener. The indication can be specified on the command line. Answers: username_1: Note: I have put a command into pywbemcli subscription that creates a subscription in the server and tells the server to send some indications. However, I think that is different than your 'send'. username_0: Yes, that is different. This send here would not depend on a server implementing a provider that can be triggered to perform a send, but instead would perform the send directly. Also, the target listener would be specified by its name, so it would be usable only for pywbemlistener-based listeners. username_0: It turns out this depends on not-yet-existing functionality in pywbem to send an indication to a listener. I suggest adding that to `pywbem.WBEMConnection` and have opened issue TBD for that. Status: Issue closed
Azure/azure-cli
928548854
Title: Postgres Flexible Server Variables Are Not Part of Local Context Question: username_0: **Resource Provider** Azure CLI **Description of Feature or Work Requested** Currently, users attempting to connect a webapp to a postgres database need to manually copy-paste the server, hostname, username, password and db-name from the JSON output. ![image](https://user-images.githubusercontent.com/25991359/123152544-8ffe2300-d419-11eb-9766-61bb54e45e02.png) This is challenging and adds many steps to the process (including scrolling up the terminal, finding this information, copying it over, adding it to the CLI command, etc.). Would it be possible to keep this information as part of the local context so that users will not have to take so many steps to succeed? **Minimum API Version Required** N/A **Swagger Link** N/A **Target Date** I would love to see this rolled out before Ignite. The Azure Resource Connector goes into preview by that time and having this feature will likely expedite the utility of the changes coming in through the connector. Answers: username_1: route to service team
baloise/open-source
338264291
Title: Apply for DINAcon 2018 Award with open-source@Baloise Question: username_0: We should apply for [DINAcon 2018 Award](https://dinacon.ch/dinacon-awards/project-submission/) with open-source@Baloise Answers: username_0: ## Open Source @ Baloise ### Contact Name and eMail <NAME> ### Company Name Baloise Group - https://www.baloise.com Status: Issue closed username_0: ![dinacon](https://user-images.githubusercontent.com/1764012/42737215-acfb95b6-8870-11e8-9c7a-99e695fc55fa.png)
paragbaxi/qualysapi
503433761
Title: TypeError: connect() got an unexpected keyword argument 'username' Question: username_0: Hi, I keep getting the above error when trying to connect with credentials. I haven't changed the script, apart from username and password obviously. Any ideas? Answers: username_1: Please post the code generating the error so I can recreate the error.
cds-snc/notification-api
634694687
Title: URLs in email notifications read out fully Question: username_0: See the template for the "Notify password reset email" for an example of how to embed a link with a description. Answers: username_0: See the template for the "Notify password reset email" for an example of how to embed a link with a description. username_1: Moving to next sprint. Need to talk with devs at Dev-design sync! username_0: Test on staging first, otherwise use password reset as an example of how to do this in Markdown. Document and share with those who have a stake in the links discussion -- username_0: Match staging and prod templates username_2: Needs two full heads-down hours to be done, which has been hard to get so far. Anik will try to do that today; should be done by the end of today. The French on those templates is good on prod username_2: Work needed yesterday happened. Close to being done. Work on the MOU template still needs to happen + make sure the feedback points to the "contact us" page and not the support page. @brdunfield will work on this with @username_0 Status: Issue closed
phetsims/rosetta
59550212
Title: missing Gruntfile and phetLib Question: username_0: Rosetta can't be linted because it has no Gruntfile.js, and it's missing phetLibs in package.json. Answers: username_1: It's not a sim and doesn't have any dependencies on any PhET libraries, so it doesn't need to have phetLibs. Adding a grunt file probably makes sense to make the project lintable. username_0: I made rosetta lintable. Assigning to @username_1 to make sure this doesn't cause any problems. username_1: Linting works, thanks. There are a number of items in the devDependencies field of package.json that I don't think are really needed, since this isn't a sim and will not be built as such. However, I'm reluctant to remove them because, I don't know, maybe the grunt tools will freak if the things needed for the other sims aren't around. So for now at least, they will be left. Main issue addressed, closing. Status: Issue closed username_0: @username_1 FYI, I did try paring devDependencies down, but it appears that they are all needed in order to run the 'lint' task. username_1: That settles it then: they stay.
AICC/CMI-5_Spec_Current
106026788
Title: Multiple Passed and Failed Statements in an AU Session Question: username_0: (When PassIsFinal is set to False) If an AU issues multiple Passed and Failed statements within an AU session, what determines the final result? The last statement issued? What are the implications for a registration? (Multiple AU sessions) Can a user go back and Fail a passed AU (causing a "status reversal")? Answers: username_1: Very interesting question. What are the use cases that require this "constant" PassIsFinal? username_2: Is there anything to stop the AU from creating a new session (from within itself) to allow a user multiple "attempts" at passing an assessment? If not, would this be frowned upon? The only use case I can come up with (and I'm grasping at straws at this point) is when course navigation is locked, meaning users have to take the content portion of the course until they unlock the assessment. If we require a user to relaunch an AU in order to get another grade (pass/fail), they'd have to go through the content and then the assessment again. Such a small use case, but it may be worth discussing. Especially if the solution is for the AU to create a new session from within itself. Status: Issue closed
portapps/portapps
496807068
Title: Improve topic on GitHub ? Question: username_0: Hi @username_1, Currently the [portapps topic](https://github.com/topics/portapps) on GitHub is not well formed and it would be nice to add it to [github/explore](https://github.com/github/explore). Answers: username_1: @username_0 Sure! Feel free to open a PR on their repository. Status: Issue closed
apcshields/autocomplete-bibtex
118147652
Title: Citation entries parsed from JSON are in a different format than ReferenceProvider.buildWordList expects Question: username_0: `ReferenceProvider.buildWordList` expects the citation entries in `@bibtex` to have either `citation.entryTags.author` or `citation.entryTags.editor`. `citeproc.parse` currently returns author information only in `citation.entryTags.authors`, which means that the citation is ignored by `buildWordList`. Answers: username_1: There is a function to parse citeproc into the format used internally, so this should be an easy fix. No time to look at it right now (but I thought I had tested with books, sorry for the oversight) Sent from a mobile device. Pardon the typos/brevity. Sent using CloudMagic Email [https://cloudmagic.com/k/d/mailapp?ct=pa&cv=8.0.67&pv=5.1.1&source=email_footer_2] username_0: Timothée, Yep, I've just got to stop for the evening and want to document what I'm running into. Thanks! Andrew username_0: Closed by 24cc381f82dcc67027bc589a5da1624630da2fc7. Status: Issue closed
stereolabs/zed-opencv
522203553
Title: Depth images values in meters Question: username_0: Hello, I have lately been using zed-opencv to save both images and depth images. I was saving the depth into an 8-bit PNG format, which is obviously saved as a monochrome grayscale image, and I know that the values of each pixel correspond to shades of grey between 0 and 255 (255 as white and 0 as black). What I wanted to ask is: in this output image (8-bit depth image), do the values between 0 and 255 of each pixel each correspond to a certain distance in meters/millimeters? Is that a standard thing? Or is there another way to get the depth distance information (in meters) from just the 8-bit depth image? Answers: username_1: Hi, Are you using the saveDepthAs() function from the ZED SDK to save the PNG? In this case, the PNG format is 16-bit and not 8-bit. It means the depth values are between 0 and 65535. If you are in sl:UNIT_MILLIMETER then the grayscale value will be the millimeter depth value. 8 bits is not precise enough to save a depth map in PNG. username_0: Yes I am using the saveDepthAs() function. You mean that if I am using sl:UNIT_MILLIMETER and I have a depth value of 65535, it means a distance of 65.535 m? username_1: Yes. 65535 mm, i.e. 65.535 m. username_0: Thank you for the information Status: Issue closed
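For illustration, a minimal Python sketch of the conversion confirmed above — reading a 16-bit depth PNG saved with sl:UNIT_MILLIMETER and converting it to meters. The file name and the use of OpenCV here are assumptions for the example, not part of the thread:

```python
import cv2

# Read the PNG unchanged so the 16-bit values survive
# (a plain imread would convert down to 8 bits).
depth_mm = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # dtype uint16, millimeters

# 65535 -> 65.535 m, as stated in the answer above.
depth_m = depth_mm.astype("float32") / 1000.0

print(depth_m[240, 320])  # depth in meters at row 240, column 320
```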
reactor/reactor-kafka
1024006553
Title: Kafka producer instance throws exception when multiple concurrent calls are made transactionally. Question: username_0: The same Kafka producer instance cannot be used at the same time for more than one request. The reason is that in the producer options we specify the transactionId - ProducerConfig.TRANSACTIONAL_ID_CONFIG:

```java
protected Map<String, Object> producerOptions(boolean transactional) {
    val props = new HashMap<String, Object>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    if (transactional) {
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transaction-id");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
    }
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    if (tClass.equals(String.class))
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    else
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}
```

A workaround would be: on each request, create a new Kafka producer with a different transactionId. I tried this workaround; it does not work properly, besides being inefficient. The problem is that each time we use a new Kafka producer, once we finish with it we need to close it, otherwise it causes a memory leak. And the function to close a producer, producer.close(), is blocking, which makes it impossible to use this as a workaround. The best option would be for the transactionId not to be specified by us but generated by the library internally, so that the same producer can be used multiple times by multiple requests in parallel. The error is like this:

```
org.apache.kafka.common.KafkaException: TransactionalId my-transaction-id: Invalid transition attempted from state IN_TRANSACTION to state IN_TRANSACTION
    at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:1078) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.producer.internals.TransactionManager.transitionTo(TransactionManager.java:1071) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.producer.internals.TransactionManager.beginTransaction(TransactionManager.java:357) ~[kafka-clients-2.7.1.jar:na]
    at org.apache.kafka.clients.producer.KafkaProducer.beginTransaction(KafkaProducer.java:620) ~[kafka-clients-2.7.1.jar:na]
    at reactor.kafka.sender.internals.DefaultTransactionManager.lambda$null$0(DefaultTransactionManager.java:43) ~[reactor-kafka-1.3.5.jar:1.3.5]
    at reactor.core.publisher.MonoRunnable.call(MonoRunnable.java:73) ~[reactor-core-3.4.9.jar:3.4.9]
    at reactor.core.publisher.MonoRunnable.call(MonoRunnable.java:32) ~[reactor-core-3.4.9.jar:3.4.9]
    at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:139) ~[reactor-core-3.4.9.jar:3.4.9]
    at reactor.core.publisher.MonoPublishOn$PublishOnSubscriber.run(MonoPublishOn.java:181) ~[reactor-core-3.4.9.jar:3.4.9]
    at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68) ~[reactor-core-3.4.9.jar:3.4.9]
    at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28) ~[reactor-core-3.4.9.jar:3.4.9]
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na]
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[na:na]
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[na:na]
    at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
```

Answers: username_1: is this issue resolved? username_2: It is not; you need to maintain a pool of producers in your code. Only one transaction can be in process at a time.
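To illustrate the pool suggestion, here is a rough sketch using the Python confluent-kafka client (the thread itself concerns reactor-kafka in Java, so this only shows the shape of the idea). Each pooled producer gets its own transactional.id, and producers are returned to the pool rather than closed; the pool size and naming scheme are assumptions:

```python
import queue
from confluent_kafka import Producer

POOL_SIZE = 4  # assumed; size to the expected concurrency

pool = queue.Queue()
for i in range(POOL_SIZE):
    p = Producer({
        "bootstrap.servers": "localhost:9092",
        "transactional.id": f"my-transaction-id-{i}",  # must be unique per producer
        "enable.idempotence": True,
    })
    p.init_transactions()  # one-time registration/fencing with the broker
    pool.put(p)

def send_transactionally(topic, value):
    producer = pool.get()  # blocks until a producer is free
    try:
        producer.begin_transaction()
        producer.produce(topic, value=value)
        producer.commit_transaction()  # also flushes outstanding messages
    except Exception:
        producer.abort_transaction()
        raise
    finally:
        pool.put(producer)  # reuse instead of the blocking close()
```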
wolverton-research-group/qmpy
340231221
Title: How to add new entries to the database Question: username_0: We have some DFT calculations of compounds which are not in the database (or prototypes). How can we add them using qmpy and check phase stability, etc.? Status: Issue closed Answers: username_1: Closing this, as this is not really an issue. Please continue the discussion via the email chain on <EMAIL>
syncthing/syncthing-macos
642585218
Title: Versions in app are out of sync with versions in feed again Question: username_0: Seems like a repeat of https://github.com/syncthing/syncthing-macos/issues/90. * `106010001` is in CFBundleVersion (app). * `100600101` is in sparkle:version (feed). * `1.6.1-1` is in CFBundleShortVersionString (app). * `v1.6.1-1` is in sparkle:shortVersionString (feed). Answers: username_1: Oops, thanks for the report, will fix it with 1.7.0 Status: Issue closed
casswe368/ao3summary
650789867
Title: Speed up pulling the raw data from ao3 Question: username_0: Move the file open, write, and close from the loop to the main part of the code to make getting the raw data faster. Current benchmark is 489 pages in 2.5 hours to save the raw HTML as a txt file. Add waits so that I don't get my IP address banned from ao3.<issue_closed> Status: Issue closed
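A sketch of the change described in the issue — opening the output file once outside the loop and adding a polite delay between requests. The URL pattern, page count, and sleep interval are placeholders:

```python
import time
import requests

PAGE_URL = "https://archiveofourown.org/works?page={}"  # placeholder pattern
NUM_PAGES = 489

# Open once; the slow version opened, wrote, and closed inside the loop.
with open("raw_pages.txt", "w", encoding="utf-8") as out:
    for page in range(1, NUM_PAGES + 1):
        resp = requests.get(PAGE_URL.format(page))
        out.write(resp.text)
        time.sleep(5)  # wait between requests to avoid an IP ban from ao3
```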
orientechnologies/orientdb
127897965
Title: NullPointerException on DELETE EDGE Question: username_0: ## Description: Cannot delete records of an EDGE when the vertex referring to 'out' is deleted. ## Steps: - Create a vertex record - Create an edge from vertex class to edge class - Delete the vertex record - Try deleting all records from the edge class (`DELETE EDGE myEdgeClassName`) (it will give the NullPointerException) Answers: username_1: Hi @username_0 Which version of OrientDB? What do you mean when you say "Create an edge from vertex class to edge class"? username_1: Can you provide an SQL script to reproduce it? Status: Issue closed username_0: Hi @username_1, the version is `community 2.1.0`. I am not able to reproduce the scenario again, not sure why. I will close the issue and open it again once I have a reproduction script. Thanks and sorry for wasting your time.
cp2-dc-ic-mobile-2018/Symposion
379153290
Title: Standardize the class naming convention Question: username_0: In your code, there is no agreed-upon standard for class names. We have, for example: - `MaskType` written in _CamelCase_ with an uppercase initial - `bancoDados` written in _camelCase_ with a lowercase initial - `selecioneusuario` written entirely in lowercase It would be good to define a naming convention for this kind of thing. You could even follow an existing one (such as [Google's](https://google.github.io/styleguide/javaguide.html#s5-naming), for example). Also review the names of some classes that are not very clear. For example: the class `Usuarios` actually represents a single user, so it is strange that its name is plural. Remember as well, if you want to rename a variable, method, or class, to use Android Studio's built-in rename feature, so that it replaces the name everywhere it is used (especially in the XML configuration files).
mediaelement/mediaelement-plugins
245677612
Title: Audio Player Ad support Question: username_0: Is there any support for playing audio ads through the audio player? Maybe using something like Digital Audio Ad Serving Template (DAAST)? Answers: username_1: Not currently. I need to rework the Ads plugin and I'll determine viability to add this into it. I'll keep you posted. Thanks
tur-nr/polymer-redux
223691849
Title: reBind Question: username_0: I need a re-bind function in connectedCallback(); otherwise, after my element is removed and re-appended, it loses its auto-bind behavior. Answers: username_1: I'm sorry, I don't understand the described issue. Could you maybe demonstrate what you are trying to achieve and/or what your workaround is via a code example? username_0: Try the link [https://jsbin.com/qakahusilo/edit?html,output](https://jsbin.com/qakahusilo/edit?html,output) I added two buttons inside that call the append and remove methods respectively. When I first append, PolymerRedux works normally, but when I remove the element and re-append it, it no longer works. username_0: [https://github.com/username_0/polymer-redux/tree/polymer-2](https://github.com/username_0/polymer-redux/tree/polymer-2) I have forked the project and modified it, but that is only my personal temporary workaround at best.
Teeyenoh/zephis
210618419
Title: Personal gripe [v0.1.2-alpha] Question: username_0: I feel the tiles are a bit too big; making them smaller would probably be a lot more appealing to look at! Answers: username_1: Due to the "power of two" nature of textures, it's either this or half as wide. I'll set it to that for the next update, and you can see what you think :P username_0: If that's the case, then please zoom the screen out so they look smaller :P
rossfuhrman/_why_the_lucky_markov
581889333
Title: Punch the yield keyword is followed by the time you put a guard at the door and said, Say, Stunt Runner, please. hadn’t ever painted anything of the parts of speech once again. Question: username_0: Toot: Punch the yield keyword is followed by the time you put a guard at the door and said, Say, Stunt Runner, please. hadn’t ever painted anything of the parts of speech once again. One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots
spring-projects/spring-boot
146126676
Title: After creating asynchronous JobLauncher can not use autowired Jpa Repositories Question: username_0: I am using Spring Batch in a Spring Boot application. I have an issue with creating the JobLauncher asynchronously and at the same time accessing the DAO (business) layer of the application. I overrode the DefaultBatchConfigurer and set the TaskExecutor, but my autowired JPA repositories in my custom line-mapper class do not perform inserts or updates to my database entities (not the batch-related tables). I am injecting a ThreadPoolTaskExecutor bean that I have created into this setTaskExecutor.

```java
@Configuration
public class ProductImportBatchConfig extends DefaultBatchConfigurer {

    @Autowired
    private TaskExecutor taskExecutor;

    @Override
    protected JobLauncher createJobLauncher() throws Exception {
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(super.getJobRepository());
        jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
        jobLauncher.afterPropertiesSet();
        return jobLauncher;
    }
}
```

This is my batch configuration class. The following is my custom FieldSetMapper:

```java
@Component
@StepScope
public class ItemFieldSetMapper implements FieldSetMapper<BasisCategoryItem> {

    Logger logger = Logger.getLogger(LocalPersist.class.getName());

    @Autowired
    private CSVColumnDao csvColumnDao;

    @Override
    public BasisCategoryItem mapFieldSet(FieldSet fieldSet) throws BindException {
        // use of csvColumnDao.save()
        // This save is not working
    }
}
```

Answers: username_1: Thanks for getting in touch, but it feels like this is a question that would be better suited to [Stack Overflow](http://stackoverflow.com/). As mentioned in [the guidelines for contributing](https://github.com/spring-projects/spring-boot/blob/master/CONTRIBUTING.adoc#using-github-issues), we prefer to use GitHub issues only for bugs and enhancements. Feel free to update this issue with a link to the re-posted question (so that other people can find it) or add some more details if you feel this is a genuine bug. Status: Issue closed username_0: http://stackoverflow.com/questions/36446644/autowired-jparepository-save-ignored-in-custom-fieldsetmapper
vaadin/flow
314896067
Title: Progress bar: Java-side setValue is int while client side is by default a number. Question: username_0: The Java-side API of the progress bar should have a method `setValue(double)` instead of `(int)`, to represent the default range of values (0-1) of the client-side component. Also, the default range (defined in the default constructor of ProgressBar) should be set to 0-1.
rmagick/rmagick
38634380
Title: Homepage URL is broken. Question: username_0: **Issue by [rickcarlino](https://github.com/rickcarlino)** _Wednesday Jul 23, 2014 at 15:46 GMT_ _Originally opened as https://github.com/rmagick/rmagick/issues/110_ ---- It looks like this project was hosted on the now-offline rubygems.org site. Is there a new homepage URL now that RubyForge shut down? Answers: username_1: The site is now at http://rmagick.github.io Status: Issue closed username_0: I can't see the diff on my phone. Was it not redirecting? username_1: Ugh. I'm afraid I don't understand the question :)
goodwithtech/dockle
516569803
Title: can't trace symbolic link layer.tar (rare case) Question: username_0: **Description**

```
COPY sample.txt /app/sample.txt
RUN chmod u+s /app/sample.txt
RUN chmod u-s /app/sample.txt
```

This `sample.txt` is not a suid file, but Dockle sometimes detects it as a suid file.

```
├── 99dd0e6c897c668eaff4c7db78af46f0222de6002d826850b7ccf7647c734b52
│   ├── VERSION
│   ├── json
│   └── layer.tar
├── 9e54adcf82bab951408ca086571b79a04f34afe2e5984f16a36147c3bd2bdff5
│   ├── VERSION
│   ├── json
│   └── layer.tar
├── cc9cb9922a613543e7600f4ad3101855d2dd2f04043e46dbf2824adb9aff886b
│   ├── VERSION
│   └── layer.tar -> ../99dd0e6c897c668eaff4c7db78af46f0222de6002d826850b7ccf7647c734b52/layer.tar
```

This is caused by the symlinked layer.tar file, but I can't reproduce it with a simplified image. **What happened instead?** The suid file is never detected.
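A sketch of the symlink handling this report points at: when walking the extracted image directories, resolve each layer.tar with realpath before reading it, so the symlinked layer from the tree above is traced to its target. The walking code itself is illustrative, not Dockle's:

```python
import os

def iter_layer_tars(image_dir):
    """Yield real paths of layer.tar files, following symlinks such as
    cc9cb992.../layer.tar -> ../99dd0e6c.../layer.tar from the tree above."""
    seen = set()
    for entry in sorted(os.listdir(image_dir)):
        candidate = os.path.join(image_dir, entry, "layer.tar")
        if not os.path.exists(candidate):
            continue
        real = os.path.realpath(candidate)  # resolves the symlinked layer
        if real not in seen:
            seen.add(real)
            yield real
```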
jeremybarbet/react-native-modalize
449052541
Title: Problem using the react-native-modalize Question: username_0: I am facing a minor problem while using Modalize: there is blank space at the bottom of the modal. However, the blank space does not show when using 'withReactModal'. ![Screenshot_1559015282](https://user-images.githubusercontent.com/30256135/58449754-bd931800-813e-11e9-8b88-1808333193a9.png) Any solution for this? Answers: username_1: Hi, Thanks for using Modalize. Can you share the code you used to display the modal component? username_0: {item.code} </Text> </Right> </ListItem> ); })} </List> </ScrollView> </View> </Modalize> ` username_1: {item.code} </Text> </Right> </ListItem> ); })} </List> </Modalize> ``` username_1: I'll close that for inactivity. Feel free to reopen if you come up with more questions. Status: Issue closed
ebigram/emojisense
740846092
Title: Prevent enter from skipping to next line Question: username_0: Cool plugin!! I actually prefer this emoji plugin over the official one. The one thing I would personally change (perhaps make it configurable) is to have Enter not skip directly to the next line, but instead just insert one space between the emoji and the cursor. Answers: username_1: Thank you for your note. Yes, I agree this should be the expected behavior; it is on my backlog, but I will escalate it this week.
didip/tollbooth
263965144
Title: Limiter doesn't behave as documentation describes Question: username_0: Greetings! I am seeing some odd behavior with how the limiter limits requests. We are constructing a limiter like this: ```go limiter := tollbooth.NewLimiter(20, time.Second, nil) ``` My understanding of this, based on the documentation and code, is that the limiter will now allow a maximum of 20 requests per second, per path, per IP. If a client attempts to exceed 20 req/sec/path/IP, it will receive a 429 response. However, this is not what we are seeing. We have a client that hits the same path about 2 times per second. After about six minutes at 2 req/sec, tollbooth starts responding with 429 errors here and there. Either my understanding of how it is supposed to work is incorrect, or my configuration is incorrect, but it seems that setting 20/time.Second should *never* cause a 429 response under a load of 2/req/sec/path/IP. Is it possible there is a bug somewhere in this package? Please let me know what I'm missing here. Thanks! Answers: username_1: Your expectation is correct, there has to be a bug here. Let me take a look. username_0: Excellent. Thank you! Please let me know if I can be of assistance. username_1: @username_0 Can I see an example on how you use it on your HTTP router? username_1: I have written a test case for what I think describes your problem, but the test is passing. Can you check if it correctly represents your problem? https://github.com/username_1/tollbooth/blob/master/tollbooth_bug_report_test.go username_0: @username_1 thanks for that. It does not correctly represent my problem. I will take your test case and see if I can make it fail in the way I am seeing. Thanks for writing it! username_0: @username_1 I've managed to write a test case that reliably reproduces my issue. ``` --- FAIL: Test_Issue48_RequestTerminatedEvenOnLowVolumeOnSameIP (20.14s) tollbooth_bug_report_test.go:47: Should be able to handle 20 reqs/second. HTTP status: 429. Expected HTTP status: 200. Failed after 39 iterations in 20.138660 seconds. ``` Here's the code: ``` go // See: https://github.com/username_1/tollbooth/issues/48 func Test_Issue48_RequestTerminatedEvenOnLowVolumeOnSameIP(t *testing.T) { lmt := limiter.New(nil).SetMax(20).SetTTL(time.Second) lmt.SetMethods([]string{"GET"}) limitReachedCounter := 0 lmt.SetOnLimitReached(func(w http.ResponseWriter, r *http.Request) { limitReachedCounter++ }) handler := LimitHandler(lmt, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte(`hello world`)) })) // The issue seen by the reporter is that the limiter slowly "leaks", causing requests // to fail after a prolonged period of continuous usage. Try to model that here. // // Report stated that a constant 2 requests per second over several minutes would cause // a limit of 20/req/sec to start returning 429. timeout := time.After(10 * time.Minute) iterations := 0 start := time.Now() for { select { case <-timeout: break case <-time.After(500 * time.Millisecond): req, _ := http.NewRequest("GET", "/doesntmatter", nil) req.RemoteAddr = "127.0.0.1" rr := httptest.NewRecorder() handler.ServeHTTP(rr, req) if status := rr.Code; status != http.StatusOK { t.Fatalf("Should be able to handle 20 reqs/second. HTTP status: %v. Expected HTTP status: %v. Failed after %d iterations in %f seconds.", status, http.StatusOK, iterations, time.Since(start).Seconds()) } iterations++ } } if limitReachedCounter > 0 { t.Fatalf("We should never reached the limit, the counter should be 0. 
limitReachedCounter: %v", limitReachedCounter) } } ``` username_1: Looks like https://godoc.org/golang.org/x/time/rate#Limiter.AllowN suddenly return false. I don't know what cause it to behave like that yet. See: https://github.com/username_1/tollbooth/blob/master/limiter/limiter.go#L462 username_0: I think you may be using the token bucket incorrectly. According to my reading of the documentation, the bucket refills at a rate of `r` tokens per second, which is the first argument to the constructor. The second argument is the max burst size. It appears you are passing the TTL (1 in my case) to the `r` argument, meaning the bucket is only being refilled at 1 token per second. I believe if you pass the max as the first argument, this issue will be fixed. It also appears that the token bucket ALWAYS operates at `r` tokens per second and you cannot tell it to refill on a different delay. I think this means you will either need to change your interface, or do some math before initializing the token bucket. For example you could do `max/ttl` to get the `r` argument. I think it might be better to remove TTL altogether and only allow a “per second” number like the bucket expects. username_1: Oh wow, you are indeed correct. Thank you for catching this! Status: Issue closed
mardiros/pyshop
422073717
Title: Error in exception when serving `/simple/requests/` Question: username_0: ``` 2019-03-18 07:23:30,616 INFO [pyshop.views.simple][waitress] Create release 1.0.0 for package requests 2019-03-18 07:23:30,617 INFO [pyshop.views.simple][waitress] Looking for author <NAME> 2019-03-18 07:23:31,225 INFO [pyshop.views.simple][waitress] Mirroring version 2.12.0 2019-03-18 07:23:31,501 INFO [pyshop.views.simple][waitress] Create release 2.12.0 for package requests 2019-03-18 07:23:31,502 INFO [pyshop.views.simple][waitress] Looking for author <NAME> 2019-03-18 07:23:32,938 INFO [pyshop.views.simple][waitress] Mirroring version 2.18.0 2019-03-18 07:23:34,161 INFO [pyshop.views.simple][waitress] Create release 2.18.0 for package requests 2019-03-18 07:23:34,161 INFO [pyshop.views.simple][waitress] Looking for author <NAME> 2019-03-18 07:23:35,537 INFO [pyshop.views.simple][waitress] Mirroring version 2.4.3 2019-03-18 07:23:36,531 INFO [pyshop.views.simple][waitress] Create release 2.4.3 for package requests 2019-03-18 07:23:36,532 INFO [pyshop.views.simple][waitress] Looking for author <NAME> 2019-03-18 07:23:37,823 INFO [pyshop.views.simple][waitress] Mirroring version 1.0.1 2019-03-18 07:23:38,826 INFO [pyshop.views.simple][waitress] Create release 1.0.1 for package requests 2019-03-18 07:23:38,827 INFO [pyshop.views.simple][waitress] Looking for author <NAME> 2019-03-18 07:23:40,173 INFO [pyshop.views.simple][waitress] Mirroring version 2.0.1 2019-03-18 07:23:41,190 INFO [pyshop.views.simple][waitress] Create release 2.0.1 for package requests 2019-03-18 07:23:41,191 INFO [pyshop.views.simple][waitress] Looking for author <NAME> 2019-03-18 07:23:42,498 INFO [pyshop.views.simple][waitress] package requests mirrored 2019-03-18 07:23:42,532 ERROR [waitress][waitress] Exception when serving /simple/requests/ Traceback (most recent call last): File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/tweens.py", line 13, in _error_handler response = request.invoke_exception_view(exc_info) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/view.py", line 769, in invoke_exception_view raise HTTPNotFound pyramid.httpexceptions.HTTPNotFound: The resource could not be found. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/waitress/channel.py", line 336, in service task.service() File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/waitress/task.py", line 175, in service self.execute() File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/waitress/task.py", line 452, in execute app_iter = self.channel.server.application(env, start_response) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/router.py", line 270, in __call__ response = self.execution_policy(environ, self) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/router.py", line 279, in default_execution_policy return request.invoke_exception_view(reraise=True) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/view.py", line 768, in invoke_exception_view reraise_(*exc_info) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/compat.py", line 179, in reraise raise value File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/router.py", line 277, in default_execution_policy return router.invoke_request(request) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/router.py", line 249, in invoke_request response = handle_request(request) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid_tm/__init__.py", line 171, in tm_tween reraise(*exc_info) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid_tm/compat.py", line 36, in reraise raise value File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid_tm/__init__.py", line 136, in tm_tween response = handler(request) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/tweens.py", line 43, in excview_tween response = _error_handler(request, exc) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/tweens.py", line 17, in _error_handler reraise(*exc_info) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/compat.py", line 179, in reraise raise value [Truncated] File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid/renderers.py", line 470, in render result = renderer(value, system_values) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyramid_jinja2/__init__.py", line 265, in __call__ return template.render(system) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/jinja2/environment.py", line 1008, in render return self.environment.handle_exception(exc_info, True) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/jinja2/environment.py", line 780, in handle_exception reraise(exc_type, exc_value, tb) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/jinja2/_compat.py", line 37, in reraise raise value.with_traceback(tb) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyshop/templates/pyshop/simple/show.html", line 2, in top-level template code {% for r in package.sorted_releases %} File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/jinja2/environment.py", line 430, in getattr return getattr(obj, attribute) File "/media/swapdrive/local_pypi/env/lib/python3.4/site-packages/pyshop/models.py", line 518, in sorted_releases releases.sort(reverse=True) TypeError: unorderable types: Release() < Release() ``` I am trying to install the `requests` package via 
pyshop, but I am getting the above error. Answers: username_0: /label bug username_0: https://github.com/mardiros/pyshop/blob/b42510b9c3fa16e0e5710457401ac38fea5bf7a0/pyshop/models.py#L518 — adding a `try/except` there temporarily fixed it for me locally.
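A cleaner fix than the try/except would be to sort on an explicit key, since Python 3 no longer falls back to a default ordering for arbitrary objects (hence the "unorderable types: Release() < Release()" in the traceback). A sketch, assuming each Release exposes a version string (the attribute name is a guess):

```python
def sorted_releases(releases):
    # Release objects define no ordering, so compare on a concrete field.
    # Splitting "1.2.10" into (1, 2, 10) keeps numeric order correct.
    def version_key(release):
        return tuple(int(part) for part in release.version.split(".") if part.isdigit())
    return sorted(releases, key=version_key, reverse=True)
```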
ioBroker/ioBroker.js-controller
556066833
Title: Decide about ending support for Node.js 8 with controller 2.3 (3.0?) Question: username_0: Mainly (for now) because of * https://github.com/ioBroker/ioBroker.js-controller/pull/629 * https://github.com/ioBroker/ioBroker.js-controller/pull/628 * Node.js 8 has been EOL since December 2019 Answers: username_1: mkdirp can be replaced with fs-extra (https://github.com/ioBroker/ioBroker.js-controller/issues/497) username_1: Anyways..., fine with me. But we should announce it early enough. username_0: Also, with the semver dep and the new idea to have strict mode enabled by default in the installer, I would say the next js-controller is Node.js 10+ only and we do it as js-controller 3.0. Anyone against this? Please speak up till the end of this week, else I merge in the PRs and we make it official username_2: I am with you on this. username_0: With the current semver topic it IS decided! I will change everything in master to continue as 3.0.0 Status: Issue closed username_0: done. Master is now 3.0.0
appium/appium
93276419
Title: How to collect and generate a report from performance logs for a hybrid app webview in Android Question: username_0: Hi, how do I collect and generate a report from performance logs for the webview of a hybrid app on Android and iOS? I have seen something like the 'enablePerformanceLogs' capability for Android alone; how do I implement the same in iOS apps? And also, where will the results be collected? I need to generate a performance analysis report based on the data collected. As of now, if I try to get the logs, I am getting only Logcat and Client in the collection of logs. Thanks Status: Issue closed Answers: username_1: Hmmmmm. I think the way to get that isn't currently a feature of appium, but I have heard of others implementing it. What you should do is use a different xcode trace template file. Appium has a server flag by which you can pass it the path to a specific trace template to use. I think instruments will then collect the performance data you are looking for and record it. @username_2 do you know about this? Closing, because this isn't a bug in appium, and it's not a feature we expect to implement soon. username_2: Appium doesn't support this on iOS and has limited support on Android (no native app support). Instruments offers performance profiling and Apple has documented it on their official website. Android is easier since uiautomator has some performance monitoring APIs and adb makes collecting hardware stats such as processor and memory usage easy. Appium doesn't currently support that though. username_0: @username_2 @username_1: How do I collect the data via adb in an appium test? Can you elaborate on this? username_0: @username_2 How to do it on Android, can you please elaborate? username_2: search google for the adb commands. ex: `adb shell dumpsys cpuinfo`
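Following the last answer, a minimal sketch of collecting those hardware stats from a test script by shelling out to adb; the sample count, interval, and optional device id are assumptions, and parsing the dumpsys output is left out:

```python
import subprocess
import time

def sample_cpuinfo(samples=5, interval=2.0, device=None):
    """Collect raw `adb shell dumpsys cpuinfo` snapshots for later analysis."""
    base = ["adb"] + (["-s", device] if device else [])
    snapshots = []
    for _ in range(samples):
        result = subprocess.run(base + ["shell", "dumpsys", "cpuinfo"],
                                capture_output=True, text=True, check=True)
        snapshots.append(result.stdout)
        time.sleep(interval)
    return snapshots
```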
nanomsg/nanomsg
116293746
Title: How to close a device without closing everything. Question: username_0: I have already tried what you suggested, but it didn't work. Please find the code I used below.

```cpp
class NNHTest : public ::testing::Test {
public:
    nn_helper_t proxy_st;
};

struct info_my_param_t {
    int sock1;
    int sock2;
};

static void * loop_device_test( void* args ) {
    int rc;
    struct info_my_param_t * imp = (struct info_my_param_t *) args;
    rc = nn_device ( imp->sock1, imp->sock2 );
    printf("value of nn_device rc = %d\n", rc);
    return NULL;
}

TEST_F(NNHTest, StopDeviceTest ) {
    int rc, nnbd, con;
    // device
    int sock1 = nn_socket ( AF_SP_RAW, NN_REP );
    printf("value of sock1 = %d\n", sock1);
    nnbd = nn_bind ( sock1, "tcp://127.0.0.1:6699" );
    printf("value of nnbd = %d\n", nnbd);
    int sock2 = nn_socket ( AF_SP_RAW, NN_REQ );
    printf("value of sock2 = %d\n", sock2);
    con = nn_connect ( sock2, "tcp://127.0.0.1:6799" );
    printf("value of con = %d\n", con);
    // device loop start
    struct info_my_param_t imp;
    imp.sock1 = sock1;
    imp.sock2 = sock2;
    struct info_my_param_t *ptr = (struct info_my_param_t *)memdup( (char*)&imp, sizeof(struct info_my_param_t) );
    rc = pthread_create( &proxy_st.threadid, NULL, loop_device_test, ptr );
    printf("value of pthread_create rc = %d\n", rc);
    sleep(1);
    // closing sock1
    rc = nn_close(imp.sock1);
    printf("value of nn_close(imp.sock1) rc = %d\n", rc);
    [Truncated]
}
```

OUTPUT:

```
keshav@keshav-Latitude-E6540:~/clone/BD/Test/Shared/Core$ ./ubuntuRelease64/test-static
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from NNHTest
[ RUN      ] NNHTest.DeviceTestReqResp
value of sock1 = 0
value of nnbd = 1
value of sock2 = 1
value of con = 1
value of pthread_create rc = 0
value of nn_close(imp.sock1) rc = 0
value of nn_close(imp.sock2) rc = 0
```

Note: it is hanging here. Answers: username_1: Yes. This is confirmed. And I can confirm that I have a fix for this coming shortly. The problem was rather complex, and the fix fell out as a result of fixing some other nn_close() related problems. Stay tuned. Status: Issue closed
SpiNNakerManchester/RemoteSpiNNaker
43131276
Title: Push job output into a file and then read that file Question: username_0: The JobManager currently uses a JobOutputPipe to forward the output log from a job to a local file, simultaneously storing it in a file for later debugging. Instead, the output should be pushed to a file directly (i.e. using the equivalent of > outputfile.txt) and then this file should be read dynamically to keep track of the output. This would then allow the job to continue to run even if the JobManager should disappear. Answers: username_1: I'm not sure that that will work by itself. The issue is that the job is also part of the process group of the parent, so that if the parent is killed (the usual reason for death) the child _is also killed_. Subprocess management in Java is pretty primitive… username_0: I think the subprocess *does* continue to run from my experience (maybe this is a Windows feature - worth a quick test to verify). This is actually the preferred option since I would like to shut down the JobManager to do updates to it whilst leaving the child process running if at all possible (i.e. assuming no interface changes are made that make it impossible for the child to talk to the parent after resuming). As it now stands, the parent and child processes can run on different machines; there are two executors here: the XenVMExecutor, where the actual process runs on a VM separate from the parent JobManager; and a LocalExecutor that runs the process locally. The needs of the XenVmExecutor mean that the log file is live-uploaded to the JobManager during the running of the process in general. If this option is disabled, the log is instead pushed to the JobManager at the end. In either case though, the JobManager is contacted through a socket. The JobOutputPipe might therefore be obsolete in any case.
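For illustration, a small sketch (in Python, though the JobManager itself is Java) of the "read the file dynamically" part proposed above — following a job's output file as it grows, the way tail -f does:

```python
import time

def follow(path):
    """Yield lines from a job's output file as the job appends them."""
    with open(path, "r") as f:
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.5)  # no new output yet; poll again shortly
```

Because the job writes to the file on its own, the reader can disappear and resume without interrupting the job — the property the issue is after.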
alan-null/sc_ext
212513603
Title: Missing [Go to field] after using Sitecore navigation features Question: username_0: ### Steps to Reproduce the Problem 1. Open CE 2. Try to navigate to some item using navigation history, or navigate to any item using Links ![image](https://cloud.githubusercontent.com/assets/6848691/23670835/520d8552-0369-11e7-8bbf-fd7e2e0c3c40.png) ### Expected Behavior After the item is loaded, I should see the `[Go to field]` button injected ### Actual Behavior Missing `[Go to field]` button<issue_closed> Status: Issue closed
esy/esy
579517947
Title: Parsing failure: github:user/lru:lru.opam#2708c70 Question: username_0: Esy incorrectly tries to fetch the repo ``` fatal: unable to access 'https://github.com/user/lru%3Alru.opam.git/': The requested URL returned error: 400 ``` Answers: username_1: Workaround: add a resolution like this ```json "resolutions": { "@opam/lru": "bryphe/lru:lru.opam#2708c70" } ``` Status: Issue closed
containernetworking/plugins
455415503
Title: TOCTOU Race adding firewall chain Question: username_0: I'm not very familiar with this codebase or tool, so please forgive me some ignorance :smile: During CI-testing of libpod, we run ginkgo with `-nodes 3` (and tests in random order as default) for speed, but also because it catches "interesting" race conditions. [One in particular I've been researching seems like it may have an easy fix.](https://paste.fedoraproject.org/paste/WdClyZ8YWuX44v95bXrV0g) Since podman has no daemon, it's entirely possible for multiple cni operations to be running concurrently. I believe this is exposing a time-of-check, time-of-use race here: https://github.com/containernetworking/plugins/blob/master/plugins/meta/firewall/iptables.go#L63-L69 In other words, two containers are coming up, no chain exists, both try to create it, one wins, the other fails. I'm willing to take a stab at fixing this, but since I don't know the code well, I wanted to check first. I think the fix might be as simple as always ignoring that particular error message. But also I'm not sure of the best way to add a new unit(?) test for this. Please advise. Answers: username_1: Ah yes, that is a very good observation. That does seem like an easy workaround, and your suggestion seems correct. Is there a chance that we could get the same error message when trying to insert a slightly different rule? I don't think so, but can you check? username_0: Easily, and worse. Iptables is extremely naive/simple-minded about such things. It's easily possible for one process to add a rule while another changes it. Top-of-the-head example: one process "inserts" a rule at position 1 while another (simultaneously) "deletes" the rule at position 5...which position 5? The one before or after the first process succeeded? I'd guess this was one of the main reasons firewalld/firewall-cmd was invented :smile: username_2: Does this fix the problem for you? https://github.com/coreos/go-iptables/pull/62 I believe that was the result of debugging some podman issues and we found that when using iptables-nft the error code for "this chain already exists" was different and thus not ignored. username_0: Thanks @username_2, it may well be. I don't recall seeing this particular flake coming out of our CI results recently. We were just talking about scooping up a new CNI version anyway for packaging, which would address the problem outside of CI (I have no reports of it happening, but it seems technically possible). I'll keep an eye on our CI system runs and mention here if I come across it occurring in the next few weeks. Okay by me to assume no-news == good-news, and close this. username_0: Dangit...found this happening again in a run of testing libpod on master. username_3: The go-iptables fix seems to work in testing - would be nice to get a release including it so we can get it packaged for Fedora username_0: Ack, thanks for confirming our CI isn't yet up to snuff...okay, so back to monitoring this then.
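A sketch of the race-tolerant pattern discussed here, shown with plain iptables from Python purely for illustration (the actual fix lives in go-iptables). The exact error text differs between iptables-legacy and iptables-nft, so the substring match below is an assumption:

```python
import subprocess

def ensure_chain(table, chain):
    """Create the chain, treating 'already exists' from a concurrent
    creator as success instead of failing the whole setup."""
    result = subprocess.run(["iptables", "-t", table, "-N", chain],
                            capture_output=True, text=True)
    if result.returncode != 0 and "already exists" not in result.stderr.lower():
        raise RuntimeError(result.stderr.strip())
```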
SkaceKamen/vscode-sqflint
760564109
Title: Ternary Statements Question: username_0: SQFLint does not seem to support SQF's equivalent of a ternary statement; for example:

```sqf
_angle = if ((_dir1 - _dir2) < 0) then [{(_dir1 - _dir2) * -1}, {_dir1 - _dir2}];
```

will result in the following error: `Expected block after then.` I'm new to VS Code extensions, so if anyone knows where I can modify this behavior myself, that'd be greatly appreciated! 😄 Answers: username_1: Hey, I think you're not the first to report this. The extension currently only allows this syntax:

```sqf
_angle = if ((_dir1 - _dir2) < 0) then {(_dir1 - _dir2) * -1} else {_dir1 - _dir2};
```

If you want to look into the issue, the extension uses this parser internally: https://github.com/username_1/sqflint which is written in Java username_0: My apologies if it's a duplicate issue; I will take a look at that later and see if I can wrap my head around how it works. Do you think it would be possible to make it so certain errors could be ignored? I think a few other linters out there support something similar. This is the only false-negative I've run into thus far, so it's working really well otherwise! 👍 username_1: Currently, the extension is in maintenance mode; I no longer have time to work on it due to work and personal life. But I may take a look into this, it should be an easy fix username_2: Wasn't @username_3 going to take over the maintenance? It would just be nice to know the current state of the extension. username_3: Sorry for the late reply, I've been busy lately. The underlying issue is located in the actual sqflint Java project. I will have a look into the project in the next few days and will see what I can do about it username_1: I've already created a fix in sqflint; I'm just trying to get it properly tested Status: Issue closed
kirenbahm/ENP_TOOLS
1058891020
Title: Fix calculation of average flows for observed data Question: username_0: Currently the average flow values for Observed data are not calculated correctly. (The average modeled flows (M01 and M06) appear correct.) The problem appears in the files DS_YEARLY_AVE.txt, DS_M_AVE.txt, and the Critical Flows Report. I believe that the averaging is happening over the period where there is data present, instead of the entire requested time period. For example, S332BN average flow is reported about twice what it should be, but that pump only operated during half of the requested analysis period. Calculations for other structures that have data for the entire analysis period appear to be correct. Please fix the code so the flows are calculated correctly. Thanks! ### These files show incorrect values: **DS_YEARLY_AVE.txt** ``` Average yearly discharges in kilo acre feet (kaf) Station M01 M06 Observed S332BN_Q 27.40 27.47 50.25 ``` **DS_M_AVE.txt** ``` Average monthly S332BN_Q Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec M01 0.63 2.42 3.76 10.93 15.93 49.64 50.40 62.09 67.59 73.71 64.20 51.33 M06 0.74 2.55 3.92 11.10 16.05 49.69 50.46 62.16 67.67 73.75 64.25 51.37 Observed 2.13 6.02 8.24 20.25 29.28 86.68 95.06 113.36 123.60 127.40 110.90 88.78 ``` ### These files show correct values: **DS_Y_AVE.txt** ``` Average annual stages and discharges in ft and cfs S332BN_Q 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 M01 -0.50 -0.53 -0.71 -0.61 -0.47 34.09 107.13 5.84 20.36 91.85 78.92 119.05 M06 -0.49 -0.47 -0.64 -0.52 -0.42 34.20 107.20 5.96 20.48 91.99 79.08 119.14 Observed NaN NaN NaN NaN NaN 45.55 107.84 7.31 21.18 92.70 79.87 119.72 ``` **DS_ACCUMULATED.txt** ``` Cumulative discharges in kilo acre feet (kaf) at the end of the simulation period S332BN_Q 329.00 329.79 335.08 ```
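A small pandas illustration of the suspected bug — averaging only over the dates where data exist versus over the whole requested period, assuming a missing day means zero flow. The numbers are made up, but the structure matches S332BN, which pumped for roughly half of the 1999-2010 analysis period:

```python
import pandas as pd

full_period = pd.date_range("1999-01-01", "2010-12-31", freq="D")

# Pump operates only in the second half of the period, at a constant flow.
flows = pd.Series(100.0, index=full_period[len(full_period) // 2:])

wrong = flows.mean()                                       # mean over data-present days: 100.0
right = flows.reindex(full_period, fill_value=0.0).mean()  # mean over the full period: ~50.0

print(wrong, right)  # the factor-of-two overstatement reported for S332BN
```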
ballerina-platform/ballerina-lang
565209745
Title: [gRPC] Root descriptor is replicated when there are multiple services in proto file Question: username_0: **Description:** $subject **Steps to reproduce:** Use the following protobuf file to generate client or service Ballerina files. ```proto syntax = "proto3"; import "google/protobuf/wrappers.proto"; service Chat { rpc chat (stream Person) returns (stream google.protobuf.StringValue); } service Reply { rpc reply (stream Person) returns (stream google.protobuf.StringValue); } message Person { string name = 1; } ``` Commands ```sh $ ballerina grpc --input sample.proto --mode service --output . ``` OR ```sh $ ballerina grpc --input sample.proto --mode client --output . ``` **Affected Versions:** Ballerina 1.1.1 **Suggested Labels (optional):** Type/Bug Answers: username_0: The solution can be as follows: On the server-side, all the gRPC services can share a common listener endpoint. ```ballerina listener grpc:Listener ep = new (9090); ``` On the client-side, we can generate multiple clients that connect to each endpoint separately because unlike services, these multiple clients can run separately. Status: Issue closed
ratchetphp/Pawl
251902810
Title: Can't connect more than one websocket simultaneously Question: username_0: Unable to connect to two or more WebSocket services simultaneously. The second request waits and only connects once the first connection closes. Here is what I did:

```php
$websocketURLs[0]['url'] = 'wss://echo.websocket.org';
$websocketURLs[0]['service'] = 'websocket';
$websocketURLs[1]['url'] = 'ws://echo.socketo.me:9000';
$websocketURLs[1]['service'] = 'socketo';

foreach( $websocketURLs as $k ){
    $websocket->connect_to_websocket($k['url'], $k['service']);
}
```

PHP CLI gives this ![image](https://i.gyazo.com/a3adf5e0c893d628c521cbe7c5caa367.png) Answers: username_1: Can you post all of your code so I can reproduce please? What's given isn't enough to go on. username_0: Updated original post as requested username_1: The [connect](https://github.com/ratchetphp/Pawl/blob/master/src/functions.php#L13) function is an abstraction to hide the event loop, which is fine for one connection, but as you've found doesn't work for multiple as it creates an event loop per call. You need to inject a single event loop into each connection. You will need to use the Connector class instead, as seen in the [second example of the README](https://github.com/ratchetphp/Pawl/blob/master/README.md#example) to achieve multiple connections. username_0: This one worked for multiple connections perfectly. But the problem of freezing still exists. **If the remote server is not running at the time the connection is created (calling connect_to_websocket() in the code below), then the script freezes; it doesn't connect to the server when the server comes online, and it has to be force-closed/cancelled.** And where do I set or pass a timeout?

```php
<?php
# PHP 7.1.8 (cli) (built: Aug 1 2017 20:56:32) ( NTS MSVC14 (Visual C++ 2015) x64 )
# Copyright (c) 1997-2017 The PHP Group
# Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies

require __DIR__ . '/vendor/autoload.php';

$loop = React\EventLoop\Factory::create();
$connector = new Ratchet\Client\Connector($loop);

$websocketURLs[] = ['url' => 'wss://echo.websocket.org', 'service' => 'websocket_1'];
$websocketURLs[] = ['url' => 'wss://echo.websocket.org', 'service' => 'websocket_2'];

foreach( $websocketURLs as $v ){
    connect_to_websocket ( $v['url'], $v['service'], $connector );
}

function connect_to_websocket( &$url, &$service, &$connector){
    $responseCount = 0;
    echo "\nOK... Will connect to $url at $service\n\n";
    $connector( $url, [], [] )->then(function ($connection) use ( &$url, &$service, &$connector, &$responseCount ) {
        echo "Connected to $service";
        echo "\n============================\n";
        $connection->send( "Hello" );
        $connection->on('message', function($message) use ($connection, &$url, &$service, &$responseCount) {
            $responseCount++;
            $message = (string) $message;
            echo "$service - $message - Response no. $responseCount - Will disconnect at response no. 10";
            echo "\n---------------------------------------------------------------------------\n";
[Truncated]
        $connection->on('close', function($code = null, $reason = null) use ( &$url, &$service, &$connector) {
            echo "Websocket Connection to $service is closed ({$code} - {$reason}). Reconnecting...\n";
            connect_to_websocket( $url, $service, $connector );
        });
    }, function ($e) {
        exit("Could not connect to websocket: {$e->getMessage()}.
Exiting...\n"); }); } $loop->run(); ``` username_2: I am also having the same problem, I will attempt the fix mentioned above ```php require __DIR__.'/vendor/autoload.php'; // The console should show both ETHBTC and BNBBTC if two websocket connections are established \Ratchet\Client\connect('wss://stream.binance.com:9443/ws/ethbtc@depth')->then(function($conn) { $conn->on('message', function($msg) use($conn) { echo "{$msg}\n"; }); $conn->on('close', function($code = null, $reason = null) { echo "WebSocket Connection closed ({$code} - {$reason})\n"; }); }, function($e) { echo "Could not connect: {$e->getMessage()}\n"; }); \Ratchet\Client\connect('wss://stream.binance.com:9443/ws/bnbbtc@depth')->then(function($conn) { $conn->on('message', function($msg) use($conn) { echo "{$msg}\n"; }); $conn->on('close', function($code = null, $reason = null) { echo "WebSocket Connection closed ({$code} - {$reason})\n"; }); }, function($e) { echo "Could not connect: {$e->getMessage()}\n"; }); ``` username_2: Using the loop above fixed it for me. Thank you Is it possible to use more than one loop? username_3: No, everything needs to run on the same event loop. Status: Issue closed
the-blue-alliance/the-blue-alliance-android
67491273
Title: NPE crash on a non-existent event Question: username_0: This case shouldn't arise in the field so I'll just log it and fix the test data in `test_notification.py` in #379 where the test `schedule_updated` push notification uses the non-existent Event "2014ausy" event. (Why didn't this crash before?) `PopulateEventInfo#onPostExecute()` calls `Event#getDateString()` expecting a non-null value but `getDateString()` can catch `FieldNotDefinedException` and return `null`. **Test case 1:** Uninstall the debug app. Sync to the master branch. Optionally set a breakpoint in the `Event#getDateString()` catch for `FieldNotDefinedException`. Launch the debug build. Let it load team & event data. Tap "not now" for myTBA sign in. It will hit the breakpoint (8 times IIRC) in `Event#getDateString()` as called by `Event#render()`. This breakpoint on the background thead seems to make it hang. **Test case 2:** Repeat test case 1 but don't set a breakpoint. (It's puzzling that "Missing fields for getting date string" is not in logcat given that it would've hit the breakpoint. Is the debugger wrong about what happens in the background thread?) Then send a test notification for a non-existent event like "2014ausy". Then tap that notification to go to the Event screen. That's when `PopulateEventInfo#onPostExecute()` calls `Event#getDateString()`, gets a `null` result, then crashes with a NPE. (See logcat below.) **Question:** Should `Event#getDateString()` propagate the `FieldNotDefinedException`? Return `""`? Should `PopulateEventInfo#onPostExecute()` cope with the `null` result? ``` 04-09 18:04:00.241 6738-6738/com.thebluealliance.androidclient.development E/AndroidRuntime﹕ FATAL EXCEPTION: main Process: com.thebluealliance.androidclient.development, PID: 6738 java.lang.NullPointerException: Attempt to invoke virtual method 'boolean java.lang.String.isEmpty()' on a null object reference at com.thebluealliance.androidclient.background.event.PopulateEventInfo.onPostExecute(PopulateEventInfo.java:207) at com.thebluealliance.androidclient.background.event.PopulateEventInfo.onPostExecute(PopulateEventInfo.java:43) at android.os.AsyncTask.finish(AsyncTask.java:632) at android.os.AsyncTask.access$600(AsyncTask.java:177) at android.os.AsyncTask$InternalHandler.handleMessage(AsyncTask.java:645) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:135) at android.app.ActivityThread.main(ActivityThread.java:5221) at java.lang.reflect.Method.invoke(Native Method) at java.lang.reflect.Method.invoke(Method.java:372) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:899) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:694) 04-09 18:04:00.246 6738-7404/com.thebluealliance.androidclient.development E/tba-android:dataManager﹕ Error: HTTP 404 {"404": "2014ausy event not found"} from fetching http://www.thebluealliance.com/api/v2/event/2014ausy/awards ... 04-09 18:04:04.217 6738-7401/com.thebluealliance.androidclient.development E/tba-android:dataManager﹕ Error: HTTP 404 {"404": "2014ausy event not found"} from fetching http://www.thebluealliance.com/api/v2/event/2014ausy/stats 04-09 18:04:04.217 6738-7401/com.thebluealliance.androidclient.development D/tba-android:dataManager﹕ updated in db? 
false 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ com.thebluealliance.androidclient.models.BasicModel$FieldNotDefinedException: Field Database.Events.STATS is not defined 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at com.thebluealliance.androidclient.models.Event.getStats(Event.java:148) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at com.thebluealliance.androidclient.datafeed.DataManager$Events.getEventStats(DataManager.java:308) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at com.thebluealliance.androidclient.datafeed.DataManager$Events.getEventStats(DataManager.java:299) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at com.thebluealliance.androidclient.background.event.PopulateEventStats.doInBackground(PopulateEventStats.java:70) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at com.thebluealliance.androidclient.background.event.PopulateEventStats.doInBackground(PopulateEventStats.java:44) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at android.os.AsyncTask$2.call(AsyncTask.java:288) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at java.util.concurrent.FutureTask.run(FutureTask.java:237) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/System.err﹕ at java.lang.Thread.run(Thread.java:818) 04-09 18:04:04.218 6738-7401/com.thebluealliance.androidclient.development W/tba-android﹕ unable to load event stats 04-09 18:04:06.479 6738-6738/com.thebluealliance.androidclient.development I/Process﹕ Sending signal. PID: 6738 SIG: 9 04-09 18:04:06.511 2245-2366/? I/WindowState﹕ WIN DEATH: Window{2d700b39 u0 com.thebluealliance.androidclient.development/com.thebluealliance.androidclient.activities.ViewEventActivity} 04-09 18:04:06.591 2245-4694/? I/ActivityManager﹕ Process com.thebluealliance.androidclient.development (pid 6738) has died ``` Answers: username_1: What would be the consequences of returning "" instead of null in failures cases like this? username_0: That case handles `""` and will crash on `null`. The EventListElement constructor stores `mEventDates = event.getDateString()` and the resulting instance gets serialized (and stored in the local DB, presumably), so a null can bite later. Status: Issue closed
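To make the open question above concrete, here is a minimal sketch of the "return empty string" option; `getRawDateString()` is a hypothetical stand-in for the real field lookup, since only the catch behavior is described in this thread:
```java
// Sketch: keep swallowing FieldNotDefinedException, but return "" instead of
// null so callers like PopulateEventInfo#onPostExecute() can keep using
// isEmpty() without an extra null check.
public String getDateString() {
    try {
        return getRawDateString(); // hypothetical; may throw FieldNotDefinedException
    } catch (FieldNotDefinedException e) {
        Log.w("tba-android", "Missing fields for getting date string");
        return ""; // returning null here is what triggered the NPE above
    }
}
```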
lebaston100/MIDItoOBS
1117851582
Title: Output Multi-Channel VU Question: username_0: Not very familiar with Midi (yet), but I am considering starting a project to create a Midi control device that also doubles as a VU meter for select audio channels in obs. Looking to see if this plugin can, or might be able to return 6-8 channel audio levels to the connected midi device at a configurable interval. Thank you in advance, and I apologize for my ignorance... also in advance. Answers: username_1: Unfortunately this is currently not possible from an obs-websocket v4 side. But in obs-websocket v5 there will be an audio meter event. For now v4 is the only supported version, but if in the future v5 is beeing supported then i'm open to adding it. Exactly which data format would you expect? Something like a simple CC with the volume mapped to the CC value? username_0: From my basic understanding of midi I think that would be what I'm looking for. Then the end device would output that value per channel on an LED bar or something similar username_1: Just as a general suggestion, if you are building something from scratch it might be worth talking directly to obs-websocket with something like an esp32 saving you the additional program that has to run. username_0: Very much appreciated. Based on your previous I just started looking into obs-websocket to see how it works. I am much more familiar with FPGAs and the hardware side of things, so more or less still assessing whats already out there on the software side to connect into OBS.
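One concrete data format for this would be a MIDI Control Change per audio channel, with the level mapped onto the 0-127 CC value range. A rough Python sketch; the dB range, CC numbers, device name, and the meter-event hook are all assumptions, not MIDItoOBS or obs-websocket API:
```python
import mido

def db_to_cc(level_db, floor_db=-60.0):
    """Map an audio level in dB (floor_db..0) onto the 0-127 CC value range."""
    clamped = max(floor_db, min(0.0, level_db))
    return round((clamped - floor_db) / -floor_db * 127)

port = mido.open_output("MyController")  # hypothetical device name

def on_meter_event(channel_index, level_db):
    # One CC number per audio channel, e.g. CC 20..27 for 8 channels.
    msg = mido.Message("control_change",
                       control=20 + channel_index,
                       value=db_to_cc(level_db))
    port.send(msg)
```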
facebook/relay
668812597
Title: allow to target multiple GraphQL services Question: username_0: We are implementing a web application that targets two independent GraphQL services. `relay` already provides the `environment` abstraction so we can implement two of those and connect to the correct service depending on the data that needs to be fetched. But these two services have their own GraphQL schemas that would need to be processed, which `relay` apparently doesn't support #2887. In the aforementioned issue there is a suggestion to concatenate schema files which can only work if there are no naming conflicts. Therefore, I'd like to suggest to add support for targeting multiple GraphQL services as it's very common in classical REST applications. Answers: username_1: you can have 2 or more relay.config.js with specific `include`, `exclude` patterns to achieve this the problem is that you can reuse one fragment from one GraphQL into another username_1: if you have conflicting Types, how Relay knows which one to use? username_0: Well, I was hoping that when I'm talking to GraphQL 1 then Relay would understand that I need User from Schema 1. username_1: Relay build queries based on fragment at build time So we need to know upfront which User or GraphQL server do you want to use check the reply https://relay-compiler-repl.netlify.app/ Repl sends the compiled query directly to your network layer where you need to resolve using a graphql server or locally Relay does not know which graphql server you will send the query to username_0: A solution would be to allow to annotate the `query` template tag with information about which schema file is being used. Another option would be to "attach" a schema file to an environment and make the query builder aware of the environment at build-time. username_1: Feel free to send a POC implementation username_0: That might be quite involved. Still, I'm a bit puzzled that such a plausible use case is not supported. I believe that fundamentally the problem is that GraphQL has no namespaces. But that ship has sailed a while ago I guess. Anyway, thanks for your pointers. username_2: Ideally you'd solve this on the server - stitch your two schemas together, do whatever namespace transforms you need, and expose a single schema to the client. This is what for example www.onegraph.com does, stitching together a ton of different APIs in various tech (REST, GraphQL, etc), name spacing each API manually and then exposing it as a single schema. username_0: @username_2 sure, but I'd love to avoid writing, deploying and maintaining a proxy backend when there are already two third party GraphQL services available that I can just use. username_2: Ok, very well then. username_0: For what it's worth, Rust's GraphQL client does allow to specify a [schema file per query](https://github.com/graphql-rust/graphql-client/blob/34bbd677006d390ef5d3ff9b29552b28fb160874/examples/github/examples/github.rs#L12). username_3: Thanks for posting. As others have noted, it's already possible to query multiple schemas in a single app with some care on your part: use a distinct environment instance per-schema (to avoid cache collisions), and use include/exclude configuration to have specific parts of the app query one schema or the other. 
Mixing fragments against different schemas within a single UI hierarchy can be challenging, however, as you may have to use a context provider at every point that you switch queries - and since there can only be a single value of the RelayContext in scope for a given component, i don't see how it would be feasible to query data from two schemas/environments in a single component. While we understand that this is a limitation for some apps that may want to query multiple schemas, this isn't a use-case we intend to support. Status: Issue closed username_0: I don't see support for multiple `relay.config.js` [here](https://github.com/facebook/relay/blob/1ae215c24f1d4423426981f4219ca69d411e8c8a/packages/relay-config/index.js). But maybe I'm overlooking something. The only approach I see is to run `relay-compiler` multiple times - each time with appropriate `--exclude`, `--include` and `--schema` command line arguments. username_1: you need to run relay-compiler multiple times
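A sketch of the per-schema compiler runs described above (flag values and file names are illustrative):
```sh
# One relay-compiler invocation per GraphQL service, each with its own schema
# and a disjoint set of source files:
relay-compiler --src ./src --schema ./schema-a.graphql --include '**/serviceA/**'
relay-compiler --src ./src --schema ./schema-b.graphql --include '**/serviceB/**'
```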
sdi-sweden/geodataportalen
1046886966
Title: (205, 'New user, Holmenskog AB') Question: username_0: **2011-03-15T07:32:33.000+00:00** ****: First name : Helena
Last name : Sjölin
Email address : <EMAIL>
Phone : 0660372702
Mobile : 0706875461
Org. no./Personal ID no.: 556220-0658
Answers: username_0: **2011-04-07T14:05:08.000+00:00** ****: Updating tickets (#201, #202, #203, #204, #205, #206, #207, #208, #209, #210, #211, #212, #213, #214, #215, #216, #217, #218, #219, #220, #221, #222, #223, #224, #225, #226, #227, #228, #229, #230, #231, #232, #233, #234, #235, #236, #237, #238, #239, #240, #241, #242, #243, #244, #245, #246, #247, #248, #249, #250, #251, #252, #253, #254, #255, #256, #257, #258, #259, #260, #261, #262, #263, #265, #266, #267, #268, #269, #270, #271, #274, #275, #276, #277, #278, #279, #280, #281, #282, #285, #286, #287, #288, #289, #290, #291, #292, #293, #294, #295, #296, #297, #298, #299, #300, #301, #302, #303, #304, #305)
AJEvbank/ShrinkSquared_AWS
314725002
Title: Problem with the get item record function. Question: username_0: When I try to get an item which is not in the database, I get this error: ![image](https://user-images.githubusercontent.com/26314412/38821759-1760fa50-4167-11e8-9b1c-d2ad65ce9dc6.png) I checked and the item is in upcdatabase.org. There aren't any problems getting items that are already in the database. A side note: Here's an item record on upcdatabase.org, http://upcdatabase.org/code/0024000051992. I specifically tried to get this one through our system. Notice that some genius left the title blank and instead put the relevant information in the description field.<issue_closed> Status: Issue closed
LeeGitaek/webgo
562759413
Title: Tensorflow Go Install Problem Issue Question: username_0: I ran the command `go get github.com/tensorflow/tensorflow/tensorflow/go` in the terminal, and it hung for a very long time. When I checked the URL, it returned a 404, so I couldn't install the TensorFlow Go bindings. It would be great if anybody could help solve this.
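As context for future readers: the TensorFlow Go bindings require the TensorFlow C library (libtensorflow) to be installed first. A rough sketch based on the upstream install docs, not on this thread (the version and target path are illustrative):
```sh
# Install libtensorflow into /usr/local, then fetch the Go bindings.
curl -L "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-1.15.0.tar.gz" | sudo tar -C /usr/local -xz
sudo ldconfig
go get github.com/tensorflow/tensorflow/tensorflow/go
```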
ezylean/webp
560458872
Title: failure to work on aarch64 linux Question: username_0: * **I'm submitting a ...** [ x] bug report [ ] feature request [ ] question about the decisions made in the repository [ ] question about how to use this project * **Summary** failure to work on aarch64 linux ``` Error: spawn /data/data/com.termux/files/home/long-image-split-square/node_modules/@ezy/webp/lib/libwebp-1.0.3-unsupported/bin/cwebp ENOENT at Process.ChildProcess._handle.onexit (internal/child_process.js:264:19) at onErrorNT (internal/child_process.js:456:16) at processTicksAndRejections (internal/process/task_queues.js:80:21) { errno: -2, code: 'ENOENT', syscall: 'spawn /data/data/com.termux/files/home/long-image-split-square/node_modules/@ezy/webp/lib/libwebp-1.0.3-unsupported/bin/cwebp', path: '/data/data/com.termux/files/home/long-image-split-square/node_modules/@ezy/webp/lib/libwebp-1.0.3-unsupported/bin/cwebp', spawnargs: [ '-o', '/sdcard/sliced-long-pictures/-周叽是可爱兔兔-20200201/00afabe87546768fc847e42c7a29f773_comps-0.webp', '--', '/data/data/com.termux/files/usr/tmp/343f1110-355a-44e6-8d5d-91f81a0f6044.jpg' ], cmd: '/data/data/com.termux/files/home/long-image-split-square/node_modules/@ezy/webp/lib/libwebp-1.0.3-unsupported/bin/cwebp -o /sdcard/sliced-long-pictures/-周叽是可爱兔 兔-20200201/00afabe87546768fc847e42c7a29f773_comps-0.webp -- /data/data/com.termux/files/usr/tmp/343f1110-355a-44e6-8d5d-91f81a0f6044.jpg' } error Command failed with exit code 1. ``` * **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. StackOverflow, personal fork, etc.) Answers: username_1: hi @username_0, unfortunately the "libwebp" project release directory do not seem to include a precompiled binary for android see here: http://downloads.webmproject.org/releases/webp/ this package just download the binaries and use the right one depending of the platform and architecture. username_0: https://cdn.jsdelivr.net/gh/username_0/[email protected]/libwebp-1.1.0-aarch64.zip You can download the Linux aarch64 version here
GEOSwift/GEOSwift
369812385
Title: Error trying to install earlier version of geos Question: username_0: What's the best way to install an earlier version of GEOSwift and geos using cocoapods? The latest geos version 3.7 won't compile due to a missing `geos/platform.h` file. I tried updating the Podfile to earlier versions I know work: ``` pod 'GEOSwift', '~> 2.2.0' pod 'geos', '~> 3.5.0' ``` But I get the following errors. ``` [!] Error installing geos [!] /usr/bin/svn export --non-interactive --trust-server-cert --force https://svn.osgeo.org/geos/tags/3.5.0 /var/<KEY>T/d20181013-17539-1nd0bf1 svn: E170013: Unable to connect to a repository at URL 'https://svn.osgeo.org/geos/tags/3.5.0' svn: E175013: Access to '/geos/tags/3.5.0' forbidden ``` Thanks Answers: username_1: Looks like the SVN repo that the old version pulled from has been disabled. I'll reach out to them to see if they can reenable it. I'd also like to fix the install issue in geos 3.7.0. Can you provide more details? username_1: Here's the relevant ticket on GEOS: https://trac.osgeo.org/geos/ticket/934 I've requested a login so that I can ask about it. username_1: To restore access to 3.5.0, we may need to republish it and point it to git. Replacing already-published podspecs: https://stackoverflow.com/questions/25604601/how-to-do-pod-trunk-push-to-replace-an-existing-version-of-podspec/44317506#44317506 3.5.0 on git: https://git.osgeo.org/gitea/geos/geos/src/tag/3.5.0 username_1: @username_0 I've republished 3.5.0 so that it points to the GEOS git repo and builds similarly to how we're building 3.7.0. It should be very similar to how the old 3.5.0 worked. Can you give it a try and let me know how it turns out? username_0: Looks like it installs now. Thank you! Still having trouble getting my project to build but I'm not clear it's related to geos now. username_1: Glad to hear it. I'll close this issue, but please let us know if you find other geos/GEOSwift issues. Status: Issue closed
lrberge/fixest
774542098
Title: Large Fixed Effect Interaction Question: username_0: Error : vector memory exhausted (limit reached?) Error in feols(log_export ~ log_sci + log_distw | i(iso3_o, hs_digit) + : Problem evaluating the fixed-effects part of the formula: ``` Equivalent Stata code below for reference: ``` import delimited using Example.csv encode iso3_d, generate(iso3_d_n) encode iso3_o, generate(iso3_o_n) tostring hs_digit, gen(hs_string) encode hs_string, generate(hs_n) ppmlhdfe export log_sci log_distw, absorb(iso3_d_n iso3_o_n) vce(cluster iso3_d_n iso3_o_n) ppmlhdfe export log_sci log_distw, absorb(iso3_d_n##hs_n iso3_o_n##hs_n) vce(cluster iso3_d_n iso3_o_n) reghdfe log_export log_sci log_distw, absorb(iso3_d_n##hs_n iso3_o_n##hs_n) vce(cluster iso3_d_n iso3_o_n) ``` Thanks for your help and for developing this package - It's been needed for a long time! Answers: username_1: Hi, that's perfectly normal! :-) That's because `i()` should only be used to interact stuff that is not in the fixed-effects part (it creates a full matrix!). To interact two fixed-effects, you need to use the specific syntax `fe1^fe2` or `fe1^fe2^fe3` etc (the latter syntax is more general than `i()` with which you can interact only two values). Thanks for the issue, you're not the only one having problems with that! ;-) I understand this is confusing so I'll amend the help of `i()` to point to the appropriate syntax to interact the FEs and I will make it an error with a message stating the way to go. username_0: Makes sense! Thanks for clarifying. Status: Issue closed username_2: Thank you for creating this excellent package! I am running into a similar issue as username_0; below is a MWE. ``` model1 = feols(y~x1+x2+x3 | fe_1^fe_2, dataframe) Error in cpp_quf_gnl(x) : vector Error in feols(y~x1+x2+x3 | : Problem evaluating the fixed-effects part of the formula: Error in cpp_quf_gnl(x) : vector ``` By any chance do you have any insight into how to address this error? I haven't been able to find any issues or threads elsewhere dealing with it. Thank you in advance and apologies if it's a silly user error! username_1: Hi Steve and thanks for the words! It's another problem, the syntax is legit. Could you show a fully reproducible example? Btw which version do you use? username_2: Of course, thank you! I am using version 0.3.1. After starting to put together a fully reproducible example, I was able to isolate which FE variable was causing the issue— it was a zip code variable of numeric data type. I converted it to a factor and now it works fine. Sorry for posting with such a basic error. Thank you very much for your response, I should be all set now!
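A minimal sketch of the `fe1^fe2` syntax from earlier in this thread, applied to the original example (the data object and the clustering choice mirror the Stata call, but are assumptions here):
```r
library(fixest)

# Interacted fixed effects go in the FE part with '^', not i():
res <- feols(
  log_export ~ log_sci + log_distw | iso3_d^hs_digit + iso3_o^hs_digit,
  data    = df,
  cluster = ~ iso3_d + iso3_o  # two-way clustering, as in the Stata call
)
```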
CocoaPods/CocoaPods
901707994
Title: Development Pods did not create `Resources` group Question: username_0: - [ x ] I've read and understood the [*CONTRIBUTING* guidelines and have done my best effort to follow](https://github.com/CocoaPods/CocoaPods/blob/master/CONTRIBUTING.md). # Report ## What did you do? I found since cocoapods version 1.10.0, After run `pod install`(update), my `Development Pods` did not create `Resources` group as before. resource_bundles for my podsepc file ```ruby s.resource_bundles = { 'TMPodDemo' => ['TMPodDemo/Assets/*.png'] } ``` ## What did you expect to happen? pod 1.9.3 after run pod install, `Development Pods` should create `Resources` group like this <img width="333" alt="pod1 9 3" src="https://user-images.githubusercontent.com/16059158/119596124-33571a80-be11-11eb-8cea-64fef671a7a5.png"> ## What happened instead? pod 1.10.1 cocopods did not create `Resources` group <img width="332" alt="pod1 10 1" src="https://user-images.githubusercontent.com/16059158/119596274-6ef1e480-be11-11eb-9138-a6e2c2dc633f.png"> I wonder whether cocoapods new feature or bug. but without `Resources` group, my `resource_bundles` image files list at `Development Pods` root dir, it's too long, not convenient for my coding... ## CocoaPods Environment ``` CocoaPods : 1.10.1 Ruby : ruby 2.6.3p62 (2019-04-16 revision 67580) [universal.x86_64-darwin20] RubyGems : 3.0.3 Host : macOS 11.3 (20E232) Xcode : 12.5 (12E262) Git : git version 2.30.1 (Apple Git-130) Ruby lib dir : /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib Repositories : cocoapods - git - https://github.com/CocoaPods/Specs.git @ 00827d9ceab3fd2db7fd6c7b4fba3331e76fcf76 trunk - CDN - https://cdn.cocoapods.org/ YCBinarySpecs - git - http://xxxx YCSpecs - git - http://xxxx.git ``` ### Installation Source ``` Executable Path: /usr/local/bin/pod ``` ### Plugins ``` cocoapods-deintegrate : 1.0.4 cocoapods-disable-podfile-validations : 0.1.1 cocoapods-generate : 2.0.0 [Truncated] ### Podfile ```ruby # use_frameworks! platform :ios, '9.0' target 'TMPodDemo_Example' do pod 'TMPodDemo', :path => '../' target 'TMPodDemo_Tests' do inherit! :search_paths end end ``` ## Project that demonstrates the issue [TMPodDemo](https://github.com/username_0/TMPodDemo) Answers: username_1: +1 username_2: Have you tried `preserve_pod_file_structure` option? ```ruby install! 'cocoapods', preserve_pod_file_structure: true ``` username_2: This is an intentional optimization to flatten the groups created. username_0: I just have tried `install! `install! 'cocoapods', preserve_pod_file_structure: true`, it seems not my want. what i need is cocoapods 1.10.0 create `Resources` group like 1.9.3 did username_2: You can add a subfolder to work around this and force cocoapods to avoid flattening the structure. We can maybe add an option for this. username_3: replace file_references_installer.rb by [older file_references_installer.rb](https://raw.githubusercontent.com/CocoaPods/CocoaPods/3eaff05798cd1ba79aa079a0226e4d5f4a593b0d/lib/cocoapods/installer/xcode/pods_project_generator/file_references_installer.rb) username_3: Add to Podfile ``` class Pod::Installer::Xcode::PodsProjectGenerator::FileReferencesInstaller def add_file_accessors_paths_to_pods_group(file_accessor_key, group_key = nil, reflect_file_system_structure = false) file_accessors.each do |file_accessor| paths = file_accessor.send(file_accessor_key) paths = allowable_project_paths(paths) next if paths.empty? 
pod_name = file_accessor.spec.name preserve_pod_file_structure_flag = (sandbox.local?(pod_name) || preserve_pod_file_structure) && reflect_file_system_structure base_path = preserve_pod_file_structure_flag ? common_path(paths) : nil # actual_group_key = preserve_pod_file_structure_flag ? nil : group_key # group = pods_project.group_for_spec(pod_name, actual_group_key) group = pods_project.group_for_spec(pod_name, group_key) paths.each do |path| pods_project.add_file_reference(path, group, preserve_pod_file_structure_flag, base_path) end end end end ```
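A sketch of the subfolder workaround mentioned above, applied to the podspec from the top of this thread (the extra `Resources/` level is the assumption; whether it restores the group is untested here):
```ruby
# Moving the assets one directory deeper gives CocoaPods a structure to
# reflect instead of a flat file list at the Development Pods root.
s.resource_bundles = {
  'TMPodDemo' => ['TMPodDemo/Assets/Resources/*.png']
}
```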
silbinarywolf/gml-go
394641869
Title: test: Running "go test" code coverage stuff but in the browser Question: username_0: **Why?** Being able to run something similar to this: ``` go test -tags "debug headless" -coverpkg $(go list github.com/username_0/gml-go/examples/spaceship/game) ./... ``` but for running in Chrome for the WebAssembly/JS builds would be ideal, especially if my aim is to ship my MMO project in the browser (and other smaller projects building up to that)
UofS-Pulse-Binfo/germ_summary
268851494
Title: Issue #2 - Maternal Parent Cell NOT Highlighted. Question: username_0: ![gs](https://user-images.githubusercontent.com/15472253/32068070-71b40ef4-ba42-11e7-8970-d72937b2620b.gif)
The maternal parent header row is not highlighted in the same way as the paternal parent column when the mouse pointer is on a siblings cell. This is caused by the theme_table setting that creates a sticky header: the table head markup gets duplicated (id, class, and other attributes), which produces multiple elements with the same id in the DOM, so the scripts could not determine which element/cell to highlight.
Answers: username_1: This issue was fixed in the above-mentioned pull request.
Status: Issue closed
grempe/ex_rated
121036056
Title: Add a clear_rate/reset_rate function Question: username_0: I would like to be able to use ex_rated for login attempts. On successful login I would like to delete/reset the bucket Would you be interested in a pull request adding a function like ExRated.clear_rate("my-rate-limited-api") Answers: username_1: Hi. I guess I would want to understand a bit more about why you need to clear them manually. Couldn't you just let ex_rated cleanup after itself with the built in pruning functionality and set a short bucket timeout when you initialize the gen server? Why does leaving the bucket around until it gets pruned cause an issue? e.g. set :timeout to 15 min so that no bucket will last longer than that after the last login attempt at which point the bucket would disappear lazily on its own when the prune process reaps buckets every minute (or whatever interval you want). Also, if we did do this would you need to pass in the bucket name and a value (60_000 in your example) or just the bucket name? In which case perhaps an api like 'ExRated.delete_bucket("bucket-name")' might be more concise? Maybe I'm missing something. Happy to hear more about your ideas for this. username_0: If I'm doing check_rate for an IP address I would like to be able to reset the count so one person's logins don't affect someone else on that same IP and I would like to do that as soon as they successfully login. The other way to do it is have a function that checks but doesn't increment the count like "read_rate" and then only check_rate on failed login? ExRated.delete_bucket("bucket-name") is a much better idea. Would it be possible to change check_rate to return milliseconds left till next key on error? ie {:error, milli_secs_left} instead of {:error, limit} The limit isn't much use as we already know that and I could use milli_secs_left let people know how long their login is locked out for? username_1: I would accept a pull request for delete_bucket. For returning milliseconds left I would also be fine with that but perhaps it should be added onto the existing response. Otherwise it's a potentially breaking change that would creep in to people's code with the value being returned suddenly having a different meaning. Perhaps what should come back is the full context of how many checks could be done, in what time period, and with x ms remaining. Please keep the pull requests separate. Thanks! username_1: Pull requests #6 and #7 merged. Thank you. New version 1.1.0 of ex_rated has been published to hex.pm: https://hex.pm/packages/ex_rated/1.1.0 Status: Issue closed
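Putting the two merged pieces together, a sketch of the login flow discussed in this thread (the bucket naming and the `authenticate/2` stand-in are illustrative):
```elixir
defmodule Auth do
  # 5 attempts per 60s per IP; clear the bucket on success so other users
  # behind the same IP are not locked out (delete_bucket landed in 1.1.0).
  def attempt_login(ip, username, password) do
    bucket = "login:#{ip}"

    case ExRated.check_rate(bucket, 60_000, 5) do
      {:ok, _count} ->
        with {:ok, user} <- authenticate(username, password) do
          ExRated.delete_bucket(bucket)
          {:ok, user}
        end

      {:error, _limit} ->
        {:error, :rate_limited}
    end
  end

  defp authenticate(_username, _password), do: {:ok, :demo_user} # stand-in
end
```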
AutoMapper/AutoMapper
254366511
Title: Throw Exception When Mapping To/From Same Instance Question: username_0: Ran into a scenario today where *sometimes* we were trying to map to and from the same instance of a type. The scenario is that we have an instance of an entity, and we don't know if it's tracked by the ORM (EF), so we grab an instance from EF and map the modified instance to the existing instance. This pattern works fine if the modified instance isn't tracked by EF, since EF will return a new instance: ``` public void Save(SomeEntity entity) { var existingEntity = db.SomeEntities.First(e => e.Id == entity.Id); mapper.Map(entity, existingEntity); db.SaveChanges(); } ``` When `ReferenceEquals(entity,existingEntity)` we shouldn't map. It just causes problems, especially since the entity has a collection property which either throws a "can't modify collection while enumerating" exception or (if we tell it to MapFrom the ToList of the collection) it makes copies of all items in the collection and adds them in. Is it a valid scenario for AutoMapper to attempt to map from an instance to itself? Assuming not, can we have it throw an exception when this is detected? Since this would be a breaking change, maybe it could be something specified in config? Answers: username_1: Mapping between instances of the same type just causes problems and you shouldn't be doing it. You can avoid mapping to the same instance by checking for this in BeforeMap. You can even try to apply that wherever you want with ForAllMaps. username_0: Good to know. Thoughts on building this behavior in (via throwing an exception, for instance)? username_1: The way I see it, we shouldn't be making it easier for people to do things we advise against :) username_0: Meaning you agree with me AM should throw if it sees someone attempting this... username_1: You're missing my point. You shouldn't be mapping between objects of the same type (regardless if they are the same instance or not). This causes problems and it's not an interesting use case. username_0: Fair enough. Normally I would map from a DTO to an Entity and this would never be an issue. The code in question is something found in a client's codebase where they're not using a DTO for their source. One way to "fix" the issue is to see if I can refactor the code to use a DTO (which I plan to look into in any case). You don't see any scenario where it makes sense to use AM to map two things of the same type? username_2: We get this question now and again, not really in EF cases but when someone is trying to use AutoMapper for cloning. For non-cloning cases, it'll be some other ORM framework and I'll get asked to do things like "only call the setter if the values are different" because that ORM framework will raise events etc. They're wanting to kinda merge things together. Although it might be possible, I've never tried to make AutoMapper a cloning or object merging library. It might be somewhat possible. In these cases where it's outside our normal use case, but still possible, and there's no clear correct default behavior based on the different scenarios raised, you're pretty much left to the existing config. username_0: Question remains regarding AM mapping (inadvertently or not) from and to the same instance of an object. Should this be supported, or should it throw since it's likely indicating the user is trying to do something they probably don't mean to be? Answer either way and close if plan is to do nothing. Thanks! 
Status: Issue closed username_2: Don't think we'll throw an exception, that'll break people. Just understand that it's not really a supported scenario (but you can do it if you want).
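For completeness, a sketch of the `BeforeMap`/`ForAllMaps` guard suggested earlier in this thread; this is opt-in configuration, not built-in AutoMapper behavior, and `SomeEntity` is the type from the example above:
```csharp
// Configuration-level guard: fail fast when source and destination are the
// same instance, applied to every map via ForAllMaps.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<SomeEntity, SomeEntity>();
    cfg.ForAllMaps((typeMap, map) =>
        map.BeforeMap((src, dest) =>
        {
            if (ReferenceEquals(src, dest))
                throw new InvalidOperationException(
                    "Refusing to map an instance onto itself.");
        }));
});
```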
resonance-audio/resonance-audio
315568266
Title: Linking error Question: username_0: Dear all, trying to compile this project for unity. Downloaded CMake, Git, Mercurial and Visual Studio 2015. I am working on Windows 10 x64. Cloned the repository and executed: ./$YOUR_LOCAL_REPO/third_party/clone_core_deps.sh ./$YOUR_LOCAL_REPO/third_party/clone_build_install_unity_deps.sh Then tried to compile with ./build.sh -t=UNITY_PLUGIN But getting the following error unity_win.def : error LNK2001: unresolved external symbol SetRt60ValuesAndProxyRoomProperties [.\$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj] .\$YOUR_LOCAL_REPO/build/platforms/unity/Release/audiopluginresonanceaudio.lib : fatal error LNK1120: 1 unresolved externals [.\$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj] Compiling project ".\$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj" (default target) NOT COMPLETED. Compiling project ".\$YOUR_LOCAL_REPO\build\ALL_BUILD.vcxproj" (default target) NOT COMPLETED. Compiling project ".\$YOUR_LOCAL_REPO\build\install.vcxproj" (default target) NOT COMPLETED. Compiling NOT SUCCEDED. ".\$YOUR_LOCAL_REPO\build\install.vcxproj" (default target) (1) -> ".\$YOUR_LOCAL_REPO\build\ALL_BUILD.vcxproj" (default target) (3) -> ".\$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj" (default target) (9) -> (destination: Link) -> unity_win.def : error LNK2001: unresolved external symbol SetRt60ValuesAndProxyRoomProperties [.\$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj] .\$YOUR_LOCAL_REPO/build/platforms/unity/Release/audiopluginresonanceaudio.lib : fatal error LNK1120: 1 unresolved externals [.\$YOUR_LOCAL_REPO\build\platforms\unity\audiopluginresonanceaudio.vcxproj] Thanks and regards, <NAME> Answers: username_1: Ah good catch! Seems like one function definition managed to escape from the legacy code. :) Should be fixed now via f0dec4c0. Could you try again and verify if it resolves the linking issue? username_0: Verified and, yes, now it works ! Sorry, can I ask a very last question? I was expecting to get directly the ResonanceAudioForUnity.unitypackage Could you briefly explain me how to do or where find instruction ?
pushkar-anand/copymean
479237444
Title: Design the intro slider. Question: username_0: Create an activity IntroSliderActivity that displays an introduction about the app on the first run.

Refer:
- [Intro slider tutorial](https://www.androidhive.info/2016/05/android-build-intro-slider-app/)
- [Android App Introduction Slider](https://www.androidtutorialpoint.com/basics/android-app-introduction-slider-android-welcome-screen-tutorial/)

To check whether the app is started for the first time, use shared prefs; a demo example is below.

```java
SharedPreferences sharedPref = getSharedPreferences("FileName", MODE_PRIVATE);
// Defaults to true so the slider is only shown on the very first run.
boolean isFirstTime = sharedPref.getBoolean("isFirstTime", true);
if (isFirstTime) {
    SharedPreferences.Editor prefEditor = sharedPref.edit();
    // Store a boolean flag (the original putString call would not compile).
    prefEditor.putBoolean("isFirstTime", false);
    prefEditor.commit();
}
```
ProyectoIntegrador2018/Inventarios
408471363
Title: TASK-T024 Question: username_0: **Brief description:** As a Student, I can receive reminder emails to return a borrowed device, so that I avoid returning the device late.

**Conversation:** NA

**Completion criteria:** If the system detects that a device has not been returned, the student will automatically receive an email indicating that the borrowed device has not yet been returned.

**Relationships:** _ID: HU-3006_
gbif/portal-feedback
593782201
Title: This is a sample of Sesleria albicans Kit. ex Schultes not Sesleria coerulans Friv.as placed by GBIF. Question: username_0: **This is a sample of Sesleria albicans Kit. ex Schultes not Sesleria coerulans Friv.as placed by GBIF.** (Original label on this sample is correct, but its author used the name Sesleria caerulea Ard. for Sesleria albicans, how once, and sometimes even today, it was understood). Sesleria albicans differs from Sesleria coerulans for its wider leaves and smaller panicle of variable shape, ovate, elongated or briefly cylindrical, while in Sesleria coerulans the panicle is more massive, usually widely ovate, sometimes almost globular or globular-cylindrical and from its surface more or less evident long everted apexes of the glumes and long awns of the lemmas come up, while in Sesleria albicans the surface of the panicle is relatively smooth. Moreover the sample in question is from France (at Fountainebleau), where Sesleria coerulans does not exist. Reference <NAME>. 1980: Sesleria Scop. In: <NAME>, <NAME>, <NAME>, <NAME>, S.M. & Webb, D.A. (Eds.) Flora Europaea 5: 173-177. Cambridge. ----- User: [See in registry](https://www.gbif.org/api/feedback/user/c8943c49fee40712ca7596f2e99c739d:3f97707ded6172eb4ab41acd8eada1c4ecf16932b2edf291e25cd583dea27b2baf339332c593d895a1cef01ca2a3bc01864c901022befa42ef79470ea2066c3d) System: Safari 9.1.3 / Mac OS X 10.9.5 Referer: https://www.gbif.org/occurrence/438507597 Window size: width 1148 - height 802 [API log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2020-04-04T09:13:22.380Z',mode:absolute,to:'2020-04-04T09:19:22.380Z'))&_a=(columns:!(_source),index:'prod-varnish-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc))) [Site log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2020-04-04T09:13:22.380Z',mode:absolute,to:'2020-04-04T09:19:22.380Z'))&_a=(columns:!(_source),index:'prod-portal-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc))) System health at time of feedback: INFO datasetKey: <KEY> publishingOrgKey: <KEY>
CAVaccineInventory/vaccine-feed-ingest
875167815
Title: None Question: username_0: This looks tricky. I can see locations in my browser, but I can't grep for their names in the HAR file from my session. It looks like the data is being served by Google Maps?!?

It seems to me something like Puppeteer is needed here.
Answers: username_1: Digging through cache, it looks like it's pulling data from https://carbonhealth.com/static/data/rev/covid-vaccine-8df6a2b1e1.json; that filename looks like it might well change periodically.

I'm not going to have cycles to dig into this for several days at least, but maybe this can help someone else take a stab at it.
username_2: Thanks for the starting point, @username_1. I poked through their JavaScript and found out that you can get the latest version of that URL from this manifest: https://carbonhealth.com/static/data/rev-manifest.json

That's enough to create the fetch stage. Here's an implementation: #598.
Status: Issue closed
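A sketch of the two-step fetch described above; the manifest key is a guess inferred from the revisioned filename, not confirmed in this thread:
```python
import requests

BASE = "https://carbonhealth.com/static/data"

manifest = requests.get(f"{BASE}/rev-manifest.json").json()
# Hypothetical key: the manifest presumably maps logical names like
# 'covid-vaccine.json' to revisioned names like 'covid-vaccine-8df6a2b1e1.json'.
revisioned = manifest["covid-vaccine.json"]
locations = requests.get(f"{BASE}/rev/{revisioned}").json()
```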
davetcc/IoAbstraction
464865454
Title: Minor debug logging change to add the option to output a hexdump Question: username_0: This will output a hex dump of the provided string, which is useful when some characters need to be logged in hex format. For example, the call below would log a title of "blah" followed by a hex dump:

```cpp
serdebugHexDump("blah", myData, sizeof(myData));
```
<issue_closed>
Status: Issue closed
nervgh/angular-file-upload
91146032
Title: Filters with extension XLSX Bug? Question: username_0: When I send an XLSX file, the filter fails and does not give permission to send it.
```
uploader.filters.push({
    name: 'filesFilter',
    fn: function(item, options) {
        var type = '|' + item.type.slice(item.type.lastIndexOf('/') + 1) + '|';
        return '|xls|xlsx|'.indexOf(type) !== -1;
    }
});
```
How to proceed?
Answers: username_1: The way I solved it was using the file name and taking the extension from that; `type` gets the whole document MIME type =D cheers.
username_2: It seems solved
Status: Issue closed
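For reference, a sketch of the name-based filter described in the last answer; the lowercasing and the empty-name fallback are embellishments, not from the thread:
```javascript
uploader.filters.push({
    name: 'filesFilter',
    fn: function(item, options) {
        // XLSX files report the MIME type
        // 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
        // so matching on item.type never finds 'xlsx'. Use the file name instead.
        var name = item.name || '';
        var ext = '|' + name.slice(name.lastIndexOf('.') + 1).toLowerCase() + '|';
        return '|xls|xlsx|'.indexOf(ext) !== -1;
    }
});
```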
hayatoito/test
92604398
Title: [imports]: <link rel=import> shouldn't be active when added by innerHTML (bugzilla: 26898) Question: username_0: Title: [imports]: <link rel=import> shouldn't be active when added by innerHTML (bugzilla: 26898) Migrated from: https://www.w3.org/Bugs/Public/show_bug.cgi?id=26898 ---- comment: 0 comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=26898#c0 *<NAME>* wrote on 2014-09-24 20:07:27 +0000. Reported at https://code.google.com/p/chromium/issues/detail?id=416036 As \<script\>, it should be disabled when injected by innerHTML. cf. http://www.w3.org/TR/2008/WD-html5-20080610/dom.html#innerhtml0 ---- comment: 1 comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=26898#c1 *<NAME>* wrote on 2014-09-24 20:51:51 +0000. Why? The \<script\> thing was mostly done in order to get compatibility with existing content. Specifically there was a lot of content out there that did things like: \<div id=elem\> \<script\>...\</script\> lots of content here \<div\> document.getElementById('elem').innerHTML += "hello world"; This code did not expect the script elements to execute again because back in those days dynamically inserted \<script\> elements almost never executed. I don't think any of those reasons apply here. First of all "reimporting" the same URL is a no-op since we de-duplicate imports, right? Second, there's no existing content that we need to be compatible with since imports are a new feature. The reason I'd rather not make exceptions for innerHTML is that it creates arbitrary and hard-to-learn inconsistencies. Why innerHTML but not outerHTML or insertAdjecentHTML? What about the jQuery provided $("markup here") and parseHTML? ---- comment: 2 comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=26898#c2 *<NAME>* wrote on 2014-09-24 22:24:07 +0000. Good question. Your points are valid. I heard that the \<script\> blacklisting is a safeguard for reducing XSS. Is it misunderstanding the intention of the spec? ---- comment: 3 comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=26898#c3 *<NAME>* wrote on 2014-09-24 22:35:34 +0000. The current limitation was mainly added in order to be compatible with the web. It was originally not added for any security reasons. I don't think that blocking \<script\> in innerHTML is a meaningful XSS-prevention mechanism. But others might disagree.
fluentassertions/fluentassertions.json
599825225
Title: Would be great if you could also support System.Text.Json.JsonDocument Question: username_0:
Answers: username_1: Since that's part of the .NET Core framework, doesn't this belong in the main library's repository?
username_0: I guess so. Does that mean this issue should be moved?
username_1: Yeah, I think it's a great suggestion for FA itself.
sergot/http-useragent
115025082
Title: \r\n grapheme change in rakudo breaks t/170-request-common.t Question: username_0: I think that it's probably only the test itself but only one: Investigating when I've rebuilt the rakudo on the laptop. ``` t/160-issue-67.t ........ ok # Failed test at t/170-request-common.t line 16 # expected: 'POST / HTTP/1.1 # Host: 127.0.0.1 # content-type: multipart/form-data; boundary=XxYyZ # content-length: 190 # --XxYyZ # Content-Disposition: form-data; name="x"; filename="foo.txt" # content-type: application/octet-stream # bar # --XxYyZ # Content-Disposition: form-data; name="foo" # b&r # --XxYyZ-- # ' # got: 'POST / HTTP/1.1 # Host: 127.0.0.1 # content-type: multipart/form-data; boundary=XxYyZ # content-length: 180 # --XxYyZ # Content-Disposition: form-data; name="x"; filename="foo.txt" # content-type: application/octet-stream # bar # --XxYyZ # Content-Disposition: form-data; name="foo" # b&r # --XxYyZ-- # ' # Looks like you failed 1 test of 1 # Failed test 'uri' # at t/170-request-common.t line 7 # Looks like you failed 1 test of 1 # Failed test 'POST(multi-part)' # at t/170-request-common.t line 6 # Looks like you failed 1 test of 7 ``` Answers: username_0: This is actually worse now it crashes moar - will keep tracking as rakudo is a bit borked right now. username_1: wow, that's weird. do you need any help by now? username_0: It's in the hands of the MoarVM people right now as: ``` *** Error in `/home/jonathan/.rakudobrew/moar-nom/install/bin/moar': malloc(): memory corruption: 0x000000000a058970 *** *** Error in `/home/jonathan/.rakudobrew/moar-nom/install/bin/moar': malloc(): memory corruption: 0x000000000a058970 *** ``` Which clearly can't be fixed from here :-\ username_0: Right, as it currently stands, rakudo moar is fixed and most of the tests are passing but there is some fallout: ``` t/090-ua-ssl.t (Wstat: 256 Tests: 2 Failed: 1) Failed test: 2 Non-zero exit status: 1 t/100-redirect-ssl.t (Wstat: 65280 Tests: 0 Failed: 0) Non-zero exit status: 255 Parse errors: Bad plan. You planned 2 tests but ran 0. t/160-issue-67.t (Wstat: 256 Tests: 2 Failed: 1) Failed test: 1 Non-zero exit status: 1 Files=16, Tests=162, 236 wallclock secs ( 0.11 usr 0.03 sys + 228.78 cusr 2.96 csys = 231.88 CPU) Result: FAIL ``` If anyone fancies pitching in at this point, I'm working it down but some of it is quite deep. It's on This is perl6 version 2015.10-165-g619d0a1 built on MoarVM version 2015.10-51-ga362d21 by the way. It works fine on a rakudo more than a day or two old. username_0: The remaining issues are in the chunk parsing code. I've pushed the changes so far into the grapheme-fallout branch username_0: All good now, Just needed changing the use of \r and \n in a few places. Status: Issue closed
selectline-software/selectline-api
886547995
Title: Response POST ./ExtraTable/{tableName} Question: username_0: Hello,
I have an important suggestion:
It would be good if the POST returned the identifier of the newly created record.
This would simplify development considerably!
Thanks.
Status: Issue closed
Answers: username_1: Good day,
thanks for the feedback! The "feature", or rather the fix, was delivered with V21.2.
Best regards
pingcap/tidb
571860910
Title: planner/cascades: use StatsInfo to estimate the row count of Selection in TiKV layer Question: username_0: ## Feature Request **Is your feature request related to a problem? Please describe:** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> **Describe the feature you'd like:** <!-- A clear and concise description of what you want to happen. --> Currently, the cascades planner uses the default `Selectivity` to estimate the row count of a `Selection`, instead of using the `StatsInfo` of the table. Let's use the tpc-h `lineitem` as an example: ``` mysql> CREATE TABLE IF NOT EXISTS lineitem ( L_ORDERKEY INTEGER NOT NULL, -> L_PARTKEY INTEGER NOT NULL, -> L_SUPPKEY INTEGER NOT NULL, -> L_LINENUMBER INTEGER NOT NULL, -> L_QUANTITY DECIMAL(15,2) NOT NULL, -> L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL, -> L_DISCOUNT DECIMAL(15,2) NOT NULL, -> L_TAX DECIMAL(15,2) NOT NULL, -> L_RETURNFLAG CHAR(1) NOT NULL, -> L_LINESTATUS CHAR(1) NOT NULL, -> L_SHIPDATE DATE NOT NULL, -> L_COMMITDATE DATE NOT NULL, -> L_RECEIPTDATE DATE NOT NULL, -> L_SHIPINSTRUCT CHAR(25) NOT NULL, -> L_SHIPMODE CHAR(10) NOT NULL, -> L_COMMENT VARCHAR(44) NOT NULL, -> PRIMARY KEY (L_ORDERKEY,L_LINENUMBER), -> CONSTRAINT FOREIGN KEY LINEITEM_FK1 (L_ORDERKEY) references orders(O_ORDERKEY), -> CONSTRAINT FOREIGN KEY LINEITEM_FK2 (L_PARTKEY,L_SUPPKEY) references partsupp(PS_PARTKEY, PS_SUPPKEY)); Query OK, 0 rows affected (0.01 sec) mysql> load stats 's/tpch_stats/lineitem.json'; Query OK, 0 rows affected (0.45 sec) ``` ``` mysql> set tidb_enable_cascades_planner=0; Query OK, 0 rows affected (0.01 sec) mysql> explain select * from lineitem where l_shipdate > "1998-08-15"; +-------------------------+--------------+-----------+----------------------------------------------------------+ | id | count | task | operator info | +-------------------------+--------------+-----------+----------------------------------------------------------+ | TableReader_7 | 100001937.00 | root | data:Selection_6 | | └─Selection_6 | 100001937.00 | cop[tikv] | gt(tpch.lineitem.l_shipdate, 1998-08-15 00:00:00.000000) | | └─TableFullScan_5 | 300005811.00 | cop[tikv] | table:lineitem, keep order:false | +-------------------------+--------------+-----------+----------------------------------------------------------+ 3 rows in set (0.00 sec) ``` ``` mysql> set tidb_enable_cascades_planner=1; Query OK, 0 rows affected (0.01 sec) mysql> explain select * from lineitem where l_shipdate > "1998-08-15"; +-------------------------+--------------+-----------+----------------------------------------------------------+ | id | count | task | operator info | +-------------------------+--------------+-----------+----------------------------------------------------------+ | TableReader_7 | 240004648.80 | root | data:Selection_8 | | └─Selection_8 | 240004648.80 | cop[tikv] | gt(tpch.lineitem.l_shipdate, 1998-08-15 00:00:00.000000) | | └─TableFullScan_9 | 300005811.00 | cop[tikv] | table:lineitem, keep order:false | +-------------------------+--------------+-----------+----------------------------------------------------------+ 3 rows in set (0.01 sec) ``` As we can see above, plannercore uses the statistic information to estimate the row count after the condtion `l_shipdate > "1998-08-15"`, which is more accurate. While, the cascades planner just simply use `the row count of TableFullScan_9 * 0.8`. 
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
One solution is to push all of the conditions of the `LogicalSelection` into the `LogicalTableScan` or `LogicalIndexScan` in the exploration phase. Then, when we implement the `LogicalTableScan` or `LogicalIndexScan` in the implementation phase, we can split the conditions into a `PhysicalSelection` and calculate the StatsInfo for it.

**Teachability, Documentation, Adoption, Migration Strategy:**
<!-- If you can, explain some scenarios how users might use this, situations it would be helpful in. Any API designs, mockups, or diagrams are also helpful. -->
bitwalker/distillery
476787016
Title: Deployment is not working Question: username_0: ### Steps to reproduce 1. clone any phoenix application 2. let say directory is `/tmp/build-meida` 3. run `MIX_ENV=prod PORT=4000 mix distillery.release` 4. create a new directory `/opt/evercam_media` 5. extract the release zip to the above directory (release zip "/tmp/build-media/_build/prod/rel/evercam_media/releases/1.0.1564983472/evercam_media.tar.gz") 6. It works fine. 7. Repeat above process again without creating `/opt/evercam_media` again. everything is fine. but the newly added code is not there in `/opt/evercam_media`.. there is no errors but the newly deployed application is not working. even after start and stop changes are not there. ### Description of issue - What are the expected results? * Newly added code should be updated din release. - What version of Distillery? * `{:distillery, "~> 2.1"}` - What OS, Erlang/Elixir versions are you seeing this issue on? * Elixir 1.9.1 Erlang 21.0 - If possible, also provide your `rel/config.exs`, as it is often my first troubleshooting question, and you'll save us both time :) ``` # Import all plugins from `rel/plugins` # They can then be used by adding `plugin MyPlugin` to # either an environment, or release definition, where # `MyPlugin` is the name of the plugin module. ~w(rel plugins *.exs) |> Path.join() |> Path.wildcard() |> Enum.map(&Code.eval_file(&1)) use Distillery.Releases.Config, # This sets the default release built by `mix distillery.release` default_release: :default, # This sets the default environment used by `mix distillery.release` default_environment: Mix.env() # For a full list of config options for both releases # and environments, visit https://hexdocs.pm/distillery/config/distillery.html # You may define one or more environments in this file, # an environment's settings will override those of a release # when building in that environment, this combination of release # and environment configuration is called a profile environment :dev do # If you are running Phoenix, you should make sure that # server: true is set and the code reloader is disabled, # even in dev mode. # It is recommended that you build with MIX_ENV=prod and pass # the --env flag to Distillery explicitly if you want to use # dev mode. set dev_mode: true set include_erts: false set cookie: :"k2THOoD].DS}9DgxvhWdNm{D%PPQ81M4aN8m9auPwT3_n:2IEwre{CO;y|)[mdit" end environment :prod do set include_erts: true set include_src: false set cookie: :"l(qLwCV4Rk1@{3/?45s|.u$v{d1(Vz=moYf65aMcBd6Lo2JY?}OyR9(3npt`s;Jc" set vm_args: "rel/vm.args" end # You may define one or more releases in this file. # If you have not set a default release, or selected one # when running `mix distillery.release`, the first release in the file # will be used by default release :evercam_media do set version: current_version(:evercam_media) set applications: [ :runtime_tools ] end ``` Answers: username_1: As far as I can see, it is running the wrong release. You can run the latest release by doing the following in your release directory: `cp releases/start_erl.data var/start_erl.data` However, I would be happy about a fix for this (or hint what's wrong) as well, doing this on every release is pretty annoying. username_2: This seems to be a duplicate of #693, PR #703 should fix this username_0: Okay thanks, I am closing this. Status: Issue closed
jiggzson/nerdamer
96610148
Title: Get results in a decimal form Question: username_0: Is it possible to make the LaTeX generator return the decimal value instead of the fraction? `3/10 = 0.3 instead of 3/10` I know that we will face a floating point issue (because the result will be 0.30000000000000004). Perhaps add a BigDecimal library to handle it? Answers: username_1: Currently no. In the demo it's currently done by looking at the return and then converting it to a decimal. It's one of those things that got pushed to that magical place called tomorrow in the past. username_1: The issue is addressed and fixed in version 0.6.0. Status: Issue closed
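For anyone landing here later: the thread does not spell out the 0.6.0 API, but a plausible usage sketch, assuming `text()` grew an output-mode argument (the argument values are assumptions):
```javascript
var nerdamer = require('nerdamer');

console.log(nerdamer('3/10').text('decimals'));  // expected: '0.3'
console.log(nerdamer('3/10').text('fractions')); // expected: '3/10'
```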
kubernetes/website
1021697769
Title: Second level menu items are missing on docs Question: username_0: **This is a Bug Report** When left column menu goes 3 levels deep, for example Concepts >> Workloads >> Pod is active, 2nd level items following the active parent, for our example anything that follows "Workloads", are not displayed. Check the screenshot for the page https://kubernetes.io/docs/concepts/workloads/pods/, items like "Services, Load Balancing, and Networking" that come after Workloads are missing: ![image](https://user-images.githubusercontent.com/16473630/136657664-d81e2be2-eebc-4889-9069-d1a4e88fea59.png) <!--Additional Information:--> Behavior is confirmed for Firefox and Chrome, both in normal and private windows. Answers: username_1: /area web-development username_0: @username_1 It is not likely a designed behavior because it does not follow a predictable pattern when folding the parent items. Even if that is the case, there should be some indicator because there is no way to know Concepts has hidden items. Current implementation makes it very hard to navigate between third level pages, i.e from "Pods" or its children to "Services, Load Balancing, and Networking" and its children. More importantly there is no obvious benefit in hiding them. username_2: @username_0 @username_1 what's the behaviour decided here and is anyone working on this issue? username_1: The SIG Docs consensus was that this isn’t intended behavior, and we should be looking into a PR to address it. /triage accepted /lifecycle frozen /priority backlog username_1: There's no hint at the moment that anyone's working on this. username_1: - Pull requests are welcome - Work to diagnose the issue further is welcome username_2: okay, it's reproducible . Will check for the fix. @username_1 you can assign to me. username_1: For Kubernetes SIG Docs, anyone can work on any issue (our more jargon-y way of saying this is “no [cookie licking](https://devblogs.microsoft.com/oldnewthing/20091201-00/?p=15843)”) and open a pull request to propose a related change. username_1: (for my part, I rarely assign issues to myself - but when I have a draft or finished PR ready, I will link it to the issue and that provides a hint in the issue about progress). username_2: How to run the website in local?
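The last question is answered by the kubernetes/website README rather than this thread; a rough sketch (make targets and port may differ between releases):
```sh
git clone https://github.com/kubernetes/website.git
cd website
git submodule update --init --recursive --depth 1  # pulls the theme submodules
make serve            # Hugo dev server, typically at http://localhost:1313
# or, without installing Hugo locally:
make container-serve
```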
abarisain/dmix
94607766
Title: Add option to keep the screen on (e.g., when playing) Question: username_0: It would be nice to have the possibility to keep the screen on (e.g., when the music is playing and the app is in foreground). This would allow to dock the phone/tablet and build a very convenient to use docked remote. The screen can be kept on using other apps (e.g., tasker), but having the functionality directly in mpdroid would allow to specify finer grained conditions to control when the screen should not go off. Answers: username_1: Sorry, I think you should use another app for that, especially considering how Android allows this kind of automation. Status: Issue closed
sinkillerj/ProjectE
52822936
Title: Make the EMC system use doubles Question: username_0: This would fix multiple items such as stone slabs, which are calculated at 0.5 EMC each: the value is floored (math.floor), meaning the item ends up with 0 EMC and thus cannot be obtained, while setting it to 1 would cause multiple exploits. So could you please implement this? Status: Issue closed Answers: username_1: The switch to doubles has been decided against at this time. If a server wants to use doubles, they are free to self-compile; there is also the option of increasing the EMC values so that glass panes are no longer an issue.
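To spell out the arithmetic behind the report: with integer EMC, any fractional value is floored away, while scaling all base values up (the option the maintainer mentions) keeps the ratios exact. A quick Python illustration; the 0.5 value comes from the thread, while the scale factor is an arbitrary example:

```python
import math

slab_emc = 0.5                # a stone slab is worth half a stone
print(math.floor(slab_emc))   # 0 -> floored away, item unobtainable

# Rounding the slab up to 1 EMC instead would create value from nothing:
# one stone (1 EMC) crafts into two slabs (2 EMC), printing the surplus:
print(2 * 1 - 1)              # 1 EMC conjured per stone -> exploit

# Increasing base values removes the fraction without switching to doubles:
scale = 2                     # hypothetical global multiplier
print(int(slab_emc * scale))  # 1 -> exact integer EMC
```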
http-rs/tide
689526800
Title: Add support for routing on subdomains Question: username_0: <!--- Provide a general summary of the issue in the Title above -->
## Feature Request
URL routing does not yet cover the whole URL, only the path. I'm just wondering if support for routing on subdomains could be added to the framework.
## Detailed Description
<!--- Provide a detailed description of the change or addition you are proposing -->
Add new methods to `router.rs` and `server.rs` so that users can route on subdomains as well. An example of the resulting API I am thinking of would look like this:
```rust
let mut app = tide::new();
app.on_subdomain(":sub1.:sub2.:sub3").at("/").with(validate_subdomain).all(handle_user);
```
## Context
<!--- Why is this change important to you? How would you use it? -->
<!--- How can it benefit other users? -->
I have an application that can have multiple tenants who upload content to the server, and I would like to configure my web server to be able to route on subdomains. This will benefit other users of tide who want to add the same functionality to their own applications.
## Possible Implementation
<!--- Not obligatory, but suggest an idea for implementing addition or change -->
I haven't dug into the code base that much, but I assume I would need to add functionality to `server.rs` and `router.rs`. I know this may be a big addition, so I wouldn't mind taking a chance, implementing it, and putting in a PR. I'm just wondering if this project is also interested in adding support for it and wants to start a discussion.
Answers: username_1: We had a bit of a chat about this earlier today; I think this is a fairly promising direction. Prior art for subdomain routing includes:
- [koa-sub-domain](https://www.npmjs.com/package/koa-sub-domain)
- [Rails subdomain routing](https://gist.github.com/indiesquidge/b836647f851179589765) ([official docs](https://guides.rubyonrails.org/routing.html#request-based-constraints))
The Rails subrouting API seems quite interesting; if we were to translate that to Tide it could look like:
```rust
let mut app = tide::new();
app.subdomain("blog").at("/").get(|_| async { Ok("welcome to my blog") });
app.listen("localhost:8080").await?;
```
I'm leaning towards `app.subdomain` rather than abbreviating it as `app.sub` to prevent confusion with `app.nest`. Also, we should only add `subdomain` to `Server` but not `Route`, because subdomains feel like they should be declared at the root of the app, not nested within routes. As such, we should perhaps also guard that when nesting apps we disallow subdomains on the app we're nesting.
---
That's my opinion, @username_2 had some different ideas that I'm hoping he'll share!
username_2: ## proposal in brief
```rust
app.at("/path/still/works").get(…);
app.at("http://*/only-http").get(…);
app.at("//example.com/any-scheme/but-only-example.com").get(…);
```
## why don't other servers do this?
This proposal is further afield from examples in other frameworks, in that I've never seen an HTTP server router that does this. In general, perhaps because HTTP/1.0 requests don't have a host, routers treat the host, scheme, and path as distinct aspects of a request. In tide, we always have a host for a request.
username_0: Happy to hear the advice. I really enjoyed looking into the Rails subdomain routing and have an approach I think would work.
I liked your proposal, and I think the API design should look like the following:
```rust
let mut app = tide::new();
app.subdomain("example").at("/").get(|_| async { Ok("example subdomain") });
app.subdomain("portal").with(authentication).at("/").get(|_| async { Ok("Secure portal") });
app.subdomain(":user.blog").at("/").get(|req| async { Ok(req.param::<String>("user").unwrap()) });
app.listen("example.com").await?;
```
You may have noticed that I am making one assumption, which is that the user treats their apex domain as their base URL. I'm not 100% sure of a good way to let the user state the base URL.
In terms of the design of the additional code needed, I believe it would be best to do the same thing that `route-recognizer` does by having a `Router` container around the `Route` struct. I propose wrapping the `Subdomain` structs in a `Namespace` container:
```rust
struct Server<State> {
    // ...
    namespace: Namespace<State>,
    // ...
}

struct Namespace<State> {
    router: SubdomainRouter<Subdomain<State>>,
}

struct Subdomain<State> {
    subdomain: String,
    router: Router<State>,
    middleware: Vec<Arc<dyn Middleware<State>>>,
}
```
The `SubdomainRouter<T>` would be just like `route-recognizer`, except simpler. I wouldn't mind hearing your thoughts about this design. I am working on an implementation and hoping for it to be finished soon.
username_3: Is this still being considered? Would be nice to e.g. partition app routes from api routes.
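For readers following the design discussion, the `:param`-style subdomain matching proposed above reduces to segment-by-segment comparison against the host (minus the apex domain, per the assumption in the comment above). A rough, framework-free sketch; Python rather than Rust purely for brevity, and `match_subdomain` is an invented name:

```python
from typing import Optional

def match_subdomain(pattern: str, host: str) -> Optional[dict]:
    """Match hosts like 'alice.blog' against patterns like ':user.blog'."""
    pat_parts = pattern.split(".")
    host_parts = host.split(".")
    if len(pat_parts) != len(host_parts):
        return None
    params = {}
    for pat, part in zip(pat_parts, host_parts):
        if pat.startswith(":"):   # capture segment, e.g. ':user'
            params[pat[1:]] = part
        elif pat != part:         # literal segment must match exactly
            return None
    return params

print(match_subdomain(":user.blog", "alice.blog"))  # {'user': 'alice'}
print(match_subdomain("portal", "docs"))            # None (literal mismatch)
```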
viewweiwu/vue-tabs-chrome
843423251
Title: New Tabs always open in the first position Question: username_0:
```html
<vue-tabs-chrome ref="settingstab" :minHiddenWidth="120" :maxWidth="160" v-model="tab" :tabs="tabs" @click="handleClick" insert-to-after />
```
My tabs are stored in my state under navigation. My code to add a new tab is from your example:
```js
addTab (tab) {
  let item = tab.label.replace(' ', '');
  let newTabs = [
    { label: item, key: item.toLowerCase(), closable: true }
  ]
  this.$refs.settingstab.addTab(...newTabs)
  this.tab = item.toLowerCase();
}
```
When I add a new tab, the data is correct, but the tab is always positioned on top of my first tab. When I inspect the element, it doesn't have any inline CSS style associated with it like the other tabs do, and I get this console error: `[Vue error] TypeError: Failed to execute 'getComputedStyle' on 'Window': parameter 1 is not of type 'Element'.`
Answers: username_0: It seems not to work when the tabs come from a computed property. I found my own workaround. Status: Issue closed
username_1: @username_0 can you share the solution here? I ran into the same issue.
username_0: @username_1 hey, sure! So my issue was related to me attempting to do this:
```html
<vue-tabs-chrome ref="settingstab" :minHiddenWidth="120" :maxWidth="160" v-model="tab" :tabs="tabs" @click="handleClick" insert-to-after />
```
where the `:tabs="tabs"` piece was coming from a computed property. The computed property was the main issue here, so now my working solution is this: I needed the tabs to be synced with my state so I could manage them across the app and also potentially save user tabs for future logins. So when a tab gets added by a user, I do this to add the new tab to the existing tabs component like in the given examples, and also to the state for my needs:
```js
addUserTab(tabToOpen) {
  const newTab = {
    label: tabToOpen.label,
    key: `${tabToOpen.label.replace(' ', '-')}`, // no spaces allowed in key
    closable: true
  };
  // add our new tab configuration to tabs component and state
  this.$refs.settingstab.addTab(newTab);
  this.$store.commit('ADD_TAB', newTab);
}
```
So since it's in the state, when the user changes routes and all that, I want to continue showing my tabs, but I needed a workaround to avoid the computed property:
```html
<vue-tabs-chrome theme="custom" ref="settingstab" :minHiddenWidth="120" :maxWidth="160" :value="currentSubTab" :tabs="tabs" @dragstart="handleClick" :onClose="onClose">
```
So now I have `tabs` defined as an empty array under data, and that's what allows the component to render properly to start:
```js
data: function() {
  return {
    tabs: []
  };
}
```
And then on mount, I pull the tabs I want to show from my state and add them:
```js
mounted() {
  let mainTabs = this.$store.state.navigation.settingsTabs.concat(this.$store.state.navigation.selectedTabs);
  mainTabs.map(tab => {
    this.addTab(tab);
  });
}
```
Here is my addTab function to do that:
```js
addTab(tab) {
  const newTab = {
    label: tab.label,
    key: tab.key,
    closable: tab.closable
  };
  if (!this.$refs.settingstab.getTabs().find(tab => tab.label === newTab.label)) {
    // add our new tab configuration to tabs component if it isn't there already
    this.$refs.settingstab.addTab(newTab);
  }
}
```
TeamMentor/TM_4_0_Design
64058225
Title: CR.Beta.4 - Release notes Question: username_0: Link from this issue the issues added which represent features worth mentioning (this info will be added to the [release notes](https://github.com/TeamMentor/TM_4_0_Design/releases), and the issue will be closed when the code has been pushed to beta.teammentor.net)
**Release notes:**
**Features/Issues:**
* [ ]
Answers: username_1: I think it's easier to have a label.
username_1: Also this issue is hard to assign to anyone, and it would kind of pollute the label process we already have.
username_0: well, the idea is that once an issue with that label is closed we add it here; that way you always have the release notes done
username_1: Why not just add the issue with the label to a wiki page :)?
username_0: I think it's working quite nicely; look how some topics/items have more than one issue mapped to them. This way, if we keep updating this page as the issues become ready, it is easy to cut the release (i.e. there is not a lot of work to do at that moment in time).
The other part of the workflow is to move the items around as they go from QA to 'ready for release'.
username_1: If you want to maintain this issue, certainly feel free :). I am using a kanban board, so I need to use labels to maintain the status.
As far as release notes go, we don't need to have them for each beta; we need to have them for a release, which is something I can do by going through the closed P0 issues.
username_0: cool, where is the kanban board?
username_1: You've seen it before https://waffle.io/teammentor/master?label=Sprint2&source=TeamMentor%2FTM_4_0_Design
username_0: code is on master and ready for the Beta.4 release
username_0: and QA tests are passing (which include the Design and GraphDB tests):
![image](https://cloud.githubusercontent.com/assets/656739/6909307/92ef0ad8-d73c-11e4-91be-91b23ca59791.png)
username_0: Marked master as CR.Beta.4
![image](https://cloud.githubusercontent.com/assets/656739/6924397/2da3a1fe-d7d0-11e4-9be3-087ce945fee3.png)
pascalabcnet/pascalabcnet
134943908
Title: Compiler crash - Comparer Question: username_0:
```
function MinBy<T, TKey>(a: sequence of T; selector: T -> TKey): T;
begin
  var comparer := Comparer&<TKey>.Default;
  Result := a.Aggregate((min,x)-> comparer.Compare(selector(x),selector(min))<0 ? x : min);
end;

begin
end.
```
The problem is the name `comparer`: it collides with the `Comparer` type. If you rename `comparer`, everything works. It is somehow being captured incorrectly in the lambda. Status: Issue closed
TEIC/Stylesheets
198728413
Title: Various strange things in oddbyexample stylesheet Question: username_0: The oddbyexample stylesheet in Tools is a potentially useful way of automatically generating an ODD documenting encoding practice in a corpus of TEI documents. I've been testing it while writing a new tutorial on the subject (http://teic.github.io/TCW/howtoGenerate.html). So far I've noticed the following things in need of attention:
- the parameter "method", which supposedly determines whether the generated ODD uses @include or @except, in fact has no effect at all. It should probably be removed.
- there are several parameters (keepGlobals, enumerateType, enumerateRend et al) controlling which attributes should be provided with valLists in the output, and it's not clear how they interact or whether they are all necessary
- the generated ODD includes null declarations for every class declared by a module, whether any elements from that module are actually present in the corpus or not
Answers: username_1: IIRC the original oddbyexample always used except, which was not ideal; Sebastian changed it to use include instead; the method param was intended to support the original behaviour, but presumably never got implemented.
username_2: not sure if this is a bug or my own ineptitude, but when pointing to 'tei_simplePrint.odd' (stored locally) as `defaultSource`, the output is not confidence-inspiring. Before I go on a wild-goose chase, could someone confirm that this is or isn't how it is supposed to work? My goal is to create a myGenerated.odd that deletes/adds elements and classes based on a comparison of my example corpus and simplePrint instead of tei_all.
username_3: Shouldn't defaultSource point to the Guidelines? (I.e. the released copy or a local copy?) So pointing to an ODD won't work, but a 'compiled ODD' like p5subset.xml might work. So if you use oddtorelaxng with --odd and --debug, it should leave a copy of the compiled ODD behind to then point at as a source. But I'm not sure why you want to do this? It generates an odd from a corpus. If it then includes things you don't want, then edit the resulting odd to remove them. Am I misunderstanding? (All untested and from a phone, so not necessarily accurate!)
username_4: I believe @username_3 is right that the "source" has to be a compiled ODD. But this may indicate a shortcoming in the documentation. Are you doing this from the command line, @username_2?
username_2: Actually I'm doing this from inside eXist-db, but @username_3's comment got me on the right track. The reason why I want to be able to point to simplePrint is for the sake of user friendliness. If a user has a TEI file that uses simplePrint, oddbyexample will just duplicate all the simplePrint modifications in myExample, without differentiating between simplePrint customization and actual user customization. Obviously not a major problem, but if I can make it configurable it would be nicer.
username_0: If you want to derive your ODD from simplePrint, you need to supply a compiled version of simplePrint as the value of the @source attribute, as James says. See further http://teic.github.io/TCW/howtoChain.html I don't understand the relevance of oddbyexample here: that's intended for the case when you don't have an existing ODD to derive anything from.
username_2: @username_0 thanks for the help. To answer your question, I would like to add the odd-by-example functionality to tei-publisher, where a lot of stuff is already based on simplePrint.
So I would like to give users the option to choose either simplePrint or tei_all as the base file for their custom odd. I understand that it's not strictly necessary.
Pointing to the uncompiled `tei_simplePrint.odd` does not raise any errors; the output is valid, but incomplete (to the tune of 90% of declarations missing).
Using odd2odd.xsl to compile `simplePrintSubset.xml` from the odd does work, but results in an invalid file (90 errors: 88 invalid `<attList/>`, one wrongly placed `<p>`, one wrongly placed `<datatype>`). Feeding this to oddbyexample does not work. So the method param seems to work correctly, but I'm not sure how to proceed with the validation errors for the compiled simplePrint file.
I also tried to point to 'http://www.tei-c.org/Vault/P5/current/xml/tei/Exemplars/tei_simplePrint.doc.xml', since it was the only simplePrint XML file that I could find in the vault, but that one didn't work either.
username_2: FYI I have run some stress tests and, after diffing the results, am happy to report that oddbyexample seems to work as expected with different defaultSources (provided they are valid). I'll open a PR that adds the name of the default source to the output's header. Since it's user-configurable, it should be mentioned somewhere, I think.
username_2: With the release of tei-publisher `3.1.0`, users can now run oddbyexample transformations from its UI (or via their own XQuery code) [documentation](http://teipublisher.com/exist/apps/tei-publisher/doc/documentation.xml?root=2.5.8.19&odd=docbook.odd&view=div) ![buildodd](https://user-images.githubusercontent.com/6205362/42408434-0cf1f5e6-81cd-11e8-884b-00d015014291.gif)
username_0: This good news reminds me that something still needs to be done about the fact that odd2odd generates invalid source. Maybe the patch Piotr suggests on is not such a bad idea.
username_5: Btw. the `corpusList` parameter doesn't work as expected (by me). I had to use the following funny syntax to make it process a single file only:
    java -jar "./Stylesheets/lib/saxon9he.jar" \
    "./Stylesheets/tools/oddbyexample.xsl" \
    -o:"$OUTPUT_DIR/$output_file" \
    -it:main \
    corpusList=$(realpath $INPUT_DIR)?select=$INPUT_FILE
(where `$INPUT_FILE` is a file inside `$INPUT_DIR`)
ktorio/ktor
401528267
Title: form-data Question: username_0: When trying to receive form data (like so: https://ktor.io/samples/feature/post.html) I get an error saying `Content type multipart/form-data is not supported`. Does this mean I need to install a custom `ContentNegotiation` for it? Is this not something we get out of the box? None of the examples I can find that receive form data seem to do this. Also, `application/x-www-form-urlencoded` isn't supported either? Answers: username_1: Looks like `ContentNegotiation` interferes with the default receive interceptor that provides support for multipart. username_1: Receiving a request body is only possible once. Unfortunately, it is not prohibited, so extra attempts may fail or may return an empty value. We need to make it fail in any case, similar to a double attempt to respond to a call. username_1: Diagnostics were improved in 1.2.3, and the `DoubleReceive` feature was introduced. Status: Issue closed
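The "receiving is only possible once" point generalizes to any consumed stream: the first read drains it, and later reads see nothing. A tiny analogy in Python (not Ktor code; `BytesIO` stands in for the incoming request's byte stream):

```python
import io

body = io.BytesIO(b"a=1&b=2")  # stands in for an incoming request body

print(body.read())  # b'a=1&b=2' -> the first receive consumes the stream
print(body.read())  # b''        -> a second receive finds it already drained
```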
Sv443/CloudflareDUC
715384828
Title: Make use of `--expose-gc` in the `pkg` calls to manually run Node's GC Question: username_0: As CF-DUC should run 24/7, Node's automatic garbage collector might not be able to keep up. With the `--expose-gc` argument, the `global.gc` property is made available; it can be used to run the GC manually if needed.
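The pattern being proposed (check that manual GC is available, then trigger it periodically in a long-running daemon) looks roughly like the sketch below. It is Python, whose built-in `gc.collect()` plays the role Node's `global.gc` would play once exposed; the ten-minute interval is an arbitrary placeholder, not something from the issue:

```python
import gc
import threading

def collect_periodically(interval_seconds: float = 600.0) -> None:
    """Force a collection every interval in a 24/7 process."""
    gc.collect()  # analogous to `if (global.gc) global.gc()` in Node
    timer = threading.Timer(interval_seconds, collect_periodically,
                            [interval_seconds])
    timer.daemon = True  # do not keep the process alive just for GC
    timer.start()

collect_periodically()
```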
sobolevn/git-secret
867881515
Title: forbidden while trying to download https://dl.bintray.com/sobolevn/deb/git-secret_0.3.3_all.deb Question: username_0: Since today, it does not seem to be possible to download the git-secret Debian packages from Bintray anymore. Answers: username_1: Same happening here! All CI/CD down :( username_0: Could the Debian packages be moved to the releases section of [GitHub](https://github.com/username_2/git-secret)? username_2: Looks like GitHub cannot store `deb` releases the same way Bintray did. There are different tools to emulate it: https://github.com/rpatterson/github-apt-repos But it does not seem good enough. username_3: any workaround? I'm having the same issue: `The repository 'https://dl.bintray.com/username_2/deb git-secret InRelease' is not signed` username_2: I have attached a `deb` release here: https://github.com/username_2/git-secret/releases/tag/0.4.0.alpha1 Please check that it is valid and signed properly. Sorry for the fuss! I am communicating with several providers; there are no free options at the moment. Any ideas are welcome 🙂 username_2: Let's move to https://github.com/username_2/git-secret/issues/646 Status: Issue closed username_0: Related https://github.com/username_2/git-secret/issues/646
cozy/cozy-ui
423794091
Title: div in a p in Empty Question: username_0: I see this warning in my console <img width="583" alt="image" src="https://user-images.githubusercontent.com/465582/54763807-efe04080-4bf6-11e9-8092-35ac59d76851.png"> Answers: username_1: Seems about right. The Empty text is a `<p>`, and you're probably giving it a `<div>` instead of a string. Anyway, the warning is right: `<div>` cannot be a child of `<p>`. username_0: It was a mistake on our part, fixed, thanks! Status: Issue closed
strengejacke/sjPlot
474160059
Title: plot_model not recognizing axis labels from sjlabelled Question: username_0: ![image](https://user-images.githubusercontent.com/47396873/62067362-24918380-b1f9-11e9-8095-1a1ea5e6ae65.png) How can I make plot_model recognize my labels? Answers: username_1: I can't say exactly without a reproducible example, but I _think_ it might be due to the `subset` option in `lm()`. Make sure to subset your data beforehand (either with dplyr, which preserves the labels, or use `sjmisc::copy_labels()` to add back value and variable labels), and then fit the model without the `subset` option in `lm()`. Does that work? username_1: _bump_ username_0: I did not fix this problem. I went back and changed the names of the labels on the input to reflect what I wanted. Status: Issue closed
jtaghiyar/kronos
213958771
Title: Problem Question: username_0: Hello: I am interested in titan_workflow. I tested it using a matched tumor/normal BAM file; however, I get the following message while running:
```
python titan_test.py -c /home/zhouchi/software/titan_workflow/components/ -e titantest -w /home/zhouchi/titan_wd1 -b drmaa -d /usr/lib/gridengine-drmaa/lib/libdrmaa.so
pipeline finished with exit code 98
Job completed
Completed Task = task_0
```
I checked the log file and found this error in TASK 6:
```
#job name: TASK_6__0 with job id: 17442
#command
/home/zhouchi/titan_wd/2017-03-13_22-43-50/HCC101T_101T_HCC101N_101N_titan_test/scripts/TASK_6__0.sh
#cmdout
#cmderr
Loading required package: foreach
Loading required package: IRanges
Loading required package: methods
Loading required package: BiocGenerics
Loading required package: parallel

Attaching package: 'BiocGenerics'

The following objects are masked from 'package:parallel':

clusterApply, clusterApplyLB, clusterCall, clusterEvalQ,
clusterExport, clusterMap, parApply, parCapply, parLapply,
parLapplyLB, parRapply, parSapply, parSapplyLB

The following objects are masked from 'package:stats':

IQR, mad, xtabs

The following objects are masked from 'package:base':

anyDuplicated, append, as.data.frame, as.vector, cbind, colnames,
do.call, duplicated, eval, evalq, Filter, Find, get, grep, grepl,
intersect, is.unsorted, lapply, lengths, Map, mapply, match, mget,
order, paste, pmax, pmax.int, pmin, pmin.int, Position, rank,
rbind, Reduce, rownames, sapply, setdiff, sort, table, tapply,
union, unique, unlist, unsplit

Loading required package: S4Vectors
Loading required package: stats4
Loading required package: Rsamtools
Loading required package: GenomeInfoDb
Loading required package: GenomicRanges
Loading required package: XVector
Loading required package: Biostrings
Running TITAN...
titan: Loading data results/museq2counts/TASK_2_output_museq_postprocess.txt
titan: Loading default parameters
titan: Reading GC content and mappability corrected read counts ...
titan: Extracting read depth...
Slurping: /home/zhouchi/software/Mappability_File/hg19_100mer_map.wig
Parsing: fixedStep chrom=chr1 start=1 step=1000 span=1000
Parsing: fixedStep chrom=chr10 start=1 step=1000 span=1000
[Truncated]
__GENERAL__ R /usr/bin/R
__SHARED__ reference /home/zhouchi/data/hg19_ref/hg19.fasta
__SHARED__ ld_library_path None
__SHARED__ pythonpath None
__SHARED__ positions_file None
__SHARED__ map /home/zhouchi/software/Mappability_File/hg19_100mer_map.wig
__SHARED__ gc /home/zhouchi/software/GC_Content_File/hg19.gc.wig
__SHARED__ gene_sets_gtf /home/zhouchi/data/hg19_ref/hg19.gtf
__SHARED__ interval_file /home/zhouchi/software/titan_workflow/interval_file_titan.txt
__SHARED__ r_libs None
__SHARED__ genome_type UCSC
__SHARED__ model model_single_v4.0.2.npz
__SHARED__ museq_interval_file /home/zhouchi/software/titan_workflow/interval_file_mutationseq_ucsc.txt
__SHARED__ y_threshold 20
__SHARED__ target_list NULL
__SHARED__ chromosomes ['chr1','chr2','chr3','chr4','chr5','chr6','chr7','chr8','chr9','chr10','chr11','chr12','chr13','chr14','chr15','chr16','chr17','chr18','chr19','chr20','chr21','chr22','chrX','chrY']
```
The matched tumor/normal BAM files were produced by the GATK workflow.
Thanks!
facebook/buck
277314529
Title: BUCK and cross compiling on different platform Question: username_0: Hey, I want to use BUCK to cross-compile code for Arduino. I made it work for OSX by overriding the compiler tools in `.buckconfig`, which works great. However, I also need to make it work on different platforms, e.g. Linux for CI. I can still use the same tools, but the paths and flags passed for cross-compiling would be different. So my questions here are:
- How do you handle cross-compilation for different platforms?
- Do I need to create a different `.buckconfig` for each platform, or is there a better way of doing this?
Answers: username_1: The way we do this at Facebook is to use different .buckconfig files, or use Buck's `@` argument syntax to easily pass options on the command line. For example, you could check in a file called `mode/arduino` that contains
```
--config cxx.cxxflags="-std=gnu++14 -Os"
```
then compile with `buck build @mode/arduino //my:target` to get that configuration option set on the command line. What flags need to be different per platform? The include path locations? For Android, we have native support for relocating the NDK, but another option is to just require the header files to be installed in a fixed location. username_0: That approach would work like a charm! Thank you. Status: Issue closed
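For readers unfamiliar with `@`-argfiles, the mechanism the answer relies on is simply that the CLI splices the file's tokens into the argument list before parsing. A rough Python sketch of that expansion (illustrative only, not Buck's actual implementation):

```python
import shlex

def expand_argfiles(argv):
    """Replace each '@path' argument with the tokens found in that file."""
    expanded = []
    for arg in argv:
        if arg.startswith("@"):
            with open(arg[1:]) as f:
                expanded.extend(shlex.split(f.read()))  # honors quoted flags
        else:
            expanded.append(arg)
    return expanded

# With mode/arduino containing the line from the answer,
# ['build', '@mode/arduino', '//my:target'] expands to
# ['build', '--config', 'cxx.cxxflags=-std=gnu++14 -Os', '//my:target']
```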
VSCodeVim/Vim
543929472
Title: Multiple Cursor bug in html file Question: username_0: **Describe the bug**
In the latest VS Code (1.41) the VS Code team released a new feature called [html mirror cursor](https://code.visualstudio.com/updates/v1_41#_html-mirror-cursor). Now when I open an HTML file and go to a tag in normal mode, it also selects the closing tag in multi-cursor mode, but when I move my cursor it does not close the multi-cursor mode.
**To Reproduce**
Create a simple HTML file with some tags, like
```
<form>
<input type='text'>
<input type='text'>
</form>
```
Move your cursor in normal mode to the opening form tag; you will see that the closing form tag also gets selected. Now move your cursor to the next line onto an input; you will see that the form tag is still selected.
**Expected behavior**
The multi-cursor should be dismissed when the cursor is not on an HTML tag.
**Environment (please complete the following information):**
<!-- Ensure you are on the latest VSCode + VSCodeVim You can use "Report Issue" by running "Developers: Show Running Extensions" from the Command Palette to prefill these. -->
- Extension (VsCodeVim) version:
- VSCode version: 1.41
- OS: elementary OS 5 (Juno)
Answers: username_0: For anyone suffering from the same issue: you can disable this VS Code feature via the `html.mirrorCursorOnMatchingTag` setting (i.e. `"html.mirrorCursorOnMatchingTag": false` in settings.json); just search for _mirror cursor on matching tag_ in the settings and uncheck the checkbox.
username_1: @username_0 thanks, this was so annoying I temporarily disabled the whole vim extension until I had time to search for a fix
username_2: omg this was annoying as hell, had no idea why this was happening. Vim normally doesn't even have multi-cursors, was super confused. Was ready to disable Vim until I found this after an hour.
username_3: HTML mirror cursor is an absolutely horrible feature. Was screwing me up for quite a while until I was finally forced to search about it. "On" by default is just an unbelievably bad choice.
username_4: @username_3 to be fair, I think if you're not using VSCodeVim, it's probably quite useful
username_3: I'm not using that, and I can assure you it's still not useful. I often paste class attributes etc. to similar open tags, and it kept pasting it all on the end tags as well. I've turned it off now, but I really disagree with the decision to have it "on" by default.
username_4: Ah interesting; I'd figured it would be smart enough to not copy attributes to the end tag. That is bothersome.
username_5: In my case, `html.mirrorCursorOnMatchingTag` was off by default, but I had the `https://github.com/formulahendry/vscode-auto-rename-tag` extension installed, which messed up this vim plugin big time.
chukwuemekachm/GraphQL-API
404255293
Title: Users should be able to query reviews on the platform Question: username_0: #### Why is this important?
- This will enable the querying of reviews on the platform, along with their relations.
#### Acceptance Criteria
**Scenario 1**
- **GIVEN** - A user
- **WHEN** - Makes a query with a pagination `limit` and `offset`
- **THEN** - A list of paginated reviews should be returned
**Scenario 2**
- **GIVEN** - A user
- **WHEN** - Makes a query with `sorting` or `filter` parameters
- **THEN** - A list of filtered and sorted reviews matching the user's parameters should be returned
**Scenario 3**
- **GIVEN** - A user
- **WHEN** - Makes a query with no `filter`, `limit`, `offset` nor `sort` parameters
- **THEN** - A default list of reviews, limited to a value set by the server, should be returned
#### DEV NOTES
- Create a `getReviews` resolver which queries reviews on the API. Status: Issue closed
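A framework-free sketch of the resolver behaviour the three scenarios describe; Python is used only for illustration, and the function shape plus the server default of 10 are assumptions, not part of the issue:

```python
SERVER_DEFAULT_LIMIT = 10  # hypothetical cap applied when the client sends none

def get_reviews(reviews, limit=None, offset=0, sort_key=None, predicate=None):
    """limit/offset pagination with optional filtering and sorting."""
    result = [r for r in reviews if predicate(r)] if predicate else list(reviews)
    if sort_key:                     # Scenario 2: sorting parameter supplied
        result.sort(key=sort_key)
    if limit is None:                # Scenario 3: fall back to the server default
        limit = SERVER_DEFAULT_LIMIT
    return result[offset:offset + limit]  # Scenario 1: client-driven paging

# Usage: page two of five-star reviews, five per page:
# get_reviews(all_reviews, limit=5, offset=5, predicate=lambda r: r["rating"] == 5)
```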
ClickHouse/ClickHouse
835951722
Title: Materialized view has incorrect data from `ReplacingMergeTree` table when rows are collapsing. Question: username_0: Hey there, first things first: thanks a lot for the wonderful project.
I am trying to create a "data sync" table with hourly aggregated data from a `ReplacingMergeTree` table. I've created the table and the `MATERIALIZED VIEW` for syncing as:
```sql
CREATE TABLE summs
(
    timestamp Datetime DEFAULT now(),
    id UInt32,
    sign Int8 DEFAULT 1,
    value Float32,
    projectId Int8,
    page String
)
ENGINE = ReplacingMergeTree(sign)
PARTITION BY tuple()
order by (id, projectId, page);

CREATE TABLE summs_hourly
(
    id UInt32,
    hour DateTime,
    page String,
    projectId Int8,
    -- pageviews AggregateFunction(count),
    valueavg AggregateFunction(avg, Float32)
)
ENGINE = AggregatingMergeTree()
Partition by toYYYYMM(hour)
ORDER BY (hour, id, page, projectId);

CREATE MATERIALIZED VIEW summs_hourly_mv TO summs_hourly AS
SELECT
    toStartOfHour(timestamp) as hour,
    id,
    page,
    projectId,
    countState() pageviews,
    avgState(value) valueavg
FROM summs
GROUP BY hour, id, page, projectId;
```
Afterwards I am adding some data, and everything seems to work as expected:
```sql
INSERT INTO summs (id, sign, value, projectId, page) VALUES (2, 1, 200, 1, '/page2'), (2, 1, 100, 1, '/page3'), (2, 1, 300, 1, '/page3'), (2, 1, 200, 1, '/page3');
```
And retrieving the data gives me the proper results:
```
SELECT hour,
       id,
       page,
       projectId,
       countMerge(pageviews) pageviews_final,
       avgMerge(valueavg) avgState_final
FROM summs_hourly FINAL
GROUP BY hour,
         id,
[Truncated]
       page,
       projectId,
       countMerge(pageviews) pageviews_final,
       avgMerge(valueavg) avgState_final
FROM summs_hourly FINAL
GROUP BY hour,
         id,
         page,
         projectId;

┌────────────────hour─┬─id─┬─page───┬─projectId─┬─pageviews_final─┬─────avgState_final─┐
│ 2021-03-19 12:00:00 │  2 │ /page3 │         1 │               3 │                200 │
│ 2021-03-19 12:00:00 │  2 │ /page2 │         1 │               3 │ 244.33333333333334 │
└─────────────────────┴────┴────────┴───────────┴─────────────────┴────────────────────┘
```
`pageviews` and `avgState_final` are also aggregating the rows with `sign:-1`. I am not quite sure if that's a bug, or maybe I need to tune the aggregated view. Any help will be highly appreciated.
Answers: username_1: A materialized view is an insert trigger. It knows nothing about the engine of the base table. https://username_1.github.io/Everything_you_should_know_about_materialized_views_commented.pdf https://youtu.be/ckChUkC3Pns?t=9326
username_0: Hey @username_1, thanks for the response. Is there any chance I could use aggregated materialized views along with collapsing rows without manually syncing data with a cron job?
username_1: To be honest, I did not read your issue. I don't have time to read/think. I just replied with my snippet. I see you have some sign column; can you multiply (`sign * value as value`) in the materialized view?
username_0: @username_1 I appreciate your time and efforts. I don't get your point about the sign values and aggregating the count for pageviews; can you elaborate a bit more?
username_1:
```
ReplacingMergeTree(sign)
VALUES (2, 1, 200, 1, '/page2'),
VALUES (2, -1, 200, 1, '/page2'), (2, 1, 333, 1, '/page2');
```
Have you confused CollapsingMT and ReplacingMergeTree? Because ReplacingMergeTree uses the version policy, not the sign. But anyway.
You are inserting values into the SummingMT; if you weight each value by the sign, you get the matching result: MV value = value*sign = 1*1 + 1*(-1) + 1*1 = 1
```sql
CREATE MATERIALIZED VIEW summs_hourly_mv TO summs_hourly AS
SELECT
    toStartOfHour(timestamp) as hour,
    id,
    page,
    projectId,
    sumState(1*sign) pageviews, ----<------
    avgState(value*sign) valueavg ----<------
FROM summs
GROUP BY hour, id, page, projectId;
```
Status: Issue closed
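As a quick sanity check of the sign-weighted bookkeeping suggested above, here is the thread's arithmetic (`1*1 + 1*(-1) + 1*1 = 1`) reproduced in plain Python. This mirrors the sum-based form of the trick, where counts and totals weighted by the sign cancel the retracted row:

```python
rows = [(1, 200.0), (-1, 200.0), (1, 333.0)]  # (sign, value) as in the example

pageviews = sum(sign * 1 for sign, _ in rows)          # sumState(1 * sign)
weighted_total = sum(sign * value for sign, value in rows)

print(pageviews)                   # 1 -> the cancelled row drops out
print(weighted_total / pageviews)  # 333.0 -> average over surviving rows
```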
Chia-Network/chia-blockchain
835808472
Title: [BUG] Plot fails with Not enough memory for sort in memory Question: username_0: **Describe the bug**
I've tried multiple times to create a plot on Amazon Linux; each time I get the following message. It always seems to fail at bucket 127, no matter how big my instance is:
```
Bucket 125 QS. Ram: 0.465GiB, u_sort min: 0.563GiB, qs min: 0.281GiB. force_qs: 0
Bucket 126 QS. Ram: 0.465GiB, u_sort min: 0.563GiB, qs min: 0.281GiB. force_qs: 0
Bucket 127 QS. Ram: 0.465GiB, u_sort min: 1.125GiB, qs min: 0.281GiB. force_qs: 0
Total matches: 4294941446
Forward propagation table time: 3585.387 seconds. CPU (115.580%) Fri Mar 19 10:20:21 2021
Computing table 3
Caught plotting error: Not enough memory for sort in memory. Need to sort 0.562430GiB
Traceback (most recent call last):
File "/home/ec2-user/chia-blockchain/venv/bin/chia", line 33, in <module>
sys.exit(load_entry_point('chia-blockchain', 'console_scripts', 'chia')())
File "/home/ec2-user/chia-blockchain/src/cmds/chia.py", line 59, in main
cli() # pylint: disable=no-value-for-parameter
File "/home/ec2-user/chia-blockchain/venv/lib64/python3.7/site-packages/click/core.py", line 1026, in __call__
return self.main(*args, **kwargs)
File "/home/ec2-user/chia-blockchain/venv/lib64/python3.7/site-packages/click/core.py", line 956, in main
rv = self.invoke(ctx)
File "/home/ec2-user/chia-blockchain/venv/lib64/python3.7/site-packages/click/core.py", line 1518, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/ec2-user/chia-blockchain/venv/lib64/python3.7/site-packages/click/core.py", line 1518, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/ec2-user/chia-blockchain/venv/lib64/python3.7/site-packages/click/core.py", line 1280, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/ec2-user/chia-blockchain/venv/lib64/python3.7/site-packages/click/core.py", line 711, in invoke
return callback(*args, **kwargs)
File "/home/ec2-user/chia-blockchain/venv/lib64/python3.7/site-packages/click/decorators.py", line 22, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/ec2-user/chia-blockchain/src/cmds/plots.py", line 134, in create_cmd
create_plots(Params(), ctx.obj["root_path"])
File "/home/ec2-user/chia-blockchain/src/plotting/create_plots.py", line 176, in create_plots
args.nobitfield,
RuntimeError: std::exception
```
However, I have plenty of memory (see below).
**To Reproduce**
1. chia plots create -n 1 -b 512 -e -d /mnt/data/chia-final
**Expected behavior**
Be able to create a plot
**Desktop**
```
chia version
1.0.0
uname -a
Linux 4.14.219-164.354.amzn2.x86_64 #1 SMP Mon Feb 22 21:18:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 448K 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/nvme0n1p1 8.0G 1.6G 6.5G 20% /
tmpfs 788M 0 788M 0% /run/user/1000
/dev/nvme1n1 600G 15G 586G 3% /mnt/data
tmpfs 788M 0 788M 0% /run/user/0
free -h
total used free shared buff/cache available
Mem: 7.7G 121M 6.4G 448K 1.1G 7.3G
Swap: 0B 0B 0B
```
**Additional context**
Add any other context about the problem here.
Answers: username_1: Your -b setting is way too low. Memory guidance is in the middle of https://chia.net/2021/02/22/plotting-basics.html Status: Issue closed
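The numbers already present in the log explain the failure once the units are aligned; plain Python arithmetic makes it obvious why `-b 512` cannot satisfy this sort:

```python
ram_gib = 0.465        # "Ram: 0.465GiB" reported per bucket with -b 512
needed_gib = 0.562430  # "Need to sort 0.562430GiB" from the error message

print(needed_gib > ram_gib)      # True -> the sort cannot fit in the buffer
print(round(needed_gib * 1024))  # 576 (MiB) needed vs. the 512 MiB -b setting
```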
OpenSeaMap/online_chart
17361658
Title: Permalink Wikipedia Gallery Layer Question: username_0: This bug has been reported on the mailing list (http://sourceforge.net/mailarchive/message.php?msg_id=31210955).
###### Affected browser versions
* Seems to be a browser-independent problem.
###### Description
When a permalink created for the Wikipedia Gallery Layer is opened, the Wikipedia POI Layer is displayed instead.
###### How to reproduce the problem
* Create a permalink while the Wikipedia Gallery Layer is activated.
* Reopen this link.
###### Expected result
* The Wikipedia Gallery Layer should be displayed.
Answers: username_1: Yes, this is a bug. The thumbnail feature is not a separate layer but an option for the Wikipedia layer. With the current system for creating the permalinks there is no easy way of changing this. Possible solutions:
* Implement the Wikipedia-Thumbnail Layer as an extra layer (maybe as a pseudo layer)
* Change the source for generating and processing the permalinks
mapstruct/mapstruct
419065021
Title: Refactor SourceReference Question: username_0: - [ ] Is this an issue (and hence not a question)? If this is a question about how to use MapStruct, there are several resources available.
- Our reference and API [documentation](http://mapstruct.org/documentation/reference-guide/).
- Our [examples](https://github.com/mapstruct/mapstruct-examples) repository (contributions always welcome)
- Our [FAQ](http://mapstruct.org/faq/)
- [StackOverflow](https://stackoverflow.com), tag MapStruct
- [Gitter](https://gitter.im/mapstruct/mapstruct-users) (you usually get fast feedback)
- Our [Google group](https://groups.google.com/forum/#!forum/mapstruct-users) Status: Issue closed
Ruban-P/Inventory-and-Sales-Application-for-a-Book-Store
1012940563
Title: Required Functions Question: username_0: - [ ] OnDiscount() – return all books that are on discount.
- [ ] OnDiscount(minDiscount) – return all books that have a discount of minDiscount or more.
- [ ] GetAvailability(ISBN) – "out of stock" if the number of copies is 0, "low stock" if less than 10 copies are available, and "in stock" otherwise.
- [ ] QuickSellBook(ISBN, copiesSold) – Only for selling a single book. Reduces the number of copies from the inventory and adds the amount to credit by sales in accounts. Default for copiesSold is 1. Generate an Invoice.
- [ ] ReStock(ISBN, copiesPurchased) – Add the purchased number of copies to the inventory and add the amount to debit by purchase in accounts. Default for copiesPurchased is 5. Generate a Payment Challan.
- [ ] GetSeller(ISBN) – return the seller details for a given book.
- [ ] SearchByAuthor(Author) – search a book based on author name.
- [ ] SearchByTitle(Title) – search a book based on its title.
- [ ] GetISBN(Title) – Given the exact title, return the ISBN number for that book.
- [ ] AddToCart(ISBN, copiesSold) – When more than one book is purchased. Add the book to the cart along with the number of copies.
- [ ] Checkout() – Calculate the total cost after discounts and generate an invoice for the same.
- [ ] AddBook() – Provide an option to add a new book to the inventory.
- [ ] RemoveBook(ISBN) – Provide an option to remove a book from the inventory.
- [ ] EditBookDetails(ISBN) – Option to edit author details and title in case of typos.
- [ ] UpdatePrice(ISBN, newPrice) – Update the selling price of a given book.
- [ ] UpdateSeller(ISBN, Seller Details) – Update the seller details for a given book.
- [ ] AddSeller() – Add a new seller.
- [ ] FetchBooks(Seller) – Get all the books that can be purchased from that seller.
- [ ] UpdateDiscount(ISBN, Discount%) – Option to update the discount given for a given book.
- [ ] GetSalesDetails(year) – Get the past sales details of all books for the given year. Number of copies sold in all months of that year, along with total income and profit.
- [ ] GetSalesDetails(year, month) – Get the past sales details of all books for the given month in the given year. Number of copies sold in that month, along with total income and profit.
- [ ] GetSalesDetails(ISBN) – Get the past sales details of a given book. Number of copies sold every month/year in the past, along with total income and profit.
- [ ] GetSalesDetails(ISBN, year) – Get the past sales details of a given book for the given year. Number of copies sold in that year, along with total income and profit.
- [ ] GetSalesDetails(ISBN, year, month) – Get the past sales details of a given book for the given month and year. Number of copies sold in that month, along with total income and profit.
- [ ] GetPurchaseDetails(Year) – Get the past purchase details for a given year for all the books from all sellers. Number of copies of each book purchased, along with total expense.
- [ ] GetPurchaseDetails(Year, month) – Get the past purchase details for a given year and month for all the books from all sellers. Number of copies of each book purchased, along with total expense.
- [ ] GetPurchaseDetails(Year, month, ISBN) – Get the past purchase details for a given year and month for the given book from all sellers. Number of copies of that book purchased, along with total expense.
- [ ] GetPurchaseDetails(Year, month, Seller) – Get the past purchase details for a given year and month for all the books from that seller. Number of copies of each book purchased, along with total expense.
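As one concrete reading of the spec above, the `GetAvailability(ISBN)` thresholds translate directly into code. A minimal Python sketch, with the copy count passed in directly since the inventory lookup itself is out of scope here:

```python
def get_availability(copies: int) -> str:
    """Thresholds exactly as specified: 0, fewer than 10, otherwise."""
    if copies == 0:
        return "out of stock"
    if copies < 10:
        return "low stock"
    return "in stock"

print(get_availability(0))   # out of stock
print(get_availability(3))   # low stock
print(get_availability(25))  # in stock
```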
invertase/react-native-apple-authentication
546677134
Title: Can I translate the button text "Continue with Apple" to Portuguese? Question: username_0: Hi, I've recently been using your component to implement Sign in with Apple (iPhone). But my app is in Portuguese (Brazil); is it possible to translate it? Answers: username_1: The text is generated via the Apple SDK, so it cannot be set manually; however, if the device is set to another language, I believe the button text changes based on that. Status: Issue closed username_2: It definitely does; my app is in Spanish and English, and it translates to Spanish when the device is set to Spanish. username_0: Thank you @username_2 and @username_1
gravitee-io/issues
280453183
Title: [Portal] Login page mentions that the server is unreachable Question: username_0: Hello support, I have installed and started gravitee-gateway, gravitee-api, and gravitee-api-ui, but when I try to log in I get the error 'server unreachable'
![image](https://user-images.githubusercontent.com/19307923/33763149-40971410-dc07-11e7-9ed1-57d4f48f18e5.png)
In my dev console
![image](https://user-images.githubusercontent.com/19307923/33763211-86645976-dc07-11e7-9a58-42dd2f45d625.png)
Best regards
Answers: username_1: hi, this probably means that your api management is unreachable :) By default, the UI tries to request the API at `localhost:8083/management`. Can you try a `GET http://localhost:8083/management/apis`?
username_0: it returns a [ ]
![image](https://user-images.githubusercontent.com/19307923/33763921-3dda69ae-dc0a-11e7-93d9-44a8e8184bd8.png)
username_0: I forgot to mention that I've tested that in an external browser. I've changed the "baseUrl" attribute in constant.json, and I'm having the same error
![image](https://user-images.githubusercontent.com/19307923/33770007-0f32462a-dc24-11e7-94a8-32ae728811dd.png)
username_0: Any suggestions, Nicolas?
username_0: Here is my constantes.json; am I missing something?
![image](https://user-images.githubusercontent.com/19307923/33881949-66a2d3fe-df2e-11e7-948a-fe600fcc9a39.png)
username_1: can you try on your laptop: `curl http://192.168.2.241:8083/management/apis`?
username_0: Hi Nicolas, when I try this command I'm getting a timeout
![image](https://user-images.githubusercontent.com/19307923/33899098-58cd0eaa-df62-11e7-8f6d-b9ac2c8de7e5.png)
username_1: Ok, so this means that your management api is not reachable from your laptop. Because the UI is a Single Page Application, the code is executing in your browser (meaning locally).
username_0: Thank you Nicolas 👍 So how could we expose the UI externally?
username_1: The problem is not the ui but the management api. The api must be reachable. How do you deploy it?
username_0: After deploying the gateway project, I'm using the command below to start the project
![image](https://user-images.githubusercontent.com/19307923/33931271-71707462-dfe7-11e7-8d00-1281f90f8334.png)
![image](https://user-images.githubusercontent.com/19307923/33931343-9d6cc9e4-dfe7-11e7-9f09-52e1295ad3b0.png)
username_0: Hello, sorry for bothering you guys; I think port 8083 is not accessible remotely. Which file contains this value so that I can change it? :)
username_1: you have to update the gravitee.yml file: https://github.com/gravitee-io/gravitee-management-rest-api/blob/14e09160250b73e12f66a58686ef6ac8d57471f8/gravitee-management-api-standalone/gravitee-management-api-standalone-distribution/src/main/resources/config/gravitee.yml#L18
username_2: Closing for inactivity Status: Issue closed
foundweekends/giter8
307217558
Title: Allow resolving a template from a sub-directory Question: username_0: Either locally or from a sub-directory in a Git repo, to make it possible to have multiple (related) templates in the same repo. Answers: username_1: See #396 Locally, it's possible to do e.g. `sbt new file://path/to/a/template` username_2: Isn't this actually implemented [here](https://github.com/foundweekends/giter8/blob/3c90ebedbf63be384a8673c11178c53767d85617/cli-git/src/main/scala/Runner.scala#L89-L91)? Status: Issue closed username_3: I guess so. I'm closing this for now, but let us know if `--directory` doesn't address it.
wso2/product-apim
258703266
Title: [APIM 3.0.0 M6 Store] login user shows as "undefined" Question: username_0: **Environment**
apache-activemq-5.14.0
wso2is-5.3.0
wso2apim-das-3.0.0-m6
wso2apim-3.0.0-m6
wso2apim-gateway-3.0.0-m6
**Reproduce Steps**
1) Browse the store page
2) Enter the username and password (used the admin username and password)
3) Log in to the store
**Actual**
The logged-in user shows as "undefined"
**Expected**
The logged-in user should show as admin, or whichever user logged in Status: Issue closed
wurstmeister/kafka-docker
227940301
Title: Why is /var/run/docker.sock needed? It runs without it Question: username_0: ```bash
$ tail -1 start-kafka-shell.sh
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -e HOST_IP=$1 -e ZK=$2 -i -t username_1/kafka /bin/bash

# this also runs
$ docker run --rm -it username_1/kafka /bin/bash
bash-4.3#
```
What did you use `-v /var/run/docker.sock:/var/run/docker.sock` for? Answers: username_1: it's only required if you want to run docker from within the container (e.g. https://github.com/username_1/kafka-docker/blob/master/broker-list.sh) username_0: Hmm, `docker port` needs it? username_2: Any chance this can be made optional through an environment variable? I just want to run a single broker for CI purposes. username_3: @username_2 - this is optional. Just don't mount the docker socket through to the container, and `start-kafka.sh` will not execute the docker command, see: [https://github.com/username_1/kafka-docker/blob/master/start-kafka.sh#L12](https://github.com/username_1/kafka-docker/blob/master/start-kafka.sh#L12) Specifically the check for the socket.
```
-S /var/run/docker.sock
```
Be sure to set `KAFKA_ADVERTISED_PORT` manually if you need it. Status: Issue closed username_3: Added this info to the FAQ: [https://github.com/username_1/kafka-docker/wiki](https://github.com/username_1/kafka-docker/wiki)
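The conditional behaviour described in the answer (auto-configuration only when the socket is mounted) boils down to a socket-presence test like the shell's `-S`; a Python equivalent for illustration:

```python
import os
import stat

def docker_socket_mounted(path: str = "/var/run/docker.sock") -> bool:
    """Mirror the shell test `-S /var/run/docker.sock` from start-kafka.sh."""
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False

# Only auto-derive the advertised port when the socket is available;
# otherwise KAFKA_ADVERTISED_PORT must be set manually.
print(docker_socket_mounted())
```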
even-cheng/fototo
910311795
Title: LeanCloud initialization fails Question: username_0: After running, it reports:
```
Fototo was compiled with optimization - stepping may behave oddly; variables may not be available.
LCError(code: 9976, reason: "Server URL not set.")
```
This error is reported by the LeanCloud module. I looked up the related explanation: https://forum.leancloud.cn/t/server-url-not-set/21848
```
Hello, in the new SDK versions (v12.0.0 and above), you are required to set the server URL, i.e. the custom API domain bound in the console. Apps used for temporary testing on the development tier can use a temporary shared domain (temporary shared domains are only valid for three months and come with no availability guarantee); the temporary shared domain (server URL) can be found in the console under App Settings > App Keys. We recommend first upgrading to the commercial tier to bind and register a domain; after binding the domain, refer to the installation documentation and set the server URL to your bound API domain during initialization.
```
Could the author please look into this? Thanks.
Answers: username_1: I have updated the LeanCloud settings; please try again.
username_0: ![image](https://user-images.githubusercontent.com/6984865/120737519-ca625780-c520-11eb-8b05-9e57e85ecfe3.png)
```
LCError(code: 9976, reason: Optional("Server URL not set."), userInfo: nil, underlyingError: nil)
```
username_0: I pulled the updated code and it works now. Thanks. 👍 Status: Issue closed
asyncapi/java-spring-template
1058483267
Title: Maven plugin to have the Java files generated automatically during build Question: username_0: Hi, Is there a Maven plugin for this project that we could use? It would be great if we could use this project in our Maven-based Spring Boot application; we would like to generate the classes during a Maven build, not manually.
#### Reason/Context
Please try answering a few of these questions
- Why do we need this improvement? To generate the Java files automatically during the build, not manually.
- How will this change help? It would make the generation faster and smoother.
- What is the motivation? To be able to use a plugin that generates the files.
#### Description
Please try answering a few of these questions
- What changes have to be introduced? I am not entirely sure.
- Will this be a breaking change? I don't think so.
- How could it be implemented/designed?
1. Creating a plugin that runs the `npm install` and `ag` commands
2. Adding parameter options
3. Uploading the plugin to a public place
Best regards, Laszlo
Answers: username_1: Hi, I don't know of such plugins; the best I can suggest for now is to use an npm Maven plugin to run the AsyncAPI generator.
username_2: @username_1 oh man, you are back 😁 @username_0 as @username_1 said, we always recommend using Maven plugins for npm that allow you to run npm packages, so in theory, you could run the generation this way in Maven. I personally did not use it. @Pakisan did you have a chance to explore it? I think we talked about it some time ago, but I just don't remember; I'm getting old 😄
neo-project/neo
977647359
Title: Is the 'verify' method not being called anymore? Question: username_0: Hi. I don't know if this has changed, if it is a bug, or if it was always like that: but when I try to withdraw funds from a smart contract, shouldn't it call the 'verify' method? Is this working properly? The tests are being done using neo-express, but I believe this is not the reason why it doesn't work. Thanks
Answers: username_1: A contract can transfer assets owned by itself to others without the need to execute the `verify` method.
username_0: Sorry, I didn't make it clear: I sent funds (NEO/GAS) to a contract. I want to withdraw them. It was supposed to call the verify method for that, right?
username_1: No. You sent funds to a contract, so they belong to this contract. When it sends them back to you, it doesn't need to call `verify`.
username_0: Could you clarify when the verify method is called? How does Neo know if I can withdraw funds on behalf of 'my_contract'?
username_1: In this case, you need to call `verify`. If it's the case that you call the `refund` method of the contract, and then the contract calls the `transfer` method of the NEO contract, then the `verify` method does not need to be called.
username_0: What do you mean by this? Do I need to manually invoke the verify method? Sorry if the questions are stupid, I'm confused. What I expected was: if I do `send neo 'my_contract' 'my_wallet' 'amount'`, I was expecting that the NEO contract would invoke the `verify` method on `my_contract`. Is this correct?
username_1: Yes.
username_0: Ok. So, maybe there is a bug. I'll do some testing on the testnet to see if the problem is there. I'll leave this open while I investigate.
username_0: I'll close this. If it happens to be a bug, I'll open another issue. Thanks. Status: Issue closed
username_2: You need to attach a witness for a contract; that is, you add an additional signer to your transaction that has the contract's hash (and appropriate scope), and then for this signer you add a witness with a zero-length verification script (this is where `verify` will actually be called automatically by the system) and whatever your contract needs (or not) in the invocation script.
username_0: Just tested it using neo-cli and the testnet, and everything is working as expected. Something broke elsewhere. Thanks.