Dataset columns:
repo_name: string (4 to 136 chars)
issue_id: string (5 to 10 chars)
text: string (37 to 4.84M chars)
rossfuhrman/_why_the_lucky_markov
566081492
Title: Array the Caterpillar, with a shot of Captain Gravity. The Dir::[] method will be given something back. Question: username_0: Toot: Array the Caterpillar, with a shot of Captain Gravity. The Dir::[] method will be given something back. One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots
JimmyLv/reading
275062713
Title: Productivity Hack: Read One Chapter of a Book to Get 90% of the Value Question: username_0: ## Productivity Hack: Read One Chapter of a Book to Get 90% of the Value
Here's a fairly reliable productivity hack if you've picked up the latest self-help, management theory or sweeping sociological interpretation of our times: …

November 18, 2017 at 04:09PM
via Instapaper https://hunterwalk.com/2017/11/07/productivity-hack-read-one-chapter-of-a-book-to-get-90-of-the-value/
codeforamerica/syracuse_biz_portal
208225196
Title: Change CofU case sensitivity Question: username_0: Please change the input box for certificate of use to not be case sensitive. OR Since every CU starts with "CU20", we could hardcode that into the query so users only put in the last 2 digits of the year and the 4 digit permit number. This will match up with the paper they will get at the desk where Pam is only going to handwrite "14-0050" on the form. Answers: username_1: Great idea! @millzpaugh, can we do both? We can use the input [group addon style](https://getbootstrap.com/css/#forms) to prepend the CU: ![image](https://cloud.githubusercontent.com/assets/2693690/23042673/5c660828-f44e-11e6-948e-0ff67c38c6e3.png) But if they miss that and start with `CU` or `cu`, we could also accept both of those? Status: Issue closed
gocd/gocd
198428083
Title: OS Detection failing on the docker containers Question: username_0: ##### Issue Type - Bug Report ##### Summary The OS detection which was changed in #2608 seems to fail on our containers (see screenshot). Need to investigate why. The idea was to fall back to `System.getProperty("os.name")` in case all detection fails, but for some reason this is not happening. See https://github.com/gocd/gocd/pull/2608/files#diff-aeab6424c1e8bdd37d7f8d7773ff62c4R27 for the relevant code. Answers: username_0: Verified on build.gocd.io on version `17.1.0 (4494-abfa4b3ea2aa83593e4efd6e455d521c3fdc4ddc)` Status: Issue closed
techninja/hersheytextjs
791158424
Title: Kerning issues Question: username_0: First of all, thanks a lot for creating this great library and tool. The example currently has some kerning issues. For example, the result of "efghijlmnop" looks like the following. <img width="400" alt="grafik" src="https://user-images.githubusercontent.com/546852/105363872-cc282000-5bfc-11eb-9fa8-1a806d80fb27.png"> There is: - too much space between: `f` and `g`, `i` and `j`, and a few others - not enough space between: `m` and `n` In comparison the results from [Inkscape Hershey Text extension](https://gitlab.com/oskay/hershey-text): <img width="420" alt="grafik" src="https://user-images.githubusercontent.com/546852/105364012-f548b080-5bfc-11eb-97aa-9d15a3a3d222.png"> and [p5-hershey-js](https://github.com/LingDong-/p5-hershey-js) rendering on canvas: <img width="380" alt="grafik" src="https://user-images.githubusercontent.com/546852/105364272-4658a480-5bfd-11eb-8516-a7cce5b620fb.png"> Neither of the other tools has this issue. **What is causing this? Can it be fixed?** Thanks a lot in advance. Answers: username_0: I've tried the [Node.js test example](https://github.com/techninja/hersheytextjs/blob/master/hersheytest.js) using 'hershey_sans_1'. It doesn't have that kerning issue, though it seems to be loading an `svg` font instead of a `hershey` (JSON) font. username_0: I've investigated a bit more and found that `p5-hershey-js` has [implemented left and right boundaries](https://github.com/LingDong-/p5-hershey-js/blob/7b7ab82bd515efbbadee2bde7b18c058003a6b30/p5.hershey.js#L52) using a [font source file](https://github.com/LingDong-/p5-hershey-js/blob/master/p5.hershey.data.js) which seems a bit closer to the [original Hershey file](https://emergent.unpythonic.net/software/hershey) format. The `o` property of the [Hershey JSON file of this project](https://github.com/techninja/hersheytextjs/blob/master/hersheytext.min.json) contains only what seems to be the right boundary of the original file. I might be missing something here, but could the JSON file and browser renderer be lacking the left boundary of each glyph? Is the original conversion script available somewhere? Otherwise, it should be possible to use the data and parser from `p5-hershey-js`. On the other hand, [this article by Evil Mad Scientist](https://www.evilmadscientist.com/2019/hershey-text-v30/) explains the advantages of SVG fonts over the original Hershey format. They also maintain this [repository with SVG fonts with growing language support and font styles](https://gitlab.com/oskay/svg-fonts). Therefore, I would suggest adding browser support for SVG fonts to this library. username_0: As mentioned in #6 and #2, this is a browser-based SVG font renderer developed by me: https://github.com/username_0/svg-font-renderer It's based on this package and doesn't have the above-mentioned kerning issues.
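The original Hershey format referenced above encodes, for every glyph, a left and a right hand position as ASCII offsets from the letter 'R'. A minimal Python sketch of reading both boundaries (the helper name is hypothetical, and the fixed column offsets assume the classic single-line record layout):

```python
# One record of the original Hershey font file looks like
# "    8  9MWOMOV RUMUV ROQUQ": a 5-char glyph number, a 3-char
# vertex count, then coordinate pairs encoded as offsets from 'R'.
def glyph_bounds(record: str) -> tuple[int, int]:
    # The first coordinate pair after the 8-character header holds the
    # left and right hand positions.
    left = ord(record[8]) - ord("R")
    right = ord(record[9]) - ord("R")
    return left, right

left, right = glyph_bounds("    8  9MWOMOV RUMUV ROQUQ")
print(left, right)  # -5 5; the glyph's advance width is right - left
```

If an export keeps only the right-hand value, as the `o` property appears to, a renderer can no longer recover the per-glyph left offset, which would produce exactly the uneven spacing shown above.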
webdriverio/webdriverio
203070557
Title: Show more informative stack trace Question: username_0: ## The problem **test.js**
```js
let page = require('./page');

describe('test', () => {
    it('test', () => page());
});
```
**page.js**
```js
module.exports = () => {
    browser.url('https://e.mail.ru/login');
    browser.alertText();
};
```
## Actual
```
[chrome #0-0] Session ID: ec6a556d-0074-4b8d-9f28-537090c1fd7d
[chrome #0-0] Spec: /Users/a.abashkin/workspace/mail.ru/e.mail.ru/tests/1788/test.js
[chrome #0-0] Running: chrome
[chrome #0-0]
[chrome #0-0] test
[chrome #0-0]
[chrome #0-0] test
[chrome #0-0] 1) test
[chrome #0-0]
[chrome #0-0]
[chrome #0-0] 1 failing (6s)
[chrome #0-0]
[chrome #0-0] 1) test test:
[chrome #0-0] Timeout of 3000ms exceeded. Try to reduce the run time or increase your timeout for test specs (http://webdriver.io/guide/testrunner/timeouts.html); if returning a Promise, ensure it resolves.
[chrome #0-0] Error: Timeout of 3000ms exceeded. Try to reduce the run time or increase your timeout for test specs (http://webdriver.io/guide/testrunner/timeouts.html); if returning a Promise, ensure it resolves.
[chrome #0-0] at Timeout.<anonymous> (/Users/a.abashkin/workspace/mail.ru/e.mail.ru/tests/1788/node_modules/mocha/lib/runnable.js:232:19)
[chrome #0-0] at ontimeout (timers.js:365:14)
[chrome #0-0] at tryOnTimeout (timers.js:237:5)
[chrome #0-0] at Timer.listOnTimeout (timers.js:207:5)
[chrome #0-0]
```
## Expected
```
[chrome #0-0] Session ID: ec6a556d-0074-4b8d-9f28-537090c1fd7d
[chrome #0-0] Spec: /Users/a.abashkin/workspace/mail.ru/e.mail.ru/tests/1788/test.js
[chrome #0-0] Running: chrome
[chrome #0-0]
[chrome #0-0] test
[chrome #0-0]
[chrome #0-0] test
[chrome #0-0] 1) test
[chrome #0-0]
[chrome #0-0]
[chrome #0-0] 1 failing (6s)
[Truncated]
```
```
[chrome #0-0] at Timeout.<anonymous> (/Users/a.abashkin/workspace/mail.ru/e.mail.ru/tests/1788/node_modules/mocha/lib/runnable.js:232:19)
[chrome #0-0] at ontimeout (timers.js:365:14)
[chrome #0-0] at tryOnTimeout (timers.js:237:5)
[chrome #0-0] at Timer.listOnTimeout (timers.js:207:5)
```
## Environment * WebdriverIO version: any * Node.js version: 6-7 ## Code To Reproduce Issue [__test__.zip](https://github.com/webdriverio/webdriverio/files/729282/__test__.zip) @username_1, I don't know how it can be fixed, maybe you know :) Status: Issue closed Answers: username_1: This issue was moved to webdriverio/wdio-sync#45
Suor/django-cacheops
71484604
Title: Better memory limit support Question: username_0: For now cacheops offers [2 imperfect strategies](https://github.com/username_0/django-cacheops#using-memory-limit) to handle that. They both have flaws. I created this issue to track the topic. Answers: username_0: Alternative strategies available now: 1. Switch off `maxmemory`. Use an external periodic job to do custom cleanup when memory usage exceeds the limit. Cons: clunky, lag before eviction can cause arbitrary memory use. 2. Use [keyspace notifications](http://redis.io/topics/notifications) and an external daemon to subscribe and manage the cache structure. Cons: clunky, can miss events upon disconnect, is async so eviction could delete more than needed. 3. Store the set of conj keys in a cache key and check integrity on cache fetch. Cons: slower fetch, substantial code complication. username_0: The ideal solution would be a custom eviction strategy, probably Lua-based - https://github.com/antirez/redis/pull/2319. Another good solution could be managing the cache structure with a Lua script subscribed to keyspace notifications - https://github.com/antirez/redis/issues/2540. username_1: @username_0, what are the chances that you can provide a script (or guidance on what the script needs to do) for option 1? We need to put something in place until cacheops supports a solid solution natively (hopefully option 2 or 3). username_0: I won't provide a script, but I can elaborate on the strategy: - use the `INFO MEMORY` command to find out if usage is above the limit, - select some keys with `RANDOMKEY`, choose `conj:*` keys from them, - for each conjunction key, select its members and delete those keys together with the conjunction key itself:
```python
keys = redis_client.smembers(conj_key)
redis_client.delete(*([conj_key] + keys))
```
(the last step is better run in Lua for atomicity) username_0: The alternative, let's call it strategy 1a, is probably better: - use `CACHEOPS_LRU = True` and `maxmemory-policy volatile-ttl` (the second strategy from the README) - periodically SCAN for conjunction keys and remove them if they are orphaned:
```python
for conj_key in redis_client.scan_iter(match='conj:*'):
    keys = redis_client.smembers(conj_key)
    exists = redis_client.execute_command('EXISTS', *keys)
    if exists == 0:
        redis_client.delete(conj_key)
```
(the innards of the loop should be done with Lua for atomicity) username_2: Hi all. I'm trying to understand what it means to use the eviction policy recommended in the README, which is CACHEOPS_LRU = True and maxmemory-policy volatile-lru. If I run my cache like this, do I lose the ability to expire cached views based on time? Is the only way to remove an item from the cache to let it get 'pushed out' by newer items? What I want to do is have my view cache expire after 24 hours like normal, BUT if I hit the max memory limit, the oldest items are pushed out to make room for the new ones. username_0: No, you don't lose the ability to expire by time. The only downside is that invalidation structures can clutter your redis db over time; cache keys are still evicted by timeout. username_2: Oh ok. So if I understand this right, two keys are created for each item that is cached: one is the actual content and the other is the invalidation instructions. With the method I mentioned, the content keys will be removed, but the invalidation keys will remain? And if I run the conj_key function as a management command every so often, those invalidation keys will be cleared out?
username_0: Several conj_keys can refer to a single cache key; here is [the description of how it works](http://hackflow.com/blog/2014/03/09/on-orm-cache-invalidation/). When you use `CACHEOPS_LRU = True`, conj_keys are not evicted by time, so they may clutter up, referencing non-existing cache keys. There is no such thing as a conj_key function. You basically need to go through the conj_keys, check whether they refer only to non-existing cache keys, and remove them if they do; I wrote the draft above. It could be improved, though: remove non-existing cache keys from the conj key instead of checking all of them and only removing the whole set:
```python
for conj_key in r.scan_iter(match='conj:*'):
    for cache_key in r.smembers(conj_key):
        # These two lines should be done atomically
        if not r.exists(cache_key):
            r.srem(conj_key, cache_key)
```
Redis automatically removes keys for empty sets, so that's it. username_2: Thank you for the quick reply. I'm trying out this strategy and will report back. username_3: I know it would take a large amount of effort to do right, but I think it would be beneficial if we could configure multiple cache backends. That would also make this memory limit issue easily solvable by running multiple redis instances (which many people do already, since redis is single-cpu). username_0: This has nothing to do with other backends. BTW cacheops doesn't use other backends because it uses sets and set operations in redis, which other backends just don't provide. username_3: You misunderstood. I'm talking about multiple redis servers, so you can have memory limits through redis. username_0: Multiple redises have nothing to do with the memory limit. username_3: I don't see why not? You could have multiple redis servers and specify the `maxmemory` per server separately. For example, assuming you have your sessions in redis, you want to be absolutely certain they will never reach an out-of-memory scenario. Whereas many cache layers don't have any real priority, so you can set that server to `allkeys-lru` and omit the need for a `setex` or `expire`. username_0: All this doesn't matter from the cacheops implementation point of view; multiple-server support and memory limits are completely independent issues. There is no reason to talk about multiple servers or backends here.
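The thread notes twice that the cleanup loop's innards should run in Lua for atomicity; here is a minimal sketch of what that could look like with redis-py (the script body and names are illustrative, not part of cacheops):

```python
import redis

r = redis.Redis()

# For one conj set, atomically drop members whose cache key no longer
# exists; Redis deletes the set itself once it becomes empty.
prune_conj = r.register_script("""
for _, cache_key in ipairs(redis.call('SMEMBERS', KEYS[1])) do
    if redis.call('EXISTS', cache_key) == 0 then
        redis.call('SREM', KEYS[1], cache_key)
    end
end
""")

for conj_key in r.scan_iter(match='conj:*'):
    prune_conj(keys=[conj_key])
```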
Anamico/node-red-contrib-alarm
963350040
Title: Proposal 1 - extend the trigger dropdown Question: username_0: Originally from #24, submitted by @username_1 Currently the sensor node allows two trigger types ("_all messages_" and "_msg.payload=true_"). It would be nice if extra options could be added by using TypedInputs. For example: ![image](https://user-images.githubusercontent.com/14224149/128590464-5c10754a-e467-491c-a1fa-54ce86d0989e.png) Then it becomes possible to trigger an alarm when `msg.payload=1` or `msg.payload="OPEN"` or ... That way my flow would become much cleaner, since I could avoid a lot of Change-nodes (which I also see in the example flows on your readme page): ![image](https://user-images.githubusercontent.com/14224149/128590553-12164ea6-4443-4909-bec0-fb5e6b97684d.png) Of course the existing options should be migrated automatically, to make sure the existing flows are not impacted. BTW, it is not clear to me how the current filtering works, because at first sight the `triggerType` seems only to be used in the sensor.html file, but I might have overlooked something. Would be nice if you could explain it! Answers: username_0: Great idea, I like this. It certainly would make things easier. Can anyone else comment on common use cases? I would think maybe even a "msg.[enter path here]" = "string value also compared as numeric" or something would meet the majority of cases, but what about more complex multi-facet matching? I.e. `msg.payload.value==200 && msg.tag==3`? Perhaps it's better to allow a small code snippet space like a basic function; then you could do any logic you like? username_1: Indeed, that would solve most of the cases. Hadn't even thought about that... Although that indeed might be sufficient for lots of use cases, I see some advantages in using the TypedInputs: + The TypedInput field is more standard Node-RED style. If I need to check for a boolean = true, then every Node-RED user knows that he needs to select a boolean TypedInput from the dropdown. Although your expression syntax is VERY easy, you have to look it up on your readme page, while the TypedInputs are more self-explanatory. The expression seems (to me) a bit like cheating :-) + The TypedInputs offer some extra types of triggers. You could introduce dynamic thresholds when using the "flow" or "global" types. A simple hypothetical example: + Normally you want to raise an alarm when there are more than 30 people in a bar, because that is required for fire insurance. + During those dark COVID days you want to raise an alarm when there are more than 10 people in the same bar, to be able to ensure social distancing. So you need some dynamic threshold, which should be easy to change live (without requiring a deploy)! You could again work around this by adding some logic in your flow to determine whether this is a problem or not, and then pass that boolean to your Sensor node. But the whole idea of this feature request is to simplify our flows, by getting rid of lots of Change/Function/... nodes. By using a numeric TypedInput you can simply write your threshold into context memory via some user interface (e.g. the Node-RED dashboard), e.g. `flow.maximumVisitors=10`, and read that threshold directly from your node .... Note that processing the values from multiple TypedInputs can be done using [evaluateNodeProperty](https://nodered.org/docs/api/modules/v/0.20.0/@node-red_util_util.html#.evaluateNodeProperty). I can always create a proposal via a PR in a couple of weeks if you like! username_0: OK, had a think about this.
Maybe a very easily understandable way to do it would be to replicate the switch node functionality? <img width="553" alt="Screen Shot 2021-08-18 at 8 49 57 am" src="https://user-images.githubusercontent.com/26127660/129810920-f4dceb4d-0f88-4917-853d-4181c2aef25a.png"> Because technically we need to come up with a boolean value (To trigger or not to trigger? That is the question). This switch node sets the type of expression at the top and then allows the user to set multiple and/or functions to trigger. It would just be for a single "output". What do you think? username_0: [switch.html](https://github.com/node-red/node-red/blob/master/packages/node_modules/%40node-red/nodes/core/function/10-switch.html) [switch.js](https://github.com/node-red/node-red/blob/master/packages/node_modules/%40node-red/nodes/core/function/10-switch.js) username_1: That is indeed something very useful, again to avoid Switch nodes. If e.g. the trigger value arrives in the input message as `msg.trigger_value`, then you could simply specify that msg property name. I had that already in mind as 'proposal 4', but I didn't want to look greedy at the time being ;-). username_0: Thanks @username_1, you are probably correct that it is overkill to go for a complete duplication of the switch node functions AND try to keep it current. I was actually not thinking to that degree. I was more thinking we use the same sort of approach, and just replicate one or two of the most useful ones that will fill our immediate need; it provides a launchpad and precedent for anyone to chip in if they would like to add (or just request) a function they think should also be ported to meet a use case they have (with suitable justification of course). I also would not care if it was not maintained in step with the switch node, as I don't think that will really help us. So we start with about the same (or less) effort to port a small portion of the switch code that is essentially already written for us. Then make a note that people can request or contribute more functions if they want. What do you think? It provides a quick solution now and more flexibility if needed in future. Oh, so if I understand about msg.trigger_value, then you could just put that in the field at the top and this mod idea would also cover it, right? Or did I misunderstand? username_1: To keep it simple and yet powerful, I'm not sure anymore if we should partly mimic the Switch node. It will be a lot of development, make your node much more difficult to maintain, and we will never be able to compete with the Switch node (especially if that is extended in the near future with AND/OR logic). I am now wondering if your initial idea of using a simple expression would be a better solution perhaps? Suppose we would e.g. use this [expression](https://github.com/joewalnes/filtrex#10-second-tutorial) parser library. I don't think it is maintained anymore, but it might be sufficient for our needs. I haven't tested it yet, but I assume we could use it like this: 1. The user specifies an expression on your config screen, for example: *"msg.payload == 1 and msg.topic == 'reed_relay'"*. 2. We compile the expression and pass the input message to it:
```
// Compile expression from the config screen to an executable function
var myfilter = compileExpression(expression);
// Execute function (with input message as parameter)
myfilter({msg: msg});
```
Does this make sense to you?
username_0: I had another look at the switch node and think you are right: on my second look at it, there is a lot of capability, but probably overkill. Something a simple expression can resolve. So did you want to have a shot at doing the node the way you think, with these considerations in mind? I think we are on the same page: we need to give a basic way to get these simple cases handled, and an expression option allows a lot of flexibility and it's still simpler to code. As long as it covers those cases and works well, I'd be happy to merge it in and publish an update. Thanks again for helping work this out. We certainly get a far more powerful and better solution when more people collaborate on it. username_1: Sure, I can have a look at it. The only thing that bothers me is that the library is probably not maintained anymore. On the other hand, we don't need lots of features from it ... If you know a more recent JavaScript logical expression parser library, please let me know! And I haven't checked if the library is available on npm; otherwise we have to copy its js file to your repo... My time is up for today... username_1: Seems there is an [NPM module](https://www.npmjs.com/package/filtrex) available with the same name and functionality, but pointing to another GitHub repository. This package is well maintained, which is much better in case we need assistance. So I am going to try that one as soon as I have time ... username_1: Currently your node has a dropdown: ![image](https://user-images.githubusercontent.com/14224149/130405717-772c77cb-0df2-46ee-901d-08912791217e.png) Am I correct that this dropdown simply needs to be replaced by an input field, where the logical trigger expression can be entered? If so, I need to write a short code snippet for existing nodes, just to make sure we don't impact existing flows. + Existing nodes with option `msg.payload.open == true` will get an identical expression in the new expression editor field. + Existing nodes with option `any message` will get an empty expression editor field? And we consider an empty field as 'no condition', which means always true. Is that ok for you? username_2: Sure. Sounds great! username_1: Hmm, found a showstopper... I would like to use nested input message properties: ![image](https://user-images.githubusercontent.com/14224149/130526846-f6fd70d6-8c6b-4243-9d23-008188e12962.png) However, it looks like the Filtrex library doesn't support nested object properties (since it uses hasOwnProperty): ![image](https://user-images.githubusercontent.com/14224149/130526716-8d6df7ec-f2fd-4d80-9e4d-3720e709ebcb.png) I could work around this, but I think it is better if I create a pull request for the Filtrex library to implement this feature ... username_1: FYI: I have registered two issues for the filtrex library: + One for accessing nested message properties (see [here](https://github.com/m93a/filtrex/issues/45)) + One for using boolean values (see [here](https://github.com/m93a/filtrex/issues/46)) Hopefully the author can help us soon with this, so I can continue with the implementation of expressions for alarms. He seems to be a very active contributor, so fingers crossed... username_1: I have implemented a small endpoint that allows us to check the syntax of the expression (on the server side). That syntax check is run for every character you type, to keep the config screen responsive.
The result of the last syntax check is also stored in the node, so the flow editor can draw a red triangle if necessary (based on the last syntax check): ![alarm_trigger_condition](https://user-images.githubusercontent.com/14224149/130742843-eb2692ce-85b2-4c85-a166-200009b6da4e.gif) BTW, it is not clear to me how the current filtering works, because at first sight the triggerType seems only to be used in the sensor.html file, but I might have overlooked something. Would be nice if you could explain it! username_1: Hi @username_0, Sorry for the delay! But I don't think it is useful for me to create a pull request as long as the features haven't been added to the filtrex expression library... I see now that somebody else posted the same feature request on the same day, solving it in another way: see [issue](https://github.com/m93a/filtrex/issues/44). Hopefully the author can find some time to help us, because I'm a bit stuck now with this... username_1: @username_0, We got [feedback](https://github.com/m93a/filtrex/issues/44#issuecomment-913850364) from the Filtrex author. The poor devil needs to do his final exams in about two weeks... I will contact you again as soon as the requested Filtrex features are available. Bart username_1: Hi @username_0, I finally managed to create a [pull request](https://github.com/Anamico/node-red-contrib-alarm/pull/32). username_1: Hey Andrew, Sorry for the late reply. It is a busy week... Will get back to you this weekend! Bart
department-of-veterans-affairs/va.gov-team
887082303
Title: EDU Forms /1990e Update: Dependent & School Selection Discovery Question: username_0: Requirements for form updates are understood and documented * Form update is discussed internally * Questions are captured * Questions for EDU are shared, answered * Any outstanding questions are resolved Answers: username_1: https://dsva.slack.com/archives/C01K37HRUAH/p1620741728286000 username_1: Final answer: "Child" No definition or supporting content. Status: Issue closed
postmanlabs/postman-app-support
49242118
Title: Wrong encoding in the Postman app Question: username_0: Hi guys! I have an issue with wrong encoding in the Postman app. This is how my request looks in the extension: ![2014-11-18_16-59-22](https://cloud.githubusercontent.com/assets/7139091/5089550/d7f5102e-6f44-11e4-9a69-c38cea27724b.png) And this is how it looks in the app: ![2014-11-18_16-59-13](https://cloud.githubusercontent.com/assets/7139091/5089566/f3d391b2-6f44-11e4-96c2-f1be7d0eb4e8.png) Answers: username_1: I have the same issue, and it's breaking my ability to test a third-party API using Postman. If I write a quick python app that uses OAuth1 and I don't percent-encode the realm, it works just fine. If I try it via Postman, which percent-encodes that realm, the server returns a 401. Can we make this encoding a configurable option to account for servers that may not be spec compliant?
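A sketch of the quick Python check the second commenter describes, using requests-oauthlib (the key, secret, and URLs are placeholders, and this assumes the `realm` keyword is forwarded to the underlying oauthlib client); printing the signed Authorization header shows how the realm was encoded:

```python
import requests
from requests_oauthlib import OAuth1

# Sign a request with an explicit realm, then inspect the header the
# server actually received to compare against Postman's behavior.
auth = OAuth1(
    "CONSUMER_KEY",
    client_secret="CONSUMER_SECRET",
    realm="https://api.example.com/",
)
resp = requests.get("https://api.example.com/resource", auth=auth)
print(resp.status_code)
print(resp.request.headers["Authorization"])
```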
department-of-veterans-affairs/va.gov-team
1095385801
Title: KPI Dashboards: Training and online documentation Question: username_0: ## Problem Statement *In a couple of sentences, describe the Who, What, Why, and Where of the challenge / pain point you seek to address.* *Follow your problem description up with a "How might we... _______" statement re-framing that challenge as an opportunity. Don't hint too much at what the solution might be, you should have enough of a focal point here to guide your ideas, but plenty of freedom to think laterally and innovatively as you experiment and prototype later.* ## Hypothesis or Bet *How will this initiative impact the quality of VFS or VSP teams' work?* *How will this initiative be easy for VFS or VSP teams? Or how will it be easier than what they did before?* ## We will know we're done when... ("Definition of Done") *What requirements does this project need to meet for you to finish this initiative?* ## Known Blockers/Dependencies *List any blockers or dependencies for this work to be completed* ## Projected Launch Date * When do you expect to be completed rolling out this initiative* ## Launch Checklist ### Guidance (delete before posting) _This checklist is intended to be used to help answer, "is my VSP initiative ready for launch?". All of the items in this checklist should be completed, with artifacts linked---or have a brief explanation of why they've been skipped---before launching a given VSP initiative. All links or explanations can be provided in **Required Artifacts** sections. The items that can be skipped are marked as such._ _Keep in mind the distinction between **Product** and **Initiative** --- each Product needs specific supporting documentation, but Initiatives to improve existing Products should reuse existing documentation for that Product. [VSP Product Terminology](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/teams/vsp/product-management/product-terminology.md) for details._ ### Is this service / tool / feature... ### ... tested? - [ ] Usability test (_TODO: link_) has been performed, to validate that new changes enable users to do what was intended and that these changes don't worsen quality elsewhere. If usability test isn't relevant for this change, document the reason for skipping it. - [ ] ... and issues discovered in usability testing have been addressed. * _Note on skipping: metrics that show the impact of before/after can be a substitute for usability testing._ - [ ] End-to-end [manual QA](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/quality-assurance/README.md) or [UAT](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/research/planning/what-is-uat.md) is complete, to validate there are no high-severity issues before launching - [ ] _(if applicable)_ New functionality has thorough, automated tests running in CI/CD ### ... documented? 
- [ ] New documentation is written pursuant to our [documentation style guide](https://vfs.atlassian.net/wiki/spaces/AP/pages/622264362/Style+guide) - [ ] Product is included in the [List of VSP Products](https://docs.google.com/spreadsheets/d/1Fn2lD419WE3sTZJtN2Ensrjqaz0jH3WvLaBtn812Wjo/edit#gid=0) * _List the existing product that this initiative fits within, or add a new product to this list._ - [ ] Internal-facing: there's a [Product Outline](https://vfs.atlassian.net/wiki/spaces/PMCP/pages/1924628490/Product+Outline+Template) - [ ] External-facing: a [User Guide on Platform Website](https://vfs.atlassian.net/wiki/spaces/AP/pages/1477017691/Platform+website+guidelines) exists for this product/feature tool - [ ] _(if applicable)_ Post to [#vsp-service-design](https://dsva.slack.com/channels/vsp-service-design) for external communication about this change (e.g. VSP Newsletter, customer-facing meetings) ### ... measurable? - [ ] _(if applicable)_ This change has clearly-defined success metrics, with instrumentation of those analytics where possible, or a reason documented for skipping it. * For help, see: [Analytics team](https://depo-platform-documentation.scrollhelp.site/analytics-monitoring/Analytics-customer-support-guide.1586823275.html) - [ ] This change has an accompanying [VSP Initiative Release Plan](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new/choose). ### When you're ready to launch... - [ ] Conduct a [go/no-go](https://vfs.atlassian.net/wiki/spaces/AP/pages/1670938648/Platform+Crew+Office+Hours#Go%2FNo-Go) when you're almost ready to launch. ## Required Artifacts ### Documentation * **`PRODUCT_NAME`**: _directory name used for your product documentation_ * **Product Outline**: _link to Product Outline_ * **User Guide**: _link to User Guide_ ### Testing * **Usability test**: _link to GitHub issue, or provide reason for skipping_ * **Manual QA**: _link to GitHub issue or documented results_ * **Automated tests**: _link to tests, or "N/A"_ ### Measurement * **Success metrics**: _link to where success metrics are measured, or where they're defined (Product Outline is OK), or provide reason for skipping_ * **Release plan**: _link to Release Plan ticket_ ## TODOs - [ ] Convert this issue to an epic - [ ] Add your team's label to this epic<issue_closed> Status: Issue closed
grafana/agent
708293398
Title: Agent internal metrics not sent to remote_write on Windows Question: username_0: I run the agent v0.6.1 on Windows. `.\agent-windows-amd64.exe '-config.file' agent.yml` Here's my config file:
```
server:
  http_listen_port: 12345

prometheus:
  wal_directory: wal

integrations:
  agent:
    enabled: true
  prometheus_remote_write:
    - url: https://prometheus-us-central1.grafana.net/api/prom/push
      basic_auth:
        username: ****
        password: **************
```
I can see the agent metrics at `localhost:12345/integrations/agent/metrics`. In my Grafana Cloud metrics, I only see the following series from the `integrations/agent` job:
```
curl -u $login -s \
  https://prometheus-us-central1.grafana.net/api/prom/api/v1/query \
  --data-urlencode 'query={job="integrations/agent"}' \
  | jq '.data.result[].metric'
{
  "__name__": "scrape_duration_seconds",
  "agent_hostname": "alex-windows",
  "instance": "alex-windows:12345",
  "job": "integrations/agent"
}
{
  "__name__": "scrape_samples_post_metric_relabeling",
  "agent_hostname": "alex-windows",
  "instance": "alex-windows:12345",
  "job": "integrations/agent"
}
{
  "__name__": "scrape_samples_scraped",
  "agent_hostname": "alex-windows",
  "instance": "alex-windows:12345",
  "job": "integrations/agent"
}
{
  "__name__": "scrape_series_added",
  "agent_hostname": "alex-windows",
  "instance": "alex-windows:12345",
  "job": "integrations/agent"
}
{
  "__name__": "up",
  "agent_hostname": "alex-windows",
  "instance": "alex-windows:12345",
  "job": "integrations/agent"
}
```
All the values are 0, including `up{job="integrations/agent"}`. Answers: username_1: Hi, thanks for reporting. Can you share error-level messages that you see in the Agent logs? I'm currently on vacation and back on Monday, but I'll look into this as soon as I'm back. username_0: I don't see any error messages in the logs.
Here's what the startup looks like:
```
PS C:\Users\Administrator\Downloads\agent-windows-amd64.exe> .\agent-windows-amd64.exe '-config.file' agent.yml
level=info ts=2020-09-24T15:21:21.477073Z caller=server.go:194 http=[::]:12345 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2020-09-24T15:21:21.5014613Z caller=wal.go:172 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="replaying WAL, this may take a while" dir=wal\473f173844a16f42857f2d57314d7e7a\wal
level=info ts=2020-09-24T15:21:21.5024502Z caller=wal.go:219 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL segment loaded" segment=0 maxSegment=0
ts=2020-09-24T15:21:21.5081254Z caller=dedupe.go:112 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a component=remote level=info remote_name=473f17-50454b url=https://prometheus-us-central1.grafana.net/api/prom/push msg="Starting WAL watcher" queue=473f17-50454b
ts=2020-09-24T15:21:21.508312Z caller=dedupe.go:112 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a component=remote level=info remote_name=473f17-50454b url=https://prometheus-us-central1.grafana.net/api/prom/push msg="Starting scraped metadata watcher"
ts=2020-09-24T15:21:21.508312Z caller=dedupe.go:112 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a component=remote level=info remote_name=473f17-50454b url=https://prometheus-us-central1.grafana.net/api/prom/push msg="Replaying WAL" queue=473f17-50454b
ts=2020-09-24T15:21:37.9007862Z caller=dedupe.go:112 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a component=remote level=info remote_name=473f17-50454b url=https://prometheus-us-central1.grafana.net/api/prom/push msg="Done replaying WAL" duration=16.3925765s
level=info ts=2020-09-24T15:22:21.5264975Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=994.2µs
level=info ts=2020-09-24T15:23:21.5412509Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=992.5µs
level=info ts=2020-09-24T15:24:21.5521124Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=999.2µs
level=info ts=2020-09-24T15:25:21.556792Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=978.3µs
level=info ts=2020-09-24T15:25:21.5597057Z caller=checkpoint.go:96 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="Creating checkpoint" from_segment=0 to_segment=1 mint=1600961077000
level=info ts=2020-09-24T15:25:21.6143898Z caller=wal.go:443 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL checkpoint complete" first=0 last=1 duration=58.5761ms
level=info ts=2020-09-24T15:26:21.6202561Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=976.4µs
level=info ts=2020-09-24T15:27:21.6260849Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=956.8µs
level=info ts=2020-09-24T15:27:21.6290299Z caller=checkpoint.go:96 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="Creating checkpoint" from_segment=2 to_segment=3 mint=1600961197000
level=info ts=2020-09-24T15:27:21.6876144Z caller=wal.go:443 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL checkpoint complete" first=2 last=3 duration=62.4863ms
level=info ts=2020-09-24T15:28:21.6895633Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=982.7µs
level=info ts=2020-09-24T15:29:21.6963957Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=973.7µs
level=info ts=2020-09-24T15:29:21.6993157Z caller=checkpoint.go:96 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="Creating checkpoint" from_segment=4 to_segment=5 mint=1600961317000
level=info ts=2020-09-24T15:29:21.7500998Z caller=wal.go:443 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL checkpoint complete" first=4 last=5 duration=54.6778ms
level=info ts=2020-09-24T15:30:21.7540007Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=979.6µs
level=info ts=2020-09-24T15:31:21.7598358Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=978.2µs
level=info ts=2020-09-24T15:31:21.7617856Z caller=checkpoint.go:96 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="Creating checkpoint" from_segment=6 to_segment=7 mint=1600961437000
level=info ts=2020-09-24T15:31:21.7940043Z caller=wal.go:443 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL checkpoint complete" first=6 last=7 duration=35.1467ms
level=info ts=2020-09-24T15:32:21.8008488Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=983.2µs
level=info ts=2020-09-24T15:33:21.8066763Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=1.9515ms
level=info ts=2020-09-24T15:33:21.8095998Z caller=checkpoint.go:96 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="Creating checkpoint" from_segment=8 to_segment=9 mint=1600961557000
level=info ts=2020-09-24T15:33:21.8652733Z caller=wal.go:443 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL checkpoint complete" first=8 last=9 duration=60.5486ms
level=info ts=2020-09-24T15:34:21.8769755Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=1.9453ms
level=info ts=2020-09-24T15:35:21.8818613Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=965.3µs
level=info ts=2020-09-24T15:35:21.8847746Z caller=checkpoint.go:96 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="Creating checkpoint" from_segment=10 to_segment=11 mint=1600961677000
level=info ts=2020-09-24T15:35:21.9238485Z caller=wal.go:443 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL checkpoint complete" first=10 last=11 duration=42.9525ms
level=info ts=2020-09-24T15:36:21.9378616Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=977.8µs
level=info ts=2020-09-24T15:37:21.9441859Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=979.1µs
level=info ts=2020-09-24T15:37:21.946125Z caller=checkpoint.go:96 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="Creating checkpoint" from_segment=12 to_segment=13 mint=1600961797000
level=info ts=2020-09-24T15:37:22.0076447Z caller=wal.go:443 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="WAL checkpoint complete" first=12 last=13 duration=64.4379ms
level=info ts=2020-09-24T15:38:22.0104768Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=971.1µs
level=info ts=2020-09-24T15:39:22.0161382Z caller=wal.go:378 agent=prometheus instance=473f173844a16f42857f2d57314d7e7a msg="series GC completed" duration=956µs
```
There are no errors in the Application Event Viewer either. As a workaround, I added a prometheus instance to scrape /integrations/agent/metrics.
```
server:
  http_listen_port: 12345

prometheus:
  wal_directory: wal
  configs:
    - name: prom
      scrape_configs:
        - job_name: agent
[Truncated]
prometheus_tsdb_wal_segment_current 2
prometheus_tsdb_wal_truncations_failed_total 2
prometheus_tsdb_wal_truncations_total 2
prometheus_tsdb_wal_writes_failed_total 2
prometheus_wal_watcher_current_segment 2
prometheus_wal_watcher_record_decode_failures_total 2
prometheus_wal_watcher_records_read_total 4
prometheus_wal_watcher_samples_sent_pre_tailing_total 2
promhttp_metric_handler_requests_in_flight 1
promhttp_metric_handler_requests_total 3
promtail_files_active_total 1
promtail_syslog_target_entries_total 1
promtail_syslog_target_parsing_errors_total 1
promtail_targets_active_total 1
scrape_duration_seconds 1
scrape_samples_post_metric_relabeling 1
scrape_samples_scraped 1
scrape_series_added 1
up 1
```
username_0: I just found out about the targets API. https://github.com/grafana/agent/blob/master/docs/api.md#list-current-scrape-targets I found the problem in the endpoint: it is URL-encoded for the agent integration, but not for the prometheus instance.
```
{
  "status": "success",
  "data": [
    {
      "instance": "473f173844a16f42857f2d57314d7e7a",
      "target_group": "integrations/agent",
      "endpoint": "http://127.0.0.1:12345/%5Cintegrations%5Cagent%5Cmetrics",
      "state": "down",
      "labels": {
        "agent_hostname": "alex-windows",
        "instance": "127.0.0.1:12345",
        "job": "integrations/agent"
      },
      "last_scrape": "2020-09-24T17:21:37.8660875Z",
      "scrape_duration_ms": 0,
      "scrape_error": "server returned HTTP status 404 Not Found"
    },
    {
      "instance": "4b7f54b837cd5e1e6ad121c2d981c6f4",
      "target_group": "agent",
      "endpoint": "http://localhost:12345/integrations/agent/metrics",
      "state": "up",
      "labels": {
        "agent_hostname": "alex-windows",
        "instance": "localhost:12345",
        "job": "agent"
      },
      "last_scrape": "2020-09-24T17:21:35.8206868Z",
      "scrape_duration_ms": 6,
      "scrape_error": ""
    }
  ]
}
```
username_1: Ha, looks like that's `http://127.0.0.1:12345/\integrations\agent\metrics`. The Agent is incorrectly using Go's `filepath` to generate the URL there, which uses backslashes on Windows. Should be an easy fix. Thanks again for reporting! Status: Issue closed
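The root cause named in the last comment is joining URL segments with the OS path package. The Agent itself is Go (`path` vs. `filepath`), but the same pitfall is easy to demonstrate in Python terms (the helper function is illustrative):

```python
import ntpath      # what os.path resolves to on Windows
import posixpath   # forward-slash joining, safe for URL paths

def metrics_endpoint(base: str, *parts: str) -> str:
    # URLs always use '/'; an OS-dependent join on Windows emits
    # backslashes, which then get percent-encoded into %5C.
    return base.rstrip("/") + "/" + posixpath.join(*parts)

print(ntpath.join("integrations", "agent", "metrics"))
# integrations\agent\metrics  -> /%5Cintegrations%5Cagent%5Cmetrics
print(metrics_endpoint("http://127.0.0.1:12345", "integrations", "agent", "metrics"))
# http://127.0.0.1:12345/integrations/agent/metrics
```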
PollubCafe/Flare-event-calendar
275875559
Title: Thymeleaf: generating the email template Question: username_0: Research and create a template for the email confirmation at registration, and for the participation confirmation sent once the organizer has decided on the date Answers: username_1: I'll take care of this username_2: Me too XD username_3: Velocity is better than Thymeleaf Status: Issue closed
umijs/umi
735988593
Title: With conventional routing and the antd pro layout enabled, there is no menu on the left Question: username_0: export default defineConfig({ nodeModulesTransform: { type: 'none', }, layout: { name: 'test' } }); Answers: username_1: I ran into this too; waiting for a fix. username_2: Is the @ant-design/pro-layout dependency not installed? username_3: Each entry in the routes config probably needs a name attribute Status: Issue closed username_5: Has this issue been resolved yet? username_6: How can this issue be solved?
scheb/2fa
1149150173
Title: Additional step in 2fa process Question: username_0: <!-------------------------------------------------------------- PLEASE CHECK THE TROUBLESHOOTING GUIDE FIRST https://symfony.com/bundles/SchebTwoFactorBundle/current/troubleshooting.html ---------------------------------------------------------------> **Bundle version**: 5.13.1 **Symfony version**: 4.4.35 **PHP version**: 7.4.24 **Using authenticators** (`enable_authenticator_manager: true`): NO **Description** <!-- Please describe what you're trying to do and where you're getting stuck. Which approaches did you try out so far? If you used the troubleshooting guide, how far did you reach and what did you discover? --> I am currently facing a project where I have to add a 'trusted device' feature to an existing 2FA implementation. I am considering migrating to your clean implementation for various reasons. As we don't persist the user's MFA method: **Is there any way of hooking into the process of being 'partially authenticated' (IS_AUTHENTICATED_2FA_IN_PROGRESS), with a page where the user has to select the MFA method BEFORE being redirected to the authentication form to enter the code?** Please raise my hopes, and thank you for your awesome work :) Answers: username_1: There is a way to define the provider to be used based on the user account [by implementing `PreferredProviderInterface`](https://symfony.com/bundles/SchebTwoFactorBundle/current/multi_authentication.html). But that would require that selection to be persisted in the user entity. There is also a way to choose the provider to be used on the 2fa form page, using the `preferProvider` query parameter. Check out the [demo app](https://github.com/username_1/2fa/tree/5.x/app) that is part of this repository. It includes a demo to switch the 2fa form between different 2fa providers. If you really need to have a dedicated page between login and 2fa form, then you'll likely need to: - implement and inject your own Symfony security extension to add a firewall listener - make sure that firewall listener triggers before 2fa-bundle's listener kicks in and forces a redirect to the 2fa form - have that firewall listener check for the "2fa in progress" state and respond with your page to select a 2fa method - when the user selects the 2fa provider, use `TwoFactorToken::preferTwoFactorProvider(string $preferredProvider)` on the security token to make the bundle use that provider username_0: Hey there, thank you for your prompt response. To get things done I decided to implement the 'trusted device' feature manually for now. But I will come back and check your proposals. In my initial test implementations I wasn't able to intercept the events before your 2fa-bundle kicked in. So maybe the only solution would be your suggested way of adding a 'security extension'. username_1: You have to go the security extension route if you want to inject something that is called after the security layer is initialized and before the 2fa access check is called. All of this happens in firewall listeners, and the only way to add a firewall listener is a security extension (or some hacky DIC manipulation, which I wouldn't recommend).
pedromsilvaalves/desempenho-esportivo
450083802
Title: Task 15 - Update the Time class to support the points system Question: username_0: ### Update the Time class to support and meet the needs of the plays' points system - Aggregate all the points of all the team's players and return the total. - Compute an average based on the team's total points and the number of players. - Create a report of players above and below the average. - Show the player with the most points on the team. Answers: username_0: Implemented in Pull Request #14. Status: Issue closed
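A minimal sketch of those four requirements (in Python for illustration; the attribute and method names are my own, not the project's, and `Time` is the project's Portuguese name for a team):

```python
from dataclasses import dataclass, field

@dataclass
class Jogador:                  # a player with accumulated points
    nome: str
    pontos: int = 0

@dataclass
class Time:                     # the team aggregating player points
    jogadores: list[Jogador] = field(default_factory=list)

    def total_de_pontos(self) -> int:
        return sum(j.pontos for j in self.jogadores)

    def media_de_pontos(self) -> float:
        if not self.jogadores:
            return 0.0
        return self.total_de_pontos() / len(self.jogadores)

    def relatorio(self) -> dict[str, list[Jogador]]:
        media = self.media_de_pontos()
        return {
            "acima_da_media": [j for j in self.jogadores if j.pontos > media],
            "abaixo_da_media": [j for j in self.jogadores if j.pontos < media],
        }

    def maior_pontuador(self) -> Jogador:
        return max(self.jogadores, key=lambda j: j.pontos)
```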
jlippold/tweakCompatible
349579135
Title: `CrackTool3 (iOS 11)` not working on iOS 11.3.1 Question: username_0:
```
{
  "packageId": "com.julioverne.cracktool3",
  "action": "notworking",
  "userInfo": {
    "arch32": false,
    "packageId": "com.julioverne.cracktool3",
    "deviceId": "iPhone10,6",
    "url": "http://cydia.saurik.com/package/com.julioverne.cracktool3/",
    "iOSVersion": "11.3.1",
    "packageVersionIndexed": false,
    "packageName": "CrackTool3 (iOS 11)",
    "category": "Utilities",
    "repository": "julioverne",
    "name": "CrackTool3 (iOS 11)",
    "installed": "3.0~beta8",
    "packageIndexed": true,
    "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
    "id": "com.julioverne.cracktool3",
    "commercial": false,
    "packageInstalled": true,
    "tweakCompatVersion": "0.1.0",
    "shortDescription": "Crack Tweaks on one click. arm64",
    "latest": "3.0~beta8",
    "author": "julioverne",
    "packageStatus": "Unknown"
  },
  "base64": "<KEY>",
  "chosenStatus": "not working",
  "notes": ""
}
```
opencb/hpg-bigdata
78515512
Title: Fix MAC incompatibilities and compilation errors Question: username_0: There are some macOS incompatibilities due to the shared library naming policy. On macOS the shared objects use the extension `*.dylib` instead of `*.so`. Answers: username_1: By running "./examples/vcf2avro.sh", I get the following error message: "Could not initialize class org.xerial.snappy.Snappy" Status: Issue closed
prabushitha/gremlin-visualizer
585346077
Title: Invalid Host header when exposing the server to the outside Question: username_0: I'd like to use your frontend to access a database, but I get an error when accessing the server remotely:
```
Invalid Host header
```
Is there anything to do besides adding `disableHostCheck: true` to the proxy configuration? Answers: username_1: Did adding disableHostCheck: true solve the issue? username_0: @username_1 yes, I had to. Without it, even changing the hostname did not work. Thanks Status: Issue closed
rebekahliu/KickStartNow
260435502
Title: PM Review: Projects Question: username_0: I fixed all the bugs from yesterday! Status: Issue closed Answers: username_1: @username_0 Something's broken on Heroku. Try clicking one of the projects from the homepage index and you'll see what I mean (or try navigating to something like `/#/projects/2`) username_1: @username_0 It looks like a lot, but it's mostly small changes (and includes things that affect rewards/categories). **General**: - Need more and better seed data :) **New Project form**: - Needs to have categories instead of indices or id's now - Would prefer a placeholder instead of the pre-filled text "hello" for the title. Or an empty input since there's a label. - Not a fan of the goal amount incrementing by 1. - Not sure if this is something you can fix, but the image upload part doesn't really stand out...I think it showed up after I had already input the information in the top two fields, and it shows up at the top (which isn't intuitive) - It isn't clear where you're supposed to go or when the check boxes are supposed to turn green at the top (if at all). Would prefer to see something like a next button at the bottom to guide the user through adding a new project. - Unclear whether the initial description is supposed to be the long one or if it's the same thing as the story description. Make it clearer by saying "Brief Description" or "Tagline". **Index Items & Show**: - Index item description, if too long to fit in the container, should be cut off. You can use an overflow property or look up clamping with CSS. - Story description needs to be able to be formatted. I had a bunch of new lines and it's all bunched up. - The update project form doesn't autofill with the project's current information. - After submitting a new project and then navigating to categories, the category's tally of projects should be updated with the new count. **Errors**: - Error: `GET https://polar-shelf-46662.herokuapp.com/undefined` - Actions: Explore => one category => project => Explore => different category => different project (then error shows up) - Get rid of the iterator/unique key warning username_0: I fixed all the bugs from yesterday! Status: Issue closed
MicrosoftDocs/dynamics365smb-devitpro-pb
958112252
Title: Enables an action on a test page? Question: username_0: "Enables an action on a test page." should probably read "Retrieves whether or not an action on a test page is enabled." The same thing applies to the Visible member. Given how test pages are used, and judging from the fact that there is no parameter to specify a new value, I think both Enabled and Visible are read-only? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: c2eb12ed-e663-a34e-b17a-8d3322e78079 * Version Independent ID: 455c4f82-ced7-d3f5-f8d8-403430d350b9 * Content: [TestAction.Enabled Method - Business Central](https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/methods-auto/testaction/testaction-enabled-method) * Content Source: [dev-itpro/developer/methods-auto/testaction/testaction-enabled-method.md](https://github.com/MicrosoftDocs/dynamics365smb-devitpro-pb/blob/main/dev-itpro/developer/methods-auto/testaction/testaction-enabled-method.md) * Service: **dynamics365-business-central** * GitHub Login: @SusanneWindfeldPedersen * Microsoft Alias: **solsen**
covid-maps/covid-maps
596221141
Title: Display post submission time with time and day in Store Card Question: username_0: When displaying the submission time on each post, format it to show both the time at which the submission was created and the day it was posted. Instead of "2 days ago", display the time as "6:45 pm on Sunday". We can follow the convention below, with a sketch of the logic after this issue: - updates in the last 24 hours: "6:45 pm" - updates > 24 hours but within the last week: "6:45 pm on Sunday" - updates > 1 week: "6:45 pm 11 days ago" Answers: username_1: If no one is working on this right now, I'd like to take a go at it. username_2: Go for it! Feel free to ping/tag me if you need any help :) Status: Issue closed username_3: closed by #254
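A minimal sketch of that bucketing convention (in Python for illustration, since the convention itself is language-agnostic; the function name is hypothetical):

```python
from datetime import datetime, timedelta

def format_submission_time(submitted: datetime, now: datetime) -> str:
    age = now - submitted
    # lstrip('0') keeps this portable; '%-I' is not available on Windows
    clock = submitted.strftime("%I:%M %p").lstrip("0").lower()
    if age < timedelta(hours=24):
        return clock                                      # "6:45 pm"
    if age < timedelta(weeks=1):
        return f"{clock} on {submitted.strftime('%A')}"   # "6:45 pm on Sunday"
    return f"{clock} {age.days} days ago"                 # "6:45 pm 11 days ago"
```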
radzenhq/radzen-blazor
930901252
Title: Popup of RadzenDropDown with filtered values visually detaches from the parent component Question: username_0: When the popup is positioned above the component and I type in the filter, the popup rect collapses upward and creates a visual break from the parent component. STR: 1. open https://blazor.radzen.com/dropdown 2. open the lowest "DropDown with multiple selection" 3. type "alfr" to filter 4. see the break as in the screenshot ![image](https://user-images.githubusercontent.com/36102143/123541513-ad026100-d74d-11eb-8011-2a1cbf07cabe.png)<issue_closed> Status: Issue closed
sqlalchemy/alembic
1003482394
Title: create_check_constraint condition argument typing incorrect Question: username_0: **Describe the bug** The typing of the condition argument is incorrect according to the documentation. Definition: create_check_constraint(constraint_name: Optional[str], table_name: str, condition: **BinaryExpression**, schema: Optional[str] = None, **kw) → Optional[Table] The definition specifies that the type of condition must be BinaryExpression, but the documentation below says it can also be a string: condition¶ – SQL expression that's the condition of the constraint. Can be a **string** or SQLAlchemy expression language structure. https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.create_check_constraint **Expected behaviour** The typing for condition should be updated: create_check_constraint(constraint_name: Optional[str], table_name: str, condition: **Union[str, BinaryExpression]**, schema: Optional[str] = None, **kw) → Optional[Table] **To Reproduce**
```py
from alembic import op


def upgrade() -> None:
    op.create_check_constraint(
        "check_constraint",
        "table_name",
        "applies_from_time < applies_to_time"
    )
```
Then run `mypy test.py`. **Error**
```
test.py:9: error: Argument 3 to "create_check_constraint" has incompatible type "str"; expected "BinaryExpression[Any, Any, Any]"
```
**Versions.** - OS: Linux Ubuntu 18.04 - Python: 3.6 - Alembic: 1.7.3 - SQLAlchemy: 1.3.24 - sqlalchemy-stubs: 0.4 - mypy: 0.910 - Database: Not needed here - DBAPI: Not needed here **Additional context** None **Have a nice day!** Answers: username_1: thanks for reporting Status: Issue closed
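Until the annotation is widened as suggested, one way to keep mypy happy (a sketch, not an official recommendation) is to pass a SQLAlchemy expression instead of the raw string:

```python
import sqlalchemy as sa
from alembic import op

def upgrade() -> None:
    # sa.column() builds a lightweight column expression; comparing two
    # of them yields the BinaryExpression the current stub expects.
    op.create_check_constraint(
        "check_constraint",
        "table_name",
        sa.column("applies_from_time") < sa.column("applies_to_time"),
    )
```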
zeam-vm/pelemay
490711067
Title: Warning: System.stacktrace/0 outside of rescue/catch clauses is deprecated. Question: username_0: When running `mix compile`, the following warning occurs:
```
warning: System.stacktrace/0 outside of rescue/catch clauses is deprecated.
If you want to support only Elixir v1.7+, you must access __STACKTRACE__ inside a rescue/catch.
If you want to support earlier Elixir versions, move System.stacktrace/0 inside a rescue/catch
lib/benchfella.ex:506
```
Running `elixir -v`:
```
$ elixir -v
Erlang/OTP 22 [erts-10.4] [source] [64-bit] [smp:36:36] [ds:36:36:10] [async-threads:1] [hipe]
Elixir 1.9.0 (compiled with Erlang/OTP 22)
```
I guess this warning comes from Benchfella, so we should contribute by reporting this issue upstream. Status: Issue closed Answers: username_1: This issue no longer occurs, because Benchfella was removed from this repository.
conan-io/conan
199186157
Title: conan test_package ignores options of the test_package/conanfile.py Question: username_0: Hi there, I want to build a recent conan package for the Qt 5.8 libs, which involves a lot of compilation flags for external dependencies. My plan is to use as few external dependencies as possible by default; each project should then specify the Qt external dependencies it needs via Qt:dependency=yes. It seems great so far, but I ran into problems with the test_package: calling "conan test_package" builds the Qt libs with the default configuration set and ignores my test_package Qt:option=yes flags. I assumed that the test_package options are propagated before the conan package is built. Isn't that the desired behavior?
best, Christian
Status: Issue closed
Answers: username_0: Sorry for the spamming. The behavior I described is already the current behavior. I had a typo in my test_package/conanfile.py options for the Qt lib, so the config values were not matched to the lib.
username_1: Not spamming at all :) Thanks Christian for your question, very glad that you already found the issue. Feel free to ask any other issue, here or in other channels. Best!
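Since the root cause was a misspelled option name, a minimal sketch of how a test_package recipe pins options on its requirement may help future readers; the option name "Qt:websockets" is purely illustrative, and option names must match the Qt recipe exactly:

```python
from conans import ConanFile


class QtTestConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    # A typo here is not an error: the option is silently ignored and the
    # package is built with its default configuration, as seen in this issue.
    default_options = "Qt:websockets=yes"

    def test(self):
        pass  # run the packaged test binary here
```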
gorakhargosh/watchdog
73010154
Title: write close Question: username_0: Would it be possible to have a write-close event? This would be valuable for detecting when a file copy finishes.
Answers: username_1: :+1:
username_2: +1
username_3: Not in a portable way. It would be inotify only (maybe windows too). See #217
username_4: `openssl rand 1000000000 -out biiig.file` makes a file big enough that watchdog fires several notifications while the file is being written. We have to write our own "quiet time" logic to ensure the file has finished being written to?
username_5: @username_7 same issue as in #567
username_6: Unfortunately the lack of this feature makes this library very difficult to use. There are numerous software applications that write to a file in a serialized fashion, generating many modified events. It's not safe to respond to any of those events and access the file. Is there some other way to detect this state, and/or lock it?
username_7: Even if the write-close event is not portable, let's implement it where we can. We are open to suggestions and PRs :)
Status: Issue closed
username_7: Finally implemented the inotify part with 2fab7c2a06df0785a322186b191d0cd7c95f566a (will be part of the 1.0.3 version). Let's open specific issues for other OSes, if required.
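For readers who need the "quiet time" workaround discussed above, here is a minimal sketch built only on watchdog's public API; the 2-second timeout is an assumption you would tune for your writers:

```python
import threading
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class QuietTimeHandler(FileSystemEventHandler):
    """Treat a file as finished once no modify event arrives for `quiet` seconds."""

    def __init__(self, quiet=2.0):
        self.quiet = quiet
        self._lock = threading.Lock()
        self._last_seen = {}  # path -> time of the most recent modify event

    def on_modified(self, event):
        if not event.is_directory:
            with self._lock:
                self._last_seen[event.src_path] = time.monotonic()

    def pop_finished(self):
        now = time.monotonic()
        with self._lock:
            done = [p for p, t in self._last_seen.items() if now - t >= self.quiet]
            for path in done:
                del self._last_seen[path]
        return done


handler = QuietTimeHandler(quiet=2.0)
observer = Observer()
observer.schedule(handler, ".", recursive=False)
observer.start()
try:
    while True:
        time.sleep(0.5)
        for path in handler.pop_finished():
            print("finished:", path)
finally:
    observer.stop()
    observer.join()
```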
Savalone47/oss-enterprise
660621400
Title: None Question: username_0: Create CODE_OF_CONDUCT.md Status: Issue closed Answers: username_0: ## Choose a `CODE_OF_CONDUCT.md` for your organization's open source repository

We'll create a `CODE_OF_CONDUCT.md` template file. This template will be recommended for all of your organization's repositories. Unlike the `CONTRIBUTING.md`, it should **not** be customized by the maintainers.

A user's experience in your open source project will become a reflection of your brand. How will you protect contributors from harassing or belittling behavior? What will you do when someone is behaving inappropriately? Adding a code of conduct to your projects will promote and facilitate healthy behavior within your community.

### Partners in this process

You may want to @ mention the individuals responsible for Diversity, Inclusion and Communication to be your partners in this step.

### Why your project needs a code of conduct

For more information on why a code of conduct is a good idea, check out the article [opensource.guide: Your Code of Conduct](https://opensource.guide/code-of-conduct/).

### Using an established code of conduct

Thought leaders on establishing healthy behaviors in the open source community have joined forces to develop some fantastic drop-in codes of conduct. GitHub makes it easy to drop these established documents in to any project. To read directions on how to use this drop-in code, check out this [help documentation](https://help.github.com/articles/adding-a-code-of-conduct-to-your-project/).

Here are the drop-in codes of conduct currently supported by GitHub:
- [Contributor Covenant](https://www.contributor-covenant.org/)
- [Citizen Code of Conduct](http://citizencodeofconduct.org/)

### Adding a code of conduct to an existing project

If you already have an open source project, it is easy to add a code of conduct:

![gif of adding a code of conduct to existing project](https://user-images.githubusercontent.com/9906718/33984735-eee7c7c0-e0b8-11e7-86c8-af3589c322a2.gif)

If you consume or contribute to a project that does not have a code of conduct, you should not be shy about suggesting one to the project maintainers.

### Should you customize the code of conduct?

It is generally acceptable to customize the code of conduct to meet your organization's needs, however we find the examples developed by the open source community are very good and will meet the needs of the majority of organizations. If you are interested in creating your own, check out some of these examples for inspiration:
- [Django Code of Conduct](https://www.djangoproject.com/conduct/)
- [Python Community Code of Conduct](https://www.python.org/psf/codeofconduct/) and [Diversity Statement](https://www.python.org/community/diversity/)
- [Ubuntu Code of Conduct](http://www.ubuntu.com/about/about-ubuntu/conduct)
- [Geek Feminism Code of Conduct](http://geekfeminism.org/about/code-of-conduct/)

## Step 4: Code of conduct

**Decision Time** Decide which code of conduct you will use for your projects. Will you promote the use of an established template or create your own? Based on your decision, follow the path outlined below:

### :keyboard: Activity: Choose a code of conduct

**If you want to use an established code of conduct**
1. [Create a new code of conduct from a template](https://github.com/username_0/oss-enterprise/community/code-of-conduct/new).
1. Fill in your information on the right side.
1. Click **Review and submit**.
1. Review the code of conduct and scroll to the bottom of the page.
1. Write a descriptive commit message.
1. Make sure that the option to "create a new branch" is selected, and click **Commit new file**.
1. Create a new pull request to add a code of conduct.

**If you want to use a custom code of conduct**
1. [Create a new pull request for your custom code of conduct](https://github.com/username_0/oss-enterprise/new/master?filename=CODE_OF_CONDUCT.md).
1. Enter your code of conduct in the text area.
1. Write a descriptive commit message.
1. Select **Create a new branch for this commit and start a pull request**.
1. Click **Propose new file**.
1. Enter the following title for your pull request: `Create CODE_OF_CONDUCT.md`

<hr>
<h3 align="center">I'll respond in your new pull request.</h3>
Jbrough0/good-readme-generator
779489046
Title: readme screenshot Question: username_0: ![image](https://user-images.githubusercontent.com/70440198/103692126-54ec5e00-4f65-11eb-83c6-996f2674783b.png) Answers: username_0: screenshot 1 username_0: ![Screenshot (63)](https://user-images.githubusercontent.com/70440198/103692667-1efba980-4f66-11eb-81d1-4a11cdc8def9.png) username_0: ![Screenshot (63)](https://user-images.githubusercontent.com/70440198/103720471-66e7f400-4f99-11eb-8114-b03074155f05.png)
xpdAcq/xpdAcq
182144216
Title: finish shutter test report Question: username_0: Report for XPD commissioning:
1. Copy and paste the entire output from ``CAmonitor`` so that we have a quantitative record showing the shutter operates at 0.5 s speed.
2. Download the dark images collected during the test and report quantitatively on the *quality* of the shutter.
Answers: username_1: Sounds reasonable.
username_1: Create a test_shutter(n) function that runs n 0.1-second exposures, opening and closing the shutter between each, plots the sum of counts across the detector for each one, and searches for outliers indicating partial images, for example.
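A minimal sketch of the test_shutter(n) idea from the last comment; the shutter and detector callables are placeholders for whatever the instrument stack exposes, the results are printed rather than plotted to stay dependency-light, and the median-absolute-deviation outlier rule is illustrative:

```python
import numpy as np


def test_shutter(n, open_shutter, close_shutter, take_exposure,
                 exposure_time=0.1, thresh=5.0):
    """Run n short exposures, toggling the shutter around each one, and
    flag exposures whose summed counts are outliers (partial images).

    `take_exposure` is assumed to return a 2-D detector image; replace the
    three callables with the real acquisition hooks for your setup.
    """
    totals = []
    for _ in range(n):
        open_shutter()
        totals.append(take_exposure(exposure_time).sum())
        close_shutter()
    totals = np.asarray(totals, dtype=float)

    med = np.median(totals)
    mad = np.median(np.abs(totals - med)) or 1.0  # guard against zero spread
    outliers = np.where(np.abs(totals - med) / mad > thresh)[0]

    print("per-exposure summed counts:", totals)
    if outliers.size:
        print("possible partial images at exposures:", outliers.tolist())
    return totals, outliers
```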
danCazacu/JavaEE
370013172
Title: Captcha Question: username_0: A captcha feature should be added on input.jsp (or in InputController?).
For pull request #3: a pull request means merging between the development branch and the master branch. When there is a stable version on the dev branch, we bring it onto master (usually both branches stay open in parallel, and this merge just serves as a kind of checkpoint + backup + new software release, if applicable).
Answers: username_0: Added
Status: Issue closed
waic/wcag21
757681720
Title: Translation of Technique SCR36 in the Techniques document Question: username_0: Target file: https://github.com/waic/wcag21/tree/master/techniques/client-side-script/SCR36.html
This document has a file of the same name in Techniques for WCAG 2.0.
We are trialing the split workflow. For how to proceed, see "How to work on Techniques for WCAG 2.1 (split workflow version)": https://github.com/waic/wcag21/blob/master/work-step_split.md
Regarding the title, please also edit the corresponding entry on the cover table-of-contents page: https://github.com/waic/wcag21/blob/master/techniques/index.html
Status: Issue closed
NCEAS/metacatui
1087200823
Title: Investigate approaches to displaying Data Catalog results on the Cesium map Question: username_0: Look into alternatives to GeoHashes for displaying search results from the DataCatalogView on the Cesium map. For example, we might want to take advantage of Cesium's [clustering](https://cesium.com/learn/cesiumjs/ref-doc/EntityCluster.html) abilities. Answers: username_1: We've decided to continue with the geohashes for now since it's immediately ready to use. We might want to use some other method of displaying results in the future. username_0: Just some notes for when this issue comes up again: - There are currently [626,609 datasets on DataONE](https://cn.dataone.org/cn/v2/query/solr/?q=southBoundCoord:*%20AND%20-obsoletedBy:*&fl=northBoundCoord,eastBoundCoord,southBoundCoord,westBoundCoord,id,title) with location coordinates. - Displaying hundreds of thousands of points in Cesium is no problem. [Here](https://sandcastle.cesium.com/#c=bVJdb5tAEPwrK55Adg47adp8EKuRI/UlUiJF6kuoqgus7VWPO3R3kC/5v+cWF4PbvAzs3MzoZqEw2nloCZ/RwhVofIYlOmoq8bPj4jwqunlptJek0eZRcpnrXLfSQm1Iexd8uwDhCtQoaksVeWrRCVmW8SjznvX3/fHSKIWFJ6PjhDM5EV+8lb9bqTj1cTaFmZgzHDOcMHxhOGX4yvCN4Yzh/Fd3r5WxEHOUMnpNvikxJB3Nz2aXIyaDQ2IySeA91wCDW/q9+Zyl/ZzBeNwbR1YKniChIB3qCIV67TeBHlkADjoH32B4JO6zU+323G1z72TWEW/vol/vUlof3qQ+EStrqhtcW0QXD7UnQ/50aDRik+kQX<KEY>) is an example with 648,000 points displayed. - Clustering these points is also feasible, but actually tends to slow down the rendering rather than improve performance. - The limitation in showing a point for each dataset in the search results catalog would be how to get all of the location data from Solr. We would need to aggregate the locations in some way.
t0mk/packet-bgp-terraform
409516769
Title: Add README.md file to describe project and dependencies Question: username_0: The desired outcome is to have a README.md file that describes the code, and makes explicit any dependencies and setup instructions required. Answers: username_1: hey @username_0 this is just a dummy repo with an example. It's actually not functioning due to the networks order shuffle in packet_device last week. Maybe it's better to focus on https://github.com/packet-labs/Packet-BGP-LoadBalancing
dfaruque/Serenity.Extra
604743503
Title: Munq IocContainer failed to resolve Serenity.Data.IAuditLogRow Error Question: username_0: ![image](https://user-images.githubusercontent.com/44383772/79985122-d02ee200-84b2-11ea-9b6e-2f3717df7f9b.png)
I get this error when I try to use [AuditLog] in a Master/Detail row. This is my master row:
`[AuditLog] public sealed class SevkPlanlamaRow : Row, IIdRow, INameRow {....}`
And this is the detail row:
`[AuditLog] public sealed class SevkSiparisRow : Row, IIdRow, INameRow {....}`
And the error string:
`Munq IocContainer failed to resolve Serenity.Data.IAuditLogRow`
@dfaruque or anyone else, can you help me?
Status: Issue closed
Answers: username_0: Sorry about this; it was a usage error. I figured it out thanks to the demo. I thought it would work the same way as DataAuditLog, but it does not. If anyone else encounters this error, the correct usage is:
`public sealed class SevkSiparisRow : Row, IIdRow, INameRow, _Ext.IAuditLog {...}`
geoffdutton/amplitude
639314437
Title: License Question: username_0: Hi team, How is this package licensed? Would it be possible to add a LICENSE file? Thank you for maintaining this project! Answers: username_1: Hello, I've added a [LICENSE](https://github.com/username_1/amplitude/blob/master/LICENSE). It's included with v5.1.2 on NPM. Status: Issue closed
iamogbz/chrome-alt-tabs
413842694
Title: Configure CI Question: username_0: **Is your feature request related to a problem? Please describe.**
Manual deploying is a pain!

**Describe the solution you'd like**
- Push to branch triggers CI
- Run tests status checks
- Run code coverage check
- Run lint source checks
- Run lint commit checks
- Merge to master
- Build distribution if feature, fix or breaking changes made
- Deploy webstore using semantic commit messages to generate version changelog

**Describe alternatives you've considered**
- Continue with manual testing, QA and deploy

**Additional context**
- https://github.com/semantic-release/semantic-release
- https://conventional-changelog.github.io/commitlint/
- https://github.com/DrewML/chrome-webstore-upload-cli
Status: Issue closed
Answers: username_0: :tada: This issue has been resolved in version 1.2.0 :tada:
The release is available on:
- [Chrome Web Store](https://chrome.google.com/webstore/detail/ebdcpdepkbefmgfdkdplcmhfkddagfon)
- [GitHub release](https://github.com/username_0/chrome-alt-tabs/releases/tag/v1.2.0)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:
GoogleCloudPlatform/click-to-deploy
421922750
Title: separate each k8s app into its own repo Question: username_0: Why not have a separate git repo for each GKE marketplace app, managed by that app's team? Combining all the marketplace apps together into one repo would seem to negatively impact maintenance and support.
Answers: username_1: Hello [@username_0](https://github.com/username_0), thanks for the suggestion; we will consider it in the future, especially taking into account the growing number of exemplary Kubernetes applications. For now, our CI/CD pipeline assumes that all the applications are in the same repository, so it would not be easy to change in a short period of time. Thanks
username_2: Closing this issue for now.
Status: Issue closed
stanfordmlgroup/ngboost
525045070
Title: Return train and val loss Question: username_0: Thank you for the excellent work with NGBoost; I've been really excited to test it out!
In commit `c4b46b9` the fit method was altered to return self instead of the train and val losses. Is there any way to access the losses with the current behavior? I believe the losses should be accessible, because we may not be interested in doing early stopping but actually training for a larger number of iterations and simply choosing the best val loss. Also, returning the losses is essential to compare different models.
Answers: username_1: Hi @username_0, we are trying to keep the API as similar to sklearn as possible. The behavior of sklearn is to return ``self`` at the end of ``.fit()``, and we are trying to do the same. I'm trying to understand how sklearn enables returning lists of training and val losses, so we can make ngboost work similarly. Thoughts?
username_2: maybe use "Return train and val loss" like in base-XGBoost?
username_0: Good point @username_1. The sklearn implementation of xgboost has a method called evals_result which returns the train and validation losses. This can be seen here: https://github.com/dmlc/xgboost/blob/a4f5c862760029c24a5ba29b2a2ef4787058856c/python-package/xgboost/sklearn.py
username_3: I think [this commit](https://github.com/stanfordmlgroup/ngboost/commit/5d979baba99cce303789c138c421c99cdb2c225f) should address the issue. Please have a look and let me know. It should work like:
```
...
ngb.fit(X_train, Y_train)
ngb.evals_result
```
Status: Issue closed
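A minimal usage sketch, assuming the `evals_result` attribute from the linked commit and NGBoost's `X_val`/`Y_val` fit arguments; the exact structure of `evals_result` is an assumption based on that commit:

```python
from ngboost import NGBRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

ngb = NGBRegressor(n_estimators=200)
# Passing a validation set records a per-iteration validation loss.
ngb.fit(X_train, y_train, X_val=X_val, Y_val=y_val)

print(ngb.evals_result)  # per-iteration train/val losses, per the commit above
```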
burningmantech/ranger-ims-server
157266391
Title: Use EventSource to push updates to clients Question: username_0: Use [EventSource](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) to push updates to clients.
Answers: username_0: Things to potentially push updates for:
* incidents
* personnel
* incident types
* incident reports
username_0: No way this will get done in the next three weeks, so punting.
username_0: Server now has a URI for data updates as of 22d0102df1bce790d569b6a16863e15afdad9a77
username_0: Dispatch queue subscribes to those events: c7fbe38f0cca1c2d956e846338bc8e8becd8b260
This reloads the entire queue data set after any update, so one question is whether that scales OK. It does appear that DataTables only does one GET request even though multiple update events come in, so that's a great thing.
username_0: Not done yet is having individual incident contexts update.
username_0: @username_1: this is on the demo server. Try opening the dispatch queue in one window, then open an incident in another. Change the incident's state or some other data visible in the dispatch queue and it should automatically update in the queue.
username_0: #110 for incident pages #111 for incident types #112 for personnel #113 for incident reports
username_0: Closing this bug since the main server side feature is in place.
Status: Issue closed
sselecirPyM/MMDMotionCompute
1096954642
Title: The four foot IK bones cause incorrect motion; can IK be disabled? Question: username_0: ![image](https://user-images.githubusercontent.com/60756285/148649497-d40c491b-4b9e-43df-8534-7c826d3276cc.png)
![Screenshot 2022-01-08 225151](https://user-images.githubusercontent.com/60756285/148649394-1a3cfa5a-1fc5-4b4e-af7e-44a42309cdd6.png)
Can these four bones be disabled, like in MMD?
Model and motion that reproduce the error (Baidu Netdisk): https://pan.baidu.com/s/1pRy2-yQlN71lZgKaOvjr6g (extraction code: 6666)
Also, this tool has no ground collision: if the hair or clothing is too long, it clips through the ground. That can be worked around with a second solver pass, but I have no idea what to do about the incorrect motion 😭
pythonindia/junction
167906517
Title: Shouldn't allow title of proposal to be changed after last date of submission Question: username_0: We should not allow the proposal title to be changed after the last date of submission. If reviewers suggest a change, it can be made by the CFP coordinator or an admin.
Answers: username_1: @sayanchowdhury @anistark Do you think this is needed in a proposal?
username_2: @username_1 Yes, this is needed. After the submission dates are over, it is unfair to let people freely change the title/content of the talk. At least if the title becomes immutable after the deadline, the content can't be freely changed to whatever the proposer wants.
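Junction is Django-based, so one possible shape for this guard is a model-level validation; a hedged sketch in which the model and field names (`Proposal`, `conference.cfp_close`) are assumptions, not the project's actual schema:

```python
from django.core.exceptions import ValidationError
from django.utils import timezone


def validate_title_unchanged(proposal):
    """Reject title edits after the CFP deadline (field names are assumed)."""
    if proposal.pk is None:
        return  # a new proposal has nothing to compare against
    original = type(proposal).objects.get(pk=proposal.pk)
    deadline = proposal.conference.cfp_close  # assumed deadline field
    if timezone.now() > deadline and original.title != proposal.title:
        raise ValidationError(
            "The proposal title cannot be changed after the submission deadline."
        )
```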
Z3Prover/z3
595636016
Title: fp.xform.magic=true HORN logic Invalid model Question: username_0: Hi,
For this formula:
```
(set-logic HORN)
(declare-fun a (Real) Bool)
(assert (a 0))
(check-sat)
```
Z3 fp.xform.magic=true gives an invalid model:
```
[518] % z3 model_validate=true small.smt2
sat
[519] % z3 model_validate=true fp.xform.magic=true small.smt2
sat
(error "line 4 column 10: an invalid model was generated")
[520] %
[520] % cat small.smt2
(set-logic HORN)
(declare-fun a (Real) Bool)
(assert (a 0))
(check-sat)
[521] %
```
OS: Ubuntu 18.04
Commit: cb13641
Answers: username_1: magic disabled
Status: Issue closed
micronaut-projects/micronaut-test
586514623
Title: Cannot mock named Bean with Mockito Question: username_0: I have an interface with two implementations, where one of them is `@Primary`. The code compiles and works. Then I wanted to write a test that has both implementations injected and mocked, but that fails. Here is my set-up of classes:
```
public interface DoInterface {
    String execute();
}

@Primary
@Named("DoOne")
public class DoOne implements DoInterface {
    @Override
    public String execute() {
        return "one";
    }
}

@Singleton
@Named("DoTwo")
public class DoTwo implements DoInterface {
    @Override
    public String execute() {
        return "two";
    }
}
```
And the test. This version of the test works, and both implementations are injected correctly:
```
@MicronautTest
public class ExecServiceTest {

    @Inject
    @Named("DoOne")
    DoInterface doThat;

    @Inject
    @Named("DoTwo")
    DoInterface doThatTwo;

    @Test
    void shouldReturnCorrectString() {
        assertEquals("one", doThat.execute());
    }
}
```
But when I try to mock them, I am getting an error. Here is the test with mocks:
```
@MicronautTest
public class ExecServiceTest {

    @Inject
    @Named("DoOne")
    DoInterface doThat;

    @Inject
    @Named("DoTwo")
    DoInterface doThatTwo;

    @MockBean(DoOne.class)
    DoInterface getOne() {
        return mock(DoInterface.class);
[Truncated]
    at io.micronaut.context.AbstractBeanDefinition.getBeanForField(AbstractBeanDefinition.java:1410)
    ... 79 more

hello.world.server.ExecServiceTest > shouldReturnCorrectString() FAILED
    io.micronaut.context.exceptions.DependencyInjectionException
        Caused by: io.micronaut.context.exceptions.NoSuchBeanException

1 test completed, 1 failed
```
I tried other variants, for example:
```
@MockBean(value = DoOne.class, named = "DoOne")
DoInterface getOne() {
    return mock(DoInterface.class);
}
```
But nothing works. The error message says that the `@Named(DoOne)` bean does not exist, but it's there. I'm using Micronaut 1.3.3 and io.micronaut.test:micronaut-test-junit5 1.1.5. All dependencies regarding annotation processing are present. Could anybody please explain what I'm doing wrong? I also don't have much experience with this; I just started with Micronaut.
qpython-android/qpython.org
127740128
Title: Function round() does not work properly on QPython3 Question: username_0: QPython3 does not show the results of the function round() properly. For example:
* **With QPython:**
```
>>> Number=round(8.765, 3)
>>> print (Number)
8.877
```
(Correct)
* **With QPython3:**
```
>>> Number=round(8.765, 3)
>>> print (Number)
8.8759999999999994
```
(Wrong)
Please, can you fix it?
Answers: username_1: It's strange; it may be a Python 3.2.x bug, but I will look into it later.
username_2: +1
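If the underlying goal is a fixed number of decimal places for display, string formatting sidesteps differences in float repr between interpreter versions; a small illustration:

```python
x = round(8.765, 3)
print("{:.3f}".format(x))  # always shows exactly three decimals
```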
redhat-openstack/tripleo-quickstart
141907322
Title: image building: decouple I.B. playbook setup/config from quickstart proper Question: username_0: Presently the image-building playbooks are reusing potentially too much of the libvirt / networking setup and config from the quickstart roles. For example, there's no need to set up overcloud network(s) to build the images. In addition, it makes the build-image workflows prone to breaking on setup/config for reasons unrelated to image building. Finally, it increases the amount of time needed to build an image. As we explore ways to leverage tripleo-quickstart's appliance-based undercloud in (potentially) more projects for CI, being able to quickly, simply, and cleanly build images will become increasingly important. I've got some deltas locally to address this, testing now. Planning to post a review shortly.
Answers: username_1: Do we need *any* of the libvirt roles for the image building? I see we're pulling them in from the `playbooks/build-image.yml` playbook, but I don't think they're actually referenced in any of the `image` roles. I don't see any use of the `virt*` modules or of `virsh` anywhere under `playbooks/roles/images`.
username_0: yeah...my "prototype" is just to nuke the refs from the build-image.yml. The actual build-image role pulls in the basic libvirt setup (install libvirt packages, but not networking config). Want me to just post what I have now? Seems to work :)
username_0: re: why libvirt is needed at all, we leverage qemu-img (and friends) to generate the actual binaries.
username_1: `qemu-img` doesn't require any support from libvirt. On the other hand, the various `virt-*` tools (like `virt-customize`) *do* require libvirt to be installed and running, but don't require any of the network configuration (at all) from our `libvirt/` roles. Simply pulling in the `parts/libvirt` role ought to be sufficient. That installs libvirt and ensures that the service is started.
username_0: {nod} - I'm somewhat new to all of this; while watching process trees during virt-customize / virt-sparsify, I saw it fire up qemu-img. Thanks for making the distinction. BTW I like the new factoring of tasks (parts) that went in recently. +1 If you're cool with it I'll make sure this works completely (locally) and post a review.
username_1: I am always in favor of posting a review :).
username_0: WIP - testing it still: https://review.gerrithub.io/#/c/266692/
username_0: Removed WIP tag. Integrated feedback and ready for review. https://review.gerrithub.io/#/c/266692/
Status: Issue closed
username_0: merged
geoffrowland/mahara-artefact_cpds
590264531
Title: CPDS for Mahara 19.10.2 Question: username_0: Hi, I was wondering if there will be any development into making the current cpds plugin compatible with Mahara 19.10.2? I cloned from the master branch and also added the versioning fix because I received the 500 error, but it then fails when I click to install the block type in Mahara: "failed to upgrade".
```
[WAR] 54 (artefact/cpds/blocktype/cpds/lib.php:0) Declaration of PluginBlocktypeCpds::get_css_icon() should be compatible with PluginBlocktype::get_css_icon($blocktypename)
Call stack (most recent first):
  log_message("Declaration of PluginBlocktypeCpds::get_css_icon()...", 8, true, true, "/data/mahara-vhosts/mahara-19.10.2/htdocs/artefact...", 0) at /data/mahara-vhosts/mahara-19.10.2/htdocs/lib/errors.php:521
  error(2, "Declaration of PluginBlocktypeCpds::get_css_icon()...", "/data/mahara-vhosts/mahara-19.10.2/htdocs/artefact...", 0, array(size 10)) at /data/mahara-vhosts/mahara-19.10.2/htdocs/lib/mahara.php:1629
  require_once() at /data/mahara-vhosts/mahara-19.10.2/htdocs/lib/mahara.php:1629
  safe_require("blocktype", "cpds/cpds") at /data/mahara-vhosts/mahara-19.10.2/htdocs/lib/upgrade.php:1132
  validate_plugin("blocktype", "cpds/cpds", "/data/mahara-vhosts/mahara-19.10.2/htdocs/artefact...") at /data/mahara-vhosts/mahara-19.10.2/htdocs/lib/upgrade.php:159
  check_upgrades("blocktype.cpds/cpds") at /data/mahara-vhosts/mahara-19.10.2/htdocs/admin/upgrade.json.php:24
```
I am running a clean install of Mahara on RHEL 6.10, PHP 7.1.26, MySQL 5.6.36. Any help or assistance with this would be fantastic; thanks for your time. Regards, David
Answers: username_0: No longer needed, thanks - https://github.com/robertlyon777/mahara-artefact_cpds.git
Status: Issue closed
saltstack/salt
156225749
Title: [2015.8.10] KeyError: 'file.source_list' Question: username_0: ### Description of Issue/Question
After upgrading from 2015.8.8.2 to 2015.8.10 we're seeing this happening at a few minions:
```
2016-05-23 09:56:28,141 [salt.state ][ERROR ][11334] An exception occurred in this state: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1626, in call
    if 'check_cmd' in low and '{0[state]}.mod_run_check_cmd'.format(low) not in self.states:
  File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1492, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/salt/states/file.py", line 1617, in managed
    source, source_hash = __salt__['file.source_list'](
  File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 900, in __getitem__
    func = super(LazyLoader, self).__getitem__(item)
  File "/usr/lib/python2.7/dist-packages/salt/utils/lazy.py", line 95, in __getitem__
    return self._dict[key]
KeyError: 'file.source_list'
```
This also happens with `file.check_perms`, `file.user_to_uid`, `file.check_perms`, but less often than with `file.source_list`.

### Versions Report
```
Salt Version:
Salt: 2015.8.10

Dependency Versions:
Jinja2: 2.7.2
M2Crypto: Not Installed
Mako: 0.9.1
PyYAML: 3.10
PyZMQ: 14.0.1
Python: 2.7.6 (default, Jun 22 2015, 17:58:13)
RAET: Not Installed
Tornado: 4.2.1
ZMQ: 4.0.4
cffi: Not Installed
cherrypy: 3.3.0
dateutil: 1.5
gitdb: 0.5.4
gitpython: 0.3.2 RC1
ioflo: Not Installed
libgit2: Not Installed
libnacl: 1.4.3
msgpack-pure: Not Installed
msgpack-python: 0.3.0
mysql-python: 1.2.3
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
python-gnupg: Not Installed
smmap: 0.8.2
timelib: Not Installed

System Versions:
dist: Ubuntu 14.04 trusty
machine: x86_64
release: 3.13.0-83-generic
system: Ubuntu 14.04 trusty
```
Answers: username_0: It appears that this was being caused by the fact that the package was upgraded but there was still an old salt-minion process running which was stuck at a scheduled highstate (as reported in #32322).
username_1: @username_0 are you okay if we close this issue and track the minion process getting stuck at a scheduled highstate in the other issue?
username_0: Yes, this problem was indeed caused by the stuck minion (on the older version).
Status: Issue closed
google/docsy
763341131
Title: Authentication support Question: username_0: This is probably not the forum for this type of question. What do you use as an authentication layer in front of Docsy? We have a bunch of clients from different organizations that would like to access our API but we do not want to make it public. Any advice?
Answers: username_1: In terms of forums, there's a Docsy community mailing list you might like to join: https://www.docsy.dev/community/
To answer your question (just as someone who uses Docsy): I deploy a company Docsy site to an AWS S3 bucket and then have CloudFront serve the site. I then use a Lambda@Edge function (https://github.com/Widen/cloudfront-auth) which authenticates employees using Google Workspace (it also supports Okta, AD, etc.). I'm going to be writing up how I do that, with automation examples, in the near future. Commercial providers such as Netlify also have solutions that might be worth looking at too if you want to do less of your own plumbing :-)
username_2: Another option you may want to consider is [oauth2-proxy](https://github.com/oauth2-proxy/oauth2-proxy#oauth2_proxy), which you can use stand-alone or in conjunction with nginx. If you need HTTPS you can use Let's Encrypt. An option we use for some customers is Google Cloud [Identity-Aware Proxy](https://cloud.google.com/iap), which is non-trivial to set up, but is a very robust and fast option.
rossfuhrman/_why_the_lucky_markov
479437717
Title: Rockettes were spinning, arm in arm, he had learned about the tall fox’s truck in this case, since Paij-ree’s father didn’t want the $$$? 'Father', 'Onnn' => 'Mother' }, { 'ree' => 'AM', 'plo' => 'PM' } ] # A Class is the join separator, used when joining strings with => nil # with String#split. Question: username_0: Toot: Rockettes were spinning, arm in arm, he had learned about the tall fox’s truck in this case, since Paij-ree’s father didn’t want the $$$? 'Father', 'Onnn' => 'Mother' }, { 'ree' => 'AM', 'plo' => 'PM' } ] # A Class is the join separator, used when joining strings with => nil # with String#split.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots
Tangdixi/DCPathButton
118224445
Title: Delegate Improvements Question: username_0: The delegates `willDismissDCPathButtonItems:` and `didDismissDCPathButtonItems:` are not fired unless you tap on the center button to dismiss. I would think they also get fired when tapping on one of the button items.
Currently there is no way to trigger an action after the dismiss animation, because `pathButton:clickItemButtonAtIndex:` happens on tap. As an improvement, I suggest either:
1. All 3 of these delegates should fire in the right sequence.
2. Add a new delegate to fire after the dismiss animation for selecting an item button index.
Status: Issue closed
Answers: username_1: Fixed. PLZ reinstall through PodSpec
mcollovati/vertx-vaadin
392933589
Title: Too short documentation. Question: username_0: I just cannot run an independent project. I tried several times. There is very little information on how to do this. The examples run fine, but a separate project.... This is hell.
Answers: username_1: Thanks for the feedback; I will try to write more detailed documentation. Can you please briefly explain what kind of trouble you faced? This would be helpful for me in order to provide better information
pytorch/pytorch
338167457
Title: CUDA_LAUNCH_BLOCKING=1 hangs sometimes Question: username_0: ## Issue description
This is the content of `segment.py`:
```
#!/usr/bin/env python
import torch
from torch import nn
import torch.utils.data


class DRN(nn.Module):
    def __init__(self):
        super(DRN, self).__init__()
        self.a = nn.Conv2d(3, 16, kernel_size=7)

    def forward(self, x):
        print('before DRN forward')
        return x


if __name__ == '__main__':
    model = torch.nn.DataParallel(DRN()).cuda().train()
    input_ = torch.rand(2).cuda()
    print('before input')
    model(input_)
    print('end')
```
If I run `CUDA_VISIBLE_DEVICES=0,1 ./segment.py`, it outputs:
```
before input
before DRN forward
before DRN forward
end
```
However, if I run `CUDA_LAUNCH_BLOCKING=1 CUDA_VISIBLE_DEVICES=0,1 ./segment.py`, it prints `before input` only and then hangs, like below:
![2018-07-04-154901_1916x1058_scrot](https://user-images.githubusercontent.com/3921062/42263854-c97953ae-7fa1-11e8-9b9f-e5976388252f.png)
It is very strange that if I change `rand(2)` to `rand(1)`, or change `kernel_size=7` to `kernel_size=2`, it does not get stuck anymore. That is why I describe this bug as occurring "sometimes". I also reproduced this bug on a machine with two GTX 1080s.

## System Info
```
Collecting environment information...
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 8.0.61

OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
CMake version: version 3.5.1

Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 8.0.61
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti

Nvidia driver version: 384.98
cuDNN version: Probably one of the following:
/usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.5.1.10
/usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.6.0.21
/usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn_static.a

Versions of relevant libraries:
[pip3] numpy (1.14.2)
[pip3] torch (0.4.0)
[pip3] torchvision (0.2.0)
[conda] Could not collect
```
Answers: username_1: Yes, this is expected because of NCCL. You need to remove DataParallel or limit the process to a single GPU for debugging.
username_0: @username_1 Excuse me, how do you know this? I googled this problem many times; no one mentioned that.
username_2: @username_0 we asked some NVIDIA engineers. Apparently NCCL doesn't really like CUDA_LAUNCH_BLOCKING
username_1: NCCL really needs to run multiple kernels on multiple GPUs simultaneously, or otherwise it will deadlock. This is why having 2 processes running DataParallels causes deadlocks (one starts a kernel on GPU0, another one on GPU1, you get a deadlock). The same issue happens here, because the CPU has to wait for GPU0 to return before it can queue the GPU1 kernel, but GPU0 waits for GPU1.
username_3: What if a bug is only triggered on multiple GPUs? In my case, after training with multiple GPUs for 3 hours, the following error occurs:
```
cuda runtime error (59) : device-side assert triggered at /pytorch/torch/lib/THC/generic/THCStorage.c:184
Aborted (core dumped)
```
Any advice for debugging in this scenario? Thanks.
username_4: NCCL should work with CUDA_LAUNCH_BLOCKING, but only starting with CUDA 9 (the report mentions CUDA 8), since we can then launch kernels on multiple GPUs in a single cuda operation using cudaLaunchCooperativeKernelMultiDevice.
username_5: I think this should be mentioned somewhere, because DataParallel hangs with zero warnings or errors. It took me one week to reach this page.
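To make the suggested workaround concrete, a minimal single-GPU variant of the script above; no new APIs are involved, it just drops DataParallel so NCCL never runs:

```python
# Debugging variant: without DataParallel, NCCL is not used and
# CUDA_LAUNCH_BLOCKING=1 no longer has anything to deadlock against.
model = DRN().cuda().train()
input_ = torch.rand(2).cuda()
print('before input')
model(input_)
print('end')
```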
vatlab/sos-papermill
494591017
Title: No parameter translator functions specified for kernel 'sos' or language 'sos' Question: username_0: Currently papermill has no parameter translation function for SoS. I have the notebook below:
![image](https://user-images.githubusercontent.com/7631261/65041205-d0db2500-d973-11e9-9141-15f931a885d4.png)
On running papermill, I see the error below:
```
$ papermill --engine sos U1.ipynb U1_o.ipynb -y '{"x": 11}'
INFO: Input Notebook: U1.ipynb
INFO: Output Notebook: U1_o.ipynb
Traceback (most recent call last):
  File "/home/ubuntu/.local/bin/papermill", line 11, in <module>
    sys.exit(papermill())
  File "/home/ubuntu/.local/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/papermill/cli.py", line 254, in papermill
    cwd=cwd,
  File "/home/ubuntu/.local/lib/python3.6/site-packages/papermill/execute.py", line 81, in execute_notebook
    nb = parameterize_notebook(nb, parameters, report_mode)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/papermill/parameterize.py", line 78, in parameterize_notebook
    param_content = translate_parameters(kernel_name, language, parameters)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/papermill/translators.py", line 278, in translate_parameters
    return papermill_translators.find_translator(kernel_name, language).codify(parameters)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/papermill/translators.py", line 26, in find_translator
    kernel_name, language
papermill.exceptions.PapermillException: No parameter translator functions specified for kernel 'sos' or language 'sos
```
Can you share some directions on how to get this resolved? [Translators](https://github.com/nteract/papermill/blob/master/papermill/translators.py) is the place where I can see the translators.
Answers: username_1: Please check out the master branch of `sos-papermill` and `sos-notebook` and test if this works now.
username_1: A problem here is that the parameters cell will always be a `SoS` cell (at least there is no way to specify the kernel from the command line), so your notebook will not work.
![image](https://user-images.githubusercontent.com/9889312/65050970-e168ac80-d92d-11e9-9f9a-225488005bfb.png)
It is possible to let the "parameters" cell use the kernel of its previous cell (in this case `Python3`), but then a different translator will have to be used. The key question here is whether the translator knows the other meta information of the cell.
username_1: According to [this line of code](https://github.com/nteract/papermill/blob/master/papermill/parameterize.py#L78), papermill only uses the global kernel name (`sos`) for the translation of parameters, so there is no way to enable a cell-kernel-specific parameter translator.
username_1: Note that we could try to be clever and add `%put param --to kernel` in the `injected-parameter` cell. However, as shown in the following example, papermill collects all parameters and injects a single cell, so the `%put` idea will fail if there are multiple `parameters` cells with different kernels.
![image](https://user-images.githubusercontent.com/9889312/65053900-51793180-d932-11e9-9834-be5a73c01f98.png)
username_0: Hi, thanks for the super quick turnaround. It just works! We will test this in detail and report back with any further issues if seen. I think the suggestion to have **parameters** in the **SoS** kernel is a fair one and fits many use cases. And then by using the **%magics** of the SoS kernel, we can achieve the flow.
Status: Issue closed
username_1: I realized that at least for now papermill only supports one parameters cell (nteract/papermill#328). That is to say we do not have to worry about multiple parameters defined in multiple kernels, so a simple `%put` magic should do the trick. I have submitted a patch to support non-SoS parameters.
```
papermill --engine sos with_non_sos_param.ipynb sos1.ipynb -y '{"x": "Good Bye"}'
```
![image](https://user-images.githubusercontent.com/9889312/65065846-da02cc80-d948-11e9-8917-c449c953f169.png)
username_0: Oh that is very clever. You are awesome!
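Separately from the sos-papermill fix, on stacks where the original PapermillException still appears, papermill's public translator registry can map the `sos` kernel to its Python translator; a hedged client-side sketch (treating SoS parameter cells as Python syntax is an assumption that holds for simple scalar values):

```python
import papermill as pm
from papermill.translators import papermill_translators, PythonTranslator

# Register Python-style parameter translation for the "sos" kernel/language.
papermill_translators.register("sos", PythonTranslator)

pm.execute_notebook(
    "U1.ipynb",
    "U1_o.ipynb",
    parameters={"x": 11},
    engine_name="sos",
)
```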
DeepRank/deeprank
657319674
Title: Missing grid_info and grid_shape explanation in docs/tutorial_deeplearning.rst Question: username_0: **Describe the bug**
In docs/tutorial_deeplearning.rst, line 28, when generating the 'data_set', grid_info is not specified.

**Actual Results or Error Info**
```
Traceback (most recent call last):
  File "dl_try.py", line 51, in <module>
    dict_filter={'IRMSD':'<4. or >10.'})
  File "/home/dariomarzella/deeprank/deeprank/learn/DataSet.py", line 202, in __init__
    self.process_dataset()
  File "/home/dariomarzella/deeprank/deeprank/learn/DataSet.py", line 277, in process_dataset
    self.get_grid_shape()
  File "/home/dariomarzella/deeprank/deeprank/learn/DataSet.py", line 813, in get_grid_shape
    f'Impossible to determine sparse grid shape.\n'
ValueError: Impossible to determine sparse grid shape.
If you are not loading a pretrained model, specify grid_shape or grid_info
```

**Additional Context**
I have added the grid_info to the code block as it is in "test/test_learn.py" line 45 in commit #0ce8fd8 to branch doc_DM, but we may need a proper explanation of this feature in the tutorial.
Answers: username_1: Me too! I'm also stuck on this problem and have no idea how to resolve it so far.
username_0: Hello username_1, thanks for posting your issue here! Could I ask exactly where you ran into this problem, so that I can reproduce it? While running the tutorial in the documentation (https://deeprank.readthedocs.io/en/latest/tutorial3_learning.html ) or while running a different script you made? In the meantime, you can check the documentation for the [DataSet class](https://deeprank.readthedocs.io/en/latest/deeprank.learn.html#module-deeprank.learn.DataSet). There you can find a short explanation of what ```grid_info``` and ```grid_shape``` are. I also just noticed there is a typo in there; I will fix it asap.
username_0: Hello, it's a pleasure to help! Unfortunately, I cannot see your attachment; could you please double-check that you added it to your message, or re-attach it? I think the issue can be solved by simply giving the correct arguments to grid_info, but I will only be able to tell once I see the code.
username_0: Don't worry, it's a pleasure :) No, not really... can you just attach the Google Colab file?
username_0: Do not worry about the reply time; we are all busy and it can always take some time to reply. Although I am not sure why ```grid_shape=[30,30,30]``` is not working, I think you can solve it with grid_info. Earlier in your code, you defined the dictionary ```grid_info``` with number of points, resolution and atom type. You should be able to use that same dictionary to build your dataset, so instead of ```grid_shape=[30,30,30]```, use ```grid_info=grid_info```. I just modified your Google Colab notebook; please run it and let me know if the error is solved or not :)
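For readers hitting the same ValueError, a hedged sketch of passing `grid_info` to `DataSet`; the key names and values are assumptions modeled on test/test_learn.py and should be tuned to your grids:

```python
from deeprank.learn import DataSet

# Assumed grid specification: grid points per axis and resolution per axis.
grid_info = {
    'number_of_points': [10, 10, 10],
    'resolution': [3., 3., 3.],
}

data_set = DataSet(
    'train.hdf5',                        # illustrative database file
    grid_info=grid_info,
    dict_filter={'IRMSD': '<4. or >10.'},
)
```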
davidharting/personal-site
611425655
Title: Post > Async as an implementation detail Question: username_0: # Summary
This is nothing new, but I was really delighted with concurrency in Golang. I have found a few posts online about how async/await pollutes your functions with a "color." I think it could be interesting to contrast Node.js concurrency with Golang concurrency and note how important it is that in Golang you write all code the same way, while under the hood a function can wait, pass messages, etc.
ZsgsDesign/NOJ
1024736498
Title: Bug with page adaptation Question: username_0: When the split-screen function is used on a small screen, the page footer bar does not adapt correctly, as shown in the figure below:
![image](https://user-images.githubusercontent.com/51751659/137062844-869405ac-6c36-4b3d-a68a-446ebc0491e7.png)
github-vet/rangeloop-pointer-findings
771465094
Title: ianlewis/memcached-operator: vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go; 3 LoC Question: username_0: [Click here to see the code in its original context.](https://github.com/ianlewis/memcached-operator/blob/42b24e3426c19f6a2031c45f3ead02f8d277b79c/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go#L78-L80)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>

```go
for _, v := range schema.PatternProperties {
	s.walkSchema(&v)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 42b24e3426c19f6a2031c45f3ead02f8d277b79c
Status: Issue closed
ClimbsRocks/data-formatter
162910747
Title: Error: ValueError: '"continuous"' is not in list Question: username_0: I'm experiencing an interesting error:
```
message from Python: *********************************************************************
message from Python:
message from Python: Warning, we have received a value in the first row that is not valid:
message from Python: "continuous"
message from Python: Please remember that the first row must contain information describing that column of data
message from Python: Acceptable values are: "ID", "Output Category", "Output Multi-Category", "Output Regression", "Continuous", "Categorical", "Date", "IGNORE", "Validation Split", and "NLP", though they are not case sensitive.
message from Python:
message from Python: The column index of this unexpected value is:
message from Python: 0
message from Python: The entire row that we received is:
message from Python: ['"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"output category"', ' ']
message from Python: *********************************************************************
message from Python: This is an error that prevents the rest of the prorgram from running. Please fix and run machineJS again.
```
I know that this error occurs when I'm running machineJS, but it also occurs with data-formatter installed via `npm install -g data-formatter`. The `.csv` files I'm using look like this (only the first lines, and yes, those are from the current numer.ai):

training data (numerai_training_data.csv):
```
"Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Continuous","Output Category",
"feature1","feature2","feature3","feature4","feature5","feature6","feature7","feature8","feature9","feature10","feature11","feature12","feature13","feature14","feature15","feature16","feature17","feature18","feature19","feature20","feature21","target"
0.86864091396194,0.506736891211661,0.612936323346674,0.938594725439847,0.497599118270575,0.666396780090536,0.39187077660641,0.727678764938448,0.150861110163878,0.772584165855727,0.689308577276422,0.860138928538823,0.214899033241811,0.629553604161714,0.242945084419877,0.0733669816596503,0.275066846363697,0.445346760764699,0.508648861192553,0.230880938243752,0.594808578272208,1
0.187006901664161,0.830565721067219,0.50777134657673,0.346875532490266,0.41332329728442,0.470310621632215,0.948287627170581,0.253222134175955,0.825946417585556,0.596589174930343,0.579960501169059,0.763485328236244,0.723233338462968,0.298057535056738,0.729876115901926,0.808066360958326,0.364541079113559,0.573676732740155,0.561999610847449,0.395796299453029,0.337658911516832,0
```
Here is also the tournament data (numerai_tournament_data.csv):
```
'ID','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous','Continuous'
"t_id","feature1","feature2","feature3","feature4","feature5","feature6","feature7","feature8","feature9","feature10","feature11","feature12","feature13","feature14","feature15","feature16","feature17","feature18","feature19","feature20","feature21" 19778,0.652450941772408,0.454574228014018,0.270804859628407,0.161097880875707,0.905690304463858,0.295944221546047,0.163038610622454,0.853296442428432,0.181040164079697,0.524846886624367,0.405589768854382,0.300021012452414,0.942145182118905,0.332197669804339,0.763536894453461,0.673533824569794,0.524846098121362,0.180003787962373,0.929883021866594,0.54392047168455,0.543158600587601 21465,0.560970703270285,0.51002996638114,0.51939683127632,0.113067422344801,0.183254285878405,0.45550499202652,0.845135463201596,0.411922967229636,0.77756563362742,0.900449910639058,0.915049964295094,0.996133302998115,0.316080113778575,0.313864881224719,0.802118173321136,0.84367550559471,0.637792172884529,0.574702376141301,0.25118322548224,0.611590593003519,0.919855546585812 ``` Having a look at the validatoin.py line 69, it tells me that the expected values are one of those: ``` ['id','continuous','groupby continuous','categorical','groupby categorical','date','groupby date','ignore', 'validation split', 'nlp'] ``` So I don't understand, why (from validatoin.py line 81): ``` ['"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"continuous"', '"output category"', ' '] ``` don't match? The `joinDataDescription = [x.lower() for x in row]` in join.py works fine as we know form the error message, but why every string wrapped in " " and ' ', like this: '"continuous"'? Shouldn't it be only 'continuous' to math the objects? I also tried it with "Continuous" instead of 'Continuous', but that didn't work either. I guess there is something wrong with the join.py: ``` try: dialect = csv.Sniffer().sniff(joinFile.read(2048)) joinFile.seek(0) except: dialect = 'excel' joinRows = csv.reader(joinFile, dialect) ``` I guess the dialect selection could screw up here. Anyone else having an idea or a similar problem? PS: I'm new to python, sorry! And thanks for your great npm packages, all of them, that makes it really easy to start with machine learning! Answers: username_0: Also someone should write the python version in the README. username_0: Same occurs in concat.py line 164 as ``` idHeader = testingHeader[ testingDataDescription.index('id') ] ``` can't find '"id"' (note the quotes).
ember-cli/ember-cli
677633192
Title: wrong argument when calling this._super.included.apply(this, arguments) in addon's included method Question: username_0: An addon's included method receives an app instance, as written here: https://github.com/ember-cli/ember-cli/blob/master/lib/broccoli/ember-app.js#L631
However, when an addon calls the `this._super.included.apply(this, arguments)` method, passing the app's instance in `arguments`, other addons receive the addon's instance instead of the app's. It happens because of https://github.com/ember-cli/ember-cli/blob/master/lib/models/addon.js#L769
Is it by design, or is it a bug?
Answers: username_1: The current implementation is intentional (if a bit annoying in retrospect). The idea is that the "thing that is including you" is what is passed in. When your addon is directly included in the project, the argument will be an instance of the class in `lib/broccoli/ember-app.js`; when your addon is a dependency of another addon, the argument will be an instance of `lib/models/addon.js` (it is also the value of `this.parent` FWIW).
username_1: I'd definitely be open to better documentation here (on both the API docs side and the addon authoring guides side) to make this clearer.
username_0: @username_1 thank you for the clarification! Should we close this issue, or do you want to track the documentation improvements here?
nex3z/ToggleButtonGroup
394197732
Title: androidx version Question: username_0: Hi, can you provide a Jetpack version? We already migrated and don't want to add a dependency on pre-AndroidX artifacts.
Answers: username_1: Of course.
```
implementation 'com.username_1:toggle-button-group-x:0.1.0'
```
username_0: thank you
Status: Issue closed
username_2: So you are going to update the x version too, right?
username_3: @username_1 I think this should be included in the Readme.
username_1: @username_3 I've updated the readme with both versions. Further releases will be based on AndroidX.
ccseer/Seer
227550500
Title: Can command-line arguments or single-instance forwarding be added? Question: username_0: I normally use Total Commander, where pressing F3 opens the file viewer. If the viewer is configured as seer.exe, the first file can be viewed normally when Seer is not yet running, but the second file misbehaves: sometimes it is viewed normally, and sometimes Seer shows a "Seer is already running" notification and the file cannot be viewed. Total Commander actually invokes `Seer.exe filepath`.
Could `seer.exe filepath` view the file directly when Seer is already running, or could a command-line argument be added to view a file directly? That would make this work not only in Total Commander, but also in Everything and a whole range of other software.
Answers: username_1: That already works.
username_1: Seer.exe file_path
username_1: Which version?
username_0: Version 1.4. I think this is more of a bug: the "Seer is already running" notice keeps popping up, and the trigger pattern is hard to pin down.
username_1: ![image](https://cloud.githubusercontent.com/assets/15963166/25880799/1be8c5ba-356c-11e7-8d44-0ab3a7308089.png)
username_1: Close Seer and then try `Seer.exe -?`
username_1: Apart from the very first run after installation, clicking many times may cause duplicate instances; after that it should basically never trigger. Could you try again?
username_1: Hm.. there does seem to be a bug..
username_0: I quit Seer and reran it, and it still happens often. Sometimes it works many times in a row and then the notice suddenly pops up; sometimes the notice pops up repeatedly and then it suddenly works again. There is really no pattern to it.
Please check in the code under which conditions the system notification "Seer is already running." is shown.
Also, the `-?` flag is uncommon; it's usually `-h`, `/?`, or `--help`, so I probably tried and assumed there were no command-line arguments.
username_1: I'll fix this part in the next version.
username_1: Which OS are you on? I remember this part used to work.
username_0: Windows version 1703 (OS build 15063.138)
username_1: OK... a new version will be released before roughly mid-June.
Status: Issue closed
username_1: Fixed in 1.5. No problems this time.
flutter/flutter
368542913
Title: Runtime Exception while getting firebase notification.
Question: username_0: Here is the list of the packages that I am using in my project; I think all are up to date.
```
dependencies:
  flutter:
    sdk: flutter
  firebase_auth: "^0.5.20"
  firebase_messaging: "^2.0.0"
  firebase_analytics: "^1.0.3"
  firebase_admob: "^0.6.1"
  firebase_core: "^0.2.5+1"
  firebase_database: "^1.0.4"
  firebase_storage: "^1.0.3"
  google_sign_in: "^3.2.1"
  image_picker: "^0.4.10"
  url_launcher: "^4.0.1"
  sqflite: "^0.12.1"
  path: "^1.6.2"
  path_provider: "^0.4.1"
  async_loader: "^0.1.2"
  flutter_image: "^1.0.0"
  flutter_search_bar: "^2.0.7"
  shared_preferences: "^0.4.3"
  groovin_material_icons: ^1.1.5
```
Here is the issue that I am getting when I receive a notification.
```
java.lang.NoSuchMethodError: No static method zzah()Lcom/google/firebase/iid/zzau; in class Lcom/google/firebase/iid/zzau; or its super classes (declaration of 'com.google.firebase.iid.zzau' appears in /data/app/com.brainants.meroshare-PlTXYg9vGisDS0Gq8tbcVA==/base.apk)
E/AndroidRuntime(23824): at com.google.firebase.messaging.FirebaseMessagingService.zzb(Unknown Source:7)
E/AndroidRuntime(23824): at com.google.firebase.iid.zzb.onStartCommand(Unknown Source:16)
E/AndroidRuntime(23824): at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:3802)
E/AndroidRuntime(23824): at android.app.ActivityThread.access$1800(ActivityThread.java:207)
E/AndroidRuntime(23824): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1779)
E/AndroidRuntime(23824): at android.os.Handler.dispatchMessage(Handler.java:106)
E/AndroidRuntime(23824): at android.os.Looper.loop(Looper.java:193)
E/AndroidRuntime(23824): at android.app.ActivityThread.main(ActivityThread.java:6863)
E/AndroidRuntime(23824): at java.lang.reflect.Method.invoke(Native Method)
E/AndroidRuntime(23824): at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:537)
E/AndroidRuntime(23824): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
W/OPDiagnose(23824): getService:OPDiagnoseService NULL
D/OSTracker(23824): OS Event: crash
I/Process (23824): Sending signal. PID: 23824 SIG: 9
Lost connection to device.
```
Answers: username_1: Please add the output of `flutter doctor -v`.
username_0:
```
[✓] Flutter (Channel dev, v0.9.6, on Mac OS X 10.14.1 18B50c, locale en-US)
    • Flutter version 0.9.6 at /Users/muskan/flutter
    • Framework revision 13684e4f8e (8 days ago), 2018-10-02 14:15:17 -0400
    • Engine revision f6af1f20ba
    • Dart version 2.1.0-dev.6.0.flutter-8a919426f0
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
    • Android SDK at /Users/muskan/Library/Android/sdk
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-28, build-tools 28.0.3
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)
    • All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.0)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 10.0, Build version 10A255
    • ios-deploy 1.9.2
    • CocoaPods version 1.5.3
[✓] Android Studio (version 3.2)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin version 29.0.2
    • Dart plugin version 181.5616
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)
[✓] VS Code (version 1.28.0)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 2.19.0
[✓] Connected device (1 available)
    • ONEPLUS A6000 • 3fabac74 • android-arm64 • Android 9 (API 28)
• No issues found!
```
username_2: Me too.
username_3: me too, I think. My error:
```
java.lang.NoSuchMethodError: No static method zzah()Lcom/google/firebase/iid/zzau; in class Lcom/google/firebase/iid/zzau; or its super classes (declaration of 'com.google.firebase.iid.zzau' appears in /data/app/xxxx.xxxx.xxxxx-xxxxxxxxx==/base.apk:classes2.dex)
```
flutter doctor result:
```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel beta, v0.9.4, on Linux, locale en_US.UTF-8)
[✓] Android toolchain - develop for Android devices (Android SDK 27.0.3)
[✓] Android Studio (version 3.1)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] IntelliJ IDEA Ultimate Edition (version 2018.2)
[!] VS Code (version 1.26.1)
[✓] Connected devices (1 available)
```
pubspec.yaml
```
dependencies:
  flutter:
    sdk: flutter
  firebase_auth: ^0.5.20
  font_awesome_flutter: 8.0.1
  scoped_model: ^0.3.0
  cloud_firestore: ^0.8.1
  google_sign_in: ^3.0.5
  firebase_messaging: ^2.0.0
```
username_4: same here, just did a flutter upgrade and it crashes every time a message is received. pubspec:
```
google_sign_in: ^3.2.1
firebase_analytics: ^1.0.3
firebase_auth: ^0.5.20
firebase_database: ^1.0.4
firebase_storage: ^1.0.3
firebase_messaging: ^2.0.0
firebase_remote_config: ^0.0.5
firebase_performance: ^0.0.8
```
Flutter doctor
```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel beta, v0.9.4, on Mac OS X 10.13.6 17G65, locale en-ES)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.0-rc2)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.0)
[✓] Android Studio (version 3.2)
[!] IntelliJ IDEA Ultimate Edition (version 2018.3 EAP)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] Connected devices (2 available)
! Doctor found issues in 1 category.
```
username_0: Fixed temporarily by adding `implementation 'com.google.firebase:firebase-messaging:17.3.3'` in app/build.gradle inside dependencies.
username_4: I currently have all firebase dependencies in my gradle to fix the problem (everything that I use):
```
implementation 'com.google.firebase:firebase-core:16.0.4'
implementation 'com.google.firebase:firebase-database:16.0.3'
implementation 'com.google.firebase:firebase-storage:16.0.3'
implementation 'com.google.firebase:firebase-auth:16.0.4'
implementation 'com.google.firebase:firebase-messaging:17.3.3'
implementation 'com.google.firebase:firebase-config:16.0.1'
implementation 'com.google.firebase:firebase-perf:16.1.2'
```
username_5: I'm facing a similar problem, and the above solution doesn't help:
```
/Users/rosius/flutter/.pub-cache/hosted/pub.dartlang.org/firebase_messaging-3.0.1/android/src/main/java/io/flutter/plugins/firebasemessaging/FlutterFirebaseInstanceIDService.java:11: warning: [deprecation] FirebaseInstanceIdService in com.google.firebase.iid has been deprecated
import com.google.firebase.iid.FirebaseInstanceIdService;
^
/Users/rosius/flutter/.pub-cache/hosted/pub.dartlang.org/firebase_messaging-3.0.1/android/src/main/java/io/flutter/plugins/firebasemessaging/FlutterFirebaseInstanceIDService.java:13: warning: [deprecation] FirebaseInstanceIdService in com.google.firebase.iid has been deprecated
public class FlutterFirebaseInstanceIDService extends FirebaseInstanceIdService {
^
/Users/rosius/flutter/.pub-cache/hosted/pub.dartlang.org/firebase_messaging-3.0.1/android/src/main/java/io/flutter/plugins/firebasemessaging/FlutterFirebaseInstanceIDService.java:20: warning: [deprecation] getToken() in FirebaseInstanceId has been deprecated
intent.putExtra(EXTRA_TOKEN, FirebaseInstanceId.getInstance().getToken());
^
/Users/rosius/flutter/.pub-cache/hosted/pub.dartlang.org/firebase_messaging-3.0.1/android/src/main/java/io/flutter/plugins/firebasemessaging/FlutterFirebaseInstanceIDService.java:26: warning: [deprecation] onTokenRefresh() in FirebaseInstanceIdService has been deprecated
public void onTokenRefresh() {
```
The main error now:
```
W/FirebaseMessagingPlugin(24411): getToken, error fetching instanceID:
W/FirebaseMessagingPlugin(24411): java.io.IOException: SERVICE_NOT_AVAILABLE
W/FirebaseMessagingPlugin(24411): at com.google.firebase.iid.zzr.zza(Unknown Source:66)
W/FirebaseMessagingPlugin(24411): at com.google.firebase.iid.zzr.zza(Unknown Source:79)
W/FirebaseMessagingPlugin(24411): at com.google.firebase.iid.zzu.then(Unknown Source:4)
W/FirebaseMessagingPlugin(24411): at com.google.android.gms.tasks.zzd.run(Unknown Source:5)
W/FirebaseMessagingPlugin(24411): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
W/FirebaseMessagingPlugin(24411): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
W/FirebaseMessagingPlugin(24411): at java.lang.Thread.run(Thread.java:764)
I/flutter (24411): Push Messaging token: null
```
Flutter Doctor
```
[✓] Flutter (Channel master, v1.2.2-pre.21, on Mac OS X 10.13.6 17G65, locale en-CM)
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
    ✗ Android license status unknown.
[!] iOS toolchain - develop for iOS devices (Xcode 10.0)
    ✗ Verify that all connected devices have been paired with this computer in Xcode.
      If all devices have been paired, libimobiledevice and ideviceinstaller may require updating.
To update with Brew, run: brew update brew uninstall --ignore-dependencies libimobiledevice brew uninstall --ignore-dependencies usbmuxd brew install --HEAD usbmuxd brew unlink usbmuxd brew link usbmuxd brew install --HEAD libimobiledevice brew install ideviceinstaller [!] Android Studio (not installed) [✓] VS Code (version 1.31.0) [✓] Connected device (1 available) ! Doctor found issues in 3 categories. ``` username_0: Closing this issue, it's getting no traction these days. Status: Issue closed username_6: @username_0 This issue has been moved to https://github.com/FirebaseExtended/flutterfire/issues/615. Any further collaboration will be done there.
cossacklabs/themis
381971392
Title: RustThemis Question: username_0: It's hard to use Themis from programs in Rust without bindings. Let's fix this. - [ ] Move source from rust-themis into this repo - [ ] Integrate Rust builds into Makefile - [ ] Integrate Rust builds into CircleCI - [ ] Integrate Rust binding into cross-language tests - [ ] Add CircleCI badges to crates.io READMEs - [ ] Make API documentation available (on docs.rs or elsewhere) - [ ] Write a language guide for Rust on wiki - [ ] Update top-level README to declare Rust support Answers: username_0: Almost there! 🏁 username_0: We did it! 🎉 Status: Issue closed
jutzig/jabylon
225894161
Title: ClassCastException: org.jabylon.properties.impl.ProjectLocaleImpl Question: username_0: When trying to pick a second language, i'm facing this exception: ``` WARN o.e.j.h.HttpParser HttpParser Full for SCEP@60fc294d{l(/0:0:0:0:0:0:0:1:50038)<->r(/0:0:0:0:0:0:0:1:8080),d=true,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection@780faa6e,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-10,l=0,c=-3},r=217} INFO o.j.r.u.w.c.s.VersionConfigSection$VersionConfig Adding ProjectLocale asdasd to test/master ERROR o.a.w.DefaultExceptionMapper Unexpected error occurred org.apache.wicket.WicketRuntimeException: Method onFormSubmitted of interface org.apache.wicket.markup.html.form.IFormSubmitListener targeted at [Form [Component id = form]] on component [Form [Component id = form]] threw an exception at org.apache.wicket.RequestListenerInterface.internalInvoke(RequestListenerInterface.java:268) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.RequestListenerInterface.invoke(RequestListenerInterface.java:216) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.core.request.handler.ListenerInterfaceRequestHandler.invokeListener(ListenerInterfaceRequestHandler.java:240) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.core.request.handler.ListenerInterfaceRequestHandler.respond(ListenerInterfaceRequestHandler.java:226) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.request.cycle.RequestCycle$HandlerExecutor.respond(RequestCycle.java:814) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.request.RequestHandlerStack.execute(RequestHandlerStack.java:64) ~[org.apache.wicket.wicket-request-6.0.0.jar:6.0.0] at org.apache.wicket.request.cycle.RequestCycle.execute(RequestCycle.java:253) [org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.request.cycle.RequestCycle.processRequest(RequestCycle.java:210) [org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.request.cycle.RequestCycle.processRequestAndDetach(RequestCycle.java:281) [org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.protocol.http.WicketFilter.processRequest(WicketFilter.java:188) [org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:245) [org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.jabylon.rest.ui.JabylonFilter.doFilter(JabylonFilter.java:89) [org.jabylon.rest.ui-1.2.0.jar:na] at org.eclipse.equinox.http.registry.internal.FilterManager$FilterWrapper.doFilter(FilterManager.java:173) [org.eclipse.equinox.http.registry-1.1.200.jar:na] at org.eclipse.equinox.http.servlet.internal.FilterRegistration.doFilter(FilterRegistration.java:81) [org.eclipse.equinox.http.servlet-1.1.300.jar:na] at org.eclipse.equinox.http.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:35) [org.eclipse.equinox.http.servlet-1.1.300.jar:na] at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:132) [org.eclipse.equinox.http.servlet-1.1.300.jar:na] at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:76) [org.eclipse.equinox.http.servlet-1.1.300.jar:na] at javax.servlet.http.HttpServlet.service(HttpServlet.java:848) [javax.servlet-3.0.0.jar:na] at org.eclipse.equinox.http.jetty.internal.HttpServerManager$InternalHttpServiceServlet.service(HttpServerManager.java:386) [org.eclipse.equinox.http.jetty-3.0.100.jar:na] at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:669) [org.eclipse.jetty.servlet-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:457) [org.eclipse.jetty.servlet-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) [org.eclipse.jetty.servlet-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.Server.handle(Server.java:368) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861) [org.eclipse.jetty.http-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240) [org.eclipse.jetty.http-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) [org.eclipse.jetty.server-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628) [org.eclipse.jetty.io-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) [org.eclipse.jetty.io-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) [org.eclipse.jetty.util-8.1.10.jar:8.1.10.v20130312] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) [org.eclipse.jetty.util-8.1.10.jar:8.1.10.v20130312] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79] Caused by: java.lang.reflect.InvocationTargetException: null at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_79] at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_79] at org.apache.wicket.RequestListenerInterface.internalInvoke(RequestListenerInterface.java:258) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] ... 
39 common frames omitted Caused by: java.lang.ClassCastException: org.jabylon.properties.impl.ProjectLocaleImpl cannot be cast to org.jabylon.properties.ResourceFolder at org.jabylon.properties.util.PropertyResourceUtil.getOrCreateFolder(PropertyResourceUtil.java:285) ~[org.jabylon.properties-1.2.0.jar:na] at org.jabylon.properties.util.PropertyResourceUtil.createMissingChildren(PropertyResourceUtil.java:255) ~[org.jabylon.properties-1.2.0.jar:na] at org.jabylon.properties.util.PropertyResourceUtil.addNewLocale(PropertyResourceUtil.java:245) ~[org.jabylon.properties-1.2.0.jar:na] at org.jabylon.rest.ui.wicket.config.sections.VersionConfigSection$VersionConfig.applyLocaleList(VersionConfigSection.java:249) ~[org.jabylon.rest.ui-1.2.0.jar:na] at org.jabylon.rest.ui.wicket.config.sections.VersionConfigSection$VersionConfig.commit(VersionConfigSection.java:217) ~[org.jabylon.rest.ui-1.2.0.jar:na] at org.jabylon.rest.ui.wicket.config.SettingsPanel$1.commit(SettingsPanel.java:159) ~[org.jabylon.rest.ui-1.2.0.jar:na] at org.jabylon.rest.ui.wicket.config.SettingsPanel$1.onSubmit(SettingsPanel.java:149) ~[org.jabylon.rest.ui-1.2.0.jar:na] at org.apache.wicket.markup.html.form.Form$9.component(Form.java:1249) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.markup.html.form.Form$9.component(Form.java:1243) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.util.visit.Visits.visitPostOrderHelper(Visits.java:274) ~[org.apache.wicket.wicket-util-6.0.0.jar:6.0.0] at org.apache.wicket.util.visit.Visits.visitPostOrder(Visits.java:245) ~[org.apache.wicket.wicket-util-6.0.0.jar:6.0.0] at org.apache.wicket.markup.html.form.Form.delegateSubmit(Form.java:1242) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.markup.html.form.Form.process(Form.java:924) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.markup.html.form.Form.onFormSubmitted(Form.java:770) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] at org.apache.wicket.markup.html.form.Form.onFormSubmitted(Form.java:703) ~[org.apache.wicket.wicket-core-6.0.0.jar:6.0.0] ... 43 common frames omitted ``` Answers: username_1: I think this has been fixed already in this commit: https://github.com/username_1/jabylon/commit/b965c3a78e5aa8fc43cb2a64e85ab22ea1ab0105 Could you try the nightly build to see if that resolves the issue for you? Thanks http://jenkins-jabylon.rhcloud.com/job/jabylon/lastSuccessfulBuild/artifact/releng/karaf/target/jabylon.zip username_1: Actually, I just pushed the 1.3.0 release, so you can try that instead of the nightly: https://github.com/username_1/jabylon/releases/download/1.3.0/jabylon.zip
btellstrom/DD2480_group_19_CI
409332708
Title: Bug: ValueError: skip must be >= 0 in fetch_n_last
Question: username_0: `fetch_n_last` currently has no way of checking that its input is valid, or that it can actually fetch `n` builds (there might be fewer). In the case where `n` is larger than the number of builds, it should instead simply fetch all existing builds.
Status: Issue closed
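A minimal sketch of the clamping behaviour described above (Python is assumed here; the `builds` list stands in for however the CI stores its build history):

```python
def fetch_n_last(builds, n):
    """Return up to the n most recent builds, newest first."""
    if n < 0:
        raise ValueError("n must be >= 0")
    # Clamp n so we never ask for more builds than exist, which previously
    # surfaced as "ValueError: skip must be >= 0" deeper in the code.
    n = min(n, len(builds))
    return builds[len(builds) - n:][::-1]

# Example: asking for 5 builds when only 2 exist returns both.
print(fetch_n_last(["b1", "b2"], 5))  # ['b2', 'b1']
```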
common-workflow-language/cwltool
359698427
Title: mixing of stdout/stderr with --parallel
Question: username_0: ## Expected Behavior
- the stdout of different steps run in parallel when using `--parallel` shouldn't be mixed
- it would be nice if `cwltool` had an option to output a log file

## Actual Behavior
When running a workflow with the `--parallel` flag, stdout/stderr from steps running in parallel get mixed together. Would it be possible to prevent this in some way? Maybe by waiting until the step is complete before logging? Another solution could be prepending the job name to each line of the output (so each line would start with `[job workflow_name] `) when a flag is used, like `--timestamps`. This could be useful even when `--parallel` isn't used. It would also be useful if `cwltool` had a flag to output logging information to a file as well as stdout/stderr.

## Workflow Code
I don't have a simple workflow showing this that I can share. I can create one if that is useful.

## Your Environment
* cwltool version: 1.0.20180912090223

Answers: username_1: Hello @username_0, this is a good request and similar to one that @FarahZKhan and @stain made for CWLProv (`--provenance`). A question for all three of you: how do you want the logs organized? Separate files for STDERR and STDOUT? A combined file? With timestamps?
username_0: I could go either way about separate files. It seems that cwltool's STDOUT is just the JSON with output information, so whether that ends up at the end of the log or in a separate file is fine. I like the `--timestamps` option (though I'd probably always keep it on myself). It'd be nice to have options for two logging levels, depending on how verbose you want them to be:
- basic info including step/workflow start and completion status, where output is cached, and the commands run. Note that the command is divided over many lines; I'd prefer if each line were prepended with `[timestamp] [description] `, instead of what is currently done.
```
[2018-09-12 19:51:59] [workflow main] starting step make_output
[2018-09-12 19:51:59] [step make_output] start
[2018-09-12 19:51:59] [job make_output] Output of job will be cached in /tmp/subjects/cache/d12d4af643fc08c35841382b68d0cf6b
[2018-09-12 19:51:59] [job make_output] /tmp/path/name$ command \
argument1 \
argument2 ... \
[2018-09-12 19:52:05] [job make_output] completed success
[2018-09-12 19:52:05] [step make_output] completed success
```
- the complete terminal output of the commands that are run, with each line prepended with `[timestamp] [description] `. This would be useful for sorting through the log later. Also, if log output from different parallel steps is mixed, the lines at least carry information to help identify where they come from.

If these 2 levels are separate files, that may also be useful (but I'm happy with one). The first is useful for just checking that the correct commands ran and seeing where the workflow currently is. The second one is useful for debugging.
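To illustrate the prefixing idea from this thread: the core of it is tagging every line with a timestamp and job name before it is written, so interleaved parallel output stays attributable. A rough Python sketch (not cwltool's actual logging code):

```python
import datetime
import sys

def prefixed_writer(job_name, stream):
    """Return a function that writes lines tagged with time and job name."""
    def write(line):
        stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        stream.write(f"[{stamp}] [job {job_name}] {line.rstrip()}\n")
    return write

log = prefixed_writer("make_output", sys.stdout)
log("completed success")
# -> [2018-09-12 19:52:05] [job make_output] completed success
```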
ampproject/amphtml
320378178
Title: Please whitelist pro.fontawesome.com
Question: username_0: Please whitelist `pro.fontawesome.com`. This is required for Font Awesome 5 Pro subscribers to use our Pro CDN. Note: `use.fontawesome.com` is still the correct URL for our Font Awesome 5 Free CDN. So, both `use.` and `pro.` need to be whitelisted. —Mike, on the Font Awesome dev team
Answers: username_1: /to @username_2
username_2: Are the paths the same as use.fontawesome.com? We currently whitelist fonts matching the regular expression:
```
https://use\.fontawesome\.com/releases/v([0-9]+\.?)+/css/(all|brands|solids|fontawesome)\.css
```
username_0: @username_2 I see a couple of mistakes in that regex for our Free CDN. And there would be a few differences for the Pro CDN.
**Free CDN**
Two mistakes:
1. `solids` should be `solid` (singular)
2. missing `regular`
So, corrected, it would be: `(all|brands|solid|regular|fontawesome)`
**Pro CDN**
An example:
```
https://pro.fontawesome.com/releases/v5.0.12/css/solid.css
```
The options for the filename at the end are: `(all|brands|solid|regular|light|fontawesome)`
---
The above are only for loading Font Awesome 5 using our [Webfonts with CSS](https://fontawesome.com/get-started/web-fonts-with-css) method. We also have a new [SVG with JS method](https://fontawesome.com/get-started/svg-with-js), which uses just JavaScript to render SVG elements (no font files). Would AMP developers be able to add the necessary `<script>` tags for that? If so, it would entail a different whitelist pattern, since it's sourcing JS instead of linking to CSS.
username_2: I'd leave that question to @username_3, but I suspect the answer is no, since that means the fonts won't be part of the browser's preload scan. In the meantime, I'll fix the regular expression for the non-JavaScript path.
username_3: yeah, we have no way to support JS based font loading.
username_0: To clarify, I don't think we're talking about JS based font loading, just loading JS that finds `<i>` elements in the DOM and replaces them with `<svg>` elements. Not possible in AMP?
username_3: It would need a new AMP extension. But that sounds like it makes the client do a lot of work for something that could be done server side.
username_0: Server-side rendering is also a valid scenario for us, but we also support animations and such that are data bound, which happens client side. So, if making a new AMP extension is a Thing, that may be what we should explore. We already have what may be analogous components for use in other frameworks like React, Ember, Angular and Vue.
username_2: The whitelist has been updated.
Status: Issue closed
username_0: Thank you.
username_4: Hi, I am using an older version of FontAwesome (4.7) which gets supplied at this URL: `https://use.fontawesome.com/{random string of 10 letters and numbers}.css` Is it possible for this to be whitelisted also?
username_5: cc @username_2 to triage @username_4 's request
username_2: Happy to implement, but @username_3 for approval of the change.
username_3: Ship it.
Status: Issue closed
username_2: Resolved.
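Putting the thread's corrections together, the resulting whitelist patterns would look roughly like this (written here as Python regular expressions purely for readability; the real rules live in the AMP validator configuration):

```python
import re

# Free CDN: "solid" is singular, and "regular" is added.
FREE = re.compile(
    r"https://use\.fontawesome\.com/releases/v([0-9]+\.?)+/css/"
    r"(all|brands|solid|regular|fontawesome)\.css"
)

# Pro CDN: same shape, plus the Pro-only "light" style.
PRO = re.compile(
    r"https://pro\.fontawesome\.com/releases/v([0-9]+\.?)+/css/"
    r"(all|brands|solid|regular|light|fontawesome)\.css"
)

assert PRO.match("https://pro.fontawesome.com/releases/v5.0.12/css/solid.css")
```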
searchkit/searchkit
148295396
Title: searchOnLoad doesn't work when useHistory is false
Question: username_0: The issue title pretty much says it all: searchOnLoad doesn't work when useHistory is false. The first load is done in `listenToHistory`, which isn't used when `useHistory=false`.
Answers: username_1: I believe this is impacting #4 as well. The problem is how to populate the history on the server side. Any ideas?
SwissDataScienceCenter/renku-python
582966008
Title: unexpected keyword argument 'access_conditions' when importing zenodo dataset in Jupyterlab terminal Question: username_0: <!-- Note: for support questions, please use our discourse (https://renku.discourse.group/) --> **Describe the bug** Unexpected keyword argument 'access_conditions' when importing zenodo dataset in Jupyterlab terminal **Link to project** https://renkulab.io/projects/ACE-ASAID/seawater_stable_oxygen_isotopes/environments This is private but I can add a user if need be. **To Reproduce** Steps to reproduce the behavior: - Start new environment in renkulab - Open terminal in jupyterlab - `renku dataset import 10.5281/zenodo.1494915` **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots and/or execution output** Please select an action by typing its name (open, print, ignore) [ignore]: ignore Traceback (most recent call last): File "/home/jovyan/.local/bin/renku", line 8, in <module> sys.exit(cli()) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/cli/exception_handler.py", line 128, in main self._handle_github() File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/cli/exception_handler.py", line 175, in _handle_github getattr(self, '_process_' + value)() File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/cli/exception_handler.py", line 119, in main result = super().main(*args, **kwargs) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/cli/exception_handler.py", line 90, in main return super().main(*args, **kwargs) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/click/core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/click/core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/cli/dataset.py", line 644, in import_ download_file_fn=download_file_with_progress File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/core/commands/client.py", line 89, in new_func result = ctx.invoke(method, client, *args, **kwargs) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/core/commands/dataset.py", line 430, in import_dataset record = provider.find_record(uri) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/core/commands/providers/zenodo.py", line 514, in find_record return self.find_record_by_doi(uri) File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/core/commands/providers/zenodo.py", line 521, in find_record_by_doi return self.get_record(ZenodoProvider.record_id(doi.url)) File 
"/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/core/commands/providers/zenodo.py", line 527, in get_record return ZenodoRecordSerializer(**response.json(), zenodo=self, uri=uri) File "<attrs generated init renku.core.commands.providers.zenodo.ZenodoRecordSerializer>", line 9, in __init__ File "/home/jovyan/.local/pipx/venvs/renku/lib/python3.7/site-packages/renku/core/commands/providers/zenodo.py", line 160, in _metadata_converter return ZenodoMetadataSerializer(**data) TypeError: __init__() got an unexpected keyword argument 'access_conditions' **Run environment (please complete the following information):** **Renku version:** 0.8.2 **OS:** Linux (#1 SMP Mon Oct 22 10:40:32 EDT 2018) **Python:** 3.7.3 **Renkulab:** Jupyterlab **Browser:** Firefox **Additional context** - The following was displayed when creating the environment: Docker Image not available The base image will be used instead. This may work fine, but it may lead to unexpected errors. Answers: username_1: Hey Jen, Thank you for submitting this issue to us. The problem has been partially addressed for this already in our latest renku (0.9.1) version, however even in the new version the underlying problem exists (incorrect user error). We currently don't support protected datasets on any of our providers. For now, I have addressed the issue (PR here: #1112) in a way that you should receive a nice user message in a case where you wish you import a protected dataset. I have created a followup ticket #1113 to build this feature for all of the providers we support, so you can track the progress for that feature there. For now, I will mark this issue to be closed once we merge #1112. username_0: Thanks very much Sam. Good to know it is related to the protected datasets - that's useful, thanks! Status: Issue closed
Jordan141/articleblog
759336413
Title: Categories listing view Question: username_0: Mobile: ![image](https://user-images.githubusercontent.com/2772942/101473628-587ecc00-394a-11eb-8960-b9af61b2c617.png) ![image](https://user-images.githubusercontent.com/2772942/101473682-69c7d880-394a-11eb-8bdd-758c6fc9a48b.png) Desktop: ![image](https://user-images.githubusercontent.com/2772942/101473730-7a784e80-394a-11eb-801c-484aed840543.png) ![image](https://user-images.githubusercontent.com/2772942/101473706-73e9d700-394a-11eb-9cb9-2057ef0341bd.png) Answers: username_0: Blocked by: ![image](https://user-images.githubusercontent.com/2772942/101473777-8c59f180-394a-11eb-852d-f497dabf0de6.png) #41 Status: Issue closed
ThomDietrich/miflora-mqtt-daemon
457021696
Title: CMD in Dockerfile
Question: username_0: Hi Thom, first of all, many thanks for your great work! I've got a problem running the daemon within Docker. If I'm right, miflora-mqtt-daemon.py has a `--config` argument, which is instead passed as `--config_dir` in the CMD of the Dockerfile. As far as I can see, the daemon is not started if the entry in the Dockerfile is not fixed.
Answers: username_1: Hey Pietro! Did you see https://github.com/username_1/miflora-mqtt-daemon#usage-with-docker? Does this solve your issue?
username_0: Hi Thom, thanks for the reply! I followed the usage notes. The only problem I've found (and solved) is that the line
```
CMD [ "python3", "./miflora-mqtt-daemon.py", "--config_dir", "/config" ]
```
in the Dockerfile tells Docker to run `python3 ./miflora-mqtt-daemon.py --config_dir /config`, but the execution help specifies that the right command should include the `--config` argument (not `--config_dir`):
```
python3 /opt/miflora-mqtt-daemon/miflora-mqtt-daemon.py --config /opt/miflora-config
```
To make the daemon start, I fixed the Dockerfile. Afterwards, everything worked perfectly. Kind regards! Pietro
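Going by the issue's description, the corrected CMD would presumably differ only in the flag name:

```Dockerfile
# Before: the daemon rejects the unknown flag and exits
# CMD [ "python3", "./miflora-mqtt-daemon.py", "--config_dir", "/config" ]

# After: matches the --config argument the daemon's help output describes
CMD [ "python3", "./miflora-mqtt-daemon.py", "--config", "/config" ]
```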
mawww/kakoune
766589247
Title: Built-in clipboard integration
Question: username_0: ### Feature
A substantial number of users complain that the editor doesn't integrate with their system's clipboard out of the box. If that's not going to happen, is there anything we can do to meet them halfway?
I was thinking that the editor hardcodes multiple commands already (for example, to integrate with terminal programs like tmux and kitty…), and has some selection logic (`termcmd`). Could we not provide users with a command-based (or user-mode-based? other?) API that works along the same lines, so that they at least wouldn't have to hunt for the right `xclip`/`xsel`/… invocation? It would then be up to them to declare mappings in their user configuration.
HTH.
Answers: username_1: I'm currently experimenting with kakoune and this was my first real stumble. I had expected to have the system clipboard available via the `"*` and `"+` registers, as in vim. I think this is by far the most elegant way to handle it.
username_0: How about a command that sets a global `NormalIdle` hook which populates a register (`*` or `+` or other) with the contents of the system clipboard? The user calls the command in their configuration, and the editor does everything else.
username_1: If my assumptions about NormalIdle are correct, this wouldn't be a good idea. Counterintuitively, the content of the clipboard is not fetched when you copy, but when you paste. The act of copying merely tells the OS to notify that window the next time something is pasted. It sounds like, with this solution, if you copied (or selected) something large, kakoune would be constantly requesting that process to send over the contents of the clipboard. Furthermore, if you accessed the register from another mode, it would be outdated. I think a solution where you can specify a hook to populate a register when it is accessed would be better here.
username_2: #859 was about register hooks. mawww implemented the `RegisterModified` hook, but I guess the corresponding "about to read register value" hook was too intrusive.
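Until something like this is built in, the common user-side workaround looks roughly like the following kakrc mappings (assuming an X11 session with xclip installed; wl-copy/wl-paste or pbcopy/pbpaste would be the equivalents elsewhere):

```
# yank the current selections to the system clipboard
map global user y '<a-|>xclip -i -selection clipboard<ret>' -docstring 'copy to clipboard'
# insert the system clipboard after the current selection
map global user p '<a-!>xclip -o -selection clipboard<ret>' -docstring 'paste from clipboard'
```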
d33pspace/RenewalWebsite
854630294
Title: High severity security vulnerability alert from GitHub.
Question: username_0: We received an email from GitHub regarding a high severity security vulnerability.
![image](https://user-images.githubusercontent.com/42002784/114204300-9e48b100-998b-11eb-9d74-1a8a4907d5ab.png)
Full details are available here: https://github.com/advisories/GHSA-52p9-v744-mwjj
Answers: username_1: Manually updated package versions to resolve this issue.
username_0: Closing the ticket for now.
Status: Issue closed
leekelleher/umbraco-environment-indicator
97798956
Title: Check Database Environment Rather Than Hostname
Question: username_0: You appear to just look at the hostname in the URL to ascertain which environment is currently displayed. However, I recommend storing this in the database, as the same domain might end up serving any environment. This actually happened to me recently. Basically, one machine was hosting multiple sites, and one of the sites was taken offline. Because another site had a wildcard binding, the domain for the environment that got taken offline began pointing to the other environment, and there was no indication to the user that they were using the alternate environment. If this stored the knowledge of which environment was which in the database, it would be more effective in guarding against a scenario like this.
Answers: username_1: Hi @username_0, thanks for the suggestion. Yeah, currently the plugin is purely JS, so it only checks the browser's domain/hostname. I think it would be good to explore your suggestion some more. The first question it raises for me is: how would the environment be initially set in the database?
username_0: You could set it to a randomly generated color if that environment has no color set yet. Then, you could provide a dashboard (or some mechanism) to allow the user to change the color for that environment.
username_1: I was thinking about your original question last night... I'm unsure about how your domains/environments are set up. Could you provide any examples/details on the domain/environment structure?
username_0: So, I have Server1. Server1 has two websites set up in IIS, Website1 and Website2. Those each point to different folders on the file system. I have two domains, www.site.com and stage.site.com. Website1 has a wildcard binding matching any request sent to Server1's IP address. Website2 has a domain binding matching any request sent to stage.site.com. So, Website1 gets traffic for www.site.com and Website2 gets traffic for stage.site.com. However, when I turned off Website2, traffic for stage.site.com started being directed to Website1. There was no indication of this to the CMS user.
Status: Issue closed
username_1: Closing this issue off. It's an interesting question, but I consider it currently out of scope for this package.
OData/WebApi
627359510
Title: TypeScript alternative for the OData v4 Client Code Generator?
Question: username_0: Can anyone suggest an alternative to the OData v4 Client Code Generator (https://marketplace.visualstudio.com/items?itemName=bingl.ODatav4ClientCodeGenerator) that generates TypeScript? Alternatively, does anyone know of some other reasonable way of using an OData server from TypeScript?
arithmetric/aws-lambda-ses-forwarder
148360415
Title: Add instructions for alerting user if lambda script fails
Question: username_0: The forwarder basically works, but I don't know what happens if the Lambda fails. It'd be great to get some alert.
Answers: username_1: You can set an SNS topic in the DLQ field under Advanced settings while creating the Lambda function, and subscribe to that SNS topic. I guess that should help your case instead of building it into the code, since AWS Lambda can handle it and fire off a notification to you with the error. Even though I am late, this is just for the reference of future users.
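For readers who prefer scripting the console steps above, the same DLQ wiring can likely be done with the AWS CLI; the function name and topic ARN below are placeholders you would substitute with your own (and the SNS topic must already exist and have a subscription):

```
aws lambda update-function-configuration \
  --function-name ses-forwarder \
  --dead-letter-config TargetArn=arn:aws:sns:us-east-1:123456789012:lambda-failures
```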
NoteMakoti/notemakoti.github.io
101191109
Title: sup Question: username_0: https://youtu.be/UjsIPtRNjvI Answers: username_1: Great job on Jankyscroll, btw. It'll really help me fulfill my goal of infecting all my projects with an ounce of rotten malevolence. username_0: haha, thanks a bunch. i am all about infecting things with rotten malevolence
googlesamples/google-photos
1092087507
Title: This sample is not compatible with the latest version of Node.js (v10.23.0)
Question: username_0: Dear Google developer community, I have a humble request. I am trying to run this sample application under the latest version of Node.js. However, I keep running into several errors in app.js. For example, I had to change the following lines to move forward:
```
//import bodyParser from 'body-parser';
var bodyParser = require('body-parser');
```
Now I am stuck at this line in app.js:
```
const dirname = path.dirname(new URL(import.meta.url).pathname);
                                             ^^^^
SyntaxError: Cannot use 'import.meta' outside a module
```
Would someone please help me migrate this great sample app to the latest version of Node.js? I am not a Node.js expert. I am sure someone with Node.js expertise has already migrated it. Would you kindly share it here for people like me? Thanks in advance. -Arun
Answers: username_0: Does anyone have a working version of this sample on Windows? Please share the zip file. I am stuck.
username_1: How are you running the app? If you are using an IDE, it may not be picking up the configuration in the `package.json` file. Have you tried running it directly from the command line per the steps in the [README](https://github.com/googlesamples/google-photos/blob/main/REST/PhotoFrame/README.md#set-up), using the `npm` and `node` commands directly? Alternatively, it sounds like your environment isn't quite set up for JavaScript ES6. What version of Node.js are you using? Regarding running the sample on Windows, it sounds like you might be running into issue #21 - see the suggested workaround there while we are working on updating this app.
username_0: Thank you so much Jan-Felix for your reply. Please find answers to your questions below. 1) I am using `node app.js` from the Node.js command prompt, just like it is documented in the README. 2) I have tried 2 versions of Node.js: 1) 10.23.0 (latest) 2) 7.8.0, as documented in the README. I get errors on both. 3) I did an `npm install` in both versions. Does that not set up JavaScript ES6 automatically? Is there a separate step to install JavaScript ES6? Please let me know. 4) Issue #21 will help only once I can compile the solution successfully. I am stuck well before that. Any help will be deeply appreciated. Regards and thanks in advance. -Arun
username_1: @username_0 Can you try the latest Node.js LTS release? (see https://nodejs.org/en/ - version `16.13.1 LTS`) I have just been able to successfully run the application using this version on Windows. Also make sure that you have navigated into `REST/PhotoFrame` before running `npm install` and then `node app.js`. We will update the README; the referenced version is quite old and may not work correctly with our latest changes.
username_0: Thank you so much Jan-Felix for your reply. You are my hero for the day. I was able to run the solution after installing Node.js version 16.13.1 LTS. Hope it will be helpful to other potential developers who may try to run this on Windows.
Status: Issue closed
username_1: That's great to hear! Glad you were able to get it working. We are going to update the README to specifically point developers towards the latest LTS version, that should hopefully help others in the future.
dotnet/roslyn
664553525
Title: Explicit implementation on interface with nullable reference support
Question: username_0: I've got an interface which supports nullable reference types:
```
interface ICache
{
    bool TryOpen(string name, [NotNullWhen(true)] ICacheItem? cacheItem);
}
```
In Visual Studio 16.6.5, if I implement the interface and let Visual Studio implement the interface for me, I get this:
```
class TestCache : ICache
{
    public bool TryOpen(string name, [NotNullWhen(true)] ICacheItem? cacheItem)
    {
        throw new NotImplementedException();
    }
}
```
However, if I opt to explicitly implement the interface, then Visual Studio doesn't add the nullable attributes:
```
class TestCache : ICache
{
    bool ICache.TryOpen(string name, ICacheItem? cacheItem)
    {
        throw new NotImplementedException();
    }
}
```
This causes a warning/error complaining that "cacheItem" doesn't match the implemented member.
Answers: username_1: I've checked what's happening in the debugger, and it turns out that when an interface is being implemented explicitly, all attributes for method parameters are being omitted:
```
// src\Workspaces\CSharp\Portable\CodeGeneration\ParameterGenerator.cs:144
private static SyntaxList<AttributeListSyntax> GenerateAttributes(
    IParameterSymbol parameter, bool isExplicit, CodeGenerationOptions options)
{
    if (isExplicit)
    {
        return default;
    }

    var attributes = parameter.GetAttributes();
    if (attributes.Length == 0)
    {
        return default;
    }

    return AttributeGenerator.GenerateAttributeLists(attributes, options);
}
```
Blaming the file didn't help, since this logic has been there from the very start of the Roslyn repo. After I removed that check, the quick action started working as expected, but several tests broke in `Microsoft.CodeAnalysis.Editor.CSharp.UnitTests.ImplementInterface.ImplementInterfaceTests`: `TestAttributesExplicit`, `TestIUnknownIDispatchAttributes2`, `TestOptionalDateTime2`. If you look closely at the test suite code, you'll notice that omitting attributes is expected behavior for the "Implement interface explicitly" option: there are instances of these tests for the implicit option, and those have attributes placed in the generated classes, but the explicit versions lack them. I wonder what the logic behind that behavior was. There's a link to an internal Microsoft TFS item, http://vstfdevdiv:8080/DevDiv2/DevDiv/_workitems/edit/530265 - maybe it says something about it.
google/trillian
219003391
Title: Document intended usage of the errors package Question: username_0: The `errors` package could use some extra documentation to indicate how it should be used. In particular, it should state that the error codes should describe the problem from the perspective of an RPC client, rather than from the perspective of the function itself. For example, in [this code review](https://github.com/google/trillian/pull/480#discussion_r108390398), a function (`NewFromPrivatePEMFile`) could not find the specified file and so returned an error with code `errors.NotFound`. However, review feedback indicated that it should return `errors.FailedPrecondition` instead, because the caller of the RPC that led to that function call (`CreateTree`) would consider it a failed precondition. Answers: username_1: Fair enough. I'll have a go at it soon-ish. :) Status: Issue closed
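To make the intended convention concrete, here is a small illustrative sketch (written in Python with hypothetical names mirroring `NewFromPrivatePEMFile`; Trillian itself is a Go project) of reporting a local file-system failure with the status code the RPC caller would expect:

```python
import grpc  # assumes the grpcio package, used here only for its status codes

class RpcError(Exception):
    """Carries a status code chosen from the RPC caller's viewpoint."""
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def new_from_private_pem_file(path):
    # Called while serving CreateTree. Locally this failure is "file not
    # found", but the CreateTree caller sent a well-formed request whose
    # server-side precondition (a usable key file) was not met.
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError as err:
        raise RpcError(grpc.StatusCode.FAILED_PRECONDITION,
                       f"private key file missing: {path}") from err
```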
rikonor/meteor-simple-mutex
127951489
Title: I don't get it
Question: username_0: Sorry, I don't get what the package is about. Can you clarify things for me, please?
Answers: username_1: @username_0 Sure thing. A mutex is a mechanism to lock a resource. Basically, this attaches to whichever Mongo collection you enable it for - then you can lock individual documents using `doc.lock()`. Once a document is locked, all other clients will be unable to update it. Let's say you have a scenario where multiple users have access to a single resource, e.g. an editable article. By using `article.lock()`, one user can make sure the article will stay the same until he decides to save whatever changes he made.
Status: Issue closed
kubernetes-sigs/apiserver-network-proxy
964689510
Title: Konnectivity server leaks memory and free sockets
Question: username_0: Hello, we are using ANP version 0.0.21 with the patch https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/179 (I know it is already in 0.0.22, but upgrading is currently not an option). We have a cluster with 1 ANP server and 4 ANP agents. Sometimes we see that the server starts consuming a lot of memory, and once it reaches the limit it restarts, and the whole thing starts from the beginning. The same is true for the number of free sockets. Here are the Prometheus graphs for memory and socket numbers:
<img width="980" alt="image" src="https://user-images.githubusercontent.com/60390128/128824608-6210a139-a312-4317-97cc-a0c3afd5e040.png">
<img width="980" alt="image" src="https://user-images.githubusercontent.com/60390128/128824465-82097f80-5ce3-40f0-b456-1fe911f725b7.png">
From the server logs I only see the "usual" things that could be seen in previous versions:
```
I0806 18:25:02.385941 1 server.go:736] "Close streaming" agentID="94a18c0b-efdf-476b-a8fa-10e0c0057345" connectionID=57
I0806 18:25:02.388576 1 server.go:723] "Received CLOSE_RSP" connectionID=85
E0806 18:25:02.388662 1 server.go:731] "CLOSE_RSP send to client stream error" err="tls: use of closed connection" connectionID=85
I0806 18:25:02.388764 1 server.go:283] "Remove frontend for agent" frontend=&{Mode:http-connect Grpc:<nil> HTTP:0xc001ff1c00 connected:0xc0000afbc0 connectID:85 agentID:1c6f546a-3e02-46fc-a061-dcb014d7969e start:{wall:13851803065787238258 ext:26239485524435 loc:0x218a780} backend:0xc000952fa0} agentID="1c6f546a-3e02-46fc-a061-dcb014d7969e" connectionID=85
I0806 18:25:02.388843 1 server.go:736] "Close streaming" agentID="1c6f546a-3e02-46fc-a061-dcb014d7969e" connectionID=85
```
and
```
I0806 18:25:02.428722 1 server.go:680] "Received DIAL_RSP" random=2394930260326768039 agentID="1c6f546a-3e02-46fc-a061-dcb014d7969e" connectionID=91
I0806 18:25:02.428840 1 server.go:262] "Register frontend for agent" frontend=&{Mode:http-connect Grpc:<nil> HTTP:0xc000d33500 connected:0xc0002f16e0 connectID:91 agentID:1c6f546a-3e02-46fc-a061-dcb014d7969e start:{wall:13851803065793093486 ext:26239491382902 loc:0x218a780} backend:0xc000952fa0} agentID="1c6f546a-3e02-46fc-a061-dcb014d7969e" connectionID=91
I0806 18:25:02.428960 1 server.go:709] "Received data from agent" bytes=840 agentID="1c6f546a-3e02-46fc-a061-dcb014d7969e" connectionID=79
I0806 18:25:02.429150 1 server.go:718] "DATA sent to frontend"
I0806 18:25:02.429187 1 tunnel.go:121] "Starting proxy to host" host="172.17.57.44:9090"
...
I0806 18:25:02.447214 1 tunnel.go:161] "Stopping transfer to host" host="172.17.57.44:9090" agentID="94a18c0b-efdf-476b-a8fa-10e0c0057345" connectionID=61
I0806 18:25:02.450691 1 server.go:723] "Received CLOSE_RSP" connectionID=91
E0806 18:25:02.450768 1 server.go:731] "CLOSE_RSP send to client stream error" err="tls: use of closed connection" connectionID=91
I0806 18:25:02.450834 1 server.go:283] "Remove frontend for agent" frontend=&{Mode:http-connect Grpc:<nil> HTTP:0xc000d33500 connected:0xc0002f16e0 connectID:91 agentID:1c6f546a-3e02-46fc-a061-dcb014d7969e start:{wall:13851803065793093486 ext:26239491382902 loc:0x218a780} backend:0xc000952fa0} agentID="1c6f546a-3e02-46fc-a061-dcb014d7969e" connectionID=91
I0806 18:25:02.450866 1 server.go:736] "Close streaming" agentID="1c6f546a-3e02-46fc-a061-dcb014d7969e" connectionID=91
```
Did you face this problem? Or any idea what should be changed / investigated? Thanks!
Answers: username_1: One issue I'm following up on right now has to do with losing connections during a dial timeout. The most obvious symptom would be lines in the server log containing the string connectionID=0. Do you see this? (I have a fix for GRPC checked in but I think a bit extra needs to be done for http-connect which I believe is what you are using) username_1: It would also be interesting to see what pprof says as your ramping up on memory and sockets. username_0: @username_1 yes we are using http-connect. I cannot see this `connectionID=0` in server log at all. I can only access the logs from the server, but will try to reproduce it in my acc to run pprof and will get back the results to you. Could link the PR you are referring that fixes for GRPC? thank you username_1: https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/253. I think the handling of client.PacketType_DIAL_CLS should also close the http connection but I wanted to get more testing before adding that change. However if your not seeing connectionID=0 then I think you seeing a different issue. username_2: Hi! It is possible that there were some networking issue between the konnectivity server and agent, but the traffic was high. I see this PR: https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/270 It is possible we have found this issue. @username_0 I think based on this, we could try to simulate this somehow. What do you think? Thank you! Adam username_3: Note this is still occuring on the latest master (although not as frequently): Here's an example of one pod that has leaked to 12 Gigs server side with logs [ks-logs.txt.zip](https://github.com/kubernetes-sigs/apiserver-network-proxy/files/7505846/ks-logs.txt.zip) ``` apiVersion: v1 kind: Pod metadata: annotations: cni.projectcalico.org/containerID: 27ceee3a2676bca3248e6d0dc5e0f4bce5cf46888f7778303d8856f80159a810 cni.projectcalico.org/podIP: 172.30.94.13/32 cni.projectcalico.org/podIPs: 172.30.94.13/32 k8s.v1.cni.cncf.io/network-status: |- [{ "name": "k8s-pod-network", "ips": [ "172.30.94.13" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "k8s-pod-network", "ips": [ "172.30.94.13" ], "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2021-11-08T19:03:51Z" generateName: konnectivity-server-6b9dc5df9b- labels: app: konnectivity-server hypershift.openshift.io/control-plane-component: konnectivity-server hypershift.openshift.io/hosted-control-plane: master-rgshc3-1-3 pod-template-hash: 6b9dc5df9b name: konnectivity-server-6b9dc5df9b-25zfm namespace: master-rgshc3-1-3 ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: konnectivity-server-6b9dc5df9b uid: bc238276-0488-467a-97e6-efa0780832e8 resourceVersion: "27486240" uid: 15304030-ca9d-49cd-9e39-d76c583782b6 spec: affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - preference: matchExpressions: - key: hypershift.openshift.io/control-plane operator: In values: - "true" weight: 50 - preference: matchExpressions: [Truncated] containerStatuses: - containerID: cri-o://cb9ab8a0b1b9dccfa896724b2f391d9fea1545378788d54a74a6ec7da0d9e5e7 image: registry.ng.bluemix.net/armada-master/rh-apiserver-network-proxy:332d57fea23de99c3cbe58de36623866d8819bde imageID: registry.ng.bluemix.net/armada-master/rh-apiserver-network-proxy@sha256:ad8a3787c5adef59e99346d82a3db7d235f3951f86ee3f3d0b2e47eb11ac2b1b lastState: {} name: konnectivity-server ready: true restartCount: 0 started: 
true
    state:
      running:
        startedAt: "2021-11-08T19:04:06Z"
  hostIP: 10.93.93.190
  phase: Running
  podIP: 172.30.94.13
  podIPs:
  - ip: 172.30.94.13
  qosClass: Burstable
  startTime: "2021-11-08T19:03:51Z"
```
username_4: /remove-lifecycle stale
username_4: @username_3 which commit is your image based off of? It's hard to tell from the manifest you shared because it's just the image SHA.
username_4: Reposting from https://github.com/kubernetes-sigs/apiserver-network-proxy/issues/276#issuecomment-1044774850: An interesting observation I noticed while trying to reproduce this issue is that restarting Konnectivity Server will not completely clear the open files and goroutines. So if I restart just the proxy server and immediately check the goroutine and open files count, it's a bit lower but still very high. But if I restart kube-apiserver, I see the goroutines and open files reset across both components. So at this point I'm thinking this is mainly a leak in konnectivity-client. Going to run some validation against https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/325 today, which has some potential fixes in konnectivity-client, but would appreciate it if anyone else that is able to reproduce the issue can also test it. Note that the fix does require a rebuild of kube-apiserver, so it might be difficult to test.
username_4: @username_0 in your cluster do you also see kube-apiserver memory spike in a similar pattern, or is it only the ANP server/agent?
username_2: @username_4 We are using Konnectivity with the http-connect configuration, and NOT gRPC mode. I think the konnectivity-client is only for gRPC, is that right? Of course, we could check, but then please share the commands you are interested in so we can make the results comparable. Thanks, Adam
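For anyone following up on the pprof suggestion in this thread: assuming the proxy server exposes the standard Go net/http/pprof endpoints (check the server's flags for how to enable profiling; the host and port below are placeholders), the usual capture commands while memory is ramping up would be:

```
# interactive heap profile
go tool pprof http://<server-host>:<pprof-port>/debug/pprof/heap

# dump all goroutine stacks, useful for spotting leaked streams
curl -s "http://<server-host>:<pprof-port>/debug/pprof/goroutine?debug=2" > goroutines.txt
```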
datamade/nyc-councilmatic
139340220
Title: checking search results & highlighting of search terms
Question: username_0: Just double-checking something that may be in progress. In the Chicago version, searching a term from the homepage shows an excerpt of the bill text and highlights the search term in blue. In the NYC version, there is currently no excerpt, hence no highlighting: just a [list of Intro numbers](https://nyc.councilmatic.org/search/?q=parking+). Did excerpts fall out recently in NYC? And the highlighting was mentioned as a desirable feature, so that's good. Rightyo!
Answers: username_0: This came up in some recent user testing; we still seek to bring over bill text excerpts as in the Chicago version.
Status: Issue closed
google/automl
1036931793
Title: AP is below 0.2, but F1 is above 0.8
Question: username_0: Hi, my team has tested with efficientdet-d0 as the base model, fine-tuned for tomato detection. We've noticed that whenever we trained and re-trained, even with a large tomato dataset (2500+ images) and augmentation, the AP still remained below 0.2. However, when we checked the inference results, the F1 score is relatively OK (above 0.8), and the confusion matrix also shows many true positives and some false negatives. May I know why this is happening? Is this normal?
Answers: username_0: @username_2 Hi, I did some research on the confusion matrix, and the only reference I could find is here: https://github.com/Sujith93/Tensorflow2_Custom_objectionDetection/tree/master. I somehow modified automl's model_inspect.py (in the function saved_model_inference()) to generate the predicted CSV, based on Sujith93's test_pred_with_csv_gen.ipynb. After this CSV is generated, we must compare the predicted results with the CSV generated from the test set. This comparison is based on Sujith93's confusion_martix_object_detection.py.
username_0: @username_1 Hi, these are my train_and_eval parameters:
```
!python3 main.py --mode=train_and_eval \
    --train_file_pattern=train.record \
    --val_file_pattern=validate.record \
    --model_name=efficientdet-d0 \
    --model_dir=trained-model/tomato-d0-G-20 \
    --ckpt=efficientdet-d0 \
    --train_batch_size=10 \
    --eval_batch_size=8 \
    --eval_samples=300 \
    --num_examples_per_epoch=1200 --num_epochs=20 \
    --hparams=config.yaml
```
In config.yaml:
```
num_classes: 1
label_map: {1: tomato}
jitter_min: 0.8
jitter_max: 1.2
mixed_precision: true
```
username_1: https://github.com/google/automl/issues/1114#issuecomment-957426145
username_2: Hi @username_0, thanks for sharing. It is almost the same algorithm that I found on GitHub. I find it more comfortable to have the validation set in tfrecord format, but thanks anyway. Now I have a better understanding of the pipeline for calculating the confusion matrix.
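A compact sketch of the comparison step described above, matching predicted boxes to ground-truth boxes by IoU and counting TP/FP/FN (a simplified, single-class version of the referenced confusion-matrix script; greedy matching is assumed):

```python
def iou(a, b):
    """IoU of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def confusion_counts(preds, truths, thresh=0.5):
    """Greedily match predictions to ground truth and count TP/FP/FN."""
    tp, matched = 0, set()
    for p in preds:
        best = max(range(len(truths)),
                   key=lambda i: iou(p, truths[i]), default=None)
        if best is not None and best not in matched and iou(p, truths[best]) >= thresh:
            tp += 1
            matched.add(best)
    fp = len(preds) - tp
    fn = len(truths) - tp
    return tp, fp, fn
```

One plausible reading of the symptom, for what it's worth: COCO-style AP integrates precision over all confidence thresholds (and, for the default metric, over IoU thresholds from 0.5 to 0.95), so a flood of low-confidence false positives or loosely fitting boxes can drag AP down even when F1 at a single chosen confidence threshold looks good.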
eslint/eslint
113939901
Title: Conflict between no-undef-init and const Question: username_0: I need to define a variable representing the value `undefined`, a value that won't be modified afterward. I would like to use `const`. By writing:
```
const FOO = undefined;
```
I hit the error: `It's not necessary to initialize 'foo' to undefined`
But if I remove the value:
```
const FOO;
```
I get the error: `Parsing error: const must be initialized`
Answers: username_1: I think you need to disable the rule for this line. This is an edge case that the rule is supposed to catch.
username_2: @username_0 Why would anyone need to do such a thing?
username_3: @username_0 can you please provide the information requested?
username_0: I agree it's not common, and I currently disable the rule just for this line. Here is the use case: we have several functions which under some conditions return `undefined`, and we want to factor out that value, which happens to be `undefined` today but could be changed to `null` or an error code tomorrow.
```
const NO_RESULT = undefined;

function f1(){
  if(...) {
    ...
    return NO_RESULT;
  }
  ...
}

function f2(){
  if(...) {
    ...
    return NO_RESULT;
  }
  ...
}
```
username_3: @username_0 I mean the information requested in the first comment.
username_2: @username_0 That seems reasonable. I would be in favour of this rule ignoring `const` lexical declarations.
username_3: Working on this.
Status: Issue closed
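For readers hitting the same conflict before the rule change landed, a minimal sketch of the per-line disable mentioned above (the function body is illustrative, not from the original report):
```ts
// eslint-disable-next-line no-undef-init
const NO_RESULT = undefined;

function findFirstEven(values: number[]): number | undefined {
  for (const v of values) {
    if (v % 2 === 0) return v;
  }
  // The sentinel can later be swapped for null or an error code in one place.
  return NO_RESULT;
}
```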
meetjspl/poznan
220989984
Title: Do we really need WASM? Question: username_0: On why WASM can help the web in many cases, and on its advantages (and drawbacks) over JS.
Answers: username_1: A timely topic. I'd also suggest a comparison with asm.js, because the idea is similar but the consequences differ a lot.
username_2: I'm completely on board with this topic; I'd love to listen and learn more.
username_3: The recordings are already up at: http://events.pozoga.eu/meet-js-34/
username_4: @username_0 could you upload the slides?
username_0: [Presentation](https://docs.google.com/presentation/d/1e_AsiB2TjbycWxkAgZQf3ogeCSXUqsGEER6LPMjTx94/edit?usp=sharing) Done :)
Status: Issue closed username_4: Great, thank you!
marktext/marktext
1071438878
Title: add `Custom Command` as image-uploader Question: username_0: ### Describe your feature request
Add `Custom Command` as an image uploader, like `typora`:
<img width="714" alt="屏幕截图 2021-12-05 163838" src="https://user-images.githubusercontent.com/51874567/144739762-c9bd4c1a-42b2-480f-96dd-1421243fd97e.png">
Test result:
<img width="716" alt="屏幕截图 2021-12-05 164713" src="https://user-images.githubusercontent.com/51874567/144739859-cca9c711-5fb6-4696-b5a6-0721e6cb8457.png">
https://user-images.githubusercontent.com/51874567/144745226-a05040fb-44e3-4b30-9c36-240789fae8e5.mp4
Answers: username_1: Duplicate of #2687. Also, if a PR can fix this, please do submit one. The developers have had little time for updates recently, but that situation is only temporary. This repository receives few PRs as it is, and more PRs would also lighten the developers' load. Thanks for the support!
username_0: I can see PRs being merged, but there hasn't been a release in a year, so merging a PR won't help much in the short term; after all, you can't expect every user to compile the app themselves, right?
username_2: +1 to this change. It can be very handy because I'd rather upload my images directly to my private blog.
username_0: This is a Windows installer built after adding the `Custom Command` option; if you are interested, you can try it. https://www.aliyundrive.com/s/ta6CS1qsxCi
username_1: At the moment, third-party image uploaders are not on MarkText's roadmap. So the sooner a PR is submitted, the sooner the issue can be fixed. It's fine if you plan to open a PR later, but users might then have to wait several more releases to get this feature. Thanks for your interest in and support of MarkText!
username_0: I've submitted the PR; let's close this issue once it's merged.
Status: Issue closed
username_3: This feature was implemented in #2100 and will be available in our next release.
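For context, a minimal sketch of what a custom-command uploader typically does (hypothetical glue code, not MarkText's implementation from #2100): run a user-configured command with the image path as its argument and treat the command's stdout as the uploaded URL.
```ts
import { exec } from 'child_process';

// Hypothetical helper: `command` is the user-configured upload command
// (e.g. "picgo upload"); the image path is appended as the last argument.
function uploadViaCustomCommand(command: string, imagePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    exec(`${command} "${imagePath}"`, (error, stdout) => {
      if (error) return reject(error);
      // Convention borrowed from similar tools: the command prints the
      // final URL (possibly as its last line) to stdout.
      const lines = stdout.trim().split('\n');
      resolve(lines[lines.length - 1].trim());
    });
  });
}
```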
phw/discourse-musicbrainz-onebox
141203038
Title: Support localization Question: username_0: Example: https://github.com/paviliondev/discourse-custom-wizard/tree/master/config/locales
Status: Issue closed
Answers: username_0: Localization does not work; the onebox locale is always English, even if the site language is set to something else.
username_0: https://meta.discourse.org/t/making-plugin-templates-localization-friendly/40952
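For anyone picking this up, the linked example's layout suggests locale files shaped roughly like the following (a hypothetical sketch; the file path and key names are assumptions, not this plugin's actual strings):
```yaml
# config/locales/client.en.yml (hypothetical)
en:
  js:
    musicbrainz_onebox:
      artist: "Artist"
      release: "Release"
```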
facebook/react-native
543065447
Title: Variable 'require' must be of type 'Require', but here has type 'NodeRequire' Question: username_0: When using `@types/node` with `@types/react-native`, I get the following error.
```
node_modules/@types/react-native/index.d.ts(8915,9): error TS2403: Subsequent variable declarations must have the same type. Variable 'require' must be of type 'Require', but here has type 'NodeRequire'.
```
React Native version:
```
System:
    OS: Linux 4.15 Ubuntu 18.04.3 LTS (Bionic Beaver)
    CPU: (16) x64 Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
    Memory: 17.40 GB / 31.06 GB
    Shell: 5.4.2 - /usr/bin/zsh
  Binaries:
    Node: 12.13.1 - ~/.nvm/versions/node/v12.13.1/bin/node
    Yarn: 1.19.2 - ~/.nvm/versions/node/v12.13.1/bin/yarn
    npm: 6.12.1 - ~/.nvm/versions/node/v12.13.1/bin/npm
    Watchman: 4.9.0 - /usr/local/bin/watchman
  SDKs:
    Android SDK:
      API Levels: 28, 29
      Build Tools: 28.0.3, 29.0.2
      System Images: android-28 | Intel x86 Atom_64, android-29 | Google APIs Intel x86 Atom
  IDEs:
    Android Studio: 3.5 AI-191.8026.42.35.5977832
  npmPackages:
    react: ^16.12.0 => 16.12.0
    react-native: ^0.61.5 => 0.61.5
```
## Steps To Reproduce
1. Install `@types/node` and `@types/react-native`
2. Build the TypeScript project
Describe what you expected to happen: I expect there to be no errors.
Snack, code example, screenshot, or link to a repository: https://github.com/username_0/reactant/tree/master/packages/router
Answers: username_0: I'm able to get past the error by running the following postinstall script.
_fix-types.sh_
```sh
#!/bin/sh
DIRNAME=$(dirname "$0")
sed -i 's/var require: NodeRequire/var require: NodeJS.Require/' $DIRNAME/node_modules/@types/react-native/index.d.ts
```
_package.json_
```json
{
  "scripts": {
    "postinstall": "sh fix-types.sh"
  }
}
```
username_1: These types are not strictly compatible, as the react-native env != a node env. See https://github.com/DefinitelyTyped/DefinitelyTyped/issues/15960 for more details and possible workarounds.
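If patching the declaration file is undesirable, another workaround that may help (assuming you can tolerate skipping type checks of all declaration files) is TypeScript's `skipLibCheck` compiler option:
```json
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```
This only suppresses errors inside `.d.ts` files; the two `require` declarations still conflict, so code that relies on one of them may still need to pick a single set of types.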
firebase/quickstart-java
302090842
Title: Please Help Me Question: username_0: I know this is not the right place to ask this kind of question, but I am not able to find more documentation on the following issue.
**Question 1**
.setCredentials() and Error:(33, 20) Failed to resolve: com.google.firebase:firebase-admin:11.8.0

FileInputStream serviceAccount = null;
try {
    serviceAccount = new FileInputStream("path/to/serviceAccountKey.json");
} catch (FileNotFoundException e) {
    e.printStackTrace();
}

FirebaseOptions options = new FirebaseOptions.Builder()
    **.setCredentials(GoogleCredentials.fromStream(serviceAccount))**
    .setDatabaseUrl("https://example.firebaseio.com/")
    .build();

**FirebaseApp.initializeApp(options);**

I am not able to find a solution or documentation for this. After reading multiple instructions and documentation, I found the following methods:
- public FirebaseOptions build ()
- public FirebaseOptions.Builder setApiKey (String apiKey)
- public FirebaseOptions.Builder setApplicationId (String applicationId)
- public FirebaseOptions.Builder setDatabaseUrl (String databaseUrl)
- public FirebaseOptions.Builder setGcmSenderId (String gcmSenderId)
- public FirebaseOptions.Builder setProjectId (String projectId)
- public FirebaseOptions.Builder setStorageBucket (String storageBucket)
Status: Issue closed
Answers: username_1: @username_0 the "failed to resolve" error message sounds like an issue with your Maven or Gradle configuration. Please ask this question on StackOverflow and someone can help you debug.
labzero/lunch
319715778
Title: Vote race condition causes multiple votes to be created Question: username_0: There is a race condition where:
1. POST 1 is sent
2. POST 2 is sent
3. API responds to POST 1 and sees no existing votes
4. API responds to POST 2 and sees no existing votes
5. API creates a new vote from POST 1
6. API creates a new vote from POST 2
This results in two votes for a restaurant from the same person. There is a Sequelize call to find a vote so that a 409 can be returned in the case of a conflict, followed by a separate call where a new vote is created. It would be best to combine the find and create calls into a single call, and return 409 if no new record is returned from the call.
Status: Issue closed
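A sketch of the combined call using Sequelize's `findOrCreate`, which performs the lookup and insert together; the Express handler, the `Vote` model and the field names are assumptions, not the app's actual code:
```ts
// Hypothetical route handler; Vote, userId and restaurantId are assumed names.
app.post('/votes', async (req: any, res: any) => {
  const { userId, restaurantId } = req.body;
  // findOrCreate wraps the lookup and insert in one call; with a unique index
  // on (userId, restaurantId), the losing concurrent request fails the
  // constraint and is reported back with created === false.
  const [vote, created] = await Vote.findOrCreate({
    where: { userId, restaurantId },
  });
  if (!created) {
    return res.status(409).json({ error: 'vote already exists' });
  }
  return res.status(201).json(vote);
});
```
The unique index matters: without it, `findOrCreate` alone cannot guarantee that two fully concurrent inserts are collapsed into one.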
Pure-D/code-d
172353483
Title: Better syntax highlighting for types and enums. Question: username_0: Right now there is limited syntax highlighting; for example, when declaring a variable, the type is not highlighted in a different color, and enum values are not highlighted either. It would be nice to have these values highlighted if possible.
![magic](https://cloud.githubusercontent.com/assets/8701667/17841196/24dc1f10-67e4-11e6-848e-ea736e1f244f.png)
Answers: username_1: we are using the syntax highlighting file from this repository: https://github.com/textmate/d.tmbundle/blob/master/Syntaxes/D.tmLanguage It seems like there were some updates to it; I am going to update it soon, but if it isn't fixed after that, I'm going to open an issue and a PR there to add it
username_2: Small note on your code: it is `Enum.MY_ENUM_VALUE`, not just `MY_ENUM_VALUE`.
![img](http://i.imgur.com/LI1ghDk.png)
For me it marks the `MY_ENUM_VALUE`, because it thinks it's a constant (I think). I use Solarized Dark. I guess the syntax highlighting is partly decided by the theme you use.
username_0: Ah yeah, that's true; I was thinking of an enum without a name, as used in the SDL2 Derelict library. With the proper way to write it, it still isn't highlighted, at least in the default color scheme.
username_3: The missing highlighting for class names especially turns into a problem when calling constructors. In such a case almost everything has the same color.
Left: C# Right: D
Note: I like the idea of highlighting return types in a different color (like it is done at the moment).
![example](https://user-images.githubusercontent.com/15967408/27255587-d870ee36-53a0-11e7-9f91-d020122e154f.png)
username_1: right now only in the code-d-beta package, which will be merged into master eventually, but it's fixed once it's in there. I switched to the https://github.com/ysgard/d-struct grammar now because it looks a lot better maintained, and the authors suggested I use it in the Atom extension, which I will also do
Status: Issue closed
gnosis/dex-services
651364162
Title: `pricegraph::Orderbook` Refactor Question: username_0: The `pricegraph::Orderbook` module (`pricegraph/src/orderbook.rs`) is currently well over 1,000 lines and becoming hard to navigate and to extend with new features. This issue captures the work to break it down into smaller pieces, as well as to move away from the `Orderbook` name, which is a bit overused and makes searching for code harder.
Currently on the list of things to do:
1. Rename `Orderbook` -> `Auction` (?)
2. Move primitive operations (path finding and path reducing) into their own module.
3. Implement high-level operations (reducing overlapping orders, exchange rate estimates) in separate modules and implement them directly on the `Pricegraph` root type.
ponylang/ponyc
555079819
Title: Failed assertion on consuming a tuple accessor of a function result Question: username_0: The [following code](https://playground.ponylang.io/?gist=d687e4cb60cef47817ac11f38370a348) triggers an assertion failure:
```pony
actor Main
  new create(env: Env) =>
    let a = "I am a string"
    // commenting both lines or swapping the consume for a valid recover X ... end fixes the segfault
    consume a.chop(1)._1 // should be an error, as "a.chop(1)._1" is not a single identifier
    consume fn()._1 // same thing

  fun fn(): (U8, U8) =>
    (2, 3)
```
The error (on a debug build) being:
```
Building builtin -> packages/builtin
Building . -> 20-01-25--01
src/libponyc/pass/refer.c:135: generate_multi_dot_name: Assertion `0` failed.

Backtrace:
  build/debug/ponyc(ponyint_assert_fail+0xf1) [0x55f2ac99f4f7]
  build/debug/ponyc(+0x8c1dda) [0x55f2ac8d6dda]
  build/debug/ponyc(+0x8c4285) [0x55f2ac8d9285]
  build/debug/ponyc(pass_refer+0xda) [0x55f2ac8dad9c]
  build/debug/ponyc(ast_visit+0x28c) [0x55f2ac8c797a]
  build/debug/ponyc(ast_visit+0x1d1) [0x55f2ac8c78bf]
  build/debug/ponyc(ast_visit+0x1d1) [0x55f2ac8c78bf]
  build/debug/ponyc(ast_visit+0x1d1) [0x55f2ac8c78bf]
  build/debug/ponyc(ast_visit+0x1d1) [0x55f2ac8c78bf]
  build/debug/ponyc(ast_visit+0x1d1) [0x55f2ac8c78bf]
  build/debug/ponyc(ast_visit+0x1d1) [0x55f2ac8c78bf]
  build/debug/ponyc(ast_visit+0x1d1) [0x55f2ac8c78bf]
  build/debug/ponyc(+0x8b1db6) [0x55f2ac8c6db6]
  build/debug/ponyc(+0x8b223a) [0x55f2ac8c723a]
  build/debug/ponyc(ast_passes_program+0x28) [0x55f2ac8c74c5]
  build/debug/ponyc(program_load+0xc1) [0x55f2ac8c0f59]
  build/debug/ponyc(+0x89e9e3) [0x55f2ac8b39e3]
  build/debug/ponyc(main+0x1ce) [0x55f2ac8b3c46]
  /usr/lib/libc.so.6(__libc_start_main+0xf3) [0x7f23c7b0b153]
  build/debug/ponyc(_start+0x2e) [0x55f2ac8b386e]
[1] 25703 abort (core dumped) build/debug/ponyc .
```
Answers: username_1: The reason for this is that the code in the refer pass at [refer.c](https://github.com/ponylang/ponyc/blob/master/src/libponyc/pass/refer.c#L1129) assumes it gets a nested reference whenever it sees a DOT, and calls the function `generate_multi_dot_name`, which only supports nested references. If it hits a method call or similar, it simply asserts.
The solution I imagine is to verify that a `TK_DOT` actually is a (possibly nested) field reference before calling that function, and to error out otherwise.
username_1: Thanks for reporting btw! :)
Status: Issue closed
poooi/poi
229216784
Title: Modernization quest issue Question: username_0: <!-- Thanks for opening an issue, please fill the following template. If you need general information, see https://github.com/poooi/poi/wiki. -->
**poi version:** v7.7.0
**OS:** macOS
<!-- For Windows 10 users, please specify your build version (can be obtained through the `winver` command) -->
**Plugin name & version:** <!-- If this is an issue about a plugin, give the plugin name and version. -->
**The problem you've met:** After accepting the daily modernization quest, a single modernization results in a "cat" error (the game's network error screen), and the quest is also marked as completed.
**How to reproduce, or any information that might be related:** <!-- Please provide a screenshot of the developer tool's console tab, if possible. To open the dev tool, press ctrl + shift + i (⌥ + ⌘ + i for macOS), or the leftmost gear button on the info bar below the game area. -->
<img width="803" alt="screen shot 2017-05-17 at 9 57 31 am" src="https://cloud.githubusercontent.com/assets/21072746/26135315/4a3d417c-3ae7-11e7-816f-8fc8b4d20b4a.png">
Answers: username_1: @username_0 What we know so far is that rapidly clicking the modernization confirm button causes duplicate packets to be sent, which triggers the cat error; why the button allows duplicate packets is not yet known
username_0: @username_1 Then why is the quest marked as completed?
username_1: @username_0 Two packets were sent in a row and both were judged successful, so the quest counted twice, and then your client got the cat error because of the duplicate packets…
username_1: @username_0 Anyway, since the cause is unknown, the current workaround is to click the confirm button only once instead of repeatedly
username_0: @username_1 OK, thanks
Status: Issue closed
username_2: outdated
Status: Issue closed
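As a general illustration of the workaround's logic (a hypothetical pattern, not poi or game-client code), a guard that drops repeated clicks while a request is still in flight would prevent the duplicate packets:
```ts
// Hypothetical click guard: at most one confirm request may be in flight.
let inFlight = false;

async function onConfirmClick(sendConfirmRequest: () => Promise<void>): Promise<void> {
  if (inFlight) return; // swallow rapid repeated clicks
  inFlight = true;
  try {
    await sendConfirmRequest();
  } finally {
    inFlight = false; // allow the next click once the server has responded
  }
}
```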
panoramicdata/Meraki.Api
778385710
Title: Will v1 Implementation happen? Question: username_0: Hello, I'm super excited to find this, and thank you for your efforts. Are you planning on updating this for the v1 API that they just released?
Answers: username_1: Hi there username_0... We're glad you find it useful! This represents a significant dev effort, and we have no funding to develop v1 at present. Perhaps someone from Meraki will see this and fund us a few days?
Status: Issue closed
username_2: v1 is now published as of NuGet package v1.11, e.g. https://www.nuget.org/packages/Meraki.Api/1.11.4 There are some breaking changes, but fixing things up should be fairly straightforward.
dotnet/sdk
828884473
Title: With net6.0 preview 3, failed to run Razor in the CLI. Question: username_0: Repro steps:
1. Install SDK 6.0 preview 3 from the master branch of https://github.com/dotnet/installer.
2. Create a new Razor project in the CLI.
3. `dotnet run` the Razor project.
Expected Result: Running the Razor app succeeds.
Actual Result: Running the Razor app fails.
![image](https://user-images.githubusercontent.com/65638819/110752747-a3c5b500-8280-11eb-9be3-02a265025c23.png)
Unhandled exception. System.TypeLoadException: Could not load type 'typeof(global::Microsoft.AspNetCore.Mvc.ApplicationParts.ConsolidatedAssemblyApplicationPartFactory)' from assembly 'Microsoft.AspNetCore.Mvc.Core, Version=6.0.0.0, Culture=neutral, PublicKeyToken=<KEY>'.
   at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, StackCrawlMarkHandle stackMark, ObjectHandleOnStack assemblyLoadContext, ObjectHandleOnStack type, ObjectHandleOnStack keepalive)
   at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, StackCrawlMark& stackMark, AssemblyLoadContext assemblyLoadContext)
   at System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, StackCrawlMark& stackMark)
   at System.Type.GetType(String typeName, Boolean throwOnError)
   at Microsoft.AspNetCore.Mvc.ApplicationParts.ProvideApplicationPartFactoryAttribute.GetFactoryType()
   at Microsoft.AspNetCore.Mvc.ApplicationParts.ApplicationPartFactory.GetApplicationPartFactory(Assembly assembly)
   at Microsoft.AspNetCore.Mvc.ApplicationParts.ApplicationPartManager.PopulateDefaultParts(String entryAssemblyName)
   at Microsoft.Extensions.DependencyInjection.MvcCoreServiceCollectionExtensions.GetApplicationPartManager(IServiceCollection services)
   at Microsoft.Extensions.DependencyInjection.MvcCoreServiceCollectionExtensions.AddMvcCore(IServiceCollection services)
   at Microsoft.Extensions.DependencyInjection.MvcServiceCollectionExtensions.AddRazorPagesCore(IServiceCollection services)
   at Microsoft.Extensions.DependencyInjection.MvcServiceCollectionExtensions.AddRazorPages(IServiceCollection services)
   at razor.Startup.ConfigureServices(IServiceCollection services) in C:\Users\v-damu\razor\Startup.cs:line 26
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.InvokeCore(Object instance, IServiceCollection services)
   at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.<>c__DisplayClass9_0.<Invoke>g__Startup|0(IServiceCollection serviceCollection)
   at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.Invoke(Object instance, IServiceCollection services)
   at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.<>c__DisplayClass8_0.<Build>b__0(IServiceCollection services)
   at Microsoft.AspNetCore.Hosting.GenericWebHostBuilder.UseStartup(Type startupType, HostBuilderContext context, IServiceCollection services, Object instance)
   at Microsoft.AspNetCore.Hosting.GenericWebHostBuilder.<>c__DisplayClass13_0.<UseStartup>b__0(HostBuilderContext context, IServiceCollection services)
   at Microsoft.Extensions.Hosting.HostBuilder.CreateServiceProvider()
   at Microsoft.Extensions.Hosting.HostBuilder.Build()
   at razor.Program.Main(String[] args) in C:\Users\v-damu\razor\Program.cs:line 16
dotnet --info:
.NET SDK (reflecting any global.json):
 Version: 6.0.100-preview.3.21160.18
 Commit: 5<PASSWORD>

Runtime Environment:
 OS Name: Windows
 OS Version: 10.0.19041
 OS Platform: Windows
 RID: win10-x64
 Base Path: C:\Program Files\dotnet\sdk\6.0.100-preview.3.21160.18\

Host (useful for support):
 Version: 6.0.0-preview.3.21159.16
 Commit: <PASSWORD>

.NET SDKs installed:
 6.0.100-preview.3.21160.18 [C:\Program Files\dotnet\sdk]

.NET runtimes installed:
 Microsoft.AspNetCore.App 6.0.0-preview.3.21160.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
 Microsoft.NETCore.App 6.0.0-preview.3.21159.16 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
 Microsoft.WindowsDesktop.App 6.0.0-preview.3.21127.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]

To install additional .NET runtimes or SDKs: https://aka.ms/dotnet-download
Answers: username_0: This issue also reproduces with an MVC app.
![image](https://user-images.githubusercontent.com/65638819/110754745-19cb1b80-8283-11eb-984a-0c69e515ff27.png)
jlippold/tweakCompatible
342512620
Title: `NoNotificationsText` not working on iOS 11.3.1 Question: username_0: ```
{
  "packageId": "com.tweaksbylogan.nonotificationstext",
  "action": "notworking",
  "userInfo": {
    "arch32": false,
    "packageId": "com.tweaksbylogan.nonotificationstext",
    "deviceId": "iPhone9,3",
    "url": "http://cydia.saurik.com/package/com.tweaksbylogan.nonotificationstext/",
    "iOSVersion": "11.3.1",
    "packageVersionIndexed": true,
    "packageName": "NoNotificationsText",
    "category": "Tweaks",
    "repository": "BigBoss",
    "name": "NoNotificationsText",
    "packageIndexed": true,
    "packageStatusExplaination": "This package version has been marked as Not working based on feedback from users in the community. The current positive rating is 0% with 0 working reports.",
    "id": "com.tweaksbylogan.nonotificationstext",
    "commercial": false,
    "packageInstalled": true,
    "tweakCompatVersion": "0.0.7",
    "shortDescription": "Change the \"No Notifications\" text!",
    "latest": "1.0",
    "author": "<NAME>",
    "packageStatus": "Not working"
  },
  "base64": "<KEY>",
  "chosenStatus": "not working",
  "notes": ""
}
```
ngs-doo/dsl-json
337567751
Title: POJO properties annotated as nonnull should be mandatory by default Question: username_0: Currently, if a JSON input is missing some attribute, null will be assigned to the POJO property. But this makes no sense if the property is annotated as non-null. We should throw an exception instead.
Answers: username_1: Well, there is a distinction, although it might not be fully done yet in the Java 8 version. The default for a non-null String is the empty string, meaning that if it's missing from the input it will be an empty value. Mandatory is a different thing from non-null.
username_0: OK, for simple types such as numbers, strings and booleans some empty value can be used. But what about more complex cases, for example dates, enums and objects? My main point is to prevent nulls in non-null fields after deserializing JSON into a POJO instance.
username_1: Most types can have a sane default: specific enum values, object instances with default values, and so on. It's rather rare for a type not to have a sane default, e.g. XML, and maybe date in the JVM. Anyway, this issue is almost fine, but I'll rename it to be correct. I need to enable defining custom type defaults (you can currently register them, but they won't get used in the processor), or custom defaults per property. So mandatory is orthogonal to non-null (it just checks whether a property exists in the JSON), but yeah, it makes sense that if neither the type nor the property has a default value, and the property is mandatory and non-null, an exception is thrown if the property was still null at the end of deserialization (I think the DSL processor already does something like that, but it needs to be ported to the Java 8 version)
username_1: With the latest commit: https://github.com/ngs-doo/dsl-json/commit/79ebfd330732483799889c9c720f9c08e021f3de the library will now consider properties which are not nullable but don't have a default as mandatory. I still need to introduce custom defaults per property, but that's a separate thing.
Status: Issue closed
username_1: v1.9.2 released