repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M)
---|---|---|
swagger-api/swagger-codegen | 237797258 | Title: [New Generator] akka-http server generator
Question:
username_0: ##### Description
As suggested by @foxmk, we want to add a [akka-http](https://github.com/akka/akka-http) server stub generator.
##### Swagger-codegen version
Latest master
##### Related issues
https://github.com/swagger-api/swagger-codegen/issues/3812#issuecomment-310197754
##### Suggest a Fix
If anyone wants to work on the generator or help out, please reply to let us know.
Answers:
username_1: Please let me know if this is fixed.
username_0: @username_1 I don't think anyone has started working on it. May I know if you have time to contribute the generator?
username_2: I'm likely to work on this in the near future. I'm wondering if anyone else has started, on the off chance we might collaborate? |
goharbor/harbor | 1160372546 | Title: upgrade v2.2.0 to latest stable release
Question:
username_0: What can we help you with?
Could you please guide me on how I can upgrade Harbor to the latest stable version? My current version is 2.2.0.
Do I need to do stepwise upgrades, or can we upgrade directly to the latest?
Status: Issue closed
Answers:
username_1: Just follow the migration guide, v2.2 can be upgraded to latest directly.
https://goharbor.io/docs/2.4.0/administration/upgrade/ |
spring-projects/spring-session | 214031101 | Title: why is 'RedisOperationsSessionRepository.RedisSession' an inner class?
Question:
username_0: why is 'RedisOperationsSessionRepository.RedisSession' an inner class? As a result I cannot use the result of this method:
```java
public Map<String, RedisSession> findByIndexNameAndIndexValue(String indexName,
        String indexValue)
```
Answers:
username_1: `RedisSession` is an implementation of `ExpiringSession` that's specific to the `RedisOperationsSessionRepository`. That shouldn't prevent you from using
the `FindByIndexNameSessionRepository`.
Here's a usage example from [find by username sample app guide](http://docs.spring.io/spring-session/docs/1.3.0.RELEASE/reference/html5/guides/findbyusername.html#finding-sessions-for-a-specific-user):
```java
@Autowired
FindByIndexNameSessionRepository<? extends ExpiringSession> sessions;

@RequestMapping("/")
public String index(Principal principal, Model model) {
    Collection<? extends ExpiringSession> usersSessions = this.sessions
            .findByIndexNameAndIndexValue(
                    FindByIndexNameSessionRepository.PRINCIPAL_NAME_INDEX_NAME,
                    principal.getName())
            .values();
    model.addAttribute("sessions", usersSessions);
    return "index";
}
```
username_1: Closing as answered. If you have additional questions or feel that your original question isn't properly answered, please re-open the issue.
Status: Issue closed
username_2: How would you approach this if you want to save/update a session?
username_1: See [#849 (comment)](https://github.com/spring-projects/spring-session/issues/849#issuecomment-321742866). |
sahilsk/reading-list | 183186668 | Title: RT @NodeJSTopNews: Npm Install Drives You Crazy? Yarn And Chill! https://t.co/oDyrtdVMGf via @geek_learning https://t.co/NduhWsnYjr
Question:
username_0: <blockquote class="twitter-tweet">
<p lang="en" dir="ltr" xml:lang="en">Npm Install Drives You Crazy? Yarn And Chill! <a href="https://t.co/oDyrtdVMGf">https://t.co/oDyrtdVMGf</a> via <a href="https://twitter.com/geek_learning">@geek_learning</a> <a href="https://t.co/NduhWsnYjr">http://pic.twitter.com/NduhWsnYjr</a></p>
— NodeJS Top News (@NodeJSTopNews) <a href="https://twitter.com/NodeJSTopNews/status/786592884648660992">October 13, 2016</a>
</blockquote>
<br><br>
October 15, 2016 at 09:15AM<br>
via Twitter |
brookhong/Surfingkeys | 233248274 | Title: How to disable replacement of default chrome pdf viewer?
Answers:
username_1: Press `;s`. Not sure if this is a permanent toggle though.
username_2: Yes, it is a permanent toggle.
Your setting will be persisted into storage.
Status: Issue closed
username_3: Could you please recommend another way to disable the pdf viewer?
This keyboard shortcut does not work with my current setup.
Thanks |
vega/ts-json-schema-generator | 623914812 | Title: readonly modifier
Question:
username_0: ### A readonly modifier on a field type (not on the field itself) breaks the type mapping.
## Example
```ts
type Type = {
    field: readonly string[]
}
```
## Expected
### This is the output when using ReadonlyArray<> instead or w/o the modifier
```json
"field": {
    "type": "array",
    "items": {
        "type": "string"
    }
}
```
## Actual
```json
"field": {
    "type": "number"
}
```
<issue_closed>
Status: Issue closed |
mdn/browser-compat-data | 1160490909 | Title: javascript.builtins.String.String - <PUT TITLE HERE>
Question:
username_0: <!-- Tips: where applicable, specify browser name, browser version, and mobile operating system version -->
#### What information was incorrect, unhelpful, or incomplete?
#### What did you expect to see?
#### Did you test this? If so, how?
<!-- Do not make changes below this line -->
<details>
<summary>MDN page report details</summary>
* Query: `javascript.builtins.String.String`
* MDN URL: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/String
* Report started: 2022-03-06T01:30:22.626Z
</details> |
commercialhaskell/stack | 791327656 | Title: `stack repl` no longer picks up magic `stack script` comment
Question:
username_0: Tested with: Version 2.5.1, Version 2.4.0
I have `-- stack script --resolver lts-13.18 --package shake` at the top of a script `Build.hs`, however `stack repl Build.hs` doesn't pick this up and instead uses the global `stack.yaml`.
This also causes lsp integration to break: https://github.com/haskell/haskell-language-server/issues/111
Answers:
username_1: Was it ever possible to do that?
username_0: I recall that this used to work and this comment suggests the same: https://github.com/haskell/haskell-language-server/issues/111#issuecomment-752136211
Perhaps time for a git bisect? I'd help fix this with some guidance.
username_1: I'm just wondering how it could even work, as `stack script` uses the shell to set all the details...
If you want to look into it @username_0 - sure go ahead, I could help you with some details I know about. |
BlinkID/blinkid-ui-android | 752834758 | Title: Blink UI compatible with V5.8.0
Question:
username_0: Hi,
I have upgraded my Microblink lib to the latest version com.microblink:blinkid:5.8.0@aar.
After updating, the existing BlinkID UI is not working.
Can you please guide me on how to upgrade to the latest BlinkID UI?
Answers:
username_1: Hi,
BlinkID UI was solving a problem we no longer have in BlinkID 5.8: we used to have document-specific recognizers, so BlinkID UI helped you set them up correctly.
In 5.8, we have one recognizer (BlinkIDCombinedRecognizer) that can be used to scan all documents that we support.
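For reference, a minimal sketch of what setting up the combined recognizer might look like; this assumes the BlinkID 5.8 Android Java API, and the package paths are taken from the BlinkID docs rather than verified against this project:
```java
// Sketch only - assumes BlinkID 5.8 Android API; package paths may differ.
import com.microblink.entities.recognizers.RecognizerBundle;
import com.microblink.entities.recognizers.blinkid.generic.BlinkIdCombinedRecognizer;

public class ScanSetup {
    public static RecognizerBundle buildBundle() {
        // One recognizer now covers all supported document types.
        BlinkIdCombinedRecognizer recognizer = new BlinkIdCombinedRecognizer();
        // The bundle is handed to the SDK's scan activity / default UI.
        return new RecognizerBundle(recognizer);
    }
}
```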
I recommend that you use our [default UI](https://github.com/BlinkID/blinkid-android#blinkidUiComponent). |
Azure/aks-engine | 896918429 | Title: Failed to Ensure load balancer - internal load balancers does not support IPv6
Question:
username_0: **Describe the issue**
When creating an internal load balancer service on a dual-stack-cluster, the service is not created because it throws the error:
`Warning SyncLoadBalancerFailed 34s (x6 over 3m10s) service-controller Error syncing load balancer: failed to ensure load balancer: ensure(nginx/nginx-ipv6-internal): lb(hyp-shared-dev-platform-dualstack-europe-internal) - internal load balancers does not support IPv6
`
**AKS Engine Version**
v0.62.1
**Kubernetes Version**
1.20
**To Reproduce**
Steps to reproduce the behavior:
1. Create a dual-stack cluster from the examples
2. Create a service on any arbitrary namespace with the annotation `service.beta.kubernetes.io/azure-load-balancer-internal: "true"`
3. Run kubectl describe svc <svc_name>
**Expected behavior**
Internal Inbound load balancer to be created with IPv6 internal address
**Screenshots**

Notify @az-policy-kube
Answers:
username_0: My educated guess is that aks-agent defaults to Basic SKU load balancers instead of using Standard, could you confirm this?
username_1: Hi, @username_0 - Dual-stack is now GA in Kubernetes 1.23, and some details may have changed since you were checking it out in 1.20. Also, AKS Engine is now deprecated, so please use dual-stack with AKS: https://docs.microsoft.com/en-us/azure/aks/configure-kubenet-dual-stack?tabs=azure-cli%2Ckubectl
I will close this issue as no work will be done on AKS Engine around this, but please do provide feedback on AKS if questions arise; thanks!
Status: Issue closed
|
jspajic/blog | 592888252 | Title: Factory pattern implementation
Question:
username_0: The factory pattern is a good solution for those cases in which the main part of the app needs to manage the objects rather than create them; see the sketch below and the project's own example:
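A generic illustration of that idea in Java (unrelated to the linked PHP controller, which is the project's actual example):
```java
// The app never calls a constructor directly; it asks the factory and then
// only manages the object it receives.
interface Comment {
    String body();
}

class CommentFactory {
    // Creation logic lives in one place, so callers stay decoupled from
    // concrete classes and constructor details.
    static Comment fromText(String text) {
        return () -> text;
    }
}

class App {
    public static void main(String[] args) {
        Comment comment = CommentFactory.fromText("Nice post!");
        System.out.println(comment.body());
    }
}
```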
https://github.com/jspajic/blog/blob/82325d0c50b9b3e49c2ee6ec44b10b8f929ea977/app/Http/Controllers/CommentsController.php#L54 |
JasperFx/jasper | 220981320 | Title: Make the application of wrapper frames consistent across Chain types
Question:
username_0: Both RouteChain and HandlerChain expose the same Wrappers property, but there's an opportunity to reduce some code.
- [ ] Pull `Wrappers` to a common `IChain` interface
- [ ] HandlerChain --|> IChain
- [ ] RouteChain --|> IChain
- [ ] New `ModifyChainAttribute` base class that works against the base IChain
- [ ] MAYBE -- have a common InputType that would be the message type for messages and the input type for routes
- [ ] New `IChainPolicy` that could be applied to either type of chain
<issue_closed>
Status: Issue closed |
Teched25/Jammming-Codecademy | 388889044 | Title: Feature Request Summary - Needs Improvement
Question:
username_0: While your feature request includes all of the required topics, it is very brief. I like that you had the ambitious idea of doing song previews, but your technical design does not cover how to implement the idea. However, judging by the picture, it seems like you were able to implement it yourself? Lastly, your caveats section is very brief. How can you improve the downloading experience? What other tests are needed? These are things that would need to be explained in more detail.
Answers:
username_1: Thank you I will work on it. |
WAVM/WAVM | 419948088 | Title: Question :is there any instructions of integrate wavm in java language project?
Question:
username_0: Hi there,
Recently I have wanted to use WAVM to parse and run WebAssembly files. Is it possible to integrate this with a Java project? How many interfaces need to be implemented from the Java side?
I'd appreciate any guidance or instructions!
Answers:
username_1: I'm not very familiar with Java, but you can call C APIs from Java, right? I think if https://github.com/WAVM/WAVM/issues/111 is implemented, then you should be able to use it from Java.
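For illustration, a minimal sketch of how a native C library can be called from Java via JNA; the library name and the `wavm_version` function are placeholders, since the actual libwavm C API was still being designed in #111:
```java
import com.sun.jna.Library;
import com.sun.jna.Native;

// Hypothetical binding: "wavm" and wavm_version() stand in for whatever the
// eventual libwavm C API exposes; they are not real symbols.
public interface WavmLib extends Library {
    WavmLib INSTANCE = Native.load("wavm", WavmLib.class);

    int wavm_version(); // placeholder function
}
```
With this approach, only one such interface declaration would be needed per exposed C function; JNA generates the bindings at runtime.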
username_0: yes, Java can call C APIs. I am wondering if the [#111](https://github.com/WAVM/WAVM/issues/111) libwavm is complete so far.
username_1: Have a look at #111 for some progress on a WAVM C API. If you have any issues using it from Java, feel free to open a new issue!
Status: Issue closed
|
pelias/api | 114573973 | Title: address parser failing to parse apartment numbers
Question:
username_0: ```javascript
"query": {
"text": "1917/2 Pike Drive",
"parsed_text": {
"number": 2,
"street": "Pike Drive",
"regions": []
}
}
```
Answers:
username_1: This is similar to the structure of Canadian apartment/unit notation:
123-4 Fake Street is `housenum: 123`, `unit: 4`, `street: fake street`
This is complex because in Queens, the address structure is 123-4 Fake Street, where 123-4 _is_ the complete address.
username_0: see also: https://en.wikipedia.org/wiki/Address_(geography)#Mailing_address_format_by_country
username_2: Al is currently working on updating the training model to support this. We'll check in soon for an update.
username_3: Revisit sources/us/wi/dane in OA to use separated fields.
username_3: Ask Al to see why `/2` isn't being parsed as apt/unit
Status: Issue closed
|
rstudio/htmltools | 396238799 | Title: Improve documentation about how to include HTML5 boolean attributes in tags
Question:
username_0: It took me some time, GitHub issue search (see #81), and code reading to find how we could include HTML5 boolean attributes in `tags`. [About boolean attributes](https://www.w3.org/TR/html5/infrastructure.html#sec-boolean-attributes)
The example was on an audio tag. I would like to have
```html
<audio controls>
<source src="reprex.wav" type="audio/wav"/>
</audio>
```
where `controls` is a boolean attribute. [about this attribute](https://www.w3schools.com/TAGS/att_audio_controls.asp)
And it can be done already because of the `NA` special treatment for attributes in `htmltools:::tagWrite`
``` r
htmltools::tags$audio(
  controls = NA,
  htmltools::tags$source(
    src = "myfile.wav",
    type = "audio/wav"
  )
)
```
will give the correct html code
```html
<audio controls>
<source src="myfile.wav" type="audio/wav"/>
</audio>
```
I think it could be documented in the `tags` help page and I am willing to do a PR
* either by adding a section about boolean attributes
* or just by providing an example that shows this behaviour
what do you think ?<issue_closed>
Status: Issue closed |
blackducksoftware/hub-detect | 355945231 | Title: Powershell is closing after hub-detect execution
Question:
username_0: # Issue template
## Expected behavior
* PowerShell should remain open after script execution
## Actual behavior
* PowerShell closes after script execution
## Steps to Reproduce
* Open PowerShell version 5 or later
* Execute the following line:
[Net.ServicePointManager]::SecurityProtocol = 'tls12'; irm https://blackducksoftware.github.io/hub-detect/hub-detect.ps1?$(Get-Random) | iex; detect
*
## Version
**Project Version:**
* Detect Version 4.1.2
**Language Version:**
* EN
**OS:**
* Windows 10 |
Funwayguy/BetterQuesting | 303179686 | Title: Request: Please add a way to change default settings client side.
Question:
username_0: So, it's really annoying that every time I want to add a retrieval task I have to change all of the settings like this:

To this:

So what I mean is, have a way to make those settings automatically like the second image, probably by changing the Better Questing config file. That would shave like 10 secs off of each quest I make, which doesn't seem like much unless you are making like 50 quests a day. Thanks!
Answers:
username_1: ...or you could just make a template quest and use the copy/paste tool in the designer toolbox
Status: Issue closed
|
skanaar/nomnoml | 988590317 | Title: Unable to render multiple in-edges to same object pairs with different ports
Question:
username_0: I'd like to be able to visualize different port links between the same pairs of objects. Is it possible?
The following doesn't seem to be available:
```
[ObjectA]0->0[ObjectB]
[ObjectA]0->1[ObjectB]
```
Only the first link is rendered. |
formulahendry/vscode-auto-rename-tag | 929874790 | Title: Issue Auto Rename Tag
Question:
username_0: I have a problem with this extension. When I open a new HTML file and try to rename a tag, it does not automatically change both tags. I have to open Extensions > Auto Rename Tag and then go back to my HTML file; after that it works. I don't know what the problem with this feature is. It's very inconvenient for me!
enginespot/js-beautify-sublime | 144821817 | Title: Wrong switch indentation even if jslint_happy is true
Question:
username_0: Hello,
the beautifier indents the switch statements in the wrong style even if I set jslint_happy to true.
I always get this indentation
```javascript
switch (variable) {
    case 1:
        {
            ...
            break;
        }
    case 2:
        {
            ...
            break;
        }
    case 3:
        {
            ...
            break;
        }
    default:
        {
            ...
            break;
        }
}
```
while eslint wants this indentation:
```javascript
switch (network) {
    case 1: {
        ...
        break;
    }
    case 2: {
        ...
        break;
    }
    case 3: {
        ...
        break;
    }
    default: {
        ...
    }
}
```
How can I get the correct indentation?
Thanks
Daniele |
ryokbys/nap | 738456784 | Title: no regression tests
Question:
username_0: The package doesn't have any regression tests currently. So while I can run an example simulation, I've got no way of knowing if the numbers produced match what "should" happen. Ideally there should be a "make test" target which runs the example and checks that the answers are similar/equal to a known set of reference values (which have been checked to be true).
Answers:
username_1: Thanks for the suggestion. I wrote a "Tests" section in the [documentation](http://username_1.web.nitech.ac.jp/contents/nap_docs/install.html#tests) about the tests of the main programs. I agree with your suggestion about making a "make test" target, and I will be doing it soon.
Status: Issue closed
|
raj454raj/rapidshare | 461971164 | Title: test browserstack
Question:
username_0: test
**URL tested:** https://github.com/
Open [URL](https://live-local.bsstag.com/#os=OS+X&os_version=Yosemite&browser=Chrome&browser_version=74.0&zoom_to_fit=true&full_screen=true&resolution=responsive-mode&url=https%3A%2F%2Fgithub.com%2F&speed=1&start_element=ClickedReproduceIssueFromgithub&start=true&furl=https://github.com/&utm_campaign=bug-report&utm_source=bug-report-url-to-reproduce&utm_medium=github) on Browserstack
|Property | Value|
|------------ | -------------|
| Browser | Chrome 74.0 |
| Operating System | OS X Yosemite |
| Resolution | 1440x518 |
**Screenshot Attached**
[Screenshot URL](https://s3.amazonaws.com/bs-stag/c067aea02f5fc5fcb2ebaa0c60c9aaddf0ced508/macyos_chrome_74.0.jpg)

**[Click here](https://live-local.bsstag.com/#os=OS+X&os_version=Yosemite&browser=Chrome&browser_version=74.0&zoom_to_fit=true&full_screen=true&resolution=responsive-mode&url=https%3A%2F%2Fgithub.com%2F&speed=1&start_element=ClickedReproduceIssueFromgithub&start=true&furl=https://github.com/&utm_campaign=bug-report&utm_source=bug-report-url-to-reproduce&utm_medium=github) to reproduce the issue on Browserstack**<issue_closed>
Status: Issue closed |
ScreamingHawk/phone-saver | 239615014 | Title: Support Request - video/*
Question:
username_0: Support request. Generated by Phone Saver.
Intent type: video/*
Intent action: android.intent.action.SEND
Text: https://test.masjidomarohio.org/wp-content/uploads/2017/05/Masjid-Omar-Open-House.mov
Subject:
More information: TYPE_ADDITIONAL_INFORMATION_HERE
Thank you
Answers:
username_1: Can you please check which version of Phone Saver you are using (in the Credits)? I added support for video files 2 days ago so this should be working.
username_0: I am using 1.6, the only and latest version available in F-Droid. It just became available yesterday.

username_0: 
username_1: Thanks @username_0.
Video support was added in v1.7. F-Droid releases are automated but I have no idea what the schedule is for it.
If you know how to push a release to F-Droid, let me know and I'll get it done for you now. Otherwise you'll just have to wait a bit for the repos to update.
username_1: Or you can build from source yourself if you are desperate :)
username_0: Thank you for the follow-up. Alas, I don't know how to push an F-Droid
release out. But if that is the case, then you can close this issue.
Thanks again!
username_1: No problem! I'm tracking `video/*` support in ticket #12 if you have any issues after the update
Status: Issue closed
|
kelleymay/f1-3-c2p1-colmar-academy | 258700564 | Title: Suggestion - give .sect-two-col-1 a width: 100%
Question:
username_0: Seen here: https://github.com/kelleymay/f1-3-c2p1-colmar-academy/blob/master/Colmar/css/style.css#L136
Since we want the image to be a bit larger, I suggest using width: 100%. To get this to line up with the first section's image, you want to wrap a div around that image in the first section, and give it the same class/styling - width: 60% on the div, width: 100% on the image. |
ec-doris/EuKnowledgeGraph | 584402754 | Title: Retrieve postal codes where missing
Question:
username_0: Can be tricky...
Answers:
username_1: In fact, if the postal code is not there we need to compute it: given the geo-coordinates we need to reverse-geocode it. That means Nominatim everywhere it is missing...
There are only 60,000 now:
https://query.linkedopendata.eu/#select%20%28count%28%3Fs%29%20as%20%3Fc%29%20where%20%7B%0A%3Fs%20%3Chttps%3A%2F%2Flinkedopendata.eu%2Fprop%2Fdirect%2FP460%3E%20%3Fo%0A%7D
They were only missing in the Czech Republic and maybe Denmark and Ireland...
Is it worth doing that?
username_0: That seems like a lot of work, so let's put it on hold unless it's really needed...
Status: Issue closed
|
dry-python/returns | 832044113 | Title: Add a method to get intermediate return values from flow
Question:
username_0: Hi!
It would be nice for testing to have a way to retrieve the intermediate result of a flow, in a way similar to how `assert_flow()` works but with the return value.
Example:
```python
def do_stuff():
    return flow(0, func_1, bind(func_2))

def test_func_1():
    v = get_function_value(func_1, do_stuff)
    assert v == expected_value
```
In principle this can be done with `sys.setprofile` (in the trace function and for `return` events, `arg` contains the return value).
Proof of concept example:
```python
import sys
from functools import partial

class DesiredValueFound(Exception):
    # holds the captured return value under the "value" attribute
    def __init__(self, value):
        super().__init__(value)
        self.value = value

def trace_func(function_to_search, frame, event, arg):
    # for 'return' events, ``arg`` contains the function's return value
    if event == 'return' and frame.f_code.co_name == function_to_search.__name__:
        raise DesiredValueFound(arg)

def get_function_value(function_to_search, my_flow):
    old_profiler = sys.getprofile()  # note: profile hook, not the trace hook
    sys.setprofile(partial(trace_func, function_to_search))
    try:
        my_flow()
    except DesiredValueFound as e:
        return e.value
    finally:
        sys.setprofile(old_profiler)
```
What do you think?
Thanks :)
Answers:
username_1: Can you please tell more about your use-case? What exactly are you testing?
username_0: Thanks for asking! I'm building a library to upload files on a website. The core of it is robobrowser interacting with forms until the upload is finished and unfortunately there is some shared state and function arguments and return values are not simple.
My flow is something like this: `validate_file -> upload_file -> fetch_new_metadata -> update_metadata -> finish_upload`, exposed through a single method e.g. `upload`
Individually testing the `*_metadata` functions is quite hard unless I manually save the returned form from `upload_file` (which is a complex object) for each test case and provide it as an argument together with a manually initialized state. A solution like the one I proposed would allow for much easier testing without having to manage any other data except for `vcrpy` to save and replay network requests with the data.
username_2: Perhaps if bind didn't just assume the returned value was the correct monadic instance (returning it directly), and instead attempted a map, then join... a subclass of a monad could be used (and propagated) through the flow, which could be used for assertions etc. |
bullsbot/BullsBot | 58984149 | Title: Set up a new dev branch for /r/devtheme
Question:
username_0: I'd like us to make a second dev branch for /r/devtheme so that we can get BullsBot acclimated with the new front end templates and some of the modifications and edits required.
Answers:
username_1: I've created a new branch called devtheme. Not sure what the easiest setup would be for you to start editing and running it though... I'll pm you to figure it out.
Status: Issue closed
username_0: I'll take a look at it in a bit and see if I can figure it out ;) |
astropy/specutils | 924316363 | Title: Have model_replace model keyword accept class objects
Question:
username_0: This is a possible follow-on from #782, which added the `model_replace` function. In https://github.com/astropy/specutils/pull/782#pullrequestreview-598731958 I suggested allowing the `model` keyword to accept an astropy modeling *class* instead of an instantiated object. This would then be automatically fit to the spectrum.
The problem with this (as @ibusko pointed out to me out-of-band) is that it glosses over some details like how to initialize the model, etc. That might be fine in that you can always do it yourself by using the model object option, but then why have the class option in the first place? So I'm admittedly not sure I even want this feature, but think it's worth recording as a "might be nice" option.
The original motivation was thinking about splines, where it doesn't really matter because the fit is deterministic and the initial conditions don't matter. But maybe we don't care if we're doing that anyway :shrug: |
Rob2048/PhotonTool | 334203759 | Title: Awesome Master - Can you provide a how to use?
Question:
username_0: Here's a how to use :P
https://www.facebook.com/groups/AnycubicPhoton/permalink/1362281337249838/

For ppl reading this just download the zip and run index.html in your browser.
tested on chrome |
openaddresses/machine | 95694605 | Title: Better conform support for complex addresses, with localization
Question:
username_0: The [addition of an Estonian data source](https://github.com/openaddresses/openaddresses/pull/1076) has surfaced some problems with how our current conform processing language doesn't fully capture addresses. See also [much discussion in this issue](https://github.com/openaddresses/openaddresses/issues/1077) about mapping. I suspect this same issue comes up for other whole-country sources like Denmark and the Netherlands.
Long story short, this example could motivate us to improve the quality of the data we are extracting and publishing. I think the best way to make progress would be to get input from people who are experts in actually using this data in geocoders and know something about different locales' addressing. End goal: for us to publish a CSV file that is directly useful.
This issue is deliberately broad and open ended, it's really a product discussion more than an engineering question. I imagine it will only be solved with a collection of several small coding projects.
Answers:
username_1: Has this issue been handled by @ingalls’s addition of the new `regexp` syntax? |
scalatest/scalatest-maven-plugin | 274786595 | Title: Parallel flag -P deprecated in Scala Test
Question:
username_0: ```
java.lang.IllegalArgumentException: ERROR: -p has been deprecated for a very long time and is no longer supported, to prepare for reusing it for a different purpose in the near future. Please change all uses of -p to -R
```
ScalaTest 3.0.4
Answers:
username_1: I am sure that the support for parallel test execution is fixed in master, but no plugin release has been made with this support
username_0: Is it? https://github.com/scalatest/scalatest-maven-plugin/blob/master/src/main/java/org/scalatest/tools/maven/AbstractScalaTestMojo.java#L444
username_1: Yes.
Hopefully, as soon as #36 is closed, a new plugin version will be available. |
parse-community/Parse-SDK-JS | 428363417 | Title: LiveQueryClient subscribe with token - no updates to ACL secured objects
Question:
username_0: ### Issue Description
Live query subscription to an ACL secured object doesn't work, even when sessionToken is passed.
### Steps to reproduce
```
// user is logged in, there's a Patient object in the DB, whose ACL is set to Read+Write by same user
var lq = Parse.CoreManager.getLiveQueryController();
var cu = Parse.User.current();
var pq = new Parse.Query( 'Patient' );
pq.equalTo( 'userId', cu.id );
var plq = lq.subscribe( pq, cu.getSessionToken() );
plq.on('update', (object) => {
    console.log('object updated', object );
});
```
Update a field in Patient object on the server.
### Expected Results
"object updated" handler should be called
### Actual Outcome
nothing happens.
<!--- What is happening instead. --->
### Environment Setup
- **Server**
- parse-server version (Be specific! Don't say 'latest'.) : ^3.1.3
- Operating System: CentOS
- Hardware: AWS instance
- Localhost or remote server? AWS remote server
- **JS SDK**
- JS SDK version: 2.2.1
- Application: Chrome
Answers:
username_1: Improvements were made to LiveQuery https://github.com/parse-community/Parse-SDK-JS/pull/758/files
Can you update to the latest version?
If it still doesn't work can you write a failing test? There are examples in the PR if you need it.
`Note: await query.subscribe() gets the current user session internally`
username_0: @username_1, I've updated my JS SDK to the latest PR and the server-side `parse-server` node packages, but I'm still getting the same behavior. I can see that the `Subscription` object I get from subscribing to a query has the correct `sessionToken` property set. But when I subscribe to a query that returns an object whose ACL is set to the current user, I get no `update` event. An ACL set to public read+write posts update events correctly.
So this must be a server side problem, right?
username_1: There is a issue opened on the server side https://github.com/parse-community/parse-server/issues/5393
Would you like to provide a fix?
username_0: I don't think #5393 is the same as what I'm experiencing. I'm not using roles and my object is secured with an ACL containing a single user id, and I'm not getting updates. Unfortunately, I'm not familiar enough with the Parse project to be able to fix this. I was hoping for a known workaround.
username_1: Can you write a failing test?
username_1: @username_0 I wrote a test case here https://github.com/parse-community/Parse-SDK-JS/pull/791. The test passed. Can you compare or change it to match your use case?
username_0: @username_1
Thank you for following up. Your test case looks correct, but it still doesn't work in my setup.
I don't get an update event for my ACL-protected object when I test in the browser against the AWS backend environment.
I did update parse-server on my AWS instance to the latest. Here's my code on the client side:
```
// queries for getting patients
var cu = Parse.User.current();
var pq = new Parse.Query( 'Patient' );
pq.equalTo( 'userId', cu.id );
pq.subscribe( cu.getSessionToken() ).then( (plq)=>{
    plq.on('update', (object) => {
        console.log('object updated', object );
    });
    console.log( "plq", plq );
} );
// refresh patients
pq.find().then( ( list ) => {
    console.log( "live query find: ", list );
});
```
I have two Patient objects - one with public read/write ACL, another one ACL'd to this user. The code above prints out both of them (in the find() call), but when I modify a field on Patient object via parse-dashboard, I only get a "object updated" on the public one. Inspecting my "plq" object shows that sessionToken is set correctly.
I appreciate your help, but I'm not sure how I can troubleshoot this further.
username_1: Does it run locally? I just tried it with 2 objects, public access on one, user access on the other, and it worked.
Can you post logs? VERBOSE=1
username_0: Thanks for following up - I've decided to try a different framework - the fault is probably in my AWS set up somewhere, but I feel like I lack the tech knowledge to properly troubleshoot it with you.
Status: Issue closed
|
Infosys/openIDP | 391820762 | Title: Log aggregation using ELK stack
Question:
username_0: **Is your feature request related to a problem? Please describe.**
As of now, I need to poke each container using the below command to get the logs out.
`docker logs -f <Container_id>`
**Describe the solution you'd like**
It would be great if some log aggregation were used, like the ELK stack.
**Describe alternatives you've considered**
NA
**Additional context**
NA
Status: Issue closed
Answers:
username_1: @username_0 Thanks for your interest and suggestion. Hope Jaskirat's response was helpful. Closing this issue for now. |
konvajs/konva | 347606989 | Title: Blur filter causing white borders for PNG-24 images
Question:
username_0: Adding a blur filter to an image that has alpha transparency causes its border to turn light.
This is likely to the interpolator treating surrounding/empty pixels as white.
[Example JSbin](http://jsbin.com/taginijame/25/edit?js,output)
(left: 1px blur radius; right: no blur radius/filter)
I was wondering if there is any way to work around this, or where in the code to tackle this (for a PR), respectively?
Thank you so much!
Answers:
username_0: @username_1 any idea?
username_1: @username_0 yeah, the code is cryptic and I don't know how to update it either. We may try to find another algorithm, or check whether the current one has been updated upstream.
Status: Issue closed
username_1: Closing the task as not a priority. But I am happy to apply any related Pull Requests. |
apache/dubbo | 455122279 | Title: Unable to locate Method of ExceedPayloadLimitException
Question:
username_0: - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.
- [x] I have checked the [FAQ](https://github.com/apache/dubbo/blob/master/FAQ.md) of this repository and believe that this is not a duplicate.
### Environment
* Dubbo version: 2.7.2
* Operating System version: Linux 3.10.0-957.5.1.el7.x86_64
* Java version: 1.8
### Steps to reproduce this issue
Data length too large
### Expected Result
Locate the exception method based on the log
### Actual Result
java.io.IOException: Data length too large: 18463787, max payload: 8388608, channel: NettyChannel [channel=[id: 0x57203c57, /127.0.0.1:43554 => /127.0.0.1:20890]]
Answers:
username_1: They run in different threads:
the caller method runs in a user thread;
the payload checker runs in Netty's thread.
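As an aside, if the underlying need is to send payloads above the 8388608-byte default, the limit itself is adjustable; a hedged sketch via Dubbo's Java config API (the 100 MB value here is arbitrary, and raising it has network trade-offs):
```java
import org.apache.dubbo.config.ProtocolConfig;

public class PayloadConfig {
    public static ProtocolConfig protocol() {
        ProtocolConfig protocol = new ProtocolConfig();
        protocol.setName("dubbo");
        // Bytes; the default is 8388608 (8 MB), matching the exception above.
        protocol.setPayload(100 * 1024 * 1024);
        return protocol;
    }
}
```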
Status: Issue closed
|
google/flatbuffers | 986473685 | Title: [Bug] flatc: C# generated code for optional enum field throws InvalidOperationException when try to pass null
Question:
username_0: ### Environment:
- flatc version: 2.0.0 on commit <PASSWORD>2<PASSWORD>
- OS: MacOS Big Sur 11.5.2
Schema:
```fbs
enum Color:int { Red = 0, Green, Blue = 2 }
table Monster {
id: int;
color: Color = null;
age: int = null;
}
```
As you can see, the `color` field is an optional enum. When I generate code via `flatc`, it produces the following code for the Monster class:
```csharp
// <auto-generated>
// automatically generated by the FlatBuffers compiler, do not modify
// </auto-generated>
using global::System;
using global::System.Collections.Generic;
using global::FlatBuffers;
public struct Monster : IFlatbufferObject
{
private Table __p;
public ByteBuffer ByteBuffer { get { return __p.bb; } }
public static void ValidateVersion() { FlatBufferConstants.FLATBUFFERS_2_0_0(); }
public static Monster GetRootAsMonster(ByteBuffer _bb) { return GetRootAsMonster(_bb, new Monster()); }
public static Monster GetRootAsMonster(ByteBuffer _bb, Monster obj) { return (obj.__assign(_bb.GetInt(_bb.Position) + _bb.Position, _bb)); }
public void __init(int _i, ByteBuffer _bb) { __p = new Table(_i, _bb); }
public Monster __assign(int _i, ByteBuffer _bb) { __init(_i, _bb); return this; }
public int Id { get { int o = __p.__offset(4); return o != 0 ? __p.bb.GetInt(o + __p.bb_pos) : (int)0; } }
public Color? Color { get { int o = __p.__offset(6); return o != 0 ? (Color)__p.bb.GetInt(o + __p.bb_pos) : (Color?)null; } }
public int? Age { get { int o = __p.__offset(8); return o != 0 ? __p.bb.GetInt(o + __p.bb_pos) : (int?)null; } }
public static Offset<Monster> CreateMonster(FlatBufferBuilder builder,
int id = 0,
Color? color = null,
int? age = null) {
builder.StartTable(3);
Monster.AddAge(builder, age);
Monster.AddColor(builder, color);
Monster.AddId(builder, id);
return Monster.EndMonster(builder);
}
public static void StartMonster(FlatBufferBuilder builder) { builder.StartTable(3); }
public static void AddId(FlatBufferBuilder builder, int id) { builder.AddInt(0, id, 0); }
public static void AddColor(FlatBufferBuilder builder, Color? color) { builder.AddInt(1, (int)color); }
public static void AddAge(FlatBufferBuilder builder, int? age) { builder.AddInt(2, age); }
public static Offset<Monster> EndMonster(FlatBufferBuilder builder) {
int o = builder.EndTable();
return new Offset<Monster>(o);
}
}
```
If I start to create an instance of Monster and add a color which is null by using `AddColor` method, it will throw `System.InvalidOperationException: Nullable object must have a value` because `(int)color` where `Color? color = null;` causes runtime exception. It seems to me that this code can be modified to respect optional (seems like just adding `?` if we want to cast optional field): https://github.com/google/flatbuffers/blob/74c3d7eba2946d82a392fc3be2f22e25e3906b56/src/idl_gen_csharp.cpp#L1194
[Truncated]
code += GenMethod(field.value.type) + "(";
code += NumToString(it - struct_def.fields.vec.begin()) + ", ";
- code += SourceCastBasic(field.value.type);
+ code += SourceCastBasic(field);
code += EscapeKeyword(argname);
if (!IsScalar(field.value.type.base_type) &&
field.value.type.base_type != BASE_TYPE_UNION) {
@@ -1225,7 +1225,7 @@ class CSharpGenerator : public BaseGenerator {
code += "Add";
code += GenMethod(vector_type);
code += "(";
- code += SourceCastBasic(vector_type);
+ code += SourceCastBasic(field);
code += "data[i]";
if (vector_type.base_type == BASE_TYPE_STRUCT ||
IsString(vector_type))
```
Thank you.
[csharp_handle_optional_field.patch.zip](https://github.com/google/flatbuffers/files/7097965/csharp_handle_optional_field.patch.zip)
Answers:
username_1: @username_3
username_2: I agree that this is a bug and imho the proposed solution makes sense. I would also recommend adding a test case for this using `optional_scalars.fbs` which has `maybe_enum: OptionalByte = null;`
username_3: Can you make a PR of this?
username_0: Created a PR: https://github.com/google/flatbuffers/pull/6835
username_0: https://github.com/google/flatbuffers/pull/6835
Status: Issue closed
|
desktop/desktop | 256805553 | Title: Minimize tooltip doesn't disappear after minimize
Question:
username_0: V0.8.1-beta4 desktop on Windows 10
Every time I click on the top-right minimize button I get a sticky "minimize" tooltip that won't disappear.
I have to minimize from the taskbar to avoid this buggy tooltip
Status: Issue closed
Answers:
username_1: Thanks for the report, @username_0!
It looks like this is a bug in Electron, the framework on which the app is built. It looks like they're already tracking it in https://github.com/electron/electron/issues/9943, so I'm gonna close this one. |
craftcms/aws-s3 | 387763480 | Title: Use https urls as generated default
Question:
username_0: By default, it generates a base URL using the virtual-hosted-style URL: http://mybucketname.s3.amazonaws.com/
The default should be https.
Furthermore, by defaulting to the path-based URL, SSL will work no matter what the bucket name is (even if it contains a period "."): https://s3.us-east-1.amazonaws.com/dev.mybucketname/
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
Answers:
username_1: If you're using environment variables you could also swap this line:
```
'url' => 'https://s3-' . getenv('S3_REGION') . '.amazonaws.com/' . getenv('S3_BUCKET') . '/',
```
username_0: @username_1 yep, thats how i currently handle things.
This issue was specifically for the plugin to use the path-based url for the one it automatically generates, just so it works for more cases.
Status: Issue closed
|
LTLA/InteractionSet | 438494144 | Title: Reduce() equivalent in InteractionSet?
Question:
username_0: Hello,
I would like to merge interactionSet rows that are overlapping to get a minimum number of interacting pairs; essentially, the equivalent of the "reduce()" function in GenomicRanges. Is there a way to do this for InteractionSets?
```
seqnames1 ranges1 seqnames2 ranges2
<Rle> <IRanges> <Rle> <IRanges>
[1] chrB [10, 15] --- chrA [20, 25]
[2] chrB [10, 28] --- chrA [15, 25]
[3] chrA [ 9, 18] --- chrA [77, 94]
```
Would result in:
```
seqnames1 ranges1 seqnames2 ranges2
<Rle> <IRanges> <Rle> <IRanges>
[1] chrB [10, 28] --- chrA [15, 25]
[2] chrA [ 9, 18] --- chrA [77, 94]
```
Thanks!
Answers:
username_1: Yes. Not as easily as one might expect, because it's a bit tricky to think in two-dimensional terms. But it can be done without too many tears. Let's set up an example:
```r
library(InteractionSet)
gr1 <- GRanges(c("chrB:10-15", "chrB:10-28", "chrA:9-18"))
gr2 <- GRanges(c("chrA:20-25", "chrA:15-25", "chrA:77-94"))
gi <- GInteractions(gr1, gr2)
```
Now to do it - the purpose of each step is left as an exercise for the reader:
```r
olap <- findOverlaps(gi)
edges <- as.vector(t(as.matrix(olap)))
g <- igraph::make_graph(edges)
comp <- igraph::components(g)$membership
boundingBox(gi, comp)
```
I suppose I could formally add this to _InteractionSet_ somewhere, but `reduce()` by itself seems a bit useless without also getting `comp` to tell you which entries of `gi` go into the reduced interactions.
In any case, I have big plans for the entire Hi-C-related infrastructure, so it will have to wait.
username_0: Thanks Aaron!
I would not have cracked it alone. Certainly useful to wrap up as a function at some point and add to InteractionSet.
Status: Issue closed
|
mymarilyn/clickhouse-driver | 446574625 | Title: LowCardinality data type support
Question:
username_0: The issue #80 still persists in 0.0.19 even though it is closed.
```
clickhouse_driver.errors.UnknownTypeError: Code: 50. Unknown type LowCardinality(String)
```
It is fixed in the master branch though. When will the next version be tagged?
Answers:
username_1: Hi.
A new release will be published at the end of the week.
Status: Issue closed
username_1: Version [0.0.20](https://pypi.org/project/clickhouse-driver/0.0.20/) is published. |
kubeflow/kubeflow | 1116513720 | Title: Notebook Servers custom images giving error while spinning the notebook server
Question:
username_0: While spinning up the notebook server in Kubeflow 1.4, I get the below error when using this image:
Image: j1r0q0g6/notebooks/notebook-servers/jupyter-scipy:v1.4
Error: s6-overlay-preinit: fatal: unable to mkdir /var/run/s6: Permission denied
Answers:
username_1: @username_0 did you get this resolved
/priority p1
/kind question
/area jupyter
@kubeflow/wg-notebooks-leads any comments ?
username_0: I found a workaround for this:
Added the below lines to the Dockerfile:
```
RUN mkdir -p /var/run/s6 \
    && chown -R ${NB_USER}:users /var/run/s6
```
But these should be added in the original images.
username_2: To clarify: you deploy a custom image with your own Dockerfile with the added lines?
username_0: yes |
edgedb/edgedb | 550908795 | Title: High RAM usage (unclosed sessions?)
Question:
username_0: EdgeDB version: `1.0-alpha.2+dev.505.g5b68ec24` (docker image)
Docker version: `19.03.5-ce, build 633a0ea838`
I noticed that EdgeDB uses 5GB of RAM on my server after a few days of running it.
I found out that there are a lot of processes:
```
~ ps -ef|grep edgedb|wc -l
141
```
Looks like the reason for that are unclosed interactive sessions:
```
~ docker exec -it edgedb-server edgedb -u modbay_beta
...
➜ ~ ps -ef|grep edgedb|wc -l
142
```
After restarting containers all these processes are gone and RAM usage is under 150mb:
```
~ docker restart edgedb-server
edgedb-server
~ ps -ef|grep edgedb|wc -l
7
```
Answers:
username_0: Update: process number stopped growing after container restart. It increased by 1 every time before, but now it remains constant.
How I can debug high memory usage in future?
What else could cause this? Dirty disconnects? I might had some.
username_1: Yes, that's the most likely cause, but EdgeDB should obviously handle this better.
username_0: It looks like memory usage rises even without dirty disconnects.
I made sure I always call `await pool.aclose()` and I always check that there are no tracebacks.
I have to restart the db container at least once a day because memory usage rises to several gigs.
Is there a way to list active sessions or another way to list active resources?
username_2: Not yet. We'll see if we can add functions / repl commands to inspect the resources consumption.
username_0: The situation has improved a lot since then. I have had an edgedb instance running for days, I created and closed a lot of connections and repl sessions, and there is no noticeable memory usage increase; the process count is low.
username_2: Thanks for the update!
username_1: Closing. Feel free to open a new issue if you see abnormal behavior again.
Status: Issue closed
|
benmarwick/rrtools | 275498885 | Title: [Question] References rendering in a specific position
Question:
username_0: There might be a need to render the references before figures. Is there a way to specify the position where references should be inserted in the generated paper?
Answers:
username_1: You can use the chunk below in your rmarkdown document to insert the references where you want them.
```rmd
## References
<div id="refs"></div>
```
Status: Issue closed
username_0: That did it! Thanks, add to documentation and we are closing this issue! :) |
sp614x/optifine | 1119396571 | Title: [Forge] 39.0.40+ causes errors
Question:
username_0: ## Description of Issue
Hi,
I found a bug.
Mods that mixin into the armor renderer
are not compatible if the mod runs on Forge 39.0.40+,
because Forge changed something there,
so the mixin targets something from OptiFine that no longer exists.
Please update the Forge version of OptiFine.
## OptiFine Version
OptiFine HD U H5 pre4
## Fabric/Forge Version
Forge 39.0.40+
## Other Installed Mods
Beyond Earth (they mixin the armor renderer)
## Log Files/Crash Reports
[latest.log](https://github.com/username_2/optifine/files/7970805/latest.log)
Answers:
username_1: Is this an issue without Beyond Earth installed?
username_0: no, but we want OptiFine to update to a newer Forge version please
username_0: (one that fixes this bug with other mods)
username_2: Fixed, coming in next preview.
Status: Issue closed
|
w3c/spork | 62432698 | Title: appears to be LOTS of <dfn> elements
Question:
username_0: I forked this bit of the HTML spec http://www.w3.org/TR/html51/semantics.html#semantics and in adding respec found there are many duplicate <dfn>s which respec flags as errors, approximately 500. Any ideas as to why?
Answers:
username_1: I'm not adding them, they come from the source. I'm surprised there would be duplicates, maybe due to different processing rules?
username_0: @username_1 I don't know where they are coming from, but they are there in 5.1 and respec complains about them hundreds of times. |
resilience4j/resilience4j | 642031736 | Title: Feature Request: Configure SlidingTimeWindow interval
Question:
username_0: Resilience4j version: v1.5.0
Java version: 8
I would like to configure the SlidingTimeWindow interval. But right now, the SlidingTimeWindow interval seems to be fixed at 1 second. Is there any disadvantage to making the interval configurable? If not, could this feature be added?
Answers:
username_1: Hi, what is the reason why you have to change the interval?
Currently it's not possible to set a different interval.
username_0: Sliding a large time-based window is costly.
For example, with a 300-second window we need to slide up to 300 times (once per one-second bucket). If we could change the interval to 2 seconds, we could halve that, which matters for fairly large windows.
Currently, moving windows is a bottleneck in our project, so we'd like to change the sliding window interval.
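For context, a minimal sketch of configuring a time-based window with the resilience4j 1.5.0 API; note that `slidingWindowSize` is in seconds here, while the per-bucket interval is fixed at one second internally with no setting to change it:
```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

public class WindowConfig {
    public static CircuitBreaker build() {
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .slidingWindowType(CircuitBreakerConfig.SlidingWindowType.TIME_BASED)
                // A 300 s window means 300 one-second buckets to rotate through.
                .slidingWindowSize(300)
                .failureRateThreshold(50.0f)
                .build();
        return CircuitBreakerRegistry.of(config).circuitBreaker("backend");
    }
}
```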
username_1: How did you identify it as a bottleneck?
username_1: If you like you can give it a try and create a PR. |
Shuttle/shuttle-esb | 945007009 | Title: Intended level of support going forwards
Question:
username_0: Hey Eben,
Apologies if this is not the right way to ask, but I'm keen to understand the path going forward for Shuttle. The documentation currently states that .NET Core 2.1 is supported (rather than more recent versions), but that version is scheduled to go out of Microsoft support next month (August 2021)
Is the intention to continue to support Shuttle going forward?
(Fwiw... we've been using Shuttle within our company for several years, with great success. We implemented it at low cost, and it's been reliable and served its purpose extremely well.)
Thanks!
Answers:
username_1: Hi Clayton,
I've been meaning to update the documentation a bit for some time but I just don't get time :)
That being said, the packages typically target standard:
```
<TargetFrameworks>netstandard2.0;netstandard2.1</TargetFrameworks>
```
Some older ones may still target `net461` and `netcoreapp2.1` but would also include at least `netstandard2.0`. When I come across one that does not target standard only, I do update it.
I tinker on the Shuttle components every-so-often, but mostly only when required.
Great to hear that it has served you well :)
I will get around to updating the documentation as soon as I can... just to avoid any confusion. At a minimum I'll update that version bit... soon.
username_0: Okay fantastic, thanks for your fast response Eben.
So am I correct in thinking that this means full support for .NET 5, given that supports Standard 2.1 and lower?
username_1: .NET 5 would use the .NET Standard 2.1 package, yes.
I *could*, of course, add a target to .NET 5 but since I'm not using anything specific to .NET 5 it doesn't seem necessary at this stage. Even so, introducing .NET 5-only functionality would result in conditional compilation and the like. This is something that *may* be required in future.
Currently the [ServiceHost](https://github.com/Shuttle/Shuttle.Core.ServiceHost/blob/master/Shuttle.Core.ServiceHost/Shuttle.Core.ServiceHost.csproj) targets implementations (`net461;netcoreapp2.1`) since the package serves a rather specific purpose. This results in conditional compilation requirements as can be seen in the [ConsoleService](https://github.com/Shuttle/Shuttle.Core.ServiceHost/blob/master/Shuttle.Core.ServiceHost/ConsoleService.cs) class. .NET 5 targeting would seem appropriate here.
If you *do* happen upon anything that you feel requires any updating you are absolutely welcome to let me know.
username_0: That all makes sense, thanks again Eben.
Thanks too for your fast responses, they are always much appreciated!
I'll close this as you've answered my questions, thanks.
Status: Issue closed
|
carlpett/xUnit-TeamCity | 221675575 | Title: Teamcity Version 10.0.1 and version 1.1.3a
Question:
username_0: Hello,
I have TeamCity Version 10.0.1 with xunit teamcity plugin version 1.1.3a.
I uploaded the plugin a month plus ago, but I guess the server never restarted to update the plugin.
But now after some random maintenance reboot, Teamcity plugin 1.1.3a started being used.
But when a build runs, the unit test execution step throws an error.
[16:10:15]E: Step 7/9: Tests (xUnit)
[16:10:15] : [Step 7/9] Runner parameters { Version = 1.9.2, runtime = .NET 4.0, platform = AnyCPU/MSIL}
[16:10:15] : [Step 7/9] Failed to run tests
[16:10:15]W: [Step 7/9] java.lang.NumberFormatException: For input string: ""
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:592)
at java.lang.Integer.parseInt(Integer.java:615)
at se.capeit.dev.xunittestrunner.XUnitBuildProcess.call(XUnitBuildProcess.java:58)
at se.capeit.dev.xunittestrunner.XUnitBuildProcess.call(XUnitBuildProcess.java:19)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This occurred for two projects using xunit 1.9.2
I deleted version 1.1.3a and reverted back to 1.1.2 to allow builds to occur without failure.
https://github.com/Jogge/xUnit-TeamCity
Answers:
username_1: Hi!
Version 1.1.3a was built off a fork of this project, and included some code which did not account for some fields being uninitialized if created with previous versions. I have since merged that fork, including a fix for your issue, but this is as yet unreleased. I'm hoping to release a version 1.2 "soonish", but if you did not need any of the features from the fork (namely xUnit 2.2 support or parallel runs), I'd recommend sticking with 1.1.2 for now.
Sorry about this!
Status: Issue closed
|
linkerd/linkerd2 | 488762046 | Title: Publish Helm chart to Helm Hub
Question:
username_0: With the official Helm Repo soon to be deprecated, we should publish our Helm chart to Helm Hub, per docs [here](https://github.com/helm/hub/blob/master/Repositories.md).
Answers:
username_1: I've created a PR in Helm's hub repo: helm/hub#148
username_1: The charts have been published at these locations:
https://hub.helm.sh/charts/linkerd2/linkerd2
https://hub.helm.sh/charts/linkerd2-edge/linkerd2
I'll try now to properly set an icon for them.
username_1: The changes in our Helm repo aren't being picked up by the Helm hub. I've opened helm/hub#160 to ask about it.
username_1: The changes were finally picked up. Closing this issue.
Status: Issue closed
|
swagger-api/swagger-codegen | 109313759 | Title: POST resources with EMPTY json response schema {} fails to generate the API classes
Question:
username_0: Please find the attached JSON samples. I have included documentation with an empty {} JSON as one response and some actual JSON content as the other. Also, I have included the same resource using PUT with no schema to return.
Case 1: Not working with POST when response has a schema defined which does not have any response in it.

Case 2: Working with POST when response has a schema defined which does have some response in it.

Case 3: Working with PUT when response has a no schema defined for response.

Answers:
username_0: I think the root cause of the issue is the following:
Not working:
"produces": ["application/json",
"text/json",
"application/xml",
"text/xml"],
Working:
"produces": ["application/json",
"text/json"],
So it looks like when the response schema contains {} (empty JSON), it's not able to produce the client API with XML, which is causing it to fail.
If we somehow correct the code for the XML response as well, it will work.
Thanks,
Amit
username_1: @username_0 `produces` is mapped to `Accept` in the API client. For your server, does it have a preference to return data in XML format if "application/xml" is present (the last item) in the HTTP accept header?
username_0: I think the issue was that they had specified {} JSON as the response with application/json.
This combination is failing, so we need to fix the codegen to handle the scenario and allow {} as a return response for application/json.
An empty response in the case of application/xml somehow is working.
Thanks,
Amit
username_1: @username_0 Using [petstore.json](https://github.com/swagger-api/swagger-codegen/blob/master/modules/swagger-codegen/src/test/resources/2_0/petstore.json#L218), I created an Object without any properties (using {}) and updated getPetById to return the Object (response). Running `./bin/java-petstore.sh` and `./bin/php-petstore.sh' did not result in any exception.
username_0: Can you use something like below,
"parameters": [
{
"in": "body",
"name": "body",
"description": "Pet object that needs to be added to the store",
"required": false,
"schema": {
"$ref": "#/definitions/Object"
}
}
]
Schema as below.
"Object" : {
"type" : "object",
"properties": {
}
}
username_0: If you try to use something like the above, it will fail while generating the API classes.
username_0: If you can send me email on <EMAIL>, I can send you sample json, which you can try to work with.
username_1: I did try with the above but maybe I made a mistake in my tests. I'll do another test tomorrow.
In the meantime, do you mind pulling the latest master (git pull && mvn clean package) and give it a try again?
username_0: Sure, I will.
username_1: I used https://gist.githubusercontent.com/username_1/2270a0bf6b089ebb0f89/raw/340c950e60eb6cb2dac6d137b641468f2e26c762/swagger_empty_object.json but couldn't repeat the issue.
username_0: Yes, it looks like it's working.
Somehow it was not working before.
Not sure what I was missing.
Sorry for the false alarm and wasting your time.
:'(
Thanks,
Amit
username_0: Can you also try to see if we can work on other issue I raised,
https://github.com/swagger-api/swagger-codegen/issues/1324
Status: Issue closed
username_1: OK. Closing this one.
username_0: Thanks. |
WebKit/explainers | 607765461 | Title: How to prevent bad-faith login claims?
Question:
username_0: We really need to at least sketch out a possible approach.
Answers:
username_1: To expand on this a bit: If being logged in gives a site extra storage powers, then clearly they'll be incentivized to say the user is logged in as much as possible. We can try to link permission to call this API to browser-observable login actions (such as using WebAuthN, or user autofilling a password field, followed by some sort of submission of that form) but it might be tricky to prevent evasion.
username_2: This has now been ported to the W3C repo. Please continue the discussion there.
Status: Issue closed
|
pantsbuild/pants | 396597364 | Title: `JunitTestsConcurrencyIntegrationTest.test_parallel_both_cmdline` is flaky
Question:
username_0: :16:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.pants.d/pyprep/sources/fd91bfce58d797e67a52d0c9c8bae80725101694/pants_test/backend/jvm/tasks/test_junit_tests_concurrency_integration.py:126: in test_parallel_both_cmdline
self.assertIn("OK (4 tests)", pants_run.stdout_data)
E AssertionError: 'OK (4 tests)' not found in '\n17:48:14 00:00 [main]\n (To run a reporting server: ./pants server)\n17:48:15 00:01 [setup]\n17:48:15 00:01 [parse]\n Executing tasks in goals: native-compile -> link -> bootstrap -> imports -> unpack-jars -> jvm-platform-validate -> deferred-sources -> gen -> resolve -> resources -> pyprep -> compile -> test\n17:48:15 00:01 [native-compile]\n17:48:15 00:01 [conan-prep]\n17:48:15 00:01 [create-conan-pex]\n17:48:19 00:05 [native-third-party-fetch]\n17:48:19 00:05 [c-for-ctypes]\n17:48:19 00:05 [cpp-for-ctypes]\n17:48:19 00:05 [link]\n17:48:19 00:05 [shared-libraries]\n17:48:19 00:05 [bootstrap]\n17:48:19 00:05 [substitute-aliased-targets]\n17:48:19 00:05 [jar-dependency-management]\n17:48:19 00:05 [bootstrap-jvm-tools]\n17:48:19 00:05 [provide-tools-jar]\n17:48:19 00:05 [imports]\n17:48:19 00:05 [ivy-imports]\n17:48:19 00:05 [unpack-jars]\n17:48:19 00:05 [unpack-jars]\n17:48:19 00:05 [jvm-platform-validate]\n17:48:19 00:05 [jvm-platform-validate]\n Invalidated 1 target.\n17:48:19 00:05 [deferred-sources]\n17:48:19 00:05 [deferred-sources]\n17:48:19 00:05 [gen]\n17:48:19 00:05 [antlr-java]\n17:48:19 00:05 [antlr-py]\n17:48:19 00:05 [jaxb]\n17:48:19 00:05 [protoc]\n17:48:19 00:05 [ragel]\n17:48:19 00:05 [thrift-java]\n17:48:19 00:05 [thrift-py]\n17:48:19 00:05 [wire]\n17:48:19 00:05 [avro-java]\n17:48:19 00:05 [go-thrift]\n17:48:19 00:05 [go-protobuf]\n17:48:19 00:05 [jax-ws]\n17:48:19 00:05 [scrooge]\n17:48:19 00:05 [thrifty]\n17:48:19 00:05 [resolve]\n17:48:19 00:05 [ivy]\n Invalidated 2 targets.\n17:48:19 00:05 [ivy-resolve]\n17:48:20 00:06 [coursier]\n17:48:20 00:06 [go]\n17:48:20 00:06 [scala-js-compile]\n17:48:20 00:06 [scala-js-link]\n17:48:20 00:06 [node]\n17:48:20 00:06 [resources]\n17:48:20 00:06 [prepare]\n17:48:20 00:06 [services]\n17:48:20 00:06 [pyprep]\n17:48:20 00:06 [interpreter]\n17:48:20 00:06 [build-local-dists]\n17:48:20 00:06 [requirements]\n17:48:20 00:06 [sources]\n17:48:20 00:06 [compile]\n17:48:20 00:06 [node]\n17:48:20 00:06 [compile-jvm-prep-command]\n17:48:20 00:06 [jvm_prep_command]\n17:48:20 00:06 [compile-prep-command]\n17:48:20 00:06 [compile]\n17:48:20 00:06 [rsc]\n17:48:20 00:06 [zinc]\n Invalidated 1 target.\n17:48:20 00:06 [isolation-zinc-pool-bootstrap]\n [1/1] Compiling 2 zinc sources in 1 target (testprojects/tests/java/org/pantsbuild/testproject/parallelclassesandmethods:cmdline).\n17:48:20 00:06 [compile]\n \n17:48:20 00:06 [cache]\n17:48:20 00:06 [bootstrap-scalac_2_11]\n17:48:21 00:07 [cache]\n Using cached artifacts for 1 target.\n17:48:22 00:08 [cache]\n Using cached artifacts for 1 target.\n17:48:22 00:08 [cache]\n17:48:22 00:08 [bootstrap-compiler-bridge]\n17:48:22 00:08 [zinc]\n [info] Compiling 2 Java sources to /home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpmyx6bka_.pants.d/compile/zinc/a53b6931478d/testprojects.tests.java.org.pantsbuild.testproject.parallelclassesandmethods.cmdline/current/classes ...\n [info] Done compiling.\n [info] Compile success at Jan 7, 2019 5:48:26 PM [3.924s]\n \n17:48:26 00:12 [javac]\n17:48:26 00:12 [cpp]\n17:48:26 00:12 [errorprone]\n Invalidated 1 target.\n [1/1] testprojects/tests/java/org/pantsbuild/testproject/parallelclassesandmethods:cmdline\n17:48:26 00:12 [cache]\n17:48:26 00:12 [bootstrap-errorprone-javac]\n17:48:27 00:13 [cache]\n Using cached artifacts for 1 target.\n17:48:27 00:13 [errorprone]\n \n17:48:29 00:15 [findbugs]\n Invalidated 1 target.\n [1/1] testprojects/tests/java/org/pantsbuild/testproject/parallelclassesandmethods:cmdline\n17:48:29 
00:15 [cache]\n Using cached artifacts for 1 target.\n17:48:29 00:15 [findbugs]\n \n17:48:33 00:19 [go]\n17:48:33 00:19 [test]\n17:48:33 00:19 [test-jvm-prep-command]\n17:48:33 00:19 [jvm_prep_command]\n17:48:33 00:19 [test-prep-command]\n17:48:33 00:19 [test]\n17:48:33 00:19 [pytest-prep]\n17:48:33 00:19 [pytest]\n17:48:33 00:19 [junit]\n Invalidated 1 target.\n17:48:33 00:19 [cache]\n Using cached artifacts for 1 target.\n Using experimental junit-runner logic.\n17:48:33 00:19 [run]\n (0 tests)\n \n \n Time: 0\n \n OK (0 tests)\n \n \n testprojects/tests/java/org/pantsbuild/testproject/parallelclassesandmethods:cmdline..... SUCCESS\n17:48:33 00:19 [go]\n17:48:33 00:19 [node]\n Waiting for background workers to finish.\n17:48:34 00:20 [complete]\n SUCCESS'
generated xml file: /home/travis/build/pantsbuild/pants/.pants.d/test/pytest/tests.python.pants_test.backend.jvm.tasks.junit_tests_concurrency_integration/junitxml/TEST-tests.python.pants_test.backend.jvm.tasks.junit_tests_concurrency_integration.xml
============ slowest 3 test durations ============
21.85s call ../tests/python/pants_test/backend/jvm/tasks/test_junit_tests_concurrency_integration.py::JunitTestsConcurrencyIntegrationTest::test_parallel_both_cmdline
0.00s teardown ../tests/python/pants_test/backend/jvm/tasks/test_junit_tests_concurrency_integration.py::JunitTestsConcurrencyIntegrationTest::test_parallel_both_cmdline
0.00s setup ../tests/python/pants_test/backend/jvm/tasks/test_junit_tests_concurrency_integration.py::JunitTestsConcurrencyIntegrationTest::test_parallel_both_cmdline
=========== 1 failed in 22.05 seconds ============
```
Answers:
username_1: Seen again in master, but for `JunitTestsConcurrencyIntegrationTest.test_parallel_target`.
username_2: Likely fixed via https://github.com/pantsbuild/pants/pull/8535. Please reopen if it occurs again
Status: Issue closed
|
pynvme/pynvme | 573694397 | Title: ERROR nvme_ctrlr.c(2489) nvme_ctrlr_process_init: Initialization timed out in state 3
Question:
username_0: 你好,我想请问下我在中断运行一个脚本的时候,手动中断后,再次运行其他脚本,就会出现error:
INFO driver.c(498) probe_cb: Attaching to NVMe Controller at 0000:01:00.0
[2020-03-02 11:19:07.377150] INFO intr_mgt.c(174) msix_intc_init: vector_num 32
[2020-03-02 11:19:57.234873] ERROR nvme_ctrlr.c(2489) nvme_ctrlr_process_init: Initialization timed out in state 3
python3: nvme_ctrlr.c:2489: nvme_ctrlr_process_init: Assertion `0' failed.
Fatal Python error: Aborted
我想请问下这个可能是什么原因导致的呢,我看百度上说的是SPDK模式的盘已经有进程绑定了,当前进程无法attach 到SPDK模式的盘。kill调所有使用SPDK模式盘的进程,重新拉起程序。但是我看没有这个进程在运行啊?
Answers:
username_1: Yes, you need to kill all processes running SPDK and pynvme. Because pynvme tests are organized by pytest, so you can search by 'pytest' to get the pid as below:
```shell
ps -eo pid,args | grep pytest
```
username_0: 嗯嗯,多谢了
---Original---
username_0: 你好,我输入你发的那个命令,kill的时候发现没有这样的进程,但是运行脚本还是出现超时
---Original---
Status: Issue closed
|
GoogleCloudPlatform/magic-modules | 327527814 | Title: Add retry logic to Terraform templates
Question:
username_0: GCP APIs will sometimes return errors that can be retried (5xx and 429). In Terraform we currently have retry logic in some resources, but not all of them. Let's add it to the templates so all the autogen-ed resources have it.
@rambleraptor, do the other providers have retry logic already or is this something that we also want to add to them?<issue_closed>
Status: Issue closed |
fieryapi/node-js-ws-samples | 272985270 | Title: Simple question
Question:
username_0: What are the differences between batch mode and non-batch mode?
Status: Issue closed
Answers:
username_0: What are the differences between batch mode and non-batch mode?
username_1: The difference between the two is only the aggregation of the command. Performance is about the same for both approaches. The batch approach let the client aggregates multiple commands into a single message while the non-batch approach requires the client to send one command per message.
Status: Issue closed
|
EOSIO/eosjs | 394017392 | Title: How to use eosjs in react-native project?
Question:
username_0: **Version of EOSJS**
20.0.0-beta3
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1.
import { Api, JsonRpc, RpcError } from 'eosjs-rn';
import JsSignatureProvider from 'eosjs-rn/dist/eosjs-jssig';
import { TextEncoder, TextDecoder } from 'text-encoding';
2.
const defaultPrivateKey = "<KEY>";
const signatureProvider = new JsSignatureProvider([defaultPrivateKey]);
const rpc = new JsonRpc('http://127.0.0.1:8888', { fetch });
const api = new Api({ rpc, signatureProvider, textDecoder: new TextDecoder(), textEncoder: new TextEncoder() });
(async () => {
const result = await api.transact({
actions: [{
account: 'eosio.token',
name: 'transfer',
authorization: [{
actor: 'useraaaaaaaa',
permission: 'active',
}],
data: {
from: 'useraaaaaaaa',
to: 'useraaaaaaab',
quantity: '0.0001 SYS',
memo: '',
},
}]
}, {
blocksBehind: 3,
expireSeconds: 30,
});
console.dir(result);
})();
**Expected behavior**
can not run correctly
**Screenshots**

Answers:
username_1: Same here.
react-native doesn't like `JsSignatureProvider` and instantiate `Api` doesn't work because `JsSignatureProvider` is required.
username_2: same problem
username_3: the same bug
username_4: The JsSignatureProvider won't work with React Native due to a dependency on the mobile incompatible `eosjs-ecc` package. You could potentially write your own signature provider that utilizes a community-created `eosjs-ess-rn` package found [here](https://www.npmjs.com/package/eosjs-ecc-rn). However, this has not been audited for security or performance implications so use at your own risk.
Status: Issue closed
|
SeedCompany/cord-api-v3 | 726867178 | Title: [BUG] Can create duplicate Partners with same Organization
Question:
username_0: I created a Partner with a newly created Organization. When searching for the Partner I was only able to find the Organization in the search results.
I created a Partner again with the same name and Organization.
Later in sorting the Partner List I found both of the Partners I created, with identical names and Organizations.
We should limit Partners to unique names.
Answers:
username_1: Partners don't have names, they show the names of their organizations. If we enforce only 1 partner per organization their "names" will be unique
Status: Issue closed
|
OpenNebula/one | 312525460 | Title: Storage quotas and IP's quotas
Question:
username_0: # Enhancement Request
## Description
Add Storage quotas and IP's quotas in the Sunstone dashboard
## Use case
More information for the admins
## Interface Changes
Add two widgets with the respective quotas in the dashoboard
# Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
Answers:
username_1: This would be awesome!
username_2: @tino PR to approve:
- Test: https://github.com/OpenNebula/development/pull/1054
- ONE: https://github.com/OpenNebula/one/pull/4877
Status: Issue closed
username_2: - docs: https://github.com/OpenNebula/docs/commit/95bd5cf54bda141561cf460c3802d1998f22b63d |
g3n35i5/shop-db2 | 425329791 | Title: Deleting verified users
Question:
username_0: Users should also be able to be deleted, even if they have already been verified and have entries in the database. Before a user can be deleted, his credit must be balanced. This can be done automatically by entering positive or negative credits in a new "Loss/Profit" table. At the same time all references for this user have to be deleted.
Answers:
username_0: A difficulty here is when the user is an administrator or once was. In this case, all actions that required the permissions of an administrator and were performed by the user to be deleted must first be migrated to a new, active administrator before deletion. The administrator must receive a list of all actions and confirm that he wants to take them over. Only then can the administrator rights be withdrawn from the user and he can also be deleted like a normal user. |
tensorflow/tensorflow | 894392540 | Title: "download_dependencies.sh" doesn't work
Question:
username_0: **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- Mac OS BigSur 11.2.3(arm64):
- TensorFlow installed from (source or binary): https://github.com/tensorflow/tensorflow
- TensorFlow version (use command below): the latest
- Python version: 3.8
- GPU model and memory:
**Describe the current behavior**
I try to use tensorflowlite with ios , and when i followed the instructions to the step "download_dependencies.sh" ,it shows that :
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
I think something wrong with "grep -oP"
**Describe the expected behavior**
download the resources
**[Contributing](https://www.tensorflow.org/community/contribute)** - Do you
want to contribute a PR? (yes): - Briefly describe your candidate solution
(if contributing):
modify the grep -oP
<img width="571" alt="截屏2021-05-18 下午9 28 15" src="https://user-images.githubusercontent.com/55579125/118660438-c8aa4b80-b820-11eb-9d9c-283ccd0cf1a2.png">
**Other info / logs** Include any logs or source code that would be helpful to
diagnose the problem. If including tracebacks, please include the full
traceback. Large logs and files should be attached.
Answers:
username_1: @username_0
seems like this is similar to issue [#35747](https://github.com/tensorflow/tensorflow/issues/35747). Please take a look at this and let us know if you are still facing the same issue. Thanks!
username_0: It's not the same problem I think , I put some "echo" to test and I found that the problem starts from line 37 in "https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/make/download_dependencies.sh"
and I foun some notes in this file in line 64
# TODO(petewarden): Some new code in Eigen triggers a clang bug with iOS arm64,# so work around it by patching the source.
username_2: How did this shit end up in production in the first place???? Are u stupid??? and with a TODO note too !!!11
what the fuck is your cto or whatever thinking bout this?! is he happy? because if I'd be him I would be slapping all your shitfaces with my dick by now
username_3: I am also hitting this same issue related to grep usage when running download_dependencies.sh on macOS Big Sur.
Any recommendation for fix / workaround to resolve this issue?
username_4: How about using CMake?
https://www.tensorflow.org/lite/guide/build_cmake
username_3: `grep -oP` is not supported on macOS.
download_dependencies.sh script starts working for me on macOS after updating all `grep -oP` usage with an equivalent perl command as shown below
Replace
`EIGEN_COMMIT="$(grep -oP 'EIGEN_COMMIT = "\K[0-9a-f]{40}' "${EIGEN_WORKSPACE_BZL_PATH}")"`
with
`EIGEN_COMMIT=$( perl -ne 'if (/EIGEN_COMMIT = "([0-9a-f]{40})/) { print $1 . "\n" }' ${EIGEN_WORKSPACE_BZL_PATH} )`
username_5: @username_3 can you share file here?
username_3: @username_5 I've attached the modified download_dependencies.sh script
[download_dependencies.zip](https://github.com/tensorflow/tensorflow/files/6997138/download_dependencies.zip)
username_3: Hi @username_4 - Would you be able to checkin the changes in the script that I've attached in my comment above in the tensorflow repo? These new changes work on both macOs and linux machines.
username_5: <img width="388" alt="Screen Shot 2021-08-17 at 10 30 06" src="https://user-images.githubusercontent.com/80002509/129691969-88ba8b4f-7556-497f-bcdb-acce94a0531d.png">
I cannot run your script either
Status: Issue closed
username_4: Hi, as I shared today,
https://groups.google.com/a/tensorflow.org/g/tflite/c/NJq6abdIXkU/m/Rssm4t3pAgAJ
the Makefile build is deprecated. Plz try to use CMake. |
FRCTeam5902/FRC-5902-Deep-Space-2019 | 408572654 | Title: Cargo System didn't turn off when button released
Question:
username_0: When testing the cargo system on Saturday we had some weird moments where the system did not turn off when the button was released. It was not consistent. It only happened some of the time.
We should do one of the following:
- Test to see if it was just a button sticking
- Check the code to make sure it functions as we want it to
- Code a button to make the cargo system turn off so we can trigger it if it happens in competition
Answers:
username_0: Tested and think it was just a stuck button. Mapped it to joystick now so no longer using buttons for it. Have not had any issues. Marking this closed.
Status: Issue closed
|
larq/larq | 1079588065 | Title: Power of 2 quantization
Question:
username_0: I was wondering if there is a way to perform power of 2 quantization with Larq. Maybe a specific quantizer is needed? Any suggestions?
Answers:
username_1: Hi! None of the quantizers currently in Larq does power of 2 quantization, but I recall reading
[ADDITIVE POWERS-OF-TWO QUANTIZATION: AN EFFICIENT NON-UNIFORM DISCRETIZATION FOR NEURAL NETWORKS](https://arxiv.org/pdf/1909.13144.pdf)
a while ago, and you should be able to implement it in Larq fairly easily if you desire.
To get some idea of what a `k`-bit quantizer in Larq would look like, I recommend having a look at the [DoReFa quantizer](https://github.com/larq/larq/blob/main/larq/quantizers.py#L538-L693) (though that one is probably slightly more complex than you need yours to be). Do let me know if you have any questions about implementing custom quantizers! |
CompuCell3D/CompuCell3D | 854728056 | Title: Secretion Scaling in Diffusion Solver
Question:
username_0: If we do "constant concentration" secretion in the diffusion solver and set the value in medium to 1 and the diffusion rate to 1, the concentration in medium is 0.125. This is because the diffusion solver is calling itself 8 times per MCS and is rescaling the fixed concentration level by that factor of 8, which it shouldn't (if should for a constant rate but not a constant value). The origin of the problem is clear, because if you set the diffusion constant to 0.5, the value becomes .25, etc....<issue_closed>
Status: Issue closed |
Kozea/WeasyPrint | 405657812 | Title: skip_first_whitespace IndexError
Question:
username_0: E IndexError: list index out of range
weasyprint/layout/inlines.py:206: IndexError
`
Let me know if I can help you in any way, I tried to follow the code to find the real cause but couldn't and have to keep going on other stuff.
Answers:
username_1: Looks like in `skip_first_whitespace` the `skip_stack` and the boxe's children are discoordinated.
When exception happens, the given skip_stack ist `(113, None)`, but the given InlineBox has only 1 child, and consequently in [line 206](https://github.com/Kozea/WeasyPrint/blob/0682f1ba4133ebd1d38484da97d8ab6f46bd6ffc/weasyprint/layout/inlines.py#L206)
```python
result = skip_first_whitespace(box.children[index], next_skip_stack)
```
the recursive call with `box.children[113]` fails with IndexError.
The concerned InlineBox is the `<b><i>l</i></b>`. Don't know who and where 113 children could be established in the skip_stack...
BTW: 113 is a prime number :grimacing:
username_0: Lol, IIRC the real HTML crashed with (107, None) :laughing:
username_1: Yep, let's call it *the prime number bug*
username_2: Another problem, probably related:
```html
<p>*<span>****************************************** *** **** ** ** ******* *********** ************************************************************************* <b>l</b></span><b>aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa</b></p>
```
username_1: Encircling the issue reveals: The *prime number* isn't meant to be the index of a child box, but the index of the letter (space) in a TextBox's text where the overlong text should be/has been broken.
After splitting all the text snippets from the loooong text into separate LineBoxes, the `resume_at` aka `skip_stack` returned by `split_inline_box()` manages to point at the box after the TextBox, wich in the example given by @username_0 happens to be the bold `<InlineBox b>`.
But somehow/sadly it leaves the last entry of the skip_stack still pointing to the letter where the last text splitting happened. This last entry, containing `(<textbreakpos>, None)`, should, of course be replaced with `None`. Dont (yet) know why it isn't.
Subsequently `skip_first_whitespace()` tries to accesses the inexistent child № `textbreakpos` of the `InlineBox` and raises IndexError.
Painstaking preparation is required to trigger this erroneous skip_stack -- particular nesting of InlineBoxes and TextBoxes in combination with the right (wrong?) text content, page width and font size. The TextBox, its text extending close to the right margin, and a following InlineBox (`<b>`) being enclosed in a InlineBox (`<span>`), immediately followed by non-space (`x`) seems to be crucial.
This fine-tuned snippet
```html
<p><span>
********************************* *******
********* *************** ***
********* *************** ***
********************************** ** ******* *** <b>l</b></span>x</p>
```
crashes with prime number 73 :smile:
@username_2 Your snippet is another issue. It just doesn't break where it ought to. The skip_stack is ok, throughout. No leftovers from TextBoxes.
username_1: Found the bug but cannot fix it. Happens in the most ugly most dirty most abominable part of the inline layout code in `split_inline_box`:
https://github.com/Kozea/WeasyPrint/blob/fc089f570001cc1c3879e1e7e6552c396868d5f5/weasyprint/layout/inlines.py#L854-L859
BTW: The *broken_child-detection* was introduced to fix #580.
Under *prime number bug* conditions `broken_child` evaluates to true, the subsequent reconstruction of the skip_stack ("adding skip stacks is a bit complicated") produces the invalid stack that finally raises the IndexError.
By forcibly setting `broken_child=False` and skipping the reconstruction of the stack **everything is fine**. Nah, not everything, of course, #580 is broken again.
So obviously the algorithm to detect `broken_child` is incomplete/wrong/lacking something. But it's beyond my skills to tweak it.
This minimal snippet crashes with prime number 1:
```html
<p><span>
**************************************************************
***********
<b>fits</b></span>xxx</p>
```
And yes, the three whitespaces (here: linebreaks) are vital to the IndexError. Ditto vital: no whitespace in front of the `xxx`.
username_1: Recipe to reproduce the IndexError:
```html
<p><span title="span required to trigger the bug">
*breakable*text* *followed*by*withespace*
<b>+fit+</b></span>CrossingTheMarginWithoutBreakRaisesIndexError</p>
```
Though @username_2's snippet doesnt crash with IndexError, it's related -- of course! he was right!
It reads like this:
```html
<p><span title="span required to trigger the bug">
*non*breaking*text**followed*by*withespace*
<b>+fit+</b></span>CrossingTheMarginWithoutBreakExtendsTheLineUntil next whitespace goes on the next line</p>
```
The crash is prevented because
https://github.com/Kozea/WeasyPrint/blob/fc089f570001cc1c3879e1e7e6552c396868d5f5/weasyprint/layout/inlines.py#L820
returns `False` for the `*non*breaking*text**followed*by*withespace*`, while for the `*breakable*text* *followed*by*withespace*` we get `True` and enter "The dirty solution".
username_1: BTW: Taking "The dirty solution" path (i.e. calling `split_inline_level` one more time), but avoiding the recreation of the skip_stack, seems to fix both, the **IndexError** and the **CrossingTheMarginShouldWrap** bug.
At least if the `CrossingTheMarginWithoutBreak` string doesnt cross the margin of the next line, too.
Status: Issue closed
username_2: 8724bc3 fixes this bug too, but … there's another bug in `can_break_inside` that is unable to detect a line break in `<p>aaa <b>bbb</b></p>`. I've added a failing test in e7fd37b.
username_0: Thanks @username_1 and @username_2 !
username_1: That's because `can_break_text()` from `text.py` returns False for `aaa `, i.e. a string that ends with a space.
That's because `can_break_text()` checks for the `PangoLogAttr`*s `is_line_break` attribute. And indeed, a trailing whitespace isn't a line break. In fact: whitespace is never a line break.
When I get it right, `pango.pango_get_log_attrs()` sets this attribute on letters that could become the first letter of a new line, that is, the letter that **follows** a possible line-breaking (whitespace or punctuation or...) letter.
It's a pity that we must take `box.style['lang']` and `box.style['white_space']` into account. Otherwise we could simply collect the text from the child's TextBoxes, including all the intermediate whitespaces, and pass that as a single string to `can_break_text`...
username_2: Yes, I've learned a lot of things fixing #301, and Unicode is really fascinating. I'm glad to have Pango :smile:.
We already have (almost) the whole logic in WeasyPrint: `split_inline_level` is able to find correctly if we can split a line, and it takes care of line breaks with nested tags. We should use it instead of relying on `can_break_inside` (that's small and fast but definitely buggy). |
freebsd/pkg | 328772933 | Title: Failed pkg install should not corrupt the local meta database...
Question:
username_0: `pkg install` failed while wifi was unavailable momentarily. It was required to rebuild the database via `pkg upgrade`. If DNS is unavailable, `pkg` should not corrupt its database.
```
% doas pkg install postgresql96-server
Updating FreeBSD repository catalogue...
pkg: repository meta /var/db/pkg/FreeBSD.meta has wrong version or wrong format
pkg: Repository FreeBSD load error: meta cannot be loaded No such file or directory
pkg: http://pkg.FreeBSD.org/FreeBSD:12:amd64/latest/meta.txz: No address record
repository FreeBSD has no meta file, using default settings
Fetching packagesite.txz: 100% 6 MiB 144.7kB/s 00:44
pkg: repository meta /var/db/pkg/FreeBSD.meta has wrong version or wrong format
pkg: Repository FreeBSD load error: meta cannot be loaded No such file or directory
Unable to open created repository FreeBSD
Unable to update repository FreeBSD
Error updating repositories!
Exit 3
% doas pkg install -U postgresql96-server
pkg: repository meta /var/db/pkg/FreeBSD.meta has wrong version or wrong format
pkg: Repository FreeBSD load error: meta cannot be loaded No such file or directory
pkg: Repository FreeBSD cannot be opened. 'pkg update' required
pkg: No packages available to install matching 'postgresql96-server' have been found in the repositories
Exit 70
% doas pkg update
Updating FreeBSD repository catalogue...
pkg: repository meta /var/db/pkg/FreeBSD.meta has wrong version or wrong format
pkg: Repository FreeBSD load error: meta cannot be loaded No such file or directory
Fetching meta.txz: 100% 940 B 0.9kB/s 00:01
Fetching packagesite.txz: 100% 6 MiB 151.5kB/s 00:42
Processing entries: 100%
FreeBSD repository update completed. 31365 packages processed.
All repositories are up to date.
``` |
LINNAE-project/SFB-Annotator | 784210807 | Title: Shortlist datasets for archiving at SDR
Question:
username_0: Note: one fieldbook (=dataset) consists of multiple pages (= TIFF files).
related to #36
Answers:
username_1: MMNAT01_B4_F2_V3/PM (TIFF files):
| Size | Nr of pages | folder | content |
|-- |-- |-- |-- |
| 60M | 3 | NNM001001032/ | birds and mammals cover of book
| 7.0G | 239 | NNM001001033/ | chapter on mammals
| 5.6G | 208 | NNM001001034/ | chapter on birds
| 68M | 3 | NNM001001035/ | amphibians cover of book
| 5.4G | 202 | NNM001001036/ | chapter on amphibians
| 6.3G | 240 | NNM001001037/ | another chapter on amphibians
username_0: N.B.: According to https://github.com/LINNAE-project/SFB-Annotator/issues/36#issuecomment-716502649 the files should be renamed (i.e., without `_PM_ `).
username_1: Should we not keep the original file name, and map it to the _UniekId_digitale_collectie_ with a uniqueID property? Just wondering whether changing the name could lead to ambiguity, as there is a fixed folder and file name structure in the collection. @username_0 what do you think?
username_0: **Our selection**: Mammals and birds / `NNM001001033` (book / folder)
JPG images include the `_AF_` infix while TIF images include the `_PM_` in their file names; I thought, URLs should be independent of the file type(s) by removing the latter. See details below:
Text search at https://dh.brill.com/nco/ with:
- `NNM001001033` or `NC_a_Ku_012_002` returns the same entry type (folder) with ID `NC_a_Ku_012_002`, which corresponds to
*Permanent link*: https://dh.brill.com/nco/view/nco_NNM001001033/makingsense (returns 240 items but there are `001-239` pages?)
According to
- [README](https://github.com/LINNAE-project/SFB-Annotator/blob/master/data/jpg/README.md) `NC_a_Ku_012_002` corresponds to the (book) `dc:identifier` and it's auto-generated (#36)
- the shared spreadsheet: `NC_a_Ku_012_002` (_Uniekid_) and `MMNAT01_NNM001001033` (_UniekId_digitale_collectie_)
**Example:** page 1
*URN*: urn:cite:visualeditions:nco_nnm001001033_001
*Permanent link*: https://dh.brill.com/nco/view/nco_NNM001001033_001/makingsense
*Image URL*: https://dh.brill.com/nco/f/MMNAT01_AF_NNM001001033_001.jpg
*IIIF info.json*: https://iiif.arkyves.org/MMNAT01_AF_NNM001001033_001.jpg/info.json
_N.B._: There is no `_AF_` in URN or permalink.
**Google Drive**
`MMNAT01_B4_F2_V3` (:question:) -> `NMM001001033` (sub-folder) -> `MMMAT01_PM_NMM001001033_[001-239].tif`
**SDR**
*Landing page (dataset)*: https://trng-repository.surfsara.nl/deposit/900c341c1c10fff7
*DOI*: [10.21945/SURF-trng.1f9b3206-559da01b](https://dx.doi.org/10.21945/SURF-trng.1f9b3206-559da01b)
*EPIC PID*: [21.T12996/SURF-trng.1f9b3206-559da01b](https://trng-repository.surfsara.nl/deposit/hdl.handle.net/21.T12996/SURF-trng.1f9b3206-559da01b)
*Image URL*: https://trng-repository.surfsara.nl/deposit/900c341c1c10fff7/files/MMNAT01_PM_NNM001001033_001.tif
**IIIF server (local)**
*info.json*: http://localhost:8182/iiif/2/900c341c1c10fff7:MMNAT01_PM_NNM001001033_001/info.json
*TIF->JPG*: http://localhost:8182/iiif/2/900c341c1c10fff7:MMNAT01_PM_NNM001001033_001/full/max/0/default.jpg
_N.B._: We could use the _UniekId_digitale_collectie_ (file prefix without `_PM`_) to request an image (or info about it) from SDR and IIIF.
username_1: SubfolderName Content Files Size
MMNAT01_B1_F2_V3 Publications 4247 134G
MMNAT01_B2_F2_V3 Sketches and Drawings I 2910 48G
MMNAT01_B3_F2_V3 Sketches and Drawings II 191 7.6G
MMNAT01_B4_F2_V3 Field Books and Correspondence I 19767 253G
MMNAT01_B5_F2_V3 FIeld Books and Correspondence II 9229 92G
JPG + TIF
Focus on Field Books & Corr - > +/14600 files
Sketches & Drawings -> +/
Status: Issue closed
|
InfoAmazonia/rede-site | 109863106 | Title: 'remover' is not working
Question:
username_0: on the data section of the sensors, I can't remove readings.
Answers:
username_1: DELETE route for measurement is returning 404
username_2: The correct route is:
DELETE /api/v1/measurements/560e938c860f948069897731
The client is calling:
DELETE /api/v1/measurements?id=560e938c860f948069897731
Status: Issue closed
|
sdi-sweden/geodataportalen | 1052191964 | Title: (1979, 'Metadata dokument har inte publicerats')
Question:
username_0: **2015-01-27T10:21:31.000+00:00**
****:
Hej,
Kan ni skicka denna post till oss på Support.
-----Ursprungligt meddelande-----
Från: <EMAIL> [mailto:<EMAIL>]
Skickat: den 27 januari 2015 06:33
Till: Support Geodatasamverkan
Ämne: Geodata.se - Metadata dokument har inte publicerats
Metadataposten har inte publicerats
Titel: Naturvårdsavtal (Naturvårdsverket, länsstyrelse)
Id: 14500834-D095-4EB2-9968-2D11B653630B
Tekniskt fel, kontakta supporten.
Mvh
<NAME>, Geodataportalen
Answers:
username_0: **2015-02-02T11:45:25.000+00:00**
****:
Hej,
Hämta den gärna via ett GetRecordById anrop till vår katalog, se länk nedan.
/Birgitta
http://mdk.vic-metria.nu/metadataeditor/srv/eng/csw?request=GetRecordById&id=14500834-D095-4EB2-9968-2D11B653630B&service=CSW&version=2.0.2&namespace=xmlns(csw=http://www.opengis.net/cat/csw)&resultType=results&outputSchema=csw:IsoRecord&outputFormat=application/xml&elementSetName=summary&constraintLanguage=CQL_TEXT&constraint_language_version=1.1.0 |
steelbrain/linter | 166421679 | Title: Option to display issue list in sidebar
Question:
username_0: I think every object which occupies too much height is wasted space on widescreens. Therefore what about an option to display the issues in a sidebar?
Answers:
username_1: It would be nice to have a screenshot explaining the issue
username_0: 
I know expressive painting...
But does it clarify what I mean with sidebar here?
username_1: Well actually, if we add it to the side bar, it'll take at least as much space as a pane and because all of the text in it is vertical (except Japanese text) it won't be a very good idea.
Alternatively you could just disable panel and install `minimap` and `minimap-linter`
Status: Issue closed
username_0: Yeah the text is of course the same, but you could just add some line breaks, so that it fits.
And as said in the width there is enough space (at least more than in height).
Additionally it should is course only be an option. |
jlippold/tweakCompatible | 305221524 | Title: `QuickSwipe` working on iOS 10.2.1
Question:
username_0: ```
{
"packageId": "com.nin9tyfour.quickswipe",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.nin9tyfour.quickswipe",
"deviceId": "iPhone8,4",
"url": "http://cydia.saurik.com/package/com.nin9tyfour.quickswipe/",
"iOSVersion": "10.2.1",
"packageVersionIndexed": false,
"packageName": "QuickSwipe",
"category": "Tweaks",
"repository": "BigBoss",
"name": "QuickSwipe",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.nin9tyfour.quickswipe",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.6",
"shortDescription": "Swipe to dismiss videos in Safari.",
"latest": "1.0-1",
"author": "nin9tyfour",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
stephanrauh/ngx-extended-pdf-viewer | 1001285398 | Title: Allow file:// URLs in Electron apps and Chrome browser extensions
Question:
username_0: As discussed in #907, it's almost always a good idea to prevent local file access in a browser application. But ngx-extended-pdf-viewer can also be used in a Cordoba app, an Electron app or a Chrome browser extension. In such a scenario forbidding the local file access is exactly what you don't want to do.
Let's add a option allowing users to disable the protocol check.
Answers:
username_0: @chan-dev @username_1 I've tried to write an Electron app accessing a local file. But I didn't figure out how to do it. Can you help me? Either by adding to my reproducer see the commit above) or by sending me a reproducer of your Chrome extension?
username_0: Note to myself: these files contain a check on the protocol of the URL:
- display_utils.js#isValidFetchUrl()
- fetch_stream.js#PDFFetchStream()
- network.js#constructor()
- util.js#_isValidProtocol()
However, currently the "file not found" error occurs in network.js#request(). The http status code is 0.
username_1: @username_0 Hey, apologies for the delayed response, we haven't got any further with the cordova-windows -> electron migration and added it back to the backlog due to some other major issues. We can keep that open for the time, and I will come back to it as soon as we get back to work on the migration. Our app is massive, so its takes a lot of time.
We are using Ionic with @capacitor-community/electron. I think there is an example somewhere which you could download and run.
username_0: OK, I'll wait! It'd be great if you could boil down your large app to a tiny reproducer, or if you can debug the problem yourself, using `[minifiedJSLibraries]=false` and the pointers I've mentioned above. |
JessicaML/ScienceMotions-rails | 308342212 | Title: post user.id to the completed_lessons model
Question:
username_0: Right now the user can create a lesson, but the completed-lessons needs the current user_id passed in.
If user_id is set to optional => true (required field), it is possible to post a completed_lesson with the lesson_id, but the user_id also needs to be posted.<issue_closed>
Status: Issue closed |
AugurProject/augur | 538583935 | Title: Update Market Rules
Question:
username_0: Here are the updated rules to be used in Market creation, for both template and non template markets, as well is Reporting and disputing pgs as well
Market rules
A market is invalid if:
• The market question, resolution details or its outcomes are ambiguous, subjective or unknown.
• The result of the event was known at market creation time.
• The outcome was not known at event expiration time.
• It can resolve without at least one of the outcomes listed being the winner, unless it is explicitly stated how the market will otherwise resolve in the resolution details.
• The title, details and outcomes are in direct conflict with each other.
• The market can resolve with more than one winning outcome
• Any of the outcomes don’t answer the market question ONLY. (outcomes cannot introduce a secondary question)
• If using a resolution source (a source is a noun that reports on or decides the result of a market), the source's URL or full name is NOT in the Market Question, regardless of it being in the resolution details.
• If using a resolution source, it is not referenced consistently between the Market Question and Resolution Details e.g. as either a URL or its full name
• If it’s a stock, currency or cryptocurrency and its ticker is not used in the market question.
• If it’s an index and the indexes full name is not in the market question.
• For any sports markets that list a player or team not in the correct league, division or conference, at the time the market was created, the market should resolve as invalid.
• A market only covers events that occur after market creation time and on or before reporting start time. If the event occurs outside of these bounds it has a high probability as resolving as invalid.
Answers:
username_1: @bconfortin -- This would be a good one to knock out tomorrow. |
qsniyg/maxurl | 936516616 | Title: open.spotify.com filter pattern
Question:
username_0: I don't know better than opening an issue, so I'm open to advice! I just found out the pattern for [Spotify](open.spotify.com) max image url in case a user comes across a spotify [release](https://en.wikipedia.org/wiki/Art_release#Music "LPs, singles, EPs") image in a new tab, either from network, sources or elements tab in devtools.

given the url:
https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a
1. i.scdn.co/image/ab67616d0000
2. 1e02
3. f0e911d0e5aadefc431bf34a
***
**universal rule:**
1. [ab67616d0000](https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a) labels a image for a [release](https://en.wikipedia.org/wiki/Art_release#Music "LPs, singles, EPs") on Spotify, [ab6761670000](https://i.scdn.co/image/ab6761670000ecd4b55df6eca9040d9056e0cbbd) for artist banner and [ab6761610000](https://i.scdn.co/image/ab6761610000517412a2ef08d00dd7451a6dbed6) for artist profile picture;
2. First identifies 300x300, second 640x640
1. [1e02](https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a) and [b273](https://i.scdn.co/image/ab67616d0000b273f0e911d0e5aadefc431bf34a) for a release thumbnail
2. [f178](https://i.scdn.co/image/ab6761610000f17812a2ef08d00dd7451a6dbed6) and [5174](https://i.scdn.co/image/ab6761610000517412a2ef08d00dd7451a6dbed6) for artist profile picture
4. identifies an unique image id, each release in Spotify library has its own.
***
With a little browsing all the rules above can be verified. Now, what max image url is supposed to do is:
* find and replace [f178](https://i.scdn.co/image/ab6761610000f17812a2ef08d00dd7451a6dbed6) with [5174](https://i.scdn.co/image/ab6761610000517412a2ef08d00dd7451a6dbed6) if the URL begins with "https://i.scdn.co/image/ab6761610000"
* find and replace [1e02](https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a) with [b273](https://i.scdn.co/image/ab67616d0000b273f0e911d0e5aadefc431bf34a) if the URL begins with "https://i.scdn.co/image/ab67616d0000"
Expected result: 640x640 (original image source) for artist profile picture of release thumbnail, originally 300x300, on spotify.com.

Answers:
username_0: sorry I have no idea how to implement this in my code nor to label this issue... but I thought it would be a good addition, not only on my browser :)
Status: Issue closed
username_1: Thank you very much for both sharing that research and being so thorough with your explanation! I've implemented it in the script as you explained :)
I'll look around some more to see if I can find any other rules (mainly through `inurl:i.scdn.co`)...
username_0: I can't reopen the issue and don't wanna flood, so, just wanna add through this comment that [4851](https://i.scdn.co/image/ab67616d00004851f0e911d0e5aadefc431bf34a) must be replaced by [b723](https://i.scdn.co/image/ab67616d0000b273f0e911d0e5aadefc431bf34a) too, it's a size preset used for albums on [spotifycharts](http://spotifycharts.com).
https://i.scdn.co/image/ab67616d00004851df16d539f508603bfb1efe02 - smallest
https://i.scdn.co/image/ab67616d00001e02df16d539f508603bfb1efe02 - medium
https://i.scdn.co/image/ab67616d0000b273df16d539f508603bfb1efe02 - biggest
Sorry if these contributions don't add much to the project, but it's what I've found :)
username_1: Anything and everything is very welcome! :)
username_1: I don't know better than opening an issue, so I'm open to advice! I just found out the pattern for [Spotify](https://open.spotify.com) max image url in case a user comes across a spotify [release](https://en.wikipedia.org/wiki/Art_release#Music "LPs, singles, EPs") or artist profile image in a new tab, either from network, sources or elements tab in devtools.

given the url:
https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a
1. i.scdn.co/image/ab67616d0000
2. 1e02
3. f0e911d0e5aadefc431bf34a
***
**universal rule:**
1. [ab67616d0000](https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a) labels a image for a [release](https://en.wikipedia.org/wiki/Art_release#Music "LPs, singles, EPs") on Spotify, [ab6761670000](https://i.scdn.co/image/ab6761670000ecd4b55df6eca9040d9056e0cbbd) for artist banner and [ab6761610000](https://i.scdn.co/image/ab6761610000517412a2ef08d00dd7451a6dbed6) for artist profile picture;
2. First identifies 300x300, second 640x640
1. [1e02](https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a) and [b273](https://i.scdn.co/image/ab67616d0000b273f0e911d0e5aadefc431bf34a) for a release thumbnail
2. [f178](https://i.scdn.co/image/ab6761610000f17812a2ef08d00dd7451a6dbed6) and [5174](https://i.scdn.co/image/ab6761610000517412a2ef08d00dd7451a6dbed6) for artist profile picture
4. identifies an unique image id, each release in Spotify library has its own.
***
With a little browsing all the rules above can be verified. Now, what max image url is supposed to do is:
* find and replace [f178](https://i.scdn.co/image/ab6761610000f17812a2ef08d00dd7451a6dbed6) with [5174](https://i.scdn.co/image/ab6761610000517412a2ef08d00dd7451a6dbed6) if the URL begins with "https://i.scdn.co/image/ab6761610000"
* find and replace [1e02](https://i.scdn.co/image/ab67616d00001e02f0e911d0e5aadefc431bf34a) with [b273](https://i.scdn.co/image/ab67616d0000b273f0e911d0e5aadefc431bf34a) if the URL begins with "https://i.scdn.co/image/ab67616d0000"
Expected result: 640x640 (original image source) for artist profile picture or release thumbnail, originally 300x300, on spotify.com.

Status: Issue closed
|
alexalkis/a68k | 433715360 | Title: -mcrt=nix13 is specified twice for the linker
Question:
username_0: this
```
CFLAGS=-Wall -fomit-frame-pointer -Os -mcrt=nix13 -fbaserel -msmall-code
LDFLAGS=-s -mcrt=nix13
```
results into
```
m68k-amigaos-gcc -s -mcrt=nix13 -o a68k A68kmain.o A68kmisc.o Adirect.o Codegen.o embedlvos.o Opcodes.o Operands.o Symtab.o wb_parse.o -Wall -fomit-frame-poi nter -Os -mcrt=nix13 -fbaserel -msmall-code
```
and link errors with duplicate symbols.
Solution:
```
CFLAGS=-Wall -fomit-frame-pointer -Os -mcrt=nix13 -fbaserel -msmall-code
LDFLAGS=-s
```
Answers:
username_1: Hmm, that fix revealed a compiler error :)
```
m68k-amigaos-gcc -s -o a68k A68kmain.o A68kmisc.o Adirect.o Codegen.o embedlvos.o Opcodes.o Operands.o Symtab.o wb_parse.o -Wall -fomit-frame-pointer -Os -mcrt=nix13 -fbaserel -msmall-code
/opt/amiga/gcc6/lib/gcc/m68k-amigaos/6.5.0b/../../../../m68k-amigaos/bin/ld: internal error /home/alex/t/amiga-gcc/projects/binutils/ld/ldlang.c 6644
collect2: error: ld returned 1 exit status
Makefile:7: recipe for target 'a68k' failed
```
username_0: you have old and bogus libstubs,a inside your prefix
username_0: see here: https://github.com/username_0/amiga-gcc/issues/92
username_1: yeap, rm -rf $(prefix) and untar all the yesterday's build fixed it. Thanks!
Status: Issue closed
|
foxfriends/root | 793825832 | Title: UI improvement to the score track
Question:
username_0: The score track is able to show scores, but there are some improvements that could be made mostly regarding how multiple players on the same space are displayed.
Since the pieces are stacked, simulated using box-shadows, piece elements are added and removed from the DOM when they enter/leave spaces with other tiles already on them. Also when multiple pieces are on the same space, you cannot tell which players they are other than the top one.
Pieces on the score track should always be displayed as distinct pieces, even when stacked, by moving them vertically and layering with z-index as appropriate. When a stack of pieces are moused over, they should spread out so you can see all the pieces below.
Answers:
username_1: I will take care of the score track, but maybe better to make it like UI, not like the board part? Because now player has to scroll down map and hover on track to get all info about scores.
I really don't like current info board that rolls from right on hover, maybe we can put almost all info on screen without even hovering
Here is the basic concept of my vision of this info, what do you think about it?

username_0: This is a good point too... Very possible it could go somewhere else that is better looking yes. The one there is a placeholder still (note the very blue background), I was thinking it would have the decks and small version of each player board visible when collapsed and you can open it on each player to view things. Could be opened by click instead of hover, that is fine. Yours looks like it is just the same thing but on the other side and flipped upside down, maybe I am misunderstanding it?
I will make a separate ticket for that you can do if you like.
username_1: Well, it's not only about moving to right. The idea is just to make field more "solid" to feel:
1. There is only one main thing -- map (now screen is splitted)
2. Nothing is flashing on hover, only clicks (also mobile-friendly)
3. All data is in your view (card count in decks, scores, turn, etc.) -- it means we have several buttons and all info is in different dialogs, not in one big board
Did you play the Root in steam? I think its UI is pretty good, except of some little things
username_0: Ok sounds good! You can work on that too if you like.
I didn't play the Steam one no, looks nice though. Makes me wonder why I bother building this one at all haha it didn't exist long ago when I started this project the first time.
Status: Issue closed
username_0: The score track is able to show scores, but there are some improvements that could be made mostly regarding how multiple players on the same space are displayed.
Since the pieces are stacked, simulated using box-shadows, piece elements are added and removed from the DOM when they enter/leave spaces with other tiles already on them. Also when multiple pieces are on the same space, you cannot tell which players they are other than the top one.
Pieces on the score track should always be displayed as distinct pieces, even when stacked, by moving them vertically and layering with z-index as appropriate. When a stack of pieces are moused over, they should spread out so you can see all the pieces below.
Additionally, the current player's score should always be the top tile when stacked. Currently, it is whatever faction happens to be first in the stack.
username_0: Reopening for now, I feel like something is still missing. We can revisit when players are able to score points normally |
OHIF/Viewers | 172365798 | Title: Dicom wado not loading...
Question:
username_0: 
Answers:
username_0: 
username_1: Hello,
It looks like you need to run the CORS proxy to add the 'Access-Control-Allow-Origin' flag to the header of the request's response. You can take a look at the scripts in the /etc folder, e.g.:
https://github.com/OHIF/Viewers/blob/master/etc/nodeCORSProxy.js
This needs to be run in a separate Terminal using the 'node' command line interface.
The file above will intercept requests to localhost:8043, and send them to localhost:8042, and add the Access-Control-Allow-Origin flag to the header of the response. The configuration value for wadoUriRoot should point to the proxy (i.e. https://github.com/OHIF/Viewers/blob/master/config/orthancDIMSE.json#L7), and the proxy should point to the PACS archive (e.g. Orthanc uses 8042 by default).
You can also read this blog post: http://chafey.blogspot.be/2014/09/working-around-cors.html
We are working to remove the requirement for the separate proxy to make it easier for everyone to get started.
Cheers,
Erik
username_1: We have now moved the CORS proxy internally and so the extra step of starting the proxy with Node is no longer required. Can you test this and let me know if it works for you?
Thanks,
Erik
username_0: hi username_1,
it worked with me.
thank so much,
Status: Issue closed
username_1: Awesome, then I'll close this ticket. |
iRon7/Join-Object | 1078638518 | Title: key expressions
Question:
username_0: Comparison expressions are expressive, e.g.:
```PowerShell
$List1 |Join $List2 -On Name -eq Name
```
is a lot faster than
```PowerShell
$List1 |Join $List2 -Using { $Left.Name -eq $Right.Name }
```
Because in the first situation `$List1` is streamed thought the [pipeline](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_pipelines) and each related property item is looked up in a [hashtable](https://docs.microsoft.com/en-us/powershell/scripting/learn/deep-dives/everything-about-hashtable) that contains the related `$List2` property items. In the later syntax the expression is invoked for each possible combination between each `$List1` item and `$List2` item meaning that the expression is evaluated (`$List.Count * $List2.Count` times).
Therefore the comparison expression should only be used for comparison other than `-eq` (equals) such as `-gt`.
Yet, there might be situations where you might want to build an expression for a key only, that for example:
[How do I find a user by DisplayName in one CSV file given a DistinguishedName is another CSV file?](https://stackoverflow.com/q/70120859/1701026)
Let's presume a forest with two separate domains where I would like to define a relation between a part of the `DN` or a User name and a FirstName/LastName at the other side:
```PowerShell
$Domain1 = ConvertFrom-SourceTable ' # https://www.powershellgallery.com/packages/ConvertFrom-SourceTable
DN FirstName LastName
-- --------- --------
CN=E466097,OU=Sales,DC=Domain1,DC=COM Karen Berge
CN=E000001,OU=HR,DC=Domain1,DC=COM John Doe
CN=E475721,OU=Engineering,DC=Domain1,DC=COM Maria Garcia
CN=E890223,OU=Engineering,DC=Domain1,DC=COM <NAME>
CN=E235479,OU=HR,DC=Domain1,DC=COM Mary Smith
CN=E964267,OU=Sales,DC=Domain1,DC=COM Jeff Smith'
```
```PowerShell
$Domain2 = ConvertFrom-SourceTable '
DN Name
-- ----
CN=E000001,OU=Users,DC=Domain2,DC=COM <NAME>
CN=E235479,OU=Users,DC=Domain2,DC=COM <NAME>
CN=E466097,OU=Users,DC=Domain2,DC=COM <NAME>
CN=E475721,OU=Users,DC=Domain2,DC=COM <NAME>
CN=E890223,OU=Users,DC=Domain2,DC=COM <NAME>
CN=E964267,OU=Users,DC=Domain2,DC=COM <NAME>'
```
### Example 1
```PowerShell
$Domain1 |Join $Domain2 -On { [RegEx]::Match($_.DN, '(?<=CN=)E\d{6}(?=,OU=)') } -Name Domain1,Domain2
```
### Example 2
```PowerShell
$Domain1 |Join $Domain2 -On { $_.FirstName, $_.LastName -Join ' ' } -Eq Name -Name Domain1,Domain2
```
Both Join commands result in:
```
Domain1DN Domain2DN FirstName LastName Name
--------- --------- --------- -------- ----
CN=E466097,OU=Sales,DC=Domain1,DC=COM CN=E466097,OU=Users,DC=Domain2,DC=COM Karen Berge <NAME>
CN=E000001,OU=HR,DC=Domain1,DC=COM CN=E000001,OU=Users,DC=Domain2,DC=COM John Doe <NAME>
CN=E475721,OU=Engineering,DC=Domain1,DC=COM CN=E475721,OU=Users,DC=Domain2,DC=COM Maria Garcia <NAME>
CN=E890223,OU=Engineering,DC=Domain1,DC=COM CN=E890223,OU=Users,DC=Domain2,DC=COM James Johnson <NAME>
CN=E235479,OU=HR,DC=Domain1,DC=COM CN=E235479,OU=Users,DC=Domain2,DC=COM Mary Smith <NAME>
CN=E964267,OU=Sales,DC=Domain1,DC=COM CN=E964267,OU=Users,DC=Domain2,DC=COM Jeff Smith <NAME>
```
And faster than a comparison expression as they still use a hashtable lookup.
Answers:
username_0: This implementation implies a breaking change for expressions as they could either represent a **_key_ expression** (supplied via the `-On` and possibly via the `-Equals` parameter) or a **_comparison_ expression** which now requires and an explicit named `-Using` parameter.
Status: Issue closed
|
ApiGen/ApiGen | 262034854 | Title: The --quiet doesn't work in master
Question:
username_0: In 4.1 it used to suppress all output, in master the progress is still reported even with ``--quiet``:
```console
$ ./vendor/bin/apigen generate --quiet --destination doc src/
Parsing reflections (this may take a while)... done!
Generating documentation...
11/11 [============================] 100% < 1 sec/< 1 sec 14.0 MiB. done!
Your documentation has been generated successfully!
``` |
pocoproject/poco | 81370758 | Title: What time does Poco support redis client.
Question:
username_0: redis is very popular,in internet company.I use Poco write server program.I think Poco provide support redis c++ ,will be more community members.
Answers:
username_1: I think I understand what his issue is about: you would like to add [Redis](http://redis.io/) support to Poco. Please read [contributing guidelines](https://github.com/pocoproject/poco/blob/develop/CONTRIBUTING.md) and act accordingly. Note that Redis does not build out of the box on Windows and, from what I have seen, authors seem to have no interest in that platform.
username_0: Thanks very much. I want to join you very much,but i am a beginner for
c++.In order to join you as soon as possible i will try to study
hard.This is just beginning.
username_2: May be you could give a short description what you would hope from Redis support in Poco?
I think this would help.
Some ideas that come to my mind are:
- A Poco style Redis client library
- A persistent enhancement for the `Poco::Cache` framework
- An additional persistence layer similar to MongoDB
username_0: An additional persistence layer similar to MongoDb.
I've decided to code redis client using c++,and use in producttion platform.
acl::redis seems to be a good reference, i will read this code.
I will pull request, after stability.
username_0: https://github.com/RedisCppTeam/RedisCpp
i will finished it in a short time.i have used poco::Net in this redis client.i will use poco::Mutex and poco::Condition in redis pool(in the future). I have read acl lib and i think its readable and style is a question. Could you give me some suggestion or help. Welcome to join us.
username_3: Redis support is now available in develop branch: https://github.com/pocoproject/poco/tree/develop/Redis
username_0: We have coded a redis-client and pool.All command is suppored ,and use it in production platforms.
<EMAIL>:RedisCppTeam/redis-client.git
How to join Poco Team
Status: Issue closed
|
Jeromeschmidt/CS-1.2-Intro-Data-Structures | 524116751 | Title: Code review for Tweet Generator Submission 1
Question:
username_0: 1. dictionary_words works well and is optimized for run time. Try adding benchmarking to see if there are alternative ways to choose random words. One possibility is to randomly select words as you are reading in to avoid going through the list an additional time.
2. Great work on the different types of histograms. I noticed the listogram tests took a while to run. I would encourage you to look and see if there are any unoptimized pieces of code slowing down the test. I am not noticing anything currently, but I might have missed something.
3. Sampling works as expected.
4. Heroku app is working and looks nice, great job!
5. Cannot get markov chain script to work, even after uncommenting the constructor.
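A minimal sketch of that single-pass idea (reservoir sampling); the function name and file handling are hypothetical, not taken from the submission:
```python
import random

def sample_words(path, k):
    """Pick k uniformly random words in one pass, without a second walk over the list."""
    sample, seen = [], 0
    with open(path) as f:
        for line in f:
            for word in line.split():
                seen += 1
                if len(sample) < k:
                    sample.append(word)  # fill the reservoir first
                else:
                    j = random.randrange(seen)
                    if j < k:            # keep this word with probability k/seen
                        sample[j] = word
    return sample
```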
Answers:
username_1: Thanks for the great feedback. There was an issue with the Markov chain on the website but it is fixed now. I'll take a look at the listogram and see whats causing that!
Status: Issue closed
|
ozum/pg-structure | 743644506 | Title: Type 'DK2' does not satisfy the constraint 'keyof I2'
Question:
username_0: After upgrading to the latest version, I started to get the following error:

Answers:
username_0: Any ideas on what might be causing this, or how to work around it quickly?
username_0: I replaced:
```
_copyMeta<I2 extends any, DK2 extends DK, OK2 extends OK>(source: IndexableArray<I2, DK2, OK2, any>): this;
```
with:
```
_copyMeta<I2 extends any, DK2 extends DK & keyof I2, OK2 extends OK & keyof I2>(source: IndexableArray<I2, DK2, OK2, any>): this;
```
until there will be a proper fix or recommendation.
Do you think it's ok to have it this way?
Status: Issue closed
username_1: @username_0 thank you for your patience, and I'm sorry for being so late. Not a perfect solution, but I relaxed the types of `indexable-array` for TypeScript 4. It seems to be working now. Any further suggestions are appreciated.
Please install the latest version from npm, and it should work with TypeScript 4 as expected. |
runarberg/markdown-it-math | 69426386 | Title: Get rid of enclosing `<math>` tags
Question:
username_0: We shouldn't force people to render math into MathML. If some people want, for some reason, to render math into PNG, SVG, HTML/CSS or whatever, they should be able to. This is possible now by meddling with the renderer rules for `math_inline_open`, `math_inline_close`, `math_block_open` and `math_block_close`, but that amount of hacking shouldn't be necessary for something this simple.
There are tons of good math-rendering tools out there that have a wide variety of rendering options. We should be able to use them out of the box without meddling with the renderer.<issue_closed>
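For context, the workaround being criticized looks roughly like this: overriding the renderer rules by hand (rule names as listed above; the produced markup is illustrative):
```js
const md = require('markdown-it')()
    .use(require('markdown-it-math'));

// Swap the enclosing <math> tags for neutral wrappers so another
// tool (KaTeX, SVG/PNG rendering, ...) can process the content.
md.renderer.rules.math_inline_open  = () => '<span class="math inline">';
md.renderer.rules.math_inline_close = () => '</span>';
md.renderer.rules.math_block_open   = () => '<div class="math block">';
md.renderer.rules.math_block_close  = () => '</div>';
```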
Status: Issue closed |
TeXitoi/structopt | 279343698 | Title: Colored help by default ?
Question:
username_0: Hi,
Would it be possible to have `AppSettings::ColoredHelp` by default, or at least an easy way to enable it without calling into the underlying `clap` ?
Answers:
username_1: ```rust
extern crate clap;
extern crate structopt;
#[macro_use]
extern crate structopt_derive;
use structopt::StructOpt;
#[derive(StructOpt, Debug)]
#[structopt(setting_raw = "clap::AppSettings::ColoredHelp")]
struct Opt {
#[structopt(short = "s")]
speed: bool,
#[structopt(short = "d")]
debug: bool,
}
fn main() {
let opt = Opt::from_args();
println!("{:?}", opt);
}
```
Does it fix your problem?
username_0: Indeed it does, thanks !
I'm sorry, but I didn't understand this from the documentation. Maybe an example would help; do you want me to make a patch?
username_0: Update: when trying that:
```rust
#[derive(StructOpt, Debug)]
#[structopt(name = "flashinit", about = "Boot disk populator.",
setting_raw = "clap::AppSettings::ColoredHelp")]
struct Opt {
#[structopt(help = "Disk device or disk image")]
input: String,
}
```
I've got this:
```rustc
Compiling flashinit v0.1.0 (file:///home/xav/flashinit)
error[E0433]: failed to resolve. Use of undeclared type or module `clap`
--> src/main.rs:16:10
|
16 | #[derive(StructOpt, Debug)]
| ^^^^^^^^^ Use of undeclared type or module `clap`
error: aborting due to previous error
error: Could not compile `flashinit`.
To learn more, run the command again with --verbose.
```
username_1: You have to `extern crate clap;` to expose `clap`.
I'm open to any improvement of the doc, feel free to submit a PR if you are inspired.
I tag this issue to doc, and close it when the doc is improved on this point.
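For reference, the complete working combination from this thread, as a sketch (structopt 0.1-era `setting_raw` syntax, with clap 2 also declared in Cargo.toml):
```rust
extern crate clap; // needed so the raw setting can name clap::AppSettings
extern crate structopt;
#[macro_use]
extern crate structopt_derive;

use structopt::StructOpt;

#[derive(StructOpt, Debug)]
#[structopt(name = "flashinit", about = "Boot disk populator.",
            setting_raw = "clap::AppSettings::ColoredHelp")]
struct Opt {
    #[structopt(help = "Disk device or disk image")]
    input: String,
}

fn main() {
    let opt = Opt::from_args();
    println!("{:?}", opt);
}
```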
username_0: Oh, so I have to also add `clap` as a dependency in `Cargo.toml` and track its version to match the one from structopt to avoid incompatibilities ... that's not an "easy" way anymore.
username_1: `StructOpt` will always use `clap` version 2. It should be included in clap in the near future.
username_0: OK, I've added a note in the doc, see #40
Status: Issue closed
username_1: fixed by #40 |
DevExpress/testcafe | 305482153 | Title: Test duration time with console.log
Question:
username_0: ### Are you requesting a feature or reporting a bug?
Feature
### What is the current behavior?
There is a durationMs field in the JSON reporter, but I would like to see test durations in the console.
### What is the expected behavior?
I would like to see test durations in the console with console.log:
```
Test Name: {test name}
Duration: xxx seconds
```
Status: Issue closed
Answers:
username_1: Hi @username_0,
TestCafe's built-in reporters don't output the duration of each test, but TestCafe allows you to create a custom reporter that outputs test results in whatever format you need. See the [Reporter Plugin](https://devexpress.github.io/testcafe/documentation/extending-testcafe/reporter-plugin/) documentation section for details.
Additionally, TestCafe allows you to use multiple reporters in a single test run (see the [--reporter](https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#-r-namefile---reporter-namefile) option). You can output results from your custom reporter to the console and save the JSON report to a file.
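A minimal sketch of such a reporter (method names follow the reporter plugin docs linked above; only `reportTestDone` does real work here):
```js
export default function () {
    return {
        async reportTaskStart (startTime, userAgents, testCount) {},
        async reportFixtureStart (name, path) {},

        async reportTestDone (name, testRunInfo) {
            const seconds = (testRunInfo.durationMs / 1000).toFixed(1);

            this.write(`Test Name: ${name}`).newline()
                .write(`Duration: ${seconds} seconds`).newline();
        },

        async reportTaskDone (endTime, passed, warnings) {}
    };
}
```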
mina-deploy/mina | 173735223 | Title: asset precompilation is significantly slower
Question:
username_0: I'm not sure why this is, but total deploy times seem to have gone up in mina 1.x
For a quick deploy with a small fix, 1.x is definitely faster, especially if just one file needs recompiling. Before, a small asset fix would take around 200 seconds to deploy because of asset precompilation.
However, we have a number of staging environments for our different developers, so it's not uncommon to deploy the master branch to a staging environment that hasn't been updated in a week or two. That can mean that maybe 40 asset files have changes. I just did a deploy like this, and it took 481 seconds. That means that compiling those 40 asset files takes longer than compiling all of the asset files in my current setup (I have 1285 files in the assets directory, most of which are currently getting compiled when I deploy with mina 0.3.x).
From what I can tell, it looks like each individual asset file compilation is taking significantly longer. I've had a hunch for a while that maybe this is because it's reloading the rails stack for each file? But maybe I'm wrong.
I haven't done extensive testing on this yet, so it's possible it's a rails problem and not a mina one. But I figure it's important to raise here, because if possible we should emulate the 0.3.x behaviour if it can be faster than the current 1.x behaviour in certain situations.
Answers:
username_0: OK here's some timing using a copy of my app
1. initial deploy, including gems/db migrations/assets (most of the time was assets), but not git cloning: 750.23s
2. Add one text file, deploy 24.97s
3. Change one javascript file (causing a recompile of about half of our JS): 59.18s
4. Change one image file 30.23s
5. Change the javascript file back to original contents: 30.13s
For comparison, here's what I had using 0.3.8 to deploy (with all assets precompiling taking the most time). Note that this has been significantly faster since using sprockets 3:
1. Initial deploy: 361.92s
2. Add one text file: 32.68s
3. Change one js file (recompile all assets): 77.87s (note that this probably uses sprockets cache)
4. Remove the text file: 29.67s
So the results are definitely mixed. I'm noticing an overall improvement, except for in the initial deploys. I am suspecting that the sprockets cache is not being used to its full potential in the new setup.
username_0: Some recent deploys, particularly those that update our large application.js file, have been taking up to 150s, which is significantly longer than mina 0.3.8's 77.87s. I'm thinking again that something has regressed (at least speed-wise).
username_1: Both minas do a diff on app/assets/* and vendor/assets/*. The flags of the diff are slightly different, but that shouldn't increase the time.
If the diff returns false, 0.3.8 copied the whole public/assets path from the previous release folder. 1.0.0 skips everything, as the public/assets folder is symlinked by default.
If the diff returns true, 0.3.8 did a whole assets:precompile on an empty public/assets folder, meaning it precompiled every single file even if only one line changed in application.js.
1.0.0 symlinks public/assets and tmp/cache by default. In theory that should reduce the time assets:precompile takes, since it will see that it has all the files in cache and will precompile only the file that had the change.
Either way, both minas do a diff and call assets:precompile. If it is slower, that means assets:precompile itself is slower :(
username_0: Hmm, that does match my current mental model of how it works, but something is off and taking longer. I'll keep trying to isolate it.
It definitely seems like each individual asset is taking longer to compile though. That may be because before I recompiled images every time and they were very fast, and now I'm generally just compiling JS, which takes a long time because of all the require directives.
username_2: In my case it appeared that "bootsnap" was what really was slowing down everything (for 2 minutes on every deploy).
username_3: I agree with @username_2. At first I thought the problem was with the newer version of mina, but the problem lay in the cached files of the bootsnap gem. This answer https://github.com/puma/puma/issues/1717#issuecomment-493920798 helped me solve the slow deploy issue.
Basically, once in a while delete all files whose names start with bootsnap in /tmp/cache/bootsnap* on your server. After clearing these files (in my case once a week in a rake task), deploys take the same time as on `mina 0.3.x`.
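A sketch of that cleanup as a mina task (the task name and paths are assumptions; adjust them to your deploy layout):
```ruby
# config/deploy.rb -- run ad hoc with `mina clear_bootsnap`
task :clear_bootsnap do
  command "find #{fetch(:deploy_to)}/shared/tmp/cache -maxdepth 1 -name 'bootsnap*' -exec rm -rf {} +"
end
```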
username_4: I'll close this issue since there haven't been other complaints about Mina performance, and bootsnap seems to be a likelier suspect (I too had issues with performance until I found out that bootsnap cache [must be purged once in a while](https://github.com/Shopify/bootsnap#usage)). Feel free to reopen the issue if you get new info!
Status: Issue closed
|
SvenEwald/xmlbeam | 227364498 | Title: XBWrite expression creates wrong result#2 'Root/Intermediate[X='Passed']'
Question:
username_0: `Root/Intermediate[X='Passed']` creates
```xml
<Root>
  <Intermediate>Value</Intermediate>
</Root>
```
instead of
```xml
<Root>
  <Intermediate>
    <X>Passed</X>
    Value
  </Intermediate>
</Root>
```
Answers:
username_0: This is a bug. More strangely, `Root/Intermediate[@X='Passed']` works as expected.
username_0: Found the reason: Missing else in BuildDocumentVisitor:390.
Currently writing with predicates is not implemented for single nodes or for paths. This has to be implemented, or this case needs to throw an XBExpressionNotAllowedForWritingException.
username_1: @username_0 Thanks for the quick response and looking into this. I thought it was a little strange so wanted to report it. Keep up the great work!
username_0: My analysis was not finished... the reason for this error is deeper. This is happening:
1. The node is searched and while the search does not find anything, the root element is created.
2. The predicate is applied as a write operation to the root element. So the element X is created and the value of X is set to 'Passed'.
3. The write operation to the root element is performed. Here begins the trouble, because the child element X is removed in this process.
It is not clear what to expect when a write operation to an element is to be done. Should child elements still exist? The `org.w3c.dom.Element.setTextValue()` implementation denies that: if text content is set on an element, all child elements are removed. This would happen to existing child nodes, too.
So, the limitation is like this: elements may have a text value or child elements, but not both.
This is surprising, because XML allows such structures, and reading them is fine.
I need to sleep over this... multiple times ;-)
username_1: Yeah. Seems like a strange point in the xml / Dom spec. I guess intuitively one would expect a root element node with two children: another element node of X followed by a text node of "Value".
Away from a computer at the moment but maybe there's another api method at the node level to create such a structure vs using setTextValue?
It does create questions about where the text should go though. I.e. before the elements or after etc. There's probably an xpath function to select the node position but that requires more work and confusion. Seems like just appending at the time of invocation would work and let the caller order their calls appropriately.
It does seem like the code is close to handling the situation, though.
username_1: Also, I believe the same situation applies in more than just the text-node case. I believe if you try to write a sub-projection to that same root element with the child element created by the predicate, it will wipe out the X element. So basically it seems to always replace rather than append.
I'm not sure if it's worthy of an attribute on XBWrite for append vs. replace?
I think the usual use case for write would be building up a document, so appending seems a reasonable default. And then XBUpdate maybe gets the replace default?
username_0: Fixed in 1.4.11. Setting text content will not delete child nodes any more. I doubt that anyone relied on the previous behaviour. So this should be an improvement in any case.
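To illustrate the fixed behaviour, a projection along the lines of the expression from this issue (interface and method names are hypothetical) now keeps the predicate-created child:
```java
import org.xmlbeam.XBProjector;
import org.xmlbeam.annotation.XBWrite;

public interface RootDocument {
    // Predicate creates <X>Passed</X>; since 1.4.11 the written text
    // no longer deletes that child element.
    @XBWrite("/Root/Intermediate[X='Passed']")
    RootDocument setIntermediateValue(String value);
}

// Usage sketch:
// RootDocument doc = new XBProjector().projectEmptyDocument(RootDocument.class);
// doc.setIntermediateValue("Value");
```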
Status: Issue closed
username_1: Awesome! Thanks for the fast response! |
blairck/claudius | 291105636 | Title: Update Module: GameNode
Question:
username_0: Situation: Module is written to support F&G.
Target: Module is refactored to support checkers instead.
Proposal: Code that is specific to F&G is refactored to checkers. Other code that is game agnostic is left as-is.<issue_closed>
Status: Issue closed |
Railcraft/Railcraft | 396317099 | Title: [1.12.2] Multiblock tank deletes fluid after it's filled
Question:
username_0: **Description of the Bug**
I filled the iron tank full of lava; it deleted the lava from the drum after the tank was filled.
**To Reproduce**
Step 1: Fill the iron tank with lava until it's completely full
Step 2: The iron tank deletes the lava once full
**Expected behavior**
I expected the iron tank to stop accepting lava after it was full
Answers:
username_1: Drum?
username_2: One would assume they were using a drum from Extra Utilities 2 as a source of the fluid.
username_1: And how did they get that fluid into the tank?
username_0: I used a fluiduct from Thermal Expansion
username_0: I was actually using ender tanks for it
username_3: Hi! I built a 3*3*4 iron tank with one valve in beta2. Filling it with oil in creative mode using buckets, the level meter said full after 64 buckets but kept accepting new buckets of oil. The oil is not deleted, but the scale is wrong. After switching to survival mode I can extract the oil I poured in, and the scale won't drop until I get the overflow out.
Furthermore, all tank sizes indicate that they are full after 64 buckets of oil, even the 9*9*8 one.
username_1: So possibly an issue with the fluid widget.
username_4: So this is actually a duplicate of #1567. Moving the talks there.
Status: Issue closed
|
SKLn-Rad/provider_assist | 498077673 | Title: Documentation about BaseView lifecycle
Question:
username_0: Thanks for your lib. I haven't used it yet, but I have a question about its methods. I notice there are 4 methods (`onErrorOccured`, `onEventOccured`, `onModelReady` and `onViewFirstLoad`). Where do they fit in the StatefulWidget lifecycle? Which method comes first?
Status: Issue closed
Answers:
username_1: Sorry for the slow response! I was on holiday :)
I added a flow diagram to the readme which should clear things up! |
ElektraInitiative/libelektra | 108001322 | Title: CMake Error With `man` Target
Question:
username_0: I am getting this error when trying to build Elektra. It appears to be related to adding the dependency for man into [src/libgetenv/CMakeLists.txt](src/libgetenv/CMakeLists.txt) which was done in commit <PASSWORD>.
```
CMake Error at src/libgetenv/CMakeLists.txt:35 (add_dependencies):
Cannot add target-level dependencies to non-existent target "man".
The add_dependencies works for top-level logical targets created by the
add_executable, add_library, or add_custom_target commands. If you want to
add file-level dependencies see the DEPENDS option of the add_custom_target
and add_custom_command commands.
-- Configuring incomplete, errors occurred!
```
Answers:
username_0: This happened when trying to build from scratch (after deleting the contents of the build dir).
Status: Issue closed
username_1: Seems like you have disabled BUILD_DOCUMENTATION, but with ronn installed.
Thanks for reporting!
username_0: Still have the same error. I installed doxygen and it went away...
username_1: Did you get the warning "Doxygen not found, Reference Manual can't be created even though requested with BUILD_DOCUMENTATION."
username_1: I will try to improve that together with the BUILD_PDF issue.
username_0: Yes, I did actually get that warning looking back at my bash history.
Status: Issue closed
|
thingsboard/thingsboard | 861461874 | Title: [I would like to send alarm notifications using the Telegram API. I followed the documentation for sending a temperature alarm using the REST API; the alarms were configured, but I didn't receive the notification from the Telegram bot. Can anyone give me directions?] Send notifications using Telegram Bot
Question:
username_0: **Component**
* UI
* Rule Engine
* Installation
* Generic
**Description**
A clear and concise description of the issue.
**Environment**
* OS: name and version
* ThingsBoard: version
* Browser: name and version |
ably77/dcos-se | 378621217 | Title: got "no match found dcos:secrets:default" at step to grant k8s cluster secret access
Question:
username_0: Just realized it was because of an incompatibility with the zsh shell, which tries to do glob expansion on *.
It can be resolved by quoting the resource that contains the wildcard:
`users grant kubernetes-cluster 'dcos:secrets:default:/kubernetes-cluster/*' full`
Answers:
username_1: Looks like this is something with zsh as mentioned, and not the DC/OS Security CLI itself. Glad you figured it out, and thanks for sharing! Will close this now.
Status: Issue closed
|
kubernetes/kubernetes | 904549920 | Title: [scheduler] deprecates NodeUnschedulable plugin
Question:
username_0: #### What would you like to be added:
Deprecates NodeUnschedulable plugin
#### Why is this needed:
Deprecate Node.Spec.Unschedulable when TaintNodeByCondition is GA'ed (https://github.com/kubernetes/kubernetes/issues/69010).
If a node is set to unschedulable, the node lifecycle controller adds the unschedulable taint:
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/nodelifecycle/node_lifecycle_controller.go#L633
There is already a taint filter, and TaintNodeByCondition has gone GA, so why not delete the NodeUnschedulable plugin to reduce redundant checks?
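For context, a sketch of the taint in question (constant names as defined in k8s.io/api/core/v1):
```go
package main

import v1 "k8s.io/api/core/v1"

// When node.Spec.Unschedulable is true, the node lifecycle controller adds
// this taint, which the existing taint-based filtering already handles.
var unschedulableTaint = v1.Taint{
	Key:    v1.TaintNodeUnschedulable, // "node.kubernetes.io/unschedulable"
	Effect: v1.TaintEffectNoSchedule,
}
```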
Answers:
username_0: /sig scheduling |
HunterJuneau/pinterest | 725123157 | Title: Deploy on Firebase
Question:
username_0: # User Story
As a user, I should be able to use your app on the internet
# Acceptance Criteria
App is deployed using Firebase.
# Dependencies
- #1
# Dev Notes
Deploy using Firebase. |
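A sketch of the usual CLI flow (assumes the Firebase CLI is installed and the build output matches the public directory chosen during init):
```console
$ npm run build          # produce the production bundle
$ firebase login         # authenticate once
$ firebase init hosting  # pick the project and the public directory
$ firebase deploy        # push the site live
```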
project-chip/connectedhomeip | 1062391221 | Title: TvChannelManager Cluster errors
Question:
username_0: #### Problem
* read tv-channel-lineup and current-tv-channel are not implemented yet
* If I add it in my local build with aEncoder.Encode(channelLineupInfo), chip-tool responds with "Wrong TLV type"
* ChangeChannel Request gets no response
* src/app/clusters/tv-channel-server/tv-channel-server.cpp: // TODO: Enable this once struct as param is supported
Answers:
username_0: Logs for `read tv-channel-lineup`:
```
[1637755782706] [2809:25148] CHIP: [EM] Received message of type 0x5 with protocolId (0, 1) and MessageCounter:1 on exchange 56081i
[1637755782707] [2809:25148] CHIP: [EM] Rxd Ack; Removing MessageCounter:1 from Retrans Table on exchange 56081i
[1637755782707] [2809:25148] CHIP: [EM] Removed CHIP MessageCounter:1 from RetransTable on exchange 56081i
[1637755782707] [2809:25148] CHIP: [DMG] ReportDataMessage =
[1637755782707] [2809:25148] CHIP: [DMG] {
[1637755782707] [2809:25148] CHIP: [DMG] AttributeReportIBs =
[1637755782707] [2809:25148] CHIP: [DMG] [
[1637755782707] [2809:25148] CHIP: [DMG] AttributeReportIB =
[1637755782707] [2809:25148] CHIP: [DMG] {
[1637755782707] [2809:25148] CHIP: [DMG] AttributeDataIB =
[1637755782707] [2809:25148] CHIP: [DMG] {
[1637755782707] [2809:25148] CHIP: [DMG] AttributePathIB =
[1637755782707] [2809:25148] CHIP: [DMG] {
[1637755782707] [2809:25148] CHIP: [DMG] Endpoint = 0x1,
[1637755782707] [2809:25148] CHIP: [DMG] Cluster = 0x504,
[1637755782707] [2809:25148] CHIP: [DMG] Attribute = 0x0000_0001,
[1637755782707] [2809:25148] CHIP: [DMG] }
[1637755782707] [2809:25148] CHIP: [DMG]
[1637755782707] [2809:25148] CHIP: [DMG] Data =
[1637755782707] [2809:25148] CHIP: [DMG] {
[1637755782707] [2809:25148] CHIP: [DMG] 0x0 = "postalCo",
[1637755782707] [2809:25148] CHIP: [DMG] 0x1 = "lineup",
[1637755782707] [2809:25148] CHIP: [DMG] 0x2 = "postalCode",
[1637755782707] [2809:25148] CHIP: [DMG] 0x3 = 0,
[1637755782707] [2809:25148] CHIP: [DMG] },
[1637755782707] [2809:25148] CHIP: [DMG] DataVersion = 0x0,
[1637755782707] [2809:25148] CHIP: [DMG] },
[1637755782707] [2809:25148] CHIP: [DMG]
[1637755782707] [2809:25148] CHIP: [DMG] },
[1637755782707] [2809:25148] CHIP: [DMG]
[1637755782707] [2809:25148] CHIP: [DMG] ],
[1637755782707] [2809:25148] CHIP: [DMG]
[1637755782707] [2809:25148] CHIP: [DMG] }
[1637755782707] [2809:25148] CHIP: [ZCL] ReadAttributesResponse:
[1637755782707] [2809:25148] CHIP: [ZCL] ClusterId: 0x0000_0504
[1637755782707] [2809:25148] CHIP: [ZCL] attributeId: 0x0000_0001
[1637755782707] [2809:25148] CHIP: [ZCL] status: Success (0x0000)
[1637755782707] [2809:25148] CHIP: [ZCL] attribute TLV Type: 0x15
[1637755782707] [2809:25148] CHIP: [ZCL] Failed to get value from TLV data for attribute reading response: ../../examples/chip-tool/third_party/connectedhomeip/src/lib/core/CHIPTLVReader.cpp:398: CHIP Error 0x00000026: Wrong TLV type
[1637755782707] [2809:25148] CHIP: [TOO] Default Failure Response: 0x87
[1637755782711] [2809:25148] CHIP: [EM] Piggybacking Ack for MessageCounter:1 on exchange: 56081i
[1637755782711] [2809:25148] CHIP: [IN] Prepared encrypted message 0x145812680 to 0x000000000001B669 of type 0x1 and protocolId (0, 1) on exchange 56081i with MessageCounter:2.
[1637755782711] [2809:25148] CHIP: [IN] Sending encrypted msg 0x145812680 with MessageCounter:2 to 0x000000000001B669 at monotonic time: 1346794 msec
[1637755782711] [2809:25148] CHIP: [DMG] Client[0] moving to [UNINIT]
[1637755782711] [2809:25143] CHIP: [-] ../../examples/chip-tool/third_party/connectedhomeip/zzz_generated/chip-tool/zap-generated/cluster/Commands.h:2054: CHIP Error 0x000000AC: Internal error at ../../examples/chip-tool/commands/common/CHIPCommand.cpp:84
[1637755782711] [2809:25143] CHIP: [TOO] Run command failure: ../../examples/chip-tool/third_party/connectedhomeip/zzz_generated/chip-tool/zap-generated/cluster/Commands.h:2054: CHIP Error 0x000000AC: Internal error
[1637755782737] [2809:25143] CHIP: [CTL] Shutting down the System State, this will teardown the CHIP Stack
[1637755782737] [2809:25143] CHIP: [DL] Inet Layer shutdown
[1637755782737] [2809:25143] CHIP: [DL] BLE shutdown
[1637755782737] [2809:25143] CHIP: [DL] System Layer shutdown
```
Status: Issue closed
username_1: Please file this issue in the Test Harness repo. Closing it here.
username_1: #### Problem
* read tv-channel-lineup, current-tv-channel not implemented yet
* If added in my local with aEncoder.Encode(channelLineupInfo) and the chip-tool will response Wrong TLV type
* ChangeChannel Request has no response
* src/app/clusters/tv-channel-server/tv-channel-server.cpp, // TODO: Enable this once struct as param is supported |
NVIDIA/nvidia-docker | 161908326 | Title: How to connect to a non-default docker daemon? (-H is not working?)
Question:
username_0: We are setting up a small HPC cluster with the following requirements:
- slurm scheduler
- docker available but not forced (not all jobs will be dockerized)
- GPU access for native and docker jobs
- access to a network filesystem (NFS or whatever) from native and docker jobs
- multiple users... without root access!
Following the idea of using multiple docker daemons with user namespaces ([by <NAME>](https://medium.com/@frntn/not-your-grandmother-s-docker-environment-c3c5c1a5aab)), we already have almost everything working... except for GPU access from docker containers.
In [this issue](https://github.com/NVIDIA/nvidia-docker/issues/113#issuecomment-227996349) @username_1 explained a way to use `nvidia-docker` with slurm (thanks Jonathan!).
Although it works with a default docker daemon, I've noted that `nvidia-docker` is not passing the `-H` parameter to the `docker` client, so I cannot connect to any daemon other than the default one.
If we cannot use the `-H` parameter, then I cannot have the multiple daemons, nor multiple users (namespaces), which is a key part of our framework...
My first approach was learning about the magic of `nvidia-docker` and using its internals to create our own reduced and integrated version... but @username_1 is convincing me that `nvidia-docker` could provide better isolation, and probably more extended support... so, I'm trying to integrate the whole tool and not only its magic! ;-)
So, my question is: is `nvidia-docker` able to connect to multiple `docker` daemons on the same host using the `-H` parameter somehow?
I see that I can use the `-l` parameter of the plugin to listen on different ports... but I don't see how to tell the plugin or the client to connect to a specific daemon port.
Any suggestion for achieving our requirements will be welcome!
;-)
Thanks!
Albert
Answers:
username_1: What do you mean by `nvidia-docker` is not passing the -H option?
`nvidia-docker` arguments are passed through to `docker`, so you should be able to use -H.
I think you are referring to the internal `docker` calls failing; if that's the case, you can either use `DOCKER_HOST` instead of -H, or define `NV_DOCKER` with the -H option in it (see [here](https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#description))
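A sketch of what that could look like for a per-user daemon (the socket path is an assumption for the multi-daemon setup described above):
```console
$ # Make the wrapper's internal docker calls target the non-default daemon
$ export NV_DOCKER='docker -H unix:///var/run/docker-user1.sock'
$ # Or rely on DOCKER_HOST, which the docker client honors directly
$ export DOCKER_HOST=unix:///var/run/docker-user1.sock
$ nvidia-docker run --rm nvidia/cuda nvidia-smi
```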
Status: Issue closed
|
HumanDynamics/chapter | 155819473 | Title: Dazza Sections
Question:
username_0: 1. Context
Systems Exemplifying Ethical Precepts - Dazza (list to existing legal systems)
2. Solutions - Dazza
Potential Scenarios and Use Cases
Abstracting Out the Nature and Shape of Needed Methods, Mechanisms and Modes
Solutions Must Cover the Entire Data Lifecycle |
ikedaosushi/tech-news | 413830858 | Title: Female student makes headlines for using a robot to do her winter-break homework
Question:
username_0: Female student makes headlines for using a robot to do her winter-break homework<br>
It has come to light that a high-school girl used a robot to get through the mountain of homework assigned over the long vacation, and the homework-ghostwriting robot is drawing attention on social media. Chinese schoolgirl shamed for using robot to write homework.<br>
https://ift.tt/2IywNvx |