repo_name
stringlengths 4
136
| issue_id
stringlengths 5
10
| text
stringlengths 37
4.84M
|
---|---|---|
ES-DOC/esdoc-cim-v2-schema | 193838407 | Title: Proposed CIM2 changes resulting from November face-to-face
Question:
username_0: During the face-to-face in November 2016 some structural changes to the CIM were discussed.
The general focus of these changes was to delegate more responsibility to the specializations, following the principle that the utility of the CIM lies in the genericity of the schema and the customizability of the specializations.
These changes are currently described at https://docs.google.com/document/d/1xaTWPilOtRCzNysficzB_SAeDOyu4fvLRpDu2s9ZLjo/edit
Please review, and on Monday 9th December 2016 I shall copy the text from the google doc to this issue, which will then be the place to discuss and change these ideas.
Answers:
username_1: Another summary of changes can be seen by comparing the files in https://github.com/ES-DOC/esdoc-cim/tree/master/v2/schema-science-proposal
username_1: These changes have been approved.
Status: Issue closed
|
cypress-io/github-action | 612550002 | Title: `pull_request` vs `push`
Question:
username_0: In our project we will quickly max out our runners if we run each branch. Really, we only want the actions to run when someone submits their pull request.
If we use `pull_request` what difference will we see in the dashboard?
Answers:
username_1: @bahmutov I'm not sure this comment is accurate anymore since the services team merged fixes for recognizing Pull Request builds - it should be able to get the commit SHA for GitHub integration if the comment was referencing that. Maybe you're referring to some other functionality.
username_2: I'm not sure if using `pull_request` is working for my project.
We currently use `push` but are switching to `pull_request`.
When using `push`, on the cypress dashboard each run has the title of the commit.
However, when using `pull_request` I see a title like:
`merge SHA into SHA`
And also the cypress check no longer seems to run correctly against the PR in question.
Anything we can do here?
username_1: Can you clarify what you mean by `cypress check`? What is not running exactly? Are you referring to GitHub Integration to the Dashboard? Are you referring to the GitHub Action itself?
username_2: @username_1 yes, by `cypress check` I mean the GH action itself
It seems to be stuck at `Expected — Waiting for status to be reported`

Happy to contribute to this repo if need be, just not sure where to start. |
gsantner/markor | 424647473 | Title: What does the LinkBox function *actually* do?
Question:
username_0: Aside from being a separate page, what does the LinkBox actually do? I can't find a guide on Markor, and there's nothing UI-wise, so I'm scratching my head here.
Status: Issue closed
Answers:
username_1: It's for "Read it later" & bookmarks.
You can directly share links from browser into it ;).
username_0: Ahh, thanks. Expect to see a PR on that README later...
username_0: @username_1 You can share links to the QuickNote section too. Does having this as a separate function really offer any advantages over using QuickNote? |
woocommerce/woocommerce | 808078097 | Title: Manual New Order Email Sending is not working
Question:
username_0: <!-- This form is for other issue types specific to the WooCommerce plugin. This is not a support portal. -->
**Prerequisites (mark completed items with an [x]):**
- [ ] I have checked that my issue type is not listed here
- [ ] My issue is not a security issue, support request, bug report, enhancement or feature request (Please use the link above if it is).
**Issue Description:**
Since the latest update to Woocommerce i have found a bug.
When I open a customer order and try to send. NEW order notification by chosing New Order and pressing Save and Send email, the email is not sent.
If i chose anything else like CANCELLED OR COMPLETED order Send manual notification email , it works.
I have checked WP Mail logging and actually the email is not being generated to sent.
The logging list is empty.
Answers:
username_1: Hi @username_0,
Thank you for submitting the issue. However, I can’t reproduce it using the steps you provided. Everything is working as expected on my end.
1. First, I added in the sample products as part of the WooCommerce setup and placed an order as a customer
2. Next, I selected "Resend new order notification" from the `Order actions` dropdown:
<img width="331" alt="Screen Shot 2021-02-16 at 3 04 22 PM" src="https://user-images.githubusercontent.com/71906536/108127868-20da8180-7069-11eb-8cc4-501f65692a7b.png">
3. I tested sending the emails both by clicking "Update" on the order and by using the "Action" button (with the caret `>` icon on it) to trigger the email action:
<img width="304" alt="Screen Shot 2021-02-16 at 3 12 22 PM" src="https://user-images.githubusercontent.com/71906536/108128174-95152500-7069-11eb-9ab9-c9e873d07e64.png">
4. I tested the above with the following order statuses: `Pending payment`, `Processing`, `On hold` and received an email in each case, and saw the logs using the [WP Mail Logging](https://wordpress.org/plugins/wp-mail-logging/) plugin:
<img width="1492" alt="Screen Shot 2021-02-16 at 3 03 26 PM" src="https://user-images.githubusercontent.com/71906536/108128388-e2919200-7069-11eb-901e-58214b7351f8.png">
I tested the above with:
* WooCommerce 5.0.0
* WordPress 5.6.1
Additionally, it will help us further troubleshoot if you can provide the following as well, specifically, if this is happening with other plugins deactivated and only WooCommerce is active:
- [ ] I have deactivated other plugins and confirmed this bug occurs when only WooCommerce plugin is active.
**WordPress Environment**
<details>
```
Copy and paste the system status report from **WooCommerce > System Status** in WordPress admin.
```
</details>
Thank you!
Status: Issue closed
username_2: Hi @username_0,
As a part of this repository’s maintenance, I am closing this issue due to inactivity. Please feel free to comment on it in case we missed something. We’d be happy to take another look.
username_3: I can confirm that something is wrong with the "new order" emails. I have problem when i try to trigger them from functions.php
Every WooCommerce email can successfully be sent from functions.php – except the "new order" email.
This does not send an email successfully:
$mailer = WC()->mailer()->get_emails()['WC_Email_New_Order']->trigger( 1161 );
But all of the following lines send an email successfully
$mailer = WC()->mailer()->get_emails()['WC_Email_Cancelled_Order']->trigger( 1161 );
$mailer = WC()->mailer()->get_emails()['WC_Email_Customer_Completed_Order']->trigger( 1161 );
$mailer = WC()->mailer()->get_emails()['WC_Email_Customer_Invoice']->trigger( 1161 );
$mailer = WC()->mailer()->get_emails()['WC_Email_Customer_On_Hold_Order']->trigger( 1161 );
$mailer = WC()->mailer()->get_emails()['WC_Email_Customer_Processing_Order']->trigger( 1161 );
$mailer = WC()->mailer()->get_emails()['WC_Email_Customer_Refunded_Order']->trigger( 1161 );
$mailer = WC()->mailer()->get_emails()['WC_Email_Customer_Reset_Password']->trigger( 1161 );
$mailer = WC()->mailer()->get_emails()['WC_Email_Failed_Order']->trigger( 1161 );
Any idea what i am missing here? I am however able to go to any order in the backend and resend the new order email successfully from the GUI.
username_4: Hi @username_3 - try using the filter to override this behavior `add_filter( 'woocommerce_new_order_email_allows_resend', '__return_true' );`
username_3: @username_4 Thank you, this fixes the issue i am having. |
SunstriderEmu/BugTracker | 574549864 | Title: [NPC][Shattrath] <NAME>
Question:
username_0: **Describe the bug**
An NPC is missing in Aldor Rise which sells arrows/ammunition.
**Expected behavior**
There should be a vendor NPC at Aldor Rise with arrows and ammunition from various reputations for sale.
**Screenshots/videos**
NPC not being there.

**Additional context**
https://wow.gamepedia.com/Marksman_Bova

Answers:
username_0: On another note, I have also noticed that the repair npc beside the innkeeper is also missing but after further research I have found that he has only been added in 2.4 but perhaps for the sake of convenience he could be impemented early.
https://wowwiki.fandom.com/wiki/Technician_Halmaha
username_1: As you can see, he's supposed to spawn in patch 2.4.
We are still in patch 2.0 so working as intended.
Status: Issue closed
username_0: He is supposed to change appearance in 2.4, not spawn. He has been added in 2.0.3.
username_0: It's literally in the picture.
username_1: I read that too but if <NAME> is not implemented right now, neither will Technician Halmaha.
username_0: They are two diffrent NPCs? One is supposed to be there in 2.0.3, the other in 2.4?
username_1: Am I not getting something right because I can read Technician Halmaha implemented in 2.4.2 which is the repair guy next to the innkeeper.
username_0: I was talking about two different NPCs. One, which is <NAME> which sells arrows/ammunition (impemented in 2.0.3) and second which was engineering supplies (repair guy beside innkeep). That one was implemented in 2.4
username_0: Sooo... this shouldn't be closed?
username_1: **Describe the bug**
An NPC is missing in Aldor Rise which sells arrows/ammunition.
**Expected behavior**
There should be a vendor NPC at Aldor Rise with arrows and ammunition from various reputations for sale.
**Screenshots/videos**
NPC not being there.

**Additional context**
https://wow.gamepedia.com/Marksman_Bova

username_1: Yes, just had to go to work at some point 😅
username_0: Bump
username_0: He is here. closing this :]
Status: Issue closed
|
microuser/Asylum | 539432230 | Title: Add Folderize command
Question:
username_0: Allow a user to folderize files at the given path.
Address complexity of a file with no extension by enumerating then de-enumerating if possible.
Status: Issue closed
Answers:
username_0: merged with https://github.com/username_0/Asylum/pull/8 |
jhipster/generator-jhipster | 46057406 | Title: Use lombok in generated project
Question:
username_0: Use projectlombok.org to replace getters and setters in code.
Answers:
username_1: I'd also like to see Lombok added to JHipster. Be that as a default or an option.
Julien, why not have a discussion about this? The "I don't like it" bit sounds more like a tantrum, sorry :)
username_2: @username_1 : it has been already discussed a lot. You can read old ticket here https://github.com/jhipster/generator-jhipster/issues?utf8=%E2%9C%93&q=lombok
I completely agree with "This is just too much trouble compared to what we gain."
But, why not trying to do a module for that ? https://jhipster.github.io/modules/creating-a-module/
username_3: I have seen people not liking the generated DTO part due to mapstruct so adding more stuff like that is not really something I would like as well
username_1: I have read the other topics about this and it sounds like it comes down to personal feelings rather than reason
username_1: Maybe another solution can be found? For example Spring Roo used to store getters/setters in an aspect. Having so much boilerplate code in domain classes is not ideal.
username_4: For stuff like this, what if we add some section in the .yo-rc.json file so power users can use at their own risk, e.g.:
{
"generator-jhipster": {
"jhipsterVersion": "4.3.0",
...
"unrecommended-options": [ <- new section
"lombok", <- extra option 1
"user.mobileNum" <- extra option 2
]
...
}
....
With this approach the menu is simple but power users (most devs) can be tweeked to their hearts delight.
Personally, I will go through 50 menu items if it means I don't have to fiddle with as much code. For example it is so annoying to go change the SQL ports after generation, which used to be there. It is much easier to answer a few extra questions in prompt which I will do one time per project then go edit code for hours.
As far as Modules go, I have not understood how exactly they will manipulate the project... they should be categorized somehow to show different areas that they will add code too. My 2 cents.
username_4: Maybe in the new menu options that lists ALL the modules... it can be made into a categories which include what I am saying above (user would not have to fiddle with .yo-rc.json then too).
username_5: @username_4 That's exactly what modules do. It's also hard to categorize modules because they can be simple or advanced, and can change any part of the project.
For example, https://github.com/hipster-labs/generator-jhipster-entity-audit/ changes entity files and has hooks that run on entity generation, so new entities receive the same benefits without having to re-run the module. This adds to the .yo-rc.json:
```
"generator-jhipster-entity-audit": {
"auditFramework": "custom"
}
```
And the entity generation hook (`.jhipster/modules/jhi-hooks.json`):
```
[
{
"name": "Entity Audit generator",
"npmPackageName": "generator-jhipster-entity-audit",
"description": "Add support for entity audit and audit log page",
"hookFor": "entity",
"hookType": "post",
"generatorCallback": "jhipster-entity-audit:entity"
}
]
```
That hook adds a question to the end of the entity generation process:
```
Reading the JHipster project configuration for your module
? Do you want to enable audit for this entity(Foo)? (Y/n)
```
This is then saved in the entity's JSON file as a new key:
```
"enableEntityAudit": true
```
Most modules update/replace content in files by regex by using [this method from JHipster](https://github.com/jhipster/generator-jhipster/blob/master/generators/generator-base.js#L1138)
This could easily be done for Lombok, the problem is that someone has to take the time to write the module and maintain it.
username_4: What about high level level categories (they would help a lot):
Backend
Model
User
Entities (e.g. where entity audit can go)
REST
Database
Cache
Server Deployment
Containers
Config
Other
Put anything that doesn't fit above here
Front End
Mobile
Ionic
React
Web
Google Maps,
etc.
Other
Put anything that doesnt fit above here
Basically we show file/option structure, so user gets picture and explore all options... now it is list of ~100 items. No one will want to explore that using the menu or find about all the cool stuff they are missing.
username_6: @username_4 generator-jhipster-entity-audit belongs to both frontend and backend categories. I guess modules could have tags.
username_4: OK, it is a rough mockup I made, probably better to leave all categories high level, then you jump one level deep to see all modules under that category. So there will be one "Other" category, or if there becomes too many categories you put them under Other...
username_2: We can't categorize modules because they are out of control.
Everybody can do module and can do what they want in it:
- delete all existing domain files and replace by other
- overwrite with regex
- etc
To answer to the initial question about Lombok, we can't add it to the main generator-jhipster because we have to maintain it in the future. We can't maintain something we don't like...
What I can see in a Lombok module:
- like audit module, adds a question to the end of the entity generation process
- use regex to delete getter / setter
- add lib to pom.xml or gradle file
I don't understand why people don't want to try to do a module for that.
username_3: This is the exact reason we came up with modules. @username_1 @username_4 As the project team we have to make certain choices and here we choose to make maintenance burden less so that we can focus on most required features rather than spend time on stuff like Lombok which has mixed feelings among the dev community. You guys are free to build a module and support this it shouldn't be hard and I agree it will be useful for a lot of people. Even I might use if for some project. So here we make a decision not to add support for Lombok and as project maintainers I believe we have the right to do that. Its nothing personal its just about feature usefulness vs maintenance burden. JHipster has grown a lot and its becoming more difficult for us to maintain it already so I wouldn't want top bloat it any further. I would even consider removing some of the options like Cassandra, Session clustering etc to modules at some point if entire team agrees.
username_4: I fully support you guys and know you'll make the right decisions. Thanks all for clarifying on what Modules purpose and capabilities are. I had not been able to figure this out before even though I read many modules features and all the module docs at the time. I feel really happy about this feature now and maybe will write a couple now. I am impartial on lombok, have not used it before. |
sass/sass | 3802301 | Title: hyphens and underscores are treated the same in variable names
Question:
username_0: If I go to http://sass-lang.com/try.html and enter the following code:
```
$font-size: 20px;
$font_size: 10px;
p {
font-size: $font-size;
}
```
The result is:
```
p {
font-size: 10px; }
```
It seems that the second variable (with the underscore) is overriding the first one (with the hyphen). I'd assume this isn't intentional, but if it is could you please provide the rationale.
Thanks!
Answers:
username_1: As @username_0 points out it would be great to have a definitive reference for what this behaviour is applied to as it's currently unclear /cc https://github.com/sass/libsass/issues/877#issuecomment-74405004
username_2: Wow. That's kinda strange solution.
username_3: It seems odd that this behaviour is intentional. At least there should be a compile error, right? The current behaviour as it stands according to this thread surely can't be the goal?
username_4: The behaviour is odd, coming from programming languages, but I honestly can't say I disagree with the rationale. Personally I'd enforce consistency within projects I oversee, but hey, VB is case insensitive, what can you do.
I don't think it should be a compile error to redefine a variable. A quick search on google and the issues here show no results for "const" either, so I don't think many other people in the world think so either. |
deeplearning4j/deeplearning4j | 282799600 | Title: Tests-before-merge discussion
Question:
username_0: This isn't an issue really, but general discussion.
With libnd4j we have problem: often after PR is merged, we get issues with specific OSes/compilers. I.e. we use linux/macos for development, but issues can be discovered on android or windows msvc for example.
So, maybe it's time to bring back requirement for jenkins successful tests run before merge?
cc @username_2 @username_4 @username_3 @username_1 @sshepel
Answers:
username_1: I'm in favor of this. Breaking changes and non-compilation can certainly be a disruption, and it's not really feasible to get devs to test all platforms manually before each PR is merged.
Automatic testing of upstream repos (nd4j, dl4j, datavec) would be valuable too IMO.
However, testing isn't free (i.e., can slow development): thus I would suggest looking at clearly defined exceptions to this "test before merge" rule. For example, a PR that modifies (Java) tests only, or is simple/isolated enough to not cause problems, might be mergeable immediately.
username_0: oh yea, that should be clearly separable from libnd4j builds in lots of cases.
I.e. we could have a requirement - same branch name everywhere, if you're adding stuff into multiple repos. If branch name is found in libnd4j - rebuild it. If not - then not.
username_0: at the same time, libnd4j changes would be nice to test again nd4j tests as well.
username_2: I agree with this as well..one thing I'm concerned about here that I'd like to fix is long running changes that cause heavy divergence. I'd like to have a way of keeping PRs small. Jenkins is part of that and will help but there's also a workflow discussion to have here.
username_0: Problem i'm often facing these days usually happens even with small PRs.
I.e. @shyrma added few ops last week. Just ops, nothing ground breaking etc. But few hours later i was contacted by @sshepel - turned out that android build was broken. After fast investigation we've found out that std::tgamma and std::lgamma are missing in gcc 4.9 (which is a bug), but we were not aware of this before build was broken. Obviously, we've fixed that, but as you can see - even small PRs can cause a problem on various platforms.
username_3: I agree, the problem is just with getting CI working...
username_4: As far as I'm concerned, I 200% agree with test-pass-before-merge on all projects. I think the CI efforts deserve help and support increase by a good order of magnitude.
username_2: So the other thing here is pull request management. Right now we're focused on new features and we to set some standards in place for stability as well. The CI will fix a lot of that but it's not the only issue.
The eclipse migration will help that somewhat, but there's definitely more to do yet.
username_0: Implemented for libnd4j, and will be implemented soon for monorepo
Status: Issue closed
|
vimwiki/vimwiki | 297892350 | Title: How to efficiently enter links to files?
Question:
username_0: Hi!
Not an issue but a question,
Using window/gvim, is there an effective way to add links to files?
instead of manually typing c:/Blahh/Blahh/ Blahh...
I have bean digging around, mabe using NEARDTree, in NEARDTree you can use the "m" menu and "l' (list) but i couldent fine any way to use that in Vim/Vimwiki.
Thanks!
Answers:
username_1: You can always use Vim's built-in path completion with Ctrl+x Ctrl+f (see `:h compl-filename`). Doesn't work well if the path contains spaces, though (maybe adding a space to 'isfname' helps, but I didn't try that).
username_1: I don't understand your question. What do you expect? What do you want to achieve?
username_0: Say I am maintaining a Wiki for a project I am building.
I can have detailed descriptions, tables, links to websites with relevant information...
Now I have a Data sheet on my PC, and I want to point to that Data sheet.
I would like to have a link to a file that is not a source file.
username_1: OK, you want to "point" to a data sheet. And then what? What keys do you want to press and what would you like to happen? Please be specific. Anyway, you may want to read the complete part here: `:h vimwiki-syntax-links`.
username_0: I guess... Hitting <CR> will open the file?
Or folder?
Like opening a web link?
username_1: Try `[[file:C:\link\to\file.png]]` and read the part of the documentation I mentioned above.
username_0: So, I finally got what I needed working...
Using NEARDTree and This extention:
https://www.codesd.com/item/copy-the-path-file-with-the-nerdtree-vim-plugin.html
I can use NEARDTree to locate the file I want and copy I'ts path in to vim.
Then just replace the \ slashes with / and the link works!
:-)
Status: Issue closed
username_2: If you are using a Mac, this worked for me: `[[file:~/.....]]` the dots represent any path and Vimwiki recognizes the spaces as well. Very handy. |
simplesteph/grpc-go-course | 952737689 | Title: Chapter 4, part 17, Server
Question:
username_0: Hello
I am trying to study your course, but I can not pass a 17th path of the Lesson
First **protoc** format and keys was changed and I was needed to read a documentation for update keys and set the full path for package.
For now I am trying to follow a simple add listener and getting error
server.go
```
package main
import (
"fmt"
"github.com/username_0/go-grp-study/greet/greetpb"
"google.golang.org/grpc"
"log"
"net"
)
type server struct {}
func main() {
fmt.Println("Hello World")
lis, err := net.Listen("tcp", "0.0.0.0:50051")
if err != nil {
log.Fatalf("Failed %v", err)
}
s := grpc.NewServer()
greetpb.RegisterGteetServiceServer(s, &server{})
if err := s.Serve(lis); err != nil {
log.Fatalf("Failed to serve %v", err)
}
}
```
here at greetpb.RegisterGteetServiceServer(s, &server{}) the error that **"Cannot use '&server{}' (type *server) as the type GteetServiceServer Type does not implement 'GteetServiceServer' as some methods are missing: mustEmbedUnimplementedGteetServiceServer()"**
I tried many ways but still not succeed, probably need to update code, but I do not know what to do
Answers:
username_0: as far as greetpb.UnimplementedGteetServiceServer it is a type, this way is working
`var server greetpb.UnimplementedGteetServiceServer`
username_1: Update the struct as follows:
type server struct {
greetpb.UnimplementedGreetServiceServer
} |
kubernetes-sigs/kind | 786200758 | Title: Support for Apple silicon M1
Question:
username_0: I tried installing kind on the new macbook M1 and got below error. I am able to run docker containers on this machine.
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.19.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 139
Command Output: SIGSEGV: segmentation violation
PC=0x0 m=0 sigcode=0
goroutine 1 [running]:
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Answers:
username_1: if the node image is for AMD64 it will likely just crash on an architecture that is not supported - i.e. ARM64.
username_0: Please note that I have the docker tech preview version installed and able to spin up docker instances on macbook M1. The problem is with kind only.
username_2: the docker image you're running is AMD64, the pre-built kind images are not multi-arch yet and it will not be trivial to do so.
I'm working on ARM support this quarter, there's an existing bug tracking this #166
Status: Issue closed
username_2: if you see #166 you can see about obtaining or building an image in the meantime.
username_2: m1 release is out https://github.com/kubernetes-sigs/kind/releases/tag/v0.11.0#contributors |
Open-Systems-Pharmacology/Forum | 460851766 | Title: Use of Caco-2 permeability in PK-Sim
Question:
username_0: Dear all,
Is there a way in PK-Sim to convert the Caco-2 permeability to intestinal permeability in PK-Sim? Basically something like what we have to predict permeability from MW and lipophilicity but using Caco-2 permeability. If this is not the case, I have seen in literature some correlation to get the effective permeability from the Caco-2 permeability. Since such relation has been obtained from patients directly, it is independent from the model and consequently it could be included in PK-Sim.
This is the paper I am referring to:
https://rd.springer.com/article/10.1023%2FA%3A1020483911355
Do you any thought about that? I am very much open to your wisdom!
Cheers,
Donato
Answers:
username_1: Should intestinal transcellular permeability be optimized primarily through parameter identification when developing a p.o. model?
username_2: Dear Donato,
I remember [this discussion](https://github.com/Open-Systems-Pharmacology/Forum/issues/23) which touched some of the questions you raised. It may help you.
Best wishes
André
Status: Issue closed
|
ProjectMOSAIC/mosaic | 49580707 | Title: Deprecate linearModel()?
Status: Issue closed
Question:
username_0: I see that this is located in `smoothers.R`. Likely this means that @dtkaplan uses this for something. Rather than deprecate this, I've modified the documentation a bit. Among other things, the documentation now mentions that similar things can be achieved using `makeFun()` and `lm()`.
Answers:
username_0: I see that this is located in `smoothers.R`. Likely this means that @dtkaplan uses this for something. Rather than deprecate this, I've modified the documentation a bit. Among other things, the documentation now mentions that similar things can be achieved using `makeFun()` and `lm()`. |
home-assistant/core | 912453459 | Title: ISY994 Integration fails in 2021.6.0 and 2021.6.2
Question:
username_0: ### The problem
ISY994 integration reports "Retrying setup: ISY Could not parse response, poorly formatted XML."
All ISY devices report unavailable.
### What is version of Home Assistant Core has the issue?
2021.6.0 and 2021.6.2
### What was the last working version of Home Assistant Core?
2021.5.? (maybe 2021.5.2)
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
ISY994
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/isy994/
### Example YAML snippet
```yaml
Upgraded from 2021.5.? to 2021.6.0 when the problem started. Was working fine forever until upgrade. No changes at ISY end. Tried restarting home assistant, deleting and re-adding isy integration, upgrading to 2021.6.2. ISY is on same network as HA using http. No problem accessing ISY directly from web browser. All ISY devices are Insteon.
```
### Anything in the logs that might be useful for us?
```txt
==== 2021.6.2 startup log
2021-06-05 14:16:31 WARNING (SyncWorker_0) [homeassistant.loader] We found a custom integration hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant
2021-06-05 14:16:31 WARNING (SyncWorker_1) [homeassistant.loader] We found a custom integration keymaster which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant
2021-06-05 14:16:38 WARNING (MainThread) [slixmpp.stringprep] Using slower stringprep, consider compiling the faster cython/libidn one.
2021-06-05 14:16:42 WARNING (MainThread) [slixmpp.basexmpp] Legacy XMPP 0.9 protocol detected.
2021-06-05 14:16:43 ERROR (MainThread) [homeassistant] Error doing job: Task was destroyed but it is pending!
2021-06-05 14:16:43 ERROR (MainThread) [pyisy] ISY Could not parse response, poorly formatted XML.: NetworkResources
2021-06-05 14:16:43 WARNING (MainThread) [homeassistant.components.isy994] Error processing responses from the ISY; device may be busy, trying again later
2021-06-05 14:16:43 WARNING (MainThread) [homeassistant.config_entries] Config entry 'ISY Home (192.168.70.21)' for isy994 integration not ready yet: ISY Could not parse response, poorly formatted XML.; Retrying in background
2021-06-05 14:16:57 ERROR (MainThread) [homeassistant.components.zeroconf] Home Assistant instance with identical name present in the local network
2021-06-05 14:16:58 ERROR (MainThread) [homeassistant.components.google_assistant.http] Request for https://homegraph.googleapis.com/v1/devices:requestSync failed: 403
2021-06-05 14:16:59 ERROR (MainThread) [pyisy] ISY Could not parse response, poorly formatted XML.: NetworkResources
2021-06-05 14:16:59 WARNING (MainThread) [homeassistant.components.isy994] Error processing responses from the ISY; device may be busy, trying again later
2021-06-05 14:17:11 ERROR (MainThread) [pyisy] ISY Could not parse response, poorly formatted XML.: NetworkResources
2021-06-05 14:17:11 WARNING (MainThread) [homeassistant.components.isy994] Error processing responses from the ISY; device may be busy, trying again later
2021-06-05 14:17:33 ERROR (MainThread) [pyisy] ISY Could not parse response, poorly formatted XML.: NetworkResources
2021-06-05 14:17:33 WARNING (MainThread) [homeassistant.components.isy994] Error processing responses from the ISY; device may be busy, trying again later
2021-06-05 14:17:38 ERROR (MainThread) [homeassistant.components.google_assistant.http] Request for https://homegraph.googleapis.com/v1/devices:reportStateAndNotification failed: 403
2021-06-05 14:18:14 ERROR (MainThread) [pyisy] ISY Could not parse response, poorly formatted XML.: NetworkResources
2021-06-05 14:18:14 WARNING (MainThread) [homeassistant.components.isy994] Error processing responses from the ISY; device may be busy, trying again later
```
### Additional information
=== More info on the above log errors
Error processing responses from the ISY; device may be busy, trying again later
2:18:14 PM – (WARNING) Universal Devices ISY994 - message first occurred at 2:16:43 PM and shows up 5 times
ISY Could not parse response, poorly formatted XML.: NetworkResources
2:18:14 PM – (ERROR) /usr/local/lib/python3.8/site-packages/pyisy/networking.py - message first occurred at 2:16:43 PM and shows up 5 times
Request for https://homegraph.googleapis.com/v1/devices:requestSync failed: 403
2:17:38 PM – (ERROR) Google Assistant - message first occurred at 2:16:58 PM and shows up 2 times
Home Assistant instance with identical name present in the local network
2:16:57 PM – (ERROR) Zero-configuration networking (zeroconf)
Config entry 'ISY Home (192.168.70.21)' for isy994 integration not ready yet: ISY Could not parse response, poorly formatted XML.; Retrying in background
2:16:43 PM – (WARNING) config_entries.py
Error doing job: Task was destroyed but it is pending!
2:16:43 PM – (ERROR) runner.py
Status: Issue closed
Answers:
username_0: I did a wireshark capture and the problem is with ISY. I don't know why the problem didn't show up prior to 2021.6.0.
It occurred when ISY994 integration does a GET /rest/networking/resources. If the name of the resource has an ampersand in it, ISY does not properly escape it, causing an XML error. Removing the ampersand from the resource name fixes the problem.
username_1: Please report the missing escaping to UDI so they can adjust the ISY firmware where needed. |
kevinzg/facebook-scraper | 1077433910 | Title: failed install facebook-scraper because microsoft visual c++
Question:
username_0: When i use `pip install facebook-scraper` I got an error
` Running setup.py install for lz4 ... error
ERROR: Command errored out with exit status 1:
command: 'D:\Python\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\W I N D O W S\\AppData\\Local\\Temp\\pip-install-k2gosvx2\\lz4_4ce3e8a4711f44f5836ec8e18cd4944a\\setup.py'"'"'; __file__='"'"'C:\\Users\\W I N D O W S\\AppData\\Local\\Temp\\pip-install-k2gosvx2\\lz4_4ce3e8a4711f44f5836ec8e18cd4944a\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\W I N D O W S\AppData\Local\Temp\pip-record-pmavxuma\install-record.txt' --single-version-externally-managed --compile --install-headers 'D:\Python\Include\lz4'
cwd: C:\Users\W I N D O W S\AppData\Local\Temp\pip-install-k2gosvx2\lz4_4ce3e8a4711f44f5836ec8e18cd4944a\
Complete output (20 lines):
WARNING: The wheel package is not available.
WARNING: The wheel package is not available.
WARNING: The wheel package is not available.
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.9
creating build\lib.win-amd64-3.9\lz4
copying lz4\version.py -> build\lib.win-amd64-3.9\lz4
copying lz4\__init__.py -> build\lib.win-amd64-3.9\lz4
creating build\lib.win-amd64-3.9\lz4\block
copying lz4\block\__init__.py -> build\lib.win-amd64-3.9\lz4\block
creating build\lib.win-amd64-3.9\lz4\frame
copying lz4\frame\__init__.py -> build\lib.win-amd64-3.9\lz4\frame
creating build\lib.win-amd64-3.9\lz4\stream
copying lz4\stream\__init__.py -> build\lib.win-amd64-3.9\lz4\stream
running build_ext
building 'lz4._version' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'D:\Python\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\W I N D O W S\\AppData\\Local\\Temp\\pip-install-k2gosvx2\\lz4_4ce3e8a4711f44f5836ec8e18cd4944a\\setup.py'"'"'; __file__='"'"'C:\\Users\\W I N D O W S\\AppData\\Local\\Temp\\pip-install-k2gosvx2\\lz4_4ce3e8a4711f44f5836ec8e18cd4944a\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\W I N D O W S\AppData\Local\Temp\pip-record-pmavxuma\install-record.txt' --single-version-externally-managed --compile --install-headers 'D:\Python\Include\lz4' Check the logs for full command output.`
in my control panel it already use Microsoft visual c++ 2015-2022 Redistributable . I am using Pycharm 2021.2.3 and Windows 10
Answers:
username_1: the package wheel is currently broken on Windows.
Install facebook-scraper==0.4.9
username_2: Duplicate of https://github.com/kevinzg/facebook-scraper/issues/591 |
JuliaStats/Distributions.jl | 741744178 | Title: LoadError
Question:
username_0: Hi,
I'm having problems loading your package.
I've just updated `Distributions` and when I run
`using Distributions`
I have the following error:
```
[ Info: Precompiling Distributions [31c24e10-a181-5473-b8eb-7969acd0382f]
ERROR: LoadError: LoadError: InitError: could not load library "C:\Users\giada\.julia\artifacts\4df2c4eff2e0bc0677ed8aeaee414db028b73021\bin\libatomic-1.dll"
The specified module could not be found.
Stacktrace:
[1] dlopen(::String, ::UInt32; throw_error::Bool) at C:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\Libdl\src\Libdl.jl:109
[2] dlopen(::String, ::UInt32) at C:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\Libdl\src\Libdl.jl:109
[3] macro expansion at C:\Users\giada\.julia\packages\JLLWrappers\KuIwt\src\products\library_generators.jl:61 [inlined]
[4] __init__() at C:\Users\giada\.julia\packages\CompilerSupportLibraries_jll\790hI\src\wrappers\x86_64-w64-mingw32-libgfortran5.jl:12
[5] _include_from_serialized(::String, ::Array{Any,1}) at .\loading.jl:697
[6] _require_search_from_serialized(::Base.PkgId, ::String) at .\loading.jl:782
[7] _require(::Base.PkgId) at .\loading.jl:1007
[8] require(::Base.PkgId) at .\loading.jl:928
[9] require(::Module, ::Symbol) at .\loading.jl:923
[10] include(::Function, ::Module, ::String) at .\Base.jl:380
[11] include(::Module, ::String) at .\Base.jl:368
[12] top-level scope at C:\Users\giada\.julia\packages\JLLWrappers\KuIwt\src\toplevel_generators.jl:170
[13] include(::Function, ::Module, ::String) at .\Base.jl:380
[14] include(::Module, ::String) at .\Base.jl:368
[15] top-level scope at none:2
[16] eval at .\boot.jl:331 [inlined]
[17] eval(::Expr) at .\client.jl:467
[18] top-level scope at .\none:3
during initialization of module CompilerSupportLibraries_jll
in expression starting at C:\Users\giada\.julia\packages\OpenSpecFun_jll\Xw8XK\src\wrappers\x86_64-w64-mingw32-libgfortran5.jl:4
in expression starting at C:\Users\giada\.julia\packages\OpenSpecFun_jll\Xw8XK\src\OpenSpecFun_jll.jl:8
ERROR: LoadError: Failed to precompile OpenSpecFun_jll [efe28fd5-8261-553b-a9e1-b2916fc3738e] to C:\Users\giada\.julia\compiled\v1.5\OpenSpecFun_jll\TDl1L_srmfK.ji.
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] compilecache(::Base.PkgId, ::String) at .\loading.jl:1305
[3] _require(::Base.PkgId) at .\loading.jl:1030
[4] require(::Base.PkgId) at .\loading.jl:928
[5] require(::Module, ::Symbol) at .\loading.jl:923
[6] include(::Function, ::Module, ::String) at .\Base.jl:380
[7] include(::Module, ::String) at .\Base.jl:368
[8] top-level scope at none:2
[9] eval at .\boot.jl:331 [inlined]
[10] eval(::Expr) at .\client.jl:467
[11] top-level scope at .\none:3
in expression starting at C:\Users\giada\.julia\packages\SpecialFunctions\LC8dm\src\SpecialFunctions.jl:3
ERROR: LoadError: Failed to precompile SpecialFunctions [276daf66-3868-5448-9aa4-cd146d93841b] to C:\Users\giada\.julia\compiled\v1.5\SpecialFunctions\78gOt_srmfK.ji.
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] compilecache(::Base.PkgId, ::String) at .\loading.jl:1305
[3] _require(::Base.PkgId) at .\loading.jl:1030
[4] require(::Base.PkgId) at .\loading.jl:928
[5] require(::Module, ::Symbol) at .\loading.jl:923
[6] include(::Function, ::Module, ::String) at .\Base.jl:380
[7] include(::Module, ::String) at .\Base.jl:368
[8] top-level scope at none:2
[9] eval at .\boot.jl:331 [inlined]
[10] eval(::Expr) at .\client.jl:467
[11] top-level scope at .\none:3
in expression starting at C:\Users\giada\.julia\packages\StatsFuns\CXyCV\src\StatsFuns.jl:6
ERROR: LoadError: Failed to precompile StatsFuns [4c63d2b9-4356-54db-8cca-17b64c39e42c] to C:\Users\giada\.julia\compiled\v1.5\StatsFuns\530lR_srmfK.ji.
[Truncated]
[3] _require(::Base.PkgId) at .\loading.jl:1030
[4] require(::Base.PkgId) at .\loading.jl:928
[5] require(::Module, ::Symbol) at .\loading.jl:923
[6] include(::Function, ::Module, ::String) at .\Base.jl:380
[7] include(::Module, ::String) at .\Base.jl:368
[8] top-level scope at none:2
[9] eval at .\boot.jl:331 [inlined]
[10] eval(::Expr) at .\client.jl:467
[11] top-level scope at .\none:3
in expression starting at C:\Users\giada\.julia\packages\Distributions\HjzA0\src\Distributions.jl:3
ERROR: Failed to precompile Distributions [31c24e10-a181-5473-b8eb-7969acd0382f] to C:\Users\giada\.julia\compiled\v1.5\Distributions\xILW0_srmfK.ji.
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] compilecache(::Base.PkgId, ::String) at .\loading.jl:1305
[3] _require(::Base.PkgId) at .\loading.jl:1030
[4] require(::Base.PkgId) at .\loading.jl:928
[5] require(::Module, ::Symbol) at .\loading.jl:923
```
Am I the only one experiencing this issue?
Answers:
username_1: Hi, I found the following for you: https://discourse.julialang.org/t/unable-to-automatically-install-compilersupportlibraries/36669
username_0: Thank you for your support.
I think I had some problems with the last version of Julia.
I uninstalled and reinstalled it several times and now it works.
I'll close the issue.
Status: Issue closed
|
cginternals/gloperate | 251861129 | Title: Double initialization of render stage
Question:
username_0: This call order is fine:
```
m_canvas->setOpenGLContext();
m_canvas->setRenderStage(...);
```
However, if called the other way around (as a result of my solution to #393), the render stage is initialized twice: once during `setOpenGLContext()` and once during the first call to `render()` (without intermediate deinitialization).
Solution ideas:
- add m_contextInitialized flag to Canvas
- Stage stores a pointer to the context it has been initialized with that can be checked by the Canvas
Answers:
username_0: I like the second idea because it just so happens that I have a Stage that needs this information anyway ;) |
jlippold/tweakCompatible | 413714474 | Title: `IconSupport` working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.chpwn.iconsupport",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.chpwn.iconsupport",
"deviceId": "iPhone10,2",
"url": "http://cydia.saurik.com/package/com.chpwn.iconsupport/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": true,
"packageName": "IconSupport",
"category": "Development",
"repository": "BigBoss",
"name": "IconSupport",
"installed": "1.11.0-1",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.",
"id": "com.chpwn.iconsupport",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Support library for safe icon tweaks.",
"latest": "1.11.0-1",
"author": "<NAME> (ashikase)",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": "PM me on reddit /u/username_0 for help! It does work by changing compatible firmware"
}
```<issue_closed>
Status: Issue closed |
boostorg/hana | 125422729 | Title: Possible GCC 5.2.1 issue?
Question:
username_0: Hello,
first of all I tried to check whether a similar bug already exists, but apparently I couldn't find any. We have a gcc 5.2.1 on our redhat 6 machine which -according to Redhat- it should be 100% c++14 compliant. However when compiling the simplest hana example:
#include <boost/hana.hpp>
int main() { return 0; }
I'm getting two errors:
include/boost/hana/fwd/optional.hpp:330:29
error: explicitly defaulted function 'constexpr boost::hana::optional<>& boost::hana::optional<>::operator=(const boost::hana::optional<>&)' cannot be declared as constexpr because the implicit declaration is not constexpr:
constexpr optional& operator=(optional const&) = default;
include/boost/hana/fwd/optional.hpp:33129
error: explicitly defaulted function âconstexpr boost::hana::optional<>& boost::hana::optional<>::operator=(boost::hana::optional<>&&)â cannot be declared as constexpr because the implicit declaration is not constexpr:
constexpr optional& operator=(optional&&) = default;
I have tried to comment out those two lines in optional.hpp and finally I was able to compile few hana examples (directly from your documentation).
329 // 5.3.3, Assignment
330 // constexpr optional& operator=(optional const&) = default;
331 // constexpr optional& operator=(optional&&) = default;
Is this an already reported GCC issue (couldn't find any reference to something like that )or is this a new one you weren't aware of?
Thanks,
Luca
Answers:
username_1: Luca, having had the same problem as a Hana user myself, I'd like to give a quick answer for you to try a few things.
Unfortunately, Redhat is promising a bit too much. GCC has quite a few issues with C++14 compliance. I'd like to refer you to the [Hana Wiki](https://github.com/boostorg/hana/wiki/General-notes-on-compiler-support), where those issues are listed directly below the table.
You may also run the tests provided with Hana and then reduce your include statements to those features which are working. Thus, don't do an `#include <boost/hana.hpp>`, but just include the very minimum you require. Running the tests with `cmake` is well documented in the wiki and the [manual](http://boostorg.github.io/hana/).
Also unfortunately, and as far as I know, Redhat Enterprise 6 doesn't provide an up to date Clang compiler package (Clang >=3.5) out of the box. But Google might help you out there. If you want to dig deep, see the previous issues #167 and #168.
Good luck, Markus.
username_2: Luca,
Markus is right in saying that GCC is not fully C++14 compliant. Being compliant means being able to compile without segfaulting on most non-trivial examples, but unfortunately GCC is not quite there yet.
Regarding the exact error that you encountered; I am aware of it, and I even have a fix for it (it's actually a single thing that triggers two error messages). However, there are so many other problems with GCC that it is not really worth doing any workaround to accommodate that compiler for now.
I regularly monitor GCC's progress in terms of C++14; I compile Hana and take the time to generate useful bug reports to help them reach their goal of C++14 conformance, and enable Hana on GCC. But it seems like they are not there yet. Believe me, we are a couple of people waiting for Hana to work on GCC, which is one of the most widely used compilers, after all.
For now, the best you can do if you really want to use Hana is to use Clang. It should be fairly easy to install Clang through a package manager on Linux. On Ubuntu, for example, I think `apt-get clang` would work out of the box. I understand that using a different compiler might not be an option, though. In all cases, I would suggest putting some pressure by signifying to RedHat that you want the following bugs fixed:
- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67364
- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68754
- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67164
- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69059
- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69060
__Especially__ if you are paying for entreprise support (which I don't know if you are), they tend to be much more responsive than to non-paying customers. I'll close this now; good luck!
Status: Issue closed
username_0: First of all thanks, I completely missed that table in the wiki with all the gcc open issues.
Unfortunately we cannot move our code base to clang, however we do pay for enterprise support so I'll send those bugs to our Redhat support.
username_3: Hello Guys,
Please, any fix on this issue ? I use gcc 7.2.0 and I have the same issue.
Thank you.
username_2: @username_3 Can you please post a minimal complete reproduction on GCC 7.2.0? I can't reproduce: https://wandbox.org/permlink/wIJlMZNUyLMJ5Fiu
username_3: Hi Idionne.
I'm sorry. I was using gcc 5.1.x. I found out that Hana did not support gcc versions lower than 6.0.0. I now moved to gcc 7.2.0. The issue is no more. Thanks. |
Eg3-git/wad2project | 849269016 | Title: Pictures are not uploaded
Question:
username_0: Uploading pictures via forms is not working for some reason. I can't figure out why, but I think it has to do with views.py
Answers:
username_1: Do you have media dir. created, if so, is it at correct level ?
username_0: Weird. It's loading the default images for user profiles and movie covers but I cannot change them. But if it's my issue maybe I've done something wrong.
Status: Issue closed
username_1: Uploading pictures via forms is not working for some reason. I can't figure out why, but I think it has to do with views.py
username_1: You were right there was a problem with edit functions. Good call.
Status: Issue closed
|
farrokhi/dnsdiag | 171595335 | Title: dnseval: long ttl values -> column shifting
Question:
username_0: Probably not that often, but with long ttl values, the flags column is shifting:
<pre><code>
weberjoh@jw-nb12-lx:~/dnsdiag$ ./dnseval.py -f ../dns-servers long.weberdns.de
server avg(ms) min(ms) max(ms) stddev(ms) lost(%) ttl flags
---------------------------------------------------------------------------------------------------------------
172.16.31.10 10.779 4.351 19.984 6.449 %0 86400 QR -- -- RD RA -- --
172.16.31.10 22.526 8.085 29.851 8.051 %0 86400 QR -- -- RD RA -- --
fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b 6.026 3.694 19.756 4.954 %0 86400 QR -- -- RD RA -- --
fc00:db20:35b:7399::5 17.062 8.387 30.604 9.128 %0 86400 QR -- -- RD RA -- --
8.8.8.8 27.651 15.683 50.579 11.170 %0 21599 QR -- -- RD RA AD --
8.8.4.4 19.400 13.877 35.166 6.893 %0 21599 QR -- -- RD RA AD --
2001:fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b 25.064 14.420 82.547 20.339 %0 21599 QR -- -- RD RA AD --
20fd00:c2b6:b24b:be67:2827:688d:e6a1:6a3b 18.189 14.976 22.198 2.553 %0 21598 QR -- -- RD RA AD --
208.67.222.222 6.718 4.117 14.405 3.156 %0 604800 QR -- -- RD RA -- --
208.67.220.220 4.924 4.095 7.297 1.061 %0 604800 QR -- -- RD RA -- --
ns1-v4.weberdns.de 7.293 6.707 10.805 1.275 %0 2592000 QR AA -- RD -- -- --
ns1-v6.weberdns.de 4.897 4.636 5.647 0.395 %0 2592000 QR AA -- RD -- -- --
ns2.weberdns.de 11.860 10.861 15.548 1.640 %0 2592000 QR AA -- RD -- -- --
ns3.weberdns.de 26.325 24.307 29.289 1.316 %0 2592000 QR AA -- RD -- -- --
int-dns.username_0.net 3.182 2.603 6.699 1.246 %0 604800 QR -- -- RD RA AD --
192.168.110.1 2.595 2.279 4.239 0.587 %0 604800 QR -- -- -- -- -- --
192.168.7.1 7.029 5.949 12.397 1.953 %0 86400 QR -- -- RD RA -- --
192.168.7.5 15.036 12.926 26.464 4.220 %0 2591999 QR -- -- RD RA AD --
</code></pre>
Status: Issue closed
Answers:
username_0: Great. Thanks!
<pre><code>weberjoh@jw-nb12-lx:~/dnsdiag$ ./dnseval.py -f ../dns-servers long.weberdns.de
server avg(ms) min(ms) max(ms) stddev(ms) lost(%) ttl flags
------------------------------------------------------------------------------------------------------------------
172.16.31.10 12.173 4.272 22.078 7.196 %0 86400 QR -- -- RD RA -- --
172.16.31.10 19.509 8.341 32.537 9.565 %0 86400 QR -- -- RD RA -- --
2003:fc00:e968:6179::de52:7100 6.646 3.757 20.490 5.112 %0 86400 QR -- -- RD RA -- --
2003:fc00:e968:6179::de52:7100 16.429 8.107 34.164 10.130 %0 86400 QR -- -- RD RA -- --
8.8.8.8 28.091 14.086 54.885 13.151 %0 21598 QR -- -- RD RA AD --
8.8.4.4 46.456 13.659 283.566 83.850 %0 21599 QR -- -- RD RA AD --
2001:4fdf8:f53e:61e4::18 16.541 12.793 20.857 3.679 %0 21599 QR -- -- RD RA AD --
2001:4fd00:c2b6:b24b:be67:2827:688d:e6a1:6a3b 20.540 14.270 48.589 10.262 %0 21599 QR -- -- RD RA AD --
208.67.222.222 7.953 3.770 14.538 4.467 %0 604800 QR -- -- RD RA -- --
208.67.220.220 5.823 4.092 10.801 2.694 %0 604800 QR -- -- RD RA -- --
ns1-v4.weberdns.de 7.780 6.166 16.674 3.136 %0 2592000 QR AA -- RD -- -- --
ns1-v6.weberdns.de 4.913 4.649 5.635 0.385 %0 2592000 QR AA -- RD -- -- --
ns2.weberdns.de 10.842 9.626 11.289 0.445 %0 2592000 QR AA -- RD -- -- --
ns3.weberdns.de 26.398 25.695 28.483 0.929 %0 2592000 QR AA -- RD -- -- --
int-dns.username_0.net 2.830 2.509 3.720 0.344 %0 158437 QR -- -- RD RA AD --
192.168.110.1 2.459 2.043 4.271 0.650 %0 158437 QR -- -- -- -- -- --
192.168.7.1 78.437 59.847 143.988 24.270 %0 86399 QR -- -- RD RA -- --
192.168.7.5 63.335 48.498 82.268 11.418 %0 2591999 QR -- -- RD RA AD --
</code></pre> |
rodrigodiasnoronha/chika-fujiwara-bot | 704631175 | Title: Fazer comandos de help em cada comando existente no BOT
Question:
username_0: Este comando de help em cada comando do BOT seria de utilidade pois, caso o usuário tivesse dúvida sobre algum comando, ele pode vir a digita-lo e entender melhor o comando. Funcionaria como no exemplo abaixo.
Leve em conta o comando transferir, eu quero transferir uma certa quantia para algum usuário, logo digito:
`.tr 1200 @user`
O comando de help poderia ser usado como:
`.tr help`
Logo, deveria ser imprimido algo como:
`.tr <valor> @user`<issue_closed>
Status: Issue closed |
pennersr/django-allauth | 50073440 | Title: google redirct uri mis match
Question:
username_0: Hi I am facing redirect_uri_mismatch in django all auth I have given this url in google console setting
The redirect URI in the request: http://localhost:8000/accounts/social/google/login/callback/ did not match a registered redirect URI.
I have this value in my database google setting
http://localhost:8000/accounts/social/google/login/callback/
but why this is error is comming
Kindly help me
Status: Issue closed
Answers:
username_2: Hi
I'm facing a similar problem only in my case it's soundcloud.
My url is of the form /accounts/<provider>/login/callback
username_1: Should be: /accounts/google/login/callback/
username_2: Yeah, so it's /accounts/soundcloud/login/callback/ but I keep getting Social Network Login Failure.
My tokens match and my callback url in soundcloud is correct.
username_3: yup I get this too
username_3: it started happenign after switching to SSL, but I added https to all my Site objects and my redirect uris
username_4: This worked for me :
1. Go to https://console.developers.google.com
2. Add without port http://127.0.0.1/accounts/google/login/callback/
3. Also Add http://localhost/accounts/google/login/callback/
4. worked for me
 |
kezong/fat-aar-android | 469638333 | Title: libs中引用的jar包中包含assets目录,最终的libs中的jar包中丢失了assets目录
Answers:
username_1: 在jar包所在的build.gradle将minifyEnabled打开,assets会保留在classes.jar中,你可以试试。
username_2: 但是如果不想开启 minifyEnabled ,有其他办法可以将ibs中引用的jar包中包含assets目录打包的时候保留么
username_3: 请问这个解决了吗,现在库里面引用了一个jar,jar里面包含assets目录,但是最终打包出来的aar里面没有jar包中的assets
username_1: jar包中的assets会合并至classes.jar中,并不在jar中,你们仔细看看。
Status: Issue closed
username_1: 1.3.1支持了该特性,试试最新版本。 |
osbuild/osbuild | 892704124 | Title: Question: May we add support for Rocky Linux?
Question:
username_0: I am part of the testing team for Rocky Linux, a replacement for CentOS currently in development. We have a release candidate [here](https://rockylinux.org/download). Currently osbuild will not start in Rocky Linux, despite starting in RHEL, see the forum post [here](https://forums.rockylinux.org/t/can-not-use-cockpit-image-builder/2595). I was wondering if we could add support for Rocky Linux? I am willing to provide the patch.
Answers:
username_1: Hi, thanks for the issue report. Let's keep this discussion in one place in the composer repo as osbuild doesn't have any knowledge of the distro definitions.
*Closed as a duplicate of: https://github.com/osbuild/osbuild-composer/issues/1411*
Status: Issue closed
username_2: Well, osbuild has distro-specific runners so the issue is also here. So I think that the more specific question is: How can we provide runners for all RHEL rebuilds?
CentOS Linux 8, CentOS Stream 8 and Rocky Linux 8\[1\] all define `ID_LIKE="rhel fedora"` in `/etc/os-release` so we might be able to provide a generic rhel-like runner? Or maybe I'm just making this more complex than needed and we should just ship specific runners for the rebuilds. @username_4 thoughts?
\[1\]: https://git.rockylinux.org/original/rpms/rocky-release/-/blob/r8/SPECS/rocky-release.spec#L130
username_2: I am part of the testing team for Rocky Linux, a replacement for CentOS currently in development. We have a release candidate [here](https://rockylinux.org/download). Currently osbuild will not start in Rocky Linux, despite starting in RHEL, see the forum post [here](https://forums.rockylinux.org/t/can-not-use-cockpit-image-builder/2595). I was wondering if we could add support for Rocky Linux? I am willing to provide the patch.
username_3: We could just use `ID_LIKE` if we cannot find a runner. But given that this is really just about adding symlinks, I wouldn't put too much effort in. If we end up with huge amount of symlinks, we can always think of such consolidations afterwards, right?
username_4: I wonder how this fits into the (old) and now again new idea of not having specific runners for each distro version (since it breaks osbuild on branch) but auto-detection of the best possible runners with specific "overrides" for distros. We could there just honour an `ID_LIKE` as @username_3 suggested and then use the rhel/centos runners.
username_1: Right, sorry for too early closing.
username_5: Hi all,
Is there a public roadmap for this? We were looking at implementing osbuild/osbuild-composer and hit this roadblock. CentOS 8 is reaching EOL within the month, and we'd like to have zero CentOS 8 installs by then.
Cheers
username_4: We (Image Builder team) have no immediate plan to address this, but would happily accept patches.
username_1: There is an ongoing effort to change the way in which we define distros. The current way is not exactly sustainable. I suggest applying downstream patches for now. |
nrwl/nx | 1165895356 | Title: Compile failure with Next app using typescript lib
Question:
username_0: Node : 16.13.2
OS : darwin arm64
yarn : 1.22.17
nx : 13.8.7
@nrwl/angular : undefined
@nrwl/cli : 13.8.7
@nrwl/cypress : 13.8.7
@nrwl/detox : undefined
@nrwl/devkit : 13.8.7
@nrwl/eslint-plugin-nx : 13.8.7
@nrwl/express : 13.8.7
@nrwl/jest : 13.8.7
@nrwl/js : 13.8.7
@nrwl/linter : 13.8.7
@nrwl/nest : undefined
@nrwl/next : 13.8.7
@nrwl/node : 13.8.7
@nrwl/nx-cloud : undefined
@nrwl/react : 13.8.7
@nrwl/react-native : undefined
@nrwl/schematics : undefined
@nrwl/storybook : 13.8.7
@nrwl/tao : 13.8.7
@nrwl/web : 13.8.7
@nrwl/workspace : 13.8.7
typescript : 4.5.2
rxjs : 6.6.7
---------------------------------------
Community plugins:
``` |
quasarframework/quasar | 575739026 | Title: Rules are not reactive
Question:
username_0: **Describe the bug**
When using i18n plugin (that uses vue-i18n, as described in the documentation), I noticed that the QInput rule's error message is not reactive. Code:
```
<q-input
v-model="name"
:label="$t('yourName*')"
:hint="$t('nameAndSurname')"
filled
lazy-rules
:rules="[ val => val && val.length > 0 || $t('pleaseTypeSomething')]"
/>
```
The _pleaseTypeSomething_ label does not change when the language is changed via `this.$i18n.locale`.
**Codepen/jsFiddle/Codesandbox (required)**
Source code: https://github.com/username_0/quasar-i18n-example
Demo site: http://quasar-i18.surge.sh/
**To Reproduce**
1. Go to http://quasar-i18.surge.sh/
2. Notice the language is set to English
3. Press 'OK' to submit the form. Notice the "Please type something" error in the name QInput
4. Change language to Hebrew from the language QSelect in the toolbar
5. Notice that the entire interface changes, and all labels change to Hebrew, all but the rule error, which still show in English, although it has Hebrew translation
**Expected behavior**
The label is expected to be shown in Hebrew instead of English
**Screenshots**

Answers:
username_1: Can you try to make a computed property for the array of rules and use that computed property in :rules?
username_0: @username_1 , it still doesn't work
```
<q-card-section>
<q-input
v-model="name"
:label="$t('yourName*')"
:hint="$t('nameAndSurname')"
:rules="nameFieldRules"
filled
lazy-rules
/>
</q-card-section>
<q-card-section>
<q-input
v-model="age"
:label="$t('yourAge*')"
:rules="ageFieldRules"
filled
type="number"
lazy-rules
/>
</q-card-section>
```
and
```
computed: {
ageFieldRules () {
return [
val => (val !== null && val !== '') || this.$t('pleaseTypeYourAge'),
val => (val > 0 && val < 100) || this.$t('pleaseTypeARealAge')
]
},
nameFieldRules () {
return [
(val => val && val.length > 0) || this.$t('pleaseTypeSomething')
]
}
},
```
username_0: I added console prints in the computed properties, and noticed that they are being called as expected, whenever the language changes. The issue is probably that the rules are not re-applied after the rule reactive attribute changed, thus the label does not change.
username_0: Created a pen:
https://codepen.io/username_0-the-encoder/pen/abOLqdx?editors=1010
username_1: Thank you. I found the problem: the validation is not rechecked when rules change.
username_2: Thanks for the hard work on this... I recently ran into this issue too--it's certainly a "wow factor" to have an app change language dynamically.
username_3: A possible temporary workaround is to give your qinput a key which is based on some reactive data. Not a great solution, but it works.
username_4: New Boolean prop in "quasar" v1.11.0: `reactive-rules`.
Status: Issue closed
username_3: @username_4 hello! And thanks everybody who was working on this. However, could you please share your thoughts about why this should be a separate property instead of default behavior? In my opinion, in vue ecosystem we expect everything to be reactive by default. Or shouldn't reactive-rules property at least be true by default? I understand that someone could count on that rules are not reactive in their existing apps, but shouldn't you do a breaking change now to provide a better experience in future? Thank you.
username_4: @username_3 It won't be a better experience. It will be a worse experience perf-wise. Which is why a separate prop is needed if you REALLY want this. The reason is simple: most devs declare the rules inline, so a new array is created on each render and supplied to the component which in turn triggers the watcher (cause new memory reference of rules) on each re-render. So essentially this will run whether the rules actually change or not, if Vue determines it should re-render the component, and performance degrades, based on how much it takes to run all the rules.
username_5: Should you set a handler to active the error validations , If the v-model doesn't change never will read the error-message , or currently is there a way to make it? |
dotnet/runtime | 1157254631 | Title: TIME OUT - Mono llvmfullaot Pri0 Runtime Tests Run Linux arm64 release
Question:
username_0: We couldn't find any obvious change that could be causing it between these commits:
git diff a82d92c36624e89e831e37515a9b0c95a4cfe183..c8da2fd5d8b841fc34845218158399cf35eb8bcf
Log which the timeout started:
https://dev.azure.com/dnceng/public/_build/results?buildId=1633880&view=results
Answers:
username_0: @username_1 did we have any relevant infra change between friday and saturday?
username_1: Aside from serious "world-on-fire" hot-fixes we will not make infrastructure changes on a Friday evening or Saturday morning. If something's going to roll out and affect you, it'll almost always be on a Wednesday in the morning time for the Pacific time zone.
If your investigation does make you think there are infra issues, please feel free to also/instead tag @dotnet/dnceng as sometimes I take vacation days and there are other folks who might reply faster on those days. Including things like the Job GUID (sometimes called Correlation id), Work item friendly name, and why you think we caused your timeout can also speed investigation. Sounds like we're waiting for Simon to check it out first, though.
username_2: This seems to be caused by more assemblies getting AOTed.
In particular, every test project now seems to include TestLibrary.dll, so we end up AOTing it like 2000 extra times. The original lane already took about 2 hours, so these extra AOT steps might push it over the timeout.
username_2: The extra TestLibrary dll was added by
https://github.com/dotnet/runtime/commit/572405acd98aedfc9a57dc63debd1a1819431dd1#diff-13390079908954088047032dd93629fec2bcfd68749c2c7422c9ed60511ea108L73
Which adds TestLibrary to every test project.
username_3: cc @username_8
username_4: @username_8 @username_5 Did you complete the test grouping change for this lane? That should have made it faster, since there would be less redundant AOT work.
username_5: We're part-way through the work. We added the `TestLibrary` reference to each project to make it simpler than adding the reference to each project individually. If that's causing issues, then we can move to manually adding the project references to the required projects.
username_6: Is it possible to create one copy of `TestLibrary` and put it under `Core_Root`?
username_7: Yes, please switch to manually adding the project references. We need to keep this lane running in PR (in lieu of iOS device runtime tests).
username_5: cc @username_8 can you make this change? (I’m stretched a little thin on cycles right now)
username_8: @koritzinsky - Sure, will do. I suppose that basically means we need to manually reference TestLibrary from each test using PlatformDetection.
Status: Issue closed
username_6: With @username_8's PR, the `Mono llvmfullaot Pri0 Runtime Tests Run Linux arm64 release` lane still time out. And the time out currently was because the tests running on helix step took too long to finish (10min -> 1h). @simonrozsival Could you re-open your revert PR and merge main to it to see if the testing time would go down? (https://github.com/dotnet/runtime/pull/66088)
username_7: The aot time build issue seems to be fixed by the PR - but we still are seeing some tests running longer and exceeding the overall timeout of the lanes. Changing ownership of issue till we determine what is causing the tests to run longer than expected.
username_1: Don't forget that ARM64 test machines are a limited, not-scalable resources and the runtime team is sending thousands of hours of work through them. Being more clever saves time but we can't forget that we have a static number of limited resources.
Status: Issue closed
|
Patreon/patreon-php | 171717548 | Title: Using wp_remote_get instead of cURL
Question:
username_0: Hi David! Thanks for making all this code available. It's been super helpful for development.
I've got API requests working via cURL but I thought I'd also try using [`wp_remote_get`](https://codex.wordpress.org/Function_Reference/wp_remote_get) since native WordPress functions can be good for simplifying code. However, I am always getting a "403 Forbidden" response whenever I use the `wp_remote_get` method. Is there something special I'd need to do in order to get this working? Is an additional header or user agent needed?
*Example code:*
```php
$access_token = 'xxxxxx';
$api_endpoint = 'https://api.patreon.com/oauth2/api/current_user';
$args = array(
'headers' => array(
'Authorization' => 'Bearer ' . $access_token
)
);
$response = wp_remote_get( $api_endpoint, $args );
return $response;
```
Answers:
username_1: hmmmm, there's no other headers required. I don't know enough about `wp_remote_get` to figure out why that code would fail. Our equivalent code is:
``php
$access_token = 'xxxxxx';
$api_endpoint = 'https://api.patreon.com/oauth2/api/current_user';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $api_endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$authorization_header = "Authorization: Bearer " . access_token;
curl_setopt($ch, CURLOPT_HTTPHEADER, array($authorization_header));
curl_exec($ch);
```
Try using the above code to see if it's just a `wp_remote_get` issue?
username_0: Yeah, the above code is exactly what I've been using and it works great! I guess this is no real issue then. I'm simply curious why `wp_remote_get` is failing when it should be a reasonable alternative.
username_1: That's a good question.. This is not a wordpress plugin though, so I'd like to keep `wp_*` stuff out of the repo.
Status: Issue closed
username_2: @username_0 Circling back to this issue. The 403's that you were experiencing were a result of some overly aggressive filtering in our routing system which specifically targeted Wordpress. It should now work as expected. |
openvinotoolkit/open_model_zoo | 748510055 | Title: Smart Classroom Demo issue
Question:
username_0: Hello Experts,
Anyone encounter this issue before when I tried to run the smartclass room demo ?
[ INFO ] InferenceEngine: 00007FFFFD6E4390
[ INFO ] Parsing input parameters
[ INFO ] Reading video 'classroom.mp4'
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2020.4.0-359-21e092122f4-releases/2020/4
[ ERROR ] Check 'shape_size(get_input_shape(0)) == shape_size(output_shape)' failed at C:\j\workspace\private-ci\ie\build-windows-icc2018\b\repos\openvino\ngraph\src\ngraph\op\reshape.cpp:290:
While validating node 'v1::Reshape Reshape_187891(Softmax_187890[0]:f32{3249,8}, Constant_187705[0]:i64{3}) -> (f32{1,3249,2})':
Requested output shape Shape{1, 3249, 2} is incompatible with input shape Shape{3249, 8}
Answers:
username_1: @username_0 it seems model IR not corresponds to OpenVINO runtime. If you use OpenVINO 2020.4 runtime then you should not use model from 2021.1 release.
username_0: Hi Vladimir-Dudnik,
I am using IR file for OpenVINO 2020 release and some how I didn't see the error messages already but nothing happen, i guess it is not playing the .mp4 file, no pop out of the video windows, seems weird....
C:\Users\allensen\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release>smart_classroom_demo.exe -i classroom.mp4 -m_act person-detection-action-recognition-0006.xml -m_fd face-detection-retail-0004.xml -m_lm landmarks-regression-retail-0009.xml -m_reid face-reidentification-retail-0095.xml -fg faces_gallery.json
[ INFO ] InferenceEngine: 00007FFFFE094390
[ INFO ] Parsing input parameters
[ INFO ] Reading video 'classroom.mp4'
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2020.4.0-359-21e092122f4-releases/2020/4
C:\Users\allensen\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release>
username_1: strange how it happens that ngraph related issue comes out without any changes.
If you can't playback video file on windows it might be related to absence of required media codecs. You may need to download ffmpeg DLLs for opencv, see this [thread](https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/inference-engine-samples-do-not-take-image-or-video-files-as/m-p/1134761#M9384) on OpenVINO forum
username_0: Hello Vladimir-Dudnik,
Thanks for the recommendation and I had downloaded ffmpeg DLL for OpenCV previously.
One thing I notice, when I use the same classroom.mp4 video with another Interactive_Face_Detetion example, it is working perfectly.
So I am wondering if this is related to smart_classroom_demo.exe only?
![Uploading Interactive_Face_Detection_Video.PNG…]()
username_0: Hello Vladimir-Dudnik,
Same results even I tried on diff machine in OpenVINO 2021, any thoughts, no issue running other demos.
Thanks
username_1: @username_0
I do not have issue to run smart class room demo with OpenVINO 2021.2, which was just released, using parameters below:
smart_classroom_demo.exe -i classroom.mp4 -m_act person-detection-action-recognition-0005.xml -m_fd face-detection-retail-0004.xml -m_lm landmarks-regression-retail-0009.xml -m_reid face-recognition-mobilefacenet-arcface.xml

But, when I change action recognition model to person-detection-action-recognition-0006.xml demo reports error:
[ ERROR ] The number of specified actions and the number of actions predicted by the Person/Action Detection Retail model must match
This model recognize 6 actions vs 4 recognized by person-detection-action-recognition-0005, so you need to specify additional option to demo: -student_ac "sitting,writing,raising hand,standing,turned around,lie on the desk" to make it work.
username_1: @username_0 do you have additional questions?
username_0: Hello Vladimir,
No more questions from my side.
Thanks for the help and advise
username_1: thanks, so I'll close this
Status: Issue closed
|
benjimmy/tappy-frame | 443714275 | Title: Calc is not quite right frames with push chance
Question:
username_0: 
As per above, if the chance of push is .14 and the chance of late is .14 then the push is more likely to have happened for that late frame than .14
Not sure how best to calculate this... |
google/closure-compiler | 32482070 | Title: Type checker misses checking array literals
Question:
username_0: This issue was imported from Closure Compiler's previous home at http://closure-compiler.googlecode.com
The original discussion is archived at:
http://username_0.github.io/closure-compiler-issues/#56
Answers:
username_1: Repro:
```js
// Warning (correct)
/** @type {Array<string>} */
var a = [];
a[0] = 1;
// No warning
/** @type {Array<string>} */
var a;
a = [1];
```
username_2: FYI, NTI warns on both of these.
username_3: Type checking is also broken for literal initialization, not only for reassigment:
```javascript
// No warning
/** @type {Array<string>} */
var a = [1];
``` |
dry-rb/dry-types | 167774364 | Title: Question: Does Maybe types causes coupling?
Question:
username_0: Hello. First of all, thanks a lot for such great suit of gems for Ruby. This is exactly what I was looking long time since I started to care about architecture.
Currently, I try to utilize `dry-types` in my project, and noticed one thing with optional (Maybe) types
```ruby
class MyClass < Dry::Types::Struct
include Dry::Types.module
attribute :optional, Maybe::Strict::String
end
obj = MyClass.new
=> #<MyClass optional=None>
obj.optional
=> None
obj.optional.value
=> nil
```
That fact, that getter returns Monad but real value, just because attribute is optional, changes default client's use of that getter: client needs to send `value` method to get real value.
Validation is matter to be changed, and thus should be encapsulated inside object and decoupled from rest of code. But currently, changing attribute optionality from `Maybe::Strict::String` to `Strict::String` requires update of every client code, because there is no `value` method anymore.
That creates coupling between client's use of attribute and attribute optionality.
I hope you understand my question fully. Thanks!
Status: Issue closed
Answers:
username_1: That's intentional so that handing of `nil` is explicit and required. Coupling is not a problem because it's like being worried that if you change an attr from String to Fixnum then you need to change code that depends on it :) |
aws/amazon-ecs-agent | 411580502 | Title: Containerdefinition container's name docs don't match with actual behavior
Question:
username_0: It doesn't map to these things though, since if I set `name: test` I won't get a container with name `test` but I'll get a container named `ecs-<service-name (or group name?>-<taskdef revision>-<container name (or task name?>-<somehash>`. Very far from `test` ;)
Why is this a problem: We want to run some containers that aren't managed by ECS and use volumes-from. For this it's significantly nicer if we can use a predictable container name. Right now we can't do that. We're aware that only a single container can exist with a specific name :)
### Expected Behavior
The container gets the name as defined in the container definition or the docs are updated.
### Observed Behavior
The container gets a derived/randomized name and the docs don't match
Answers:
username_1: For this kind of use-case, our recommendation would be to employ either [host-based volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html) (where you specify a concrete path on the host) or to use a [Docker volume driver](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html).
Status: Issue closed
username_0: I'm totally OK with this answer :) Just wanted to make sure there wasn't an option somewhere that we overlooked that would make this work. If the docs can be updated to reflect reality that would be great.
We'll have a look at using volumes for this. Thanks again! |
syndesisio/syndesis-quickstarts | 371131711 | Title: All flows return 501 not implemented in the export .zip
Question:
username_0: I suspect maybe they should be set to 200 OK?
To do this click on the return path icon at the end of the flow and change the return code, click "Done" and then click "Save as draft" or if you're on the latest Syndesis master clicking on the "Go to operations list" super sekretly saves the integration for you, then I guess it's just a matter of re-exporting.
Answers:
username_0: Otherwise right after importing the sample .zip the detail page shows 0 flows implemented.
username_1: @username_0 I will redo the video one more time in a week or so when some issues I ran into are resolved. I already recreated a new export, so hopefully you should see 3 flows now.
username_1: updated the API Provider QS.
Status: Issue closed
|
cnbluefire/FDSforCU | 751267388 | Title: 宁德技术服务费发票-宁德技术服务费发票
Question:
username_0: 宁德技术服务费发票【徴:ff181一加一⒍⒍⒍】【Q:249⒏一加一357⒌⒋0】经营范围广、项目齐全、劳务、会议、住宿、餐饮、运输、广告、建筑、手撕、建材、钢材等等...一边又有复习的风波,所以我太难了。虽然这次的试卷写完了,但是万一有些题目不太把握还是会错,所以心惊胆战。闲问字,评风月。时载酒,调冰雪。似初秋入夜,浅凉欺葛。人境不教车马近,醉乡莫放笙歌歇。倩双成、一曲紫云回,
https://github.com/cnbluefire/FDSforCU/issues/809
https://github.com/cnbluefire/FDSforCU/issues/810
https://github.com/cnbluefire/FDSforCU/issues/811 |
brianc/node-pg-types | 361282721 | Title: Error message undefined
Question:
username_0: Hi,
I am using pg to execute a stored procedure on postegres 10, this procedure may fail and raise an exception, the exception message can not be extracted from the error (err.error 'undefined'), but can be logged on the console.
when I log the error (`console.log(err);`) I see the exception message along with the stack trace, but when I log only the error message (`console.log(err.error)`) I get undefined.
```
let query = {
text: 'SELECT insert_events($1, $2, $3, $4, $5)',
values: [provider, type, events, types, expectedVersion],
};
const qfunc = async () => {
const cnn = await pool.connect();
try {
const res = await cnn.query(query);
return res.rows[0].insert_events;
} catch (err) {
console.log('Error : ', err.where);
throw err;
} finally {
cnn.release();
}
};
return qfunc();
```
Answers:
username_1: The property for the message associated with an Error object in JavaScript is `err.message`, not `err.error`.
username_1: The word “error:” there is part of the error message. It’s not a property.
username_0: I am not sure but when I use err.message as you suggested I get the message "Concurrency problem ...... ".
But the issue is solved thank you.
Status: Issue closed
username_0: I will close the issue. Thank you for your help. |
Tornaco/Thanox | 974152197 | Title: 使用thanox会使kustom lwp动态壁纸卡顿
Question:
username_0: 安装thanox后即使不开启任何功能,每隔一段时间返回桌面后kustom lwp会卡顿1秒,强行停止kustom lwp再启用后会恢复正常,一段时间后又会卡顿,如此往复。卸载thanox后无此bug。
thanox版本为Googleplay商店最新版
Answers:
username_1: 安装thanox后即使不开启任何功能,每隔一段时间返回桌面后kustom lwp会卡顿1秒,强行停止kustom lwp再启用后会恢复正常,一段时间后又会卡顿,如此往复。卸载thanox后无此bug。
thanox版本为Googleplay商店最新版 |
intellij-rust/intellij-rust | 778214534 | Title: Analyze with external linter stalls running tests
Question:
username_0: Right now if you run a test and you have `Analyze project with external linter` your test run can get temporarily stalled waiting for the linter (lock on the build dir).
You could get around this by changing your context configuration for the test to use a different `CARGO_TARGET_DIR`, but it would be nice if by default the linter did not run while you were running tests. Or if you could configure it to use a separate `CARGO_TARGET_DIR` to avoid build dir contention.
Answers:
username_0: Oh .. You can configure the external linter so this is a non issue for me

Status: Issue closed
username_0: Right now if you run a test and you have `Analyze project with external linter` your test run can get temporarily stalled waiting for the linter (lock on the build dir).
You could get around this by changing your context configuration for the test to use a different `CARGO_TARGET_DIR`, but it would be nice if by default the linter did not run while you were running tests. Or if you could configure it to use a separate `CARGO_TARGET_DIR` to avoid build dir contention.
username_0: Hmm.. I added `--target-dir target/ide` as the additional arguments yet
```
$ ls target/ide
ls: target/ide: No such file or directory
```
username_0: I just needed to specify the full absolute path to the workspace root target dir.
The dirs were getting created in my workspace subdirectories.
Status: Issue closed
|
CocoaLumberjack/CocoaLumberjack | 273345865 | Title: Thoughts about weak linking and project dependencies
Question:
username_0: Well, it is common to install CocoaLumberjack via CocoaPods or Carthage ( or Swift Package Manager ? ).
It is nice that community has fast reliable library that provides logging.
However, I would like to have an approach, which eliminates strong coupling to this library.
I start to inspect my current project and tell myself, "wait! it is not good!".
I have nearly 1000 occurrences of DDLog. ( Both DDLog logging and DDLogLevel definition ).
Even if I change search to DDLogLevel, the result drops down to 300 entries.
Project has several modules and, of course, this number is good ( ~ 700 log events ).
However, I would like to have an approach which could be described on wiki page or in readme.
It is simple to redefine ```MY_PROJECT_LOG_VERBOSE```. But it is not simple to decoupling usage of DDLogLevel variable.
In most cases logging is scoped to module ( or framework ). Yes, we have situations in which logging should be turned off by configuration.
And in these assumptions I could tell that the static nature of this variable is needless.
Answers:
username_1: @username_0 sorry it took so long to respond to this. Any ideas how to implement this? Are you interested in trying it out?
username_0: @username_1 yes, I am interested in it.
I thought about some kind of compiler flags, for example. However, it will solve only "static variable".
However, I would like to discuss also some "architecture" decisions and, moreover, "build configurations" decisions.
For example, I have found that Filtering uses NSArray as storage for filter flags. But wait, simple Set could fit this position well. |
remote-job-boards/software-engineering | 872440792 | Title: Paperpile: Marketing Manager
Question:
username_0: **Tags:** #copywriting #software #marketing-strategy #saas #marketing #dev #exec #digital-nomad
**Published on:** April 28, 2021
**Original Job Post:** https://remoteOK.io/remote-jobs/103744-remote-marketing-manager-paperpile

##Description
We are a technology startup building productivity software for researchers and students. Our user base is growing fast and so is the team around it. Tens of thousands of customers rely on Paperpile (paperpile.com) every day to organize and write scientific articles.
Our new product, BibGuru (bibguru.com), helps undergrad and high school students to quickly create accurate citations and bibliographies. First released last fall, hundreds of thousands of students already use BibGuru and we are adding thousands of new users every day.
Founded in Cambridge/MA, Paperpile is now headquartered in Vienna, Austria with a fully remote team distributed across 10 countries.
##Your role
* Plan and execute marketing campaigns across the complete funnel.
* Improve existing channels and explore new channels to generate leads.
* Write and optimize landing pages and marketing copy.
* Otimize onboarding and user engagement of our products.
* Keep users engaged with our products and build long-lasting relationships with business and institutional customers.
* Communicate across all our marketing channels (e-mail newsletter, in-app and push notifications, blog, landing pages, social media)
* Work with our in-house designer to create engaging visual marketing content.
* Collaborate and coordinate with your colleagues in our marketing team who specialize in paid advertising and content marketing/SEO.
* Work closely with our product team to coordinate and plan feature and product announcements.
* Find and work with external freelancers.
* Note: This is a hands-on job and the focus is on your work as an individual contributor. If possible, you delegate to colleagues or external freelancers and help to build a team over time.
##Requirements
* At least 4 years of experience in Digital & Growth Marketing.
* Previous experience with Software as a Service (SaaS), including mobile app marketing.
* Strong communication and writing skills. You can adapt your language to our diverse audience of academics, industry researchers, medical doctors, and students.
* Strong knowledge of modern marketing tools like Google Analytics, Mixpanel, Intercom, and others.
* Analytical and quantitative skills for a data-based approach to marketing and growth.
* You work independently, take care of your regular tasks reliably, and finish projects on time.
##Optional
* Design skills and experience creating visual content.
* Background in academic research or publishing.
* Video content marketing.
##Benefits
* Work with an interesting and diverse community of academics. Our customers use Paperpile to study climate change, cancer or medieval history.
* We are a fully remote company. Work from anywhere on your own schedule. We sponsor co-working spaces in your city.
* Work in a small motivated team. Everything you do matters and has a direct impact on the success of our products and company.
* Learn and grow. Try out new things every day. We sponsor relevant courses, seminars, and conferences.
* 4 weeks paid vacation.
* Equity/bonus program.
#Salary and compensation
$40,000 — $60,000/year
### Location
🌏 Worldwide<br><issue_closed>
Status: Issue closed |
ivalab/Optragen | 171025205 | Title: overall structure
Question:
username_0: If done properly, there should be minimal setup for things, and even maybe sub-classes that configure specific common optimal control formulation categories/types.
Leads to questions:
1. how should the ocp2nlp part be abstracted?
2. how should the spline part be abstracted? is this necessary?
3. can design be made to work with a multiple-shooting collocation type of class?
4. how to structure to enable partial problem formulation updates? this should speed up resolving under minor changes (such as terminal condition, or new obstacles). Or is this simply incorporated into the interface object invocation? (that might be fastest). |
pyenv/pyenv | 259177277 | Title: pyenv shims always change PATH
Question:
username_0: The pyenv shims for any executable change the PATH for the current process and child processes. This affects usage of any python program that starts child processes like `envdir`, `pew` and can be demonstrated using `python` itself.
The shim will append the `bin` of the current activated python and also the pyenv `libexec` folder. For example: `/Users/johria/.pyenv/versions/3.6.2/bin:/usr/local/Cellar/pyenv/1.1.3/libexec` will get prepended to the `PATH`.
This can most easily be demonstrated via the shim for `python` itself:
checking the current PATH:
```
$ echo "$PATH"
bin:/Users/johria/.nodenv/shims:/Users/johria/.rbenv/shims:/Users/johria/.pyenv/shims:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
```
checking the PATH modified by the pyenv shim:
```
$ python -c "import os; print(os.environ['PATH'])"
/Users/johria/.pyenv/versions/3.6.2/bin:/usr/local/Cellar/pyenv/1.1.3/libexec:bin:/Users/johria/.nodenv/shims:/Users/johria/.rbenv/shims:/Users/johria/.pyenv/shims:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
```
as you can see pyenv has changed the PATH
is there any way to allow shims to work without changing the PATH? I realize this is the root of my issues.
cc @username_1 @yyuu
Answers:
username_0: FWIW, I just tried out [`asdf`](https://github.com/asdf-vm/asdf) with [`asdf-python`](https://github.com/tuvistavie/asdf-python) and for some reason their shims to do not change the PATH.
username_1: So it adds `/Users/johria/.pyenv/versions/3.6.2/bin` and `/usr/local/Cellar/pyenv/1.1.3/libexec`.
Do you use 2 Python versions?
I agree somehow that this would be nice to avoid.
Try looking at the output after `export PYENV_DEBUG=1` to see where it is coming from.
Maybe this?!
https://github.com/pyenv/pyenv/blob/2ebab025f7c5dcb1de07c6fc38ea13592c7df6e4/libexec/pyenv-exec#L44-L47
username_0: linking relevant issues:
- https://github.com/pyenv/pyenv/issues/98
- https://github.com/pyenv/pyenv/issues/789
username_1: See https://github.com/pyenv/pyenv/pull/1169.
username_1: Duplicate of https://github.com/pyenv/pyenv/issues/789.
Status: Issue closed
|
plotly/plotly.js | 235748847 | Title: Allow x,y,z in WebGL plots to accept Float32Array's
Question:
username_0: It looks like currently only `pointcloud` accepts Float32Array's:
Answers:
username_1: Interesting, definitely seems doable, but is the numpy buffer format accessible from other languages? It looks like a fairly straightforward encoding, so seems like even if it's not natively available elsewhere we could likely generate it - hopefully just translating headers from whatever *is* natively available.
username_0: Here's an [R package](https://cran.r-project.org/web/packages/RcppCNPy/vignettes/RcppCNPy-intro.pdf) for writing numpy buffers.
username_2: Referencing https://github.com/plotly/plotly.js/issues/860
username_3: Very loosely related, only because Mikola wrote so much useful stuff that's not fully utilized: https://github.com/scijs/ndarray/issues/18
There's been talk of letting the scijs/ndarray constructor accept a plain object with format {data: [...], shape: [...], stride: [...], offset: ...}. It's a bit annoying/inefficient to pack and unpack those via [ndarray-unpack](https://github.com/scijs/ndarray-unpack) and [ndarray-pack](https://github.com/scijs/ndarray-pack), but it's pretty trivial. Like if a numpy buffer could get unpacked into an scijs ndarray and then sent to plotly as an array of arrays via `ndarray-unpack`. The annoying part is mashalling to/from typed arrays, but maybe the above code handles that.
username_4: ({'plain': [0, 'text']}, {'x': {}, 'y': {'shape': (10, 10)}}, [['x', 'ar'], ['y', 'data']],
[<memory at 0x107ffec48>, <memory at 0x107ffed08>])
```
This can be seen as an extension of json, and I think this part deserves it's own library, which I think can be useful for many other projects. For instance I noticed that bokeh (cc @bryevdv) also has binary transfer on their wish list, so maybe some coordination is useful.
Having a jsonb library for python, js, R and c++ would be of interest of many more people I think, beyond ipywidgets, plotly and bokeh.
What to do with the buffer object on the js side is is I think up to the app developer, in ipyvolume I now mostly directly use typed arrays (such as Float32Array), and for multi-d cases ndarray. I do however check on the Python side that the array is 'C_CONTIGUOUS', so I do not have to worry about strides.
(cc @SylvainCorlay @jasongrout )
PS: @username_0 I don't transfer the full numpy array data any more (that was before ipywidgets 7), I now serialize only the array data, and [send the dtype, and shape separately](https://github.com/username_4/ipyvolume/blob/2ea543dd600eeaf899487d30633182b6ad04cf08/ipyvolume/serialize.py#L95), i need to remove that code.
username_0: Thanks @username_4 for these tips - extremely helpful!
We're in the middle of a few other plotly.js projects right now, but are planning to circle back on this in a few weeks.
@SylvainCorlay @jasongrout @bryevdv happy to think about standalone implementations for this that could be universally useful. Feel free to chime in if you think of ideas :clinking_glasses:
username_5: I haven't yet looked into the awesome details here, but we have an older conversation with a similar overall goal in mind (though maybe different context): https://github.com/plotly/plotly.py/pull/550#issuecomment-241228681
At the time we pondered that maybe the Python side could serialize with [np.ndarray.tobytes](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tobytes.html) into a WebSocket of `binaryType: "arraybuffer"`. This way, there's still interprocess communication (I think there must be, at least in the browser) but it is limited to the minimum, and it should create no intermediary representation or storage, just a direct array->array binary flow. Wondering if this approach would be slower or faster than the above character based approach.
username_4: Hi,
There is no character based approach what I describe, maybe it seem that way since it is (partly) json, but all the array data is binary transfer with minimal amount of copies. Actually, I wouldn't recommend using `np.ndarray.tobytes` (which makes a copy), the `memoryview(ar)` strategy we use avoids that extra copy. Hope this clarifies our approach a bit more. On the JS part, indeed the buffer can be passed to the typed arrays, which can then be directly fed into the WebGL API, with no (or minimal) memory copies.
cheers,
Maarten
username_5: Thanks for the clarification @username_4 - your approach looks like the one to be followed!
username_6: Somewhat related to this thread and the topic of data serialization: I came across [Apache Arrow](https://arrow.apache.org/) which is a cross-language in-memory representation for columnar data to go from the current inefficient copy & convert:

to a much-more efficient:

username_1: cc @catherinezucker - This came up at gluecon, would be really useful for volume rendering.
username_0: This issue has been tagged with `NEEDS SPON$OR`
A [community PR](https://github.com/plotly/plotly.js/pulls?q=is%3Apr+is%3Aopen+label%3A%22type%3A+community%22) for this feature would certainly be welcome, but our experience is deeper features like this are difficult to complete without the Plotly maintainers leading the effort.
**Sponsorship range: $10k-$15k**
What Sponsorship includes:
- Completion of this feature to the Sponsor's satisfaction, in a manner coherent with the rest of the Plotly.js library and API
- Tests for this feature
- Long-term support (continued support of this feature in the latest version of Plotly.js)
- Documentation at [plotly.com/javascript](https://plotly.com/javascript/)
- Possibility of integrating this feature with Plotly Graphing Libraries (Python, R, F#, Julia, MATLAB, etc)
- Possibility of integrating this feature with Dash
- Feature announcement on [community.plotly.com](https://community.plotly.com/tag/announcements) with shout out to Sponsor (or can remain anonymous)
- Gratification of advancing the world's most downloaded, interactive scientific graphing libraries (>50M downloads across supported languages)
Please include the link to this issue when [contacting us](https://plotly.com/get-pricing/) to discuss. |
depoon/NetworkInterceptor | 366812212 | Title: No such module 'NetworkInterceptor'
Question:
username_0: So I'm pretty much very junior to swift related development (did some Obj-C back in the days but a bit rusty too).
I haven't found a way to properly build the example project:
- The `NetworkInterceptor.framework` in Xcode appear to be missing
- What I try to do is to build the framework independently (works) and show build in finder then drag-drop built file into the exemple project (deleting the previous missing ref).
However it won't build anyway as it could not find the module:
````
~/Developer/foobar/ios-inject/NetworkInterceptor/NetworkInterceptorExample/NetworkInterceptorExample/ViewController.swift:10:8: No such module 'NetworkInterceptor'
```
Have a working static injection project (following your tutorial), but as well, I'm stuck properly loading the `CodeInjectionSwift.swift` code, not sure if because it's Swift or if there is as well an issue while loading the built framework (same drag&drop from build).
Would love any insight you could have on this,
Thanks for the great work! I'm looking to reverse-engineer an home automation app and your code looks like to be my last resort! (SSL certificate pinning).
Answers:
username_1: 1. Please Open **_NetworkInterceptor.xcworkspace_** instead
2. Build ** NetworkInterceptor** target
3. Run Example project
How do you wish to reverse engineer that app? I might have an easier solution
username_0: Thanks for the answer! Indeed opening the workspace seems to have fixed the build issues, back to work!
The app I'm trying to reverse is a free home automation app, available both on iOS and Android. Their are quite famous in my country, but their system is totally closed unfortunately.
I was able to debug the initial login handshake that uses some kind of [Digest access authentication](https://en.wikipedia.org/wiki/Digest_access_authentication) but to initiate a WebSocket connection. That part is working fine and I'm properly receiving WebSocket messages from the server (that looks like to be raw HTTP requests).
The big issue is that I don't have a clue what is the proper API (aka. WebSocket message) to trigger stuff (tried a few variations of the received message with no luck).
I've spent a lot of time trying to debug these network frames using [Charles for iOS](https://www.charlesproxy.com/documentation/ios/), then a combination of [mitm-proxy](https://mitmproxy.org/) and [WireShark](https://www.wireshark.org) combined with the use of a [Remote Virtual Interface](https://developer.apple.com/library/archive/qa/qa1176/_index.html).
Big issue is that it looks like WebSocket frames do not use the proxy, were not properly SSL decrypted by Charles, and when I finally was able to force the SSL override the app won't work anymore (due to SSL pining).
So my only hope left was to somehow debug these WebSocket messages before they are sent / encrypted thanks to your work. If you have any other idea (or maybe some other iOS method injection to perform related to WebSocket usage), I'd be more than happy to hear them.
If you are interested, I can send you unencrypted version of both iOS/Android apps!
Thanks again for the help!
username_1: Drop me an email regarding the unencrypted app pls.
<EMAIL>
username_0: @username_1 done, thanks!
username_1: On PatchLoader.m, since you are creating a framework, use this import instead
```ObjC
#import <PatchFramework/PatchFramework-Swift.h>
```
In CodeInjectionSwift you need to declare your class and func public in order to use them
username_0: Thanks, it did indeed fixed build issues.
Unfortunately it looks like I've hit another deadend, I can't import the required Swift dylibs as they crash (due to missing symbols), eg:
```
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Termination Description: DYLD, Symbol not found: _$SSJN | Referenced from: /var/containers/Bundle/Application/7739779D-14CF-4DFD-AF21-4BF6B6E9617C/Tydom.app/Dylibs/libswiftSwiftOnoneSupport.dylib | Expected in: /private/var/containers/Bundle/Application/7739779D-14CF-4DFD-AF21-4BF6B6E9617C/Tydom.app/Frameworks/libswiftCore.dylib | in /var/containers/Bundle/Application/7739779D-14CF-4DFD-AF21-4BF6B6E9617C/Tydom.app/Dylibs/libswiftSwiftOnoneSupport.dylib
```
Probably because either XCode is too recent (and device is in 10.3) or maybe the fact that it is 32-bit.
username_2: Hi @username_0,
Not sure, if I'm on the right way understanding your issue. But I also got into trouble using the required (or not required) Swift dylibs while experimenting with CodeInjection.
First I used all the default Swift dylibs (/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift), but soon found out, that the easiest way of using only the required dylibs is to build your framework, open it in finder and copy all dylibs from the /Frameworks directory.
Especially for libswiftSwiftOnoneSupport.dylib, this was only required if I wanted to use a debug version of the framework on my iOS-device. Otherwise this dylib is not required (and shouldn't therefore not be included).
username_0: Indeed it looks like I was building it in debug mode. Had been able to get a 64-bit device and wanted to try again but I'm not sure if an Xcode update broke something but I get:
```
dyld: Library not loaded: @rpath/NetworkInterceptor.framework/NetworkInterceptor
Referenced from: /var/containers/Bundle/Application/96A22878-6F2A-4670-99ED-268CF907F9D2/NetworkInterceptorExample.app/NetworkInterceptorExample
Reason: image not found
```
When trying to run the example. I've opened the workspace and built the NetworkInterceptor target first. Any ideas?
username_0: Finally made it work today!! had to compile my framework with Xcode 9 (still downloadable on Apple developer) to be on the same SDK than the app.
However I'm receiving truncated body/content in the console, is it normal @username_1?
```
default 01:07:35.829411 +0100 Tydom Task <157440F7-FB34-41CC-A208-968EB2C32118>.<0> sent request, body N
default 01:07:35.838641 +0100 Tydom Task <157440F7-FB34-41CC-A208-968EB2C32118>.<0> received response, status 401 content U
```
Anyway I guess I can close this!
Status: Issue closed
|
mengxiong10/vue2-datepicker | 446114324 | Title: To show the next month when the days of the current month are disabled
Question:
username_0: Is it possible to show the next month when all the dates of the current month are disabled?
Answers:
username_1: use the `default-value`
username_0: 
in this case I need to show June
username_1: <date-picker :default-value="new Date(2019, 5,1)" />
username_0: Thanks
Status: Issue closed
|
excaliburjs/Excalibur | 239258598 | Title: Create an excalibur CLI
Question:
username_0: ### Context
It's no surprise most modern frameworks today distribute a CLI (Angular, Ember, React), JS development is hard to bootstrap.
### Proposal
Post-1.0, we should consider creating an excalibur CLI that could:
- Generate games
- Migrate games to new versions (using codemods)
- Allow additional plugins (excalibur-pack, excalibur-tiled)
- Scaffold classes
- Build and distribute the game (electron, UWP, Xamarin, etc.)
Answers:
username_1: From our discussion earlier, possible commands
```
ex init mygame
ex add tiled
ex add actor
ex add scene
ex run
ex debug
ex publish
``` |
bringtree/question_embedding | 443696066 | Title: 查看 hadoop fs 打开的hdfs的ip 和端口 和对应服务
Question:
username_0: hadoop hdfs 配置 指南
首先 找 config 路径
/disk2/huangps/hadoop-2.7.1/etc/hadoop
关注这几个文件
hadoop-env.sh
这个 是虚拟机的配置: OOM 的时候 可以去修改
```bash
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
...
export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
```
接着看 core-site.xml
```xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://abcdefg</value>
</property>
</configuration>
```
发现 指向了 hdfs://abcdefg
查看abcdefg 的ip 发现 没有这个名字的映射. 只能继续往下找
hdfs 有两个概念 一个叫 datanode(存放数据) 另外一个叫 namenode(存放数据的地址)
显然我们要找的是 namenode
```xml
<configuration>
<property>
<property>
<name>dfs.ha.namenodes.abcdefg</name>
<value>nn0,nn1</value>
</property>
<property>
<name>dfs.namenode.rpc-address.abcdefg.nn0</name>
<value>ab:8000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.abcdefg.nn1</name>
<value>cd:8000</value>
</property>
<property>
[Truncated]
</property>
<property>
<name>dfs.namenode.http-address.abcdefg.nn1</name>
<value>cd:8001</value>
</property>
</configuration>
```
dfs.ha.namenodes.abcdefg 下有两个值。 做了备份? 没看懂文档说的啥意思
反正不管了 都试一试
直接调http 的地址ab:8001 和cd:8001
发展 只有 cd:8001 能进去,另外一个报了其他错误。扫了一眼 和
hadoop fs -ls 下hdfs://abcdefg/ 路径是一样的.
结束 |
GoogleChrome/web.dev | 482436913 | Title: Rebuild footer.
Question:
username_0: **Is your feature request related to a problem? Please describe.**
As part of the devsite migration, we need to recreate the footer in a way that doesn't rely on their CSS.
**Describe the solution you'd like**
Tidy up the footer markup and CSS. Ideally it should look the same.<issue_closed>
Status: Issue closed |
PentiaLabs/generator-helix | 366295160 | Title: Serialization.config is not added when Yes to Unicorn is selected
Question:
username_0: ### User story
As a Developer, I want a Serialization.config file so that unicorn will work and the project can build when just added a new helix project. (Please replace the <> areas and remove this parentheses)
### Expected behavior
serialization.config file added to the new project.
### Actual behavior
The file is missing, though the new project reference it.
Result is that the solution won't build.
### Steps to reproduce the behavior
yo helix:add Dummy RK
Answers:
username_1: Can not reproduce it
Status: Issue closed
|
ContinuumIO/anaconda-issues | 1113300204 | Title: Navigator Load Error
Question:
username_0: ### Actual Behavior
<!-- What actually happens? -->
Navigator Error
An unexpected error occurred on Navigator start-up
Report
Please report this issue in the anaconda issue tracker
Main Error
/Users/wor/.continuum/anaconda-client: Permission denied
Traceback
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/binstar_client/utils/config.py", line 274, in save_config
os.makedirs(data_dir)
File "/opt/anaconda3/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/Users/wor/.continuum/anaconda-client'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/exceptions.py", line 72, in exception_handler
return_value = func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/app/start.py", line 151, in start_app
window = run_app(splash)
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/app/start.py", line 66, in run_app
window = MainWindow(splash=splash)
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/widgets/main_window/__init__.py", line 235, in __init__
self.api = AnacondaAPI()
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/api/anaconda_api.py", line 1442, in AnacondaAPI
ANACONDA_API = _AnacondaAPI()
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/api/anaconda_api.py", line 80, in __init__
self._client_api = ClientAPI(config=self.config)
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/api/client_api.py", line 650, in ClientAPI
CLIENT_API = _ClientAPI(config=config)
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/api/client_api.py", line 94, in __init__
self.reload_client()
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/api/client_api.py", line 324, in reload_client
client = self._load_binstar_client(url)
File "/opt/anaconda3/lib/python3.9/site-packages/anaconda_navigator/api/client_api.py", line 355, in _load_binstar_client
binstar_client.utils.set_config(config)
File "/opt/anaconda3/lib/python3.9/site-packages/binstar_client/utils/config.py", line 284, in set_config
save_config(data, USER_CONFIG if user else SYSTEM_CONFIG)
File "/opt/anaconda3/lib/python3.9/site-packages/binstar_client/utils/config.py", line 279, in save_config
raise BinstarError('%s: %s' % (exc.filename, exc.strerror,))
binstar_client.errors.BinstarError: /Users/wor/.continuum/anaconda-client: Permission denied
### Expected Behavior
<!-- What should have happened? -->
Navigator should load without error
### Steps to Reproduce
<!-- What steps will reproduce the issue? -->
[Truncated]
<details>
```
PASTE OUTPUT HERE:
```
</details>
##### `conda list --show-channel-urls`
<!-- Paste the output of 'conda list --show-channel-urls' between the two sets of backticks (```) below -->
<details>
```
PASTE OUTPUT HERE:
```
</details>
Answers:
username_1: I have the very same issue and persisting. No amount of culling the existing version and reinstalling appears to solve the problem that the lauchpad icon is unresponsive to clicking. Going into the 'Finder' /Applications and clicking on the anaconda navigator makes the icon wobble as it seems to be loading the app but then ends up as the error dump (attached) in a Safari page
[Navigator Error.pdf](https://github.com/ContinuumIO/anaconda-issues/files/7933106/Navigator.Error.pdf)
. |
yajra/laravel-datatables | 911521063 | Title: Multiple select - mjoin
Question:
username_0: Is it possible to implement multiple selection as example here:
[DataTables example - Multiple selection](https://editor.datatables.net/examples/datatables/mJoin.html)
Answers:
username_1: Yes, possible but needs several things to work. Here is an example of multiple selection using jsTree.

## Html Builder
```php
public function html()
{
return $this->builder()
->setTableId('roles-table')
->columns($this->getColumns())
->selectStyleMulti()
->orderBy(2)
->ajaxWithFilters()
->buttons([
CreateButton::makeIfCan('manage-roles'),
EditButton::makeIfCan('manage-roles'),
RemoveButton::makeIfCan('manage-roles'),
CsvButton::make(),
ReloadButton::make(),
])
->editors([
Editor::make()
->fields([
Fields\Field::make('name'),
Fields\Field::make('slug')
->fieldInfo('System role slug cannot be altered.')
->multiEditable(false),
YesNoField::make('system')
->fieldInfo('System role cannot be deleted or slug to be modified.')
->default(0),
JSTreeField::make('permissions[].id')
->label('Permissions')
->options(Permission::jsTree()),
]),
]);
}
```
username_1: @username_0 I just realized that `datatable` is a new field type for Editor 2. Will try to give this a shot when I got the chance. I currently implement a custom field type to achieve this case.
Anyways, theoretically, the new field type should work out of the box.
username_2: Jstree look good, can you share in gist ?
username_0: If possible I also need an example how to create query for controller for data and how to build store function for one-to-many joins. I don't know how to do that in Laravel.
https://editor.datatables.net/manual/php/mjoin
username_1: @username_2 here is the [gist](https://gist.github.com/username_1/a383bf41ec7eb7e673cf9b7794020d51), hope it helps.
username_2: Wow, it look amazing. Thanks for share! |
kosua20/Thoth | 68091128 | Title: Error with references in summaries.
Question:
username_0: If the article beginning contains reference-style links and pictures which link to parts of the articles outside of the summary range, they are not correctly replaced by the Markdown processor in the *{#SUMMARY}* text.
Answers:
username_0: Fixed by generating the whole article HTML representation, before filtering return lines and HTML tags, clamping it and trimming white spaces.
Status: Issue closed
|
rstudio/packrat | 134063387 | Title: Packrat fails to download source from Github repos
Question:
username_0: R version 3.2.3 (2015-12-10)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.11.3 (El Capitan)
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] dplyr_0.4.3
loaded via a namespace (and not attached):
[1] magrittr_1.5 R6_2.1.1 assertthat_0.1 formatR_1.2.1 parallel_3.2.3 DBI_0.3.1 tools_3.2.3 yaml_2.1.13 memoise_1.0.0
[10] Rcpp_0.12.2 stringi_1.0-1 knitr_1.12.3 digest_0.6.9 stringr_1.0.0 packrat_0.4.6-13 devtools_1.10.0
```
Answers:
username_1: Any chance this is an intermittent error?
What happens if you execute:
packrat:::download('http://github.com/rstudio/packrat/archive/71364778269841eedf2a8748055d759c858ff3a0.tar.gz')
directly? If this fails, can you also give the output of:
getOption("download.file.method")
getOption("download.file.extra")
username_0: Looks like download isn't a function in packrat, did you mean something else?
```
packrat::download('http://github.com/rstudio/packrat/archive/71364778269841eedf2a8748055d759c858ff3a0.tar.gz')
Error: 'download' is not an exported object from 'namespace:packrat'
```
The output of the options is:
```
getOption("download.file.method")
[1] "curl"
getOption("download.file.extra")
NULL
```
username_1: It's an unexported function, so you need to access it using three colons (`:::`) rather than two (`::`).
username_0: In that case it says:
```
packrat:::download('http://github.com/rstudio/packrat/archive/71364778269841eedf2a8748055d759c858ff3a0.tar.gz')
Error in if (!grepl("\\b-L\\b", extra)) extra <- paste(extra, "-L") :
argument is of length zero
```
username_1: Thanks -- looks like packrat is incorrectly assuming that `download.file.extra` is a string. I'll get this fixed soon!
username_0: Yay, I appreciate it!
username_1: Okay, I believe this should be resolved with the latest version of packrat -- can you give it a test and let me know how it goes?
Thanks!
username_0: hmm, I'm seeing this now:
```
packrat::snapshot()
Error in visited[[rec$name]] <- rec :
wrong args for environment subassignment
```
username_1: Any chance you can print a traceback, or use `options(error = recover)` and tell me what the `rec` variable is? My guess is that `rec` itself must be `NULL`...
username_0: sure:
```
traceback()
5: visit(rec$depends)
4: visit(packageRecords)
3: flattenPackageRecords(inferredPkgRecords, depInfo = TRUE, sourcePath = TRUE)
2: snapshotImpl(project, available, lib.loc, dry.run, ignore.stale = ignore.stale,
prompt = prompt && !dry.run)
1: packrat::snapshot()
```
username_1: I think this commit may have fixed it: https://github.com/rstudio/packrat/commit/021f122186cb2b5f80831d46d3b436411852c765
Can you give it another shot with latest master? (Sorry for the breakage!)
username_0: Booya! Thanks!
```
Upgrading these packages already present in packrat:
from to
packrat 0.4.6-12 0.4.6-16
rmarkdown 0.9.2 0.9.5
Fetching sources for dplyr (0.4.3.9000) ... OK (GitHub)
Fetching sources for packrat (0.4.6-16) ... OK (GitHub)
Fetching sources for rmarkdown (0.9.5) ... OK (GitHub)
```
username_2: I am still having trouble with this issue.
I have downloaded the most recent version of the package:
`devtools::install_github('rstudio\packrat')`
Then:
`packrat::init("<directory>")`
Gives:
Fetching sources for packrat (0.4.6-17) ... FAILED
Error in snapshotSources(project, activeRepos(project), allRecordsFlat) :
Errors occurred when fetching source files:
Error in getSourceForPkgRecord(pkgRecord, sourceDir, availablePkgs, repos) :
Failed to download package from URL:
- "http://github.com/rstudio/packrat/archive/9966a824ade72e5fecf6f770c9b75d2f1a6d8987.tar.gz"
With `traceback()`
5: stop("Errors occurred when fetching source files:\n", errors)
4: snapshotSources(project, activeRepos(project), allRecordsFlat)
3: snapshotImpl(project, available.packages(contrib.url(activeRepos(project))),
lib.loc = NULL, ignore.stale = TRUE, fallback.ok = TRUE)
2: withCallingHandlers(expr = {
if (isPackratModeOn())
off()
packify(project = project, quiet = TRUE)
augmentRprofile(project)
options <- initOptions(project, opts)
snapshotImpl(project, available.packages(contrib.url(activeRepos(project))),
lib.loc = NULL, ignore.stale = TRUE, fallback.ok = TRUE)
restore(project, overwrite.dirty = TRUE, restart = FALSE)
file.copy(instInitFilePath(), file.path(project, "packrat",
"init.R"))
updateSettings(project, options)
symlinkSystemPackages(project = project)
message("Initialization complete!")
if (enter) {
setwd(project)
if (!restart || !attemptRestart())
on(project = project, clean.search.path = TRUE)
}
invisible()
}, error = function(e) {
for (i in seq_along(priorStructure)) {
file <- names(priorStructure)[[i]]
fileExistedBefore <- priorStructure[[i]]
fileExistsNow <- file.exists(file)
if (!fileExistedBefore && fileExistsNow) {
unlink(file, recursive = TRUE)
}
}
})
1: packrat::init("<directory>")
Trying this:
packrat:::download('http://github.com/rstudio/packrat/archive/9966a824ade72e5fecf6f770c9b75d2f1a6d8987.tar.gz')
Gives:
Error in download.file(url = url, method = method, extra = extra, ...):
argument "destfile" is missing, with no default
With `traceback()`
4: download.file(url = url, method = method, extra = extra, ...)
3: withCallingHandlers(download.file(url = url, method = method,
extra = extra, ...), warning = function(w) {
caughtWarning <<- w
invokeRestart("muffleWarning")
})
2: downloadFile(url, method, ...)
1: packrat:::download("http://github.com/rstudio/packrat/archive/9966a824ade72e5fecf6f770c9b75d2f1a6d8987.tar.gz")
username_1: Sorry, I should have asked you to specify a `destfile` argument, e.g.
`packrat:::download(url, destfile = tempfile())`
What version of R are you using? Are you running RStudio? If so, what version?
What's the output of `getOption('download.file.method')`? What about `getOption('download.file.extra')`?
username_2: 4: download.file(url = url, method = method, extra = extra, ...)
3: withCallingHandlers(download.file(url = url, method = method,
extra = extra, ...), warning = function(w) {
caughtWarning <<- w
invokeRestart("muffleWarning")
})
2: downloadFile(url, method, ...)
1: packrat:::download("http://github.com/rstudio/packrat/archive/9966a824ade72e5fecf6f770c9b75d2f1a6d8987.tar.gz",
destfile = tempfile())
username_1: Hmmm, that's very odd as the same call works for me on a Windows 10 machine with R 3.2.3 patched (although with the latest release of RStudio).
What if you just use `download.file()`, e.g.
```R
url <- "http://github.com/rstudio/packrat/archive/9966a824ade72e5fecf6f770c9b75d2f1a6d8987.tar.gz"
download.file(url, destfile = tempfile())
```
Does that succeed? What if you set `options(download.file.method = "internal")`?
username_2: trying URL 'https://github.com/rstudio/packrat/archive/9966a824ade72e5fecf6f770c9b75d2f1a6d8987.tar.gz'
Content type 'application/x-gzip' length 119342 bytes (116 KB)
downloaded 116 KB
username_1: That seems odd that going from non-secure to secure would be a failure...
Either way, packrat does construct non-https URLs behind the scenes so this is a bug in packrat. In the interim, you should be able to work around this by using `options(download.file.method = "internal")`.
username_1: I just made a commit to master that should resolve this (prefer `https` for GitHub downloads when possible). Can you let me know if that works for you?
username_1: Also, it looks like this is a setting that can be toggled in Internet Explorer (`wininet` implies the use of the same download machinery that Internet Explorer uses): http://kb.tableau.com/articles/knowledgebase/resolving-internet-communication-error-secure-nonsecure
You might try changing that to see if it makes a difference.
username_2: that one worked! thanks so much. you've been super responsive. can't tell you how much i appreciate it.
username_1: No problem! Glad we were able to find a resolution :)
Status: Issue closed
|
microsoft/winget-pkgs | 946602073 | Title: None
Question:
username_0: Hey @username_1 do you guys not Publish appxbundle installers anymore? Or are they just not public?
Answers:
username_1: @username_0 We do but my Azure subscription got hosed. Still waiting for them to fix it and bring the install resources back up. Recommend doing nothing here for now, short-term outage.
username_1: Endpoint is back up, can close this (if it doesn't already do so automatically).
username_0: We mods can't do so, the bot will probably not do that either, so afaik @username_2 gotta close it
I could do a metadata update for the current version and close it that way if it is okay with you?
username_1: It's already set to the latest production version. No updates at this time. 😄
Status: Issue closed
|
nvim-treesitter/nvim-treesitter | 855972568 | Title: Treesitter is highlighting same groups differently in different places
Question:
username_0: **Describe the bug**
I am writing C++ code and treesitter is highlighting similar groups differently. Here is a snapshot from my editor.

As you can see from the image
1. friend is highlighted with different colors in similar sections.
2. operator keyword in second line is green whereas it is violet in others.
3. bool keyword in two lines are green whereas in others it is yellow.
I don't know if this is a problem related to treesitter particularly. For your reference I am only using treesitter engine for highlighting. Previously I was using Vim Polyglot, but I deleted it. So this is not coming from polyglot I guess.
Answers:
username_1: Could it be that you're affected by https://github.com/tree-sitter/tree-sitter-cpp/pull/72 ? The fix is available since a long time but has never been merged. It would be interesting what the parser actually gives us as a result (try https://nvim-treesitter/playground). When there are errors in the syntax tree the error can propagate to wrong highlighting. You can also use `:TSHighlightsUnderCursor` so see what highlight groups are applied in the different spots.
Do you have a highlight for `@variable` set?
username_0: @username_1 Okay, let me see the issue and try what you told.
username_1: I personally find it very annoying that the `error` regions are highlighted bold. nvim-treesitter does not set a highlight for error. But due to a caching issue in upstream nvim we cannot overwrite it at the moment.
username_0: @username_1 I took a look at the highlight groups, here's what I got
1. The operator keyword which was wrongly highlighted in second line is detected as @variable as well as a @function.
2. The friend keyword in the second line is detected as @variable whereas it should have been @keyword.
3. The friend keyword in the third line is detected as @type.
4. The bool type in fourth and fifth line is detected as @variable as well as @function.
5. The bool type in third and last line is detected as @variable.
username_1: Can you paste the code here?
username_0: ```cpp
#include<bits/stdc++.h>
using namespace std;
template <int MOD> struct Mint{
static constexpr int mod = MOD;
int value;
Mint() : value(0) {}
Mint(int64_t v_) : value((int64_t)(v_ % mod)) { if (value < 0) value += mod; }
static int inv(int a, int m) {
a%=m; assert(a);
return a == 1?1:(int)(m-(int)(inv(m, a))*(int)(m)/a);
}
Mint inverse() const {
Mint res; res.value = inv(value, mod);
return res;
}
Mint power(Mint a, int b) const{
Mint res = 1; while(b>0){ if(b&1) res *= a; a=(a*a); b>>=1;} return res;
}
friend std::istream &operator>>(std::istream &input, Mint &other){
input >> other.value; return input;
}
friend std::ostream &operator<<(std::ostream &out, const Mint &other){
out << other.value; return out;
}
Mint operator- () const { return Mint(-value);}
Mint operator+ () const { return Mint(*this); }
Mint& operator ++ (){ value++;if (value==mod) value=0; return *this; }
Mint& operator -- (){ if(value==0) value = mod; value--; return *this; }
Mint& operator += (const Mint& o) { value = value+o.value; if(value>=mod) value-=mod; return *this; }
Mint& operator -= (const Mint& o) { value = value-o.value; if(value<0) value+=mod; return *this; }
Mint& operator *= (const Mint& o) { value = (int64_t)((int64_t)(value) * (int64_t)(o.value) % mod); return *this; }
Mint& operator /= (const Mint& o) { return *this *= o.inverse(); }
friend Mint operator ++ (Mint& a, int32_t) { Mint r = a; ++a; return r; }
friend Mint operator -- (Mint& a, int32_t) { Mint r = a; --a; return r; }
friend Mint operator + (const Mint& a, const Mint& b) { return Mint(a) += b; }
friend Mint operator - (const Mint& a, const Mint& b) { return Mint(a) -= b; }
friend Mint operator * (const Mint& a, const Mint& b) { return Mint(a) *= b; }
friend Mint operator / (const Mint& a, const Mint& b) { return Mint(a) /= b; }
friend bool operator == (const Mint& a, const Mint& b) { return a.value == b.value; }
friend bool operator != (const Mint& a, const Mint& b) { return a.value != b.value; }
friend bool operator < (const Mint& a, const Mint& b) { return a.value < b.value; }
friend bool operator > (const Mint& a, const Mint& b) { return a.value > b.value; }
friend bool operator <= (const Mint& a, const Mint& b) { return a.value <= b.value; }
friend bool operator >= (const Mint& a, const Mint& b) { return a.value >= b.value; }
};
using mint = Mint<(int)1e9+7>;
int dp[501][3000][501];
void solve(){
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(0);
```
Yes, removing the spaces did worked.
username_1: Maybe you can push a bit on that fix. But in the past it was ignored. Maybe we should fork the C++-parser...
username_0: Okay, thanks @username_1
username_1: @username_0 to bring this issue forward: Is your problem only related to the parser bug (upstream issue) or are there further issues in our plugin?
username_0: @username_1 Right now I only got this issue, so I thought I should notify it.
Is Neovim 0.5 going to use this treesitter implementation?
And if I get further issues I'll notify it here. Most of the time right now, I'm working in C++, if this issue is there in other languages too, I'm not aware of it.
Thanks for your help!
username_1: @username_0 official tree-sitter support is moved to neovim 0.6. Tree-sitter will only be experimental in nvim 0.5. The issue you encountered is only tied to the specific case of spaces in the `operator` definition in C++ and could be easily fixed by merging the upstream PR. Other languages are not affected.
username_0: @username_1 should I close the issue?
username_2: Not sure, this is not the first time we encounter bugs with the C++ parser, so keeping this open would allow us to push changes in the C++ parser.
Forking the parser is also completely imaginable, but we would need to rebase it pretty often...
username_0: Okay then @username_2, I'll keep this open.
username_3: I went ahead and merged the PR that fixed this issue. Thanks to @calixteman for the actual fix.
I have merge access to the cpp treesitter repo, now. If there are any other cpp parser bugs, make sure to mention me.
Not all bugs are actually fixable due to the nature of the c++ grammar (some things are simply impossible without actual compilation). Others are, like this one.
Status: Issue closed
|
uchicago-cs/chiventure | 625584457 | Title: Implement the new interfaces for accessing libobj to create the game_t and room_t objects in game-state and use as example code
Question:
username_0: The final task for setting up the new libobj interface is to provide an example of how to use these new interface and objects to create something saved in the game-state. This code should implement the interface to take the global game-state object and set up the basic game-object and then create room-objects from libobj objects. This may, depending on the current status of other teams, involve working with other teams, but theoretically could be written properly without need for much cross-team consultation.
This will serve two functions. First, the code may serve as an example for other teams to be able to implement their own use of the interface. Second, this will allow us to replace the previous libobj's implementation that would set up the game and rooms directly from the wdl document.
Answers:
username_0: I have modified the existing chiventure code by splitting up `load_wdl` into two functions: `load_wdl()` and `load_objects()`, respectively.
`load_wdl()` remains in the same location, however it now solely calls functions that parse the WDL. Instead of returning a game, it now returns a `wdl_ctx` struct, which contains either a hash table of `object_t`s or a tree of `obj_t`s.
This is passed to `load_objects()` which is in a module `common/load_objects`. This converts these intermediary objects into a game.
For YAML files, it calls the same functions as old Chiventure did. However for WDZ, instead this is where various teams will write up their code for converting these objects into game aspects. Ideally, this means that future WDL teams will not need to be involved in any changes to code that converts from `object_t`s to a `game_t`.
Status: Issue closed
username_0: The merge is complete. Developers in Chiventure now have all interfaces needed to implement conversion from wdl `object_t`s to a `game_t`. This should mean, overall, the WDL team will run into much less issues of needing to rewrite different parts of code for every single team.
username_1: Issue Score: ✔️+
Comments:
Good work overall! One thing to work on for future issues: for a task of this size, we would have expected to see more status updates in between the initial issue creation and the creation of the relevant PR.
<!-- {"score": "check-plus"} --> |
kubernetes-csi/docs | 502795718 | Title: Driver Feature table is a bit out of control
Question:
username_0: Keeping a table of [production plugins and the features they implement](https://github.com/kubernetes-csi/docs/blob/master/book/src/drivers.md#production-drivers) is handy, but as the project has grown, along with the number of plugins and features; this simple table has become a bit difficult to view and even more difficult to update and keep accurate.
I don't have a solution to reformat what's in place, but I wonder if instead of trying to keep this up to date for features, replace the features columns with a link to the plugins documentation? Said link could be a plugin doc (we could provide a template for plugin authors to use) and we could host that in this repo under sub-sections (sub-section for each plugin).
The downside of this is that users like to have the matrix in front of them to compare things, so this doesn't help with that. It does however clean things up in terms of listing available plugins and their features; it provides the information, just no easy way to compare plugins.
Opening this issue to propose the template idea or to solicit input for a better suggestion if someone has one.
Answers:
username_1: Removing the feature columns sound reasonable to me and leaving only the most important pieces.
What do you think of keeping persistence and access mode columns and removing all the other subfeatures after?
username_2: In my opinion, CSI driver features could split into two kinds, basic features and advanced features.
Basic features include creating/deleting volumes and mounting/unmounting volume on Pods.
Advanced features include expanding volume, snapshot management and other new features.
I think writing down features in one drivers table is very handy and intuitive. What do you think of integrating separated features columns in an advanced features column? Host feature table in this repo under sub-sections is a good idea to show various drivers.
We can keep Persistence, Access Mode and Dynamic Provisioning columns.
username_1: Do you mean something like this?
| Driver | Description | CSI Spec | Persistence | Access Mode | Provisioning | Other Features |
|-|-|-|-|-|-|-|
| foo | awesome driver | v1.0 | Persistent | RWO | Yes | Snapshots, Resizing, Cloning |
| bar | cool driver | v1.0 | Persistent | RWX | Yes | Resizing |
username_2: Yes.
username_0: @username_2 @username_1 I like that idea, md tables are still somewhat painful but the consolidation of columns makes it MUCH better IMO. It also solves the annoying problem of adding a `?` mark to fill the space for unknowns.
username_2: Hi, @username_0 @username_1
Can we name the column's head as `Other Features`?
What do you think of the options allowed to be filled are `Raw Block`, `Snapshot`, `Volume Expansion`, `Volume Cloning`, `Topology`, `Volume Stats` in `Other Features` ?
I would like to submit a PR for this issue.
username_1: That sounds good to me. I'm still unsure about topology and volume stats because I feel like those are not quite optional features that a user would make a comparison on. |
axios/axios | 931881553 | Title: Can interceptors.request.use be async?
Question:
username_0: <!-- Click "Preview" for a more readable version --
Please read and follow the instructions before submitting an issue:
- Read all our documentation, especially the [README](https://github.com/axios/axios/blob/master/README.md). It may contain information that helps you solve your issue.
- Ensure your issue isn't already [reported](https://github.com/axios/axios/issues?utf8=%E2%9C%93&q=is%3Aissue).
- If you aren't sure that the issue is caused by Axios or you just need help, please use [Stack Overflow](https://stackoverflow.com/questions/tagged/axios) or [our chat](https://gitter.im/mzabriskie/axios).
- If you're reporting a bug, ensure it isn't already fixed in the latest Axios version.
- Don't remove any title of the issue template, or it will be treated as invalid by the bot.
⚠️👆 Feel free to these instructions before submitting the issue 👆⚠️
-->
<!-- Click "Preview" for a more readable version --
Please read and follow the instructions before submitting an issue:
- Read all our documentation, especially the [README](https://github.com/axios/axios/blob/master/README.md). It may contain information that helps you solve your issue.
- Ensure your issue isn't already [reported](https://github.com/axios/axios/issues?utf8=%E2%9C%93&q=is%3Aissue).
- If you aren't sure that the issue is caused by Axios or you just need help, please use [Stack Overflow](https://stackoverflow.com/questions/tagged/axios) or [our chat](https://gitter.im/mzabriskie/axios).
- If you're reporting a bug, ensure it isn't already fixed in the latest Axios version.
- Don't remove any title of the issue template, or it will be treated as invalid by the bot.
⚠️👆 Feel free to these instructions before submitting the issue 👆⚠️
-->
#### Describe the issue
*I suspect the Okta library might be buggy but before chasing that rabbit hole I thought I'd start here. This issue looks like the only possible cause...*
Question on using the `interceptors.request.use`, our function is `async` because we're using Okta's auth service to retrieve the access token, which is an `async` method:
```js
axios.interceptors.request.use(async (config) => {
const accessToken = await authService.getAccessToken();
if (!accessToken) {
return invalidToken(); // this is firing sometimes
}
return {
...config,
headers: {
...config.headers,
authorization: `Bearer ${accessToken}`
}
};
}, (error) =>
Promise.reject(error)
);
```
As I commented above, `invalidToken()` fires somewhat randomly. Sometimes a user authenticates, returns to our app, our APIs return data, then the app triggers a reauth because it goes to `invalidToken` because the `accessToken` is empty. This will happen within 30 seconds of the app loading, so I know the token should be fine.
Looking at this example from the read me
```js
axios.interceptors.request.use(function (config) {
config.headers.test = 'I am only a header!';
return config;
[Truncated]
Code snippet to illustrate your question
```js
see.above();
```
#### Expected behavior, if applicable
A clear and concise description of what you expected to happen.
#### Environment
- Axios Version 0.18.1
- Adapter XHR?
- Browser Chrome
- Browser Version 91.0.4472.114
- Node.js Version 10.22.1
- OS: Windows 10
- Additional Library Versions React 16.13
#### Additional context/Screenshots
Add any other context about the problem here. If applicable, add screenshots to help explain.
Answers:
username_1: According to the docs request interceptors are presumed to be asynchronous and if you want it to be synchronous the flag `synchronous` has to be set to `true`. So in your case, it should not be set to `true`. |
svenhjol/Charm | 535007002 | Title: Config not working
Question:
username_0: I have a load of stuff disabled, but most of it seems to still be present in game. I'm seeing all the disabled items/potions in jei and creative tabs and can craft them too. List of stuff below:
- block of redstone sand
- coffee
- bookshelf chests
- gold lantern
- composters giving mushrooms
- emerald block opening trades
- end portal runes
- villages in more biomes
- wolf variants
- totems in inventory
- more barrels
- crates
- flavoured cake
- curse break
Answers:
username_1: Thanks for the report. What version of Charm are you using?
username_0: latest 1.14 ver from curse
Status: Issue closed
username_1: Fixed in 1.4.4 which will be released either tonight or tomorrow. |
RocketChat/Rocket.Chat.Kotlin.SDK | 299446805 | Title: Support for login with email
Question:
username_0: Presently the SDK supports login only with username. In order to support login with email, the payload needs to be built with the key `user` as mentioned in the [api docs](https://rocket.chat/docs/developer-guides/rest-api/authentication/login/) instead of `username` which has been used currently(see [here](https://github.com/RocketChat/Rocket.Chat.Kotlin.SDK/blob/99b4ce6c2e79a7c6f7a4d2a9e3a8b2b719b94720/core/src/main/kotlin/chat/rocket/core/internal/model/LoginPayload.kt#L7)) for supporting login with username.
**Are you willing to work on this?**
Yes
Answers:
username_1: @username_0 Sounds great. Thank you! 👍
username_0: Shall I start working on this @username_1 ?
username_1: @username_0 Yes, please.
username_0: @rafael.kellermann I am getting the following log while trying to run tests for login
`Class not found: "chat.rocket.core.internal.rest.LoginTest"Empty test suite.`
username_0: @username_1 I was working on this one.
username_1: @username_0 Great! Sorry about that, just removed @filipedelimabrito from assigned.
Status: Issue closed
|
dayandias/CS3746392021 | 814096686 | Title: Tester: <NAME>, Developer; <NAME>, Menu Project
Question:
username_0: | Status | Comments
-- | -- | --
Late work and communication issues | Yes/No | No
Code available in GitHub | Yes/No | Yes
Apk available at the root | Yes/No | Yes
Screenshots available at the root | Yes/No | Yes
The app crashes | Yes/No | No
Min SDK | ? | 16
Target SDK | ? | 30
App tested | On phone / on emulator / both | Emulator
SMS (button) – correct behavior | Yes/No | Yes
Phone (button) – correct behavior | Yes/No | yes
Web (button) – correct behavior | Yes/No | Yes
Map (button) – correct behavior | Yes/No | Yes
Share (button) – correct behavior | Yes/No | Yes
New Activity (button) – correct behavior | Yes/No | Yes
Help (button) – correct behavior | Yes/No | Yes
About (menu) – correct behavior | Yes/No | Yes
Settings (menu) – correct behavior | Yes/No | Yes
General comments about the app | Good app |
yiisoft/yii-docker | 490678787 | Title: Docker for tests
Question:
username_0: I'd like to discuss the possibility of adding Docker setups in each repositories to help developers running tests.
This wouldn't be a requirement to run tests, and it won't be involved during the Travis builds.
It would allow us to run `docker-compose up` in the tests directory and immediately having a working environment to fiddle with the extension.
Answers:
username_1: So, there are 7.4 images now. But which tests can/should be run since there's no more `yii-core`?
username_2: Tests for each individual package: https://www.yiiframework.com/status/3.0. But that's not a priority for now. Travis is OK except it being a bit slow. We're considering GitHub actions though.
username_1: What takes a lot of time (and space) is a `composer install` (with dev-packages) for each package, but I think it is the cleanest way to do it.
I'd recommend to run tests which do not need external services such as DBs or caches, since we already have found issues with PHP versions or operating systems.
We can start with a smaller set and/or use branches or triggers to run more if required.
username_2: I think it's better to focus on something else... Travis currently isn't problematic. |
enovoa/EpiNano | 1016221243 | Title: Some problems about RRACH
Question:
username_0: In this paper, I see that you used 5-mers of RRACH to do lots of statistics work.
But I dont see any operations in your code about RRACH.
Here are my questions about it.
In test_data/make_predictions/run.sh
1. How do you get your rrach.q3.mis3.del3.linear.dump in line 46? I think there may something missing.
what are your train data and did you extact all RRACH data?
And if so, how many 5-mers of RRACH you used or the ratio of RRACH in all 5-mer?
2. I want know if the non-RRACH need to be droped before before run Epinano_Predict.py.
Because it seems your cmds and codes do **not** do that thing or I missed it somewhere.
Can you tell me the part you deal with RRACH please?
Answers:
username_1: Hi @username_0 ,
I am awfully for the late reply.
The RRACH motif can be easily selected with regular expression. /[AG][AG]AC[ACT]/ on forward strand or /[TGA]GT[TC][TC]/ on reverse strand.
1.
The `rrach.q3.mis3.del3.linear.dump` file can be trained using relevant features denoted by the name, aka, the quality score, mismatch frequency, deltetion frequency for the 3rd/middle base in the 5mer. You can follow the steps in the `train_models` folder to train the model.
2.
You need to drop non-RRACH results if the model you used was trained with data containing only RRACH motifs.
You can filter it out either before or after making predictions.It does not matter.
Hope this helps and please let me know if you need further help or clarification.
Status: Issue closed
username_0: Actually, I trained a new algorithm and wanna test my algorithms on your data.
It works well on example data, so I wanna get all rrach data.
I tried to rerun run.sh to generate 5mer.csv from your raw data(https://trace.ncbi.nlm.nih.gov/Traces/sra/?study=SRP174366) SAMN10640338/SAMN10640337 and it comes some new issues.
Here is my command
_bin/guppy_basecaller -c rna_r9.4.1_70bps_hac.cfg --compress_fastq -i ./fast5/ -r -s ./mod_fastq/ --fast5_out -x 'auto'
cat */*.fast.gz > mod_fastq.gz
gizp -d mod.fastq.gz
minimap2 --MD -t 6 -ax map-ont cc.fasta mod.fastq | samtools view -hbS -F 3844 - | samtools sort -@ 6 -o mod.bam
samtools index mod.bam
python ../../Epinano_Variants.py -R cc.fasta -b mod.bam -n 6 -T t -s ../../misc/sam2tsv.jar
python ../../misc/Slide_Variants.py ko.plus_strand.per.site.csv 5_
The rows of result rows is no more than 10k, which is less than your example data.
I wanna know what's the problem of my cmds.
Besides,can you send me your entire 5mer.csv of unmod/mod or tell me the exact number of rows about your RRACH 5-mers.
Many thanks. |
kubevirt/kubevirt | 825171526 | Title: Run vmi on ssd nodes, how to persistent the datas between vmi restarts ?
Question:
username_0: **What happened**:
I am running VMI on SSD nodes, and the volume types i used are containerDisk and emptyDisk. After the VMI restarts, the data on the disks are gone. From the user-guide https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#volumes
it seems if i want to persistent the data between VMI restarts, i have to use local PV/PVC ?
do you have any suggestions or "best practice" on running VMI on SSD nodes ?
thanks very much~
**What you expected to happen**:
vm data remains between VMI restarts.
**Environment**:
- KubeVirt version (use `virtctl version`): 0.37
- Kubernetes version (use `kubectl version`): 1.16
- VM or VMI specifications:
--
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: vm-c7-name01
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: harddisk
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- name: default
bridge: {}
resources:
requests:
memory: 512M
firmware:
uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: kubevirt-base-centos7-image
- name: harddisk
emptyDisk:
capacity: 50Gi
- name: cloudinitdisk
cloudInitNoCloud:
userDataBase64: I2Nsb3VkLWNvbmZpZwpwYXNzd<KEY>
--
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
We are using local storage on the production environments, and we don't have ceph or other distributed storage on our production environments.
Answers:
username_1: Hey, I'd recommend using a VirtualMachine rather than a VirtualMachineInstance.
With a VirtualMachine you can dynamically create a PVC to host a disk and impoart your container disk into that PVC. This will allow the data to persist across restarts. You can accomplish this by installing cdi and using the DataVolumeTemplate section of the VM spec.
```
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: vm-alpine-datavolume
name: vm-alpine-datavolume
spec:
dataVolumeTemplates:
- metadata:
creationTimestamp: null
name: alpine-dv
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: local
source:
registry:
url: docker://some-registry/some-image:some-tag
running: false
template:
metadata:
labels:
kubevirt.io/vm: vm-alpine-datavolume
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: datavolumedisk1
machine:
type: ""
resources:
requests:
memory: 128Mi
terminationGracePeriodSeconds: 0
volumes:
- dataVolume:
name: alpine-dv
name: datavolumedisk1
```
username_2: Thanks, I also encountered this problem, I will try to take a look
username_0: Thank you David, yes, pv/pvc is a perfect solution, but the problem is we don't have a distributed storage in our production environment, we are using local storage. Local pv/pvc can works, but that's need much effort to prepare the local pv/pvc.
username_1: what are the pain points for you all? would using the hpp for local pvcs help? (https://github.com/kubevirt/hostpath-provisioner-operator)
username_0: The pain point is we want the local pvs/pvcs can be dynamically provisioned, but currently we don't find an available implementation/solution. Looks the combination of cdi and hpp is a potential solution for us, I will try later.
Thanks again ~
Status: Issue closed
|
cottrellr/MTL_aquaculture | 536058901 | Title: Policy mentions
Question:
username_0: policies that mention/focus on lower trophic production.
Answers:
username_0: 2019 CALIFORNIA BILL
**SB 262, McGuire. Marine resources: commercial fishing and aquaculture: regulation of operations.**
"This bill, by December 31, 2020, would require the commission, in consultation with the Department of Fish and Wildlife, any other state agency relevant to coastal permitting, and stakeholders, to develop guidance for applicants for coastal development permits for shellfish, seaweed, and **other low-trophic mariculture production** and restoration, as specified. The bill would repeal these provisions on July 1, 2021."
link: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200SB262
username_0: TNC 2012

link: https://www.conservationgateway.org/News/Pages/tnc-food-security-and-sus.aspx
username_1: IUCN report on 'Sustainability of Fish Feed
in Aquaculture:Reflections and Recommendations'
https://portals.iucn.org/library/sites/library/files/documents/2017-026-Summ.pdf
username_1: European Commission Scientific Advice Mechanism - Food from the Oceans Report states:"The scientific evidence unambiguously points to sustainable "culture" and
"capture" at lower trophic levels (i.e. levels in the ocean food web below the
carnivore levels currently mostly exploited) as the way to bring about such
an increase. "
Link:https://ec.europa.eu/research/sam/pdf/sam_food-from-oceans_report.pdf
username_0: https://www.worldfishing.net/news101/fish-farming/low-trophic-aquaculture-in-the-spotlight
"More than seventy scientists and industry professionals from 16 countries are gathering this week in Tromsø to launch the EU-funded **AquaVitae project (total budget of €8 million)**. Over the next four years, they will work to increase aquaculture production of low-trophic species, in and around the Atlantic Ocean, in sustainable ways."
"This correlates with recommendations made in the **Food from the Oceans report (2017)**, which highlighted the need to expand low- and multi-trophic marine aquaculture as an ecologically efficient source of increasing food and feed."
username_0: WRI (2014):https://www.wri.org/blog/2014/06/sustainable-fish-farming-5-strategies-get-aquaculture-growth-right
Title: **"Sustainable Fish Farming: 5 Strategies to Get Aquaculture Growth Right"**
5. **Eat fish that are low on the food chain.** Fish farming can ease pressure on marine ecosystems if farmed fish don’t need large amounts of wild fish in their diets. **Consumers should therefore demand species that feed low on the food chain—“low-trophic” species such as tilapia, catfish, carp, and bivalve mollusks.** In emerging economies, where consumption of low-trophic species is still dominant, emphasis should continue with these species even as billions of people enter the global middle class in coming decades. At the same time, because fish are a major source of nutrition for more than a billion poor people in the developing world, growing aquaculture to meet the food and nutritional needs of these consumers will be essential.
username_0: https://books.google.com/books?id=tGl6DwAAQBAJ&pg=PA31&lpg=PA31&dq=promote+low+trophic+aquaculture&source=bl&ots=KAN-zH4PVa&sig=ACfU3U03fTzFEFEkFLZO8vb8lnyrVB5oxg&hl=en&sa=X&ved=2ahUKEwjxjb-0qbHmAhUhFjQIHUUjBrU4ChDoATACegQICRAB#v=onepage&q=promote%20low%20trophic%20aquaculture&f=false
Book: **"The Sixth APFIC Regional Consultative Forum Meeting"** by Food and Agriculture Organization of the United Nations 2018

username_0: https://swssdc-static.s3.amazonaws.com/uploads/2017/06/Dan-Lee.pdf
**Seafood Summit, Seattle, 2017** by <NAME>
"Naylor et al.’s priorities for the aquaculture
industry
1. **Expand farming of low trophic level fish**
2. Reduce fishmeal and fish oil inputs in feed
3. Develop integrated farming systems
4. Promotion of environmentally sound aquaculture practices"
username_0: Diane (2009) "Aquaculture Production and Biodiversity Conservation": https://academic.oup.com/bioscience/article/59/1/27/306930
"Other sources of protein will become important components of fish feed, including plant protein and waste products from other operations, as will the **culture of more species at lower trophic levels** for human consumption, since these species do not require fish protein in feed. Alternate feed derivations have been a major subject of aquaculture research and development, and efforts are intensifying (Watanabe 2002, Opstvedt et al. 2003).
username_0: California 2019 legislation SB 69
 |
IIC2115/Syllabus | 493777695 | Title: Matplotlib en jupyter
Question:
username_0: Hola!
Al correr mi código con networkx para graficar en Pycharm no logro que me muestre el grafo (sí hice matplotlib.pyplot.show() ), y al tratar de correrlo en jupyter me dice que no puedo utilizar matplotlib.
Busqué en internet y traté con: %matplotlib notebook , pero me dice que el módulo no existe.
Saben cómo lo podría arreglar?
Gracias!
Answers:
username_1: Tienes que instalar matplotlib, puedes hacerlo directamente en jupyter con `!pip3 install matplotlib` en una celda de código.
Saludos,
Pablo 😁👍
Status: Issue closed
|
TechnologySolutionsUkLtd/ascii-protocol | 920723994 | Title: compatibility .net core 3.1
Question:
username_0: I am not able to get any bluetooth or Usb transports to appear in my .net core 3.1 wpf application. The only transport I see is Physical = Unknown.
Answers:
username_1: While the vast majority of the code is platform independent the connection to the reader via Bluetooth or USB requires platform specific APIs. Which platform(s) do you intend to run your application on?
username_0: We run on Windows 10 64 bit |
facebook/buck | 87789274 | Title: Google APIs not found
Question:
username_0: I saw 2 other threads on this, but one was about changing 19 to 19-1 and the other didn't seem to have resolution.
The specific error I am getting is this
```javascript
BUILD FAILED: Google APIs not found in /Users/username/android-sdk-macosx/add-ons/addon-google_apis-google-21/libs.
Please run '/Users/username/android-sdk-macosx/tools/android sdk' and select both 'SDK Platform' and 'Google APIs' under Android (API 21)
```
If I run
`/Users/username/android-sdk-macosx/tools/android sdk`
I get
`-bash: /Users/username/android-sdk-macosx/tools/android: No such file or directory`
Any thoughts or suggestions?
Answers:
username_1: How did you install the android sdk? It seems like you either uninstalled it or reinstalled it and the path still points to the old location.
What distro are you using?
username_1: Or what package manager. (brew, macports, apt)
Status: Issue closed
username_0: Ok so I fixed the issue, I had set the wrong path to my SDK. But whether the android SDK is right or wrong you still get the 'sucess' message blurb. When I deleted it and set my path the right directory `/Applications/android-sdk-macosx` instead of `~/android-sdk-macosx` it worked. Thanks
username_2: Actually seems that android studio does not install google apis anymore in that path https://stackoverflow.com/a/7860557/11452286 |
dg9vh/MMDVMHost-Dashboard | 174463894 | Title: Active modes green/red/grey don't work properly
Question:
username_0: Hi
I noticed that if disabled System Fusion by MMDVM.ini, in the dashboard the System Fusion become grey
But If for example, I put the DMR enable = 0 in MMDVM.ini, running the MMDVMHost in the dashboard I see that it is enabled, the same for DSTAR enable = 0 and if I disable one timeslot, all remain green.
The only situation that all become RED is if I stop the MMDVMHost process .
Could you verify please ?
73 de IW9GRL
Answers:
username_1: Hi,
you allways have to take a look at both components in MMDVM.ini: the mode itself and the networking-component. All the magic happens in function showMode($mode, $mmdvmconfigs), here depending on the mode (DSTAR network is checked separately from others, because of the dependency to the ircddbgateway, to make it perfect, it should also include YSF network, because this regularly depends on YSFGateway but actually not implemented) the configuration and availablility of the corresponding process is checked.
So if you put DMR on 0 but leave DMR Network on 1, it is logically so, that DMR would be grey but Network stays green, because the MMDVMHost still is linked to the Master.
If you disable only one timeslot (or both) but leave DMR Network enabled, you only see the grey buttons on the timeslot-buttons but not on the Network...
I my eyes everything works ok.
Maybe there is an issue with browser-cache? Did you clean up the temporary internet files or browser-cache?
username_0: Hi
thank you for your answer .
All works as you said only if MMDVHost run by systemctl, but if you try to run the MMDVMHost by hand running : sudo ./blabla/MMDVMHost ./bbbb/MMDVM.ini and in the ini file you have for example put to ZERO DMR mode and DMR network, or changing only a timeslot value, it doesn change in the dashboard.
thank you
73
username_1: Sorry, I can't confirm that. The dashboard checks first the Enabled-state in the MMDVM.ini-file and if enabled it checks with pgrep <programname>, if the MMDVMHost or the ircddbgateway(d) is running and if this is running, it shows green, if not, it is red, if disabled, it is grey whether the corresponding process is running or not.
Do you really use the same MMDVM.ini-file by calling MMDVMHost as configured in the config.php of the dashboard?
username_0: Hi Kim
really sorry, I changed the ini file name ........ MY MISTAKE !!!!
thank you for your kindly support ......
Status: Issue closed
username_1: No problem :-) I am allways trying to find the source of the problem... your issue gave me the point to keep the YSFGateway in mind like the ircddbgateway... so this would be one thing to be implemented next...
username_0: Thank you again .... best regards |
broadinstitute/gatk | 233667183 | Title: gatk-protected should tag releases using annotated tags rather than unannotated tags.
Question:
username_0: @username_0 commented on [Fri Mar 17 2017](https://github.com/broadinstitute/gatk-protected/issues/940)
Since using unannotated tags causes a straightforward `git describe` to fail.
---
@lbergelson commented on [Fri Mar 17 2017](https://github.com/broadinstitute/gatk-protected/issues/940#issuecomment-287459494)
we should sign them too...
Status: Issue closed
Answers:
username_0: Closing as obsolete |
pact-foundation/pact-ruby | 743893861 | Title: Request omits [] in queries after 1.55.6
Question:
username_0: Hi, looks like the recent upgrade broke some query params format during pact verification.
Example:
in pact broker the request is defined correctly:
`"query": "fruits[]=mango&fruits[]=apple"`
whether during pact verification it is sent like:
`with GET /receipts?fruits=mango&fruits=apple`
So the params are not treated as an array.
Looks like the fix was done already: https://github.com/pact-foundation/pact-support/commit/4e9ca9c1f9025574dd2ca40185ec8174e4925a34
But after updating the problem still persists. I am not on Rails btw.
Answers:
username_1: We are seeing something similar on a Rails / Grape project
username_2: Sorry, the change was try to and fix the warnings about the query matchers that have been there for 4 years. Lock pact-support to 1.15.1 until I can work out how to make it work for every situation.
username_2: FYI, have this scheduled in for my OSS day on Wednesday.
username_2: Ok, update to 1.16.4, and everything should be happy again.
Status: Issue closed
username_0: Thank you! All is good! |
knative/docs | 443551192 | Title: Failed to set up the webhook in Github for gitwebhook-go sample
Question:
username_0: I followed the tutorial at https://github.com/knative/docs/tree/master/docs/serving/samples/gitwebhook-go.
I used the GKE as the Kube backend. After configuring the domain and the static IP, I still failed to create a valid webhook available to receive the payload successfully.
SSL validation option is only available when we use HTTPS protocol, like https://<domain>, and it is not available for HTTP protocol, like http://<domain> as the payload URL.
No matter I used https or http, I failed to get successful response from the webhook, when I delivered any payload to the URL http://<domain> or https://<domain>.
Answers:
username_0: @username_1 Do you have any idea?
username_0: This is the body returned:
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>405 Method not Allowed - DOSarrest Internet Security</title>
<link href="/DOAError/assets/css/core.css" rel="stylesheet" type="text/css" />
<link rel="shortcut icon" href="/DOAError/assets/images/favicon.ico" /><link rel="apple-touch-icon" href="/DOAError/assets/images/icon-protection.png" />
<script>
function id_process(n){for(var t=n+"=",r=document.cookie.split(";"),e=0;e<r.length;e++){for(var g=r[e];" "==g.charAt(0);)g=g.substring(1,g.length);if(0==g.indexOf(t))return g.substring(t.length,g.length)}return null};
</script>
</head>
<body onload="myFunction(), myFunction2(), myFunction3()">
<div id="apDiv1">
<table width="100%" border="0" cellspacing="0" cellpadding="0" bgcolor="#444444">
<tr>
<td colspan="3" bg background="/DOAError/assets/images/bottom_separator.png"><img src="/DOAError/assets/images/bottom_separator.png" width="5" height="9" /></td>
</tr>
<tr>
<td width="18"><img src="/DOAError/assets/images/bottom_trans_spacer.png" alt="" width="18" height="18" /></td>
<td><img src="/DOAError/assets/images/bottom_trans_spacer.png" alt="" width="18" height="18" /></td>
<td width="18"><img src="/DOAError/assets/images/bottom_trans_spacer.png" alt="" width="18" height="18" /></td>
</tr>
<tr>
<td width="18"><img src="/DOAError/assets/images/bottom_trans_spacer.png" width="18" height="55" /></td>
<td width="148" align="left" valign="top"><table width="960" border="0" align="center" cellpadding="0" cellspacing="0">
<tr>
<td align="left" valign="top" class="bottomtext">DOSarrest Internet Security is a cloud based fully managed DDoS protection service. This request has been blocked by DOSarrest due to the above violation. If you believe you are getting blocked in error please contact the administrator of
<span id="host"></span><script>
function myFunction() {
var x = location.host;
document.getElementById("host").innerHTML = x;
}
</script>
</span> to resolve this issue.</td>
<td width="18"><img src="/DOAError/assets/images/bottom_trans_spacer.png" alt="" width="18" height="55" /></td>
<td align="right" valign="bottom"><img src="/DOAError/assets/images/da_logo.png" width="148" height="50" alt="DOSarrest Internet Security | DDoS Protection" title="DOSarrest Internet Security | DDoS Protection" /></a></td>
</tr>
</table></td>
<td width="18"><img src="/DOAError/assets/images/bottom_trans_spacer.png" width="18" height="55" /></td>
</tr>
<tr>
<td width="18"><img src="/DOAError/assets/images/bottom_trans_spacer.png" width="18" height="18" /></td>
<td><img src="/DOAError/assets/images/bottom_trans_spacer.png" width="18" height="18" /></td>
<td width="18"><img src="/DOAError/assets/images/bottom_trans_spacer.png" width="18" height="18" /></td>
</tr>
</table>
</div>
<div id="apDiv2">
<table width="100%" height="100%" border="0" align="center" cellpadding="0" cellspacing="0">
<tr>
<td align="center" valign="middle"><table width="760" border="0" align="center" cellpadding="0" cellspacing="0">
<tr></tr>
<tr>
<td align="center"><table width="730" border="0" cellspacing="0" cellpadding="0">
<tbody>
<tr>
<td width="1"><img src="/DOAError/assets/images/bottom_trans_spacer.png" width="1" height="340" alt=""/></td>
[Truncated]
</tr>
<tr>
<td align="left" valign="top"><p class="bodytext"><strong>405 Method not Allowed: </strong>This resource does not accept requests of this method. Please check that you are using a method specified in the Allow header of this response.</p>
<p class="bodytext">DOSarrest Internet Security is a cloud based fully managed DDoS protection service. This request has been blocked by DOSarrest due to the above violation. If you believe you are getting blocked in error please contact the administrator of
<span id="host3"></span><script>
function myFunction3() {
var x = location.host;
document.getElementById("host3").innerHTML = x;
}
</script>
</span> to resolve this issue.</p></td>
</tr>
<tr></tr>
</table></td>
</tr>
</table>
</div>
</body>
</html>
```
username_1: Sorry, ive not tried this sample.
/cc @rgregg @mattmoor
username_0: We definitely need a valid public available domain name for the application to create the webhook for github to connect.
I used to have a domain name, but that name is not available over the internet for me.
username_1: Yes you are correct. I just rewrote the TLS content and as you found, a custom domain is required in order to handle HTTPS requests. Ill start looking over your PR now. Thanks! |
cuba-platform/cuba | 485747304 | Title: Health Check URL doesn't work if no REST API included
Question:
username_0: ### Environment
- Platform version: 7.1-SNAPSHOT
### Description of the bug or enhancement
- Minimal reproducible example
1. Create a new application with no REST API app component
2. Start the app
3. Open the Health Check URL ( see https://doc.cuba-platform.com/manual-7.1/health_check_url.html)
- Expected behavior
OK
- Actual behavior
Redirected to `main`
Caused by: https://github.com/cuba-platform/cuba/issues/630
Answers:
username_1: See the updated [docs](https://doc.cuba-platform.com/manual-7.1/health_check_url.html)
Status: Issue closed
|
hypeJunction/hypeAttachments | 275056096 | Title: WARNING: Deprecated in 2.3: elgg_view_input() is deprecated. Use elgg_view_field()
Question:
username_0: Updated to your newest push, still getting these errors.
WARNING: Deprecated in 2.3: elgg_view_input() is deprecated. Use elgg_view_field() Called from [#20] /home/content/18/9142018/html/HostedSites/Jon/social/vendor/elgg/elgg/engine/lib/views.php:1387<br /> -> [#19] /home/content/18/9142018/html/HostedSites/Jon/social/mod/hypeAttachments/views/default/plugins/hypeAttachments/settings.php:28
Answers:
username_1: Deprecation warnings are not errors.
Status: Issue closed
|
deathandmayhem/jolly-roger | 1097297330 | Title: Navbar can be scrolled on puzzle page in iOS Safari
Question:
username_0: In iOS Safari, attempts to scroll PuzzlePage cause the navbar to overscroll. This glitch is more than cosmetic, as the chat log subsequently cannot be scrolled down (dragged up) until it is first scrolled up (dragged down).
This was introduced when navbar was switched from fixed to sticky. There are other ways to address it (for instance giving body a fixed position when FixedLayout is used) but reverting navbar to fixed positioning is probably the most straightforward.<issue_closed>
Status: Issue closed |
kyma-project/test-infra | 1112643034 | Title: Use usernames required for the communication channel.
Question:
username_0: <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
When sending message from any automated process I would like to use a username which is valid for the communication channel I use. This will allow me to use @mentions and send messages to desired user if that's needed. I would like to have one service where I could query about username information.
**Reasons**
At present we use tools which have separate user databases thus use different usernames. Bots are not able to match usernames between different systems. |
JKorf/Binance.Net | 703119482 | Title: creating a new client object with default options
Question:
username_0: Hi there, when running my program in debug mode everything is fine and working as intended, however as soon as i compile it to a release version (standalone exe) it throws this error:
`Unhandled exception. System.TypeInitializationException: The type initializer for 'Binance.Net.BinanceClient' threw an exception.
---> System.MissingMethodException: Method not found: 'Void CryptoExchange.Net.Objects.RestClientOptions.set_HttpClient(System.Net.Http.HttpClient)'.
at Binance.Net.Objects.Spot.BinanceClientOptions..ctor(String spotBaseAddress, String futuresUsdtBaseAddress, String futuresCoinBaseAddress, HttpClient client)
at Binance.Net.Objects.Spot.BinanceClientOptions..ctor(String spotBaseAddress, String futuresUsdtBaseAddress, String futuresCoinBaseAddress)
at Binance.Net.Objects.Spot.BinanceClientOptions..ctor()
at Binance.Net.BinanceClient..cctor()
--- End of inner exception stack trace ---
at Binance.Net.BinanceClient.get_DefaultOptions()
at Binance.Net.BinanceClient..ctor()
at LiquidationSniper.Program.Main(String[] args)`
My Code looks like this:
` client = new BinanceClient(new BinanceClientOptions`
` {`
` ShouldCheckObjects = true,`
` ApiCredentials = new ApiCredentials(config.ApiKey, config.ApiSecret),`
` AutoTimestamp = true,`
` LogVerbosity = logVerb,`
` LogWriters = new List<TextWriter> { Console.Out },`
` AutoTimestampRecalculationInterval = TimeSpan.FromMinutes(10)`
` }); `
this was fine in 5.x release of the wrapper..
Answers:
username_1: Having the same issue
username_2: In addition to this is there an analogy to .SetDefaultOptions for M-Futures? This worked in 5.x releases. In 6.x it only works for Spot.
BinanceClientFuturesUsdt.SetDefaultOptions(new BinanceFuturesClientOptions()
{
});
username_3: It seems like a wrong version of CryptoExchange.Net is used to build the Release version. Can you try to clean the solution?
@username_2 All requests in version 6.x.x are made from the BinanceClient. So BinanceClient is no longer just for Spot, but also for USDT futures and Coin futures by using `client.FuturesUsdt.xxx`/`client.FuturesCoin.xxx` as opposed to `client.Spot.xxx`.
username_0: I used every option to clear, build and rebuild in the menue, i cleared the bin folder still get the same error.
i just updated the binance.net through the packet manager, do i have to update CryptoExchange.Net as well manually?
these versions are showing up during the build process
`2>/r "C:\Users\user\.nuget\packages\binance.net\6.0.1\lib\netstandard2.1\Binance.Net.dll"
2>/r "C:\Users\user\.nuget\packages\cryptoexchange.net\3.0.14\lib\netstandard2.1\CryptoExchange.Net.dll"`
username_3: Those versions should be fine. Do you use any of the other libraries which also use CryptoExchange.Net? Maybe you have an older version of Bitfinex.Net for example, which references an older version of CryptoExchange.Net?
username_0: no im only using binance.net nothing else besides standard .net librarys :-/
any other suggestions? otherwise ill try to rebuild it in a complete new project file/folder
username_3: I just tried it in a new application, self contained and it works in release. So I think there is some wonky caching going on.
username_0: yep i 2nd that... created a new workspace just copied my files in and installed the binance.net => works like a charm :) thanks for your support
Status: Issue closed
|
openstreetmap/iD | 471040824 | Title: right-left jumps during saving ("save" button on the right, "upload" on the left)
Question:
username_0: Ut turns out to be quite confusing for new mappers that on clicking "save" button one needs get to left side of the screen, fill form there and use "upload".
Maybe it would be better to keep "save" and "upload" menu both on the right side of the screen?
It is especially confusing on larger monitors.
Answers:
username_1: What about just blurring and darkening the UI to draw the mappers attention to the right spot? iD already darkens, so all that would be needed would be the blurring.
username_2: Thinking about this, here's my ideal upload workflow:
1. Click "Save"
2. ✅ Save successful
The upload form is just another barrier to contributing.
username_1: Actually, it's quite helpful! It lets people know about issues, let's then see the changes, create a comment, and edit changeset tags. Removing that for simplicity would have some major side effects.
username_2: To be clear, I'm not suggesting we wholly remove the save form. I'm saying that in my personal ideal world that doesn't exist we would never have to bother users about that stuff. That ideal informs my thinking about the UI, like how maybe it could be a popover under the save button instead of an entire inspector screen. But this is just hypothetical brainstorming!
username_1: Where would changeset comments and pre upload validation warnings fit into this?
username_1: Cool!
username_2: Quick mockup:
<img width="450" alt="Screen Shot 2019-07-22 at 10 35 03 AM" src="https://user-images.githubusercontent.com/2046746/61641073-7328a600-ac6c-11e9-8b1b-8b5cc88b2049.png">
Clicking "Advanced" would essentially show the existing upload form. Clicking the warnings would open the Issues pane, or possibly enter some kind of "issue review" mode where iD takes you through each issue step-by-step.
This has the following advantages:
- Less daunting for new mappers
- Only shows the most important info
- Is visually connected with the Save button (solving this original issue)
- Still lets us put whatever we want under the Advanced options
- Frees up the assistant to show something else, like the changes list
username_3: @username_2 I like the idea and direction.
Before I forget: The post-commit-screen with the community index should still get a good amount of space afterwards. Maybe as a separate thank you layer.
About the mockup:
I think it would further benefit from removing the "basic/advanced" toggle to something, that shows the additional features on demand but has more meaning than "basic/advanced".
Quick idea:
<img width="516" alt="Bildschirmfoto 2019-07-22 um 18 48 51" src="https://user-images.githubusercontent.com/111561/61649605-16e87500-acb2-11e9-872d-2de338364370.png">
- remove toggle
- add something like "show / edit changeset tags" below the text area. maybe "advanced changeset tags" so people know it will get technical?
- move the downloadbutton to anywhere else, eg in the "show / edit changeset tags" area
- show the list of changes on demand with the "2 changes" link
For the last item: Is this really useful? I was wondering recently, if an integrated https://github.com/osmlab/changeset-map view might be more useful. Would that even be possible with an uncommited changeset?
username_2: I also don't love the toggle control. Maybe something like this?
<img width="382" alt="Screen Shot 2019-07-22 at 2 36 28 PM" src="https://user-images.githubusercontent.com/2046746/61656755-74b79580-ac8f-11e9-8c05-5fee37b294d6.png">
On-the-map dif styling is definitely something I think we should do eventually.
username_2: I've been thinking even more about the save popover and I'd really like to try it. We should make saving as easy as possible for mappers—they've already done the real work after all.
Latest mockup:
<img width="348" alt="Screen Shot 2019-09-30 at 6 23 41 PM" src="https://user-images.githubusercontent.com/2046746/65897503-b0698c80-e3af-11e9-9f0d-9e0f2354f59e.png">
username_1: I like this! The current design was really meant for the old sidebar, and feels rather cramped in the assistant panel, especially when using a phone. This new design seems very elegant.
username_4: I like this too..
Actually the current design really was originally in a modal dialog that appeared on top of the map, but we moved it to the sidebar years ago so that users could click on the list of items that they edited and look at them before saving.
username_1: Oh, cool! The old design definitely worked well with the sidebar, though.
username_4: background here: https://github.com/openstreetmap/iD/issues/2378#issuecomment-60131151
username_5: Previously discussed in #5974 and probably elsewhere before that. I think it would be a highly desirable feature, but also a complex enough feature to deserve its own issue independent of this redesign.
username_2: @username_5 There's actually an open PR for dif styling: #6843
username_5: #6843 will come in handy as part of the save workflow if paired with a way to fit the map to all the areas that the user has changed. With roving editing sessions that span a large area, smaller features may not be discernible when zoomed far out, which is why the textual changed feature list has been useful. |
npm/cli | 669939936 | Title: [QUESTION] NPM Scripts double SIGINT with CTRL+C - is this intentional?
Question:
username_0: ## Where
This happened specifically with regard to the Firebase CLI. I filed an issue here - https://github.com/firebase/firebase-tools/issues/2507
```
i emulators: Received SIGINT (Ctrl-C) for the first time. Starting a clean shutdown.
i emulators: Please wait for a clean shutdown or send the SIGINT (Ctrl-C) signal again to stop right now.
i Automatically exporting data using --export-on-exit "./data" please wait for the export to finish...
⚠ emulators: Received SIGINT (Ctrl-C) 2 times. You have forced the Emulator Suite to exit without waiting for 2 subprocesses to finish. These processes may still be running on your machine:
```
I have tested this multiple times, and it is not a keyboard error or mistyped - just get's sent twice.
## Who
* n/a
## References
* n/a<issue_closed>
Status: Issue closed |
alibaba/otter | 164744761 | Title: canel自动把域名换成IP了
Question:
username_0: 
因为用的是AWS的RDS,所以给的连接地址是域名,添加到canel里自动转换为IP了,今天发现AWS的RDS域名对应的IP会变换,所以问下canel里能不能设置成域名啊。。。
求解救办法
Answers:
username_1: 写域名会变成私有IP吧, 私有IP跟机器绑定的,重启不变.
username_2: 这个应该是otter保存对象时自动转换为了ip
username_3: 我也面临这个问题
Status: Issue closed
|
Denis-711/Game_is_over | 757756514 | Title: Gameplay bugs
Question:
username_0: 1) Возможно имеет смысл поправить моменты, когда герой цепляется за один пиксель и не падает
(https://user-images.githubusercontent.com/70949270/101260432-dc656800-3740-11eb-8c15-6ac8af0824e0.png).
2) Также баг при управлении: если зажать левую клавишу и не отпускать, а затем нажать и отпустить правую клавишу, зажатие левой клавиши перестанет восприниматься игрой. Баг не критичный, но делает управление довольно неприятным.
3) Также некритичный бег, но довольно неприятный связан с хитбоксом героя (в месте головы). Возможно стоит разделить на два - тело и голова.
(https://user-images.githubusercontent.com/70949270/101260706-b6d95e00-3742-11eb-86aa-1f19d2cd0a7b.png)<issue_closed>
Status: Issue closed |
chfor183/data_science_articles | 611511026 | Title: Computer Vision
Question:
username_0: ## TL;DR
Bias
## Key Takeaways
- 1
- 2
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
## Articles/Ressources
https://medium.com/hal24k-techblog/how-to-track-objects-in-the-real-world-with-tensorflow-sort-and-opencv-a64d9564ccb1
https://towardsdatascience.com/fake-face-generator-using-dcgan-model-ae9322ccfd65 |
w3c/csswg-drafts | 813365777 | Title: [css-overflow] scrollbar-gutter should not do anything for overflow: clip.
Question:
username_0: Why? `overflow: clip` doesn't create an scrolling box, so it seems weird `scrollbar-gutter` somehow affects this.
Answers:
username_1: Neither does `overflow: visible`, but we have it take effect there, too.
username_0: Is that useful? That seems weird that now stuff like `scrollbar-width` or the weird `::-webkit-scrollbar` pseudos would have an effect on non-scrollable boxes.
username_2: It only has an effect when used with the `force` keyword, otherwise it is ignored.
The reason for this is to be able to align content outside of a scrolling box (header, toolbar...) with content inside of it.
username_0: I still think it's wrong. How does it affect inline elements? Svg?
username_2: According to the spec, it should apply to all elements. Perhaps that should be more specific.
In the Chromium implementation, it affects some inline elements (like images) pretty much like it does block elements: the width of the element remains the same, but there is an additional space reserved along one of or both of the inline edge (kind of like setting a `padding` on an element with `box-sizing: border-box`).
username_3: What's the use case for applying this property on non-scrollable boxes?
username_0: Right, I meant non-replaced inline boxes. I think instead of playing whackamole with all different layout types there are this property should not only apply to scrollable boxes. If authors want to do something based on scrollbar sizes like moving content around or what not, we can add environment variables exposing the platform scrollbar size, which is both easier to specify and more flexible.
username_1: We do need to exclude non-replaced inlines, but that's all the whackamole that we could possibly have. Scrollbars aren't something special-cased across display modes. I think the property should just be clarified to "all elements capable of displaying scrollbars", perhaps with this detailed in prose to mean anything that *will* display a scrollbar if set to `overflow:scroll`.
username_0: I disagree. In Gecko at least, the way we implement scrollers is a different box altogether (wrapping the scrolled content).
In Blink / WebKit, the scrollable area seems like a property of the layer / layout object, and looking at Blink's implementation of `scrollbar-gutter: force`, at the very least this seems to affect various CSSOM methods like scrollWidth etc. It also forces layerization, though I don't know what side effects does that have in practice, seems it might affect paint order (as it behaves as if the element was relatively-positioned / created its own stacking context). So I still think it applying to non-scrollable boxes is a bad idea.
username_2: @username_0:
If I remember correctly, `force` elements get their own layer in Chromium simply because a `PaintLayerScrollableArea` is used to calculate the thickness of the element's hypothetical scrollbars. This is only strictly necessary to support custom scrollbars, otherwise it would be enough to query the current scrollbar theme.
(One advantage of supporting custom scrollbars is that they make it possible to write accurate and portable web tests, because their thickness can be specified explicitly)
A better implementation would reduce the number of layers to those strictly needed, thanks for pointing it out.
username_0: @username_2 I find it a bit hard to believe that doesn't affect painting order in other unexpected ways, but I don't have the time to check right now.
username_2: @username_0 My point was just that it is an implementation side effect of how Chromium calculates custom scrollbars, not something mandated by the spec.
username_2: FYI, the issue where `force` was creating unneeded layers in Chromium has already been fixed:
https://chromium.googlesource.com/chromium/src/+/2402abcb4c84404546998aa55a85cb711ae50eec
username_4: In Blink/WebKit, it is true that allocating a PaintLayer (RenderLayer in WebKit) affects paint order. But that's just an implementation mistake (a big one, as it turns out! In Blink we've been working on fixing it for years now.) that should be fixed. In any case, the commit Felipe mentioned above avoids this problem in the case of `scrollbar-gutter`, and any other effect is also a bug. If an element has `overflow:clip` and `scrollbar-gutter`, then paint order should not be affected. Implementations can avoid creating any of these side-effect-causing data structures by just treating `scrollbar-gutter` as another form of margin.
Also, to re-iterate the use cases already mentioned above:
1. Align content outside of a scrolling box (header, toolbar...) with content inside of it.
2. Retain stable layout when switching from a non-scrollable to a scrollable style.
I don't know how common (2) is, but I think (1) is common.
username_5: Just to follow up on this from the meeting:
Simplifying ```scrollbar-gutter``` down to just ```auto``` and ```always``` makes the intent of the developer clear. If ```scrollbar-gutter``` is requested, the intent of the author is most likely to provided cross-UA reservation of scrollbar space regardless of classical or overlay scrollbars.
In the case of ```stable```, the use case here is only to solve for the scrollbar pop from ```overflow: auto``` in y-axis alignment of UI elements. The author may apply padding or margin to a non-scrollbar container above the scrollable container to match the gutter, but would have to conditionally remove this if the UA implemented overlay scrollbars. Using ```always``` instead would guarantee alignment without having to differentiate between UAs with classical scrollbars and with overlay scrollbars.
In the description of ```scrollbar-gutter``` it states:
The ```scrollbar-gutter``` property gives control to the author over the presence of scrollbar gutters separately from the ability to control the presence of scrollbars provided by the ```overflow``` property.
The values ```stable``` and ```force``` seem intricately dependent on the ```overflow``` property, which voids the description of being separate from the ```overflow``` property. This is why I think we should drop everything except ```auto``` and ```always```.
username_4: Restricting to only scrollable boxes is ok by me, in order to unblock consensus on the most important use case for `scrollbar-gutter`. See my comment [here](https://github.com/w3c/csswg-drafts/issues/4674#issuecomment-824245708).
username_6: Changing tags for this and #4674 in favor of a breakout session |
PennyDreadfulMTG/modo-bugs | 626341323 | Title: Swirl the Mists will restart the game if you have two or more permanents with color word in their texts that aren't protection
Question:
username_0: Gonna do in-depth check later
**What is the expected behaviour of the card?**
-
**What is the actual behaviour of the card?**
-
**Screenshot**
(Attach a screenshot or video here)
**Please provide a list of affected cards**
Affects: [Swirl the Mists]
**Card Images**
<!-- Images --> This section is automatically generated from the above list of cards. You don't need to do anything.
Status: Issue closed
Answers:
username_1: Duplicate of #708 |
rook/rook | 403461448 | Title: rook-ceph-osd-prepare jobs log level not set by operator
Question:
username_0: <!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://Rook-io.slack.com).
3. Did you already search the existing open issues for anything similar? -->
rook-ceph-osd-prepare-* job pods should be more verbose when LogLevel: DEBUG is set.
**Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
**Expected behavior:**
**How to reproduce it (minimal and precise):**
install rook-ceph operator will helm with `logLevel: DEBUG` set, and configure a cluster. Notice that the rook-ceph-osd-prepare-* job pods still have log-level=INFO in the first few lines.
manually recreating with the job works, after injecting ROOK_LOG_LEVEL in env.
**Environment**:
* OS (e.g. from /etc/os-release):
* Kernel (e.g. `uname -a`):
* Cloud provider or hardware configuration:
* Rook version (use `rook version` inside of a Rook Pod): 0.9.1
* Kubernetes version (use `kubectl version`):
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/Rook/master/toolbox.html)): |
h4v1nfun/xiaomi_miio_gateway | 412810684 | Title: Dependencies import error after 0.88
Question:
username_0: Hi.
Can you please update component to make compatiable to the latest HA version.
```
2019-02-21 10:45:59 ERROR (MainThread) [homeassistant.loader] Error loading custom_components.xiaomi_miio_gateway.media_player. Make sure all dependencies are installed
Traceback (most recent call last):
File "d:\program\python\python36\lib\site-packages\homeassistant\loader.py", line 147, in _load_file
module = importlib.import_module(path)
File "d:\program\python\python36\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "d:\User\.homeassistant\custom_components\xiaomi_miio_gateway\media_player.py", line 11, in <module>
from homeassistant.components.media_player import (
ImportError: cannot import name 'SUPPORT_TURN_ON'
```
Maybe related to https://developers.home-assistant.io/blog/2019/02/19/the-great-migration.html
And, please, add support for https://github.com/custom-components/custom_updater
Thanks!
Answers:
username_1: i have this error too after update HA
username_2: thanks for the heads up this weekend i'll look at it and try to update
username_3: news?
username_4: `Error loading custom_components.gateway.media_player. Make sure all dependencies are installed
Traceback (most recent call last):
File "/srv/homeassistant/lib/python3.6/site-packages/homeassistant/loader.py", line 147, in _load_file
module = importlib.import_module(path)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/homeassistant/.homeassistant/custom_components/gateway/media_player.py", line 12, in <module>
from homeassistant.components.media_player import (
ImportError: cannot import name 'SUPPORT_TURN_ON'`
username_4: Hi,
I have modified the code to the new dependencies, and it seems that it works
```
""" https://github.com/username_2/xiaomi_miio_gateway/blob/master/custom_components/media_player/xiaomi_miio_gateway.py """
"""
Add support for the Xiaomi Gateway Radio.
"""
import logging
import voluptuous as vol
import asyncio
from functools import partial
import homeassistant.helpers.config_validation as cv
from homeassistant.components.media_player import (
MediaPlayerDevice, MEDIA_PLAYER_SCHEMA, PLATFORM_SCHEMA)
from homeassistant.components.media_player.const import (
SUPPORT_TURN_ON, SUPPORT_TURN_OFF, SUPPORT_VOLUME_MUTE,
SUPPORT_VOLUME_STEP, SUPPORT_VOLUME_SET, SUPPORT_NEXT_TRACK)
from homeassistant.const import (CONF_HOST, CONF_NAME, CONF_TOKEN, STATE_OFF, STATE_ON)
REQUIREMENTS = ['python-miio>=0.3.7']
ATTR_MODEL = 'model'
ATTR_FIRMWARE_VERSION = 'firmware_version'
ATTR_HARDWARE_VERSION = 'hardware_version'
DEFAULT_NAME = "Xiaomi Gateway Radio"
DATA_KEY = 'media_player.xiaomi_miio_gateway'
ATTR_STATE_PROPERTY = 'state_property'
ATTR_STATE_VALUE = 'state_value'
_LOGGER = logging.getLogger(__name__)
SUPPORT_XIAOMI_GATEWAY_FM = SUPPORT_VOLUME_STEP | SUPPORT_TURN_ON | \
SUPPORT_TURN_OFF | SUPPORT_VOLUME_MUTE | SUPPORT_VOLUME_SET | SUPPORT_NEXT_TRACK
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Required(CONF_HOST): cv.string,
vol.Required(CONF_TOKEN): vol.All(cv.string, vol.Length(min=32, max=32)),
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
})
@asyncio.coroutine
def async_setup_platform(hass, config, async_add_devices, discovery_info=None):
"""Set up the Xiaomi Gateway miio platform."""
from miio import Device, DeviceException
if DATA_KEY not in hass.data:
hass.data[DATA_KEY] = {}
host = config.get(CONF_HOST)
token = config.get(CONF_TOKEN)
_LOGGER.info("Initializing Xiaomi Gateway with host %s (token %s...)", host, token[:5])
try:
miio_device = Device(host, token)
[Truncated]
if state == 'pause':
self._state = STATE_OFF
elif state == 'run':
self._state = STATE_ON
self._volume = volume / 100
else:
_LOGGER.warning(
"New state (%s) doesn't match expected values: %s/%s",
state, 'pause', 'run')
self._state = None
self._state_attrs.update({
ATTR_STATE_VALUE: state
})
except DeviceException as ex:
self._available = False
_LOGGER.error("Got exception while fetching the state: %s", ex)
```
username_0: Hi. Please create a gist for it. Can't copy code because all formating breaks down.
username_4: sorry, I was just trying to help while you were publishing the new code
username_0: I'm not the author, just a user.
I'm trying to say that I can't test your code because I can't copy it with Python formatting...
username_4: ups, I misunderstood
```
""" https://github.com/username_2/xiaomi_miio_gateway/blob/master/custom_components/media_player/xiaomi_miio_gateway.py """
"""
Add support for the Xiaomi Gateway Radio.
"""
import logging
import voluptuous as vol
import asyncio
from functools import partial
import homeassistant.helpers.config_validation as cv
from homeassistant.components.media_player import (
MediaPlayerDevice, MEDIA_PLAYER_SCHEMA, PLATFORM_SCHEMA)
from homeassistant.components.media_player.const import (
SUPPORT_TURN_ON, SUPPORT_TURN_OFF, SUPPORT_VOLUME_MUTE,
SUPPORT_VOLUME_STEP, SUPPORT_VOLUME_SET, SUPPORT_NEXT_TRACK)
from homeassistant.const import (CONF_HOST, CONF_NAME, CONF_TOKEN, STATE_OFF, STATE_ON)
REQUIREMENTS = ['python-miio>=0.3.7']
ATTR_MODEL = 'model'
ATTR_FIRMWARE_VERSION = 'firmware_version'
ATTR_HARDWARE_VERSION = 'hardware_version'
DEFAULT_NAME = "Xiaomi Gateway Radio"
DATA_KEY = 'media_player.xiaomi_miio_gateway'
ATTR_STATE_PROPERTY = 'state_property'
ATTR_STATE_VALUE = 'state_value'
_LOGGER = logging.getLogger(__name__)
SUPPORT_XIAOMI_GATEWAY_FM = SUPPORT_VOLUME_STEP | SUPPORT_TURN_ON | \
SUPPORT_TURN_OFF | SUPPORT_VOLUME_MUTE | SUPPORT_VOLUME_SET | SUPPORT_NEXT_TRACK
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Required(CONF_HOST): cv.string,
vol.Required(CONF_TOKEN): vol.All(cv.string, vol.Length(min=32, max=32)),
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
})
@asyncio.coroutine
def async_setup_platform(hass, config, async_add_devices, discovery_info=None):
"""Set up the Xiaomi Gateway miio platform."""
from miio import Device, DeviceException
if DATA_KEY not in hass.data:
hass.data[DATA_KEY] = {}
host = config.get(CONF_HOST)
token = config.get(CONF_TOKEN)
_LOGGER.info("Initializing Xiaomi Gateway with host %s (token %s...)", host, token[:5])
try:
miio_device = Device(host, token)
device_info = miio_device.info()
[Truncated]
if state == 'pause':
self._state = STATE_OFF
elif state == 'run':
self._state = STATE_ON
self._volume = volume / 100
else:
_LOGGER.warning(
"New state (%s) doesn't match expected values: %s/%s",
state, 'pause', 'run')
self._state = None
self._state_attrs.update({
ATTR_STATE_VALUE: state
})
except DeviceException as ex:
self._available = False
_LOGGER.error("Got exception while fetching the state: %s", ex)
```
username_2: hey,
open a pr
my house is remodling and my home assistant is offline
username_4: Sorry,
I do not understand what you mean
Regards
username_5: For me too (((
username_6: I have the same error.
```
Integration 'xiaomi_gateway_radio' not found.
```
username_0: Yes, now is a new error
```
2020-01-01 14:11:18 ERROR (MainThread) [homeassistant.components.media_player] Error while setting up platform xiaomi_miio_gateway
Traceback (most recent call last):
File "c:\users\igorb\appdata\local\programs\python\python37\lib\site-packages\homeassistant\helpers\entity_platform.py", line 150, in _async_setup_platform
await asyncio.wait_for(asyncio.shield(task), SLOW_SETUP_MAX_WAIT)
File "c:\users\igorb\appdata\local\programs\python\python37\lib\asyncio\tasks.py", line 442, in wait_for
return fut.result()
File "c:\users\igorb\appdata\local\programs\python\python37\lib\asyncio\coroutines.py", line 120, in coro
res = func(*args, **kw)
File "d:\User\.homeassistant\custom_components\xiaomi_miio_gateway\media_player.py", line 46, in async_setup_platform
from miio import Device, DeviceException
ModuleNotFoundError: No module named 'miio'
```
username_7: Its work
""" https://github.com/username_2/xiaomi_miio_gateway/blob/master/custom_components/media_player/xiaomi_miio_gateway.py """
"""
Add support for the Xiaomi Gateway Radio.
"""
import logging
import voluptuous as vol
import asyncio
from functools import partial
import homeassistant.helpers.config_validation as cv
from homeassistant.components.media_player import (
MediaPlayerDevice, PLATFORM_SCHEMA)
from homeassistant.components.media_player.const import (
SUPPORT_TURN_ON, SUPPORT_TURN_OFF, SUPPORT_VOLUME_MUTE,
SUPPORT_VOLUME_STEP, SUPPORT_VOLUME_SET, SUPPORT_NEXT_TRACK)
from homeassistant.const import (CONF_HOST, CONF_NAME, CONF_TOKEN, STATE_OFF, STATE_ON)
REQUIREMENTS = ['python-miio>=0.3.7']
ATTR_MODEL = 'model'
ATTR_FIRMWARE_VERSION = 'firmware_version'
ATTR_HARDWARE_VERSION = 'hardware_version'
DEFAULT_NAME = "Xiaomi Gateway Radio"
DATA_KEY = 'media_player.xiaomi_miio_gateway'
ATTR_STATE_PROPERTY = 'state_property'
ATTR_STATE_VALUE = 'state_value'
_LOGGER = logging.getLogger(__name__)
SUPPORT_XIAOMI_GATEWAY_FM = SUPPORT_VOLUME_STEP | SUPPORT_TURN_ON | \
SUPPORT_TURN_OFF | SUPPORT_VOLUME_MUTE | SUPPORT_VOLUME_SET | SUPPORT_NEXT_TRACK
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Required(CONF_HOST): cv.string,
vol.Required(CONF_TOKEN): vol.All(cv.string, vol.Length(min=32, max=32)),
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
})
@asyncio.coroutine
def async_setup_platform(hass, config, async_add_devices, discovery_info=None):
"""Set up the Xiaomi Gateway miio platform."""
from miio import Device, DeviceException
if DATA_KEY not in hass.data:
hass.data[DATA_KEY] = {}
host = config.get(CONF_HOST)
token = config.get(CONF_TOKEN)
_LOGGER.info("Initializing Xiaomi Gateway with host %s (token %s...)", host, token[:5])
try:
miio_device = Device(host, token)
device_info = miio_device.info()
model = device_info.model
_LOGGER.info("%s %s %s detected",
[Truncated]
self._muted = False
if state == 'pause':
self._state = STATE_OFF
elif state == 'run':
self._state = STATE_ON
self._volume = volume / 100
else:
_LOGGER.warning(
"New state (%s) doesn't match expected values: %s/%s",
state, 'pause', 'run')
self._state = None
self._state_attrs.update({
ATTR_STATE_VALUE: state
})
except DeviceException as ex:
self._available = False
_LOGGER.error("Got exception while fetching the state: %s", ex)
username_8: Hello I have the same problem.
I am under 0.108.4. Can you tell me which files to put in the folder of the custom component directory as well as their content. I still get the “no integration found” error. Can you also give me back the code to put in configuration.yaml.
thank you in advance
username_9: I'm getting the same error
`(MainThread) [homeassistant.config] Platform error: media_player - Integration 'xiaomi_gateway_radio' not found.`
On Home Assistant 0.108.9
username_10: same error on Home Assistant 0.109.6 |
mesonbuild/meson | 320760485 | Title: Easy way to recreate current configuration
Question:
username_0: Sometimes the build can get messed up and the best way forward is to just recreate a new build with the same configuration as now. For examples I just upgraded from Fedora 27 to Fedora 28 and build is now broken for my gst-build setup:
```
$ ninja -C build/
ninja: Entering directory `build/'
ninja: error: '/usr/lib64/libdl-2.26.so', needed by 'subprojects/gstreamer/gst/libgstreamer-1.0.so.0.1500.0', missing and no known rule to make it
```
In the autotools worlds, you just get the last run configure commandline from `head config.log`. Would be good for meson to provide something very similar.
Answers:
username_0: Actually it's not just upgrade from one major release of the OS to another but rather each time meson gets updated in my distro, I have to redo the whole thing:
```
mesonbuild.mesonlib.MesonException: Build directory has been generated with Meson version 0.46.1, which is incompatible with current version 0.47.1.
``` |
mbrubeck/agate | 873986954 | Title: Logging remote address
Question:
username_0: The log output does not include the remote address. The readme says `<remote ip or dash>` and I'm just getting a dash. Can I switch that on somehow?
Answers:
username_1: You can enable with by passing the command-line flag `--log-ip`
Status: Issue closed
username_0: Thanks - working now. |
hdon/risk-track | 288883999 | Title: Desktop / Tablet Experience Makeover
Question:
username_0: This project so far has been developed only for mobile. It might be a nice exercise to optimize access to most features when there is more display room by adding more controls.
Answers:
username_0: More ideas if working on desktop experience:
* Disable text selection in SpinControl.
* Add `cursor:pointer` to clickable elements in SpinControl and App components. |
microsoft/botframework-solutions | 478794364 | Title: BingSearchSkill uses thumbnail in Teams
Question:
username_0: #### Is your feature request related to a problem? Please describe.
It seems that Teams does not support big images for adaptive cards, so query like "who is bill gates" failed to display the portrait.
Bug report: https://ocv.azurewebsites.net/#/item/fdcl_v4_d14b609800a953cca0e3139bd2ba9f2b/
#### What is the solution you are looking for?
Use thumbnail instead
#### What alternatives have you considered?
#### Is there any other context you can provide?<issue_closed>
Status: Issue closed |
benjcunningham/govtrackr | 128412072 | Title: R version dependency
Question:
username_0: From Travis-CI [Build #6](https://travis-ci.org/username_0/govtrackr/builds/104389997):
```
* checking data for ASCII and uncompressed saves ... WARNING
Warning: package needs dependence on R (>= 2.10)
```<issue_closed>
Status: Issue closed |
KashubaK/Lively | 311492381 | Title: Implement an actionHandler, eventHandler, and a wrapper for each.
Question:
username_0: Commands fire Actions, and Actions produce Events in the Rooms in context.
We need a system to effectively and efficiently handle these, and the API needs to be a simple as possible to avoid abstraction in developer's apps. |
LightKone/antidote-rest-server | 300643292 | Title: Deleting a set
Question:
username_0: The Beatless were, back in the '60s, amazing:
```
curl localhost:3000/set/add/state-of-the-art/beatless/johnny
ok
$ curl localhost:3000/set/add/state-of-the-art/beatless/paulus
ok
$ curl localhost:3000/set/add/state-of-the-art/beatless/georgia
ok
$ curl localhost:3000/set/add/state-of-the-art/beatless/bringo
ok
$ curl localhost:3000/set/read/state-of-the-art/beatless
["johnny","paulus","georgia","bringo"]
```
At some point they decided to split, but their work remains in our hearts:
```
curl localhost:3000/set/add/oldies-but-goldies/60s/beatless
ok
```
Since they are not part of the state of the art, can The Beatless be removed from the "state-of-the-art" bucket?
Answers:
username_1: I'm not sure if this is supported by AntidoteDB itself.
You can remove objects that are inside a map; but I don't think you can remove objects/keys from a bucket.
I believe this was planned for future work, but I'm not sure of the status.
Antidote supports a `reset` operation as an alternative which reverts the objects to their zero-values, but this is not directly accessible through the [Antidote TypeScript Client](https://syncfree.github.io/antidote_ts_client/) which is how the REST server communicates with Antidote.
I can emulate it if needed (probably with 2 roundtrips), but... :-)
[Here](https://syncfree.github.io/antidote/java_client.html#transactions) is an example of deleting keys in a map using the Antidote Java Client. That's the closest I think that exists.
username_1: https://github.com/SyncFree/antidote/issues/325 |
liferay/clay | 1178429080 | Title: @clayui/css: Mixins _assert-ascending and _assert-starts-at-zero are duplicated
Question:
username_0: The mixins `assert-ascending` and `_assert-starts-at-zero` are duplicated in `mixins/_globals.scss` and `functions/_global-functions.scss`. This is causing the one defined in `mixins/_globals.scss` to win in Portal. I'm not sure why it wins there but not in this repo. We should remove the one from `mixins/_globals.scss`<issue_closed>
Status: Issue closed |
yoshihitoh/zstd-codec | 351911415 | Title: Messing around system events
Question:
username_0: Hi there, first good work! thanks for that.
But, it looks like there is something messing around with rejection/errors treatment in nodejs in your code:
https://github.com/nodejs/node/issues/21950
Answers:
username_1: Hello @username_0 , thanks for using the module, and creating the issue!
I tried your repro code, and reduced version, but the bug does not occur on my environment.
- macOS: 10.12.6
- Node.js: v8.11.3
I'm going to survey the issue on Ubuntu 16.04 (on VM), but it will be hard to resolve.
`unhandledRejection` is auto-generated by Emscripten, I couldn't get the answer from the documents.
IIRC there is two problems.
1. zstd-codec injects `process["exit"](1)` as `unhandledRejection` event hanlder. this code enforces the application process to exit on `unhandledRejection` event.
2. any error occurs on initialization of zstd-codec when using async/promise.
If you are not using `unhandledRejection`, please remove the listener attached to process for workaround.
username_1: Ah, sorry I was misunderstanding.
Your repro code and reduced code are working.
Currently `unhandledRejection` handler is always attached on Node.js.
https://github.com/kripken/emscripten/blob/699d457f4ff55e0422ea79517714b07d691e8dae/src/shell.js#L103-L179
The code generation is added by this PR.
https://github.com/kripken/emscripten/pull/5948
I think it is possible to avoid this issue by downgrading Emscripten, but it is hard to do that or not...
username_1: Issue#8 native bindings for Node.js will solve this issue, so I close this issue.
Status: Issue closed
|
crystal-lang/crystal | 257835369 | Title: Bug: Invalid memory access on LLVMContextDispose
Question:
username_0: I was able to "reduce" the random crash that sometimes happens on travis and CI.
To reproduce, compile this file:
```crystal
require "compiler/crystal/**"
def compile
compiler = Crystal::Compiler.new
compiler.debug = Crystal::Debug::All
compiler.compile([
Crystal::Compiler::Source.new("lala.cr", ""),
], "lala")
end
1000.times do |i|
puts "Start: #{i}"
compile
puts "Done compiling: #{i}"
10.times { GC.collect }
puts "End: #{i}"
end
```
The run it, making sure to set the `CRYSTAL_PATH` env var to the "src" folder of the compiler, otherwise the prelude won't be found.
The crash happens randomly, so after it prints "Start: 20" you can ctrl+c and then run it again. This is the crash I sometimes get:
```
Invalid memory access (signal 11) at address 0xe0
[0x1062fd23b] *CallStack::print_backtrace:Int32 +107
[0x1062d73ac] __crystal_sigfault_handler +60
[0x107d37713] sigfault_handler +35
[0x7fff91199b3a] _sigtramp +26
[0x107c7e860] _ZNK4llvm12DenseMapBaseINS_13SmallDenseMapIPvNSt3__14pairINS_12PointerUnionIPNS_15MetadataAsValueEPNS_8MetadataEEEyEELj4ENS_12DenseMapInfoIS2_EENS_6detail12DenseMapPairIS2_SB_EEEES2_SB_SD_SG_E15LookupBucketForIS2_EEbRKT_RPKSG_ +6
[0x107c770fc] _ZN4llvm12DenseMapBaseINS_13SmallDenseMapIPvNSt3__14pairINS_12PointerUnionIPNS_15MetadataAsValueEPNS_8MetadataEEEyEELj4ENS_12DenseMapInfoIS2_EENS_6detail12DenseMapPairIS2_SB_EEEES2_SB_SD_SG_E5eraseERKS2_ +18
[0x107c76f49] _ZN4llvm16MetadataTracking7untrackEPvRNS_8MetadataE +49
[0x107c7977d] _ZN4llvm9MDOperand5resetEPNS_8MetadataES2_ +35
[0x107c7ac98] _ZN4llvm6MDNode17dropAllReferencesEv +102
[0x107c673f7] _ZN4llvm15LLVMContextImplD2Ev +2715
[0x107c655a8] _ZN4llvm11LLVMContextD2Ev +22
[0x107c2b996] LLVMContextDispose +22
[0x106fb986c] *LLVM::Context#finalize:Nil +140
[0x1062e0fe1] ~proc6Proc(Pointer(Void), Pointer(Void), Nil)@src/gc/boehm.cr:142 +17
[0x1095938e0] GC_invoke_finalizers +172
[0x109593a2f] GC_notify_or_invoke_finalizers +174
[0x10958f943] GC_try_to_collect_general +203
[0x10958f973] GC_gcollect +11
[0x1063331d9] *GC::collect:Nil +9
[0x1062bed38] __crystal_main +2936
[0x1062d7218] main +40
```
Sometimes the trace is a bit different.
This happens when passing `Crystal::Debug::All` for the debug info. It doesn't happen with `Crystal::Debug::None`. It seems to be related to disposing an llvm `MDNode` twice, or something like that.
I have **no idea** why it happens. Our code makes sure that an `LLVM::Context` is disposed only once. Maybe our custom bindings to the LLVM C++ debug API has bugs, I don't know. I also tried to trace this in LLVM's source code but it's huge and it's written in C++.
The "good" thing is that this is very unlikely to happen when compiling one file. But there's a chance it happens in tests, and then we have to restart CI from time to time and hope it passes.
Any help finding the cause of this would be greatly appreciated! :-)
Answers:
username_1: I can't reproduce, so I can't tell for sure, but:
The DIBuilder is attached to a Module or Context, doesn't it? Would it be possible that we dispose of the module/context before we dispose of the DIBuilder but the DIBuilder dispose method needs to access it?
username_0: I thought so, yes, but we never dipose DIBuilders ¯\_(ツ)_/¯
Status: Issue closed
|
cjhutto/vaderSentiment | 388347242 | Title: "To die for" misinterpreted
Question:
username_0: I have found this basic common expression which is misinterpreted by Vader:
`To die for.-------------- {'neg': 0.661, 'neu': 0.339, 'pos': 0.0, 'compound': -0.5994}`
Could you consider adding a new rule for that?
Answers:
username_1: These cases should be added to `special_case_idioms` list with a positive value. You can customize that list and add to it manually.
username_0: @username_1 I am not sure how add these. You mean that I have to change the source code?
username_1: @username_0 Yes, There's a variable in this file `vaderSentiment/vaderSentiment.py` called SPECIAL_CASE_IDIOMS. you have to add to this list
username_2: For ease of understanding & future development might it make sense to store `SENTIMENT_LADEN_IDIOMS` and `SPECIAL_CASE_IDIOMS` with their scores in CSV files?
(Longer-term a database might work even better, so users can easily change values with which they disagree.)
username_0: Sincerely, I like the idea of having this as an optional parameter, which in this case would be a python dictionary. Then, you don't need to worry about changing source code, and just do `pip install` and be happy.
Status: Issue closed
username_3: added to special case idiom rules |
jarmoruuth/AutoIntegrate | 1005090944 | Title: Icons are hidden behind other icons
Question:
username_0: When using AutoContinue new icons are placed on top (or behind) existing icons. An experimental code in prefix-array branch keeps track of icon count and places new icons below older ones. Still repeated AutoContinue runs may still have this issue. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.