repo_name (string, lengths 4–136) | issue_id (string, lengths 5–10) | text (string, lengths 37–4.84M)
---|---|---|
crewjam/saml | 665486839 | Title: IdP wants to know whether crewjam/saml can handle this...I have no clue.
Question:
username_0: I used the SAML package to provide authentication to my app. One of the IdPs asked me this question: "One issue I noticed in the metadata is that it sets WantAssertionsSigned="true"; our default, and the recommended best practice, is for the IdP to sign the entire response, which makes signing the enclosed assertion redundant. Do you know if the SP software can properly deal with that, i.e. requiring the response to be properly signed, instead of the assertion? Our IdP will sign both the response and the assertion if so requested, but a requirement to sign the assertion suggests a possible flaw in the software."
Does anyone know the answer?
Many thanks.
Status: Issue closed
Answers:
username_1: Yes, either should work. |
strapi/strapi | 390686661 | Title: Relation tables not created when multi many-to-many relations
Question:
username_0: **Information**
- **Node.js version**: 11.4.0
- **NPM version**: 6.4.1
- **Strapi version**: 3.0.0-alpha.16
- **Database**: Postgres
- **Operating system**: MacOS
**What is the current behavior?**
When I create two many-to-many relations on a Content Type, only the first one is created in the Postgres database.
**Steps to reproduce the problem**
- Create two Content Types (`post` and `tags` for example)
- Create a post
- Create a many-to-many relation between posts and tags
- Create a many-to-many relation between post and users
- Edit the post
```
{ error: relation "pages_posts__posts_pages" does not exist
at Connection.parseE (/Users/username_0/Desktop/blog/node_modules/pg/lib/connection.js:554:11)
at Connection.parseMessage (/Users/username_0/Desktop/blog/node_modules/pg/lib/connection.js:379:19)
at Socket.<anonymous> (/Users/username_0/Desktop/blog/node_modules/pg/lib/connection.js:119:22)
at Socket.emit (events.js:189:13)
at Socket.EventEmitter.emit (domain.js:441:20)
at addChunk (_stream_readable.js:288:12)
at readableAddChunk (_stream_readable.js:269:11)
at Socket.Readable.push (_stream_readable.js:224:10)
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:145:17)
From previous event:
at Client_PG._query (/Users/username_0/Desktop/blog/node_modules/knex/lib/dialects/postgres/index.js:240:12)
at Client_PG.query (/Users/username_0/Desktop/blog/node_modules/knex/lib/client.js:192:17)
at Runner.<anonymous> (/Users/username_0/Desktop/blog/node_modules/knex/lib/runner.js:138:36)
From previous event:
at /Users/username_0/Desktop/blog/node_modules/knex/lib/runner.js:47:21
From previous event:
at Runner.run (/Users/username_0/Desktop/blog/node_modules/knex/lib/runner.js:33:30)
at Builder.Target.then (/Users/username_0/Desktop/blog/node_modules/knex/lib/interface.js:23:43)
at processImmediate (timers.js:632:19)
at process.topLevelDomainCallback (domain.js:120:23)
name: 'error',
length: 124,
severity: 'ERROR',
code: '42P01',
[Truncated]
position: '158',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_relation.c',
line: '1180',
routine: 'parserOpenTable' }
```
**What is the expected behavior?**
The relation table should be created.
**Suggested solutions**
If you move the second relation up in `Page.settings.json`, it will be created.
Answers:
username_1: Any news here?
username_2: Same problem here, it's really blocking us.
Any progress on this @username_5 ?
username_3: Same problem here :
https://github.com/strapi/strapi/issues/2185#issuecomment-464808490
Also, the related pages are not saved as they should be on a MySQL DB. Everything is fine on MongoDB.
Any news about this ?
username_4: seeing the same issue here.. any updates?
Status: Issue closed
username_5: It has been fixed in the beta version
username_6: When will this be released?
Is there a known temporary resolution? I've taken a look at the commit in the Beta branch for this and it looks like a significant change.
Thanks
username_7: @username_6 within this month if everything goes to plan. And yes the beta is a huge change. |
sendgrid/docs | 877516377 | Title: Domain authentication steps for GODADDY are incorrect
Question:
username_0: Sender Authentication -> Domain Authentication
Instructions specific to GoDaddy DNS hosting - the instructions say that the host value should include the root domain.
For example - example.com
The instructions for godaddy.com say that the 3 CNAME HOST / Values should be
| TYPE | HOST | VALUE |
|-------|:-------|:--------|
|CNAME|em####.example.com|u11111111.wl222.sendgrid.net|
|CNAME|s1._domainkey.example.com|s1.domainkey.u11111111.wl222.sendgrid.net|
|CNAME|s2._domainkey.example.com |s2.domainkey.u22222222.wl222.sendgrid.net|
For godaddy, this is incorrect.
These instructions for godaddy should instead be:
| TYPE | HOST | VALUE |
|-------|:-------|:--------|
|CNAME|em####|u11111111.wl222.sendgrid.net|
|CNAME|s1._domainkey|s1.domainkey.u11111111.wl222.sendgrid.net|
|CNAME|s2._domainkey|s2.domainkey.u22222222.wl222.sendgrid.net|
## Supporting link
https://www.godaddy.com/community/Managing-Domains/DNS-TXT-CNAME-records-are-still-not-getting-propagate/m-p/153099
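To check which form GoDaddy actually published, resolving the CNAMEs directly settles it. A minimal sketch using dnspython; the names below are the placeholder values from the tables above, not real records:
```python
import dns.resolver  # pip install dnspython (>= 2.0 for resolve())

# If GoDaddy appended the domain again, only the doubled name resolves.
for name in ("em1234.example.com", "em1234.example.com.example.com"):
    try:
        answer = dns.resolver.resolve(name, "CNAME")
        print(name, "->", [r.target.to_text() for r in answer])
    except Exception as exc:  # NXDOMAIN, NoAnswer, timeout, ...
        print(name, "->", type(exc).__name__)
```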
Answers:
username_1: Yes, this should be made much clearer in the SendGrid docs; I lost 2 days trying to configure it. There is a notice that some providers do this, but I would have expected that if they do, it would also be visually visible, e.g. it would show something like s1._domainkey.example.com.example.com. This is not the case with GoDaddy, so make it clear in your docs that GoDaddy does this. |
prettier/prettier-vscode | 272924403 | Title: False negatives when saving JSON files
Question:
username_0: 
This happens every time I save any JSON file.

If I disable Prettier, it stops happening.
Here's my `.prettierrc`:
```
{
"useTabs": true,
"printWidth": 80,
"singleQuote": true,
"trailingComma": "none",
"bracketSpacing": true,
"jsxBracketSameLine": false,
"parser": "babylon",
"semi": true
}
```
If it matters, I have both `.prettierrc` and `.prettierrc.json` in my repo (various people on the team use various editors, some need one or the other). They are identical.
Answers:
username_1: This seems strange to me.
I can't say which one will be loaded. @username_2 should know that.
username_0: Plugins for atom, sublime, and vscode all need to work the same. This configuration was the best way we were able to do that without having to ask every developer to modify settings after installing the plugin.
username_2: Which version of prettier are you using?
username_3: @username_2 Same problem here with a fresh installation of VSCode and prettier extension:
```
$ yarn info prettier | grep version
version: '1.8.2',
```
In my case, removing `parser` did not help. I also tried the `flow` option and got the same error. Can I help you more somehow?
username_3: Oops, `yarn info` is not the right command to find the actual installed version. It's `1.8.2` nonetheless:
```
$ yarn list --pattern prettier
yarn list v1.2.0
├─ [email protected]
├─ [email protected]
└─ [email protected]
✨ Done in 2.24s.
$ cat node_modules/prettier/package.json | grep version
"version": "1.8.2",
```
username_2: Does the name of the file you are formatting end with `.json`?
username_3: I was formatting the `package.json` file of my project. I said previously that the `parser` option was not changing anything in my case, but I realized later that I had a workspace setting that was setting the `parser` option.
But right now, I'm not able to get formatting to work anywhere, so I'm unable to reproduce for now. I'm looking into it, will report once I'm able to make it work again :)
username_3: Ok, after restarting VSCode, I was able to format again. I removed `parser` setting from both my workspace and user settings file.
Still the same problem.
username_1: @username_3 please share your settings, package.json/prettier configs
username_4: Perhaps related. With `parser: babylon` and vscode-prettier 1.6.1, I'm finding that a json file which looks like this:
```json
[
{
"id": "9147C594-02B0-4752-B569-3244EE1F7C26"
}
]
```
is prettified on save to:
```json
[
{
id: '9147C594-02B0-4752-B569-3244EE1F7C26',
}
]
```
The underlying project has `prettier: ^1.12.1` but I don't think that matters.
For now, adding json files to my prettierignore until I figure out the issue. |
boostorg/beast | 583400092 | Title: file_body succeeds opening bad file path
Question:
username_0: I'm running into an issue when the server receives a bogus file path like ".//v1/mapview/". file_body is able to open that path without error (see below); later on, in the async write, an error code is set with the message "Is a directory".
Should file_body.open check if the path is a directory?
```
// the following code snippet succeeds to open bogus ".//v1/mapview/"
beast::error_code ec;
http::file_body::value_type body;
body.open(response->data.c_str(), beast::file_mode::scan, ec);
if(ec == beast::errc::no_such_file_or_directory || !body.is_open()) {
return send(_defaultController.prepareNotFound(std::move(req), response->data));
}
// Handle an unknown error
if (ec) {
return send(_defaultController.prepareServerError(std::move(req), ec.message()));
}
```
Answers:
username_1: That's perplexing... which implementation is it using for `file`?
username_0: I've updated the code snippet with the method preparing the file response.
username_0: I should also mention that ./v1/mapview is an existing path on the server. A URL to a non-existing path like /v1/mapview/test correctly handles file not found.
username_2: Hmm, this is an interesting one.
A file could be a number of things - symlink, Unix device, etc.
Should the file body interface deal in file names (and therefore have to disambiguate) or file descriptors?
username_0: My interpretation is that the file modes described in file_base.hpp need to be considered in file_body.open(). E.g. if file mode is beast::file_mode::scan or beast::file_mode::read then file_body.open() should fail in case the file is not readable.
username_1: Beast _File_ is never intended for directories
username_0: Should it then fail to open?
username_2: @username_0 I'll have a design discussion with Vinnie about the behaviour of our object. I fear that like all seemingly simple design decisions, this one will end up more nuanced than was originally imagined.
In the meantime, are you comfortable adding a workaround in your code to `stat` the file and behave accordingly?
username_0: @username_2 Sure, that will work for me. I already have a workaround in place and was raising it as a general topic.
username_2: I've had a think about this.
The #1 concern around sending files is security. Are you sending the file you intend to send? This applies whether you are acting as client or server.
Checking the program's intent is well beyond the scope of Beast. Beast has to assume that if you offer a file-name, then the expectation is that the file exists, is of the correct type, and that you do intend to send it.
It is therefore my view that whether the file exists (and may be legally transmitted) should already have been checked by the application. I see no reason to change Beast's behaviour here.
I'll leave the issue open for a few more hours in case someone wants to argue, but I am minded to close it.
username_0: I can follow your thoughts on this generally. Though from an API user's perspective, it seems counterintuitive to what the library seems to offer:
```
// Attempt to open the file
beast::error_code ec;
http::file_body::value_type body;
body.open(response->data.c_str(), beast::file_mode::scan, ec);
// Handle the case where the file doesn't exist
if(ec == beast::errc::no_such_file_or_directory || !body.is_open()) {
return send(_defaultController.prepareNotFound(std::move(req), response->data));
}
```
I mean the above code reads like: open the file at the path, otherwise handle the case when there is no such file or directory. It doesn't seem very intuitive that this check is never done.
username_2: Can you suggest an alternative that we can implement without breaking compatibility?
username_0: Putting all the pieces together from what I as a user would assume, the design decisions, and what Vinnie said ("Beast File is never intended for directories"), I think that after a call to open(), body.is_open() should be set to false for directories.
Arguably, it changes the result. But in my opinion this is not an API change, but rather a corrective measure due to the expectation that file_body is by definition dealing with files only, and an attempt to open a directory should by this definition never have succeeded.
That said, I can live with any decision.
username_1: it isn't meant for directories
username_2: Are you saying that if the call fails, it should be guaranteed that is_open() returns false?
This would seem reasonable to me.
username_0: Yes. |
runelite/runelite | 1017858053 | Title: Add blighted super restore to the "Show prayer dose indicator" prayer plugin option
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently, normal prayer potions or super restores show a glow around the prayer orb whenever a prayer dose can be used without wasting any points, via the option "Show prayer dose indicator" in the Prayer plugin. However, the blighted super restore is not supported by this option. [...]
**Describe the solution you'd like**
Add the blighted super restore to the supported items. |
comraq/My-Tetris | 123168430 | Title: Separate frame into 2 classes
Question:
username_0: The frame class should just contain the canvas-handling methods along with the x,y coordinates corresponding to the canvas pixels.
A separate game class should handle the number of actual Tetris rows and columns, and block movements.
Answers:
username_0: resolved with the latest commit
Status: Issue closed
|
microsoft/reverse-proxy | 765337604 | Title: Routing with YARP
Question:
username_0: ### Describe the bug
I'm not actually sure this is a bug vs. more of a misunderstanding on my part. But I have the default webapi template and I have set up a project with YARP. Basically I want to visit http://proxy-address/api/weatherforecast and have it proxy to http://weatherapi/weatherforecast
### To Reproduce
Create a new webapi project and a new YARP project. Add the following config to YARP
```json
"ReverseProxy": {
"Routes": [
{
"RouteId": "weatherroute",
"ClusterId": "weather1",
"Match": {
"Path": "/api/"
}
}
],
"Clusters": {
"weather1": {
"Destinations": {
"weather1/destination1": {
"Address": "http://localhost:5033"
}
}
}
}
}
```
Try to visit http://proxy/api/weatherforecast; the reply I get is 404 Not Found.
### Further technical details
- .Net 5.0
- Linux
Status: Issue closed |
cilium/cilium | 930399916 | Title: CI: Suite-k8s-1.21.K8sFQDNTest Validate that multiple specs are working correctly
Question:
username_0: ## CI failure
```
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Can't connect to to a valid target when it should work
Expected command: kubectl exec -n default app2-58757b7dd5-nrwln -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 --retry 5 http://vagrant-cache.ci.cilium.io -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000019()', Connect: '0.000000',Transfer '0.000000', total '5.002315'
Stderr:
command terminated with exit code 28
```
DNS resolution of external FQDNs is tricky to handle in tests, since the failures could be transient.
[eedec955_K8sFQDNTest_Validate_that_multiple_specs_are_working_correctly.zip](https://github.com/cilium/cilium/files/6718076/eedec955_K8sFQDNTest_Validate_that_multiple_specs_are_working_correctly.zip)
Answers:
username_1: I found that the coredns pod in this environment is configured with endpoint-routes while the rest of the endpoints were not, which suggests a link to #16717 (if not the same root cause). I used this technique: https://github.com/cilium/cilium/issues/16717#issuecomment-871794749
username_2: According to [the CI dashboard](https://datastudio.google.com/s/jmn3FXS9X04), could be the same root cause as https://github.com/cilium/cilium/issues/16713#issuecomment-873293963, since it started failing more often around the same time (June 24th).

Status: Issue closed
|
grpc/grpc-go | 239353596 | Title: Possible goroutine/connection leak
Question:
username_0: Please answer these questions before submitting your issue.
### What version of gRPC are you using?
server: grpc-go 1.4.0, client: Python grpcio 1.4.0
### What version of Go are you using (`go version`)?
go1.8.3
### What operating system (Linux, Windows, …) and version?
Ubuntu 16.04
### What did you do?
Prior to grpc-go 1.3, I'd see connection leakage (they seem to be closed by the client w/o the server knowing about it), and goroutines would start leaking. Post 1.3, the keepalive functionality helped close those "abandoned" connections.
I've currently set the keepalive parameters to: MaxConnectionIdle: 1 hr, Time: 1 hr.
Oddly the rate of leakage happens faster when RPC transaction counts are low. Not sure if this is something w/ grpc on the Python side.
<img width="1036" alt="screen shot 2017-06-28 at 10 20 06 am" src="https://user-images.githubusercontent.com/124396/27669678-ef37954e-5c3c-11e7-9bfc-c0adf9ec76da.png">
<img width="353" alt="screen shot 2017-06-28 at 10 20 37 am" src="https://user-images.githubusercontent.com/124396/27669680-ef3f1bf2-5c3c-11e7-92a1-bb91ae809e65.png">
### What did you expect to see?
Fairly steady connection/goroutine count.
### What did you see instead?
Goroutine count seems to go up as RPC transactions per minute goes down.
Answers:
username_1: Hey,
Thanks for the detailed representation of the problem. However, the data seems very hard to believe. The number of goroutines seems to be decreasing proportionally to the increase in the number of RPCs. This can't be the case. Would you mind vetting the graphs further?
Also, it'd be really helpful if you could provide a reproduction of this issue.
username_0: @username_1 It's odd to me as well. My only guess so far is that the Python/C-core is cycling channel connections frequently when activity is low, causing more broken connections which only get cleaned up by the keepalive on the server side after an hour.
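One way to test that guess is to pin the client's behaviour with gRPC core channel arguments; a hedged sketch for the Python side (the option names are standard gRPC core channel args, the values are illustrative only):
```python
import grpc

options = [
    ("grpc.keepalive_time_ms", 60000),           # ping the server every 60 s
    ("grpc.keepalive_timeout_ms", 10000),        # drop if no ack within 10 s
    ("grpc.keepalive_permit_without_calls", 1),  # keep pinging while idle
]
# Reuse one long-lived channel instead of letting channels be recycled.
channel = grpc.insecure_channel("server-address:50051", options=options)
```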
username_1: That doesn't answer why the goroutines would increase during inactivity. Broken connections don't spawn new goroutines on a Go server.
Status: Issue closed
username_1: Since it's really hard to pursue this without a reliable reproduction, and we haven't noticed anything similar, I'm going to go ahead and close this issue. Feel free to get back to us if you find something more or perhaps a reproduction. Thanks for your time and effort. |
AntonOkryb/File_Commander | 475021116 | Title: Review remarks
Question:
username_0: 25 days late: -12.5
You don't count the number of files or the size of a directory: -5
Everything flickers horribly; I showed you how to do partial redrawing: -2
When trying to enter a folder without access rights, the program crashes: -2
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/Program.cs#L22 - Main is large; it should be split into methods and even into classes: -1
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/Program.cs#L45 - why do this every time? Doing it on switching is enough: -1
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/Program.cs#L62 - this is better moved into the panel itself; let it handle its own keys: -1
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/Program.cs#L82 - the help should also describe the shortcuts that are not documented
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/Program.cs#L90 - the comment is out of place: -2. It is very confusing.
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/CPanel.cs#L212 - you had already started naming methods with a capital letter; you shouldn't have stopped: -0.5
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/CPanel.cs#L23 - properties are named with a capital letter: -0.5
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/CPanel.cs#L43 - C# has properties for things like this: -1
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/CLocalMenu.cs#L11 - you usually write access modifiers, but omitted them here; be consistent: -0.5
https://github.com/AntonOkryb/File_Commander/blob/master/FileCommander/CCommon.cs#L110 - this rewrites nicely with LINQ
The project is not accepted yet. |
Sylius/Sylius | 276578131 | Title: Corrupted shipping methods
Question:
username_0: Hello,
I opened those entities
http://demo.sylius.org/admin/shipping-methods/16339/edit
http://demo.sylius.org/admin/shipping-methods/16337/edit
remove "calculator" from HTML and save it
result is **Server has encountered some errors**. ... and it's not possible to edit this entity anymore
sorry :)
Answers:
username_1: I think it just indicates that we're missing `NotBlank` validation on `ShippingMethod::$calculator` ;)
username_2: Should be solved by #9764.
username_1: Should be fixed now, thank you, @username_0 for reporting this issue :)
Status: Issue closed
|
izhangzhihao/intellij-rainbow-brackets | 501859839 | Title: Crash with nginx.conf files
Question:
username_0: ## Expected Behavior
Well, I guess the plugin shouldn't crash?
## Current Behavior
The plugin crashes every time I open a nginx.conf file. This started happening ever since I installed the nginx support plugin (https://plugins.jetbrains.com/plugin/4415-nginx-support)
Ironically, it does mark the brackets in nginx.conf files when they're both enabled, but doesn't when I disable the nginx-support plugin (it also doesn't crash in this instance).
## Possible Solution
I've disabled the nginx-support plugin.
## Code snippet for reproduce (for bugs)
```
java.lang.RuntimeException: java.lang.NoSuchMethodException: net.ishchenko.idea.nginx.annotator.NginxAnnotatingVisitor.<init>()
at com.intellij.util.ExceptionUtil.rethrow(ExceptionUtil.java:116)
at com.intellij.util.ReflectionUtil.newInstance(ReflectionUtil.java:408)
at com.intellij.util.ReflectionUtil.newInstance(ReflectionUtil.java:376)
at com.intellij.codeInsight.daemon.impl.ThreadLocalAnnotatorMap.cloneTemplates(ThreadLocalAnnotatorMap.java:46)
at com.intellij.codeInsight.daemon.impl.ThreadLocalAnnotatorMap.get(ThreadLocalAnnotatorMap.java:66)
at com.intellij.codeInsight.daemon.impl.CachedAnnotators.get(CachedAnnotators.java:31)
at com.intellij.codeInsight.daemon.impl.DefaultHighlightVisitor.runAnnotators(DefaultHighlightVisitor.java:105)
at com.intellij.codeInsight.daemon.impl.DefaultHighlightVisitor.visit(DefaultHighlightVisitor.java:86)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.runVisitors(GeneralHighlightingPass.java:351)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.lambda$collectHighlights$5(GeneralHighlightingPass.java:284)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.analyzeByVisitors(GeneralHighlightingPass.java:311)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.lambda$analyzeByVisitors$6(GeneralHighlightingPass.java:314)
at com.github.username_1.rainbow.brackets.visitor.RainbowHighlightVisitor.analyze(RainbowHighlightVisitor.kt:32)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.analyzeByVisitors(GeneralHighlightingPass.java:314)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.lambda$analyzeByVisitors$6(GeneralHighlightingPass.java:314)
at com.intellij.codeInsight.daemon.impl.DefaultHighlightVisitor.analyze(DefaultHighlightVisitor.java:70)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.analyzeByVisitors(GeneralHighlightingPass.java:314)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.collectHighlights(GeneralHighlightingPass.java:281)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.collectInformationWithProgress(GeneralHighlightingPass.java:225)
at com.intellij.codeInsight.daemon.impl.ProgressableTextEditorHighlightingPass.doCollectInformation(ProgressableTextEditorHighlightingPass.java:84)
at com.intellij.codeHighlighting.TextEditorHighlightingPass.collectInformation(TextEditorHighlightingPass.java:55)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.lambda$null$1(PassExecutorService.java:429)
at com.intellij.openapi.application.impl.ApplicationImpl.tryRunReadAction(ApplicationImpl.java:1106)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.lambda$doRun$2(PassExecutorService.java:422)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:591)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:537)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:59)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.doRun(PassExecutorService.java:421)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.lambda$run$0(PassExecutorService.java:397)
at com.intellij.openapi.application.impl.ReadMostlyRWLock.executeByImpatientReader(ReadMostlyRWLock.java:164)
at com.intellij.openapi.application.impl.ApplicationImpl.executeByImpatientReader(ApplicationImpl.java:204)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.run(PassExecutorService.java:395)
at com.intellij.concurrency.JobLauncherImpl$VoidForkJoinTask$1.exec(JobLauncherImpl.java:161)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
```
## Your Environment
* Plugin version: 5.23
* IDE & Operating System version, comment your env as below(go to "About IntelliJ IDEA" -> click the "copy" icon):
```
IntelliJ IDEA 2019.2.3 (Ultimate Edition)
Build #IU-192.6817.14, built on September 24, 2019
Licensed to <NAME>
You have a perpetual fallback license for this version
Subscription is active until September 14, 2020
Runtime version: 11.0.4+10-b304.69 amd64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Linux 4.15.0-43-generic
GC: ParNew, ConcurrentMarkSweep
Memory: 1886M
Cores: 8
Registry: compiler.automake.allow.when.app.running=true, ide.tree.ui.experimental=false, ide.balloon.shadow.size=0
Non-Bundled Plugins: HOCON Converter, Lombook Plugin, String Manipulation, com.github.redfoos.logstash-intellij-plugin, com.microsoft.vso.idea, com.shellcheck, com.thvardhan.gradianto, org.jetbrains.plugins.hocon, org.jetbrains.kotlin, com.chrisrm.idea.MaterialThemeUI, com.intellij.plugins.html.instantEditing, com.jetbrains.php, org.intellij.scala, Dart, io.flutter, username_1.rainbow.brackets, Pythonid, com.intellij.kubernetes, com.jetbrains.edu, de.mariushoefler.flutter_enhancement_suite, org.zalando.intellij.swagger
```
Answers:
username_1: First of all, this is a bug of the 'Nginx-Support' plugin. You can report an issue [here](https://github.com/ishchenko/idea-nginx).
As a workaround, you could disable Nginx support by [this](https://github.com/username_1/intellij-rainbow-brackets#disable-rainbow-brackets-for-specific-languages)
```xml
<application>
<component name="RainbowSettings">
<option name="languageBlacklist">
<array>
<option value="nginx" />
</array>
</option>
</component>
</application>
```
Status: Issue closed
|
goharbor/harbor | 836373244 | Title: Unable to send emails using Office 365 SMTP server.
Question:
username_0: **Expected behavior and actual behavior:**
Unable to send emails using the Office 365 SMTP server.
**Steps to reproduce the problem:**
* Email Server: smtp.office365.com
* Email Server Port: 587
* Email Username: username
* Email Password: <PASSWORD>
* Email From: admin <<EMAIL>>
* Email SSL: True
* Verify Certificate: True
Reference: https://support.microsoft.com/en-us/office/pop-imap-and-stmp-settings-8361e398-8af4-4e97-b147-6c6c4ac95353
**Versions:**
- harbor version: v2.2.0-ec0ba116
- docker engine version: -
- docker-compose version: -
**Additional context:**
- **Log Files**
```
2021-03-19T21:19:19Z [ERROR] [/core/api/email.go:113]: failed to ping email server: tls: first record does not look like a TLS handshake
2021-03-19T21:19:21Z [ERROR] [/core/api/email.go:113]: failed to ping email server: tls: first record does not look like a TLS handshake
2021-03-19T21:22:34Z [ERROR] [/core/api/email.go:113]: failed to ping email server: tls: first record does not look like a TLS handshake
```
#5757
Answers:
username_1: Hi. You can reproduce it as well by setting port 25 and checking/unchecking Email SSL. If you disable it, it will try TLS, but core.log throws:
`core[684]: 2021-04-20T11:47:44Z [ERROR] [/core/api/email.go:113]: failed to ping email server: 535 5.7.8 Error: authentication failed: no mechanism available`
If you enable SSL on port 25, it throws the known reported error...
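The error string itself points at the mismatch: "Email SSL" appears to do implicit TLS, i.e. a TLS handshake from the first byte, while port 587 expects plaintext SMTP upgraded in-band via STARTTLS. A minimal Python sketch of the two modes, with placeholder host/credentials, for reproducing this outside Harbor:
```python
import smtplib

HOST = "smtp.office365.com"
USER, PASSWORD = "user@example.com", "app-password"  # placeholders

# What port 587 expects: a plaintext connection upgraded in-band via STARTTLS.
with smtplib.SMTP(HOST, 587, timeout=10) as smtp:
    smtp.starttls()            # upgrade the existing connection to TLS
    smtp.login(USER, PASSWORD)

# Implicit TLS: a TLS handshake from the very first byte (port 465 style).
# Pointing this at port 587 fails analogously to the reported error, because
# the server answers the TLS ClientHello with a plaintext SMTP banner.
with smtplib.SMTP_SSL(HOST, 465, timeout=10) as smtp:
    smtp.login(USER, PASSWORD)
```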
username_2: We are also suffering from this.
username_3: Same here.
username_4: We have the same issue using Office365 smtp.office365.com on Port 587
```
2021-12-10T23:55:41Z [ERROR] [/core/api/email.go:115]: failed to ping email server: 504 5.7.4 Unrecognized authentication type [CO1PR15CA0097.namprd15.prod.outlook.com]
2021-12-10T23:51:01Z [ERROR] [/core/api/email.go:115]: failed to ping email server: tls: first record does not look like a TLS handshake
2021-12-10T23:49:59Z [ERROR] [/core/api/email.go:115]: failed to ping email server: 504 5.7.4 Unrecognized authentication type [MW4PR03CA0043.namprd03.prod.outlook.com]
```
username_3: I think this issue is really important because many companies use Exchange.
username_5: I'm seeing the same issue with SMTP setup with Office 365 on port 587 with a failed ping error. Like @username_4 I am also getting the failed ping. The same credentials work in other applications to send mail via SMTP via Office 365/ Exchange. |
mattphillips/deep-object-diff | 715973793 | Title: Comparing object with "hasOwnProperty" keys fails
Question:
username_0: I'm trying to use deep-object-diff to compare different copies of https://github.com/mdn/browser-compat-data, which is a dataset describing the web platform itself. As such, it has [a key called "hasOwnProperty"](https://github.com/mdn/browser-compat-data/blob/d3e87462355e61d24aa9bf8398b0be2d1ef61306/javascript/builtins/Object.json#L930) to describe that method. deep-object-diff uses code like `obj.hasOwnProperty(key)` a lot, and `obj.hasOwnProperty` will in this case be an object in the BCD data, not `Object.prototype.hasOwnProperty`.
This causes deep-object-diff to throw an exception, the first place being here.
https://github.com/username_3/deep-object-diff/blob/45549c8225fc21c178dc6370e8028e4d08498e7b/src/diff/index.js#L12
Simplified a bit, here's a subset of the data:
```json
{
"javascript": {
"builtins": {
"Object": {
"hasOwnProperty": {
"__compat": "an object with more stuff here"
}
}
}
}
}
```
Repro script showing the problem:
```js
const { diff } = require("deep-object-diff");
diff({"hasOwnProperty": 1}, {"hasOwnProperty": 2});
```
This will throw "Uncaught TypeError: r.hasOwnProperty is not a function".
A possible fix for this would be to add a `hasOwnProperty` wrapper to https://github.com/username_3/deep-object-diff/blob/master/src/utils/index.js and always use that, but there may be other ways.
Answers:
username_1: https://eslint.org/docs/rules/no-prototype-builtins would catch that.
username_2: @username_0 Review #59?
<sup>(Posting this because I think you don't get a notification when a PR links to your issue.)</sup>
username_0: @username_2 thanks for posting the fix, I've tried it and can confirm it fixes my problem.
Status: Issue closed
username_3: Available in [v1.1.5](https://www.npmjs.com/package/deep-object-diff/v/1.1.5) |
openssl/openssl | 361140116 | Title: set_ciphersuites function should be static or renamed ssl_internal_set_ciphersuites
Question:
username_0: See ssl_ciph.c:1304
`int set_ciphersuites(STACK_OF(SSL_CIPHER) **currciphers, const char *str)`
This should be static and renamed ssl_internal_set_ciphersuites ...
Answers:
username_1: I can make that change for you. However, there are other static int functions which don't carry the `internal` in their name. So I would leave the name unchanged if that's ok for you.
username_1: E.g. `static int update_cipher_list_by_id(...)` and `static int update_cipher_list(...)`.
username_1: See #7253.
Status: Issue closed
|
DynamikArray/Ciima | 506043225 | Title: Close and Cancel buttons on Inventory Dialog Form
Question:
username_0: Add close and cancel buttons, along with helper text to the Inventory dialog form.
Status: Issue closed
Answers:
username_0: We've gone ahead and reworked the inline editing and have built a better solution. beebb3aa8daeced9e4c29b8d41e780f385a99ab7 |
catboost/catboost | 863678039 | Title: Predicting after ranking fails / is poorly documented
Question:
username_0: Problem: Predicting after ranking fails / is poorly documented
catboost version: 0.24.4 (Python 3.8.5)
Operating System: Ubuntu 20.04
CPU: Intel i9 i9-9900K
I have set up a pool `train` and a pool `test` like so:
```
train = Pool(X_train.values, y_train.values, group_id=g_train)
```
Where `X_train` contains my features, `y_train` contains my scores, and `g_train` contains all the groups. Everything is sorted to match the group order.
I create the model and fit it like so:
```
model = CatBoost({"loss_function":"YetiRank"})
model.fit(train, eval_set=(test), use_best_model=True, early_stopping_rounds=100)
```
Then when I try to predict:
```
model.predict(test, prediction_type="Class")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sakuya/.local/lib/python3.8/site-packages/catboost/core.py", line 2033, in predict
return self._predict(data, prediction_type, ntree_start, ntree_end, thread_count, verbose, 'predict')
File "/home/sakuya/.local/lib/python3.8/site-packages/catboost/core.py", line 1981, in _predict
predictions = self._base_predict(data, prediction_type, ntree_start, ntree_end, thread_count, verbose)
File "/home/sakuya/.local/lib/python3.8/site-packages/catboost/core.py", line 1310, in _base_predict
return self._object._base_predict(pool, prediction_type, ntree_start, ntree_end, thread_count, verbose)
File "_catboost.pyx", line 4308, in _catboost._CatBoost._base_predict
File "_catboost.pyx", line 4323, in _catboost._CatBoost._base_predict
File "_catboost.pyx", line 1456, in _catboost.transform_predictions
File "_catboost.pyx", line 5086, in _catboost._convert_to_visible_labels
IndexError: index 0 is out of bounds for axis 0 with size 0
```
My groups are strings, and there are 5 groups in total, which makes predicting probability even more confusing, because I only get 2 outputs per row instead of the 5 I was expecting:
```
model.predict(test, prediction_type="Probability")
array([[0.53883141, 0.46116859],
[0.52118767, 0.47881233],
[0.53749065, 0.46250935],
...,
[0.54306105, 0.45693895],
[0.52692491, 0.47307509],
[0.52297527, 0.47702473]])
```
I expected to see predicted classes per row, and predicted probabilities for each class per row. But I don't get either of those.
Where is this documented? The msrank guide only covers training a model for ranking, and nowhere does it even mention how to use the model afterwards.
Answers:
username_1: You're right.
1) Documentation and tutorials do not make it clear how to predict for ranking problems.
In fact, a model trained for ranking just predicts one real value per sample (corresponding to prediction type `RawFormulaVal`), and you should rank samples according to the predicted value (i.e. the `predict` function does not return a rank, only a value that you can use to rank samples).
Something like that (for a single group):
```py
from operator import itemgetter
...
predictions = model.predict(X_group)
...
# print group samples with predictions, ranked by predictions
print (sorted(zip(predictions, X_group), key=itemgetter(0)))
```
Prediction does not use group or label data, only features, so it is enough to have only features data as an argument to `predict` but full `Pool` data is also accepted (but only features data from it is used for prediction).
You can predict for all groups in the dataset in one call to `predict`, but additional postprocessing (in your own user code) is required to sort the samples in each group according to the predicted value and get the ranked samples.
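To make that postprocessing concrete, a small pandas sketch for the multi-group case; `model` is the fitted ranker and `X_test`/`g_test` mirror the question's `X_train`/`g_train`, everything else is illustrative:
```python
import pandas as pd

df = pd.DataFrame({"group": g_test, "score": model.predict(X_test)})
# Rank within each group, best score first, then order the frame for display.
df["rank"] = df.groupby("group")["score"].rank(ascending=False, method="first")
ranked = df.sort_values(["group", "rank"])
print(ranked.head())
```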
2) prediction types `Class` and `Probability` make sense only for classification problems. In the case of non-classification problems, the error message should clearly state that. We'll fix this inconsistency. |
gphotosuploader/google-photos-api-client-go | 474417482 | Title: Refactor how Google response is checked
Question:
username_0: We are relying on the status message of the response; we should use the status code instead, as suggested [here](https://github.com/googleapis/google-api-go-client/blob/master/GettingStarted.md).
https://github.com/gphotosuploader/google-photos-api-client-go/blob/f16c909f308c4a17c89cecc894cc9597cdf4cc2a/lib-gphotos/client.go#L326
Status: Issue closed |
SlicerIGT/SlicerIGT | 167883256 | Title: MarkupsToModel auto update checkbox on makes using the module error-prone
Question:
username_0: It's quite unexpected (and unprecedented in Slicer) that simply switching to a module initiates processing.
Also I lost data by removing the automatically created model while auto-update was still on.
I think it would be better if the auto-update checkbox was off by default.
Answers:
username_1: @username_2 has been working on a complete GUI rework - it's almost ready for integration (https://github.com/SlicerIGT/SlicerIGT/pull/90). We should test the behavior again after his changes have been integrated.
Probably the issue is that "None" option is not enabled in the output node selector and so it selects the first model node in the scene.
username_0: OK thanks! I wasn't aware of the UI rework.
Why is auto-update on by default? I think the problem is not (only) the None option, but more the auto-update, as I suggest in the ticket title.
username_1: Auto-update is good, because that's what you need most of the time. Auto-update is enabled by default in Fiducial registration wizard, too, and nobody complained. The problem is that the module randomly selects a node as you enter the module and you don't have a chance to turn off auto-update or choose a different output node.
username_0: Adding the None options would help for sure.
I think it is very unusual in Slicer that by changing selection in a node combobox immediately changes the content of the node without explicit action (may it be enabling auto-update or clicking manual update). I didn't see any module operating like this before outside SlicerIGT.
username_1: Yes, SlicerIGT is "unusual" because it mainly operates on nodes that are changing in real-time, that's why auto-update is the default for most modules (but it can be disabled for a few modules such as for this one).
username_0: If for SlicerIGT's use cases this "real-time behaviour" is better, and it is accepted that the modules in the extension don't work the same way as Slicer core, then auto-update is fine by me.
In this case adding None to the output combobox is enough.
username_2: The GUI update will address this issue by allowing "None" in the output combobox.
username_2: (And "None" will be selected by default)
username_2: Should this be closed?
username_0: Has the above discussed None option been added?
username_2: It was merged in this commit:
https://github.com/SlicerIGT/SlicerIGT/commit/c14531826551f9c0e6ae8e7a5aabaa64bc83ae95
Status: Issue closed
username_0: Thanks! In that case I think this issue can be closed. I'm closing it now. |
nyupcs/pcs-sp21-lab4-server | 840265950 | Title: exploit-main
Question:
username_0: -----BEGIN PGP MESSAGE-----
<KEY>
<KEY>
=xdIZ
-----END PGP MESSAGE-----
Answers:
username_0: My NetID is km3947, <NAME>, and my pub key id is 97E0D099241F8CA3
username_0: -----BEGIN PGP MESSAGE-----
<KEY>
-----END PGP MESSAGE-----
username_0: -----BEGIN PGP PUBLIC KEY BLOCK-----
<KEY>mc<KEY>
-----END PGP PUBLIC KEY BLOCK-----
username_1: This submission has been verified. Well done! |
LBNL-UCB-STI/beam | 384019948 | Title: Harmonize field names for coordinates in output events
Question:
username_0: Right now our events use very different conventions for coordinates, e.g.:
<event type="PathTraversal" start.y="0.02995" end.x="0.02995" start.x="0.03995" .....
<event destinationY="0.01995" originY="0.01995" destinationX="0.01005" ....
This is just plain messy. Please look through all events (note only Beam events have coordinates) and change the format to be consistent with the rest of our event field names and use "start" and "end" in all cases where there are 2 coordinates:
startX, startY
endX, endY
Note, following MATSim convention, please also replace all underscore field names with camelCase; e.g., the PathTraversal event is a big offender here.
Status: Issue closed |
ZacBlanco/hwx-tutorials | 121713166 | Title: Import "Processing streaming data in Hadoop with Apache Storm"
Question:
username_0: http://hortonworks.com/hadoop-tutorial/processing-streaming-data-near-real-time-apache-storm/
Status: Issue closed
Answers:
username_0: Reopening this issue. Closed accidentally
username_0: http://hortonworks.com/hadoop-tutorial/processing-streaming-data-near-real-time-apache-storm/ |
imabug/raddb | 204400920 | Title: Validation errors when adding new survey recommendations
Question:
username_0: validation errors were produced when adding a new survey recommendation. These should not be checked for unless the `resolved` checkbox is set.
Status: Issue closed
Answers:
username_0: Validation errors were produced when adding a new survey recommendation. These should not be checked for unless the `resolved` checkbox is set.
username_0: Looks like I should be able to do something by using https://laravel.com/docs/5.4/validation#conditionally-adding-rules
Status: Issue closed
|
WarEmu/WarBugs | 179051591 | Title: So, I got all the possible influence and quest rewards for a time.
Question:
username_0: A while back, the same day I created my sorcerer, I had a very strange bug. When I got to the first destro camp, where you meet your first rally master, and completed all the quests as well as the local PQ, I went back to claim my rewards. I noticed that after having selected my reward, I couldn't turn in the quest. For some time I was confused, until I tried to select the other reward and it seemed to have selected both of them. I clicked accept, and found both of the rewards in my inventory. The same happened with the influence rewards: I had every single one of them.
I'm still not sure what caused this bug, and it was a few weeks ago. I didn't really think to report it back then. I still have all the possible dark elf quest trophies in my inventory.
Status: Issue closed
Answers:
username_1: No longer applies, bug was fixed. |
eclipse/microprofile-open-api | 277151437 | Title: TCK tests for openapi endpoint
Question:
username_0: We will already be using the `/openapi` endpoint for most of our TCK tests, but this issue will track the inclusion of specific endpoint tests that will use different `Accept` headers for JSON and YAML.
Answers:
username_1: @username_2, @username_4 and I are working on this.
username_2: We'll want to write assertions which cover each of the annotations. I've copied the checklist below.
Annotation list:
- [ ] Callback
- [ ] Callbacks
- [ ] Components
- [ ] Explode
- [ ] ParameterIn
- [ ] ParameterStyle
- [ ] SecuritySchemeIn
- [ ] SecuritySchemeType
- [ ] Extension
- [ ] Extensions
- [ ] ExternalDocumentation
- [ ] Header
- [ ] Contact
- [ ] Info
- [ ] License
- [ ] Link
- [ ] LinkParameter
- [ ] ArraySchema
- [ ] Content
- [ ] DiscriminatorMapping
- [ ] Encoding
- [ ] ExampleObject
- [ ] Schema
- [ ] OpenAPIDefinition
- [ ] Operation
- [ ] Parameter
- [ ] Parameters
- [ ] RequestBody
- [ ] APIResponse
- [ ] APIResponses
- [ ] OAuthFlow
- [ ] OAuthFlows
- [ ] OAuthScope
- [ ] SecurityRequirement
- [ ] SecurityRequirements
- [ ] SecurityScheme
- [ ] SecuritySchemes
- [ ] Server
- [ ] Servers
- [ ] ServerVariable
- [ ] Tag
- [ ] Tags
username_2: Going to start with the Info, Contact and License annotations.
username_3: I am covering Component, Tag, Tags and Header.
username_4: Starting with Schema
username_5: doing
- [ ] Operation
username_1: I'm covering the following annotations:
- ExternalDocumentation
- Server
- Servers
- ServerVariable
username_6: I'll do SecurityScheme and SecurityRequirement
username_0: thanks everyone. I updated the checkboxes based on the comments.
username_2: I'm now working on Operation.
username_6: I'm working on Link, Encoding
username_0: updated checkboxes
username_4: I'll handle Extension, Extensions, Example Object
username_0: updated
username_6: Also doing OAuthFlow, OAuthFlows, OAuthScope, as these relate to SecurityScheme (confirmed this with Jana @username_5).
@username_0 Also completed LinkParameter, and SecuritySchemeIn/SecuritySchemeType (these are enums for SecurityScheme annotation)
username_3: I am working on Content
username_4: I am working on DiscriminatorMapping
username_0: All done - thanks everyone!
Status: Issue closed
|
simonbyrne/WinReg.jl | 282111250 | Title: WinReg.jl uses deprecated Array(::Type[T], m::Integer)
Question:
username_0: WARNING: Array(::Type{T}, m::Integer) where T is deprecated, use Array{T}(Int(m)) instead.
Stacktrace:
[1] depwarn(::String, ::Symbol) at .\deprecated.jl:70
[2] Array(::Type{UInt8}, ::UInt32) at .\deprecated.jl:57
[3] querykey(::UInt32, ::String) at .\.julia\v0.6\WinReg\src\WinReg.jl:78
[4] querykey(::UInt32, ::String, ::String) at .\.julia\v0.6\WinReg\src\WinReg.jl:113
Answers:
username_1: Now fixed.
Status: Issue closed
|
SiegeEngineers/aoe2techtree | 834851254 | Title: MeleeArmor/PierceArmor differs from Base Melee/Pierce
Question:
username_0: I have found that for some siege units the MeleeArmor/PierceArmor differs from Base Melee/Pierce in the Armours dict.
e.g. Battering Ram
https://github.com/SiegeEngineers/aoe2techtree/blob/master/data/data.json#L13305
```
"MeleeArmor": 0,
...,
"PierceArmor": 180,
```
vs.
```
"Armours": [
{
"Amount": -3,
"Class": 4
},
{
"Amount": 180,
"Class": 3
},
...
],
```
Other siege units affected:
Capped Ram
https://github.com/SiegeEngineers/aoe2techtree/blob/master/data/data.json#L6269
Siege Ram
https://github.com/SiegeEngineers/aoe2techtree/blob/master/data/data.json#L7536
Siege Tower
https://github.com/SiegeEngineers/aoe2techtree/blob/master/data/data.json#L11638
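A quick way to enumerate every affected record is to walk data.json and compare the two representations. A minimal sketch; the tree-walking heuristic and the `ID` field are assumptions, while `MeleeArmor`, `PierceArmor`, `Armours`, `Amount`, and `Class` are the fields quoted above:
```python
import json

CLASS_PIERCE, CLASS_MELEE = 3, 4  # class ids as used in the records above

def armour(entry, cls):
    # Amount for the given armour class, defaulting to 0 when absent.
    return next((a["Amount"] for a in entry.get("Armours", []) if a["Class"] == cls), 0)

def units(node):
    # Yield every dict in the tree that looks like a unit record.
    if isinstance(node, dict):
        if "MeleeArmor" in node and "Armours" in node:
            yield node
        for value in node.values():
            yield from units(value)
    elif isinstance(node, list):
        for value in node:
            yield from units(value)

with open("data/data.json") as f:
    data = json.load(f)

for unit in units(data):
    if (unit.get("MeleeArmor") != armour(unit, CLASS_MELEE)
            or unit.get("PierceArmor") != armour(unit, CLASS_PIERCE)):
        print(unit.get("ID", "?"), unit.get("MeleeArmor"), unit.get("PierceArmor"))
```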
Answers:
username_1: Those properties are actually named "Displayed[Property]" in the game files.
I guess they only influence what is being displayed in-game, but not what is actually calculated. https://github.com/SiegeEngineers/aoe2techtree/blob/master/scripts/generateDataFiles.py#L569 |
N1NTENDO1999/homepage | 442823798 | Title: Add additional information blocks
Question:
username_0: The résumé can be made less formal by adding extra blocks that contain useful information but present it in an "entertaining" form. For example: likes and dislikes, strengths and weaknesses, a list of skills or hobbies, infographics or interactive modules, etc. It is worth writing the content of such a block before starting the layout, because its size will affect the placement of elements on the page.
Status: Issue closed |
topcoder-platform/community-app | 607245507 | Title: [Dashboard] All the filter options are not displayed when clicking on any filter with 0 challenges.
Question:
username_0: In the filter, when there is even just one challenge, the full list of filter options is displayed, but if there are 0 challenges, the list is hidden.
Screenshots were attached showing the dashboard with challenges and without any challenges (omitted here).
Status: Issue closed
Answers:
username_1: This issue is not applicable now. |
upspin/upspin | 243287007 | Title: dir/server: snapshot creation should block until snapshot is created
Question:
username_0: The magic MakeDirectory("TakeSnapshot") that creates a snapshot directory for a user returns immediately. It spins off the creation and returns successfully. This makes it impossible to know when the backup exists (if it ever does!).
I propose it blocks until the snapshot is actually created, for some definition of created.
Answers:
username_1: It's not impossible to know; if you watch the snapshot tree you will see it created. That's how the snapshot tests in package `upspin.io/test` work.
But I agree that this is how it *should* work.
username_2: The goroutine was used because it's a "daemon" that periodically takes snapshots. There is no technical reason it can't block when asked directly by the magic.
username_0: I think watch is overkill. MakeDirectory blocks until the directory is made, and snapshot should too, as it's just a fancier MakeDirectory. |
mitodl/bootcamp-ecommerce | 645873502 | Title: HomePage on CI is broken
Question:
username_0: However it's working on RC so it may not be critical to MVP if we can just fix the home page configuration in the database
Answers:
username_1: Doesn't look totally broken to me, but there's plenty of missing content. We should copy back the content from RC
username_2: @username_1 both of the pages seem alike. I have checked it now.
username_3: Replicated the HomePage contents from RC to CI (https://bootcamp-ecommerce-ci.herokuapp.com/).
Can anyone verify it?
username_3: @username_0 @username_1 any update on it? Can I close it?
Status: Issue closed
username_1: I'm getting 500 errors trying to load the images on the home page. I think this happened before, but I don't remember the solution.
For example, https://bootcamp-ci.odl.mit.edu/images/3fGSOcMgMVRZo0bgtyFX3K6dm8k=/3/fill-475x300/Innovation_LeadershipSMALL.original.jpg?v=061f2c7e48685bdf841f9feb96fa340876626ee1 |
manala/ansible-role-shorewall | 192532066 | Title: cannot parse config.yml
Question:
username_0: No matter what I do, even with an empty
```
manala_shorewall_config:
```
or this simple rule
```
manala_shorewall_config:
rules:
- Access to SSH
- { action: ACCEPT, source: net, dest: fw, proto: tcp, dest_port: 2201 }
```
I always get this error:
```
{"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'None' has no attribute 'keys'\n\nThe error appears to have been in '/opt/boxen/homebrew/etc/ansible/roles/manala.shorewall/tasks/config.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: config > File\n ^ here\n"}
```
I am using ansible 2.2 on osx
Answers:
username_1: Our readme was outdated; your simple rule should be:
```
manala_shorewall_configs:
- file: rules
config:
# Access to SSH
- ACCEPT: net fw tcp 2201 - -
```
`manala_shorewall_config` is now only used to configure directives in the `/etc/shorewall/shorewall.conf` file.
Status: Issue closed
|
gokcehan/lf | 925656967 | Title: lf becomes unresponsive after a graphical shell command
Question:
username_0: I have a command in my `lfrc` file like this:
```
cmd img_view ${{
sxiv "$f"
echo "done"
}}
```
Sometimes when I run this command on an image file and quit sxiv afterwards, lf completely freezes (and I only see a blank terminal screen in most cases). I would say this happens about 10-50% of the time. (Sometimes it happens more frequently and sometimes less frequently. I don't know why.)
The weird thing is, when I switch the `echo` to a `notify-send`, this does not happen anymore. (I use dunst, if that is relevant.) (Removing the `echo` command makes no difference.)
The same goes for when I switch the `$` to a `!` or `%`: the freeze simply does not happen then.
There are some other weird niche cases where this also does not happen, e.g. when I have window swallowing activated in my window manager.
This doesn't seem to be related to sxiv, since I get similar behaviour when replacing `sxiv` with `zathura` and viewing PDF files. The problem seems to happen with all graphical programs.
When this happens, the shell has already exited and is no longer alive. (I used htop to check this.) So the issue seems to come from what lf does after a shell command.
Remote commands sent via `lf -remote` also do nothing. This indicates to me that lf is somehow stuck and frozen, not simply "invisible".
Answers:
username_1: Sounds like #621. The problem is (likely) `lf` getting resized while reinitializing.
Status: Issue closed
|
jlippold/tweakCompatible | 414592145 | Title: `JailProtect` working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.julioverne.jailprotect",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.julioverne.jailprotect",
"deviceId": "iPhone8,2",
"url": "http://cydia.saurik.com/package/com.julioverne.jailprotect/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": true,
"packageName": "JailProtect",
"category": "Tweaks",
"repository": "julioverne's Repo",
"name": "JailProtect",
"installed": "0.0~beta4a",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Likely working based on feedback from users in the community. The current positive rating is 50% with 1 working reports.",
"id": "com.julioverne.jailprotect",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "No Substrate Mode Alternative",
"latest": "0.0~beta4a",
"author": "julioverne",
"packageStatus": "Likely working"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
kdclaw3/ram-oracle | 426911879 | Title: how to get ORACLE SCHEMA??
Question:
username_0: Please help me find the Oracle schema, as it is not working with JDOE.
let matches = ram.match('JDOE','587F72032A3C828E','password');
console.log('The input matches the Oracle Database password: ' + matches + '.');
Status: Issue closed |
dkahle/ggmap | 748926873 | Title: remove rjson dependency?
Question:
username_0: It doesn't look like it's used:
```
grep -Er "((to|from)JSON)|rjson" ggmap
# ggmap/DESCRIPTION: rjson,
# ggmap/R/ggmap-package.R:#' @importFrom rjson fromJSON
# ggmap/NAMESPACE:importFrom(rjson,fromJSON)
# ggmap/NEWS:New depends - 1. rjson
```
At first I was going to offer to migrate it to `jsonlite`, but it seems that's unnecessary.
Answers:
username_1: Good call, thanks!
Status: Issue closed
|
toptal/chewy | 212687598 | Title: Load results from aggregation & paginate throuth them
Question:
username_0: Hi guys,
I'm using the following aggregation to remove duplicates from my query based on a certain field:
`scope = scope.aggregations("dedup": {"terms": {"field": "my_id"}, "aggregations": {"dedup_docs": {"top_hits": {"size": 1}}}})`
This works fine, when I look at the new scope, I see that the results (hits) are grouped by my_id in buckets and only the single top result is in the "_source" field - great.
However, how can I paginate through these results? Usually I would simply do a
`scope = scope.per(params[:limit]).page(params[:page]).load`
and then render out the results. But in this case this gives me the the original results without the aggregation applied.
I also tried
`scope = scope.aggs.per(params[:limit]).page(params[:page]).load`
but this did not work either.
How can I achieve this?
Many thanks and all the best,
Michael
Answers:
username_1: @username_2 Could you help us with this when you have some time? Thanks!
username_2: Oh guys, sorry for being late. This definitely looks like a bug. To bypass it for now, try to use Kaminari as you would use it for an array: https://github.com/kaminari/kaminari#paginating-a-generic-array-object. Also, you can always submit a patch; that would be really appreciated. If not, I'll find time some day to deal with it.
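For reference, that workaround might look roughly like this (a sketch; it assumes `scope.aggregations` with no arguments exposes the raw aggregation results from the Elasticsearch response):
```ruby
# Pull the deduplicated top hits out of the aggregation response.
buckets = scope.aggregations["dedup"]["buckets"]
top_hits = buckets.map { |b| b["dedup_docs"]["hits"]["hits"].first["_source"] }

# Kaminari can paginate any plain array.
paginated = Kaminari.paginate_array(top_hits).page(params[:page]).per(params[:limit])
```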
PaddlePaddle/Paddle | 246277239 | Title: wutai01 MPI paddle v
Question:
username_0: Thank you for contributing to PaddlePaddle. Submitting an issue is a great help for us.
Both Chinese and English issues are welcome.
It's hard to solve a problem when important details are missing.
Before submitting the issue, look over the following criteria before handing your request in.
- [ ] Was there a similar issue submitted or resolved before ? You could search issue in the github.
- [ ] Did you retrieve your issue from widespread search engines ?
- [ ] Is my description of the issue clear enough to reproduce this problem?
* If some errors occurred, we need details about `how do you run your code?`, `what system do you use?`, `Are you using GPU or not?`, etc.
* If you use an recording [asciinema](https://asciinema.org/) to show what you are doing to make it happen, that's awesome! We could help you solve the problem more quickly.
- [ ] Is my description of the issue use the github markdown correctly?
* Please use the proper markdown syntaxes for styling all forms of writing, e.g, source code, error information, etc.
* Check out [this page](https://guides.github.com/features/mastering-markdown/) to find out much more about markdown.
Answers:
username_1: Judging from the configuration, all the activations in the network are relu, so I suspect this is a relu problem. Model storage is not that fragile.
- During training, ReLU units are fairly fragile and can "die". For example, when a very large gradient flows through a ReLU neuron, the gradient update can push the neuron into a state in which it can no longer be activated by any other data point. If this happens, all gradients flowing through this neuron will be 0 from then on.
username_0: OK, I'm still verifying whether this is a dead-unit problem caused by relu.
However,
1. Training completes successfully; the problem appears at test time;
2. I ran it three times and hit the same problem every time;
3. This kind of network has always worked before (the data-processing logic is exactly the same, and local runs partially confirm this)
username_1: Please restore the same data, the same configuration, and the same settings as before, and confirm that this is a Paddle problem. Otherwise we have nowhere to start investigating.
Status: Issue closed
username_0: ```
I0727 09:40:11.928058 8975 TrainerInternal.cpp:165] Batch=145700 samples=18649600 AvgCost=0 CurrentCost=0 Eval: CurrentEval:
```
1. The problem of the cost being 0 throughout training has been around for a while;
2. For a model trained with MPI V1 on wutai01: because the network uses cosine to compute the similarity of two generated vectors (```cos_sim(a=user_dim, b=view_dim, scale=1)```), when the successfully trained model is used for a test computation locally, both generated vectors turn out to be zero vectors (the training data was also tested, with the same result), so the cosine computation fails. Similarity is computed normally during training, so I can only infer that this might be a bug in how the cluster saves the model. (Also, this network is completely fine in local mode, and it was also fine on MPI until recently.)
Link to the MPI job: http://10.87.137.36:8920/fileview.html?path=/home/disk1/normandy/maybach/40968/
username_2: Is there a trace for the AvgCost=0 CurrentCost=0 problem? It interferes with the diagnosis.
username_1: What is this AvgCost=0 CurrentCost=0 problem?
username_0: The parameters have not changed, but the data dictionary was updated in the meantime, so the training data cannot be restored to exactly what it was. This situation seems to appear at the same time as the avgcost=0 currentcost=0 problem during training; we still have to wait for the tanh results to judge whether it is the same situation.
username_1: 1. How did you rule out a data problem?
2. It cannot be that avgcost=0 currentcost=0 persists no matter what you change - under what conditions does avgcost=0 currentcost=0 not appear?
username_1: The data-processing logic being the same does not rule out anomalies in the data.
username_0: ```
I0728 17:24:47.582365 25615 TrainerInternal.cpp:165] Batch=110 samples=56320 AvgCost=0.161602 CurrentCost=0.0999691 Eval: CurrentEval:
I0728 17:24:48.212803 25615 TrainerInternal.cpp:165] Batch=120 samples=61440 AvgCost=0.15781 CurrentCost=0.1161 Eval: CurrentEval:
I0728 17:24:49.328068 25615 TrainerInternal.cpp:165] Batch=130 samples=66560 AvgCost=0.156435 CurrentCost=0.139936 Eval: CurrentEval:
I0728 17:24:50.238536 25615 TrainerInternal.cpp:165] Batch=140 samples=71680 AvgCost=0.155359 CurrentCost=0.141367 Eval: CurrentEval:
I0728 17:24:51.023105 25615 TrainerInternal.cpp:165] Batch=150 samples=76800 AvgCost=0.153028 CurrentCost=0.120404 Eval: CurrentEval:
I0728 17:24:51.955370 25615 TrainerInternal.cpp:165] Batch=160 samples=81920 AvgCost=0.151533 CurrentCost=0.129097 Eval: CurrentEval:
I0728 17:24:52.876628 25615 TrainerInternal.cpp:165] Batch=170 samples=87040 AvgCost=0.150316 CurrentCost=0.130859 Eval: CurrentEval:
I0728 17:24:53.815397 25615 TrainerInternal.cpp:165] Batch=180 samples=92160 AvgCost=0.149469 CurrentCost=0.135065 Eval: CurrentEval:
I0728 17:24:54.696384 25615 TrainerInternal.cpp:165] Batch=190 samples=97280 AvgCost=0.147974 CurrentCost=0.121052 Eval: CurrentEval:
```
This is the log of running part of the script locally.
On MPI, once sparse_update is turned off, the cost seems to become normal.
Actually, the question is whether this is a problem with the Paddle environment on wutai01. So I also tried submitting to wutai02 and the wutai cluster, but it kept reporting connect errors.
username_1: sparse_update is generally used only for the embedding layer. The other layers are no longer sparse.
username_0: Yes, it was the sparse layers that I turned off.
username_2: sparse_update is generally used only for the embedding layer. The other layers are no longer sparse.
--
That's what I found after trying it. Also, another job has a small data volume, so with the same configuration and data, training on a single node produces a normal cost log. See the explanation in this issue: https://github.com/PaddlePaddle/Paddle/issues/2987
username_3: I'm closing this due to low activity, feel free to reopen.
Status: Issue closed
|
jlippold/tweakCompatible | 484982032 | Title: `TextEmojis` working on iOS 12.4
Question:
username_0: ```
{
"packageId": "se.nosskirneh.textemojis",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "se.nosskirneh.textemojis",
"deviceId": "iPhone9,2",
"url": "http://cydia.saurik.com/package/se.nosskirneh.textemojis/",
"iOSVersion": "12.4",
"packageVersionIndexed": false,
"packageName": "TextEmojis",
"category": "Tweaks",
"repository": "henrikssonbrothers",
"name": "TextEmojis",
"installed": "1.3~beta3",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "se.nosskirneh.textemojis",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Search and input emojis by text shortcodes",
"latest": "1.3~beta3",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
Amertz08/drf_ujson2 | 577893607 | Title: Not working with ujson 2.0.0
Question:
username_0: ujson 2.0.0 has removed `double_precision`, so this lib crashes now.
https://github.com/ultrajson/ultrajson/releases/tag/2.0.0
Answers:
username_1: Yeah just ran into this myself. Going to do a bug fix release pinning `ujson<2` then implement a fix.
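A hedged sketch of what such a guard could look like (not the actual released fix; it assumes `ujson.__version__` is available and that 10 was the 1.x default precision):
```python
import ujson

UJSON_2 = int(ujson.__version__.split(".")[0]) >= 2

def dumps_compat(data, **kwargs):
    # ujson >= 2.0 removed the double_precision kwarg entirely,
    # so only pass it through to the 1.x API.
    if not UJSON_2:
        kwargs.setdefault("double_precision", 10)
    return ujson.dumps(data, **kwargs)
```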
username_2: Any progress on this one ???
Status: Issue closed
|
turnkeylinux/tracker | 205167109 | Title: LDAP sync for customer user
Question:
username_0: Hey guys, I've managed to enable the LDAP authentication for customer users successfully. Well... Almost.
The customer users can log in using their AD username and AD password - but I have to create a customer user with the exact username in the OTRS web interface first.
So it appears that authentication is working, but my turnkey OTRS does not sync the users (they are all members of a specific AD-group) to OTRS.
Is there a way to enable all my AD-users to log into OTRS without adding them one by one to the OTRS customer user database manually?
Kind regards
Answers:
username_1: Hi,
This is actually our Issue tracker. I.e. for reporting bugs and feature requests. Your problem sounds more like something you need support with, which is probably a better candidate for our [forums](https://www.turnkeylinux.org/forum/) (that is unless there is actually a bug causing your issue).
As for your question; I'm almost sure that there is a way, but TBH I have no idea. LDAP is something that I have only a very basic grasp of.
I recall a few years ago, I used a PHP module that allowed a remote LDAP DB to provide authentication to a web app I was working on. I never really understood how it worked, but it did (it was running within a LAN though - not sure if it would be secure running online?) I'm assuming that there is probably some similar Perl module that OTRS could leverage to provide authentication?
Alternatively, perhaps there is some way to have a cron job which updates the local DB with LDAP users?
Anyway, sorry I'm not much help to you. I'm going to close this now. Please feel free to reopen if you discover that it actually is caused by a bug. Also please feel free to open a new thread on the forums.
Status: Issue closed
|
KhronosGroup/SPIRV-Tools | 910463727 | Title: spirv-opt: Support branch flatten?
Question:
username_0: Hi, I come from [SPIRV-Cross](https://github.com/KhronosGroup/SPIRV-Cross/issues/1684).
I'm trying to cross-compile like this: HLSL -> SPIR-V -> SPIRV-OPT -> ESSL
I found that physical loop unrolling works because SPIRV-OPT supports this feature.
But branch flattening is not supported, because a target like ESSL needs an [extension](https://github.com/KhronosGroup/GLSL/blob/master/extensions/ext/GL_EXT_control_flow_attributes.txt) for it, and I think not all mobile devices support this extension.
Will SPIRV-OPT support this physical branch flattening feature in the future?
I think it's still necessary for platforms like mobile.
Status: Issue closed
Answers:
username_0: We decided to use the solution provided by https://github.com/KhronosGroup/SPIRV-Cross/issues/1684
Using control flow hints seems more reasonable for now. |
tastybento/ASkyBlock-Bugs-N-Features | 284914570 | Title: Problem with the challenges!!
Question:
username_0: Hello, the challenges menu works, but clicking a challenge itself does nothing. It does not show an error or anything similar. I have seen the challenges work on other servers, so I do not know why it does not work in my case. Can you help me, please!!

Answers:
username_1: I suspect it's another plugin interfering with the clicking. Remove other plugins one by one until it works.
Status: Issue closed
|
mobxjs/mobx | 1015357522 | Title: Wrong event type in spy when changing an array
Question:
username_0: I am writing a logging library for MobX. When I push to an observable array, MobX generates a spy report with type `splice`. It looks like a mistake.
**Actual outcome:**
The spy report is the following:
```
observableKind: "array"
object: Array(1)
debugObjectName: "[email protected]"
type: "splice" // <- why it is splice if I used .push method on array?
index: 0
removed: Array(0)
added: Array(1)
removedCount: 0
addedCount: 1
spyReportStart: true
```
**Intended outcome:**
Either `push` or `array`. It is up to the library developers; I don't know why the event type is `splice`.
**How to reproduce the issue:**
Go to https://codesandbox.io/s/hungry-resonance-hfegb?file=/index.js and open console.
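The sandbox boils down to roughly this (a minimal sketch):
```javascript
import { observable, spy } from "mobx";

// Log spy events for observable arrays only.
spy((event) => {
  if (event.observableKind === "array") {
    // Prints "splice", even though the mutation below is a push().
    console.log(event.type, event.added, event.removed);
  }
});

const items = observable([]);
items.push("first"); // reported with type "splice"
```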
**Versions**
Mobx 6 |
auth0/passport-linkedin-oauth2 | 18000774 | Title: Retreiving user email with `r_emailaddress`?
Question:
username_0: When authenticating with `scope: ['r_emailaddress', 'r_basicprofile']`, the basic profile is returned, except that the value for email is undefined and is not in the raw JSON returned.
Per the [LinkedIn API Docs](http://developer.linkedin.com/documents/authentication) the email requires a call to a different endpoint `GET /people/~/email-address` vs `GET /people/~`. Might there be a configuration step I missed?
``` json
{
provider: 'linkedin',
id: 'qBP4vNlwf5',
displayName: '<NAME>',
name: { familyName: 'Doe', givenName: 'John' },
emails: [ { value: undefined } ],
_raw: '{\n "firstName": "John",\n "formattedName": "<NAME>",\n "id": "qBP4vNlwf5",\n "lastName": "Doe",\n "pictureUrl": "http://s.c.lnkd.licdn.com/scds/common/u/images/themes/katy/ghosts/person/ghost_person_60x60_v1.png"\n}',
_json:
{
firstName: 'John',
formattedName: '<NAME>',
id: 'qBP4vNlwf5',
lastName: 'Doe',
pictureUrl: 'http://s.c.lnkd.licdn.com/scds/common/u/images/themes/katy/ghosts/person/ghost_person_60x60_v1.png' }
}
```
Answers:
username_1: Thanks, it really helped
jasmine/jasmine | 372118818 | Title: Error: Expected [ 'Array', 'Contents' ] to be [ 'Array', 'Contents' ].
Question:
username_0: When comparing two arrays in my test framework `expect(['Array', 'Contents']).toBe(['Array', 'Contents']);` Jasmine reports an error: `Expected [ 'Array', 'Contents' ] to be [ 'Array', 'Contents' ].` Jasmine is truly mad about the fact that the address pointers for both of these arrays are not the same, regardless of their contents. The first `[ 'Array', 'Contents' ]` is *not* the second `[ 'Array', 'Contents' ]`. While it isn't a huge deal, I would maybe expect some sort of error message explaining that.
## Expected Behavior
I would expect some error message or some way of implying that the base addresses of my arrays are not the same and therefore they are not the same object.
## Current Behavior
Right now jasmine reports `Expected [ 'Array', 'Contents' ] to be [ 'Array', 'Contents' ]. `
## Possible Solution
A new error message that tells me the base addresses of my arrays are not the same, and that they are not the same object because of that. Ideally, I think there could be something that suggests that I should use toEqual to compare the contents of my arrays instead, maybe if it notices their contents are similar.
## Suite that reproduces the behavior (for bugs)
```javascript
describe("sample", function() {
expect(['Array', 'Contents']).toBe(['Array', 'Contents']);
});
```
## Context
Again, this isn't a huge deal. Maybe I shouldn't assume the worst of people, but I feel like, had I not been informed and actually known what the issue was, this could have been a very frustrating error to come across. In every way, it looks like the first array should be the second array as reported by Jasmine, and I think it would be a frustrating experience to see something that looks like it should be working but is still reported as wrong.
Answers:
username_1: If you want a deep equality of your objects you should be using `toEqual` instead of `toBe`. As I write this up _again_ though, you're probably correct, that an additional note for `toBe` that mentions deep equality vs object equality would be useful. I'd be happy to review a pull request to update the failure message for `toBe`.
Hope this helps. Thanks for using Jasmine!
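In other words, a minimal sketch:
```javascript
describe("array comparison", function() {
  it("uses toEqual for contents and toBe for identity", function() {
    var a = ['Array', 'Contents'];
    expect(a).toEqual(['Array', 'Contents']); // passes: deep equality
    expect(a).toBe(a);                        // passes: same reference
    // expect(a).toBe(['Array', 'Contents']); // would fail: different objects
  });
});
```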
Status: Issue closed
|
rscustom/rocksmith-custom-song-toolkit | 780825024 | Title: Cannot download any builds
Question:
username_0: The current build and older builds on the builds tab (Windows) on rscustom.net generate the same return message:
{"message":"Artifact not found or access denied."}
Answers:
username_1: Appveyor removes artifacts after 6 months. Since there hasn't been any activity in a while to generate new builds, they've all been removed.
A new build has been pushed & I published it as a github release which won't have the same 6 month limitation.
Stable builds should be released on github to avoid this in the future.
Status: Issue closed
|
vanatteveldt/frogr | 64035468 | Title: unable to handle large datasets
Question:
username_0: When a large (>150) character vector is used as input, an error is thrown:
Error in textConnection(output) : all connections are in use
In `frog.R` a textConnection is opened (line 50) but it is not closed, causing the number of open file handles to keep increasing.
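For context, the general pattern the fix needs to follow (a sketch, not the actual frog.R code):
```r
con <- textConnection(output)
result <- read.table(con, sep = "\t")  # stand-in for whatever parsing frog.R does
close(con)  # the missing close(); without it every call leaks a
            # connection until "all connections are in use"
```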
Answers:
username_0: A workaround is to use
install_github("username_0/frogr")
which will be available until the fix is accepted, see pull request #2
username_0: Fixed by 0afd07a8
Status: Issue closed
|
jupyter/notebook | 283723716 | Title: directory completion causes Jupyter to get stuck when there are lots of files
Question:
username_0: In Jupyter Notebook, after typing
"/folder/path/"
use TAB to do the completion. If there are lots of files (e.g. 100,000 files) in the folder, the whole Jupyter interface gets stuck (it happens even on an SSD).
ChristofferFlensburg/superFreq | 389540079 | Title: Error in importEnsemblData
Question:
username_0: ```
Error in useMart(biomart = "ENSEMBL_MART_ENSEMBL", dataset = "hsapiens_gene_ensembl", :
Incorrect BioMart name, use the listMarts function to see which BioMart databases are available
Calls: superFreq ... nameCaptureRegions -> xToGeneFromDB -> importEnsemblData -> useMart
```
In R when I do the following command from importEnsemblData
```r
mart = useMart(biomart='ENSEMBL_MART_ENSEMBL', dataset = 'hsapiens_gene_ensembl',
               version='Ensembl Genes 94', host='grch37.ensembl.org')
```
The error is fixed. listMarts() doesn't have an Ensembl Genes 90, and I'm not sure why.
Answers:
username_1: Hmm, good old biomart again. I explicitly set the version to 90 a while back, because the then-current version had issues... I could update to the newer version 94, but I'm sure it'll come back and bite me later.
There are already annotated capture regions for exomes (ensembl exons), RNA (ensembl exomes) and genome (10kb bins) for mm10, hg19 and hg38. I am tempted to have everyone use those, and just drop support for user-supplied capture regions.
So unless you really want your specific capture regions, can you just not specify the capture regions in the superFreq() call (or set it to the default empty string "")? It'll default to one of the above depending on what 'mode=' and 'genome=' you give to superFreq().
If you want to use your own, I can update the useMart to your version and hopefully you can use that, at least until it's updated next time. :P
username_0: Personally, I've already edited the source code to solve the problem; this is more about raising the issue for awareness.
You should be able to fix the problem by removing the specification for which version
`useMart(biomart='ENSEMBL_MART_ENSEMBL', dataset = 'hsapiens_gene_ensembl', host='grch37.ensembl.org')`
Or make it an explicit option when starting the overall superFreq() function
username_1: Ok, good, thanks for the heads up! Unfortunately I am aware of the issue, and the reason I specified the version was that the default version wasn't working (at least for some users, and only for hg19...) at a certain point. :/ I could change back to the default version (is that what you did?) for the next version I guess, and see how long that lasts. :D
In general, dependencies on external connections have been a big headache, so I've been trying to replace them with a single connection to our servers at WEHI for annotation and resources. BioMart in the case of user supplied capture regions is the last non-WEHI connection left I think, which is why I'm tempted to drop support for it.
Maybe I can ask why you don't use the default capture regions? It'd be good to get a feel for why and how often people need this feature.
Thanks again for the work!
username_0: We've done a whole exome screen, so I'm using the provided bed file from "Agilent_SureSelect_Clinical_Research_Exome.design.hg19.merged.bed"
Either `mart = useMart(biomart='ENSEMBL_MART_ENSEMBL', dataset = 'hsapiens_gene_ensembl', version='Ensembl Genes 94', host='grch37.ensembl.org')`
or
`useMart(biomart='ENSEMBL_MART_ENSEMBL', dataset = 'hsapiens_gene_ensembl', host='grch37.ensembl.org')`
Seemed to resolve the problem. |
madskristensen/NpmTaskRunner | 117110681 | Title: This should ship with Web Tools like grunt / gulp task runners
Question:
username_0: This one is way more elementary than grunt or gulp.
Being less requested than grunt / gulp is partially the new ASP.NET templates' fault, as they configure super simple one-liner jobs (like clean) via gulp instead of defining an npm script.
Status: Issue closed
Answers:
username_1: It's on the Web Teams backlog. Closing here since I (as a third party extension author) can do nothing about it. |
orientechnologies/orientdb | 186997235 | Title: Records inserted multiple times under heavy load and IO errors.
Question:
username_0: ## OrientDB Version, operating system, or hardware.
- [x] v2.2.11
- [x] v2.2.12
## Operating System
- [X] Linux
## Expected behavior
There are exactly 2 million records in the db and the following query will not return any records: `select * from (select name, count(name) as c from LiveTable group by name) where c != 2`. Also, the in-memory record count is 2M.
## Actual behavior
Live updaters soon crash on both nodes (sometimes on only one): `Caught Network I/O errors on 127.0.0.1:2424/testDb, trying an automatic reconnection... (error: Timeout on reading response) [OStorageRemote]`. Usually there are 2,000,002 records in the db when the loop finishes; some tests ended with 2,000,001 records. The above-mentioned select returns data like this:
```
name | c
---------- | ----
Name_41415 | 3
Name_2734 | 3
```
Also, as the live queries crashed, the inmemory copy has much less than 2M records, eg 35740 on one and 46738 on another node.
## Steps to reproduce the problem
Run the attached program in a 2-node environment (change NODE_1 and NODE_2 placeholders in the hazelcast.xml file first). Both nodes will try to insert 1M records, so there should be 1M different entries, 2 copies of each. But some records are entered 3 times. Seems that one extra from every node where the IO error occurred.
[orient_6874.zip](https://github.com/orientechnologies/orientdb/files/568410/orient_6874.zip)
Answers:
username_1: Hi @username_0, could you please try with the version in 2.2.x branch?
username_0: Hi @username_1,
I tried, but unfortunately the problem still exists: 2,000,002 records were added and both live updaters died, one of them usually with a somewhat longer stacktrace:
```
Caught Network I/O errors on 127.0.0.1:2424/testDb, trying an automatic reconnection... (error: Timeout on reading response) [OStorageRemote]com.orientechnologies.common.concur.lock.OInterruptedException: Thread interrupted while waiting for request
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:256)
at com.orientechnologies.orient.client.binary.OAsynchChannelServiceThread.execute(OAsynchChannelServiceThread.java:54)
at com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:77)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2173)
at com.orientechnologies.orient.client.binary.OChannelBinaryAsynchClient.beginResponse(OChannelBinaryAsynchClient.java:244)
... 2 more
```
Another error message seen: `Caught Network I/O errors on 127.0.0.1:2424/testDb, trying an automatic reconnection... (error: Timeout on reading response) [OStorageRemote]`
PS, the machines may have slightly different performance. Technically both are Intel® Core™ i5-3570 CPU @ 3.40GHz × 4, but one has Ubuntu 14 and the other Ubuntu 16, and the latter always seems to finish the inserts before the other one.
username_1: @luigidellaquila and @tglman any idea about this?
Status: Issue closed
|
marcelotduarte/cx_Freeze | 735684676 | Title: error exe clr and numpy
Question:
username_0: 
This error only occurs when I add numpy and clr together; it does not appear with either one separately
Answers:
username_1: Hi, can you post a sample?
username_2: I have the same issue.
Tried it with the sample from here: #658
```
import platform
import clr
if __name__ == "__main__":
print(platform.system())
print(platform.machine())
```
My setup.py looked like this
```
from cx_Freeze import setup, Executable
import sys
buildOptions = dict(
packages=["clr", "platform"]
)
base = 'Win64GUI' if sys.platform == 'win64' else None
executables = [
Executable('test.py', base=None, icon="icon.ico")
]
setup(
name='Test',
version='1.0',
description='Test',
options=dict(build_exe=buildOptions),
executables=executables
)
```
username_3: Hi
`base = 'Win64GUI' if sys.platform == 'win64' else None` - this doesn't do anything useful, as there are only Win32GUI, Console, and Win32Service options;
There are currently problems with icons; try removing `icon="icon.ico"`
username_1: This is not true if the icon is valid. Issue #824 is about a png renamed to ico, which is invalid.
username_1: @username_0 @username_2
cx_Freeze 6.5 has just been released.
Can you let me know if this has been resolved?
Status: Issue closed
username_1: cx_Freeze 6.6 has just been released.
Closing due to lack of response.
If you had issues please open a new issue. |
aallfredo/DebesAlgo | 134170519 | Title: Let users take photos of the businesses and upload them
Question:
username_0: Have them take a photo of the sign or of the seizure notice, plus a photo of the business, and even let them update it if it reopened; for example, in tocaba Levittown the Taco Maker has already reopened.
Answers:
username_1: @username_0 they can update it by clicking edit if they are registered. But uploading photos would be a new feature.
username_0: OK, I'll move the photo upload idea to features.
wso2/product-is | 1185954665 | Title: Selectively revoke tokens on role change
Question:
username_0: **Is your suggestion related to an experience? Please describe.**
When a role gets updated or deleted, we currently revoke all the tokens of all the users who have that role. Rather than revoking all the tokens, we should revoke only the set of tokens that were issued with the privileges of that role.
For example, the admin user has the **admin** role and a *test* role (which does not have any permissions). If the test role gets updated or deleted, we revoke all the tokens of the admin user. But when we revoke the tokens of a role, we should check the associated permissions and scopes, and we should revoke only the tokens issued with the associated scopes.
**Related Issue**
(https://github.com/wso2/product-is/issues/12957)
dotnet/roslyn | 125224193 | Title: Change default pre-selection in Results window
Question:
username_0: When I do Find All References on a method, the pre-selected element in the Results window does not match the instance of the method I have selected in my editor. Instead, it has the root of the result hierarchy pre-selected (see image). This is confusing, because any other time the blue highlight is over a search term I am navigated to that instance in the editor.

It would make more sense to me if the pre-selection was on the instance of the method I selected and clicked FAR for in the editor. For example, the blue highlight in the above image would be over the second element in the hierarchy because that is what is currently highlighted in my editor.
Answers:
username_0: @CyrusNajmabadi said it better than I: "it's inconsistent that the root is selected, but in the editor i'm not on that item"
username_1: @username_0 @Pilchie Can we roll this into https://github.com/dotnet/roslyn/issues/903?
username_1: Rolling into #903.
Status: Issue closed
|
rossfuhrman/_why_the_lucky_markov | 374694935 | Title: at a card and gasps .
Twin: Almost as if.. Remember the deer, the smoke is an Array of our thoughts.
Question:
username_0: Toot: at a card and gasps .
Twin: Almost as if.. Remember the deer, the smoke is an Array of our thoughts.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
opentok/Opentok-.NET-SDK | 354421465 | Title: Paused Minutes in Archive Duration
Question:
username_0: A few months ago, you started including Paused Minutes in the Duration field of the Archive object. Since then, the Duration has not matched the actual length of the Archived Video. If I look at the Archive Inspector tool, there is a Running Minutes field that shows the correct video length. I believe you should either correct the Duration field so it only includes Running Minutes, or you should provide a new field that provides the Running Minutes for the Archive object.
Answers:
username_1: @username_0 Could you please contact <EMAIL> and share the Archive ID with them where you're not seeing the duration field match the length of the archive.
This does not seem like an issue with the .NET SDK, but rather with the Archiving functionality.
username_0: I already have been in touch with Tokbox support (https://support.tokbox.com/hc/en-us/requests/22980) and they told me to open an Issue here.
If you look at an Archive in the Archive Inspector, there are 4 times available:
**Meeting Duration**
**Archive Duration** <- This is the Duration field reported by the .NET SDK
**Running Minutes** <- This is the actual length of the archived video file, the only number that I actually care about, but is not available in the .NET SDK
**Paused Minutes**
Here are some recent examples where the Duration doesn't match the length of the video file.
ArchiveID: 09789cfb-c98c-4c64-911b-1dd8232b1c8b
Created At: Aug 18, 2018 09:34:33AM
TokBox Archive Duration: 16:32
Actual File Duration (Running Min): 15:26
Azure Archive: https://tokboxd.blob.core.windows.net/atlas-video/45614802/09789cfb-c98c-4c64-911b-1dd8232b1c8b/archive.mp4
Most of them seem to add about 1 min to the Duration, but this one is over 7 min difference:
ArchiveID: 4ad48bbc-28bd-4778-8b43-f0444f6de870
Created At: Aug 17, 2018 02:58:00PM
TokBox Archive Duration: 36:04
Actual File Duration (Running Min): 29:01
Azure Archive: https://tokboxd.blob.core.windows.net/atlas-video/45614802/4ad48bbc-28bd-4778-8b43-f0444f6de870/archive.mp4
This one is an empty file that still shows a Duration:
ArchiveID: d7edb6a0-b8ea-4f06-b7b6-107b7303940d
Created At: Aug 17, 2018 05:34:29PM
TokBox Archive Duration: 3:24
Actual File Duration (Running Min): 0:00
Azure Archive: https://tokboxd.blob.core.windows.net/atlas-video/45614802/d7edb6a0-b8ea-4f06-b7b6-107b7303940d/archive.mp4
username_0: Is there any update on this? Basically I just need a way to get the Running Minutes field from the SDK. Is this possible or can it be added?
Status: Issue closed
username_0: Please respond. I am still waiting on a way to get Running Minutes.
username_0: Still no response. I am reopening a support ticket. Being able to access Running Minutes should not be difficult.
username_1: @username_0 Apologies for the delayed response! Unfortunately, we are unable to provide the `running minutes` field as a part of the .NET SDK because it's not a part of the API response: https://tokbox.com/developer/rest/#start_archive.
This is something that is calculated manually by the Archive Inspector tool. To achieve this, you would have to do your own calculations.
Status: Issue closed
username_1: @username_0 I'm going to go ahead and close this issue, please let me know if you have any questions!
username_1: @username_0 At this time, we use internal APIs to calculate the running minutes. Unfortunately, at this time, we do not have a mechanism in place to expose those APIs. I've forwarded your request to our product team and will update you as soon as I have more information.
username_2: is there any update on this??
username_1: @username_2 No update on this yet, we do not have a mechanism in place to expose APIs to calculate the running minutes. |
jlippold/tweakCompatible | 347671139 | Title: `Bazzi` working on iOS 11.3.1
Question:
username_0: ```
{
"packageId": "com.repo.xarold.com.bazzi",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.repo.xarold.com.bazzi",
"deviceId": "iPhone10,3",
"url": "http://cydia.saurik.com/package/com.repo.xarold.com.bazzi/",
"iOSVersion": "11.3.1",
"packageVersionIndexed": false,
"packageName": "Bazzi",
"category": "Tweaks",
"repository": "Xarold Repo",
"name": "Bazzi",
"installed": "1.1.2",
"cracked": false,
"CydiaTweak": true,
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.repo.xarold.com.bazzi",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "The ultimate battery icon customizer tweak.",
"latest": "1.1.2",
"author": "NeinZedd9",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
``` |
CityOfPhiladelphia/mapboard | 219351631 | Title: Add basemap control
Question:
username_0: To toggle between basemap and imagery, and select from imagery years.
Answers:
username_0: Basic toggle and year selector are working as of 35f0cc03cfbd4d4bdc5415bbdb849547030f8e6a. CSS still needs to be tweaked.
username_0: Let's try making the basemap control a single dropdown with all basemaps, including historic and imagery. |
ashokfernandez/kiste | 114078868 | Title: Persistent Login
Question:
username_0: The player should be able to retain the user's login between being opened and closed
Answers:
username_0: Seems to happen automatically. Will need to improve the login experience and have a better understanding of where the login details go and how to manage them.
Status: Issue closed
|
tango-controls/pogo | 254815563 | Title: generated Makefile: Wrong comment for OUTPUT_DIR
Question:
username_0: Version: Tango 9.2.5a
A newly generated Makefile has
```
#=============================================================================
# OUTPUT_DIR is the directory which contains the build result.
# if not set, the standard location is :
# - ./shlib if OUTPUT_TYPE is SHARED_LIB
# - ./lib if OUTPUT_TYPE is STATIC_LIB
# - ./bin for others
#
#OUTPUT_DIR =
```
but looking at `/usr/local/share/pogo/preferences/tango.opt` gives
```
ifndef OUTPUT_DIR
ifeq ($(PROJECT_TYPE),DEVICE)
OUTPUT_DIR= $(HOME)/DeviceServers
else
ifeq ($(PROJECT_TYPE),STATIC_LIB)
OUTPUT_DIR= lib
else
ifeq ($(PROJECT_TYPE),SHARED_LIB)
OUTPUT_DIR= lib
else
OUTPUT_DIR= bin
endif
endif
endif
endif
```
so one of them is wrong.
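If tango.opt is taken as the source of truth, the generated comment would presumably need to read something like:
```
#=============================================================================
# OUTPUT_DIR is the directory which contains the build result.
# if not set, the standard location is :
#	- $(HOME)/DeviceServers if PROJECT_TYPE is DEVICE
#	- ./lib if PROJECT_TYPE is STATIC_LIB or SHARED_LIB
#	- ./bin for others
#
#OUTPUT_DIR =
```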
Answers:
username_1: You are right.
I will fix it in the next release.
username_0: @username_1 Is that already fixed?
username_1: I don't know.
It is not a Pogo problem, it is a distribution problem.
I will check the **tango.opt** file given with the distribution
Status: Issue closed
username_1: Sorry.
I checked the tango.opt file. It has not been modified in the distribution,
but the generated Makefile comment has been modified.
spring-projects/spring-session | 168844156 | Title: Spring session JDBC - session committed multiple times
Question:
username_0: Hi,
I found the following problem in _SessionRepositoryFilter_. In case we redirect the first request, both commitSession sections get executed: once when the request redirect is triggered, and again in the finally block of this filter. Internally the session returns isNew() = true (as the response was never yet sent to the user) and hence the session gets committed twice. The problem is that both entries have the same sessionId (primary database key), which ends up as a primary key violation.
Pavel
```java
    @Override
protected void doFilterInternal(HttpServletRequest request,
HttpServletResponse response, FilterChain filterChain)
throws ServletException, IOException {
request.setAttribute(SESSION_REPOSITORY_ATTR, this.sessionRepository);
SessionRepositoryRequestWrapper wrappedRequest = new SessionRepositoryRequestWrapper(
request, response, this.servletContext);
SessionRepositoryResponseWrapper wrappedResponse = new SessionRepositoryResponseWrapper(
wrappedRequest, response);
HttpServletRequest strategyRequest = this.httpSessionStrategy
.wrapRequest(wrappedRequest, wrappedResponse);
HttpServletResponse strategyResponse = this.httpSessionStrategy
.wrapResponse(wrappedRequest, wrappedResponse);
try {
filterChain.doFilter(strategyRequest, strategyResponse);
}
finally {
wrappedRequest.commitSession();
}
}
public void setServletContext(ServletContext servletContext) {
this.servletContext = servletContext;
}
/**
* Allows ensuring that the session is saved if the response is committed.
*
* @author <NAME>
* @since 1.0
*/
private final class SessionRepositoryResponseWrapper
extends OnCommittedResponseWrapper {
private final SessionRepositoryRequestWrapper request;
/**
* Create a new {@link SessionRepositoryResponseWrapper}.
* @param request the request to be wrapped
* @param response the response to be wrapped
*/
SessionRepositoryResponseWrapper(SessionRepositoryRequestWrapper request,
HttpServletResponse response) {
super(response);
if (request == null) {
throw new IllegalArgumentException("request cannot be null");
}
this.request = request;
}
@Override
protected void onResponseCommitted() {
this.request.commitSession();
}
}
```
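A minimal sketch of the redirect scenario described above (the handler and path are illustrative, not from the actual application):
```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class RedirectServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        HttpSession session = request.getSession(true); // brand new session
        session.setAttribute("visited", Boolean.TRUE);
        // Committing the response here fires onResponseCommitted(), which
        // saves the session; the filter's finally block then calls
        // commitSession() a second time for the same (still "new") session.
        response.sendRedirect("/home");
    }
}
```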
Answers:
username_0: The other problem is that, from a developer's point of view, this class makes it impossible to fix the problem, as the save method cannot be overridden - its signature involves a package-private final class...
username_0: Oops, my fault. I had a too-long binary value in some attribute, and the code (for some reason) did save the session, but the parameters threw an exception -> hence the clearing of the session state (which sets new -> false) was not called. Please ignore this thread.
Status: Issue closed
|
fatiando/harmonica | 519238622 | Title: Split up the isostasy code into a separate package
Question:
username_0: Since I've been rethinking a lot of the Fatiando packages lately, Harmonica is not going to escape. If our goal is to be a "gravity and magnetics" package, then isostasy/flexure calculations don't really fit in here. Sure, we use them in conjunction with gravity but the actual calculations don't really have any gravity in them.
I propose moving this type of function to a separate isostasy/flexure package. With the new [package template](https://github.com/fatiando/package-template/), it's a lot easier to create new packages and we can release them as often as we need (without waiting for unrelated things in Harmonica to be resolved). Maintenance is not that complicated anymore and a lot of our process is automated. So maintaining a single large package is not really easier now. This is why this whole split of `fatiando` is [happening in the first place](https://www.username_0.com/blog/future-of-fatiando.html).
As for possible names, the following aren't taken on PyPI:
* Roots
* Isostasy
* Isostatic
* Jiboia (a play on Portuguese words: jiboia is a [boa constrictor](https://en.wikipedia.org/wiki/Boa_constrictor) [snake] and boia means "float" or "buoy")
Answers:
username_1: I think having a package template and releasing with a lot of automation is a great advantage when we want to make this kind of decision.
Regarding the name, I like having Latin names for our packages, especially for this one, which is completely based on Archimedes' principle (Archimedes was born in Sicily).
So, I would propose translating isostasy into a Latin language. For example, in Spanish it translates to **isostasia**. I'm not entirely sure, but according to [this dictionary](https://www.linguee.com/english-italian/search?source=auto&query=isostasy) it's the same for Italian. It's free on PyPI and very easy to remember.
Other names that came to mind:
- gallegia (a conjugation of the Italian verb gallegiare, which means *to float*)
- gavitello (a small buoy, also in Italian)
username_0: I like *isostasia*. Reading it in English kind of sounds like [Fantasia](https://en.wikipedia.org/wiki/Fantasia_(1940_film)) which makes for nice pun opportunities.
username_0: Gave up on this idea for now because isostasy is tightly coupled with gravity anyway.
Status: Issue closed
|
jarrodek/ChromeRestClient | 146893737 | Title: Make relative redirects work correctly
Question:
username_0: I sometimes use the RestClient to simulate POSTs to websites if I'm trying to test with complex forms or payloads. When the client receives a relative redirect, I get a "Not Resolved" error, whereas if the full URL is given the redirect works.
Answers:
username_1: Thank you for an issue report.
I've created an [issue report](https://github.com/username_1/socket-fetch/issues/5) in the library that is used as the transport. It should be fixed soon.
Status: Issue closed
username_1: It is fixed in beta now. There was an issue with the build script that put a wrong path to a script in the manifest file. I've fixed it and am now publishing hotfixes.
username_0: Thank you! |
dart-lang/sdk | 212858410 | Title: Pointer to syntax error at wrong position on Windows
Question:
username_0: Not a big deal, but the `^` pointing to the location of the syntax error is pointing to the wrong position on Windows:
 |
pulumi/pulumi | 686454618 | Title: @pulumi/terraform: state snapshot was created by Terraform v0.13.0, which is newer than current v0.12.29
Question:
username_0: Tested with pulumi version `v2.9.0`
```json
"dependencies": {
"@pulumi/pulumi": "2.9.0",
"@pulumi/terraform": "2.5.0"
}
```
Answers:
username_1: Hi @username_0
I need to release a new version of https://github.com/pulumi/pulumi-terraform/ to do this
I will take care of that today!
Paul |
aws/s2n-tls | 914315913 | Title: Build failed with OpenSSL 1.0.1f on Ubuntu 14.04
Question:
username_0: ### Problem:
Build failed on Ubuntu 14.04, gcc 4.9.4, OpenSSL 1.0.1f
```
/aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/tls/s2n_x509_validator.c:591:9: error: implicit declaration of function 'X509_get_signature_nid' [-Werror=implicit-function-declaration]
nid = X509_get_signature_nid(x509_cert);
^
```
It blocks the C++ SDK (which depends on s2n) from working on Ubuntu 14.04. It's old, but people still use it.
### Solution:
Extending `defined(LIBRESSL_VERSION_NUMBER) && (LIBRESSL_VERSION_NUMBER < 0x02070000f)` with more conditions may help.
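For illustration, such a guard might look like this (the 1.0.2 cutoff reflects when `X509_get_signature_nid` was introduced; treat the exact macro as an assumption, not a tested patch):
```c
#include <openssl/objects.h>
#include <openssl/x509.h>

/* X509_get_signature_nid() first appeared in OpenSSL 1.0.2; on 1.0.1 the
 * X509 struct is still transparent, so read the signature algorithm
 * directly as a fallback. */
#if defined(OPENSSL_VERSION_NUMBER) && (OPENSSL_VERSION_NUMBER < 0x10002000L) \
        && !defined(LIBRESSL_VERSION_NUMBER)
#define X509_get_signature_nid(x509) OBJ_obj2nid((x509)->sig_alg->algorithm)
#endif
```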
### Requirements / Acceptance Criteria:
* **Testing:** Adding Ubuntu 14.04 with default compiler and default OpenSSL in CI will be sufficient |
sunpy/ndcube | 469101811 | Title: Plotting of NDCube and NDCubeSequence is failing
Question:
username_0: ### Description
When we plot `NDCube` and `NDCubeSequence` objects, the plotting breaks in some weird ways.
### Expected behavior
The extra keywords such as `axes_coordinates` and `plot_axis_indices` should not cause `NDCube` plotting to break.
### Actual behavior
On initializing the `NDCube` objects from the [docs](https://docs.sunpy.org/projects/ndcube/en/stable/ndcube.html#initialization) and plotting with the keywords `plot_axis_indices=0` or `plot_axis_indices=1`, it breaks without performing the plotting.
Here is a traceback when I perform `my_cube[0].plot(plot_axis_indices=0)`
**Traceback**
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-7-a17e3408b1c5> in <module>
----> 1 my_cube[0].plot(plot_axis_indices=1)
~/nd/ndcube/ndcube/mixins/plotting.py in plot(self, axes, plot_axis_indices, axes_coordinates, axes_units, data_unit, **kwargs)
77 ax = self._animate_cube_1D(
78 plot_axis_index=plot_axis_indices[0], axes_coordinates=axes_coordinates,
---> 79 axes_units=axes_units, data_unit=data_unit, **kwargs)
80 else:
81 if naxis == 2:
~/nd/ndcube/ndcube/mixins/plotting.py in _animate_cube_1D(self, plot_axis_index, axes_coordinates, axes_units, data_unit, **kwargs)
391 ax = LineAnimator(data, plot_axis_index=plot_axis_index, axis_ranges=axes_coordinates,
392 xlabel=default_xlabel,
--> 393 ylabel="Data [{0}]".format(data_unit), **kwargs)
394 return ax
395
~/nd/vnv/lib/python3.7/site-packages/sunpy/visualization/animator/line.py in __init__(self, data, plot_axis_index, axis_ranges, ylabel, xlabel, xlim, ylim, aspect, **kwargs)
100 # supplied by the user for the plotted axis.
101 self.xdata = edges_to_centers_nd(np.asarray(axis_ranges[self.plot_axis_index]),
--> 102 plot_axis_index)
103 if ylim is None:
104 ylim = (data.min(), data.max())
~/nd/vnv/lib/python3.7/site-packages/sunpy/visualization/animator/base.py in edges_to_centers_nd(axis_range, edges_axis)
584 """
585 upper_edge_indices = [slice(None)] * axis_range.ndim
--> 586 upper_edge_indices[edges_axis] = slice(1, axis_range.shape[edges_axis])
587 upper_edges = axis_range[tuple(upper_edge_indices)]
588
IndexError: tuple index out of range
```
For `NDCubeSequence`, this is also not working properly: `my_sequence.plot(axes_coordinates=[None, "time", None, None])`.
**Traceback**
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-17-315da964f63e> in <module>
----> 1 my_sequence.plot(axes_coordinates=[None, "time", None, None])
~/nd/ndcube/ndcube/mixins/sequence_plotting.py in plot(self, axes, plot_axis_indices, axes_coordinates, axes_units, data_unit, **kwargs)
122 ax = ImageAnimatorNDCubeSequence(
123 self, plot_axis_indices=plot_axis_indices,
--> 124 axes_coordinates=axes_coordinates, axes_units=axes_units, **kwargs)
125
[Truncated]
585 upper_edge_indices = [slice(None)] * axis_range.ndim
--> 586 upper_edge_indices[edges_axis] = slice(1, axis_range.shape[edges_axis])
587 upper_edges = axis_range[tuple(upper_edge_indices)]
588
IndexError: tuple index out of range
```
### Steps to Reproduce
The example code has been taken from [ndcube docs](https://docs.sunpy.org/projects/ndcube/en/stable/ndcube.html#initialization)
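Roughly, that initialization looks like this (reproduced from memory, so the exact WCS values may differ from the docs):
```python
import numpy as np
import astropy.wcs
from ndcube import NDCube

data = np.random.rand(4, 4, 5)
wcs_dict = {
    'CTYPE1': 'WAVE    ', 'CUNIT1': 'Angstrom', 'CDELT1': 0.2, 'CRPIX1': 0, 'CRVAL1': 10, 'NAXIS1': 5,
    'CTYPE2': 'HPLT-TAN', 'CUNIT2': 'deg', 'CDELT2': 0.5, 'CRPIX2': 2, 'CRVAL2': 0.5, 'NAXIS2': 4,
    'CTYPE3': 'HPLN-TAN', 'CUNIT3': 'deg', 'CDELT3': 0.4, 'CRPIX3': 2, 'CRVAL3': 1, 'NAXIS3': 4,
}
my_cube = NDCube(data, astropy.wcs.WCS(wcs_dict))
my_cube[0].plot(plot_axis_indices=0)  # raises the IndexError shown above
```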
### Extra information
The plotting of `NDCube` objects seems to break when given arguments such as `plot_axis_indices` and `axes_coordinates`
### System Details
- ndcube Version: `1.1.1`
- SunPy Version: `1.0.2`
- Python Version: `3.7`
- OS information: `Ubuntu 16.04`<issue_closed>
Status: Issue closed |
bencevans/node-sonos | 748351377 | Title: Discussion: regenerating services
Question:
username_0: Recently I’ve started a new repository that tries to [document](https://username_0.io/sonos-api-docs) all the available Sonos services.
It combines all the available services (from service discovery) with a [documentation file](https://github.com/username_0/sonos-api-docs/blob/main/docs/documentation.json). PR greatly appreciated. 😉
Questions:
- would you guys be interested in an auto-generated version of all the services, in order for all the Sonos actions to be available in this library?
- anybody who wants to give it a go? Check the [generator](https://github.com/username_0/sonos-api-docs/tree/main/generator/sonos-docs) and the [documentation template](https://github.com/username_0/sonos-api-docs/tree/main/generator/sonos-docs/templates/docs) to get started.
This issue is just to see if others are interested in this. I’m more than happy to get you guys started, but I’m not in the position to finish this transformation myself (limited time available).
Answers:
username_1: Great idea. I am glad to bring in my knowledge.
- In a first step I will add more links pointing to other sources
- I use a SONOS beam and a Play: 3 (and Play:5, Play:1) - will add that
- explored some actions in more detail - will add them to the documentation
Thanks for bringing that all together!
username_0: @username_1 you can check out my initial version of the generated services. I deliberately picked a different naming scheme, so the new services don’t override the existing services.
These services aren’t used in the library, but using these services next to the current services should be an option.
If you want to extend the documentation, you can send a PR for the documentation.json file in the docs library.
username_1: yes - found the documentation.json already.
... and the project.json file in _data for additional internet sources.
Status: Issue closed
|
heroku/heroku-buildpack-ruby | 284478475 | Title: Ruby 1.8.7 support?
Question:
username_0: I know this is probably out of left field, but I've got an [ancient old app](https://github.com/username_0/better) I try to keep up and running. The [Heroku Ruby docs](https://devcenter.heroku.com/articles/ruby-support#ruby-versions) suggest that version 1.8.7 should still be supported on `cedar-14`, but when I try to deploy I get an error:
```
remote: ! Debug InformationCommand: 'set -o pipefail; curl -L --fail --retry 5 --retry-delay 1 --connect-timeout 3 --max-time 30 https://s3-external-1.amazonaws.com/heroku-buildpack-ruby/cedar-14/ruby-build-1.8.7.tgz -s -o - | tar zxf - ' failed unexpectedly:
remote: !
remote: ! gzip: stdin: unexpected end of file
remote: ! tar: Child returned status 1
remote: ! tar: Error is not recoverable: exiting now
remote: !
remote: ! Push rejected, failed to compile Ruby app.
```
It looks like maybe the archive is corrupted or missing. Is there any chance of getting it fixed?
Answers:
username_1: Hello all,
I am having the exact same error when trying to do a deploy with version 1.8.7.
Can you please assist?
username_2: It looks like you are running an older version of the app, by any chance are you able to upgrade to the newer version? What's holding you back on the older version?
username_3: Ruby 1.8.7 is not "supported" in that if there are security patches they will not be applied to them (so it is very likely insecure). Also if you have a bug specific to 1.8.7 while running on Heroku we technically do not support it.
You'll notice that in the docs these versions are explicitly tied to the `cedar-14` stack https://devcenter.heroku.com/articles/cedar-14-stack
This stack is deprecated and you will not be able to deploy to it after April 2019.
Until then you'll technically be able to deploy. You must set your stack to `cedar-14`:
```
$ heroku stack:set cedar-14
```
However, I checked and we do not have a Ruby 1.8.7 built for cedar-14. We do have a copy of 1.9.3 you can use. Now you can specify this in your Gemfile:
```
ruby '1.9.3'
```
Keep in mind that this is an extremely temporary workaround (to use Ruby 1.9.3). Both versions have been EOL for a long time and are likely very insecure. To continue running on Heroku long term you'll have to upgrade the app to use a more recent Ruby version. The lowest officially supported Ruby version is currently 2.3 and that will become unsupported December 25th 2018. My best suggestion would be to try to upgrade to 2.5 and then keep up with updates. It's much easier to do them as they come than to wait and have to jump several versions at once.
If you cannot upgrade your own app, then I would recommend finding a consultancy to do it for you.
Status: Issue closed
username_3: If anyone is interested, the oldest Ruby version built for `heroku-16` is 2.2.10. For `heroku-18` it is 2.4.5. |
EventStore/EventStore | 107541504 | Title: Error during processing ReadAllEventsForward request.
Question:
username_0: **Version: v3.2.1**
```
[16428,54,15:37:56.418] Error during processing ReadAllEventsForward request.
Log record at actual pos 87311 has too large length: 1148478823 bytes, while limit is 16777216 bytes. In chunk #0-0 (chunk-000000.000000).
```
Received this on Windows when enabling a projection, yet it doesn't show as Faulted. Anything to worry about?
Status: Issue closed
Answers:
username_0: Apologies, I think this is related to this which might be resolved in v3.3.0: https://github.com/EventStore/EventStore.UI/issues/85
username_1: Normally this means the server has received a nonsense position as the start of the read all operation (i.e. into the middle of a record). That fix is actually in 3.2.0 (#653). |
squizlabs/PHP_CodeSniffer | 856448640 | Title: PHP 8.0 | File::getMethodParameters() needs to support attributes
Question:
username_0: **Describe the bug**
Parameters in a function declaration can have attributes attached to them. This is currently not handled correctly in the `File::getMethodParameters()` method.
**Code sample**
```php
class ParametersWithAttributes {
public function __construct(
#[\MyExample\MyAttribute] private string $constructorPropPromTypedParamSingleAttribute,
#[MyAttr([1, 2])]
Type|false
$typedParamSingleAttribute,
#[MyAttribute(1234), MyAttribute(5678)] ?int $nullableTypedParamMultiAttribute,
#[WithoutArgument] #[SingleArgument(0)] $nonTypedParamTwoAttributes,
#[MyAttribute(array("key" => "value"))]
&...$otherParam,
) {}
}
```
For the above code, the `T_STRING` tokens from within the attribute would be added to the `type_hint` array key in the return value, so for example, for the first parameter, the `type_hint` would (incorrectly) come back as `'type_hint' => '\MyExample\MyAttributestring'`
**Proposed behavior**
I'm currently looking into fixing this.
My current thinking hinges on four possible "solutions".
1. Ignore attributes in function declarations altogether. Just skip over them.
2. Add an `attributes` index key to the array for each parameter with a boolean flag to indicate whether there are attribute(s) attached to the parameter. This index key would always be set and defaults to `false`.
This option will give sniff writers an indication of whether attributes are attached to the parameter. If the sniff writer needs the attribute details, they can do a `findPrevious()` for a `T_ATTRIBUTE_END` token before `'token'` and walk the attribute tokens from there.
3. Add three new index keys to the array for each parameter which would only be set when there are attribute(s) attached to the parameter:
- `attributes` containing a string representation of the attribute(s).
- `attributes_start` containing the stack pointer to the first attribute opener for this parameter
- `attributes_end` containing the stack pointer to the last attribute closer for this parameter.
4. Add a new multi-level array index key `attributes`, which would only be set when there are attribute(s) attached to the parameter. The array format would be along the lines of:
```
'attributes' => [
[0] => [
- `attribute` containing a string representation of the first attribute.
- `attribute_start` containing the stack pointer to the attribute opener for the first attribute.
- `attribute_end` containing the stack pointer to the attribute closer for the first attribute.
],
[1] => ...
]
```
I'm personally leaning towards either option 1 or option 2.
I don't think option 3 is a good solution as there can be multiple attributes attached to the parameter and that solution does not do that justice.
As for option 4, I'm not so sure that this is really needed, as sniffs examining attributes can just listen to `T_ATTRIBUTE` instead, and for sniffs specifically only examining attributes in function declarations, having the `attributes` indicator (option 2) or walking the tokens between the open/close parentheses looking for `T_ATTRIBUTE` tokens is probably enough and would save the performance hit of processing potential attributes on each call to `File::getMethodParameters()`.
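To make option 2 concrete, the envisaged sniff-side usage would be something along these lines (a sketch against the *proposed* `attributes` flag, which does not exist yet):
```php
<?php
// $phpcsFile is a PHP_CodeSniffer\Files\File; $stackPtr points to the
// T_FUNCTION token being examined.
$parameters = $phpcsFile->getMethodParameters($stackPtr);

foreach ($parameters as $param) {
    if ($param['attributes'] === false) {
        continue;
    }

    // Walk back from the parameter to its attribute closer and inspect
    // the attribute tokens from there.
    $attributeCloser = $phpcsFile->findPrevious(T_ATTRIBUTE_END, ($param['token'] - 1));
}
```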
@username_1 Your input/opinion on which solution is preferred would be much appreciated.
**Versions (please complete the following information):**
- PHPCS: `master`
Answers:
username_0: Hmm.. thinking it over some more, I'm not sure attributes should be excluded from the `content` as comments in a parameter declaration aren't excluded either.
username_1: Before reading this comment, I was leaning towards option 1. Do you have an example of where comments are included, and do you think they should be?
username_0: Well, changing that behaviour now would be a BC-break and what with them being included, it makes sense to also include potential attributes in the value of the `content` key.
So I'm tempted to go for option 2, but with the attributes being included in `content`.
username_0: @username_1 Had a chance to think this over some more ?
username_1: I think you're 100% right in your previous comment. The attributes should be included in `content` to remain consistent.
The extra flag to indicate they are present is a nice to have, but hard to know how useful it is during sniff development right now (unless you need it already). Happy to have it there if you feel it is useful.
username_0: @username_1 Thanks for letting me know. I've pulled the fix now in #3320.
Status: Issue closed
|
newrelic/newrelic-php-agent | 755209738 | Title: No web transaction data on Laravel 8
Question:
username_0: [NOTE]: # ( ^^ Provide a general summary of the issue in the title above. ^^ )
## Description
Hi,
We have upgraded our Laravel application to version 8 and are using the New Relic agent v9.14.0.290-14bb02701b5c. We are currently not seeing any web transactions. Is that because the agent does not support Laravel 8 yet? We do get non-web transactions reported.
## Your Environment
Ubuntu 16.04.3 LTS
Php 7.4
Laravel 8.16.0
Answers:
username_1: Hi @username_0. Yes, you are right, the PHP agent hasn't yet added support for Laravel 8. We are looking into adding support for this in the future. Here's a [list of frameworks](https://docs.newrelic.com/docs/agents/php-agent/getting-started/php-agent-compatibility-requirements#frameworks) that are currently supported.
ScoopInstaller/Main | 656066680 | Title: [email protected]: hash check failed
Question:
username_0: Installing 'heroku-cli' (7.42.3) [64bit]
heroku-win32-x64.tar.xz (14.5 MB) [===========================================================================] 100%
Checking hash of heroku-win32-x64.tar.xz ... ERROR Hash check failed!
App: main/heroku-cli
URL: https://cli-assets.heroku.com/heroku-win32-x64.tar.xz#/dl.xz
First bytes: FD 37 7A 58 5A 00 00 04
Expected: 32f0fe01be7568f5b02834274765185e352af54290e05edadf446a30973d16db
Actual: 91d9478dbb3978f9199d65f961a6a90181a6f131e29a356e456f452311e88b34<issue_closed>
Status: Issue closed |
spencerccf/app_settings | 1129430974 | Title: fatal error: module 'app_settings' not found
Question:
username_0: @import app_settings;
~~~~~~~^~~~~~~~~~~~
1 error generated.
`flutter doctor -v`
```
[✓] Flutter (Channel beta, 2.9.0-0.1.pre, on macOS 11.2.3 20D91 darwin-arm, locale en-IN)
• Flutter version 2.9.0-0.1.pre at /Users/rajeshkolli/Library/Android/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8f1f9c10f0 (8 weeks ago), 2021-12-14 13:41:48 -0800
• Engine revision 234aca678a
• Dart version 2.16.0 (build 2.16.0-80.1.beta)
• DevTools version 2.9.1
[✓] Android toolchain - develop for Android devices (Android SDK version 31.0.0-rc3)
• Android SDK at /Users/rajeshkolli/Library/Android/sdk
• Platform android-31, build-tools 31.0.0-rc3
• Java binary at: /Applications/Android Studio.app/Contents/jre/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 11.0.10+0-b96-7281165)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 12.5)
• Xcode at /Applications/Xcode.app/Contents/Developer
! Flutter recommends a minimum Xcode version of 13.
Download the latest version or update via the Mac App Store.
• CocoaPods version 1.11.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2020.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 11.0.10+0-b96-7281165)
[✓] Connected device (2 available)
• iPhone 12 Pro Max (mobile) • 02ADA7AF-DE26-4C5F-BFF8-A65626C67730 • ios • com.apple.CoreSimulator.SimRuntime.iOS-14-5
(simulator)
• Chrome (web) • chrome • web-javascript • Google Chrome 98.0.4758.80
! Doctor found issues in 1 category.
```
Full Error:
```
fatal error: module 'app_settings' not found
@import app_settings;
~~~~~~~^~~~~~~~~~~~
1 error generated.
[ +832 ms] Could not build the application for the simulator.
[ +9 ms] Error launching application on iPhone 12 Pro Max.
[ +12 ms] "flutter run" took 61,102ms.
```
[Truncated]
```
<asynchronous suspension>
#5 FlutterCommandRunner.runCommand.<anonymous closure>
(package:flutter_tools/src/runner/flutter_command_runner.dart:281:9)
<asynchronous suspension>
#6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:150:19)
<asynchronous suspension>
#7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:229:5)
<asynchronous suspension>
#8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:62:9)
<asynchronous suspension>
#9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:150:19)
<asynchronous suspension>
#10 main (package:flutter_tools/executable.dart:94:3)
<asynchronous suspension>
[ +265 ms] ensureAnalyticsSent: 260ms
[ +1 ms] Running shutdown hooks
[ ] Shutdown hooks complete
[ +1 ms] exiting with code 1
```
Answers:
username_1: Exactly the same issue. Also using a Macbook with M1 chip.
Precisely, it occurs whenever I execute
```
flutter build ipa --release
```
which is exactly what is executed in my CI-pipeline. Unfortunately this leaves me unable to build my project.
In the pipeline, a Mac Mini is used so I guess it has nothing to do with the architecture.
username_2: Me too. Exactly the same issue. I am using Macbook Pro 2017.
username_3: I see you are using Flutter beta channel… beta may not be totally ready yet. Try stable channel.
username_4: any solution?
username_5: same error here :( |
hive-mind-fs/hive-mind | 587868272 | Title: GAME | As a user, I need to start and compete in the competition at the same time as everyone else
Question:
username_0: Figure out how to sync the game for all users.
- Game starts at 5:00pm EST.
- 1min before the competition starts, we'll have some sort of countdown
- Each round lasts 5min (5 rounds)
- Between rounds there's a 15s break
Using Sockets?<issue_closed>
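If sockets, a rough sketch of a server-owned timeline with socket.io could look like this (event names, port, and the kickoff mechanism are placeholders, not a final design):
```ts
// Hypothetical server-driven loop: the server owns the clock, so every
// connected client sees the same countdown/round state at the same time.
import { Server } from "socket.io";

const io = new Server(3000);
const ROUND_MS = 5 * 60 * 1000; // each round lasts 5 minutes
const BREAK_MS = 15 * 1000;     // 15-second break between rounds
const ROUNDS = 5;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function runGame(): Promise<void> {
  // 1-minute countdown before the competition starts.
  for (let s = 60; s > 0; s--) {
    io.emit("countdown", { secondsLeft: s });
    await sleep(1000);
  }
  for (let round = 1; round <= ROUNDS; round++) {
    // Broadcast an absolute deadline so late joiners can sync their timers.
    io.emit("roundStart", { round, endsAt: Date.now() + ROUND_MS });
    await sleep(ROUND_MS);
    io.emit("roundEnd", { round });
    if (round < ROUNDS) await sleep(BREAK_MS);
  }
  io.emit("gameOver", {});
}

// Kick off at 5:00pm EST however fits the stack (cron, or a setTimeout
// computed from a stored start Date), then call runGame().
```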
Status: Issue closed |
jlippold/tweakCompatible | 341732408 | Title: `Intelix` working on iOS 11.3
Question:
username_0: ```
{
"packageId": "com.hackyouriphone.intelix",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.hackyouriphone.intelix",
"deviceId": "iPhone7,1",
"url": "http://cydia.saurik.com/package/com.hackyouriphone.intelix/",
"iOSVersion": "11.3",
"packageVersionIndexed": false,
"packageName": "Intelix",
"category": "HYI - Tweaks",
"repository": "HackYouriPhone",
"name": "Intelix",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.hackyouriphone.intelix",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.7",
"shortDescription": "Grouped Notifications on iOS 11",
"latest": "1.3.6",
"author": "iOS Creatix",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
serverless/serverless | 172032875 | Title: Including files outside of the current directory hierarchy via package.include
Question:
username_0: Hi,
package.include seems to only allow the packaging of files in the current directory or below as, if I try to specify something like
`../handler.js`
this file is not packaged up in the ZIP file, which makes some sense as otherwise this would clobber an identically-named file in the current dir.
However, it seems we can refer to resource files outside of the current hierarchy:
https://github.com/username_0/serverless/blob/master/docs/understanding-serverless/serverless-yml.md
```
resources:
Resources:
$ref: ../custom_resources.json # you can use JSON-REF to ref other JSON files
```
##### Benefits:
* Benefit one: it allows for more flexibility in setting up the dir structure, instead of requiring every file to sit under the one dir.
##### Drawbacks:
* the resultant structure can become too complicated, and may result in breakages if people move files around without being aware of the references in serverless.yml.
* there is a risk a file outside of a project's repo is included, which would break when this is cloned into another environment that does not have the same file
* the behaviour of 'include' would be different to 'exclude', as exclude works on the current dir and below, as there's no point excluding the parent dir and above :)
##### Additional Details:
##### TLDR:
I guess I'd like to start a discussion on the pros and cons of having one directive/setting doing it one way, and another doing it a different way.
##### Discussion:
Answers:
username_1: Hey @username_0 thanks for this issue.
Here's another one which is related and shows why we went with the "zip per service" approach and how you can resolve this issue with the help of reusable npm packages: https://github.com/serverless/serverless/issues/1819
username_2: Thanks for reporting @username_0 . As @username_1 already mentioned there are other discussions about this already and we decided that for now we're not going to support this feature due to potential complications it can lead to and also potential security issues with it. Therefore closing but its linked to other issues now if we ever want to come back to it.
Status: Issue closed
username_0: Thanks for the comments and the links to the related issues guys, it's clearer to me now what the design decisions were, which make sense. I just need to fix the tooling/package management on my end to make it work :) |
minishift/minishift | 230285524 | Title: Usage of openshift_cache file for minishift
Question:
username_0: Currently we have the `pkg/minishift/cache/openshift_cache.go` file lying in our code base, which doesn't seem to be used anywhere. As per my understanding, this file was there because, before we started using `oc cluster up` to bring up the OpenShift cluster, we used the `openshift` binary and this file for that. Can we remove this file from our code base, assuming I am not missing any usage of it?
@username_3 @username_1 @username_2 WDYT?
Answers:
username_1: +1, I do not see any use of this file in present code base.
username_2: +1 Looks like we might not do `openshift` caching. That's why there is extra folder `openshift` in `$MINISHIFT_HOME/cache/openshift`.
username_3: This file is obsolete. I actually already removed it, on a feature branch of mine.
username_3: This is removed as well. Or better renamed to something we need.
Status: Issue closed
username_2: Resolved via https://github.com/minishift/minishift/pull/949 |
bazelbuild/rules_nodejs | 844936949 | Title: Importing typescript libraries with ts_project does not work
Question:
username_0: # 🐞 bug report
### Affected Rule
The issue is caused by the rule: ts_project
### Description
I am trying to migrate a personal project from `ts_library` to `ts_project`.
Suppose I have a `app` that imports from `libraries/a` and `libraries/b`.
With `ts_library` I can simply do
- `module_name = "@libraries/a"`
- and `module_name = "@libraries/b"`
`ts_project` does not have a `module_name` attribute. Thus I've tried to replicate this test: [import_package_by_name](https://github.com/bazelbuild/rules_nodejs/tree/stable/packages/typescript/test/ts_project/import_package_by_name). Via `js_library` and `package_name`
But I cannot make it work:
## 🔬 Minimal Reproduction
https://github.com/username_0/bazel-ts-project
- `yarn install`
- `yarn start`
## 🔥 Exception or Error
<pre><code>
app/index.ts(1,15): error TS2307: Cannot find module '@libraries/a' or its corresponding type declarations.
app/index.ts(2,15): error TS2307: Cannot find module '@libraries/b' or its corresponding type declarations.
</code></pre>
Answers:
username_1: Two issues:
* Missing `declaration = True` from both `tsconfig.json` and the `ts_project` for the libraries.
* The `package_name` attr on the libraries `js_library` is set to `@libs/a` / `@libs/b`, but the import statements are `@libraries/a` / `@libraries/b`
```
diff --git a/libraries/a/BUILD b/libraries/a/BUILD
index 3f37245..e50db72 100644
--- a/libraries/a/BUILD
+++ b/libraries/a/BUILD
@@ -13,11 +13,12 @@ ts_project(
name = "a",
srcs = glob(["*.ts"]),
tsconfig = "tsconfig",
+ declaration = True,
)
js_library(
name = "a_js",
- package_name = "@libs/a",
+ package_name = "@libraries/a",
deps = [
"a",
"tsconfig",
diff --git a/libraries/a/tsconfig.json b/libraries/a/tsconfig.json
index db6dd58..7291582 100644
--- a/libraries/a/tsconfig.json
+++ b/libraries/a/tsconfig.json
@@ -1,6 +1,7 @@
{
"extends": "../../tsconfig.json",
"compilerOptions": {
- "moduleResolution": "node"
+ "moduleResolution": "node",
+ "declaration": true
}
}
diff --git a/libraries/b/BUILD b/libraries/b/BUILD
index 80826cf..eb9be37 100644
--- a/libraries/b/BUILD
+++ b/libraries/b/BUILD
@@ -13,11 +13,12 @@ ts_project(
name = "b",
srcs = glob(["*.ts"]),
tsconfig = "tsconfig",
+ declaration = True,
)
js_library(
name = "b_js",
- package_name = "@libs/b",
+ package_name = "@libraries/b",
deps = [
"b",
"tsconfig",
diff --git a/libraries/b/tsconfig.json b/libraries/b/tsconfig.json
index db6dd58..7291582 100644
--- a/libraries/b/tsconfig.json
+++ b/libraries/b/tsconfig.json
@@ -1,6 +1,7 @@
[Truncated]
}
}
```
Applying the diff above results in the following output from the `nodejs_binary`
```
bazel run //app:bin
INFO: Analyzed target //app:bin (0 packages loaded, 1 target configured).
INFO: Found 1 target...
Target //app:bin up-to-date:
bazel-bin/app/bin.sh
bazel-bin/app/bin_loader.js
bazel-bin/app/bin_require_patch.js
INFO: Elapsed time: 0.190s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
a b
```
username_0: Thank you very much, Matt!
username_0: I want to test `libraries/a`. I've added
- `index.test.ts`
- `tsconfig.test.json`
- a `ts_project` target for the test file
```
ts_project(
name = "a_test",
srcs = ["index.test.ts"],
declaration = True,
tsconfig = "tsconfig_test",
deps = [
"a",
"@npm//@types/jasmine",
"@npm//@types/node",
],
)
```
- and a `jasmine_node_test` target
```
jasmine_node_test(
name = "test",
config_file = "jasmine.json",
templated_args = ["--bazel_patch_module_resolver"],
deps = ["a_test"],
)
```
But running `yarn test` throws this error:
```
libraries/a/index.test.ts(1,15): error TS2307: Cannot find module './index' or its corresponding type declarations.
```
You can try it yourself: https://github.com/username_0/bazel-ts-project
What am I doing wrong?
username_1: If you're importing from bazel-out (which you are here), then you need to set `rootDirs` appropriately. Something like:
```
"rootDirs": [
".",
"../../bazel-out/host/libraries/a",
"../../bazel-out/darwin-fastbuild/bin/libraries/a",
"../../bazel-out/k8-fastbuild/bin/libraries/a",
"../../bazel-out/x64_windows-fastbuild/bin/libraries/a",
"../../bazel-out/darwin-dbg/bin/libraries/a",
"../../bazel-out/k8-dbg/bin/libraries/a",
"../../bazel-out/x64_windows-dbg/bin/libraries/a",
]
```
This is outlined on the [ts_project docs page](https://bazelbuild.github.io/rules_nodejs/TypeScript.html#ts_project-1)
I think the project you linked is also missing the peer dependencies from `@bazel/jasmine` of `jasmine` and `jasmine-core`.
username_0: Ouch, that is ugly. This only seems to work when I add it to `libraries/a/tsconfig.json`
Is it possible to declare those `rootDirs` once in the root `/tsconfig.json`?
username_2: I am in a similar position @username_0. I have a large codebase that is working with `ts_library`, relatively happily, that I want to move to `ts_project`. My hope was that `ts_project` would, in the mid term, be a simplification over `ts_library`.
The need to enumerate a mass of paths in `rootDirs` makes it practically infeasible for larger projects.
At this point, `ts_library` is far simpler to use. I'm surprised that `ts_project` is the recommended go-to TS rule in its current state.
Is there an example of a codebase using `ts_project` with more than a few targets, all inter-dependent? I would be interested to see how they cope with the config as I do wonder whether I'm missing something.
username_3: @username_2 we have some examples in `/packages/typescript/test/ts_project` as well as the examples/ in the repo.
`ts_library` is simpler to configure for sure. However it's much harder to make sense of its outputs and it's less compatible with the ecosystem. The tradeoff in `ts_project` is that it's a thin wrapper around calling `tsc` so now users are exposed to the semantics of TypeScript's compiler.
The easy way to deal with rootdirs is put those seven lines in your root `tsconfig.json` (the one the editor uses, likely) and then just use `extends` in the others so they also know where bazel's output folder lives.
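A minimal sketch of that setup (the bazel-out paths vary per platform/configuration, so treat these as illustrative):
```
// Root tsconfig.json (the one the editor uses):
{
  "compilerOptions": {
    "rootDirs": [
      ".",
      "bazel-out/darwin-fastbuild/bin",
      "bazel-out/k8-fastbuild/bin",
      "bazel-out/x64_windows-fastbuild/bin"
    ]
  }
}
```
```
// libraries/a/tsconfig.json just inherits it:
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "declaration": true
  }
}
```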
There is some discussion of smoothing this migration path, I do appreciate that it's difficult right now and you're right that the regression in user experience is something we should fix. Probably we'll do that by validating the inputs you're giving to TypeScript and improving error messaging, rather than introducing a bunch of complexity within the rule like a custom compiler or generating a tsconfig.
username_3: the `rootDirs` setting should be to the package containing the `tsconfig.json` file IIUC |
chromelyapps/Chromely | 493607960 | Title: I hope the server-related code to be placed in a separate project.
Question:
username_0: I found a lot of code in the Chromely.CefGlue and Chromely.Core projects that is about creating a web server and websocket server, which makes the Chromely project more difficult to understand.
In fact, I don't want to use this code, because I will use ASP.NET Core (or maybe another web server) to act as the web server/websocket server.
So it would be better to put this code (about the web server and websocket server) in a separate project and make it replaceable.
Thanks!
Answers:
username_1: @username_0 the Webserver like most features on Chromely are optional. You need to register it to use it. So, yes, already replaceable.
I am not sure putting it in a separate assembly will help. We still have to add it somehow ... right now it is in a separate folder - [ServerHandlers](https://github.com/chromelyapps/Chromely/tree/c8f9b2fb45b57670fd7322a9d1fbe67b39ec9782/src/Chromely.CefGlue/Browser/ServerHandlers).
It also looks like it is less used, so in my opinion it would either not be added in future releases or be more isolated than it is now. So maybe a separate project can help for that reason. Or we just let expert users use the code as they want.
Meanwhile, I will suggest you delete everything relating to the Webserver for your code base.
username_0: It's not only the ServerHandlers folder; there are many other folders, files and methods that serve Chromely's built-in webserver. When using another webserver, this code is useless, e.g.:
Chromely.Core -> Infrastructure Folder->*.*
Chromely.Core -> JsonMapper Folder->*.*
Chromely.Core -> RestfulService Folder->*.*
Chromely.Core -> MimeMapper.cs, IChromelyContainer.cs, IChromelyLogger.cs, IChromelyWebsocketHandler.cs
Chromely.CefGlue->ServerHandlers->*.*
Chromely.CefGlue->RestfulService->*.*
Chromely.CefGlue->Handlers Folder->CefGlueHttpSchemeHandler.cs,CefGlueHttpSchemeHandlerFactory.cs,CefGlueMessageRouterHandler.cs,CefGlueResourceSchemeHandler.cs,CefGlueResourceSchemeHandlerFactory.cs,CefPostDataStream.cs
Chromely.CefGlue->WebsocketMessageSender.cs, WebsocketServerRunner.cs
Some code in other files also serves Chromely's built-in webserver, such as: `window.RegisterServiceAssembly();` and `window.ScanAssemblies();`.
All of the above code serves Chromely's built-in webserver and is not UI-related. If you use another webserver, it is useless, and it is difficult to remove any one piece of it, because it is not pluggable; removing any one piece leads to compile errors.
So I hope the code for Chromely's built-in webserver (not UI related) can be put in a separate project (e.g. Chromely.WebServer) and made pluggable. After moving the above code into a separate project, Chromely will be lighter and more readable.
thanks!
username_1: @username_0 I see what you mean. We are not referring to the same thing. I was referring to WebsocketServer.
What you are referring to is at the core of Chromely IPC - the Restful resource pattern approach. No, you cannot replace that - the way it is designed at the moment. And no, it cannot be replaced with any other server. They are used to translate messages between the Browser and the Renderer and vice-versa. It is a pattern, if you may, rather than a "service/server". In layman's terms you can see the "Renderer" as the client and "Browser" as the server.
But yes, it could be in a different assembly. But note that it may also add unnecessary complexity. Some of those files you listed have dependencies on CefSharp or CefGlue, so to do it well and avoid cyclic dependency each must have it's own separate assembly for that purpose.
But we will see.
username_0: `No, you cannot replace that - the way it is designed at the moment. And no, cannot be replaced with a any other server. `
No, it can be replaced with other server ,e.g., it can be replace with asp.net core, only thing to do is create create a CefResourceHandler, overrite ProcessRequest method, do as below:
```csharp
public class AspHttpSchemeHandler : CefResourceHandler
{
    // ...
    protected override bool ProcessRequest(CefRequest request, CefCallback callback)
    {
        // 1. convert the CefRequest to an ASP.NET Core HttpRequest
        // 2. send the HttpRequest into the ASP.NET Core pipeline
        // 3. get the HttpResponse back from the ASP.NET Core pipeline
        // 4. send the HttpResponse back to CEF
    }
    // ...
}
```
I have an ASP.NET Core program; ASP.NET Core has more features than Chromely's built-in webserver, and it is not easy to convert every ASP.NET Core controller to a Chromely controller.
So I hope Chromely will be more friendly to ASP.NET Core (and other third-party webservers) and focus on linking to ASP.NET Core, not on Chromely's built-in webserver. Chromely's built-in webserver is still a little young for big projects; hoping users will write code for Chromely's built-in webserver is not a good idea.
username_1: @username_0 the way I see it we are not on the same page.
**Chromely does not care what you use for your source of data.**
Chromely is a framework. If you choose ASP.NET Core or any other source, that is your choice. You can choose to do that in the handler you described or in the Controller ... all your choice. I think [this](https://github.com/chromelyapps/Chromely/issues/100) is an example of a developer trying to access an external server from the Controller. Please see my [comment](https://github.com/chromelyapps/Chromely/issues/100#issuecomment-498642875).
Some of the files you listed earlier are default handlers. This is ok for most developers. What you described above is a custom handler and you are allowed to do that. The wiki tells you how to register that ...
[Register handlers](https://github.com/chromelyapps/Chromely/wiki/Configuration#how-to-register-custom-scheme-handlers)
````csharp
var config = ChromelyConfiguration()
.Create()
....
.RegisterSchemeHandler("http", "username_0.com", new AspHttpSchemeHandler())
.....
````
When I say cannot be replaced at is at the moment, I am talking about the restful pattern. The functionalities are replaceable. They are in wikis and demos.
username_1: @username_0 just curious - any reason you call the handlers a built-in webserver? I've never seen anybody refer to them that way, but thinking about it, some people will think they are.
In CEF world they are called handlers - scheme handlers specifically.
[CEF Wiki](https://bitbucket.org/chromiumembedded/cef/wiki/GeneralUsage#markdown-header-scheme-handler)
[Stackoverflow](https://stackoverflow.com/questions/35965912/cefsharp-custom-schemehandler)
[CefGlue page](https://bitbucket.org/xilium/xilium.cefglue/issues/73/http-s-custom-scheme-handler-crashes-cef)
username_0: I know what you mean.
As for me, I only need a cross-platform CEF-based UI framework, but in Chromely most of the code is webserver-related code which I will not use, so I think the current Chromely is not for me.
So for me the better way is to use CefGlue directly, and maybe use part of Chromely's UI-related code.
Thank you.
Status: Issue closed
|
twosigma/Cook | 106115110 | Title: Test protobuf <-> datomic roundtrips
Question:
username_0: This is meant to test that we can submit some JSON through the rest api, see it hit Datomic, then convert that to a protobuf, then follow the whole roundtrip back. This could catch potentially unknown serialization/format munging bugs, since we represent job data as Clojure datastructures, Mesos protobufs, Datomic datoms, and JSON objects.<issue_closed>
Status: Issue closed |
tc39/ecma262 | 163032359 | Title: Web compatibility risk of specified RegExp lastIndex semantics
Question:
username_0: I tried to ship a more spec-compliant implementation of RegExps in Chrome, but a user reported a web compat issue at https://bugs.chromium.org/p/chromium/issues/detail?id=624318
Seems like, historically, browsers did not throw for this test case:
```js
x = /a/
Object.freeze(x)
"b".match(x)
```
However, ES2015 and ES5 are fairly clear that when a match isn't found, then a strict-mode write is done to set lastIndex to 0. This is tested by test262 https://github.com/tc39/test262/blob/master/test/built-ins/RegExp/prototype/exec/y-fail-lastindex-no-write.js .
Has anyone else tried shipping these semantics? I'm inclined to revert to legacy semantics for now until I can quantify the size of the breakage and know if it's OK.
[Sidebar: One argument for permanent "legacy" semantics is that it enhances the utility of frozen RegExps, but on the other hand, RegExps are already somewhat limited when frozen (e.g., in global mode, they don't make much sense).]
Answers:
username_1: I don't think "legacy semantics" is the right term here? Perhaps legacy ES5 implementation bug would be a better characterization. Do all major browsers actually have this bug?
Some background, `Object.freeze`, strict mode, "strict writes", and the use of a strict write when updating `lastIndex` were all introduced by the ES5 spec. So, prior to ES5 there was no way to "freeze" `lastIndex`. Any code using `Object.freeze` could not be legacy code. If ES5 implementations allowed RegExp built-ins to silently update (or try to update) a non-writable, non-configurable `lastIndex` property, the implementations were buggy.
What are the buggy semantics you want, and is this only `match` you are talking about or does the bug occur everywhere `lastIndex` is updated? Allowing the value of a frozen `lastIndex` to be modified would violate one of the fundamental invariants and would be a security hole. Silently ignoring such a write would be a contract violation that may hide bugs (because `lastIndex` is not updated in the manner that the spec. requires).
Given the ES3 semantics of RegExp algorithms that update `lastIndex`, any code that is trying to freeze an RegExp is probably ill-conceived and likely buggy. I don't think we should memorialize such bugs.
We have talked in the past about developing a functional RegExp form that didn't use any mutable internal state such as `lastIndex`. That still seems like it would be a good direction to pursue.
username_2: Note that Firefox and Safari do throw when an attempt is made to set `x.lastIndex` to a different value, e.g.:
```js
"ca".match(Object.freeze(/a/g))
```
According to my tests, Edge 13 is buggy in its own way: although it allows to freeze a regexp (and report that regexp as frozen through `Object.isFrozen`), it leaves the `lastIndex` property of that regexp as writable.
username_2: Don’t be confused by my last comment: the issue seems more related to non-global regexps for which the lastIndex property was historically ignored (neither read, nor updated) in some cases (including String#match).
username_0: For V8's part, our RegExp implementation executes largely based on sloppy mode JS, so we allow a bunch of failed writes. There is no information leak--the object is really frozen, just non-throwing due to the sloppy mode write.
Personally, I like the Firefox/Safari semantics. If we need to do something non-throwing for this particular case: just pass `false` to the one relevant Set call, right?
username_2: In order to remove some confusion from my comment: in reality, Firefox and Safari always throw when attempting to set the lastIndex property of a frozen regexp, even when the value is unchanged. However:
* For Firefox you should test it on Aurora or Nightly (the new implementation of `String#match` is not yet in stable release).
* Safari (Technology Preview) seems to incorrectly ignore (neither read nor update) the `lastIndex` property of *non-global* RegExp.
username_2: See PR #627 for a possible fix.
username_3: I like the #627 idea. When the property update is useless any, better not to do it than to cope with a meaningless failure to do it.
username_1: Note that the use of a RegExp instance can be distant from the original provider of the instance (ie, it may have been passed through multiple layers of function parameters, value containers, etc). The actual usage site may well be dependent upon use of the updated `lastIndex` value and completely unprepared to deal with the fact that a "frozen" instance was passed to it. Ignoring property updates in that case may silently produce looping or erroneous results. On the other hand, throwing on such updates is more likely to call attention to the fact that this is a malformed program that needs to be repaired. BTW, I believe that was the original motivation for using strict puts in these situations (actually it's the motivation for even having strict puts).
username_1: Before we rush into changing things, can we step back a bit and survey the actual situations?
First, concerning the various places that may try to update non-writable, non-configurable `lastIndex` RegExp properties: are there any cases, not in conformance with the ES5/ES6 specs, that are implemented with the same interoperable but non-conforming behavior across all major browser platforms? It isn't clear to me from Claude's comments that any such fully interoperable bugs exist. Can we verify? Are there tests that cover these cases? If there aren't such universally implemented bugs, then we don't have a web compatibility issue that needs remediation. We just have buggy implementations that are not interoperable in some rare situations. In this case, the behavior specified in the standards should apply and implementations should fix their bugs.
Second, it would be a non-breaking spec. change to make the updates to `lastIndex` be non-strict puts. But that doesn't mean it is a good idea. I've already argued why it might not be. Regardless, it would be a normative change that needs to be justified on its own merit rather than as an accommodation to a buggy implementation.
I assume that somebody will bring this issue and the supporting data to the next TC39 meeting.
username_2: @username_1 According to my testings, given
```js
var rx = Object.freeze(/a/)
```
the following expressions used to consistently not throw in browsers for various reasons:
```js
"b".match(rx)
"b".search(rx)
```
* in Firefox and Safari because they forget to update the lastIndex property to 0 (according to tests in https://github.com/tc39/ecma262/pull/627#issuecomment-229743028 — although there is still some mystery for me on how Safari TP implements RegExp.prototype.@@search);
* in Edge, because it lies about freezing (the lastIndex property remains writable);
* in Chrome ≤ 50, apparently because it ignored the failed write to lastIndex.
However, the following expressions *do* throw in Firefox (but not in Safari or Edge for the same reasons as above):
```js
rx.exec("b")
rx.test("b")
```
username_0: @username_1 I won't be able to collect much data beyond the user bug report for a number of months; that's just how long it takes. This issue might not make it to the next meeting and result in a PR that is ready to merge because of that, and that's OK with me; it just means that Chrome might not ship these semantics in new versions due to the issue until we get this worked out, and other implementers might want to be similarly cautious.
username_4: @username_0 were we able to get data on this one?
username_0: I believe we ended up settling on this as the web compat fix: https://github.com/tc39/ecma262/pull/627
Status: Issue closed
|
LonelyCpp/react-native-youtube-iframe | 704624018 | Title: Not working in stack navigator
Question:
username_0: **Controls are disabled in stack navigator**
Users cannot pause or play the video when the screen is wrapped under stack navigator.
Would really appreciate your help. Thanks
Answers:
username_1: Hi @username_0
That's rather odd. The stack navigator usually will not affect how this component works.
Can you share a snippet of how you've used the youtube component?
username_0: Thanks for help it was my internal issue.
Status: Issue closed
|
aspnetboilerplate/aspnetboilerplate | 405119387 | Title: 'enc_auth_token' not cleared
Question:
username_0: https://github.com/aspnetboilerplate/aspnetboilerplate/blob/82e72bb380e543f3c2e81a32a9be3a67af402c10/src/Abp.Web.Resources/Abp/Framework/scripts/abp.js#L173
Forgive me if I have misunderstood something, but when logging out of an Angular app, `abp.auth.clearToken()` is called, which in turn clears `'Abp.AuthToken'`, but it does not clear the `'enc_auth_token'`, which is what the backend server uses to determine the session.
Thus it seems that logging out and then trying to access the API still works in some cases (for example, I am looking at AbpSession info when logging in, and can see it is already filled in with the previous login's user).

(Excuse the fact that this is part of a more complex system, the point is the same, AbpSession should no longer have any info)
Answers:
username_1: @username_0 you are right, moved to https://github.com/aspnetboilerplate/module-zero-core-template/issues/401
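In the meantime, a possible workaround on logout (a sketch, relying on the `abp.utils.deleteCookie` helper that abp.js ships with):
```js
// Clear the JS-readable token, then also drop the encrypted cookie
// the backend reads, which clearToken() currently leaves behind.
abp.auth.clearToken();
abp.utils.deleteCookie('enc_auth_token', abp.appPath);
```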
Status: Issue closed
|
bootsoon/ng-circle-progress | 350053524 | Title: opacity is inherited from parent div
Question:
username_0: 
still the same
Answers:
username_1: `backgroundColor='white'`
`showBackground=true`
username_0: 
still the same
username_0:
```
NgCircleProgressModule.forRoot
({
"backgroundPadding": 10,
"radius": 60,
"maxPercent": 100,
"outerStrokeWidth": 10,
"outerStrokeColor": "#61A9DC",
"innerStrokeWidth": 0,
"subtitleColor": "#444444",
"showInnerStroke": false,
"startFromZero": false,
"backgroundColor": "white",
"showBackground": true
}),
```
username_0: app.module:
```
NgCircleProgressModule.forRoot
({
"backgroundPadding": 10,
"radius": 60,
"maxPercent": 100,
"outerStrokeWidth": 10,
"outerStrokeColor": "#61A9DC",
"innerStrokeWidth": 0,
"subtitleColor": "#444444",
"showInnerStroke": false,
"startFromZero": false,
"backgroundColor": "white",
"showBackground": true
}),
```
component.html:
```
<div class="fadeScreen" *ngIf="true">
<div class="loaderCenter">
<circle-progress
[percent]="completePercentage"
maxPercent=100>
</circle-progress>
</div>
</div>
```
css classes:
```
.loaderCenter {
top: 50%;
left: 50%;
width: 10em;
height: 10em;
margin-top: -5em;
margin-left: -5em;
position: relative;
}
.fadeScreen {
position: fixed;
top: 0;
height: 100%;
width: 100%;
left: 0;
background: black;
opacity: 0.5;
z-index: 2;
}
```
this is how it is configured
username_1: @username_0
```
<style>
.loaderCenter {
top: 50%;
left: 50%;
width: 10em;
height: 10em;
margin-top: -5em;
margin-left: -5em;
position: relative;
}
.fadeScreen {
position: fixed;
top: 0;
height: 100%;
width: 100%;
left: 0;
background: black;
opacity: 0.8;
z-index: 2;
}
</style>
<div class="fadeScreen" *ngIf="true">
<div class="loaderCenter">
<circle-progress [percent]=50 [showBackground]='true' [backgroundColor]="'white'"></circle-progress>
</div>
</div>
```

username_0: Thanks for the help, but it still looks blurry. Actually I was making a blunder: the loader shouldn't be under that fade screen class. Instead it should be outside it, like this:
```
<div *ngIf="isLoading" class="fadeScreen"></div>
<div *ngIf="isLoading" class="loaderCenter">
  <circle-progress
    [percent]="completePercentage"
    maxPercent=100>
  </circle-progress>
</div>
```
and its z-index should be higher than the fade screen div

Now it is supposed to be like this (left), as it is not blurry like the one on the right.
username_1: ```
<style>
#d1{width:200px;height:200px;background-color:green;opacity:0.5;}
#d2{width:100px;height:100px;background-color:red;opacity:1;}
</style>
<div id="d1">
<div id="d2"/>
</div>
```
You'll get the left one rather than the right one: `opacity` applies to an element's entire rendered subtree, so a child can never appear more opaque than its parent, which is why the overlay and the progress circle need to be siblings.

Status: Issue closed
|
zombodb/zombodb | 143347731 | Title: #expand() queries don't honor transaction visibility rules
Question:
username_0: Queries that include the `#expand()` construct fail to include transaction visibility, leading to incorrect results as the subquery sees dead rows.
Answers:
username_0: Test included to validate that this never happens again. Turns out this has been broken since the beginning. :(
Status: Issue closed
username_0: about to be released in v2.6.10 |
ukon1990/wow-auction-helper | 712682272 | Title: Dashboard bugs
Question:
username_0: - [ ] The drag and drop sorting seems to bug out every now and then…
- [ ] There seems to be an issue with "item rules" on creation
- [ ] There seems to be a bug that has appeared with "or" rules…
Answers:
username_0: Fixed this a little while ago :)
Status: Issue closed
|
riot/riot | 102916452 | Title: Close some issues
Question:
username_0: There are numerous issues which seem to have been resolved but have not been closed.
- How to detect child tag is mounted? #1157
- External CSS mutations #1153
- Tag content is evaluated despite IF attribute being FALSE in IE11 #1108
- Make 'this' point to current element in riot expressions #1105
- can we give preference to fix 990 #1103
- Cannot create scoped selector queries before mount #999
Answers:
username_1: @username_0 thanks for taking a look.
- closed: #999
- label `answerd` added: #1153
- can't close (part of milestones): #1108 #1103
For others, could be. I'll close it later.
Status: Issue closed
username_1: OK, I'll close this issue, too :-) |
lfabbric/cfwheels-fixtures | 303916365 | Title: Remove cfwheels dependencies
Question:
username_0: Remove any CFWheels dependencies from the application. This is a fairly big move; however, I think it will simplify issues currently found with pulling tables back that do not comply with CFWheels norms (such as a non-plural table).
Answers:
username_0: The dependency has been removed. Two bugs were found in edge-case situations and addressed.
Status: Issue closed
|
rancher/rancher | 609017026 | Title: CIS Scan seems to run master tests on worker nodes
Question:
username_0: As I tested further, when removing worker nodes, the test passed. Correlating with the above logs, it looks like the scan tried to perform some tests which should run on master nodes only - require apiServer. That's why it failed there.
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI):
Component | Version
-- | --
Rancher | v2.4.2
User Interface | v2.4.14
Helm | v2.16.3-rancher1
Machine | v0.15.0-rancher35
- Installation option (single install/HA): HA
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Infrastructure Provider OpenNebula
- Machine type (cloud/VM/metal) and specifications (CPU/memory): VM 4vCPU 8GB
- Kubernetes version (use `kubectl version`):
```
1.17.2
```
- Docker version (use `docker version`):
```
19.03
```
Answers:
username_1: **Verified on 2.4.2. Not able to reproduce the issue.**
**Steps:**
- Deploy a DO rke cluster - 1 etcd/control plane node and 1 worker node
- When the cluster comes up successfully, and is in Active state, run Permissive CIS scan on the cluster.
- The scan run is successful and report is generated.
username_2: Hey there, I am from the same company as @username_0.
We spent some more time debugging our issue and found out that it's the same bug as reported here:
https://github.com/rancher/rancher/issues/26598
We are using OpenEBS which launches a pod with the name "openebs-apiserver-556ffff45c-l4l5s" which runs on a worker node. However, due to the bug mentioned above, that node is detected as a master node, which lets the "sonobuoy-rancher-kube-bench-daemon-set" pod fail on that node.
username_3: We've seen this on v2.5.1, where scans fail due to an etcd-snapshots process stuck running (Puppet does not differentiate between worker and cp nodes):
```
[email protected]:~# pgrep -f /etcd
868
[email protected]:~# ps -ef | grep 868
root 868 814 0 Nov14 ? 00:00:00 inotifywait --format %w%f -m -r -e create /opt/rke/etcd-snapshots
root 6317 6143 0 10:38 pts/0 00:00:00 grep --color=auto 868
```
heroku/cli | 592332400 | Title: TypeError: Cannot read property 'id' of undefined when heroku login -i
Question:
username_0: Hi,
I came across a login issue. When I run `heroku login -i`, it prints the following error:
```
root@srv:~# heroku -v
heroku/7.39.2 linux-x64 node-v12.13.0
root@srv:~# heroku login -i
heroku: Enter your login credentials
Email: *****@*****.****
Password: *********
TypeError: Cannot read property 'id' of undefined
at Login.interactive (/usr/local/lib/heroku/node_modules/@heroku-cli/command/lib/login.js:183:30)
``
I tried to uninstall the CLI then reinstall it, It still works no well. Could you help to solve it ? |
nteract/nteract | 401190582 | Title: Layout thrashing in the jupyter nteract app.
Question:
username_0: **Application or Package Used**
The jupyter extension.
**Describe the bug**
Codemirror layout thrashing when rendering the code cells.
**To Reproduce**
Steps to reproduce the behavior:
Type anything in the web application (especially with a big notebook).
**Expected behavior**
That the editor respond in a reasonably quick fashion.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: macOS
- Browser Chrome
Answers:
username_0: https://github.com/nteract/nteract/blob/42d000fbcea07d7cad38f36f30421387688915d0/packages/editor/src/vendored/codemirror.ts#L501
I think this line needs to be removed.
username_1: Oooh, good call. Thank you.
username_1: If I remove our `height: auto;` line I end up with a big cell when I have only one line:

I wonder if there is more to set within codemirror options.
username_0: Ah shoot. Is there any reason code mirror is being used and not monaco? Monaco seems like it could be a better choice given it has a [built-in completions provider](https://microsoft.github.io/monaco-editor/playground.html#extending-language-services-completion-provider-example) and automatically uses web workers.
username_1: That's more legacy than anything else, since CodeMirror was the de-facto good editor to use years ago. We've just carried it on. I'd love to see Monaco integrated as the main editor. I've tinkered with it in the past and noticed that each instance had a web worker running, resulting in a heavy memory profile.
username_0: I might be interested in taking that on -- this issue from the monaco github makes it seem like you can use a single web worker. https://github.com/Microsoft/monaco-editor/issues/774. I'll start playing around with it.
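Roughly, the idea would be to point every language label at one shared worker bundle via `MonacoEnvironment` (a sketch; exact paths depend on the bundler setup, and this trades per-language smarts for a single worker):
```ts
// Hypothetical: route all of monaco's worker requests to one bundle so
// N editor instances don't each spin up their own language workers.
(self as any).MonacoEnvironment = {
  getWorkerUrl(_workerId: string, _label: string): string {
    // Ignore the label (json/css/typescript/...) and always hand back
    // the base editor worker.
    return "/static/editor.worker.js";
  },
};
```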
Status: Issue closed
username_2: We're no longer setting the height via `auto` in the CodeMirror component so we can go ahead and close this issue. |
anmol098/waka-readme-stats | 716175911 | Title: BUG when github action running
Question:
username_0: When github action running for waka Readme update in my github, error occurs in build stage
`ERROR: Could not find a version that satisfies the requirement opencv-python==4.2.0.34 (from -r requirements.txt (line 28)) (from versions: 192.168.3.11, 192.168.3.11, 172.16.58.3, 172.16.58.3, 192.168.3.11, 172.16.58.3, 172.16.58.3, 192.168.3.11)
ERROR: No matching distribution found for opencv-python==4.2.0.34 (from -r requirements.txt (line 28))`<issue_closed>
Status: Issue closed |
flutter/flutter | 376819273 | Title: Debugging experience has been worse since v0.10.0~
Question:
username_0: Not sure what the issue is, but ever since a couple of weeks ago my debugging experience has progressively gotten worse using Flutter in Visual Studio.
When I use hot reload and refresh within a few seconds I get this error and have no idea what to do about it:
```
Reload already in progress, ignoring request Unhandled exception:
TimeoutException: Request to Dart VM Service timed out: _flutter.listViews({})
#0 VM.invokeRpcRaw (package:flutter_tools/src/vmservice.dart:842:9)
<asynchronous suspension>
#1 VM.invokeRpc (package:flutter_tools/src/vmservice.dart:859:49)
<asynchronous suspension>
#2 VM.refreshViews (package:flutter_tools/src/vmservice.dart:957:25)
#3 FlutterDevice.refreshViews (package:flutter_tools/src/resident_runner.dart:87:30)
```
Answers:
username_0: So I switched to channel `dev` and this issue does not seem to manifest
username_1: @username_2, we were talking about this error impacting tests and adding retries for that - could this actually be a Dart or Engine issue though if it's now taking longer than it used to?
username_2: cc @username_3
username_2: @username_0 Do you still see this on master?
username_3: What does `refresh` mean here?
Can you capture a log (run the `Dart: Capture Logs` command from the command palette and tick the `Flutter Run` and `Debugger (Observatory)` categories), then repro the issue and attach the log? |
nulib/images | 195086470 | Title: Update ARCHV-IMG location for tifs found on imagesarch1
Question:
username_0: Update the 17,403 pids in this list to have ARCHV-IMG location of:
```
http://rstorage.library.northwestern.edu/archive/farchive13/inu-dil/hydra/images/ + _tifname_
```
Currently:
```
datastreams["ARCHV-IMG"].dsLocation
=> "http://www.library.northwestern.edu"
```
https://northwestern.box.com/s/6dkvohyq3c8l5excae5qc3ud86oghrwh
(Note that the filename being used here is of the inu-dil (dash rather than colon) variety.)
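A sketch of the update pass (illustrative only; `pid_to_tif` stands for the pid-to-tif mapping from the Box list, and the object lookup should use whatever the app's actual model is):
```ruby
BASE = "http://rstorage.library.northwestern.edu/archive/farchive13/inu-dil/hydra/images/".freeze

pid_to_tif.each do |pid, tif_name|
  obj = ActiveFedora::Base.find(pid, cast: true)
  obj.datastreams["ARCHV-IMG"].dsLocation = BASE + tif_name
  obj.save
end
```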
### Background Info
This issue has been broken off from issue #326
- We identified 24,000 older records in the Images application with an ARCHV_IMG location of `http://www.library.northwestern.edu`.
- We were able to find copies of the tifs on other servers and match by accession number.
- The tif files were copied into `/farchive/farchive13/farchive13/inu-dil/hydra/images/` and named by pid (see source/destination file below)
- https://northwestern.box.com/s/wkw0h973fioe4fx9u590p9pz4hldu4gp
- Since `/farchive/farchive13/inu-dil/hydra/images/` is accessible (curl from repository to test) at `http://rstorage.library.northwestern.edu/archive/farchive13/inu-dil/hydra/images/`, we need to update the ARCHV_IMG location for these records
Status: Issue closed |
superfly/flyctl | 1110780726 | Title: [regression] DOCKER_HOST is no longer accepted
Question:
username_0: `flyctl` used to accept the `DOCKER_HOST` env variable the same as Docker. This is now failing with a parse error.
```
$ DOCKER_HOST=192.168.1.100 flyctl deploy example
==> Verifying app config
--> Verified app config
==> Building image
WARN Error connecting to local docker daemon: unable to parse docker host `192.168.1.100`
```
but works with Docker
```
$ DOCKER_HOST=192.168.1.100 docker info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.7.1)
compose: Docker Compose (Docker Inc., v2.2.3)
...
```
Adding a scheme prefix also fails...
```
$ DOCKER_HOST=tcp://192.168.1.100 flyctl deploy example
==> Verifying app config
--> Verified app config
==> Building image
WARN Error connecting to local docker daemon: request returned Not Found for API route and version http://192.168.1.100/_ping, check if the server supports the requested API version
```
but works with Docker
```
DOCKER_HOST=tcp://192.168.1.100 docker info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.7.1)
...
```
Answers:
username_1: So, I tried to find where this broke based on your regression report but both the below yield no answers:
```
$ git grep -n "DOCKER_HOST" $(git rev-list --all)
```
Do you have a rough idea when you saw this working last @username_0?
Status: Issue closed
username_1: You're using a remote builder in the example you pasted. For remote builders (and only for remote builders), `DOCKER_HOST` isn't respected as we're not calling `docker.FromEnv` in the codepath (as we do for local docker).
username_2: @username_1 I'm using a remote Docker engine that I'm connecting to through WireGuard
username_0: This is still an issue. I have an x86-64 server running docker that I occasionally build on. I don't know when it worked last, but it's a feature we've encouraged people to use for some cases. @username_2's use case should work as well even though it's over WireGuard.
username_1: Can you give #803 a spin @username_0 and merge if it fixes the problem for you?
Status: Issue closed
|