repo_name | issue_id | text |
---|---|---|
dotnet/aspnetcore | 650257529 | Title: Add more Razor DDRIT tests
Question:
username_0: We added the first set of tests but we want to add more coverage,
- [ ] `<text>` tag autoinsert
- [ ] Colorization
Answers:
username_1: I'll include my personal wishlist below. Also of note here is that it might behoove us to have a couple of these scenarios be local tests (as opposed to Nexus like our existing tests) which are likely faster to run, less error prone, and not currently covered as a test scenario.
- [ ] Semantic Tokens, full-document and edit
username_2: The auto-insert, diagnostics, C# code actions & Razor code actions tests are ready in PR form, but currently blocked on external PRs. Moving this issue to P3.
username_2: Added [Razor HTML onTypeRename](https://github.com/dotnet/aspnetcore/issues/283250) to big rock & removed [Semantic colorization](https://github.com/dotnet/aspnetcore/issues/26977) as that's not scheduled for P3 (blocked on external API).
username_2: All the DDRITs are blocked on [diagnostics not being await-able in the DDRIT environment](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1253831). The plan is to leverage diagnostics being available as a way for us to know if the language server(s) have initialized. Only thereafter do we actually perform this test.
username_2: Still blocked.
username_3: Re-purposing this to be "local" DDRIT tests |
aws/aws-sdk-js-v3 | 644050906 | Title: AWS typescript SDK for deno typescript runtime.
Question:
username_0: **Is your feature request related to a problem? Please describe.**
TypeScript code generated to a file tree on GitHub, in a manner that can be imported and used by Deno,
so that users of Deno can use the AWS TypeScript SDK in the future.
**Describe the solution you'd like**
Typescript code generated to a file tree on github, in a manner that can be imported by deno.
A simple copy of the generated TypeScript, transformed with a find & replace so that imports use a ".ts" suffix on files, would be a start. E.g. the first import in
https://github.com/aws/aws-sdk-js-v3/blob/master/clients/client-s3/S3.ts#L1
would read `import { S3Client } from "./S3Client.ts";`
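For illustration, a minimal sketch of such a transform (the directory layout, regex and entry point are assumptions, not the SDK's actual tooling):
```typescript
// Hypothetical helper: append ".ts" to relative import specifiers so
// the generated sources can be resolved by Deno.
import { promises as fs } from "fs";
import * as path from "path";

async function addTsSuffix(dir: string): Promise<void> {
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      await addTsSuffix(full);
    } else if (entry.name.endsWith(".ts")) {
      const source = await fs.readFile(full, "utf8");
      // Rewrite specifiers such as "./S3Client" to "./S3Client.ts".
      const rewritten = source.replace(
        /(from\s+["'])(\.\.?\/[^"']+?)(["'])/g,
        (_m: string, pre: string, spec: string, post: string) =>
          spec.endsWith(".ts") ? `${pre}${spec}${post}` : `${pre}${spec}.ts${post}`
      );
      await fs.writeFile(full, rewritten, "utf8");
    }
  }
}

addTsSuffix("./clients").catch(console.error);
```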
Other than that, Deno aims to be very browser compatible, including the HTTP fetch API.
**Describe alternatives you've considered**
An alternative process could be to create a separate repo that watches this repo and transforms the typescript sources as required.
Another alternative is to address this request upstream to Smithy (?) & origin of packages such as @aws-sdk/fetch-http-handler.
**Additional context**
https://deno.land/
Answers:
username_0: Attempted the following:
According to https://www.reddit.com/r/Deno/comments/hcdtrx/deno_faq_for_this_subreddit/
jspm.dev operates a rewriting service that transforms NPM modules.
```
import S3 from 'https://jspm.dev/@aws-sdk/client-s3';

async function test() {
  const s3 = new S3({ region: 'ap-southeast-2' });
  const buckets = await s3.listBuckets({});
  console.log(buckets);
}

test();
```
Unfortunately jspm doesn't implement node's http2 library.
```
error: Import 'https://jspm.dev/npm:@jspm/core@2/nodelibs/http2' failed: 404 Not Found
Imported from "https://jspm.dev/npm:@aws-sdk/node-http-handler@1!cjs:8"
```
https://github.com/jspm/jspm-core/issues/5
username_0: Apparently there is a tool, https://www.npmjs.com/package/denoify, to transform .ts sources.
username_0: Working on this in https://github.com/username_0/aws-sdk-js-v3/tree/master/deno
username_0: Denoify was too generic for the task - particularly the fact that all the `@aws-sdk/*` dependencies are right here in the repo.
Had an attempt at doing the various find/replaces to get this to work.
See https://github.com/username_0/aws-sdk-js-v3/tree/master/deno
Latest error on that is:
```
error: TS1192 [ERROR]: Module '"https://raw.githubusercontent.com/username_0/aws-sdk-js-v3/master/deno/client-s3/mod"' has no default export.
import S3 from 'https://raw.githubusercontent.com/username_0/aws-sdk-js-v3/master/deno/client-s3/mod.ts';
~~
at file:///tmp/test_deno_s3.ts:1:8
TS2552 [ERROR]: Cannot find name 'FileReader'. Did you mean 'fileReader'?
const fileReader = new FileReader();
~~~~~~~~~~
at https://raw.githubusercontent.com/username_0/aws-sdk-js-v3/master/deno/chunked-blob-reader/mod.ts:7:28
'fileReader' is declared here.
const fileReader = new FileReader();
~~~~~~~~~~
at https://raw.githubusercontent.com/username_0/aws-sdk-js-v3/master/deno/chunked-blob-reader/mod.ts:7:11
```
Which is unfortunately an API missing from Deno.
https://github.com/denoland/deno/issues/5249
username_1: The fetch-http-handler should work with Deno:
https://github.com/aws/aws-sdk-js-v3/tree/81b2e87067642a8cea8649cbdb2c342ca9fb6ac6/packages/fetch-http-handler
I tried loading the EventBridge client via the pika.dev service to no avail:
https://www.pika.dev/npm/@aws-sdk/client-eventbridge
username_1: Deno support would be awesome for scripts that are easy to pass around without having to pass a whole project directory and without requiring colleagues to have deep knowledge of e.g., pip (Python), bundler (Ruby), npm/yarn, etc.
username_1: It's not pretty but, for anyone who comes looking, I was able to piece together a request that seems to mostly* work:
```
// To Run: deno run --allow-net /path/to/this/file.ts
import FetchHttpHandler from 'https://jspm.dev/@aws-sdk/fetch-http-handler';
import AwsTypes from 'https://jspm.dev/@aws-sdk/types'
import SignatureV4 from 'https://jspm.dev/@aws-sdk/signature-v4';
import Sha256 from "https://jspm.dev/@aws-crypto/sha256-js";
import moment from 'https://cdn.skypack.dev/moment';

const signer = new SignatureV4.SignatureV4({
  credentials: {
    accessKeyId: 'SomeAccessKey',
    secretAccessKey: 'SomeSecretKey',
    sessionToken: 'Some<PASSWORD>'
  },
  region: "eu-west-2",
  service: "events",
  sha256: Sha256.Sha256
});

var request = {
  method: "POST",
  headers: {
    "X-Amz-Target": "AWSEvents.PutEvents",
    "Content-Type": "application/x-amz-json-1.1"
  },
  body: JSON.stringify({
    "Entries": [
      {
        "DetailType": "Scheduled Event",
        "Source": "aws.events",
        "Resources": [ "arn:aws:events:eu-west-2:SomeAccountId:rule/SomeRuleName" ],
        "Account": "SomeAccountId",
        "Time": moment().valueOf(),
        "Region": "eu-west-2",
        "Detail": "{}"
      }
    ]
  }),
  protocol: "https:",
  hostname: "events.eu-west-2.amazonaws.com",
  port: 443,
  path: "/",
  query: null,
}

request = await signer.sign(request);
const fetcher = new FetchHttpHandler.FetchHttpHandler();
const response = await fetcher.handle(request);
const parsedResponse = await response.body.text();
console.log('response', parsedResponse);
```
\* The Deno code above does seem to reach the right backend logic at AWS but the particular action I was trying to invoke is apparently not allowed by AWS. When I ran the code with expired credentials, I got the following response: `{"__type":"ExpiredTokenException","message":"The security token included in the request is expired"}`. After I fixed that and another minor issue about the format of the timestamp in the body of my request, I got the following response: `{"Entries":[{"ErrorCode":"NotAuthorizedForSourceException","ErrorMessage":"Not authorized for the source."}],"FailedEntryCount":1}` which means that my use-case of manually testing a Cron/Schedule rule won't work (even using a more mature client) because my user account, which did seem to authenticate, is not authorized to impersonate the EventBridge service itself.
username_2: deno FileReader API has been merged to master: https://github.com/denoland/deno/issues/5249 :champagne:
I [forked](https://github.com/username_2/aws-sdk-js-v3/tree/deno) the work from @username_0 and added a few more fixes. We can now make some basic API requests:
```typescript
import { S3 } from 'https://raw.githubusercontent.com/username_2/aws-sdk-js-v3/deno/deno/client-s3/mod.ts'

const s3 = new S3({
  region: Deno.env.get('AWS_REGION'),
  credentials: {
    accessKeyId: Deno.env.get('AWS_ACCESS_KEY_ID')!,
    secretAccessKey: Deno.env.get('AWS_SECRET_ACCESS_KEY')!,
  },
})

const { Buckets = [] } = await s3.listBuckets({})
Buckets.forEach(bucket => console.log(bucket.Name))
```
username_0: Thanks @username_2
Yes, and FileReader has now landed in Deno 1.3.0.
username_3: someone has been working on DynamoDB client https://github.com/chiefbiiko/dynamodb
username_4: I noticed the SecretsManager > listSecrets request params are missing the 'Filters' param.
```
const listSecretsRequest: ListSecretsRequest = {
  Filters: [
    {
      Key: 'name',
      Values: [stackFilter],
    },
  ],
};

const existingSecretsResponse: ListSecretsResponse = await secretsManager.listSecrets(
  listSecretsRequest,
);
```
username_5: I noticed the SecretsManager > listSecrets request params are missing the 'Filters' param.
```
const listSecretsRequest: ListSecretsRequest = {
  // this errors out
  Filters: [
    {
      Key: 'name',
      Values: ['someSecretName'],
    },
  ],
};

const existingSecretsResponse: ListSecretsResponse = await secretsManager.listSecrets(
  listSecretsRequest,
);
```
username_2: @username_5 Could you open an issue on https://github.com/username_2/aws-sdk-js-v3/issues? Please provide a full example to reproduce the problem.
username_5: @username_2 Done, cheers - https://github.com/username_2/aws-sdk-js-v3/issues/3
username_6: Straight question, as the outcome of this thread is not clear... does the AWS JavaScript SDK fully support Deno?
username_2: @username_6 No, it does not support deno at all. [I maintain a fork](https://github.com/username_2/aws-sdk-js-v3) that I try to keep in sync which supports deno, and is [published on deno.land](https://deno.land/x/[email protected]).
username_6: That's a pity. Does anyone know if Amazon plans to officially support Deno? Is posting in this thread making such a request, or does such a request need to be posted elsewhere?
username_7: This is the maintainer of the AWS SDK for JavaScript.
Tagging recent commenters @username_6 @username_2 @username_5
Do upvote on the first post on this issue https://github.com/aws/aws-sdk-js-v3/issues/1289#issue-644050906, so that it appears in one of the most voted issues.
Some ways this request can get prioritized is:
* Use existing community supported forks of AWS SDK for JavaScript to use Deno on AWS, like https://github.com/username_2/aws-sdk-js-v3
* Use Deno Runtime on AWS Lambda https://github.com/hayd/deno-lambda. If Deno usage increases on AWS, Lambda team is likely to prioritize providing official support and SDK would likely follow.
* Work with Deno community to request release schedule which is backward compatible. This would increase trust among Deno community. Deno currently releases new minor version [every six weeks](https://deno.land/manual/contributing/release_schedule), and I couldn't find documentation on when `v2.0` will release or what would happen to `v1.0` when next major version is released.
* Write tweets or blog posts about how support for Deno would be helpful for AWS SDK for JavaScript users. For example, a blog post about benefits of using Deno in the cloud vs other runtimes like Node.js.
username_0: Thanks for the encouragement @username_7 and the work @username_2 on https://github.com/username_2/aws-sdk-js-v3 https://deno.land/x/aws_sdk
username_8: @username_7 As of time of writing, this issue has the highest number of 👍s by a fairly large margin: https://github.com/aws/aws-sdk-js-v3/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc
Great work deno community! 🎉
I can't speak to how much progress has been made on the other points, but for my use case, Deno's number 1 advantage compared to node is its excellent support for HTTP2 out of the box with its native fetch for the client side and built-in http server APIs for the server side.
From my limited research so far, the HTTP2 client situation in node is especially dire:
- node-fetch doesn't support http2 and has no plans to add it: https://github.com/node-fetch/node-fetch/issues/560
- axios also doesn't support http2: https://github.com/axios/axios/issues/1175
- fetch-h2 is working for some people but still has bugs to iron out: https://github.com/grantila/fetch-h2/issues/116
- got has been the most usable of the bunch for me so far, but specifically doesn't support HTTP2 on node 14 (the latest version available for Lambda) due to its buggy http2 implementation: https://github.com/sindresorhus/got/blob/main/documentation/2-options.md#http2
Many of the most widely used AWS services (dynamo, s3, lambda, etc) would benefit greatly from a wider availability of high-quality HTTP2-capable clients (fewer tcp connections -> less latency for clients and lower load on servers). In fact, this library itself can significantly reduce its maintenance burden in the long term by relying on a robust native fetch implementation in deno over maintaining its [own http2 client](https://github.com/aws/aws-sdk-js-v3/blob/25107890feeda2ce2d42b8cfc5672485d9d02261/packages/node-http-handler/src/node-http2-handler.ts) built on top of the low-level APIs provided by node (and hope node eventually catches up with a native fetch implementation of its own). |
haskell/containers | 640739944 | Title: All complexity annotations should be in MathJax format
Question:
username_0: …because it's beautiful!
MathJax is already used in:
* `Data.Sequence`
* `compose` (as of https://github.com/haskell/containers/pull/729)
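For reference, an annotation in this style looks like the following in Haddock markup; the function here is only an illustration, and Haddock renders the `\( ... \)` span with MathJax:
```haskell
import Data.Map (Map)
import qualified Data.Map as Map

-- | \(O(\log n)\). Look up the value at a key in the map.
lookupValue :: Ord k => k -> Map k a -> Maybe a
lookupValue = Map.lookup
```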
Answers:
username_1: I'm glad you like it! I converted `Data.Sequence` over some time ago, but I decided to leave the rest alone in case I got a bunch of complaints—MathJax has historically been a bit troublesome in some browsers. Fortunately, the complaints haven't materialized, so I guess the situation is now good enough. Let's do this.
username_0: https://github.com/haskell/containers/commit/fdc4ccaa1e29d5fc541815d4ff6f4d058db82a1a seems to contain at least some of your work. Hopefully the browser support issue has been resolved by now!
username_1: Ah, that reminds me of the loading time concern.... Regardless, the complaints have not poured in, so we proceed. |
ESMValGroup/ESMValTool | 410358664 | Title: Recipe has multiple copies of the same variable
Question:
username_0: I think I've uncovered a bug. I want to apply several different preprocessors to a single variable, in this case it's temperature.
For instance, I want to make a single figure where the left-hand side shows a zonal mean, and the right-hand side a meridional mean. This kind of figure is very common. There's no sense in doing all the hard work in the diagnostics (i.e. writing my own zonal mean function or calling the `zonal_means` preprocessor in my diagnostic script), as it's so much easier and more traceable to make it in the preprocessor.
Just to be explicit, my recipe has something like this:
```
diagnostics:
  diag:
    variables:
      thetao: # ocean surface temperature
        preprocessor: zonal_mean
        field: TO3M
      thetao: # ocean surface temperature
        preprocessor: meridional_mean
        field: TO3M
    scripts:
      diagnostic:
        script: diagnostic_plotter.py
```
Note that the `thetao` variable appears twice. Let's assume that `diagnostic_plotter.py` plots the zonal mean on the left and the meridional mean on the right in a single figure.
At the moment, the script only sees a single thetao. I believe that the problem is that the `metadata.yml` file of the first `thetao` variable is overwritten by the second, as they are not uniquely specified.
ie the path of both metadata files is:
```
output/preproc/diag//thetao/metadata.yml
```
Perhaps there is an alternatively way of structuring my diagnostic recipe such that they are treated independently but are still passed to the same single output script. Alternatively, can we instead replace this path with:
```
output/preproc/diag//thetao_[preprocessor]/metadata.yml
```
Answers:
username_1: You have to use unique names for the different variable entries in the same diagnostic in the recipe, if you write it like this, it works:
```yaml
diagnostics:
  diag:
    variables:
      zonal_thetao_mean:
        short_name: thetao
        preprocessor: zonal_mean
        field: TO3M
      meridional_thetao_mean:
        short_name: thetao
        preprocessor: meridional_mean
        field: TO3M
    scripts:
      diagnostic:
        script: diagnostic_plotter.py
```
The name you use in the recipe for the group of datasets of a variable is available as a `variable_group` attribute.
username_0: Brilliant, thanks!
username_2: :arrow_up: that Bouwe dude knows his stuff. Also, no need for you to write a meridional or zonal means diag in the first place, you'll find it in `diag_scripts/validation.py`
Status: Issue closed
|
ariesjia/grunt-riot | 80194704 | Title: concat and cwd
Question:
username_0: ```coffee
module.exports = (grunt) ->
config = require("../config")
riot =
options:
concat:true
dev:
cwd:"app/tags"
src:"**/*.html"
dest:".tmp/jstag/tag.js"
# ext:'.js'
grunt.config("riot",riot)
grunt.loadNpmTasks('grunt-riot')
```
This will not work; grunt tells me that it cannot find the files in `app/tags`. However, when I write the following, it works:
```coffee
module.exports = (grunt) ->
  config = require("../config")
  riot =
    options:
      concat: true
    dev:
      src: "app/tags/**/*.html"
      dest: ".tmp/jstag/tag.js"
      # ext: '.js'
  grunt.config("riot", riot)
  grunt.loadNpmTasks('grunt-riot')
```
Answers:
username_1: ```
riot: {
options: {
concat : true
},
src: 'public/templates//tags/*.tag',
dest: 'public/templates/scripts/tags.js'
}
..............
.............
grunt.loadNpmTasks('grunt-riot');
```
I ran _grunt riot_ and got the message below in the terminal, but the output file is not generated:
```
Running "riot:src" (riot) task
File src created
Running "riot:dest" (riot) task
File dest created
```
Please note that the compilation works if I run the `riot {tags_folder} {final_file}` command. |
turnkeylinux/tracker | 1168789284 | Title: using this iso on Xen (Archlinux) stops at emergency console
Question:
username_0: Hi!
I can't get this ISO to work. Every time it complains that it can't mount the root image when reading from the CDROM ISO file. Using the Debian 11 core ISO does not produce this error and installs flawlessly.
debian config:
name = "new_moodle"
kernel = "/mnt/iso/install.amd/xen/vmlinuz"
ramdisk = "/mnt/iso/install.amd/xen/initrd.gz"
extra = "debian-installer/exit/always_halt=true -- console=hvc0"
disk = ["file:/home/vibrion/turnkey-moodle.iso,xvdb:cdrom,r" ,"phy:/dev/mapper/VM_vg-new_moodle,xvda,rw"]
root = "/dev/xvda ro"
boot = "d"
memory = 2048
vif = [ 'mac=00:16:3e:cc:bf:ff,bridge=br0' ]
I suspect something related to extra= options
Thanks!
Answers:
username_1: Hi @username_0
Did you build the ISO? Or are you using the v16.1 ISO? Are these the same instructions that work ok for Debian 11?
Have you tried to v17.0rc ISOs? (Only available for Core & TKLDev for now).
Also, could you please explain in a little more detail where it errors (i.e. at which explicit step it fails, exactly what it's doing when it errors, etc.). Also please share the explicit message that it gives when it fails. I'm hoping that it mentions what it is trying to do, which might assist me in understanding the actual issue.
I don't have much experience with Xen (actually, other than the AWS implementation, none), but is it possible to choose between UEFI boot or legacy (BIOS) boot? I ask because I know that Debian default disks do (or should do AFAIK), but TurnKey definitely doesn't (the 17.0rc1 ISOs have part of the UEFI boot supported, but it's incomplete and will be removed for the stable v17.0 release, with a plan to re-look at it later).
username_1: Looking a bit closer, I'm not at all sure about the `extra` line. We don't use generic debian-installer so that shouldn't do anything I don't think, but as i say, I don't know Xen.
Also, have you tried booting into the live environment? Does that work?
username_2: XEN has not been able to install since TKL 15.x in my experience, at least with TKL-XEN buildtasks.
Something changed with 16.x on the TKL side I believe. I don't think the TKL-XEN buildtasks changed.. That was my experience anyway.
username_1: @username_0 ping
username_0: Sorry, sorry, sorry for the delay! After several attempts to install the TKL Moodle appliance, with different errors, I switched to a core Debian 11 install (also a TurnKey image). With the Moodle live image, after testing some changes in the DomU conf (extra = "boot=live initrd=/live/initrd.gz root=/dev/ram rw showmounts di-live single noinithooks net.ifnames=0 --", extracted from the grub configs), I was able to install to disk, but a problem with grub arose (this one seems related to my setup and the default install in this image: an LV volume inside an LV volume).
So Debian 11 core installs without flaws (in fact I use this image, with a different DomU.conf, as a template). Obviously I had to manually install the necessary packages for the Moodle platform, but it works.
Thank you all for the replies!
Martin |
poetry-book/poetry-book-web | 704980347 | Title: Initial thoughts
Question:
username_0: Two ways to get poems from the user:
- the web app lets you select a directory on your PC where you keep your poems in a format compatible with `poetry-book-cli`
- the web app lets you copy-paste your poems into some text fields. Then the user should be able to order the poems, fill in a form for the poetry book attributes and write the preface.
Then, the user should be able to choose between LaTeX or PDF output. |
spacemeshos/go-spacemesh | 435481928 | Title: Multiple miner IDs on single node
Question:
username_0: Every PoST commitment unit (~256 GiB) is associated with a unique ID pair (ed + bls key pairs). In case a user expands several commitment units it will have several IDs running on a single node. All IDs should be sharing a single state and are only distinguished when participating in Hare/layer committees
Answers:
username_1: Variable post is the single most important user-facing new feature of Spacemesh 0.2
username_1: The ideal post size unit is 100GB base and not 250GB. With mesh growth, we can't expect users to give 0.5TB on home drives. The gpu-post scrypt params should target 24 hours of work for the base unit on a fast gpu.
username_2: This epic includes multiple items:
https://github.com/spacemeshos/go-spacemesh/issues/498
https://github.com/spacemeshos/go-spacemesh/issues/842
https://github.com/spacemeshos/go-spacemesh/issues/846
https://github.com/spacemeshos/go-spacemesh/issues/848
https://github.com/spacemeshos/go-spacemesh/issues/849
https://github.com/spacemeshos/go-spacemesh/issues/851
https://github.com/spacemeshos/go-spacemesh/issues/852 |
spring-cloud/spring-cloud-commons | 370252897 | Title: Profile Precedence in Spring Cloud Kubernetes sources
Question:
username_0: <!--
Thanks for raising a Spring Cloud issue. What sort of issue are you raising?
Question
Is the active profile precedence different in spring cloud kubernetes than spring boot, spring cloud?
Bug report
In **Spring Boot/Spring Cloud** (I am using Config Server to load configurations), the active profile precedence is from left to right: the left-most profile overrides configuration values that are also defined in profiles to its right. Example:
spring.profiles.active = ${spring.application.name}, common-db, common-kafka
my application name is MyApp
and in MyApp, I have a configuration
db.username = myuser
and in common-db it's
db.username = genericuser
In spring boot / cloud, the expected result is myuser
The actual result is myuser.
In **spring.cloud.kubernetes.config.sources**
- name: ${spring.application.name}
- name: common-db
The precedence is bottom up:
Expected result is myuser
The actual result is genericuser
Enhancement
Is this the expected precedence for spring kubernetes? I am using kubernetes **ConfigMap**
--> |
HeavyHorst/remco | 275548423 | Title: [Feature request] Fetch remco.toml and src template files from URL and be watchable
Question:
username_0: Hi,
Thanks for your tool, which seems interesting and promising.
Not sure if it is relevant, but I think it would be a great feature to have the possibility to fetch the template file from a public/private URL and have it watched at an interval, in the same manner as values from a backend.
I think it would be a perfect feature for remco.toml too (with autoreload, and deleting config files that are no longer there).
I ask this because I'm looking for a solution to be able to distribute containers with default templates in the image, but be able to fetch remco.toml/resource templates from a URL/git.
What do you think ?
Regards,
Answers:
username_1: I think you could try to put your remco.toml and template data into one of the supported backends (could be a yaml file on github or any other url) and then let remco rebuild its own config files and reload itself.
remco can write its pid to a file.
You could then reload with a command like: kill -HUP \`cat /path/to/pidfile\`
username_1: @username_0 does this work for you ?
username_0: Sorry for the late answer.
I didn't try it yet, but for now I can't figure out how to achieve this simply just by reading the docs :)
Will it require at least 2 files per resource template?
- a remote YAML with a key containing the real TOML
- a local resource template that just prints the value of the TOML placed inside a key in the remote YAML file
If you have a functional example, I'm open to reading it :)
Thanks
username_1: Yeah, that's basically what I had in mind.
At the moment I don't have a functional example, but maybe I'll come up with one at the weekend.
If you're faster, feel free to post yours here. |
NativeScript/docs | 227068008 | Title: setup/native-script.rb requires sudo
Question:
username_0: https://github.com/NativeScript/docs/blob/master/start/quick-setup.md requests the user to run a remote script with sudo.
As someone that already has all the dependencies required to just execute `npm install -g nativescript` and go onto the next page I found it off-putting to have to read through the very thorough and clear *quick-setup* guide which requested that I execute a remote script with root privileges. I am not comfy giving a remote script root privileges on my machine. I had to open the script to determine what needed to be installed. The only dependency that needs root privileges to install are the ruby gems and that is only if the user hasn't already installed rbenv or rvm. In this case the user may end up with permission problems because the install script does not check for those conditions before running `sudo gem install`.
Additionally this script falsely represents what it is doing with your system by executing code from a different URL than the URL that the user types in. For an example please see
https://github.com/NativeScript/nativescript-cli/blob/master/setup/native-script.rb#L5 and
https://github.com/NativeScript/nativescript-cli/blob/master/setup/native-script.rb#L11
This could be entirely avoided by simply listing what the requirements are instead of trying to automatically install them for the developer.
Answers:
username_1: Hi @username_0,
Thanks for bringing our attention to these issues. We have identified these and other similar issues with the setup scripts as well and will probably plan some time to improve this as part of our 3.2 release. Will definitely take your feedback into consideration and really appreciate your input.
Let me know if I can help with anything else!
Best,
Emil |
mvmike/min-cal-widget | 1100583204 | Title: the borders of the elements are bolder
Question:
username_0: It's now hard for me to see the settings, especially during the day in bright light. The elements are all gray and the borders between the elements are poorly separated. I ask you to make the borders thicker and bolder.

Status: Issue closed
Answers:
username_1: Updated the background (cleaner than the border, at least programmatically) here → 8af9d0f6ce9db587bcfb379006caf2c176b467ad

Will be available on next release. |
jalkoby/squasher | 251466061 | Title: Specifying MySQL Server Settings?
Question:
username_0: I'm getting:
```
Mysql2::Error: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
/Users/jonathan/.rbenv/versions/2.3.1/bin/bundle:23:in `load'
/Users/jonathan/.rbenv/versions/2.3.1/bin/bundle:23:in `<main>'
```
I don't have sockets open, but use a TCP/IP connection. Is there a way to pass parameters to squasher? I use dotenv - is squasher not picking up the config from dotenv?
Status: Issue closed
Answers:
username_1: Hi. Squasher requires a valid database.yml config, so at squashing time please add it, and when finished just remove the config file.
username_0: The issue is that my database.yml config file specifies ENV['DB_HOST'] variable that in development, is populated by the dotenv gem (https://github.com/bkeepers/dotenv). Squasher is loading up before .env not allowing the ENV['DB_HOST'] environment variable to be properly set. I had to manually load in the shell ```export DB_HOST='...'```. It is a simple enough workaround, but I think it is worth noting that there is a compatibility issue with dotenv and squasher.
username_1: yes, I understood your case the first time - I'm sorry to say, but there are no plans to add such workarounds. The reason is simple - there are many dotenv loaders (no commonly used solution) and I would have to spend a lot of time adding each of them and then maintaining them. The cost is too high for the end result |
Azure/acr-cli | 1051123960 | Title: az acr pack build is failing
Question:
username_0: **Describe the bug**
I tried to build the Spring Boot app using 'az acr pack build' and it fails halfway. Here is the error I'm getting.
```
Compiled Application: Contributing to layer
Executing mvnw --batch-mode -Dmaven.test.skip=true package
/workspace/mvnw: 219: /workspace/mvnw: cannot open /workspace/.mvn/wrapper/maven-wrapper.properties: No such file
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0Warning: Failed to create the file /workspace/.mvn/wrapper/maven-wrapper.jar:
Warning: No such file or directory
2 50710 2 1067 0 0 1304 0 0:00:38 --:--:-- 0:00:38 1304
curl: (23) Failed writing body (0 != 1067)
Error: Could not find or load main class org.apache.maven.wrapper.MavenWrapperMain
Caused by: java.lang.ClassNotFoundException: org.apache.maven.wrapper.MavenWrapperMain
```
Seems like downloading the wrapper using curl is failing for some reason.
**To Reproduce**
Steps to reproduce the behavior:
1. Create Spring boot project
2. az acr pack build -r acrjay -t **.azurecr.io/containerapp-build:0.0.1 --builder paketobuildpacks/builder:base --pull https://github.com/username_0/containerapp-demo.git
3.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Any relevant environment information**
- OS: [e.g. Windows]
- Version [e.g. v1.0.0]
**Additional context**
Add any other context about the problem here.
Answers:
username_0: Answering myself to help others. I worked around this issue by removing mvnw so that the buildpack doesn't need to download the wrapper, which is seemingly blocked by the runners. |
Azure/actions-workflow-samples | 865735133 | Title: cant run command: sudo apt-get python3.6-dev
Question:
username_0: Hi there,
I cannot deploy due to needing some C compiler files that are available in python3.6-dev.
You cannot pip install them, so you must apt-get them.
The Actions deployment complains about the stability of the CLI etc.
What is the correct way to get this installed so I can take care of the dependencies that keep breaking in my requirements.txt?
My yaml is as follows:
.github/workflows/master-appName(production).yml
```yaml
name: Azure App Service - appName(Production), Build and deploy Python app

on:
  push:
    branches:
      - master

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      # checkout the repo
      - name: 'Checkout Github Action'
        uses: actions/checkout@master

      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.6'

      - name: apt get python 3.6-dev and shap
        run: |
          sudo apt-get install python3.6-dev
          pip install shap

      - name: Build using AppService-Build
        uses: azure/appservice-build@v2
        with:
          platform: python
          platform-version: '3.6'
```
|
NLnetLabs/draft-toorop-dnsop-dns-catalog-zones | 1174302989 | Title: Clarify behavior when MUST condition is violated
Question:
username_0: More than one record in the RRset denotes a broken catalog zone which MUST NOT be processed (see (#generalrequirements)).
Does this mean that:
- Whole catalog zone update is ignored? (= whole content of IXFR is ignored?)
- Update for the single zone is ignored and rest of the zone is processed?
- Only the first PTR is processed (I hope not)? What if the order of the PTRs changes on the next AXFR?
Similarly for other MUSTs.
Motivation: it helps with interoperability in the long term. If it is not clearly specified, implementations might decide to treat it differently, and _then_ users ask weird questions like "implementation A is wrong because implementation B can handle this" etc.
Answers:
username_0: If this is violated, then...? Ignore the group but install the zone? Or ignore the zone? What if the zone is already present and a second group property is added? Keep the previous config? Or remove the zone? What if we keep the previous config, is it expected to persist between restarts (when the catalog zone retransfer will have the double group immediately on catalog zone load)?
selectize/selectize.js | 298321453 | Title: Refresh tags list after updating input field
Question:
username_0: Sorry if my question has already been asked, but is there a function to update the selectize tag list after updating my empty input field via AJAX?
I'm loading the selectize input with basic code:
```js
$('.meta').selectize({
  plugins: ['remove_button'],
  delimiter: ', ',
  persist: false,
  create: function(input) {
    return {
      value: input,
      text: input
    }
  }
});
```
But after refreshing the input field via AJAX, I want to split the words added into a tag list.
The value starts empty; I'm refreshing it with "A list, of different, tags" and I want to have my 3 tags shown.
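For illustration, a minimal sketch using selectize's `clear`, `addOption` and `addItem` methods, assuming `newValue` holds the refreshed string (the AJAX wiring itself is omitted):
```js
// Hypothetical refresh handler: `newValue` is e.g. "A list, of different, tags".
var selectize = $('.meta')[0].selectize; // the instance created above
selectize.clear(); // drop the current items
newValue.split(', ').forEach(function (word) {
  selectize.addOption({ value: word, text: word }); // register the tag
  selectize.addItem(word); // select it so it shows as a tag
});
```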
Thanks
Answers:
username_1: closing stale issues older than one year.
If this issue was closed in error please message the maintainers.
All issues must include a proper title, description, and examples.
Status: Issue closed
|
jlippold/tweakCompatible | 419497173 | Title: `LeaveMeAlone` working on iOS 12.1.2
Question:
username_0: ```
{
"packageId": "com.repo.xarold.com.leavemealone",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.repo.xarold.com.leavemealone",
"deviceId": "iPhone7,2",
"url": "http://cydia.saurik.com/package/com.repo.xarold.com.leavemealone/",
"iOSVersion": "12.1.2",
"packageVersionIndexed": false,
"packageName": "LeaveMeAlone",
"category": "Tweaks",
"repository": "Xarold Repo",
"name": "LeaveMeAlone",
"installed": "2.0.0",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.repo.xarold.com.leavemealone",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.4",
"shortDescription": "Adds the dnd icon under the date, and removes the annoying dnd banner on iOS 12",
"latest": "2.0.0",
"author": "Karimo299",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
onlinemusictools/interval-propmts | 771698950 | Title: Idea for another app
Question:
username_0: Hi Stoyan,
I came across your app https://www.onlinemusictools.com/kb/, and I have a suggestion for you: write an app where one can connect to a peer via WebRTC, both peers connect their pianos, and whatever notes player A plays get transmitted to B and sent to B's keyboard for playback (important!), so they can effectively play a duo with very good sound quality. Internet latency is small, at least within North America (10-20 ms); I know this because I once wrote a similar app. The main challenge is the latency of communication between a keyboard and a computer. On Windows, it's completely hopeless; on Mac, it may work; and I'm pretty sure it will work on tablets/phones. The WebRTC part can be implemented without a server, based on event handling of a cloud database (e.g. Firebase). If you are interested and would like to discuss, you can write to me directly at username_0 at gmail.
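As a rough illustration of the piano-to-piano idea, a minimal sketch using the Web MIDI API; the peer transport (e.g. a WebRTC data channel) is assumed and stubbed out here:
```typescript
// Assumed transport stubs, e.g. backed by a WebRTC data channel.
declare function sendToPeer(message: number[]): void;
declare function onPeerMessage(handler: (message: number[]) => void): void;

async function startDuo(): Promise<void> {
  const midi = await navigator.requestMIDIAccess();

  // Forward every note the local player plays to the remote peer.
  for (const input of midi.inputs.values()) {
    input.onmidimessage = (e) => {
      if (e.data) sendToPeer(Array.from(e.data));
    };
  }

  // Play whatever the peer sends on the local instrument, so playback
  // happens on B's own keyboard rather than through speakers.
  const output = [...midi.outputs.values()][0];
  onPeerMessage((message) => output?.send(message));
}
```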
(I'm sending this as an "issue" because I don't know your email address).
Thanks, Alex |
nwjs/nw.js | 312840317 | Title: Angle Platform initialization failed
Question:
username_0: NWJS Version: v0.25.4-win-x64
Operating System: Windows 10 64 bit
### Expected behavior
We are launching the NW app with the following parameters:
"--enable-logging --v=1 --trace-startup --trace-startup-file=C:\\CustomerLive\\logs\\zapp\\trace.log"
On launching it, we generally get a command prompt with Elevation Type 3 written on it.
### Actual behavior
On one of the Windows 10 devices, upon launching the app we see the following error in the command prompt:
"Angle Platform initialization failed"
Though the app runs fine, it would be good to know why this exception occurs and whether it has any impact.
Screenshot:-

### How to reproduce
We do not have the exact steps to reproduce, as it is happening on only one of our devices.
Answers:
username_1: Thanks for reporting. Does the error message appear only when you use those arguments?
username_0: Yes. Upon launching the app without any parameters we do not see this. Though this is not causing us any functional issues as of now, we would like to know if there can be any side effects of this.
username_1: Did you see any error msgs with `eglInitialize`?
username_0: I only see the mentioned error in the console. Attaching the chromium debug logs for more info.
[trace.log](https://github.com/nwjs/nw.js/files/1902816/trace.log) |
Chocohead/EU-MJ-Engine | 349760237 | Title: Change dependencies
Question:
username_0: Hello,
I would like to use this mod, but I have modified BuildCraft and now BC has another version (7.99.18-pre1-SNAPSHOT). This means that I can't use this mod because it requires BC version 7.99.17. Is there some way to change the dependencies? I tried changing it in fml_cache_annotation.json but nothing happens.<issue_closed>
Status: Issue closed |
danvk/dygraphs | 207879787 | Title: Synchronizer on graphs with different sets of x-coords
Question:
username_0: I'm using dygraphs 2.0.0 and I ran into a problem using extras/synchronizer.js. If I'm synchronizing graphs with different sets of x-values, then the highlights don't stay in sync. I wasn't sure how to create a fiddle for this since it requires synchronizer.js which isn't hosted on cdnjs.com.
I believe the problem occurs because synchronizer.js does this inside `highlightCallback`:
```
var idx = gs[i].getRowForX(x);
..
gs[i].setSelection(idx, seriesName);
```
I put in a workaround which does this instead:
```
var canvasCoords = this.eventToDomCoords(event);
var canvasx = canvasCoords[0];
var canvasy = canvasCoords[1];
var closest = gs[i].findClosestPoint(canvasx, canvasy);
if (closest.row !== null) {
  gs[i].setSelection(closest.row, seriesName);
}
```
This works to synchronize the graphs by selecting the closest point to the mouse instead of requiring that the exact same x-coordinate exist in all synchronized graphs. |
berndt1187/murder-mystery | 391434820 | Title: Great use of tests!
Question:
username_0: https://github.com/berndt1187/murder-mystery/blob/0c7ed7bab27308b10d4ffa791024cba567ceadd8/Murder%2BMystery.ipynb#L88-L91
You are great at using tests to develop and test your code. This is the proper way to check your code to see if it's creating the expected results! Good work! |
udos86/ng-dynamic-forms | 339324416 | Title: Being able to set focus on a control
Question:
username_0: ## I'm submitting a
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Bug / Regression
[x] Feature Request / Proposal
[x?] Question
</code></pre>
## I'm using
<!-- Check one of the following options with "x" -->
<pre><code>
NG Dynamic Forms Version: `X.Y.Z`
[x] Basic UI
[ ] Bootstrap UI
[ ] Foundation UI
[ ] Ionic UI
[ ] Kendo UI
[ ] Material
[ ] NG Bootstrap
[ ] Prime NG
</code></pre>
## Description
<!-- Describe your issue in detail -->
As the title says, it would be great if there were a way to set the focus on a FormControl, maybe in the same manner as the 'valueUpdates' Subject.
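For illustration, a hedged sketch of what such an API could look like, modeled on the `valueUpdates` Subject; every name below is hypothetical, not part of the library:
```typescript
import { Subject } from "rxjs";

// Hypothetical model extension: a `focusRequests` Subject that the
// rendering component subscribes to, mirroring `valueUpdates`.
class FocusableControlModel {
  readonly focusRequests = new Subject<void>();

  focus(): void {
    this.focusRequests.next();
  }
}

// Usage sketch: a component holding the native element could wire up
//   model.focusRequests.subscribe(() => inputRef.nativeElement.focus());
const model = new FocusableControlModel();
model.focusRequests.subscribe(() => console.log("focus requested"));
model.focus();
```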
This is not an uncommon case, and I can't see a way to do it with the current state of the library. |
umeng/MultiFunctionAndroidDemo | 364408118 | Title: isNeedAuthOnGetUserInfo has no effect
Question:
username_0: UMShareConfig config = new UMShareConfig();
config.isNeedAuthOnGetUserInfo(true);
UMShareAPI api = UMShareAPI.get(this);
This setting does not take effect: every time authorization is requested, if the previous authorization succeeded, the next one skips the third-party platform confirmation screen and directly calls back with the result.
Answers:
username_0: Version 6.9.3
username_1: Same question here
username_0: Why has the issue I reported received no reply, and why is it not addressed in the 6.9.4 changelog either?
username_2: 9.4.0 still has this problem
username_2: 7.1.5 still has this problem |
TheWaWaR/simple-http-server | 308286073 | Title: Will you switch to a different web server now that iron is no longer actively maintained?
Question:
username_0: Or is that too much work?
Answers:
username_1: I can do it, but it should compile on stable Rust. Any suggestions?
username_0: I think [nickel](https://github.com/nickel-org/nickel.rs) compiles on stable. You could also use [actix](https://github.com/actix/actix) if you want something else.
I'd help, but I'm still reading the Rust book :)
username_1: `actix` is great for me, I'll migrate to it.
username_2: Any update on this? :)
I'm looking forward to the migration to actix..
username_3: I was suggesting to switch simple http server over to a maintained framework, iron is also a framework |
vincaslt/vscode-highlight-matching-tag | 925453947 | Title: Feature Request: Support if-directives from AsciiDoc
Question:
username_0: It would be great if the `ifdef` and `endif` [directives from AsciiDoc](https://docs.asciidoctor.org/asciidoc/latest/directives/ifdef-ifndef/#ifdef) were supported.
These "blocks" are not defined by brackets or tags, but I think it would be useful to highlight the connection from `ifdef` to `endif`.
The blocks are defined from `ifdef::foo[]` (or `ifndef::foo[]`) to `endif`.
```
ifdef::foo[]
This content is for GitHub only.
endif::[]
```
Or `endif::[]` can include the same attribute as the beginning like `endif::foo[]`.
```
ifdef::foo[]
This is some Text.
And some more Text.
endif::foo[]
```
There is also `ifeval`:
You can use this to support multiple languages in one document.
And this is where this addon would provide better readability.
```
ifeval::["{lang}" == "de"]
Das ist mein Text.
endif::[]
ifeval::["{lang}" == "en"]
This is my text.
endif::[]
ifeval::["{lang}" == "es"]
Spanish text.
endif::[]
```
There are also one-liners, which I would not highlight.
```
ifdef::foo[This is some text]
```
Answers:
username_1: This is out of scope
Status: Issue closed
|
danganronpa-online/public-gmod | 1000284459 | Title: Corpse detection radius
Question:
username_0: In GitLab by @Megu on Sep 1, 2021, 16:13
The corpse detection radius seems too large to me. I had barely seen Hifumi's corpse, but everything had already gone dark before my eyes. I think it should be made smaller so that there are no mishaps with discovery, for example in the dark. (it is located to the right of the AV door) 
Answers:
username_0: In GitLab by @Megu on Sep 1, 2021, 16:13
assigned to @Megu
username_0: In GitLab by @Megu on Sep 1, 2021, 16:13
changed the description
username_0: In GitLab by @fin on Sep 3, 2021, 14:57
unassigned @Megu
username_0: In GitLab by @fin on Sep 5, 2021, 24:16
assigned to @fin
Status: Issue closed
|
ifzhang/FairMOT | 836834599 | Title: how the re-identification branch works
Question:
username_0: Thank you very much for your excellent work. I hope you can give me some advice on something I can't understand.
I don't understand how the re-identification branch works, or whether tracking only uses DeepSORT for ID association. Thank you, I really need your help. |
data2health/contributor-role-ontology | 473116612 | Title: review content in CRO website
Question:
username_0: the website is done and ready for review
Answers:
username_1: I just went through all pages and made a few edits and updates. The next time we talk, let's have a 5 min discussion to confirm that everyone is ok with the approach I took on the citation, acknowledging the project as the author (since people are listed on the project pages).
Status: Issue closed
|
coderKevinKim/diablo_omaju | 655333338 | Title: picture
Question:
username_0: 
Status: Issue closed
Answers:
username_0:  |
go-co-op/gocron | 792689424 | Title: [BUG] - `StartAt` in the past causes next execution time to calculate based on `Now`
Question:
username_0: ### Using `StartAt` with a date in the past doesn't behave as expected
If I am creating a scheduled job and use `StartAt` with a date in the past, I expected the next execution time to be calculated based on that start date and the defined interval, but instead it uses the current time and the interval.
**I am not sure if that is a bug or intended behavior** so I chose to open this issue before creating a PR. If you support this change, I can go ahead and do the PR myself.
### To Reproduce
Steps to reproduce the behavior:
```go
package main

import (
    "fmt"
    "time"

    "github.com/go-co-op/gocron"
)

func task() {
    fmt.Println("EXECUTING")
}

func main() {
    scheduler := gocron.NewScheduler(time.Local)

    job, _ := scheduler.
        Every(24).
        Hours().
        StartAt(time.Now().Add(-24 * time.Hour).Add(10 * time.Minute)).
        Do(task)

    scheduler.StartAsync()
    fmt.Println(job.NextRun())
}
```
### Version
`v0.5.1`
### Expected behavior
The expected output of the code above would be today, at the current time + 10 minutes. Instead, it is 24 hours from the current time which does not fit in the configured schedule.
### Additional context
My use-case here is an [automated garden](https://github.com/username_0/automated-garden) where I have a `Plant` struct containing a `StartDate` to record when it was first planted and serve as the starting point for my watering interval. I want to be able to restart my application, read the `StartDate` from a database or configuration file and run the scheduled jobs based on that instead of based on when the program started.
I implemented a workaround in my code but wanted to have the opportunity to include it in `goCron` if that is a desirable feature/fix.
Answers:
username_1: @username_0 thank for opening this issue!
I would say it’s currently working as designed. But, I don’t believe we specifically said that a date in the past would result in this behavior.
The name StartAt doesn’t imply relative to anything, so I am ok having it handle times in the past.
How would the implementation you’re considering calculate the next run?
username_0: Originally I was thinking about changing [this line](https://github.com/go-co-op/gocron/blob/master/scheduler.go#L136) to set `lastRun = job.startAtTime`, but realized this won't work if the start time is more than one interval in the past. Instead, we would have to loop and use `durationToNextRun` until the next run time is in the future. This might not be ideal if the `StartAt` is far in the past though.
username_1: Perhaps before that line, we could add an if check:
```golang
// scheduleNextRun Compute the instant when this Job should run next
func (s *Scheduler) scheduleNextRun(job *Job) {
now := s.now()
lastRun := job.LastRun()
if job.neverRan() {
+ if job.startAtTime.Before(s.time.Now()) {
+ // override the `startAtTime` value to the next appropriate future time
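+ // A hypothetical sketch for a simple interval job (not library code):
+ // advance startAtTime by whole intervals until it lies in the future,
+ // keeping the schedule anchored to the original start time.
+ interval := s.durationToNextRun(job.startAtTime, job)
+ for job.startAtTime.Before(now) {
+     job.startAtTime = job.startAtTime.Add(interval)
+ }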
+ }
lastRun = now
}
durationToNextRun := s.durationToNextRun(lastRun, job)
job.setNextRun(lastRun.Add(durationToNextRun))
job.setTimer(time.AfterFunc(durationToNextRun, func() {
s.run(job)
s.scheduleNextRun(job)
}))
}
```
Status: Issue closed
username_1: This is released in [v0.6.0](https://github.com/go-co-op/gocron/releases/tag/v0.6.0)! |
expo/expo | 651690580 | Title: Jest-Expo 38 Breaks Jest tests
Question:
username_0: ## 🐛 Bug Report
### Summary of Issue <!-- (just a few sentences) -->
After using the expo upgrade command to upgrade from SDK 37 to 38, all my Jest tests broke with the following error. Only reverting jest-expo to 37 fixed the issue.
```
Test suite failed to run
TypeError: this._getEventHandlerFor is not a function
  at Object.get (node_modules/jsdom/lib/jsdom/living/helpers/create-event-accessor.js:86:26)
  at Array.forEach (<anonymous>)
```
Answers:
username_1: I’m facing the same problem
username_2: We only saw the issue when using the --runInBand switch (used for circle ci build, local dev did not use the switch).
A workaround for us was to replace --runInBand with --maxWorkers=2. We also increased our build machine size on CircleCI; more memory was needed to run the tests on the build machine with v38.
Note: if you set --maxWorkers=1 we see the same errors as with --runInBand
username_1: @username_2 didn’t work for me. The issue that I got when I run `yarn test` is this one:
```
FAIL src/screens/sign-in/__tests__/forgot-password.test.tsx
● Test suite failed to run
TypeError: (0 , _jestUtil(...).tryRealpath) is not a function
at ScriptTransformer.transformSource (node_modules/jest-runtime/node_modules/@jest/transform/build/ScriptTransformer.js:424:50)
``` |
cms-patatrack/cmssw | 436136932 | Title: Check the impact of building for cm_75
Question:
username_0: Currently we build all CUDA code for
- `sm_70` - Volta-based: Tesla V100, Titan V
- `sm_61` - Pascal-based: Titan Xp, GeForce GTX 1080 Ti, GeForce GTX 1080, etc.
- `sm_60` - Pascal-based: Tesla P100
According to the documentation, it should be possible to run the code compiled for `sm_70` unmodified also on `sm_75` devices - and in fact we do run on the Tesla T4, which is an `sm_75` device.
We should check if building explicitly for `sm_75` makes any difference on the T4.
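For context, building for an architecture corresponds to the `nvcc` `-gencode` flags below; this is a generic sketch, not the exact CMSSW/scram build configuration:
```
# current set
nvcc ... -gencode arch=compute_60,code=sm_60 \
         -gencode arch=compute_61,code=sm_61 \
         -gencode arch=compute_70,code=sm_70
# adding Turing explicitly
nvcc ... -gencode arch=compute_75,code=sm_75
```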
Answers:
username_0: With a Tesla T4, running with the current [HEAD](f0b2b572542fa340c024c7b50ef622087fea5b3e) over TTbar MC events, I get ...
### with CMSSW built for `sm_70`
```
2 CPUs:
0: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz (16 cores, 32 threads)
1: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz (16 cores, 32 threads)
1 visible NVIDIA GPUs:
0: Tesla T4 (UUID: GPU-0e44f21a-ab43-7bf2-08d7-8cf1c32dd05b)
Warming up
Running 10 times over 4200 events with 1 jobs, each with 8 threads, 8 streams and 1 GPUs
728.3 ± 0.3 ev/s (4000 events)
726.2 ± 0.3 ev/s (4000 events)
727.9 ± 0.4 ev/s (4000 events)
725.1 ± 0.3 ev/s (4000 events)
725.3 ± 0.3 ev/s (4000 events)
725.8 ± 0.3 ev/s (4000 events)
726.7 ± 0.3 ev/s (4000 events)
726.0 ± 0.3 ev/s (4000 events)
724.0 ± 0.3 ev/s (4000 events)
722.4 ± 0.4 ev/s (4000 events)
--------------------
725.8 ± 1.7 ev/s
```
### with CMSSW built for `sm_75`
```
2 CPUs:
0: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz (16 cores, 32 threads)
1: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz (16 cores, 32 threads)
1 visible NVIDIA GPUs:
0: Tesla T4 (UUID: GPU-0e44f21a-ab43-7bf2-08d7-8cf1c32dd05b)
Warming up
Running 10 times over 4200 events with 1 jobs, each with 8 threads, 8 streams and 1 GPUs
726.4 ± 0.4 ev/s (4000 events)
726.1 ± 0.4 ev/s (4000 events)
726.0 ± 0.3 ev/s (4000 events)
725.3 ± 0.3 ev/s (4000 events)
722.6 ± 0.3 ev/s (4000 events)
724.5 ± 0.3 ev/s (4000 events)
725.5 ± 0.3 ev/s (4000 events)
724.1 ± 0.4 ev/s (4000 events)
723.1 ± 0.3 ev/s (4000 events)
723.3 ± 0.4 ev/s (4000 events)
--------------------
724.7 ± 1.4 ev/s
```
So far, building specifically for `sm_75` does not seem to have any impact.
Status: Issue closed
|
w3c/wot-marketing | 626547423 | Title: Consider Submission to API Specifications Conference
Question:
username_0: Let's consider submitting a talk proposal to the [API Specifications Conference](https://sessionize.com/api-specifications-conference-2020/). Call open/close: June 8/June 29.
A couple of paragraphs are needed... Ege to do an initial draft in the comments below and/or a PR, then we can discuss.
Answers:
username_1: # API Specifications Conference CfS
Proposal: Talk
Speaker: <NAME>
## Session description
As a member of the W3C Web of Things Working Group, I will be presenting the recently published Thing Description (TD) W3C Recommendation.
TD is a way to describe the network-facing APIs of physical (or not) IoT devices (Things).
It proposes a protocol-independent approach, can also be used to describe already existing devices such as Philips Hue, (more here), and can be used in different industries such as smart home, industrial automation, smart cities and more.
I will also show what you can do with TDs, such as Mashups for IoT devices, device specific UI generation, automated testing and simulation.
## Notes to organizer
The outline will be as follows:
- Introducing the need to build cross-ecosystem applications in the IoT
- Scope of the standardization efforts by the W3C Web of Things Working Group
- Thing Description Recommendation
- WoT Mashups
- TD based automation: UI generation, automated testing, simulation
username_2: additional devices could be IKEA smart home devices that are based on LWM2M.
username_2: it would be cool if you showed some live demos like Node-RED or the TD playground
username_1: I have updated my comment regarding the submission. @username_2 and @username_0 could you check it? The submission deadline is 01.07.2020
username_1: Now there is a more official website: https://events.linuxfoundation.org/openapi-asc/
username_1: Submitted and below is a screenshot of the submission

username_3: F.Y.I. I appreciate it if you also consider submitting CfP for the Node-RED online conference on October 10th. CfP is opened until August 31st.
https://nodered.jp/noderedcon2020/index-en.html
username_1: I think that this would be a good idea as well! However, since most of the Node-RED node-gen (TD based) was done by @username_4 , I think it would be more fair that he presents it. Any opinion?
username_3: It sounds good! Welcome.
username_1: The submission is accepted!
username_4: My submission to Node-RED Con Tokyo is also accepted.
(The presentation will be given in Japanese)
username_1: Closing since the presentation has happened! :)
Status: Issue closed
|
pivpn/pivpn | 456050500 | Title: Installer removes .pivpn directory mid-install (octopi)
Question:
username_0: <!--
# PiVPN Issue Template
PLEASE READ THIS TEMPLATE CAREFULLY BEFORE OPENING AN ISSUE!
Any Issue opened that doesn't follow this template will be removed.
Hi, you are about to open a new issue, Please provide us with all the info required below, incomplete issues will decrease our effectiveness to troubleshoot your issue and increase the time we need to spend helping you out, or with your issue closed even if it is a legitimate issue. Please remember we do not have any super power that makes us guess exactly what your issue is without any decent details!
For any output requested below, you may alternatively post it on http://pastebin.com and provide the Pastebin URL in its place
-->
## In raising this issue, I confirm the following:
`{please fill the checkboxes, e.g: [X]}`
- [x] I have read and understood the [contributors guide](https://github.com/pivpn/pivpn/blob/master/CONTRIBUTING.md).
- [x] The issue I am reporting can be *replicated*.
- [x] The issue I am reporting *is* directly related to the pivpn installer script.
- [x] The issue I am reporting isn't a duplicate (see [FAQs](https://github.com/pivpn/pivpn/wiki/FAQ), [closed issues](https://github.com/pivpn/pivpn/issues?q=is%3Aissue+sort%3Aupdated-desc+is%3Aclosed), and [open issues](https://github.com/pivpn/pivpn/issues?q=is%3Aissue+sort%3Aupdated-desc+is%3Aopen)).
<!-- If the install failed: can you please copy-paste the console output after running `curl install.pivpn.io | bash` between the backticks -->
<!-- Please explain your issue. Feel free to format your text -->
### Issue
1. Install octopi on RPi 3B+
2. Follow workaround for `git clone` #278
3. `ls -a /etc` ... .pivpn is present
4. `sudo su; curl -L https://install.pivpn.io | bash`
5. The installer opens and gets through a few steps, then fails, somehow removing `/etc/.pivpn` and then making it known.
The user installing is a sudoer. The default user `pi` has been locked using `sudo passwd --lock pi`. I select the user to be the secondary when prompted in the installer menu. The static IP settings have already been configured.
### Have you searched for similar issues and solutions?
(yes/no / which issues?)
Manual git clone, i.e. #278 . `sudo su; /usr/bin/git clone https://github.com/pivpn/pivpn.git /etc/.pivpn`
### Console output of `curl -L https://install.pivpn.io | bash`
```
$ sudo curl -L https://install.pivpn.io | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 178 100 178 0 0 393 0 --:--:-- --:--:-- --:--:-- 392
100 55086 100 55086 0 0 68981 0 --:--:-- --:--:-- --:--:-- 68981
:::
::: sudo will be used for the install.
::: Verifying free disk space...
:::
::: Checking apt-get for upgraded packages.... done!
:::
::: Your system is up to date! Continuing with PiVPN installation...
::: Static IP already configured.
::: Using User: alsdkjfasldjfk
:::
::: Checking for existing base files...
::: Checking /etc/.pivpn is a repo...::: Updating repo in /etc/.pivpn...shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
main: line 565: cd: /etc/.pivpn: No such file or directory
```
### Console output of `pivpn debug`
```
-bash: pivpn: command not found
418: I'm a teapot.
```
### Have you taken any steps towards solving your issue?
```
Rebuild OS, retried install process as described.
```
Answers:
username_1: I don't have a proper fix, nor even a successful bodge-fix at this point, but I've been running into this same problem this evening and I do have some observations.
The installer script (the thing fetched by curl from install.pivpn.io), in its function "clone_or_update_repos", calls the other function "getGitFiles", which either calls "update_repo" or "make_repo". The function "make_repo" ALWAYS deletes the folder where it wants to re-create the git repository clone (to wit, /etc/.pivpn/) before re-cloning the git repo; the function "update_repo" can be told NOT to delete-and-reclone **if** the undocumented "--reconfigure" switch was passed to the install script.
(Of course, the attempt it makes to re-clone after deleting doesn't work on Octopi because that interferes with `sudo git` commands - net result, the folder just got baleeted.)
I have tried downloading the installer script (rather than the official "pipe it into bash" approach) after doing the manual git-clone as described on the other Issue, running it with `sudo ./pivpn-installer.sh --reconfigure` (which does appear to re-do the "installing things phase").
The install (done that way) appeared to work, I could get an oVPN client to reach it from the local LAN (changing the external hostname reference in the generated .ovpn config file to the relevant local IP), but it wasn't playing ball from external. So it's possible that trying to do an "install without messing up /etc/.pivpn/" by passing the --reconfigure switch in somehow-doesn't-quite-work, or it's entirely possible that there's something else non-pivpn-related in my environments which is messing things up for me. Maybe it's worth a try?
I have run out of energy to try to figure this particular stuff out this evening and I might just install it on a different rPi in my fleet of far-too-many tomorrow. The ramblings above may or may not help you bodge your way to a successful install - but I can be pretty confident in saying that the root cause of our problem here is Octopi playing mother-knows-best with `sudo git`.
username_1: OK, here's a bodge workaround which seems to work properly and is least-messy. These steps are not heavily-optimised but should hopefully be helpful. This approach **does not require** the manual git-clone procedure - instead, we just patch the installer script so it works properly with Octopi's silly quirk.
Rather than using the standard one-liner install, download the installer script and rename it (and make it executable):
1. `wget https://install.pivpn.io` (downloads the script, it ends up as a file called index.html)
1. `mv index.html pivpn-install.sh` (renames that index.html to something more sensible)
1. `chmod u+x pivpn-install.sh` (makes that file Executable, which we will need later)
Then edit the script, to change the key invocations of `git` to specifically use `/usr/bin/git` (which avoids Octopi's annoying thing) - here's hand-holding instructions for doing this with Nano:
1. `nano pivpn-install.sh` (opens the downloaded script in the nano editor)
1. Press `Ctrl+W` then type `make_repo()`, press Return (searches for that text string)
1. Cursor-down to the line which starts `$SUDO git clone ...` (immediately underneath the line `$SUDO rm -rf "${1}"`)
1. Change it so it starts `$SUDO /usr/bin/git clone ...` (adding "/usr/bin/" in front of "git")
1. Move further down the file to the function "update_repo()"
1. Find another line in that function starting `$SUDO git clone ...` (again, immediately underneath the line `$SUDO rm -rf "${1}"`)
1. Change that line so it also starts `$SUDO /usr/bin/git clone ...` (adding "/usr/bin/" in front of "git")
1. Press `Ctrl+X` to exit Nano, typing a `y` when it asks if you want to save changes.
Now, execute that locally-edited version of the installer script - `sudo ./pivpn-install.sh`
The script will still delete /etc/.pivpn as before - but this time it should actually succeed in re-git-cloning the repository (thus re-creating and repopulating that folder), so it should run-through and complete "like normal".
Using the above process, I have managed to get PiVPN installed and working on my OctoPi setup, so hopefully it will also work for others.
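For anyone who would rather script the edit than do it by hand in nano, the same two changes can be applied with sed. A rough sketch (assuming the installer still contains the `$SUDO git clone` invocations described above; run the grep to verify before relying on it):
```sh
wget -O pivpn-install.sh https://install.pivpn.io
chmod u+x pivpn-install.sh
# Prefix the git invocations with the full path, sidestepping OctoPi's sudo-git wrapper
sed -i 's|\$SUDO git clone|\$SUDO /usr/bin/git clone|g' pivpn-install.sh
# Sanity-check that the occurrences were rewritten
grep -n '/usr/bin/git clone' pivpn-install.sh
sudo ./pivpn-install.sh
```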
username_0: Will try this later today and provide an update. I truly appreciate you taking the time to take a deeper look at it. From the other thread I read it seems like the octopi build is just... special.
username_1: I literally ran in to the same situation as you had, about 12-ish hours after you had opened this thread. I wasn't solving your problem for you, I was solving my problem for me! Sharing what I initially-found, and then the workaround that fixed it for me, is just being a good internet-neighbour.
My best guess on why Octopi blocks `sudo git` is that maybe their project has had issues with people reflexively doing sudo for everything even when it's not required or appropriate, causing needless support issues for them, so they put in a block. That works ok within their range of expected use-cases, but when trying to use a script which presumes normal Linux behaviour (where tools do exactly what they are told without so much second-guessing) then Things Don't Work Properly.
My workaround just takes the little grain of knowledge from the other Issue (that `sudo /usr/bin/git` is NOT blocked) and puts it into the install script.
username_2: thanks @username_1, your workaround did work for me. Not on the first attempt, but trying 2 or 3 consecutive times and selecting "reconfiguration" instead of "update" in the setup worked for me :D
username_3: This is the main reason why octopi doesn't work correctly with PiVPN: https://github.com/guysoft/OctoPi/issues/373
and the main reason why we didn't bother much to support it; however, I see some changes have been made and this might fix your issues.
https://github.com/guysoft/OctoPi/pull/374
I will be adding this to our wiki for future reference.
If any of you can confirm whether this solves your issue, that would be great; then we can close this chapter.
Status: Issue closed
|
opnsense/core | 238308343 | Title: [Feature][Proxy] Add support for Windows updates and Debian package caching
Question:
username_0: With the right configuration, Squid is able to cache Windows updates [1] and Debian packages [2]. This requires to add custom refresh_pattern directives to the Squid configuration file which is currently not possible via the web interface. Manual edits to the Squid configuration files do not survive a reboot.
Maybe it is sufficient to include a free text field with configuration options that get added to the configuration file.
Making the Squid cache persistent over a reboot would be a plus - as far as I have observed, a reboot empties the Squid cache /var/squid/cache.
[1] http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
[2] https://www.midnightfreddie.com/using-squid-to-cache-apt-updates-for-debian-and-ubuntu.html
Answers:
username_1: @username_0 you can easily add custom hooks (which survive reboots) using pre-auth and post-auth config locations, defined here https://github.com/opnsense/core/blob/17.1.8/src/opnsense/service/templates/OPNsense/Proxy/squid.conf#L273 and https://github.com/opnsense/core/blob/17.1.8/src/opnsense/service/templates/OPNsense/Proxy/squid.conf#L283 .
It could be an idea to see what we miss in terms of features and what's needed to configure both options, but we won't add plain text fields to input configuration data directly, for both security and validation reasons.
username_2: I agree, we should take those two up as checkboxes, they come up quite frequently
username_0: Thank you. Added config files with refresh patterns to the post-auth folder and acls for manager access from the local network to the pre-auth folder. `squid -k parse` shows that this is fine. Was I supposed to find this in the documentation?
Could you please help me understand the security and validation issues the plain text fields would pose? In my view, the only difference between adding one's own Squid directives by file instead of via the UI is that the former creates a hurdle while the latter requires at least some sophistication on the part of the firewall admin.
In any case, making the option for custom hooks visible in the UI would be a plus.
username_1: The documentation needs work on this subject, we just can't do all at once.
Raw config data can't be validated, which could lead to broken configurations; that's one of the main reasons why we don't support plain text fields to inject config data. We try to keep things user-friendly and as consistent as possible.
When possible we keep the option open for advanced users using plugins and hooks like the one described here (it's not very hard to copy a file to the firewall).
If you share your configuration, we could look into adding the feature using a simple checkbox, which would benefit more users.
username_0: Thanks for the explanation. I wanted to share some documentation anyway after thorough testing.
I am yet unsure whether my config for caching debian packages and windows updates actually works. This needs some investigation after the next patchday and having a close look into the cache. The post-auth hooks currently look like this:
package-cache.conf
```
refresh_pattern deb$ 0 20% 4320 refresh-ims
refresh_pattern udeb$ 0 20% 4320 refresh-ims
refresh_pattern Packages\.bz2$ 0 20% 4320 refresh-ims
refresh_pattern Sources\.bz2$ 0 20% 4320 refresh-ims
refresh_pattern Release\.gpg$ 0 20% 4320 refresh-ims
refresh_pattern Release$ 0 20% 4320 refresh-ims
```
update-cache.conf
```
# http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 129600 reload-into-ims
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 129600 reload-into-ims
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 129600 reload-into-ims
```
The pre-auth hook for accessing the cache manager from LAN works. It looks as follows:
manager.conf
```
# http://wiki.squid-cache.org/Features/CacheManager
acl managerAdmin src 192.168.31.0/24
http_access allow managerAdmin manager
```
The latter is well-suited to be exposed through the web interface (just an input field for an IP address or network). But that might be too special a case for consideration.
username_0: Update: after some weeks I can confirm that windows updates end up in the cache and stay in the cache given the aforementioned update-cache.conf
username_1: @username_0 thanks for reporting back, as soon as I can find some time I will see if I can add the checkboxes.
username_3: I am not sure if this is the right place, but I wrote a Windows update caching proxy which works with Squid as a cache_peer:
http://www1.ngtech.co.il/wpe/?page_id=301
It runs on systems with 10Gbps+ so... pretty nice.
Is it helping?
username_0: @username_3 That would make a nice Update Accelerator plugin like they have in IPFire which I used before turning to OPNSense.
username_4: @username_1 Arch Linux packages have the suffix `.*\.pkg\.tar\.xz` and Red Hat (like) systems use `.*\.d?rpm` (d means delta update). They should be appended to the list of @username_0. Probably use the same settings as for the Debian packages.
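A sketch of the corresponding post-auth snippet (untested, just mirroring the Debian patterns above):
```
refresh_pattern pkg\.tar\.xz$ 0 20% 4320 refresh-ims
refresh_pattern d?rpm$ 0 20% 4320 refresh-ims
```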
username_4: @username_1 I'll add it
username_3: @username_1 You might also want to use a StoreID helper to de-duplicate different servers for updates.
Has anyone written a helper for this?
username_3: @username_0 Well, I will need to understand how it can be integrated on FreeBSD, since it needs a cron job running every now and then (every 5 minutes) and also an hourly clean-up of old cached files.
Something like `find /path -type f -mtime +30 | xargs rm` (remove cached files older than X days).
username_0: @username_3 For reference the links to IPFire's (FreeBSD) update accelerator documentation [here](https://wiki.ipfire.org/en/configuration/network/proxy/update_accelerator) and [here](https://wiki.ipfire.org/en/configuration/network/update-booster) and the [source](https://github.com/ipfire/ipfire-2.x/tree/master/src). IPFire removes outdated files, files without source and old files (all optional, based on configuration).
Status: Issue closed
username_1: @username_0 @username_3 can you test the patches contributed by @username_4 https://github.com/opnsense/core/commit/8334db3a7b0478d473984b594a64b44513bf57a2 ?
username_0: @username_1 This means removing my pre-auth and post-auth hooks and running
`opnsense-patch -c core 8334db3a7b0478d473984b594a64b44513bf57a2`
Right?
username_1: @username_0 yes that should do the trick
username_0: @username_1 the patch did the job - I have removed the manually entered patterns and checked the boxes.
`squid -k parse`
shows
```
2017/10/29 17:52:45| Processing: refresh_pattern pkg\.tar\.xz$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern d?rpm$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern deb$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern udeb$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern Packages\.bz2$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern Sources\.bz2$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern Release\.gpg$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern Release$ 0 20% 4320 refresh-ims
2017/10/29 17:52:45| Processing: refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 129600 reload-into-ims
2017/10/29 17:52:45| Processing: refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 129600 reload-into-ims
2017/10/29 17:52:45| Processing: refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 129600 reload-into-ims
2017/10/29 17:52:45| Processing: refresh_pattern ^ftp: 1440 20% 10080
2017/10/29 17:52:45| Processing: refresh_pattern ^gopher: 1440 0% 1440
2017/10/29 17:52:45| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
2017/10/29 17:52:45| Processing: refresh_pattern . 0 20% 4320
```
Squid starts fine. Will observe caching behavior.
username_3: @username_0 thanks for the details, despite the fact that it's an OPNsense issue.
username_1: @username_0 thanks for confirming, I'll close this issue then.
Status: Issue closed
username_5: Hi,
Is there any sample full squid config for such a purpose as caching windows updates, maybe with the above snippets updated for windows 10 ? |
Nextcalibur/Bugtracker | 633763442 | Title: [NPC][Gauntlet] Shadow Labyrinth - Last boss Gauntlet
Question:
username_0: https://streamable.com/gxti7p how it is here
https://www.youtube.com/watch?v=bmvNzUMsuf8 How it is on NW , 1:02:55
https://www.youtube.com/watch?v=9aLlWZdHa74 22:00
There's a lot of RP in this gauntlet and I will try to go into as much detail as possible.
Some of the general issues:
1. The NPCs don't cast on Murmur as they should
2. The NPCs don't move around as they should
3. The NPCs are not hit by Murmur, which should kill some and knock down others (worth noting they don't get the knockback effect after being pulled by players, as far as I can tell)
How the gauntlet should go:
When you open the door:
[22:12:09]Selected: [Cabal Summoner]
[22:12:09]SpawnId: 66845 Entry: 18634 Spawned Entry: 20648 (Heroic)
Should Die by Murmur.
[22:12:47]Selected: [Cabal Executioner]
[22:12:47]SpawnId: 66813 Entry: 18632 Spawned Entry: 20642 (Heroic)
[22:12:49]Selected: [Cabal Executioner]
[22:12:49]SpawnId: 66815 Entry: 18632 Spawned Entry: 20642 (Heroic)
These are located on the right and left of the entrance to the last room; they should move to the middle and be killed by the players (linked to each other).
https://www.youtube.com/watch?v=bmvNzUMsuf8 1:02:54 shows it on NW
Then the 2nd pack should consist of 1 Spellbinder, 2 Executioners and 1 Summoner.
I will assume these are the ones that should form a pack in the middle:
[22:32:55]Selected: [Cabal Executioner]
[22:32:55]SpawnId: 66816 Entry: 18632 Spawned Entry: 20642 (Heroic)
[22:32:55]DisplayId: 18595 (Native: 18595).
[22:32:56]Selected: [Cabal Executioner]
[22:32:56]SpawnId: 66814 Entry: 18632 Spawned Entry: 20642 (Heroic)
[22:32:57]Selected: [Cabal Summoner]
[22:32:57]SpawnId: 66846 Entry: 18634 Spawned Entry: 20648 (Heroic)
[22:32:58]Selected: [Cabal Spellbinder]
[22:32:58]SpawnId: 66881 Entry: 18639 Spawned Entry: 20647 (Heroic)
They are not positioned properly (they should move into position when the door opens).
Which means these NPCs should now be killed by Murmur in this gauntlet, as they are not part of the trash players should kill:
[22:34:53]Selected: [Cabal Spellbinder]
[22:34:53]SpawnId: 66886 Entry: 18639 Spawned Entry: 20647 (Heroic)
[22:35:02]Selected: [Cabal Summoner]
[22:35:02]SpawnId: 66849 Entry: 18634 Spawned Entry: 20648 (Heroic)
[22:35:07]Selected: [Cabal Spellbinder]
[22:35:07]SpawnId: 66883 Entry: 18639 Spawned Entry: 20647 (Heroic)
[22:35:07]DisplayId: 18603 (Native: 18603).
[22:35:08]Selected: [Cabal Summoner]
[22:35:08]SpawnId: 66847 Entry: 18634 Spawned Entry: 20648 (Heroic)
[22:35:48]Selected: [Cabal Summoner]
[22:35:48]SpawnId: 66844 Entry: 18634 Spawned Entry: 20648 (Heroic)
As you can see in https://www.youtube.com/watch?v=bmvNzUMsuf8 at 1:07 and after, when they killed the 2nd pack plus what the summoner summoned, the 2nd line of trash is dead and the 5 NPCs I mentioned above are gone, killed by Murmur and not by players; moving on to the 3rd pack.
[Truncated]
[22:54:24]Selected: [Cabal Summoner]
[22:54:24]SpawnId: 66855 Entry: 18634 Spawned Entry: 20648 (Heroic)
[22:54:25]Selected: [Cabal Spellbinder]
[22:54:25]SpawnId: 66893 Entry: 18639 Spawned Entry: 20647 (Heroic)
[22:54:27]Selected: [Cabal Summoner]
[22:54:27]SpawnId: 66853 Entry: 18634 Spawned Entry: 20648 (Heroic)
[22:54:27]DisplayId: 18605 (Native: 18605).
[22:54:30]Selected: [Cabal Spellbinder]
[22:54:30]SpawnId: 66892 Entry: 18639 Spawned Entry: 20647 (Heroic)
[22:54:32]Selected: [Cabal Summoner]
[22:54:32]SpawnId: 66854 Entry: 18634 Spawned Entry: 20648 (Heroic)
[22:54:38]Selected: [Cabal Spellbinder]
[22:54:38]SpawnId: 66888 Entry: 18639 Spawned Entry: 20647 (Heroic)
[22:54:40]Selected: [Cabal Summoner]
[22:54:40]SpawnId: 66851 Entry: 18634 Spawned Entry: 20648 (Heroic)
Walk through video of the npc ID-s , locations, where they should move, behave etc.
https://streamable.com/kodzis |
electron/electron | 215405454 | Title: window.focus() not work
Question:
username_0:
* Electron version: 1.6.1
* Operating system: window 7 (64 bit)
### Expected behavior
When I call window.focus(), the window should get focused.
### Actual behavior
The window does not come into focus even though the focus event triggers.
### How to reproduce
**main.js**
```js
mainWindow = new BrowserWindow({ width: 100, height: 100, webPreferences: { nodeIntegration: true, sandbox: true } });
mainWindow.loadURL(`http://localhost:9000`);
```
**index.html**
```html
<button onclick="focusWin()">focus</button>
```
**app.js**
```js
var win = window.open("child.html", '', 'width=100,height=100,left=0,top=0');
function focusWin(){
  setTimeout(function(){
    win.focus();
  }, 100);
}
```
Answers:
username_1: Read this: https://github.com/electron/electron/blob/master/docs/api/browser-window.md#using-ready-to-show-event
username_0: thanks @username_1, but I use window.open(...) not new BrowserWindow(...)
username_1: @username_0
What is window? It's not clear from your code.
I think you should be able to use `ready-to-show` or `did-finish-load`
username_0: yes
username_1: maybe use this then? https://github.com/electron/electron/blob/master/docs/api/browser-window.md#parent-and-child-windows
not getting the idea behind using `window.open`
username_0: ok, can you tell me one thing?
how do I send a constructor to another window using BrowserWindow?
username_1: You should use https://github.com/electron/electron/blob/master/docs/api/ipc-main.md to communicate between windows. This way you can send data from parent to child or other way around.
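A rough sketch of the pattern (the channel name `'my-channel'` is made up; pick your own):
```js
// main process
const { ipcMain } = require('electron');
ipcMain.on('my-channel', (event, payload) => {
  // plain, JSON-serializable data arrives intact
  console.log(payload.name);
});

// renderer process (any window)
const { ipcRenderer } = require('electron');
ipcRenderer.send('my-channel', { name: '100100' });
```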
username_0: but how could I send a constructor object from one window to another? When I try to send a constructor object, IPC auto-serializes it.
username_1: what exactly are you trying to do?
username_0: I want to share a constructor with the child window, but IPC serializes the constructor.
username_1: No, I mean what are you "trying" to do. The point being: you are not doing it the Electron way, so you need to change that.
Are you trying to show specific data on child window? You can send them via IPC.
Are you trying to calculate something? You should do that in main and send result via IPC.
And so on...
username_0: I am not sending a JSON object or string. I want to send a constructor, e.g.:
```js
function ElectronFunc(){
  this.addEventListener = function(){
  };
  this.name = "100100";
}
var ef = new ElectronFunc();
```
When I send **ef** to another window using IPC, IPC deletes all methods of the constructor (like addEventListener), because IPC serializes the data before sending.
username_1: Why are you not using JS code per BrowserWindow? You can create an event listener in any window. Why are you trying to send it there? It doesn't make sense.
username_0: It makes sense because if I make another instance of this function, the older instance is auto-destroyed, so I create it only once and want to share it with the other window.
username_1: I think you need to provide a better code example, I am not able to understand what you are trying to accomplish. Maybe other people will understand better.
username_0: ok thanks @username_1
username_0: @username_2 Hi, I was following your [comment](https://github.com/electron/electron/issues/1865#issuecomment-249989894) about using the sandbox flag to access the complete window.opener and had a query regarding the same:
window.focus() is not working, please help me out.
username_2: @username_0 sandbox is still an experimental option, so there may be a lot of native APIs that don't work. For now I suggest that you use the preload script to replace `window.focus` by a function that calls `ipcRenderer` to delegate focusing to the main process.
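Roughly like this (a sketch; the channel name and window lookup are placeholders for your own wiring):
```js
// preload.js
const { ipcRenderer } = require('electron');
window.focus = () => ipcRenderer.send('delegate-focus');

// main process
const { ipcMain, BrowserWindow } = require('electron');
ipcMain.on('delegate-focus', (event) => {
  const win = BrowserWindow.fromWebContents(event.sender);
  if (win) win.focus();
});
```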
username_0: thanks @username_2, can u tell me one thing.
how do I send a constructor from one BrowserWindow to another BrowserWindow?
username_3: We are no longer implementing bugfixes for versions of Electron <= `1.7.x`, so i'm going to close this issue but if it is still persisting in more recent versions of Electron we can certainly reopen it!
Status: Issue closed
username_4: I'm pretty sure this is still true in 3.0.x
username_5: Thank you for taking the time to report this issue and helping to make Electron better.
The version of Electron you reported this on has been superseded by newer releases.
If you're still experiencing this issue in Electron v4.2.x or later, please add a comment specifying the version you're testing with and any other new information that a maintainer trying to reproduce the issue should know.
I'm setting the `blocked/need-info` label for the above reasons. This issue will be closed 7 days from now if there is no response.
Thanks in advance! Your help is appreciated.
Status: Issue closed
username_5: Thank you for your issue!
We haven't gotten a response to our questions in our comment above. With only the information that is currently in the issue, we don't have enough information to take action. I'm going to close this but don't hesitate to reach out if you have or find the answers we need, we'll be happy to reopen the issue. |
AbsaOSS/enceladus | 433839031 | Title: Rename Menas root package from rest to menas
Question:
username_0: ## Background
All of the modules have a root package with their corresponding name except for `Menas` which has `rest` as its root package.
## Feature
It would be nice if our modules were aligned in this.
Status: Issue closed |
h2non/nar | 125590788 | Title: Don't break if optional dependency is not available
Question:
username_0: Currently the following message is displayed:
Error: Cannot find module 'xxx' from 'xxx'
Dependencies that are marked as optional are usually not critical for the package.
Answers:
username_1: Which version of `nar` are you running? Optional dependencies are not supported anymore. See: https://github.com/username_1/nar/blob/master/src/create.ls#L430-L435
username_0: My mistake!
package.json is corrupted |
ionic-team/ionic-cli | 234240263 | Title: Ionic CLI update request should be just a warning
Question:
username_0: #### Short description of the problem:
Using Ionic in a CI context, the prompt "There's an update, do you want to install it?" is overwhelming. It blocks the script and prevents it from completing.
#### What behavior are you expecting?
I'm using Ionic in a CI environment with a builder which creates the _www_ folder dynamically and then launches `ionic build`. The request for the update, which waits for user input, keeps the script from proceeding and blocks the whole chain.
I think it would be fine to notify that an update is available, but it shouldn't block the command from running. Maybe it could be shown to the user at the end of the build process.
**Steps to reproduce:**
The request is the one you get when launching every Ionic command with @ionic/[email protected] installed.
Status: Issue closed
Answers:
username_1: Please see the README for CI servers: https://github.com/ionic-team/ionic-cli#cli-flags |
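In short, per that README the CLI can skip interactive prompts on CI boxes. A sketch (double-check the exact flag names against the linked README for your CLI version):
```sh
# most CI providers already set CI=true, which the CLI detects
CI=true ionic build
# or pass the flag explicitly
ionic build --no-interactive
```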
argoproj/argo-cd | 753540527 | Title: Argocd cannot create cluster resources even though my project has permission to do it
Question:
username_0: Checklist:
* [X] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
* [X] I've included steps to reproduce the bug.
* [X] I've pasted the output of `argocd version`.
**Describe the bug**
My application is not able to create **cluster resources** even though my linked project has permission to do it.
**To Reproduce**
1. Create a project with permission to create **cluster resources**.

2. Create an application with the configured project.
3. In the git repository add a template yaml for a persistent volume or cluster role. Example with persistent volume.
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-pvc
creationTimestamp: null
spec:
storageClassName: manual
volumeName: test-pv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
```
**Expected behavior**
Persistent volume should be created. Instead I get a **"Comparison error. ... can not be managed when in namespaced mode"**
<img width="1020" alt="Screen Shot 2020-11-30 at 10 46 23 AM" src="https://user-images.githubusercontent.com/49159808/100624116-6785ce80-32f9-11eb-9ae1-3682e0013a5d.png">
**Version**
```shell
argocd: v1.7.8+ef5010c.dirty
BuildDate: 2020-10-16T02:25:17Z
GitCommit: <PASSWORD>
GitTreeState: dirty
GoVersion: go1.15.2
Compiler: gc
Platform: darwin/amd64
argocd-server: v1.7.8+ef5010c
BuildDate: 2020-10-15T22:34:12Z
GitCommit: <PASSWORD>
GitTreeState: clean
GoVersion: go1.14.1
Compiler: gc
Platform: linux/amd64
Ksonnet Version: v0.13.1
Kustomize Version: {Version:kustomize/v3.6.1 GitCommit:<PASSWORD> BuildDate:2020-05-27T20:47:35Z GoOs:linux GoArch:amd64}
Helm Version: version.BuildInfo{Version:"v3.2.0", GitCommit:"<PASSWORD>", GitTreeState:"clean", GoVersion:"go1.13.10"}
Kubectl Version: v1.17.8
```
Answers:
username_1: Hmm, if there are any namespaces defined for your cluster (`Destinations -> Namespace`), only objects in those namespaces will be allowed to be managed by argo-cd.
Set it to `*` to allow all namespaces and global resources.
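For reference, a sketch of what that looks like in the AppProject spec (names are placeholders):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
  namespace: argocd
spec:
  destinations:
    - server: https://kubernetes.default.svc
      namespace: '*'   # allow all namespaces
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'        # allow cluster-scoped resources
```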
username_0: Hi, thanks for stepping in. But still the same error.
The yaml
```bash
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: test-pv
creationTimestamp: null
spec:
storageClassName: ""
capacity:
storage: 500Mi
accessModes:
- ReadWriteOnce
nfs:
path: /data/nfs/consul/
server: 10.128.15.211
persistentVolumeReclaimPolicy: Retain
```
The error:
```bash
ComparisonError
Cluster level PersistentVolume "test-pv" can not be managed when in namespaced mode
```


username_2: I think you have installed the name spaced version of Argo CD (i.e. `namespace-install.yaml`) - this does not allow for cluster scoped resources (or, any resources outside the desired namespace).
username_0: Ouch, that would be too bad. Nonetheless, I documented the setup for my team and kept a copy of the yamls I used. I used the following install.yaml:
```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
username_0: I don't mind giving the argocd's service account full privileges or something like that. Is there any workaround I can use to force the creation of cluster resources?
username_2: Digging a little further: have you restricted the namespaces of the destination cluster?
username_0: @username_2 thank you so much. That was the issue. Such a fool.
Awesome project, this will come in very handy for us. Thanks!
Status: Issue closed
|
SciML/DiffEqFlux.jl | 803629495 | Title: TensorLayer update
Question:
username_0: Currently, the documentation says:
https://github.com/SciML/DiffEqFlux.jl/blob/536112c664f8e3b9f75b9c64f05b71647ef001cc/src/tensor_product_layer.jl#L14
which doesn't reflect the reality:
https://github.com/SciML/DiffEqFlux.jl/blob/536112c664f8e3b9f75b9c64f05b71647ef001cc/src/tensor_product_layer.jl#L26
since it is neither initialized to zero nor can it be given by the user.
Thanks for the great package!
Status: Issue closed |
ocornut/imgui | 347003798 | Title: ImGUI isn't focusing on window/processing mouse clicks
Question:
username_0: **Version/Branch of Dear ImGui:** tested both 1.62 and 1.63 WIP
Back-ends: imgui_impl_win32.cpp + imgui_impl_dx9.cpp
OS: Windows 10 Pro 17134
Compiler: MSVS + VS Community 2017
**My Issue/Question:** ImGUI isn't focusing on window
After upgrade from 1.61 to 1.62 I've changed this code:
```cpp
- extern IMGUI_API LRESULT ImGui_ImplWin32_WndProcHandler(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam);
+ extern LRESULT ImGui_ImplWin32_WndProcHandler(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam);
...
ImGui_ImplWin32_WndProcHandler(window, message_type, wParam, l_param);
if(ImGuiWantsControl(message_type))
return 0;
...
ImGui_ImplDX9_NewFrame();
+ ImGui_ImplWin32_NewFrame();
+ ImGui::NewFrame();
+
DrawGUI();
+
+ ImGui::EndFrame();
ImGui::Render();
ImGui_ImplDX9_RenderDrawData(ImGui::GetDrawData());
...
- ImGui_ImplDX9_Init(game_hwnd, thisptr);
+ ImGui_ImplWin32_Init(game_hwnd);
+ ImGui_ImplDX9_Init(thisptr);
```
And... it's not working.
Status: Issue closed
Answers:
username_1: Sorry but #1586
username_2: Anyone have an actual fix besides this asshole linking something that has nothing to do with the issue? |
WebAssembly/wasm-c-api | 613871332 | Title: wasm-c-api currently lacks a means of determining the instance it was invoked from.
Question:
username_0: Callbacks sometimes need access to memory. Memory is instance specific.
Hence callbacks need to be able to securely identify the instance they are called from.
Proposed solution:
Please extend the C API as follows to add a void* parameter to callbacks:
`wasm_trap_t* my_callback(const wasm_val_t args[], wasm_val_t results[], void* user)`
... which would be assigned using the (proposed) `WASM_API_EXTERN void wasm_instance_userdata(const wasm_instance_t*, void* user);`
Embedders can then cast their void pointer to that instance's wasm_memory_t pointer or a more complex struct depending on their needs.
wasm_func_new_with_env accepts a similar void*; however, that parameter is not instance-specific.
Answers:
username_1: Hm, it seems like wasm_instance_userdata already exists, it's just named wasm_instance_set_host_info. ;)
But I'm afraid there is no built-in notion of a "caller's instance" that could be exposed. Functions are stand-alone objects that aren't necessarily called from inside a Wasm instance -- for example, when the host calls them through wasm_func_call, which may e.g. happen in another callback. Moreover, even when the call does come from an instance, that information is not typically passed along in an engine. So the API cannot actually provide this information.
FWIW, caller identity is usually considered a poor basis for security. In fact, for JS, some security folks have long fought to get func.caller removed from the language, because it actually is a security hazard (it's a side channel that leaks information to the callee).
username_0: Obviously an instance calls the callback, Andreas, and therefore that instance can either pass itself or something attached to it to a callback. Lacking the 'notion' required here is why this is an issue. Obviously callbacks that aren't the result of an instance doing something don't go through this mechanism.
Re: your security concern: we're on the trusted side; please allow us to deliver in a performant way. If you don't like what we're doing, don't use our implementation. The API should however allow us to perform the necessary routing of required data.
This is an API issue, not a language issue.
What you 'cannot' do has been implemented (sub-optimally) by wasmtime_func_new_with_env & wasmtime_caller_export_get. I'm not suggesting folding that particular implementation back to wasm-c-api as it's sub-optimal, but it does provide proof of principle.
Please reconsider your stance. You're incorrect.
username_0: FWIW, caller identity is required so we can access caller memory to do actual stuff. this is required beyond single instance per store. For now I have to duplicate stores unnecessarily to work around the limitation of the current API. Thank you for your hard work, we must go further :)
username_0: wasm_instance_set_host_info isn't in the wasm.h in this repository, though a search found it in somebody's Rust implementation. If that's the call you're referring to, sounds like it'll do. I just need a means of getting that parameter into the appropriate callbacks.
username_2: Maybe I'm missing something, but... memories are per-`Store`, not per instance. Functions are also per `Store`. If you use `wasm_func_new_with_env` to create functions, you can pass along a pointer to the same `Store`'s memory (or any other `void*` pointer of your choice, e.g. if you need even more information: store, instance, memory, user data, ...). Calling a function from a different `Store` than it was created in is invalid anyway, I believe. Once the function is called, its `void* env` is passed along to the callback. Shouldn't that cover your use case?
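To spell that out, a rough C sketch; the struct layout and names here are mine, not part of the API:
```c
#include "wasm.h"
#include <stdlib.h>

typedef struct {
  wasm_memory_t* memory;  /* the memory this function should operate on */
  void* user;             /* any extra host data */
} host_env_t;

static wasm_trap_t* my_callback(void* env, const wasm_val_t args[], wasm_val_t results[]) {
  host_env_t* ctx = (host_env_t*)env;
  byte_t* data = wasm_memory_data(ctx->memory);  /* read/write guest memory here */
  (void)data; (void)args; (void)results;
  return NULL;
}

/* one function object per logical "instance", each carrying its own env */
static wasm_func_t* make_func(wasm_store_t* store, const wasm_functype_t* type, host_env_t* ctx) {
  return wasm_func_new_with_env(store, type, my_callback, ctx, free /* finalizer */);
}
```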
username_0: If memory is indeed per store rather than per instance, yes.
However, if that's the case, I'm failing to understand the point of an instance.
A solid architecture is one where a thing gets compiled from intermediate to target once, even if instantiated multiple times.
auto store = wasm_store_new(__engine);
auto module = wasm_module_new(m_store, &intermediate_binary);
store/module get the intermediate, so doing that multiple times (my current workaround, and what you're suggesting) is suboptimal in the case of multiple instances of some program.
username_1: I honestly have no idea what you mean. As @username_2 points out, memories are not tied to instances. Like functions, memories are stand-alone objects that can be created elsewhere and shared across instances if desired. Inversely, in the future of multi-memories, a single instance can reference as many memories as it likes. It is an N-to-N relation, not 1-to-1.
username_0: we have enough disagreement over something simple here that I'll just fork it mate.
username_0: there may be circumstances where you want to share *some* memory between *some* instances, but in general instances of programs run in their own address space. you've got an architecture weirdness here. I'll bash it to suit my needs when multiple instance overhead of doing it your way becomes an issue.
username_0: just because you can share memory between instances doesn't mean that's normally what you want to do. that is the exception, not the rule. having a small overhead for doing things that way is acceptable, when you must do it. a bit like pipes on a host. its a form of inter-process communication.
username_0: Re: wasm_instance_set_host_info, created by the macro expansion of WASM_DECLARE_REF(instance).
thanks. I'm all for macro cleverness, but perhaps not in a core api header file :)
username_0: What is the purpose of the user void* assigned in wasm_instance_set_host_info, if not as I've requested, please?
username_0: I share your concern over parameter overhead to callbacks.
wasm_func_new_with_env already has a void*.
If that void* were replaced by the calling wasm_instance_t* (where appropriate), the void* currently a parameter there could be queried where necessary using wasm_instance_get_host_info or whatever you've called it. Everyone is happy, unless there remains an objection.
username_0: I apologize. As has been explained to me, wasm_func_new_with_env can be created once per instance, thereby getting me an instance-specific parameter in callbacks without the need to change anything.
Sorry Andreas, please delete. There already exists a means to do what I want. You have a solid architecture, as I'm sure you already know & I'm discovering :)
Status: Issue closed
|
vercel/next.js | 635467557 | Title: getServerSideProps error
Question:
username_0:
```jsx
<a className="text-primary">Read more</a>
</Link>
```
This is how I used `getServerSideProps(context)` on `[slug].js`
```
export async function getServerSideProps(context) {
let { slug } = context.query;
let changeCase = startCase(slug); // startCase from lodash
const res = await fetch(
`${process.env.API_URL}/hotels?hotelName=${changeCase}`
);
const data = await res.json();
return {
props: { hotel: data[0] },
};
}
```
## Expected behavior

## Screenshots

## System information
- OS: Windows 10 64bit
- Browser Google Chrome
- Version of Next.js: 9.4.4
- Version of Node.js: v12.18.0
## Additional context
I am using Strapi as the backend.
Only one hotel is not working; the rest of them are working fine.
Answers:
username_1: As the error shows, data[0] is undefined. Maybe you could check what exactly is returned by
```
const res = await fetch(
`${process.env.API_URL}/hotels?hotelName=${changeCase}`
);
```
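Until the root cause is found, you can also guard against the empty result instead of crashing the render. A sketch (assuming `startCase` is imported from lodash as in your snippet):
```js
export async function getServerSideProps(context) {
  const { slug } = context.query;
  const res = await fetch(`${process.env.API_URL}/hotels?hotelName=${startCase(slug)}`);
  const data = await res.json();
  if (!Array.isArray(data) || data.length === 0) {
    // nothing matched; render a fallback instead of reading data[0]
    return { props: { hotel: null } };
  }
  return { props: { hotel: data[0] } };
}
```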
Status: Issue closed
|
PyCQA/isort | 1114256214 | Title: Specify/exclude files the same way as black or "use from black"
Question:
username_0: I think it would be good if one could specify the included/excluded files in the same way as black does it:
```toml
[tool.isort]
include = '\.pyi?$'
extend-exclude = '''
/(
| bla/data
| blub
)/
'''
```
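For context, part of this is already expressible today with isort's own glob-based options. A sketch (check the option names against your isort version's docs):
```toml
[tool.isort]
profile = "black"
extend_skip = ["blub"]
extend_skip_glob = ["bla/data/*"]
```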
It would be even better if you could just say *use these settings from black*, e.g. something like this:
```toml
[tool.isort]
profile = "black"
[tool.isort.black]
include = true
extend-exclude = true
exclude = true
``` |
bitpay/prestashop-v2 | 581233349 | Title: Issue installing
Question:
username_0: Hi, we are running PS 1.7.5.2 and I can't seem to install the module,
I have tried:
- uploading prestashop-v2-master.zip
- renaming it to bitpaycheckout.zip
- rezipping it
It keeps saying: "This file does not seem to be a valid module zip".
I checked the config, it seems valid, even added the logo.gif but no luck.
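My understanding of the layout the validator wants (not verified) is a top-level folder named after the module, containing a main PHP file with the same name, roughly:
```sh
# expected archive layout:
# bitpaycheckout.zip
# └── bitpaycheckout/
#     ├── bitpaycheckout.php   # main module class
#     ├── config.xml
#     └── logo.gif
unzip -q prestashop-v2-master.zip
mv prestashop-v2-master bitpaycheckout
zip -rq bitpaycheckout.zip bitpaycheckout
```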
Answers:
username_1: Hello, I'm having the same issue with PrestaShop 1.7.6.5: the module (prestashop-v2-1.9.2020) cannot be installed through PrestaShop's module installer.
username_0: Hi,
The module is only half working: the postbacks are not processed and the bitpay database table remains empty. Also, I see the following error:
[Thu Dec 03 06:55:19.714034 2020] [proxy_fcgi:error] [pid 1475:tid 140489479071488] [client 2601:cb:8000:ea0:40e9:7abf:6e5d:2310:18620] AH01071: Got error 'PHP message: PHP Warning: Creating default object from empty value in ../modules/bitpaycheckout/controllers/front/bitpayorder.php on line 121\n', referer: https://www.username_0.nl/en/order
I guess that might be the root cause.
username_2: Prestashop 1.7.6.8
Can't install module - got error "This file does not seem to be a valid module zip"
username_3: Yeah... Not working "This file does not seem to be a valid module .zip" |
sitnaltax/loot-box-clicker | 340725693 | Title: Should upgrading items trigger auto donation?
Question:
username_0: The way it's currently implemented, when an item is upgraded from the inventory, the previously equipped item gets added back to the inventory, and with it, the auto donation gets triggered.
Is this intentional? I was just confused: sometimes I would get back the old piece (since auto-donation is only a chance), see "how big" the upgrade was, and sometimes it would get donated instantly.
Answers:
username_1: It's not intentional, but there's no particular reason not to--there's no reason to hold onto an old piece of equipment.
Status: Issue closed
|
getgauge/gauge | 214017370 | Title: Initialize project offline does not create java.properties file
Question:
username_0: **Expected behavior**
The properties file created during offline project initialization is similar to the project created with user being online.
**Actual behavior**
The project created with user being offline does not have the java.properties file.
**Steps to replicate**
1. Ensure the user is offline
2. Create gauge java project
`gauge --init java`
3. Open the env/default folder. It has only the default.properties file
**Version**
OS - Ubuntu
```
Gauge version: 0.8.2.nightly-2017-03-13
Plugins
-------
html-report (3.1.0)
java (0.6.1)
ruby (0.4.0)
```
Answers:
username_1: I get this, with network disconnected:
```
gauge init java
create src/test/java
create libs
create src/test/java/StepImplementation.java
create /tmp/bar/env/default
create env/default/java.properties
create /tmp/bar/.gitignore
create manifest.json
create specs
create specs/example.spec
create .gitignore
create env
create env/default
create env/default/default.properties
Successfully initialized the project. Run specifications with "gauge run specs/".
```
### Gauge version
```
Gauge version: 0.9.1
Plugins
-------
csharp (0.10.1)
html-report (4.0.1)
java (0.6.5)
js (2.0.0)
ruby (0.4.1)
xml-report (0.2.0)
```
Status: Issue closed
|
ZupIT/beagle | 891694242 | Title: How to pass an object (instead of plain text) into a screen request with POST method on Android?
Question:
username_0:
## Use case
I'm working on a Beagle screen and I have created an API at Beagle BFF as follows:
```
@PostMapping("/nBoxHome")
fun getNBoxHome(@RequestHeader headers: Map<String, String>, @RequestBody tenant: SingleTenant) : Screen
```
Specifically, on the client I have to pass a map of headers as the header and an object called SingleTenant as the request body.
Normally I use the loadView() method that the Beagle lib provides for Android to request a screen, and it works properly.
Even when I try to use it along with a POST-method request and a null body, it also works.
However, the problem occurred when I had to pass a SingleTenant object as the body. I used Gson to convert it to a JSON string because the datatype for the body must be a String, and I got an exception saying:
```
BeagleSDK: Exception thrown while trying to call http client.
```
At the Beagle BFF, I didn't even see any incoming request from the client, therefore I think these API calls weren't executed successfully.
If you guys have any idea or suggestion, please let me know. Thank you all in advance!
Answers:
username_1: I think this problem may be in your client:
https://github.com/username_0/Extended-Beagle-Library/blob/master/ExtendedBeagleLib/src/main/java/com/vt/extendedbeaglelib/config/httpclient/HttpClientDefault.kt
Line: 47 `getOrDeleteOrHeadHasData`
try remove this part of code:
```
if (getOrDeleteOrHeadHasData(request)) {
onError(ResponseData(-1, data = byteArrayOf()))
return createRequestCall()
}
```
username_0: I tried removing it and it still doesn't work for me.
The HttpClient I'm using is completely copied from here: https://docs.usebeagle.io/v1.6/resources/customization/beagle-for-android/network-client/
username_1: thanks for the report. I think I understand now: there really is a problem that has already been fixed in master, but it needs an improvement so as not to fall into this scenario. Soon I will upload a new version with this corrected, but to solve your problem I changed your client to this:
client:
```kotlin
/*
* Copyright 2020 ZUP IT SERVICOS EM TECNOLOGIA E INOVACAO SA
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package br.com.zup.beagle.sample.config
import br.com.zup.beagle.android.annotation.BeagleComponent
import br.com.zup.beagle.android.exception.BeagleApiException
import br.com.zup.beagle.android.networking.HttpClient
import br.com.zup.beagle.android.networking.HttpMethod
import br.com.zup.beagle.android.networking.RequestCall
import br.com.zup.beagle.android.networking.RequestData
import br.com.zup.beagle.android.networking.ResponseData
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.cancel
import kotlinx.coroutines.launch
import java.io.EOFException
import java.net.HttpURLConnection
import java.net.URI
typealias OnSuccess = (responseData: ResponseData) -> Unit
typealias OnError = (responseData: ResponseData) -> Unit
@BeagleComponent
class HttpClientDefault : HttpClient, CoroutineScope {
private val job = Job()
override val coroutineContext = job + CoroutineDispatchers.IO
override fun execute(
request: RequestData,
onSuccess: OnSuccess,
onError: OnError,
): RequestCall {
if (getOrDeleteOrHeadHasData(request)) {
onError(ResponseData(-1, data = byteArrayOf()))
return createRequestCall()
}
launch {
[Truncated]
The main change in this client: before, we used the fields inside the request data object; now we created two new attributes, `url` and an object called `httpAdditionalData`, so now you can use them, because in 2.0 they will remain and the others have already been deprecated.
and a new object was created to make the request; in the next update the `uri` is no longer mandatory (it was our mistake).
Example now:
```
startActivity(newServerDrivenIntent<ServerDrivenActivity>(
RequestData(uri = URI(SAMPLE_ENDPOINT),
url = SAMPLE_ENDPOINT,
httpAdditionalData =
HttpAdditionalData(
method = HttpMethod.POST,
body = "Simple text",
)
)))
```
the object `ScreenRequest` can be replaced by `RequestData`
Unfortunately there was a bit of caution on my part in this migration, to maintain both the legacy and this new API; I apologize for the inconvenience this has caused you.
username_0: @username_1 I replaced ScreenRequest with a RequestData object and it still doesn't work. I even have to define the headers, HTTP method and body outside HttpAdditionalData, otherwise my request will default to a GET request with a null body.
My RequestData object:
```
val requestData = RequestData(
uri = URI(endpoint),
url = endpoint,
headers = headerMap,
method = HttpMethod.POST,
body = TenantConfig.company,
httpAdditionalData = HttpAdditionalData(
method = HttpMethod.POST,
headers = headerMap,
body = TenantConfig.company
)
)
```
I tried to debug and realized that it always throws an exception while creating the response data (specifically **"exceeded content-length limit of 1770 bytes"**) if my request has a non-null body. It starts throwing the exception after setting statusCode for the **ResponseData** object in the **createResponseData()** method.

username_1: Beagle itself does not make any requests; everything is controlled by this client, which is implemented by the integrating dev. So if you want to pass an object in the body, you need to pass it as a string and then convert it back to your object. I will start a conversation internally about changing this `String` to `Any`; then it would be possible for you to pass an object in the body and do whatever you want when you receive it in the `HttpClient` interface.
This example works in demo sample inside the beagle:
Code convert object to string and get object again:
```
// gradle file
// implementation 'com.google.code.gson:gson:2.8.6'
// Example code:
data class User(val name: String)
val body = Gson().toJson(User("text"))
// at this moment Beagle only accepts a string, because of this you need to convert your object
startActivity(newServerDrivenIntent<ServerDrivenActivity>(
RequestData(
url = SAMPLE_ENDPOINT,
httpAdditionalData =
HttpAdditionalData(
method = HttpMethod.POST,
body = body,
)
)))
//but in your client you can convert to your object and do what you want
override fun execute(
request: RequestData,
onSuccess: OnSuccess,
onError: OnError,
): RequestCall {
//Send to your request
val user = Gson().fromJson(request.httpAdditionalData.body!!, User::class.java)
}
```
username_1: Have you already thought about maybe using a lib to make the requests? I believe it would treat these cases better, maybe [Retrofit](https://square.github.io/retrofit/)?
username_0: @username_1 yeah I know I can convert the string back to my object in the HttpClient, but the problem I'm facing now is the error **"exceeded content-length limit of 1770 bytes"** every time I try to pass a string as the body (as I described above).
If I don't pass anything as the body, it works perfectly.
Do you know the reason why I always encounter this error each time I pass a string as the body?
username_0: Sure. I strongly advocate the idea of switching to retrofit for handling requests better.
username_1: could you create a very simple application with this problem in question so that I can debug, because I really couldn't simulate with the tools I have here?
username_0: Hi @username_1, I'll upload my Beagle BFF project and share with you shortly.
username_0: Or I leave the API of my Beagle screen here, can you help me simulate it in your Beagle BFF project?
```
@PostMapping("/nBoxHome")
fun getNBoxHome(@RequestHeader headers: Map<String, String>, @RequestBody tenant: SingleTenant) : Screen {
val authorization = headers["authorization"] ?: ""
val osType = headers["os_type"] ?: ""
val osVersion = headers["os_version"] ?: ""
val appVersion = headers["app_version"] ?: ""
val headersAfterFiltered = mapOf(
"Accept" to "application/json",
"Content-Type" to "application/json",
"Authorization" to authorization,
"os_type" to osType,
"os_version" to osVersion,
"app_version" to appVersion
)
return tabService.createNBoxHome(headersAfterFiltered, tenant)
}
```
Basically, this is an API that receives POST requests from client.
The client will attach some info such as token in '**authorization**' in the header, and post an object (SingleTenant object in my case) as RequestBody -> you can replace by any object you want for testing.
And at client-side, I use loadView() method on Beagle Android to load this screen (endpoint **/nBoxHome**).
I tried to pass headers and a string as body (this string is converted from SingleTenant object) and I always received the error **"BeagleSDK: Exception thrown while trying to call http client."**
-> I debugged and realized that it always throws an exception while creating the response data (specifically "exceeded content-length limit of 1770 bytes") if my request has a non-null body.
If I set @RequestBody(required = false) tenant: SingleTenant? for my API and, from the client, don't pass anything as the body (null body), everything still works okay -> I think the problem is caused by the non-null body.
The HttpClient I'm using is completely copied from here: https://docs.usebeagle.io/v1.6/resources/customization/beagle-for-android/network-client/
username_1: I simulated a simple `POST` and I couldn't reproduce the error, so in this case it would be interesting to set up a sample for us. You don't need to share sensitive information about your project; just create an Android app that makes a similar request with the error in question, and the backend too. What I suggest is to take the object and change the names of the attributes and the data passed, so that I can reproduce the error. Here is my e-mail for anything: <EMAIL>; if that doesn't work you can send it to my personal e-mail: <EMAIL>, and you can talk to me via Google Chat on my personal e-mail.
In any case, if you want to resolve issues faster, we are 100% at your disposal; we can even arrange a meeting to get feedback and help you with any problem.
username_0: Hi @username_1, I've just reached you at your Gmail. Please check it out and help me resolve this issue!
Many thanks bro!
username_1: @username_0 I tried to build your backend but couldn't; still, from the Android error alone I got some information.
Related issues I found:
https://stackoverflow.com/questions/46739366/android-getting-java-net-protocolexception-exceeded-content-length-limit-of-0
https://stackoverflow.com/questions/9767952/how-to-add-parameters-to-httpurlconnection-using-post-using-namevaluepair
I changed the client.
Before:
```
private fun setRequestBody(urlConnection: HttpURLConnection, request: RequestData) {
    urlConnection.setRequestProperty("Content-Length", request.body?.length.toString())
    try {
        urlConnection.outputStream.write(request.body?.toByteArray())
    } catch (e: Exception) {
        throw BeagleApiException(ResponseData(-1, data = byteArrayOf()), request)
    }
}
```
After:
```
private fun setRequestBody(urlConnection: HttpURLConnection, request: RequestData) {
    try {
        urlConnection.doOutput = true
        urlConnection.outputStream.write(request.body?.toByteArray())
        urlConnection.outputStream.flush()
    } catch (e: Exception) {
        throw BeagleApiException(ResponseData(-1, data = byteArrayOf()), request)
    }
}
```
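For anyone hitting the same error, a plausible root cause (a guess, not confirmed in this thread): the old code declared `Content-Length` from `request.body?.length`, which counts UTF-16 code units, while `toByteArray()` produces UTF-8 bytes, so any multi-byte character makes the written body larger than the declared length. A minimal standalone sketch (hypothetical endpoint URL) that avoids the mismatch:
```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Sketch only: demonstrates a safe way to write a POST body.
fun post(urlString: String, body: String): Int {
    val connection = URL(urlString).openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.setRequestProperty("Content-Type", "application/json")
    connection.doOutput = true // must be enabled before writing a request body
    val bytes = body.toByteArray(Charsets.UTF_8)
    // Declare the *byte* size (or skip this line entirely and let the
    // connection compute the length); never use body.length, which counts chars.
    connection.setFixedLengthStreamingMode(bytes.size)
    connection.outputStream.use { it.write(bytes) }
    return connection.responseCode
}
```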
username_0: @username_1 Nice! It finally worked for me thanks to your updated code snippet :D
I think you should also update the HTTP client for Android in the docs, to help others avoid a problem like mine.
Thank you very much!
username_1: @username_0 Thank you for helping me simulate it. Yes, we will update all the examples with this new excerpt; I will also discuss internally whether to use Retrofit in the examples, since it already handles a lot of this.
username_0: In addition, since you mentioned my project here: I've been working on it as a library for my future Android projects.
My idea is to create as many custom actions and custom widgets as possible and put them, together with the http client, app Beagle config, etc., into one module, then host it as a library that lets future projects integrate Beagle more easily (it saves me time copying custom actions and widgets from one project to another). With this lib, a project just needs to implement the Beagle library to use the **loadView()** method. The app gradle file looks like the photo I attach here.

Initially it seems to work properly, but I realize that for each project I have to configure a different server address in the AppBeagleConfig, and I may want to change the flow of code within a specific custom widget or custom action. That means I need a way to re-implement that custom widget or action in my project and let Beagle use my new implementation (with the same name as in the library) instead of the one already compiled into the library. I don't know how to achieve this; do you have any suggestions for this case?
Thank you!
Status: Issue closed
username_1: @username_0 Can you open another issue with this question? Sorry about the delay. Everything you asked above is possible to do in a scalable way, but I will need some time to detail it.
numba/numba | 185685987 | Title: Create a system-reporting-tool
Question:
username_0: This tool would report, in a standardized way, properties of the system that would be useful for replication and debugging by core devs, especially when the suspected cause is OS-specific or GPU drivers/runtime. This data would not contain any personal or user-specific information and could include:
- Operating system and version
- CPU string
- Python version
- numba version
- llvmlite version
- numpy version
and if applicable:
- conda version
- conda environment
- GPU model, drivers and runtime
The tool must be easy to run and stand alone.<issue_closed>
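A minimal sketch of what such a tool could look like (standard library only; the fields and output format here are illustrative, not a spec):
```python
import platform
import sys

def system_report():
    """Print an anonymous, standardized description of the host system."""
    info = [
        ("OS", "{} {}".format(platform.system(), platform.release())),
        ("CPU", platform.processor() or "unknown"),
        ("Python", sys.version.split()[0]),
    ]
    # Versions of the packages most relevant to reproducing numba issues.
    for pkg in ("numba", "llvmlite", "numpy"):
        try:
            info.append((pkg, __import__(pkg).__version__))
        except ImportError:
            info.append((pkg, "not installed"))
    for name, value in info:
        print("{:<10}: {}".format(name, value))

if __name__ == "__main__":
    system_report()
```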
Status: Issue closed |
cartalyst/sentinel | 139732480 | Title: [Proposal] Per-Object Roles
Question:
username_0: I read through the documentation and didn't find anything to this effect. Are per-object roles currently possible in Sentinel, and if not, would they be desirable to add? It appears that this (and pretty much any other) authorization package is geared toward system-wide roles.
I'd implemented this in Laravel 4 (without sentry/sentinel) and used a roles table with an optional 'object' column that contained the classname of an object, and an object_id column that contained the id of an object.
* If a role has neither object nor object_id, it is system-wide.
* If a role has object but no object_id, the role applies to all objects of that type.
* If a role has object and object_id, the role applies to a single instance of that object identified by the id.
I kept permissions in a separate table, and didn't have any sort of hard constraints on what roles could be applied to what objects, though that might be desirable.
To implement this in Sentinel, it might be possible to create a new table, role_user_objects, with columns for user_id, role_id, object, and object_id (or to overload/replace the role_users table), while still keeping all of the roles in the roles table; a rough sketch follows.
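For illustration, the pivot table could be shaped roughly like this (a MySQL-flavored sketch following the column names above; this is not Sentinel's actual schema):
```sql
CREATE TABLE role_user_objects (
    id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    user_id   INT UNSIGNED NOT NULL,
    role_id   INT UNSIGNED NOT NULL,
    object    VARCHAR(255) NULL, -- class name; NULL means the role is system-wide
    object_id INT UNSIGNED NULL  -- NULL means the role covers all objects of that type
);
```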
Answers:
username_1: Are you talking about RBAC?
username_1: I'll mark this as a proposal for Sentinel 3, as I believe this was requested before.
username_2: I've actually been looking at replacing the in-house [RBAC system I designed for UserFrosting](http://www.userfrosting.com/components/#authorization) with Sentinel, but I have this concern as well.
My system currently allows you to associate checkpoints with users and roles ("groups"), and even define conditions using a restricted subset of PHP (basically, expressions consisting of boolean operators, and methods that return boolean values). We store them in the database, so that they can be dynamically modified through the frontend interface.
Do you think it will be easy to implement this using Sentinel? |
argoproj/argo-rollouts | 407048230 | Title: Support gradual increase of replicas for traffic shaped canary
Question:
username_0: There are two conflicting use cases with canary.
1. After changing the weight of a canary, the amount of traffic directed at the canary could be disproportionate to the scale, and overwhelm the pods. In this case, we would only want to set the weight in a gradual manner and eventually reach the target weight for the canary.
2. The weight set on a canary should be instantaneous despite disproportionate number of replicas. For example, going from 100% old, to 100% new atomically.
We need to let users configure which of the two modes applies when weights are set; both are sketched below.
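For illustration, the two modes could look roughly like this in a Rollout's canary steps (a sketch of the configuration surface, not a committed design; step values are arbitrary):
```yaml
# Gradual: ramp the weight so traffic stays proportionate to canary capacity.
strategy:
  canary:
    steps:
    - setWeight: 25
    - pause: {duration: 60}
    - setWeight: 50
    - pause: {duration: 60}
    - setWeight: 100
# Instantaneous: jump straight from 100% old to 100% new.
# strategy:
#   canary:
#     steps:
#     - setWeight: 100
```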
Answers:
username_1: The current implementation solves for both use cases.
Status: Issue closed
|
googlecolab/colabtools | 498643761 | Title: Unable to read content of file with view permissions
Question:
username_0: Bug report for Colab: http://colab.research.google.com/.
For questions about colab usage, please use [stackoverflow](https://stackoverflow.com/questions/tagged/google-colaboratory/).
- Describe the current behavior: When trying to read the content of a file for which I have view permissions, I get the error: FileNotFoundError: [Errno 2] No such file or directory: '/content/gdrive/My Drive/test.txt'. Note that os.path.isfile('/content/gdrive/My Drive/test.txt') returns True, so the file _should_ be available
- Describe the expected behavior: I expect that I am able to read the contents of the file
- The web browser you are using (Chrome, Firefox, Safari, etc.): Chrome
- Link to self-contained notebook that reproduces this issue
(click the *Share* button, then *Get Shareable Link*): https://colab.research.google.com/drive/1Ax1yKjAUJnXK4V3IDbVg1txbtdteBQy-

Answers:
username_1: Thanks for the report. Are you able to download the file from drive.google.com ?
Are you the owner of the file, or was it shared with you?
If shared with you, are both you and the owner using @gmail.com accounts?
username_0: The file was shared with "disable copying and downloading", so I cannot download it from drive.
The owner of the file is my work account @indeed.com, and the recipient is my personal account (@gmail.com).
username_0: I think this might be a difference in assumptions.
My assumption is that if I can `view` in drive but not copy or download then I should also be able to `read` in colab because colab read is like drive view.
Perhaps the colab assumption is that colab read is like drive copy/download, not like drive view?
username_1: Right; reading in colab is equivalent to download in terms of permissions.
Thanks for clarifying.
Status: Issue closed
|
projectstorm/react-diagrams | 438125110 | Title: Is there a way to display 'label' on the 'link' on IE11?
Question:
username_0: Hi,
Is there a way to display 'label' on the 'link' on IE11? Do you still not support IE11?
I saw #93 looks not support IE11.
I tried to test. I can create Node but can't create 'link' from node to node.
and I can't see 'label' on the 'link'
Is there a way?
Thanks
Answers:
username_1: Nope, IE11 is trash.
Status: Issue closed
|
jitsi/jitsi-videobridge | 155545486 | Title: jvb leaking file descriptors
Question:
username_0: I am running jitsi-videobridge (726-1) on Debian Jessie with openjdk-8 (8u72-b15-1~bpo8+1)
I am seeing a problem with file descriptor leakage: the process eventually maxes out its ulimit (4096) and becomes wedged (needs to be restarted).
It seems to leak approx. 16 FDs every 10 seconds or so.
I establish this by running:
lsof -p $(pgrep -u jvb)
from which I can see an ever-growing list of these:
java 1499 jvb 2337u sock 0,7 0t0 1514853 can't identify protocol
java 1499 jvb 2338u sock 0,7 0t0 1529373 can't identify protocol
java 1499 jvb 2339u sock 0,7 0t0 1514860 can't identify protocol
Are there any known issues in this area?
In the jvb logs I see the attached repeated pattern of activity that seems to correlate with the leakage.
Is this a normal pattern? I'm pretty sure that there are actually no active conferences.
Why would I keep seeing this logging line:
JVB 2016-05-18 14:21:26.815 INFO: [226258] org.jitsi.videobridge.IceUdpTransportManager.info() Failed to connect IceUdpTransportManager: net.java.sip.communicator.service.protocol.OperationFailedException: TransportManager closed
This seems like a potential source of FD allocation.
If you can suggest extra logging I can enable, extra information I can provide, or things I can dig into, I'm happy to do so.
Answers:
username_0: I might have figured out why I'm having the issue on this particular machine. In the logfile snippet I sent you (initial issue description) I noticed that there were 2 distinct IPs referenced in the
"org.ice4j.ice.Agent.createComponent" logging lines. It turns out that particular machine has a second virtual IP (eth0:1) which explains the second IP (i.e. the eth0 and eth0:1 public IPs both go to the same physical machine).
I contrasted this with the machine not experiencing the leak which only has a single NIC (eth0) and I wondered if the leak could be due to some circular problem relating to the 2 IPs reaching the same endpoint.
Anyway, since I am not actively using the eth0:1 IP on the problem machine, I disabled it, and now I don't believe I am seeing the same leakage.
I will continue to monitor it and report back in a day or so if it continues to be stable. I'm not sure if this is a bug or not. It seems like it could happen to anyone on a machine with multiple NICs, and in that regard does not seem like user error. I'm not sure of the nuts and bolts of why the secondary IP is problematic. Perhaps it would be good if the IPs used by JVB could be overridden. I'm not sure.
username_0: I ran the following script overnight (no jicofo just using curl to hit the heartbeat REST API).
```
#!/bin/bash
for i in `seq 1 10000`; do
echo "$(date)] Iteration $i [FDS: $(lsof -u jvb 2>/dev/null | wc -l)]"
curl http://localhost:8080/about/health
sleep 3
done
```
Attached is the jvb.log. The low-watermark seemed to be in the 700s when the script completed.
[jvb.log.gz](https://github.com/jitsi/jitsi-videobridge/files/277518/jvb.log.gz)
[lsof.txt](https://github.com/jitsi/jitsi-videobridge/files/277521/lsof.txt)
username_1: In my environment I don't see the same. I'm running health checks every 1s, and lsof every 30s, and the results so far (after about 2 hours) are here: https://docs.google.com/spreadsheets/d/1j1q2MH_9yqy5wbbU-eZsyeSGTYGwo6ruMV9pHni6rHk/edit?usp=sharing
I'll let it run longer and also run it on java 8.
username_0: I just tried your approach of 1 second pauses between healthchecks:
```
#!/bin/bash
for i in `seq 1 10000`; do
echo "$(date)] Iteration $i [FDS: $(lsof -u jvb 2>/dev/null | wc -l)]"
curl http://localhost:8080/about/health
sleep 1
done
```
I just hit max FDs.
Resulting jvb.log:
[jvb2.log.gz](https://github.com/jitsi/jitsi-videobridge/files/278347/jvb2.log.gz)
lsof:
[lsof.txt](https://github.com/jitsi/jitsi-videobridge/files/278348/lsof.txt)
FDs sampled roughly once per second:
[fds_sample.txt](https://github.com/jitsi/jitsi-videobridge/files/278357/fds_sample.txt)
Any ideas?
I'm wondering about this piece of logging:
INFO: [189614] org.jitsi.videobridge.IceUdpTransportManager.info() Failed to connect IceUdpTransportManager: net.java.sip.communicator.service.protocol.OperationFailedException: TransportManager closed
I'm wondering if each instance of this could result in leaked resources. It may also be totally unrelated. I really wish I was more skilled with Java profiling tools. I played around with heap dumps, but I can't really zero in on where the problem could be.
username_1: I see the same message many times in my logs, too.
username_0: @username_1 it looks (from your sysctl output) as if you only have one interface (eth0) on the machine you are running this on. When I compare this against the machine I was doing the REST-based healthchecks on, I have multiple interfaces. I'm wondering if having multiple interfaces involved in each healthcheck somehow exaggerates the symptoms I am seeing compared to what you are seeing.
FWIW, the machine I am running this particular test on is Ubuntu 14.04 and it has 5 interfaces since I have both virtualbox and vmware on it (eth0, vboxnet0, virbr0, vmnet1 and vmnet8).
I will see if I can modify the jvb code to not include some of those interfaces in the healthcheck, to see if it changes the outcome. If you have a multiple-interface machine, it would help if you could try it on that.
username_1: True, the machine I used has one interface (although it also has a virtual eth0:1 and advertises its public address found through the AWS API).
You can limit the interfaces used by jitsi-videobridge with a property, see here: https://github.com/jitsi/jitsi-videobridge/blob/master/doc/tcp.md#orgice4jiceharvestallowed_interfaces
username_0: Thanks. With this:
`org.ice4j.ice.harvest.ALLOWED_INTERFACES=eth0`
the FD growth is much slower but still seems to be apparent.
If you don't mind telling me: what type of AWS instance are you running, i.e. what AMI and instance size? I will try to duplicate on the same setup to see if I can work out why the problem doesn't seem to appear there.
I'm wondering if AWS is somehow a special case, because all the physical machines I have tried have exhibited the problem.
I guess the multiple-interface thing could be reproduced on AWS by attaching a bunch of elastic IPs to the same instance as virtual IPs (eth0:1, eth0:2, etc.).
username_1: One of the machines is a t2.micro, I don't know the AMI number but it is running ubuntu 14.04.
username_0: I guess it doesn't matter hugely, but it would be good to know the exact AMI ID just for uniformity. If you don't mind, you can get it by running the following on the instance:
`wget -q -O - http://instance-data/latest/meta-data/ami-id`
Thanks.
username_1: ami-4d8b2626
Status: Issue closed
username_1: These particular leaks have been fixed. |
ICPI/Coordination | 440822413 | Title: Product Submission: New CHIPS Visuals
Question:
username_0: Sent to ICPI 4/29 by @mehtasp for Leads' Review.
VL Dashboard can be found on [SharePoint](https://www.pepfar.net/OGAC-HQ/icpi/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2FOGAC%2DHQ%2Ficpi%2FShared%20Documents%2FClusters%2FHIV%2DTB%20Diagnosis%2FWorking%20Folder&FolderCTID=0x012000C815322C717A7E4B8164EA374FA254EC002682B939F9BED347BD49E43D77D3C691&View=%7B94C838B2%2DE166%2D4122%2DB8B4%2D7BEB9E1BC12B%7D&InitialTabId=Ribbon%2ERead&VisibilityContext=WSSTabPersistence).
New visuals address the following questions:
1. How many index clients were offered index testing services?
2. How many index clients accepted index testing services?
3. How many contacts did the index client provide?
   (Contacts are only sexual partners, biological children (or parent, if pediatric index client), and anyone with whom a needle was shared.)
4. How many contacts were tested?
5. Of the contacts tested, how many were positive?
Answers:
username_1: Feedback sent to Diagnosis Team
username_1: Response
--
Email Subject: HTS_index mockups next steps
Team responded with acknowledgement and next steps
First of all, thanks to Jamie & Catherine for their feedback and insight, which was extremely valuable.
The cluster co-leads, Randy, and I spoke at length about next steps on the HTS_Index mockups for CHIPS based on your feedback and have come to an agreement.
This is how we will proceed for next week's production:
- remove the age/sex slicer altogether and remove the narrative about how to interpret visuals based on the age/sex slicer selection. Users will just see the entire cascade without disaggs, for now. This will somewhat align with Pano as well.
- add an arrow from the 4th bar in the cascade to the stand-alone bar to the right of the cascade
- we liked Catherine/Jamie's suggestions about analyzing the 'index cases' and 'contacts provided' sub-groups separately, but realize that we won't be able to mock this up and get it cleared in time for next week's tool production. We're looking at clean Q2 for those changes at the earliest
- the HTS_index indicator should maybe be revised to reflect the feedback, but we're not sure if anyone can take this on by 5/20
This is how we will proceed for next week's CHIPS production and long term. Let us know if you have any qualms about this approach. We are informing the Testing SMEs of this approach as well.
Status: Issue closed
|
rome/tools | 1036853549 | Title: ☂️ Rome Parser Architecture Changes
Question:
username_0: # Store leading/trailing trivia for each token
We explored two options on how to store trivia like spaces, tabs, newlines, single and multiline comments. Let's use the following example to illustrate how RSLint works today and how we want to store trivia moving onwards.
```rust
[
1,
2, // test
3
]
```
RSLint stores the trivia as a token of the kind `Whitespace` or `Comment` and attaches it to its enclosing node. The CST for the array example would look like this (whitespace and comments are highlighted):
```yaml
script
- array
- l-bracket
- whitespace: \n\t # <<<
- element
- literal
- number_token: 1
- comma-token
- comment: // test # <<<
- whitespace: \n\t # <<<
- comment: // test # <<<
- element
- literal
- number_token: 2
- comma-token
- whitespace: " " # <<<
- comment: // test # <<<
- whitespace: \n\t # <<<
- element
- literal
- number_token: 3
- whitespace: \n # <<<
- r-bracket
- whitespace: \n # <<<
```
There are a number of concerns that we have about this design that we'd like to resolve:
- It prevents using fixed offsets for accessing a specific child of a node, and, therefore, `O(1)` lookups for child nodes. The problem is that an unspecified number of trivia may exist between any two non-trivia elements. Normally, the `condition` of an if statement is the 3rd element: 1st: `if` keyword, 2nd: whitespace, 3rd: `condition`. However, this isn't guaranteed as `if \n // some comment\ncondition\n...` illustrates, where the `condition` is the 5th element because of the added comment that is followed by a `\n`.
- The leading or trailing `comments` for a node or token can't be accessed using the AST facade. Resolving the comments would require additional helpers that look up the comments in the parent's children.
That's why we believe that storing leading/trailing trivia on the token is better for our use case. We would use the same rules as Roslyn and Swift to decide which token a piece of trivia belongs to:
1. A token owns all of its trailing trivia up to, but not including, the next newline character.
2. Looking backward in the text, a token owns all of the leading trivia up to and including the first newline character.
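In code, attaching trivia this way amounts to storing two lists on every token. A minimal Rust sketch (illustrative shapes only, not Rome's actual types):
```rust
// Each token owns its surrounding trivia, so child slots stay at fixed offsets
// and comments are reachable directly from the token they belong to.
#[derive(Debug, Clone)]
enum Trivia {
    Whitespace(String),
    Comment(String),
}

#[derive(Debug, Clone)]
struct Token {
    kind: &'static str,   // stand-in for the real SyntaxKind enum
    text: String,
    leading: Vec<Trivia>, // up to and including the preceding newline
    trailing: Vec<Trivia>, // up to, but not including, the next newline
}
```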
Applying these rules to our initial array example creates the following CST:
```yaml
script:
- array:
- l-bracket:
- leading_trivia: "" # <<<
- trailing_trivia: "" # <<<
- element: # (that's not how RSLint represents array elements today, but probably how it should)
[Truncated]
pub fn else_clause(&self) -> Option<ElseClause>;
}
```
We decided against this approach because users must know that they may have to explicitly check if the node was missing. The API doesn't guide them to handle the missing case and decide what the appropriate behaviour is.
# Resources
- [Swift Syntax](https://github.com/apple/swift/tree/c83e89062038833e049f549538186dac36a2e7a6/lib/Syntax) Documentation
- [Syntax Header](https://github.com/apple/swift/blob/c83e89062038833e049f549538186dac36a2e7a6/include/swift/Syntax/Syntax.h)
- [matklad's explanation of Swift](https://github.com/rust-lang/rfcs/pull/2256#issuecomment-408587672)
- [Swift documentation about internal structures](https://www.youtube.com/watch?v=5ivuYGxW_3M)
# Action Items
- [x] @xunilrj Update with details and links to trivia-in-tokens work
- [x] @ematipico Update with details and links to Ungrammar work
- [x] @username_1 Update with details and links to AST Facade work
PS -- @Stupremee and @RDambrosio016 we're happy to talk more about these changes, these are just where we landed in our own discussions, but we'd really value any feedback you could provide.
Status: Issue closed
Answers:
username_1: The parser architecture changes have been completed. We can track the formatter in its own task (#1726) |
MicrosoftDocs/azure-docs | 613516530 | Title: Invalid Credentials error
Question:
username_0: I have a SCIM endpoint up and running using Apache Syncope 2.1.5. When I try to access my SCIM APIs using Postman they work with Basic Authentication.
But when I try to access it from the Enterprise application, it fails with "You appear to have entered invalid credentials. Please confirm you are using the correct information for an administrative account." Where am I going wrong?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 271dc426-fb47-7441-d627-ec11f8c7a6ea
* Version Independent ID: 4cb8bfc8-4748-e501-425a-e4c2da0f5f5a
* Content: [Develop a SCIM endpoint for user provisioning to apps from Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#step-4-integrate-your-scim-endpoint-with-the-azure-ad-scim-client)
* Content Source: [articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md)
* Service: **active-directory**
* Sub-service: **app-provisioning**
* GitHub Login: @msmimart
* Microsoft Alias: **mimart**
Answers:
username_0: I even tried with a Bearer token in Postman and it worked. When I used the same Tenant URL and bearer token, it failed in Azure.
Is it compulsory for the tenant URL to be HTTPS, or will it work over HTTP?
username_1: @username_0 Thank you for your query. We will investigate and update the thread further.
username_0: Hi, any updates on this? Also what is the URL that is sent to the SCIM interface when clicking on "Test Connection"? It is not clearly mentioned in the documentation
username_2: @username_0, Thank you for reaching out, and I apologize for the delay, as it's taking me some time to validate this. Ideally the tenant URL needs to be HTTPS, but I am working to confirm, since you mentioned that it works in Postman.
Allow me some time to check and validate this, and I will get back to you.
username_0: Additional info
- The VM where we have hosted the Syncope application is on our VPN. We have created an inbound rule to allow requests to come in from AzureActiveDirectoryDomainServices.
- When we ran the test connection and checked the logs on Apache Syncope, there was no log entry at all for the request made.
username_0: It finally worked when using the public IP address of the machine instead of the private IP address. Thanks.
username_3: @username_0, Awesome news. Good to know the fix.
Status: Issue closed
|
Azure/azure-service-operator | 758572955 | Title: Bug: PostgreSQLServer sample provided does not work
Question:
username_0: **Describe the bug**
The sample YAML file provided for PostgreSQLServer does not work. It fails with the following error:
Error from server: error when creating "serv.yaml": conversion webhook for azure.microsoft.com/v1alpha1, Kind=PostgreSQLServer failed: Post https://azureoperator-webhook-service.azureoperator-system.svc:443/convert?timeout=30s: dial tcp 10.0.96.217:443: connect: connection refused
Contents of serv.yaml:
apiVersion: azure.microsoft.com/v1alpha1
kind: PostgreSQLServer
metadata:
name: postgreserver-sample
spec:
location: eastus2
resourceGroup:<RG_NAME>
serverVersion: "11"
sslEnforcement: Disabled
createMode: Default
sku:
name: GP_Gen5_4
tier: GeneralPurpose
family: Gen5
size: "51200"
capacity: 4
**To Reproduce**
Steps to reproduce the behavior:
Install ASO and run `kubectl apply -f serv.yaml`.
Answers:
username_1: If others see this, check to make sure that the controller pod is up and running in the `azureoperator-system` namespace.
Closing this as I can't repro and the issue is quite old. Apologies for the (very) slow response - we are working on doing better at this.
Status: Issue closed
|
maslianok/react-resize-detector | 511645947 | Title: "ResizeObserver loop limit exceeded" when going into fullscreen mode
Question:
username_0: Hello,
This error occurs only on Chrome (desktop versions are what I tested with). When sending a DOM element into fullscreen mode via `element.requestFullscreen`, I'm getting the "ResizeObserver loop limit exceeded" error. In my case, I have a `video` element that is a sibling to the `ReactResizeDetector` which is bound to the parent DOM element of both. The `onResize` property is triggered when the `video` element goes in/out of fullscreen.
Browsers:
Chrome - Version 77.0.3865.90 (Official Build) (64-bit)
Version 74.0.3713.1 (Official Build) canary-dcheck (32-bit)
OS:
Windows 10, and Mac (most recent version)
Please let me know if there is any other info I can provide.
Thanks,
-Josh
Answers:
username_1: Go fullscreen
</button>
<ResizeDetector
handleWidth
handleHeight
onResize={this.onResize}
targetDomEl={this.parentRef.current}
/>
</div>
);
}
```
Seems the structure is similar to the one you described.
But everything works on my end.
Can you please provide a code snippet that shows the bug?
username_1: Closing due to lack of information
Status: Issue closed
|
algolia/algoliasearch-zendesk | 432136214 | Title: How can I index also the private articles?
Question:
username_0: I'd like to create a separate index which includes our agent-only articles from Zendesk. By default the connector takes only the public articles.
The idea is to switch indexes using the JS in the Zendesk theme:
```js
if (HelpCenter.user.role == "agent" || HelpCenter.user.role == "manager") {
  // use private index
}
```
Can you guide me through the process? I think it might be something in this file https://github.com/algolia/algoliasearch-zendesk/blob/master/crawler/types/user_segment.rb
Answers:
username_1: Hi! Unfortunately this would likely lead to the security issue which we're avoiding by only allowing public articles:
Indeed, switching based on a JavaScript variable means that technical users would be able to inspect the code and find the credentials and index name to use to access private articles.
If that's ok with you, you can run the connector yourself with `private: true` which will allow indexing both public and private articles:
- https://github.com/algolia/algoliasearch-zendesk/tree/master/crawler
- https://github.com/algolia/algoliasearch-zendesk/blob/master/crawler/config.rb
See also:
https://discourse.algolia.com/t/how-to-add-another-index-to-zendesk-search-for-employees-only/7177
Status: Issue closed
|
cocos2d/cocos2d-x | 508438172 | Title: android lua random crashes
Question:
username_0: - cocos2d-x version: 3.17 (external/lua upgraded to 3.17.2)
- device tested on: Samsung SM-G9650
- development environment:
- NDK version: 19.2.5345600
There are random crashes in 3.17's luajit, so I upgraded to 3.17.2's lua. armeabi-v7a is OK, but arm64-v8a is not.
Answers:
username_1: You can try the latest code; it updated luajit.
username_0: Do I need to update luacompiler ?
username_1: I think you should update luajit.a in `external/lua/luajit`, and luajit bin in `tools/cocos2d-console/plugins/plugin_luacompile/bin`.
username_0: OK, thanks. There could be some trouble if resources compiled with the old luacompile are not compatible with the new one.
username_1: Yep, you are right.
username_0: I don't use the new luacompile, and it seems to be OK. I'll test more devices.
Status: Issue closed
|
angular/material | 152405258 | Title: floating icon with mdAutocomplete
Question:
username_0: I would like to be able to add a floating icon on the left side of an autocomplete input as I would do with a simple input field:
```jade
md-input-container.md-icon-float
  label Example Label
  md-icon(md-svg-src="plus")
  input
```
Here are two examples of code I would like to be able to write:
```jade
md-autocomplete(
md-search-text="searchText"
md-items="movie in movies | filter : searchText"
md-item-text="movie.originalTitle"
md-min-length="0"
md-floating-label="Film"
md-icon="movie"
)
```
or even better:
```jade
md-input-container.md-icon-float
  label Example Label
  md-icon(md-svg-src="plus")
  input(md-autocomplete)
```
Thank you
Answers:
username_1: A solution to many of these issues might be if md-autocomplete could take an input as a parameter, similar to what md-chips does.
That would allow for further customization, as we could use a modified input element with all the parameters they allow.
Plus, I wouldn't keep seeing issues like this one, thinking it was possible, then remembering that I was thinking of md-chips :-P
Autocompletes are inputs, they should do everything that inputs can do.
Status: Issue closed
username_2: This issue is closed as part of our ‘Surge Focus on Material 2' efforts.
For details, see our forum posting @ http://bit.ly/1UhZyWs. |
jlippold/tweakCompatible | 419117347 | Title: `DLEasy` working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.yourepo.ahmedbafkir.dleasy",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.yourepo.ahmedbafkir.dleasy",
"deviceId": "iPhone9,3",
"url": "http://cydia.saurik.com/package/com.yourepo.ahmedbafkir.dleasy/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": true,
"packageName": "DLEasy",
"category": "Tweaks",
"repository": "AhmedBafkir Source - YouRepo",
"name": "DLEasy",
"installed": "1.4.3",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.yourepo.ahmedbafkir.dleasy",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.4",
"shortDescription": "Download images & videos from social apps easily!",
"latest": "1.4.3",
"author": "AhmedBafkir",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
saltstack/salt | 108760233 | Title: Fix issues with cross-calling states
Question:
username_0: Refs #26747
Refs #21214
Need to figure out how to propagate `__running__` and `__instance_id__` and similar magic variables to lower-level calls in the loader. Packing probably won't work, unless I do a call-time packing.<issue_closed>
Status: Issue closed |
acani/Chats | 45036374 | Title: tableView does not scroll after message send
Question:
username_0: TableView does not scroll after a message is sent.
This is a known problem; does someone have an idea for a solution?
Answers:
username_1: @username_0 thank you for reporting this issue and for providing a solution. I'm sorry for not fixing this issue. Unfortunately, this project no longer works, and I'm no longer working on it, so I don't plan on fixing this issue. If anything changes, I'll let you know. |
espressif/esp-adf | 383936982 | Title: Remerge esp_http_client changes
Question:
username_0: The code for the `esp_http_client` component is duplicated from the ESP-IDF component, yet it was changed locally here without being merged back.
I'm talking specifically about the added function `esp_http_client_set_redirection` which exists here [here](https://github.com/espressif/esp-adf/blob/master/components/esp_http_client/include/esp_http_client.h#L337-L360) but not in [ESP-IDF](https://github.com/espressif/esp-idf/blob/master/components/esp_http_client/include/esp_http_client.h#L364-L374).
Why have diverging library code for this component? It makes building harder when using other build systems (e.g. PlatformIO) because I have to un-include the ESP-IDF lib and include this version of the library.
Why not merge https://github.com/espressif/esp-adf/commit/a0cba1164a00116b276a712eeebc8fecbbbf7726 into ESP-IDF?
Answers:
username_1: Hi @username_0
Because ESP-ADF is using ESP-IDF v3.1, which is not have esp_http_client component. And some specific features for ADF, if it's necessary we will send the MR to IDF soon
Thanks
Status: Issue closed
username_0: @username_1 So I see that ESP-IDF (https://github.com/espressif/esp-idf/tree/release/v3.2/components/esp_http_client) is now way ahead with their changes on esp_http_client, but here, esp_http_client still exists. Will the esp_http_client be deleted from here because all necessary changes are in main ESP-IDF?
username_3: @username_0
Yes. This will be done once we update to IDF release/v3.2. |
GSA/innovation.gov | 290131016 | Title: Innovation.gov Beta Feedback 1/19/2018 17:50:44
Question:
username_0: ----- About You -----
First Name: Matt
Last Name: Kasten
Agency or Organization: FAA
Email: <EMAIL>
Applicable Age Range: 18-25
Why did you visit Innovation.gov today? To learn about innovation in the government
How did you first hear about us? Better Government Newsletter

----- Innovation.gov Review -----
What was your first impression when you entered the website? Academics/Researchers
Statement: "Innovation.gov content is easy to see and well designed": Non-supervisor
Statement: "Innovation.gov content is organized in a clean, easy-to-read manner": (no response)
Statement: "The content and resources on Innovation.gov are applicable to my daily work": (no response)
Statement: "I would share the content and resources on Innovation.gov with my peers and colleagues": (no response)

----- Innovation at Your Organization -----
How do you define innovation? (no response)
What would you like to learn regarding innovation and building a better government? (no response)
What problem(s) are you trying to solve in your organization? (no response)
How well-defined is innovation in your organization? (no response)
To what degree is creativity and innovation rewarded in your organization? (no response)

----- The Better Government Movement & You -----
How can the Better Government team best support you with your current needs? (no response)
How might you want to get involved in the Better Government Movement? (no response)
Email: (no response)<issue_closed>
Status: Issue closed |
data2health/covid19-challenge | 591583831 | Title: Prepare Sage press release
Question:
username_0: Loop in Hsiao-Ching and Julie (already done) to prepare the press release. Share available material with them.
Answers:
username_0: ## Update
- Thomas to draft a paragraph for the DREAM newsletter based on the content of the challenge home page.
username_0: The challenge has been announced in the DREAM Challenges Newsletter.
Status: Issue closed
|
botify-labs/simpleflow | 182253455 | Title: Add linting to travis tests with flake8
Question:
username_0: Discussed a bit in #129: I often run flake8 manually, and @ybastide does too, so let's have this in Travis and fix obvious problems right away. I usually like flake8, except for its line-length rules, but we already have a `setup.cfg` rule for that.
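A minimal sketch of the Travis addition (assuming the package directory is `simpleflow/` and tests live in `tests/`; adjust paths to the repo's actual layout):
```yaml
# .travis.yml (excerpt) -- lint in the same job as the tests
install:
  - pip install flake8
script:
  - flake8 simpleflow tests
```
The line-length rule would stay in `setup.cfg` under the `[flake8]` section, as already noted above.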
OYsun/vscode-VueHelper | 243294829 | Title: Syntax error in the in-component navigation guard snippets
Question:
username_0: Typing beforeRouteEnter or beforeRouteLeave produces
beforeRouteEnter((to, from, next) => {
//does NOT have access to `this` component instance
}),
which is a syntax error. It should be changed to
beforeRouteEnter(to, from, next) {
//does NOT have access to `this` component instance
}
which is the correct form.
Answers:
username_0: Arrow functions are not recommended here.
BIDS-collaborative/brainspell | 111738211 | Title: Cannot upload file
Question:
username_0: I cannot upload the file into phpMyAdmin because it says that the file is too large. How do I go about fixing this? Thanks!
Answers:
username_1: Same issue here
username_1: Thanks to Frank, I guess this is because we couldn't modify the initial config file's upload_max_filesize variable, so it should be resolved after we get the superuser password (issue #8).
@jbpoline @username_2 @r03ert0
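For reference, the relevant `php.ini` settings are roughly these (values illustrative; pick a limit larger than the dump, and restart the web server afterwards):
```ini
; php.ini -- both limits must cover the uploaded file,
; since uploads arrive via POST
upload_max_filesize = 64M
post_max_size = 64M
```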
username_2: As we figure these things out, can we move the info to the wiki or instructions, and close the issues?
Status: Issue closed
|
embulk/embulk-output-jdbc | 148009065 | Title: Connection timed out (output-sqlserver; 0.5.1)
Question:
username_0: I don't open UDP/1434. In fact, I don't even start the SQL Server Browser service.
If I specify the port for the SQL Server engine, the Browser service should not be required, should it?
I would be grateful if it could be used in my existing environment, as 0.5.0 was.
... I am not good at English.
Answers:
username_1: Did you specify `instance`?
If both `instance` and `port` are specified, `port` is ignored.
https://msdn.microsoft.com/ja-jp/library/ms378428%28v=sql.110%29.aspx
https://msdn.microsoft.com/en-us/library/ms378428%28v=sql.110%29.aspx
This is per the SQL Server JDBC Driver's specification, but a bit confusing.
I'll modify as follows.
- If both `instance` and `port` are specified, error.
- If neither `instance` nor `port` are specified, port 1433 is used.
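In the meantime, a config that connects by port should simply omit `instance`, e.g. (illustrative values; option names as in embulk-output-sqlserver):
```yaml
out:
  type: sqlserver
  host: db.example.com
  port: 1433              # honored only when `instance` is absent
  # instance: SQLEXPRESS  # <- leave this out when connecting by port
  database: mydb
  user: embulk
  password: xxxx
  table: target_table
  mode: insert
```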
username_0: Yes. It is untested, but I found the mechanism.
I think it's better to follow "If both a portNumber and instanceName are used, the portNumber will take precedence and the instanceName will be ignored", as noted at
https://msdn.microsoft.com/en-us/library/ms378428%28v=sql.110%29.aspx
i.e. neither ignore the port nor treat it as an error. But it's up to you. Thank you.
username_0: The test was successful. The problem is gone when I don't specify instance. Thank you.
Status: Issue closed
|
noselusbe/noselus-backend | 42144515 | Title: Politician 0 still occurring
Question:
username_0: 
Answers:
username_1: Getting there. Now getting the pictures directly from the Walloon Parliament website for the representatives.
Still some ministers missing, but getting there. |
exercism/java | 271223654 | Title: sum-of-multiples: Add starter implementation
Question:
username_0: sum-of-multiples is missing a starter SumOfMultiples.java file.
Answers:
username_1: (and it _should_ have one, per https://github.com/exercism/java/blob/master/POLICIES.md#starter-implementations)
username_2: I think it does have a starter implementation: https://github.com/exercism/java/blob/master/exercises/sum-of-multiples/src/main/java/SumOfMultiples.java?
username_1: I think this was actually for `difference-of-squares`!
username_2: I see! Then I agree, it should have a starter implementation :)
Status: Issue closed
|
jlippold/tweakCompatible | 404910117 | Title: `DopeSettings` working on iOS 11.4.1
Question:
username_0: ```
{
"packageId": "xyz.xninja.dopesettings",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "xyz.xninja.dopesettings",
"deviceId": "iPhone10,4",
"url": "http://cydia.saurik.com/package/xyz.xninja.dopesettings/",
"iOSVersion": "11.4.1",
"packageVersionIndexed": true,
"packageName": "DopeSettings",
"category": "Tweaks",
"repository": "BigBoss",
"name": "DopeSettings",
"installed": "0.0.4",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "xyz.xninja.dopesettings",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Spice up the settings view",
"latest": "0.0.4",
"author": "ARX8x",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
``` |
feathersjs/feathers | 371071054 | Title: @feathersjs/commons is using === to compare Symbols
Question:
username_0: `===` comparison on Symbols will cause all hooks returning `SKIP` to throw an error here.
https://github.com/feathersjs/feathers/blob/c6e7e638c9d1c7978c018123768ece0ecc373228/packages/commons/lib/hooks.js#L118
Answers:
username_1: It is comparing the actual instance, not the symbol. There is a difference between
```js
const mySymbol = Symbol('testing');
console.log(mySymbol === mySymbol); // true
```
And
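(The rest of this reply appears cut off; presumably the contrasting case was two distinct Symbol instances, which never compare equal:)
```js
console.log(Symbol('testing') === Symbol('testing')); // false: two distinct instances
```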
Status: Issue closed
username_0: Ah sorry, my problem was a `@feathersjs/commons` version conflict which created 2 instances.
whosonfirst-data/whosonfirst-data | 456409165 | Title: Rework localadmin in Denmark
Question:
username_0: After reviewing data in Denmark for new labels, I noticed that some of the localadmin records in Denmark have incorrect names and `dk-geodk` source properties. I'm guessing this occurred during an import of new geometries, but I'm not certain. The geometries themselves look good, but the names and source properties attached to some of these records seem to be incorrect.
Take a look at the source data, compare to what is currently in Who's On First, and make adjustments.
- https://github.com/whosonfirst/whosonfirst-sources/blob/master/sources/dk-geodk.json
- https://spelunker.whosonfirst.org/id/85633121/descendants/?exclude=nullisland&placetype=localadmin<issue_closed>
Status: Issue closed |
bernhard-42/jupyter-cadquery | 748175860 | Title: Elementary installation error
Question:
username_0: I am using Windows 10, and following the instructions to the letter. However, the command
jupyter-labextension install --no-build $(cat labextensions.txt)
produces the error
ValueError: "$(cat" is not a valid npm package
I am not very knowledgeable about npm in general; all I want is to be able to use cadquery, and hopefully within jupyter. I ended up going through each of the four lines in `labextensions.txt` one by one - but is there a better way?
Answers:
username_1: Apologies, @username_0 , this is a UNIX command - it never came to my mind that this doesn't work on Windows.
I expect `jupyter-labextension` works under Windows, so please try
```
jupyter-labextension install --no-build @jupyter-widgets/[email protected] @jupyter-widgets/[email protected] [email protected] [email protected]
```
These are the strings in the `labextensions.txt` file; the syntax with `cat` is a UNIX shorthand for the long command above.
username_0: No need to apologise! It's just that in my mind whenever I see "conda" I assume a Windows system - I use pip generally on my Linux box. However, for this year what with working from home, I have been switching almost completely to Windows for everything (not without some annoyances). Mostly things work fine. And in the end I did exactly what you recommended, just in four separate commands. The main thing I miss on Windows is a decent shell, like bash or zsh on Linux.
Anyway, I have got cadquery working very nicely in Jupyter, and I like it very much - thank you for your hard work. I will see how easy it is to switch from OpenJSCAD, which has been my scripting CAD up until now.
Thank you again.
username_1: @username_0 I've release the release candidate of the next version. It supports CadQuery 2.1 and (relevant for this issue) Jupyter Lab 3.0 which simplifies the installation drastically and removes the need of labextension installations (see README.md)
username_1: @username_0 Just released 2.0.0 which does not need this step any more
username_0: Thank you very much indeed for both your messages! Lest you think me
unpardonably rude for not replying earlier, I have been busy both with the
start of the academic teaching semester, and the sudden debility of my
mother due to some strokes, and her admittance to a nursing home.
I'll look forward to experimenting with cadquery on Linux as soon as I can.
regards and again thanks,
Alasdair
Status: Issue closed
username_1: Closing, because the whole labextensions installation has been changed and this is (luckily) not needed any more |
jauninb/traceability | 212988045 | Title: Another issue
Question:
username_0: Bluemix [toolchain](https://console.ng.bluemix.net/devops/toolchains/f1125b7d-a59e-4cf3-902d-1f53b2f3f7e9 "test-03-09-a"): [Delivery Pipeline](https://console.ng.bluemix.net/devops/pipelines/20fe3d4e-b741-4f6b-87ff-0f141ec79aef "test yp") deployed commit [3480652](https://github.com/username_0/traceability/commit/34806523175eea3c3ae21cc8551ebb71eff3992f) to [prod](https://console.ng.bluemix.net/apps/aa84a0a7-b39b-4373-b366-414d569dd456 "ibm:yp:us-south:<EMAIL>:prod")
Answers:
username_0: Bluemix [toolchain](https://console.ng.bluemix.net/devops/toolchains/f1125b7d-a59e-4cf3-902d-1f53b2f3f7e9 "test-03-09-a"): [Delivery Pipeline](https://console.ng.bluemix.net/devops/pipelines/20fe3d4e-b741-4f6b-87ff-0f141ec79aef "test yp") deployed commit [a5c4053](https://github.com/username_0/traceability/commit/a5c4053f91dc2cc3559a98a3177ed5e72d18bd08) to [prod](https://console.ng.bluemix.net/apps/aa84a0a7-b39b-4373-b366-414d569dd456 "ibm:yp:us-south:<EMAIL>:prod") |
pycom/pycom-micropython-sigfox | 308024581 | Title: LoRa RX2 issue
Question:
username_0: We are evaluating Pycom products for a project we are working on and I happen to have access to a RedwoodComm 5020A LoRa Tester.
We're currently using a Pycom LoPy with firmware 1.17.3b1.
On the tester, I've selected US Certification Test v1.2 for US915 region. In test 11, RX2 Receive Window Test, it is failing because it appears the LoPy is echoing back the response to the last command it saw in the RX1 window rather than the data it received in the RX2 window.
Output from the tester:
```
[ LINK MESSAGE ]
L CH SF BW POW TIME FCNT Adr Ack B FP M CMD CNTS
U 2 7 125 -21.1 60.1s 007A 1 0 0 224 U DownlinkCounter Cnt=4
D 2 7 500 -30.0 ---- 0008 1 0 - 000 U RXParamSetReq RX1DROffset=0,RX2DR=8
U 6 7 125 -23.5 8.08s 007C 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 12 500 -85.0 ---- 0009 1 0 - 100 U DataDown ByteLen=10
U 0 7 125 -19.1 10.1s 007E 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 12 500 -85.0 ---- 000A 1 0 - 100 U DataDown ByteLen=10
U 0 7 125 -23.6 15.1s 0081 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 12 500 -85.0 ---- 000B 1 0 - 100 U DataDown ByteLen=10
U 1 7 125 -21.6 5.01s 0082 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 12 500 -85.0 ---- 000C 1 0 - 100 U DataDown ByteLen=10
U 6 7 125 -19.8 45.1s 008B 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 12 500 -85.0 ---- 000D 1 0 - 100 U DataDown ByteLen=10
U 7 7 125 -23.1 25.1s 0090 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 12 500 -85.0 ---- 000E 1 0 - 100 U DataDown ByteLen=10
U 2 7 125 -20.0 15.1s 0093 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 12 500 -85.0 ---- 000F 1 0 - 100 U DataDown ByteLen=10
```
Answers:
username_0: Some more info: it seems once it gets into this state, it is 'stuck'. The RX1 Receive Window Test also fails at this point in the same way. If I reboot the LoPy and avoid the RX2 test, the RX1 test will succeed.
username_1: Just for interest: what is the test set-up of the LoPy, and especially: which Python script are you using on the LoPy?
username_0: I'm using the certification.py script from Pycom:
```
#!/usr/bin/env python
#
# Copyright (c) 2016, Pycom Limited.
#
# This software is licensed under the GNU GPL version 3 or any
# later version, with permitted additional terms. For more information
# see the Pycom Licence v1.0 document supplied with this file, or
# available at https://www.pycom.io/opensource/licensing
#
from network import LoRa
import time
import binascii
import socket
import struct
DEV_EUI = '3C CA 00 E6 9C 84 44 9A'
APP_EUI = 'AD A4 DA E3 AC 12 67 6B'
APP_KEY = '<KEY>'
DEV_ADDR = '00 00 00 0A'
NWK_SWKEY = '<KEY>'
APP_SWKEY = '<KEY>'
class Compliance:
    def __init__(self, activation=LoRa.OTAA, region=LoRa.US915):
        dr = 3
        if region == LoRa.EU868:
            dr = 5
        self.lora = LoRa(mode=LoRa.LORAWAN, region=region)
        self.lora.compliance_test(True, 0, False) # enable testing
        self.activation = activation
        self._join()
        self.s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
        self.s.setsockopt(socket.SOL_LORA, socket.SO_DR, dr)
        self.s.setsockopt(socket.SOL_LORA, socket.SO_CONFIRMED, False)

    def _join(self):
        if self.activation == LoRa.OTAA:
            dev_eui = binascii.unhexlify(DEV_EUI.replace(' ',''))
            app_eui = binascii.unhexlify(APP_EUI.replace(' ',''))
            app_key = binascii.unhexlify(APP_KEY.replace(' ',''))
            self.lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)
        else:
            dev_addr = struct.unpack(">l", binascii.unhexlify(DEV_ADDR.replace(' ','')))[0]
            nwk_swkey = binascii.unhexlify(NWK_SWKEY.replace(' ',''))
            app_swkey = binascii.unhexlify(APP_SWKEY.replace(' ',''))
            self.lora.join(activation=LoRa.ABP, auth=(dev_addr, nwk_swkey, app_swkey))
        # wait until the module has joined the network
        while not self.lora.has_joined():
[Truncated]
self.lora.compliance_test().nbr_gateways])
# set the state to 1 and clear the link check flag
self.lora.compliance_test(True, 1, False)
else:
if self.lora.compliance_test().state == 4:
rx_payload = self.s.recv(255)
if rx_payload:
print('RX: {}'.format(rx_payload))
self.tx_payload = bytes([rx_payload[0]])
for i in range(1, len(rx_payload)):
self.tx_payload += bytes([(rx_payload[i] + 1) & 0xFF])
print('TX: {}'.format(self.tx_payload))
self.lora.compliance_test(True, 1) # set the state to 1
else:
self.tx_payload = bytes([(self.lora.compliance_test().downlink_counter >> 8) & 0xFF,
self.lora.compliance_test().downlink_counter & 0xFF])
else:
time.sleep(2)
self._join()
```
username_0: I have the ability to rebuild the firmware and such if anyone needs me to try anything, by the way.
username_1: There were a lot of complaints about the LoRa stack on the xxPy devices, but not listening to the RX2 window wasn't one of those. And it seems not to be an issue when working with the TTN. As far as I could tell, TTN always uses the RX1 window.
username_0: I'm not entirely sure yet that it's just not listening on the RX2 window. It seems it might be an issue with the handling of RXParamSetReq. I edited my comment above to reflect this, but it seems the RX1 Receive Window Timing test fails most of the time with the same issue.
username_0: It must be timing related. If I move the LoPy right next to the Tester, it appears to be passing. The previous position was just 3 feet away, though.
I'll post back when the test finishes.
username_0: It worked for a while and then I started seeing the duplicate RXParamSetAns again:
```
D R2 10 500 -85.0 ---- 00C7 1 0 - 100 U DataDown ByteLen=10
U 7 10 125 -7.3 4.44s 00D8 1 0 0 224 U DownlinkCounter Cnt=191
D R2 10 500 -85.0 ---- 00C8 1 0 - 100 U DataDown ByteLen=10
U 0 10 125 -7.1 4.44s 00D9 1 0 0 224 U DownlinkCounter Cnt=191
D R2 10 500 -85.0 ---- 00C9 1 0 - 100 U DataDown ByteLen=10
U 3 10 125 -7.2 4.44s 00DA 1 0 0 224 U DownlinkCounter Cnt=192
D 3 10 500 -30.0 ---- 00CA 1 0 - 000 U RXParamSetReq RX1DROffset=0,RX2DR=11
U 4 10 125 -7.2 3.42s 00DB 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00CB 1 0 - 100 U DataDown ByteLen=10
U 6 10 125 -7.3 4.39s 00DC 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00CC 1 0 - 100 U DataDown ByteLen=10
U 1 10 125 -7.1 5.01s 00DD 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00CD 1 0 - 100 U DataDown ByteLen=10
U 2 10 125 -7.2 5.01s 00DE 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00CE 1 0 - 100 U DataDown ByteLen=10
U 5 10 125 -7.3 5.01s 00DF 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00CF 1 0 - 100 U DataDown ByteLen=10
U 0 10 125 -7.1 5.01s 00E0 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00D0 1 0 - 100 U DataDown ByteLen=10
U 5 10 125 -7.3 5.01s 00E1 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00D1 1 0 - 100 U DataDown ByteLen=10
U 3 10 125 -7.2 5.01s 00E2 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
D R2 9 500 -85.0 ---- 00D2 1 0 - 100 U DataDown ByteLen=10
U 4 10 125 -7.2 4.43s 00E3 1 0 0 224 U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
U RXParamSetAns RX1DROffset=1, RX2DR=1, CH=1
```
username_0: The basic problem seems to be that the LoPy no longer sees downlinks after responding to a mac command. Because of this, it's retransmitting the mac response repeatedly.
If I issue another mac command even in this state, the downlink counter increments and the LoPy does respond to the command. But non-mac downlinks are still not seen.
DSharpPlus/DSharpPlus | 1183732182 | Title: Wrong regex for slash command names
Question:
username_0: # Summary
Change the regex used to check slash command names to allow use of Unicode characters.
# Details
The `Utilities.IsValidSlashCommandName()` method uses the following regex: `new Regex(@"^[\w-]{1,32}$", RegexOptions.ECMAScript)`. Specifying the `RegexOptions.ECMAScript` option limits command names to `[a-zA-Z_0-9]` characters, but the Discord API [allows Unicode characters in command names](https://discord.com/developers/docs/interactions/application-commands#application-command-object-application-command-naming).
So when you try to set a command name or a command option name to something like this: "имя", the `System.ArgumentException: Invalid slash command option name specified. It must be below 32 characters and not contain any whitespace. (Parameter 'name')` is thrown.
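A quick standalone repro of the difference (`имя` is the Cyrillic example above):
```csharp
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        // With ECMAScript semantics, \w matches only [a-zA-Z_0-9].
        var ecma = new Regex(@"^[\w-]{1,32}$", RegexOptions.ECMAScript);
        // Without the option, .NET's \w is Unicode-aware, matching what Discord allows.
        var unicode = new Regex(@"^[\w-]{1,32}$");

        Console.WriteLine(ecma.IsMatch("имя"));    // False
        Console.WriteLine(unicode.IsMatch("имя")); // True
    }
}
```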
Removing the `RegexOptions.ECMAScript` option will allow the use of Unicode characters in command names. |
microsoft/BotFramework-Composer | 747277774 | Title: the sidebar icon is missing its highlight when clicking the root item in the project tree
Question:
username_0: ## Describe the bug

## Version
bot-project
## Browser
- [ ] Electron distribution
- [ ] Chrome
- [ ] Safari
- [ ] Firefox
- [x] Edge
## OS
- [ ] macOS
- [x] Windows
- [ ] Ubuntu
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Expected behavior
<!-- Give a clear and concise description of what you expected to happen. -->
## Screenshots
<!-- If applicable, add screenshots/gif/video to help explain your problem. -->
## Additional context
<!-- Add any other context about the problem here. --><issue_closed>
Status: Issue closed |
UniMath/UniMath | 301407705 | Title: testing with modern coq
Question:
username_0: See issue #921 - Coq is tested against UniMath, so
our travis testing should ensure that UniMath builds also with the forthcoming version
of Coq, as well as with the past one. For example, from coq 8.7 to 8.8 there is a change
in the way the coq_makefile program works, and thus a change to our Makefile could break
the build.
Answers:
username_0: @Zimmi48 & @username_1 -- If we also test UniMath against a modern version of Coq in our travis testing, should it be the master branch of Coq? Or is that too fragile?
username_1: Master should be fine; since we test UniMath upstream, it should not get broken.
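A build matrix along these lines would cover both versions (a sketch; the `COQ_VERSION` variable and the build command are illustrative, not UniMath's actual CI configuration):
```yaml
# .travis.yml (sketch)
language: generic
env:
  - COQ_VERSION=v8.7    # last released version
  - COQ_VERSION=master  # forthcoming version
matrix:
  allow_failures:
    - env: COQ_VERSION=master  # don't block merges on upstream breakage
script:
  # illustrative; UniMath's real build entry point may differ
  - make COQ_BRANCH=$COQ_VERSION
```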
Status: Issue closed
|
ember-cli/ember-cli | 94138927 | Title: Countless issues when installing v1.13.1 on Windows
Question:
username_0: C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\sock
et.io-client\node_modules\engine.io-client\node_modules\ws>if not defined npm_config_node_gyp (node "C:\Program Files\no
dejs\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild ) else (node rebuild )
Building the projects in this solution one at a time. To enable parallel build, please add the "/m" switch.
bufferutil.cc
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(213): error C2039: 'ThrowException'
: is not a member of 'v8' [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules
\socket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(213): error C2039: 'New' : is not a
member of 'v8::String' [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\so
cket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\.node-gyp\0.12.5\deps\v8\include\v8.h(1599) : see declaration of 'v8::String'
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(213): error C3861: 'ThrowException':
identifier not found [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\soc
ket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(213): error C3861: 'New': identifier
not found [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node
_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(218): error C2039: 'ThrowException'
: is not a member of 'v8' [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules
\socket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(218): error C3861: 'ThrowException':
identifier not found [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\soc
ket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(222): error C2039: 'New' : is not a
member of 'v8::String' [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\so
cket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\.node-gyp\0.12.5\deps\v8\include\v8.h(1599) : see declaration of 'v8::String'
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(222): error C3861: 'New': identifier
not found [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node
_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(224): error C2039: 'New' : is not a
member of 'v8::String' [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\so
cket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\.node-gyp\0.12.5\deps\v8\include\v8.h(1599) : see declaration of 'v8::String'
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(224): error C2660: 'v8::Integer::New
' : function does not take 1 arguments [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem
\node_modules\socket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vc
xproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(224): error C3861: 'New': identifier
not found [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node
_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(229): error C2039: 'ThrowException'
: is not a member of 'v8' [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules
\socket.io\node_modules\socket.io-client\node_modules\engine.io-client\node_modules\ws\build\bufferutil.vcxproj]
C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\socket.io\node_modules\soc
ket.io-client\node_modules\engine.io-client\node_modules\ws\node_modules\nan\nan.h(229): error C2039: 'New' : is not a
member of 'v8::String' [C:\Users\User\AppData\Roaming\npm\node_modules\ember-cli\node_modules\testem\node_modules\so
[Truncated]
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected])
├── [email protected]
├── [email protected] ([email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected])
├── [email protected] ([email protected])
├── [email protected] ([email protected])
├── [email protected] ([email protected], [email protected], [email protected])
├── [email protected] ([email protected], [email protected], [email protected], fast-sourcemap-
[email protected], [email protected])
├── [email protected] ([email protected], [email protected])
├── [email protected] ([email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected])
└── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected], xm
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected].
8, [email protected], [email protected], [email protected], [email protected], [email protected])
```
Obviously I can't preserve the colors but if I could, pretty much that entire wall of text would be red due to all of the errors.
I have been struggling with ember-cli errors on Windows for two weeks now. At least v0.2.7 gave me a new project with only a few errors - 1.13.1 doesn't even generate a Brocfile.js when I create a new project now.
Answers:
username_1: This error is indeed frustrating (I share your pain).
* issue tracking this: https://github.com/ember-cli/ember-cli/issues/3993
* pending a 2 month old issue (submitted) on socket.io to get a release -> https://github.com/Automattic/socket.io/issues/2107
* another tracking issue on testem https://github.com/airportyh/testem/issues/588
Status: Issue closed
username_0: Thanks, I was checking the main website for information regarding the disappearance of Brocfile.js and I missed the [release notes](https://github.com/ember-cli/ember-cli/releases/tag/v1.13.0) where this was mentioned.
username_1: Thanks, the website is updated: https://github.com/ember-cli/ember-cli/commit/bea665fc0d969e4d9efe4f7d3771a26451063a4b |
PSMGGamesSS2015/PSMG_SS_2015_PuLa | 102546416 | Title: movement trigger fixes
Question:
username_0: Jump triggers need to be resized and generally made bigger.
The raycast for StickyWallScript needs to be placed higher on the y-axis, as it collides with the platform triggers.
Status: Issue closed
Answers:
username_0: 1400dab5e75ffaee862820413818d4f62f20dcb3: Changed the collider trigger from OnTriggerEnter to OnTriggerStay. This greatly improves jumping on tiles.
b084904a8e1b70f5084833a59fd36963a930f9c3: The raycast origin is no longer located on the ground; it now has an increased y-axis value to prevent colliding with tiles on the floor.
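A Unity sketch of both fixes (class, tag, and field names are illustrative, not taken from the repo):
```csharp
using UnityEngine;

public class MovementTriggerCheck : MonoBehaviour
{
    [SerializeField] private float raycastHeightOffset = 0.5f; // lifts the ray origin off the floor
    [SerializeField] private float wallCheckDistance = 0.3f;

    private bool grounded;

    // OnTriggerStay fires every physics step while overlapping, so a jump
    // check is far less likely to be missed than with OnTriggerEnter.
    private void OnTriggerStay(Collider other)
    {
        if (other.CompareTag("Tile")) grounded = true;
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Tile")) grounded = false;
    }

    // StickyWallScript fix: a raised origin keeps the ray clear of the
    // platform triggers lying on the floor.
    private bool TouchingStickyWall()
    {
        Vector3 origin = transform.position + Vector3.up * raycastHeightOffset;
        return Physics.Raycast(origin, transform.forward, wallCheckDistance);
    }
}
```
How the grounded flag and the wall check are split across scripts is a repo-specific detail. |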
RHEFAR/magazine | 439469223 | Title: Feature 1
Question:
username_0: pra feature
Answers:
username_0: feature completed
Status: Issue closed
username_0: pra feature
stuck
Status: Issue closed
username_0: feature complete
username_0: pra feature
stuck
create feature complete
[User Story Baru Kel 2 baru.docx](https://github.com/username_0/magazine/files/3137184/User.Story.Baru.Kel.2.baru.docx)
Status: Issue closed
username_0: complete
username_0: pra feature
stuck
create feature complete
[User Story Baru Kel 2 baru.docx](https://github.com/username_0/magazine/files/3137184/User.Story.Baru.Kel.2.baru.docx) |
ppy/osu | 392506206 | Title: Hard Rock: Great/Meh hit results not in position
Question:
username_0: This is what happens when playing on HR:














Status: Issue closed
Answers:
username_1: You should not close the issue if it isn't resolved; you should keep it open. Anyway, I tried to reproduce it, and it seems that some random "hit words" appear when playing with Hard Rock (I tried with all the other mods and I can confirm it). Here's a video where this happens: https://streamable.com/ibsxb
username_0: Sorry for closing the issue. I thought it was just my eyes or something happening on my screen. Nope, that's real.
Status: Issue closed
|
rubysherpas/paranoia | 126710114 | Title: Please add Source Code URL to https://rubygems.org/gems/paranoia
Question:
username_0: There is no link to https://github.com/rubysherpas/paranoia on the https://rubygems.org/gems/paranoia page (the homepage links to https://rubygems.org/gems/paranoia). Adding a link would make it easier to get to the github repository from the gem page.
This must be done from the https://rubygems.org/gems/paranoia/edit page by a gem owner.
Thank you.
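For future releases this could also be set in the gemspec, which rubygems.org picks up automatically on publish (a sketch; only the metadata lines are the point here):
```ruby
# paranoia.gemspec (excerpt)
Gem::Specification.new do |spec|
  spec.name     = "paranoia"
  spec.homepage = "https://rubygems.org/gems/paranoia"

  # rubygems.org renders these as "Source Code" / "Bug Tracker" links
  spec.metadata = {
    "source_code_uri" => "https://github.com/rubysherpas/paranoia",
    "bug_tracker_uri" => "https://github.com/rubysherpas/paranoia/issues"
  }
end
```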
Answers:
username_1: Big +1 for this, please 😄
username_2: 
Status: Issue closed
|
flairNLP/flair | 551287622 | Title: Transfer learning for flair with custom data
Question:
username_0: I am trying to apply transfer learning to the pre-trained NER sequence tagger model with data of the type shown below, but when I tried to train on that data it reported bad epochs, and the model's performance became worse after training.
This was the data I used for training:

This was the output before training:

This is how training is done:

This was the output of training:

This was the output of the model after training:

So can anyone please help me check:
i) if the training data is in the correct format
ii) if there are any issues in the code
iii) why bad epochs are occurring
Answers:
username_1: @username_0 I'd be interested to know the solution.
username_0: @username_1 I'm working on the same thing, and if anything works for me I will post it here. I tried training the model with around 200 utterances but it failed. Can you please share the training data format you were using?
username_2: For some reason the model is not training well. Please present all the training code (not as an image), especially how `corpus` is loaded.
Maybe the label `per` must be uppercased.
username_0: @username_2 I will look into changing `per` to `PER`. The training code is as follows:
```python
from typing import List
from flair.data import Corpus, Sentence  # Sentence import was missing
from flair.datasets import ColumnCorpus
from flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings, CharacterEmbeddings, FlairEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# column format: token in column 0, NER tag in column 1
columns = {0: 'text', 1: 'ner'}
data_folder = '/code'
corpus: Corpus = ColumnCorpus(data_folder, columns, train_file='train.csv', dev_file='dev.csv', test_file='test.csv')
tag_type = 'ner'
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
embedding_types: List[TokenEmbeddings] = [WordEmbeddings('glove'), CharacterEmbeddings(), FlairEmbeddings('news-forward'), FlairEmbeddings('news-backward')]
embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)

# note: load() is a classmethod returning the pre-trained 'ner' model,
# so the constructor arguments on this line are effectively discarded
tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=True).load('ner')
trainer = ModelTrainer(tagger, corpus)
trainer.train('resources/taggers/example-ner', learning_rate=0.1, mini_batch_size=32, max_epochs=150)

model = SequenceTagger.load('resources/taggers/example-ner/best-model.pt')
sentence = Sentence('saivenkat')
model.predict(sentence)
print(sentence.to_tagged_string())
```
username_3: @username_0 is right. The original model is trained on _uppercase_ labels, while your training data is partially _lowercase_. Make the labels in your custom training data uppercase and it should work.
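A one-off sketch for uppercasing the tag column of a two-column corpus file (file names follow the snippet above; adjust as needed):
```python
# uppercase the NER tag column (column 1) of a whitespace-separated corpus file
with open('train.csv') as src, open('train_upper.csv', 'w') as dst:
    for line in src:
        parts = line.rstrip('\n').split()
        if len(parts) == 2:              # a "token tag" line
            parts[1] = parts[1].upper()  # 'per' -> 'PER'
        dst.write(' '.join(parts) + '\n')
```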
username_0: Thank you @username_2 for your valuable suggestion, and @username_3: it worked brilliantly on sentences ending with the name, like "I am sreya". The thing is, when I tried to predict names in a sentence like "I am sreya and i am from hyderabad", it did not predict the label.
Do I have to change the training data into sentence format, or is the format I am following enough?
username_3: Sorry, just looked at your training data, which basically does not make sense at all. Is this just a list of names? You do not need deep learning for this, you could just make a lookup-table or something. However, training data should be syntactically and semantically correct sentences with annotated named entities.
username_0: @username_3 I want to retrieve the name, age, gender, and other personal information of a person from raw text that does not have any syntactic or semantic structure. So can you please suggest what to follow? Thank you :)
username_3: If you do not have valid running text, I'd say sequence tagging with deep learning is not the way to go, if deep learning at all. [Wikidata](https://www.wikidata.org) as a knowledge base and a simple rule-based system might be an interesting starting point. If you have `hyderabad` and want to know what kind of entity it is, you could query Wikidata and get [Hyderabad](https://www.wikidata.org/wiki/Q1361)'s page. Now you have a lot structured information about Hyderabad and can extract in a deterministic way what you want to know about it.
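A minimal sketch of such a lookup, using Wikidata's public `wbsearchentities` API (error handling omitted):
```python
import requests

def wikidata_lookup(term: str) -> dict:
    """Search Wikidata for a term and return the top match's metadata."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": term,
            "language": "en",
            "format": "json",
        },
    )
    results = resp.json().get("search", [])
    return results[0] if results else {}

match = wikidata_lookup("hyderabad")
print(match.get("id"), match.get("description"))  # e.g. Q1361 ...
```
From the returned entity ID, the full structured record is available via the `wbgetentities` action or the entity's JSON endpoint.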
username_0: Thanks a lot. I will work on it.
username_3: I don't think so; the model didn't overfit, it didn't learn anything at all. The task is sequence tagging, i. e. the tag of a token depends on the sequence it is part of. His training data has no sequence information at all, it's just a list of names or places or whatever.
Furthermore, the model he tries to fine-tune depends on Flair embeddings, which are context-sensitive, so a vector representation of a token depends on the other tokens in the sequence. That is another reason a model like this cannot learn _anything_ from this training data.
username_2: I disagree. It can be validated by training log and `monitor_train = True`, but I will not do it.
Status: Issue closed
|
mhammond/pywin32 | 276433524 | Title: Medusa fails to start
Question:
username_0: Running `python medusa` fails to start with Import errors.
```
Traceback (most recent call last):
File "C:\Python\Python-2.7.14-x86\lib\runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "C:\Python\Python-2.7.14-x86\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Python\src\medusa\medusa\__main__.py", line 63, in <module>
from configobj import ConfigObj
ImportError: No module named configobj
```<issue_closed>
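The traceback points at a missing third-party dependency rather than a pywin32 problem; assuming pip is available, installing it should clear the import error:
```
pip install configobj
```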
Status: Issue closed |
masonmark/masons-vscode-theme | 670661686 | Title: The stupid word highlighting is fucked
Question:
username_0: How the fuck do we turn off the fucking thing where like nothing is selected and it's all like HI I KNOW YOU DON'T HAVE ANYTHING SELECTED BUT I'M JUST GONNA MAKE IT LOOK LIKE THE WORD THE CURSOR IS IN IS HIGHLIGHTED BECAUSE UM
I thought it might be like `editor.wordHighlightBackground` but that did not work...
Answers:
username_0: this shit:
<img width="579" alt="image" src="https://user-images.githubusercontent.com/122212/103163679-89189e00-4844-11eb-8a46-d61368def6fb.png">
username_0: I don't mean I want to disable that feature (it is highly useful when more than one occurrence is highlighted); I just want the visual appearance to not look like text selection.
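A theme-side sketch that keeps the feature but changes its look (the hex values are placeholders to taste):
```jsonc
// in the theme's "colors" section
"editor.wordHighlightBackground": "#00000000",        // read occurrence: invisible fill
"editor.wordHighlightStrongBackground": "#00000000",  // write occurrence: invisible fill
"editor.wordHighlightBorder": "#3a3d41"               // subtle border instead of a selection-like fill
```
(For completeness: the user setting `"editor.occurrencesHighlight": false` would disable the feature outright.) |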
canjs/devtools | 386351492 | Title: Add "name" data to ViewModel data
Question:
username_0: Along with the `viewModel` data returned here:
https://github.com/canjs/devtools/blob/85d4d86f2c2c2992a7fb51ec9805ece9b02a4ae4/canjs-devtools-injected-script.js#L37-L41
...we should also return data for the `canReflect.getName` value of List/Map types so the name can be displayed in devtools.
This may need to happen in `getSerializedViewModel`:
https://github.com/canjs/devtools/blob/85d4d86f2c2c2992a7fb51ec9805ece9b02a4ae4/canjs-devtools-injected-script.js#L173-L199
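A rough sketch of the shape of the change (the surrounding function follows the linked file only loosely; the exact integration point may differ):
```js
// alongside the serialized viewModel, expose its debug name
getViewModelData(el) {
    const viewModel = canViewModel(el);

    return {
        viewModel: getSerializedViewModel(viewModel),
        // canReflect.getName returns the debug name of List/Map types,
        // e.g. "Todo{}" or "TodoList[]"
        name: canReflect.getName(viewModel)
    };
}
```<issue_closed>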
Status: Issue closed |
bahmutov/github-post-release | 241253699 | Title: A test issue
Question:
username_0: Just a test issue that post-release should comment on.
Status: Issue closed
Answers:
username_0: Version 1.3.10 has been published.
username_0: Version `1.4.0` has been published to NPM.
**Tip:** safely upgrade dependency github-post-release in your project using [next-update](https://github.com/username_0/next-update)
username_0: Version `1.5.0` has been published to NPM. The full
release note can be found at [github-post-release/releases/tag/v1.5.0](https://github.com/username_0/github-post-release/releases/tag/v1.5.0).
**Tip:** safely upgrade dependency github-post-release in your project using [next-update](https://github.com/username_0/next-update) |
naveed92/dm-ocg-octgn | 93771585 | Title: Look at shield function
Question:
username_0: In need of this function.
Target a shield and see it without showing it to the opponent.
Answers:
username_1: Cards cannot be viewed without taking control of them, so:
1) Pass a remoteCall to the shield's owner, giving control of the shield to the player.
2) Peek at the shield.
3) Pass control back to the shield's owner.
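Sketched with the OCTGN Python API (hedged: the function and action names here are illustrative and should be checked against the API docs):
```python
def lookAtShield(card, x=0, y=0):
    # step 1: shields sit under the opponent's control, so ask the
    # current controller to hand the card over first
    if card.controller != me:
        remoteCall(card.controller, "passShieldControl", [card, me])
    else:
        card.peek()  # step 2: reveal the card to this player only

def passShieldControl(card, requester):
    # runs on the controller's side; step 3 would mirror this call back
    card.controller = requester
```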
username_2: You can peek at your opponent's cards without taking control of them, btw. |
vuejs/vuex | 257346031 | Title: When passing object to Vuex's state (e.g. initState), copy it instead of changing its reference directly
Question:
username_0: ### What problem does this feature solve?
In server-side rendering, if you define a separate initial state object outside of Vuex's constructor, it is referenced directly, so all changes to it are shared across store instances. This is very confusing when building a custom Vue SSR project using Vuex.
It would be better if Vuex didn't use the outside state object's reference directly and instead copied it into a new state object every time the Vuex instance is created.
FYI, my workaround is just using object spread syntax: `state: { ...initState }`.
### What does the proposed API look like?
If there's a need to preserve backward compatibility, Vuex could offer an option like `initWithNewContext`.
<!-- generated by vue-issues. DO NOT REMOVE -->
Answers:
username_1: What is the use case for this? Why would you want to mutate the object passed into the store as initial state from outside the store? Isn't it enough to [pass a function to the state option](https://vuex.vuejs.org/en/modules.html#module-reuse)?
IMHO, it would be too complicated to deep copy the state. What if the state includes a user-defined model?
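In code, the difference looks roughly like this (a sketch; `initState` stands in for the shared module-level object from the report):
```js
import Vuex from 'vuex'; // Vue.use(Vuex) assumed

// shared module-level object: reused across SSR requests if referenced directly
const initState = { count: 0 };

// problematic: every store instance mutates the same object
const sharedStore = new Vuex.Store({
  state: initState
});

// safe: a fresh state object is created per store instance
const perRequestStore = new Vuex.Store({
  state: () => ({ ...initState })
});
```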
username_0: @username_1 OK, I got that. My use case is using the exported object to get IntelliSense in VS Code; it also makes type checking in component props simple:
```
props: _.mapValues(roomInfo, o => ({
type: o.constructor,
required: true
})),
```
Status: Issue closed
username_1: @username_0 Thanks, I realized the API reference for the `state` option doesn't mention passing a function. I've already made PR #951. |
ajgraves/aneuch | 108457663 | Title: Fix for QuoteHTML in GetParam when previewing a page
Question:
username_0: Now that GetParam calls QuoteHTML, it's causing some problems in DoEdit when you preview. Initial tests indicate that this doesn't affect actual saving.
Answers:
username_0: The fix needs to be in sub Preview:
`$F{text} = GetParam('text');`
should be
`$F{text} = UnquoteHTML(GetParam('text'));`
username_0: This has been fixed in Commit 56cf2ed
Status: Issue closed
|