repo_name stringlengths 4-136 | issue_id stringlengths 5-10 | text stringlengths 37-4.84M
---|---|---|
gatsbyjs/gatsby | 311455894 | Title: Issues with images and sharp errors
Question:
username_0: Past week I've noticed some odd errors with the sharp plugin
While it's generating thumbnails I'll get 2-3 errors like this
```
Generating image thumbnails [=========] 93/97 12.9 secs 96%
(sharp:11982): GLib-GObject-WARNING **:
invalid uninstantiatable type '(null)' in cast to 'GObject'
(sharp:11982): GLib-GObject-CRITICAL **:
g_object_set_qdata: assertion 'G_IS_OBJECT (object)' failed
(sharp:11982): GLib-GObject-WARNING **:
invalid uninstantiatable type '(null)' in cast to 'GObject'
(sharp:11982): GLib-GObject-CRITICAL **:
g_object_set_qdata: assertion 'G_IS_OBJECT (object)' failed
Generating image thumbnails [========] 97/97 16.5 secs 100%
```
After this, one or more of the pages will have missing images.
If I nuke the node_modules and rebuild, **sometimes it works again**, sometimes not.
It feels like it's an issue with the image/jpg but I can't figure out what the exact problem is. If I swap out the problem image with an image that works, the problem goes away.
Data is coming in from a self-hosted WordPress build with ACF fields
Gatsby related plugins
```
gatsby: ^1.9.246
gatsby-image: ^1.0.43
gatsby-link: ^1.6.40
gatsby-paginate: ^1.0.13
gatsby-plugin-react-helmet: ^2.0.10
gatsby-plugin-react-next: ^1.0.11
gatsby-plugin-sass: ^1.0.25
gatsby-plugin-sharp: ^1.6.41
gatsby-plugin-styled-components: ^2.0.11
gatsby-source-filesystem: ^1.5.28
gatsby-source-wordpress: ^2.0.73
gatsby-transformer-sharp: ^1.6.22
```
Answers:
username_1: Could it be a corrupted file? I would open the image with an ASCII editor and check that the first bytes of data coincide with the file extension.
Are you on Unix?
username_0: That's what I thought at first and tried to narrow it down to where it works and then doesn't.
Any new images added and approx the last 15 images uploaded refuse to get processed, it seems. I've added the same "problem images" to a similar build and they all work just fine, which makes me think it's some sort of config issue.
username_2: I'm experiencing the same issue, any solution?
Status: Issue closed
username_0: @username_2 my theory is that it was some sort of naming collision? I did two things that seemed to help.
1: I included the id for each object in the ACF image object.
2: I gave each image object a unique name (e.g. `team_photo`). All my image ACF fields were named photo, so doing this made it a bit clearer.
Not sure if any of these will help you, but it seemed to fix it for me after I updated all the GraphQL queries to something like below.
```
node {
  id
  title
  acf {
    team_photo: photo {
      id
      localFile {
        id
        childImageSharp {
          resize(width: 800, height: 800) {
            src
          }
        }
      }
    }
  }
}
``` |
svaarala/duktape | 199406647 | Title: Invalid regexp escape in TypeScript 2.1.4
Question:
username_0: Normally I don't report real-world regexp breakages anymore because I know that nonstandard regexp support is still a work in progress. However, since this breaks the TypeScript compiler it seemed worthwhile to call it out. The offending regexp is:
```js
var tripleSlashDirectiveFragmentRegex = /^(\/\/\/\s*<reference\s+(path|types)\s*=\s*(?:'|"))([^\3"]*)$/;
```
I don't know if there are more broken regexps, but that's the one that caused an error for me. It was a bit too convoluted for me to try to find the offending token :)
Answers:
username_1: Ok, I'll try to figure it out.
username_1: true
```
If this is the case, the fix is to use:
```
[^\3"] --> [^\u0003"]
```
username_1: false
```
username_1: I don't have time now, but I'd like to find the backing from ES6 Annex B for this before changing behavior.
username_1: Btw, maybe there's actually a bug in the original RegExp, `\3` in the class ranges would evaluate to U+0003 i.e. ASCII ETX. Or maybe I just misread the whole regexp :-)
username_0: I doubt very much that the intent was to find U+0003, the regexp seems to be for matching docstrings (i.e. `///` comments).
username_0: RegEx101 breaks down the regexp as follows:
```js
/^(\/\/\/\s*<reference\s+(path|types)\s*=\s*(?:'|"))([^\3"]*)$/
```
```
^ asserts position at start of the string
1st Capturing Group (\/\/\/\s*<reference\s+(path|types)\s*=\s*(?:'|"))
\/ matches the character / literally (case sensitive)
\/ matches the character / literally (case sensitive)
\/ matches the character / literally (case sensitive)
\s* matches any whitespace character (equal to [\r\n\t\f\v ])
* Quantifier — Matches between zero and unlimited times, as many times as possible, giving back as needed (greedy)
<reference matches the characters <reference literally (case sensitive)
\s+ matches any whitespace character (equal to [\r\n\t\f\v ])
+ Quantifier — Matches between one and unlimited times, as many times as possible, giving back as needed (greedy)
2nd Capturing Group (path|types)
\s* matches any whitespace character (equal to [\r\n\t\f\v ])
= matches the character = literally (case sensitive)
\s* matches any whitespace character (equal to [\r\n\t\f\v ])
Non-capturing group (?:'|")
3rd Capturing Group ([^\3"]*)
Match a single character not present in the list below [^\3"]*
* Quantifier — Matches between zero and unlimited times, as many times as possible, giving back as needed (greedy)
\3 matches the character with index 38 (310 or 316) literally (case sensitive)
" matches the character " literally (case sensitive)
$ asserts position at the end of the string, or before the line terminator right at the end of the string (if any)
```
username_0: Errata: That should have been "3<sub>8</sub> (3<sub>10</sub> or 3<sub>16</sub>)" for the `\3` step. The subscripting was lost in the copy/paste.
username_1: Ok, that seems to agree with what I was able to figure out.
username_0: It very well could be a bug in the regexp, though I can't imagine what the `\3` was meant to be.
username_0: So it's actually a backreference and not a character escape.
username_0: Also found this, it's apparently a common technique:
http://www.dustindiaz.com/regular-expression-back-matching
username_0: ES5.1 has the same text re: decimal escapes (i.e. it's a backreference), so it looks like this is actually a bug in *Duktape*, not the regexp. ;-)
username_1: Backreferences are not allowed in character classes, and they don't make immediate sense: would you interpret each character in the capture separately?
username_1: If you check the links here: https://github.com/username_1/duktape/issues/1275#issuecomment-271150965, do you read the requirement the same way, i.e. reject a decimal escape other than `\0` in a character class?
username_0: Hm. I'm confused then. It might help solve this mystery if I had an example of a string that this regex matched.
username_1: I think my tests with Node.js indicate pretty conclusively that it is parsed as a legacy octal sequence. It's basically an extension of `\0` which *is* allowed in a character class to denote a NUL character. It then allows you to specify other ASCII characters using legacy escapes.
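For illustration, here is a quick Node.js check of that parsing behavior (a sketch assuming V8's Annex B semantics; it is not taken from Duktape or TypeScript):
```js
// Inside a character class, \3 is parsed as a legacy octal escape for U+0003 (ASCII ETX),
// not as a backreference (backreferences are not allowed inside classes).
console.log(/[\3]/.test("\u0003"));  // true  - the class matches the ETX control character
console.log(/[\3]/.test("3"));       // false - it does not match the literal digit 3
console.log(/[^\3"]/.test("\""));    // false - the negated class still excludes the quote
```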
username_0: That still doesn't answer the question of why they'd want to match ASCII ETX though. Probably a bug like you originally said, maybe I should bring it up on the TypeScript repo.
username_1: Oh that, I guess it might be possible that it was thought to be a backreference - but the 3rd capture group in the regexp is the same group where the character class is. So I'm not sure what the goal there was.
username_0: The comment above that regexp is:
```js
/**
* Matches a triple slash reference directive with an incomplete string literal for its path. Used
* to determine if the caret is currently within the string literal and capture the literal fragment
* for completions.
* For example, this matches
*
* /// <reference path="fragment
*
* but not
*
* /// <reference path="fragment"
*/
```
username_0: Still no idea why they want to match Ctrl+C characters, but the regexp is indeed valid under Annex B semantics it seems like.
username_0: If I remember correctly, TypeScript 2.0 worked fine under Duktape. So this must be related to a new feature in v2.1.
username_0: When I get home I'll check if there are any other regexps in the TypeScript source that break under Duktape.
username_0: I changed `\3` in the offending regexp to `\u0003` and now TypeScript works, so that seems to be the only issue (for now).
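To make the workaround concrete, here is a small sketch of what the patched expression matches (the `\u0003` form is the one described above; the sample strings are illustrative):
```js
var tripleSlashDirectiveFragmentRegex = /^(\/\/\/\s*<reference\s+(path|types)\s*=\s*(?:'|"))([^\u0003"]*)$/;
console.log(tripleSlashDirectiveFragmentRegex.test('/// <reference path="fragment'));   // true  - incomplete string literal
console.log(tripleSlashDirectiveFragmentRegex.test('/// <reference path="fragment"'));  // false - the closing quote ends the fragment
```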
username_1: That's good to hear, adding the legacy octal parsing should be relatively straightforward. I'll try to work up a pull tonight.
username_0: @username_1 Is there still time to get this fix into 2.1.0? I noticed there's no pull yet. It's not a showstopper for me, it's just annoying to have to keep editing the TypeScript compiler every time I upgrade it. ;-)
username_1: I can try tonight - should be time if it's a straightforward change.
Status: Issue closed
|
globalpayments/globalpayments-js | 754709372 | Title: How to tokenize CC without submit iFrame
Question:
username_0: I am trying to tokenize credit card data without using an iframe submit button. In the Heartland example I see cardExternalTrigger as the solution to this; however no requests are made and the `token-success` event does not fire.
```javascript
document.getElementById('place-order-btn').addEventListener('click', function(event) {
  event.preventDefault();
  for (var type in cardForm.fields) {
    if (type === "submit") {
      continue;
    }
    var field = cardForm.frames[type];
    if (!field) {
      continue;
    }
    GlobalPayments.internal.postMessage.post(
      {
        data: {
          fields: cardForm.fields,
          target: field.id
        },
        id: field.id,
        type: "ui:iframe-field:request-data",
      },
      field.id
    );
  }
  cardForm.on("token-success", (resp) => {
    // Save payment token
    this.paymentToken = resp.paymentReference;
    console.log('token recieved')
    // Submit data to the integration's backend for processing
    // that.submitForm();
  });
  cardForm.on("token-error", (resp) => {
  });
});
``` |
flutter/flutter | 558543920 | Title: Awkward ReorderableListView animation on Web
Question:
username_0: Reordering animation looks awkward and a dragging cell shadow is huge.

The code is as simple as this:
```dart
List<String> _list = ["Apple", "Ball", "Cat", "Dog", "Elephant"];

return Scaffold(
  appBar: AppBar(),
  body: ReorderableListView(
    children: _list
        .map((item) => ListTile(
              key: Key("${item}"),
              title: Text("${item}"),
              trailing: Icon(Icons.menu),
            ))
        .toList(),
    onReorder: (int start, int current) {
      // dragging from top to bottom
      if (start < current) {
        int end = current - 1;
        String startItem = _list[start];
        int i = 0;
        int local = start;
        do {
          _list[local] = _list[++local];
          i++;
        } while (i < end - start);
        _list[end] = startItem;
      }
      // dragging from bottom to top
      else if (start > current) {
        String startItem = _list[start];
        for (int i = start; i > current; i--) {
          _list[i] = _list[i - 1];
        }
        _list[current] = startItem;
      }
      // setState(() {});
    },
  ),
);
```
flutter doctor output:
```
[✓] Flutter (Channel dev, v1.14.6, on Mac OS X 10.15.2 19C57, locale en-RU)
• Flutter version 1.14.6 at /Users/andrey/flutter
• Framework revision fabeb2a16f (4 days ago), 2020-01-28 07:56:51 -0800
• Engine revision c4229bfbba
• Dart version 2.8.0 (build 2.8.0-dev.5.0 fc3af737c7)
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/andrey/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = /Users/andrey/Library/Android/sdk
[Truncated]
• Xcode 11.3.1, Build version 11C504
• CocoaPods version 1.8.4
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio (version 3.5)
• Android Studio at /Applications/Android Studio.app/Contents
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] VS Code (version 1.41.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.8.0
[✓] Connected device (2 available)
• Chrome • chrome • web-javascript • Google Chrome 79.0.3945.130
• Web Server • web-server • web-javascript • Flutter Tools
```
Answers:
username_1: The need to pull down too far before reordering (as opposed to animation when pulling up) - true for android as well

username_2: I'm removing the web label because this is a general desktop design issue (you want higher density on the desktop)
/cc @HansMuller
username_0: Huge shadow problem is true for web only. |
simonsobs/so_noise_models | 535352484 | Title: New interface also for SAT?
Question:
username_0: I noticed just now that the new class-based 3.1 model is only for LAT.
It would be great if we had the same interface also for SAT, even though nothing changed from 3.0.4.
This way people have a consistent interface to get noise curves both for SAT and LAT.
If you plan to implement this, please give me an ETA (ideally before mid-January), so I can plan accordingly, otherwise I'll implement a mixed interface where I get LAT noise properties from 3.1 and SAT from 3.0.4.
Answers:
username_1: @username_2 implemented the very nice re-write for the LAT, so I leave the question up to him...
username_2: I should have this ready for review by the end of the day.
Status: Issue closed
username_0: being implemented in #10 |
tarl2019/tarl2019.github.io | 454174892 | Title: Livestream is private
Question:
username_0: Is there any way you can make this public? :pray:
https://slideslive.com/38915490/r09-taskagnostic-reinforcement-learning
Answers:
username_0: Seems fixed. Thanks! https://github.com/tarl2019/tarl2019.github.io/commit/6ff6d23f45b67ec89e95108db8415e0eb0c8ce6e
Status: Issue closed
|
EdenServer/community | 413639481 | Title: Mobs not spawning in Labyrinth of Onzozo.
Question:
username_0: A number of mobs in Labyrinth of Onzozo are missing from the area around moldy earring NM. They were here less than 12 hours ago. I'm counting at least 2 cockatrices and 3 goblins that are not respawning.<issue_closed>
Status: Issue closed |
anheru88/laravel-elixir-browser-sync2 | 106626992 | Title: Am I using this correctly?
Question:
username_0: Hello,
Does this look right?
```javascript
elixir(function(mix) {
  BrowserSync.init();

  mix.sass('app.scss')
    .copy('OrderingApp/fonts','public/ordering_assets/fonts')
    .copy('OrderingApp/images','public/ordering_assets/images')
    .scripts([
      "../bower_components/jquery/dist/jquery.min.js",
      "../bower_components/select2/dist/js/select2.min.js",
      "OrderingApp/js/landing-page.js",
      "OrderingApp/js/menu-page.js"
    ]);

  mix.BrowserSync(
    {
      proxy : "rosasthaicafe.orderswift.loc",
      logPrefix : "Laravel Eixir BrowserSync",
      logConnections : false,
      reloadOnRestart : false,
      notify : true
    });
});
```
Answers:
username_1: Yes, of course. You only need to use a virtual host, and the name my.site is the name of the domain.
Remember, you need to see your app in the address localhost:3000.
Regards.
Angel
username_0: Hi,
This isn't working for me.. this is my setup:
```
elixir(function(mix) {
  BrowserSync.init();

  mix.sass('app.scss')
    .copy('OrderingApp/fonts','public/ordering_assets/fonts')
    .copy('OrderingApp/images','public/ordering_assets/images')
    // .scripts([
    //   "../bower_components/jquery/dist/jquery.min.js",
    // ], 'public/ordering_assets/js/scripts.js')
    .browserify([
      'OrderingApp/js/app.js'
    ], 'public/ordering_assets/js/app.js');

  mix.BrowserSync(
    [
      'OrderingApp/**/*',
      'public/**/*',
      'resources/views/**/*'
    ],
    {
      proxy : "rosasthaicafe.orderswift.loc",
      logPrefix : "Laravel Eixir BrowserSync",
      logConnections : false,
      reloadOnRestart : false,
      notify : true
    });
});
```
I am not using homestead. The page loads but there is an error in terminal:
```
'BrowserSync' errored after 41 ms
[10:47:08] TypeError: undefined is not a function
    at /Users/marksteggles/Desktop/orderswift/node_modules/gulp/node_modules/vinyl-fs/node_modules/glob-watcher/node_modules/gaze/node_modules/globule/lib/globule.js:25:17
    at Array.reduce (native)
```
username_1: Hi, the order is different; please check this code.
```javascript
mix.BrowserSync(
  {
    proxy : "domain.app",
    logPrefix : "Laravel Eixir BrowserSync",
    logConnections : false,
    reloadOnRestart : false,
    notify : false
  }, [
    "app/**/*",
    "public/**/*",
    "resources/views/**/*"
  ]);
```
Please tell me if you can fix the problem.
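For completeness, here is how that corrected argument order fits into the Gulpfile from earlier in this thread (a sketch assembled from the snippets above; the proxy value and watched paths are the ones already posted, and the method casing follows this thread):
```javascript
elixir(function(mix) {
  mix.sass('app.scss');

  mix.BrowserSync(
    {
      proxy : "rosasthaicafe.orderswift.loc",
      logPrefix : "Laravel Eixir BrowserSync",
      logConnections : false,
      reloadOnRestart : false,
      notify : false
    },
    [
      'OrderingApp/**/*',
      'public/**/*',
      'resources/views/**/*'
    ]
  );
});
```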
Status: Issue closed
username_0: Hi, tried that but it didn't work.
I tried another package 'laravel-elixir-browser-sync-simple' and it worked straight away. Sorry :| |
stoicflame/enunciate | 204740709 | Title: NPE in JavaScriptClientModule
Question:
username_0: ```
java.lang.NullPointerException: null
at com.webcohesion.enunciate.modules.javascript_client.JavaScriptClientModule.findExampleResourceMethod(JavaScriptClientModule.java:359) ~[na:na]
at com.webcohesion.enunciate.modules.javascript_client.JavaScriptClientModule.readResource(JavaScriptClientModule.java:332) ~[na:na]
at com.webcohesion.enunciate.modules.javascript_client.JavaScriptClientModule.call(JavaScriptClientModule.java:214) ~[na:na]
at com.webcohesion.enunciate.io.InvokeEnunciateModule.onNext(InvokeEnunciateModule.java:46) ~[na:na]
at com.webcohesion.enunciate.io.InvokeEnunciateModule.onNext(InvokeEnunciateModule.java:25) ~[na:na]
at rx.internal.operators.OperatorDoOnEach$1.onNext(OperatorDoOnEach.java:80) ~[na:na]
at rx.internal.producers.SingleProducer.request(SingleProducer.java:65) ~[na:na]
at rx.Subscriber.setProducer(Subscriber.java:209) ~[na:na]
at rx.Subscriber.setProducer(Subscriber.java:205) ~[na:na]
at rx.internal.operators.OperatorSingle$ParentSubscriber.onCompleted(OperatorSingle.java:111) ~[na:na]
at rx.internal.operators.OperatorTakeLastOne$ParentSubscriber.emit(OperatorTakeLastOne.java:175) ~[na:na]
at rx.internal.operators.OperatorTakeLastOne$ParentSubscriber.onCompleted(OperatorTakeLastOne.java:141) ~[na:na]
at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:648) ~[na:na]
at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:560) ~[na:na]
```
Status: Issue closed
Answers:
username_0: [Enunciate 2.9.0 has been released.](https://github.com/username_0/enunciate/releases/tag/v2.9.0) |
gbif/gbif-data-validator | 385287902 | Title: Newlines in CSV files fail validation
Question:
username_0: DWCA containing newlines (correctly done in CSV) doesn't validate correctly, due to the newlines.
It is ingested correctly.
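For illustration (the rows below are invented, not taken from the attached archive): a correctly quoted CSV field may contain a newline, so one record can span two physical lines, and counting lines instead of records miscounts such files.
```
id,remarks
1,"first line
second line"
```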
[PlantTracker_data_from_2012_onwards.zip](https://github.com/gbif/gbif-data-validator/files/2624919/PlantTracker_data_from_2012_onwards.zip)
https://www.gbif-uat.org/dataset/87a12234-2fae-4075-860a-f60ba007e5e2 if it's still there as a test.
Answers:
username_0: This is probably the line-based (rather than record-based, respecting new lines) deduplication method in FileBashUtils.
I think we have a risky optimization here; we count duplicates a different way in the crawler, which (I think) handles quoted newlines. |
falcosecurity/falco-exporter | 790122097 | Title: Switch Falco-Exporter priorities from int to enum
Question:
username_0: **Motivation**
I would like to have smooth and compliant info in respect to Falco logs.
**Feature**
Instead of exposing metrics with priorities as numbers, it would be great to have them as enum values, like in Falco logs
_Falco logs_
{"output":"Falco internal: syscall event drop. 1 system calls dropped in last second.","output_fields":{"ebpf_enabled":"1","n_drops":"1","n_drops_buffer":"1","n_drops_bug":"0","n_drops_pf":"0","n_evts":"11315"},**"priority":"Critical"**,"rule":"Falco internal: syscall event drop","time":"2021-01-20T16:02:30.753401597Z"}
_Falco-Exporter current metric_
falco_events{container="falco-exporter", endpoint="metrics", hostname="gke-ceiba-prod-cassandra-pool-cb2438d6-0c1m", instance="10.172.15.217:9376", job="falco-exporter", k8s_ns_name="<NA>", k8s_pod_name="<NA>", namespace="auditing", pod="falco-c8tjb", **priority="5"**, rule="Non sudo setuid", service="falco-exporter", source="SYSCALL"}
_Falco-Exporter suggested metric_
falco_events{container="falco-exporter", endpoint="metrics", hostname="gke-ceiba-prod-cassandra-pool-cb2438d6-0c1m", instance="10.172.15.217:9376", job="falco-exporter", k8s_ns_name="<NA>", k8s_pod_name="<NA>", namespace="auditing", pod="falco-c8tjb", **priority="Critical"**, rule="Non sudo setuid", service="falco-exporter", source="SYSCALL"}
**Alternatives**
I don't see suitable alternatives.
**At least a table with priority value mappings could be added to the README.**
Thanks!
Status: Issue closed
Answers:
username_0: Discussing with @leogr we agreed there is no need for this feature as it involves changes also on Falco side. |
pailhead/three-instanced-mesh | 524735229 | Title: working with transparent objects.
Question:
username_0: I have instanced objects spread around, along with individual non-instanced transparent boxes, and you can see that at certain angles the instanced objects are visible and at other angles they are not:


Any tips or thoughts on how to handle this?
Answers:
username_0: Basically, it's the issue described in this forum comment: https://discourse.threejs.org/t/material-transparency-problem/3822/5
I suppose Three.js' new `InstancedMesh` probaly has the same issue? Because the "origin" of the InstancedMesh is in one place, while the instanced objects might be all over the place. |
ets-labs/python-dependency-injector | 504192864 | Title: Example for Orator ORM
Question:
username_0: Idea of this issue is to create an example on how to use `Dependency Injector` with `Orator ORM`.
Originally, this question was brought in #230.
Acceptance criteria:
- There is an example mini application showing how to use `Dependency Injector` and `Orator ORM`
- Mini application demonstrates benefits of using `Dependency Injector` and `Orator ORM`
- Description of that mini application is published in `Examples` section of `Dependency Injector` docs
Links:
- https://orator-orm.com/
- http://python-dependency-injector.ets-labs.org/examples/index.html |
flomio/flomio-sdk-ios | 326086853 | Title: Disable excessive logging
Question:
username_0: I have a FlojackMsr, and when my application starts running the SDK logs the following messages excessively:
```
sourceTimer Fired
communication:1 CommCounter: 0
Check status comm & bat
elapsedTime: 0.000000
Polling for tags
in didSendStatus
Stat Response time = 0.125901
Stat Sleep Setting = 30
Battery Level:100
timer running on <NSThread: 0x1c0462b80>{number = 5, name = (null)} thread
```
Is there any way to stop it?
Answers:
username_1: It seems like Flomio releases their SDKs in the DEBUG configuration… all the logging is still enabled, and sometimes these debug features can actually crash your app (see issue #5).
Why on earth would you release a compiled framework with debug logging enabled? Why is the framework doing *anything* with the app bundle's resources?
Why is there NO WAY to disable debugging features like this? Adding or removing the DEBUGLOG does nothing, because the framework is already compiled with it enabled.
username_2: I will be working on these issues tomorrow and will update here when I have updated the framework.
username_2: Excessive logging should be disabled for you now with the latest 3.0.2 release along with fixing the issue with referencing the main bundle. Apologies for these issues, let me know if you have any others.
Status: Issue closed
|
microsoft/playwright | 811590361 | Title: [BUG] Can't run Playwright in Nix
Question:
username_0: **Context:**
- Playwright Version: 1.8.0
- Operating System: Linux
- Node.js version: 12
- Browser: All
**Describe the bug**
Running Playwright in Nix Shell throws a `missing dependencies` error. Even after explicitly adding the missing dependencies, the same error is thrown.
```
Error: browserType.launch: Host system is missing dependencies!
--
| Missing libraries we didn't find packages for:
| libexpat.so.1
| libxshmfence.so.1
```
Answers:
username_1: @username_0 we don't support Nix at the moment and this is the first request we've received regarding it so far.
So until there's a huge demand for Nix support, this will be low-pri for us. Is there something easy we can do to enable you use Playwright on Nix?
username_2: This is one case where it's not located there, and loading it from there leads to broken behaviour.
username_3: maybe via an env var `await spawnAsync(process.env.LDCONFIG_PATH || '/sbin/ldconfig', ['-p'], {});`
? Or if that is an unwanted API it would also be thinkable to make it a fallback from the generic `ldconfig` (w/o qualified path, e.g. `command -v ldconfig` or `which ldconfig`) and if that one is not found to fall back to `/sbin/ldconfig`. Or if we want to make the speed sacrifice for more exotic users of this package if could be in the catch block of the `/sbin/ldconfig` spawn and introspect the error and retry with `ldconfig` from path?
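A sketch of that lookup-with-fallback idea (illustrative only, not Playwright's actual implementation; `LDCONFIG_PATH` is the hypothetical environment variable proposed above):
```js
const { execFileSync } = require('child_process');

// Prefer an explicitly configured path, then ldconfig from PATH, then the /sbin fallback.
function readLdconfig() {
  const candidates = [process.env.LDCONFIG_PATH, 'ldconfig', '/sbin/ldconfig'].filter(Boolean);
  for (const candidate of candidates) {
    try {
      return execFileSync(candidate, ['-p'], { encoding: 'utf8' });
    } catch (err) {
      // not found or not executable - try the next candidate
    }
  }
  throw new Error('ldconfig not found');
}
```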
username_2: Alternatively an option to entirely disable validating dependencies would work even better: I'm noticing `ldconfig` is only used to provide an error message suggesting how to install missing dependencies.
username_1: @username_2 @username_3 Is this the only thing that breaks Playwright on Nix? In this case, would symlinking ldconfig to `/sbin/ldconfig` solve it on your end?
username_3: A system can have multiple nix profiles. Which one is used is dependent on the environment started. These virtualizations only affect the `PATH` (and a few other things like env) but not the file system, so changing a symlink is not really idiomatic as you would suddenly bind the static filesystem to a specific, arbitrary nix profile.
username_1: @username_3 if this is the only thing that doesn't work on Nix - then sure, we can come up with something as long as it's non-intrusive for the codebase & API.
But is this the only thing that prevents you from running on NIX? For example, I'll be surprised if our browser builds launch on NIX. Have you tried running them?
username_2: The browsers require a wrapper around them to launch on nix: https://github.com/ludios/nixos-playwright
username_1: @username_2 impressive!
So to double-check: the `ldconfig` is the last thing that stops you folks from using playwright on nix?
Would you be open to send a PR that detects *NIX system and disables launch doctor there? I cannot guarantee we land it, but we'll discuss it. I'd appreciate if it's as non-invasive as possible.
username_2: We can make it agnostic to nix: look if `ldconfig` is on `PATH`, if not try `/sbin/ldconfig`, if that doesn't exist either fail?
username_4: I have made a PR that allows disabling host requirements validation, please have a look https://github.com/microsoft/playwright/pull/5806.
username_5: Since the PR for skipping the host validation got merged and there won't be official supported added we will close it for now, feel free to reopen / upvote it when there is a higher demand.
Status: Issue closed
|
Metaxal/quickscript | 285252779 | Title: crashes on startup
Question:
username_0: this is the error

(full text below)
While /Users/username_0/Library/Preferences/quickscript is created, library.rktd is not present on this system.
I'm investigating and will let you know what I find out.
Stephen
(happy new year!)
--
load: contract violation
expected: file-exists?
given: #<path:/Users/username_0/Library/Preferences/quickscript/library.rktd>
in: an and/c case of
the 1st argument of
(->*
()
((and/c path-string? file-exists?))
library?)
contract from:
<pkgs>/quickscript/library.rkt
blaming: <pkgs>/quickscript/tool.rkt
(assuming the contract is correct)
at: <pkgs>/quickscript/library.rkt:79.2
context...:
/Applications/Racket v6.11/collects/racket/contract/private/blame.rkt:163:0: raise-blame-error16
...ow-val-first.rkt:306:25
/Users/username_0/Library/Racket/6.11/pkgs/quickscript/tool.rkt:211:8: reload-scripts-menu
/Applications/Racket v6.11/collects/racket/private/class-internal.rkt:3553:0: continue-make-object
/Applications/Racket v6.11/share/pkgs/deinprogramm-signature/deinprogramm/signature/tool.rkt:18:6
/Applications/Racket v6.11/collects/racket/private/class-internal.rkt:3553:0: continue-make-object
/Applications/Racket v6.11/share/pkgs/htdp-lib/stepper/xml-tool.rkt:339:8
/Applications/Racket v6.11/collects/racket/private/class-internal.rkt:3553:0: continue-make-object
/Applications/Racket v6.11/share/pkgs/htdp-lib/stepper/stepper-tool.rkt:221:2
/Applications/Racket v6.11/collects/racket/private/class-internal.rkt:3553:0: continue-make-object
/Applications/Racket v6.11/share/pkgs/htdp-lib/test-engine/test-tool.scm:73:6
/Applications/Racket v6.11/collects/racket/private/class-internal.rkt:3553:0: continue-make-object
/Applications/Racket v6.11/share/pkgs/htdp-lib/xml/text-box-tool.rkt:21:5
/Applications/Racket v6.11/collects/racket/private/class-internal.rkt:3553:0: continue-make-object
[repeats 1 more time]
/Applications/Racket v6.11/collects/racket/private/class-internal.rkt:3507:0: do-make-object
...
Answers:
username_1: Ha! That's a case of a too strict contract. The file does not need to exist
for the procedure to work.
The contract should be
(->*
[]
[path-string?]
library?)
I'm traveling so can't update it right now.
Happy new year's eve!
username_0: Happy new year to you too!
username_1: I'm marking this as closed but feel free to reopen it if it doesn't work for you.
Status: Issue closed
|
574BandOfBrothers/memoriae | 259995401 | Title: Send Suzan Üsküdarlı a message using Piazza to inform me about:
Question:
username_0: a. Your group name
b. Your project repo URL
Answers:
username_0: Group Name: BandOfBrothers decided with a poll http://www.easypolls.net/poll.html?p=59c3d234e4b011fc06d44ed1
username_0: Project Page opened a name: memoriae
Real project name may be changed:https://github.com/574BandOfBrothers/memoriae
username_0: Suzan Üsküdarlı and team members informed via piazza. This issue can be closed.
Status: Issue closed
|
shiplab/vesseljs | 337861406 | Title: Clickable objects in 3d, with editable info in the GUI
Question:
username_0: Let the user click an object, and show basic info of this object in the GUI control (id, l, b, d, file3D, colors)
Answers:
username_0: Is this working? If so, can we have a simple example? @username_1
username_1: Not now. The `ObjectMenu` that I defined in the conflicting branch outlines how to do a GUI that can switch target. Object picking is typically done by casting a ray from the camera through the mouse pointer with `THREE.Raycaster`. Since multiple objects may intersect the ray, one may need a way to cycle between them, e.g. `tab` or mouse wheel. Or use picking only in a deck-wise 2D view.
You are welcome to use and improve the `ObjectMenu`.
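For reference, a minimal sketch of the picking approach described above (it assumes an existing three.js `scene`, `camera` and `renderer`; the variable names are illustrative and this is not code from this repo):
```js
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

renderer.domElement.addEventListener('pointerdown', (event) => {
  // Convert the mouse position to normalized device coordinates (-1 to +1).
  const rect = renderer.domElement.getBoundingClientRect();
  pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  raycaster.setFromCamera(pointer, camera);
  const hits = raycaster.intersectObjects(scene.children, true);
  if (hits.length > 0) {
    // Hits are sorted by distance; cycling through them (e.g. with Tab or the mouse
    // wheel, as suggested above) would let the user reach occluded objects too.
    console.log('picked', hits[0].object);
  }
});
```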
username_0: Thanks! I'll close the issue for now!
Status: Issue closed
|
dwalton76/rubiks-cube-NxNxN-solver | 285999864 | Title: 5x5x5: use lookup table for edge pairing
Question:
username_0: Edge pairing on 5x5x5 takes about 70 moves on average :-1: Having #33 would solve this problem for 5x5x5 cubes but all cubes larger than 5x5x5 reduce down to a 5x5x5 to pair their edges...the tsai solver won't help in those scenarios.
Using a lookup table should bring the move count down a lot...my best guess is it will take around 40 moves. |
Azure/azure-powershell | 165191351 | Title: TestFramework for AutoRest generated clients does not expose Graph client creation API
Question:
username_0: However Hyak based TestFramework exposed this API. We need to port it over to unblock partner teams.
Answers:
username_0: Published new Microsoft.Rest.ClientRuntime.Azure.TestFramework version [1.3.0-preview](https://www.nuget.org/packages/Microsoft.Rest.ClientRuntime.Azure.TestFramework/1.3.0-preview) to nuget with the fix
Status: Issue closed
|
cvut/fittable | 90786884 | Title: Change default layout to vertical?
Question:
username_0: I think that the familiarity of the horizontal layout may be confusing for users; they will probably expect that fittable is just a redesign of the current timetable with a 14-day schedule.
What do you think about it, @MMajko, @username_2, @username_1?
Answers:
username_1: Welp, that's true. I like the horizontal layout more, because you can fit more info on a smaller space (and also monitors are usually oriented horizontally), but the confusion from the Timetable layout similarity can be dangerous.
username_2: Choose the one with less bugs for now.
username_0: @username_2 vertical layout?
username_0: I can’t decide, both have some bugs, each different ones.
username_1: 
username_0: Yeah, making GUI is f*cking hard, there are millions of tiny details. Textual interfaces are much simpler and often easy to use. Hmm, we should definitely create a terminal fittable!
username_2: Programming is hard, let's go shopping.
username_0: Why shopping? o.O
username_2: http://itre.cis.upenn.edu/~myl/languagelog/archives/002892.html
Status: Issue closed
|
runelite/runelite | 538081193 | Title: Skill Calculator: Do not reset text inputs when refreshing screen.
Question:
username_0: **Is your feature request related to a problem? Please describe.**
When figuring out how many actions you have left until the next XP goal: if you refresh the current skill you are on, the entire calculator resets and you need to retype all fields.
**Describe the solution you'd like**
Have the XP/Level goal and the typed-in action boxes not reset when refreshing.
**Additional context**
This should only happen if you're refreshing the current stat and not when you do a skill calc on another skill and return.
Example: I calculate the number of battle staffs until 84 crafting when I am at 75 crafting. I craft a bit and want to know the new total; I should be able to click 'Crafting' and get an updated count of whatever I was previously interested in, with the goal of 84 crafting still set.
Status: Issue closed |
naver/billboard.js | 546602566 | Title: onrendered callback is fired before chart area finishes
Question:
username_0: ## Description
<!-- Detailed description of the issue -->
For most chart types, when the `onrendered` callback is set, it is fired before the chart element finishes rendering.
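A minimal reproduction sketch of the reported setup (only the `onrendered` option comes from this report; the element id and data values are assumptions):
```js
var chart = bb.generate({
  bindto: "#chart",
  data: {
    columns: [["data1", 30, 200, 100]]
  },
  onrendered: function() {
    // reported to fire before the chart area has finished rendering
    console.log("onrendered fired");
  }
});
```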
Status: Issue closed
Answers:
username_0: :tada: This issue has been resolved in version 1.12.0 :tada:
The release is available on:
- [npm package (@latest dist-tag)](https://www.npmjs.com/package/billboard.js/v/1.12.0)
- [GitHub release](https://github.com/naver/billboard.js/releases/tag/1.12.0)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket: |
hr-js/sta_access_manager | 265504206 | Title: Make full use of GitHub's features
Question:
username_0: The GitHub features we are using at the moment are the following:
- Issue
- Wiki
- Pull requests
We should be able to do a lot more with it, so I'd like to make the most of it. Things that come to mind:
- Label
- Projects
- Milestone
- Issue Template
That said, there's no point if focusing only on using these features stalls the development flow, so sharing this with everyone and gathering opinions is a must.
Answers:
username_1: [こっそり始めるGit/GitHub超入門(12):開発者のスケジュール管理に超便利、GitHub Issues、Label、Milestone、Projects使いこなし術 (1/4) - @IT](http://www.atmarkit.co.jp/ait/articles/1704/28/news032.html)
This site looks like it could be a useful reference.
It might be worth looking through it when you find the time.
username_1: @username_0
About how we run things: I heard that you can configure who is able to view it... I looked into it but couldn't quite figure it out, so could you paste the URL?
username_0: @username_1
I sent this on Slack too, but it seems the wiki and the source are basically public. (Looks like I had misunderstood. Sorry...)
I looked into it, and most document-sharing services seem to be paid, so I'll look into integrating Google Drive with Slack.
username_1: I think that works. We may need to check the free tier (in terms of the user limit)... Anyway, I'll look into the free tier on my end. After that, if it still looks feasible, we can consider it. How does that sound?
username_1: Free for up to 5 users:
https://ja.atlassian.com/software/bitbucket/pricing?tab=cloud
Hmm, it seems doable if we think about how to approach it.
I'm also thinking it might be worth trying to negotiate for a paid plan at some point.
username_0: @username_1
http://japan.blogs.atlassian.com/2012/09/refer-a-friend-to-bitbucket-for-free-users/
It looks like you can invite up to 3 more people if they don't already have a Bitbucket account. But since that's a 2012 article, I'm not sure it still works..
Let me step back to the starting point for a moment.
__Even when a Google Drive document is updated, the other members have no way of knowing it was updated__
I think that is the biggest bottleneck.
So if we follow the flow below, I think we can get by for now with GitHub issues + Google Drive. Inconvenient as it is...
1. Create a spec-related issue on GitHub
1. Update the document on Google Drive
1. Close the issue
As for how we manage documents, it might be better to treat that as a separate topic.
username_0: Closing this, since we'll consider using these features as appropriate.
Status: Issue closed
|
scaleway/scaleway-cli | 108290495 | Title: When connecting using SSH, check if host keys are available in the userdata
Question:
username_0: Depends on: https://github.com/scaleway/image-tools/issues/36
If the host keys are available you can :
1. display fingerprints
2. register fingerprint on the local host (ssh-keygen) + remove this: https://github.com/scaleway/scaleway-cli/blob/master/pkg/utils/utils.go#L83
I suggest you begin with 1, as the userdata API is still very young and most of the actual images won't have scripts that generate the fingerprints<issue_closed>
Status: Issue closed |
fossasia/open-event-frontend | 406563158 | Title: api/auth.py: Implement the OAuth login using the requests-oauthlib library
Question:
username_0: **Describe the bug**
The current implementation of the auth.py module uses requests library to make calls to the API vendors. The project already has the requests-oauthlib dependency which is rarely used but is a better solution. This will help keep the code consistent among the different vendor requests.
**Expected Behaviour**
Use requests-oauthlib instead of requests library
**Additional context**
I am working on it. |
phetsims/scenery | 355339491 | Title: drag `end` callback should be nested under dragListener's release event. (Same with press)
Question:
username_0: After discussing with @username_2 and @username_1, it is clear that there seems to be a general problem for how phet-io events are handled in overridden methods up and down a type hierarchy. In PressListener and DragListener, it manifests most clearly in the `press` and `release` methods. This is because the phet-io event is fully encapsulated in the PressListener (parent) method, and so anything that is changed because of the code in the DragListener (child) method will not be properly nested.
This is further complicated because of the new pattern we are generally adopting for phet-io, in which we convert events on a type to be `{{event}}Emitter`s that emit in the event call, and have a listener that does the meat of the method function. This way we get phet-io event management (among other things) for free, built into the instrumented Emitter.
For this particular case, we are going to try passing a callback up the stack for PressListener to call in the listener added to `this._pressedEmitter`.
Answers:
username_0: Marking for stable release of Faradays law.
username_0: I also just uninstrumented PressListener because it was causing problems in https://github.com/phetsims/faradays-law/issues/117.
username_0: I added above a bug fix for supporting phetioReadOnly in PressListener.
username_0: @username_2, I'm mainly looking for a review of https://github.com/phetsims/scenery/commit/12078fbd93fa78f7fde0afe14bebbc2cef73cdc1, everything else is minor, but please take a look at PressListener and DragListener at master to be sure.
username_1: @username_0 can you please convert the entire file to ES5 for now? There are a few arrow functions that sneaked in.
username_1: Also to discuss with @username_2, we are seeing down and pointerdown on the start, but nothing on the end:

Should we pass through the `{Event}` where possible (even though in some cases like an interruption, it will be undefined)?
username_0: converted back to es5 above, because the es6 conversion should be done on @username_2's terms. @username_2 please review the changes to press, release, and drag for `PressListener` and `DragListener`.
username_2: @ariel-phet, can I do this at the same time as handling the other pending drag/input items? With either I'll be fully reviewing the current setup. And what is the priority? (Can I handle this AND tag on the other input-related changes as a high priority?)
username_0: @ariel-phet could we get an update on this?
username_2: I'll be continuing more of the review tonight, will note progress and notes in the relevant issues. (If I stream-of-consciousness review and record, it would include a lot of duplicated things).
username_0: Updating to new labels signifying that this issue is blocking phet-io deployments.
username_0: Marking high, to hopefully be done before GQ goes out next month.
username_0: Is there a reason to add the listener in two different ways:
```js
this._pressedEmitter.addListener( this.onPress.bind( this ) );
// @private {Emitter} - Emitted on release event
this._releasedEmitter = new Emitter( {
. . .
// The main implementation of "release" handling is implemented as a callback to the emitter, so things are nested
// nicely for phet-io.
listener: this.onRelease.bind( this )
} );
```
username_2: No reason, I wasn't familiar with passing a listener directly into the Emitter constructor. I'll use that for the pressedEmitter.
Status: Issue closed
username_0: Looks great. Thanks for the review. Closing
username_1: Please see https://github.com/phetsims/axon/issues/186 |
xamarin/Xamarin.Forms | 699583128 | Title: [Bug] MultiBinding StringFormat with MultiValueConverter
Question:
username_0: ### Description
`StringFormat` of `MultiBinding` does not use the result of `MultiValueConverter` as a parameter.
### Steps to Reproduce
Add `MultiBinding` with `StringFormat="Result of MultiValueConverter={0}"` and `Converter`
### Expected Behavior
`Result of MultiValueConverter={0}`
### Actual Behavior
`Result of MultiValueConverter={FirstBindingValue}`
### Basic Information
- Version with issue: 4.8.0.1364
- Last known good version: -
- IDE: 16.7.3
Answers:
username_1: I'm not sure whether this is the same bug or not, but Multibinding is broken in Xamarin Forms 4.8. After trying to implement all the examples in the docs without success, I found this example: https://github.com/username_2/MultiBindingPlayground
It works perfectly with the version of Xamarin Forms that it references (4.7), but stops working as soon as you upgrade the nuget package to 4.8.
P.D. Also, for reference, this was my question in StackOverflow when I was unable to make it work: https://stackoverflow.com/questions/64244447/multibinding-in-xamarin-forms-element-is-null
username_2: I have updated https://github.com/username_2/MultiBindingPlayground to Xamarin.Forms 4.8.0.1534 and the sample continue working as expected:

Are you using a different version?
username_1: How weird. I’ll be out of the office until Tuesday, but I’ll double check as soon as I go back (I did try it in the iOS emulator, though, don’t know if that will make any difference).
username_1: Ok, I don't know whether it was the fact that an updated nuget package of XF4.8 was released just after I tested the code, or it was the fact that I was expecting it to fail (as it was also failing in my project) and I made a mistake, but yes, I redownloaded @username_2 example and it works perfectly with the latest stable XF version.
As I mentioned in StackOverflow, even weirder, _my own code_, that didn't work last week, today does. So either the XF update did something, I was visited by gremlins, or I somehow chained several silly mistakes when I tried to use MultiBinding for the first time...
So, please ignore my comments, and sorry for the noise 😞
username_3: I vote on that ^^^. that happens to all of us
Status: Issue closed
|
Optum/dce | 1056431546 | Title: How to uninstall
Question:
username_0: Hi,
I have installed dce with the cli version 0.4.0
I ve got a lof of issues, I think due to control Tower so I would like to uninstall everything and start from a fresh instllation. How can I unistall everything in a clean way.
Thanks for your help |
apple/cups | 755906486 | Title: cupsGetDests2 shows deleted printers
Question:
username_0: Dear Cups Maintainers,
Fedora 33
# rpm -qa cups\*
cups-pk-helper-0.2.6-10.fc33.x86_64
cups-pdf-3.0.1-10.fc33.x86_64
cups-libs-2.3.3-18.fc33.x86_64
cups-client-2.3.3-18.fc33.x86_64
cups-ipptool-2.3.3-18.fc33.x86_64
cups-filesystem-2.3.3-18.fc33.noarch
cups-2.3.3-18.fc33.x86_64
cups-libs-2.3.3-18.fc33.i686
cups-filters-libs-1.28.5-3.fc33.x86_64
cups-filters-1.28.5-3.fc33.x86_64
I clean up a bunch of my unused printers.
Problem: printers with the same long name, differing only at the end, still show in certain programs.
$ lpstat -a
B4350 accepting requests since Thu 29 Oct 2020 01:36:30 PM PDT
Cups-PDF accepting requests since Tue 30 Apr 2019 04:05:39 PM PDT
Virtual_PDF_Printer accepting requests since Tue 29 Sep 2020 03:13:17 AM PDT
Which is the way it is supposed to be, and matches http://127.0.0.1:621 and Printer Admin.
But programs reading printers using cupsGetDests2 still get the old deleted printers:
The C text: https://bugs.documentfoundation.org/attachment.cgi?id=167701
The Binary: https://bugs.documentfoundation.org/attachment.cgi?id=167702
#include <iostream>
#include <cups/cups.h>

int main() {
    cups_dest_t* dests;
    int nCount = cupsGetDests2(CUPS_HTTP_DEFAULT, &dests);
    for (int i = 0; i < nCount; i++) {
        cups_dest_t dest = dests[i];
        std::cout << dest.name << std::endl;
    }
}
$ list-printers
B4350
Cups-PDF
Cups_PDF_rn6 <-- deleted
Oki_B4350_on_dev_lp0_rn6 <-- deleted
Virtual_PDF_Printer
Virtual_PDF_Printer_rn6 <-- deleted
Programs without the problem (a sampling):
Brave Browser, Firefox, Vivaldi, Water Fox, Leafpad, Simple scan, Gimp, Inkscape, Thunderbird, Geany, Shotwell, PDF Studio 2019
Programs with the problem (also a sampling):
Wine, Libre Office, Free Office, Master PDF Editor
Any ideas? Is cupsGetDests2 not the proper way of doing this?
-T
Answers:
username_1: I tried to reproduce that on my Ubuntu Linux: installed a printer with the mentioned naming scheme, deleted it. On Ubuntu 20.10 with CUPS 2.3.3 installed, the above list-printers command does not show the deleted printers. The list result is identical to lpstat -a or cat /etc/printcap.
username_0: I have seen other people on the fedora mailing list complaining of the same thing.
Here is my printcap and printers.conf. The extra deleted printer are nowhere to be found
[printers.conf.txt](https://github.com/apple/cups/files/5646359/printers.conf.txt)
From https://code.woboq.org/gtk/include/cups/cups.h.html I am seeing:
extern int cupsGetDests2(http_t *http, cups_dest_t **dests) _CUPS_API_1_1_21;
But I can't read C, so I don't know how cupsGetDests2 is generating its list.
Do you know how cupsGetDests2 is generating its list?
Status: Issue closed
username_0: Follow up:
These are not extras that did not delete when I deleted the originals. What transpired was that I was experimenting with several ways to access a parallel port card and had created several printers using "_rn6" at the end of their names. "_rn6" is the host name of the computer.
What I "thought" were un-deleted printers was actually the name cups tacks onto a printer that is shared on the network. And it took me several days to realize I was looking at a coincidence. "cupsGetDests2" in its "ultimate wisdom" lists both the local name and the shared name:
Cups-PDF <-- local name
Cups_PDF_rn6 <-- shared name
Virtual_PDF_Printer <-- local name
Virtual_PDF_Printer_rn6 <--shared name
Now I will go wipe some eggs off my face. Thank you for your patience. |
LBNL-UCB-STI/beam | 448480039 | Title: Migrate UrbanSim to be output at the root
Question:
username_0: It is currently output per iteration, however each time it is duplicative. This means that these should only be loaded once at the overall start of BEAM - and utilized throughout from memory
Answers:
username_0: This one was created after talking with @username_2 and he should be able to speak to it better. However I believe he was referring to things like plans, household, etc - but I am iffy on that.
username_1: @username_2 could you clarify this ticket?
username_2: What specific output are we talking about here? There is no "UrbanSim" output that I'm aware of.
Can you give me the file names you are referring to?
Status: Issue closed
username_0: If there is no clear path forward then I'd say we should close this one. Maybe my recollection was off or I had a misunderstanding from our talk then. If it was important then it'll resurface at some point :) |
sindresorhus/execa | 151756970 | Title: Reimplement maxBuffer
Question:
username_0: We lost this when we abandoned execFile.
It should probably be a feature of `getStream`. I will open an issue there.
Reference: https://github.com/nodejs/node/blob/31600735f40e02935b936802ff42d66d49e9769b/lib/child_process.js#L274<issue_closed>
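For context, the general idea behind a stream-level maxBuffer looks roughly like the sketch below (illustrative only; this is neither execa's nor get-stream's actual code):
```js
function bufferStream(stream, maxBuffer) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    let length = 0;
    stream.on('data', chunk => {
      length += chunk.length;
      if (length > maxBuffer) {
        // Stop reading and fail fast once the limit is exceeded.
        stream.destroy();
        reject(new Error('maxBuffer exceeded'));
        return;
      }
      chunks.push(chunk);
    });
    stream.once('error', reject);
    stream.once('end', () => resolve(Buffer.concat(chunks)));
  });
}
```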
Status: Issue closed |
fossasia/pslab-android | 248619172 | Title: shift experiment doc from html to md files
Question:
username_0: **Actual Behaviour**
Currently we use the HTML doc files from the PSLab Desktop App.
**Expected Behaviour**
All the docs have been shifted to md files, so the experiment docs should also be shifted to md. This would considerably reduce the app size too.
**Steps to reproduce it**
NA
**LogCat for the issue**
NA
**Screenshots of the issue**
NA
**Would you like to work on the issue?**
Yes<issue_closed>
Status: Issue closed |
umlsynco/umlsync-framework | 120198828 | Title: [content][diagrammer] Make open/edit buttons for embedded diagrams
Question:
username_0: - another buttons for embedded diagrams
- another actions for embedded diagram
Answers:
username_0: - Added Open button for embedded diagram
- Handled Open-click - to prevent markdown edit propagation
username_0: - [ ] Class -> Field/Method -> EditableBehavior could not extract isEmbedded flag from parent
- [ ] Handle edit/view mode
Status: Issue closed
|
shequet/sendMail2Github | 908180634 | Title: Re: Problème ABCD
Question:
username_0: THANK YOU VERY MUCH!
On Tue, Jun 1, 2021 at 13:04, <EMAIL> wrote:
> Ok, we will look into the problem in question.
> Bye.
--
Regards,
<NAME>
Tel.: 06 95 23 91 19
Mail: <EMAIL>
Status: Issue closed
Answers:
username_0: User: <EMAIL>
sdksqjdksqjdk k fkljsdkfjsdkfj jdfkdsjfklsdjfk
GoogleCloudPlatform/opentelemetry-operations-python | 729858581 | Title: Remove "gcp.resource_type" resource attribute
Question:
username_0: Instead, use OTel resource semantic conventions. See Go reference impl https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/blob/402639db27fdf8e5143cfd8a871d40cbe10d4bd2/exporter/metric/metric.go#L365 |
scikit-learn/scikit-learn | 314978443 | Title: Is there any way to reduce the processing time for feature selection
Question:
username_0: <!--
If your issue is a usage question, submit it here instead:
- StackOverflow with the scikit-learn tag: http://stackoverflow.com/questions/tagged/scikit-learn
- Mailing List: https://mail.python.org/mailman/listinfo/scikit-learn
For more information, see User Questions: http://scikit-learn.org/stable/support.html#user-questions
-->
<!-- Instructions For Filing a Bug: https://github.com/scikit-learn/scikit-learn/blob/master/CONTRIBUTING.md#filing-bugs -->
#### Description
<!-- Example: Joblib Error thrown when calling fit on LatentDirichletAllocation with evaluate_every > 0-->
#### Steps/Code to Reproduce
<!--
Example:
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
docs = ["Help I have a bug" for i in range(1000)]
vectorizer = CountVectorizer(input=docs, analyzer='word')
lda_features = vectorizer.fit_transform(docs)
lda_model = LatentDirichletAllocation(
n_topics=10,
learning_method='online',
evaluate_every=10,
n_jobs=4,
)
model = lda_model.fit(lda_features)
```
If the code is too long, feel free to put it in a public gist and link
it in the issue: https://gist.github.com
-->
#### Expected Results
<!-- Example: No error is thrown. Please paste or describe the expected results.-->
#### Actual Results
<!-- Please paste or specifically describe the actual output or traceback. -->
#### Versions
<!--
Please run the following snippet and paste the output below.
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import sklearn; print("Scikit-Learn", sklearn.__version__)
-->
<!-- Thanks for contributing! -->
Answers:
username_1: check the RAM usage of your system.
username_2: Would computing scores over a sample suffice?
Status: Issue closed
username_3: Seems like this is more a question for stackoverflow:
http://scikit-learn.org/dev/faq.html#what-s-the-best-way-to-get-help-on-scikit-learn-usage |
chechu/serverless-dynamodb-fixtures | 742482101 | Title: Plugin seems to be called before CF variables are evaluated
Question:
username_0: <!--
1. Please check if an issue already exists so there are no duplicates
2. Fill out the whole template so we have a good overview on the issue
3. Do not remove any section of the template. If something is not applicable leave it empty but leave it in the Issue
4. Please follow the template, otherwise we'll have to ask you to update it
-->
# This is a Bug Report
## Description
For bug reports:
It's not possible to use `!Ref` or `!GetAtt` expressions in the `custom.fixtures` section.
Example:
```
fixtures:
rules:
- table: '${{self:provider.stage}}-foobar' # !Ref FoobarTable doesn't work
enable: true
sources:
- ./fixtures/foobars.json
```
Using `!Ref` renders a mysterious error message about keys not being according to format:
`... at 'requestItems' failed to satisfy constraint: Map keys must satisfy constraint: [Member must have length less than or equal to 255, Member must have length greater than or equal to 3, Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+]`
It took me a while to realize that this was due to the `!Ref`; the literal value `!Ref: FoobarTable` was apparently used instead of the table name resolved through the CF function.
Similar or dependent issues:
N/A
## Additional Data
* ***Operating System***: Ubuntu
* ***Stack Trace***:
* ***Provider Error messages***:
```
Serverless: Invoke fixtures
Serverless: Loading fixtures for table [object Object]
Validation Exception -----------------------------------
ValidationException: 1 validation error detected: Value '{[object Object]=[WriteRequest(putRequest=PutRequest(item={name=AttributeValue <snip> }), deleteRequest=null)]}' at 'requestItems' failed to satisfy constraint: Map keys must satisfy constraint: [Member must have length less than or equal to 255, Member must have length greater than or equal to 3, Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+]
at Request.extractError
```
(retracted)
Answers:
username_1: I really need this feature for my project. Can't use the plugin meanwhile.
username_2: same here :/ |
rubenv/sql-migrate | 375944845 | Title: Call to github.com/gobuffalo/packr#Box.Bytes yields deprecation warning
Question:
username_0: When using `migrate.PackrMigrationSource`, this error shows up:
```
[DEPRECATED] github.com/gobuffalo/packr#Box.Bytes has been deprecated.
Use github.com/gobuffalo/packr#Box.Find instead.
```
Answers:
username_1: Would you mind sending a PR that fixes this?
username_0: Extremely busy right now, if I find some free time I'll try, but you'll probably be able to solve this in much shorter time :)
username_1: Am currently travelling and not anywhere near a computer, so sadly not :-)
Status: Issue closed
username_1: Thanks a lot, appreciate the kind words!
Spread the love! |
vbagirov/test-reports | 921580028 | Title: www.cybersport.ru
Question:
username_0: ### Issue URL (Ads)
[https://www.cybersport.ru/](https://adguardteam.github.io/AnonymousRedirect/redirect.html?url=https%3A%2F%2Fwww.cybersport.ru%2F)
### Screenshots
<details>
<summary>Screenshot 1</summary>

</details>
<details>
<summary>Screenshot 2</summary>

</details>
### System configuration
Information | value
--- | ---
AdGuard product: | AdGuard for Mac v2.5.4.973 release
Browser: | Chrome
Stealth mode options: | Hide your search queries, <br>Send Do-Not-Track header, <br>Remove X-Client-Data header from HTTP requests, <br>Strip URLs from tracking parameters, <br>Self-destructing third-party cookies (180), <br>Block Push API, <br>Block Location API, <br>Block Java
DNS filtering: | server: `tls://dns.adguard.com`<br>filters: `https://filters.adtidy.org/mac_v2/filters/15.txt, https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/gambling/hosts, TestTitle, https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/gambling-porn/hosts, Test 1`
Filters: | <b>Ad Blocking:</b><br/>AdGuard Base<br/><br/><b>Privacy:</b><br/>AdGuard Tracking Protection, <br/>AdGuard URL Tracking<br/><br/><b>Social Widgets:</b><br/>AdGuard Social Media<br/><br/><b>Language-specific:</b><br/>AdGuard Russian
Userscripts: | https://userscripts.adtidy.org/beta/popup-blocker/2.5/popupblocker.meta.js,<br>https://userscripts.adtidy.org/beta/adguard-extra/1.0/adguard-extra.meta.js
Answers:
username_0: ### Issue URL (Ads)
[https://www.cybersport.ru/](https://adguardteam.github.io/AnonymousRedirect/redirect.html?url=https%3A%2F%2Fwww.cybersport.ru%2F)
### Screenshots
<details>
<summary>Screenshot 1</summary>

</details>
<details>
<summary>Screenshot 2</summary>

</details>
### System configuration
Information | value
--- | ---
AdGuard product: | AdGuard for Mac v2.5.4.973 release
Browser: | Chrome
Stealth mode options: | Hide your search queries, <br>Send Do-Not-Track header, <br>Remove X-Client-Data header from HTTP requests, <br>Strip URLs from tracking parameters, <br>Self-destructing third-party cookies (180), <br>Block Push API, <br>Block Location API, <br>Block Java
DNS filtering: | server: `https://security.cloudflare-dns.com/dns-query`<br>filters: `https://filters.adtidy.org/mac_v2/filters/15.txt, https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/gambling/hosts, Test 1, https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/gambling-porn/hosts, Test 1, DnsUserRules`
Filters: | <b>Ad Blocking:</b><br/>AdGuard Base<br/><br/><b>Privacy:</b><br/>AdGuard Tracking Protection, <br/>AdGuard URL Tracking<br/><br/><b>Social Widgets:</b><br/>AdGuard Social Media<br/><br/><b>Language-specific:</b><br/>AdGuard Russian
Userscripts: | https://userscripts.adtidy.org/beta/adguard-extra/1.0/adguard-extra.meta.js,<br>https://userscripts.adtidy.org/beta/popup-blocker/2.5/popupblocker.meta.js
username_0: ### Issue URL (Ads)
[https://www.cybersport.ru/](https://adguardteam.github.io/AnonymousRedirect/redirect.html?url=https%3A%2F%2Fwww.cybersport.ru%2F)
### Screenshots
<details>
<summary>Screenshot 1</summary>

</details>
<details>
<summary>Screenshot 2</summary>

</details>
### System configuration
Information | value
--- | ---
AdGuard product: | AdGuard for Mac v2.5.5.999 release
Browser: | Chrome
Stealth mode: | disabled
DNS filtering: | server: `https://dns.adguard.com/dns-query`<br>filters: `https://filters.adtidy.org/mac_v2/filters/15.txt, https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/fakenews/hosts, DnsUserRules`
Filters: | <b>Ad Blocking:</b><br/>AdGuard Base<br/><br/><b>Language-specific:</b><br/>AdGuard Russian, <br/>AdGuard German, <br/>AdGuard Japanese, <br/>Frellwit's Swedish Filter
Userscripts: | https://userscripts.adtidy.org/release/assistant/4.3/assistant.meta.js,<br>https://userscripts.adtidy.org/release/adguard-extra/1.0/adguard-extra.meta.js |
ros-planning/moveit_grasps | 696819058 | Title: Generalization for non-cuboid objects
Question:
username_0: The source code only supports cuboid objects.
However, how should non-cuboid objects be handled when using MoveIt Grasps?
Should they be split into smaller bounding boxes somehow? Is there any idea on how to do that? Perhaps some papers on the topic?
sefide/TIL | 824217453 | Title: [Java] How a LocalDateTime instance is created
Question:
username_0: LocalDateTime is one of the classes in the java.time package, added in Java 8 to overcome the shortcomings of Date and Calendar, the date/time classes used in older Java versions. Date and Calendar are mutable classes, whereas LocalDateTime is built as an immutable class, so I dug into its source to revisit how immutable classes are written.
An instance can be created with the static factory method of.
<br>
**_LocalDateTime.of_**
``` java
public static LocalDateTime of(int year, Month month, int dayOfMonth, int hour, int minute, int second) {
LocalDate date = LocalDate.of(year, month, dayOfMonth);
LocalTime time = LocalTime.of(hour, minute, second);
return new LocalDateTime(date, time);
}
```
LocalDateTime is a class that represents two concepts, a date and a time:
it holds a LocalDate, which represents only the date, and a LocalTime, which represents only the time, as final fields.
<br>
**_LocalDate.of_**
``` java
public static LocalDate of(int year, Month month, int dayOfMonth) {
YEAR.checkValidValue(year);
Objects.requireNonNull(month, "month");
DAY_OF_MONTH.checkValidValue(dayOfMonth);
return create(year, month.getValue(), dayOfMonth);
}
```
Like LocalDateTime, LocalDate also creates its instances through the of method.
Since LocalDate manages the date information directly, it validates the year/month/day values and then calls the factory method create.
<br>
**_LocalDate.create_**
``` java
private static LocalDate create(int year, int month, int dayOfMonth) {
if (dayOfMonth > 28) {
int dom = 31;
switch (month) {
case 2:
dom = (IsoChronology.INSTANCE.isLeapYear(year) ? 29 : 28);
break;
case 4:
case 6:
case 9:
case 11:
dom = 30;
break;
}
if (dayOfMonth > dom) {
if (dayOfMonth == 29) {
throw new DateTimeException("Invalid date 'February 29' as '" + year + "' is not a leap year");
} else {
throw new DateTimeException("Invalid date '" + Month.of(month).name() + " " + dayOfMonth + "'");
}
}
}
return new LocalDate(year, month, dayOfMonth);
}
```
[Truncated]
* Constructor, previously validated.
**/
private LocalDate(int year, int month, int dayOfMonth) {
this.year = year;
this.month = (short) month;
this.day = (short) dayOfMonth;
}
```
The constructor is private, so outside code cannot create instances arbitrarily.
In addition, the comment notes that each field value must already have been validated.
<br>
``` java
private LocalDateTime(LocalDate date, LocalTime time) {
this.date = date;
this.time = time;
}
```
Finally, LocalDateTime itself also creates its instance through a private constructor.
Answers:
username_0: Five rules to follow to make a class immutable (a small example follows below):
- Do not provide any methods that modify the object's state.
- Make the class impossible to extend (restrict inheritance).
- Declare all fields final.
- Declare all fields private.
- Make sure nothing but the class itself can access its internal mutable components (perform defensive copies in the constructor, accessors, and readObject).
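As a small illustration of these rules (a made-up example, not JDK source):
```java
// A tiny value class that follows the five rules above, in the same style as
// LocalDate/LocalDateTime: private final fields, a private constructor,
// a static factory method, and no mutators.
public final class Point {                  // cannot be extended
    private final int x;                    // all fields private and final
    private final int y;

    private Point(int x, int y) {           // only the class itself can construct
        this.x = x;
        this.y = y;
    }

    public static Point of(int x, int y) {  // static factory, like LocalDateTime.of
        return new Point(x, y);
    }

    public int getX() { return x; }         // accessors only, no state changes
    public int getY() { return y; }
}
```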
username_0: https://github.com/username_0/TIL/blob/main/java/LocalDateTime_Creator.md
Status: Issue closed
|
CodeKal/About-CodeKAL | 265567390 | Title: Write a "Welcome to CodeKAL" program.
Question:
username_0: "Welcome to CodeKAL" programı yazın ve "Programming Languages/kullandığınız_dil" altına dosyayı yükleyin. Örnek Hello World programını kullanabilirsiniz. Ardından "Hello World" kısmını *"Welcome to CodeKAL"* ile değiştirin. |
vlang/v | 483264514 | Title: REPL : Adding space remove result display
Question:
username_0: ```
Answers:
username_1: That's not an undefined behavior. This check adds `println` to your value
```go
!(line.contains(' ') || line.contains(':') || line.contains('=') || line.contains(',') || line == '')
```
username_2: something to print
```
username_3: ```
Status: Issue closed
|
2sic/2sxc | 282398866 | Title: [Documentation request] Send e-mail feature
Question:
username_0: **I'm submitting a ...**
[x] feature request
**...about**
[x] edit experience / UI
[x] admin experience UI
[x] Razor templating
[x] APIs like REST
I know this is probably the worst place to make such requests, but would it be possible to create a very simple demo with static content on how to use 2sxc to send a templated e-mail?
I've been fiddling with mobius forms, but the controller is way too complex to understand between the possible configs and languages, etc.
For example, I have this view:
```
<div class="sc-element">
@Edit.Toolbar(settings: new { show = "always" })
</div>
<div>
This is the test form
</div>
<div>
<select id="testq1">
<option value="na" selected>N/a</option>
<option value="s">Sim</option>
<option value="n">Não</option>
</select>
</div>
<div>
<textarea id="testq2" rows="4" cols="50" placeholder="Notas"></textarea>
</div>
<div>
<input type="text" id="testq3" style="width: 30px">
</div>
<div>
<input type="submit" id="submit" value="Gravar auditoria" onClick="postData()">
</div>
<script type="text/javascript" src="/desktopmodules/tosic_sexycontent/js/2sxc.api.min.js" data-enableoptimizations="100"></script>
<script>
var today = new Date();
var $todaydate = today.getFullYear()+'-'+(today.getMonth()+1)+'-'+today.getDate();
function postData() {
function getData() {
return {
testq1: $("#testq1").val(),
testq2: $("#testq2").val(),
testq3: $("#testq3").val(),
testdate: $todaydate,
testuser: "@Dnn.User.Username"
};
}
var newItem = getData();
$2sxc(@Dnn.Module.ModuleID).webApi.post('app/auto/content/testdata', {}, newItem);
$2sxc(@Dnn.Module.ModuleID).webApi.post('Form/ProcessForm', newItem, true);
[Truncated]
set { m_Success = value; }
}
private bool m_Success;
[JsonProperty("error-codes")]
public List<string> ErrorCodes
{
get { return m_ErrorCodes; }
set { m_ErrorCodes = value; }
}
private List<string> m_ErrorCodes;
}
```
Is it possible to use this has a demo to send an e-mail with this static content as an example?
Best regards,
João
Status: Issue closed
Answers:
username_1: this is easy to do but has nothing to do with 2sxc. basically if you research c# Mail.SendMail you should find everything you need.
Moebius has a lot of features as you noticed, including dynamic templates etc. but if all you want is a simple mail you'll want to just google that. |
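For readers looking for a starting point, here is a rough sketch of the System.Net.Mail approach hinted at above. The SMTP host, credentials, addresses and field names are placeholders and nothing here is 2sxc-specific:
```csharp
using System.Net;
using System.Net.Mail;

public static class MailHelper
{
    // Builds a very simple "templated" body from the submitted fields and sends it.
    public static void SendFormMail(string answer, string notes, string value)
    {
        var body = $"Answer: {answer}\nNotes: {notes}\nValue: {value}";

        var message = new MailMessage("[email protected]", "[email protected]")
        {
            Subject = "New audit submission",
            Body = body
        };

        using (var client = new SmtpClient("smtp.example.com", 587))
        {
            client.Credentials = new NetworkCredential("smtp-user", "smtp-password");
            client.EnableSsl = true;
            client.Send(message);
        }
    }
}
```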
kwakahara/study_making_201411 | 57302335 | Title: Unnecessary spaces between the product name and the comma, and between the comma and the quantity, in the output file
Question:
username_0: The output is a CSV file, so no spaces may be inserted.
Output file of the current program (example):
```
小倉百人一首 , 3
いぬぼうかるた , 5
```
@2cc7f50
Answers:
username_1: Confirmed.
After merging to master, please close this issue.
username_0: I verified the behavior on my side as well.
Evidence below.

@872a175
Status: Issue closed
|
wp-cli/wp-cli | 233157245 | Title: Composer install with `--optimize-autoloader` breaks WP frontend
Question:
username_0: Hello,
I try to use wp-cli 1.2.0 as a composer dependency (I want to have it at `./vendor/bin/wp`, regardless of the server or environment).
**When installing / updating composer with option `--optimize-autoloader`, it produces following fatal error in WP frontend :**
```
Fatal error: Uncaught Error: Call to undefined function WP_CLI\Dispatcher\get_path() in /home/foo/www/project/vendor/wp-cli/wp-cli/php/WP_CLI/Runner.php on line 49
```
I see two majors problems :
**1. Obviously, wp-cli bootstrap is not compatible with composer `--optimize-autoloader` option**
**2. It shows that some part of wp-cli is loaded in front, despite it is a command line tool**
To me, second point is very annoying, it means that instability / bugs in wp-cli could break WP frontend. As it is a command line tool, no single file of wp-cli should be loaded in front. In fact, classic usage of composer auto-loading is meant to avoid this type of problems : as long as a class is not necessary, don't load it.
I see there already is an issue on related subject https://github.com/wp-cli/wp-cli/issues/4051, and I think we should rethink wp-cli composer auto-loading process, to get back to a more classical use of composer auto-loading. At current state, it is really blocking to use wp-cli as a composer dependency on big WordPress projects (that we normally deploy with a `composer install --optimize-autoloader` step).
I hope this bug report will help,
Bests regards,
Pierre
Answers:
username_1: Hello Pierre,
Thanks for the feedback.
We are indeed trying to work on making WP-CLI be a well-behaving Composer citizen. However, given that most of the existing codebase comes from a procedural approach, this is not without its difficulties. There are still many components within WP-CLI that simply cannot be auto-loaded at all, as they are not classes/interfaces/traits.
Can you share a bit more about your setup to help us pinpoint the current issues?
username_2: I have this issue too. I use own project structure looks like Bedrock (https://github.com/roots/bedrock)/
username_3: We should restore the pre-1.2.0 behavior here. It might be as simple as a kill switch in our bootstrap process that bails when the context isn't CLI.
username_1: The parts that activate WP-CLI through Composer are the procedural files that get loaded through the Composer `"files"` autoloader directive. This is just part of dealing with procedural file in the autoloader.
So, basically, every command that is added through the `"files"` directive will at least call the `WP_CLI::add_command()` method.
We can however put a kill switch into `WP_CLI::add_command()` that will immediately bail when not in CLI.
username_3: Yes, this seems like the most sensible approach to me (as hacky as it sounds).
username_1: Okay, that kill-switch should prevent the errors on the front end. However, I noticed that we should change our "standard" command addition file.
Instead of `if ( ! class_exists( 'WP_CLI' ) return;` we should check for `if ( !defined( 'WP_CLI' ) ) return;`. The class exists on the front end as well if it is part of the same autoloader.
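For illustration, a hedged sketch of that guard at the top of a command file loaded through Composer's `"files"` directive (the command name and class are placeholders):
```php
<?php
// Bail out early when this file is loaded on a normal web request instead of
// under WP-CLI, so nothing gets registered on the front end.
if ( ! defined( 'WP_CLI' ) || ! WP_CLI ) {
    return;
}

WP_CLI::add_command( 'example', 'Example_Command' );
```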
username_3: Yes, although let's hold on doing that for now, because we might want to do something different in #3928
username_3: @username_0 @username_2 Can you test and verify the fix in #4126 ?
username_4: Hello,
Using the "files" property from composer, is it really a good solution ?
Even if the bootstraping file to add the command to WPCli does nothing, PHP still executes the file.
Using your own autoloader when WPCli is initializing would not be better ?
For example, if you using composer as a dependency, it is possible to do a autoloader like this:
```
<?php
use Composer\DependencyResolver\Pool;
use Composer\Factory;
require_once __DIR__ .'/vendor/autoload.php';
// initialize composer
$composer = Factory::create(new Composer\IO\NullIO());
$repo = $composer->getRepositoryManager()->getLocalRepository();
foreach ($repo->getPackages() as $package) {
if ($package->getType() === 'wp-cli-package') {
// If extra key
$extra = $package->getExtra();
$file = isset($extra['register_command_file']) ? $extra['register_command_file'] : 'register_command.php';
$filePath = $composer->getInstallationManager()->getInstallPath($package) . '/' .$file;
if (file_exists($filePath)) {
include $filePath;
}
}
}
```
Additionally, using the WP-CLI commands as a suggestion rather than as a direct dependency of the project would allow developers to select only the commands they need and thus lighten the WP-CLI load.
Like [consolidation/robo](https://github.com/consolidation/Robo/blob/master/RoboFile.php#L413), it is possible to build the phar with the suggestion set.
username_3: Not necessarily, but that issue is already tracked in #3928
The scope of this issue is to fix the unexpected breaking change introduced in 1.2.0
username_0: Thank you for the answers.
I just checked the fix in #4126 and it solves the issue : no more fatal error in WP frontend.
I agree it is probably the best solution for a short-term / 1.2.1 patch.
But in the long term, I would suggest a refactoring of the wp-cli bootstrap like suggested by @username_4, that seems a better solution than https://github.com/wp-cli/wp-cli/issues/3928 because it avoids to load multiple wp-cli command files during WP frontend bootstrap, and it solves the circular dependencies issue between wp-cli framework and wp-cli commands.
Bests regards,
Pierre
username_3: We'll keep it in mind, thanks.
Status: Issue closed
username_2: @username_3 fix #4126 solves the issue. Thanks! |
VumBleBot/Group-Activity | 871188268 | Title: [QA/REVIEW] utils_qa.py review 2
Question:
username_0: ## 📸 Code
```python
# coding=utf-8
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Pre-processing
Post-processing utilities for question answering.
"""
import collections
import json
import logging
import os
from typing import Optional, Tuple
import numpy as np
from tqdm.auto import tqdm
from konlpy.tag import Mecab
import torch
import random
from transformers import is_torch_available, PreTrainedTokenizerFast
from transformers.trainer_utils import get_last_checkpoint
logger = logging.getLogger(__name__)
# Tokenizing
mecab = Mecab()
def tokenize(text):
# return text.split(" ")
return mecab.morphs(text)
# Fix the random seed
def set_seed(seed: int):
"""
Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` (if
installed).
Args:
seed (:obj:`int`): The seed to set.
"""
random.seed(seed)
np.random.seed(seed)
if is_torch_available():
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # if use multi-GPU
torch.backends.cudnn.deterministic = True
[Truncated]
# Ideally the two should be equal, so return the min of them
# train.py uses this to run model training.
if "validation" not in datasets:
raise ValueError("--do_eval requires a validation dataset")
return last_checkpoint, max_seq_length
```
---
## 🤔 Questions
- What exactly is token_is_max_context?
---
## 📚 References
- If there are any references that helped resolve the question, please record them here!
- Books, blog posts, YouTube videos, papers, etc.<issue_closed>
Status: Issue closed |
rParslow/TeamWhisky | 271165149 | Title: CLYNELISH 33 ans 1973 2nd Edition S.V546%
Question:
username_0: CLYNELISH 33 ans 1973 2nd Edition S.V 54,6%<br>
http://ift.tt/2zdhW1n<br>
#TeamWhisky CLYNELISH 33 ans 1973 2nd Edition S.V 54,6% Single Malt Whisky Ecosse/Highlands LMDW http://ift.tt/2zdhW1n 1 900 € <img src="http://ift.tt/2xZkAqa"><br><br>
via Fishing Reports http://ift.tt/2dm5cfF<br>
November 04, 2017 at 05:14AM |
getsentry/sentry-php | 403524335 | Title: Allow error, fatal error and exception handlers to be enabled separately.
Question:
username_0: Some apps have their own built-in error handlers that we want to use to send errors to Sentry, while we may want to use Sentry's built-in fatal error shutdown handler and/or exception handler. In these cases we want to enable/disable the various Sentry error handlers separately.
It looks like at present in 2.x branch we would have to create our own error handler integration, which should be totally doable, but also seems like a waste :)
Answers:
username_1: I believe this is already possible:
```php
$client = ClientBuilder::create([
'dsn' => 'xxx',
'default_integrations' => false,
])->getClient()
```
This would disable the default integrations (which includes the error handler) and you could supply the additional default integrations in the `integrations` key if you do want the other ones.
For context: https://github.com/getsentry/sentry-php/blob/bf9184f4e111b7e5fd79823e8fd87c145e37370b/src/ClientBuilder.php#L95-L113
username_1: Yep, I missed you meant that, sorry about that.
Let's see if we can possibly split them out via either configuration or something else, I actually like this use case but also understand why they were merged... might be a compromise made somehwere. Thanks for bringing this up!
username_2: This would be possible with #762: in that PR I've split the integrations into two separate listeners; you would need to disable the defaults and add only the error listener, not the exception one.
username_0: @username_2 I don't think that helps - I want to use Sentry's fatal error shutdown handler, but not the error handler. The app already has its own error handler that I need to use instead of Sentry's.
username_3: Just asking: since Sentry's error handler should be as transparent as possible it should work with other handlers too, it will catch errors and then just forward them to the handler that was registered previously. Doesn't it works for you?
username_0: No, because it modifies the stacktrace.
username_3: It should not edit the stacktrace as far as I remember. By "it modified the stacktrace" do you mean that you don't want the Sentry's frames to be visible in the stacktrace or that the data of the stacktrace is modified? If that's the case can you please post a little script that reproduces the scenario?
username_0: ```php
<?php
use function Sentry\init;
use function Sentry\captureException;
require('vendor/autoload.php');
set_error_handler(function () {
$backtrace = debug_backtrace();
while ($backtrace && !isset($backtrace[0]['line'])) {
array_shift($backtrace);
}
// The first trace is the call itself.
// It gives us the line and the file of the last call.
$call = $backtrace[0];
// The second call give us the function where the call originated.
if (isset($backtrace[1])) {
if (isset($backtrace[1]['class'])) {
$call['function'] = $backtrace[1]['class'] . $backtrace[1]['type'] . $backtrace[1]['function'] . '()';
}
else {
$call['function'] = $backtrace[1]['function'] . '()';
}
}
else {
$call['function'] = 'main()';
}
error_log("Error in {$call['function']} ({$call['file']})");
});
init(['dsn' => 'http://[email protected]/123' ]);
$foo = $bar[1];
```
Basically, the app figures out the last caller and prints it in the logs as well as any displayed error message. And is already deployed to literally a million websites :D
So the easiest work-around for me will be to make my own custom integration to send fatal errors to Sentry. However, I was thinking it would be nice if Sentry 2.x let me register just the fatal error handler, thus I created this issue.
username_2: I've quickly tested your case, and I've seen that if you register Sentry first, and your handle later, the backtrace is not changed.
I'm not sure if we can fix the behavior in the opposite case, but at least you have a workaround.
username_2: Unfortunately it seems that there's no other way around it. We already do everything that's possible by calling the previous error handler and passing the error forward.
username_0: Would allowing the ErrorHandler class to be extended be a possibility? That way I wouldn't have to duplicate code, I could just have a child class with a different constructor.
username_2: We declared a lot of classes `final` to avoid maintainability issues, like in the past. If we open up such class to extension, we erode our freedom to fix stuff up in the future, to avoid BC.
username_0: Ok. I think it could make sense to break these out into separate classes then, at some point. Assuming I'm not the only person who wants to customize handler registration.
username_3: You may be interested in the linked PR which should solve this issue in a BC-compatible manner. It would be great to have your feedback as you requested the function first and know if what I have implemented will work well for the cases you have in mind 😃
username_3: The linked PR should fix the issue by allowing each handler to be registered separately from the others. @username_0 it would be great if you could take a look at it and give me feedback
Status: Issue closed
|
kubernetes/kubernetes | 415799663 | Title: [Flaky test] Advanced Audit tests are flaky in 1.14-blocking
Question:
username_0: **Which jobs are flaking**:
https://testgrid.k8s.io/sig-release-1.14-blocking#gce-cos-1.14-default
**Which test(s) are flaking**:
- `[sig-auth] Advanced Audit [DisabledForLargeClusters] should audit API calls to create, get, update, patch, delete, list, watch secrets.`
- `[sig-auth] Advanced Audit [DisabledForLargeClusters] should audit API calls to create, get, update, patch, delete, list, watch configmaps.`
**Reasons for flaking**:
Seem to be mostly timeouts, for example: [9538](https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sbeta-default/9538), [9543](https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sbeta-default/9543), [9544](https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sbeta-default/9544).
**Since when has it been flaking**:
Since 2/20.
**Testgrid link**:
https://testgrid.k8s.io/sig-release-1.14-blocking#gce-cos-1.14-default
**Anything else we need to know**:
Latest failure logs can be found in https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sbeta-default/9544.
cc @mortent @kacole2 @username_4 @username_8
/kind flaky-test
/priority important-soon
/sig auth
Answers:
username_0: /milestone v1.14
username_1: /cc @username_2
username_1: This makes me wonder if they may be getting rotated, and I noticed https://github.com/kubernetes/kubernetes/commit/0713f29c289a339fef06647c10fa65d59a169a8d#diff-ff266e54aabe0e2f74de4f944db65edb merged that same day, I'm still testing to see if that impacts the audit logs.
cc @username_3
username_2: If we wanted to test that theory we could temporarily change those file rotation settings back to what they were and see if that fixes the problem.
username_3: @username_1 @username_2 The intent of the change was to check if log rotation was needed more often but still keep the rotation policy the same (new day starts, log file size > 100MB). I'm certainly okay with reverting the change to see if it fixes the bug, but if it does, I'm concerned that it still might flake if the test run crosses hour boundaries.
username_3: Created https://github.com/kubernetes/kubernetes/pull/74911 to revert the log rotation change.
username_2: thank you @username_3, we'll see if that fixes it, if so we brainstorm on how to fix it more permanently
username_4: /kind flake
username_5: https://github.com/kubernetes/kubernetes/pull/74915 merged ~18 hours ago
https://storage.googleapis.com/k8s-gubernator/triage/index.html?test=Advanced%20Audit
I'm not yet seeing a clear difference, and suspect we have more work to do here
username_5: https://storage.googleapis.com/k8s-gubernator/triage/index.html?job=e2e-gce&test=Advanced%20Audit - maybe more relevant since it filters out kops jobs that aren't on release-blocking
The main cluster is our good friend `timed out waiting for condition`
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/audit.go:66
after 5m0s failed to observe audit events
Expected error:
<*errors.errorString | 0xc0002caa60>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/audit.go:745
```
username_2: that one looks a lot better, still curious why its failing at all, I'll dig in a bit more
username_6: Once DynamicAudit goes to beta, we should be able to eliminate the reliance on the log files by using a webhook to verify the audit stream instead. Thanks for investigating @username_2 !
username_7: We're in code freeze for 1.14. If this issue is blocking the 1.14 release, can we please get an update on whether this issue will be closed in the next week? If this issue is not blocking 1.14, can we move it to 1.15?
@username_2
username_8: Seems like #74915 did resolve this issue.
The failed tests in this issue have cleared out.
/close
username_5: If we focus on gci-gce jobs it looks like it's become _less_ flaky but hasn't gone away entirely
https://storage.googleapis.com/k8s-gubernator/triage/index.html?job=e2e-gci-gce&test=Advanced%20Audit
<img width="1104" alt="Screen Shot 2019-03-16 at 11 38 02 AM" src="https://user-images.githubusercontent.com/49258/54479950-0abe4980-47e0-11e9-8831-50d34e177cb4.png">
Definitely not happy as far as kops is concerned
https://storage.googleapis.com/k8s-gubernator/triage/index.html?job=kops&test=Advanced%20Audit
<img width="1098" alt="Screen Shot 2019-03-16 at 11 40 49 AM" src="https://user-images.githubusercontent.com/49258/54479988-5d980100-47e0-11e9-9acd-89a283978243.png">
username_1: the methodology the test is using is inherently flaky in the face of server-side log rotation
spoke with @username_6 about this, and I think we should do the following:
* now: mark the existing e2e tests as `[flaky]` while we resolve issues
* short-term: take detailed specific-request-to-specific-audit-event tests and make sure we have integration tests covering that
* medium-term: change e2e tests to general "audit for resource X is enabled" and make them much more robust by moving away from trying observe an audit event for a *particular* request, and unmark them flaky
* long-term: change e2e tests to use dynamic audit and point at a test-specific audit sink
username_1: marking flaky in https://github.com/kubernetes/kubernetes/pull/75447
username_6: Thanks @username_1
Follow up tasks captured by https://github.com/kubernetes/kubernetes/issues/75563
Status: Issue closed
|
romkatv/powerlevel10k | 455103868 | Title: Nerd-Fonts not showing for 'ls' command
Question:
username_0: It seems that I can't make Nerd-Fonts completely work for me. In my prompt the Nerd Fonts are displayed, but when I use the 'ls' command the folder or file icons in front of the names are not showing.
This is a Tilix screenshot:

I tried it also in Terminator:

In the standard Gnome-Terminal I can't choose the Hack Font Mono Regular. That's why I can't provide a screenshot of that: it simply doesn't work there.
OS: Manjaro Linux 18.0.4 Illyria Gnome Desktop (up-to-date)
zsh: zsh 5.7.1 (x86_64-pc-linux-gnu) + [oh-my-zsh](https://github.com/robbyrussell/oh-my-zsh)
Fonts: Hack Fonts Mono Regular in Tilix:

I downloaded it directly from the [nerd-fonts](https://github.com/ryanoasis/nerd-fonts/tree/master/patched-fonts/Hack/Regular/complete) repo. There I picked `Hack Regular Nerd Font Complete Mono.ttf` and installed it by opening the font and clicking on `Install`.
My `.zshrc` file looks like this:
```zsh
# Font mode for powerlevel10k
POWERLEVEL9K_MODE='nerdfont-complete'
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:/usr/local/bin:$PATH
# Path to your oh-my-zsh installation.
export ZSH="/home/user/.oh-my-zsh"
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
ZSH_THEME="powerlevel10k/powerlevel10k"
POWERLEVEL9K_PROMPT_ON_NEWLINE=true
POWERLEVEL9K_RPROMPT_ON_NEWLINE=true
# Separators
POWERLEVEL9K_LEFT_SEGMENT_SEPARATOR_ICON=$'\ue0b0'
POWERLEVEL9K_LEFT_SUBSEGMENT_SEPARATOR_ICON=$'\ue0b1'
POWERLEVEL9K_RIGHT_SEGMENT_SEPARATOR_ICON=$'\ue0b2'
POWERLEVEL9K_RIGHT_SUBSEGMENT_SEPARATOR_ICON=$'\ue0b7'
# VCS colours
POWERLEVEL9K_VCS_MODIFIED_BACKGROUND='orange1'
POWERLEVEL9K_VCS_MODIFIED_FOREGROUND='black'
POWERLEVEL9K_VCS_UNTRACKED_BACKGROUND='lightgreen'
POWERLEVEL9K_VCS_UNTRACKED_FOREGROUND='black'
POWERLEVEL9K_VCS_CLEAN_BACKGROUND='lightgreen'
POWERLEVEL9K_VCS_CLEAN_FOREGROUND='black'
# Command auto-correction.
ENABLE_CORRECTION="true"
# Command execution time stamp shown in the history command output.
HIST_STAMPS="mm/dd/yyyy"
# Which plugins would you like to load?
# Standard plugins can be found in ~/.oh-my-zsh/plugins/*
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
[Truncated]
zsh-autosuggestions
zsh-syntax-highlighting
zsh-completions
)
autoload -U compinit && compinit
source $ZSH/oh-my-zsh.sh
#Removes username@hostname
prompt_context() {
if [[ "$USER" != "$DEFAULT_USER" || -n "$SSH_CLIENT" ]]; then
fi
}
```
With `get_icon_names` i get this: https://imgur.com/a/qjNGXQV
Hope you can help me
Cheers username_0
Status: Issue closed
Answers:
username_0: I'm closing this issue because I figured out that it is not powerlevel10k related...
For anybody wondering: the desired behavior is achieved with the ruby gem `colorls`.
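For reference, a rough sketch of that setup (the flag is just an example):
```zsh
gem install colorls
colorls --sd   # list with icons, directories first; no need to alias ls
```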
username_1: Or https://github.com/Peltoche/lsd. It's about 40 times faster.
username_0: @username_1 why don't you recommend it? And why no aliasing, if you don't mind me asking?
username_1: Aliasing `ls` to `lscolor` is a bad idea because its output is verbose and incompatible. Most of the time when you need to list a directory you want terse output.
Using `lsd` in addition to `ls` is OK. After some time you'll likely realize you never actually use `lscolor`. There will be many reasons for it:
1. It's not available on some of the machines where you need to do stuff.
2. When `lsd` is available, half of the time patched fonts don't work.
3. The output is too verbose. Icons are impractical and useless.
The one good use case for `lsd` is to make pretty screenshots to elicit "wow" from Bash users.
This is just my personal opinion of course.
P.S.
Judging by the two issues I've seen you open I'll hazard a guess that you are new to ZSH. I strongly recommend sticking with more standard tools at least at first rather than installing flashy alternatives that only a tiny fraction of ZSH users employs. Once you get comfortable with the basics, you can try `lsd`, `rg` and a million ZSH plugins that exist out there.
username_0: Thank you for your input.
And yes, I'm fairly new to zsh. I thought about what you said and I think you're right.
I installed `lsd` and, as stated in the repo, it is noticeably faster, so I'll switch to that and skip the aliases. I mean, it's one letter more to type and that's OK.
username_1: :+1: |
relaxdiego/relaxdiego.github.com | 162261322 | Title: Unable to checkout '<KEY>' in submodule path
Question:
username_0: I did "git push" in the submodule lib/ansible/modules/core and then "git push" of the ansible (both in iosxe_branch). Later I did pull request and merge in the github.
However when i do git clone --recursive, I get the error "Unable to checkout '<KEY>' in submodule path 'lib/ansible/modules/core'"
Pls suggest the steps to properly push, merge and clone
ubuntu:~/junk$ git clone https://github.com/username_0/ansible.git --recursive
Cloning into 'ansible'...
remote: Counting objects: 122855, done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 122855 (delta 8), reused 2 (delta 2), pack-reused 122830
Receiving objects: 100% (122855/122855), 35.77 MiB | 1.36 MiB/s, done.
Resolving deltas: 100% (76068/76068), done.
Checking connectivity... done.
Submodule 'lib/ansible/modules/core' (https://github.com/ansible/ansible-modules-core) registered for path 'lib/ansible/modules/core'
Submodule 'lib/ansible/modules/extras' (https://github.com/ansible/ansible-modules-extras) registered for path 'lib/ansible/modules/extras'
Cloning into 'lib/ansible/modules/core'...
remote: Counting objects: 36650, done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 36650 (delta 5), reused 2 (delta 2), pack-reused 36630
Receiving objects: 100% (36650/36650), 8.71 MiB | 1.50 MiB/s, done.
Resolving deltas: 100% (24178/24178), done.
Checking connectivity... done.
fatal: reference is not a tree: 5ed887be9dda6eba58fd07092d5fbdc6b311bb48
Cloning into 'lib/ansible/modules/extras'...
remote: Counting objects: 33914, done.
remote: Total 33914 (delta 0), reused 0 (delta 0), pack-reused 33913
Receiving objects: 100% (33914/33914), 7.30 MiB | 1.48 MiB/s, done.
Resolving deltas: 100% (22657/22657), done.
Checking connectivity... done.
Submodule path 'lib/ansible/modules/extras': checked out '1c36665545ab3ceb7b5edb998a9b83eb8f6be133'
Unable to checkout '5<KEY>' in submodule path 'lib/ansible/modules/core'
Answers:
username_1: That means that sha 5ed887be9dda6eba58fd07092d5fbdc6b311bb48 is not merged to upstream ansible-modules-extras. do you have it in your fork of the repo? If you do, you should review the steps in "Prep Your Core (or Extras) Modules Repo"
Status: Issue closed
|
galaxyproject/tools-iuc | 195767825 | Title: QIIME Contribution Fest - 9th & 10th January 2017
Question:
username_0: Hi,
Some work has already be done to integrate QIIME into Galaxy: see #431.
But there is still work to be done:
- Add some love on some wrappers
- use argument="--myparam"
- make flake8 passing
- add params in command section in quotes
- add help text and version command
- check the indentation
- and many other make up
- Add tests data for all wrappers
- Add new wrappers for new commands
And we need your help on this!
We have planned a Contribution Fest for the 9th & 10th of January 2017.
It would be great if you could join us for this hackathon.
Thanks everyone! This will be awesome!
Bérénice, <NAME> Saskia
Answers:
username_1: Only 8 days to go and we will start 2017 with our first online Contribution Fest dedicated t the QIIME wrapper.
ping @galaxyproject/iuc
username_0: Hi all,
Ready for the hackathon?
For live chat, we can use the IUC Gitter channel: https://gitter.im/galaxy-iuc/iuc
We can there coordinate our efforts on QIIME.
Happy hacking :)
username_1: What do you think, do we want to have every tool as separate TS entry, as encouraged by the IUC? it will be a lot of different repos.
username_2: I think that would be too many tools. In theory, one repo per tool is a good idea, but in reality, it's a big admin headache (IMHO).
username_1: @username_2 for admins there will be a suite repository, which aggregates all the single tools. Not enough?
username_2: The suites haven't worked for me in the past. OK for an initial install, but updates to new versions have always failed in various ways.
username_3: I know there are pros and cons to 1 tool per repo, but I've always leaned towards it. I really don't like the fact that you have to install an entire suite of tools (suites sometimes are very large) to get the 1 or 2 that you really want. For example, my ChIP-exo environment needs only a single picard too, but I have to install all of these https://github.com/galaxyproject/tools-devteam/tree/master/tools/picard to get it. I've not experienced the headaches involved in installing and maintaining a tool per repo, but that doesn't mean there aren't any - I'm just not seeing what they are. ;)
username_2: I fully agree that having to install an entire suite of tools is problematic, but I've never been able to install a new version of suite. Always fails and leaves me in a nasty state. Then I end up installing each new version manually (which takes forever). Deeptools was my latest example. Have you actually installed deeptools_suite, then installed a new version of deeptools_suite (in the same Galaxy instance)?
username_1: I did that and it worked, Oo.
username_2: :sigh: Well, if it does indeed work in some cases, then I suppose creating separate tools is OK. Personally, I think separate repos is a big PITA and Galaxy should allow more customization when installing (and putting tools in various places in the tools menu). Also, I'd consider this a bit of a special case since I'm certain the suite install won't work for me with so many tools (proxy will time out). But I guess I can start writing some API scripts to install tools....
Status: Issue closed
|
DefinitelyTyped/DefinitelyTyped | 228630830 | Title: GraphQLInterfaceType error when using resolveType
Question:
username_0: - [x] I tried using the `@types/graphql` package and had problems.
- [ ] I tried using the latest stable version of tsc. https://www.npmjs.com/package/typescript
- [ ] I have a question that is inappropriate for [StackOverflow](https://stackoverflow.com/). (Please ask any appropriate questions there).
- [ ] [Mention](https://github.com/blog/821-mention-somebody-they-re-notified) the authors (see `Definitions by:` in `index.d.ts`) so they can respond.
- Authors: // Definitions by: TonyYang <https://github.com/TonyPythoneer>, <NAME> <https://github.com/calebmer>, <NAME> <https://github.com/intellix>, Firede <https://github.com/firede>
Here is my code
```javascript
export const NodeInterface: GraphQLInterfaceType = new GraphQLInterfaceType({
name: 'NodeInterface',
description: 'Node interface',
fields: {
id: {
type: new GraphQLNonNull(GraphQLID)
}
},
// resolveType (fields) {
// if (fields.name) {
// return people.schema
// }
// return null
// }
})
```
If I uncomment these lines, it will throw an error
```javascript
// resolveType (fields) {
// if (fields.name) {
// return people.schema
// }
// return null
// }
```
This is the error that is shown:
```bash
[ts]
Argument of type '{ name: string; description: string; fields: { id: { type: GraphQLNonNull<GraphQLScalarType>; }; ...' is not assignable to parameter of type 'GraphQLInterfaceTypeConfig<any, any>'.
Types of property 'resolveType' are incompatible.
Type '(fields: any) => GraphQLObjectType | null' is not assignable to type 'GraphQLTypeResolver<any, any> | undefined'.
Type '(fields: any) => GraphQLObjectType | null' is not assignable to type 'GraphQLTypeResolver<any, any>'.
Type 'GraphQLObjectType | null' is not assignable to type 'string | GraphQLObjectType | Promise<string | GraphQLObjectType>'.
Type 'null' is not assignable to type 'string | GraphQLObjectType | Promise<string | GraphQLObjectType>'.
any
```
Answers:
username_0: Of course. I messed up by returning `null`
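For reference, a minimal sketch of a resolver that satisfies those typings (`PersonType` stands in for `people.schema` from the snippet above; this is only an illustration, not necessarily the actual fix used):
```typescript
import {
  GraphQLID,
  GraphQLInterfaceType,
  GraphQLNonNull,
  GraphQLObjectType,
  GraphQLString
} from 'graphql';

const PersonType = new GraphQLObjectType({
  name: 'Person',
  fields: {
    id: { type: new GraphQLNonNull(GraphQLID) },
    name: { type: GraphQLString }
  }
});

export const NodeInterface = new GraphQLInterfaceType({
  name: 'NodeInterface',
  description: 'Node interface',
  fields: {
    id: { type: new GraphQLNonNull(GraphQLID) }
  },
  resolveType(value: any) {
    // Every code path returns a type (or a type name); returning `null`
    // is what triggered the compile error above.
    return PersonType;
  }
});
```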
Status: Issue closed
|
sukeesh/Jarvis | 312230470 | Title: Request for contribution
Question:
username_0: I found your project interesting and I'd like to contribute to it. Is there anything that needs improvement?
Answers:
username_1: There are numerous ways you can contribute to Jarvis. You can add any new feature you like, or you can try to extend/improve existing functionality. Another way to contribute is to write tests to improve the code coverage. Do you prefer to make small improvements or to implement a new feature?
username_0: I haven't thought of any new features yet, but I could contribute by writing tests in order to improve the existing code. However, if you have any suggestions, I am willing to try for something else as well.
username_1: Ok you can start with the tests and I will consider a new feature to work on.
username_1: It sounds great. Start with the extension. Please work in a new branch and make a pull request when it's ready for review!
Status: Issue closed
|
ThomasLee94/graph_challenges | 482566922 | Title: Final Project Feedback
Question:
username_0: Final Project Feedback
Overall 22/33 (> 20 Passing):
- Proposal 3/4
- Code Graph Data structure 3/4
- Code: Solution 1: 3/4
- Code: Solution 2: 3/4
- Code: Solution 3: 3/4
- Code Documentation 3/4
- Code Testing 0/4
- no tests
- Blog / Presentation 4 /5
- Give more data on the algorithms used. |
fraoustin/redmine_indicator | 613156524 | Title: Project Page does not appear ?
Question:
username_0: Hi, first of all this plug-in is wonderful. My Page is OK.
Should I make an adjustment for the project page? I couldn't see the screen as you show in the screenshot.


Answers:
username_1: Hi
thank you for your message
have you a log of redmine?
can you check CustomField for project (http://<url>//custom_fields?tab=ProjectCustomField) and check indicator* (check visible for your)
username_1: And all it's ok ...
username_0: This is screenshot from Custom Field.
I am working on Windows 8 and using Bitnami RedminePlusAgile 4.1.0-8
Where should I check?
username_1: Can you a screenshot of http://<your_url>/custom_fields?tab=ProjectCustomField
Have you a log of redmine when redmine start ?
username_1: You are connected as "admin"?
username_0: Hi username_1,
I am working on a laptop (127.0.0.1 :)). I am using it as admin. Sorry, but I could not find the log file. If you tell me its name, I can send it as well.
username_1: in your log repository ...
username_0: [access.log](https://github.com/username_1/redmine_indicator/files/4593272/access.log)
[error.log](https://github.com/username_1/redmine_indicator/files/4593273/error.log)
[install.log](https://github.com/username_1/redmine_indicator/files/4593274/install.log)
These are the log files. I hope they are the correct ones
username_1: it's mysterious ... the plugin is OK, because you can view the graphic ... you need to check the Custom Fields menu in the admin configuration
click on Administration and then on Custom Fields

click on Project

click on indicator_left_top

username_0: I believe so.


How come?
Should I upload the plugin one more time?
username_0: I think I have to apologize. I manually created one custom field by looking at the table you sent.
I think I missed one point. Should we create these custom fields by hand?

username_1: If you have not load the plugin migration, you can add the fields indicator_left_top, indicator_left_bottom and indicator right

username_0: I tried one more time and I got some warnings during the migration. Normally the MySQL database has a password!

May I ask a question: what is the difference between the two commands?
rake redmine:plugins:migrate NAME=redmine_indicator
and
bundle exec rake redmine:plugins:migrate NAME=redmine_indicator RAILS_ENV=production
username_0: Hi username_1
You are right, and the plugin installed correctly. To check and be sure, I deleted the data completely, installed all the plugins again and pushed the new Project button. The result is as shown; the modules are there.
I am using these commands (copied from the Redmine manuals):
for backup **"mysqldump -A -u root -p > backup.sql"**
for upload **"mysql -u root -p -D bitnami_redmine < bitnami_redmine.sql" command.**
These commands are causing the fault.
I should find a way to migrate my data.

username_1: Ok, i understand
If you want , i can search in your backup.sql the reason of the problem (objective : check the problem is the backup or the restore)
username_0: You are so kind but I am a worker and if I share (I believe this is not secret data) this may cause to loose my job. I am not granted. I will try to get the data from the server mysqldump (without -A). Make sense?
II could not enter localhost\phpmyadmin page. If I could I will try from that page to backup the database and upload.
Murphy is on duty.
username_1: ok
you can check found "indicator_left_top" in your backup.sql ... if you found the problem is in restore , else the problem is in backup.
I don't know mysql ... sorry
Status: Issue closed
username_0: I manually created the custom fields from your plugin and it is solved. On the main page of the project, I managed to show the statistics.
 |
filecoin-project/lotus | 552568057 | Title: lotus-storage-miner blocks on start
Question:
username_0: **Describe the bug**
lotus-storage-miner blocks on start
you can reproduce it by just initializing a private net, and it happens fairly often (roughly 30-50% of the time)
**To Reproduce**
Steps to reproduce the behavior on private net
1. Initializing
**for lotus**
```
# prepare pre-seal sectors
lotus-seed --sectorbuilder-dir=./repo/genesis-sectors pre-seal --num-sectors=3
# start lotus daemon
lotus daemon --lotus-make-random-genesis=./genesis.car --bootstrap=false --genesis-presealed-sectors=./repo/genesis-sectors/pre-seal-t0101.json
```
**for miner**
```
# init miner
lotus-storage-miner init --actor=t0101 --genesis-miner --nosync --pre-sealed-sectors=./repo/genesis-sectors/
# start miner
lotus-storage-miner run --nosync
```
2. See error
miner instance blocks
if miner is interrupted (Ctrl-C), lotus output logs:
```
2020-01-21T08:31:13.246+0800 INFO badger [email protected]/logger.go:46 Replay took: 221.299µs
2020-01-21T08:31:13.246+0800 INFO p2pnode lp2p/addrs.go:114 Swarm listening at: [/ip4/127.0.0.1/tcp/63939 /ip4/192.168.0.69/tcp/63939 /ip6/::1/tcp/63940]
2020-01-21T08:31:13.247+0800 WARN hello hello/hello.go:50 running without peer manager
panic: send on closed channel
goroutine 375 [running]:
github.com/filecoin-project/lotus/lib/jsonrpc.(*wsConn).handleChanOut(0xc000358ab0, 0x551eea0, 0xc01d9acee8, 0x92, 0x16, 0x8)
/Users/wanglin/work/ongit/lotus/lib/jsonrpc/websocket.go:209 +0xa6
github.com/filecoin-project/lotus/lib/jsonrpc.handlers.handle.func1(0xb951530, 0xc01da68cc0)
/Users/wanglin/work/ongit/lotus/lib/jsonrpc/handler.go:232 +0xde
github.com/filecoin-project/lotus/lib/jsonrpc.(*wsConn).nextWriter(0xc000358ab0, 0xc0004da640)
/Users/wanglin/work/ongit/lotus/lib/jsonrpc/websocket.go:111 +0x1ba
github.com/filecoin-project/lotus/lib/jsonrpc.handlers.handle(0xc00020c1e0, 0x5a86fa0, 0xc01d9cd980, 0xc01da5c110, 0x3, 0xc01da5c118, 0xc000045d40, 0x14, 0x6576b08, 0x0, ...)
/Users/wanglin/work/ongit/lotus/lib/jsonrpc/handler.go:226 +0xb3d
created by github.com/filecoin-project/lotus/lib/jsonrpc.(*wsConn).handleCall
/Users/wanglin/work/ongit/lotus/lib/jsonrpc/websocket.go:362 +0x295
```
**Expected behavior**
miner begin mining after a while
**Version (run `lotus --version`):**
lotus version 0.2.5
**Additional context**
seems to be some mutex problem; I tried to profile it and added the following code in `cmd/lotus/rpc.go`
```
func init() {
runtime.SetBlockProfileRate(1000)
runtime.SetMutexProfileFraction(5)
}
```
the bug just disappears (or becomes more difficult to reproduce?)
Answers:
username_0: 1. master branch

2. miner blocks

3. lotus crashes

4. add profile codes


5. use the changed binary

6. rollback to original

7. miner block again

username_0: `go version go1.13.3 darwin/amd64`
Status: Issue closed
username_1: The lotus panic appears to be a bug in the jsonrpc library... will have to have @username_3 look into that.
As for the hang, i'm really not sure... I will try to reproduce locally.
username_0: @username_1
maybe #1122 is a duplicate of this issue,
and I created a workaround PR #1129
this bug is actually caused by channel handling combined with some mutex usage in `lib/jsonrpc`, but it is hard to pin down and fix conclusively
username_2: Deadlock! It hangs because of a race condition between writeLk and registerCh.
Assume one request is in nextWriter: writeLk is locked and it attempts to write to registerCh. Another request may be waiting for writeLk in the sendRequest function, which blocks the handleOutChans loop, so nothing will ever select on registerCh in this situation. Eventually, a deadlock occurs.
```go
func (c *wsConn) sendRequest(req request) {
c.writeLk.Lock()
if err := c.conn.WriteJSON(req); err != nil {
log.Error("handle me:", err)
c.writeLk.Unlock()
return
}
c.writeLk.Unlock()
}
```
```go
func (c *wsConn) nextWriter(cb func(io.Writer)) {
c.writeLk.Lock()
defer c.writeLk.Unlock()
wcl, err := c.conn.NextWriter(websocket.TextMessage)
if err != nil {
log.Error("handle me:", err)
return
}
cb(wcl)
if err := wcl.Close(); err != nil {
log.Error("handle me:", err)
return
}
}
```
```go
func (c *wsConn) handleOutChans() {
regV := reflect.ValueOf(c.registerCh)
cases := []reflect.SelectCase{
{ // registration chan always 0
Dir: reflect.SelectRecv,
Chan: regV,
},
}
var caseToID []uint64
for {
chosen, val, ok := reflect.Select(cases)
.......
if !ok {
// Output channel closed, cleanup, and tell remote that this happened
n := len(caseToID)
if n > 0 {
cases[chosen] = cases[n]
caseToID[chosen-1] = caseToID[n-1]
}
[Truncated]
Method: chClose,
Params: []param{{v: reflect.ValueOf(id)}},
})
continue
}
// forward message
c.sendRequest(request{
Jsonrpc: "2.0",
ID: nil, // notification
Method: chValue,
Params: []param{
{v: reflect.ValueOf(caseToID[chosen-1])},
{v: val} ,
},
})
fmt.Println("send request finish")
}
}
```
username_0: ok, maybe here is the reason:
assuming 2 concurrent ChainNotify calls:
- first call expand the selection, get a HCCurrent immediately, and try to send it out, which will acquire the write lock
https://github.com/filecoin-project/lotus/blob/master/lib/jsonrpc/websocket.go#L193-L198
- the second call finds a chan-kind return value, hold the write lock and try to send a registration via the registerCh
https://github.com/filecoin-project/lotus/blob/master/lib/jsonrpc/handler.go#L232
- since there is no receiver of the registerCh, and no buffer, boom
so, as a quick fix, you can just add some buffer size here:
https://github.com/filecoin-project/lotus/blob/master/lib/jsonrpc/websocket.go#L391
but i think it should be more carefully handled
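To make that scenario concrete, here is a small self-contained sketch of the same locking pattern (the names only mirror websocket.go; this is not lotus code). Running it makes the Go runtime abort with "all goroutines are asleep - deadlock!":
```go
package main

import "sync"

func main() {
	var writeLk sync.Mutex
	registerCh := make(chan int) // unbuffered, like the real registerCh
	lockHeld := make(chan struct{})

	// Stand-in for the handleOutChans loop: before it can get back to the
	// select that drains registerCh, it needs writeLk (e.g. for sendRequest).
	go func() {
		<-lockHeld
		writeLk.Lock() // blocks: writeLk is held by the code below
		writeLk.Unlock()
		<-registerCh // registrations would only be drained here
	}()

	// Stand-in for handlers.handle/nextWriter: holds the write lock while the
	// returned channel is being registered.
	writeLk.Lock()
	close(lockHeld)
	registerCh <- 1 // nobody can receive: the only receiver waits for writeLk
	writeLk.Unlock()
}
```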
username_3: Thanks for investigating, I may have a fix for this
username_3: Yeah, that deadlock was pretty fun to resolve - pushed the fix to https://github.com/filecoin-project/lotus/pull/1123/commits/fbc0330fa817a36d14e7063531538289e136d545, can you verify that it solves your issues?
username_0: i think it is fixed in #1123
username_3: #1123 was merged
Status: Issue closed
|
ProseMirror/prosemirror | 139393944 | Title: Can't navigate after making node selection in group of contentless nodes
Question:
username_0: If a user makes a node selection in the middle of a group of contentless nodes, keyboard navigation scrolls the window rather than changing the node selection up and down.

Answers:
username_0: Can reproduce in OSX Chrome and Safari. Can't reproduce in FF.
Seems to have to do with the fact that when a node selection in the middle is set, a selection is non-existent(all nodes are unset in `window.getSelection()`), and the keypress event is not caught by the input handler.
username_1: I think this has some ties to #252. I assume the reason for that hanging cursor is, in part, because there has to be some cursor inside the contenteditable in order for it to catch key/paste events
username_1: One solution I've used before is to put the cursor into some offscreen element which can capture the events. It's not an elegant solution by any means, but it worked
Status: Issue closed
username_2: This is pretty much the same issue as #252 -- the browser makes up some kind of selection because it feels it isn't allowed to set the selection on the non-editable selected node. |
minorua/Qgis2threejs | 29583897 | Title: Could the DEM come from a WCS service?
Question:
username_0: I like this plugin, but when I tried using it with a WCS service, it did not recognize it. It would be very useful to be able to work from a WCS DEM.
Answers:
username_1: I was trying the plugin with QGIS 2.18.7, using a WMS service I load through the standard QGIS 'Add WMS/WMTS Layer' functionality, but none of the layers I load gets listed as a DEM layer in the GUI of QGIS2threejs.
Is there a possibility to use WMS? |
karlogonzales/ministocks-390 | 303228350 | Title: Espresso will not work
Question:
username_0: Espresso will not work on a widget, please use UI Automator for your UI tests.
Answers:
username_1: I can start working on this. I already implemented a UI Automator test in the branch **resizeable-widget-test** that checks if the new widget sizes exist. I'll start adding tests for other functionality.
Status: Issue closed
username_2: That sounds good Ill have you
username_2: Espresso will not work on a widget, please use UI Automator for your UI tests.
Status: Issue closed
|
mir-group/phoebe | 780836715 | Title: Phonon-electron self-energy
Question:
username_0: Based on what we already have, it should be simple to also include the phonon self energy correction due to the electron-phonon interaction. It will also need to be incorporated into the scattering matrix. |
rossfuhrman/_why_the_lucky_markov | 718674784 | Title: A method is handed the line where the raccoons wear their mask, for I had come all the particles like a dictionary, with curly braces give the appearance of crab pincers that have United States.
Question:
username_0: Toot: A method is handed the line where the raccoons wear their mask, for I had come all the particles like a dictionary, with curly braces give the appearance of crab pincers that have United States.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
website-scraper/node-website-scraper | 372256108 | Title: Freeze when scraping recursively
Question:
username_0: * **website-scraper version:**
**Please provide your full options object:**
<details>
<summary>Options</summary>
```js
{
urls: ['http://www.dhl.com/en.html'],
directory: './downloads/',
recursive: true,
maxRecursiveDepth: 3,
ignoreErrors: false
}
```
</details>
**What is expected result?**
http://www.dhl.com/en.html scraped recursively to depth 3
**What is actual result?**
Nothing, the scraper freezes.
Answers:
username_1: Hi @username_0
Looks like the scraper tries to download too many pages and freezes.
I'd suggest to:
* ensure that you're only downloading what you need. To track what the scraper is doing you can use `debug` [as described in readme](https://github.com/username_1/node-website-scraper#log-and-debug)
* add urlFilter and/or decrease maxRecursiveDepth
* increase memory limit for node process
Hope it helps |
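For instance, a narrower configuration could look like the sketch below (the filter and depth values are illustrative, not a recommendation for your specific site):
```js
// Sketch: restrict the crawl so the scraper doesn't try to fetch the whole site.
const options = {
  urls: ['http://www.dhl.com/en.html'],
  directory: './downloads/',
  recursive: true,
  maxRecursiveDepth: 1,                                    // shallower than 3
  urlFilter: (url) => url.startsWith('http://www.dhl.com') // skip external links
};
```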
alin-rautoiu/mastodroid | 199383074 | Title: User details fragment
Question:
username_0: A fragment that shows the user's details in the navbar with the user's posts, follows and followers as tabs below.
<issue_closed>
Status: Issue closed |
facebook/screenshot-tests-for-android | 150404008 | Title: Verify wiki for "Running screenshot tests without other instrumentation tests"
Question:
username_0: Greetings @username_1,
I have added a wiki to this repository's wiki pages. Little did I know there is no verification/PR like process for adding wiki pages. Could you verify that https://github.com/facebook/screenshot-tests-for-android/wiki/Running-screenshot-tests-without-other-instrumentation-tests is fit to be on this repository?
Thanks,
username_0
Answers:
username_1: This seems fine, however I'm going to edit this and file this under a Hints and FAQ section
(Although, most of our documentation is on the github page.. send me a pull request for gh-pages branch if you want to change that)
Status: Issue closed
|
Lansoftdev/gateway | 865069196 | Title: Set up env endpoint using actuator
Question:
username_0: As a developer, it's useful to review meta-information such as endpoints, env vars, etc. about the running application. Spring Boot provides such capabilities out of the box with [actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html). For now we're interested in the env endpoint.
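A minimal sketch of what this could look like (assuming the `spring-boot-starter-actuator` dependency is added; the exact exposure list below is an assumption, not a decision):
```properties
# application.properties (sketch): expose the env endpoint over HTTP
management.endpoints.web.exposure.include=env,health,info
```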
snapview/tokio-tungstenite | 943950147 | Title: wss:// connections silently downgrade to plaintext if tokio-tungstenite is not compiled with TLS support
Question:
username_0: If the url scheme is `wss`, tokio-tungstenite will always [connect to port 443](https://github.com/snapview/tokio-tungstenite/blob/2546fa0443f06e2b4cad52529d4feafa79e39d88/src/connect.rs#L38), regardless of the presence of TLS support. However, if TLS support is not compiled in, tokio-tungstenite will speak plaintext over port 443.
This leads to hard-to-debug errors where a server may respond with `HTTP 400` (this is the behaviour I have observed in nginx). tokio-tungstenite should instead refuse to connect to `wss://` URLs if no TLS support is compiled in and return an error.
Answers:
username_0: It appears that returning an error rather than downgrading to plaintext may actually be the intended behaviour: https://github.com/snapview/tokio-tungstenite/blob/2546fa0443f06e2b4cad52529d4feafa79e39d88/src/tls.rs#L123
However this seems to be foiled by the compile-time conditional dispatch in `connect_async_with_config`: https://github.com/snapview/tokio-tungstenite/blob/2546fa0443f06e2b4cad52529d4feafa79e39d88/src/connect.rs#L48-L56
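A sketch of the behaviour being requested, purely for illustration (the feature name below is a placeholder, not the crate's actual feature flags, and this is not the crate's code):
```rust
// Refuse wss:// at connect time when no TLS backend is compiled in,
// instead of silently speaking plaintext on port 443.
#[cfg(not(feature = "tls"))]
fn reject_wss(scheme: &str) -> Result<(), String> {
    if scheme == "wss" {
        return Err("wss:// requested but this build has no TLS support".to_owned());
    }
    Ok(())
}
```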
Status: Issue closed
|
CityOfZion/neon-js | 285605200 | Title: feat: Helper to chain multiple scripts together
Question:
username_0: Currently, it is possible to chain multiple scripts together using the ScriptBuilder:
```
const sb = new ScriptBuilder()
sb
.emitAppCall(scriptHash, 'name')
.emitAppCall(scriptHash, 'symbol')
.emitAppCall(scriptHash, 'decimals')
.emitAppCall(scriptHash, 'totalSupply')
```
However, doing this requires you to be exposed to the raw working tools which is not what everyone wants to handle. This functionality is not yet exposed at a high level.
This would be similar to the `createScript` method but take in an array of `props` instead.
## Specifics
- Define an interface describing the `props` object used in `createScript`.
- Implement a helper method with signature: `(Array<props>) => string`
- Helper method should call `emitAppCall` on each `props` object and return the final string produced by `ScriptBuilder`.
- Instead of a new helper method, it is ok to modify `createScript` if you cannot think of a better name.
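A rough sketch of what such a helper could look like (the field names and the final-string accessor are assumptions about the eventual interface, not the shipped API; `ScriptBuilder` is the existing class used above):
```ts
interface AppCallProps {
  scriptHash: string
  operation: string
  args?: any[]
}

const createMultiScript = (props: AppCallProps[]): string => {
  const sb = new ScriptBuilder()
  props.forEach(p => sb.emitAppCall(p.scriptHash, p.operation, p.args))
  return sb.str // assumes the builder exposes the accumulated hex script as `str`
}
```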
Answers:
username_1: This can probably be tracked as a separate issue/feature request, but would it be worth making a helper function for creating a script (or object that can be passed to `createScript`) for transferring NEP5 tokens similar to the `makeIntent` function?
username_0: yes, with the new `decimals` field being relevant now, a helper would be useful for NEP5 token transfer to avoid this pitfall
Status: Issue closed
|
google/mediapipe | 665966928 | Title: None
Question:
username_0: Hi,
The text rendering is done via [OpenCV](https://github.com/google/mediapipe/blob/master/mediapipe/util/annotation_renderer.cc#L507). OpenCV 3+ introduced custom fonts, so something like [loadFontData](https://docs.opencv.org/3.4.0/d9/dfa/classcv_1_1freetype_1_1FreeType2.html#af059d49b806b916ffdd6380b9eb2f59a) can be added to the annotation renderer. But first you need to make sure you have the [freetype module](https://github.com/opencv/opencv_contrib/tree/3.4/modules/freetype) installed (either by package manager or re-compiling opencv).
So, according to that link, you could replace the current [putText](https://github.com/google/mediapipe/blob/master/mediapipe/util/annotation_renderer.cc#L507)() call to something like:
```
cv::Ptr<cv::freetype::FreeType2> ft2 = cv::freetype::createFreeType2();
ft2->loadFontData("your-font.ttf", 0);
ft2->putText(src, .... );
```
(though I think you would want to cache the font object and not re-load a file each frame).
Note, I have not tested this, but it should be enough to get you going!
Answers:
username_0: Closing due to lack of activity
Status: Issue closed
|
tailflow/laravel-orion | 983304079 | Title: Request validation skipped after first success when using octane
Question:
username_0: When making a request that's validated with Orion, using **Octane**, the second request skips validation.
This is likely due to the nuances in [making a package octane ready](https://laravel.com/docs/8.x/octane#dependency-injection-and-octane).
I did some digging into the source code, but started to hit the limits of my knowledge. Reproduced the bug with a basic app instead:
[https://github.com/username_0/orion-octane-request-test](https://github.com/username_0/orion-octane-request-test)
Note that I'm using sail to make it easier to setup octane :)
here's some images of the error:
First request:

Second request:

Also worth noting that a `composer dump` will allow me to validate the initial request again (probably because dumping forces octane to copy the app again)
Here's [Muhamed Said's video on Octane](https://www.youtube.com/watch?v=T5lkBHyypu8) if anyone wants a deeper understanding on this!
Thanks @username_1 :)
Answers:
username_1: Hi @username_0,
Thank you for raising this issue! I haven't checked Orion's compatibility with Octane, but it looks like there might be a few caveats. Will try to include it as part of the next release 👌🏻
username_0: @AbdullahFaqeir amazing! Is there a fork I can use? :slightly_smiling_face:
username_2: Hi @username_0 wondering if you've had a chance to investigate this any further? I'll be serving an API with Octane and was hoping to use Orion!
I'm wondering if it could be as simple as updating the `flush` config in `config/octane.php`:
```
/*
|--------------------------------------------------------------------------
| Warm / Flush Bindings
|--------------------------------------------------------------------------
|
| The bindings listed below will either be pre-warmed when a worker boots
| or they will be flushed before every new request. Flushing a binding
| will force the container to resolve that binding again when asked.
|
*/
'warm' => [
...Octane::defaultServicesToWarm(),
],
'flush' => [
],
```
username_0: @username_2 not sure... But let me know if that works!
Would also love to move to octane. Even in local dev, I can really feel the difference.
We just need to be sure it works, because the issue I had is that it's skipping Request Validation. If validation is skipped, that's a pretty serious security concern.
username_2: @username_0 I'm still building out the models, policies, etc., will probably be a bit before I can start configuring and testing Orion.
I've made some notes about Octane, here are some resources I've found which may be helpful in diagnosing the issue:
- "Dependency Injection & Octane" from [Laravel Docs](https://laravel.com/docs/8.x/octane#dependency-injection-and-octane)
- "Coding With Laravel Octane in Mind" [here](https://betterprogramming.pub/the-downsides-of-coding-with-laravel-octane-17f4a7a4ea85)
- "Things to Consider" section [here](https://divinglaravel.com/laravel-octane-bootstrapping-the-application-and-handling-requests) |
DanielSank/theory | 395805907 | Title: Typo in section 6 of qubits 101
Question:
username_0: Equation (86) (the flux quantization condition) and the definition of \delta_L in terms of \delta in the line that follows are inconsistent with equation (87) (dc current through the SQUID in terms of \delta). The expression for \delta_R would equal \delta + (3/2)*\phi as opposed to \delta - \phi/2 as equation (87) suggests.
Answers:
username_0: Also, I am having difficulties reproducing equation (88). To zeroth order in \phi, I can derive an expression which differs by yours by a factor of -1/2. The minus sign is because I used \delta_L = \delta - \phi/2 (which is what I think you meant to do). The factor of 1/2 is because only one of the terms in my expression for dI/d\phi contributes in a zeroth order expansion of \phi. In general, my expression for dI/d\phi is given below
<img width="744" alt="equation_for_dan" src="https://user-images.githubusercontent.com/14102485/50674893-cd536980-0faf-11e9-8ffc-51610f309345.png">
username_1: Working through this myself again, I get

username_0: Sounds like we gotta beer bet this one. Below is a pdf of my derivation of the equation. Perhaps I am messing up a sign or something:
[dc_squid_drive.pdf](https://github.com/username_1/theory/files/3132310/dc_squid_drive.pdf) |
NaturalNode/natural | 1078910259 | Title: Any supported material on browserify compatibility for V8/client-side use?
Question:
username_0: I want to use natural from V8 so I'm looking for a way to apply browserify on it in order to get a single js file that does not require disk access. I think it's analogous to obtaining a client-side version. Is there some sort of official support to it? What I need is to convert it to pure JS (using browserify) and test everything. I would repeat this process every time I want to get the most recent version of natural.
Is there any official step-by-step, tests and maybe a list of compatible modules? Or maybe an official one single file "build"?
Answers:
username_1: There is support for webpack: see https://github.com/NaturalNode/natural/blob/5.1.13/gulpfile.js |
k0kubun/md2key | 150084778 | Title: Not compatible with Keynote 6.6.1?
Question:
username_0: I'm using Keynote 6.6.1 and md2key 0.4.4, and I get these errors:
```
every slide of document 1 doesn’t understand the “count” message
```
or
```
slide 2 of document 1 doesn’t understand the “move” message
```
Answers:
username_1: I tested Keynote 6.6.1 and md2key just now but it worked fine.
Maybe some contents in your markdown contain characters which can't be processed by md2key 0.4.4.
Could you try reducing your decks and detecting what kind of code breaks md2key?
Status: Issue closed
username_1: I guess this issue is related to https://github.com/username_1/md2key/issues/16 and it may be resolved.
So closing for now but feel free to reopen this if the issue persists in md2key v0.5.1.
username_0: Ok, after I do some edit of my markdown file, it now worked.
I don't know the exact reason, but now it worked. :-)
AlexHalogen/MC_DungeonFinder | 795552000 | Title: Makefile issue
Question:
username_0: In the makefile, the rule should be:
utils/callbacks.o: utils/callbacks.c
NOT
utils/callbacks.o: utils/callbacks.h
I had compilation problems with "invalid format callbacks.o"
Answers:
username_1: Hi, thanks for the issue!
I've pushed a fix to correct this error.
I wrote this a long time ago when I didn't know much about CI, testing kind of stuff, so, mistakes like this happen.
Sorry for the confusion, and please let me know if it still doesn't work.
username_0: Compiled!
I did a little test and everything looks ok.
Nice idea. I really like that it's written in C! ;)
username_1: Glad that helps!
Status: Issue closed
|
blockspacer/CXTPL | 496194547 | Title: add code analyzers
Question:
username_0: + code sanitize
+ cppcheck
+ ...
Answers:
username_0: MSAN requires you to `Build libc++ and libc++abi with MemorySanitizer` as in https://github.com/google/sanitizers/wiki/MemorySanitizerBootstrappingClang#build-libc-and-libcabi-with-memorysanitizer
Is it a good idea to create a separate dockerfile for sanitizers like MSAN?
Also I plan to use CXCMake_Sanitizers from CXCMake https://github.com/username_0/CXCMake/blob/master/cmake/core/sanitize/CXCMake_Sanitizers.cmake
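For reference, typical MSan compile flags look like this (illustrative only; the paths are placeholders, and they require the MSan-instrumented libc++ mentioned above):
```sh
# Build with MemorySanitizer against the instrumented libc++
clang++ -fsanitize=memory -fsanitize-memory-track-origins -stdlib=libc++ \
  -I/path/to/msan-libcxx/include -L/path/to/msan-libcxx/lib main.cpp
```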
Status: Issue closed
|
waLLxAck/Rabbit-Population-Explosion | 731645071 | Title: User Story - foxes can eat up to 20 rabbits every year
Question:
username_0: As a user I want the foxes to eat 20 rabbits every year (if that many rabbits are alive) so that I can accurately model their natural habitat.
**Acceptance Criteria**
- max of 20 rabbits gets eaten by foxes every year
**Testing**
- [ ] check that no more than 20 rabbits die from being eaten by the foxes<issue_closed>
Status: Issue closed |
envoyproxy/envoy | 795904464 | Title: How to config WASM network filter stats in envoy.yaml
Question:
username_0: I have implemented the WASM network filter data statistics via the statistics API, but do not know how to configure it in envoy.yaml. I want to access the statistics results through port 8001, like the TCP proxy: http://10.73.87.58:8001/stats?filter=redis_tcp
Process run result:
[2021-01-28 03:01:35.573][62299][warning][wasm] [source/extensions/common/wasm/context.cc:1174] wasm log: [example/test_cpp.cc:74]::onDownstreamData() onDownstreamData 2 len=23 end_stream=0
*2
$3
get
$4
name
[2021-01-28 03:01:35.582][62299][warning][wasm] [source/extensions/common/wasm/context.cc:1174] wasm log: [example/test_cpp.cc:83]::onUpstreamData() onUpstreamData 2 len=8 end_stream=0
$2
lq
[2021-01-28 03:01:35.582][62299][warning][wasm] [source/extensions/common/wasm/context.cc:1174] wasm log: [example/test_cpp.cc:89]::onUpstreamData() incrementMetric res:0, metric_id current value:1
[2021-01-28 03:01:35.582][62299][warning][wasm] [source/extensions/common/wasm/context.cc:1174] wasm log: [example/test_cpp.cc:93]::onUpstreamData() incrementMetric res:0, total current value:2
Status: Issue closed
petyosi/react-virtuoso | 787194274 | Title: Custom fully managed scroll bar
Question:
username_0: I think it would be a good idea to eventually make Virtuoso work with a custom scroll bar component that it manages fully, like VS Code does basically.
Without this I don't think it's possible to provide a **perfect** user experience.
Right now when scrolling pretty quickly or if the computer is particularly busy it can happen that the user will see some visual glitches, all glitches should be removed eventually, and without a custom fully managed scroll bar I think that's impossible.
# Example use case
How it works now:
- The user clicks somewhere in the scroll bar.
- The scroll position jumps immediately.
- If the content to paint for the new position isn't ready yet the user will stare at a blank list for a bit, even if it's just a couple of frames the user experience will suffer from this.
How it could work with a custom fully managed scroll bar:
- The user clicks somewhere in the custom scroll bar.
- The scroll bar updates immediately, giving an instant feedback to the user for a good user experience.
- If the content to paint for the new position isn't ready yet then nothing happens for a bit, as soon as the new content is ready the list can be repainted with the new content, with no visual glitches (assuming the content doesn't take too long to get ready, if it takes too long for that then that problem is probably outside of Virtuoso's area of competence anyway).
This is just an example, taking control of the scroll bar can similarly help improve the user experience while scrolling too.
Status: Issue closed
Answers:
username_1: I understand the underlying motivation, but I don't believe that a custom scrolling implementation will address that. Most likely, it would be an uncanny valley experience. Mobile devices, laptop trackpads, etc. provide a variety of different scrolling methods with their own kinetics. An emulation will feel different.
Instead, I would rather improve the performance of the component to its fullest - both in terms of code and best practice recommendations. Any help (repros, test cases, PRs, etc) in that direction is greatly appreciated.
username_0: It would be difficult to emulate the native behavior, but VS Code does it perfectly on desktop, I know less about mobile but there are efforts like Flutter that basically paints everything itself on a canvas, scrolling there seems to work well too from what I've seen.
I'm suggesting something that wouldn't make sense to implement for a long time, but that I see as a necessary step toward perfection, and basically I want to strive for perfection, which means either Virtuoso becomes perfect or it will have to be replaced by something else eventually, perfection can't be improved upon. |
inception-project/inception | 805909738 | Title: INCEpTION 0.18.1
Question:
username_0: **GitHub issue tracker**
- [ ] Ensure all issues and PRs are resolved/merged
**Local**
- [ ] Run Maven release (increase third digit of version). Check that the JDK
used to run the release is not newer than the Java version specified in the minimum system
requirements!
- [ ] Sign the standalone JAR
**GitHub release page**
- [ ] Upload the JAR to the GitHub release
- [ ] Write release announcement and add to GitHub release
**Github pages**
- [ ] Update the `releases.yml` file
- [ ] Add release documentation to GitHub pages
**Demo/test server**
- [ ] *stable instance*: Update to release version
- [ ] *community instance*: Update to release version
- [ ] *testing instance*: Update auto-deployment script to match new SNAPSHOT version
- [ ] *demo instance*: Update to release version
**Docker**
- [ ] Push the release to Docker
**Mailing list**
- [ ] Send release announcement to mailing list<issue_closed>
Status: Issue closed |
spring-projects/spring-boot | 705393893 | Title: when com.github.ben-manes.caffeine:jcache on classpath, property spring.cache.caffeine.spec didn't work
Question:
username_0: I added spring-boot-starter-cache and com.github.ben-manes.caffeine:jcache to the classpath, and also added the following properties:
spring.cache.type=jcache
spring.cache.caffeine.spec=maximumSize=100,expireAfterWrite=600s
spring.cache.cache-names=projectNodes
but the property 'spring.cache.caffeine.spec' didn't work. It seems Caffeine also needs a JCachePropertiesCustomizer when com.github.ben-manes.caffeine:jcache is on the classpath.
Answers:
username_1: Thanks for the report. What's your reason for using Caffeine's JCache Provider in a Spring Boot application? Doing so [isn't recommended](https://github.com/ben-manes/caffeine/wiki/JCache#spring). As such, I'm not sure that it's something that we'd want to support without good reason.
username_0: Hibernate's second-level cache can make use of JCache; it can't directly use CaffeineCacheManager or Spring's CacheManager.
username_2: Yes, that is expected. If you opt-in for `JCache` support rather than the native Caffeine support, you need to configure Caffeine through the JCache APIs. Spring Boot does not have any opinion when you opt-in for JCache as we have to behave how the spec mandates.
Status: Issue closed
|
trailofbits/ebpfpub | 766029889 | Title: 鹤壁市山城区妹子真实找上门服务q
Question:
username_0: 鹤壁市山城区妹子真实找上门服务【威信781372524美女】年月日院线电影《鲜花盛开的地方》在北京沃德中医研究院隆重举行电影启动仪式。出席此活动的有特约嘉宾毛里求斯驻华大使李淼光及夫人大学生创业达人、沃德中医研究院董事长、出品人何军。芳香文化健康产业集团董事长、出资人刘辉诚信集团董事长出资人冯超、香港智慧城市研究院副总裁梁辉、北京优势传媒有限公司总经理喻一、歌游中国总经理王洪旭三兔广告传媒宁一含及电影《鲜花盛开的地方》主创团队著名导演、监制于敏、著名表演艺术家刘晓庆、著名导演李会东、著名编剧顾伟、制片人李馨馨、演员薛飞、甘婷婷、赵燕国彰、刘大刚、樊锦霖、刘育同、任承浩、凌子善、演员马薇、知名演员张国强、刘桦、甘婷婷因在外地拍摄不能赶到发布会现场专门发来表示祝贺毛里求斯驻华大使李淼光及夫人表示对华语电影充满信心他说解决大学生创业问题是个长期工程也是百年大计并祝愿电影《鲜花盛开的地方》拍摄顺利开机大吉同时欢迎本片导演李会东到毛里求斯来取景拍摄出更多更好的优秀电影电影。大学生创业达人、沃德中医研究院董事长出品人何军先生发言说创业不代表从事商业你所选择的任何一个专业任何一个职业都叫创业人生没有脚本但你可以选定一个专业那就是你的剧本聚焦它一年两年哪怕十年你就有可能成为同个创业领域内的最好创业者。著名表演艺术家刘晓庆讲述了自己拍戏的经验讲到做人要学会感恩她对本片充满信心给与了肯定并且表示还要出演重要角色期待与李会东导演早日合作。著名导演监制于敏发表重要讲话他对影片充满信心从影像风格、舆论导向剧本把控等方面严格要求并说这一定是一部非常有挑战性、充满正能量、充满爱心、充满激情、充满情怀的好电影。编剧顾伟讲《鲜花盛开的地方》是一部现实主义题材电影电影主要讲述了一个菜鸟毕业生和一个落魄老板相爱相杀最后抱团取暖的故事电影取材于真实故事结合当前的经济形势和创业压力力争打造一部接地气有温度与观众产生共鸣的喜剧爱情电影。导演李会东从项目主题、拍摄周期、以及电影风格上做了阐述电影《鲜花盛开的地方》是一部励志青春偶像剧影片将分三个色调来表达三个阶段青春校园、创业、到成功来反映人物性格内心世界的变化给人一种不同影像风格视觉上的享受。电影前期将在厦门拍摄天左右后转场深圳。演员、主持人薛飞发表了自己的感言首次触电的河北初创文化传播有限公司总经理陈令格激动万分的讲特别看好这部影片看好导演看好主创团队我们就投资这部电影。投资方芳香文化健康产业集团公司董事长刘辉、诚信集团董事长冯超、奥格世纪国际影业董事长李馨馨与好莱坞帮国际影业北京有限公司董事长李会东分别进行签约合作电影《鲜花盛开的地方》绞阑刹咽蛋https://github.com/trailofbits/ebpfpub/issues/1625?pV9A3 <br />https://github.com/trailofbits/ebpfpub/issues/245?84ya2 <br />https://github.com/trailofbits/ebpfpub/issues/3034?hl8g1 <br />https://github.com/trailofbits/ebpfpub/issues/1609 <br />https://github.com/trailofbits/ebpfpub/issues/504?lelpo <br />https://github.com/trailofbits/ebpfpub/issues/3147?hudjm <br />https://github.com/trailofbits/ebpfpub/issues/1767?dvsuq <br />jvyfrcyhrbsdjwmfsmstmrcegyombknfjiq |
icnrg/draft-irtf-icnrg-terminology | 501731477 | Title: Nit from <NAME> to fix on next iteration
Question:
username_0: Offlist
Just one nit, not worth iterating on the list, so this
is just in case you want to tweak it in passing later:
"neither the content nor the name can change" is not
quite right as if both change then all bets are off.
Cheers,
S. |
dotnet/maui | 989367814 | Title: Maui library?
Question:
username_0: Why is there no _Maui Class Library_ project template? Should I create an app and remove the undesired parts?
Is there any documentation about creating Maui controls from scratch?
Answers:
username_1: I think there is no such thing as "MAUI Class library" project
It's just a Class Library, and it targets NET6.

There is not much documentation about MAUI right now. It is still work in progress, you can check that https://github.com/dotnet/docs-maui/issues
username_0: A Net6 class library doesn't seem enough for control libraries, which will need the "Platforms" folder magic to locate the platform-specific code as well as multi-targeting.
username_1: In that case yes, you can create a MAUI App and delete the things you don't need
OR
You can create a Class Library, and then modify the .csproj file
Replace ` <TargetFramework>NET6</TargetFramework>` with `<TargetFrameworks>net6.0-ios;net6.0-android;</TargetFrameworks>` Notice that it uses TargetFrameworkS with an S
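For example, the project file could end up looking roughly like this (the TFM list and the `UseMaui` property are illustrative; adjust to the platforms you actually target):
```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Multi-target instead of a single TargetFramework -->
    <TargetFrameworks>net6.0-android;net6.0-ios</TargetFrameworks>
    <UseMaui>true</UseMaui>
  </PropertyGroup>
</Project>
```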
username_2: Visual Studio RC1 will probably add the missing template 🙂. Which is expected to be released this week or early next one.
A little more patience!!
username_3: We have additional project templates and item templates coming in the next preview, including a new `.NET MAUI Class Library` project template:
```
C:\GitHub\maui>dotnet new -l
These templates matched your input:
Template Name Short Name Language Tags
-------------------------------------------- ------------------- ---------- ---------------------------------------------------
.NET MAUI App maui [C#] MAUI/Android/iOS/macOS/Mac Catalyst/WinUI
.NET MAUI Blazor App maui-blazor [C#] MAUI/Android/iOS/macOS/Mac Catalyst/WinUI/Blazor
.NET MAUI Class Library mauilib [C#] MAUI/Android/iOS/macOS/Mac Catalyst/WinUI
.NET MAUI ContentPage maui-page-xaml [C#] MAUI/Android/iOS/macOS/Mac Catalyst/WinUI/Xaml/Code
.NET MAUI ContentPage (C#) maui-page-csharp [C#] MAUI/Android/iOS/macOS/Mac Catalyst/WinUI/Xaml/Code
.NET MAUI ContentView maui-view-xaml [C#] MAUI/Android/iOS/macOS/Mac Catalyst/WinUI/Xaml/Code
.NET MAUI ContentView (C#) maui-view-csharp [C#] MAUI/Android/iOS/macOS/Mac Catalyst/WinUI/Xaml/Code
```
Status: Issue closed
username_3: Here's the PR where it was merged a short while ago: https://github.com/dotnet/maui/pull/1827 |
agda/agda | 462567057 | Title: Dotted variables / not in scope variables
Question:
username_0: I think in old versions of Agda, there was this concept of dotted variables that was notated:
`.n : ℕ`
But now the notation for the latest Agda 2.6 is:
`n : ℕ (not in scope)`
I could not find anything about this in the docs. Do dotted and not-in-scope variables have the same meaning? What was the background of this change?
Status: Issue closed
Answers:
username_0: Thank you so much. This is exactly the information I was looking for.
username_0: Just noticed that the changelog was split.
Here is the link where you can find the reference above:
[Release notes 2.6.0](https://github.com/agda/agda/blob/master/doc/release-notes/2.6.0.md)
And the split commit:
064095e |
DIYgod/RSSHub | 1105150072 | Title: Bendibao content for Chengdu cannot be fetched; no problems found for other regions
Question:
username_0: ### Route address
rsshub.app/bendibao/news
### Full route address
rsshub.app/bendibao/news/cd
### Related documentation
https://docs.rsshub.app/new-media.html#ben-de-bao
### What is expected?
Successfully fetch Bendibao's Chengdu focus news
### What actually happened?
All fetched text content is <![CDATA[ undefined ]]>
### Deployment
RSSHub demo (https://rsshub.app)
### Deployment-related information
Docker 20.10.12
### Additional information
```shell
Both self-hosted and public RSSHub instances have this problem
```
### This is not a duplicate issue
- [X] I have searched the [existing issues](https://github.com/DIYgod/RSSHub/issues) to make sure this bug has not been reported yet.<issue_closed>
Status: Issue closed |
bumptech/glide | 298757356 | Title: Question is it correct: scaletype XY when placeholder, scale CENTER_CROP when image
Question:
username_0: I would like to use a placeholder with scale type FIT_XY when the image fails to load, because my placeholder is a .9.png.
When the image loads correctly I would like to use scale type CENTER_CROP.
The code below works, but is it correct (especially the line in onLoadFailed where I set the drawable for the placeholder)?
```
Glide
.with(context)
.load(url)
.listener(new RequestListener<Drawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Drawable> target, boolean isFirstResource) {
ImageView view = ((ImageViewTarget<?>) target).getView();
view.setScaleType(ImageView.ScaleType.FIT_XY);
Glide.with(coreInstance).load(model).apply(new RequestOptions().placeholder(R.drawable.blank_photo)).into(target);
return false;
}
@Override
public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
ImageView view = ((ImageViewTarget<?>) target).getView();
view.setScaleType(ImageView.ScaleType.CENTER_CROP);
return false;
}
})
.into(imageView);
```
I am using Glide library `compile("com.github.bumptech.glide:glide:4.0.0")`
Answers:
username_1: You probably should use the latest version of Glide, not `4.0.0`.
It's not safe to start a new load into the same Target in ``onLoadFailed``. If the ``target`` is not `imageView` then it's ok, though a little odd. If they're the same, use http://bumptech.github.io/glide/javadocs/460/com/bumptech/glide/RequestBuilder.html#error-com.bumptech.glide.RequestBuilder-
Changing the scale type is ok, but keep in mind that Glide will set its `Transformation` based on the scale type if you don't specify a `Transformation` yourself. You might want to specify `fitCenter()` or something to avoid using the default scale type.
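A sketch of the `error()` approach mentioned above (requires a Glide version where `error()` accepts a nested `RequestBuilder`; the drawable name is taken from your snippet, the rest is illustrative):
```java
// Primary load; if it fails, Glide starts the nested error request itself,
// instead of a new load being started from onLoadFailed on the same Target.
Glide.with(context)
    .load(url)
    .error(
        Glide.with(context)
            .load(R.drawable.blank_photo))
    .into(imageView);
```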
martynassateika/FUEL-CMS-phpstorm-plugin | 357882246 | Title: Advanced module view
Question:
username_0: Implement a way for users to work with advanced modules.
Just like IntelliJ supports listing and enabling/disabling installed plugins, the FuelCMS plugin should support listing installed advanced modules, with a checkbox that allows instant enabling / disabling of the module.
This task will ***not*** deal with downloading modules.
For now, the goal is to:
* be able to view advanced modules in a JBTable with 2 columns:
* "Folder" (e.g. 'fuel', 'user_guide')
* "Version" (e.g. '1.4.2', '1.0') -- can be taken from the config/constants.php file. If we can't get to the version number (constants file does not exist, version constant undefined, etc) then display an error (underline name?)
* "Allowed in admin" flag. This would be a checkbox. Disabling the checkbox would immediately alter the "modules_allowed" array in the MY_fuel.php file.
Not sure whether the flag should be disabled for 'fuel'. Enabling it might cause issues for people with an older version of the CMS, see https://github.com/daylightstudio/FUEL-CMS/pull/428. |
Mittagskogel/Sulfurous | 313345329 | Title: Double Pendulum su Scratch
Question:
username_0: The pendulum leaves a white trace, instead of a red trace like in the original project.
https://scratch.mit.edu/projects/20528746/
https://sulfurous.aau.at/#20528746
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36
Answers:
username_1: Couldn't resist!
when I start as clone block sets brightness to 200 (this is the same as setting it to 0)
if you remix the project with 0 it works.
so brightness % 200 should be the fix.
username_0: Ok! By setting brightness to 0 the trace color is brownish instead of red. So, is there a difference in the way Scratch and Sulfurous handle brightness values?
username_1: that could be the color effect (ie change it from 80 to something else) - it's very subtle though on my (old) monitor, could even be how RGB is being handled internally.
username_2: The brightness is fixed but the color is still brownish. I will try to also fix the color soon.
username_2: The color is now red and with the brightness also fixed I think this issue can be closed.
Status: Issue closed
|
itchyny/lightline.vim | 306286100 | Title: Not working in vim, works in neovim
Question:
username_0: Hello,
I've just installed this plugin, it's working perfectly fine in nvim, but it doesn't seem to be working on vim8 at all (statusline isn't showing up). Any thoughts?
Answers:
username_1: I use this plugin in Vim 8, not neovim and it works well. Please paste a minimal vimrc to reproduce the problem so that I can investigate the problem.
username_0: Here's my vimrc file:
```
set nocompatible
set number
" Switch syntax highlighting on
syntax on
syntax enable
if (has("termguicolors"))
set termguicolors
endif
" Enable file type detection and do language-dependent indenting.
filetype plugin indent on
call plug#begin('$HOME/.config/nvim/plugged')
autocmd vimenter * NERDTree
if has('nvim')
Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
else
Plug 'Shougo/deoplete.nvim'
Plug 'roxma/nvim-yarp'
Plug 'roxma/vim-hug-neovim-rpc'
endif
Plug 'kristijanhusak/vim-hybrid-material'
Plug 'vim-airline/vim-airline-themes'
Plug 'w0rp/ale'
Plug 'username_1/lightline.vim'
Plug 'scrooloose/nerdtree'
Plug 'carlitux/deoplete-ternjs', { 'do': 'npm install -g tern' }
call plug#end()
colorscheme hybrid_material
let g:lightline = {
\ 'colorscheme': 'wombat',
\ }
set background=dark
```
username_1: It works for me with your configuration, it's weird. Can you give me information of Vim version? Also, can you test if
- setting `set laststatus=2` in your vimrc
- removing `set termguicolors` configuration
- moving `Switch syntax highlighting on` at the end of vimrc
username_0: Yay, setting set laststatus=2 worked! I am pretty sure I've tried it before and it didn't work, but perhaps I am wrong... Thanks!
Status: Issue closed
username_1: OK. |
LoveLifeEveryday/LoveLifeEveryday.github.io | 548372907 | Title: No idea how to write a blog post? I'll teach you step by step! | Hexo
Question:
username_0: https://lovelifeeveryday.github.io/2020/01/11/xie-bo-ke-mei-tou-xu-wo-shou-ba-shou-jiao-ni/
No idea how to write a blog post? I'll teach you step by step! Preface
This article has been included in my GitHub personal blog; everyone is welcome to visit:
My GitHub blog
This article does not cover setting up the blog itself; it mainly shares my views on the overall approach to writing blog posts. Study list:
The benefits of writing a blog
The framework of ideas for a blog post
Blog |
blokadaorg/blokada | 316521481 | Title: Whitelist may be ignored in some scenarios
Question:
username_0: Add a file with whitelist hosts. save.
Add or remove blacklist host files.
all displays seem OK
but white list file is no longer accessed by Blokada
stays that way no matter what change made in app
recovery requires clearing blokada cache, data and forced stop
and resetting all files.
I have an added black list file .... that might be a factor too<issue_closed>
Status: Issue closed |
microsoft/onnxruntime | 727854401 | Title: How to use batchsize in onnxruntime?
Question:
username_0: **Describe the bug**
I read the sample of imagenet and the Q&A(https://github.com/microsoft/onnxruntime/issues/1632), but I still do not know how to use batchsize.
I think the sample means that the session itself do not care the shape of the input, just set the input_tensor the proper shape with the batchsize, and run the session, like the code below:
auto output_tensors = session.Run(Ort::RunOptions{nullptr}, input_node_names.data(), &input_tensor, 1, output_node_names.data(), 2);
But, it doesn't work for me in the current version. It gives me some errors like:
terminate called after throwing an instance of 'Ort::Exception'
what(): Got invalid dimensions for input: data for the following indices
index: 0 Got: 4 Expected: 1
Please fix either the inputs or the model.
I deleted the check code and re-made the whole project.
But it still doesn't work. In the output tensor, it just gives me random results except for the first item in the batch.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 18.04):
- ONNX Runtime installed from (source or binary): source
- ONNX Runtime version:1.5.2
- GCC/Compiler version (if compiling from source): g++ 7.5.0
- CUDA/cuDNN version:10.2
- GPU model and memory: 1660Ti 6G
Answers:
username_1: "I think the sample means that the session itself do not care the shape of the input, just set the input_tensor the proper shape with the batchsize," - That is true but your model should also support it. In this case, your model does not - it is not flexible in its requirement - it expects batch size to be 1 and you provided 4.
Usually the model input has a "dynamic" shape for the batch dimension (see https://github.com/microsoft/onnxruntime/issues/2118 https://github.com/microsoft/onnxruntime/issues/1944). I think some frameworks (Torch for example) support exporting the model with "dynamic" input shape requirements that can be used for batched inferencing.
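For example, with PyTorch the export can mark the batch dimension as dynamic; a sketch (the model, input shape, and tensor names are placeholders):
```python
import torch

# model: your torch.nn.Module; the dummy input has batch size 1, but the exported
# graph accepts any batch size because dim 0 is declared dynamic below.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["data"], output_names=["output"],
    dynamic_axes={"data": {0: "batch"}, "output": {0: "batch"}},
)
```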
Status: Issue closed
|
type-challenges/type-challenges | 1142602461 | Title: 268 - If
Question:
username_0:
```ts
type If<C extends true | false, T, F> = C extends true ? T : F
``` |
RobotWebTools/ros2djs | 477532453 | Title: TraceShape maxPoses doesn't allow you to set maxPoses to 0 (Infinity).
Question:
username_0: I made a pull request for this, but the CI build failed due to one of your npm modules (https://travis-ci.org/RobotWebTools/ros2djs/jobs/568488362). I don't want to make any changes to your package.json, so I'm including this issue.
Here's the overview:
To set maxPoses to Infinity, you have to pass a 0 to TraceShape, but when setting this.maxPoses in the TraceShape function, it does this: this.maxPoses = options.maxPoses || 100. If I set maxPoses to 0 (meaning I want it to be infinity) the line of code evaluates it to be a falsy value and sets maxPoses to 100 instead. My code just checks to see if it's defined OR it equals 0. |
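A sketch of that check (paraphrasing the description above, not the exact PR diff):
```js
// Treat 0 as a valid "infinite" value instead of letting || replace it with 100.
this.maxPoses = (options.maxPoses !== undefined) ? options.maxPoses : 100;
```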
OpenNTF/org.openntf.nsfodp | 419085844 | Title: Observed StackOverflowError in the Bazaar during local compilation
Question:
username_0: It's possible the fix will be in the Bazaar, but I expect that the cause will be something generated by the compiler.
```
java.lang.StackOverflowError
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:242)
at java.io.File.isDirectory(File.java:849)
at java.io.File.toURL(File.java:686)
at org.eclipse.osgi.storage.bundlefile.DirZipBundleEntry.getLocalURL(DirZipBundleEntry.java:61)
at org.eclipse.osgi.storage.url.BundleURLConnection.getLocalURL(BundleURLConnection.java:120)
at org.eclipse.osgi.storage.url.BundleURLConverter.resolve(BundleURLConverter.java:52)
at org.eclipse.core.runtime.FileLocator.resolve(FileLocator.java:229)
at org.eclipse.core.runtime.FileLocator.getBundleFile(FileLocator.java:245)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:125)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:203)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:214)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:203)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:214)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:203)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:214)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:203)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:214)
at com.ibm.xsp.extlib.javacompiler.impl.SourceFileManager.resolveBundle(SourceFileManager.java:203)
...
```<issue_closed>
Status: Issue closed |
nodejs/help | 292215342 | Title: How to force the occurrence of a problem.
Question:
username_0: I am hunting down a tricky error in Node.js's core for Win. It's about the following lines:
https://github.com/nodejs/node/blob/02fef8ad5a6c0e5c1ce0d4b46aa3a762935c981c/lib/internal/child_process.js#L571-L579
```javascript
target._send({ cmd: 'NODE_HANDLE_ACK' }, null, true);
var obj = handleConversion[message.type];
// Update simultaneous accepts on Windows
if (process.platform === 'win32') {
handle._simultaneousAccepts = false;
net._setSimultaneousAccepts(handle);
}
```
it seems that under the right circumstances `._send(` might trigger the closing of the handle which then later lets `net._setSimultaneousAccepts` throw a memory access error (in c-code: triggering a hard-close of node)
That behavior has been hard for me to reproduce so I wonder: How could I force trigger the error in a simple example, worthy of an issue to the node repository.
Answers:
username_1: How frequently does that problem occur? Enough to get it to fail in a C/C++ debugger after a couple attempts?
username_0: Hmm, yes : in a complex system. I have a test case failing constantly but only if the whole of the suite is run - not some single test. Which made it hard to triangulate. Not sure how to attach a C/C++ debugger to the node process (not a windows person; not a c person)
username_1: If you’re using VSCode, there are some things you can look up to make that happen.
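For instance, one way to run the failing test under a native debugger outside of VSCode (the path and test entry point are placeholders for whatever your suite actually uses):
```sh
# Launch node under lldb on macOS and run until the crash (illustrative)
lldb -- node path/to/failing-test.js
(lldb) run
(lldb) bt   # backtrace once it stops on the memory access error
```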
username_0: It's the spectron testcase of an dat-desktop. Tbh. It had been kinda rough to track the issue - also because I use a Mac - which is why I thought the opposite road: trying to dump a socket during ACK might be easier to reproduce, pinpoint and test... I will need to look at it more in the office
username_2: @username_0 - is this still outstanding?
Status: Issue closed
username_2: inactive, closing |
webignition/php-basil-compilable-source-factory | 514894802 | Title: Generated test class should extend PantherTestCase
Question:
username_0: Extend `Symfony\Component\Panther\PantherTestCase`
Probably just need to add this to the test code generator for now. Will need same in whatever package generates code from these models.
Status: Issue closed
Answers:
username_0: No, it really shouldn't. |
skywind3000/vim-quickui | 735452799 | Title: [feature] Support showing a context menu entry only if the current file type matches
Question:
username_0: For example, the one given in the Manual:
```
call quickui#menu#install('&C/C++', [
\ [ '&Compile', 'echo 1' ],
\ [ '&Run', 'echo 2' ],
\ ], '<auto>', 'c,cpp')
```
The menu above only appears when the filetype is C.
A context menu usually has quite a few entries; could you support auto for each entry of the context menu?
Answers:
username_1: Of course that's supported; just use the third parameter:
```
let g:context_menu_k = [
\ ["&Peek Definition\tAlt+;", 'call quickui#tools#preview_tag("")'],
\ ["S&earch in Project\t\\cx", 'exec "silent! GrepCode! " . expand("<cword>")'],
\ [ "--", ],
\ [ "Find &Definition\t\\cg", 'call MenuHelp_Fscope("g")', 'GNU Global search g'],
\ [ "Find &Symbol\t\\cs", 'call MenuHelp_Fscope("s")', 'GNU Gloal search s'],
\ [ "Find &Called by\t\\cd", 'call MenuHelp_Fscope("d")', 'GNU Global search d'],
\ [ "Find C&alling\t\\cc", 'call MenuHelp_Fscope("c")', 'GNU Global search c'],
\ [ "Find &From Ctags\t\\cz", 'call MenuHelp_Fscope("z")', 'GNU Global search c'],
\ [ "--", ],
\ [ "Goto D&efinition\t(YCM)", 'YcmCompleter GoToDefinitionElseDeclaration'],
\ [ "Goto &References\t(YCM)", 'YcmCompleter GoToReferences'],
\ [ "Get D&oc\t(YCM)", 'YcmCompleter GetDoc'],
\ [ "Get &Type\t(YCM)", 'YcmCompleter GetTypeImprecise'],
\ [ "--", ],
\ ['Dash &Help', 'call asclib#utils#dash_ft(&ft, expand("<cword>"))'],
\ ['Cpp&man', 'exec "Cppman " . expand("<cword>")', '', "c,cpp"],
\ ['P&ython Doc', 'call quickui#tools#python_help("")', '', 'python'],
\ ["S&witch Header\t<SPC>fw", 'SwitchHeaderEdit', '', "c,cpp"],
\ ]
```
username_0: Got it, thanks!
Status: Issue closed
|