repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M)
---|---|---|
dotnet/aspnetcore | 925581650 | Title: Try replacing transition cleanup with Razor syntax formatting
Question:
username_0: At the moment there is cleanup code that runs as part of `CSharpFormattingPass` and `CSharpOnTypeFormattingPass`, which formats the beginning and end of each SourceMapping. Since it works on text only, it can't reason about some specifics (e.g., it can't do something different for `@{ }` versus `@if { }`), so it is worth investigating whether this logic can be moved to `RazorFormattingPass`, which can format based on syntax nodes.
Note that this might not work, and doing it on type might be even harder, so consider a fix for this as a spike. |
fac21/Final-project-NSMM | 905235373 | Title: Settings page
Question:
username_0: log out
change username
change password
delete account
Answers:
username_0: Settings page:
--
add Input form/button to change password
Insert new password to user table
Logout button + logic
Delete account button + logic
- Report user input form (user name/reason) submit button. Logic to add to db?/response (STRETCH goal)
Status: Issue closed
|
libexpat/libexpat | 581485463 | Title: lt-xmlwf.c:164:35: error: expected ',' or ';' before 'C'
Question:
username_0: Hi Everyone,
I'm testing Expat master with Autotools builds on MSYS2 i686 using a Windows 7 x86 host. The environment can be downloaded from [MSYS2 Home | Installer](https://www.msys2.org/). Click the button with the label *"msys2-i686-20190524.exe"*.
`./buildconf.sh` and `configure` go well. Make is not going so well:
```
libtool: link: gcc -g -O2 -Wall -Wextra -fexceptions -fno-strict-aliasing -Wmiss
ing-prototypes -Wstrict-prototypes -pedantic -Wduplicated-cond -Wduplicated-bran
ches -Wlogical-op -Wrestrict -Wnull-dereference -Wjump-misses-init -Wdouble-prom
otion -Wshadow -Wformat=2 -Wmisleading-indentation -fvisibility=hidden -DXML_ENA
BLE_VISIBILITY=1 -fno-strict-aliasing -o .libs/xmlwf.exe xmlwf-xmlwf.o xmlwf-xml
file.o xmlwf-codepage.o xmlwf-unixfilemap.o ../lib/.libs/libexpat.dll.a -L/ming
w32/lib
./.libs/lt-xmlwf.c:164:35: error: expected ',' or ';' before 'C'
164 | const char * LIB_PATH_VALUE = ""C:\\msys32\\home\\<NAME>\\libe
xpat\\expat\\lib\\.libs";";
| ^
./.libs/lt-xmlwf.c:164:37: error: stray '\' in program
164 | const char * LIB_PATH_VALUE = ""C:\\msys32\\home\\Jeffrey Walton\\libe
xpat\\expat\\lib\\.libs";";
| ^
...
```
-----
Autoconf seems to detect the system OK. I think this may be a libtool bug based on the filename `lt-xmlwf.c`.
```
$ cat -n xmlwf/.libs/lt-xmlwf.c
1
2 /* ./.libs/lt-xmlwf.c - temporary wrapper executable for .libs/xmlwf.exe
3 Generated by libtool (GNU libtool) 2.4.6
4
...
162 externally_visible const char * MAGIC_EXE = "%%%MAGIC EXE variable%%%";
163 const char * LIB_PATH_VARNAME = "PATH";
164 const char * LIB_PATH_VALUE = ""C:\\msys32\\home\\Jeffrey Walton\\libe
xpat\\expat\\lib\\.libs";";
165 const char * EXE_PATH_VARNAME = "PATH";
166 const char * EXE_PATH_VALUE = "\\home\\Jeffrey:Walton\\libexpat\\expat
\\lib\\.libs:\\mingw32\\lib:\\mingw32\\bin;";
```
And:
```
$ uname -s
MINGW32_NT-6.1-7601
$ ./configure
configure: loading site script /mingw32/etc/config.site
checking build system type... i686-w64-mingw32
checking host system type... i686-w64-mingw32
checking for a BSD-compatible install... /usr/bin/install -c
...
```
-----
After installing/running *"msys2-i686-20190524.exe"*:
```
pacman -Syu
# click "X" on MSYS2 window
# open new MSYS2 window
pacman -S autoconf automake libtool gcc git cmake make
# now clone and configure
```
Answers:
username_1: The way I read this, there is a bug in libtool that affects compilation of Expat on MSYS2.
Unless you have a tiny, maintainable workaround in mind for Expat, I'm not sure if Expat is a good place to even address an issue like that.
username_0: Yeah, agreed. I'm in the habit of reporting the problem to the affected project first (Expat), and then to upstream or the external project (libtool).
username_1: No worries, thanks for bringing it up.
Status: Issue closed
|
egulias/EmailValidator | 963936342 | Title: Whitespaces before and after the @ should not be valid
Question:
username_0: Hello,
we are using your RFCValidation (Package version 2.1.25) through the laravel-framework validation and found a problem. Some customers entered their email address with a whitespace before or after the "@" separating the local and domain parts. In our opinion, this should not be valid according to the RFC.
#### currently
```txt
<EMAIL> -> valid
name @domain.com -> valid
name@ domain.com -> valid
name @ domain.com -> valid
```
#### should be
```txt
<EMAIL> -> valid
name @domain.com -> invalid
name@ domain.com -> invalid
name @ domain.com -> invalid
```
kind regards
Torben
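The stricter behaviour requested in the table above can be sketched independently of the PHP package. This is only an illustration in Python; the function name and regex are made up for the example and are not part of EmailValidator:

```python
import re

def strict_at_check(address):
    """Hypothetical strict rule: reject any whitespace immediately
    before or after the "@" separating local and domain parts."""
    return re.search(r"\s@|@\s", address) is None

for addr in ["name@domain.com", "name @domain.com",
             "name@ domain.com", "name @ domain.com"]:
    print(addr, "->", "valid" if strict_at_check(addr) else "invalid")
```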
Answers:
username_1: Hello @username_0
As you can see in https://github.com/username_1/EmailValidator/blob/2.1.x/tests/EmailValidator/Validation/RFCValidationTest.php#L243 , that is the expected behaviour.
The email is valid, albeit with warnings, because of a rare option where it might work (like https://github.com/username_1/EmailValidator/blob/2.1.x/tests/EmailValidator/Validation/RFCValidationTest.php#L103).
You can use [NoRFCWarningsValidation](https://github.com/username_1/EmailValidator/blob/2.1.x/src/Validation/NoRFCWarningsValidation.php) to invalidate these and other cases. |
GjjvdBurg/labella.py | 1114377814 | Title: FileNotFoundError: [WinError 2] The system cannot find the file specified
Question:
username_0: Hello,
I'm new to Python. Really excited to use your module, but I seem to be stuck.. Hoping you can help.
**Error:**
`FileNotFoundError: [WinError 2] The system cannot find the file specified`
**Steps to reproduce:**
1. Create a Python virtual env in a new folder: `python -m venv .venv`
2. Activate venv: `.\.venv\Scripts\activate`
3. `pip install labella`, success
4. Copy your example [`timeline_kit_1.py`](https://github.com/GjjvdBurg/labella.py/blob/master/examples/timeline_kit_1.py), paste in a new file in folder `timeline.py`
5. Execute example: `python .\timeline.py`
Results in the above error.
**[Preliminary search](https://stackoverflow.com/questions/35443278/filenotfounderror-winerror-2-the-system-cannot-find-the-file-specified):**
- Indicates that a filename could be incorrect(?)
- It seems that the `tl = TimelineSVG(items, options=options)` line causes the problem. If I comment it out, the script executes (though of course, no output).
**Error Full-text:**
```
(.venv) PS E:\dataDocuments\python\timeline_test> python .\timeline.py
Traceback (most recent call last):
File "E:\dataDocuments\python\timeline_test\timeline.py", line 64, in <module>
main()
File "E:\dataDocuments\python\timeline_test\timeline.py", line 56, in main
tl = TimelineSVG(items, options=options)
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\timeline.py", line 311, in __init__
super().__init__(items, options=options, output_mode="svg")
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\timeline.py", line 145, in __init__
self.items = self.parse_items(dicts, output_mode=output_mode)
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\timeline.py", line 179, in parse_items
it = Item(
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\timeline.py", line 97, in __init__
self.width, self.height = self.get_text_dimensions()
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\timeline.py", line 103, in get_text_dimensions
width, height = text_dimensions(self.text, fontsize="12pt")
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\tex.py", line 139, in text_dimensions
width, height = get_latex_dims(tex, silent=silent,
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\tex.py", line 106, in get_latex_dims
compile_latex(fname, tmpdirname, latexmk_options, silent=silent)
File "E:\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\tex.py", line 91, in compile_latex
raise (e)
**File "E:**\dataDocuments\python\timeline_test\.venv\lib\site-packages\labella\tex.py", line 89, in compile_latex
output = subprocess.check_output(command, stderr=subprocess.STDOUT)
**File "C:**\Python39\lib\subprocess.py", line 424, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\Python39\lib\subprocess.py", line 505, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Python39\lib\subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Python39\lib\subprocess.py", line 1420, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
- I don't know why my interpreter shifts from `E:` to `C:`. That's weird<issue_closed>
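A likely cause, hedged: `[WinError 2]` from `subprocess` means the executable itself could not be found, and the traceback shows labella's `tex.py` shelling out to an external LaTeX tool (latexmk), so a missing LaTeX installation would produce exactly this error. A minimal sketch of a pre-flight check (the helper name is made up for illustration):

```python
import shutil
import subprocess

def run_tool(command):
    """Run an external tool, failing with a clear message when the
    executable is missing instead of a bare FileNotFoundError."""
    if shutil.which(command[0]) is None:
        raise RuntimeError(
            f"'{command[0]}' was not found on PATH; is it installed?"
        )
    return subprocess.check_output(command, stderr=subprocess.STDOUT)
```

Running `latexmk --version` in the same shell is a quick way to confirm whether the tool is reachable.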
Status: Issue closed |
nomadbsd/handbook | 1119519451 | Title: Unable to install Linux packages.
Question:
username_0: The NomadBSD Handbook states that the steps for installing Linux packages is as follows:
```
# sysrc linux_enable=YES
# service abi start
# pkg install linux-sublime
```
Unfortunately, that doesn't work for me:
```
~> cat /etc/rc.conf | grep linux
linux_enable="YES"
# service abi start
abi does not exist in /etc/rc.d or the local startup
directories (/usr/local/etc/rc.d), or is not executable
# ls -l /etc/rc.d | grep abi
root@NomadBSD:~ #
# ls -l /usr/local/etc/rc.d | grep abi
root@NomadBSD:~ #
```
It has been suggested that there has been some luck with a workaround, where the user installs a Linux browser first by using the Linux Browser install GUI. I have considered this, and am interested in installing a Linux browser; however, I am only interested in installing the new Microsoft Edge for Linux browser, which isn't in the list. Is there a way to install the Microsoft Edge browser by modifying the GUI script, or should the issue of being unable to install any Linux packages be tackled head-on? I am a bit unsure, but I definitely need some help on both interrelated issues...
Answers:
username_1: I've just added support for installing Microsoft Edge to https://github.com/username_1/linux-browser-installer.git
```
$ git clone https://github.com/username_1/linux-browser-installer.git
$ cd linux-browser-installer
$ sudo ./linux-browser-installer install edge
```
username_1: These messages are nothing to be concerned about. |
natemcmaster/CommandLineUtils | 376053098 | Title: Improve the error message when constructor parameter type is not registered
Question:
username_0: **Is your feature request related to a problem? Please describe.**
When forgetting to add a service to the DI container, we get a MissingMethodException with the generic message: No parameterless constructor defined for this object.
**Describe the solution you'd like**
A clear exception that explicitly names the type being constructed and the constructor parameter that could not be resolved would be very useful.<issue_closed>
Status: Issue closed |
dKvale/aqi-watch | 130670021 | Title: High concentrations reported at 5:00, 02/02/2016
Question:
username_0: __Update for Feb 02, 2016 at 06:33 CST.__ There is 1 monitoring site reporting a 1-hr AQI above 85. The maximum 1-hr AQI of 87 (PM2.5) was reported at MADISON EAST [AQS ID: 550250041] by the **Wisconsin Dept. of Natural Resources**. Learn more at <a href="http://dkvale.github.io/aqi-watch">AQI Watch</a>. |
stretchr/testify | 125584889 | Title: require.FailNow() removed from exported interface following codegen merge
Question:
username_0: As a result of the code generation PR being merged (#241) the `FailNow()` method is no longer exported from the `require` package.
The `FailNow()` method is still present on the `TestingT` interface within the package, but presumably the code generation doesn't have this?
0d5a14c5a477957864f3b747d95255ad4e34bcc0 - `require.FailNow()` exists
efd1b850c1e5df1c539e83f61f7d5e113b6484e9 - first commit on the codegen branch (#241) removes `require.FailNow()`
c92828f29518bc633893affbce12904ba41a7cfa - #241 merged into master
Answers:
username_1: Hi @username_0,
Thanks for the heads up, looking into it right now.
Status: Issue closed
|
blitz-js/blitz | 789492330 | Title: Files compiled or cached in ./blitz are not deleted, when renaming or delete from src
Question:
username_0: ### What is the problem?
When I create a file in app, it is also created in ./blitz/caches/dev.
When I delete or rename the file, it is not deleted or renamed from ./blitz/caches/dev.
### Steps to Reproduce
1. create a file in app/auth/pages
2. delete that file
3. check in ./blitz/caches/dev/pages - the file created in step 1. will still be here
### Versions
blitz: 0.29.2
Answers:
username_1: Related issue (maybe duplicate): https://github.com/blitz-js/blitz/issues/1712
Status: Issue closed
|
tensorflow/tensorflow | 221553178 | Title: Windows: //tensorflow/python/estimator:estimator_test failing in Bazel build
Question:
username_0: http://ci.tensorflow.org/job/tf-master-win-bzl/751/consoleFull
It has been failing on ci for a while with:
```
22:36:33 ======================================================================
22:36:33 ERROR: test_train_save_copy_reload (__main__.EstimatorTrainTest)
22:36:33 ----------------------------------------------------------------------
22:36:33 Traceback (most recent call last):
22:36:33 File "\\?\c:\tmp\Bazel.runfiles_r2z2r52c\runfiles\org_tensorflow\py_test_dir\tensorflow\python\estimator\estimator_test.py", line 267, in test_train_save_copy_reload
22:36:33 os.renames(model_dir1, model_dir2)
22:36:33 File "C:\Program Files\Anaconda3\lib\os.py", line 288, in renames
22:36:33 rename(old, new)
22:36:33 PermissionError: [WinError 5] Access is denied: 'c:\\tmp\\tmp8f7qnomv\\model_dir1' -> 'c:\\tmp\\tmp8f7qnomv\\model_dir2'
```
@gunan Can we fix this test case on Windows? Otherwise we'd better disable this test on Windows.
Answers:
username_1: Friendly ping @username_2 re: state of Windows Jenkins build.
username_2: Looks like the author of the broken test isn't on GitHub, so reassigning to @username_3 as overall owner for the Estimator code.
username_3: Is there something special about mkdtemp on Windows? It seems we don't have permissions to rename a file in such a directory?
username_2: I think the main pitfall here is failing to close all of the files in the directory before trying to move it. We had various tests that used to be sloppy about closing temp files before opening them elsewhere... perhaps the Estimator code does this somewhere?
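username_2's point, illustrated with a small generic sketch (not the actual Estimator test): on Windows, renaming a directory fails with `PermissionError` while any file inside it is still open, so every handle must be closed before the move.

```python
import os
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, "model_dir1")
dst = os.path.join(base, "model_dir2")
os.makedirs(src)

# The context manager guarantees the file handle is closed before the
# rename; a still-open handle inside src is what triggers
# PermissionError ([WinError 5]) on Windows.
with open(os.path.join(src, "checkpoint"), "w") as f:
    f.write("fake checkpoint data")

os.renames(src, dst)
```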
Status: Issue closed
|
Moya/Moya | 227964182 | Title: Moya website.
Question:
username_0: Hey guys! In #1078, @orta made a good point that maybe we should think about a website for the Moya project. What do you think about it, @Moya/contributors? 🤔
Personally, I don't have much experience with that, so we would need to have someone taking care of that mini-project.
Answers:
username_1: Using github.io website generator, or even jekyll can be a good solution for a brief and good website !
username_2: What would be the advantage compared to a list in the repo? Or maybe: what would we want to get out of a website, apart from just a list of projects built with / on top of / extending Moya?
username_2: Also, as @username_1 mentioned, I think doing so with Jekyll would be pretty nice, as I think most - and at least some of us - already have some experience with it.
username_0: I think we don't have to constrain it only to the community around Moya (projects/extensions/etc); we can also show examples/tutorials/documentation there as well. We could also show how we do open source or how to start contributing. We'd have everything in docs, but a website may be an alternative way of presenting the important stuff. But that makes me wonder: can we connect the website with our markdown docs fairly easily? Would it be time-consuming to maintain it?
username_1: Even without Jekyll, a good start could be to check out GitHub Pages, which can easily use the markdown docs in the repo.
Source: [Github Pages](https://pages.github.com)
username_3: I've helped out with similar sites like this one: https://github.com/RxSwiftCommunity/rxswiftcommunity.github.io Jekyll is great, Middleman is also great. Whatever people are familiar with sounds good to me.
username_0: Great! I think we can start with a skeleton in Jekyll as a first step. Does anyone here want to take a stab at it?
username_4: I was thinking of cleaning up Moya's in source documentation and generating documentation using [jazzy](https://github.com/realm/jazzy). We would need to create a `moya.github.io` repo to host the page. **Is anyone opposed to this?** @Moya/contributors
I have no web development skills so it would only be jazzy for now, but this gets the ball rolling.
username_3: I have no objections – starting with Jazzy makes sense. As I mentioned above, Middleman is also nice, but the cool thing the hip kids are using these days is Gatsby. I've not used it yet but am planning on looking over [the free Eggheads course](https://egghead.io/lessons/gatsby-install-gatsby-and-scaffold-a-blog) tonight.
username_4: Awesome, I started here: https://moya.github.io. Checking out what the current status of the docs look like.
That Eggheads link is awesome btw, I may try to learn a little web development. I feel limited because I don't know Javascript (would prefer learning Typescript first if possible TBH). I doubt anyone wants me to work on the Moya website in Elixir or Go 😂
username_4: I'm starting to learn React this morning so I will probably move forward with making Moya a website. Likely using Gatsby. Hope everyone is ok with that setup
username_2: Go for it! |
boa-dev/boa | 714197479 | Title: Panic when no arguments are given to JSON.parse
Question:
username_0: <!--
Thank you for reporting a bug in Boa! This will make us improve the engine. But first, fill the following template so that we better understand what's happening. Feel free to add or remove sections as you feel appropriate.
-->
**Describe the bug**
A panic occurs when no arguments are passed to `JSON.parse`.
<!-- E.g.:
The variable statement is not working as expected, it always adds 10 when assigning a number to a variable"
-->
**To Reproduce**
```js
JSON.parse()
```
<!-- E.g.:
This JavaScript code reproduces the issue:
```javascript
var a = 10;
a;
```
-->
**Expected behavior**
Expected: a JavaScript error is raised (Chrome seems to raise a `SyntaxError`)

<!-- E.g.:
Running this code, `a` should be set to `10` and printed, but `a` is instead set to `20`. The expected behaviour can be found in the [ECMAScript specification][spec].
[spec]: https://www.ecma-international.org/ecma-262/10.0/index.html#sec-variable-statement-runtime-semantics-evaluation
-->
**Build environment (please complete the following information):**
- OS: Windows 10
- Version: V1909, Build 18363.1082
- Target triple: `x86_64-pc-windows-msvc` (also happens with the WASM example on the website)
- Rustc version: `rustc 1.46.0 (04488afe3 2020-08-24)`
**Additional context**
It's caused by an `.expect` in the `builtins/json/mod.rs` file.
More info on `JSON.parse`: [https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse)
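The general shape of the fix, sketched here in Python rather than Boa's Rust (all names are illustrative): treat a missing argument as `undefined` and raise a catchable error instead of panicking.

```python
import json

def js_json_parse(args):
    # Per the JS semantics, a missing argument behaves like `undefined`,
    # whose string form is not valid JSON, so parsing raises a catchable
    # SyntaxError instead of crashing the engine.
    text = args[0] if args else "undefined"
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise SyntaxError(f"Unexpected token in JSON: {exc}") from exc
```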
<!-- E.g.:
You can find more information in [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/var).
--><issue_closed>
Status: Issue closed |
react-bootstrap/react-bootstrap | 150685773 | Title: Using FormControl with serverside rendering gives invalid checksum
Question:
username_0: When using `<FormControl type="text" name="Tussenvoegsels" />` and using serverside rendering.
I get the checksum warning with:
```
(client) ><input type="text" name="Tussenvoegsels
(server) ><input type="text" class="form-control"
```
frontend renders: `<input type="text" name="Tussenvoegsels" class="form-control" data-reactid="43">`
server renders: `<input type="text" class="form-control" name="Tussenvoegsels" data-reactid="43"/>`
This is weird, right? I don't know what controls the rendering order of the attributes.
Answers:
username_1: Doesn't look like it's anything on our end. Make sure you're using matching versions of React on server and client.
Status: Issue closed
|
craftcms/cms | 726562735 | Title: Wrong result while ordering by lightswitch and DateTime field
Question:
username_0: ### Description
I have a list of entries, each entry has the lightswitch field that should put entry on the top of the query result list when the lightswitch is turned on, also I need to sort entries by post date. So, in the end there should be the following order: highlighted entries ordered by date, then unhighlighted entries also ordered by date
I am using orderBy: "<lightswitch_field> DESC, postDate DESC" query variable, the ordering by lightswitch is correct, but the postDate ordering is wrong in the middle of the list
### Steps to reproduce
1. Create multiple entries of the same entry type containing the lightswitch field
2. Turn on the lightswitch for several entries
3. Query the entries with using orderBy "<lightswitch_field> DESC, postDate DESC"
### Additional info
- Craft version: 3.5.12.1
- PHP version: 7.4.11
- Database driver & version: MySQL 5.6.49
- Plugins & versions:
Guest Entries | 2.3.0
Reasons | 2.2.2
Redactor | 2.8.2
Super Table | 2.6.3
Typed link field | 1.0.23
Video Embedder | 1.1.4

Answers:
username_1: If you remove `platformHighlighted DESC`, do the entries come back in the right order?
username_0: removing the `platformHighlighted DESC` gives the right order by postDate:

what I've noticed is that when I order only by `platformHighlighted DESC`, it sorts properly by lightswitch, but both highlighted and unhighlighted entries are not ordered by postDate:

but when I also apply `postDate DESC`, it sorts only the highlighted part:


Looks like the postDate ordering applies only to the first part of the result obtained by lightswitch ordering
username_1: Can you try creating a Twig template with this?
```twig
{% set entries = craft.entries()
.section('platform')
.orderBy('platformHighlighted ASC, postDate DESC')
.all() %}
{% dd entries|map(e => {
postDate: e.postDate|date('Y-m-d H:i:s'),
platformHighlighted: e.platformHighlighted,
}) %}
```
Does it output entries in the same order?
username_0: Sorry for the late response, these are the results from the twig:


Ordering is still incorrect for the unhighlighted items
username_1: Aha, that makes sense. It's an unexpected side effect of Lightswitch fields trying to respect their Default Value setting for `null` values. If you run the `resave/entries` command, this will resolve itself.
```bash
php craft resave/entries --section=platform
```
Status: Issue closed
username_0: Ok, thank you for the hint 👍 |
koumoul-dev/vuetify-jsonschema-form | 947111754 | Title: Conditional schema is very limited. Need workaround.
Question:
username_0: I tried the conditional schemas, but from what I can see, once I add a condition I need to split my schemas, which means I can't access the other schema with a property from the first schema, or I am limited in the order of my properties.
For example, if I want to keep this property order and hide conditionalProp1 based on showOrHideProp1 and then conditionalProp2 based on showOrHideProp2, there doesn't seem to be a way to do this:
```
properties: {
showOrHideProp1: { type: bool },
showOrHideProp2: { type: bool },
regularProp: { type: string },
conditionalProp1: { type: string },
conditionalProp2: { type: string },
finalProp: { type: string },
}
I can use conditional schema to apply this condition for the first condition:
"allOf": [
{
properties: {
showOrHideProp1: { type: bool}
showOrHideProp2: { type: bool},
regularProp: { type: string}
},
if: {...showOrHideProp1}
then: {...conditionalProp1}
else: {...}
},
properties: {
conditionalProp2: { type: string } // Problem here
finalProp: { type: string }
}
]
```
Now I have no way to hide conditionalProp2 based on showOrHideProp2 because they are in two different schemas.
Is there any other way to do this and keep the property order?
As a workaround I am rendering the input using another instance of v-jsf and then evaluating an expression with an attribute I made up, "x-show-expression":
```
<template slot="custom-component" slot-scope="context">
<v-jsf v-show="evaluateExpression(context, 'x-show-expression')" :disabled="context.disabled" :required="context.required" :rules="context.rules" :value="context.value" :schema="JSON.parse(JSON.stringify(context.schema))" :options="options" @input="context.on.input" @change="context.on.change" >
</v-jsf>
</template>
```
```
"conditionalProp2": {
"type": "string",
"x-display": "custom-component",
"x-tag": "v-textarea",
"x-show-expression": "this.model.showOrHideProp2 === true"
}
```
```
evaluateExpression: function (context, property) {
if (context.schema[property]) {
return eval(context.schema[property]);
} else {
return true;
}
}
```
Now I can control when to display the property with an expression. Obviously, this is not following the json schema way of doing things, but since it is so limited, it would be great if you could expose an event or have a supported way to allow us to custom render and extend the vjsf input so we can use our own expressions without having to write a new component for each input.
Answers:
username_0: As a workaround I am rendering the input using another instance of v-jsf and then evaluating an expression with an attribute I made up, "x-show-expression" (the same snippets as in the question above). I'm not sure if I'm going to run into problems doing this, but now I can control when to display the property with an expression. Obviously, this is not following the json schema way of doing things, but since it is so limited, it would be great if you could expose an event or have a supported way to allow us to custom render and extend the vjsf input so we can use our own expressions without having to write a new component for each input.
username_1: Yes, this whole subject is not easy and there is a lot of room for improvement.
There is an undocumented ["x-if" annotation](https://github.com/koumoul-dev/vuetify-jsonschema-form/blob/8b68dfee3700bee4d8101927e4bd068aad74487d/lib/VJsfNoDeps.js#L247). It uses the same mechanism as x-fromData for selects: you can watch a property either in options.context or in the model. But it is limited; it only checks whether the value found at this key is truthy, it does not evaluate an expression.
I don't know how we could have more useful expressions. The nice thing about the if/then/else syntax is that json-schema itself is the expression language, but I agree that it is not really usable in many cases.
username_2: Hey, I'm working on a PR which improves the x-if annotation using the [expr-eval](https://www.npmjs.com/package/expr-eval) package.
I just want to be sure it's good for you.
username_1: I'm ok with that. Please have the PR also extend this part of the documentation https://koumoul-dev.github.io/vuetify-jsonschema-form/latest/configuration/#expressions and probably also this example https://koumoul-dev.github.io/vuetify-jsonschema-form/latest/examples#_x-if
Also, I would prefer not to add the package as a required dependency, and only try to use it if the "evalMethod" option asks for it.
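The idea of replacing raw `eval` with a restricted expression parser (the thread suggests expr-eval for JavaScript) can be sketched in Python with a tiny whitelist-based evaluator. This is an illustration of the technique, not the PR's code:

```python
import ast
import operator

# Only simple comparisons are allowed; anything else is rejected.
_CMP = {ast.Eq: operator.eq, ast.NotEq: operator.ne,
        ast.Gt: operator.gt, ast.Lt: operator.lt}

def safe_eval(expr, variables):
    """Evaluate a small comparison expression against a dict of
    variables, without eval()'s arbitrary-code-execution risk."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            return _CMP[type(node.ops[0])](ev(node.left),
                                           ev(node.comparators[0]))
        raise ValueError(f"disallowed node: {type(node).__name__}")
    return ev(ast.parse(expr, mode="eval"))
```

With this, an annotation like `"x-if": "showOrHideProp2 == True"` could be evaluated against the model safely, while something like `__import__('os')` is rejected outright.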
username_2: I added an Expression mixin to configure the parser. How can I use it dynamically with the eval method property? |
opendistro-for-elasticsearch/sql | 740281103 | Title: Workbench Test Suite is not configured correctly
Question:
username_0: `yarn jest` yields mostly failed test cases.
* Cypress tests don't pass, with an import error: `cy not defined`
* Snapshots have ids that are auto-generated on each run and will always fail when run against old snapshots with different ids
* multiple `console.error` statements
<issue_closed>
Status: Issue closed |
type-challenges/type-challenges | 1138219752 | Title: 459 - Flatten
Question:
username_0: <!--
小贴士:
🎉 恭喜你成功解决了挑战,很高兴看到你愿意分享你的答案!
由于用户数量的增加,Issue 池可能会很快被答案填满。为了保证 Issue 讨论的效率,在提交 Issue 前,请利用搜索查看是否有其他人分享过类似的档案。
你可以为其点赞,或者在 Issue 下追加你的想法和评论。如果您认为自己有不同的解法,欢迎新开 Issue 进行讨论并分享你的解题思路!
谢谢!
-->
```ts
type Flatten<T extends unknown[]> = T extends []
  ? []
  : T extends [infer E]
    ? E extends unknown[]
      ? Flatten<E>
      : [E]
    : T extends [infer First, ...infer Rest]
      ? First extends unknown[]
        ? [...Flatten<First>, ...Flatten<Rest>]
        : [First, ...Flatten<Rest>]
      : []
```
Answers:
username_0: Two minutes after writing the solution, I could no longer understand my own answer. |
jlippold/tweakCompatible | 430391482 | Title: `SmallSiri` working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.muirey03.smallsiri",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.muirey03.smallsiri",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/com.muirey03.smallsiri/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": true,
"packageName": "SmallSiri",
"category": "Tweaks",
"repository": "Packix",
"name": "SmallSiri",
"installed": "1.2.1",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 8 working reports.",
"id": "com.muirey03.smallsiri",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Make siri smaller and unobtrusive!",
"latest": "1.2.1",
"author": "Muirey03",
"packageStatus": "Working"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
dherault/serverless-offline | 567486502 | Title: `application/json` `Content-Type` response header getting overridden
Question:
username_0: ## Bug Report
**Current Behavior**
When setting the `content-type` response header to `application/json`, serverless-offline rewrites the response header to `application/json; charset=utf-8`.
If we set a different `content-type` (`application/xml` for example) the response header is not overwritten.
**Sample Code**
We created a simple project that demonstrates the issue [here](https://github.com/AurelijaZuba/simple-serverless-offline). You will see the `content-type` gets overridden as described.
**Expected behavior/code**
I wouldn't expect serverless-offline to override this header under any circumstances. Is there a reason it does?
**Environment**
- `serverless-offline` version: [e.g. v6.0.0-alpha.67]
- `node.js` version: [e.g. v12.10.0]
- `OS`: [macOS 10.14.5, Ubuntu 18.04]
Answers:
username_1: I am facing the same issue, everytime I try to render any file that isn't html, I get the same `index.html` with an error like below:
`Resource interpreted as stylesheet but transferred with MIME type text/html`
For every source file (runtime, main, polyfills).
I am trying to run an Angular app which is going to be hosted on AWS. It works when deployed, as shown in a [sample repo](https://github.com/username_1/example-angular-ng-toolkit) I made, but I can't make it work with `serverless-offline`.
<details><summary>Log from running `SLS_DEBUG=* serverless offline start`</summary>
```console
offline: ANY /production/main.77b67f8cb504676a07fa.js (λ: api)
[offline] contentType: application/json
[offline] requestTemplate:
[offline] payload: null
[offline] event: {
body: null,
headers: {
Host: 'localhost:3000',
Connection: 'keep-alive',
Pragma: 'no-cache',
'Cache-Control': 'no-cache',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
Accept: '*/*',
'Sec-Fetch-Site': 'same-origin',
'Sec-Fetch-Mode': 'no-cors',
Referer: 'http://localhost:3000/production/',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.9,es-CO;q=0.8,es;q=0.7'
},
httpMethod: 'GET',
isBase64Encoded: false,
multiValueHeaders: {
Host: [ 'localhost:3000' ],
Connection: [ 'keep-alive' ],
Pragma: [ 'no-cache' ],
'Cache-Control': [ 'no-cache' ],
'User-Agent': [
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
],
Accept: [ '*/*' ],
'Sec-Fetch-Site': [ 'same-origin' ],
'Sec-Fetch-Mode': [ 'no-cors' ],
Referer: [ 'http://localhost:3000/production/' ],
'Accept-Encoding': [ 'gzip, deflate, br' ],
'Accept-Language': [ 'en-US,en;q=0.9,es-CO;q=0.8,es;q=0.7' ]
},
multiValueQueryStringParameters: null,
path: '/{proxy+}',
pathParameters: { proxy: 'main.77b67f8cb504676a07fa.js' },
queryStringParameters: null,
requestContext: {
accountId: 'offlineContext_accountId',
apiId: 'offlineContext_apiId',
authorizer: {
claims: undefined,
principalId: 'offlineContext_authorizer_principalId'
},
domainName: 'offlineContext_domainName',
domainPrefix: 'offlineContext_domainPrefix',
[Truncated]
'content-length': [ '25824' ],
etag: [ 'W/"64e0-QuIQqBVs8M4UHZWbTqVkRS7BAuo"' ],
date: [ 'Fri, 21 Feb 2020 02:33:19 GMT' ],
connection: [ 'keep-alive' ]
}
offline: (λ: api) RequestId: ck6vk85es000ko01s4f131igx Duration: 39.23 ms Billed Duration: 100 ms
[offline] _____ HANDLER RESOLVED _____
[offline] Using response 'default'
[offline] _____ RESPONSE PARAMETERS PROCCESSING _____
[offline] Found 0 responseParameters for 'default' response
[offline] headers {
'x-powered-by': [ 'Express' ],
'content-type': [ 'text/html; charset=utf-8' ],
'content-length': [ '25824' ],
etag: [ 'W/"64e0-QuIQqBVs8M4UHZWbTqVkRS7BAuo"' ],
date: [ 'Fri, 21 Feb 2020 02:33:19 GMT' ],
connection: [ 'keep-alive' ]
}
```
</details> |
nestjs/docs.nestjs.com | 499016695 | Title: Broken link in CONTRIBUTING.md
Question:
username_0: <!--
PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION.
ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION.
-->
## I'm submitting a...
<!--
Please search GitHub for a similar issue or PR before submitting.
Check one of the following options with "x" -->
<pre><code>
[ ] Regression <!--(a behavior that used to work and stopped working in a new release)-->
[ ] Bug report
[ ] Feature request
[X] Documentation issue or request (new chapter/page)
[ ] Support request => Please do not submit support request here, instead post your question on Stack Overflow.
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
The ```CONTRIBUTING.md``` file has two broken links that redirect to a non-existing page (https://github.com/nestjs/nest/blob/master/docs/DEVELOPER.md).
## Expected behavior
<!-- Describe what the desired behavior would be. -->
Redirect to some valid content.
## Minimal reproduction of the problem with instructions
[Submitting a Pull Request (PR)](https://github.com/nestjs/docs.nestjs.com/blob/master/CONTRIBUTING.md#submit) -> Submitting a Pull Request (PR)
```
5. Run the full Nest test suite, as described in the -->developer documentation<--, and ensure that all tests pass.
```
[Coding Rules](https://github.com/nestjs/docs.nestjs.com/blob/master/CONTRIBUTING.md#coding-rules)
```
We follow Google's JavaScript Style Guide, but wrap all code at 100 characters. An automated formatter is available, see -->DEVELOPER.md<--.
```
<issue_closed>
Status: Issue closed |
rust-lang/rust | 499295566 | Title: Failed to run `cargo rustc --profile=check -- -Zunstable-options --pretty=expanded` on simple code
Question:
username_0: Code:
```rust
fn main() {}
/*
*/
```
Command (used to expand macros):
```shell
RUST_BACKTRACE=full cargo rustc --profile=check -- -Zunstable-options --pretty=expanded
```
Result:
```
thread 'rustc' panicked at 'assertion failed: `(left == right)`
left: `2`,
right: `1`', src/libsyntax/print/pprust.rs:491:17
stack backtrace:
0: 0x109cd6155 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hfa26be1652cd9787
1: 0x109d0ce10 - core::fmt::write::h9204dad1f05752a2
2: 0x109cc97ab - std::io::Write::write_fmt::h3236e023140ca9e5
3: 0x109cda49a - std::panicking::default_hook::{{closure}}::h6b39e7bec1c0e07f
4: 0x109cda1a5 - std::panicking::default_hook::hd940c417edf5953e
5: 0x107236962 - rustc_driver::report_ice::he8f462601d7e6546
6: 0x109cdacd2 - std::panicking::rust_panic_with_hook::hb5f0fb06584b7c80
7: 0x109cda73d - std::panicking::continue_panic_fmt::hfee7d7df13a8f380
8: 0x109cda69e - std::panicking::begin_panic_fmt::hf84ec34c30992fe3
9: 0x108c59f71 - syntax::print::pprust::PrintState::print_comment::h24e7691f3cd985b3
10: 0x108c5259d - syntax::print::pprust::print_crate::h8bcae347660e5c9a
11: 0x1071df51f - rustc_driver::pretty::print_after_hir_lowering::{{closure}}::h84dff5163ceef5e6
12: 0x1071dded7 - rustc_driver::pretty::print_after_hir_lowering::h5c41c09a7531a919
13: 0x107205c7f - rustc_interface::passes::BoxedGlobalCtxt::access::{{closure}}::hc486952ecd38d4e9
14: 0x10732d5bb - rustc_interface::passes::create_global_ctxt::{{closure}}::h8852eb73e68f14aa
15: 0x1072068e0 - rustc_interface::interface::run_compiler_in_existing_thread_pool::h33cd099bc358d208
16: 0x1072382d4 - std::thread::local::LocalKey<T>::with::h2792dce5fe6fe5af
17: 0x10723c6d2 - scoped_tls::ScopedKey<T>::set::h16d79aeca6a52ed8
18: 0x107255575 - syntax::with_globals::h92b424eebfdbfd3d
19: 0x1071ca8ad - std::sys_common::backtrace::__rust_begin_short_backtrace::haa6aa3f0127b885d
20: 0x109ce9f7f - __rust_maybe_catch_panic
21: 0x1071fb427 - core::ops::function::FnOnce::call_once{{vtable.shim}}::hd2853b0c0007f607
22: 0x109cbbffe - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::h26e311d1235d42ea
23: 0x109ce8d8e - std::sys::unix::thread::Thread::new::thread_start::h3569effff07e966a
24: 0x7fff769f02eb - _pthread_body
25: 0x7fff769f3249 - _pthread_start
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.40.0-nightly (ddf43867a 2019-09-26) running on x86_64-apple-darwin
note: compiler flags: -Z unstable-options -C debuginfo=2 -C incremental --crate-type bin
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
```
Answers:
username_1: I know it's been roughly seven months since there's been any activity on this issue, but in my efforts to add things to the [glacier](https://github.com/rust-lang/glacier), I can't get an ICE from this. Have you tried the latest nightly?
username_2: I also noticed a few things:
- `left` is always the number of lines of the last block comment and `right` is always `1`
- adding anything after the block comment prevents the crash, even an empty line |
ciena-frost/ember-frost-sort | 277080153 | Title: Remove dependency management from blueprint and address other inconsistencies in package.json
Question:
username_0: https://github.com/ciena-frost/ember-frost-object-details/issues/54 **MUST** be completed before this issue.
* remove from blueprints and make dependency
* ember-frost-core
* remove from devDependency and make dependency
* ember-prop-types
* ember-spread
* update ember-prop-types to ^5.0.1
* update ember-spread to ^3.0.2
* check whether these are specifically needed or not and remove if not, otherwise move to dependencies
* ember-computed-decorators
* ember-concurrency
* ember-elsewhere
* ember-truth-helpers
Status: Issue closed
Answers:
username_0: https://github.com/ciena-frost/ember-frost-bunsen/issues/491 **MUST** be resolved before this issue can be addressed. Some of the individual tasks _may_ be able to be worked on before the completion of the dependent issue. If doing so indicate which tasks have been completed by checking them off in the list below, as well as adding a comment to this issue indicating the name of the branch you have pushed to this repo so that work can completed once the dependent task has been addressed.
* Tasks that have the text _"(verified)"_ after them are ones that have either already been tested or are known to work. This does not mean that you should not still be cautious but you likely don't have to do as much research as you might initially be inclinded to.
* None of the tasks make a determination of whether a dependency should be a **dependency** or **devDependency** _UNLESS_ otherwise stated. Even then it is prudent to confirm this indicator.
* All dependencies should float to `^` unless the directions in the tasks indicate otherwise.
**NPM**
* [ ] change ember-cli-babel to ^5.1.7 (verified)
* [ ] remove ember-cli-chai (verified)
* [ ] remove ember-cli-mocha (verified)
* [ ] remove ember-sinon (verified)
* [ ] remove ember-test-utils (verified) (note that ember-frost-test now provides it at ^7.0.2, so you *will* encounter linting failures)
* [ ] remove sinon-chai
* [ ] remove chai-jquery
* [ ] upgrade ember-frost-test to latest version (verified)
* [ ] pin ember-cli-code-coverage to 0.3.12
* [ ] verify ember-cli-code-coverage is tied into "npm run test" script
* [ ] verify "tests/config/coverage.js" and not "config/coverage.js"
* [ ] pin ember-cli-htmlbars-inline-precompile to 0.3.12
* [ ] downgrade and pin ember-code-snippet to 1.7.0
* [ ] pin ember-computed-decorators to 0.3.0 (verified)
* [ ] upgrade ember-cli-sass to pinned 7.1.1 (verified)
* [ ] pin ember-hook to 1.4.2
* [ ] determine whether ember-hook needs to be a **dependency**
* [ ] update or install ember-cli-frost-blueprints to ^5.0.0 (or whatever the latest release is) as **devDependency**
* [ ] update ember-prop-types to ^6.0.0 (or whatever the latest release is)
* [ ] update ember-spread to ^4.0.0 (or whatever the latest release is)
* [ ] update ember-frost-core to ^5.0.0 (or whatever the latest release is)
* [ ] determine **dependencies** vs **devDependencies**
* [ ] check whether these are specifically needed or not and remove if not, otherwise move to dependencies
* [ ] ember-computed-decorators
* [ ] ember-concurrency
* [ ] ember-elsewhere
* [ ] ember-truth-helpers
**BOWER**
* [x] n/a
username_0: JIRA: FROST-588
username_0: https://github.com/ciena-frost/ember-frost-object-details/issues/54 **MUST** be completed before this issue.
* remove from blueprints and make dependency
* ember-frost-core
* remove from devDependency and make dependency
* ember-prop-types
* ember-spread
* update ember-prop-types to ^5.0.1
* update ember-spread to ^3.0.2
* check whether these are specifically needed or not and remove if not, otherwise move to dependencies
* ember-computed-decorators
* ember-concurrency
* ember-elsewhere
* ember-truth-helpers
Status: Issue closed
|
bbc/speculate | 469583799 | Title: CentOS 8 Support?
Question:
username_0: The docs say speculate works on CentOS 7. CentOS 8 was just released. Any plans to support the next version?
Answers:
username_1: Have you tried CentOS 8?
Are there any changes between CentOS 7 and 8 that might cause problems? |
hyb1996-guest/AutoJsIssueReport | 259990308 | Title: [163]com.stardust.autojs.runtime.exception.ScriptInterruptedException
Question:
username_0: Description:
---
com.stardust.autojs.runtime.exception.ScriptInterruptedException
at com.stardust.autojs.runtime.api.Shell.onInterrupted(Shell.java:158)
at com.stardust.autojs.runtime.api.Shell.execExitAndWait(Shell.java:185)
at com.stardust.autojs.runtime.api.Shell.exitAndWaitFor(Shell.java:171)
at com.stardust.autojs.runtime.ScriptRuntime.onExit(ScriptRuntime.java:306)
at com.stardust.autojs.ScriptEngineService$1.onFinish(ScriptEngineService.java:61)
at com.stardust.autojs.ScriptEngineService$1.onException(ScriptEngineService.java:68)
at com.stardust.autojs.execution.ScriptExecutionObserver.onException(ScriptExecutionObserver.java:29)
at com.stardust.autojs.execution.RunnableScriptExecution.execute(RunnableScriptExecution.java:41)
at com.stardust.autojs.execution.RunnableScriptExecution.execute(RunnableScriptExecution.java:32)
at com.stardust.autojs.execution.RunnableScriptExecution.run(RunnableScriptExecution.java:27)
at java.lang.Thread.run(Thread.java:761)
at com.stardust.lang.ThreadCompat.run(ThreadCompat.java:61)
Device info:
---
<table>
<tr><td>App version</td><td>2.0.16 Beta2.1</td></tr>
<tr><td>App version code</td><td>163</td></tr>
<tr><td>Android build version</td><td>092</td></tr>
<tr><td>Android release version</td><td>7.1.1</td></tr>
<tr><td>Android SDK version</td><td>25</td></tr>
<tr><td>Android build ID</td><td>V092</td></tr>
<tr><td>Device brand</td><td>360</td></tr>
<tr><td>Device manufacturer</td><td>360</td></tr>
<tr><td>Device name</td><td>QK1607</td></tr>
<tr><td>Device model</td><td>1607-A01</td></tr>
<tr><td>Device product name</td><td>QK1607</td></tr>
<tr><td>Device hardware name</td><td>qcom</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
</table> |
goibon/spooky-meter | 366075135 | Title: Increase Accessibility
Question:
username_0: Everyone should have equal access to check the current spooky level and as such, we should make sure to follow best practices for accessibility on the web. Some points that could be improved are:
- Use proper [alt] attributes on `img` tags
- Specify language in the `html` tag |
frictionlessdata/website | 271358914 | Title: Add a table of contents to the pattern page
Question:
username_0: Add a table of contents to the [pattern page](http://frictionlessdata.io/specs/patterns/) to allow readers to jump to the pattern of interest.
Similar to toc used on http://frictionlessdata.io/specs/table-schema/
Answers:
username_0: Whoops, this may be in the wrong repo given the patterns page is over at https://github.com/frictionlessdata/specs/blob/master/content/patterns/contents.lr
Status: Issue closed
username_1: Finally got round to fixing this, @username_0. Should be good now.
username_0: @username_1 thanks but I guess what I was hoping for was to be able to navigate down to individual patterns using the ToC like on other spec pages
username_0: Perfect!
:tada: |
darkalfx/requestrr | 964162731 | Title: Hosting on Heroku
Question:
username_0: is there a way to host requestrr on heroku or any other alternative platforms, if so can you help with some guide
Answers:
username_1: I don't know about heroku, but I was able to do this on replit. I imagine it would be similar on heroku.
Login and create a bash repl.
**main.sh**
```bash
wget -O requestrr.zip https://github.com/darkalfx/requestrr/releases/download/V2.1.1/requestrr-linux-x64.zip
mkdir requestrr
unzip requestrr.zip -d requestrr
cd requestrr
cd requestrr-linux-x64
chmod +x Requestrr.WebApi
~/requestrr/requestrr/requestrr-linux-x64/Requestrr.WebApi
```

This is my first attempt at this. The downside is that you'll need to expose your other services to the internet.
I will probably make another go at this using a python repl. With python you can import a keep_alive.py script that keeps the repl running 24/7. It's described here (https://replit.com/talk/learn/keep_alivepy-AestheticGaming/11008/478530) as well as many other places on the internet. It would also be easier to work with github api and automatically pull down the latest version of requestrr. Both of these things could probably be done with bash as well, but I am not well versed in those.
I will post an update here if/when I finish the python version.
Note: I did not test any of the functionality of requestrr running here, as I don't have my services exposed to the internet at this point.
username_1: Here is a better version
https://github.com/username_1/repl-requestrr |
pityka/saddle | 1148600818 | Title: flaky tests in matcheck
Question:
username_0: ```
Elementwise matrix operations with scalar (I,D) => B
[info] + op < works
[error] x op <= works
[error] Falsified after 12 passed tests.
[error] > ARG_0: [100 x 100]
[error] 1269956579 -1651588045 -231839047 1009904595 ... 598167718 -1091060996 -56654787 376489325
[error] -808139313 1961476986 -1414633998 -383026468 ... 1799829621 1321614098 1435773124 358693408
[error] -118862603 -518816064 385642582 -17791247 ... -655062533 -1228565848 1002622225 2070131445
[error] -1099712990 639182453 -786264453 205137673 ... 1188727935 539641967 1972840830 1311770901
[error] ...
[error] -297240988 -1704265288 1080866841 2115918235 ... -179810603 873080620 -964765650 1945350146
[error] 1566446872 -1128376140 -1901055413 -1774102629 ... -2143441516 -65718677 166885557 -1808064785
[error] 1555259345 581502718 614097437 -1372645149 ... -1596531520 -78749081 -584278297 -534540360
[error] -304506615 -1694821768 -1703229201 -57132164 ... 1995229777 1356349720 1926262097 2135818793
[error]
[error] > ARG_1: -5.305570023460599E-165
[error] The seed is rHuHhy2VdhK_RTuBaXb7PkeY6mOGYXBmj_fsLhY-L0J=
[error]
[error] > [100 x 100]
[error] false true true false ... false true true false
[error] true false true true ... false false false false
[error] true true false true ... true true false false
[error] true false true false ... false false false false
[error] ...
[error] true true false false ... true false true false
[error] false true true true ... true true false true
[error] false false false true ... true true true true
[error] true true true true ... false false false false
[error] != [100 x 100]
[error] false true true false ... false true true false
[error] true false true true ... false false false false
[error] true true false true ... true true false false
[error] true false true false ... false false false false
[error] ...
[error] true true false false ... true false true false
[error] false true true true ... true true false true
[error] false false false true ... true true true true
[error] true true true true ... false false false false (MatCheck.scala:78)
```
Answers:
username_0: The problem may be in the handling of missing values.
It looks like the implementation in BinOp is not doing a missingness check, while the implementation of Mat.map does.
electron/electron | 351406471 | Title: setLoginItemSettings doesn't add process start args on windows.
Question:
username_0: I am trying to run my app in hidden mode at startup on Windows. I used the code snippet below:
```
const path = require('path');
const appFolder = app.getPath('appData');
const exe = path.resolve(appFolder, '..', 'Local\\Programs\\update.exe');
const exeName = 'update.exe';
app.setLoginItemSettings({
args: [
'--processStart', `"${exeName}"`,
'--process-start-args', `"--hidden"`,
],
openAtLogin: true,
path: exe,
});
```
Any suggestions will be greatly appreciated. Thanks!
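For reference, a common recipe here (a sketch, under the assumption that Squirrel forwards the `--process-start-args` value to the launched app, so the flag ends up in `process.argv`) is to gate window visibility on that flag. The check itself is a pure function and testable without Electron:

```javascript
// Pure helper: decide whether to start hidden based on the launch arguments.
// '--hidden' matches the flag name used in the snippet above.
function shouldStartHidden(argv) {
  return argv.includes('--hidden');
}

// In the Electron main process this would gate window visibility, e.g.:
//   const { app, BrowserWindow } = require('electron');
//   const win = new BrowserWindow({ show: !shouldStartHidden(process.argv) });

console.log(shouldStartHidden(['Update.exe', '--hidden'])); // true
```

When launched normally (no flag), the same check returns `false` and the window is shown as usual.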
Answers:
username_1: GitHub issues are for feature requests and bug reports, questions about using Electron or code assistance requests should be directed to the [community](https://github.com/electron/electron#community) or to the [Slack Channel](https://atom-slack.herokuapp.com/).
Status: Issue closed
|
LightTable/LightTable | 183245254 | Title: Default keymap for :show-connect does not work unless it already has focus
Question:
username_0: The Connection Pane is toggled open and closed with the following line in `default.keymap`:
```
[:sidebar.clients "esc" :show-connect]
```
However, it does not open or close via this keybinding (or any other you change it to) unless the pane already has focus.
Given that the default is set to "esc", it seems likely the original intention was to only use this for closing the Connection Pane. Unfortunately, if focus goes elsewhere before pressing "esc", the command will not work until focus is restored.
openshift-knative/docs | 823180305 | Title: Add configuration file reference documentation
Question:
username_0: **Is your feature request related to a problem? Please describe.**
The `func.yaml` file has not been documented. In the simplest of cases, a function developer doesn't need to know anything about this file. But that really is just the simplest of cases. For example, a developer can use this file to set environment variables when building and running the function. See: https://github.com/boson-project/func/pull/267.
**Describe the solution you would like to see**
A reference document for the `func.yaml` configuration file providing documentation for all of the supported properties, with examples where needed. At the moment, the document tree is
```
Functions --
| -- User Guide
| -- Administration Guide
| -- Reference
```
I propose adding a Developer's Guide section, under which the configuration file reference can go. And when https://github.com/openshift-knative/docs/pull/50 lands, the Template Reference docs can be moved to the Developer's Guide as well.<issue_closed>
Status: Issue closed |
facebook/flow | 131715960 | Title: Allow == operator for variant type conditions
Question:
username_0: Currently, to use branching on variant types, you have to use the `===` operator. If you change it to `==` in the below code Flow won't acknowledge the refinement.
```js
type Opts = {bzt:{}}
type Variant = { type: 'a', opts_a: Opts } | { type: 'b', opts_b: Opts }
function f1 (val : Variant) : Opts {
if(val.type === 'a') {
return val.opts_a
}
else {
return val.opts_b
}
}
```
This is a gotcha. There should be a warning advising to use `===`, or better yet Flow should allow `==` here and prevent cases where `==` and `===` would have different results.
Using version af67541 |
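For context on why a checker might treat the two operators differently: loose equality coerces across types, so refining on `==` would require modelling cases where the operators disagree (plain JavaScript, independent of Flow):

```javascript
// With two strings, the operators agree:
console.log('a' == 'a', 'a' === 'a'); // true true

// But `==` coerces across types, so the operators can disagree:
console.log(0 == '', 0 === '');                     // true false
console.log(null == undefined, null === undefined); // true false
```

When both operands are known string literals (as in `val.type == 'a'`), the results coincide, which is why allowing `==` only in coercion-free cases, as suggested above, would be sound.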
dilettant/thepublicoffice | 6212163 | Title: Graphic design flyer/poster
Question:
username_0: Make a graphic representation of the project, based on the identity profile made by Alba. The objective of the flyer/poster is to attract and inform potential PO members of the POC and the POMA.
Answers:
username_1: 
Try Createer.com. You can easily design online ads, responsive landing pages and print materials such as flyers, posters, covers, invitations and more. Createer lets you easily share your designs on social media, send them via email, download as HTML, image or PDF. Createer also includes 3,000 predesigned template (look here https://beta2.createer.com/Templates) s and over 250,000 images (illustrations, photos and icons) to work with in its user-friendly drag’n’drop enviroment. |
GoogleCloudPlatform/python-docs-samples | 374014102 | Title: bigtable: metricscaler should fail gracefully for development instances
Question:
username_0: ## In which file did you encounter the issue?
`https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/bigtable/metricscaler/metricscaler.py`
### Did you change the file? If so, how?
No
## Describe the issue
When I ran the metric scaler, I accidentally specified a Cloud Bigtable development instance, which can't be scaled. As a result, I got the following stack trace:
```
Traceback (most recent call last):
File "metricscaler.py", line 172, in <module>
int(args.long_sleep))
File "metricscaler.py", line 129, in main
scale_bigtable(bigtable_instance, bigtable_cluster, False)
File "metricscaler.py", line 83, in scale_bigtable
cluster.reload()
File "/Users/[REDACTED]/python-docs-samples/bigtable/metricscaler/env/lib/python3.7/site-packages/google/cloud/bigtable/cluster.py", line 184, in reload
self._update_from_pb(cluster_pb)
File "/Users/[REDACTED]/python-docs-samples/bigtable/metricscaler/env/lib/python3.7/site-packages/google/cloud/bigtable/cluster.py", line 92, in _update_from_pb
raise ValueError('Cluster protobuf does not contain serve_nodes')
ValueError: Cluster protobuf does not contain serve_nodes
```
If the user specifies a development instance, we should fail gracefully and display an appropriate error message.
(It also seems odd that the client library is raising an error when we call `cluster.reload()` on a development instance, so perhaps this is a client-library bug as well. I'm not familiar enough with the Python client library to know for sure.)
Answers:
username_1: @username_2 What is the status of this issue?
username_2: @username_1 I haven't really looked into it. Been working on higher priority items
username_1: @username_2 PTAL
Status: Issue closed
|
sequelize/cli | 360228319 | Title: perform command 'db:create' create database fail
Question:
username_0: <!--
Please note this is an issue tracker, not a support forum.
For general questions, please use StackOverflow or Slack.
For bugs, please fill out the template below.
-->
## What you are doing?
perform command 'db:create'
```shell
sequelize db:create
```
## What do you expect to happen?
I expect the database to be created successfully.
## What is actually happening?
It shows the error message below; creating the database fails.
```
Unhandled rejection TypeError: Cannot read property 'replace' of undefined
at Object.removeTicks (/Volumes/shi/WebDemo/hapi-mini/node_modules/sequelize/lib/utils.js:415:12)
at Object.quoteIdentifier (/Volumes/shi/WebDemo/hapi-mini/node_modules/sequelize/lib/dialects/mysql/query-generator.js:393:33)
at QueryInterface.quoteIdentifier (/Volumes/shi/WebDemo/hapi-mini/node_modules/sequelize/lib/query-interface.js:1323:32)
at Object.<anonymous> (/Volumes/shi/WebDemo/hapi-mini/node_modules/sequelize-cli/lib/commands/database.js:38:80)
at Generator.next (<anonymous>)
at Generator.tryCatcher (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/util.js:16:23)
at PromiseSpawn._promiseFulfilled (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/generators.js:97:49)
at Promise._settlePromise (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/promise.js:574:26)
at Promise._settlePromise0 (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/promise.js:694:18)
at _drainQueueStep (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/async.js:138:12)
at _drainQueue (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/async.js:131:9)
at Async._drainQueues (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/async.js:147:5)
at Immediate.Async.drainQueues [as _onImmediate] (/Volumes/shi/WebDemo/hapi-mini/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:696:18)
at tryOnImmediate (timers.js:667:5)
at processImmediate (timers.js:649:5)
```
__Dialect:__ mysql
__Database version:__ 5.7
__Sequelize CLI version:__ 4.1.1
__Sequelize version:__ 4.38.1
Answers:
username_1: Hit by this issue too. I guess we will need to provide a complete reproducible example.
username_2: If this can help someone else: this issue was caused by an _undefined_ property in my config file.
username_3: @username_2 can you post your old config file? As that is a vague example... :(
username_4: @username_2 any update on how you resolved this? I am running into the same issue.
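Building on username_2's observation, this class of mistake can be caught before the config object reaches the CLI. A small guard that reports keys whose value is `undefined` (an illustrative sketch, not part of sequelize):

```javascript
// Recursively collect dotted paths of config keys whose value is undefined,
// a common cause of "Cannot read property 'replace' of undefined" above.
function undefinedKeys(obj, prefix = '') {
  return Object.entries(obj).flatMap(([key, value]) => {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value === undefined) return [path];
    if (value && typeof value === 'object') return undefinedKeys(value, path);
    return [];
  });
}

const config = { username: 'root', password: undefined, database: 'dev', dialect: 'mysql' };
console.log(undefinedKeys(config)); // prints: [ 'password' ]
```

Running such a check (and throwing if the list is non-empty) turns the opaque stack trace into an actionable error message naming the offending key.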
sanghoon/pva-faster-rcnn | 264915609 | Title: generated prototxt and caffemodel ?
Question:
username_0: Hi @sanghoon
Thank you for your work and for sharing it.
I tried your script for ResNet50 to combine the conv, batchnorm and scale layers into one conv layer. I have doubts regarding the generated prototxt and caffemodel:
1. The generated prototxt combines only the scale and batchnorm layers into one and keeps the conv layers as they are.
before : conv - BN - scale - relu
after : conv - BN - scale - relu
2. Shouldn't the generated caffemodel weights also change, since fewer parameters now need to be stored?
- anand |
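On point 2: yes, if the layers were actually merged, the stored conv weights would have to absorb the BN/scale statistics. The standard per-channel folding uses `s = gamma / sqrt(var + eps)`, giving `W' = W * s` and `b' = (b - mean) * s + beta`. A minimal numeric sketch (plain JavaScript, independent of the repo's script; a single channel with a scalar weight stands in for a full kernel, where every weight in the channel is scaled by the same `s`):

```javascript
// Fold BatchNorm (+ Scale) parameters into the preceding conv's weight/bias.
function foldBN({ w, b, gamma, beta, mean, variance, eps }) {
  const s = gamma / Math.sqrt(variance + eps);
  return { w: w * s, b: (b - mean) * s + beta };
}

// Check equivalence on a sample input: BN(conv(x)) === convFolded(x).
const p = { w: 0.5, b: 0.1, gamma: 1.3, beta: -0.2, mean: 0.05, variance: 0.9, eps: 1e-5 };
const x = 2.0;
const bnOut = p.gamma * ((p.w * x + p.b) - p.mean) / Math.sqrt(p.variance + p.eps) + p.beta;
const folded = foldBN(p);
const foldedOut = folded.w * x + folded.b;
console.log(Math.abs(bnOut - foldedOut) < 1e-9); // true
```

So a correct merge would both drop the BN/scale layers from the prototxt and rewrite the conv blobs in the caffemodel; if neither changes, nothing was merged.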
ned14/outcome | 287175649 | Title: Will Outcome support heterogeneous operations
Question:
username_0: For example, would `result<T>` be convertible from `result<U>` if T is constructible from U.
See in addition EXPLICIT on the standard (std::pair)
Status: Issue closed
Answers:
username_1: For some `result<T, E>`, a `result<X, Y>` will *explicitly* convert from it if X is constructible from T and Y is constructible from E (or E is void).
Comparison of `result<T>` and `result<U>` should work just fine. My own code does it, and here it is working on wandbox: https://wandbox.org/permlink/ViQAOLbYbvF6iAoE
username_0: It is not clear from the documentation
username_1: That's because none of the comparison operators appear there yet. I'm working on it.
vert-x3/vertx-web | 327648964 | Title: SecurityHandler for OpenAPI3RouterFactory
Question:
username_0: ### Version
* vert.x core: 3.5.1
* vert.x web: 3.5.1
### Context
Hello, I use an OpenAPI contract to describe my REST API. I use JWT to access most of the web services, but some (authenticate, healthcheck...) require no auth. But it seems that the security handler is applied to all routes, even the routes with no auth. Is this normal behavior?
### Do you have a reproducer?
No
### Steps to reproduce
1. contract
```yaml
openapi: 3.0.1
paths:
/authenticate:
post:
summary: Authenticate
security: [] #To disable default security
# ...
components:
#...
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
security:
- bearerAuth: []
```
2. java code
```java
OpenAPI3RouterFactory routerFactory = ar.result();
routerFactory.addSecurityHandler("bearerAuth", authHandler);
```
3. ...
4. ...
### Extra
* Anything that can be relevant
Answers:
username_1: Can you try the new version of the stack 3.5.2.CR3? It should include some fixes also for router factory security handlers
username_0: Hello, I still have the issue with the 3.5.2 stable version
username_1: Ok, I understood the problem; I'm working on it
username_1: @username_0 should be fixed by #953. Please check the tests:
- https://github.com/vert-x3/vertx-web/blob/b40aea24f8cb6cf2d18b0580d89956541332cc0d/vertx-web-api-contract/src/test/java/io/vertx/ext/web/api/contract/openapi3/OpenAPI3RouterFactoryTest.java#L354
- https://github.com/vert-x3/vertx-web/blob/b40aea24f8cb6cf2d18b0580d89956541332cc0d/vertx-web-api-contract/src/test/resources/swaggers/global_security_test.yaml#L9
Status: Issue closed
|
bstascavage/plexReport | 76657897 | Title: Error sending gmail
Question:
username_0: I can't send mail via Gmail anymore. How can this be fixed?
```
root@DD-WRT:/opt/plexReport# /usr/local/sbin/plexreport -t
/var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/core_extensions/string.rb:26:in `=~': invalid byte sequence in US-ASCII (ArgumentError)
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/core_extensions/string.rb:26:in `!~'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/core_extensions/string.rb:26:in `blank?'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/body.rb:36:in `initialize'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:2012:in `new'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:2012:in `process_body_raw'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:1244:in `body'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:2038:in `identify_and_set_transfer_encoding'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:1792:in `ready_to_send!'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:1810:in `encoded'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/check_delivery_params.rb:12:in `check_delivery_params'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/network/delivery_methods/smtp.rb:98:in `deliver!'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:252:in `deliver!'
from /var/lib/plexReport/mailReport.rb:93:in `block in sendMail'
from /var/lib/plexReport/mailReport.rb:84:in `each'
from /var/lib/plexReport/mailReport.rb:84:in `sendMail'
from /usr/local/sbin/plexreport:381:in `main'
from /usr/local/sbin/plexreport:385:in `<main>'
```
Answers:
username_1: What does your config look like?
username_1: Following up on this.
username_0: this is the content of the config:
```yaml
---
email:
  title: TV and Movies
plex:
  server: 192.168.1.2
  api_key: xxxxxx
mail:
  address: smtp.gmail.com
  port: '587'
  username: <EMAIL>
  password: <PASSWORD>
  from: Plex Updates
  subject: Weekly Updates of Series and Movies
```
username_0: just tried again without the -t command:
```
root@xxxx:/etc/plexReport# /usr/local/sbin/plexreport -d
/var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/check_delivery_params.rb:9:in `check_delivery_params': An SMTP To address is required to send a message. Set the message smtp_envelope_to, to, cc, or bcc address. (ArgumentError)
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/network/delivery_methods/smtp.rb:98:in `deliver!'
from /var/lib/gems/2.1.0/gems/mail-2.6.3/lib/mail/message.rb:252:in `deliver!'
from /var/lib/plexReport/mailReport.rb:93:in `block in sendMail'
from /var/lib/plexReport/mailReport.rb:84:in `each'
from /var/lib/plexReport/mailReport.rb:84:in `sendMail'
from /usr/local/sbin/plexreport:381:in `main'
from /usr/local/sbin/plexreport:385:in `<main>'
```
username_1: Ok, so when you run it with -t it works, but doesn't when you don't?
Can you give me the output of:
https://plex.tv/pms/friends/all
?
username_0: here is the output
```xml
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<MediaContainer friendlyName="myPlex" identifier="com.plexapp.plugins.myplex" machineIdentifier="xxxxxxxx" totalSize="5" size="5">
<script/>
<User id="938844" title="xxxx" username="xxx" email="<EMAIL>" recommendationsPlaylistId="0da07cd1cbad5476" thumb="https://secure.gravatar.com/avatar/xxxx?d=https%3A%2F%2Fplex.tv%2Favatars%2F8C8165%2F48%2F1">
<Server id="199937" serverId="xxx" machineIdentifier="xxxx" name="HvacLine" lastSeenAt="1437791062" numLibraries="5" owned="0"/>
<Server id="2161498" serverId="1158052" machineIdentifier="xxx" name="xxxx" lastSeenAt="1439547523" numLibraries="3" owned="1"/>
</User>
<User id="5832720" title="Kids" username="" email="" recommendationsPlaylistId="c0b61b8dba27fcc8" thumb="https://plex.tv/avatars/EB9F9F/4b/1"></User>
<User id="5832724" title="Kids" username="" email="" recommendationsPlaylistId="89dabb6608ef79bd" thumb="https://plex.tv/avatars/8C8165/4b/1"></User>
<User id="3209442" title="merwone" username="merwone" email="<EMAIL>" recommendationsPlaylistId="fd13de43c7b86207" thumb="https://secure.gravatar.com/avatar/xxxxx?d=https%3A%2F%2Fplex.tv%2Favatars%2F7EB0D5%2F4d%2F1">
<Server id="967023" serverId="2442420" machineIdentifier="xxxx" name="merwone" lastSeenAt="1435750225" numLibraries="2" owned="0"/>
<Server id="2161497" serverId="1158052" machineIdentifier="xxxx" name="FloRaMac" lastSeenAt="1439547523" numLibraries="4" owned="1"/>
</User>
<User id="1089415" title="xxxx" username="xxx" email="<EMAIL>" recommendationsPlaylistId="298fa5847d1b3d1b" thumb="https://secure.gravatar.com/avatar/xxxx?d=https%3A%2F%2Fplex.tv%2Favatars%2FF18052%2F53%2F1">
<Server id="1351417" serverId="1158052" machineIdentifier="xxxxx" name="FloRaMac" lastSeenAt="1439547523" numLibraries="8" owned="1"/>
</User>
</MediaContainer>
```
username_1: Ah, thats it! There are blank emails there.
Let me update the code to check for that (I didn't even know that was a possibility)
username_1: Ok, I updated the code. Update your end and give it a shot.
username_0: perfect, just updated and now it works, thank you for the fix!!
Status: Issue closed
|
machawk1/wail | 73699606 | Title: Launch WebUI button does not display Heritrix interface in browser in Windows
Question:
username_0: 
Answers:
username_0: 
username_0: Manually entering https://localhost:8443/ into a browser produces the security message (#125) but then asks for credentials through a dialog.
username_0: See https://github.com/username_0/wail/blob/osagnostic/bundledApps/WAIL.py#L546
username_0: IE no longer supports encoding the username and password in the URI: http://support.microsoft.com/kb/834489/EN-US
username_0: Relevant answer for detecting default browser, might be useful for conditioning on IE or, better yet, provide a workaround for IE:
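For illustration, a rough Python sketch of that registry-based detection (the key path is the standard UrlAssociations location; the ProgId-to-browser mapping here is illustrative, not exhaustive):

```python
import sys

# Illustrative ProgId-to-browser mapping; real installs may report other ProgIds.
PROGID_TO_BROWSER = {
    "IE.HTTP": "iexplore",
    "ChromeHTML": "chrome",
    "FirefoxURL": "firefox",
}

def browser_from_progid(progid):
    """Map a UrlAssociations ProgId to a browser name ('unknown' if unrecognized)."""
    return PROGID_TO_BROWSER.get(progid, "unknown")

def default_browser_windows():
    """Read the default http handler's ProgId from the Windows registry."""
    if not sys.platform.startswith("win"):
        return "unknown"
    import winreg  # standard library, Windows only
    key_path = (r"Software\Microsoft\Windows\Shell\Associations"
                r"\UrlAssociations\http\UserChoice")
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        progid, _ = winreg.QueryValueEx(key, "ProgId")
    return browser_from_progid(progid)
```

If this reports iexplore, WAIL could open the bare https://localhost:8443/ and let the browser's credential dialog handle authentication instead of embedding user:pass in the URL.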
http://stackoverflow.com/questions/19037216/how-to-get-a-name-of-default-browser-using-python
|
solgenomics/sgn | 316282190 | Title: Calendar use case for project pre-planning.
Question:
username_0: Expected Behavior <!-- Describe the desired or expected behavour here. -->
--------------------------------------------------------------------------
Breeders would want to preplan for a project to be added to the database, and also be able to track its implementation as the different activities are carried out.
This has been requested by Ted's group. They basically want to plan a project and follow each of those activities during implementation. They would want to add an event for field preparation, planting, phenotyping and harvesting. As well as every other activity in between planting and harvesting.
Answers:
username_1: see other issues in the calendar/trial activity project
username_2: Duplicate of #876
Status: Issue closed
|
agda/agda | 135224618 | Title: Fill level metas during instance search
Question:
username_0: ```agda
-- <NAME>
-- Sorry to keep pestering about typeclasses, but here is another example from my
-- codebase which doesn't typecheck in both 2.5.0.20160213 and master, and used to
-- typecheck in 2.4.2.5.
open import Agda.Primitive
record Order {ℓ} ℓ' (A : Set ℓ) : Set (ℓ ⊔ lsuc ℓ') where
field
_≤_ : A → A → Set ℓ'
open Order {{...}} public
data ℕ : Set where
Zero : ℕ
Succ : ℕ → ℕ
data _≤ⁿ_ : ℕ → ℕ → Set where
Zero : ∀ {n} → Zero ≤ⁿ n
Succ : ∀ {n₁ n₂} → n₁ ≤ⁿ n₂ → Succ n₁ ≤ⁿ Succ n₂
instance
Order[ℕ] : Order lzero ℕ
Order[ℕ] = record { _≤_ = _≤ⁿ_ }
subtract : ∀ (n₁ n₂ : ℕ) → n₂ ≤ n₁ → ℕ
subtract n₁ n P = {!P!} -- want to split on P here
-- The issues is that Agda is not able to resolve the use of `≤` in `subtract`,
-- leaving `n₂ ≤ n₁` highlighted in yellow (unresolved metas). Shouldn't this be a
-- clear case where Agda picks the `Order[ℕ]` instance?
```
Answers:
username_1: I believe this would be safe from the looping problem you can get if instantiating unconstrained metas. Any instance that would lead to a loop should be rejected by the (as yet fictious) instance termination checker.
username_1: See also #1422.
Status: Issue closed
|
redox-os/orbtk | 535906546 | Title: Procedurally derive new and setter methods for types
Question:
username_0: Procedural macros that could eliminate a lot of the boilerplate that we are currently doing by hand.
- [derive-new](https://github.com/nrc/derive-new)
- [derive_setters](https://github.com/Lymia/derive_setters)
Answers:
username_1: @username_0 do you want to do it, otherwise I can do it next.
username_0: I'm currently busy with Pop Shell at the moment.
username_1: Maybe we should avoid using more procedural macros. OrbTk already uses macros and proc macros a lot, and I'm worried about their impact on compile time. Therefore I'll close this for now.
Status: Issue closed
|
os-js/osjs-client | 578841634 | Title: Context menu opening drop down only on right side
Question:
username_0: Is it possible to make the context menu open its dropdown on the left side of it?
Answers:
username_1: Do you mean to swap the positions entirely, or having some kind of option when you create a context menu ?
username_0: To have an option to open left or right
username_0: Example: when we click panel position, it shows the dropdown menu only on the right side of it.
username_1: Are you using RTL by any chance ?
username_0: Nope
username_1: Just to make sure we're on the same page, is this what you mean ?

username_0: Yes that’s right
username_1: Sure. That could be added.
I'll move the issue to the appropriate repository and follow up there.
|
ampproject/amphtml | 503526059 | Title: beforeInstallPrompt Event not triggering via AMP-Script
Question:
username_0: Hi all, I've tried to prevent the Chrome PWA Mini-Info bar showing by intercepting the beforeInstallPrompt event via Amp-Script in this [demo](https://developers.google.com/web/fundamentals/app-install-banners) but it doesn't seem to work.
https://amp-beforeinstallpropmpt.glitch.me
Can we support this event into the AMP-Script Worker DOM if possible to be able to control the showing of the [PWA Web App Install Banner](https://developers.google.com/web/fundamentals/app-install-banners)?
AMP First websites would benefit of this by choosing when to show the prompt to avoid also showing multiple prompt at once (thinking about Amp-Consent prompt covered by the PWA one).
Answers:
username_1: Ah, amp-script/worker-dom runs JS on a web worker, which doesn't have access to main thread objects like `window`. So support `beforeinstallprompt` event would need to be a special integration.
username_1: @username_0 Can you share specifically under what conditions you'd want to for the app install prompt to be displayed? This may be more easily accomplished with components other than `amp-script`.
username_0: Hi William, thanks for your answer.
I would be really interested to hear more voices, but so far I've seen the following potential issue that may cause UX problems on AMP First websites.
Example: browse http://standard.co.uk/ directly out of the AMP Cache.
Result: the amp-consent prompt loads, then amp-sticky-ad, and lastly the Chrome mini-infobar.
All three, one over the other.
I believe it would be interesting to give developers the power to decide in what order these prompts are displayed.
Although I agree with you that amp-script may not be the correct context in which to do that, apart from it I don't know if we have any other place to hook JS handlers to do the same.
@b1tr0t and @username_3 for additional feedback about UX Chrome Permissions on AMP
username_0: Hi @username_3 and @username_1, the condition that would require an AMP page to intercept and handle beforeInstallPrompt is when a website wants to implement Native App Install Banners https://developers.google.com/web/fundamentals/app-install-banners/native.
Unfortunately the only way to make it work would be to handle beforeInstallPrompt somehow, that's why I thought about using amp-script to support that but if you have alternatives in mind I am happy to discuss.
username_2: Same here, @username_0. Have you made any progress on a custom prompt to install a PWA on AMP pages?
Maybe there should be a new tag similar to [allow push notifications](https://amp.dev/documentation/components/amp-web-push/).
username_0: @username_3 do you know if there is already a draft FR about a component to allow PWA UX Install personalization?
username_3: There isn't an FR - to pick up on @username_1's question earlier:
what level of customization are you looking for in a PWA install prompt?
username_2: The two main things IMHO are: frequency (display control) and text/buttons.
I was thinking something related to [user-notification](https://amp.dev/documentation/examples/components/amp-user-notification/) with the option to display text and two buttons, like: Install / Not Now and track them.
The "not now" could work exactly the same way as the "dismiss" in the user-notification
username_4: I am working with AMP publishers who would benefit from being able to configure the "Add to Home" PWA prompt, in terms of:
- Triggers: after X sec, after Y page visits...
- UI: being able to put the 'Add to Home' button in their navigation, on a newsletter sign-up page, inline an article...
It would be nice if this could be a feature request.
username_5: Hello everyone!
Is there any progress on this matter?
I'm trying to accomplish the same behavior as described by @username_0, along with sending this user interaction to Google Analytics.
Please advise.
Mixiaoxiao/Arduino-HomeKit-ESP8266 | 781911623 | Title: ESP8266 cannot boot when PIN is connected
Question:
username_0: Hi,
everything works great so far and I've achieved to realize a dimmable LED Panel.
It runs on 100V DC so I'm controlling a MOSFET on PIN D4.
Everything works. Until I unplug the ESP8266 from power (5V USB) and plug it in again.
The Serial Monitor gives me this and doesn't stop:
```
n⸮e⸮⸮%⸮!⸮maA⸮%!ai⸮icCneaA⸮o⸮e⸮⸮k⸮!⸮eaA⸮k!!a⸮maAo%!A⸮m⸮e⸮⸮k⸮C⸮eaA⸮Epd⸮s⸮⸮⸮c⸮b⸮⸮r⸮l ⸮⸮ol⸮;⸮⸮o'obl# l`{d⸮d#$⸮{#dcd⸮s{lcd⸮r⸮⸮⸮b⸮#⸮⸮;⸮$`⸮⸮nl⸮{⸮⸮ogn#$c l {d⸮dcd⸮sclcd⸮r{lcl⸮;⸮⸮⸮#⸮c⸮⸮⸮{⸮d`⸮⸮g$⸮{⸮⸮'ogcdc $`sl⸮lcl⸮rclcl⸮;s$#l⸮{⸮Ãc⸮c⸮⸮
```
When I disconnect PIN D4 it boots normally.
Any idea what might be the problem?
Answers:
username_1: Read the Pinout Reference for esp8266
https://randomnerdtutorials.com/esp8266-pinout-reference-gpios/
Status: Issue closed
|
UniMath/UniMath | 184843782 | Title: Aborted proofs in UniMath
Question:
username_0: Thanks to a question by Tomi I discovered that there are aborted proofs in FiniteSequences.
This raises three issues:
1. Please, in the future immediately let everybody know if you find aborted or admitted proofs. Such "proofs" are strictly forbidden in the UniMath.
2. Please immediately remove all statements without full proofs or complete the proofs as the very first priority.
3. Please (@DanGdayson, @username_3?) write a script that will be run every time the make is run that will check that the words "abort" and "admit" do not occur in the .v files.
Answers:
username_1: The following script searches for the words Abort, Admitted and admit in all files in the UniMath subfolder:
``grep -r -E 'Abort|Admitted|admit' UniMath/``
Many of the results seem to be in comments though. I'm not sure how an automated procedure could take that into account, so maybe we should forbid these words from comments as well.
username_2: We don't have a rule against aborted proofs, only against admitted proofs. And there
is no reason to introduce such a rule, since an aborted proof results in no definitions.
username_0: There is a reason. UniMath is a library that should be clean. Aborted proofs are Ok in individual forks but there is no reason to have an aborted proof in the library.
username_0: BTW, Tomi, I plan to work on these proofs tomorrow. Hopefully I will be able to complete them. It requires some preliminary work in the standard finite sets file.
username_3: `Admitted` and `Abort` are always followed by a `.`, and then either by the end of the line or a whitespace character. (I sincerely hope you never do something like `Admitted (* this is an admitted proof but we haven't ended the proof yet *) .`) The tactic `admit` is usually followed by a `.` or `;` after optional whitespace, but not always. Perhaps these are sufficiently distinguishing features?
username_1: I don't think all Abort's are bad. It's sometimes instructive to start a Goal, show that something doesn't work and end with an Abort. I have used this for things that should hold by computation if there wasn't any Axiom's involved. See for example https://github.com/UniMath/UniMath/blob/master/UniMath/CategoryTheory/Inductives/Lists.v#L224. If someone is trying use this definition of lists and run into problems with something not computing they can read the original file, find this example and first see that direct computation doesn't work and then see what I do afterwards to get it to work.
username_1: @username_3 But that will still find `(* Abort. *)`? The problem I had was that many Abort's are actually commented out, but a simple grep will not see this.
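A comment-aware variant of that check could be scripted; here is a rough Python sketch (not part of the UniMath build) that strips Coq's nestable `(* ... *)` comments before searching, so commented-out occurrences are ignored (string literals are not handled):

```python
import re

def strip_coq_comments(source):
    """Remove Coq (* ... *) comments (which may nest), preserving newlines
    so that line numbers in the stripped text match the original."""
    out = []
    depth = 0
    i = 0
    while i < len(source):
        if source[i:i + 2] == "(*":
            depth += 1
            i += 2
        elif source[i:i + 2] == "*)" and depth > 0:
            depth -= 1
            i += 2
        else:
            if depth == 0:
                out.append(source[i])
            elif source[i] == "\n":
                out.append("\n")
            i += 1
    return "".join(out)

# Whole-word matches for the Admitted command and the admit tactic.
FORBIDDEN = re.compile(r"\b(Admitted|admit)\b")

def find_forbidden(source):
    """Return 1-based line numbers where a forbidden word occurs outside comments."""
    return [lineno
            for lineno, line in enumerate(strip_coq_comments(source).splitlines(), start=1)
            if FORBIDDEN.search(line)]
```

`Abort` could be added to the pattern if the policy question above is settled in favor of forbidding it too.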
username_4: It seems worth pointing out that there is a big difference between
'abort' and 'admit'.
Aborting a proof just discards the statement. In particular, the
statement is not taken as an axiom, and its name is not added to the
list of defined constants.
Now, one might still want to avoid aborted proofs in UniMath, because
they are potentially confusing to the reader.
But they are not a danger to consistency.
username_0: Right. I understood it from Dan’s message. Still this looks strange to me. Why put an aborted “proof” into the library? There may be a separate file with problems, statements whose proofs at the moment are not known, but such problems should not be in the form of aborted proofs.
>
username_5: Good. When we have a formalization of finite sequences with all the basic accessor
functions it should be easier to formalize polynomials, linear algebra, etc.
username_4: Is there a better solution?
If we want to record conjectures, then these conjectures should also be
type-checked.
The two solutions that come to my mind are
Theorem foo : bar.
Abort.
or
Check bar.
But the second variant produces noise and looks much less like
'Conjecture' than the first one. Someone might even accidentially delete
the 'Check'.
So, to state a conjecture usefully in Coq only seems possible by using
Abort for the time being.
Putting all the conjectures in a separate file might be one way to clean
up in one sense, but will produce a mess of a different sort.
username_2: One good reason to allow aborted proofs in the library is that substantial progress may have already been made toward the proof. Putting the successful part of the proof into the library ensures that it will continue to work as people change other parts of UniMath. Eventually the proof will be complete.
If you disallow partial proofs in the code, then you should also disallow partial proofs in comments, logically. Partial proofs in comments are even worse, because they may stop working.
username_0: My preferred solution is:
Definition Conjecture_1 : UU := …. .
(it may, of course, have parameters).
username_0: Wasn’t your plan to move towards triangulated categories and t-structures?
username_5: It still is my plan but I have some other plans too.
username_1: This is off topic, but @username_5: If you're going to do linear algebra and polynomials in UniMath I would suggest you take a look at the Mathematical Components project (also referred to as SSReflect). They have a very well written and huge library of formalized mathematics in Coq (including a **lot** of theory about polynomials and linear algebra). This library was developed for the formalization of the odd order theorem about finite groups which is by far the largest formalization in mathematics done in Coq so far.
username_4: May I suggest that you discuss such ideas on the unimath mailing list instead of on the issue tracker?
As a reminder, the address for that is <EMAIL>.
username_1: @username_4: Excellent suggestion, I'll do that in the future.
username_0: The problem with transporting their ideas to UniMath is that they use the inductive types very freely.
username_1: I agree that transporting the ideas from SSReflect to UniMath is not easy for multiple reasons. I just wanted to say that there is this huge and very carefully designed library with a lot of linear and abstract algebra that we can get inspiration from when developing these theories in UniMath. I have some more concrete ideas, but let's discuss this in a separate thread on the google group instead.
username_0: I have added two lemmas to the FiniteSequences and with them it should be easy to complete some and probably all of the aborted proofs in FiniteSequences.
I don’t know who started these proofs; I suggest that the person who started them complete them using these lemmas, and if any other difficulties arise, point them out to the rest of us and we will try to find a solution.
Vladimir.
username_2: Those `Abort`s are mine. (I'd be happy to assign myself to an issue to deal with that later...)
"git blame" is the tool for discovering that, and it's accessible inside emacs this way:
```
C-x v g runs the command vc-annotate (found in global-map), which is an
interactive autoloaded Lisp function in ‘vc-annotate.el’.
It is bound to C-x v g, <menu-bar> <tools> <vc> <vc-annotate>.
(vc-annotate FILE REV &optional DISPLAY-MODE BUF MOVE-POINT-TO VC-BK)
Display the edit history of the current FILE using colors.
This command creates a buffer that shows, for each line of the current
file, when it was last edited and by whom. Additionally, colors are
used to show the age of each line--blue means oldest, red means
youngest, and intermediate colors indicate intermediate ages. By
default, the time scale stretches back one year into the past;
everything that is older than that is shown in blue.
With a prefix argument, this command asks two questions in the
minibuffer. First, you may enter a revision number REV; then the buffer
displays and annotates that revision instead of the working revision
(type RET in the minibuffer to leave that default unchanged). Then,
you are prompted for the time span in days which the color range
should cover. For example, a time span of 20 days means that changes
over the past 20 days are shown in red to blue, according to their
age, and everything that is older than that is shown in blue.
If MOVE-POINT-TO is given, move the point to that line.
If VC-BK is given used that VC backend.
Customization variables:
‘vc-annotate-menu-elements’ customizes the menu elements of the
mode-specific menu. ‘vc-annotate-color-map’ and
‘vc-annotate-very-old-color’ define the mapping of time to colors.
‘vc-annotate-background’ specifies the background color.
‘vc-annotate-background-mode’ specifies whether the color map
should be applied to the background or to the foreground.
[back]
```
Status: Issue closed
username_2: All but one of those Aborts in FiniteSequences is now gone. |
eagle7410/electron_docker_manager | 348362888 | Title: Change container limit
Question:
username_0: [Docker doc](https://docs.docker.com/engine/reference/commandline/update/)
```
Usage: docker update [OPTIONS] CONTAINER [CONTAINER...]

Update configuration of one or more containers

  --blkio-weight         Block IO (relative weight), between 10 and 1000
  --cpu-shares           CPU shares (relative weight)
  --cpu-period           Limit CPU CFS (Completely Fair Scheduler) period
  --cpu-quota            Limit CPU CFS (Completely Fair Scheduler) quota
  --cpuset-cpus          CPUs in which to allow execution (0-3, 0,1)
  --cpuset-mems          MEMs in which to allow execution (0-3, 0,1)
  --help                 Print usage
  --kernel-memory        Kernel memory limit
  -m, --memory           Memory limit
  --memory-reservation   Memory soft limit
  --memory-swap          Swap limit equal to memory plus swap: '-1' to enable unlimited swap
  --restart              Restart policy to apply when a container exits
```
Status: Issue closed |
musonza/chat | 525981122 | Title: Trait 'Musonza\Chat\Traits\Messageable' not found
Question:
username_0: I just installed v4.0.0-rc2 but the Messageable trait is not working.
What version do I need?
Answers:
username_1: @username_0 for this version `4` there is no real support for 5.8. Unless you want to fork in the meantime and update the requirements |
firebase/firebaseui-web | 427224870 | Title: feat: Microsoft and Yahoo sign in
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Firebase Auth client SDKs make it possible to sign in as a Firebase user from federated identity providers, including Google, Facebook, and Twitter. The Firebase Auth team is always looking for opportunities to improve the auth experience for developers and users. We know that more sign-in options mean more opportunities to create the best app experience. That's why we are pleased to announce that you can now sign in to Firebase Auth using Microsoft and Yahoo!
**Describe the solution you'd like**
Signing in to Firebase with a given identity provider requires garnering a credential from that provider. This often involves including the provider's SDK and implementing the provider's sign-in methods before passing the credentials to Firebase Auth. For some providers, this can be particularly difficult on a native client, especially ones which do not support their own native SDKs. In order to remove the headache of implementing sign-in flows for these identity providers, we now offer generic OAuth2 provider sign-in.
**Additional context**
https://firebase.googleblog.com/2019/03/microsoft-and-yahoo-identity-auth.html?linkId=65404801
Answers:
username_1: Thanks for the request @username_0, we're already on it.
username_2: Any rough ETA on when this might be happening? (not pushing/rushing at all, just wondering if it's something measured in days/weeks/months/quarters)
username_3: @username_2 Our plan is to release this feature in the next few weeks. I'll keep you guys posted. Thanks!
username_2: Fantastic, thanks @username_3...I was just about to fork this and roll my own, but a few weeks I can hold out and spend time working on other items in the backlog. Thanks so much for the quick reply!!
username_3: @username_2 Microsoft and Yahoo are supported now in FirebaseUI [V3.6](https://github.com/firebase/firebaseui-web/releases/tag/v3.6.0). Please make sure to update the firebase-js-sdk version to 5.10 to use the new feature. Thanks!
Status: Issue closed
username_4: I am attempting to add Yahoo & Microsoft support using FirebaseUI 3.6.0 for web integration however the following does not appear to work still.
firebase.auth.YahooAuthProvider.PROVIDER_ID
firebase.auth.MicrosoftAuthProvider.PROVIDER_ID
Any ideas?
username_1: Hey @username_4, as these are generic providers, they don't have constants defined in the firebase SDK. Instead you should pass the string `'yahoo.com'` and `'microsoft.com'` instead of `firebase.auth.YahooAuthProvider.PROVIDER_ID` and `firebase.auth.MicrosoftAuthProvider.PROVIDER_ID`.
username_4: Hey @username_1, thanks for your reply; you are indeed correct. I have managed to get the buttons to appear now and link to the correct locations. However, the buttons are not styled to reflect Yahoo or Microsoft the way the Google and Twitter ones are, for example.

username_3: @username_4 As @username_1 mentioned, since these are generic providers, how you can configure them is slightly different with the existing providers. Here is the guideline: https://github.com/firebase/firebaseui-web#generic-oauth-provider
Also, you can refer to the demo app here: https://github.com/firebase/firebaseui-web/blob/master/demo/public/app.js#L73
username_4: Yahoo icon options if anyone needs them:



username_4: @username_3 Unfortunately I have found a bug.
When using the default authDomain xxxxxxx.firebaseapp.com the logins for Yahoo and Microsoft work fine.
However when using a custom authDomain I get the following error after being returned from each providers login page:
Microsoft:
Remote site 5XX from yahoo.com for CODE_EXCHANGE Dismiss

Yahoo:
The supplied auth credential is malformed or has expired. Dismiss

I have correctly setup the redirect URL on Microsoft and Yahoo to reflect the custom authDomain in the same way as on Google, Facebook and Twitter.
username_3: @username_4 Thanks for reporting the issue. Could you please go to the network tab in chrome developer tools and provide the related network request? It would be helpful for debugging. Thanks!
username_4: https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyAssertion?key=<KEY>

username_1: There is nothing different about Yahoo and Microsoft. You are likely not following the correct instructions for setting a custom domain.
1. You have to have your custom domain `www.custom.com` point to [YOUR_PROJECT_ID].firebaseapp.com. Learn more [here](https://firebase.google.com/docs/hosting/custom-domain).
2. Next you need to update your Microsoft and Yahoo callback URL to `https://www.custom.com/__/auth/handler`.
3. You then need to whitelist `www.custom.com` in the authorized domain in the Console.
4. Finally in the `config` object, you set `authDomain` as `www.custom.com`.
username_5: @username_3 the types for these additional parameters are missing (i.e. iconurl, buttonColor, providerName)
See:
https://github.com/firebase/firebaseui-web/blob/master/types/index.d.ts
Don't mind creating a PR to add them?
username_4: @username_1 I am doing it correctly as mentioned I already have a custom domain working on Google, Facebook and Twitter via the exact method you have mentioned.
username_4: @username_5 I have tested with these parameters and it is working fine.
```javascript
signInOptions: [
    firebase.auth.GoogleAuthProvider.PROVIDER_ID,
    firebase.auth.FacebookAuthProvider.PROVIDER_ID,
    firebase.auth.TwitterAuthProvider.PROVIDER_ID,
    firebase.auth.EmailAuthProvider.PROVIDER_ID,
    {
        provider: 'yahoo.com',
        providerName: 'Yahoo',
        buttonColor: '#2d1152',
        iconUrl: '/images/yahoo_icon.png',
        loginHintKey: 'login_hint'
    },
    {
        provider: 'microsoft.com',
        providerName: 'Microsoft',
        buttonColor: '#2F2F2F',
        iconUrl: 'https://docs.microsoft.com/en-us/azure/active-directory/develop/media/howto-add-branding-in-azure-ad-apps/ms-symbollockup_mssymbol_19.png',
        loginHintKey: 'login_hint'
    }
]
```
username_3: @username_5 Thanks for reporting this. Will add it ASAP.
username_6: I am also experiencing this issue:
<img width="1552" alt="Screen Shot 2019-06-14 at 5 41 18 AM" src="https://user-images.githubusercontent.com/44956/59500505-968b5480-8e67-11e9-80f3-8ae18529c558.png">
URL loaded in popup window:
`https://auth.my.custom.domain/__/auth/handler?apiKey=[REDACTED]&appName=%5BDEFAULT%5D-firebaseui-temp&authType=signInViaPopup&providerId=yahoo.com&eventId=275911849&v=6.0.4&fw=FirebaseUI-web`
Redirects to following Yahoo auth screen:
`https://api.login.yahoo.com/oauth2/request_auth?response_type=code&client_id=[REDACTED]&redirect_uri=https://auth.my.custom.domain/__/auth/handler&state=[REDACTED]scope=openid`
<img width="612" alt="Screen Shot 2019-06-14 at 5 46 09 AM" src="https://user-images.githubusercontent.com/44956/59500623-d2beb500-8e67-11e9-8984-2eb504861809.png">
This does trigger an email from Yahoo, so it seems that things are somewhat working:
<img width="520" alt="Screen Shot 2019-06-14 at 5 49 15 AM" src="https://user-images.githubusercontent.com/44956/59500841-3ea11d80-8e68-11e9-96b7-4d52c767eee8.png">
Thank you for looking into this!
Related: https://stackoverflow.com/questions/55655321/firebase-authentication-web-for-yahoo-invalid-idp-response-error
username_1: Please open a ticket with [Firebase Support](https://firebase.google.com/support/). They will route you to the right engineers to help you out. Also be prepared to provide information about your project so they can investigate it. This is not the best channel for backend related errors and sharing sensitive information about your project.
username_7: Well, the Stack Overflow question was posted by me, and it turned out to be a configuration issue with permissions on my end. I have fixed the issue, and Yahoo login is working fine in production for our app.
Gist:
In my case the error boiled down to requesting the appropriate permission in the request; the OAuth app you set up in the Yahoo dashboard must also have the correct permissions configured. I needed the user's email, so I had to do two things: 1) in the client app, add provider.addScope('sdpp-w'); 2) while setting up the OAuth app, explicitly add the "can read and write profile" permission (kinda scary sounding -- stupid Yahoo).
I too had to authorize `sdpp-w` in the Yahoo configuration and request that scope in my client code.
I did get it working with just `sdps-r` but that did not provide an email address, as you noted.
Thanks again!
username_7: Glad that worked out. Should be documented somewhere.
username_1: We have [documented](https://firebase.google.com/docs/auth/web/yahoo-oauth#handle_the_sign-in_flow_with_the_firebase_sdk) that the requested OAuth scopes must be exact matches to the preconfigured ones in the app's API permissions from the first day we supported Yahoo sign-in.
username_7: @username_1 Well, when I said it should be documented somewhere, I meant that the Yahoo OAuth documentation gives no indication that you need the sdpp-w permission just to read an email address. Their APIs moved around a lot, they also had a primary-email and secondary-email concept, and the company changed hands at some point, so this never got clearly documented. The link in the Firebase documentation that takes you to the Yahoo documentation says mail-r is good enough to work with the APIs, but from my practical experience, and as confirmed by @username_6, asking for just that permission results in not being able to log in at all.
username_1: Hmm, we'll look into that. We can update our documentation once we confirm it.
username_8: `sdpp-r` will give you the email address of the logged in user: https://developer.yahoo.com/oauth/social-directory-eol/
username_6: This doesn't work for me by simply changing the scope request in my client code from `sdpp-w` to `sdpp-r`:
<img width="612" alt="Screen Shot 2020-02-21 at 3 52 12 PM" src="https://user-images.githubusercontent.com/44956/75071084-9d706980-54c2-11ea-9839-0021492a5f7c.png">
App configuration (unchanged):
<img width="466" alt="Screen Shot 2020-02-21 at 3 52 23 PM" src="https://user-images.githubusercontent.com/44956/75071094-a2cdb400-54c2-11ea-9ca6-b4e6dbe75285.png">
username_8: username_6: You must also change the API Permission in the Yahoo App to `Read Public Extended` which corresponds to the `sdpp-r` scope set on the client. |
argoproj/argo-cd | 666741020 | Title: Option to disable validation when creating Application through UI or cli before repo/path exists
Question:
username_0: # Summary
Allow creating invalid apps when `validate=false` flag is set.
# Motivation
Currently, if you try to create an Application through the UI or CLI and the repository doesn't exist, the path is missing, or the current HEAD doesn't pass through e.g. `kustomize` cleanly, then it doesn't let you proceed.
However, this validation does not occur for an already existing Application: an application doesn't suddenly disappear if the repository is deleted/moved, a path is deleted, or the files become invalid for some reason.
At the moment, you *can* manually create an "invalid" application by e.g. `kubectl create applications.argoproj.io ... `, but this requires access to the cluster, rather than an argocd RBAC role.
# Proposal
Add a `validate` field to the UI and cli to allow creating argo applications that point at missing/invalid repositories. e.g. the following should successfully create an app:
```shell
argocd app create --validate=false myapp --repo <EMAIL> --path doesntexist/yet --dest-server https://clusternotupyet.com --dest-namespace doesntexistyet
```
Status: Issue closed
Answers:
username_0: I've just upgraded now, thanks!
Do we need a new issue to add a checkbox in the GUI? |
thunkable/thunkable-issues | 615476875 | Title: PDF Reader trouble
Question:
username_0: ### Platform: ✕ Android companion and ✕ iOS companion?
Hi, today I tried to use the Pdf Reader component for the first time and it seems to work only in "Live test":
with ✕ Android companion appears this message: "Alert - Sorry, an error occurred"
while with ✕ iOS companion appears a white page.
Answers:
username_1: Thanks for reporting. I am looking into the issue now. |
CTeX-org/ctex-kit | 679722303 | Title: ctex: kinsoku handling when calling luatexja
Question:
username_0: A problem discovered under #513; opening a new issue for it.
When ctex loads luatexja, it seems that punctuation kinsoku (line-break prohibition at line start/end) and the spacing between punctuation and Western text are not handled?
```LaTeX
\documentclass{article}
\usepackage{ctex}
\usepackage[text=9em]{geometry}
\usepackage{lua-visual-debug}
\begin{document}
一二三,四五六,七八(9,0)。
\end{document}
```

Calling luatexja directly, on the other hand, works as expected. luatexja.lua contains statements that load ltj-kinsoku.tex, but for some reason they do not take effect when luatexja is loaded through ctex.
```LaTeX
\documentclass{article}
\usepackage{luatexja}
\usepackage[text=9\zw]{geometry}
\usepackage{lua-visual-debug}
\usepackage{luatexja-adjust}
\begin{document}
\parindent=2\zw
一二三,四五六,七八(9,0)。
\end{document}
```

Manually loading ltj-kinsoku.tex:
```LaTeX
\documentclass{article}
\usepackage{ctex}
\usepackage[text=9em]{geometry}
\usepackage{lua-visual-debug}
\usepackage{luatexja-adjust}
\makeatletter
\input ltj-kinsoku.tex\relax
% or \include{ltj-kinsoku}
\makeatother
\begin{document}
一二三,四五六,七八(9,0)。
\end{document}
```
The kinsoku (line-break prohibition) rules and the spacing handling are specified in ltj-kinsoku.tex, and those rules work reasonably well for Chinese typesetting.
Answers:
username_0: Mystery solved: when ctex performs kinsoku handling, it loads ltj-kinsoku.lua; however, luatexja removed ltj-kinsoku.lua in [this update](https://osdn.net/projects/luatex-ja/scm/git/luatexja/commits/5219ca4c42db340b0751bdcf67f2af6f8955cb7b) of 8/7 and now provides an ltj-kinsoku.tex file instead, which is why the kinsoku handling stopped working.
Status: Issue closed
|
orientechnologies/orientjs | 97094216 | Title: Problem with date before the start of Gregorian Calendar
Question:
username_0: Hi,
I have a problem when inserting dates before the start of the Gregorian calendar (10-15-1582).
If I insert a date into OrientDB before 10-15-1582, the date returned on select does not match the inserted one.
If I insert a date into OrientDB after 10-15-1582, the date returned on select matches the inserted one.
I have created a unit test if you want to see :
```javascript
describe("Bug 330: Select date before gregorian calendar", function () {
this.timeout(10 * 10000);
var LIMIT = 5000;
before(function () {
return CREATE_TEST_DB(this, 'testdb_bug_330')
.bind(this)
.then(function () {
return this.db.class.create('User', 'V');
})
.then(function (item) {
this.class = item;
return item.property.create([
{
name: 'firstname',
type: 'String'
},
{
name: 'birthDate',
type: 'datetime'
}
])
})
.then(function () {
return this.db.query('CREATE VERTEX User SET firstname = :firstname, birthDate = :birthDate',
{
params: {
firstname: 'Robert',
birthDate: new Date("1200-11-11T00:00:00.000Z")
}
}
);
})
.then(function () {
return this.db.query('CREATE VERTEX User SET firstname = :firstname, birthDate = :birthDate',
{
params: {
firstname: 'Marcel',
birthDate: new Date("1582-10-15T00:00:00.000Z") // Start Gregorian calendar
}
}
);
})
.then(function () {
return this.db.query('CREATE VERTEX User SET firstname = :firstname, birthDate = :birthDate',
{
params: {
firstname: 'Andrew',
birthDate: new Date("1987-03-03T00:00:00.000Z")
}
[Truncated]
});
it('should get the previously inserted date', function () {
return this.db.query('SELECT FROM User WHERE firstname = :firstname',
{
params: {
firstname: 'Andrew'
}
}
).then(function (result) {
var expectedDate = new Date("1987-03-03T00:00:00.000Z");
result[0].birthDate.should.be.eql(expectedDate);
})
});
});
```
Answers:
username_0: If someone else has the same problem, here is a temporary workaround:
If you need to get the date, retrieve it as a string:
```sql
SELECT birthDate.format("yyyy-MM-dd HH:mm:ss:SSS Z", "UTC") as birthDate FROM User
```
And in JavaScript, reconstruct the date field:
```javascript
var resultDate = new Date(result[0].birthDate);
```
username_1: I created your test case in Java and it passes.
username_0: The LONG value of `1200-11-11T00:00:00.000Z` in javascript is `-24271660800000`.
When I select this field with orientjs, the raw result equals `-24271056000000`.
```
-24271660800000 = Sat, 11 Nov 1200 00:00:00 GMT
-24271056000000 = Sat, 18 Nov 1200 00:00:00 GMT
```
You should test the LONG value returned through the binary protocol.
username_2: @username_0
the problem is
```JS
var expectedDate = new Date("1200-11-11T00:00:00.000Z");
console.log(expectedDate.getTime());
-24271660800000
```
```JAVA
DateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
Date d = format.parse("1200-11-11 00:00:00.000");
System.out.println(d.getTime());
-24271059600000
```
username_0: In the meantime, we have fixed the problem by storing dates as LONG. |
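For reference, the 7-day offset above is exactly the gap between the proleptic Gregorian calendar (which JavaScript's `Date` uses for all dates) and the Julian calendar (which Java's default `GregorianCalendar` switches to before 1582-10-15). A minimal sketch that reproduces the numbers with Python's proleptic-Gregorian `datetime` (variable names here are mine, just for illustration):

```python
from datetime import datetime, timezone

MS_PER_DAY = 86_400_000

# Epoch millis for 1200-11-11T00:00:00Z on the proleptic Gregorian
# calendar, which is what JavaScript's new Date(...).getTime() uses:
js_millis = int(datetime(1200, 11, 11, tzinfo=timezone.utc).timestamp() * 1000)
print(js_millis)  # -24271660800000

# Value that came back through the binary protocol in the report above:
stored_millis = -24271056000000
print((stored_millis - js_millis) // MS_PER_DAY)  # 7 -- exactly one week
```

In the year 1200 the Julian calendar is 7 days behind the proleptic Gregorian one, so a round-trip through a Julian-aware `GregorianCalendar` shifts the date by exactly that week.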
thotluna/components | 851869011 | Title: Create Picture, Loading, snackbar component
Question:
username_0: The snackbar with error and info
Answers:
username_0: add Modal and Layout/Container, Layout / Container with header, body, and footer
username_0: Thinking about it, it is better to develop the snackbar together with the Modal, once the problem of how we are going to do it is solved.
Status: Issue closed
username_0: all ok
username_0: Please, create modal, snackbar and other layout
username_0: The snackbar with error and info
username_0: Ready
Status: Issue closed
|
solidusio-contrib/solidus_comments | 153826355 | Title: Broken routes
Question:
username_0: CRUD routes are added for comments, but none of the views are implemented so going to any of them errors.
Since comments are all added to specific objects (Orders and shipments at the moment), these routes should be removed.
Status: Issue closed |
getgauge/gauge-vscode | 613456474 | Title: Java Maven Project creator isn't using the current Maven archetype Version
Question:
username_0: After creating a new Gauge project for Maven and Java with the Gauge plugin, `mvn test` results in
```shell
[INFO] --- gauge-maven-plugin:1.1.0:execute (default) @ gaug-maven-test ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.435 s
[INFO] Finished at: 2020-05-06T18:31:49+02:00
[INFO] ------------------------------------------------------------------------
Error: unknown command "/home/username_0/dev/workspace_inkasso40/gaug-maven-test/specs" for "gauge"
Run 'gauge --help' for usage.
[ERROR] Failed to execute goal com.thoughtworks.gauge.maven:gauge-maven-plugin:1.1.0:execute (default) on project gaug-maven-test: Gauge Specs execution failed -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
```
Expected result is a working project.
I think the plugin is using an older Maven archetype version for generating the project.
**Workaround**
Modifying `pom.xml` helps:
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>example</groupId>
<artifactId>gaug-maven-test</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>com.thoughtworks.gauge</groupId>
<artifactId>gauge-java</artifactId>
<version>0.7.7</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.assertj</groupId>
<artifactId>assertj-core</artifactId>
<version>3.16.0</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<source>11</source>
<target>11</target>
[Truncated]
**Steps to reproduce**
Using Command Palette:
`Create a new Gauge Project -> java_maven-> Select a folder -> choose project name`
**Version VS Code**
1.44.2
**Version Gauge Plugin**
0.0.13
**Gauge Version**
Gauge version: 1.0.8
Commit Hash: 28617ea
Plugins
-------
html-report (4.0.10)
java (0.7.7)
screenshot (0.0.1)
Answers:
username_1: @username_0 thanks for finding this. Can you send a pull request to the following repo with the fix?
https://github.com/getgauge/gauge-mvn-archetypes/blob/master/pom.xml
username_0: @username_1 The pom.xml you mentioned is the pom.xml of the archetype project itself, not the template for the generated Maven project's pom. The template for that is https://github.com/getgauge/gauge-mvn-archetypes/blob/master/gauge-archetype-java/src/main/resources/archetype-resources/pom.xml, and it looks good enough to solve this problem. I think the VS Code plugin is using archetype version 1.0 (https://github.com/getgauge/gauge-mvn-archetypes/blob/582840bb007a55fe1f569e9089ec2222736b9039/gauge-archetype-java/pom.xml), because the template for the generated Maven project looks like what I see in my case.
username_2: The VS Code plugin does not use a specific archetype version. So it ends up using the latest archetype version.
Can you please verify it once again?
username_0: @username_2 I tried it again today with the same result.
The generated POM:
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>example</groupId>
<artifactId>gauge-test</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>com.thoughtworks.gauge</groupId>
<artifactId>gauge-java</artifactId>
<version>0.3.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
<plugin>
<groupId>com.thoughtworks.gauge.maven</groupId>
<artifactId>gauge-maven-plugin</artifactId>
<version>1.1.0</version>
<executions>
<execution>
<phase>test</phase>
<configuration>
<specsDir>specs</specsDir>
</configuration>
<goals>
<goal>execute</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
```
**Version VS Code**
1.45.1
**Version Gauge Plugin**
0.0.13
username_3: Closing as the template initialisation is updated in the latest version of the gauge-vscode plugin and this is no longer relevant.
Status: Issue closed
|
mblackstock/node-red-contrib-data-view | 775015742 | Title: Function of the node hotspot
Question:
username_0: A very welcome node Mike, thank you!
Just an observation... I wondered if the hotspot would be of more value if it actually hid the chart from view, instead of just stopping it.
Answers:
username_1: Good idea, maybe I will make that an option like hide on deactivate or something. Thanks!
username_2: I agree with username_0 on this, in fact it was the behaviour I expected. It would be especially useful in complex flows to avoid clutter when using the node purely for debugging.
username_1: added hide on deactivate, true by default. on develop branch.
Status: Issue closed
|
edges-collab/edges-io | 814871253 | Title: New Structure
Question:
username_0: Given how things must be measured, it would make much more sense for the structure to be:
```
ambient-<run>/
spectra/
<time-stamp-1>.acq
<time-stamp-2>.acq
resistance/
<time-stamp>.csv
s11/
open-01.s1p
open-02.s1p
match-01.s1p
etc.
```
Here, the `SwitchingState` and `ReceiverReading` would be their own directories. Typically, these could be bundled in an overall observation, similar to now. But it would be much easier to identify what's missing and grab it from another observation then.
In addition, *each load* should have a `meta.yaml` that defines meta information about the run. This is important for the `SwitchingState` and `ReceiverReading`, which need resistance readings. Those should be inside their directories, not in some random definition file! |
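To make the idea concrete, a per-load `meta.yaml` might look something like the sketch below. Every field name and value here is hypothetical -- the actual schema would need to be agreed on -- but it shows resistance readings living alongside the load they belong to:

```yaml
# Hypothetical meta.yaml for one load; all keys and values are illustrative only.
load: switching_state
run: 01
resistance:
  file: resistance/2020-01-15T10-30-00.csv   # reading taken for this load
notes: >
  Kept next to the load it describes, not in a separate
  top-level definition file.
```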
geckolinux/geckolinux-project | 1105120259 | Title: [Firmware Bug]: TSC_DEADLINE disabled due to Errata; please update microcode to version: 0x52 (or later)
Question:
username_0: Pantheon Tumbleweed
Core i3 Haswell
Which package do i have to upgrade to fix it?
Answers:
username_1: Hi there, are you experiencing any missing functionality or lack of hardware support?
username_0: Wow that was such a quick reply. Open source/hobbyist devs are so devoted!
I'm not sure. Even hardware acceleration seems to be working. However the maximize and multitasking animations are a tad slow n jittery.
username_0: I have just noticed that some other animations are a bit slow too. Might be a spec limitation. I tried my best to make do with 4G ram with low swappiness, less startup scripts and services etc. Otherwise performance is perfect.
username_1: Please try updating the system with `sudo zypper dup`, that will install any missing firmware packages for your hardware.
Status: Issue closed
username_0: I tried it but unfortunately the error still persists.
username_1: It's probably just an inconsequential kernel warning, they're very common. |
dart-lang/sdk | 849568975 | Title: Analyzer does not check that bounds are satisfied by generic metadata arguments
Question:
username_0: The following example should be an error, since `String` is an invalid type argument for the generic class `Y`:
```dart
class Y<T extends num> {
const Y();
}
@Y<String>()
class Test1 {}
```
The analyzer currently issues no errors on this code when run with `--experiment=generic-metadata`.
This is tested by `co19/src/Language/Metadata/syntax_t11.dart`, but unfortunately that test is broken pending the next co19 roll.
cc @devoncarew @username_1 @natebosch
Answers:
username_1: https://dart-review.googlesource.com/c/sdk/+/194000
Status: Issue closed
|
tensorflow/tensorflow | 166322179 | Title: Why doesn't tensorflow support tensor as the feed_dict?
Question:
username_0: **What I'm trying to do**
I am trying to extract CNN features for my own images with residual-net based on https://github.com/ry/tensorflow-resnet. I plan to input image data from JPG files before exploring how to convert the images into a single file.
**What I have done**
I have read https://www.tensorflow.org/versions/r0.9/how_tos/reading_data/index.html and some related materials about how to input data like feeding and placeholder. Here is my code:
```python
import tensorflow as tf
from convert import print_prob, checkpoint_fn, meta_fn
from image_processing import image_preprocessing

tf.app.flags.DEFINE_integer('batch_size', 1, "batch size")
tf.app.flags.DEFINE_integer('input_size', 224, "input image size")
tf.app.flags.DEFINE_integer('min_after_dequeue', 224, "min after dequeue")
tf.app.flags.DEFINE_integer('layers', 152, "The number of layers in the net")
tf.app.flags.DEFINE_integer('image_number', 6951, "number of images")
FLAGS = tf.app.flags.FLAGS

def placeholder_inputs():
    images_placeholder = tf.placeholder(tf.float32, shape=(FLAGS.batch_size, FLAGS.input_size, FLAGS.input_size, 3))
    label_placeholder = tf.placeholder(tf.int32, shape=FLAGS.batch_size)
    return images_placeholder, label_placeholder

def fill_feed_dict(image_ba, label_ba, images_pl, labels_pl):
    feed_dict = {
        images_pl: image_ba,
    }
    return feed_dict

min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(FLAGS.image_number *
                         min_fraction_of_examples_in_queue)
dataset = tf.train.string_input_producer(["hollywood_test.txt"])
reader = tf.TextLineReader()
_, file_content = reader.read(dataset)
image_name, label, _ = tf.decode_csv(file_content, [[""], [""], [""]], " ")
label = tf.string_to_number(label)
num_preprocess_threads = 10
images_and_labels = []
with tf.Session() as sess:
    for thread_id in range(num_preprocess_threads):
        image_buffer = tf.read_file(image_name)
        bbox = []
        train = False
        image = image_preprocessing(image_buffer, bbox, train, thread_id)
        image = image_buffer
        images_and_labels.append([image, label])
    image_batch, label_batch = tf.train.batch_join(images_and_labels,
                                                   batch_size=FLAGS.batch_size,
                                                   capacity=min_queue_examples + 3 * FLAGS.batch_size)
    images_placeholder, labels_placeholder = placeholder_inputs()
    new_saver = tf.train.import_meta_graph(meta_fn(FLAGS.layers))
    new_saver.restore(sess, checkpoint_fn(FLAGS.layers))
    graph = tf.get_default_graph()
    prob_tensor = graph.get_tensor_by_name("prob:0")
    images = graph.get_tensor_by_name("images:0")
    feed_dict = fill_feed_dict(image_batch, label_batch, images, labels_placeholder)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    sess.run(tf.initialize_all_variables())
    prob = sess.run(prob_tensor, feed_dict=feed_dict)
    print_prob(prob[0])
    coord.request_stop()
    coord.join(threads)
```
**What my question is**
The code above got the error `TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.` I am quite confused about why `feed_dict` doesn't support a tensor as input, since the `batch_join` function returns a tensor, and so do other ops. I found that in the MNIST example of TensorFlow there is even a separate function to produce a batch, even though TensorFlow already provides methods for batching. So I wonder if there is an elegant way to do these things. If this is because of my lack of searching and careful reading, I really apologize.
Answers:
username_1: I'm new to tensorflow; in my experience, you really don't need to feed a tf.Tensor to feed_dict. Once you have a tensor with a value, you can get this value with your_tensor.eval() or sess.run(your_tensor) and then feed the output to your feed_dict.
username_0: I have also searched for how to get the value of a tensor to solve this problem and seen solutions like the above. However, when I try to get image_batch.eval() or even image_buffer.eval(), the program just keeps running without stopping or outputting anything. I have tried to get a constant value from a tensor and succeeded, but this is useless with tensors obtained from tf.read_file() and the like.
username_2: If you have your data in tensors anyway, you don't need to use placeholders and `feed_dict`. You can just use `image_batch` / `label_batch` instead of `images_placeholder` / `label_placeholder` when setting up your model / loss.
username_0: Yes, I can see from the examples of training procedures that I can just use tensors as input to any ops in TensorFlow. However, in this case I am using a predefined network, so I am unsure whether I can change the network definition. That is to say, I have no idea how to change the input of the loss function (this is how the nodes "images" and "prob" are defined in the predefined model):
```python
images = tf.placeholder("float32", [None, 224, 224, 3], name="images")
logits = resnet.inference(images,
                          is_training=False,
                          num_blocks=num_blocks,
                          preprocess=True,
                          bottleneck=True)
prob = tf.nn.softmax(logits, name='prob')
```
username_3: StackOverflow is a better venue. One key thing to note is that a `Tensor` is simply a **symbolic** object. The values in your `feed_dict` are the **actual** values, e.g. a Numpy ndarray.
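username_3's distinction is worth making concrete. Here is a toy sketch -- no TensorFlow involved, and every name in it is invented for illustration -- of why a symbolic node can never itself be a feed value:

```python
class SymbolicNode:
    """Stand-in for tf.Tensor: it names a computation and holds no data."""

    def __init__(self, name):
        self.name = name


def run(fetch, feed_dict):
    """Stand-in for sess.run: feeds must be concrete values, not nodes."""
    for value in feed_dict.values():
        if isinstance(value, SymbolicNode):
            raise TypeError("The value of a feed cannot be a symbolic node; "
                            "evaluate it to a concrete value first.")
    return feed_dict[fetch]


x = SymbolicNode("x")
print(run(x, {x: 3.0}))  # 3.0 -- a concrete Python float is a valid feed
y = SymbolicNode("y")
# run(x, {x: y}) raises TypeError, just like feeding a tf.Tensor does
```

The practical fixes suggested in this thread follow directly: either materialize the tensor first (run it and feed the resulting ndarray) or build the graph on `image_batch` directly and skip the placeholder.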
Status: Issue closed
username_4: I have the same questions as @username_0.
"I have no idea how to change the input of loss function"...
How?
username_5: Why is this closed?
Author raises legitimate concerns.
1. Why can data already on the GPU not be fed into a GPU model?
2. Why redefine the model just to use data already on the GPU? This makes the code more complicated.
3. What if I have 5 variables with data -- do I need to define 5 models with shared trainable variables?
Yes, it is confusing and makes you write complicated (or slow) code -- I think you guys should reopen and address this.
username_6: I have the same opinion. These days I use a CNN for some text work. Because the input's dimensionality is too large, I have to convert it to another type to avoid memory errors. But what confuses me is the same as above: why can't a tensor be a feed_dict value?!
username_7: Looks like you could use a [session_handle](https://www.tensorflow.org/api_docs/python/tf/get_session_handle) to do this:
Simple case:
```
p = tf.placeholder(tf.float32, shape=())
c = tf.constant(1.0, dtype=tf.float32)
h = tf.get_session_handle(c)
h = sess.run(h) # gives you a handle
sess.run(p, feed_dict={p:h})
```
The below is an example from the get_session_handle doc:
```
c = tf.multiply(v1, v2)
h = tf.get_session_handle(c)
h = sess.run(h) # gives you a handle
p, a = tf.get_session_tensor(h.handle, tf.float32)
b = tf.multiply(a, 10)
c = sess.run(b, feed_dict={p: h.handle})
```
username_8: This works for a scalar value, but when you try to get a handle for a sequence of inputs, it fails.
Basically, I am also getting the same error **"TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles."**
The operation I performed feeds a sequence of numbers from -10 to 10 into a function. Part of the code:
```python
f = tf.square(x) + 2 * x + 5
val_x = tf.range(-10, 10, 0.1)
val_f = sess.run(f, feed_dict={x: val_x})
```
But if I replace `val_x = tf.range(-10, 10, 0.1)` with `val_x = np.arange(-10, 10, 0.1)`, it works fine without any errors.
Why is the sequence generated using `tf.range()` not accepted as a feed value?
Thanks in Advance!
username_6: @username_8 The feed_dict can't accept a tensor object. As for `f = tf.square(x) + 2 * x + 5`, it can be fed with values generated by numpy.
username_9: You can do something like this in a loop to make batches without using any tensorflow backend. This will ensure that the output of it remains non-tensor.
```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for index, offset in enumerate(range(0, tr, batch_size)):
        x_epoch, y_epoch = np.array(train_x[offset: offset + batch_size, :]), np.array(tr_yoh[offset: offset + batch_size])
```
username_10: The short answer to the question is that TensorFlow makes simple things complicated.
username_11: I may have a different question, but it is similar to yours: if my input data is already in tensor form, how can I feed it into the network? This is very simple and straightforward in PyTorch, but as we know, TensorFlow uses a static computational graph: we have to pre-define the graph and then change the input in a loop. If the inputs of the network and the loss function are defined when creating the graph, how can we change them during training?
username_12: Not the best approach, but one workaround is to construct a graph def and load it in with [input_mapping](https://www.tensorflow.org/api_docs/python/tf/graph_util/import_graph_def); through this, the [tf.data.Iterator](https://www.tensorflow.org/api_docs/python/tf/data/Iterator) can be hooked up to any arbitrary graph which previously used placeholders for input. This is especially beneficial for inference, since weights can be frozen.
username_13: For anyone who lands on this question, Lakrish provided a very usable solution (only a few lines of code). I have posted an example at this related SO question: https://stackoverflow.com/questions/38618960/tensorflow-how-to-insert-custom-input-to-existing-graph/57015133#57015133 |
matrix-org/matrix-rust-sdk | 819961612 | Title: Turn common Room methods into trait
Question:
username_0: The SDK currently has three room types `JoinedRoom`, `LeftRoom`, `InvitedRoom` which share the following methods:
```
pub fn room_id(&self) -> &RoomId
pub fn own_user_id(&self) -> &UserId
pub fn is_encrypted(&self) -> bool
pub fn encryption_settings(&self) -> Option<EncryptionEventContent>
pub fn history_visibility(&self) -> HistoryVisibility
pub async fn display_name(&'_ self) -> String
```
I think we should create a trait containing these methods and implement it for the base types of the above types (`StrippedRoom`, `Room`, and `RoomState`), thereby replacing the current methods.
Answers:
username_1: Why? You want to be generic over this trait somewhere?
username_0: The `RoomState` could use it to be generic for different room types
https://matrix-org.github.io/matrix-rust-sdk/src/matrix_sdk_base/rooms/mod.rs.html#34-41
username_0: the main reason why i suggested it is to clearly group together the common methods of the different room type.
username_0: My specific use case is the sidebar (room list) in Fractal; it doesn't care about the specific type of the room. I can use the `RoomState` enum, but it requires me to wrap `Vec<Rooms>` like so:
`client.joined_rooms().iter().map(|room| RoomState::Joined(room.clone())).collect());` and in multiple locations we need to special-case the different types.
username_0: As an additional benefit, a trait would make sure that the common methods are actually the same.
These two methods have slightly different return values:
https://matrix-org.github.io/matrix-rust-sdk/matrix_sdk/struct.InvitedRoom.html#method.display_name
https://matrix-org.github.io/matrix-rust-sdk/matrix_sdk/struct.JoinedRoom.html#method.display_name
username_1: I wonder whether an enum with the same methods wouldn't be better for that (without having looked at any of the code).
username_0: That's already the case with `RoomState` but it doesn't implement all common methods.
username_1: So would adding those fix your issues?
username_0: yes, it would fix most of my specific issue. It would still require wrapping like this: `client.joined_rooms().iter().map(|room| RoomState::Joined(room.clone())).collect());`
username_2: We can add an `From` implementation so `JoinedRoom` -> `RoomState` is easier to construct, I prefer an enum here as well since traits usually make things harder to discover.
username_0: I would prefer not to have to construct it, but works for me :)
username_2: We can also add an `rooms()` method that would return all rooms in `RoomState` form, if Fractal wants to treat all rooms equally seems like such a method would be preferred instead of calling three methods to collect the different room types?
username_0: I still need to group them by type.
I think we can close this issue, if you don't like the common methods to be a trait. :)
username_2: Not sure I understand how you need to treat them differently at one place but then treat them the same way at another. The `rooms()` method is probably be a good idea anyways so added this in https://github.com/matrix-org/matrix-rust-sdk/commit/8f481dd8592511454f3c1119ea9018204a1f0ab1.
username_0: I have separate collections for each room type but I want to use the Rooms in each collection the same way.
username_1: I think it would help to see the code in question. It sounds like generics might truly be what you want, and then a trait would be the best solution. If this is what ends up happening, making it [inherent](https://docs.rs/inherent/0.1.6/inherent/) could fix the discoverability issue. If `From` conversions to the enum exist, you could also be generic over `Into<RoomState>`, but that seems a bit hacky.
username_0: @username_1 I don't think we should design the API around my specific use case.
My issue with the current API is that it forces me to treat rooms differently based on their underlying matrix type, even though I care only about the part that is common to all rooms.
Status: Issue closed
|
bernardd/Crossings | 603293753 | Title: Rollback to Harmony 1.2 or use shared Harmony 2.x dependency
Question:
username_0: We noticed from user bug reports that you are using Harmony 2.0.0.8.
This version of Harmony is not compatible with widely used 1.2.0.1.
We have been working on a central Harmony 2.0.0.9 mod that restores compatibility. I would strongly recommend you to make use of that mod (there is a small utility that auto-subscribes it on game startup, so there is no disruption for existing users).
Alternatively, you could roll back to Harmony version 1.2, which would make it compatible with other 1.2 mods (99%) and with mods using 2.0.0.9 through the shared dependency.
See https://github.com/username_0/CitiesHarmony
Also see this reddit thread about Harmony 2.x drama: https://www.reddit.com/r/RimWorld/comments/fbnm45/harmony_the_full_story/ (it also partially applies to C:S)
Answers:
username_0: Ah, we noticed that you reverted to pre-Harmony versions shortly after you updated in March. Well, I guess then there is less pressure. With the shared Harmony mod you could make the switch!
username_1: Thanks - when I have a moment I'll reinstate the use of Harmony with the new system - it definitely makes the code an order of magnitude simpler, so here's hoping it goes more smoothly this time around.
username_1: Unfortunately that one isn't an option - I hit this bug: https://github.com/pardeike/Harmony/issues/227 which is only fixed in v2.
username_1: Done
Status: Issue closed
|
mdshw5/pyfaidx | 60357875 | Title: Subclass Fasta to ingest VCF and return consensus sequence
Question:
username_0: This should be straightforward using PyVCF and pysam. The subclass could be called FastaVariant and return a consensus sequence with either homozygous variants, heterozygous variants, or both included as substitutions. Skip over indels and complex variants. MNPs can be handled, but the length of the returned sequence should be checked.
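A minimal, dependency-free sketch of the substitution logic such a subclass could use (the function name and `(pos, ref, alt)` tuple layout are illustrative assumptions, not the eventual pyfaidx or PyVCF API):

```python
def consensus(reference, variants):
    """Apply substitution variants to a reference sequence.

    `variants` is a list of (pos, ref, alt) tuples with 0-based
    positions. Indels and complex variants (where ref and alt differ
    in length) are skipped, so the returned sequence always has the
    same length as the reference -- the length check proposed for MNPs.
    """
    seq = list(reference)
    for pos, ref, alt in variants:
        if len(ref) != len(alt):
            continue  # skip indels / complex variants
        if reference[pos:pos + len(ref)] != ref:
            raise ValueError("variant does not match the reference")
        seq[pos:pos + len(alt)] = alt
    result = "".join(seq)
    assert len(result) == len(reference)  # length check for MNPs
    return result

# A SNP substitutes one base, an MNP several; an indel is skipped.
assert consensus("ACGT", [(1, "C", "T")]) == "ATGT"
assert consensus("ACGTAC", [(2, "GT", "TA")]) == "ACTAAC"
assert consensus("ACGT", [(1, "C", "CT")]) == "ACGT"
```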
Status: Issue closed |
ampproject/amphtml | 180781673 | Title: Perform video autoplay feature detection in vsync
Question:
username_0: Both the [`appendChild`](https://github.com/ampproject/amphtml/blob/master/src/service/video-manager-impl.js#L366) and `remove()` calls used for autoplay feature detection mutate the DOM, so they should be performed inside a vsync mutate phase.
Answers:
username_1: Not Applicable anymore per https://github.com/ampproject/amphtml/pull/5412
Status: Issue closed
|
NCEAS/metacatui | 1169914693 | Title: Create UI test suite
Question:
username_0: - Use Selenium
- Review ESS-DIVE's Selenium test suite to see what we can merge into the main code base (reduce their tech debt and save us time!)
Answers:
username_1: In case it's useful, Chrome's DevTools now has a preview-level feature called [Recorder](https://developer.chrome.com/docs/devtools/recorder/) for recording [Puppeteer](https://pptr.dev/) flows. [Checkly](https://www.checklyhq.com/) also provides a [Chrome extension](https://www.checklyhq.com/docs/headless-recorder/) for generating Playwright and Puppeteer flows. I'm not sure if similar things exist for Selenium. |
pytest-dev/pytest-django | 180971655 | Title: django.db.utils.OperationalError if execute query in apps.ready
Question:
username_0: Hello, I need some help.
If I execute a query in the [ready](https://docs.djangoproject.com/en/1.10/ref/applications/#django.apps.AppConfig.ready) method, `pytest` fails with `django.db.utils.OperationalError: no such table`, while `manage.py runserver` works fine. The Django documentation says it should work fine as well.
https://docs.djangoproject.com/en/1.10/ref/applications/#how-applications-are-loaded
Traceback:
```bash
Traceback (most recent call last):
File "/home/user/.pyenv/versions/csc/bin/pytest", line 11, in <module>
sys.exit(main())
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/config.py", line 46, in main
config = _prepareconfig(args, plugins)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/config.py", line 131, in _prepareconfig
pluginmanager=pluginmanager, args=args)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 613, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 250, in _wrapped_call
wrap_controller.send(call_outcome)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/helpconfig.py", line 32, in pytest_cmdline_parse
config = outcome.get_result()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 279, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 265, in __init__
self.result = func()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
res = hook_impl.function(*args)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/config.py", line 881, in pytest_cmdline_parse
self.parse(args)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/config.py", line 1037, in parse
self._preparse(args, addopts=addopts)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/config.py", line 1008, in _preparse
args=args, parser=self._parser)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 613, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 254, in _wrapped_call
return call_outcome.get_result()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 279, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 265, in __init__
self.result = func()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
res = hook_impl.function(*args)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/pytest_django/plugin.py", line 245, in pytest_load_initial_conftests
_setup_django()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/pytest_django/plugin.py", line 148, in _setup_django
django.setup()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
[Truncated]
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/user/.pyenv/versions/py34/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py", line 323, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: notifications_type
```
Answers:
username_0: Error occurs on Django 1.9.x
username_0: I realized it's a bad idea to execute a queryset in `apps.ready`.
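For anyone hitting this, one framework-agnostic workaround is to defer the query out of `ready()` so it only runs on first use, once the test database tables exist. A minimal Python sketch (the class and names are illustrative, not Django or pytest-django API):

```python
class LazyQuery:
    """Defer a database lookup until first access.

    Running a queryset inside AppConfig.ready() fails under pytest
    because the test database tables do not exist yet at setup time;
    deferring the query until something actually needs the result
    avoids the OperationalError.
    """

    def __init__(self, query):
        self._query = query   # a zero-argument callable, run lazily
        self._result = None
        self._loaded = False

    def get(self):
        if not self._loaded:
            # e.g. list(Type.objects.all()) in the real application
            self._result = self._query()
            self._loaded = True
        return self._result

calls = []
def fake_query():
    calls.append(1)
    return ["email", "sms"]

types = LazyQuery(fake_query)
assert calls == []                       # nothing ran at "ready" time
assert types.get() == ["email", "sms"]   # query runs on first access
assert types.get() == ["email", "sms"]
assert len(calls) == 1                   # ...and only once
```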
Status: Issue closed
|
vim-syntastic/syntastic | 250587263 | Title: Syntastic does not look at stderr to determine checker version
Question:
username_0: I recently had the following error when trying to use `jshint` from Syntastic:
```
syntastic: error: checker output:
syntastic: error: checker javascript/jshint: can't parse version string (abnormal termination?)
```
After some investigation I found that `jshint` writes its version to stderr and that the version is retrieved using `system()` which only takes stdout into account.
I found a similar issue (#1369) that was closed because it seemed to be a problem with `jshint`.
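The behaviour is easy to reproduce outside Vim: capturing only stdout loses a version string written to stderr, while a `2>&1`-style merge recovers it. A Python sketch of the difference (illustrative only; in Vim, what `system()` captures is governed by `shellredir`):

```python
import subprocess
import sys

# Stand-in for a tool that, like jshint, prints its version to stderr.
cmd = [sys.executable, "-c", "import sys; sys.stderr.write('2.9.5\\n')"]

# Capturing stdout alone loses the version string entirely.
only_stdout = subprocess.run(cmd, capture_output=True, text=True).stdout
assert only_stdout == ""

# Merging stderr into stdout, the effect of a `2>&1` redirect, recovers it.
merged = subprocess.run(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True).stdout
assert merged.strip() == "2.9.5"
```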
Answers:
username_1: Here's an educated guess: please read `:h syntastic-csh`. If that doesn't solve your problem, please open a JavaScript file, set `g:syntastic_debug` to 3, run `:SyntasticInfo`, and post the output.
username_0: I am running a Bourne like shell so that's not the problem (I did set `g:syntastic_shell = /bin/sh` just to be sure).
`:SyntasticInfo` output:
```Syntastic version: 3.8.0-65 (Vim 800, Linux, GUI)
Info for filetype: javascript
Global mode: active
Filetype javascript is active
The current file will be checked automatically
syntastic: 26.636831: system: command run in 0.259476s
syntastic: 26.637491: javascript/eslint: getVersion: 'eslint --version': ['v3.19.0', '']
syntastic: 26.638454: javascript/eslint: eslint version = [3, 19, 0]
syntastic: 26.641516: javascript/closurecompiler: g:syntastic_javascript_closurecompiler_path = ''
syntastic: 26.642136: javascript/closurecompiler: filereadable('') = 0
Available checkers: eslint tern_lint
Currently enabled checkers: -
syntastic: 26.646934: &shell = '/bin/loksh', &shellcmdflag = '-c', &shellpipe = '| tee', &shellquote = '', &shellredir = '>', &shelltemp = 1, &shellxquote = '', &autochdir = 0, &shellxescape = ''
```
In https://github.com/vim-syntastic/syntastic/blob/master/autoload/syntastic/util.vim#L43, when I replace `system(a:command)` with `system(a:command . ' 2>&1')` to redirect stderr to stdout, it works, but I don't know whether that is OK (it might be, if we only support running on Bourne-like shells).
username_0: Never mind, I just learned about ``shellredir`` and how its default value is set. I use a Bourne-compatible shell with a name that's different from `sh`, `ksh` or `bash`, and because of that `shellredir` is set to `>`.
Setting `shellredir` to `>%s 2>&1` solves the issue.
Status: Issue closed
username_1: It isn't. |
flutter/flutter | 680357709 | Title: [google_maps_flutter] Image overlay
Question:
username_0: I am developing an application for indoor navigation. I have a .svg image, my goal is to define 4 coordinates of corner points (top left, top right, bottom left, bottom right) to place it on the map. Any idea how to solve this?
Answers:
username_1: Hi @username_0,
This platform is not meant for assistance on personal code. Please see https://flutter.dev/community for resources and asking questions like this.
You may also get some help if you post it on Stack Overflow and if you need help with your code, please see https://www.reddit.com/r/flutterhelp/
Closing, as this isn't an issue with Flutter itself. If you disagree, please write in the comments and I will reopen it.
Thank you
Status: Issue closed
|
Azure/azure-iot-cli-extension | 1163806160 | Title: Error message on "az iot hub monitor-events -n"
Question:
username_0: ### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az iot hub monitor-events
Extension Name: azure-iot. Version: 0.13.0.`
**Errors:**
```
The command failed with an unexpected error. Here is the traceback:
cannot import name 'c_uamqp' from partially initialized module 'uamqp' (most likely due to a circular import) (/home/kalika/.azure/cliextensions/azure-iot/uamqp/__init__.py)
Traceback (most recent call last):
File "/opt/az/lib/python3.8/site-packages/knack/cli.py", line 231, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/az/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 658, in execute
raise ex
File "/opt/az/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 721, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/az/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 692, in _run_job
result = cmd_copy(params)
File "/opt/az/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 328, in __call__
return self.handler(*args, **kwargs)
File "/opt/az/lib/python3.8/site-packages/azure/cli/core/commands/command_operation.py", line 121, in handler
return op(**command_args)
File "/home/kalika/.azure/cliextensions/azure-iot/azext_iot/operations/hub.py", line 2945, in iot_hub_monitor_events
_iot_hub_monitor_events(
File "/home/kalika/.azure/cliextensions/azure-iot/azext_iot/operations/hub.py", line 3058, in _iot_hub_monitor_events
from azext_iot.monitor.builders import hub_target_builder
File "/home/kalika/.azure/cliextensions/azure-iot/azext_iot/monitor/builders/hub_target_builder.py", line 8, in <module>
import uamqp
File "/home/kalika/.azure/cliextensions/azure-iot/uamqp/__init__.py", line 12, in <module>
from uamqp import c_uamqp # pylint: disable=import-self
ImportError: cannot import name 'c_uamqp' from partially initialized module 'uamqp' (most likely due to a circular import) (/home/kalika/.azure/cliextensions/azure-iot/uamqp/__init__.py)
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az iot hub monitor-events -n {}`
## Expected Behavior
## Environment Summary
```
Linux-5.4.0-1070-azure-x86_64-with-glibc2.28 (Cloud Shell), Common Base Linux Delridge (quinault)
Python 3.8.12
Installer: DEB
azure-cli 2.34.1
Extensions:
azure-iot 0.13.0
ai-examples 0.2.5
ssh 1.0.0
Dependencies:
msal 1.16.0
azure-mgmt-resource 20.0.0
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
Answers:
username_1: Hi @username_0 , can you please try appending `--repair` to the command to force a re-install of `uamqp`?
username_0: Thank you for your suggestion! The problem has been solved. :-) |
npm/npx | 585696591 | Title: [FEATURE] Make it easier to install several dependencies at once
Question:
username_0: #### currently
```
npx -p yo -p generator-code -c "yo code"
```
#### desired
```
npx -c "yo code" -d yo generator-code
npx -c "<command-string>" -d <package>[ <package>]...
```
dependencies `-d`
- `-d <space-separated-packages-list>` is a shorthand for `[-p <package>]...`
- The `-d` option will interpret any `<command>` after `-d` as a package name until another `<option>` is encountered
Answers:
username_1: By the way, the `-c` isn't needed, right? You can use just `npx -p yo -p generator-code yo code`
username_0: @username_1 I don't know, but in that case, the new preferred syntax would be
```
npx yo code -d yo generator-code
```
username_2: ```sh
# Does not work
$ npx yo code -p yo -p generator-code
Error code -p yo -p generator-code
```
```sh
# Works
$ npx -p yo -p generator-code yo code
```
The reason is that everything from "yo" on is used to construct the command line to be executed.
How about instead changing `-p` to take a comma delimited list?
```sh
# Concept: comma separated list
$ npx -p yo@latest,generator-code@^1.2.10 yo code
```
username_3: The fact is that `npx` treats any non-option argument as the first command to run, and all subsequent arguments are passed as-is to that command. Just like `docker run`, and this is not supposed to change.
I think accepting a comma-delimited list is the best approach so far, as suggested by @username_2. |
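For illustration, the comma-delimited parsing suggested above is simple to specify; a Python sketch of the proposed behaviour (purely illustrative, not npx code):

```python
def parse_packages(value):
    """Split a comma-delimited -p value into individual package specs.

    Version qualifiers such as yo@latest or generator-code@^1.2.10
    survive intact, since the comma only separates whole specs.
    """
    return [spec.strip() for spec in value.split(",") if spec.strip()]

assert parse_packages("yo@latest,generator-code@^1.2.10") == [
    "yo@latest", "generator-code@^1.2.10"]
assert parse_packages("yo") == ["yo"]
```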
Borvik/vscode-postgres | 339276047 | Title: Problem Connecting to Remote Database
Question:
username_0: Running Visual Studio Code with the Postgres extension on the same machine works fine. When I run VSCode with the extension from a different client machine on the same network, I get the following error:
no pg_hba.conf entry for host "XX.XX.XX.XX", user "NNN", database "NNN", SSL off
But there *IS* a valid entry in the pg_hba.conf file, and I've confirmed it is valid by virtue of using a different tool (JetBrains DataGrip) to access the database from the aforementioned client machine.
Any suggestions?
Answers:
username_1: The database server is most likely returning that error, not the extension, or the client library the extension uses (https://www.npmjs.com/package/pg). I would at least start with verifying the connection details.
Does the DB explorer work?
username_0: The DB explorer does not work. Note - I do NOT have PostgreSQL installed on the client machine. Could I therefore be missing certain libraries that were otherwise present when running VSCode and PostgreSQL on the same machine?
I did an NPM install of the pg package you referenced above, but not sure how to test / run this to further diagnose the issue.
username_1: you shouldn't need to run npm install - that is just a library used by the extension.
the pg_hba.conf - does the user you are using have limited database access? is it "all"?
the extension uses the "postgres" database to query the structure of the database.
username_0: I've tried multiple variations of the pg_hba.conf file -- my specific database/user, all / all, postgres / all, etc. -- and get the same error each time.
username_1: I have seen that the record order in the pg_hba.conf file is important - perhaps that is what is going on.
username_0: I moved the entry to the top of the pg_hba.conf file but still got the same error. I'm not sure what you mean when you say you fixed your listen_address. I simply used the IP address of my client machine to the pg_hba.conf file and ensured that the postgres config was reloaded after I updated it.
username_1: A fresh install of postgres server defaults to only listen to localhost. I had to edit my postgresql.conf file to listen on all network interfaces. Probably not your issue if another pg client connected from the same machine.
The only other thing I can suggest is try removing the connection from the DB explorer, and readding it. Perhaps something in the connection details is not the same.
username_1: As I have successfully tested a remote connection, and the error message is being generated by the server - this isn't really an issue with the extension and will be closed.
Status: Issue closed
|
karmaphp/karma | 246570403 | Title: PHP version error
Question:
username_0: Problem 1
- Installation request for doctrine/annotations ^1.5 -> satisfiable by doctrine/annotations[v1.5.0].
- doctrine/annotations v1.5.0 requires php ^7.1 -> your PHP version (7.0.18) does not satisfy that requirement.
Answers:
username_1: @username_0 could you try again?
Status: Issue closed
|
usnistgov/fipy | 43285010 | Title: lid driven cavity example Re 1000
Question:
username_0: I'd like to contribute the lid-driven cavity at Re 1000 to the fipy flow examples. This is like the Stokes cavity, but not in the viscous limit.
Attached are the example and the figures to create the documentation.
Comparison is made with the Ghia et al 1982 results so as to see how fipy stacks up to that for a flow problem.
Qualitatively the result is ok, but improvements to fipy are needed to better match the curve.
I find this example important to also show users what _not_ to expect from fipy.
_Imported from trac ticket [#306](http://www.ctcms.nist.gov/cgi-bin/redirectLegacyMatforge.py?url=http://matforge.org/fipy/ticket/306), created by <EMAIL> on 07-19-2010 at 05:39, last modified: 01-18-2011 at 22:27_
Answers:
username_1: Should we close this issue?
username_2: Use of the [SIMPLE algorithm](https://en.wikipedia.org/wiki/SIMPLE_algorithm) is neat -- thanks to @bmcage for the contribution!
However, considering the issue has not resurfaced for 9 years, yes, I think this should be closed as a niche example.
Status: Issue closed
|
quasarframework/quasar | 251861189 | Title: quasar.mat.css + quasar.mat.styl missing variables
Question:
username_0: Hello, I've tried to set up Quasar Framework v0.14.1 with Laravel, and while compiling it I noticed what looks like a bug in "quasar.mat.css":
1.
```css
.q-collapsible-sub-item.indent {
  padding-left: $collapsible-menu-left-padding; //
  padding-right: 0;
}
```
2.
```css
.q-progress-model:not(.indeterminate) {
  position: absolute;
  top: 0;
  bottom: 0;
  border-radius: $progress-border-radius; //
  transition: width 0.3s linear;
}
```
And when I checked "quasar.mat.styl", these 2 variables below seem to be missing:
1.
```stylus
.q-collapsible-sub-item
  padding $collapsible-padding
  &.indent
    padding-left $collapsible-menu-left-padding //
    padding-right 0
```
2.
```stylus
.q-progress-model
  background currentColor
  &.animate
    animation q-progress-stripes 2s linear infinite
  &:not(.indeterminate)
    position absolute
    top 0
    bottom 0
    border-radius $progress-border-radius //
    transition $progress-transition
```
Answers:
username_1: Fixed. Available in edge and future 0.14.2. Thanks for reporting!
Status: Issue closed
|
rust-bitcoin/rust-bitcoin | 531017924 | Title: FromStr for hex?
Question:
username_0: Currently we use the `FromStr` https://doc.rust-lang.org/std/str/trait.FromStr.html to convert from hex to a type.
I argue that this should be a `FromHex` trait, as `FromStr` is usually more literal and doesn't concern itself with the radix (i.e. for integers it uses "regular" decimals).
Answers:
username_1: Depends. Which types are you talking about? In many cases the hex strings are the "standardized" string representations of the types. So `FromStr` makes sense in that case for me.
If something expects a `sha256d::Hash`, I should be able to just pass in `"000000000000000000039805d44e1d9f42442e592eb361c4741716543aac1a7c".parse()?` because that's a SHA-256d hash like you'd see them all over the place.
`FromStr` is the reverse of `Display`, right? Or at least it's supposed to be (because `ToString` is auto-implemented using `Display`). So I'm in favor of probably 99% of the cases where `FromStr::from_str(x.to_string())? == x`.
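To make the round-trip point above concrete: a plain-Python sketch of the reversed display convention for sha256d hashes (illustrative only, not the bitcoin_hashes API):

```python
def hash_from_str(s):
    """Parse a displayed sha256d hash: the hex text is shown in
    reversed byte order, so decode and reverse to get the internal
    byte representation."""
    return bytes.fromhex(s)[::-1]

def hash_to_str(b):
    """Display internal hash bytes the way Bitcoin tools print them."""
    return b[::-1].hex()

shown = "000000000000000000039805d44e1d9f42442e592eb361c4741716543aac1a7c"
h = hash_from_str(shown)
# The round trip FromStr(Display(x)) == x holds despite the reversal.
assert hash_to_str(h) == shown
assert hash_from_str("00ff") == b"\xff\x00"
```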
username_1: Also, we have `FromHex` in `bitcoin_hashes::hex` and it's implemented for all `bitcoin_hashes` type.
For one, I'm unhappy with `hex` being part of `bitcoin_hashes`. I'd much rather just use the `hex` crate or something that is broader than our repos.
username_2: we tried that with base64 and got burned. I don't understand why hex parsing is not in the stdlib of a systems language, but given that it's not, better to NIH it.
Sure, it could be its own crate I guess.
username_0: I think mostly because the current `LowerHex` / `UpperHex` etc. kinda suck (see https://github.com/rust-lang/rust/pull/67021 )
username_3: Is the consensus here just to write our own hex crate?
username_1: Or specify the issues with the `hex` crate and try fix them? What's our issue with them? I use it in almost all my projects and never had an issue. They have a good MSRV. They don't have traits FromHex and ToHex I think, but if that's the problem we can easily add them.
And when it comes to the reverse ordering for bitcoin_hashes, I think that should be something bitcoin_hashes-specific. We can e.g. have a marker trait `ReversedHex` and then `impl FromHex for T where T: Hash + ReversedHex` or something.
username_0: I personally prefer https://github.com/debris/rustc-hex which is basically https://github.com/rust-lang/rust/blob/master/src/libserialize/hex.rs
username_2: The issue with the hex crate is simply that it's an extra dep for nearly-trivial functionality.
username_2: Closing this, out of date.
Status: Issue closed
|
mesonbuild/meson | 209974127 | Title: When using link_with, transitive pthread dependency is ignored
Question:
username_0: I'm building a common library which depends on math and thread libraries. Linking this library with an executable using `link_with : [common_lib]` should pick up both these transitive dependencies. However, the thread dependency is not picked up.
I'm using meson 0.38.1. A snippet of my meson.build file is reproduced below, with filenames replaced by "..." for clarity.
```
cc = meson.get_compiler('c')
m_dep = cc.find_library('m', required : false)
thread_dep = dependency('threads')
inc = include_directories(...)
common_src = [...]
common_lib = static_library('common',
common_src,
include_directories : inc,
dependencies : [m_dep, thread_dep]);
executable('foo',
[...],
include_directories: inc,
link_with : [common_lib],
install : true)
```
The resulting compile command contains `-lm` but not `-pthread`.
Answers:
username_1: You probably want:
```meson
cc = meson.get_compiler('c')
m_dep = cc.find_library('m', required : false)
thread_dep = dependency('threads')
inc = include_directories(...)
common_src = [...]
common_lib = static_library('common',
common_src,
include_directories : inc,
dependencies : [m_dep, thread_dep]);
common_dep = declare_dependency(
include_directories: inc,
dependencies: [m_dep, thread_dep],
link_with: common_lib)
executable('foo',
[...],
dependencies: common_dep,
install : true)
```
username_0: This does work, but why doesn't the other form work? It does seem to be a bug because the behavior of `dependency('threads')` is inconsistent with other dependencies.
username_2: This is a known bug. The way we handle transitive dependencies isn't consistent right now. You should use the `declare_dependency()` format for now.
username_2: Fixed with https://github.com/mesonbuild/meson/pull/3895.
Status: Issue closed
|
pingcap/tidb | 460195597 | Title: Dumping Tidb data will make mydumper critical
Question:
username_0: ## Bug Report
1. What did you do?
If possible, provide a recipe for reproducing the error.
Dumping TiDB data will occasionally make mydumper fail with a critical error.
2. What did you expect to see?
Handle success.
3. What did you see instead?
```
** (mydumper:1): CRITICAL **: 17:17:58.896: Could not read data from ****.***: other error: request outdated
```
4. What version of TiDB are you using (`tidb-server -V` or run `select tidb_version();` on TiDB)?
version: 2.1.11
Answers:
username_0: Command like this for mydumper:
```
$ mydumper --host=xxxx --database=xxxx--port=3306 --user=xxxx --password=<PASSWORD> --chunk-filesize=64 --threads=16 --skip-tz-utc -o /path/to/backup
```
username_1: Thanks for reporting @username_0
Would you please take a look? @csuzhangxc
username_2: @username_0 Please check the system log, e.g. `dmesg`. I suspect you hit an OOM issue. It is a known issue for 2.1.11. Please upgrade to 2.1.13.
username_3: This issue was fixed in TiDB 2.1.13.
I am going to close this issue now, but please feel free to re-open if you have additional questions. Thanks!
Status: Issue closed
|
malditotm/EarthquakeMonitor | 114156037 | Title: images for Readme
Question:
username_0: 

 |
laravel/dusk | 568931500 | Title: Operation timed out ChromeDriver
Question:
username_0: - Dusk Version: **5.9**
- Laravel Version: **6.1**
- PHP Version: **7.2.21**
- Database Driver & Version: **MySQL 5.7.26**
- Chrome version: **80.0.3987.116**
- OS: **macOS Catalina 10.15.3**
### Description:
If I run `php artisan dusk` it will show this message 9/10 times:
```
Time: 53.97 seconds, Memory: 26.00 MB
There was 1 error:
1) Tests\Browser\RequestAccountTest::requestAccountFormTest
Facebook\WebDriver\Exception\WebDriverCurlException: Curl error thrown for http POST to /session/ee572e334d5b7ecd5b56736f861aa31b/element/0.26325793063635383-9/click
Operation timed out after 30001 milliseconds with 0 bytes received
```
So sometimes it works 3 times straight, but most of the time it will produce the above output 20 times straight. I am unable to run any tests.
### Steps To Reproduce:
1. Create a new Laravel application.
2. Install dusk.
3. Install the Chrome driver.
4. Update Chrome if necessary.
5. Create a simple test.
6. Run test.
Thank you for helping in advance.
Answers:
username_1: Hi there,
Thanks for reporting but it looks like this is a question which can be asked on a support channel. Please only use this issue tracker for reporting bugs with the library itself. If you have a question on how to use functionality provided by this repo you can try one of the following channels:
- [Laracasts Forums](https://laracasts.com/discuss)
- [Laravel.io Forums](https://laravel.io/forum)
- [StackOverflow](https://stackoverflow.com/questions/tagged/laravel)
- [Discord](https://discordapp.com/invite/KxwQuKb)
- [Larachat](https://larachat.co)
- [IRC](https://webchat.freenode.net/?nick=laravelnewbie&channels=%23laravel&prompt=1)
However, this issue will not be locked and everyone is still free to discuss solutions to your problem!
Thanks.
Status: Issue closed
username_0: @username_1 I don't see why this is not a valid bug report. Many people all over the internet complain about this issue.
Please correct me if I'm wrong, but Isn't this a bug in Dusk itself?
Sources:
https://github.com/laravel/dusk/issues/440
https://stackoverflow.com/questions/49837939/laravel-dusk-chrome-driver-timeout
https://laracasts.com/discuss/channels/laravel/laravel-dusk-process-exceeded-timeout
username_2: I had the same problem. Installing the latest version of the Chrome Browser fixed it for me, using these instructions here:
https://stackoverflow.com/questions/39541739/chromedriver-error-chrome-version-must-be-52-using-nightwatch/45227939#45227939
username_3: I have Ubuntu and ungoogled Chromium. I tried everything to make it work with Chromium, but nothing worked; then I just installed regular Chrome and everything works like a charm:
Download:
```sh
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
```
Installation:
```sh
sudo apt install ./google-chrome-stable_current_amd64.deb
```
YumaInaura/YumaInaura | 448900358 | Title: I'm really interested in team theory, but can it work with remote work too? Yeah. 2019-05-28 on Twitter
Question:
username_0: # Springing up everywhere? 2019-05-27 on Twitter
[*](https://twitter.com/username_0/status/1132964914522402816")
<https://github.com/username_0/username_0/issues/2035>
# Rails engineer of 4 years @ full remote, Inaura [@username_0](https://twitter.com/username_0/)

Yuma Inaura / Website development and operation, 10 years self-employed → 4 years of Rails at Aiming Osaka Studio → now doing remote Web development / Ruby / BigQuery / MySQL / Linux / ansible etc ..
# People I quoted
# Pokomoko [@migawari_poko](https://twitter.com/migawari_poko/)

ナエトル、みがわり、ゲッコウガ、ゼラオラなどのポケモンが大好き!
好きなキャラの着ぐるみを見てるとすごくワクワクしてきます!
アニメ:ポケモン、プリキュア、デジモン
ゲーム:ポケモン
ヘッダー:Taka☆さん(@Takario21)
アイコン:こうっちさん(@koucchi378)
別名:しかるナエトル
# ガルム [@N606ILDHdpp5mXZ](https://twitter.com/N606ILDHdpp5mXZ/)

I want to get along with everyone. I live as I please; that's all. I draw, chat with everyone, and so on. I take things slow and do as I like. There may be a few things you won't like about me, but please forgive them. Silent follows are welcome. If you want to be friends, follow me. I want to be your way to pass the time.
oh-my-fish/theme-bobthefish | 110923073 | Title: Unexpected line wrap happens in git directory.
Question:
username_0: 
After the git repo becomes dirty, the rightmost `5` gets wrapped to the next line.
Answers:
username_1: Assuming you're using iTerm, try changing this?
<img width="392" alt="screen shot 2015-10-12 at 8 01 35 am" src="https://cloud.githubusercontent.com/assets/53660/10431036/82256f98-70b7-11e5-8825-2fd2fc263132.png">
username_0: I'm using `XShell` on Windows 10.

still no good.
username_1: Yeah, I'm not sure. It's definitely not a CJK character. It could be that XShell is more aggressive about their wide characters than iTerm, and only let you disable it for CJK characters?
It looks like the issue is with the `…` character, not the branch glyph, right? Have you noticed it with any of the other symbols? I could add an alt style that uses something else for "untracked files", if that's the only one affected.
username_0: @username_1 Currently, no. I would love to test it if this alt style gets implemented.
Status: Issue closed
username_1: @username_0 I've added a fix (though probably not the best fix). You can try it out by updating to the latest version of bobthefish, then adding this to your `.config/fish/config.fish` or `.config/omf/init.fish`:
```fish
set -g theme_avoid_ambiguous_glyphs yes
```
username_0: Now the line wrap works correctly. But I keep getting this output.

username_1: Oh, boo. Those are a pain to track down.
My first guess is line 325. Do you mind checking it? Change this:
```fish
if [ (command git ls-files --other --exclude-standard) ]
```
To this:
```fish
set new (command git ls-files --other --exclude-standard)
if [ "$new" ]
```
username_0: @username_1 Thanks. Now it works properly. |
chef/chef-cli | 632027093 | Title: chef install Policyfile.rb cannot resolve cookbook-specific source of supermarket
Question:
username_0: `chef install -D`
# Stacktrace
Building policy policyfile
Expanded run list: recipe[example::default]
Caching Cookbooks...
Error: Failed to generate Policyfile.lock
Reason: (Solve::Errors::NoSolutionError) Unable to satisfy the following requirements:
- `chef-dk (~> 3.1.0)` required by `user-specified dependency`
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/solve-3.1.0/lib/solve/ruby_solver.rb:197:in `rescue in resolve_with_error_wrapping'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/solve-3.1.0/lib/solve/ruby_solver.rb:195:in `resolve_with_error_wrapping'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/solve-3.1.0/lib/solve/ruby_solver.rb:75:in `resolve'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/solve-3.1.0/lib/solve.rb:64:in `it!'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/lib/chef-dk/policyfile_compiler.rb:182:in `graph_solution'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/lib/chef-dk/policyfile_compiler.rb:127:in `install'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/lib/chef-dk/policyfile_services/install.rb:98:in `generate_lock_and_install'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/lib/chef-dk/policyfile_services/install.rb:62:in `run'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/lib/chef-dk/command/install.rb:78:in `run'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/lib/chef-dk/command/base.rb:58:in `run_with_default_options'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/lib/chef-dk/cli.rb:73:in `run'
/opt/chefdk/embedded/lib/ruby/gems/2.3.0/gems/chef-dk-1.2.22/bin/chef:25:in `<top (required)>'
/usr/bin/chef:81:in `load'
/usr/bin/chef:81:in `<main>'
Answers:
username_1: While this workaround works, we should update the docs if use of the `:supermarket` symbol as a source is not supported. |
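If the per-cookbook `:supermarket` source is indeed unsupported, the policy-level alternative that does resolve is `default_source`; a hedged sketch (name, URL, and versions are placeholders, matched to the run list in the stacktrace above):

```ruby
# Policyfile.rb sketch — declare the supermarket once at the policy level
# with default_source, instead of as a per-cookbook source (which this
# issue reports as not resolving).
name "example"
default_source :supermarket, "https://supermarket.example.com"
run_list "example::default"
cookbook "example", "~> 1.0"
```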
microsoft/DeepSpeedExamples | 755487619 | Title: Ignore index for mpu cross entropy
Question:
username_0: It seems to me that https://github.com/microsoft/DeepSpeedExamples/blob/fa1d1a71c48623db8a091d9cf636a5fe3b8f43c7/Megatron-LM/pretrain_bert.py#L221-L222 does not take `ignore_index=-1` into account, while the bing-bert example does: https://github.com/microsoft/DeepSpeedExamples/blob/a3cfec7c0f4aeaee2f0bb0f52b21dbf9637ac4f2/bing_bert/nvidia/modelingpreln.py#L1148
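For reference, honoring `ignore_index` means excluding those positions from both the summed loss and the token count in the denominator; a minimal pure-Python sketch of that behavior (illustrative only — not the mpu implementation):

```python
import math

def masked_nll(log_probs, labels, ignore_index=-1):
    """Mean negative log-likelihood over positions whose label != ignore_index.

    log_probs: list of dicts mapping class id -> log-probability.
    labels: list of int class ids; ignore_index marks padded/masked tokens.
    """
    kept = [-log_probs[i][y] for i, y in enumerate(labels) if y != ignore_index]
    return sum(kept) / len(kept)

# The padding position (label -1) contributes to neither numerator nor denominator:
lp = [{0: math.log(0.5), 1: math.log(0.5)}] * 3
print(masked_nll(lp, [0, 1, -1]))  # → log(2) ≈ 0.6931
```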
BiologicalRecordsCentre/iRecord | 606532862 | Title: Unhelpful English name lookup
Question:
username_0: @username_1 could you look into this one please. We have two bee species in genus Osmia that have similar vernacular names:
- Osmia bicornis = Red Mason Bee
- Osmia bicolor - Red-tailed Mason Bee (also =Two-coloured Mason Bee)
A bee recording project is currently being promoted in northern England, where Osmia bicolor does not occur. Unfortunately, if you type in "Red Mason Bee" to the iRecord forms, all you are offered is "Red-tailed Mason Bee", which is leading to records of the wrong species.
In a way it's a shame that these two species have been given names that are so similar, both for the scientific and vernacular names, but we can't do anything about that.
Can we change the behaviour of the look-up so that both species appear when you type in "Red Mason Bee", and preferably so that it shows "Red Mason Bee" first and "Red-tailed Mason Bee" second.
Relates to #669
Answers:
username_1: @username_0 - I'll take a look this morning.
username_1: @username_0 - this problem doesn't occur on dev (i.e. Red Mason Bee is in drop-down list) and sure enough there is an entry for 'Red Mason Bee' in the UK Master List on dev warehouse but not on live warehouse. I don't know why the difference - both lists were updated with the same UKSI stuff last year (around September).
```
select preferred_taxon, taxon, search_name, * from cache_taxa_taxon_lists
where taxon_list_id=15
and preferred_taxon like 'Osmia bi%'
```
```
select * from taxa_taxon_lists ttl
join taxa t on t.id = ttl.taxon_id
where t.taxon like 'Red Mason%'
and ttl.taxon_list_id=15
```
@username_2 - whatever the cause, would it be feasible to workaround this by manually inserting a row an in the cache_taxa_taxon_lists table so the taxon is found when users search on 'Red Mason'? Do you foresee any problems with me doing that?
username_2: Use the web UI to add the missing common name. Then, check the record in the taxa table - the external key should be the preferred name TVK (according to http://nbn-sd-dev.nhm.ac.uk/taxon.php?linkKey=<KEY> it should be NHMSYS0000876509) and the search_code should be the vernacular name's TVK: NBNSYS0000169832.
Don't just add to the cache_taxa_taxon_lists table as this would miss out storing the underlying raw versions of the data.
username_1: Thanks @username_2 - I've done that. I wasn't aware that taxa could be added like this - I should have thought to look, but I probably would have needed your advice re the search_code anyway. @username_0 I think it will become searchable once the associated task on the work_queue has been processed and inserted a corresponding record in the cache_taxa_taxon_lists table. I'll keep an eye on it.
username_2: @username_1 FYI - the way we use search_code is unique to the way that we ensure existing names are recognised when doing the UKSI update.
username_1: @username_0 - 'Red M<NAME>' appears in the drop-down now, though the scientific name is not shown. @username_2 is there an easy way to fix that?
@username_2 - I'm still trying to understand the root of this problem. The 'Red Mason Bee' is represented in the 'Garden Bioblitz Ticklist 2013' (list ID 74) which is a child list of the UK Master List (ID 15). On dev there are entries in the taxa_taxon_lists table for 'Red Mason Bee' against both lists, but on the main warehouse it is only represented in taxa_taxon_lists for the 'Garden Bioblitz Ticklist 2013' (twice with different taxon_meaning_ids) and not against the main list. I guess that's why it doesn't appear in the drop-down list.
Maybe this happened during the last UKSI update, though why it occurred on live and not dev, I do not know. Do you think that we should try to fix this or, given that it is not a widespread problem, just deal with it as we have done so far?
Status: Issue closed
username_0: Thanks @username_2 and @username_1 - closing. |
MicrosoftDocs/azure-docs | 1121134528 | Title: Confusing requirements documentation
Question:
username_0: [Enter feedback here]
The page that points to these docs [ https://www.microsoft.com/en-us/download/details.aspx?id=56506 ] states that the technique described works on Windows and Ubuntu. The specific example on this page says to install Jira on Windows 64.
Nothing in the example seems to require Windows; i.e., the examples are in the Azure AD portal, and in the Jira config file and web app config pages.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d38cad90-6574-38c2-3262-059a9f975064
* Version Independent ID: dd08d5c5-65a4-c2cc-8b51-7a35268dde68
* Content: [Tutorial: Azure Active Directory single sign-on (SSO) integration with JIRA SAML SSO by Microsoft](https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/jiramicrosoft-tutorial)
* Content Source: [articles/active-directory/saas-apps/jiramicrosoft-tutorial.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md)
* Service: **active-directory**
* Sub-service: **saas-app-tutorial**
* GitHub Login: @jeevansd
* Microsoft Alias: **jeedes**
Answers:
username_1: @username_0
Thanks for your feedback! I've assigned this issue to the author who will investigate and update as appropriate. |
mrdoob/three.js | 1015880166 | Title: Orbitcontrols animate to updated target
Question:
username_0: I would like to be able to set a new target, and rather than the camera jumping directly to it, have it lerp to the new position.
Answers:
username_1: Sorry, but this needs to be solved on application level. Please use the [forum](https://discourse.threejs.org/) for more help.
BTW: There are some existing topics about combining `OrbitControls` with animations, e.g.
https://discourse.threejs.org/t/tweening-with-orbitcontrols/17356
https://discourse.threejs.org/t/need-help-with-gsap-camera-and-orbitcontrols-animations/17420
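The usual application-level pattern is to ease `controls.target` toward the new goal on every frame and then call `controls.update()`. A framework-free sketch of just the easing step (object names like `target`/`goal` are placeholders, not three.js API):

```javascript
// Per-frame exponential easing of an {x, y, z} point toward a goal.
// With OrbitControls you would apply this to controls.target inside your
// render loop, then call controls.update() (API names assumed).
function easeToward(point, goal, alpha) {
  point.x += (goal.x - point.x) * alpha;
  point.y += (goal.y - point.y) * alpha;
  point.z += (goal.z - point.z) * alpha;
  return point;
}

const target = { x: 0, y: 0, z: 0 };
const goal = { x: 10, y: 0, z: 0 };
for (let i = 0; i < 3; i++) easeToward(target, goal, 0.5); // 0 → 5 → 7.5 → 8.75
console.log(target.x); // → 8.75
```

Each frame moves the target a fixed fraction of the remaining distance, so the motion decelerates smoothly into the new position instead of jumping.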
Status: Issue closed
username_0: Thank you! |
dotnet/aspnetcore | 1057487439 | Title: Razor should have it’s own SFC
Question:
username_0: The utility of a single file that joins markup, scripts and styling for a component is undeniable. While Razor supports this the tooling to extract, bundle and async/import them when necessary is lacking.
I'd like to be able to build a page or component with all it's related content together and have a build tool that, like Webpack can isolate, bundle and provide them when that page/component is loaded.
Answers:
username_1: @username_0 thanks for contacting us.
We already support CSS isolation and JS isolation through [Scoped CSS](https://docs.microsoft.com/en-us/aspnet/core/blazor/components/css-isolation?view=aspnetcore-6.0) and [JS modules](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-net-6-rc-1/#collocate-javascript-files-with-pages-views-and-components).
We already bundle and auto-import the CSS and for JS it is easy to achieve this behavior in your own component base class. For bundling JS, you can integrate a JS bundling pipeline into the build as described [here](https://devblogs.microsoft.com/dotnet/build-client-web-assets-for-your-razor-class-library/).
Status: Issue closed
|
rkulkar5/OnlineTAT | 708238589 | Title: Alignment was not aligned properly
Question:
username_0: 
Answers:
username_1: @username_0 Can you also tell me where this screen is? Thank you.
username_0: Hi,
While doing a round of testing I found that screen. I will check again for the exact path and get back to you.
Thanks®ards
<NAME>
----- Original message -----
Status: Issue closed
username_2: Management role > Dashboard > View details

username_2: 
username_1: @username_2 @username_0 Can you highlight the alignment issues here?
username_2: 


username_3: Fixed the issue. Please verify once the code is deployed to the cloud.
Status: Issue closed
username_2: Issue fixed
 |
makinacorpus/accessimap-editeur-der | 222426170 | Title: Implement the 'undo' / 'redo' feature
Question:
username_0: Permettre à l'utilisateur de stocker un historique des modifications apportées à un DER.
En ajoutant deux boutons, undo & redo, l'utilisateur pourrait annuler une (à plusieurs idéalement) action effectuée, et la refaire également si aucune autre action n'a été réalisée.
Answers:
username_1: The undo/redo feature works with most functionality. The history branch also includes an ergonomics improvement (menu changes, etc.).
Still to improve:
- When modifications are applied to an image, the changes are not recorded in the history
- maybe other functions I haven't tested?
Overall I think the branch can be merged, as it brings quite a few graphical changes that can be beneficial.
username_2: Undo/redo test performed: OK if you add an object and then want to delete it. But if you apply changes to an object, those are not recorded in the history. Example: I add an object, then change its texture or hatching. Hitting "undo" removes the object outright, as if the last recorded change were the addition of the object rather than the change applied to it.
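The behavior described here is the classic symptom of property edits (texture, hatching) being applied without pushing an entry onto the undo stack, so "undo" falls through to the previous recorded action — the object's creation. A minimal sketch of the invariant (illustrative Python, not the editor's actual code):

```python
# Minimal undo/redo stack sketch. Every user action — including property
# edits — must push an (undo_fn, redo_fn) entry; any edit applied without
# a push makes "undo" revert the wrong, earlier action instead.
class History:
    def __init__(self):
        self._undo, self._redo = [], []

    def push(self, undo_fn, redo_fn):
        self._undo.append((undo_fn, redo_fn))
        self._redo.clear()  # a new action invalidates the redo branch

    def undo(self):
        entry = self._undo.pop()
        entry[0]()                 # run the inverse of the action
        self._redo.append(entry)

    def redo(self):
        entry = self._redo.pop()
        entry[1]()                 # re-apply the action
        self._undo.append(entry)
```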
qdm12/ddns-updater | 802944612 | Title: Help: i could not install on a RaspBerry Pi4 (4Gb model) running RaspBerry PI OS 64 bit
Question:
username_0: <!--
HAVE A CHAT FIRST!
https://github.com/username_1/ddns-updater/discussions
-->
**TLDR**: The container is not running on a Raspberry Pi 4 with Raspberry Pi OS 64-bit.
1. Is this urgent: No
2. DNS provider(s) you use: NOIP.COM
3. Program version:
<!-- See the line at the top of your logs -->
`Running version latest built on 2020-03-13T01:30:06Z (commit d0f678c)`
4. What are you using to run the container: docker run
5. Extra information (optional): I could not use docker-compose; I get an error about the version (3.7).
Logs:
```log
```
Configuration file (**remove your credentials!**):
```json
```
Host OS: PI OS 64 bit
Answers:
username_1: Try pulling latest? The version you're using is quite old. Also you can use an older version of docker-compose by changing its version at the top or just upgrade your docker-compose version.
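For reference, compose file format 3.7 requires docker-compose 1.22.0 or newer, so on an older docker-compose you can lower the declared format instead of upgrading; a minimal sketch matching the image's documented port and volume (service name is a placeholder):

```yaml
version: "3.6"   # format 3.7 needs docker-compose >= 1.22.0
services:
  ddns-updater:
    image: qmcgaw/ddns-updater
    ports:
      - "8000:8000/tcp"
    volumes:
      - ./data:/updater/data
```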
username_0: Hi! thank you for the time you dedicate in replying to me.
I've used this command:
`docker run -d -p 8000:8000/tcp -v "$(pwd)"/data:/updater/data qmcgaw/ddns-updater:latest`
So i think i'm using the latest version.
I've run:
`sudo apt-get upgrade docker-compose`
and the result of the command _docker-compose version_ is the following:
```
docker-compose version 1.21.0, build unknown
docker-py version: 3.4.1
CPython version: 3.7.3
OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
```
I also tried to run the container in interactive mode, and here maybe is the problem:
```
pi@pi-worker-01:~/data $ docker run --name ddns -p 8000:8000/tcp -v "$(pwd)"/data:/updater/data qmcgaw/ddns-updater:latest
=========================================
=========================================
========= DDNS Universal Updater ========
=========================================
=== Made with ❤️ by github.com/username_1 ====
=========================================
Running version latest built on 2021-01-30T20:01:15Z (commit 78c86b0)
🔧 Need help? https://github.com/username_1/ddns-updater/issues/new
💻 Email? <EMAIL>
☕ Slack? Join from the Slack button on Github
💸 Help me? https://github.com/sponsors/username_1
2021-02-07T14:43:40.451Z ERROR cannot write to file "/updater/data/updates.json": open /updater/data/updates.json: permission denied
```
It seems to be a permission problem, but I'm new to Linux and I don't know how to solve this...
If you have any suggestions let me know.
Thank you very much for the time you dedicate in helping me.
Greetings from Italy.
Paolo.
username_2: try something like:
`docker run --user $(id -u):$(id -g) -d -p 8000:8000/tcp -v "$(pwd)"/data:/updater/data qmcgaw/ddns-updater:latest`
you need to set a user for the container that has the same uid as the one that owns your data folder. This is documented under the Setup section of https://hub.docker.com/r/qmcgaw/ddns-updater/
username_0: Tks!
I'll try asap.
Paolo.
username_0: As far as you know, what is the correct syntax for the variables in the config.json file for the NOIP.COM site?
If you follow the [link](https://hub.docker.com/r/qmcgaw/docs/noip.md) indicated, you'll get an error...
Tks.
Paolo.
username_0: ...but isn't DynDNS supported?
`2021-02-07T15:32:25.809Z ERROR provider "DynDNS" is not supported`
Thank for your help...
Paolo.
username_2: The correct id for DynDNS is "dyn". The documentation is here https://github.com/username_1/ddns-updater/blob/master/docs/dyndns.md
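For the NO-IP question above, the config.json entry typically looks like the sketch below — the field names are assumptions based on the project's per-provider docs, so verify them against docs/noip.md for your version:

```json
{
  "settings": [
    {
      "provider": "noip",
      "domain": "example.com",
      "host": "@",
      "username": "your-noip-username",
      "password": "your-noip-password"
    }
  ]
}
```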
username_2: Happy to know you were able to make ddns-updater work Paolonervi. Let's close this issue once https://github.com/username_1/ddns-updater/pull/167 is reviewed and merged
Status: Issue closed
username_1: Thanks @username_2 for helping out!
@username_0 also note that if you run on your host as root (`id -u` being `0`), you should ideally not run the container with `--user $(id -u):$(id -g)` and just change the ownership and permission of the directory & files as described [here](https://github.com/username_1/ddns-updater#setup). |
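The ownership/permission route from the README boils down to something like this sketch (that uid 1000 is the image's default non-root user is an assumption — check the README for your version):

```shell
# Prepare the data directory so the container's user can write to it.
# Assumption: uid 1000 is the image's default user; skip the chown if
# you instead run with --user $(id -u):$(id -g).
data_dir="$(mktemp -d)"            # stand-in for ./data next to your compose file
touch "$data_dir/config.json"
# sudo chown -R 1000 "$data_dir"   # needed when the container keeps its default user
chmod 700 "$data_dir"
chmod 600 "$data_dir/config.json"
echo "mount with: -v $data_dir:/updater/data"
```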
AztechMC/Modern-Industrialization | 1092166613 | Title: coke oven not being an oven
Question:
username_0: the coke oven will not show the Gui to allow me to turn coal (or coal dust) into coke please fix at soonest convenes I'm playing on 1.18 using the benchmark 2 modpack.
Answers:
username_1: All the useful information about using the Coke Oven is in the Guidebook.
Status: Issue closed
|
EzequielB/jar-to-dep | 460835417 | Title: how to use it?
Question:
username_0: Hello, I don't understand how your lib works.
For example, I want to integrate this lib into my node project:
https://mvnrepository.com/artifact/io.github.microutils/kotlin-logging-js/1.6.26
How do I do it?
Answers:
username_1: Hi,
I changed the readme to improve the phrasing.
It's a simple tool to find the deps of a set of jars, mostly used for migrating old projects to Gradle or Maven.
Status: Issue closed
|
adobe/react-spectrum | 790213961 | Title: NumberField docs (Hooks)
Question:
username_0: <!---
Thanks for filing an issue 😄! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
# 🙋 Documentation Request
<!--- Provide a general summary of the documentation request here -->
Write documentation for NumberField hooks.
## 🧢 Your Company/Team
<!--- Which product team is requesting this change? (i.e. Adobe/Photoshop) -->
Answers:
username_1: This is done.
Status: Issue closed
|
zzzprojects/html-agility-pack | 322185798 | Title: cannot scrape text or password fields on this HTML site only: https://account.genndi.com/login
Question:
username_0: This is the XPath used to scrape: `"//input[@type='text'] | //input[@type='password']"`, but it is not working only for the [https://account.genndi.com/login](url) website.
Answers:
username_1: Hello @username_0 ,
Do you think you could provide us a project example/test with this issue?
It will make it easier/faster for my developer to get started investigating it.
_We now always ask for a project since we found out that most issues are missing some essential information or are resolved by the requestor when creating it_
_(Even if the issue seems very easy to reproduce, getting a test project allows us to give faster support and a better experience across all our free libraries)_
Best Regards,
Jonathan
---
**Performance Libraries**
`context.BulkInsert(list, options => options.BatchSize = 1000); `
[Entity Framework Extensions](http://entityframework-extensions.net/) • [Bulk Operations](http://bulk-operations.net/) • [Dapper Plus](http://dapper-plus.net/) • [LinqToSql Plus](http://linqtosql-plus.net/)
**Runtime Evaluation**
`Eval.Execute("x + y", new {x = 1, y = 2}); // return 3`
[C# Eval Function](http://eval-expression.net/) • [SQL Eval Function](http://eval-sql.net/)
username_0: This is the code I wrote to scrape input field names from HTML. It works fine for every other site I checked except [https://account.genndi.com/login](url). Code: `inputs_raw = (doc.DocumentNode.SelectNodes("//input[@type='text'] | //input[@type='password']")).Where(node => !string.IsNullOrEmpty(node.Attributes["name"].Value)).Select(value => value.Attributes["name"].Value).ToList();`
username_1: Hello @username_0 ,
The issue is caused because you try to get the attribute name for every input with type = 'text'
However, they don't have all a name such as:
```html
<input type="text" class="form-control" placeholder="Enter your email here..." id="resetPasswordEmail">
```
This input throws the error.
You can easily fix it by checking if the name attribute exists first before trying to get the value
```csharp
var inputs_raw = (doc.DocumentNode.SelectNodes("//input[@type='text'] | //input[@type='password']"))
.Where(node => node.Attributes["name"] != null && !string.IsNullOrEmpty(node.Attributes["name"].Value))
.Select(value => value.Attributes["name"].Value).ToList();
```
Let me know if that solves your issue.
Best Regards,
Jonathan
username_0: Yes, but in my case there is an attribute called "name" on both inputs, and the scraper can't identify that.
`<input type="text" name="username" class="required form-control" value="" placeholder="Enter your username...">`
`<input name="password" type="password" class="required form-control" style="" placeholder="Enter your password...">`
username_1: I don't understand your last comment,
What doesn't work in the provided solution?
username_0: `var inputs_raw = (doc.DocumentNode.SelectNodes("//input[@type='text'] | //input[@type='password']"))
.Where(node => node.Attributes["name"] != null && !string.IsNullOrEmpty(node.Attributes["name"].Value))
.Select(value => value.Attributes["name"].Value).ToList();`
this code that you send me before also having same problem.It cant scrap from [https://account.genndi.com/login](url) ,**Only from this site**.If you visit that site you will see two inputs to "username" and "password". And those element have name attribute . Why cant i scrap it?.
username_1: It works like a charm here
```csharp
string url = "https://account.genndi.com/login";
var doc = new HtmlWeb().Load(url);
var outer = doc.DocumentNode.OuterHtml;
var inputs_raw = (doc.DocumentNode.SelectNodes("//input[@type='text'] | //input[@type='password']"))
.Where(node => node.Attributes["name"] != null && !string.IsNullOrEmpty(node.Attributes["name"].Value))
.Select(value => value.Attributes["name"].Value).ToList();
```
Please provide me a full code that doesn't work as I did.
Best Regards,
Jonathan
username_1: Hello @username_0 ,
Could you provide us a full project with the error? Again, the code you provided is working fine on our side. Maybe a critical line has been removed from your example?
```csharp
string url = "https://account.genndi.com/login";
var htmlweb = new HtmlWeb();
var doc = htmlweb.Load(url);
var inputs_raw = new List<string>();
try
{
inputs_raw = (doc.DocumentNode.SelectNodes("//input[@type='text'] | //input[@type='password']"))
.Where(node => node.Attributes["name"] != null && !string.IsNullOrEmpty(node.Attributes["name"].Value))
.Select(value => value.Attributes["name"].Value).ToList();
string s = "";
foreach (string x in inputs_raw)
{
s += x;
}
}
catch (Exception ex)
{
}
```
I'm not sure I understand why that's working for us and not for you.
Best Regards,
Jonathan
username_0: I fixed the issue by updating Html Agility Pack, and your solution is working fine now. I much appreciate your support and time on this problem, Jonathan. Thank you very much.
Status: Issue closed
username_1: Great :)
Best Regards,
Jonathan |
websauna/websauna | 157105401 | Title: documentation: secrets.yaml smtp_host vs smtp_server
Question:
username_0: The documentation [suggests](https://github.com/websauna/websauna/blame/4db1fc9803bd1f94112cea1dcdbf2d231f2056b1/docs/source/tutorials/deployment/email.rst#L37) using smtp_host, this results in an error when running the myapp playbook when following the production deployment instructions.
Using smtp_server solves this error.
Status: Issue closed
Answers:
username_1: Fixed in 64fd0970cce8b71d4f7de69f9ca5d507ee3040f3 |
vesoft-inc/nebula-python | 1038361397 | Title: Suggest removing logging.basicConfig
Question:
username_0: 项目在[这里](https://github.com/vesoft-inc/nebula-python/blob/master/nebula2/gclient/net/__init__.py#L27)调用了`logging.basicConfig`,我认为这种方式是不友好的,会使用户试图定义自己的logger对象时无法直接生效。
建议移除。
Answers:
username_1: @username_2 could you kindly review this suggestion? thanks :)
username_2: Hello @username_0, thanks for your feedback. Would you say a module-specific logger may solve this problem as proposed in https://github.com/vesoft-inc/nebula-python/issues/150#issuecomment-928969206?
username_0: If there are no handlers on the root logger, calling `basicConfig` will initialize the logging system for the current process, which means users cannot customize their logging anymore unless they call their own `basicConfig` earlier than importing `nebula2`, or remove the root handlers manually.
For example, if you code like this:
```python
from nebula2.gclient.net import ConnectionPool
import logging
logging.basicConfig(level=logging.DEBUG, format='[%(asctime)s] %(levelname)-8s [%(filename)s:%(lineno)d]:%(message)s')
logger = logging.getLogger(__name__)
logger.debug('debug msg')
```
will not print 'debug msg' in console because the logger level was already set to be `logging.INFO` when you run `from nebula2.gclient.net import ConnectionPool`.
To avoid this, there are two choices:
```python
# option 1, call basicConfig earlier than import nebula2
import logging
logging.basicConfig(level=logging.DEBUG, format='[%(asctime)s] %(levelname)-8s [%(filename)s:%(lineno)d]:%(message)s')
from nebula2.gclient.net import ConnectionPool
logger = logging.getLogger(__name__)
logger.debug('debug msg')
```
or
```python
# option 2, remove all handlers in root logger and reset basicConfig
from nebula2.gclient.net import ConnectionPool
import logging
for handler in logging.root.handlers:
logging.root.removeHandler(handler)
logging.basicConfig(level=logging.DEBUG, format='[%(asctime)s] %(levelname)-8s [%(filename)s:%(lineno)d]:%(message)s')
logger = logging.getLogger(__name__)
logger.debug('debug msg')
```
It's troublesome, right? I'm not sure how strong the need is for you developers to call `basicConfig` in library code, and I think the problem in #150 is similar to the situation here.
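The module-specific logger proposed in the linked comment is the standard library-side pattern: the library attaches only a `NullHandler` to its own named logger and never calls `basicConfig` at import time, leaving the root logger entirely to the application. A minimal sketch (not the actual nebula2 code):

```python
import logging

# Library side: expose a named logger with only a NullHandler attached,
# and never call logging.basicConfig() at import time. Importing the
# library then leaves the root logger untouched, so the application's
# own basicConfig (any level/format) still applies.
logger = logging.getLogger("nebula2")
logger.addHandler(logging.NullHandler())

def connect():
    # Library code logs as usual; output appears only once the
    # application has configured logging itself.
    logger.debug("connecting to graphd")
```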
Status: Issue closed
|
didierrenardub/ub_testing | 427209584 | Title: Blank spaces
Question:
username_0: 1-Proyecto de la falla encontrada: Buscaminas.
2- Version: 2.0
3-Sistema operativo: Windows 10 pro.
4- Gravedad del bug: medium
5- Tasa de reproduccion: 100%
7- Paso a paso: no puede haber espacios en blanco al iniciar el juego, deben aparecer numeros para deducir donde se encuentra la mina.

Answers:
username_1: Not a bug. All minesweeper games have blank spaces.
Status: Issue closed
|