repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
psteitz/mongster | 91694961 | Title: REST API
Question:
username_0: There is currently no way for an external process to start / stop / clear or query a Mongster server. You have to have a reference to the server to do these things. A REST API enabling these things should be provided. |
astropy/specutils | 637922200 | Title: Result from fit_lines drops names of compound model components
Question:
username_0:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'QuantityModel' object is not subscriptable
```
The specific context in which I'm affected by this error is using the model fitting tool implemented in [this PR](https://github.com/spacetelescope/jdaviz/pull/119) to fit any combination of 2 or more models. Also note that fitting a single model rather than a compound model preserves the `name` parameter in the output:
`<QuantityModel Const1D(amplitude=1.61789905, name='C'), input_units=m, return_units=1e-17 erg / (Angstrom cm2 s)>`
Answers:
username_1: `QuantityModel` was never subscriptable, so I'm not sure how it was working for you before?
You can however just access the underlying unitless compound model and subscript that.
<img width="878" alt="Screen Shot 2020-06-16 at 8 19 58 AM" src="https://user-images.githubusercontent.com/4141126/84773389-55fa7580-afaa-11ea-9bfa-68a66c989896.png">
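A minimal sketch of that workaround; the `unitless_model` attribute name is taken from specutils' `QuantityModel` wrapper and should be treated as an assumption if your version differs, and subscripting is done by index here since the reported bug is precisely that component names get dropped:
```python
import astropy.units as u
import numpy as np
from astropy.modeling import models
from specutils import Spectrum1D
from specutils.fitting import fit_lines

# Build a toy spectrum: a Gaussian line on a constant background.
wave = np.linspace(1, 10, 100) * u.um
flux = (3 * np.exp(-0.5 * (wave.value - 5) ** 2) + 1) * u.Jy
spec = Spectrum1D(spectral_axis=wave, flux=flux)

# A named compound model, as in the report above.
model = models.Gaussian1D(3, 5, 1, name='G') + models.Const1D(1, name='C')
fitted = fit_lines(spec, model)          # returns a QuantityModel

# Subscript the underlying unitless compound model instead of the wrapper.
const_part = fitted.unitless_model[1]    # or ['C'], when component names survive
```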
username_0: Well, I was already feeling crazy since there are obviously no recent commits that would have changed this, good to have confirmation that I'm just nuts. Thanks for the tip on grabbing the unitless model for subscripting purposes. Might be worth adding a note about that to the docs somewhere, but I'll go ahead and close this since it's expected behavior, not a bug.
Status: Issue closed
|
fatedier/frp | 185167843 | Title: Client cannot start automatically at boot
Question:
username_0: On the Raspberry Pi platform, a script in either rc.local or init.d fails to start frpc properly. On the server side, rc.local on a CentOS 6 x86 VPS autostarts it successfully. The Raspberry Pi is at home, without guaranteed 24/7 power! Please help!
Answers:
username_1:
```sh
#!/bin/sh
### BEGIN INIT INFO
# Provides:          frpc
# Required-Start:    $local_fs $remote_fs $network
# Required-Stop:     $local_fs $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: frpc
# Description:
#  https://github.com/username_2/frp
#  Save this as frpc in /etc/init.d
#  Register it as a service: update-rc.d frpc defaults
### END INIT INFO
NAME=frpc
DAEMON=/opt/frp/$NAME
CONFIG=/opt/frp/myfrpc.ini
[ -x "$DAEMON" ] || exit 0
case "$1" in
  start)
    echo "Starting $NAME..."
    start-stop-daemon --start --chuid pi --exec $DAEMON --quiet --oknodo --background -- -c $CONFIG || exit 2
    ;;
  stop)
    echo "Stopping $NAME..."
    start-stop-daemon --stop --exec $DAEMON --quiet --oknodo --retry=TERM/30/KILL/5 || exit 2
    ;;
  restart)
    $0 stop && sleep 2 && $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
exit 0
```
This is what I use on my own Pi; take it, you're welcome.
Status: Issue closed
username_3: Prefixing the command with `sudo -u <username>` also works |
api-platform/core | 287744122 | Title: Strange behavior with relation and validation
Question:
username_0: When there is a relation and I POST something, the behavior of API Platform is not always the same.
## Case 1: Missing property
```json
{
    "@context": "\/app_dev.php\/v2\/contexts\/ConstraintViolationList",
    "@type": "ConstraintViolationList",
    "hydra:title": "An error occurred",
    "hydra:description": "organization: This value should not be blank.",
    "violations": [
        {
            "propertyPath": "organization",
            "message": "This value should not be blank."
        }
    ]
}
```
It's perfect :white_check_mark:
## Case 2: The property is here, but malformed (or the ID does not exist anymore):
```json
{
    "@context": "\/app_dev.php\/v2\/contexts\/Error",
    "@type": "hydra:Error",
    "hydra:title": "An error occurred",
    "hydra:description": "Expected IRI or nested document for attribute \"organization\", \"string\" given."
}
```
IMHO, this is wrong, I would expect something like this:
```json
{
    "@context": "\/app_dev.php\/v2\/contexts\/ConstraintViolationList",
    "@type": "ConstraintViolationList",
    "hydra:title": "An error occurred",
    "hydra:description": "organization: This value should not be blank.",
    "violations": [
        {
            "propertyPath": "organization",
            "message": "This value can not be found."
        }
    ]
}
```
Answers:
username_1: It's because the sanity check doesn't occur at the same level. The "case 2" exception is thrown directly by the denormalizer because the provided document is not a valid JSON-LD one. Just as if your JSON document contains a syntax error, or a type error (when you put a string in a property that must contain a number for instance).
As a rule of thumb, the `violations` key is a proprietary extension provided by API Platform and is not part of the Hydra spec (it's valid to add custom properties). There is no guarantee that this key will be present; a client should always fall back to the `hydra:description` property if there are no `violations`.
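A minimal client-side sketch of that fallback logic, assuming a hypothetical endpoint URL and payload (the response shapes are the ones shown above):
```python
import requests

url = "https://example.com/api/some_resource"  # placeholder endpoint
resp = requests.post(url, json={"organization": "not-an-iri"},
                     headers={"Content-Type": "application/ld+json"})

if resp.status_code >= 400:
    error = resp.json()
    violations = error.get("violations")
    if violations:
        # Detailed, per-property messages when available.
        for v in violations:
            print(f"{v['propertyPath']}: {v['message']}")
    else:
        # Fall back to the Hydra description, which is always present.
        print(error.get("hydra:description", "Unknown error"))
```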
username_0: Ah OK. I did not know that.
So I think we can close this issue
Status: Issue closed
|
pooza/makoto | 1156056395 | Title: Command-line tool update
Question:
username_0: <!-- Edit the body of your new issue then click the ✓ "Create Issue" button in the top right of the editor. The first line will be the issue title. Assignees and Labels follow after a blank line. Leave an empty line before beginning the body of the issue. --><issue_closed>
Status: Issue closed |
naser44/1 | 102464866 | Title: I came not knowing from where, but I came
Question:
username_0: <a href="http://ift.tt/1JaTfPY">I came, I do not know from where, but I came</a> |
rstacruz/cheatsheets | 458984132 | Title: GraphQL is in wrong category
Question:
username_0: Now, GraphQL is in the `Database` category, but it is not related to any kind of database.
As we can see in this [link](https://graphql.org), GraphQL is a query language for APIs; in fact it is a specification.
I suggest creating a new category `GraphQL`, because we have several projects related to it.
Answers:
username_1: I want to second this, GraphQL is not a `Database`.
Looking at the current list of categories, I'd suggest a new one called `API`.
username_2: :rofl: good catch! I'll bundle it with some others under API.
Status: Issue closed
|
pdewouters/better-archives-widget | 64739774 | Title: Only english output
Question:
username_0: Is it possible to make it print months in a language other than English?
Answers:
username_1: it is, I will internationalize it when I get a chance
but to be honest it's quite an old plugin
there must be something better
username_1: done
Status: Issue closed
|
AssemblyScript/assemblyscript | 975996890 | Title: why do I return a string or object, but get a number
Question:
username_0: 
Answers:
username_1: Those returned numbers are pointers to the heap. You can either use [the runtime](https://www.assemblyscript.org/status.html#interop-with-js) or [as-bind](https://github.com/torch2424/as-bind) to reconstruct complex structures created in wasm.
username_2: Perhaps also useful: [Exports and imports / On values crossing the boundary](https://www.assemblyscript.org/exports-and-imports.html#on-values-crossing-the-boundary) |
NativeScript/nativescript-cli | 175926886 | Title: Getting an error with the CLI installation
Question:
username_0: Please, can anyone let me know what to do next with that?
Answers:
username_0: Please, can anyone let me know what to do next with that?
username_1: Hey @username_0
Have you restarted your shell as pointed out in the log message? The environment settings have been changed, and in order for the setup to complete you need to restart your command prompt.
username_0: Thank you so much... I resolved the issue
Status: Issue closed
|
pytorch/pytorch | 639550200 | Title: Make Scaling in BatchNorm optional
Question:
username_0: ## 🚀 Feature
Using the learnable scale γ in batch normalization (`weight` in [_BatchNorm](https://pytorch.org/docs/1.1.0/_modules/torch/nn/modules/batchnorm.html)) should be optional.
## Motivation
If batch normalization is used before a piecewise linear function such as ReLU, the learnable scale γ is practically redundant. It is therefore common to set it equal to 1 in such cases. (See, for example, the last paragraph of Section 4 in [https://arxiv.org/abs/1710.05941](https://arxiv.org/abs/1710.05941).)
## Pitch
A boolean flag `scale` in `class _BatchNorm(Module)` which fixes `self.weight` to ones if `False`.
I am implementing this. If you also think this is useful in general, I can create a PR.
Answers:
username_1: Could you elaborate why this is true?
username_0: There is a brief discussion [here](https://datascience.stackexchange.com/questions/22073/why-is-scale-parameter-on-batch-normalization-not-needed-on-relu), which may be summarized as follows:
Consider batch normalization of an input x
<img src="https://latex.codecogs.com/gif.latex?y=\mathrm{BN(x,\gamma,\beta)}=\frac{x-\mathrm{E}[x]}{\sqrt{\mathrm{Var}[x]+\epsilon}}\cdot\gamma+\beta" />
which may as well be expressed as
<img src="https://latex.codecogs.com/gif.latex?y=\gamma\left(\frac{x-\mathrm{E}[x]}{\sqrt{\mathrm{Var}[x]+\epsilon}}+\frac{\beta}{\gamma}\right)=\gamma\cdot\mathrm{BN(x,1,\beta/\gamma)}=\gamma\cdot\hat{x}" />
If this is followed by, say, a ReLU function `z=max(0, y)`, then
<img src="https://latex.codecogs.com/gif.latex?z=\mbox{max}(0,\gamma\hat{x})=\gamma\cdot\mbox{max}(0,\hat{x})" />
If this is followed by a layer with weights `W`, then
<img src="https://latex.codecogs.com/gif.latex?Wz=W\gamma\mbox{max}(0,\hat{x})=\hat{W}\mbox{max}(0,\hat{x})" />
This means that having a learnable scale in a batch normalization layer which is followed by a piecewise linear function and another layer does not add expressiveness to the model, since it can be absorbed in β and W.
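A minimal PyTorch sketch of how this can be approximated today, fixing γ at 1 while keeping β learnable; the proposed `scale` flag itself does not exist yet:
```python
import torch.nn as nn

bn = nn.BatchNorm2d(64, affine=True)

# Fix the scale gamma at 1 and exclude it from training;
# the shift beta remains a learnable parameter.
nn.init.ones_(bn.weight)
bn.weight.requires_grad_(False)
```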
username_1: Oh I see. Thanks for the explanation. It confused me because this redundancy also exists if BN is used after non-linearity (before linear layers).
username_2: Thank you for your suggestion. We would accept a patch that fixes this. |
transloadit/node-sdk | 834491876 | Title: Build error!!!
Question:
username_0: W20210318-14:32:08.970(7)? (STDERR) return Number(process.hrtime.bigint() / 1000000n)
W20210318-14:32:08.970(7)? (STDERR) ^^^^^^^
W20210318-14:32:08.970(7)? (STDERR)
W20210318-14:32:08.971(7)? (STDERR) SyntaxError: Invalid or unexpected token
W20210318-14:32:08.971(7)? (STDERR) at createScript (vm.js:80:10)
Answers:
username_1: Hi there, what version are you on? Did you recently upgrade? Can you share more snippets of how you run and use the SDK? We recently launched v3, breaking a lot of API surface, so perhaps it is related?
username_0: hello @username_1, I have not upgraded (first time), and
my env:
Nodejs: v10.23.3
Meteor: 1.8.1
just import it: import Transloadit from 'transloadit';
username_2: Are you running any transpilation of the code? I think it is struggling with the bigint literal (`1000000n`), but this is supported on node 10, so I'm not sure why it's failing for you unless you're transpiling code.
username_0: dear @username_2, thank you, I downgraded to version 2.0.10
username_2: Ok. Let me know if you need assistance upgrading to v3. There will probably be many nice features and benefits of using v3 in the future. v2 will probably not see any big updates.
username_0: Yeah, for now v2 is enough for me. Thank you for your support!
Status: Issue closed
|
nuxt/content | 642125533 | Title: Timeout error when passing twitter card validator
Question:
username_0: ### Version
@nuxt/content: ^1.3.2
nuxt: ^2.12.X
### Steps to reproduce
1. Go to **https://cards-dev.twitter.com/validator**
2. Type a nuxt application web url
3. Click **preview card** button
### What is Expected?
Card preview is displayed with detected info as possible.
First example with an own application with nuxt

http://www.anhqv-stats.es
https://github.com/username_0/anhqv-stats-front
Second example with a simple autogenerated nuxt application without @nuxt/content

http://nuxt-demo-jmlp.herokuapp.com/
https://github.com/username_0/nuxt-demo-jmlp
### What is actually happening?
Twitter validator is returning a timeout error.
Third example with a nuxt application with @nuxt/content

http://blog.juanmanuellopezpazos.es/
https://github.com/username_0/blog
As this error appears, the twitter card will never work.
**IMPORTANT**
- I also tried to add @nuxt/content to http://nuxt-demo-jmlp.herokuapp.com/ and then the timeout error appeared.
- Googlebot or Facebook crawler can access to previous web applications in all cases. I suppose that Twitter card validator performs the request in different way like others
Answers:
username_1: @username_0 This doesn't seem to be a bug related to @nuxt/content, but to your application.
If however it is really a bug, please provide a reproduction with the source of the problem.
Status: Issue closed
username_0: Hi @username_1 !
I also supposed that it could be an error in my nuxt app. However, I created a new app with the `npx create-nuxt-app` command and I deployed it to Heroku.
You can get this app working here:
- http://nuxt-demo-jmlp.herokuapp.com
- https://github.com/username_0/nuxt-demo-jmlp
For better comprehension I have just deployed another boilerplate app with @nuxt/content configured:
- http://nuxt-demo-jmlp-with-content.herokuapp.com/
- https://github.com/username_0/nuxt-demo-jmlp-with-content
If you try the Twitter card validator with the first web application, you'll see that the Twitter bot could access the page but didn't find any meta tag info.

However, if you try to do the same with the last app, you'll get the mentioned timeout error.

After that, we can guess that this error is not caused by a code error. It is produced simply by adding the @nuxt/content module in the nuxt.config.js file.
I thought about debugging @nuxt/content with messages on the server, as it seems that Twitter card validator requests are not resolved in the @nuxt/content middleware or a similar stage.
I'm trying to load @nuxt/content as a project module to debug the middleware in a production environment. If I find something I'll report it here.
Thank you for your time.
username_1: @username_0 Okay so just adding @nuxt/content as a module seems to cause this error.
I'll reopen this issue and investigate asap. If you find any lead don't mind posting it here!
username_1: ### Version
@nuxt/content: ^1.3.2
nuxt: ^2.12.X
### Environment
node/npm:
- 10.15.3 / 6.4.1
- 12.18.1 / 6.14.5
cloud system: Heroku
### Config
mode: 'universal'
### Steps to reproduce
1. Go to **https://cards-dev.twitter.com/validator**
2. Type a nuxt application web url
3. Click **preview card** button
### What is Expected?
Card preview is displayed with detected info as possible.
First example with an own application with nuxt

http://www.anhqv-stats.es
https://github.com/username_0/anhqv-stats-front
Second example with a simple autogenerated nuxt application without @nuxt/content

http://nuxt-demo-jmlp.herokuapp.com/
https://github.com/username_0/nuxt-demo-jmlp
### What is actually happening?
Twitter validator is returning a timeout error.
Third example with a nuxt application with @nuxt/content

http://blog.juanmanuellopezpazos.es/
https://github.com/username_0/blog
As this error appears, the twitter card will never work.
**IMPORTANT**
- I also tried to add @nuxt/content to http://nuxt-demo-jmlp.herokuapp.com/ and then the timeout error appeared.
- Googlebot or Facebook crawler can access to previous web applications in all cases. I suppose that Twitter card validator performs the request in different way like others |
spelufo/nw-wrap | 130830153 | Title: nw-wrap errors
Question:
username_0: Hi,
I am trying to use your nw-wrap, but everytime I will get these errors.
Can you check?
```
$ node_modules/.bin/nw-wrap
events.js:154
throw er; // Unhandled 'error' event
^
Error: spawn node-webkit ENOENT
at exports._errnoException (util.js:856:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:178:32)
at onErrorNT (internal/child_process.js:344:16)
at nextTickCallbackWith2Args (node.js:474:9)
at process._tickCallback (node.js:388:17)
at Function.Module.runMain (module.js:449:11)
at startup (node.js:139:18)
at node.js:999:3
```
Answers:
username_1: Do you have node-webkit installed? It is now called nwjs, and I don't know if it still has the same issues with stdio. What is likely happening is that your nwjs executable is not called `node-webkit`, as this module expects. You can run `NW_CMD=path/to/nw node_modules/.bin/nw-wrap` to use another executable.
If this module is still useful, and people are still running into the same stdio issues in nwjs as there were in node-webkit, I should probably make `nw` the default, but I don't know, since I haven't used node-webkit/nwjs in a while.
username_2: Hi @username_1, I just started with nwjs for the first time, and have the latest version 0.25.3.
First of all, the stdin issue appears to still be there, which is why I tried nw-wrap. I needed to change the path to the executable as you wrote in your comment.
If possible, it would be much appreciated if you could update nw-wrap to use `nw` by default. |
conda-forge/texlive-core-feedstock | 160554971 | Title: Handle of individual Texlive packages
Question:
username_0: We can:
- install a bundle @username_4 like did [here](https://github.com/username_4/conda-recipes/blob/master/recipes/texlive-selected/make-tarball.sh) (see https://github.com/conda-forge/staged-recipes/pull/518#issuecomment-216530676)
- package manager: https://www.tug.org/texlive/doc/tlmgr.html (see https://github.com/conda-forge/staged-recipes/pull/518#issuecomment-216563521)
Answers:
username_1: IMO it would be good to have certain packages (e.g. whatever things like nbconvert or knitr need per default) installed when the default install method (not sure if that is `texlive-core` or a different package) is used.
username_2: So, TexLive is pretty massive and I would like to steer clear of some of the mistakes made by more traditional package managers of installing the kitchen sink. It would be nice if this package remained very minimal to solve that problem.
Though I do agree with @username_1 that we should have one package (probably a metapackage) to install the minimum usable level of TeX for use with things like `nbconvert` and `knitr`. I just do not think that `texlive-core` should be it. Maybe we could create a new package for that purpose and call it something like `texlive-basic` or similar. Not attached to that name per se, but something that conveys that minimum level of usability.
username_2: Also, interested in the answer to this question. Didn't keep up with the latest changes to the PR. May muck around to see how this works now.
username_2: Regarding that last question, this seems like an interesting [read]( http://tug.org/texlive/distro.html ), which could give us some pointers.
username_2: Seems Slackware [prefers not]( https://slackbuilds.org/slackbuilds/13.37/office/texlive/README.tlpkg ) to use `tlmgr`, but repackages everything and uses their package manager to do that. While that is a sensible decision in some ways, it requires a large amount of bandwidth. It might be nice to have `tlmgr` for people who are uninterested in packaging/maintaining some TeX package.
username_2: We are having some discussion here on TeX Live and how we should use it in conda-forge. I'm copying you guys ( @damianavila @takluyver @willingc @minrk @username_3 @parente @pelson ) as I think we could really benefit from your feedback. The main use cases would be conversion programs like [`nbconvert`]( https://github.com/jupyter/nbconvert ) and, as such, could be included in things like [`docker-stacks`]( https://github.com/jupyter/docker-stacks ). If you have any thoughts on how you would like to see it packaged and how you would like to use it, that would be very useful for informing how we might proceed.
username_2: To answer this question, @username_1, the package manager is disabled by default. The source code does not include this possibility. There is a fair question of whether we want to have another package manager used (and if we want it installed by default).
IMHO we should investigate setting up `tlmgr`, but do so as a separate package (just as is the case with `pip`). This way if users want to use it to get something we didn't package, they have an option to do so. It should also be an explicit step for them to install it. Though we should really caution against using it for any complex packages and instead try to package those at conda-forge.
username_3: At a minimum I'd love to get a baseline test that ensures our use of Tex with nbconvert is handled. Not sure if that means testing notebooks direct or finding out what minimum set of features we need.
username_4: The set of Texlive packages in my [texlive-selected](https://github.com/username_4/conda-recipes/tree/master/recipes/texlive-selected) recipe has enough to run nbconvert — [see this file](https://github.com/username_4/conda-recipes/blob/master/recipes/texlive-selected/make-tarball.sh#L64). Be warned that you need to install a fairly large number of Texlive packages to get an installation that runs at all, so if you want to distribute packages individually, you're going to be asking people to install a really large number of little packages. This is what led me to decide to put them all into one big lump.
username_2: Not sure that there is agreement about breaking them up, but if there is we can still always have a metapackage to handle installation of all of these smaller packages as if they were bundled.
username_5: It would be great if we could use tlmgr to freely install non-conda tex packages completely analogously to how pip can be used to install non-conda python packages.
username_0: I moved all my work flow to tectonic, installed with conda, and it works great for my needs. Thanks @pkg!
username_6: Would it at least be possible to build xelatex as a part of texlive-core, so JupyterLab could export notebooks to PDF via nbconvert? The nbconvert specifically requires xelatex. Thanks.
username_7: @username_6 if you can get that build going i'll be happy to merge it.
username_8: Luckily pull request #18 has now solved the missing ``pdflatex`` and ``latex`` binaries (which it turns out were just missing symlinks).
Hopefully I'll be able to use ``conda install -c conda-forge texlive-core`` in more places, but in others I do still need additional packages via ``tlmgr`` or otherwise.
username_9: @username_4 What happened to your [texlive-selected conda package](https://github.com/username_4/conda-recipes/tree/master/recipes/texlive-selected)?
username_4: @username_9 I developed [tectonic](https://github.com/conda-forge/tectonic-feedstock) :-) |
atkm/avazu-ctr | 373756358 | Title: Expedite FFM file generation
Question:
username_0: Consider splitting up a DataFrame into chunks to manage memory consumption.
Answers:
username_0: With the 'smaller' data set, `len(df_train_site) == 58316`. Using the `inplace=True` flag of the replace function runs into the same memory issue.
username_0: Using pd.Series.map seems promising. See https://stackoverflow.com/questions/42012339 .
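A minimal pandas sketch of the `map`-based encoding hinted at above; the column name and frame are illustrative, not the project's actual code:
```python
import pandas as pd

df = pd.DataFrame({"site_id": ["a", "b", "a", "c"]})

# Build an integer code for each distinct value, then apply it with
# Series.map, which avoids the memory blow-up seen with DataFrame.replace.
codes = {value: i for i, value in enumerate(df["site_id"].unique())}
df["site_id"] = df["site_id"].map(codes)
```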
username_0: Success. The new `encode_features` converts `df_train_small` in 1.5s.
Status: Issue closed
username_0: `ffm_row_generator` is slow because it uses `pd.DataFrame.iterrows` to create an ffm row (i.e. a string of format 'label field:feature:value').
A faster approach is to convert each column to 'i_col:feature:1', then use `pd.DataFrame.to_csv`. A disadvantage of this approach is the increased memory usage, since each of its columns has dtype('O') instead of dtype('int64'). For `df_train_site` for 'small', the memory usage is eight times larger, but it is much faster (4s instead of 2m30s). |
scrapinghub/js2xml | 207588097 | Title: Extend `make_dict` to handle more generic cases
Question:
username_0: I while ago I made this `get_vars(snippet)` helper on top of `js2xml`: https://gist.github.com/username_0/c7ad34cf1f70d114ec64b072581fb1f9
Which is helpful to turn a JS snippet into a python object where you can access JS variables by name and get the parsed values.
Is this a good addition to `js2xml`?
There are open question, like how to transform anonymous functions `var myfunc = function() { var foo = "bar"; }` so that you can access to `foo` value.
Answers:
username_0: [{}, {'a': True}]
```
Probably we could start a new project `js2dict`, which depends on `js2xml` and converts the JS code into a dictionary you can look values up in.
Status: Issue closed
|
LeetCode-Feedback/LeetCode-Feedback | 782753848 | Title: Test case quality - 127. Word Ladder
Question:
username_0: <!--
Note - Any content mention below in `<!-- ->` blocks are just comments
to help you fill-up the issue. It won't be visible in the actual issue after
you click on submit.
-->
#### Your LeetCode username
<!-- Your LeetCode username -->
Garid
#### Category of the bug
- [x] Question
- [ ] Solution
- [ ] Language
#### Description of the bug
<!-- A clear and concise description of what the bug is. -->
I implemented a BFS algorithm without creating an adjacent node map (named **all_combo_dict** in the solution) which is not fast enough to get accepted (time complexity: O(N* N * M)). But when I was trying to figure out the problem I started my BFS from the **endWord** without changing anything else, therefore my solution got accepted.
I think the overall test case quality have be get improved concerning the slow BFS solutions could get accepted by start from the endWord like mine.
#### Code you used for Submit/Run operation
<!--
Please make sure you wrap your code with ``` tags.
Otherwise we may reject your request.
-->
``` python
# 127. Word Ladder
from typing import List

class Solution:
    def ladderLength(self, beginWord: str, endWord: str, wordList: List[str]) -> int:
        if endWord not in wordList: return 0
        # Number of differing characters, capped at 2.
        def cntDiff(x, y):
            d = 0
            for ix, iy in zip(x, y):
                if d == 2: return 2
                if ix != iy:
                    d += 1
            return d
        # BFS starting from endWord.
        cur = [endWord]
        used = {}
        used[endWord] = True
        ans = 1
        while cur:
            tmp = []
            for cur_word in cur:
                if cntDiff(beginWord, cur_word) == 1: return ans + 1
                for word in wordList:
                    if word in used: continue
                    if cntDiff(word, cur_word) != 1: continue
                    tmp.append(word)
                    used[word] = 1
            ans += 1
            cur = tmp
        return 0
[Truncated]
        # BFS starting from beginWord.
        cur = [beginWord]
        used = {}
        used[beginWord] = True
        ans = 1
        while cur:
            tmp = []
            for cur_word in cur:
                if cntDiff(endWord, cur_word) == 1: return ans + 1
                for word in wordList:
                    if word in used: continue
                    if cntDiff(word, cur_word) != 1: continue
                    tmp.append(word)
                    used[word] = 1
            ans += 1
            cur = tmp
        return 0
```
Answers:
username_1: @username_0 To qualify for rewards, can you please provide a test case that will make the first solution fail?
username_0: I created a couple of test cases. When I try the test cases separately, it gets Time Limit Exceeded most of the time. **But** with the 2 test cases together it is guaranteed to exceed the time limit.
The test case was quite big for a comment, so I pasted it [here](https://gist.github.com/username_0/74532745afa30861bb31c0202903792e#file-127-word-ladder-test-txt).
You can find the test case generator used to create it right next to it.
username_1: Hi @username_0
Thank you. I've relayed this issue to our team to investigate.
username_1: Hi @username_0
Thank you for your time. The team reviewed your report and noticed the problem was missing constraints and validations. After adding them, the test case you contributed became invalid, so we didn't add it. We appreciate your support!
Status: Issue closed
|
blitz450/nodejs | 670188895 | Title: Create a contact us page and setup it's route
Question:
username_0: ## Description
Create a `views/contact.ejs` file to contain proper HTML structure and dummy contact page contents. The contact page should be rendered on `/contact` route.
The about page must include:
- Navigation links to `/`, `/about`, `/signup`, `/login` at the top.
- `h1` Heading & page title as `Contact us`.
- Dummy contact us text (3-4 lines) below the heading.
Styling the page is optional.
Refer to [this](https://learn.co/lessons/using-ejs-in-express) for using EJS templates in Express.
Answers:
username_1: I'll solve this issue.
username_0: Sure, go ahead.
username_2: please assign this issue to me
username_3: Handled in #24 . Closing
Status: Issue closed
|
jens-maus/RaspberryMatic | 441072142 | Title: Error resolving DNS names with version 3.45.7.20190504
Question:
username_0: <!---
ATTENTION:
===================
PLEASE DO NOT DELETE THIS TEMPLATE (!!) BUT COMPLETE/FILL IN
THE RESPECTIVE SECTIONS WITH YOUR OWN INFORMATION.
PLEASE DO NOT POST REQUESTS FOR HELP WITH OPERATING/USING
RASPBERRYMATIC HERE; USE THE RASPBERRYMATIC FORUM INSTEAD:
https://homematic-forum.de/forum/viewforum.php?f=65
NOTE:
=======
- GitHub is NOT a discussion forum -> use the RaspberryMatic forum
- Please ONLY file reports that point to a direct error/bug in
  RaspberryMatic or that are comprehensible feature requests
  (ideally discuss them in the forum first)
- Please ONLY report errors that have already been reproduced
  with the current RaspberryMatic version.
- When in doubt, first discuss the matter with other users in the
  RaspberryMatic forum and only open a ticket here once there is consensus.
--->
**Describe the bug**
Since RM version 3.45.7.20190504, resolving DNS names in my home network no longer works for me. I use the add-ons HM-pdetect and System-Update. HM-pdetect was configured to communicate with the FRITZ! hostnames fritz.box and fritz.repeater. This connection was no longer possible after the update to RM 3.45.7.20190504. After entering the concrete IP addresses, it works again.
When opening the System-Update add-on, I get the error message "BAD GATEWAY".
Back in the RaspberryMatic control panel under additional software, all add-ons merely show "n/a" under "available version", presumably also due to incorrect name resolution.
**System information (please complete the following information):**
- Version RaspberryMatic 3.45.7.20190504 on Raspberry Pi 3 B
Answers:
username_1: Please log in via SSH and post the output of the following commands here:
```
cat /etc/config/netconfig
```
```
cat /etc/resolv.conf
```
```
route -n
```
```
ifconfig -a
```
```
ls -la /etc/config/wpa_supplicant.conf
```
username_0: # cat /etc/config/netconfig
HOSTNAME=homematic-ccu2
MODE=DHCP
CURRENT_IP=192.168.0.14
CURRENT_NETMASK=255.255.255.0
CURRENT_GATEWAY=192.168.0.1
CURRENT_NAMESERVER1=192.168.0.1
CURRENT_NAMESERVER2=0.0.0.0
IP=192.168.2.120
NETMASK=255.255.254.0
GATEWAY=192.168.2.1
NAMESERVER1=192.168.2.1
NAMESERVER2=0.0.0.0
CRYPT=0
------------------------------------------------------------------------------
# cat /etc/resolv.conf
cat: can't open '/etc/resolv.conf': No such file or directory
------------------------------------------------------------------------------
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 10 0 0 eth0
0.0.0.0 0.0.0.0 0.0.0.0 U 1003 0 0 wlan0
10.200.0.0 10.206.0.1 255.255.0.0 UG 0 0 0 tun0
10.206.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
10.251.0.0 10.206.0.1 255.255.0.0 UG 0 0 0 tun0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 wlan0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
------------------------------------------------------------------------------
# ifconfig -a
eth0 Link encap:Ethernet HWaddr B8:27:EB:52:0A:0F
inet addr:192.168.0.14 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::ba27:ebff:fe52:a0f/64 Scope:Link
inet6 addr: 2003:e9:f729:1700:ba27:ebff:fe52:a0f/64 Scope:Global
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:73203 errors:0 dropped:17539 overruns:0 frame:0
TX packets:50177 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9721984 (9.2 MiB) TX bytes:12460007 (11.8 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:171011 errors:0 dropped:0 overruns:0 frame:0
TX packets:171011 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:35091143 (33.4 MiB) TX bytes:35091143 (33.4 MiB)
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:10.206.136.93 P-t-P:10.206.0.1 Mask:255.255.255.255
inet6 addr: fe80::7406:b303:11ff:41df/64 Scope:Link
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:38163 errors:0 dropped:0 overruns:0 frame:0
TX packets:38777 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:5820267 (5.5 MiB) TX bytes:14295449 (13.6 MiB)
wlan0 Link encap:Ethernet HWaddr B8:27:EB:07:5F:5A
inet6 addr: fe80::ba27:ebff:fe07:5f5a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:2469 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:679072 (663.1 KiB)
wlan0:avahi Link encap:Ethernet HWaddr B8:27:EB:07:5F:5A
inet addr:169.254.6.81 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
------------------------------------------------------------------------------
# ls -la /etc/config/wpa_supplicant.conf
-rw-r--r-- 1 root root 249 Mar 20 08:31 /etc/config/wpa_supplicant.conf
username_1: OK, then the latter is the problem. If you are not explicitly using WLAN, please delete the file `/etc/config/wpa_supplicant.conf` with the following command:
```
rm -f /etc/config/wpa_supplicant.conf
```
Then reboot, and it should work again. The error will be fixed in the next upcoming version.
username_0: That worked!
Many thanks for the super-fast support!!
Status: Issue closed
username_1: This is a bug tracker. Issues are only closed by the maintainer once the original bug has been fixed :)
Status: Issue closed
|
dotnet/roslyn | 523032277 | Title: Filters in CodeLens results for references
Question:
username_0: _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/idea/802616/filters-in-codelens-results-for-references.html)._
---
Ability to filter the list of references. The existing collapse all, refresh, and dock options are great, but in a large code base, the ability to also filter is necessary.
For example, I am currently working in a project that has multiple interfaces, and these interfaces are implemented by many classes. CodeLens is great to see all references, but it is still really hard and time consuming to find what I'm looking for.
Below is an example of CodeLens telling me that there are 99+ references for this interface method; imagine being able to filter and group these results. It could be a feature that only works when you dock the CodeLens results, and that would be fine.
[Image: 101874-codelens.png](https://developercommunityapi.westus.cloudapp.azure.com/storage/attachments/101874-codelens.png)
---
### Original Comments
#### Visual Studio Feedback System on 11/1/2019, 00:21 AM:
Thank you for taking the time to provide your suggestion. We will do some preliminary checks to make sure we can proceed further. We'll provide an update once the issue has been triaged by the product team.
Answers:
username_1: [[Copied from VS Feedback Hub marked as duplicate as this issue.](https://developercommunity.visualstudio.com/content/idea/827039/add-codelense-reference-filters.html)]
I love the CodeLens references, but often there are so many references I need to collapse all and dig through all of them.
The particular case I’m facing often is that I’m not looking for references of a properties getter, I only want to see the use of setters to figure out how the property is set.
I’d like to be able to filter on Get vs. Set operation for properties.
username_2: @username_3 Is this even possible using the available CodeLens APIs?
username_3: Not at the moment, but we own References UI so please route the bug to the editor, @username_4
username_4: @username_3 Have moved the DC feedback to your team.
username_5: I would love this feature. I specifically want to be able to exclude all the references from my unit tests. Sometimes I'll scan through the source looking for dead code with zero references, but if the method has a unit test it will always have references.
username_3: Please vote on https://developercommunity.visualstudio.com/content/idea/802616/filters-in-codelens-results-for-references.html, @username_5. Thanks! |
BUGS-NYU/bugs-nyu.github.io | 400041787 | Title: Additional Pages for generating PR's
Question:
username_0: I think we should add a page that generates PR's using query parameters (or maybe edit the connect page?). So, for example, the form might have 3 fields, *title*, *date*, and *content*. Then, when the form submission button is pressed, a javascript function `onsubmit()` is called which takes the data and passes it as a query parameter to the PR link. So this form field:
```
title: My Title
date: 2019-01-16
body: my text here
```
is sent through the URL as `?title=My%20Title&body=---%0D%0Atitle%3A+My+Title%0D%0A----%0D%0Amy+text+here` or at least something like it. It might take a while to iron out, but I think that it'd be nice to have the formatting of most of the stuff handled by Javascript.
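A minimal Python sketch of the query-string encoding step described above; the parameter names mirror the example, while the base URL is a placeholder:
```python
from urllib.parse import urlencode

title = "My Title"
body = "---\ntitle: My Title\n---\nmy text here"

# urlencode percent-escapes the values, producing a string like
# title=My+Title&body=---%0Atitle%3A+My+Title...
query = urlencode({"title": title, "body": body})
url = f"https://example.com/new-pr?{query}"  # placeholder endpoint
```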
Answers:
username_1: I believe this can be done via the GitHub API. The user will probably need to have a GitHub account or make one though
Status: Issue closed
username_1: I think this might not be as relevant anymore as we want to encourage interacting directly with the codebase |
jMonkeyEngine/jmonkeyengine | 881329217 | Title: TestManyLocators crashes due to dead URLs
Question:
username_0: Similar to #982, but this time caused by a reorg of our wiki. Seen in v3.4.0-beta3:
```text
May 08, 2021 2:16:30 PM com.jme3.asset.plugins.UrlLocator locate
WARNING: Error while locating Interface/Fonts/Default.fnt
java.io.IOException: Server returned HTTP response code: 403 for URL: http://wiki.jmonkeyengine.org/jme3/beginnerInterface/Fonts/Default.fnt
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at com.jme3.asset.plugins.UrlAssetInfo.create(UrlAssetInfo.java:58)
at com.jme3.asset.plugins.UrlLocator.locate(UrlLocator.java:79)
at com.jme3.asset.ImplHandler.tryLocate(ImplHandler.java:173)
at com.jme3.asset.DesktopAssetManager.locateAsset(DesktopAssetManager.java:203)
at jme3test.asset.TestManyLocators.main(TestManyLocators.java:59)
May 08, 2021 2:16:32 PM com.jme3.asset.plugins.UrlLocator locate
WARNING: Error while locating casaamarela.jpg
java.io.IOException: Server returned HTTP response code: 403 for URL: http://wiki.jmonkeyengine.org/jme3/beginnercasaamarela.jpg
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at com.jme3.asset.plugins.UrlAssetInfo.create(UrlAssetInfo.java:58)
at com.jme3.asset.plugins.UrlLocator.locate(UrlLocator.java:79)
at com.jme3.asset.ImplHandler.tryLocate(ImplHandler.java:173)
at com.jme3.asset.DesktopAssetManager.locateAsset(DesktopAssetManager.java:203)
at jme3test.asset.TestManyLocators.main(TestManyLocators.java:62)
May 08, 2021 2:16:32 PM com.jme3.asset.plugins.UrlLocator locate
WARNING: Error while locating glasstile2.png
java.io.IOException: Server returned HTTP response code: 403 for URL: http://wiki.jmonkeyengine.org/jme3/beginnerglasstile2.png
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at com.jme3.asset.plugins.UrlAssetInfo.create(UrlAssetInfo.java:58)
at com.jme3.asset.plugins.UrlLocator.locate(UrlLocator.java:79)
at com.jme3.asset.ImplHandler.tryLocate(ImplHandler.java:173)
at com.jme3.asset.DesktopAssetManager.locateAsset(DesktopAssetManager.java:203)
at jme3test.asset.TestManyLocators.main(TestManyLocators.java:65)
May 08, 2021 2:16:32 PM com.jme3.asset.plugins.UrlLocator locate
WARNING: Error while locating beginner-physics.png
java.io.IOException: Server returned HTTP response code: 403 for URL: http://wiki.jmonkeyengine.org/jme3/beginnerbeginner-physics.png
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at com.jme3.asset.plugins.UrlAssetInfo.create(UrlAssetInfo.java:58)
at com.jme3.asset.plugins.UrlLocator.locate(UrlLocator.java:79)
at com.jme3.asset.ImplHandler.tryLocate(ImplHandler.java:173)
at com.jme3.asset.DesktopAssetManager.locateAsset(DesktopAssetManager.java:203)
at jme3test.asset.TestManyLocators.main(TestManyLocators.java:68)
May 08, 2021 2:16:32 PM com.jme3.asset.DesktopAssetManager locateAsset
WARNING: Cannot locate resource: beginner-physics.png (Flipped)
Found classpath font: com.jme3.asset.plugins.UrlAssetInfo[key=Interface/Fonts/Default.fnt]
Found zip image: com.jme3.asset.plugins.ZipLocator$JarAssetInfo[key=casaamarela.jpg]
Found online zip image: com.jme3.asset.plugins.HttpZipLocator$1[key=glasstile2.png]
Failed to load from HTTP
```
Looks like an easy fix!<issue_closed>
Status: Issue closed |
efcore/EFCore.NamingConventions | 598353530 | Title: Not making any changes to table names, but does columns
Question:
username_0: 
Answers:
username_0: 
username_1: Duplicate of #2
username_1: This is a problem specific to ASP.NET Identity. Note that you can use the workaround detailed here: https://github.com/efcore/EFCore.NamingConventions/issues/2#issuecomment-612651161
Status: Issue closed
|
weecology/portalr | 287231697 | Title: Improve get data options
Question:
username_0: esp. download location
Answers:
username_1: `download_observations` now gets the latest release from PortalData repo by default.
If there's nothing else, then I think this is resolved by PR #77
username_0: Does `observations_are_new` get used anywhere anymore? Did we just lose track of that at some point?
username_1: I use it in the vignette, but it's not used in portalr or portalPredictions...
username_1: [per discussion with @username_0, @emchristensen, @sdtaylor]
`load_data` functionality should be expanded:
* if repo (get data from repo)
* if download (use `download_observations`)
username_1: Somehow check for release version and/or newer copy of local data.
Status: Issue closed
|
skyline75489/SimpleDNS | 133453747 | Title: Occasional failures to resolve
Question:
username_0: ```
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] (UDP Port 47931 Closed)
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 1, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 1, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 1, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 1, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('finance.sina.com.cn', 1, 1)]
2016-02-14 01:48:22+0800 [-] (UDP Port 38608 Closed)
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('d4.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('d4.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('d4.sina.com.cn', 28, 1)]
2016-02-14 01:48:22+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('d4.sina.com.cn', 28, 1)]
[Truncated]
2016-02-14 01:48:23+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('open.weather.sina.com.cn', 28, 1)]
2016-02-14 01:48:23+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('open.weather.sina.com.cn', 28, 1)]
2016-02-14 01:48:23+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('open.weather.sina.com.cn', 28, 1)]
2016-02-14 01:48:23+0800 [-] (UDP Port 12016 Closed)
2016-02-14 01:48:23+0800 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.TimeoutError: [Query('open.weather.sina.com.cn', 1, 1)]
```
Usually this doesn't require me to do anything; it recovers after a little while.<issue_closed>
Status: Issue closed |
git4school/git4school-visu | 729621878 | Title: Clean up modals
Question:
username_0: **Description**
Remove all traces of the classic bootstrap modals to use only the `ng-bootstrap` modals
**Hints**
We should see if it is possible to create our own modal component to share the common code and make it even easier to use. If so, it may be interesting to put it in a `shared` module<issue_closed>
Status: Issue closed |
thumbor/thumbor | 718450069 | Title: Thumbor remove EXIF data , images displayed rotated
Question:
username_0: I'm applying thumbor filters on s3 stored images.
Original : https://d37aahii0nv5jf.cloudfront.net/filters:strip_exif()/0Zoxha2t9S1HR4oYGmSa/IMG_9292.JPG
filter url file : https://d37aahii0nv5jf.cloudfront.net/fit-in/1000x0/filters:strip_exif()/0Zoxha2t9S1HR4oYGmSa/IMG_9292.JPG
The image is displayed rotated on our website.
Original:

on Website:

What should I do?
@heynemann, @guilhermef, can you please help me?
Answers:
username_1: The link to the original is already rotated and already missing EXIF, even if I remove the filters, so I can't reproduce.
But you could try setting `RESPECT_ORIENTATION = True` in config.
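For reference, thumbor's configuration file is itself a Python module, so the suggestion amounts to a one-line entry; the file path below is the conventional one and may differ per deployment:
```python
# thumbor.conf
# Honor the EXIF Orientation tag when reading images, so output that has
# its EXIF stripped is already rotated the right way.
RESPECT_ORIENTATION = True
```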
username_0: @username_1 I set it in the environment variable; it didn't help.

username_1: @username_0 Could you share original image? |
dotnet/docs | 712666757 | Title: sdk-and-target-framework-change needs more clarification
Question:
username_0: The documentation is clear about these points:
- For .NET 5.0, we need to use `Microsoft.NET.Sdk`.
- For pre-.NET 5.0, we need to use `Microsoft.NET.Sdk.WindowsDesktop`.
But it isn't clear what to do when multiple TFMs are targeted. For example:
```xml
<TargetFrameworks>net5.0;netcoreapp3.1;net472</TargetFrameworks>
```
---
#### Issue metadata
* Issue type: breaking-change
Answers:
username_1: @username_2 Can you comment on this one?
username_2: I removed the breaking-change metadata. That issue type is for reporting a breaking change between versions, not general questions between versions 😄 Really, this should be filed on the article where you discovered the discrepancy.
There is an email out internally to figure this out. We'll get back to you 😃
username_0: @username_2 @username_1 I think I found the answer.
https://github.com/dotnet/sdk/issues/13427
The warning was **incorrectly** generated for multi-targeted projects and this issue was fixed.
I suggest updating the doc to clearly emphasize that this breaking change doesn't affect multi-targeting.
username_2: @username_0 update which doc?
username_0: @username_2 https://docs.microsoft.com/en-us/dotnet/core/compatibility/3.1-5.0#winforms-and-wpf-apps-use-microsoftnetsdk
The include markdown file in repo is: https://github.com/dotnet/docs/blob/master/includes/core-changes/windowsforms/5.0/sdk-and-target-framework-change.md
A small note similar to: "If multiple frameworks are targeted and one of them is prior to .NET 5.0, you don't need to take any actions." would make things more clear in my opinion.
username_2: Since this warning was removed in this scenario, I think adding a note just adds confusion. The warning was created to get people out of targeting the Desktop SDK directly. The note made sense when the warning was going to be generated, but now it's not generated.
username_0: Sometimes I visit the breaking changes docs without actually facing some error or warning, just to read. In that case, the note makes things clearer.
But your opinion is reasonable too, and it's more common that breaking changes docs visitors are looking for a solution to a warning/error they face. So I'll close.
Status: Issue closed
|
WarEmu/WarBugs | 127106198 | Title: History and Lore Unlock: Nomads
Question:
username_0: Going by old wikia notes, I think this is supposed to be unlocked either by being close to the Destro prologue portal or by interacting with the screamer @ 30,10. Neither method produces this unlock.
Answers:
username_1: I'll see which Screamer I'll be able to add this to at the Norsca start point.
Status: Issue closed
username_3: Reopening because https://github.com/WarEmu/WarBugs/issues/8416
username_4: The Screamer behind the tent will trigger the unlock 'Nomads'
Status: Issue closed
|
TensorSpeech/TensorFlowASR | 873037222 | Title: conformer MLS-pt
Question:
username_0: Hi, I'm trying to train the conformer model with the MLS Pt dataset, but I got some metrics with a strange order of magnitude.
greedy_wer: 0.9748954176902771
greedy_cer: 0.9024566411972046
beamsearch_wer: 1.0
beamsearch_cer: 1.0
Due to its loss curve, I believe that the model is not learning, but when I put just one example for the test, it predicts correctly.

GROUNDTRUTH
foice cortou o pescoço do animal a cabeça ficou segura na carne da vítima e das artérias rotas jorrava o sangue não havia mais cães a matar o terreiro ficara alastrado de corpos decepados mutilados de membros esparsos os homens maltratados doloridos deitavam
TRANSCRIPT
foice cortou o pescoço do animal a cabeça ficou segura na carne da vítima e das artérias rotas jorrava o sangue não havia mais cães a matar o terreiro ficara alastrado de corpos decepados mutilados de membros esparsos os homens maltratados doloridos deitavam
When I run the whole test dataset through evaluation, the phrases are correct, but a GROUNDTRUTH phrase corresponds to a TRANSCRIPT in another position.
Answers:
username_1: @username_0 the test result file is a tsv with headers: PATH DURATION GROUNDTRUTH GREEDY BEAMSEARCH
So your result will be in GREEDY and BEAMSEARCH, the GROUNDTRUTH is the label to be compared to calculate metrics, so the GROUNDTRUTH in the result and TRANSCRIPT in the dataset are the same, so you may be mistaken 😄
Status: Issue closed
username_0: oh ok, I found my mistake.
thanks |
h3abionet/h3agwas | 995866713 | Title: AWS Batch issue
Question:
username_0: If sex information isn't available, workflow crases
Thie underlying issue is that Batch demands that file channels must contain files. You can't output a value onto a channel and interpret it as a file. We do this in a few places to deal with optional input |
ACRA/acra | 529862288 | Title: ReportAdministrator not registered using AutoService Annotation
Question:
username_0: I have a few questions related to my ReportingAdministrator.
1. I tried to add ReportAdministrator on my library module. I register it using annotation but the class won't load.

What should I do?
If I use ServiceLoader should I need starting the service manually? ServiceLoader<ReportingAdministrator> on my application?
2. Can I use the application id as a filter to get the exception in my library?
Answers:
username_0: I think I already got the solution by registering the service loader manually

And I got the answer for the package name as well. I can get all the class names and iterate over them once the crash has happened

case closed
Status: Issue closed
|
dotnet/project-system | 253450390 | Title: Long bin paths even for single output type
Question:
username_0: Compared to old-style .NET Framework csproj, bin output paths are more involved/nested than they used to be. For example, the default used to be just
bin\Debug
bin\Release
(folders per configuration only.)
For secondary platforms, they'd add one more level, such as:
bin\Debug\x64
bin\Debug\x86
bin\Release\x64
bin\Release\x86
With AnyCPU staying in the higher-level directory.
This organization was simple and nice.
With this csproj:
```
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net47</TargetFramework>
  </PropertyGroup>
</Project>
```
There's a minimum extra level of nesting:
bin\Debug\net47
And with this one:
```
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net47</TargetFramework>
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  </PropertyGroup>
</Project>
```
it gets even worse:
bin\Debug\net47\win-x64
In both cases, there's just one value in use (it's a single TargetFramework, not multiple TargetFramework**s** and a single RuntimeIdentifier, not multiple RuntimeIdentifier**s**). For these cases, could the extra levels of bin subdirectory nesting be removed?
Answers:
username_1: This issue was moved to dotnet/sdk#1547
Status: Issue closed
|
argoproj/argo-cd | 772911353 | Title: User-definable app parameter files in Git
Question:
username_0: # Summary
With #4084 and #5003, application parameter overrides in Git were introduced. This feature should be enhanced to enable user-specified override files.
# Motivation
The currently implemented approach follows a convention-over-configuration approach, and allows for zero-configuration setups by just dropping the appropriate files (`.argocd-source.yaml` and `.argocd-source-<applicationName>`) into the Git repository at application's `.source.spec.path` directory.
While this is good enough for most use cases, it could result in merge conflicts if those files are being updated by more than one external tool, or by humans and external tools. In order to allow external tools to take care of such parameter overrides independently, a configurable list of (additional) files to consider for parameter overriding should be provided in the application's spec.
This will easily allow users managing their own override files, while tools such as argocd-image-updater can safely push their changes to Git as well to different files, without risk of merge conflicts.
# Proposal
We should introduce a new field in Application's `.spec.source`: `overrideFiles`, which contains a list of file names within `.spec.source.path`, that contain parameter overrides same as `.argocd-source.yaml`. Foundation for handling multiple files has been laid in #5003, so it should be rather easy to add more files to be processed.
Generally, the previously implemented convention should not be affected. The order of applying parameter overrides from files should stay the same, i.e.
* `.argocd-source.yaml` will be applied first,
* `.argocd-source-<appName>` will be applied second,
* files specified in `.spec.source.overrideFiles` will be applied third-to-Nth, in their order of appearance in the list
If any of the files in the list do not exist in the repository, they will be silently ignored (maybe a log statement - level tbd - could indicate that a file was specified but was not found).
We need to make sure that those files will not be rendered as part of any application's manifests. My suggestion is therefore:
* We ignore all files matching pattern `.argocd-*.yaml` when rendering manifests for any given application,
* File names in `.spec.source.overrideFiles` are treated as partial names, so for example user specifies:
```yaml
spec:
source:
overrideFiles:
- override-one
- override-two
```
This will result in Argo CD looking for `.argocd-override-one.yaml` and `.argocd-override-two.yaml`, which match the exclusion pattern defined above.
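For illustration, such an override file would carry the same schema as `.argocd-source.yaml` does today; a sketch with hypothetical values:
```yaml
# .argocd-override-one.yaml (same schema as .argocd-source.yaml; values illustrative)
helm:
  parameters:
    - name: image.tag
      value: v1.2.3
kustomize:
  images:
    - gcr.io/example/app:v1.2.3
```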
Answers:
username_1: Rather than splitting the overrides file into multiple files just so the tooling can blindly modify the files it "owns" to avoid Git merge conflicts, shouldn't the tooling be a little smarter and understand how to do JSON merge patching against the standard override file (e.g. similar to the smartness of kubectl apply, but to a lesser degree)?
dianna-ai/dianna | 915975062 | Title: Create readmes in the new data file structure
Question:
username_0: For every dataset, create readmes in the new data file structure that introduce each file/script/notebook briefly.
- MNIST
- Leafs
- Geometric
- Movie reviews
Answers:
username_1: I created a branch for this issue (69-readmes) and added readmes for Geometric dataset preparation, and leafs/movie reviews model generation.
username_2: Sorry, I forgot what we agreed at the last stand up- adding to the same branch? I guess so.
username_1: I think we decided that small PRs are nice, so perhaps a new branch is better. Although I think it doesn't matter too much in this case. I already opened a PR for the changes I made, but you could still add change to that branch, of course.
username_2: OK, I'll make a new branch for my small addition.
username_2: Maybe we should add the Zenodo DOIs after releasing our datasets there?
username_0: Readme for geometric model generation in PR #95
username_0: Readme for mnist model generation in PR #96
Status: Issue closed
|
Masuzu/SarasaBot | 313624596 | Title: bot not clicking ok after summon select
Question:
username_0: My Sarasa bot is not clicking OK on this page. Any fix for this?
http://prntscr.com/j466nv
Answers:
username_1: Hello,
You may want to increase the parameter `MaxResponseDelayInMs` or else, if the party selection page does not appear "fast enough", Sarasa will refresh the summon selection page.
Status: Issue closed
|
sandermangel/rkvatfallback | 158751417 | Title: VIES now blocks IPs
Question:
username_0: I get the following response:
```html
<h2>VIES VAT number validation</h2>
<fieldset>
<table id="vatResponseFormTable">
<tr>
<td class="labelLeft" colspan="3"><b><span class="invalidStyle">Your request for VAT validation has not been processed. Your IP address is currently blocked, please contact <EMAIL> for further information.</span></b></td>
</tr>
```
.. etc.
It appears our custom VIES call through curl now gets blocked by VIES.
Status: Issue closed |
SAP/fundamental-ngx | 508041523 | Title: Inline help overflow
Question:
username_0: #### Is this a bug, enhancement, or feature request?
Bug
#### Briefly describe your proposal.
In the case of very long text, the Inline Help overflows the window.
- The user should be able to "control" the width of the component
- When used in a modal, the popover should detect and react to the boundaries of the modal body, not the window
- The user should be able to overwrite the styling of the control element (SCC needs an inline help that has a transparent background and blue ? icon)
#### Which versions of Angular and Fundamental NGX are affected? (If this is a feature request, use current version.)
Current
Status: Issue closed |
galaxyproject/tools-iuc | 323354277 | Title: Error when running Freebayes: (galaxy-18.01)
Question:
username_0: ```
Fatal error: Exit code 2 ()
grep: ./vcf_output/part_.vcf: No such file or directory
```
The `samtools view -H b_0.bam ` seems to be returning nothing.
Answers:
username_1: Could you explain this in a little more depth? What was the input file, and what settings did you choose?
username_0: from testtoolshed `https://testtoolshed.g2.bx.psu.edu/view/devteam/freebayes/2fb16f415220`
username_1: Can you reproduce this with a small test file? What file are you referring to with `b_0.bam`?
newrelic/newrelic-logenricher-dotnet | 774062516 | Title: Is there a way to use NewRelicJsonLayout from NLog.config xml file?
Question:
username_0: Documentation [here](https://docs.newrelic.com/docs/logs/enable-log-management-new-relic/logs-context-net/net-configure-nlog) only mentions how to use it from C# config.
I tried a few configs, like the one below, but it didn't work:
```xml
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
autoReload="true">
<targets>
<target type="AsyncWrapper" name="NewRelicFileTarget" queueLimit="5000" overflowAction="Grow">
<target xsi:type="File" name="f1" fileName="portal.newrelic.json" layout="${NewRelicJsonLayout}" />
</target>
</targets>
<rules>
<logger name="*" minlevel="Trace" writeTo="NewRelicFileTarget" enabled="true" />
</rules>
</nlog>
```
Answers:
username_0: I figured out it should be used like this:
```
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
autoReload="true">
<targets>
<target name="NewRelicFileTarget" xsi:type="File" fileName="log.newrelic.json" />
<layout xsi:type="newrelic-jsonlayout" />
</target>
</targets>
<rules>
<logger name="*" minlevel="Trace" writeTo="NewRelicFileTarget" enabled="true" />
</rules>
</nlog>
```
Status: Issue closed
username_1: Thanks for bringing this to our attention, and thanks for letting us know that you'd solved the issue for yourself. We may add your example to our documentation to help other developers. |
keep-run/Blog | 488161736 | Title: Working with a dev machine
Question:
username_0: - Log in to the dev machine: `ssh username@ip`
- Show the current user: `who am i`
- Create a user: `adduser`
- Delete a user: `userdel name` deletes only the user, not the user's home directory; to remove the home directory as well, run `deluser --remove-home name`
- Switch user: `su username`
- Change the hostname: `sudo hostnamectl set-hostname newhostname`
google/ExoPlayer | 648872329 | Title: AspectRatioFrameLayout.RESIZE_MODE not working in Android MotionLayout
Question:
username_0: (The issue was submitted with the bug-report template left unfilled: no issue description, reproduction steps, test content, captured bug report, or ExoPlayer/Android version information was provided.)
Status: Issue closed
Answers:
username_1: If you provide no useful information, we cannot investigate your issue. |
sarl/sarl | 191020375 | Title: Error during documentation generation.
Question:
username_0: When generating the documentation with Jnario, the following error message is logged. This error does not cause a generation failure.
```
Running io.sarl.docs.faq.SARLSyntaxFAQGeneralSyntaxSpec
21449 [main] ERROR base.resource.BatchLinkableResource - generation context cannot be found for: io.sarl.docs.faq.syntax.A$__A
java.lang.IllegalStateException: generation context cannot be found for: io.sarl.docs.faq.syntax.A$__A
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.getContext(SARLJvmModelInferrer.java:371)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.transform(SARLJvmModelInferrer.java:1195)
at org.eclipse.xtend.core.jvmmodel.XtendJvmModelInferrer.transform(XtendJvmModelInferrer.java:565)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.transform(SARLJvmModelInferrer.java:1058)
at org.eclipse.xtend.core.jvmmodel.XtendJvmModelInferrer.inferLocalClass(XtendJvmModelInferrer.java:847)
at org.eclipse.xtend.core.jvmmodel.XtendJvmModelInferrer.initializeLocalTypes(XtendJvmModelInferrer.java:821)
at org.eclipse.xtend.core.jvmmodel.XtendJvmModelInferrer.setBody(XtendJvmModelInferrer.java:646)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.setBody(SARLJvmModelInferrer.java:295)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.transform(SARLJvmModelInferrer.java:1303)
at org.eclipse.xtend.core.jvmmodel.XtendJvmModelInferrer.transform(XtendJvmModelInferrer.java:565)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.transform(SARLJvmModelInferrer.java:1058)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.appendAOPMembers(SARLJvmModelInferrer.java:1873)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer.initialize(SARLJvmModelInferrer.java:666)
at io.sarl.lang.jvmmodel.SARLJvmModelInferrer$2.run(SARLJvmModelInferrer.java:396)
at org.eclipse.xtend.core.jvmmodel.XtendJvmModelInferrer$3.run(XtendJvmModelInferrer.java:217)
at org.eclipse.xtext.xbase.resource.BatchLinkableResource.ensureJvmMembersInitialized(BatchLinkableResource.java:231)
```
<!---
@huboard:{"order":2.6067310094159225e-54,"milestone_order":6.934323504128013e-59}
-->
Answers:
username_0: See #576.
Status: Issue closed
|
stringologytimes/StringologyTimes | 471510319 | Title: Computing the k-binomial complexity of the Thue-Morse word
Question:
username_0: Preprint (e.g, arXiv) URL: https://arxiv.org/abs/1812.07330
Year of Publication:
Conference Paper URL:
Year of Publication:
Journal Paper URL:
Year of Publication:
Your Proposal Tags:
Comments:
Slides: https://orbi.uliege.be/handle/2268/233045 |
codeforchiba/covid19 | 802116227 | Title: データ取り込みバッチで相談件数の取り込みが 1/20 以降のデータが取り込まれていない
Question:
username_0: ## The Problem
The convert batch that runs once a day via GitHub Actions has stopped storing the consultation-count data (= contacts) in data.json; the last stored entry is from 2021/1/19. The data exists in the source Excel sheet, so the problem is likely on the batch side.
## Screenshot
## Expected Behavior
## Steps to Reproduce
## Environment
Answers:
username_1: Hmm, the timing of this rings a bell, so I'll look into it.
(It probably wasn't designed with more than a year of use in mind.)
Status: Issue closed
|
mono/SkiaSharp | 303609833 | Title: NuGet - "Failed to add reference to 'libSkiaSharp'."
Question:
username_0: I get the following message when installing my NuGet package.
```
Failed to add reference to 'libSkiaSharp'.
Please make sure that the file is accessible, and that it is a valid assembly or COM component.
```
My package does reference, through one of its components (I am not using it directly), the following:
`<package id="SkiaSharp" version="1.60.0" targetFramework="net471" />`
Any idea why this is happening?
Thanks!
Status: Issue closed
Answers:
username_0: I could just remove the package... everything is fine now! |
sigurdm/grpc_web_flutter_example | 1097253324 | Title: How to use grpc for mobile and grpc-web for browser
Question:
username_0: When you say conditional import, do you mean: [what is mentioned in this post](https://www.coderancher.us/2021/05/03/dart-flutter-conditional-imports/)?
And would that mean [this code in your example](https://github.com/sigurdm/grpc_web_flutter_example/blob/3eaf741062abaed148c950542d014f83a9ffb969/lib/main.dart#L8) would need to be adjusted?
I'm new to flutter and dart and trying to see if this is possible before beginning a project. |
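For reference, a Dart conditional import takes roughly this shape; a sketch with hypothetical file names, where each file exposes a createChannel() with the same signature:
```dart
// Hypothetical files: the stub throws, grpc_channel_native.dart uses the grpc
// package (mobile/desktop), and grpc_channel_web.dart uses grpc-web (browser).
import 'grpc_channel_stub.dart'
    if (dart.library.io) 'grpc_channel_native.dart'
    if (dart.library.html) 'grpc_channel_web.dart';

void main() {
  final channel = createChannel('example.com', 443); // resolved per platform
  print(channel);
}
```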
bwssytems/ha-bridge | 286434124 | Title: Alexa Rooms or Groups od Devices
Question:
username_0: Hello everybody,
I have noticed something, and hopefully somebody can help.
I helped myself with a workaround, but it is just not as elegant.
So, I have some devices in my HA-Bridge.
Mostly they just execute a command on the Raspberry Pi to turn my RF switches on or off, or to control the TV over IR.
That works fine. But if I want to turn on the switch for the TV AND also turn on the TV itself (via Alexa Rooms), then nothing reacts. The only workaround for me at the moment is to create another device where both commands for the Raspberry Pi are combined.
Does anybody have similar problems with Alexa groups (Rooms) and the HA-Bridge?
Is there a way to fix this?
Also note that I have activated "Use Rooms for Alexa" on the HA-Bridge.
But there is one point in the documentation which I don't understand:
"Use Rooms for Alexa
This setting controls rooms for Alexa. If it is set to true, any device ID above 10000 is treated as a special group. The default is set as false."
Where or how do I set the device ID over 10000? Is this maybe the key?
Thank you for any advice!
PS: You will find a picture of my HA-Bridge attached. It shows how I use 4 commands in one "HA-Bridge device" to control 4 real-life devices. Don't be confused by the delay; I found it not relevant.

Answers:
username_0: Now I have also tried the Hue app. Rooms don't work there either, but the devices work one by one.
Status: Issue closed
|
spywhere/vscode-guides | 135210076 | Title: Guides skip blank lines
Question:
username_0: If I have a code bracket with blank lines in it for code separation and readability, the guide lines have holes in them. I did not see an option to span across blank lines; can you please add that? Also, please explain what rulers are.
Status: Issue closed
Answers:
username_1: The skipped guides on blank lines are expected due to a limitation of the extension API. A guide is created by inserting an outline border at a character position, so in order to show a guide, characters are required on those lines.
As for the ruler, it's simply a static guide at a particular character length (mostly 80). Rulers are used to indicate a long line (to conform to code guidelines on line length) or just as a visual guide in some cases.
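To illustrate the mechanism (a sketch, not the extension's actual code), the decoration API needs a non-empty range anchored at real characters, which is why an empty line gives the border nothing to attach to:
```typescript
import * as vscode from 'vscode';

// A "guide" is a decoration type whose left border draws the vertical line.
const guideDecoration = vscode.window.createTextEditorDecorationType({
    borderWidth: '0 0 0 1px',
    borderStyle: 'solid',
    borderColor: 'rgba(128, 128, 128, 0.4)',
});

// Applies the border to the single character at (line, column); on a blank
// line there is no character there, so no range can be decorated.
function drawGuide(editor: vscode.TextEditor, line: number, column: number): void {
    const start = new vscode.Position(line, column);
    editor.setDecorations(guideDecoration, [new vscode.Range(start, start.translate(0, 1))]);
}
```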
username_1: Reopened for the experiment with a new potential API introduced in Visual Studio Code v1.3.0
username_2: Hello, any update on this issue?
username_1: Sadly no. Although they expose CSS's `after` and `before` in the decorators, there are not enough settings to make this happen at the moment.
username_2: The internal guides are shown in the editor; you can take a hint from there.
username_1: After tinkering with the CSS `after` and `before` support provided by VSCode, this issue, though a highly wished-for feature, unfortunately cannot be implemented without a proper VSCode rendering API.
To add more detail, @username_2's suggestion is not possible because an extension cannot directly set particular CSS attributes. Another approach is to set a decoration with an offset, but that ends up cluttering the beginning of the empty line, so that approach failed as well.
This issue will be closed until VSCode finally provides extensions with a way to render a line on an empty line.
Status: Issue closed
|
chexagon/redis-session-manager | 263368146 | Title: endpoint config bug
Question:
username_0: I found that the configuration below does not work:
```
<Manager className="com.crimsonhexagon.rsm.redisson.SingleServerSessionManager"
endpoint="redis://localhost:6379"
/>
```
The console will show that the hostname can't be null.
But this one works:
```
<Manager className="com.crimsonhexagon.rsm.redisson.SingleServerSessionManager"
endpoint="localhost:6379"
/>
```
Answers:
username_1: the `redis://` prefix is required for redisson 3.4.3+ (see issue #20). There has been no release with this version of redisson yet, so the README config applies only to master. Omitting the prefix as you specified will work for older redisson versions (all current RSM releases)
Status: Issue closed
|
argoproj/argo-events | 803995251 | Title: SNS Parameterization JSON handling
Question:
username_0: Hi all,
I have an SNS that I want to use to trigger workflows.
Example message sent to SNS:
```
{
"field1" : "test1",
"field2" : "test2"
}
```
I want my sensor to directly parse this JSON and use the fields as arguments to start the workflow.
My sensor (I've omitted part of it for the sake of brevity):
```
triggers:
- template:
name: workflow-trigger
k8s:
group: argoproj.io
version: v1alpha1
resource: workflows
operation: create
source:
resource:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: test-workflow-
namespace: workflows
spec:
serviceAccountName: default
arguments:
parameters:
- name: field1
value: "{{inputs.parameters.message}}"
workflowTemplateRef:
name: creator-workflow
parameters:
- src:
dependencyName: sns-consumer
dataKey: body.Message.field1
dest: spec.arguments.parameters.0.value
```
Essentially, the aim is to have the 'field1' param from the SNS message used as an input to creator-workflow.
The current issue I'm facing is that it does not pick up 'field1'; instead, the full event payload object (context, data) gets sent as a parameter. I can also paste this here if it helps.
Answers:
username_1: The dataKey should be `body.field1`. https://argoproj.github.io/argo-events/setup/aws-sns/#event-structure
username_0: The data above is actually nested in the ‘Message’ object. So it’s more like:
```
{
“Message”: {"field1" : "test1", “field2”: “test2”}
}
```
Status: Issue closed
username_2: The only possibility I can think of for this case is that `dataKey` was not set, which would expose everything of the event, including context and data. However, `dataKey` is in the spec you posted... Could you post your live sensor spec? `kubectl get sensor xxx -o yaml`
username_3: Did you manage to get this working? I'm now certain this is an issue, I've tried everything to extract parameters from JSON in SNS and there is no way it seems. It's fine when extracting anything other than JSON in the message body. I asked in Slack but no response yet: https://cloud-native.slack.com/archives/C01TNKD6KL6/p1627631999001900
username_2: I think this is the same problem as https://github.com/argoproj/argo-events/issues/1752, that a field to indicate if it's JSON content is missing in SNS. |
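For context, SNS delivers the published payload inside a notification envelope where `Message` is a JSON-encoded string, so a dot path such as `body.Message.field1` cannot descend into it unless the event source first parses that string as JSON. An abridged sketch of the envelope (values illustrative):
```json
{
  "Type": "Notification",
  "MessageId": "00000000-0000-0000-0000-000000000000",
  "TopicArn": "arn:aws:sns:us-east-1:123456789012:example-topic",
  "Message": "{\"field1\":\"test1\",\"field2\":\"test2\"}",
  "Timestamp": "2021-02-08T00:00:00.000Z"
}
```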
rossfuhrman/_why_the_lucky_markov | 456615979 | Title: I knew that <NAME> was you. People and steaks, side-by-side.
Question:
username_0: Toot: I knew that <NAME> was you. People and steaks, side-by-side.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
gdombiak/OctoPod | 367425772 | Title: Fix app crash when updating printer from non-main thread
Question:
username_0: 
Status: Issue closed |
KnisterPeter/vscode-github | 746839718 | Title: Extension causes high cpu load
Question:
username_0: - Issue Type: `Performance`
- Extension Name: `vscode-github`
- Extension Version: `0.30.4`
- OS Version: `Windows_NT x64 10.0.18362`
- VSCode version: `1.51.1`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`c:\Users\<NAME>\AppData\Local\Temp\username_1.vscode-github-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-high-cpu-load
Answers:
username_1: Duplicate of #443
Status: Issue closed
|
appium/ruby_lib | 40250997 | Title: tap broken
Question:
username_0: ```
Hi,
I tried to use def tap(opts), but it is not allowing me to pass parameters. If I pass any parameters it raises a "1 or zero arguments" exception.
Here are my implementations:
1. I tried tap(:x => 0.5, :y => 0.5, :fingers => 1) - didn't work
2. @op = {
     opts: {
       'x' => 0.5,
       'y' => 0.5,
       'fingers' => 1
     }}
   tap(@op) - didn't work
Correct me if I am wrong in the implementation.
```
reported by @prathimak
Answers:
username_1: This worked for me:
Appium::TouchAction.new.tap(:x => 50, :y => 70).release.perform
username_1: So one point to make: I can use swipe without referencing the module and class, but for tap I need to. Why is that?
swipe(:start_x => a, :start_y => b, :end_x => c, :end_y => d, :duration => e).perform
Appium::TouchAction.new.tap(:x => 50, :y => 70).release.perform
username_2: It is still broken...
username_3: @username_2 could you provide an example what exactly does not work? What appium version? Android?
For example code like this ```Appium::TouchAction.new.tap(x: 60, y: 1200).perform``` works like a charm for me.
Status: Issue closed
username_4: According to the comment https://github.com/appium/ruby_lib/issues/254#issuecomment-221553085 and the source code, `Appium::TouchAction.new.tap(x: 60, y: 1200).perform` seems to work.
If someone can't `tap` with the latest appium server and ruby_lib, please open a new issue.
username_5: I have the same issue with appium 1.6.3
I have a XCUIElementTypeAlert displayed on my screen
these are the coordinates of my XCUIElementTypeAlert:
x=52, y=263, h=141, w=271
and this is the size of my screen where XCUIElementTypeAlert is displayed: height = 667, width = 375
I would like to tap outside XCUIElementTypeAlert:
```
> Appium::TouchAction.new.tap(:x => 0, :y => window_size.width*0.5).release.perform
Selenium::WebDriver::Error::UnknownError: Support for this gesture is not yet implemented. Please contact an Appium dev
from /Users/admin/.rvm/gems/ruby-2.3.0/gems/selenium-webdriver-3.0.5/lib/selenium/webdriver/remote/response.rb:69:in `assert_ok'
```
And when I try without release, it gives no error, but it taps the first button of the element instead of tapping outside:
```
Appium::TouchAction.new.tap(:x => 0, :y => window_size.width*0.5).perform
#<Appium::TouchAction:0x007fe958d77fa0 @actions=[{:action=>:tap, :options=>{:x=>0, :y=>187.5, :count=>1}}]>
```
username_4: I created the original report above as a new issue:
https://github.com/appium/ruby_lib/issues/453
Status: Issue closed
|
r-lib/pkgdepends | 423595135 | Title: Wrong entry types: `filesize (double, expected integer)` on windows
Question:
username_0: ```
i Checking for package metadata updates
v All 10 metadata files are current.
v Using session cached package metadata
v Using cached package metadata
Error in res_add_defaults(entries) :
  Wrong entry types: `filesize (double, expected integer)`
Call `rlang::last_error()` to see a backtrace
```
<sup> reprex not working - copy and call `reprex::reprex_rescue()` before pasting </sup>
The `df` provided to `res_add_df_entries` contains a `filesize` column that is double, not integer.
I have not yet found where it comes from, but I believe this is what is causing the issue.
I tried even after cleaning the cache.
Answers:
username_0: I believe the cache is read with `pkgcache`, and that is where the double filesize info comes from.
I tried this, and got double where pkgdepends expects integer.
``` r
cache_path <- dirname(pkgcache::meta_cache_summary()$cachepath)
cmc <- pkgcache::cranlike_metadata_cache$new(primary_path = cache_path)
df <- cmc$list("glue")
typeof(df$filesize)
#> [1] "double"
df
#> # A tibble: 2 x 29
#> package version depends suggests license imports linkingto archs enhances
#> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
#> 1 glue 1.3.1 R (>= ~ "testth~ MIT + ~ methods <NA> i386~ <NA>
#> 2 glue 1.3.1 R (>= ~ "testth~ MIT + ~ methods <NA> <NA> <NA>
#> # ... with 20 more variables: os_type <chr>, priority <chr>,
#> # license_is_foss <chr>, license_restricts_use <chr>, repodir <chr>,
#> # platform <chr>, rversion <chr>, needscompilation <chr>, ref <chr>,
#> # type <chr>, direct <lgl>, status <chr>, target <chr>, mirror <chr>,
#> # sources <list>, filesize <dbl>, sha256 <chr>, deps <list>,
#> # md5sum <chr>, path <chr>
```
<sup>Created on 2019-03-21 by the [reprex package](https://reprex.tidyverse.org) (v0.2.1)</sup>
Is something off in pkgcache?
username_0: Everything is now working fine after updating everything to the latest dev versions.
Status: Issue closed
username_0: 🤔 in fact this is not working...
<details closed>
<summary> <span title='Click to Expand'> current package info </span> </summary>
```r
package * version date lib source
assertthat 0.2.1 2019-03-21 [1] CRAN (R 3.5.3)
backports 1.1.3 2018-12-14 [1] CRAN (R 3.5.1)
base64enc 0.1-3 2015-07-28 [1] CRAN (R 3.5.0)
callr 3.2.0 2019-03-15 [1] CRAN (R 3.5.3)
cli 1.1.0 2019-03-19 [1] CRAN (R 3.5.3)
cliapp 0.1.0 2018-12-16 [1] CRAN (R 3.5.2)
crayon 1.3.4 2017-09-16 [1] CRAN (R 3.5.1)
curl 3.3 2019-01-10 [1] CRAN (R 3.5.2)
desc 1.2.0 2019-03-29 [1] Github (r-lib/desc@c860e7b)
digest 0.6.18 2018-10-10 [1] CRAN (R 3.5.1)
fansi 0.4.0 2018-10-05 [1] CRAN (R 3.5.1)
filelock 1.0.2 2018-10-05 [1] CRAN (R 3.5.2)
glue 1.3.1 2019-03-12 [1] CRAN (R 3.5.3)
hms 0.4.2 2018-03-10 [1] CRAN (R 3.5.1)
jsonlite 1.6 2018-12-07 [1] CRAN (R 3.5.1)
lpSolve 5.6.13 2015-09-19 [1] CRAN (R 3.5.0)
magrittr * 1.5 2014-11-22 [1] CRAN (R 3.5.1)
pillar 1.3.1 2018-12-15 [1] CRAN (R 3.5.1)
pkgbuild 1.0.3 2019-03-29 [1] Github (r-lib/pkgbuild@79cb7a0)
pkgcache 1.0.3.9001 2019-03-21 [1] Github (r-lib/pkgcache@dada698)
pkgconfig 2.0.2 2018-08-16 [1] CRAN (R 3.5.1)
pkgdepends 0.0.0.9003 2019-03-31 [1] Github (r-lib/pkgdepends@4b1c5cd)
prettycode 1.0.2 2018-09-11 [1] CRAN (R 3.5.2)
prettyunits 1.0.2 2015-07-13 [1] CRAN (R 3.5.1)
processx 3.3.0 2019-03-10 [1] CRAN (R 3.5.3)
progress 1.2.0 2018-06-14 [1] CRAN (R 3.5.1)
ps 1.3.0 2018-12-21 [1] CRAN (R 3.5.1)
R6 2.4.0 2019-02-14 [1] CRAN (R 3.5.1)
rappdirs 0.3.1 2016-03-28 [1] CRAN (R 3.5.1)
Rcpp 1.0.1 2019-03-17 [1] CRAN (R 3.5.3)
rematch2 2.0.1 2017-06-20 [1] CRAN (R 3.5.1)
rlang 0.3.3 2019-03-29 [1] CRAN (R 3.5.3)
rprojroot 1.3-2 2018-01-03 [1] CRAN (R 3.5.1)
selectr 0.4-1 2018-04-06 [1] CRAN (R 3.5.1)
stringi 1.4.3 2019-03-12 [1] CRAN (R 3.5.3)
stringr 1.4.0 2019-02-10 [1] CRAN (R 3.5.3)
tibble 2.1.1 2019-03-16 [1] CRAN (R 3.5.3)
utf8 1.1.4 2018-05-24 [1] CRAN (R 3.5.1)
uuid 0.1-2 2015-07-28 [1] CRAN (R 3.5.0)
withr 2.1.2 2018-03-15 [1] CRAN (R 3.5.1)
xml2 1.2.0 2018-01-24 [1] CRAN (R 3.5.1)
[1] C:/Users/chris/Documents/R/win-library/3.5
[2] C:/Program Files/R/R-3.5.3/library
```
</details><br>
username_1: With pkgcache 1.0.5, as on CRAN, I get an integer column on both Windows and macOS, so this is probably solved.
Status: Issue closed
username_1: Apparently this was creeping back, because `read.csv()` sometimes creates a double column, even if there are only integers there. I don't know why but now we always convert to integers explicitly, both in pkgcache and pkgdepends. |
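A minimal sketch of that kind of defensive coercion (illustrative, not the actual pkgcache code; the file name is hypothetical):
```r
# read.csv() can parse an all-integer column as double depending on the
# platform, so coerce explicitly before downstream type checks run
pkgs <- read.csv("PACKAGES.csv", stringsAsFactors = FALSE)
pkgs$filesize <- as.integer(pkgs$filesize)
typeof(pkgs$filesize)
#> [1] "integer"
```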
elastic/apm-agent-java | 1084629127 | Title: CVE-2021-45105 / Apache Log4j2 does not always protect from infinite recursion in lookup evaluation
Question:
username_0: Unfortunately, a further CVE has been discovered in Log4J2. See CVE - [CVE-2021-45105](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45105): Apache Log4j2 does not always protect from infinite recursion in lookup evaluation
An upgrade to Log4j 2.12.3 is highly recommended to mitigate this issue.
https://logging.apache.org/log4j/2.x/security.html
P.S. I have also reported this as stated in the security policy.
Status: Issue closed
Answers:
username_1: As mentioned in the [advisory](https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476), the APM Java agent is not exploitable by this vulnerability.
Once 2.12.3 is released, we will upgrade to it. |
fullcalendar/fullcalendar | 1019614855 | Title: Set row height programmatically
Question:
username_0: ### Checklist
Please mark these items with an [x]
- [X] I've already searched through [existing tickets](https://fullcalendar.io/issues)
- [X] Other people will find this feature useful
### Feature Description
I need to show the calendar in the `timeGridWeek` mode, but condensed/overview mode: all events fitting into a smaller space without horizontal scrolling, say in a 500x500 px space. In order to achieve that, I'd like to set the row height to a smaller value, say 8px.
Thank you.
Answers:
username_1: An issue with using CSS is that event dragging, etc. won't work properly.
There is `expandRows` to make the rows bigger, but it doesn't work to make them smaller; that could be useful as a new feature.
You could also try less slots like this:
https://codepen.io/username_1/pen/gOxpJjZ |
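For the CSS route, shrinking the slot rows looks roughly like this (class name taken from FullCalendar v5; verify against your version, and note the dragging caveat above):
```css
/* Condensed overview: shrink each time-grid slot row to ~8px */
.fc .fc-timegrid-slot {
    height: 8px;
    line-height: 8px;
}
```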
CodetheChangeFoundation/dda-tap | 565966562 | Title: Play generic audio when clicking an image on main page
Question:
username_0: Get the audio to play when an image is selected on the main page. Basically, when on the main page the user can select an image and it will go full screen. We need to add functionality to play back the associated recording for that image.
playframework/playframework | 213879124 | Title: 2.6.0-M1/M2: play.test.WithApplication#stopPlay does not call ApplicationLifecycle-Hooks anymore
Question:
username_0: I'm having trouble with my unit/integration tests while moving to Play 2.6.0 M1/M2.
My tests are extending the play.test.WithApplication class and the play.test.WithApplication#stopPlay is also invoked correctly.
However, the Stop method in play.api.Play#stop does not invoke the ApplicationLifecycle Callbacks anymore.
This leaves my app in an inconsistent state, as I cannot invoke cleanup actions, like closing httpClients that I acquired by doing wsClient.getUnderlying or stopping schedulers in actors.
Especially, this leaves a thread hanging around for (almost) every test I execute, printing the remaining open connections in
/Users/username_0/.ivy2/cache/com.typesafe.play/shaded-asynchttpclient/jars/shaded-asynchttpclient-1.0.0-M4.jar!/play/shaded/ahc/org/asynchttpclient/netty/channel/DefaultChannelPool.class:289
Answers:
username_1: Hi @username_0, this is a Java project, right?
Looking at the current code, we are calling the `ApplicationLifecycle` callbacks. Navigating through `WithApplication.stop` -> `Helpers.stop` -> `play.api.Play.stop` we end up at this stop method:
https://github.com/playframework/playframework/blob/master/framework/src/play/src/main/scala/play/api/Play.scala#L128-L128
It calls `Application.stop` and then, if you are not overriding `WithApplication.provideApplication` method, should return an instance of `play.DefaultApplication` which wraps a `play.api.DefaultApplication`. Then `stop` method is calling `ApplicationLifecycle.stop`:
https://github.com/playframework/playframework/blob/master/framework/src/play/src/main/scala/play/api/Application.scala#L242-L242
Is it possible to have a project that reproduces the problem?
username_1: It is called once per test case. So, if your test class has 10 test cases, it will be called 10 times. Of course, stop will be called for each test case too.
username_0: Ok, looks like this is from another issue. I had my tests running with "fork in Test := false".
So at one point, the MetaSpace ran out, which prevented the stop hooks from being called, which then prevented thread cleanup.
I've created this issue https://github.com/sbt/junit-interface/issues/77 to properly cleanup the metaspace after each testclass. I think if that one is fixed, also the issue seen here will disappear. For now, running tests in fork mode works.
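For anyone hitting the same symptom, the workaround amounts to this sbt setting:
```scala
// build.sbt: run tests in a forked JVM so metaspace from loaded
// test classes is reclaimed when the forked process exits
fork in Test := true
```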
username_1: @username_0 I think we can close this then?
Status: Issue closed
|
mapstruct/mapstruct | 444821314 | Title: FilerException with intelliJ
Question:
username_0: Hello.
I have been using MapStruct successfully.
My environment is Spring Boot 2.1.4, JDK 12, Gradle 5.2.1, and MapStruct 1.3.0.Final.
But the following error often occurs:

The reproduction path is as follows.
1. compileJava
2. modify mapper file
3. compileJava
4. start application.
Answers:
username_1: @username_3 is there any information that can be provided on this? I am facing the same issue with Gradle 5+. It works fine via the command line but not in IntelliJ.
username_2: I have the same exception when I run mvn install without clean, or mvn spring-boot:run directly after an install. It seems that once the Mapper implementation files are generated, the second attempt always fails.
username_3: @username_2 is it possible for you to provide a minimal setup project so that we can have a look at it?
I can't seem to be able to reproduce this problem
username_4: I am facing a similar issue. Is there any resolution to this problem?
It happened for me after I migrated to JDK 11 with maven-compiler-plugin 3.8.1 and MapStruct 1.4.1.Final.
With JDK 8 it was working fine.
On the first compilation it works; however, if you run mvn install without clean, a similar issue arises.
username_5: I have the same issue in my current project, and it affects only the auto-build in IntelliJ.
And I think that because of this error the Spring Boot Dev Tools do not auto-restart the app.
The configuration is as follows; the Lombok version is managed by Spring Boot Dependencies.
```
// Code Generation and Annotation Processors
implementation 'org.projectlombok:lombok'
implementation 'org.mapstruct:mapstruct:1.4.2.Final'
annotationProcessor 'org.mapstruct:mapstruct-processor:1.4.2.Final'
annotationProcessor 'org.projectlombok:lombok'
annotationProcessor 'org.projectlombok:lombok-mapstruct-binding:0.2.0'
testAnnotationProcessor 'org.projectlombok:lombok'
testAnnotationProcessor 'org.mapstruct:mapstruct-processor:1.4.2.Final'
testAnnotationProcessor 'org.projectlombok:lombok-mapstruct-binding:0.2.0'
```
username_6: I have the same issue in my current project. Help!
username_3: My comment at https://github.com/mapstruct/mapstruct/issues/1818#issuecomment-595453763 is still valid. We are unable to reproduce this problem. We would need someone to provide an example where this always happens so that we can reproduce it.
It could be related to using outdated versions of IntelliJ, Gradle and / or the Maven Compiler plugin.
I know that both Gradle and IntelliJ have improved support for annotation processors and for me it has worked consistently for a while.
username_7: error: Internal error in the mapping processor: java.lang.RuntimeException: javax.annotation.processing.FilerException: Attempt to recreate a file for type com.rivian.commerce.t2d.delivery.mapper.DeliveryMapperImpl at org.mapstruct.ap.internal.processor.MapperRenderingProcessor.createSourceFile(MapperRenderingProcessor.java:59) at org.mapstruct.ap.internal.processor.MapperRenderingProcessor.writeToSourceFile(MapperRenderingProcessor.java:39) at org.mapstruct.ap.internal.processor.MapperRenderingProcessor.process(MapperRenderingProcessor.java:29) at org.mapstruct.ap.internal.processor.MapperRenderingProcessor.process(MapperRenderingProcessor.java:24) at org.mapstruct.ap.MappingProcessor.process(MappingProcessor.java:338) at org.mapstruct.ap.MappingProcessor.processMapperTypeElement(MappingProcessor.java:318) at org.mapstruct.ap.MappingProcessor.processMapperElements(MappingProcessor.java:267) at org.mapstruct.ap.MappingProcessor.process(MappingProcessor.java:166) at org.gradle.api.internal.tasks.compile.processing.DelegatingProcessor.process(DelegatingProcessor.java:62) at org.gradle.api.internal.tasks.compile.processing.IsolatingProcessor.process(IsolatingProcessor.java:50) at org.gradle.api.internal.tasks.compile.processing.DelegatingProcessor.process(DelegatingProcessor.java:62) at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.access$401(TimeTrackingProcessor.java:37) at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor$5.create(TimeTrackingProcessor.java:99) at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor$5.create(TimeTrackingProcessor.java:96) at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.track(TimeTrackingProcessor.java:117) at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.process(TimeTrackingProcessor.java:96) at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:1023) at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:939) at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1267) at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1382) at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1234) at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:916) at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.lambda$doCall$0(JavacTaskImpl.java:104) at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.invocationHelper(JavacTaskImpl.java:152) at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:100) at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:94) at org.gradle.internal.compiler.java.IncrementalCompileTask.call(IncrementalCompileTask.java:89) at org.gradle.api.internal.tasks.compile.AnnotationProcessingCompileTask.call(AnnotationProcessingCompileTask.java:94) at org.gradle.api.internal.tasks.compile.ResourceCleaningCompilationTask.call(ResourceCleaningCompilationTask.java:57) at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:54) at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:39) at 
org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.delegateAndHandleErrors(NormalizingJavaCompiler.java:97) at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:51) at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:37) at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:51) at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:37) at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:46) at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:36) at org.gradle.jvm.toolchain.internal.DefaultToolchainJavaCompiler.execute(DefaultToolchainJavaCompiler.java:57) at org.gradle.api.tasks.compile.JavaCompile.lambda$createToolchainCompiler$1(JavaCompile.java:232) at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:99) at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:41) at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:66) at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:52) at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:59) at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:51) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler.execute(CompileJavaBuildOperationReportingCompiler.java:51) at org.gradle.api.tasks.compile.JavaCompile.performCompilation(JavaCompile.java:279) at org.gradle.api.tasks.compile.JavaCompile.performIncrementalCompilation(JavaCompile.java:165) at org.gradle.api.tasks.compile.JavaCompile.compile(JavaCompile.java:146) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:104) at org.gradle.api.internal.project.taskfactory.IncrementalInputsTaskAction.doExecute(IncrementalInputsTaskAction.java:32) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51) at org.gradle.api.internal.project.taskfactory.AbstractIncrementalTaskAction.execute(AbstractIncrementalTaskAction.java:25) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29) at org.gradle.api.internal.tasks.execution.TaskExecution$2.run(TaskExecution.java:239) at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29) at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47) at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:68) at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:224) at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:207) at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:190) at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:168) at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:89) at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:40) at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:53) at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:50) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:50) at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:40) at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68) at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38) at 
org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:48) at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:36) at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:41) at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:74) at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55) at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:51) at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:29) at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:61) at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:42) at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:60) at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:27) at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:188) at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75) at org.gradle.internal.Either$Right.fold(Either.java:175) at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:59) at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73) at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48) at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:38) at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:27) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:36) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:22) at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:109) at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:56) at java.base/java.util.Optional.orElseGet(Optional.java:364) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:56) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:38) at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:73) at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:44) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:89) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:50) at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:114) at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:57) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:76) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:50) at 
org.gradle.internal.execution.steps.SkipEmptyWorkStep.lambda$execute$2(SkipEmptyWorkStep.java:93) at java.base/java.util.Optional.orElseGet(Optional.java:364) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:93) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:34) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38) at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:43) at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:31) at org.gradle.internal.execution.steps.AssignWorkspaceStep.lambda$execute$0(AssignWorkspaceStep.java:40) at org.gradle.api.internal.tasks.execution.TaskExecution$3.withWorkspace(TaskExecution.java:284) at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:40) at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:30) at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:37) at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:27) at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44) at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:33) at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:76) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:142) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:131) at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77) at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at 
org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52) at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:74) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:402) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:389) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:382) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:368) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.lambda$run$0(DefaultPlanExecutor.java:127) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:191) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:182) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:124) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:61) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: javax.annotation.processing.FilerException: Attempt to recreate a file for type com.rivian.commerce.t2d.delivery.mapper.DeliveryMapperImpl at jdk.compiler/com.sun.tools.javac.processing.JavacFiler.checkNameAndExistence(JavacFiler.java:732) at jdk.compiler/com.sun.tools.javac.processing.JavacFiler.createSourceOrClassFile(JavacFiler.java:498) at jdk.compiler/com.sun.tools.javac.processing.JavacFiler.createSourceFile(JavacFiler.java:435) at org.gradle.api.internal.tasks.compile.processing.IncrementalFiler.createSourceFile(IncrementalFiler.java:45) at org.mapstruct.ap.internal.processor.MapperRenderingProcessor.createSourceFile(MapperRenderingProcessor.java:56) ... 182 more
1 error
FAILURE: Build failed with an exception.
```
username_7: By the way, this is happening regardless of whether I use IntelliJ Gradle tools or I run the Gradle commands directly via command line.
username_3: @username_7 if this happens every time for you, can you please create a simple project showcasing this error so we can look into it?
username_7: So I was able to reproduce the issue by adding Netflix DGS codegen to my project.
And I was also able to find a solution! By upgrading the codegen plugin `com.netflix.dgs.codegen` to 5.1.16, the issue went away.
So I got curious and found this PR 😂: [Use more specific output directory to avoid conflicts between output directories with other code generating tools (mapstruts) #310](https://github.com/Netflix/dgs-codegen/pull/310)
For reference, here's the repo to reproduce it: https://github.com/username_7/mapstruct-issue-1818.
username_3: Good find, @username_7. MapStruct delegates the creation of the generated classes to javac, and the location of the created sources is determined by the java compiler configuration: usually the `maven-compiler-plugin` when using Maven, or the Gradle configuration of the java compiler.
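For illustration, the Gradle-side knob looks roughly like this (the directory path is just an example, and recent Gradle versions already default to a per-task directory):
```groovy
// build.gradle -- point javac's generated sources at an explicit directory
tasks.withType(JavaCompile).configureEach {
    options.generatedSourceOutputDirectory.set(
        layout.buildDirectory.dir("generated/sources/annotationProcessor/java/main")
    )
}
```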
I am going to close this issue. If someone else has a similar problem please create a repo to reproduce the problem like @username_7 did.
Status: Issue closed
|
Financial-Times/origami | 998236036 | Title: Possible bug or new feature: Have tooltip stay at last decided position if no suitable position can be found
Question:
username_0: Currently o-tooltip reverts to the position defined in `data-o-tooltip-position` if it cannot find a suitable position to display itself, instead of staying at the last known suitable position.
You can see this issue on [the demo](https://www.ft.com/__origami/service/build/v2/demos/[email protected]/demo): because the tooltip element has `data-o-tooltip-position="above"`, when the viewport is small enough that no suitable position can be found, the tooltip reverts to being above the button. |
Robertof/perl-www-telegram-botapi | 316602898 | Title: ERROR: SSL connect attempt failed due to Telegram blocking in Russia
Question:
username_0: The bot stopped working 7 days ago; it seems to be related to the Telegram blocking in Russia.
Error message is:
```
ERROR: SSL connect attempt failed
at /usr/local/share/perl/5.20.2/WWW/Telegram/BotAPI.pm line 208.
WWW::Telegram::BotAPI::api_request() called at /usr/local/share/perl/5.20.2/WWW/Telegram/BotAPI.pm line 76
WWW::Telegram::BotAPI::AUTOLOAD(WWW::Telegram::BotAPI=HASH(0x2a64c28)) called at bot_embedded.pl line 212
```
Are there any built-in WWW::Telegram::BotAPI capabilities for working around the block, such as setting up proxy servers or using an alternative network interface for the connection?
Answers:
username_1: Hi there!
I'm sorry you're having issues due to geo-blocking of Telegram. Proxies are indeed supported by the underlying `agent`s, see https://github.com/username_1/perl-www-telegram-botapi/issues/14.
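For example, with the Mojo::UserAgent backend something along these lines should work (untested sketch; the proxy address is a placeholder, and SOCKS support needs IO::Socket::Socks):
```perl
use WWW::Telegram::BotAPI;

my $api = WWW::Telegram::BotAPI->new (token => 'my_token');
# Route API requests through a proxy reachable from your network.
$api->agent->proxy->https ('socks://127.0.0.1:9050');
# Or pick the proxy up from HTTPS_PROXY/NO_PROXY in the environment:
# $api->agent->proxy->detect;
$api->getMe;
```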
Hope this can help solve your problems.
Cheers!
username_1: I'm closing this as it's been a while, let me know if you need further help :)
Cheers!
Status: Issue closed
|
Azure/azure-cli-extensions | 722668500 | Title: subprocess.CalledProcessError returned non-zero exit status 3.
Question:
username_0: ### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az image copy
Extension Name: image-copy-extension. Version: 0.2.7.`
**Errors:**
```
Command '['/usr/bin/python3', '-m', 'azure.cli', 'snapshot', 'create', '--name', 'Windows-2019-Crowe-Template-NCUS-PRD1_os_disk_snapshot', '--location', 'northcentralus', '--resource-group', 'IS-CoreSystem-PRD1', '--source', '/subscriptions/9<KEY>/resourceGroups/TEMP-image-transfer/providers/Microsoft.Compute/snapshots/Windows-2019-Crowe-Template_os_disk_snapshot-northcentralus', '--output', 'json', '--tags', 'created_by=image-copy-extension']' returned non-zero exit status 3.
Traceback (most recent call last):
...
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az image copy --source-object-name {} --source-resource-group {} --target-location {} --target-resource-group {} --target-subscription {} --target-name {}`
## Expected Behavior
## Environment Summary
```
Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core
Python 3.6.8
Installer: RPM
azure-cli 2.13.0
Extensions:
log-analytics 0.2.1
image-copy-extension 0.2.7
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
Answers:
username_1: hi @username_2 could you pls help to have a look? thanks
username_2: @username_0 Can you show the traceback? Thanks.
username_0: Sure!
```
Getting OS disk ID of the source VM/image
Creating source snapshot
command failed: ['/usr/bin/python3', '-m', 'azure.cli', 'snapshot', 'create', '--name', 'Windows-2019-Crowe-Template-NCUS-PRD1_os_disk_snapshot', '--location', 'northcentralus', '--resource-group', 'IS-CoreSystem-PRD1', '--source', '/subscriptions/<<removed>>/resourceGroups/TEMP-image-transfer/providers/Microsoft.Compute/snapshots/Windows-2019-Crowe-Template_os_disk_snapshot-northcentralus', '--output', 'json', '--tags', 'created_by=image-copy-extension']
output: ResourceNotFoundError: Resource Windows-2019-Crowe-Template_os_disk_snapshot-northcentralus is not found.
CLIInternalError: The command failed with an unexpected error. Here is the traceback:
Command '['/usr/bin/python3', '-m', 'azure.cli', 'snapshot', 'create', '--name', 'Windows-2019-Crowe-Template-NCUS-PRD1_os_disk_snapshot', '--location', 'northcentralus', '--resource-group', 'IS-CoreSystem-PRD1', '--source', '/subscriptions/<<removed>>/resourceGroups/TEMP-image-transfer/providers/Microsoft.Compute/snapshots/Windows-2019-Crowe-Template_os_disk_snapshot-northcentralus', '--output', 'json', '--tags', 'created_by=image-copy-extension']' returned non-zero exit status 3.
Traceback (most recent call last):
File "/usr/lib64/az/lib/python3.6/site-packages/knack/cli.py", line 215, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/lib64/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 654, in execute
raise ex
File "/usr/lib64/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 718, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/usr/lib64/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 711, in _run_job
six.reraise(*sys.exc_info())
File "/usr/lib64/az/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/lib64/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 688, in _run_job
result = cmd_copy(params)
File "/usr/lib64/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 325, in __call__
return self.handler(*args, **kwargs)
File "/usr/lib64/az/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 784, in default_command_handler
return op(**command_args)
File "/home/resadmin/.azure/cliextensions/image-copy-extension/azext_imagecopy/custom.py", line 105, in imagecopy
run_cli_command(cli_cmd)
File "/home/resadmin/.azure/cliextensions/image-copy-extension/azext_imagecopy/cli_utils.py", line 45, in run_cli_command
raise ex
File "/home/resadmin/.azure/cliextensions/image-copy-extension/azext_imagecopy/cli_utils.py", line 21, in run_cli_command
cmd_output = check_output(cmd, stderr=STDOUT, universal_newlines=True)
File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/usr/lib64/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'azure.cli', 'snapshot', 'create', '--name', 'Windows-2019-Crowe-Template-NCUS-PRD1_os_disk_snapshot', '--location', 'northcentralus', '--resource-group', 'IS-CoreSystem-PRD1', '--source', '/subscriptions/<<removed>>/resourceGroups/TEMP-image-transfer/providers/Microsoft.Compute/snapshots/Windows-2019-Crowe-Template_os_disk_snapshot-northcentralus', '--output', 'json', '--tags', 'created_by=image-copy-extension']' returned non-zero exit status 3.
```
username_2: It says "Resource Windows-2019-Crowe-Template_os_disk_snapshot-northcentralus is not found". Can you check if it exists?
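For example (names taken from the log above):
```
az snapshot show \
  --resource-group TEMP-image-transfer \
  --name Windows-2019-Crowe-Template_os_disk_snapshot-northcentralus \
  --output table
```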
username_0: So, that's what I was thinking at first, maybe I got the name, subscription, or RG incorrect, but so far as I can tell, I got them right.
Here's the command I ran:
`az image copy --source-object-name "Windows-2019-Crowe-Template-NCUS-PRD1" --source-resource-group "IS-CoreSystem-PRD1" --target-location "North Central US" --target-resource-group "IS-CoreSystem-QA1" --target-subscription "Crowe QA 1" --target-name "Windows-2019-Crowe-Template-NCUS-QA1" --verbose`

With ResourceID:
`/subscriptions/<<removed>>/resourceGroups/IS-CoreSystem-PRD1/providers/Microsoft.Compute/images/Windows-2019-Crowe-Template-NCUS-PRD1`
username_3: I am running into this same issue. What is likely happening is that the original OS disk that was used to create the image no longer exists. This error is occurring for me as well because I am using Packer to generate the image. Packer creates a temporary resource group, creates a VM, takes an image, then puts the image in the specified resource group. Finally, Packer cleans up the temporary VM and resource group. When the copy image command runs, it tries to locate the original OS disk that no longer exists and fails. Perhaps there would be a way to output a better error message to help people who run into this issue as a short-term fix. For a long-term fix it would be nice to not have to require the original OS image, but I realize there are likely some Azure constraints to work around. It would be great if Azure had the ability to just clone an image rather than these other games that are being played.
username_2: A similar issue https://github.com/Azure/azure-cli-extensions/issues/1756. I am still trying to find the cause.
username_2: Totally agree. This extension is full of magic. It is a very long path to copy an image. I suggest the Azure service support an image copy operation. But unfortunately, they are all devoted to the shared image gallery. It is an advanced version of image management. You can give it a try; it needs some learning time.
username_2: I can't reproduce it or find the cause so far.
username_1: The image-copy extension needs to be improved systematically; adding feature-request to this one. |
MicrosoftDocs/microsoft-365-docs | 558358919 | Title: Misleading Information.
Question:
username_0: Please add details that the fingerprinting is limited to Exchange online only and would not cover any other workloads in the M365 Suite. Also, please correct the MCAS documentation as it suggests that you can use fingerprinting in MCAS policies.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 327f1fbb-ffb4-4d3a-53db-d43ad239ef0c
* Version Independent ID: f23ccb3e-b493-8eaf-d87d-61ae6acc3919
* Content: [Document Fingerprinting - Microsoft 365 Compliance](https://docs.microsoft.com/en-us/microsoft-365/compliance/document-fingerprinting#feedback)
* Content Source: [microsoft-365/compliance/document-fingerprinting.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/compliance/document-fingerprinting.md)
* Service: **exchange-online**
* GitHub Login: @username_1
* Microsoft Alias: **username_1**
Answers:
username_1: Hi @username_0
Thank you for your comment, I am investigating this and will update the content as appropriate.
Status: Issue closed
username_1: @username_0
confirmed with PG that the documentation is correct, document fingerprinting can be used in MCAS file policies.
Status: Issue closed
|
matlab2tikz/matlab2tikz | 165562510 | Title: Manual positioning of colorbar isn't possible
Question:
username_0: Hi all,
I'm trying to use a single colorbar in a figure with multiple plots, like in this post: https://de.mathworks.com/matlabcentral/answers/144453-how-to-make-one-colorbar-for-all-subplots
Unfortunately the colorbar doesn't show up in the tikz figure and matlab2tikz prints the following error:
```
Error using matlab2tikz>getColorbarPosOptions (line 4216)
getColorbarOptions: Unknown 'Location' manual.
Error in matlab2tikz>getColorbarOptions (line 4085)
[cbarTemplate, cbarStyleOptions] = getColorbarPosOptions(handle, ...
Error in matlab2tikz>handleColorbar (line 1186)
getColorbarOptions(m2t, handle));
Error in matlab2tikz>saveToFile (line 455)
m2t = handleColorbar(m2t, cbar);
Error in matlab2tikz (line 352)
saveToFile(m2t, fid, fileWasOpen);
```
It would be very helpful for me to have this option to position a colorbar manually.
Thanks,
Uli
Answers:
username_0: I got it to work using quite a dirty hack in matlab2tikz.m. Well, only the case of a vertical colorbar with axis to the right.
In function `getColorbarPosOptions(handle, cbarStyleOptions)` in the `switch lower(loc)` environment I added the case 'manual':
```
case 'manual'
cbarTemplate = 'right';
origUnits = handle.Units;
handle.Units = 'centimeters';
cbarStyleOptions = opts_add(cbarStyleOptions, 'at',...
sprintf('{(%fcm,%fcm)}',handle.Position(1),...
handle.Position(2)+handle.Position(4)));
cbarStyleOptions = opts_add(cbarStyleOptions, 'height',...
sprintf('%fcm',handle.Position(4)));
handle.Units = origUnits;
```
Maybe somebody with more experience in matlab2tikz can take this and make it an actual feature.
Thanks, Uli
username_1: @username_0 I had a look at it but forgot to answer here.
The problem is that it is not possible to determine the placement generically. While we can get the position from it, the orientation seems tricky.
username_0: @username_1 I see the problem. If I find the time, I'll try to come up with something. Could you give me some advice on how to convert MATLAB positions to proper TikZ positions in an m2t style?
username_1: I think the positioning is fine as it is. The problem is the orientation of the color bar. You cannot say generically this is a horizontal left oriented colorbar or whatever. This information would need to be hand tuned every time.
What I would propose to help other users is that you add this code, commented out, to the switch with a note that similar code might work when hand-tuned to the respective multiplot figure, and make a PR for it.
username_0: There might be an option to get the orientation automatically by comparing the positions of the associated axes. Please see the pull request #937.
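A minimal form of such a check might be no more than this (untested sketch based on the colorbar's own aspect ratio; the exact template names would still need verifying):
```matlab
% Guess the colorbar orientation from its shape
pos = handle.Position;       % [x y width height]
if pos(4) >= pos(3)
    cbarTemplate = 'right';  % taller than wide -> vertical colorbar
else
    cbarTemplate = 'horizontal';
end
```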
Cheers, Uli |
ttadano/alamode | 1089734841 | Title: tools/makedisp_qe.py
Question:
username_0: Is tools/makedisp_qe.py obsolete now? Or is it the new version?
Answers:
username_1: That script is relatively new, but it is still not well supported officially because I haven't prepared documentation for it. I believe the script works properly. If you find any problems, please report them here. |
OnToology/OnToology | 643584697 | Title: Fix lost requests in RabbitMQ
Question:
username_0: If the main consumer/worker is forcefully terminated, the pending requests in the rabbitMQ queue are lost too. I still do not know why. (whether it is a pika issue or a rabbitMQ issue).
Answers:
username_0: Fixed. The issue was due to rabbitMQ configuration (`auto_delete` was `True` in `rabbit.py`)
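For reference, the kind of pika declaration that keeps pending requests around looks like this (minimal sketch; the queue name is just an example):
```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# durable=True survives broker restarts; auto_delete=False keeps the queue
# (and its messages) when the consumer/worker disconnects.
channel.queue_declare(queue="ontoology_requests", durable=True, auto_delete=False)
channel.basic_publish(
    exchange="",
    routing_key="ontoology_requests",
    body=b"pending repository request",
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
```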
Status: Issue closed
|
ivantrujillo79/conciliacionbancaria | 273075183 | Title: File selection appears in a new column
Question:
username_0: The external file import screen shows a difference from the initial version.
"Go to the main menu and open the ""Files"" -> ""Import"" option.
The user interface is presented differently from the original version:
- The file selection control is shown on the right (a new column)
- It should appear below the ""Year"" textbox and above the ""Cancel"" and ""Save"" buttons"
Answers:
username_1: It does not look like a bug; it is programmed to display two columns.
Status: Issue closed
|
feathericons/feather | 640302006 | Title: Pressure Gauge Icon Request
Question:
username_0: # Icon Request
* Icon name: Pressure Gauge
* Use case: Icon to identify sensor data
* Screenshots of similar icons:

## I have seen a similar request with a different design.
Below is the SVG code I created myself; I would like to contribute a few more if given the chance. I'll also try submitting this icon on its own, and I hope it does not cause unnecessary errors. Thanks 👍
**SVG Code**
```
<svg
xmlns="http://www.w3.org/2000/svg"
width="24"
height="24"
viewBox="0 0 24 24"
fill="none"
stroke="black"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round">
<circle cx="12" cy="12" r="10" />
<circle cx="12" cy="12" r="1" />
<polyline points="12 6 12 12 12 12" />
</svg>
```
Answers:
username_1: ```
<svg
xmlns="http://www.w3.org/2000/svg"
width="24"
height="24"
viewBox="0 0 24 24"
fill="none"
stroke="black"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round">
<path d="M16 2.83209C14.7751 2.2969 13.4222 2 12 2C6.47715 2 2 6.47715 2 12C2 17.5228 6.47715 22 12 22C17.5228 22 22 17.5228 22 12C22 10.5778 21.7031 9.22492 21.1679 8" />
<path d="M14 10C15.9805 8.01976 17.0909 6.90952 19.0714 4.92928" />
<path d="M12 14.5C13.3807 14.5 14.5 13.3807 14.5 12C14.5 10.6193 13.3807 9.5 12 9.5C10.6193 9.5 9.5 10.6193 9.5 12C9.5 13.3807 10.6193 14.5 12 14.5Z" />
</svg>
```
username_1: Feel free! Also, other members of the Feather community and I have started a community-run fork of this project, as PRs tend not to get merged into production.
So feel free to submit icons here as well as on @featherity.
username_0: I'll try adding a few more. I did something like this as well.

```
<svg
xmlns="http://www.w3.org/2000/svg"
width="24"
height="24"
viewBox="0 0 24 24"
fill="none"
stroke="black"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round">
<circle cx="12" cy="12" r="10" />
<circle cx="12" cy="12" r="1" />
<polyline points="12 6 12 12 12 12" />
<line x1="10" y1="16" x2="14" y2="16" />
</svg>
```
username_2: Is there any way to add a custom SVG icon? |
explosion/spaCy | 381427191 | Title: is_ascii unclear documentation
Question:
username_0: ## Which page or section is this issue related to?
https://spacy.io/api/token#attributes
Answers:
username_1: Hello,
Standard code for `is_ascii` is `all(ord(c) < 128 for c in token.text)`, i.e. all characters are ASCII. I rechecked the lexeme code and it's as expected.
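A quick check (token texts chosen so the tokenization is unambiguous):
```python
import spacy

nlp = spacy.blank("en")
doc = nlp("naïve cafe")
print([(t.text, t.is_ascii) for t in doc])
# [('naïve', False), ('cafe', True)]
```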
@ines I think documentation needs a quick fix. I'll do it soon.
Status: Issue closed
|
vim/vim | 646133373 | Title: LaTeX syntax got confused by unmatched nested delimiters
Question:
username_0: The LaTeX syntax highlighting gets confused by nested, unmatched delimiters and fails to switch off math mode:

**To Reproduce**
Run `vim -u ./plainrc open-intervals.tex`
with `plainrc` as
```vim
set nocompatible
filetype plugin indent on
syntax enable
```
and `open-intervals.tex` as:
```latex
Correct syntax $math$ here
\begin{tabular}{cc}
$\text{(20, 25]}$ & \\
\end{tabular}
Wrong syntax here...
```
**Expected behavior**
The last line should not be in math mode. You can correct the thing by adding a `%stopmode` after the `\\` as a workaround...
**Environment (please complete the following information):**
- vim version:
```
VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Mar 18 2020 18:29:15)
Included patches: 1-1453
Modified by <EMAIL>
Compiled by <EMAIL>
Huge version with GTK3 GUI. Features included (+) or not (-):
```
- OS: Ubuntu 18.04
- Terminal: GNOME Terminal, or `gvim`
**Additional context**
Previously reported here: https://github.com/username_1/vimtex/issues/1723
Answers:
username_1: A more minimal LaTeX example:
```tex
Correct syntax $math$ here
$\text{(20, 25]}$
Wrong syntax here...
```
username_2: Any fixes for this issue?
username_1: FYI: This issue is not present in Vimtex, which now ships with its own syntax plugin.
username_3: Have you contacted Charles (@username_4) directly or tried out the latest versions from http://www.drchip.org/astronaut/vim/index.html#SYNTAX_TEX?
Status: Issue closed
username_4: I can see where one may not want mismatched delimiter checking. Fortunately, there's already a mechanism in syntax/tex.vim to accommodate that wish:
```vim
let g:tex_matchcheck= '[{}]'
```
It's not currently documented, so I'll forward a bit more documentation to Bram. You may provide that setting in your .vimrc. If you do this manually while in the document, you'll want to use `set ft=tex` to get the syntax to reload with it effective.
dotnet/roslyn | 354922894 | Title: Editor Completion: CompletionItemRules.FilterCharacterRules
Question:
username_0: See https://github.com/dotnet/roslyn/issues/27427
Answers:
username_1: We call
`Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.Completion.IsFilterCharacter(CompletionItem item, char ch, string textTypedSoFar)`
from the CommitManager and from the ItemManager
username_1: Corresponding tests are enabled.
Status: Issue closed
|
zw3rk/scripts | 416258000 | Title: Error on macos 10.14.3 and clang 7.0.1
Question:
username_0: Hi,
I'm getting this error:
```
libtool: link: gcc -W -Wall -Wstrict-prototypes -Wmissing-prototypes -Wshadow -Wwrite-strings -I./../zlib -g -O2 -Wl,-no_pie -o as-new app.o as.o atof-generic.o compress-debug.o cond.o depend.o dwarf2dbg.o dw2gencfi.o ecoff.o ehopt.o expr.o flonum-copy.o flonum-konst.o flonum-mult.o frags.o hash.o input-file.o input-scrub.o listing.o literal.o macro.o messages.o output-file.o read.o remap.o sb.o stabs.o subsegs.o symbols.o write.o config/tc-arm.o config/obj-elf.o config/atof-ieee.o ../opcodes/.libs/libopcodes.a ../bfd/.libs/libbfd.a -L/Users/paulo/Downloads/rpi3-sdk/binutils-2.32/zlib -ldl -lz ../libiberty/libiberty.a ./../intl/libintl.a -liconv
ld: warning: ld: warning: ignoring file ../bfd/.libs/libbfd.a, file was built for archive which is not the architecture being linked (x86_64): ../bfd/.libs/libbfd.aignoring file /Users/paulo/Downloads/rpi3-sdk/binutils-2.32/zlib/libz.a, file was built for archive which is not the architecture being linked (x86_64): /Users/paulo/Downloads/rpi3-sdk/binutils-2.32/zlib/libz.a
ld: warning: ignoring file ../opcodes/.libs/libopcodes.a, file was built for archive which is not the architecture being linked (x86_64): ../opcodes/.libs/libopcodes.a
ld: warning: ignoring file ../libiberty/libiberty.a, file was built for archive which is not the architecture being linked (x86_64): ../libiberty/libiberty.a
ld: warning: ignoring file ./../intl/libintl.a, file was built for archive which is not the architecture being linked (x86_64): ./../intl/libintl.a
Undefined symbols for architecture x86_64:
"__bfd_elf_obj_attrs_arg_type", referenced from:
_obj_elf_vendor_attribute in obj-elf.o
"__bfd_std_section", referenced from:
_main in as.o
_dwarf2_directive_loc in dwarf2dbg.o
_dwarf2_finish in dwarf2dbg.o
_check_eh_frame in ehopt.o
_make_expr_symbol in expr.o
_expr_build_dot in expr.o
_current_location in expr.o
...
"__hex_value", referenced from:
_integer_constant in expr.o
_hex_float in read.o
"__obstack_begin", referenced from:
_hash_new_sized in hash.o
_read_begin in read.o
_subsegs_begin in subsegs.o
_subseg_set_rest in subsegs.o
"__obstack_free", referenced from:
_s_endif in cond.o
_cond_exit_macro in cond.o
_hash_die in hash.o
_hash_delete in hash.o
_s_stab_generic in stabs.o
"__obstack_newchunk", referenced from:
_s_ifdef in cond.o
_s_if in cond.o
_s_ifb in cond.o
_s_ifc in cond.o
_s_ifeqs in cond.o
_cfi_add_label in dw2gencfi.o
_frag_alloc in frags.o
...
"__sch_istable", referenced from:
_atof_generic in atof-generic.o
_dwarf2_directive_loc in dwarf2dbg.o
_input_file_open in input-file.o
_listing_newline in listing.o
_debugging_pseudo in listing.o
_macro_expand_body in macro.o
_read_a_source_file in read.o
...
"__sch_tolower", referenced from:
_operand in expr.o
_define_macro in macro.o
_check_macro in macro.o
_delete_macro in macro.o
[Truncated]
_s_func in read.o
_stabs_generate_asm_func in stabs.o
_stabs_generate_asm_endfunc in stabs.o
...
"_xstrndup", referenced from:
_obj_elf_vendor_attribute in obj-elf.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[4]: *** [as-new] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all] Error 2
make[1]: *** [all-gas] Error 2
make: *** [all] Error 2
```
macos: 10.14.3
clang: 7.0.1
binutils: latest
ip: 192.168.0.22
raspberry pi: pi 3 model B
Answers:
username_1: You seem to be building for x86_64, not for arm. Hence it can't read the archives properly.
username_0: Exactly. But it is inside your script. I need to run it from macOS, correct?
username_2: @username_1 , Is there a way to email you?
The mail on [your website](https://zw3rk.com/) doesn't work.
Sorry for the noise. |
steenburgh/HvZHub-Mobile | 202343031 | Title: Chat freezes when loading new chats
Question:
username_0: If the user scrolls in just the right pattern, the entire app will freeze for several seconds.
Steps to reproduce:
1. Scroll up until the loading spinner appears
2. Scroll back down until the spinner is hidden.
3. Continue trying to scroll up/down, just don't scroll up past where the spinner was. The entire app will lock up for a bit.
- This might be new in the 1.0.2 Beta PR - #56, I haven't had time to look at it yet.
- I suspect it's caused by an [onPreDrawListener hack](https://github.com/username_0/HvZHub-Mobile/blob/master/app/src/main/java/com/hvzhub/app/ChatFragment.java#L284) I did to make the view less jerky when loading new chats. Removing the listener might fix the problem.<issue_closed>
Status: Issue closed |
RenderHeads/UnityPlugin-AVProVideo | 530386012 | Title: Is it possible to read ID3 metadata from HLS?
Question:
username_0: Hello!
I need to get the time elapsed since the start of the stream on the server side. I know that HLS supports custom metadata in ID3 format. How can I read this metadata using AVPro? (I tried the trial version and found everything necessary for the implementation of my project except for this, and it is the only thing stopping me from buying the full version.)
Answers:
username_1: Which platform are you targeting?
username_2: I would also need to read the timestamps from a HLS stream. Is this possible with AVPro?
username_0: Windows 10
username_2: I'm targeting Android (Oculus Quest, to be precise)
username_3: Currently we do not support ID3 from HLS or from anywhere actually.
This is a feature we do plan to add in the future though, but we don't have any timescale for this yet.
username_4: Is reading timed_id3 metadata from HLS streams available yet?
typestyle/typestyle | 214924466 | Title: number-manipulation for percent / px values
Question:
username_0: I'm converting a sass-style to typestyle, so this might be a 'sass-minded' mode of thinking...
Here's an old code that uses 'variables' in sass:
```sass
.someClass {
$baseXpos: 108%;
....
&.class2 {
width: $baseXpos * 2;
}
}
```
As you can see, $baseXpos is in 'percent', and it can be multiplied.
What do you think about having an extra-api for value manipulation?
e.g:
```typescript
const $baseXpos = variablePercent(108);
const widthOfClass2 = $baseXpos.multiplyBy(2).toValue(); // returns CSSLength type
// or:
const widthOfClass3 = $baseXpos.calc(value => value * 2).toValue();
```
Answers:
username_0: sorry wrong repo... moving to csx
Status: Issue closed
|
Pokecube-Development/Pokecube-Issues-and-Wiki | 1097357160 | Title: New worldgen chunks without NPCs
Question:
username_0: #### Issue Description:
When generating brand new chunks in the world (including ultraspace), NPCs are not spawning in villages or structures. NPCs which had been previously spawned in villages still remain. Affects all NPC types from wandering trainers to stationary ones like N.
#### What happens:
When moving to new chunks, villages or structures that should have NPCs do not have them, while pokemobs spawn just fine.
#### What you expected to happen:
NPCs to spawn when I found cool new structures and villages
#### Steps to reproduce:
1. Move to brand new chunks in the world
2. Find places that NPCs/villagers spawn
3. No spawns
...
____
#### Affected Versions:
BiomesOPlenty-1.16.5-13.1.0.477-universal.jar
journeymap-1.16.5-5.7.3.jar
jei-1.16.5-7.7.1.139.jar
- Pokecube AIO: 1.16.5-3.14.0.jar
- Minecraft: 1.16.5
- Forge: 36.2.20
Answers:
username_1: this should be fixed in 1.16.5-3.14.1, can you confirm and close the issue if it is?
Status: Issue closed
|
sequelize/sequelize | 269488195 | Title: Sequelize retrieve specific field when using as
Question:
username_0: The query is like below.
```js
models.A.findAndCountAll({
    include: [{
        model: models.B,
        as: 'C',
        attribute: ['a']
    }]
});
// Associations:
// A hasOne B as 'C'
// B belongsTo A
```
There are 'a', 'b', 'c' fields in B and I just want to retrieve 'a'.
So I add the attribute option.
But when I execute this query I get all the fields defined in B.
It's really terrible when there are many fields in B.
Did I do something wrong?
Sorry for my poor English.<issue_closed>
Status: Issue closed |
KhronosGroup/SPIRV-Cross | 383720815 | Title: Need to handle pointer of pointer
Question:
username_0: Here is a shader from vulkan CTS variable pointer test group. Please have a look!
[pointer_of_pointer.zip](https://github.com/KhronosGroup/SPIRV-Cross/files/2609925/pointer_of_pointer.zip)
Answers:
username_0: https://www.khronos.org/registry/spir-v/extensions/KHR/SPV_KHR_variable_pointers.html
username_1: I don't believe this is implementable. GLSL does not support pointers.
username_0: Could we let it parse successfully for this case? It will throw an exception for pointer-to-pointer.
```
if (ptrbase.pointer)
SPIRV_CROSS_THROW("Cannot make pointer-to-pointer type.");
```
username_1: We could defer the failure to compilation, I agree. There is no reason why we can't do reflection at least.
Would that be enough?
username_2: I'm already working on this.
username_0: Would that be enough to resolve this issue?
Yes for me.
username_3: What is it that you are working on in this area?
username_2: Supporting this extension.
username_1: Ok, I will defer the failure of KHR_variable_pointers to compile time and close this once it hits master. I assume @username_2 will make a PR for Metal support later as it supports pointers.
username_1: Created #773 to track MSL.
Status: Issue closed
username_4: @HansKristian-Work @username_2 @username_3 interested to hear your thoughts on the new VK_EXT_buffer_device_address extension, which seems to bring pointer support in shaders (similar to the NV shader buffer load GL extension), and on possible MSL support in SPIRV-Cross for exposing that extension in MoltenVK. As I recall reading, Metal kind of supports pointers in its shading language.
Thanks in advance! |
vlafranca/ngxAutocomPlace | 1130152219 | Title: Conflicts with Angular 13
Question:
username_0: I'm trying to upgrade my project to angular 13, but I'm getting the following error every time I run ```npm install```
```js
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: @angular/[email protected]
npm ERR! node_modules/@angular/common
npm ERR! @angular/common@"^13.2.2" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer @angular/common@"^10.0.0" from [email protected]
npm ERR! node_modules/ngx-autocom-place
npm ERR! ngx-autocom-place@"^4.0.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
```
It seems this package does not support Angular 13 yet.
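For now I can unblock the install with the flag npm itself suggests, though it only papers over the peer range:
```sh
npm install --legacy-peer-deps
```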
Answers:
username_1: Thanks for the report, I will fix this soon.
username_1: I can't reproduce the issue; with a fresh Angular 13 project the lib installs just fine. Do you have a playground to reproduce? |
sinri/VpnGateFinder | 317939511 | Title: dear friend, can you modify the code to download the ovpn configure file by analyze mirror page?
Question:
username_0: Dear friend, can you modify the code to download the OpenVPN configuration files (TCP and UDP) by analyzing the mirror page and following the links to the OpenVPN config files?
Answers:
username_1: Emm, it is a project written about four years ago, when VPN was still a viable way to go.
Recently this approach is not so available, and I have turned to other methods such as SS.
I might test OpenVPN later if I have time. If it is usable, I might try to fulfill your request. |
SharePoint/sp-dev-docs | 230604972 | Title: Cannot add SPFx Webpart to German SiteCollection in Sharepoint Online
Question:
username_0: As stated, I am not able to add my SPFx Webpart to a SiteCollection in Sharepoint Online, which was created in German.
As the developer tools tell me, there is a syntax error in the automatically embedded JavaScript. The error message in German is enclosed in double quotes, although double quotes appear again inside the text itself, therefore causing a parse error.
`console.error("Der Webpart-Wrapper auf dieser Seite kann nicht geladen werden. Verwenden Sie die Schaltfläche "Zurück" im Browser, um es noch mal zu versuchen. Wenn das Problem weiterhin besteht, wenden Sie sich an den Administrator der Website.");`
(You see the double quotes inside the double-quoted string? That's the JS code embedded when using an SPFx webpart.)
I used the default SPFx webpart generated with the react-framework, no adjustments. It works well on English SiteCollections as well as in the workbench.
Our sites are all in German so we cannot switch to English sites, but we want to build our new project on the new framework infrastructure. Could you double-check this issue and, if possible, tell me when or how this could be solved? (I guess there is just a need to update some resource files.)
thanx and Regards
Leo
Answers:
username_1: Thank you @username_0 for submitting this here. This seems to be a bug in the platform, which certainly should be resolved asap. Seems to be a duplicate of #587.
username_2: In case you are in a hurry, you can take just this code part:
```js
window.moduleLoaderPromise.then(function(application) {
    application.loadWebPart(...);
});
```
and add it to the SharePoint page as a script. Not the best and most flexible solution but it worked when we had to do a presentation for the customer;)
username_3: Seems the workaround with the code part doesn't work in Edge if there is more than one SPFx Webpart on the page. It works in IE and Firefox though. Will this bug be fixed in the foreseeable future or is there a workaround which works for multiple Webparts in Edge?
username_4: I will take a look at it and update the thread with my findings.
username_5: @username_4 What is the current state here? Any updates?
username_6: Any info on the timeline (it's a showstopper in more than one project ...)? Thanks!
username_6: Just to explain why I'm checking back on a weekly basis:
It's not only that go-live dates come closer:
- From my point of view there are not too many arguments for building "classic" Web Parts when you can use client-side Web Parts on classic pages.
- Enterprise customers tend to decide on certain strategies at certain points in time.
(I did not make this up)
Me: "You should start building client-side Web Parts." -> Customer: "But you said they currently don't work on German classic pages." -> Me: "That's right, but they will." -> Customer: "How long have you been waiting for a fix?" -> Me: "Right now - um - 2 months." -> Customer: "We'll think about it - in 2018."
I really appreciate the SPFX team's work and progress! There's no need for an overnight fix - all I'm asking for is a realistic timeline.
Thanks again! Dirk
username_7: We're facing the same problem. We want to use SPFx Webparts on display forms of a list item so we can't use modern pages. Any updates would be great. Thanks!
username_6: As mentioned in https://github.com/SharePoint/sp-dev-docs/issues/497, properties are not persisted on German pages. This is still the case when you use the workaround @username_2 mentioned above.
-> You currently cannot have SPFx Web Part properties on German classic pages.
Anyone?
username_4: @username_6 - this issue is about `Cannot add SPFx Webpart to German SiteCollection in Sharepoint Online` and this should be fixed now. Properties not being persisted seems like a different issue. Can you please open a separate issue for this problem, if an issue is not already open against it?
I will close this issue as the reported problem is fixed. Feel free to re-open it if you think the issue is not fixed.
Thanks,
Srikanth
Status: Issue closed
username_3: I can confirm that this works now in different browsers (including Edge) on german classic pages.
Also the properties are now being persisted. |
appium/appium | 691098706 | Title: Building scheme "iOS Framework" in CocoaAsyncSocket.xcodeproj Build Failed Task failed with exit code 1:
Question:
username_0: ## The problem
This issue happens while configuring Appium 1.18.1 (with Xcode 12.0 and iOS 14) when I run the step
$ sudo ./Scripts/bootstrap.sh -d
## Environment
* Appium version (or git revision) that exhibits the issue:1.18.1
* Last Appium version that did not exhibit the issue (if applicable):1.17.0
* Desktop OS/version used to run Appium:1.18.1
* Node.js version (unless using Appium.app|exe):10.16.3
* Npm or Yarn package manager:6.9.0
* Mobile platform/version under test: iOS 14.0
* Real device or emulator/simulator: Real Device
* Appium CLI or Appium.app|exe:both
## Details
The same exception is displayed:
1. While trying to launch the app using Appium Desktop
2. While building WebDriverAgentRunner in Xcode 12
## Link to Appium logs
xxxxxxxMBP3:appium-webdriveragent xxxxxxxx$ ./Scripts/bootstrap.sh -d
Fetching dependencies
*** Checking out YYCache at "1.1.0"
*** Checking out CocoaAsyncSocket at "72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc"
Failed to check out repository into /usr/local/lib/node_modules/appium/node_modules/appium-webdriveragent/Carthage/Checkouts/YYCache: Could not create working directory (Error Domain=NSCocoaErrorDomain Code=513 "You don’t have permission to save the file “YYCache” in the folder “Checkouts”." UserInfo={NSFilePath=/usr/local/lib/node_modules/appium/node_modules/appium-webdriveragent/Carthage/Checkouts/YYCache, NSUnderlyingError=0x7fa31c23bfe0 {Error Domain=NSPOSIXErrorDomain Code=13 "Permission denied"}})
xxxxxxxxx:appium-webdriveragent xxxxxxxxx$ sudo ./Scripts/bootstrap.sh -d
Fetching dependencies
*** Checking out CocoaAsyncSocket at "72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc"
*** Checking out YYCache at "1.1.0"
*** Fetching CocoaAsyncSocket
*** xcodebuild output can be found in /<KEY>carthage-xcodebuild.sg6izT.log
*** Building scheme "iOS Framework" in CocoaAsyncSocket.xcodeproj
Build Failed
Task failed with exit code 1:
/usr/bin/xcrun lipo -create /var/root/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8189n/CocoaAsyncSocket/72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc/Build/Intermediates.noindex/ArchiveIntermediates/iOS\ Framework/IntermediateBuildFilesPath/UninstalledProducts/iphoneos/CocoaAsyncSocket.framework/CocoaAsyncSocket /var/root/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8189n/CocoaAsyncSocket/72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc/Build/Products/Release-iphonesimulator/CocoaAsyncSocket.framework/CocoaAsyncSocket -output /usr/local/lib/node_modules/appium/node_modules/appium-webdriveragent/Carthage/Build/iOS/CocoaAsyncSocket.framework/CocoaAsyncSocket
This usually indicates that project itself failed to compile. Please check the xcodebuild log for more details: /<KEY>T/carthage-xcodebuild.sg6izT.log
xxxxxxxxMBP3:appium-webdriveragent xxxxxxx$ sudo ./Scripts/bootstrap.sh -d
Password:
Fetching dependencies
*** Checking out CocoaAsyncSocket at "72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc"
*** Checking out YYCache at "1.1.0"
*** xcodebuild output can be found in /var/<KEY>T/carthage-xcodebuild.1L1Dam.log
*** Building scheme "tvOS Framework" in CocoaAsyncSocket.xcodeproj
Build Failed
Task failed with exit code 1:
/usr/bin/xcrun lipo -create /var/root/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8189n/CocoaAsyncSocket/72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc/Build/Intermediates.noindex/ArchiveIntermediates/tvOS\ Framework/IntermediateBuildFilesPath/UninstalledProducts/appletvos/CocoaAsyncSocket.framework/CocoaAsyncSocket /var/root/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8189n/CocoaAsyncSocket/72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc/Build/Products/Release-appletvsimulator/CocoaAsyncSocket.framework/CocoaAsyncSocket -output /usr/local/lib/node_modules/appium/node_modules/appium-webdriveragent/Carthage/Build/tvOS/CocoaAsyncSocket.framework/CocoaAsyncSocket
This usually indicates that project itself failed to compile. Please check the xcodebuild log for more details: /var/<KEY>00000/T/carthage-xcodebuild.1L1Dam.log
xxxxxxxxMBP3:appium-webdriveragent xxxxxxxxx$ sudo ./Scripts/bootstrap.sh -d
Fetching dependencies
*** Checking out CocoaAsyncSocket at "72e0fa9e62d56e5bbb3f67e9cfd5aa85841735bc"
*** Checking out YYCache at "1.1.0"
*** xcodebuild output can be found in /var/folders/<KEY>0000000/T/carthage-xcodebuild.ECaWd7.log
*** Building scheme "tvOS Framework" in CocoaAsyncSocket.xcodeproj
Build Failed
[Truncated]
## Code To Reproduce Issue [ Good To Have ]
$sudo npm install -g appium --chromedriver-skip-install
$ sudo npm install -g appium-doctor
$ brew install libimobiledevice --HEAD
$ brew install ideviceinstaller
$ npm install -g ios-deploy
$sudo xcode-select --switch /Library/Developer/CommandLineTools
$ sudo gem install xcpretty
$cd /usr/local/lib/node_modules/appium/node_modules/appium-webdriveragent
$ brew install carthage
$ sudo npm i -g webpack
$sudo xcode-select -s /Applications/Xcode.app
$mkdir -p Resources/WebDriverAgent.bundle
$ ./Scripts/bootstrap.sh -d
after above step experiencing the issue
Kindly help me on resolving this issue.
Status: Issue closed
Answers:
username_1: Duplicate of https://github.com/appium/appium/issues/14611
username_0: Now issue resolved after replacing bootstrap.sh file as per
https://github.com/appium/appium/issues/14611 |
sheistechy/sheistechy.github.io | 550515776 | Title: Update the readme
Question:
username_0: **Is your feature request related to a problem? Please describe.**
The readme needs to be updated for beginners/contributors to easily find their way around the project
**Describe the solution you'd like**
You can make reference to https://github.com/gdg-x/hoverboard
**Would you like to work on it?**
No |
grpc/grpc-java | 188405594 | Title: OkHttpChannelBuilder doesn't work
Question:
username_0: On higher Android platform levels, OkHttpChannelBuilder works fine, but on Android API level 17 it logs:
E/GrpcService: java.lang.NoClassDefFoundError: io.grpc.okhttp.OkHttpChannelBuilder
Has anyone met the same problem? If you know anything about this, please tell me. Thank you!
Answers:
username_1: @username_0 Are you using Proguard?
username_0: Yes, is there anything wrong?
username_0: @username_1 Yes, is there anything wrong?
username_1: Are you certain that Proguard isn't rewriting that file's name? Can you inspect the jar (apk?) to see if it is actually inside?
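For example, something like this (paths are illustrative):
```sh
# extract the dex from the apk and search it for the class
unzip -o app-release.apk 'classes*.dex' -d out/
dexdump out/classes.dex | grep -i 'io/grpc/okhttp/OkHttpChannelBuilder'
```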
username_2: This doesn't make any sense to me. I know we have issue #2207, which includes a workaround... But this seems different/strange.
username_0: @username_1 @username_2 that's my fault, OkHttpChannelBuilder works ok! thank you guys
username_1: @username_0 what was the root cause?
username_0: @username_1 I don't know, but now it works fine automatically.
Status: Issue closed
|
ExcursionClub/ExCSystem | 384973996 | Title: Replace Nginx with Traefik?
Question:
username_0: Reverse proxy and load balancer
I think it will make CD easier as we are doing a lot of manual steps with Nginx
It is/has:
- Single binary or tiny Docker image
- Simple config
- No restarts
...and the greatest logo of all time
<img src="https://user-images.githubusercontent.com/9142800/49109633-80cb8e80-f28b-11e8-99f1-4fa8b244847f.png" width="200">
Status: Issue closed
Answers:
username_0: As long as we don't spin up a new server each time we deploy, Nginx works fine
SimpleSoftwareIO/simple-qrcode | 106609155 | Title: ->size() breaks merge()
Question:
username_0: It appears that if I attempt to use size() either before or after the merge method in a chain, the image is not merged.
Example:
<img src="data:image/png;base64, {!! base64_encode(QrCode::format('png')->merge('/public/tiny-green.png',.3)->size(190)->generate($url)) !!}">
QR code generates properly - but without the merged image. (http://goo.gl/ImyLlm)
If you remove size - it generates the code like a boss, complete with merged image (http://goo.gl/dlZGgi)
Answers:
username_1: Thank you for reporting this bug. Try running the size method before the merge method. This is a bug we will fix in the next release!
username_0: That was pretty much the first thing I tried. I didn't want to bother anyone with a bug report without attempting to resolve it on my own. I tried size() at the beginning, right before merge, right after, etc. The only way merge would work was without size.
I was wanting to take a look and see if I can squash this one on my own and send you guys a pull - but time has not been on my side. I doubt we will ultimately use merge for this project - I was just playing around with it when I ran into the issue with size().
Thank you very much for the follow up - if you need anything else from me please let me know.
username_1: Hmm. I must have missed this when testing! I'll try to get a bug fix out in the next few days. Thanks again for the report!
username_1: Fixed in 1.3.1 :wink:
Status: Issue closed
|
ant-design/pro-components | 839379300 | Title: 🧐[Question]
Question:
username_0: ### 🧐 Problem description
When I wrap an antd Form around a ProTable (with the ProTable's search set to false), the outer custom form cannot control the form items rendered in the ProTable's columns.
### 💻 Example code
<>
<Form labelAlign="right" onValuesChange={handleValuesChange} form={form} >
<ProTable {...tableProps}/>
</Form>
</>
### 🚑 Other information
Answers:
username_1: ProTable wraps its own form, so you can't do it that way; you can use `editable={{ form }}` to control it.
microsoft/artifacts-keyring | 1030271213 | Title: Support for Ubuntu 18.04 LTS?
Question:
username_0: According to [this page](https://docs.microsoft.com/en-us/azure/devops/artifacts/quickstarts/python-packages?view=azure-devops#connect-to-feed), "artifacts-keyring is not supported on newer versions of Ubuntu". This seems to mean that Ubuntu 18.04 LTS is not supported. This is a shame, as many Azure services such as Azure Machine Learning compute instances use this as their distribution of choice.
Any plans to update this package to support Ubuntu 18.04 LTS? Or should this service work in that version?
Status: Issue closed
Answers:
username_1: Our engineers were able to get the artifacts-keyring working on Ubuntu 18.04. Some tips:
* Make sure to have SDK set up.
* Make sure to upgrade pip and setuptools to latest version.
* Download the artifacts-keyring using the instructions in Connect to Feed (in the Artifacts UI).
* Then also update the keyrings.alt package (e.g. `pip3 install -U keyrings.alt`).
* You may need to do the interactive flow once. |
appium/appium-uiautomator2-server | 398061314 | Title: Instrumentation Error
Question:
username_0: My team uses the UiAutomator2 Server (this component) independently of Appium. We get the apk and the test runner apk and start the server with an adb command and then send queries directly to it using the JSON Wire Protocol. Recently we decided to investigate moving from the 0.4.1 version we've been on since we started to the latest apks. To get the apks, I installed the latest version of Appium (1.10) and took them out of the node_modules folder. I discovered that the instrumentation had changed from what is listed in the ReadMe for this site:
```io.appium.uiautomator2.server.test/android.support.test.runner.AndroidJUnitRunner```
to
```io.appium.uiautomator2.server.test/androidx.test.runner.AndroidJUnitRunner```
So it appears that the documentation for this component needs to be updated.
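For reference, the start command with the new runner looks something like this (a standard `am instrument` invocation; only the runner class changed):
```sh
adb shell am instrument -w \
  io.appium.uiautomator2.server.test/androidx.test.runner.AndroidJUnitRunner
```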
However, the bigger issue is that when I try to start the server using the instrumentation, I get a PermissionDenial error as follows:
```
java.lang.SecurityException: Permission Denial: starting instrumentation ComponentInfo{io.appium.uiautomator2.server.test/androidx.test.runner.AndroidJUnitRunner} from pid=4100, uid=4100 not allowed because package io.appium.uiautomator2.server.test does not have a signature matching the target io.appium.uiautomator2.server
at android.os.Parcel.readException(Parcel.java:1684)
at android.os.Parcel.readException(Parcel.java:1637)
at android.app.ActivityManagerProxy.startInstrumentation(ActivityManagerNative.java:4546)
at com.android.commands.am.Am.runInstrument(Am.java:889)
at com.android.commands.am.Am.onRun(Am.java:400)
at com.android.internal.os.BaseCommand.run(BaseCommand.java:51)
at com.android.commands.am.Am.main(Am.java:121)
at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:262)
INSTRUMENTATION_STATUS: id=ActivityManagerService
INSTRUMENTATION_STATUS: Error=Permission Denial: starting instrumentation ComponentInfo{io.appium.uiautomator2.server.test/androidx.test.runner.AndroidJUnitRunner} from pid=4100, uid=4100 not allowed because package io.appium.uiautomator2.server.test does not have a signature matching the target io.appium.uiautomator2.server
INSTRUMENTATION_STATUS_CODE: -1
```
Answers:
username_1: Seems like a signing error. Did you try uninstalling any old version first? Otherwise, looks like you may need to run apksigner on the artifacts.
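Something along these lines, so both apks end up signed with the same key (keystore and file names are examples):
```sh
apksigner sign --ks ~/.android/debug.keystore --ks-pass pass:android \
  appium-uiautomator2-server.apk
apksigner sign --ks ~/.android/debug.keystore --ks-pass pass:android \
  appium-uiautomator2-server-debug-androidTest.apk
```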
username_0: Hi @username_1, I did uninstall old versions, I had read about this, so I was sure to do that. If the problem is solved by running apksigner, why would it work for the Appium?
I'll research it and try it anyhow.
username_1: there's code in appium that allows it to resign things.
and anyway if you just downloaded the uia2 apks, they'll have been signed with appium's cert, and your app certainly won't have been. you're probably better off running this project from source and using gradle to build apks with your own debug cert.
username_0: @username_1 that first line seems like the key. So Appium is re-signing the pair of them?
As for the second paragraph, I'm using the two apks together (the server, and the 'test' that starts the server). That's the pair right? So the fact that its appium's cert shouldn't matter. Or am I misunderstanding how this works?
It sounds like the take away is that the released apks for this appium component can't be used unless they are signed together. There doesn't seem to be any documentation that informs users of this. I know I'm an atypical user by using the component on its own, but I probably am not the only one. Our team is trying to limit our dependencies as much as we can. |
internet-sicherheit/visualisation_of_bloxberg_network | 799246962 | Title: Indexing blockchain transaction data locally
Question:
username_0: In order to allow faster search and retrieval of bloxberg transaction data, indexing blockchain transaction data locally seems to be a necessary step.
The suggested steps:
- Using AWS server to install local full node of the bloxberg blockchain and data visualisation website.
- The bloxberg transaction data can be accessed over web3 and IPC.
- Installing database on AWS
- Copying data from the full node to the database (see the sketch below)
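A rough sketch of that copy step over IPC (recent web3.py; the IPC path and the `store` helper are placeholders):
```python
from web3 import Web3

w3 = Web3(Web3.IPCProvider("/data/bloxberg/geth.ipc"))  # path is an example

for number in range(w3.eth.block_number + 1):
    block = w3.eth.get_block(number, full_transactions=True)
    for tx in block.transactions:
        # store() is a placeholder for the actual database insert
        store(tx.hash.hex(), number, tx["from"], tx.to, tx.value)
```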
Possible resources:
https://github.com/username_1/ethereum-network-analysis
Questions:
- Is a database necessary or can the full node be indexed?
- Which database would be recommended? (MongoDB?)
Answers:
username_1: With regard to the choice of database technology, as I see it, this seems to be dependent on the kind of queries that need to be run. In general, going with an RDBMS is never a bad choice (e.g. [PostgreSQL](https://www.postgresql.org/)), since it is flexible for most use cases. However, this requires a relational schema and one needs to decide if de-normalization is necessary for better query performance.
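To make the relational option concrete, a de-normalized starting point could be as simple as this (illustrative DDL only):
```sql
-- one flat, query-oriented table; indexes chosen for the expected lookups
CREATE TABLE transactions (
    tx_hash      BYTEA PRIMARY KEY,
    block_number BIGINT NOT NULL,
    from_addr    BYTEA  NOT NULL,
    to_addr      BYTEA,                    -- NULL for contract creations
    value_wei    NUMERIC(78, 0) NOT NULL   -- fits a uint256
);
CREATE INDEX ON transactions (block_number);
CREATE INDEX ON transactions (from_addr);
```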
I am not sure what a document database (such as [MongoDB](https://www.mongodb.com/) or [CouchDB](https://couchdb.apache.org/)) would bring to the table. Do we need to store semi-structured and potentially changing documents? Of course, it is very nice that they answer directly with JSON. This means an architecture can directly plug a Javascript frontend onto a MongoDB backend.
For high-performance queries, something like [Cassandra ](https://cassandra.apache.org/) or [ScyllaDB](https://www.scylladb.com/) might be a good choice. If the ledger is stored flat (not relational), the ledger itself is actually a Cassandra table.
Another consideration: Initial population of the database (aka index) might take some time, but once the system is in place, subsequent updates should have low-performance needs, sind bloxberg does not have high load. |
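A minimal sketch of the "copying data from the full node to the database" step, using web3.py (v6-style API) and PostgreSQL; the RPC endpoint, connection string, and table schema are assumptions for illustration:
```python
from web3 import Web3
import psycopg2

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))   # assumed local full node RPC
conn = psycopg2.connect("dbname=bloxberg user=indexer")  # assumed database/credentials

with conn, conn.cursor() as cur:
    # Assumed table:
    #   tx(hash TEXT PRIMARY KEY, block BIGINT, "from" TEXT, "to" TEXT, value NUMERIC)
    latest = w3.eth.block_number
    for n in range(latest - 10, latest + 1):  # small window; a real indexer walks all blocks
        block = w3.eth.get_block(n, full_transactions=True)
        for tx in block.transactions:
            cur.execute(
                'INSERT INTO tx (hash, block, "from", "to", value) '
                "VALUES (%s, %s, %s, %s, %s) ON CONFLICT (hash) DO NOTHING",
                # "to" is None for contract-creation transactions
                (tx.hash.hex(), n, tx["from"], tx.get("to"), tx["value"]),
            )
```
Whichever database is chosen (PostgreSQL, Cassandra, or MongoDB), the shape of this loop stays the same; only the write statement changes.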
ringcentral/ringcentral-web-phone | 698478872 | Title: session.ignore()
Question:
username_0: What is this supposed to do? It doesn't seem to work in two common scenarios. On a direct call to the user's DID, the call should go to voicemail, but the caller hears hold music for several seconds, then a message that no one is available, and the call hangs up. On a call queue call it should simply leave the call in the queue, which it seems to do. On an extension-to-extension (user-to-user) call it also does not go to voicemail, but plays the "no one is available to take your call" message and hangs up.
Status: Issue closed |
godotengine/godot | 1114489812 | Title: TileMap editor never disappears
Question:
username_0: ### Godot version
b25c7fe
### System information
W10
### Issue description

When the TileMap editor opens, changing nodes or even opening empty scenes does not close it. Interestingly, it can be used simultaneously with any other editor.
### Steps to reproduce
1. Add TileMap
2. Click it
3. Now try to make the TileMap tab disappear
### Minimal reproduction project
_No response_
Status: Issue closed |
phpMv/ubiquity | 650277635 | Title: [Rest] Validation on insertion should be complete
Question:
username_0: <!--
Use the format: [part] Element Should Do X
i.e. [Router] Route requirement should allow to set an integer url parameter
[part] is one of [Views,Controllers,ORM,Router,REST,Config,Git,SEO,Cache,UbiquityMyAdmin]
-->
### Steps
A model with validators:
```php
class Inscription {
/**
*
* @id
* @column("name"=>"idIns","nullable"=>false,"dbType"=>"int(11)")
* @validator("id","constraints"=>array("autoinc"=>true))
*/
private $id;
/**
*
* @column("name"=>"nom","nullable"=>false,"dbType"=>"varchar(50)")
* @validator("length","constraints"=>array("max"=>50))
*/
private $nom;
/**
*
* @column("name"=>"email","nullable"=>false,"dbType"=>"varchar(100)")
* @validator("email","constraints"=>array("notNull"=>true))
* @validator("length","constraints"=>array("max"=>100))
*/
private $email;
```
With a REST controller based on RestBaseController:
```php
/**
* Rest Controller RestInscription
* @route("/inscriptions/","inherited"=>true,"automated"=>true)
* @rest("resource"=>"models\\Inscription")
*/
class RestInscription extends \Ubiquity\controllers\rest\RestController {
}
```
Trying to add an instance with a missing email address:
see https://github.com/phpMv/ubiquity/issues/122
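For instance, a POST body along these lines (the value is illustrative; the actual payload is in the linked issue):
```json
{
    "nom": "Martin"
}
```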
### Expected Result
The insertion should generate a violation on missing email
### Actual Result
No violation is reported on insertion:

### Versions
- Ubiquity framework 2.3.10
- Ubiquity devtools 1.2.15
- php 7.4.4
- OS all
Answers:
username_0: So, logically, the only exception to the validation on insertion concerns the possible auto-increment field (`IdValidator`).
https://github.com/phpMv/ubiquity/blob/ab8579856161b4684ef8a4904a7b654457dac592/src/Ubiquity/controllers/rest/RestBaseController.php#L244-L246
Status: Issue closed
|
Gala4peace/Dance-in-Zurich | 404181390 | Title: The latest version of the website
Question:
username_0: Hi @sophialittlejohn
I really like the navigation now :) I added a :hover effect on one of the pages with the help of Joao.
I think I will still try to animate the grid today ... see you shortly :) |
hongyuanjia/eplusr | 812739348 | Title: Error with upcoming units 0.7-0
Question:
username_0: We are preparing units 0.7-0 for release, and our revdep checks show a new error in `eplusr` (see [here](https://github.com/r-quantities/units/blob/master/revdep/problems.md#eplusr)). Please review the [changes](https://github.com/r-quantities/units/blob/master/NEWS.md#version-07-0) to fix the issues and safely retain your package on CRAN.
Status: Issue closed
Answers:
username_1: Thanks @username_0. Will upload a new version for compatibility.
popcodeorg/popcode | 194622038 | Title: Error in /
Question:
username_0: ## Error in Popcode
**Error** in **/**
PERMISSION_DENIED: Permission denied
[View on Bugsnag](https://app.bugsnag.com/popcode/popcode/errors/584acc4442a024dfe3379814?event_id=584acc443620df11005ed343)
## Stacktrace
webpack:///~/firebase/lib/firebase-web.js:76 -
[View full stacktrace](https://app.bugsnag.com/popcode/popcode/errors/584acc4442a024dfe3379814?event_id=584acc443620df11005ed343)
Answers:
username_0: An error linked to this issue has been marked as fixed in Bugsnag
[**Error** in **/**](https://app.bugsnag.com/popcode/popcode/errors/584acc4442a024dfe3379814?event_id=584acc443620df11005ed343)
Status: Issue closed
|
davisking/dlib | 274744964 | Title: Segmentation Fault in impl_extract_fhog_features - Ubuntu 16.04
Question:
username_0: Hello,
I am running some stability testing with the very basic code that is using the FHOG for face detection. I am running a HD video coming in at 30fps. I am currently running the detector on each of the frames.
I checked with GDB and was stopped in the push_back below. I need to check the memory allocation.
.....
scanner.load(img);
std::vector<std::pair<double, rectangle> > dets;
std::vector<rect_detection> dets_accum;
for (unsigned long i = 0; i < w.size(); ++i)
{
const double thresh = w[i].w(scanner.get_num_dimensions());
scanner.detect(w[i].get_detect_argument(), dets, thresh + adjust_threshold);
for (unsigned long j = 0; j < dets.size(); ++j)
{
rect_detection temp;
temp.detection_confidence = dets[j].first-thresh;
temp.weight_index = i;
temp.rect = dets[j].second;
dets_accum.push_back(temp); <<<<< last call before the segmentation fault
}
}
```
Thread 1 "mai-app" received signal SIGSEGV, Segmentation fault.
malloc_consolidate (av=av@entry=0x7ffff2c88b20 <main_arena>) at malloc.c:4167
4167 malloc.c: No such file or directory.
(gdb) bt
#0 malloc_consolidate (av=av@entry=0x7ffff2c88b20 <main_arena>) at malloc.c:4167
#1 0x00007ffff2945cde in _int_malloc (av=av@entry=0x7ffff2c88b20 <main_arena>,
bytes=bytes@entry=1073088) at malloc.c:3450
#2 0x00007ffff2948184 in __GI___libc_malloc (bytes=1073088) at malloc.c:2913
#3 0x00007ffff323ae78 in operator new(unsigned long) ()
from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff323af19 in operator new[](unsigned long) ()
from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x0000000000426ff4 in void dlib::impl_fhog::impl_extract_fhog_features<dlib::cv_image<dlib::bgr_pixel>, dlib::array<dlib::array2d<float, dlib::memory_manager_stateless_kernel_1<char> >, dlib::memory_manager_stateless_kernel_1<char> > >(dlib::cv_image<dlib::bgr_pixel> const&, dlib::array<dlib::array2d<float, dlib::memory_manager_stateless_kernel_1<char> >, dlib::memory_manager_stateless_kernel_1<char> >&, int, int, int) ()
#6 0x000000000042b8be in void dlib::impl::create_fhog_pyramid<dlib::pyramid_down<6u>, dlib::cv_image<dlib::bgr_pixel>, dlib::default_fhog_feature_extractor>(dlib::cv_image<dlib::bgr_pixel> const&, dlib::default_fhog_feature_extractor const&, dlib::array<dlib::array<dlib::array2d<float, dlib::memory_manager_stateless_kernel_1<char> >, dlib::memory_manager_stateless_kernel_1<char> >, dlib::memory_manager_stateless_kernel_1<char> >&, int, int, int, unsigned long, unsigned long, unsigned long) ()
#7 0x00000000004313ef in void dlib::object_detector<dlib::scan_fhog_pyramid<dlib::pyramid_down<6u>, dlib::default_fhog_feature_extractor> >::operator()<dlib::cv_image<dlib::bgr_pixel> >(dlib::cv_image<dlib::bgr_pixel> const&, std::vector<dlib::rect_detection, std::allocator<dlib::rect_detection> >&, double) ()
#8 0x0000000000414b8d in makexxxxGui() ()
#9 0x0000000000410550 in main ()
```
Answers:
username_1: Post something that I can run that reproduces the problem.
username_0: Thanks for the quick reply, I am working on it.
username_0: I am closing this issue, as we were not able to reproduce the crash with unit testing. We are investigating other SW components now.
Status: Issue closed
|