repo_name | issue_id | text
---|---|---|
martenson/public-galaxy-servers | 261106558 | Title: Programmatically Generate this list from the Public Galaxy Server List page
Question:
username_0: @martin, @username_1, @bebatut,
Since we migrated the [public Galaxy server list on the hub](https://galaxyproject.org/public-galaxy-servers/) from a monolithic web page to directory based approach, I think it would be easy to programmatically generate this CSV from that directory structure. Here's how:
## Current columns
### `name`
This is `title` in the [server page metadata](https://github.com/galaxyproject/galaxy-hub/blob/master/CONTRIBUTING.md#public-galaxy-server-metadata)
### `url`
This is `url` in the server page metadata. Would require checking that every one of these actually points to the server. (I think they do - I'll be visiting every page anyway.)
### `support`
As far as I can tell, these are all email addresses. These do not exist in the current metadata, although sometimes they are in the *User Support* section of the page content.
Are these all supposed to be a single email address? Are there other options we could do here, like a semicolon separated list of emails, or a URL?
See `email_contacts` below.
### `location`
This is a standard two letter country code.
See `home_country_code` below
### `tags`
I was thinking about adding tags to the server pages and I asked @username_2 to look into metalsmith support for tags, but I also told him it was an unimaginably low priority. We can support tags in the page metadata before we do anything with them in the hub. Some of the tags are already on the pages, but with a different name:
`server_group: "general"`
There are three groups: `general`, `domain`, and `tool-publishing`. `general` maps to `genomics`, and `tool-publishing` maps to `tools`. Those two are easy.
Domain-specific tags like `phage` aren't currently supported in the hub.
See `tags` below.
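In code, the easy part of that mapping is a tiny lookup table; a Python sketch (the fallback behavior for domain-specific groups is an assumption):
```python
# Map the hub's server_group metadata onto the CSV's tag vocabulary.
SERVER_GROUP_TO_TAG = {
    "general": "genomics",
    "tool-publishing": "tools",
}

def group_to_tag(server_group):
    # "domain" groups have no CSV equivalent yet, so fall back to the
    # group name itself until domain-specific tags are supported.
    return SERVER_GROUP_TO_TAG.get(server_group, server_group)
```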
## Proposed new columns and metadata
### `info_page`, in CSV
URL of the server's information page on the hub.
### `email_contacts`, in Hub
Copied from `support` in CSV.
### `home_country_code`, in Hub
Copied from `location` in CSV.
But ...
#### Country codes are not as informative as country names
Displaying "DK" in the hub is not informative. But, country names are ambiguous and 5 names can map to one country.
What say ye?
#### More location?
@bebatut and I have discussed having *Event* locations be free form text, but be specific enough that we could pass the string to a mapping service, and it would return some geolocation.
Should we do that with `location`, or is country all we'll ever care about (or all we care about now :-)?
#### I'm OK with country code for now.
I just don't want to display it, and it's easy to change this programmatically later if we want to go there.
### `tags`, in Hub
Initially populated from `tags` in CSV. Combined with `server_group` when updating `tags` in CSV.
## Mixed Model
We don't have to go fully one way or the other. We could use a mixed model where the file can be both programmatically and manually updated. The program would read in the CSV first, and then update information in place. It would report on any updates it did, and on anything that's in the CSV, but not in the Hub.
Differences would be reconciled before the new CSV is pushed.
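For illustration, a minimal sketch of that reconciliation pass (Python; the column names follow the CSV columns above, while the shape of the hub data is an assumption):
```python
import csv

def reconcile(csv_path, hub_servers):
    """Update CSV rows in place from hub data and report differences.

    `hub_servers` is assumed to be a dict keyed by server name, holding
    the metadata parsed from the hub's per-server pages.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        hub = hub_servers.get(row["name"])
        if hub is None:
            # In the CSV, but not in the Hub: report it for manual review.
            print("In CSV but not in Hub: {}".format(row["name"]))
            continue
        for column in ("url", "support", "location", "tags"):
            if column in hub and row[column] != hub[column]:
                print("Updating {} / {}: {!r} -> {!r}".format(
                    row["name"], column, row[column], hub[column]))
                row[column] = hub[column]
    return rows
```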
Answers:
username_1: We could switch to [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) and that'd be fine, but more work for the admin / author. I'd personally like it since it'd make the [map page](https://grafana.denbi.uni-freiburg.de/dashboard/db/public-galaxy-servers?orgId=1) even more attractive / detailed. I'm not really caring so much, I'm not sure how much the end user cares. This is probably more important for the GTN map than it is for "where are the servers"
username_0: I'm good with that too. I was just worried about the multiple sources that were given for this file.
username_1: they don't change very often. We could really just hardcode the list for lookup. They're 2-letter ISO country codes here because that's what the world map plugin uses in grafana. https://grafana.com/plugins/grafana-worldmap-panel We could easily switch to lat/lon or 3-letter country codes. Anything else I'd have to post-process into country codes which I could live with if need be.
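For example, the hardcoded lookup could be as small as this (a Python sketch, abbreviated to a few entries):
```python
# 2-letter ISO 3166-1 codes -> display names; extend as needed.
COUNTRY_NAMES = {
    "DK": "Denmark",
    "DE": "Germany",
    "US": "United States",
}

def country_name(code):
    # Fall back to the raw code for anything not in the table.
    return COUNTRY_NAMES.get(code.upper(), code)
```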
username_0: @username_1: I think this would be workable. At the time I tried to do this, I didn't know how to get markdown in the YAML to work. Now, I know how to do this.
If this can wait until the last week of October/first week of November, then I can work on the translation in the hub. I can also go through and add Citations sections at that time (a manual process).
username_1: Oh! I didn't realise that was the issue, that metalsmith has so much difficulty for things like markdown in yaml. I'm really used to jekyll and other systems in which this is really normal / well supported.
Again, I'm sorry, I don't mean to be difficult. I have strong preferences, but these should be tempered with others' opinions; don't let me go pushing things on y'all just because they make sense to me (and not necessarily to the silent majority)
Of course can wait, zero rush on any of this.
username_2: Just have to push back on the perceived failure of metalsmith here; it's really not a fair judgement. It's perfectly well-supported and normal in metalsmith, too -- we just have the build pipeline set up to not automatically attempt to convert data in yaml fields because that's not the common case for us. The markdown is generally in the markdown, the yaml data is in the frontmatter. For when we *do* have markdown in the yaml frontmatter, @username_0 now knows how to do it, when we want to do it.
username_1: Is there no equivalent concept of `site data` like there is in jekyll / hugo / other static site generators? Just a folder where you dump yaml files that are used in templates, etc.?
username_2: Sure. Keep in mind that metalsmith is very DIY, maximum flexibility to do basically anything you want.
We use yaml files for the menu, for example (https://github.com/galaxyproject/galaxy-hub/blob/master/src/config/menu.yaml). Which is loaded here: https://github.com/galaxyproject/galaxy-hub/blob/master/build.js#L190
But basically *all* of the other data on the site is in per-object yaml frontmatter of composite markdown files. This makes it way easier to deal with individual content items, instead of digging through large comprehensive yaml blobs.
username_1: Cool, thanks. That's good to know.
username_0: Note for future reference: @username_0's ignorance of most things doesn't say a thing about most things.
I'm trying to catch up! :-)
username_0: Hi All,
I haven't forgotten about this and I plan to work on this starting in about 10 days. I thought I would update the thread. @jxtx had a conversation at Genome Informatics about adding domain tags to the server descriptions.
I'm all for this if we can identify ontologies that cover our bases. As I see it, there are three general domains:
1. Parts of the tree of life: Whale Shark! Viruses, etc.
2. Disciplines: Genomics, Computational Chemistry, Social Science, etc.
3. Methodologies: RNA-Seq, machine learning, CLIP-Seq, etc.
I haven't worked with ontologies for years, but I'll do some research when I get to this. My vague plan is to
- Identify an all-encompassing ontology service, and use that.
- if that doesn't exist then we'll use individual ontology services as needed.
- In the server metadata, add an array of these:
- tag id
- tag URL (needed if all encompassing service does not exist)
- tag text (sure, this is redundant, but I don't want to pull it live when building the hub)
Tag IDs and text would be displayed and link to a URL.
If you see better ways to do this, please post here by, say, November 8.
username_1: :+1: ontologies |
laravel-admin-extensions/multi-language | 489649846 | Title: language-menu.blade.php not working
Question:
username_0: "encore/laravel-admin": "1.7.1",
"laravel-admin-extensions/multi-language": "^0.0.3",
"laravel/framework": "5.8.*",
In language-menu.blade.php, the line
$.ajaxSetup({headers: {'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content')}});
leaves the CSRF token undefined, so change
$.post(`/admin/locale`,{locale: id}, function () {
to
$.post(`/admin/locale`,{_token:LA.token,locale: id}, function () {
and this will work<issue_closed>
Status: Issue closed |
ms-iot/ROSOnWindows | 453633224 | Title: [DevOps] Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "2"
Question:
username_0: **Describe the bug**
By using Azure DevOps pipelines for CI, I added the following step in the flow:
```yaml
- script: |
    call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Auxiliary\Build\vcvars64.bat"
    call "C:\opt\ros\melodic\x64\setup.bat"
    pushd "%Build_StagingDirectory%\catkin_ws"
    catkin_make_isolated --only-pkg-with-deps kobuki_core --install
```
Then, when I kick off a build, this error shows up:
```
-- This workspace overlays: C:/opt/ros/melodic/x64
CMake Error at C:/opt/rosdeps/x64/share/cmake-3.11/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
Could NOT find PythonInterp: Found unsuitable version "1.4", but required
is at least "2" (found C:/ProgramData/chocolatey/bin/python2.exe)
```
**Expected behavior**
Kick off a DevOps build without this error.
Answers:
username_0: https://github.com/Kitware/CMake/blob/v3.11.0/Modules/FindPythonInterp.cmake#L90-L108
FindPythonInterp.cmake uses a system registry key to search for `PYTHON_EXECUTABLE`. On machines with multiple Pythons installed system-wide, it can end up picking a different one than the one from the ROS installation.
Adding `-DPYTHON_EXECUTABLE=c:/opt/python27amd64/python.exe` can override this behavior to get around this problem. For example,
```yaml
- script: |
    call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Auxiliary\Build\vcvars64.bat"
    call "C:\opt\ros\melodic\x64\setup.bat"
    catkin_make_isolated -DPYTHON_EXECUTABLE=c:/opt/python27amd64/python.exe
```
Status: Issue closed
username_1: Have you solved the error? I'm running into the same error now. I hope I can get your help. Thank you!
username_2: Hi @username_1 ,
I think you are asking about a different problem. This bug is about using Python in a ROS DevOps environment, which @username_0 offered a correction for.
username_1: @username_2 Yes, but I'm getting the same error. I can't find a solution yet. Do you know what caused this error?
username_2: @username_1 The solution presented here is to pass the python executable to catkin.
```
catkin_make_isolated -DPYTHON_EXECUTABLE=c:/opt/python27amd64/python.exe
```
username_1: @username_2 My platform is Android studio, can I handle it like this?
username_2: @username_1 I'm sorry, we can't help you. This bug is unrelated. Perhaps try stack overflow or the android studio github. |
openfaas/openfaas-cloud | 570453565 | Title: [Feature] Store CUSTOMERS list in a secret for privacy
Question:
username_0: ## Description
This feature would allow the CUSTOMERS list to be stored in a secret.
Why? For privacy. Today an HTTPS URL is required; most people use a public GitHub repo and the raw CDN URL.
A Kubernetes secret could be used for the list, and then attached to each function.
A mode would be required to support existing users and the community cluster.
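A rough sketch of how a function could load the list under this scheme (Python; the secret mount path follows the usual OpenFaaS convention, and the env-var fallback for the compatibility mode is an assumption):
```python
import os
import urllib.request

def load_customers():
    # Preferred: read the CUSTOMERS list from a mounted Kubernetes secret.
    secret_path = "/var/openfaas/secrets/customers"
    if os.path.exists(secret_path):
        with open(secret_path) as f:
            return [line.strip() for line in f if line.strip()]
    # Compatibility mode: fall back to the public HTTPS URL
    # (the env var name here is hypothetical).
    url = os.environ.get("customers_url")
    if url:
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8")
        return [line.strip() for line in body.splitlines() if line.strip()]
    return []
```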
Answers:
username_0: - [x] edge-auth
- [x] sdk
- [x] github-event
- [ ] gitlab-event
Status: Issue closed
|
JuliaData/JuliaDB.jl | 660579798 | Title: Make a new release of JuliaDB?
Question:
username_0: Would it be possible to make a new release of JuliaDB? There have been 13 commits to `master` since the last release (v0.12.0).
Answers:
username_0: cc: @andreasnoack
username_1: There is, but it is not tagged on GitHub, see #312. But I am all for a little maintenance!
username_2: Will be fixed by #347
Status: Issue closed
username_2: 0.13.1 tagging is on the way! |
kotest/kotest | 807442062 | Title: Merge FunSpec and ShouldSpec?
Question:
username_0: Is there any architectural reason to not have a single style where both `test` and `should` can be used? I actually have tests where that could be convenient.
Answers:
username_1: No reason why you couldn't combine a bunch of them really.
I guess history.
Could have a new one that replaces them both, and the other two are discouraged.
username_1: Or what you say, add should support into fun spec.
username_1: Closing this as this won't happen before 5.0 which is probably a year or two away.
Status: Issue closed
|
Azure/azure-functions-core-tools | 390290411 | Title: Access denied to wrong path in func publish
Question:
username_0: I am attempting to publish a function, but I am running into problems as the Windows user login is being transformed ("." is removed). Creating a new account would be the obvious option, but is not possible here.
<issue_closed>
Status: Issue closed |
CityOfZion/neon-wallet | 294634590 | Title: Withdrew NEO tokens from Binance to Nano Ledger S. Showing ZERO BALANCE
Question:
username_0: Hello, I recently withdrew my NEO tokens from Binance and sent them to my Nano Ledger S NEO wallet. Binance had a confirmation saying it was sent. However, when I logged into my Nano Ledger S, it said ZERO BALANCE. This is because, as I have now found out, I have a different NEO ADDRESS. It was sent to my ORIGINAL NEO ADDRESS, but that is no longer my current address when I log into my Nano Ledger. Why did my 'sent address' change?? Can someone please reply back with the proper steps to help me retrieve my tokens. Thanks in advance.
Answers:
username_1: Hello, is your issue the same as here?
https://github.com/CityOfZion/neon-wallet/issues/524
If so, you can fill this form, it will be analysed by Ledger support.
https://docs.google.com/forms/d/e/1FAIpQLSfgEim2xICDBcpL6tL5q0_BKYCNevT5Q6-qVyegJhe4tdz6cw/viewform?usp=sf_link
username_0: @username_1 Thank you! I filled out the online form and submitted. I hope we can get this issue resolved soon.
username_2: My thread is #524.
Ledger wants to claim it is the software wallets.
We need to gather as many users, as possible. who lost tokens of any kind, in one list
This "undocumented feature" (bug) in the Ledger Nano S is causing users to loose assets.
When we show the same symptoms across several tokens, that takes it back to one common denominator. The hardware. The Ledger Nano S.
username_3: How did you find out that you have a different address? I have a similar problem... I withdrew 150 NEO from Binance and sent it to my NEO Ledger address. My NEO showed up in there, but after a few hours my NEO was transferred to another address... I didn't make that transfer... but when I log in to Ledger I have the same address as I had before... Is it possible that I've been hacked, or is it a bug?? Please help!!!
username_4: This is happening to me too, using a Ledger Nano S. I see the block is far behind and will not update to the present one. I think this is why the balance does not show up: the NEON wallet has not reached the transfer block yet.
username_2: We are asking users who lost tokens using the Ledger Nano S input here (any and all token)
https://docs.google.com/forms/d/e/1FAIpQLSfgEim2xICDBcpL6tL5q0_BKYCNevT5Q6-qVyegJhe4tdz6cw/viewform?usp=sf_link
Results can be viewed here : https://docs.google.com/spreadsheets/d/1afBgZ5yvr6FHN1CBl2bSlJIEfbpNtHukQSqW731gXmE/edit?usp=sharing
Anyone who has suggestions of where to go with this please give us you input!
Status: Issue closed
username_5: Please consolidate in thread #524 . Also note: Github is used to report code issues. Any support can be found in the Discord #support channel. |
cmangos/issues | 1115689507 | Title: Wastewander Rogue NPC model in Tanaris stands still on client but can still melee hit.🐛 [Bug Report]
Question:
username_0: ### Bug Details
Engage Wastewander _rogues_ between Gadgetzan and Steamwheedle Port in Tanaris. These mobs are stealthed by default. On entering combat, they come out of stealth but do not move from that position on the client side; however, you will take damage from them even when out of the model's range, as I assume they are within range on the server side. You are also unable to melee hit them, as you are never sure exactly where they 'really' are: the client-side model is not at the server-side game position.
### Steps to Reproduce
1. Teleport to Tanaris.
2. Find Wastewander Rogue's.
3. Melee combat one.
4. Run away and take damage while model is stationary or try to combat model.
### Expected behavior
The melee NPC model follows the player while in combat, and/or the player can hit the NPC when in range.
### Suggested Workaround
_No response_
### Crash Log
_No response_
### Core SHA1 Commit Hash
258bbaea18ec324f8e0a901c8a9580e0647b9fcc
### Database SHA1 Commit Hash
50d8fbad49e1c044b2f43fb45f32f359b7868eb
### Operating System
windows 10
### Client Version
2.4.3 (The Burning Crusade)<issue_closed>
Status: Issue closed |
connect-foundation/2019-18 | 533766153 | Title: [Design] Not a bug, just a design request
Question:
username_0: 
This isn't a bug, just some design requests.
1. Clicking the planet button in the top left of the screen goes to the main page; it would be nice if clicking the CRAFOLIO logo did the same.
2. When the window is shrunk, the CRAFOLIO logo text and the search bar overlap.
3. The upload button text is shifted to the left, so it would be better to center-align it.
4. It would be good to make the CRAFOLIO logo and the tabs (작품/Works, 배경화면/Wallpapers, 음악/Music) non-draggable.
- Apply the `user-select: none;` property
Answers:
username_1: Thank you!!
yb172/experiments | 411675447 | Title: Kube: make sure something is generated
Question:
username_0: Right now it sometimes happens that empty page is shown. Since there are no errors reported most likely it is just an empty response: random number was generated and it was 0 - exit immediately.
It's better to actually generate something.
And this looks like a great use case for end-to-end test |
CMLTeam/cmltemplate | 808752380 | Title: Pre-configure RabbitMQ
Question:
username_0: Make sure we can easily start RabbitMQ for local development with all needed default settings pre-set.
- Make it run via docker compose (same as for mysql/postgres)
- wire the starting shortcut in `Makesurefile`<issue_closed>
Status: Issue closed |
facebook/react-native | 142143181 | Title: TypeError: babelHelpers.typeof is not a function. (In 'babelHelpers.typeof(target)', 'babelHelpers.typeof' is undefined)
Question:
username_0: Hey there and thank you for using React Native!
React Native, as you've probably heard, is getting really popular and truth is we're getting a bit overwhelmed by the activity surrounding it. There are just too many issues for us to manage properly.
Do the checklist before filing an issue:
- [ ] Is this something you can **debug and fix**? Send a pull request! Bug fixes and documentation fixes are welcome.
- [ ] Have a usage question? Ask your question on [StackOverflow](http://stackoverflow.com/questions/tagged/react-native). We use StackOverflow for usage question and GitHub for bugs.
- [ ] Have an idea for a feature? Post the feature request on [Product Pains](https://productpains.com/product/react-native/). It has a voting system to surface the important issues. GitHub issues should only be used for bugs.
None of the above, create a bug report
------------------------------------------------------------------
Make sure to add **all the information needed to understand the bug** so that someone can help. If the info is missing we'll add the 'Needs more information' label and close the issue until there is enough information.
- [ ] Provide a **minimal code snippet** / [rnplay](https://rnplay.org/) example that reproduces the bug.
- [ ] Provide **screenshots** where appropriate
- [ ] What's the **version** of React Native you're using?
- [ ] Does this occur on iOS, Android or both?
- [ ] Are you using Mac, Linux or Windows?
Answers:
username_1: This is an issue with your .babelrc file (99%). I think that's a good question for stack-overflow.
username_1: @facebook-github-bot stack-overflow |
JuliaPlots/StatsPlots.jl | 566080808 | Title: New release?
Question:
username_0: Is it possible to release a new version? As https://github.com/JuliaPlots/StatsPlots.jl/issues/301 shows, if Pkg installs Tables >= 1 before adding StatsPlots, users end up with StatsPlots 0.12 that does not upper bound its dependencies but actually is incompatible with Tables >= 1.
Answers:
username_1: Yes please - I'm having similar problems.
username_2: does it work with 1.0? @username_1
username_1: @username_2 Not sure, I'm blocked from upgrading by explicit version bounds 🤦♂. I haven't tested - TBH I'm not sure how Tables is integrated
username_2: @username_0 we might have to fix this in the registry as well.
Status: Issue closed
username_3: is there a way of getting a working `@df` macro right now??
username_2: Does it not work? If so, could you open another issue with the error?
username_3: I get the same error as in #301, which is already fixed on master, but I'd rather not install it as it requires (or seems to require) the dev version of Plots, etc. So the question is rather:
how do I work around this, so that I can get an environment which can run the examples from the readme?
username_3: ```julia
using Pkg
Pkg.rm("DataFrames")
Pkg.rm("StatsPlots")
pkg"add [email protected]"
Pkg.add("DataFrames")
Pkg.add("StatsPlots")
```
seems to have fixed this in my case, sorry for the noise. |
beardedspice/beardedspice | 154886013 | Title: Google Chrome Slowness
Question:
username_0: I have noticed in the latest version of Chrome and the Beardedspice application considerable slowness in moving to the next track in Pandora, and starting / stopping. Thoughts?
Answers:
username_1: I've noticed this as well and have been looking to address it. Thankfully my latest branch of changes seem to address it (40+ tabs open and instant play/pause response). We'll see as I start wrapping up, I'll update as it gets closer to release.
username_0: Awesome, I was wondering if it was just me. Thank you for all of your hard work on this. Once the update goes live I will switch back to using it.
username_1: So far it's faster than before, but still has a significant (~3s) delay for me. My scenario has over 50 tabs open across 5 windows, and the delay scales linearly with that. It'll be one of the focuses of my next push. :)
username_0: Same here, I regularly have many browsers open across multiple spaces. I bet this is common.
username_2: Same thing with Safari…
username_3: @username_0 what version of the BeardedSpice? Or was it build from master branch?
username_0: I am using version `2.1.0`
username_4: +1 for fixing this issue
Status: Issue closed
|
snowflakedb/snowflake-connector-python | 505288236 | Title: 1.9.0 is no longer on pypi
Question:
username_0: It seems 1.9.0 has been pulled from pypi. This is breaking a lot of our pipelines and build processes that we version locked to 1.9.0 (with an asn1crypto lock on our end to make it work). Please don't pull things from pypi.
Answers:
username_1: same experience here this morning, multiple project builds broke with this removal.
username_2: notice that the release notes for 1.9.1 and 2.0.1 are both this:
`Add asn1crypto requirement to mitigate incompatibility change.`
maybe a mistake?
however you slice it, no bueno
username_3: Background:
Python Connector v1.9.0 was removed because it did not pin the asn1crypto version, resulting in several customers breaking when they upgraded the asn1crypto library to version 1.0.0. asn1crypto removed an API that is currently used by the Snowflake Python Connector.
Impact:
Customers who have pinned a dependency on this package will receive a build error similar to:
Snowflake Python connector version 1.9.0 not found
Solution:
Customers should use 1.9.1 which properly pins the supported asn1crypto versions.
pip install -U snowflake-connector-python==1.9.1
Additional Notes:
Python Connector v2.0.0 is impacted by the same issue as 1.9.0 and should be avoided
Python Connector v2.0.1 can be used but may break lamda deployments since the package size is over 200MB. The large size is due to inclusion of some optional libraries that are in Private Preview and not needed for normal operation.
Python Connector v2.0.2 will be released shortly with a much smaller package size.
Snowflake will not remove python versions in the future without proper notification to our customers.
username_0: @username_3 thanks for the feedback.
Sorry to be frank here, but this is not a good way to release a patched version. What Snowflake has done here is break every production pipeline or build process that had the snowflake connector and asn1crypto pinned to `1.9.0` and `0.24.0`. Many Snowflake customers using a proper `pip freeze` or equivalent dependency management could fall into this group. A lot of perfectly fine, working code likely got broken because Snowflake removed this from PyPI.
A much better solution would have been to release 1.9.1 and 2.0.1 with the fixes and leave 1.9.0 and 2.0.0, as is, in PyPI. That would ensure nothing that is currently working breaks, and for anyone with broken stuff, there would be an update to fix their issue.
Generally, it is asking for trouble to remove packages once they have been released.
username_3: Noted @username_0 .
I am keeping this issue open for a couple of days so anyone facing this issue can refer to it and get the details.
username_4: @username_0 Your point is well taken. We will not be removing versions in the future.
Status: Issue closed
|
luvolondon/fvtt-module-jitsiwebrtc | 849952668 | Title: Custom server connection failure
Question:
username_0: I am using version 0.5.4 of the module, and I noticed a change in behaviour. My custom Jitsi server no longer works. I have rebooted the server. I have confirmed that it functions as a stand-alone service in a web browser. My credentials have not changed on the server, and I have not changed the firewall settings on either my Foundry server or Jitsi server.
I conducted this test with every other module disabled but the Jitsi module. My Jitsi server URL and credentials are set and verified. Custom URLs is selected in the module configuration settings.
The following errors appear in the log:
Logger.js:154 2021-04-04T20:06:02.108Z [modules/xmpp/strophe.util.js] <Object.r.Strophe.log>: Strophe: TypeError: this._jitsiConference.setReceiverConstraints is not a function
at JitsiRTCClient._onUserJoined (https://<foundry server name>/modules/jitsirtc/scripts/jitsirtc.js:948:27)
at a.emit (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:1:115270)
at ie.onMemberJoined (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:10:56250)
at a.emit (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:1:115213)
at E.onPresence (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:10:155329)
at u.onPresence (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:10:147587)
at I.Handler.run (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:1:26549)
at https://<jitsi server name>/lib-jitsi-meet.min.js:1:34987
at Object.forEachChild (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:1:18211)
at I.Connection._dataRecv (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:1:34836)
at D.Bosh._onRequestStateChange (https://<jitsi server name>/libs/lib-jitsi-meet.min.js:1:54821)
Logger.js:154 2021-04-04T20:06:02.109Z [modules/xmpp/strophe.util.js] <Object.r.Strophe.log>: Strophe: error: this._jitsiConference.setReceiverConstraints is not a function
Logger.js:154 2021-04-04T20:09:14.780Z [modules/xmpp/strophe.ping.js] Ping timeout null
This is not the case when I select the public Jitsi server. That still works.
Status: Issue closed
Answers:
username_1: Thank you for reporting this! It looks like older versions of Jitsi don't support a method I was using. This could probably have been fixed by upgrading Jitsi, but it may not be in the stable channel yet. The newly released v0.5.5 should no longer have this problem.
username_0: It works. Thanks! |
pytti-tools/pytti-book | 1185147862 | Title: Issue on page /Setup.html Step 10
Question:
username_0: In step 10 of the setup it says to clone the dev branch of the pytti-core module, but there seems to be no dev branch.
$ git clone --recurse-submodules -j8 --branch dev https://github.com/pytti-tools/pytti-core
Cloning into 'pytti-core'...
fatal: Remote branch dev not found in upstream origin
If I clone without specifying the branch, I'm missing the config, images_out and pretrained folders in comparison to the folder structure given under Step 10.
Answers:
username_1: thanks, I definitely need to update this! you don't need to specify a branch. you can initialize the config folder by running `python -m pytti.warmup`
thanks again, tagging this as a bug for now and will close after I've updated the setup instructions
username_1: ok, updated the setup instructions. lemme know what you think! https://pytti-tools.github.io/pytti-book/Setup.html
Status: Issue closed
|
getformwork/formwork | 516563335 | Title: The proposal editor in the admin panel
Question:
username_0: Good afternoon! A proposal for the editor in the admin panel. I studied how the CMS works and found very interesting ideas! I wanted to suggest using CKEditor instead of the current editor: it has many additional extension plugins that would greatly simplify and extend the capabilities of the CMS. This editor outputs HTML code that can be inserted into md files. Here is a link with very interesting plugins, such as Bootstrap columns and others, which you can try out and test.
https://kisameev.ru/javascript/nastroyka-ckeditor-pod-sebya---podklyuchenie-plaginov
Answers:
username_1: Thank you @username_0 for the proposal.
I need to do some tests. Markdown output with HTML only if needed should be preferred. As I can read CKEditor 5 can save in Markdown format but I need to see how handles "extra" features. However the focus should always be on the content. Perhaps this editor could be implemented not in the immediate future but as a plugin, but obviously we need a plugin infrastructure first, which I'm already starting to develop.
username_0: It seems to me that the emphasis should not be on the Markdown (md) format. This format is very stripped down, and it will be difficult to do something creative with it. Let the md Markdown format remain, but I think it is worth looking at the HTML format: with it you can create more creative pages, and there are a lot of good editors for it.
username_1: I understand your point, but that's beyond the scope of the contents of a page. That's on purpose. A page should contain only marked up text, and Markdown is more concise than HTML, all the presentation is handled by the templates. Occasionally you can add some HTML directly in the content (for elements which can't be expressed in Markdown) but that should be avoided. It's up to the templates to decide how a certain element is rendered, i.e. the `.md` files should only contain the semantics.
username_0: Good ! Thanks!
username_1: CKEditor could be used as an alternative to the standard CodeMirror editor, just because it's WYSIWYG and it could be preferred by people who understandably don't like writing directly the Markdown syntax.
Status: Issue closed
|
nathanpabst/PromoTracker | 406029742 | Title: Get Order Data
Question:
username_0: # Story
As a developer, I need to be able to retrieve all promotion code info from the database
# Acceptance Criteria
- Set debugger on HttpGet call
- JSON returning in Postman
# Technical Requirements
create public class **OrderRepository** with OrderRepository constructor and GetOrders
create **OrderController** with constructor
**[HttpGet]** request for GetOrders()
**SQL query**
Status: Issue closed
Answers:
username_0: order info displaying on Reporting page as expected. |
zopefoundation/persistent | 568329767 | Title: Support ability to require use of C extensions
Question:
username_0: This has been a problem for end users and our own testing (cf #124 and https://github.com/zopefoundation/persistent/pull/129#issuecomment-588081525).
I propose doing basically what [zope.interface now does](https://github.com/zopefoundation/zope.interface/pull/151/)<issue_closed>
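For reference, the pattern being proposed looks roughly like this (a sketch only; the module name is hypothetical, not the actual `persistent` layout):
```python
import os

# PURE_PYTHON unset: prefer C, silently fall back to Python.
# PURE_PYTHON=1: force the pure-Python implementation.
# PURE_PYTHON=0: require the C extensions and fail loudly if missing.
PURE_PYTHON = os.environ.get("PURE_PYTHON")

c_impl = None
if PURE_PYTHON != "1":
    try:
        from persistent import _c_implementation as c_impl  # hypothetical
    except ImportError:
        if PURE_PYTHON == "0":
            raise  # C extensions explicitly required but unavailable
```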
Status: Issue closed |
Dart-Code/Dart-Code | 1057678656 | Title: Update location of Flutter survey JSON to docs.flutter.dev
Question:
username_0: https://flutter.dev/f/flutter-survey-metadata.json should be updated to https://docs.flutter.dev/f/flutter-survey-metadata.json.
The old URL is currently a 404 due to site changes. @username_1 is trying to get the current survey JSON restored there for existing plugins as a temporary measure, but for future surveys, we need to read from the docs.flutter.dev site.
(cc @kenzieschmoll @stevemessick - DevTools/IntelliJ will need updating too)
Answers:
username_1: For the future DevTools survey, the new URL would be https://docs.flutter.dev/f/dart-devtools-survey-metadata.json
username_0: Looks like there's a DevTools survey live now (https://docs.flutter.dev/f/dart-devtools-survey-metadata.json), so it may be worth copying that one over if possible too (I think DevTools is shipping with SDKs now, so not releasing as frequently as the editor plugins).
username_0: Whoops, missed the year 😄
Status: Issue closed
|
paritytech/ink | 948502345 | Title: The block number of `env().random()` is always `0`
Question:
username_0: The block number of `env().random()` is always `0`
Answers:
username_1: @username_0 can you provide us with more context/some code?
username_0: ```rs
#[ink(message)]
pub fn random(&self, arr: Vec<u8>) -> (Hash, BlockNumber) {
    return self.env().random(&arr);
}
```
Just call it simply.
I use [Europa](https://github.com/patractlabs/europa) to execute this code. I am confused about this return value
username_1: That being said, I'm not sure when a block is "determinable", but I suspect that's dictated by the contracts pallet. Pinging @username_2 who can probably give you a better answer than me.
Status: Issue closed
|
snap-stanford/snap | 289198885 | Title: Core dump when doing word embedding
Question:
username_0: Hi, I have an edgelist with 6M lines, I can learning walking successfully, but It cored when it doing embedding learning process. By the way, I have 200G memory in my server.
Answers:
username_1: I have seen a similar issue. For me, it only happens when the `-dr` flag is set.
I've also experienced the program terminating before displaying 100% on the shell prompt.
username_2: I met the same issue with the `-dr` flag. There may be a bug in the training of word2vec.
AutoMapper/AutoMapper | 32825739 | Title: Make it easier to create a MappingEngine instance
Question:
username_0: Most of the time, using the `Mapper` static class is fine, but sometimes you need to map the same types with different configurations, so you need to explicitly use an `IMappingEngine` like this:
var configuration = new ConfigurationStore(new TypeMapFactory(), MapperRegistry.Mappers);
var engine = new MappingEngine(configuration);
The way to create a `MappingEngine` is not obvious at all (I had to dig in the source code to find how it was done); it would be nice to have a `Mapper.CreateEngine` method to create an engine with the default configuration.<issue_closed>
Status: Issue closed |
slazarov/python-bittrex-websocket | 288511385 | Title: Getting real-time non-stop ticker updates
Question:
username_0: Hi everyone!
I had a question on how to get real-time non-stop ticker updates using the bittrex-websocket library.
Thanks to @username_1 I got this sample, which may be useful for others:
```python
from __future__ import print_function
from time import sleep
from bittrex_websocket.websocket_client import BittrexSocket


def main():
    class MySocket(BittrexSocket):
        def on_open(self):
            pass

        def on_ticker_update(self, msg):
            name = msg['MarketName']
            print('Just received ticker update for {}.'.format(name))
            print(msg)

    # Create the socket instance
    ws = MySocket()
    # Enable logging
    ws.enable_log()
    # Define tickers
    tickers = ['BTC-ETH', 'BTC-NEO']
    # Subscribe to ticker information
    ws.subscribe_to_ticker_update(tickers)

    while True:
        sleep(1)

if __name__ == "__main__":
    main()
```
Answers:
username_1: Lately, I have been getting a lot of messages in my email related to that.
First of all thanks for all the kind words, they do mean a lot!
Secondly, going through the readme and the examples is a must. If you have any suggestions for improvements, please say so.
Thirdly, although I would like to help as much as I can, the questions are more related to general Python usage rather than actual issues. I'd prefer to focus what little time I have on actual issues. The rest is commented below. I am going to use @username_0's thread/issue to highlight what's going on exactly within `on_open` and `on_ticker_update` before I create some sort of wiki.
```python
def main():
    class MySocket(BittrexSocket):
        def on_open(self):
            # Quick and dirty way to create variables within the BittrexSocket class
            # i.e if you create a dict like this self.my_personal_dict = {}
            # you will be able to access it from the main thread (in this case the While True loop below)
            # like this ws.my_personal_dict; of course depending on how you instantiated the BittrexSocket class
            pass

        def on_ticker_update(self, msg):
            # The ticker update channel as per the documentation.
            # People tend to ask me why they are not getting quantity/rate/high/low etc
            # This is a basic example so it's up to you to debug the contents of the 'msg' variable
            # and use them as you like
            name = msg['MarketName']
            print('Just received ticker update for {}.'.format(name))
            print(msg)

    # Create the socket instance
    ws = MySocket()
    # Enable logging
    ws.enable_log()
    # Define tickers
    tickers = ['BTC-ETH', 'BTC-NEO']
    # Subscribe to ticker information
    ws.subscribe_to_ticker_update(tickers)

    while True:
        sleep(1)

if __name__ == "__main__":
    main()
```
username_2: Hi,
I try to run the example of get_ticker_update
but here is some problem about the connect with the https://socket-stage.bittrex.com/signalr and https://socket.bittrex.com/signalr here is the message and code , Did I miss something?
```python
from __future__ import print_function
from time import sleep
from bittrex_websocket.websocket_client import BittrexSocket


def main():
    class MySocket(BittrexSocket):
        def on_open(self):
            self.ticker_updates_container = {}

        def on_ticker_update(self, msg):
            name = msg['MarketName']
            if name not in self.ticker_updates_container:
                self.ticker_updates_container[name] = msg
                print('Just received ticker update for {}.'.format(name))

    # Create the socket instance
    ws = MySocket()
    ws.on_open()
    # Enable logging
    ws.enable_log()
    # Define tickers
    tickers = ['BTC-ETH', 'BTC-NEO', 'BTC-ZEC', 'ETH-NEO', 'ETH-ZEC']
    # Subscribe to ticker information
    ws.subscribe_to_ticker_update(tickers)

    while len(ws.ticker_updates_container) < len(tickers):
        sleep(1)
    else:
        print('We have received updates for all tickers. Closing...')
        ws.disconnect()

if __name__ == "__main__":
    main()
```
```
2018-01-22 13:24:00 - bittrex_websocket.websocket_client - INFO - [Connection][5b544e6af24248609866802184f5879e]:Trying to establish connection to Bittrex through https://socket-stage.bittrex.com/signalr.
2018-01-22 13:24:05 - bittrex_websocket.websocket_client - ERROR - [Connection][5b544e6af24248609866802184f5879e]:Timeout for url https://socket-stage.bittrex.com/signalr. Please check your internet connection is on.
2018-01-22 13:24:05 - bittrex_websocket.websocket_client - INFO - [Connection][5b544e6af24248609866802184f5879e]:Trying to establish connection to Bittrex through https://socket.bittrex.com/signalr.
2018-01-22 13:24:10 - bittrex_websocket.websocket_client - ERROR - [Connection][5b544e6af24248609866802184f5879e]:Timeout for url https://socket.bittrex.com/signalr. Please check your internet connection is on.
2018-01-22 13:24:10 - bittrex_websocket.websocket_client - ERROR - [Connection][5b544e6af24248609866802184f5879e]:Failed to establish connection through supplied URLS. Leaving to watchdog...
2018-01-22 13:24:20 - bittrex_websocket.websocket_client - ERROR - Failed to subscribe [TickerUpdate][['BTC-ETH', 'BTC-NEO', 'BTC-ZEC', 'ETH-NEO', 'ETH-ZEC']] from connection 5b544e6af24248609866802184f5879e after 20 seconds. The connection is probably down.
```
username_1: Your connection is not going through at all. What version are you using? Can you try with [0.0.5.1](https://github.com/username_1/python-bittrex-websocket/releases/tag/v0.0.5.1) ?
This is what I am getting:
```
2018-01-22 07:45:19 - bittrex_websocket.websocket_client - INFO - [Connection][0633937571b94b03965053eb51b6de3d]:Trying to establish connection to Bittrex through https://socket-stage.bittrex.com/signalr.
2018-01-22 07:45:27 - bittrex_websocket.websocket_client - INFO - [Connection][0633937571b94b03965053eb51b6de3d]:Connection to Bittrex established successfully through https://socket-stage.bittrex.com/signalr
2018-01-22 07:45:27 - bittrex_websocket._auxiliary - INFO - [Subscription][Trades][BTC-ETH]: Enabled.
2018-01-22 07:45:27 - bittrex_websocket._auxiliary - INFO - [Subscription][Trades][BTC-XMR]: Enabled.
[Trades]: BTC-ETH
[Trades]: BTC-ETH
[Trades]: BTC-ETH
[Trades]: BTC-ETH
[Trades]: BTC-ETH
[Trades]: BTC-ETH
```
username_2: Here is the version information
```
bittrex-websocket==0.0.6.2
cfscrape==1.9.1
Events==0.3
requests==2.18.4
signalr-client==0.0.7
websocket-client==0.46.0
```
and here are more problems when I run with version 0.0.5.1:
```
2018-01-22 14:41:09 - bittrex_websocket.websocket_client - INFO - Trying to establish connection to Bittrex through https://socket-stage.bittrex.com/signalr.
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\cfscrape\__init__.py", line 114, in solve_challenge
node = execjs.get("Node")
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\execjs\_runtimes.py", line 22, in get
return _find_runtime_by_name(name)
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\execjs\_runtimes.py", line 61, in _find_runtime_by_name
"{name} runtime is not available on this system".format(name=runtime.name))
execjs._exceptions.RuntimeUnavailableError: Node.js (V8) runtime is not available on this system
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\bittrex_websocket\websocket_client.py", line 287, in _init_connection
conn.start()
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\signalr\_connection.py", line 47, in start
negotiate_data = self.__transport.negotiate()
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\signalr\transports\_auto_transport.py", line 16, in negotiate
negotiate_data = Transport.negotiate(self)
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\signalr\transports\_transport.py", line 26, in negotiate
negotiate = self._session.get(url)
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\requests\sessions.py", line 521, in get
return self.request('GET', url, **kwargs)
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\cfscrape\__init__.py", line 47, in request
return self.solve_cf_challenge(resp, **kwargs)
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\cfscrape\__init__.py", line 77, in solve_cf_challenge
params["jschl_answer"] = str(self.solve_challenge(body) + len(domain))
File "C:\Users\username_2\AppData\Local\Programs\Python\Python35-32\lib\site-packages\cfscrape\__init__.py", line 116, in solve_challenge
raise EnvironmentError("Missing Node.js runtime. Node is required. Please read the cfscrape"
OSError: Missing Node.js runtime. Node is required. Please read the cfscrape README's Dependencies section: https://github.com/Anorov/cloudflare-scrape#dependencies.
```
username_1: Yes, you need node.js. Check Dependencies section in the readme.
Status: Issue closed
|
keycdn/python-keycdn-api | 442691881 | Title: Fails if the response isn't JSON
Question:
username_0: ```
...
File "/var/lib/django/django-username_0com/venv/lib/python3.5/site-packages/keycdn/keycdn.py", line 55, in get
return self.__execute(call, 'GET', params)
File "/var/lib/django/django-username_0com/venv/lib/python3.5/site-packages/keycdn/keycdn.py", line 107, in __execute
return r.json()
File "/var/lib/django/django-username_0com/venv/lib/python3.5/site-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
The problem is here: https://github.com/keycdn/python-keycdn-api/blob/08bec1956db40f9d8d704b5285f7bbbb3ac0aad2/keycdn/keycdn.py#L107
Kinda. I think what happened was that the line
```python
r = self.session.get(url, auth=(self.__api_key, ''), data=params)
```
then assumes that `r` returned a JSON content-type response, which might not be the case.
But something's definitely not right. The "more right" thing to do is to first check that the response was successful; even supposing it's not, it would then just be a different, more descriptive exception.<issue_closed>
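For illustration, a minimal sketch of that "more right" behavior (everything here beyond the `requests` API itself is an assumption):
```python
import requests

def execute_get(session, url, api_key, params):
    # Note: for a GET, query parameters belong in params=, not data=.
    r = session.get(url, auth=(api_key, ""), params=params)
    # Fail with a descriptive HTTP error instead of a bare JSONDecodeError.
    r.raise_for_status()
    content_type = r.headers.get("Content-Type", "")
    if "application/json" not in content_type:
        raise ValueError(
            "Expected JSON from {} but got {!r}".format(url, content_type))
    return r.json()
```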
Status: Issue closed |
rplugge/quiz_game_v3 | 94137561 | Title: Feedback - Javascript/Sinatra - Part 2
Question:
username_0: http://musician-ranks-63085.bitballoon.com/
Answers:
username_1: - [ ] Doesn't seem to show result total after the final question.
- [ ] Use only one submit button.
- [ ] Store the correct answer in an array somewhere so that you don't have to have a Javascript function to check the correctness of each question. |
Alenovaalla/PageObject | 676187419 | Title: Negative card balance when transferring an amount greater than the available balance
Question:
username_0: ## Шаги
1. Перейти по ссылки http://localhost:9999/
1. Ввести логин: vasya
1. Ввести пароль: <PASSWORD>
1. Нажать кнопку продолжить
1. Ввести пароль: 12345
1. Нажать кнопку "Продолжить"
1. Нажать на кнопку "Пополнить" напротив карты "***0001"
1. .В поле "Сумма" ввести значение превышающее баланс карты "***0002"
1. .В поле "Откуда" ввести номер карты: 5559 0000 0000 0002
1. Нажать кнопку "Пополнить"
## Ожидаемый результат
Появляется сообщение: "Недостаточно средств на карте"
## Фактический результат
Баланс счета карты, с которой происходит списание отрицательный

## Environment
Windows 10, 64-bit
Java version 11.0.7
go-ini/ini | 208728728 | Title: It is unable to get a full string value if it contains "#" symbol
Question:
username_0: It is unable to get a full string value if it contains "#" symbol.
Sample of the ini file:
```ini
#===== Start of the file =======
test_value="qwe#123"
#===== End of the file ========
```
Expected result:
`
{TestValue: qwe#123}
`
Actual result:
`
{TestValue: qwe}
`
Status: Issue closed
Answers:
username_1: See https://github.com/go-ini/ini#comment... |
swagger-api/swagger-codegen | 258123107 | Title: HTML output exception: preprocessSwagger(StaticHtmlGenerator.java:186)
Question:
username_0: ##### Description
When trying to generate html from either valid swagger 2 yaml or json, I get the following exception pasted below. I looked up the code line 186 (it's the "for" loop):
public void preprocessSwagger(Swagger swagger) {
    Info info = swagger.getInfo();
    info.setDescription(toHtml(info.getDescription()));
    info.setTitle(toHtml(info.getTitle()));
    Map<String, Model> models = swagger.getDefinitions();
    for (Model model : models.values()) {
        model.setDescription(toHtml(model.getDescription()));
        model.setTitle(toHtml(model.getTitle()));
    }
}
But, my swagger has a title and a description.
[main] INFO io.swagger.parser.Swagger20Parser - reading from ./api.oas.2.0.yaml
Exception in thread "main" java.lang.NullPointerException
at io.swagger.codegen.languages.StaticHtmlGenerator.preprocessSwagger(StaticHtmlGenerator.java:186)
at io.swagger.codegen.DefaultGenerator.configureGeneratorProperties(DefaultGenerator.java:134)
at io.swagger.codegen.DefaultGenerator.generate(DefaultGenerator.java:687)
at io.swagger.codegen.cmd.Generate.run(Generate.java:285)
at io.swagger.codegen.SwaggerCodegen.main(SwaggerCodegen.java:35)
##### Swagger-codegen version
##### Swagger declaration file content or url
The yaml and json are online here:[https://github.com/hmis-api/user-service-api/blob/master/api-markup/api.oas.2.0.json](https://github.com/hmis-api/user-service-api/blob/master/api-markup/api.oas.2.0.json) and here: [https://github.com/hmis-api/user-service-api/blob/master/api-markup/api.oas.2.0.yaml](https://github.com/hmis-api/user-service-api/blob/master/api-markup/api.oas.2.0.yaml) .
##### Command line used for generation
/home/eric/.linuxbrew/bin/swagger-codegen generate -i ./api.oas.2.0.yaml -l html -o /home/eric/git/user-service-api/api-markup/
##### Steps to reproduce
run command
##### Related issues/PRs
no
##### Suggest a fix/enhancement
Answers:
username_1: @username_0 thanks for reporting the issue. May I know if you've time to contribute a PR with better null check?
username_0: Yes, what would you like me to run and send exactly?
username_0: Yes, but it may take a little time. I've added it to my to-do list.
username_2: To anyone looking for a quick workaround: create a `definitions` section. Should work after that.
username_3: at io.swagger.codegen.languages.StaticHtmlGenerator.preprocessSwagger(StaticHtmlGenerator.java:186)
at io.swagger.codegen.DefaultGenerator.configureGeneratorProperties(DefaultGenerator.java:171)
at io.swagger.codegen.DefaultGenerator.generate(DefaultGenerator.java:737)
at io.swagger.codegen.cmd.Generate.run(Generate.java:285)
at io.swagger.codegen.SwaggerCodegen.main(SwaggerCodegen.java:35)
Please try to help me out from this.
Thanks in advance
username_1: @username_3 Thanks for tagging me but I'm no longer involved in this project. I hope others will be able to help you out. Good luck.
username_4: fixed in #9615
Status: Issue closed
|
xJon/The-1.12.2-Pack | 547243525 | Title: Continuously Crashing When Trying to Join Server
Question:
username_0: <!--
Please fill in the following information.
-->
1. Are you using a legitimate launcher and Minecraft account:
yes
2. Are you using a computer with 8GB or more of RAM?:
yes
3. Are you using Java 8, 64-bit?:
yes
**Information required**
1. Crash/latest log: https://pastebin.com/k73ghJaY
2. Is Optifine installed or any other additional mods: yes
Optifine
Astral Sorcery
Fidget Spinner (don't ask why. I don't know why. Friends are weird mate)
**Issue description**
When attempting to join the server my friend is hosting, my other friend continuously crashes with this as the crash log each time; the crash repeats until the game is closed.
Answers:
username_1: Your friend seems to have an Intel HD integrated graphics 630. What are the full specs of their laptop? What's their laptop model? This isn't a modpack for toasters :P
Status: Issue closed
username_0: He has an actual graphics card in the laptop. We ended up getting it to work. He had to wait for the mystcraft profiling to finish.
username_1: Ah ok, glad to hear it
username_2: Reinstalling / updating should resolve this issue as well. |
vimeo/VIMNetworking | 164510226 | Title: Use `NSURLSession` instead of `AFNetworking`
Question:
username_0: #### Issue Summary
`AFNetworking` is a major, well-known library; however, not every project is willing to integrate it. There could be several reasons, some of which could be:
1. The project is Swift-based, so `Alamofire` is already integrated in the project
2. The project is based on `NSURLSession`, so the developer could find it annoying to include a huge library just to support downloading of preview images for videos
I realise that the previous `NSURLRequest`-based API was not the best, but now `NSURLSession` is as good as third-party APIs.
Would be great to provide a version of `VIMNetworking` that does not have any network library dependency and works directly on Apple's API.
Liresol/anki-custom-shortcuts | 1069743174 | Title: Addon throws an error at launch on QT6 beta of Anki
Question:
username_0: #### Problem description
Dae's [posted a QT6 beta of Anki](https://forums.ankiweb.net/t/new-toolkit-and-packaging-test-round-2/14513). Most of my addons work fine, but unfortunately, that doesn't include Custom Shortcuts.
#### Information about your Anki set-up
```
Anki 2.1.50 (abd671d4) Python 3.9.7 Qt 6.2.0 PyQt 6.2.0
Platform: Mac 12.0.1
Flags: frz=True ao=True sv=3
Add-ons, last update check: 2021-12-02 08:37:14
===Add-ons (active)===
(add-on provided name [Add-on folder, installed at, version, is config changed])
===IDs of active AnkiWeb add-ons===
===Add-ons (inactive)===
(add-on provided name [Add-on folder, installed at, version, is config changed])
```
#### Error message (if any)
```python
An add-on you installed failed to load. If problems persist, please go to the Tools>Add-ons menu, and disable or delete the add-on.
When loading 'Customize Keyboard Shortcuts':
Traceback (most recent call last):
File "aqt.addons", line 239, in loadAddons
File "/Users/ec/Library/Application Support/Anki2/addons21/24411424/__init__.py", line 1, in <module>
from . import custom_shortcuts
File "/Users/ec/Library/Application Support/Anki2/addons21/24411424/custom_shortcuts.py", line 665, in <module>
cs_main_setupShortcuts()
File "/Users/ec/Library/Application Support/Anki2/addons21/24411424/custom_shortcuts.py", line 101, in cs_main_setupShortcuts
if scut.id() in id_main_config:
AttributeError: 'QShortcut' object has no attribute 'id'
```
Answers:
username_1: Hello,
I just pushed out an update to fix this issue. It works for me on 2.1.50beta1 but if you are encountering any issues, let me know and I will look into it further.
username_0: Hm. I installed from the Anki Addon Store — there's no more crashes, but the addon doesn't seem to … have any effect. For instance, I've modified the following:
```json
{
    "reviewer choice 1": "a",
    "reviewer choice 2": "r",
    "reviewer choice 3": "s",
    "reviewer choice 4": "t"
}
```
And only the second line, `"reviewer choice 2": "r",`, is … working.
Worse, it *does* seem to unbind the existing keys (1, 2, 3, 4 don't work).
username_1: Ok, I think I managed to work this one out as well.
I think based on how Anki checks for updates you might need to check for updates manually for this patch, but let me know if any issues persist or if there are other problems with 2.1.50.
username_0: Hm. I'm not sure what you meant by manually updating — I tried uninstalling and re-installing via the Anki code, but no luck: the above snippet still has no effect, except for the key "r"; it also still disables the 1, 2, 3, 4 keys.
Any debugging steps I could take? I'm a fairly competent programmer, although not one that uses Python …
username_1: The addon tab has a "Check for Updates" button on the top right which you usually never need to use since Anki checks for updates every 24 hours or so. The shortcut patch came out very soon after the initial fix, so Anki might not automatically tell you the patch existed at the time.
The main cause of the shortcut issue is that A, S, and T are all shortcuts used on the main menu and on the reviewer, so what you were experiencing before was an annoying interaction between main shortcuts and reviewer shortcuts (which was my fault).
If you have two functions with the same shortcuts (e.g. you have the default `"main add": "a"` and `"reviewer choice 1": "a"` then both shortcuts stop functioning).
Actually, since you reinstalled the shortcuts, it possible that the main shortcuts got reset to the defaults, which would cause an identical behavior to what was happening before.
username_0: Ah, that's frustrating!
For now, switching all the main-window bindings to be behind <kbd>Alt</kbd> fixed my issue — `arst` now function as I need them to.
Okay, I checked for updates, and it insisted none were available — unless I misunderstand, then, the "main-and-reviewer" behaviour is unavoidable / intended, right now? Because it's definitely still occurring, even though "check for updates" yielded no further changes.
If it *is* unavoidable, a programmatic check-for-conflicts and an error-message to present to the user might be a great enhancement; but may not be worth the time if I was the only user stupid enough to miss that. 🤣
username_1: Yeah the issue with collisions is inherent to the way Anki is built, which does the weird non-functioning shortcut thing if you try to make a shortcut do more than one thing at once.
The conflict detection feature actually exists, but it happened to miss this because it normally catches shortcut collisions within a single context (and I actually didn't realize the main-reviewer interaction existed until you found it here). I'm going to see if I can add a warning for this, because it's definitely annoying to have shortcuts disappear on you.
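For illustration, a cross-context check might look roughly like this (a minimal sketch; the config-dict shapes and names are assumptions, not the add-on's actual internals):
```python
# a sketch of detecting the main-vs-reviewer collision described above;
# main_config / reviewer_config are assumed to map action names to key strings
def cross_context_conflicts(main_config, reviewer_config):
    conflicts = []
    for main_name, key in main_config.items():
        for reviewer_name, reviewer_key in reviewer_config.items():
            if key == reviewer_key:
                # the same key bound in both contexts silently disables both
                conflicts.append((key, main_name, reviewer_name))
    return conflicts

# cross_context_conflicts({"main add": "a"}, {"reviewer choice 1": "a"})
# => [("a", "main add", "reviewer choice 1")]
```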
Status: Issue closed
|
zetadevelopment/PMZ_Software | 430055961 | Title: Decision making
Question:
username_0: Within the organization's decision-making process, a system has been established for executing the project: through the GitHub platform, each process or change to be made will be decided by a positive or negative vote from each member of the organization, in order to choose the soundest processes based on each member's experience and knowledge.
Status: Issue closed
Answers:
username_0: Within the organization's decision-making process, a system has been established for executing the project: through the GitHub platform, each process or change to be made will be decided by a positive or negative vote from each member of the organization, in order to choose the soundest processes based on each member's experience and knowledge.
Status: Issue closed
|
utmsigep/member-directory | 892440257 | Title: 🚨 Potential Cross-site Scripting (XSS) - Generic
Question:
username_0: 👋 Hello, @username_1, @dependabot-preview[bot], @ImgBotApp - a potential medium severity Cross-site Scripting (XSS) - Generic vulnerability in your repository has been disclosed to us.
#### Next Steps
1️⃣ Visit **https://huntr.dev/bounties/1-other-utmsigep/member-directory** for more advisory information.
2️⃣ **[Sign-up](https://huntr.dev/)** to validate or speak to the researcher for more assistance.
3️⃣ Propose a patch or outsource it to our community - whoever fixes it gets paid.
---
#### Confused or need more help?
- Join us on our **[Discord](https://huntr.dev/discord)** and a member of our team will be happy to help! 🤗
- Speak to a member of our team: @JamieSlome
---
*This issue was automatically generated by [huntr.dev](https://huntr.dev) - a bug bounty board for securing open source code.*<issue_closed>
Status: Issue closed |
sys27/xFunc | 322820741 | Title: Pow-Function should support complex numbers
Question:
username_0: Hi Dmitry,
I've tested a few things and found an issue (tested with dev-branch).
The following expression, x^(1/2), should return a complex number for a negative variable,
but at the moment the result is NaN.
The System.Numerics.Complex library already supports a `Complex.Pow` method.
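For illustration, the distinction in .NET looks roughly like this (a minimal sketch of the expected behavior, not xFunc code):
```csharp
using System;
using System.Numerics;

class PowDemo
{
    static void Main()
    {
        // real-valued pow: the square root of a negative number is NaN
        Console.WriteLine(Math.Pow(-4, 0.5));                     // NaN

        // complex-aware pow returns the imaginary result instead
        Console.WriteLine(Complex.Pow(new Complex(-4, 0), 0.5));  // approximately (0, 2), i.e. 2i
    }
}
```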
Kind regards
Ronny<issue_closed>
Status: Issue closed |
Kotti/Kotti | 85041098 | Title: kotti_project generator or seed
Question:
username_0: Explaining the ``kotti_project`` idea I've partially implemented on a private project (partially relates to the ``kotti_cms`` idea. See #347).
The goal is:
* provide a package ready to be used in production, including:
  * dev/production environments
  * configured logging with logrotate and filesystem logs
  * project documentation based on Sphinx
  * pip-installable requirements.txt for dev/production
  * virtualenv environment creation for dev/production
  * easy setup for published plugins or private repositories
  * real-world examples with suggested configurations for the most common databases
  * a real-world uWSGI config for production deployments
  * nginx conf generation ready to be used for production sites
* provide the full kotti capabilities including by default the most useful plugins for the "website" use case (events, news, etc)
The above goals are difficult to achieve if you are not yet pyramid-aware or if you want to try out Kotti for the first time.
So it would be cool (or, better, **kool** :) to provide an interactive package generator for kotti projects or a simple seed project. The ``kotti_cms`` idea could also be merged in, supporting the above goals.
Package generator:
* a little bit difficult to maintain/update, interactive generation, a little bit hard to implement tests for
Project seed (based on a console script that clones the project and does string substitution):
* easy to maintain/update/merge improvements, easy to test, no interactive generation (it is just a seed), a bit hackish (it works with string substitution, so choosing a bad project name will break it, obviously). See for example the "scaffolding tool" section here: http://username_0.blogspot.it/2014/09/plone-angularjs-yeoman-starter.html
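For illustration, the string-substitution core of such a seed tool could be as small as this (a rough sketch; `kotti_seed` is an assumed placeholder name):
```python
# a sketch: copy a seed project, replacing the placeholder name everywhere
# (in paths and file contents); binary files and edge cases are ignored here
import os

def render_seed(seed_dir, target_dir, project_name):
    for root, _dirs, files in os.walk(seed_dir):
        for filename in files:
            src = os.path.join(root, filename)
            rel = os.path.relpath(src, seed_dir).replace("kotti_seed", project_name)
            dst = os.path.join(target_dir, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            with open(src) as fin, open(dst, "w") as fout:
                fout.write(fin.read().replace("kotti_seed", project_name))
```
As the bullet above notes, this is exactly why a badly chosen project name breaks it.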
Opinions?
Answers:
username_0: Created a prototype here: https://github.com/username_0/kotti_project |
skrafft/react-native-jitsi-meet | 606118708 | Title: is it possible to set password required
Question:
username_0: Is it possible to require a password after the room has been joined? For example, the user who created the room is shown an option to set a password for it.
Please help me!
Answers:
username_1: Please see https://community.jitsi.org/t/enable-password-for-rooms/19881/3
username_2: This issue marked as staled!
It'll be closed in 10 days, if there will be no activity. Please let us know if this issue still not resolved and/or bug still reproducing in the latest version.
Status: Issue closed
|
dotnet/efcore | 601928818 | Title: Cosmos: Pipe character is escaped for key value when creating a new document
Question:
username_0: We receive an id from a third party application where the format of the id is `<name>|<guid>` for example `"test|36f47ad4-e23f-43f1-8c37-304df4e6193a"`.
When creating a document and marking this value as the key value, EF escapes the pipe character in the id, so the example ends up being `test/|36f47ad4-e23f-43f1-8c37-304df4e6193a`. If the value is not marked as the key value, it is not escaped. Using EF to retrieve the same document works fine; however, when viewing the values using the Azure Data Explorer it breaks with the following message:

When creating the same document using the same key using the Cosmos SDK the character is not escaped and I am able to view the document using Azure Data Explorer.
The fluent setup I used:
```csharp
builder.Property(user => user.UserId).HasMaxLength(100);
builder.HasKey(user => user.UserId);
builder.HasPartitionKey(user => user.regionCode);
```
and saving the user:
```csharp
context.Add(new User
{
UserId = "test|36f47ad4-ef3f-43f1-8c67-304df4e6193a",
Name= "Test",
regionCode = "1-Test"
});
await context.SaveChangesAsync();
```
### Further technical details
EF Core version: 3.1.3
EF Core.Cosmos version 3.1.3
Target framework: NET Core 3.1
Answers:
username_1: These characters can't be used in the resource id: '/', '\\', '?', '#', [source](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.resource.id?view=azure-dotnet)
We can instead url encode the '|' char, but that might be blocked by https://github.com/Azure/azure-cosmos-dotnet-v2/issues/458
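For illustration, encoding on the client side before the value reaches Cosmos would look roughly like this (a sketch of the idea, not the EF Core fix itself):
```csharp
// percent-encoding turns the reserved '|' into "%7C"
var rawId = "test|36f47ad4-ef3f-43f1-8c67-304df4e6193a";
var safeId = Uri.EscapeDataString(rawId);
// safeId == "test%7C36f47ad4-ef3f-43f1-8c67-304df4e6193a"
```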
username_2: Is this slated for the next release?
username_1: @username_2 Yes
Status: Issue closed
|
zendesk/outbound_ios_sdk | 360345718 | Title: User-facing text should use localized string macro
Question:
username_0: The static analyzer has found strings in the Outbound SDK which are not localized.
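The usual fix is to wrap each user-facing literal in the localized string macro; a sketch in Objective-C follows (the string shown is an assumption, not the SDK's actual text):
```objc
// before: a bare literal, which the analyzer flags
self.title = @"Admin";

// after: NSLocalizedString(key, comment); the second argument
// documents the string's context for translators
self.title = NSLocalizedString(@"Admin", @"Title of the admin view controller");
```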
```
OBAdminViewController.m:51:16: User-facing text should use localized string macro
OBAdminViewController.m:61:24: User-facing text should use localized string macro
OBAdminViewController.m:184:32: User-facing text should use localized string macro
OBAdminViewController.m:189:32: User-facing text should use localized string macro
``` |
raiden-network/raiden | 573913502 | Title: PFS - The provided payment is lower than service fee
Question:
username_0: In [BF4](https://github.com/raiden-network/raiden/blob/develop/raiden/tests/scenarios/bf4_multi_payments_same_node.yaml) we can reproduce error messages like the following:
```
scenario_player.exceptions.legacy.RESTAPIStatusMismatchError: HTTP status code "409" while fetching http://127.0.0.1:43945/api/v1/payments/0x59105441977ecD9d805A4f5b060E34676F50F806/0xB096C6924B760806568c48dda66f01c1c8b99Def. Expected 2..: {"errors": "Payment couldn't be completed because: PFS: The provided payment is lower than service fee. (2104)"}
```
In that specific scenario, node 0 sends 100 payments in parallel to nodes 2, 3 and 5. Somehow either a Raiden client or the PFS does not calculate the fees correctly; in every log entry below, the actual amount is consistently exactly 100 short of the expected amount.
```
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 1500, "actual_amount": 1400}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:09:28.475189"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 1900, "actual_amount": 1800}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:09:38.898771"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 2900, "actual_amount": 2800}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:09:57.597401"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 3400, "actual_amount": 3300}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:10:07.662816"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 3600, "actual_amount": 3500}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:10:13.500040"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 7500, "actual_amount": 7400}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:11:27.232467"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 9000, "actual_amount": 8900}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:11:57.660872"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 9900, "actual_amount": 9800}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:12:14.088394"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 15600, "actual_amount": 15500}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:14:32.954778"}
pfs-goerli-with-fee_1 | {"error": "InsufficientServicePayment(None)", "details": {"expected_amount": 16100, "actual_amount": 16000}, "message": "The provided payment is lower than service fee.", "event": "Error while handling request", "level": "warning", "logger": "pathfinding_service.api", "timestamp": "2020-03-02 11:14:47.080058"}
```<issue_closed>
Status: Issue closed |
dokku/dokku | 505033892 | Title: Enhancement: optional HSTS
Question:
username_0: First of all, apologies for not filling in the template, but this ticket is not a bug report, rather an enhancement idea.
## Problem
I went through the previous tickets related to SSL and HSTS and found mentions such as “[HSTS at the moment but may do so in a future minor release](3161#issuecomment-381160310)“ and “[I love and use HSTS I don't want to enforce it due to usability concerns - it should be a conscious decision from the operator to use HSTS.](522#issuecomment-39591640)”
I understand the worry that enabling HSTS by default and automatically could cause unexpected troubles to users, but I think Dokku could go a bit further in enabling “the operator to use HSTS”.
## Current situation
Currently, the only (as far as I know) way to enable HSTS on Dokku-level is to customize Nginx config by providing a custom `nginx.conf.sigil` template that would include the `add_header Strict-Transport-Security` directive if SSL is enabled for the application.
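For reference, the directive in question looks roughly like this inside the HTTPS server block (a sketch; the max-age value is only an example):
```nginx
server {
    listen 443 ssl;
    # only emitted on HTTPS responses, so plain-HTTP apps are unaffected
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```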
However, while this could work well for buildpack and Dockerfile-based deploys, it quickly becomes troublesome when deploying an existing Docker image. For example, on my personal website, I run the default, unmodified image of Ghost CMS, and using the above solution would mean the necessity to re-package the vanilla image to add custom Nginx template. Possible, but not exactly user-friendly.
Furthermore, when it is wanted to enable HSTS to all apps running in Dokku, it is suddenly required to add the custom Nginx configuration file into every application’s repository, and perhaps undertake additional steps for Dockerfile-based deploys (I’m not too sure as I don’t use this feature of Dokku, but I remember reading in the documentation something about the necessity of deleting the Nginx configuration file later, for security reasons). Again, possible but at this point, quite troublesome and somewhat annoying.
Another solution would be using a simple plugin – in fact, I was thinking of just forking a [completely different plugin](https://github.com/Zeilenwerk/dokku-nginx-max-upload-size/blob/master/nginx-pre-reload) that alters Nginx configuration and simply adjusting it to my needs. However, I quickly realized that I’m not quite sure how to ensure that this header is only appended to requests that reached the server through HTTPS…
## Proposed solution
I think Dokku could be slightly more helpful in regards to this and while it probably shouldn’t enable HSTS by default for reasons mentioned above, I think it would be a welcome addition to have a new command (perhaps `dokku certs:hsts <app> [--enable --disable]`?) that would alter generating of the Nginx configuration file in such a way that if the administrator wishes to have HSTS enabled for an application, he or she can do so easily and quickly.
Repeating this task for other applications then just involves re-running the same command with a different application name.
I wish I had the skill to implement this myself, but until then, all I can do is ask you for your kind consideration. Thank you for reading through this, and thank you even more for the amazing work you’ve done on Dokku!
Answers:
username_0: In the end, I found a way to alter the Nginx configuration in the (hopefully) desired way, and created a small plugin that allows enabling HSTS on a per-app basis.
You can find it at [username_0/dokku-hsts](https://github.com/username_0/dokku-hsts), although I still think this feature should be present in Dokku itself due to all the reasons mentioned above.
username_1: I think it's time to support this, and even make HSTS the default. We've been striving for the most secure setup by default, and while HSTS is sometimes annoying to disable, I think the tradeoff is worth it. Thoughts @michaelshobbs?
We can support the following properties:
- `hsts`: bool, default true
- `hsts-include-subdomains`: bool, default true
- `hsts-max-age`: integer, time in seconds
- `hsts-preload`: bool, default false
These values/defaults come from what the [kubernetes ingress controller](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/) does.
username_1: Closing as there is a pull request open.
Status: Issue closed
|
strasdat/Sophus | 572995585 | Title: Uninitialized `so2_` in `SE2` default constructor
Question:
username_0: The default constructor for `SE2` currently initializes the translation value (`translation_`), but not the rotation value (`so2_`). This means anyone calling the default constructor does not get an identity, but rather an undefined and potentially invalid (e.g. [0, 0], [nan, nan], etc.) rotation.
See: https://github.com/strasdat/Sophus/blob/master/sophus/se2.hpp#L729
Potential fix: initialize `so2_` to `SO2<Scalar>()`, similar to `SE2::trans()`. I'd be willing to raise a PR if the fix looks ok.
Answers:
username_1: I think it should be ok.
For classes, the default constructor (`SO2<Scalar>()`) is called automatically if you don't specify anything in the initializer list. Only basic member types like `int` may stay uninitialized.
username_1: PS: For translation it is needed, because by default, Eigen does not initialize matrices in the default constructor.
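For illustration, the rule being described plays out like this (a sketch with stand-in types, not the actual Sophus code):
```cpp
#include <Eigen/Core>

// stand-ins for the Sophus types; the names are assumptions
struct SO2Like {
  SO2Like() : unit_complex_(1.0, 0.0) {}  // identity rotation by default
  Eigen::Vector2d unit_complex_;
};

struct SE2Like {
  SO2Like so2_;                  // class member: its default constructor runs automatically
  Eigen::Vector2d translation_;  // Eigen member: left uninitialized by default
  SE2Like() : translation_(Eigen::Vector2d::Zero()) {}  // hence the explicit init
};
```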
username_0: We've seen issues crop up in our codebase due to using the default constructor without an initializer list. For example, someone might pass `SO2<double>()` to a function, thinking it represents an identity (as mentioned in the source). Likewise, we've seen patterns like `SO2<double> foo = SO2<double>()` being used to initialize a member variable, again thinking the default constructor gives you an identity. This would probably be easier to catch if the instance was completely uninitialized, but because the translation gets set, issues can go undetected for a while.
username_1: I agree 100%, however, what I'm saying is that from a quick glance at the code it seems to me that rotation **is** initialized to identity (via the default constructor of SO2).
username_0: Sorry, replace `SO2` with `SE2` in my previous comment. I'll play around with it some more, and let you know what I find. Thanks for the quick help!
username_1: Sure, my reply referred to SE2. Maybe you can come up with a minimal reproducible example that shows the issue you were seeing.
username_0: After some more digging, it looks like the issue is in our code. I'll admit, my C++ isn't great. Initializing the SE2 seemed to alleviate the issue, but it was apparently coincidence. Thanks for the help, and sorry for the wild goose chase!
Status: Issue closed
username_1: Sure. Happy coding! |
nicehash/NiceHashMiner | 192099131 | Title: [BUG] AMD running/benchmarking certain algorithms (sgminer related)
Question:
username_0: Opening a new issue to gather the commonly opened issues related to **AMD GPUs** and **sgminer** (they will be closed and linked to this new issue to keep track more easily).
### Description:
Many AMD users are reporting issues with **_most algorithms_** (all are related to **sgminer**) when benchmarking or running, resulting in a **BSOD**. Most users reporting the issue seem to be using **AMD RXxxx (AMD Polaris) GPUs (RX480, RX470, etc.)**; there are also some users with older GPUs.
### Potential solutions (may not apply with all users):
- Disabling Default optimizations. To do this go to **Settings > General > Disable Default Optimizations** and check option.
- Disabling AMD Temperature Control. To do this go to **Settings > General > Disable AMD Temperature Control** and check option.
- Try benchmarking/running **only** the following algorithms (with **Default Optimizations** and **AMD Temperature Control** disabled):
- NeoScrypt* **(will not work on Ellesmere/Polaris GPUs and will most likely BSOD with AMD Temperature Control enabled, so don't benchmark/use this one if your GPU is Ellesmere/Polaris)**
- Lyra2REv2
- DaggerHashimoto
- Decred
- Lbry
- Equihash
### Referenced issues (closed, all new related issues should be closed and reference to this issue):
- #246
Answers:
username_1: Closing this since we made several changes with algorithms selection in latest releases.
Also, sgminer-gm issues should be [discussed here](https://github.com/nicehash/sgminer-gm/issues)
sgminer (common version) issues should be [discussed here](https://github.com/nicehash/sgminer/issues)
Status: Issue closed
username_2: None of that worked. I can't mine with my MSI RX570 and RX470. What is NiceHash's solution???
fullcalendar/fullcalendar-react | 825101487 | Title: "Maximum update depth exceeded" error when dateIncrement is of type { day: # }
Question:
username_0: In a custom view, when the ``dateIncrement`` is set to ``{ day: # }`` as opposed to a string ``"hh:mm"``, a "Maximum update depth exceeded" error occurs.
https://codesandbox.io/s/full-calendar-date-increment-bug-yo87v?file=/src/DemoApp.jsx
The above sandbox initially works. But if you comment out the ``dateIncrement: "24:00"`` and uncomment the ``dateIncrement: { days: 1 }``, an error occurs.

Answers:
username_1: I believe the issue is with updating the event state from `datesSet`, which causes a loop; it is likely the same cause as in this issue:
https://github.com/fullcalendar/fullcalendar-react/issues/97
eg. there is no error by commenting out "datesSet":
https://codesandbox.io/s/full-calendar-date-increment-bug-forked-1p5e1?file=/src/DemoApp.jsx
It does seem like a bug and I'm not sure why `dateIncrement` is related, but if you want to supply dynamic events based on dates of the view, you can supply the events "as a function" which is designed for that:
https://fullcalendar.io/docs/events-function
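A rough sketch of that approach (`fetchEvents` is a hypothetical helper returning a promise of events):
```jsx
<FullCalendar
  events={(fetchInfo, successCallback, failureCallback) => {
    // invoked whenever the visible date range changes,
    // so no datesSet/state update loop is needed
    fetchEvents(fetchInfo.start, fetchInfo.end)
      .then(successCallback)
      .catch(failureCallback);
  }}
/>
```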
username_2: Same for me on 5.8.0.
fetchai/agents-aea | 555075323 | Title: Upgrade `generate-wealth` command
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently it is tied to specific test nets.
**Describe the solution you'd like**
Make the user aware which testnet this is targeting.
Add an optional `--sync` arg so the user can make the command wait until the faucet has released the funds.<issue_closed>
Status: Issue closed |
JavaSaBr/jME-Asset-Store | 263466705 | Title: Exercise #1
Question:
username_0: Each person should create their own branch from master.
Create math endpoints:
- GET methods under the path math/
  - /add - adds 2 numbers
  - /sub - subtracts the second from the first
  - /mult - multiplies them
  - /devide - divides the first by the second
  - /pow - raises the first number to the power of the second
- POST methods under the path test_post/
  - /uplaod_file - uploads a file to the server's disk
  - /send_text - accepts a large text and prints it to the console
- GET method at download/random - when called, a file with a small amount of random content should be generated and sent to the calling program.
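For illustration, one endpoint from this spec might look like this in Spring (a rough sketch; the controller name and parameter handling are assumptions, and it anticipates the validation requirement stated below):
```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/math")
public class MathController {

    @GetMapping("/add")
    public ResponseEntity<?> add(@RequestParam(required = false) Double first,
                                 @RequestParam(required = false) Double second) {
        if (first == null || second == null) {
            // respond with an appropriate code and explanation
            return ResponseEntity.badRequest().body("both 'first' and 'second' are required");
        }
        return ResponseEntity.ok(first + second);
    }
}
```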
All endpoints must be callable with any set of parameters; if any parameters are invalid or missing, a response with an appropriate status code and explanation must be returned.<issue_closed>
Status: Issue closed |
reifyhealth/lein-git-down | 1114598361 | Title: Difference between windows vs macos + linux
Question:
username_0: I'm not sure why, but I'm getting a difference in deps resolution between mac/linux and windows.
I've been using GitHub Actions to debug this, so I've got a current build log from which I'll pull the relevant segments.
[linux](https://github.com/username_0/fruit-economy/runs/4946666885?check_suite_focus=true#step:12:186):
```bash
Retrieving humbleui/humbleui/fc60fd319ba905dada9a6917a1a1632413c132d5/humbleui-fc60fd319ba905dada9a6917a1a1632413c132d5.jar from public-github
Created /home/runner/.gitlibs/libs/humbleui/humbleui/fc60fd319ba905dada9a6917a1a1632413c132d5/target/humbleui-0.1.0.jar
Compiling 2 source files to /home/runner/work/fruit-economy/fruit-economy/target/classes
Compiling fruit-economy.core
```
[macos](https://github.com/username_0/fruit-economy/runs/4946666907?check_suite_focus=true#step:12:186):
```bash
Retrieving humbleui/humbleui/fc60fd319ba905dada9a6917a1a1632413c132d5/humbleui-fc60fd319ba905dada9a6917a1a1632413c132d5.jar from public-github
Created /Users/runner/.gitlibs/libs/humbleui/humbleui/fc60fd319ba905dada9a6917a1a1632413c132d5/target/humbleui-0.1.0.jar
Compiling 2 source files to /Users/runner/work/fruit-economy/fruit-economy/target/classes
Compiling fruit-economy.core
```
[windows](https://github.com/username_0/fruit-economy/runs/4946666937?check_suite_focus=true#step:10:194):
```bash
Retrieving humbleui/humbleui/fc60fd319ba905dada9a6917a1a1632413c132d5/humbleui-fc60fd319ba905dada9a6917a1a1632413c132d5.jar from public-github
Created C:\Users\runneradmin\.gitlibs\libs\humbleui\humbleui\fc60fd319ba905dada9a6917a1a1632413c132d5\target\humbleui-0.1.0.jar
Could not find artifact humbleui:humbleui:jar:fc60fd319ba905dada9a6917a1a1632413c132d5 in central (https://repo1.maven.org/maven2/)
Could not find artifact humbleui:humbleui:jar:fc60fd319ba905dada9a6917a1a1632413c132d5 in clojars (https://repo.clojars.org/)
Could not transfer artifact humbleui:humbleui:jar:fc60fd319ba905dada9a6917a1a1632413c132d5 from/to public-github (git://github.com): Checksum validation failed, no checksums available
Failed to read artifact descriptor for humbleui:humbleui:jar:fc60fd319ba905dada9a6917a1a1632413c132d5
This could be due to a typo in :dependencies, file system permissions, or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
Uberjar aborting because jar failed: Could not resolve dependencies
```
No idea why this is happening, hopefully there's an obvious step I've been missing?
I'm assuming the problem is at this level and not [`leiningen`](https://github.com/technomancy/leiningen)?
Answers:
username_1: Thanks for the detailed report! It looks like something with the checksum step in the plugin is behaving differently on Windows. I'll leave this open until it can be more properly investigated.
username_0: Do you need me to do anything? I have ready access to a windows machine =)...
username_1: Thanks for the offer! PR's are welcome, so if you have the time/inclination, feel free to dive in 😄
Otherwise, I may reach out for validation when there is time to circle back on this, but unfortunately time has been in short supply lately so I can't guarantee when this will get some attention.
If it is indeed an issue with the checksum, [this is the code](https://github.com/reifyhealth/lein-git-down/blob/develop/src/lein_git_down/git_wagon.clj#L280-L311) that handles it.
username_0: It turns out to be a file separator problem...
On Mac:
```clojure
(str/split "~/.m2/repository/humbleui/humbleui/fc60fd319ba905dada9a6917a1a1632413c132d5/humbleui-fc60fd319ba905dada9a6917a1a1632413c132d5.jar" (re-pattern (str "\\" File/separatorChar)))
#_#_=> ["~" ".m2" "repository" "humbleui" "humbleui" "fc60fd319ba905dada9a6917a1a1632413c132d5" "humbleui-fc60fd319ba905dada9a6917a1a1632413c132d5.jar"]
```
On Win:
```clojure
TODO FROM WIN
```
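For reference, a portable split (a sketch of the general kind of fix, not necessarily what the PR does) matches either separator:
```clojure
(require '[clojure.string :as str])

;; split on both '/' and '\' instead of File/separatorChar only
(str/split "C:\\Users\\x\\.m2\\repository\\humbleui" #"[/\\]")
;; => ["C:" "Users" "x" ".m2" "repository" "humbleui"]
```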
username_1: Thanks for the PR! As for the above... Looks like the project it's trying to pull down, HumbleUI/JWM, doesn't have a recognized build file (eg: pom.xml, project.clj, or deps.edn), so it's probably falling into the [default jar resolve multimethod impl](https://github.com/reifyhealth/lein-git-down/blob/develop/src/lein_git_down/git_wagon.clj#L259-L271). I'm not sure why you're not getting errors on that one on other OS's, but not sure lein-git-down can build this library.
username_0: @username_1, you're right. For some reason the PowerShell version of Maven can't seem to build it, so I had to resort to `wsl`; however, I forgot that `wsl` has its own `.m2` repository, which it stores in `~/.m2/repository/` and not the Windows one at `/mnt/c/Users/<Username>/.m2/repository/`.
So I had to do: `<maven unzip location>/bin/mvn install:install-file -Dfile=jwm-b3fecb126a.jar -DpomFile=maven/META-INF/maven/io.github.humbleui/jwm/pom.xml -DlocalRepositoryPath=/mnt/c/Users/<Username>/.m2/repository/`.
Sigh...
Anyway, could you please cut a release on clojars or somewhere so I can use the latest version in my github actions?
Status: Issue closed
username_1: Just pushed the release. Thanks again for the detailed issue report and the fix!
username_0: Just a quick update @username_1 to say thank you =)...
[Managed to get this working](https://github.com/username_0/fruit-economy/actions/runs/1766008578) thanks to your help! |
rsummers11/CADLab | 636156546 | Title: Error when I run 3DCE project with python=3.5
Question:
username_0: @viggin, hello! Sorry to bother you. I get the error "ImportError: dynamic module does not define module export function (PyInit_bbox)" when my Python version is 3.5, just like in the photo. I'm looking forward to your answer. Thank you!
Answers:
username_0: This is the full traceback:
```
Traceback (most recent call last):
File "/data/lcy/github/lesion_detector_3DCE/./rcnn/tools/train.py", line 16, in <module>
from rcnn.core import callback, metric
File "/data/lcy/github/lesion_detector_3DCE/rcnn/tools/../../rcnn/core/callback.py", line 6, in <module>
from rcnn.tools.validate import validate
File "/data/lcy/github/lesion_detector_3DCE/rcnn/tools/../../rcnn/tools/validate.py", line 4, in <module>
from rcnn.tools.test import test_rcnn
File "/data/lcy/github/lesion_detector_3DCE/rcnn/tools/../../rcnn/tools/test.py", line 11, in <module>
from rcnn.core.loader import TestLoader
File "/data/lcy/github/lesion_detector_3DCE/rcnn/tools/../../rcnn/core/loader.py", line 12, in <module>
from rcnn.fio.rpn import get_rpn_testbatch, get_rpn_batch, assign_anchor
File "/data/lcy/github/lesion_detector_3DCE/rcnn/tools/../../rcnn/fio/rpn.py", line 26, in <module>
from rcnn.processing import bbox_transform
File "/data/lcy/github/lesion_detector_3DCE/rcnn/tools/../../rcnn/processing/bbox_transform.py", line 7, in <module>
from rcnn.cython import bbox
ImportError: dynamic module does not define module export function (PyInit_bbox)
```
username_0: OK, thank you for your answer and time. It's useful!
Status: Issue closed
|
facebook/react-native | 488308862 | Title: Running vanilla project from init, with `react-native run-ios` fails with glog install errors.
Question:
username_0: Output of manually running `pod install` in the ios folder:
```
[!] /bin/bash -c
set -e
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
set -e
PLATFORM_NAME="${PLATFORM_NAME:-iphoneos}"
CURRENT_ARCH="${CURRENT_ARCH}"
if [ -z "$CURRENT_ARCH" ] || [ "$CURRENT_ARCH" == "undefined_arch" ]; then
# Xcode 10 beta sets CURRENT_ARCH to "undefined_arch", this leads to incorrect linker arg.
# it's better to rely on platform name as fallback because architecture differs between simulator and device
if [[ "$PLATFORM_NAME" == *"simulator"* ]]; then
CURRENT_ARCH="x86_64"
else
CURRENT_ARCH="armv7"
fi
fi
export CC="$(xcrun -find -sdk $PLATFORM_NAME cc) -arch $CURRENT_ARCH -isysroot $(xcrun -sdk $PLATFORM_NAME --show-sdk-path)"
export CXX="$CC"
# Remove automake symlink if it exists
if [ -h "test-driver" ]; then
rm test-driver
fi
./configure --host arm-apple-darwin
# Fix build for tvOS
cat << EOF >> src/config.h
/* Add in so we have Apple Target Conditionals */
#ifdef __APPLE__
#include <TargetConditionals.h>
#include <Availability.h>
#endif
/* Special configuration for AppleTVOS */
#if TARGET_OS_TV
#undef HAVE_SYSCALL_H
#undef HAVE_SYS_SYSCALL_H
#undef OS_MACOSX
#endif
/* Special configuration for ucontext */
[Truncated]
3. `cd ios && pod install`
results in the same error, which is pasted above
Describe what you expected to happen:
I expected the vanilla application to build and launch in the simulator. A few other issues are similar, but their solutions did not work for me - and have all been marked as resolved. (#22703, #19774)
Answers:
username_0: Similar issue reported here: #18408, but closed. Solution there doesn't work for my system.
username_1: Today I started with react-native and ran into a similar problem: react-native run-ios produced an error I couldn't fix:
```
info Installing "DerivedData/Build/Products/Debug-iphonesimulator/BRFv5ReactNative.app"
An error was encountered processing the command (domain=NSPOSIXErrorDomain, code=2):
Failed to install the requested application
An application bundle was not found at the provided path.
Provide a valid path to the desired application bundle.
Print: Entry, ":CFBundleIdentifier", Does Not Exist
error Command failed: /usr/libexec/PlistBuddy -c Print:CFBundleIdentifier DerivedData/Build/Products/Debug-iphonesimulator/BRFv5ReactNative.app/Info.plist
Print: Entry, ":CFBundleIdentifier", Does Not Exist
. Run CLI with --verbose flag for more details.
Error: Command failed: /usr/libexec/PlistBuddy -c Print:CFBundleIdentifier DerivedData/Build/Products/Debug-iphonesimulator/BRFv5ReactNative.app/Info.plist
Print: Entry, ":CFBundleIdentifier", Does Not Exist
```
My fix for it was:
1. `react-native init TestProject`
2. `cd TestProject/ios && pod install`
3. `cd ..`
4. Open TestProject/ios/Pods/Pods.xcodeproj
5. New Scheme > Pods-TestProject > Compile for a Generic iOS Device + Compile for e.g. iPhone X Simulator
6. New Scheme > Pods-TestProjectTests > Compile for a Generic iOS Device + Compile for e.g. iPhone X Simulator
7. Close Pods.xcodeproj
8. Open TestProject/ios/TestProject.xcodeproj
9. Optional: Set the bundle identifier: com.yourcompany.testproject
10. Setup Code Signing for TestProject
11. Setup Code Signing for TestProjectTest
12. Compile TestProject for an attached iOS device

This will launch a Metro bundler terminal window. If it doesn't, check whether port 8081 is in use in Terminal and kill the process:
```
lsof -n -i4TCP:8081
kill -9 PID
```
The compile will fail anyway, because the libraries (Pods) can't be found/linked, due to a /bin/ vs /build/ mismatch.

13. Select the project for the settings.
14. Select the target TestProject.
15. Select All + Combined in Build Settings.
16. Search for PODS_BUILD_DIR.
17. Change the value to `${BUILD_DIR}/../build`, which overwrites the bin/ somehow.
18. Add these paths, keeping build/ instead of $(BUILD_DIR):
    - Add in TestProject > Library Search Paths: `build/$(CONFIGURATION)$(EFFECTIVE_PLATFORM_NAME)`
    - Add in TestProjectTest > Library Search Paths: `build/$(CONFIGURATION)$(EFFECTIVE_PLATFORM_NAME)`

For a later release, add all the release versions of pods and paths.

Now compile again: TestProject for an attached iOS device or simulator. This time the app will launch on the phone/simulator. `react-native run-ios` won't work, but starting from Xcode works for me now.
username_0: I found that it was probably due to the way Cocoapods was being installed, when I did this separately I managed to get the `run-ios` command to work for me. Not really a fix for this though, more of a hack.
username_2: Unable to install pods; it shows the below error message:
```
[!] /bin/bash -c
set -e
#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
set -e
PLATFORM_NAME="${PLATFORM_NAME:-iphoneos}"
CURRENT_ARCH="${CURRENT_ARCH}"
if [ -z "$CURRENT_ARCH" ] || [ "$CURRENT_ARCH" == "undefined_arch" ]; then
# Xcode 10 beta sets CURRENT_ARCH to "undefined_arch", this leads to incorrect linker arg.
# it's better to rely on platform name as fallback because architecture differs between simulator and device
if [[ "$PLATFORM_NAME" == *"simulator"* ]]; then
CURRENT_ARCH="x86_64"
else
CURRENT_ARCH="armv7"
fi
fi
export CC="$(xcrun -find -sdk $PLATFORM_NAME cc) -arch $CURRENT_ARCH -isysroot $(xcrun -sdk $PLATFORM_NAME --show-sdk-path)"
export CXX="$CC"
# Remove automake symlink if it exists
if [ -h "test-driver" ]; then
rm test-driver
fi
./configure --host arm-apple-darwin
# Fix build for tvOS
cat << EOF >> src/config.h
/* Add in so we have Apple Target Conditionals */
#ifdef __APPLE__
#include <TargetConditionals.h>
#include <Availability.h>
#endif
/* Special configuration for AppleTVOS */
#if TARGET_OS_TV
#undef HAVE_SYSCALL_H
#undef HAVE_SYS_SYSCALL_H
#undef OS_MACOSX
#endif
/* Special configuration for ucontext */
#undef HAVE_UCONTEXT_H
#undef PC_FROM_UCONTEXT
#if defined(__x86_64__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__rip
#elif defined(__i386__)
#define PC_FROM_UCONTEXT uc_mcontext->__ss.__eip
#endif
EOF
# Prepare exported header include
EXPORTED_INCLUDE_DIR="exported/glog"
mkdir -p exported/glog
cp -f src/glog/log_severity.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/raw_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/stl_logging.h "$EXPORTED_INCLUDE_DIR/"
cp -f src/glog/vlog_is_on.h "$EXPORTED_INCLUDE_DIR/"
checking for a BSD-compatible install... /usr/bin/install -c
[Truncated]
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for arm-apple-darwin-gcc... /Applications/Xcode 10.3.0.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -arch armv7 -isysroot /Applications/Xcode 10.3.0.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS12.4.sdk
checking whether the C compiler works... no
/Users/harsh.mehta/Library/Caches/CocoaPods/Pods/External/glog/2263bd123499e5b93b5efe24871be317-1f3da/missing: Unknown `--is-lightweight' option
Try `/Users/harsh.mehta/Library/Caches/CocoaPods/Pods/External/glog/2263bd123499e5b93b5efe24871be317-1f3da/missing --help' for more information
configure: WARNING: 'missing' script is too old or missing
configure: error: in `/Users/harsh.mehta/Library/Caches/CocoaPods/Pods/External/glog/2263bd123499e5b93b5efe24871be317-1f3da':
configure: error: C compiler cannot create executables
See `config.log' for more details
[!] [!] Xcodeproj doesn't know about the following attributes {"inputFileListPaths"=>[], "outputFileListPaths"=>[]} for the 'PBXShellScriptBuildPhase' isa.
If this attribute was generated by Xcode please file an issue: https://github.com/CocoaPods/Xcodeproj/issues/new
[!] [!] Xcodeproj doesn't know about the following attributes {"inputFileListPaths"=>[], "outputFileListPaths"=>[]} for the 'PBXShellScriptBuildPhase' isa.
If this attribute was generated by Xcode please file an issue: https://github.com/CocoaPods/Xcodeproj/issues/new
``` |
libp2p/rust-libp2p | 711754213 | Title: mdns recv_buffer too small for some configurations
Question:
username_0: My PC has a couple of interfaces, with something like 16 IP addresses in total. This results in mdns responses of a size of approximately 3000 bytes. The problem is, [recv_buffer](https://github.com/libp2p/rust-libp2p/blob/aaa6e4add30c6a905c8a00ec85ef6912d1a89b49/protocols/mdns/src/service.rs#L125) is only 2048 bytes, which obviously causes parsing of responses to fail with an EOF error.<issue_closed>
Status: Issue closed |
HEPCloud/decisionengine_modules | 588027447 | Title: Source NerscJobInfo fails to make a connection to nersc
Question:
username_0: 2018-12-04 08:40:45,597 - decision_engine - TaskManager - NerscJobInfo - ERROR - Exception running source NerscJobInfo Failed to establish session to https://newt.nersc.gov/newt/
login/
Please investigate. I have verified that permissions on the authentication secret are OK and that the same API call works interactively from the command line.<issue_closed>
Status: Issue closed |
DeckHack/discoveries | 238288709 | Title: Controllers
Question:
username_0: Controllers handle a lot of stuff inside TweetDeck, and to figure out a lot of the behind-the-scenes stuff, we should document them as much as possible.
### Controllers
* [ ] auth `TD.controller.auth`
* [ ] clients `TD.controller.clients`
* [x] [columnManager `TD.controller.columnManager`](https://github.com/DeckHack/discoveries/blob/master/docs/controller/columnManager.md)
* [ ] feedManager `TD.controller.feedManager`
* [ ] FeedPoller `TD.controller.FeedPoller()`
* [ ] feedScheduler `TD.controller.feedScheduler`
* [x] [filterManager `TD.controller.filterManager`](https://github.com/DeckHack/discoveries/blob/master/docs/controller/filterManager.md)
* [ ] init `TD.controller.init`
* [ ] notifications `TD.controller.notifications`
* [x] [progressIndicator `TD.controller.progressIndicator`](https://github.com/DeckHack/discoveries/blob/master/docs/controller/progressIndicator.md)
* [ ] scheduler `TD.controller.scheduler`
* [ ] stats `TD.controller.stats`
* [ ] upgrade `TD.controller.upgrade` |
xtermjs/xterm.js | 985097632 | Title: Implement canvas measure strategy
Question:
username_0: I have seen some reports like this:

I had a debug session with someone and there were some interesting findings:
- It happened with monospace, Courier New, Consolas, and Cascadia Code, but not Arial (at least it wasn't obvious)
- Switching fontSize (forcing the char atlas to get refreshed) did not fix the issue
- Increasing the letter spacing mostly worked around the issue
Still not entirely sure why it's happening, but I believe handling this TODO will probably solve the issue and I can get it retested after we've done it:
https://github.com/xtermjs/xterm.js/blob/353a8c5388e3242af4957da5b89fc50a6e8ab663/src/browser/services/CharSizeService.ts#L54-L54
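For reference, a canvas-based measurement would look roughly like this (a minimal sketch, not the eventual implementation):
```typescript
// measureText returns sub-pixel glyph widths, whereas a DOM element's
// offsetWidth is rounded to whole pixels
const fontSize = 15;
const fontFamily = 'Consolas';
const ctx = document.createElement('canvas').getContext('2d')!;
ctx.font = `${fontSize}px ${fontFamily}`;
const charWidth = ctx.measureText('W').width;
```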
Answers:
username_1: I get that display (Firefox on Linux, and it's _very_ consistent), but if I use the mouse to get a highlight:

Remove the highlight and it goes right back to the original look. So whatever causes this, it changes when the highlighting is applied. |
NJACKWinterOfCode/cruzz | 386470707 | Title: Add issue template
Question:
username_0: The issue template is missing from the repo.
Answers:
username_1: working on this issue
username_1: What should I add to it?
username_0: Send a PR only after assignment from a maintainer; don't work on more than 2 issues at a time; make sure the issue contains relevant screenshots; include a checkmark indicating whether they want to work on it or not -- a well-written bulleted list of these points.
And finally, a section to mention the expected time to fix the issue.
rubyzip/rubyzip | 106191458 | Title: Files in zip corrupted on Windows
Question:
username_0: Hi,
I'm generating zips with rubyzip. I posted an issue yesterday where clients couldn't extract the zips. I upgraded to the latest rubyzip; Windows clients are now saying that they can extract the zips, but that the contents (images) of the zip are corrupted.
```ruby
Zip::File.open(zip_path, Zip::File::CREATE) do |zipfile|
  names = []
  photo_paths.each do |path|
    photo_filename = path.split('/').last
    event_id = photo_filename.split(".").first.split("_")[1]
    photo_id = photo_filename.split(".").first.split("_").last
    photo_filename = "nc_#{event_id}_#{photo_id}#{File.extname(path)}"
    zipfile.add(photo_filename, path)
  end
end
```
Has anyone run into these issues before? It works fine for Mac users, and the files themselves are fine when I send them one by one.
Status: Issue closed
Answers:
username_0: Sorry, I had this complaint from a client; I could not reproduce it.
username_1: I am having this issue right now. Everything works fine on macOS, but Windows complains that the zip is invalid. I tried WinRAR and was able to unzip certain files, but not all of them, and I still got a message that the zip is corrupt.
At first I thought it could be because of non-ASCII filenames, but that was not the case; I made sure ```Zip.unicode_names = true``` was set, but that did not resolve the problem.
mxck/react-native-material-menu | 812664514 | Title: How to work with hook
Question:
username_0: [import React, {useRef} from 'react';
import {
View,
Text,
StyleSheet,
Image,
Pressable,
Dimensions,
Keyboard,
Platform,
} from 'react-native';
import {
moderateScale,
screenPadding,
verticalScale,
horizontalScale,
} from '../helpers/functions';
import colors from '../helpers/colors';
import imageConstant from '../helpers/images';
import {isIphoneX} from 'react-native-iphone-x-helper';
import Menu, {MenuItem, MenuDivider} from 'react-native-material-menu';
function TopHeader(props) {
const _menu = useRef(null);
function showMenu() {
_menu.show();
}
function onMenuPress(id) {
_menu.hide();
Keyboard.dismiss();
props.onMenuPress(id);
}
const menus =
props.isMenu && props.menuList && props.menuList.length > 0
? props.menuList.map((item, index) => (
<MenuItem
key={index.toString()}
style={styles.menu}
textStyle={styles.menuItemText}
onPress={() => onMenuPress(index)}>
{item}
</MenuItem>
))
: null;
return (
<View style={{backgroundColor: colors.blueText}}>
<View style={styles.headerView}>
<Pressable
hitSlop={{top: 20, bottom: 20, left: 50, right: 40}}
onPress={() => {
props.onBackPress();
}}>
<Image style={styles.back} source={imageConstant.back} />
</Pressable>
<Text style={styles.headerTitle}>{props.title}</Text>
[Truncated]
includeFontPadding: false,
},
headerView: {
width: Dimensions.get('window').width,
alignItems: 'center',
flexDirection: 'row',
minHeight: verticalScale(60),
paddingHorizontal: moderateScale(16),
},
menu: {
width: horizontalScale(200),
},
menuItemText: {
fontSize: moderateScale(16),
color: colors.fontPrimary,
fontFamily: 'GraphikWeb-Regular',
},
});
```
Answers:
username_1: I updated the README with a working example.
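For anyone landing here: with hooks, the instance lives on the ref's `.current`, so the calls in the snippet above need to go through it. A minimal sketch, assuming the same `Menu` API used above:
```jsx
import React, {useRef} from 'react';
import {Text} from 'react-native';
import Menu, {MenuItem} from 'react-native-material-menu';

export default function Header() {
  const menuRef = useRef(null);
  // a ref created by useRef exposes the component instance on .current
  const showMenu = () => menuRef.current.show();
  const hideMenu = () => menuRef.current.hide();

  return (
    <Menu ref={menuRef} button={<Text onPress={showMenu}>Menu</Text>}>
      <MenuItem onPress={hideMenu}>Item 1</MenuItem>
    </Menu>
  );
}
```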
Status: Issue closed
|
amiaopensource/ffmprovisr | 210314683 | Title: [wish] formula for ffmprovisr
Question:
username_0: This way, any time
```
brew update
brew upgrade
```
is run, `ffmprovisr` will also be updated to the latest release, if needed.
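Such a formula could be as small as unpacking a tagged release into the prefix; a rough sketch (the URL, version and checksum are placeholders):
```ruby
class Ffmprovisr < Formula
  desc "Repository of useful FFmpeg commands for archivists"
  homepage "https://amiaopensource.github.io/ffmprovisr/"
  url "https://github.com/amiaopensource/ffmprovisr/archive/vX.Y.tar.gz" # placeholder
  sha256 "0000000000000000000000000000000000000000000000000000000000000000" # placeholder

  def install
    # ship the static site so index.html can be opened locally, offline
    prefix.install Dir["*"]
  end
end
```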
Answers:
username_1: What do you have in mind for this, just that it clones and pulls down the most recent version and someone can open up index.html locally by running `ffmprovisr`?
One of my colleagues asked me if I'd considered converting to a CLI cheat-sheet interface, which I think is interesting but unsure how we would be able to realistically do that, although if that's something you're interested in, I'm happy to brainstorm in this thread about it.
username_1: Something like a Ruby gem that would act as a simplified `man` page, so you could call it and it would pull up the names of recipes, and then you can, like, pass in 4 for this recipe and it would give you the line. He was thinking about something like this: http://cheat.errtheblog.com/ for short, concise documentation for when you need a memory-jog without having to leave the Terminal. This is my #1 usecase for ffmprovisr.
username_0: I was thinking of just downloading the folder, so you can open `index.html` locally in the browser. You don't need internet access to use it. Yet it's automatically updated every time the Homebrew stuff is updated. That's my main point, related to my activity in countries with sometimes catastrophic access to the Web.
username_1: Yes, making sure ffmprovisr opened and worked fine locally was a big consideration when I started this project!
username_0: The update of the local copy would be automated (e.g. with `vrecord` or any other Homebrew formula).
username_0: Yet the idea of having a CLI cheat-sheet is more than interesting.
username_1: I've also been experimenting with Electron.js to turn ffmprovisor into a functioning GUI, although I'm still many many hours away from a functional prototype.
username_1: So this is a thing, but it is a tedious thing to keep building...
<img width="757" alt="screen shot 2017-03-10 at 19 23 46" src="https://cloud.githubusercontent.com/assets/3260492/23818730/1e968060-05cc-11e7-85b5-8156244b6558.png">
username_0: I modified the title, because we are discussing here three distinct things here:
- a Brew formula that installs locally the original Web version
- a CLI cheat-sheet
- a GUI for ffmprovisr
username_2: Oh my God that is amazing
username_0: @username_1 Would it not be better to choose first an action and then the needed files? In fact, no input file is needed for test patterns, for example; one is needed for the majority of the recipes, and more than one for some.
username_1: @username_0 Trust that this is a proof-of-concept and a smart, functional app with good UX would take at least a month of full-time dev work. 😉
username_0: We use internally a web interface that allows creating all the file formats we provide daily to the clients. This works in any modern browser, without one having to be Terminal-literate. It's done by a Perl script called from an HTML page using CSS. I guess «something» similar should be «translatable» into an app. One week should be sufficient ;-)
username_1: Oh, was that built in a week? 😹
username_0: One big weekend… but I’m fluent in Perl. It should include:
16:9 <-> 4:3 with pillar or letter-boxing
image (and sound) speed change, mainly 16 or 18 fps, or 24 <-> 25 fps
FFV1, ProRes 422 HQ or 4444, H.264 or H.265
WAVE, BWF, FLAC, AAC
with QC and MD5 (kudos to MediaConch!)
username_1: 👏 open the source 👏
username_0: The modern open source community does not like Perl (I mean the classic one, before 6) and prefers Python or PHP. PHP especially causes me heavy allergies… ;-)
username_0: 1) has been resolved in https://github.com/amiaopensource/ffmprovisr/pull/181 by @privatezero (thank you!) and the brew formula can now be implemented.
username_0: I presume a CLI cheat-sheet and a GUI for ffmprovisr are not on the agenda. Closing.
Status: Issue closed
|
docker/compose | 153041233 | Title: .env file ignored
Question:
username_0: Hi, I'm running `docker-compose` and the `.env` file is ignored.
Location of the files:
```
$ ls -la | grep -e ".env" -e "docker-compose.yml"
-rw-rw-r-- 1 vlado vlado 2878 May 4 17:17 docker-compose.yml
-rw-rw-r-- 1 vlado vlado 171 May 4 16:42 .env
```
Environment:
- engine: `1.11.1, build 5604cbe` (`1.11.1-0~xenial`)
- compose: `1.7.0, build 0d7bf73`
Docker compose located in `~/bin/docker-compose`.
Thanks!
Answers:
username_0: This build should output:
```
testing
testing
```
https://travis-ci.org/username_0/docker-compose-test/builds/127832493
username_1: The `.env` file has to be present in the working directory. Could that be the issue you're seeing?
username_2: https://gist.github.com/username_2/1b27afbea0a992a7518807c6e6079167
The `.env` file is in the same folder as everything else.
username_2: From https://docs.docker.com/compose/env-file/
If that means that the `.env` file is used **only** for variable substitution then it is safe to ignore my comments
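For clarity, "substitution" means Compose interpolates `.env` values into the file itself at parse time; a minimal sketch:
```yaml
version: "2"
services:
  app:
    # ${FOO} is replaced from .env when the file is parsed,
    # but FOO is still not set inside the running container
    image: "myimage:${FOO}"
```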
username_3: If you want the environment variables to turn up inside the container, you have to explicitly pass them through in `docker-compose.yml`:
```yaml
version: "2"
services:
nothing:
image: debian:jessie
command: /bin/false
environment:
- FOO
```
username_0: Comment by @username_3 solved my problem. Thanks :+1:
Status: Issue closed
username_4: I just want to comment on this, because I keep forgetting it.
Not only do you need to set the env names in `docker-compose.yml` like this:
```
services:
my_service:
environment:
- FOO
```
You also need to write the value in the `.env` file *without* quotes:
```
# this works
FOO=bar
# this doesn't work
FOO="bar"
``` |
MicrosoftDocs/azure-docs | 339320160 | Title: Unable to connect to cluster from VS.
Question:
username_0: I managed to run the docker image on my windows 10 machine and can view the cluster using the SF explorer in the browser. I am also able to connect to the cluster using the SF CLI command sfctl cluster select --endpoint http://localhost:19080. All nodes are healthy. However from a service fabric project in VS I am unable to publish my application. When entering the connection endpoint (localhost:19080) in the "Publish Service Fabric Application" dialog I get a red cross next to it with the tooltip message "Failed to contact the server: Please try again later or get help from "How to configure secure connections"." Note that I have no problems when connecting to a local fabric cluster running on my windows machine.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5a831bb7-e76a-a4d3-3312-36d4090e1b80
* Version Independent ID: f81a40e6-9162-1a1a-7fbf-6b558dd1e303
* Content: [Set up Azure Service Fabric Linux cluster on Windows](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-local-linux-cluster-windows)
* Content Source: [articles/service-fabric/service-fabric-local-linux-cluster-windows.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-local-linux-cluster-windows.md)
* Service: **service-fabric**
* GitHub Login: @username_1
* Microsoft Alias: **username_1**
Answers:
username_1: Just to narrow down the issue, are you able to deploy that application using the SF CLI (sfctl) or the PowerShell modules?
username_1: @username_0 and @radderz, I discussed this with some folks and tried it out on my own, and it worked through Visual Studio. Therefore, we can in fact connect to the cluster running in a container from Visual Studio. The issue was with the documentation, which will be updated very shortly.
username_1: @username_0 and @radderz can you guys confirm if this is fixed now? We did update the documentation.
username_2: It looks like this issue has been addressed, so I am closing this. Thanks!
Status: Issue closed
|
abrensch/brouter | 886786742 | Title: feature request: Add public transportation provider networks
Question:
username_0: Use case: I want to use the train / bus to get back and want to plan my tour accordingly.
References for getting started:
https://ptna.openstreetmap.de
http://www.öpnvkarte.de
Answers:
username_1: Isn't it better to keep it separate, within the respective cycling and transportation resources?
zephyrxo/cmput250lab_marvermette | 1026996600 | Title: Lab-005
Question:
username_0: **Description:** Entering the right house gives you an error
**Environment:** Windows 10 RPG Maker MV
**Steps to reproduce:**
1. Walk to the house on the right
2. Attempt to enter it
**Expected behavior:** You are able to enter the house
**Actual behavior:** You cannot enter the house and get an error message
**Priority:** High |
Azure/azure-cli | 1023083058 | Title: Internal Server Error on what-if analysis after upgrading to 2.28.0 from 2.26.0
Question:
username_0: **Describe the bug**
Performing a what-if analysis results in an internal server error, and I'm unable to decipher the problem. The actual deployment then continues to work fine on 2.28.0.
Reverting to AZ 2.26.0 results in the what-if and the deployment both working.
See the full What-If debug log here: [https://github.com/Azure/Aks-Construction/runs/3863296553?check_suite_focus=true#step:11:146](https://github.com/Azure/Aks-Construction/runs/3863296553?check_suite_focus=true#step:11:146)
```
DEBUG: cli.azure.cli.core.sdk.policies: ***"status":"Failed","error":***"code":"InternalServerError","message":"Encountered internal server error while processing the deployment what-if request. Diagnostic information: timestamp '20211011T200722Z', scope '***', tracking id '74f6c7cc-d330-4264-ad96-77c3a8ae7f55', request correlation id '2fd86693-ec6c-4931-9c2a-7e8b413496d5'."***
DEBUG: cli.azure.cli.core.util: azure.cli.core.util.handle_exception is called with an exception:
DEBUG: cli.azure.cli.core.util: Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/azure/cli/core/commands/__init__.py", line 691, in _run_job
result = cmd_copy(params)
File "/usr/local/lib/python3.9/site-packages/azure/cli/core/commands/__init__.py", line 328, in __call__
return self.handler(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/azure/cli/core/commands/command_operation.py", line 121, in handler
return op(**command_args)
File "/usr/local/lib/python3.9/site-packages/azure/cli/command_modules/resource/custom.py", line 772, in what_if_deploy_arm_template_at_resource_group
return _what_if_deploy_arm_template_at_resource_group_core(cmd, resource_group_name,
File "/usr/local/lib/python3.9/site-packages/azure/cli/command_modules/resource/custom.py", line 795, in _what_if_deploy_arm_template_at_resource_group_core
what_if_result = _what_if_deploy_arm_template_core(cmd.cli_ctx, what_if_poller, no_pretty_print, exclude_change_types)
File "/usr/local/lib/python3.9/site-packages/azure/cli/command_modules/resource/custom.py", line 897, in _what_if_deploy_arm_template_core
raise CLIError(err_message)
knack.util.CLIError: InternalServerError - Encountered internal server error while processing the deployment what-if request. Diagnostic information: timestamp '20211011T200722Z', scope '***', tracking id '74f6c7cc-d330-4264-ad96-77c3a8ae7f55', request correlation id '2fd86693-ec6c-4931-9c2a-7e8b413496d5'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/azure/cli/core/commands/arm.py", line 109, in handle_template_based_exception
raise CLIError(ex.inner_exception.error.message)
AttributeError: 'CLIError' object has no attribute 'inner_exception'
```
**To Reproduce**
Use the [bicep files](https://github.com/Azure/Aks-Construction/tree/main/bicep) and the [parameter file](https://github.com/Azure/Aks-Construction/blob/gb-akv/.github/workflows_dep/AksDeploy-ByoVnet.parameters.json).
Run the following command, substituting two subnet resource IDs and an arbitrary resource name.
```bash
az deployment group what-if --debug -f bicep/main.bicep -g $RG -p .github/workflows_dep/AksDeploy-ByoVnet.parameters.json -p resourceName=$RESNAME byoAKSSubnetId=*** byoAGWSubnetId=***
```
**Expected behavior**
I'd like a clear error message of what's actually failed, or for it just to work as it did in the previous CLI version.
**Environment summary**
GitHub Az CLI Action.
**Additional context**
https://github.com/Azure/Aks-Construction/runs/3863296553?check_suite_focus=true#step:11:146
Answers:
username_1: @Shenglol Could you please take a look at this issue?
username_2: Found the root cause. This is indeed a bug in the What-If engine. We added a normalization process for Azure KeyVault access policies in 2021-01-01, but there's a case we failed to handle. I'm going to fix it.
username_0: Hi @username_2 - my workaround of using an older version of the Az CLI is broken because of https://github.com/Azure/cli/issues/56 - so I'm forced to use v2.30.0 of the Az CLI until either
1. This feature is implemented https://github.com/Azure/login/issues/164
2. I stop using the Azure Login action, decompose the json secret myself and just do an az login command myself.
Do you have a view of what CLI version your fix will make it into?
username_2: This is a bug in our service with API version 2021-01-01 and after. We have checked in a fix and it should be rolled out in about 2 weeks. Once it's rolled out, any CLI version should work.
username_3: Hi, any update on the rollout progress? I'm getting the same error with az version:
```
{
"azure-cli": "2.31.0",
"azure-cli-core": "2.31.0",
"azure-cli-telemetry": "1.0.6",
"extensions": {}
}
```
Is there any workaround I can apply while the fix is rolling out, e.g. switching specific Bicep resources to a different API version or something else?
Interestingly, the `--what-if` flag in `az deployment sub create --what-if` works fine for me, but I get the error below when trying to deploy my Bicep with `az deployment sub create --confirm-with-what-if`:
```
InternalServerError - Encountered internal server error while processing the deployment what-if request. Diagnostic information: timestamp '20211213T025031Z', scope '/subscriptions/***', tracking id 'de477509-e614-4795-8cdc-91d5e082f640', request correlation id '4500c47a-afa4-4215-948d-a7078e308894'.
```
Any hints for a workaround are greatly appreciated.
username_3: Additional note: when I run `az deployment` with the `--debug` flag, the Bicep code works fine and everything is deployed.
If I run it without `--debug` (which is the preferable way, as it's less noisy in CI/logs),
this issue strikes back:
```
InternalServerError - Encountered internal server error while processing the deployment what-if request. Diagnostic information: timestamp '20211213T030820Z', scope '/subscriptions/***', tracking id '2bc75173-fc1b-4c6c-9949-dda4dddb1a89', request correlation id '719c792b-e545-42ef-89b7-66521e78376d'.
```
username_2: @username_3 This is most likely because the requests were picked up by worker jobs in different regions. Up to this point, the fix has been rolled out to 5 regions, and it still needs more time to be fully deployed.
username_3: @username_2 Can I somehow affect which workers pick up my jobs? We have resources in the AU East and Southeast regions, and this is affecting our productivity a lot - deployments are failing randomly and there is nothing we can do to work around it.
username_0: On previously working actions that were using 2.30.0, with no template/parameter changes,
moving to 2.31.0 has not fixed the issue.
username_2: @username_0 I just realized you might be using the MSFT tenant which is onboarded to deployments preview features...and this appears to be a new bug we just identified in the recently added preview feature that enables `reference` function preview in What-If.
The bug is basically an unhandled edge case where a null reference exception will be thrown if the referenced resource in the template does not contain the `properties` property. Unfortunately, that happens to be the case in your [generated ARM template file](https://github.com/Azure/Aks-Construction/blob/main/bicep/compiled/main.json), which contains two `Microsoft.ManagedIdentity/userAssignedIdentities` whose `properties` is not emitted by Bicep because it is read-only.
The current workaround would be to replace all user assigned identity property accesses in [`main.bicep`](https://github.com/Azure/Aks-Construction/blob/main/bicep/main.bicep) with full mode `reference` functions, to opt out of `reference` function evaluation in What-If. For example:
```
appGwIdentity.properties.principalId
=>
reference(appGwIdentity.id, appGwIdentity.apiVersion, 'Full').properties.principalId
```
We have committed a fix for this, but given the upcoming holiday deployment freeze, it might take an extended time for the fix to be rolled out. My apologies for any inconvenience caused!
username_2: @username_3 There's no way to control that. Do you mind sharing your ARM template by emailing it to me at <EMAIL>? I am curious to see if I can provide a workaround, but I won't be able to tell without seeing the contents of the template.
username_0: Yes - I'm using the Microsoft tenant. Thanks for the workaround note. |
ohsewon/test | 354528850 | Title: [PR][CLOSED] [Sink] Support 32bit systems.
Question:
username_0: **Issue by [myungjoo-ham](https://github.sec.samsung.net/myungjoo-ham)**
_Tuesday Jul 03, 2018 at 01:25 GMT_
_Originally opened as https://github.sec.samsung.net/STAR/nnstreamer/pull/202_
----
Use %zd for size_t
Fixes #200
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
----
_**[myungjoo-ham](https://github.sec.samsung.net/myungjoo-ham)** included the following code: https://github.sec.samsung.net/STAR/nnstreamer/pull/202/commits_<issue_closed>
Status: Issue closed |
mwang87/MetabolomicsSpectrumResolver | 666392543 | Title: Link to original MS/MS file doesn't work
Question:
username_0: <img width="1131" alt="Screen Shot 2020-07-27 at 17 40 23" src="https://user-images.githubusercontent.com/5069736/88561872-43a33b00-d030-11ea-879e-9839293043fb.png">
The link that comes next to the accession doesn't point to the right place. It has "None" in the URL, so this might be an issue with the template.
Answers:
username_1: Ah, that probably didn't get properly updated when switching from the legacy USIs to the new USIs. Thanks for reporting.
username_0: In general, if there's a Python function that takes a USI and generates a URL for either the resource, or to directly download the file, that would be really useful.
Hopefully after you check out #107, it would be more obvious how somebody could make use of functions in this repo :)
username_1: The `parse_usi` function in `parsing.py` returns a tuple of an `MsmsSpectrum` object and a URL for the resource, so you can get it from there.
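For illustration, a minimal usage sketch under the assumptions above (the USI string is a made-up placeholder, and the import path assumes you are running inside this repository):
```python
# Hypothetical usage of parse_usi as described above.
from parsing import parse_usi

usi = "mzspec:EXAMPLE:example_file:scan:1"  # placeholder USI
spectrum, source_link = parse_usi(usi)  # (MsmsSpectrum, resource URL)
print(spectrum, source_link)
```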
We don't provide functionality to download spectrum files because that's not a universal concept across all repositories. GNPS and MassIVE contain raw spectral data and thus deal with files, but for example MS2LDA consists of machine learned MS2 spectra patterns, while Massbank contains individual reference spectra.
The use case for the spectrum resolver wasn't really to provide a stand-alone API, but to have it just be the current web resource. We can consider exposing some API functionality, but we'll have to see how to refactor the code somewhat to that end.
Status: Issue closed
|
kubernetes/ingress-nginx | 1094008982 | Title: Allow plugins override $remote_addr for X-Forwarded-For and X-Real-Ip
Question:
username_0: ## Goal
We're using ingress-nginx as the origin for a Cloudflare proxy. The problem is that Cloudflare already includes an [`X-Forwarded-For` header](https://developers.cloudflare.com/fundamentals/get-started/http-request-headers), and we would like to preserve the original `X-Forwarded-For` header by resetting it with `CF-Connecting-IP` from Cloudflare, instead of Nginx overriding it with `$remote_addr`.
## Problem
We tried to solve this with a plugin `_M.rewrite()` [as suggested](https://github.com/kubernetes/ingress-nginx/issues/6358#issuecomment-722351158) in a related issue:
```lua
function _M.rewrite()
ngx.var.remote_addr = ngx.var.http_cf_connecting_ip
end
```
With the intention that the template does its stock behavior in:
```nginx
{{ $proxySetHeader }} X-Real-IP $remote_addr;
. . .
{{ $proxySetHeader }} X-Forwarded-For $remote_addr;
```
The problem is that `$remote_addr` is special and Lua can't override it.
## Proposed Solution
Modify the template to introduce an overridable variable for plugins, something like:
```nginx
set $x_forwarded_remote_addr $remote_addr;
```
Then change the assignments to use the new variable like this:
```nginx
{{ $proxySetHeader }} X-Real-Ip $x_forwarded_remote_addr;
. . .
{{ $proxySetHeader }} X-Forwarded-For $x_forwarded_remote_addr;
```
This way plugins can do:
```lua
function _M.rewrite()
ngx.var.x_forwarded_remote_addr = ngx.var.http_cf_connecting_ip
end
```
So upstreams can get unmodified `X-Forwarded-For` and `X-Real-Ip` headers.
I'm ready to send a PR with the changes.
Thoughts?
Answers:
username_1: Hey there,
I'm unsure if that's necessary, isn't something like this what you need?
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header
username_0: @username_1 thanks for your answer. The solution you propose would work when the entire nginx is behind a proxy, so it affects all locations at the nginx level (ConfigMap); what I'm suggesting here would allow plugins to selectively modify the header per location.
There is no current annotation that allows changing the X-Forwarded-For header per Ingress, so a plugin that does it would do.
thoughts?
username_0: @username_1 thanks for the help. I actually found a solution by combining the ConfigMap you mentioned and a snippet annotation at the Ingress level, thanks to the included module [`headers-more-nginx-module`](https://github.com/openresty/headers-more-nginx-module#more_set_input_headers):
When an nginx Ingress is behind Cloudflare:
```yaml
nginx.ingress.kubernetes.io/configuration-snippet: more_set_input_headers -r X-Forwarded-For $http_cf_connecting_ip; more_set_input_headers -r X-Real-Ip $http_cf_connecting_ip;
```
For regular cases where nginx is serving an ingress without cloudflare:
```yaml
nginx.ingress.kubernetes.io/configuration-snippet: more_set_input_headers -r X-Forwarded-For $remote_addr; more_set_input_headers -r X-Real-Ip $remote_addr;
```
Config map:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: wm-ingress-ingress-nginx-controller
namespace: ingress-nginx
data:
allow-snippet-annotations: "true"
enable-real-ip: "true"
```
Status: Issue closed
|
DestinyItemManager/DIM | 303035637 | Title: Nightfall challenge cards show 100%
Question:
username_0: This is probably related to how the new progress emblems were showing, but the nightfall challenge card shows a green border on the top and has 100% on the icon.

Answers:
username_1: That's not the only problem with the challenge card. Somehow Singe descriptions are all wrong.
username_2: Oh interesting, I don't have one of those. I can tweak the logic for "complete" - in this case it has both an integer and a regular objective, and I hadn't seen that before.
username_1: Yeah, you need to finish a Prestige Nightfall to get this one.
username_1: 
Status: Issue closed
|
mschlenstedt/LoxBerry-Plugin-WU4Lox | 317758810 | Title: Software error
Question:
username_0: LoxBerry V1.2.1 - Wunderground4Loxone V4.1.0
Software error:
HTML::Template::param() : attempt to set parameter 'server.stationid' with an array ref - parameter is not a TMPL_LOOP! at /opt/loxberry/webfrontend/htmlauth/plugins/wu4lox/index.cgi line 552.
Depending of what you have done, report this error to the plugin developer or the LoxBerry-Core team.
Further information you may find in the error logs.

Answers:
username_0: Hallo,
vielen Dank für die Info.
Kann ich die gefixte Version schon irgendwo downloaden und testen?
VG
Heinz |
remote-job-boards/software-engineering | 651964014 | Title: Rockbite Games: Kubernetes/Spring Engineer
Question:
username_0: **Published on:** June 15, 2020
**Original Job Post:** https://weworkremotely.com/remote-jobs/rockbite-games-kubernetes-spring-engineer

[Sandship](https://play.google.com/store/apps/details?id=com.rockbite.sandship) is a complicated, popular game created by Rockbite Games. Currently, it is expanding and scaling like crazy in terms of its user base. As we are facing new multilayer issues that came with that, we know most of the system needs redoing, proper issue diagnosing, and solutions. We are looking for an "I can solve all of the backend problems" person.
You will be responsible for diagnosing and analyzing the current state of high-load applications experiencing a multitude of issues, and finding strategies for solving them one by one till the system is refactored, stable, and perfect.
**To handle this we expect true Expertise:**
- Strong experience with Java Spring applications on Kubernetes
- Microservices expertise
- Extremely high-load backend experience (5000 req per second or more on a heavy application)
- MongoDB ins and outs
- Understanding of low-level details and optimizations
- Attention to detail, and thorough testing of every single possible issue, anticipating problems before they go to production to millions of players
**It will be super nice if you have:**
- Experience in networking; understanding Nginx ingress, load balancers and the whole cluster ecosystem is a big bonus
- Contributions to high-load or Spring-connected Open Source projects
**This is a fully remote role with a start date as soon as possible.**<issue_closed>
Status: Issue closed |
opensalt/opensalt | 298383757 | Title: Allow weak ETags when checking if a response has been modified
Question:
username_0: When a web server compresses a response, it turns an ETag into a weak ETag (as it is no longer byte-for-byte) which still has the equivalent content.
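For context, a small sketch of the weak-comparison rule from RFC 7232 under which such tags are treated as equivalent (illustrative Python, not the project's actual code):
```python
def etags_match_weak(request_etag: str, response_etag: str) -> bool:
    # RFC 7232 weak comparison: ignore the W/ prefix on either side.
    def opaque(tag: str) -> str:
        return tag[2:] if tag.startswith("W/") else tag
    return opaque(request_etag) == opaque(response_etag)

# A weak ETag produced by on-the-fly compression still matches the original:
assert etags_match_weak('W/"abc123"', '"abc123"')
```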
We need to allow weak ETags in requests to match responses instead of forcing all the content to be sent again (especially responses with the entire package of a framework).<issue_closed>
Status: Issue closed |
angular/material | 198704525 | Title: md-input-container input doesn't blur
Question:
username_0: When I blur simple inputs without md-input-container, they blur. When I blur an md-input-container input, it won't blur.
I tried adding an ng-blur function - still no luck.
HTML code:
```html
<div layout="row" layout-align="center center">
  <md-input-container class="uppercase">
    <label>{{translates.enter_car_plate_template}}</label>
    <input class="uppercase" type="text" ng-model="auto_number_edit" ng-blur="hideKeyboard()" style="text-align: center;font-size: 28px;font-weight: 600;color: white; width: 200px; height: 50px">
  </md-input-container>
</div>
```
Answers:
username_1: facing the same issue
username_2: I had to remove the class md-input-focused.
username_3: Same here, is anybody going to have a look at this?
username_2: From the input:
```
element.blur();
element.parent().removeClass('md-input-focused');
```
username_3: I don't even get the ng-blur event on the input itself.
username_4: It's not clear what is being requested here. There is no demo and the OP doesn't use the issue template.
If you have found an issue with using `ng-blur` on an `input` within `md-input-container`, please open a new issue with a CodePen reproduction.
Status: Issue closed
|
Pushwoosh/pushwoosh-ios-sdk | 308006276 | Title: "url" disappeared from the APNS payload
Question:
username_0: Hey!
We were using deeplinks when sending out push notifications.
Somehow we chose to parse those URLs within the `onPushAccepted` method in our AppDelegate.
We actually always had a `url` JSON key within the `aps` root object.
That suddenly disappeared.
Are we missing something or did you change that?
Answers:
username_1: Hey,
Could you please tell us when exactly you noticed this behavior had changed? Also, did you apply any changes on your application's side?
The URL, if sent as a Pushwoosh deeplink, will be passed as the value of the "l" key in the push payload.
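For illustration, a hypothetical payload shape (only the "l" key itself is confirmed above; the surrounding structure is an assumption):
```python
# Hypothetical push payload: the deeplink URL arrives under the "l" key.
payload = {
    "aps": {"alert": "Hello"},      # standard APNS content (illustrative)
    "l": "myapp://deeplink/path",   # Pushwoosh deeplink (assumed placement)
}
print(payload.get("l"))  # myapp://deeplink/path
```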
username_1: @username_0 Did you manage to solve the issue or is it still persistent?
Status: Issue closed
|
linuxppc/issues | 675003090 | Title: PPC32 signal code spends time copying FP regs even on embedded platforms without a FPU
Question:
username_0: On 8xx, I noticed that signal code spends quite a lot of time in __copy_tofrom_user() via copy_fpr_to_user(), although I have selected neither CONFIG_PPC_FPU nor CONFIG_MATH_EMULATION. So the FP registers are not saved on interrupts, it seems, but they are copied back anyway (garbage, I guess?) by signal code.
Answers:
username_0: Handled by the first 6 patches of the series https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=196278, which implement the replacement of copies to user by 'unsafe_' copies to user in the PPC32 signal code.
The last of the 6 patches adds the possibility of not building the book3s32 kernel with FPU support when MPC832x is part of the selected targets.
However, we could also add feature handling around CPU_FTR_FPU_UNAVAILABLE, just like done for ALTIVEC and SPE.
That way, the area would still be there in thread_struct but kernel built with CONFIG_PPC_FPU wouldn't suffer the useless copies.
username_0: First step merged at https://github.com/linuxppc/linux/commit/b6254ced
Next step is to also avoid copies when we have CPU_FTR_FPU_UNAVAILABLE |
mplanchard/pydecor | 548704787 | Title: Travis deploy fails
Question:
username_0: Possibly due to this: https://stackoverflow.com/questions/32320746/pypi-deployment-error-invalid-option-password
Manually deploying 2.0.0 for the moment
Answers:
username_1: Somehow, your pipeline stops before deploying. It's still installing deployment dependencies.
See this successful run of one of my packages:

https://travis-ci.com/username_1/pyTerminalUI/builds/142492686#L212
Here is your line: https://travis-ci.org/username_0/pydecor/jobs/636181063#L212
What are the differences?
- You use multiple jobs, I have just one because my package is so simple.
- You run on Travis.org, I run on Travis.com
Travis converts (currently, if you trigger it) accounts from `org` to `com`.
Questions:
- Have you configured a secret called `PYPI_TOKEN` with a value like this `pypi-AgEIcH......................PF7ZClU`?
- What access rights are set for the token? Does it have full account rights or package push rights?
  * I recommend limiting it to one package only.
You can deploy from your local machine with a token for testing purposes:
1. create a `.pypirc` file in your home directory
```
[pypi]
username: __token__
password: <PASSWORD>-AgEIcHl...................pkRwHB9ng
```
2. use twine to publish
`twine upload dist/*` |
AndroidDagashi/AndroidDagashi | 714123524 | Title: Flutter 1.22
Question:
username_0: https://medium.com/flutter/announcing-flutter-1-22-44f146009e5f
It looks like this release adds support for Android 11 and iOS 14.
In addition, there are changes to the Button widgets, the i18n/l10n support, and Navigation.
Answers:
username_0: Reddit reactions are here:
https://www.reddit.com/r/FlutterDev/comments/j3ao2f/announcing_flutter_122/
username_0: In Japanese, [@ntaoo](https://twitter.com/ntaoo) has put together a rough summary here:
https://medium.com/@ntaoo/%E9%80%B1%E5%88%8A-dart-flutter-%E3%82%A4%E3%83%B3%E3%83%97%E3%83%83%E3%83%88-38-61c3f4d5f3f |
python-discord/bot | 674878760 | Title: RedisCache: boolean values are being converted to float typestrings
Question:
username_0: There is a bug in the `_to_typestring` conversion method.
https://github.com/python-discord/bot/blob/02d1dd1b5034778f6bfc296317c9241e93395b2a/bot/utils/redis_cache.py#L108-L121
When converting values, the first tuple in `prefixes` is `("f|", float)`. The conditional on line 113 then triggers and returns `f|0` or `f|1`. On the way out, the int gets cast to float as per the typestring, and so we get `1.0` or `0.0`.
The `test_set_get_item` test function does not catch this because it compares the expected and received values for equality.
Both `True == 1.0` and `False == 0.0` hold. Printing out the values in the test, these pass:
```
Expected: 0.0, got: False
Expected: 1.0, got: True
```
As far as I can tell no code on the master branch currently relies on bool storage so this isn't critical.
Answers:
username_0: I think a good way to approach this issue would be to first adjust the tests so that they fail if an incorrect type is retrieved. Then make the necessary changes to the `_to_typestring` method. |
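A minimal sketch of the suggested test tightening (the helper name here is hypothetical, not an actual test utility):
```python
def assert_same_value_and_type(expected, retrieved):
    # Equality alone is insufficient: True == 1.0 and False == 0.0 both hold,
    # so also require the exact type to catch bools coming back as floats.
    assert retrieved == expected
    assert type(retrieved) is type(expected), (
        f"Expected {expected!r} ({type(expected).__name__}), "
        f"got {retrieved!r} ({type(retrieved).__name__})"
    )

assert_same_value_and_type(True, True)  # passes

try:
    assert_same_value_and_type(True, 1.0)  # the buggy round-trip result
except AssertionError as exc:
    print(f"caught the bug: {exc}")
```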
bsc-s2/lua-acid | 377171143 | Title: Add a function get_domain_ips(domain_or_ip) to net.lua
Question:
username_0: Resolve the domain name given in the domain_or_ip parameter and return a shuffled list of IPs.
If domain_or_ip is already an IP, return it directly.
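A rough sketch of the intended semantics (Python for illustration only; the actual implementation would be Lua code in net.lua):
```python
import random
import socket

def get_domain_ips(domain_or_ip):
    # If the argument is already an IPv4 address, return it directly.
    try:
        socket.inet_aton(domain_or_ip)
        return [domain_or_ip]
    except OSError:
        pass
    # Otherwise resolve the domain and return its IPs in random order.
    ips = socket.gethostbyname_ex(domain_or_ip)[2]
    random.shuffle(ips)
    return ips

print(get_domain_ips("127.0.0.1"))  # ['127.0.0.1']
```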
Answers:
username_1: The s2 project already has get_ips_from_domain in src/ngx.conf/lua/netutil.lua.
To implement net.lua in lua-acid, should I use the above as a reference, or is there something that can be reused?
username_0: Most importantly, remove the implementation from the s2 project. General-purpose functions should, as much as possible, be refactored into the shared library.
username_0: What do you mean by 'reuse'... which one reuses which...
username_1: So you mean the end result is as follows?
s2 lua/netutil.lua:
- get_ips_from_domain
lua-acid net.lua:
+ get_domain_ips
username_1: I'd still like some advice. My current plan is
to base the implementation on the `get_ips_from_domain` function in `lua/netutil.lua`.
Does that sound OK?
username_1: Splitting lua-acid out as a standalone library is great. One more detail to confirm:
for the Lua code inside s2, e.g. callback/cb_bcwriter.lua,
should require('acid.xxx') reference dep/acid under s2, or lib/acid under lua-acid?
username_0: dep/acid is imported from lua-acid/acid... they're the same.
GoogleChrome/lighthouse | 207267704 | Title: Image Optimisation - Which Tools to Use?
Question:
username_0: Running the latest Lighthouse build on my site, I'm getting an error on one of my images where there is a 2% saving on a jpg image.
Are there recommendations for what tools to use to get to this saving? At the moment I'm using "gulp-imagemin".
Answers:
username_1: If we stick with @username_2's current proposal, after #1635 things offering ≤ 10% potential savings won't be called out anymore, which will ease some of this.
username_2: Great that you're already using imagemin! That should capture what we're advocating here; the 2% might just be a slight difference in the encoder quality handling. This is also getting loosened a bit in #1635.
Status: Issue closed
username_2: We now give much more leeway on image encoding and explicitly state expected quality settings. 👍 |
wasmerio/wasmer | 832290983 | Title: can't represent 65536 pages
Question:
username_0: ### Describe the bug
Memory minimum and maximum sizes range 0..65536 inclusive. The verifier blocks 65537 and higher. However, wasmer only works on 65535 and lower.
Testcase:
```
(module (memory (;0;) 65536))
```
fails:
```
$ target/release/wasmer --cranelift x.wat
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: TryFromIntError(())', lib/vm/src/memory.rs:262:74
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
The line of code in question is:
```
let mem_length = memory.minimum.bytes().0.try_into().unwrap();
```
and the error is that 65536*65536 is 4294967296, which is the first value too large to fit in a `u32`.
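For illustration, the arithmetic (each wasm page is 64 KiB = 65536 bytes):
```python
pages = 65536        # the largest page count the spec allows
page_size = 65536    # wasm page size in bytes (64 KiB)
byte_count = pages * page_size
print(byte_count)                # 4294967296
print(byte_count > 2**32 - 1)    # True: the byte count no longer fits in a u32
```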
Wasmer needs to either remove the problematic conversions to bytes, keeping the value as pages; use a larger datatype for the byte count; or store it in a 'max-1' format (assuming '0' is unnecessary, since we store the absence of a memory differently).
Cinchoo/ChoETL | 792240334 | Title: CSV Import Fails When A Text Column Had Linefeeds
Question:
username_0: If a CSV file has a text column and that text contains linefeeds, the parsing does not work properly: it parses the text as multiple CSV rows.
It's simple to test. Just create text with line feeds in Notepad, copy it into a column field, and then run the import code.
Answers:
username_1: In this case, you have to use `MayContainEOLInData` to instruct the parser to handle it.
PS. the field value must be surrounded by quotes to handle this. |
CocoaPods/CocoaPods | 361224850 | Title: please help me
Question:
username_0: I don't know who you are! I hope you can help me.
According to company regulations, I must delete my repository! (It's PXNetwork)
Because it uses CocoaPods, I don't know how to delete the Spec file in https://github.com/CocoaPods/Specs/tree/e122b8b9b53322d60da4d55fd4e10c477ae4fcb7/Specs/c/4/b/PXNetwork
Please delete it!! I'm Chinese and my English is poor; I hope you can understand!
my email:<EMAIL>
Answers:
username_1: Please use https://guides.cocoapods.org/terminal/commands.html#pod_trunk_delete to delete a pod.
Ask on StackOverflow under the `cocoapods` tag for further questions about this command.
Status: Issue closed
|
WEC-Sim/WEC-Sim | 138753543 | Title: Paraview Macro Error
Question:
username_0: We are having trouble finding this module, have you guys seen this before?
Thanks,
Alex
Answers:
username_1: It looks like you have to set up a few environment variables (http://www.simpleitk.org/Wiki/ParaView/Python_Scripting#ParaView_and_Python).
I will look at installing ParaView from scratch and update the documentation today.
username_1: I was not able to reproduce your error. I installed ParaView (version 5) from scratch on a Windows machine and it worked fine (no extra step required).
Let us know if the solution on the link above works for you.
username_1: We found out that our Macros are not working properly on the latest version of ParaView (v5) on Macs. Until they fix whatever is wrong, revert to version 4.4 on Macs.
Status: Issue closed
|
kubernetes/kubernetes | 317945185 | Title: apiserver panic when running integration test
Question:
username_0: **Is this a BUG REPORT or FEATURE REQUEST?**:
/kind bug
**What happened**:
The integration test failed; the apiserver panicked :(
It seems to be a race condition involving `clusterroles`, as follows; refer to the attachment for details:
```
2132 I0426 17:17:14.653388 20580 trace.go:76] Trace[1009546348]: "List /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2018-04-26 17:16:14.653009585 +0800 CST m=+153.540700977) (total time: 1m0.000345077s):
2133 Trace[1009546348]: [1m0.000345077s] [1m0.00034003s] END
2134 E0426 17:17:14.653566 20580 runtime.go:66] Observed a panic: &errors.errorString{s:"killing connection/stream because serving request timed out and response had been started"} (killing connection/stream because serving request timed out and response had been started)
2135 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
2136 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
2137 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
2138 /usr/local/go/src/runtime/asm_amd64.s:573
2139 /usr/local/go/src/runtime/panic.go:502
2140 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:221
2141 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:105
2142 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:45
2143 /usr/local/go/src/net/http/server.go:1947
2144 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39
2145 /usr/local/go/src/net/http/server.go:1947
2146 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:41
2147 /usr/local/go/src/net/http/server.go:1947
2148 /home/username_0/go_ws/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189
2149 /home/username_0/go_ws/src/k8s.io/kubernetes/test/integration/scheduler/util.go:100
2150 /usr/local/go/src/net/http/server.go:1947
2151 /usr/local/go/src/net/http/server.go:2694
2152 /usr/local/go/src/net/http/server.go:1830
2153 /usr/local/go/src/runtime/asm_amd64.s:2361
2154 2018/04/26 17:17:14 http: multiple response.WriteHeader calls <=========== race condition for http server.
2155 E0426 17:17:14.653839 20580 wrap.go:34] apiserver panic'd on GET /apis/rbac.authorization.k8s.io/v1/clusterroles: killing connection/stream because serving request timed out and response had been started
```
[test.log](https://github.com/kubernetes/kubernetes/files/1950519/test.log)
**How to reproduce it (as minimally and precisely as possible)**:
`make test-integration WHAT=k8s.io/kubernetes/test/integration/scheduler`
**Environment**:
- Kubernetes version (use `kubectl version`): master branch
- Cloud provider or hardware configuration: none
- OS (e.g. from /etc/os-release):
```
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
```
- Kernel (e.g. `uname -a`): `Linux wgl-ubuntu 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux`
- Install tools: none
- Others: Cannot be reproduced every time :(
Answers:
username_1: 2133 Trace[1009546348]: [1m0.000345077s] [1m0.00034003s] END
username_1: panic called in staging/src/k8s.io/apiserver/pkg/server/filters/timeout.go#L221
username_2: normally that means the test etcd instance did not come up successfully...
username_0: I checked those daemons: etcd is there, no apiserver; but I'm not sure whether etcd started up after the apiserver crashed :(.
username_1: The integration test failed and the process exited; it did not crash.
The timeout panic is recovered by the apiserver, which printed an error message:
```
2155 E0426 17:17:14.653839 20580 wrap.go:34] apiserver panic'd on GET /apis/rbac.authorization.k8s.io/v1/clusterroles: killing connection/stream because serving request timed out and response had been started
```
username_0: I did not trigger this issue again; closing this for now.
Status: Issue closed
|
amfoss/Hack4Amrita | 435481387 | Title: Automating the attendance system
Question:
username_0: **What is the issue that you are facing?**
Currently, the attendance system that we are following is manual. The teacher has to check and verify, against the log book, the roll numbers of the students present inside the classroom and note down those who are absent. The same procedure has to be repeated every hour, which takes a significant amount of time that could have been spent more productively.
**How often do you face this issue?**
This issue is faced by everyone during every class hour.
**Where do you face this issue?**
I face this issue inside classes during class hours.
**Do you think, a lot of your friends face the same issue?**
Yes, I think every student on the campus is directly or indirectly affected by this issue.
**What have you tried to solve the issue?**
I have not tried to solve this issue.
**What do you think is causing the issue?**
Lack of automation in the attendance system.
**Have you thought of any ideas that may solve the issue?**
The attendance system could be automated. Every student has an ID card with a barcode or an NFC chip or both; this could be taken advantage of to implement a system that automatically verifies the roll numbers of the students present inside the class and reports them to the class counsellors after every hour.
- [x] By submitting this issue, I acknowledge that this follows the [code of conduct](CODE_OF_CONDUCT.md) of Hack4Amrita. |
monero-project/monero | 285030861 | Title: Segmentation fault (core dumped) with latest build after restoring a wallet from keys and trying to check balance
Question:
username_0: I'm running Ubuntu Mate 16.04.3 64-bit in virtualbox
After I build from master, this is the cli version...
Monero 'Helium Hydra' (v0.11.1.0-master-a0a8706)
After I run this command..
./monero-wallet-cli --log-level 4 --generate-from-keys fromkeys
I am able to input the standard address, secret spend key, and secret view key. However, when I run the balance command, it gives me segmentation fault (core dumped).
Here is the log file..
https://gist.github.com/username_0/440f695ea21fe0342454ace52c764402
Here is a screenshot of my terminal session...
https://i.imgur.com/sb3JBvp.png
After the Segmentation fault (core dumped) error, I can open the wallet normally and see the balance without issue, as seen here...
https://i.imgur.com/HdOnsIy.png
Answers:
username_1: This is a bug. #3028
Thank you for reporting!
username_2: +resolved
Status: Issue closed
|
jaredsburrows/gradle-spoon-plugin | 456288492 | Title: Wrong snapshot reference in readme
Question:
username_0: Hi,
This will not take much of your time!
The readme references the following snapshot, which does not exist:
classpath 'com.username_2:gradle-spoon-plugin:**1.5.1**-SNAPSHOT'
Please update with the correct link:
classpath 'com.username_2:gradle-spoon-plugin:**2.0.0**-SNAPSHOT'
Thanks,
Jean-Michel
Answers:
username_1: Actually, the correct version is 1.5.1-SNAPSHOT but it is not published due to `/home/travis/.travis/functions: line 109: .buildscript/deploy-snapshot.sh: No such file or directory`
Status: Issue closed
username_2: This was fixed in #87. |
RobertSkalko/Age-of-Exile | 993847803 | Title: Vanilla Sweep attack no longer working
Question:
username_0: As the title says, the vanilla sweep attack doesn't seem to be working anymore. I assume this is a bug, because sweeping still happens visually but no sweep attack occurs.
I request a hotfix for 1.16.5 because this is a very important feature.
Answers:
username_1: What do you mean by "sweep does not work"? Sweep is when you hit multiple mobs, right? Because that does still work for me.
username_0: It doesn't seem to work on the latest version of Age of Exile. Try a wooden sword: it does it visually (white slash), but it doesn't actually hit multiple mobs.
What version are you on?
username_1: I use the latest version; also I use AOE swords and I do hit multiple mobs.
Here is a SS of a vanilla wooden sword (no enchants) hitting 2 mobs with 1 slash
 |
librosa/librosa | 427313920 | Title: down pitch shifting causes small magnitude of high frequency components in log Mel-spectrogram
Question:
username_0: #### Description
I believe this is how `pitch_shift` works, but I don't know much about it. As titled, below are three log Mel-spectrograms; from left to right: original, up-shifted by 2 steps, down-shifted by 2 steps.
The magnitude of specific high frequency components is extremely low in the down-shifted sample, which does not happen in the up-shifted sample in this particular case.
I found this has something to do with `fmax` in `feature.melspectrogram`, though I am not sure how it works. The audio `sr` is 22050, and setting a high `fmax` seems to cause the issue; it actually happens for both up- and down-shifted samples. I guess it is related to `resample` - could someone please shed light on it?

#### Steps/Code to Reproduce
```python
import sys
import numpy as np
import librosa

x, sr = librosa.core.load('vn-G#6.wav')
x_d2 = librosa.effects.pitch_shift(x, sr, -2)
S = np.log(librosa.feature.melspectrogram(x, sr, n_mels=256, fmin=27, fmax=11000) + sys.float_info.epsilon)
S_d2 = np.log(librosa.feature.melspectrogram(x_d2, sr, n_mels=256, fmin=27, fmax=11000) + sys.float_info.epsilon)
```
In this case, changing to `fmax=9000` doesn't trigger the issue.
Linux-4.4.0-116-generic-x86_64-with-debian-stretch-sid
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
NumPy 1.16.2
SciPy 1.2.1
librosa 0.6.3
Answers:
username_1: Short summary: this is expected behavior.
What's happening here is that the original signal is band-limited to `fmax=sr/2 = 11025 Hz` (when `sr=22050`). That means that it has no energy at frequencies above 11025. When you pitch-shift downward, it's going to bring in energy from those frequencies above the band limit, which leads to the low energy you see in the top bands on the third plot. For example, when you pitch shift downward by 2 semitones, the energy you see at, say, 10000 Hz would originally have been at 10000 * 2^(2/12) ~= 11225 Hz, which is above `sr/2`.
This doesn't happen when up-shifting because the band limits are assumed to go all the way down to 0, which you'll never go past when scaling a frequency `f>0` by a positive factor.
The short high-energy bursts you see in the high frequencies at the beginning and ends of the signal are probably due to transients associated with the start and end of the recording.
I hope that clears things up.
username_0: Yes it helps a lot! Thanks for your prompt and thorough response.
Just to make sure: it is expected behavior when `fmax` goes past frequencies that were originally outside of the band limit, which is decided by `sr`, is it?
So if I understand correctly, depending on the number of steps down, say `d`, the theoretically good choice of `fmax` would be the solution to `fmax * 2^(d/12) = sr/2`.
username_1: Yeah, if your goal is to hide any high-frequency artifacts introduced by downshifting your signal.
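For instance, a small sketch of that computation (treating `d` as the number of semitones shifted down):
```python
def safe_fmax(sr, d, bins_per_octave=12):
    # Highest band edge whose post-shift energy still originates
    # from below the original band limit sr/2.
    return (sr / 2) / (2 ** (d / bins_per_octave))

print(safe_fmax(22050, 2))  # ~9822 Hz, consistent with fmax=9000 avoiding the issue above
```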
username_2: Will close at the end of this week if there are no further comments to make
Status: Issue closed
username_3: I wonder what an ideal pitch shift should look like on a spectrogram? Should it look only like a scaling of the frequency (Y) axis of the spectrogram? Should it change the range of the Y axis of the spectrogram? Should it change the sampling rate or the length of the audio?
Here is original audio:
https://google.github.io/tacotron/publications/tacotron2/demos/romance_gt.wav
Example of pitch shift using ffmpeg:
```
ffmpeg -i ffmpeg_effect/romance_gt.wav
Input #0, wav, from 'ffmpeg_effect/romance_gt.wav':
Duration: 00:00:01.36, bitrate: 384 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 24000 Hz, mono, s16, 384 kb/s
ffmpeg -i ffmpeg_effect/romance_gt.wav -af asetrate=24000*3/4,atempo=4/3 ffmpeg_effect/romance_gt_down.wav
ffmpeg -i ffmpeg_effect/romance_gt.wav -af atempo=3/4,asetrate=24000*4/3 ffmpeg_effect/romance_gt_up.wav
ffmpeg -i ffmpeg_effect/romance_gt_down.wav
Input #0, wav, from 'ffmpeg_effect/romance_gt_down.wav':
Metadata:
encoder : Lavf58.20.100
Duration: 00:00:01.38, bitrate: 288 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 18000 Hz, mono, s16, 288 kb/s
ffmpeg -i ffmpeg_effect/romance_gt_up.wav
Input #0, wav, from 'ffmpeg_effect/romance_gt_up.wav':
Metadata:
encoder : Lavf58.20.100
Duration: 00:00:01.34, bitrate: 512 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 32000 Hz, mono, s16, 512 kb/s
```
ffmpeg changes the sampling rate, the length of the audio, and the Y range of the spectrogram.
Spectrogram for romance_gt.wav:

Spectrogram for romance_gt_down.wav:

Spectrogram for romance_gt_up:

Here is an example using librosa:
```
import os
import argparse
import librosa
def apply_pitch_shift(input_wav_filepath):
shift_list = [-4,-2,2,4]
for shift in shift_list:
y, sr = librosa.load(input_wav_filepath)
y_new = librosa.effects.pitch_shift(y, sr, n_steps=shift, bins_per_octave=12)
filepath = os.path.splitext(input_wav_filepath)[0]+'_'+str(shift)+'.wav'
[Truncated]
parser.add_argument('input_wav_filepath')
args = parser.parse_args()
apply_pitch_shift(args.input_wav_filepath)
```
romance_gt_spectrogram.png

romance_gt_2_spectrogram.png

romance_gt_4_spectrogram.png

romance_gt_-2_spectrogram.png

romance_gt_-4_spectrogram.png

username_1: Typically these things will change during the process, but the output of the pitch shift should preserve the signal length and sampling rate. This is handled automatically by the `librosa.effects` wrapper function, as well as the `pyrubberband` interface. |
tuupola/base58 | 421614078 | Title: Example Script Doesn't Seem to Work
Question:
username_0: This looks like a great resource but I can't seem to get it to run!
After installing with composer and trying your test script:
```php
<?php
require __DIR__ . '/vendor/autoload.php';

use RuntimeException;
use Tuupola\Base58;

$base58check = new Base58([
    "characters" => Base58::BITCOIN,
    "check" => true,
    "version" => 0x00
]);

print $base58check->encode("Hello world!"); /* 19wWTEnNTWna86WmtFsTAr5 */

try {
    $base58check->decode("19wWTEnNTWna86WmtFsTArX");
} catch (RuntimeException $exception) {
    /* Checksum "84fec52c" does not match the expected "84fec512" */
    print $exception->getMessage();
}
?>
```
It seems to fail with the following error:
```
Warning: The use statement with non-compound name 'RuntimeException' has no effect in test.php on line 3
19wWTEnNTWna86WmtFsTAr5Checksum "84fec52c" does not match the expected "84fec512"
```
Answers:
username_1: The example demonstrates how decoding fails with the wrong checksum. If you look closely, the result of encode is `19wWTEnNTWna86WmtFsTAr5` while decode is called with a deliberately broken base58 string `19wWTEnNTWna86WmtFsTArX`.
The formatting of the output would be a bit clearer with an added newline. Also, the `use RuntimeException` can be removed to silence the warning.
```php
require "vendor/autoload.php";
use Tuupola\Base58;
$base58check = new Base58([
"characters" => Base58::BITCOIN,
"check" => true,
"version" => 0x00
]);
print $base58check->encode("Hello world!"); /* 19wWTEnNTWna86WmtFsTAr5 */
print "\n";
try {
$base58check->decode("19wWTEnNTWna86WmtFsTArX");
} catch (RuntimeException $exception) {
/* Checksum "84fec52c" does not match the expected "84fec512" */
print $exception->getMessage();
}
```
Yields
```
19wWTEnNTWna86WmtFsTAr5
Checksum "84fec52c" does not match the expected "84fec512"
```
username_0: Got it! Thanks for the quick response - really appreciate it :)
Status: Issue closed
|
quarrant/mobx-persist-store | 1020899256 | Title: window.sessionStorage weird behaviour on refreshes
Question:
username_0: Visible
</li>
);
});
```
Answers:
username_0: Sorry for the late response - the error was on my end! After your example I was able to identify the issue: I did not initialize the store into local state via `useState` (`myStore`), but used the imported global store instead.
**RootStore**
```
export class RootStore {
menuStore: MenuStore;
checkoutStore: CheckoutStore;
priceStore: PriceStore;
constructor() {
makeAutoObservable(this, {}, { autoBind: true });
this.menuStore = new MenuStore();
this.checkoutStore = new CheckoutStore();
this.priceStore = new PriceStore(this);
}
}
export const rootStore = new RootStore();
export const StoreContext = createContext(rootStore);
export function useStore(): RootStore {
const context = useContext(StoreContext);
if (context === undefined) {
throw new Error('useStore must be used within StoreProvider');
}
return context;
}
```
**_app**
```
import { RootStore, rootStore, StoreContext } from 'stores/RootStore';
const App: FC<AppProps> = observer(({ Component, pageProps }) => {
const [myStore] = useState(() => new RootStore()); // <- THIS WAS MISSING, I WAS USING THE IMPORTED rootStore
return (
<>
<StoreContext.Provider value={myStore}>
<Component {...pageProps} />
</StoreContext.Provider>
</>
);
});
export default App;
```
Status: Issue closed
|
vercel/pkg | 1056459153 | Title: Warning Babel parse has failed: import.meta may appear only with 'sourceType: "module"' (11:69)
Question:
username_0: node:path
```
I do use `import.meta` in the code, as I'm targeting node.js 12 and above where this should be available. I've also set `"type": "module"` in my `package.json` to signal that this is an esm module.
### Expected Behavior
I would expect it to not throw an error :)
### To Reproduce
To reproduce this, using the latest version of Node.js 16.x:
```
git clone https://github.com/username_0/linkinator
cd linkinator
npm install
npm run build-binaries
```
Answers:
username_1: ESM modules are not supported yet; follow updates in the other issue.
Status: Issue closed
|
influxdata/ui | 1159242374 | Title: Cant view usage
Question:
username_0: ## About the bug
**Steps to reproduce:**
1. Click on view usage
**Expected behavior:**
Usage charts displayed
**Actual behavior:**
Error message
Answers:
username_1: 
Can reproduce
username_2: @username_0 -- Can you tell me if you are a free tier user (in which case you would not see usage information), or a PAYG client?
username_3: We have the same issue (I actually opened a [new issue](https://github.com/influxdata/ui/issues/4031)). We are under the Pay As You Go tier. We use InfluxDB Cloud and we are not able to see our organization metrics.
username_1: Once I added my billing info and upgraded from free tier I could see data
username_4: This was a bug that we have addressed, closing for now.
Status: Issue closed
|
romanchyla/solrjmeter | 53677892 | Title: tries to spawn non-existent jar CMDRunner.jar
Question:
username_0: I've tried on Linux x86-64 and Windows 7, and neither platform has CMDRunner.jar as part of JMeter. Perhaps it was part of an old build, or else I need some extension that I don't know about.
java -jar /fast/jmeter/apache-jmeter-2.12/lib/ext/CMDRunner.jar --tool Reporter --input-jtl summary_report.data --plugin-type TransactionsPerSecond --generate-png transactions-per-sec.png
Error: Unable to access jarfile /fast/jmeter/apache-jmeter-2.12/lib/ext/CMDRunner.jar
Answers:
username_1: Hmm, interesting - it points out shortcomings in my setup. It was a quick hack, but I take it that it is not good - I'll look into it...
username_0: I see now CMDRunner.jar is available from JMeter as part of the Standard Plugins Set (though not standard enough to have included it in the ZIP!)
http://jmeter-plugins.org/downloads/all/.
Still it would be simple for you to test the existence of CMDRunner.jar and let the user know he should download the plug-in set.
username_1: agreed
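A minimal sketch of such a check (the JMeter path is a hypothetical placeholder for wherever solrjmeter installs it):
```python
import os
import sys

jmeter_home = "/path/to/apache-jmeter"  # hypothetical install location
cmdrunner = os.path.join(jmeter_home, "lib", "ext", "CMDRunner.jar")

if not os.path.exists(cmdrunner):
    sys.exit(
        "CMDRunner.jar not found; please install the JMeter Standard "
        "Plugins Set from http://jmeter-plugins.org/downloads/all/"
    )
```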
username_1: OK, so actually the script installs solrjmeter, if you pass -j (or the -a argument)
I've tested it against solr 4.10 and it worked, here is my invocation:
```
solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q ./queries/demo/demo.queries -s localhost -p 8983 -a --durationInSecs 60 -R test -t /solr/collection1
```
but I'm also seeing one JMeter error:
```
java.lang.Throwable: Could not access /var/lib/montysolr/solrjmeter/jmeter/lib/ext/lib
at kg.apc.cmd.UniversalRunner.buildUpdatedClassPath(UniversalRunner.java:109)
at kg.apc.cmd.UniversalRunner.<clinit>(UniversalRunner.java:55)
```
I'm not sure this is critical; the graphs get generated
username_0: Hi Roman,
Yes, I had not been using the -a automatic install option before, that is true.
Once I ran that, due to our local configs I knew the wgets would require "--no-proxy", so I hesitated to use that.
Today I added --no-proxy just to see how it would work.
The first thing I saw was that it references a non-existent file: the JMeter 2.9 version is no longer posted at http://mirrors.gigenet.com/apache/jmeter/binaries/apache-jmeter-2.9.tgz. You have to change it to 2.12, which I did to get past the "404 Not Found" error.
My full commandline is now
python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q ./queries/demo/demo.queries -s localhost -p 9999 -a --durationInSecs 60 -R test --e pqis -t /solr/pqis
I agree the UniversalRunner errors seem to be some kind of minor classpath setback that might not matter (see https://github.com/undera/cmdrunner/blob/master/src/kg/apc/cmd/UniversalRunner.java )
Here's a post on Stack Overflow about it as it pertains to the headless JMeter bug:
http://stackoverflow.com/questions/18571427/getting-uncaught-exception-while-running-jmeter-tests-from-commandline
It might be solvable by just doing a “mkdir” of solrjmeter/jmeter/lib/ext/lib to stop its whining. Please consider adding that.
Other than that, it appears to run smoothly until this problem; now it is halted shortly after, using 0% CPU:
INFO 2015-01-08 08:34:01.884 [kg.apc.c] (): Saving PNG to /fast/solrjmeter/solrjmeter-master/solrjmeter/test/2015.01.08.08.32.42/demo.queries/response-times-percentiles.png
**ERROR**
File "solrjmeter.py", line 1386, in <module>
main(sys.argv)
File "solrjmeter.py", line 1357, in main
generate_graphs(options)
File "solrjmeter.py", line 713, in generate_graphs
error('We got a zombie!')
File "solrjmeter.py", line 66, in error
traceback.print_stack()
We got a zombie!
$ cd /fast/solrjmeter/solrjmeter-master/solrjmeter/test/2015.01.08.08.32.42
$ cd /fast/solrjmeter/solrjmeter-master/solrjmeter/test
$ cd /fast/solrjmeter/solrjmeter-master/solrjmeter
$ cd /fast/solrjmeter/solrjmeter-master
Here the output halts and nothing more comes out or is done.
The only “running” task is the process monitor (“top”) I’m using
Doesn’t seem to be anything but the template files in the html dir
username_1: Hi Victor,
Pls use the python3 branch, it has some of the changes that you suggested previously; I also added the --no-proxy option and a new timeout.
There was a hardcoded limit (not good), but on some machines maybe it takes longer (even if you used 60s); so now you can say `-w 200`.
But I'm also seeing errors from JMeter (which I didn't fix yet, will have to look into it tomorrow); one report is complaining:
```
failed: java -jar /var/lib/montysolr/solrjmeter/jmeter/lib/ext/CMDRunner.jar --tool Reporter --input-jtl summary_report.data --plugin-type PageDataExtractorOverTime --generate-csv page-data-extractor-over-time.csv
```
I should finally kick myself into writing a functional test, to cover that stuff with certainty
stripe/stripe-terminal-android | 536961547 | Title: StripeTerminal: READER_ERROR.BLUETOOTH_DISCONNECTED: Bluetooth unexpectedly disconnected during operation
Question:
username_0: Still the same issue https://github.com/stripe/stripe-terminal-android/issues/70
device serial: CHB204902001016
used the last example
library version 1.0.1
to make the example work I just used secret = "<KEY>"
it always fails with this error while connecting to the device and then jumps back to the first fragment
12-12 14:48:14.630 30640-30693/com.stripe.example.javaapp E/StripeTerminal: READER_ERROR.BLUETOOTH_DISCONNECTED: Bluetooth unexpectedly disconnected during operation.
com.stripe.stripeterminal.model.external.TerminalException
at com.stripe.stripeterminal.adapter.BbposBluetoothAdapter.onDisconnectReader(BbposBluetoothAdapter.kt:229)
at com.stripe.stripeterminal.bbpos.BbposDeviceControllerListener.onBTDisconnected(BbposDeviceControllerListener.kt:93)
at com.stripe.bbpos.bbdevice.BBDeviceController$aaa042zz.run(SourceFile:1)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:194)
at android.app.ActivityThread.main(ActivityThread.java:5637)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:959)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:754)
Answers:
username_0: Looks like it fails only with this phone: Land Rover X8
With the beta version of your library and this phone everything works
If you need more logs I'll send them
username_1: @username_0 - what was the beta version of the library you used when it worked? And was it the same phone on both beta and 1.0.1 versions? Trying to get a sense of the total diff between your last working version and this one. Thanks.
username_0: So with version 1.0.0-b9 it works, but with rc1 it already doesn't
username_0: The phone is the same; if I downgrade to b9 it starts to work, but with rc1 and higher, connecting to the reader fails with that error
username_2: Hi @username_1 confirming here it does appear to work on **some** devices but not others.
When we were using the initial b9 version it was working on all devices.
Now the RC-1 only works on a subset of the devices we've been testing on (we've tried the 1.0.0 and the 1.0.1 and see the same result of partial success across the number of devices we're testing).
If it helps, we're able to see the devices in the list of devices we wish to connect to, but are unfortunately unable to actually connect. We've also tried a number of the Stripe terminals to ensure it wasn't a bad unit, but we're seeing it on all of the half dozen we've tested, some being test units and some being production units (i.e. purchased from the Stripe admin).
Is there potential for us to nominate stripe account_ids that can use the older version of the beta in production until this is resolved?
username_2: Did some further testing with this today, and it seems like an issue when the reader isn't in pairing mode properly; hard resetting the reader seems to make it start working again. Will continue to mess around with them and see if we can figure it out.
username_0: So: Land Rover X8 running Android 5.1.
Looks like that is the issue.
So we suspect the library started to work incorrectly with old Android phones because you started to use BLE exclusively.
username_2: @username_1 confirming we had a really big issue with this at our last trial, constant dropouts and disconnections.
It appears as though the BLE implementation is really unstable on older Android 6 devices. To double check, we fell back to b7 and it appears to work without the same disconnection issues. It's almost like the Stripe library has a connection issue, then has to go through the entire re-pairing process or in some cases be turned off and restarted in order for it to function again.
username_2: Still seeing this on 1.0.8, although it now throws this on every single payment. The initial connection step does seem more solid though.
username_1: @username_2 are you trying to pair from the system bluetooth menu or from within the app?
username_1: Also, are you getting this error from the sample apps?
username_2: 1. Pairing within the application
2. Yep it also happens in the sample application - appears to be linked to the lower OS version and BLE only in the RC versions
username_1: Thanks for the quick response - what's the first api version that you're seeing this behavior in? 23 (6.0) ?
username_1: The rc1 was the first version we switched to BLE over BT, so I'm wondering if it's just bad BLE connectivity on older android devices. I'll order both an Android 5 and Android 6 device to test, thanks.
Status: Issue closed
username_1: For internal reasons we've been forced to bump our min sdk (going forward) to API level 24, so all future releases of the sdk will target those builds.
I've also been able to repro (not 100% of the time) the issue here, and it looks like an issue from one of our third party libraries. They've released a newer version of their SDK which our newer library versions are using, but again that targets api level 24.
Apologies for the delay here. |
project-chip/connectedhomeip | 1109830694 | Title: Minimal mdns should send Rmv notifications as needed
Question:
username_0: #### Problem
If a node is commissionable, it publishes records with CM=1.
When it stops being commissionable, it should stop advertising those. Right now platform mdns seems to handle this right, but minimal mdns does not.
#### Proposed Solution
Fix minimal mdns to send Rmv notifications, i.e. "goodbye" announcements that re-send the CM=1 records with TTL 0 (RFC 6762 §10.1), so we are not incorrectly advertising ourselves as commissionable until the original TTL expires.
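For context, here is a rough sketch of what such a goodbye looks like on the wire, using Ruby's stdlib DNS encoder purely for illustration; the instance name is a placeholder, and the real fix of course lives in the C++ minimal mdns implementation:
```ruby
require "resolv"

# A "goodbye" announcement: re-send the PTR record that advertised the
# commissionable service, but with TTL = 0 so caches drop it immediately.
msg = Resolv::DNS::Message.new(0) # mDNS responses use ID 0
msg.qr = 1 # this is a response (an unsolicited announcement)
msg.aa = 1 # authoritative
msg.add_answer(
  "_matterc._udp.local.",
  0, # TTL 0 = remove
  Resolv::DNS::Resource::IN::PTR.new(
    Resolv::DNS::Name.create("MYDEVICE._matterc._udp.local.")
  )
)
payload = msg.encode # would be multicast to 224.0.0.251 port 5353
```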
@andy31415
Answers:
username_1: Minimal mdns must be spec compliant for v1.
Removing tag `v1_triage_split_4` |
jOOQ/jOOQ | 803930025 | Title: Support parsing RETURN statement
Question:
username_0: We do not yet parse the `RETURN` statement in functions or procedures:
```sql
create or replace procedure p as
begin
return;
end;
```
It's just interpreted as an oracle-style no-args procedure call:
```
create or replace procedure P
as
begin
RETURN();
end;
```
Or:
```sql
create or replace function f return integer as
begin
return 1;
end;
```
Doesn't work at all:
```
Unsupported query type: [3:3] ...e or replace function f return integer as
begin
[*]return 1;
end;
```
<issue_closed>
Status: Issue closed |
rails/rails | 171912447 | Title: Rails 5 schema.rb adding non-agnostic options. dev mysql -> test sqlite
Question:
username_0: ### Steps to reproduce
Use MySQL for Development, and SQLite3 for Test. On an app I'm upgrading from 4.2 to 5.0 I run into trouble **after** running a migration in the Rails5 branch. Here's why:
Rails 4:
```bash
ack "create_table" db/schema.rb
create_table "absences", force: :cascade do |t|
create_table "accounts", force: :cascade do |t|
create_table "addresses", force: :cascade do |t|
...
```
Rails 5:
```bash
ack "create_table" db/schema.rb
create_table "absences", force: :cascade, options: "ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci" do |t|
create_table "accounts", force: :cascade, options: "ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci" do |t|
create_table "addresses", force: :cascade, options: "ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci" do |t|
...
```
That's fine for MySQL, but when I load the schema for tests I get this:
```
Tasks: TOP => db:test:load_schema
(See full trace by running task with --trace)
ActiveRecord::StatementInvalid: SQLite3::SQLException: near "ENGINE": syntax error: CREATE TABLE "absences" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "person_id" integer, "date" date, "source" varchar, "appointment_id" varchar, "reason" varchar, "category" varchar, "hours" decimal(4,2), "approved" boolean, "created_at" datetime NOT NULL, "updated_at" datetime NOT NULL, "entry_source" varchar, "voided" boolean DEFAULT 'f' NOT NULL, "mark_for_deletion" boolean DEFAULT 'f') ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
```
### Expected behavior
Schema loading ignores options it can't use, or they're not in schema.rb in the first place.
### Actual behavior
sqlite3 (1.3.11) builds invalid SQL (per sqlite3).
### System configuration
Working: 4.2.5.1
Broken: 5.0.0.1
**Ruby version**:
2.3.1p112
Answers:
username_1: Can you share your database config wrt extra options?
username_0: It's pretty stock. Mysql for the local DB, and sqlite3 for test.
```
development:
adapter: mysql2
encoding: utf8
database: xxxx
pool: 32
username: xxxx
password: <PASSWORD>
host: localhost
test:
adapter: sqlite3
database: db/test.sqlite3<%= ENV['TEST_ENV_NUMBER'] %>
pool: 5
timeout: 5000
```
username_0: I setup a blank Rails 5.0.0.1 app, added mysql2, and set development to mysql and test/prod to sqlite3.
Add a migration:
```bash
rails g migration create_foos bar
```
### Observations of schema.rb
rake db:setup
```ruby
ActiveRecord::Schema.define(version: 20160822152937) do
create_table "foos", force: :cascade do |t|
t.string "bar"
end
end
```
rake db:migrate
```ruby
ActiveRecord::Schema.define(version: 20160822152937) do
create_table "foos", force: :cascade, options: "ENGINE=InnoDB DEFAULT CHARSET=utf8" do |t|
t.string "bar"
end
end
```
database.yml
```
default: &default
adapter: sqlite3
pool: 5
timeout: 5000
development:
adapter: mysql2
encoding: utf8
database: issue26209
username: root
password: <PASSWORD>
host: 127.0.0.1
test:
<<: *default
database: db/test.sqlite3
production:
<<: *default
database: db/production.sqlite3
```
username_2: This is the expected behavior. The role of `schema.rb` is to provide a format which is easier for diffs and less prone to merge conflicts. The addition of those options is necessary for properly reproducible builds. We strongly recommend against using a different database for development and test mode.
Status: Issue closed
username_0: @username_2 Could the new behavior be made optional? This is a practice I've used since Rails 3 beta without issue. Using sqlite for testing is also very common on Travis CI, where folks will use in-memory databases, or with [parallel_tests](https://github.com/grosser/parallel_tests) for the ease of creating numerous temporary database copies. I've spoken to a number of developers who use this approach despite using MySQL or Postgres in production. I find a real value in testing against a different database, as it forces you to write database agnostic code.
I understand it may be discouraged, but I respectfully disagree it should be prevented.
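For reference, the in-memory setup mentioned above usually amounts to a few lines in the test bootstrap. A minimal sketch, assuming the dumped schema loads cleanly on SQLite (which is exactly what this issue breaks):
```ruby
# test helper (sketch): run the suite against a throwaway in-memory database
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")
ActiveRecord::Migration.verbose = false
load Rails.root.join("db", "schema.rb") # executes ActiveRecord::Schema.define
```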
username_2: If a config option were to be added, it wouldn't be until 5.1 at the earliest, but yes if you are interested in working on a pull request I would merge one which added a configuration option to not dump table options in `schema.rb`.
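To make the request concrete, such an option might look something like the following; the setting name is purely hypothetical, and nothing like it exists in Rails yet:
```ruby
# config/application.rb (hypothetical option name, not an actual Rails API)
config.active_record.dump_schema_table_options = false
```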
username_2: Yeah, that's a bit out of date. That hasn't really been true since 4.0
username_0: Would you be willing to reopen this issue? There is either a bug in the code or a bug in the documentation. No one will see this closed.
I _really_ strongly disagree with this: ActiveRecord migrations are database agnostic, so the schema should be too. If the additional options in the schema are useful, they should be prefixed with the database type they apply to, and ignored when loading the schema against a different database type.
username_3: I too would like the schema to remain optionally agnostic.
Regardless, this ticket shouldn't be closed until the schema option is implemented, or at the very least until a link is provided to the new ticket that describes updating the previously mentioned out-of-date documentation. "Working as intended" while the official documentation says otherwise is not valid.
username_4: I ran into this exact same problem, too. I'd like to have SQLite as test database, and MySQL for development.
username_5: To make the schema agnostic, we could qualify these table options with the database they apply to.
For example,
```ruby
create_table "foos", force: :cascade, mysql_options: "ENGINE=InnoDB DEFAULT CHARSET=utf8" do |t|
t.string "bar"
end
```
They'd simply be ignored (at your peril, if they're meaningful or necessary) on other databases.
username_6: I ran into the same problem 5 minutes ago and I totally agree with @username_5. That should solve the merge problem and allow users to use SQLite and MySQL without hassle.
username_5: Let's see a PR! 😁
username_7: What about those who build an app that has to support multiple database types and uses `schema.rb` to bootstrap a fresh install for each new tenant? Our tests run on both MariaDB and PostgreSQL and we are using both in development for obvious reasons. ActiveRecord is supposed to be an abstraction, not ignoring options that obviously make no sense and basically passing them straight to SQL is as leaky as it can get.
```ruby
create_table "foos", force: :cascade, options: "ENGINE=InnoDB DEFAULT CHARSET=utf8"
```
should really be:
```ruby
create_table "foos", force: :cascade, options: { mysql2: "ENGINE=InnoDB DEFAULT CHARSET=utf8" }
```
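A loader-side sketch of how such a hash could be consumed; this helper is hypothetical and none of it exists in Rails today:
```ruby
options = { mysql2: "ENGINE=InnoDB DEFAULT CHARSET=utf8" }

# Hypothetical: each adapter picks out its own entry and ignores the rest
def table_options_for(adapter_name, options)
  options.is_a?(Hash) ? options[adapter_name] : options
end

table_options_for(:mysql2, options)  # => "ENGINE=InnoDB DEFAULT CHARSET=utf8"
table_options_for(:sqlite3, options) # => nil, i.e. safely ignored
```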
username_5: @username_7 Definitely! That nicely summarizes the discussion above. Need a champion to step up with a PR.
username_4: @username_7 Exactly my opinion.
username_8: I also would like the schema to remain optionally agnostic. @username_2 Could we reopen this ticket, as it breaks existing flexibility and there is no mention of this change in any upgrade notes?
username_9: Ran into this also, very bad.
It should just allow us to run migrations and be consistent with the 'outdated' documentation posted above that states you can use any database.
username_10: We need to keep `schema.rb` as DB-agnostic as possible.
username_11: To copy over a comment from #29472:
Per https://github.com/rails/rails/pull/27981#issuecomment-279427794, I don't think we should ignore settings that mean something, and we should instead aim to exclude default settings from the schema dump.
username_12: I ran into this every time I wanted to use or support multiple databases (e.g. SQLite3 for some devs), and there is a lot more inconsistencies than just these options.
e.g. looking at the diff of what PostgreSQL to SQLite gave us for the exact same migrations (which do not contain any special SQL at all!):
* SQLite3 used `integer` columns for my foreign keys (`t.belongs_to` etc.), but PostgreSQL used `bigint`.
* SQLite3 driver doesn't support foreign keys, so all the `add_foreign_key`'s at the end got deleted.
* Many columns were reordered.
I have not looked in depth, but with `schema.rb` being made by dumping the database (rather than concatenating migrations in some way) I am inclined to think that there could be a whole list of edge cases between the different databases where datatypes and features differ.
If Rails wants `schema.rb` to be agnostic, maybe it should instead build itself from the migrations directly? Even if DB specific stuff is to be moved to its own parameters, like `options: {mysql: ..., sqlite: ...}` we would still then want it to "merge" the dumps rather than rewrite the `schema.db` so that it contains both at the same time, not just the one the developer that committed it was using (and especially not having stuff like it deleting all the foreign keys).
Otherwise, it seems it's better to say it will never truly be agnostic, and generate separate files (`schema_mysql.rb`, `schema_postgresql.rb`, `schema_sqlite.rb`, etc.) that, while following the same basic format and being possible (maybe with manual effort) to move correctly between DBs, are in no way advertised as such and can co-exist in version control.
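To illustrate the drift username_12 describes, here is a reconstructed diff (not copied from any real app) of the same migrations dumped under two adapters:
```ruby
# dumped under PostgreSQL
create_table "comments", force: :cascade do |t|
  t.bigint "post_id"
end
add_foreign_key "comments", "posts"

# dumped under SQLite3
create_table "comments", force: :cascade do |t|
  t.integer "post_id"
end
# (the add_foreign_key lines are simply gone)
```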
username_13: @username_10 +1 for your fix.
Can I expect to find that in 5.1.5?
username_7: @username_12 Similarly, there are differences on `text` types with `limit`: some MySQL builds may output `4294967295`, some `2147483647`, while PostgreSQL doesn't output anything, so loading that "database agnostic" schema with MySQL makes it [fall back to 65535](https://dev.mysql.com/doc/refman/5.7/en/storage-requirements.html#data-types-storage-reqs-strings), which can result in silent truncation. Introspecting schema solely from the database cannot produce a database agnostic `schema.rb` at the semantic level.
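A reconstructed example of that limit round-trip (the column name is made up):
```ruby
# MySQL dump of a LONGTEXT column:
t.text "body", limit: 4294967295
# PostgreSQL dump of the same logical column:
t.text "body"
# Loading the PostgreSQL-flavoured line back into MySQL creates a plain TEXT
# column (65535 bytes), so longer writes can be silently truncated.
```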
username_12: Well, like I said, I don't think the current method can ever work without catches. Separate schema file names might be better; at least if someone must rename the file they can expect issues, or rerun all migrations and have both schemas checked in.
Different builds of the same version are harder: if you have a schema dumped with 4294967295 and import it into a build that uses 2147483647, for example.
username_14: @username_2 what is the status of this issue?
It is disheartening to see this issue was raised in Aug 2016, and it has still not been addressed.
Is there an ETA when this will be fixed, and what version of Rails is targeted for the fix?
We too use SQLite3 for much faster testing, and MySQL in prod.
One other issue I noticed is that some of the limits from the old schema file no longer show up in the new format.
username_14: maybe @username_10 can help?
username_2: @username_14 Please stop pinging various team members. It will not bring more attention to your issue. As has already been stated, we're happy to accept a fix for this, but this issue needs a champion to write up a PR. It is not currently a priority for anybody on the team. Continuously pinging us here and on other mediums will not make something happen faster.
username_14: It's username_0's issue really.
So the question is _how_ one is supposed to find a "champion"?
Filing an issue and waiting for 16 months, hoping someone stumbles over it, doesn't seem to work.
Introducing this sort of dependency in the schema dump seems pretty short-sighted, IMHO.
AR is supposedly an abstraction, so people can easily switch out their DB.
username_14: @username_2
So there is already a PR for this, from the original author who made the change in the schema dumping, but it is still not merged:
https://github.com/rails/rails/pull/29472
username_12: @username_2 Is a complete design known, or do other things need to happen first?
For example, in username_14's case, even with the PR, if a developer were to use SQLite locally rather than MySQL to add a migration and commit the resulting (SQLite-specific) schema, using that later for MySQL would still have lost the new MySQL-specific options and, potentially worse, foreign keys and other items.
username_14: Sorry, I don't mean to be critical, but maybe this whole idea of saving DB-specific options was not such a good idea to begin with, given that AR is supposed to abstract from the underlying DB?
What was the original issue this new dump format was supposed to solve?
Is it really worth doing?
username_15: The core team has been pretty clear about what needs to happen here. Further commentary is unnecessary.
Status: Issue closed
|
kyma-project/test-infra | 726544714 | Title: pre/post-master-kyma-integration does not collect junit report when the tests have failed
Question:
username_0: <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
The jobs `pre/post-master-kyma-integration` do not collect JUnit reports when the tests fail.
**Expected result**
The JUnit report is fetched and available after the job, regardless of the test state.
**Actual result**
JUnit reports are only collected when the job finishes in a success state.
**Troubleshooting**
Successful: https://status.build.kyma-project.io/view/gcs/kyma-prow-logs/logs/post-master-kyma-integration/1318907921739812864
Failed: https://status.build.kyma-project.io/view/gcs/kyma-prow-logs/logs/post-master-kyma-integration/1318910186773024768
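The desired behavior, sketched below; this is illustrative only: the real jobs are shell scripts, and the artifact destination here is made up:
```ruby
# Run the tests, collect JUnit reports unconditionally, then propagate status.
tests_passed = system("make integration-tests")
system("cp", "-r", "junit_reports", ENV.fetch("ARTIFACTS", "/tmp/artifacts")) # always runs
exit(tests_passed ? 0 : 1)
```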
<issue_closed>
Status: Issue closed |