| repo_name | issue_id | text |
|---|---|---|
| stringlengths 4-136 | stringlengths 5-10 | stringlengths 37-4.84M |
ClusterLabs/fence-agents | 229733912 | Title: using fence_ipmilan from RHCS with pacemaker on SLES
Question:
username_0: Hi,
I'm trying to use fence_ipmilan from RHCS in conjunction with Pacemaker 1.1.12 on a SLES 11 SP4 node. I found an RPM for SLES 11, fence-agents-3.1.11-7.17. I also installed the necessary Python packages.
First, I'd like to know whether it is possible at all to use RHCS fence agents with Pacemaker.
I found some postings claiming that it's possible:
http://lists.clusterlabs.org/pipermail/users/2016-February/002287.html
I followed the guide:
# mkdir /usr/lib{,64}/stonith/plugins/rhcs
# cd /usr/lib{,64}/stonith/plugins/rhcs
# ln -s /usr/sbin/fence_pve
but with fence_ipmilan.
I can stonith the other node using /usr/sbin/fence_ipmilan:
fence_ipmilan -a 192.168.127.12 -p ********* -P -l root -o reboot -v
But using the fence agent with stonith does not succeed:
ha-idg-1:/usr/sbin # stonith -dt rhcs/ipmilan ipaddr=192.168.127.12 login=root passwd=*** -S
** (process:10896): DEBUG: NewPILPluginUniv(0x6062b0)
** (process:10896): DEBUG: PILS: Plugin path = /usr/lib64/stonith/plugins:/usr/lib64/heartbeat/plugins
** (process:10896): DEBUG: NewPILInterfaceUniv(0x606890)
** (process:10896): DEBUG: NewPILPlugintype(0x606350)
** (process:10896): DEBUG: NewPILPlugin(0x6071d0)
** (process:10896): DEBUG: NewPILInterface(0x607220)
** (process:10896): DEBUG: NewPILInterface(0x607220:InterfaceMgr/InterfaceMgr)*** user_data: 0x(nil) *******
** (process:10896): DEBUG: InterfaceManager_plugin_init(0x607220/InterfaceMgr)
** (process:10896): DEBUG: Registering Implementation manager for Interface type 'InterfaceMgr'
** (process:10896): DEBUG: PILS: Looking for InterfaceMgr/generic => [/usr/lib64/stonith/plugins/InterfaceMgr/generic.so]
** (process:10896): DEBUG: Plugin file /usr/lib64/stonith/plugins/InterfaceMgr/generic.so does not exist
** (process:10896): DEBUG: PILS: Looking for InterfaceMgr/generic => [/usr/lib64/heartbeat/plugins/InterfaceMgr/generic.so]
** (process:10896): DEBUG: Plugin path for InterfaceMgr/generic => [/usr/lib64/heartbeat/plugins/InterfaceMgr/generic.so]
** (process:10896): DEBUG: PluginType InterfaceMgr already present
** (process:10896): DEBUG: Plugin InterfaceMgr/generic init function: InterfaceMgr_LTX_generic_pil_plugin_init
** (process:10896): DEBUG: NewPILPlugin(0x606be0)
** (process:10896): DEBUG: Plugin InterfaceMgr/generic loaded and constructed.
** (process:10896): DEBUG: Calling init function in plugin InterfaceMgr/generic.
** (process:10896): DEBUG: NewPILInterface(0x607b90)
** (process:10896): DEBUG: NewPILInterface(0x607b90:InterfaceMgr/stonith2)*** user_data: 0x0x6072d0 *******
** (process:10896): DEBUG: Registering Implementation manager for Interface type 'stonith2'
** (process:10896): DEBUG: IfIncrRefCount(1 + 1 )
** (process:10896): DEBUG: PluginIncrRefCount(0 + 1 )
** (process:10896): DEBUG: IfIncrRefCount(1 + 100 )
** (process:10896): DEBUG: PILS: Looking for stonith2/rhcs => [/usr/lib64/stonith/plugins/stonith2/rhcs.so]
** (process:10896): DEBUG: Plugin path for stonith2/rhcs => [/usr/lib64/stonith/plugins/stonith2/rhcs.so]
** (process:10896): DEBUG: Creating PluginType for stonith2
** (process:10896): DEBUG: NewPILPlugintype(0x607ee0)
** (process:10896): DEBUG: Plugin stonith2/rhcs init function: stonith2_LTX_rhcs_pil_plugin_init
** (process:10896): DEBUG: NewPILPlugin(0x6089c0)
** (process:10896): DEBUG: Plugin stonith2/rhcs loaded and constructed.
** (process:10896): DEBUG: Calling init function in plugin stonith2/rhcs.
** (process:10896): DEBUG: NewPILInterface(0x608010)
** (process:10896): DEBUG: NewPILInterface(0x608010:stonith2/rhcs)*** user_data: 0x0x7fbc764172b0 *******
** (process:10896): DEBUG: IfIncrRefCount(101 + 1 )
** (process:10896): DEBUG: PluginIncrRefCount(0 + 1 )
debug: rhcs_set_config: called.
debug: rhcs_status: called.
debug: rhcs_run_cmd: Calling '/usr/lib64/stonith/plugins/rhcs/fence_ipmilan'
debug: set rhcs plugin param 'agent=ipmilan'
debug: set rhcs plugin param 'action=monitor'
debug: set rhcs plugin param 'login=root'
debug: set rhcs plugin param 'passwd=***'
debug: set rhcs plugin param 'ipaddr=192.168.127.12'
[Truncated]
May 18 18:23:11 ha-idg-1 stonith: rhcs_status: 'ipmilan monitor' failed with rc -1
May 18 18:23:11 ha-idg-1 stonith: rhcs/ipmilan device not accessible.
May 18 18:23:12 ha-idg-1 stonith: rhcs_run_cmd: fence agent exit code: 1
May 18 18:23:12 ha-idg-1 stonith: rhcs_status: 'ipmilan monitor' failed with rc -1
May 18 18:23:12 ha-idg-1 stonith: rhcs/ipmilan device not accessible.
May 18 18:23:12 ha-idg-1 stonith-ng[8393]: notice: log_operation: Operation 'monitor' [12102] for device 'prim_stonith_ipmilan_ha-idg-2' returned: -201 (Generic Pacemaker error)
May 18 18:23:12 ha-idg-1 stonith-ng[8393]: warning: log_operation: prim_stonith_ipmilan_ha-idg-2:12102 [ Performing: stonith -t rhcs/ipmilan -S ]
May 18 18:23:12 ha-idg-1 stonith-ng[8393]: warning: log_operation: prim_stonith_ipmilan_ha-idg-2:12102 [ failed: 255 ]
May 18 18:23:12 ha-idg-1 crmd[8397]: warning: stonith_plugin: rhcs plugins don't really support getinfo-devid
May 18 18:23:12 ha-idg-1 crmd[8397]: error: process_lrm_event: Operation prim_stonith_ipmilan_ha-idg-2_start_0 (node=ha-idg-1, call=1906, status=4, cib-update=2030, confirmed=true) Error
May 18 18:23:12 ha-idg-1 crmd[8397]: warning: status_from_rc: Action 45 (prim_stonith_ipmilan_ha-idg-2_start_0) on ha-idg-1 failed (target: 0 vs. rc: 1): Error
May 18 18:23:12 ha-idg-1 crmd[8397]: warning: update_failcount: Updating failcount for prim_stonith_ipmilan_ha-idg-2 on ha-idg-1 after failed start: rc=1 (update=INFINITY, time=1495124592)
May 18 18:23:12 ha-idg-1 crmd[8397]: notice: abort_transition_graph: Transition aborted by prim_stonith_ipmilan_ha-idg-2_start_0 'modify' on ha-idg-1: Event failed (magic=4:1;45:39:0:be494227-3368-4ea
...
Is it a fundamental problem, or did I just make an error?
Thanks.
Bernd
Answers:
username_0: Hi,
I managed it. It was my fault:
ha-idg-1:~ # stonith -vt rhcs/ipmilan ipaddr=192.168.127.12 login=root passwd=*** auth=password lanplus=1 -S
info: rhcs/ipmilan device OK.
Fencing also succeeds:
stonith -vt rhcs/ipmilan ipaddr=192.168.127.12 login=root passwd=*** auth=password lanplus=1 delay=10 -T reset ha-idg-2
info: plugin output: Rebooting machine @ IPMI:192.168.127.12...Done
I was just missing some parameters.
Bernd
Status: Issue closed
|
dresden-elektronik/deconz-rest-plugin | 372334569 | Title: GE Zigbee Smart Dimmer Switch (45857GE)
Question:
username_0: GE has an excellent smart dimmer switch called the GE Smart Dimmer (45857GE). This is a powered light switch that replaces a standard wall switch. I would appreciate you adding it to your supported device list. Here is the basic info. Let me know if you would like any other screen shots.




Answers:
username_1: Please provide a screen shot of the _Node info_ panel, see
https://github.com/dresden-elektronik/deconz-rest-plugin/wiki/Request-Device-Support
It should already be exposed as a dimmable light, to control the wired light(s). To expose the _Simple metering_ cluster as ZHAConsumption, we need to whitelist the dimmer. Does it report power (_Instantaneous demand_) in 0.1W and the lifetime consumption _Current summation delivered_ on 0.1 Wh, as the _Divisor_ would suggest?
I expect endpoint 0x02 is used to control other smart lights (that are not wired to the dimmer). The _OnOff_ and/or _Level_ client clusters probably need to be bound to a group for that to work. If you can find out what commands are sent, we could expose that endpoint as a ZHASwitch.
username_1: The plug should also already be exposed as a `/lights` resource. For the rest, it looks very similar to the dimmer - we can probably share most of the code. |
godotengine/godot-proposals | 1083880541 | Title: Turn off `loop` for audio in Import
Question:
username_0: ### Describe the project you are working on
A couple of games
### Describe the problem or limitation you are having in your project
When making games I found myself turning off this button extremely often

I keep Loop on for background music etc but the number of such files is much less
### Describe the feature / enhancement and how it helps to overcome the problem or limitation
Having `loop` turned off by default would solve the problem
### Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

I believe loop should be off for all other formats too.
### If this enhancement will not be used often, can it be worked around with a few lines of script?
Currently it can't
### Is there a reason why this should be core and not an add-on in the asset library?
Not sure if this can be done with a plugin.
But even if so, I still believe this should be in core not as a plugin.
Answers:
username_1: For the games I make, I have short sounds for:
- each character's move, attack, receiving dmg, death
- each skill
- each interaction
- each mob with its skills, etc
- each dialogue piece
- some ambient random sounds
None of them are meant to loop.
Music, on the other hand, is only about 3-5 tracks.
So yeah, it's very annoying to disable loop for every tiny sound; loop should be disabled by default.
Status: Issue closed
username_2: Duplicate of https://github.com/godotengine/godot-proposals/issues/3120.
username_0: That proposal is for moving Loop from an import option to a property in the AudioStreamPlayer nodes.
Does it include turning it off?
username_2: Yes, this is a compatibility-breaking change so it should be discussed at the same time :slightly_smiling_face:
username_3: You can change the default from the Preset button menu.
username_0: Isn't it per project?
That means I need to do this for each new project.
And the projects are not only games, they could be plugins or just test projects to test something.
username_3: I don't think this is relevant for such projects.
username_0: I don't get why this can't be relevant. A test project or a project to test a plugin can use audio.
username_0: My point is that for me the ideal solution would be if loop was off by default.
And that's not only my case (we had a short discussion about this in our local discord server).
I prefer straightforward solutions, not workarounds that's why I created this proposal.
username_3: It can, but maybe it can use WAVs, or looping doesn't matter there. I just can't imagine why you would need e.g. 10 random non-game projects that must use non-looping MP3s.
username_0: what if the project already exists?
The workaround won't be helpful
username_0: If the project already exists and has a lot of data, setting loop off manually may be not so easy.
I think setting it from the editor would be even easier (set it off once then save the preset)
But I still don't get why I need to do this or anything else if the option could simply be turned off by default (in case most Godot users find that fine) |
parcel-bundler/website | 1072612838 | Title: What is the license for this repository?
Question:
username_0: What is the license for this repository, please?
Thank you.
Answers:
username_1: Not sure if this counts:
https://github.com/parcel-bundler/website/blob/738cb704891eee7a813e2472bd0765499ff54010/package.json#L5
Ping @mischnic since you probably want to add one even though it doesn't really matter.
Status: Issue closed
|
algolia/algoliasearch-rails | 384030620 | Title: Test Suite is a little Flaky
Question:
username_0: ### Context
Running the test suite with `bundle exec rspec`
1. When I point to a blank Algolia App the tests pass.
1. When I run them again they fail.
### Impact
#319 shows this error is affecting PRs
### Fix
I fixed this locally and on CI by deleting all the indexes after the suite has run. Then the next run starts on a clean Algolia App.
```ruby
#spec_helper.rb
c.after(:suite) do
# Sort these so we don't try and delete a replica
indices = Algolia.client.list_indexes()['items'].sort_by { |index| index["primary"] || "" }
indices.each do |index|
# Bang on the delete so we wait for the task to finish
Algolia.client.delete_index!(index['name'])
end
end
```
I would open up a PR - **but since this deletes all indices between runs I didn't want to put side effects onto your testing application** - particularly if you are sharing this app between instances.
I tested this multiple times in CI and locally and it passes without a hiccup.
Open PR in my Fork [Fix test suite failure](https://github.com/username_0/algoliasearch-rails/pull/2)
@username_1
@redox
Appreciate your time.
Answers:
username_1: Hi @username_0,
Thank you for looking into it! I think many people (at least me) have different projects using the same app, so I'd rather not delete all indices. What we could do is define some norm in the [safe_index_name](https://github.com/algolia/algoliasearch-rails/blob/master/spec/spec_helper.rb#L32-L36) and only delete indexes matching this pattern.
The approach we had in other projects is either to:
* Delete the index before using it
* Save the name of all index created, delete them at the end
The first one works every time; the second one might not work if your test suite breaks. 💥
For example, in PHP, if you get a fatal error, your clean-up code never gets executed.
What do you think?
username_0: Hi @username_1,
I agree deleting all indexes is too dangerous.
#### Option 1 - Delete the index before using it
This means all contributors have to follow the pattern which can give some human error. To fix this you would maybe write a `rspec` hook like:
```ruby
before(:each) do
AlgoliaSearch.included_in.each do |klass|
Algolia.client.delete_index!(klass.index.name)
end
end
```
This gets messy. You need to delete the Primary, then replicas/slaves. There is also a high overhead and I found I got some edge cases that left a lot of `if`'s hanging around.
#### Option 2 - Save the name of all index created, delete them at the end
In my opinion this is easier. I have the following suggestion for this. I prefer to use the algolia client to get the list of indexes because I need to sort. I need to ensure that I don't attempt to delete a replica or slave before I delete the primary.
1. Remove reliance on ENV['TRAVIS_JOB_NUMBER'] so local and CI behaviour is the same
1. Store a SecureRandom constant to prefix all index names so we get `ea5f1cd615b24f6b_Book_development`
1. Rspec `after(:suite)` will run on early exit - will only fail if test suite configuration doesn't evaluate.
1. We are now using `SecureRandom` - this solves the underlying issue anyway - because now all index names are always unique. The delete step is no longer required to make subsequent runs work. It is now just a cleanup operation.
```ruby
RSpec.configure do |c|
...
# Remove all indexes setup in this run in local or CI
c.after(:suite) do
safe_index_list.each do |index|
Algolia.client.delete_index!(index['name'])
end
end
end
# A unique prefix for your test run in local or CI
SAFE_INDEX_PREFIX = SecureRandom.hex(8).freeze
# avoid concurrent access to the same index in local or CI
def safe_index_name(name)
"#{SAFE_INDEX_PREFIX}_#{name}"
end
# get a list of safe indexes in local or CI
def safe_index_list
Algolia.client.list_indexes()['items']
.select { |index| index["name"].include?(SAFE_INDEX_PREFIX) }
.sort_by { |index| index["primary"] || "" }
end
```
I have tested both these options on my fork, but prefer option 2 - https://github.com/username_0/algoliasearch-rails/pull/2
It might also be good to specify that indexes are not deleted if you pass a flag e.g. `bundle exec rspec --debug`. SecureRandom will ensure the next test run is successful even without the delete, and you can inspect your previous indexes to do any debugging.
What do you think?
username_1: Oh I really like the second option! I don't think we tried this approach before. Do you mind opening a PR on this repo? I'd be happy to merge this solution.
There is another constraint I have in mind. For Travis, [we generate keys](https://blog.algolia.com/travis-encrypted-variables-external-contributions/) for each build but they have an index prefix constraint. I can edit the constraint but I need something fixed.
I'd like to have `rails_` so I guess we would have:
```ruby
SAFE_INDEX_PREFIX = "rails_" + SecureRandom.hex(8).freeze
```
Then, depending on how many indexes are created by the tests, we can create a batch to have one HTTP call. It's generally better to send all operations and then wait, than to wait between each operation. This might be early optimization, I don't know if the suite creates 3-5 indexes or 30 ^^
Status: Issue closed
|
CartoDB/carto-vl | 423682923 | Title: Add options attribute to the Layer constructor
Question:
username_0: Adding an extra attribute `options` to the `carto.Layer` constructor would be useful. For example, in my case, I found it was not possible to set the layer's initial visibility to false using the API.
Default visibility is always true: https://github.com/CartoDB/carto-vl/blob/master/src/Layer.js#L69.
The hack workaround is:
```js
const visible = false
const layer = new carto.Layer(id, source, viz)
layer._visible = visible
```
But it would be better if this can be done using the API:
```js
const visible = false
const layer = new carto.Layer(id, source, viz, { visible })
```
Answers:
username_1: For this purpose, you can use the `hide` and `show` methods to change the visibility, which are public. Although it'd be better to set the visibility when creating the layer, I suggest using the following code in the meantime:
```js
const layer = new carto.Layer(id, source, viz)
layer.hide();
```
username_0: I tested your proposed solution but it throws the following error:
`TypeError: Cannot read property 'setLayoutProperty' of undefined`
To fix that you need to execute `layer.hide()` inside the `layer.on('loaded', ...)` callback, but then the layer appears for a short period of time before being hidden.
Another solution is to fix the show and hide functions to check whether `this.map` exists, but update `this._visible` in any case.
username_2: The "start as hidden layer" has been required before (see https://github.com/CartoDB/carto-vl/issues/1151).
username_3: just ran into this in a small app I'm building too. Wanted to have the layers be toggled by Airship radio buttons and "select" and "clear" buttons. IE, pick a radio, hit "select" and have layer shown; hit "clear" and have all layers hidden again.
username_2: Right now, we don't have plans to implement this, so I would recommend testing the workaround:
```js
const layer = new carto.Layer(id, source, viz)
layer._visible = false
```
Once you are in your app, you can then use the public `show / hide` methods connected to your UI. |
wekan/wekan | 476038338 | Title: Mobile problem with immediate moving of columns instead of scrolling
Question:
username_0: When opening Wekan in a mobile browser you cannot scroll the "vertical" board, because the "columns" get moved around instead
Answers:
username_1: Agreed, was just about to come and post about this one. If the column widths were shrunk down a bit, the space to the left or right would allow a section to act as a scroll bar. This would greatly increase the usability on mobile devices since a mobile app hasn't been pushed out.
username_0: I'd rather suggest completely disabling the movement of columns in mobile view. I think when working on a mobile device it's more about actual card work and not changing the layout of columns.
An even better solution would be to let us toggle editing of columns from the board settings.
Several times now I have had to reorder the columns because a user reordered them by accident. So if the columns were locked by default and only the board owner were able to edit them - that would really help.
username_2: Freezing / Locking columns would be of value in both the mobile application but also in the existing desktop layout when more than one user is interacting with the board. If I create a board, I don't want it modified by other users. If there was a setting for this it would at least make that less likely to be rearranged without intention.
This would also put us one step closer to mobile use.
**These two changes would allow for immediate use in mobile environment** while future app considerations are being thought out.
1. Ability to lock the columns so finger touches do not drag them.
2. A 'Move To' button allowing you to move a card to another column
Jay
username_3: Moved to #2081
Status: Issue closed
|
taylorhcarroll/NatzHarmonyCapstone | 611530547 | Title: User can enter profile information
Question:
username_0: Once the user has submitted account login info, they will be redirected to a series of pages asking them to begin answering questions to build out their profile.
This includes:
Year born
Country of Origin
Preferred pronouns and gender
Ability to select MULTIPLE languages other than English.
Availability (morning, midday, evening) |
alibaba/nacos | 393344296 | Title: Health check mode conflict when building multi clusters with nacos-sync + nacos
Question:
username_0: ## 版本
nacos版本 0.6.0
## 描述
我们通过nacos和nacos-sync搭建两地的集群, 支持服务就近注册和拉取服务地址, 两地的服务可以通过nacos-sync双向同步。
我们遇到的一个问题是, 由于同样一个服务名,在两个集群都可以进行服务注册,所以他的健康检查方式在两个集群是一致的, 而如果是客户端检查, 这就会导致通过nacos-sync同步的实例由于没有心跳维持,很快就被注销掉, 这样跨集群调用就无法进行
不知道nacos这边有没有什么建议,感谢<issue_closed>
Status: Issue closed |
rafaelmardojai/firefox-gnome-theme | 470250409 | Title: "Hide the tab bar when only one tab is open" doesn't work
Question:
username_0: 
I use Firefox 68.0 on Ubuntu 18.04
Answers:
username_1: Move the "New tab" button to the headerbar.
username_0: It works. I think it's a bug.
Status: Issue closed
username_2: Hi. I also need help for this... How do you move the new tab button to the headerbar? It doesn't appear when I'm in the "Customize Firefox" page.
username_3: @username_2 Mmm, seems like the overflow menu is on top of the tabsbar. I need to fix that.
username_2: @username_3 Thanks! Your comment helped me find a workaround.
For future visitors: Simply disable *Firefox-Gnome-Theme* by going to your Firefox profile's `userChrome.css` file, and commenting out the line
```css
@import "firefox-gnome-theme/userChrome.css";
```
Reopen Firefox, and you will now be able to move the new tab icon to the toolbar. Now, uncomment the above line and reopen Firefox.
username_4: I can confirm that this is fixed in latest version (dfa1bc5 on 26 Nov 2020). When you hit "Customize Firefox" the "New Tab" Button appears and you can drag it to the main bar. |
transformerjnm/facebook-clone | 776619331 | Title: Material UI light and dark theme
Question:
username_0: ** Facebook Color Schema **
We need the entire application to have a consistent color schema using Material UI.That will allow for switching to a dark theme in the future.
**Material UI Theme Palette**
We need to use theme.js to implement a MuiTheme from Material UI, then update the Material UI default color palette to our own palette. We should have two MuiThemes: one for the light theme and one for the dark theme.
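A minimal sketch of what theme.js could look like, assuming Material UI v4's `createMuiTheme` API; the hex values below are placeholders, not the actual Facebook colors:
```js
// theme.js - sketch only; the color values are placeholders.
import { createMuiTheme } from '@material-ui/core/styles';

export const lightTheme = createMuiTheme({
  palette: {
    type: 'light',
    primary: { main: '#1877f2' },        // placeholder "Facebook blue"
    background: { default: '#f0f2f5' },  // placeholder light background
  },
});

export const darkTheme = createMuiTheme({
  palette: {
    type: 'dark',
    primary: { main: '#1877f2' },
    background: { default: '#18191a' },  // placeholder dark background
  },
});
```
The app would then wrap its component tree in `<ThemeProvider theme={lightTheme}>` (from `@material-ui/core/styles`) and swap in `darkTheme` when the user toggles dark mode.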
**Alternatives Considered**
The primary alternative would be to use CSS/SASS variables to create the color palette. However, this functionality is already provided to us by Material UI.
**Theme Colors**
The theme colors for Light and Dark theme are coming from Facebook.com. |
OrchardCMS/OrchardCore | 784701799 | Title: NewtonsoftJsonMvcOptionsSetup is never registered
Question:
username_0: OrchardCore never calls Microsoft's `AddMvc()` or `AddMvcCore()` during registration. The `AddMvcCore()` extension registers the [NewtonsoftJsonMvcOptionsSetup](https://github.com/dotnet/aspnetcore/blob/b795ac3546eb3e2f47a01a64feb3020794ca33bb/src/Mvc/Mvc.NewtonsoftJson/src/DependencyInjection/NewtonsoftJsonMvcOptionsSetup.cs) class, which replaces `SystemTextJsonOutputFormatter` with `NewtonsoftJsonInputFormatter` to allow the binding engine to bind request data from JSON to objects.
For more info #8275
Answers:
username_1: We call `services.AddMvc(options =>`, see https://github.com/OrchardCMS/OrchardCore/blob/dev/src/OrchardCore/OrchardCore.Mvc.Core/Startup.cs#L73
And you can re-call it from a module startup, or from the app with one of our helpers
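For illustration, a minimal sketch of what such a re-call could look like from a module's startup class (assuming OrchardCore's `StartupBase` and a reference to the `Microsoft.AspNetCore.Mvc.NewtonsoftJson` package; the class below is hypothetical, not existing OrchardCore code):
```csharp
using Microsoft.Extensions.DependencyInjection;
using OrchardCore.Modules;

// Hypothetical module startup that re-registers MVC with the Newtonsoft.Json formatters.
public class Startup : StartupBase
{
    public override void ConfigureServices(IServiceCollection services)
    {
        // AddMvc() returns an IMvcBuilder; AddNewtonsoftJson() swaps the
        // System.Text.Json input/output formatters for the Newtonsoft.Json ones.
        services.AddMvc()
            .AddNewtonsoftJson();
    }
}
```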
We also call `builder.AddNewtonsoftJson()`, see https://github.com/OrchardCMS/OrchardCore/blob/dev/src/OrchardCore/OrchardCore.Mvc.Core/Startup.cs#L89 |
colorer/Colorer-library | 639698917 | Title: Colorer.exe v1.0.5 Git-2cf327d - the utility does not pick up the schemes
Question:
username_0: Проверка простейшая схемы взяты из подкаталога FarColorer/base и положены рядом с утилитой, затем она запущена с ключом -v. Что я вижу:
в коммите Git 2cf327d

with Git commit b18867cb7

I build it with a trivial script:
```
@echo off
if "%1" == "-u" (git pull -f && git submodule update --recursive)
if exist ./bin rd /s/q bin
if exist ./build rd /s/q build
mkdir build Colorer\x64 Colorer\x86
cd build
cmake -G "Visual Studio 15 2017" -Tv141_xp ..
cmd /c "%VS150COMNTOOLS%\..\..\VC\Auxiliary\Build\vcvarsall.bat" x86 && "%VS150COMNTOOLS%\..\IDE\devenv.exe" colorer.sln /Build "Release|Win32" /Project "ALL_BUILD"
cd ..
move /y bin\vc\Release\colorer.exe Colorer\x86\colorer.exe
rd /s/q build > nul
mkdir build
cd build
cmake -G "Visual Studio 15 2017 Win64" -Tv141_xp ..
cmd /c "%VS150COMNTOOLS%\..\..\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64 && "%VS150COMNTOOLS%\..\IDE\devenv.exe" colorer.sln /Build "Release|x64" /Project "ALL_BUILD"
cd ..
move /y bin\vc\Release\colorer.exe Colorer\x64\colorer.exe
rd /s/q bin
rd /s/q build
exit
```
[build.zip](https://github.com/colorer/Colorer-library/files/4787029/build.zip)
Answers:
username_1: The bug here is not in picking up the schemes; as I already wrote on the forum, parsing is broken due to a wrong default value for the block size. I won't fix it for now - I need to think about how the "editor" parameters should be used in general. They used to live in the schemes, then they were moved into a separate file purely for Far. But if the console utility is to be supported as well, it needs similar settings too.
username_0: Agreed, I would give it its own config. In the meantime the old variant can still be used. The utility is a real lifesaver at times, especially when viewing logs when all you have at hand is a terminal and very little space.
username_1: The bug was fixed earlier, but the root cause is not resolved.
Closing :-)
Status: Issue closed
username_0: I'll try out what came of it right away - I'm curious, and the utility is a handy thing.
swaywm/sway | 624427556 | Title: heap-buffer-overflow handle_pointer_constraint_set_region
Question:
username_0: `sway version 1.4-5d13f647 (May 25 2020, branch 'master')`
```
03:17:34.785 [DEBUG] [xwayland/xwm.c:1304] unhandled X11 event: FocusOut (10)
03:17:34.786 [DEBUG] [types/wlr_pointer_constraints_v1.c:242] new locked_pointer 0x61200016bac0 (res 0x60c000359140)
=================================================================
==1569==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60600040ff28 at pc 0x562d6a30bfdb bp 0x7ffc3a0504c0 sp 0x7ffc3a0504b0
WRITE of size 1 at 0x60600040ff28 thread T0
#0 0x562d6a30bfda in handle_pointer_constraint_set_region ../sway/sway/input/cursor.c:836
#1 0x7fd104521a66 in wlr_signal_emit_safe ../util/signal.c:29
#2 0x7fd10450c49d in pointer_constraint_commit ../types/wlr_pointer_constraints_v1.c:132
#3 0x7fd10450c4b2 in handle_surface_commit ../types/wlr_pointer_constraints_v1.c:140
#4 0x7fd104521a66 in wlr_signal_emit_safe ../util/signal.c:29
#5 0x7fd1045167b0 in surface_commit_pending ../types/wlr_surface.c:379
#6 0x7fd104516b72 in surface_commit ../types/wlr_surface.c:448
#7 0x7fd103b47a8c (/usr/lib/libffi.so.7+0x6a8c)
#8 0x7fd103b4701a (/usr/lib/libffi.so.7+0x601a)
#9 0x7fd1045dbf61 (/usr/lib/libwayland-server.so.0+0xcf61)
#10 0x7fd1045d82db (/usr/lib/libwayland-server.so.0+0x92db)
#11 0x7fd1045d9fa9 in wl_event_loop_dispatch (/usr/lib/libwayland-server.so.0+0xafa9)
#12 0x7fd1045d84e6 in wl_display_run (/usr/lib/libwayland-server.so.0+0x94e6)
#13 0x562d6a2ecc42 in server_run ../sway/sway/server.c:225
#14 0x562d6a2eb646 in main ../sway/sway/main.c:409
#15 0x7fd1041fb001 in __libc_start_main (/usr/lib/libc.so.6+0x27001)
#16 0x562d6a2d204d in _start (/usr/bin/sway+0x3d04d)
Address 0x60600040ff28 is a wild pointer.
SUMMARY: AddressSanitizer: heap-buffer-overflow ../sway/sway/input/cursor.c:836 in handle_pointer_constraint_set_region
Shadow bytes around the buggy address:
0x0c0c80079f90: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c80079fa0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c80079fb0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c80079fc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c80079fd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c0c80079fe0: fa fa fa fa fa[fa]fa fa fa fa fa fa fa fa fa fa
0x0c0c80079ff0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c8007a000: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c8007a010: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0c8007a020: fa fa fa fa 00 00 00 00 00 00 00 fa fa fa fa fa
0x0c0c8007a030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
[Truncated]
name = 0x562d6a39cd80 "unsupported-gpu",
has_arg = 0,
flag = 0x0,
val = 117
}, {
name = 0x562d6a39cdc0 "my-next-gpu-wont-be-nvidia",
has_arg = 0,
flag = 0x0,
val = 117
}, {
name = 0x0,
has_arg = 0,
flag = 0x0,
val = 0
}}
config_path = 0x0
usage = 0x562d6a39c160 "Usage: sway [options] [command]\n\n -h, --help", ' ' <repeats 13 times>, "Show help message and quit.\n -c, --config <config> Specify a config file.\n -C, --validate Check the validity of the config file, th"...
c = <optimized out>
```
Answers:
username_1: Thanks for reporting this, could you give https://github.com/swaywm/sway/pull/5384 a try and see if it fixes the issue for you?
Status: Issue closed
username_0: @username_1 wow that was fast. Thank you! Looks like it's been merged to master so I'll rebuild and test it out tonight.
username_3: @username_0 Can you give the cliff notes on how this bug was triggered? Thanks!
username_0: @username_3 Sure. It happened when exiting Stardew Valley. Specifically stardew installed with `stardew_valley_1_4_3_379_34693.sh` and `SMAPI-3.5.0-installer.zip`, but I doubt that matters.
username_3: @username_0 Exactly what I needed, thanks! |
leafo/moonscript | 546588990 | Title: Adding features supporting writing code in a functional style
Question:
username_0: 1 2
```
but it'll definitely improve with time. The next feature I want to implement is
"back-calls"[5], which are basically a little syntactic cheat to make working
with nested callbacks more pleasant. Unexpectedly, the feature is more
versatile, and can be used for many kinds of DSLs.
Summing all this up: I'm going to implement many new syntactic constructs by
changing/adding to the MS implementation. I have no idea how long I will keep at
it, which features exactly I'll end up implementing, nor how big the needed
changes/rewrites in MS would be - and most of all I don't know when I'll
get stuck - but I'm currently determined to at least try doing this.
I'd like to ask a few questions related to the above:
1. Is there a technical reason for some of the features being impossible to
implement (or simply being a "bad idea" in Lua)? Which ones?
2. Is there anyone interested in helping? Or at least discussing the unexpected
problems and the features themselves... I could use a whole lot of that :)
3. Is there any hope of ever merging my changes upstream? I'm doing it for my
own use, but if there's a chance, I'll try to make the changes/PR "production
ready", with more comments, an additional round of cleanups, and updated docs.
PS. All of this is written with the assumption that MS is MIT-licensed - please
let me know if there are any legal problems with my ideas!
[1] https://livescript.net/
[2] http://www.preludels.com/
[3] https://github.com/username_0/nimiTCL
[4] https://klibert.pl/posts/adding_destructuring_bind_to_io.html
[5] http://livescript.net/#functions-backcalls
Answers:
username_1: Yeah, thanks for 541, this bit of code is hairy.
username_2: My best guess would be: not a chance. I do not think that Leafo would agree with these changes, let alone maintain them. That is not necessarily a bad thing and I admire Leafo for how he has managed MoonScript, it just simply isn't in the focus of the language. Your best bet is a self-maintained fork. I would have said community-maintained, but us regular and known MoonScript users are like 20 at most.
Other comments:
- I know that MoonScript is eventually getting a better rewrite of the compiler, but everyone is unsure when. I don't think big changes will be introduced before that.
- While pattern-matching functions are great, it would break with the current way functions are defined terribly. Either you propose a magically perfect standard that fixes that, or it is not going to look good.
- Mind compatibility. MoonScript is not a language big enough to survive such a giant syntax change and would essentially phase most known projects out.
- I am personally not very good at parser and compilers, but I can contribute some of my ideas and always discuss syntax.
username_0: Sure. I'm talking specifically about language extension, I don't plan (right now at least) to change any existing syntax. It's not needed in the vast majority of cases in general. Consider that there are languages well into their 70s which still compile and run their first implementation (ok, I'm cheating by bringing up Common Lisp here, but you can really see it everywhere: for every Python3 and Perl6 out there we have hundreds of (almost) perfectly backward-compatible language releases). Especially in small languages, if the features are reasonably orthogonal, you can go very far before crashing into the compatibility wall.
---
Whoa, that's a lot of text, I feel I wrote more just now then on my blog for the last two years, yet I still didn't even get to describing even a single feature I want in detail... I'd love to keep at it for another 2 hours, but unfortunately, there's that thing called life which also requires my attention, so I can't :( I'll probably write a bit more tomorrow, and will use the weekend to clean this up: https://github.com/username_0/moonscript/pull/1/files - the spurious requires and duplicated utils are the remnants of a debugging session, don't let them bother you ;) - and later to set up all the things I ignored for now, such as tests.
---
[1] If you wonder how outrageous: enough that retargeting MS to bare metal via LLVM seems tame in comparison ;)
username_1: [Why not?](https://metacpan.org/pod/PPR)
username_0: Wow, that's nice. Well, it just means I didn't know what "all the extensions" means in practice, esp. in Perl. TIL.
username_1: Without derailing the thread further: [video on the topic](https://www.youtube.com/watch?v=ob6YHpcXmTg). This is possible with the regex flavors of .NET, Ruby, Python should also be able to do that.
username_2: I will note that the compiler rewrite notice was like, only superficially mentioned in the Discord? I don't even think I can pinpoint the actual quote but you can always try. But judging by Leafo's behavior towards other smaller issues, I predict that if there is a rewrite it will produce semantically identical compiled code (Lua syntax may change, the generals won't), and that once the language is replicated as it currently is, then new features can be introduced.
username_0: A short update from me: I managed to implement pipe operator and backcall syntax. They look like this:


---
Other than that, I've been thinking about what you @username_2 wrote here and I have to say I got kind of discouraged. First, MoonScript seems dead as a project, or maybe un-dead - a zombie animated by just a few people who feel strongly about it. The original author is not among these people, apparently, and the development efforts lack leadership, which results in them being unfocused and chaotic (why you would ever reimplement the codebase in C++ is beyond me, for example). There are people much more knowledgeable than me - lurking on Discord taught me that - but the changes they suggest and prototype are so far removed from the basics (eg. the tickets you mention) that they are of little use to me right now. It's kind of disappointing to see all these smart people arguing about type systems while things like `exp1; exp2` are still unimplemented. Also, to be honest, the code of MoonScript could use some love in basically every part of it - it's not the most beautiful thing I've ever read, to say the least.
Due to all that I'm thinking about a more radical departure from MoonScript. My free time is limited and my use case is pretty niche - though I still believe MS is a good fit for Awesome WM - so I'm thinking about simply dropping all the unnecessary parts, backward compatibility be damned. I would rename my fork to something like AwesomeScript and focus on integration with the window manager - this is something I personally want and need; it's obvious that I won't get any help in this, so I need to use my time as efficiently as possible. I will need to cut a lot of corners to get what I want before the heat death of the universe, and I feel that discussing said corners with people who won't contribute any code would be a waste of time.
---
To summarize: I still intend to work with the codebase, but I'm going to get away from MoonScript-the-project and MoonScript-the-community. I'm sorry and sad about that, and I of course welcome any contributions to my fork, but I don't have the time to waste on things outside of my immediate needs. I can't let my enthusiasm diminish any further - there's a huge number of other languages I could use and I happen to know most of them, but once I start looking at them I'll have to admit I wasted lots of time on MoonScript and Lua already - I don't want that to happen.
Status: Issue closed
username_2: This is honestly sad to hear, since I was kind of hoping that we would get a driving force for a MoonScript rework, but it seems like that is never happening, since people come to the project and then decide to get away from it, and they are very right to do so! I do not understand why I'm still on this sinking boat but I like to be on it. I just wish we had something better, but if that doesn't come from people who really don't want to drop MoonScript like me, then it just isn't happening.
Looking forward to your fork. |
space-wizards/space-station-14 | 1108319976 | Title: Generators don't consume fuel
Question:
username_0: ## Description
Currently generators are an infinite source of power as they don't consume any fuel.
We need a generator component that consumes fuel from an inserted tank to produce power and that stops producing power when the fuel runs out (maybe even spits the container out as an indicator, or at least stops playing the sound of the generator to indicate it is off).
**Additional context**
<!-- Add any other context about the problem here. -->
Similar issue to #5815, except they will consume plasma (in proportion to?) the amount of radiation they take, but still produce power in relation to the amount of plasma used, so I can see the generator component being quite versatile. E.g.:
```yaml
- type: Generator
  fuel: PlasmaTank
  trigger: time  # or radiation in the case of rad collectors
```
Whether the generator component is responsible for power output or not is up to whoever is taking this on.
Answers:
username_1: Working on this now. I am making a generator component and system, allowing for other generators to be included.
username_0: Excellent! Will complement #6455 very well. If you'd like to discuss it any hmu on discord.
I should add it would be nice to specify if and how much fuel the generator should start with on round start. Also slightly related would be having the generator make noise or change sprite state depending on fuel status (like apc visualiser) but that's fine being out of scope - simply consuming fuel would be a good start point so thanks for taking it on.
username_1: @username_0 Yeah, I just got it to accept fuel and then start. Now I am working on consuming fuel, then ejecting. And yes, I would like to add functionality where, if the fuel is spent, it either changes the icon state or makes a noise. I'll message you on Discord sometime this week, just busy with school and life, haha.
nightwatchjs/nightwatch | 255410053 | Title: Geckodriver Selenium 3.5.3 launches Chrome instead of Firefox
Question:
username_0: System Properties
OS : Windows 7
Selenium Server :3.5.3
Node Version : v6.10.3
## Nightwatch Command
nightwatch --config nightwatch.windows.js -e firefox --reporter ./html-reporter.js
## conf JS
selenium: {
start_process: true,
server_path: "node_modules/selenium-server-standalone-jar/jar/selenium-server-standalone-3.5.3.jar",
log_path: "./reports",
host: "127.0.0.1",
port: 4444,
cli_args: {
"webdriver.chrome.driver": "./node_modules/chromedriver/lib/chromedriver/chromedriver.exe",
"webdriver.gecko.driver": "./node_modules/geckodriver/geckodriver.exe",
"webdriver.ie.driver": "./node_modules/iedriver/lib/iedriver/IEDriverServer.exe"
}
},
test_settings: {
default: {
selenium_port: 4444,
selenium_host: "localhost",
silent: true,
skip_testcases_on_fail: false,
end_session_on_fail: false,
screenshots: {
enabled: false,
on_failure: true,
path: "./reports"
},
firefox: {
desiredCapabilities: {
browserName: "firefox",
marionette: true,
}
}
On executing with the firefox environment, Chrome is launched instead of Firefox. This happens for IE as well. The same setup works with 3.4.0.
Answers:
username_1: ```javascript
selenium: {
start_process: true,
server_path: 'node_modules/selenium-server-standalone-jar/jar/selenium-server-standalone-3.5.3.jar',
log_path: './reports',
host: '127.0.0.1',
port: 4444,
cli_args: {
'webdriver.chrome.driver': './node_modules/chromedriver/lib/chromedriver/chromedriver.exe',
'webdriver.gecko.driver': './node_modules/geckodriver/geckodriver.exe',
'webdriver.ie.driver': './node_modules/iedriver/lib/iedriver/IEDriverServer.exe',
},
},
test_settings: {
default: {
selenium_port: 4444,
selenium_host: 'localhost',
silent: true,
skip_testcases_on_fail: false,
end_session_on_fail: false,
screenshots: {
enabled: false,
on_failure: true,
path: './reports',
},
firefox: {
desiredCapabilities: {
browserName: 'firefox',
marionette: true,
},
}
```
That's because you actually do not have an environment named _firefox_. Your config structure is not correct. Cheers.
```javascript
test_settings: {
default: {
selenium_port: 4444,
selenium_host: 'localhost',
silent: true,
skip_testcases_on_fail: false,
end_session_on_fail: false,
screenshots: {
enabled: false,
on_failure: true,
path: './reports',
},
},
firefox: {
desiredCapabilities: {
browserName: 'firefox',
marionette: true,
},
}
```
username_0: I didn't include my full config structure. I have edited it in now.
username_2: @username_0 it would be worth changing the default to Firefox and seeing if Firefox opens. If Nightwatch has any issues with your alternative environments it falls back to the default, which is Chrome.
username_3: @username_2 Just tried this out. Setting `desiredCapabilities` on default `test_settings` to Chrome will launch Chrome regardless of environment parameter. When we remove it, Firefox will be the default and environment parameters work as expected. I'm guessing this is because any default browser set will be launched regardless and this is probably a bug. Would be nice to receive confirmation on this from someone else.
username_4: This bug seems to be triggered by the presence of `chromeOptions`. A work-around is to not add `chromeOptions` to `test_settings.default.desiredCapabilities` and only add them to environments that are supposed to use Chrome. |
eleurent/rl-agents | 978125091 | Title: Make the default highway-v0 configuration faster
Question:
username_0: Or at least implement an `highway-fast-v0` variant with higher framerate.
Several changes can be made to increase simulation speed, such as:
* A: set the simulation frequency to 5Hz rather than 15Hz (less accurate)
* B: only check collisions for the controlled vehicles, but not between other vehicles
* C: use fewer vehicles in the scene, and thus shorter episodes
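As a rough sketch of how such a faster variant could be configured today (assuming the `highway_env` package and its `configure()` API; the config keys below are taken from the configurations quoted later in this thread and may depend on the highway-env version):
```python
import gym
import highway_env  # noqa: F401 -- registers the highway-v0 environments

# Hypothetical "fast" configuration combining the changes listed above.
env = gym.make("highway-v0")
env.configure({
    "simulation_frequency": 5,          # coarser physics steps (default is 15 Hz)
    "lanes_count": 3,                   # smaller scene...
    "vehicles_count": 20,               # ...with fewer vehicles
    "duration": 20,                     # [s] shorter episodes
    "ego_spacing": 1.5,
    "disable_collision_checks": True,   # only check collisions involving the controlled vehicles
})
env.reset()
```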
Answers:
username_0: Here are three configuration tested:
Default config:
```
"simulation_frequency": 15,
"lanes_count": 4,
"vehicles_count": 50,
"duration": 40, # [s]
"ego_spacing": 2,
"disable_collision_checks": False
```
A:
```
"simulation_frequency": 5,
```
B:
```
"vehicles_count": 20,
"lanes_count": 3,
"duration": 20, # [s]
"ego_spacing": 1.5,
```
C:
```
"disable_collision_checks": True
```
And here are the corresponding measured FPS for the different combinations, evaluated over 50 episodes:
| Variant | FPS |
| -------|------|
| Original | 1.1 |
| A | 2.9 |
| B | 4.8 |
| C | 1.7 |
| A + B | 11.3 |
| B + C | 6.6 |
| A + C | 4.5 |
| A + B + C | 14.5 | |
clirdlf/ndsa.org | 123348206 | Title: Clean up Calendar page tables
Question:
username_0: As per your note from the NDSA Website Content Inventory spreadsheet
http://ndsa.diglib.org/calendar/
Answers:
username_1: Let's set this up as a Google calendar. Did you set up a separate Google account for NDSA?
username_0: Yes- it's ndsa.cal, and I just shared the calendar with you.
Status: Issue closed
|
Almenon/AREPL-vscode | 743924480 | Title: make it work on python3
Question:
username_0: ### **There is a problem when working with python3 environment.**
#### I think it's just a matter of which command is executed. Add "python3" to the command.
Answers:
username_1: Arepl works with python 3. Try adding python2 to arepl.pythonpath
username_2: @username_0 I just released a new version of AREPL (2.0.1) with a few bug fixes. Does it work there?
username_3: I have to add 'python3' in arepl.pythonpath. I use AREPL (2.0.1).
username_2: Glad that fixed the issue!
On an unrelated note, would you be interested in a short interview around your (current or former) usage of AREPL? I'm very curious how people are using it.
Status: Issue closed
|
adjust/ios_sdk | 96681980 | Title: Adjust doesn't work with iOS 9, when app transport security is enabled
Question:
username_0: The console is filled with logs like this:
[Adjust]e: Failed to track session. (An SSL error has occurred and a secure connection to the server cannot be made.) Will retry later.
See Apple's technical note about App Transport Security for help fixing this: https://developer.apple.com/library/prerelease/ios/technotes/App-Transport-Security-Technote/
Answers:
username_0: Looks like app.adjust.com is using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA for its cipher, which is not on the whitelist for App Transport Security
username_1: @username_0 Thank you for this info, we'll take a look at it and let you know as soon as there's an update on our side.
username_2: @username_0 thank you for bringing this to my attention. we are working on that problem. as we need to do some testing to guarantee downward compatibility this could take some time. but we will fix this asap.
username_0: @username_2 @username_1 no problem!
username_2: Hello @username_0 ,
right now this is what we support of the mentioned ciphersuites with TLS 1.2
```
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 ECDHE-RSA-AES256-GCM-SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 ECDHE-RSA-AES128-GCM-SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 ECDHE-RSA-AES256-SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 ECDHE-RSA-AES128-SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA ECDHE-RSA-AES128-SHA
```
I included the openssl formatted name so you can check that they are working with:
```
openssl s_client -cipher $CIPHER -tls1_2 -connect app.adjust.io:443 -debug
```
e.g.
```
openssl s_client -cipher ECDHE-RSA-AES128-SHA -tls1_2 -connect app.adjust.io:443 -debug
```
We are not enforcing these ciphers though. Does that fix your issue?
username_2: I just realized that we are still using SHA1 certificates, so I guess your issue is not fixed. I will update this as well.
username_2: @username_0
Okay, I also updated our certificates to have SHA2 signature hashes. Can you please test whether that fixes your issue? If not, can you somehow give me a self-contained test? Thanks in advance!
username_0: @username_2 looks like it's fixed! Good call on the SHA2 certificates. I was just looking at the cipher suite that Firefox was preferring (as a quick test as to why the connection might be failing). Thanks for fixing this so quickly!
Status: Issue closed
username_2: You are very welcome. Thanks again for bringing this to our attention. I will close that issue now.
Have a nice weekend!
username_0: You too! |
mavoweb/mavo | 411665503 | Title: outsideCollection where outsideCollectionProperty = localProperty produces unexpected results
Question:
username_0: Originally discovered by @username_1.
Testcase: https://codepen.io/leaverou/pen/RvEKyW?&editors=1100#0
Testcase schema is:
- col1#
- name
- foo
- col2#
- name
- bar
`col1 where foo = bar` from **inside** col2 items produces unexpected results. The expected result is returned by `filter(col1, col1.foo = bar)`. `col1 where foo = this.bar` will also produce the expected result.
This is not exactly a bug, but the behavior of Mavo is quite unintuitive.
**An explanation of what's going on:**
When you write `col1 where foo = bar`, it gets rewritten internally to something like `filter(col1, scope(col1, foo = bar))` where `scope(A, B)` gives you the B you'd get if you had written B inside A. It's an undocumented function that's currently only used internally, to implement the `where` operator.
The reason this happens is so that you can have expressions like `woman where age > 30` where the women are filtered by their own ages, and not by the ages of man items that just happened to be first in the page.
So essentially, you're comparing `col1.foo` and `col1.bar`, but you're expecting to compare `col1.foo` and `this.bar`.
The problem is that `foo = bar` needs to be evaluated **before** the expression reaches the `where` operator. So it has to pick a scope: either the scope it's used in (the default behavior of every expression), or the scope of the first operand of `where` (via rewriting). The former would fix this problem, but led to incredibly confusing behavior when there were collections with internal properties of the same name (e.g. men and women with ages) filtered from the outside, so we changed it to the latter early on.
It's also not possible to pick a different scope per identifier, based on the filtered collection, since expressions are rewritten before any data touches them.
As we've seen, none of the two possible scoping behaviors are optimal. I wonder if there's any third option that would do the right thing in this case, without breaking other cases...
Answers:
username_1: I hesitate to suggest, but we could introduce some disambiguating syntax.
`Filter A on x,y where x..a..y...b`
could mean that when you filter collection A, any references to x and y are references to properties "inside" A (should be evaluated inside A's scope) while any other references are not.
Or, going the other way, we could introduce a `this` construct to qualify terms that should be evaluated in the current scope instead of the filtered collection's scope.
`Filter A where x... this.a... y... this.b`
username_0: @username_1 Disambiguating syntax exists already (and your suggested `this` is already implemented, I even used it above). Different disambiguating syntax also existed even before the rewriting. The issue is about avoiding unexpected results that lead people to have to search what they did wrong and what kind of disambiguating syntax they have to choose.
username_1: I know there's disambiguation like the filter operator, but I think that goes into relatively technical territory compared to just extending the syntax of the where operator.
username_0: @username_1 did you miss the `col1 where foo = this.bar`?
username_1: It's not in the codepen and I missed it in the note.
username_0: Ok, I think I have a fix, though I'm not sure if I should push it.
It's based in the same way that `this`/`$this` works. The current scope is passed as a parameter to the rewriting function and its properties become unscopables, so they pass through the dynamic scoping imposed by `scope()`. So, the expression is still scoped to the first operand, EXCEPT `this` and properties that are immediate children of the current scope.
- Could this break anything?
- What about descendant properties of the current scope?
username_1: Also, consider a scoping like
a multiple
---b
c multiple
---d
---a where b... d...
In this case, it seems pretty obvious that author wants to refer to d inside current c, but refer to b of each a as it is being filtered. The disambiguation here would seem to be clear because there is no b inside c and no d inside a. Does that work?
username_0: @username_1 How is this different than the use case explained in the original post and the testcase? It seems exactly the same. Your a = col1, c = col2, b = foo, d = bar.
username_1: Yes, same schema; I was proposing the requested "third option"
username_0: @username_1 https://github.com/mavoweb/mavo/issues/484#issuecomment-464915946
username_1: You're suggesting only immediate children of current scope be privileged. But what about children of ancestors of the current scope? I know that's vague--- the collection being filtered might itself be such an element,---but that was what came up in my use case that originally hit this problem.
username_0: Not sure we're on the same page, because the fix I'm suggesting does fix your use case that originally hit this problem (I just verified it).
username_1: Here's another possibly kooky heuristic. What if you privilege singleton values over collections? E.g., where a=b seems more likely to be wanting to compare a single a to a single b. So if one of your scopes yields a singleton eval while the other yields a collection, its likely the singleton that is wanted.
username_0: Like I said, the `a=b` needs to eval before it touches the `where`.
Status: Issue closed
username_0: I've just changed the fix for now to only privilege child properties.
The problem with privileging descendant properties is that if the `where` is in the root (as often happens) this is ALL properties!! Any ideas @username_1? :(
username_0: This fix also broke things like `name where age > 40`, not sure why, investigating. |
HewlettPackard/hpe3par_ansible_module | 368286324 | Title: Create schedule fails when an invalid range is passed in the "task_freq_custom" field
Question:
username_0: **Create a playbook with the hours range exceeding 24 hrs.**
storage_system_ip: "192.168.67.6"
storage_system_username: "3paradm"
storage_system_password: "<PASSWORD>"
state: schedule_create
schedule_name: "Ansible_schedule_01"
snapshot_name: "Ansible_volume_SS_snap_01"
base_volume_name: "Ansible_volume_SS_01"
read_only: true
expiration_time: 2
expiration_unit: "Hours"
**task_freq_custom: "0 8-72 * * *"**
**The create playbook displays as successful**
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_remote_copy_parent": null,
"base_volume_name": "Ansible_volume_SS_01",
"expiration_hours": 0,
"expiration_time": null,
"expiration_unit": "Hours",
"new_name": null,
"priority": null,
"read_only": true,
"retention_hours": 0,
"retention_time": null,
"retention_unit": "Hours",
"rm_exp_time": null,
"schedule_name": "Ansible_schedule_01",
"snapshot_name": "Ansible_volume_SS_snap_01",
"state": "schedule_create",
"storage_system_ip": "192.168.67.6",
"storage_system_password": "<PASSWORD>",
"storage_system_username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"task_freq": null,
"task_freq_custom": "0 8-72 * * *"
}
},
**"msg": "Created Schedule Ansible_schedule_01 successfully."**
**The 3par "showsched" output list no schedules**
**CSSOS-SSA06 cli% showsched
No scheduled tasks listed**
Answers:
username_1: Fixed in PR https://github.com/HewlettPackard/hpe3par_ansible_module/pull/8
username_0: fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\n/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nTraceback (most recent call last):\n File \"/tmp/ansible_dmP_jv/ansible_module_hpe3par_snapshot.py\", line 788, in <module>\n main()\n File \"/tmp/ansible_dmP_jv/ansible_module_hpe3par_snapshot.py\", line 771, in main\n schedule_name, base_volume_name, read_only, expiration_time, retention_time, expiration_unit, retention_unit, task_freq, task_freq_custom)\n File \"/tmp/ansible_dmP_jv/ansible_module_hpe3par_snapshot.py\", line 528, in create_schedule\n if (int(hour_task) > 23 or int(hour_task) < 0):\nValueError: invalid literal for int() with base 10: '8-72'\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
The above error is seen when a list (range) is specified as input in "task_freq_custom" field
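For reference, a minimal sketch of hour-field validation that tolerates single values, comma lists and ranges (illustration only, not the module's actual code; step syntax such as `*/2` is ignored here):
```python
def valid_cron_hours(field):
    """Rough check of a cron hour field: "*", single values, comma lists and ranges."""
    if field == "*":
        return True
    for part in field.split(","):
        bounds = part.split("-")
        if len(bounds) > 2:
            return False
        for bound in bounds:
            if not bound.isdigit() or not 0 <= int(bound) <= 23:
                return False
    return True


print(valid_cron_hours("8-72"))  # False -> "0 8-72 * * *" should be rejected
print(valid_cron_hours("8-17"))  # True
```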
username_1: I think you have taken older changes
username_1: Please take the latest code https://github.com/HewlettPackard/hpe3par_ansible_module/pull/8
Status: Issue closed
username_0: Closing this issue. Task_freq_custom field is removed |
flutter/flutter-intellij | 331324087 | Title: Widget property sometimes fails to show its dartdoc when hovering over it
Question:
username_0: ## Steps to Reproduce
Sometimes a widget property won't display its dartdoc when I hover over the property name in the details pane of the inspector in Android Studio.

## Version info
[✓] Flutter (Channel master, v0.5.4-pre.5, on Mac OS X 10.13.4 17E202, locale en-US)
• Flutter version 0.5.4-pre.5 at /Users/taodong/Code/flutter_repos/flutter
• Framework revision 3019ad976d (71 minutes ago), 2018-06-11 11:31:25 -0700
• Engine revision d33bbff470
• Dart version 2.0.0-dev.60.0.flutter-a5e41681e5
[✓] Android toolchain - develop for Android devices (Android SDK 27.0.0)
• Android SDK at /Users/taodong/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-27, build-tools 27.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 9.3.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 9.3.1, Build version 9E501
• ios-deploy 1.9.2
• CocoaPods version 1.5.2
[✓] Android Studio (version 3.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 25.0.1
• Dart plugin version 173.4700
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[!] VS Code (version 1.23.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected devices (1 available)
• iPhone 6s • 713F5DD5-D1A0-46ED-976D-3D1487DBCFC6 • ios • iOS 11.3 (simulator) |
duyue6002/Blog | 455593230 | Title: [Summary] Numbers in JS
Question:
username_0: # Numeric range
All numbers in JS are floating point, of type Number: 64 bits (8 bytes).
For exact integer arithmetic, the safe range is [-2^53, 2^53].
Bitwise operations only handle 32-bit integers.
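A quick check of the 2^53 boundary (Python floats are the same 64-bit IEEE-754 doubles as the JS Number type, so the behaviour matches):
```python
# Doubles have a 53-bit significand, so integers are only exact up to 2**53.
big = 2 ** 53
print(float(big) == float(big + 1))  # True: big + 1 rounds back down to big
print(float(big - 1) == float(big))  # False: still exact below the boundary
```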
Answers:
username_0: # Binary representation of negative numbers
Sign-magnitude: the binary form of a number's absolute value; both +0 and -0 exist.
Ones' complement: the sign-magnitude form with the bits inverted; both +0 and -0 exist.
Two's complement: ones' complement plus 1; there is no separate +0/-0.
A negative number's binary representation is the two's complement of its absolute value. Two's complement is what makes addition simple.
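To see a concrete pattern (Python is used here just to print the bits; JS bitwise operators work on the same 32-bit two's-complement representation):
```python
n = -5
pattern = n & 0xFFFFFFFF        # 32-bit two's-complement bit pattern of -5
print(format(pattern, "032b"))  # 11111111111111111111111111111011
print(format(abs(n), "032b"))   # 00000000000000000000000000000101 -> invert, add 1
```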
username_0: # Boundary-value checks for bitwise operations
Positive numbers: 1, 0x |
stevenroach7/Ceres | 223953118 | Title: Small black box appears over pause button in simulator at small scale
Question:
username_0: When simulating on an SE with a scale of 33% or less, a small black box is appearing over the pause button.
Answers:
username_1: I think I see what you mean, but I think that's just because the image is so small that the yellow pause sign is hard to fully make out. Don't think this is a problem that can be fixed
Status: Issue closed
|
egingric/2015-Racing-Game | 66615109 | Title: No collision on building!
Question:
username_0: 
On the second lap of your level, the house in the middle of the path split doesn't have collision and lets me navigate around the fallen house that's supposed to force me to go on the left path.
There are also no blocking/killing volumes around your level, so the part with the little hills right after the town lets me ramp off them and off the course.
FX-Examples/FX-SaaS-Example-Project-1 | 358570531 | Title: FX-Examples : ApiV1TestSuitesProjectIdIdGetPathParamSqlInjectionTimeboundMysqlId
Question:
username_0: Project : username_0
Job : Example_Project_1_Env
Env : Example_Project_1_Env
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://172.16.58.3/api/v1/test-suites/project-id/' AND sleep(7)=0; --
Request :
Response :
I/O error on GET request for "http://1172.16.17.32/api/v1/test-suites/project-id/'%20AND%20sleep(7)=0;%20--": Timeout waiting for connection from pool; nested exception is org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
Logs :
Assertion [@StatusCode != 404] passed, not expecting [404] and found [500]Assertion [@ResponseTime < 7000] failed, expected value [7000] but found [15003]
--- FX Bot --- |
numpy/numpy | 66245700 | Title: Increase maximum number of array dimensions?
Question:
username_0: At my lab, we're working with arrays with very many dimensions. We've run up against the hardcoded limit of 32 dimensions for `np.array`s. What is the rationale for this limit, and is it possible to increase it? Thanks!
Answers:
username_1: This has been discussed before, but is not completely trivial. I'd point you to the mailing list archives, but my browser currently crashes on http://dir.gmane.org/gmane.comp.python.numeric.general.
username_2: Mostly the reason we have a finite limit is laziness -- there are a number of places in the code that just statically allocate a MAX_DIM buffer instead of writing slightly more complicated code using malloc/free. I think we'd be happy to accept patches to remove the limit, and this could be done incrementally, one place at a time. It shouldn't be difficult, just a bit tedious.
Given the current situation where we do have a single static limit, I think we hesitate a bit to just bump up that limit, because once we try handling all the weird 0.1% long-tail use cases then we'll keep having to bump the limit up arbitrarily far, and that impacts all the users who don't need massively high dimensions.
username_0: Hello again (5 years later)!
We're running into this problem again. It seems that in the meantime it's been noticed by others.
@username_2, I could try to implement a dynamic solution as you described, but I don't have a lot of experience with C and I have no experience with NumPy internals. Looks like `NPY_MAXDIMS` appears in the source 234 times; the tedium isn't a problem, but it's possible that this would change behavior meaningfully somewhere. Is this still feasible? If it's still worth trying, it would be a huge help if you or another maintainer could describe the approach you had in mind in more detail.
@username_5, @mrocklin, @username_4, would this be useful for Dask beyond dask/dask#5595?
username_3: I think the biggest/toughest chunk for relaxing either `MAX_ARGS` or `MAX_NDIM` will be the `NPY_ITER` code. But as an experiment, it may be better to try with some other functions and see how it goes/what the impact is before diving into that directly (more as a trial balloon).
I do not know if the heap allocations would mean overhead, but I assume that can be made so low, that it just doesn't matter. Or should we consider again to simply bump up `NPY_MAXDIMS` slightly, because the use-case is different from one that adds many dimensions of size 1?
username_1: I suspect that the original 32 came from `2*32` for 32 bit systems, so 64 might be the proper number for `MAX_DIM` these days, although actual memory address buses were about 48 bits last time I looked.
username_4: Yes, the maximum number of dimensions impacts dask arrays (at least those backed by numpy arrays) outside of `tensordot`. For example:
<details>
<summary>dask.array.empty example:</summary>
```python
In [1]: import dask.array as da
In [2]: x = da.empty((1,) * 33)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
~/github/dask/dask/dask/array/utils.py in meta_from_array(x, ndim, dtype)
90 if ndim > x.ndim:
---> 91 meta = meta[(Ellipsis,) + tuple(None for _ in range(ndim - meta.ndim))]
92 meta = meta[tuple(slice(0, 0, None) for _ in range(meta.ndim))]
IndexError: invalid index to scalar variable.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-2-021c43ce84db> in <module>
----> 1 x = da.empty((1,) * 33)
~/github/dask/dask/dask/array/wrap.py in wrap_func_shape_as_first_arg(func, *args, **kwargs)
70
71 dsk = dict(zip(keys, vals))
---> 72 return Array(dsk, name, chunks, dtype=dtype)
73
74
~/github/dask/dask/dask/array/core.py in __new__(cls, dask, name, chunks, dtype, meta, shape)
1053 raise ValueError(CHUNKS_NONE_ERROR_MESSAGE)
1054
-> 1055 self._meta = meta_from_array(meta, ndim=self.ndim, dtype=dtype)
1056
1057 for plugin in config.get("array_plugins", ()):
~/github/dask/dask/dask/array/utils.py in meta_from_array(x, ndim, dtype)
96 meta = meta.reshape((0,) * ndim)
97 except Exception:
---> 98 meta = np.empty((0,) * ndim, dtype=dtype or x.dtype)
99
100 if np.isscalar(meta):
ValueError: maximum supported dimension for an ndarray is 32, found 33
```
</details>
That said, it doesn't seems to be a limitation lots of users are running into (https://github.com/dask/dask/issues/5595 is the only reported issue I'm aware of)
Just out of curiosity, @username_0 how many dimensions are you typically dealing with? Would the previous suggestion from @username_1 help?
username_0: @username_4, we likely wouldn't need more than 64 dimensions for now, so if increasing to 64 is an option, @username_1, that would be great.
In case it's helpful to motivate the change, I'll describe our use case:
We're representing the probabilities of state transitions over a single timestep in a discrete dynamical system. If the system has `N` elements with `n_i` states each, then there are `S = n_0 * n_1 * ... * n_(N - 1)` states of the system, and the joint distribution of states at `t = 0` and states at `t = 1` has `S * S` elements.
Our approach is to use separate dimensions to index the state of each element at `t = 0` and `t = 1`, so we need `2 * N` dimensions. This is useful because in our calculations we need to multiply different marginal distributions together; in many cases these distributions are over the states of distinct but overlapping sets of elements, so it isn't a straightforward outer product. But if each element has its own dimension, then we can take advantage of broadcasting semantics and just use the `*` operator to implement the product we need, which is efficient and conceptually simple. (If anyone has any suggestions for a more efficient way to do this, let me know!)
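As a toy sketch of what I mean, with made-up shapes (3 binary elements, one axis per element; each marginal keeps real axes for its own elements and size-1 axes for the rest):
```python
import numpy as np

p_01 = np.random.rand(2, 2, 1)  # marginal over elements {0, 1}
p_12 = np.random.rand(1, 2, 2)  # marginal over elements {1, 2}, overlapping on element 1

product = p_01 * p_12           # broadcasting lines up the shared element axis
print(product.shape)            # (2, 2, 2)
```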
Increasing the limit to 64 would allow us to represent systems of up to 32 elements instead of 16, which would be a qualitatively significant improvement for our research questions.
@username_2's point about hesitating to increase the limit for long-tail cases is well taken, though. But if the overhead is very low, then increasing to 64 would be a simple stopgap solution until we can make the allocations dynamically.
username_1: @username_0 That sounds like something that could be more flexibly handled by some custom code.
username_3: I would like to know what @username_5's reason was for opening the dask issue on the limited number of dimensions numpy/dask support, and whether that is related to memory size (something like 2**48), etc.
username_5: Hi, yes I have run up against this limit quite a few times.
As some technical context, basically when simulating quantum systems its natural to represent each system as a dimension (rather than manually 'vectorizing' them into a joint dimension). This is particularly true for tensor networks (basically graphs with an array at each node, with ``ndim`` given by the number of incident edges), where the number of dimensions of each array changes pretty dynamically throughout manipulations.
In a very common case each dimension is of size 2, and I guess this is a key point: ``2^32 * float32`` e.g. is still only ~16GB so even for straight numpy arrays well within what would be tractable in a fairly moderate HPC setting.
In principle ``dask`` might be a great, simple way to handle these sizes of array out-of-core/in a distributed fashion, but because 'chunked' dimensions are still dimensions (even if size 1), the ``ndim`` limit comes into play.
The problem is particularly compounded for ``dask.array.tensordot`` (*the* key function for tensor networks!), for which the current implementation first computes the outer product, then sums over the shared dimensions. Meaning that even if for ```z = tensordot(x, y, axes=....)``` each array ``x``, ``y`` and ``z`` all have ``ndim<32``, the intermediate array can break the limit.
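As a rough sketch of the dimension counting, with hypothetical sizes (two 20-dimensional operands sharing 8 legs):
```python
x_ndim, y_ndim, shared = 20, 20, 8

result_ndim = x_ndim + y_ndim - 2 * shared  # 24 -- the contracted output is fine
outer_ndim = x_ndim + y_ndim                # 40 -- the outer-product intermediate is not
print(result_ndim, outer_ndim)
```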
I should say, this isn't a blocker for me right now, and these borderline HPC cases are understandably I guess not ``numpy``'s focus. But it's certainly possibly to bump up against this particular limit quite easily in the quantum setting.
username_6: If I understand correctly, the shape of a ndarray as described would contain only zeros and ones, otherwise memory use would explode. This would be better served by a different data structure, and is not a sufficient reason to fundamentally change ndarray.
Any proposal to make the number of dimensions flexible would need to be carefully benchmarked and should come with a cache of vectors that could be reused rather than repeatedly calling malloc/free.
username_7: What would be the cost of just bumping MAX_DIMS up to 64?
username_5: If this was in reference to my usecase - the arrays are generally filled with complex numbers, and yes, the memory does explode! To be clear, my view is just that a moderate increase in MAX_DIM, to e.g. 64, would be potentially useful.
username_8: I've been running trade studies on how different models interact with each other. Each dimension in my results array is linked to a particular input parameter (whether that is a single value or a range of values). Since I am using multiprocessing pools to run the simulations in parallel, my problem with the 32 dimension limit arises when I return the data from the pool and reshape it to match the inputs (+ a couple dimensions for outputs).
So far my workaround has been to write different versions of the same functions to evaluate the effects of different combinations of inputs independently. I would also appreciate an update to this, even if just to 64 dimensions to start with. Thanks!
username_3: To be clear, the "64 to start with" is exactly the thing that makes this harder to just decide on for me at least... 64 is the absolute plausible limit at this time, I would personally prefer much less, e.g. 40 or 48 since both should be plenty in a `2**N` scenario.
username_9: Have you considered using xarray, which lets you name your dimensions, meaning you do not need to insert padding dimensions of length 1?
username_5: As [noted here](https://github.com/dgasmith/opt_einsum/issues/80), this does also currently limit the total number of indices involved in an ``numpy.einsum`` expression, which is a much easier threshold to breach:
```python
import numpy as np
import string
d = 11
# total dims involved = 3 * d
# but max number of array dims = 2 * d
x = np.random.uniform(size=[2] * 2 * d)
y = np.random.uniform(size=[2] * 2 * d)
chars = string.ascii_lowercase + string.ascii_uppercase
eq = f"{chars[0 * d:2 * d]},{chars[1 * d:3 * d]}"
print(eq)
# abcdefghijklmnopqrstuv,lmnopqrstuvwxyzABCDEFG
# ........... ...........
# contracted
z = np.einsum(eq, x, y)
# ValueError: too many subscripts in einsum
```
(n.b. this is just the simplest illustrative equation - I know one could use ``tensordot`` for this specific case).
The following ``if (ndim_iter >= NPY_MAXDIMS) {`` part of ``c_einsum`` is the relevant numpy code:
https://github.com/numpy/numpy/blob/4cba2d91e1546872d29af6b25ad35947f27e03ac/numpy/core/src/multiarray/einsum.c.src#L947-L964
(Possibly for this particular problem there are also other fixes other than increasing ``NPY_MAXDIMS``.)
I do fairly regularly come across this limitation in real contractions! Currently I need to switch to e.g. [``ctf``](https://github.com/cyclops-community/ctf) in these cases.
username_10: @username_5 Your code gives me no problem now and I didn't need to change `einsum.c.src`
I want to do a PR but first wanted to ask if there is still any blocking issue. |
google/site-kit-wp | 753762463 | Title: Reduce excessive amount of Analytics report requests for AdSense linked check
Question:
username_0: When Analytics and AdSense are active, some widgets use the Analytics Reporting API to query for AdSense-related metrics. Those requests result in an error if the AdSense account and the Analytics property are not linked.
This has been a long-standing problem, because there is unfortunately no endpoint in the Analytics Management API that exposes whether such a link exists or not. There is an [endpoint for Google Ads links](https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtReference/management/webPropertyAdWordsLinks), but none for AdSense links.
The Analytics frontend uses an internal `linkableproperties` endpoint for this information as seen here:
<img width="1669" alt="Screenshot 2020-11-30 at 11 29 45" src="https://user-images.githubusercontent.com/3531426/100654941-6e174480-32ff-11eb-8a51-65522822e6ff.png">
We cannot use it, so it looks like we're limited to relying on the approach we're already using. However, we should explore ways to issue fewer of these erroring requests: Generally we don't cache API error responses, which is why this request is fired basically on every pageload, resulting in an excessive amount of error requests, which could be avoided. We should consider caching certain errors on the client-side too.
---------------
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance criteria
* The `googlesitekit-api` layer's cache approach should be modified so that cache TTL is determined based on the cache entry and is stored with each entry rather than specified manually when reading an entry:
* `setItem` should receive a new `ttl` parameter that specifies the entry's time-to-live in seconds and defaults to `3600`. `ttl` should be included in the cache field alongside the existing `timestamp` and `value`.
* `getItem` should have its `cacheTimeToLive` parameter removed. Instead, the found item's `ttl` property value should be used to determine whether it's expired or not. If an item doesn't have a `ttl` property (needed for BC), it should be considered expired.
* The `cacheTTL` argument of the `siteKitRequest` function should be used in the `setItem` call instead of the `getItem` call, based on the above changes.
* The error `catch` clause of `siteKitRequest` should be enhanced as follows: If the error object has a `error.data.cacheTTL` field, the error should be stored the same way, with the `ttl` as defined in the error object.
* After a successful `getItem` call in `siteKitRequest` it should be checked whether the `value` is a `WP_Error`-like object. If so, it should be thrown instead of returned. The special logic that dispatches datastore actions for certain errors currently in the `catch` clause should be run for those errors as well. It shouldn't be tracked or logged as API error though because it is a cached result.
* On the PHP side, the `Analytics` exception handling function should be updated so that a "Restricted metrics error" (see `isRestrictedMetricsError` in JS) receives a `cacheTTL` of 10 minutes in its error data. This ensures this specific type of error will be cached for 10 minutes, so that Site Kit doesn't trigger those requests as excessively. At the same time, 10 minutes is low enough so that a user that actually links AdSense with Analytics doesn't have to wait for one hour to see that action being recognized by Site Kit.
## Implementation Brief
* <!-- One or more bullet points for how to technically implement the feature. -->
### Test Coverage
* <!-- One or more bullet points for how to implement automated tests to verify the feature works. -->
### Visual Regression Changes
* <!-- One or more bullet points describing how the feature will affect visual regression tests, if applicable. -->
## QA Brief
* <!-- One or more bullet points for how to test that the feature works as expected. -->
## Changelog entry
* <!-- One sentence summarizing the PR, to be used in the changelog. -->
Answers:
username_1: I think we can improve this part a bit. I don't think we should check the shape of the response to determine if it was previously caught. Instead, I think we could include an `error` or `isError` as a top-level property on the cached object (on the same level as `value`). In addition to this, I don't think we should throw it because that would require additional logic to avoid being reported again. For that, I think we simply want to extract a common error response handler function that is used with the return in both cases but doesn't include everything in the `catch`.
https://github.com/google/site-kit-wp/blob/160b5d1393cee4cce60574827564b5b02cd8b78e/assets/js/googlesitekit/api/index.js#L115-L121
Here `getItem` could return an additional `isError` variable and if `cacheHit && isError` it could return `handleError( value )`.
As for this chunk https://github.com/google/site-kit-wp/blob/160b5d1393cee4cce60574827564b5b02cd8b78e/assets/js/googlesitekit/api/index.js#L142-L153
I think this would be better to implement with an action that the core user store can listen on which also removes the coupling between the catch here and the data store.
```js
} catch ( error ) {
global.console.error( 'Google Site Kit API Error', error );
trackAPIError( { method, datapoint, type, identifier, error } );
doAction( 'googlesitekit.api.error', error, { method, datapoint, type, identifier } );
throw error;
}
```
The `doAction` could then be the body of the aforementioned `handleError` function which would be called from cache hits and caught errors.
The core user store could add the action on to it right after it's registered where it already has a reference to the global registry:
https://github.com/google/site-kit-wp/blob/160b5d1393cee4cce60574827564b5b02cd8b78e/assets/js/googlesitekit/datastore/user/index.js#L54-L55
Alternatively, the registered store returned by `registerStore` could be used instead:
```js
const registeredStore = Data.registerStore( STORE_NAME, store );
addAction( 'googlesitekit.api.error', 'googlesitekit.core/user.apiError', ( error ) => {
if ( error.code === ERROR_CODE_MISSING_REQUIRED_SCOPE ) {
registeredStore.dispatch( actions.setPermissionScopeError( error ) );
} else if ( error.data?.reconnectURL ) {
registeredStore.dispatch( actions.setAuthError( error ) );
}
} )
```
What do you think?
username_0: Regarding this and your following idea, I think that sounds good, but IMO goes beyond the scope of this. We can decouple the API layer from these datastore-specific parts in a separate follow-up issue.
username_1: I'll open a new issue for implementing this part 👍
Moving this forward to IB since the above is just a naming change which could easily be reverted if you feel strongly about the previous.
username_1: There is already some handling of this error in place in the Analytics class, see `\Google\Site_Kit\Modules\Analytics::exception_to_error`. We only need to return the error differently for that case, where now it simply updates a setting and returns the default error from the base method.
username_2: @username_1 IB has been updated. Can you check? Thanks!
username_1: IB ✅
username_3: ### QA ✅
Installed Analytics and Adsense followed the QA Brief paying attention to the console. I was unable to receive an error message. Refreshed multiple times, ensured each page was loading properly and nothing was broken.

Sending to approval
username_1: @username_3 – it looks like you have the Analytics data simulating a connected state which would not trigger the relevant error.
I think these two points in the QAB are inaccurate and should be skipped:

With a clean cache, you should see a "Restricted metric(s):" error in the console on the initial request (it may show more than once depending on the number of requests) but after that it should not show up again for another 10 min.
Can you give it another pass? cc: @username_4
username_4: @username_1 like Cole, I do not see any error messages in the console.
1. Cleared cache
2. Connected Adsense and AnalytIcs (I am using a site that has Analytics and Adsense connected previously)
3. Refreshed the dashboard and looked in the console but no error appearing.
Please note that I do have the tester plugin but the two options you mentioned were skipped.
cc. @username_3
username_1: This will prevent the error 😄 – you need to use AdSense and Analytics accounts which are _not linked_ to observe the error that this issue is about.
username_4: @username_1 ah buggar, noted. On it.
username_4: @username_1 okay, we are getting somewhere! 😄
I now see the error message:

I have refreshed the page a few times and waited for a couple of minutes, and it still appears. According to the QAB, it suggests it should not appear for around 10 minutes. Any thoughts?
username_1: @username_4 One more thing to note: I was wrong to say that the error/warning would no longer show in the console – it would. The difference is that the request that results in this error is no longer made (as the error response is cached now for a short time). So to verify, you'll need to observe the network requests. Once triggered initially, the cache should be primed so that refreshing the page should still result in the same error shown without an additional request causing it.
If it's too complicated, we can have an engineer look at this but I forgot this detail of the implementation that the console error would still be expected. The main thing here is that the error is cached and thus not retried on every request like it was before.
username_4: @username_1 thanks for the additional information.
I've had a look at network for the dashboard and noticed most are saying `no-cache` in the headers but it could be that I am looking at the wrong requests. I'd be happier if an engineer looked at this to be honest.

username_1: ## QA ✅
Error is cached now and no longer re-requested via the modern `googlesitekit-api` API once cached.
**Does not affect legacy data API, but this change will reduce the number of error requests made**
<details>
<summary>Screenshots</summary>
* Initial Error

* After page reload

* Cached error

* Request is made again after cached error is deleted

* Does not affect legacy data API (restricted error request still made here via batch request)


</details>
Status: Issue closed
|
dagjo90/Electronic-Pomodoro-System2 | 416703654 | Title: 📝 Code review
Question:
username_0: Yo,
Just one small thing:
- [ ] A missing development dependency: `parcel` - impossible to run your project without installing it and fixing things. **note:** an alternative solution would have been, in your `package.json`, to replace the script `"start": "parcel index.html"` with `"start": "npx parcel index.html"`.
⚠️ **NOTE:** Despite all my efforts, I couldn't get your project to run locally, because of broken dependencies.
The React code itself doesn't look bad to me; I'd like to see it running one of these days. We'll go over it together, and then I'll complete the review.
itchyny/calendar.vim | 66478282 | Title: taskwarrior/taskwiki support
Question:
username_0: Hi! I've been a taskwarrior.org user for several years. It's a "command-line todo-list manager" and juggles dated tasks nicely, but it has never had the benefit of a true calendar.
Recently, one of the tw dev-team wrote taskwiki (https://github.com/tbabej/taskwiki) which enhances vimwiki with taskwarrior tasks. I've filed a taskwiki issue (https://github.com/tbabej/taskwiki/issues/53) hoping to see taskwiki support your (brilliant) calendar, but the (also brilliant) taskwiki author (tbabej) would need quite a few more clues in order to make this happen.
This goal (taskwarrior dated tasks in calendar, undated tasks viewable in the Todo list) could be achieved in more that one way:
a) taskwiki integrates calendar.vim (tbabej needs clues)
b) calendar.vim supports taskwarrior directly (username_1 needs to lean a lot more about taskwarrior)
I'm not a programmer, just an enthusiastic user, but I believe this integration would be "life-changing" so I would be happy to act as a "go-between" if necessary.
Specific question will be added as comments to this issue, thanks a TON!
Answers:
username_0: 1) When I first ran calendar.vim, a prompt asked for the name of a calendar, where would I find that file?
1.a) is the calendar kept in a known format?
Status: Issue closed
username_1: I'm not interested in integration with taskwarrior because I('ve) never use it in my workflow. The downloaded items are kept in `~/.cache/calendar.vim/`. The app `calendar.vim` just saves the response of Google Calendar API, thus the format is just as you see in its reference https://developers.google.com/google-apps/calendar/v3/reference/events. I'm very sorry but I close this issue with the wontfix tag because I'll never implement at this moment. If dozens of people requested me to implement, I would register to taskwarrior and check for its API. |
hasadna/Open-Knesset | 40084635 | Title: Add function that returns relevant mks for a list of time ranges
Question:
username_0: In the mks app, for the Member model create a new MemberManager that inherits from BetterManager and has a function mks_during_range(ranges=None) where ranges is a list of datetime pairs (the pairs are a list too), returning a list of Members that were a member of the knesset during any of the ranges.
Use the Membership model for start and end dates of mk membership in parties.
Notes:
- if the ranges parameter is empty ([]) or was not passed (is None), retrieve mks that have a non-null current_party attribute
- add indexes to the Membership model if needed for fast querying, since this will be used often.
- cache results so that popular ranges (such as when using current_party != null) return quickly
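A rough sketch of what this could look like (the `memberships` related name and the `start_date`/`end_date`/`current_party` field names are assumptions about the existing models and may need adjusting):
```python
from django.db.models import Q

class MemberManager(BetterManager):  # BetterManager is the project's existing base manager
    def mks_during_range(self, ranges=None):
        if not ranges:
            return self.filter(current_party__isnull=False)
        query = Q()
        for start, end in ranges:
            # a membership overlaps the range if it started before the range ended
            # and either has no end date yet or ended after the range started
            query |= Q(memberships__start_date__lte=end) & (
                Q(memberships__end_date__isnull=True) | Q(memberships__end_date__gte=start)
            )
        return self.filter(query).distinct()
```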
<!---
@huboard:{"milestone_order":131.0,"order":263.0,"custom_state":""}
-->
Answers:
username_1: what's the use-case? where is this function going to be used? |
jdi-testing/jdi-light | 486266245 | Title: Add common BDD steps into documentation for Label element
Question:
username_0: See complete list of common BDD steps here: https://github.com/jdi-testing/jdi-light/blob/bdd/jdi-light-bdd-tests/src/test/resources/steps_list
1. Add steps that are absent and applicable for that element (for both - article and code example)
2. Make sure that link to the Tutorial looks like: More information in the <link>Tutorial</link>
3. Make sure that link to tests looks like: <link>Cucumber tests</link> for <ELEMENT_NAME>
4. Make sure that label for action steps examples (at the right) looks like: "<ELEMENT_NAME> action examples:"
5. Make sure that label for validation steps examples (at the right) looks like: "<ELEMENT_NAME> validation examples:"
6. Make sure that label for scenario examples (at the right) looks like: "Scenario example(s) for <ELEMENT_NAME>:"
Status: Issue closed
Answers:
username_0: Demo has been done. |
Adam-Henrie/LiveCourier-UT3 | 798662059 | Title: Coding language for Live Courier project
Question:
username_0: Discord based discussion of what type of coding language we would like to use for the app. A survey was administered via strawpoll.me.

https://www.strawpoll.me/42536072
Answers:
username_0: Results of the straw poll indicate that we want to use Java as our coding language.

Status: Issue closed
username_0: Issue closed Java chosen. |
AY1920S2-CS2113-T14-3/tp | 591740042 | Title: None
Question:
username_0: Should we have a save command? I thought that the task list was saved automatically every time that the user changed the list
Answers:
username_1: No, I have removed the save command from our UG already, don't think it's needed.
Status: Issue closed
|
intel/safestringlib | 295067385 | Title: Please consider library versioning with releases or stable branches
Question:
username_0: Hi,
I'm a member of a project, which uses your library. Our build scripts used to download the source code
[from github](https://github.com/intel/safestringlib/archive/master.zip)
for compilation.
This is problematic for two reasons. We want to verify the archive file's integrity using a checksum. However, the zip file changes with every commit, so the checksum changes too, so the checks fail.
Secondly, whenever a recent commit [causes compilation problems](https://github.com/intel/safestringlib/issues/15), this is a problem for us too, because we have to patch the library on our side.
Could you please consider either using github releases to provide library snapshots with versions (eg, safestringlib 1.0, 1.1), or moving development to another branch, which would be merged to the master branch after the changes are complete, tested and verified?
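For context, this is the kind of check our build scripts would run against a pinned, versioned archive (illustrative Python; the file name and checksum below are placeholders):
```python
import hashlib

ARCHIVE = "safestringlib-1.0.0.tar.gz"  # placeholder: a tagged release archive
EXPECTED_SHA256 = "0" * 64              # placeholder: the checksum pinned alongside the build scripts

with open(ARCHIVE, "rb") as fh:
    actual = hashlib.sha256(fh.read()).hexdigest()

if actual != EXPECTED_SHA256:
    raise SystemExit(f"checksum mismatch for {ARCHIVE}: {actual}")
```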
Status: Issue closed
Answers:
username_1: Added branch for v1.0.0 on last stable master commit.
Will add branches and releases from now on.
username_0: Thank you! |
pulibrary/ares | 930289864 | Title: Update Language on Instructor Book Request Form
Question:
username_0: On the "Reserve item: Book" form, a sub-heading should be added under the title saying:
"Items will be made available digitally whenever possible. Please add a note if you prefer this item to be available on physical reserve."
 |
prismicio/prismic-dom | 743830331 | Title: t.elected.boundaries is not a function
Question:
username_0: When trying to get the HTML from a rich text in prismic we get following error in React Native
t.elected.boundaries is not a function
`RichText.asHtml(item['paragraph-rich-text'], () => console.log('handle link'))`
When we try asText it succeeds. But we need HTML.
Anyone having the same problem?
Thanks!
Answers:
username_1: Hey @username_0, I'm really sorry you didn't get proper support back then. Usually, you can get support quickly using our [community forum](https://community.prismic.io/c/kits-and-dev-languages/react/15). I hope you figured out a way to solve your issue since then and therefore I'm closing it for now. Feel free to reopen if anything!
Status: Issue closed
|
vim/vim | 268793142 | Title: undefined left shift in get_string_tv()
Question:
username_0: Triggered in `67435d9`, compiled with clang 6.0.0-trunk and `-fsanitize=address,undefined`.
./vim -u NONE -X -Z -e -s -S test020 ':qa!'
```
eval.c:4876:16: runtime error: left shift of 234881024 by 4 places cannot be represented in type 'int'
#0 0x685a6a in get_string_tv /root/vim/src/eval.c:4876:16
#1 0x685a6a in eval7 /root/vim/src/eval.c:4224
#2 0x680777 in eval6 /root/vim/src/eval.c:3963:9
#3 0x67f220 in eval5 /root/vim/src/eval.c:3779:9
#4 0x67bbf5 in eval4 /root/vim/src/eval.c:3478:9
#5 0x67b25d in eval3 /root/vim/src/eval.c:3395:9
#6 0x6289ad in eval2 /root/vim/src/eval.c:3327:9
#7 0x6289ad in eval1 /root/vim/src/eval.c:3255
#8 0x626cf2 in eval0 /root/vim/src/eval.c:3215:11
#9 0x62603a in eval_to_bool /root/vim/src/eval.c:682:9
#10 0x857524 in ex_if /root/vim/src/ex_eval.c:904:11
#11 0x7d4058 in do_one_cmd /root/vim/src/ex_docmd.c:2908:2
#12 0x7bc866 in do_cmdline /root/vim/src/ex_docmd.c:1071:17
#13 0x7ae076 in do_source /root/vim/src/ex_cmds2.c:4355:5
#14 0x7ab31c in cmd_source /root/vim/src/ex_cmds2.c:3968:14
#15 0x7ab31c in ex_source /root/vim/src/ex_cmds2.c:3943
#16 0x7d4058 in do_one_cmd /root/vim/src/ex_docmd.c:2908:2
#17 0x7bc866 in do_cmdline /root/vim/src/ex_docmd.c:1071:17
#18 0x13f138f in exe_commands /root/vim/src/main.c:2954:2
#19 0x13f138f in vim_main2 /root/vim/src/main.c:799
#20 0x13e5d3d in main /root/vim/src/main.c:415:12
#21 0x7f13b14153f0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x203f0)
#22 0x41bba9 in _start (/root/vim/src/vim+0x41bba9)
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior eval.c:4876:16
```
[test020.gz](https://github.com/vim/vim/files/1418750/test020.gz) |
iceoss/confluence-pageawareness-macro | 410788649 | Title: How to create a list of all pages that should be reviewed ?
Answers:
username_1: This macro stores the approval status as properties with the page so there is no simple way to generate a list of all pages which require approval.
One way I've accomplished this is to use a "Page Properties" macro with the status of the page as one of the rows in the page properties table, and a label on the page.
On another page you can then create a "Page Properties Report" macro which lists all the pages with that specific label and will display the approval status in the table since it's a part of the page properties.
Example:
<img width="580" alt="screen shot 2019-02-15 at 9 37 13 am" src="https://user-images.githubusercontent.com/533721/52870623-7fc93100-3105-11e9-9eee-979ed607fe3d.png">
<img width="542" alt="screen shot 2019-02-15 at 9 37 25 am" src="https://user-images.githubusercontent.com/533721/52870632-83f54e80-3105-11e9-9b5a-440af69c5665.png">
<img width="437" alt="screen shot 2019-02-15 at 9 37 56 am" src="https://user-images.githubusercontent.com/533721/52870639-8657a880-3105-11e9-882a-e3e81e6fe5db.png">
<img width="461" alt="screen shot 2019-02-15 at 9 38 03 am" src="https://user-images.githubusercontent.com/533721/52870647-88216c00-3105-11e9-8120-fc5e5a9fe2eb.png"> |
DDDEastMidlandsLimited/dddem-web | 823939884 | Title: Bug: Client-side URL redirect
Question:
username_0: ### Details can be seen here:
https://github.com/DDDEastMidlandsLimited/dddem-web/security/code-scanning/1?query=ref%3Arefs%2Fheads%2Fmain
Status: Issue closed
Answers:
username_0: Ah looks like its been fixed by dependabot:
 |
squti/Android-Wave-Recorder | 749506518 | Title: Add `changeFile` feature
Question:
username_0: If I have to do two recordings, I'm forced to created a new instance of WaveRecorder since the filePath is private in WaveRecorder():
`
class WaveRecorder(private var filePath: String)
`
@username_1 I'm thinking the `changeFile(newFilePath: String)` function should change the filePath when the `isRecording = false` and throw an IllegalStateException if otherwise.
Answers:
username_1: In what situations do you need change the file path without creating a new instance? Because it is safer to create a new instance for a new file.
username_0: Mainly in a recording app, a user may finish recording and then desire to record another audio file.
If all the recording and renaming is done within the same fragment, there is no need to create another instance when one instance can be recycled.
An example would be MediaPlayer's API.
```
mediaPlayer.setDataSource(filePath)
mediaPlayer.prepare()
mediaPlayer.start()
```
username_1: I will add this in the next version. Thanks for your suggestion.
username_0: Could I try working on this issue? Then, if the code is clean enough, you can merge it.
username_1: Sure. I've assigned it to you.
username_1: Feature has been added to the project. Thanks for your contribution.
Status: Issue closed
username_2: Hi @username_1 @username_0, can we add the file path directly as a parameter of the `startRecording` function? I would like this in order to fix the scoped storage issue.
I have designed a function like below. Check it.

username_0: Sounds like a great idea. Make the necessary changes, then open a pull request for review. Scoped storage has been a tough topic I'm yet to grasp, and I would like to read some scoped storage code.
briandfoy/ghojo | 185291535 | Title: Get a list of gitignore templates
Question:
username_0: The part of the API that creates a repo can choose a gitignore template from https://github.com/github/gitignore. I want to get a list of those templates to validate input. That is, when Ghojo runs, it should get the current state of that list.
https://developer.github.com/v3/repos/#create
Answers:
username_0: O hai, look at this: https://developer.github.com/v3/gitignore/#listing-available-templates |
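That endpoint returns a plain JSON array of template names, so validating input against it is straightforward; for example (Python here purely as an illustration, since Ghojo itself is Perl):
```python
import json
from urllib.request import Request, urlopen

req = Request(
    "https://api.github.com/gitignore/templates",
    headers={"Accept": "application/vnd.github.v3+json"},
)
with urlopen(req) as resp:
    templates = json.load(resp)  # e.g. ["Ada", "Android", ..., "Python", ...]

print("Python" in templates)     # validate a user-supplied template name
```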
PierreRainero/PictureMetadataEditor | 459960595 | Title: CI : Add code review workflow and quality
Question:
username_0: Add a code review tool to the CI in order to improve project quality/maintainability/visibility (choose the word you prefer).
Blocked by : #13
Possible tools (more than one can be used):
- [Codecov](https://codecov.io/) (code coverage)
- [Coveralls](https://coveralls.io/) (code coverage/quality)
- [ReadTheDocs](https://readthedocs.org/) (doc versionning)
- [CodeClimate](https://codeclimate.com/) (several metrics)
tidb-challenge-program/bug-hunting-issue | 602494076 | Title: P2-[4.0-bug-hunting]-[AutoRandom Key]-MySQL compatible insert_id seeding
Question:
username_0: ## Bug Report
### 1. What did you do?
MySQL supports the ability to seed the insert_id which is used by `auto_increment`. This feature was historically used by statement-based replication to guarantee deterministic replay. Both TiDB's `auto_increment` and `auto_random` lack this feature.
### 2. What did you expect to see?
```
DROP TABLE IF EXISTS t1;
CREATE TABLE t1 (
id int not null primary key auto_increment,
pad varbinary(10) not null
);
SET INSERT_id=99;
INSERT INTO t1 (pad) VALUES (RANDOM_BYTES(10));
```
MySQL:
```
mysql8019> SELECT * FROM t1;
+----+------------------------+
| id | pad |
+----+------------------------+
| 99 | 0x307BE51236AEC7296CBC |
+----+------------------------+
1 row in set (0.00 sec)
```
### 3. What did you see instead?
TiDB:
```
tidb> SELECT * FROM t1;
+----+------------------------+
| id | pad |
+----+------------------------+
| 1 | 0xD1B34F5C2973FBCAAE55 |
+----+------------------------+
1 row in set (0.00 sec)
```
And the same example with auto_random instead:
```
DROP TABLE IF EXISTS t1;
CREATE TABLE t1 (
id int not null primary key auto_random,
pad varbinary(10) not null
);
SET INSERT_id=99;
INSERT INTO t1 (pad) VALUES (RANDOM_BYTES(10));
...
tidb> SELECT * FROM t1;
+------------+------------------------+
| id | pad |
+------------+------------------------+
| 1677721601 | 0x586D395094675465AACC |
+------------+------------------------+
1 row in set (0.00 sec)
```
### 4. What version of TiDB are you using? (`tidb-server -V` or run `select tidb_version();` on TiDB)
```
mysql> SELECT tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v4.0.0-beta.2-290-ga0c740784
Git Commit Hash: a0c7407846fbc84f939afbc091f2db54f48c1bfa
Git Branch: master
UTC Build Time: 2020-04-17 04:04:45
GoVersion: go1.13
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
1 row in set (0.00 sec)
```
Answers:
username_1: /bug P2 |
davidmoten/subethasmtp | 365187348 | Title: TLSv1 Record Layer: Handshake Protocol: Multiple Handshake Messages
Question:
username_0: This happens when I call send(mail):
code:
...
javaMailProperties.put("mail.smtp.ssl.enable", "true");
javaMailProperties.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
....
SimpleMailMessage mail = new SimpleMailMessage();
imail.setJavaMailProperties(javaMailProperties);
....
imail.send(mail);
**in the Wireshark:**
1 Client Hello : TLSv1 Record Layer: Handshake Protocol: Client Hello
2 **Server Hello** : TLSv1 Record Layer: Handshake Protocol: Multiple Handshake Messages
Content Type: Handshake (22)
Version: TLS 1.0 (0x0301)
Length: 1393
Handshake Protocol: Server Hello
Handshake Protocol: Certificate
Handshake Protocol: Server Hello Done
3 TLSv1 Record Layer: Alert (Level: Fatal, Description: Certificate Unknown)
But I only want
Server Hello : TLSv1 Record Layer: Handshake Protocol: Server Hello
not
**Multiple Handshake Messages**
How can I do that?
java version "1.8.0_121"
Answers:
username_1: Hi, this issue slipped my attention, sorry. I'm assuming that "multiple handshake messages" is a good thing generally because multiple parts (server hello and certificate) are sent in the same packet. Why do you want what you are asking for? |
JimmyLv/reading | 353972507 | Title: GraphQL Server Tutorial with Apollo Server and Express - RWieruch
Question:
username_0: ## GraphQL Server Tutorial with Apollo Server and Express - RWieruch<br>
A complete Apollo Server with Express and GraphQL Tutorial.<br>
<br>
August 25, 2018 at 09:49AM<br>
via Instapaper https://www.robinwieruch.de/graphql-apollo-server-tutorial/<issue_closed>
Status: Issue closed |
redwoodjs/redwood | 582025032 | Title: webpack cell loader not detecting the Cell files on Windows
Question:
username_0: I was trying the Redwood blog tutorial (https://redwoodjs.com/tutorial/welcome-to-redwood) on Windows 10. I got as far as the Layouts topic, but I could not proceed with the Getting Dynamic topic (https://redwoodjs.com/tutorial/getting-dynamic). I created the posts using the following command:
`yarn rw g scaffold post`
and then I opened the [Posts Page](http://localhost:8910/posts) in the Chrome browser. I got the 'something went wrong' page.
Warnings from react on the console:
```
react.development.js:315 Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
Check the render method of `PostsPage`.
in PostsPage (created by PageLoader)
in PageLoader (created by RouterImpl)
in RouterImpl (created by LocationProvider)
in LocationProvider (created by Context.Consumer)
in Location (created by Router)
in Router (created by Routes)
in Routes
in ApolloProvider (created by GraphQLProvider)
in GraphQLProvider (created by RedwoodProvider)
in RedwoodProvider
in FatalErrorBoundary
```
After debugging, I found that the webpack cell loader was not detecting the file /web/components/PostsCell/PostsCell.js.
Changed the test regex for Cells files under @redwoodjs\core\config\webpack.common.js:
```
{
test: /web\\\src\\\components\\\\.+Cell.js$/,
use: {
loader: path.resolve(
__dirname,
'..',
'dist',
'loaders',
'cell-loader'
),
},
}
```
With this change, the Posts page is displayed in the browser.
Please suggest an appropriate solution to this problem.
Answers:
username_1: Experiencing the same issue.
username_2: Thanks for this. I'll get a fix and release out in the next few hours.
Status: Issue closed
username_2: I'll release a new version of create-redwood-app, in the meantime you can bump @redwoodjs/core to `0.2.3` |
mrdoob/three.js | 84666753 | Title: Raycaster not working when PointCloud scaling is negative.
Question:
username_0: When a PointCloud object has a negative value for any of the scale axes, then whatever threshold value is set, no points get picked, because the localThreshold value becomes negative and the following test is always false:
if ( rayPointDistance < localThreshold ) {
I can make things work by making localThreshold an absolute value:
var localThreshold = Math.abs(threshold / ( ( this.scale.x + this.scale.y + this.scale.z ) / 3 ));
Answers:
username_1: @username_2 what do you think?
username_2: three.js does not support reflections in the object matrix -- it only supports pure rotations and positive scale factors.
username_0: Since rayCasting deals with distance and not coordinates, I thought it makes sense to add the Math.abs since it doesn't make sense for a distance to be negative.
Apart from raycasting, negative scale factors (on PointCloud) work pretty well for me. I'm using the reflection to convert data points from left hand <-> right hand depending on the asset currently loaded.
Thanks for the response and good job with three.js!
username_2: Your `abs` is in the wrong place, anyway.
var localThreshold = threshold / ( ( Math.abs( this.scale.x ) + Math.abs( this.scale.y ) + Math.abs( this.scale.z ) ) / 3 );
But given that the formula is somewhat of a hack, and three.js does not support negative scale factors in general, I am not sure we should do it.
username_3: What is a negative scale in physical space? What is a negative distance between two points in space (i mean length of separation vector)?
username_2: (1) a reflection in object space, (2) a signed distance -- not applicable in this case.
Status: Issue closed
|
pingcap/tidb | 334775043 | Title: rolling update failed while checking current PD leader
Question:
username_0: It seems PD3 can not transfer to another PD server
Answers:
username_1: @username_0 Please provide more log information for `PD3`.
username_0: @username_1
Thanks.
It's been resolved.
The main problem was that every leader election picked pd3, probably because pd1 and pd2 had already been upgraded to 2.0.4 while pd3 was still on 1.0.6.
We then worked around it as follows:
1. Manually stop pd3. At that point the leader can switch to pd1.
2. Edit inventory.ini and put PD3 first in the PD list. (Otherwise, running the rolling update again would still start upgrading from pd1 and pd2, which would switch the leader back to pd3.)
3. Run the upgrade.
Status: Issue closed
username_2: Hello. A question, please: if I configure only one PD, can a rolling upgrade still be done?
My setup: one PD and one TiDB, both on the same machine, and three TiKV instances, one on each of three servers.
172.18.5.9 PD and TiDB
172.18.5.10 TiKV1
172.18.5.11 TiKV2
172.18.5.12 TiKV3
Using tidb-ansible to rolling-upgrade TiDB v2.1.0-rc.3 to TiDB v3.0.0, I ran into the following problem:
[tidb@izwz9gckpmdcwq2glh1kitz tidb-ansible]$ ansible-playbook excessive_rolling_update.yml
PLAY [check config locally] ***************************************************************
TASK [check_config_static : Ensure only one monitoring host exists] ***********************
TASK [check_config_static : Ensure monitored_servers exists] ******************************
TASK [check_config_static : Ensure TiDB host exists] **************************************
TASK [check_config_static : Ensure PD host exists] ****************************************
TASK [check_config_static : Ensure TiKV host exists] **************************************
TASK [check_config_static : Check ansible_user variable] **********************************
TASK [check_config_static : Ensure timezone variable is set] ******************************
TASK [check_config_static : Close old SSH control master processes] ***********************
ok: [localhost]
PLAY [check system environment] ***********************************************************
TASK [check_system_dynamic : Disk space check - Fail task when disk is full] **************
ok: [172.18.5.9]
TASK [check_system_dynamic : get facts] ***************************************************
ok: [172.18.5.9]
TASK [check_system_dynamic : Preflight check - Get hostnames of all nodes in cluster] *****
ok: [172.18.5.9]
TASK [check_system_dynamic : Preflight check - Does every node in cluster have different hostname] ***
TASK [check_system_dynamic : Preflight check - Get NTP service status] ********************
ok: [172.18.5.9]
TASK [check_system_dynamic : Preflight check - NTP service] *******************************
TASK [check_system_dynamic : Preflight check - Get umask] *********************************
ok: [172.18.5.9]
TASK [check_system_dynamic : Preflight check - Does the system have a standard umask] *****
TASK [check_system_dynamic : Preflight check - Get maximum number of open file descriptors limit] ***
ok: [172.18.5.9]
TASK [check_system_dynamic : Preflight check - ulimit -n] *********************************
TASK [check_system_dynamic : Preflight check - Check swap] ********************************
PLAY [gather all facts, and check dest] ***************************************************
TASK [check_config_dynamic : Set enable_binlog variable] **********************************
TASK [check_config_dynamic : Set deploy_dir if not set] ***********************************
TASK [check_config_dynamic : environment check (deploy dir)] ******************************
[Truncated]
fatal: [172.18.5.9]: FAILED! => {"changed": false, "elapsed": 300, "msg": "the PD port 2379 is not down"}
#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR
NO MORE HOSTS LEFT ************************************************************************
to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/excessive_rolling_update.retry
PLAY RECAP ********************************************************************************
172.18.5.9 : ok=22 changed=2 unreachable=0 failed=1 ##ERROR#ERROR#ERROR#ERROR#ERROR
172.18.5.10 : ok=3 changed=0 unreachable=0 failed=0
172.18.5.11 : ok=3 changed=0 unreachable=0 failed=0
172.18.5.12 : ok=3 changed=0 unreachable=0 failed=0
localhost : ok=1 changed=0 unreachable=0 failed=0
#ERROR#ERROR#ERROR#ERROR#ERROR#ERROR
### ERROR MESSAGE SUMMARY *********************************************************************
[172.18.5.9]: Ansible FAILED! => playbook: excessive_rolling_update.yml; TASK: wait until the PD port is down; message: {"changed": false, "elapsed": 300, "msg": "the PD port 2379 is not down"}
Ask for help:
Contact us: <EMAIL>
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
username_3: Related issue: https://github.com/pingcap/tidb/issues/11240 |
dbekaert/RAiDER | 819007573 | Title: NCUM model filename issue
Question:
username_0: See attached screenshot for the reported error by @kam3545

Answers:
username_0: @username_1 Could you look into this one while we add unit test for all models in scenario 1?
username_1: The issue should be fixed by this PR #281.
username_0: Thanks @username_1.
@kam3545 can you give it a try?
Status: Issue closed
|
jeremylong/DependencyCheck | 409841898 | Title: scanSet of subproject not processed by Gradle plugin dependencyCheckAggregate
Question:
username_0: The logfile of ```dependencyCheckAggregate``` on the root project + a minimal setup in the build.gradle file of the root project and subproject can be found [here](https://gist.github.com/username_0/c53c622f32fdf46e889d9902cb6210f1)
I have a project that contains subprojects. The root project has no dependencies of its own, the subprojects have both gradle dependencies from mavencentral and local jar-dependencies. I specified the folder containing the jar-dependencies using the ```scanset``` tag in ```dependencyCheck```.
When I run ```dependencyCheckAnalyze``` on the subproject, I get a report that contains the scanned jar-dependencies. The same is true when I run ```dependencyCheckAggregate``` on the subproject. I did notice that this last task also scans the dependencies of the other subprojects, which I find strange but doesn't really impede me.
However, when I run ```dependencyCheckAggregate``` on the root project, the jar-dependencies are not included in the report. They are included if I include the ```scanset``` tag in the build.gradle file of the root project.
Answers:
username_1: Your minimal setup is missing the settings.gradle:
settings.gradle
```groovy
rootProject.name = 'aggregate-test'
include 'child'
```
Also, the root project failed to include the plugin:
build.groovy
```groovy
plugins {
id "org.owasp.dependencycheck" version "4.0.2"
}
allprojects {
repositories {
mavenLocal()
mavenCentral()
}
apply plugin: 'java'
apply plugin: 'org.owasp.dependencycheck'
}
```
Lastly, the child gradle.build you provided did not actually define any dependencies? So I added log4j:
child/build.gradle
```groovy
dependencies {
implementation group: 'log4j', name: 'log4j', version: '1.2.17'
}
```
Given the above setup when I run `gradle dependencyCheckAggregate` I get a report that includes log4j.
Status: Issue closed
|
jlippold/tweakCompatible | 407145026 | Title: `SwipeSelection` working on iOS 11.4.1
Question:
username_0: ```
{
"packageId": "com.iky1e.swipeselection",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.iky1e.swipeselection",
"deviceId": "iPhone9,3",
"url": "http://cydia.saurik.com/package/com.iky1e.swipeselection/",
"iOSVersion": "11.4.1",
"packageVersionIndexed": true,
"packageName": "SwipeSelection",
"category": "Tweaks",
"repository": "BigBoss",
"name": "SwipeSelection",
"installed": "1.5.2-1",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.",
"id": "com.iky1e.swipeselection",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "keyboard swipes move cursor & select text",
"latest": "1.5.2-1",
"author": "iKy1e (<NAME>)",
"packageStatus": "Working"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
robmarkcole/HASS-Deepstack-face | 898263217 | Title: Teach face service ( home assistance )
Question:
username_0: hi
I am trying to add faces but I am getting this error:
Failed to call service image_processing.deepstack_teach_face. extra keys not allowed @ data['sequence'][0]['file_path']. Got '/config/www/faces/moh_1.jpg' extra keys not allowed @ data['sequence'][0]['name']. Got 'mohammed'
the service code :
service: image_processing.deepstack_teach_face
name: mohammed
file_path: /config/www/faces/moh_1.jpg
There are no spaces and the YAML is typed, not copied & pasted.
Adding { or " or ' gives me the same error.
Answers:
username_1: It has to look like this:
```yaml
service: image_processing.deepstack_teach_face
data:
  name: name
  file_path: your_path
```
username_0: Hi
thanks for your reply and your time
I tried it but I am getting another error related to the directory (the path is correct, I do not know where the mistake is) + extra keys not allowed:
Failed to call service image_processing.deepstack_teach_face. expected dict for dictionary value @ data['sequence'][0]['data']. Got None extra keys not allowed @ data['sequence'][0]['file_path']. Got '/config/faces/moh_1.jpg' extra keys not allowed @ data['sequence'][0]['name']. Got 'mohammed'
the code :
service: image_processing.deepstack_teach_face
data:
name: mohammed
file_path: /config/faces/moh_1.jpg
i tried many formats and all returned the same error
service: image_processing.deepstack_teach_face
data:
"name": "mohammed"
"file_path": "/config/faces/moh_1.jpg"
and also :
service: image_processing.deepstack_teach_face
data:
name: 'mohammed'
file_path: '/config/faces/moh_1.jpg'
and :
service: image_processing.deepstack_teach_face
data:
name: "mohammed"
file_path: "/config/faces/moh_1.jpg"
username_2: same here :(
username_3: I'm not the only one then, same here.
username_4: Yeah, not working here either, but everyone posting YAML in non-code blocks is not very helpful, as the formatting can get removed.
To post blocks of code such as YAML, tap the backtick key ` three times followed by yaml, like this:

(the backtick key is before the number one on the top row), then type your code, then another three taps of the backtick key.
It will put the code in proper formatting and look like this:

with an end result of :
```yaml
service: imaginary_service.run_me
data:
name: 'name here'
file_path: '/path/goes/here/image.jpg'
```
Status: Issue closed
|
arangodb/arangodb | 203892486 | Title: Long pauses when loading a large dataset
Question:
username_0: ## my environment running ArangoDB
I'm using the latest ArangoDB of the respective release series:
- [ ] 2.8
- [ ] 3.0
- [x] 3.1.9
- [ ] self-compiled devel branch
On this operating system:
- [ ] DCOS on
- [ ] AWS
- [ ] Azure
- [ ] own infrastructure
- [ ] Linux
- [ ] Debian .deb
- [ ] Ubuntu .deb
- [ ] SUSE .rpm
- [ ] RedHat .rpm
- [ ] Fedora .rpm
- [ ] Gentoo
- [ ] docker - official docker library
- [ ] other:
- [ ] Windows, version:
  - [x] MacOS, version: 10.11.6
### Large imports
Hi
I'm playing with arangodb, and as part of a test I'm loading a large dataset in - approx 100GB of JSON, around 25m documents. I'm using arangoimp, which has been chugging along fine for several hours.
However, over the last couple of hours, I've seen the load pause for increasing periods of time - see the attachments of screenshots from the dashboard.
<img width="849" alt="screen shot 2017-01-29 at 19 47 55" src="https://cloud.githubusercontent.com/assets/24726/22407375/3b02983a-e65d-11e6-84bd-ea990ea3aa38.png">
<img width="1281" alt="screen shot 2017-01-29 at 19 48 02" src="https://cloud.githubusercontent.com/assets/24726/22407376/3b098546-e65d-11e6-8f58-77f7cad438ec.png">
During these quiet moments, there is little CPU activity from arangod, and disk I/O drops to zero. I took a couple of samples of the arangod process at this time, in case that's useful. Sample 1:
```
Sampling process 10974 for 3 seconds with 1 millisecond of run time between samples
Sampling completed, processing symbols...
Analysis of sampling arangod (pid 10974) every 1 millisecond
Process: arangod [10974]
Path: /usr/local/Cellar/arangodb/3.1.9/sbin/arangod
Load Address: 0x1039dd000
Identifier: arangod
Version: 0
Code Type: X86-64
Parent Process: bash [88921]
Date/Time: 2017-01-29 19:53:48.955 +0000
Launch Time: 2017-01-29 13:49:14.102 +0000
OS Version: Mac OS X 10.11.6 (15G1217)
Report Version: 7
Analysis Tool: /usr/bin/sample
----
Call graph:
2630 Thread_3974767 DispatchQueue_1: com.apple.main-thread (serial)
+ 2630 ??? (in arangod) load address 0x1039dd000 + 0xfffffffefc62a374 [0x7374]
+ 2630 arangodb::application_features::ApplicationServer::wait() (in arangod) + 210 [0x103e1dc9a]
+ 2629 usleep (in libsystem_c.dylib) + 54 [0x7fff87aafc02]
[Truncated]
0x7fff94df7000 - 0x7fff94e08ff7 libz.1.dylib (61.20.1) <B3EBB42F-48E3-3287-9F0D-308E04D407AC> /usr/lib/libz.1.dylib
0x7fff98176000 - 0x7fff9817effb libsystem_dnssd.dylib (625.60.4) <80189998-32B0-316C-B5C5-53857486713D> /usr/lib/system/libsystem_dnssd.dylib
0x7fff982eb000 - 0x7fff982f3fff libcopyfile.dylib (127) <A48637BC-F3F2-34F2-BB68-4C65FD012832> /usr/lib/system/libcopyfile.dylib
0x7fff987c8000 - 0x7fff987c9ffb libSystem.B.dylib (1226.10.1) <012548CD-614D-3AF0-B3B1-676F427D2CD6> /usr/lib/libSystem.B.dylib
0x7fff99f67000 - 0x7fff99f7eff7 libsystem_asl.dylib (323.50.1) <41F8E11F-1BD0-3F1D-BA3A-AA1577ED98A9> /usr/lib/system/libsystem_asl.dylib
0x7fff9a34c000 - 0x7fff9a363ff7 libsystem_coretls.dylib (83.40.5) <C90DAE38-4082-381C-A185-2A6A8B677628> /usr/lib/system/libsystem_coretls.dylib
0x7fff9a4bd000 - 0x7fff9a4bdff7 libunc.dylib (29) <DDB1E947-C775-33B8-B461-63E5EB698F0E> /usr/lib/system/libunc.dylib
0x7fff9a613000 - 0x7fff9a68afeb libcorecrypto.dylib (335.50.1) <B5C05FD7-A540-345A-87BF-8E41848A3C17> /usr/lib/system/libcorecrypto.dylib
0x7fff9ac86000 - 0x7fff9ac8efef libsystem_platform.dylib (74.40.2) <29A905EF-6777-3C33-82B0-6C3A88C4BA15> /usr/lib/system/libsystem_platform.dylib
0x7fff9afcd000 - 0x7fff9b338657 libobjc.A.dylib (680) <D55D5807-1FBE-32A5-9105-44D7AFE68C27> /usr/lib/libobjc.A.dylib
Sample analysis of process 10974 written to file /dev/stdout
```
The documents I'm loading have a somewhat irregular structure, and are quite 'deep'. Aside from the default index on `_id`, I have a single persistent unique non-sparse hash index on a string attribute that every document has.
I've noticed that the arangod process never seems to consume more than 50% of my RAM (according to its own metrics and Activity Monitor) - not sure if that's intentional, but it doesn't seem unreasonable.
I'll let it continue to chug - let me know if I can provide any more information.
(Obviously I wouldn't expect stellar performance using a 100GB dataset on a machine with 16GB of RAM, but this behaviour of stopping totally for a few minutes at a time, then processing a few thousand more records, looks weird to me; I'd expect the machine to be thrashing, not knowing anything about arangodb's internals.)
Answers:
username_0: Just for info - the import eventually completed successfully.
username_1: Hi,
We're pleased to announce ArangoDB 3.2 with the RocksDB storage engine option, which could improve this situation.
Please have a look.
Status: Issue closed
|
Ayugi/uproom | 97680645 | Title: Launching on Heroku
Question:
username_0: It's not quite working yet. Log:
```
Running `java -jar target/dependency/webapp-runner.jar server/target/server-1.0.0-SNAPSHOT.war` attached to terminal... up, run.4463
Adding Context for server/target/server-1.0.0-SNAPSHOT.war
Jul 28, 2015 11:14:56 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-8080"]
Jul 28, 2015 11:14:56 AM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
INFO: Using a shared selector for servlet write/read
Jul 28, 2015 11:14:56 AM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Tomcat
Jul 28, 2015 11:14:56 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/8.0.24
Jul 28, 2015 11:14:57 AM org.apache.catalina.startup.ContextConfig getDefaultWebXmlFragment
INFO: No global web.xml found
Jul 28, 2015 11:17:03 AM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Jul 28, 2015 11:17:03 AM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: ../logs/server.log (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:289)
at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:167)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:163)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:256)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:132)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:96)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:654)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:612)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:509)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:415)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:441)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:470)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:122)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:844)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:282)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:106)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4729)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5167)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[Truncated]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Jul 28, 2015 11:17:14 AM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesJdbc
WARNING: The web application [ROOT] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
Jul 28, 2015 11:17:14 AM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesJdbc
WARNING: The web application [ROOT] registered the JDBC driver [com.mysql.fabric.jdbc.FabricMySQLDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
Jul 28, 2015 11:17:14 AM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesThreads
WARNING: The web application [ROOT] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:41)
Jul 28, 2015 11:17:14 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-nio-8080"]
Jul 28, 2015 11:17:14 AM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["http-nio-8080"]
Error waiting for process to terminate: No child processes
``` |
Trust-Code/odoo-brasil | 303852072 | Title: [ br_account_payment ] not compatible with oca-bank-payment repo
Question:
username_0: We got a conflict with the oca-bank-payment repo. Both repos define a many2one field named "payment_mode_id" that points to different models, which prevents the modules from installing and working together.
// [trustcode-odoo-brasil/blob/10.0/br_account_payment/models/account_invoice.py](https://github.com/thinkopensolutions/trustcode-odoo-brasil/blob/10.0/br_account_payment/models/account_invoice.py)
```python
class AccountInvoice(models.Model):
    _inherit = 'account.invoice'

    payment_mode_id = fields.Many2one(
        'payment.mode', readonly=True,
        states=FIELD_STATE, string=u"Modo de pagamento")
```
[OCA/bank-payment/blob/10.0/account_payment_partner/models/account_invoice.py](https://github.com/OCA/bank-payment/blob/10.0/account_payment_partner/models/account_invoice.py)
```python
class AccountInvoice(models.Model):
    _inherit = 'account.invoice'

    payment_mode_id = fields.Many2one(
        comodel_name='account.payment.mode', string="Payment Mode",
        ondelete='restrict',
        readonly=True, states={'draft': [('readonly', False)]})
```
Answers:
username_1: Won't be fixed.
Status: Issue closed
|
milieuinfo/webcomponent-vl-ui-pill | 576242168 | Title: [BUG] - rename the 'type' attribute of the vl-ui-pill element
Question:
username_0: **Describe the problem**
A VlPill component can have 3 different types (error, success, warning) that are set via the type attribute. A VlButtonPill component can also have these 3 types, but since it is a button, 'button' is returned as the type (and not success, error or warning).
**How to reproduce**
The following e2e test fails:
```js
// type is going to be changed to data-vl-type because there is currently a clash with the default type attribute
it('als gebruiker wil ik het verschil kunnen zien tussen een pill knop van een bepaald type en een gewone pill knop', async () => {
    const pillButton = await vlPillPage.getPillButton();
    const pillSuccessButton = await vlPillPage.getPillSuccessButton();
    const pillWarningButton = await vlPillPage.getPillWarningButton();
    const pillErrorButton = await vlPillPage.getPillErrorButton();

    await assertPillButtonWithTextHasCorrectType(pillButton, 'Optie 1', undefined);
    await assertPillButtonWithTextHasCorrectType(pillSuccessButton, 'Optie 1', 'success');
    await assertPillButtonWithTextHasCorrectType(pillWarningButton, 'Optie 1', 'warning');
    await assertPillButtonWithTextHasCorrectType(pillErrorButton, 'Optie 1', 'error');
});

async function assertPillButtonWithTextHasCorrectType(pillButton, text, type) {
    await assert.eventually.equal(pillButton.getText(), text);
    await assert.eventually.equal(pillButton.getType(), type);
    await assert.eventually.equal(pillButton.isSuccess(), type === 'success');
    await assert.eventually.equal(pillButton.isWarning(), type === 'warning');
    await assert.eventually.equal(pillButton.isError(), type === 'error');
}
```
**Expected behavior**
The getType function (or whatever it gets renamed to) of the e2e component object should not return button.
Answers:
username_1: This has already been fixed in the code. `getType` returns the contents of the `type` attribute, which in practice will be error, success or warning.
Status: Issue closed
username_2: **Describe the problem**
A VlPill component can have 3 different types (error, success, warning) that are set via the type attribute. A VlButtonPill component can also have these 3 types, but since it is a button, the type is 'button' and not success, error or warning.
**How to reproduce**
The following e2e test fails:
```
// type is going to be changed to data-vl-type because there is currently a clash with the default type attribute
it('als gebruiker wil ik het verschil kunnen zien tussen een pill knop van een bepaald type en een gewone pill knop', async () => {
const pillButton = await vlPillPage.getPillButton();
const pillSuccessButton = await vlPillPage.getPillSuccessButton();
const pillWarningButton = await vlPillPage.getPillWarningButton();
const pillErrorButton = await vlPillPage.getPillErrorButton();
await assertPillButtonWithTextHasCorrectType(pillButton, 'Optie 1', undefined);
await assertPillButtonWithTextHasCorrectType(pillSuccessButton, 'Optie 1', 'success');
await assertPillButtonWithTextHasCorrectType(pillWarningButton, 'Optie 1', 'warning');
await assertPillButtonWithTextHasCorrectType(pillErrorButton, 'Optie 1', 'error');
});
async function assertPillButtonWithTextHasCorrectType(pillButton, text, type) {
await assert.eventually.equal(pillButton.getText(), text);
await assert.eventually.equal(pillButton.getType(), type);
await assert.eventually.equal(pillButton.isSuccess(), type === 'success');
await assert.eventually.equal(pillButton.isWarning(), type === 'warning');
await assert.eventually.equal(pillButton.isError(), type === 'error');
}
```
**Expected behavior**
The getType function (or whatever it gets renamed to) of the e2e component object should not return button.
username_2: The problem was apparently (temporarily) solved by the following code:
```HTML
<vl-demo data-vl-title="Button pill types">
<button is="vl-button-pill" id="button-pill-success" type="button" data-vl-type="success">Optie 1</button>
<button is="vl-button-pill" id="button-pill-warning" type="button" data-vl-type="warning">Optie 1</button>
<button is="vl-button-pill" id="button-pill-error" type="button" data-vl-type="error">Optie 1</button>
</vl-demo>
```
The type of a button must be `button`. As soon as that is changed, the tests fail. I have prepared a #type branch with a fix.
Status: Issue closed
|
youzan/vant | 398771275 | Title: Please also mention class name changes in the changelog T^T
Question:
username_0: The overlay used to have the class name van-modal... I had wrapped the overlay component so that it is removed from the DOM after it is closed... but I didn't know the class name had changed... which caused my program to throw an error (partly my own fault = =)
`this.$el.parentNode.removeChild(document.querySelector(".van-modal"))`
I hope that future changes to class names will also come with a heads-up in the changelog...
Status: Issue closed
Answers:
username_1: Manually manipulating the DOM like that is not recommended.
We will try to minimize class name changes and expose more props for customizing styles.
username_0: @username_1 By the way, do you have plans to expose "after close" callbacks for these popup or toast components?
username_1: @username_0 That can be added; if you need it, feel free to open a new issue describing it.
trufflesuite/ganache | 440670395 | Title: System Error when running Ganache 2.0.1 on darwin
Question:
username_0: <!-- Please give us as much detail as you can about what you were doing at the time of the error, and any other relevant information -->
PLATFORM: darwin
GANACHE VERSION: 2.0.1
EXCEPTION:
```
TypeError: Cannot read property 'transactions' of null
at ProjectsWatcher.handleBlock (/src/truffle-integration/projectsWatcher.js:180:37)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
```
APPLICATION LOG:
```
T+36955ms: Gas usage: 273162
T+36955ms: Block Number: 13
T+36966ms: Block Time: Mon May 06 2019 14:13:09 GMT+0200 (CEST)
T+36974ms: eth_getBlockByNumber
T+36986ms: eth_getTransactionReceipt
T+36993ms: eth_getCode
T+37004ms: eth_getBlockByNumber
T+37012ms: eth_getBlockByNumber
T+37021ms: eth_sendTransaction
T+37032ms: Transaction: 0xe925b59bf5770a0b9a6392850ba4adf7a8d0f8f7aba8829e463113d8b21d9443
T+37032ms: Gas usage: 42028
T+37032ms: Block Number: 14
T+37032ms: Block Time: Mon May 06 2019 14:13:09 GMT+0200 (CEST)
T+37040ms: eth_getBlockByNumber
T+37051ms: eth_getTransactionReceipt
T+37058ms: eth_getBlockByNumber
T+37069ms: eth_accounts
T+37077ms: eth_getBlockByNumber
T+37085ms: eth_getBlockByNumber
T+37096ms: eth_getBlockByNumber
T+37104ms: eth_estimateGas
T+37116ms: eth_getBlockByNumber
T+37124ms: eth_blockNumber
T+37136ms: eth_sendTransaction
T+37144ms: Transaction: 0x4f38839553a9c0bf681bebf2ca4bde128d5aed80d9ae4d6968c0e98752ca0a66
T+37144ms: Contract created: 0xfb8bdccc47acf909871c47af6b0c7798897ec80f
T+37144ms: Gas usage: 1989809
T+37144ms: Block Number: 15
T+37144ms: Block Time: Mon May 06 2019 14:13:09 GMT+0200 (CEST)
T+37166ms: eth_getBlockByNumber
T+37166ms: eth_getTransactionReceipt
T+37166ms: eth_getCode
T+37176ms: eth_getBlockByNumber
T+37191ms: eth_getBlockByNumber
T+37208ms: eth_getBlockByNumber
T+37231ms: eth_estimateGas
T+37253ms: eth_getBlockByNumber
T+37284ms: eth_blockNumber
T+37292ms: eth_sendTransaction
T+37300ms: Transaction: 0xd40d2e564c693190c770f26f7adb2d436305068bbaa6fb330e97e4a48f49a201
T+37300ms: Contract created: 0xe103cd9f75e9222da752e11969ffbc0cafd4de27
T+37300ms: Gas usage: 3078108
T+37300ms: Block Number: 16
T+37300ms: Block Time: Mon May 06 2019 14:13:09 GMT+0200 (CEST)
[Truncated]
T+37758ms: eth_getBlockByNumber
T+37766ms: evm_revert
T+37774ms: Transaction: 0x3f93b6f65bbb06a3a99671f0be95256fc3fbb10d356099d8a7a94677d847a35f
T+37774ms: Gas usage: 28698
T+37774ms: Block Number: 23
T+37774ms: Block Time: Mon May 06 2019 14:13:10 GMT+0200 (CEST)
T+37774ms: Runtime Error: revert
T+37785ms: Revert reason: FlightSuretyApp::addAirline - Already 4 airlines have been added, you must pas by the queue process
T+37794ms: eth_getBlockByNumber
T+37805ms: eth_call
T+37814ms: eth_getBlockByNumber
T+37826ms: web3_clientVersion
T+37834ms: evm_snapshot
T+37859ms: eth_unsubscribe
T+37869ms: eth_unsubscribe
T+37880ms: eth_unsubscribe
T+37880ms: eth_unsubscribe
T+37890ms: eth_unsubscribe
T+37890ms: eth_unsubscribe
``` |
dask/distributed | 685163355 | Title: Preserve hostnames in worker addresses
Question:
username_0: The scheduler internally converts an address with a hostname, e.g.,
```
tls://worker1.example.com:8786
```
into an IP address:
```
tls://192.168.127.12:8786
```
For the `tcp://` protocol, this is fine -- probably saves a few cycles in address translation.
However, for `tls://`, this drops an important piece of information: the desired hostname. A TLS router (such as Traefik) can use the desired hostname information to internally route requests to the right location.
For example, we run a Dask cluster per user that's partially split across Kubernetes and a HTCondor cluster. The scheduler is run behind a TLS router (Traefik) which exposes the scheduler externally. A client request to connect to the scheduler `tls://scheduler1.example.com` preserves the hostname. Hence, SNI (https://en.wikipedia.org/wiki/Server_Name_Indication) allows the router to proxy the client to the correct scheduler. We do this as we don't have sufficient public IPs to allocate one to each scheduler. [Yes, we are aware of dask-gateway; it doesn't quite fit our needs - I don't want to derail the ticket on that though.]
Given the majority of the data comes from a distributed storage system and we aren't trying to push GB/s through the proxy, this setup with the scheduler works swimmingly well.
However, it doesn't work for the workers as the scheduler coerces the address of the worker almost immediately to an IP address.
SO:
1. Is there a good reason why this is done? It seems purposeful; what's the advantage that using IPs conveys?
2. Would there be interest in a patch that preserves the hostname instead? [Preserving the hostname might also convey some advantage to dual IPv6/IPv4 hosts]
3. If not, what about keeping both? That is, address based on IP but keep around the hostname so it can be passed to the TLS layer.
Answers:
username_1: 1. I think that historically it was because some of this was developed on a system that took a long time to resolve hostnames. I think that we should probably consider reversing this decision.
2. I'd be interested in that. It seems like a more sensible choice today.
username_1: @jacobtomlinson is this something that you would be interested in?
username_0: For what it's worth, I'm unfortunately limited on my throughput for hacking on this to my "fun time" (i.e., weekends and nights). Trying to find someone internally who can volunteer to look into this.
username_1: I recommend starting a popular open source project with lots of need of improvement.
It really forces you to up your convince-friends-to-do-free-work game :) |
efanovjohn/Test_Case_Extractor | 565240813 | Title: ANDROID-1761: java.lang.RuntimeException at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:502) [Bugsee]
Question:
username_0: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
Reported by j
View full Bugsee session at: [https://appdev.bugsee.com/#/apps/ANDROID/issues/ANDROID-1761](https://appdev.bugsee.com/#/apps/ANDROID/issues/ANDROID-1761) |
SharePoint/sp-dev-docs | 608486491 | Title: Attribute incorrectly documented
Question:
username_0: Should be "OffsetDays" not just "Offset".
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f1aed3c6-d0f2-df91-ecae-0345158a138b
* Version Independent ID: 8df123fe-96af-d50a-a614-2f265a1cd8fa
* Content: [Today element (Query)](https://docs.microsoft.com/en-us/sharepoint/dev/schema/today-element-query)
* Content Source: [docs/schema/today-element-query.md](https://github.com/SharePoint/sp-dev-docs/blob/master/docs/schema/today-element-query.md)
* Product: **sharepoint**
* GitHub Login: @spdevdocs
* Microsoft Alias: **spdevdocs**
Answers:
username_1: Thanks @username_0 for your feedback. The issue has just been fixed and the documentation will be updated in the upcoming days.
Status: Issue closed
|
agda/agda | 204831660 | Title: Generalising compiler pragmas
Question:
username_0: Continuing the discussion started in #2431.
One obstacle to stand-alone compiler backends is the way that compiler pragmas are handled at the moment. Basically each backend has a set of pragmas hard-wired into the guts of Agda (parser, scope checker and type checker). It would make much more sense if we could shift the work of dealing with these pragmas to the backend in question.
Currently there are two kinds of compiler pragmas:
- those that attach to a definition (`COMPILED`, `COMPILED_DATA`, `COMPILED_TYPE` and `COMPILED_EXPORT` for the GHC backend)
- those that are global to the current file (`IMPORT` and `HASKELL`)
I propose to simplify this to only two pragmas, one for each kind, with the following syntax
```agda
{-# COMPILE <backend> <qname> <text> #-}
{-# COMPILED_CODE <backend> <text> #-}
```
Each backend would then be responsible for making sense of its own pragmas. For instance, for the GHC backend we might replace
```agda
{-# COMPILED _>>=_ \ _ _ _ _ -> (>>=) #-}
{-# COMPILED_TYPE IO AgdaIO #-}
{-# COMPILED_DATA Maybe Maybe Nothing Just #-}
{-# COMPILED_EXPORT foo agdaFoo #-}
```
with something like
```agda
{-# COMPILE GHC _>>=_ = \ _ _ _ _ -> (>>=) #-}
{-# COMPILE GHC IO = type AgdaIO #-}
{-# COMPILE GHC Maybe = data Maybe (Nothing | Just) #-}
{-# COMPILE GHC foo as agdaFoo #-}
```
`IMPORT` pragmas could be replaced by inline code (the compiler would take care to put it at the top of the file):
```agda
{-# IMPORT Some.Haskell.Module #-} -- becomes
{-# COMPILED_CODE import qualified Some.Haskell.Module #-}
```
A nice side effect of this is that we would get rid of the `COMPILED_DECLARE_DATA` pragma, which only exists because we are too eager when checking the `COMPILED_DATA` pragmas make sense. If we leave it to the GHC backend that problem goes away.
Answers:
username_2: Maybe we can also improve on the pragma names? I think ``COMPILE`` and ``COMPILED_CODE`` are not entirely self-explaining. What about ``COMPILED_DEF`` and ``COMPILE`` instead?
username_1: I suggest that we avoid abbreviations like `DEF`.
username_0: My reading of the pragma names were: `{-# COMPILE <backend> <name> <text> #-}` means _**compile**_ `<name>` using the information in `<text>`, and `{-# COMPILED_CODE <backend> <text> #-}` means `<text>` is some (already _**compiled**_) _**code**_ in the backend's target language that should get included in the compiled output.
username_0: A minor consideration is that the new pragmas don't conflict with the old ones, since I'd like to leave the old ones in there for a version or two (with deprecation warnings).
username_3: Instead of `COMPILED_CODE` we could have `EMBED` or `FOREIGN`.
It is not really "already compiled code", it is just embedded code from a foreign (=not Agda) language.
username_0: I like `FOREIGN`.
Status: Issue closed
username_4: @username_0 Is there a way to turn some language extensions on in the generated code from inside Agda? E.g. I'd like `PackageImports`. If I use:
```agda
module Example where
{-# FOREIGN GHC {-# LANGUAGE PackageImports #-} #-}
```
then I get
```haskell
{-# LANGUAGE EmptyDataDecls, ExistentialQuantification,
ScopedTypeVariables, NoMonomorphismRestriction, Rank2Types,
PatternSynonyms #-}
module MAlonzo.Code.Example where
import MAlonzo.RTE (coe, erased, addInt, subInt, mulInt, quotInt,
remInt, geqInt, ltInt, eqInt, eqFloat)
import qualified MAlonzo.RTE
import qualified Data.Text
{-# LANGUAGE PackageImports #-}
```
and the `LANGUAGE` pragma is just ignored by ghc.
username_1: @username_4, I suggest that you open a new issue for this question. |
viper3400/DoManager | 149496441 | Title: When using a Firebird DB ib_util.dll must be delivered
Question:
username_0: Otherwise there will be a firebird.log in the application dir which reads:
```
Tue Apr 18 19:32:27 2016
ib_util init failed, UDFs can't be used - looks like firebird misconfigured
C:\DoManager_3.0.1.1_master\bin/ib_util.dll library has not been found
C:\DoManager_3.0.1.1_master\ib_util.dll library has not been found
ib_util.dll library has not been found
```
Status: Issue closed
|
leandrowd/react-easy-swipe | 243608687 | Title: Module build failed: ReferenceError: Unknown plugin "transform-es2015-modules-umd"
Question:
username_0: If I do a **npm install** on a project that lists "react-easy-swipe" as a dependency, **npm run build** fails with:
```js
ERROR in ./~/react-easy-swipe/lib/index.js
Module build failed: ReferenceError: Unknown plugin "transform-es2015-modules-umd" specified in "C:\\Project\\node_modules\\react-easy-swipe\\.babelrc"
```
Manually installing "react-easy-swipe" does serve as a work-around so this isn't strictly speaking a blocking issue.
I believe a similar error was experienced in the following project, which outlines how they managed to resolve it by not publishing the .babelrc file to npm.
https://github.com/developit/preact-redux/issues/19
Answers:
username_1: I am experiencing this as well.
@username_0 the solution you provided worked for me
username_2: Even I am facing same issue.
Status: Issue closed
username_3: I excluded the .babelrc file via .npmignore. The last published version should have this fixed.
nevrome/wellspell.addin | 558619061 | Title: Progress bar stops at less than 100% with no error message
Question:
username_0: The progress bar of the spellchecker does not always reach 100%. Here is a screenshot:

E.g., when I run the spellchecker tool (`en_US` dictionary) on the text
```r
if (as.character(unlist(context$selection)["text"]) == "") {
stop("No text selected.", call. = FALSE)
}
```
I get only 33%:

Is this behavior expected or is there a hidden issue?
Answers:
username_1: Please check if 93f5822 solves this for you, @username_0
username_0: Nice solution 👍
Status: Issue closed
|
tulip/ppe-logistics | 618718058 | Title: Produce Units - Need to clear screen after producing or you can easily enter twice
Answers:
username_1: This ticket affects both of these steps:
Supplier Manager / Product Creator / Add Image
Apps to Share / Production Worker / Add Image
username_1: This is apparently not currently possible in Tulip.
The image upload is done via a built-in Photo Input widget. The only way you can change sizing within that widget is by altering the font size of the prompt text, and that doesn't seem to affect the actual size of the preview:


username_1: (Deleted prior comments that were on the wrong ticket)
username_1: I also added a description of what the user just produced under the serial number, because clearing the information means they would no longer be able to double check what they were just working on.
 |
envoyproxy/envoy | 1016527545 | Title: envoy.reloadable_features.health_check.immediate_failure_exclude_from_cluster deprecation
Question:
username_0: Your change #14772 (healthcheck: exclude hosts when receiving x-envoy-immediate-health-check-fail) introduced a runtime guarded feature. It has been 6 months since the new code has been exercised by default, so it's time to remove the old code path. This issue tracks source code cleanup so we don't forget. |
AlexBaranosky/print-foo | 56658351 | Title: Prettifying the output of print-foo
Question:
username_0: I really like print-foo, but I'd like it even better if the output it gave me was formatted in a way that's easier on the eyes. So I was thinking about making some changes to use either
[doric](http://github.com/joegallo/doric) or [table](http://github.com/cljdwalker/table) to format the output.
Would you like a pull request for this, or should I do my own thing with it? I'm building a REPL utility library for myself.
spring-projects/spring-boot | 280495929 | Title: bootstrap.properties not working with @SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
Question:
username_0: Hi folks,
as I can not find anything regarding my scenario, this might be a bug in general.
Currently I'm developing a Spring Boot application which contains a library that uses an entry inside its own `bootstrap.properties`, but it seems to not work while testing.
I've created some dummy project for re-creating my scenario (as I'm not allowed to give out my working project): https://github.com/username_0/spring-boot-with-bootstrap-and-feign-testing-bug
I'm not sure if this is related to #4424 (or even the same problem).
Sorry if this is the wrong project for my problem
Answers:
username_1: `bootstrap.properties` is not managed in Spring Boot but it is a Spring Cloud concept. I'd ask on the #spring-cloud channel how you can mitigate this problem as I guess you shouldn't be the first one to ask.
Status: Issue closed
username_0: @username_1 thanks for the hint, but what "channel" do you mean?
username_1: I am sorry, I meant the Gitter channel. |
home-assistant/people | 430099024 | Title: @rcloran: Home Assistant asks you to fix your GitHub account!
Question:
username_0: Hi @rcloran - you're a member of [Home Assistant](https://github.com/home-assistant)'s organisation on GitHub, but our audit-bot has noticed your GitHub account isn't fully set up to meet the standards we need.
If you're no longer supposed to be a member of the organisation, please use your [GitHub organisation settings](https://github.com/settings/organizations) page to leave.
Otherwise, we just need you to help us with a few things to keep our code safe and secure on GitHub:
* Get a pull request opened to add your username to our [users.txt](https://github.com/home-assistant/people/blob/master/users.txt) file _- ideally, an existing member of Home Assistant should open this request for you_.
You have a limited amount of time to fix these issues, otherwise you'll be automatically removed from this organisation.
Thanks for helping us out - it makes our lives a lot easier, and help keeps our code secure! |
facebook/flipper | 806296746 | Title: Discussion: Change bolts applinks 1.4.0 dependency to allow jetifier removal
Question:
username_0: Is it possible to replace this library?
Answers:
username_1: This seems to come from the Fresco plugin. I'm not sure why we have this as a top-level dependency for the entire Flipper SDK. Will check if we can quickly move that. |
peers/peerjs | 73499852 | Title: Failed to set local offer sdp: Called in wrong state: STATE_RECEIVEDINITIATE
Question:
username_0: This issue occurs at times when Firefox v38 attempts to call Chrome v42, and, with the Temasys AdapterJS integration, when IE v11 is the one starting the call with Chrome v42.
Answers:
username_1: This might be a duplicate of #269. Does the error actually cause peerjs to stop working?
username_0: Hi @username_1, yes it does. There wasn't a video stream connection.
username_2: Hi, I also got the same exception. Can anyone help me fix it?
username_3: Just had this happen now.
Using a slightly altered version of peerjs (adding a few things outside the peerconnection or mediastream)
Both computers were connecting fine. Then by accident one of em was unplugged mid call. From then on haven't been able to get the 2 computers to share mediastream again, even after restarting peerjs server and having them refresh ids.
username_4: Hi,
This issue is happening to me as well
Failed to setLocalDescription, Failed to set local offer sdp.
called in wrong state STATE_RECEIVED_INITIATE. Is there a workaround for this ?
username_3: When it happens, the way I fix it is restarting the computer that didn't crash; my understanding is that it's probably storing some ICE candidates and trying to connect the fastest way possible while the other side isn't prepared yet.
I didn't try a simple browser restart though (a simple page refresh doesn't work).
username_5: I'm getting this as well, but on Chrome to Chrome connections. Doesn't seem to prevent the clients from connecting OK however.
username_6: Getting the same problem Chrome to Chrome. Any solutions yet?
username_7: Failed to set local offer sdp: Called in wrong state: STATE_RECEIVEDOFFER
username_8: Restarting the (chrome) browser made the problem go away for me, and I was able to make calls again
username_9: **## Failed to set local offer sdp: Called in wrong state: kHaveRemoteOffer**
in an Android native WebChromeClient. Any solution?
username_10: +1, have `kHaveRemoteOffer` error, any solutions?
username_11: Am also getting the same error
peer.min.js:1 PeerJS: ERROR Error: (OperationError) Failed to set local offer sdp: Called in wrong state: kHaveRemoteOffer
peer.min.js:1 PeerJS: Failed to setLocalDescription, (OperationError) Failed to set local offer sdp: Called in wrong state: kHaveRemoteOffer
Any solution for this?
username_9: https://gitlab.com/zkry.akgul/peerjs
Use this instead of the original lib. I made a few small touches to the lib files and it seems like the problem is solved.
username_12: Could you test with the latest build from master branch? in "dist" folder
username_9: Nope, I couldn't. I worked on it a couple of months ago and used it in my small project. It worked for me.
username_11: Thanks @username_9 , @username_12
I checked out the latest peer.min.js file and it's working fine now.
Status: Issue closed
username_12: Mmm, it should be the same file; maybe your browser is caching an older version.
Anyway, feel free to reopen if you get the error again. Also I published the changes to NPM so you can use latest version from NPM now.
username_13: Now I have this issue: "OperationError: Failed to set local offer sdp: Called in wrong state: kHaveRemoteOffer". Trying to make connection between two browsers on one computer
username_12: Which PeerJS are you using?
username_13: @username_12 0.3.14
username_12: Update it, latest is 0.3.16
username_13: @username_12 the error has gone! But still no audio.
Here is how I make the call:
```js
navigator.getUserMedia({audio: true, video: true}, function(stream) {
  var call = this.peer.call(this.idToConnect, stream);
  call.on('stream', function(stream) {
    console.log('STREAM');
    console.log(stream);
  });
},
err => console.log(err));
```
And this is how I receive it:
```js
this.peer.on('call', function(call) {
  navigator.getUserMedia({
    audio: true,
    video: true
  }, function(stream) {
    call.answer(stream);
    call.on('stream', function(stream) {
      console.log('RESPONED CALL');
      console.log(stream);
    });
  }, function(error) {
    //...
  });
});
```
aws-quickstart/quickstart-jfrog-artifactory | 564334212 | Title: RDS DB instances now use new certificates; servers fail to start
Question:
username_0: Amazon has switched over to the new RDS certs for all new RDS instances. Because of this, the default Artifactory CloudFormation template now fails for all Artifactory servers with a PKIX error. We'd verified that adding the new cert to the default cacerts on the server instances allows the servers to successfully start. As a work-around (since we don't want to manually add the cert every time we bring up an instance), we're able to switch our RDS instance to use the older 2015 cert.
Does the CloudFormation template need to be updated in some way to work with the new certs?
Thanks!
John
Answers:
username_1: Ran into this today. Within jfrog-artifactory-ecs-ec2.template.yaml (file may vary depending on where you deploy), find rds_cert and change the url to https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem. I also changed the dest accordingly. Hope this helps.
username_1: John, the change you'll likely need to make is in the jfrog-artifactory-ecs-ec2-template.yml (file name will vary based on deployment architecture); search for "rds_cert" and modify the URL accordingly, based on the information here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html.
Hope this helps.
username_0: I'm assuming this hasn't been tested by JFrog after Amazon implemented the cert change. The default QuickStart template still critically fails when the Artifactory server attempts to start on each machine and then critically fails because it can't communicate via SSL to the RDS server. By critically failing, I mean that the primary and secondary servers will terminate after 10 minutes and attempt to restart again until the RDS SSL issue is resolved.
I'm quite sure this can be easily duplicated by simply attempting to execute this default template. We're working around it by either changing the RDS instance to use the 2015 cert and then restarting the servers (which is the most stable workaround, but will stop working soon since the 2015 cert is supposed to expire soon). The other work around is to apply the rds-ca-2019-root.pem on each machine after they come up.
The purpose of creating this issue is to make sure JFrog is aware of the issue since this default template will always fail as far as I can tell. We're simply testing a plugin, so will continue to use one of the workarounds until it's resolved (we don't need a production HA cluster and don't have time to modify the QuickStart template to attempt to fix it).
It should be noted that the RDS cert appears to be the correct one (https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem). I've opened it on the server and can see the 2019 cert (as well as the 2015 cert, which is at the top). However, as mentioned earlier, for some reason this doesn't work until you add the rds-ca-2019-root.pem to the cacerts store.
username_2: Hello John,
Fixing this requires an Ansible modification for the underlying scripts. We are working on it for Artifactory6 for the EC2. Fix is expected to be released later this month.
For any users who come here in future, we are also working on Artifactory7 templates, there we had to disable SSL with RDS for other reasons. This will be fixed in a later patch.
username_0: Excellent, thanks for the update Vinay!
username_0: Hi Vinay, I know things have turned upside down for everyone, so I suspect this has been pushed out to much later, but do you have an updated ETA for the Artifactory6 script changes to support the new RDS cert, or the Artifactory7 templates? Thanks!
username_2: Hello John, Artifactory 7 templates are coming very soon. My apologies that we missed our March deadline, we are still working on these and expected to be publicly released within 1-2 weeks.
We will not be updating Artifactory 6 templates as they are being replaced with Artifactory7 templates soon.
username_0: Thanks for the quick update and looking forward to the Artifactory 7 templates.
username_2: Hello @username_0 , sorry it took a long time but I just got notification from Amazon that new quickstart templates have been published. These support installation of Artifactory 7.2.1
best,
username_0: Thanks @username_2, we should be trying them out sometime in the near future!
Best, John |
noorzaie/vue-circular-count-down-timer | 908678092 | Title: More circles!
Question:
username_0: Hello,
Could it be possible to add days, months and years?
Good job, anyway.
Thanks :)
Answers:
username_1: Hi,
I can't do it right now; you can make a pull request.
username_1: Check out new release:
https://github.com/username_1/vue-circular-count-down-timer/releases/tag/v2.0.0
Status: Issue closed
|
bitnami/charts | 811119871 | Title: redis + sentinel master pod reschedule / deletion results in two masters
Question:
username_0: **Which chart**:
bitnami/redis 12.7.4
**Describe the bug**
If the master pod is rescheduled / deleted manually, a new master is elected properly but when the old master comes back online it elects itself as a master too.
**To Reproduce**
Steps to reproduce the behavior:
1. Install chart
```
helm install my-release bitnami/redis --set cluster.enabled=true,cluster.slaveCount=3,sentinel.enabled=true
```
1. Delete master pod
2. observe failover correctly happening and new master elected
3. when deleted pod is recreated and comes back online, it thinks it is a master.
4. now there are two masters
**Expected behavior**
Expected old master to rejoin as slave
**Version of Helm and Kubernetes**:
- Output of `helm version`:
```
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"dirty", GoVersion:"go1.15.6"}
```
- Output of `kubectl version`:
```
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-14T05:15:04Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2021-01-22T22:45:59Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
```
Answers:
username_0: Note this is on 12.2.3 because that's the only version of the chart i can get working that doesn't initialise all instances as masters, as per #5347
username_1: Hi,
Thanks for reporting. Pinging @username_2 as he is looking into the Redis + Sentinel issues.
username_2: Hi @username_0 ,
Could you indicate which Kubernetes cluster you are using?
Also, I need a bit of clarification: in the first message of this issue you indicated v12.7.4, but later you indicated 12.2.3. I guess you mean you have this issue with 12.2.3, because with 12.7.4 you get all the instances as master. Am I right?
username_2: Hi,
A new version of the chart was released.
Could you give it a try and check if this fixed the issue for you ?
username_0: Yup that's correct. For now I'm using the dandydeveloper chart as it works with pod deletion and also correctly promotes only one pod to master. I'll give this chart a spin again soon though and get back to you
username_3: I'm having the same issue, with different result. My problem is caused by the chart using: `{{ template "redis.fullname" . }}-node-0.{{ template "redis.fullname" . }}-headless...` in the sentinel configuration [here](https://github.com/bitnami/charts/blob/master/bitnami/redis/templates/configmap.yaml#L45). If the `node-0` is killed, it will never come back as it can't connect to itself on boot.
I think it should be using the `redis` service to connect to a sentinel node and then it could get the information it needs to bootstrap.
Example below with kind:
```sh
→ kubectl logs redis-node-0 -c sentinel
14:17:44.81 INFO ==> redis-headless.default.svc.cluster.local has my IP: 10.244.0.72
14:17:44.83 INFO ==> Cleaning sentinels in sentinel node: 10.244.0.75
Could not connect to Redis at 10.244.0.75:26379: Connection refused
14:17:49.83 INFO ==> Cleaning sentinels in sentinel node: 10.244.0.74
1
14:17:54.84 INFO ==> Sentinels clean up done
Could not connect to Redis at 10.244.0.72:26379: Connection refused
→ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
redis-node-0 1/2 CrashLoopBackOff 8 13m 10.244.0.72
redis-node-1 2/2 Running 0 12m 10.244.0.74
redis-node-2 0/2 CrashLoopBackOff 14 12m 10.244.0.75
→ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
redis ClusterIP 10.96.155.117 <none> 6379/TCP,26379/TCP 14m
redis-headless ClusterIP None <none> 6379/TCP,26379/TCP 14m
```
username_2: Hi @username_3 ,
Could you enable debug and get the logs from the nodes that are in CrashLoop?
username_4: Bumping this...this is a really nasty bug and I cannot make sense of it.
Bitnami redis sentinel setup is beyond unstable. I actually think this chart should be quarantined until this is resolved. I will continue to investigate and report back.
username_4: Ok so I have gotten to the bottom of this: if you lose the pod with both the leader sentinel and leader redis, we end up in a situation where another sentinel is promoted to leader, but continues to vote for the old redis leader which is down. When the pod comes back online, start-sentinel.sh polls the quorum for leader and attempts connection, which due to the above is pointing to its own IP.
This might be an issue with Redis, as it appears that if the leader sentinel goes down as it's failing over the leader redis to a follower, then the follower sentinels are unaware of the change and can never converge back on a consistent state.
username_2: Hi,
@username_3 , @username_4 . Could you indicate which version of the chart and container images are you using ?
I would like to try to reproduce the issue.
username_3: Hi @username_2, thanks for the follow up.
I was deploying with:
```sh
kind create cluster --name=redis-test
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/redis --set=usePassword=false --set=cluster.slaveCount=3 --set=sentinel.enabled=true --set=sentinel.usePassword=false
```
And then executing `kubectl delete pod my-release-redis-node-0` to force a disruption on the cluster. After running this command I would see the behaviour described [above](https://github.com/bitnami/charts/issues/5543#issuecomment-795489200). I can't remember the exact version that I had, but it was something along the 12.7.x version.
The good news are that I can't reproduce this problem again (just tried now with `13.0.1`). Looks like #5603 and #5528 might have fixed the issues I was having.
username_2: Hi,
Yes, there were some issues that were fixed.
Please, @username_4 could you also check your versions and see if your issues were also fixed?
username_5: Hi,
I was dealing with the same issue and I can confirm that it seems resolved in the most recent 14.1.0 version (commit #6080). I was observing the same problem with the 14.0.2 version. It was not always reproducible and I could not find a workaround. The problem was that when the master Redis pod is restarted with the `kubectl delete pod` command, the sentinel containers in the other pods cannot choose a new master, and `sentinel get-master-addr-by-name` still returns the old master's IP address, which no longer exists.
username_2: Hi @username_5 ,
Is the case you observed in 14.0.2 solved for you in 14.1.0, or is it happening in other deployment you have with 14.0.2 ?
username_5: Hi @username_2,
I upgraded my deployment from 14.0.2 to 14.1.0 and I don't observe the issue anymore. I don't recall the versions exactly but I can say the latest versions of 11.x, 12.x and 13.x have the same issue, too.
username_2: Hi,
Yes, it could happen in those versions.
I am happy that this is fixed for you now.
username_2: I am closing this issue.
Feel free to reopen it if needed or to create a new issue.
Status: Issue closed
|
openthos/multiwin-analysis | 148327765 | Title: Daily Report 2016-04-14 <NAME>
Question:
username_0: Hello all:
1. Communicate with <NAME> about the current product and status.
2. Communicate with <NAME> for several issues.
3. Communicate with <NAME> for several issues.
4. Merge code: merge the code from Hu Shaolong and commit to git server.
5. Analyzing bug: Analyzing the shortcut always-on-top issue; at present, we know the related applications have a style which multiwindow does not take into account. We need to find the related style and let multiwindow support it.
6. Communicate with <NAME> about the current status and next plan.
Thanks. |
kubernetes/ingress-nginx | 560807274 | Title: Support regex for canary ingress header value
Question:
username_0: Can the canary ingress support a regex for the header value? Like this:
```
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: version
nginx.ingress.kubernetes.io/canary-by-header-value: ^[0-9]ve31v[0-9]$
nginx.ingress.kubernetes.io/canary-use-regex: true
```
/kind feature |
SAP/fundamental-ngx | 843017505 | Title: avatar: Images do not have alternative text
Question:
username_0: Description: The avatar images do not have alternative text.
Expected: All images and icons should provide alternative text.
Screenshot:

Answers:
username_1: This can be achieved as part of the component `fd-avatar` by setting `aria-label`. We can improve our doc site |
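As a rough illustration of that suggestion (not verified against the `fd-avatar` API — the `image` input and whether the component forwards the label to the rendered image/icon are assumptions here), a consumer could set an explicit label on the element:
```ts
import { Component } from "@angular/core";

// Hypothetical host component; assumes fd-avatar accepts a plain aria-label
// (or an equivalent ariaLabel input) and exposes it to assistive technology.
@Component({
  selector: "app-user-badge",
  template: `
    <fd-avatar
      image="https://example.com/avatars/jane-doe.png"
      aria-label="Jane Doe">
    </fd-avatar>
  `,
})
export class UserBadgeComponent {}
```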
kdahlquist/GRNmap | 68756479 | Title: Implement test file structure and naming convention as documented in wiki
Question:
username_0: GRNmap has converged on a file structure and naming convention for test and demo files. It is [documented here](https://github.com/username_2/GRNmap/wiki/Naming-Convention-and-Test-Files-Organization). This issue represents the initial task of making the current set of files fit these conventions.
Answers:
username_1: For spreadsheets that have the same number of genes, edges, etc., we will have an optional # to append at the end to differentiate.
username_2: Given Issue #74 where we will systematically test all 16 possibilities for running the code, we may need to think further about our organization and naming convention here.
Right now at the top level we have split off the forward and estimation tests into separate folders, which makes sense.
I am wondering if underneath both of these we should make sub-folders for MM and Sigmoid to further organize the tests.
Also, should we add to the filename information about fix vs. est for P and b? What about for making graphs? Or these could be organized in folders?
Let's talk about this at our next meeting.
username_2: We looked at what was on the wiki for http://www.openwetware.org/wiki/GRNmap_Test_Inputs Issue #74 and we are going to make a slight change to the way it is shown there.
Say fixP-1 or fixP-0, fixb-1 or fixb-0 for the conditions of fixing or estimating P or b. It's shorter and doesn't re-use the word estimate. Will use "graph" or "no-graph" for the graphing condition.
Once either @username_3 or @username_1 fixes the filenames on the wiki, they can close this issue.
Also, on the wiki, please make the filenames visible in the hyperlink labels.
All of these 16 test files can also be uploaded to the test-files folder on GitHub (beta branch).
username_2: We will not do any further subdivision of folders in test-files at this time.
username_3: The file names are corrected on the wiki.
http://openwetware.org/wiki/GRNmap_Test_Inputs#Observations_from_test
Status: Issue closed
|
andywer/typed-emitter | 1104922954 | Title: Promise-based events.once in typed-emitter
Question:
username_0: Does typed-emitter have support for the "once" function of the built-in "events" package?
https://github.com/nodejs/node/pull/26078
Used like this:
```
import { once } from "events"
// ...
await once(emitter, "event");
```
This is not typed, and it's not even a function on the EventEmitter - is there a clean way to type this without reimplementing the promise-based `once` function?
Thanks for your work on this package!
Answers:
username_1: Hey @username_0.
We might be able to "fix it" with a global type declaration extending the existing `events` interface, potentially under a new entry point in `typed-emitter`.
Haven't tried yet, though. Would be happy to give it a try if you were to prepare a PR.
PS: I do think it is a function on the `EventEmitter`, a static one, though.
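For illustration, a rough sketch of how `events.once` could be wrapped so the resolved arguments stay typed; `typedOnce` and the `MessageEvents` map are hypothetical helpers, not part of the package.
```typescript
import { EventEmitter, once } from "events";
import TypedEmitter from "typed-emitter";

// Hypothetical event map for illustration.
type MessageEvents = {
  error: (error: Error) => void;
  message: (body: string, from: string) => void;
};

// Thin typed wrapper around events.once: the event name and the resolved tuple
// are derived from the event map instead of being `any`.
function typedOnce<
  Events extends Record<string, (...args: any[]) => void>,
  E extends keyof Events & string
>(emitter: TypedEmitter<Events>, event: E): Promise<Parameters<Events[E]>> {
  // events.once expects a plain EventEmitter, hence the cast.
  return once(emitter as unknown as EventEmitter, event) as Promise<Parameters<Events[E]>>;
}

async function example() {
  const emitter = new EventEmitter() as TypedEmitter<MessageEvents>;
  // Resolves with the listener arguments as a tuple: [body, from].
  const [body, from] = await typedOnce(emitter, "message");
  console.log(body, from);
}
```
The call site then gets the listener arguments back as a typed tuple instead of `any[]`. |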
ffxiv-teamcraft/ffxiv-teamcraft | 874970327 | Title: feat: Boxplots or violin plots for fishing spot page bite times
Question:
username_0: **The problem.**
Figuring out the min and max possible bite time for a fish (with AND without Chum active) is a crucial part of high-end fishing.
The request is aimed to facilitate that and present the data in more readable format.
**The solution.**
Violin plots are quite possibly the best candidate for the job; box plots are also great but hard to read for people unfamiliar with them. Here is the relevant Node package that could be used for this: https://www.npmjs.com/package/chartjs-chart-box-and-violin-plot
The plots do not have to retain the relative bite frequency and should be normalized per-fish; bite chances are given in the bait table.
Horizontal box/violin plots would also be similar to the format CatBecameHungry uses, but better/more informative, given the bait is a selector now and more info could be displayed on a single plot.
Ideally, colors would be assigned by fish tug types also because the combination of the two [i.e. bite time + tug strength] is the only way to tell fish apart from a player's perspective.
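For illustration, a rough sketch of how the linked plugin could be wired up, assuming the chart types it advertises (`violin`, `boxplot`, and their horizontal variants); the element id and the sample bite-time arrays are made up.
```typescript
import Chart from "chart.js";
// Importing the plugin registers its extra chart types with Chart.js 2.x.
import "chartjs-chart-box-and-violin-plot";

// Made-up bite-time samples in seconds, one array per fish; real data would come
// from the collected reports for the selected spot and bait.
const biteTimes: Record<string, number[]> = {
  "Fish A": [12.1, 13.4, 14.0, 15.2, 16.8],
  "Fish B": [21.5, 22.0, 24.3, 25.1, 27.9],
};

const canvas = document.getElementById("bite-times") as HTMLCanvasElement;

// Config kept as `any` because the plugin ships no TypeScript typings for the
// extra chart types it adds.
const config: any = {
  type: "horizontalViolin",
  data: {
    labels: Object.keys(biteTimes),
    datasets: [
      {
        label: "Bite time (s)",
        // Raw sample arrays; the plugin computes the quartiles/density itself.
        data: Object.values(biteTimes),
        backgroundColor: "rgba(24, 144, 255, 0.4)",
      },
    ],
  },
  options: { legend: { display: false } },
};

new Chart(canvas, config);
```
Coloring each violin by tug type would then just be a matter of assigning `backgroundColor` per fish.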
**Describe alternatives you've considered**
Hovering over the current stacked histogram plot can be problematic, especially for extremely low bite chance fish like Sculptor; overall, it can be hard to reason about when trade-offs are concerned (e.g. how much of a chance to get fish X does one forgo by cancelling the cast early at Y seconds to avoid accidentally double hooking fish Z completely).
**Additional context**
Unfortunately, ngx seems to lack the feature, but page load times and older style timeline plots seem to be the only reason some people still stick with cbh (aside from comments from JP community that is).
**Somewhat related notes**
Performance is of concern, but it shouldn't be fundamentally different from the existing solution.
Potentially worthy of a separate discussion - maybe it is possible to move expensive calculations server-side entirely and update caches for plot data every day or week or so? This would add as many calculations as every client does while requesting the spot page to the server but eliminate the need to move large amounts of data between the client and the server every time it is requested, speeding up page loads and potentially reducing overall upkeep costs.
Answers:
username_1: It looks like https://github.com/sluger/ng-chartjs-boxplot is able to do this for Angular, but I guess this would be better as horizontal plots, and I'm not sure if the lib makes that possible or not.
username_0: This is understandable; going server-side is tempting sometimes but quickly grows into an entire devops and admin mess 🤕
I'm just asking because fishing charts specifically are one area where periodic serverside caching seems to make the most sense.
Thank you for looking into this; definitely looks painful to implement for seemingly marginal gains but I do believe it'd be a noticeable improvement in the long run!
username_1: Example with limsa lower decks:

Status: Issue closed
|
echobind/ember-links-with-follower | 161025204 | Title: Use css transforms for positioning
Question:
username_0: As brought up by @samselikoff: See https://npmcdn.com for an example, it's much smoother, especially on window resize.
This would also remove the need to check for velocity.js as animation would be taken care of with css.<issue_closed>
Status: Issue closed |
smartline1/legacy-site | 652411558 | Title: GH Pages won't publish
Question:
username_0: I am new here! I just pushed a relatively large site to GH Pages using WP2Static. It all looked to work OK, but I got an error message a couple of times about 'lock file already exists', and when I go to my site - either using the default domain or the custom one I created with a CNAME record - it doesn't work. I'd love some help with this please!<issue_closed>
Status: Issue closed |
galexrt/docker-sinusbot | 202885420 | Title: Scripts cannot be modified
Question:
username_0: Hey there,
unfortunately I can't modify the scripts because they aren't mounted in /sinusbot/data. Is it possible to add a volume for the scripts?
Answers:
username_1: I added a volume for the scripts.
Please note that this fixes the "wrong" data directory path, that I just found out about!
The data directory is now located at `/sinusbot/data` and not `/data` anymore.
For compatibility reasons, the "old" directory `/data` will still continue to work.
username_1: The images have been built, tested and updated.
***
Don't forget to pull the image to update it:
For the quay.io image run:
```
docker pull quay.io/username_1/sinusbot:latest
```
For the Docker Hub image run:
```
docker pull username_1/sinusbot:latest
```
username_1: Sorry for the inconveniences!
Issues with the TeamSpeak Client have now also been fixed.
Status: Issue closed
username_0: It looks like there's a little problem with yt-dl...
I get get the following error since I pulled the new Image:
2017/01/25 12:02:28 exit status 127 youtube-dl not found
username_0: The output of
`docker images quay.io/username_1/sinusbot:latest`
was just
`REPOSITORY TAG IMAGE ID CREATED SIZE`
I also tried
`docker images username_1/sinusbot:latest`
and get
```
REPOSITORY            TAG      IMAGE ID       CREATED          SIZE
username_1/sinusbot   latest   b703a86681eb   30 minutes ago   524.4 MB
```
username_1: You are using the image from the Docker Hub. You need to run `docker pull username_1/sinusbot:latest`.
username_0: That's exactly what I just did when you said the I have to re-pull the image.
username_1: You already stopped and deleted the container and re-run it?
username_0: Yes, but is it also necessary to delete the files?
I did:
```
docker pull ...
docker stop sinusbot
docker rm sinusbot
docker run --name sinusbot ...
```
username_1: No you shouldn't need to delete the files.
Can you post your full run command.
username_0: ```docker run --name sinusbot2 -d -v /opt/docker/sinusbot2/data:/sinusbot/data -v /opt/docker/sinusbot2/scripts:/sinusbot/scripts -p 8088:8087 username_1/sinusbot:latest```
The name and path is changed to sinusbot2 because I didn't want to destroy my original sinusbot container :D
username_1: The `youtube-dl` problem has been fixed. The fixed images have been built and pushed.
Please repull the image and try again. :)
Thanks for your patience!
username_0: I'll try it, thank you very much for your help.
username_1: @username_0 Please report back if it is working now.
username_0: It seems like it's working perfectly, I looked at it only for a few seconds but it loads yt-dl just as it should. |
wemake-services/wemake-django-template | 288593483 | Title: Support gitlab-pages
Question:
username_0: Some research is needed:
- can pages be private
- can we build pages inside CI
- does gitlab support sphinx
Links:
- https://gitlab.com/pages/sphinx
- https://gitlab.com/help/user/project/pages/index.md
- https://gitlab.com/gitlab-org/gitlab-ce/issues/33422
Answers:
username_0: Right now private pages are not supported. We can wait for this to be merged: https://gitlab.com/gitlab-org/gitlab-ce/issues/33422
username_0: Progress https://docs.gitlab.com/ee/user/project/pages/introduction.html#gitlab-pages-access-control-core-only
username_1: - [X] can we build pages inside CI
- [X] does gitlab support sphinx
I made a publish stage on our private gitlab. @username_0 I can handle this.
username_0: gitlab.com still does not support private docs. But, having an example is always good!
So, please send your solution. |
CenterForOpenScience/osf.io | 103292513 | Title: API V2 JSON API Extension: Bulk Operations
Question:
username_0: The JSON API allows for optional extensions. One of those is a method of doing multiple requests at the same time, such as updating multiple nodes or removing multiple contributors. This feature would allow people to reduce the number of requests they make to the API and ensure that multiple related requests all happen or none do.
http://jsonapi.org/extensions/bulk/
Note that any optional extensions to the JSON API will require returning header information about the availability of that extension.<issue_closed>
Status: Issue closed |
DoctorGester/crumbling-island-arena | 246532882 | Title: Add 0-2 randomly placed techies bomb each round which explode on hit and deal 3 damage in an area
Answers:
username_1: 1. Randomness is questionable here. I think it would be better to:
1) Increase the number of mines and remove the randomness factor.
2) Reduce the damage to 2 (since there will be more mines).
2. I think it's logical for them to be neutral (dealing damage to everyone, regardless of who triggered them), and then I'm sure there will be a lot of confusion (bugs, it won't be clear how it works). This can be addressed with a more detailed description and tests in CIA Test.
3. In my opinion they should also affect the ground (break it near the epicenter of the explosion and damage ground that is relatively far away). This isn't spelled out in the title, although I think that's how it was intended.
4. This will give ranged heroes an advantage, so the meta ones will need to be nerfed accordingly.
Status: Issue closed
|
Katerina639/UPlab | 324042182 | Title: Task 8
Question:
username_0: @Katerina639
**node_modules**
* It's indeed gone in task8, but it's still there in task7. The idea is that this directory shouldn't be in the repository at all, so it should be removed everywhere.
**package.json**
* This file is missing in task8.
**/getPost?:id**
* express does not rely on query parameters (the ones listed in the URL after the question mark), so they don't need to be specified in handler paths, and there is no way to do so. The special characters (question mark, colon and so on) are used to build the route pattern. Specifically, the question mark marks the preceding character as optional. You can read the [documentation](https://expressjs.com/en/guide/routing.html).
In other words, you can simply keep '/getPost' as the handler path, and the req.query object with all the query parameters that were passed is provided by express itself.<issue_closed>
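For illustration, a minimal sketch of such a handler (the '/getPost' path comes from the task; the response shape and port are placeholders):
```typescript
import express from "express";

const app = express();

// No query parameters appear in the route path; express matches on "/getPost" only.
app.get("/getPost", (req, res) => {
  // req.query holds everything after the "?", e.g. GET /getPost?id=42 -> { id: "42" }
  const id = req.query.id as string | undefined;

  if (!id) {
    res.status(400).send("id query parameter is required");
    return;
  }

  // Placeholder response; a real handler would look the post up somewhere.
  res.json({ id });
});

app.listen(3000);
```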
Status: Issue closed |
spotify/scio | 181168799 | Title: Resume Apache Beam port for 0.2.0
Question:
username_0: Right now we have an old [apache-beam](https://github.com/spotify/scio/tree/apache-beam) branch that works with Beam 0.1.0, and a [neville/beam-0.2.0](https://github.com/spotify/scio/tree/neville/beam-0.2.0) branch that compiles against Beam 0.2.0 but fails tests.
We need to fix those, update `apache-beam` branch to 0.2.0, and resume cherry picking commits from master.
- [ ] fix tests in `neville/beam-0.2.0` branch
- [ ] update `apache-beam` branch to 0.2.0
- [ ] cherry pick commits from `master`
Answers:
username_1: Many tests are failing because of this issue (which may or may not be a problem on the Scio side - investigation in progress). https://issues.apache.org/jira/browse/BEAM-741
username_0: Let's close this and use #279 to track all things Beam.
Status: Issue closed
|
parksunwoo/ocr_kor | 915312500 | Title: Question about demo image input
Question:
username_0: I'm trying to hook up a webcam, crop the text region, and pass it to this model so it works on live input.
I'm working on modifying the demo.py code,
```Python
demo_data = RawDataset(root=opt.image_folder, opt=opt) # use RawDataset
demo_loader = torch.utils.data.DataLoader(
demo_data, batch_size=opt.batch_size, shuffle=False, num_workers=int(opt.workers),
collate_fn=AlignCollate_demo, pin_memory=True)
```
and an error occurs in that part.
Since it now takes a single image rather than an image folder, I removed `demo_data` and
```Python
demo_loader = torch.utils.data.DataLoader(
(img,), batch_size=opt.batch_size,
shuffle=False,num_workers=4,
collate_fn=AlignCollate_demo, pin_memory=True)
```
changed it as above.
```Python
# predict
ocr_model.eval()
for image_tensors, image_path_list in demo_loader: #<---
batch_size = image_tensors.size(0)
with torch.no_grad():
```
The marked line then raises the error below.
```
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/user/.local/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/user/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "./ocr_kor/deep-text-recognition-benchmark/dataset.py", line 273, in __call__
images, labels = zip(*batch)
ValueError: too many values to unpack (expected 2)
```
Is there a way to accept and process a single image instead of an image folder? |
rust-vmm/linux-loader | 627073228 | Title: Replace `std::error::description` with `to_string()`
Question:
username_0: In newer Rust versions `description` is deprecated (since 1.42.0). We can use `to_string()` instead.
This function seems to be heavily used when implementing `Display` for errors.
Answers:
username_1: We ended up implementing the `Display` trait, which is used to automatically implement `ToString` when present.
Status: Issue closed
|
portainer/portainer | 631504378 | Title: Simple problem with ingress ports limitation
Question:
username_0: **Is your feature request related to a problem? Please describe.**
I've got a lot of stacks in Portainer, and each one has at least 1 port exposed via ingress. Right now I have around 60 stacks with 128 ports exposed, and when I try to expose another one, the container won't start and returns the following in the logs:
```rpc error: code = Unknown desc = warning: incomplete log stream. some logs could not be retrieved for the following reasons: node pg43507dlpaln8kkztvievva8 is not available```
Portainer version: `1.23.1`
Answers:
username_1: Can you deploy a new service via CLI?
username_1: Also, on your host can you run the command “dmesg”
And look for errors relating to arp_cache.
username_0: @username_1 Every deploy succeeds, but the container inside goes to status ready and then changes to failed. I can successfully deploy a service without a port exposed via ingress, or with the port exposed in host mode.
username_1: Looks like you are hitting an ingress limitation… exactly how many ports are you exposing before you see this error, and how many nodes in your cluster?
You could try deploying with DNSRR publishing mode to see if it’s a limit of the routing mesh.
username_1: OK, so an update... you have ABSOLUTELY hit a limit with Swarm's Ingress Network.
I just deployed 130 NGINX services (single replica, single port exposed). 128 of them provisioned perfectly; the 129th and 130th just sit in an error state. If I kill one of the working services, then the 129th will successfully provision.
Status: Issue closed
|
bumptech/glide | 153969917 | Title: How to use Glide to upload image to server?
Question:
username_0:
**Glide Version**: 3.7
I have an image stored in internal storage and I want to compress it and upload it to a server. How can I do this using Glide? Thank you for providing this library, it's amazing :)
Answers:
username_1: Glide can only read files, see https://github.com/bumptech/glide#glide
That said you can prepare your image for upload with `.asBitmap().toBytes().into()`, see https://github.com/bumptech/glide/issues/481#issuecomment-107643716 for a full example. You'll need to upload the byte[] instead of `IOTools.writeFile`, and do it on a background thread!
username_0: How do you save the bytes to a particular path?
username_1: https://www.google.com/search?q=write%20byte%20array%20to%20file%20java
If you want to upload search for "upload http java".
username_0: Is there a way to specify the size of the compressed image? Let's say the initial size is 1MB and I want it to be compressed to about 100kb.
username_1: That's only possible if you're using low level libjpeg to compress. If size is important you can do a binary search and compress the image with different qualities to find a good enough approximation.
I think it's much easier though to opt for a max size, because if the image cannot be bigger than that, then the byte size is capped as well. Play around with a few quality/max size variations with a few sample images to find good values.
username_0: By opting for a max size, do you mean setting the quality to 100?
So the code will be something like:
`.toBytes(CompressFormat.JPEG, 100)`
Is that right?
username_1: No, that would result in the best quality, and the byte size would be the biggest possible. I meant these two lines:
```java
.atMost()
.override(2048, 2048)
```
which will result in downsizing bigger images to have each side less than 2048. Here's a sample I have, I took 1882 images (with varying image dimensions) with my phone's camera and used the exact code in https://github.com/bumptech/glide/issues/481#issuecomment-107643716 to store it in my DB:
max: 775k, min: 11k, average: 159k, a quick histogram plot made with https://www.easycalculation.com/graphs/create-histogram.php:

You can see that most images are 72-200k. So I guess if you want to average around 100k you either need to use 70-75 as quality, or decrease the override size to ~1500.
username_0: Thank you for the explanation. May God bless your soul!
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 521784661 | Title: Filter & Transform paragraph-link_teaser content
Question:
username_0: ## Problem Statement
Our tome-sync data doesn't match up with our templates, and we need to follow the steps [here](https://github.com/department-of-veterans-affairs/va.gov-team/issues/2835) to ensure that the data will be compatible with our current templates.
## Goal
Content from `paragraph-link_teaser` entities match their corresponding entities from `pages.json`<issue_closed>
Status: Issue closed |
icgc-argo/roadmap | 648497836 | Title: E2E Testing: Typescript transition:
Question:
username_0: Typescript will introduce a proper build step for the code, which helps with test code validation. This both improves the quality of test code in the long run, and makes writing tests easier by providing a standard interface for test developers to implement.
- Set up initial typescript build system, with allowJs set to true.
- Create a base ArgoTestSuite class that implements the test suite interface and provides a base implementation of the BrowserStack status update functionality (see the sketch after this list).
- Re-implement existing tests as extensions of ArgoTestSuite in TypeScript. Create TypeScript versions of the utility functions along the way.
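For illustration, a rough sketch of what such a base class could look like. The `TestSuite` interface, class shape, and environment variable names are assumptions rather than the project's actual API; only the BrowserStack session update call follows the documented Automate REST endpoint.
```typescript
import fetch from "node-fetch";

// Hypothetical test-suite contract; the real interface used by the ARGO e2e tests will differ.
export interface TestSuite {
  name: string;
  run(): Promise<void>;
}

export abstract class ArgoTestSuite implements TestSuite {
  abstract name: string;

  // Concrete suites implement their own steps (navigate, wait, click, assert, ...).
  protected abstract execute(): Promise<void>;

  async run(): Promise<void> {
    try {
      await this.execute();
      await this.reportStatus("passed");
    } catch (err) {
      await this.reportStatus("failed", String(err));
      throw err;
    }
  }

  // Base implementation of the BrowserStack status update, shared by all suites.
  // The session id and credentials are read from assumed environment variables.
  protected async reportStatus(status: "passed" | "failed", reason = ""): Promise<void> {
    const sessionId = process.env.BROWSERSTACK_SESSION_ID;
    if (!sessionId) return;

    const auth = Buffer.from(
      `${process.env.BROWSERSTACK_USERNAME}:${process.env.BROWSERSTACK_ACCESS_KEY}`
    ).toString("base64");

    await fetch(`https://api.browserstack.com/automate/sessions/${sessionId}.json`, {
      method: "PUT",
      headers: { "Content-Type": "application/json", Authorization: `Basic ${auth}` },
      body: JSON.stringify({ status, reason }),
    });
  }
}
```
A suite such as the join-program test would then extend `ArgoTestSuite`, implement `execute()`, and inherit the status reporting for free.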
Answers:
username_0: ///NOTE FOR PLANNING - @username_2 @username_1 please decide a single test to implement to scope the level of this ticket correctly in planning tomorrow! additional tickets for the remaining tests can be made after
username_1: I would suggest `join-program.js`
`join-program.js` test has the most varied functionality (interacting with iframes, setting valid arbitrary timeouts to wait for email confirmation, form input) in addition to the standard "navigate to page, wait, click, assert".
username_2: Deprecate old utility functions as you create new typescript implementations of them.
username_1: Do we want the TS test to be the only one run right now?
Or hold off until we transition all the tests? ( there's only a handful right now )
username_0: - need to remove the older tests where there are TypeScript files that are duplicate versions of the same test
- deploy the one script that has been converted
username_0: @username_1 is there anything to test or deploy here?
Status: Issue closed
|
ant-design/ant-design | 241216798 | Title: Found two problems using DatePicker in React, looking for expert guidance
Question:
username_0: **I found two problems using DatePicker in React:**
1. There is no function in the API that can be called to clear the value in the input box; you have to move to the input box and click the clear button to clear it.
2. When DatePicker's showTime is true, the time selection cannot be controlled, e.g. the time interval, disabling the selection of milliseconds, and so on.
This is my first time using antd and it feels quite good to use; it's just missing a few features!
Answers:
username_1: Is there a good solution for the first problem? I also need to clear the value in the input box automatically now, but I can't find an API for it.
username_2: @username_1 Setting `value` to `null` clears it.
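For illustration, a minimal controlled-DatePicker sketch of that suggestion; the component and state names are made up, and the cast works around typings that declare `value` as `Moment | undefined` only.
```tsx
import React, { useState } from "react";
import { DatePicker } from "antd";
import { Moment } from "moment";

// Controlled DatePicker with a programmatic "clear": setting the controlled
// value back to null empties the field.
const ClearableDatePicker: React.FC = () => {
  const [value, setValue] = useState<Moment | null>(null);

  return (
    <div>
      {/* cast covers antd typings that don't include null for `value` */}
      <DatePicker value={value as any} onChange={(date) => setValue(date)} />
      <button onClick={() => setValue(null)}>Clear</button>
    </div>
  );
};

export default ClearableDatePicker;
```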
username_1: OK, thanks. |
zopefoundation/zc.intid | 192570630 | Title: Raise subclasses of KeyError/ValueError
Question:
username_0: We've found it useful to be able to distinguish the errors that come from an intid utility from generic dictionary/BTree errors. The proposed errors are:
```python
class IntIdMissingError(KeyError):
"""
Raised by the utility when ``getId`` fails.
"""
class ObjectMissingError(KeyError):
"""
Raised by the utility when ``getObject`` fails.
"""
class IntIdInUseError(ValueError):
"""
Raised by the utility when ``register`` tries to reuse an intid.
(We haven't actually defined this one but it seems like it should be defined for completeness.)
"""
```
Ideally these would actually go in `zope.intid` (although zc.intid doesn't currently have a dependency there).
Answers:
username_0: cf zopefoundation/zope.intid#5
username_0: The basics for these have been added to zope.intid, though that is not yet on PyPI.
username_0: zope.intid 4.2.0 is released to PyPI
username_0: Another useful exception from this library would be `IntIdMismatchError`. It's similar to `IntIdsCorruptedError`, but because the attribute is external and not necessarily under the library's control, it's not necessarily corruption.
Status: Issue closed
|