repo_name | issue_id | text
---|---|---|
aranja/rakning-c19-app | 602770883 | Title: [Idea] Add info link about testing and appointments
Question:
username_0: I noticed none of the links in the Home Screen are related to the testing efforts from deCODE Genetics. I think it would be a good idea to add this, or the link to book an appointment for a test.
I was reading this [piece of news](https://www.mbl.is/frettir/innlent/2020/04/19/serstakt_atak_til_ad_na_utlendingum_i_skimun/) about an effort to reach foreigners to increase the screening in that population. Well, if it's really important for the authorities, we have the perfect tool for that with [more than 132.000 installations](https://www.mbl.is/frettir/taekni/2020/04/17/bylting_handan_vid_hornid/) :) In theory, I guess we could even send targeted push notifications to people with the app in languages other than Icelandic, but personally I would think it through very thoroughly before we go down that path.
Do the authorities even know about the translation effort we are making? Maybe they would try a different strategy if they were aware of it.
Answers:
username_1: Good idea. I'll be sure to bring it up with the larger teams, though meetings are less regular right now. |
baskerville/bspwm | 208141408 | Title: Issues with rofi window switching
Question:
username_0: I'm not sure if this is the right place or this is even a bspwm specific issue, but I've been having an issue where [rofi](https://github.com/DaveDavenport/rofi) is only switching windows around 30-45% of the time.
There was an issue submitted to the rofi GitHub [here](https://github.com/DaveDavenport/rofi/issues/557) that describes the issue in more detail (see the Gfycat links for video). The dev was unable to track down the source of the issue and suggested that it's possible that bspwm could be causing it.
Here is the version info for my machine:
```
$ bspwm -v
0.9.2-19-gada559d
```
```
$ rofi -v
Version: 1.3.1-240-g98c625f-dirty (makepkg)
```
```
$ uname -r
4.9.8-1-ARCH
```
Distro: `Arch Linux`
[bspwmrc](https://gist.github.com/username_0/318938553d5f56715f22d3fb03c912d7)
Let me know if there's any more info I can provide that might help.
Answers:
username_1: I see this appears resolved upstream in the linked issue.
Status: Issue closed
|
spatie/laravel-model-states | 502228834 | Title: Tinyint columns are not loading state correctly
Question:
username_0: Tinyint columns are not loading state correctly.
When the model is loaded, Laravel returns `null` instead of an instance of the State class.
Answers:
username_1: Can you PR a failing test for this?
Status: Issue closed
username_1: Thanks for the help. It's fixed in 1.1.3: https://github.com/spatie/laravel-model-states/releases/tag/1.1.3 |
terraform-google-modules/terraform-google-github-actions-runners | 936535278 | Title: Update README.md Requirements Terraform Version
Question:
username_0: The README requirements stipulate Terraform v0.12, while the modules require v0.13.
Requirements: https://github.com/terraform-google-modules/terraform-google-github-actions-runners#requirements
Module version example: https://github.com/terraform-google-modules/terraform-google-github-actions-runners/blob/master/modules/gh-runner-mig-vm/versions.tf#L18 |
bferguson3/advent-serv | 725636926 | Title: unique packet ID for all packets
Question:
username_0: I think so, the unix epoch should work
Answers:
username_1: @username_0 Can we use a timestamp on the packets for this? Some packets will already have it... can just guarantee all will have it going forward.
username_0: I think so, the unix epoch should work
username_1: Ok cool then this one should be done already via the `ts` property on the response packets.
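For illustration only (not the project's actual code): a minimal Python sketch of what guaranteeing a unix-epoch `ts` on every response packet could look like. The packet shape and helper name are made up for the example.
```python
import json
import time

def stamp_packet(packet: dict) -> dict:
    """Attach a unix-epoch timestamp unless the packet already carries one."""
    packet.setdefault("ts", int(time.time()))
    return packet

# hypothetical response packet
outgoing = stamp_packet({"type": "player_move", "x": 3, "y": 7})
print(json.dumps(outgoing))  # e.g. {"type": "player_move", "x": 3, "y": 7, "ts": 1698765432}
```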
Status: Issue closed
|
aiortc/aiortc | 455130188 | Title: aiortc fails to generate answer because of timeout
Question:
username_0: I am working on a webserver that transforms WebRTC video streams. Locally it works fine, but after deploying to a VM it fails to generate an SDP answer.
The RTCConfiguration has STUN and TURN servers. The STUN server is the default one from Google; the TURN server is my own coturn deployed to another VM, and I am sure the TURN server works fine because it is shared between other projects. According to the coturn logs, there are a bunch of requests from the client side (Android app), but there are no requests at all from the aiortc server. All network ports are open (I tested this with a simple client-server app).
According to the verbose logs from aiortc, STUN works fine, but the TURN connection fails with a timeout. The TURN server URL has been replaced intentionally.
```
INFO:pc:51c8861d-2f65-4559-a6d4-0ed6880181ad Created for 51c8861d-2f65-4559-a6d4-0ed6880181ad
INFO:pc:51c8861d-2f65-4559-a6d4-0ed6880181ad Track video received
offer set
answer ready
DEBUG:ice:Connection(0) protocol(0) connection_made(<_SelectorDatagramTransport fd=30 read=idle write=<idle, bufsize=0>>)
DEBUG:ice:Connection(0) protocol(1) connection_made(<_SelectorDatagramTransport fd=31 read=idle write=<idle, bufsize=0>>)
DEBUG:ice:Connection(0) protocol(2) connection_made(<_SelectorDatagramTransport fd=32 read=idle write=<idle, bufsize=0>>)
DEBUG:ice:Connection(0) protocol(3) connection_made(<_SelectorDatagramTransport fd=33 read=idle write=<idle, bufsize=0>>)
DEBUG:ice:Connection(0) protocol(4) connection_made(<_SelectorDatagramTransport fd=34 read=idle write=<idle, bufsize=0>>)
DEBUG:ice:Connection(0) protocol(3) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xc8\x88\x91\xe8\xfe\x9f\xf5\x9f9\xdf\xe0\xae')
DEBUG:ice:Connection(0) protocol(4) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'5\x8e\x87\x12\xf7\x90\x8d\x92Y\r\xb8\xe4')
DEBUG:ice:Connection(0) protocol(0) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'jPM\x08\xb4\xdc:\xec\xaf\r\xb3\xd7')
DEBUG:ice:Connection(0) protocol(2) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'8\x0f\xb6\xba_\x917\x0e\xba|E\xb3')
DEBUG:ice:Connection(0) protocol(1) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xd6O(\xa9\x0f\xdd\xad>\x05j\xdb\xe0')
DEBUG:ice:Connection(0) protocol(3) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xc8\x88\x91\xe8\xfe\x9f\xf5\x9f9\xdf\xe0\xae')
DEBUG:ice:Connection(0) protocol(4) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'5\x8e\x87\x12\xf7\x90\x8d\x92Y\r\xb8\xe4')
DEBUG:ice:Connection(0) protocol(0) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'jPM\x08\xb4\xdc:\xec\xaf\r\xb3\xd7')
DEBUG:ice:Connection(0) protocol(2) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'8\x0f\xb6\xba_\x917\x0e\xba|E\xb3')
DEBUG:ice:Connection(0) protocol(1) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xd6O(\xa9\x0f\xdd\xad>\x05j\xdb\xe0')
DEBUG:ice:Connection(0) protocol(3) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xc8\x88\x91\xe8\xfe\x9f\xf5\x9f9\xdf\xe0\xae')
DEBUG:ice:Connection(0) protocol(4) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'5\x8e\x87\x12\xf7\x90\x8d\x92Y\r\xb8\xe4')
DEBUG:ice:Connection(0) protocol(0) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'jPM\x08\xb4\xdc:\xec\xaf\r\xb3\xd7')
DEBUG:ice:Connection(0) protocol(2) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'8\x0f\xb6\xba_\x917\x0e\xba|E\xb3')
DEBUG:ice:Connection(0) protocol(1) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xd6O(\xa9\x0f\xdd\xad>\x05j\xdb\xe0')
DEBUG:ice:Connection(0) protocol(3) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xc8\x88\x91\xe8\xfe\x9f\xf5\x9f9\xdf\xe0\xae')
DEBUG:ice:Connection(0) protocol(4) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'5\x8e\x87\x12\xf7\x90\x8d\x92Y\r\xb8\xe4')
DEBUG:ice:Connection(0) protocol(0) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'jPM\x08\xb4\xdc:\xec\xaf\r\xb3\xd7')
DEBUG:ice:Connection(0) protocol(2) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'8\x0f\xb6\xba_\x917\x0e\xba|E\xb3')
DEBUG:ice:Connection(0) protocol(1) > ('172.16.31.10', 19302) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xd6O(\xa9\x0f\xdd\xad>\x05j\xdb\xe0')
DEBUG:turn:turn/udp connection_made(<_SelectorDatagramTransport fd=35 read=idle write=<idle, bufsize=0>>)
DEBUG:turn:turn/udp > ('<turn server url>', 3478) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0)\x01\xfb\xab\xf4\xa3O\x00\xc1Q\xdb')
DEBUG:turn:turn/udp > ('<turn server url>', 3478) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0)\x01\xfb\xab\xf4\xa3O\x00\xc1Q\xdb')
DEBUG:turn:turn/udp > ('<turn server url>', 3478) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0)\x01\xfb\xab\xf4\xa3O\x00\xc1Q\xdb')
DEBUG:turn:turn/udp > ('<turn server url>', 3478) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0)\x01\xfb\xab\xf4\xa3O\x00\xc1Q\xdb')
DEBUG:turn:turn/udp > ('<turn server url>', 3478) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0)\x01\xfb\xab\xf4\xa3O\x00\xc1Q\xdb')
DEBUG:turn:turn/udp > ('<turn server url>', 3478) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0)\x01\xfb\xab\xf4\xa3O\x00\xc1Q\xdb')
DEBUG:turn:turn/udp > ('<turn server url>', 3478) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0)\x01\xfb\xab\xf4\xa3O\x00\xc1Q\xdb')
(<class 'aioice.exceptions.TransactionTimeout'>, TransactionTimeout(), <traceback object at 0x7f13ac4b0108>)
DEBUG:asyncio:Using selector: EpollSelector
(<class 'concurrent.futures._base.CancelledError'>, CancelledError(), <traceback object at 0x7f13ac4b0348>)
DEBUG:ice:controlled - new -> closed
INFO:pc:51c8861d-2f65-4559-a6d4-0ed6880181ad ICE connection state is closed
DEBUG:ice:Connection(0) protocol(0) connection_lost(None)
DEBUG:ice:Connection(0) protocol(1) connection_lost(None)
DEBUG:ice:Connection(0) protocol(2) connection_lost(None)
DEBUG:ice:Connection(0) protocol(3) connection_lost(None)
DEBUG:ice:Connection(0) protocol(4) connection_lost(None)
```
Any ideas what can be wrong?
Answers:
username_0: RTCPeerConnection creation:
``` python
configuration = RTCConfiguration([
RTCIceServer("turn:rideondev.westeurope.cloudapp.azure.com:3478", "username", "password"),
RTCIceServer("stun:stun.l.google.com:19302")
])
self.pc = RTCPeerConnection(configuration)
```
Offer processing:
``` python
try:
# handle offer
await pc.setRemoteDescription(offer)
print("offer set")
self.addLocalTracks()
# send answer
answer = await pc.createAnswer()
print("answer ready")
await pc.setLocalDescription(answer)
print("answer set")
return messages.Answer(pc.localDescription.sdp, pc.localDescription.type)
except:
info = sys.exc_info()
print(info)
```
username_0: RTCPeerConnection creation:
``` python
configuration = RTCConfiguration([
RTCIceServer("turn:<turn server url>:3478", "username", "password"),
RTCIceServer("stun:stun.l.google.com:19302")
])
self.pc = RTCPeerConnection(configuration)
```
Offer processing:
``` python
try:
# handle offer
await pc.setRemoteDescription(offer)
print("offer set")
self.addLocalTracks()
# send answer
answer = await pc.createAnswer()
print("answer ready")
await pc.setLocalDescription(answer)
print("answer set")
return messages.Answer(pc.localDescription.sdp, pc.localDescription.type)
except:
info = sys.exc_info()
print(info)
```
username_0: Also, I have tried to send UDP packets from the VM where the server is hosted to the VM where TURN is hosted, and it works fine.
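For illustration only, a rough Python sketch of such a reachability check: it sends a minimal STUN Binding Request (which coturn answers even without credentials) and treats a timeout as a sign the port may be blocked. The hostname is a placeholder.
```python
import os
import socket
import struct

def stun_binding_probe(host: str, port: int = 3478, timeout: float = 2.0) -> bool:
    # 20-byte STUN Binding Request header: type=0x0001, length=0, magic cookie, 12-byte txn id
    request = struct.pack("!HHI12s", 0x0001, 0, 0x2112A442, os.urandom(12))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (host, port))
        try:
            sock.recvfrom(2048)
            return True   # got a response -> the port is reachable
        except socket.timeout:
            return False  # no answer -> consistent with the port being blocked

print(stun_binding_probe("turn.example.com"))
```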
username_0: Never mind, I found out that port 3478 is blocked on the server VM. I moved the TURN server to another port and now everything is working fine.
Status: Issue closed
username_1: Thanks for letting me know what the issue was! |
willis-meyers/artgallery | 109888736 | Title: Image Slider
Question:
username_0: @username_1 @username_2 -
Hey folks,
I deployed our site to aggregatespace.com/hackbright so we can start viewing mobile and Windows bugs more easily. I am testing out a new slider since the Bootstrap carousel wasn't really working for me aesthetically. Check out the link above on https://www.browserstack.com/responsive and we can get going from there. Looking forward to tonight!
Answers:
username_1: @username_0 I like it that way! I'm excited, this is coming together nicely.
Awesome job on the contact page, too!
username_2: awesome! it's looking good!
Status: Issue closed
|
JuliaGPU/CUDA.jl | 911751450 | Title: CUDA.jl cannot find installed CUPTI libraries with local installation on linux
Question:
username_0: I have a local installation of CUDA that I've been using successfully with CUDA.jl. The one caveat is that CUDA.jl doesn't seem to detect CUPTI even though it is installed. Here is the output of `CUDA.versioninfo`:
```
julia> CUDA.versioninfo()
CUDA toolkit 11.3.0, local installation
CUDA driver 11.3.0
NVIDIA driver 465.31.0
Libraries:
- CUBLAS: 11.5.1
- CURAND: 10.2.4
- CUFFT: 10.4.2
- CUSOLVER: 11.1.2
- CUSPARSE: 11.6.0
- CUPTI: missing
- NVML: 11.0.0+465.31
- CUDNN: 8.20.0 (for CUDA 11.3.0)
- CUTENSOR: 1.3.0 (for CUDA 11.2.0)
Toolchain:
- Julia: 1.6.0
- LLVM: 11.0.1
- PTX ISA support: 3.2, 4.0, 4.1, 4.2, 4.3, 5.0, 6.0, 6.1, 6.3, 6.4, 6.5, 7.0
- Device support: sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61, sm_62, sm_70, sm_72, sm_75, sm_80
Environment:
- JULIA_CUDA_USE_BINARYBUILDER: false
1 device:
0: NVIDIA GeForce RTX 3090 (sm_86, 22.306 GiB / 23.697 GiB available)
```
I've been trying to figure out why this might be. It seems like the root cause may be that the CUPTI version number is handled by CUDA.jl as a string instead of a version number, and as a consequence only the _exact_ version number for this toolkit version can be used, instead of compatible library versions (like mine).
It seems that the library versions that correspond to a particular toolkit version are defined in `cuda_library_versions`, and unlike the other libraries, which have version numbers, the CUPTI version number is a simple string, as shown here: https://github.com/JuliaGPU/CUDA.jl/blob/5d6127dbbef495c94d3dd8de98162188062e11b1/deps/discovery.jl#L277
This version number is then used by `CUDA.find_library` to generate the versioned library names that would be compatible with the specified library version. If the version is specified as a version number on a unix system then acceptable library files just have to match the major version number as shown here: https://github.com/JuliaGPU/CUDA.jl/blob/5d6127dbbef495c94d3dd8de98162188062e11b1/deps/discovery.jl#L53-L56
However for strings this is not the case: https://github.com/JuliaGPU/CUDA.jl/blob/5d6127dbbef495c94d3dd8de98162188062e11b1/deps/discovery.jl#L57-L58
My particular libcupti library is at `/usr/local/cuda-11.3/lib64/libcupti.so.2021.1.1`, which seems to be following semver with a rather unusual major version (the year), and I assume is compatible with `libcupti.so.2021.1.0` expected by CUDA.jl.
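For illustration only (plain Python, not the actual Julia discovery code), the practical difference between the two matching behaviours described above: an exact string match misses `libcupti.so.2021.1.1`, while matching on the major component would accept it.
```python
import re

def acceptable_pattern(libname: str, version):
    if isinstance(version, str):
        # string version: only the exact file name is accepted
        return re.compile(rf"^lib{libname}\.so\.{re.escape(version)}$")
    # structured version: accept any library sharing the major component
    return re.compile(rf"^lib{libname}\.so\.{version[0]}(\.\d+)*$")

exact = acceptable_pattern("cupti", "2021.1.0")
major = acceptable_pattern("cupti", (2021, 1, 0))
print(bool(exact.match("libcupti.so.2021.1.1")))  # False: exact-string matching misses it
print(bool(major.match("libcupti.so.2021.1.1")))  # True: same major version is accepted
```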
Happy to try to make a PR to fix this if you agree that this is the cause of the problem.
Answers:
username_1: CUPTI 2021.1.1 is part of CUDA 11.3 Update 1, and should be discovered _when_ your ptxas/nvdisasm versions are discovered as being from CUDA 11.3.1. Where did you get that CUDA distribution from? Could you show the version strings as reported by `ptxas` and `nvdisasm` from `/usr/local/cuda-11.3`?
username_0: I think I have CUDA 11.3 Update 1, if that's what you're saying.
```
$ /usr/local/cuda-11.3/bin/ptxas --version
ptxas: NVIDIA (R) Ptx optimizing assembler
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:14:31_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0
$ /usr/local/cuda-11.3/bin/nvdisasm --version
nvdisasm: NVIDIA (R) CUDA disassembler
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Mar_21_19:13:56_PDT_2021
Cuda compilation tools, release 11.3, V11.3.58
Build cuda_11.3.r11.3/compiler.29745058_0
```
I thought CUDA.jl no longer differentiated between 11.3.1 and 11.3? https://github.com/JuliaGPU/CUDA.jl/blob/ad946c499315f54fa71cf2901812e3f451bd9723/deps/discovery.jl#L502-L511
I don't think this rounding down the toolkit version to the lowest compatible version would be a problem if the CUPTI version was a version number instead of a string.
username_0: Or rather, the nvdisasm version is used to determine the toolkit version, and it is the same between 11.3.0 and 11.3.1: https://github.com/JuliaGPU/CUDA.jl/blob/ad946c499315f54fa71cf2901812e3f451bd9723/deps/discovery.jl#L357-L364
username_1: https://github.com/JuliaGPU/CUDA.jl/commit/c2dfd606fc7757f12f8387aefce69f8b0d7c44e5#diff-a2e31525a9acaf0886979de31e8d7a41562932c21a82a0c080deb60f4e8641dbR174
username_0: Oh then maybe an update to CUDA.jl will fix it? Thanks! I'll try that!
username_1: Yes, please try the master branch.
username_0: Success! Thank you so much!
```julia
julia> CUDA.versioninfo()
CUDA toolkit 11.3.1, local installation
CUDA driver 11.3.0
NVIDIA driver 465.31.0
Libraries:
- CUBLAS: 11.5.1
- CURAND: 10.2.4
- CUFFT: 10.4.2
- CUSOLVER: 11.1.2
- CUSPARSE: 11.6.0
- CUPTI: 14.0.0
- NVML: 11.0.0+465.31
- CUDNN: 8.20.0 (for CUDA 11.3.0)
- CUTENSOR: 1.3.0 (for CUDA 11.2.0)
Toolchain:
- Julia: 1.6.0
- LLVM: 11.0.1
- PTX ISA support: 3.2, 4.0, 4.1, 4.2, 4.3, 5.0, 6.0, 6.1, 6.3, 6.4, 6.5, 7.0
- Device capability support: sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61, sm_62, sm_70, sm_72, sm_75, sm_80
Environment:
- JULIA_CUDA_USE_BINARYBUILDER: false
1 device:
0: NVIDIA GeForce RTX 3090 (sm_86, 23.001 GiB / 23.697 GiB available)
```
username_1: Glad it works! FYI, I hope to create a release some time this week, so you won't have to use the master branch for long.
username_0: Unfortunately master is not passing tests locally, so I may just live without profiling until the new release. Seemed like most of the tests were failing for me. Also I noticed that the test dependencies are missing `FillArrays`, so I'll make a PR to add that.
Nonetheless, this issue seems to be solved so I'll close the issue. Thanks again!
Status: Issue closed
username_0: Also, using a version number wouldn't have worked; however, it looks like the "real" version (11.3) is used for the library on my system (I installed from the official NVIDIA repo):
```
$ ls -la /usr/local/cuda-11.3/lib64/libcupti*
lrwxrwxrwx. 1 root root 16 May 5 22:35 /usr/local/cuda-11.3/lib64/libcupti.so -> libcupti.so.11.3
lrwxrwxrwx. 1 root root 20 May 5 22:35 /usr/local/cuda-11.3/lib64/libcupti.so.11.3 -> libcupti.so.2021.1.1
-rwxr-xr-x. 1 root root 6748816 May 5 22:35 /usr/local/cuda-11.3/lib64/libcupti.so.2021.1.1
-rw-r--r--. 1 root root 16022554 May 5 22:35 /usr/local/cuda-11.3/lib64/libcupti_static.a
``` |
w3c/aria-at | 1034537095 | Title: Create tests for APG design pattern example: Main Landmark
Question:
username_0: #### Applies To
* APG design pattern: [Main](https://w3c.github.io/aria-practices/#aria_lh_main)
* Specific example: [Main Landmark Example](https://w3c.github.io/aria-practices/examples/landmarks/main.html)
#### Testing Notes
This issue thread will house all documentation relating to the development of a test plan for the "Main Landmark" APG example, which conforms to the "Main" design pattern. All follow-ups (including the test plan itself) will be added as new comments to facilitate notifications, but this initial comment will be updated with relevant details and URLs as needed for convenience.
#### Additional References
* [main (role)](https://w3c.github.io/aria/#main)<issue_closed>
Status: Issue closed |
Puzzlepart/prosjektportalen365 | 1168069372 | Title: If a project is deleted the timeline doesn't show projects
Question:
username_0: **Describe the bug**
If a project is deleted, the project is still an element in the "Prosjekter" list. This causes the timeline to fail and won't show.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Timeline after a project has been deleted
**Solution**
Add more error handling for this and add null checks for projects that are deleted.<issue_closed>
Status: Issue closed |
yejg/yejg.github.io | 340089044 | Title: Setting up and using a Spring Cloud registry center — 钢仔
Question:
username_0: https://username_0.top/2018/07/10/springcloud-registry-center/
1. Setting up the Spring Cloud registry center. The registry center uses Eureka; the early artifactId was spring-cloud-starter-eureka-server, but newer versions need spring-cloud-starter-netflix-eureka-server. 1. Create a new Maven project. 2. Add the following dependency to the pom file: <project xmlns=
Answers:
username_0: test
username_1: Grabbing the first comment~ haha.
The Spring ecosystem is really popular right now.
username_0: Yeah, exactly. I've been learning this recently, keeping up with the pace of the whole Spring stack, so I shouldn't fall behind.
platformio/platformio-vscode-ide | 718590898 | Title: Start PIO Home Server
Question:
username_0: # Description of problem
Leave a comment...
BEFORE SUBMITTING, PLEASE SEARCH FOR DUPLICATES IN
- https://github.com/platformio/platformio-vscode-ide/issues?q=is%3Aissue
# Configuration
VSCode: 1.50.0
PIO IDE: v2.1.0
System: Windows_NT, 10.0.18363, x64
# Exception
```
Error: Webview is disposed
at b.assertNotDisposed (c:\Users\PMCA\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\services\extensions\node\extensionHostProcess.js:840:836)
at b.set html [as html] (c:\Users\PMCA\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\services\extensions\node\extensionHostProcess.js:840:225)
at u.newPanel (c:\Users\PMCA\.vscode\extensions\platformio.platformio-ide-2.1.0\dist\extension.js:1:26272)
at processTicksAndRejections (internal/process/task_queues.js:94:5)
at async u.toggle (c:\Users\PMCA\.vscode\extensions\platformio.platformio-ide-2.1.0\dist\extension.js:1:25794)
```<issue_closed>
Status: Issue closed |
tamakon/imakoko | 318210302 | Title: Reloading a dynamically generated path shows an error page
Question:
username_0: /find/_id/map
のページをリロードすると発生した。
エラーページを表示させないように修正するとして、現在はfirebaseのエラーページが直で表示されてしまうので、エラーページも別途用意したほうが良さそう。
Answers:
username_1: https://nuxtjs.org/api/configuration-generate#routes
Either handle it with a Promise in the config file, or drop Firebase lol
username_0: What we want to do hasn't changed, so for now I'll go ahead with implementing the Promise approach!
username_0: Also, I'm thinking the application itself should serve the error page.
So I'll build the page on the Nuxt side.
username_0: The Promise approach doesn't look like it'll work.
In the end this seems to mean generating a static page for every dynamic path.
So if there are 10,000 possibilities, it would generate 10,000 pages lol
username_1: Seriously?? lol
username_1: These are probably the options we can take:
- Stop using `generate` (SSR with a `firebase function`)
- Stop using dynamic paths (handle it with parameters instead)
- Stop using Firebase (switch to Heroku)
The best option is probably SSR after all. It's the proper approach.
username_0: Actually, the Google Maps library we also use on the client side doesn't support SSR, so that would need a workaround too.
username_1: I've been working hard on this and got it rendering with SSR, but the element-ui tags aren't being converted.
username_1: I found out that specifying `buildDir` makes `element-ui` stop working.
Status: Issue closed
username_1: Oops, I closed this by mistake lol
username_1: This happened when reloading the /find/_id/map page.
Even if we fix it so that error page isn't shown, Firebase's error page is currently displayed directly, so it seems we should also prepare a separate error page.
username_1: Here we go.
username_1: Firebase only supports Node.js up to v6, so Nuxt, which requires Node.js 8 or higher, can't be deployed lol
username_0: Well, that's no good lol
Should we switch the Nuxt side to a different language instead? lol
username_0: Could it be that the Google Maps API isn't reachable in the same way when accessed from Firebase?
username_1: https://github.com/tamakon/imakoko/tree/ssr-firebase
For now it works up to `firebase-serve`. The problem is deployment lol
username_1: Looks like that's it lol. Damn!
https://stackoverflow.com/questions/48168873/firebase-functions-external-network-is-not-accessible-and-quotas-are-severely-l?rq=1
username_1: Just to share: the problem where Firebase only supports Node.js v6 while Nuxt requires v8+ was solved by downgrading Nuxt (to 1.0.0-rc11).
The problem where `element-ui` stops working when `buildDir` is specified was solved by building with the default setting (`.nuxt`) and then moving the output with the `cp` command lol
username_0: Getting back on topic: SSR looks like a lot of work, so I'm thinking of changing the approach so that the Nuxt side drops dynamic paths and handles them with query params instead.
username_0: I dropped the query params idea and instead changed the app to start in hash mode.
Status: Issue closed
username_0: Solved it with vue-router's `mode: hash`.
NVIDIA/apex | 647290142 | Title: Error in installing
Question:
username_0: I am facing the following issue while installing apex.
ERROR: Command errored out with exit status 1: /home/angand/.conda/envs/bart_env/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-f_fckio8/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-f_fckio8/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --pyprof --cpp_ext --cuda_ext install --record /tmp/pip-record-jrn2mpbx/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/angand/.local/include/python3.6m/apex Check the logs for full command output.
Exception information:
Traceback (most recent call last):
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 841, in install
req_description=str(self.req),
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pip/_internal/operations/install/legacy.py", line 86, in install
raise LegacyInstallFailure
pip._internal.operations.install.legacy.LegacyInstallFailure
Answers:
username_1: Is there any further error message available?
username_0: This is the entire error:
running build_ext
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-f29h187l/setup.py", line 390, in <module>
extras_require=extras,
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/setuptools/__init__.py", line 161, in setup
return distutils.core.setup(**attrs)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 87, in run
_build_ext.run(self)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 309, in build_extensions
self._check_abi()
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 630, in _check_abi
check_compiler_abi_compatibility(compiler)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 216, in check_compiler_abi_compatibility
if not check_compiler_ok_for_platform(compiler):
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 192, in check_compiler_ok_for_platform
which = subprocess.check_output(['which', compiler], stderr=subprocess.STDOUT)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/home/angand/.conda/envs/bart_env/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['which', 'g++']' returned non-zero exit status 1.
Running setup.py install for apex ... error
ERROR: Command errored out with exit status 1: /home/angand/.conda/envs/bart_env/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-f29h187l/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-f29h187l/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-3qypoddu/install-record.txt --single-version-externally-managed --compile --install-headers /home/angand/.conda/envs/bart_env/include/python3.6m/apex Check the logs for full command output.
Exception information:
Traceback (most recent call last):
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 841, in install
req_description=str(self.req),
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pip/_internal/operations/install/legacy.py", line 86, in install
raise LegacyInstallFailure
pip._internal.operations.install.legacy.LegacyInstallFailure
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 188, in _main
[Truncated]
Found link https://files.pythonhosted.org/packages/4a/08/6ca123073af4ebc4c5488a5bc8a010ac57aa39ce4d3c8a931ad504de4185/pip-19.3-py2.py3-none-any.whl#sha256=e100a7eccf085f0720b4478d3bb838e1c179b1e128ec01c0403f84e86e0e2dfb (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 19.3
Found link https://files.pythonhosted.org/packages/af/7a/5dd1e6efc894613c432ce86f1011fcc3bbd8ac07dfeae6393b7b97f1de8b/pip-19.3.tar.gz#sha256=324d234b8f6124846b4e390df255cacbe09ce22791c3b714aa1ea6e44a4f2861 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 19.3
Found link https://files.pythonhosted.org/packages/00/b6/9cfa56b4081ad13874b0c6f96af8ce16cfbc1cb06bedf8e9164ce5551ec1/pip-19.3.1-py2.py3-none-any.whl#sha256=6917c65fc3769ecdc61405d3dfd97afdedd75808d200b2838d7d961cebc0c2c7 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 19.3.1
Found link https://files.pythonhosted.org/packages/ce/ea/9b445176a65ae4ba22dce1d93e4b5fe182f953df71a145f557cffaffc1bf/pip-19.3.1.tar.gz#sha256=21207d76c1031e517668898a6b46a9fb1501c7a4710ef5dfd6a40ad9e6757ea7 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 19.3.1
Skipping link: yanked for reason: <none given>: https://files.pythonhosted.org/packages/60/65/16487a7c4e0f95bb3fc89c2e377be331fd496b7a9b08fd3077de7f3ae2cf/pip-20.0-py2.py3-none-any.whl#sha256=eea07b449d969dbc8c062c157852cf8ed2ad1b8b5ac965a6b819e62929e41703 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*)
Skipping link: yanked for reason: <none given>: https://files.pythonhosted.org/packages/8c/5c/c18d58ab5c1a702bf670e0bd6a77cd4645e4aeca021c6118ef850895cc96/pip-20.0.tar.gz#sha256=5128e9a9401f1d16c1d15b2ed766a79d7813db1538428d0b0ce74838249e3a41 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*)
Found link https://files.pythonhosted.org/packages/57/36/67f809c135c17ec9b8276466cc57f35b98c240f55c780689ea29fa32f512/pip-20.0.1-py2.py3-none-any.whl#sha256=b7110a319790ae17e8105ecd6fe07dbcc098a280c6d27b6dd7a20174927c24d7 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.0.1
Found link https://files.pythonhosted.org/packages/28/af/2c76c8aa46ccdf7578b83d97a11a2d1858794d4be4a1610ade0d30182e8b/pip-20.0.1.tar.gz#sha256=3cebbac2a1502e09265f94e5717408339de846b3c0f0ed086d7b817df9cab822 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.0.1
Found link https://files.pythonhosted.org/packages/54/0c/d01aa759fdc501a58f431eb594a17495f15b88da142ce14b5845662c13f3/pip-20.0.2-py2.py3-none-any.whl#sha256=4ae14a42d8adba3205ebeb38aa68cfc0b6c346e1ae2e699a0b3bad4da19cef5c (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.0.2
Found link https://files.pythonhosted.org/packages/8e/76/66066b7bc71817238924c7e4b448abdb17eb0c92d645769c223f9ace478f/pip-20.0.2.tar.gz#sha256=7db0c8ea4c7ea51c8049640e8e6e7fde949de672bfa4949920675563a5a6967f (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.0.2
Found link https://files.pythonhosted.org/packages/ec/05/82d3fababbf462d876883ebc36f030f4fa057a563a80f5a26ee63679d9ea/pip-20.1b1-py2.py3-none-any.whl#sha256=4cf0348b683937da883ccaae8c8bcfc9b4c7ba4c48b38cc2d89cd7b8d0b220d9 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.1b1
Found link https://files.pythonhosted.org/packages/cd/81/c1184456fe506bd50992571c9f8581907976ce71502e36741f033e2da1f1/pip-20.1b1.tar.gz#sha256=699880a47f6d306f4f9a87ca151ef33d41d2223b81ff343b786d38c297923a19 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.1b1
Found link https://files.pythonhosted.org/packages/54/2e/df11ea7e23e7e761d484ed3740285a34e38548cf2bad2bed3dd5768ec8b9/pip-20.1-py2.py3-none-any.whl#sha256=4fdc7fd2db7636777d28d2e1432e2876e30c2b790d461f135716577f73104369 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.1
Found link https://files.pythonhosted.org/packages/d1/05/059c78cd5d740d2299266ffa15514dad6692d4694df571bf168e2cdd98fb/pip-20.1.tar.gz#sha256=572c0f25eca7c87217b21f6945b7192744103b18f4e4b16b8a83b227a811e192 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.1
Found link https://files.pythonhosted.org/packages/43/84/23ed6a1796480a6f1a2d38f2802901d078266bda38388954d01d3f2e821d/pip-20.1.1-py2.py3-none-any.whl#sha256=b27c4dedae8c41aa59108f2fa38bf78e0890e590545bc8ece7cdceb4ba60f6e4 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.1.1
Found link https://files.pythonhosted.org/packages/08/25/f204a6138dade2f6757b4ae99bc3994aac28a5602c97ddb2a35e0e22fbc4/pip-20.1.1.tar.gz#sha256=27f8dc29387dd83249e06e681ce087e6061826582198a425085e0bf4c1cf3a55 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.1.1
Found link https://files.pythonhosted.org/packages/fe/3b/0fc5e63eb277d5a50a95ce5c896f742ef243be27382303a4a44dd0197e29/pip-20.2b1-py2.py3-none-any.whl#sha256=b4e230e2b8ece18c5a19b818f3c20a8d4eeac8172962779fd9898d7c4ceb1636 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.2b1
Found link https://files.pythonhosted.org/packages/77/3e/6a1fd8e08a06e3e0f54182c7c937bba3f4e9cf1b26f54946d3915021ea2e/pip-20.2b1.tar.gz#sha256=dbf65ecb1c30d35d72f5fda052fcd2f1ea9aca8eaf03d930846d990f51d3f6f6 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*), version: 20.2b1
Given no hashes to check 139 links for project 'pip': discarding no candidates
Removed build tracker: '/tmp/pip-req-tracker-yuo60_uz'
username_1: Thanks for posting that. In those messages, you can see the following one related to `g++`:
```
subprocess.CalledProcessError: Command '['which', 'g++']' returned non-zero exit status 1.
```
Can you check whether `g++` is available on your machine?
username_1: Possible related issue: https://github.com/NVIDIA/apex/issues/386
username_0: Ok Thanks.
Status: Issue closed
|
mintproject/mic | 656083125 | Title: Configuration file must be stored in src directory
Question:
username_0: Wow.
The swat model configuration is outdated
Thanks
Status: Issue closed
Answers:
username_1: mic wrapper already copies any config files (defined by configs command) into the src folder.
```
chris@chris-ubuntu20:/home/chris/Desktop/tests/configTest (master)$ mic encapsulate configs config.json
Automatically found mic.yaml in /home/chris/Desktop/tests/configTest/mic/mic.yaml
Added: /home/chris/Desktop/tests/configTest/config.json as a configuration file
chris@chris-ubuntu20:/home/chris/Desktop/tests/configTest (master)$ mic encapsulate wrapper
Automatically found mic.yaml in /home/chris/Desktop/tests/configTest/mic/mic.yaml
Generating the MIC Wrapper. This generates the directory structure and commands required to run your model
Copying the code: linemodel.py to the MIC Wrapper directory mic/src
Copying the config: config.json to the MIC Wrapper directory mic/src
Success
The wrapper has been generated. You can see it at /home/chris/Desktop/tests/configTest/mic/src/run
The next step is `mic encapsulate run`
The command run is going to create a new directory (execution directory), and MIC is going to copy the inputs, code, and configuration files and run the model.
For more information, you can type.
mic encapsulate run --help
chris@chris-ubuntu20:/home/chris/Desktop/tests/configTest (master)$ tree
.
├── config.json
├── LICENSE
├── linemodel.py
├── mic
│ ├── data
│ │ ├── config.json
│ │ └── x.csv
│ ├── docker
│ │ ├── Dockerfile
│ │ ├── entrypoint.sh
│ │ └── requirements.txt
│ ├── mic.yaml
│ └── src
│ ├── config.json
│ ├── io.sh
│ ├── linemodel.py
│ ├── output.sh
│ └── run
├── README.md
├── results
│ └── y.csv
└── x.csv
5 directories, 17 files
```
username_0: Wow.
The swat model configuration is outdated
Thanks
Status: Issue closed
|
godotengine/godot | 362552002 | Title: Weird behavior with kinematicbody snap when moving from floor to wall
Question:
username_0: Godot version 534b7ef292fa4686ae0fd7d34b6dcdcbe14045ea
**This is 3D, it is just sideview to better illustrate what is happening.**
Without snap.

With snap.

As you can see, with snap the ball tries to snap to the wall, but then realises it shouldn't. Without snap it continues smoothly.
[Minimal Project.zip](https://github.com/godotengine/godot/files/2404907/Minimal.Project.zip)
Answers:
username_1: What's the wanted result?
username_0: I would say it should not snap to a wall.
I guess someone else might want it to snap and slide down, but I don't know.
username_1: I didn't implement it so I'm not sure about its correct behaviour; @username_4 can you please check this?
username_0: I think you are mixing this with how 2D works. In 3D up is positive so Vector3(0, 1, 0) is up vector.
If you want to move up the y axis in 2D you add velocity like Vector2(0, -1), but in 3D you add Vector3(0, 1, 0). (I haven't used 2D though so I might be wrong.)
username_2: oh, yeah, it's like that in 2D and works properly. I didn't know the Y coordinates were inverted in 3D.
Then it might broken indeed.
username_2: OK, it's still not broken. You did it right; the snap margin you set was too big for your speed.
username_0: That doesn't fix it, it just alleviates the problem.
username_2: it's not a problem, snap traps the kinematicbody to the floor unless it moves fast enough to escape from it
username_0: The problem is that when moving over that corner on the surface, the ball keeps snapping to it even though the right side of the bump is a wall because of its high angle. Then, when the ball has finally gone around the corner, it realises there is a wall and thus stops snapping. Snapping, however, doesn't (or at least shouldn't) affect velocity, so the ball flies away from the wrong point.
username_3: Snapping was added specifically to _avoid_ the body flying on slopes, sticking to the ground. So it seems to me it's behaving correctly but it's not configured correctly for your case. The `snap` parameter is for distance not really for direction.
Status: Issue closed
username_4: I changed the snap behavior so it will only snap to the floor if whatever it is snapping to is a floor. I think it makes a lot more sense this way (if you don't provide a floor vector, it will snap to anything). It should fix this issue efficiently.
FontoXML/fontoxpath | 365510666 | Title: Incorrect value from calculation
Question:
username_0: Hi,
The below example is returning the incorrect value.
https://xpath.playground.fontoxml.com/?xml=&xpath=5+-+1+-+1
**Expected value**
3
**Actual value**
5
It seems to be doing 5 - (1 - 1) instead?
Answers:
username_1: Yup, it's a parser fault. I would expect this to fail somewhere in the QT3 test set.
We are working on a re-implementation of the parser; I will pay extra attention to this part. I expect this to be done somewhere this month (we are moving to an XQueryX based parser so that we have more room for static optimizations / constant folding).
If this is blocking you in any way, I should be able to fix this in the current parser.
username_0: If you could take a look at fixing the current parser that would be appreciated. If I get any free time this week, I'll try and take a quick look too.
username_1: If I can get some time to work on this, I'll fix it in the current parser. It shouldn't be too hard. If you're able to have a look, that would be awesome!
The bug is in the grammar: I tried to be smart and work around the difficulty regarding addition and subtraction. Obviously that didn't work... [This code can be found in the pegjs file around line 124](https://github.com/FontoXML/fontoxpath/blob/master/src/parsing/xpath.pegjs#L124).
Here, we effectively make a tree of the repetition of binary operators (`'+'/'-'`) by recursing on the right hand side. This makes the tree lean to the right hand side, which is wrong. Because it is impossible to do left-hand-side recursion in a PEG grammar, this makes it a nice challenge. If you're interested in the background of why left recursion is a problem, [this is a nice read](https://pdos.csail.mit.edu/papers/parsing:popl04.pdf).
I'd flatten out the operator 'tree' first, keep track of the operator, and use a normal reduce/fold-left to build up the correct tree: `5 - 1 - 1` -> `(5 (- 1) (- 1))` -> `(- (- 5 1) 1)`.
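For illustration only, a small Python sketch of that flatten-then-fold-left idea (a PEG repetition rule would collect the `(operator, operand)` pairs; the names here are made up):
```python
from functools import reduce

def build_left_associative(first, rest):
    # rest is the flat list collected by the repetition, e.g. [("-", 1), ("-", 1)]
    return reduce(lambda lhs, pair: (pair[0], lhs, pair[1]), rest, first)

def evaluate(node):
    if not isinstance(node, tuple):
        return node
    op, lhs, rhs = node
    return evaluate(lhs) - evaluate(rhs) if op == "-" else evaluate(lhs) + evaluate(rhs)

tree = build_left_associative(5, [("-", 1), ("-", 1)])
print(tree)            # ('-', ('-', 5, 1), 1)
print(evaluate(tree))  # 3, as expected for 5 - 1 - 1
```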
BTW: The unit tests are executed a bit differently from before. We removed the browser-based karma tests in favour of plain mocha in Node. We've updated the CONTRIBUTING.md file accordingly.
Status: Issue closed
|
knutigro/COBezierTableView | 362926733 | Title: How to Create Half Circle
Question:
username_0: Hi, I've used this library a lot, but unfortunately I cannot make the table view curve like a half circle, although I've changed lots of things in the library files. Please guide me.
Answers:
username_1: Hi, @username_0. Cool that you use the control.
You should only have to change one of the four params
` UIView.BezierPoints.p1 = CGPoint(x: 0, y: 0)
UIView.BezierPoints.p2 = CGPoint(x: 0, y: 0)
UIView.BezierPoints.p3 = CGPoint(x: 0, y: 0)
UIView.BezierPoints.p4 = CGPoint(x: 0, y: 0)
`
Did you try to use the editor?
username_0: I tried these values but it did not work. Also, my half circle is positioned on the right of the screen.
DBMS-Consulting/CQT2 | 221323297 | Title: Copy/Update/Browse & Search -> Search and Details form : Same Product name is appearing twice.
Question:
username_0: The same product is appearing twice in the Search and Details page.


Answers:
username_1: I think there are some duplicates on db.
And newly created lists doesn't show such issues.
An algorithm has been applied to remove duplicacy from UI side.
username_0: Fixed:


Status: Issue closed
|
dcowden/cadquery | 223666218 | Title: Fix FreeCAD 0.17 Issues
Question:
username_0: FreeCAD 0.17 is breaking several tests ( 9 at this point).
One issue is explained here:
https://github.com/username_1/cadquery-freecad-module/issues/77
I have fix that one, but there are others.
I'd recommend waiting until freeCAD 0.17 is stable before digging in
Answers:
username_0: to test against latest FreeCAD, do these steps:
sudo add-apt-repository ppa:freecad-maintainers/freecad-daily
sudo apt-get update
sudo apt-get install freecad-daily
export FREECAD_LIB=/usr/lib/freecad-daily/lib
This installs the latest freecad alongside the stable version, so that you can run tests with both.
username_0: pushed changes that make it possible to test against latest FreeCAD 0.17 daily build.
The tests are closer to passing in 0.17, but still fail. They pass in 0.16, so at least we are at the point
where this version allows it.
This way we can iterate on getting 0.17 working while not breaking 0.16
username_0: As of 4/23, here are the errors:
```
ERROR: testChamferAsymmetrical (tests.TestCadQuery.TestCadQuery)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCadQuery.py", line 941, in testChamferAsymmetrical
edge = cube.edges("|Z").vals()[0]
IndexError: list index out of range
======================================================================
ERROR: testEnclosure (tests.TestCadQuery.TestCadQuery)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCadQuery.py", line 1337, in testEnclosure
oshell.edges("|Z").fillet(p_sideRadius)
File "/home/username_0/gitwork/cadquery/cadquery/cq.py", line 840, in fillet
raise ValueError("Fillets requires that edges be selected")
ValueError: Fillets requires that edges be selected
======================================================================
ERROR: testFillet (tests.TestCadQuery.TestCadQuery)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCadQuery.py", line 918, in testFillet
c = CQ( makeUnitCube()).faces(">Z").workplane().circle(0.25).extrude(0.25,True).edges("|Z").fillet(0.2)
File "/home/username_0/gitwork/cadquery/cadquery/cq.py", line 840, in fillet
raise ValueError("Fillets requires that edges be selected")
ValueError: Fillets requires that edges be selected
======================================================================
FAIL: testAndSelector (tests.TestCQSelectors.TestCQSelectors)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCQSelectors.py", line 380, in testAndSelector
self.assertEqual(2, len(el))
AssertionError: 2 != 0
======================================================================
FAIL: testParallelEdgeFilter (tests.TestCQSelectors.TestCQSelectors)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCQSelectors.py", line 134, in testParallelEdgeFilter
self.assertEqual(4, c.edges("|Z").size())
AssertionError: 4 != 0
======================================================================
FAIL: testPerpendicularDirFilter (tests.TestCQSelectors.TestCQSelectors)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCQSelectors.py", line 105, in testPerpendicularDirFilter
self.assertEqual(8,c.edges("#Z").size() ) #8 edges are perp. to z
AssertionError: 8 != 0
======================================================================
FAIL: testSumSelector (tests.TestCQSelectors.TestCQSelectors)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCQSelectors.py", line 398, in testSumSelector
self.assertEqual(8, len(el))
[Truncated]
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCadQuery.py", line 435, in testNestedCircle
self.assertEqual(14,s.faces().size() )
AssertionError: 14 != 9
======================================================================
FAIL: testWorkplaneOnExistingSolid (tests.TestCadQuery.TestCadQuery)
Tests extruding on an existing solid
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/gitwork/cadquery/tests/TestCadQuery.py", line 678, in testWorkplaneOnExistingSolid
self.assertEqual(10,c.faces().size() )
AssertionError: 10 != 6
----------------------------------------------------------------------
Ran 151 tests in 5.009s
FAILED (failures=6, errors=3)
```
Status: Issue closed
username_1: With all the changes to the CI system and the fact that all the tests are passing, I think we have this covered.
zendesk/racecar | 391908458 | Title: Signal handlers are not re-entrant
Question:
username_0: Hi racecar team, love your gem <3
Racecar's binstub does not properly handle signals in my experience. I get this printed in the output when I send Ctrl+C to `bin/racecar MyConsumer`:
```
log writing failed. can't be called from trap context
```
I've encountered unpredictable failure scenarios around sending signals to racecar, actually:
- after killing, I get a console prompt as though the process is gone but I still get output printed to my console
- if you press Ctrl+C enough times you can just crash the process (this is likely a race condition)
The reason this happens is because the signal handlers are not re-entrant. The handlers in `runner.rb` call shutdown code directly from within the `trap` block. This code could be called at any time, including before the previous signal is finished being handled. This is prime race condition territory.
Instead, you should consider using a global queue + a "self pipe" to handle signals serially. This is what sidekiq, foreman, unicorn, and I'm sure many others do to handle this problem.
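For illustration only, a language-agnostic sketch of that self-pipe pattern (shown in Python; Racecar itself is Ruby, so this is just the shape of the idea): the handler only writes a byte to a pipe, and the main loop reads from the pipe and performs the real shutdown work outside the trap context, one signal at a time.
```python
import os
import signal

read_end, write_end = os.pipe()
os.set_blocking(write_end, False)

def handler(signum, frame):
    # keep the handler trivial: just record which signal arrived
    os.write(write_end, bytes([signum]))

signal.signal(signal.SIGINT, handler)
signal.signal(signal.SIGTERM, handler)

print("waiting for a signal (press Ctrl+C)...")
received = os.read(read_end, 1)  # main loop blocks here until a signal is queued
print(f"handling signal {received[0]} serially, outside the handler")
```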
Answers:
username_1: Can you whip up a PR for that?
Status: Issue closed
|
alphagov/govuk-design-system | 864688281 | Title: Add analytics enhancements for design system and frontend docs
Question:
username_0: <!--
This is a template for any issues that aren’t bug reports or new feature requests. The headings in this section provide examples of the information you might want to include, but feel free to add/delete sections where appropriate.
-->
## What
In https://github.com/alphagov/govuk-design-system/issues/1610 we will have added a cookie banner, cookie page and Google Tag Manager to the design system and frontend docs to enable us to track pageviews. This was an MVP - now that this has been implemented, we can consider what other events on the site we might want to track.
## Why
Pageviews give us a limited view of what users are interacting with on our site. Additional events like link clicks and search tracking will give us a better insight into where we can improve.
## Who needs to know about this
Performance Analyst; Developers
## Done when
- [ ] TBC |
OpenEmu/OpenEmu | 258145715 | Title: Hotkey on controller for "Special Keys"
Question:
username_0: It would be great if there was an option to set a hotkey on the controller, so that if pressed in combination with other buttons, one might for example activate "Save State" when pressing:
<Select>+<R> - (<Select> used as hotkey)
I'm using an 8Bitdo SNES controller, which doesn't have enough keys to bind a special button. With older consoles it's fine, but with newer ones there are no free buttons for this...
This works perfectly in RetroArch/LibRetro:
https://github.com/RetroPie/RetroPie-Setup/wiki/RetroArch-Configuration
Answers:
username_1: Having combination keys would complicate the current input system and key mapping process, so not likely we will implement something like this.
Status: Issue closed
|
hawtio/hawtio | 44546800 | Title: Container/Urls page - git url
Question:
username_0: UPDATE:
I think after this commit on the Fabric8 side: https://github.com/fabric8io/fabric8/commit/33973e143b8d1a3ed98392bd8fa5c947910516f8
we should disable / alter the Git entry for non-elected master repo entries. If you try to clone one of those non-master repos, you get an error:
```
$ git clone -b 1.0 http://localhost:8183/git/fabric child
Cloning into 'child'...
fatal: repository 'http://localhost:8183/git/fabric/' not found
```
which is a coherent behavior with that Fabric8 commit.
----
Git url always suggest `-b 1.0` even when that container or even the whole version default of a Fabric is set to a different versions.
Is this correct?<issue_closed>
Status: Issue closed |
ProxymanApp/Proxyman | 959812422 | Title: Activate Proxyman License by CLI
Question:
username_0: ### Description
Some companies have more than 100 licenses and they would like a seamless onboarding for their colleagues, so we have to find a way to automatically activate a license 👍
### Acceptance Criteria
- Activate by CLI `$ /Applications/Proxyman.app/Contents/MacOS/Proxyman -activate <key>`
- Support -silent flag
- Activate by URL Schema `proxyman://activate?key=<key>`
Answers:
username_0: Done 👍
Beta: https://proxyman.s3.us-east-2.amazonaws.com/beta/Proxyman_2.30.0_activate_license.dmg
### Activate by CLI
- `/Applications/Proxyman.app/Contents/MacOS/proxyman-cli activate <key>`
- `/Applications/Proxyman.app/Contents/MacOS/proxyman-cli unlink`
<img width="925" alt="Screen Shot 2021-08-05 at 14 12 33" src="https://user-images.githubusercontent.com/5878421/128307433-8887f444-5c94-42bb-8ec6-f30bce271f50.png">
### Activate by url
It's possible to activate a license by URL `proxyman://activate?key=<key>`
username_1: Thank you for implementing this @username_0 🎉 |
ntntnlstdnt/codelab-jquery-intro | 112846184 | Title: Code Lab Feedback
Question:
username_0: Hey @username_1, can you take a look at this? It's [hosted here](ossified-ant.surge.sh) and meets the following [project](http://lansingcodelab.com/lessons/jquery-intro/1) criteria:
- [x] [This HTML file](https://gist.github.com/chrisvfritz/6ff8b4a898468bdf6d2c) is used to build off (you may change any part of it)
- [x] Players have some way to add either an X or O to each square (apart from event handlers, you may also find [the `text` function](http://api.jquery.com/text/#text2) useful)
- [x] Squares with Xs and squares with Os have different background colors
- [x] The reset button clears Xs and Os from every square
- [x] (optional) Win conditions do _NOT_ have to be detected and reported, but you can try to accomplish this if you'd like a challenge
Answers:
username_1: The four main criteria look pretty solid. Are you going to try detecting the win conditions?
username_0: I am still thinking about it. Codelab wouldn't let me submit the code without checking the last box, so I did. I submitted it for now, so I can focus on the other projects
username_1: Okay, if you want to brainstorm on Wednesday I'll be there! :shipit: |
gamerwalt/laramultidbtenant | 245287259 | Title: Work with JWT or others auth methods
Question:
username_0: Hi, congratulations on this package.
I would like to use it as a RESTful API in my app, working with JWT instead of sessions by using AJAX.
What is your advice?
Answers:
username_1: It's very possible. You will have to authenticate first with Laravel using
the api token then direct to the right tenant. I might end up doing an
example with API authentication.
Adewoyin <NAME>
Winnipeg, Manitoba,
Canada
<EMAIL>
<EMAIL>
username_0: Any API token auth example?
username_1: Not yet. Been busy with another project. Not sure when I'll do this though. |
NVIDIA/TensorRT | 506479033 | Title: Support on 1080 and Titan X
Question:
username_0: Hi, does this sample work on a GTX 1080 Ti and a Titan X?
Answers:
username_1: Hi @username_0,
You can see the support matrix for hardware here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#hardware-precision-matrix
And according to here: http://www.nvidia.com/object/geforce_family.html, both GTX 1080 ti and Titan X have Compute Capability 6.1
Please refer to the support matrix row above corresponding to CC6.1 for what is supported and not supported.
Status: Issue closed
username_1: Closing - Reopen if you are still having a problem.
username_0: Thanks for the support ryan
username_2: @username_1 how can I get the compute capability using C++ code?
username_1: @username_2 I'm not sure off the top of my head. What's your goal for knowing CC?
TensorRT has some built-in methods to see if the hardware supports things like FP16/INT8, if that's what you're looking for:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_builder.html#a1d18b948852cb088d22a87cff70a2b2f |
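For reference, a minimal sketch of querying the compute capability from C++ via the CUDA runtime API (this is plain CUDA, not a TensorRT call):
```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        // prop.major / prop.minor hold the compute capability, e.g. 6.1 for GTX 1080 Ti / Titan X (Pascal)
        std::printf("Device %d (%s): compute capability %d.%d\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```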
liubog2008/oooops | 553990503 | Title: A better way to generate default status
Question:
username_0: ### Summary
Now we use `kubebuilder:default` to generate default values. However, it is not suitable for a complex type, which would need a very long comment.
### Expected
[ ] Auto generate default field in CRD by writing an initialize function. |
philiprbrenan/Dita | 269399835 | Title: App: philiprbrenan/Dita generation failed with ERRORS
Question:
username_0: 2017-10-29 at 13:38:16 Generate AppaAppsPhotoApp version 20171021-222
2017-10-29 at 13:38:20 Step: parseSource
2017-10-29 at 13:38:20 Good source file!
2017-10-29 at 13:38:20 Step: loadAssets
2017-10-29 at 13:38:21 Failed to generate audio file
/tmp/AppaAppsPhotoApp/users/username_0/Dita/assets/audio/Amy/alt.mp3
An error occurred (InvalidSsmlException) when calling the SynthesizeSpeech operation: Invalid SSML request<issue_closed>
Status: Issue closed |
JuliaLang/julia | 501747745 | Title: Missing line number of closure from generator expression
Question:
username_0: Trivial example:
```
julia> function foo()
collect(y for _ in 1:10)
end
foo (generic function with 1 method)
julia> foo()
ERROR: UndefVarError: y not defined
Stacktrace:
[1] #3 at ./array.jl:0 [inlined]
[2] iterate at ./generator.jl:47 [inlined]
[3] collect(::Base.Generator{UnitRange{Int64},var"##3#4"}) at ./array.jl:636
[4] foo() at ./REPL[20]:2
[5] top-level scope at REPL[21]:1
```
The topmost stack frame should also have the `REPL[20]:2` line number inside the closure. |
Guiiii-m/apache-nifi | 578724626 | Title: Deserialization Of Untrusted Object in Guiiii-m/apache-nifi (master)
Question:
username_0: # [Deserialization Of Untrusted Object in username_0/apache-nifi (master)]()
## Issue Details
- **Vulnerability**: Deserialization Of Untrusted Object
- **Severity**: Medium
- **Project**: username_0/apache-nifi
- **Branch**: master
- **Scan Date**: Unknown
## Issue Description
jackson-databind is vulnerable to deserialization of untrusted data. It is possible because it was possible for an untrusted class, `br.com.anteros.dbcp.AnterosDBCPConfig`, to be used as a serialization gadget through polymorphic typing.
[View more details](https://sca.analysiscenter.veracode.com/teams/jppFq6b/issues/vulnerabilities/29273383) |
vert-x3/vertx-web | 131072781 | Title: Exception in failure handler
Question:
username_0: My code looks something like this
```
apiRouter.route().failureHandler(
routingContext ->
{
if(!routingContext.response().ended())
{
int statusCode = routingContext.statusCode();
routingContext.response().setStatusCode(statusCode).end();
}
}
);
```
My code seems to be in line with the documentation
```
1992 [vert.x-eventloop-thread-1] ERROR io.vertx.ext.web.impl.RoutingContextImplBase - Unexpected exception in route
java.lang.IllegalArgumentException: code: -1 (expected: 0+)
at io.netty.handler.codec.http.HttpResponseStatus.<init>(HttpResponseStatus.java:473)
at io.netty.handler.codec.http.HttpResponseStatus.<init>(HttpResponseStatus.java:468)
at io.netty.handler.codec.http.HttpResponseStatus.valueOf(HttpResponseStatus.java:455)
at io.vertx.core.http.impl.HttpServerResponseImpl.setStatusCode(HttpServerResponseImpl.java:113)
at com.qnective.qtalk.admingateway.handlers.Starter.lambda$setupHttpHandlers$15(Starter.java:88)
at io.vertx.ext.web.impl.RouteImpl.handleFailure(RouteImpl.java:227)
at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:76)
at io.vertx.ext.web.impl.RoutingContextWrapper.next(RoutingContextWrapper.java:141)
at io.vertx.ext.web.impl.RouterImpl.handleFailure(RouterImpl.java:260)
at io.vertx.ext.web.impl.RouteImpl.handleFailure(RouteImpl.java:227)
at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:76)
at io.vertx.ext.web.impl.RoutingContextImpl.next(RoutingContextImpl.java:93)
at io.vertx.ext.web.impl.RoutingContextImpl.doFail(RoutingContextImpl.java:349)
at io.vertx.ext.web.impl.RoutingContextImpl.fail(RoutingContextImpl.java:124)
at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:84)
at io.vertx.ext.web.impl.RoutingContextImpl.next(RoutingContextImpl.java:93)
at io.vertx.ext.web.impl.RouterImpl.accept(RouterImpl.java:79)
at io.vertx.core.http.impl.ServerConnection.handleRequest(ServerConnection.java:274)
at io.vertx.core.http.impl.ServerConnection.processMessage(ServerConnection.java:392)
at io.vertx.core.http.impl.ServerConnection.handleMessage(ServerConnection.java:134)
at io.vertx.core.http.impl.HttpServerImpl$ServerHandler.lambda$createConnAndHandle$25(HttpServerImpl.java:537)
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$16(ContextImpl.java:333)
at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:225)
at io.vertx.core.http.impl.HttpServerImpl$ServerHandler.createConnAndHandle(HttpServerImpl.java:535)
at io.vertx.core.http.impl.HttpServerImpl$ServerHandler.doMessageReceived(HttpServerImpl.java:469)
at io.vertx.core.http.impl.HttpServerImpl$ServerHandler.doMessageReceived(HttpServerImpl.java:420)
at io.vertx.core.http.impl.VertxHttpHandler.channelRead(VertxHttpHandler.java:85)
at io.vertx.core.net.impl.VertxHandler.channelRead(VertxHandler.java:124)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
```
Answers:
username_1: @username_0 what status code are you trying to set? it needs to be a HTTP status code which is always a positive number: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
username_0: As described in the code and in vertx-help, it tries to set routingContext.statusCode() but that is -1.
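For anyone else hitting this, a minimal guard (just a sketch) that avoids passing -1 to `setStatusCode`:
```
apiRouter.route().failureHandler(routingContext -> {
    if (!routingContext.response().ended()) {
        // statusCode() returns -1 when the failure came from an exception
        // rather than an explicit fail(statusCode), so fall back to 500
        int statusCode = routingContext.statusCode() > 0 ? routingContext.statusCode() : 500;
        routingContext.response().setStatusCode(statusCode).end();
    }
});
```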
Status: Issue closed
|
aws/aws-sdk-net | 349295438 | Title: 403 response from public elastic search domain
Question:
username_0: I am having trouble accessing a public elastic search domain from AWS Lambda. The domain belongs to a separate AWS account from mine, but it is public.
I am able to access it from postman or while executing locally, anyone can access it. However when I publish my solution which accesses it to AWS Lambda I get the response: Elasticsearch.Net.ElasticsearchClientException: Request failed to execute. Call: Status code 403.
Answers:
username_0: I never solved this issue. Like you said, it's tough to really nail down where exactly the issue lies; it could also be in NEST. Eventually I will have to implement cross-account security anyway, which hopefully will fix it.
username_1: If you still think this may be an issue with the SDK, could you please provide some information on your environment? Please keep us updated if you make progress with this problem.
**Clarification**
Are you trying to connect to Elastic's Elasticsearch or Amazon's Elasticsearch service? These are two different offerings. If you are using the Elastic API in your lambda function but trying to access an Amazon Elasticsearch domain, that would definitely be a reason why you are having trouble authenticating. You can see [Amazon's Elasticsearch documentation] for more info. (https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Elasticsearch/NElasticsearch.html)
username_0: So here are the details: I have 2 amazon accounts.
I have an Amazon Elasticsearch domain version 6.2 on Account 1. It is configured for public access. I can access it via postman or just by web.
I have an AWS .net core 2.1 Serverless Application which uses NEST V6.2 to access the elastic search domain. When I run it locally under my AWS credentials for Account 2 I can access the elastic search domain. However when I publish the solution to AWS Account 2 I get the message:
```
{"errorCode":"A server error occurred.
Elastic Search error: Invalid NEST response built from a unsuccessful low level call on POST:
/currentexperienceelasticus-west-2/currentexperienceelastic/_search?typed_keys=true
Audit trail of this API call:
- [1] BadResponse:
Node: https://search-repsavvy-shared-77z4kweei27kzjs5fale45sczq.us-east-1.es.amazonaws.com/ T
ook: 00:00:02.4583753\
# OriginalException: Elasticsearch.Net.ElasticsearchClientException: Request failed to execute. Call:
Status code 403 from: POST /currentexperienceelasticus-west-2/currentexperienceelastic/_search?typed_keys=true\n# Request:\r\n{\"from\":0,\"size\":50,\"query\":{\"bool\":{\"must\":[{\"bool\":{\"filter\":[{\"term\":{\"createdBy\":{\"value\":\"<EMAIL>\"}}}]}},{\"bool\":{\"should\":[{\"match\":{\"firstName\":{\"query\":\"2\"}}},{\"match\":{\"lastName\":{\"query\":\"2\"}}},{\"match\":{\"address\":{\"query\":\"2\"}}}]}}]}}}
# Response:
{\"Message\":\"User: arn:aws:sts::[myaccount]:assumed-role/ProxyFunctionRole-10ODI5DVKJRWE/ProxyFunction-1K6TOTOKFEUI0 is not authorized to perform: es:ESHttpPost\"}\n","args":null}
```
username_1: Hi @username_0, is the domain secured with an access policy that could potentially prevent it from being accessed by the second account? Even if you have configured the Elasticsearch domain to be publicly accessible (compared to accessible via your VPC), it is recommended that the domain be protected with an access policy that restricts usage by, for example, whitelisting IP addresses or AWS users. You can check out the Elasticsearch Domain Access Policy Documentation [here](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies).
username_0: It is configured for open access. The domain is in the error message from my previous message, feel free to try it.
username_1: Hi @username_0, sorry for taking some time to get back to you. I confirmed that the domain is publicly accessible. That is almost certainly not the problem.
I'm fairly confident that the problem is your lambda configuration and not any issue with the functionality of any of the services or SDK. If you go to your AWS Console and then navigate to the Lambda service and then the specific Lambda function in question, you should see a page with your configuration details. At the top of this page should be a chart that shows your function's triggers on the left and role permissions on the right. If Amazon Elasticsearch Service is not in the list on the right, that is a problem. If it is there, you should be able to click on it and then see a tab below that shows "Allow:es :ESHttpPost" under "By resource". If that is there and this is still not working, we'll have to continue looking into next steps, and I'll reach out to others for help.
If you don't have Elasticsearch permissions on your Lambda function's role, you can add them by modifying the role you chose or creating a new execution role for the function based on the "Elasticsearch permissions" policy template.
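For reference, a minimal sketch of the kind of IAM statement the execution role needs (the region, account ID and domain name below are placeholders):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["es:ESHttpGet", "es:ESHttpPost", "es:ESHttpPut"],
      "Resource": "arn:aws:es:us-east-1:111122223333:domain/your-domain/*"
    }
  ]
}
```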
username_0: Thanks! That fixed it.
Status: Issue closed
|
flutter/flutter | 651506605 | Title: RichText When Chinese is combined with punctuation, there is a new line error
Question:
username_0: `Container(
color: Colors.red,
child: RichText(
textAlign:TextAlign.justify,
text: TextSpan(
text: '我是: ',
style: TextStyle(
fontSize: 30,
color: Colors.black,
),
children: <TextSpan>[
TextSpan(
text: ',,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,.............................,', style: TextStyle(fontSize: 16, color: Colors.blue)),
],
),
),
)`
Answers:
username_1: Hi @username_0
could you provide your `flutter doctor -v` and the error log?
I can't see an error when running with current master `1.20.0-3.0.pre.130` on Android 9 (API 28) device
username_0: current version v1.12.13+hotfix.9
Sorry, my description is not very clear. The problem I have now is that when many punctuation characters are appended after the Chinese characters, the line wrapping is not carried out as expected:
[https://ae01.alicdn.com/kf/H6a18485bb6bc4d98aa0b595b41618644C.jpg](url)
username_1: Reproduces not only with RichText, but with a simple Text widget as well
```
import 'package:flutter/material.dart';
void main() => runApp(MaterialApp(home: MyApp()));
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
@override
Widget build(BuildContext context) {
return SafeArea(
child: Scaffold(
body: Container(
padding: EdgeInsets.all(20),
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text('我是我是 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .'),
Text('我是我是 some text some text some text some text some text some text some text some text some text'),
],
),
),
),
);
}
}
```

Expected result: punctuation behaves the same as text
Tried with current master `1.20.0-3.0.pre.130`
@username_0 correct me if I misunderstood
username_2: The Text widget always gets some things wrong in Chinese paragraphs, please fix.
Some APIs in TextPainter also have unexpected results. |
tmux-plugins/tmux-resurrect | 715924033 | Title: strange issue with a window at 7th index
Question:
username_0: If I have a window at index 7, and I restore, the 7th index is not restored (and disappears);
Instead the 7th index's window name is placed in index 2 (the contents of index 2 is correct, just the name moves).
Additionally, the window at index 0 also disappears
I'm looking into trying to attach a video...
Answers:
username_1: This one seems to be a bug related to this commit https://github.com/tmux-plugins/tmux-resurrect/commit/5f5f9d8fd5ff9769e5ef08d64a430ee7ab525dc7 |
firebase/firebase-tools-ui | 1058452912 | Title: Op
Question:
username_0: # Github Actions
This directory contains [Github Actions](https://help.github.com/en/actions) workflows
used for testing.
## Workflows
- `node-test.yml` - unit tests and integration tests.
## Secrets
The following secrets must be defined on the project:
| Name | Description |
| ----------------------------- | ------------------------------------------------------------------------------ |
| `FBTOOLS_TARGET_PROJECT` | The project ID that should be used for integration tests |
| `service_account_json_base64` | A base64-encoded service account JSON file with access to the selected project |<issue_closed>
Status: Issue closed |
spinnaker/spinnaker | 550941771 | Title: [FEATURE]: Dynamic editing/creation of kustomization file
Question:
username_0: ### Issue Summary:
This is a feature request to support adding additional dynamic information to kustomization files, similar to how you would override values with the Helm template engine.
### Feature Area:
Baking Kubernetes manifests using Kustomize.
### Description:
kustomize does not support env variables, which is [described here](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md#build-time-side-effects-from-cli-args-or-env-variables).
There is a [general discussion](https://github.com/kubernetes-sigs/kustomize/issues/388) within the kustomize project on how to achieve this and what are some of the use cases, like image tags.
In particular, my current use case is that I have docker-registry artifacts that trigger my pipeline, as well as allowing developers to deploy any version of an application on demand, and I use the selected tag `${trigger["tag"]}` to label Kubernetes resources appropriately.
In kustomization it would look something like this (which does not, and will not work)
Need to set this somehow `APP_VERSION=${trigger["tag"]}`
```
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- [email protected]:username_0/my-repo.git/kustomize/base?ref=master
commonLabels:
version: APP_VERSION
images:
- name: base-application-image
newName: hello-world
newTag: APP_VERSION
```
Answers:
username_1: For what it's worth, I'm going to outline my use case for this.
I'm submitting a simple job manifest via CI and need to dynamically apply 4 fields. This is the kind of thing that is easily done with Helm (using --set) but Helm doesn't support doing this with a single job. It feels the only way I could make this work with kustomize is creating a temp kustomize file but this seems absurd.
I understand it's purely for philosophical reasons that kustomize doesn't support build time args ("kustomize supports the best practice of storing one's entire configuration in a version control system"). However, in my case I'm dealing with dynamically created one off jobs. There is no reason for them to be in VCS at all.
This incredibly common use case should not be so convoluted.
username_2: I have similar use cases, nearly every other day.
During deployment we (dynamically) create AWS resources (e.g. using Pulumi, Terraform or AWS CDK) and then need to inject some resource identifiers (ARN) into Kubernetes YAML files. One specific example that I had today is:
- We create an IAM Role in AWS using Pulumi
- I have to create a k8s service account which uses that IAM role and for that reason I have to inject it's ARN into the corresponding k8s resource file.
Note, how the exact value of the information that needs to be injected into the YAML resources is not known ahead of time and therefore cannot be part of version control anyway.
While I understand and support the intentions of Kustomize's philosophy that everything should be under version control, I think it is a little bit to "fundamentalistic" (or idealistic) and simply ignores reality. I would really like to see Kustomize adding such a feature, which would make it much simpler to solve those corner cases and reduce the need for other tools like `sed` or templating tools.
username_3: I need this too
In another use case, I have many PV resources that all use the same NFS server IP.
```
apiVersion: v1
kind: PersistentVolume
metadata:
name: mypv
spec:
accessModes:
- ReadWriteMany
capacity:
storage: $(NFS_STORAGE_CAPACITY)
nfs:
path: $(NFS_SHARED_PATH)
server: $(NFS_SERVER)
```
If I use a patch file, I need to repeat it myself for every PV....
It's not make sense |
loiane/javascript-datastructures-algorithms | 265852558 | Title: Truthy and falsy - Result of value type(string) is wrong
Question:
username_0: This can be verified in the output as **true** when val.length is 1. Using the same example function, below the table of truthy and falsy values, we can see:
```javascript
function testTruthy(val){
return val ? console.log('truthy') : console.log('falsy');
}
testTruthy('a'); //true
```
Answers:
username_1: Thanks Bruno! Will review it and get back to you.
username_1: Errata submitted for both editions. Exampled added and text will be corrected in next ed.
Status: Issue closed
|
tensorflow/models | 435933250 | Title: Cannot reproduce mAP scores for ssd_mobilenetv2 model
Question:
username_0: -----------------------
### System information
- **What is the top-level directory of the model you are using**:
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: Some code to generate COCO2014 minival set
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Windows 10 x64
- **TensorFlow installed from (source or binary)**: binary
- **TensorFlow version (use command below)**: 1.12.0 (Python3.6)
- **Bazel version (if compiling from source)**: nA
- **CUDA/cuDNN version**: 9.0/7
- **GPU model and memory**: Quadro P2000 5120 MB
- **Exact command to reproduce**:
`python -m object_detection.legacy.eval --logtostderr --checkpoint_dir=tfmodel/ssd_mobilenet_v2_coco_2018_03_29 --eval_dir=output8059 --pipeline_config_path=tfmodel/ssd_mobilenet_v2_coco_2018_03_29\pipeline_eval_cocomAP.config
`
### Describe the problem
I am trying to reproduce the mAP score for the ssd_mobilenetv2_coco model from the detection model zoo page: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
with COCO mAP score given as **22** over COCO 14 minival set. However I can only get a cocomAP score of **18.5**. Can someone point out, if I am doing anything wrong? How can I reproduce mAP score 0.22?
Tensorflow object_detection module was installed from: https://github.com/tensorflow/models/blob/master/research/
The file object_detection_evaluation.py was modified to replace **unicode** with **str** for Python 3.6 compatibility
pycocoapi for windows was installed following instructions from
https://github.com/philferriere/cocoapi
After downloading the coco2014 dataset, I modified the creat_coco_tf_record.py to generate the COCO minival set and provided mscoco_minival_ids.txt as an additional input to pick only 8059 images with matching id . See the relevant portions of the code below
```
def _create_tf_record_from_coco_annotations(
annotations_file, image_dir, output_path, include_masks, val_id_list_file, num_shards):
text_file = open(val_id_list_file, "r")
val_id_list = np.loadtxt(val_id_list_file,'int32')
subset_count = 0
with contextlib2.ExitStack() as tf_record_close_stack, \
tf.gfile.GFile(annotations_file, 'r') as fid:
output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
tf_record_close_stack, output_path, num_shards)
groundtruth_data = json.load(fid)
images = groundtruth_data['images']
category_index = label_map_util.create_category_index(
groundtruth_data['categories'])
annotations_index = {}
if 'annotations' in groundtruth_data:
tf.logging.info(
'Found groundtruth annotations. Building annotations index.')
for annotation in groundtruth_data['annotations']:
image_id = annotation['image_id']
if image_id not in annotations_index:
annotations_index[image_id] = []
annotations_index[image_id].append(annotation)
missing_annotation_count = 0
for image in images:
image_id = image['id']
if image_id not in annotations_index:
missing_annotation_count += 1
annotations_index[image_id] = []
[Truncated]
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.110
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.411
INFO:tensorflow:Writing metrics to tf summary.
INFO:tensorflow:DetectionBoxes_Precision/mAP: 0.184874
INFO:tensorflow:DetectionBoxes_Precision/mAP (large): 0.335407
INFO:tensorflow:DetectionBoxes_Precision/mAP (medium): 0.085965
INFO:tensorflow:DetectionBoxes_Precision/mAP (small): 0.009968
INFO:tensorflow:DetectionBoxes_Precision/[email protected]: 0.302518
INFO:tensorflow:DetectionBoxes_Precision/[email protected]: 0.193669
INFO:tensorflow:DetectionBoxes_Recall/AR@1: 0.178459
INFO:tensorflow:DetectionBoxes_Recall/AR@10: 0.226661
INFO:tensorflow:DetectionBoxes_Recall/AR@100: 0.227609
INFO:tensorflow:DetectionBoxes_Recall/AR@100 (large): 0.411261
INFO:tensorflow:DetectionBoxes_Recall/AR@100 (medium): 0.110236
INFO:tensorflow:DetectionBoxes_Recall/AR@100 (small): 0.015886
INFO:tensorflow:Losses/Loss/classification_loss: 4.157327
INFO:tensorflow:Losses/Loss/localization_loss: 1.164577
INFO:tensorflow:Metrics written to tf summary.
INFO:tensorflow:Finished evaluation!
```
Answers:
username_1: @username_0 Hi, do you reproduce the mAP? I also can not reproduce the results. Maybe the hyperparameters is wrong. If you have solved it, can you share the config file? thx
username_2: @username_1 , I have not been able to reproduce the claimed mAP score. As mentioned above I am getting 0.184. Are you also getting the same?
username_3: @username_1 , I have not been able to reproduce the claimed mAP score. As mentioned above I am getting 0.184. Are you also getting the same?
username_1: I also have not been able to reproduce the results. Finally, I get 0.188 mAP score.
username_4: Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.299
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.197
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.017
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.140
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.398
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.180
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.227
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.228
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.022
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.165
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.490
username_5: @username_3
In pipeline.config file:
please modify score_threshold from 0.3 to 1e-8 as follow:
post_processing {
batch_non_max_suppression {
**score_threshold: 1e-8**
iou_threshold: 0.600000023842
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
you will get mAP:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.217
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.367
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.222
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.015
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.119
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.392
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.216
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.319
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.338
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.043
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.238
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.573
username_6: Hi There,
We are checking to see if you still need help on this, as this seems to be an old issue. Please update this issue with the latest information, code snippet to reproduce your issue and error you are seeing.
If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing this.
Status: Issue closed
username_7: @username_3 @username_1 @username_4 did somebody solved this? I have same problem for ssd_mobilenet_v1_coco_2018_01_28:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.184
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.307
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.192
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.017
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.157
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.372
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.180
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.242
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.242
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.026
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.206
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.491 |
ngardiner/TWCManager | 562075149 | Title: True power measurement
Question:
username_0: In various places throughout the code, an apparent power is computed using an assumed line voltage of 240V. Seems like line voltage is not always constant. Also, is there any evidence that the power factor of the onboard charger in the car maintains a power factor of 1.0 and that the sine wave does not have distortion?
I'm wondering if the RS-485 protocol supports providing the voltage in addition to amps? If that were the case, even if the sine wave isn't pure and the power factor is not exactly 1.0, it seems like it is still going to be an improvement to compute an apparent power using a measured voltage.
Answers:
username_1: Hi @username_0
We can't assume that the power factor for vehicle charging is 1.0 as it demonstrably is not so - there's just not 100% efficiency for vehicle charging. The problem being solved in this project is the control of input power to the wall charger through the specification of charging current, primarily to provide cost control or off-grid operation.
The wall charger then allocates this power to the vehicle onboard charger, which is where the efficiency is lost. The actual energy efficiency during conversion to battery charge, whilst we would prefer it to be as high efficiency as possible, won't be known or taken into account by querying the wall connector alone.
Other projects, which query the vehicle's charging parameters may report on this - projects such as TeslaFi or TeslaMate which query the Tesla Vehicle API and derive the efficiency of charges. If there were a particular reason to query this value (such as use in policy or automation to improve the efficiency of charges), we'd be able to do so (with some difficulty, due to the inability to directly identify vehicles connected to a TWC), but so far I haven't come across a use case for querying the vehicle charge efficiency and acting upon it. Interesting topic for consideration, though.
In terms of the voltage constants, these are for calculating the output power from the wall connector to the vehicle onboard charger. I'm not aware of any protocol output that specifies a measurement of this output value in real terms, however there would be ways to measure this (using a clamp or other measuring device) if someone were motivated to do so. This does impact the output to the user (ie it has a cosmetic impact) in that the real output voltage can be between 200-240v depending on wiring mechanism and yet our calculations are based on the constant, however as we can only specify input current, there's not a lot we could do to compensate for this.
I will tag this as discussion for now - we do have the ability to read grid voltage from green power system EMS modules where they are used and where the particular inverter/battery/meter supports it, and this would provide more accurate wattage calculations, however more consideration is needed as to how we would use this calculation, especially because EMS readings are single-phase, and due to the way that TWC is wired in 120v geographies, the conversion of single phase grid voltage to actual TWC differs depending on the wiring approach. It is not quite as simple as reading single phase EMS grid voltage and substituting this for the 240v constant.
username_0: Hello,
I'm not talking about efficiency here, but instead true power. Please see the following link for a discussion of true power (Watts, W), reactive power (Volt Amps Reactive, VAR), and apparent power (Volt Amps, VA):
https://www.allaboutcircuits.com/textbook/alternating-current/chpt-11/true-reactive-and-apparent-power/
There are some notes in the code about an RS-485 message that the slave may send which indicates the energy consumption in kW*hours. If that is so, then that could be used to calculate true power (Watts) by a simple numerical differentiation scheme based on the time between when the messages were sent. Any idea if this kW*hour message is actually being sent on the RS-485 port and could be used? I would assume that Tesla is properly integrating Volts and Amps to get kW*hours.
Here is the relevant portion of the code:
https://github.com/username_1/TWCManager/blob/v1.1.7/TWCManager.py#L653
https://github.com/username_1/TWCManager/blob/v1.1.7/TWCManager.py#L932
Thoughts?
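For illustration only, a rough numerical-differentiation sketch (`get_lifetime_kwh()` is a hypothetical accessor for that RS-485 reading, not an existing TWCManager function):
```python
import time

prev_kwh = get_lifetime_kwh()
prev_time = time.time()

def average_power_watts():
    """Average true power since the last reading, derived from the kWh counter."""
    global prev_kwh, prev_time
    now_kwh, now_time = get_lifetime_kwh(), time.time()
    elapsed_hours = (now_time - prev_time) / 3600.0
    watts = ((now_kwh - prev_kwh) / elapsed_hours) * 1000.0 if elapsed_hours > 0 else 0.0
    prev_kwh, prev_time = now_kwh, now_time
    return watts
```
Of course, with the counter only reported to the nearest kWh this would only be meaningful over fairly long intervals.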
username_1: Yes, there is the message that you've referenced above for those with a TWC that has a certain firmware version. My firmware version is **4.5.3** and it does support this command. Many units out there will not have support, unfortunately.
The response to the query gives me the following:
Lifetime kWh is hex 31 (decimal 49) and Phase 1 voltage is hex F9 (decimal 249v).
The lifetime kWh doesn't align with that reported by the Tesla API, however given it is a lifetime value, there is every chance that there were testing cycles run through the unit prior to my use. This is a testing unit I use only for debugging, so 49kWh is well above my actual utilisation to date (around 33 kWh).
Comparing the phase voltage reading against my solar inverter, they disagree (fronius inverter logged 244v at the exact moment TWC logged 251v).
I will shortly run a charging test where I will measure the lifetime kWh at intervals and compare it to the vehicle charging stats, however with the kWh delivered precision being to the nearest kWh, it is much less precise than the Tesla API reading from the onboard charger (which provides amps and volts with 2 decimal place precision plus the charge_energy_added field which is a calculation in watts to 2 decimal places of the charge added to the battery by the charger):
https://tesla-api.timdorr.com/vehicle/state/chargestate
So ultimately whilst we could poll this value on an interval, I don't think we would get any value from it other than a more universal phase voltage reading than the 240v constant - but only for chargers which support this command. If the TWC reports it delivered 1 kWh lifetime over a period of 20 minutes, would that be any more precise data than the current V x A apparent calculation, which *could* be improved by polling green energy sources but with the caveat that I mentioned regarding different wiring approaches, which does add more configuration complexity than the current 240v constant (but could be overriden by those who wanted higher precision output).
We could read this value from the vehicle API, but we are getting closer and closer to the role of a data logger with a significant drawback to our approach (we are controlling a wall charger, not a vehicle, so we have to make inferences about which vehicle is at that wall charger at any one time). I personally run data loggers for my vehicle to get information on charging sessions, as the only detail I have come across so far that we have more insight into than the vehicle itself is the amount of time we are charging under green power vs from the grid.
Ultimately this project is a bit of a blunt instrument in that through the reverse engineering of a load sharing protocol for the TWC, we are able to specifiy a rounded decimal number of amps to supply to the vehicle for charging. In the future when API commands exist to control this from the vehicle side, there will really be no need for this project anymore (except perhaps for non-Tesla vehicles which is fairly ironic as it's a Tesla wall charger based solution). It will always suffer from a precision problem given the space that it occupies (as it neither provides detailed metering nor does it do the actual charging which is an on-board process on the vehicle, so our observation of utilization is limited).
username_0: You have a lot of valid points.
I am curious though, what did you use to issue the command and receive the response? Are you saying they are decimal _integers_?
Are you saying that the Amps you tell TWC to allow the car to use are rounded to the nearest _integer_, or to 1 or 2 decimal places?
How do you tell the firmware version, and is it ever updated?
username_1: The firmware version can be fetched with:
```
http://<TWC IP>/index.php?sendTWCMsg=FB1B&submit=1
```
There is some discussion that it may have been updated in the past, although there is really no confirmation of this that I have seen. You can see some discussion here:
https://teslamotorsclub.com/tmc/threads/charge-connector-update-in-progress.112886/
Either way, nobody has seen such an update in the wild for at least 18 months, and the new TWC devices just released by Tesla include wifi connectivity for the TWC, so more than likely the answer is no for legacy TWC devices and yes for the new form factor devices:
https://electrek.co/2020/01/15/tesla-home-charging-station-wifi-connection-design/
username_0: According to
https://github.com/username_1/TWCManager/blob/v1.1.7/docs/SlaveProtocol.md
the amps are accurate to 10 milliamps. Also,
https://github.com/username_1/TWCManager/blob/v1.1.7/lib/TWCManager/TWCSlave.py#L758
and
https://github.com/username_1/TWCManager/blob/v1.1.7/lib/TWCManager/TWCSlave.py#L58
indicates the same thing. If you go through the code and replace some `int` with `float` in various places, a non-integer amp setting will be applied.
However, I'm not sure what the real amps are that are being used. Set point and actual seem to be off by 0.2 to 0.7 amps often times, the actual being higher than the set point, it seems. I tried measuring with a multi-meter and the actual seems to be lower than the set point, but not sure how accurate the multi-meter is.
Also, with the same multi-meter, it indicates a power factor of 0.999, so my question about real power vs apparent power _may_ not be as big of a deal if the actual amps being reported by the TWC is actually correct. I may try to get a better multi-meter or the Tesla API (phone app only reports integers, and I'm not sure if it's rounding them correctly) to get to the bottom of the amps accuracy issue.
Regarding voltage, also probably will be better to compare to the Tesla API, but when comparing the phone app to the value received via RS-485, the phone app reports a value that is about 1-3 volts lower. This could make sense because 1-3 volts drop could occur through the cabling possibly?
username_1: Yes you are right, we instruct the TWC to charge the vehicle with 10 ma precision, that was a mistake on my part. We do not however see any more than integer level detail for the lifetime kWh value that we have been discussing in this thread.
username_0: Okay, well, if the power factor is indeed 0.999, then apparent power is only off from true power by 0.1%, so it's close enough. Since Volts has 3 significant figures and amps has 3 significant figures below 10 amps and 4 significant figures above 10 amps, we have a decent amount of precision on the power measurement, and therefor can have a fairly accurate energy measurement. I will do further testing to confirm the power factor is really 0.999 and the current and voltage values coming from the RS-485 port are accurate.
username_0: By the way, the voltage/kW response coming from the slave over RS-485 has more bytes than the example ones shown in the code where I linked above. Most are zeros, but a few are values that change sometimes, and I'm not sure why because they go up and down.
username_1: Is it the CRC byte at the end that you are seeing vary, or other positions in the output? We could certainly note and observe any other relevant information that seems to change in the output, however there is no documentation to refer to so it is purely observational/reverse engineering to identify the significance of the data.
username_2: I added two factors for min and maxAmps. It works like a charm. I added this in the following branch: https://github.com/username_2/TWCManager/tree/v1.2.0-betterCalculationOfChargeLoadAndTrackGreenEnergy
username_3: Note that this is not at all unusual. I have 6(!) IoTaWatt's installed at my house, across 5 sub-panels. And 1 TWC that reports voltage. And 3 solar inverters. Of all of these readings, only 2 are matching exactly. Between slight miscalibrations (two IoTaWatts are literally powered off the same outlet, and are showing 0.1-0.2V differences), and _actual_ differences caused by varying loads and the distance of the measurement to the load, you shouldn't expect them to be the same. Large loads (and measurements near them) like a TWC will cause local voltage sags, while solar locally raises the voltage to "push back" against the grid. I routinely see my voltage go up during the day during solar production. I am a bit surprised at as large of a difference between your inverter and your wall connector (7V!), and that the inverter was lower of the two. Right now I'm within 2V across all of my readings. |
gabo2192/patio-comida | 696920735 | Title: View localState in Apollo Dev Tools (cache tab)
Question:
username_0: Hey there! I'm using your apollo/cache.js design of using typePolicies and a reactive var to handle local state. This seems to be working great except that I'm unable to view the state in Apollo Dev Tools. I'm using Apollo Client v3.1.4. I'm able to write the contents of localState to the console just fine when using the useQuery hook to execute the query...but nothing in the cache tab of the dev tools. Have you experienced this issue? I'm experimenting with moving all local state into Apollo from Redux for one of my projects, but if I can't consistently view the local state without writing to the console, I can't justify making the switch. Thank you for your time!
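For context, this is the general shape of the pattern I mean (field and variable names here are illustrative, not the actual cache.js):
```ts
import { InMemoryCache, makeVar } from '@apollo/client';

// reactive variable holding the local state (illustrative shape)
export const localStateVar = makeVar<{ sidebarOpen: boolean }>({ sidebarOpen: false });

export const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        localState: {
          read() {
            return localStateVar();
          },
        },
      },
    },
  },
});
```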
Answers:
username_1: Yes, Apollo Dev Tools isn't working properly at the moment, at least not properly. I heard that there should be upcoming changes to that. |
lox/ecsy | 206954561 | Title: Thoughts on adding an ALB stack?
Question:
username_0: Just reviewing this stack based on the work done in https://github.com/awslabs/ecs-refarch-cloudformation which includes a template for the ALB, and registers tasks with the ALB listener.
Interested in your thoughts on adding support for this model.
Answers:
username_1: Yup, it's been on my todo list. It should be a pretty simple port.
username_1: Would accept/collab on a PR in a heartbeat :)
username_1: This will get included in v2. |
Oshlack/Clinker | 379138523 | Title: Streamline installation process
Question:
username_0: Hi,
the current installation is a little cumbersome and I think we can make it easier with little effort. The reason is that all the software that gets installed outside of conda right now if you follow the wiki page is also available through conda. In the end, of course it would be very nice if Clinker itself would also make to bioconda.
Short-term, we can aim for an `environment.yaml` file (https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually).
Note that the conda `r` channel is not needed anymore, if we use the multichannel `defaults` (https://github.com/conda/conda/issues/7695#issuecomment-416350450)
For example:
```yaml
name: clinker
channels:
- defaults
- bioconda
- conda-forge
dependencies:
- python=2.7
- star=2.5.3a
- samtools
- bpipe
- bioconductor-gviz # pulls biomart, R
```
The above is untested, I'll let you know how it goes and then we can decide what the best course of action is.
Clemens
Answers:
username_1: Thanks, Clemens, this looks cool. The shareable environment is a good idea and would definitely make installation more simple.
Let me know how you go :).
username_2: I actually submitted a Clinker recipe for bioconda a little while ago, but it missed a QC check at some point and there might be an issue with it. It's here: https://github.com/bioconda/bioconda-recipes/tree/fe678d1b7817c221e58888ccba98e4b52b733534/recipes/clinker
I'm hoping to revisit it or help with this effort, but either way, this would be great to get going
username_0: I see, I only checked the package search and not the recipes. But this is really strange, it was merged into master but apparently there is no package...
I added `libiconv` above, which was missing and I also had some problems with `bioconductor-genomeinfodb` and `bioconductor-genomeinfodbdata`, where apparently the conda packages did not work and I needed to install from bioconductor directly.
But first, we really need to find out why this package does not show up... |
Eugeny/ajenti-v | 41312740 | Title: php-fcgi socket does not get created
Question:
username_0: So on the newest version of ajenti-v, php-fpm no longer starts normally on system startup. This has been confirmed on two of my Centos 7 servers. (Selinux is not the cause) However, you can go into ajenti panel and click on "Restart Websites" button to get everything running. Please advise.
~~~shell
[27-Aug-2014 14:02:04] ERROR: unable to bind listening socket for address '/var/run/ajenti-v/php-fcgi-my-website-php-fcgi-0.sock': No such file or directory (2)
[27-Aug-2014 14:02:04] ERROR: FPM initialization failed
~~~
Answers:
username_1: Having the same issue on Ubuntu 14.04.1 x64
username_2: +1 |
mattermost/mattermost-plugin-jira | 996043064 | Title: Provide a way to select a default Jira instance if none is set
Question:
username_0: If the user connects to multiple Jira instances and has not yet created an issue, no default instance will be set.
This causes some commands which use the default instance (if none is passed in) to fail, for example:
- _/jira transition DP-44 done_
- _/jira instance settings list_
The feedback is bad and suggests a problem has occurred. We should tell the user no default instance is set and possibly a way to select one.
Steps:
- Login as a test user
- Connect to 2 installed Jira instances
- Do **not** create an issue or attach one, so the user has no default instance
- type _/jira instance settings list_
**Observed**:
 |
yijia2413/collections | 689765247 | Title: influxdb-grafana-docker-compose.yml
Question:
username_0: ## docker-compose.yml
```
version: '3.5'
services:
influxdb:
image: influxdb:1.8.1-alpine
ports:
- "8086:8086"
restart: always
healthcheck:
test: ["CMD-SHELL", "wget --server-response --spider --quiet http://localhost:8086/ping"]
volumes:
- influxdb-storage:/var/lib/influxdb/
environment:
- INFLUXDB_DB=db0
- INFLUXDB_ADMIN_USER=root
- INFLUXDB_ADMIN_PASSWORD=root
chronograf:
image: chronograf:latest
ports:
- "8888:8888"
restart: always
healthcheck:
test: ["CMD-SHELL", "ps aux | grep chronograf | grep -v grep || exit 1"]
depends_on:
- influxdb
grafana:
image: grafana/grafana:latest
ports:
- '3000:3000'
volumes:
- grafana-storage:/var/lib/grafana
depends_on:
- influxdb
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=<PASSWORD>
volumes:
influxdb-storage:
grafana-storage:
```
Just change the `users` and `passwords` to your own settings, then run `docker-compose up -d`
Answers:
username_0: ## init db and set retention policies
```
healthcheck:
# not a good idea create db here, but works fine
test: "wget --server-response --spider --quiet http://localhost:8086/ping \
&& [[ -e /tmpflag ]] || (touch /tmpflag && echo 'create database prometheus' | influx \
&& echo 'create database koios' | influx \
&& echo 'alter RETENTION POLICY \"autogen\" on aiops duration 30d REPLICATION 1' | influx \
&& echo 'alter RETENTION POLICY \"autogen\" on koios duration 30d REPLICATION 1' | influx \
&& echo 'alter RETENTION POLICY \"autogen\" on prometheus duration 30d REPLICATION 1' | influx)"
interval: 3s
timeout: 10s
retries: 10
``` |
microsoft/PowerToys | 671734930 | Title: [Image Resizer] filename format parameter description misplaced
Question:
username_0: ## ℹ Computer information
- Windows build number: 20180.1000
- PowerToys version: 0.20.0
- PowerToy module: Image Resizer
## 📝 Detailed reproduction steps
1. Set the output format to [ %1_%6w×%5h ] which means **Original filename _ Actual width w × Actual height h**.

2. Resize pic.jpg (1920w×1080h) with [ Custom Fit 123×456 Pixel ].


3. Then we got the... **pic_69w×123h.jpg**, but the resized pic is 123w×69h.

4. So our format actually is **Original filename _ Actual _height_ w × Actual _width_ h**.
### ✔️ Expected result
pic_123w×69h.jpg
### ❌ Actual result
pic_69w×123h.jpg
### 🔄 Fixing
%5 - Actual height --> Actual width
%6 - Actual width --> Actual height
They may just misplaced, exchanging the text can fix it.

Answers:
username_1: It's standard to use the form "width x height" so I think it's better to look at the code that names the file.
username_2: Thanks @ideaploter. Will put in a PR to fix the issue and to move these strings to the resource file.
Status: Issue closed
|
saltstack-formulas/postfix-formula | 388250464 | Title: master.cf does not support custom arguments with custom services
Question:
username_0: I made a change to postfix/files/master.cf. I don't know how to make a pull request here.
I replaced lines 139 - 149 of the current version with the below. It allows me to set up amavisd on port 10025 with all the arguments that I need.
```
{%- if wrap %}
{{ parameter_str | wordwrap(width=wrap, break_long_words=False, wrapstring='\n%s ' | format(comment)) }}
{%- else %}
{{ parameter_str }}
{%- endif -%}
{%- elif 'user' in service or 'argv' in postfix_master_services.defaults[service_name] -%}
{%- set parameter_str = "%s user=%s argv=%s %s" | format(comment,
service_param(service, service_name, 'user'),
service_param(service, service_name, 'argv'),
service_param(service, service_name, 'extras', '')) -%}
{%- if wrap %}
{{ parameter_str | wordwrap(width=wrap, break_long_words=False, wrapstring='\n%s ' | format(comment)) }}
{%- else %}
{{ parameter_str }}
{%- endif -%}
{%- endif -%}
{%- if service.args is not none -%}
{%- for option in service.get('args', postfix_master_services.defaults[
service_name].get('args', [])) -%}
{%- if option.startswith('#') %}
{{ option }}
{%- else %}
{{ comment }} {{ option }}
{%- endif %}
{%- endfor %}
{%- endif %}
```
Answers:
username_1: Creating a PR is super easy.
1. Fork the repository
2. Edit the file in your forked repository, (The pencil icon next to the trash can)
3. Add a commit message, and then select the "Create a new branch for this commit and start a pull request." option.
4. Click "Propose file change button".
I take it you want to add the `if service.args ...` block to the **extra_service** macro?
username_0: Yes, that would be about correct.
username_2: @username_0 I'd like a bit of clarification of your initial problem. I believe I have a fix for your problem, but wanna make sure I understand your underlying issue.
You would like to add a custom service to the master.cf config.
You are trying to do that via the postfix:master_config:services pillar.
Your service should be something like one of the existing postfix services, such as smtpd, smtp etc.?
But instead you end up with a dovecot-deliver or mailman styled service at the bottom of the file where argv and user is defined but empty? And you actually do not want that, right?
username_0: It has been a while since I reviewed this, but I am pretty certain the issue I was having is what you describe. It was putting in text that I can't have for the amavisd setup. It should be set up like smtpd, but with a different name and port.
username_3: I've had the same problem - as a workaround I used the extras variable:
```
mymailfilter:
enable: true
extras: |
-o content_filter=
-o local_recipient_maps=
-o relay_recipient_maps=
-o myhostname=localhost
-o smtpd_helo_restrictions=
-o smtpd_client_restrictions=
-o smtpd_sender_restrictions=
-o smtpd_recipient_restrictions=permit_mynetworks,reject
-o mynetworks=127.0.0.0/8
no_args: true
``` |
reapit/foundations | 630901979 | Title: Production ready tweaks to standalone apps pages
Question:
username_0: **Background context or User story:**
_General UI tweaks and fixes to make the standalone apps pages prod ready_
Status: Issue closed
Answers:
username_1: **Background context or User story:**
_General UI tweaks and fixes to make the standalone apps pages prod ready_
Status: Issue closed
|
lrmapp/issues | 678702880 | Title: LRM could use a "slew mode"
Question:
username_0: I'm using LRM with P3Dv4.5. When I'm slewing around in-game, if I slew up, it outputs a takeoff report; when I come down, it tries to output a landing report, but this usually results in an application crash. The only way to then continue to use LRM is to restart my computer. It would be nice if there were a way to detect that the game was in slew mode and just go to a sleep or standby mode until exiting the slew.
Answers:
username_0: In fact, I just took a look at username_1's post. The crash I'm talking about produces the same Microsoft Framework error he has in his screenshot.
username_1: Hi @username_0, I appreciate that it's been a while and you may have moved on from Simming/using another platform but I wanted to let you know that there is a new BETA release that can be found here: https://forums.fshub.io/t/lrm-client-beta-4-2-2-released-for-testing/36
I'd be interested in hearing if you still experience the same problem when using that version?
Status: Issue closed
username_1: Closing this aged issue/no response from user. |
pH7Software/pH7-Social-Dating-CMS | 426299563 | Title: Can't search members without a city
Question:
username_0: There's a default value (Houston) in the city field of the search members box. You can't delete it and search for members only by country. Shouldn't you fix this?
Status: Issue closed
Answers:
username_1: Why don't you fix it ?
username_1: https://github.com/pH7Software/pH7-Social-Dating-CMS/issues/499
Great answer from pH-7 that you should keep in your mind.
username_0: I already did the reading of his comment and that's why I closed this.
Did I do something wrong here? Are you a teacher in high schools? Cause you act like them.
username_1: I do act like one when people act with the mind of less than a teenager.
Did it never go through your mind to contribute something that others can benefit from, instead of only complaining about what you dislike and never about what you like? Tell me why the hell you are still here if nothing's good about this work. Maybe you should do the same with your GitHub account as you did with this issue.
Start by being able to write one line of code before complaining about others' work.
You compare a small project being run by some volunteers with projects being run by big team. Maybe you should ask yourself what you are missing here ? |
docker/compose | 105953572 | Title: docker-compose pull doesn't work with mixure of official and local images
Question:
username_0: I use my own image build process in my development environment. This process creates images locally so I can test them. I also use some official images (databases) in my `docker-compose.yml`. With this combination the `docker-compose pull` command fails when it tries to pull my local-only images, and if I'm not lucky no database images are pulled.
I would like to pull new images from the repository from time to time, and so far I have to list them all. Is it possible to pass some flag to `pull` so that it tries to pull all images and only reports those that failed (instead of stopping pulling the others)?
Answers:
username_1: I've run into this issue myself, and I think it's pretty common. It would be good to fix it. The workaround I've used is to list the services you want to pull instead of pulling all of them.
A flag like `docker-compose pull --skip-missing` might be good, to allow any failed pulls to be skipped. I'm not sure if we want to change the default behaviour.
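In the meantime the workaround looks like this (service names are just examples):
```sh
# pull only the services that actually come from a registry
docker-compose pull db redis
```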
username_0: I've created an implementation that skips all pull errors. The name of the flag is up for discussion, but I've rolled with @username_1's `--skip-missing`
Status: Issue closed
|
aspnetboilerplate/aspnetboilerplate | 275956052 | Title: Can not migrate the database.
Question:
username_0: hi @username_1
I downloaded the latest template today (including Zero); the template contains the latest Abp 3.2.4.
We did not make any changes.
Running `PM> Update-Database` fails with:
``` csharp
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
User profile is available. Using 'C:\Users\malim\AppData\Local\ASP.NET\DataProtection-Keys' as key repository and Windows DPAPI to encrypt keys at rest.
Application startup exception: System.Data.SqlClient.SqlException (0x80131904): Invalid object name 'AbpEditions'.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
at System.Data.Common.DbCommand.ExecuteReader()
at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.Execute(IRelationalConnection connection, DbCommandMethod executeMethod, IReadOnlyDictionary`2 parameterValues)
at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.ExecuteReader(IRelationalConnection connection, IReadOnlyDictionary`2 parameterValues)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryingEnumerable`1.Enumerator.BufferlessMoveNext(Boolean buffer)
at Microsoft.EntityFrameworkCore.Storage.Internal.SqlServerExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryingEnumerable`1.Enumerator.MoveNext()
at System.Linq.Enumerable.TryGetFirst[TSource](IEnumerable`1 source, Boolean& found)
at lambda_method(Closure , QueryContext )
at Microsoft.EntityFrameworkCore.Query.Internal.QueryCompiler.<>c__DisplayClass17_1`1.<CompileQueryCore>b__0(QueryContext qc)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryCompiler.Execute[TResult](Expression query)
at Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryProvider.Execute[TResult](Expression expression)
at System.Linq.Queryable.FirstOrDefault[TSource](IQueryable`1 source, Expression`1 predicate)
at EasyFast.EntityFrameworkCore.Seed.Host.DefaultEditionCreator.CreateEditions() in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\Seed\Host\DefaultEditionCreator.cs:line 25
at EasyFast.EntityFrameworkCore.Seed.Host.DefaultEditionCreator.Create() in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\Seed\Host\DefaultEditionCreator.cs:line 20
at EasyFast.EntityFrameworkCore.Seed.Host.InitialHostDbBuilder.Create() in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\Seed\Host\InitialHostDbBuilder.cs:line 14
at EasyFast.EntityFrameworkCore.Seed.SeedHelper.SeedHostDb(EasyFastDbContext context) in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\Seed\SeedHelper.cs:line 25
at EasyFast.EntityFrameworkCore.Seed.SeedHelper.WithDbContext[TDbContext](IIocResolver iocResolver, Action`1 contextAction) in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\Seed\SeedHelper.cs:line 41
at EasyFast.EntityFrameworkCore.Seed.SeedHelper.SeedHostDb(IIocResolver iocResolver) in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\Seed\SeedHelper.cs:line 17
at EasyFast.EntityFrameworkCore.EasyFastEntityFrameworkModule.PostInitialize() in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\EasyFastEntityFrameworkModule.cs:line 46
at System.Collections.Generic.List`1.ForEach(Action`1 action)
at Abp.AbpBootstrapper.Initialize() in D:\Github\aspnetboilerplate\src\Abp\AbpBootstrapper.cs:line 158
at Abp.AspNetCore.AbpApplicationBuilderExtensions.UseAbp(IApplicationBuilder app, Action`1 optionsAction) in D:\Github\aspnetboilerplate\src\Abp.AspNetCore\AspNetCore\AbpApplicationBuilderExtensions.cs:line 36
at EasyFast.Web.Startup.Startup.Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) in D:\OAOA\src\EasyFast.Web.Mvc\Startup\Startup.cs:line 58
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app)
at Microsoft.AspNetCore.Hosting.Internal.AutoRequestServicesStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
ClientConnectionId:38aca840-9275-4d9e-8b92-ac0c5a50f807
Error Number:208,State:1,Class:16
crit: Microsoft.AspNetCore.Hosting.Internal.WebHost[6]
Application startup exception
System.Data.SqlClient.SqlException (0x80131904):Invalid object name 'AbpEditions'
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
[Truncated]
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app)
at Microsoft.AspNetCore.Hosting.Internal.AutoRequestServicesStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
ClientConnectionId:38aca840-9275-4d9e-8b92-ac0c5a50f807
Error Number:208,State:1,Class:16
An error occurred while calling method 'BuildWebHost' on class 'Program'. Continuing without the application service provider. Error: Invalid object name 'AbpEditions'
System.Exception: Could not find content root folder!
at EasyFast.Web.WebContentDirectoryFinder.CalculateContentRootFolder() in D:\OAOA\src\EasyFast.Core\Web\WebContentFolderHelper.cs:line 30
at EasyFast.EntityFrameworkCore.EasyFastDbContextFactory.CreateDbContext(String[] args) in D:\OAOA\src\EasyFast.EntityFrameworkCore\EntityFrameworkCore\EasyFastDbContextFactory.cs:line 15
at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.CreateContextFromFactory(Type factory)
at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.<>c__DisplayClass14_0.<FindContextFactory>b__1()
at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.CreateContext(Func`1 factory)
at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.CreateContext(String contextType)
at Microsoft.EntityFrameworkCore.Design.Internal.MigrationsOperations.UpdateDatabase(String targetMigration, String contextType)
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.UpdateDatabase.<>c__DisplayClass0_1.<.ctor>b__0()
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action)
Could not find content root folder!
```
Answers:
username_1: Hi,
I downloaded it and worked as expected. Please check that:
1. Ensure that you have selected correct startup project (Web.Host of angular UI, Web.Mvc for MVC UI based on your framework selection).

2. Ensure that you have selected .EntityFrameworkCore project as default project from Package Manager Console

3. If you previously has a database, delete it (probably named EasyFastDb on SQL Server).
username_0: @username_1
I am using the Multi Page Web Application template, and I performed the same steps you described.
Then I changed to the Single Page Web Application template (same as your screenshot)
and I get the following output:
``` csharp
PM> Update-Database
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
User profile is available. Using 'C:\Users\malim\AppData\Local\ASP.NET\DataProtection-Keys' as key repository and Windows DPAPI to encrypt keys at rest.
Application startup exception: System.Data.SqlClient.SqlException (0x80131904): Invalid object name 'AbpEditions'
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
at System.Data.Common.DbCommand.ExecuteReader()
at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.Execute(IRelationalConnection connection, DbCommandMethod executeMethod, IReadOnlyDictionary`2 parameterValues)
at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.ExecuteReader(IRelationalConnection connection, IReadOnlyDictionary`2 parameterValues)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryingEnumerable`1.Enumerator.BufferlessMoveNext(Boolean buffer)
at Microsoft.EntityFrameworkCore.Storage.Internal.SqlServerExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryingEnumerable`1.Enumerator.MoveNext()
at System.Linq.Enumerable.TryGetFirst[TSource](IEnumerable`1 source, Boolean& found)
at lambda_method(Closure , QueryContext )
at Microsoft.EntityFrameworkCore.Query.Internal.QueryCompiler.<>c__DisplayClass17_1`1.<CompileQueryCore>b__0(QueryContext qc)
at Microsoft.EntityFrameworkCore.Query.Internal.QueryCompiler.Execute[TResult](Expression query)
at Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryProvider.Execute[TResult](Expression expression)
at System.Linq.Queryable.FirstOrDefault[TSource](IQueryable`1 source, Expression`1 predicate)
at QA.EntityFrameworkCore.Seed.Host.DefaultEditionCreator.CreateEditions() in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\Seed\Host\DefaultEditionCreator.cs:line 25
at QA.EntityFrameworkCore.Seed.Host.DefaultEditionCreator.Create() in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\Seed\Host\DefaultEditionCreator.cs:line 20
at QA.EntityFrameworkCore.Seed.Host.InitialHostDbBuilder.Create() in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\Seed\Host\InitialHostDbBuilder.cs:line 14
at QA.EntityFrameworkCore.Seed.SeedHelper.SeedHostDb(QADbContext context) in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\Seed\SeedHelper.cs:line 25
at QA.EntityFrameworkCore.Seed.SeedHelper.WithDbContext[TDbContext](IIocResolver iocResolver, Action`1 contextAction) in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\Seed\SeedHelper.cs:line 41
at QA.EntityFrameworkCore.Seed.SeedHelper.SeedHostDb(IIocResolver iocResolver) in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\Seed\SeedHelper.cs:line 17
at QA.EntityFrameworkCore.QAEntityFrameworkModule.PostInitialize() in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\QAEntityFrameworkModule.cs:line 46
at System.Collections.Generic.List`1.ForEach(Action`1 action)
at Abp.AbpBootstrapper.Initialize() in D:\Github\aspnetboilerplate\src\Abp\AbpBootstrapper.cs:line 158
at Abp.AspNetCore.AbpApplicationBuilderExtensions.UseAbp(IApplicationBuilder app, Action`1 optionsAction) in D:\Github\aspnetboilerplate\src\Abp.AspNetCore\AspNetCore\AbpApplicationBuilderExtensions.cs:line 36
at QA.Web.Host.Startup.Startup.Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) in C:\Users\malim\Desktop\123\aspnet-core\src\QA.Web.Host\Startup\Startup.cs:line 95
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app)
at Microsoft.AspNetCore.Hosting.Internal.AutoRequestServicesStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
ClientConnectionId:f61b1c33-8b09-4dff-b170-c699d0d73a21
Error Number:208,State:1,Class:16
crit: Microsoft.AspNetCore.Hosting.Internal.WebHost[6]
Application startup exception
System.Data.SqlClient.SqlException (0x80131904): Invalid object name 'AbpEditions'
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
[Truncated]
at QA.EntityFrameworkCore.QAEntityFrameworkModule.PostInitialize() in C:\Users\malim\Desktop\123\aspnet-core\src\QA.EntityFrameworkCore\EntityFrameworkCore\QAEntityFrameworkModule.cs:line 46
at System.Collections.Generic.List`1.ForEach(Action`1 action)
at Abp.AbpBootstrapper.Initialize() in D:\Github\aspnetboilerplate\src\Abp\AbpBootstrapper.cs:line 158
at Abp.AspNetCore.AbpApplicationBuilderExtensions.UseAbp(IApplicationBuilder app, Action`1 optionsAction) in D:\Github\aspnetboilerplate\src\Abp.AspNetCore\AspNetCore\AbpApplicationBuilderExtensions.cs:line 36
at QA.Web.Host.Startup.Startup.Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) in C:\Users\malim\Desktop\123\aspnet-core\src\QA.Web.Host\Startup\Startup.cs:line 95
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app)
at Microsoft.AspNetCore.Hosting.Internal.AutoRequestServicesStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.BuildApplication()
ClientConnectionId:<KEY>
Error Number:208,State:1,Class:16
An error occurred while calling method 'BuildWebHost' on class 'Program'. Continuing without the application service provider. Error: Invalid object name 'AbpEditions'
Applying migration '20170424115119_Initial_Migrations'.
Applying migration '20170608053244_Upgraded_To_Abp_2_1_0'.
Applying migration '20170621153937_Added_Description_And_IsActive_To_Role'.
Applying migration '20170703134115_Remove_IsActive_From_Role'.
Applying migration '20170804083601_Upgraded_To_Abp_v2.2.2'.
Done.
```
username_0: My colleague got the same error on his computer.
username_0: The problem has been solved, thank you very much @username_1.
I found a few things.
**If I create the EasyFastDb database in advance, I get the Invalid object name 'AbpEditions' error.**
But if I do not create the database in advance, VS prompts me with [Cannot open database "EasyFastDb" requested by the login. The login failed.], even though I am using sa or a local Windows account.
Even so, the database is created successfully and the migration succeeds; there is only this error message.
I will continue to study this issue. Closing it for now.
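For anyone hitting the same thing, the sequence that worked for me was roughly (a sketch; the database name comes from my project):

```powershell
# 1. Make sure the target database (EasyFastDb) does NOT already exist on the SQL Server instance.
# 2. Select the *.EntityFrameworkCore project as the default project in the Package Manager Console.
# 3. Let EF Core create the database and apply the migrations:
PM> Update-Database
```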
Status: Issue closed
|
ag-grid/ag-grid | 307612360 | Title: How to disable tool panel (enterprise)
Question:
username_0: On v17 you have this nice looking ToolPanel that I don't want. I tried to remove it with `showToolPanel` but that doesn't work.
Can't see anything in the docs about it here: https://www.ag-grid.com/javascript-grid-tool-panel/
I will use `display: none` for now, but it seems a waste of resources to render it in the first place.
Thanks
Answers:
username_1: Try
`toolPanelSuppressSideButtons = true;`
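For example, as part of the grid options (a sketch; the other properties are placeholders for your existing setup):

```js
const gridOptions = {
  toolPanelSuppressSideButtons: true,
  columnDefs: columnDefs, // your existing column definitions
  rowData: rowData        // your existing row data
};
```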
username_0: Thanks that worked
Status: Issue closed
username_2: I'm just curious why it isn't mentioned in the official documentation? I couldn't find it there.
pybind/pybind11_json | 581984154 | Title: Error: nlohmann::json as input argument
Question:
username_0: I'm trying to send a JSON to `my_func()`, modify it and return it back to Python.
**Example:**
```
pybind11::object my_func(nlohmann::json& a)
{
a["new"] = "new string";
return a;
}
```
It compiles OK, but when I try to call this function from my Python module, I get the error below:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: test(): incompatible function arguments. The following argument types are supported:
1. (self: my_lib.MusicXML, arg0: nlohmann::basic_json<std::map, std::vector, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, long, unsigned long, double, std::allocator, nlohmann::adl_serializer>) -> object
```
**What do I have to do to fix it?**
Thank you,
Answers:
username_1: You should take a `py::object obj` as argument and do the conversion like this:
```
nl::json j = obj;
```
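A fuller sketch of that workaround (assuming the pybind11_json caster header is included, so the conversions in both directions happen automatically):

```cpp
#include <pybind11/pybind11.h>
#include <nlohmann/json.hpp>
#include <pybind11_json/pybind11_json.hpp>

namespace py = pybind11;
namespace nl = nlohmann;

nl::json my_func(py::object obj)
{
    nl::json a = obj;          // py::object -> nlohmann::json
    a["new"] = "new string";
    return a;                  // converted back to a Python object on return
}

PYBIND11_MODULE(my_lib, m)
{
    m.def("my_func", &my_func);
}
```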
Status: Issue closed
username_1: This will change in the next release. You will be able to take an `nlohmann::json` as input. The `py::object` to `nlohmann::json` conversion will be done automatically |
raiguard/Factorio-SmallMods | 755490445 | Title: Quickbar template crash
Question:
username_0: ## Describe the Bug
Quickbar template crashed the server
## Reproduce
no idea
## logs
```
11270.170 Error MainLoop.cpp:1281: Exception at tick 1599005: The mod Quickbar Templates (2.2.0) caused a non-recoverable error.
Please report this error to the mod author.
Error while running event QuickbarTemplates::on_player_cursor_stack_changed (ID 29)
LuaGuiElement API call when LuaGuiElement was invalid.
stack traceback:
[C]: in function '__index'
__QuickbarTemplates__/control.lua:213: in function <__QuickbarTemplates__/control.lua:198>
11270.173 Error ServerMultiplayerManager.cpp:91: MultiplayerManager failed: "The mod Quickbar Templates (2.2.0) caused a non-recoverable error.
Please report this error to the mod author.
Error while running event QuickbarTemplates::on_player_cursor_stack_changed (ID 29)
LuaGuiElement API call when LuaGuiElement was invalid.
stack traceback:
[C]: in function '__index'
__QuickbarTemplates__/control.lua:213: in function <__QuickbarTemplates__/control.lua:198>"
11270.173 Info ServerMultiplayerManager.cpp:780: updateTick(1599005) changing state from(InGame) to(Failed)
11270.173 Quitting: multiplayer error.
```
Answers:
username_1: Do you happen to have Krastorio 2 enabled?
username_0: Yes, we did not know that the mods were incompatible.
Status: Issue closed
username_0: https://mods.factorio.com/mod/Krastorio2/changelog ah, you mean updating to 1.0.13.
I will check that when I have time. I am not sure what exactly you mean by re-enable, but I have already uninstalled your mod; I would reinstall it, but continue to play on a map that was generated with your mod enabled.
In any case, thank you for your reply. I will close the issue and re-open it in the unlikely case of continuing crashes.
dotnet/command-line-api | 965996029 | Title: CommandHandler InvokeAsync doesnt set exit code
Question:
username_0: My app has a main method looking something like this:
```csharp
var rootCommand = new RootCommand { };
// ...
rootCommand.Handler = CommandHandler.Create<T1, T2, T3>(MyHelperMethod);
await rootCommand.InvokeAsync(args);
```
The function I use to do all the work, MyHelperMethod, is designed to return exit codes for certain exceptions the app might encounter. I expected the main method to also return those same codes when the command is invoked. If this is by design, would you make an example of how to have main return negative error codes?
Answers:
username_1: Where/How does your Main method return any error code? It seems it doesn't, because you seem to ignore the return value of rootCommand.InvokeAsync...
```c#
await rootCommand.InvokeAsync(args);
``` |
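In other words, something like this should propagate the handler's code (a sketch; `InvokeAsync` returns the exit code as an `int`):

```csharp
static async Task<int> Main(string[] args)
{
    var rootCommand = new RootCommand { /* options ... */ };
    rootCommand.Handler = CommandHandler.Create<T1, T2, T3>(MyHelperMethod);
    return await rootCommand.InvokeAsync(args); // exit code set by the handler
}
```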
beetbox/beets | 728451418 | Title: How to Autotag Discogs "Style" to "Genre" metadata? P
Question:
username_0: ### Use case
I'm trying to use beets to autotag Discogs' "Style" into my "Genre" field in the track metadata. I am completely new to Python, beets, and coding. Can anyone help?
Answers:
username_1: This is probably a question best for our [forums](https://discourse.beets.io).
Status: Issue closed
username_0: thanks! |
Same-Writer/SearchExpander | 940007422 | Title: Seeing unexpected bursts of results from notify_if_changed()
Question:
username_0: It seems like this function is puking out a handful of old listings as new/changed. Not sure where logic is failing, but there's some debugging that needs to be done
Answers:
username_0: Turns out the 'result-hood' class' value was changing and causing this, so everything is and was working correctly. Since the neighborhood is not particularly useful info, I've removed it from scraping. This could be added back in a larger change to allow users to filter results that trigger notifications in settings.yaml.
username_0: closed with a3595fab437ec297e349c62c345dad748b19e215
Status: Issue closed
|
hamidrezaomidvar/LINDER | 657376841 | Title: xn and yn potential confusion
Question:
username_0: xn and yn for dividing the rectangle for the fraction calculation: these are the numbers of nodes in the x and y directions, so a value of 1 does not make sense, since one pixel has 2 nodes in each direction. Either fix this, or raise an error (see the sketch below) and add an explanation to the tutorial.
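A minimal sketch of the second option (variable names are assumed from this issue; the real check would live wherever xn and yn are read in):

```python
# hypothetical validation: xn/yn are node counts, so one pixel needs at least 2 nodes per direction
if xn < 2 or yn < 2:
    raise ValueError("xn and yn are node counts, not pixel counts; values below 2 are not meaningful")
```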
flutter/flutter-intellij | 454941923 | Title: Code reformatted incorrectly when using ifs with no brackets
Question:
username_0: I apologize for the horrible naming and horrible placement, but I'm drawing a huge blank on terminology, so please rename and move this for me.
The issue is explained easily with this gif.

Thanks for the feedback! If your issue is related to the Flutter framework itself,
please open an issue at
[github.com/flutter/flutter](https://github.com/flutter/flutter/issues/new).
## Steps to Reproduce
type an if statement like so:
`if(true) return false;`
Insert an open curly brace and press enter, and the closing brace will place itself after the following word regardless of what's actually there.
if(true)**{|** return false;
turns to
```dart
if(true){
return
} false;
```
_Please tell us what you were doing and what went wrong_
Minding my own business when next thing I knew, this happened!
## Version info
[✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Mac OS X 10.14.4 18E226, locale en-US)
• Flutter version 1.5.4-hotfix.2 at /Users/thinkdigital/development/flutter
• Framework revision 7a4c33425d (6 weeks ago), 2019-04-29 11:05:24 -0700
• Engine revision 52c7a1e849
• Dart version 2.3.0 (build 2.3.0-dev.0.5 a1668566e5)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/thinkdigital/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = /Users/thinkdigital/Library/Android/sdk
• Java binary at: /Users/thinkdigital/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/183.5522156/Android
Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.2.1, Build version 10E1001
• ios-deploy 1.9.4
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.4)
• Android Studio at /Users/thinkdigital/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/183.5522156/Android
Studio.app/Contents
• Flutter plugin version 35.2.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[!] IntelliJ IDEA Ultimate Edition (version 2019.1)
• IntelliJ at /Users/thinkdigital/Applications/JetBrains Toolbox/IntelliJ IDEA Ultimate.app
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
[Truncated]
[✓] Android Studio (version 3.4)
• Android Studio at /Users/thinkdigital/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/183.5522156/Android
Studio.app/Contents
• Flutter plugin version 35.2.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[!] IntelliJ IDEA Ultimate Edition (version 2019.1)
• IntelliJ at /Users/thinkdigital/Applications/JetBrains Toolbox/IntelliJ IDEA Ultimate.app
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• For information about installing plugins, see
https://flutter.dev/intellij-setup/#installing-the-plugins
[✓] VS Code (version 1.34.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.1.0
[✓] Connected device (1 available)
• iPhone X • AA9863F4-FD5E-403D-BD7B-3303F6928AF9 • ios • com.apple.CoreSimulator.SimRuntime.iOS-12-2 (simulator)
Answers:
username_1: Thanks for the feedback! The work to address this would likely happen in the Dart plugin for IntelliJ
I can repro something similar to your above screenshot in Dart code. When I do the same in Java, IntelliJ does put the matching `}` in a location you'd expect (the curly braces are balanced appropriately).
cc @jwren @username_2
username_0: My pleasure. Someone please move it to the appropriate place. It's hard to
figure out and place issues in the correct place. I'm just proud of myself
for getting this up here
username_0: Found another one.

username_2: @username_0, I failed to reproduce the issue in IntelliJ IDEA 2019.1.3 with the following code
```
main() {
if (true) return false;
}
```
I also tried with something similar to the gif from the issue description, but still no luck. Could you paste a minimal file to reproduce?
username_3: I can't reproduce either. @username_1 any tips on how you reproduced the issue?
username_1: I was able to repro via:
```
if (true) /*caret*/return;
```
However, I did see correct behavior in other situations:
```
if (!serviceManager.hasConnection) /*caret*/toast('Device connection lost.');
``` |
godotengine/godot | 233772911 | Title: MeshDataTool get_vertex_count() returns the count of tris instead of vertices.
Question:
username_0: **Operating system or device - Godot version:**
OS: Manjaro 17.0.1 Gellivara
Kernel: x86_64 Linux 4.9.30-1-MANJARO
CPU: Intel Celeron G1840 @ 2x 2.8GHz
GPU: Mesa DRI Intel(R) Haswell Desktop
RAM: 1416MiB / 3827MiB
Godot v2.1.3.stable.official
**Issue description:**
When using get_vertex_count() on the MeshDataTool generated with create_from_surface(), the returned count is almost twice the number of vertices it should have.
Comparing it to the data displayed in Blender, the number matches the number of tris in the mesh, which is obviously way off. When I attempt to place other nodes at the vertex locations, there are 2 or 3 copies all grouped in the same location because of this issue.
**Steps to reproduce:**
1. Load a mesh into a MeshInstance (or import a scene).
2. Create a new MeshDataTool, call create_from_surface(), then get the vertex count (a minimal sketch follows below).
3. Print the count and compare.
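A minimal sketch of those steps (node path and surface index are assumptions):

```gdscript
var mdt = MeshDataTool.new()
mdt.create_from_surface(get_node("MeshInstance").get_mesh(), 0)
print(mdt.get_vertex_count())  # compare with the vertex count Blender reports
```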
**Link to minimal example project:**
I can provide one if requested.
Answers:
username_1: Is the mesh flat shaded, per chance?
If it is, then each flat face would get a seperate copy of the vertex (as if you have applied the edge split modifier), so that normals don't get interpolated between faces, as normals are per-vertex.
username_0: I tried with a bunch of different meshes, the last one I think was flat shaded. I will try again and come back.
username_2: I will archive this issue, since it seems it's resolved.
Status: Issue closed
|
line/centraldogma | 1097391131 | Title: Feature request: Copy api url
Question:
username_0: I wanted to find the API URL because I want to use `curl` to fetch my configuration.
I finally found it, and it looks like this:
`curl http://xx.xx.xx.xx:xxxxx/api/v1/projects/food/repos/dogma/contents/apple`
where `food` is a project name, `dogma` is a user repository name and `apple` is a content name.
I found it by using the command line tool 'dogma' since it returned:
```
[
{
"type": "TEXT",
"path": "/apple",
"revision": 4,
"url": "/api/v1/projects/food/repos/dogma/contents/apple"
}
]
```
However, it would be better if I could get this URL easily from the web console.
nodejs/help | 237367762 | Title: Use transform stream to prepend a string to beginning of each stdout/stderr line
Question:
username_0: I am looking for an answer to this question on SO:
https://stackoverflow.com/questions/44664207/transform-stream-to-prepend-string-to-each-line
I spawn a child process like so:
```js
const n = cp.spawn('bash');
n.stdout.pipe(process.stdout);
n.stderr.pipe(process.stderr);
```
I am looking for a transform stream so that I can prepend something like '[child process]' to the beginning of each line from the child, so I know that the stdio is coming from the child versus the parent process.
So it would look like:
```js
const getTransformPrepender = function() : Transform {
return ...
}
n.stdout.pipe(getTransformPrepender('[child]')).pipe(process.stdout);
n.stderr.pipe(getTransformPrepender('[child]')).pipe(process.stderr);
```
does anyone know if there is an existing transform package like this or how to write one?
Answers:
username_0: I ended up writing this to solve my need and it works well:
https://github.com/username_0/prepend-transform
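For reference, a minimal hand-rolled version of such a prepender might look like this (illustrative only; the published package handles more edge cases):

```js
const { Transform } = require('stream');

function getTransformPrepender(prefix) {
  let buffered = '';
  return new Transform({
    transform(chunk, encoding, callback) {
      const lines = (buffered + chunk.toString()).split('\n');
      buffered = lines.pop(); // keep any trailing partial line until the next chunk
      for (const line of lines) {
        this.push(`${prefix} ${line}\n`);
      }
      callback();
    },
    flush(callback) {
      if (buffered) this.push(`${prefix} ${buffered}\n`);
      callback();
    }
  });
}

// usage:
// n.stdout.pipe(getTransformPrepender('[child]')).pipe(process.stdout);
```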
Status: Issue closed
|
rstudio/rsconnect | 298643061 | Title: [Feature request] Show line numbers of errors in shinyappsio
Question:
username_0: Debugging a deployed app that's on shinyappsio is difficult. One thing that can make the process a bit easier is if a line number of where the error occurred appeared in the log beside the offending function, like it does when you're running the same app in RStudio.
Status: Issue closed
Answers:
username_1: This would be helpful, but would need to happen in the Shiny package and Shinyapps.io, not in this package. |
masaun/crowdsourcing-portal-for-art | 636940351 | Title: Create a demo video and/or deploy an example version
Question:
username_0: Hi @masaun , thank you for the submission. Your voting and awarding interest on the deposits is interesting :)
Would you be able to make a quick video to help us understand how to use the product?
It would also be useful if you deployed a version of this using github pages or vercel.com.
Let me know, thanks!
Answers:
username_0: Hi @masaun , I really appreciated the experimentation that you did on the mechanism and using tokens for voting. Great work.
Unfortunately this submission is further from what we would be able to put in production than the other submission. If you want to follow the project I'm sure we will add crowd voting for the art too at some point :) You can join us on Telegram if you are interested!
We really appreciate the effort you put into this :) |
yuantuo666/baiduwp-php | 873702788 | Title: WebSocket 地址验证正则有误
Question:
username_0: A WebSocket connection address that uses a raw IPv6 address as the host does not work.
Answers:
username_1: /dog This will be fixed in the next version; the previous release has been sitting around for too long.
username_0: (The regex I rewrote clearly has no problem.)
username_1: `/^wss?\:\/\/(((([A-Za-z0-9]+[A-Za-z0-9\-]+[A-Za-z0-9]+)|([A-Za-z0-9]+))(\.([A-Za-z0-9]+[A-Za-z0-9\-]+[A-Za-z0-9]+)|([A-Za-z0-9]+))*(\.[A-Za-z0-9]{2,10}))|localhost|(([01]?\d?\d)|(2[0-4]\d)|(25[0-5]))(\.([01]?\d?\d)|(2[0-4]\d)|(25[0-5])){3}|((\[[A-Za-z0-9:]{2,39}\])|([A-Za-z0-9:]{2,39})))(\:\d{1,5})?(\/.*)?$/`
It's just **a** bit long.
Status: Issue closed
|
ReeceStevens/ut_ewh_audiometer_2014 | 1114088078 | Title: Improvements for audiometer
Question:
username_0: Hi,
I made quite a few improvements to your audiometer in my fork
https://github.com/username_0/audiometer
- Update to SDK30
- Fix bug: left/right results were identical
- Different colors for left/right line charts
- Improve test cycle
- Add option to skip calibration if a calibration is available
- Extend measurement down to 125Hz
- Remove email export and replace by share intent
- Add option to delete test results
- Fix bug: sound played on left and right at same time on some devices
- Migrate to better FFT algorithm from University of Princeton -> needs license to be changed to GPL V3
A question: do you remember where the "70" in `resultingdB[x] = 20 * Math.log10(resultingRms[x]) + 70;` comes from?
Maybe you want to have a look at it: (simply remove .zip)
[app-release.apk.zip](https://github.com/ReeceStevens/ut_ewh_audiometer_2014/files/7935145/app-release.apk.zip)
Status: Issue closed |
tensorflow/tensorflow | 498396406 | Title: Same Issue As Issue #31509 With Adamax - BaseCollectiveExecutor::StartAbort Out of range:
Question:
username_0: The previous issue described in #31509 was fixed, but I am now experiencing exactly the same issue with all the same setup using the latest nightly build of TF2.0 when using tf.keras.optimizers.Adamax
Answers:
username_1: @username_0
I am not seeing any issue with tf.keras.optimizers.Adam in the latest TF 2.0.0-rc2 version. Please find the gist [here](https://colab.sandbox.google.com/gist/username_1/fbae46031175e81f1c11f7e76a1e3ed0/untitled224.ipynb). Thanks!
username_2: I am having the exact same problem using this mock model. I am using tf 2.0.0 release.
On Windows.
```python
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
if __name__ == '__main__':
x = tf.random.normal((14000, 30, 1))
y = tf.ones_like(x)
discriminator = tf.keras.models.Sequential([
tf.keras.layers.LSTM(100, input_shape=(30, 1), return_sequences=True),
tf.keras.layers.LSTM(100, recurrent_dropout=0.4,
dropout=0.4, return_sequences=True)
])
discriminator.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(lr=0.001))
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.batch(64)
discriminator.fit(dataset, epochs=2)
```
username_1: @username_2
I am able to execute the code successfully in Colab using TF 2.0.0-rc2. Please find the gist [here](https://colab.sandbox.google.com/gist/username_1/422f9b591f99f38e5c7f9f9b7211fea1/untitled248.ipynb). Thanks!
username_3: I am also having this message feeding a dataset into a 1D Convnet. Happens on my Mac with tf version 2.0.0-rc2. Not reproducible on Colab.
```python
import numpy as np
import tensorflow as tf
def create_timeseries_element():
# returns a random time series of 100 intervals, each with 3 features,
# and a random one-hot array of 5 entries
data = np.random.rand(100,3)
label = np.eye(5, dtype='int')[np.random.choice(5)]
return data, label
def data_generator():
d, l = create_timeseries_element()
yield (d, l)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(128, 9, activation='relu', input_shape=(100, 3)),
tf.keras.layers.Conv1D(128, 9, activation='relu'),
tf.keras.layers.MaxPooling1D(2),
tf.keras.layers.Conv1D(256, 5, activation='relu'),
tf.keras.layers.Conv1D(256, 5, activation='relu'),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(5, activation='softmax')])
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
ds = tf.data.Dataset.from_generator(data_generator, output_types=(tf.float32, tf.int32),
output_shapes=(tf.TensorShape([100, 3]), tf.TensorShape([5])))
model.fit(ds.batch(32))
```
username_4: I am having an issue similar to this and tried to run it in Colab, just to get a never-ending runtime. I asked my question in full on SO [here](https://stackoverflow.com/questions/58245855/numpy-arrays-used-in-training-in-tf1-keras-have-much-lower-accuracy-in-tf2?noredirect=1#comment102869129_58245855).
I had some numpy arrays that were trained in keras in the previous version of tf and now have to rewrite my model. Got way worse accuracy so I am thinking I need to switch to tf.data.Dataset.
So I did:
```python
train_dataset = tf.data.Dataset.from_tensor_slices((X_train_deleted_nans, y_train_no_nans))
train_dataset = train_dataset.shuffle(SHUFFLE_CONST).batch(BATCH_SIZE)
```
`model.summary()` gave me:
```
BatchDataset shapes: ((None, 2756), (None,)), types: (tf.float64, tf.int64)
Model: sequential
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 1379) 3801903
dropout (Dropout) (None, 1379) 0
dense_1 (Dense) (None, 1379) 1903020
dropout_1 (Dropout) (None, 1379) 0
dense_2 (Dense) (None, 1379) 1903020
dropout_2 (Dropout) (None, 1379) 0
dense_3 (Dense) (None, 1379) 1903020
dropout_3 (Dropout) (None, 1379) 0
dense_4 (Dense) (None, 1) 1380
=================================================================
Total params: 9,512,343
Trainable params: 9,512,343
Non-trainable params: 0
```
```
model.compile(optimizer=adam, loss=bce, metrics=['accuracy'])
model.fit(train_dataset, epochs=1000, verbose=0)
```
Once the training starts I get this warning error:
```
2019-10-04 23:47:56.691434: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
```
username_5: I am having the same issue as above on TF 2.0 release. Is this a bug with tensorflow or is there an issue with the code?
username_0: It seems everybody who is having this issue is using Windows. I presume that must have something to do with it?
username_3: I am having the issue on a Mac with the latest version of MacOS
username_6: I am having the same problem after porting my code from 1.14 to 2.0.
I am running on UBUNTU 18.04 (not only a windows problem). It occurs for me during both training and predict. (so not linked to the optimiser). I do NOT get the problem if i hide the GPU. I do get the problem if I expose the GPU.
username_7: I think I may have found why it is complaining. However, I have no idea how to fix it. While training, we all get the IteratorGetNext Error: Sequence out of range.
I noticed that, let's say, I have a dataset of size 60,000 with a batch size of 64; that would require floor(60000/64) = 937 steps to iterate through the entire dataset for one epoch. However, when training using .fit(verbose=1) I notice that it attempts to iterate through the dataset 938 times (most likely a rounding error, because 60000/64 = 937.5) and thus I get this error. Can someone please confirm this is the case for you as well? Thanks
username_6: Good idea. I think you can use the take statement on the dataset to limit
yourself to the first 64*937 records avoiding any need to round at the end
username_8: I'm experiencing a similar problem to @username_7 with my code in that I have a number of samples that doesn't neatly divide by the batch size. @username_7 code works for me and the error disappears if I repeat the dataset and specify `steps_per_epoch`.
- I'm seeing this on Ubuntu 18.04, so definitely not a Windows only problem.
- I see this issue with both the tensorflow 2 release and the tensorflow 2 RC2.
Trying @username_6's advice above and using a `take` a number of samples that are divisible by the batch size I find that I still get the same message, even when using the simplified example provided by @username_7:
```python
import tensorflow as tf
data = tf.random.normal((60000,30,4))
ground_truth = tf.ones((60000,1))
dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).take(512).batch(64)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(1, activation='softmax')
])
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
#predefined model here: input: [?, 30,4] output: [?,1]
model.fit(dataset, epochs=5)
```
```
Epoch 1/5
2019-10-08 09:01:00.212603: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
8/Unknown - 1s 84ms/step - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.443158: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_16]]
2019-10-08 09:01:00.443241: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
8/8 [==============================] - 1s 85ms/step - loss: 102.0359 - accuracy: 1.0000
Epoch 2/5
1/8 [==>...........................] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.502043: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[Shape/_4]]
2019-10-08 09:01:00.502100: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
8/8 [==============================] - 0s 7ms/step - loss: 102.0359 - accuracy: 1.0000
Epoch 3/5
1/8 [==>...........................] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.544339: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_16]]
2019-10-08 09:01:00.544373: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
8/8 [==============================] - 0s 5ms/step - loss: 102.0359 - accuracy: 1.0000
Epoch 4/5
1/8 [==>...........................] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.587002: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_16]]
2019-10-08 09:01:00.587044: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
8/8 [==============================] - 0s 5ms/step - loss: 102.0359 - accuracy: 1.0000
Epoch 5/5
1/8 [==>...........................] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.631688: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[Shape/_4]]
2019-10-08 09:01:00.631740: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[Truncated]
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_2]]
2019-10-08 09:03:55.507100: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
937/937 [==============================] - 3s 3ms/step - loss: 102.0359 - accuracy: 1.0000
Epoch 4/5
918/937 [============================>.] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:03:58.045499: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_2]]
2019-10-08 09:03:58.045610: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
937/937 [==============================] - 3s 3ms/step - loss: 102.0359 - accuracy: 1.0000
Epoch 5/5
932/937 [============================>.] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:04:00.654601: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_2]]
2019-10-08 09:04:00.654715: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
937/937 [==============================] - 3s 3ms/step - loss: 102.0359 - accuracy: 1.0000
```
username_7: I think we may need to summon @fchollet
username_4: @username_7 Your suggestion fixed my issue!
username_1: @username_0
Can you please let us know if the issue still persists? Please close the issue if it has already been resolved. Thanks!
username_9: Any update?
username_5: This fixes my issue; however, surely the final batch size being reduced should not create an issue?
Repeating the part of the first batch of data should not be the solution, surely?
username_0: Yes, it still persists...see all the other posts with the same issue.
username_10: Hi, I agree with @BeWe11, the issue is still there.
Moreover if you are using a data.from_generator() function the number of actual steps must be computed on the first epoch.
In addition it seems to always happen when validation_data is used as model.fit() argument
username_11: I have the same issue on CentOS 7 (TF 2.0).
demo source:
```
import numpy as np
import tensorflow as tf
if __name__ == '__main__':
x = tf.random.normal((14000, 30, 1))
y = tf.ones_like(x)
discriminator = tf.keras.models.Sequential([
tf.keras.layers.LSTM(100, input_shape=(30, 1), return_sequences=True),
tf.keras.layers.LSTM(100, recurrent_dropout=0.4,
dropout=0.4, return_sequences=True)
])
discriminator.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(lr=0.001))
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.batch(64)
discriminator.fit(dataset, epochs=2)
```
console:
```
2019-11-15 18:56:52.780799: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-15 18:56:52.787215: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2494130000 Hz
2019-11-15 18:56:52.788336: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4ebe390 executing computations on platform Host. Devices:
2019-11-15 18:56:52.788386: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Epoch 1/2
2019-11-15 18:56:55.992738: W tensorflow/core/grappler/optimizers/implementation_selector.cc:310] Skipping optimization due to error while loading function libraries: Invalid argument: Functions '__inference___backward_standard_lstm_4693_5178' and '__inference___backward_standard_lstm_4693_5178_specialized_for_StatefulPartitionedCall_at___inference_distributed_function_5271' both implement 'lstm_d3e9f06d-9768-4d2d-a545-12ffe5c3e735' but their signatures do not match.
219/Unknown - 20s 93ms/step - loss: 1.37292019-11-15 18:57:13.959804: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
219/219 [==============================] - 20s 93ms/step - loss: 1.3729
Epoch 2/2
218/219 [============================>.] - ETA: 0s - loss: 0.46322019-11-15 18:57:31.342946: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
219/219 [==============================] - 17s 79ms/step - loss: 0.4632
```
username_12: I think an example is missing in the docs:
```
import numpy as np
import tensorflow as tf
def prepara_data(x, y, num_epochs, batch_size):
dataset = tf.data.Dataset.from_tensor_slices(( x , y ))
dataset = dataset.batch(batch_size, drop_remainder=True).repeat(num_epochs)
return dataset
batch_size = 2
num_epochs = 3
x = np.array([1, 2, 3, 4 ,5, 6, 7, 8, 9],np.float32)
y = 2 * x + 1
dataset = prepara_data(x, y, num_epochs, batch_size)
iterateur = dataset.__iter__()
for i in range(0,1000):
try:
x_batch , y_batch = iterateur.get_next()
print("iter ", i,'->',x_batch)
except:
print ("Exception Iteration ", i," max = ",num_epochs * x.shape[0] // batch_size )
break
dataset = prepara_data(x, y, num_epochs, batch_size)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, kernel_initializer='uniform', activation='linear',input_shape=(1,)))
model.layers[0].set_weights([np.array([[7.3]],np.float32),np.array([5.5],np.float32)])
model.layers[0](np.array([[3]]))
model.compile(optimizer='sgd', loss='mse')
model.fit(dataset, epochs=num_epochs ,steps_per_epoch=x.shape[0] // batch_size, verbose=0)
print(model.layers[0].get_weights())
```
You must set drop_remainder=True in the batch() call and pass steps_per_epoch=x.shape[0] // batch_size to the fit method.
@username_11
```
import numpy as np
import tensorflow as tf
if __name__ == '__main__':
x = tf.random.normal((14000, 30, 1))
y = tf.ones_like(x)
discriminator = tf.keras.models.Sequential([
tf.keras.layers.LSTM(100, input_shape=(30, 1), return_sequences=True),
tf.keras.layers.LSTM(100, recurrent_dropout=0.4,
dropout=0.4, return_sequences=True)
])
discriminator.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(lr=0.001))
dataset = tf.data.Dataset.from_tensor_slices((x, y))
batch_size = 64
num_epoch = 2
dataset = dataset.batch(batch_size, drop_remainder=True).repeat(num_epoch)
discriminator.fit(dataset, epochs=num_epoch, steps_per_epoch=x.shape[0] // batch_size)
```
username_11: @username_12 Thanks, but I don't think it is a good idea to force users to set `steps_per_epoch`.
username_9: @username_12 Sometimes we don't even know the full size of the data.
username_12: @username_11 I changed the code and used [tf.data.experimental.cardinality](https://www.tensorflow.org/api_docs/python/tf/data/experimental/cardinality) ([related issue here](https://github.com/tensorflow/tensorflow/issues/26966)).
@username_9 Why? (of course I must shuffle data before fit but it's only an example)
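A sketch of what that looks like (TF 2.x; assumes a finite, already-batched dataset, otherwise the cardinality may be a negative sentinel):

```python
import tensorflow as tf

steps = tf.data.experimental.cardinality(dataset).numpy()  # number of batches, if known
model.fit(dataset, epochs=num_epochs, steps_per_epoch=steps)
```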
username_13: Splitting the full dataset into multiple batches and training the model on them iteratively is standard practice. First, two concepts to understand:
- [batch_size: the number of samples in each batch]
- [batches: the number of batches the whole dataset is split into, given batch_size]
```
# Some of the parameters of Keras' fit function:
model.fit(self, x=None, y=None,epochs=1,steps_per_epoch=None)
```
- [epoch: the number of iterations; one iteration can be roughly understood as using one batch of training data to train the model.]
- [steps_per_epoch: the number of batches consumed by the model per iteration. It can be roughly seen as merging a fixed number of batches into one bigger batch (bigger_batch) and training the model on it; finishing that training completes one epoch.]
- My view: this is a problem of the training data running out during model training, which can be seen as a producer/consumer relationship. A TensorFlow 2.0 Dataset integrates the functionality of a generator and can directly act as the generator that yields training data. The dataset keeps providing data and the training process keeps consuming it; once the dataset has no more data to provide while the training process has not finished, an error is reported. Therefore you need to ensure that epochs * steps_per_epoch <= the number of batches the dataset can provide. You can determine batch_size and steps_per_epoch based on experience and then use repeat() on the full dataset to avoid running out of data during training. If you don't think the batches need further handling, you can set steps_per_epoch = 1.
Verification:
```
train_data = tf.random.normal((5,4))  # 5 feature vectors of dimension 4
label = tf.ones((5,1))  # 5 class labels
dataset = tf.data.Dataset.from_tensor_slices((data, label))
```
```
dataset
<TensorSliceDataset shapes: ((4,), (1,)), types: (tf.float32, tf.float32)>
```
The full dataset is split according to batch_size; if the last batch has fewer than batch_size elements, drop_remainder decides whether it is dropped. In the example above, the Dataset built from train_data and label contains 5 tensors used to train the model (call them train tensors for now), and each train tensor consists of 2 tensors: one 4-dimensional feature vector and one label.
`dataset = dataset.batch(batch_size, drop_remainder=True).repeat(2)`
```
dataset
<RepeatDataset shapes: ((2, 4), (2, 1)), types: (tf.float32, tf.float32)>
```
Calling batch() with batch_size=2 and drop_remainder=True gives batches == 2, each batch containing 2 train tensors; the last batch has size 1 and is dropped. After repeat(2), batches == 4.
```
model.fit(dataset, epochs=4, steps_per_epoch=1)
# the x and y arguments of fit represent the feature vectors and labels; a Dataset variable can be passed directly
```
The dataset has 4 batches with batch_size == 2; each iteration uses 1 batch of data to train the model (bigger_batch_size == batch_size x 1 == 2), so it can iterate 4 times.
```
model.fit(dataset, epochs=1, steps_per_epoch=4)
```
The dataset has 4 batches with batch_size == 2; each iteration uses 4 batches of data to train the model (bigger_batch_size == batch_size x 4 == 8), so it can iterate 1 time.
```
# The complete verification code is as follows:
import tensorflow as tf
tf.__version__
def build_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3, activation='softmax')])
model.compile(
[Truncated]
while i<100:
#data = next(iterator)
data = iterator.get_next()
i += 1
print('id:',i)
print('data:',data)
except Exception as e:
print(repr(e))
return i
batch_size = 2
data = tf.random.normal((5,4))
label = tf.ones((5,1))
dataset = tf.data.Dataset.from_tensor_slices((data, label))
dataset = dataset.batch(2, drop_remainder=True).repeat(2)
batches = check_data_batch_size(dataset)
print('batches:',batches)
model = build_model()
model.fit(dataset, epochs=2, steps_per_epoch=2)
```
username_14: Is this reply related to the original question?
username_13: I just mentioned my understanding of the two parameters of epochs and steps_per_epoch in keras model, why they cause errors, and my handling method. I'm sorry to confuse you.
username_15: @username_7 Brilliant, your suggestion fixed my issue.
And for other people facing this issue, just for the record:
```python
# loss, acc = net.evaluate(tst_set) # do not use this when using a Repeating dataset
loss, acc = net.evaluate(tst_set, steps=3) # e.g., 3
```
username_16: I got the same problem with TensorFlow 2 GPU on CentOS. Does anyone know how to fix this?
username_17: This issue should be fixed by the pull request for issue https://github.com/tensorflow/tensorflow/issues/35314 . The warning was actually propagated up from C++ and so python was passing it forward. But there is really no problem here, no issues with training or anything, according to the issue.
The solution was that Google lowered the logging level to ignore these warnings. The change is in TF 2.0 nightly, and will be widely available in the next release. But you can use TF nightly to get the benefit now.
So this issue can probably be closed.
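In the meantime, a common way to hide the C++-level warning on current stable builds is to raise the minimum native log level before importing TensorFlow (a sketch; `2` filters out INFO and WARNING messages):

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # must be set before the first `import tensorflow`

import tensorflow as tf
```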
username_16: TensorFlow 2.1 (stable) has been released; does anyone know if this warning is fixed in the new version?
username_18: I have the same problem when practicing the code from the official tutorial.
I am using Catalina 10.15, Python 3.7.6, TF 2.1.0.
```python
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
                           dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(train_data.shuffle(10000).batch(512),
                    epochs=20,
                    validation_data=validation_data.batch(512),
                    verbose=1)
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
    print("%s: %.3f" % (name, value))
```
```
29/30 [============================>.] - ETA: 0s - loss: 0.2012 - accuracy: 0.9289
2020-01-13 13:53:00.393082: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext}}]]
```
username_19: @username_16 I can also confirm that the issue is not solved. Using the example code from the TF docs still produces the issue:
```
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
def make_datasets_unbatched():
  # Scaling MNIST data from (0, 255] to (0., 1.]
  def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

  datasets, info = tfds.load(name='mnist',
                             with_info=True,
                             as_supervised=True)
  return datasets['train'].map(scale).cache().shuffle(10000)

def build_and_compile_cnn_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32,
                             3,
                             activation='relu',
                             input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10, activation='softmax')
  ])
  model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
                optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
                metrics=['accuracy'])
  return model
train_datasets = make_datasets_unbatched().batch(64)
model = build_and_compile_cnn_model()
model.fit(x=train_datasets, epochs=2)
```
I noted that this only happens during the first iteration where the total count seems to be unknown. This is odd too because the `numExamples` in the `statistics` key of the dataset_info.json is set correctly.
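If the real problem is just that the cardinality is unknown on that first pass, one workaround (a sketch; it assumes you also return `info` from `make_datasets_unbatched` so the split metadata is available) is to pass the step count explicitly:

```python
# num_examples comes from the same metadata that ends up in dataset_info.json
steps = info.splits['train'].num_examples // 64  # 64 = the batch size used above
model.fit(x=train_datasets, epochs=2, steps_per_epoch=steps)
```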
username_20: Can also confirm that the error (warning) is still being raised on 2.1 (my docker base image is cuda:10.1-cudnn7-devel-ubuntu18.04).
```python
dataset = raw_dataset.map(_parse_proto).take(32).batch(8)
model.evaluate(dataset)
```
username_21: I also have this warning in TF2.1.0. `model.predict(ds.batch(1))` works but gives this warning :
```
2020-03-11 17:04:24.760612: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
```
username_22: I have a similar error but can't seem to find it reported anywhere else. Here is the traceback of my error:
Train on 2737611 samples, validate on 2737612 samples
Epoch 1/123
Epoch 2/123
Epoch 3/123
Epoch 4/123
Epoch 5/123
Epoch 6/123
2020-08-20 22:56:33.810266: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (sequential/dense_10/Sigmoid:0) = ] [[nan][nan][nan]...] [y (metrics/tp/Cast_2/x:0) = ] [0]
[[{{node metrics/tp/assert_greater_equal/Assert/AssertGuard/else/_1/Assert}}]]
[[metrics/recall/assert_greater_equal/Assert/AssertGuard/pivot_f/_143/_157]]
2020-08-20 22:56:33.824745: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (sequential/dense_10/Sigmoid:0) = ] [[nan][nan][nan]...] [y (metrics/tp/Cast_2/x:0) = ] [0]
[[{{node metrics/tp/assert_greater_equal/Assert/AssertGuard/else/_1/Assert}}]]
WARNING:tensorflow:Can save best model only with val_precision available, skipping.
Traceback (most recent call last):
File "tf_working.py", line 399, in <module>
keras_auto_tuner(training_df, '1week_target_class')
File "tf_working.py", line 382, in keras_auto_tuner
validation_data=(val_features, y_val))
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\kerastuner\engine\base_tuner.py", line 130, in search
self.run_trial(trial, *fit_args, **fit_kwargs)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\kerastuner\engine\multi_execution_tuner.py", line 96, in run_trial
history = model.fit(*fit_args, **copied_fit_kwargs)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 342, in fit
total_epochs=epochs)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 128, in run_one_epoch
batch_outs = execution_function(iterator)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py", line 98, in execution_function
distributed_function(input_fn))
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\eager\def_function.py", line 568, in __call__
result = self._call(*args, **kwds)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\eager\def_function.py", line 599, in _call
return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\eager\function.py", line 2363, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\eager\function.py", line 1611, in _filtered_call
self.captured_inputs)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\eager\function.py", line 1692, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\eager\function.py", line 545, in call
ctx=ctx)
File "C:\Users\evoot\anaconda3\envs\tf_sh\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (sequential/dense_10/Sigmoid:0) = ] [[nan][nan][nan]...] [y (metrics/tp/Cast_2/x:0) = ] [0]
[[{{node metrics/tp/assert_greater_equal/Assert/AssertGuard/else/_1/Assert}}]]
[[metrics/recall/assert_greater_equal/Assert/AssertGuard/pivot_f/_143/_157]]
(1) Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (sequential/dense_10/Sigmoid:0) = ] [[nan][nan][nan]...] [y (metrics/tp/Cast_2/x:0) = ] [0]
[[{{node metrics/tp/assert_greater_equal/Assert/AssertGuard/else/_1/Assert}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_222355]
Function call stack:
distributed_function -> distributed_function
username_23: Can anyone confirm what the result of this behavior is? I'm confused whether it's a logging error, or whether the final batch does not get trained/evaluated. For example, imagine I had 100 samples with a batch size of 52. Would I be training on batches of 52 and 48 (expected behavior), or would I train on 52 and then just fail to fill the next batch and move to the next epoch? This is especially scary for a validation batch, and I would be terrified to find that I have a variable validation set (especially if you shuffle!). There is a lot of discussion in many spots, but no clear indication of the significance of this error. Some would have you believe it is just a warning. I am on tensorflow==2.1.0.
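For what it's worth, the batching behaviour itself is easy to check with a tiny standalone snippet (illustrative only, not taken from this thread):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(100).batch(52)  # drop_remainder defaults to False
for batch in ds:
    print(batch.shape)  # prints (52,) then (48,): the final partial batch is kept, not dropped
```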
username_24: Hi, I am new to DL, from China, and I met the same error as you. After searching the internet I found the answer: you need to add `repeat()` (but remember not to pass it an argument), and then add `steps_per_epoch` to `fit()`, with a value of `x_train // batch_size`. It works in my project, and I hope it helps you solve your problem. (My English is poor, please don't mind!)
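A minimal sketch of that suggestion (variable names like `x_train`, `y_train` and `batch_size` are placeholders, not taken from the thread):

```python
import tensorflow as tf

dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .batch(batch_size)
           .repeat())  # repeat() without an argument repeats indefinitely

model.fit(dataset,
          epochs=10,
          steps_per_epoch=len(x_train) // batch_size)  # each epoch then sees the data once
```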
username_25: I tried to run this on Colab with TF v2.5 and faced a different error; please find the gist [here](https://colab.research.google.com/gist/username_25/116fd9d5f99e22cf0fb927b0bc2ce936/untitled307.ipynb#scrollTo=M9SHFU8JyT61). Thanks! |
wee-slack/wee-slack | 215344073 | Title: New Unicode channel names are not supported and lead to disconnection
Question:
username_0: Slack [recently enabled users to go full Unicode in their channel names](https://twitter.com/SlackHQ/status/842475876134481920).
This has had the unfortunate consequence that wee-slack seems to go into a state of disconnection once it has enumerated through the list of of channels and reached one with unicode in it.
It also provides the following, not surprising, traceback if you try to join a channel with Unicode in it:
```
08:07 python: stdout/stderr: __weechat_plugin__:445: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
08:07 python: stdout/stderr: __weechat_plugin__:899: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
08:07 python: stdout/stderr: Traceback (most recent call last):
08:07 python: stdout/stderr: File "/home/username_0/.weechat/python/autoload/slack_extension.py", line 1049, in wrapper
08:07 python: stdout/stderr: return f(current_buffer, *args, **kwargs)
08:07 python: stdout/stderr: File "/home/username_0/.weechat/python/autoload/slack_extension.py", line 1083, in join_command_cb
08:07 python: stdout/stderr: elif command_talk(current_buffer, args[1]):
08:07 python: stdout/stderr: File "/home/username_0/.weechat/python/autoload/slack_extension.py", line 1198, in command_talk
08:07 python: stdout/stderr: server.buffer_prnt("User or channel {} not found.".format(args))
08:07 python: stdout/stderr: File "/home/username_0/.weechat/python/autoload/slack_extension.py", line 364, in buffer_prnt
08:07 python: stdout/stderr: message = message.encode('ascii', 'ignore')
08:07 python: stdout/stderr: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 17: ordinal not in range(128)
08:07 =!= python: error in function "join_command_cb"
```
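For context, the final frame is the classic Python 2 implicit-decode trap: calling `.encode()` on a UTF-8 byte string first decodes it as ASCII. A minimal standalone reproduction (illustrative only, not the plugin's actual fix):

```python
# -*- coding: utf-8 -*-
# Python 2 only: 'política' here is a UTF-8 encoded byte string
message = 'User or channel {} not found.'.format('política')

message.encode('ascii', 'ignore')                  # raises UnicodeDecodeError (implicit ASCII decode)
message.decode('utf-8').encode('ascii', 'ignore')  # works: decode explicitly, then encode
```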
Answers:
username_1: I think this is fixed with #348.
Status: Issue closed
username_0: @username_1 I think you're quite likely right! I at the very least cannot reproduce this, as unicode channel names works great with the current codebase. 👍 Closing the issue. |
sacmwg/draft-ietf-sacm-architecture | 114989526 | Title: Clarify the potential use of existing transport protocols for collection as part of the SACM Architecture
Question:
username_0: In Section 4 the text states that:
"Those interfaces are outside the scope of SACM."
This text is concerning for a number of reasons:
1) It is not clear which interfaces this text is referencing.
2) IMHO, adapting the referenced protocols to exchange SACM information and to integrate with control plane functions is within the scope of SACM.
3) The architecture draft should not be defining what the scope of SACM is. This should fall to the SACM charter.
To address this concern, I'd like to suggest the following changes:
Old:
A variety of protocols, such as SNMP, NETCONF, NEA protocols
[RFC5209], and other similar interfaces, may be used for collection
of data from the target endpoints by the Posture Information
Provider. Those interfaces are outside the scope of SACM.
New:
A variety of protocols, such as SNMP, NETCONF, NEA protocols
[RFC5209], and other similar interfaces, may be used for collection
of data from the target endpoints by the Posture Information
Provider. If any such protocol is adopted by SACM, additional
specification will be required to describe how the protocol will
support the exchange of SACM assessment information. Furthermore, it
will be necessary to define how such a protocol integrates with or
provides specific control and/or data plane functions defined by
this architecture.
Answers:
username_1: Hi,
SACM is a temporary entity. I recommend it not be mentioned in an RFC (a cast-in-stone entity).
It might be helpful to discuss the (transfer) protocols separately from the data models.
It is common that a given type of data model is used with a given protocol.
However, it has been a design recommendation to make it possible for data models to be reused with different transfer protocols.
A variety of protocols, such as SNMP, NETCONF, NEA protocols
[RFC5209], and other similar interfaces, may be used for collection
of data from the target endpoints by the Posture Information
Provider. Proposals for use of such protocols should define how such a protocol integrates with or
provides specific control and/or data plane functions defined by
this architecture. Specification of information models and/or data models will be required to describe how the protocol will
support the exchange of SACM assessment information.
username_0: Good points. I like your updated text.
Thanks,
Dave
username_2: Would there be a mandatory to implement protocol?
username_3: @username_2 Your question feels loaded... Do you have an opinion?
username_1: I don’t think I really have a horse in the race on this, but I’ve heard various folks voice the strong opinion that there would have to be a mandatory to implement protocol for compatibility.
Maybe if I create a protocol then I can pick it. :)
username_1: I believe we need MTIs to be defined for interoperability. What isn't clear to me at this time is if we can have one MTI to "rule them all" or if we need different MTIs to address different types of devices (e.g., network infrastructure, mobile, desktop, server, etc.). I believe this should be a topic for discussion as we work to complete the architecture.
Dave
obsidiandynamics/kafdrop | 506336958 | Title: Create Topic & manage ACLs via kafdrop
Question:
username_0: I'm probably asking the obvious here, but are there are any plans to support creating topics and manage ACL rules or even zookeeper users (SCRAM)?
It seems to be the last missing piece in managing a kafka cluster :)
Answers:
username_1: Yes it would be nice. However, I'm a little short on time at the moment. Always looking for new contributors :)
Status: Issue closed
username_1: Both are now done.
username_2: I have a question: does this project currently have the ability to add ACLs?
username_1: No, but it would be an awesome feature to have. Wanna give it a crack?
username_2: Sure, I did add ACLs. I'm fairly new to Kafka, but I feel we need to implement the feature for adding users as well? I can probably work on it and raise a PR, sounds good?
username_1: Sounds good indeed. Essentially we need a form to be able to add literal and prefix-based ACLs to arbitrary entities. |
CodeMillApp/support | 142196963 | Title: Empty marketplace
Question:
username_0: Hi, I signed up a couple days ago but still haven't seen any task available in the marketplace. Is it just me or there is actually a lack of available tasks at the moment (I signed up using the current github account) ?
Answers:
username_1: @username_0 there is currently an imbalance in the demand/supply ratio, which causes each submitted task to be grabbed within a very short time. I hope we can drive more tasks to balance the market.
Status: Issue closed
username_2: Hm. This was ~6 weeks ago, and the situation apparently didn't change… :-(
username_1: We're working on it. It is very challenging indeed but we are on it. |
zcreativelabs/react-simple-maps | 837217282 | Title: "undefined" class in ZoomableGroup
Question:
username_0: ## Description
If you create a `ZoomableGroup` and don't add the `className` attribute, it will be created with the `rsm-zoomable-group undefined` classes.
## Example
Search `undefined` in [https://www.react-simple-maps.io/docs/zoomable-group/](https://www.react-simple-maps.io/docs/zoomable-group/)

## Importance
This can lead to real problems if someone has a class called `undefined`, something dangerous in js applications by the way, but not that common.
## How to temporarily avoid this problem
To avoid this problem, just create a `ZoomableGroup` and pass an empty string in className. |
hasgeek/coaster | 145935758 | Title: failsafe_add should use UPSERT where available
Question:
username_0: Nested transactions are nice, but UPSERT is more reliable. `failsafe_add` should use it when it detects the underlying engine supports it.
SQLAlchemy supports custom compilation for standard statements like INSERT ([docs](http://docs.sqlalchemy.org/en/rel_1_0/core/compiler.html)). We could have tables specifically flag support for upserts in `__table_args__` like this:
```python
class MyTable(db.Model):
__tablename__ = 'my_table'
__table_args__ = ({'upsert': None},)
```
This should generate an `ON CONFLICT DO NOTHING` clause. The alternative `ON CONFLICT DO UPDATE` is trickier as it requires specifying specific columns to update. In this case the syntax is:
```python
class MyTable(db.Model):
__tablename__ = 'my_table'
__table_args__ = ({'upsert': ['col1', 'col2', …]},)
```
Some fields like `id` and `created_at` should _not_ be updated in case of a conflict, while most other fields need to be. The table should whitelist (or blacklist?) fields to update.
Finally, this approach to upsert will result in the parameter to `failsafe_add` no longer guaranteed to be the same entity persisted to disk. `session.merge` can't be used either because the primary key is not guaranteed to match, especially with UUID primary keys that are currently generated client-side in Coaster. Since PostgreSQL's `ON CONFLICT` clause does not tell us whether a conflict occurred, `failsafe_add` has no option but to attempt reloading each time, even if the object is no longer needed. In this case the caller should avoid passing filters to `failsafe_add` so it skips the reload (and `failsafe_add` should support optional filters).
Answers:
username_0: Proposed syntax change for the `upsert` key in `__table_args__`:
* `None`: No upsert at all
* `[]`: `ON CONFLICT DO NOTHING`
* `['col1', 'col2', …]`: `ON CONFLICT DO UPDATE SET col1 = :col1, col2 = :col2, …`
username_0: **Problem:** PostgreSQL's docs say `ON CONFLICT DO UPDATE` requires a mandatory conflict target (optional for `DO NOTHING`), so the SQL syntax will be like one of the following:
1. `ON CONFLICT (col_name) DO UPDATE …`
2. `ON CONFLICT ON CONSTRAINT constraint_name DO UPDATE …`
This means we can't simply specify `{'upsert': ['col1', 'col2', …]}` in `__table_args__`. We also need somewhere to specify the conflict target in either of its forms.
username_0: It is also possible to use an `INSERT ... WHERE NOT EXISTS` clause that works on PostgreSQL < 9.5.
1. http://stackoverflow.com/a/36377530/78903
2. http://stackoverflow.com/a/17991647/78903
This is not as reliable as an upsert, but is still better than making distinct roundtrips to the database, and could still be protected inside a nested transaction.
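A sketch of what that fallback could look like from SQLAlchemy (the table and column names are made up for illustration):

```python
from sqlalchemy import text

stmt = text(
    "INSERT INTO my_table (name, title) "
    "SELECT :name, :title "
    "WHERE NOT EXISTS (SELECT 1 FROM my_table WHERE name = :name)"
)
db.session.execute(stmt, {'name': 'example', 'title': 'Example'})
db.session.commit()
```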
username_0: [SQLAlchemy has support for upserts now](http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#insert-on-conflict-upsert) but only with `insert` statements, not at the declarative level. |
aio-libs/aiocache | 759528295 | Title: ERR Protocol error: invalid multibulk length when calling `clear()`
Question:
username_0: Steps to reproduce:
1. I have about 5 million items in Redis.
2. Call `cache.clear()` without arguments.
As a temporary workaround I had to call `redis-cli FLUSHALL` from the terminal.
Answers:
username_0: Solution:
```python
from aioitertools.more_itertools import chunked
max_multibulk_length = 1024 * 1024  # https://github.com/StackExchange/StackExchange.Redis/issues/201

@conn
async def _clear(self, namespace=None, _conn=None):
    if namespace:
        keys = await _conn.keys("{}:*".format(namespace))
        if len(keys) > max_multibulk_length:
            async for keys_chunk in chunked(keys, max_multibulk_length - 1):
                await _conn.delete(*keys_chunk)
        else:
            await _conn.delete(*keys)
    else:
        await _conn.flushdb()
    return True
```
username_1: Hey @username_0, thanks for reporting this! I've opened an issue in aioredis because I think it's worth to fix it there directly. Let's wait some days to see if there is a positive answer and if not, we can patch it in aiocache directly :). |
holoviz/panel | 666391285 | Title: Add Binder links to all examples and user guides
Question:
username_0: As the title says, all pages built from notebooks should have links to binder.
Answers:
username_1: Should be supported now by nbsite - https://github.com/pyviz-dev/nbsite/pull/188
However, might look ugly or take up too much vertical space - haven't seen how it looks yet with our theme. Will be happy to make that better when we see an example...
username_2: Where can I check this out, @username_1? I tried looking at a reference example, for example https://pyviz-dev.github.io/panel/reference/widgets/DatetimeInput.html#widgets-gallery-datetimeinput, but they don't have any links.
username_1: @username_2 the website build hasn't been updated to use this new feature of nbsite yet. I'll have a go at that now (no promises...I haven't built the panel docs myself for 3 years or so :) ).
username_1: Currently looks like this:

It's a sphinx note (https://docutils.sourceforge.io/docs/ref/rst/directives.html#note). I think we need to style it better and remove some text!
Some thoughts/questions...
Is the button well known enough that no text at all is required?
I think a binder link might not be useful on every notebook (though if just a small button, doesn't matter...although if just a small button, will people find it on notebooks where it does matter...).
I also think it might be worth combining with the "download from github" link.
And might be worth adding binder badge to notes about non-interactive notebooks:

username_1: For comparison, here's how dask does it:

username_3: The Dask way to do it looks fine, though I don't know if it needs the bar saying "Live Notebook" at all.
username_0: Agreed, the dask one looks fine.
username_1: As far as nbsite goes, it's the same as dask, except "note" (nbsite) vs "admonition" (dask):
dask:
```
.. admonition:: Live Notebook
You can run this notebook in a `live session <https://mybinder.org/v2/gh/dask/dask-examples/master?urlpath=lab/tree/{{ docname }}>`_ |Binder| or view it `on Github <https://github.com/dask/dask-examples/blob/master/{{ docname }}>`_.
.. |Binder| image:: https://mybinder.org/badge.svg
:target: https://mybinder.org/v2/gh/dask/dask-examples/master?urlpath=lab/tree/{{ docname }}
```
nbsite:
```
.. note:: Try live in your browser!
|Binder| to run this notebook in your browser (no setup required).
.. |Binder| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/{org}/{repo}/{branch}?filepath={examples}/{relpath}
```
note's just a type of admonition (https://docutils.sourceforge.io/docs/ref/rst/directives.html#note). So it seems likely that we need to change https://github.com/pyviz-dev/sphinx_holoviz_theme/, right?
username_0: That sounds right, or is it wherever the rst is generated?
username_1: If that's true, the best I could offer at the moment would be for nbsite not to generate a sphinx admonition, but instead to generate a larger button (on its own - no other text), like this:

You could play with wording at https://mybinder.readthedocs.io/en/latest/howto/badges.html
Seems like having a badge alone means it could also maybe be placed in a way that wouldn't need to take up vertical space...though that would also require changes to sphinx_holoviz_theme 🤔
Someone else might be able to do better by working on sphinx_holoviz_theme...
username_0: We should also update the partially broken Binder links in the gallery demos.
username_2: Currently you cannot run the Panel repo on binder

username_2: As a part of a nice binder setup we should include
- jupyter-server-proxy and setup all example notebooks to be served as apps in there. And have one icon in the launcher to launch the panel apps.
- vs-code. Demonstrating that you can use something like VS as well in there and work would make the look and feel of Panel very modern and powerful. See https://github.com/betatim/vscode-binder
I know how to enable both things and it is simple.
Status: Issue closed
|
SAP/jenkins-library | 568090015 | Title: declarative pipeline: inner library statement
Question:
username_0: The declarative pipeline contains a library statement within the init stage:
[library 'piper-lib-os'](https://github.com/SAP/jenkins-library/blob/ea45136c3d2be5cc8ce70299a9ad59021661c1f9/vars/piperPipeline.groovy#L14)
why? We still need to add it to the Jenkinsfile and it prevents from using test libraries.
```
@Library('piper-lib-test') _
piperPipeline script: this
```
`ERROR: Library resource default_pipeline_environment.yml ambiguous among libraries [piper-lib-os, piper-lib-test]
`
Answers:
username_1: in order to use a test-library you need to change the definition of the library in Jenkins.
Alternatively you can work on a branch in the original repo which people from the core team can do.
Thus you would only need to add `@Library('piper-lib-os@yourBranch') _` to get your version of the library loaded.
username_0: @username_1 According to the documentation, the Jenkinsfile starts with a `@Library` statement. So why does the declarative pipeline contain another statement?
Background: to ease testing changes across multiple forks of the piper library, it is convenient to set up different libraries in Jenkins, like piper-lib-os-standard, piper-lib-os-fork1, piper-lib-os-fork2.
username_0: @username_1 I guess I understood the background of the library entry. The piperPipeline is a copy of the sapPiperPipeline, which includes the OS piperPipeline so that SAP-internal consumers do not have to adjust their Jenkinsfile. I guess the developer forgot to remove the statement at some point. So I mark it as a bug.
Status: Issue closed
|
spark-jobserver/spark-jobserver | 91850367 | Title: SparkContextFactory makeContext sparkConf parameter type causing error
Question:
username_0: basically I was trying to create a custom Context but when I was building I got a type mismatch error from makeContext.
```
[error] /vagrant/SQLJob/src/main/scala/sqljob/context/CassandraSQLContextFactory.scala:12: class CassandraSQLContextFactory needs
to be abstract, since method makeContext in trait SparkContextFactory of type (config: com.typesafe.config.Config, contextConfig:
com.typesafe.config.Config, contextName: String)CassandraSQLContextFactory.this.C is not defined
[error] (Note that com.typesafe.config.Config does not match org.apache.spark.SparkConf)
[error] class CassandraSQLContextFactory extends SparkContextFactory {
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 4 s, completed Jun 29, 2015 4:00:36 PM
```
Basically even though there is a method defined that should take a SparkConf it throws an error. The way I got around it was by changing the type of sparkConf to a typesafe Config instead of using a SparkConf and then converting it to a SparkConf using `configToSparkConf` method from SparkJobUtils
All of the example custom Contexts from spark-jobserver-extras use SparkConf type in their make context methods.
I'm pretty sure this behavior isn't intended.
Answers:
username_1: @username_0 were you able to fix this?
Status: Issue closed
username_2: Since this hasn't had an update in a long time I am closing this. Please give the latest code a try and open a new issue if you are having issues still. Thanks. |
vercel/next.js | 747871799 | Title: Rewrite does not work
Question:
username_0: <!-- NOTE: This template is not optional. If you remove it or leave out sections there is a high likelihood it will be moved to the GitHub Discussions "Help" section -->
# Bug report
## Describe the bug
I'm trying to reuse the pages with rewrite and all I get are 404 errors
## To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
Example:
1. Config
```
const config = {
  i18n: {
    locales: ['en','es','de','no','se'],
    defaultLocale: 'en',
    localeDetection: false,
  },
  async rewrites() {
    return [
      { source: '/', destination: '/' },
      { source: '/es/', destination: '/' },
      { source: '/de/', destination: '/' },
      { source: '/no/', destination: '/' },
      { source: '/se/', destination: '/' },
      { source: '/es/blog/', destination: '/es/blog' },
      { source: '/es/politica-privacidad/', destination: '/privacy' },
      { source: '/privacy-policy/', destination: '/privacy' },
      {
        source: '/de/datenschutzerklarung-rechtlicher-hinweis/',
        destination: '/privacy'
      },
      { source: '/no/personvern-juridisk-notis/', destination: '/privacy' },
      {
        source: '/se/integritetspolicy-rattsligt-meddelande/',
        destination: '/privacy'
      },
      { source: '/es/politica-cookies/', destination: '/cookies-policy' },
      { source: '/cookies-policy/', destination: '/cookies-policy' },
      {
        source: '/de/richtlinie-fur-cookies/',
        destination: '/cookies-policy'
      },
      { source: '/no/cookiespolicy/', destination: '/cookies-policy' },
      {
        source: '/se/cookies-kakor-policy/',
        destination: '/cookies-policy'
      }
    ]
  },
```
2. When I navigate to `/es/politica-cookies/`, I only get a 404 error, I just try to reuse the "cookies-policy.js" page, for all languages. How complicated is it?
I have also tried to reulize the same page with `getStaticPaths `and I couldn't either
3.
## Expected behavior
The expected behavior is that a 404 will not return, and be able to reuse my pages either for languages or something else.
## System information
- OS: Windows 10
- Browser: Chrome
- Version of Next.js: 10.0.2
- Version of Node.js: 12.16.1
- Deployment: next dev
## Additional context
There is no possibility that something like this exists https://nuxtjs.org/docs/2.x/features/file-system-routing#extending-the-router, and not be so complex?
Status: Issue closed
Answers:
username_0: I also have to add that I don't understand why I can't add query parameters in `Rewrites`; I find the routing system a bit strange.
I don't understand why I can't get the query parameters from the context in `getStaticProps`; they could perfectly well be passed at build time so the pages are then generated statically.
What I mean is having the option of an alternative routing system, i.e. something similar to Rewrites but fully functional with all the options (query, redirect, etc.). This would replace the many workarounds currently being applied to avoid the problem, making a more stable system for more complex pages, not just simple ones.
username_0: Something similar to this https://nuxtjs.org/docs/2.x/features/file-system-routing#extending-the-router, but a little cleaner
username_1: Hi, it looks like you need to be using `locale: false` for these rewrites as mentioned in [the docs here](https://nextjs.org/docs/api-reference/next.config.js/rewrites#rewrites-with-i18n-support)
username_0: Same effect: Error 404
Thanks!
username_0: It seems that with the canary version it is working. I will do some more tests tomorrow and then confirm.
Thanks
username_0: The routes I have in a subdirectory under a specific locale are not working. I only want the blog in **one language** for now, so I used the same routes from the first post and added `locale: false`, but it doesn't work.
`{ source: '/es/blog/', destination: '/es/blog', locale: false },`
This route should resolve to `/es/blog/index.js`, but it does not.
I have also tried removing `locale` on that route, but it doesn't work either.
Do I have to do it differently to have the blog in only one language?
username_0: I found a temporary solution, but it didn't help, because when I was exporting my app, I got a message that I18n is not compatible with export. Anyway, I put it here just in case.
Instead of putting the blog in the directory `/es/blog/index.js`, I have put it in `/blog/index.js`, and all the requests that are not the language of the blog I response with `404`.
```js
export const getStaticProps = async (context) => {
  const language = context.locale;
  if (language !== 'es'){
    return {
      notFound: true,
    }
  }
  const posts = await getLastPosts(10);
  const {default: lngDict = {}} = await import(`../../../public/locales/${language}/common.json`);
  return {
    props: {
      posts,
      language,
      lngDict
    }
  }
}
```
I would also like this to be solved, since the previous route with `locale: false` is being ignored, and it should also be possible to have statically exported pages with i18n support.
Status: Issue closed
|
dart-lang/sdk | 84532541 | Title: dart2js: make noSuchMethod on native classes work again
Question:
username_0: Moving native classes to interceptors broke noSuchMethod on native classes.
Option 1: put generic noSuchMethod hooks on Object.prototype that redirect to the isolate.
Option 2: all methods use the interceptor convention when a native class defines noSuchMethod. The hooks can then be placed on $.Object
Answers:
username_1: @username_0 - i believe this is now working, correct?
username_0: I doubt it. |
ErikEJ/SqlCeToolbox | 347911222 | Title: When adding rows to a database, you can't edit them until reloading the query window
Question:
username_0: When adding rows to a database, you can't edit them until reloading the query window
### Steps to reproduce
1. Create a new DB
2. Create a new Table with a text column
3. Insert Text into the column and repeat till you have 2 rows
4. Edit the text of the first item, get an error
5. Right click the table > Edit top 200 rows
6. Edit the text of the first item, now it works
```
---------------------------
Microsoft SQL Server Management Studio
---------------------------
SQLite / SQL Server Compact Toolbox
System.Data.DBConcurrencyException: Concurrency violation: the UpdateCommand affected 0 of the expected 1 records.
at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping)
at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows)
at username_1.SqlCeToolbox.WinForms.ResultsetGrid.UpdateRowToDatabase() in C:\projects\sqlcetoolbox\src\GUI\SqlCe35Toolbox\WinForms\ResultsetGrid.cs:line 294
---------------------------
OK
---------------------------
```
### Further technical details
Toolbox for SSMS 4.7.460
Database engine: SQlite
SSMS version: 17.8.1
Answers:
username_1: Does your new table have a primary key?
username_0: I am not an SQLite expert, so if a new table does not, then no
username_1: This will only work for table with primary keys, and may not always work anyway - due to flaws in data binding outside of my control. You can always modify data via UPDATE statements instead.
Status: Issue closed
|
saltstack/salt | 275390581 | Title: CherryPy 12.0 removed support for "engine.timeout_monitor.on" config option
Question:
username_0: ### Description of Issue/Question
The CherryPy tests are currently failing on all branches due to the latest version of CherryPy being released.
Version 12.0.0 removed support for the `engine.timeout_monitor.on` config options, as stated in the [release change notes](https://github.com/cherrypy/cherrypy/blob/c17be0f9c42bbce95b37688dd548e47114f4a136/CHANGES.rst#v1200).
Removal of this support causes `rest_cherrypy/app.py` to crash.
This is because the `expire_responses` config setting (see [rest_cherrypy salt docs](https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_cherrypy.html)) is used to set the `engine.timeout_monitor.on` CherryPy setting, as seen [here](https://github.com/saltstack/salt/blob/717dd8e549aab3d4d09caf41e8aa5aef806521e5/salt/netapi/rest_cherrypy/app.py#L2842).
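A minimal compatibility guard (a sketch of one possible fix, not the actual Salt patch) would be to only emit the option when the running CherryPy still has the plugin:

```python
import cherrypy

conf = {}
if hasattr(cherrypy.engine, 'timeout_monitor'):
    # Only meaningful on CherryPy < 12.0; the attribute was removed in 12.0.0.
    # `apiopts` stands in for the rest_cherrypy options dict.
    conf['engine.timeout_monitor.on'] = apiopts.get('expire_responses', True)
cherrypy.config.update(conf)
```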
### Setup
Upgrade version of CherryPy to 12.0.0. Run the tests for the rest_cherrypy app.
### Steps to Reproduce Issue
```
# ./tests/runtests.py -n integration.netapi.rest_cherrypy.test_app
<snipped>
ERROR: test_webhook_noauth (integration.netapi.rest_cherrypy.test_app.TestWebhookDisableAuth)
[CPU:91.3%|MEM:35.6%]
----------------------------------------------------------------------
Traceback (most recent call last):
File "/root/SaltStack/salt/tests/support/cherrypy_testclasses.py", line 86, in setUp
root, apiopts, conf = app.get_app(base_opts)
File "/root/SaltStack/salt/salt/netapi/rest_cherrypy/app.py", line 2790, in get_app
cpyopts = root.get_conf() # cherrypy app opts
File "/root/SaltStack/salt/salt/netapi/rest_cherrypy/app.py", line 2774, in get_conf
cherrypy.config.update(conf['global'])
File "/usr/local/lib/python2.7/dist-packages/cherrypy/_cpconfig.py", line 154, in update
reprconf.Config.update(self, config)
File "/usr/local/lib/python2.7/dist-packages/cherrypy/lib/reprconf.py", line 158, in update
self._apply(config)
File "/usr/local/lib/python2.7/dist-packages/cherrypy/_cpconfig.py", line 164, in _apply
reprconf.Config._apply(self, config)
File "/usr/local/lib/python2.7/dist-packages/cherrypy/lib/reprconf.py", line 170, in _apply
self.namespaces(config)
File "/usr/local/lib/python2.7/dist-packages/cherrypy/lib/reprconf.py", line 112, in __call__
handler(k, v)
File "/usr/local/lib/python2.7/dist-packages/cherrypy/_cpconfig.py", line 272, in _engine_namespace_handler
plugin = getattr(engine, plugin)
AttributeError: 'Bus' object has no attribute 'timeout_monitor'
```
### Versions Report
Any current Salt version will reproduce this issue. I tested it at the HEAD of the `2016.11`, `2017.7`, and `develop` branches.<issue_closed>
Status: Issue closed |
barbajs/barba | 701825326 | Title: Issue with slider and scrollbar
Question:
username_0: Hello, all transitions are working correctly but there are two issues. On the first load of the index page everything works correctly, but after the first transition, when the page has much more content than the index, there is no scrollbar... that's the first issue. Secondly, when you navigate to the first page there is no slide: just text, but no photos and no buttons for sliding to the other pictures...
```
(function (Barba) {
  document.addEventListener("DOMContentLoaded", function (event) {
    console.log("Loaded");
    console.log('pjax start');
    // Barba init
    Barba.Pjax.start();
    // Barba prefetch init
    Barba.Prefetch.init();
    var transEffect = Barba.BaseTransition.extend({
      start: function() {
        this.newContainerLoading.then(val => this.fadeInNewcontent($(this.newContainer)));
      },
      fadeInNewcontent: function(nc) {
        nc.hide();
        var _this = this;
        $(this.oldContainer).fadeOut(500).promise().done(() => {
          nc.css('display', 'block');
          nc.fadeIn(500, function() {
            _this.done();
          });
          /*$('html, body').animate({
            scrollTop: 10
          },500);*/
        });
      }
    });
    Barba.Pjax.getTransition = function() {
      return transEffect;
    }
    Barba.Prefetch.init();
  });
}(Barba));
```
That is code. I am using WordPress for this project...
Answers:
username_1: Hi @username_0,
Unfortunately, Barba v1 is no longer maintained...
You should [**take a look at the v2**](https://barba.js.org/).
For those kind of questions/help, please **use the Slack workspace** in order to ask the whole community for support. Join using the invite link here: https://barba.js.org/docs/getstarted/useful-links/#Developer
I am closing the issue.
Status: Issue closed
username_0: Okay, so I tried to convert the current version to V2 and this error keeps occurring:
`Uncaught ReferenceError: Barba is not defined at app.js:98`
This is how I load scripts:
```
<script src="<?php echo get_template_directory_uri() ?>/src/js/jquery-3.2.1.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/jquery.smartmenus.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/popper.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/bootstrap/js/bootstrap.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/owl.carousel.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/jquery-ui.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/jquery.easing.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/icheck.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/jquery.nice-select.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/aos.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/gallery-plugin/photoswipe.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/gallery-plugin/photoswipe-ui-default.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/gallery-plugin/jqPhotoSwipe.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/all.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/jquery.mCustomScrollbar.concat.min.js"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/pm.js"></script>
<script src="https://unpkg.com/@barba/core"></script>
<script src="<?php echo get_template_directory_uri() ?>/src/js/app.js"></script>
``` |
NativeScript/NativeScript | 180423511 | Title: Swipe back after using clear history not working right.
Question:
username_0: ### Did you verify this is a real problem by searching [Stack Overflow](http://stackoverflow.com/questions/tagged/nativescript) and the [other open issues in this repo](https://github.com/NativeScript/nativescript/issues)?
Yes
### Tell us about the problem
Swipe back after using clear history in the experimental routerExtensions not working right.
### Which platform(s) does your issue occur on?
iOS
### Please provide the following version numbers that your issue occurs with:
- CLI: (run `tns --version` to fetch it)
2.4.0-2016-09-29-6718
- Cross-platform modules: (check the 'version' attribute in the
`node_modules/tns-core-modules/package.json` file in your project)
2.4.0-2016-09-29-4267
- Runtime(s): (look for the `"tns-android"` and `"tns-ios"` properties in the
`package.json` file of your project)
"tns-android": "version": "2.3.0"
"tns-ios": "version": "2.4.0-2016-9-17-1"
- Plugin(s): (look for the version number in the `package.json` file of your
project)
"nativescript-angular": "^0.3.0",
"nativescript-geolocation": "0.0.12",
"nativescript-local-notifications": "^1.1.5",
### Please tell us how to recreate the issue in as much detail as possible.
Navigate to a page using the "clear history" option.
Swipe back.
Tap a link navigating to another page.
Swipe back again.
Expected behavior: The app is moved to the background on the first swipe leaving you where you were before the app started.
Actual behavior: On the first swipe, nothing happens. On the tap to navigate away, nothing happens. On the second swipe, the page you should have navigated to shows briefly before going back.
Additionally, if you don't complete the second swipe, the app will exit as expected.
### Is there code involved? If so, please share the minimal amount of code needed to recreate the problem.
Status: Issue closed
Answers:
username_1: This issue was moved to NativeScript/nativescript-angular#486 |
matrix-org/matrix-synapse-ldap3 | 480627834 | Title: Migrate existing users
Question:
username_0: Is there a way to migrate existing users to a LDAP directory?
It seems like the internal database password table is looked up before LDAP is checked.
Is there a way to delete just the password (not the account) so that LDAP password is used instead or disable the internal password provider?
Answers:
username_1: Maybe this works also for you:
https://github.com/matrix-org/synapse/issues/1707#issuecomment-405455882
Status: Issue closed
username_2: synapse has an option to disable the local user database. |
jxz12/s_gd2 | 544259579 | Title: TypeError: Array of type 'double' required. A 'unknown type' was given
Question:
username_0: Could this be an issue with the swig compatibility? I installed via pip (python3), using in jupyter notebook.
```python
I = [0,1,2,3,4]
J = [1,2,3,4,0]
X = s_gd2.layout(I, J)
s_gd2.draw_svg(X, I, J, 'C5.svg')

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-105-8b1a20c57805> in <module>
      1 I = [0,1,2,3,4]
      2 J = [1,2,3,4,0]
----> 3 X = s_gd2.layout(I, J)
      4 s_gd2.draw_svg(X, I, J, 'C5.svg')

/opt/conda/lib/python3.6/site-packages/s_gd2/s_gd2.py in layout(I, J, V, t_max, eps, random_seed, init)
     16
     17     if V is None:
---> 18         cpp.layout_unweighted(X, I, J, t_max, eps, random_seed)
     19     else:
     20         cpp.layout_weighted(X, I, J, V, t_max, eps, random_seed)

TypeError: Array of type 'double' required. A 'unknown type' was given
```
Answers:
username_0: Seems to work after I restarted the instance.
Status: Issue closed
username_1: I'm glad you were able to get past the problem, and I think your diagnosis about it being a SWIG problem was close to correct. Please feel free to ask if you have any other questions at all, about the code here or the algorithm itself. |
jexp/neo4j-shell-tools | 128231886 | Title: Error exporting neo4j movies database to graphml
Question:
username_0: I ran through the neo4j movies tutorial with a new installation, and then tried exporting the data to graphml and cypher.
[movies.graphml](https://gist.github.com/username_0/0f4546375e6ce454ef09)
[movies.cypher](https://gist.github.com/username_0/f0c745dbe160bcab9ed3)
Answers:
username_1: Thanks for the heads-up, I check the export of array properties. That was an issue in other formats too. What would be sensible? Comma separated strings?
username_0: To be honest, I'm not sure, I'm not very familiar with GraphML. It appears that it is possible from the spec to nest data elements inside data elements, so you could do something like `<data key="roles"><data key="role">role1</data></data key="role">role2</data></data>`. One possible advantage of that over CSV inside the data element would be it would mitigate the need for escaping commas inside values.
For my particular use-case, what would be "most sensible" is whatever would make it easiest for Neo4j to import the data correctly.
thanks
username_2: Is there any progress on this? I'm still seeing this issue in tools v3.1.0 when used against neo4j-enterprise 3.1.4, Red Hat 7.3 with java version java-1.8.0-openjdk-headless-1.8.0.131-2.b11.el7_3.
username_1: With neo4j-shell being deprecated, I currently focus the effort on the APOC library, which can be used both from the shell and the browser. APOC also has export-graphml functionality that contains the latest fixes.
Please have a look in the [APOC docs](https://neo4j-contrib.github.io/neo4j-apoc-procedures/#_graphml_import_export) |
KnowledgeCaptureAndDiscovery/OBA | 584149708 | Title: We should have an easier way to define custom queries
Question:
username_0: The current way is a little verbose: You have to define the custom query, generate a file, add it to the spec, create the parameters, restart the system... Maybe if the user already provided the query definition (construct) we could generate the spec part with the parameters.
Answers:
username_1: I dont have the time and it's not important for me... right now. Removing the assignation. |
ruanyf/weekly | 952492894 | Title: 【自荐工具】能帮您记录和整理碎片化的生活信息云记录
Question:
username_0: 
[项目地址](https://www.TiBiJi.com/)
提笔记是一种对待生活态度。通过提笔记这个在线小工具,您能随时随地掌握家庭或公司的日常收支、人脉情况、联系人资料、重要事件提醒通知、迅速提笔记事情等,以便您做出合理规划与安排。 并同时提供像记事本一样的网络云端便签纸,能迅速记录你想记录的一切文字信息。
功能简介
提笔记打造无纸化生活让我们严肃的对待恶化的环境吧!利用云存储技术实现多平台多端数据同步,通过电脑、手机、平板、车载电脑等设备随时随地使用提笔记。 |
britecharts/britecharts | 1057580186 | Title: Sampling or skipping labels
Question:
username_0: Have a large dataset, spanning 2 years(it can be larger) and the labels for each value(label: Date, value: number), will simply overlap and pile up, making them impossible to read or distinguish.
## Expected Behavior
Should sample only one month, skipping days; screenshot to how another chart library renders the same dataset

## Difference from Current Behavior
cannot specify
## Possible Implementation
<!--- Not obligatory, suggest ideas of how to implement the addition or change -->
## Example Screenshots (if appropriate):

## Context
We need this improvement b/c without it, for large datasets, britecharts would be unusable
Answers:
username_1: Hi @username_0, thanks for creating this issue!
I see that you are using a bar chart with a time series dataset. We do have a configuration for selecting what time frame to show on the ticks on other charts that were supposed to chart time series data (Linechart and Stackedarea) but we don't have it on the Bar chart for now.
I think adding support to the bar chart to have time series is something that makes sense and we can add into the backlog.
If you want to chart this info right now, I will suggest you to try the line chart and stacked area though.
username_1: Actually, if you use the LineChart and set the 'lineCurve' option to 'step', you might get something really similar to what you are looking for.
username_0: @username_1 thanks for taking the time!
Will try your suggestion and see where that goes.
username_1: Happy to help!
Try also the stacked area chart areaCurve, you might prefer it if you want the bottom part of the line to be filled |
typeorm/typeorm | 431273547 | Title: How to Set NumberLong Data Type?
Question:
username_0: **Issue type:**
[ ] question
[ ] bug report
[ ] feature request
[ ] documentation issue
**Database system/driver:**
[ ] `cordova`
[X ] `mongodb`
[ ] `mssql`
[ ] `mysql` / `mariadb`
[ ] `oracle`
[ ] `postgres`
[ ] `cockroachdb`
[ ] `sqlite`
[ ] `sqljs`
[ ] `react-native`
[ ] `expo`
**TypeORM version:6.0.0**
[ ] `latest`
[ ] `@next`
[ ] `0.x.x` (or put your version here)
**Steps to reproduce or a small repository showing the problem:**
```ts
import { Entity, Column, ObjectID, ObjectIdColumn } from 'typeorm';

@Entity('cat')
export class Cat {
  // Primary key.
  // For auto-increment you would normally use PrimaryGeneratedColumn,
  // but MongoDB does not have that, so ObjectIdColumn is used instead.
  @ObjectIdColumn()
  id: ObjectID;

  // Nickname
  @Column()
  nickname: string;

  // Species / breed
  @Column()
  species: string;

  // Long data type [NumberLong data type]
  @Column()
  longNum: number;
}
```
Answers:
username_1: Did you try:
```ts
const Long = require('mongodb').Long;
...
Long.fromInt(1)
...
```
Reference: https://stackoverflow.com/questions/27167569/how-to-insert-long-value-in-mongo-using-node
username_2: Closing issue as no answer from the author.
Status: Issue closed
|
ethereum/ethereum-org-website | 944724809 | Title: List of Eth2 clients is misleading
Question:
username_0: **Describe the bug**
Currently we list all Eth2 clients here:
https://ethereum.org/en/eth2/get-involved/#clients
We should make it clear which clients are production-ready vs. those that are not.
**Expected behavior**
We should probably break the list out into 2 sections:
1. mainnet-compatible
2. "experimental" (or something along those lines)
**Screenshots**

**Want to contribute?**
We love contributions from the Ethereum community! Please comment on an issue if you're interested in helping out with a PR.
Answers:
username_1: For a list of "mainnet-compatible" clients, we can probably use the ones which appear on https://launchpad.ethereum.org (Prysm, Nimbus, Teku, Lighthouse). The one that may also be in that category is Lodestar. @username_2 @wemeetagain @mpetrunic any thoughts about how to categorize Lodestar? I didn't think it was mainnet-compatible, but I see a lot of PRs merged that reference Altair.
For the others, I think they may both actually be deprecated. @sgryphon is Cortex still maintained? @pipermerriam same question for Eth2 Trinity.
username_2: Hey! I think "mainnet-compatible" is a bad term. Probably all clients in that list are "compatible" in the sense that they follow the spec and can join the network. A more appropriate term would be "product-ionized", or "mainnet-ready".
Specifically for Lodestar we having good performance on the Prater testnet and may publicly announce that we are "mainnet-ready" after further testing and maybe an audit to some key parts if applicable. I would agree to put Lodestar under an "experimental" category until then.
Status: Issue closed
|
yokomizor/ejabberd-auth-jwt | 424156361 | Title: Check JWT Date fields and issuer and audience field
Question:
username_0: It sems that current version does not check JWT against exp, iat, nbf fields.
So currently a given JWT does give infinite access for the user.
It seems important to support this features for security reasons.
In addition, checking against issuer and audience (if found in config) would be nice too.
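For comparison, this is roughly what those checks look like in Python with PyJWT (illustrative only: the plugin itself is not Python, and the key, issuer and audience values are placeholders):

```python
import jwt  # PyJWT

token = '<JWT presented by the client>'  # placeholder
secret = '<shared signing key>'          # placeholder

claims = jwt.decode(
    token,
    secret,
    algorithms=['HS256'],
    issuer='my-issuer',   # rejects tokens whose iss does not match
    audience='ejabberd',  # rejects tokens whose aud does not match
)
# exp, nbf and iat are validated automatically when present in the token
```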
Answers:
username_1: Hey @username_0, Thank you for your feedback.
JWT exp, iat and nbf should be verified by `jose_jwt`. I believe that `verify` is rejecting expired tokens.
About checking issuer and audience, indeed, it would be very nice 👍
Would you be interested in implementing it? |
googleapis/google-api-ruby-client | 543090775 | Title: Google::Apis::ClientError: ipRefererBlocked: The request did not specify any referer. Please ensure that the client is sending referer or use the API Console to remove the referer restrictions
Question:
username_0: #### Environment details
- OS: Ubuntu (AWS)
- Ruby version: google-api-client
- Gem name and version: google-api-client (0.28.7)
#### Steps to reproduce
The code returning the error is: `calendar_service.list_events(calendar_id, params)`
#### Code example
```ruby
def self.list_events_by_key(calendar_id:, key:, time_min:, time_max:)
  calendar_service = Google::Apis::CalendarV3::CalendarService.new
  calendar_service.key = key
  params = {
    time_min: time_min,
    time_max: time_max
  }
  calendar_service.list_events(calendar_id, params)
end
```
Answers:
username_1: Did you use the API console to remove the referer restrictions for this API?
username_0: No I did not.
username_1: It seems like you need to remove the restriction in order to use that API with the ruby client.
Or, you can set the referer header on [`RequestOptions#header`](https://googleapis.dev/ruby/google-api-client/latest/Google/Apis/RequestOptions.html#header-instance_method) for the request.
username_0: I see this: `This key is unrestricted. Restrictions help prevent unauthorized use and quota theft`
I will try your suggestion.
username_3: @username_0 Were you able to resolve this?
username_0: @username_3 yes, I resolved this by adding a referrer restriction to my keys from the developer console.
The second option was to select `None` under referrer restrictions. @username_1's answer gave me the hint. Thanks.
Status: Issue closed
username_5: Greetings, we're closing this. Looks like the issue got resolved. Please let us know if the issue needs to be reopened. |
whatwg/dom | 301229910 | Title: dispatch algorithm doesn't seem to handle the case when target or relatedTarget is Window object
Question:
username_0: Things like "Let relatedTarget be the result of retargeting event's relatedTarget against target" are problematic,
since retargeting itself doesn't seem to work with Window.
The retargeting steps say "If A's root ...", but what is Window's root? I couldn't find any definition for that.
Answers:
username_1: @username_0 could you please provide more context. What exactly is there to handle?
username_1: Thanks. Sounds like https://dom.spec.whatwg.org/#retarget should check if _A_ is a node, first, and return _A_ if not.
And then similar checks in 11.1.3, 11.2, 11.4, and 18.
username_0: Are there cases when target is Window, but relatedTarget is something in a shadow DOM? Or vice versa?
username_1: Hmm yeah, I'm not sure. Maybe someone from @whatwg/components knows. I vaguely recall a discussion about this before and I'm somewhat surprised it resurfaced.
The other thing I wonder about here is when someone dispatches a synthetic event. Do we assume that in that case relatedTarget (the concept) is not set, even though the relatedTarget attribute can return something controlled by the developer?
username_2: I remember I tried to fix composedPath() for Window at
https://github.com/whatwg/dom/pull/327; however, it wouldn't be enough.
I agree that we have to fix the standard so that it handles non-node EventTargets nicely.
Basic idea should be:
- Non-node EventTarget event.target and event.relatedTarget should be visible from anywhere.
- From Window, any node in a shadow tree shouldn't be visible.
- From Window, any node in a document tree should be visible.
username_1: I think I'll try to merge this into #585 since it affects that step too.
username_1: Pushed a fix for this too.
Status: Issue closed
|
LaunchPadLab/decanter | 278764717 | Title: Publish a 1.0 version?
Question:
username_0: I can see from the releases on Github that you've gone to 1.0. Congrats on that! The latest version on Rubygems is 0.9.2 from August. Are you likely to publish 1.x soon, or would it be better to use a `github:` URL in my `Gemfile` for now?
Answers:
username_1: @username_0 good catch - I never did release the latest version after it was merged. Just released it to Rubygems. Let me know if you have any issues!
Status: Issue closed
|
assimbly/gateway | 330144926 | Title: Bugfixing milestone 0.5 (part 1)
Question:
username_0: 1) Live configuration
A) If one loads a configuration, the user doesn't get feedback on whether the load was successful or not.
If it failed, the error message should be shown, for example when the XML isn't valid (change, for example, "flow" to "flew").
Otherwise, a confirmation that the load succeeded should be shown.
B) Back button
2) Service and header
- Make a view button on settings page of services and headers
- On view page: Delete/Edit/Clone buttons
3) Gateway
- You cannot clone a gateway in standalone mode (there can only be one in standalone mode), so this button should be removed.
- Move delete button to the "View" gateway page
4) Flow (Edit-All)
- When creating a flow the default component type should be set
- Clone button is missing
5) Check if some warnings can be solved (see the log when starting the frontend)
Answers:
username_1: Bugs solved
Status: Issue closed
|
kubeflow/pipelines | 483079053 | Title: Lists are handled incorrectly by python components
Question:
username_0: Take as an example a function `max_value` that takes as input a list of lists of ints and returns the biggest int value.
If we pass a hardcoded list as input, at execution time the function receives it as a string. Something like this:
```python
"[[1, 25], [32, 4]]"
```
If we test it using outputs from previous steps, the input would be something like this:
```python
"[{'PipelineParam': {'name': 'output', 'op_name': 'Return list', 'value': None, 'param_type': <kfp.dsl._metadata.TypeMeta object at 0x1050837b8>, 'pattern': '[1, 25]'}}, {'PipelineParam': {'name': 'output', 'op_name': 'Return list 2', 'value': None, 'param_type': <kfp.dsl._metadata.TypeMeta object at 0x1050837b8>, 'pattern': '[32, 4]'}}]"
```
```python
from typing import List
from kfp.components import func_to_container_op
def return_list(l: List[int]) -> List[int]:
print(l)
return l
def max_value(lists: List[List[int]]) -> int:
from itertools import chain
for l in lists:
print(l)
values = list(chain.from_iterable(lists))
return max(*values)
return_list_op = func_to_container_op(return_list, extra_code='from typing import List')
max_value_op = func_to_container_op(max_value, extra_code='from typing import List')
def pipeline():
first_value = return_list_op([1, 25]).output
second_value = return_list_op([32, 4]).output
max_value_op([[1, 25], [32, 4]]) # The input would be "[[1, 25], [32, 4]]", and the max value will be "]"
max_value_op([first_value, second_value]) # The max value here will be "}"
```
Answers:
username_1: Thanks for the issue. Issues from users help us prioritize the features.
KF Pipelines orchestrates containerized command-line programs.
Command-line programs, on the lowest level, only pass strings or blobs/files.
All other types need to be explicitly supported (by providing serialization/deserialization routines).
At this moment only `str`, `int`, `float` are supported.
Good news is that I've written support for List and Dict and it will be available soon.
I'm also improving the value checking, so in the future you'll get warnings about converting values of unsupported types to strings.
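Until that lands, a workaround consistent with the above is to serialize the list explicitly on the pipeline side and deserialize it inside the component. A rough sketch (the `*_from_json` names are just for illustration, not part of the SDK):
```python
import json

from kfp.components import func_to_container_op


def max_value_from_json(lists_json: str) -> int:
    # The argument arrives as a plain string, so parse it ourselves.
    import json
    from itertools import chain
    lists = json.loads(lists_json)  # "[[1, 25], [32, 4]]" -> [[1, 25], [32, 4]]
    return max(chain.from_iterable(lists))


max_value_from_json_op = func_to_container_op(max_value_from_json)


def pipeline():
    max_value_from_json_op(json.dumps([[1, 25], [32, 4]]))
```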
username_1: Hmm. The generic lists probably won't be supported in the first release. I'll think of how to support them better.
username_2: Thanks for the clarification @username_1
username_1: There is another issue in your code that would remain unfixed for some time:
The components are supposed to have a strict signature with a fixed number of inputs and outputs. You're trying to have variable-length inputs by passing the outputs from a variable number of upstream tasks.
```
max_value_op([str(first_value), str(second_value)])
```
might or might not work, but it's not supported. And in this case (with the feature support I'll add soon) you'll get a list of strings, not numbers.
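To keep a fixed signature, a component that needs both upstream lists could instead declare two separate string inputs and deserialize each one. Again, this is purely an illustrative sketch:
```python
def max_of_two_lists(first_json: str, second_json: str) -> int:
    # Two explicit inputs instead of a variable-length list of inputs.
    import json
    return max(json.loads(first_json) + json.loads(second_json))
```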
username_3: Hi,
@username_1 have you added support for `list`?
It seems that it doesn't work in Kubeflow version 1.0.4.
username_4: Works in Kubeflow 1.2.
@username_3, at first I tried `foo: List[str]`; only afterwards did I realize that's the yet-unsupported generic list. `foo: list` correctly passed the items in the list. |