repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M) |
---|---|---|
opentoonz/opentoonz | 876871286 | Title: shortcut to select parent column
Question:
username_0: A shortcut to select the parent/child columns linked in the schematic view is another much-used feature for cut-out animation;
it speeds up the process of animating different parts.
A shortcut to select the parent/children of the elements.

example from ToonBoom
Answers:
username_1: The schematic is a great way to select parent/child columns. And you can also use the skeleton tool to select the drawing in any column for moving, which looks like the image you shared.
username_0: just think about a simple button to press instead of going in other menus or tools each time
also being able to click on any node in the chain to go to the desired element
username_2: a simple shortcut is even more practical
in Maya you use the arrow keys to move up/down a hierarchy in the viewport (the left/right ones even move you through siblings). |
Azure/azure-sdk-for-python | 707104166 | Title: Data Tables Readme and Samples issues
Question:
username_0: 1.
Section [Link](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/tables/azure-data-tables#general):

Suggestion:
1). Add
```
from azure.data.tables import TableServiceClient
from azure.core.exceptions import HttpResponseError
table_name='your_table_name'
```
2). Update `TableServiceClient(connection_string)` to `TableServiceClient.from_connection_string(connection_string)`.
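For reference, a minimal sketch of the corrected README snippet (the environment variable name is a placeholder, not from the original report):
```
import os
from azure.data.tables import TableServiceClient
from azure.core.exceptions import HttpResponseError

connection_string = os.getenv("AZURE_TABLES_CONNECTION_STRING")  # placeholder env var
table_name = 'your_table_name'

# from_connection_string is the correct constructor, per the suggestion above
table_service_client = TableServiceClient.from_connection_string(connection_string)
```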
2.
Section [Link](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/tables/azure-data-tables/samples/sample_insert_delete_entities.py#L62):


Reason:
AttributeError: 'InsertDeleteEntity' object has no attribute 'account_url' and 'access_key'
Suggestion:
Add class attributes `access_key` and `account_url`:
```
access_key = os.getenv("AZURE_TABLES_KEY")
account_url = os.getenv("AZURE_TABLES_ACCOUNT_URL")
```
3.
Section [Link](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/tables/azure-data-tables/samples/async_samples/sample_query_table_async.py#L69):

Reason:
for entity_chosen in queried_entities:
TypeError: 'AsyncItemPaged' object is not iterable.
Suggestion:
Update to:
```
async for entity_chosen in table_client.query_entities(filter=name_filter, select=["Brand", "Color"]):
    print(entity_chosen)
```
4.
Section [Link](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/tables/azure-data-tables/samples/sample_update_upsert_merge_entities.py):
Note: The same issues as in points 3 and 4 also occur in the [asynchronous sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/tables/azure-data-tables/samples/async_samples/sample_update_upsert_merge_entities_async.py).



Suggestion 1:
Add a class attribute `connection_string` definition in class `TableEntitySamples`:
`connection_string = os.getenv("AZURE_TABLES_CONNECTION_STRING")`
Reason 2:
AttributeError: 'TableEntitySamples' object has no attribute 'set_access_policy' and 'upsert_entities'.
Suggestion 2:
Remove `sample.set_access_policy()` and `sample.upsert_entities()`.
[Truncated]
6.
Section [Link](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/tables/azure-data-tables/samples/async_samples/sample_update_upsert_merge_entities_async.py#L60):

Suggestion 1:
Update to `await table.create_table()`
Reason 2:
entities = list(table.list_entities())
TypeError: 'AsyncItemPaged' object is not iterable.
Suggestion 2:
Update to:
```
entities = list()
async for n in table.list_entities():
    entities.append(n)
```
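Putting points 3 and 6 together, a minimal self-contained sketch of consuming `AsyncItemPaged` (the connection string variable and table name are illustrative assumptions):
```
import asyncio
import os
from azure.data.tables.aio import TableClient

async def main():
    # the async clients live under the .aio subpackage
    table = TableClient.from_connection_string(
        os.getenv("AZURE_TABLES_CONNECTION_STRING"), table_name="mytable")
    async with table:
        # AsyncItemPaged must be consumed with `async for`, not list()
        entities = [entity async for entity in table.list_entities()]
    print(len(entities))

asyncio.run(main())
```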
@jongio for notification.
Status: Issue closed |
stoicflame/enunciate | 108645672 | Title: Docs export fail on second run
Question:
username_0: I'm working on a Gradle plugin for Enunciate, so this problem may be because I don't understand the Enunciate programmatic API rather than a bug per se.
I'm trying to generate docs output only, so I add an export for 'docs' to a folder.
On first run, enunciate will generate the documentation. So everything is peachy.
But on second run, it outputs:
```
[ENUNCIATE] Skipping documentation source generation as everything appears up-to-date...
[ENUNCIATE] Unknown artifact 'docs'. Artifact will not be exported.
```
I can see why it would not do anything, as everything is already done (in respect to the enunciate build dir, anyway).
But it then fails to handle the docs export.
When I delete the export target folder, I would expect a re-export.
Am I missing something?
Answers:
username_1: Fixed at 4f7463f. Thanks for the report.
Status: Issue closed
|
lianlilin/blog | 853581916 | Title: Sorting algorithms
Question:
username_0: # Bubble sort
O(n^2)
# Selection sort
O(n^2)
# Insertion sort
O(n^2)
# Shell sort
# Merge sort
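As a reference point for the entries above, a minimal Python merge sort sketch, O(n log n):
```python
def merge_sort(a):
    # split, sort each half recursively, then merge the sorted halves
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```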
## References
- [This may be the best article in the Eastern Hemisphere analyzing the top ten sorting algorithms](https://mp.weixin.qq.com/s?__biz=MzUyNjQxNjYyMg==&mid=2247485556&idx=1&sn=344738dd74b211e091f8f3477bdf91ee&chksm=fa0e67f5cd79eee3139d4667f3b94fa9618067efc45a797b69b41105a7f313654d0e86949607&scene=21#wechat_redirect)
- [So this is how insertion sort and Shell sort work](https://zhuanlan.zhihu.com/p/217904206) |
code4nara/covid19 | 597888870 | Title: On the "When you are worried about novel coronavirus infection" print page, the contact information overlaps with the previous version
Question:
username_0: ## 起こっている問題 / The Problem
- On the "When you are worried about novel coronavirus infection" print page, the contact information overlaps with the previous version
## スクリーンショット / Screenshot
<!-- バグであればdeveloper toolからコンソールも合わせて添付 -->
<!-- If it's a bug, attach a screenshot of the developer tool console -->

## 期待する見せ方・挙動 / Expected Behavior
- Displayed cleanly
## 起こっている問題の再現手段 / Steps to Reproduce
1. Access https://stopcovid19.code4nara.org/print/flow/
## 動作環境・ブラウザ / Environment
- macOS
- Chrome
Answers:
username_0: My guess is that when GitHub merged, instead of replacing the svg code it somehow (?) kept both the old and new versions
username_1: Is the flow no longer being displayed on the development page a separate problem?
https://upbeat-volhard-740574.netlify.com/naracity
username_0: https://upbeat-volhard-740574.netlify.com/flow
The URL hierarchy of the development page seems to be off. That's a separate problem.
username_1: It seems my connection was just slow.
The same thing happens on the development page, so fixing this will probably resolve it.
Status: Issue closed
username_1: It looked like the relevant code had been deleted, but there seems to be no change on the development page. I'll take a closer look tomorrow.
username_0: On the latest `development` branch, I looked at the output of both `yarn dev` and `yarn run generate:deploy --fail-on-page-error`, and the fix appears to be reflected. Maybe it's the netlify cache or something...?
Reference: [How to clear the netlify cache \- Qiita](https://qiita.com/taichi0514/items/76800bc801c7759c74bc)
username_1: Clearing the netlify cache three times following the steps on the reference page produced no change, but after merging a data update it is displayed correctly.
It may have been some cache somewhere, but the details are unclear.
Status: Issue closed
|
rancher/rancher | 725469589 | Title: Monitoring V2 Grafana storage causes "breaks non-root policy" error
Question:
username_0: <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email <EMAIL> instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
Install rancher-monitoring with selected storage for grafana using `Statefulset template` and use fix for this bug #29638.
**Result:**
Volume for grafana will be created but grafana cannot start due to error `Containers with incomplete status: [init-chown-data grafana-sc-datasources]` and `Error: container's runAsUser breaks non-root policy`.
**Other details that may be helpful:**
We use nfs storage with nfs-client-provisioner.
If we add `initChownData.enabled` to grafana while installing rancher-monitoring it works.
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI):
v2.5.1/k8s 1.19.3
- Installation option (single install/HA):
Single install
<!--
If the reported issue is regarding a created cluster, please provide requested info below
-->
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported):
- Machine type (cloud/VM/metal) and specifications (CPU/memory):
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
```
- Docker version (use `docker version`):
```
Nodes have version from 17.9.1, 18.9.0 and 19.3.5
Rancher server is docker version 17.03.2-ce
```
Answers:
username_1: I hit the same error when installing Monitoring v2 in v2.5-61a8a3691063108bc775c8e4e6f6db971541c253-head
- install Longhorn
- install Monitoring v2 `9.4.201`
- When installing monitoring v2, enable the persistent storage for Grafana
Results:
```
Warning | Failed | Error: container's runAsUser breaks non-root policy
```
<img width="1378" alt="Screen Shot 2020-10-21 at 2 43 41 PM" src="https://user-images.githubusercontent.com/6218999/96791841-0e4b8680-13ae-11eb-8bff-01ba5ab44c54.png">
username_2: I have the same issue, and when using StatefulSet, I'm not allowed to update the "Run as non-root":
`StatefulSet.apps "rancher-monitoring-grafana" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden`
username_3: @username_1 Fix is merged, can you retest this? Also make sure the chart can pass CIS again as I modified some securityContext which might impact CIS.
username_1: The bug fix is validated in rancher `v2.5-d87f58ac300192c90377faef694efb796a824520-head` with the latest monitoring v2 chart `9.4.201`
Monitoring v2 is installed successfully in a hardened cluster and is functional.
<img width="1373" alt="Screen Shot 2020-10-23 at 11 53 58 AM" src="https://user-images.githubusercontent.com/6218999/97042832-7a50fa80-1526-11eb-9fb2-624e3475efef.png">
related UI bug: https://github.com/rancher/dashboard/issues/1689
Status: Issue closed
|
PowerDNS/parent-signals-dot | 606871502 | Title: Is DoT good enough for an AD bit?
Question:
username_0: First, we need to decide on this (and I trust that the WG will pick a fight over this).
Second, if the conclusion is 'yes', we need to write some words on what that means for validators that talk to an upstream resolver, because we cannot forward them DNSKEYs and RRSIGs that allow them to do their own validation.
Answers:
username_1: My thoughts:
We should say something along the lines of "In theory this could partially replace DNSSEC in the future; however, this document does not cover that, due to the complexity it adds with regard to the current abundance of validators that use an upstream resolver. If a future standard does decide to implement this, a concern it should address is how a validator with an upstream resolver can still determine data authenticity."
username_2: So this is a tricky one. In some cases, it may be good enough but not for all cases.
Take an example where an organization generates their zone and hosts their DNS servers. DoT ensures that the content has not been changed in flight, and given that it is the same organization creating the zone, you could pretty much validate the content. I think it would give you roughly the same level of authenticity as online signing.
Now, take a case where an organization owns their zone, dnssec signed it and has third-party secondaries serving the content. Even though you can authenticate the server, there is no guarantee that the third-party is not modifying the content you have given them.
username_3: Agreed. It seems to me that, even with a public key protected by DNSSEC, DoT only protects the transport and that's not enough to authenticate the content, which is what the AD bit is about.
username_1: Let's close this. I do not think this draft should imply that DoT can be used to verify well enough to set the AD bit.
Status: Issue closed
|
STEllAR-GROUP/phylanx | 325063111 | Title: @Phylanx passes by value, not by reference
Question:
username_0: @Phylanx copies the values in its input arguments instead of passing them by reference. This means we do unnecessary copying.
```
from phylanx.ast import Phylanx
import numpy as np

def change(n):
    n[3] = 4

av = np.ones((10))
change(av)  # changes av[3] to be 4
print(av)

@Phylanx()
def change2(n):
    n[4] = 5

change2(av)  # produces no change
print(av)
```
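For contrast, a hedged workaround sketch: rebind the returned value instead of relying on in-place mutation. This assumes a @Phylanx-decorated function can return its result; `change3` is a hypothetical name, not from the original report.
```
from phylanx.ast import Phylanx
import numpy as np

@Phylanx()
def change3(n):
    n[4] = 5
    return n  # return the (copied) array instead of mutating the caller's

av = np.ones((10))
av = change3(av)  # rebind to pick up the change
print(av)
```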
Answers:
username_1: We will never have references across the Python/PhySL interface. I wouldn't even know how to implement that properly in the general case.
username_2: Duplicate of #376 |
jz4o/codingames | 254876083 | Title: Commit the source code completed as of 2017/09/03
Question:
username_0: The following are already done:
- CLASSIC PUZZLE
- EASY
- mars-lander-episode-1
- defibrillators
- horse-racing-duals
- mime-type
- chuck-norris
- ascii-art
- onboarding
- temperatures
- power-of-thor-episode-1
- the-descent
- MEDIUM
- don't-panic-episode-1
- shadows-of-the-knight-episode-1
- skynet-revolution-episode-1
- there-is-no-spoon-episode-1
- COMMUNITY PUZZLES
- the-fastest
- ddcg-mapper |
zdj0311/myOwn | 353603376 | Title: Uploading a local project to GitHub
Question:
username_0: 1. Create a local repository (i.e., a folder) and turn it into a Git repository with git init
2. Copy the project into this folder, then add it to the repository with git add .
3. Commit the project to the repository with git commit -m "commit message"
4. After setting up an SSH key on GitHub, create a new remote repository TEST2 and link the local repository to the remote one with git remote add origin https://github.com/username_0/TEST2.git
5. Finally, push the local repository's project to the remote repository (i.e., GitHub) with git push -u origin master |
GEOS-ESM/MAPL | 631616912 | Title: Bring MAPL_RecordAlarmsIsRinging from CVS
Question:
username_0: We need to bring MAPL_RecordAlarmsIsRinging from the s2sv3 branch of CVS. @yvikhlya reminded me that we need this feature to properly support "dual ocean"
Answers:
username_0: @yvikhlya , I looked again at MAPL and this functionality is already there. There is no need to bring any code from CVS. I also confirmed that your branch, feature/yury/s2s3-merge, compiles successfully. I will wait until some time next week, so we can talk, but my plan is to close this issue. |
codacy/codacy-codesniffer | 667721903 | Title: Parameters not showing in the UI
Question:
username_0: Parameters are defined in the `patterns.json`, but not in the `description.json` file.
This makes the UI skip them.
codacy-plugins-test json tests are skipped in CI:
https://github.com/codacy/codacy-codesniffer/blob/5ce7ee089b825543f7e3debf71ab4c09a2e1fc9d/.circleci/config.yml#L38
### Acceptance criteria
No parameters are deleted and codacy-plugins-test json test is enabled and green in CI
Status: Issue closed
Answers:
username_1: This problem should be already fixed in production. |
MerlinDS/Gimmick | 93810862 | Title: Gimmick initialization
Question:
username_0: Add method for post initialization actions.
Add callback that will execute after completion of initialization.
Status: Issue closed
Answers:
username_0: Not implemented yet
username_0: Add method for post initialization actions.
https://github.com/username_0/Gimmick/blob/master/src/org/gimmick/managers/GimmickConfig.as
Add callback that will execute after completion of initialization.
https://github.com/username_0/Gimmick/blob/master/src/org/gimmick/core/GimmickEngine.as#L57-76
Status: Issue closed
|
gardener/autoscaler | 493176804 | Title: Clean up the repo to make rebasing with upstream as automatic as possible
Question:
username_0: ### Issue
The current changes in this fork makes it hard to automatically rebase to upstream changes. Due to this further manual changes are required every time we want to take in changes from upstream.
The following list shows the chronological order of our commits as far as I can see. It needs to be verified to see if I have missed anything.
1. https://github.com/gardener/autoscaler/commit/c828a5be1afcd707421ec3e11dcd41200a848a90
2. https://github.com/gardener/autoscaler/commit/f07b4d50c3767b985b7a484620b52f29a83b59a6
3. https://github.com/gardener/autoscaler/commit/915aca84f5e14e1d9e5390ddc2b28ff0db572d20
4. https://github.com/gardener/autoscaler/commit/40285d16008ab3803fdedd201d27e2f2754a5e80
5. https://github.com/gardener/autoscaler/commit/da5e1bc8188a333a949d51310fa7daf4fe24c913
6. https://github.com/gardener/autoscaler/commit/d3886a5a60e6855b558ff7614703482c341bd14c
7. https://github.com/gardener/autoscaler/commit/13d61043df474a6b8f9eb47144c18e50f813b9f0
8. https://github.com/gardener/autoscaler/commit/5167fdde8e278eaea19ac7a32318e8accc068a2c
9. https://github.com/gardener/autoscaler/commit/1ce7d24eff25cc66a53db70b9e7a191836ddccb0
10. https://github.com/gardener/autoscaler/commit/6f9a4ac90b79ffcb66097bb85246487d8a1af093
11. https://github.com/gardener/autoscaler/commit/b8fb38745265e91ade9376a90d97b2860cbe6823
12. https://github.com/gardener/autoscaler/commit/da7c71678535048676326657533e94c50da23dbc
13. https://github.com/gardener/autoscaler/commit/1ea00cd6b6b9699babe98f3bd936d5c2455a3f0b
14. https://github.com/gardener/autoscaler/commit/7bd5249cd9b485a3c11f9081c38e17ae9cb3e236
15. https://github.com/gardener/autoscaler/commit/a128cc96de5366706083997a0a0e4c6f4b4d2366
16. https://github.com/gardener/autoscaler/commit/333fcb9bcfd5dbed18a5b86da751790a9ba66e1b
17. https://github.com/gardener/autoscaler/commit/ddf86929460aedc9297435d82b5521d9e19ddc3a
18. https://github.com/gardener/autoscaler/commit/55e365efd9e5cade4d72a2e6f8aa8c27ecdf6ab7
19. https://github.com/gardener/autoscaler/commit/0db747e2cbb352ccff4ebca671af28ba89f5ce24
20. https://github.com/gardener/autoscaler/commit/3168a9f9338b6ddc40db8f4f4035000b19b24c6a
21. https://github.com/gardener/autoscaler/commit/aeee88e94e400727a77bcc769a96c3e64812bfd2
22. https://github.com/gardener/autoscaler/commit/1b9574f8685308be4675d215b10321618f73e7ea
23. https://github.com/gardener/autoscaler/commit/dde3cfb5192a645c448c957f8867e77713e94534
24. https://github.com/gardener/autoscaler/commit/3f3a734a5ef72b117648f0f2670b30c8eba37ea0
25. https://github.com/gardener/autoscaler/commit/7f4f7ba2b2cfae8736d7d827a4a072a0f9fa647a
26. https://github.com/gardener/autoscaler/commit/41c804969eace6b6c5a3b26691125c88587baaea
27. https://github.com/gardener/autoscaler/commit/03b20e55ec709595419d5046944ab9524edff8dc
28. https://github.com/gardener/autoscaler/commit/f454c48fa536e42ecaf494043b95c59a59793270
29. https://github.com/gardener/autoscaler/commit/1bfd5b7e9fafe7db1b303e98e02caf618ded3a3a
30. https://github.com/gardener/autoscaler/commit/beb681d5e9e46229481c26709efc5fba128c7b9a
31. https://github.com/gardener/autoscaler/commit/01144f3ab8d273f0eb3ad652cd86e095ad0afee8
32. https://github.com/gardener/autoscaler/commit/bb50fb3af1d369618087b0867686bbf1aa58e2ca
33. https://github.com/gardener/autoscaler/commit/f0ceff81b267ec2c48ee73ad0f2ff03b9749d375
34. https://github.com/gardener/autoscaler/commit/af07a756598f1d96e31239f0e4ba6bf455e540b2
35. https://github.com/gardener/autoscaler/commit/f0810742f73547ccd2ef261be54bdde33b6db497
36. https://github.com/gardener/autoscaler/commit/e33a0c4599f0d2615520e6250bb5dff8e16c16dd
37. https://github.com/gardener/autoscaler/commit/fc07919d001127ea00f759079899fa39e7e9669d
38. https://github.com/gardener/autoscaler/commit/55d7cc1c6dd74b594237de3b7c0ea980c77dd844
### Solution
We need to refactor the repo to avoid some changes (like package renaming, deletion of folders etc.) and mark other unavoidable changes explicitly so that we can automate much of the work while rebasing on upstream.
Answers:
username_1: close in favour of https://github.com/gardener/autoscaler/issues/51
Status: Issue closed
username_2: ### Issue
The current changes in this fork make it hard to automatically rebase to upstream changes. Due to this, further manual changes are required every time we want to take in changes from upstream.
The following list shows the chronological order of our commits as far as I can see. It needs to be verified to see if I have missed anything.
1. https://github.com/gardener/autoscaler/commit/c828a5be1afcd707421ec3e11dcd41200a848a90
Deleting addon-resizer and vertical-pod-autoscaler makes it hard to rebase in the future. And we would need to keep rebasing in the near future until the machine-api is integrated with cluster-autoscaler. It would be better to retain them until then. So it may be better to remove this commit during the cleanup.
2. https://github.com/gardener/autoscaler/commit/f07b4d50c3767b985b7a484620b52f29a83b59a6
3. https://github.com/gardener/autoscaler/commit/915aca84f5e14e1d9e5390ddc2b28ff0db572d20
4. https://github.com/gardener/autoscaler/commit/40285d16008ab3803fdedd201d27e2f2754a5e80
5. https://github.com/gardener/autoscaler/commit/da5e1bc8188a333a949d51310fa7daf4fe24c913
6. https://github.com/gardener/autoscaler/commit/d3886a5a60e6855b558ff7614703482c341bd14c
7. https://github.com/gardener/autoscaler/commit/13d61043df474a6b8f9eb47144c18e50f813b9f0
8. https://github.com/gardener/autoscaler/commit/5167fdde8e278eaea19ac7a32318e8accc068a2c
9. https://github.com/gardener/autoscaler/commit/1ce7d24eff25cc66a53db70b9e7a191836ddccb0
10. https://github.com/gardener/autoscaler/commit/6f9a4ac90b79ffcb66097bb85246487d8a1af093
11. https://github.com/gardener/autoscaler/commit/b8fb38745265e91ade9376a90d97b2860cbe6823
12. https://github.com/gardener/autoscaler/commit/da7c71678535048676326657533e94c50da23dbc
13. https://github.com/gardener/autoscaler/commit/1ea00cd6b6b9699babe98f3bd936d5c2455a3f0b
14. https://github.com/gardener/autoscaler/commit/7bd5249cd9b485a3c11f9081c38e17ae9cb3e236
15. https://github.com/gardener/autoscaler/commit/a128cc96de5366706083997a0a0e4c6f4b4d2366
Retaining the package name/path would ease future rebases. Might be better to remove this commit during the cleanup.
16. https://github.com/gardener/autoscaler/commit/333fcb9bcfd5dbed18a5b86da751790a9ba66e1b
17. https://github.com/gardener/autoscaler/commit/ddf86929460aedc9297435d82b5521d9e19ddc3a
18. https://github.com/gardener/autoscaler/commit/55e365efd9e5cade4d72a2e6f8aa8c27ecdf6ab7
19. https://github.com/gardener/autoscaler/commit/0db747e2cbb352ccff4ebca671af28ba89f5ce24
20. https://github.com/gardener/autoscaler/commit/3168a9f9338b6ddc40db8f4f4035000b19b24c6a
21. https://github.com/gardener/autoscaler/commit/aeee88e94e400727a77bcc769a96c3e64812bfd2
22. https://github.com/gardener/autoscaler/commit/1b9574f8685308be4675d215b10321618f73e7ea
23. https://github.com/gardener/autoscaler/commit/dde3cfb5192a645c448c957f8867e77713e94534
24. https://github.com/gardener/autoscaler/commit/3f3a734a5ef72b117648f0f2670b30c8eba37ea0
25. https://github.com/gardener/autoscaler/commit/7f4f7ba2b2cfae8736d7d827a4a072a0f9fa647a
26. https://github.com/gardener/autoscaler/commit/41c804969eace6b6c5a3b26691125c88587baaea
27. https://github.com/gardener/autoscaler/commit/03b20e55ec709595419d5046944ab9524edff8dc
Retaining the package name/path would ease future rebases. Might be better to remove this commit during the cleanup.
28. https://github.com/gardener/autoscaler/commit/f454c48fa536e42ecaf494043b95c59a59793270
29. https://github.com/gardener/autoscaler/commit/1bfd5b7e9fafe7db1b303e98e02caf618ded3a3a
30. https://github.com/gardener/autoscaler/commit/beb681d5e9e46229481c26709efc5fba128c7b9a
31. https://github.com/gardener/autoscaler/commit/01144f3ab8d273f0eb3ad652cd86e095ad0afee8
32. https://github.com/gardener/autoscaler/commit/bb50fb3af1d369618087b0867686bbf1aa58e2ca
33. https://github.com/gardener/autoscaler/commit/f0ceff81b267ec2c48ee73ad0f2ff03b9749d375
34. https://github.com/gardener/autoscaler/commit/af07a756598f1d96e31239f0e4ba6bf455e540b2
35. https://github.com/gardener/autoscaler/commit/f0810742f73547ccd2ef261be54bdde33b6db497
36. https://github.com/gardener/autoscaler/commit/e33a0c4599f0d2615520e6250bb5dff8e16c16dd
37. https://github.com/gardener/autoscaler/commit/fc07919d001127ea00f759079899fa39e7e9669d
38. https://github.com/gardener/autoscaler/commit/55d7cc1c6dd74b594237de3b7c0ea980c77dd844
### Solution
We need to refactor the repo to avoid some changes (like package renaming, deletion of folders etc.) and mark other unavoidable changes explicitly so that we can automate much of the work while rebasing on upstream.
Also, it would be a good idea to make the commits that deviate from upstream easily identifiable. Preferably, keep them together and on top of the upstream commits so that future rebasing goes as smoothly as possible, i.e. we should avoid cherry-picking.
username_2: I think this issue is about the larger rebase, considering all the patches, while the one you linked above is only about one very specific update from upstream.
I'd prefer to keep this open.
username_1: /assign @username_2
username_2: Here's the new branch - https://github.com/gardener/autoscaler/tree/machine-controller-manager-provider
- for a cleaner history.
- Easier rebase in the future.
This is also now a default branch for the fork.
username_2: cc @rewiko
username_2: /close with the new branch https://github.com/gardener/autoscaler/tree/machine-controller-manager-provider |
waives/waives.net | 501427667 | Title: Pipeline does not complete if `OnPipelineCompleted()` throws
Question:
username_0: The current implementation of `OnPipelineCompleted` does not handle exceptions, and so will cause a deadlock if an exception is thrown here.
```csharp
void OnPipelineComplete()
{
_onPipelineCompletedUserAction();
Logger.Info("Pipeline complete");
taskCompletion.SetResult(true);
}
```
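An analogous fix, sketched here in Python rather than the SDK's actual C# (illustrative only): complete the future in a finally block so awaiting it cannot deadlock even when the user callback throws.
```python
import asyncio

async def run_pipeline():
    done = asyncio.get_running_loop().create_future()

    def failing_user_action():
        raise RuntimeError("user callback failed")

    def on_pipeline_complete():
        try:
            failing_user_action()  # may throw, as in the report above
        finally:
            if not done.done():
                done.set_result(True)  # always resolved, so awaiting can't hang

    try:
        on_pipeline_complete()
    except RuntimeError:
        pass  # the error still surfaces, but the future is already resolved

    await done  # completes despite the exception

asyncio.run(run_pipeline())
```
In the SDK itself the equivalent would presumably be wrapping `_onPipelineCompletedUserAction()` in try/finally (or calling `taskCompletion.SetException`) so the awaited task always completes.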
Status: Issue closed |
saltstack/salt | 182104739 | Title: Install mysql and ensure that a DB exists
Question:
username_0: ### Description of Issue/Question
I've been struggling for tens of hours now trying to create a super simple state that just installs mysql-server, python-mysqldb, mysql-client and creates a DB. I'm completely new to salt, so I might be doing something very wrong, but after chatting with multiple others at #salt on IRC nobody seems to know what the problem is. Others can reproduce the problem too.
I've been told by multiple people that I need reload_modules to make it work, but it doesn't help; it still looks as if python-mysqldb is undetected/not available at runtime(?).
What am I doing wrong? It seems like such a simple task, but I'm stuck and nobody knows.
### Setup
Using 2016.3.3 on Ubuntu 16.04, clean minimal install.
SLS file:
```
database-packages:
  pkg.installed:
    - pkgs:
      - python-mysqldb
      - mysql-server
      - mysql-client
    - reload_modules: true

mysql-is-running:
  service.running:
    - name: mysql
    - enable: True

has-db:
  mysql_database.present:
    - name: example
    - order: last
```
Error:
```
ID: has-db
Function: mysql_database.present
Name: example
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1733, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1652, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/salt/states/mysql_database.py", line 55, in present
existing = __salt__['mysql.db_get'](name, **connection_args)
File "/usr/lib/python2.7/dist-packages/salt/modules/mysql.py", line 897, in db_get
dbc = _connect(**connection_args)
File "/usr/lib/python2.7/dist-packages/salt/modules/mysql.py", line 330, in _connect
dbc = MySQLdb.connect(**connargs)
File "/usr/lib/python2.7/dist-packages/pymysql/__init__.py", line 88, in Connect
return Connection(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 679, in __init__
self.connect()
File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 891, in connect
self._request_authentication()
File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1097, in _request_authentication
auth_packet = self._read_packet()
File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 966, in _read_packet
packet.check_error()
File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 394, in check_error
err.raise_mysql_exception(self._data)
[Truncated]
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.12 (default, Jul 1 2016, 15:12:24)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.2.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: Ubuntu 16.04 xenial
machine: x86_64
release: 4.4.0-38-generic
system: Linux
version: Ubuntu 16.04 xenial
Answers:
username_1: We need to take a look at why the pymysql plugin isn't working for this, but if you do
`apt-get remove python-pymysql`
before running the above command, your states work correctly. |
Chia-Network/chia-blockchain | 875561589 | Title: [help] Debug.log file
Question:
username_0: I am new to chia, please advise me how to fix the problems in my debug.log file, thanks
 |
hsu-feedback/hsu-vertebrate-reptiles | 752199051 | Title: Monthly VertNet data use report for 2019-12, resource hsu-vertebrate-reptiles
Question:
username_0: Your monthly VertNet data use report is ready!
You can see the HTML rendered version of this report at:
http://tools-usagestats.vertnet-portal.appspot.com/reports/1e165f83-b718-428f-802e-62b0839ea54c/201912/
Raw text and JSON-formatted versions of the report are also available for
download from this link.
A copy of the text version has also been uploaded to your GitHub
repository under the "reports" folder at:
https://github.com/hsu-feedback/hsu-vertebrate-reptiles/tree/master/reports
A full list of all available reports can be accessed from:
http://tools-usagestats.vertnet-portal.appspot.com/reports/1e165f83-b718-428f-802e-62b0839ea54c/
You can find more information on the reporting system, along with an
explanation of each metric, at:
http://www.vertnet.org/resources/usagereportingguide.html
Please post any comments or questions to:
http://www.vertnet.org/feedback/contact.html
Thank you for being a part of VertNet. |
SHU-2016-SummerPractice/AlgorithmExerciseIssues | 173120125 | Title: Encoding topic, 2016-08-25
Question:
username_0: [91. Decode Ways](https://leetcode.com/problems/decode-ways/)
[89. Gray Code](https://leetcode.com/problems/gray-code/)
Status: Issue closed
Answers:
username_1: ```js
/**
* [AC] LeetCode 89 Gray Code
* @param {number} n
* @return {number[]}
*/
var grayCode = function(n) {
if(n <= 0) return [0];
var ans = ['0','1'], i, j;
for(i = 1; i < n; i++){
for(j = Math.pow(2,i) - 1; j >= 0 ; j--){
ans.push('1' + ans[j]);
ans[j] = '0' + ans[j];
}
}
return ans.map(function(s){
return parseInt(s,2);
});
};
```
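For comparison with the prefix-reflection construction above, Gray code also has the closed form g(i) = i ^ (i >> 1); a minimal Python sketch:
```python
def gray_code(n):
    # the i-th Gray code is i ^ (i >> 1)
    return [i ^ (i >> 1) for i in range(1 << n)]

print(gray_code(2))  # [0, 1, 3, 2]
```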
username_2: ## - 91 - C#
```csharp
public class Solution
{
public int NumDecodings(string s)
{
if (String.IsNullOrEmpty(s)) return 0;
if (s[0] == '0') return 0;
if (s.Length == 1) return 1;
if (Convert.ToInt32(s.Substring(0, 2)) > 21 && Convert.ToInt32(s.Substring(0, 2)) % 10 == 0) return 0;
if (s.Length == 2) return Convert.ToInt32(s) > 26 || Convert.ToInt32(s) < 11|| Convert.ToInt32(s) % 10 == 0 ? 1 : 2;
int[] dp = new int[s.Length];
dp[0] = 1;
dp[1] = Convert.ToInt32(s.Substring(0,2)) > 26 || Convert.ToInt32(s.Substring(0, 2)) < 11 ? 1 : 2;
for(int i=2; i<s.Length; i++)
{
int ele = Convert.ToInt32(s.Substring(i - 1, 2));
if (s[i] == '0')
{
if (s[i - 1] == '0' || s[i-1] > '2') return 0;
dp[i] = dp[i - 2];
}
else dp[i] = (ele > 26 || ele < 10) ? s[i-1] == '0' ? dp[i-2] : dp[i - 1] : dp[i - 1]+dp[i-2];
}
return dp[s.Length - 1];
}
}
```
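For reference, the same DP idea in a compact Python form, keeping only the counts for the previous two prefixes:
```python
def num_decodings(s):
    if not s or s[0] == '0':
        return 0
    prev, cur = 1, 1  # ways for the empty prefix and for s[:1]
    for i in range(1, len(s)):
        nxt = cur if s[i] != '0' else 0   # decode s[i] as a single digit
        if 10 <= int(s[i-1:i+1]) <= 26:   # decode s[i-1:i+1] as one number
            nxt += prev
        prev, cur = cur, nxt
    return cur

print(num_decodings("226"))  # 3: "2 2 6", "22 6", "2 26"
```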
username_2: ## 89 - C#
```csharp
public class Solution
{
public IList<int> GrayCode(int n)
{
List<int> res = new List<int>();
res.Add(0);
if (n > 0)
res.Add(1);
for(int i = 1; i<n; i++)
{
for(int j=res.Count-1; j>=0; j--)
{
res.Add((int)Math.Pow(2, i) + res[j]);
}
}
return res;
}
}
``` |
Project-OSRM/osrm-backend | 625094763 | Title: Build Dependencies and Run Dependencies
Question:
username_0: This is not a bug. I just wanted to know which of the dependent tools below can be defined as build dependencies and which can be marked as run dependencies. I plan to remove the tools that aren't needed to run OSRM-Backend.
```
sudo apt install build-essential git cmake pkg-config \
libbz2-dev libxml2-dev libzip-dev libboost-all-dev \
lua5.2 liblua5.2-dev libtbb-dev
```
Answers:
username_1: this list looks correct i think
username_0: @username_1 Sir, I actually didn't mean to ask whether the list is correct or not. I have successfully installed the OSRM backend with the provided list. But now I want to remove the tools from this list that aren't needed while running OSRM. Thanks
username_1: I understood you incorrectly :) to run OSRM you do not need any of those, *but* you need their runtime counterparts:
* libbz2-1.0
* libxml2
* libzip5
* liblua5.2-0
* libtbb2
* libboost-atomic1.71.0
* libboost-chrono1.71.0
* libboost-date-time1.71.0
* libboost-filesystem1.71.0
* libboost-iostreams1.71.0
* libboost-program-options1.71.0
* libboost-regex1.71.0
* libboost-system1.71.0
* libboost-thread1.71.0
Package names are valid for the latest Ubuntu.
Status: Issue closed
|
moebooru/moebooru | 168599860 | Title: Error: Uncaught TypeError: Cannot read property 'text' of null
Question:
username_0: when i try to batch upload something i get this error in the "user_errors.log"
`Error: Uncaught TypeError: Cannot read property 'text' of null
UA: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
URL: https://<website>/
Cookies: wp-settings-1=libraryContent=browse; wp-settings-time-1=1468717497; firstVisit=1468723681719; vote=1; hide-news-ticker=20130822; login=username_0; show_defaults_to_edit=1; mode=view; tag-script=; __utma=240815900.91403891.1468985611.1469975383.1469999523.17; __utmz=240815900.1468985611.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); forum_post_last_read_at="2016-07-19T05:13:10.602Z"; country=DE; user_id=1; user_info=1;50;0; has_mail=0; comments_updated=0; mod_pending=0; block_reason=; resize_image=0; show_advanced_editing=0; my_tags=; held_post_count=0; reported_error=1
File: https://<website>/assets/application-7b4990ee69ade56f45feb7871fa4226804d28627e47caa1cbc3b5ef081cf950e.js line 6`
Answers:
username_1: Fixed.
Status: Issue closed
username_0: Problem persists with different error
Error: Uncaught TypeError: Cannot read property 'value' of undefined
username_1: That'd be a different bug.
username_0: File: https://<website>/assets/moe-legacy/application-3a4cd5a74b2c27bc0eb8607f13f0849e50353e03eccc0ec4ac8acb542c308ab7.js line 11
username_1: That doesn't help. At the very least, I need to know which page causes the error.
username_0: UA: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
URL: https://website/batch/create?url=https%3A%2F%2Fimgur.com%2Fa%2Fanjym%2338&commit=Load+file+index
File: https://website/assets/moe-legacy/application-3a4cd5a74b2c27bc0eb8607f13f0849e50353e03eccc0ec4ac8acb542c308ab7.js line 11
username_1: I can't reproduce the error. I can guess what it is but without precise location of the error it'll take a long time for me to fix (and I don't quite have time for that).
Post the full stack trace in chrome. Or even better run in development mode so I can see which file is causing the error.
username_0: running in dev. mode. where do i find the error now?
username_1: browser console
username_0: the console shows nothing for me
username_1: You need to visit the page again and do the same action.
username_0: my bad. had to tick the "preserve log" box :D
https://i.imgur.com/jo26HuQ.png
username_1: Should be fixed in 6e04283.
username_0: new errors~
https://i.imgur.com/D9DDrsK.png
username_1: Try a full reload first.
username_0: i did. error persists
username_1: Fixed in e2ab9a5.
username_0: next error
https://i.imgur.com/jJgGZHg.png
username_0: Applied your fix. No more error messages, but it's still not working.
"Status:pending"
username_1: As I said, the error is not related to batch upload.
Check `/job_task`, if it's empty, run `bundle exec rake create_jobs`.
Run `bundle exec rake job:start`.
username_1: try `bundle install && gem pristine -a`
username_0: No more errors.
`/job_task` shows only that it's started.
Still on pending.
username_0: It's looping this entry in the development.log:
http://pastebin.com/eRNUxSgw |
szabgab/code-maven.com | 174718246 | Title: Episode date format invalid / Invalid feed.
Question:
username_0: Really enjoying the podcast but I would like to subscribe with Pocket Casts so I don't miss an episode.
Answers:
username_1: This is totally the right place though the actual source code of the site is here: https://github.com/username_1/Perl-Maven I'll look into it soon.
username_1: I think I've fixed it. Please check.
username_0: Thanks. I've checked again and it's giving the same error. I had a look at the feed and noticed a couple of things that may need changing to make it work:
pubDate is missing a comma after the day:
`Sat Sep 3 09:19:24 2016 GMT`
should be
`Sat**,** Sep 3 09:19:24 2016 GMT`
enclosure type is different:
`<enclosure url="http://code-maven.com/media/cmos-3-joel-berger-mojolicious.mp3" length="23645783" type="audio/x-mp3" />`
should be
`<enclosure url="http://code-maven.com/media/cmos-3-joel-berger-mojolicious.mp3" length="23645783" type="**audio/mpeg**" />`
Pocket casts is quite picky with its feed formatting it would seem :)
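For reference, RSS pubDate follows RFC 822, which includes that comma after the weekday; Python's standard library produces the format directly (a minimal sketch):
```
from email.utils import formatdate

# e.g. 'Sat, 03 Sep 2016 09:19:24 GMT' -- note the comma after the weekday
print(formatdate(usegmt=True))
```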
username_1: Does this feed work: http://feeds.5by5.tv/changelog ?
username_1: Could you check the CMOS feed again?
Status: Issue closed
username_1: Phew. 😄 |
pingcap/tidb | 1073181484 | Title: report "update partition record fails" error when upgrade from v4.0.16 to v5.2.0
Question:
username_0: ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
upgrade from v4.0.16 to v5.2.0
After the upgrade, run the stmtflow test ddl/all.jsonnet. The following error appears in the log:
[2021/12/07 17:08:16.565 +08:00] [INFO] [domain.go:129] ["diff load InfoSchema success"] [currentSchemaVersion=4402] [neededSchemaVersion=4403] ["start time"=898.82µs] [phyTblIDs="[]"] [actionTypes="[]"]
[2021/12/07 17:08:16.567 +08:00] [INFO] [schema_validator.go:291] ["the schema validator enqueue, queue is too long"] ["delta max count"=1024] ["remove schema version"=1863]
[2021/12/07 17:08:16.630 +08:00] [INFO] [domain.go:129] ["diff load InfoSchema success"] [currentSchemaVersion=4403] [neededSchemaVersion=4404] ["start time"=602.692µs] [phyTblIDs="[]"] [actionTypes="[]"]
[2021/12/07 17:08:16.886 +08:00] [INFO] [domain.go:129] ["diff load InfoSchema success"] [currentSchemaVersion=4404] [neededSchemaVersion=4405] ["start time"=4.286041ms] [phyTblIDs="[2515,2516,2517]"] [actionTypes="[8,8,8]"]
[2021/12/07 17:08:16.965 +08:00] [ERROR] [partition.go:1218] ["update partition record fails"] [message="new record inserted while old record is not removed"] [error="EncodeRow error: data and columnID count not match 4 vs 3"]
### 2. What did you expect to see? (Required)
no error
### 3. What did you see instead (Required)
### 4. What is your TiDB version? (Required)
v5.2.0
Answers:
username_0: [tidb-5000.tar.gz](https://github.com/pingcap/tidb/files/7667567/tidb-5000.tar.gz)
username_0: data collected by clinic: "https://clinic.pingcap.com:4433/diag/files?uuid=9813e1e4294438a6-116e126a69cc4232-45efdaf37f025cbd"
username_0: this tidb node is scale-out after upgrade
username_1: PTAL @username_4 @username_2
username_0: the testcases report this error:
[negative.tar.gz](https://github.com/pingcap/tidb/files/7668728/negative.tar.gz)
table_partition__drop_partition_2
table_partition__drop_partition_3
table_partition__truncate_partition_2
table_partition__truncate_partition_3
username_1: @username_0 What is the configuration of the cluster? I noticed that 'amend transaction' is enabled, but I'm not sure if it is related to this error.
It would be helpful if we could have the configuration of the cluster as well.
username_2: This is the same issue as https://github.com/pingcap/tidb/issues/28292
It's introduced in 5.1 by the cherry-pick PR https://github.com/pingcap/tidb/pull/21148
It's a critical problem and not easy to fix, see our internal document https://pingcap.feishu.cn/docs/doccnDrJ22mWwkFi9NkUfzwlADc
username_3: dup of https://github.com/pingcap/tidb/issues/28292
Status: Issue closed
username_0: also exists in v5.3.1
username_0: --- /tmp/ddl__negative__table_partition__truncate_partition_2.531154131/a 2022-03-22 07:50:37.956115775 +0000
+++ /tmp/ddl__negative__table_partition__truncate_partition_2.531154131/b 2022-03-22 07:50:37.956115775 +0000
@@ -9,7 +9,7 @@
/* txn */ begin;
-- txn >> 0 rows affected
/* txn */ update t set id = 7 where id = 1;
--- txn >> 2 rows affected
+-- txn >> E1105: EncodeRow error: data and columnID count not match 4 vs 3
/* ddl */ alter table t truncate partition p0;
-- ddl >> 0 rows affected
/* txn */ commit; -- E8028
2022/03/22 07:50:37 [ddl/all.jsonnet#ddl/negative/table_partition__truncate_partition_2] failed: event#11 mismatch: txn:return(/* txn */ update t set id = 7 where id = 1;): expect a result, got (E1105: EncodeRow error: data and columnID count not match 4 vs 3)
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*matchHistory).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:113
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*Test).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:46
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.testOne
/go/tidb-test-util/cmd/stmtflow/command/test.go:117
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.Test.func1
/go/tidb-test-util/cmd/stmtflow/command/test.go:72
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:960
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:897
main.main
/go/tidb-test-util/cmd/stmtflow/main.go:29
username_0: --- /tmp/ddl__negative__table_partition__truncate_partition_3.699959318/a 2022-03-22 07:50:38.351116125 +0000
+++ /tmp/ddl__negative__table_partition__truncate_partition_3.699959318/b 2022-03-22 07:50:38.351116125 +0000
@@ -9,7 +9,7 @@
/* txn */ begin;
-- txn >> 0 rows affected
/* txn */ update t set id = 2 where id = 5;
--- txn >> 2 rows affected
+-- txn >> E1105: EncodeRow error: data and columnID count not match 4 vs 3
/* ddl */ alter table t truncate partition p0;
-- ddl >> 0 rows affected
/* txn */ commit; -- E8028
2022/03/22 07:50:38 [ddl/all.jsonnet#ddl/negative/table_partition__truncate_partition_3] failed: event#11 mismatch: txn:return(/* txn */ update t set id = 2 where id = 5;): expect a result, got (E1105: EncodeRow error: data and columnID count not match 4 vs 3)
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*matchHistory).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:113
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*Test).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:46
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.testOne
/go/tidb-test-util/cmd/stmtflow/command/test.go:117
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.Test.func1
/go/tidb-test-util/cmd/stmtflow/command/test.go:72
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:960
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:897
main.main
/go/tidb-test-util/cmd/stmtflow/main.go:29
runtime.main
username_0: --- /tmp/ddl__negative__table_partition__drop_partition_3.222860356/a 2022-03-22 07:50:34.110112366 +0000
+++ /tmp/ddl__negative__table_partition__drop_partition_3.222860356/b 2022-03-22 07:50:34.110112366 +0000
@@ -9,7 +9,7 @@
/* txn */ begin;
-- txn >> 0 rows affected
/* txn */ update t set id = 2 where id = 5;
--- txn >> 2 rows affected
+-- txn >> E1105: EncodeRow error: data and columnID count not match 4 vs 3
/* ddl */ alter table t drop partition p0;
-- ddl >> 0 rows affected
/* txn */ commit; -- E8028
2022/03/22 07:50:34 [ddl/all.jsonnet#ddl/negative/table_partition__drop_partition_3] failed: event#11 mismatch: txn:return(/* txn */ update t set id = 2 where id = 5;): expect a result, got (E1105: EncodeRow error: data and columnID count not match 4 vs 3)
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*matchHistory).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:113
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*Test).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:46
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.testOne
/go/tidb-test-util/cmd/stmtflow/command/test.go:117
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.Test.func1
/go/tidb-test-util/cmd/stmtflow/command/test.go:72
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:960
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:897
main.main
/go/tidb-test-util/cmd/stmtflow/main.go:29
runtime.main
username_0: 2022/03/22 07:50:31 [ddl/all.jsonnet#ddl/negative/table_partition__drop_partition_1] passed
--- /tmp/ddl__negative__table_partition__drop_partition_2.291812921/a 2022-03-22 07:50:32.535110970 +0000
+++ /tmp/ddl__negative__table_partition__drop_partition_2.291812921/b 2022-03-22 07:50:32.535110970 +0000
@@ -9,7 +9,7 @@
/* txn */ begin;
-- txn >> 0 rows affected
/* txn */ update t set id = 7 where id = 1;
--- txn >> 2 rows affected
+-- txn >> E1105: EncodeRow error: data and columnID count not match 4 vs 3
/* ddl */ alter table t drop partition p0;
-- ddl >> 0 rows affected
/* txn */ commit; -- E8028
2022/03/22 07:50:32 [ddl/all.jsonnet#ddl/negative/table_partition__drop_partition_2] failed: event#11 mismatch: txn:return(/* txn */ update t set id = 7 where id = 1;): expect a result, got (E1105: EncodeRow error: data and columnID count not match 4 vs 3)
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*matchHistory).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:113
github.com/zyguan/tidb-test-util/cmd/stmtflow/core.(*Test).Assert
/go/tidb-test-util/cmd/stmtflow/core/test.go:46
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.testOne
/go/tidb-test-util/cmd/stmtflow/command/test.go:117
github.com/zyguan/tidb-test-util/cmd/stmtflow/command.Test.func1
/go/tidb-test-util/cmd/stmtflow/command/test.go:72
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:960
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:897
main.main
/go/tidb-test-util/cmd/stmtflow/main.go:29
username_0: Reopening because the issue still exists in v6.0.0
username_4: I'm still analysing this, but I think there are more issues that can trigger it. I moved the check for equal length of row and colIDs into RemoveRecord, which then fails for several other cases, like TestInsertOnDuplicateKey, with a read row longer than the number of colIDs. That test just isn't using the binary log, which is why it shows no issues.
username_4: These tests are also using `set @@tidb_enable_amend_pessimistic_txn = 1` which is not compatible with binlog (according to the [docs](https://docs.pingcap.com/tidb/dev/system-variables#tidb_enable_amend_pessimistic_txn-span-classversion-marknew-in-v407span)).
Also, there are existing bugs with amending pessimistic transactions (not sure if they are related to this): https://github.com/pingcap/tidb/issues/20996.
I have not found an easily reproducible test case; the issue can be triggered by QA, but it needs to be reduced for further investigation.
So my current conclusion is that binlog has similar issues (data and columnID count not match), see https://github.com/pingcap/tidb/issues/33608.
And `tidb_enable_amend_pessimistic_txn` is not compatible with binlog, so I will stop investigating for now.
@username_1 & @username_0 should we keep this open or close it as a duplicate of any of the above bugs? If we keep it open, can we lower the severity? |
tensorflow/tensorflow | 192334826 | Title: Tensorboard not showing charts after upgrade to 0.12rc0
Question:
username_0: Tensorboard charts (scalars, distributions, histograms) don't show up after upgrading to 0.12rc0.
No errors are logged on the server.
Check the images below to get a sense of the problem.
Operating System: Ubuntu 15.10
1. pip installation from here:
Ubuntu/Linux 64-bit, CPU only, Python 3.4
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.0rc0-cp34-cp34m-linux_x86_64.whl
2. The output from `python -c "import tensorflow; print(tensorflow.__version__)"`.
0.12.0-rc0


Answers:
username_1: @username_0 We'll look into this this week.
Can you check if this reproduces using python2?
username_0: Same thing on python2.
Don't know if this helps or not but here it goes. The images below show the html generated for both cases.


username_0: I was using an older version of Chromium.
Moved to Firefox and no problems there.
Closing the issue.
Status: Issue closed
username_2: How old was the version of Chromium you were using?
username_0: I was using: Version 48.0.2564.116 Ubuntu 15.10 (64-bit) |
admb-project/admb | 672411842 | Title: Provide more information on "debug" releases or on debugging in general?
Question:
username_0: Thanks @username_1 for the ADMB 12.2 release and for providing a larger set of download options at https://github.com/admb-project/admb/releases.
For those of us less familiar with debugging tools, I think it would be helpful for users to provide more information about the debug versions of ADMB (what does the debug version get you other than a much larger executable?). It could be useful to include a sentence added to the release description, some text in the QuickStart documents, or links from those places to a more complete description elsewhere.
Answers:
username_1: Can you check if http://www.admb-project.org/downloads/admb-12.2/QuickStartMacOSZip.html is enough info?
username_0: Looks good to me, thank you.
Status: Issue closed
|
PF4Public/gentoo-overlay | 789768934 | Title: www-plugins/chrome-binary-plugins / dependency conflict
Question:
username_0: Welp... well I just received this warning from emerge about a dependency conflict....
```
WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict:
www-plugins/chrome-binary-plugins:stable
(www-plugins/chrome-binary-plugins-88.0.4324.96:stable/stable::gentoo, ebuild scheduled for merge) USE="" ABI_X86="(64)" conflicts with
~www-plugins/chrome-binary-plugins-87.0.4280.141 required by (www-client/ungoogled-chromium-87.0.4280.141_p1:0/0::pf4public, installed)
```
Widevine rearing its ugly head again. Let me know if I can test anything.
Thanks
Answers:
username_1: Are you trying to install it again, or is it just a warning? It seems that they've bumped chrome-binary-plugins with the arrival of the new chromium version. I suggest waiting till ungoogled-chromium bumps to version 88 and seeing if the warning persists. By the look of it, this seems to be harmless.
Status: Issue closed
username_0: No, it just came up during my emerge world update, thanks for the response :) |
saltstack/salt | 152525557 | Title: No JSON object could be decoded: if diff contains latin1 Umlaut
Question:
username_0: ### Description of Issue/Question
```
Traceback (most recent call last):
File "/tmp/.root_483e1e_salt/salt-call", line 15, in <module>
salt_call()
File "/tmp/.root_483e1e_salt/py2/salt/scripts.py", line 339, in salt_call
client.run()
File "/tmp/.root_483e1e_salt/py2/salt/cli/call.py", line 58, in run
caller.run()
File "/tmp/.root_483e1e_salt/py2/salt/cli/caller.py", line 148, in run
self.opts)
File "/tmp/.root_483e1e_salt/py2/salt/output/__init__.py", line 86, in display_output
display_data = try_printout(data, out, opts)
File "/tmp/.root_483e1e_salt/py2/salt/output/__init__.py", line 39, in try_printout
return get_printout(out, opts)(data).rstrip()
File "/tmp/.root_483e1e_salt/py2/salt/output/json_out.py", line 57, in output
return json.dumps(data, default=repr, indent=4)
File "/usr/lib64/python2.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib64/python2.7/json/encoder.py", line 203, in encode
chunks = list(chunks)
File "/usr/lib64/python2.7/json/encoder.py", line 428, in _iterencode
for chunk in _iterencode_dict(o, _current_indent_level):
File "/usr/lib64/python2.7/json/encoder.py", line 402, in _iterencode_dict
for chunk in chunks:
File "/usr/lib64/python2.7/json/encoder.py", line 402, in _iterencode_dict
for chunk in chunks:
File "/usr/lib64/python2.7/json/encoder.py", line 402, in _iterencode_dict
for chunk in chunks:
File "/usr/lib64/python2.7/json/encoder.py", line 402, in _iterencode_dict
for chunk in chunks:
File "/usr/lib64/python2.7/json/encoder.py", line 384, in _iterencode_dict
yield _encoder(value)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xfc in position 477: invalid start byte
[ERROR ] No JSON object could be decoded
```
### Setup
monitioring.sls (Snippet)
```
monitoring__bashrc:
file.managed:
- name: {{ pillar['monitoring']['home'] }}/.bashrc
- source: salt://monitoring/files/.bashrc
```
### Steps to Reproduce Issue
```
salt-ssh w123 state.sls monitoring test=True
```
Since I use `test=True` the problem is reproducible. Otherwise it would disappear after the first call.
### Versions Report
current develop branch af7593ae53b3f7902cad2e14aee0850dc57c2966
### UnicodeError
The old .bashrc contained a latin1 Umlaut. I guess this is the reason.
### repr(data) attached
I attached the repr(data) which I created by modifying json_out.py on the remote host.
Answers:
username_0: [data.txt](https://github.com/saltstack/salt/files/244834/data.txt)
username_1: @username_0, thanks for reporting. This should probably be [`sdecode`d](https://github.com/saltstack/salt/blob/v2016.3.0rc2/salt/utils/locales.py#L35) at the `source` parameter in file.managed. |
Code4HR/hrt-bus-api | 83725758 | Title: Alert maintainers when fleet management feed goes down.
Question:
username_0: Companion issue to https://github.com/Code4HR/hrt-bus-finder/issues/145
When the feed is stale (let's say 30+ minutes), it would be helpful if the API could email the maintainers of the project at <EMAIL>
Answers:
username_1: need to revisit this and automate it so that users aren't the ones reporting when the feed goes down.
actionable tasks
1. Should load (ftp://172.16.58.3/Anrd/hrtrtf.txt)
2. If data is (stale) hasn't reported changes within 5 minutes notify maintainers
3. If data is absent (just the headers) notify maintainers
4. If times out, notify maintainers.
This can be accomplished with some sort of cron (see the sketch below), but simple uptime notifications aren't useful.
This task can also fire an email off to <EMAIL>
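A rough sketch of such a check, runnable from cron (the feed URL comes from this thread; the recipient address and SMTP host are placeholders, since the real address is redacted above):
```
import smtplib
import urllib.request
from email.message import EmailMessage

FEED = "ftp://172.16.58.3/Anrd/hrtrtf.txt"
MAINTAINERS = "maintainers@example.org"  # placeholder for the redacted address

def feed_problem():
    try:
        body = urllib.request.urlopen(FEED, timeout=60).read()
    except Exception as exc:  # timed out or unreachable (task 4)
        return "feed unreachable: %s" % exc
    lines = body.decode("utf-8", "replace").splitlines()
    if len(lines) <= 1:  # just the headers (task 3)
        return "feed contains no data rows"
    return None  # detecting staleness (task 2) would compare against the previous fetch

problem = feed_problem()
if problem:
    msg = EmailMessage()
    msg["Subject"] = "HRT bus feed appears down"
    msg["From"] = MAINTAINERS
    msg["To"] = MAINTAINERS
    msg.set_content(problem)
    smtplib.SMTP("localhost").send_message(msg)  # assumes a local MTA
```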
username_2: After notification of failure, how long should we wait before we check again / send another notification if its still down?
username_1: Notifications probably every hour or two; checking could be as responsive as we want to make it. The feed refreshes every ~5 minutes, so even 5 is cool, but we could do it minute-wise.
I figure we could just stick this easily in a cron
username_1: @username_2 wrote something for this, I don't remember where this is committed, do you have a copy? I want to mark and close this done :)
username_2: Hey sorry I completely forgot to reply to this last week, here it is: https://github.com/username_2/hrtStatus |
JoeMayo/LinqToTwitter | 119388809 | Title: Document Exception
Question:
username_0: API calls contain validations on input parameters that throw exceptions. The documentation doesn't cover these cases. The closest thing that exists is to declare whether a parameter is required or not. It would be nice to have a table to show these exceptions.
The Wiki is open for modification, so any help in this area is welcome.
Answers:
username_1: Hello, I ran the test demo to send a message but got the error "The remote server returned an error: (403) Forbidden". Can you help me?
Status: Issue closed
|
watson-developer-cloud/java-sdk | 227183264 | Title: [Speech-to-text] Websocket remains open after inputstream is closed
Question:
username_0: Hi all,
When I close my input pipe, the WebSocket keeps streaming and eventually times out after 30 seconds of inactivity, throwing the following exception:
SEVERE: Session timed out, no data received in the last 30 seconds.
java.lang.RuntimeException: Session timed out, no data received in the last 30 seconds.
at com.ibm.watson.developer_cloud.speech_to_text.v1.websocket.WebSocketManager$SpeechToTextWebSocketListener.onMessage(WebSocketManager.java:140)
at okhttp3.internal.ws.RealWebSocket.onReadMessage(RealWebSocket.java:307)
at okhttp3.internal.ws.WebSocketReader.readMessageFrame(WebSocketReader.java:222)
at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:101)
at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:262)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:201)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My question is: Is there a manual way to close the websocket?
Using the latest version:
<dependency>
<groupId>com.ibm.watson.developer_cloud</groupId>
<artifactId>java-sdk</artifactId>
<version>3.8.0</version>
</dependency>
Answers:
username_1: I'm working on this @username_0.
username_0: Thanks @username_1 - Looking forward to seeing your solution.
Status: Issue closed
username_1: @username_0 Once I merge #691 it will be in sonatype snapshots.
Check the readme for instructions on how to use the snapshot repository. It would be great if you could validate that this issue was fixed.
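For reference, the setup is roughly the following (the repository URL is the standard Sonatype snapshots location; the exact snapshot version string is my guess):
```xml
<repositories>
  <repository>
    <id>sonatype-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>

<dependency>
  <groupId>com.ibm.watson.developer_cloud</groupId>
  <artifactId>java-sdk</artifactId>
  <version>3.8.1-SNAPSHOT</version>
</dependency>
```
Rebuild against the snapshot and re-run the scenario that timed out to confirm the socket now closes. |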
waywardgeek/sonic | 647174533 | Title: Build failure on Mac
Question:
username_0: Undefined symbols for architecture x86_64:
"_fftw_destroy_plan", referenced from:
_sonicAddPitchPeriodToSpectrogram in spectrogram.o
"_fftw_execute", referenced from:
_sonicAddPitchPeriodToSpectrogram in spectrogram.o
"_fftw_plan_dft_r2c_1d", referenced from:
_sonicAddPitchPeriodToSpectrogram in spectrogram.o
ld: symbol(s) not found for architecture x86_64
I had to run make with USE_SPECTROGRAM=0 to get it to build on Mac, but I don't know what functionality this will impact.
Answers:
username_1: Are you on the latest version of sonic? Are you possibly on the [espeak fork](https://github.com/espeak-ng/sonic)?
There was a [fix in 2018](https://github.com/username_2/sonic/commit/0501f109f3eec42120cd209153e58aabdbed8842) in this repo to link in libfftw3. The espeak repo fork is out of date and doesn't have that fix.
The build works for me after that fix. Before the fix, you can see that it failed on
```
gcc -Wall -Wno-unused-function -O3 -ansi -fPIC -pthread -DSONIC_SPECTROGRAM -shared -Wl,-install_name,libsonic.so.0 sonic.o spectrogram.o -o libsonic.so.0.3.0
```
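After the fix, the same link step presumably just gains `-lfftw3` at the end (I haven't diffed the Makefile, but that is the part that resolves the missing `_fftw_*` symbols):
```
gcc -Wall -Wno-unused-function -O3 -ansi -fPIC -pthread -DSONIC_SPECTROGRAM -shared -Wl,-install_name,libsonic.so.0 sonic.o spectrogram.o -o libsonic.so.0.3.0 -lfftw3
```
Building with `USE_SPECTROGRAM=0` sidesteps the FFTW link entirely, which is why that workaround also succeeds.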
username_2: Hi, Kevin. Sorry I missed your email from June! The spectrogram functionality should be disabled. I'll update the Makefile to set it this way. Espeak does not use the spectrogram feature. I only include this feature since I believe we may be able to significantly improve speech recognition using more efficient and higher quality signal analysis on the front-end.
Also, Espeak should be enhanced to include the heart of the sonic algorithm for high-speed speech (> 2.0 times speedup) directly in the vocoder. I believe most commercial TTS engines have already done this. This improves the speech quality, since sonic introduces about -40 db of noise. It also reduces the CPU load which is due to sonic trying to find the best fundamental pitch estimate for every pitch epoch. Espeak's vocoder knows the pitch already.
Bill
username_1: Hey Bill @username_2, thanks for the response and the insight on Espeak! Not a big deal, but minor correction - you actually missed Daniel's message from June. I, Kevin, only commented on this issue a few days ago.
So if Espeak-ng doesn't use the spectrogram feature, will you be disabling the spectrogram in this upstream or in the Espeak-ng fork? Do you have maintainer access to the Espeak-ng forked repo?
username_1: Oh nevermind. I see you've made a [commit](https://github.com/username_2/sonic/commit/4a052d9774387a9d9b4af627f6a74e1694419960) to this repo. I guess somebody needs to fast-forward the Espeak-ng fork to be in sync with this upstream repo. |
GetStream/react-activity-feed | 539791164 | Title: Override LoadingSpinner in StreamApp component
Question:
username_0: As far as I can tell, there is no way to override the Loading Spinner component in the StreamApp component, so it is difficult to customize the loading state of the stream app. The css is also written inline for the LoadingSpinner component, so it can't be targeted and removed via css directly.
Answers:
username_1: Hi @username_0 I have just released version `0.9.25` which includes the prop LoadingIndicator on the FlatFeed which allows you to override the LoadingIndicator component. I also removed the inline styles so you can update them as well.
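Usage is roughly this (the spinner component is just a placeholder):
```jsx
import React from 'react';
import { FlatFeed } from 'react-activity-feed';

const MySpinner = () => <div className="my-spinner">loading...</div>;

const Feed = () => <FlatFeed feedGroup="timeline" LoadingIndicator={MySpinner} />;
```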
username_0: amazing! thank you!
Status: Issue closed
|
appirio-tech/topcoder-app | 206121465 | Title: Inactive submission button at the Challenge Specs page, when submission phase is late.
Question:
username_0: When submission phase is late (i.e. we are past the deadline for submission, but the submission phase has been re-opened by admin for some very special reason, to allow somebody to contribute past the deadline), the `Submit` button at the Challenge Specs page remains inactive. Submission through the OR page is available though.
Answers:
username_1: @username_0 Any test challenge active currently with above conditions?
username_0: @username_1 Nope, I've tried to rapidly create one for the remote dev env, but failed :( I believe, though, this one might be a simple fix which we can do without a test. Most probably, the button condition checks that the challenge phase is Submission and that the time till the end of the phase is positive; thus we only need to remove the second condition (roughly the sketch below).
I'll look more into creating the test challenge tomorrow.
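Roughly what I mean (only `submissionEndDate` appears in the actual code; the other names here are hypothetical):
```js
// hypothetical sketch, not the real source
var isRegistered = vm.challenge.isRegistered;
var inSubmissionPhase = vm.challenge.currentPhase === 'Submission';
// the old check also required: moment(vm.challenge.submissionEndDate).isAfter(moment())
vm.canSubmit = isRegistered && inSubmissionPhase;
```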
username_1: @username_0 current logic is simply to check if `submissionEndDate` is positive and the user has registered for the challenge.
It would be better to check what's causing that issue and then fix that.
username_0: @ajefts Can you help me a bit here? I don't know how to make a mock billing account in `www.topcoder-dev.com/direct`, and without billing account development version of Direct does not allow to activate challenges.
username_0: @username_1
FYI: I believe, using this https://github.com/appirio-tech/tc-common-tutorials, especially the `docker/direct-app` folder there, you can set up locally everything necessary to test challenge-phases-related functionality. I have not tried it just now, but I believe I was using it in some challenge a while ago, and it was working like a charm (after you make it through the setup :)
username_0: ### Update
Test challenge set up at dev site: https://www.topcoder-dev.com/challenge-details/30050681/?type=develop&noncache=true. Submission phase is manually set open, but it is late (phase end date is in the past already).
`dan_developer` test user is registered to the challenge as competitor. He can upload submission using Online Review, but he can't from the challenge details page, as the button is inactive.
Status: Issue closed
|
dotnet/machinelearning-samples | 473236127 | Title: Unable to install VSIX Installer extension
Question:
username_0: Problem encountered on https://dotnet.microsoft.com/learn/machinelearning-ai/ml-dotnet-get-started-tutorial/install
Operating System: windows
26/07/2019 01:46:53 PM - Microsoft VSIX Installer
26/07/2019 01:46:53 PM - -------------------------------------------
26/07/2019 01:46:53 PM - vsixinstaller.exe version:
26/07/2019 01:46:53 PM - 15.9.3041
26/07/2019 01:46:53 PM - -------------------------------------------
26/07/2019 01:46:53 PM - Command line parameters:
26/07/2019 01:46:53 PM - C:\Program Files (x86)\Microsoft Visual Studio\Installer\resources\app\ServiceHub\Services\Microsoft.VisualStudio.Setup.Service\VSIXInstaller.exe,C:\Users\z003v57f\Downloads\MLNET_Model_Builder.vsix
26/07/2019 01:46:53 PM - -------------------------------------------
26/07/2019 01:46:53 PM - Microsoft VSIX Installer
26/07/2019 01:46:53 PM - -------------------------------------------
26/07/2019 01:46:54 PM - Initializing Install...
26/07/2019 01:46:54 PM - Extension Details...
26/07/2019 01:46:54 PM - Identifier : FE96D051-645F-4309-AE99-107A776B0DA2
26/07/2019 01:46:54 PM - Name : ML.NET Model Builder (Preview)
26/07/2019 01:46:54 PM - Author : Microsoft
26/07/2019 01:46:54 PM - Version : 16.0.1907.1703
26/07/2019 01:46:54 PM - Description : Simple UI tool to build custom machine learning models.
26/07/2019 01:46:54 PM - Locale : en-US
26/07/2019 01:46:54 PM - MoreInfoURL :
26/07/2019 01:46:54 PM - InstalledByMSI : False
26/07/2019 01:46:54 PM - SupportedFrameworkVersionRange : [4.5,)
26/07/2019 01:46:54 PM -
26/07/2019 01:46:58 PM - SignatureState : ValidSignature
26/07/2019 01:46:58 PM - SignedBy : Microsoft Corporation
26/07/2019 01:46:58 PM - Certificate Info :
26/07/2019 01:46:58 PM - -------------------------------------------------------
26/07/2019 01:46:58 PM - [Subject] : CN=Microsoft Corporation, OU=OPC, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
26/07/2019 01:46:58 PM - [Issuer] : CN=Microsoft Code Signing PCA 2010, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
26/07/2019 01:46:58 PM - [Serial Number] : 330000026ECE6AE5984BFC96A900000000026E
26/07/2019 01:46:58 PM - [Not Before] : 07/09/2018 02:30:30 AM
26/07/2019 01:46:58 PM - [Not After] : 07/09/2019 02:30:30 AM
26/07/2019 01:46:58 PM - [Thumbprint] : 99B6246883B4B32EA59AE18B36945D205A876800
26/07/2019 01:46:58 PM -
26/07/2019 01:46:58 PM - Supported Products :
26/07/2019 01:46:58 PM - Microsoft.VisualStudio.Community
26/07/2019 01:46:58 PM - Version : [15.0.28307.665,17.0)
26/07/2019 01:46:58 PM - Microsoft.VisualStudio.Enterprise
26/07/2019 01:46:58 PM - Version : [15.0.28307.665,17.0)
26/07/2019 01:46:58 PM - Microsoft.VisualStudio.Pro
26/07/2019 01:46:58 PM - Version : [15.0.28307.665,17.0)
26/07/2019 01:46:58 PM -
26/07/2019 01:46:58 PM - References :
26/07/2019 01:46:58 PM - Prerequisites :
26/07/2019 01:46:58 PM - -------------------------------------------------------
26/07/2019 01:46:58 PM - Identifier : Microsoft.VisualStudio.Component.CoreEditor
26/07/2019 01:46:58 PM - Name : Visual Studio core editor
26/07/2019 01:46:58 PM - Version : [15.0,17.0)
26/07/2019 01:46:58 PM -
26/07/2019 01:46:58 PM - -------------------------------------------------------
26/07/2019 01:46:58 PM - Identifier : Microsoft.VisualStudio.Workload.NetCoreTools
26/07/2019 01:46:58 PM - Name : .NET Core cross-platform development
26/07/2019 01:46:58 PM - Version : [15.0,17.0)
26/07/2019 01:46:58 PM -
26/07/2019 01:46:58 PM - -------------------------------------------------------
26/07/2019 01:46:58 PM - Identifier : Microsoft.NetCore.ComponentGroup.DevelopmentTools.2.1
26/07/2019 01:46:58 PM - Name : .NET Core 2.1 development tools
26/07/2019 01:46:58 PM - Version : [15.0,17.0)
26/07/2019 01:46:58 PM -
26/07/2019 01:46:58 PM - Signature Details...
26/07/2019 01:46:58 PM - Extension is signed with a valid signature.
26/07/2019 01:46:58 PM -
26/07/2019 01:46:58 PM - Searching for applicable products...
26/07/2019 01:46:58 PM - Found installed product - Microsoft Visual Studio Premium 2012
26/07/2019 01:46:58 PM - Found installed product - Microsoft Visual Studio Professional 2012
26/07/2019 01:46:58 PM - Found installed product - Microsoft Visual Studio 2012 Shell (Integrated)
26/07/2019 01:46:58 PM - Found installed product - Global Location
26/07/2019 01:46:58 PM - Found installed product - Visual Studio Enterprise 2017
26/07/2019 01:46:58 PM - VSIXInstaller.NoApplicableSKUsException: This extension is not installable on any currently installed products.
at VSIXInstaller.ExtensionService.GetInstallableData(String vsixPath, String extensionPackParentName, Boolean isRepairSupported, IStateData stateData, IEnumerable`1& skuData)
at VSIXInstaller.ExtensionPackService.IsExtensionPack(IStateData stateData, Boolean isRepairSupported)
at VSIXInstaller.ExtensionPackService.ExpandExtensionPackToInstall(IStateData stateData, Boolean isRepairSupported)
at VSIXInstaller.App.Initialize(Boolean isRepairSupported)
at VSIXInstaller.App.Initialize()
at System.Threading.Tasks.Task`1.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.VisualStudio.Telemetry.WindowsErrorReporting.WatsonReport.GetClrWatsonExceptionInfo(Exception exceptionObject)
Answers:
username_1: Hi @username_0 - it appears your issue relates to Model Builder rather than ML.NET samples. In that case, your issue would be better-suited for the Model Builder repo: https://github.com/dotnet/machinelearning-modelbuilder/issues. Please let me know if you are having any issues with the samples/samples repo itself! |
jackofsporks/blockit | 302513976 | Title: Shouldn't there ALWAYS be something there?!?! At least in this particular case where we're in an element. But now we're abstracted!!!!
Question:
username_0: if (!matchGuaranteed && !match)
----
*opened via [imdone.io](https://imdone.io) from a code comment on [20ae47f7](https://github.com/username_0/blockit/commit/20ae47f7) by username_0*
----
https://github.com/username_0/blockit/blob/70af5f49ac3561692612ecc5d71cae8977c5e35d/lib/control/Locator.js#L391-L397
Answers:
username_0: QUESTION Shouldn't there ALWAYS be something there?!?! At least in this particular case where we're in an element. But now we're abstracted!!!! id:62 gh:68 ic:gh
----
https://github.com/username_0/blockit/blob/70af5f49ac3561692612ecc5d71cae8977c5e35d/lib/control/Locator.js#L391-L397
username_0: QUESTION Shouldn't there ALWAYS be something there?!?! At least in this particular case where we're in an element. But now we're abstracted!!!! id:62 gh:68 ic:gh
----
https://github.com/username_0/blockit/blob/11745c0692aa22c71879cbf9f303cf52e27f3ba7/lib/control/Locator.js#L392-L398
username_0: QUESTION Shouldn't there ALWAYS be something there?!?! At least in this particular case where we're in an element. But now we're abstracted!!!! id:62 gh:68 ic:gh
----
https://github.com/username_0/blockit/blob/57e9444397afe1d4ba3eb8d4ec8e6b12d63d0ce2/lib/langs/html/blocks.js#L391-L397
username_0: QUESTION Shouldn't there ALWAYS be something there?!?! At least in this particular case where we're in an element. But now we're abstracted!!!! id:62 gh:68 ic:gh
----
https://github.com/username_0/blockit/blob/ad4bb68151c0b70f84d7e5b0af56743efe07b93b/lib/langs/html/blocks.js#L376-L382
username_0: QUESTION Shouldn't there ALWAYS be something there?!?! At least in this particular case where we're in an element. But now we're abstracted!!!! id:62 gh:68 ic:gh
----
https://github.com/username_0/blockit/blob/a449943cba90bd479b721958541439e273219c9a/lib/langs/html/blocks.js#L377-L383 |
DataDog/chef-datadog | 258927008 | Title: Update README with an example for AWS OpsWorks (chef solo?)
Question:
username_0: All I had to do in the end to get this working was to provide the node attributes and invoke the recipe by adding the line `include_recipe 'datadog::dd-agent'` at the top of my `install.rb` but it would probably be nice to include that in the README for those not familiar with Chef in the typical context (definitely me).
Answers:
username_0: Happy to add this change myself
username_1: Hi @username_0, and thanks for the feedback! Sorry I didn't answer before.
If you have the chance to improve the README with better instructions your contribution would be more than welcome :)
Status: Issue closed
|
scikit-learn/scikit-learn | 238306212 | Title: In linear_model.LinearRegression: ValueError: array must not contain infs or NaNs
Question:
username_0: <!--
If your issue is a usage question, submit it here instead:
- StackOverflow with the scikit-learn tag: http://stackoverflow.com/questions/tagged/scikit-learn
- Mailing List: https://mail.python.org/mailman/listinfo/scikit-learn
For more information, see User Questions: http://scikit-learn.org/stable/support.html#user-questions
-->
<!-- Instructions For Filing a Bug: https://github.com/scikit-learn/scikit-learn/blob/master/CONTRIBUTING.md#filing-bugs -->
#### Description
<!-- Example: Joblib Error thrown when calling fit on LatentDirichletAllocation with evaluate_every > 0-->
I ran the **linear_model.LinearRegression()** function. I have filtered the NaN and inf values with `numpy.nan_to_num(X)`.
#### Steps/Code to Reproduce
<!--
Example:
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
docs = ["Help I have a bug" for i in range(1000)]
vectorizer = CountVectorizer(input=docs, analyzer='word')
lda_features = vectorizer.fit_transform(docs)
lda_model = LatentDirichletAllocation(
n_topics=10,
learning_method='online',
evaluate_every=10,
n_jobs=4,
)
model = lda_model.fit(lda_features)
```
If the code is too long, feel free to put it in a public gist and link
it in the issue: https://gist.github.com
-->
```
......
matrix = numpy.array(matrix)
where_inf = numpy.isinf(matrix)
where_nan = numpy.isnan(matrix)
matrix[where_inf]=0
matrix[where_nan]=0
clf = linear_model.LinearRegression(self.params)
X = matrix
y = label
print type(X)
X = numpy.nan_to_num(X)
clf.fit(X,y)
....
```
Result:
#### Expected Results
<!-- Example: No error is thrown. Please paste or describe the expected results.-->
#### Actual Results
<!-- Please paste or specifically describe the actual output or traceback. -->
#### Versions
<!--
Please run the following snippet and paste the output below.
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import sklearn; print("Scikit-Learn", sklearn.__version__)
-->
<!-- Thanks for contributing! -->
Answers:
username_1: do you have a feature that is constant?
Status: Issue closed
|
JuantBanda/Queues | 170244970 | Title: Values stored in dictionaries
Question:
username_0: When you do stuff like
```python
for key, value in self.items():
    elemA.add(value)
```
you could just write `self.values()` to obtain the values stored as a list and then make it into a set
```python
elemA = set(self.values())
```
to make it all shorter. |
arifa-chouhdry/14-may-task | 445599386 | Title: Eye catch button
Question:
username_0: JOIN NOW! is an eye-catching button, so as in the mockup, you should increase its padding and increase the font size a bit more. You should also apply the font family as in the mockup.
https://github.com/username_1/14-may-task/blob/cd38806c0bcfd5d20dace5c1b3ca555bc0da92aa/style.css#L53
Answers:
username_1: #10 solved
username_0: The font size is still not the same, but I'm closing this anyway.
Status: Issue closed
|
alibaba/weex | 214341212 | Title: How can native proactively send events to Weex?
Question:
username_0: <!--
Note: The Weex community has migrated to the Apache Foundation. To keep the community focused, issues newly created after **March 13, 2017** will not be handled here.
For details please go to: https://weex.apache.org/cn/guide/contributing.html
Thanks for your understanding
-->
Answers:
username_1: Communication between native and Weex #1337
Native proactively sending events to JS in Weex #866
See these two issues for reference
username_2: This PR/issue hasn't received a response since Weex migrated to [Apache](http://incubator.apache.org/projects/weex.html) two years ago, and this repository is no longer active.
Feel free to ask questions in the [new repository](https://github.com/apache/incubator-weex), and thanks for your contribution.
username_3: This issue has been outdated for a long time and will be closed now. You can create a new one if you still have questions.
Status: Issue closed
|
sensu/sensu-docs | 215854664 | Title: This project is difficult to use independently of the private website project
Question:
username_0: Currently the content of this repo is pulled into the Sensu website as part of publishing that site from its private repo. This makes it painful to preview changes for those who have access to the private website repo, and practically impossible for those who don't.
In the interest of solving this problem and improving our documentation overall, I think we should establish criteria for selecting a different tool/method to generate online documentation.
Requirements
- [ ] Supports writing docs in markdown
- [ ] Generates static site we can host in S3 or similar
- [ ] Local development/preview possible without access to existing private website project
- [ ] Online search
Nice to have
- [ ] Sidebar examples à la [Stripe API Docs](https://stripe.com/docs/api)
Answers:
username_0: I think http://www.mkdocs.org/ looks promising
username_1: For posterity, I believe the tools that have discussed wanting to look at internally (within Sensu, Inc.) are:
- http://www.mkdocs.org/
- https://readthedocs.org/
- http://readme.io/ (hosted only)
This isn't to say these are the only options, they're just the ones we've looked at so far. Ideally I would like to see us end up with a fully self-contained documentation website that can be hosted at https://docs.sensuapp.org (for example), and that can be contributed to by cloning this repository and running a local instance of the same website in some kind of development mode.
username_2: We think a good "quick test" is to create a new project/repository, using mkdocs, and port a very specific part of the current Sensu documentation to get an idea of the level of pain.
username_0: We've ported the bulk of content from the 0.29 documentation in this repo to a new project using mkdocs: https://github.com/sensu/sensu-mkdocs/
Live preview here: https://sensu.github.io/sensu-mkdocs/
Once we've sorted out formatting issues described in https://github.com/sensu/sensu-mkdocs/issues/6 I think we'll be ready to publish this new iteration at docs.sensuapp.org. 👍
username_3: something that should be considered with the go forward solution: https://github.com/sensu/sensu-docs/issues/680
username_0: Since my last comment in April we've determined that the mkdocs project isn't a good fit for us, in part because it lacks support for maintaining multiple versions of product documentation. As a result we've begun an effort to migrate our documentation to sensu/sensu-docs-site, a Hugo powered site which will be published independently of the existing sensuapp.org website.
Status: Issue closed
username_4: All of your dreams have come true: https://github.com/sensu/sensu-docs-site/
 |
makiwara/atomdo | 180789394 | Title: Error On Install
Question:
username_0: Hey, there. Getting an error on installing this into Atom:
```
Error: Failed to execute 'registerElement' on 'Document': Registration failed for type 'status-bar-tasks'. A type with that name is already registered.
at Error (native)
at Object.<anonymous> (/home/solarlune/.atom/packages/atomdo/lib/views/task-status-view.coffee:65:27)
at Object.<anonymous> (/home/solarlune/.atom/packages/atomdo/lib/views/task-status-view.coffee:1:1)
at Module._compile (/usr/share/atom/resources/app.asar/src/native-compile-cache.js:103:30)
at Object.defineProperty.value [as .coffee] (/usr/share/atom/resources/app.asar/src/compile-cache.js:208:21)
at Module.load (module.js:357:32)
at Function.Module._load (module.js:314:12)
at Module.require (module.js:367:17)
at require (/usr/share/atom/resources/app.asar/src/native-compile-cache.js:50:27)
at Object.<anonymous> (/home/solarlune/.atom/packages/atomdo/lib/atomdo.coffee:8:18)
at Object.<anonymous> (/home/solarlune/.atom/packages/atomdo/lib/atomdo.coffee:1:1)
at Module._compile (/usr/share/atom/resources/app.asar/src/native-compile-cache.js:103:30)
at Object.defineProperty.value [as .coffee] (/usr/share/atom/resources/app.asar/src/compile-cache.js:208:21)
at Module.load (module.js:357:32)
at Function.Module._load (module.js:314:12)
at Module.require (module.js:367:17)
at require (/usr/share/atom/resources/app.asar/src/native-compile-cache.js:50:27)
at Package.module.exports.Package.requireMainModule (/usr/share/atom/resources/app.asar/src/package.js:718:27)
at /usr/share/atom/resources/app.asar/src/package.js:117:28
at Package.module.exports.Package.measure (/usr/share/atom/resources/app.asar/src/package.js:92:15)
at Package.module.exports.Package.load (/usr/share/atom/resources/app.asar/src/package.js:106:12)
at PackageManager.module.exports.PackageManager.loadPackage (/usr/share/atom/resources/app.asar/src/package-manager.js:457:14)
at PackageManager.module.exports.PackageManager.activatePackage (/usr/share/atom/resources/app.asar/src/package-manager.js:536:30)
at /usr/share/atom/resources/app.asar/node_modules/settings-view/lib/package-manager.js:460:29
at exit (/usr/share/atom/resources/app.asar/node_modules/settings-view/lib/package-manager.js:73:16)
at triggerExitCallback (/usr/share/atom/resources/app.asar/src/buffered-process.js:215:47)
at /usr/share/atom/resources/app.asar/src/buffered-process.js:229:18
at Socket.<anonymous> (/usr/share/atom/resources/app.asar/src/buffered-process.js:100:18)
at emitOne (events.js:95:20)
at Socket.emit (events.js:182:7)
at Pipe._onclose (net.js:477:12)
``` |
google/guava | 140400070 | Title: Map tests assert on .toString even when CollectionFeature.NON_STANDARD_TOSTRING is specified
Question:
username_0: See a unit test demonstrating the issue here: https://github.com/gpanther/fastutil-guava-tests/blob/master/src/test/java/net/greypanther/guava/tests/tests/CustomToStringTest.java
It shows a HashMap subclass which overrides toString and as a result fails tests, even if CollectionFeature.NON_STANDARD_TOSTRING is passed to MapTestSuiteBuilder.
I believe that methods like https://github.com/google/guava/blob/2cd4d629a2b6f1a462643b248e0972f44c5133b7/guava-testlib/src/com/google/common/collect/testing/testers/MapToStringTester.java#L45 should be marked with `@CollectionFeature.Require(absent = NON_STANDARD_TOSTRING)`.
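For context, declaring the feature when building the suite looks roughly like this (the generator variable is a placeholder, not taken from the linked demo):
```java
import com.google.common.collect.testing.MapTestSuiteBuilder;
import com.google.common.collect.testing.features.CollectionFeature;
import com.google.common.collect.testing.features.CollectionSize;
import junit.framework.TestSuite;

TestSuite suite = MapTestSuiteBuilder.using(customToStringMapGenerator)
    .named("HashMap subclass with custom toString")
    .withFeatures(CollectionSize.ANY,
                  CollectionFeature.NON_STANDARD_TOSTRING)
    .createTestSuite();
```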
Answers:
username_1: Agreed. Thanks for the report. I will take care of this.
I briefly wondered if maybe we should have a separate `MapFeature.NON_STANDARD_TOSTRING`. But if anything, we should probably try to eliminate the cases in which we duplicate a property between `MapFeature` and `CollectionFeature` (internal bug 6238930).
Status: Issue closed
|
IlyaUmanets/action_cable_crud | 384951509 | Title: test
Question:
username_0: Subject line
Current behavior:
* Describe how the bug manifests.
Expected behavior:
* Describe what the behavior would be without the bug.
Steps to reproduce:
* A minimum working example |
mikepenz/MaterialDrawer | 79039361 | Title: How to update the description of PrimaryDrawerItem?
Question:
username_0: I'm trying to update the description of a PrimaryDrawerItem, but I haven't succeeded.
Can anybody give me a hand?
Answers:
username_1: @username_0 if you want to set it, use `withDescription()`
if you want to update an existing item, define an identifier for the item and just change the description of it at a later time. There is a method on the `result` object which allows you to update a DrawerItem, roughly as in the sketch below.
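A minimal sketch (identifier and texts are arbitrary; `result` is the Drawer returned by the builder):
```java
PrimaryDrawerItem item = new PrimaryDrawerItem()
        .withIdentifier(1)
        .withName("Home")
        .withDescription("initial description");
// later, through the Drawer instance returned by the builder:
item.withDescription("updated description");
result.updateItem(item);
```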
username_0: Thanks! I'm going to try it!
Status: Issue closed
username_1: @username_0 great. just let me know if there are more questions. Or head over to gitter |
digidem/mapeo-settings-builder | 705876587 | Title: filetype .mapeosetting is not recognizable for non english speakers
Question:
username_0: While testing, recording, and designing all things related to the importer, I found it cumbersome to identify ".mapeosettings" as a configuration file and to explain concepts to train new users on this feature. It is also inconsistent with the UX prompts on both Mm & Md.
I think the filetype name should be .mapeoconfig
This solves 3 problems:
1. the concept of one project having a singular configuration package, rather than pluralizing it and causing confusion about the quality
2. the import config button gives a clue about the filetype the user needs to find as they browse their device
3. "mapeo-config" can be removed as a prefix from the config file names making full name and version number easier to see on a mobile device.

Answers:
username_0: Bumping this as nomenclature affects materials we are now doing major work on.
We want to be using relevant terms. Also, considering that for partners with .mapeosettings files it might be a while before they get new configs, the ability for Mapeo to load the packages (all the same content) with both file extensions will be important.
username_1: Tech thoughts:
- Good: Implementing this should be easy!
- Bad: We'll need to maintain backwards compatibility, which means multiple codepaths to be supported, and ensuring we don't accidentally break things for older `.mapeosettings` files. So we'll need to proceed carefully.
Logistics thoughts:
- Has someone taken on this work?
- How do issues like these normally get onto the roadmap for a release?
username_2: Link to : https://github.com/digidem/mapeo-desktop/issues/464
username_2: On hold. To add as a broader convo for the config builder. |
runelite/runelite | 490145071 | Title: Menu swapper: pick-lock
Question:
username_0: Please allow the menu-swapper to have pick-lock as a left click option.

Answers:
username_1: Based on #7522, this won't be implemented.
Status: Issue closed
|
gitlab4j/gitlab4j-api | 500542145 | Title: Add API call to /api/v4/license
Question:
username_0: Hello,
First, thanks for the amazing work on this library. It's really useful.
I would like to use some of the standalone APIs of GitLab. I realized that some of them are not covered by the current version of `gitlab4j-api`. Maybe it's intended, and I'm OK with that. Nevertheless, I would like to know if it is on the roadmap to add a call to `/api/v4/license`.
I know it's not available on gitlab.com but I think it can be useful for users like me with a self-managed version of GitLab.
You can find some documentation here:
* https://docs.gitlab.com/ee/api/api_resources.html#standalone-resources
* https://docs.gitlab.com/ee/api/license.html
Thanks for your answer.
Answers:
username_1: @username_0
What you have uncovered is that GitLab has actually changed the API: this library currently has the ```LicensesApi```, but that has now been moved from ```/licenses``` to ```/templates/licenses```. What they have done is create a new API on the ```/license``` path, which is what you are asking for.
I will put both fixing the existing ```LicensesApi``` and adding the ```LicenseApi``` on the roadmap and will schedule both for the next release, which will be released no later than Sunday 10/6/2019.
username_0: @username_1 Thanks for your quick answer.
If you need some help to test your work, I am available.
username_1: @username_0
I will need some help to verify the license API that you are requesting, as I do not have a license for GitLab.
I will let you know when it is released.
username_1: @username_0 GitLab4J-API Release 4.12.9 has been completed.
If you could verify the ```LicenseApi``` functionality and close this issue I would appreciate it.
username_0: I will try the new release tomorrow.
Status: Issue closed
username_0: Sorry for the delay. It works as expected. Thanks again for your work. |
web-platform-tests/wpt.fyi | 601113007 | Title: Make it easier to see which failures have been triaged
Question:
username_0: Currently, no bug links are shown by default, but enabling "Show metadata Information on the wpt.fyi result page." in /flags shows a list at the end.
In a directory like [css/css-images/](https://wpt.fyi/results/css/css-images?label=master&label=experimental&aligned) it's not very easy to figure out which failures have already been triaged. I ended up triaging one test that I didn't know was already triaged in https://github.com/web-platform-tests/wpt-metadata/pull/160.
The UX of this might be tricky, it will easily become cluttered, but something like a bug icon next to the test name or in the individual cells would make it easier to tell what's wrong. Maybe the space that this occupies could also become the thing you click to link new information.
Answers:
username_1: This requires changes in the UI side and the server side.
On the server, every time a user triages a test, the metadata cache in webapp should be updated to reflect the up-to-date metadata information.
On the UI side, we plan to display triaged metadata next to passing/total numbers inside of cells in a directory, similar to reftest screenshot icons:
<img width="920" alt="Screen Shot 2020-05-01 at 12 30 21 PM" src="https://user-images.githubusercontent.com/8611520/80822033-da826780-8ba7-11ea-9e4f-14d7e3cbdf4e.png">
In the subtest view, the metadata triage information will be shown through a row, like the row `harness status`
username_1: The proposed change on the server side has been implemented through a [force cached update](https://github.com/web-platform-tests/wpt.fyi/blob/master/api/metadata_cache.go#L29)
Status: Issue closed
|
sendgrid/sendgrid-nodejs | 392022100 | Title: How to set up webhook for dynamic template emails
Question:
username_0: #### Issue Summary
We use SendGrid to email our customers, and we store events against all the emails sent to get insight into how our marketing and promotional emails are doing. But when we started using some of the dynamic template emails, we were unable to store events against them. Please let us know any API reference doc link or some useful resource which can help in setting up a webhook for events on dynamic template emails.
Thanks!
#### Technical details:
* sendgrid-nodejs Version: master
* Node.js Version: 6.x
Answers:
username_1: Hi @username_0 -- This doesn't sound right to me, I'd recommend reaching out to our Support team (support.sendgrid.com) to double-check this behavior.
Status: Issue closed
|
jhunt/shout | 350662291 | Title: Allow rules to be set/updated via API
Question:
username_0: It is unlikely that the rules used by Shout! will never be updated, and it seems heavy-handed to require a redeployment to update the rules. Instead, the proposal is:
On deployment, no rules are present and Shout! will ignore any posts (the equivalent of the pseudo-code `(for * (do nothing))`)
There would also be an endpoint `/rules` that would receive a POST containing the rules in order to update them. Optionally, a GET on the `/rules` endpoint would return the current rules.
This will require the #7 issue being resolved to ensure only authenticated users can update the rules.<issue_closed>
Status: Issue closed |
ManageIQ/manageiq | 153974228 | Title: Middleware Provider - Mouse hover over any entity in topology graph displays status as unknown
Question:
username_0: Details: When navigating to the Topology graph after adding a middleware provider, hovering over any of the entities in the topology graph displays the status as unknown.
Product & version: ManageIQ-Upstream latest

Repro Steps:
1) Add a middleware provider in ManageIQ
2) Navigate to Middleware->Topology
3) Hover over any of the entities in the topology graph (Ex: MiddlewareDeployment/MiddlewareServer)
4) It shows status as unknown for all the entities
Answers:
username_0: @miq-bot add_label providers/hawkular, bug
cc @username_1
username_1: @username_0 this is a missing feature and not a bug. statuses are not yet supported.
so either this should be converted into feature request, or the bug resolution would be for now to remove status from the tooltip.
cc @username_2
wdyt?
username_1: @miq-bot add_label topology
username_2: I'd say let's leave it as is. We need first have the hawkular server reliably deliver the availability + sub-states and then we can revisit this.
username_1: @username_2 ok. I'll convert this into a feature request, since it's not a bug.
@miq-bot rm_label bug
username_1: @miq-bot add_label enhancement
username_1: I'm closing this in favor of https://github.com/ManageIQ/manageiq/issues/9033
username_1: @miq-bot close_issue |
shastri9999/kyck | 200309716 | Title: [firefox] - selection of timeslot
Question:
username_0: - Meeting details alignment should be the same as the end of the accordion.
Screen Details
1) Firefox
2) 1366 X 768
<issue_closed>
Status: Issue closed |
janhkrueger/insulae-public | 223384323 | Title: Switch over the Twitter dispatch
Question:
username_0: _From @username_0 on February 2, 2017 0:12_
The previous Twitter dispatch still runs through Twitter's PHP API. An adaptation to C++ is necessary here.
_Copied from original issue: username_0/insulae-private#32_ |
sakuli/sakuli | 552722257 | Title: normaliseIdentifierString eliminating regex
Question:
username_0: As a software developer, I want to fix and test `normaliseIdentifierString` so that given regex are not eliminated and replaced with a wildcard regex.
Acceptance criteria:
- [ ] `normaliseIdentifierString` is not eliminating regex
- [ ] `normaliseIdentifierString` is unit test covered (maybe a regex utils module makes sense)
Answers:
username_1: Further refinement (e.g. regex example) required
username_2: It looks like any regular expression within a string (e.g. `"/just an example/"`) returns an empty string.
The following test should fail but it doesn't:
```
(async () => {
    const testCase = new TestCase("sample test");
    try {
        await _navigateTo("https://sakuli.io");
        await _highlight(_link("/this does not exist/"));
        await testCase.endOfStep("example step");
    } catch (e) {
        await testCase.handleException(e);
    } finally {
        testCase.saveResult();
    }
})();
```
Status: Issue closed
|
github/docs | 761439549 | Title: Opening a throwaway issue to test a new workflow
Question:
username_0: This is a test of the new workflow notification system. This is only a test.
[annoying modem squawking noises]
This was a test of the new workflow notification system. If this was an actual issue, it would contain some relevant details, perhaps some links, and an image or two. This was only a test.
Status: Issue closed
Answers:
username_1: Thanks @username_0! It looks good! |
wingillis/init-scripts | 262274816 | Title: standardizing across scripts/users
Question:
username_0: Hey Win,
It might be helpful to standardize flags (i.e. -s is always shell) between common scripts like slorc and wurm.
Also, on O2 ${USER} returns the username, so for slorc we could have some $ECOMMONS_ID environment variable or something that we set so I don't need to modify the default user?<issue_closed>
Status: Issue closed |
bitcoin/bitcoin | 356264562 | Title: Clang: "anonymous structs are a GNU extension"
Question:
username_0: https://github.com/bitcoin/bitcoin/blob/68f3c7eb080e461cfeac37f8db7034fe507241d0/src/prevector.h#L152-L158
I'm opening an issue as its fix is too tiny to become a pull request.
Can be solved by giving the struct a name:
```cpp
struct indirect_contents {
    size_type capacity;
    char* indirect;
};
```<issue_closed>
Status: Issue closed |
element-plus/element-plus | 760051636 | Title: el-table can't reduce in display:flex
Question:
username_0: ```
<div style="display:flex">
<el-table
:data="tableData"
style="width: 100%">
<el-table-column
prop="date"
label="日期"
width="180"/>
<el-table-column
prop="name"
label="姓名"
width="180"/>
<el-table-column
prop="address"
label="地址"/>
</el-table>
</div>
```
The el-table can grow when the browser is widened, but it can't adapt when the browser is shrunk.
Answers:
username_1: @username_2 will follow this up
username_2: Duplicate of #922
Status: Issue closed
username_0: "element-plus": "^1.0.1-beta.20"
It hasn't been fixed.
It still can't adapt when the browser is shrunk.
Not fixed: under flex layout, horizontal shrinking still does not adapt.
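In the meantime, a common CSS workaround (my assumption, not an official fix) is to let the flex child shrink below its content size:
```css
/* flex items default to min-width: auto, which blocks shrinking */
.el-table {
  min-width: 0;
}
```
With that, the table follows the container when the browser narrows. |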
ReactiveX/IxJS | 254669009 | Title: Add Prettier support
Question:
username_0: <!--
Thank you for raising your concerns, we appreciate your feedback and contributions to this repository.
Before you continue, consider the following:
If you have a "How do I do ...?" question, it is better for you and for us that this question is placed in [StackOverflow](http://stackoverflow.com/questions/tagged/ixjs) or some chat channel. This way, you are making it easier for others to learn from your experiences too.
These "Issues" are meant only for technical problems, bugs, and proposals related to the library.
If your issue is a bug, please follow the format below:
-->
**IxJS version:** 2.x
**Code to reproduce:**
**Expected behavior:**
**Actual behavior:**
**Additional information:**
Add [prettier](https://github.com/prettier/prettier) support for nicer code formatting.<issue_closed>
Status: Issue closed |
UkrSoft/choose-operator | 98086716 | Title: Put our project documents on wiki page
Question:
username_0: If the document is a text/doc, copy the text from it to the wiki page.
If the document is any other type, provide a link and a short description of what is in the document.
Answers:
username_0: Done yesterday. See: https://github.com/UkrSoft/choose-operator/wiki
Status: Issue closed
|
jlippold/tweakCompatible | 443147187 | Title: `BetterSettings` working on iOS 12.0.1
Question:
username_0: ```
{
"packageId": "com.midnightchips.bettersettings",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.midnightchips.bettersettings",
"deviceId": "iPhone10,6",
"url": "http://cydia.saurik.com/package/com.midnightchips.bettersettings/",
"iOSVersion": "12.0.1",
"packageVersionIndexed": false,
"packageName": "BetterSettings",
"category": "Tweaks",
"repository": "Dynastic Repo",
"name": "BetterSettings",
"installed": "0.1.3",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.midnightchips.bettersettings",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Settings Your Way. â Complete customization of the settings app. Allows setting custom color, cell shape, background image, text color and more.",
"latest": "0.1.3",
"author": "MidnightChips",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
filestack/filestack-js | 544247178 | Title: Is ".pdf" really for accepting any filetype?
Question:
username_0: The docs (https://github.com/filestack/filestack-js/blob/6987bc6/src/lib/picker.ts#L361) say to use `".pdf"` for any file extension, but that doesn't seem right. I would have expected `*/*` for any (I guess it has to be omitted, based on https://github.com/filestack/filestack-js/issues/80), and `.pdf` for pdf files. Any clarity would be appreciated.
Answers:
username_1: Hi,
It's an example file extension. If you want to accept any type of file, just skip `accept`.
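A quick sketch of both cases (the API key is a placeholder):
```js
const client = filestack.init('YOUR_API_KEY');

client.picker({ accept: ['.pdf'] }).open(); // only PDFs are selectable
client.picker().open();                     // no accept -> any file type
```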
Status: Issue closed
|
musaffa/file_validators | 1010577329 | Title: No such file or directory - file
Question:
username_0: Hey, I'm trying to package `file_validators` for Debian. Currently I'm getting an error when running the tests. Can you help me figure out why this is happening and what should be done to fix it?
Here's my log
```
/usr/bin/ruby2.7 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation
[Coveralls] Set up the SimpleCov formatter.
[Coveralls] Using SimpleCov's default settings.
Combined File Validators integration with ActiveModel
without helpers
with an allowed type
is expected to be valid
with a disallowed type
invalidates jpeg image file having size bigger than the allowed size
invalidates png image file
invalidates text file
with helpers
with an allowed type
is expected to be valid
with a disallowed type
invalidates jpeg image file having size bigger than the allowed size
invalidates png image file
invalidates text file
File Content Type integration with ActiveModel
:allow option
a string
with an allowed type
is expected to be valid
with a disallowed type
is expected not to be valid
as a regex
with an allowed types
validates jpeg image file (FAILED - 1)
validates png image file (FAILED - 2)
with a disallowed type
is expected not to be valid (FAILED - 3)
as a list
with allowed types
validates jpeg (FAILED - 4)
validates text file (FAILED - 5)
with a disallowed type
is expected not to be valid (FAILED - 6)
as a proc
with allowed types
validates jpeg (FAILED - 7)
validates text file (FAILED - 8)
with a disallowed type
is expected not to be valid (FAILED - 9)
:exclude option
a string
with an allowed type
is expected to be valid (FAILED - 10)
with a disallowed type
is expected not to be valid (FAILED - 11)
as a regex
with an allowed type
is expected to be valid (FAILED - 12)
with a disallowed types
invalidates jpeg image file (FAILED - 13)
invalidates png image file (FAILED - 14)
[Truncated]
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:147 # File Content Type integration with ActiveModel :exclude option a string with a disallowed type is expected not to be valid
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:163 # File Content Type integration with ActiveModel :exclude option as a regex with an allowed type is expected to be valid
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:167 # File Content Type integration with ActiveModel :exclude option as a regex with a disallowed types invalidates jpeg image file
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:172 # File Content Type integration with ActiveModel :exclude option as a regex with a disallowed types invalidates png image file
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:192 # File Content Type integration with ActiveModel :exclude option as a list with an allowed type is expected to be valid
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:196 # File Content Type integration with ActiveModel :exclude option as a list with a disallowed types invalidates jpeg
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:201 # File Content Type integration with ActiveModel :exclude option as a list with a disallowed types invalidates text file
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:221 # File Content Type integration with ActiveModel :exclude option as a proc with an allowed type is expected to be valid
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:225 # File Content Type integration with ActiveModel :exclude option as a proc with a disallowed types invalidates jpeg image file
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:246 # File Content Type integration with ActiveModel :allow and :exclude combined with an allowed type is expected to be valid
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:251 # File Content Type integration with ActiveModel :allow and :exclude combined with a disallowed type is expected not to be valid
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:292 # File Content Type integration with ActiveModel :mode option strict mode with valid file validates the file
rspec ./spec/integration/file_content_type_validation_integration_spec.rb:299 # File Content Type integration with ActiveModel :mode option strict mode with spoofed file invalidates the file
rspec ./spec/lib/file_validators/mime_type_analyzer_spec.rb:22 # FileValidators::MimeTypeAnalyzer :file analyzer determines MIME type from file contents
rspec ./spec/lib/file_validators/mime_type_analyzer_spec.rb:26 # FileValidators::MimeTypeAnalyzer :file analyzer returns text/plain for unidentified MIME types
rspec ./spec/lib/file_validators/mime_type_analyzer_spec.rb:30 # FileValidators::MimeTypeAnalyzer :file analyzer is able to determine MIME type for spoofed files
rspec ./spec/lib/file_validators/mime_type_analyzer_spec.rb:34 # FileValidators::MimeTypeAnalyzer :file analyzer is able to determine MIME type for non-files
[Coveralls] Outside the CI environment, not sending data.
Stopped processing SimpleCov as a previous error not related to SimpleCov has been detected
```
Answers:
username_0: Fixed it! The problem was that I didn't have the `file` utility installed (`apt-get install file` on Debian). Closing this since it has been solved
Thanks !
Status: Issue closed
|
fobiasmog/fancybox3 | 311153711 | Title: on load popup
Question:
username_0: I want to open the popup on window load.
Here is my code:
<a data-fancybox data-src="#dummy" href="javascript:;"><button> Open </button></a>
<div class="container-fluid form" style="display:none;" id="dummy">
data
</div>
$(document).ready(function () {
$("#dummy").fancybo {
'overlayShow': true
}).trigger('click');
}); |
dreamfony/dv | 283180007 | Title: As PO I want to have Idea Inbox column, so I can manage ideas and move them to backlog when they get a proper form.
Question:
username_0: As an IP, I want up/down voting for the ideas, so they can get moved to the backlog.
Answers:
username_0: Waffle reactions issue
https://github.com/waffleio/waffle.io/issues/2454
username_0: For now we can only vote via GitHub until the issue mentioned above is resolved; please go and upvote that issue.
angular/angular | 119534921 | Title: Improve Async Pipe's support for Rx
Question:
username_0: 1. We use this check to determine if an object is an observable `static isObservable(obs: any): boolean { return obs instanceof RxObservable; }`. We should change it to `static isObservable(obs: any): boolean { return !!obs.subscribe; }`. The reason is that async pipe does not depend on anything Rx specific and can be used for all observables that implement the same protocol. For instance, it can be used for a different version of Rx.
2.
- Create a sync BehaviorSubject
- Bind it to the template like this `subj|async`.
- You will see that instead of showing the first value, it will show null. To fix it, we need to change
```
if (isBlank(this._obj)) {
  if (isPresent(obj)) {
    this._subscribe(obj);
  }
  return null;
}
```
to
```
if (isBlank(this._obj)) {
  if (isPresent(obj)) {
    this._subscribe(obj);
  }
  return this._latestValue;
}
```
Answers:
username_1: Looks like this was fixed by #5624
Status: Issue closed
|
facebook/react-native | 1061970539 | Title: An error occurred when updating the MainApplication getPackages() method on Android
Question:
username_0: ### Description
in android/app/src/java/xxx/MainApplication.java
I want to use my own code-push-server, so I changed
```
// @SuppressWarnings("UnnecessaryLocalVariable")
// List<ReactPackage> packages = new PackageList(this).getPackages();
// // Packages that cannot be autolinked yet can be added manually here, for example:
// // packages.add(new MyReactNativePackage());
// return packages;
```
to
```
return Arrays.<ReactPackage>asList(
new MainReactPackage(),
new CodePush(
"key",
MainApplication.this,
BuildConfig.DEBUG,
"site"
)
);
```
it can't run and reports the error
```
E/ReactNativeJS: Error: [@RNC/AsyncStorage]: NativeModule: AsyncStorage is null.
To fix this issue try these steps:
• Run `react-native link @react-native-async-storage/async-storage` in the project root.
• Rebuild and restart the app.
• Run the packager with `--reset-cache` flag.
• If you are using CocoaPods on iOS, run `pod install` in the `ios` directory and then rebuild and re-run the app.
• If this happens while testing with Jest, check out docs how to integrate AsyncStorage with it: https://react-native-async-storage.github.io/async-storage/docs/advanced/jest
```
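Presumably the manual `Arrays.asList(...)` above drops every autolinked package, including AsyncStorage. A sketch that keeps autolinking and still adds a custom CodePush instance (constructor arguments copied from above):
```java
@Override
protected List<ReactPackage> getPackages() {
  List<ReactPackage> packages = new PackageList(this).getPackages();
  // keep the autolinked modules and append CodePush manually
  packages.add(new CodePush("key", MainApplication.this, BuildConfig.DEBUG, "site"));
  return packages;
}
```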
### Version
0.66.0
### Output of `react-native info`
System:
OS: macOS 11.6
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 24.78 MB / 16.00 GB
Shell: 3.2.57 - /bin/bash
Binaries:
Node: 14.13.0 - ~/.nvm/versions/node/v14.13.0/bin/node
Yarn: 1.22.11 - ~/.nvm/versions/node/v14.13.0/bin/yarn
npm: 6.14.8 - ~/.nvm/versions/node/v14.13.0/bin/npm
Watchman: Not Found
Managers:
[Truncated]
IDEs:
Android Studio: 4.1 AI-201.8743.12.41.7042882
Xcode: 13.1/13A1030d - /usr/bin/xcodebuild
Languages:
Java: javac 15 - /usr/bin/javac
npmPackages:
@react-native-community/cli: Not Found
react: 17.0.2 => 17.0.2
react-native: 0.66.0 => 0.66.0
react-native-macos: Not Found
npmGlobalPackages:
*react-native*: Not Found
### Steps to reproduce
When I change the getPackages() method in MainApplication, it shows the error above.
### Snack, code example, screenshot, or link to a repository
_No response_<issue_closed>
Status: Issue closed |
tidyverse/readr | 269368345 | Title: readr::read_csv issue: Chinese Character becomes messy codes
Question:
username_0: I'm trying to import a dataset into RStudio; however, I am stuck with Chinese characters, as they become garbled. Here is the code:
```
library(tidyverse)
df <- read_csv("中文,英文\n英文,德文")
df
# A tibble: 1 x 2
`\xd6\xd0\xce\xc4` `Ӣ\xce\xc4`
<chr> <chr>
1 "<U+04E2>\xce\xc4" "<U+00B5>\xc2\xce\xc4"
```
When I use the base function read.csv, it works well. I guess I must be doing something wrong with encoding. But there is no encoding option in read_csv; how can I do this?
Answers:
username_1: I answered on SO: https://stackoverflow.com/a/46999569/5397672
BTW, I wonder why we have to explicitly specify the encoding here. `file` argument should be translated into UTF-8 before passing it to C++.
username_2: [1] "中文,英文\n英文,德文"
And it seems that readr::read_csv cannot identify characters encoded as GB2312 automatically, while base::read.csv can.
username_3: readr assumes character input is UTF-8 unless you specify a different encoding explicitly. This is done on purpose to ensure code is reproducible across systems.
@username_2 Thank you for tracking down what the encoding of the characters actually was, I think this can be closed.
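For anyone landing here, the explicit form is roughly this (a sketch, assuming the file is GB2312-encoded as username_2 found):
```r
library(readr)
df <- read_csv("your_file.csv", locale = locale(encoding = "GB2312"))
```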
Status: Issue closed
username_1: Both `read.csv()` and `read_csv()` don't **identify** the encoding from the characters at all. Instead, they just **assume** the encoding (`read.csv()` bets on the system's default encoding and `read_csv()` bets on UTF-8) and, in this case, `read.csv()` won the bet.
However, while assuming and praying can be a good strategy for external resources, for character vectors that are already in R's memory, I think we can **identify** their encoding by using the `Encoding()` attribute. This is the basic idea of my PR #730.
username_1: I agree with this strategy, but I think such code as `read_csv("中文,英文\n英文,德文")` should be reproducible without specifying the encoding, I mean, should succeed not only on Linux and macOS but also on non-UTF-8 Windows. Please consider encoding the characters as UTF-8 before converting it to a `datasource()` (#730). |
dart-lang/sdk | 100823041 | Title: Analyzer: When multiple imports provide the same symbol, one import should be marked as "unused"
Question:
username_0: e.g. In Angular, the ```package:angular/angular.dart``` file exports ```package:di/di.dart```.
In the following file, the ```Module``` symbol is coming from di.dart, but also exported through angular.
```dart
import 'package:angular/angular.dart';
import 'package:di/di.dart';

class MyModule extends Module { ... }
```
Currently, the analyzer does not give any hints about unused imports.
However, I would expect angular.dart to be flagged as "unused". angular.dart is not used since ```Module``` is also available through di.dart.
Answers:
username_1: Even a subset of this, examining just shown names would be useful. I found some code with:
```dart
import 'package:a/a.dart';
import 'package:a/src/foo.dart' show foo;
```
because at one point, `a.dart` did not export `foo`. But now it does, so the second import is unnecessary. Not sure if one is easier to implement or faster to run than the other...
username_1: I'll close this in favor of the issue I've been referencing when landing changes. https://github.com/dart-lang/sdk/issues/44569
Status: Issue closed
|
Clommunity/cDistro | 144981828 | Title: Missing package: dialog
Question:
username_0: The dialog package needs to be added to a Cloudy installation to ensure web-based Debian package upgrades work.
Answers:
username_0: Doesn't seem to be needed in Debian Stretch?
username_0: Doesn't seem to be needed in Debian Jessie either...
Status: Issue closed
username_0: Updating a Cloudy device based on Debian Wheezy:
```
Reading package lists...
Building dependency tree...
Reading state information...
The following packages have been kept back:
python-reportbug reportbug
The following packages will be upgraded:
bind9-host db5.1-util dnsutils gnupg gpgv krb5-locales libbind9-80 libdb5.1
libdns88 libevent-2.0-5 libexpat1 libfreetype6 libgcrypt11 libgssapi-krb5-2
libgssrpc4 libidn11 libisc84 libisccc80 libisccfg82 libk5crypto3
libkadm5clnt-mit8 libkadm5srv-mit8 libkdb5-6 libkrb5-3 libkrb5support0
libldap-2.4-2 liblwres80 libsqlite3-0 libssl1.0.0 libtasn1-3 libtirpc1
libx11-6 libx11-data libx11-xcb1 libxcursor1 libxi6 libxml2 libxpm4 login
multiarch-support openssl passwd perl perl-base perl-modules procmail
python2.6-minimal rpcbind sensible-utils ssh tzdata vim-common vim-tiny wget
54 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
Need to get 28.6 MB of archives.
After this operation, 889 kB of additional disk space will be used.
Get:1 http://security.debian.org/ wheezy/updates/main login amd64 1:4.1.5.1-1+deb7u1 [984 kB]
Get:2 http://security.debian.org/ wheezy/updates/main perl amd64 5.14.2-21+deb7u5 [4434 kB]
Get:3 http://security.debian.org/ wheezy/updates/main perl-base amd64 5.14.2-21+deb7u5 [1534 kB]
Get:4 http://security.debian.org/ wheezy/updates/main perl-modules all 5.14.2-21+deb7u5 [3441 kB]
Get:5 http://security.debian.org/ wheezy/updates/main libdb5.1 amd64 5.1.29-5+deb7u1 [723 kB]
Get:6 http://security.debian.org/ wheezy/updates/main libgcrypt11 amd64 1.5.0-5+deb7u6 [301 kB]
Get:7 http://security.debian.org/ wheezy/updates/main libssl1.0.0 amd64 1.0.1t-1+deb7u3 [1281 kB]
Get:8 http://security.debian.org/ wheezy/updates/main libtasn1-3 amd64 2.13-2+deb7u5 [68.9 kB]
Get:9 http://security.debian.org/ wheezy/updates/main libevent-2.0-5 amd64 2.0.19-stable-3+deb7u2 [174 kB]
Get:10 http://security.debian.org/ wheezy/updates/main libk5crypto3 amd64 1.10.1+dfsg-5+deb7u9 [114 kB]
Get:11 http://security.debian.org/ wheezy/updates/main libgssapi-krb5-2 amd64 1.10.1+dfsg-5+deb7u9 [150 kB]
Get:12 http://security.debian.org/ wheezy/updates/main libkrb5-3 amd64 1.10.1+dfsg-5+deb7u9 [394 kB]
Get:13 http://security.debian.org/ wheezy/updates/main libkrb5support0 amd64 1.10.1+dfsg-5+deb7u9 [50.7 kB]
Get:14 http://security.debian.org/ wheezy/updates/main libgssrpc4 amd64 1.10.1+dfsg-5+deb7u9 [88.5 kB]
Get:15 http://security.debian.org/ wheezy/updates/main libidn11 amd64 1.25-2+deb7u3 [180 kB]
Get:16 http://security.debian.org/ wheezy/updates/main libkadm5clnt-mit8 amd64 1.10.1+dfsg-5+deb7u9 [69.1 kB]
Get:17 http://security.debian.org/ wheezy/updates/main libkdb5-6 amd64 1.10.1+dfsg-5+deb7u9 [68.1 kB]
Get:18 http://security.debian.org/ wheezy/updates/main libkadm5srv-mit8 amd64 1.10.1+dfsg-5+deb7u9 [85.8 kB]
Get:19 http://security.debian.org/ wheezy/updates/main libexpat1 amd64 2.1.0-1+deb7u5 [139 kB]
Get:20 http://security.debian.org/ wheezy/updates/main libfreetype6 amd64 2.4.9-1.1+deb7u7 [454 kB]
Get:21 http://security.debian.org/ wheezy/updates/main libldap-2.4-2 amd64 2.4.31-2+deb7u3 [244 kB]
Get:22 http://security.debian.org/ wheezy/updates/main libsqlite3-0 amd64 3.7.13-1+deb7u4 [454 kB]
Get:23 http://security.debian.org/ wheezy/updates/main libx11-data all 2:1.5.0-1+deb7u4 [189 kB]
Get:24 http://security.debian.org/ wheezy/updates/main libx11-6 amd64 2:1.5.0-1+deb7u4 [902 kB]
Get:25 http://security.debian.org/ wheezy/updates/main libx11-xcb1 amd64 2:1.5.0-1+deb7u4 [139 kB]
Get:26 http://security.debian.org/ wheezy/updates/main libxcursor1 amd64 1:1.1.13-1+deb7u2 [27.1 kB]
Get:27 http://security.debian.org/ wheezy/updates/main libxi6 amd64 2:1.6.1-1+deb7u3 [76.4 kB]
Get:28 http://security.debian.org/ wheezy/updates/main libxml2 amd64 2.8.0+dfsg1-7+wheezy12 [907 kB]
Get:29 http://security.debian.org/ wheezy/updates/main libxpm4 amd64 1:3.5.10-1+deb7u1 [50.4 kB]
Get:30 http://security.debian.org/ wheezy/updates/main libtirpc1 amd64 0.2.2-5+deb7u1 [88.0 kB]
Get:31 http://security.debian.org/ wheezy/updates/main gpgv amd64 1.4.12-7+deb7u9 [229 kB]
Get:32 http://security.debian.org/ wheezy/updates/main gnupg amd64 1.4.12-7+deb7u9 [1954 kB]
Get:33 http://security.debian.org/ wheezy/updates/main passwd amd64 1:4.1.5.1-1+deb7u1 [1262 kB]
Get:34 http://security.debian.org/ wheezy/updates/main sensible-utils all 0.0.7+deb7u1 [9000 B]
Get:35 http://security.debian.org/ wheezy/updates/main tzdata all 2017c-0+deb7u1 [493 kB]
Get:36 http://security.debian.org/ wheezy/updates/main multiarch-support amd64 2.13-38+deb7u12 [152 kB]
Get:37 http://security.debian.org/ wheezy/updates/main vim-tiny amd64 2:7.3.547-7+deb7u4 [355 kB]
Get:38 http://security.debian.org/ wheezy/updates/main vim-common amd64 2:7.3.547-7+deb7u4 [163 kB]
Get:39 http://security.debian.org/ wheezy/updates/main wget amd64 1.13.4-3+deb7u5 [770 kB]
Get:40 http://security.debian.org/ wheezy/updates/main bind9-host amd64 1:9.8.4.dfsg.P1-6+nmu2+deb7u19 [75.2 kB]
[Truncated]
Setting up wget (1.13.4-3+deb7u5) ...
Setting up libisc84 (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up libdns88 (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up libisccc80 (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up libisccfg82 (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up libbind9-80 (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up liblwres80 (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up bind9-host (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up dnsutils (1:9.8.4.dfsg.P1-6+nmu2+deb7u19) ...
Setting up krb5-locales (1.10.1+dfsg-5+deb7u9) ...
Setting up procmail (3.22-20+deb7u2) ...
Setting up db5.1-util (5.1.29-5+deb7u1) ...
Setting up openssl (1.0.1t-1+deb7u3) ...
Setting up python2.6-minimal (2.6.8-1.1+deb7u1) ...
Setting up rpcbind (0.2.0-8+deb7u2) ...
Starting rpcbind daemon....
Setting up ssh (1:6.0p1-4+deb7u7) ...
Setting up perl-modules (5.14.2-21+deb7u5) ...
Setting up perl (5.14.2-21+deb7u5) ...
``` |
martinkasa/capacitor-secure-storage-plugin | 613923371 | Title: import com.whitestein.securestorage.SecureStoragePlugin; not found on android Studio
Question:
username_0: I added the plugin to my Ionic 4 Capacitor app and registered it in the MainActivity, but the import fails because the package is not found:
import com.whitestein.securestorage.SecureStoragePlugin;
Answers:
username_1: @username_0 did you run:
```
npx cap sync android
```
which links plugin code to android project after npm install?
username_0: ✔ Copying web assets from www to android/app/src/main/assets/public in 485.74ms
✔ Copying native bridge in 461.07μs
✔ Copying capacitor.config.json in 472.53μs
✔ copy in 574.21ms
✔ Updating Android plugins in 6.78ms
Found 1 Capacitor plugin for android:
capacitor-secure-storage-plugin (0.4.0)
✔ update android in 43.77ms
Android Studio says the package is not found, but the app builds and installs correctly.
The other strange thing is that when I call secureStorage.set(key, value) I get an exception. I'm sure the data in the set call is not null, so could it be related to the import problem?
username_1: @username_0
your call
```
SecureStoragePlugin.set("test","value")
```
correct call
```
SecureStoragePlugin.set({ key, value })
```
There are missing curly braces. The parameter to the set method is a single object with key and value properties.
username_0: OK, thanks, that fixed the problem with set and get.
But I don't know why Android Studio doesn't find the package.
I ran the sync; I have no idea.
Status: Issue closed
username_1: Maybe just some cache problem. I am closing this issue. |
NervJS/taro | 1171975898 | Title: Taro 3.4.3 with Preact causes the Map component's onRegionChange event to fail and report an error
Question:
username_0: <!-- 请不要删除自动生成的 Issue 标签 -->
<!-- 请不要删除自动生成的 Issue 标签 -->
### Affected platform
WeChat Mini Program
**Mini Program base library: 2.23.0**
**Framework: React**
### Steps to reproduce
```
import { Component } from 'react'
import Taro from '@tarojs/taro';
import { View, Text, Map } from '@tarojs/components'
import { AtButton } from 'taro-ui'
import "taro-ui/dist/style/components/button.scss" // 按需引入
import './index.scss'
interface IProps {
}
interface IState {
latitude: number;
longitude: number;
}
export default class Index extends Component <IProps, IState> {
constructor(props) {
super(props);
this.state = {
latitude: 39.72684,
longitude: 116.34159
};
}
componentWillMount () { }
componentDidMount () { }
componentWillUnmount () { }
componentDidShow () { }
componentDidHide () { }
onRegionChange = (e) => {
console.log(e)
}
render () {
const { latitude, longitude } = this.state;
return (
<View className='index'>
<Map
id='myMap'
scale={16}
[Truncated]
System:
OS: macOS 10.15.7
Shell: 5.7.1 - /bin/zsh
Binaries:
Node: 14.10.0 - ~/.nvm/versions/node/v14.10.0/bin/node
Yarn: 1.22.17 - ~/.nvm/versions/node/v14.10.0/bin/yarn
npm: 6.14.8 - ~/.nvm/versions/node/v14.10.0/bin/npm
npmPackages:
@tarojs/cli: 3.4.3 => 3.4.3
@tarojs/components: 3.4.3 => 3.4.3
@tarojs/mini-runner: 3.4.3 => 3.4.3
@tarojs/runtime: 3.4.3 => 3.4.3
@tarojs/taro: 3.4.3 => 3.4.3
@tarojs/webpack-runner: 3.4.3 => 3.4.3
babel-preset-taro: 3.4.3 => 3.4.3
eslint-config-taro: 3.4.3 => 3.4.3
taro-ui: ^3.0.0-alpha.3 => 3.0.0-alpha.10
```
<!-- generated by taro-issues. Please do not modify or delete this comment line --><!--labels=T-weapp,V-3,F-react-->
morten1982/crossviper | 286043847 | Title: Does not run in macOS (Python 3.6) - Problems with PNG images
Question:
username_0: It seems that, at least on some platforms, tkinter does not take PNG images. The last time I had to use an image file in tkinter, I had to convert it to GIF first.
```
$ python3 /Users/victor/Downloads/crossviper-master/crossviper.py
Traceback (most recent call last):
File "/Users/victor/Downloads/crossviper-master/crossviper.py", line 1691, in <module>
app = CrossViper(master=None)
File "/Users/victor/Downloads/crossviper-master/crossviper.py", line 1562, in __init__
self.initUI()
File "/Users/victor/Downloads/crossviper-master/crossviper.py", line 1658, in initUI
self.rightPanel = RightPanel(self.panedWindow)
File "/Users/victor/Downloads/crossviper-master/crossviper.py", line 658, in __init__
self.initUI()
File "/Users/victor/Downloads/crossviper-master/crossviper.py", line 718, in initUI
newIcon = tk.PhotoImage(file=self.dir + 'images/new.png')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/tkinter/__init__.py", line 3539, in __init__
Image.__init__(self, 'photo', name, cnf, master, **kw)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/tkinter/__init__.py", line 3495, in __init__
self.tk.call(('image', 'create', imgtype, name,) + options)
_tkinter.TclError: couldn't recognize data in image file "/Users/victor/Downloads/crossviper-master/images/new.png"
```
This kind of issue seems to have been explained here:
https://stackoverflow.com/questions/27599311/tkinter-photoimage-doesnt-not-support-png-image
Answers:
username_1: Do you use Tcl/Tk 8.6?
I thought Tk 8.6 knows how to handle .png... Tk 8.5 doesn't.
username_0: On Mac OS X, the officially recommended Tcl/Tk version for Python 3.6 is ActiveTcl 8.5.18.0. That page clearly states that 8.6 is not supported on this platform, so 8.6 currently is not cross-platform.
https://www.python.org/download/mac/tcltk/ |
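For what it's worth, a workaround that sidesteps the Tcl/Tk version question entirely is to let Pillow decode the images. A minimal sketch, assuming Pillow is installed and using `images/new.png` as a stand-in for any of the icon files:
```python
import tkinter as tk
from PIL import Image, ImageTk  # pip install Pillow

root = tk.Tk()
# Pillow decodes the PNG itself, so this works even on Tk 8.5,
# where tk.PhotoImage cannot read PNG files.
new_icon = ImageTk.PhotoImage(Image.open("images/new.png"))
tk.Button(root, image=new_icon).pack()
root.mainloop()
```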
coredns/coredns | 222880758 | Title: Runtime re-ordering of middleware
Question:
username_0: Currently middleware are chained together according to the build-time order defined in middleware.cfg. It would be nice to be able to re-order those for more flexibility when using `fallthrough`.
At a minimum, we should at least have a flag to show users the order. The -plugins flag outputs them in alphabetical order.
Answers:
username_1: /cc @username_2
username_2: I'm not sure about the DNS server, but if the HTTP server runs the HTTP middleware in an arbitrary order, it breaks things really bad. Why do you want to reorder them? (I'm not sure what 'fallthrough' is in this sense.)
username_0: Many of ours need to be in at least roughly the same order. But we have a subset of middleware that serve as backends to different data sources. Those would be nice to be able to reorder.
username_1: I think I agree with that statement; i.e. runtime reordering can lead to really
weird things.
`fallthrough` is when we handle a query in middleware X, then discover we don't
have a good answer for it. We then don't return an error but fallthrough through
to the next middleware.
username_0: Maybe we can limit the set of middleware that can be re-ordered to just the backends. Needs more thought.
username_3: Wouldn't "fallthrough to X" as an option work. So you basically jump over middleware. The only thing needed then would be that the data source used by multiple middlewares is one of the last ones called.
Just an idea, haven't looked on how to implement this.
username_1: I'm not convinced yet that this is a good idea. Mostly worried about weird
interactions and hard to debug problems. Also makes `middleware.cfg` part of the
runtime configuration which is weird (at this point in time).
username_1: Please re-open if we have an actual plan and agree that this is worth doing.
Status: Issue closed
|
YACS-RCOS/yacs.n | 844826324 | Title: Bug — 401 Error when fetching course data
Question:
username_0: **Describe the bug**
Sometimes, the server would output a `401` error when fetching the course data for some reason.
**To Reproduce**
1. Go to the site
2. Console should say a `401` error
**Expected behavior**
Should either fetch the data from the server or load data from the cache.
**Screenshots**

**Additional context**
Suspecting it's something about NGINX.<issue_closed>
Status: Issue closed |
libretro-mirrors/libretro-arb | 1090150196 | Title: Run multiple instances of the same game
Question:
Answers:
username_1: Would be pretty nice to pass a void* or similar across all calls so that we wouldn't have to be so dependent on globals or singletons. 🙏
username_1: @username_0 The best option for this right now is to run another instance of the core and game in a different thread.
username_2: I think this should be generalized to also allow different games and even cores. Possible use cases:
- games with connectivity support (eg. [Zelda Oracles](https://zelda.fandom.com/wiki/Linked_Game))
- [GBA-GC linking](https://en.wikipedia.org/wiki/GameCube_%E2%80%93_Game_Boy_Advance_link_cable)
- opening an image file with the builtin viewer without ending the current emulation core.
username_0: +1 for the connectivity "emulation" and link "emulation".
Not sure I understand the 3rd bullet point.
username_2: This would allow checking manual scans and maps directly in RA without terminating the current content.
Btw, I guess this is more of a frontend-related issue than a libretro API one.
ecadlabs/taquito | 916733781 | Title: Estimation of batch operation broken by Granada
Question:
username_0: **Description**
When Taquito does an estimation, it sets the `gasLimit` of each operation to the `hard_gas_limit_per_operation` constant (which is 1 040 000) and does a simulation using the `helpers/scripts/run_operation` endpoint of the RPC to get the actual gas that the operation will consume.
Currently, if we estimate a batch of 150 transactions, the total `gasLimit` of the batch used in the simulation will be `156 000 000` (150*1 040 000) which is higher than the `hard_gas_limit_per_block` (which is 10 400 000 for florencenet and 5 200 000 for granadanet).
Having a total `gasLimit` higher than the `hard_gas_limit_per_block` when doing the simulation leads to a `gas_exhausted.block` exception from the node for granadanet, potentially because of this change: https://gitlab.com/tezos/tezos/-/merge_requests/2880.
Thus, it will no longer be viable to use the `hard_gas_limit_per_operation` when estimating a batch with a large number of operations.
**Solution idea**
Instead of setting the `gasLimit` of each operation to the `hard_gas_limit_per_operation`, we can try setting it to `hard_gas_limit_per_block` / number of operations in the batch.<issue_closed>
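A rough sketch of that rule in plain Python (the constants are the values cited above; the function name is illustrative, not Taquito's API):
```python
HARD_GAS_LIMIT_PER_OPERATION = 1_040_000
HARD_GAS_LIMIT_PER_BLOCK = 5_200_000  # granadanet value cited above

def simulation_gas_limit(n_operations: int) -> int:
    # Cap each operation's simulated gasLimit so the batch total
    # never exceeds the per-block hard limit.
    return min(HARD_GAS_LIMIT_PER_OPERATION,
               HARD_GAS_LIMIT_PER_BLOCK // n_operations)

print(simulation_gas_limit(150))  # 34666, well under the per-operation cap
```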
Status: Issue closed |
priyamharsh14/SniffnDetect | 712545228 | Title: Add more attack detection algorithm (HACKTOBER 2020)
Question:
username_0: Right now, SniffnDetect can identify attacks like:
- SYN Flood Attack
- SYN-ACK Flood Attack
- ICMP Smurf Attack
- Ping of Death
We need to add some more algorithms to detect other attacks (and possible source of attack) |
pyecharts/pyecharts | 562988176 | Title: Sankey diagram is not displayed completely
Question:
username_0: **Problem**
The Sankey diagram is not displayed completely; the data may be too large, and roughly half of it looks cut off at the bottom.
If I set width/height in `init_opts`, nothing is displayed at all. Is there any way around this?
**Environment (OS and pyecharts version)**
Latest pyecharts
**Code and screenshot**
```python
sankey = (
    Sankey()
    .add("", nodes, links,
        linestyle_opt=opts.LineStyleOpts(opacity=0.2, curve=0.5, color="source"),
        label_opts=opts.LabelOpts(position="right"),
    )
)
```

Answers:
username_1: @username_0
* How did you set your `init_opts`?
username_0: All defaults; I didn't set anything.
username_1: @username_0
* I saw you said that when you set the width and height in `init_opts`, the chart doesn't display at all?
Status: Issue closed
username_1: @username_0
* You could try adjusting `node_gap`.
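For reference, a minimal sketch combining both suggestions, a taller canvas via `init_opts` plus a smaller `node_gap` (the `nodes`/`links` here are placeholders for the real data; if `init_opts` still renders blank for you, `node_gap` alone may already help):
```python
from pyecharts import options as opts
from pyecharts.charts import Sankey

# placeholder data; substitute the real nodes and links
nodes = [{"name": "a"}, {"name": "b"}, {"name": "c"}]
links = [
    {"source": "a", "target": "b", "value": 10},
    {"source": "b", "target": "c", "value": 5},
]

sankey = (
    Sankey(init_opts=opts.InitOpts(width="900px", height="1600px"))  # taller canvas
    .add(
        "",
        nodes,
        links,
        node_gap=4,  # pack the many nodes more tightly
        linestyle_opt=opts.LineStyleOpts(opacity=0.2, curve=0.5, color="source"),
        label_opts=opts.LabelOpts(position="right"),
    )
)
sankey.render("sankey.html")
```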
QubesOS/qubes-issues | 292014854 | Title: Windows qubes fail to connect to qrexec if set to autostart
Question:
username_0: #### Qubes OS version:
<!-- (e.g., `R3.2`)
You can get it from the dom0 terminal with the command
`cat /etc/qubes-release`
Type below this line. -->
3.2
#### Affected TemplateVMs:
<!-- (e.g., `fedora-23`, if applicable)
Type below this line. -->
windows-7
---
### Steps to reproduce the behavior:
<!-- Type below this line. -->
Create a Windows qube
Set the qube to autostart
Reboot physical machine
Observe qube state
### Expected behavior:
<!-- Type below this line. -->
Qube autostarts and connects to qrexec
### Actual behavior:
<!-- Type below this line. -->
Qube autostarts but fails to connect to qrexec
### General notes:
<!-- Type below this line. -->
qrexec_timeout is set to 120, but the issue also occurs at 60
---
#### Related issues:
<!-- Type below this line. -->
Answers:
username_1: Do you have any specific messages during VM startup (see `journalctl -u qubes-vm@NAME_OF_VM` in dom0), or just qrexec not connected later?
username_1: Can you also provide messages of a successful `qvm-start NAME_OF_VM`, including the approximate time?
username_0:
```
-- Logs begin at Sat 2017-12-09 00:48:59 CET, end at Fri 2018-01-26 23:06:06 CET. --
Jan 12 20:53:06 dom0 systemd[1]: Starting Start Qubes VM windows-tidal...
Jan 12 20:53:07 dom0 qvm-start[5080]: --> Starting NetVM sys-firewall...
Jan 12 20:53:07 dom0 qvm-start[5080]: --> Starting NetVM sys-net...
Jan 12 20:53:07 dom0 qvm-start[5080]: --> Creating volatile image: /var/lib/qubes/servicevms/sys-net/volatile.img...
Jan 12 20:53:07 dom0 qvm-start[5080]: /var/lib/qubes/servicevms/sys-net/volatile.img already exists, not overriding
Jan 12 20:53:07 dom0 systemd[1]: qubes-vm@<EMAIL>: Main process exited, code=exited, status=1/FAILURE
Jan 12 20:53:07 dom0 systemd[1]: Failed to start Start Qubes VM windows-tidal.
Jan 12 20:53:07 dom0 systemd[1]: [email protected]: Unit entered failed state.
Jan 12 20:53:07 dom0 systemd[1]: qubes-vm@windows-<EMAIL>.<EMAIL>: Failed with result 'exit-code'.
Jan 12 20:53:07 dom0 qvm-start[5080]: Traceback (most recent call last):
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/bin/qvm-start", line 136, in <module>
Jan 12 20:53:07 dom0 qvm-start[5080]: main()
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/bin/qvm-start", line 120, in main
Jan 12 20:53:07 dom0 qvm-start[5080]: xid = vm.start(verbose=options.verbose, preparing_dvm=options.preparing_dvm, start_guid=not options.noguid, notify_function=tray_notify_generic if options.tray else None)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/modules/01QubesHVm.py", line 335, in start
Jan 12 20:53:07 dom0 qvm-start[5080]: return super(QubesHVm, self).start(*args, **kwargs)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/modules/000QubesVm.py", line 1949, in start
Jan 12 20:53:07 dom0 qvm-start[5080]: self.netvm.start(verbose = verbose, start_guid = start_guid, notify_function = notify_function)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/modules/006QubesProxyVm.py", line 82, in start
Jan 12 20:53:07 dom0 qvm-start[5080]: retcode = super(QubesProxyVm, self).start(**kwargs)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/modules/005QubesNetVm.py", line 122, in start
Jan 12 20:53:07 dom0 qvm-start[5080]: xid=super(QubesNetVm, self).start(**kwargs)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/modules/000QubesVm.py", line 1949, in start
Jan 12 20:53:07 dom0 qvm-start[5080]: self.netvm.start(verbose = verbose, start_guid = start_guid, notify_function = notify_function)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/modules/005QubesNetVm.py", line 122, in start
Jan 12 20:53:07 dom0 qvm-start[5080]: xid=super(QubesNetVm, self).start(**kwargs)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/modules/000QubesVm.py", line 1951, in start
Jan 12 20:53:07 dom0 qvm-start[5080]: self.storage.prepare_for_vm_startup(verbose=verbose)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/storage/xen.py", line 210, in prepare_for_vm_startup
Jan 12 20:53:07 dom0 qvm-start[5080]: super(XenStorage, self).prepare_for_vm_startup(verbose=verbose)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/storage/__init__.py", line 256, in prepare_for_vm_startup
Jan 12 20:53:07 dom0 qvm-start[5080]: self.reset_volatile_storage(verbose=verbose)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/storage/xen.py", line 207, in reset_volatile_storage
Jan 12 20:53:07 dom0 qvm-start[5080]: verbose=verbose, source_template=source_template)
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/site-packages/qubes/storage/__init__.py", line 253, in reset_volatile_storage
Jan 12 20:53:07 dom0 qvm-start[5080]: self.volatile_img, str(self.root_img_size / 1024 / 1024)])
Jan 12 20:53:07 dom0 qvm-start[5080]: File "/usr/lib64/python2.7/subprocess.py", line 540, in check_call
Jan 12 20:53:07 dom0 qvm-start[5080]: raise CalledProcessError(retcode, cmd)
Jan 12 20:53:07 dom0 qvm-start[5080]: subprocess.CalledProcessError: Command '['/usr/lib/qubes/prepare-volatile-img.sh', '/var/lib/qubes/servicevms/sys-net/volatile.img', '10240']' returned non-zero exit status 1
-- Reboot --
Jan 12 20:59:45 dom0 systemd[1]: Starting Start Qubes VM windows-tidal...
Jan 12 20:59:46 dom0 qvm-start[3885]: --> Starting NetVM sys-firewall...
Jan 12 20:59:46 dom0 qvm-start[3885]: --> Starting NetVM sys-net...
Jan 12 20:59:46 dom0 qvm-start[3885]: --> Creating volatile image: /var/lib/qubes/servicevms/sys-net/volatile.img...
Jan 12 20:59:46 dom0 qvm-start[3885]: /var/lib/qubes/servicevms/sys-net/volatile.img already exists, not overriding
Jan 12 20:59:46 dom0 qvm-start[3885]: Traceback (most recent call last):
Jan 12 20:59:46 dom0 qvm-start[3885]: File "/usr/bin/qvm-start", line 136, in <module>
Jan 12 20:59:46 dom0 qvm-start[3885]: main()
Jan 12 20:59:46 dom0 qvm-start[3885]: File "/usr/bin/qvm-start", line 120, in main
Jan 12 20:59:46 dom0 qvm-start[3885]: xid = vm.start(verbose=options.verbose, preparing_dvm=options.preparing_dvm, start_guid=not options.noguid, notify_function=tray_notify_generic if options.tray else None)
Jan 12 20:59:46 dom0 qvm-start[3885]: File "/usr/lib64/python2.7/site-packages/qubes/modules/01QubesHVm.py", line 335, in start
Jan 12 20:59:46 dom0 qvm-start[3885]: return super(QubesHVm, self).start(*args, **kwargs)
Jan 12 20:59:46 dom0 qvm-start[3885]: File "/usr/lib64/python2.7/site-packages/qubes/modules/000QubesVm.py", line 1949, in start
Jan 12 20:59:46 dom0 qvm-start[3885]: self.netvm.start(verbose = verbose, start_guid = start_guid, notify_function = notify_function)
Jan 12 20:59:46 dom0 qvm-start[3885]: File "/usr/lib64/python2.7/site-packages/qubes/modules/006QubesProxyVm.py", line 82, in start
Jan 12 20:59:46 dom0 qvm-start[3885]: retcode = super(QubesProxyVm, self).start(**kwargs)
Jan 12 20:59:46 dom0 qvm-start[3885]: File "/usr/lib64/python2.7/site-packages/qubes/modules/005QubesNetVm.py", line 122, in start
Jan 12 20:59:46 dom0 qvm-start[3885]: xid=super(QubesNetVm, self).start(**kwargs)
Jan 12 20:59:46 dom0 qvm-start[3885]: File "/usr/lib64/python2.7/site-packages/qubes/modules/000QubesVm.py", line 1949, in start
[Truncated]
Jan 26 20:49:29 dom0 qvm-start[3191]: self.volatile_img, str(self.root_img_size / 1024 / 1024)])
Jan 26 20:49:29 dom0 qvm-start[3191]: File "/usr/lib64/python2.7/subprocess.py", line 540, in check_call
Jan 26 20:49:29 dom0 qvm-start[3191]: raise CalledProcessError(retcode, cmd)
Jan 26 20:49:29 dom0 qvm-start[3191]: subprocess.CalledProcessError: Command '['/usr/lib/qubes/prepare-volatile-img.sh', '/var/lib/qubes/servicevms/sys-net/volatile.img', '10240']' returned non-zero exit status 1
-- Reboot --
Jan 26 20:55:09 dom0 systemd[1]: Starting Start Qubes VM windows-tidal...
Jan 26 20:55:09 dom0 qvm-start[3799]: --> Creating volatile image: /var/lib/qubes/appvms/windows-tidal/volatile.img...
Jan 26 20:55:09 dom0 qvm-start[3799]: --> Loading the VM (type = HVM)...
Jan 26 20:55:41 dom0 qvm-start[3799]: --> Starting Qubes DB...
Jan 26 20:55:41 dom0 runuser[9001]: pam_unix(runuser:session): session opened for user username_0 by (uid=0)
Jan 26 20:55:41 dom0 runuser[9001]: pam_unix(runuser:session): session closed for user username_0
Jan 26 20:55:41 dom0 qvm-start[3799]: --> Setting Qubes DB info for the VM...
Jan 26 20:55:41 dom0 qvm-start[3799]: --> Updating firewall rules...
Jan 26 20:55:41 dom0 qvm-start[3799]: --> Starting the VM...
Jan 26 20:55:41 dom0 qvm-start[3799]: --> Starting the qrexec daemon...
Jan 26 20:55:41 dom0 runuser[9004]: pam_unix(runuser:session): session opened for user username_0 by (uid=0)
Jan 26 20:56:11 dom0 qvm-start[3799]: Waiting for VM's qrexec agent..............................connected
Jan 26 20:56:11 dom0 runuser[9004]: pam_unix(runuser:session): session closed for user username_0
Jan 26 20:56:11 dom0 qvm-start[3799]: --> Waiting for user 'user' login...
Jan 26 20:56:12 dom0 systemd[1]: Started Start Qubes VM windows-tidal.
```
username_1: The last one looks to be successful; did it work?
username_0: No, the last one is from a manual VM start.
username_2: This issue is being closed because:
- This issue has the "Release 3.2 updates" milestone.
- [Qubes OS 3.2 recently reached end-of-life (EOL).](https://www.qubes-os.org/news/2019/03/28/qubes-3-2-has-reached-eol/)
- This issue has not been updated in over a year.
If anyone believes that this issue should be reopened, please let us know in a comment here.
Status: Issue closed
|
primefaces/primevue | 871181666 | Title: Error using Dialog after upgrade to 3.4.0
Question:
username_0: <span>Hello</span>
</Dialog>
Unfortunately I can't replicate this in a plunkr, so I appreciate the bug may not be be in primevue, but the fact it appears on version upgrade seemed significant enough to report (the working plunkr with attempts to add the equivalent nesting in my own app, for reference/testing):
https://codesandbox.io/s/primevue-issue-template-forked-5zhgf
Please let me know if I can provide any other diagnostics or debugging to help with this!
* **Vue version:** 3.X
* **PrimeVue version:** 3.4.0
* **Browser:** Chrome 89.0.4389.114 | Firefox 88
Answers:
username_0: My fault - I was missing `import PrimeVue from 'primevue/config' and `app.use(PrimeVue)`
Status: Issue closed
|
wso2/k8s-api-operator | 594287948 | Title: operator-sdk generate openapi command fails due to deprecated fields
Question:
username_0: **Description:**
operator-sdk generate openapi command fails due to deprecated fields in the knative serving spec.
**Suggested Labels:**
**Suggested Assignees:**
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
Answers:
username_0: Fixed with [#223](https://github.com/wso2/k8s-api-operator/pull/223)
Status: Issue closed
|
Kungsgeten/org-brain | 323649777 | Title: Support window switch
Question:
username_0: With Spacemacs, when I am in the org-brain visualize window, I would like to use <Leader>0..9 to switch windows.
Answers:
username_1: `org-brain` is supported as part of the `org` layer, but it's true that the key mappings are not ideal/compatible. Essentially, when in `org-brain` mode, I need to remember to prefix everything global to `spacemacs` with `Alt+m`, to escape `evil-mode`, or I take a chance of messing things up in `org-brain` inadvertently.
username_0: Thanks a lot, I will try to add keys to org-brain-visualize-mode-map.
Status: Issue closed
|
mime-types/ruby-mime-types | 156160748 | Title: mime-types-data v3.2016.0521 regression
Question:
username_0: Hi!
Since updating to v3.2016.0521 of mime-types-data (note: this was not a manual upgrade; Bundler did it automatically), and after https://github.com/rails/rails/pull/25103/commits/d92ea22ed1b7ba782301718c67ace329182dbfca was committed, [this test](https://github.com/rails/rails/blob/master/actionpack/test/dispatch/mime_type_test.rb#L50-55) started failing [here](https://travis-ci.org/rails/rails/jobs/132085730#L555) on Travis. Any ideas? 😬
Answers:
username_1: So, this is something that `mime-types-data` isn't related to. `Mime::Type` (the type in the failing Rails test) is something internal to Rails that isn't the same as `MIME::Type` (the type of data that `mime-types` and `mime-types-data` deal with). There are some similarities, but `Mime::Type` has features that `MIME::Type` does not and vice versa, and the purpose is slightly different (I want to be able to take MIME::Type into the parsing direction that `Mime::Type` deals with).
However, this is *purely* a sorting issue:
```diff
--- /Users/austin/old 2016-05-22 16:53:39.000000000 -0400
+++ /Users/austin/new 2016-05-22 16:53:39.000000000 -0400
@@ -52,13 +52,6 @@
},
{
class: "Mime::Type:0xXXXXXX",
- @synonyms: ["text/x-json", "application/jsonrequest"],
- @symbol: :json,
- @string: "application/json",
- @hash: 1658625877319167784
- },
- {
- class: "Mime::Type:0xXXXXXX",
@synonyms: [],
@symbol: :pdf,
@string: "application/pdf",
@@ -77,5 +70,12 @@
@symbol: :gzip,
@string: "application/gzip",
@hash: 2378352920639632078
+ },
+ {
+ class: "Mime::Type:0xXXXXXX",
+ @synonyms: ["text/x-json", "application/jsonrequest"],
+ @symbol: :json,
+ @string: "application/json",
+ @hash: 1658625877319167784
}
]
```
(The diff here is semi-jsonification so that I could see the difference in a side-by-side diff.)
Put `Mime[:json]` at the end of the `expect` assignment, or change the parsing code so that `json` shows up earlier, and the test will be fixed.
Status: Issue closed
username_0: Ahh, gotcha. I saw that `mime-types-data` had been updated in the lockfile, and that this error appeared, so I thought that they could be related. Sorry for the bother 😬 |
tastyigniter/TastyIgniter | 739439312 | Title: Ensure at least one language option is enabled
Question:
username_0: **Expected behavior:**
There should be always a fallback language to ensure site will work.
**Actual behavior:**
If all the languages are disabled, the site crashes.
**Reproduce steps:**
Disable all languages save... refresh...
**Version:**
v.3.0.4-beta.23.2
**Additional Information:**

Answers:
username_1: PR here: https://github.com/tastyigniter/TastyIgniter/pull/580
Status: Issue closed
username_2: Fixed |
tendermint/tendermint | 1021130500 | Title: block commitment latency of tendermint
Question:
username_0: <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: poorly thought-out proposals may be rejected
v without deliberation
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Summary
<!-- Short, concise description of the proposed feature -->
Improve the metrics of Tendermint consensus.
## Problem Definition
<!-- Why do we need this feature?
What problems may be addressed by introducing this feature?
What benefits does Tendermint stand to gain by including this feature?
Are there any disadvantages of including this feature? -->
To better demonstrate the performance of Tendermint, we would prefer some measurement of the latency of each block commitment.
**Latency of block commitment** means the time interval from the creation of a block to its commitment.
Besides, we could measure the latency of transaction commitment, which means the time interval from when a node receives a transaction to when it is committed.
## Proposal
<!-- Detailed description of requirements of implementation -->
Create an instance `CurLatency` in the consensus metrics `internal/consensus/metrics.go`. Then, we could collect the block commitment latency when a block has been committed.
As for transactions, we could record the **wrapped latency**, which means the time interval from when a transaction is received to when it is wrapped into a block. The sum of **wrapped latency** and **block commitment latency** could be regarded as the **transaction commitment latency**.
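Written out with timestamps $t$, the decomposition above telescopes:

$$
\begin{aligned}
\text{wrapped latency} &= t_{\text{block created}} - t_{\text{tx received}} \\
\text{block commitment latency} &= t_{\text{block committed}} - t_{\text{block created}} \\
\text{transaction commitment latency} &= t_{\text{block committed}} - t_{\text{tx received}}
\end{aligned}
$$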
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
Answers:
username_1: Hey @username_0, thanks for opening this issue. Currently we just have `consensus_block_interval_seconds` (the time between blocks) which I think is quite a basic metric and could definitely be further elaborated on. Off the top of my head it would be interesting to measure:
- time to reach consensus. That is specifically the time between receiving the proposal block to reaching 2/3+ precommits
- block propagation time. The time between the start of a round and receiving all the block parts to reconstruct the proposed block.
I know @williambanfield has been interested in gathering more metrics (specifically on latencies from the application) so would be interested in his thoughts. AFAIK, **average wrapped latency** might be a bit more difficult, but I believe we track the time we receive each transaction, so perhaps it would be easy to measure the time between entering the mempool and when `Update` is called by consensus, which flushes out all the committed transactions.
soimort/translate-shell | 74633168 | Title: $TRANS_PROGRAM has grown too large for OS X
Question:
username_0:
```
133861
```
The problem appears to be along the lines of `MAX_ARG_STRLEN` (not sure about the OS X equivalent), which limits the max length of a single argument and was once 2^17 = 131072 on Linux, but apparently the kernel has moved on, since I couldn't reproduce the bug on Ubuntu 15.04. The field-tested limit seems to be between 133861 and 134262, which is a little bit mysterious since 131072 is smaller than both. (Did I forget some here-doc quoting rule? Probably.) At any rate, there's clearly a limit on the length of a single argument on OS X around 2^17, and `$TRANS_PROGRAM` is on the edge of exceeding it.
Answers:
username_1: Thanks for bringing this issue up - system specific limit is something I wasn't aware of before.
On the Linux kernel, the length limitation of a single argument **is indeed** `MAX_ARG_STRLEN` (**131072**), and this has never increased. You can test this easily with the following program:
```c
#include <stdio.h>
#include <unistd.h>
#define LEN 131071
int main()
{
char s[LEN + 1];
int i;
for (i = 0; i < LEN; i++)
s[i] = 'x';
s[LEN] = 0;
return execl("/usr/bin/expr", "expr", "length", s, (char *)NULL);
}
```
Change `LEN` to `131072`, and the `execl()` call will fail.
Just as you suspected, your estimate of `$TRANS_PROGRAM`'s length isn't quite right. The actual lengths are 122233 and 122634 respectively; neither exceeded the `MAX_ARG_STRLEN` limit on Linux.
```
$ git checkout 467a56
$ make &>/dev/null && . build/trans -V &> /dev/null && expr length "$TRANS_PROGRAM"
122634
$ git checkout HEAD~1
$ make &>/dev/null && . build/trans -V &> /dev/null && expr length "$TRANS_PROGRAM"
122233
```
While the Linux kernel does check the validity of each argument's length (https://github.com/torvalds/linux/blob/master/fs/exec.c#L477), the OS X kernel, XNU, like other BSD derivatives, does not perform such a check for each single argument, so there is no real equivalent to `MAX_ARG_STRLEN`. XNU, however, has a limitation on the **maximum total length of arguments**, which is
```c
#define ARG_MAX (256 * 1024) /* max bytes for an exec function */
```
as defined in `<syslimits.h>`.
It may seem a bit weird that the argument of length 122233 is fine but the one of length 122634 crashes, since both are far less than 262144 even if you count in all the other arguments ('`gawk`', '`-f`'). However, every OS has its quirks. I did some simple grepping in XNU's source; it seems this space is reserved not only for `argv[]`, but also for `envv[]` (http://www.opensource.apple.com/source/xnu/xnu-2782.1.97/bsd/kern/kern_exec.c , see L#3137-3330). I guess that's why we can't have a lengthier argument in its `execve()` call.
(Not familiar with BSD implementation, so far it's really just a guess. Any experts?)
I cannot tell at exactly what length `$TRANS_PROGRAM` could break on OS X; it is not solely about a single argument length, unlike Linux, as I explained above. Two straightforward solutions I can think of at the moment:
1. Abandon the lengthy argument wrapping; use shell built-in `print` to pipe the whole program to gawk. (I bet shell can take infinite arguments, as long as it's not a system call)
2. Reduce the size of the core program and keep it to 100KB or below.
username_0: Thanks for the clarification; I didn't expect env vars to be counted against `ARG_MAX`. Given that, I'd go for shell builtins (yeah, as far as I know they are not limited by `ARG_MAX`). Speaking of limiting the program size, it's still possible to exceed `ARG_MAX` if a user additionally exports some long env vars (albeit unlikely).
username_1: Due fix in [develop](https://github.com/username_1/translate-shell/tree/develop). Closing.
Status: Issue closed
|
dart-lang/sdk | 606448749 | Title: Make sure analyzer bots catch lints and implicit downcasts configured through analysis_options.yaml
Question:
username_0: Today I had to fix an implicit downcast error that I accidentally introduced (https://dart-review.googlesource.com/c/sdk/+/144780). For some reason, neither the trybots nor the post-commit bots caught this error. I haven't investigated deeply, but I suspect that the bots are running the analyzer in a way that somehow prevents it from noticing the `implicit_downcasts` setting in the `nnbd_migration` package's `analysis_options.yaml` file.
@bwilkerson has noticed a similar issue with lint failures--they are getting committed without breaking any bots. Since lints are configured using `analysis_options.yaml`, it's possible that this issue has the same root cause.
Answers:
username_1: The new `strict-casts: true` analysis option has landed; I can look into how much work it is to enable it for package:analyzer and package:nnbd_migration.
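For reference, a sketch of how the option is switched on in a package's `analysis_options.yaml` (assuming an SDK recent enough to know the option):
```yaml
analyzer:
  language:
    strict-casts: true
```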
gilwoong-kang/capstone2 | 743405143 | Title: Do not use dynamic verbs in static variable names
Question:
username_0: https://github.com/username_1/capstone2/blob/d61f7fe065a940c97f35aaa4ef91edd00a60234d/MoayoShare/src/main/java/com/moayo/server/controller/MainController.java#L31
Answers:
username_1: Do you mean it should be a noun rather than a verb..?
read -> reading
I will ask about this part again in the Zoom meeting.
username_0: Java is object-based, so classes are not distinguished by behavior.
I suspect the names end up being ambiguous because you are trying to distinguish classes by behavior.
ballerina-platform/ballerina-lang | 358456382 | Title: [SOAP] Add support for SOAP messages with attachment
Question:
username_0: SOAP connector should add support for attachments.
https://www.w3.org/TR/SOAP-attachments
Answers:
username_0: An issue already exists for this: https://github.com/ballerina-platform/ballerina-lang/issues/3415 , hence closing.
Status: Issue closed
|
Scalingo/cli | 312139664 | Title: Unauthorized message should be more explicit
Question:
username_0: "an error occured:<br>unauthorized - you are not authorized to do this operation"
"An error occured" should be removed.
It should be added: "You're currently logged in as XXX. Are you sure XXX is a collaborator of this app?"<issue_closed>
Status: Issue closed |
department-of-veterans-affairs/va.gov-team | 792312986 | Title: AWS Staging: Veteran user on VPN service for Community Care the find provider near my location will not work correctly
Question:
username_0: Logging this defect. If a veteran user uses a VPN service, the "find provider near my location" feature will not work correctly.
1. Log in as a hard-coded CC-eligible veteran.
2. Select Schedule Appt.
3. For Type of Care, select Podiatry.
4. Select a date/time appointment preference.
5. Under "Tell us your community care preferences", select +Choose a Provider.
6. Select "Or, use your current location".
7. Expected response: providers near my current location display (Denver/Boulder metro area).
8. Actual response: providers in Kentucky and Ohio display.
**From Triage with Jeff**:
Marcy to Jeff: When Chrome asks to allow the app to use my current location, the address that then populates location is this: 2701 South Belmont Street, Ashland, Kentucky 41102, United States. Why Kentucky? It is just bizarre.
<NAME> [9:44 AM]
Cool. That’s what I suspected. Since your computer probably doesn’t have GPS, I think the browser location feature tries to use IP address for location, and the IP address while on a VPN depends on the VPN server you’re connecting to.
[9:45 AM] Could be worth noting in the app. VPNs aren’t uncommon, especially for people looking at the site while at work. Lauren and Peter can decide if they want to add any messaging for it.
Answers:
username_1: @username_2 are we able to detect if the user is using a VPN? Trying to see if a potential message would need to be a blanket statement about potentially incorrect results or if we can target VPN users specifically.
username_2: @username_1 As far as I know, no, we can't directly detect if a user is on a VPN. It does look like the Geolocation API we use has an `accuracy` property, so we could do some testing to see if that is particularly low for someone on a VPN and/or desktop.
username_3: @username_2 Is this a problem with VA facilities too? If so, maybe we should look for helpdesk issues where users have reported that issue, or issues around facilities not showing up correctly to get a better sense of how widespread it is.
I'm just hesitating to add more messaging - the longer the message, the more likely all users are to skim over it.
Accuracy might be an interesting tool, if we could say our confidence level in user's location, and only trigger it if the confidence level is low. This would work best if we also showed the address we think they're at.
username_2: @username_3 Yeah, would be an issue anywhere where we use current location from the browser.
username_1: Created a spike ticket so we can do some testing before triaging this. #19282
username_1: @username_2 Based on the spike #19282, do we have a technical preference on how we make these improvements?
username_4: 
username_1: @username_4 Thanks for posting your testing results! Based on the results of these changes (especially the negative impact to Primary Care VA) and the technical recommendation in the PR not to pursue, I'm closing this ticket as won't fix.
If we choose to proceed with any visual warnings of accuracy concerns, we'll handle in another ticket.
cc @username_2 @narin @username_3
Status: Issue closed
|
sensu/sensu-go | 340842132 | Title: [Build stability] TestExecuteCheck
Question:
username_0: ```
--- FAIL: TestExecuteCheck (6.73s)
assertions.go:239:
Error Trace: check_handler_internal_test.go:137
Error: Not equal:
expected: 2
actual: 0
Messages: {"timestamp":1531440760,"entity":{"class":"agent","deregister":false,"deregistration":{},"environment":"default","id":"APPVYR-WIN","keepalive_timeout":120,"last_seen":1531440753,"organization":"default","subscriptions":null,"system":{"hostname":"APPVYR-WIN","os":"windows","platform":"Microsoft Windows Server 2012 R2 Datacenter","platform_family":"Server","platform_version":"6.3.9600 Build 9600","network":{"interfaces":[{"name":"Ethernet 8","mac":"00:15:5d:07:ef:df","addresses":["fe80::6dd2:1287:a208:840e/64","192.168.2.182/21"]},{"name":"Loopback Pseudo-Interface 1","addresses":["::1/128","127.0.0.1/8"]},{"name":"isatap.{0080E500-4205-442C-9729-FDB7CEB0507A}","mac":"0fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b","addresses":["fe80::5efe:c0a8:2b6/128"]}]},"arch":"amd64"},"user":"agent"},"check":{"command":"..\\bin\\tools\\cat.exe C:\\Users\\appveyor\\AppData\\Local\\Temp\\1\\metric677696619","environment":"default","handlers":[],"high_flap_threshold":0,"interval":60,"low_flap_threshold":0,"name":"check","organization":"default","publish":true,"runtime_assets":["ruby-2-4-2"],"subscriptions":["linux"],"proxy_entity_id":"","check_hooks":[{"non-zero":["hook1"]}],"stdin":true,"subdue":null,"ttl":0,"timeout":1,"round_robin":false,"duration":1.5272485,"executed":1531440758,"history":null,"issued":1531440753,"output":"Execution timed out\n","status":2,"total_state-change":0,"last_ok":0,"occurrences":0,"occurrences_watermark":0,"output_metric_format":"graphite_plaintext","output_metric_handlers":null,"env_vars":null},"metrics":{"handlers":null,"points":null}}
```
Answers:
username_0: Check output: "Execution timed out"
username_1: I believe there is already an issue for this check. It’s tied to the test utility binaries not being there.
username_0: @username_1 https://github.com/sensu/sensu-go/issues/1797? If the binaries are not there, wouldn't we get an error related to "not found" rather than `Execution timed out`?
username_1: Oh you’re right. I didn’t realize this was on AppVeyor. Welp hahaha. Let’s fix it.
username_2: /cc @preed FYI
Status: Issue closed
|
cosmos/cosmos-sdk | 306481146 | Title: App Should get ChainID (and full genesis) from InitChain
Question:
username_0: Right now the app doesn't get the ChainID until the first BeginBlock.
We need to change ABCI so it gets it on InitChain: https://github.com/tendermint/abci/issues/216
Closes https://github.com/cosmos/cosmos-sdk/issues/565
Answers:
username_0: Duplicate / Done!
Status: Issue closed
|
andrewchambers/bupstash | 1007118333 | Title: Can only run one `bupstash put` per user
Question:
username_0: `bupstash put` will fail with `bupstash put: database is locked` if it's already running as the same user, even if the data and repository are different.
Exporting a different HOME or cache path makes bupstash use a different `.cache/bupstash/bupstash.sendlog` and works around the problem.
Is this limitation intentional, or could one .sendlog database be created per process without issue?
PS: Thanks for the work on bupstash; it works much better than any other backup software I have tried. I'm finally able to back up data that was impossible before because of high resource consumption or really slow backup performance.
Answers:
username_1: Currently you can manually use --send-log to fine-tune this without messing with $HOME.
The main reason we only have one default sendlog per user is to prevent an uncontrolled buildup of files.
The concurrency limitation is mainly to simplify the implementation, as we only remember the previous send (again to put a limit on resource consumption).
username_1: I will improve the error message to explain this. |
friendly/matlib | 176782536 | Title: LU decomposition
Question:
username_0: ```
Ax = b
LUx = b
LUx - b = 0
L(Ux - d) = 0
```
where `Ld = b` and `d = Ux`. Then do a simple forward-solve for `d`, and finally a simple back-solve for `x` given `d`.
Any interest in this form of decomposition in the form of a function `LU()`? With all the ERO already defined it should be easy to do, and the solving of `d`/`x` could be made into a flag `solve = TRUE`. Otherwise, just the L and U matrices will be returned.
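Making the substitution explicit: with $d = Ux$, the single system splits into two triangular solves,

$$
Ax = b,\quad A = LU \;\Longrightarrow\; \underbrace{Ld = b}_{\text{forward-solve for } d}, \qquad \underbrace{Ux = d}_{\text{back-solve for } x}.
$$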
Answers:
username_1: Hi,
username_0: Thanks, John. I've added a working version to the repo.
username_2: that's great! I updated `NEWS.md` and added `@return` to LU.
@username_1 : Is it useful to add `verbose=` here?
username_1: Thanks for doing this.
username_0: @username_2 there are only nrow - 1 EROs in this approach, the rest is forward/backward substitutions. Could be weird to print because of the number of (y - dot product)/x operations, but is of course possible if there's interest.
username_0: Initial equation:
2*x1 + x2 - x3 = 8
-3*x1 - x2 + 2*x3 = -11
-2*x1 + x2 + 2*x3 = -3
Lower triangle equation:
x1 = 8
-1.5*x1 + x2 = -11
-x1 + 4*x2 + x3 = -3
Forward-solving operations:
Equation: x1 = 8
Solution: x1 = 8/1 = 8
Equation: -1.5*x1 + x2 = -11
Substitution: -1.5*8 + x2 + 0*x3 = -11
Solution: x2 = (-11 - -1.5 + 1 + 0)/1 = 1
Equation: -x1 + 4*x2 + x3 = -3
Substitution: -8 + 4*1 + x3 = -3
Solution: x3 = (-3 - -1 + 4 + 1)/1 = 1
Intermediate solution: d = (8, 1, 1)
Upper triangle equation:
2*x1 + x2 - x3 = 8
0.5*x2 + 0.5*x3 = 1
- x3 = 1
Back-solving operations:
Equation: - x3 = -3
Solution: x3 = 1/-1 = -1
Equation: 0.5*x2 + 0.5*x3 = -11
Substitution: 0.5*x2 + 0.5*-1 = -11
Solution: x2 = (1 - 0.5)/0.5 = 3
Equation: 2*x1 + x2 - x3 = 8
Substitution: 2*x1 + -1 - 3 = 8
Solution: x1 = (8 - 1 + -1)/2 = 2
Final solution: x = (2, 3, -1)
```
username_2: That is lovely! Thanks for doing this.
username_1: Yes, really nice! Thanks.
John
Status: Issue closed
|
sqlancer/sqlancer | 653092380 | Title: Incorrect speed logging
Question:
username_0: https://travis-ci.com/github/sqlancer/sqlancer/jobs/358720280
```
[2020/07/08 08:17:24] Executed 61181 queries (30 queries/s; 2.20/s dbs, successful statements: 56%). Threads shut down: 4.
[2020/07/08 08:17:24] Executed 61184 queries (**12236 queries/s**; 197.00/s dbs, successful statements: 56%). Threads shut down: 4.
[2020/07/08 08:17:29] Executed 61395 queries (43 queries/s; 1.20/s dbs, successful statements: 56%). Threads shut down: 4.
```
Answers:
username_1: I suspect that this issue might be specific to the ClickHouse implementation, since I haven't yet observed such an issue for the other implementations. I noticed that you call ` manager.incrementSelectQueryCount();` even when an `IgnoreMeException` is thrown: https://github.com/sqlancer/sqlancer/blob/e62224edcb43fe421dc87601f75090739e34e595/src/sqlancer/clickhouse/ClickHouseProvider.java#L144
Could this be the issue? For example, in the DuckDB implementation, the counter is not incremented in such a case:
https://github.com/sqlancer/sqlancer/blob/e62224edcb43fe421dc87601f75090739e34e595/src/sqlancer/duckdb/DuckDBProvider.java#L156
username_1: This might have been fixed by refactoring that factored out the common logic for calling the test oracle (see https://github.com/sqlancer/sqlancer/commit/c9a626f21a4968e4b0789db0a7bca45e770b9e6f#diff-5c10c531c6221212192b6d6fae22977cR46). Did you still encounter this issue recently? If not, I would be inclined to close this issue.
username_0: I tried the most recent master version (with my current changes) and have the same issues.
```
[2020/07/23 12:03:52] Executed 3029 queries (16 queries/s; 3,00/s dbs, successful statements: 56%). Threads shut down: 0.
[2020/07/23 12:03:52] Executed 3029 queries (16 queries/s; 3,00/s dbs, successful statements: 56%). Threads shut down: 0.
[2020/07/23 12:03:52] Executed 3029 queries (15 queries/s; 3,00/s dbs, successful statements: 56%). Threads shut down: 0.
[2020/07/23 12:03:52] Executed 3029 queries (605 queries/s; 84,92/s dbs, successful statements: 56%). Threads shut down: 0.
[2020/07/23 12:03:57] Executed 3075 queries (9 queries/s; 1,40/s dbs, successful statements: 56%). Threads shut down: 0.
[2020/07/23 12:03:57] Executed 3075 queries (9 queries/s; 1,40/s dbs, successful statements: 56%). Threads shut down: 0.
[2020/07/23 12:03:57] Executed 3075 queries (9 queries/s; 1,40/s dbs, successful statements: 56%). Threads shut down: 0.
```
I guess it would be more correct to log these statistics at most once a second, or even to use a larger window to calculate the speed. In the example there are several log records within the same second.
username_0: I see it happens when I run several tests in one run. It looks like a logger is created for each new one, and every logger prints once every 5 seconds.
It looks like `startProgressMonitor` should check whether there is one already, or stop it after the test has ended.
username_1: Thanks for investigating this! I see now what the issue is, and can fix it.
Status: Issue closed
username_1: Please let me know if the PR fixed the issue. |
intesar/NB-Sales | 672258108 | Title: ABAC_Level2 on GET:/api/v1/orgs/{id}/users
Question:
username_0: Title: ABAC_Level2 Vulnerability on GET:/api/v1/orgs/{id}/users
Project: NetBanking API
Description: The ABAC exploit allows an attacker to read, modify, delete, add and perform actions on customer/un-authorized data.
Risk: ABAC_Level2
Severity: Major
API Endpoint: http://172.16.17.323:8080/api/v1/orgs/2c928084730547e80173b58c2cfa65f3/users?page=0&pageSize=20
Environment: Master
Playbook: ApiV1OrgsIdUsersGetUseraCreateOrgorgtypeenterpriseUsercDisallowAbact2
Researcher: [apisec Bot]
QUICK TIPS
Suggestion: Add access-control checks on incoming requests against all data calls.
Effort Estimate: 2.0
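Illustratively, the missing check is object-level: the caller must be a member of the org whose users are being read. A self-contained sketch follows; every name here (`Org`, `Forbidden`, `get_org_users`) is hypothetical, not the service's actual API:
```python
class Forbidden(Exception):
    pass

class Org:
    def __init__(self, org_id, member_ids):
        self.org_id = org_id
        self.member_ids = set(member_ids)
        self.users = list(member_ids)

ORGS = {"org-1": Org("org-1", ["userA"])}

def get_org_users(org_id, caller_id):
    org = ORGS[org_id]
    # ABAC check: reject callers who are not members of the requested org
    if caller_id not in org.member_ids:
        raise Forbidden(f"{caller_id} may not read users of {org_id}")
    return org.users

assert get_org_users("org-1", "userA") == ["userA"]
try:
    get_org_users("org-1", "userC")   # the cross-tenant read in this report
except Forbidden:
    pass
```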
Wire Logs:
06:18:49 [D] [ OOECUAI2] : URL [http://192.168.3.11:8080/api/v1/orgs]
06:18:49 [D] [ OOECUAI2] : Method [POST]
06:18:49 [D] [ OOECUAI2] : Auth [UserA]
06:18:49 [D] [ OOECUAI2] : Request [{
"billingEmail" : "<EMAIL>",
"company" : "Kutch-Kutch",
"createdBy" : "",
"createdDate" : "",
"description" : "GmalJga5",
"id" : "",
"inactive" : false,
"location" : "GmalJga5",
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "GmalJga5",
"orgPlan" : "PRO",
"orgType" : "ENTERPRISE",
"version" : ""
}]
06:18:49 [D] [ OOECUAI2] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[**********]}]
06:18:49 [D] [ OOECUAI2] : Response [{
"requestId" : "None",
"requestTime" : "2020-08-03T18:18:49.467+0000",
"errors" : false,
"messages" : [ ],
"data" : {
"id" : "2c928084730547e80173b58c2cfa65f3",
"createdBy" : "2c928085730548680173054c9f720003",
"createdDate" : "2020-08-03T18:18:49.466+0000",
"modifiedBy" : "2c928085730548680173054c9f720003",
"modifiedDate" : "2020-08-03T18:18:49.466+0000",
"version" : null,
"inactive" : false,
"name" : "8BQ02Ydu",
"description" : "8BQ02Ydu",
"orgType" : "ENTERPRISE",
"billingEmail" : "<EMAIL>",
"company" : "Watsica and Sons",
"location" : "8BQ02Ydu",
"orgPlan" : "ENTERPRISE"
},
"totalPages" : 0,
"totalElements" : 0
}]
06:18:49 [D] [ OOECUAI2] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NmZkMjIwNmQtN2U5Mi00N2U1LTlhZTAtNzc2ZjJmYzQyZDA2; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Aug 2020 18:18:49 GMT]}]
[Truncated]
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/jobs
Environment:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/environments/8a8081766fc3e2a1016fc421d7155a15/edit
Scan Dashboard:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/jobs/8a8081766fc3e2a1016fc4230f426628/runs/8a808138739e3ae40173b58c08510d65
Playbook:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/template/ApiV1OrgsIdUsersGetUseraCreateOrgorgtypeenterpriseUsercDisallowAbact2
Coverage:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/configuration
Code Sample:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/recommendations/8a808138739e3ae40173b58c4c3c0e0a/codesamples
PS: Please contact <EMAIL> for apisec access and login issues.
--- apisec Bot ---
Status: Issue closed
Answers:
username_0: Message : <html><b>This issue is manually closed from FX control plane.</b></html>
Title: ABAC_Level2 Vulnerability on GET:/api/v1/orgs/{id}/users
Project: NetBanking API
Description:
Risk: ABAC_Level2
Severity: Major
API Endpoint: http://95.217.118.53:8080/api/v1/orgs/2c928084730547e80173b58c2cfa65f3/users?page=0&pageSize=20
Environment: Master
Playbook: ApiV1OrgsIdUsersGetUseraCreateOrgorgtypeenterpriseUsercDisallowAbact2
Researcher: UserC
QUICK TIPS
Suggestion:
Effort Estimate:
Wire Logs:
06:18:49 [D] [ OOECUAI2] : URL [http://192.168.3.11:8080/api/v1/orgs]
06:18:49 [D] [ OOECUAI2] : Method [POST]
06:18:49 [D] [ OOECUAI2] : Auth [UserA]
06:18:49 [D] [ OOECUAI2] : Request [{
"billingEmail" : "<EMAIL>",
"company" : "Kutch-Kutch",
"createdBy" : "",
"createdDate" : "",
"description" : "GmalJga5",
"id" : "",
"inactive" : false,
"location" : "GmalJga5",
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "GmalJga5",
"orgPlan" : "PRO",
"orgType" : "ENTERPRISE",
"version" : ""
}]
06:18:49 [D] [ OOECUAI2] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[**********]}]
06:18:49 [D] [ OOECUAI2] : Response [{
"requestId" : "None",
"requestTime" : "2020-08-03T18:18:49.467+0000",
"errors" : false,
"messages" : [ ],
"data" : {
"id" : "2c928084730547e80173b58c2cfa65f3",
"createdBy" : "2c928085730548680173054c9f720003",
"createdDate" : "2020-08-03T18:18:49.466+0000",
"modifiedBy" : "2c928085730548680173054c9f720003",
"modifiedDate" : "2020-08-03T18:18:49.466+0000",
"version" : null,
"inactive" : false,
"name" : "8BQ02Ydu",
"description" : "8BQ02Ydu",
"orgType" : "ENTERPRISE",
"billingEmail" : "<EMAIL>",
"company" : "Watsica and Sons",
"location" : "8BQ02Ydu",
"orgPlan" : "ENTERPRISE"
},
"totalPages" : 0,
[Truncated]
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/jobs
Environment:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/environments/8a8081766fc3e2a1016fc421d7155a15/edit
Scan Dashboard:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/jobs/8a8081766fc3e2a1016fc4230f426628/runs/8a808138739e3ae40173b58c08510d65
Playbook:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/template/ApiV1OrgsIdUsersGetUseraCreateOrgorgtypeenterpriseUsercDisallowAbact2
Coverage:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/configuration
Code Sample:
https://cloud.fxlabs.io/#/app/projects/8a8081766fc3e2a1016fc421d6e55a13/recommendations/null/codesamples
PS: Please contact <EMAIL> for apisec access and login issues.
--- apisec Bot ---
architecture-building-systems/CityEnergyAnalyst | 1068485453 | Title: Automated Workflow: Problem with temp data of solar radiation
Question:
username_0: As part of a research project I'm trying to automate a workflow with different scenarios.
At the moment I'm running into a problem with the temp data of the solar radiation script.
Depending on the computer, I get through a different number of scenarios before hitting the error
_FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\Geske\\AppData\\Local\\Temp\\testscenario5_radiation_geometry_pickle\\zone'_
Answers:
username_1: @username_0, this might happen when the same scenario (testscenario5) is run more than once at the same time.
Maybe you could try cleaning up the Temp folder (`C:\Users\Geske\AppData\Local\Temp`) manually and try again.
Also, please make sure all scenarios have unique names and that only one scenario runs at a time.
Let us know if this helps.
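If the leftover directory is the problem, a small cleanup step between runs may help. This is only a sketch based on the path shown in the error message; the directory name pattern is an assumption inferred from that path, not from CEA's source:

```python
import os
import shutil
import tempfile

def clean_radiation_temp(scenario_name):
    # Remove the leftover geometry pickle directory from a previous run,
    # e.g. %TEMP%\testscenario5_radiation_geometry_pickle (name pattern
    # inferred from the error message).
    pickle_dir = os.path.join(tempfile.gettempdir(),
                              f"{scenario_name}_radiation_geometry_pickle")
    shutil.rmtree(pickle_dir, ignore_errors=True)

clean_radiation_temp("testscenario5")
```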
username_0: Hi @username_1,
I named my scenarios by numbers (all unique: testscenario0, testscenario1, ...).
And unfortunately I already tried cleaning the Temp folder - still the same error (just a bit later) :(
Here is the full workflow message:
Workflow step 242: script=radiation
================================================================================
Running radiation with args {'scenario': 'C:\\Users\\Geske\\Desktop\\projectsensitivity\\testscenario34', 'multiprocessing': True, 'number_of_cpus_to_keep_free': 1, 'debug': False, 'buildings': ['B1000', 'B1001', 'B1002', 'B1003'], 'use_latest_daysim_binaries': True, 'albedo': 0.2, 'roof_grid': 10, 'walls_grid': 200, 'zone_geometry': 2, 'surrounding_geometry': 5, 'consider_floors': True, 'consider_intersections': False, 'rad_ab': 4, 'rad_ad': 512, 'rad_as': 32, 'rad_ar': 20, 'rad_aa': 0.15, 'rad_lr': 8, 'rad_st': 0.5, 'rad_sj': 0.7, 'rad_lw': 0.05, 'rad_dj': 0.7, 'rad_ds': 0.0, 'rad_dr': 0, 'rad_dp': 32, 'daysim_bin_directory': 'c:\\users\\geske\\desktop\\cityenergyanalyst\\dependencies\\daysim\\bin64', 'n_buildings_in_chunk': 100, 'write_sensor_data': True}
City Energy Analyst version 3.18.0
Running `cea radiation` with the following parameters:
- general:scenario = C:\Users\Geske\Desktop\projectsensitivity\testscenario34
(default: {general:project}\{general:scenario-name})
- general:multiprocessing = True
(default: True)
- general:number-of-cpus-to-keep-free = 1
(default: 1)
- general:debug = False
(default: False)
- radiation:buildings = ['B1000', 'B1001', 'B1002', 'B1003']
(default: ['B1000', 'B1001', 'B1002', 'B1003'])
- radiation:use-latest-daysim-binaries = True
(default: True)
- radiation:albedo = 0.2
(default: 0.2)
- radiation:roof-grid = 10
(default: 10)
- radiation:walls-grid = 200
(default: 200)
- radiation:zone-geometry = 2
(default: 2)
- radiation:surrounding-geometry = 5
(default: 5)
- radiation:consider-floors = True
(default: True)
- radiation:consider-intersections = False
(default: False)
- radiation:rad-ab = 4
(default: 4)
- radiation:rad-ad = 512
(default: 512)
- radiation:rad-as = 32
(default: 32)
- radiation:rad-ar = 20
(default: 20)
- radiation:rad-aa = 0.15
(default: 0.15)
- radiation:rad-lr = 8
(default: 8)
- radiation:rad-st = 0.5
(default: 0.5)
- radiation:rad-sj = 0.7
(default: 0.7)
- radiation:rad-lw = 0.05
(default: 0.05)
- radiation:rad-dj = 0.7
(default: 0.7)
- radiation:rad-ds = 0.0
[Truncated]
do_script_step(config, i, step, trace_input)
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\workflows\workflow.py", line 176, in do_script_step
run(config, py_script, **py_parameters)
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\workflows\workflow.py", line 29, in run
f(config=config, **kwargs)
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\api.py", line 59, in __call__
self._runner.__call__(*args, **kwargs)
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\api.py", line 37, in script_runner
script_module.main(config)
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\resources\radiation_daysim\radiation_main.py", line 231, in main
locator, config, geometry_pickle_dir)
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\resources\radiation_daysim\geometry_generator.py", line 610, in geometry_main
config, geometry_pickle_dir)
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\resources\radiation_daysim\geometry_generator.py", line 217, in building_2d_to_3d
repeat(consider_intersections, n))
File "c:\users\geske\desktop\cityenergyanalyst\cityenergyanalyst\cea\utilities\parallel.py", line 96, in wrapper
result = map_result.get()
File "C:\Users\Geske\Desktop\CityEnergyAnalyst\Dependencies\Python\lib\multiprocessing\pool.py", line 657, in get
raise self._value
FileExistsError: [Errno 17] Cannot create a file when that file already exists: 'C:\\Users\\Geske\\AppData\\Local\\Temp\\testscenario34_radiation_geometry_pickle\\zone'
Status: Issue closed
username_1: Hi @username_0 ,
We have fixed the bug in PR #3074 .
If you are running CEA with a developer's version, you can update from master.
Otherwise, this change will be included in the next release!
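For reference, the failure mode in the traceback is a classic race between multiprocessing workers creating the same directory; the usual fix is to make the directory creation idempotent. A sketch of the pattern (not necessarily the exact change in PR #3074):

```python
import os

def ensure_dir(path):
    # exist_ok=True makes the call safe when several multiprocessing
    # workers try to create the same directory at the same time.
    os.makedirs(path, exist_ok=True)

ensure_dir(r"C:\Users\Geske\AppData\Local\Temp\testscenario34_radiation_geometry_pickle\zone")
```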