repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M) |
---|---|---|
Ivorforce/RecurrentComplex | 215206541 | Title: Flat structure
Question:
username_0: I came across this kind of structure VERY often: structures with only 1 layer of blocks

Answers:
username_1: Perhaps it's a bit extreme. Those are the result of the ruins transformer being a bit harsh - I'll adjust this a bit later.
username_0: MC 1.11.2
Forge 2259
IvT 1.3.2.1
RC 1.2.10
username_2: @username_1 I would like to note this seems to be more evident when using any none default settings with the exception of the BIOMESOP generator.
username_1: What do you mean by 'none default settings ' and 'BIOMESOP generator'?
Status: Issue closed
username_0: It wasn't only those two; I often find "flat" structures, or rather VERY destroyed ones, like the Castle and multiple kinds of houses
username_1: They are supposed to be partially (or almost completely) destroyed. They're supposed to be ruins, if that is not clear :P
username_0: Ah ok, I thought there was a chance to find a fairly intact Castle in the wild (as I saw some in an old version) ;)
No problem though |
kubernetes/kubernetes | 180271832 | Title: kubeadm/images: make registry configurable
Question:
username_0: We should allow users to use their own private registry instead of GCR. We should also make it clear which images need to be mirrored; ideally we should implement a helper for doing pull/retag/push (i.e. `kubeadm util mirror-images`), as we have a handful of images and users would otherwise have to write scripts to do it. People will have to mirror images continuously as we provide upgrades, so a helper would be very handy.
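For illustration, a minimal sketch of the pull/retag/push flow such a helper would automate. The registry and image names below are made up for the example, not taken from this issue:

```python
import subprocess

UPSTREAM_IMAGES = [  # illustrative only; the real list depends on the Kubernetes release
    "gcr.io/google_containers/kube-apiserver-amd64:v1.5.1",
    "gcr.io/google_containers/kube-dns-amd64:1.9",
]
PRIVATE_REGISTRY = "registry.example.com/k8s"  # hypothetical mirror registry

def mirror(image: str, registry: str) -> None:
    # pull the upstream image, retag it for the private registry, and push it
    target = f"{registry}/{image.rsplit('/', 1)[-1]}"
    subprocess.run(["docker", "pull", image], check=True)
    subprocess.run(["docker", "tag", image, target], check=True)
    subprocess.run(["docker", "push", target], check=True)

for image in UPSTREAM_IMAGES:
    mirror(image, PRIVATE_REGISTRY)
```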
Answers:
username_1: This issue was moved to kubernetes/kubeadm#47
Status: Issue closed
|
ant-design/ant-design-mobile | 339330118 | Title: Runtime error: Unable to resolve module `react-dom` from `F:\xxxxx\node_modules\antd-mobile\lib\modal\alert.js`: Module does not exist in the module map
Question:
username_0: Version: antd-mobile: "^2.1.8",
After running `react-native run-android`, the following error is reported when the app is about to start, which seriously affects project debugging:

Unable to resolve module `react-dom` from `F:\xxxxx\node_modules\antd-mobile\lib\modal\alert.js`: Module does not exist in the module map
Answers:
username_1: `npm install react react-dom --save`
Status: Issue closed
username_2: react-dom causes this error: Expected to find exactly one React Native renderer on DevTools hook
[https://forums.expo.io/t/error-message-when-trying-to-inspect-element/7574](url)
How can this problem be solved? |
cfpb/grasshopper-loader | 64694126 | Title: North Carolina has no city names... some states have weird structures
Question:
username_0: The loader currently substitutes counties when loading NC.
Utah was a bit weird and uses something called AddSystem, which actually makes sense (there are many addresses that AREN'T in cities), so having something that is not necessarily tied to a city is good.
In general, we should probably talk these things out or have a standard (e.g. if no city, substitute the county; if no state, assume it's from the data provider, which is not always the case, see Arkansas).
Answers:
username_0: Changing NC to just leave out cities. See #35
In general, we will leave out data where it isn't provided and only drop items with no address (street number and name).
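As a rough illustration of that rule (the field names here are hypothetical, not the loader's actual schema), the filtering might look like:

```python
def keep_record(record: dict) -> bool:
    # only drop items with no address, i.e. missing street number or street name
    return bool(record.get("number")) and bool(record.get("street"))

def normalize(record: dict) -> dict:
    # leave out fields the source doesn't provide (e.g. city) instead of substituting
    return {key: value for key, value in record.items() if value not in (None, "")}

raw_records = [  # tiny illustrative sample
    {"number": "12", "street": "Main St", "city": ""},      # kept, empty city omitted
    {"number": "", "street": "Elm St", "city": "Raleigh"},  # dropped: no street number
]
cleaned = [normalize(r) for r in raw_records if keep_record(r)]
```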
Status: Issue closed
|
Ebraheem1/Der-Algorithmus2 | 225406425 | Title: Uploading a document with the extension ".png" in the Gallery shows a broken link.
Question:
username_0: 1: Severity : low.
2: Reported by : <NAME>.
3: Description : When I try to upload a document with the extension .png it uploads successfully, however I see a broken image in my gallery.
4: Steps: 1- Log in as a business owner. 2- Add to Gallery. 3- Upload a document with the name "omar.png". The result will look like this: [](https://ibb.co/kyS5i5).
5: Expected to see an error message when I try to upload.<issue_closed>
Status: Issue closed |
rust-lang/rust | 54671723 | Title: Mac to Linux cross compilation build error - morestack.S:28:18: error: unable to emit symbol attribute
Question:
username_0: ./configure --target=x86_64-unknown-linux-gnu
make
on mac causes the following error -
compile: x86_64-unknown-linux-gnu/rt/arch/x86_64/morestack.o
/Users/kashyap/Documents/RUST/rust-fork/src/rt/arch/x86_64/morestack.S:28:18: error: unable to emit symbol attribute
.private_extern ___morestack
^
make: *** [x86_64-unknown-linux-gnu/rt/arch/x86_64/morestack.o] Error 1
make: *** Waiting for unfinished jobs....
100 13.4M 100 13.4M 0 0 36313 0 0:06:27 0:06:27 --:--:-- 409k
got download with ok hash
opening snapshot dl/rust-stage0-2015-01-07-9e4e524-macos-x86_64-e4ae2670ea4ba5c2e5b4245409c9cab45c9eeb5b.tar.bz2
extracting rust-stage0/bin/rustc
Status: Issue closed
Answers:
username_0: Looks like #16259 is already tracking this |
shivamd20/WhatsappAutomatedMessages | 470122043 | Title: where do you paste it?
Question:
username_0: Can't paste it
function sendMessage(msg){var input = document.querySelector('.pluggable-input-body');
var count = 0;
input.innerHTML = msg;
input.dispatchEvent(new Event('input', {bubbles: true}));
var button = document.querySelector('button.compose-btn-send');
button.click();
}
function wishOnTime(hour,min,msg){
window.setInterval(function(){ // Set interval for checking
var date = new Date(); // Create a Date object to find out what time it is
if(date.getHours() === hour && date.getMinutes() === min){ // Check the time
sendMessage(msg);
}
else{
console.log('waiting to wish birthday'+date.getMinutes()+"h"+date.getHours());
}
}, 60000);
} |
microsoft/PowerToys | 627729317 | Title: Tabbed feature
Question:
username_0: I have 3 feature requests:
### PowerToys Run - Web Search Integration
Having PowerToys Run search the web if it cannot find a file would be a brilliant way to quickly search the web.
It would also be good if:
- The web search functionality could be toggled in the Settings app
- You could change the search engine to anything you wanted, say (sorry Microsoft) using Google instead of Bing....
### New Power Toy: Tabbed Windows Explorer
Having the ability to tab Windows Explorer, much like the functionality that was introduced with the Windows Terminal, which I absolutely love and use every day over Command Prompt, would be such a time saver as I am more than sick of Alt + Tabbing through multiple instances of Windows Explorer! Even on a 27" monitor, when you have lots of tabs open it fills up quickly.
### New Power Toy: Ability to control the size of the Alt+Tab menu and disable previews
It would be helpful if we could control the size of the Alt+Tab menu (having it not correspond with the screen resolution) and disable the previews as a method of saving space, much like how it was done on Windows 7 when Classic Windows Shell was enabled, instead of Win 7 Basic or Win 7 Aero Glass.
Answers:
username_1: Sorry, but issues need to be single-topic focused, else it is too hard to track. The top two already exist.
Status: Issue closed
username_2: thx for reply
username_3: Check out https://thewincentral.com/microsoft-planning-to-bring-windows-10-sets-feature-back-from-grave/, maybe that'll help? |
jhipster/generator-jhipster | 1077658706 | Title: R2DBC errors after upgrading to Spring Boot 2.6
Question:
username_0: ##### **Overview of the issue**
The upgrade to Spring Boot 2.6 is happening in https://github.com/jhipster/generator-jhipster/pull/16787. Tests are currently failing for the `vue-gateway` project because of an R2DBC issue.
```
Error: tech.jhipster.sample.web.rest.OperationResourceIT.putNewOperation Time elapsed: 0.016 s <<< ERROR!
org.springframework.r2dbc.BadSqlGrammarException:
executeMany; bad SQL grammar [SELECT e.id AS e_id, e.date AS e_date, e.description AS e_description, e.amount AS e_amount,
e.bank_account_id AS e_bank_account_id, bankAccount.id AS bankAccount_id, bankAccount.name AS bankAccount_name,
bankAccount.guid AS bankAccount_guid, bankAccount.bank_number AS bankAccount_bank_number,
bankAccount.agency_number AS bankAccount_agency_number, bankAccount.last_operation_duration AS
bankAccount_last_operation_duration, bankAccount.mean_operation_duration AS bankAccount_mean_operation_duration,
bankAccount.mean_queue_duration AS bankAccount_mean_queue_duration, bankAccount.balance AS bankAccount_balance,
bankAccount.opening_day AS bankAccount_opening_day, bankAccount.last_operation_date AS
bankAccount_last_operation_date, bankAccount.active AS bankAccount_active, bankAccount.account_type AS
bankAccount_account_type, bankAccount.attachment AS bankAccount_attachment, bankAccount.attachment_content_type AS
bankAccount_attachment_content_type, bankAccount.description AS bankAccount_description, bankAccount.user_id AS
bankAccount_user_id FROM operation e LEFT OUTER JOIN bank_account bankAccount ON e.bank_account_id = bankAccount.id
WHERE id = 4 WHERE e.id = 4]; nested exception is io.r2dbc.spi.R2dbcBadGrammarException: [90059] [90059] Ambiguous
column name "ID"; SQL statement:
SELECT e.id AS e_id, e.date AS e_date, e.description AS e_description, e.amount AS e_amount, e.bank_account_id AS
e_bank_account_id, bankAccount.id AS bankAccount_id, bankAccount.name AS bankAccount_name, bankAccount.guid AS
bankAccount_guid, bankAccount.bank_number AS bankAccount_bank_number, bankAccount.agency_number AS
bankAccount_agency_number, bankAccount.last_operation_duration AS bankAccount_last_operation_duration,
bankAccount.mean_operation_duration AS bankAccount_mean_operation_duration, bankAccount.mean_queue_duration AS
bankAccount_mean_queue_duration, bankAccount.balance AS bankAccount_balance, bankAccount.opening_day AS
bankAccount_opening_day, bankAccount.last_operation_date AS bankAccount_last_operation_date, bankAccount.active AS
bankAccount_active, bankAccount.account_type AS bankAccount_account_type, bankAccount.attachment AS
bankAccount_attachment, bankAccount.attachment_content_type AS bankAccount_attachment_content_type,
bankAccount.description AS bankAccount_description, bankAccount.user_id AS bankAccount_user_id FROM operation e LEFT
OUTER JOIN bank_account bankAccount ON e.bank_account_id = bankAccount.id WHERE id = 4 WHERE e.id = 4 [90059-200]
at tech.jhipster.sample.web.rest.OperationResourceIT.putNewOperation(OperationResourceIT.java:293)
```
##### **Motivation for or Use Case**
I'd like to see JHipster upgraded to Spring Boot 2.6.0 so I can try to integrate Spring Native with the latest release. I have a talk with <NAME> on Tuesday that I'd like to show off JHipster + Spring Native in. You can see our previous work with JHipster 7.2 and Spring Boot 2.5 in https://github.com/username_0/spring-native-examples.
##### **Reproduce the error**
I recreated the `vue-gateway` application and pushed it to GitHub:
```
git clone -b skip_ci-spring-boot_2.6.0 <EMAIL>:jhipster/jhipster-bom.git
cd jhipster-bom
./mvnw install -Dgpg.skip=true
cd ..
git clone <EMAIL>:username_0/vue-gateway.git
cd vue-gateway
./mvnw verify
```
##### **Related issues**
https://github.com/jhipster/generator-jhipster/pull/16787
##### **JHipster Version(s)**
```
[email protected] /Users/username_0/Downloads/vue-gateway
[Truncated]
```
##### **Environment and Tools**
openjdk version "17.0.1" 2021-10-19
OpenJDK Runtime Environment GraalVM CE 21.3.0 (build 17.0.1+12-jvmci-21.3-b05)
OpenJDK 64-Bit Server VM GraalVM CE 21.3.0 (build 17.0.1+12-jvmci-21.3-b05, mixed mode, sharing)
git version 2.30.1 (Apple Git-130)
node: v14.18.1
npm: 8.1.2
Docker version 20.10.11, build dea9396
Docker Compose version v2.2.1
Answers:
username_1: @username_0 it now works on my branch, as long as you don't use gradle or mongo. I'll continue to improve it to have everything green
Status: Issue closed
username_1: Bounty claimed here: https://opencollective.com/generator-jhipster/expenses/60950
username_0: This still happens with JHipster 7.6.0.
```
Caused by: java.lang.IllegalStateException: No suitable constructor found on class org.springframework.data.r2dbc.repository.support.SimpleR2dbcRepository to match the given arguments: org.springframework.data.relational.repository.support.MappingRelationalEntityInformation, org.springframework.data.r2dbc.core.R2dbcEntityTemplate, org.springframework.data.r2dbc.convert.MappingR2dbcConverter. Make sure you implement a constructor taking these
at org.springframework.data.repository.core.support.RepositoryFactorySupport.lambda$instantiateClass$6(RepositoryFactorySupport.java:579) ~[na:na]
at java.util.Optional.orElseThrow(Optional.java:403) ~[na:na]
at org.springframework.data.repository.core.support.RepositoryFactorySupport.instantiateClass(RepositoryFactorySupport.java:579) ~[na:na]
at org.springframework.data.repository.core.support.RepositoryFactorySupport.getTargetRepositoryViaReflection(RepositoryFactorySupport.java:543) ~[na:na]
at org.springframework.data.r2dbc.repository.support.R2dbcRepositoryFactory.getTargetRepository(R2dbcRepositoryFactory.java:121) ~[postgres:1.4.1]
at org.springframework.data.repository.core.support.RepositoryFactorySupport.getRepository(RepositoryFactorySupport.java:324) ~[na:na]
at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.lambda$afterPropertiesSet$5(RepositoryFactoryBeanSupport.java:322) ~[postgres:2.6.1]
at org.springframework.data.util.Lazy.getNullable(Lazy.java:230) ~[na:na]
at org.springframework.data.util.Lazy.get(Lazy.java:114) ~[na:na]
at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.afterPropertiesSet(RepositoryFactoryBeanSupport.java:328) ~[postgres:2.6.1]
at org.springframework.data.r2dbc.repository.support.R2dbcRepositoryFactoryBean.afterPropertiesSet(R2dbcRepositoryFactoryBean.java:179) ~[postgres:1.4.1]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1863) ~[na:na]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1800) ~[na:an]
```
Steps to reproduce:
```
# use SDKMAN to install GraalVM
sdk install java 21.3.0.r17-grl
gu install native-image
# get the project and build a native binary
git clone https://github.com/username_0/spring-native-examples.git
cd spring-native-examples/postgres-webflux
./mvnw package -Pnative,prod -DskipTests
# start Keycloak and PostgreSQL containers with jhkeycloakup and jhpostgresqlup
./target/postgres
```
```
username_2: Is this issue native specific?
username_0: Yes, it works fine when I run the app using `mvn spring-boot:run` or using my IDE.
username_0: I figured out the solution to this today. You need to add `@Component` to Impl classes and add `SimpleR2dbcRepository` to type hints.
Status: Issue closed
|
ericgio/react-bootstrap-typeahead | 181229039 | Title: Strange warning using multiple selection
Question:
username_0: I got this strange warning on my typeahead components, with multiple selection:

Does anyone have any idea what is causing this?
Answers:
username_1: Can you post your code?
username_1: Also, what version of the component are you using? This appears to be a React warning due to the `react-onclick-outside` HOC applying invalid props to a div.
username_0: 0.9.4 looks like it's the latest version.
Simple usage:
```JavaScript
<Row>
<Col xs={12} sm={6}>
<FormGroup>
<ControlLabel>Dances</ControlLabel>
<Typeahead
multiple
options={dances}
selected={editingLevel.dances}
onChange={updateLevelForm}
/>
</FormGroup>
</Col>
</Row>
```
**dances** is array with strings,
**editingLevel.dances** is array that come from store object that holds form state
Nothing really special here.
username_2: I had the same problem. I fixed it by forcing the react-onclickoutside version to below 5.4.
npm install [email protected] --save fixed the problem for me
username_0: Cool, thanks! Will try that.
username_1: `react-bootstrap-typeahead` uses `react-onclickoutside`@5.3.3. Is your app using a higher version?
Status: Issue closed
username_1: I'm going to close this out, since I'm pretty sure it's just a package conflict. Will look into upgrading the package at some point.
username_0: Hi, didn't see you replied too. I used the default versions of `react-bootstrap-typeahead`.
But due to semantic versioning syntax it installs the latest version under 6.0.0, which is 5.7.2 currently.
username_1: @evinak submitted [a fix](https://github.com/username_1/react-bootstrap-typeahead/pull/100) for this issue. Will publish as 0.10.2. |
4teamwork/opengever.core | 79472578 | Title: Update to Plone 4.3 and Dexterity 2.0
Question:
username_0: - [ ] Update to Plone version 4.3.4 (http://dist.plone.org/release/4.3.4/versions.cfg) and Dexterity 2.0
- [ ] Activate Grok via extra_feature.
- [ ] Adjust imports
- [ ] Static folders
- [ ] Review monkey patches and remove them where appropriate
- [ ] Remove hotfixes
- [ ] Review and update the pinnings of third-party products
Lukas has already run a small test; branch: `lg-plone43-compat`
Answers:
username_0: `collective.ploneupgradecheck` can be used to help with this.
username_0: I have already prepared a corresponding branch: [pg_plone_4_3](https://github.com/4teamwork/opengever.core/tree/pg_plone_4_3), but it doesn't make sense to create a PR yet. |
kubernetes/kubernetes | 193922265 | Title: [Upgrade test] "Addon update should propagate add-on file changes" failing in ci-kubernetes-e2e-gce-1.4-1.5-upgrade-cluster
Question:
username_0: Test has been failing [100% of the time](https://k8s-testgrid.appspot.com/google-1.4-1.5-upgrade#gce-cvm-1.4-cvm-1.5-upgrade-cluster&width=20&sort-by-failures=&include-filter-by-regex=Addon%20update%20should%20propagate%20add-on%20file%20changes) in [ci-kubernetes-e2e-gce-1.4-1.5-upgrade-cluster](https://k8s-gubernator.appspot.com/builds/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-1.4-1.5-upgrade-cluster/). Sample failure: [link](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-1.4-1.5-upgrade-cluster/4).
Based on [spreadsheet](https://docs.google.com/spreadsheets/d/1sAZqyWE--0fvN1PIuKTw9JwmcXNz6tQIm1MrjdILtm4/edit#gid=1403603610&vpid=A1) tracking 1.5 upgrade test failures created by @krousey.
Answers:
username_1: Paste my same comment from #35600:
This test always failed in 1.5 upgrade builds. An example as this :https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-1.4-1.5-upgrade-cluster/12/.
Checked logs and found we were actually running the old version of the Addon update e2e test (probably depending on the original version?) after upgrade. But because the Addon Manager is upgraded to a newer version in 1.5 and does not take care of resources in non `kube-system` namespaces anymore, this test must fail without the necessary test code changes in #36008.
We may triage it as a non-blocker
username_2: @username_0 Is it appropriate to move this to the next milestone or clear the 1.5 milestone? (and remove the non-release-blocker tag as well)
Status: Issue closed
|
MicrosoftDocs/azure-docs | 490062184 | Title: Table is incorrect for allowed Max data size (GB) for General Purpose Gen5 elastic pools
Question:
username_0: Noticed a discrepancy in this table: for General Purpose Gen5 6 vCore we allow 1.5 TB and not 756 GB. Other items in the General Purpose Gen5 table may be wrong as well; I did not check them all.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ece92a7d-3250-d109-97e9-20a731dc4b20
* Version Independent ID: 29a97076-fc64-0ae9-4574-de1aeda237d8
* Content: [Azure SQL Database vCore-based resource limits - elastic pools](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-vcore-resource-limits-elastic-pools#feedback)
* Content Source: [articles/sql-database/sql-database-vcore-resource-limits-elastic-pools.md](https://github.com/Microsoft/azure-docs/blob/master/articles/sql-database/sql-database-vcore-resource-limits-elastic-pools.md)
* Service: **sql-database**
* Sub-service: **elastic-pools**
* GitHub Login: @oslake
* Microsoft Alias: **moslake**
Answers:
username_1: @username_0 Thank you for the feedback. We are actively investigating and will get back to you soon.
username_1: This is being assigned to the content author to evaluate and update as appropriate.
username_2: @username_0 - thanks for raising this issue. Apologies for the delayed response. This page has recently been updated and I believe the current values are correct at this time.
username_2: #please-close
Status: Issue closed
|
lxc/lxc-ci | 618439208 | Title: apt broken in ubuntu/focal/cloud
Question:
username_0: When executing apt I get the following error - which breaks my `cloud-init` setup:
```
Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists... Error!
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_focal_universe_binary-amd64_Packages
E: The package lists or status file could not be parsed or opened.
```
Answers:
username_1: Hmm, odd. @username_2 can you check that image?
username_2: Launching a container using `lxc launch images:ubuntu/focal/cloud c2`, and then running `apt update` inside of the container doesn't give me that error.
@username_0 Does this happen every time? What are the exact steps to reproduce this issue?
username_0: @username_2 it happens every time. Maybe my locally cached image is corrupt then? How do I force a redownload?
Steps to reproduce:
```
lxc launch images:ubuntu/focal/cloud focal-cloud-test
lxc exec focal-cloud-test -- su --login
root@focal-cloud-test:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://archive.ubuntu.com/ubuntu focal-updates InRelease [107 kB]
Get:3 http://security.ubuntu.com/ubuntu focal-security InRelease [107 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease [98.3 kB]
Get:5 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [54.1 kB]
Get:6 http://security.ubuntu.com/ubuntu focal-security/main Translation-en [21.1 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [94.0 kB]
Get:8 http://archive.ubuntu.com/ubuntu focal-updates/main Translation-en [35.0 kB]
Get:9 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [30.5 kB]
Get:10 http://archive.ubuntu.com/ubuntu focal-updates/universe Translation-en [14.9 kB]
Get:11 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [2,792 B]
Get:12 http://archive.ubuntu.com/ubuntu focal-backports/universe Translation-en [1,280 B]
Fetched 566 kB in 1s (1,129 kB/s)
Reading package lists... Error!
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_focal_universe_binary-amd64_Packages
E: The package lists or status file could not be parsed or opened.
```
username_2: I did the exact same thing, and don't run into the error. My image's hash is `9c1933d418c3` (`lxc image ls`). You can run `lxc image refresh <hash>` (your hash) to update the image.
username_0: That fixed it. Thank you very much!!
Status: Issue closed
|
Juniper/py-junos-eznc | 354373866 | Title: Many filedescriptors never closed
Question:
username_0: Hi,
I use this module through napalm to target several hundred devices. After 200 devices, I ran into an issue of too many open file descriptors.
Even with the `device` object closed (both the napalm `device` and this module's one), for > 200 devices, I can see that it consumes around 70K unclosed file descriptors. The only way to reduce this number is to force the garbage collection of the `device` objects.
I don't have this issue with other napalm drivers (nxos and ios for example).
I will send you some `lsof` output and try to dig on my side ASAP.
Thanks
Answers:
username_1: @username_0 waiting to hear from you.
username_0: Hi @username_1,
Sorry for the delay. I've looked at which FDs are kept open after doing `device.close()`, and it seems that the issue is in the `ncclient` module, not this one. The SSH session object seems to open some pipes that are never closed.
I don't know if you prefer to close this issue or keep it open to track the bug, so do as you prefer ;)
username_0: FYI, I fixed the issue here: https://github.com/ncclient/ncclient/pull/287
username_0: Merged in [ncclient](https://github.com/ncclient/ncclient/commit/d647a9ee3f600d027d7396ac181b4d7df41191fd)
Status: Issue closed
|
beetbox/beets | 140259074 | Title: edit plugin fires write events for all tracks in query
Question:
username_0: ### Problem
I have [a plugin](https://github.com/username_0/dotfiles/blob/master/home/.config/beets/plugins/git_annex.py) that listens to the `write` event. When I use the `edit` plugin, it fires that event for every track in the query, not just the ones that are actually modified.
Running this command in verbose (`-vv`) mode:
```sh
❯ beet -vv edit todd terje spiral # some album with two tracks
user configuration: /Users/tball/.config/beets/config.yaml
data directory: /Users/tball/.config/beets
plugin paths: /Users/tball/.config/beets/plugins
Sending event: pluginload
library database: /Users/tball/tunes/beets.db
library directory: /Users/tball/tunes
Sending event: library_opened
Todd Terje - Spiral - Spiral # make an edit to a single track
dj: True
continue [E]diting, Apply, Cancel? a
edit: saving changes to Todd Terje - Spiral - Spiral
Sending event: write
Sending event: after_write
moving /Users/tball/tunes/Todd Terje/Spiral/01 Spiral.m4a to synchronize path
Sending event: before_item_moved
Sending event: item_moved
Sending event: database_change
Sending event: database_change
Sending event: database_change
edit: saving changes to Todd Terje - Spiral - Q
Sending event: write
Sending event: after_write
moving /Users/tball/tunes/Todd Terje/Spiral/02 Q.mp3 to synchronize path
Sending event: before_item_moved
Sending event: item_moved
Sending event: database_change
Sending event: database_change
Sending event: database_change
Sending event: cli_exit
```
### Setup
* OS: El Capitan
* Python version: 2.7.10
* beets version: 1.3.17
* Turning off plugins made problem go away (yes/no): no
My configuration (output of `beet config`) is:
```yaml
plugins: [
edit,
]
import:
move: yes
incremental: yes
library: ~/tunes/beets.db
directory: ~/tunes/
pluginpath:
[Truncated]
playlist_dir: ~/.mpd/playlists
playlists:
- name: "1-added_this_week.m3u"
album_query: added:@{1w}..
- name: "2-added_this_month.m3u"
album_query: added:@{1m}..
itunescopy:
auto: yes
dest: ~/Music/iTunes/iTunes Media/Automatically Add to iTunes.localized
lastgenre:
count: 3
duplicates:
keys: [track, title, mb_albumid]
git_annex:
directory: ~/tunes
```
Answers:
username_1: Weirdly, I can't seem to reproduce this here:
```
$ beet -v edit
user configuration: /Users/asampson/.config/beets/config.yaml
data directory: /Users/asampson/.config/beets
plugin paths:
Sending event: pluginload
library database: /Users/asampson/code/beets/_etc/testlib.blb
library directory: /Users/asampson/code/beets/_etc/testlib
Sending event: library_opened
Blakroc feat. Ludacris & Ol’ Dirty Bastard - Blakroc - Coochif
title: Coochif -> Coochie
continue [E]diting, Apply, Cancel? a
edit: saving changes to Blakroc feat. Ludacris & Ol’ Dirty Bastard - Blakroc - Coochie
Sending event: write
Sending event: after_write
Sending event: database_change
Sending event: cli_exit
```
That shows me invoking a `beet edit` on everything in my library but just changing one track. It's the only one that fires the write event. I would suggest that this might be a bad interaction with some other plugin, but you don't seem to have any enabled. That's very strange!
I don't have any other clues at the moment. Maybe try on the latest git source?
username_0: Strange, I installed beets from the latest source and I'm still seeing this. Maybe I'll try on a fresh database/config and see what happens. I keep my music in a [git-annex](https://git-annex.branchable.com/) repository for syncing across multiple computers which might be confusing python in some way.
username_0: It's definitely something about the library being located in a git repository that's confusing beets. A collection in a fresh directory works fine.
Status: Issue closed
username_0: I'm going to close, cause this is obviously my problem. Thanks! |
pytorch/pytorch | 442793725 | Title: Overhead performance regression over time umbrella issue.
Question:
username_0: This issue is meant to collect the various performance-regression-over-time bug reports that aren't specific op regressions. They almost certainly overlap, but we should track them separately to make sure we cover all the cases.
To start:
https://github.com/pytorch/pytorch/issues/5388
https://github.com/pytorch/pytorch/issues/16717
https://github.com/pytorch/pytorch/issues/2560 |
MattBubernak/Presentation1_CSCI5828_Angular | 109519825 | Title: host the tea master through git pages
Question:
username_0: Got it hosted, but there are some issues I now need to look into.
http://mattbubernak.github.io/Presentation1_CSCI5828_Angular/#/Recipes
Status: Issue closed
Answers:
username_0: Had to fix some bugs after getting it hosted there, but it's up and running. |
mmatl/pyrender | 439578612 | Title: Depth image value range from orthographic camera
Question:
username_0: Hi @username_1, I am not sure if I am using the orthographic camera in the right way, but I found that the value range of the depth map from the orthographic camera looks weird; it is not in physical units.
When I set a perspective camera to scan a chair at a distance of around 1 meter, I get a depth map with a value range between 0 and ~0.75. This is good, in meters.
However, when I set an orthographic camera to scan, like the code below:
camera = pyrender.OrthographicCamera(xmag=.5, ymag=.5, znear=0.05, zfar=100)
then the value range of the depth map is something like 0 - 0.05138555, which is obviously not physically meaningful. I have been looking into this for a while but still have no idea.
Can you please help me out. :)
Thanks!
Answers:
username_0: Hi @username_1, I figured it out: for an orthographic camera, a different unprojection matrix should be used when unprojecting depth from the buffer to real values.
Please let me know if and how I can help. :)
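For reference, a minimal sketch of the conversion, assuming an OpenGL-style depth buffer with values in [0, 1] (the exact convention pyrender uses internally may differ):

```python
import numpy as np

def buffer_to_metric_depth(depth_buffer: np.ndarray, znear: float, zfar: float,
                           orthographic: bool) -> np.ndarray:
    # Convert normalized depth-buffer values in [0, 1] to distances along the view axis.
    if orthographic:
        # Orthographic projections store depth linearly between the clip planes.
        return znear + depth_buffer * (zfar - znear)
    # Perspective projections store depth non-linearly; invert the projection instead.
    z_ndc = 2.0 * depth_buffer - 1.0
    return (2.0 * znear * zfar) / (zfar + znear - z_ndc * (zfar - znear))
```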
username_0: Hi @username_1, I think I figured it out and fixed it in my local copy. Let me know if and how I can help to fix the bug. :)
username_1: @username_0 sounds awesome, thanks for fixing the issue! When you have time, feel free to make a PR on this and I'll merge it right in! :)
Status: Issue closed
username_0: @username_1 , glad if I can help, just started a PR and closing this. :) |
atoum/phpstorm-plugin | 132792631 | Title: debug mode with the test (xdebug, zend debug, etc)
Question:
username_0: Hello,
I think it could be useful to have a way to run tests in debug mode (not the debug mode of atoum but of PHP) to allow the usage of breakpoints and such ;)
thanks
Answers:
username_0: see https://github.com/atoum/atoum-documentation/issues/214 |
baomidou/mybatis-plus | 400957470 | Title: Two small questions about the code generator
Question:
username_0: Currently used version (3.0.7.1)
1. I want to additionally generate an exception package; how do I configure that?
2. Why is the module name repeated twice in the generated Controller's header annotation, and how can I set it to appear only once?

Thanks
Status: Issue closed
Answers:
username_1: Use a custom template |
IndyTechFellowship/United-Way-Web | 224315595 | Title: Recommendation Page Shell
Question:
username_0: 
Includes:
- Create a placeholder for the other items that will need to go on the recommendation page.
- Navigation Bar
- Footer
- White Space<issue_closed>
Status: Issue closed |
aws/aws-cdk | 784376955 | Title: (cdk-assets): replace docker build with docker buildx build
Question:
username_0: <!-- short description of the feature you are proposing: -->
Replace `docker build` with `docker buildx build`.
### Use Case
<!-- why do you need this feature? -->
To be able to build images for the `amd64` architecture (e.g. AWS Fargate) on a system that is using another architecture, like `arm64` (e.g. Apple M1).
With `buildx` you can build cross-platform images by declaring the `--platform` argument, e.g. `docker buildx build --platform linux/amd64 someimage:sometag .` executed on an Apple M1 system results in an image that works on a system with the `amd64` architecture.
### Proposed Solution
<!-- Please include prototype/workaround/sketch/reference implementation: -->
### Other
<!--
e.g. detailed explanation, stacktraces, related issues, suggestions on how to fix,
links for us to have context, eg. associated pull-request, stackoverflow, slack, etc
-->
Currently, an image built with `.fromAsset` works only on the same architecture where it was built. In that sense, this could also be considered a bug – the image built doesn't work on the target system (Fargate).
```ts
import { FargateTaskDefinition, ContainerImage } from '@aws-cdk/aws-ecs';
const taskDefinition = new FargateTaskDefinition(this, 'TaskDefinition');
taskDefinition
.addContainer('Container', {
image: ContainerImage.fromAsset(path.resolve(__dirname, '../image')),
});
```
* [ ] :wave: I may be able to implement this feature request
* [ ] :warning: This feature might incur a breaking change
---
This is a :rocket: Feature Request
Answers:
username_1: Any progress on this?
username_2: We are not actively working on this. Pull requests are more than welcome.
@username_3 I see you closed your pull request. Would you be interested in continuing to work on this? I am happy to help out with the review (sorry you didn't get a response for a while).
username_3: @username_2 I'm happy to continue working on it. I closed the first PR because it appears the CodeBuild environment does not have experimental features (`buildx`) available. Is there someone that can help sort out the CodeBuild side? Thanks
username_4: Hello all, really looking forward to this feature. But a question from me: is `buildx` really necessary?
Here is what I am talking about, and mind you I am running this on my M1 Macbook, with the following Docker version installed:
```
docker --version
Docker version 20.10.5, build 55c4c88
```
I build my docker image with the `--platform` option set to `linux/amd64`:
```bash
docker build --platform linux/amd64 .
```
This provides me with an image that is built with the correct architecture:
```
# Cut down output of: docker image inspect <imageid>
"WorkingDir": "/src",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 114102371,
"VirtualSize": 114102371,
"GraphDriver": {
"Data": {
```
I can clearly see that this image is of `amd64` architecture. Did something change in Docker so that buildx is no longer required? Is there a way to pass in the `--platform` parameter to the AssetImage?
username_0: Good catch, @username_4!
Did you happen to check if it works in the same way on the latest x86 Docker for Mac? — Just `--platform` argument and no `buildx`. I’d like this change to be implemented in a way that it works on all platforms :)
username_4: It indeed did work. I was able to build the image on an Intel Macbook by using the `--platform` parameter. Even to the `arm64` architecture oddly enough.
username_5: I hit the same problem building a layer for Lambda.
```
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
So do we plan to add something like "platform" option ??
```
const layer = new lambda.LayerVersion(this, 'ruby-common', {
code: lambda.Code.fromAsset(path.join(__dirname, '../src/layers/ruby-common'), {
bundling: {
image: lambda.Runtime.RUBY_2_7.bundlingImage,
platform: "linux/amd64", // <--- new option presumably
command: [
'bash', '-c', `
pwd && ls -la &&
bundle install --standalone &&
cp -a bundle /asset-output/
`,
],
},
}),
compatibleRuntimes: [lambda.Runtime.RUBY_2_7],
description: 'Common ruby gems'
});
```
Running docker manually with --platform linux/amd64
btw, deploying assets to S3 and using the code below works quite nicely
```
const bucket = s3.Bucket.fromBucketName(this, 'bucket-with-layers', "this-is-fake-bucket")
const layer = new lambda.LayerVersion(this, 'ruby-base', {
code: lambda.Code.fromBucket(bucket, "deploy/ruby-base.zip")
})
```
username_6: I'm taking a crack at this in #14908, however I am a bit strapped for time. If anyone would like to pair on it, please take a 👀 and provide comments on that PR.
username_7: Hi @username_6
It seems the `ContainerImage.fromAsset` does not support `--platform` property so we still can't build `linux/amd64` images for Fargate from M1.
Looks like we need another PR, no?
username_0: I noticed the same, and I think it is because the `platform` property is not implemented in https://github.com/aws/aws-cdk/blob/master/packages/@aws-cdk/aws-ecr-assets/lib/image-asset.ts
username_5: Just a note for people with the M1 chip: Docker now supports the DOCKER_DEFAULT_PLATFORM='linux/amd64' env variable.
https://docs.docker.com/engine/reference/commandline/cli/
username_6: @username_7 @username_0
Glad to see the renewed interest. Yes, I do believe that the PR I originally put together affected the wrong part of the codebase 🤦 . I am still unable to build any assets for ECR with my M1 machine.
I previously did some initial research into this and got a bit lost trying to understand how/where the CDK ECR code actually instructs Docker to build the image.
Here are some notes (largely written out for myself while I try to work this through) of what I think may be the next steps:
1. Update the `DockerImageAssetOptions` to include the `platform` flag: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/aws-ecr-assets/lib/image-asset.ts#L18-L57
1. Update the asset hash to consider the `platform` flag: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/aws-ecr-assets/lib/image-asset.ts#L147-L152
2. Ensure that the `platform` flag makes its way to the docker build. This is where I get confused. Here's what I can see:
1. When we construct a `DockerImageAsset`, we create an `AssetStaging` object: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/aws-ecr-assets/lib/image-asset.ts#L159-L168
2. When we construct an `AssetStaging` object, we build a cache key from the relevant bundling props (`platform` will also need to be considered) and send it to an `assetCache` where the `stageThisAsset` callback is called if it's a new asset: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/aws-ecr-assets/lib/image-asset.ts#L159-L168
3. The `stageThisAsset` callback either runs `this.stageByBundling(...)` or `this.stageByCopying()`... I'm not exactly sure why we'd want to stage by copying, however I think it's safe to assume that it won't be affected by the `platform` flag: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/core/lib/asset-staging.ts#L189-L197
4. `stageByBundling()` is where things start getting hazy for me... What we do know is that it expects the first arg to comply with the `BundlingOptions` interface, so we should ensure that accepts the `platform` flag: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/core/lib/asset-staging.ts#L300 https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/core/lib/bundling.ts#L10-L97
I'm not entirely sure what is the intention of `stageByBundling()`. I see that it runs `this.bundle()`, which I would think is where the bundling magic ✨ happens, however I'm not really sure: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/core/lib/asset-staging.ts#L458-L467
I'm not really sure how `local.tryBundle(...)` is ever set (I can't see that set anywhere in the files I've referenced in this chain of steps), so we're likely running `image.run(...)`. I'm not really sure what `image` even represents... The interface describes it as: https://github.com/aws/aws-cdk/blob/e61a5b80fb19270a0ed21938b777390ce5d835cc/packages/%40aws-cdk/core/lib/bundling.ts#L10-L14
So does that mean at this point we already have the docker image (implying that it's been built)?
I'm willing to put some time into creating another PR, however having some others pair with me on understanding how the bundling actually works would be helpful.
username_8: Docker image assets are built here
https://github.com/aws/aws-cdk/blob/ef1260976f1e231fd4c8f7fbac5b0a592e243432/packages/cdk-assets/lib/private/docker.ts#L49-L59
username_9: One solution that I found (after much head banging 🙉 ) is to put `--platform ...` in the `FROM` at the beginning of my `Dockerfile`:
```Dockerfile
FROM --platform=linux/amd64 someBaseImage:someVersion
# ... moar cool docker stuff here 🐳
```
I hope that helps
username_10: I just spent a day or two struggling with this. I have a temporary workaround that will let you use CDK to build your image cross platform, then use the image in ECS.
1) Make sure to set the environment variable `DOCKER_BUILDKIT=1` in your build environment. This is needed so that Docker itself will respect the `--platform` option on normal `docker build` commands without needing to use the special `docker buildx build` command. This is the first issue I had run into: CDK issues a normal `docker build` command, but by default Docker only respected `--platform` if you use BuildKit.
```sh
export DOCKER_BUILDKIT=1
```
Alternatively modify `/etc/docker/daemon.json`
```json
{ "features": { "buildkit": true } }
```
2) Next add the following flag to your `cdk.json` file:
```json
"@aws-cdk/core:newStyleStackSynthesis": true
```
This enables some new functionality for stack synthesis, including the functionality to load a locally built Docker image from a tarball. You will need to run `cdk bootstrap` again after turning on this flag to update the AWS side CDK bootstrapping resources.
3) Now we can do a little bit of hacky stuff to manually build a container image for the target platform, dump it to a local tarball, and then import that tarball using ContainerImage.fromTarball()
```js
import * as path from 'path';
import { spawnSync, SpawnSyncOptions } from 'child_process';
```
```js
// Manually build the image for the specified platform.
var appImage = cdk.DockerImage.fromBuild(path.resolve('./app'), {
platform: 'linux/arm64'
});
// Dump the image to a tarball in the cdk.out folder
var absolutePath = process.cwd() + '/cdk.out/image.tar';
console.log(absolutePath);
const proc = spawnSync(`docker`, [
'save',
`--output=${absolutePath}`,
`${appImage.image}:latest`,
], {
stdio: [ // show Docker output
'ignore', // ignore stdio
process.stderr, // redirect stdout to stderr
'inherit', // inherit stderr
],
});
if (proc.error) {
throw proc.error;
}
if (proc.status !== 0) {
if (proc.stdout || proc.stderr) {
throw new Error(`[Status ${proc.status}] stdout: ${proc.stdout?.toString().trim()}\n\n\nstderr: ${proc.stderr?.toString().trim()}`);
}
throw new Error(`Docker save exited with status ${proc.status}`);
}
// Now turn the tarball into a DockerImageAsset again
var image = ecs.ContainerImage.fromTarball(absolutePath);
```
4) The resulting ContainerImage can be used in other places, including ECS task definitions as normal:
```js
taskDefinition.addContainer('app', {
cpu: 2048,
memoryLimitMiB: 2048,
image: image,
});
```
5) At this point everything should work and you should have a successful cross platform Docker build and push from tarball.
Also, I am reopening this issue because it is definitely not fixed yet. We need to do more research, and we need a better way to pass the platform down to the asset bundling stage of CDK, because, to be clear, my solution above is a totally hacky workaround: doing the bundling manually and then exporting and reimporting is not very efficient.
username_10: <!-- short description of the feature you are proposing: -->
### Use Case
<!-- why do you need this feature? -->
To be able to build images for the `amd64` architecture (e.g. AWS Fargate) on a system that is using another architecture, like `arm64` (e.g. Apple M1).
### Proposed Solution
<!-- Please include prototype/workaround/sketch/reference implementation: -->
Replace `docker build` with `docker buildx build`.
https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images
With `buildx` you can build cross-platform images by declaring the `--platform` argument, e.g. `docker buildx build --platform linux/amd64 someimage:sometag .` executed on an Apple M1 system results in an image that works on a system with the `amd64` architecture.
`buildx` also allows you to build an image for multiple platforms at once, e.g. `--platform linux/amd64,linux/arm64`
### Other
<!--
e.g. detailed explanation, stacktraces, related issues, suggestions on how to fix,
links for us to have context, eg. associated pull-request, stackoverflow, slack, etc
-->
Currently, an image built with `.fromAsset` works only on the same architecture where it was built. In that sense, this could also be considered a bug – the image built doesn't work on the target system (Fargate).
```ts
import { FargateTaskDefinition, ContainerImage } from '@aws-cdk/aws-ecs';
const taskDefinition = new FargateTaskDefinition(this, 'TaskDefinition');
taskDefinition
.addContainer('Container', {
image: ContainerImage.fromAsset(path.resolve(__dirname, '../image')),
});
```
* [ ] :wave: I may be able to implement this feature request
* [x] :warning: This feature might incur a breaking change
---
This is a :rocket: Feature Request
username_0: Now that Lambda released support for `arm64` / Graviton2, maybe this feature request now finally gets some love from the maintainers?
username_7: I am interested in exploring this. I think we need to pass the `--platform` flag all the way down to here.
https://github.com/aws/aws-cdk/blob/c2852c9c524a639a312bf296f7f23b0e3b112f6b/packages/cdk-assets/lib/private/handlers/container-images.ts#L122-L129
username_11: This is the way I also chose, but it annoys me as I'm using the same multi-stage Dockerfile for the local environment (some developers are using the M1 chip, some others are using an x86 platform). This means every developer with an M1 chip locally needs to remember to manually edit their Dockerfile before a build.
Specifying the platform in CDK would be the perfect solution, something like this:
```python
# Using build_args (currently ignored)
task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
container_name="mytask",
container_port=5000,
image=ecs.ContainerImage.from_asset('.',
build_args={"--platform": "linux/x86-64"},
target="production"))
# Using a dedicated argument (currently not available)
task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
container_name="mytask",
container_port=5000,
image=ecs.ContainerImage.from_asset('.',
platform="linux/x86-64",
target="production"))
```
username_7: I just noticed an important issue
<img width="1035" alt="圖片" src="https://user-images.githubusercontent.com/278432/136338026-1c578375-1b4b-46f4-bb53-d176bd96c1e7.png">
I was using the same Dockerfile with the same base image, which supports multi-arch.
When I first run docker build with that base image, the docker daemon will pull the image with the correct platform and everything works great as expected. However, if I change `--platform` and build again, it seems docker will not pull new layers from the base image and will re-use the existing cached base image. In this case, the `--platform` flag will not work as expected, unless we `docker rmi` the base image and let it pull again.
This could be a major issue if users are using the same base image for different architectures.
Any comments?
username_8: I don't have the same behavior, I'm on `Docker version 20.10.8, build 3967b7d`. You?
username_8: @username_7 please see also #16858 where I suggest a "coupling" between the `lambda.Architecture` and the corresponding `platform`.
username_12: So, two things would be ideal, but each is also very valuable:
1. It would be great to expose a new variable like docker_args that would be expanded into the `docker ...` command (or a future API). This is similar to how someone above tried to use `build_args=['--platform', 'amd64/linux']`, but I understand that is currently expanded to BUILD TIME environment variables, which is different. The name is a bit confusingly overloaded.
2. At the same time, you could create an abstraction like `platforms=['amd64', 'arm64v8']` to automate specifying the platform explicitly, as well as potentially support multi-platform, which is near-impossible to do right now. This would also potentially work with non-docker or non-CLI container image building implementations.
username_13: For the use case of building an x86 lambda layer on an M1 Mac, after fighting with it for quite a while I realized there was a completely trivial solution...
I just changed
```
image: Runtime.NODEJS_14_X.bundlingImage,
```
to
```
image: DockerImage.fromRegistry(
`public.ecr.aws/sam/build-nodejs14.x:latest-x86_64`
),
```
username_2: @username_7 I am assigning this to you to follow up.
username_14: This issue has been renamed/repurposed a few times, but I think the CDK API should support passing _any_ arguments to `docker build`. Adding support only for `--platform` or `--pull` will just lead to endless similar requests.
**In our case, we need support for adding the `--output` arg to CDK's invocation of `docker build`.**
We use CDK to build and deploy an ARM64 image on an x86-based (i.e. AMD64) Linux EC2 instance. Only Docker desktop offers multi-platform builds out of the box, so we had to install it, as per [Docker's guide](https://docs.docker.com/buildx/working-with-buildx/#set-buildx-as-the-default-builder). We make `buildx` the default builder by running `docker buildx install` and then configuring a builder for `arm64`. Invoking `docker build ...` (like CDK does) is then actually re-routed to use `buildx` instead of the legacy Docker builder.
BUT `buildx` [by default doesn't publish the built image into the local registry](https://docs.docker.com/engine/reference/commandline/buildx_build/#output) and requires an additional `--output` flag (or a shorthand like `--load`) to actually store and use the image that was built. This is reflected by warnings shown by `buildx` when invoking `docker build` without setting the `--output` flag (or any of its shorthands like `--push` or `--load`):
```
No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
```
Because CDK `AssetImageProps` builder API doesn't allow us to set the output for the `docker build` step, it fails after building because it tries to tag an image which doesn't exist:
```
docker tag cdkasset-da3d3b814c3ba23b82b05bcac15892e50aa87059a01ba3dbada59dfef2369aaf foo.dkr.ecr.eu-west-1.amazonaws.com/cdk-hnb659fds-container-assets-foo-eu-west-1:da3d3b814c3ba23b82b05bcac15892e50aa87059a01ba3dbada59dfef2369aaf
Error response from daemon: No such image: cdkasset-da3d3b814c3ba23b82b05bcac15892e50aa87059a01ba3dbada59dfef2369aaf:latest
```
Workarounds like @[username_10's](https://github.com/username_10) https://github.com/aws/aws-cdk/issues/12472#issuecomment-904937818 are not an option for us, since we need to be able to synthesize our CDK stack without building all the Docker images (which requires long compilation etc).
Our current workaround is to rename the `docker` binary to `docker_real` and instead create a `docker` bash script on the path which adds the flag:
```sh
#!/usr/bin/env bash
if [ "$1" = "build" ]; then
echo "Adding --load param to docker build command"
docker_real build --load "${@:2}"
else
docker_real "$@"
fi
```
But this is obviously a hack and the proper solution would be for the CDK `AssetImageProps` builder to allow us to add the `--output` flag (or preferably: any flag) to its invocation of `docker build`.
Finally, as noted above by @[username_0](https://github.com/username_0), in https://github.com/aws/aws-cdk/issues/12472#issuecomment-938063302, Docker recently merged a PR to make `buildx` the default builder. Unless they changed the default behavior of `docker build` not outputting anything (idk, didn't check), this new Docker version would be unusable for building container images from CDK using `fromAsset`. |
MaibornWolff/codecharta | 568655031 | Title: TSLint: Object literal shorthand rule
Question:
username_0: # Feature request
It might be good to use a linting rule to automatically use the shorthand notation. It is nicer to read and to grasp the code that way.
_Originally posted by @username_1 in https://github.com/MaibornWolff/codecharta/pull/860_
https://palantir.github.io/tslint/rules/object-literal-shorthand/<issue_closed>
Status: Issue closed |
storybookjs/storybook | 1067609671 | Title: Storybook 6.4.1 fails to start with Typescript issue 12358 (won't compile modules in node_modules)
Question:
username_0: **Describe the bug**
When trying to start Storybook 6.4.1, I get this error:
ModuleBuildError: Module build failed (from ./node_modules/ts-loader/index.js):
Error: TypeScript emitted no output for <app-dir>/node_modules/@adp-wfn/mdf-components/index.ts. By default, ts-loader will not compile .ts files in node_modules.
You should not need to recompile .ts files there, but if you really want to, use the allowTsInNodeModules option.
See: https://github.com/Microsoft/TypeScript/issues/12358
at makeSourceMapAndFinish (<app-dir>/node_modules/ts-loader/dist/index.js:52:18)
at successLoader (<app-dir>/node_modules/ts-loader/dist/index.js:39:5)
at Object.loader (<app-dir>/node_modules/ts-loader/dist/index.js:22:5)
at processResult (<app-dir>/node_modules/webpack/lib/NormalModule.js:751:19)
at <app-dir>/node_modules/webpack/lib/NormalModule.js:853:5
at <app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:399:11
at <app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:251:18
at context.callback (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:124:13)
at makeSourceMapAndFinish (<app-dir>/node_modules/ts-loader/dist/index.js:52:9)
at successLoader (<app-dir>/node_modules/ts-loader/dist/index.js:39:5)
at Object.loader (<app-dir>/node_modules/ts-loader/dist/index.js:22:5)
at LOADER_EXECUTION (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:132:14)
at runSyncOrAsync (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:133:4)
at iterateNormalLoaders (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:250:2)
at <app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:223:4
at <app-dir>/node_modules/webpack/lib/NormalModule.js:827:15
at Array.eval (eval at create (<app-dir>/node_modules/webpack/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:12:1)
at runCallbacks (<app-dir>/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:27:15)
at <app-dir>/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:200:4
ModuleBuildError: Module build failed (from ./node_modules/ts-loader/index.js):
Error: TypeScript emitted no output for <app-dir>node_modules/@adp-wfn/mdf-core/index.ts. By default, ts-loader will not compile .ts files in node_modules.
You should not need to recompile .ts files there, but if you really want to, use the allowTsInNodeModules option.
See: https://github.com/Microsoft/TypeScript/issues/12358
at makeSourceMapAndFinish (<app-dir>/node_modules/ts-loader/dist/index.js:52:18)
at successLoader (<app-dir>/node_modules/ts-loader/dist/index.js:39:5)
at Object.loader (<app-dir>/node_modules/ts-loader/dist/index.js:22:5)
at processResult (<app-dir>/node_modules/webpack/lib/NormalModule.js:751:19)
at <app-dir>/node_modules/webpack/lib/NormalModule.js:853:5
at <app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:399:11
at <app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:251:18
at context.callback (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:124:13)
at makeSourceMapAndFinish (<app-dir>/node_modules/ts-loader/dist/index.js:52:9)
at successLoader (<app-dir>/node_modules/ts-loader/dist/index.js:39:5)
at Object.loader (<app-dir>/node_modules/ts-loader/dist/index.js:22:5)
at LOADER_EXECUTION (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:132:14)
at runSyncOrAsync (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:133:4)
at iterateNormalLoaders (<app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:250:2)
at <app-dir>/node_modules/loader-runner/lib/LoaderRunner.js:223:4
at <app-dir>/node_modules/webpack/lib/NormalModule.js:827:15
at Array.eval (eval at create (<app-dir>/node_modules/webpack/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:12:1)
at runCallbacks (<app-dir>/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:27:15)
at /Users/hoytk/git/wc-test/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:200:4
**To Reproduce**
I'll see if I can boil this down at all, but all I did was update from 6.3.12 to 6.4.1 and boom.
**System**
```
Environment Info:
System:
OS: macOS 11.6.1
CPU: (12) x64 Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz
[Truncated]
});
config.plugins.push(new CopyWebpackPlugin({
patterns: [
{ from: './src/images', to: './src/images' },
{ from: './contentPaneTest', to: './contentPaneTest' }
]
}));
// Resolutions to replace node.js libraries no longer added by default by webpack 5.
config.resolve.fallback.crypto = require.resolve('crypto-browserify');
config.resolve.fallback.stream = require.resolve('stream-browserify');
return merge(
config,
styleAssets({ suppressValidationPlugin: true })
);
}
};
```
Answers:
username_0: The solution was to update main.js to tell ts-loader that TypeScript in node_modules was OK:
```
config.module.rules.push({
test: /\.(ts|tsx)$/,
use: [
{
loader: require.resolve('ts-loader'),
options: {
allowTsInNodeModules: true
}
}
]
});
```
I'm still curious as to what might have changed (and it could be TypeScript 4.5), but I'm back in business.
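One related note (editorially added, not from the thread): ts-loader's `allowTsInNodeModules` option generally also expects the affected packages to be listed in the project's `tsconfig.json`, roughly like this (package paths taken from the traceback above; the rest is illustrative):
```json
{
  "include": [
    "src/**/*",
    "node_modules/@adp-wfn/mdf-components/**/*",
    "node_modules/@adp-wfn/mdf-core/**/*"
  ]
}
```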
Status: Issue closed
|
diddlesnaps/audacity | 603496336 | Title: Not installable
Question:
username_0: Can't install Audacity due to the missing plug `gtk-2-themes` which seems to be removed from the snap store.
Answers:
username_1: The plug you cite is provided by the snap called `gtk-common-themes`. This should be automatically installed when you install audacity. If it isn't then you likely have a bug in snapd. Please raise this issue with not automatically installing the relevant dependency snap on forum.snapcraft.io where the snapd developers can help.
username_0: According to the [yaml-file](https://github.com/diddlesnaps/audacity/blob/master/snap/snapcraft.yaml) the snap `gtk-2-themes` is needed; does the yaml file need to be updated?
snap find gtk-2-themes
Name Version Publisher Notes Summary
adapta-gtk-snap 0.2 kd913 - Adapta: An adaptive Gtk+ theme based on Material Design Guidelines.
username_0: Trying to install the snap `audacity` again I cannot reproduce this anymore, it installed successfully.
Status: Issue closed
|
thephpleague/commonmark | 661141913 | Title: PSR-14 compliance
Question:
username_0: The event system we use is very similar to [PSR-14](https://www.php-fig.org/psr/psr-14/). Perhaps we should make it fully PSR-compliant and allow users to optionally [use their own event dispatcher](https://packagist.org/providers/psr/event-dispatcher-implementation) if desired?
Answers:
username_1: Makes sense to me. No need to reinvent the wheel each time.
username_0: Duplicate of #436
Status: Issue closed
|
openshift/image-registry | 429531754 | Title: Problems with non AWS-S3 storage backend
Question:
username_0: We are experiencing lots of problems when using a rados-gw-based S3 backend (ceph luminous) for the image-registry (3.11). We are not able to push any images; pushing is aborted with an HTTP 500 error
When looking at the raw requests we see problems with multipart uploads.
After searching the web for some hours it seems that we are not the only ones having problems with rados-gw-s3.
It seems to be related to the AWS Go SDK, which didn't properly support backends other than AWS.
for example see the following issue for docker-registry https://github.com/docker/distribution/pull/2563
Any ideas / clues ?
Thank you
Answers:
username_1: In general, if API claims to be compatible with S3, but the official client doesn't work with it, either API or the client should be fixed. If updating aws-go-sdk is enough, we updated aws-go-sdk to 1.17.2 in 4.0. So the next release won't have this problem. You may also check other gateway APIs, for example Swift API.
Status: Issue closed
|
microsoft/PowerToys | 1173850348 | Title: FancyZones no longer retains window placement upon wake-up on multiple monitors
Question:
username_0: ### Microsoft PowerToys version
0.56.2
### Running as admin
- [X] Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
Using a 5 Zone setup within FancyZones to break up workspaces across two monitors, when one locks their machine and comes back to it, only the Zones on the primary display remain. All other windows need to be set back to their correct area. This repeats following every system lock.
### ✔️ Expected Behavior
Prior to the latest update, windows would remain in their respective Zones following placement.
### ❌ Actual Behavior
Every time I unlock my machine, all Zoned windows on the secondary monitor are lost and need to be set back into their correct Zone, which is undone when I lock the machine again
[PowerToysReport_2022-03-18-09-51-54.zip](https://github.com/microsoft/PowerToys/files/8306586/PowerToysReport_2022-03-18-09-51-54.zip)
.
### Other Software
_No response_
Answers:
username_1: I just wanted to add my experience to this, as it may give further insight into the issues.
- The issue seems to ignore primary monitor or system display ID
- The issue exists when waking a display port monitor as well as locking (I do not lock my system as the OP does)
- I have 3 monitors, each have 3 zones.
- It seems that most often, the windows are moved to their respective zones, just on the wrong monitor.
- Example:
- App starts on zone 3, screen 2.
- Computer turns off the screens.
- Wake up screens and app is now on zone 3, screen 1.
Info:
Version 0.56.2
Running as admin
Setup:
Left - AOC 1920x1080 (System Display 2)
Middle - Samsung G9 5120x1440 (Primary, System Display 1)
Right - AOC 1920x1080 (System Display 3) |
AccelerateWithOptane/lab | 400019266 | Title: Request access to an Optane SSD-powered bare metal server
Question:
username_0: If you are interested in filing a request for access to the Accelerate With Optane Community Lab for performance testing, optimization, and analysis, please fill out the details below. Contact <NAME> at <EMAIL> with questions.
### Name, email, company, job title
<NAME>, <EMAIL>, Senior Director of Engineering
<NAME>, <EMAIL>, Principal member of Technical Staff
*Note that projects with two or more participants are preferred.*
### Project Title and brief description
Incorta Analytics Server benchmark testing with Optane and IMDT
### How does the open source community benefit from your work?
Incorta is an in-memory analytic engine that provides super fast queries for BI tools.
We can test this technology and provide any feedback and issues to the community
### Is the code that you’re going to run 100% open source? If so, what is the URL or URLs where it is located?
No. It is a commercial software, not open source.
### Does the infrastructure provided meet your testing needs (see: https://www.acceleratewithoptane.com/access/)?
*Note that the configuration provided was created to enable testing flexibility across a range of potential use cases. Projects are expected to use one system due to limited supply. If additional resources are required, contact <EMAIL>*
Yes
### What performance-focused articles has your project published before?
*Is your project intensely interested in performance, especially where disk I/O is concerned? Have you written about it or shared results of testing? Please share anything that shows your focus.*
Incorta has two engines, in-memory and Apache Spark, to process queries.
One is very memory intensive and the other performs I/O-intensive operations.
We plan to run partial TPC-DS and TPC-H benchmarks against 1TB-scale data.
### Please state your contributions to the open source community and any other relevant initiatives
*Feel free to brag a little bit about yourself!*
I've been in Enterprise Software development for 20+ years. I'm currently leading a performance team and DevOps team at Incorta.
### Would you be willing to share your analysis and results publicly?
*We are interested in blog posts, meetups and conference presentations. Accelerate With Optane would be more than happy to host your blog posts or link to them, and may coordinate performance-oriented meetups and conferences. Are you open to sharing?*
It is not my priority. If we can find anything interesting to others, we might.
### Are you interested in testing Intel Optane SSDs with Intel Memory Drive Technology (IMDT)?
*IMDT extends system memory transparently by integrating Intel Optane SSD capacity into the memory subsystem. The systems provided have 192GB of DRAM but can be enabled with 1.4TB of software-defined memory while leaving one Intel Optane SSD still available for fast storage/caching usage. Check [here](https://www.intel.com/content/www/us/en/solid-state-drives/optane-ssd-dc-p4800x-mdt-brief.html) for more information on IMDT.*
Yes, this is a primary purpose of this request so that we can test our in-memory engine against 1TB+ data. |
emad-elsaid/rubyfunctions | 593335508 | Title: Follow Button
Question:
username_0: # Problem
- Users can't follow each other.
# Suggested Solution
- Add Twitter like following mechanism to allow users to follow each other.
Answers:
username_1: That's also a good feature, also one of the basic social feature, any idea about the implementation ?
username_0: I don't have a strong plan on how to implement this, but I'm thinking about one of the following packages:
- [acts_as_follower](https://rubygems.org/gems/acts_as_follower/versions/0.2.1)
- [socialization](https://rubygems.org/gems/socialization/versions/1.2.0)
What do you think?
username_1: Again this feature doesn't need gems to implement it, a simple implementation is a model for `follower, user` and that would be all
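For illustration, a minimal sketch of the join model username_1 describes (Rails-style; table and column names are illustrative, not taken from the repo):
```ruby
# db/migrate/xxxx_create_follows.rb (hypothetical migration)
class CreateFollows < ActiveRecord::Migration[6.0]
  def change
    create_table :follows do |t|
      t.references :follower, null: false, foreign_key: { to_table: :users }
      t.references :user, null: false, foreign_key: true
      t.timestamps
    end
    # one follow record per (follower, user) pair
    add_index :follows, %i[follower_id user_id], unique: true
  end
end

# app/models/follow.rb
class Follow < ApplicationRecord
  belongs_to :follower, class_name: 'User'
  belongs_to :user
end
```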
username_0: I like how you don't like to use gems in your projects :3
I will start developing this feature and make a pull request ASAP.
username_1: Nice, I recommend you open a draft pull request as soon as you have something, even just models, this way we can think together about the implementation
Status: Issue closed
username_1: # Problem
- Users can't follow each other.
# Suggested Solution
- Add Twitter like following mechanism to allow users to follow each other.
Status: Issue closed
|
RedHatInsights/insights-core | 683335898 | Title: Insights client run leaves temp directories under /tmp/
Question:
username_0: It seems that temporary directories are left over in /tmp. Two situations as per my observation:
* When a new egg is downloaded and updated, a temp directory is left over with the downloaded egg and gpg signature
* When the egg is not updated, it just leaves an empty temp directory in /tmp
Answers:
username_0: MR : https://github.com/RedHatInsights/insights-core/pull/2722
username_1: @username_0 is this fixed by #2722 ?
username_0: Yes this was fixed in that MR. Closing issue.
Status: Issue closed
|
backend-br/vagas | 355985555 | Title: [São Paulo] .NET Developer - 5A Attiva
Question:
username_0: ### Company
We have a strong presence in the corporate IT market, with an emphasis on outsourcing development professionals, project management, and systems solutions.
We are looking for professionals to join a distinguished team, with up-to-date developers who care about building solutions within the best market standards.
### Responsibilities
Development and maintenance of systems. Solid knowledge of .NET with SQL Server, Bootstrap, MVC, agile management, PostgreSQL (desirable). Knowledge of Amazon (AWS) solutions. Deep knowledge of analysis.
### Relevant Details
Location: Paulista
Contract type: PJ (contractor)
Level: Senior
If you are interested in the opportunity, send your CV to <EMAIL>
I believe I have an opportunity that fits your professional goals.
profusion/sgqlc | 289358665 | Title: Support @directive(arg: value)
Question:
username_0: Support directives, such as:
```graphql
query aQuery($ignoreX: Boolean) {
x @skip(if: $ignoreX)
}
```
Answers:
username_0: We do not handle the directives at all and I never stopped to think about what to do about them.
For instance, on the `schema`, we have projects like https://github.com/profusion/apollo-validation-directives/ where we could call local wrappers, like converting/validating types locally before sending the query or receiving the values (similar to the custom scalars).
On the `operation` (executable), we could mark statements with directives, such as `@skip()` above. Then let the server know how to process that information.
But how to integrate it is the cumbersome bit; maybe follow the `.__fields__()` pattern and provide a `.__directives__()` and let the user specify name + args?
username_1: I think directives are very important. They are used a lot in most schema-first databases like nexus, dgraph, etc..
I'll definitely look at how to integrate them (because I need them), but this might not be before May this year.
If you have any more concrete ideas until then, I'm happy to hear about them!
username_0: yes, we use a lot in our schema-first servers (that's why we created that apollo-validation-directives repo). But for clients, it's less frequently used, that's why I never bothered.
In the schema I'd use something like the `__directives__()` as I said, can create a `Directive()` that can be inherited, provide the name and arguments using what exists. And when interpreting the results, in addition to the scalar conversion we'd give it to the directive, allowing to do extra validations.
But to do operations, need to see if we could do some `__directives__()` as well. |
Volst/react-native-honeywell-scanner | 576972325 | Title: barcodeReadSuccess could not received any response
Question:
username_0: hardware button press for scanning
get only "Received data" could get event
```js
var isCompatible = HoneywellScanner.isCompatible
if (isCompatible) {
  HoneywellScanner.startReader().then((claimed) => {
    if (claimed) {
      setTimeout(() => {
        HoneywellScanner.on('barcodeReadSuccess', event => {
          alert('Received data', JSON.stringify(event));
        });
        HoneywellScanner.on('barcodeReadFail', () => {
          alert('Barcode read failed');
        });
      }, 1000);
    }
  });
  HoneywellScanner.on('barcodeReadFail', () => {
    alert('Barcode read failed');
  });
}
```
<issue_closed>
Status: Issue closed |
justinzm/gopup | 765504454 | Title: 阳西哪里有真实大保健(找特色服务-济南生活圈
Question:
username_0: 阳西哪里有真实大保健(找特色服务╋薇:781372524美女】时至岁末,由《斗罗大陆》原作者唐家三少担任总制作人,炫世唐门联手大神圈、攸乐科技等多个知名团队共同打造的手游《斗罗十年—龙王传说》再次为广大玩家献上全新内容——魂师精英赛。这一模式是基于原著世界观的全新PVP玩法,是玩家与玩家间的直接对抗。在这一模式中,策略将成为衡量玩家的重要标准,只有运用正确的策略,才能为玩家带来胜利。同时,这一模式也是制作团队为玩家们打造的“圆梦之地”。在这里,原著中不同时空的魂师可以组成独特阵容同场竞技一决高下,这也正是原著中“成长与试炼”这一主题的最佳呈现。手游《斗罗十年—龙王传说》宣传图无论电影、电视,还是游戏,为受众“圆梦”都是其重要的主题。手游《斗罗十年—龙王传说》问世至今,一直没有停下为玩家乃至所有“斗罗IP”受众圆梦的脚步。在这款作品之前的呈现的内容中,原著中的“成长”主题已经通过各种不同形式展示在了玩家面前,并获得了玩家的肯定,而原著中的另一主题“试炼”,却一直处于萌芽状态。现在,随着魂师精英赛的开启,雌伏已久的嫩芽终于开始茁壮成长,站上舞台的中央。魂师精英赛宣传图在新开版本中,所有开服满两周的服务器,等级达到40级的玩家即可报名参加魂师精英赛。单赛季时长为一周,成功报名的玩家将会与游戏中实力强劲的高手们过招切磋。按照三局两胜的赛事制度,布置战斗队伍,考虑魂师技能搭配,谋划出战策略。玩家只要灵活运用田忌赛马、李代桃僵的策略,即使面对强敌也可以逆转战局,取得胜利。在这一过程中,玩家不仅可以指挥史莱克七怪、海神阁诸神、各宗门名将等著名魂师,前世唐三、赵无极、玉小刚、唐晨等原著中着墨不多的魂师也已就位,只等玩家一声令下,便可组成独特阵容,以各自不同的绝学功法和魂力产生的精妙搭配,上阵杀敌。此外,游戏中 45级以上的玩家还可以体验到“龙王系统”玩法,即通过龙王的养成,为魂师提升属性,让魂师阵容拥有更高战力,为魂师精英赛提供助力。魂师精英赛宣传图精彩的玩法必然伴随让人惊喜的道具产出,新版本中绝学功法、龙魄龙云、十万年灵芝、暗器之法、罕见魂骨等道具的产出,会让玩家的斗罗之旅更加精彩。据悉,目前魂师精英赛模式已全面开启,期待各路魂师火速加入,体验这场跨越时空的圆梦之旅。核姆道门鸵https://github.com/justinzm/gopup/issues/12355?89748 <br />https://github.com/justinzm/gopup/issues/11689 <br />https://github.com/justinzm/gopup/issues/12546?086Y3 <br />https://github.com/justinzm/gopup/issues/12811?3rtP1 <br />https://github.com/justinzm/gopup/issues/12825?04050 <br /> |
SSAFY-5th-GwanJu-4C-Algorithms/Algorithm_basic | 986728105 | Title: [성애][Hash][September, Week 1][PGMS] Runner Who Did Not Finish
Question:
username_0: ### Solution approach
I forced a hash algorithm into this, so the solution came out a bit messy.. 💧
1) Declare a Map<String, Integer>
`String` is the runner's name, `Integer` is the number of runners with that name.
```java
Map<String, Integer> m = new HashMap();
```
<br>
2) Handle the runners who entered the marathon
Iterate over the participant array and put the values into the Map.
If the name is already present, increment the runner count by one.
```java
for(int i = 0; i < participant.length; i++){
if(m.containsKey(participant[i])) {
int cnt = m.get(participant[i])+1;
m.put(participant[i], cnt);
} else {
m.put(participant[i], 1);
}
}
```
<br>
3) Handle the runners who finished the marathon
Iterate over the completion array and decrement the count for that name by one.
```java
for(int i = 0; i < completion.length; i++){
int cnt = m.get(completion[i])-1;
m.put(completion[i], cnt);
}
```
<br>
4) Output the runner who did not finish the marathon
Iterate over the Map; if the Integer is not 0, that runner did not finish.
```java
String answer="";
for(String s : m.keySet()){
if(m.get(s) != 0){
answer = s;
break;
}
}
return answer;
```
<br>
### Full code
[Truncated]
m.put(participant[i], 1);
}
}
for(int i = 0; i < completion.length; i++){
int cnt = m.get(completion[i])-1;
m.put(completion[i], cnt);
}
String answer="";
for(String s : m.keySet()){
if(m.get(s) != 0){
answer = s;
break;
}
}
return answer;
}
}
```
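As an editorial aside (not part of the original thread): with Java 8+ the two counting loops can be collapsed using `Map.merge`, which keeps the same hash-based idea:
```java
import java.util.HashMap;
import java.util.Map;

class Solution {
    public String solution(String[] participant, String[] completion) {
        Map<String, Integer> m = new HashMap<>();
        for (String name : participant) m.merge(name, 1, Integer::sum);  // count entrants
        for (String name : completion) m.merge(name, -1, Integer::sum);  // subtract finishers
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            if (e.getValue() != 0) return e.getKey();                    // the one left over
        }
        return "";
    }
}
```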
Answers:
username_1: Wow, I'm going to forget all my Java syntax.
username_2: But if you solve it with a hash, it seems it inevitably ends up like this ㅠ 💁🏻♀️
username_3: Nice solution, thanks for sharing :)
username_4: Looks like you solved it pretty cleanly..? Nice work!
Status: Issue closed
|
frontend-labs/site | 103171998 | Title: Bug: The "load more" button does not work in categories
Question:
username_0: Scenario:
When entering a category, for example: http://frontendlabs.io/category/javascript
I scroll down, and when I reach the end I see the "load more" button, but this button does not load any more posts.
Answers:
username_0: Done!
Status: Issue closed
|
patternfly/patternfly-org | 610453224 | Title: Remove small lines in sidebar and footer of V4
Question:
username_0: ## Before Fix:


## After Fix:

<issue_closed>
Status: Issue closed |
desktop/desktop | 327115074 | Title: Clone of repository with private submodule fails
Question:
username_0: <!--
First and foremost, we’d like to thank you for taking the time to contribute to our project. Before submitting your issue, please follow these steps:
1. Familiarize yourself with our contributing guide:
* https://github.com/desktop/desktop/blob/master/.github/CONTRIBUTING.md#contributing-to-github-desktop
2. Check if your issue (and sometimes workaround) is in the known-issues doc:
* https://github.com/desktop/desktop/blob/master/docs/known-issues.md
3. Make sure your issue isn’t a duplicate of another issue
4. If you have made it to this step, go ahead and fill out the template below
-->
## Description
<!--
Provide a detailed description of the behavior you're seeing or the behavior you'd like to see **below** this comment.
-->
## Version
<!--
Place the version of GitHub Desktop you have installed **below** this comment. This is displayed under the 'About GitHub Desktop' menu item. If you are running from source, include the commit by running `git rev-parse HEAD` from the local repository.
-->
* GitHub Desktop:
1.2.0
<!--
Place the version of your operating system **below** this comment. The operating system you are running on may also help with reproducing the issue. If you are on macOS, launch 'About This Mac' and write down the OS version listed. If you are on Windows, open 'Command Prompt' and attach the output of this command: 'cmd /c ver'
-->
* Operating system:
Windows 7
## Steps to Reproduce
<!--
List the steps to reproduce your issue **below** this comment
ex,
1. `step 1`
2. `step 2`
3. `and so on…`
-->
Clone URL https://github.com/HasKha/GWToolboxpp.git into C:\foo
**Authentication failed** dialog comes up (consistent with commandline git).
After entering my username and password:
fatal: destination path 'C:\foo' already exists and is not an empty directory.
### Expected Behavior
<!-- What you expected to happen -->
Can select repository in the drop-down.
### Actual Behavior
<!-- What actually happens -->
Repository absent from the list of selectable repositories
## Additional Information
<!--
Place any additional information, configuration, or data that might be necessary to reproduce the issue **below** this comment.
If you have screen shots or gifs that demonstrate the issue, please include them.
[Truncated]
Attach your log file (You can simply drag your file here to insert it) to this issue. Please make sure the generated link to your log file is **below** this comment section otherwise it will not appear when you submit your issue.
macOS logs location: `~/Library/Application Support/GitHub Desktop/logs/*.desktop.production.log`
Windows logs location: `%APPDATA%\GitHub Desktop\logs\*.desktop.production.log`
The log files are organized by date, so see if anything was generated for today's date.
-->
```
Cloning into 'C:/foo/Dependencies/GWCA'...
remote: Repository not found.
fatal: repository 'https://github.com/GregLando113/GWCA.git/' not found
fatal: clone of 'https://github.com/GregLando113/GWCA.git' into submodule path 'C:/foo/Dependencies/GWCA' failed
Failed to clone 'Dependencies/GWCA' a second time, aborting
(The error was parsed as 8: The repository does not seem to exist anymore. You may not have access, or it may have been deleted or renamed.)
2018-05-28T20:41:59.255Z - info: [ui] storing generic credentials for 'github.com' and 'username_0'
2018-05-28T20:41:59.265Z - info: [ui] [AppStore.getAccountForRemoteURL] account found for remote: https://github.com/HasKha/GWToolboxpp.git - username_0 (has token)
2018-05-28T20:41:59.302Z - error: [ui] `git -c credential.helper= clone --recursive --progress -- https://github.com/HasKha/GWToolboxpp.git C:\foo` exited with an unexpected code: 128.
fatal: destination path 'C:\foo' already exists and is not an empty directory.
```
Answers:
username_1: @username_0 thanks for the report. We're already tracking this in #3242 - please follow along with that.
Status: Issue closed
|
seanmonstar/warp | 350611775 | Title: Relation to tower-web
Question:
username_0: I just read your blog post that introduces the warp library, so I'm quite intrigued now as to how things will develop now with tower and tower-web quickly entering the stage. I am certainly attracted by the design philosophy of warp, but I do want to know how you see it in relation to tower-web, which you mentioned in passing. From an issue you have open here, it would seem the `Filter` and `Service` traits are somehow analogous? I'm presuming neither tower-web nor warp will ever include the other as a dependency, but what will the relation between them be, going forwards? Direct competitors?
Answers:
username_1: Yep, the README and the docs both point out that warp is built on top of hyper and thus supports asynchronous request handling.
username_0: I just looked at the docs closer, and I think I see how you do it now... I wanted to check, because some web frameworks use hyper internally, but don't expose any async stuff really (even if they use it internally?). But I'm glad!
username_1: The goal is that functionality is available in both, and the choice comes down to personal taste of how you prefer structure web apps.
username_2: The current state of these projects is going to the wrong direction (as I think).
1) https://github.com/carllerche/tower-web/tree/master/src/middleware
2) https://github.com/username_1/warp/pull/73
Both `warp` and `tower-web` are starting to develop their web frameworks, which breaks the main `finagle` rule - "all components just a small functions which implement a simple contract Request -> Future<Response>". At this moment, `warp` has Filter abstraction and `tower-web` has a Middleware abstraction, which are going to duplicate the same functionality.
So both `warp` and `tower-web` should implement only routing solution and content negotiation, all other things such as CorsFilter, ServerStatsFilter, RequestLogFilter, etc have to be in `tower-http` services.
Here an example which that could be (using LiftService)
```
let endpoint = warp::any().map(|| "ok");
let new_service = endpoint.lift()
.and_then(Cors::new)
.and_then(RequestLog::new)
.and_then(HttpStats::new)
.and_then(HttpTracing::new);
server:run(new_service);
```
username_1: @username_2 I agree with the feeling, there are some things that we could be duplicating effort, and we do want to reduce that. I think that providing the ability to plug in any `Service` with warp is a very useful step. It *might* make sense for there to be "functional" constructors of these things in warp, that basically just use the `Service`, I'm unsure. It also may be useful to add functionality in warp before such a `Service` exists.
username_2: ### A way to integrate with tower-web
At the current time, warp duplicates many things in tower-web, the most important of them being middlewares. Thus warp's filter abstraction should be revisited.
In tower-web, the resource is used for matching requests and returning responses. Thus warp's filter should be another kind of resource, and warp itself should be "combinators for constructing resources for tower-web", not yet another web framework.
Required changes at warp:
1. The filter should implement a trait "IntoResource".
2. The filter should use response and content negotiation from the tower-web instead current "::Response"
3. The filter should use an error implementation from tower-web (currently it's not possible).
Required changes at tower-web (later)
1. The "Error", currently it has only three error kinds and nothing else. It would be better to add "status" and "cause".
At finally it would allow writing code like this:
```rust
impl_web! {
impl HelloWorld {
#[get("/tower-web")]
fn hello_world(&self) -> Result<HelloResponse, ()> {
Ok(HelloResponse)
}
}
}
fn warp_routes() -> impl Filter<...> {
warp::get().and(warp::path("warp")).map(|| "hello world")
}
fn main() {
ServiceBuilder::new()
.resource(HelloWorld)
.resource(warp_routes())
.run(&addr)
.unwrap();
}
```
Another good change, which not related to the tower-web, but helps keep the warp small and simple as possible. Switch to use "hlist" implementation from the "frunk" crate. It already has necessary structures and traits for working with heterogeneous lists.
Proposed traits:
```rust
/// Filter conversion into tower-web resource.
trait IntoResource {
type Output: Resource;
fn into_resource(self) -> Self::Output;
}
/// Rejection conversion into tower-web error.
impl Into<Error> for Rejection {...};
/// The filter which uses frunk's hlists for Input and Output.
trait Filter<INPUT: Hlist> {
type Extract: Hlist;
type Output; // usually would be <INPUT as Add<Self::Extract>>::Output;
type Future: Future<Item = Self::Output, Error = Rejection>;
fn call(&self) -> Self::Future;
}
```
To: @username_1
username_3: since tower-web is still using futures 0.1 and tower::Service 0.1, should we port its middlewares to warp?
JonnyBeeGod/nSuns-5-3-1-iOS-issues | 367071193 | Title: Import / Export Functionality
Question:
username_0: **Did you look whether there is already an existing issue for your request?**
I want to be able to export all my workout data, including custom exercises and the like, and import it again. This would be useful for backup purposes, transferring to a new phone and debugging of user issues
**Is your feature request related to a problem? Please describe.**
At the moment the workout data is backed up and restored via the backup mechanisms of iOS. When transferring to a new phone this data is not being transferred.
Also I have no solution currently in place to reproduce weird behavior of the app from user reports because their phone is in a different state with different workout data.
**Describe the solution you'd like**
Ideally there should be a way to make a simple export in file format (csv / json) and reimport this data again. Data migration is a problem which drives complexity very much, so it is sufficient to export / import only for the same app version. CSV would have the additional benefit of being human readable.
**Describe alternatives you've considered**
there are none
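As an illustration of the CSV/JSON export idea above (editorial sketch; type and property names are hypothetical, not taken from the app):
```swift
import Foundation

// Hypothetical record type; the real app would map its own workout model here.
struct WorkoutRecord: Codable {
    let date: Date
    let exercise: String
    let weight: Double
    let reps: Int
}

func exportWorkouts(_ records: [WorkoutRecord]) throws -> Data {
    let encoder = JSONEncoder()
    encoder.dateEncodingStrategy = .iso8601
    encoder.outputFormatting = .prettyPrinted
    return try encoder.encode(records)          // JSON blob suitable for backup/transfer
}

func importWorkouts(from data: Data) throws -> [WorkoutRecord] {
    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601
    return try decoder.decode([WorkoutRecord].self, from: data)
}
```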
Answers:
username_1: this is something relevant for me atm, because i just bought a new phone. is there a way right now to export my workout data from an old iphone and import them to the new one?
Status: Issue closed
|
naser44/1 | 141038932 | Title: Changes that occur to hair and nails during pregnancy
Question:
username_0: <a href="http://ift.tt/1Pacjzv">Changes that occur to hair and nails during pregnancy</a> |
matlab2tikz/matlab2tikz | 72946834 | Title: Check whether x/y varies makes sense for 3D plots
Question:
username_0: Instead of #629, we should try if we can come up with some heuristics or code to determine whether it makes sense to put `x varies`(default) or `y varies` in 3D plots.
The difference between both (as shown in #629, from where I've shamelessly stolen the graphs) is huge and obviously data dependent.

vs

However, I'm unsure whether doing the proper checks is easy (and fast). If so, I think it's better to have some common sense heuristics that at least fixes something like #629 and maybe some other "reasonable" cases.
Answers:
username_1: Before taking action, I personally would like to see an example that produces this issue with Matlab code.
username_0: @username_1 You are right. It's probably best to even introduce a test case for this. I imagine two subplots where the left one benefits from `x varies`, the right one from `y varies`.
username_1: Exactly. I am not yet convinced that `m2t` is misbehaving.
username_0: Well, I'm certain it is not `m2t` that is misbehaving in those plots. But it's rather a limitation of `pgfplots`/`TikZ` and how it renders 3D plots.
But, as `pgfplots` has the option to work around this problem, we can at least try if we can make use of it. |
SammehFFXI/FFXIAddons | 279212868 | Title: Not an issue, but stupid lol
Question:
username_0: How do I get the addon to use my MDT set when the spell is actually cast?
I turned it on, it works, but if I take an action after it starts casting and before it finishes casting it appears my action overrides the MDT action before it's actually landed on me.
ex;
Mob Begins cast > equips MDT > provoke (changes to TH set) > goes back to engaged set > flare is cast in engaged set instead of MDT set
Answers:
username_1: you could use a GS command to equip your MDT set and lock it in place till the spell is cast. And I think for the turn-around you just can't be locked onto the mob; at least that's how it works with the run-away stuff
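For illustration (set names are placeholders and depend on the user's GearSwap lua), the kind of console commands username_1 is describing:
```
//gs equip sets.defense.MDT   swap to the MDT set
//gs disable all              lock all slots until the spell resolves
//gs enable all               unlock afterwards
```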
username_0: TH Tag system overrides //gs disable all and I built TH into every job |
itforge-eros/soa2019-group5 | 439578526 | Title: Should be able to create a new memo with data
Question:
username_0: The current endpoint doesn't accept memo body. This creates an unnecessary overhead on the client: create an empty memo first, wait for its ID and then send another request to update it.
`POST` request on `/memos` should accept memo body.
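For illustration (the field names are hypothetical, not from the service's schema), the request could look like:
```http
POST /memos
Content-Type: application/json

{
  "title": "Meeting notes",
  "content": "Initial memo body"
}
```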
Answers:
username_1: I have added `*` keyword in search feature to fetch all memos just for developing in case you don't know what to search. So you can just do `GET /search/*`.
username_1: I also temporary removed authentication from `/search` endpoint
username_1: Done
Status: Issue closed
|
huggingface/transformers | 874648604 | Title: `TypeError: TextInputSequence must be str` from Fast Tokenizer
Question:
username_0: ### Bug
On version 4.5.1, trying to use fast tokenizer for Roberta, and got the above error. Weirdly, saw `transformers/models/gpt2/tokenization_gpt2_fast.py` in the traceback even no `gpt2` models are involved.
**Script to reproduce**
```
from transformers import AutoTokenizer
from transformers.data.processors.utils import InputExample
if __name__ == "__main__":
tokenizer = AutoTokenizer.from_pretrained(
"roberta-base", use_fast=True
)
MAX_LENGTH = 256
LABEL_LIST = [0, 1]
OUTPUT_MODE = "classificaiton"
inputs = ["Ututu goes public.", "This moon is huge."]
examples = [InputExample(guid=str(index), text_a=text, label=None) for index, text in enumerate(inputs)]
# this throws TypeError: TextInputSequence must be str
batch_encoding = tokenizer(
[(example.text_a, example.text_b) for example in examples],
max_length=MAX_LENGTH,
padding='max_length',
truncation=True
)
print(batch_encoding)
```
**Traceback**
```
Traceback (most recent call last):
File "tests/integration/tmp_test.py", line 22, in <module>
truncation=True
File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2271, in __call__
**kwargs,
File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2456, in batch_encode_plus
**kwargs,
File "/usr/local/lib/python3.7/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 163, in _batch_encode_plus
return super()._batch_encode_plus(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 388, in _batch_encode_plus
is_pretokenized=is_split_into_words,
TypeError: TextInputSequence must be str
```
Saw a similar issue raised for QnA pipeline, but I'm not doing QnA here. Thoughts?
Answers:
username_1: You're sending this to your tokenizer: `[(example.text_a, example.text_b) for example in examples]`
But this is: `[('Ututu goes public.', None), ('This moon is huge.', None)]`.
A tokenizer cannot handle `None`, only text. |
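A minimal adjustment based on that explanation (editorial sketch, not from the thread): since `text_b` is `None` for every example in this repro, pass plain strings instead of `(text_a, None)` pairs:
```python
texts = [example.text_a for example in examples]  # text_b is None here, so use single sequences

batch_encoding = tokenizer(
    texts,
    max_length=MAX_LENGTH,
    padding="max_length",
    truncation=True,
)
```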
SkeLLLa/fastify-oas | 578775528 | Title: Option to disable UIs but not the yaml/json spec
Question:
username_0: Hi,
If it's possible, I can't find how to do it, so I'd be grateful for a pointer:
I would like the server to serve the OAS json/yaml, but not the SwaggerUI/ReDoc UIs.
Is it supported?
Thanks
Answers:
username_1: @username_0 no, it's not supported yet. But I think it will be not so hard to add.
If you pass an object like
```js
{
ui: true,
json: true,
yaml: true
}
```
in `exposeRoute ` https://github.com/username_1/fastify-oas/blob/master/lib/openapi/index.js#L20
and check those params in https://github.com/username_1/fastify-oas/blob/master/lib/routes.js#L4 it would do the trick.
Would you like to send PR?
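Assuming the option shape sketched above is what eventually landed, registration could look roughly like this (illustrative only, not taken from the docs):
```js
const fastify = require('fastify')();

fastify.register(require('fastify-oas'), {
  routePrefix: '/documentation',
  exposeRoute: { ui: false, json: true, yaml: true }, // shape proposed above
  swagger: { info: { title: 'api', version: '1.0.0' } },
});
```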
username_1: Added in 2.7.0.
Status: Issue closed
|
rbuchberger/jekyll_picture_tag | 553070620 | Title: how to lossless, -define webp:lossless=true ?
Question:
username_0: How can this plugin do:
```
magick file.png -quality 50 -define webp:lossless=true file.webp
```
??
Answers:
username_1: You can set image quality, which is documented here:
https://username_1.github.io/jekyll_picture_tag/presets (ctrl+f quality)
There's no option to set lossless compression, though it wouldn't be hard to add a setting for it. I'll look at doing that, or maybe I'll add the ability to pass arbitrary commands along to imagemagick.
username_0: Either way, will be fine.
You have this [minimagick issue for lossless](https://github.com/minimagick/minimagick/issues/499) if you need some help
Status: Issue closed
|
ericniebler/range-v3 | 143093010 | Title: Constructing a range from a reference fails with cryptic error
Question:
username_0: I followed the [quick start example](https://username_1.github.io/range-v3/index.html#tutorial-quick-start) to build a custom range and noticed that supplying data from a pointer, e.g. `T const& get() const { return m_data->at(m_current_index); }` works, but using a reference fails with a long template error, e.g. `T const& get() const { return m_data.at(m_current_index); }`.
I've been reading the sources for the whole day and just don't understand a) if it's by design that only pointers are allowed and b) how the access in my class changes the template resolution while the method signature stays the same.
If this is indeed a pattern by design, I would suggest to notify the user at compile-time of the source of the error. I only found it by successively cutting down the offending code.
Attached is a minimal example. Calling ranges::begin on PtrRange succeeds (as does RANGES_FOR), calling it on RefRange fails with the attached error log in resolving `using _t = typename T::type;` in [`include/meta/meta.hpp:140:9`](https://github.com/username_1/range-v3/blob/85dbf819fa267bc27e0ab0d04cba8cab2dbb0e81/include/meta/meta.hpp#L140)
Compile with
`clang++ -std=c++1z -Irange-v3/include/ source.cpp`
or `g++ -std=c++1z -Irange-v3/include/ source.cpp`
```c++
#include <vector>
#include <range/v3/all.hpp>
using namespace ranges;
class PtrRange: public view_facade<PtrRange> {
friend range_access;
std::vector<float>* m_data;
std::size_t m_current_index;
float const& get() const { return m_data->at(m_current_index); }
bool done() const { return m_current_index>=m_data->size(); }
void next() { ++m_current_index; }
public:
PtrRange() = default;
explicit PtrRange(std::vector<float>* vec):
m_data(vec),
m_current_index(0) {}
};
class RefRange: public view_facade<RefRange> {
friend range_access;
std::vector<float>& m_data;
std::size_t m_current_index;
float const& get() const { return m_data.at(m_current_index); }
bool done() const { return m_current_index>=m_data.size(); }
void next() { ++m_current_index; }
public:
RefRange() = default;
explicit RefRange(std::vector<float>& vec):
m_data(vec),
m_current_index(0) {}
};
int main() {
std::vector<float> vec {1,2,3,4,5};
// Constructing a range from a pointer works
auto begin_ptrrange = begin(PtrRange{&vec});
// Constructing a range from a reference fails with:
// ...range-v3/include/meta/meta.hpp:140:9: error: no type named 'type' in 'ranges::v3::concepts::most_refined<meta::v1::list<ranges::v3::range_access::RandomAccessCursor, ranges::v3::range_access::BidirectionalCursor, ranges::v3::range_access::ForwardCursor, ranges::v3::range_access::InputCursor, ranges::v3::range_access::Cursor>, RefRange>'
auto begin_refrange = begin(RefRange{vec});
return 0;
}
```
Compilation log of clang++:
```
In file included from source.cpp:2:
[Truncated]
auto iter_cat(range_access::RandomAccessCursor*) ->
^
range-v3/include/range/v3/utility/basic_iterator.hpp:502:13: error: static_assert failed "Concept check failed"
CONCEPT_ASSERT(detail::Cursor<Cur>());
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
range-v3/include/range/v3/utility/concepts.hpp:883:29: note: expanded from macro 'CONCEPT_ASSERT'
#define CONCEPT_ASSERT(...) static_assert((__VA_ARGS__), "Concept check failed")
^ ~~~~~~~~~~~~~
range-v3/include/range/v3/begin_end.hpp:90:39: note: in instantiation of template class 'ranges::v3::basic_iterator<RefRange,
ranges::v3::default_end_cursor>' requested here
noexcept(noexcept(begin(static_cast<Rng &&>(rng)))) ->
^
range-v3/include/range/v3/begin_end.hpp:107:56: note: in instantiation of exception specification for 'impl<RefRange>' requested here
detail::decay_t<decltype(begin_fn::impl(static_cast<Rng &&>(rng), 0))>
^
source.cpp:42:32: note: while substituting deduced template arguments into function template 'operator()' [with Rng = RefRange]
auto begin_refrange = begin(RefRange{vec});
^
3 errors generated.
```
Answers:
username_1: The problem is here:
```c++
std::vector<float>& m_data;
```
A member of this type makes your view not copy assignable. All views must be copy assignable. I apologize for the terrible error message. I'll see what I can do to improve that.
Status: Issue closed
username_0: That makes perfect sense. I've been really stumped with that error. Thanks so much for your response.
username_2: I had a similar problem, so I changed the reference to a pointer, but I cannot get it to work. I don't get it. It looks like the example in the documentation.
```
class c_string_range : public ranges::view_facade<c_string_range>
{
friend ranges::range_access;
std::vector<NodeTime>* route;
std::pair<const NodeTime&, const NodeTime&> read() const { return std::pair((*route)[0], (*route)[1]);}
bool equal(ranges::default_sentinel) const { return true;}
void next() {}
public:
c_string_range() : route(nullptr){}
explicit c_string_range(std::vector<NodeTime>& r) : route(&r){}
c_string_range(c_string_range&& other) : route(other.route){}
c_string_range(const c_string_range& other) : route(other.route){}
};
/usr/local/include/meta/meta.hpp:140:9: error: no type named 'type' in 'ranges::v3::concepts::most_refined<meta::v1::list<ranges::v3::range_access::RandomAccessCursor, ranges::v3::range_access::BidirectionalCursor, ranges::v3::range_access::ForwardCursor, ranges::v3::range_access::InputCursor, ranges::v3::range_access::Cursor>, c_string_range>'
using _t = typename T::type;
^~~~~
/usr/local/include/range/v3/range_access.hpp:402:13: note: in instantiation of template type alias '_t' requested here
using cursor_concept_t = meta::_t<cursor_concept<T>>;
^
/usr/local/include/range/v3/utility/basic_iterator.hpp:301:50: note: in instantiation of template type alias 'cursor_concept_t' requested here
using cursor_concept_t = detail::cursor_concept_t<Cur>;
^
/usr/local/include/range/v3/utility/basic_iterator.hpp:363:13: note: in instantiation of template class 'ranges::v3::detail::iterator_associated_types_base<c_string_range, true>' requested here
, detail::iterator_associated_types_base<Cur>
^
matcher.h:360:35: note: in instantiation of template class 'ranges::v3::_basic_iterator_::basic_iterator<c_string_range>' requested here
for (auto& [pick_up, drop_off]: c_string_range(container)) {
^
In file included from main.cpp:4:
In file included from graph.h:10:
In file included from /usr/local/include/range/v3/all.hpp:17:
In file included from /usr/local/include/range/v3/core.hpp:17:
In file included from /usr/local/include/range/v3/begin_end.hpp:27:
In file included from /usr/local/include/range/v3/utility/iterator.hpp:28:
/usr/local/include/range/v3/utility/basic_iterator.hpp:320:30: error: no matching function for call to 'iter_cat'
decltype(detail::iter_cat(_nullptr_v<cursor_concept_t>()));
^~~~~~~~~~~~~~~~
/usr/local/include/range/v3/utility/basic_iterator.hpp:363:13: note: in instantiation of template class 'ranges::v3::detail::iterator_associated_types_base<c_string_range, true>' requested here
, detail::iterator_associated_types_base<Cur>
^
matcher.h:360:35: note: in instantiation of template class 'ranges::v3::_basic_iterator_::basic_iterator<c_string_range>' requested here
for (auto& [pick_up, drop_off]: c_string_range(container)) {
^
/usr/local/include/range/v3/utility/basic_iterator.hpp:272:18: note: candidate function not viable: no known conversion from 'int *' to 'range_access::InputCursor *' for 1st argument
auto iter_cat(range_access::InputCursor *) ->
^
/usr/local/include/range/v3/utility/basic_iterator.hpp:274:18: note: candidate function not viable: no known conversion from 'int *' to 'range_access::ForwardCursor *' for 1st argument
auto iter_cat(range_access::ForwardCursor *) ->
^
/usr/local/include/range/v3/utility/basic_iterator.hpp:276:18: note: candidate function not viable: no known conversion from 'int *' to 'range_access::BidirectionalCursor *' for 1st argument
auto iter_cat(range_access::BidirectionalCursor *) ->
^
/usr/local/include/range/v3/utility/basic_iterator.hpp:278:18: note: candidate function not viable: no known conversion from 'int *' to 'range_access::RandomAccessCursor *' for 1st argument
auto iter_cat(range_access::RandomAccessCursor *) ->
^
/usr/local/include/range/v3/utility/basic_iterator.hpp:370:13: error: static_assert failed due to requirement 'detail::Cursor<c_string_range>()'
CONCEPT_ASSERT(detail::Cursor<Cur>());
^ ~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/range/v3/utility/concepts.hpp:682:24: note: expanded from macro 'CONCEPT_ASSERT'
#define CONCEPT_ASSERT static_assert
^
matcher.h:360:35: note: in instantiation of template class 'ranges::v3::_basic_iterator_::basic_iterator<c_string_range>' requested here
for (auto& [pick_up, drop_off]: c_string_range(container)) {
^
matcher.h:360:35: note: when looking up 'begin' function for range expression of type 'c_string_range'
for (auto& [pick_up, drop_off]: c_string_range(container)) {
```
username_2: Ok, thanks. Maybe the semiregular concept should be stressed in the documentation.
username_2: I forgot I actually want the pointed-to vector constant (so the member looks like `const std::vector<NodeTime>* route`).
It (for me unexpectedly) doesn't work if the constructor takes `const std::vector<NodeTime>& r`, but it works if the constructor takes `const std::vector<NodeTime>* r`. AFAIK the rules for implicitly defined constructors/assignments don't care about arguments of non-(Special member functions).
```
class c_string_range : public ranges::view_facade<c_string_range>
{
friend ranges::range_access;
const std::vector<NodeTime>* route;
std::pair<const NodeTime&, const NodeTime&> read() const { return std::pair((*route)[0], (*route)[1]);}
bool equal(ranges::default_sentinel) const { return true;}
void next() {}
public:
c_string_range() : route(nullptr){}
explicit c_string_range(const std::vector<NodeTime>& r) : route(std::addressof(r)){}
};
```
doesn't compile (a similar error) but changing the second constructor to
```
explicit c_string_range(const std::vector<NodeTime>* r) : route(r){}
```
does. |
matplotlib/matplotlib | 254532394 | Title: broken links in docs
Question:
username_0: See details for linkchecker run:
<details>
```
00:00 $ linkchecker index.html --check-extern
LinkChecker 9.3 Copyright (C) 2000-2014 <NAME>
LinkChecker comes with ABSOLUTELY NO WARRANTY!
This is free software, and you are welcome to redistribute it
under certain conditions. Look at the file `LICENSE' within this
distribution.
Get the newest version at http://wummel.github.io/linkchecker/
Write comments and bugs to https://github.com/wummel/linkchecker/issues
Support this project at http://wummel.github.io/linkchecker/donations.html
Start checking at 2017-09-01 00:00:18-004
10 threads active, 2007 links queued, 356 links in 29 URLs checked, runtime 1 seconds
URL `Matplotlib.pdf'
Name `PDF'
Parent URL file:///home/tcaswell/source/p/matplotlib/doc/build/html/contents.html, line 144, col 13
Real URL file:///home/tcaswell/source/p/matplotlib/doc/build/html/Matplotlib.pdf
Check time 0.000 seconds
Result Error: URLError: <urlopen error [Errno 2] No such file or directory: '/home/tcaswell/source/p/matplotlib/doc/build/html/Matplotlib.pdf'>
/tmp/lnkchk/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
/tmp/lnkchk/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
/tmp/lnkchk/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
/tmp/lnkchk/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
10 threads active, 9574 links queued, 4205 links in 118 URLs checked, runtime 6 seconds
/tmp/lnkchk/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
9 threads active, 13292 links queued, 9772 links in 222 URLs checked, runtime 11 seconds
10 threads active, 14654 links queued, 13406 links in 294 URLs checked, runtime 16 seconds
10 threads active, 15561 links queued, 15497 links in 331 URLs checked, runtime 21 seconds
9 threads active, 16761 links queued, 17462 links in 387 URLs checked, runtime 26 seconds
10 threads active, 17364 links queued, 19902 links in 456 URLs checked, runtime 31 seconds
10 threads active, 18166 links queued, 22264 links in 522 URLs checked, runtime 36 seconds
8 threads active, 13473 links queued, 30157 links in 563 URLs checked, runtime 41 seconds
10 threads active, 14209 links queued, 33583 links in 674 URLs checked, runtime 46 seconds
9 threads active, 14760 links queued, 35995 links in 777 URLs checked, runtime 51 seconds
10 threads active, 15226 links queued, 38387 links in 873 URLs checked, runtime 56 seconds
9 threads active, 15740 links queued, 40672 links in 954 URLs checked, runtime 1 minute, 1 seconds
8 threads active, 16064 links queued, 42998 links in 1057 URLs checked, runtime 1 minute, 6 seconds
10 threads active, 16491 links queued, 45622 links in 1152 URLs checked, runtime 1 minute, 11 seconds
10 threads active, 16868 links queued, 47942 links in 1249 URLs checked, runtime 1 minute, 16 seconds
10 threads active, 17274 links queued, 50275 links in 1340 URLs checked, runtime 1 minute, 21 seconds
10 threads active, 17748 links queued, 52775 links in 1446 URLs checked, runtime 1 minute, 26 seconds
10 threads active, 18061 links queued, 55449 links in 1547 URLs checked, runtime 1 minute, 31 seconds
9 threads active, 18355 links queued, 57871 links in 1637 URLs checked, runtime 1 minute, 36 seconds
10 threads active, 18650 links queued, 60762 links in 1759 URLs checked, runtime 1 minute, 41 seconds
9 threads active, 18930 links queued, 64076 links in 1871 URLs checked, runtime 1 minute, 46 seconds
9 threads active, 19004 links queued, 65831 links in 1946 URLs checked, runtime 1 minute, 51 seconds
10 threads active, 19643 links queued, 69625 links in 2057 URLs checked, runtime 1 minute, 56 seconds
10 threads active, 19988 links queued, 74425 links in 2189 URLs checked, runtime 2 minutes, 1 seconds
10 threads active, 20211 links queued, 78073 links in 2272 URLs checked, runtime 2 minutes, 6 seconds
10 threads active, 12932 links queued, 87672 links in 2319 URLs checked, runtime 2 minutes, 11 seconds
[Truncated]
1 thread active, 0 links queued, 122255 links in 12076 URLs checked, runtime 19 minutes, 16 seconds
URL `http://cairographics.org/pycairo/'
Name `pycairo bindings'
Parent URL file:///home/tcaswell/source/p/matplotlib/doc/build/html/users/prev_whats_new/whats_new_1.4.html, line 559, col 21
Real URL http://cairographics.org/pycairo/
Check time 193.003 seconds
Result Error: ReadTimeout: HTTPConnectionPool(host='cairographics.org', port=80): Read timed out. (read timeout=60)
Statistics:
Downloaded: 42.3MB.
Content types: 11969 image, 97529 text, 0 video, 0 audio, 12716 application, 13 mail and 29 other.
URL lengths: min=16, max=169, avg=79.
That's it. 122256 links in 12077 URLs checked. 0 warnings found. 44 errors found.
Stopped checking at 2017-09-01 00:19:35-004 (19 minutes, 16 seconds)
```
</details><issue_closed>
Status: Issue closed |
beetbox/beets | 190529913 | Title: Configure metadata (itemfields) to be used
Question:
username_0: I'm sorry if this has already been discussed, but I haven't found anything related. And I'm very new to beets, so I wouldn't have been notified of any prior discussion (I heard about beets years ago, but because I'm a bit obsessive about my file organization, I was still doing things by hand with the help of ExFalso, until I finally jumped in today).
My issue is that beets writes way too much metadata to my tracks (relative to my taste). I would like to configure which metadata are used/written, and also have them not written if they are empty or 0.
For instance I don’t care about ASIN, BPM, RELEASECOUNTRY and things like that, so I would like to disable writing/use of them by beets (and it would be nice to have a way to remove a given itemfields/metadata from every track in my collection).
Other than that, beets is quite an awesome piece of software, keep up the good work!
Answers:
username_1: Hi! Have you checked out the `zero` plugin?
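For reference, a minimal config sketch for the `zero` plugin (the field list is illustrative):
```yaml
plugins: zero

zero:
  fields: asin bpm releasecountry
```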
username_0: Admit having missed it. When it says “null”, does that mean empty them, set them to 0, or remove them? Last one would be quite awesome.
Status: Issue closed
username_1: Great! However, no, it can't currently remove tags altogether (see also #919).
username_0: Well #919 looks stalled… So I will still have the tags, but they will be empty? Then I think this issue will have to wait for #919 to land before being closed (I can change the title if needed).
So what I should do right now is set up Zero, hand-edit what beets already imported to clean it, and then it should be (almost) fine?
Also, what about not writing tags when they are 0 or empty? For instance, I have a release from an unknown year, so it's set to 0, while DATE and ORIGINALDATE are set to 0000.
username_1: Yeah, if you're interested in deleting tags altogether, we could use your help! Not just with the coding: #919 and other efforts have been made in the past, but they all had serious drawbacks. You can help, if you're interested, by looking through the trove of related issues and filing one that summarizes the current state of things and makes a proposal.
In particular, we can't do the implicit removal thing, where zero is equivalent to a missing tag, because zero is a meaningful value for some tags.
username_0: OK, sure. :) I'm currently lacking a bit of time (PhD student with a lot of other activities here), but it's definitely something I will want to look at when things settle down a little. ;)
username_0: Bonus question: how to disable “ALBUM ARTIST”? I want only album_artist.
username_0: In the same vein, there are:
disc : 1
DISCTOTAL : 2
DISCC : 2
TOTALDISCS : 2
And:
track : 1
TRACKTOTAL : 7
TRACKC : 7
TOTALTRACKS : 7
I want to keep disc/track, DISCTOTAL and TRACKTOTAL but not the other ones (ending with C or starting with TOTAL).
username_0: And is there a doc where I can find the description of all itemfields? There seem to be mismatches between tags and them (are some itemfields only for the beets db?), at least in my understanding of them.
username_0: Side note for the "don't write meaningless tags" feature: do not write sort tags if they are identical to the non-sort ones.
username_1: Assuming you're looking at a file format with free-form tag names (like FLAC) using a tool that lists all key/value pairs, you might want to see #350.
It might help to understand that there is not a 1-1 correspondence between the physical tag names in your favorite media format and the logical field names that beets uses. For example, `albumartist` is a beets field that is mapped to different on-disk tags for different media formats. If you want to see a full list of beets fields, you can run `beet fields`. To see how those map to files' tags, you unfortunately need to read the source code.
username_0: Yes, sorry, I’m only using FLAC files. OK for #350.
It also helped me to understand the second point. Can you give me a pointer for where to look in the code or is it too sparse?
username_1: No, it's not too hard! The tag mappings are declared here: https://github.com/beetbox/beets/blob/31b23207a81b067757048097c102b13d6a79582b/beets/mediafile.py#L1564
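To give an idea of the shape, here is a rough, from-memory sketch of the pattern used in that file (not a verbatim copy; the exact class names and frame identifiers should be checked against the linked source):
```python
from beets.mediafile import (MediaField, StorageStyle, MP3StorageStyle,
                             MP4StorageStyle, ASFStorageStyle)

# One logical beets field, mapped to a different physical tag per container format.
genre = MediaField(
    MP3StorageStyle('TCON'),      # ID3v2 frame used for MP3
    MP4StorageStyle('\xa9gen'),   # MP4/iTunes atom
    ASFStorageStyle('WM/Genre'),  # Windows Media
    StorageStyle('GENRE'),        # free-form tag (FLAC / Vorbis comments)
)
```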
username_0: Nice. :) So, I should look at StorageStyle (the one with no prefix), right? Also, if support for other tags were to be added, that would be the right place, I suppose? I might propose a PR for some when I get time (place, conductor, maybe others). (Except I'm not sure whether they exist in all 4 standards, and maybe that is what defines the supported list in beets?)
username_1: Yes and yes! |
pyvisa/pyvisa-py | 694538617 | Title: PyVISA-Py: USB backend issue value error operation timed out
Answers:
username_1: Hi, same issue with Agilent u3606a.
```
pyvisa info:
Machine Details:
   Platform ID: Linux-5.4.72-v8+-aarch64-with-glibc2.17
   Processor:
Python:
   Implementation: CPython
   Executable: /home/pi/miniforge3/envs/sitesting/bin/python
   Version: 3.8.6
   Compiler: GCC 7.5.0
   Bits: 64bit
   Build: Oct 7 2020 18:25:18 (#default)
   Unicode: UCS4
PyVISA Version: 1.10.1
Backends:
   ni:
      Version: 1.10.1 (bundled with PyVISA)
      Binary library: Not found
   py:
      Version: 0.3.1
      ASRL INSTR: Available via PySerial (3.4)
      USB INSTR: Available via PyUSB (1.1.0). Backend: libusb1
      USB RAW: Available via PyUSB (1.1.0). Backend: libusb1
      TCPIP INSTR: Available
      TCPIP SOCKET: Available
      GPIB INSTR:
         Please install linux-gpib to use this resource type.
         No module named 'gpib'
```
username_2: Please update to the latest pyvisa and pyvisa-py versions and report the exact error you see (since sometimes there are slight variations between systems).
username_1: Min example with latest version of system (clean environment):
```
Machine Details:
Platform ID: Linux-5.4.72-v8+-aarch64-with-glibc2.17
Processor:
Python:
Implementation: CPython
Executable: /home/pi/miniforge3/envs/pyvisa_test/bin/python3.8
Version: 3.8.6
Compiler: GCC 7.5.0
Bits: 64bit
Build: Oct 7 2020 18:25:18 (#default)
Unicode: UCS4
PyVISA Version: 1.11.1
Backends:
ivi:
Version: 1.11.1 (bundled with PyVISA)
Binary library: Not found
py:
Version: 0.5.1
ASRL INSTR: Available via PySerial (3.4)
USB INSTR: Available via PyUSB (1.1.0). Backend: libusb1
USB RAW: Available via PyUSB (1.1.0). Backend: libusb1
TCPIP INSTR: Available
TCPIP SOCKET: Available
GPIB INSTR:
Please install linux-gpib (Linux) or gpib-ctypes (Windows, Linux) to use this resource type. Note that installing gpib-ctypes will give you access to a broader range of funcionality.
No module named 'gpib'
```
**Minimal example code**
```
import traceback
import pyvisa as visa
rm = visa.ResourceManager()
m = rm.open_resource('USB0::2391::19736::MY50099047::0::INSTR')
m.write_termination = '\n'
m.read_termination = '\n'
try:
    m.query('*IDN?')
except Exception as e:
    traceback.print_exc()
print('adding delay: ')
m.query_delay = .2
try:
    m.query('*IDN?')
except Exception as e:
    traceback.print_exc()
```
Output/Error:
[Truncated]
Traceback (most recent call last):
File "test.py", line 17, in <module>
m.query('*IDN?')
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 638, in query
self.write(message)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 197, in write
count = self.write_raw(message.encode(enco))
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 157, in write_raw
return self.visalib.write(self.session, message)[0]
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa_py/highlevel.py", line 543, in write
written, status_code = self.sessions[session].write(data)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa_py/usb.py", line 179, in write
count = self.interface.write(data)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa_py/protocols/usbtmc.py", line 436, in write
bytes_sent += raw_write(data)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa_py/protocols/usbtmc.py", line 258, in write
raise ValueError(str(e))
ValueError: [Errno 110] Operation timed out
```
username_2: Can you try to read just a single byte of the answer ? I would like to know if reading fail altogether or if we have a termination issue.
```python
import pyvisa as visa
rm = visa.ResourceManager()
m = rm.open_resource('USB0::2391::19736::MY50099047::0::INSTR')
m.write_termination = '\n'
m.read_termination = '\n'
m.write('*IDN?')
while True:
    print(m.read_bytes(1))
```
username_1: @username_2
This is the output to your code:
```
Traceback (most recent call last):
File "test.py", line 10, in <module>
print(m.read_bytes(1))
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 371, in read_bytes
chunk, status = self.visalib.read(self.session, size)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa_py/highlevel.py", line 519, in read
return data, self.handle_return_value(session, status_code)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa/highlevel.py", line 251, in handle_return_value
raise errors.VisaIOError(rv)
pyvisa.errors.VisaIOError: VI_ERROR_TMO (-1073807339): Timeout expired before operation completed.
```
username_2: Nothing printed before the error message ?
username_1: No, that is all the output
username_2: Can you try a command that would produce a visible change in the instrument state ? We need to figure out if we cannot read the answer or if the instrument is not answering because it did not properly receive the message.
username_1: ```
username_2: Do you see the mode changing on the instrument ? If you send `"OUTP 1"` does the output turn on (assuming it was off) ?
username_1: Unfortunately I am not at the lab. Will check it tomorrow. (Sorry about the inconvenience)
username_2: No problem. Debugging those kind of issues is always a pain and I would really like to make pyvisa-py better but I do not have that much open-source time those days.
username_1: Hi @username_2, after a few hard resets on the device, write works (I can see it controlling the instrument), though an ERROR REMOTE message appears in the display. Also, query works, but the same ERROR REMOTE message appears in the display.
username_1: ```
import pyvisa as visa
rm = visa.ResourceManager()
m = rm.open_resource('USBfd00:c2b6:b24b:be67:2827:688d:e6a1:6a3b::19736::MY50099047::0::INSTR')
m.write_termination = '\n'
m.read_termination = '\n'
m.write('CONF:VOLT:DC')
m.write('sense:voltage:dc:range 10V')
print(m.query('*IDN?'))
m.write('MEAS:VOLT:DC?')
while True:
    print(m.read_bytes(1))
```
Produces:
```
Agilent Technologies,U3606A,MY50099047,02.00-03.00-03.00
Traceback (most recent call last):
File "test.py", line 16, in <module>
print(m.read_bytes(1))
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 371, in read_bytes
chunk, status = self.visalib.read(self.session, size)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa_py/highlevel.py", line 519, in read
return data, self.handle_return_value(session, status_code)
File "/home/pi/miniforge3/envs/pyvisa_test/lib/python3.8/site-packages/pyvisa/highlevel.py", line 251, in handle_return_value
raise errors.VisaIOError(rv)
pyvisa.errors.VisaIOError: VI_ERROR_TMO (-1073807339): Timeout expired before operation completed.
```
username_2: This is the point where things get really complicated... I am still confused by how some USB devices work flawlessly while some others, like yours, cannot behave properly.
If you want to keep digging, try to use Wireshark to spy on the transferred data using PyVISA-py, and possibly using the NI or Keysight implementation if you can install it. There must be something different. I will try to go through the code and the USBTMC specification again and see if we missed anything, but it may take me some time.
username_1: I will try to sniff it, though due to the actual circumstances I will not be able to access physically the instrument everyday.
Will use a normal x86 machine to use a different backend and see the differences.
In any case, thanks a lot for your support and all the work you put on the project.
username_2: You are welcome and I am sorry I cannot offer you a better solution.
username_1: Hi @username_2
I am back at the lab, so probably can get the wireshark traces you asked for.
This issue also happens with a Keysight B2912A (actually, even earlier, on the instr.write call):
```
pyvisa_py/protocols/usbtmc.py in write(self, data)
256 return self.usb_send_ep.write(data)
257 except usb.core.USBError as e:
--> 258 raise ValueError(str(e))
259
260 def read(self, size):
ValueError: [Errno 110] Operation timed out
```
Can you tell me what traces you need from Wireshark?
thanks!
username_2: Ideally I would like to see everything from the opening of the instrument till the error. And if you can get the same in a working environment that would be great since I could compare both.
username_1: Hi Matthieu, just send you an email with the traces.
Thanks
username_2: Thanks. However I cannot give you a timeline for this. I am a bit overwhelmed at the moment.
username_1: Of course, and thanks again for your time!
username_4: Maybe this helps: https://github.com/python-ivi/python-usbtmc/pull/50
username_2: Thanks for sharing @username_4. I honestly do not remember if I made such changes for pyvisa-py; I have a vague recollection, but I am not sure. Could you make a PR? I am a bit underwater at the moment.
username_5: We are also running into this issue with a Keysight 33500B Series waveform generator. Has there been further work on this? Is there anything we can do to help? Using python-usbtmc causes no problems.
I took a look at the python-usbtmc fix but have no idea what the heck they are doing.
username_5: The problem seems to lie with these two lines:
https://github.com/pyvisa/pyvisa-py/blob/main/pyvisa_py/protocols/usbtmc.py#L292
and the line right under it.
C/P here for clarity:
```
self.usb_dev.reset()
self.usb_dev.set_configuration()
```
With both of them commented out, the timeout error disappears with the 33500B. We tested with a Keysight DMM and don't see a problem there either.
If you comment out just the reset, then you get a resource busy error. If you comment out just set_configuration, you get the timeout error again. You must comment out both.
I have no idea what this impacts, but from reading some libusb docs, they say:
**"You cannot change/reset configuration if your application has claimed interfaces. It is advised to set the desired configuration before claiming interfaces."**
I am not sure if this has anything to do with it? Maybe resetting and then trying to set configuration is going out of order. You don't reset, but set_configuration is already called once in the USBRAW class init, which USBTMC class calls
https://github.com/pyvisa/pyvisa-py/blob/main/pyvisa_py/protocols/usbtmc.py#L216
I am going to test with a non-Keysight device and see if anything weird happens
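For reference, here is a minimal pyusb sketch of the ordering that libusb quote describes (configure once, before claiming the interface, and no reset). The vendor/product IDs and the interface index are placeholders, not taken from the pyvisa-py code:
```python
import usb.core
import usb.util

# Placeholder IDs; substitute the instrument's actual idVendor/idProduct.
dev = usb.core.find(idVendor=0x0957, idProduct=0x1234)

# Set the configuration once, before any interface is claimed (no dev.reset() here).
dev.set_configuration()
cfg = dev.get_active_configuration()
intf = cfg[(0, 0)]  # first interface, alternate setting 0

usb.util.claim_interface(dev, intf.bInterfaceNumber)
# ... perform the USBTMC transfers here ...
usb.util.release_interface(dev, intf.bInterfaceNumber)
```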
username_2: I must say I have no time for PyVISA beyond answering people's issues. If you can figure out how to at least mitigate this issue I will happily review a PR.
It is perfectly possible that the logic got messed up at one point when refactoring those two classes and that a lack of deep enough understanding of libusb caused the issue. If you do make a PR, please add that kind of information as comment so that we do not regress in the future.
username_5: Ok, let me do some additional testing with non-Keysight devices and see if any weird problems occur with commenting out those two lines. If things look good, I will create a PR for this problem.
username_6: Sorry to barge in on this issue, but if it helps, I was having a very similar issue with a Tektronix oscilloscope (TBS1062). Just commenting out those two lines did not work, but if I also comment out lines 216 and 221 (actually, the entire try/except blocks), it worked fine, and no problem was observed with other instruments from other manufacturers.
Line 221 was:
`self.usb_dev.set_interface_altsetting()`
It seems there is some kind of issue with the configuration settings and some very specific instruments.
username_2: Thanks for your input @username_6 . More feedback is always welcome.
Looking a bit at different things related to libusb, it looks like we could avoid the situation you describe by calling set_interface_altsetting only if the interface has multiple alternate settings, which we can check. Since you have hardware available to test, could you make a PR, or alternatively test one if I can find some time to make one?
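Something along these lines, as an untested pyusb sketch (the vendor/product IDs and the claimed interface number are placeholders):
```python
import usb.core

dev = usb.core.find(idVendor=0x0699, idProduct=0x0456)  # placeholder IDs
cfg = dev.get_active_configuration()

interface_number = 0  # assumed: the USBTMC interface that was claimed
# pyusb yields one Interface object per alternate setting, so count them.
alt_settings = [i for i in cfg if i.bInterfaceNumber == interface_number]

if len(alt_settings) > 1:
    # Only touch the alternate setting when the device actually offers more than one.
    dev.set_interface_altsetting(interface=interface_number, alternate_setting=0)
```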
username_5: Sorry for the silence, we have been swamped at work and couldn't work on this any further. Some added testing revealed this:
If we just comment out the two lines as previously stated:
```
self.usb_dev.reset()
self.usb_dev.set_configuration()
```
This works for the Keysight/Agilent function generators, but won't work on our Tektronix MSO44 scope; it will cause a timeout problem. When we get time, we can look into @username_6's fix as well.
username_2: Sorry, I didn't mean to blame you. I have often seen cases of people disappearing, so I just took the opportunity to have more people on board.
I think that setting the alt setting only when relevant as done here https://github.com/google/gousb/pull/38/files may really help.
It is great you have two instruments to test this way; we may get a good fix rather than moving the issue around some more.
username_6: Hi, sorry for "disappearing" as well!
I had a short vacation, but now that I'm back, I'll try this! |
keen/explorer | 123381725 | Title: Analyses with only 1 group-by field return results with an "undefined" label attached to each group
Question:
username_0: For all analyses which only have 1 group-by field, the returned result now shows up with the label "undefined" after every field value.
By adding a 2nd group-by, the "undefined" field is replaced by the property category of the 2nd group-by.
Error reported through Keen Support Pool & the error is reproducible.

Answers:
username_1: @username_0 I'm having a hard time reproducing this one - can you give me specific steps.
username_0: Yeah! Sure!
Here's a link to the type of query parameters that are used in an actual query, I can provide more if needed.
https://keen.io/project/53cd895d2481967fba000002/explorer?query%5Bevent_collection%5D=purchases&query%5Banalysis_type%5D=count&query%5Bgroup_by%5D%5B0%5D=user.first_name&query%5Btimezone%5D=UTC&query%5Btimeframe%5D=previous_14_months
Instructions:
1) select an event collection
2) select "count" (any analysis also produces this label glitch)
3) choose "group by", select one field to group by
4) leave the second "group by" alone (do not input a second field to group by)
5) use a valid timeframe
6) hit "run"
username_0: Original report via Keen user:
https://app.intercom.io/a/apps/5eadb8aff7474e2d465c4fc13242935d5b4e1405/inbox/<EMAIL>/conversations/1587448164
(login required)
username_1: @username_2 Any thoughts on this? I tried it out with Keen.js (`3.4.0-rc2`) and a single item array for a group_by does cause this. I think it's from this line in keen.js: https://github.com/keen/keen-js/blob/master/src/dataviz/helpers/getQueryDataType.js#L4
Status: Issue closed
username_1: Fixed in `keen.js 3.4.0-rc2`.
username_2: _Updated comment to `keen.js 3.4.0-rc3`_ |
high-mood/PSE-WEB | 458017929 | Title: test
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
Answers:
username_0: Done
Status: Issue closed
|
huitema/dnsoquic | 232953627 | Title: Forbidding the fragmentation of queries?
Question:
username_0: Servers can get lots of efficiency gains if they can assume that a query always fits in a single packet. DNS queries are normally short. Even if we assume a few EDNS options, they should rarely be more than a few hundred bytes. Normally a query can fit in a single STREAM frame that both creates and closes the stream.
I would like to somehow mandate that, but it is a bit of a layer violation. What do you think?
Answers:
username_0: Let's not do that. Layer violations will just come back and bite us in the future.
Status: Issue closed
|
CocoaPods/CocoaPods | 131324457 | Title: Missing required module 'Google'
Question:
username_0: Hello
Xcode emits ``` Missing required module 'Google' ``` when running the unit test target.
The problem persists only with the Google framework. In my original project I was using "Google/SignIn".
I created a sample project here: https://github.com/username_0/cocoapods-google-bug
I experience this problem on CocoaPods 1.0.0.beta.3 and earlier.
```
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, "8.0"
use_frameworks!
target 'test' do
pod 'Google'
pod 'Alamofire'
target 'testTests' do
inherit! :search_paths
pod 'Nimble', '~> 3.0.0'
pod 'OHHTTPStubs'
pod 'OHHTTPStubs/Swift'
end
end
```
Answers:
username_2: @username_0 that sample project doesn't seem to be available?
username_0: fixed url [https://github.com/username_0/cocoapods-google-bug](https://github.com/username_0/cocoapods-google-bug)
username_3: https://github.com/username_0
Make a [bridging header](https://github.com/googlesamples/google-services/blob/master/ios/signin/SignInExampleSwift/SignInExampleSwift-Bridging-Header.h)
username_2: This is because the Google podspec is manually setting ` "HEADER_SEARCH_PATHS": "$(inherited) ${PODS_ROOT}/Google/Headers"` in the `user_target_xcconfig`. Should search paths inheritance include `{FRAMEWORK,HEADER}_SEARCH_PATHS` from the `user_target_xcconfig`, @CocoaPods/core ?
username_4: Paths relative to `PODS_ROOT` rely on the CocoaPods-generated directory structure. Even though that has historically been relatively stable from version to version and might be quite predictable, this feels like a big hack and could cause issues in edge cases, e.g. deduplication across platforms, where you end up having `Google-iOS` and `Google-OSX` instead. So I wonder whether we can do anything so that the underlying issue can be addressed and such definitions can be avoided in the first place.
username_5: I have the exact same problem and I can't run Tests if I have the pod 'Google/CloudMessaging' installed. Is there any workaround for that?
username_6: Is there still no workaround for that?
username_0: Use it directly without cocoapods
username_6: It will work if you are using the first module only. But my project uses the Google Cloud Messaging service, which contains a lot of dependencies, so this solution is not appropriate for me.
username_7: Solved this problem by adding `Header Search Paths` to Unit Test Target:
1. Select your Unit Test Target setting.
2. Go to `Build Settings`.
3. Look for `Header Search Paths`.
4. Add this value `$(SRCROOT)/Pods` with `recursive`, then Xcode will resolve the path for you.
<img width="1190" alt="header search path" src="https://cloud.githubusercontent.com/assets/1411470/14658897/3161fff6-064c-11e6-9c13-1c62782b78a9.png">
@username_6 @username_0 @username_5 Maybe you would like to try if this works for you.
username_8: @username_7 's solution works for me!
Status: Issue closed
username_9: I'm going to close this as the underlying cause is in the Google podspec, see <https://github.com/CocoaPods/CocoaPods/issues/4858#issuecomment-188368170>
username_10: @username_7's solution only half worked for me. I got rid of the `Missing required module 'Google'` error, but it got replaced with another one (from Realm).
So instead of adding `$(SRCROOT)/Pods` (recursive) I added:
- `"${SRCROOT}/Pods/Google"` (recursive) to the `Header Search Paths`
- `"${SRCROOT}/Pods/Google"` (recursive) to the `Framework Search Paths`
And now it works like a charm again
username_1: FYI, Google Analytics is gone. It is now Firebase Analytics. See the migration guide at https://firebase.google.com/support/guides/google-ios#configuring_the_firebase_sdk
I'm just working on it now. Hopefully it does not suck as much as Google's older iOS resources.
username_11: I got a similar problem. Now it is called "Missing required module 'Firebase'".
username_12: Same issue here -- Mine was a little different though and the path I had to add to `Header Search Paths` was:
${PODS_ROOT}/Google/Headers
username_13: I also have this problem, now with Firebase. The comment from @username_2 is exactly the problem. Is there a reason that inheriting search paths doesn't inherit the values _after_ the pods modify them?
Is there a fix for the `Podfile` itself that will add the right value to the search paths in the generated `xcconfig`'s as opposed to having to bury the fix in the Xcode project config?
username_14: I have the same issue. Is the Firebase team aware of this? I think we need to modify the podfile.
username_15: Also having the same issue with Firebase: `Missing required module Firebase`. When I added `$(SRCROOT)/Pods` to the Header Search Path as proposed above I got a bunch of errors like `Could not build Objective-C module Realm`. Same error for the Facebook SDK.
When adding `$(SRCROOT)/Pods/Firebase` (recursive) instead it worked however.
username_16: The problem with including $(SRCROOT)/Pods/Firebase is that it seems to break autocomplete in Xcode (at least for me).
What works for me is to modify the Podfile to have test targets inherit `:complete` instead of `:search_paths`
inherit! :complete
So, for example the original poster's `Podfile` would look like this:
```ruby
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, "8.0"
use_frameworks!

target 'test' do
  pod 'Google'
  pod 'Alamofire'

  target 'testTests' do
    inherit! :complete
    pod 'Nimble', '~> 3.0.0'
    pod 'OHHTTPStubs'
    pod 'OHHTTPStubs/Swift'
  end
end
```
Followed by a `pod install`. This adds in various compilation and linking options that allow Firebase to be found, and doesn't affect autocomplete. Note that after a `pod install` you'll need to remove some old frameworks from your project in the link phase of the test targets, because the names used by CocoaPods are slightly different when inheriting with `:complete`. So you'll get a link error until you've removed these.
username_17: For me the fix was to add the `HEADER_SEARCH_PATHS` from `Pods-project.debug.xcconfig` into `Pods-project-tests.debug.xcconfig`. Of course, these get overwritten when you run `pod install` so I added a `post_install` lambda to the Podfile:
```ruby
post_install do |installer|
  directory = installer.config.project_pods_root + 'Target Support Files/'
  podDirectory = '<directory_name>'
  fileName = podDirectory + '.debug.xcconfig'
  configFile = directory + podDirectory + fileName
  xcconfig = File.read(configFile)
  newXcconfig = 'HEADER_SEARCH_PATHS = $(inherited) ${PODS_ROOT}/Firebase/Core/Sources $(inherited) "${PODS_ROOT}/Headers/Public" "${PODS_ROOT}/Headers/Public/Carnival" "${PODS_ROOT}/Headers/Public/Crashlytics" "${PODS_ROOT}/Headers/Public/Fabric" "${PODS_ROOT}/Headers/Public/Firebase" "${PODS_ROOT}/Headers/Public/FirebaseAnalytics" "${PODS_ROOT}/Headers/Public/FirebaseCore" "${PODS_ROOT}/Headers/Public/FirebaseInstanceID" "${PODS_ROOT}/Headers/Public/FirebaseRemoteConfig" "${PODS_ROOT}/Headers/Public/Olapic-SDK-iOS" "${PODS_ROOT}/Headers/Public/Reveal-SDK" "${PODS_ROOT}/Headers/Public/SwiftGen"'
  File.open(configFile, "a") { |file| file << newXcconfig }
end
```
username_18: @username_17 An easier workaround is to update the HEADER_SEARCH_PATHS in the Build Settings of the test target. See https://github.com/firebase/firebase-ios-sdk/issues/16 and https://github.com/firebase/firebase-ios-sdk/issues/58
username_17: @username_18 Those get blown away after each time you run `pod install` though.
username_19: @username_17, @username_18 can you please file a new issue? Is this related to test specifications?
username_18: @username_17 Not if you make the change in the xcproject's test target as opposed to a target added by CocoaPods in the xcworkspace.
@username_19 The bug is related to Firebase usage of CocoaPods described at firebase/firebase-ios-sdk#58
username_17: Oh, I see. Brilliant! Thanks a lot. Much better solution. |
budjb/grails-rabbitmq-native | 395884459 | Title: Unable to define consumer autoAck in yaml
Question:
username_0: It is not possible to define `consumer.autoAck` in `application.yml`, as it requires an enum and YAML can only provide strings and numbers.
I would suggest supporting the enum's fully qualified name in the config:
```yaml
rabbitmq:
...
consumers:
MyConsumer:
autoAck: com.budjb.rabbitmq.consumer.AutoAck.MANUAL
```
Then you can parse it here: https://github.com/budjb/grails-rabbitmq-native/blob/grails-3.x/rabbitmq-native/src/main/groovy/com/budjb/rabbitmq/consumer/LegacyConsumerContext.groovy#L276 |
jsdelivr/jsdelivr | 1089386770 | Title: Getting unexpected 404's from esm.run
Question:
username_0: **Describe the bug**
Getting 404's from files that jsDelivr finds without the +esm
**Affected jsDelivr links**
These give 404's:
https://cdn.jsdelivr.net/npm/[email protected]/lib/insertAdjacentTemplate.js/+esm
https://cdn.jsdelivr.net/npm/[email protected]/hookUp.js/+esm
https://cdn.jsdelivr.net/npm/[email protected]/lib/transform.js/+esm
https://cdn.jsdelivr.net/npm/[email protected]/lib/PEA.js/+esm
https://cdn.jsdelivr.net/npm/[email protected]/lib/SplitText.js/+esm
But without the +esm, jsdelivr finds the files:
https://cdn.jsdelivr.net/npm/[email protected]/lib/insertAdjacentTemplate.js
https://cdn.jsdelivr.net/npm/[email protected]/hookUp.js
https://cdn.jsdelivr.net/npm/[email protected]/lib/transform.js
https://cdn.jsdelivr.net/npm/[email protected]/lib/PEA.js
https://cdn.jsdelivr.net/npm/[email protected]/lib/SplitText.js
**Response headers**
access-control-allow-origin: *
access-control-expose-headers: *
age: 15
alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400, h3-28=":443"; ma=86400, h3-27=":443"; ma=86400
cache-control: public, max-age=31536000, s-maxage=31536000
cf-cache-status: HIT
cf-ray: 6c445471ded90649-IAD
content-encoding: br
content-type: text/plain; charset=utf-8
cross-origin-resource-policy: cross-origin
date: Mon, 27 Dec 2021 17:39:43 GMT
etag: W/"21-M28qQtNh6xlVdIXXeMOLRujDS0c"
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
strict-transport-security: max-age=31536000; includeSubDomains; preload
timing-allow-origin: *
vary: Accept-Encoding
x-cache: MISS, HIT
x-content-type-options: nosniff
x-served-by: cache-fra19150-FRA, cache-iad-kiad7000040-IAD
**Please complete the following information:**
- Device OS: Windows 11
- Browser: Edge
- Browser version: 96.0.1054.62
- VPN provider if you use one:
- Your location (country): USA
Answers:
username_1: Hey @username_0, the files you mentioned should work now.
This file will still return 404 because it's not exported in your package.json file.
https://cdn.jsdelivr.net/npm/[email protected]/lib/SplitText.js/+esm
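For context, the `+esm` endpoint can only serve subpaths that the package makes reachable, so a file like the one above typically has to be listed under `exports` in package.json. A minimal sketch (the `"."` entry is only a placeholder for the package's real main entry):
```json
{
  "exports": {
    ".": "./hookUp.js",
    "./lib/SplitText.js": "./lib/SplitText.js"
  }
}
```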
Status: Issue closed
|
larsbeck/HomematicIp | 441667631 | Title: [Unsupported device] EXTENDED_LINKED_SWITCHING
Question:
username_0: Hi Lars,
one more unsupported device.
```json
{
"id": "095c76c2-0fb8-4652-aae3-bcb3d003d2bc",
"homeId": "21edcd0f-2b62-4b28-b0fb-e8476834d493",
"metaGroupId": null,
"label": "Licht Kinderzimmer ",
"lastStatusUpdate": 0,
"unreach": null,
"lowBat": null,
"dutyCycle": null,
"type": "EXTENDED_LINKED_SWITCHING",
"channels": [],
"on": null,
"dimLevel": null,
"onTime": 60.0,
"onLevel": 1.005,
"sensorSpecificParameters": {}
}
```
-- Eddy
Answers:
username_1: Hi Eddy,
added ExtendedLinkedSwitchingGroup :-)
Lars
username_0: Hi Lars,
I don't see the commit ;)
-- Eddy
username_1: Hi Eddy,
ha, the urge for coffee was too big. Forgot to push ;-) It is pushed now.
Lars
username_0: Hi Lars,
haha ;)
Here we go:
```cs
Newtonsoft.Json.JsonSerializationException: Error converting value {null} to type 'System.Int32'. Path 'rssiDeviceValue', line 15, position 25. ---> System.InvalidCastException: Null object cannot be converted to a value type.
at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
--- End of inner exception stack trace ---
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at HomematicIp.Data.JsonConverters.AbstractListConverter`2.ReadJson(JsonReader reader, Type objectType, List`1 existingValue, Boolean hasExistingValue, JsonSerializer serializer) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp/Data/JsonConverters/AbstractListConverter.cs:line 31
at Newtonsoft.Json.JsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at HomematicIp.Data.JsonConverters.AbstractListConverter`2.ReadJson(JsonReader reader, Type objectType, List`1 existingValue, Boolean hasExistingValue, JsonSerializer serializer) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp/Data/JsonConverters/AbstractListConverter.cs:line 31
at Newtonsoft.Json.JsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
at HomematicIp.Services.HomematicService.GetCurrentState(CancellationToken cancellationToken) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp/Services/HomematicService.cs:line 54
at HomematicIp.Console.Program.Main(String[] args) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp.Console/Program.cs:line 74
```
and
```cs
Newtonsoft.Json.JsonSerializationException: Error converting value {null} to type 'System.Boolean'. Path 'sabotage', line 19, position 18. ---> System.InvalidCastException: Null object cannot be converted to a value type.
at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
--- End of inner exception stack trace ---
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at HomematicIp.Data.JsonConverters.AbstractListConverter`2.ReadJson(JsonReader reader, Type objectType, List`1 existingValue, Boolean hasExistingValue, JsonSerializer serializer) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp/Data/JsonConverters/AbstractListConverter.cs:line 31
at Newtonsoft.Json.JsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at HomematicIp.Data.JsonConverters.AbstractListConverter`2.ReadJson(JsonReader reader, Type objectType, List`1 existingValue, Boolean hasExistingValue, JsonSerializer serializer) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp/Data/JsonConverters/AbstractListConverter.cs:line 31
at Newtonsoft.Json.JsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
at HomematicIp.Services.HomematicService.GetCurrentState(CancellationToken cancellationToken) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp/Services/HomematicService.cs:line 54
at HomematicIp.Console.Program.Main(String[] args) in /Users/eschaefer/Documents/GitHub/HomematicIp/src/HomematicIp.Console/Program.cs:line 74
```
Another new device is on the way.
-- Eddy
Status: Issue closed
|
gorilla/mux | 408819328 | Title: Can Multiple Routers or Subrouters be used that use different middlewares?
Question:
username_0: **What version of Go are you running?** (Paste the output of `go version`)
**_go version go1.11.2 darwin/amd64_**
**What version of gorilla/mux are you at?** (Paste the output of `git rev-parse HEAD` inside `$GOPATH/src/github.com/gorilla/mux`)
_b57cb1605fd11ba2ecfa7f68992b4b9cc791934d_
**Describe your problem** (and what you have tried so far)
I have two endpoints for my go microservice
- healthCheck endpoint
- GetId
I want to use some security checks for the GetId endpoint but not for healthCheck, so I preferred using middleware via `router.Use(security.VerifySecurity)`.
My question here is -
How can I use two different routers or subrouters for these endpoints, the first using middleware that does the security checks and the other not using any middleware, since it's just a health check?
Also, both routers should be served on the same port:
`http.ListenAndServe(":"+port, cors.Default().Handler(router))`
Any help with this is appreciated; I'm struggling to get through it.
**Paste a minimal, runnable, reproduction of your issue below** (use backticks to format it)
```go
for _, endpoint := range a.endpoints {
    log.WithFields(log.Fields{
        "endpoint": endpoint.endpoint,
        "method":   endpoint.method.String(),
        "function": runtime.FuncForPC(reflect.ValueOf(endpoint.handler).Pointer()).Name(),
    }).Trace("Added endpoint")
    if strings.Contains(endpoint.endpoint, "health") {
        router2.Get(endpoint.endpoint, endpoint.handler)
    } else {
        router1.Get(endpoint.endpoint, endpoint.handler)
    }
}
```
Answers:
username_1: Yes - you should use a Subrouter and PathPrefix to separate out where middleware needs to be applied: https://godoc.org/github.com/gorilla/mux#Route.Subrouter
```go
package main
import (
"fmt"
"log"
"net/http"
"github.com/gorilla/mux"
)
func MiddlewareOne(next http.Handler) http.Handler {
fn := func(w http.ResponseWriter, r *http.Request) {
log.Println("middleware one")
next.ServeHTTP(w, r)
}
return http.HandlerFunc(fn)
}
func MiddlewareTwo(next http.Handler) http.Handler {
fn := func(w http.ResponseWriter, r *http.Request) {
log.Println("middleware two")
next.ServeHTTP(w, r)
}
return http.HandlerFunc(fn)
}
func handler(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "hello")
}
func main() {
r := mux.NewRouter()
healthchecks := r.PathPrefix("/health").Subrouter()
healthchecks.Use(MiddlewareOne)
healthchecks.HandleFunc("/ready", handler)
protected := r.PathPrefix("/admin").Subrouter()
protected.Use(MiddlewareTwo)
protected.HandleFunc("/dashboard", handler)
log.Fatal(http.ListenAndServe("localhost:8000", r))
}
```
```
~/repos
➜ curl localhost:8000/admin/dashboard
hello
~/repos
➜ curl localhost:8000/health/ready
hello
```
```sh
➜ go run main.go
2019/02/13 06:32:27 middleware two
2019/02/13 06:32:36 middleware one
```
Let me know if you have any questions!
username_1: PS: I would almost definitely reconsider how you're building routes using reflect - that's error prone and likely to panic if you misconfigure. It's hard to see from your very limited example, but a map of `map[string]YourHandlerType` would be easy to iterate over and build a subrouter from. |
davidhalter/jedi-vim | 80173195 | Title: jedi-vim failed to initialize Python
Question:
username_0: The output:
```
Error: jedi-vim failed to initialize Python: jedi#setup_py_version: Vim(pyfile):
Traceback (most recent call last): (in function jedi#init_python..<SNR>49_init_p
ython..jedi#setup_py_version, line 16)
```
I use jedi-vim installed by Vundle. Jedi library installed by pip. vim7.4 with `+python`.
Answers:
username_1: It's probably a problem with your Vim/Python setup.
Does `:py print(1)` work?
Can you post your `:ver` output?
username_0: `:py print(1)` returns `1`.
The output of `:ver`:
```
:ver
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled May 18 2015 21:16:28)
MacOS X (unix) version
Included patches: 1-712
Compiled by Homebrew
Huge version without GUI. Features included (+) or not (-):
+acl +conceal +farsi +libcall +mouse_netterm +profile +syntax +visualextra
+arabic +cryptv +file_in_path +linebreak +mouse_sgr +python +tag_binary +viminfo
+autocmd +cscope +find_in_path +lispindent -mouse_sysmouse -python3 +tag_old_static +vreplace
-balloon_eval +cursorbind +float +listcmds +mouse_urxvt +quickfix -tag_any_white +wildignore
-browse +cursorshape +folding +localmap +mouse_xterm +reltime -tcl +wildmenu
++builtin_terms +dialog_con -footer -lua +multi_byte +rightleft +terminfo +windows
+byte_offset +diff +fork() +menu +multi_lang +ruby +termresponse +writebackup
+cindent +digraphs -gettext +mksession -mzscheme +scrollbind +textobjects -X11
-clientserver -dnd -hangul_input +modify_fname +netbeans_intg +signs +title -xfontset
+clipboard -ebcdic +iconv +mouse +path_extra +smartindent -toolbar -xim
+cmdline_compl +emacs_tags +insert_expand -mouseshape +perl -sniff +user_commands -xsmp
+cmdline_hist +eval +jumplist +mouse_dec +persistent_undo +startuptime +vertsplit -xterm_clipboard
+cmdline_info +ex_extra +keymap -mouse_gpm +postscript +statusline +virtualedit -xterm_save
+comments +extra_search +langmap -mouse_jsbterm +printer -sun_workshop +visual -xpm
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
2nd user vimrc file: "~/.vim/vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/local/share/vim"
Compilation: /usr/bin/clang -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -Os -w -pipe -march=native -mmacosx-version-min=10.10 -U_FORTIFY_SOURCE -
D_FORTIFY_SOURCE=1
Linking: /usr/bin/clang -L. -L/usr/local/lib -L/usr/local/lib -Wl,-headerpad_max_install_names -o vim -lm -lncurses -liconv -framework Cocoa
-fstack-protector -L/System/Library/Perl/5.18/darwin-thread-multi-2level/CORE -lperl -framework Python -lruby.2.0.0 -lobjc
```
username_0: Here is `brew info vim` output.
Because I installed vim7.4 using `brew`, maybe it will helpful.
```
$ brew info vim
vim: stable 7.4.712, HEAD
http://www.vim.org/
Conflicts with: ex-vi
/usr/local/Cellar/vim/7.4.712 (1606 files, 26M) *
Built from source
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/vim.rb
==> Dependencies
Optional: lua ✘, luajit ✘
==> Options
--disable-nls
Build vim without National Language Support (translated messages, keymaps)
--override-system-vi
Override system vi
--with-client-server
Enable client/server mode
--with-lua
Build vim with lua support
--with-luajit
Build with luajit support
--with-mzscheme
Build vim with mzscheme support
--with-python3
Build vim with python3 instead of python[2] support
--with-tcl
Build vim with tcl support
--without-perl
Build vim without perl support
--without-python
Build vim without python support
--without-ruby
Build vim without ruby support
--HEAD
Install HEAD version
```
Status: Issue closed
username_0: Problem solved after I uninstalled and reinstalled vim 7.4 using `brew`.
Maybe installing a new version of Python 2.7.9 after Vim was installed caused this issue.
username_2: :+1: me too . thanks
username_3: me too . thanks
username_4: Me neither, what should I do?
username_5: I might have the same issue on RHEL 7.2. The only change that I'm aware of: I installed pip and upgraded pylint. Before that the problem did not occur.
After a quick inspection, the problem comes from here:
[https://github.com/username_6/jedi/blob/995a6531225ba0b65e1ff863d97e5404d989047b/jedi/debug.py#L17](https://github.com/username_6/jedi/blob/995a6531225ba0b65e1ff863d97e5404d989047b/jedi/debug.py#L17)
It raises AttributeError with message 'closed'.
When I run the code
```python
from colorama import Fore, init
from colorama import initialise
initialise.atexit_done = True
init()
```
from a shell, the exception is not raised.
username_6: search for colorama in the issue tracker. |
stanfordnlp/stanza | 667780362 | Title: Bug in arabic POS model
Question:
username_0: Some Arabic characters are causing the stanza model to break
for example "وجيييه"
```python
import stanza
_tagger = stanza.Pipeline(lang="ar", processors='tokenize,pos', tokenize_no_ssplit=True)
text = "لمعرفة المزيد عن منتجاتنا، نرجو زيارة"
doc = _tagger(text)
print(doc)
```
but I get this error:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-13-f866b6280d67> in <module>()
1 text = "وجيييه"
----> 2 doc = _tagger(text)
3 print(doc)
3 frames
/usr/local/lib/python3.6/dist-packages/stanza/pipeline/core.py in __call__(self, doc)
174 assert any([isinstance(doc, str), isinstance(doc, list),
175 isinstance(doc, Document)]), 'input should be either str, list or Document'
--> 176 doc = self.process(doc)
177 return doc
178
/usr/local/lib/python3.6/dist-packages/stanza/pipeline/core.py in process(self, doc)
168 for processor_name in PIPELINE_NAMES:
169 if self.processors.get(processor_name):
--> 170 doc = self.processors[processor_name].process(doc)
171 return doc
172
/usr/local/lib/python3.6/dist-packages/stanza/pipeline/pos_processor.py in process(self, document)
32 for i, b in enumerate(batch):
33 preds += self.trainer.predict(b)
---> 34 preds = unsort(preds, batch.data_orig_idx)
35 batch.doc.set([doc.UPOS, doc.XPOS, doc.FEATS], [y for x in preds for y in x])
36 return batch.doc
/usr/local/lib/python3.6/dist-packages/stanza/models/common/utils.py in unsort(sorted_list, oidx)
194 Unsort a sorted list, based on the original idx.
195 """
--> 196 assert len(sorted_list) == len(oidx), "Number of list elements must match with original indices."
197 _, unsorted = [list(t) for t in zip(*sorted(zip(oidx, sorted_list)))]
198 return unsorted
AssertionError: Number of list elements must match with original indices.
```
Answers:
username_1: Some languages require `mwt` in the list of annotators. Arabic seems to be one of them. It works for me if I do this instead:
`processors='tokenize,mwt,pos'`
I've seen this come up a couple times. Perhaps there's some way to simplify this
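Applied to the snippet from the report, that looks like this (only the processors list changes):
```python
import stanza

# note the added "mwt" processor between tokenize and pos
tagger = stanza.Pipeline(lang="ar", processors="tokenize,mwt,pos", tokenize_no_ssplit=True)
doc = tagger("لمعرفة المزيد عن منتجاتنا، نرجو زيارة")
print(doc)
```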
username_0: Hi @username_1, thank you very much. It worked for me.
Status: Issue closed
|
SMRFoundation/NodeXLBasic | 275027856 | Title: Have not received key for upgrade version
Question:
username_0: Hi,
I signed up and paid for a student version of NodeXL Pro last night and still have not received the key necessary to activate the account. Could someone please assist with this?
Thank you
#### This work item was migrated from CodePlex
CodePlex work item ID: '65000'
Vote count: '2' |
Miouyouyou/RockMyy64 | 337255696 | Title: Slow Video output
Question:
username_0: X11 takes its time to start and I suspect an issue similar to https://github.com/username_0/RockMyy/issues/4 without the trace this time...
But it might be something completely different, since there are issues with almost half of the system node definitions.
The entire `dmesg` taken from the serial console is kind of garbled due to bad serial console configuration in my current installation.
```
[ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[ 0.000000] Linux version 4.18.0-rc2-RockMyy-181818 (gamer@username_0) (gcc version 7.3.0 (Gentoo 7.3.0-r3 p1.4)) #5 SMP PREEMPT Sun Jul 1 00:49:47 CEST 2018
[ 0.000000] Machine model: FriendlyElec NanoPC-T4
[ 0.000000] earlycon: uart8250 at MMIO32 0x00000000ff1a0000 (options '')
[ 0.000000] bootconsole [uart8250] enabled
[ 0.000000] efi: Getting EFI parameters from FDT:
[ 0.000000] efi: UEFI not found.
[ 0.000000] cma: Reserved 32 MiB at 0x00000000f6000000
[ 0.000000] NUMA: No NUMA configuration found
[ 0.000000] NUMA: Faking a node at [mem 0x0000000000000000-0x00000000f7ffffff]
[ 0.000000] NUMA: NODE_DATA [mem 0xf5fa9ec0-0xf5fab67f]
[ 0.000000] Zone ranges:
[ 0.000000] DMA32 [mem 0x0000000000200000-0x00000000f7ffffff]
[ 0.000000] Normal empty
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x0000000000200000-0x00000000f7ffffff]
[ 0.000000] Initmem setup node 0 [mem 0x0000000000200000-0x00000000f7ffffff]
[ 0.000000] On node 0 totalpages: 1015296
[ 0.000000] DMA32 zone: 15864 pages used for memmap
[ 0.000000] DMA32 zone: 0 pages reserved
[ 0.000000] DMA32 zone: 1015296 pages, LIFO batch:31
[ 0.000000] psci: probing for conduit method from DT.
[ 0.000000] psci: PSCIv1.0 detected in firmware.
[ 0.000000] psci: Using standard PSCI v0.2 function IDs
[ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
[ 0.000000] psci: SMC Calling Convention v1.0
[ 0.000000] random: get_random_bytes called from start_kernel+0xa8/0x418 with crng_init=0
[ 0.000000] percpu: Embedded 23 pages/cpu @(____ptrval____) s56280 r8192 d29736 u94208
[ 0.000000] pcpu-alloc: s56280 r8192 d29736 u94208 alloc=23*4096
[ 0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5
[ 0.000000] Detected VIPT I-cache on CPU0
[ 0.000000] CPU features: detected: Kernel page table isolation (KPTI)
[ 0.000000] CPU features: enabling workaround for ARM erratum 845719
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 999432
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: root=UUID=0f0b77b2-0885-4e38-b72c-71f42200ec6e rootwait rootfstype=ext4 earlycon=uart8250,mmio32,0xff1a0000 console=ttyS2,1500000 panic=10 consoleblank=0 loglevel=4 ubootpart=c9d5d584-01 usb-storage.quirks=0x2537:0x1066:u,0x2537:0x1068:u cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory swapaccount=1
[ 0.000000] Memory: 3937860K/4061184K available (11004K kernel code, 1344K rwdata, 5004K rodata, 1280K init, 385K bss, 90556K reserved, 32768K cma-reserved)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
[ 0.000000] Preemptible hierarchical RCU implementation.
[ 0.000000] RCU efrom NR_CPUS=64 to nr_cpu_ids=6.
[ 0.000000] Tasds.
[ 0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
[ 0.000000] NQS: 64, nr_irqs: 64, preallocated irqs: 0
[ 0.000000] GICv3: GIC: Using split EOI/Deactivate mode
3 0.000000] GICv3: Distributor has no Range Selector support
[ 0.000000] GICv3: no VLPI su,p no direct LPI support
[ 0.000000] ITS [mem 0xfee20000-0xfee3ffff]
[ 0.000000] ITS@0x00000000fe020: allocated 6553p sz 64K, shr 0)
[ 0.000000] ITS: using ca hcmd queue
[ 0.000000] GIC: using LPI property table @0x00000000f1440000
[ 0.000000] ITS: Allocated 1792 chunks for LPIs
[ 0.000000] GICv3: CPU0: found redistributor 0 region 0:0x00000000fef00000
[Truncated]
[ 15.128942] mali ff9a0000.gpu: GPU identified as 0x0860 r2p0 status 0
[ 15.132794] mali ff9a0000.gpu: Protected mode not available
[ 15.174753] iio iio:device0: failed to get voltage
[ 15.181380] mali ff9a0000.gpu: Prmb[ 15.279114] iio iio:device0: failed to get voltage
[ 15.382948] iio iio:device0: failed to get voltage
[ 15.487369] iio iio:device0: failed to get voltage
[ 15.591013] iio iio:device0: failed to get voltage
[ 15.694833] iio iio:device0: failed to get voltage
[ 15.799791] iio iio:device0: failed to get voltage
[ 15.858070] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[ 15.902612] iio iio:device0: failed to get voltage
[ 15.905901] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[ 15.985135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 15.985191] cfg80211: failed to load regulatory.db
[ 16.008860] iio iio:device0: failed to get voltage
[ 16.111868] iio iio:device0: failed to get voltage
[ 16.214724] iio iio:device0: failed to get voltage
[ 16.318854] iio iio:device0: failed to get voltage
[ 16.340220] dw030000.dwc3: Failed to get clk 'ref': -2
```<issue_closed>
Status: Issue closed |
Airtable/airtable.js | 845284054 | Title: 404 in Node, using JS documentation in Next.js project
Question:
username_0: I'm getting errors whenever I try to add records using create:
``` javascript
import Airtable from 'airtable';
const base = new Airtable({
apiKey: `${process.env.AIRTABLE_API_KEY}`
}).base(`${process.env.AIRTABLE_BASE}`);
export default function NewsletterSignup(req, res) {
console.log(req.body);
const email = req.body.email;
base('Emails').create(
{
Email: email
},
{ typecast: true },
function (err, record) {
if (err) {
console.error(err);
return;
}
console.log(record?.getId());
res.status(200).send('Success');
}
);
}
```
I've tried various methods to fix the error I'm receiving including:
* Replacing table name with the table id
* Using a custom Airtable configuration (with various endpoints, etc)
Regardless of what I try, I receive a 404 error, saying the route can't be found.
I'm literally copying and pasting the self-documenting API from the Airtable site...
I have no idea how to fix this :(
Answers:
username_1: If it might help, I think `route can't be found` has something to do with Express and routing, rather than airtable.
Just another observation: in the callback function for adding the email, inside the `if` block where you check for the error, please add `res.status(404).send('Failed!')`.
That would complete the API flow. |
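For illustration, a minimal sketch of that error branch (the status code and message are illustrative, not prescriptive):
```javascript
base('Emails').create({ Email: email }, { typecast: true }, function (err, record) {
  if (err) {
    console.error(err);
    res.status(404).send('Failed'); // or 500, depending on how you want to surface the failure
    return;
  }
  res.status(200).send('Success');
});
```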
musonza/chat | 338127493 | Title: How to deal with soft deleted users?
Question:
username_0: I've implemented this package in my application, and I use some viewComposers. I was testing today and noticed that the entire application crashes when it can't find deleted users. How should I correct this?
Here is an example:
```
class MessagesViewComposer
{
public function compose(View $view)
{
$user = auth()->user();
$conversations = Chat::conversations()->for($user)->get();
$array = [];
$unreadConversationsCount = 0;
foreach($conversations->all() as $key => $conversation){
$unreadNotifications = $conversation->unreadNotifications($user);
$array[$key] = [
'user' => $conversation->users->firstWhere('id', '!=', $user->id)->toArray(),
'last_message' => [
'body' => $conversation->last_message->body,
'type' => $conversation->last_message->type,
'is_seen' => $conversation->last_message->is_seen,
'updated_at' => $conversation->last_message->updated_at->diffForHumans()
],
'unread_notifications' => count($unreadNotifications)
];
if(count($unreadNotifications)){
$unreadConversationsCount++;
}
}
$view->with('conversations', $array);
$view->with('unreadConversationsCount', $unreadConversationsCount);
$view->with('messageReadPermission', 'admin.messages.read');
}
}
```
Error: Call to a member function toArray() on null (View: /Users/username_0/www/ecco/resources/views/admin/layouts/master.blade.php) (View: /Users/username_0/www/ecco/resources/views/admin/layouts/master.blade.php)
The problem is that in several places I refer to the deleted user. How should I correct this?
Answers:
username_1: Can you try `$conversation->users->withTrashed()` to get the deleted users as well
username_0: Just get this error:
Method Illuminate\Database\Eloquent\Collection::withTrashed does not exist. (View: /Users/username_0/www/ecco/resources/views/admin/layouts/master.blade.php) (View: /Users/username_0/www/ecco/resources/views/admin/layouts/master.blade.php)
username_1: oh for relationship it's missing parenthesis `$conversation->users()->withTrashed()`
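For reference, a sketch of the composer change (assuming `users()` is the Eloquent relationship and the user model uses `SoftDeletes`):
```php
$otherUser = $conversation->users()
    ->withTrashed()
    ->where('id', '!=', $user->id)
    ->first();

$array[$key]['user'] = $otherUser ? $otherUser->toArray() : null;
```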
Status: Issue closed
|
cli/cli | 809627787 | Title: console output sent to stderr when it should be stdout
Question:
username_0: ### Describe the bug
non-error console output is being routed to `/dev/stderr`
e.g. the phrase `Cloning into 'foo-bar'...`
This causes issues with scripts/automations that expect text arriving on stderr is an error condition.
```
$ gh --version
gh version 1.5.0 (2021-01-21)
https://github.com/cli/cli/releases/tag/v1.5.0
```
### Steps to reproduce the behavior
Type these lines in a bash terminal:
```
$ _err() { sed "s/^/err: /" ; }
$ gh repo clone cli/cli 2> >(_err)
```
### Expected vs actual behavior
Expected:
```
$ gh repo clone cli/cli 2> >(_err)
Cloning into 'cli'...
remote: Enumerating objects: 22, done.
remote: Counting objects: 100% (22/22), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 19014 (delta 7), reused 3 (delta 0), pack-reused 18992
Receiving objects: 100% (19014/19014), 42.66 MiB | 49.87 MiB/s, done.
Resolving deltas: 100% (12772/12772), done.
```
Actual:
```
$ gh repo clone cli/cli 2> >(_err)
err: Cloning into 'cli'...
```
The "Cloning..." line is sent to stderr, and the stdout is lost completely (?)
Answers:
username_1: Thanks for the detailed issue.
I at one point made the decision to do this and in retrospect it was the wrong decision; it'd be great to do a consistency sweep for this and ensure only error-y text is going to stderr.
Status: Issue closed
username_2: @username_0 Thanks for reporting, but it is git that writes the `Cloning into…` line to stderr. The `gh repo clone` command is just a simple wrapper for `git clone`.
username_0: @username_2 I see. Hmm. Seems a strange design decision for `git` to behave this way, but I wouldn't dare question @torvalds. I found some relevant SO threads:
- [git stderr output can't pipe - Stack Overflow](https://stackoverflow.com/questions/4062862/git-stderr-output-cant-pipe)
- [Stop git from writing non-errors to stderr - Stack Overflow](https://stackoverflow.com/questions/57016157/stop-git-from-writing-non-errors-to-stderr)
- [Git clone: Redirect stderr to stdout but keep errors being written to stderr - Stack Overflow](https://stackoverflow.com/questions/34820975/git-clone-redirect-stderr-to-stdout-but-keep-errors-being-written-to-stderr)
Thanks for the tip about `--quiet`. I ended up just redirecting stderr to stdout (2>&1) and then writing a rudimentary parser to handle error conditions in my script. |
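A rough sketch of that workaround in bash (merge the streams, rely on the exit code to detect errors):
```bash
# If the command fails, print its combined output to stderr and bail out.
if ! output="$(gh repo clone cli/cli 2>&1)"; then
  printf 'clone failed:\n%s\n' "$output" >&2
  exit 1
fi
printf '%s\n' "$output"
```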
zephyrproject-rtos/sdk-ng | 541759256 | Title: arm64 alignment issue
Question:
username_0: **> 33f8: 780993e0 sturh w0, [sp, #153]**
on the last line " sturh w0, [sp, #153]" I am getting alignment exception due to 153 offset
Can you suggest why?
Thanks,
Ehud
Answers:
username_1: cc @username_2
username_2: @username_0 which branch are you using? how are you testing this? what are you compiling? how? I guess we need some more info about this.
username_0: Hi,
I downloaded the SDK from https://github.com/zephyrproject-rtos/sdk-ng/releases/tag/v0.11.0-alpha-7 (zephyr-toolchain-arm64-0.11.0-alpha-7-setup.run).
On Zephyr, I rebased onto this commit (plus our changes for ARM64 support, which are not published yet):
6933248e0cb4f7af31e2bab5b39c594806ab53ac - <NAME>, 3 months ago : net: shell: ping: Figure out the output network interface
I am testing this on our Cortex-A55 chip, and I am compiling with:
set(ARCH_FOR_cortex-a55 armv8.2-a+nofp )
set(CROSS_COMPILE_TARGET_arm aarch64-zephyr-elf)
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
export ZEPHYR_SDK_INSTALL_DIR=/opt/zephyr-sdk-arm64
I can run successfully without newlib, but with newlib I have this problem.
Thanks,
Ehud
username_2: @username_0 are you aware that there is an ongoing effort to upstream ARM64 support at https://github.com/zephyrproject-rtos/zephyr/pull/20263?
Which code/test are you compiling? Just to have a way to reproduce your issue.
Is this reproducible when rebasing on the current master?
username_0: @username_2 I am familiar with zephyrproject-rtos/zephyr#20263.
Is this reproducible when rebasing on the current master? We didn't try; currently that would demand a lot of effort from us, so I am consulting you in case you have an idea.
I don't think my code matters here, because the unaligned access is in the libc code that comes from the SDK (`_vfiprintf_r`), and it isn't aligned (correct me if I'm wrong). What do you think?
username_1: @username_0 For now, ensure that `SCTLR_ELn.A` is not set in your arch implementation. If set, try setting it to 0 and see if the alignment exception goes away.
As for triage, I will investigate what other releases are doing tomorrow and make changes if necessary.
username_2: on top of what @username_1 suggested try also to set `SCTLR_ELn.SA` to `0`.
username_1:
```
 128: a94006e0 ldp x0, x1, [x23]
12c: a90607e0 stp x0, x1, [sp, #96]
130: 52800038 mov w24, #0x1 // #1
134: a94106e0 ldp x0, x1, [x23, #16]
138: 90000017 adrp x23, 0 <__sfputc_r>
13c: 910002f7 add x23, x23, #0x0
140: a90707e0 stp x0, x1, [sp, #112]
144: b90097ff str wzr, [sp, #148]
148: aa1303f9 mov x25, x19
```
**zephyr-sdk-0.11.0-alpha-8**
```
/opt/sdk/zephyr-sdk-0.11.0-alpha-8/aarch64-zephyr-elf/bin/aarch64-zephyr-elf-objdump -d /opt/sdk/zephyr-sdk-0.11.0-alpha-8/aarch64-zephyr-elf/aarch64-zephyr-elf/lib/libc.a | grep "<_vfiprintf_r>:" -A 30
0000000000000000 <_vfiprintf_r>:
0: a9a57bfd stp x29, x30, [sp, #-432]!
4: 910003fd mov x29, sp
8: a90153f3 stp x19, x20, [sp, #16]
c: a9025bf5 stp x21, x22, [sp, #32]
10: a90363f7 stp x23, x24, [sp, #48]
14: f90023f9 str x25, [sp, #64]
18: f90047e0 str x0, [sp, #136]
1c: f90043e1 str x1, [sp, #128]
20: f9003fe2 str x2, [sp, #120]
24: aa0303f3 mov x19, x3
28: f900c3ff str xzr, [sp, #384]
2c: f900bfff str xzr, [sp, #376]
30: f94047e0 ldr x0, [sp, #136]
34: f900bbe0 str x0, [sp, #368]
38: f940bbe0 ldr x0, [sp, #368]
3c: f100001f cmp x0, #0x0
40: 540000e0 b.eq 5c <_vfiprintf_r+0x5c> // b.none
44: f940bbe0 ldr x0, [sp, #368]
48: b9405000 ldr w0, [x0, #80]
4c: 7100001f cmp w0, #0x0
50: 54000061 b.ne 5c <_vfiprintf_r+0x5c> // b.any
54: f940bbe0 ldr x0, [sp, #368]
58: 94000000 bl 0 <__sinit>
5c: f94043e0 ldr x0, [sp, #128]
60: 79c02000 ldrsh w0, [x0, #16]
64: 12003c00 and w0, w0, #0xffff
68: 121d0000 and w0, w0, #0x8
6c: 7100001f cmp w0, #0x0
70: 540000a0 b.eq 84 <_vfiprintf_r+0x84> // b.none
74: f94043e0 ldr x0, [sp, #128]
```
username_1: I noticed that `aarch64-zephyr-elf` is not being built with multilib; this will need to be addressed separately.
username_1: @username_0 Are you able to confirm if this issue can be fixed by setting `SCTLR_ELn.A = 0`?
The only condition for `STURH` that generates an alignment fault is if `SCTLR.A = 1` (refer to the pages 7342 and 7338 of the ARMv8-A ARM).
username_0: Hi,
Compiling the SDK manually gives me different results; I will investigate and update.
username_0: I tried to set SCTLR_ELn.SA & SCTLR_ELn.A = 0, but I still got an alignment exception:

Thanks
username_1: Maybe MMU is enabled in your arch port and the page tables are configured incorrectly (e.g. 'strongly ordered' attribute is set).
username_0: @username_1 the MMU is disabled; I will investigate why we are getting this exception.
username_0: @username_1 currently it seems that I have an architecture limitation on alignment.
But let's assume I am using the MMU. Should newlib + arm64 work fine then? If yes, I shouldn't have this problem here either; if no, I would be glad if you could explain.
Thank you very much
username_1: This means that we need to do either of the following:
1. Enable MMU stage 1 address translation with flat memory mapping in the arch port OR
2. Compile all code targeting ARMv7-A and ARMv8-A to never use unaligned access (i.e. specify `-mno-unaligned-access`)
The second approach may not be feasible and/or desirable for the following reasons:
- There are many architectural limitations regarding the Device memory type (note that the memory type is "Device"-nGnRnE when MMU is disabled)
- There may exist some code that require unaligned access support
- For GCC, `-mno-unaligned-access` is known to increase code size (though whether this really matters for Cortex-A is arguable)
<sup>[1]</sup> https://armv8-ref.codingbelief.com/en/chapter_d4/d42_8_the_effects_of_disabling_a_stage_of_address_translation.html#all-other-accesses
<sup>[2]</sup> https://stackoverflow.com/questions/51520635/how-to-emulate-arm-unaligned-memory-access-exceptions
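To illustrate why option 2 has to cover everything (illustrative C only, not the failing newlib code): when unaligned access is permitted at compile time, the compiler is free to emit instructions like the `sturh` seen above for stores it knows are misaligned, and those fault once the MMU is off and memory behaves as Device memory.
```c
#include <stdint.h>

/* Illustrative only: a packed struct gives the compiler a store it knows is
 * misaligned (the uint16_t member sits at offset 1). */
struct __attribute__((packed)) sample {
    uint8_t  tag;
    uint16_t value;
};

void store_value(struct sample *s, uint16_t v)
{
    /* Without strict-alignment compiler options this may become a single
     * unaligned halfword store (e.g. sturh); with them, the compiler falls
     * back to byte stores that are safe with the MMU disabled. */
    s->value = v;
}
```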
username_0: @username_1 Thank you for your answer.
Compiling it with -mno-unaligned-access won't solve the newlib problem, because that code is already compiled (I am already using -mno-unaligned-access).
username_1: @username_0 The `-mno-unaligned-access` approach requires everything (including newlib, libstdc++, ...) to be compiled with that option, which is one of the reasons why I mentioned adding MMU support would be the better.
Have a look at the following; Broadcom has already implemented MMU support on top of the @username_2's AArch64 port:
https://github.com/zephyrproject-rtos/zephyr/pull/20263#issuecomment-568964341
username_0: @username_1,
Can you explain how "MMU with flat memory" will solve the unaligned exception?
Thank you
username_1: https://github.com/zephyrproject-rtos/sdk-ng/issues/167#issuecomment-568955981
username_0: @username_1 I am updating this to note that the problem was solved after using the Broadcom MMU code.
Thank you for your help
Status: Issue closed
|
yakra/tmtools | 378532086 | Title: click canvas for persistent OSM/Google links
Question:
username_0: consider parallel DIVs (a table?)
| pointer | last clicked |
| - | - |
| 52.924094,-1.222485 (y:147, x:395) | 52.875553,-0.482141 (y:157, x:487) |
| [OSM](http://www.openstreetmap.org/?lat=52.924094&lon=-1.222485&zoom=15) [Google](http://maps.google.com/?ll=52.924094,-1.222485&z=15) | [OSM](http://www.openstreetmap.org/?lat=52.875553&lon=-0.482141&zoom=15) [Google](http://maps.google.com/?ll=52.875553,-0.482141&z=15) |
or something |
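Something along these lines could generate the persistent links for the last-clicked point (URL formats copied from the table above; `canvas`, `lastClicked`, and `pixelToLatLon` are placeholders for the existing page wiring):
```js
function persistentLinks(lat, lon, zoom = 15) {
  return {
    osm: `http://www.openstreetmap.org/?lat=${lat}&lon=${lon}&zoom=${zoom}`,
    google: `http://maps.google.com/?ll=${lat},${lon}&z=${zoom}`,
  };
}

canvas.addEventListener('click', (e) => {
  const { lat, lon } = pixelToLatLon(e.offsetX, e.offsetY); // hypothetical helper
  lastClicked = { lat, lon, ...persistentLinks(lat, lon) };
});
```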
square/moshi | 89011199 | Title: Better error message for missing half of an adapter
Question:
username_0: Given
```java
class AnInterfaceAdapter {
@ToJson String to(AnInterface ai) {
return "an_interface";
}
}
```
```java
Moshi moshi = new Moshi.Builder()
.add(new AnInterfaceAdapter())
.build();
moshi.adapter(AnInterface.class);
```
An error message is thrown saying no adapter could be found for `AnInterface`. This is not strictly true and is misleading; the problem is that only a "to" adapter could be found, but no "from" adapter. We should report a better error message.<issue_closed>
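For reference, a sketch of what the complete adapter would look like (Moshi needs both halves; `AnInterfaceImpl` is a hypothetical concrete implementation of `AnInterface`):
```java
import com.squareup.moshi.FromJson;
import com.squareup.moshi.ToJson;

class AnInterfaceAdapter {
  @ToJson String to(AnInterface ai) {
    return "an_interface";
  }

  @FromJson AnInterface from(String value) {
    return new AnInterfaceImpl(); // hypothetical concrete class
  }
}
```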
Status: Issue closed |
sdispater/pendulum | 1071264731 | Title: Nano second format support
Question:
username_0: <!-- Describe your question and issue here. This space is meant to be used for general questions that are neither bugs, feature requests, nor documentation issues. A good example would be a question regarding Pendulum's roadmap, for example.
Just checking: will nanoseconds be supported?
For now it seems that nanoseconds are not supported. Any plan for this? Thanks!
like
parse('12:04:23.020000010')
or parse('12:04:23.020000010').format('HH:mm:ss.SSSSSSSSS')
- [ ] I have searched the [issues](https://github.com/sdispater/pendulum/issues) of this repo and believe that this is not a duplicate.
- [ ] I have searched the [documentation](https://pendulum.eustace.io/docs/) and believe that my question is not covered.
opnsense/core | 864917501 | Title: Nextcloud Backup stopped working with self signed certificates
Question:
username_0: **Important notices**
Before you add a new report, we ask you kindly to acknowledge the following:
- [x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
- [x] I have searched the existing issues and I am convinced that mine is new.
**Describe the bug**
I use Nextcloud 21 to backup the configuration of my 2 OPNsense machines. The nextcloud uses a certificate issued by my own CA. This CA has been imported on both OPNsense hosts. The automated backup worked with OPNsense 21.1.4. Last night I updated to OPNsense 21.1.5. This morning my monitoring warned about missing backups of both firewalls.
For testing purposes I exposed the nextcloud instance to the internet and fetched a letsencrypt certificate. With this certificate the backups started to work again. Reverting to the previous nextcloud VM snapshot caused the problem to appear again. I tested with both, OpenSSL and LibreSSL.
**To Reproduce**
Steps to reproduce the behavior:
1. Setup nextcloud with self-signed cert
2. Import CA into OPNsense
3. Configure and start backup
4. See error
**Expected behavior**
Backups should work even with "self signed" certificates if the CA has been imported into OPNsense
**Describe alternatives you considered**
Using a trusted CA is the only alternative. I would like to avoid this for security reasons. Using Let's Encrypt would require opening port 80 to the Nextcloud instance. Using a wildcard certificate is even worse from a security perspective, and purchasing another certificate seems overkill.
**Environment**
Software version used and hardware type if relevant, e.g.:
OPNsense 21.1.5
Answers:
username_1: I'm having a similar issue, backups stopped about a week ago. My OPNsense firewall is my internal CA and it issued a server certificate that I have loaded in my internal Nextcloud instance. It's as though the OPNsense-to-Nextcloud functionality has stopped trusting OPNsense's own trust store.
OPNsense version: 21.1.5
Nextcloud version: 20.0.9
username_2: Forum thread in German here: https://forum.opnsense.org/index.php?topic=23242.0
First things first start curl from the command line to see what the actual issue is:
# curl -v https://nextcloud/url
username_0: ```
root@fw01:~ # curl -v https://nextcloud.meis.space
* Trying fc00:db20:35b:7399::5:443...
* Connected to nextcloud.meis.space (fc00:db20:35b:7399::5) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /usr/local/share/certs/ca-root-nss.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
username_2: Thanks, how about after issuing the following?
# opnsense-revert -r 21.1.4 curl
username_0: I'm not sure if the revert worked as it completed with segfault on two different firewalls (different hardware):
```
root@fw01:~ # opnsense-revert -r 21.1.4 curl
Fetching curl.txz: ... done
Verifying signature with trusted certificate pkg.opnsense.org.20210104... done
curl-7.76.0: already unlocked
Updating OPNsense repository catalogue...
OPNsense repository is up to date.
All repositories are up to date.
Child process pid=48764 terminated abnormally: Segmentation fault
```
Curl version:
```
root@fw01:~ # curl --version
curl 7.76.0 (amd64-portbld-freebsd12.1) libcurl/7.76.0 OpenSSL/1.1.1k zlib/1.2.11 nghttp2/1.43.0
Release-Date: 2021-03-31
Protocols: dict file ftp ftps gopher gophers http https imap imaps mqtt pop3 pop3s rtsp smtp smtps telnet tftp
Features: alt-svc AsynchDNS HTTP2 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets
```
The problem still persists:
```
root@fw01:~ # curl -v https://nextcloud.meis.space
* Trying fc00:db20:35b:7399::5:443...
* Connected to nextcloud.meis.space (2001:67c:2924:140::15) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /usr/local/share/certs/ca-root-nss.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
username_2: No this isn't good as the revert was not successful. 21.1.6 is coming out today. Can you update and try on this version? It should not have this issue:
* CAfile: /usr/local/share/certs/ca-root-nss.crt
It should point to the following again:
* CAfile: /usr/local/etc/ssl/cert.pem
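In the meantime, a quick way to confirm that only the CA bundle path is the problem (paths are examples, not OPNsense defaults):
```sh
# Point curl explicitly at the expected bundle; if this succeeds, the default CAfile is the culprit.
curl -v --cacert /usr/local/etc/ssl/cert.pem https://nextcloud.example.org

# Or verify against just the imported CA certificate.
curl -v --cacert /path/to/imported-ca.pem https://nextcloud.example.org
```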
username_2: Confirmed fixed with curl on 21.1.6 via https://forum.opnsense.org/index.php?topic=23242.0
username_1: Working for me on 21.1.6 as well. Nice to have backups again, thanks for fixing this! Much appreciated. |
tidyverse/ggplot2 | 351846274 | Title: Documentation of `palette` parameter for continuous scales
Question:
username_0: The documentation is incorrect:
```
continuous_scale package:ggplot2 R Documentation
Continuous scale constructor.
Description:
Continuous scale constructor.
Usage:
continuous_scale(aesthetics, scale_name, palette, name = waiver(),
breaks = waiver(), minor_breaks = waiver(), labels = waiver(),
limits = NULL, rescaler = rescale, oob = censor, expand = waiver(),
na.value = NA_real_, trans = "identity", guide = "legend",
position = "left", super = ScaleContinuous)
Arguments:
aesthetics: The names of the aesthetics that this scale works with
scale_name: The name of the scale
palette: A palette function that when called with a single integer
argument (the number of levels in the scale) returns the
values that they should take
```
This is the expected behavior of `palette` for `discrete_scale`. Currently `continuous_scale` and `discrete_scale` share the same documentation for this parameter. PR coming shortly.<issue_closed>
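For context, a small sketch of the difference using functions from the scales package (a discrete palette receives the number of levels, a continuous palette receives values already rescaled to [0, 1]):
```r
library(scales)

# Discrete: called with an integer number of levels.
discrete_pal <- hue_pal()
discrete_pal(3)

# Continuous: called with values rescaled to [0, 1].
continuous_pal <- colour_ramp(c("white", "red"))
continuous_pal(c(0, 0.5, 1))
```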
Status: Issue closed |
mopidy/mopidy | 94030623 | Title: Add tests for Mopidy-File
Question:
username_0: We should add tests covering Mopidy-File's `browse()` and `lookup()` functionality, and especially its corner cases related to symlink following and checking of the paths being inside the media dirs.
Since this feature is new in 1.1, the tests should preferably be in before we release 1.1 too. |
MHRA/products | 566399489 | Title: TECH - Observability of Platform performance
Question:
username_0: ### User want
As a PO I'd like to monitor and log platform performance, so that we can keep track of site health and react quickly when issues occur
MLAs help inform us that the website is performing at its best and that when there are dips in performance or the site goes down (for example), that the business is effectively alerted of when this happens.
**Customer acceptance criteria**
**Technical acceptance criteria**
1. We can log performance of the site and we can create alerts to notify of performance dips / failures
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
XL
**Value**
**Effort**
### Exit Criteria met
- [ ] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate<issue_closed>
Status: Issue closed |
WaterReporter/www.waterreporter.org | 90474361 | Title: Route from a User Profile to Activity Feed is Broken
Question:
username_0: When I am logged out and go from a user profile (click on Brent's name from the main activity feed) back to the main activity feed, nothing happens. I tried looking at the console when recreating the bug and no errors get thrown.<issue_closed>
Status: Issue closed |
VarenTechInternship/varentech-deploya | 159259429 | Title: Research format for communicating to the REST interface
Question:
username_0: If these are not necessary, in what ways should we use REST?
We have:
GET http://localhost:8080/home/login <-- gets username from the user
POST http://localhost:8080/home/upload <-- returns all data they enter in the html form
Answers:
username_1: You won't necessarily have a "format" per se, but you will have to plan out the REST api.
Example:
- GET http://your.app/users <-- returns a list of users.
- GET http://your.app/users/:id <-- where id is the id of a user, should return info on just that user.
- DELETE http://your.app/users/:id <-- where id is the id of a user to remove
- POST http://your.app/users <-- body of the message would have data for a new user, probably json formatted.
Not that there is any need at all to have a user management system or anything like that. These are just examples for how a rest interface would work.
username_0: If these are not necessary, in what ways should we use REST?
We have:
GET http://localhost:8080/home/login <-- gets username from the user
POST http://localhost:8080/home/upload <-- returns all data they enter in the html form
username_1: Those are legit! Additional stuff you might want:
GET http://localhost:8080/history <-- return a list of all uploads and the info about each upload (who ran it, what ran it, when did it run, what was the command used, was it successful, and anything else you think might be useful) Tossing it back in json would be awesome.
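For illustration, the `/history` payload could look roughly like this (field names are only a suggestion):
```json
[
  {
    "id": 42,
    "user": "jsmith",
    "artifact": "app-1.2.war",
    "command": "deploy.sh app-1.2.war",
    "ranAt": "2016-06-08T14:05:00Z",
    "successful": true
  }
]
```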
username_0: We began working on this but we are concerned there will be too much data displayed on the screen. Is there a way you want us to cut this down? Or is this not a problem?
Status: Issue closed
|
sangria-graphql/sangria | 119920894 | Title: Cannot resolve all deferred in one Future
Question:
username_0: ```
trait DeferredResolver[-Ctx] {
def resolve(deferred: List[Deferred[Any]], ctx: Ctx): List[Future[Any]]
}
```
should also have this possibility:
```
trait DeferredResolver[-Ctx] {
def resolve(deferred: List[Deferred[Any]], ctx: Ctx): Future[ListFuture[Any]]
}
```
so that the client can decide to resolve all `Deferred` in one shot (one database query or one http request)
Answers:
username_1: Do you mean `Future[List[Any]]`? This is how it was implemented initially. After some time I noticed that this approach makes error handling very hard. Sometimes you may have several futures (not necessarily 1:1 to the number of deferred values), and if you squish them together in one future then you lose the ability to have very precise error handling (in cases where several futures fail at the same time for different reasons), and it was an issue before.
Even with the current signature you still have a choice how you handle the deferred resolution. Here is an example that works with just one `Future`, but produces a list of `Future`s:
```scala
case class Article(id: String, name: String)
case class Comment(articleId: String, content: String)
class ExternalService {
def fetchComments(articleIds: Vector[String]): Future[Vector[Comment]] = ???
}
case class CommentsDeferred(articleId: String) extends Deferred[Seq[Comment]]
class CommentsResolver(service: ExternalService) extends DeferredResolver[ExternalService] {
def resolve(deferred: Vector[Deferred[Any]], ctx: ExternalService) = {
val articleIds = deferred map {case CommentsDeferred(id) => id}
val comments: Future[Vector[Comment]] =
ctx.fetchComments(articleIds)
val commentsByArticle: Future[Map[String, Vector[Comment]]] =
comments.map(_.groupBy(_.articleId))
articleIds map (articleId =>
commentsByArticle.map(_.getOrElse(articleId, Nil)))
}
}
```
Do you think this kind of approach would also be suitable in your case?
username_0: Yes I mean `Future[List[Any]]`.
I understand your example. But I have the feeling it makes the implementation more difficult than it should be.
When the database delivers a `Future[List[A]]`, then either it is successful or not. The error handling is quite trivial in that case... ;) Maybe we should make this easy to implement.
username_1: That's true, in some concrete situations the implementation of deferred resolver may be harder than it possibly can be for this case. But `DeferredResolver` is intended as a generic mechanism for different use-cases which is able to provide the most precise error-handling for all of them.
Not to mention that `DeferredResolver` is a pretty low-level mechanism. I can definitely see more higher-level (use-case driven) helpers built on top of `DeferredResolver`. One also needs to be very cautious in the implementation since deferred values more often than not contain duplicates, so they need to be deduplicated before making a DB query. DB query results, on the other hand, can't be returned as-is (in most cases), because the shape of the result should reflect 1:1 the shape of `deferred: Vector[Deferred[Any]]`. Not to mention that `deferred: Vector[Deferred[Any]]` will normally contain `Deferred` values of different types, so you will most probably end up with several futures anyway.
username_0: OK I understand.
From the signature, it's not clear that the 2 lists are so deeply linked.
To make it clearer, maybe the signature could be:
```
def resolve(deferred: List[Deferred[Any]], ctx: Ctx): List[Future[(Deferred[Any], Any)]]
```
What do you think?
username_0: I close this issue as the initial request is clearly explained.
Status: Issue closed
username_1: It sounds very good. `List[Future[(Deferred[Any], Any)]]` also sounds much more intuitive to me and immediately hints at the shape of the result. Under normal circumstances I wouldn't think twice about its inclusion, but in this case it's a bit different. In my opinion this is a performance-critical place which was designed to provide an optimization. Because of this I really would like to avoid unnecessary boxing if possible.
That said, I will think again about the signature of resolve method, I like it `List[Future[(Deferred[Any], Any)]]` a lot :) |
commercialhaskell/stack | 105283723 | Title: Should this work?
Question:
username_0: Brand new stack install on Debian. No GHC installed.
This is using lts-3.4
```
$ mkdir stack && cd stack
$ stack unpack hlint
$ cd hlint-version
$ stack init
$ stack setup
$ stack install
haskell-src-exts-1.16.0.1: configure
haskell-src-exts-1.16.0.1: build
Progress: 1/4
-- While building package haskell-src-exts-1.16.0.1 using:
/home/username_0/.stack/setup-exe-cache/setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2 --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options -hpcdir .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/hpc/.hpc/ -ddump-hi -ddump-to-file
Process exited with code: ExitFailure (-9)
Logs have been written to: /home/username_0/stack/hlint-1.9.21/.stack-work/logs/haskell-src-exts-1.16.0.1.log
Configuring haskell-src-exts-1.16.0.1...
Building haskell-src-exts-1.16.0.1...
Preprocessing library haskell-src-exts-1.16.0.1...
[ 1 of 22] Compiling Language.Haskell.Exts.Annotated.Syntax ( src/Language/Haskell/Exts/Annotated/Syntax.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Language/Haskell/Exts/Annotated/Syntax.o )
src/Language/Haskell/Exts/Annotated/Syntax.hs:112:1: Warning:
The import of ‘Data.Foldable’ is redundant
except perhaps to import instances from ‘Data.Foldable’
To import instances alone, use: import Data.Foldable()
src/Language/Haskell/Exts/Annotated/Syntax.hs:113:1: Warning:
The import of ‘Data.Traversable’ is redundant
except perhaps to import instances from ‘Data.Traversable’
To import instances alone, use: import Data.Traversable()
```
On the one hand, I'm not sure how I could have done things differently, but on the other, `haskell-src-exts` is used, or at least used to be used, by quite a few packages, and I can't find any reference of people running into this problem.
Perhaps stackoverflow would have been a better place to post this. Apologies if so.
Answers:
username_0: For what it's worth, `stack install --resolver nightly-2015-09-07` gives the same error.
Status: Issue closed
username_1: This isn't an issue with stack - the build for haskell-src-exts is running out of memory. See https://github.com/commercialhaskell/stack/issues/859
username_0: Damnit, I was just a bit too slow to close this myself.
Sorry about this.
username_1: No problem! Thanks for reporting :)
Better to over-report than under-report. However, it is good to search around for existing issues / explanations first, though.
username_2: Can stack detect when the child process died with -9 and print a helpful message about how compilation ran out of memory?
username_3: No, since the child process is Cabal the library, not GHC. Cabal would need
to detect that and report I think
username_2: @username_3 Okay, filed https://github.com/haskell/cabal/issues/2813 - Thanks for the explanation.
username_3: Looks good, thanks!
username_4: It looks to me like the `Process exited with code: ExitFailure (-9)` message is [produced by stack itself](https://github.com/commercialhaskell/stack/blob/a68e7b656f2bc2d3dc488ec77c82d9b4779bca43/src/Stack/Types/Build.hs#L235), so I think we could capture it and make a more helpful message.
username_3: How would this look? Just add a simple `` "ExitFailure (-9)" `S.isInfixOf` `` on every line that Cabal outputs and, if it matches, give an error message "maybe you ran out of memory?"
username_4: Is that necessary? The `"ExitFailure (-9)"` is produced by `show exitCode` in stack's code, so we should just be able to look at `exitCode` itself. Your idea would catch other cases where this would potentially happen, but so far the two cases I've seen (this issue and #859) are both where we have access to the ExitCode.
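For illustration, matching on the exit code directly could look roughly like this (not stack's actual code):
```haskell
import System.Exit (ExitCode (..))

hintForExit :: ExitCode -> Maybe String
hintForExit (ExitFailure (-9)) =
  Just "Process was killed with signal 9; this often means the build ran out of memory."
hintForExit _ = Nothing
```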
username_3: Wow, you're right, I can't believe I didn't see that. OK, my mistake.
username_5: My 2 cents of user feedback on this: silent out-of-memory errors are very frustrating.
I experienced this several times when compiling some projects on various remote low-end machines.
Several co-workers did too.
I have even gotten used to starting `htop` when compiling GHC code to monitor memory consumption :disappointed:
username_3: Actually, @username_1 brought up something around auto-retrying a build due to memory exhaustion, and I made the same mistake of thinking we couldn't reliably detect it. Perhaps we should implement some logic along the lines of:
* Do a first build
* If the build fails due to signal 9:
* Print a message "Out of memory when building foo, retrying by itself"
* Grab a lock that will only acquire once no other builds are occurring, and prevent other builds from starting (likely a QSemN with a total of numjobs quantities)
* Try to rebuild. If it fails due to signal 9 a second time, give a "you need more memory" error message
username_2: FWIW, signal 9 is just `SIGKILL`, so it's not *guaranteed* that it's because of the OOM killer. It could be a user manually selecting the process and killing it with `kill -9` or some sort of process manager.
username_3: I should have noticed that... OK, I'm not *convinced* that what I described above is still a good idea, but it might be. After all, how common will it be that someone will `kill -9` the GHC process instead of stack itself?
Status: Issue closed
username_4: Since we can't count on `ExitFailure -9` meaning anything in particular (could be OOM, could be a user-initiated `kill -9`) I don't think we can should do any automated retrying. Closing. |
cturner8/YTS-ui | 702905833 | Title: [Medium] Deploy to github pages
Question:
username_0: Once issues #2 and #4 are completed, test deployment and general functionality of FE when running on github pages.
After successful deployment / functionality being maintained, firebase deployment can be used.<issue_closed>
Status: Issue closed |
astropy/astropy | 519521227 | Title: Decide if it's acceptable to add the User-Agent:astropy and Accept:*/* headers into file download functions
Question:
username_0: This is a follow-on from #9508, with more details in: https://github.com/astropy/astropy/pull/9508#discussion_r343211648 and https://github.com/astropy/astropy/pull/9508#pullrequestreview-311403515
Short version: we should decide if we want to have User-Agent:astropy and Accept:*/* in the request headers for *all* uses of `download_file`/`get_readable_fileobj`, or just the IERS (where it is known to be necessary). As of #9508, no change is necessary for the "all" case, so we can close this without PR if agreement is to stick with that.
Relevant text from the HTTP/1.1 spec is here: https://tools.ietf.org/html/rfc2616#section-14.43 - "User agents SHOULD include this field with requests." The example shown includes a string "libwww", which suggests to me that it is reasonable practice for this header to include "library" information. So I conclude from this that the spec is recommending we do things as-is.
cc @username_1 @aarchiba @username_2
Answers:
username_1: To re-iterate my concern from #9508, let's say `astroquery` uses `download_file` but didn't customize the HTTP header (either because it is an older release version or it just doesn't care to); then it would be telling the service provider that the "user agent" is "astropy" although technically it is really "astroquery". Before the patch, it would be left blank, so it is clear that someone didn't set it.
HTTP spec aside, what is the behavior that *we* want as a core library?
username_2: We could allow setting this via a config item, and make the default "blank".
username_3: Actually not, it was urllib's user agent.
```
In [23]: with get_readable_fileobj('https://httpbin.org/headers') as f:
...: print(f.read())
Downloading https://httpbin.org/headers
|=========================================================| 125 /125 (100.00%) 0s
{
"headers": {
"Accept-Encoding": "identity",
"Host": "httpbin.org",
"User-Agent": "Python-urllib/3.7"
}
}
```
And for reference this is with requests:
```
In [25]: print(requests.get('https://httpbin.org/headers').text)
{
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.22.0"
}
}
```
username_3: So I think it's fine to have "User-Agent:astropy" by default, and even better if it can be customized with a config item or similar. And it should also probably include the version string.
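For reference, a minimal sketch of how such a header could be set with urllib (the exact header string, URL, and any config hook are still to be decided):
```python
import urllib.request
import astropy

# Example only: header value includes the library version, as suggested above.
headers = {"User-Agent": f"astropy/{astropy.__version__}", "Accept": "*/*"}
req = urllib.request.Request("https://example.com/finals2000A.all", headers=headers)
with urllib.request.urlopen(req) as response:
    data = response.read()
```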
username_4: Yes, and in that case it would be super easy for CI to distinguish itself, e.g. add a `testrun` element to the version string (that's what we do in astroquery).
username_1: I started #9564 to implement the config item idea and plan to complete it after #9508 is merged.
Status: Issue closed
|
custom-cards/button-card | 843783191 | Title: Inconsistent template behavior, cannot override some styles
Question:
username_0: I'm reasonably sure I'm doing something wrong, and this isn't really a bug, but I can't figure out what.
I have several configuration templates that I use as my base styles so I can implement multiple button-cards without a lot of repetitive typing, just [as suggested in the docs](https://github.com/custom-cards/button-card#configuration-templates). The problem I'm having is that some of the styles seem to be ignored.
For instance, I have a **base** template that most of my other configuration templates inherit from (I've truncated some of the non-relevant parts of the template):
```
base:
aspect_ratio: 1/1
show_state: true
show_icon: false
styles:
name:
- top: 57.7%
- left: 10.1%
- line-height: 2vw
- position: absolute
state:
- top: 74%
- left: 11%
- line-height: 2vw
- position: absolute
card:
- font-family: Sf Display
- letter-spacing: 0.05vw
- font-weight: 400
- font-size: 1.34vw
- border-radius: 0.8vw
```
Then I have other templates that inherit the base for more specific uses, like this one called **scene**:
```
scene:
template:
- base
show_state: false
tap_action:
action: call-service
service: scene.turn_on
service_data:
entity_id: >
[[[ return entity.entity_id ]]]
styles:
name:
- top: 10%
- left: 10.1%
- line-height: 3vw
- position: absolute
```
I utilize this scene template on my dashboard like so:
```
- type: custom:button-card
template:
- scene
entity: scene.office_high
style:
top: 20.35%
left: 31.5%
width: 10%
```
The issue is that the styles I'm trying to override for **name** are being ignored:

I would expect that name to be larger, and close to the top of the card. Instead, it is still styled using the positions defined in the **base** configuration template. However, I know at least part of **scene** is being applied because the state is hidden (**scene** contains `show_state: false`). Indeed, when inspecting the HTML element, I see the styles from **base**, not from **scene**:

Why are the styles from **scene** not being applied?
Answers:
username_1: Nothing seems wrong in your config, maybe it's a bug on my side. I'll have a look.
username_0: An update: the styles eventually appeared correctly a couple hours after I posted this. There must be some caching going on, even though I have "disable cache" checked on the Network tab in my Chrome developer tools and force-refreshed the page multiple times.
Restarting Home Assistant seems to help. Is that expected, to have to restart the entire HA server just to apply new CSS styles?
username_1: Maybe you forgot to reload lovelace? 😊
username_0: You're right..using the "refresh" icon under the three dot menu applies the styles right away.
I didn't think of this because I had hidden that top Lovelace header (where the refresh button is), and everything else I was changing (other Lovelace parameters) was showing correctly when I pressed the refresh button in my browser. It was only the styles that weren't updating.
Status: Issue closed
|
FreeUKGen/MyopicVicar | 136321136 | Title: Emails: contact and ownership
Question:
username_0: From discussions it appears that there is only one email associatied with each transcriber which is changed to facilitate management - we need to preserve last-known real email, mark it as non-functioning if it does not work. We also need to record transcribers as deceased.
Answers:
username_1: There is no need for coordinators to change email addresses of a transcriber except when asked to so do by the transcriber (Transcriber can do for themselves). In reality they should not be doing it to facilitate management. It is bad practice.
There are a selection of reasons for making a transcriber inactive. Death is one of them.
Is there any further requirement?
username_2: it on GitHub
username_1: Duplicate of 989 and 990
Status: Issue closed
|
facebook/react-native | 180207389 | Title: App crash on IOS at launch if turn on 4G
Question:
username_0: Hi,
for my iphone 6s ,ios10,my application crash if I turn on 4G.But if in Wi-Fi then everything is working.
Answers:
username_1: Are you connected to Debug Bundle (with localhost) in Release Mode ?
username_0: How to close Debug Bundle please?
username_0: ok, I have solved this. thks
Status: Issue closed
|
nelhage/ministrace | 775310989 | Title: Syscall arguments
Question:
username_0: What distribution did you use to make this ? I tried Ubuntu, ZorinOS, Manjaro and Debian, but every time the python script can't find the arguments file for syscalls (even if I change the linux src in the makefile) |
Teylor-SG/RGE-Bug-Reports | 589591329 | Title: Angarvunde is seemingly missing intended Relics of Hyrule content (Spoilers)
Question:
username_0: **Describe the bug**
According to the RoH Wiki, "The Dark Tunic can be found in Angarvunde inside a Dark Hero's Chest containing the tunic, a Dark Cap, a pair of Dark Gauntlets, and a pair of Dark Boots."
However, I searched through the entire dungeon 4 or 5 times over and was completely unable to find this chest. It is possible that it was just *absurdly* well hidden, but if so I think it should be moved somewhere a little more obvious anyway.
**To Reproduce**
Go through Angarvunde
**Expected behavior**
There to be a Dark Hero's Chest with the items.
**Additional context**
My best guess is that one of the other mods, most likely GAT, is overwriting the chest. |
duckpuppy/algolia-hugo | 299482259 | Title: more info:
Question:
username_0: - sentence context such as linguistic features passed by the translator.
----
*opened via [imdone.io](https://imdone.io) from a code comment on [eb845f3](https://github.com/username_0/algolia-hugo/commit/eb845f3) by <NAME>*
----
https://github.com/username_0/algolia-hugo/blob/b0bb3033ab1ad912e97356b8c5f7e7b588f0974a/vendor/golang.org/x/text/internal/format/format.go#L34-L40<issue_closed>
Status: Issue closed |
phpdevbr/vagas | 275028257 | Title: [Salvador/Ba] Analista Desenvolvedor Backend
Question:
username_0: Do you enjoy programming and software development? Do you like new technologies and challenges? Do you have team spirit and commitment? Would you like to put all of this into practice at a company?
Convergence Works
We are Convergence Works, the convergence between companies' challenges and the ideas to overcome them. We develop platforms for the digital world, with a focus on communication. We specialize in building websites and applications for communication platforms. We integrate content management systems, apps, email campaigns, subscriber-club solutions, and editorial rollouts across multiple platforms.
Location
Rua <NAME>, 134 - Sala 309 - Itaigara, Salvador – BA
Required skills:
- PHP
- Javascript
- HTML
- SQL
- Linux
- GIT
- MVC and other design patterns
Nice to have:
Knowledge of React
Benefits
- Transportation allowance
- Meal allowance
- Health insurance
- Excellent work environment
- Growth opportunities
Hiring
CLT
Is this position for you? Come join our team!!!
Send your résumé to <EMAIL> with your CV attached and your salary expectations, using the subject line: Vaga Analista Desenvolvedor Backend
Answers:
username_1: Hello @username_0,
Is this position still open?
_(I'm the administrator for the job postings and I'm closing the ones that have already been filled)_
Status: Issue closed
username_0: Hello @username_1, sorry for keeping the position open for so long, and yes, it has been filled.
SwissDataScienceCenter/renku-ui | 896426734 | Title: Apply new renku style to documentation
Question:
username_0: # Description
The new style should be used in our documentation.
# Design
(coming)
Answers:
username_0: Fixed in https://github.com/SwissDataScienceCenter/renku-sphinx-theme/pull/4 and https://github.com/SwissDataScienceCenter/renku/pull/2166
Status: Issue closed
|
goharbor/harbor | 446485637 | Title: The reboot time is too long, because chown changes the directory permissions.
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
My mirror repository currently stores about 80 TB of Docker images, and each Harbor restart takes 1-2 hours. Investigation found that this is caused by the [chown 10000:10000 command](https://github.com/goharbor/harbor/blob/ae007c2a49ec39aa05e6a21fb8676e7959c7ee71/make/photon/registry/entrypoint.sh)
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Is it possible to provide a configurable parameter so that chown 10000:10000 runs at install time and the chown command is skipped on restart?
**Describe the main design/architecture of your solution**
A clear and concise description of what does your solution look like. Rich text and diagrams are preferred.
**Describe the development plan you've considered**
A clear and concise description of the plan to make the solution ready. It can include a development timeline, resource estimation, and other related things.
**Additional context**
Add any other context or screenshots about the feature request here.
Answers:
username_1: @username_2 we discussed this.
I think ideally, we should move this to `prepare` phase.
And for the perf issue I can think of 2 options:
1) check the ownership before running `chown` (see the sketch below)
2) add a flag in the env vars to control it, and by default do not run `chown`
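A rough sketch of option 1 for entrypoint.sh (the storage path is a placeholder, and this assumes GNU stat is available in the image):
```sh
STORAGE=/storage
# Only walk the whole tree when the top-level owner is wrong.
if [ "$(stat -c %u "$STORAGE")" != "10000" ]; then
    chown -R 10000:10000 "$STORAGE"
fi
```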
username_2: I think this issue is already fixed in a newer version. And the chown logic will move to prepare when refactoring the non-root container of the registry.
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 554832159 | Title: [FE] Convert profile phone number form to use SchemaForm
Question:
username_0: ## Background
Epic #4068
## Tasks
- [ ] Make a new version of the phone number component that wraps a `SchemaForm` component and does not use any `Errorable` form components
## Acceptance Criteria:
- [ ] Should function the same as the existing [phone number?] component.
- [ ] Needs to be able to live alongside the existing phone number form (ie, behind a feature flag or some other method so the form can be put on staging but not prod)
Answers:
username_0: @username_1 this is still in progress, yes? I have it in the Sprint 15 objectives and want to make sure that's accurate.
username_1: There is a PR open for this that's waiting on VSP review which should be done today. https://github.com/department-of-veterans-affairs/vets-website/pull/11677
username_0: OK, thanks for letting me know!
username_1: Merged. I moved this to Validate and then we can close when we confirm editing phone numbers still works.
username_0: So I noticed a couple issues with the phone number form:
1. The form is requiring an extension even though an extension should not be required (also, what is with that weird error message?).
2. Also, there is a weird spacing issue with the information "i" icon and the copy in the alert box, but that may be a separate issue related to #5553
<img width="442" alt="Screen Shot 2020-02-12 at 3 52 55 PM" src="https://user-images.githubusercontent.com/34068740/74376464-28a97b00-4db0-11ea-8944-b62e773ee628.png">
3. When we originally built the form, we represented the "1" for the country code as follows. Idk where it went but it should be in the form. Here's an old mock to show what that looked like (please ignore the broken icon that's showing as a "?"):
<img width="336" alt="Screen Shot 2020-02-12 at 3 55 08 PM" src="https://user-images.githubusercontent.com/34068740/74376605-745c2480-4db0-11ea-9bc9-d9fc59c44a66.png">
Idk if any of these issues are related to your work but this seemed as good a place as any to raise the flag about them.
username_1: Thanks for catching this, @username_0. This was super sloppy on my part 😞
I have a PR to fix items 1 and 2 (plus another issue I noticed when I went in to fix those): https://github.com/department-of-veterans-affairs/vets-website/pull/11685
I'm not sure about item 3. This is new to me. And it's not like that on prod right now. @username_2 do you know anything about "hardcoding" the +1 in the input field like that? Was that ever implemented? Regardless, with the move to a SchemaForm for the phone number form, that _currently_ won't be possible, as far as I know. But would be a fun thing to add to the forms system. Let me know how badly you want me to pursue that, Samara!
username_0: If it's not possible with SchemaForm then I wouldn't immediately worry about it. Not worth jumping through hoops of fire to implement this.
username_1: @username_0 this should be working correctly now.
username_2: I do remember. At the time, it was implemented in a very temporary way, because our Form System's input components didn't support the addition of the gray box indicating the hardcoded country code. It was removed later because of a strange incompatibility identified while we were upgrading our React version, that was causing a browser error. I think we wrote a ticket in vets.gov-team for adding the number back. Sorry about this.
username_1: I should probably make a custom form system widget to implement this. Since I don't know how to do that but it would probably be pretty easy as far as custom widgets go. @username_0 feel free to make a ticket to create a new custom `DomesticPhoneNumber` widget if you think is worth spending a couple points on.
username_0: For the Home, Work, and Fax numbers, the "Update" button doesn't work both when I don't change the phone number and when I do.
The Update buttons is working for the Mobile phone field, but I am wondering if that's because I checked the option for notifications.
For the Fax number, I was able to add a number, but I wasn't able to update it after I saved it the first time.
username_1: Yeah there's all sorts of weird stuff going on with the new telephone forms. Looking into that now.
username_1: @username_0 this should now work on staging. However I did uncover another weird bug tracked here: https://github.com/department-of-veterans-affairs/va.gov-team/issues/5963
username_0: Awesome. This bug is all set. Can we close this or do you want to wait until #5963 is done?
username_1: Let's close this
Status: Issue closed
|
DianaWalsh19/butterscotch | 762937097 | Title: Fix search
Question:
username_0: 1. Looks hideous
2. It looks for the search words only in the item name. Can I make it check everything (category, item description, etc)?
Answers:
username_0: CSS will be fixed later. Search terms are fixed now: https://stackoverflow.com/questions/25814290/searching-through-all-location-description-and-title-ruby-on-rails
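A rough sketch of that approach (model, association, and column names are assumptions):
```ruby
class Item < ApplicationRecord
  belongs_to :category

  # Search across the item name, description, and category name in one query.
  def self.search(term)
    q = "%#{sanitize_sql_like(term)}%"
    joins(:category).where(
      "items.name LIKE :q OR items.description LIKE :q OR categories.name LIKE :q",
      q: q
    )
  end
end
```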
Status: Issue closed
|
AA-CubeSat-Team/soci-gnc | 831269415 | Title: MagGyro Processing Lib Updates
Question:
username_0: - [ ] In the `Data Calibration` subsystem, you're selecting the same column in each of the three signals.
- [ ] I think there's a logical error in the case of all 3 valid measurements. It looks like you're only ever averaging two signals -- I can demo this by inputting 3 valid signals with values [ 1,2,3 ], [ 2,3,4 ], and [ -1, -2, -3 ] ... what comes out is [ 1.5, 2.5, 3.5 ], which is the average of the first two. I should see an output of [ 2,3,4 ].
- What's happening here is that the middle if-else block only executes a single branch on any given run: it checks the if-statements in order, so once `if (u1)` evaluates to true, it executes that branch and ignores everything else. So you get the average of the first two signals only, not all three. You'll need to work out the correct solution here (see the sketch after this list).
- The unit test should actually be improved to catch this kind of error -- because you're sending in the same three signals, you won't be able to notice a bug like this. Try using three different values, and then compute what the result should be for each failure case.
- [ ] Let's name the If-Action Subsystems something a bit more suggestive, like `ThreeValidSignals`, `TwoValidSignals` etc.
- [ ] The input/output names should not be specific to the magnetometer. Instead, use a generic things like:
- Inputs: `sensor_meas_data` and `sensor_meas_valid`
- Outputs: `sensor_meas_body` and `sensor_valid`
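To illustrate the if/elseif pitfall called out above, here is a MATLAB-style sketch (not the actual library logic; `agree` is a hypothetical tolerance check between two signals):
```matlab
% Only the FIRST true branch runs, so u3 never contributes when u1 and u2 agree.
if agree(u1, u2)
    out = (u1 + u2) / 2;
elseif agree(u1, u3)
    out = (u1 + u3) / 2;
elseif agree(u2, u3)
    out = (u2 + u3) / 2;
end
% Combining all three valid signals requires accumulating the agreeing
% measurements rather than picking a single pairwise branch.
```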
Answers:
username_0: Just an FYI, you don't actually need to create two files for each of the Mag and Gyro library. We can pull the `maggyro_processing_lib` into FSW directly and then just set the parameters to be either for the Mags or the Gyros, no need for extra files.
username_0: One more thing I noticed: in the case of three valid measurements...there's actually no way for us to currently average all three signals. Moreover, I'm not sure that I follow the logic in the if-else statement that you have there. I've added a fourth case that will average all three signals if either:
- all signals agree within error
- at least two pairs disagree within error, in this case there isn't much else we could do
I noticed this because the unit test actually does not provide the average of the three signals in the case when they're all at their "valid" values. Now it does -- I'll push my changes and ask you to look over them to make sure you agree.
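For reference, here is a rough Python sketch of that fourth-case selection logic (just an illustration, not the actual Simulink implementation; the function name, the tolerance argument, and the choice of pair in the single-disagreement branch are all assumptions on my part):

```python
import numpy as np

def fuse_three_valid(u1, u2, u3, tol):
    """Average all three signals when they all agree within tolerance, or
    when at least two of the three pairs disagree (no clear majority).
    Otherwise fall back to averaging an agreeing pair."""
    u1, u2, u3 = (np.asarray(u, dtype=float) for u in (u1, u2, u3))

    def agree(a, b):
        # Two measurements "agree" if every component is within tolerance.
        return bool(np.all(np.abs(a - b) <= tol))

    pair_agreement = [agree(u1, u2), agree(u1, u3), agree(u2, u3)]
    num_disagreeing = pair_agreement.count(False)

    if num_disagreeing == 0 or num_disagreeing >= 2:
        # All three consistent, or no clear majority: average everything.
        return (u1 + u2 + u3) / 3.0

    # Exactly one pair disagrees: average one of the agreeing pairs,
    # mirroring the two-valid-signal branches.
    if pair_agreement[0]:
        return (u1 + u2) / 2.0
    if pair_agreement[1]:
        return (u1 + u3) / 2.0
    return (u2 + u3) / 2.0
```

With the example above ([1,2,3], [2,3,4], [-1,-2,-3] and a tolerance of 2), two pairs disagree, so the sketch averages all three signals.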
username_0: Changes look good, I removed the display from inside the library tho. I'll spend some time this week getting this library merged into FSW along with the sensor model library changes that go with it.
Status: Issue closed
|
googleapis/google-cloud-cpp | 589312885 | Title: Make sure examples and documentation look good
Question:
username_0: Publishing the doxygen docs works. I added a See Also link to the QueryOptions page, and it showed up published in the right place.
https://googleapis.dev/cpp/google-cloud-spanner/master/classgoogle_1_1cloud_1_1spanner_1_1v1_1_1QueryOptions.html#details
Status: Issue closed
|
deltaphc/raylib-rs | 805792445 | Title: Windows MinGW64 build error
Question:
username_0: ```
C:/Users/ME/.cargo/bin/cargo.exe build --color=always --package VoxelLands --bin VoxelLands --message-format=json-diagnostic-rendered-ansi
Compiling raylib-sys v3.5.0
error: failed to run custom build command for `raylib-sys v3.5.0`
Caused by:
process didn't exit successfully: `C:\Users\ME\Desktop\MyGames\VoxelLands\target\debug\build\raylib-sys-42c963b1dd57a9b2\build-script-build` (exit code: 101)
--- stdout
running: "cmake" "C:\\Users\\ME\\Desktop\\MyGames\\VoxelLands\\target\\debug\\build\\raylib-sys-04f0b8d384bc617c\\out\\raylib" "-G" "MinGW Makefiles" "-DBUILD_EXAMPLES=OFF" "-DBUILD_GAMES=OFF" "-DCMAKE_BUILD_TYPE=Release" "-DSUPPORT_BUSY_WAIT_LOOP=OFF" "-DSTATIC=TRUE" "-DPLATFORM=Desktop" "-DCMAKE_INSTALL_PREFIX=C:\\Users\\ME\\Desktop\\MyGames\\VoxelLands\\target\\debug\\build\\raylib-sys-04f0b8d384bc617c\\out" "-DCMAKE_C_FLAGS= -ffunction-sections -fdata-sections -m64" "-DCMAKE_CXX_FLAGS= -ffunction-sections -fdata-sections -m64" "-DCMAKE_ASM_FLAGS= -ffunction-sections -fdata-sections -m64"
-- The C compiler identification is GNU 8.1.0
-- The CXX compiler identification is GNU 8.1.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/gcc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/g++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test COMPILER_HAS_THOSE_TOGGLES
-- Performing Test COMPILER_HAS_THOSE_TOGGLES - Success
-- Testing if -Werror=pointer-arith can be used -- compiles
-- Testing if -Werror=implicit-function-declaration can be used -- compiles
-- Testing if -fno-strict-aliasing can be used -- compiles
-- Using raylib's GLFW
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Looking for dinput.h
-- Looking for dinput.h - found
-- Looking for xinput.h
-- Looking for xinput.h - found
-- Performing Test _GLFW_HAS_DEP
-- Performing Test _GLFW_HAS_DEP - Success
-- Performing Test _GLFW_HAS_ASLR
-- Performing Test _GLFW_HAS_ASLR - Success
-- Performing Test _GLFW_HAS_64ASLR
-- Performing Test _GLFW_HAS_64ASLR - Success
-- Using Win32 for window creation
-- Audio Backend: miniaudio
-- Building raylib static library
-- Generated build type: Release
-- Compiling with the flags:
-- PLATFORM=PLATFORM_DESKTOP
-- GRAPHICS=GRAPHICS_API_OPENGL_33
-- Configuring done
-- Generating done
-- Build files have been written to: C:/Users/ME/Desktop/MyGames/VoxelLands/target/debug/build/raylib-sys-04f0b8d384bc617c/out/build
running: "cmake" "--build" "." "--target" "install" "--config" "Debug" "--"
Scanning dependencies of target glfw_objlib
[ 4%] Building C object src/external/glfw/src/CMakeFiles/glfw_objlib.dir/context.c.obj
[ 8%] Building C object src/external/glfw/src/CMakeFiles/glfw_objlib.dir/init.c.obj
[ 12%] Building C object src/external/glfw/src/CMakeFiles/glfw_objlib.dir/input.c.obj
[ 16%] Building C object src/external/glfw/src/CMakeFiles/glfw_objlib.dir/monitor.c.obj
[ 20%] Building C object src/external/glfw/src/CMakeFiles/glfw_objlib.dir/vulkan.c.obj
[ 25%] Building C object src/external/glfw/src/CMakeFiles/glfw_objlib.dir/window.c.obj
[ 29%] Building C object src/external/glfw/src/CMakeFiles/glfw_objlib.dir/win32_init.c.obj
[Truncated]
thread 'main' panicked at 'filed to create windows library', C:\Users\ME\.cargo\registry\src\github.com-1ecc6299db9ec823\raylib-sys-3.5.0\build.rs:88:13
stack backtrace:
0: std::panicking::begin_panic
at C:\Users\ME\.cargo\registry\src\github.com-1ecc6299db9ec823\cc-1.0.66/C:\Users\ME\.rustup\toolchains\stable-x86_64-pc-windows-gnu\lib/rustlib/src/rust\library\std\src/panicking.rs:521:12
1: build_script_build::build_with_cmake
at .\C:\Users\ME\.cargo\registry\src\github.com-1ecc6299db9ec823\raylib-sys-3.5.0/build.rs:88:13
2: build_script_build::main
at .\C:\Users\ME\.cargo\registry\src\github.com-1ecc6299db9ec823\raylib-sys-3.5.0/build.rs:198:5
3: core::ops::function::FnOnce::call_once
at .\C:\Users\ME\.rustup\toolchains\stable-x86_64-pc-windows-gnu\lib/rustlib/src/rust\library\core\src\ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Process finished with exit code 101
```
**In this case, the library compiles successfully, but with a different name: "libraylib.a".**

**This can be fixed manually by renaming the file to "libraylib_static.a" or "raylib_static.lib" and re-running the build.**
crowdint/crowdblog | 64570126 | Title: Status roll back
Question:
username_0: This comes from here: https://github.com/crowdint/blog.crowdint.com/issues/9
-----
Be able to roll back any status of a blog post.
Example:
From Reviewed to Finished, Finished to Draft, etc.
-----
We may need to define what needs to be done on each transition 'undo'.
coinbase/coinbase-commerce-woocommerce | 438503733 | Title: User seeing an error
Question:
username_0: The version 1.1.1 that you have published is the one I was already using, downloaded from
https://github.com/coinbase/coinbase-commerce-woocommerce
It is a project from many months ago, and it always runs into problems with the latest version of WordPress.
The error is always:
```
2019-04-27T00:00:25+00:00 CRITICAL Uncaught Error: Call to a member function get_meta() on boolean in /home/wp_pc9bcp/iclunlock.com/wp-content/plugins/coinbase-commerce/class-wc-gateway-coinbase.php:346
Stack trace:
#0 /home/wp_pc9bcp/iclunlock.com/wp-content/plugins/coinbase-commerce/class-wc-gateway-coinbase.php(299): WC_Gateway_Coinbase->_update_order_status(false, Array)
#1 /home/wp_pc9bcp/iclunlock.com/wp-includes/class-wp-hook.php(286): WC_Gateway_Coinbase->handle_webhook('')
#2 /home/wp_pc9bcp/iclunlock.com/wp-includes/class-wp-hook.php(310): WP_Hook->apply_filters('', Array)
#3 /home/wp_pc9bcp/iclunlock.com/wp-includes/plugin.php(465): WP_Hook->do_action(Array)
#4 /home/wp_pc9bcp/iclunlock.com/wp-content/plugins/woocommerce/includes/class-wc-api.php(113): do_action('woocommerce_api...')
#5 /home/wp_pc9bcp/iclunlock.com/wp-includes/class-wp-hook.php(286): WC_API->handle_api_requests(Object(WP))
#6 /home/wp_pc9bcp/iclunlock.com/wp-includes/class-wp-hook.php(310): WP_Hook->apply_filters(NULL, Array)
#7 /home/wp_pc9bcp/iclunlock.com/wp-inc in /home/wp_pc9bcp/iclunlock.com/wp-content/plugins/coinbase-commerce/class-wc-gateway-coinbase.php on line 346
```
philipperemy/deep-speaker | 849042188 | Title: RuntimeError: cannot join current thread
Question:
username_0: Did you meet this before?
Answers:
username_1: @username_0 I've never seen this error before.
```
ValueError: 'a' cannot be empty unless no samples are taken
```
But If I can bet, I'd say your folder `/raid/user9/workspace/ASR/SpeakerVerification/deep-speaker-master/pre-training/` should contain something.
username_0: am i right to do this?
username_0: thanks!
i will check if something is containted in pre-training/ folder.
username_0: @username_1
checked out !
it was one speaker whose recording just only one wav ~
i picked it out ,and code run normaly
the same as the last one speaker ,there is also another speaker met the same issue(only one wav~)
so i stop code ,picked out one speaker...
stop code ,picked out one speaker...
haha
if i can i will add some code to pick it out automatically
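Something like this rough sketch might be a starting point (it assumes one sub-directory per speaker under the dataset folder, each holding that speaker's wav files; the function name and the 'pre-training' path are just placeholders):

```python
from pathlib import Path

def find_sparse_speakers(dataset_dir, min_utterances=2):
    """Return (speaker, wav_count) pairs for speakers that have fewer than
    `min_utterances` wav files, so they can be excluded before training."""
    flagged = []
    for speaker_dir in sorted(p for p in Path(dataset_dir).iterdir() if p.is_dir()):
        wav_count = len(list(speaker_dir.rglob('*.wav')))
        if wav_count < min_utterances:
            flagged.append((speaker_dir.name, wav_count))
    return flagged

if __name__ == '__main__':
    for speaker, count in find_sparse_speakers('pre-training'):
        print(f'{speaker}: only {count} wav file(s), consider removing it')
```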
Status: Issue closed
username_1: @username_0 Great! Happy to hear. Yes, the code is not very robust. I implemented it for LibriSpeech (500 utterances per speaker at least). Please add a fix if you have time. Xie xie~ |
OpenFeign/feign | 277983459 | Title: spring cloud issue
Question:
username_0: In a Spring Cloud project I use Feign with Hystrix, but when the microservice provider returns an error, Feign cannot find the fallback method.
So I tested it with the sample code below.
UserFeignClient.class

```java
@FeignClient(name = "yyf-provider-user", fallback = UserFeignClientImpl.class)
public interface UserFeignClient {

    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public User findById(@PathVariable("id") Long id);
}
```

UserFeignClientImpl.class

```java
@Component
public class UserFeignClientImpl implements UserFeignClient {

    @Override
    public User findById(Long id) {
        User user = new User();
        user.setId(-1L);
        user.setUserName("default user");
        return user;
    }
}
```
In the Spring Cloud Camden version, Feign can find the fallback method, but newer versions such as Dalston cannot.
I checked the docs between Camden and Dalston and cannot find the difference.
Answers:
username_1: Duplicate of https://github.com/spring-cloud/spring-cloud-netflix/issues/2486
Status: Issue closed
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.