repo_name | issue_id | text
---|---|---
htl-perg-2018-19-4bhif/angular-todo-example-michaelhitzker | 425609193 | Title: Link to working example
Question:
username_0: https://www.username_0.com/htl/TodoGUI/
If https does not work, you could try http or, if using Chrome, press the icon next to the "Add Bookmark" icon.
If the API does not work, please let me know. I might have to change the IP of the API then.
Answers:
username_1: Hi @username_0!
Works perfectly fine 👍. The extra point for Angular Material is yours.
Greetings,
<NAME>.
Status: Issue closed
|
rbind/support | 258600609 | Title: feature request?
Question:
username_0: Unsure if this is a feature request, or something I can kludge up on my own ...
https://stackoverflow.com/questions/46287117/blogdown-how-to-add-social-media-sharing-to-blog-post-view-default-theme
Status: Issue closed |
TIBCOSoftware/flogo-lib | 225793686 | Title: Error must be reported for unsupported activity configuration type
Question:
username_0: Currently, no validation is performed on the activity configuration type.
No error is reported for the following invalid configuration, e.g.:
```json
"attributes": [
  {
    "name": "MyCustomField",
    "value": "MyValue",
    "type": "foo"
  }
]
```
Unsupported types (`foo` in this case) should be reported to the activity developer.
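A hedged sketch of the kind of check being requested — `supportedTypes`, `Attribute`, and `ValidateAttributes` are illustrative names, not flogo-lib's actual API:
```go
package config

import "fmt"

// supportedTypes is an illustrative set of valid attribute type names.
var supportedTypes = map[string]bool{
	"string":  true,
	"integer": true,
	"boolean": true,
}

// Attribute mirrors the JSON shape quoted above.
type Attribute struct {
	Name  string `json:"name"`
	Value string `json:"value"`
	Type  string `json:"type"`
}

// ValidateAttributes reports an error for the first unsupported type found,
// so the activity developer sees the bad value instead of silent acceptance.
func ValidateAttributes(attrs []Attribute) error {
	for _, a := range attrs {
		if !supportedTypes[a.Type] {
			return fmt.Errorf("attribute %q has unsupported type %q", a.Name, a.Type)
		}
	}
	return nil
}
```
|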
RagtagOpen/nomad | 265457371 | Title: Consistent python version
Question:
username_0: Based on https://sentry.io/share/issue/3232323036302e333639373537353831/, it looks like this app is running on Python 3.6, but the Dockerfile specifies 2.7: https://github.com/RagtagOpen/nomad/blob/master/Dockerfile#L1. Which Python version(s) does this support? It's probably best to run the same version locally and on Heroku.
Answers:
username_1: It was initially designed to run in both, but it looks like we didn't quite keep up with that. The Dockerfile should be updated to use 3.
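A minimal sketch of that change (the exact base-image tag is an assumption; use whatever matches production):
```dockerfile
# Was (per the linked Dockerfile line): a python:2.7 base image.
# Suggested update so local, Docker, and Heroku all run Python 3:
FROM python:3.6
```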
Status: Issue closed
|
ilarinie/todo-lista | 147508162 | Title: Code review 11.4.2015
Question:
username_0: 11.4.2016 20:30
**Code review**
- The documentation is generally clear, and everything needed can be found. The hand-sketched site map of the UI is a bit hard to interpret.
- The listing test doesn't run on the users server; the link returns a 404.
- There isn't much actual application-logic code yet, but the test classes show that the MVC architecture is being followed.
- The program code is clear. It's easy to follow what each part is meant to do.
- The UI sketch seems clear. On the "New task" page I noticed that when the browser is shrunk to half a laptop screen, the nav bar slides over the task-creation fields. But clearly not much time has yet been spent on implementing the UI. |
chewie550/DAT-NYC-30_portfolio | 126967028 | Title: HW1 Feedback
Question:
username_0: Status: Pass (Homework is graded on "Pass" or "Needs Improvement")
Comments:
Good work! Your analysis is clear and concise, and your methods are straightforward and easy to follow. I would encourage you to explore methods such as "groupby", which may not be faster than your methods for this assignment, but would be for more complex datasets. We showed some examples in class; you can reach out to your classmates to see their work, or to us if you have any further questions. The histogram is a good choice for the age bins. Adding some captions and a couple more visuals would help communicate your results more clearly. Keep up the good work!!
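For illustration, a hedged pandas sketch of the suggested `groupby` approach (the column names and data are invented, not from the homework):
```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame({
    "pclass": np.random.randint(1, 4, 20),
    "age": np.random.randint(1, 80, 20),
})
# One aggregate per group, instead of filtering the frame once per class.
print(df.groupby("pclass")["age"].mean())
```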
Answers:
username_1: Thanks for your submission!
Your intuition is spot on for predicting survival. I encourage you to try to think of new ideas for features - what about the Name feature? Maybe you could split up the string to extract new information... |
alibaba/Sentinel | 401823504 | Title: [Feature] Reactive support for Sentinel
Question:
username_0: ## Issue Description
Type: *feature request*
### Describe what feature you want
Reactive can bring higher throughput for IO-intensive workloads. It can also bring more elegant programming style (fluent and functional, event driven), which has been a trend in Java community. Since Sentinel has supported asynchronous entry (`SphU.asyncEntry(resourceName)`), we can integrate with reactive stream libraries, like:
- [ ] RxJava 2.x
- [ ] [Project Reactor](https://projectreactor.io/)
Once we've accomplished this, we can easily integrate Sentinel with Spring WebFlux, Vert.x, etc.
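For illustration, a hedged sketch of what user-side Reactor integration could look like — `SentinelReactorTransformer` matches the adapter class linked later in this thread, but the exact usage shown here is an assumption:
```java
import com.alibaba.csp.sentinel.adapter.reactor.SentinelReactorTransformer;
import reactor.core.publisher.Mono;

public class ReactorDemo {
    public static void main(String[] args) {
        Mono.just("payload")
            // Wrap the reactive pipeline in a Sentinel resource so flow
            // control rules for "remoteResource" apply to this stream.
            .transform(new SentinelReactorTransformer<>("remoteResource"))
            .subscribe(System.out::println);
    }
}
```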
Answers:
username_1: Looking forward to it
username_2: I will solve this problem soon, waiting for my good news.
username_0: Nice. Looking forward to your work :)
username_0: @username_2 Any progress on integration with RxJava 2.x? :)
username_2: emm... RxJava is more complicated than I thought; progress is not going well
username_0: Don't worry, we could discuss the design here :)
You could also refer to the code of [SentinelReactorSubscriber](https://github.com/alibaba/Sentinel/blob/master/sentinel-adapter/sentinel-reactor-adapter/src/main/java/com/alibaba/csp/sentinel/adapter/reactor/SentinelReactorSubscriber.java).
username_3: Hi @username_2 @username_0 ,
Did we make any progress on this?
If yes, could you please point me to the right repo/file where I can read the implementation/example?
Thanks |
GovernIB/distribucio | 839725054 | Title: Error when accessing a mailbox without permission
Question:
username_0: The problem is that emails received from Distribucio get forwarded, and if someone without permission tries to open them, they get an unhandled error. The fix would be to inform the user that they do not have access to that particular mailbox.
[error permis.pdf](https://github.com/GovernIB/distribucio/files/6197549/error.permis.pdf)
Status: Issue closed |
rg-engineering/ioBroker.heatingcontrol | 1031228893 | Title: Version 2.6.2 does not trigger on thermostat changes
Question:
username_0: Hello,
I use the Heatingcontrol adapter together with Tado. The system is configured so that when I change the temperature either in the Tado app or at the thermostat, the adapter should go into override for one hour. In this version that apparently no longer works: the new value coming from the Tado setting data point is not applied as the new override.
Answers:
username_0: Hello, I'm not sure whether I came from 2.5. But yes, I'm on the stable repository.
Briefly, here is what I did:
Turned the heating adapter off, set it to debug, then started the adapter.
Set 19.5 °C in the living room. This was recognized as an override.
1-2 minutes later, set the living room to 20.5 °C. This did not overwrite the override, however.
Attached is the log file:
[heatingcontrol.log](https://github.com/username_1/ioBroker.heatingcontrol/files/7381133/heatingcontrol.log)
username_1: I have deleted the log here since it contained personal data... I'll delete it on my side too once I no longer need it... |
threefoldfoundation/tft-stellar | 861083210 | Title: invalid tokencode in the tftstatistics gives an internal server error
Question:
username_0: ```
File "/root/sandbox/var/downloaded_packages/threefoldfoundation_tft-stellar_tft_statistics_master/tft-stellar/ThreeBotPackages/tft_statistics/bottle/../../../lib/stats/stats.py", line 182, in getstatistics
asset_issuer = _ASSET_ISUERS[self._network][tokencode]
KeyError: 'TFTAdetailed=true'
```
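A hedged, self-contained sketch of the kind of guard that would turn this into a controlled error instead of a 500 — the dict contents and function name are made up; only `_ASSET_ISUERS` is quoted from the traceback:
```python
# Validate the tokencode before indexing, so a garbled value like
# "TFTAdetailed=true" yields a clean client error instead of a KeyError.
_ASSET_ISUERS = {"TEST": {"TFT": "GA_EXAMPLE_ISSUER"}}  # illustrative data

def get_asset_issuer(network: str, tokencode: str) -> str:
    issuers = _ASSET_ISUERS[network]
    if tokencode not in issuers:
        raise ValueError(f"invalid tokencode: {tokencode!r}")
    return issuers[tokencode]

print(get_asset_issuer("TEST", "TFT"))  # -> GA_EXAMPLE_ISSUER
```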
Status: Issue closed |
ployground/ploy_virtualbox | 58245244 | Title: KeyError: 'defaultdisk' on OS X Yosemite / 10.10.2
Question:
username_0: I went down the "homebrew -> py-virtualenv -> git clone -> make -> pip install ploy_virtualbox -> quickstart" path, and that's what I ended up with:
```sh
(bsdploy)McLenix:ploy-quickstart username_0$ ploy start ploy-demo
INFO: Creating instance 'ploy-demo'
INFO: Adding default 'sata' controller.
Traceback (most recent call last):
File "/Users/username_0/Projects/private/unixuni/bsdploy/bin/ploy", line 9, in <module>
load_entry_point('ploy==1.0.3', 'console_scripts', 'ploy')()
File "/Users/username_0/Projects/private/unixuni/bsdploy/lib/python2.7/site-packages/ploy/__init__.py", line 540, in ploy
return ctrl(argv)
File "/Users/username_0/Projects/private/unixuni/bsdploy/lib/python2.7/site-packages/ploy/__init__.py", line 532, in __call__
args.func(sub_argv, args.func.__doc__)
File "/Users/username_0/Projects/private/unixuni/bsdploy/lib/python2.7/site-packages/ploy/__init__.py", line 284, in cmd_start
result = instance.start(overrides)
File "/Users/username_0/Projects/private/unixuni/bsdploy/lib/python2.7/site-packages/ploy_virtualbox/__init__.py", line 327, in start
medium = self.master.disks[medium[8:]].filename(self)
File "/Users/username_0/Projects/private/unixuni/bsdploy/lib/python2.7/site-packages/ploy_virtualbox/__init__.py", line 540, in __getitem__
self._cache[key] = self.klass(key, self.config[key])
KeyError: 'defaultdisk'
(bsdploy)McLenix:ploy-quickstart username_0$ which python
/Users/username_0/Projects/private/unixuni/bsdploy/bin/python
(bsdploy)McLenix:ploy-quickstart username_0$ python --version
Python 2.7.6
(bsdploy)McLenix:ploy-quickstart username_0$ which pip
/Users/username_0/Projects/private/unixuni/bsdploy/bin/pip
(bsdploy)McLenix:ploy-quickstart username_0$ which ploy
/Users/username_0/Projects/private/unixuni/bsdploy/bin/ploy
(bsdploy)McLenix:ploy-quickstart username_0$ pip list
ansible (1.8.3)
ecdsa (0.13)
Jinja2 (2.7.3)
lazy (1.2)
MarkupSafe (0.23)
paramiko (1.15.2)
pip (1.5.6)
ploy (1.0.3)
ploy-virtualbox (1.1.0)
pycrypto (2.6.1)
PyYAML (3.11)
setuptools (12.2)
virtualenv (12.0.7)
wsgiref (0.1.2)
zc.buildout (2.3.1)
(bsdploy)McLenix:ploy-quickstart username_0$ uname -a
Darwin McLenix.private 14.1.0 Darwin Kernel Version 14.1.0: Mon Dec 22 23:10:38 PST 2014; root:xnu-2782.10.72~2/RELEASE_X86_64 x86_64
(bsdploy)McLenix:ploy-quickstart username_0$ cat etc/ploy.conf
[vb-instance:ploy-demo]
vm-nic2 = nat
vm-natpf2 = ssh,tcp,,44003,,22
storage =
--medium vb-disk:defaultdisk
--type dvddrive --medium http://mfsbsd.vx.sk/files/iso/10/amd64/mfsbsd-se-10.1-RELEASE-amd64.iso --medium_sha1 03af247c1058a78a251c46ad5a13dc7b84a7ee7d
(bsdploy)McLenix:ploy-quickstart username_0$
```
any hints?
Answers:
username_1: Which version of bsdploy are you using? In fact, bsdploy does not show up in your output above (except as the name of your virtualenv).
The ``defaultdisk`` behaviour is a bsdploy feature; it's neither part of ploy_virtualbox nor ploy itself.
hth
username_1: @username_0 is this still an issue for you?
username_2: Hi there, it is for me. I'm using the latest bsdploy from git, and also have this problem on Yosemite:
(bsdploy)unnamed-72:ploy-quickstart joe$ ploy start ploy-demo
INFO: Creating instance 'ploy-demo'
INFO: Adding default 'sata' controller.
Traceback (most recent call last):
File "/Users/joe/Documents/TrueSpeed/Ploy/bsdploy/bin/ploy", line 9, in <module>
load_entry_point('ploy==1.2.0', 'console_scripts', 'ploy')()
File "/Users/joe/Documents/TrueSpeed/Ploy/bsdploy/lib/python2.7/site-packages/ploy/__init__.py", line 557, in ploy
return ctrl(argv)
File "/Users/joe/Documents/TrueSpeed/Ploy/bsdploy/lib/python2.7/site-packages/ploy/__init__.py", line 549, in __call__
args.func(sub_argv, args.func.__doc__)
File "/Users/joe/Documents/TrueSpeed/Ploy/bsdploy/lib/python2.7/site-packages/ploy/__init__.py", line 284, in cmd_start
result = instance.start(overrides)
File "/Users/joe/Documents/TrueSpeed/Ploy/bsdploy/lib/python2.7/site-packages/ploy_virtualbox/__init__.py", line 327, in start
medium = self.master.disks[medium[8:]].filename(self)
File "/Users/joe/Documents/TrueSpeed/Ploy/bsdploy/lib/python2.7/site-packages/ploy_virtualbox/__init__.py", line 540, in __getitem__
self._cache[key] = self.klass(key, self.config[key])
KeyError: 'defaultdisk'
username_2: I've got it working now. I think that the problem was environmental. I removed all references to virtualenv from my .profile, and rebuilt ploy, and now it works fine. Not sure which of those two things made the difference.
Status: Issue closed
username_1: @username_2 yes, looks like there might have been an older version of ploy_virtualbox or bsdploy interfering.
at any rate I'm glad it's working for you, and will close this issue since it's either working for @username_0 too, or we've lost him along the way :)
aws/aws-sdk-cpp | 440164907 | Title: Build Error on GCC9
Question:
username_0: ### What platform/OS are you using?
Fedora 30
### What compiler are you using? what version?
GCC 9.0.1
### What's your CMake arguments?
-DCPP_STANDARD=17
I got the following error. Looks like a rule-of-5 warning that gets treated as an error. Is there a way to turn off -Werror in my build?
```
[ 0%] Building CXX object aws-cpp-sdk-core/CMakeFiles/aws-cpp-sdk-core.dir/source/client/AWSClient.cpp.o
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp: In member function ‘virtual Aws::Client::AWSError<Aws::Client::CoreErrors> Aws::Client::AWSJsonClient::BuildAWSError(const std::shared_ptr<Aws::Http::HttpResponse>&) const’:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:750:111: error: implicitly-declared ‘Aws::Client::AWSError<Aws::Client::CoreErrors>& Aws::Client::AWSError<Aws::Client::CoreErrors>::operator=(const Aws::Client::AWSError<Aws::Client::CoreErrors>&)’ is deprecated [-Werror=deprecated-copy]
750 | error = AWSError<CoreErrors>(CoreErrors::NETWORK_CONNECTION, "", "Unable to connect to endpoint", true);
| ^
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:20:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/client/AWSError.h:51:13: note: because ‘Aws::Client::AWSError<Aws::Client::CoreErrors>’ has user-provided ‘Aws::Client::AWSError<ERROR_TYPE>::AWSError(const Aws::Client::AWSError<Aws::Client::CoreErrors>&) [with ERROR_TYPE = Aws::Client::CoreErrors]’
51 | AWSError(const AWSError<CoreErrors>& rhs) :
| ^~~~~~~~
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:763:54: error: implicitly-declared ‘Aws::Client::AWSError<Aws::Client::CoreErrors>& Aws::Client::AWSError<Aws::Client::CoreErrors>::operator=(const Aws::Client::AWSError<Aws::Client::CoreErrors>&)’ is deprecated [-Werror=deprecated-copy]
763 | IsRetryableHttpResponseCode(responseCode));
| ^
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:20:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/client/AWSError.h:51:13: note: because ‘Aws::Client::AWSError<Aws::Client::CoreErrors>’ has user-provided ‘Aws::Client::AWSError<ERROR_TYPE>::AWSError(const Aws::Client::AWSError<Aws::Client::CoreErrors>&) [with ERROR_TYPE = Aws::Client::CoreErrors]’
51 | AWSError(const AWSError<CoreErrors>& rhs) :
| ^~~~~~~~
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:768:61: error: implicitly-declared ‘Aws::Client::AWSError<Aws::Client::CoreErrors>& Aws::Client::AWSError<Aws::Client::CoreErrors>::operator=(const Aws::Client::AWSError<Aws::Client::CoreErrors>&)’ is deprecated [-Werror=deprecated-copy]
768 | error = GetErrorMarshaller()->Marshall(*httpResponse);
| ^
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:20:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/client/AWSError.h:51:13: note: because ‘Aws::Client::AWSError<Aws::Client::CoreErrors>’ has user-provided ‘Aws::Client::AWSError<ERROR_TYPE>::AWSError(const Aws::Client::AWSError<Aws::Client::CoreErrors>&) [with ERROR_TYPE = Aws::Client::CoreErrors]’
51 | AWSError(const AWSError<CoreErrors>& rhs) :
| ^~~~~~~~
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp: In member function ‘virtual Aws::Client::AWSError<Aws::Client::CoreErrors> Aws::Client::AWSXMLClient::BuildAWSError(const std::shared_ptr<Aws::Http::HttpResponse>&) const’:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:846:111: error: implicitly-declared ‘Aws::Client::AWSError<Aws::Client::CoreErrors>& Aws::Client::AWSError<Aws::Client::CoreErrors>::operator=(const Aws::Client::AWSError<Aws::Client::CoreErrors>&)’ is deprecated [-Werror=deprecated-copy]
846 | error = AWSError<CoreErrors>(CoreErrors::NETWORK_CONNECTION, "", "Unable to connect to endpoint", true);
| ^
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:20:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/client/AWSError.h:51:13: note: because ‘Aws::Client::AWSError<Aws::Client::CoreErrors>’ has user-provided ‘Aws::Client::AWSError<ERROR_TYPE>::AWSError(const Aws::Client::AWSError<Aws::Client::CoreErrors>&) [with ERROR_TYPE = Aws::Client::CoreErrors]’
51 | AWSError(const AWSError<CoreErrors>& rhs) :
| ^~~~~~~~
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:858:104: error: implicitly-declared ‘Aws::Client::AWSError<Aws::Client::CoreErrors>& Aws::Client::AWSError<Aws::Client::CoreErrors>::operator=(const Aws::Client::AWSError<Aws::Client::CoreErrors>&)’ is deprecated [-Werror=deprecated-copy]
858 | error = AWSError<CoreErrors>(errorCode, "", ss.str(), IsRetryableHttpResponseCode(responseCode));
| ^
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:20:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/client/AWSError.h:51:13: note: because ‘Aws::Client::AWSError<Aws::Client::CoreErrors>’ has user-provided ‘Aws::Client::AWSError<ERROR_TYPE>::AWSError(const Aws::Client::AWSError<Aws::Client::CoreErrors>&) [with ERROR_TYPE = Aws::Client::CoreErrors]’
51 | AWSError(const AWSError<CoreErrors>& rhs) :
| ^~~~~~~~
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:872:61: error: implicitly-declared ‘Aws::Client::AWSError<Aws::Client::CoreErrors>& Aws::Client::AWSError<Aws::Client::CoreErrors>::operator=(const Aws::Client::AWSError<Aws::Client::CoreErrors>&)’ is deprecated [-Werror=deprecated-copy]
872 | error = GetErrorMarshaller()->Marshall(*httpResponse);
| ^
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:20:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/client/AWSError.h:51:13: note: because ‘Aws::Client::AWSError<Aws::Client::CoreErrors>’ has user-provided ‘Aws::Client::AWSError<ERROR_TYPE>::AWSError(const Aws::Client::AWSError<Aws::Client::CoreErrors>&) [with ERROR_TYPE = Aws::Client::CoreErrors]’
51 | AWSError(const AWSError<CoreErrors>& rhs) :
| ^~~~~~~~
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:31:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/utils/Outcome.h: In instantiation of ‘Aws::Utils::Outcome<R, E>& Aws::Utils::Outcome<R, E>::operator=(Aws::Utils::Outcome<R, E>&&) [with R = std::shared_ptr<Aws::Http::HttpResponse>; E = Aws::Client::AWSError<Aws::Client::CoreErrors>]’:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:164:23: required from here
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/utils/Outcome.h:84:27: error: implicitly-declared ‘Aws::Client::AWSError<Aws::Client::CoreErrors>& Aws::Client::AWSError<Aws::Client::CoreErrors>::operator=(const Aws::Client::AWSError<Aws::Client::CoreErrors>&)’ is deprecated [-Werror=deprecated-copy]
84 | error = std::move(o.error);
| ~~~~~~^~~~~~~~~~~~~~~~~~~~
In file included from /home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/source/client/AWSClient.cpp:20:
/home/fedora/aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/client/AWSError.h:51:13: note: because ‘Aws::Client::AWSError<Aws::Client::CoreErrors>’ has user-provided ‘Aws::Client::AWSError<ERROR_TYPE>::AWSError(const Aws::Client::AWSError<Aws::Client::CoreErrors>&) [with ERROR_TYPE = Aws::Client::CoreErrors]’
51 | AWSError(const AWSError<CoreErrors>& rhs) :
| ^~~~~~~~
cc1plus: all warnings being treated as errors
make[2]: *** [aws-cpp-sdk-core/CMakeFiles/aws-cpp-sdk-core.dir/build.make:204: aws-cpp-sdk-core/CMakeFiles/aws-cpp-sdk-core.dir/source/client/AWSClient.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:9693: aws-cpp-sdk-core/CMakeFiles/aws-cpp-sdk-core.dir/all] Error 2
make: *** [Makefile:130: all] Error 2
```
Answers:
username_0: Looks like that error can be resolved by adding:
```cpp
AWSError& operator=(const AWSError<ERROR_TYPE>&) = default;
```
to AWSError.
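As a side note on the `-Werror` question above: a common workaround, assuming the project's CMake honors extra flags passed via `CMAKE_CXX_FLAGS`, is to demote just this diagnostic:
```sh
# GCC: keep -Wdeprecated-copy as a warning instead of promoting it to an error.
cmake .. -DCPP_STANDARD=17 -DCMAKE_CXX_FLAGS="-Wno-error=deprecated-copy"
```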
username_1: Thanks for reporting this and for the PR. We'll have it merged ASAP.
GCC 9 is looking great already.
username_2: Thank you for your feedback; your PR was merged in today's release: https://github.com/aws/aws-sdk-cpp/commit/201059b0b15b3e01dfead6cdfd27aa71058e64c0.
Please reopen this issue if you have more questions.
Status: Issue closed
|
atlas-engineer/nyxt | 912567554 | Title: Fresh install, wrong zoom ratio, Nyxt won't start
Question:
username_0: **Describe the bug**
fresh install, set default zoom ratio to "125%", which raised an unbound error. Restarted Nyxt; the window won't appear.
**Precise recipe to reproduce the issue**
as above
**Information**
- OS name+version: Arch
- Graphics card and driver: Intel Haswell-ULT Integrated Graphics
- Desktop environment / Window manager name+version: xmonad
- How you installed Nyxt (Guix pack, package manager, build from source): AUR
- Information from command copy-system-information:
If you can't run copy-system-information, try `nyxt --system-information` from
a shell. If this still does not work, please provide the following:
- Nyxt version (from =M-x nyxt-version= or =nyxt --version=):
- Lisp implementation/version (if built from source):
- Kernel name+version:
- WebKitGTK+ | QtWebEngine version:
**Output when started from a shell**
Answers:
username_0: Running from the command line gave some error info because of auto-config.lisp:

It registered all the wrong value settings... it seems every button click, even repeated ones, is registered again and again.
username_1: Registering every click is intentional.
It's strange that it got appended to configuration. It shouldn't. I'll push the type-strengthening change disallowing this value in an hour.
username_1: Okay, I've added more types to buffer slots (in bf2167c95d47a5d049c5936a43e60e79df7ee70b) and made `configure-slot` stricter about types (in ac9722fc5998b7a58ba860dec067c8d8b22ef997).
In plain English: Nyxt should now warn you when you try to put something it does not expect into a slot. But to get this, you need to wait for a release or build from the master branch :(
username_2: Also, Nyxt should start anyway when define-configuration is used with wrong
values.
See https://github.com/atlas-engineer/nyxt/issues/1498. |
artin-hackers/drop | 637229138 | Title: Zombie Spawner
Question:
username_0: As a player I want a survival mode, so that deathmatch against other players is not the only mode.
### Acceptance Criteria
* [ ] Zombies spawn around the living players.
* [ ] Mode can be (de)activated from the command line. |
broadinstitute/picard | 1067894135 | Title: remove duplication error in vcf result
Question:
username_0: Hello, can you tell me how duplicates are marked in a BAM file, or point me to some materials? Does it take base variation at a particular position into account, or just the base quality, as in the picture?

Answers:
username_1: Hi username_0, the Picard issue tracker is for specific issues with Picard tools. Your question seems to be a more general support request. Please try the GATK forum: https://gatk.broadinstitute.org/hc/en-us/community/topics
Status: Issue closed
|
LafayetteCollegeLibraries/spot | 782338171 | Title: loosen file requirement for private new works
Question:
username_0: Only require files on items with a visibility other than "private," which would give us something like "drafts."
To do this, we'll have to a) flip the `config.work_requires_files` setting in the Hyrax initializer (see the sketch below) and b) copy some of the JavaScript locally to check the work's permission as part of the file checking.
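A hedged sketch of step (a) — `work_requires_files` is named in this issue, while the initializer path and block form follow Hyrax convention and are assumptions:
```ruby
# config/initializers/hyrax.rb (illustrative)
Hyrax.config do |config|
  # Flip the global requirement so works can be saved without files,
  # enabling "draft"-like private works.
  config.work_requires_files = false
end
```
|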
xolvio/chimp | 128042558 | Title: Issues with WebdriverCSS and wrapAsync
Question:
username_0: The feature of wrapping asynchronous methods with wrapAsync doesn't seem to work properly on the webdrivercss command for taking css diff screenshots. The call to the function looks like this:
```javascript
browser.webdrivercss('body', {
name: 'body',
elem: 'body'
}, function (err, res){
assert.ifError(err);
assert.ok(res.body.isWithinMisMatchTolerance);
});
```
It looks to be in the standard style where the last argument is a cb that takes err, res as arguments. According to what I've read, `wrapAsync` should be able to handle this.
I've wrapped the function to be synchronous like this:
```javascript
var cssSync = wrapAsync(browser.webdrivercss);
var res = cssSync('body', {
name: 'body',
elem: 'body'
});
assert.ok(res.body.isWithinMisMatchTolerance);
```
Still, whenever I run this I get the error: `Can't wait without a fiber`.
I've also tried passing in `browser` or `this` as contexts, just in case. No dice.
Thoughts?
Answers:
username_1: The async wrapping should happen automatically for added webdriver commands.
So if it would work, this should be enough:
```js
var result = browser.webdrivercss('body', {
name: 'body',
elem: 'body'
});
assert.ok(result.body.isWithinMisMatchTolerance);
```
Can you try this? If it doesn't work, I need to debug it.
username_0: My testing shows that this way does not work either--thinking back, that's why I tried to wrap it myself in the first place. It's giving me the same error, `Can't wait without a fiber`.
I'm using `[email protected]` and `[email protected]`. Other versions of webdrivercss aren't compatible with webdriverio v 3.0+. Could that be causing compatibility issues?
username_1: You pass the client/browser instance from Chimp to webdrivercss, right?
username_0: Correct, earlier in my code I have:
```javascript
var webdrivercss = require('webdrivercss');
[...] // Other stuff
webdrivercss.init(browser, {
screenshotRoot: 'screenshots',
failedComparisonRoot: 'diffs',
misMatchTolerance: 0.05,
screenWidth: [320,480,640,1024]
});
```
as per the example on webdriver's npm page.
username_1: Ok. So probably some asynchronous code in webdrivercss that is not wrapped automatically with wrapAsync causes the callback to not be in the fiber. To fix that we need to create a sync-webdrivercss package that patches the webdrivercss package, to work with fibers. Like we did with webdriver already (https://github.com/xolvio/sync-webdriverio). Or a fork of webdrivercss, if the patching is too hard.
username_2: This issue is being tracked in Chimpy
https://github.com/TheBrainFamily/chimpy/issues
Status: Issue closed
|
aaronhayes/react-use-hubspot-form | 822291246 | Title: process is undefined
Question:
username_0: - **I'm submitting a ...**
[x ] bug report
[ ] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
- **Summary**
Hey @username_1 I have really enjoyed the simplicity of the plugin so thanks for creating this.
After Gatsby Conf I was naturally excited to try out the new changes, but I ended up encountering warnings and an error after upgrading gatsby to ^3.0.0 and its dependencies of react to ^17.0.1 and react-dom to ^17.0.2.
* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. StackOverflow, personal fork, etc.)
Warnings:

The warnings are tied to the peer dependencies of 16.8.0 react and react-dom. I imagine these should be easily resolved by upgrading the package's peer dependencies to react and react-dom 17+ and making any minor adjustments in the plugin if necessary.
The error in production and development mode:
Production:

Development:

It's showing process as undefined. I believe this is because Gatsby upgraded from webpack 4 to 5, and webpack 5 has some breaking changes regarding how it exposes these global variables.
Here is Gatsby's migration guide from v2 to v3: https://www.gatsbyjs.com/docs/reference/release-notes/migrating-from-v2-to-v3/.
Here is the migration guide on webpack 4 to 5: https://webpack.js.org/migrate/5/.
Here are some Stack Overflow snippets of folks identifying the necessary changes in webpack 5 to support process/browser and the need to no longer rely on an implicitly provided process:


I hope this helps in identifying the necessary changes to the plugin.
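For anyone landing here, a hedged sketch of the usual consumer-side webpack 5 workaround (the plugin-side fix may differ; `process/browser` is the standard polyfill entry, and wiring it through `onCreateWebpackConfig` is the assumed integration point):
```js
// gatsby-node.js — re-provide the `process` global that webpack 4 shimmed.
const webpack = require('webpack')

exports.onCreateWebpackConfig = ({ actions }) => {
  actions.setWebpackConfig({
    plugins: [
      new webpack.ProvidePlugin({
        process: 'process/browser',
      }),
    ],
  })
}
```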
Answers:
username_1: Hey @username_0 ,
Thanks for reporting the issue. I see you've forked and fixed the issue. I'd be happy to merge any PR that fixes this :)
Status: Issue closed
username_1: Fix in `1.2.5` |
gbif/registry-spring-boot | 548002185 | Title: Allow "*" for CORS
Question:
username_0: Response works -> https://api.gbif-dev.org/v1/organization?limit=0 , but because of CORS, portal fails
Access to XMLHttpRequest at 'https://api.gbif-dev.org/v1/dataset/search?limit=0' from origin 'https://www.gbif-dev.org' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
(index):1 Access to XMLHttpRequest at 'https://api.gbif-dev.org/v1/organization?limit=0' from origin 'https://www.gbif-dev.org' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
(index):1 Access to XMLHttpRequest at 'https://api.gbif-dev.org/v1/organization/nonPublishing?limit=0' from origin 'https://www.gbif-dev.org' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
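For context, a hedged sketch of one standard way to allow any origin in a Spring application — the bean, mapping, and placement are illustrative assumptions, not necessarily how the registry fixed it:
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig {
    @Bean
    public WebMvcConfigurer corsConfigurer() {
        return new WebMvcConfigurer() {
            @Override
            public void addCorsMappings(CorsRegistry registry) {
                // "*" lets any origin (e.g. www.gbif-dev.org) call the API.
                registry.addMapping("/**").allowedOrigins("*");
            }
        };
    }
}
```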
Answers:
username_1: Fixed
username_2: Looks like it's still happening. When the page loads for the first time it shows the CORS error, but if you refresh the page it works.
username_2: Fixed in DEV
Status: Issue closed
|
LeetCode-Feedback/LeetCode-Feedback | 1129817555 | Title: 🚨 Please do not close issue 5526 silently; Please do not close issues without any response.
Question:
username_0: **Your LeetCode username**
N/A
**Bug type**
- [x] Problem
- [ ] Solution
- [ ] Programming language
- [ ] Missing test case
**Description**
[DD-2020006. Simple Game](https://leetcode-cn.com/problems/1zD30O/) has still not been restored to the problem list. See https://github.com/LeetCode-Feedback/LeetCode-Feedback/issues/5526 for details.
**The language you used**
N/A
**The code you submitted or ran**
N/A
**Expected behavior**
See https://github.com/LeetCode-Feedback/LeetCode-Feedback/issues/5526
**Screenshots**
N/A
**Additional context**
N/A
Answers:
username_1: Hi @username_0,
Thank you for reaching out to us, as I mentioned in the previous thread the questions are being updated. We do not have an estimated time for when it would be published, as a side note it's also possible that the problem is removed. The content team makes the decisions!
Status: Issue closed
username_0: Since "the questions are being updated", then why is the issue closed before being fixed? |
openstreetmap/iD | 332804323 | Title: Multipolygon inner/outer
Question:
username_0: Hi,
Sometimes multipolygons are very big, and when something strange happens it's not easy to see where the problem is.
1) At the moment, all the ways are orange: I suggest drawing the inner ways in one color and the outer ways in another.
2) When the outer way is not closed, I suggest marking the nodes at the break in blinking red so you can easily see them even when zoomed out (sometimes I use a road in the multipolygon, the road crosses a river, and I forget to put the bridge in the relation: very difficult to find!).
3) It would be great, when we save modifications, to get an alert when an outer ring is not closed. Just an alert, because sometimes it takes a long time to create one and you need to save your intermediate work before continuing the next day, for example.
Thanks
username_0
Status: Issue closed
Answers:
username_1: I think this is #4211
username_0: I don't agree with you: the 1st point is a new suggestion, specific to multipolygons. I have found a lot of bad multipolygons where the inner/outer role is wrong for some ways, but it's impossible to see, because you need to go through every member of the multipolygon to find which one(s) have a bad inner/outer role.
Highlighting a member under the mouse would improve that, but sometimes a multipolygon has a lot of members, and going over every member to highlight it, one after the other, while surely better than what we have today, is quite slow.
Using 3 different colors is THE solution, because you would see IMMEDIATELY the inner/outer/no-role bugs in the multipolygon (and you would also see immediately that everything is fine when the colors show the differences). I suggest another thing: being able to click on a way on the map to go directly to that way and fix the bad outer/inner/no-role. Or highlighting not from the relation's member list to the map but from the map to the list (when the mouse is over a way on the map, the corresponding line in the member list would be underlined; that way it would be easy to open that way in another tab and change it without losing the map of all the multipolygon's members in the first tab: after the correction in the 2nd tab, you would close it, go back to the first tab, open the next bad way in another tab, and so on).
I agree with the 2nd point; KeepRight solves the problem I mentioned, and integrating this check into iD would be great.
I don't agree with the 3rd point: #3130 talks about polygons or ways; I'm talking about multipolygons, where an outer (or inner) ring is sometimes made of many ways. Solving issue #3130 will not catch a multipolygon with a hole, because multipolygons are not one object but many different objects: iD needs to follow one way after the other to see whether the ring is closed, and if only #3130 is solved, iD will not follow the different ways, so it will not see the hole.
I agree with the 4th point: issue #4211 is the same.
Because of the 1st and 3rd points, which are new suggestions, please reopen this issue #5080.
Best regards
username_1: We might not be able to do exactly what you want. For example, we couldn't really validate a multipolygon in iD that's not fully downloaded from the OSM server. Also, one of the big concerns is surfacing a warning to a user who is not able to actually understand the issue or fix it. But #3130 is a general catch-all for adding more validation, and I think making sure closed ways are actually closed is one of the points in there.
Anyway sorry if we disagree on some of the points, but hopefully we will be improving these items in iD a little more with each new version. |
rapidsai/cuml | 501142411 | Title: [FEA] Add precomputed kernels to SVM
Question:
username_0: Sklearn defines a `precomputed` option for SVM's `kernel` parameter. In this case, instead of passing the training vectors, we pass the Gram matrix that contains the kernel values between all training examples.
This can be considered as a big kernel cache which is initialized by the user, and could be implemented by adapting the KernelCache class accordingly.
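For reference, a hedged sketch of the scikit-learn behavior being requested (standard sklearn API; the data here is made up):
```python
import numpy as np
from sklearn.svm import SVC

np.random.seed(0)
X = np.random.rand(20, 5)
y = np.random.randint(0, 2, 20)

gram = X @ X.T                       # user-computed linear-kernel Gram matrix
clf = SVC(kernel="precomputed").fit(gram, y)
# Prediction likewise takes kernel values between test and training samples.
print(clf.predict(X @ X.T))
```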
Answers:
username_1: Hello,
Is this functionality being considered for future implementation?
This would be quite useful, as the sklearn implementation is quite slow (being single-threaded), and I find ThunderSVM quite buggy and outdated.
Thanks,
Emil |
soyersoyer/SwCrypt | 461909767 | Title: Warning ⚠️ in Xcode 10.2.1 Swift 5.0.1
Question:
username_0: There are about 38 warning in `SwCrypt.swift` file.
```
'withUnsafeMutableBytes' is deprecated: use `withUnsafeMutableBytes<R>(_: (UnsafeMutableRawBufferPointer) throws -> R) rethrows -> R` instead
```
Answers:
username_0: This may fix with #51
username_1: No, the warnings are still popping up. The code needs to be refactored for Swift 5. Here's an example:
```swift
fileprivate func withUnsafePointers<A0, A1, Result>(
_ arg0: Data,
_ arg1: inout Data,
_ body: (
UnsafePointer<A0>,
UnsafeMutablePointer<A1>) throws -> Result
) rethrows -> Result {
return try arg0.withUnsafeBytes { p0 in
let b0: UnsafePointer<A0> = p0.baseAddress!.assumingMemoryBound(to: A0.self)
return try arg1.withUnsafeMutableBytes { p1 in
let b1: UnsafeMutablePointer<A1> = p1.baseAddress!.assumingMemoryBound(to: A1.self)
return try body(b0, b1)
}
}
}
```
Basically you need to add another variable inside the closure to obtain either an `UnsafePointer` or an `UnsafeMutablePointer`, depending on whether the call is to `withUnsafeBytes` or `withUnsafeMutableBytes` respectively.
Another example:
```swift
public static func generateRandom(_ size: Int) -> Data {
var data = Data(count: size)
data.withUnsafeMutableBytes { dataBytes in
let b0: UnsafeMutablePointer<UInt8> = dataBytes.baseAddress!.assumingMemoryBound(to: UInt8.self)
_ = CCRandomGenerateBytes!(b0, size)
}
return data
}
```
You can read more about [here](https://stackoverflow.com/a/55484396/2520497)
Full refactored file for v5.1.3 attached below.
[SwCrypt v5.1.3(refactored).swift.zip](https://github.com/soyersoyer/SwCrypt/files/3494028/SwCrypt.v5.1.3.refactored.swift.zip)
Cheers,
Dimitar
username_2: Thank you @username_1 |
electron/electron | 182679368 | Title: Does electron provide a way to automatically generate bindings for alterntive js languages?
Question:
username_0: The DOM provides interface definition language (.idl) files.
I wonder if the same applies to electron.
Answers:
username_1: I'm not sure what you mean by "alternative js languages". There is an effort under way to generate a description of the [API in JSON format](https://github.com/electron/electron/releases/download/v1.4.3/electron-api.json).
username_2: Pretty sure this is the intention of the JSON API file and, as a bonus, the outcome of the TypeScript definition project.
https://github.com/electron/electron/projects/3
Status: Issue closed
username_0: Thanks... I'll keep track of https://github.com/electron/electron/projects/3 |
cloud-custodian/cloud-custodian | 909514383 | Title: AWS API Gateway rest-api metric filters do not work properly
Question:
username_0: **Describe the bug**
It looks like there is a bug related to the metric filters for `aws.rest-api` resources, specifically around the dimensions being incorrect.
**To Reproduce**
```bash
$ c7n-org run --dryrun -c accounts.yml -s output -u policies/api-gateway/policies.yml
Exception running policy:unused-api-gateways account:test-account region:us-east-1 error:'apigateway'
```
Given the following policy:
```yaml
policies:
- name: unused-api-gateways
resource: rest-api
filters:
- type: metrics
name: Count
statistics: Sum
days: 7
period: 86400
value: 5
op: less-than
missing-value: 0
```
**Expected behavior**
```bash
$ c7n-org run --dryrun -c accounts.yml -s output -u policies/api-gateway/policies.yml --debug
2021-06-01 17:07:47,460: c7n_org:INFO Ran account:test-account region:us-east-1 policy:unused-api-gateways matched:2 time:1.14
```
**Background (please complete the following information):**
- OS: OS X
- Python Version: Python 3.9
- Custodian Version: 0.9.10.0
- Cloud Provider: aws
- Policy: [please exclude any account/sensitive information]
```yaml
policies:
- name: unused-api-gateways
resource: rest-api
filters:
- type: metrics
name: Count
statistics: Sum
days: 7
period: 86400
value: 5
op: less-than
missing-value: 0
```
- Traceback: `custodian.filters:WARNING CW Retrieval error: 'apigateway'`
**Additional context**
I already patched this and got it working on my local install; I will submit a follow-up PR for review.
Answers:
username_1: Thanks for reporting this, especially for including context about an upcoming PR!
Status: Issue closed
|
open-mmlab/mmaction2 | 1011841602 | Title: Run "demo/webcam_demo_spatiotemporal_det.py" , the program is stuck.
Question:
username_0: When I run "demo/webcam_demo_spatiotemporal_det.py" with a USB cam or an RTSP web cam, the program gets stuck after running for a period of time.
I think this is caused by the inference speed or the display speed being slower than the reading speed.
Some log information is as follows (sometimes it directly shows "Killed"):
```
INFO:__main__:Stdet Results: [[('sit', 0.8451391), ('carry/hold (an object)', 0.52010876)]]
DEBUG:__main__:Main thread inference time 627 ms
DEBUG:__main__:Read thread: 691 ms, 12 fps
DEBUG:__main__:Read thread: 231 ms, 35 fps
INFO:__main__:Stdet Results: [[('sit', 0.84119666), ('carry/hold (an object)', 0.5008541)]]
DEBUG:__main__:Main thread inference time 600 ms
DEBUG:__main__:Read thread: 296 ms, 27 fps
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
DEBUG:__main__:Read thread: 212 ms, 38 fps
DEBUG:__main__:Read thread: 332 ms, 24 fps
DEBUG:__main__:Read thread: 350 ms, 23 fps
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
INFO:__main__:Stdet Results: [[('sit', 0.8407789), ('carry/hold (an object)', 0.48366198)]]
DEBUG:__main__:Main thread inference time 1010 ms
DEBUG:__main__:Read thread: 199 ms, 40 fps
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
DEBUG:__main__:Read thread: 320 ms, 25 fps
INFO:__main__:Stdet Results: [[('sit', 0.8350647), ('carry/hold (an object)', 0.46698186), ('text on/look at a cellphone', 0.42576995)]]
DEBUG:__main__:Main thread inference time 657 ms
DEBUG:__main__:Read thread: 250 ms, 32 fps
DEBUG:__main__:Read thread: 247 ms, 32 fps
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
INFO:__main__:Stdet Results: [[('sit', 0.80609334), ('carry/hold (an object)', 0.48520392), ('text on/look at a cellphone', 0.5238202)]]
DEBUG:__main__:Main thread inference time 382 ms
DEBUG:__main__:Read thread: 251 ms, 32 fps
DEBUG:__main__:Read thread: 316 ms, 25 fps
DEBUG:__main__:Read thread: 273 ms, 29 fps
INFO:__main__:Stdet Results: [[('sit', 0.7361066), ('carry/hold (an object)', 0.47983062)]]
DEBUG:__main__:Main thread inference time 957 ms
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
DEBUG:__main__:Read thread: 256 ms, 31 fps
DEBUG:__main__:Read thread: 243 ms, 33 fps
DEBUG:__main__:Read thread: 300 ms, 27 fps
DEBUG:__main__:Read thread: 273 ms, 29 fps
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
INFO:__main__:Stdet Results: [[('sit', 0.7959219), ('carry/hold (an object)', 0.53366905)]]
DEBUG:__main__:Main thread inference time 970 ms
DEBUG:__main__:Read thread: 208 ms, 39 fps
DEBUG:__main__:Read thread: 350 ms, 23 fps
DEBUG:__main__:Read thread: 238 ms, 34 fps
INFO:__main__:Stdet Results: [[('sit', 0.82904106), ('carry/hold (an object)', 0.60257)]]
DEBUG:__main__:Main thread inference time 813 ms
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
DEBUG:__main__:Read thread: 217 ms, 37 fps
DEBUG:__main__:Read thread: 318 ms, 25 fps
DEBUG:__main__:Read thread: 332 ms, 24 fps
[Truncated]
DEBUG:__main__:Read thread: 228 ms, 35 fps
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
DEBUG:__main__:Read thread: 1157 ms, 7 fps
INFO:__main__:Stdet Results: [[('sit', 0.8563365), ('carry/hold (an object)', 0.6350206)]]
DEBUG:__main__:Main thread inference time 8273 ms
DEBUG:__main__:Read thread: 11494 ms, 1 fps
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
```
I have adjusted the keyframe:
```
keyframe = task.frames[len(task.frames) // 4]
```
However, it will still get stuck.
What parameters should I adjust so that the log output does not get stuck?
Answers:
username_0: Parameter setting:
```
def parse_args():
parser = argparse.ArgumentParser(
description='MMAction2 webcam spatio-temporal detection demo')
parser.add_argument(
'--config',
default=('../configs/detection/ava/'
'slowonly_omnisource_pretrained_r101_8x8x1_20e_ava_rgb.py'),
help='spatio temporal detection config file path')
parser.add_argument(
'--checkpoint',
default=('https://download.openmmlab.com/mmaction/detection/ava/'
'slowonly_omnisource_pretrained_r101_8x8x1_20e_ava_rgb/'
'slowonly_omnisource_pretrained_r101_8x8x1_20e_ava_rgb'
'_20201217-16378594.pth'),
help='spatio temporal detection checkpoint file/url')
parser.add_argument(
'--action-score-thr',
type=float,
default=0.4,
help='the threshold of human action score')
parser.add_argument(
'--det-config',
default='../demo/faster_rcnn_r50_fpn_2x_coco.py',
help='human detection config file path (from mmdet)')
parser.add_argument(
'--det-checkpoint',
default=('http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/'
'faster_rcnn_r50_fpn_2x_coco/'
'faster_rcnn_r50_fpn_2x_coco_'
'bbox_mAP-0.384_20200504_210434-a5d8aa15.pth'),
help='human detection checkpoint file/url')
parser.add_argument(
'--det-score-thr',
type=float,
default=0.9,
help='the threshold of human detection score')
parser.add_argument(
'--input-video',
default='0',
type=str,
help='webcam id or input video file/url')
parser.add_argument(
'--label-map', default='../demo/label_map_ava.txt', help='label map file')
parser.add_argument(
'--device', type=str, default='cuda:0', help='CPU/CUDA device option')
parser.add_argument(
'--output-fps',
default=15,
type=int,
help='the fps of demo video output')
parser.add_argument(
'--out-filename',
default=None,
type=str,
help='the filename of output video')
parser.add_argument(
[Truncated]
type=int,
help='give out a prediction per n frames')
parser.add_argument(
'--clip-vis-length',
default=8,
type=int,
help='Number of draw frames per clip.')
parser.add_argument(
'--cfg-options',
nargs='+',
action=DictAction,
default={},
help='override some settings in the used config, the key-value pair '
'in xxx=yyy format will be merged into config file. For example, '
"'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'")
args = parser.parse_args()
return args
``` |
facebook/react | 134917766 | Title: why this.props.children cannot re-render?
Question:
username_0: for example:
```js
class Parent extends React.Component {
componentDidMount() {
setTimeout(() => { this.forceUpdate();}, 1000);
}
render() {
return <div>{this.props.children}</div>;
}
}
class Child extends React.Component {
render() { return <div>{this.props.random}</div>}
}
class App extends React.Component {
render() {
return (
<Parent><Child random={Math.random()}/></Parent>
);
}
}
```
Answers:
username_1: It is because `Math.random()` is executed when `App` (your `Parent` component's parent, the `Child`'s grandparent) is rerendered. If you rerender the app in the setTimeout, it should do what you want, as sketched below.
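A minimal sketch of that suggestion — illustrative only, moving the timeout into `App` so the whole tree, including the `Math.random()` call, re-renders:
```js
class App extends React.Component {
  componentDidMount() {
    // Re-render App itself: its render() runs again, so Math.random()
    // produces a fresh value that flows down to Child.
    setTimeout(() => { this.forceUpdate(); }, 1000);
  }
  render() {
    return <Parent><Child random={Math.random()}/></Parent>;
  }
}
```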
Also, we use github to track bugs in the react core, rather than usage questions. A more appropriate place for usage questions is StackOverflow. This is a usage question, so I'm going to close out this issue, but feel free to continue the conversation on this thread or move the discussion over to StackOverflow.
Status: Issue closed
|
smartdevicelink/sdl_core | 276334970 | Title: PoliciesManager does not allow all requested params in case "parameters" field is omitted
Question:
username_0: **Occurrence: Always**
# Description
PoliciesManager does not allow all requested params in case the "parameters" field is omitted.
## Preconditions
- SDL and HMI are started.
- App is registered and activated.
- "parameters" field is omited at PolicyTable for used request
## Steps to reproduce
1. Mobile app sends request to SDL and this request is allowed by Policies for this mobile app.
## Actual result
SDL processes RPC as disallowed.
## Expected result
SDL transfers received request with all requested parameters as is to HMI
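For illustration, a hedged fragment of a policy-table RPC entry with the `parameters` field omitted — the key names follow SDL policy-table conventions, but the specific RPC and HMI levels are assumptions:
```json
"GetVehicleData": {
  "hmi_levels": ["BACKGROUND", "FULL", "LIMITED"]
}
```
Per this issue, SDL should treat the omitted `parameters` as "all parameters allowed" rather than disallowing the RPC.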
# Environment
# Attachments
# Expected delivery
- [ ] Source code updates
- [ ] Code comments
- [x] **UTs add/update _(not required)_**
- [ ] ATF tests add/update
- [x] **Manual tests _(not required)_**
- [x] **Add/update CI plans/jobs _(not required)_**
- [x] **SDD updates _(not required)_**
- [x] **Guidelines update ([sdl_core_guides](https://github.com/smartdevicelink/sdl_core_guides)) _(not required)_**
- [x] **Guidelines update ([sdl_hmi_integration_guidelines](https://github.com/smartdevicelink/sdl_hmi_integration_guidelines)) _(not required)_**
Answers:
username_1: Contributor priority is set `High` with reason: The issue is related to correctness of result code in response from HMI/SDL
username_2: The defect is not reproducible on current sdl_core (develop) branch.
Checked on: https://github.com/smartdevicelink/sdl_core/commit/0b19cf498d1c9972d67b85793dc6b029c26378a0
Branch: https://github.com/smartdevicelink/sdl_core/tree/develop
SDL behaves as expected.
Status: Issue closed
|
linkedconnections/gtfs2lc | 587580158 | Title: Add a URI template parameter feed_version to indicate the version of the feed
Question:
username_0: Different options to implement the identifier strategy for e.g., keeping a block ID persistent:
Using just local identifiers for, e.g., a block id gives 2 problems:
1. break federation, as multiple GTFS feeds get translated into Linked Connections and will reuse the same block IDs.
2. When an updated GTFS feed gets published, the block ids might conflict with the earlier version.
Suggestion solution to introduce a global identifier
`https://example.org/blocks/{block_id}`
This solves the problem of federating over different sources, but not yet the problem of an updated GTFS feed being translated to LC (unless, for your GTFS feed, block ids are incremental over time and you can rely on this).
So we need to scope it to the specific GTFS feed and this brings us to another problem: how do you identify this specific GTFS feed or the fact it got translated to RDF here.
Suggestion for a version number:
1. Rely on `feed_version` in `feed_info.txt` -- Design issue: don’t include patch version so that a block id stays the same when the minor and major version number didn’t change? (e.g., 1.2.0 → 1.2.1)
2. When the GTFS feed’s version is not set, instead use a timestamp from the moment we started `gtfs2lc`
URI template for e.g., block then becomes:
`https://example.org/blocks/{feed_version}/{block_id}` |
Tencent/sluaunreal | 426879011 | Title: lua-wrapper.exe runtime error
Question:
username_0: 
Answers:
username_0: The dependencies require Newtonsoft.Json 11.0.02, but I can't find that version number; I only found 11.0.0. Is that version of the library not usable?
Status: Issue closed
username_1: @username_0
https://www.nuget.org/packages/newtonsoft.json/11.0.2 |
Milad-Akarie/auto_route_library | 812962155 | Title: None
Question:
username_0: Is this still a problem in 1.0.1?
Also, your route
```
AdaptiveRoute(
page: ChatListPage,
path: '/det',
),
```
is defined as a child of /myDetails.
If you don't nest it as a child, does it work properly? |
Activiti/Activiti | 417696099 | Title: occasional travis errors in builds
Question:
username_0: The command "~/bin/install-jdk.sh --target "/home/travis/openjdk11" --workspace "/home/travis/.cache/install-jdk" --feature "11" --license "GPL" --cacerts" failed and exited with 7 during .
Could be related to https://github.com/junit-team/junit4/issues/1577
Answers:
username_0: This hasn't happened before, so I'm hoping it's a brief thing that the Travis guys will fix.
Status: Issue closed
username_0: This now seems to have resolved itself. |
sdqali/hugo | 440409689 | Title: Managing security certificates from the console - on Windows, Mac OS X and Linux
Question:
username_0: Comments for [Managing security certificates from the console - on Windows, Mac OS X and Linux](https://username_0.in/blog/2012/06/05/managing-security-certificates-from-the-console-on-windows-mac-os-x-and-linux/index.html) |
KinveyApps/Xamarin-Starter | 224918043 | Title: SQLite Driver issue Android
Question:
username_0: Running the application I get the following dialog on start. "Detected problems with app native libraries. libmonosgen-64bit-2.0.so: unauthorized access to "/system/lib64/libsqlite.so"
I think this is an error with the SQLite packages sqlite.net-pcl and sqlite.netcore-pcl. As a workaround I installed this [package](https://www.nuget.org/packages/SQLite.Net.Platform.XamarinAndroidN) and referenced it in SQLite_Android.Getconnection()
Answers:
username_1: Thanks @username_0 for the info and workaround! This is an issue with our current SDK, and I will open a ticket for this. I will update this issue when there is a fix available.
Status: Issue closed
username_1: Issue resolved with [dotnet-sdk #123](https://github.com/Kinvey/dotnet-sdk/pull/123) available in the `3.0.9` release. The `SQLite.Net.Platform.XamarinAndroidN` package has been added as a dependency.
username_2: Do I have to make any changes at all? Because I just updated to 3.0.9 and I'm still having this issue.
username_1: HI @username_2, thank you for your question. I had to make some additional project changes to the Xamarin-Starter to support `3.0.9`, which I have just pushed. Please pull the latest changes and try again. |
ros-planning/moveit | 330436067 | Title: Collision Objects disappear shortly after being added via PlanningSceneMonitor
Question:
username_0: ### Description
I'm attempting to add a collision object to a PlanningSceneMonitor's monitored scene. I tried using the approach below, which sends a moveit_msgs::CollisionObject message over the 'collision_objects' topic. It should then be received by the WorldGeometryMonitor and added to the planning scene. I have another node which listens to the planning scenes published by the monitor (over the "current_planning_scene" topic). Ultimately I want to check for collisions with that node, but for now I am just trying to get the collision objects added properly. Currently, this node just counts the number of collision objects present in the received planning scene and prints the count.
When I send the collision object once, as below, the 'receiver' node gets an updated planning scene containing the new collision object (as expected) and prints 1. No more planning scenes are sent out by the ps monitor until I move the robot, causing an update via the "joint_states" topic. When the second planning scene is received, the count is now 0 and there are no collision objects in the scene. So sometime between the initial message and the second message, the collision object has disappeared. Furthermore, the ps monitor never sends a message indicating that the collision object is no longer in the world as I would expect it to do. It's as though the collision object just disappears without any indication.
The object seems to disappear after about 0.5 seconds. If I keep publishing collision objects with changing id's at 10 hz, the receiver node indicates that 5 objects are present at a given time. If I publish at 100 hz, ~50 are present. So it seems the collision objects are added, persist for half a second, and then disappear without the ps monitor indicating that they have disappeared.
In addition to the approach below, I have tried sending the objects via planning scene diffs over the "planning_scenes" topic (using startSceneMonitor rather than startWorldGeometryMonitor). I also tried sending them using a PlanningSceneInterface's attachCollisionObjects method ( which I believe just uses the "planning_scenes" topic as well. Lastly, I tried to use the moveit_visual_tools 'processCollisionObjectMsg' method which bypasses ros and uses a LockedPlanningSceneRW to add the collision object. Exact same result in each case.
One last thing... I receive a '[WARN] Returning dirty link transforms' whenever the collision object is published.
_Publisher_
```
int main(int argc, char **argv)
{
ros::init(argc, argv, "collision_object_publisher"); // added: ros::init must run before creating a NodeHandle
ros::NodeHandle n; // added: `n` is used below but was never declared in the original snippet
// set up planning scene monitor
planning_scene_monitor::PlanningSceneMonitorPtr psm_ptr = std::make_shared<planning_scene_monitor::PlanningSceneMonitor>("robot_description");
psm_ptr->startStateMonitor("joint_states");
psm_ptr->startWorldGeometryMonitor("collision_objects", "world", false);
psm_ptr->startPublishingPlanningScene(psm_ptr->UPDATE_SCENE, "current_planning_scene");
// collision obj publisher for adding objects to scene
ros::Publisher collision_obj_publisher = n.advertise<moveit_msgs::CollisionObject>("collision_objects", 1);
// Create table
moveit_msgs::CollisionObject box;
box.header.frame_id = "/world";
box.id = "box"
shape_msgs::SolidPrimitive primitive;
primitive.type = primitive.BOX;
primitive.dimensions.resize(3);
primitive.dimensions[0] = 0.3;
primitive.dimensions[1] = 0.3;
primitive.dimensions[2] = 0.3;
geometry_msgs::Pose box_pose;
box_pose.orientation.w = 1.0;
box_pose.position.x = 0.4;
box_pose.position.y = 0.419;
box_pose.position.z = 0.0;
box.primitives.push_back(primitive);
box.primitive_poses.push_back(box_pose);
box.operation = box.ADD;
// Wait for monitor to connect, publish collision obj
ros::WallDuration sleep_t(0.5);
while (collision_obj_publisher.getNumSubscribers() < 1)
{
sleep_t.sleep();
}
collision_obj_publisher.publish(box);
// Loop
ros::Rate loop_rate(10);
```
[Truncated]
_Receiver_
```
planning_scene::PlanningScene *scene; // initialized in main
void checkCollisions(const moveit_msgs::PlanningSceneConstPtr &msg)
{
// Convert planning scene message to real planning scene
scene->setPlanningSceneMsg(*msg);
// DEBUG -----
std::vector<std::string> ids = scene->getWorld()->getObjectIds();
ROS_INFO_STREAM(ids.size());
}
```
### Your environment
* ROS Distro: Kinetic
* OS Version: e.g. Ubuntu 16.04
Answers:
username_0: **UPDATE:**
I discovered that the issues are likely all due to the fact that while the first planning scene message sent by the monitor is not a diff, the subsequent messages sent are specified as diffs. This would lead to the problems outlined above as I was expecting those messages to be the complete scene rather than diffs. So even though the monitored planning scene is maintaining the collision objects I add, only the new ones are being sent, along with the new joint states.
Correct me if I'm wrong here but my understanding was that if I set the SceneUpdateType to UPDATE_SCENE in the constructor for the monitor (as I did above), every planning_scene message sent out should be a full scene and none should really ever be diffs.
If my understanding is correct, this could in fact be a bug, possibly caused by line 455 in planning_scene_monitor.cpp
`new_scene_update_ = (SceneUpdateType)((int)new_scene_update_ | (int)update_type);`
In the scenePublishingThread() method which is responsible for actually publishing the new planning_scene message whenever an update occurs, nothing happens until new_scene_update_ != UPDATE_NONE (which = 0). Then after the update is published, new_scene_update_ is reset to UPDATE_NONE. So in line 455, the left side of the logical OR should always be 0 and therefore isn't really doing anything. It seems possible that the line should have been
`new_scene_update_ = (SceneUpdateType)((int)publish_update_types_ | (int)update_type);`
publish_update_types_ is set from the SceneUpdateType parameter passed in the constructor so this would ensure that new_scene_update would _at the very least_ be whatever 'level' of update specified in the constructor, due to the OR-ing. This seems to be the intended outcome of that line. So if we set the SceneUpdateType to UPDATE_SCENE, any call to triggerSceneUpdateEvent would result in the full scene being published, as expected.
I've been stuck on this issue for way too long so I totally understand if there's something obvious I've missed, but so far this is the best explanation I can come up with.
username_1: This should be `scene->usePlanningSceneMsg(*msg);` to account for the diffs
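For reference, a minimal sketch of the receiver callback with that one-line fix applied (same globals and names as in the report above):
```
void checkCollisions(const moveit_msgs::PlanningSceneConstPtr &msg)
{
// usePlanningSceneMsg inspects msg->is_diff and applies the message either
// as a full scene or as a diff on top of the current scene state
scene->usePlanningSceneMsg(*msg);
std::vector<std::string> ids = scene->getWorld()->getObjectIds();
ROS_INFO_STREAM(ids.size());
}
```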
username_1: If you think you found a bug here, please provide a pull-request and explain the reasoning there, referencing surrounding code and examples there.
username_0: Yes, but it seems like the triggerSceneUpdateEvent method is the only place where new_scene_update_ can be set to anything other than UPDATE_NONE. And as soon as it is set to anything else, the scenePublishingThread advances, publishes the appropriate message, and then resets it to UPDATE_NONE again. So when this line is reached, the left side of the OR is always 0 and the line is effectively new_scene_update_ = update_type.
Just as a brief example
-> we create the monitor passing UPDATE_SCENE (or 1111) as the SceneUpdateType
-> so publish_update_types_ = 1111
-> Now say there is an update to the joint states
-> triggerSceneUpdateEvent is called with UPDATE_JOINTS (or 0100) as update_type
-> at line 455 new_scene_update_ now = 0100
-> The loop at 349 and the if at 353 pass and we end up getting a diff message at 362
But we should never get a diff message as this contradicts the documentation for startPublishingPlanningScene: _"The first message sent out is a complete planning scene. Diffs are sent afterwards on updates specified by the event bitmask. **For UPDATE_SCENE, the full scene is always sent.**"_
Sorry I wasn't convinced there was a bug here when I first posted. If my logic here seems okay I will do a pull-request when I have a chance. In the meantime it should be simple enough to find a workaround now that I know what the real problem is.
username_2: Any update? |
elfinlazz/lt3translation | 173121675 | Title: [단순] 전 이만! 도망가는건 아니고, 볼일이 좀 있어서요.
Question:
username_0: 
전 이만! 도망가는 건 아니고,
볼일이 좀 있어서요.
How about revising the wording and the line break as above? (The string reads roughly: "I'm off! I'm not running away, I just have some business to take care of.")
Answers:
username_1: The wording has since changed slightly; it is now
전 이만! 도망가는 건 아니고,
할 일이 좀 있어서요.
and it has been updated accordingly.
Status: Issue closed
|
gfx-rs/naga | 937192669 | Title: Validation error for point_list
Question:
username_0: Since there is no `point_size` in WGSL, I omit it from a shader in our tests:
```wgsl
struct VertexOutput {
[[builtin(position)]] position: vec4<f32>;
//[[builtin(pointSize)]] point_size: f32;
};
[[stage(vertex)]]
fn vs_main([[builtin(vertex_index)]] vertex_index : u32) -> VertexOutput {
var positions: array<vec3<f32>, 4> = array<vec3<f32>, 4>(
vec3<f32>(-0.5, -0.5, 0.0),
vec3<f32>(-0.5, 0.5, 0.0),
vec3<f32>( 0.5, -0.5, 0.2),
vec3<f32>( 0.5, 0.5, 0.2),
);
var out: VertexOutput;
out.position = vec4<f32>(positions[vertex_index], 1.0);
//out.point_size = 16.0;
return out;
}
[[stage(fragment)]]
fn fs_main() -> [[location(0)]] vec4<f32> {
return vec4<f32>(1.0, 0.499, 0.0, 1.0);
}
```
Earlier, this kind of worked: only 1px points, but ah well... However, with the wgpu-native from latest master I now get:
```
VALIDATION [UNASSIGNED-CoreValidation-Shader-PointSizeMissing (0xf3693078)]
Validation Error: [ UNASSIGNED-CoreValidation-Shader-PointSizeMissing ] Object 0: VK_NULL_HANDLE, type = VK_OBJECT_TYPE_PIPELINE; | MessageID = 0xf3693078 | Pipeline topology is set to POINT_LIST, but PointSize is not written to in the shader corresponding to VK_SHADER_STAGE_VERTEX_BIT.
objects: (type: PIPELINE, hndl: 0x0, name: ?)
```
Answers:
username_1: Interesting! Solving this would require a bit of collaboration from wgpu.
Status: Issue closed
|
Klaus1243/HM9_1-media | 566333094 | Title: Homework10
Question:
username_0: Starting at iPad sizes, an empty white block appears on the right; check that no elements on the site extend beyond the device's full width:

Answers:
username_0: The side margins are not the same size everywhere on the site:

username_0: Get rid of the large margins on mobile devices:

username_0: When opening the menu via the burger icon, the button floats around:

username_0: The images come out too small on iPad; make them larger so everything is readable:

username_0: Here the image has also disappeared:

username_0: Also remove the white strip on the right on smartphones:

username_0: Three elements per row are not readable; better to show one per row here:

username_0: Check that the side margins are the same size across the whole site:
 |
bbc/simorgh | 1139993349 | Title: Spike how we rearrange components for Clientside MVT
Question:
username_0: **Is your feature request related to a problem? Please describe.**
For our upcoming experiment around recommendations we have a need to change the position and types of components for different variants. We have a problem though where if we cause Visual Journalism includes to re-render they will likely not load correctly on re-render; full background can be found here: https://github.com/bbc/simorgh-infrastructure/blob/latest/documentation/architecture-decision-records/2020-08-27-story-page-restricted-clientside-rendering.md We need to identify a technical solution for rearranging blocks once the variant is loaded via [`useOptimizelyVariation`](https://github.com/bbc/simorgh/blob/latest/src/app/hooks/useOptimizelyVariation/index.jsx) that does not cause includes to re-render.
**Describe the solution you'd like**
- Identify a Story fixture page locally that includes an include and verify it loads correctly after making no changes
- Try different methods to rearrange the blocks using the [`useOptimizelyVariation`](https://github.com/bbc/simorgh/blob/latest/src/app/hooks/useOptimizelyVariation/index.jsx) hook and verify the includes block doesn't re-render
- Some ideas to try:
- Pass in the variation here in ther server component: https://github.com/bbc/simorgh/blob/9d87b8934b5e40f46c2705afcf90001e4e0b3cb5/src/server/index.jsx#L145
- The logic here can use the variant to insert/move/delete blocks as needed based on the variant: https://github.com/bbc/simorgh/blob/cdf5c11f3087f87a57f95afc8fefeb3e53c4e3ff/src/app/routes/cpsAsset/getInitialData/index.js#L67
- This in theory would trigger a re-render of the associated components but potentially not the includes if the re-render is intelligent enough
- This way could be 'purest' but is complex and probably is unlikely to prevent includes from re-rendering
- Create placeholder components
- Add logic [here](https://github.com/bbc/simorgh/blob/cdf5c11f3087f87a57f95afc8fefeb3e53c4e3ff/src/app/routes/cpsAsset/getInitialData/index.js#L67) to add blocks for all possible variants initially
- Add logic into each block type required to show or hide the content based on the variant returned from the [`useOptimizelyVariation`](https://github.com/bbc/simorgh/blob/latest/src/app/hooks/useOptimizelyVariation/index.jsx) hook
- This hopefully will cause the relevant component to appear/disappear but not cause the whole article content to re-render
- This is arguably less maintainable as the variation logic is littered around the codebase and the pattern would need to be repeated each time we did an experiment
- It is though more likely to not cause includes to re-render
There may be other ideas, these are two that come to mind.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Testing notes**
[Tester to complete]
Dev insight: Will Cypress tests be required or are unit tests sufficient? Will there be any potential regression? etc
- [ ] This feature is expected to need manual testing.
**Checklist**
- [ ] (BBC contributors only) This issue follows the [repository use guidelines](https://github.com/bbc/simorgh-infrastructure/blob/latest/documentation/repository-guidelines.md)
**Additional context**
Add any other context or screenshots about the feature request here.
Answers:
username_1: Had a quick look at this and I think it'll have to be some combination of the 2 suggested approaches, whereby we modify the `pageData` in the `getInitialData` function and then add logic to the `CpsRecommendations` component to figure out which variation to display.
It's tricky to find a solution that we could universally use for future experiments, as each one will be somewhat unique in the approach (I assume?). The trick with this one is the splitting of the recommendations. The other variations are pretty straightforward to implement, as they are either switching out to a different component (requiring some data manipulation in the component) or hiding it completely. The other variation is within the Related Content section, so again a data manipulation task within the component.
It could be useful to have a function in `getInitialData` that adds a block where we want, for example something like:
```
insertExperimentalBlock({
shouldInsert: true,
insertAtIndex: 10,
blockModel: {
type: 'wsoj',
model: {
type: 'recommendations',
path: '/api/recommend?recSys=2&limit=2&assetUri=/hindi/india-60392284'
},
},
}),
```
This could be a generic-ish function whereby you supply it with an object of options. Should it be inserted, where, and the actual block content. We already have a couple of functions that do this:
- https://github.com/bbc/simorgh/blob/aba81751518d34516a75cc285c749ab80ba2c98e/src/app/routes/cpsAsset/getInitialData/addRecommendationsBlock/index.js
- https://github.com/bbc/simorgh/blob/5d950b942b118948420e3405e718c6a9ddba21bc/src/app/routes/cpsAsset/getInitialData/insertPodcastPromo/index.js
This could work, but there would still be a lot of custom logic within the components we want in the experiment, so I'm not certain of the re-usability.
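For what it's worth, a rough sketch of what that generic helper could look like; the `pageData.content.model.blocks` path is an assumption for illustration, not the real shape:
```
const insertExperimentalBlock = ({ shouldInsert, insertAtIndex, blockModel }) => pageData => {
  if (!shouldInsert) return pageData;
  // copy the blocks array and splice the experimental block in at the given index
  const blocks = [...pageData.content.model.blocks];
  blocks.splice(insertAtIndex, 0, blockModel);
  return {
    ...pageData,
    content: { ...pageData.content, model: { ...pageData.content.model, blocks } },
  };
};
```
Being a pure function over `pageData`, something like this would also be straightforward to unit test per variant.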
username_1: Just to note, I also tested the above on a page of includes whilst using the `useOptimizelyVariation` hook and it didn't break the includes. This is not to say the above method is the only way to not break includes, but it seems like modifying data in `getInitialData` doesn't affect include re-rendering.
I did a very hacky check to split up the recommendations for the split-variation:
```
<RecommendationsPromoList
promoItems={
promoVariation === 'variation_1'
? index === 0
? items.slice(0, 2)
: items.slice(2, 4)
: items
}
/>
```
It renders the recs as normal (full list of 4), then when the Optimizely hook completes and returns the variation ID, it splits them. The includes render and behave as normal.
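For readability, the same check could be pulled out into a small helper (a sketch; prop and variable names as above):
```
const getPromoItems = (variation, index, items) => {
  if (variation !== 'variation_1') return items;
  // split the four recommendations into two lists of two
  return index === 0 ? items.slice(0, 2) : items.slice(2, 4);
};

<RecommendationsPromoList promoItems={getPromoItems(promoVariation, index, items)} />
```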
Status: Issue closed
|
sourcejs/sourcejs-react-docgen | 114083132 | Title: using <spec path>/index.jsx doesn't work
Question:
username_0: It's a bug in this line: https://github.com/sourcejs/sourcejs-react-docgen/blob/82f8bc15463c264a8faa65aecf454fa6b75919d0/core/middleware/index.js#L41
Answers:
username_1: Thanks for the report, I'll check it out. Was doing this plugin in a rush for a demo.
username_0: Even after moving the file, the plugin still doesn't work:

This is the `docgen` output for my component:
```
{
"description": "",
"displayName": "Avatar",
"props": {
"user": {
"type": {
"name": "shape",
"value": {
"name": {
"name": "string",
"required": true
}
}
},
"required": false,
"description": ""
},
"size": {
"type": {
"name": "number"
},
"required": false,
"description": ""
},
"withTooltip": {
"type": {
"name": "bool"
},
"required": false,
"description": ""
},
"style": {
"type": {
"name": "object"
},
"required": false,
"description": ""
}
}
}
```
Status: Issue closed
username_1: Fixed in 0.2.0 |
fuchami/ANOGAN | 500344056 | Title: X = np.concatenate((image_batch, gen_images)) error when training mnist !
Question:
username_0:
```
File "main.py", line 164, in <module>
    main()
File "main.py", line 157, in main
    run(args)
File "main.py", line 83, in run
    DCGAN.train(X_train)
File "/home/fang/Desktop/DCGAN Keras/dcgan.py", line 102, in train
    X = np.concatenate((image_batch, gen_images))
ValueError: all the input array dimensions except for the concatenation axis must match exactly
```
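For context: this error means `image_batch` and `gen_images` differ in shape on some non-concatenation axis. A minimal illustration of the requirement (shapes made up):
```python
import numpy as np

# np.concatenate needs identical shapes on every axis except the one being
# concatenated (axis 0 by default).
a = np.zeros((16, 28, 28, 1))
b = np.zeros((16, 28, 28, 1))
x = np.concatenate((a, b))   # OK: result shape is (32, 28, 28, 1)

c = np.zeros((16, 28, 28))   # e.g. a missing channel axis
# np.concatenate((a, c))     # would raise the ValueError above
```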
niklabh/mattermost-plugin-webrtc-video | 598209856 | Title: Audio configuration like Discord
Question:
username_0: Hello all,
Something missing in Mattermost is audio chat like we can see on Discord.
Could your plugin support Discord-style audio? For sure, millions of users would use it: no video, just audio, the way Discord does it.
The idea is to create audio channels that users can join and talk in whenever they want.
Example:

thanks all
Answers:
username_1: @username_2 @dankeder we've been searching for something like this for a while and would be willing to sponsor this if we can turn our Mattermost into a Discord clone. If you're interested, please contact me directly at <EMAIL>
username_1: Top down priority list for us:
1. Being able to join an audio channel with one click inside mattermost
2. Seeing who is in that channel, also when I'm not in it (motivation to join for most people)
3. Muting myself
4. Allowing people to somehow join from mobile
username_0: Do you have a roadmap, please?
username_2: - [x] Create webrtc infrastructure for audio calls in group
- [ ] Add voice channel to mattermost
- [ ] Add ability to join voice channel
- [ ] Add mute/unmute button
- [ ] Allow custom stun/turn servers in setting
- [ ] Testing and making voice chat more resilient with auto rejoin on connection close
username_2: Done here: https://github.com/username_2/mattermost-plugin-webrtc-video/pull/22
username_3: Hello, I just saw this and had a few thoughts I would like to share:
1. I think the user meant the ability to add multiple voice channels and have them joinable individually, like a group chat but as a static room.
2. Wouldn't it be possible to add functionality to call an entire channel? The button for starting a video conference is there on every chat, but it seems like it only works via DMs. Maybe add:
   1. A way to call the entire channel (but silently, so it doesn't ring and people can just join the voice chat if they like)
   2. A regular voice button too, not just a video chat button, so you don't have to start as a video chat
   3. The ability to select whether a channel is chat and voice (mixed) or chat only (or maybe even a call channel that actually does ring the members when someone starts a session)
3. What about leaving the voice chat? It looks to me like you can only mute it but not really leave it.
Thank you for reading my thoughts and have a great day ^^
username_0: Hello, do you have any updates?
ComplianceAsCode/content | 460725947 | Title: need new rule, "Record Successful Permission Changes to Files - fchmod"
Question:
username_0: e.g.
```
#-a always,exit -F arch=b32 -S fchmod -F success=1 -F auid>=1000 -F auid!=unset -F key=successful-perm-change
#-a always,exit -F arch=b64 -S fchmod -F success=1 -F auid>=1000 -F auid!=unset -F key=successful-perm-change
```
bradcornford/Googlmapper | 135416728 | Title: Map Click method?
Question:
username_0: Hi,
I would like to let the user mark a point in the map. In my particular case, to register a place.
I did notice that we do have the draggable marker, but I would like to also let the user click in the map and make the marker move to the place where the map was clicked.
Is this possible? How can I do it?
Thanks,
Joao
Answers:
username_1: Hi,
You should be able to achieve this, with something along the lines of:
```js
google.maps.event.addListener(maps[0].map, 'click', function(event) {
var marker = new google.maps.Marker({
position: event.latLng,
map: map
});
});
```
username_0: Yes, this was the solution.
But if you just add this at the end of the blade file, you will get 'map not defined', because the map is not loaded yet.
So, what I did was:
```
google.maps.event.addDomListener(window, 'load', function () {
setTimeout(function () {
google.maps.event.addListener(maps[0].map, 'click', function(event) {
maps[0].markers[0].setPosition(event.latLng);
});
}, 500);
});
```
Here I hook my code into the Google Maps load event and wait just half a second to be sure that the main load (the one from Googlmapper) is finished.
Also, I'm moving the existent marker in instead of just create a new one.
Cheers,
Joao
Status: Issue closed
username_1: Yeah, that is very true. There are now also two map options for before- and after-load events, introduced in version 2.8.0:
`eventBeforeLoad` and `eventAfterLoad`
These can be used as follows:
```php
Mapper::map(53.381128999999990000, -1.470085000000040000, ['eventBeforeLoad' => 'console.log("before load");']);
Mapper::map(53.381128999999990000, -1.470085000000040000, ['eventAfterLoad' => 'console.log("after load");']);
```
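Putting the two together, the click handler could be attached via `eventAfterLoad` instead of the `setTimeout` workaround; a sketch, where `attachClickHandler` is just a name made up for this example:
```php
Mapper::map($lat, $lng, ['eventAfterLoad' => 'attachClickHandler();']);
```
```js
function attachClickHandler() {
    google.maps.event.addListener(maps[0].map, 'click', function (event) {
        maps[0].markers[0].setPosition(event.latLng);
    });
}
```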
username_0: Weird.
I just followed the instructions on GitHub to install Googlmapper, but apparently I'm not using 2.8.0.
My line is:
`Mapper::map($lat, $lng, ['zoom' => 12, 'draggable' => true, 'eventDrag' => 'newMarkerPosition(event);', 'eventAfterLoad' => 'eventAfterLoad(event);']);`
But I also tried your code above, and nothing happens.
Is there a way to check which version I'm using?
Cheers,
joao
Status: Issue closed
username_0: Ok, for some reason, using the
` "cornford/googlmapper": "2.*"`
in composer.json
was getting the 2.7.0 version
When I explicit said:
` "cornford/googlmapper": "2.8.0"`
I got the latest version and the code worked. :)
Closing again.
username_1: Are you using the command 'composer update' to fetch the latest version?
username_0: yep
username_1: Very strange, composer should pull the latest minor version / bug fix version when using 1.* notation. Glad you got this to work! |
ultrasites/ultrasites-vue-content-slider | 225222911 | Title: Touch enabled? Easy to style?
Question:
username_0: Is this slider touch enabled and easy to style in any way possible?
Answers:
username_1: Hey @username_0,
We want to include touch events in the next version. You can easily override the CSS style classes for your own theme.
username_1: There have been no comments since the last one from 2017, so this issue will be closed.
Status: Issue closed
|
fsouza/s3-upload-proxy | 971863060 | Title: api error InvalidSignatureException: Credential should be scoped to a valid region, not ''. "
Question:
username_0: Hi,
When I run s3-upload-proxy with MediaStore from the main branch or binary release, I receive the error:
`api error InvalidSignatureException: Credential should be scoped to a valid region, not ''. "`
If I revert to commit `793d1164921d6e42b4bec26686e76001995f218b`, I can properly upload to my container.
I have tried setting environment variables from the command line and setting my `~.aws/config` file, but nothing has helped. Do you have any suggestions?
My environment is:
- amd64 Ubuntu 18.04 VM
- go version go1.16.7 linux/amd64
- `~/.aws/config`

- `AWS_REGION=us-west-2`
- `AWS_DEFAULT_REGION=us-west-2`
I am really excited to use s3-upload-proxy's `MEDIASTORE_CHUNKED_TRANSFER` feature, which is unavailable on the working commit.
Answers:
username_1: Hmm it looks like we're not detecting the region from the config file. I'll push a fix soon.
In the meantime, can you try setting the environment variable `AWS_REGION`?
username_1: Actually, never mind; I see that you have that environment variable set 🤔
username_0: The issue is specifically for doing a PUT to the container. The full message (with the personal info removed) is:
```
INFO[0000] listening on [::]:8080
ERRO[0004] failed to upload file bucket={my bucket name} contentType= error="operation error MediaStore Data: PutObject, https response error StatusCode: 403, RequestID: {ID}, api error InvalidSignatureException: Credential should be scoped to a valid region, not ''. " objectKey=live_video_init.m4s
```
username_0: When you export your environment variable for the region, do you use single quotes, double quotes, or no quotes? I feel like I have tried all combinations with no success, but it's worth double-checking.
username_1: Hmm the shell should take either. I'll give it a shot later today.
username_1: @username_0 can you try with latest master? Here's my test:
```
env AWS_REGION=us-west-2 BUCKET_NAME=franciscotest UPLOAD_DRIVER=mediastore HTTP_PORT=8080 ./s3-upload-proxy
```
Then from another terminal:
```
% curl -X POST --data-binary file1.ts localhost:8080/file/file1.ts
OK
```
I gave up on aws-sdk-go-v2, tired of being an early adopter of that lol
username_0: 😍😍😍😍😍😍 THANK YOU SO MUCH!!!!!!!!!! IT WORKS!!!!!!!
username_1: Nice! I'll close this, but feel free to open new issues if you run into other issues.
Status: Issue closed
|
balena-io/etcher | 843736158 | Title: bmap verification fails if image doesn't evenly divide into its block size
Question:
username_0: - **Etcher version:** 1.5.116
- **Operating system and architecture:** Windows 10 x86_64
- **Image flashed:** Custom image created by Yocto
- **Do you see any meaningful error information in the DevTools?** Yes
I originally mentioned this in issue #3474
I got a consistent checksum error for the last block of data:
Checksum does not match for range [11048124416, 11048136703]: "84ff92691f909a05b224e1c56abb4864f01b4f8e3c854e4bb4c7baf1d3f6d652" != "0f6a85c0f9e90c4bfbb623a9489953d0b19841ca35807d23250324f57d0a2cea"
It looks like the issue is that the image file doesn't end on a 4KiB boundary. So if you read a full 4KiB from the sdcard you get a different checksum than if you read the last block from the image file, where dd truncates the read and only returns the data that is available.
It's worth noting that "bmaptool copy" flashes this file without any validation issues so they must be handling this edge case somehow.
[tisdk-rootfs-image-mitysom-am57x-20210324193841.rootfs.img.bmap.txt](https://github.com/balena-io/etcher/files/6224074/tisdk-rootfs-image-mitysom-am57x-20210324193841.rootfs.img.bmap.txt)
Source image file matches bmap file:
```
<Range chksum="84ff92691f909a05b224e1c56abb4864f01b4f8e3c854e4bb4c7baf1d3f6d652"> 2697296-2697298 </Range>
```
```
username_0@LAPTOP-JCORMIER2012:/mnt/c/Users/username_0/Downloads$ dd if=tisdk-rootfs-image-mitysom-am57x-20210324193841.rootfs.img bs=4096 skip=2697296 count=3 | sha256sum
2+1 records in
2+1 records out
84ff92691f909a05b224e1c56abb4864f01b4f8e3c854e4bb4c7baf1d3f6d652 -
10240 bytes (10 kB, 10 KiB) copied, 0.0006457 s, 15.9 MB/s
```
Flashed sd card matches etcher's calculation:
```
root@mitysom-am57x:~# dd if=/dev/sdb bs=4096 skip=2697296 count=3 | sha256sum
3+0 records in
3+0 records out
0f6a85c0f9e90c4bfbb623a9489953d0b19841ca35807d23250324f57d0a2cea -
```
Looks like the image file doesn't end on a block boundary, looks like this is an issue with bmaptool. It can't address image files that aren't a multiple of its chosen block size.
```
root@mitysom-am57x:~# dd if=/dev/sdb bs=4096 skip=2697296 count=3 | hexdump -C
3+0 records in
3+0 records out
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00002800 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
*
00003000
root@mitysom-am57x:~# logout
Connection to 192.168.0.29 closed.
username_0@LAPTOP-JCORMIER2012:/mnt/c/Users/username_0/Downloads$ dd if=tisdk-rootfs-image-mitysom-am57x-20210324193841.rootfs.img bs=4096 skip=2697296 count=3 | hexdump -C
2+1 records in
2+1 records out
10240 bytes (10 kB, 10 KiB) copied, 0.0005108 s, 20.0 MB/s
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00002800
```
Answers:
username_0: Looks like bmaptool doesn't verify the data written to the sdcard.
https://github.com/intel/bmap-tools/blob/3ee092913e15ec6b4b525d9e8ce77523d2083a5d/bmaptools/BmapCopy.py#L633
It only verifies the image file data
https://github.com/intel/bmap-tools/blob/3ee092913e15ec6b4b525d9e8ce77523d2083a5d/bmaptools/BmapCopy.py#L561
username_0: Bash script to resize Yocto wic files and then package them into a zip file. With this, the image files program and verify correctly.
```bash
# HACK: balenaEtcher requires image files to be a multiple of the bmap block size (4KiB)
# https://github.com/balena-io/etcher/issues/3475
# Add some padding to image files so they are divisible by 4096B
for f in *.wic; do
if [ -L "$f" ]; then
continue; # Skip symlinks
fi
blockSize=4096
# Round up file size by $blockSize
fileSizeB=$(stat --format=%s "$f")
fileSizeRoundedUpBlocks=$(( (fileSizeB + (blockSize-1))/blockSize ))
fileSizeRoundedUpBytes=$(( fileSizeRoundedUpBlocks * blockSize ))
if [ "$fileSizeB" != "$fileSizeRoundedUpBytes" ]; then
# Increase image size
truncate -s "${fileSizeRoundedUpBytes}" "$f"
fileSizeBAfter=$(stat --format=%s "$f")
# Update bmap file
bmaptool create "$f" -o "$f".bmap
echo File $f size before $fileSizeB and after $fileSizeBAfter
fi
done
# Rename wic to .img and zip up sd card images
for f in *.wic; do
# Replace .wic with .img
img="$(basename -s .wic "$f").img"
# Cleanup leftover .meta folders
rm -rf .meta
cp "$f" "$img"
cp "$f".bmap "$img".bmap
# Create .meta folder for balenaEtcher bmap support
mkdir .meta
cp "$f".bmap .meta/image.bmap
md5sum "$img" > "$img.md5"
"$ZIP" -r "$img".zip "$img"* .meta
mv "$img".zip "${DEPLOY_DIR}/"
# Cleanup .meta folders
rm -rf .meta
done
``` |
googleapis/nodejs-bigquery | 311587294 | Title: Job ID system test failure
Question:
username_0: @username_1 could you take a look?
https://circleci.com/gh/googleapis/nodejs-bigquery/1524:
```
1) BigQuery
should honor the job id option:
Uncaught ApiError: Already Exists: Job long-door-651:US.hi-im-a-job-id
at Object.parseHttpRespBody (node_modules/@google-cloud/common/src/util.js:193:30)
at Object.handleResp (node_modules/@google-cloud/common/src/util.js:131:18)
at /root/project/node_modules/@google-cloud/common/src/util.js:496:12
at Request.onResponse [as _callback] (node_modules/retry-request/index.js:195:7)
at Request.self.callback (node_modules/request/request.js:186:22)
at Request.<anonymous> (node_modules/request/request.js:1163:10)
at Gunzip.<anonymous> (node_modules/request/request.js:1085:12)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
```
Answers:
username_1: Hmm, do you think it's due to the system tests being run in parallel with no unique characters in the id?
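If so, one sketch of a fix is to derive a unique job id per run; the `jobId` option name is an assumption based on the client's job-creation options:
```js
// Suffix the job id so concurrent test runs don't collide (sketch).
const jobId = `hi-im-a-job-id-${Date.now()}-${Math.random().toString(36).slice(2)}`;
const [job] = await bigquery.createQueryJob({ query, jobId });
```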
GothamElections2017/RandomThoughts | 318711789 | Title: Today in History - April 29 https://t.co/0YagF5mlwM
Question:
username_0: <blockquote class="twitter-tweet">
<p lang="en" dir="ltr" xml:lang="en">Today in History - April 29 <a href="https://t.co/0YagF5mlwM">https://t.co/0YagF5mlwM</a></p>
— <NAME> (@Ge_Dawn_Granger) <a href="https://twitter.com/Ge_Dawn_Granger/status/990561928618659840?ref_src=twsrc%5Etfw">April 29, 2018</a>
</blockquote>
<br>
<br>
April 29, 2018 at 12:02PM<br>
via Twitter |
Vagr9K/gatsby-advanced-starter | 1007019439 | Title: Mention minimum version of Node.js required in the README
Question:
username_0:
```
| ^
100 |
101 | // Get full post listing
102 | const fullListing = await getIndexListing(graphql);
```
After spending some time debugging this, I found that `fs.rmSync()` (used by the initFeedMeta() method) is not supported in Node.js v12, which I was using at the time. **Switching to Node.js v14 fixed the issue**.
`fs.rmSync()` was added in Node.js v14.14.0 ([Check here](https://nodejs.org/api/fs.html#fs_fs_rmsync_path_options)). As `gatsby-theme-advanced` is dependent on this function, I think it would be better to mention this in the README of the project. Would save some time of debugging for developers.
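Until then, a possible feature-detection shim for code that wants to keep supporting Node 12 (a sketch; `fs.rmdirSync` with `recursive: true` has existed since Node 12.10):
```js
const fs = require('fs');

// Prefer fs.rmSync (Node >= 14.14); fall back to the older recursive rmdirSync.
const removeDir = dir =>
  typeof fs.rmSync === 'function'
    ? fs.rmSync(dir, { recursive: true, force: true })
    : fs.rmdirSync(dir, { recursive: true });
```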
And yes, thanks for this great project! :)
Answers:
username_1: Hey!
Thanks for reporting this.
I was considering adding a section with minimum requirements, but Gatsby V4 is right around the corner and will require NodeJS v14.15 anyway.
By the end of October GatsbyJS will be warning about incompatible versions by itself, so this is a non-issue for the starter at this point.
Status: Issue closed
|
iNPUTmice/ceb2txt | 506181199 | Title: Add column bodyLanguage in messages table
Question:
username_0: ceb2txt crashes with CEB files generated by recent versions of Conversations.
Output message: [SQLITE_ERROR] SQL error or missing database (table messages has no column named bodyLanguage)
Proposed fix: add one column to the "CREATE TABLE messages" statement in src/main/java/im/conversations/ceb2txt/Main.java
See patch here.
https://github.com/username_0/ceb2txt/compare/addbodylanguage?expand=1
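Conceptually the patch amounts to one extra column in the schema ceb2txt creates. A hedged one-line equivalent (the TEXT column type is an assumption, and `stmt` stands in for however Main.java executes its DDL; the actual patch extends the CREATE TABLE statement instead):
```java
stmt.execute("ALTER TABLE messages ADD COLUMN bodyLanguage TEXT");
```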
Answers:
username_1: I, too, encountered this problem and can confirm your patch fixed it @username_0 . Thanks!
Maybe make a PR?
username_2: I also encountered the problem, saw that the column was missing, and was about to manually add it. Then I saw your branch at https://github.com/username_0/ceb2txt
It is exactly the change I'd have tried. I ran the code you proposed. Worked fine. I highly recommend that you create a pull request so that @username_3 can easily apply this patch.
username_1: I just made a PR because @username_0 seems inactive and the issue makes ceb2txt dysfunctional.
Hope he doesn't mind :).
username_0: No problem. Thanks for doing it.
Status: Issue closed
|
jbped/robo-gladiators | 875552383 | Title: Add Randomness to the Health Pool and Damage Values
Question:
username_0: **Description**
- Start enemies at a random health value between 40 and 60.
- Start enemies with a random attack value between 10 and 14.
- Attack damage is random, using the robot's attack value as an upper limit (for example, if the player's attack is 10, their damage range is 7-10). A sketch of these rules follows below.
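A minimal sketch of the rules above (the helper name is illustrative):
```js
function randomBetween(min, max) {
  // inclusive integer in [min, max]
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

var enemyHealth = randomBetween(40, 60); // 40-60
var enemyAttack = randomBetween(10, 14); // 10-14

function rollDamage(attack) {
  // e.g. attack 10 gives damage in 7-10
  return randomBetween(attack - 3, attack);
}
```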
Answers:
username_0: Feature Completed
Status: Issue closed
|
A3M4/YouTube-Report | 545450104 | Title: Warning about CET timezone
Question:
username_0: ```python
/usr/local/lib/python3.7/dist-packages/dateutil/parser/_parser.py:1218: UnknownTimezoneWarning: tzname CET identified but not understood. Pass `tzinfos` argument in order to correctly return a timezone-aware datetime. In a future version, this will raise an exception.
category=UnknownTimezoneWarning)
``` |
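As the warning itself suggests, the fix is to pass `tzinfos` to the parser; a minimal sketch (mapping CET to Europe/Paris is an assumption, pick whichever zone matches the data):
```python
from dateutil import parser, tz

tzinfos = {"CET": tz.gettz("Europe/Paris")}
dt = parser.parse("29 Dec 2019 10:00:00 CET", tzinfos=tzinfos)
```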
nabilaalissa24/Intent3 | 182977244 | Title: Intent3
Question:
username_0: 

 |
LuwkasLima/Issues_Test | 97341688 | Title: Error in the most-visited-post script on the Home page
Question:
username_0: _From @diegorojas on March 4, 2015 19:59_
Some error on line 87:
`$mais_visitada_ID = $mais_visitada[0]->ID;`
It used to return the result of the most visited post with the thumbnail and the number of visits

_Copied from original issue: brasadesign/spts-theme#2_<issue_closed>
Status: Issue closed |
SupplyChainSandbox/trivia | 796381689 | Title: Deconflict trivia & block
Question:
username_0: @username_1 you should be able to edit the issue now.
I created a [new document](https://docs.google.com/document/d/1XX9aetKx4cYcsUhKO0ivqJPSJQw5oIrhFXi-Np5MN9I/edit#heading=h.8zq75jv7f2e5) to start the process of deconflicting
Status: Issue closed
Answers:
username_1: I don't have permission to edit the issue, but I believe the issue is that trivia, block, & quadquizaminios (aka "Tetris") are 'awareness & adoption' games that teach our supply chain concepts by getting the participants to answer questions. At first glance they may appear to be 'tests to reward the most knowledgeable supply chain nerds', but they are actually A&A/training. Since we have 3 games about the same body of knowledge, we probably should pool resources and make conscious decisions on overlap (concepts worth repeating and seeing in every game) vs uniqueness.
|
rayon-rs/rayon | 1110653524 | Title: Crash at: "index out of bounds" in sleep mod
Question:
username_0:
```
thread '<unnamed>' panicked at 'index out of bounds: the len is 0 but the index is 3', /home/xxx/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.9.1/src/sleep/mod.rs:355:28

#0 0x000056215dbfb211 in rayon_core::sleep::Sleep::wake_specific_thread ()
#1 0x000056215db7c569 in <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute ()
#2 0x000056215da6e1c6 in rayon_core::registry::WorkerThread::wait_until_cold ()
#3 0x000056215dbfc7d1 in std::sys_common::backtrace::__rust_begin_short_backtrace ()
#4 0x000056215dbfc1a0 in core::ops::function::FnOnce::call_once{{vtable-shim}} ()
#5 0x000056215dd54da5 in <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691
#6 <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691
#7 std::sys::unix::thread::Thread::new::thread_start () at library/std/src/sys/unix/thread.rs:106
#8 0x00007f719b36d6db in start_thread (arg=0x7f6f747e3700) at pthread_create.c:463
#9 0x00007f719aaf471f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Warning: the current language does not match this frame.
```
Answers:
username_0: Patch suggestion:
```diff
 /// Notify the given thread that it should wake up (if it is
 /// sleeping). When this method is invoked, we typically know the
 /// thread is asleep, though in rare cases it could have been
 /// awoken by (e.g.) new work having been posted.
 pub(super) fn notify_worker_latch_is_set(&self, target_worker_index: usize) {
+    if target_worker_index < self.worker_sleep_states.len() {
         self.wake_specific_thread(target_worker_index);
+    }
 }
```
username_1: That *should* always be true -- I'm concerned how we ended up in a state with length 0 at all!
username_0: Yes, I attached to the process with gdb and found the object to be invalid. Could embedding one thread pool in another be the cause? e.g.:
```rust
for _ in 0..10 {
    let pool = ThreadPoolBuilder::new()
        .stack_size(8 * 1024 * 1024)
        .num_threads(12)
        .build()?;
    thread_pools.push(pool);
}
...
for (_i, tp) in thread_pools.iter().enumerate() {
    ...
    tp.spawn(move || {
        ...
        execute_all();
        ...
    })
    ...
}

pub fn execute_all(self) -> Vec<T>
where
    T: Send + Sync,
{
    use rayon::prelude::*;
    self.jobs
        .into_par_iter()
        .map(|job| execute_with_threads(job, max_available_threads()))
        .collect()
}

fn execute_with_threads<T: Sync + Send>(f: impl FnOnce() -> T + Send, num_threads: usize) -> T {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(num_threads)
        .build()
        .unwrap();
    pool.install(f)
}
```
username_1: Where does `max_available_threads()` come from, and what value does it return on your system?
username_0: It is 8 on a 32-core CPU.
username_1: Would it be possible for you to share some complete code that reproduces that crash?
Otherwise, we don't really have any way to help debug this.
It shouldn't be a problem to "embed" thread pools, although we don't enforce any hierarchy between pools, so they'll run more like siblings. It's certainly possible that you've uncovered a bug though.
username_2: The suggested patch looks interesting as panics always seem to happen here, even though I also think (in the vein of [the reply to the patch](https://github.com/rayon-rs/rayon/issues/913#issuecomment-1018723426)) that the root cause of the invalid access is something else. In the linked issue #919 the same panic happens, but also segfaults, which leads me to think that memory corruption of sorts leads to this panic rather than a logic bug.
In any case, now there are two ways to reproduce it, and the one presented here seems to happen on linux as well. I hope with a maintainer being able to reproduce it, there is a chance for finding the root cause.
username_1: I am extremely wary of running a large reproducer that's also using network resources. I'd much prefer something that can run self-contained and sandboxed.
username_0: Of course, we can discard the network interaction, and just focus on the calculation of corrupt case, pls. hold on a moment, we will make some changes and make it happen, thanks for care about this issue!
username_2: For those who are interested to try reproducing it with [the above](https://github.com/rayon-rs/rayon/issues/913#issuecomment-1033601456), note that it takes about 22 minutes for a first build due to downloading various git-dependencies with big objects apparently. Instead of running `nohup target/debug/aleo-prover -a aleo1d5hg2z3ma00382pngntdp68e74zv54jdxy249qhaujhks9c72yrs33ddah -p 172.16.17.32:4132 >> snarkos.log 2>&1 &` as suggested, I ran only `target/debug/aleo-prover -a aleo1d5hg2z3ma00382pngntdp68e74zv54jdxy249qhaujhks9c72yrs33ddah -p 172.16.17.32:4132` for 90 minutes but it wouldn't reproduce in that time.
Trying again with the suggested method of running it in the background also wouldn't reproduce the issue for me, this time it ran for only a couple of minutes though (MacOS 12, aarch64).
If I had to fix it, I would probably need an example that reproduces more quickly. This works for me on my machine at least in #919 when `dua` crashes in less than 10 seconds. Unfortunately it wouldn't reproduce on linux thus far, maybe you could also give it a try @username_0 .
username_3: I'm the original author of `aleo-prover` and I'm trying to dig into this issue as well. Instead of panicking, the program segfaults on my test machine with the following backtrace:
```
#0 <alloc::vec::Vec<T,A> as core::ops::deref::Deref>::deref () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/vec/mod.rs:2434
#1 <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/vec/mod.rs:2528
#2 rayon_core::sleep::Sleep::wake_specific_thread () at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.9.1/src/sleep/mod.rs:355
#3 0x000055cb27b3e1c8 in rayon_core::sleep::Sleep::notify_worker_latch_is_set () at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.9.1/src/sleep/mod.rs:245
#4 rayon_core::registry::Registry::notify_worker_latch_is_set () at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.9.1/src/registry.rs:544
...
(I guess this is enough)
```
I thought I was somehow using rayon incorrectly though...
Thank you @username_0 for helping with this issue.
username_0: @username_2 Yes, as @username_3 said, it is hard to reproduce: sometimes it crashes after two days, sometimes within one hour. Following the description [as above](https://github.com/rayon-rs/rayon/issues/913#issuecomment-1021044645), I wrote a demo to reproduce and track it, but it never crashed. Still, with the demo's help I found some clues that may be useful. The normal thread pool flow is as follows:
```
fn execute_with_threads<T: Sync + Send>(f: impl FnOnce() -> T + Send, num_threads: usize) -> T {
let pool = rayon::ThreadPoolBuilder::new()
.num_threads(num_threads)
.build()
.unwrap();
pool.install(f)
}
```
Builder -> Generate Pool -> Install Task -> Pool Drop -> Registry Terminate (iterates "set_and_tickle_one", threads not join())
Judging from the demo's logging, some subroutines were still running after the terminate call finished. As I understand it, all the subroutines should be done before the terminate call exits. What are your comments here? @username_2 @username_1
So I will make some changes to try to reproduce this more easily; from this morning's check, it seems to need more time and testing. Please be patient.
Thank you @username_2 @username_1 again for caring about this case.
@username_3 Thank you very much for sharing! I have checked the whole logic of aleo-prover, and I think the usage should be OK, because it works the same way as the rayon examples. Maybe we can simulate the task-receiving process so that tasks are ejected within a short time (less than 3-5 s) and thread pools are dropped at high frequency; that's the idea I want to try next to reproduce this issue.
username_2: All of this is fascinating and I am glad it's approached from another angle. Unfortunately I have no idea yet on how Rayon works and use it indirectly in my own application. Thus far I seem to be the only one who is able to reproduce it quickly, but noticed that it really does depend on how fast the application runs. The first run of `dua` doesn't actually crash, but is slowest on a cold file system cache. Subsequent runs will panic or crash within a couple of seconds most of the time, and using more threads seems to be better for reproduction.
```sh
➜ time dua -t20
[2] 61121 segmentation fault dua -t20
dua -t20 1.41s user 7.81s system 178% cpu 5.174 total
```
That's all I know.
Boiling it down to a simple example for reproduction would definitely be appreciated, and it looks like the code in `aleo-prover` lends itself to that in a more straightforward fashion. |
macmillanpublishers/Word-template | 237627097 | Title: Character Styles macro isn't clearing italic style if "Not Italic" is added
Question:
username_0: Ditto bold, underline, etc. Specifically, if you try to remove a character style with direct formatting (e.g., use the Ctrl-i shortcut), what you end up with is "span italics characters (ital) + Not Italic" because "Not Italic" is a property of the Font object for that span.
The Character Styles macro should recognize these and remove the style to revert to roman text, but it isn't doing so. An example file is [here](https://www.dropbox.com/s/ejzfsjvjua4c3yk/9781250155504_MNU.docx?dl=0).
giove91/regolamento-lupus | 309564705 | Title: [Role] Divinatore
Question:
username_0: In addition to the phrases, the Divinatore receives the following power: every two nights they choose a living character and a role, and discover whether that character has that role (and fail otherwise).
This could also be implemented as a new role, keeping the Divinatore as it is. Watch out: it is quite a strong role, as it dismantles bluffs easily. The Fattucchiera could be modified (making it work like Confusione) to balance things.
Answers:
username_0: Approved as an addition to the current power. The condition "Exactly 2 true and 2 false" becomes "At least 1 true and 1 false".
username_0: The Fattucchiera and Confusione gain the following power: every night they choose a character and a role, and that character appears to have the role/aura/mysticism of the chosen role in the eyes of the Veggenti/Maghi/etc.
platformio/platformio-core | 684419390 | Title: Fail to download platform due to random package.json file in submodule
Question:
username_0: ### Configuration
**Operating system**: Win10 64bit
**PlatformIO Version** (`platformio --version`):
version 4.4.0b4
### Description of problem
I am trying to use ESP32 IDF from github and the platform installer throws an error while trying to download the framework.
As far as I can understand, it is trying to install a package because there is a package.json file in the cJSON submodule:
https://github.com/DaveGamble/cJSON/blob/master/tests/json-patch-tests/package.json
Issue is unrelated to actual building any file, it is just failing to download it properly.
#### Steps to Reproduce
1. Change platformio.ini
2. Build
### Actual Results
```
Processing esp32 (platform: espressif32; board: esp32dev; framework: espidf)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------C:\Users\Fire\.platformio\platforms\atmelavr\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F6B8E688>
C:\Users\Fire\.platformio\platforms\espressif32\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F6B8E688>
C:\Users\Fire\.platformio\platforms\espressif8266\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73D6B08>
C:\Users\Fire\.platformio\platforms\[email protected]\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73D9908>
C:\Users\Fire\.platformio\platforms\nxplpc\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73DC988>
C:\Users\Fire\.platformio\platforms\nxplpc-arduino-lpc176x\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73DFE48>
C:\Users\Fire\.platformio\platforms\nxplpc-arduino-lpc176x@src-de6b279104eee6e886c6740cfbf2debe\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73E8A08>
C:\Users\Fire\.platformio\platforms\ststm32\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73EB148>
C:\Users\Fire\.platformio\packages\contrib-piohome\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D9F88>
C:\Users\Fire\.platformio\packages\contrib-pysite\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D9B88>
C:\Users\Fire\.platformio\packages\framework-arduino-lpc176x\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D6748>
C:\Users\Fire\.platformio\packages\framework-arduinoespressif32\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D6748>
C:\Users\Fire\.platformio\packages\framework-arduinoststm32\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D6F88>
C:\Users\Fire\.platformio\packages\framework-arduinoststm32-maple\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F6B8ED88>
C:\Users\Fire\.platformio\packages\framework-cmsis\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F6B8E808>
C:\Users\Fire\.platformio\packages\framework-espidf\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F6B8E648>
C:\Users\Fire\.platformio\packages\framework-stm32cube\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73EBD88>
C:\Users\Fire\.platformio\packages\tool-avrdude\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73EB448>
C:\Users\Fire\.platformio\packages\tool-cmake\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73EB888>
C:\Users\Fire\.platformio\packages\tool-cppcheck\package.json
```
[Truncated]
### Expected Results
### If problems with PlatformIO Build System:
**The content of `platformio.ini`:**
```ini
[platformio]
default_envs = esp32 ;ESP32 IDF Enviroment
[env:esp32]
platform = espressif32
board = esp32dev
framework = espidf
platform_packages = framework-espidf @ https://github.com/espressif/esp-idf.git
```
### Additional info
I added a print in parser.py to trace the error. Unchanged core won't print information about the file that it is trying to parse.
I corrected the line numbers in the output log to match the stock parser.py file.
Status: Issue closed
Answers:
username_1: Thanks for the report! Please re-test with `pio upgrade --dev`.
username_0: It didn't work. Failed at the same spot.
PlatformIO version 5.0.0b2
```
Processing esp32 (platform: espressif32; board: esp32dev; framework: espidf)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Tool Manager: Installing git+https://github.com/espressif/esp-idf.git
git version 2.28.0.windows.1
Cloning into 'C:\Users\Fire\.platformio\.cache\tmp\pkg-installing-jmgi3e6e'...
remote: Enumerating objects: 8196, done.
remote: Counting objects: 100% (8196/8196), done.
remote: Compressing objects: 100% (6975/6975), done.
remote: Total 8196 (delta 1792), reused 3531 (delta 891), pack-reused 0R
Receiving objects: 100% (8196/8196), 38.44 MiB | 1.34 MiB/s, done.
Resolving deltas: 100% (1792/1792), done.
Updating files: 100% (7164/7164), done.
Submodule 'components/asio/asio' (https://github.com/espressif/asio.git) registered for path 'components/asio/asio'
Submodule 'components/bootloader/subproject/components/micro-ecc/micro-ecc' (https://github.com/kmackay/micro-ecc.git) registered for path 'components/bootloader/subproject/components/micro-ecc/micro-ecc'
Submodule 'components/bt/controller/lib' (https://github.com/espressif/esp32-bt-lib.git) registered for path 'components/bt/controller/lib'
Submodule 'components/bt/host/nimble/nimble' (https://github.com/espressif/esp-nimble.git) registered for path 'components/bt/host/nimble/nimble'
Submodule 'components/cbor/tinycbor' (https://github.com/intel/tinycbor.git) registered for path 'components/cbor/tinycbor'
Submodule 'components/coap/libcoap' (https://github.com/obgm/libcoap.git) registered for path 'components/coap/libcoap'
Submodule 'components/esp_wifi/lib' (https://github.com/espressif/esp32-wifi-lib.git) registered for path 'components/esp_wifi/lib'
Submodule 'components/esptool_py/esptool' (https://github.com/espressif/esptool.git) registered for path 'components/esptool_py/esptool'
Submodule 'components/expat/expat' (https://github.com/libexpat/libexpat.git) registered for path 'components/expat/expat'
Submodule 'components/json/cJSON' (https://github.com/DaveGamble/cJSON.git) registered for path 'components/json/cJSON'
Submodule 'components/libsodium/libsodium' (https://github.com/jedisct1/libsodium.git) registered for path 'components/libsodium/libsodium'
Submodule 'components/lwip/lwip' (https://github.com/espressif/esp-lwip.git) registered for path 'components/lwip/lwip'
Submodule 'components/mbedtls/mbedtls' (https://github.com/espressif/mbedtls.git) registered for path 'components/mbedtls/mbedtls'
Submodule 'components/mqtt/esp-mqtt' (https://github.com/espressif/esp-mqtt.git) registered for path 'components/mqtt/esp-mqtt'
Submodule 'components/nghttp/nghttp2' (https://github.com/nghttp2/nghttp2.git) registered for path 'components/nghttp/nghttp2'
Submodule 'components/protobuf-c/protobuf-c' (https://github.com/protobuf-c/protobuf-c.git) registered for path 'components/protobuf-c/protobuf-c'
Submodule 'components/spiffs/spiffs' (https://github.com/pellepl/spiffs.git) registered for path 'components/spiffs/spiffs'
Submodule 'components/tinyusb/tinyusb' (https://github.com/espressif/tinyusb.git) registered for path 'components/tinyusb/tinyusb'
Submodule 'components/unity/unity' (https://github.com/ThrowTheSwitch/Unity.git) registered for path 'components/unity/unity'
Submodule 'examples/build_system/cmake/import_lib/main/lib/tinyxml2' (https://github.com/leethomason/tinyxml2.git) registered for path 'examples/build_system/cmake/import_lib/main/lib/tinyxml2'
Submodule 'examples/peripherals/secure_element/atecc608_ecdsa/components/esp-cryptoauthlib' (https://github.com/espressif/esp-cryptoauthlib.git) registered for path 'examples/peripherals/secure_element/atecc608_ecdsa/components/esp-cryptoauthlib'
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-jmgi3e6e/components/asio/asio'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 39225 (delta 0), reused 10 (delta 0), pack-reused 39215
Receiving objects: 100% (39225/39225), 13.81 MiB | 1.33 MiB/s, done.
Resolving deltas: 100% (26421/26421), done.
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-jmgi3e6e/components/bootloader/subproject/components/micro-ecc/micro-ecc'...
remote: Enumerating objects: 765, done.
remote: Total 765 (delta 0), reused 0 (delta 0), pack-reused 765
Receiving objects: 100% (765/765), 511.80 KiB | 1.15 MiB/s, done.
Resolving deltas: 100% (464/464), done.
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-jmgi3e6e/components/bt/controller/lib'...
remote: Enumerating objects: 165, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (61/61), done.
remote: Total 549 (delta 145), reused 110 (delta 104), pack-reused 384
Receiving objects: 100% (549/549), 2.47 MiB | 1.31 MiB/s, done.
Resolving deltas: 100% (349/349), done.
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-jmgi3e6e/components/bt/host/nimble/nimble'...
remote: Enumerating objects: 14, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 31556 (delta 1), reused 4 (delta 0), pack-reused 31542
[Truncated]
============================================================
An unexpected error occurred. Further steps:
* Verify that you have the latest version of PlatformIO using
`pip install -U platformio` command
* Try to find answer in FAQ Troubleshooting section
https://docs.platformio.org/page/faq.html
* Report this problem to the developers
https://github.com/platformio/platformio-core/issues
============================================================
The terminal process "C:\Users\Fire\.platformio\penv\Scripts\platformio.exe 'run'" terminated with exit code: 1.
Terminal will be reused by tasks, press any key to close it.
```
username_1: ### Configuration
**Operating system**: Win10 64bit
**PlatformIO Version** (`platformio --version`):
version 4.4.0b4
### Description of problem
I am trying to use ESP32 IDF from github and the platform installer throws an error while trying to download the framework.
As far as I can understand it is trying to install a package because there is a package.json file in cJson submodule.
https://github.com/DaveGamble/cJSON/blob/master/tests/json-patch-tests/package.json
Issue is unrelated to actual building any file, it is just failing to download it properly.
#### Steps to Reproduce
1. Change platformio.ini
2. Build
### Actual Results
```
Processing esp32 (platform: espressif32; board: esp32dev; framework: espidf)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------C:\Users\Fire\.platformio\platforms\atmelavr\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F6B8E688>
C:\Users\Fire\.platformio\platforms\espressif32\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F6B8E688>
C:\Users\Fire\.platformio\platforms\espressif8266\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73D6B08>
C:\Users\Fire\.platformio\platforms\[email protected]\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73D9908>
C:\Users\Fire\.platformio\platforms\nxplpc\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73DC988>
C:\Users\Fire\.platformio\platforms\nxplpc-arduino-lpc176x\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73DFE48>
C:\Users\Fire\.platformio\platforms\nxplpc-arduino-lpc176x@src-de6b279104eee6e886c6740cfbf2debe\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73E8A08>
C:\Users\Fire\.platformio\platforms\ststm32\platform.json
<platformio.package.manifest.parser.PlatformJsonManifestParser object at 0x000002E4F73EB148>
C:\Users\Fire\.platformio\packages\contrib-piohome\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D9F88>
C:\Users\Fire\.platformio\packages\contrib-pysite\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D9B88>
C:\Users\Fire\.platformio\packages\framework-arduino-lpc176x\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D6748>
C:\Users\Fire\.platformio\packages\framework-arduinoespressif32\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D6748>
C:\Users\Fire\.platformio\packages\framework-arduinoststm32\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73D6F88>
C:\Users\Fire\.platformio\packages\framework-arduinoststm32-maple\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F6B8ED88>
C:\Users\Fire\.platformio\packages\framework-cmsis\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F6B8E808>
C:\Users\Fire\.platformio\packages\framework-espidf\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F6B8E648>
C:\Users\Fire\.platformio\packages\framework-stm32cube\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73EBD88>
C:\Users\Fire\.platformio\packages\tool-avrdude\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73EB448>
C:\Users\Fire\.platformio\packages\tool-cmake\package.json
<platformio.package.manifest.parser.PackageJsonManifestParser object at 0x000002E4F73EB888>
[Truncated]
### Expected Results
### If problems with PlatformIO Build System:
**The content of `platformio.ini`:**
```ini
[platformio]
default_envs = esp32 ;ESP32 IDF Environment
[env:esp32]
platform = espressif32
board = esp32dev
framework = espidf
platform_packages = framework-espidf @ https://github.com/espressif/esp-idf.git
```
### Additional info
I added a print in parser.py to trace the error; the unchanged core won't print information about the file it is trying to parse.
I corrected the line numbers in the output log to match the stock parser.py file.
Status: Issue closed
username_1: Thanks for the report! Please re-run `pio upgrade --dev`. Should work now.
username_0: Now the download works (kind of), but the files are missing from the packages folder.
It copied only the files from a random package.json directory:
https://github.com/DaveGamble/cJSON/tree/master/tests/json-patch-tests
```
Processing esp32 (platform: espressif32; board: esp32dev; framework: espidf)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Tool Manager: Installing git+https://github.com/espressif/esp-idf.git
git version 2.28.0.windows.1
Cloning into 'C:\Users\Fire\.platformio\.cache\tmp\pkg-installing-y5pkakez'...
remote: Enumerating objects: 8196, done.
remote: Counting objects: 100% (8196/8196), done.
remote: Compressing objects: 100% (6973/6973), done.
remote: Total 8196 (delta 1793), reused 3536 (delta 893), pack-reused 0
Receiving objects: 100% (8196/8196), 38.44 MiB | 1.30 MiB/s, done.
Resolving deltas: 100% (1793/1793), done.
Updating files: 100% (7164/7164), done.
Submodule 'components/asio/asio' (https://github.com/espressif/asio.git) registered for path 'components/asio/asio'
Submodule 'components/bootloader/subproject/components/micro-ecc/micro-ecc' (https://github.com/kmackay/micro-ecc.git) registered for path 'components/bootloader/subproject/components/micro-ecc/micro-ecc'
Submodule 'components/bt/controller/lib' (https://github.com/espressif/esp32-bt-lib.git) registered for path 'components/bt/controller/lib'
Submodule 'components/bt/host/nimble/nimble' (https://github.com/espressif/esp-nimble.git) registered for path 'components/bt/host/nimble/nimble'
Submodule 'components/cbor/tinycbor' (https://github.com/intel/tinycbor.git) registered for path 'components/cbor/tinycbor'
Submodule 'components/coap/libcoap' (https://github.com/obgm/libcoap.git) registered for path 'components/coap/libcoap'
Submodule 'components/esp_wifi/lib' (https://github.com/espressif/esp32-wifi-lib.git) registered for path 'components/esp_wifi/lib'
Submodule 'components/esptool_py/esptool' (https://github.com/espressif/esptool.git) registered for path 'components/esptool_py/esptool'
Submodule 'components/expat/expat' (https://github.com/libexpat/libexpat.git) registered for path 'components/expat/expat'
Submodule 'components/json/cJSON' (https://github.com/DaveGamble/cJSON.git) registered for path 'components/json/cJSON'
Submodule 'components/libsodium/libsodium' (https://github.com/jedisct1/libsodium.git) registered for path 'components/libsodium/libsodium'
Submodule 'components/lwip/lwip' (https://github.com/espressif/esp-lwip.git) registered for path 'components/lwip/lwip'
Submodule 'components/mbedtls/mbedtls' (https://github.com/espressif/mbedtls.git) registered for path 'components/mbedtls/mbedtls'
Submodule 'components/mqtt/esp-mqtt' (https://github.com/espressif/esp-mqtt.git) registered for path 'components/mqtt/esp-mqtt'
Submodule 'components/nghttp/nghttp2' (https://github.com/nghttp2/nghttp2.git) registered for path 'components/nghttp/nghttp2'
Submodule 'components/protobuf-c/protobuf-c' (https://github.com/protobuf-c/protobuf-c.git) registered for path 'components/protobuf-c/protobuf-c'
Submodule 'components/spiffs/spiffs' (https://github.com/pellepl/spiffs.git) registered for path 'components/spiffs/spiffs'
Submodule 'components/tinyusb/tinyusb' (https://github.com/espressif/tinyusb.git) registered for path 'components/tinyusb/tinyusb'
Submodule 'components/unity/unity' (https://github.com/ThrowTheSwitch/Unity.git) registered for path 'components/unity/unity'
Submodule 'examples/build_system/cmake/import_lib/main/lib/tinyxml2' (https://github.com/leethomason/tinyxml2.git) registered for path 'examples/build_system/cmake/import_lib/main/lib/tinyxml2'
Submodule 'examples/peripherals/secure_element/atecc608_ecdsa/components/esp-cryptoauthlib' (https://github.com/espressif/esp-cryptoauthlib.git) registered for path 'examples/peripherals/secure_element/atecc608_ecdsa/components/esp-cryptoauthlib'
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-y5pkakez/components/asio/asio'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 39225 (delta 0), reused 10 (delta 0), pack-reused 39215
Receiving objects: 100% (39225/39225), 13.81 MiB | 1.32 MiB/s, done.
Resolving deltas: 100% (26421/26421), done.
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-y5pkakez/components/bootloader/subproject/components/micro-ecc/micro-ecc'...
remote: Enumerating objects: 765, done.
remote: Total 765 (delta 0), reused 0 (delta 0), pack-reused 765
Receiving objects: 100% (765/765), 511.80 KiB | 1.08 MiB/s, done.
Resolving deltas: 100% (464/464), done.
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-y5pkakez/components/bt/controller/lib'...
remote: Enumerating objects: 168, done.
remote: Counting objects: 100% (168/168), done.
remote: Compressing objects: 100% (64/64), done.
remote: Total 552 (delta 147), reused 111 (delta 104), pack-reused 384
Receiving objects: 100% (552/552), 2.47 MiB | 1.32 MiB/s, done.
Resolving deltas: 100% (351/351), done.
Cloning into 'C:/Users/Fire/.platformio/.cache/tmp/pkg-installing-y5pkakez/components/bt/host/nimble/nimble'...
remote: Enumerating objects: 31556, done.
remote: Total 31556 (delta 0), reused 0 (delta 0), pack-reused 31556
Receiving objects: 100% (31556/31556), 9.44 MiB | 1.28 MiB/s, done.
[Truncated]
Environment Status Duration
--------------- -------- ------------
esp32 SUCCESS 00:06:25.479
=================================================================================================== 1 succeeded in 00:06:25.479 ===================================================================================================
Terminal will be reused by tasks, press any key to close it.
```
The framework-espidf folder contains these files:
.editorconfig
.gitignore
.npmignore
.piopm
cjson-utils-tests.json
package.json
README.md
spec_tests.json
tests.json
These are exactly the same as the ones here: https://github.com/DaveGamble/cJSON/tree/master/tests/json-patch-tests
username_0: Issue still exists on version 5.0.0
username_1: This is not a valid package; it does not contain `package.json` in the root.
P.S.: You can't override ESP-IDF at runtime. We support only stable IDF releases.
Status: Issue closed
jgehrcke/python-cmdline-bootstrap | 80316190 | Title: Somehow console script is not available
Question:
username_0: Sorry to write it here, but I don't know what to do.
My package [yagmail](https://github.com/username_0/yagmail/) is not getting a console_scripts entry point, even though, with the exception of `__version__`, I completely followed your style.
Could you see what is wrong with it? The code works under Python 2 and 3.
Also, could it be that `python setup.py bdist_wininst` gives an error after defining such an entry point on a Mac?
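For context, the console_scripts mechanism in question is declared in `setup.py` roughly like this (a minimal sketch with placeholder names, not yagmail's actual layout):
```python
from setuptools import setup

setup(
    name="mypackage",            # placeholder project name
    version="0.1.0",
    packages=["mypackage"],      # assumes a mypackage/ package exists
    entry_points={
        "console_scripts": [
            # installs a `mypackage` command that calls mypackage.main:main
            "mypackage = mypackage.main:main",
        ]
    },
)
```
After `pip install .`, setuptools generates a `mypackage` executable on the PATH that invokes the named function.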
Answers:
username_1: Did you resolve the issue you were observing?
Status: Issue closed
username_0: Yeah, I assume it was something silly, perhaps `__init__`-related, or having to upgrade with `pip install -U` or something.
kbdancer/TPLINKKEY | 274857089 | Title: Error when running
Question:
username_0: Traceback (most recent call last):
File "scan.py", line 172, in <mod
SET_THREAD = int(sys.argv[1])
IndexError: list index out of range
Answers:
username_1: The first argument is the thread count; did you set it?
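For illustration, a minimal guard for that argument would be (a sketch based on the traceback above; `SET_THREAD` is the script's own variable):
```python
import sys

# scan.py expects the thread count as its first command-line argument
if len(sys.argv) < 2:
    sys.exit("usage: python scan.py <thread_count>")

SET_THREAD = int(sys.argv[1])
```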
username_2: Why have I scanned so many IP ranges without finding a single one? o(╥﹏╥)o
Author, could you give some advice?
username_1: Many places no longer have ADSL these days. The TPLINK vulnerability basically only exists on ADSL networks, and it requires the router itself to have a public IP.
Status: Issue closed
pybind/pybind11 | 183283054 | Title: Problems with numpy array handling
Question:
username_0: Hi! I am trying to do some work with numpy arrays, but I am getting some compilation errors. Here is the code that I am trying:
```
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
py::array_t<double> downsample_c(py::array_t<double> x) {
auto xbuf = x.request();
auto result = py::array_t<double>(xbuf.size);
auto rbuf = result.request();
double *xptr = (double *) xbuf.ptr;
double *rptr = (double *) rbuf.ptr;
for (size_t idx = 0; idx < xbuf.shape[0]; idx++)
rptr[idx] = xptr[idx] + 1;
return result;
}
PYBIND11_PLUGIN(test) {
py::module m("downsample");
m.def("downsample_c", &downsample_c, "Downsample array");
return m.ptr();
}
```
and compiling this with:
```
$ c++ -O3 -shared -std=c++11 -I `python -c'import pybind11; print(pybind11.get_include())'` `python-config --cflags --ldflags` prova.c -o prova.so
```
gives me this error:
```
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
prova.c: In function ‘pybind11::array_t<double> downsample_c(pybind11::array_t<double>)’:
prova.c:23:42: error: invalid user-defined conversion from ‘size_t {aka long unsigned int}’ to ‘const pybind11::buffer_info&’ [-fpermissive]
auto result = py::array_t<double>(xbuf.size);
^
In file included from /home/faltet/miniconda3/include/python3.5m/pybind11/pytypes.h:12:0,
from /home/faltet/miniconda3/include/python3.5m/pybind11/cast.h:13,
from /home/faltet/miniconda3/include/python3.5m/pybind11/attr.h:13,
from /home/faltet/miniconda3/include/python3.5m/pybind11/pybind11.h:32,
from prova.c:1:
/home/faltet/miniconda3/include/python3.5m/pybind11/common.h:214:5: note: candidate is: pybind11::buffer_info::buffer_info(Py_buffer*) <near match>
buffer_info(Py_buffer *view)
^
/home/faltet/miniconda3/include/python3.5m/pybind11/common.h:214:5: note: conversion of argument 1 would be ill-formed:
prova.c:23:42: error: invalid conversion from ‘size_t {aka long unsigned int}’ to ‘Py_buffer* {aka bufferinfo*}’ [-fpermissive]
auto result = py::array_t<double>(xbuf.size);
^
prova.c:23:42: error: invalid conversion from ‘size_t {aka long unsigned int}’ to ‘Py_buffer* {aka bufferinfo*}’ [-fpermissive]
In file included from /home/faltet/miniconda3/include/python3.5m/pybind11/pytypes.h:12:0,
from /home/faltet/miniconda3/include/python3.5m/pybind11/cast.h:13,
from /home/faltet/miniconda3/include/python3.5m/pybind11/attr.h:13,
from /home/faltet/miniconda3/include/python3.5m/pybind11/pybind11.h:32,
from prova.c:1:
/home/faltet/miniconda3/include/python3.5m/pybind11/common.h:214:5: note: initializing argument 1 of ‘pybind11::buffer_info::buffer_info(Py_buffer*)’
buffer_info(Py_buffer *view)
^
In file included from prova.c:2:0:
/home/faltet/miniconda3/include/python3.5m/pybind11/numpy.h:132:5: note: initializing argument 1 of ‘pybind11::array_t<T, ExtraFlags>::array_t(const pybind11::buffer_info&) [with T = double; int ExtraFlags = 16]’
array_t(const buffer_info& info) : array(info) {}
^
```
I am just trying to follow the [example in the pybind11 docs](http://pybind11.readthedocs.io/en/latest/advanced.html#vectorizing-functions). I don't see what I am doing wrong. Could it be that the example in the documentation is a bit obsolete?
I am using Ubuntu 16.04 and gcc 5.4.
Thanks in advance!
Answers:
username_1: Is this with the latest 'master' pybind11 version from the git repository? If not, can you please retry?
username_2: That line in the documentation (and your code) really should be written as `py::array_t<double> result(buf1.size);`. As it's written now, it's invoking implicit conversion constructors, but apparently is picking the wrong ones.
This is a deeper issue in pybind11, though: we really should be marking almost all of the single-argument constructors (or multi-argument with defaults for 2nd and beyond arguments) as `explicit` (as recommended in the [C++ Core Guidelines](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Rc-explicit)): otherwise we're defining implicit conversions in all sorts of places (like this) that we don't really intend.
Right now, we're saying:
```C++
py::array_t<double> foo = 13;
```
and
```C++
void foo(py::array_t<double> f) { ... }
foo(13);
```
are valid, and I really don't think they should be. This goes far beyond `array_t`, however: we have unintended implicit conversion in many other places, as well.
(As a side note, this particular C++ feature has always irritated me: implicit conversion really should be an opt-in rather than opt-out feature; that ship sailed long, long ago, however, so we're stuck with it).
username_1: @username_2's PR #449 was merged, which marks the troublesome implicit constructor as explicit.
username_3: @username_0 btw I think you can just use `.data()` here without having to request a buffer, i.e.
```cpp
auto xptr = x.data();               // const pointer straight from the array
auto rptr = result.mutable_data();  // writable pointer, no request() needed
```
username_1: Closing this then -- please comment if you still have issues with compilation errors.
Status: Issue closed
username_0: Sorry for being late. I have tried latest master, but after installing it in my MacOSX box with:
```
pip install git+https://github.com/pybind/pybind11
```
I am getting this error:
```
$ c++ -O3 -shared -std=c++11 -I `python -c'import pybind11; print(pybind11.get_include())'` `python-config --cflags --ldflags` downsampling.cpp -o downsampling.so
Undefined symbols for architecture i386:
"_PyBytes_AsStringAndSize", referenced from:
pybind11::str::operator std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >() const in downsampling-50ad40.o
pybind11::detail::type_caster<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>::load(pybind11::handle, bool) in downsampling-50ad40.o
"_PyInstanceMethod_New", referenced from:
pybind11::cpp_function::initialize_generic(pybind11::detail::function_record*, char const*, std::type_info const* const*, unsigned long) in downsampling-50ad40.o
"_PyModule_Create2", referenced from:
pybind11::module::module(char const*, char const*) in downsampling-50ad40.o
"_PyUnicode_AsUTF8String", referenced from:
pybind11::str::operator std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >() const in downsampling-50ad40.o
pybind11::detail::type_caster<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, void>::load(pybind11::handle, bool) in downsampling-50ad40.o
"_PyUnicode_FromString", referenced from:
pybind11::detail::type_caster<char, void>::cast(char const*, pybind11::return_value_policy, pybind11::handle) in downsampling-50ad40.o
pybind11::str::str(char const*) in downsampling-50ad40.o
"__Py_FalseStruct", referenced from:
pybind11::detail::type_caster<bool, void>& pybind11::detail::load_type<bool, void>(pybind11::detail::type_caster<bool, void>&, pybind11::handle const&) in downsampling-50ad40.o
ld: symbol(s) not found for architecture i386
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
so probably the new install ended with a corrupted package.
After reverting to the wheel in PyPI (pybind11 1.8.1), and using the suggestion by @username_2, I am getting this error:
```
downsampling.cpp:8:23: error: no matching constructor for initialization of 'py::array_t<double>'
py::array_t<double> result(xbuf.size);
^ ~~~~~~~~~
/Users/faltet/miniconda3/include/python3.5m/pybind11/numpy.h:128:64: note: candidate constructor
(the implicit copy constructor) not viable: no known conversion from 'size_t' (aka 'unsigned long') to
'const pybind11::array_t<double, 16>' for 1st argument
template <typename T, int ExtraFlags = array::forcecast> class array_t : public array {
```
I am still learning C++, so not sure how to interpret this one.
Thanks for all the help
username_1: @username_0: The linker errors you got from the github package look much more reasonable ;). Here, it just looks like ``python-config --ldflags`` isn't linking to your Python library for some reason. I would look into that (potentially you need to add ``--libs``)
The Pybind11 release on PyPi is super-old, please don't use that to debug this error. (A new version will be out fairly soon). |
ESDP-15-5/student-crm | 124761213 | Title: Added messages about the success of actions
Question:
username_0: ## Problem in observed behavior
When a user performs an action, the system does not report whether it succeeded
## Inconvenience when using this system function
If you delete or edit entity parameters, you have to go through the list to check whether the action was carried out or not.
## Proposed solution
Add pop-up messages that the user will see until the next action
Answers:
username_0: Spent 30 minutes on this
Status: Issue closed
ryankeefe92/Episodes | 154398016 | Title: Lags for shows that have a lot of episodes in iTunes
Question:
username_0: ...because it is looking for a higher-quality version of every episode in your library. Working on a fix that marks some episodes as already being at the highest desired quality, so it won't search for a higher-quality version.
facebook/react-native | 414538105 | Title: onLongPress is not working on tablet devices (both Android & iOS)
Question:
username_0: ## 🐛 Bug Report
`onLongPress` is not triggered on tablet devices (both Android and iOS)
## To Reproduce
If a `TouchableOpacity` component is created with an `onLongPress` handler, this issue can be reproduced
## Expected Behavior
When the `Touchable` component is held for a long time, `onLongPress` should fire.
## Code Example
<TouchableOpacity style={Styles.tabletStyle.listRowStyle} onPress={() => this.onRowClick(item)} onLongPress={(event) => this.onContextMenuOpen(event, item)}>
  <Text>Render Something</Text>
</TouchableOpacity>
## Environment
OS: Linux 4.15
Node: 11.4.0
npm: 6.4.1
react-native: 0.52.2
I have implemented long press in many areas of my app for testing purposes, but it's not working anywhere. The same code works as expected on mobile devices.
Answers:
username_0: @react-native-bot I have already included the environment info. If these details are not enough, let me know what to mention in addition to these details.
username_1: @username_0 as the bot said, please run `react-native info`
username_0: @username_1 As I had already mentioned in my previous comment, I had run the command and pasted those details in the question. Could you let me know what I am missing here?
username_2: @username_0 is this issue occurring while debugging, or in a production build?
username_0: @username_2 In both builds. I have just now checked the production build as well.
Status: Issue closed
username_0: I have just noticed that it's working as expected. Surprisingly, it started working without any changes, hence proceeding to close.
anymail/django-anymail | 373079363 | Title: [Proposal][Mailgun][webhooks] change event_types and reject_reason mappings
Question:
username_0: Currently, it seems impossible to differentiate temporary from permanent failures by analyzing the `AnymailTrackingEvent` created by the Mailgun webhook.
A delivery which had a temporary failure due to - for instance - a broken connection, will generate a webhook payload with (among other fields):
```json
{
"event": "failed",
"reason": "generic",
"severity": "temporary"
}
```
This gets mapped to an `AnymailTrackingEvent` with `event_type=BOUNCED` and `reject_reason=BOUNCED`, which is hardly helpful. In particular, in the case of a failed connection it feels actually misleading, IMHO, since there was no actual bounce from the receiving server.
Looking at `anymail/webhooks/mailgun.py`, I'd suggest doing something like the following in `esp_to_anymail_event()`:
```python
if event_type == EventType.BOUNCED and event_data['severity'] == 'temporary':
    event_type = EventType.DEFERRED
```
Also, changing the `reject_reasons` mapping from `generic → BOUNCED` to `generic → OTHER`. The rationale being: if Mailgun cannot interpret the error, we shouldn't assume it has any particular type.
Additionally, and mostly unrelated, we also see `"reason": "greylisted"` in our logs, which is currently not handled at all and might be mapped to `BOUNCED`?
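Building on the snippet above, a minimal self-contained sketch of the combined mapping (the function name and returned strings here are illustrative stand-ins, not Anymail's actual API):
```python
def map_failed_event(event_data):
    """Sketch: normalize a Mailgun 'failed' event payload."""
    if event_data.get("severity") == "temporary":
        return "deferred"   # temporary failures, including greylisting
    return "bounced"        # permanent failures
```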
Before working on a concrete PR, I thought I'd get some opinions on this. Thoughts?
Answers:
username_1: Those changes both seem good to me.
The (root level) "reason" field in the Mailgun payload doesn't seem to be documented anywhere, and it's not clear to me that Anymail's Mailgun [`reject_reasons` mapping](https://github.com/anymail/django-anymail/blob/v4.3/anymail/webhooks/mailgun.py#L85-L91) is really adding any value. Perhaps the normalized `reject_reason` should just be `BOUNCED` for bounce events, and `OTHER` for rejected events (unless someone can figure out how to map Mailgun's `event["reject"]["reason"]` text to Anymail's normalized reject reasons).
Since Mailgun's "severity" field *is* at least partially documented (under their [failed Event Type](https://documentation.mailgun.com/en/latest/api-events.html#event-types)), it makes sense to use it to identify `DEFERRED` events.
Does `"reason": "greylisted"` arrive with `"severity": "temporary"`? If so, that's another good argument for this change—greylisted should be `DEFERRED`. |
tcalmant/python-javaobj | 799317281 | Title: Import numpy only when it is really used
Question:
username_0: We need to **set the NumPy/OpenBLAS thread limit via an environment variable before NumPy is imported**.
The value of the thread limit is stored in our central configuration service, to which we connect over the network using TLS configured via a Java truststore and certstore. So we have to first import `jks`, which imports `javaobj`, which imports `numpy` -- we are able to get the thread limits, but NumPy ignores them because `javaobj` imported it too early.
Would it be possible to move the import of `numpy` from module level into the `JavaObjectUnmarshaller` constructor, and do the import only when `use_numpy_arrays` is `True`? `jks` uses the default value of `use_numpy_arrays` (`False`), so we would avoid importing `numpy`.
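For illustration, a minimal sketch of the requested change (the class name and flag come from this issue; the surrounding code is assumed, not javaobj's actual implementation):
```python
import os

# Callers can now cap BLAS/OpenMP threads before NumPy is ever imported:
os.environ.setdefault("OMP_NUM_THREADS", "4")

class JavaObjectUnmarshaller:
    def __init__(self, use_numpy_arrays=False):
        self.use_numpy_arrays = use_numpy_arrays
        self._numpy = None
        if use_numpy_arrays:
            import numpy  # deferred: only pulled in when actually requested
            self._numpy = numpy
```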
Status: Issue closed
Answers:
username_1: Fixed by #45 and released with version 0.4.2 |
mozilla/fxa-bugzilla-mirror | 438563332 | Title: Primary account email required for sign-in / resetting the password [bz1547186]
Question:
username_0: From https://bugzilla.mozilla.org/show_bug.cgi?id=1547186
Created attachment 9060892
Screenshot_2019-04-26 Sign in to continue to Firefox Sync.png
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0
Steps to Reproduce:
Sign In Flow:
1. Open Firefox
2. Open Menu from the Top Right Corner of the Browser
3. Click on Sign-In to Sync
4. Enter the correct email address and password
5. Click on Sign In
Reset Password:
1. Tried to reset the password for the account.
Actual results:
Sign In Flow
Error Message: "Primary account email required for sign-in"
Reset Password
Error Message: "Primary account email required for sign-in"
Expected results:
Sign In Flow
The error message should be apt: if the email address and password are correct, then it should let me log in.
If there is a mismatch in the email address or password, it should present me with an understandable error message.
Reset Password
It should send me the password reset options and let me reset the password and log in.
Answers:
username_1: This is a duplicate issue. Will post link. |
hshahwan/Zad-AlKheir | 442610619 | Title: Database schema
Question:
username_0: ### Create a database schema
- analyze the database.
- define tables.
- define relations between them.
- define the type of every element in the table
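As a minimal illustration of the steps above (the table and column names below are placeholders, not the project's actual schema):
```python
import sqlite3

conn = sqlite3.connect("example.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS users (
        id   INTEGER PRIMARY KEY,        -- a type for every element
        name TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS donations (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),  -- relation between tables
        amount  REAL NOT NULL
    );
""")
conn.commit()
conn.close()
```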
Answers:
username_0: Good luck, and if you need any support tell me |
jupyter/notebook | 145149458 | Title: Proper way to add "alert"/"note"/"hint"/"warning"/... boxes?
Question:
username_0: I apologize if this has been discussed before, but I didn't find anything about it in the issues ...
I often see colored boxes in Markdown cells, e.g. in `docs/source/examples/Notebook/Notebook Basics.ipynb` (rendered by [Github](https://github.com/jupyter/notebook/blob/master/docs/source/examples/Notebook/Notebook%20Basics.ipynb)/[nbviewer](http://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Notebook%20Basics.ipynb)).
Those are typically created with raw HTML `<div>`s like this:
```html
<div class="alert alert-success">
Blah blah blah
</div>
```
This looks nice and all, but
1. It's kinda tedious to type
1. The colored boxes (but not their content) get lost in translation to LaTeX
Is there a better way to create those boxes?
Answers:
username_1: Hum, not as far as I am aware; we would have to have some kind of markdown `role` like in sphinx/rst.
Hope the CommonMark folks come up with a syntax for this soon!
username_2: As a workaround you could use a python object which implements `_repr_html_` and `_repr_latex_`:
```
class Alert():
    def __init__(self, text):
        self.text = text
    def _repr_html_(self):
        return '<div class="...">{}</div>'.format(self.text)
    def _repr_latex_(self):
        return '...'
```
This would return both HTML and LaTeX and would therefore be visible in both. But now you need to remove the cell itself (not sure how that works...)
username_0: I don't think this will happen anytime soon. This was discussed over and over for years, but nothing was added to CommonMark. A few proposals have been collected in the [CommonMark Wiki](https://github.com/jgm/CommonMark/wiki/Proposed-Extensions) under "Directives".
What about implementing one of them in The Notebook as a Markdown extension?
Could someone please point me to the code where Markdown cells are parsed?
Probably there is already an extension available for whatever JavaScript library is used for that?
Here are a few more links:
* admonition support for Python-Markdown: https://pythonhosted.org/Markdown/extensions/admonition.html
* proposed admonition support in pandoc: https://github.com/jgm/pandoc/issues/2610
username_1: `notebook/static/notebook/js/textcell.js` `MarkdownCell.prototype.render ` ~ Line 370
username_3: Additionally, if everyone gives up on the standard and does their own versions, we end up in the same kind of mess that led to the creation of CommonMark in the first place. So I think we should say no to ad-hoc extensions, and encourage people who don't like that to go and improve CommonMark.
username_2: Just putting in a few good words for a pandoc-based notebook/nbconvert: pandoc because that's what is used on the R side with knitr/RMarkdown, and because it is (next to GitHub) a kind of "extended standard". knitr/RMarkdown currently have an advantage (and more so in the future, if e.g. better tables or the above admonition support are added to pandoc) in what they can express vs. what can be expressed in notebook md cells. nbconvert can probably easily switch to pandoc, but the browser would need an md2html webservice (e.g. using pypandoc). Whether it would be worth it? The tables feature might be a great addition, as it would mean that more richly styled html tables could be translated into pdfs.
@username_0 If you don't need interactivity, just html and latex/PDF support (and after pandoc gets everything you want), you could try to write admonitions into md cells (or raw cells?), ignore the error in the notebook and then convert the ipymd file to md using nbconvert and then the md file to whatever pandoc supports via pandoc.
username_0: @username_1
I'm not talking about implementing a Markdown parser (or several of them), I'm talking about an extension (or several of them).
Thanks for the pointer, here's [a link](https://github.com/jupyter/notebook/blob/cf1e849fc6c294c19c86236d475808227bb7cb1d/notebook/static/notebook/js/textcell.js#L365).
Apparently, there is already some pre-processing being done before [marked](https://github.com/chjj/marked) is used for the actual conversion.
In the next few days (or weeks) I'll have a closer look how my proposed extension could be added there.
@username_3
The goal of CommonMark, as far as I understand, is to have an unambiguous spec for the most "common" Markdown features. CommonMark allows, probably even encourages, extensions.
There are many [proposed extensions (same link as I gave above)](https://github.com/jgm/CommonMark/wiki/Proposed-Extensions), most of them will never be part of CommonMark.
In the discussions linked from the wiki page, the CommonMark authors even encourage people to create extensions.
And the fact that extensions like the one I'm suggesting have been discussed for years without being added to CommonMark leads me to predict that it ain't gonna happen.
Fun fact 1: [marked (and therefore The Notebook) doesn't use CommonMark](https://github.com/chjj/marked/issues/563)
Fun fact 2: The Notebook already uses at least 2 extensions (probably more): inline math with `$...$` and raw LaTeX blocks that are interpreted as math.
(Just for future reference, the former might be built-in to marked sometime in the future (https://github.com/chjj/marked/pull/180))
I don't see a fundamental problem in adding another extension, if it keeps people from using HTML-exclusive work-arounds.
And sure, other tools will have to implement those extensions, too, but that's what happens already, see e.g. https://github.com/spatialaudio/nbsphinx/pull/35.
@username_2
Pandoc is circumstantial to this discussion, it's just another parser.
And I don't want to ignore errors in The Notebook. I want this to work correctly in The Notebook and also in all related tools I care about.
username_3: I'm fine with extensions once there is a generic extensible syntax (see [this discussion](http://talk.commonmark.org/t/generic-directives-plugins-syntax/444)). What I want to avoid is doing ad-hoc extensions where someone picks some symbols and decides to make them mean a special thing, without regard for compatibility or conflicts.
I know we already make some extensions to the markdown the notebook supports. That does not mean it's OK to add more. Those extensions cause enough headaches, but so many notebooks already rely on them that we can't practically get rid of them.
I'm deliberately being hard on this, because I want to see a generic extension point in CommonMark, and that's not going to happen if we carry on doing ad-hoc extensions.
username_0: And you shouldn't!
Those extensions are great, and the syntax is really simple and easy to use.
And being able to write equations in a straightforward way is one of the most important features of the Jupyter Notebook!
Anyway, I think the first step forward is to get rid of marked and switch to CommonMark. I've created a new issue about that: #1371.
Once this is done, I think we can continue discussing extensions.
username_4: We recently changed our documentation workflow to integrate jupyter notebooks within our reST documents. The reason for that: we wanted to document only once. The notebooks can be utilized by the user and are synced within the rendered docs (e.g. using sphinx). The transition process went really smoothly and we are happy to gain this much from using jupyter notebooks within our project.
However, these things can't be accomplished so far:
- admonitions
- citations
For admonitions I proposed a workaround for the 'nbsphinx' package (which we are using to convert the notebooks), spatialaudio/nbsphinx#46, which has its faults, but works. Also in this issue here, several workarounds and hacks have been proposed, none of which is really favourable.
From the user's perspective, things should just work. If inline math weren't available within the notebooks, this would have been a showstopper. And using `<div>` sections to get alert boxes is (to some degree) annoying, too.
I understand that the cleanest solution is to wait for the changes in the standard. But please also consider an interim extension which takes care of this until the changes are incorporated in the standard. Otherwise (and this is already under way) hacks, workarounds, and incompatibilities will float around (just my 2c).
username_0: For those who are interested: Since a *proper* solution will take a long time to come, I've implemented a temporary work-around in `nbsphinx` that converts `<div>` blocks with bootstrap `warning` and `info` classes to Sphinx boxes: http://nbsphinx.readthedocs.io/en/latest/markdown-cells.html#Info/Warning-Boxes
Status: Issue closed
username_0: @username_5 Why did you close this?
Is there a solution available?
username_5: @username_6 & @mpacer : what are the next steps on this issue? Thanks!
username_6: There is no plan to implement something like this. However, I have seen several issues related to extending our markdown renderer, so I think the best thing to do is create an issue for implementing a general markdown extension interface so that we can start to act on things like this: https://github.com/jupyter/notebook/issues/2450
username_7: Admonitions `!!! foo` are a common extension to Markdown, e.g. [supported by mkdocs](https://squidfunk.github.io/mkdocs-material/extensions/admonition/), in pandoc via an extension (https://github.com/jgm/pandoc/issues/2610), and in flexmark via an [extension](https://github.com/vsch/flexmark-java/wiki/Admonition-Extension). Julia's [markdown docstrings](https://docs.julialang.org/en/v1/stdlib/Markdown/index.html) commonly use them for warnings etcetera. Would be nice to have them in notebooks, but I understand that this is more of an upstream issue for you guys.
username_8: Another thought here - perhaps rather than adopting (or defining) a new markdown syntax for "admonition" blocks, we could use the cell-level metadata for this.
*Cell tags* could be a way to tag markdown cells with the metadata needed for a renderer to know what "kind" of markdown cell it is. For example, Jupyter Book [uses a "hide_input" tag to control whether cells are hidden](https://jupyterbook.org/features/hiding.html). One could similarly imagine a "warning" or "information" tag with cells.
A UI for adding/removing tags from cells is now native in both the Classic Notebook UI and JupyterLab (I am not sure about this for nteract?).
I think the main thing we'd need in this case is not a new syntax in Markdown, but for Jupyter to formally recommend this as a valid approach, and potentially to define a subset of tags that are recognized as the "official" name for a particular thing (e.g. "warning" vs. "warn", etc). In the future, you could imagine front-ends using these tags to render some cells differently from others.
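For illustration, a tagged markdown cell's metadata might look like this in the notebook JSON (shown as a Python dict; the "warning" tag name is hypothetical, not an agreed convention):
```python
# A markdown cell tagged as a warning (hypothetical tag name):
cell = {
    "cell_type": "markdown",
    "metadata": {"tags": ["warning"]},
    "source": ["Be careful: this step overwrites existing files."],
}
```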
username_9: Nice idea @username_8; however, what about nested admonitions? The admonition tag would thus require opening and closing versions, I guess.
username_8: @username_9 yep, that's a good point. I think that out-of-the-box it would be most straightforward to do that without supporting nested admonitions :-/
That said, I feel like if tags were supported for something like this, it doesn't prevent some kind of markup support for this as well (which might be the suggested approach for nested admonitions)
username_0: AFAICT, three possibilities have been suggested:
1. The `<div class="alert alert-info">` work-around
2. Using cell tags
3. A proper Markdown/CommonMark extension
IMHO, option (3) would be best, but it might still take a really long time until its time comes.
Option (1) is not great, but it somewhat works today.
Option (2) is somewhere in-between. It has fundamental limitations regarding nesting and I'm not sure whether it's worth spending the effort for a feature that's intrinsically flawed (given that a work-around is available and a better solution will be available in the far future).
I think we should hope for option (3) to come some day, and in the meantime try to improve option (1).
Option (1) works well in:
* Classic Notebook (Binder link: https://mybinder.org/v2/gh/spatialaudio/nbsphinx/master?filepath=doc/markdown-cells.ipynb)
* `nbsphinx` (https://nbsphinx.readthedocs.io/en/latest/markdown-cells.html), even with LaTeX (https://readthedocs.org/projects/nbsphinx/downloads/pdf/latest/#section.3)
It works well on my local JupyterLab, but for some reason it doesn't seem to work on Binder: https://mybinder.org/v2/gh/spatialaudio/nbsphinx/master?urlpath=lab/tree/doc/markdown-cells.ipynb
It also does *not* work very well in:
* `nbconvert`
... and I guess by extension in:
* `nbviewer` (https://nbviewer.jupyter.org/github/spatialaudio/nbsphinx/blob/master/doc/markdown-cells.ipynb)
* Github: https://github.com/spatialaudio/nbsphinx/blob/master/doc/markdown-cells.ipynb
I think the best short-term investment would be to fix `nbconvert` (and find out what's going wrong in JupyterLab on Binder)!
username_10: Why not just add support for asciidoc at once? It is as concise as markdown, but translates semantics directly to docbook, so it has had the admonition blocks you ask for since forever. Markdown is a kind of email thing, not a proper solution for full-blown documentation.
username_11: <div class="alert alert-success">
Blah blah blah
</div>
username_11: Center-aligned
{: .alert .alert-info .text-center}
username_12: In passing, I created a simple extension that implements simple cell-tag-based styling. It doesn't immediately apply the style when you add a tag - you need to reload the notebook to see the effect: https://github.com/innovationOUtside/nb_extension_tagstyler
 |
villadora/express-http-proxy | 240923309 | Title: Problem forwarding POST requests from ajax through the proxy
Question:
username_0: e.g.
http://localhost:3001/api forwards to http://localhost:3000
A POST request to http://localhost:3001/api/users with data: {a:1,b:2}
In http://localhost:3001/api/user, req.body shows {a:1,b:2}
On the proxied server, the received req.body becomes {'a':'1','b':'2'}=''
Answers:
username_1: I ran into this too. In the end I stopped using this plugin and forwarded requests directly with request instead.
username_0: Using http-proxy makes POST requests work normally.
MicrosoftDocs/azure-docs | 681107146 | Title: Step out of organisation as guest
Question:
username_0: [Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b448cfd0-b673-7d04-047b-71aa62bebc27
* Version Independent ID: 75118bff-94de-c9f9-7666-d2a6f4253b11
* Content: [Leave an organization as a guest user - Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/leave-the-organization)
* Content Source: [articles/active-directory/external-identities/leave-the-organization.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/external-identities/leave-the-organization.md)
* Service: **active-directory**
* Sub-service: **b2b**
* GitHub Login: @msmimart
* Microsoft Alias: **mimart**
Answers:
username_1: @username_0
Thanks for your feedback! We will investigate and update as appropriate.
username_1: @username_0 you need to go to portal.azure.com and select it from there.

Then you can select "Organizations" from the side bar.

Hope this helps! If you have further questions feel free to let me know or create a pull request.
Status: Issue closed
python/devguide | 183165886 | Title: devguide-git bot
Question:
username_0: Hi Brett,
I know this issue is not related to the devguide, but to a bot used by GitHub.
When devguide-git pushes a message to #python-dev, the HTTP URL is not complete. There is only "http" and nothing else. I don't know where the code of devguide-git is, but could you try to fix it, or how can we fix it?
```
< devguide-git> [devguide] username_2 commented on issue #59: Why do you want to merge
`github` into `master` except when we are done with the `github` branch? Wouldn't it be easier
to merge `master` into `github` (which is what I have been doing on occasion)? Most of the
changes in `master` are spelling mistakes and such while all the churn is in `github`, so I would
think we would want to merge from calm to crazy and not the other way around. http
```
Thank you
Answers:
username_1: That looks like truncation of some sort. The URL is normally a complete shortened URL.
But the answer is no, we can't fix the github bot. We could use a different bot, though, if someone wants to decide which one.
username_0: Hi @username_1,
so in this case we can't fix it; fortunately, we have the issue number in the message.
But where can I find this bot or read its configuration?
Thank you
username_2: It's just the IRC integration that GitHub provides.
Status: Issue closed
username_0: I am closing this issue because it's a bug in the GitHub bot.
matsondawson/vic20dart | 69621116 | Title: Colors not shown correctly in Firefox
Question:
username_0: If opened in Firefox, the startup screen is totally blue.
By giving `POKE 36879,0` the screen turns blue with black border and black text.
Answers:
username_1: It sounds like an issue in the VIC code, probably around rendering directly to bitmap data.
Maybe related to alpha.
username_0: Yes, I verified it, and it's a kind of overflow problem with the white color `0xFFFFFFFF` in the palette. Decreasing its alpha value to `FE` makes it work correctly.
Do you have any clue why this happens? It's a Firefox bug?
BTW, I've also found some palette entries aren't correct (e.g. light cyan), so I've replicated the palette from the VICE emulator. Sending a pull request. |
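For illustration, the workaround described above amounts to something like this (a sketch only; that the alpha byte sits in the most significant byte of each palette entry is an assumption here):
```python
# Palette entries as 32-bit values; alpha assumed in the top byte.
OPAQUE_WHITE = 0xFFFFFFFF       # fully opaque white: triggers the blue screen
WORKAROUND_WHITE = 0xFEFFFFFF   # alpha dropped to 0xFE: renders correctly
```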
stemcellontologyresource/OSCI | 360348773 | Title: Definitions needed for cell culture substrate and subtypes
Question:
username_0: cell culture substrate
df= A processed material on which a cell culture grows or is attached.
**Question:** Do we intend something more specific about the material? E.g., the processed material is specifically created to grow cells.
Answers:
username_1: cell culture substrate
def: surface on which a cell culture grows or is attached
matrigel substrate
def: a cell culture substrate that is composed of matrigel
dutu/poloniex-api-node | 225284419 | Title: CORS error
Question:
username_0: I don't know if it's really an issue or if I'm doing something wrong, but when I try to call a private method I get a CORS error

Answers:
username_1: Are you using the module in a browser?
username_0: No, I'm using it this way...

Status: Issue closed
username_1: @username_0, this library was tested as a node.js module and the description states it is a node.js library; you are using it in a browser/client.
The library is using the `request` module, and I cannot see that `request` has been tested in the browser either. I will release and test the library for the browser in one of the next revisions.
username_2: @username_1 Any updates on the library for the browser?
Thanks
rainlake/homebridge-platform-lightify | 631253239 | Title: Can you make moods show up
Question:
username_0: Hey, this plugin rocks!! The only thing missing is Moods.
Could you make it so it could be something like this:
show moods (true or false)
kind of like how it is for groups
It would be awesome, because whenever I say "turn lights white" they always look pinkish.
I have no clue how to code or I would give it a shot!!
But it would be sweet to just hit a button in the Home app to make the lights blue etc. when you set a scene.
Answers:
username_1: Are you aware that Lightify is disabling their servers next year? It’s best to not invest time in this platform and move on at this point. I would highly recommend the Hubitat hub. It can control Lightify devices. |
epurdom/clusterExperiment | 153144060 | Title: improve random seed
Question:
username_0: We should have an argument in the subsampling that takes the seed, and clusterMany could set this in subsampleArgs. This would mean that all of the clusterMany clusterings would be run on the same set of sampled data, because *every time* subsampling is done, it's done on the same set.
But it also means that at each sequential step, for example, you would be running on the same set of subsampled data. Not sure that's good. We could also have a random seed for seqCluster, and then at the beginning of seqCluster create a *long* list of random seeds that would then be consecutively passed to subsampleArgs each time it is called (and would be cycled if necessary). That would give each subsampling a different subsample, but you could control it to be the same.
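For illustration, the seed-list idea might look like this (clusterExperiment is an R package; this Python sketch, with made-up names, only illustrates the mechanism):
```python
import random

def make_seed_list(master_seed, n):
    """Generate n reproducible seeds from one master seed (sketch)."""
    rng = random.Random(master_seed)
    return [rng.randrange(2**31) for _ in range(n)]

seeds = make_seed_list(42, 100)
for step, seed in enumerate(seeds):
    # pass seeds[step] to each subsampling call; cycle through the
    # list if more calls are needed than seeds were generated
    pass
```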
Status: Issue closed |
JosefNemec/Playnite | 320535401 | Title: Have some "store label" in game pages
Question:
username_0: Or even an icon/logo would be fine.
Use case: I have games installed both via GOG and Steam, and I cannot figure out which is which without first opening the install location.
Status: Issue closed
Answers:
username_1: Will be done as part of #511 |
joshua-montanari/Project-B | 642568107 | Title: .gitignore
Question:
username_0: Looking good, champ. I would go ahead and make a `.gitignore` file, put `node_modules` in it, plus anything related to how your auth works. Putting `node_modules` there is just best practice and makes the commits less beefy, and keeping auth-related stuff in there is best practice too. But overall, looking good.
Answers:
username_0: You can also delete the `serviceWorker` file and all test-related files.
username_0: Edit: the `.gitignore` is for the backend; I see it on the client side of things.
stelligent/cfn-model | 274684070 | Title: Relax any sequence schema restrictions
Question:
username_0: The potential use of Fn::If can muck with even the most basic restriction to have a sequence. The Fn::If is a Hash, but could have a legit array "output".
This could play in with #10, but in the meantime it probably makes sense to reduce the schema validators to only include required fields and set them to any. For each field where we make this change, we need to check there isn't a core rule that will be affected in an undesirable way (i.e. something it needs to check itself that previously it depended upon being correct).
Status: Issue closed |
apache/apisix-dashboard | 736584460 | Title: dashboard 2.0 failed to fetch ssl certificate not found
Question:
username_0: Please answer these questions before submitting your issue.
- Why do you submit this issue?
- [ ] Question or discussion
- [ ] Bug
- [ ] Requirements
- [ ] Feature or performance improvement
- [ ] Other
___
### Question
- What do you want to know?
___
### Bug
- Which version of Apache APISIX Dashboard, OS, and Browser?
dashboard 2.0
centos 7
chrome 83
- What happened?
If possible, provide a way to reproduce the error.
The certificate has already been uploaded, but the error log reports what is shown in the screenshot below

___
### Requirements or improvement
- Please describe your requirements or improvement suggestions.
Answers:
username_1: Can you provide a certificate file that can reproduce this problem?
username_2: cc @username_3
username_3: OK, I'll try to reproduce it in my environment.
username_3: @username_0 It did not reproduce in my environment, can you provide the certificate file that can reproduce this problem?
username_2: @username_3 Could you please list where your test cert comes from?
- [ ] Self-made
- [ ] Aliyun
- [ ] Let's Encrypt
username_3: @username_2
[x] Aliyun
[x] Self-made
They can all reproduce the problem.
username_4: Is this a bug in the manager API or in apisix?
username_3: This is a bug in the manager API.
username_4: @username_5 please take a look
username_5: working on it
username_1: @username_5 I think you told me that you have found this bug, right?
username_5: yes, will fix it soon
username_5: fixed.
Status: Issue closed
ManageIQ/manageiq | 42308959 | Title: Upgrade or replace old rails 2 style plugin resource_feeder
Question:
username_0: This was a rails 2 plugin that was made to work with rails 3... should we dump this completely or will this work with Rails 4?
See: https://github.com/ManageIQ/manageiq/blob/897e5ade1952756a57d6c16538cdaa6528424431/vmdb/config/initializers/rails2_plugins.rb
https://github.com/ManageIQ/manageiq/tree/897e5ade1952756a57d6c16538cdaa6528424431/vmdb/lib/resource_feeder<issue_closed>
Status: Issue closed |
jballands/what-can-i-catch-now | 603591683 | Title: Request: Condensed Mobile View
Question:
username_0: I was using a Nook Miles ticket, trying to decide which fish to keep, and realized that I didn't know the prices for anything. "Someone must have made a tool to quickly see what can be caught right now and what it is worth..." This project did not disappoint! Thank you so much!
I would love to have a view that fits more critters on screen at a time, specifically for quick mobile reference. I only really care about the icon, the price, and the first character of the name. I made an incredibly poorly coded prototype and thought I'd share in case you wanted to do something similar.
Repo: https://github.com/username_0/what-can-i-catch-now/tree/condensed
Deployment: https://what-can-i-catch-now-condensed.now.sh/
Screenshot (iPhone XS Max):

Answers:
username_1: @username_0 Yo, this is sick. I was trying to find a good way to condense the mobile view that didn't suck. I'm guessing I could implement a modal to display the full info?
username_0: I think a modal would be perfect for full info; or at least more info. I don't think people would use this view much other than for quick farming. So far I've been using it and enjoying it.
The side-by-side on desktop is also super handy when I'm playing near my laptop!

Gotta get those bells! 🔔💰
username_1: Sounds a fun new feature to work on! Thanks for the feedback! 🔔 💰 |
boostorg/serialization | 271457295 | Title: heap_allocation for a class without a specific new operator calls the class' destructor
Question:
username_0: Since the commit https://github.com/boostorg/serialization/commit/69ecae6919b417be2b2558aefffea97fbe50d4a8 , the method doesnt_have_new_operator::invoke_delete(T * t) not only frees memory, but also calls the destructor of class T.
This causes compilation errors when using Boost.Serialization with blitz++ arrays, because the destructor of blitz::MemoryBlock is protected.
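A hypothetical reduction of the problem (not the actual Boost or blitz++ code) showing why the added destructor call breaks compilation:

```cpp
#include <new>

struct MemoryBlockLike {
protected:
    ~MemoryBlockLike() {}              // protected, as in blitz::MemoryBlock
};

template <typename T>
void invoke_delete_free_only(T* t) {   // pre-commit behaviour: free memory only
    ::operator delete(t);
}

template <typename T>
void invoke_delete_with_dtor(T* t) {   // post-commit behaviour
    t->~T();                           // ill-formed when ~T() is inaccessible
    ::operator delete(t);
}

int main() {
    void* raw = ::operator new(sizeof(MemoryBlockLike));
    MemoryBlockLike* p = new (raw) MemoryBlockLike;  // construction is fine
    invoke_delete_free_only(p);        // compiles: the destructor is never named
    // invoke_delete_with_dtor(p);     // uncommenting fails: ~MemoryBlockLike is protected
}
```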
What is the rationale behind the commit? Please note that the commit did not change the behaviour of the class if DONT_USE_HAS_NEW_OPERATOR is defined.
Same concerns were expressed by @qingl97 in his comment under the discussed commit.<issue_closed>
Status: Issue closed |
datacarpentry/r-raster-vector-geospatial | 352610284 | Title: Remove or explain + coord_equal()
Question:
username_0: Plots at the end of lesson 2 use `coord_equal()` while plots before that didn't.
Include an explanation of what it is and why it's being used, or drop it.
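For context, the kind of call in question looks like this (object names assumed in the lesson's style):

```r
library(ggplot2)

ggplot() +
  geom_raster(data = DSM_HARV_df, aes(x = x, y = y, fill = HARV_dsmCrop)) +
  coord_equal()  # forces one unit on x to render the same length as one unit on y
```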
Answers:
username_1: Good catch @username_0. All of the spatial plots in this lesson should use coord_equal(). I'll put in an explanation at the first use (episode 1) and will add that call to all the plots.
Status: Issue closed
|
ska-telescope/C_DFT | 430328250 | Title: LICENSE
Question:
username_0: Just want to point out that we have licensing guidelines at: http://developer.skatelescope.org/en/latest/projects/licensing.html
If it is not a big deal for you, I'd appreciate it if you could swap the license to BSD-3-CLAUSE; this helps make things easier to maintain.
Answers:
username_1: Hi Marco,
Apologies, this has now been corrected to the BSD-3-clause license.
Regards,
Adam
Status: Issue closed
|
keithmorris/node-dotenv-extended | 548061351 | Title: Feature request: Override
Question:
username_0: Hi, I'd like to request a feature if possible.
I'd like an override field setting. I would like process.env to override the .env values if they are set. Right now I have to do this manually, or have I missed something?
Answers:
username_1: Hi @username_0. You should be able to do this by setting the `includeProcessEnv` to true like the following:
```javascript
require('dotenv-extended').load({
  includeProcessEnv: true,
});
```
By doing this, it will load all of the values from your `.env` and `.env.defaults` files and then override them with anything set in `process.env`. Note that this will include *all* values from `process.env` in your loaded config. Also, this does not work if you have `overrideProcessEnv` set to `true`; those two settings pretty much exclude each other.
Let me know if this doesn't work for you.
Status: Issue closed
|
kaylaswartz/PetMeProject | 968853198 | Title: Style the Pet Match Form
Question:
username_0: Currently the form looks like classic HTML.
Find some forms on the web that you like and try to mimic the CSS for that form.
<img width="391" alt="PetMatchForm" src="https://user-images.githubusercontent.com/292375/129206634-84cafabe-6097-41c0-97ad-fd0040bc513e.png">
Example:
<img width="661" alt="StyledForm" src="https://user-images.githubusercontent.com/292375/129207064-9307eca7-5bbf-41c9-9086-3a7dfd8c3c4b.png">
Answers:
username_1: @username_0 said we can close this issue because of **radio** buttons
Status: Issue closed
|
undera/pylgbst | 349825184 | Title: Missing contents of pylgbst.comms?
Question:
username_0: I just installed the 0.7 release on Windows but I'm getting an error:
```
C:\Anaconda3\envs\boost\python.exe C:/Anaconda3/envs/boost/Lib/site-packages/pylgbst/__init__.py
Traceback (most recent call last):
File "C:/Anaconda3/envs/boost/Lib/site-packages/pylgbst/__init__.py", line 4, in <module>
from pylgbst.comms import DebugServer
File "C:\Anaconda3\envs\boost\lib\site-packages\pylgbst\__init__.py", line 4, in <module>
from pylgbst.comms import DebugServer
ImportError: cannot import name 'DebugServer'
```
It appears that https://github.com/username_1/pylgbst/blob/master/pylgbst/comms.py is indeed empty. Did this get deleted by accident?
Excited to try out this library as a tool for teaching Python coding to my daughter. Thank you for building this and your excellent videos.
Answers:
username_1: Thanks for reporting this.
I have made an updated release, 0.8, that should work fine. Let me know if it helps.
BTW `DebugServer` is currently in a broken state, just use `get_connection_auto()` or simply use `MoveHub()`.
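In code, that suggestion looks roughly like this (a minimal sketch; the module paths are assumed from the 0.8 layout seen elsewhere in this thread):

```python
from pylgbst import get_connection_auto
from pylgbst.movehub import MoveHub

# Let pylgbst pick whichever BLE backend is available on this machine
conn = get_connection_auto()
hub = MoveHub(conn)

# Or, even simpler, let MoveHub create the connection itself:
# hub = MoveHub()
```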
username_1: Huh, I see you use Windows. Do you use BlueGiga dongle? It's the only supported BLE device on Windows...
username_0: I just ordered one, but thanks for the heads up!
<NAME>
Data Manager, Analyst and Developer
US Geological Survey, Fort Collins Science Center
2150 Centre Ave. Bldg. C
Fort Collins, CO 80526
(970) 226-9425
http://orcid.org/0000-0002-9505-1876
Work schedule:
Monday, Thursday - 7:00 - 5:00
Tuesday - Wednesday - 7:00 - 3:10
Friday (Telework) - 7:00 - 5:00
username_2: I just wanted to continue with issue #7 by using the newest version, 0.8, but I get the same error as the starter of this issue.
```
pi@raspberrypi:~/pylgbst/pylgbst/examples $ python test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
from pylgbst.movehub import MoveHub
File "/usr/local/lib/python2.7/dist-packages/pylgbst/__init__.py", line 4, in <module>
from pylgbst.comms import DebugServer
ImportError: No module named comms
`
The error happens already in the very first line when trying to import MoveHub.
Any further advice?
Status: Issue closed
|
material-components/material-components-ios | 319687539 | Title: [Snackbar] Flaky tests
Question:
username_0: ```
Test Case '-[components_Snackbar_unit_test_swift_sources.SnackbarManagerSwiftTests testMessagesResumedWhenTokenIsDeallocated]' started.
Invalid connection: com.apple.coresymbolicationd
<unknown>:0: error: -[components_Snackbar_unit_test_swift_sources.SnackbarManagerSwiftTests testMessagesResumedWhenTokenIsDeallocated] : Asynchronous wait failed: Exceeded timeout of 3 seconds, with unfulfilled expectations: "completion".
Test Case '-[components_Snackbar_unit_test_swift_sources.SnackbarManagerSwiftTests testMessagesResumedWhenTokenIsDeallocated]' failed (6.871 seconds).
```
https://fusion.corp.google.com/runanalysis/test/prod%3AMaterialComponents_iOS%2Fmacos_external%2Fpresubmit_bazel_xcode_910/prod%3AMaterialComponents_iOS%2Fmacos_external%2Fpresubmit_bazel_xcode_910/KOKORO/48f82322-f39b-4eb6-b7b7-c5466820b024/1525286003510/prod%3AMaterialComponents_iOS%2Fmacos_external%2Fpresubmit_bazel_xcode_910%20Build%20%23893/Target%20Transitions?target=MaterialComponents_iOS/macos_external/presubmit_bazel_xcode_910
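For reference, the failure is an XCTest async expectation timing out. The reduced shape of such a test (the body here is assumed for illustration, not the actual Snackbar test) looks like:

```swift
import XCTest
import Foundation

class AsyncWaitShapeTests: XCTestCase {
  func testCompletionExpectation() {
    let completion = expectation(description: "completion")
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
      completion.fulfill()
    }
    // The CI log above is this wait exceeding its 3-second timeout;
    // a longer timeout is one common mitigation on loaded machines.
    wait(for: [completion], timeout: 10)
  }
}
```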
Answers:
username_1: The internal issue [b/117178994](http://b/117178994) is now closed. This issue is being closed as a result.
Status: Issue closed
|
xhsien/sg-bridge | 936139546 | Title: Implement leave room
Question:
username_0: I'm not sure how leave room works now but it's probably not working correctly.
Answers:
username_1: Update 17th Jul 2021:
To implement a button on FE to allow users to leave a room.
username_1: Question remaining: how do we deal with the remaining 3 players.
Some ideas:
1. Show prompt for them to leave room. If they don't do so, they'll be stuck there forever.
2. Force leave the room, e.g. show a message saying "A player has left the room, this session will be automatically terminated in 3 seconds".
2.1. Bring users to the entry view after the force leave.
2.2. Bring users to the room view after the force leave. There, they can find a new user to join the same room.
username_1: If we are doing 2.2 (to be split into a BE issue), the backend work would be the following (a rough sketch follows the list):
1. Implement disconnection detection on the BE.
2. When that happens, use the socket.io API to find the room the user is in.
3. Delete them from the room-users map. If they are the last user in the room, remove the room-players entry for that room (equivalent to freeing this room number for use again).
4. If the user who left is the host, set the isHost flag to true for the top-most player in the remaining players list.
5. (May not be necessary) Clean up the game state for the room.
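A rough server-side sketch of those steps (socket.io v4 style; the roomUsers map, the event names, and the promotion logic are illustrative assumptions, not the actual sg-bridge code):

```javascript
const { Server } = require("socket.io");

const io = new Server(3000);
const roomUsers = new Map(); // roomId -> [{ socketId, name, isHost }]

io.on("connection", (socket) => {
  // "disconnecting" (unlike "disconnect") still has socket.rooms populated
  socket.on("disconnecting", () => {
    for (const roomId of socket.rooms) {
      if (roomId === socket.id) continue; // each socket also sits in a room named after its own id
      const users = roomUsers.get(roomId) || [];
      const leaver = users.find((u) => u.socketId === socket.id);
      const remaining = users.filter((u) => u.socketId !== socket.id);

      if (remaining.length === 0) {
        roomUsers.delete(roomId); // last user gone: free the room number for reuse
      } else {
        if (leaver && leaver.isHost) {
          remaining[0].isHost = true; // promote the top-most remaining player
        }
        roomUsers.set(roomId, remaining);
        io.to(roomId).emit("player-left", { players: remaining });
      }
    }
  });
});
```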
gwu-libraries/TweetSets | 726731494 | Title: Consider license for Tweetsets data
Question:
username_0: I don't currently see anything on the Tweetsets site around licensing of the data. I'm thinking the main benefit would be to guard against liability/warranty concerns. Not that those are likely to occur, but it would be good to be explicit about it. Perhaps consider something like https://choosealicense.com/licenses/cc-by-sa-4.0/ (also see https://choosealicense.com/non-software/). This would layer on to, not replace, the policies referenced near the bottom of the Tweetsets home page (under "About sharing Twitter datasets for research and archiving")
Answers:
username_0: This relates somewhat to #44 in that it would make limitations more clear to the users. |
rstudio/learnr | 578774377 | Title: warning in console after running code
Question:
username_0: R version 3.6.3 (2020-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.4 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/libopenblasp-r0.2.20.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] learnr_0.10.1
loaded via a namespace (and not attached):
[1] Rcpp_1.0.3 withr_2.1.2 rprojroot_1.3-2 digest_0.6.23
[5] later_1.0.0 mime_0.8 R6_2.4.1 backports_1.1.5
[9] jsonlite_1.6.1 xtable_1.8-4 magrittr_1.5 evaluate_0.14
[13] rlang_0.4.4 promises_1.1.0 rmarkdown_2.1 tools_3.6.3
[17] htmlwidgets_1.5.1 markdown_1.1 shiny_1.4.0 fastmap_1.0.1
[21] httpuv_1.5.2 xfun_0.12 compiler_3.6.3 htmltools_0.4.0
[25] knitr_1.27
```
Answers:
username_1: Hi,
I am having the same problem. Any ideas?
username_2: As far as I know, it's a harmless warning that happens when the parent process doesn't need to do anything to clean up the forked process. IIUC the warning comes from the parallel package and, despite being annoying, doesn't mean there's anything wrong. (If there is something wrong or unexpected happening, please do let us know.)
Status: Issue closed
|
pemistahl/grex | 838203811 | Title: Much slower than Regexp::Assemble
Question:
username_0: Thanks for grex! I love to see tools in this space.
Historically, I've used Regexp::Assemble for this kind of work, either via the full Perl script, regexp-assemble, or via my own custom Perl code that leveraged it.
The regexp-assemble script / module is available via CPAN, https://metacpan.org/pod/Regexp::Assemble, or via the package libregexp-assemble-perl on Ubuntu (not sure about other distributions).
I found it fairly straightforward to make a somewhat pathological case. I used pwgen to make 100 random strings, 80 characters long, and spat them out to a file (`pwgen 80 100 > testcase`), then ran each tool through multitime with 5 attempts and let it report back.
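Roughly like this (the exact flags are from memory, so treat them as assumptions):

```
pwgen 80 100 > testcase
multitime -n 5 regexp-assemble testcase
multitime -n 5 grex -f testcase
```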
This represents a similar-ish way I've used regexp-assemble in the past: feeding in a stream of unique request IDs to produce a combined regex to then use when searching large log files, though I've not needed to do that for a while.
Feel free to disregard this issue if this doesn't meet your goals for grex.
regexp-assemble:
```
Mean Std.Dev. Min Median Max
real 0.072 0.014 0.051 0.070 0.096
user 0.058 0.011 0.046 0.062 0.075
sys 0.015 0.010 0.005 0.008 0.030
```
grex:
```
Mean Std.Dev. Min Median Max
real 16.872 0.171 16.699 16.853 17.190
user 14.745 0.178 14.541 14.745 15.062
sys 2.125 0.019 2.104 2.126 2.157
```
Performance of the actual regexes spat out the other side is variable. For some of the test cases I made from real-world data, regexp-assemble produced faster regexes; for others, grex did.
Not sure if regexp-assemble might give you some interesting prior art to dig into from a performance perspective.
Answers:
username_0: Another pretty pathological case, performance-wise, is to point grex at a file containing these lines, which are just some sample lines of Unicode characters:
```
! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~
¡ ¢ £ ¤ ¥ ¦ § ¨ © ª « ¬ ® ¯ ° ± ² ³ ´ µ ¶ · ¸ ¹ º » ¼ ½ ¾ ¿ À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ð Ñ Ò Ó Ô Õ Ö × Ø Ù Ú Û Ü Ý Þ ß à á â ã ä å æ ç è é ê ë ì í î ï ð ñ ò ó ô õ ö ÷ ø ù ú û ü ý þ ÿ
Ā ā Ă ă Ą ą Ć ć Ĉ ĉ Ċ ċ Č č Ď ď Đ đ Ē ē Ĕ ĕ Ė ė Ę ę Ě ě Ĝ ĝ Ğ ğ Ġ ġ Ģ ģ Ĥ ĥ Ħ ħ Ĩ ĩ Ī ī Ĭ ĭ Į į İ ı IJ ij Ĵ ĵ Ķ ķ ĸ Ĺ ĺ Ļ ļ Ľ ľ Ŀ ŀ Ł ł Ń ń Ņ ņ Ň ň ʼn Ŋ ŋ Ō ō Ŏ ŏ Ő ő Œ œ Ŕ ŕ Ŗ ŗ Ř ř Ś ś Ŝ ŝ Ş ş Š š Ţ ţ Ť ť Ŧ ŧ Ũ ũ Ū ū Ŭ ŭ Ů ů Ű ű Ų ų Ŵ ŵ Ŷ ŷ Ÿ Ź ź Ż ż Ž ž ſ
ƀ Ɓ Ƃ ƃ Ƅ ƅ Ɔ Ƈ ƈ Ɖ Ɗ Ƌ ƌ ƍ Ǝ Ə Ɛ Ƒ ƒ Ɠ Ɣ ƕ Ɩ Ɨ Ƙ ƙ ƚ ƛ Ɯ Ɲ ƞ Ɵ Ơ ơ Ƣ ƣ Ƥ ƥ Ʀ Ƨ ƨ Ʃ ƪ ƫ Ƭ ƭ Ʈ Ư ư Ʊ Ʋ Ƴ ƴ Ƶ ƶ Ʒ Ƹ ƹ ƺ ƻ Ƽ ƽ ƾ ƿ ǀ ǁ ǂ ǃ DŽ Dž dž LJ Lj lj NJ Nj nj Ǎ ǎ Ǐ ǐ Ǒ ǒ Ǔ ǔ Ǖ ǖ Ǘ ǘ Ǚ ǚ Ǜ ǜ ǝ Ǟ ǟ Ǡ ǡ Ǣ ǣ Ǥ ǥ Ǧ ǧ Ǩ ǩ Ǫ ǫ Ǭ ǭ Ǯ ǯ ǰ DZ Dz dz Ǵ ǵ Ƕ Ƿ Ǹ ǹ Ǻ ǻ Ǽ ǽ Ǿ ǿ ...
ɐ ɑ ɒ ɓ ɔ ɕ ɖ ɗ ɘ ə ɚ ɛ ɜ ɝ ɞ ɟ ɠ ɡ ɢ ɣ ɤ ɥ ɦ ɧ ɨ ɩ ɪ ɫ ɬ ɭ ɮ ɯ ɰ ɱ ɲ ɳ ɴ ɵ ɶ ɷ ɸ ɹ ɺ ɻ ɼ ɽ ɾ ɿ ʀ ʁ ʂ ʃ ʄ ʅ ʆ ʇ ʈ ʉ ʊ ʋ ʌ ʍ ʎ ʏ ʐ ʑ ʒ ʓ ʔ ʕ ʖ ʗ ʘ ʙ ʚ ʛ ʜ ʝ ʞ ʟ ʠ ʡ ʢ ʣ ʤ ʥ ʦ ʧ ʨ ʩ ʪ ʫ ʬ ʭ
ʰ ʱ ʲ ʳ ʴ ʵ ʶ ʷ ʸ ʹ ʺ ʻ ʼ ʽ ʾ ʿ ˀ ˁ ˂ ˃ ˄ ˅ ˆ ˇ ˈ ˉ ˊ ˋ ˌ ˍ ˎ ˏ ː ˑ ˒ ˓ ˔ ˕ ˖ ˗ ˘ ˙ ˚ ˛ ˜ ˝ ˞ ˟ ˠ ˡ ˢ ˣ ˤ ˥ ˦ ˧ ˨ ˩ ˪ ˫ ˬ ˭ ˮ
ʹ ͵ ͺ ; ΄ ΅ Ά · Έ Ή Ί Ό Ύ Ώ ΐ Α Β Γ Δ Ε Ζ Η Θ Ι Κ Λ Μ Ν Ξ Ο Π Ρ Σ Τ Υ Φ Χ Ψ Ω Ϊ Ϋ ά έ ή ί ΰ α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ ς σ τ υ φ χ ψ ω ϊ ϋ ό ύ ώ ϐ ϑ ϒ ϓ ϔ ϕ ϖ ϗ Ϙ ϙ Ϛ ϛ Ϝ ϝ Ϟ ϟ Ϡ ϡ Ϣ ϣ Ϥ ϥ Ϧ ϧ Ϩ ϩ Ϫ ϫ Ϭ ϭ Ϯ ϯ ϰ ϱ ϲ ϳ ϴ ϵ ϶
```
grex sits at 100% CPU usage for a fair while with that.
username_1: Hi @username_0, thank you for this evaluation. I appreciate that.
I've already stated my opinion on this matter in [another issue](https://github.com/username_1/grex/issues/25). For a tool like *grex*, performance in terms of raw speed of execution is not essential imho. Nobody will ever use this tool to generate a regex from 100 strings, each 80 characters long. At least not without manual inspection of the result. Execution speed does not matter in this case.
Your comparison with the Perl tool is difficult imho, because it combines multiple regexes into a single one. *grex*, however, assembles a regex from test cases that it matches. So the algorithms of the two tools are different, which makes a performance comparison not well-founded.
In future releases, I will surely optimize performance where it is feasible. Much more important is the correctness of the results imho. I'm closing this issue now but will gladly refer back to it if I work on performance aspects. So thanks again.
Status: Issue closed
|
ziglang/zig | 494651820 | Title: Compiler crash when unpacking and returning optionals
Question:
username_0: ```zig
const std = @import("std");
const warn = std.debug.warn;
const T = struct{};
const U = struct {
str: []const u8,
fn f(self: *U) ?T {
var str = self.str;
return str[0..1];
}
fn g(self: *U) ?T {
if (self.f()) |e| return e;
return null;
}
};
pub fn main() void { // return type can also be 'anyerror!void' as well -- still crashes.
var u = U{ .str = "" };
// Either of these crashes the compiler.
// if (u.f()) |e| return e;
// if (u.g()) |e| warn("{}\n", e);
warn("done\n");
}
```
Answers:
username_0: There's a related problem that causes invalid LLVM IR to be generated, which is what I was originally trying to reproduce when I came across this; this outright crashes rather than generating bad IR though.
username_1: Fixed by 5ea79bfc4a0a1269930d98faee36a8f6cb0b0401
```
./build/a.zig:45:27: error: expected type 'void', found 'T'
if (u.f()) |e| return e;
^
./build/a.zig:25:11: note: T declared here
const T = struct{};
^
./build/a.zig:32:19: error: expected type 'T', found '[]const u8'
return str[0..1];
^
./build/a.zig:25:11: note: T declared here
const T = struct{};
^
```
Status: Issue closed
|
yisainan/web-interview | 522568926 | Title: [Coding problem] 37. Implement call or apply
Question:
username_0: Answer:
1. call
```js
Function.prototype.call2 = function(context) {
  var context = context || window; // default to the global object when falsy
  context.fn = this;               // temporarily attach the function to the context
  var args = [];
  for (var i = 1, len = arguments.length; i < len; i++) {
    args.push("arguments[" + i + "]");
  }
  // args is joined with commas inside the string, so eval spreads the
  // arguments onto the call (a classic pre-ES6 trick)
  var result = eval("context.fn(" + args + ")");
  delete context.fn;               // clean up the temporary property
  return result;
};
```
2. apply
```js
Function.prototype.apply2 = function(context, arr) {
  var context = Object(context) || window; // box the context into an object
  context.fn = this;                       // temporarily attach the function
  var result;
  if (!arr) {
    result = context.fn();                 // no args array: plain call
  } else {
    var args = [];
    for (var i = 0, len = arr.length; i < len; i++) {
      args.push("arr[" + i + "]");
    }
    // same eval trick as call2, spreading the array elements as arguments
    result = eval("context.fn(" + args + ")");
  }
  delete context.fn;                       // clean up the temporary property
  return result;
};
``` |
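A quick sanity check of the two shims (my own example, not part of the original answer):

```js
function greet(greeting, punct) {
  return greeting + ", " + this.name + punct;
}

var ctx = { name: "foo" };
console.log(greet.call2(ctx, "Hi", "!"));    // "Hi, foo!"
console.log(greet.apply2(ctx, ["Hi", "!"])); // "Hi, foo!"
```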
cto-a/policy-proposal | 718536446 | Title: On the release timing of revisions
Question:
username_0: This is about the release timing of the various revisions.
At the moment there seems to be no particular release flow, and revisions are released at arbitrary times; I think this has the following problems.
For a program, users can use every feature from the moment of release without caring about the diff, so incremental, as-needed releases can count on a certain level of acceptance. For a document, however, readers must either
1. reset what they have read so far and re-read the whole thing, or
2. if a diff is provided, read the diff and update their understanding of the document accordingly.
Forcing readers through this on every release places a heavy burden on them, and I doubt it would be accepted by politicians who read large volumes of documents every day to make decisions.
I think managing the proposal on GitHub is a very good idea, but the release flow still has room to be worked out.
Answers:
username_1: I agree; I feel the exact problem you describe exists. To solve it, I think it might be good to set a deadline up to which changes are accepted, and after that point basically make no further changes.
username_0: Since government officials will probably read it only once, there seems to be a constraint that the first externally published release has to be fairly close to perfect.
username_0: How about we close this once we have consensus on the policy and the rules are decided?
username_1: Yes, I think that would be good. Thank you for the suggestion.
username_1: I have tried to address this in the pull request below. What do you think?
https://github.com/cto-a/policy-proposal/pull/32 |
marcioj/elixir_blog | 73000833 | Title: There should be a way to list only posts of a specific user
Question:
username_0: Often, a blog reader might be interested in a specific author, but the way things currently are, the reader can't list all posts of a specific user.
Users also want to list their own posts.
Answers:
username_1: The idea of this blog is to demonstrate how to create a basic CRUD application with authentication, relationships, and also some tests. I'd rather not add a lot of pages/functionality that brings nothing new from the Elixir/Phoenix perspective.
Status: Issue closed
|