repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M)
---|---|---|
jspm/jspm-cli | 107411381 | Title: [email protected] bundle command
Question:
username_0: Hi!
I upgraded to version 0.16.6, and this command does not work now:
```
jspm bundle [source/javascript/**/*.js] client/dist/bundle-application.js --inject --no-mangle --inline-source-maps
```
After execution, the config.js file contains:
```json
"dist/bundle-application.js": [
"[object Object]"
]
```
Is it a bug or did I miss something?
Answers:
username_0: With [email protected] the bundling works well. Between 0.16.2 and 0.16.6 the systemjs/builder version was upgraded, so I think this issue belongs to systemjs/builder.
username_1: Thanks, fixed in https://github.com/systemjs/builder/commit/174b977a640f055681330abdfd8b564dfb57e78d.
username_0: @username_1 Thanks!
username_1: Released in 0.16.7.
Status: Issue closed
|
c834606877/free_comment | 298454327 | Title: Reflected XSS
Question:
username_0: It seems your software is vulnerable to Reflected XSS just as [commento](https://github.com/adtac/commento) by @adtac is.
This was previously reported under [issue #154](https://github.com/adtac/commento/issues/154) on commento.
Leaving the following comment on your demo page resulted in a valid PoC.
`[XSS POC](javascript:alert('XSS');)`
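For context on mitigation, here is a generic, hedged sketch (in Python, not tied to this project's actual codebase) of the usual fix: allowlisting URL schemes on rendered markdown links so `javascript:` targets are neutralized. The helper function and scheme list are illustrative assumptions.
```python
from urllib.parse import urlparse

# Hypothetical helper, not from this repository: only these schemes are allowed in link targets.
ALLOWED_SCHEMES = {"http", "https", "mailto"}

def safe_link_href(href: str) -> str:
    """Return href only if its scheme is allowlisted; otherwise neutralize it."""
    scheme = urlparse(href.strip()).scheme.lower()
    if scheme and scheme not in ALLOWED_SCHEMES:
        return "#"  # drops javascript:, data:, vbscript:, etc.
    return href  # relative URLs (no scheme) and allowlisted schemes pass through

assert safe_link_href("javascript:alert('XSS');") == "#"
assert safe_link_href("https://example.com/page") == "https://example.com/page"
```
|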
digital-preservation/droid | 487388470 | Title: Request to add option to collect created information
Question:
username_0: It would be useful if it was possible to optionally collect additional information about files, especially _Created_ dates.
I wouldn't expect this to be added to the default export, but an option to export created dates of files along with the other information would help in instances where it is necessary to capture the original created date for files in SharePoint that originated on network drives. SharePoint exports use the SharePoint created date and not the original file created date, which impacts the metadata being exported for archives.
Answers:
username_1: Thank you username_0, do you mean the file creation date as displayed by the operating system?
username_0: Yes the original, e.g. Word creation date, and not the "new" creation date added by, for example, SharePoint following import.
username_2: DROID is only scanning file system metadata for created/last-modified etc. ApacheTika would be a better option for what you seem to be after as it looks into embedded file metadata for a wide range of file formats.
username_3: This isn't quite what the request asks for, but I should note that DROID has only ever recorded the last modified datetime, not the file creation datetime.
This was because in the days of Java 6, it wasn't possible to get anything other than the last modified date time. Using NIO libraries in Java 7 and later, it would be possible to obtain further file system metadata, like file creation time (not necessarily the same as the embedded Word time).
Of course, this is a reasonably large change, as it requires changes to the data model (database tables, export results, filters, UI ... anything that might touch the new data).
username_0: Thanks for the responses. I need to capture the original file creation date from files and thought that would be a useful DROID feature for SharePoint users, in order to avoid the problem of files exported from SharePoint having the upload-to-SharePoint date treated as the created date.
If anyone could suggest an alternative method of extracting this metadata that would be great.
username_4: I'd echo the use of Tika if you're looking for a range of metadata options from a wide-range of formats.
If you have Linux tools available, or the Linux subsystem in recent Windows (there may also be PowerShell alternatives), commands like `stat` will work well.
```bash
$ stat tox.ini
File: tox.ini
Size: 1323 Blocks: 8 IO Block: 4096 regular file
Device: 812h/2066d Inode: 393476 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/username_4) Gid: ( 1000/username_4)
Access: 2019-08-28 09:12:44.772035199 +0200
Modify: 2019-08-28 09:12:44.772035199 +0200
Change: 2019-08-28 09:12:44.772035199 +0200
```
FITS will also provide some of this information, and that wraps a DROID, and a JHOVE, and a few other bits and pieces, but large-scale performance is difficult to find.
This probably isn't the forum, but as you are using SharePoint (are you using any records management extensions?), capturing this information can be done in a multitude of other ways. It is a tough ask for policy to require users to capture this information. I'd personally be looking to combine whatever SharePoint metadata (the record metadata) you have with the file metadata, rather than solving it (getting the creation date) at the file level alone. The file system introduces its own difficulties, which is perhaps why Java initially approached this from the standpoint of not capturing the data. See [this table of filesystem metadata comparisons](https://en.wikipedia.org/wiki/Comparison_of_file_systems#Metadata) to see where creation time simply isn't captured.
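As a hedged aside to the `stat` example above: if Python is available, a minimal standard-library sketch for reading file-system creation time looks roughly like the following. The file name is a placeholder, and the same caveats apply: on most Linux filesystems only ctime (metadata change time) is exposed, and none of this reads embedded document metadata such as Word's creation date.
```python
import os
import sys
from datetime import datetime, timezone

def creation_time(path: str) -> datetime:
    """Best-effort file creation time from file-system metadata."""
    st = os.stat(path)
    if hasattr(st, "st_birthtime"):       # macOS and some BSDs expose a true birth time
        ts = st.st_birthtime
    elif sys.platform.startswith("win"):  # on Windows, st_ctime is the creation time
        ts = st.st_ctime
    else:                                 # Linux fallback: ctime, which is NOT creation time
        ts = st.st_ctime
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(creation_time("example.docx"))  # placeholder path
```
|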
apache/pulsar | 1102708096 | Title: Flaky-test: [EndToEndMetadataTest].[testPublishConsume]
Question:
username_0: [EndToEndMetadataTest] is flaky. The [testPublishConsume] test method fails sporadically.
https://github.com/apache/pulsar/runs/4805692281?check_suite_focus=true
```
Error: Tests run: 10, Failures: 1, Errors: 0, Skipped: 9, Time elapsed: 1.027 s <<< FAILURE! - in org.apache.pulsar.broker.EndToEndMetadataTest
Error: testPublishConsume(org.apache.pulsar.broker.EndToEndMetadataTest) Time elapsed: 0.166 s <<< FAILURE!
java.lang.IllegalArgumentException: Unknown backend metadata-store
at org.apache.bookkeeper.meta.MetadataDrivers.getBookieDriver(MetadataDrivers.java:272)
at org.apache.bookkeeper.meta.MetadataDrivers.getBookieDriver(MetadataDrivers.java:295)
at org.apache.bookkeeper.bookie.Bookie.instantiateMetadataDriver(Bookie.java:1141)
at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:730)
at org.apache.bookkeeper.proto.BookieServer.newBookie(BookieServer.java:152)
at org.apache.bookkeeper.proto.BookieServer.<init>(BookieServer.java:120)
at org.apache.pulsar.metadata.bookkeeper.BKCluster.startBookie(BKCluster.java:237)
at org.apache.pulsar.metadata.bookkeeper.BKCluster.startNewBookie(BKCluster.java:222)
at org.apache.pulsar.metadata.bookkeeper.BKCluster.startBKCluster(BKCluster.java:128)
at org.apache.pulsar.metadata.bookkeeper.BKCluster.<init>(BKCluster.java:80)
at org.apache.pulsar.broker.EmbeddedPulsarCluster.<init>(EmbeddedPulsarCluster.java:59)
at org.apache.pulsar.broker.EmbeddedPulsarCluster.<init>(EmbeddedPulsarCluster.java:33)
at org.apache.pulsar.broker.EmbeddedPulsarCluster$EmbeddedPulsarClusterBuilder.build(EmbeddedPulsarCluster.java:54)
at org.apache.pulsar.broker.EndToEndMetadataTest.testPublishConsume(EndToEndMetadataTest.java:43)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132)
at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:45)
at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:73)
at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```
Answers:
username_1: Similar problem:
https://github.com/apache/pulsar/runs/4811252051?check_suite_focus=true#step:10:576
username_1: This seems to be related to #12770 .
If the org.apache.bookkeeper.meta.MetadataDrivers class gets loaded before org.apache.pulsar.metadata.bookkeeper.BKCluster constructor is called, things will fail. @merlimat Could you check this problem?
https://github.com/apache/pulsar/blob/12ca27fa800c72b0c9a6e7f2583a9daa2497add9/pulsar-metadata/src/main/java/org/apache/pulsar/metadata/bookkeeper/BKCluster.java#L72-L81
https://github.com/apache/bookkeeper/blob/bc02d8c487a809fa58d75c477d9e2d5c7dedccec/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/MetadataDrivers.java#L95-L106
username_1: The workaround would be to set the system properties for all tests in pulsar-broker using the pom.xml. I can create a PR for that since it shouldn't be harmful.
username_0: @username_1 Can we add the properties only for `EndToEndMetadataTest`? It looks like the other tests will not test different drivers.
username_2: Just a temporary solution, I think.
username_1: you can see in BK code that it's not harmful:
https://github.com/apache/bookkeeper/blob/bc02d8c487a809fa58d75c477d9e2d5c7dedccec/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/MetadataDrivers.java#L95-L106
Status: Issue closed
username_1: You will need to close and re-open any PRs where the `CI - Unit - Brokers - Broker Group / pulsar-ci-test (group2)` job is failing to get the fix for this issue, which is provided by PR #13754.
Closing and re-opening the PR will cause the PR to create a new merge commit internally with the latest master branch changes. This is an alternative to rebasing the PR or making an explicit merge to master in the PR commits.
Status: Issue closed
|
android-js/androidjs | 509392476 | Title: Notification not showing
Question:
username_0: Sorry, am new to this.
Currently trying to get a notification to work following the documentation [Notification API](https://android-js.github.io/docs/notification_api.html), but no notification is being shown.
```
<script>
window.onload = function() {
app.notification.initBig("Update", ["Well well well", "hello there"]);
}
</script>
```
Please do advise on what I am missing.
Thanks in advance.
Answers:
username_1: You initialized the notification but did not call the `show` function to actually show it.
username_1: Hey, we have created a slack channel for support & discussion: [join here](https://join.slack.com/t/androidjs/shared_invite/zt-<KEY>) |
DarklightGames/DarkestHour | 795456596 | Title: SU-76 Collision Issues With Gunner Position
Question:
username_0: SU-76 has noticeable collision issues with its exposed gunner position. Unlike the Marder, bullets bounce right off as if the collision is not matching the model itself, resulting in the gunner being invulnerable. Additionally, if the gunner turns out and attempts to place markers with binoculars, they place on his own position.
Answers:
username_1: Confirmed. It appears to be caused by an extraneous hitbox that is attached to the commander bone.

Status: Issue closed
|
Azure-Samples/cognitive-services-speech-sdk | 788245445 | Title: Transcription stops after 2 second silence in the middle of the file
Question:
username_0: Hello,
I am using the Speech Cognitive Service and passing an audio file to get the transcription file along with the Pronunciation score. But, I am encountering an issue where the transcription stops whenever there is a short silence.
Is there a config that allows us to process the entire file?
ID in the json: 0ba9da3ec3a34e14be134dde289c8de4
Answers:
username_1: @username_0 Thank you for logging this issue. Would you mind sharing your code and the audio file you're using?
username_2: @username_0 we're happy to help diagnose but will need the additional information @username_1 requested. It's definitely possible to do what you're looking for and the fix is likely either changing a "RecognizeOnce" call to "StartContinuous" call or modifying a configuration property--that's where having a code snippet would help us advise.
If we don't hear back, we'll close the issue as general procedure. Thanks!
username_0: Thank you Travis! Yes, can you help me change the configuration of "RecognizeOnce" to use a larger duration? We were successfully able to implement the "StartContinuous" method, but it gave us the pronunciation assessment for each individual chunk. And we are looking to have the score for the entire file at once, instead of us having additional logic (which would be a non-standard weighted average of the individual chunks).
Hope that makes sense.
Audio file:
https://drive.google.com/file/d/18JUyjgPxyzV6yyan3Ahz49s80ktoG4iK/view?usp=sharing
Below is my Python code:
```python
audio_input = speechsdk.AudioConfig(filename=folder + file_name)
pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Actual Text spoken in the audio file",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    audio_config=audio_input)
pronunciation_assessment_config.apply_to(speech_recognizer)
result = speech_recognizer.recognize_once()
```
Best,
Sarthak
username_3: @username_0 Hi, actually the setting that would enable recognize_once() to process the whole input at once despite moments of silence is not configurable in the Speech SDK anymore - currently it only exists in the service internal configuration. (It used to be configurable by the SDK in the past which is what Travis was thinking about.) So, for the time being you would need to use continuous recognition with some post-processing of results as you mentioned.
However, we do have a work item on the backlog to make this configurable again, tentatively coming this spring. We will update this issue when the work is done and the release schedule can be confirmed.
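For anyone following the continuous-recognition workaround mentioned above, a minimal, hedged Python sketch is below. It mirrors the API names used in the snippet earlier in this thread; the key, region, file name, and reference text are placeholders, and the per-chunk score aggregation step is deliberately left out since, as discussed, it is non-trivial.
```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")  # placeholders
audio_input = speechsdk.AudioConfig(filename="audio.wav")  # placeholder file
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input)

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Actual text spoken in the audio file",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
pron_config.apply_to(recognizer)

chunk_results = []
done = False

def on_recognized(evt):
    # One result per recognized chunk; the scores still need to be combined afterwards.
    chunk_results.append(speechsdk.PronunciationAssessmentResult(evt.result))

def on_stopped(evt):
    global done
    done = True

recognizer.recognized.connect(on_recognized)
recognizer.session_stopped.connect(on_stopped)
recognizer.canceled.connect(on_stopped)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```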
username_3: I just realized that even with the existing work item that would help with occasional silence in input audio, there is also the setting of max duration which is currently 15 seconds for recognize_once() - much longer with continuous recognition - and again, it's service internal only. Your example audio file is 20s so the result would cover only 75% of the file. I have added another work item to the backlog to make the max duration configurable.
Status: Issue closed
username_3: Closing the issue since there are work items on the backlog and a workaround has been suggested for now. Please create a new issue if you need more support. |
hedzr/android-file-chooser | 507119733 | Title: Can't list SD card content at all
Question:
username_0: I ran it on a virtual device with API 26 and the SD card content is not listed. Please help.
Answers:
username_1: It seems our `FileUtil.getStoragePath()` is broken; I need more time to check it and other things.
username_1: Sorry for the late reply.
About this issue, there are three points:
1. `FileUtil.getStoragePath()` broken on some devices (virtual or physical). see also https://stackoverflow.com/questions/14796931/illegalargumentexception-in-statfs-in-webviewcore-internal-thread and more...
2. In newer Android SDKs, the external storage volume directory is non-readable and non-listable on some devices (most emulators).
3. The afc library v1.2.0 has targetSDK >= 28, see also [android-9.0-changes-28#per-app-selinux](https://developer.android.com/about/versions/pie/android-9.0-changes-28#per-app-selinux); this might disallow an app from accessing folders outside its scope.
It might be useful to downgrade to android-file-chooser v1.1.x.
This is non
username_2: Any better solution on this? |
EmmaRamirez/Clickopolis | 210615999 | Title: design: new design choices for refactor [resources]
Question:
username_0: Questions to answer:
- Can we achieve less clutter in our UI?
- Can we create better customization options for the user?
Focus on `Resources`, as a prototype for how the new design should work.
- What's our UX flow in each subject? |
pandas-dev/pandas | 223103480 | Title: Unary plus missing for Series and DataFrame
Question:
username_0: #### Code Sample
```
In [1]: import numpy as np
In [2]: import pandas as pd
In [3]: pd.__version__
Out[3]: '0.19.2.post+ts5'
In [4]: a = np.array([-1,0,1]) # As expected
In [5]: +a
Out[5]: array([-1, 0, 1])
In [6]: s = pd.Series(data=a)
In [7]: +s # Unexpected
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-d1319861d847> in <module>()
----> 1 +s
TypeError: bad operand type for unary +: 'Series'
In [8]: d = pd.DataFrame(a)
In [9]: +d # Unexpected
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-e4fd8df22471> in <module>()
----> 1 +d
TypeError: bad operand type for unary +: 'DataFrame'
In [10]: -a
Out[10]: array([ 1, 0, -1])
In [11]: -s
Out[11]:
0 1
1 0
2 -1
dtype: int64
In [12]: -d
Out[12]:
0
0 1
1 0
2 -1
```
#### Problem description
Unary plus is well-defined on built-in types and Numpy ndarrays. It is not defined for Series or DataFrame. It's not ever necessary, but the asymmetry with the present unary minus is unexpected. It's also occasionally nice to use it to line up a positive expression against a negative expression. Likely adding methods implemented as an identity is sufficient.
#### Expected Output
```
In [11]: -s
[Truncated]
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.5.1
openpyxl: 2.4.5
xlrd: 1.0.0
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.5.3
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.1.9
pymysql: None
psycopg2: None
jinja2: 2.9.4
boto: None
pandas_datareader: None
</details>
Answers:
username_0: Obviously low priority.
username_1: sure this is pretty straightforward to add actually
see how we implement ``__neg__`` https://github.com/pandas-dev/pandas/blob/master/pandas/core/generic.py#L862
a pull request would be great! (with tests!)
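For illustration only (this is a toy sketch, not pandas' actual implementation), defining `__pos__` as an identity-style counterpart to `__neg__` is essentially all that unary-plus support requires:
```python
import numpy as np

class ToySeries:
    """Minimal stand-in for a Series, just to show the unary operator hooks."""
    def __init__(self, values):
        self.values = np.asarray(values)

    def __neg__(self):
        return ToySeries(-self.values)

    def __pos__(self):
        # Unary plus as an identity operation over the underlying values.
        return ToySeries(+self.values)

s = ToySeries([-1, 0, 1])
print((+s).values)  # [-1  0  1]
print((-s).values)  # [ 1  0 -1]
```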
username_0: Thank you @wuisawesome and @username_1.
Status: Issue closed
|
rinvex/laravel-attributes | 432409266 | Title: Getting Illuminate/Database/QueryException with message 'SQLSTATE[42S22]: Column not found: 1054 Unknown column' on new installation
Question:
username_0: Also the special relation `eav` is not working. `$branch->load('eav');` throws `RelationNotFoundException` exception.
Anything I am doing wrong? Any help would be appreciated.
Answers:
username_0: This is my bad. Apparently the full path for model should be `App\CompanyBranch` instead of `\App\CompanyBranch`. It's working now.
Status: Issue closed
username_1: Good to know you figured it out :) |
fulcrologic/fulcro | 307843850 | Title: Alpha DOM: Make props required to allow for expressions as props
Question:
username_0: After thinking about this some more, I have the following comments:
It seems like, instead of the required props, that the “punt to runtime” is the best option to me:
1. If you add props (via map or symbol) you get compiler interpretation and best speed…if you require them via spec, it just annoys users that are interested in the sugar (“do what I mean, dammit!“)
2. The runtime overhead of props detection is small, assuming that `element?` detection works on things returned from component factories…might need `component?` also?
Cases:
`(dom/div :.a (expr))` - has to wrap expr in detection before calling react, since it is not clear if that is props or UI element from expr. This could be a relatively common case, but the overhead to check the return value of `expr` is relatively low. Combining the props (if that is what they are) adds a bit more overhead, but that is an expected behavior, since you asked for it. If expr is element/component, then only the check causes overhead, and that is relatively cheap.
`(dom/div {} (expr))` - completely supported at macro time
`(dom/div :.a {:data-x 1} (expr))` - full macro support
`(dom/div #js {} (expr))` - full macro
One final thing…if we want to be *really* picky about performance, we should not be calling `macro-create-element*` with function-call overhead, but should instead be inlining it…that could be done by making a `macro-create-element*` macro, come to think of it.
Answers:
username_0: I'd say copy the devcards you've modified in the PR, revert the old cards to their old state (so that we have the original verification of optional props), and add new cards (can be same ns) that hit expression cases hard.
username_0: Fixed by PR 176
Status: Issue closed
|
srmklive/laravel-paypal | 1177988924 | Title: Not Getting Transaction ID
Question:
username_0: I am trying to do recurring payments with PayPal. As mentioned in the documentation, I am using the
**$provider->createRecurringPaymentsProfile($data, $token)** method.
But I am not getting a transaction ID in the response. So after that, is there any other method that I need to call?
Please check my response
(
[PROFILEID] => I-VSJW00L5E02B
[PROFILESTATUS] => ActiveProfile
[TIMESTAMP] => 2022-03-23T11:17:52Z
[CORRELATIONID] => 4ca0ac15072a1
[ACK] => Success
[VERSION] => 123
[BUILD] => 56068150
) |
bit-bots/humanoid_league_msgs | 421979378 | Title: Inconsistencies in variable names
Question:
username_0: There are inconsistencies in variable names in some of the messages:
Position of a BallRelative: ball_relative
Position of an ObstacleRelative: position
I don't know whether we should change this or not. It has the potential to break a LOT if we change it. But it isn't pretty either...
Answers:
username_1: Maybe change this after this year's RoboCup?
username_1: So now it is after RoboCup, so let's discuss what we want to change. I have found the following points:
1. Confidence for detected balls, etc. is given as a float value. Do we want to change this to a covariance matrix? It can still encode the same information if the float is put in as an identity matrix.
2. Position variables should only be named "position" (e.g. in BallRelative)
username_1: https://github.com/bit-bots/humanoid_league_msgs/pull/14
Status: Issue closed
|
AllTheMods/ATM-3 | 479421953 | Title: Mods on Twitch don't work after I tried to change RAM used in Minecraft
Question:
username_0: I tried to install All the Mods 3 and the first time I initiated it, it said that more RAM was needed. I found the way to change it in Twitch, but now when I try to play any mod, the game doesn't even start.
Do you have any idea what I can try?
I've already tried to uninstall and install again, but I have the same problem with the RAM and I have to change it again.
Answers:
username_1: Sounds like you've done it wrong, but I can't really be sure without some specific information. Follow this guide first and if you still have a problem, tell us specifically what is happening. https://www.reddit.com/r/allthemods/comments/968r4k/how_to_fix_modpack_wont_load_fps/
Status: Issue closed
|
linaradwan/LISA-kun | 337360651 | Title: Ability to download Manga
Question:
username_0: will be doing this with JSzip
Status: Issue closed
Answers:
username_0: Downloading manga is not possible due to a cross-origin security issue: the API is being requested from the wrong origin/port, so no 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. |
facebook/flow | 320870938 | Title: AppVeyor build is failing for all commits, even old ones that succeeded
Question:
username_0: The AppVeyor build is failing. Worse, it fails for commits that have previously passed. (Is this indicative of a config or version change somewhere?)
The error that seems to be consistent across the fails is this:
Error: Files C:/cygwin64/home/appveyor/.opam/4.05.0+mingw64c/lib/ocaml\unix.cmxa
and C:/cygwin64/home/appveyor/.opam/4.05.0+mingw64c/lib/ocaml\unix.cmxa
both define a module named Unix
Command exited with code 2.
make: *** [Makefile:96: ../../_build/src/parser/test/run_tests.native] Error 10
make: Leaving directory '/cygdrive/c/projects/flow/src/parser'
Command exited with code 2
For an example of a commit that should have succeeded, see #6249
Answers:
username_1: Very cryptic error: is that not the same path?
username_0: Hmmm... it does seem to be...
Check out that backslash at the end - would you call that normal?
username_1: Not really, but interestingly it does seem to find the module (if this were an invalid path error I'd assume it would fail), though it is worth thinking about.
I'm very much not acquainted with the build process / AppVeyor unfortunately.
username_1: PRs are still reviewed in spite of the AppVeyor failures. About the PR itself, I'm unsure why it got deprecated, and that discussion happened before I got involved, so unfortunately I'm not sure how to help. My guess is if it was deprecated it was for a reason (though I don't know what that reason is).
Would your internal tooling be too difficult to change? I would largely suggest it moves towards `flow version --json` so it doesn't need to rely on parsing some natural language text 😅
username_0: Ah, it's not really a problem - everything currently works, but it just sucks having to make an exception for flow when most other CLI tools use `--version` (doesn't need to parse, just display to the user and return an exit code of 0)
Issue #4330 talks about undeprecating this - I think initially it was deprecated without much input from other developers except one maintainer - or at least it looks that way from the git logs.
username_2: the windows builds (now on Circle) seem stable recently
Status: Issue closed
|
bertramdev/asset-pipeline | 168900227 | Title: Plug-in repo confusion
Question:
username_0: It looks like the actual repo for the grails-asset-pipeline plug-in is this repo:
#A https://github.com/bertramdev/asset-pipeline/tree/master/asset-pipeline-grails
even if it's called only 'asset-pipeline' in the BuildConfig.groovy file, and there
are even some new releases of the plug-in in this repo.
The official page of the plug-in https://grails.org/plugin/asset-pipeline however links to:
#B https://github.com/bertramdev/grails-asset-pipeline
with a different issue tracker, different bugs and issues, and no activity since v2.9.1.
Running `grails list-plugin-updates` will not list the latest releases from #A, only from #B :(.
Could you please clarify?
( This problem seems to be the case with quite a few Grails plug-ins :(. )
Thank you.
Answers:
username_1: These are all for Grails 2 plugins since the structure is different. They are simply wrappers around the binary plugins.
username_0: Sorry but I'm even more confused now :).
username_1: Grails 2.x had a different plugin structure…
There are repositories with BuildConfigs and grails 2 project structures (where the original plugins used to exist actually) that wrap the asset-pipeline core plugins now. You technically don’t even need them for most grails 2 stuff because the dependencies{} block can also take the main asset-pipeline plugins..
Asset Pipeline Plugins are framework agnostic now and those are the ones you see primarily in the core project in the gradle multi project build… Grails 3 had no need for plugin wrapping so there is no “grails 3” less-asset-pipeline plugin for example… The standard “less-asset-pipeline” plugin in the main project works sufficiently.
There are base plugins for each framework, like asset-pipeline-grails for Grails 3 and https://github.com/bertramdev/grails-asset-pipeline for Grails 2, as noted by its README.
There are also servlet, spring boot, and rat pack modules
username_1: The repository you pointed to is an older grails3 repo from when grails3 was first being built... I have just updated its README as well
http://bertram.d.pr/1glh0
Status: Issue closed
username_0: For Grails 2, should I use #A or #B ?
Both repos support Grails 2, but #A has more recent releases, which are not picked up, however, by the `grails list-plugin-updates` command (is this a bug?), and the official Grails 2 plug-in page links to #B.
They can't all be correct at the same time.
username_1: For Grails 2, follow the Grails plugin repository; that would be https://github.com/bertramdev/grails-asset-pipeline, which wraps plugins within https://github.com/bertramdev/asset-pipeline. It was last updated for 2.9.1; I have not updated it to the latest 2.10.x yet as that came out today, so I will look into updating it. At some point Grails 2 updates will cease and become stability releases only.
username_1: you will see this note in the README: http://bertram.d.pr/143bA
username_0: There were quite a few releases to repo #A since May, hence my confusion:

username_1: Yep, just haven't updated it. I can, though; I think most things released in the 2.9.x line were minor for Grails 2.
|
MicrosoftDocs/cloud-adoption-framework | 1158928227 | Title: Clarification
Question:
username_0: Can we define Master Data with a sentence or two here at the beginning of the article? We've touched on it briefly in other governance articles, but not enough for me to have a clear definition without an outside reference.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 48244c82-2aa3-3d5c-5238-4e80466e757f
* Version Independent ID: 4eccc5b1-2675-de0e-b211-6fd480203696
* Content: [Manage master data - Cloud Adoption Framework](https://review.docs.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/data-management/govern-master-data?branch=collab-data-mesh-v1)
* Content Source: [docs/scenarios/data-management/govern-master-data.md](https://github.com/MicrosoftDocs/cloud-adoption-framework/blob/main/docs/scenarios/data-management/govern-master-data.md)
* Service: **cloud-adoption-framework**
* Sub-service: **scenario**
* GitHub Login: @mboswell
* Microsoft Alias: **mboswell**
Answers:
username_1: commit ee54e7f15576d30f33982ea6e51aa2a5b6b88045 (HEAD -> collab-data-mesh-v1, origin/collab-data-mesh-v1)
Author: <NAME> <<EMAIL>>
Date: Fri Mar 4 15:27:35 2022 +0100
Fixed issue #788
username_1: https://github.com/MicrosoftDocs/cloud-adoption-framework-pr/pull/3070
username_1: #please-close
Status: Issue closed
|
humanmade/altis-local-server | 629993233 | Title: Remove always restart directive on Cavalcade container
Question:
username_0: There's a problem that was previously worked around by setting the Cavalcade container to always restart.
This is because it would exit if WP wasn't installed yet, and WP can't be installed first because the containers must all be started before WP can be installed.
See #81 for background.
The changes in Cavalcade v2 should help to mitigate this issue so we need to test that and if successful remove the restart on failure configuration.
@username_1 does this more or less cover the issue you noticed with the Cavalcade container not stopping? I thought restart on failure should allow it to be stopped successfully.
Answers:
username_1: @username_0 if I manually stop the container, it does not seem to come back. But what I'm experiencing is that, after running local-server once, I then continued to see the cavalcade container spin up independently of any other docker commands or services, and would fail and continually restart. Because this interacts with my host's network configuration, it causes transient timeouts and intermittent but persistent ERR_NETWORK_RESET issues when trying to use the host OS until I manually `docker container stop {id}`.
username_0: Alright, removing all `restart` directives from the docker compose file in an upcoming PR
username_0: @username_1 for reference here did you stop local server using `composer server stop`?
username_0: The issue was not explicitly stopping the containers using the command noted above, combined with the use of `restart: on-failure`. This meant that even if the Docker daemon was restarted, after a reboot of the computer for example, the containers would start back up even though the rest of the services would not.
Status: Issue closed
username_0: Tagged 3.0.5 with this fix https://github.com/humanmade/altis-local-server/releases/tag/3.0.5 |
aws/aws-sdk-js-v3 | 545213085 | Title: error when building @aws-sdk/client-api-gateway
Question:
username_0: **Describe the bug**
Many errors are thrown when building @aws-sdk/client-api-gateway
**To Reproduce (observed behavior)**
Run the following command in `smithy-codegen`:
```console
$ ./node_modules/.bin/lerna run pretest --scope '@aws-sdk/client-api-gateway' --include-dependencies
...
...
lerna ERR! yarn run pretest exited 2 in '@aws-sdk/client-api-gateway'
lerna ERR! yarn run pretest stdout:
yarn run v1.17.3
$ tsc
models/index.ts(9788,33): error TS1131: Property or signature expected.
models/index.ts(9788,97): error TS1127: Invalid character.
models/index.ts(9788,98): error TS1109: Expression expected.
models/index.ts(9788,100): error TS1161: Unterminated regular expression literal.
models/index.ts(9789,7): error TS1110: Type expected.
models/index.ts(9789,8): error TS1161: Unterminated regular expression literal.
models/index.ts(9790,6): error TS1161: Unterminated regular expression literal.
models/index.ts(9791,10): error TS1109: Expression expected.
models/index.ts(9792,1): error TS1128: Declaration or statement expected.
models/index.ts(11875,12): error TS1127: Invalid character.
models/index.ts(11875,13): error TS1131: Property or signature expected.
models/index.ts(11876,4): error TS1109: Expression expected.
models/index.ts(11876,8): error TS1110: Type expected.
models/index.ts(11876,9): error TS1161: Unterminated regular expression literal.
models/index.ts(11877,7): error TS1109: Expression expected.
models/index.ts(11877,11): error TS1005: '(' expected.
models/index.ts(11877,22): error TS1005: ';' expected.
models/index.ts(11877,26): error TS1005: ';' expected.
models/index.ts(11877,41): error TS1005: ')' expected.
models/index.ts(11878,4): error TS1003: Identifier expected.
models/index.ts(11878,9): error TS1110: Type expected.
models/index.ts(11878,16): error TS1005: ';' expected.
models/index.ts(11878,24): error TS1005: ';' expected.
models/index.ts(11878,36): error TS1005: ';' expected.
models/index.ts(11878,166): error TS1161: Unterminated regular expression literal.
models/index.ts(11879,7): error TS1110: Type expected.
models/index.ts(11879,8): error TS1161: Unterminated regular expression literal.
models/index.ts(11880,6): error TS1161: Unterminated regular expression literal.
models/index.ts(11881,18): error TS1109: Expression expected.
models/index.ts(11881,26): error TS1005: ']' expected.
models/index.ts(11881,34): error TS1005: ',' expected.
models/index.ts(11881,35): error TS1136: Property assignment expected.
models/index.ts(11888,13): error TS1109: Expression expected.
models/index.ts(11895,8): error TS1109: Expression expected.
models/index.ts(11895,16): error TS1005: ']' expected.
models/index.ts(11895,24): error TS1005: ',' expected.
models/index.ts(11895,25): error TS1136: Property assignment expected.
models/index.ts(11906,18): error TS1109: Expression expected.
models/index.ts(11922,13): error TS1109: Expression expected.
models/index.ts(11922,21): error TS1005: ']' expected.
models/index.ts(11922,29): error TS1005: ',' expected.
models/index.ts(11922,30): error TS1136: Property assignment expected.
models/index.ts(11933,13): error TS1109: Expression expected.
models/index.ts(11934,1): error TS1128: Declaration or statement expected.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
lerna ERR! yarn run pretest stderr:
error Command failed with exit code 2.
lerna ERR! yarn run pretest exited 2 in '@aws-sdk/client-api-gateway'
```
**Expected behavior**
The command runs without any error
Answers:
username_0: Fixed by https://github.com/aws/aws-sdk-js-v3/pull/641
Status: Issue closed
|
django-haystack/django-haystack | 66696642 | Title: "Conflicting option string" error when rebuilding index with Django 1.7 and Django Haystack 2.4.0
Question:
username_0: I was experiencing the issue described here:
https://github.com/django-haystack/django-haystack/issues/1097
to work around it, I updated my haystack version to the 2.4.0 pre-release.
Now, when I attempt to rebuild the index, I get a conflict error:
(venv)➜ example-project git:(master) python manage.py rebuild_index
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/Users/nina/Documents/Sites/example-project/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/Users/nina/Documents/Sites/example-project/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/nina/Documents/Sites/example-project/venv/lib/python2.7/site-packages/django/core/management/base.py", line 284, in run_from_argv
parser = self.create_parser(argv[0], argv[1])
File "/Users/nina/Documents/Sites/example-project/venv/lib/python2.7/site-packages/django/core/management/base.py", line 265, in create_parser
option_list=self.option_list)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 1219, in __init__
add_help=add_help_option)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 1261, in _populate_option_list
self.add_options(option_list)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 1039, in add_options
self.add_option(option)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 1020, in add_option
self._check_conflict(option)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 995, in _check_conflict
option)
optparse.OptionConflictError: option --nocommit: conflicting option string(s): --nocommit
My library versions:
Python 2.7.5
Django==1.7.2
-e git+https://github.com/django-haystack/django-haystack.git@866b24f3f7769b569799c063ee6<PASSWORD>#egg=django_haystack-v2.4.0
Answers:
username_0: I updated to Django==1.8.2 and the latest haystack version (-e git+https://github.com/django-haystack/django-haystack.git@0576c8093caf59bebedce75e0d711aacfd36f03f#egg=django_haystack-origin_HEAD) and this works now.
Status: Issue closed
|
rapidsai/cudf | 513663422 | Title: Pls support corresponding API in cuDF as pandas.date_range and DateOffset [FEA]
Question:
username_0: **Is your feature request related to a problem? Please describe.**
I wish I could use cuDF to do date operations like pandas.date_range (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html) and pandas.tseries.offsets.DateOffset (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.tseries.offsets.DateOffset.html).
Answers:
username_1: DateOffset has been implemented. Updating this issue to request date_range.
```python
import cudf
import pandas as pd
d = cudf.from_pandas(pd.date_range(start="1/1/2021", periods=2))
print(d)
print(d + cudf.DateOffset(years=1))
DatetimeIndex(['2021-01-01', '2021-01-02'], dtype='datetime64[ns]')
DatetimeIndex(['2022-01-01', '2022-01-02'], dtype='datetime64[ns]')
``` |
Azure/azure-cli | 1031783262 | Title: az functionapp update doesn't support https-only true
Question:
username_0: please use 'az functionapp update' to update this function app
```
😭Manually using the portal and clicking for hours
We have fixed this in our ARM template for future deployments but it doesn't solve for running apps that don't require updates
Answers:
username_1: route to service team |
DDMAL/Neon | 1161678205 | Title: Request for error message when trying to make a ligature out of something other than two puncta
Question:
username_0: In Neon, the `toggle ligature` function only works if the two selected glyphs are puncta. This is fine and great!
It follows that if you accidentally try to toggle, say, a reversed virga with a punctum, it won't work and nothing will happen. However, a message will appear in the top-left corner saying that the ligature has been toggled. This is slightly confusing. It would be easier to figure out the mistake, I think, if the message said the toggling had failed.
<img width="1191" alt="Ligature not, in fact, toggled 272v" src="https://user-images.githubusercontent.com/83373378/157080607-32f54722-b7da-4e06-98da-22c1f5c771b0.png">
This is a super non-urgent issue, because it's just a little user-friendliness enhancement.
Answers:
username_1: Hi @username_0 , I've pushed the changes for this issue. Could you please test it? Thanks for pointing out this issue!
username_2: alternatively, it wouldn't be a bad thing to allow reverse puncta to be in a ligature...
username_1: Oops, my bad. I will work on this tomorrow! Just to make sure, what kind of `nc` should be successfully ligated, and what should not? Do we need the `toggle ligature` option for any combinations of two `nc`s?
username_2: @username_1 I like that solution. So any two adjacent descending neumes (of any shape) could have "toggle ligature", and then they would turn into a punctum with ligated=yes in the file, correct?
We don't NEED every combination to be a ligature (the user can always change things to be a punctum themselves, after all.) But I don't think it would be a bad thing either. And reverse-virga + punctum would be a very nice option to be able to put a ligature in, because it is sort of the underlying shape of the neumes.
username_2: basically we want one of the following things:
1) If user selects two puncta, they can "toggle ligature" and it works; nothing else gives this option. [[this is the original I think, when there were fewer note shapes]]
OR
2) If user selects two (adjacent) ncs, they can press "toggle ligature" but it only works if they are puncta; otherwise it returns a "ligature failed" and the neumes stay put. [[the option suggested by @username_0]]
OR
3) If user selects two adjacent ncs of any shape, they can toggle ligature, and these neumes always turn into two puncta with ligature; "ligature failed" is reserved for other issues, like choosing ncs that aren't adjacent.
My personal preference is option (3) but any one of them could work I think.
username_1: Hi @username_2 and @username_0 , I've implemented option (3). It turns out frontend Neon could not successfully distinguish the shapes of ligature, virga, or inclinatum. Therefore, I moved the `isLigature` check to verovio. Issue #641 should also be fixed by now. Please let me know if anything unexpected happened.
username_0: I tested the fix, it works very well. I couldn't find any strange behaviours! Closing the issue now.
Status: Issue closed
|
bokeh/bokeh | 222274384 | Title: Error with ColumnDataSource in 0.12.5
Question:
username_0: Creating a ColumnDataSource from a pandas dataframe in 0.12.5 causes an error that was not present in 0.12.4. It appears that the difference is that 0.12.5 is adding the index to the column. Error is below. Note the 0, 1, and 2 before the actual values. These are not present in 0.12.4 and I cannot find documentation for why they were added or how to get around them.
```
ValueError: Unrecognized range input: '0 $100
1 $200
2 $400
Name: bin, dtype: object'
```
Code used to replicate the error:
```
def make_bar(df, x, y, title):
""" Creates a bar chart in bokeh"""
p = figure(title=title,
x_range= df.data[x], webgl=True,
plot_width=400,
plot_height=400)
p.vbar(x= x,
width=0.5,
bottom=0,
top=y, color="darkgreen",
source = df)
return p
df = pd.DataFrame([
{"bin": '$100', 'count': 100},
{"bin": '$200', 'count': 200},
{"bin": '$400', 'count': 400}
])
src = ColumnDataSource(data=df) # convert to bokeh column source
p = make_bar(src, 'bin', 'count', 'Title', format='None')
```
Answers:
username_1: Bokeh CDS has been adding an index to converted data frames for a long time (a few years at least). Nothing about this changed with `0.12.5` (though we are considering making this behaviour configurable) so I'm not sure why you would only see an issue now. Can you update the code above to provide a complete example that can be run as-is for further investigation?
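For anyone hitting the same `Unrecognized range input` error, a minimal hedged sketch of one way around it, under the assumption that a categorical x-axis is the goal, is to pass the factors to `x_range` as a plain list of strings rather than a pandas Series (whose repr includes the index):
```python
import pandas as pd
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, show

df = pd.DataFrame({"bin": ["$100", "$200", "$400"], "count": [100, 200, 400]})
src = ColumnDataSource(df)

# Pass a plain list of category labels, not the Series itself.
p = figure(x_range=list(df["bin"]), plot_width=400, plot_height=400, title="Title")
p.vbar(x="bin", top="count", width=0.5, bottom=0, source=src)
show(p)
```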
username_0: I have fixed my version to 0.12.4 because it does not give me the issue.
What would make an example more complete? I'm new to reporting issues.
username_1: No problem, the report is appreciated. In this case, there are missing imports, and it's not clear whether additional code to `show` `output_file`, etc is needed.
username_1: Closing because there was no follow up to provide an MRE and no other users have reported any similar issue.
Status: Issue closed
|
liamdamato1997/acunetix360 | 974711824 | Title: Vulnerability - Password Transmitted over HTTP
Question:
username_0: **URL:** http://php.testsparker.com/auth/login.php
**Name:** Password Transmitted over HTTP
**Severity:** High
**Confirmed:** True
**Input Name :**
password
**Form target action :**
http://php.testsparker.com/auth/control.php
**Page Type :**
Login
You can see vulnerability details from the link below:
https://online.acunetix360.com/issues/detail/ac4cfa08f25745750f23ad89021bc5dc |
MicrosoftDocs/sql-docs | 562025147 | Title: Unable to install msodbcsql17 on Ubuntu 14
Question:
username_0: Installing this package on Ubuntu 14 gives:
```
msodbcsql17 : Depends: libc6 (>= 2.21) but 2.19-0ubuntu6.15 is to be installed
Depends: libstdc++6 (>= 4.9) but 4.8.4-2ubuntu1~14.04.4 is to be installed
```
I don't believe that version of `libc6` is available.
Answers:
username_1: @username_0 -- Dave, thank you for your feedback. Do you have a particular docs.microsoft.com article in mind?
username_0: Whoops, sorry, it is this one:
https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver15
username_1: @username_0 -- Dave, thank you for clarifying.
@MightyPen -- Gene, please look into this issue #4111.
username_2: @username_0 Apologies for the long-delayed response. In looking at the article you reference, I think the best person to investigate is @username_3. We should have routed this to him much sooner, and I sincerely apologize. Were you able to find a solution/work-around? Is this currently blocking you still?
@username_3 can you please take a look at this reported problem? #reassign:username_3
username_3: @username_0 This is more of a support issue. Installation generally works fine for Ubuntu 14.04. I've only seen this error in the context of other errors. Can you post the rest of the context around your `apt-get` command? My first guess would be you have held broken packages or dependency conflicts on your local machine which need to be resolved.
username_0: Well I fixed it by pinning the version to one just before the release that broke my install.
username_0: Has there been another release since then that maybe fixed it?
username_0: I can test later when I'm back at keyboard.
username_3: @username_0 Update: We noticed an issue, too, and the 17.5.x packages will not install on 14.04. Pinning at 17.4.2 should work. We are going to pull the 17.5.x packages.
Thanks,
David
username_0: @username_3 gotcha, well glad to know it wasn't just me 😄 Thanks!
username_3: The 17.5 packages have been removed and this issue should be resolved.
Note: Ubuntu 14.04 Standard Support ended last year and there will be no new Microsoft ODBC Driver for SQL Server releases for that Ubuntu version. So 17.4.2 is the latest version it will ever have.
username_3: #please-close
Status: Issue closed
username_0: Understood, thanks @username_3 . |
zooniverse/Panoptes-Front-End | 367717294 | Title: Load new subject doesn't reset annotations after visiting Talk
Question:
username_0: ## Expected behavior
On a project with summaries enabled, I should be able to comment on a subject via the Talk link under the classification summary, then start a new classification by following the Classify link.
## Current behavior
_Please include any error messages from the browser console and/or screenshots_
The previous annotations are saved and displayed in the classifier for the next subject. This only happens on projects with summaries enabled, when returning to classify from Talk.
## Steps to replicate
Classify a subject, follow the Talk link after classifying, then return to the classifier.
## Additional information
- **Operating system:**
- **Browser:**
Status: Issue closed |
statsmodels/statsmodels | 762704628 | Title: MAINT: future for Travis unit testing
Question:
username_0: Travis CI is dropping free support for open source packages
https://groups.google.com/g/pystatsmodels/c/509vmT4wWAE/m/38Kku3uYCAAJ
I had sent them an email about getting a free time allocation. The response was that it is only for `.com`, and that `.org` will be discontinued. We still have a `.org` account and I didn't switch (yet).
Based on the response of other packages, scikit-learn, numpy, scipy, it doesn't look like anyone is getting enough free time allocation to continue using it as before.
So we will have to mostly or completely switch away from Travis.
We already have a large part of the testing on Azure. AppVeyor also still works as always.
I'm keeping `.org` on travis for now because it's still working.
Answers:
username_1: Azure is the only way IMO.
username_0: Do we need to make any changes when Travis stops working for us?
It looks like we already have everything covered in Azure.
username_1: Azure needs to become more flexible. It (and AppVeyor) runs the latest releases, so it is easy to create backward-compatibility issues without Travis.
username_0: https://travis-ci.org/github/statsmodels/statsmodels shows now the header:
`Since June 15th, 2021, the building on travis-ci.org is ceased. Please use travis-ci.com from now on.`
username_0: @username_1 We don't have automatic doc build for `devel` anymore, do we?
last update according to history of `devel` was on March 25
https://github.com/statsmodels/statsmodels.github.io/commits/master/devel
username_1: You should try and migrate to see if it will resume. But I suspect travis is dead. Will need to get it working again before release; I am using github actions in my own projects to build docs which works well enough.
username_0: Can we use github actions for the statsmodels docs?
I worry about surprise bills when switching to travis.com. It might be feasible if we only use it for doc builds.
username_1: .com is free as long as you are on the free plan. It will probably run out of credits though.
GH actions should be usable, although it is more complex than what I've been doing since I push to a branch rather than another repo. |
libgdx/libgdx | 618354134 | Title: App music continues to play (with choppy audio) in iOS 13.2 and 13.3 simulators when app is put into background
Question:
username_0: I don't currently have a real iOS device to test on, so I haven't yet confirmed whether this affects real devices as well. If anyone reading this issue is able to test, I'd appreciate the confirmation.
#### Issue details
When running a libgdx app in the iOS 13.3 simulator: If the app has music playing and you swipe the app to the background, the music unexpectedly continues to play and sounds very choppy. This problem exists for brand new, minimally-configured libgdx apps. Please see the "Reproduction steps/code" section for more info.
I'm using Eclipse with the RoboVM plugin to run the app, and this error message appears when the app is sent to the background:
`2020-05-14 11:46:37.815 IOSLauncher[91140:2360202] Can't end BackgroundTask: no background task exists with identifier 1 (0x1), or it may have already been ended. Break in UIApplicationEndBackgroundTaskError() to debug.`
And then when you bring it to the foreground and send it to the background again:
`2020-05-14 11:47:09.100 IOSLauncher[91140:2360202] Can't end BackgroundTask: no background task exists with identifier 4 (0x4), or it may have already been ended. Break in UIApplicationEndBackgroundTaskError() to debug.`
This background music behavior didn't exist in the past (e.g. testing on iOS 11.3). Normally, putting the app in the background silenced any playing music.
#### Reproduction steps/code
- Create new app using latest gdx-setup.jar (as of this writing, 2020-05-14)
- Import into Eclipse
- Install version 2.3.9 of the RoboVM plugin for Eclipse (because it seems that you can't build/run for the iOS 13 simulator with earlier releases)
- Add an mp3 file called "music.mp3" at `android/assets/music.mp3`
- Update the `create()` method in MyGdxGame.java so it looks like this:
```
public void create () {
batch = new SpriteBatch();
img = new Texture("badlogic.jpg");
music = Gdx.audio.newMusic(Gdx.files.internal("music.mp3"));
music.setLooping(true);
music.play();
}
```
- Run the game via RoboVM plugin run configuration, selecting device type "iPhone 11 ..., 13.3" and arch "64-bit (x86_64)"
One thing to note, for the above instructions, is that the current gdx-setup.jar creates an app with a top level build.gradle robovm version of 2.3.8. You can leave that version as is or update to 2.3.9, but it doesn't change the problem I'm describing. Both cases build for the iOS 13 simulator as long as your RoboVM **plugin** is 2.3.9.
#### Version of LibGDX and/or relevant dependencies
Tested with:
- libgdx 1.9.7 and 1.9.10
- RoboVM 2.3.9 Eclipse plugin
- Gradle RoboVM version of 2.3.8 or 2.3.9 (but Eclipse plugin must be 2.3.9)
- iOS Simulator - iPhone 11 with iOS 13.2 or 13.3
- Xcode 11.3.1
#### Stacktrace
No relevant stacktrace.
#### Please select the affected platforms
- [ ] Android
- [x] iOS (robovm)
- [ ] iOS (MOE)
- [ ] HTML/GWT
- [ ] Windows
- [ ] Linux
- [ ] MacOS
Answers:
username_1: Even if I can't reproduce on my setup:
- IntelliJ
- RoboVM 2.3.10-SNAPSHOT
- libGDX 1.9.10
- Device iOS 13.4.1 and Simulator 13.2.
I have been able to reproduce different behaviour between the Simulator and a device when the app is backgrounded. When the app is backgrounded on the Simulator, even though the music stops, when the app is resumed the music does not resume where it left off but behaves as if it had continued playing in the background at volume 0. That doesn't happen on the device: music resumes on foreground where it was previously paused.
I think the issue you describe is also too critical not to have been noticed previously so my guess is that it's an issue on your environment using the Simulator.
Regarding the error/warning message, it also occurs when running on the device and is independent of playing any music (happens on a totally empty app).
username_0: Thank you for the reply and testing, @username_1 ! Your mention of `RoboVM 2.3.10-SNAPSHOT` made me want to try that plugin version instead of the latest 2.3.9 release (still in Eclipse, in my case) and I'm happy to say that it solved the audio problem in the simulator. Though as you also noted, I do still see the `Can't end BackgroundTask` error/warning.
If it wouldn't be a pain to try, would you mind another test on your side? In both the simulator and with a physical device, could you see if you're able to reproduce the audio issue with the latest 2.3.9 IntelliJ plugin release instead of the unreleased snapshot?
username_1: If you want to follow up on the issue it is you who should run those tests :)
1. Test again on Eclipse and RoboVM 2.3.9 to confirm the issue is linked plugin version.
2. Try on IntelliJ both 2.3.10-SNAPSHOT and 2.3.9 on Simulator.
This way you'll cover all combinations of IDE and platforms and we can further investigate if it's a RoboVM issue.
username_0: I already tested (1) before posting my last comment, to make sure it was accurate, but I'm glad to do it again for good measure. Just did and confirmed the same results as last time. Problem appears for me with the Eclipse 2.3.9 plugin and doesn't occur with the Eclipse 2.3.10-SNAPSHOT plugin.
For (2), I haven't used IntelliJ before but I'll give it a go and will report back. That's a good suggestion, testing to see if the RoboVM plugins are behaving differently in different IDEs. Verification would reduce the likelihood of things just being off about my Eclipse setup.
I'm still unable to test RoboVM plugin builds on physical device though, sorry. That's why I was asking if you could in my previous comment.
username_0: I just finished testing the two plugin versions in a fresh install of the latest version of IntelliJ (Community 2020.1).
First, the good news:
IntelliJ + RoboVM 2.3.10-SNAPSHOT plugin + Simulator behaves the same way as Eclipse + RoboVM 2.3.10-SNAPSHOT plugin + Simulator. It works, and the background-app music issue doesn't occur.
Now the bad news:
I wasn't able to build and run the app with the RoboVM 2.3.9 release IntelliJ plugin. I'm getting the same NullPointerException `at org.robovm.idea.running.RoboVmRunProfileState.executeRun(RoboVmRunProfileState.java:57)` mentioned in this issue (https://github.com/MobiVM/robovm/issues/242#issuecomment-598698156). Based on dkimitsa's comment in that issue, it sounds like 2.3.9 IntelliJ plugin incompatibility is a known problem, so I don't think I have a good way of comparing the 2.3.9 plugin behavior in Eclipse vs. IntelliJ after all.
What's the best way to proceed now? Thanks for your help with this.
username_2: How about
```
@Override
public void pause() {
music.setVolume(0f);
//or stop() / pause()
}
@Override
public void resume() {
music.setVolume(1f);
//or play()
}
```
in the class extending ApplicationListener?
Those methods allow you to implement silent music / paused music.
The choppy noise could come from low-FPS execution when the app is in the background, but that's just a theory on my part.
username_1: @username_0 Yep, the issue you are facing is https://github.com/MobiVM/robovm/issues/242 and affects versions older than 2.3.10-SNAPSHOT. There's a workaround explained in one of the comments: you need to remove/comment the Android Facet config in `ios.iml`.
username_0: @username_1 With the workaround in https://github.com/MobiVM/robovm/issues/242 I was able to run IntelliJ + RoboVM 2.3.9, thanks -- and I'm seeing the same problem with IntelliJ as Eclipse with RoboVM 2.3.9. Still not sure if it affects a real device, but hopefully this additional info helps!
@username_2 I tried your workaround in the test app and it does hide the problem, thanks! Even if I can stop the audio from being noticeable though, I'm hesitant to release a version of the app with audio output still theoretically open (just in case there are other related issues with things like several-second-long sound effects, and also because I may end up having to silence a bunch of things in various places), so maybe I'll play it safe and wait until 2.3.10 is officially released so I can reference it in my build.gradle file when creating a standalone signed IPA.
Or...if there's a way to reference the snapshot build for a `./gradlew ios:createIPA` operation, I can try that instead? @username_1, do you know if there's a way to reference 2.3.10-SNAPSHOT in the top level `build.gradle` file? Or is that not possible because it's not hosted on maven? (https://mvnrepository.com/artifact/com.mobidevelop.robovm/robovm-gradle-plugin)
username_0: Just updated the ticket title because I also tested this on the iOS 13.5 Simulator.
username_1: Ok, so the bug is confirmed to happen on the 2.3.9 simulator independently of the IDE and doesn't happen on 2.3.10-SNAPSHOT in the simulator or on a device. I'm almost sure it doesn't happen on devices on 2.3.9 either because we would have noticed but, in any case, I would recommend using 2.3.10-SNAPSHOT not just for this bug but because it fixes several other ones. I don't expect this will be investigated further considering it's fixed in 2.3.10.
You should be able to depend on the SNAPSHOT adding:
`maven { url "https://oss.sonatype.org/content/repositories/snapshots/" }`
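For reference, a rough sketch of where that goes in the top-level `build.gradle` (the artifact coordinates come from the mvnrepository link above; verify the rest against your own project):
```groovy
buildscript {
    repositories {
        mavenCentral()
        // Sonatype snapshots repository, so Gradle can resolve 2.3.10-SNAPSHOT
        maven { url "https://oss.sonatype.org/content/repositories/snapshots/" }
    }
    dependencies {
        classpath "com.mobidevelop.robovm:robovm-gradle-plugin:2.3.10-SNAPSHOT"
    }
}
```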
username_0: Great, then I'll stick with 2.3.10-SNAPSHOT and use the sonatype repo. And I'll bet you're right about the devices.
Thanks for your help!
username_3: I tested 2.3.9 yesterday on my iPhone 6 and it did not exhibit the glitchy sound behavior. Simulator did, but not the real device.
username_0: @username_3 Thank you for confirming!
username_4: Can this get closed? Seems to be solved.
Status: Issue closed
username_0: @username_4 Yes, I'll close now. |
eiriktsarpalis/dim-versioning-issues | 894837265 | Title: What about introducing an intermediate interface to linearize the implementation chain
Question:
username_0: E.g.,
```C#
namespace IntermediateIface
{
public interface IEnumerable<T>
{
#if IS_DIM_ADDED
bool TryGetNonEnumeratedCount(out int count)
{
count = 0;
return false;
}
#endif
}
public interface IReadOnlyCollection<T> : IEnumerable<T>
{
int Count { get; }
#if IS_DIM_ADDED
bool IEnumerable<T>.TryGetNonEnumeratedCount(out int count)
{
count = Count;
return true;
}
#endif
}
public interface ICountable<T> : IReadOnlyCollection<T>
{
int IReadOnlyCollection<T>.Count => CountHelper;
internal int CountHelper { get; }
}
public interface ICollection<T> : IEnumerable<T>, ICountable<T>
{
new int Count { get; }
int ICountable<T>.CountHelper => Count;
}
}
namespace ThirdPartyLibrary
{
public class MyCollection2<T> : IReadOnlyCollection<T>, ICollection<T>
{
int IReadOnlyCollection<T>.Count => 42;
int ICollection<T>.Count => 42;
#if IS_DIM_ADDED_TO_NUGET_LIBRARY
bool IMyEnumerable<T>.TryGetNonEnumeratedCount(out int count)
{
count = 42;
return true;
}
#endif
}
}
```
Answers:
username_0: Found a counterexample:
```C#
public interface Interface2<T> : IReadOnlyCollection<T>
{
int IReadOnlyCollection<T>.Count => throw new NotImplementedException();
}
public class MyCollection3<T> : Interface2<T>, ICollection<T>
{
int ICollection<T>.Count => 42;
}
``` |
Shopkit/docs | 297377065 | Title: Update product
Question:
username_0: Hi.
I'm trying to update the title of a product from a .NET program but it always returns error 404.
I am able to get the product by ID into a .NET object and change the title on that object, but the change is not applied after the web request.
Is there any other documentation with examples? Can you provide a .NET example?
Thank you.
Answers:
username_1: Hi,
As of today the product API is read only.
PUT/POST methods are in the works and we hope to release them soon.
Status: Issue closed
|
shuchkin/simplexlsx | 575952597 | Title: Empty Date formatted cells return 1970-01-01 00:00:00 rather than an empty string
Question:
username_0: # Description
When the contents of an empty cell formatted as "Date" are accessed, the value "1970-01-01 00:00:00" is returned. This value should be an empty string.
# Reproduce
```php
<?php
require_once('SimpleXLSX.php');
if ($xlsx = SimpleXLSX::parse('Book1.xlsx')) {
print_r($xlsx->rows());
}
```
Example Excel file: [Book1.xlsx](https://github.com/username_1/simplexlsx/files/4290620/Book1.xlsx)
Output:
Note that the value of row 3 column 2 is expected to be an empty string.
```
Array
(
[0] => Array
(
[0] =>
[1] => Text
[2] => Date
[3] => Currency
[4] => Number
)
[1] => Array
(
[0] => Filled
[1] => Steve
[2] => 2020-01-05 00:00:00
[3] => 1
[4] => 1
)
[2] => Array
(
[0] => Filled
[1] => Steve
[2] => 2020-01-06 00:00:00
[3] => 100
[4] => 100
)
[3] => Array
(
[0] => Empty
[1] =>
[2] => 1970-01-01 00:00:00
[3] =>
[4] =>
)
)
```
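Until this is fixed in the library, a rough post-processing workaround (the column index and the sentinel string are assumptions based on the output above):
```php
<?php
require_once('SimpleXLSX.php');

if ($xlsx = SimpleXLSX::parse('Book1.xlsx')) {
    $rows = $xlsx->rows();
    foreach ($rows as $i => $row) {
        // Blank out the Unix-epoch sentinel that empty date cells currently produce.
        if (isset($row[2]) && $row[2] === '1970-01-01 00:00:00') {
            $rows[$i][2] = '';
        }
    }
    print_r($rows);
}
```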
Status: Issue closed |
Aman-zishan/textextractor2.0 | 714690001 | Title: Hacktoberfest Update
Question:
username_0: Maintainers of the repository can add the `hacktoberfest` topic to their repository if they wish to participate.
Status: Issue closed |
CDrummond/lms-material | 392418142 | Title: Please add 'Stop' button
Question:
username_0: The classic skin has this command button and it's quite useful.
Answers:
username_1: No, sorry. This would spoil the layout of prev/play/next, and I see no real need for a 'stop' button.
Status: Issue closed
username_0: Maybe as an option?
I can see that the stop button is apparently out of fashion, but I find it very useful myself. It moves the playback start point precisely to the beginning of the track. You can do this with the slider, but that needs one more move, and it is difficult when you have big fingers and a small screen.
Please reconsider, this IS useful.
Best
DW
username_1: Pause track, then pres prev button - track position is set to start.
username_0: Apart from being less comfortable, it does not always work - sometimes the playback resumes after tapping prev.
What I also like about stop is that it is stable - if you tap twice (or any number of times), the state does not change. This is important when the UI gets unresponsive: you tap and do not know if the UI consumed the event. You can safely tap again.
I am quite sure many users would find this useful as an optional setting.
username_1: Will be in next release.
username_0: Great thanks! ;-)
--
(P.S. And greetings to the sad gentlemen from the security services of Poland and the whole world who are unlawfully reading this message.)
username_0: Works as a charm! |
brata-hsdc/brata.masterserver | 104801228 | Title: Implement heartbeat
Question:
username_0: Last year we had a quick-and-dirty script written on the morning of the competition to ping all the stations to make sure everything is still up.
Prior to that, I recall discussions about the `connect` message periodically going out to make sure the stations are all up, but I don't recall how this played out during the competition.
Are there any lessons learned that we want to incorporate regarding this for this year? Is there anything we want to add/modify in the ICD in order to get a better periodic heartbeat? (I'm thinking even twice per minute is probably good enough.)
Answers:
username_1: It might be nice to have a simple separate process handle the heartbeat messages. The downside is that then it would need to figure out if the main station task was alive and well. The upsides are:
* it might not go autistic if the station process goes south
* we could send it a message to restart the station process or reboot the system
* it could be a much easier to understand, single purpose piece of code
username_0: So you're saying a completely separate app independent of the Django project? Similar to the script you wrote the day of the competition?
Sounds like a good idea. So its entire purpose in life would be to start up, repeatedly send out a REST message to each device and wait for a response from each device, and set off some alarm if something doesn't respond back in time.
Devices would consist of each station instance and the Master Server. Should it ping the phones as well, or not worry about them?
You mentioned a message to restart the station or reboot. Would this be a button on the management interface of `dbservice`? If so, we would be in trouble if the Master Server went south.
Next thing is how would it get its configuration? How would it know the IP address of each station? The Master Server would already have this information, and I'm not thrilled about duplicating it.
Should the Master Server be responsible for starting this heartbeat process, and maybe killing/restarting from the management interface?
Once started, the heartbeat process could pull its configuration from the Master Server just like any other station process, and then start its heartbeats.
username_1: Actually, I was thinking of a separate process running on the RPi stations, and not the process running the competition (e.g., the `brata.station` software). This would be a little background daemon that monitored the station's "competition process" and maybe could restart it or reboot if commanded from the MS.
On the MS side, it could probably be another Django app, and by being so it could easily access the RPi station information from the database.
username_2: It might be easier to use a third party package, such as Nagios (https://www.nagios.org/), to do this sort of pinging. It has a number of modules to check various services (HTTP, etc), as necessary. It would save us time needing to write all that.
username_0: @username_1 If we go this route, a separate process running on each RPi seems like a good idea--station, MS, everything. All of the apps--MS, Station, etc. can have a single message defined in the ICD that the heartbeat process would use to verify the app is still up, and the heartbeat processes themselves would indicate the RPi is still up.
The MS app you mentioned could talk to all of the process instances, including the one running on the MS itself.
@username_2 I took a brief look at [Nagios](https://www.nagios.org/projects/). I assume you're referring to their open-source projects. Have you used this before? How easy is it to set up?
The RPi boxes are running Raspbian, which is Debian on ARM if I recall correctly.
I see installation instructions [here](https://community.spiceworks.com/how_to/68159-install-nagios-on-a-raspberry-pi). It looks like we should already have a package available for at least the Core, so we should be able to add steps to our [GUB](../../brata/wiki/RaspberryPiGub) easily for that part.
If we choose to go this route, there are probably official plugins to support most of the checks we have listed so far already; the only part left would be writing our own plugin to call the single message I mentioned earlier.
username_1: In #13 I added a `Heartbeat` message that the Stations can send periodically. |
microsoft/Microsoft365DSC | 612034092 | Title: AADConditionalAccessPolicy
Question:
username_0: @username_3 is leading this right now. The dev branch is located here:
https://github.com/username_3/Microsoft365DSC/tree/AADConditionalAccessPolicy
Answers:
username_1: I am very keen to test this module out as we want to monitor and control changes to AAD Conditional Access policies using Microsft365DSC
Do you have some specs or code that we can review or provide some testing feedback against? Any idea of timelines on when this would be available?
Thanks,
username_2: I'd be really keen to test this out and implement CA policies using DSC. I have a desperate need to automate the creation of CA policies across tenants.
username_0: @username_3 is leading this right now. The dev branch is located here:
https://github.com/username_3/Microsoft365DSC/tree/AADConditionalAccessPolicy
username_2: @username_3, could you please help me understand what the current status of the development is? We are really keen to test the CA policy support in DSC.
username_3: @username_2, the CA module is in quite good shape. It is already tested in production, no issues so far.
I have 2 things remaining:
1. change all the logging calls to event log instead of file based
2. write the M365DSC unit test
I'll probably finish with these by end of next week.
username_2: @username_3 , this sounds really promising. I'll give it a try and test your module once its available in the dev branch.
Status: Issue closed
|
itchio/itch.io | 257071277 | Title: [pico8 games] mobile touch controls?
Question:
username_0: Hello itch team!
First, I love your platform. I'd like to propose an idea:
I think that especially indie games have a future on mobile devices, because they are often simple in scope and hardware requirements. I love that you have pico8 integration, but unfortunately I can't play the pico8 games with mobile controls.
I hope you consider optimizing your platform for mobile play in the future. The official pico8 site has some sort of mobile controls, but they don't seem to work well for me. I'd love to have a button on an itch-pico8-game-site that says "Play on smartphone" and opens a blank page with only the game and touch controls.
I tried myself to create proper touch controls for pico8 and put the code online: https://github.com/username_0/pico8_html_template
Maybe you can use some of the code as a basis for adding touch controls to pico8 games on itch.
Also, pico8 is only one of many fantasy consoles and most of them compile to JavaScript. Maybe after some time of testing you could extend a standardized set of mobile controls to the other fantasy consoles as well!
I'd really love for itch to be a mobile storefront for html-games!
I hope you consider my proposal, and if you would like my input on any part of the process, please drop me a line!
Answers:
username_1: I would second this. Pico-8, and other such fantasy consoles are such great learning tools, and broadening the base that could play the games made with them would be amazing. |
dart-lang/site-www | 166935832 | Title: Update "Futures and Error Handling" article to include await/async examples
Question:
username_0: _From @Scorpiion on July 2, 2015 16:27_
I think it would be nice to see one or more examples of error handling with async/await.
The try/catch block used with async/await, for example, is very familiar to developers from many different programming language backgrounds.
Article: https://www.dartlang.org/articles/futures-and-error-handling/
_Note: The article was written before async/await was added and has no mention at all of it at this point._
_Copied from original issue: dart-lang/www.dartlang.org#1408_
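For reference, the kind of example being asked for would look roughly like this (`fetchUserOrder` is a made-up placeholder, not something from the article):
```dart
Future<void> main() async {
  try {
    var order = await fetchUserOrder();
    print('Order: $order');
  } catch (e) {
    // An error thrown inside the awaited Future is caught here,
    // just like a synchronous exception.
    print('Caught error: $e');
  }
}

Future<String> fetchUserOrder() async {
  await Future.delayed(Duration(seconds: 1));
  throw Exception('out of stock');
}
```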
Answers:
username_0: _From @Sfshaza on September 4, 2015 6:50_
At the very least, add a note to the article, referring them to the Async: Futures tutorial which does demonstrate try-catch with async-await.
username_0: No longer an article, the URL is now https://www.dartlang.org/guides/libraries/futures-error-handling
username_0: Unlikely to happen to this page, but we should have better coverage soon. Related: #512.
Status: Issue closed
|
antonagestam/collectfast | 510834770 | Title: Issue with boto3 and collectfast > 1.0.0
Question:
username_0: Good day,
We have a Django project using s3 as its media/static storage engine through django storages and boto3. When using `Collectfast==1.0.0` everything works as expected. However, if we try to update the package to a new version, we get the following error when running `python manage.py collectstatic`
```
File "/lib/python3.6/site-packages/storages/backends/s3boto.py", line 25, in <module>
from boto import __version__ as boto_version
ModuleNotFoundError: No module named 'boto'
```
It seems that the `collectstatic` command is ignoring the storage configuration `storages.backends.s3boto3.S3Boto3Storage` and using the boto-based default instead.
Thank you
Status: Issue closed
Answers:
username_1: This should now be fixed in 1.3.0. Thanks for reporting this!
username_1: To add a bit of information here. This was a bug in the code that tries to guess which strategy to use based on the configured storage backend. Setting a value for `COLLECTFAST_STRATEGY` is a workaround and is strongly recommended anyway. I plan to remove the guessing feature in the next major release and thus require explicitly setting it. |
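For clarity, an explicit configuration looks something like the sketch below -- double-check the strategy path against the README of the version you have installed:
```python
# settings.py
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
COLLECTFAST_STRATEGY = "collectfast.strategies.boto3.Boto3Strategy"

INSTALLED_APPS = [
    "collectfast",  # listed before staticfiles so its collectstatic command takes precedence
    "django.contrib.staticfiles",
    # ...
]
```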
openshift/origin | 91872645 | Title: Awkward text wrapping on overview page
Question:
username_0: Text often breaks at hyphens, for instance, in the middle of an image name. Better to break on whitespace when possible.

We should separate port mappings with a comma or always have line breaks.

Sometimes a dash appears on its own line.

Maybe remove the dash and add a line break in its place.
Answers:
username_0: Route wrapping is a bit awkward, too. Here it's wrapped and elided.

Status: Issue closed
|
hubaimaster/aws-interface | 441322499 | Title: TODO
Question:
username_0: 1. JAVA, JS SDK 지원
2. 기존 SDK 완성 및 TEST CODE 작성
3. 데이터베이스 MySQL RDS 열고 마이그레이션 (Dev / Prod 따로)
4. 리소스 조금 리팩토링.. Partition 권한 제어 부분 -> 상위레이어에선 신경안쓸수있게
5. + Index 할 필드 따로 설정하는 기능 추가할것.. -> 흠.. 추가해야하나? 고민
6. 로그 구현 완료할것
7. Logic 함수 업로드까지만.. 구현 및 얼로케이터 연결..
Answers:
username_0: 8. Auth: add a feature to edit member info
hpc-unibe-ch/puppet-module-slurm | 872245262 | Title: Refactor module to current best practices
Question:
username_0: **Is your feature request related to a current shortcoming? Please describe.**
The current module is functional but needs some improvments in terms of design. In
additions it does to much things.
**Describe the solution you'd like**
Refactor towards a better manageable module and release 1.0
* Provide slurm package installation tasks
* Provide slurm user/group creation tasks
* DO NOT provide functionality regarding Munge daemon/user/group nor should be a dependency there
* DO NOT provide slurm configuration management
* DO NOT provide compilation or repository functionality
**Describe alternatives you've considered**
Module search on https://forge.puppet.com/ was done and the modules `treydock-slurm` and `ULHPC-slurm` have been evaluated. Both cover a lot more than we need, especially the creation of the `slurm.conf` and other configuration files, which we explicitly do not want to handle using puppet.
Therefore it's feasible to create "another" slurm module with only limited functionality.
Status: Issue closed |
monofon/hindent-format | 365714176 | Title: Usage with stack exec
Question:
username_0: Thanks for this extension! I'd really love to use `stack exec -- hindent` as the `hindent-format.command` (vs installing hindent globally) but I keep getting `Could not execute hindent` errors after trying several variations, including simply aliasing hindent. Is this not possible?
Status: Issue closed
Answers:
username_0: Thanks! 🎉 |
gocd/gocd | 168461003 | Title: OAuth gadgets tab very confusing
Question:
username_0: ##### Issue Type
- Bug Report
##### Summary
Attempted to configure the google oauth plugin and instead configured the oauth gadget.
##### Environment
Happens in all environments
###### Basic environment details
n/a
###### Additional Environment Details
n/a
##### Steps to Reproduce
1. install the google-oauth-plugin
2. set a password file
3. you should be required to login at this point
4. login
5. go to the admin page
6. Notice that there are two tabs labeled oauth
7. Note that the oauth gadgets tab seems to have the properties that need to be set for the oauth plugin
##### Expected Results
1. hoped the documentation would provide step-by-step illustrated instructions. The docs assumed knowledge I didn't have and were consequently difficult to follow
2. expected some explanation of what a gadget was. I assumed it was another error in the documentation and that it should have been called plugins. Even now, I don't really understand the difference between a gadget and a plugin
3. input appears to be accepted and G+ button is available from the home screen
4. expected the G+ button to not appear until the plugin was properly configured
##### Actual Results
I entered all the oauth settings into the oauth gadgets tab and was unable to log in. The error message was a non-informative stack trace.
##### Possible Fix
Mostly a documentation and comprehension issue. Need to make clear that a gadget is not a plugin and that this is not the place to configure oauth. Perhaps it would be useful to describe what this does configure.
##### Log snippets
```
==> plugin-google.oauth.login.log <==
2016-07-30 10:02:44,844 ERROR [qtp1334729950-24] OAuthLoginPlugin:77 - Error occurred while OAuth setup.
java.lang.RuntimeException: plugin is not configured. please provide plugin settings.
    at com.tw.go.plugin.OAuthLoginPlugin.getPluginSettings(OAuthLoginPlugin.java:232)
    at com.tw.go.plugin.OAuthLoginPlugin.handleSetupLoginWebRequest(OAuthLoginPlugin.java:206)
    at com.tw.go.plugin.OAuthLoginPlugin.handle(OAuthLoginPlugin.java:99)
    at com.thoughtworks.go.plugin.infra.DefaultPluginManager$1.execute(DefaultPluginManager.java:172)
    at com.thoughtworks.go.plugin.infra.DefaultPluginManager$1.execute(DefaultPluginManager.java:167)
    at com.thoughtworks.go.plugin.infra.FelixGoPluginOSGiFramework.executeActionOnTheService(FelixGoPluginOSGiFramework.java:315)
    at com.thoughtworks.go.plugin.infra.FelixGoPluginOSGiFramework.doOn(FelixGoPluginOSGiFramework.java:245)
    at com.thoughtworks.go.plugin.infra.DefaultPluginManager.submitTo(DefaultPluginManager.java:167)
    at com.thoughtworks.go.server.plugin.controller.PluginController.handlePluginInteractRequest(PluginController.java:59)
```
##### Code snippets/Screenshots
n/a
##### Any other info
Status: Issue closed
Answers:
username_1: Closing there are no oauth tabs anymore. |
Sylius/Sylius | 166017396 | Title: downloadable products
Question:
username_0: How to define a downloadable product for my shop?
Answers:
username_1: I'm having the same issue here, from the documentation I did not see any way to do it.
username_2: It is not an issue. Please move this question to stackoverflow.
A regular product will do the job. Just add a `DownloadableTaxCategory` with a tax rate of 0. The product can be extended with an additional url field, or something similar.
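A rough sketch of that extra-field idea (class names, namespace and mapping are placeholders; adapt it to the usual Sylius model-customization steps for your version):
```php
<?php

namespace AppBundle\Entity;

use Sylius\Component\Core\Model\Product as BaseProduct;

class Product extends BaseProduct
{
    /** @var string|null URL of the downloadable file */
    protected $downloadUrl;

    public function getDownloadUrl()
    {
        return $this->downloadUrl;
    }

    public function setDownloadUrl($downloadUrl)
    {
        $this->downloadUrl = $downloadUrl;
    }
}
```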
Can be closed /cc @username_3
username_3: I labeled it "Documentation" since it will be a frequent question, so it's worth describing in our docs. I would leave it open till we put it in the backlog.
username_4: I've just put it into the docs backlog @username_3 :)
Status: Issue closed
username_3: Thanks @username_4 👍 |
rook/rook | 401482432 | Title: [cassandra] Integrate a solution for cluster repair
Question:
username_0: **Is this a bug report or feature request?**
* Feature Request
**What should the feature do:**
If a node is down or unavailable, when it comes up it needs to discover the writes it missed. To deal with inconsistencies, Cassandra has a mechanism called [repair](http://cassandra.apache.org/doc/4.0/operating/repair.html) that will compare data between replicas and make sure everything is consistent.
Issuing repairs is a difficult manual process. It must be done carefully as to not overload the cluster. For that reason, [Spotify Reaper](https://github.com/thelastpickle/cassandra-reaper) was created to automate the repair process. It will automatically repair the cluster a little bit at a time, in order not to overload it.
**What is use case behind this feature:**
Users should not be required to issue repairs manually. Integrating Spotify Reaper with cassandra operator will give users an out-of-the-box production-grade solution for repair.
Answers:
username_1: What is the status of this? I don't see any PR linked.
username_0: @username_1 I actually had a branch experimenting with this.
It shouldn't be too much effort to include a guide on how to get this working.
What are you doing for your clusters?
Are you running some solution like Cassandra Reaper alongside it or do you do manual repairs?
username_1: On our end, we have our main cassandra "business" clusters, which we haven't moved to rook yet; they're maintained separately because we can afford the cost of operating them manually, using `cassandra-reaper` and checking regularly.
Our clusters using `rook-cassandra` are used for in-cluster tooling, like `jaeger` and friends. For us the plan is to "not pay attention at all" to these clusters, and just have them working under the operator's governance. We faced some issues with our underlying in-cluster storage and that forced me to dig into the topology and `nodetool`, manually for now.
I'm considering the options for automatic maintenance in-cluster, which means I am considering `cassandra-reaper` here as well; having it under the operator's governance is the option I would prefer, obviously, in order to minimize entropy.
username_1: I couldn't get a working `cassandra-reaper` without interfering heavily with the operator's scope, sadly...
This block of configuration in `/etc/cassandra.yaml` seems to be the issue:
```
# jmx authentication and authorization options. By default, auth is only
# activated for remote connections but they can also be enabled for local only JMX
## Basic file based authn & authz
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
#JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.access.file=/etc/cassandra/jmxremote.access"
```
It is hardcoded and since `/etc/cassandra` isn't part of the `Persistent Volume` , even adding the `jmxremote` files would be pointless as it wouldn't be consistent accross restarts.
I think the plan would be kinda bigger if we can't to support this kind of scenarios.
username_1: Still believe it would be a great addition though. But not using rook-cassandra anymore.
username_1: :1234:
username_0: @username_1 thanks for the sustained activity :)
username_2: :+1:
username_1: :no_good: |
gnosis/dex-services | 706130247 | Title: [oba] build api
Question:
username_0: Build an api with the warp library for accepting new orders and broadcasting all orders:
- [ ] Build warp API route for posting new orders. Any order should be accepted if the signature is valid. The order should be posted as json with signature v,r,s as variables of the order
- [] Build a warp API route 'GET' for boardcasting orders.
Later on, order deletion route will be specified in a new ticket. |
NGEET/fates | 606477486 | Title: Site-level-vegetation temperature bug
Question:
username_0: When running ELM(FATES), t_veg24_si will not be updated and cause errors in phenology (never cold) and cause vegetation to die off from carbon starvation. See the detailed description and solution in the attached pdf file. Please let me and @username_1 know if you have any suggestions for the solution.
[Bug of site-level-vegetation temperature.pdf](https://github.com/NGEET/fates/files/4530536/Bug.of.site-level-vegetation.temperature.pdf)
Answers:
username_1: @username_0 , I just talked this over with @ckoven and @username_2 and we feel the fix to this is simple, and can be fixed on the FATES side of the code.
The solution is to just calculate the area weighted mean of the patch-level t_veg_24 (a variable we get along with the site level anyway) ourselves in fates. I will self assign.
Status: Issue closed
|
tensorflow/models | 219917834 | Title: Syntaxnet Parsey Mcparse gives dependency tree with two roots.
Question:
username_0: I am using parsey_mcparseface as the model in syntaxnet. I get really weird results with two ROOTs.
An example is as follows:
$ echo "They never considered themselves to be anything else." | syntaxnet/demo.sh
I get the following output. Notice the two roots at tokens 3 and 9. This is not the only case where this happens; many sentences produce output like this, and the problem especially seems to stem from punctuation. Does anyone know the cause of this and a way to fix it? I am using the default context.pbtxt file.
```
1 They _ PRON PRP _ 3 nsubj _ _
2 never _ ADV RB _ 3 neg _ _
3 considered _ VERB VBD _ 0 ROOT _ _
4 themselves _ PRON PRP _ 7 nsubj _ _
5 to _ PRT TO _ 7 aux _ _
6 be _ VERB VB _ 7 cop _ _
7 anything _ NOUN NN _ 3 xcomp _ _
8 else _ ADJ JJ _ 7 amod _ _
9 . _ . . _ 0 ROOT _ _
```
Status: Issue closed
Answers:
username_1: Sorry to take so long to get back to you. There is unfortunately no easy fix -- the issue is that, depending on the language, it is sometimes OK to have multiple roots, so the transition system does not enforce the constraint.
If you find multiple roots, and want to choose a single one, the best thing you can do is pick one heuristically -- regardless, the statistical model would have made a mistake on that example (as you can see by the fact that it has two nsubj's.) |
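As a rough illustration of such a heuristic (a sketch only -- it assumes tab-separated 10-column CoNLL rows like the output above and simply reattaches any extra root):
```python
def collapse_extra_roots(conll_lines):
    """Keep a single root; reattach every other head-0 token to it."""
    rows = [line.split('\t') for line in conll_lines if line.strip()]
    root_idxs = [i for i, r in enumerate(rows) if r[6] == '0']
    if len(root_idxs) <= 1:
        return conll_lines
    # Prefer a non-punctuation token as the root we keep.
    keep = next((i for i in root_idxs if rows[i][3] != '.'), root_idxs[0])
    for i in root_idxs:
        if i == keep:
            continue
        rows[i][6] = rows[keep][0]  # new head = id of the kept root
        rows[i][7] = 'punct' if rows[i][3] == '.' else 'dep'
    return ['\t'.join(r) for r in rows]
```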
primefaces/primevue | 775353381 | Title: SplitButton composition-api.
Question:
username_0: When binding the "model" property of SplitButton to a composition-api variable, the dropdown menu is not shown. It works when using the regular options API. Console message: Property "xxx" was accessed during render but is not defined on instance.
Status: Issue closed
Answers:
username_0: Sorry, wrong name in the setup() function
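For anyone else hitting the same warning: the name bound to `:model` in the template has to be returned from `setup()`. A minimal sketch (names are placeholders):
```js
import { ref } from 'vue';

export default {
  setup() {
    // The menu model for <SplitButton label="Save" :model="items" />
    const items = ref([
      { label: 'Update' },
      { label: 'Delete' },
    ]);
    return { items }; // must be returned, and the name must match the template
  },
};
```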
ncbi/ngs | 361864616 | Title: 'install' target fails
Question:
username_0: ```
Checking make status of ngs-sdk/language...
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
Checking make status of ngs-sdk/dispatch...
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
Checking make status of ngs-sdk/adapter...
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
Checking make status of object libraries...
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
Installing libraries to /usr/local/lib
/usr/ports/biology/ngs-sdk/work/ngs-2.9.2/ngs-sdk/./Makefile.config.FreeBSD.amd64:115: target '/usr/local/lib' given more than once in the same rule
gmake[4]: *** No rule to make target '/usr/local/lib/libngs-sdk.so.2.9.2'. Stop.
```
OS: FreeBSD 11.2 amd64 |
MicrosoftDocs/azure-docs | 1160973159 | Title: We can use WinSCP to download
Question:
username_0: This article says the ssh client does not support download, but I confirmed that we can use WinSCP (or Tera Term) to download a file. I hope you add this information to the document.
https://docs.microsoft.com/ja-jp/azure/bastion/vm-upload-download-native
This command can be used to upload files from your local computer to the target VM. File download is not supported.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 56c09bee-48e5-7893-c645-0a774a3e5051
* Version Independent ID: c4b35b75-edf4-364f-fcb6-559901e117ee
* Content: [Upload or download files - native client - Azure Bastion](https://docs.microsoft.com/en-us/azure/bastion/vm-upload-download-native)
* Content Source: [articles/bastion/vm-upload-download-native.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/bastion/vm-upload-download-native.md)
* Service: **bastion**
* GitHub Login: @cherylmc
* Microsoft Alias: **cherylmc**
Answers:
username_1: @username_0 , thank you for your feedback. We'll review this and get back to you shortly!
username_1: @username_0 , I reached out to the Azure Bastion Product group team and they have mentioned that they will look into WinSCP download, and will update both English & Japanese docs to reflect the correct information soon.
We are closing this issue for now. If there are further questions regarding this matter, please reply and we will gladly continue the discussion.
Status: Issue closed
|
Chia-Network/chia-blockchain | 939833388 | Title: [BUG] 1.2.0 release doesn't run on Windows 7 anymore
Question:
username_0: **Describe the bug**
After 1.2.0 update, Chia doesn't run on Windows 7 anymore. GUI is stuck at "Connecting to wallet" while throwing these errors in the developer tools log:
`events.js:292 Uncaught Error: connect ECONNREFUSED 127.0.0.1:55400
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1141)`
and the CLI throws a popup error about a missing api-ms-win-core-path-l1-1-0.dll, while complaining about Python39.dll in the console.
**To Reproduce**
1. Install/update to 1.2.0 on Windows 7.
2. Run GUI or CLI and observe the described behaviour
**Expected behavior**
Chia should run on Windows 7.
**Desktop**
- OS: Windows 7
- CPU: any
**Additional context**
It is caused by the use of Python 3.9 which doesn't support Windows 7: https://bugs.python.org/issue40740
But there is a workaround. Someone has created a replacement api-ms-win-core-path-l1-1-0.dll based on Wine code to run Blender 2.93 (which also uses Python 3.9) on Windows 7: https://github.com/nalexandru/api-ms-win-core-path-HACK
I've copied that replacement DLL to the Chia command line folder and that allowed me to run Chia 1.2.0 GUI and CLI.
Answers:
username_0: This user has the same issue: https://github.com/Chia-Network/chia-blockchain/issues/7174#issuecomment-876363643
username_1: Can you share this DLL, and which folder should it be put in?
Status: Issue closed
username_2: We no longer support windows 7
username_0: @username_2: There is nothing about Windows 7 in the FAQ, Wiki or release notes. You should probably state somewhere that Windows 7 is unsupported.
username_1: So it is necessary to write about this, that Windows 7 is not supported anyway, otherwise it is not really written about it anywhere
username_3: Windows 7, Windows Server 2008 R2, not support Python 3.9 :(
username_4: https://github.com/nalexandru/api-ms-win-core-path-HACK
Hack to run it on Win7 |
Facepunch/garrysmod-issues | 372266738 | Title: Model rendering issues when using NPC models
Question:
username_0: When using NPC models as player models and overriding some animation code to translate activities properly, they will sometimes cause various issues when the model is rendered.
The most reliable way that I've found to make it happen is to use a map that has some PVS optimization (my test map is https://steamcommunity.com/sharedfiles/filedetails/?id=1532126505) and have two players be a fair distance apart and in different visleaves. After waiting a couple of minutes, bring the players back together and there's about a 1 in 3 chance that they will have some sort of bone issue. It's more likely to happen for yourself if you are tabbed out of the game.
Forcing a full update for your client will resolve this issue temporarily, and many players have resorted to having to use `record 1; stop` in console to do so.
There are a couple different kinds of bugs that will happen, including incorrect bone rotation:
https://i.imgur.com/dRiHaXs.jpg
Models not rendering at all:
https://streamable.com/c2y7z
And models twitching left and right rapidly, which I couldn't get a video of at the moment.
I've isolated this issue with a test gamemode that includes a stripped down version of the animation code that we use. No other addons/lua files have been ran except for the gamemode code and base GMod stuff. You can check out the gamemode here https://gist.github.com/username_0/b6de45e03ecf5b2c5d722a1bea07ee0e
Answers:
username_0: The issues that are happening seems to be similar to #3010, so it should be considered as a possible solution
username_1: Bone positions become completely invalid which is the point at which the player becomes completely invisible.
```
] lua_run_cl print(LocalPlayer():GetEyeTraceNoCursor().Entity:GetBonePosition(1))
-nan(ind) -nan(ind) -nan(ind) nan nan 0.000
```
username_2: @username_8 @username_3
username_0: This also seems to happen for regular player models, although not nearly as frequent as NPC models it seems
username_3: Merged the potential fix from https://github.com/ValveSoftware/source-sdk-2013/issues/404#issuecomment-405042224 Let me know if that fixes it.
Status: Issue closed
username_0: When would this be pushed to the dev branch?
username_3: All changes are pushed to Dev branch within minutes of them being committed automatically.
username_1: The patch has fixed it partially, players no longer go invisible all the time and the spine twisting seems much less frequent.
username_1: Not sure if this was reverted in the latest patch but it seems to have started occurring frequently since.
username_4: I'm seeing this a lot now on citizen based NPC models, the spine is turned roughly 90 degrees to the right if I'm not mistaken. Happening frequently to those with higher FPS.
username_5: Been starting to see this more and more often now that I'm entering gamemodes with higher FPS, honestly a weird occurrence and I've seen this occur on both Citizen and Combine based models
username_6: I could only speculate that it's becoming more of an issue with the x64 branch if that's where you're testing it.
And yes, the higher the FPS the higher the chance of it happening (`max_fps 120` on the client should prevent it from happening as a workaround), but really, it wasn't a problem anymore with the Lerp_Hermite specialisation, so I'm at a loss as to where else it could come from, someone would need to do some deep digging if it still has to do with the pose params going out of bounds.
username_7: The only workaround I've found is starting to record a demo with 'record demo' in console, then following it up with 'stop'.
username_1: Yeah, that forces a full client update which would usually be done with cl_fullupdate but that command requires cheats.
username_0: @username_3 Could this get reopened? This issue seems to still be persisting even through the lerp fix
username_0: We've found a pretty reproducible test case for triggering the issue. It isn't as extreme as we've seen in the wild, but it's as good as it's going to get in terms of reliably reproducing the issue.
We simply switched between the `male_01` and `police` models a couple times and it would happen almost every time as captured in this video: https://streamable.com/mebf8. It shows the model being switched 6 times and with the final switch back to the `male_01` model, you can see the spine being distorted. Shortly after, `cl_fullupdate` was used to reset the entity and fix the issue.
The issue also shows up in demos, so I've uploaded one at https://www.dropbox.com/s/mw4a27ucacgcieo/animtest.dem if it's at all useful.
The test code that was used is below. The gamemode used is the one linked in the OP (https://gist.github.com/username_0/b6de45e03ecf5b2c5d722a1bea07ee0e).
```lua
concommand.Add("animmodel", function(client)
for i = 1, 3 do
timer.Simple(i * 0.1, function()
local newModel = client:GetModel() == "models/humans/group01/male_01.mdl" and "models/police.mdl" or "models/humans/group01/male_01.mdl"
client:SetModel(newModel)
end)
end
end)
```
username_8: It appears as if changing the model results in the pose parameters being copied over, but the models have different pose parameters (or pose parameters are stored in a different order.)
I can't say for sure it is related to your first post, but I can try to fix it.
username_8: I pushed a fix for that to the dev branch. I'm still not sure if it'll fix the actual issue.
username_1: I'll give it a test. This happens on it's own over time (5-10 minutes) without any model changes but changing the model a couple times is the most reproducable trigger of the issue.
username_9: It's happening less frequently now; however, it is still noticeable for a couple of players over time (10 minutes), and people still continue to go invisible, which in combat screws over the person they're fighting against.
 |
vercel/next.js | 700617324 | Title: Disable cssnano's postcss-calc
Question:
username_0: # Bug report
`next build` messes up complex css calc that works as expected in `next dev`
## To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
1. Clone https://github.com/username_0/tailwindcss-capsize
2. On the dev server `npm run dev` the calc function works as expected https://prnt.sc/ugllip
3. When using `npm run build` then `npm run start` it shortens the calc function from
```css
.text-6xl::before {
margin-top: calc(-1em * ((var(--ascent-scale) - var(--cap-height-scale) + var(--line-gap-scale) / 2) - (((var(--line-height-scale) * (var(--font-size-rem) * var(--root-font-size-px)) - (var(--line-height-rem) * var(--root-font-size-px)) - (var(--line-height-unitless) * (var(--font-size-rem) * var(--root-font-size-px))) - var(--line-height-px)) / 2) / (var(--font-size-rem) * var(--root-font-size-px))) + (.05 / (var(--font-size-rem) * var(--root-font-size-px)))));
}
```
to
```css
.text-6xl::before {
margin-top: calc(-1em*(var(--ascent-scale) - var(--cap-height-scale) + .05));
}
```
That's when I added the following to postcss.config.js
```js
module.exports = {
plugins: {
cssnano: {
preset: ['default', { calc: false }],
},
},
}
```
## Expected behavior
Expected the full calc function when using `npm run build`
## System information
- OS: Windows 10
- Version of Next.js: 9.5.3
- Version of Node.js: 12.18.2
## Additional context
I tried the same cssnano options on a gulp configuration and it disabled calc.
```js
const gulp = require('gulp')
const postcss = require('gulp-postcss')
const cssnano = require('cssnano')
gulp.task('css', function () {
return gulp
.src('input.css')
.pipe(
postcss([
cssnano({
preset: ['default', { calc: false }],
}),
])
)
.pipe(gulp.dest('dist'))
})
```
Answers:
username_0: A temporary workaround for those having a similar issue is using intermediate CSS variables with top-level calculations only.
For me, was turning:
```css
.text-6xl::before {
margin-top: calc(-1em * ((var(--ascent-scale) - var(--cap-height-scale) + var(--line-gap-scale) / 2) - (((var(--line-height-scale) * (var(--font-size-rem) * var(--root-font-size-px)) - (var(--line-height-rem) * var(--root-font-size-px)) - (var(--line-height-unitless) * (var(--font-size-rem) * var(--root-font-size-px))) - var(--line-height-px)) / 2) / (var(--font-size-rem) * var(--root-font-size-px))) + (.05 / (var(--font-size-rem) * var(--root-font-size-px)))));
}
```
into:
```css
.text-6xl::before {
--line-height-normal: calc(var(--line-height-scale) * var(--font-size-px));
--specified-line-height-offset-double:
calc(var(--line-height-normal) - var(--line-height-px));
--specified-line-height-offset:
calc(var(--specified-line-height-offset-double) / 2 );
--specified-line-height-offset-to-scale:
calc(var(--specified-line-height-offset) / var(--font-size-px));
--prevent-collapse-to-scale:
calc(0.05 / var(--font-size-px));
--line-gap-scale-half: calc(var(--line-gap-scale) / 2);
--leading-trim-top:
calc( var(--ascent-scale) - var(--cap-height-scale) + var(--line-gap-scale-half) - var(--specified-line-height-offset-to-scale) + var(--prevent-collapse-to-scale) );
  margin-top: calc(-1em * var(--leading-trim-top));
}
```
username_1: We're having this issue as well (as expected since it's a parser issue), but in a way that can't be worked around with multiple declarations: we're generating a deep `calc` value (for a responsive type system) in a SASS `@function` which means its usage should only return a single value.
Until we can configure `cssnano`'s config to unblock this, would it be possible to override @username_2's https://github.com/username_2/cssnano-preset-simple using Yarn resolutions?
Status: Issue closed
username_3: This issue has been automatically locked due to no recent activity. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you. |
UziTech/action-setup-atom | 762840155 | Title: Installation of Atom snap for Linux
Question:
username_0: Can you provide an option to set up Atom using snapcraft?
https://snapcraft.io/atom
Answers:
username_1: I'm not sure how to do that but would be open to PRs.
username_2: Snap (the command-line tool) is just another package manager. It's included in the GitHub Actions runners.
You can simply do this on the command-line...
Stable channel:
`sudo snap install atom --classic`
Edge channel:
`sudo snap install atom --edge --classic`
---
After that, you have Atom installed. I haven't tried to do much with it beyond installing it... I know `apm --version` works, but `atom --version` does not work, and both of those commands print an error message before running:
```
mkdir: missing operand
Try 'mkdir --help' for more information.
```
username_1: @username_2 If you create a PR I would be happy to review it. 😁👍
Status: Issue closed
username_1: I'm going to close this because of the discussion in #153. It doesn't seem like this action should be concerned with where Atom is installed from. |
jonasmalacofilho/robrt | 197851076 | Title: Support for pull requests events when the base has changed
Question:
username_0: Apparently this means `"action": "edited"`...
Answers:
username_0: Even if no support is added, this is also a bug: no reply is sent back to GitHub for these events.
username_0: Payload:
```
{
"action": "edited",
"number": 98,
"pull_request": {
"url": "https://api.github.com/repos/protocubo/the-online-brt-planning-guide/pulls/98",
"id": 99398028,
"html_url": "https://github.com/protocubo/the-online-brt-planning-guide/pull/98",
"diff_url": "https://github.com/protocubo/the-online-brt-planning-guide/pull/98.diff",
"patch_url": "https://github.com/protocubo/the-online-brt-planning-guide/pull/98.patch",
"issue_url": "https://api.github.com/repos/protocubo/the-online-brt-planning-guide/issues/98",
"number": 98,
"state": "open",
"locked": false,
"title": "Cross reference support (work in progress)",
"user": {
"login": "username_0",
"id": 1832496,
"avatar_url": "https://avatars.githubusercontent.com/u/1832496?v=3",
"gravatar_id": "",
"url": "https://api.github.com/users/username_0",
"html_url": "https://github.com/username_0",
"followers_url": "https://api.github.com/users/username_0/followers",
"following_url": "https://api.github.com/users/username_0/following{/other_user}",
"gists_url": "https://api.github.com/users/username_0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/username_0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/username_0/subscriptions",
"organizations_url": "https://api.github.com/users/username_0/orgs",
"repos_url": "https://api.github.com/users/username_0/repos",
"events_url": "https://api.github.com/users/username_0/events{/privacy}",
"received_events_url": "https://api.github.com/users/username_0/received_events",
"type": "User",
"site_admin": false
},
"body": "I had hoped to push something functional by Xmas, but I ended up working on the new server...\r\n\r\nStill, I'm opening the pull request as is to get the tests to run (and to track the development).\r\n\r\n - manual ids with `\\id{<id>}`:\r\n + [x] spec/design\r\n + [x] parse\r\n + [x] apply to the structured document\r\n + [x] update the generators\r\n + [ ] enforce syntax rules at the parser and/or validator\r\n - cross references with `\\ref[<optional type>]{<id>}` and `\\rangeref[<optinal type>]{<id>}`\r\n + [x] spec/design\r\n + [ ] parse\r\n + [ ] pass through the structuring\r\n + [ ] implement the resolution algorithm\r\n + [ ] html references\r\n + [ ] tex/pdf references\r\n + [ ] enforce syntax rules at the parser and/or validator\r\n + [ ] check ref resolution at the validator\r\n - usage examples",
"created_at": "2016-12-27T01:56:14Z",
"updated_at": "2016-12-28T12:51:52Z",
"closed_at": null,
"merged_at": null,
"merge_commit_sha": "464594f3d5d526a958488d7c0071f1fefc7c08b0",
"assignee": null,
"assignees": [
],
"milestone": null,
"commits_url": "https://api.github.com/repos/protocubo/the-online-brt-planning-guide/pulls/98/commits",
"review_comments_url": "https://api.github.com/repos/protocubo/the-online-brt-planning-guide/pulls/98/comments",
"review_comment_url": "https://api.github.com/repos/protocubo/the-online-brt-planning-guide/pulls/comments{/number}",
"comments_url": "https://api.github.com/repos/protocubo/the-online-brt-planning-guide/issues/98/comments",
"statuses_url": "https://api.github.com/repos/protocubo/the-online-brt-planning-guide/statuses/658363c6d442cdda7e0db45294784ff62c3d742b",
"head": {
"label": "username_0:xmas",
"ref": "xmas",
"sha": "658363c6d442cdda7e0db45294784ff62c3d742b",
"user": {
"login": "username_0",
"id": 1832496,
"avatar_url": "https://avatars.githubusercontent.com/u/1832496?v=3",
[Truncated]
"site_admin": false
}
}
```
Robrt:
```
Dec 28 12:51:52 brt-guide-host env[15233]: * POST /incoming -> [2bf59f96] @robrt.IncomingRequest.handleRequest(IncomingRequest.hx
Dec 28 12:51:52 brt-guide-host env[15233]: [2bf59f96] DELIVERY: 6766e400-ccfc-11e6-97d7-bcf0b54d269b @robrt.IncomingRequest.execute
Dec 28 12:51:53 brt-guide-host env[15233]: [2bf59f96] repository: protocubo/the-online-brt-planning-guide @robrt.IncomingRequest.ex
Dec 28 12:51:53 brt-guide-host env[15233]: [2bf59f96] event: GitHubPullRequest @robrt.IncomingRequest.execute(IncomingRequest.hx:77
Dec 28 12:51:53 brt-guide-host env[15233]: [2bf59f96] repository matches: protocubo/the-online-brt-planning-guide @robrt.IncomingRe
```
GitHub:
```
We couldn’t deliver this payload: Service Timeout
```
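For context on what the fix needs to do: the general shape is simply to acknowledge every delivery, even for actions the builder ignores. Robrt itself is written in Haxe; the snippet below is only an illustrative TypeScript/Express sketch with made-up route and status handling, not the project's code.
```ts
import express from "express";

const app = express();
app.use(express.json());

app.post("/incoming", (req, res) => {
  const action = req.body?.action;
  if (action === "opened" || action === "synchronize") {
    // ... enqueue a build for the pull request here ...
    res.status(202).send("build queued");
  } else {
    // e.g. "edited" (title/body/base changed): answer anyway so GitHub
    // doesn't report "Service Timeout" for the delivery.
    res.status(200).send(`ignored action: ${action}`);
  }
});

app.listen(8080);
```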
username_0: Fixed in 8555eff5dbf08fd80c2187a2ad3d625462a42a88
Status: Issue closed
|
GIANTCRAB/gitlabby-dockerish-laravel | 255094554 | Title: Laravel Dusk support
Question:
username_0: Hello! Can you add support for running Laravel Dusk tests? :)
Greetings!
Answers:
username_1: Sure, I'll get #11 resolved ASAP and thereafter, I'll add this in.
username_1: Laravel Dusk can now be run using woohuiren/php-laravel-env:latest docker image. Let me know if it works for you. Thanks!
username_0: it works
Status: Issue closed
|
aws/aws-codedeploy-agent | 628978254 | Title: Feature request: Extra deployment config
Question:
username_0: My applications need some "extra" configuration settings that apply on the deployment group level.
So, it would be great if I could have a way to put this data into the deployment group, similar to how "user data" works when starting EC2 instances.
If this data were available as a file somewhere near the deployment directory (like `/opt/codedeploy-agent/deployment-root/{deployment-group-id}/{deployment-id}/extra-config`, that'd be all I need :+1: |
LucilleN/Chaser | 283053366 | Title: Game crashes
Question:
username_0: By creating an image on every frame, and fetching it from a remote URL, the game generates many errors in the console and ultimately crashes.
Create only a single background object.
<img width="557" alt="screen shot 2017-12-18 at 3 12 57 pm" src="https://user-images.githubusercontent.com/538615/34132810-76f0dabc-e406-11e7-9f40-0542deca6b4d.png">
Answers:
username_0: Once fixed, the egg image has a problem too.
The image appears to no longer exist. Best to grab an image and put it in your local repo and reference it that way.
<img width="351" alt="screen shot 2017-12-18 at 3 25 01 pm" src="https://user-images.githubusercontent.com/538615/34133073-bb3e26ce-e407-11e7-9038-88d3e4724b34.png">
username_0: You should make these patches
```
-const powerUpSpriteURL =
- "http://freeclipartimage.com//storage/upload/egg-clip-art/egg-clip-art-2.png";
+const powerUpSpriteURL = "egg.png";
const backgroundURL = "http://i.imgur.com/bTgbcZR.png";
+const backgroundImage = new Image();
+backgroundImage.src = backgroundURL;
```
and
```
function clearBackground() {
- let background = new Image();
- background.src = backgroundURL;
- ctx.drawImage(background, 0, 0, canvas.width, canvas.height);
+ ctx.drawImage(backgroundImage, 0, 0, canvas.width, canvas.height);
```
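For reference, the two patches above add up to something like the sketch below: each image is constructed once at startup and then reused on every frame. It is only a sketch; in the real game `ctx` and `canvas` are globals rather than parameters.
```ts
// Create the background image once, outside the game loop.
const backgroundURL = "http://i.imgur.com/bTgbcZR.png";
const backgroundImage = new Image();
backgroundImage.src = backgroundURL;

// Point the power-up sprite at a local file instead of the dead remote URL.
const powerUpSpriteURL = "egg.png";

function clearBackground(ctx: CanvasRenderingContext2D, canvas: HTMLCanvasElement): void {
  // Reuse the preloaded image instead of constructing a new Image per frame.
  ctx.drawImage(backgroundImage, 0, 0, canvas.width, canvas.height);
}
```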
Put the following image in `egg.png`:
<img width="12" alt="egg" src="https://user-images.githubusercontent.com/538615/34186863-145236de-e4e3-11e7-9db9-92e0a708c2e0.png"> |
Alan-Mathison-Turing/Asistente_Virtual | 330908025 | Title: REQ#20 - Wikipedia search or "I'm Feeling Lucky"
Question:
username_0: As a user, I want to be able to ask the assistant about a topic and have it answer with a link to Wikipedia, or to Google ("I'm Feeling Lucky"...) if it finds no results on Wikipedia, so I can resolve doubts. 15 pts. It must show the beginning of the answer.
Answers:
username_0: API Wikipedia: https://es.wikipedia.org/api/rest_v1/
Jackson JAR download: http://www.java2s.com/Code/Jar/j/Downloadjacksoncore223jar.htm
Status: Issue closed
|
apollographql/apollo-client | 314259998 | Title: Stateless component gets random
Question:
username_0: **Intended outcome:**
The stateless component receives refreshed data from the subscription.
**Actual outcome:**
The stateless component receives random data - sometimes from the subscription query, sometimes from the cache.
**How to reproduce the issue:**
My query with subscription:
```
const TableScreenData = graphql(MY_QUERY, {
options: () => ({ fetchPolicy: 'network-only' }),
props: ({data, ownProps}) => {
return {
subscribeToTableChanges: () => {
return data.subscribeToMore({
document: MY_SUBSCRIPTION,
fetchPolicy: 'network-only',
updateQuery: (prev, next) => {
const table = next.subscriptionData.data.exampleQuery
console.log(table.game.current.winners);
return {table}
}
});
},
...data,
...ownProps
}
}
});
```
My stateless component:
```
class TableScreen extends React.Component {
subscribed = false;
subscribe () {
if (this.subscribed === false) {
this.unsubscribe = this.props.subscribeToTableChanges();
this.subscribed = true;
}
}
componentWillUnmount () {
this.unsubscribe();
}
render () {
const {loading, error, table} = this.props;
if (loading) return <Text>Loading</Text>;
if (error) return <Text>error</Text>;
this.subscribe();
console.log(table.game.current.winners);
}
}
```
The result from the console (line 50 - console.log from the component, line 107 - console.log from updateQuery):

Usually, when there are longer pauses between subscription events, or if I put a one-second breakpoint in the GraphQL server before the subscription event is sent to the client, everything is OK. But if I fire events very fast, the stateless component receives cached props even though updateQuery shows the updated one.
**Version**
- [email protected]
Answers:
username_0: OK, I solved the issue on my own.
I was keeping `playerId` in the `id` field. Since the `id` field didn't change even though the cards did, the heuristic caching treated the whole object that came from the network as exactly the same object, because it had the same id.
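For anyone landing here later: with apollo-cache-inmemory the default heuristic keys cache entries by `__typename` plus `id`, so an `id` field that really holds a player id makes different snapshots collide. A sketch of one workaround, overriding `dataIdFromObject`; the `Table` typename here is hypothetical:
```ts
import { InMemoryCache } from "apollo-cache-inmemory";

const cache = new InMemoryCache({
  dataIdFromObject: (object: any) => {
    // Opt the colliding type out of normalization entirely...
    if (object.__typename === "Table") return null;
    // ...and keep a default-style key for everything else.
    return object.id != null ? `${object.__typename}:${object.id}` : null;
  },
});
```
The simpler fix, as described above, is just not to store a non-unique value in `id` in the first place.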
Status: Issue closed
|
silvia-odwyer/photon | 1106134609 | Title: Website: Uncaught TypeError: e.oil is not a function
Question:
username_0: In the [website](https://username_1.github.io/photon/), after some time of iterating over the filters, it stops on the filter **`oil`** which doesn’t seem to be present in the bundle.
Maybe its path changed?
Answers:
username_1: @username_0 Thanks very much for letting me know about this, it's much appreciated! 😄 I'm going to take a look and fix this soon 👍
username_1: It should be fixed now, I just updated the website to reflect this also 👍 Thanks again! 😄
Status: Issue closed
|
keithknott26/datadash | 558587143 | Title: how do i get the datadash binary
Question:
username_0: Hi. Sorry I don't know how to build a Go project. I'm used to traditional Linux programs that use ./configure ; make ; make install
The instructions here aren't very explicit on how to get the datadash binary. The install instructions says:
go get -u github.com/username_1/datadash
I ran that command, but I still had no datadash binary. In fact, the command had no output at all, so I wasn't sure whether it did anything. I hunted around to see if I could find anything, and luckily I stumbled across a go directory inside my home directory. Inside it I found a datadash directory with a datadash.a file and some other files that look like source code.
I was hoping to get instructions that were a little more clear about what to do after running the go get command and specifically how to build the datadash binary. The usage examples all seem pretty clear so I don't think I need help with usage, I just need the binary.
Can someone help please?
Answers:
username_1: Hello,
Thanks for raising this issue, I'm in the process of refactoring the code a bit and will upload a binary release or perhaps set up support for homebrew when I can.
In the meantime you'll want to navigate to the directory where you downloaded the application and build the binary. If you have Go setup and installed on your machine you probably downloaded the package to:
$GOPATH/src/github.com/username_1/datadash/. The fact that you ran the command above and didn't get any output means you probably have a Go environment already set up; otherwise you would have gotten an error.
To build and then run the binary:
```
cd $GOPATH/src/github.com/username_1/datadash/cmd
go build datadash.go
./datadash
```
Hope this helps!
Keith
username_0: Thanks Keith! That worked! I spent a few hours trying to figure it out and I even started with the hello world Go tutorial on Google's site, which I was able to complete but still couldn't figure out what needed to be done to build this binary.
I could have spent a full 24 hours on this and not figured it out, but the steps above got it built in no time.
I now have a datadash binary and I could experiment with the example. Thanks for your help, I really appreciate the fast response.
username_0: Thanks to the help, I was able to pretty quickly construct a one-liner to get data from some sensors and pipe it to datadash and visualize the metrics from the sensor. Although, the "one-liner" approach may be stretching the limits of good taste and what qualifies as an actual one-liner.
My command is a while true loop, which runs curl to get JSON data from an endpoint on the sensor every second, piped to python which parses the JSON data and prints it, tab separated to the STDOUT so that datadash can ingest it. Seems to be working nicely.

username_2: FYI [zinit](https://github.com/zdharma/zinit) users can use the following to install datadash:
```zsh
zinit as"null" \
sbin"datadash" \
atclone"rm -f datadash; go build cmd/datadash.go" \
atpull"%atclone" \
for @username_1/datadash
``` |
glycoinfo/GlycanBuilder2 | 942408553 | Title: Generated SVG files no longer have semantic ID attributes...
Question:
username_0: It seems that in the battle to get SVG generation working again and the related issues with GroupingSVGGraphics2D, GroupingSVGGraphics2D is no longer putting semantic ID attributes on SVG element groups (representing monosaccharides and their links). These IDs are crucially important for Will's glycoTree sandbox.
See also #12 and #13.
Answers:
username_1: The SVG of glycans output by GlycanBuilder does not support Will's glycoTree sandbox.
username_0: See my pull request #22 for a fix
username_2: @username_0
Hi Nathan,
Thank you for taking up this issue and putting a proposal of implementation for that.
However, we do not plan to support SVG export for Will's glycoTree sandbox in GlycanBuilder, because GroupingSVGGraphics2D was not originally supposed to output semantic ID attributes.
We instead developed the following web APIs for the SVG export:
WURCSToWURCSJSON-api
https://api.glycosmos.org/glycanformatconverter/#2.5.2-wurcs-to-wurcsjson
WURCSJSONToSVG-api
https://api.test.glycosmos.org/#wurcs-json-wurcs-json-to-svg-api-post
WURCSToWURCSJSON-api converts WURCS sequence to a JSON style format, and WURCSJSONTOSVG-api exports the SVG from the JSON.
These APIs were developed mainly for Will's glycoTree sandbox, and the basic features are already implemented.
The reason we built them as APIs is that we originally planned to use them in our glycan editing web tool, SugarDrawer.
The features behind these APIs are implemented in Java. We will let you know how to use them if you would like.
username_0: All that notwithstanding, Will does not appear to be aware of this development to support his tools. Regardless, y'all are removing functionality and choosing not to add it back in for free, for no great reason. I've added it back because it is useful to us; y'all are free to ignore it in the output. Please merge so I don't have to maintain a separate fork.
username_3: After reading your comment, we investigated our changes again and noticed that SVGGlycanRenderer originally had the functionality you pointed out and that it was accidentally removed. We are sorry about that.
We will accept your pull request. Thank you for your cooperation. I will be careful so that this kind of thing doesn't happen in the future.
In addition, I suggest that we invite you as an owner of the github.com/glycoinfo repositories.
username_0: I've no problem with having existing developers review/gatekeep (and push-back and/or approve) pull requests. Most projects require a degree of dictatorship with respect to direction and quality control. Still, it would be nice to have the interests of those outside the core group be acknowledged and considered, and for projects like this to be run in a way that serves the community rather than the core group.
Thanks for merging the pull request!
Cheers!
Status: Issue closed
username_0: Actually, I realize now that my solution is incomplete and will not be correct for structures with undetermined linkage. I will provide another pull request in the same style as this one to fix this issue.
Status: Issue closed
|
angular/angular | 975909553 | Title: The lint command line reference is incorrect
Question:
username_0: ### Description
The `ng lint` command line reference incorrectly states that, "When a project name is not supplied, executes the lint builder for the default project." In fact, like `ng test` it will execute for all projects.
### What is the affected URL?
https://angular.io/cli/test
### Please provide the steps to reproduce the issue
1. Clone [username_0/angular-issue](https://github.com/username_0/angular-issue).
2. Run `npm install`.
3. Run `ng lint`.
### Please provide the expected behavior vs the actual behavior you encountered
As per the documentation, the command should only execute on the default project. Instead, it executes on all projects. This is reasonable, but the documentation does not reflect that.
### Please provide a screenshot if possible
<img width="570" alt="Screen Shot 2021-08-20 at 2 39 44 PM" src="https://user-images.githubusercontent.com/2351292/130290965-a04d9553-a8df-4396-aa92-a2595c03d073.png">
### Please provide the exception or error you saw
```
"When a project name is not supplied, executes the lint builder for the default project."
```
### Is this a browser-specific issue? If so, please specify the device, browser, and version.
_No response_ |
pulumi/pulumi-aws | 732680756 | Title: Add missing "AmazonSSM..." enums to aws.iam.ManagedPolicy
Question:
username_0: There are four managed policy ARNs beginning with AmazonSSM that are not in provider/resources.go. One is not for general use (`AmazonSSMServiceRolePolicy`). Can we add the other three (`AmazonSSMAutomationApproverAccess`, `AmazonSSMDirectoryServiceAccess` and `AmazonSSMPatchAssociation`) to that file?
Answers:
username_0: PR #1196 resolves this.
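For anyone searching later, once the enum members exist they attach like any other managed policy. A minimal TypeScript sketch follows; the resource names and the trust policy are placeholders, and `AmazonSSMPatchAssociation` assumes the members land as proposed in the PR:
```ts
import * as aws from "@pulumi/aws";

const role = new aws.iam.Role("ssm-role", {
  assumeRolePolicy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      { Effect: "Allow", Action: "sts:AssumeRole", Principal: { Service: "ec2.amazonaws.com" } },
    ],
  }),
});

// Attach one of the newly added AmazonSSM* members instead of a hand-typed ARN.
new aws.iam.RolePolicyAttachment("ssm-patch-association", {
  role: role.name,
  policyArn: aws.iam.ManagedPolicy.AmazonSSMPatchAssociation,
});
```
Using the enum member keeps the ARN typo-proof compared to pasting the raw string.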
Status: Issue closed
|
jackocnr/intl-tel-input | 250423749 | Title: Only one country loaded
Question:
username_0: Hello guys, I am starting to use this jQuery plugin and I am having some trouble getting started. My input is loading just one country and the select box doesn't open when I click.
What should I do?
P.S.: the console doesn't throw any errors.
Answers:
username_1: Which initialisation options are you using?
Could it be a conflict with your site's CSS or JS?
The best thing to do is to try to recreate your problem in a codepen.
username_0: @username_1 thank you for your response. It was Laravel's stock app.js file that was conflicting with intl-tel-input.
Status: Issue closed
|
amodm/webbrowser-rs | 1139901917 | Title: Current function signature is too restrictive
Question:
username_0: As of now, the two publicly exposed functions are:
* `fn open(url: &str) -> Result<Output>`, as [seen here](https://github.com/username_0/webbrowser-rs/blob/v0.5.5/src/lib.rs#L169), and
* `fn open_browser(browser: Browser, url: &str) -> Result<Output>`, as [seen here](https://github.com/username_0/webbrowser-rs/blob/v0.5.5/src/lib.rs#L200)
This causes issues of two kinds:
1. Not all runtime environments provide `Output`, e.g. when built for the `wasm32`, the signature ends up being `fn open(url: &str) -> Result<()>` as [seen here](https://github.com/username_0/webbrowser-rs/blob/v0.5.5/src/lib.rs#L174), so we end up having different signatures for different environments.
2. The current `Output` is not useful at all, which reflects in a (unscientific) review of usages of this library across Github. It's easy to see why - there's not much that a client can do differently based on the response. So everyone ends up using the return value effectively as a `Result<()>`.
Given the above, it's probably best to modify the signature to:
* `fn open(url: &str) -> Result<()>`, and
* `fn open_browser(browser: Browser, url: &str) -> Result<()>`
Answers:
username_0: Also, maybe it's the best time to update signature to take in `AsRef<str>` as discussed in #35 and #38.
username_0: Factors that I've considered:
1. I can currently think of only one legitimate non-string object, where this can be helpful, and that's [Url](https://docs.rs/url/latest/url/struct.Url.html). And there's a `.as_str()` readily available.
2. Potential scenarios where `AsRef` can lead to type inference issues.
3. The increased verbosity being thrown up in docs.
Given all the above, I'm currently deciding against modifying input parameter to be an `AsRef<str>`. I'm making a considered choice right now, biased from my own personal sense of ergonomics. As I hear from the users of this library, we can revisit this choice.
Status: Issue closed
|
edent/BMW-i-Remote | 164073990 | Title: BMW server API rate limit
Question:
username_0: I have been playing about with this and my new i3 which I got last week. I prefer PHP rather than python so I went with that, I found the curl documentation here was invaluable in helping me to get things working. I have written a script that posts any changes in status to a slack channel rather than twitter.
Everything is working happily and I have a cron job running every 5 minutes to grab the status, if the timestamp has changed (or if the status has changed if it's charging) then it posts a new message to slack. yay!

While I was looking at this last night I noticed that the status doesn't change unless something physically is different with the car.. totally makes sense. What doesn't make sense to me from a technical perspective is that while the vehicle is in motion nothing seems to change. After a drive home from work the updatedReason changed to VEHICLE_MOVING and kept the same timestamp until I got home and parked. I guess the theory behind this is that people aren't going to be using the iRemote on their phones while driving the car.
I did however notice that when charging my i3 the timestamp would change upon every request (I hit refresh a bunch of times and even after a second or two it changed). I'm not sure that up to the second information is necessary and the implications on servers and mobile data would make that a silly thing to me for the car to be doing.. but there ya go. perhaps there is some backend tasks going on where the server calculates what the battery % increase should be based on the charging speed etc and so isn't actually talking to the car directly.
What's my point.. oh yes.. based on the fact that I appear to be able to make new requests every second what do people think is a sensible frequency for requests to be made. I was thinking of going down to 1 every 60 seconds. I'm not sure how easy it would be for BMW to detect/block access for a homemade client.. but I don't want to hammer something and then get myself or everyone blocked.. What does everyone/anyone think?
Also would a PR with the php->slack code be of use to anyone.. would it be better as a separate repo?
Answers:
username_1: I'm happy to have it on here - but I won't be offended if you want it under your own control :-)
username_2: @username_0 I'd hack away at a PHP version with you 👍
username_0: @username_2 This is what I have come up with so far. https://github.com/username_0/iRemote-PHP
username_3: You have to understand how gas/diesel cars typically dealing while turned off:
As you turn off a car it will stay in a semi-active mode for some time - just a few minutes. This is called "Nachlauf" in German engineering and I would translate this to "coastdown" in English. After that the car is shutdown to protect the battery ("Klemme 15").
For electric vehicles this is very different as during charging the car is still active while not driving. Hence, no-driving and non-charging status you should also be careful waking up your car too often.
Status: Issue closed
|
vim/vim | 1124957850 | Title: Vim9: unexpected type mismatch error when changing undeclared list with `map()` in `:def` function
Question:
username_0: **Steps to reproduce**
Run this shell command:
vim -Nu NONE -S <(tee <<'EOF'
vim9script
def Func()
echo getline(1, 3)->map((_, v) => 3)
enddef
Func()
EOF
)
An error is given:
E1013: Argument 2: type mismatch, expected func(...): string but got func(any, any): number
**Expected behavior**
No error is given, because `getline(1, 3)` is not a declared variable, and it is allowed for `map()` to change the type of the items in a list, provided that it's not assigned to a variable with a declared type.
**Version of Vim**
8.2 Included patches: 1-4301
**Environment**
Operating system: Ubuntu 20.04.3 LTS
Terminal: xterm
Value of $TERM: xterm-256color
Shell: zsh 5.8
**Additional context**
At the script level, the exact same code does not give any error:
```vim
vim9script
echo getline(1, 3)->map((_, v) => 3)
```
[3]
---
In compiled code, Vim has no problem with us doing the opposite; i.e. use `map()` to turn a list of numbers into a list of strings:
```vim
vim9script
def Func()
echo range(3)->map((_, v) => 'x')
enddef
Func()
```
['x', 'x', 'x']
If `map()` can do this on an undeclared variable:
number → string
Then it should be allowed to do this too:
string → number
Answers:
username_0: Also, this error is useless:
```vim
vim9script
def Func()
echo [{a: 0}]->map((_, v) => 'x')
enddef
Func()
```
E1013: Argument 2: type mismatch, expected func(...): dict<any> but got func(any, any): string
Because `[{a: 0}]` is not a declared variable. If the error was useful, Vim would give it at the script level too, but it does not:
```vim
vim9script
echo [{a: 0}]->map((_, v) => 'x')
```
['x']
username_0: Although, I don't know how hard it would be to suppress it, because it's given at compile time, when Vim has less information than at runtime.
username_1: These are corner cases. At least changing the list that getline() returns does not cause problems.
So is it useful to give an error? Or are we just making it harder to do what you want to do?
I would think the error is more annoying than useful. Especially the example with range()->map() seems common.
username_1: Taken care of by patch 8.2.4302
Status: Issue closed
|
taye/interact.js | 71811855 | Title: multiple acceptables for drop zone
Question:
username_0: I would like to add three acceptable selectors to a dropzone. Is it possible?
If it is not possible, please suggest a workaround.
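For reference, the `accept` option takes a CSS selector, so a grouped selector (which is what the answer below points to) covers several element types at once. A minimal sketch, with made-up class names:
```ts
import interact from "interactjs";

// One dropzone accepting three different draggable types via a grouped selector.
interact(".dropzone").dropzone({
  accept: ".card, .token, .chip",
  ondrop(event: any) {
    console.log(`${event.relatedTarget.id} dropped into ${event.target.id}`);
  },
});
```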
Status: Issue closed
Answers:
username_1: https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Getting_Started/Readable_CSS#Grouped_selectors |
not-wlan/regionpicker | 589580790 | Title: How ti fix ?
Question:
username_0: [code]
error[E0463]: can't find crate for `protoc_rust`
--> build.rs:1:1
|
1 | extern crate protoc_rust;
| ^^^^^^^^^^^^^^^^^^^^^^^^^ can't find crate
error: aborting due to previous error
For more information about this error, try `rustc --explain E0463`.
[/code]
build.rs:1:1 --->extern crate protoc_rust;
how to fix ?
https://prnt.sc/rocqqi
Answers:
username_1: Use `cargo build` instead of calling rustc directly.
Status: Issue closed
username_0: It still does not work. Can you give me more concrete advice?

https://prnt.sc/rovehl
username_1: Read the error message, it tells you to install `protoc`, the Protobuf compiler. |
svenschneider/youbot-manipulation | 41291871 | Title: Planner is not able to find any solution on the real robot
Question:
username_0: Hi, I am trying to use your package with a real youbot (Hydro+MoveIt!). I tried to use Commander and rviz to send some goals to the planning, but I always get:
[ INFO] [1409147480.844621192]: Planning request received for MoveGroup action. Forwarding to planning pipeline.
[ INFO] [1409147480.845457053]: Starting state is just outside bounds (joint 'arm_joint_1'). Assuming within bounds.
[ INFO] [1409147480.845675307]: Starting state is just outside bounds (joint 'arm_joint_2'). Assuming within bounds.
[ INFO] [1409147480.845857942]: Starting state is just outside bounds (joint 'arm_joint_3'). Assuming within bounds.
[ INFO] [1409147480.917308438]: No planner specified. Using default.
[ INFO] [1409147480.918721677]: LBKPIECE1: Attempting to use default projection.
[ INFO] [1409147480.948382716]: LBKPIECE1: Starting with 1 states
[ INFO] [1409147486.265920437]: LBKPIECE1: Created 80 (42 start + 38 goal) states in 71 cells (42 start (42 on boundary) + 29 goal (29 on boundary))
[ INFO] [1409147486.266674444]: No solution found after 5.346708 seconds
[ INFO] [1409147486.269393646]: Unable to solve the planning problem
When I start rviz, the arm initially shows an init position (see fig. 1), and after loading it is in the correct position (fig. 2), but the first position is always displayed - is that OK?
Can you help me? Do you know how to solve my problem?
Thanks!


Answers:
username_1: Closing as issue seems to be resolved.
Status: Issue closed
|
electron/windows-installer | 284363158 | Title: electron-winstaller only gives the error "the path specified, the file name or both are too long."
Question:
username_0: creating windows installer
Error: Failed with exit code: 1
Output:
Tentando construir o pacote de 'infinityapp.nuspec'.
O caminho especificado, o nome do arquivo ou ambos sao muito longos. O nome de arquivo totalmente qualificado deve ter menos de 260 caracteres e o nome do diretório menos de 248 caracteres.
at ChildProcess.proc.on.code (C:\Users\username_0\Desktop\Desktop\Programing\JavaScript\InfinityApp\node_modules\electron-winstaller\lib\spawn-promise.js:62:16)
at emitTwo (events.js:126:13)
at ChildProcess.emit (events.js:214:7)
at maybeClose (internal/child_process.js:925:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] setup: `node build.js --force`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] setup script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\username_0\AppData\Roaming\npm-cache\_logs\2017-12-24T13_22_37_506Z-debug.log
```
I do not know what to do anymore, can anyone help me?
Answers:
username_1: Any news about this issue?
username_2: The error message says it all; the project is nested in too many folders, making the file path length exceed the limit of allowed characters, which is 260. This was a limitation in Windows that was removed in Windows 10, although you have to explicitly enable the removal of this limit (https://stackoverflow.com/questions/1880321/why-does-the-260-character-path-length-limit-exist-in-windows).
*Solution*
Move your project folder closer to the root directory so that the absolute path to your project does not exceed 260 characters
username_3: @username_2 I am also seeing this issue today. And my deepest project path goes max to 65 characters.
username_1: @username_3 Moving the project to the root folder works for me. Also, I found another solution: if you are using `electron-packager`, just set the `--asar` flag.
username_2: @username_3 Also ensure the output path does not exceed the max amount of characters. Can you post your error, the options you are using, the file path to your folder, and the expected file path of your output?
username_3: Well, sometimes you are not so lucky:
```
Failed with exit code: 4294967295
Output:
System.AggregateException: One or more errors occurred. ---> System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
at System.IO.Path.LegacyNormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.NormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.InternalGetDirectoryName(String path)
at Squirrel.ReleasePackage.<>c__DisplayClass14_0.<extractZipWithEscaping>b__0()
at System.Threading.Tasks.Task.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
at Squirrel.ReleasePackage.CreateReleasePackage(String outputFile, String packagesRootDir, Func`2 releaseNotesProcessor, Action`1 contentsPostProcessHook)
at Squirrel.Update.Program.Releasify(String package, String targetDir, String packagesDir, String bootstrapperExe, String backgroundGif, String signingOpts, String baseUrl, String setupIcon, Boolean generateMsi, String frameworkVersion, Boolean generateDeltas)
at Squirrel.Update.Program.executeCommandLine(String[] args)
at Squirrel.Update.Program.main(String[] args)
at Squirrel.Update.Program.Main(String[] args)
---> (Inner Exception #0) System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
at System.IO.Path.LegacyNormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.NormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.InternalGetDirectoryName(String path)
at Squirrel.ReleasePackage.<>c__DisplayClass14_0.<extractZipWithEscaping>b__0()
at System.Threading.Tasks.Task.InnerInvoke()
at System.Threading.Tasks.Task.Execute()<---
```
username_4: @username_1 I used --asar=true but I'm still having this problem. Is there anything else I need to know?
username_1: @username_4 Did you try moving the project to the root folder? Also, I don't think this makes a difference, but I used the flag as `--asar` instead of `--asar=true`.
username_4: Yes I tried this too...
username_1: Maybe you should update `npm`, delete the `node_modules` dir and try again
username_5: I fixed this by adding the following line to my package.json:
```
"config": {
"forge": {
"packagerConfig": {
"asar": true
},
```
username_6: @username_5 On your advice I added asar and it caused Packaging Application to run for over an hour doing nothing.
username_7: I have expanded paths enabled in my registry.. Is there an option to force disable this check?
Status: Issue closed
username_8: did anyone manage to do it without setting asar to true?
username_9: I tried all the solutions above and nothing seems to work in my case. I have a really small application and I was able to build it on Linux and macOS, but on Windows 10 I get this error.
For now I'm going to use another builder for Win10, but I'll follow this issue.
angular/angular | 68786144 | Title: Get reliable numbers on minified/compiled Angular size
Question:
username_0: We currently don't really have a good overview on this.
@rkirov @tbosch, maybe something for one of you? @tbosch, could we track this as a statistic over time?
Answers:
username_1: Some numbers are available here: http://username_1.github.io/ng2-code-size/
This shows weight distribution across different ng2 components.
Status: Issue closed
username_2: We started tracking the size with #5294
username_2: sample: 
username_2: Less confusing graph:
<img width="1615" alt="screen shot 2015-12-01 at 5 06 56 pm" src="https://cloud.githubusercontent.com/assets/216296/11519307/f84f0bda-984d-11e5-9484-c27d646bdf91.png"> |
un33k/python-slugify | 62666531 | Title: Emoticons fail
Question:
username_0: Hi @username_1, is the conversion of emoticons supported by python-slugify ?
Answers:
username_1: no description, so it must be spam.
Status: Issue closed
username_1: Oh, I see. The answer is no, but it would be a cool feature if you have a pull request.
username_0: Sorry, I don't have a PR. I was researching different slugify libraries and ran into this emoticon problem :confused:
username_1: @username_0 Could you provide a use-case. The string of some sort?
username_0: A test string like this: `abc🎅`. Should maybe output like this `aaa`
Status: Issue closed
username_1: I'll have it in mind during the next release cycle.
Status: Issue closed
|
StompMarket/helpdesk | 327310719 | Title: User Reported Problem - No option to specify size in a new variant |<EMAIL>
Question:
username_0: Partner : CF, User : <EMAIL>
Problem : Added KSH-222 (a new size) to a existing product.
Can't find a option to specify MRP, Size and also the save button is not available on trying to save these details the 2nd time.
Router Link : /catalog/brands/5/products/202146/variants/217213
Answers:
username_1: 
username_2: Neetu, lets discuss this.
username_3: Issue is resolved. It will be live on production by the end of the day.
Status: Issue closed
|
mwtoews/surface-water-network | 904737976 | Title: nan in rgrd using ngwf.set_reach_slope(method='grid_top')
Question:
username_0: Probably just need to add a check to set nans to some value, depending on where the nans are being generated.
Answers:
username_1: Hopefully resolved with c43ae0a05d05e3c2b86ea1f56dddcf2b8f1cd0ea
This will just set them to `min_slope`, which has a default 1./1000 (or 0.001). The NaNs probably get in from [numpy.gradient](https://numpy.org/doc/stable/reference/generated/numpy.gradient.html) along edges that have NaN values in `dis.top`. They might be avoided by defining a full 2D array of DEM values, without any NaN values.
username_0: Seems to work now. No more nans! Thanks.
Status: Issue closed
|
nginxinc/docker-nginx | 192397667 | Title: Add support for libatomic and aio libs for atomic memory and asynic io
Question:
username_0: These require a couple of system packages and a couple of configure flags:
https://books.google.com/books?id=qScxCgAAQBAJ&pg=PA5&lpg=PA5&dq=nginx+with-libatomic&source=bl&ots=RM-UAdOiv7&sig=eiMyokRX0GKpqlLTd_KEQNSVioU&hl=en&sa=X&ved=0ahUKEwiS9tPP7M7QAhWKjlQKHVF9DAkQ6AEIUDAH#v=onepage&q=nginx%20with-libatomic&f=false
Thank you for consideration
Status: Issue closed
Answers:
username_1: It's not needed on amd64. |
MyEtherWallet/MyEtherWallet | 398286384 | Title: "Send" should be clickable
Question:
username_0: - **I'm submitting a ...**
- [x] Feature request
- [x] Bug report
- **Bug Report**
Send is a menu item and should be clickable. Same problem with Message and Contract. The first option should be enabled.
PS. I'm still very much convinced that the old convention "light grey text = inactive" is very intuitive and
Status: Issue closed |
ayyshim/esewa_pnp | 1094247762 | Title: Bug:
Question:
username_0: **Screenshots**
**Environment (please complete the following information):**
- esewa_pnp 1.0.1
- Targeted OS: [e.g. iOS, Android]
- Flutter Version 2.8
- [only if targeted OS is iOS] Xcode Version and Swift Version
**Additional context**
Add any other context about the problem here.
Status: Issue closed
Answers:
username_1: Downgrade your gradle version to 3.5.0
Status: Issue closed
username_0: ok |
haiwen/seafile-iOS | 56159384 | Title: iOS version requirement
Question:
username_0: Hi,
I saw the appstore description and the requirement for iOS 7. I am currently running iOS 6 and the app crashes on start. Is this really caused by the outdated OS? Is there any chance to also support iOS 6?
Thanks
Answers:
username_1: I'm sorry we can't support iOS 6 for reducing maintenance burden.
Status: Issue closed
|
microsoft/BotFramework-Composer | 942549432 | Title: Bot's Windows process not closed when Composer is closed
Question:
username_0: Related? https://github.com/microsoft/BotFramework-Composer/issues/7157
## Describe the bug
Bot `.exe` process doesn't close when Composer does.
## Version
Version: 2.0.0-nightly.258070.287a4dc
Electron: 8.2.4
Chrome: 80.0.3987.165
NodeJS: 12.13.0
V8: 8.0.426.27-electron.0
## OS
- [ ] macOS
- [X] Windows
- [ ] Ubuntu
## To Reproduce
Steps to reproduce the behavior:
1. Start bot
2. Note in Task Manager that `mybot.exe` is running
3. Close Composer by clicking the X at the top-right of the window
4. Note in Task Manager that `mybot.exe` is *still* running
## Expected behavior
Bot process will close when Composer closes.
## Additional context - Odd?
Oddly, if you start the bot on, say, port `3980` and then perform the steps above, you can start up a completely different bot in VS or VS Code on the same port without error. The odd part is that although the new bot appears to be on port `3980` as well, messages will go to the bot that was started in Composer.
Answers:
username_1: @username_0 Yep. They are related. We will have to work on a more graceful VSCode style handling of cleaning up dangling processes.
Status: Issue closed
|
SAP/InfraBox | 409738742 | Title: Character escaping issue in console logs
Question:
username_0: **Describe the bug**
It looks like there is some issue when displaying console logs, where a quote followed by a number creates another character. For instance, the raw line (from the "Console Output" button):
`10:25:09|Warning: Permanently added the RSA host key for IP address '192.168.3.11' to the list of known hosts.`
Is rendered as:
`11:25:09|Warning: Permanently added the RSA host key for IP address 飤.82.118.4' to the list of known hosts.`
**To Reproduce**
Steps to reproduce the behavior:
1. Create a job that clones from Github
2. Click on the Create Jobs job.
3. Open up the "Prepare Job" section.
4. See the Chinese character at the "Permanently added the RSA host key" line.
5. Click on "Console Output".
6. Notice the IP address number.
**Expected behavior**
The quote and number should be visible instead of the Chinese character for a person eating rice.
**Desktop (please complete the following information):**
- OS: macOS 10.14.3
- Browser Chrome 72.0.3626.96 (Official Build) (64-bit)
**Additional context**
Note that it seems the character comes from `&#39;` (the HTML entity for the quote) losing its semicolon, so the following digits 140 are read as part of the numeric entity and form the character shown (see: https://en.wiktionary.org/wiki/%E9%A3%A4). Running a relatively recent master tag of Infrabox.
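The arithmetic checks out - a quick TypeScript snippet (purely illustrative) shows the collision:
```ts
// '140.82.118.4' escaped as &#39;140.82.118.4 loses the entity's semicolon,
// and &#39140 then parses as the single code point U+98E4:
console.log(String.fromCodePoint(39140)); // "飤"
console.log(0x98e4 === 39140);            // true
```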
Status: Issue closed |
conorhennessy/php-Boy | 353905479 | Title: Pi is 81 degrees Celsius 🔥🔥
Question:
username_0: Probably should slap a heat sink on the pi and the LVDS and make the enclosure a bit more.. thermodynamic.
so I probs cut some slots at the top 🤷
open to suggestions
Answers:
username_1: Water cooling is the only option.
username_0: maybe some copper heat pipes... because a heatsink doesn't actually remove heat from the enclosure
username_0: Added aluminium heatsinks |
SpartaSystems/holdmail | 270797040 | Title: Add the ability to automatically forward mail in whitelist
Question:
username_0: ##### Question/Issue Overview
It would be advantageous to have a whitelist of email addresses that would automatically be forwarded to the whitelisted email address
##### Expected Behavior
Email received where the recipient is in the whitelist will have mail forwarded automatically
##### Current Behavior
All mail received is stored and can be manually forwarded.
Answers:
username_1: Nice idea - Might want to consider supporting wildcard domains etc.
Would prefer to avoid an admin interface (at least for now - do we need one?); we could achieve this in the interim through the configuration in /etc/holdmail.properties.
BTW - probably some overlap with https://github.com/SpartaSystems/holdmail/issues/29 |
MRPT/mrpt | 640336978 | Title: Invalid target pos in PlannerSimple2D.computePath throws an error
Question:
username_0: When given an invalid target position, it throws an error instead of setting notfound = true (or some other status) so that the script itself could handle it and not be interrupted.
Error:
Origin: (2.866,2.847,90.47deg)
Target: (-3.000,3.000,90.00deg)
Searching path...terminate called after throwing an instance of 'std::logic_error'
what(): /build/mrpt-KfsiMU/mrpt-2.0.4~snapshot20200518-1423-git-3e8ecd67-bionic/libs/nav/src/planners/PlannerSimple2D.cpp:60: [void mrpt::nav::PlannerSimple2D::computePath(const mrpt::maps::COccupancyGridMap2D&, const mrpt::poses::CPose2D&, const mrpt::poses::CPose2D&, std::deque<mrpt::math::TPoint2D_<double> >&, bool&, float) const] Assert condition failed: target.x > theMap.getXMin() && target.x < theMap.getXMax() && target.y > theMap.getYMin() && target.y < theMap.getYMax()
Status: Issue closed
Answers:
username_1: Done! |
dojo/widgets | 310590450 | Title: Consider changing onmouseup action to something else in DatePicker radio buttons
Question:
username_0: **Bug**
Package Version: RC1
Trying to use Calendar in MSIE 11, with the theme from @dojo/themes. Thing is, MSIE 11 does not support appearance property, so the current "trick" used in the theme that works on all other browsers won't work there.
And if I set the radio button to disabled via CSS, it never gets the onmouseup event, but it does get the onchange event due to clicking on the label, which is currently being used. That results in the Year/Month dropdown never being closed, and the calendar rendering incorrectly.
I've tried moving the function call from the onmouseup event into the onchange handler, and it seems to work here. I'm not sure why onmouseup is being used, but I guess there's a good reason.
Please provide a desirable solution.
Answers:
username_1: Can't reproduce on IE11 after adding `opacity: 0` to the radio inputs. It cannot close on the `onChange` event, since that would result in the menu being closed every time a keyboard user arrowed to a new month or year.
There are two other issues with the Calendar, which might be obfuscating the issue:
1. IE11 styling problems, issue here: https://github.com/dojo/themes/issues/16
2. Cross-browser menu bugs, issue here: #535
Closing this issue for now, but feel free to reopen if the IE11 bug is still present after the above issues are resolved.
Status: Issue closed
|
haltu/muuri | 896534485 | Title: Questions on future versions
Question:
username_0: Man, I love this. I can't believe I'd forgotten/ignored it. Such delightful and organized code.
I have some questions:
* **Is it possible to create a MuuriLite version that excludes Packer and all the dragging features?** Layout + Filter + Sort should do, for hopefully less than half the size. The other part, strictly for drag'n'drop layout, would work fine separately IMO. My point is that in most cases the need to have both worlds in a single file is overkill at 80 kB of script.
* **Do you guys have plans to do an ESLint-valid ES6+ code base update in the future?** I've been working on exactly that in the last few days and I'm a bit confused; I surely need to invest more time into it. I'm getting a vibe that the two parts [layout+filter+sort|drag] are interdependent, which is a bit counterproductive IMO.
* **Would you guys consider creating a simple API interface to enable easier initialization?** In my mind there is a data-API-enabled interface `class MuuriInterface extends Grid()` to allow `new MuuriInterface(element, options)`, perhaps a web-component-like type of script, to remove the need for files like [this](https://muuri.dev/js/demo-grid.js?v103).
Thanks for any reply :)
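For what it's worth on the third point, a layout + filter + sort setup with the plain constructor is already fairly compact today. A sketch, assuming current Muuri options (`dragEnabled` off means none of the drag machinery is wired up, though it doesn't shrink the bundle); the data attributes are made up:
```ts
import Muuri from "muuri";

const grid = new Muuri(".grid", { dragEnabled: false, layoutOnInit: true });

// Filter and sort using data attributes on the item elements.
grid.filter((item) => item.getElement()!.dataset.color === "blue");
grid.sort((a, b) =>
  (a.getElement()!.dataset.title ?? "").localeCompare(b.getElement()!.dataset.title ?? "")
);
```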
Answers:
username_1: Making Muuri as easy to use as possible is always high on the priority list, but I'm not sure what you mean specifically. I think a Muuri web component could be its own library and repo perhaps. |
bcgov/entity | 806852477 | Title: Create release notes for Feb 17th release
Question:
username_0: #### Description
Create release notes for the Feb 17th release.
#### Tasks
- [ ] When ticket has been created, post the ticket in RocketChat '#Operations Tasks' channel
- [ ] Add **entity** or **relationships** label to zenhub ticket
- [ ] Add 'Priority1' label to zenhub ticket
- [ ] Assign zenhub ticket to milestone: current, and place in pipeline: sprint backlog
- [ ] Reply All to IT Ops email and provide zenhub ticket number opened and which team it was assigned to
- [ ] Dev/BAs to complete work & close zenhub ticket
- [ ] Author of zenhub ticket to mark ServiceNow ticket as resolved or ask IT Ops to do so
Answers:
username_0: Release notes created and uploaded to SP: https://citz.sp.gov.bc.ca/sites/SBC/REG/Projects/MVSM/_layouts/15/DocIdRedir.aspx?ID=S52QENDTEJAE-1724982671-3654
@jyoti3286 @lmullane please confirm this is the full scope of release and review?
Notes are based upon Zenhub Release (Relationships 2.4.2.9 - Manual Account Suspensions)
@username_1 can you please review?
username_1: @jyoti3286 @username_0 @username_2 I chatted with Amit and he confirmed that both BCOL HD and BCOL Admin will be provided with access to suspend accounts, so we need to ensure these teams are aware of this responsibility and functionality. I'll create a separate ticket for this communication.
(noted in [6121](https://app.zenhub.com/workspaces/entity-5bf2f2164b5806bc2bf60531/issues/bcgov/entity/6121))
username_1: I added some comments to the doc on the SharePoint.
username_1: If the payment methods release goes in on Sunday, maybe we can combine these release notes with the note stating the new payment methods are now available?
username_2: Updated release notes, added all the links to job aids, info sessions, ppt, videos.
username_0: Release notes updated again to include full lists of available banks and to update in spots which refer to the release happening in the future.
username_0: Notes and dist list have been sent to SBC IT Ops to send.
username_0: The announcement has been sent by IT Ops.
Status: Issue closed
|
trentm/node-bunyan | 147564175 | Title: Bunyan crashing production server for some reason
Question:
username_0: Our production server, which uses bunyan 1.8.0, has been periodically crashing hard lately. I found this error on the stdout when it crashed:
```
/www/node_modules/bunyan/lib/bunyan.js:1383
throw new TypeError('cannot start a rotation when already rotating');
^
TypeError: cannot start a rotation when already rotating
at RotatingFileStream.rotate (/www/node_modules/bunyan/lib/bunyan.js:1383:15)
at null._onTimeout (/www/node_modules/bunyan/lib/bunyan.js:1268:28)
at Timer.unrefdHandle (timers.js:312:14)
```
Here is how I'm setting up the logger:
```
var logDirectory = '/app/runtime/speed/logs';
fs.existsSync(logDirectory) || fs.mkdirSync(logDirectory);
logger = bunyan.createLogger({
name: "Shot planner Experiment Editor",
streams: [
{stream: process.stdout},
{
type: 'rotating-file',
path: logDirectory + "/logfile.log",
period: "1d",
count: 5
}
]
});
```
Any ideas on what might be causing this? The server runs for several days without problems, before this happens.
Thanks,
John
Answers:
username_0: I was hoping I would get a reply -- as I said, this is impacting our production server. Any ideas on what might be going wrong here?
username_1: After giving the code a quick look, here's a quick guess until someone with better knowledge can take a look:
[RotatingFileStream.prototype._setRotationTimer()](https://github.com/username_2/node-bunyan/blob/master/lib/bunyan.js#L1258) doesn't cancel an existing timer before it sets a new one. That means that if for some reason some code would result in a new timer to be set up before an existing one is triggered, then two timers would be triggered and you could probably end up with the issue you're having.
Checking that file further it seems that the only two places in that file that call `_setRotationTimer()` are `_setupNextRot()` and `rotate()`, and the latter also calls the former.
Looking further it shows that the constructor actually calls both `rotate()` and `_setupNextRot()` if it rotates at the start (if `rotateAfterOpen` becomes true). That seems like a case that, if not handled properly could cause two rotations to be set up which would in turn cause the error that you see.
A way to at least mitigate the effects of whatever error you're running across could probably be by adding this to `_setRotationTimer()`, right before it sets the new `this.timeout` value:
```javascript
if (this.timeout) { clearTimeout(this.timeout); }
```
That way the error you run across would at least be much harder to trigger and I can see no harm caused by adding it after the quick look I just had at the code.
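To make the shape of that fix concrete, here is a generic illustration - not bunyan's actual class, just the pattern of clearing any pending timer before arming a new one:
```ts
// Illustrative only: the pattern behind the suggested guard.
class RotationScheduler {
  private timeout?: NodeJS.Timeout;

  schedule(rotateAtMs: number, rotate: () => void): void {
    if (this.timeout) {
      clearTimeout(this.timeout); // never allow two rotation timers at once
    }
    this.timeout = setTimeout(rotate, Math.max(0, rotateAtMs - Date.now()));
    this.timeout.unref(); // keep the timer from holding the process open
  }
}
```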
username_0: Thanks, username_1. That sounds like the flavor of the problem. My build process is pulling from npmjs though; I don't have a local version. Is it possible to get a patch that adds that?
username_1: @username_0 You can fork the repo here on GitHub, add the fix and temporarily change your dependency to the git URL of that repo, and if it works out then you could also use the branch on that repo to submit a PR and once that PR has been released you can move back your dependency to a npm version range.
username_0: I've made the change on a fork, and will try it out in our next release.
username_2: @username_3 ideas? suggestions?
username_3: I've not come across this particular one, sounds interesting. You might want to check out: https://www.npmjs.com/package/bunyan-rotating-file-stream
I've been running this under stress for several weeks now and I'm getting a bit more confidence in it. I'd be interested if it worked for you, @vmerlin
It handles that particular issue in a different way, since there are a few more triggers and concerns being dealt with. Hopefully it'll be a good fit.
Status: Issue closed
username_2: I think this is the same bug as fixed by pull #386. That will be in version 1.8.1. |
slevomat/coding-standard | 502689164 | Title: New Release Timeline
Question:
username_0: I've been patiently awaiting a release to come out with my two accepted PR's, #656 and #657 since April 4th - any update on when there might be a tagged release?
Answers:
username_1: We at https://github.com/doctrine/coding-standard are also interested in those ones :)
username_2: Not to pile on, but [webonyx/graphql-php](https://github.com/webonyx/graphql-php) could make good use of the new array shapes support. May I ask why there hasn't been a new release since March?
username_3: Nobody paid me for the release.
username_0: I'd be happy to help maintain this, if time/manpower is the issue.
username_2: I'll call your bluff. Produce a donation link and I will produce a donation. 😄
Status: Issue closed
|
liquidweb/LiquidWeb-WHMCS-Plugin | 171706751 | Title: [BUG] Order summary displaying hidden options
Question:
username_0: For customizable options you have the ability to hide/remove them from the public-facing list. When you have set a customizable option to be hidden, the option should not be included or shown in the Order Summary either.
Pics to come.
Answers:
username_0: As you can see in the following picture I have a number of options hidden:

But for some reason these items are being shown on the Order summary here:

Items outlined with red dots should not be shown at all, as they are hidden from the customer's selection.
username_1: There is no issue. Hidden configurable options do not display in the cart.
Status: Issue closed
|
kubernetes/git-sync | 409916491 | Title: git-sync ssh as an initContainer
Question:
username_0: Following https://github.com/kubernetes/git-sync/pull/144 :
I don't want to force all volumes in the pod to be limited to user 65533, so how can we use it as an initContainer ?
Answers:
username_1: They don't need to be limited to a user for the whole pod. Each container can run as any user they want. Setting the fsGroup should allow them all to read the git contents.
I don't have a demo of that, but it should work - does it not?
Alternately, you can run as an init container and manually chown/chmod if you want, but that requires more privilege.
username_2: Simplified example running in production:
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: git-sync-example
namespace: tmp
spec:
revisionHistoryLimit: 2
replicas: 2
strategy:
rollingUpdate:
maxSurge: 2
maxUnavailable: 0
selector:
matchLabels:
project: example.git.sync
template:
metadata:
labels:
project: example.git.sync
spec:
securityContext:
fsGroup: 65533
containers:
- name: example
image: example/application:1.0.1
securityContext:
readOnlyRootFilesystem: true
capabilities:
drop: ["all"]
args: [ "--conf=/etc/config.conf" ]
readinessProbe:
httpGet:
path: /healthz
port: 8081
httpHeaders:
- name: X-Probez
value: readiness.k8s
initialDelaySeconds: 120
livenessProbe:
httpGet:
path: /healthz
port: 8081
httpHeaders:
- name: X-Probez
value: liveness.k8s
initialDelaySeconds: 140
ports:
- name: health
containerPort: 8081
volumeMounts:
- name: config
mountPath: /etc/config
- name: source
mountPath: /etc/source
- name: git-sync
image: username_1/git-sync:thtest #currently using our own build, but this test build is available from username_1
securityContext:
[Truncated]
- name: GIT_SYNC_SSH
value: "true"
- name: GIT_SYNC_WAIT
value: "60"
- name: GIT_SYNC_ROOT
value: /etc/source/
- name: GIT_SYNC_DEST
value: files
volumes:
- name: config
configMap:
name: config
- name: git-secret
secret:
secretName: git-secret
defaultMode: 288
- name: source
emptyDir: {}
```
username_1: Maybe put that in a doc in this repo? It would be nice if it was actually
runnable (no "example app" images :)
Then we can close this?
username_2: Was an internal app so couldn't use it here. Can morph it into an example and push as docs. Might take a bit as I'm overloaded, but not forgotten.
Status: Issue closed
username_0: Thank you for your answers.
I still have an issue using the fsgroup : `ERROR: can't configure SSH: Permissions -r--r----- for SSH key are too open. It is recommended to mount secret volume with `defaultMode: 256` (decimal number for octal 0400).`
my deployment definition :
```
[...]
spec:
initContainers:
- name: git-sync
image: k8s.gcr.io/git-sync:v3.1.0
env:
- name: GIT_SYNC_SSH
value: "true"
- name: GIT_SYNC_REPO
value: {{ .Values.persistentVolume.copyFromGithub.repo | quote }}
- name: GIT_SYNC_ROOT
value: "/app"
- name: GIT_SYNC_DEST
value: "backup"
volumeMounts:
- name: "git-secret"
mountPath: "/etc/git-secret"
[...]
securityContext:
runAsUser: 65533 # to allow read of ssh key
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: ["all"]
containers:
[...]
volumes:
[...]
- name: git-secret
secret:
secretName: git-secret
defaultMode: 288
[...]
securityContext:
fsGroup: 65533
```
Another thing I can't figure out: changing the `defaultMode` back to 256 produces the exact same error.
I don't have the error when I remove the fsGroup, but then I get `fatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"`
username_1: Sorry, I need to build a new release. If you build from head or if you use
username_1/git-sync:thtest (temporarily) it will work.
username_1: I have kicked off a build of 3.1.1 and am waiting for it to be promoted
through the image registry. Should be up within a few hours.
username_0: Thx a lot 👍
username_3: Is this working? And is there a complete example somewhere for initContainers?
username_1: Look at docs/ssh.md, that example should work except you should use the
`-one-time` flag
username_3: Is there a description of this flag somewhere?
username_1: ```
$ git-sync
...
-one-time
exit after the initial checkout
...
```
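For illustration, here is a minimal, untested sketch of the initContainer pattern being discussed: git-sync checks the repository out once with the `-one-time` flag shown above and exits, the pod-level `fsGroup` makes the checkout readable by the main container, and the SSH secret is mounted at /etc/git-secret as docs/ssh.md describes. The repository URL, image tags, and the app container are placeholders, and the exact secret `defaultMode` you need depends on the git-sync version (see the permission discussion earlier in this thread):
```
apiVersion: v1
kind: Pod
metadata:
  name: git-sync-init-example
spec:
  securityContext:
    fsGroup: 65533                        # lets the app container read the checkout
  initContainers:
  - name: git-sync
    image: k8s.gcr.io/git-sync:v3.1.1     # placeholder tag; use a release with the SSH permission fix
    args: ["-one-time"]                   # exit after the initial checkout
    env:
    - name: GIT_SYNC_SSH
      value: "true"
    - name: GIT_SYNC_REPO
      value: "git@github.com:example/repo.git"   # placeholder repository
    - name: GIT_SYNC_ROOT
      value: /tmp/git
    - name: GIT_SYNC_DEST
      value: repo
    securityContext:
      runAsUser: 65533
    volumeMounts:
    - name: source
      mountPath: /tmp/git
    - name: git-secret
      mountPath: /etc/git-secret
      readOnly: true
  containers:
  - name: app
    image: busybox:1.31                   # stands in for the real workload
    command: ["sh", "-c", "ls -l /app/source/repo && sleep 3600"]
    volumeMounts:
    - name: source
      mountPath: /app/source
      readOnly: true
  volumes:
  - name: source
    emptyDir: {}
  - name: git-secret
    secret:
      secretName: git-secret              # expected to hold the `ssh` key (and known_hosts) per docs/ssh.md
      defaultMode: 288                    # 0440; adjust to what your git-sync version accepts
```
Because of `-one-time`, the init container exits once the checkout completes, so the main container starts with the files already present in the shared emptyDir.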
username_4: I also still get `ERROR: can't configure SSH: Permissions -r--r----- for SSH key are too open. It is recommended to mount secret volume with defaultMode: 256 (decimal number for octal 0400)`
My config is the same as in the example. What is wrong?
username_1: Did you follow https://github.com/kubernetes/git-sync/blob/master/docs/ssh.md, especially https://github.com/kubernetes/git-sync/blob/master/docs/ssh.md#step-2-configure-poddeployment-volume?
username_4: Sorry, I deleted the comment because I figured out that I was using an old image version, so it was my mistake.
username_5: So while using the git-sync sidecar with an SSH key, the best way is to set the fsGroup to 65533? Can't we instead set the git-sync sidecar's runAsUser to match the main container's UID and set `GIT_SYNC_ADD_USER` to true? It seems to work well for me... can you suggest which should be the preferred or better way? :)
username_1: If you run as another user, SSH demands that you have an entry in /etc/passwd. We added a `--add-user` flag for this, specifically. It's kind of gross, but I couldn't find anything better.
username_5: I have runAsUser set to 5000 on both my airflow container and the git-sync sidecar, and fsGroup set to 5000. I have enabled the --add-user flag and the SSH secret has a default mode of 288. This works well.
Is this a better approach compared to running airflow as 5000, git-sync as 65533, and fsGroup set to 65533?
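To make the comparison concrete, here is a hedged sketch of the setup described in the previous comment: both containers run as the same non-root UID, `GIT_SYNC_ADD_USER` gives git-sync the /etc/passwd entry SSH insists on, and the shared fsGroup keeps the checkout readable. The UID, image tags, app image, and repository URL are illustrative only:
```
apiVersion: v1
kind: Pod
metadata:
  name: git-sync-add-user-example
spec:
  securityContext:
    fsGroup: 5000
  containers:
  - name: app
    image: busybox:1.31                   # stands in for the real workload (e.g. airflow)
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      runAsUser: 5000
    volumeMounts:
    - name: source
      mountPath: /app/source
      readOnly: true
  - name: git-sync
    image: k8s.gcr.io/git-sync:v3.1.1     # placeholder tag
    securityContext:
      runAsUser: 5000                     # same UID as the main container
    env:
    - name: GIT_SYNC_SSH
      value: "true"
    - name: GIT_SYNC_ADD_USER
      value: "true"                       # adds an /etc/passwd entry for this UID, which SSH requires
    - name: GIT_SYNC_REPO
      value: "git@github.com:example/repo.git"   # placeholder repository
    - name: GIT_SYNC_ROOT
      value: /tmp/git
    - name: GIT_SYNC_DEST
      value: repo
    - name: GIT_SYNC_WAIT
      value: "60"
    volumeMounts:
    - name: source
      mountPath: /tmp/git
    - name: git-secret
      mountPath: /etc/git-secret
      readOnly: true
  volumes:
  - name: source
    emptyDir: {}
  - name: git-secret
    secret:
      secretName: git-secret
      defaultMode: 288                    # 0440, as in the setup described above
```
Whether you standardize on the app's UID (as sketched here) or on git-sync's default 65533 is, as the next reply notes, largely a policy choice.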
username_1: I don't think it is better or worse. Some people have policies or
preferences or other reasons to do it one way or the other.
username_6: @username_5 can you share an example for your airflow setup with ssh keys? |
djfdat/sci-hub-scholar | 697044906 | Title: Removed tracking data
Question:
username_0: Add the ability to removing tracking data from links.
`<div class="gs_or_ggsm" ontouchstart="gs_evt_dsp(event)" tabindex="-1"><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC128377/" data-clk="hl=en&sa=T&oi=gga&ct=gga&cd=2&d=8785873905806038757&ei=2BtZX9WbGtPryATHrKTwAQ" data-clk-atid="5VbxjMSx7XkJ"><span class="gs_ctg2">[HTML]</span> nih.gov</a></div>` |
ElvUI-WotLK/ElvUI_AddOnSkins | 333043532 | Title: Skin Request: TrinketMenu/Поддержка скина аддона TrinketMenu
Question:
username_0: Is it possible to make TrinketMenu compatible with ElvUI.
Thanks.
https://a.radikal.ru/a24/1806/ab/f96e18b518ad.jpg
Answers:
username_1: Yes i have added support for it in https://github.com/ElvUI-WotLK/ElvUI_AddOnSkins/pull/93
Status: Issue closed
|
naser44/1 | 145528832 | Title: فيديو: امرأة تطلق النار على صراف آلي لسرقته
Question:
username_0: <a href="http://ift.tt/1RGKXU6">فيديو: امرأة تطلق النار على صراف آلي لسرقته</a> |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.