repo_name | issue_id | text
---|---|---
katebliz/bonhamlab | 176918674 | Title: Hooray!
Question:
username_0: @username_1 - You don't know how happy it made me to get the alert that you had started using github! Yay!
Answers:
username_1: 
Slowly but surely one becomes the mighty Charizard ;)
Status: Issue closed
|
jeremymailen/kotlinter-gradle | 515575805 | Title: .editorconfig disabled_rules for import-ordering is ignored
Question:
username_0: Since version "2.1.2", disabled_rules in .editorconfig for import-ordering has no effect.
```
[*.{java, kt}]
disabled_rules=import-ordering
```
`[import-ordering] Imports must be ordered in lexicographic order without any empty lines in-between`
Answers:
username_1: Thank you for the report and your patience @username_0.
You need to remove the space in the file selector and it will work: `[*.{java,kt}]`. Actually the same quirk exists in prior releases too.
I agree it's annoying and confusing, but it's a result of the editorconfig implementation in `ktlint` and will need a fix there.
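For reference, the working entry is simply the one from the report with the space removed:
```
[*.{java,kt}]
disabled_rules=import-ordering
```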
Status: Issue closed
|
yarnpkg/berry | 547185931 | Title: [Bug] Fetch step of yarn install with pnp is really slow
Question:
username_0: - [ ] I'd be willing to implement a fix
**Describe the bug**
I have gone through the process of switching to use PNP for a React app. When I ran `yarn install` it recommended I try out v2 of Yarn. When I ran `yarn install` with v2, all of my deps were not in the cache so they were downloaded. The result of this step was:
```
➤ YN0000: ┌ Fetch step
Many lines of:
➤ YN0013: │ type@npm:2.0.0 can't be found in the cache and will be fetched from the remote registry
➤ YN0000: └ Completed in 11.9m
```
**To Reproduce**
With a `package.json` with a large number of dependencies:
```
yarn --pnp
yarn policies set-version berry
yarn install
```
**Environment if relevant (please complete the following information):**
- OS: Linux
- Node version v13.5.0
- Yarn version 2.0.0-rc1
Answers:
username_1: We have a fix in master that should help; can you try running `yarn cache clear --all && yarn set version from sources && yarn`?
Note that cold installs will still be slower than the v1 due to us having to transform the fetched packages from tgz into zip (and even though we're doing it through wasm, it still has a cost). Hopefully registries will eventually catch up and offer to download packages as zip out of the box 🙂
(Note that we need zip because otherwise we would need to uncompress the whole archive when accessing a single file at runtime, which would be very wasteful for most Node projects)
username_2: Had the same issue; installing latest from master makes the fetch step much faster (1m vs 5m+)
Thanks for the fix!
username_0: I can confirm that the time to fetch my dependencies has come down to 2.1m from 11.9m. Thanks!
username_1: Awesome! I'll close this issue then 🎉
Status: Issue closed
|
ctimmerm/axios-mock-adapter | 258281338 | Title: Need Clarification for use with componentDidMount
Question:
username_0: Maybe I am just a bit illiterate when it comes to testing, but I was hoping to get some clarification and/or help on writing tests for calls made by `axios` from within the `componentDidMount` function for React components.
Here is a simple `componentDidMount` function which uses `axios`:
```js
componentDidMount() {
axios.get(`http://www.reddit.com/r/${this.props.subreddit}.json`)
.then((response) => {
const posts = response.data.data.children.map(obj => obj.data)
this.setState({ posts })
})
.catch((error) => {
this.setState({ errors: this.state.errors.concat(['GET Request Failed'])})
})
}
```
and here is the setup and spec for the functionality:
```js
it('returns data from API as expected', () => {
let mock = new MockAdapter(axios)
mock.onGet('http://www.reddit.com/r/test.json')
.reply(
200,
{
data: {
children: [
{ data: { id: 1, title: 'A Post' } },
{ data: { id: 2, title: 'Another Post' } }
]
}
}
)
let posts = mount(<AjaxDemo subreddit="test" />).update().state().posts
expect(posts).to.eql([{id: 1, title: 'A Post'},{id: 2, title: 'Another Post'}])
})
```
The problem I run into is that within the spec, `state` never changes.
As far as I understand it, when the `AjaxDemo` component gets mounted, the `axios` call it makes will receive a response from `mock`, and update state according to the response from `mock`. This would mean that `state` should have been set, because the `axios` promise resolved and assigned an array of values to `state.posts`. Is there something I'm missing here? I feel like it's going to be embarrassingly obvious, or maybe there is a better way to split this testing up - maybe it's leaning too much into the integration test arena.
Any help on this would be much appreciated!
An extra note:
I tried running this code with
```js
mount(<AjaxDemo subreddit="test" />).update().state().posts
```
as well as
```js
mount(<AjaxDemo subreddit="test" />).state().posts
```
and the call to `update()` did nothing noticeable.
Answers:
username_1: The problem is probably that:
* you mount the component
* you fire off a request, which is asynchronous
* you check the state before the request promise resolves
I'll close this issue since it's not related to axios-mock-adapter.
Status: Issue closed
username_2: @username_1 would be really nice if a solution was provided instead of a list of problems...
It's nice to check the response and all, but what about the component and its state, and the output of response-based elements? Don't we want to test "deeper" than just the response?
Appreciate your work, just really would like to know how to test deeper myself.
username_1: It was not a list of problems, but a sequence of events that explains the problem.
It is not a problem that is related to axios-mock-adapter and the way you handle it depends on the testing framework and other libraries you're using, and how you structure your code.
If you're using mocha, the simplest solution might be to wrap your assertion in `setImmediate` or `setTimeout`.
```js
it('does something', function(done) {
// ... your test code ...
setImmediate(function() {
expect(/* ... */);
done();
});
});
```
username_0: @username_2 Thanks for the tips! I actually rethought my approach after the correct albeit blunt response from @username_1 and just forgot to come back and provide the solution I came up with. I understand that this isn't an issue with the functionality of `axios-mock-adapter`, but figured since it's likely a *very* common use case that it would be something that should be discussed here for others to be able to skip the beating-head-against-wall phase of figuring it out.
## Separation of Concerns
I ended up writing the methods that make use of `axios` completely decoupled from the React code, then stubbed out those methods when testing how the React component will respond to the data.
### Build a Method to Make the AJAX Call
Let's say I write a method called `myApiGet` that uses `axios.get`. I'll test that method on its own using `axios-mock-adapter` to make sure that it does what I expect it to with whatever data is returned as a result of the `axios` call. For example I might test to ensure that `myApiGet` returns an object that contains the HTTP response code along with the expected data.
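A minimal sketch of that idea (the wrapper name `myApiGet`, its return shape, and the test are illustrative assumptions, not code from this thread):
```js
// api.js - a thin, reusable wrapper around axios.get
import axios from 'axios'

export function myApiGet(url) {
  return axios.get(url)
    .then(response => ({ status: response.status, data: response.data }))
}

// api.test.js - testing the wrapper in isolation with axios-mock-adapter
import MockAdapter from 'axios-mock-adapter'
import { myApiGet } from './api'

it('returns the response code and data in a predictable shape', () => {
  const mock = new MockAdapter(axios)
  mock.onGet('/posts.json').reply(200, { posts: [] })
  // returning the promise lets the test runner wait for it to resolve
  return myApiGet('/posts.json').then((result) => {
    expect(result.status).to.equal(200)
    expect(result.data).to.eql({ posts: [] })
  })
})
```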
### Use the Predictable AJAX Call Method in `componentDidMount`
For the React components, I stub out what's expected to be returned by `myApiGet` using `sinon`, and use this stubbed response to test the behavior of the component.
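For example (a hedged sketch; the module path and the resolved value are assumptions):
```js
import sinon from 'sinon'
import * as api from './api'

// Stub the wrapper so the component test never touches the network
// and always receives data in the agreed-upon shape
const stub = sinon.stub(api, 'myApiGet').resolves({
  status: 200,
  data: [{ id: 1, title: 'A Post' }]
})
```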
### Bennies
This seems to be the best approach to separate concerns that I could come up with and offers the following benefits:
* The method that uses `axios` can be reused throughout the codebase and can be expected to return data in a predictable format
* The code in the component is tested against the expected behavior of the method that uses `axios`, and thus only needs to be tested for what it does with the data returned
### Conclusion
What I was running into was that if I wrote the `axios` calls into `componentDidMount`, I needed to test that the call was not only returning the data as expected but also that the component is rendering the data as anticipated all within the same spec, and it just seems messy. The way I understand it:
1. A React component should do one thing well: render data in the view layer.
2. The methods used for AJAX calls should do one thing well: return data in a predictable format.
Once again, this isn't necessarily an issue with the functionality of `axios-mock-adapter`, but hopefully this writeup will help others dealing with similar problems.
[Here](https://github.com/airbnb/enzyme/issues/346#issuecomment-347741622) is a thread where we discussed how to deal with rendering data that should be available from `componentDidMount`. |
xws-bench/battles | 133430909 | Title: Computer:51 Human:149
Question:
username_0: Bossk*Calculation*Outlaw_Tech*Recon_Specialist*K4_Security_Droid*Heavy_Laser_Cannon.Graz_the_Hunter.Binayre_Pirate.Binayre_Pirate.VSNight_Beast*Twin_Ion_Engine_Mk._II.Backstabber.Dark_Curse.Scourge*Decoy.Omicron_Group_Pilot*Sensor_Jammer.
http://bit.ly/20RMUXp |
yiisoft/yii2 | 437550664 | Title: "session_set_cookie_params()" does not handle session lifetimes correctly
Question:
username_0: ### What steps will reproduce the problem?
The `session_set_cookie_params` PHP function does not handle session lifetimes correctly.
https://www.php.net/manual/en/function.session-set-cookie-params.php#100657
`web.config`
```php
'session' => [
    'class' => 'yii\web\DbSession',
    'db' => 'db',
    'sessionTable' => 'sessions',
    'cookieParams' => [
        'secure' => false,
        'httponly' => false,
        'lifetime' => time() + (365 * 24 * 60 * 60), #one year
    ],
    'timeout' => 365 * 24 * 60 * 60, #one year
],
```
### What is the expected result?
The session cookie is set with the configured lifetime.
### What do you get instead?
The cookie timeout is not set. (When the browser is closed, the cookie is deleted.)
### Additional info
| Q | A
| ---------------- | ---
| Yii version | 2.0.18
Answers:
username_0: Hi,
please set this config in the `web.config` file:
```php
'session' => [
    'class' => 'yii\web\Session',
    'cookieParams' => [
        'domain' => '.example.com',
    ],
],
```
After running, the domain `.example.com` is not set!!!
I traced this problem and finally found it in [Session.php:404](https://github.com/yiisoft/yii2/blob/331d9971857d8c95cf1bf12b1a2f9f8fa83f1125/framework/web/Session.php#L404):
```php
session_set_cookie_params($data['lifetime'], $data['path'], $data['domain'], $data['secure'], $data['httponly']);
```
`session_set_cookie_params` returns false.
username_1: Can you check if the session is started by the time `session_set_cookie_params()` is called?
Status: Issue closed
username_1: I've checked, and for the clean basic application it works well, setting the lifetime properly. BTW, you have an error there: lifetime is a number of seconds, not a timestamp, so there's no need to add `time()`.
username_0: Yes, I edited the first post.
I finally found the problem:
`session_set_cookie_params` returns false if you set the settings below in php-fpm.
```
session.cookie_lifetime
session.cookie_path
session.cookie_domain
```
username_1: fpm? What do you mean? How to reproduce it?
username_0: Yes, php-fpm.
In my last test for this problem, I created a `test.php` alongside Yii's `index.php`.
test.php
```php
session_set_cookie_params(1000,'/','.example.com');
session_start();
```
After running `test.php`, `session_set_cookie_params` returned `false`, so this is not a Yii2 bug.
I searched the settings (PHP, php-fpm, etc.) on the server and found the setting below in `mysite.conf` (a php-fpm config):
```
php_admin_value[session.cookie_lifetime] = 1500
```
If you set this PHP setting via the `php_admin_value` flag in php-fpm, `session_set_cookie_params` returns false.
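(A minimal way to observe this, sketched from the `test.php` above; the logging is illustrative:)
```php
<?php
// When php-fpm pins session.cookie_lifetime via php_admin_value,
// session_set_cookie_params() has no effect and returns false.
$ok = session_set_cookie_params(1000, '/', '.example.com');
if ($ok === false) {
    error_log('session_set_cookie_params() failed: cookie params are locked by php_admin_value');
}
session_start();
```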
username_1: Ah, right. That's PHP behavior defined for `php_admin_value`: if it's used, the application can't change the value later. |
spujadas/elk-docker | 125416784 | Title: Add beats output routing
Question:
username_0: If you have a beats input plugin, if you send the data directly to elasticsearch it works as expected with the sample dashboards. If you configure beats to send data to the logstash endpoint, they do not because there is no output routing to the correct index names.
Adding:
```
output {
elasticsearch {
hosts => "localhost:9200"
sniffing => true
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
```
to `02-beats-input.conf`, it works, although the naming of that file suggests that an output shouldn't be added there.
Ref: https://www.elastic.co/guide/en/beats/libbeat/1.0.1/getting-started.html#logstash-installation
Answers:
username_1: Sorry, not quite sure what the exact issue is here, it really should work out-of-the-box. :confused:
The output filter/routing for Logstash is in `30-output.conf`, which contains a minimal configuration item:
```
output {
  elasticsearch { hosts => ["localhost"] }
  stdout { codec => rubydebug }
}
```
So implicitly, in the `elasticsearch` section, the default values are used for non-specified configuration options (as per https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html), e.g. `"logstash-%{+YYYY.MM.dd}"` for `index`.
Using the vanilla `sebp/elk` image on a clean VM and having Filebeat push logs from an instance of nginx to Logstash's Beat input plugin on port 5044 (see the [example](http://elk-docker.readthedocs.org/#forwarding-logs-filebeat) in the documentation of the image) produces an entry like this when browsing to a page served by nginx:
```
{
  "_index": "logstash-2016.01.07",
  "_type": "nginx-access",
  "_id": "AVIdlGrkng6MqhVcdOZ3",
  "_score": null,
  "_source": {
    "message": "XX.XX.XX.XX - - [07/Jan/2016:19:33:19 +0000] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0\" \"-\"",
    "@version": "1",
    "@timestamp": "2016-01-07T19:33:23.023Z",
    "beat": {
      "hostname": "ac29184dfcf0",
      "name": "ac29184dfcf0"
    },
    "count": 1,
    "fields": null,
    "input_type": "log",
    "offset": 0,
    "source": "/var/log/nginx/access.log",
    "type": "nginx-access",
    "host": "ac29184dfcf0",
    "clientip": "XX.XX.XX.XX",
    "ident": "-",
    "auth": "-",
    "timestamp": "07/Jan/2016:19:33:19 +0000",
    "verb": "GET",
    "request": "/",
    "httpversion": "1.1",
    "response": "304",
    "bytes": "0",
    "agent": "\"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0\""
  },
  "fields": {
    "@timestamp": [
      1452195203023
    ]
  },
  "sort": [
    1452195203023
  ]
}
```
… which looks fine to me.
Using the piece of configuration you suggested (which I added to `30-output.conf` where it would belong), the same operation creates entries such as this:
```
{
  "_index": "filebeat-2016.01.07",
  [Truncated]
    "verb": "GET",
    "request": "/",
    "httpversion": "1.1",
    "response": "304",
    "bytes": "0",
    "agent": "\"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0\""
  },
  "fields": {
    "@timestamp": [
      1452194980787
    ]
  },
  "sort": [
    1452194980787
  ]
}
```
… so essentially the same thing, except for the `_index` field which has a different prefix (default `logstash` vs explicitly set `filebeat`).
Having said that, there may be something wrong that I'm not seeing or I may be misunderstanding the issue: if so could you please provide steps to reproduce the issue you're having? Cheers.
username_0: The issue is that if you configure the beat to send directly to Elasticsearch, the information ends up in filebeat-XXXX.XX.XX or topbeat-XXXX.XX.XX indexes. If you send through the logstash beat plugin as configured, the information ends up in the logstash-* index. It seems like it should be the same in either case.
Additionally, the beats project provides a bunch of pre-made dashboards which only work with the information in the XXXbeat-XXXX.XX.XX format.
https://www.elastic.co/guide/en/beats/libbeat/current/getting-started.html#load-kibana-dashboards
username_1: Right, got it this time. I thought the issue was about the plugin not working rather than about inconsistent behaviours and predefined dashboards not playing properly.
Will update in a sec.
Status: Issue closed
|
fjordllc/bootcamp | 1074202912 | Title: Uncaught TypeError: Cannot assign to read only property 'solana' of object '#<Window>'
Question:
username_0: View details in Rollbar: [https://rollbar.com/username_0/Bootcamp/items/767/](https://rollbar.com/username_0/Bootcamp/items/767/)
```
TypeError: Cannot assign to read only property 'solana' of object '#<Window>'
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 45, in Object.dispenseInjectMessage
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 45, in [anonymous]
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 39, in HTMLDocument.<anonymous>
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 39, in n.dispatch
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 39, in n.send
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 39, in n.sync
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 45, in new <anonymous>
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 45, in Module.2202
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 1, in r
File "chrome-extension://afbcbjpbpfadlkmhmclhkeeodmamcflc/injects/solana.js", line 1, in [anonymous]
```
Status: Issue closed |
getsentry/sentry-webpack-plugin | 534550571 | Title: Assets Accessible at Multiple Origins
Question:
username_0: Hi,
The Sentry docs talk about assets accessible at multiple origins, and I can't find the parameter to use. Is it doable with this webpack plugin?
In my use case, I have the same file which is fetched with different origins.
Ex: file://www/app.js, https://mysite.fr/subsite1/app.js, https://mysite.fr/subsite2/app.js.
When an error is raised, one issue is created per file, but I'd like to group them since it's the same file.
Have a good day,
DR
Answers:
username_1: It's not possible through the plugin, but you can use our custom grouping settings to achieve that - https://docs.sentry.io/data-management/event-grouping/server-side-fingerprinting/
Status: Issue closed
|
Ymagis/ClairMeta | 674018725 | Title: Report Mode Feature Request: -report/-report_out_file to generate more detailed output.
Question:
username_0: Hi,
ClairMeta checks for quite a lot of issues. It's very impressive.
And I am especially pleased we now have an open source tool to QC DCPs for problems (one that's easy to install and use compared to others).
I would like to see the tool updated to generate a QC-Report type result. For example..
```
python3 -m clairmeta.cli check -type dcp -qc_report DCP_DIR
clairmeta version: X.XX
base CHECK: PASS
cpl CHECK: PASS
am CHECK: PASS
isdcf_dcnc CHECK: WARNING
WARNING:check_dcnc_compliance
ContentTitle must have 12 parts, 11 found
Field Studio not found in ContentTitle
atmos CHECK: NOT_APPLICABLE
.
.
----------------------
QC acceptance: PASS
```
This is especially important when accepting DCPs into some circuits/cinemas: having a tool to pre-check DCPs before submission by random content producers who are making ads or small film festival content. It's common for production companies without DCP experience to take on such work, leave the creation of the DCP to the last minute, and just grab whatever software they can find to make a DCP. It would be good to have a free tool that those accepting the DCP can point to in their asset delivery requirements.
i.e. "When submitting your DCP, please supply the clairmeta QC_REPORT as part of your submission." Obviously they are not going to do this until it passes the QC tool.
The tool should produce a report of the type listed above, and if a specific check_plugin has issues, it will list them and result in a WARNING or FAIL based on the severity of the issue.
The clairmeta version should be part of the report, as extra features may come down the line which an older version of the software does not yet implement.
I have not yet looked too deeply into the code, but it appears this type of feature may not be that difficult to add to the implementation.
I will admit the tool in its current form is still very useful and does achieve this outcome, but its output does not convey how powerful and complete the tool actually is. Naming all the checks in the report would be far more visceral and hold more weight in showing how useful this tool really is to the general community that utilises DCPs.
Please comment on this addition and "use model" of the tool. I am keen to hear others feedback.
Answers:
username_0: I expect many of us have had time to check out the SMPTE RDD 52 document, now made freely downloadable on
[https://ieeexplore.ieee.org/document/9161348](https://ieeexplore.ieee.org/document/9161348)
This is great to see, as it refines to a detailed level what is acceptable for wider-release DCP creation. It's also great for film festivals who don't want to spend as much money on repacking DCPs: specifying that a submission must pass a clairmeta check before it can be accepted puts the cost of making a good DCP back on the film maker.
My first reaction would be to add a reasonable amount of RDD 52 specifics to clairmeta, but after reading, I would assume (not that I have looked over all the code) that many of the specifics are already checked for by clairmeta.
This comes back to this discussion. Currently clairmeta only reports/prints anything if a specific test fails. There is no real way to see, via its output, what checks have been performed. This is a considerable flaw to me: if a DCP is in an archive, for example, and a clairmeta report was created against it at the time it was archived, there is no way to see what tests were applied at that time, just that it was OK. That's not very useful.
Clairmeta will evolve and more tests will likely be added.
After some consideration, I think it is quite important that we add a VERBOSE option at the very least that prints out each test applied, with Pass or Fail.
This then comes to how we incorporate RDD 52 tests. Do we add them as a module, or add tests to the specific parts of the code that deal with RDD 52 specifics, i.e. in subtitles, picture, etc.? I would assume we add them to already existing sections of the code. Where checks that mirror the RDD 52 requirements already exist, we modify the error reported to reference the RDD 52 document.
username_1: Thanks for opening the discussion @username_0.
For context, when we made ClairMeta at Ymagis, we already had another companion application for mastering related things and that tool was responsible for generating this kind of report. For that we used a simple HTML / CSS page and template expansion to fill in the report. We chose not to add it to ClairMeta source code because of the added dependencies and complexity that's not really wanted by everyone.
That's to say, it should be relatively easy to implement already, as almost all the information is already present in the generated report (using the Python API). One thing missing, but probably a one-liner, is to add the checks that succeeded.
Note that you can easily add the checks' description and reference (notice that most checks have in their docstring a reference section sourcing the related standard / specification) by reading the docstring programmatically from Python (in fact that's what we did).
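(For illustration, a minimal sketch of that docstring-reading idea; the check function and its docstring layout here are hypothetical, not ClairMeta's actual checks:)
```python
import inspect

def check_picture_editrate(cpl):
    """Verify the picture track edit rate.

    References:
        SMPTE 429-7
    """

# Pull a check's description and standard reference out programmatically
doc = inspect.getdoc(check_picture_editrate)
print(doc.splitlines()[0])                   # -> Verify the picture track edit rate.
print(doc.split("References:")[-1].strip())  # -> SMPTE 429-7
```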
I may look at it further when I have more time, but for now I won't be able to help with making this report. However I can help by adding any missing piece of information that's needed to generate it.
PS: I didn't get the time to look at RDD 52 yet, but yes, we will add those missing checks in their respective modules and document where they come from in the docstring, as already done.
username_0: I am and we all should be greatly appreciative to Ymagis for offering this to the public.
Considering Ymagis had a vision and specific use model for the tool, I think the best way to approach this is to add an argument flag that adds this functionality, so that its default use stays the same (apart from upgrades, additions, and fixes).
Today I had a reasonable look at the code, mainly `dcp_check_base.py`. On this topic, after looking through it:
I noticed 543 checks in the resulting list `self.check_executions`; it then looks through the list for the type of result and adds ERROR and WARNING entries to `self.check_report['ERROR' / 'WARNING' / 'INFO' / 'SILENT']`:
```python
def make_report(self):
    """ Check report generation. """
    self.check_report = {
        'ERROR': [],
        'WARNING': [],
        'INFO': [],
        'SILENT': []
    }

    for c in self.find_check_failed():  # <-- only failed checks are added to self.check_report
        level = self.find_check_criticality(c.name)
        self.check_report[level].append((c.name, c.msg))

    check_unique = set([c.name for c in self.check_executions])

    self.check_elapsed = {}
    self.total_time = 0
    self.total_check = len(check_unique)

    for name in check_unique:
        execs = [c.seconds_elapsed
                 for c in self.check_executions if c.name == name]
        elapsed = sum(execs)
        self.total_time += elapsed
        self.check_elapsed[name] = elapsed
```
This appears to go through and populate `self.check_report`.
I find this a little confusing: why do you not add all PASSED checks to INFO?
And what are suppressed checks for? (I imagine those are checks not performed due to the checking profile supplied?)
Could you expand on the purpose here. What is the INFO list suppose to represent?
In the end, why not just expose `self.check_executions` in the API?
I could understand this portion of the code was mainly written for passing to the `dump_report()` function.
My reaction after reviewing this code is to add the `-report` flag:
add passed checks to `self.check_report['INFO']` in `make_report(self)`, and
update `dump_report()` to check for the `-report` flag and, if present, add output for all the items found in `self.check_report['INFO']`.
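A rough sketch of the `make_report` side of that change (an assumption about how it could look, not actual ClairMeta code):
```python
def make_report(self, include_passed=False):
    """ Check report generation, optionally listing passed checks. """
    self.check_report = {'ERROR': [], 'WARNING': [], 'INFO': [], 'SILENT': []}

    failed = self.find_check_failed()
    for c in failed:
        level = self.find_check_criticality(c.name)
        self.check_report[level].append((c.name, c.msg))

    if include_passed:
        # Surface successful checks so the report shows what was actually tested
        failed_names = set(c.name for c in failed)
        for c in self.check_executions:
            if c.name not in failed_names:
                self.check_report['INFO'].append((c.name, 'PASS'))
```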
I'll create a new issue regarding CPL errors I also find confusing.
username_1: Could be the easiest and most flexible way, yes; this list already provides more information, e.g. check processing times. We could add even more information, e.g. which asset the check was performed on (if applicable), the docstring of the check, etc.
username_1: Yes, this corresponds to checks that were bypassed using the profile `bypass` key.
username_1: #167 addresses part of the issue by providing a more complete report object, from which you should be able to build a complete report. Let me know if that addresses your immediate concerns.
username_1: Should we close this ?
Status: Issue closed
username_0: Yes,
You have addressed this from my perspective.
I am working on the online tool still. I have the interactive version done but for likely bugs to be discovered.
I am currently working on making a HTML based Emailable version that can be exported from the online interactive tool.
Regenerating the interactive graphs as images that can be embedded into an Email. Based on the draft one I sent you a while ago for Clairmeta, but now adding CPL-structure, Video-bitrate, audio waveforms, loudness analysis images.
Once I am happy with that I will implement the latest Clairmeta version with recent upgrades and fixes.
Rami, please go to https://admin.d-cine.net and request an account under Research user. I need to test that part of the website too. Need to make it as easy as possible to manage/automate considering it's free to use.
Thanks,
James |
sorgerlab/indra | 366398879 | Title: Add more detail to knowledge sources documentation
Question:
username_0: Now that we have 19 documented knowledge sources, it might make sense to add another intermediate level to the module documentation that separates groups of knowledge sources like:
- general purpose reading systems
- biology-specific reading systems
- standard pathway databases
- custom knowledge bases
See https://indra.readthedocs.io/en/latest/modules/sources/index.html#
Answers:
username_0:  [Add an additional level to the indra.sources module documentation](https://trello.com/c/cwBptZqM/30-add-an-additional-level-to-the-indrasources-module-documentation)
username_1: High-level description:
```
INDRA can draw content from many sources: some specifically for biological
content and some for more generic causal knowledge; some from machine
readers and some from human-curated or data-driven databases.
```
Sound good?
username_0: Yes, sounds perfect
Status: Issue closed
username_0: Done in #695 |
DenisStrokatov/Denis | 298179456 | Title: Zadanie4
Question:
username_0:
```java
package main.java;

public class Zadanie4 {
    public static void main(String[] args) {
        int chislo = 83;
        int i, k;
        System.out.printf("razmen po 3 i 5 koppek:\n");
        i = 0;
        while (chislo - 5*i >= 0 && chislo >= 5) {
            System.out.printf("5 kop*%d, 3 kop*%d, ne razmenyano %d kop\n ", i, (chislo - 5*i) / 3, (chislo - 5*i) % 3);
            i = i + 1;
        }
        if (chislo == 4 || chislo == 3) {
            System.out.printf("5 kop*0, 3 kop*1, ne razmenyano %d kop\n ", chislo - 3);
        }
        if (chislo < 3) {
            System.out.printf("razmen %d kopeek ne vozmogen\n ", chislo);
        }
        System.out.printf("\n razmen po 3, 5 i 7 koppek:\n");
        for (k = 0; k <= chislo / 7; k++) {
            i = 0;
            while (chislo - 7*k - 5*i >= 0 && chislo >= 7) {
                System.out.printf("7kop*%d, 5 kop*%d, 3 kop*%d, ne razmenyano %d kop\n ", k, i, (chislo - 7*k - 5*i) / 3, (chislo - 7*k - 5*i) % 3);
                i = i + 1;
            }
        }
        i = 0;
        if (chislo == 6 || chislo == 5) {
            while (chislo - 5*i >= 0) {
                System.out.printf("7kop*0, 5 kop*%d, 3 kop*%d, ne razmenyano %d kop\n ", i, (chislo - 5*i) / 3, (chislo - 5*i) % 3);
                i = i + 1;
            }
        }
        if (chislo == 4 || chislo == 3) {
            System.out.printf("7kop*0, 5 kop*0, 3 kop*1, ne razmenyano %d kop\n ", chislo - 3);
        }
        if (chislo < 3) {
            System.out.printf("razmen %d kopeek ne vozmogen\n ", chislo);
        }
    }
}
```
|
jadrake75/odata-filter-parser | 120807541 | Title: Can likely optimize the concat recursive call
Question:
username_0: should be able to improve the handling of the array slicing (`arguments` is not an array but can use prototype array methods) via something like this (courtesy of kminio):
```js
Array.prototype.slice.call(arguments, 1)
```
|
RocketMan1988/EDMC-Passport-System | 963276778 | Title: Missing DSSA carriers
Question:
username_0: https://inara.cz/station/183920/
https://inara.cz/galaxy-station/211128/
Answers:
username_1: I'll look into this. Thank you for the ping. 👍
username_0: This may have been fixed on 21 August by -- my count went up when I rescanned journals. Though I didn't specifically test each report here
https://discord.com/channels/164411426939600896/749701520865493012/878793109557903440 (EDCD)
Status: Issue closed
|
miragejs/discuss | 557703037 | Title: server.create() fails for models which are named as a plural
Question:
username_0: Boilerplate example: https://github.com/username_0/ember-cli-mirage-boilerplate
When an ember-data model is named as a plural, for example `settings`, calling `server.create('settings')` or `server.createList('settings')` fails with the error:
```
Promise rejected during "server.create singular model fails": Mirage: You called server.create('settings') but no model or factory was found. Make sure you're passing in the singularized version of the model or factory name.
```
Instead, if we use `server.create('setting')`, this works, but this is not the actual model name.
Examples can be seen here:
**Model:**
https://github.com/username_0/ember-cli-mirage-boilerplate/blob/master/app/models/settings.js
**Mirage Factory:**
https://github.com/username_0/ember-cli-mirage-boilerplate/blob/master/mirage/factories/settings.js
**Example failure test:**
https://github.com/username_0/ember-cli-mirage-boilerplate/blob/master/tests/integration/components/settings-test.js
# Expected behaviour
Even if a model name is actually plural, `server.create()` should not require an incorrect, singularized version of the model name to be passed to it.
Answers:
username_1: FYI: Transferred this to our Discuss repo, our new home for more open-ended conversations about Mirage!
If things become more concrete + actionable we can create a tracking issue in the main repo.
username_2: Um, so maybe a silly question, but how should we handle this for an integration test? The issue we are having is that the `custom-inflector-rules.js` initializer does not run during an integration test. Are we making a fundamental mistake by interacting with the Mirage server in integration tests?
username_1: There's some info in the docs about integration tests: https://www.ember-cli-mirage.com/docs/testing/integration-and-unit-tests
You can absolutely use Mirage there. In this case, you could just import and run your initializer alongside `setupMirage`. I might make a separate function that does both (`setupMirageForIntegration` or something like that) just so you don't forget!
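A hypothetical sketch of such a helper (the file path, the initializer import, and its `initialize()` shape are assumptions):
```js
// tests/helpers/setup-mirage-for-integration.js
import { setupMirage } from 'ember-cli-mirage/test-support';
import customInflectorRules from 'my-app/initializers/custom-inflector-rules';

export default function setupMirageForIntegration(hooks) {
  setupMirage(hooks);
  hooks.beforeEach(function () {
    // Run the inflector initializer that app boot would normally run,
    // so custom rules (e.g. plural model names) also apply in integration tests
    customInflectorRules.initialize();
  });
}
```
|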
jjhelmus/nmrglue | 250559289 | Title: Sorry, but still: "Integers not floats should be used for indexing and slicing arrays"
Question:
username_0: Although announced to be solved, with numpy 1.9.2, I recently installed via "git clone https://github.com/username_1/nmrglue" (from source).
```python
dic, data = ng.bruker.read_pdata('../Data/3/pdata/1')
print data.shape
print data
thres = data.std() * 2
peakList = ng.analysis.peakpick.pick( data, pthres = thres )
```
```
(1024, 2048)
[[ 14099.25  15660.5   15820.25 ...,    441.5   13453.5   17852.25]
 [ 22354.75  24070.25  16389.   ...,   2944.     -547.75   7666.  ]
 [ 17501.5   21159.     3769.25 ...,    866.5  -10845.5   -4526.  ]
 ...,
 [  1695.5  -15082.25 -16300.75 ...,   1921.    17331.    22866.25]
 [ -6796.25 -16138.25 -11376.75 ...,   4678.75  24169.25  22745.75]
 [  -512.5   -2591.5    1098.5  ...,   2588.5   24818.75  23191.  ]]
Traceback (most recent call last):
  File "findPeaks.py", line 9, in <module>
    peakList = ng.analysis.peakpick.pick( data, pthres = thres )
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 215, in pick
    ls_classes)
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 383, in guess_params_slice
    r = extract_1d(region, rlocation, axis)
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 398, in extract_1d
    return np.atleast_1d(np.squeeze(data[s]))
TypeError: slice indices must be integers or None or have an __index__ method
```
Answers:
username_0: Also, with the script from
https://github.com/username_1/nmrglue/wiki/Find-and-fit-peaks-in-a-spectrum
```
Traceback (most recent call last):
  File "fitPeaks.py", line 25, in <module>
    peaks = ng.peakpick.pick(data, 25000)
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 222, in pick
    c_ndil)
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 291, in clusters
    return [labeled_array[i] for i in locations]
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
```
Status: Issue closed
username_1: @username_0 Can you provide the data file that you are looking at which is causing this error? I tried to replicate the issue with the data from the wiki example and was unable to cause the error.
username_0: Here you are, in 4 parts because of github limits.
[Data4.zip](https://github.com/username_1/nmrglue/files/1228812/Data4.zip)
Concatenate with cat Data*.zip > data.zip
username_0: next:
[Data3.zip](https://github.com/username_1/nmrglue/files/1228820/Data3.zip)
username_0: next:
[Data2.zip](https://github.com/username_1/nmrglue/files/1228822/Data2.zip)
username_0: & last:
[Data1.zip](https://github.com/username_1/nmrglue/files/1228823/Data1.zip)
username_0: tested to concatenate - did not work out.
Don't know how to provide you with the original Data.zip of 28 MB
username_1: @username_0 Can you upload the file to dropbox or a similar file sharing site?
username_0: To send you a download link & password requires an e-mail address of yours.
username_1: username_1 [at] gmail work.
username_1: I got the data file and ran the following script without any error:
``` python
import nmrglue as ng
dic, data = ng.bruker.read_pdata('../Data/3/pdata/1')
print data.shape
print data
thres = data.std() * 2
peakList = ng.analysis.peakpick.pick( data, pthres = thres )
```
Can you verify that you have the latest version of nmrglue and provide details about what NumPy and Python version you are using?
username_0:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 215, in pick
    ls_classes)
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 383, in guess_params_slice
    r = extract_1d(region, rlocation, axis)
  File "/usr/lib/python2.7/site-packages/nmrglue/analysis/peakpick.py", line 398, in extract_1d
    return np.atleast_1d(np.squeeze(data[s]))
TypeError: slice indices must be integers or None or have an __index__ method
```
username_0: Same error on a CentOS 7 (64bit) with Python 2.7.5 & numpy 1.12 & nmrglue (0.7_dev, i.e. from git-sources)
username_1: I've tried to replicate this on my own system with the same version of python and numpy without any luck. The peak picking does not raise an error.
I'm at a bit of a loss on what else to try. Perhaps the scipy version is causing issue, what version of SciPy is installed.
Also, what is the output from:
``` Python
import nmrglue as ng
dic, data = ng.bruker.read_pdata('../Data/3/pdata/1')
thres = data.std() * 2
ploc, pseg = ng.analysis.peakpick.find_all_connected(data, thres, True, False)
print(thres)
print(ploc)
print(pseg)
```
`find_all_connected` is being run inside the peak picker to generate the region limits. If it gives odd results that could explain the error.
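(For context, a simplified illustration of the class of failure being chased here; this is not nmrglue code. Newer NumPy versions reject float slice indices, so computed positions must be cast to int before slicing:)
```python
import numpy as np

data = np.arange(10.0)
center = data.size / 2.0           # a float, e.g. 5.0

# data[center - 2:center + 2]      # raises TypeError on modern NumPy
lo, hi = int(center - 2), int(center + 2)
print(data[lo:hi])                 # fine once the indices are ints
```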
username_0: Thank you very much for your patience and support. You're right in pointing at the scipy version!
```python
import nmrglue as ng
dic, data = ng.bruker.read_pdata('../Data/3/pdata/1')
thres = data.std() * 2
ploc, pseg = ng.analysis.peakpick.find_all_connected(data, thres, True, False)
```
works WITHOUT problems on Mac OS X (Python 2.7.12 from Xcode, numpy & scipy 0.17 added by Homebrew), as well as on Fedora 24 (Python 2.7.13, numpy 1.11, scipy 0.16.1).
Finally, the problem has been solved by updating scipy (originally version 14.1 on Fedora 23) via `pip install --upgrade scipy` (as root)!!
Status: Issue closed
username_1: Great to hear you found a fix. |
magicbug/Cloudlog | 486048669 | Title: Have a use for the field OPERATOR
Question:
username_0: We are evaluating Cloudlog for our club station currently. It would be useful to have information about who (personally) operated the club callsign for each QSO. From what I read in the ADIF specs, there is a field OPERATOR which should allow for that.
We imported an .adif file and the field obviously gets filled. So what is missing is a way to have that on the input side. So QSOs which are logged through the Cloudlog GUI (aka. web interface) do not populate this field.
Or would that require separate accounts for anyone who wants to log QSOs for the club station?
Answers:
username_1: @username_0 In the code it uses the user login's callsign for the OPERATOR field, so you could give everyone an account; at the moment the privileges don't really do anything, so I'd set everyone to admin for now.
I need to expose the OPERATOR field on the QSO popup windows so that it can be seen.
What I'm planning to do is add a role that removes the admin features and just lets an "Operator" add QSOs.
username_1: Station Callsign is, for argument's sake, the logbook callsign in Cloudlog (e.g. 2M0SQL). This is defined in Station Profiles; then you can select it in the QSO panel under "General".
username_1: Yes, I guess a global club username for everyone to use is an option, i just never saw it like that.
username_1: I'll get it added :)
username_1: Done
username_1: When you create an account for a user, when you fill in "Callsign" this will be the field that is used for the OPERATOR field
username_0: I see. But how is station callsign set then?!
username_0: I see. That means we need to have an account for anyone who wants to log for our club call sign. So then change this to an FR: make the COL_OPERATOR field visible :)
username_0: Not a problem. I would find a separate account for each operator more convenient.
But I just discovered that OPERATOR is also not included when exporting. Could you please also consider this?
username_0: Mni tnx.
username_0: Cool. Will test later on as I am horizontally polarized :-)
username_0: Am I missing something on the QSO details popup window?

Should include the operator as per 47d9efe218986142bd8af1c4d5d78b0db8f19eac right?
username_0: Well, forget what I wrote. I didn't realize there are vertical scrollbars ... *facepalm*
username_1: This is now 100% stored from User account and won't be made available in the Panel unless under edit to correct an error.
Status: Issue closed
|
openMF/web-self-service-app | 298121094 | Title: Create a template for Issues and Pull Requests for the repository
Question:
username_0: Adding a template for issues and pull requests will help the contributors share all the required important information which will be helpful to the community as a whole.
Answers:
username_0: @username_3 Can I work on this?
username_1: @username_3 @santoshconflux Can I work on this?
username_2: @username_1 , it is done already #110
Status: Issue closed
|
napalm-automation/napalm-ansible | 230539043 | Title: route_to on junos - missing parameters
Question:
username_0: I'm trying to use the 'route_to' filter on get_facts and I'm failing:
```
tasks:
- name: get facts from device
napalm_get_facts:
provider: "{{ junos_provider }}"
filter: 'route_to'
register: result
```
```
$ ansible-playbook -i ./inventory/ ./getdata.yaml
PLAY [fetch info] ****************************************************************************************************************************************************************************
TASK [get facts from device] *****************************************************************************************************************************************************************
fatal: [10.69.0.208]: FAILED! => {"changed": false, "failed": true, "msg": "[route_to] cannot retrieve device data: Unknown protocol: destination"}
to retry, use: --limit @/home/ubuntu/2d-automation/ansible/getdata.retry
PLAY RECAP ***********************************************************************************************************************************************************************************
10.69.0.208 : ok=0 changed=0 unreachable=0 failed=1
```
Looking at the code in [napalm-junos](https://github.com/napalm-automation/napalm-junos/blob/develop/napalm_junos/junos.py#L1104) the function expects two parameters, but I don't think there is a way to pass them.
Answers:
username_1: Hi, I have implemented an `args` argument you can use to pass parameters to the filter. Would you mind trying? I did already and it seemed to work, but it would be nice if you could double check with your own testing:
https://github.com/napalm-automation/napalm-ansible/pull/57/files#diff-bf4dee70d5c4bdf0ee8d92a29ed373c0R118
```
(.venv) ➜ ansible git:(master) ✗ ansible-playbook --limit rtr00 playbook_hello_world.yaml
.
# Print gathered facts ****************************************************************************************************
* rtr00 - changed=False ----------------------------------------------------
{
"facts": {
"fqdn": "localhost",
"hostname": "localhost",
"interface_list": [
"Ethernet1",
"Ethernet2",
"Management1"
],
"model": "vEOS",
"os_version": "4.17.5M-4414219.4175M",
"serial_number": "",
"uptime": 1083,
"vendor": "Arista"
},
"interfaces": {
"Ethernet1": {
"description": "",
"is_enabled": true,
"is_up": true,
"last_flapped": 1495885906.2295532,
"mac_address": "08:00:27:26:EF:68",
"speed": 0
},
"Ethernet2": {
"description": "",
"is_enabled": true,
"is_up": true,
"last_flapped": 1495885906.2296872,
"mac_address": "08:00:27:83:BE:A5",
"speed": 0
},
"Management1": {
"description": "",
"is_enabled": true,
"is_up": true,
"last_flapped": 1495885922.2773018,
"mac_address": "08:00:27:47:87:83",
"speed": 1000
}
},
"route_to": {
"8.8.8.8/32": [
{
"age": 0,
"current_active": true,
"inactive_reason": "",
"last_active": true,
"next_hop": "10.0.2.1",
"outgoing_interface": "Management1",
"preference": 1,
"protocol": "static",
"protocol_attributes": {},
"routing_table": "default",
"selected_next_hop": true
}
]
}
}
# STATS *******************************************************************************************************************
rtr00 : ok=2 changed=0 failed=0 unreachable=0
```
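(For reference, a sketch of how the new `args` option might be used in a playbook; the nested layout, with the getter name mapping to its keyword arguments, is an assumption based on the linked diff rather than verified documentation:)
```yaml
- name: get route_to for a specific destination
  napalm_get_facts:
    provider: "{{ junos_provider }}"
    filter: "route_to"
    args:
      route_to:
        destination: "8.8.8.8"
  register: result
```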
username_0: I can confirm that it works for me too.
I also realised that I can't use it - if I look for a specific match (like 8.8.8.0/24) I get this:
```
fatal: [10.69.0.208]: FAILED! => {"changed": false, "failed": true, "msg": "[route_to] cannot retrieve device data: failed to detect a valid IP address from u'10.69.0.177:500:172.16.58.3'"}
to retry, use: --limit @/home/ubuntu/2d-automation/ansible/getdata.retry
```
So it looks like it expects only an ipv4 prefix, and when it gets a vpnv4 one it fails to parse it.
username_1: Interesting. How does that work with EOS? Which command would you run to actually see that route?
username_0: No quick access right now to check. I suspect this is going to be junos specific. It's the same over cli too - 'show route _destination_' shows all tables, including vpnv4. On other platforms I worked with you always have to specify you want 'vrf/vpnv4' routes.
On junos the way around this is to specify the 'table', but since currently napalm-junos doesn't use 'table' as a parameter I think the only way around is to use the junos_get_table from Juniper (and craft the required table definition manually).
username_1: Oh, it's junos! Sorry, got confused and thought it was on EOS. So, I suggest you open an issue on `napalm-base` for further discussion, as I think it might be useful to discuss adding a "table" parameter. In the meantime, I will just merge this.
Status: Issue closed
username_1: Solving this via #57 |
globaldothealth/list | 1076085612 | Title: Add Omicron Travel History Dataset to G.h
Question:
username_0: @Mougk and collaborators have compiled a dataset of travel history associated with verified Omicron cases, which is updated regularly: https://docs.google.com/spreadsheets/d/1LHfVWlUfIFp-4NiAoyI4xIrq3v56juuK2-KrNquwtmo/edit#gid=0
While not a line list per se, these data are valuable, unique, and time-sensitive to make available to the global research and epidemiological community. We should find a simple and fast way to host these data via G.h in a downloadable format similar to our core LL db. A simple option could be to write a script to ingest/geocode the data and host it on our GitHub as a .CSV with some accompanying documentation on methodology, sources, data dictionary, authors, etc. We can then create either a button and/or top-level nav on our homepage to link to this GitHub page and/or a Wordpress page similar to our Data Deep Dives. Suggestions/ideas welcome on the best MVP approach!
Answers:
username_1: I think it's best to put data files in another repo, I'll create covid19-omicron and put up a WIP script there.
username_1: WIP at https://github.com/globaldothealth/covid19-omicron, there are some issues with geocoding, looking into that.
username_0: Thanks @username_1 ! Good idea and great progress
username_0: @username_1 copying slack notes from @Mougk :
"could you add more details on the LTLA dataset from UK gov and direct link to data description?
Moritz 1 hour ago
and pls add data dictionary to travel dataset and ltla dataset (similar to denmark readme)"
username_1: @username_0 done!
username_0: @username_1 I think we can close this issue now and add updates as needed?
Status: Issue closed
|
sisoputnfrba/foro | 280562540 | Title: [MASTER] Problem executing a thread's function
Question:
username_0: Hi!!
I'm having a small mess in master. It doesn't seem too nasty, but I've been at it for a couple of hours and can't solve it:
In the local reduction stage, master creates a single thread to connect to a node (the node on which the reduction will be done). The problem appears when the thread's function starts executing: basically it blows up because everything the thread receives is null, BUUUUT in the function _conectarse_a_worker_reduc_local()_, when what was deserialized is written to the log (right before creating the thread and passing it the deserialized structure), everything looks fine!!!!
So I don't know how it goes from everything being normal to everything being null when the thread starts executing.
Maybe it's something silly I'm not seeing... any ideas?
```c
int conectarse_a_worker_reduc_local(){
    t_conexion_rlocal info_nodo = deserializar_info_nodo_rlocal_mock(); //deserializar_info_nodo_rlocal();

    log_trace(log_master, "MAIN MASTER: Recibi esto: {\ncant_archivos: %d \n id_nodo: %d\n ip: %s\n puerto: %d\n ruta_archivo_rlocal: %s\n tamanio_ruta_archivo_rlocal: %d\n primera_ruta: %s\n segunda_ruta: %s\n tercera_ruta: %s\n}",
        info_nodo.cant_archivos_transformados,
        info_nodo.id_nodo,
        info_nodo.ip,
        info_nodo.puerto,
        info_nodo.ruta_archivo_rlocal,
        info_nodo.tamanio_ruta_archivo_rlocal,
        list_get(info_nodo.rutas_archivos_transformados, 0),
        list_get(info_nodo.rutas_archivos_transformados, 1),
        list_get(info_nodo.rutas_archivos_transformados, 2)
    );

    log_trace(log_master, "master crea un hilo para conectarse a un worker en la ip %s y puerto %d en la etapa de reduccion local", info_nodo.ip, info_nodo.puerto);

    pthread_t hilo;
    pthread_attr_t attr;
    int detachstate;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    if(pthread_create(&hilo, &attr, ordenar_rlocal_a_worker, (void*) &info_nodo) != 0){
        log_error(log_master, "no se pudo iniciar un hilo para conectarse a un worker en la ip %s y puerto %d en la etapa de reduccion local", info_nodo.ip, info_nodo.puerto);
        destroyer_info_nodo_rlocal((void*)&info_nodo);
        t_info_tarea info_tarea_fallida = {.id_job = nro_job, .id_nodo = info_nodo.id_nodo, .bloque_archivo = -1};
        notificar_resultado_etapa(ERROR_EN_REDUCCION_LOCAL, info_tarea_fallida);
        return -1;
    }

    pthread_attr_destroy(&attr);
    return 0;
}
```
In the function _deserializar_info_nodo_rlocal_mock()_ I have the following:
```c
t_conexion_rlocal deserializar_info_nodo_rlocal_mock() {
    t_conexion_rlocal nodo;
    nodo.id_nodo = 1;
    nodo.ip = string_new();
    string_append(&nodo.ip, "127.0.0.1");
```
[Truncated]
```
==6962== HEAP SUMMARY:
==6962==     in use at exit: 974 bytes in 27 blocks
==6962==   total heap usage: 163 allocs, 136 frees, 8,035 bytes allocated
==6962==
==6962== LEAK SUMMARY:
==6962==    definitely lost: 40 bytes in 3 blocks
==6962==    indirectly lost: 63 bytes in 6 blocks
==6962==      possibly lost: 136 bytes in 1 blocks
==6962==    still reachable: 735 bytes in 17 blocks
==6962==         suppressed: 0 bytes in 0 blocks
==6962== Rerun with --leak-check=full to see details of leaked memory
==6962==
==6962== For counts of detected and suppressed errors, rerun with: -v
==6962== ERROR SUMMARY: 5 errors from 5 contexts (suppressed: 0 from 0)
Terminado (killed)
```
(By the way, I also don't understand why it prints *[INFO] 14:14:23:544 Master/(6962:6963): La primera ruta es (null)* twice, but oh well...)
Help!!!!!!!!!!!
Answers:
username_1: Hi!
Note that you have `Invalid read of size` at these lines:
- `ordenar_rlocal_a_worker (funciones_master.c:629)`
- `ordenar_rlocal_a_worker (funciones_master.c:633)`
- `ordenar_rlocal_a_worker (funciones_master.c:648)`
Which lines are those?
username_0: Looks like the invalid reads are because of that, right?
On another note, I mention this because it might help you find the problem or understand what's going on:
I tried putting a sleep (_yes, just to test; I know it has to be removed_ :satisfied:) in the function _conectarse_a_worker_reduc_local()_, here:
```c
...
if(pthread_create(&hilo, &attr, ordenar_rlocal_a_worker, (void*) &info_nodo) != 0){
    log_error(log_master, "no se pudo iniciar un hilo para conectarse a un worker en la ip %s y puerto %d en la etapa de reduccion local", info_nodo.ip, info_nodo.puerto);
    destroyer_info_nodo_rlocal((void*)&info_nodo);
    t_info_tarea info_tarea_fallida = {.id_job = nro_job, .id_nodo = info_nodo.id_nodo, .bloque_archivo = -1};
    notificar_resultado_etapa(ERROR_EN_REDUCCION_LOCAL, info_tarea_fallida);
    return -1;
}
sleep(1000000);
pthread_attr_destroy(&attr);
return 0;
...
```
And everything works just fine, WTF :scream:.
The only thing I can think of is that maybe all the pointers _info_nodo_ holds live on the stack of _conectarse_a_worker_reduc_local()_, and those are what you pass to the thread; so when _conectarse_a_worker_reduc_local()_ returns and those references are released, what the thread has is all empty... could that be it, or am I totally off?
username_1: I have a slight feeling the problem is here:
`t_conexion_rlocal info_nodo = deserializar_info_nodo_rlocal_mock();`
Threads share memory, especially what you pass them as an argument. But what happens if that memory location is static rather than dynamic? First, it's stored on the stack. Second, if that info gets overwritten by something else in the main thread, you're also overwriting it in the other one. I think what you have here is _the classic race condition_ 😛
username_0: Ah, ok! That's more or less where I was stuck... How can I make those info_nodo structures individual for each thread, so that if the main thread finishes it doesn't mess up the others? Isn't there a way to pass the thread an independent copy? Could a solution be that, instead of the main thread calling _deserializar_info_nodo_rlocal_mock()_, each thread does it when it starts executing? (Then I'd have to add a semaphore to synchronize reading from the socket.)
username_2: There's a function called destroyer_info_nodo_rlocal or something like that which you execute right after creating the thread.
I think your problem is that you're passing the thread the address of infoNodo and then destroying it right away.
That's why, when your thread tries to read it, you get a null pointer exception.
Check whether the problem comes from there, and come back if you need anything.
username_0: Debugging it, it seemed to me that it never falls in there, because it only enters the if when _pthread_create(..)_ returns != 0.
username_1: Check what chris said, although in this case I don't think the trouble comes from there; it's a separate potential problem.
@username_0 if only we had an OS feature to reserve memory outside the stack, which could be shared between threads, keeping only the "reference" to that memory on the stack... something like `memory allocation` 😛
Jokes aside, `malloc` is your hero. Take advantage of pointers to pass memory to the thread 😉
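(A minimal sketch of the heap-based hand-off being suggested; names mirror the thread, and the cleanup policy is illustrative:)
```c
// Allocate the struct on the heap so it outlives the caller's stack frame.
t_conexion_rlocal* info_nodo = malloc(sizeof(t_conexion_rlocal));
*info_nodo = deserializar_info_nodo_rlocal_mock();

pthread_t hilo;
if (pthread_create(&hilo, NULL, ordenar_rlocal_a_worker, info_nodo) != 0) {
    free(info_nodo);  // creation failed: nobody else owns it
}
// On success, the worker thread frees info_nodo when it is done with it.
```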
username_0: Sure, the way I asked made it look like I don't use malloc anywhere in the project, haha. I asked it that way because in the function _deserializar_info_nodo_rlocal_mock()_ I'm allocating memory like crazy, and it still doesn't work, so I thought the problem wasn't coming from there.
Still, thinking a bit about what you said, I just tried doing the following in _conectarse_a_worker_reduc_local()_ to make sure I'm passing the thread a fresh memory location (which I thought was already happening because of all the mallocs I do in _deserializar_info_nodo_rlocal_mock()_):
```c
...
t_conexion_rlocal* nuevo_info_nodo = (t_conexion_rlocal*)malloc(tamanio_info_nodo(info_nodo));
memcpy(nuevo_info_nodo, &info_nodo, tamanio_info_nodo(info_nodo));
if(pthread_create(&hilo, &attr, ordenar_rlocal_a_worker, (void*) nuevo_info_nodo) != 0){
...
```
```c
int tamanio_info_nodo(t_conexion_rlocal info_nodo) {
int tamanio = 0;
tamanio += sizeof(info_nodo.id_nodo);
tamanio += sizeof(info_nodo.cant_archivos_transformados);
tamanio += sizeof(info_nodo.puerto);
tamanio += sizeof(info_nodo.tamanio_ruta_archivo_rlocal);
tamanio += strlen(info_nodo.ip);
tamanio += strlen(info_nodo.ruta_archivo_rlocal);
int i;
for(i = 0; i < info_nodo.cant_archivos_transformados; i++) {
tamanio += strlen((char*) list_get(info_nodo.rutas_archivos_transformados, i));
}
return tamanio;
}
```
And now it's getting somewhere: when it reaches the thread, at least it says the first path in the list is not null anymore... after that it hangs, and I don't know why yet... but it seems to be on the right track. I'll report back!
Status: Issue closed
|
Sandia-OpenSHMEM/SOS | 665445112 | Title: find-debuginfo reports no build ID note found error during rpmbuild
Question:
username_0: rpmbuild fails with the following error (not reproducible in Travis)
```
extracting debug info from /home/rahmanmd/rpmbuild/BUILDROOT/sandia-openshmem-1.4.5-1.el7.x86_64/usr/shmem/lib64/libsma.so.0.0.0
*** ERROR: No build ID note found in /home/rahmanmd/rpmbuild/BUILDROOT/sandia-openshmem-1.4.5-1.el7.x86_64/usr/shmem/lib64/libsma.so.0.0.0
error: Bad exit status from /var/tmp/rpm-tmp.CwP5x4 (%install)
```
A possible workaround is to stop building the debuginfo package by adding the following in the .spec file:
```
%define debug_package %{nil}
```
This work-around is validated to work without the above error.
Answers:
username_0: @username_1 Do you have any positive or negative opinion on the workaround? It seems to be OK to build the RPM without debuginfo, but I'm not sure what the implications would be, if any.
username_1: The spec file is there to support OPA fabric software folks. You could check with them if they are ok with this solution.
username_0: Resolved by #962
Status: Issue closed
|
icshwi/e3 | 287833461 | Title: Add option to set output path with prepare_module
Question:
username_0: I find that prepare_module should be improved by providing an option (e.g. -d) to set the destination path of the module. Currently everything is extracted under the e3/tool directory.
Status: Issue closed
Answers:
username_1: Not in prepare_module.bash, because we are dropping it. However, it was implemented at cbb1baa in the new template generator.
NG-ZORRO/ng-zorro-antd | 313962236 | Title: nz-tree can not used in reactiveform
Question:
username_0: ## Version
0.7.0-beta.4
## Environment
windows 10 x64, chrome 65.0.3325.181, ng-zorro 0.7.0-beta.4
## Reproduction link
[https://stackblitz.com/edit/ng-zorro-antd-setup-evun5b](https://stackblitz.com/edit/ng-zorro-antd-setup-evun5b)
## Steps to reproduce
```
<form nz-form [formGroup]="form">
<nz-form-item>
<nz-tree [(ngModel)]="menus">
</nz-tree>
</nz-form-item>
</form>
```
Just put the nz-tree component in a reactive form and use ngModel to bind data; it will raise this error:
```
ngModel cannot be used to register form controls with a parent formGroup directive. Try using
formGroup's partner directive "formControlName" instead.
```
But if I use formControlName instead of ngModel, it doesn't display anything!
## What is expected?
Being able to use formControlName (or some other method) to bind data to nz-tree, so it can be placed in a reactive form.
## What is actually happening?
Currently it cannot be used in a reactive form.
## Other?
<!-- generated by ng-zorro-issue-helper. DO NOT REMOVE -->
Answers:
username_1: @username_0 this is not an nz-tree error: `If ngModel is used within a form tag, either the name attribute must be set or the form control must be defined as 'standalone' in ngModelOptions.` You should use `[ngModelOptions]="{standalone: true}"`
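A minimal sketch of that workaround, applied to the template from the question:
```html
<form nz-form [formGroup]="form">
  <nz-form-item>
    <nz-tree [(ngModel)]="menus" [ngModelOptions]="{standalone: true}">
    </nz-tree>
  </nz-form-item>
</form>
```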
Status: Issue closed
|
lbryio/lbry-desktop | 485887888 | Title: Clicking enter in tag input (without typing) adds blank tag
Question:
username_0: Thanks to MH for spotting this one.
If you are in the customize/publish page that has the inputs for tag search and don't type anything, clicking enter adds a blank tag.
Answers:
username_1: A blank tag is also added if you type only spaces (unsure whether people would want to follow space-only tags), and if you add a tag with leading spaces it escapes the duplicate check while still being added to the tags.

Maybe the input should be trimmed and checked for zero length before being added.
Status: Issue closed
|
ulaval/modul | 500961198 | Title: proposition: Mise à niveau de Portal et de son fonctionnement
Question:
username_0: <!--
Content can be written in English or in French
-->
**Description of the need**
Portal is at version 1.1.1; it should be migrated to the most recent version (currently 2.1.6).
The ecosystem around the Portal mixin and plugin also needs a refactor.
**Summary of the proposed solution**
* A new modul-portal component that would include `<portal>` and `<transition>`.
* A new set of events emitted by this component, based on the JavaScript hooks of Transition: https://fr.vuejs.org/v2/guide/transitions.html#hooks-JavaScript
* Removal of the setTimeout calls and of TransitionDuration
* Removal of the Portal mixin
* Adjustment of the components that use the Portal mixin (MDialog, MModal, MOverlay, MPopper, MSidebar and MToast)
Answers:
username_1: Vue 3 is coming with its own built-in Portal. We should also evaluate that option before doing a Portal refactor on Vue 2. https://vueschool.io/articles/vuejs-tutorials/portal-a-new-feature-in-vue-3/
Status: Issue closed
|
lfkeitel/aptmirror-go | 290244777 | Title: Clean up old files
Question:
username_0: If the upstream removes an old version or otherwise changes files, the downloaded version needs to be cleaned up. A simply way would be to have a map of filenames to bool. If a file is in the map (ie. it's in the Release file or is a piece of metadata) then it should be kept. Anything else is removed. A simple directory walker can be used to remove files and empty directories. |
jellekitz/Json | 161128203 | Title: autofocus toevoegen
Question:
username_0: You have received feedback from **username_0**
on:
```html
<input type="search" placeholder="Zoek foto's..." id="zoekterm">
```
URL: https://github.com/jellekitz/Json/blob/master/index.html
Feedback: Autofocus gives a much better UXD[](http://www.studiozoetekauw.nl/codereview-in-het-onderwijs/ '#cr:{"sha":"7eff689bff558987fdcb6b8629d17277610fb5b8","path":"index.html","reviewer":"username_0"}')
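For reference, the suggested change is just the boolean autofocus attribute on the same input:
```html
<input type="search" placeholder="Zoek foto's..." id="zoekterm" autofocus>
```
|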
pulumi/docs | 452160205 | Title: Anchors are dropped from redirects
Question:
username_0: With the move to Hugo, the Node.js API docs have moved from `reference/pkg/nodejs/@pulumi/` to `reference/pkg/nodejs/pulumi/` as Hugo strips the `@` when generating the site.
We have Hugo aliases in place that redirect from the old locations to the new locations. However, these are based on [meta refresh](https://www.w3.org/TR/WCAG20-TECHS/H76.html), which does not pass along any anchors that may be specified in the source URL. This is problematic as many of our API docs rely on anchors when linking to specific identifiers.
The fix is to use real 301 redirects instead of these client-side redirects, as browsers will pass along the anchor when 301 redirects are encountered.<issue_closed>
Status: Issue closed |
nxtbgthng/OAuth2Client | 71554459 | Title: Multiple user : Same account type
Question:
username_0: How to configure multiple user for same account type. For example a single user may use multiple dropbox account.
Answers:
username_1: I ran into the same issue. Unfortunately this lib lacks both a manual refresh and multiple account types for the same host (i.e. you must know the specific info PRIOR to setting the client config). My solution is to write my own OAuth2 client. :/
username_2: There is no real API for this, but it still can be done.
Since account types are just strings you can just make up arbitrary strings. When I faced the same problem I just used `"github-username_2"` as the account type.
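As an illustration (endpoints and account-type strings here are made up; the two calls are the store's standard registration and access-request methods):
```objc
// Register the same provider twice under two invented account types.
[[NXOAuth2AccountStore sharedStore] setClientID:@"myClientID"
                                         secret:@"myClientSecret"
                               authorizationURL:[NSURL URLWithString:@"https://provider.example/oauth/authorize"]
                                       tokenURL:[NSURL URLWithString:@"https://provider.example/oauth/token"]
                                    redirectURL:[NSURL URLWithString:@"myapp://oauth"]
                                 forAccountType:@"dropbox-personal"];
// ...repeat the same registration with forAccountType:@"dropbox-work"...

// Later, start the flow for whichever account is needed:
[[NXOAuth2AccountStore sharedStore] requestAccessToAccountWithType:@"dropbox-work"];
```
|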
rbarrois/python-semanticversion | 675592073 | Title: Docs say that named components patch must be both an integer and a tuple of strings
Question:
username_0: This text is present in both `README.rst` and `docs/index.rst`:
```rst
In that case, ``major``, ``minor`` and ``patch`` are mandatory, and must be integers.
``prerelease`` and ``patch``, if provided, must be tuples of strings:
```
I think the second `patch` reference must be to some other argument instead...
Answers:
username_1: Good catch! The second should indeed by ``build``, fixing it now :)
Status: Issue closed
username_0: Wow - that was quick - thanks @username_1 ! |
open-sdg/sdg-translations | 457535204 | Title: Use dashes instead of dots, for global_targets and global_indicators
Question:
username_0: This would unfortunately be a large breaking change, but may be necessary. The "keys" for our global_targets and global_indicators translations contain dots. Eg, "1.2.1". This is problematic for any platform that tries to "drill down" using dot-delimited syntax. For example, "global_indicators.1.2.1.title" would not work. If we switch to dashes instead of dots, as in "global_indicators.1-2-1.title", then it would work.
Since we have not yet reached version 1, this is probably our best time to make this change.<issue_closed>
Status: Issue closed |
sigp/lighthouse | 671723347 | Title: Ensure lcli generates correct pre-genesis ENR
Question:
username_0: ## Steps to resolve
Ensure that `generate_bootnode_enr.rs` creates the `EnrForkId` in a way that is faithful to the specification before genesis is known.
Answers:
username_0: @username_1 perhaps you'd be interested in this one?
username_1: Yep, I'll take this one! |
MeasuringPolyphony/mp_editor | 774010433 | Title: After loading MEI Parts file not all staves load into input editor screen
Question:
username_0: However, when you click continue to the score editor, the music is there. I tried this with several parts files and the same thing happened with them all. It seems kind of random which parts will appear when you click on the stave bounding boxes. Attaching one example of an MEI parts file that this occurred with. I'm also unsure whether it may be related to the general issue with the broken MP editor that we are having at the moment.
[je_sui_aussi_MENSURAL_parts copy.txt](https://github.com/MeasuringPolyphony/mp_editor/files/5736877/je_sui_aussi_MENSURAL_parts.copy.txt)
: https://github.com/MeasuringPolyphony/mp_editor/issues/78<issue_closed>
Status: Issue closed |
firebase/quickstart-unity | 446343014 | Title: JNI DETECTED ERROR IN APPLICATION: can't call java.lang.Object com.google.firebase.database.DataSnapshot.getValue()
Question:
username_0: ## Please fill in the following fields:
Unity Version: 2017.4.27
Firebase Unity SDK version: 6.0.0
Firebase plugins in use (Auth, Database, etc.): Auth, Database, Analytics, RemoteConfig, Messaging
Additional SDKs you are using (Facebook, AdMob, etc.): AdMob, Unity IAP
Platform you are using the Unity editor on (Mac, Windows, or Linux): Windows, Mac
Platform you are targeting (iOS, Android, and/or desktop): iOS, Android
## Please describe the issue here:
(Please list the full steps to reproduce the issue. Include device logs, Unity logs, and stack traces if available.)
Building with the new 6.0.0 version (it was previously working under 4.5.2).
Auto sign-in to the account happens, then database requests start firing (maybe TMI: IAP is also doing its purchasing init in the background). The app hangs most of the time; sometimes when restarting the app things work (no hang). Curiously, the app will usually not hang while ADB is connected; you have to disconnect and then start it up. Even more curiously, it hangs 100% of the time when starting from a fresh install (with the ADB cable unplugged), which is the case for someone installing from the store for the first time.
## Please answer the following, if applicable:
Have you been able to reproduce this issue with just the Firebase Unity quickstarts (this GitHub project)?
n/a I've been using firebase for about 2 years now
CRASH LOG
(Filename: /Users/builduser/buildslave/unity/build/artifacts/generated/Android/runtime/DebugBindings.gen.cpp Line: 51)
05-20 15:03:05.401 10865-10921/? A/aimadiction.co: java_vm_ext.cc:542] JNI DETECTED ERROR IN APPLICATION: can't call java.lang.Object com.google.firebase.database.DataSnapshot.getValue() on null object
java_vm_ext.cc:542] in call to CallObjectMethodV
java_vm_ext.cc:542] from boolean com.unity3d.player.UnityPlayer.nativeRender()
java_vm_ext.cc:542] "UnityMain" prio=5 tid=26 Runnable
java_vm_ext.cc:542] | group="main" sCount=0 dsCount=0 flags=0 obj=0x12c81758 self=0x72b4c37800
java_vm_ext.cc:542] | sysTid=10921 nice=0 cgrp=default sched=0/0 handle=0x72a32df4f0
java_vm_ext.cc:542] | state=R schedstat=( 4103897740 298315453 10985 ) utm=355 stm=54 core=5 HZ=100
java_vm_ext.cc:542] | stack=0x72a31dc000-0x72a31de000 stackSize=1041KB
java_vm_ext.cc:542] | held mutexes= "mutator lock"(shared held)
java_vm_ext.cc:542] native: #00 pc 00000000003c7324 /system/lib64/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits<char>>&, int, BacktraceMap*, char const*, art::ArtMethod*, void*, bool)+220)
java_vm_ext.cc:542] native: #01 pc 0000000000495dc0 /system/lib64/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits<char>>&, bool, BacktraceMap*, bool) const+352)
java_vm_ext.cc:542] native: #02 pc 00000000002e85ac /system/lib64/libart.so (art::JavaVMExt::JniAbort(char const*, char const*)+972)
java_vm_ext.cc:542] native: #03 pc 00000000002e89cc /system/lib64/libart.so (art::JavaVMExt::JniAbortV(char const*, char const*, std::__va_list)+108)
java_vm_ext.cc:542] native: #04 pc 00000000000fd5f8 /system/lib64/libart.so (art::(anonymous namespace)::ScopedCheck::AbortF(char const*, ...)+144)
java_vm_ext.cc:542] native: #05 pc 0000000000101458 /system/lib64/libart.so (art::(anonymous namespace)::ScopedCheck::CheckMethodAndSig(art::ScopedObjectAccess&, _jobject*, _jclass*, _jmethodID*, art::Primitive::Type, art::InvokeType)+1584)
java_vm_ext.cc:542] native: #06 pc 00000000000ffcb4 /system/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallMethodV(char const*, _JNIEnv*, _jobject*, _jclass*, _jmethodID*, std::__va_list, art::Primitive::Type, art::InvokeType)+756)
java_vm_ext.cc:542] native: #07 pc 00000000000ec7d4 /system/lib64/libart.so (art::(anonymous namespace)::CheckJNI::CallObjectMethodV(_JNIEnv*, _jobject*, _jmethodID*, std::__va_list)+84)
java_vm_ext.cc:542] native: #08 pc 000000000010f218 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libFirebaseCppApp-6.0.0.so (_JNIEnv::CallObjectMethod(_jobject*, _jmethodID*, ...)+92)
java_vm_ext.cc:542] native: #09 pc 000000000014172c /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libFirebaseCppApp-6.0.0.so (firebase::database::internal::DataSnapshotInternal::GetValue() const+32)
java_vm_ext.cc:542] native: #10 pc 000000000013daa4 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libFirebaseCppApp-6.0.0.so (Firebase_Database_CSharp_InternalDataSnapshot_value+16)
java_vm_ext.cc:542] native: #11 pc 0000000001609abc /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #12 pc 00000000016076e8 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #13 pc 0000000001607080 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #14 pc 0000000000863488 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #15 pc 0000000000eed788 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #16 pc 0000000000ee0b80 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #17 pc 0000000000eedd78 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #18 pc 0000000000eea88c /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #19 pc 0000000000eea720 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #20 pc 000000000142f934 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #21 pc 000000000142f8a8 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #22 pc 000000000075c240 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #23 pc 00000000006aed30 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libil2cpp.so (???)
java_vm_ext.cc:542] native: #24 pc 00000000004c8814 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libunity.so (???)
java_vm_ext.cc:542] native: #25 pc 00000000004c35c8 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libunity.so (???)
java_vm_ext.cc:542] native: #26 pc 000000000040d1dc /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libunity.so (???)
java_vm_ext.cc:542] native: #27 pc 000000000040d670 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libunity.so (???)
java_vm_ext.cc:542] native: #28 pc 0000000000138d64 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libunity.so (???)
java_vm_ext.cc:542] native: #29 pc 000000000013b7f4 /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/lib/arm64/libunity.so (???)
java_vm_ext.cc:542] native: #30 pc 000000000003747c /data/app/bigtale.claimadiction.com-vkJFoDZIc4Oc4lbnRvUatA==/oat/arm64/base.odex (offset 36000) (com.unity3d.player.GoogleVrProxy.isQuiting [DEDUPED]+124)
[Truncated]
at libFirebaseCppApp-6.0010f21c(Native Method)
at libFirebaseCppApp-6.firebase::database::internal::DataSnapshotInternal::GetValue() const(firebase:36)
at libFirebaseCppApp-6.Firebase_Database_CSharp_InternalDataSnapshot_value(Firebase_Database_CSharp_InternalDataSnapshot_value:20)
at libil2cpp.01609ac0(Native Method)
at libil2cpp.016076ec(Native Method)
at libil2cpp.01607084(Native Method)
at libil2cpp.0086348c(Native Method)
at libil2cpp.00eed78c(Native Method)
at libil2cpp.00ee0b84(Native Method)
at libil2cpp.00eedd7c(Native Method)
at libil2cpp.00eea890(Native Method)
at libil2cpp.00eea724(Native Method)
at libil2cpp.0142f938(Native Method)
at libil2cpp.0142f8ac(Native Method)
at libil2cpp.0075c244(Native Method)
at libil2cpp.006aed34(Native Method)
at libunity.004c8818(Native Method)
at libunity.004c35cc(Native Method)
at libunity.0040d1e0(Native Method)
at libunity.0040d674(Native Method)
Answers:
username_1: **05-20 15:03:05.401 10865-10921/? A/aimadiction.co: java_vm_ext.cc:542] JNI DETECTED ERROR IN APPLICATION: can't call java.lang.Object com.google.firebase.database.DataSnapshot.getValue() on null object**
This means that the C++ DataSnapshot instance had been destroyed when you were accessing it. That is definitely odd. One possibility is that FirebaseDatabase is destroyed at this moment, which would cause all database-related objects to be deleted as well.
I would like to learn two things:
- Where did you use DatabaseSnapshot.GetValue()? Is it in the continuation function of Query.GetValueAsync() or is it in some event handler like Query.ValueChanged? A snippet of code would be helpful.
- Could you try to hold a static reference of FirebaseDatabase after FirebaseApp.CheckAndFixDependenciesAsync()? This means that FirebaseDatabase instance would not be destroyed until you switch scene or kill the app.
Thank you
Shawn
username_0: You insight actually led me to the problem which was subtle, so for posterity here are the particulars...
My initialization currently looks like this, where a static reference is saved.
```csharp
public static class FirebaseWrapper
{
public static DependencyStatus s_firebaseDepStatus = DependencyStatus.UnavailableOther;
static Firebase.FirebaseApp s_firebaseApp;
static Firebase.Auth.FirebaseAuth s_auth;
static Firebase.Storage.FirebaseStorage s_storage;
static Firebase.Database.FirebaseDatabase s_database;
#if UNITY_EDITOR
public static bool s_tester = true;
#else
public static bool s_tester = false;
#endif
public static IEnumerator Init(System.Action callback)
{
Debug.Log("Firebase starting");
Debug.Log("Current Status: " + s_firebaseDepStatus);
InitializeFirebase();
while (s_firebaseDepStatus == Firebase.DependencyStatus.UnavailableOther)
{
yield return 2;
Debug.Log("Status After initialization: " + s_firebaseDepStatus);
}
if(s_firebaseDepStatus != Firebase.DependencyStatus.Available)
{
UnityEngine.Debug.LogError(System.String.Format(
"Could not resolve all Firebase dependencies: {0}", s_firebaseDepStatus));
} else
{
// Firebase initialized poke all the things we want to use and hold references.
s_auth = Firebase.Auth.FirebaseAuth.DefaultInstance;
s_storage = Firebase.Storage.FirebaseStorage.DefaultInstance;
s_database = Firebase.Database.FirebaseDatabase.DefaultInstance;
}
if (callback != null) callback();
yield return 0;
}
static void InitializeFirebase()
{
Firebase.FirebaseApp.CheckAndFixDependenciesAsync().ContinueWith(task =>
{
if (task.Result== Firebase.DependencyStatus.Available)
{
// Create and hold a reference to your FirebaseApp,
// where app is a Firebase.FirebaseApp property of your application class.
s_firebaseApp = Firebase.FirebaseApp.DefaultInstance;
// Set a flag here to indicate whether Firebase is ready to use by your app.
s_firebaseDepStatus = DependencyStatus.Available;
Debug.Log("----FIREBASE DEPENDENCIES RESOLVED----");
}
[Truncated]
m_scoreRef.GetValueAsync().ContinueWith(task =>
{
if (task.IsFaulted)
{
// Handle the error...
OnCommandFailedCallback(kCmdName + " Failed getting data");
}
else if (task.IsCompleted)
{
DataSnapshot snapshot = task.Result;
// Do something with snapshot...
recurse(snapshot);
}
}, TaskScheduler.FromCurrentSynchronizationContext());
}
}
```
Bingo bongo, problem solved.
Status: Issue closed
username_1: I see.
We did notice that GC optimization may dereference and finalize the function parameters from the earlier recursion level.
One way to prevent it is to keep a reference in the class level, similar to what you did. Another way is to suppress GC using global::System.GC.KeepAlive().
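For example, a sketch of the second option applied to the earlier snippet (hypothetical, reusing `m_scoreRef` and `recurse` from the code above):
```csharp
m_scoreRef.GetValueAsync().ContinueWith(task =>
{
    if (task.IsCompleted && !task.IsFaulted)
    {
        DataSnapshot snapshot = task.Result;
        recurse(snapshot);
        // Keep the snapshot reachable until recursion finishes, so the GC
        // cannot finalize the underlying native object mid-recursion.
        global::System.GC.KeepAlive(snapshot);
    }
}, TaskScheduler.FromCurrentSynchronizationContext());
```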
username_0: I'll keep that in mind, thanks!
username_0: OK, this started crashing again with `can't call java.lang.Object com.google.firebase.database.DataSnapshot.getValue() on null object`. The crash logs varied a bit, but behind every crash was a `getValue()` being called on something null. This time I just rewrote the recursive call, because apparently something in Firebase and/or IL2CPP can't handle it. Does Firebase test for this scenario? Anyway, after the rewrite the issue again resolved.
username_0: Removing recursive calls to databaseref.getValueAsync() resolved the crash.
Status: Issue closed
|
rnyholm/runcalc | 442088178 | Title: Version name handling
Question:
username_0: Set version name in gradle only and retrieve it from there when it's to be used in app, see https://stackoverflow.com/questions/29524583/reference-build-gradle-versionname-attribute-in-xml-layout/29525111<issue_closed>
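A common way to do this on Android (sketch; assumes a standard module where `BuildConfig` is generated by the Android Gradle plugin):
```kotlin
// build.gradle (app module): android { defaultConfig { versionName "1.2.3" } }

// Anywhere in app code, read the single source of truth:
val versionName: String = BuildConfig.VERSION_NAME
```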
Status: Issue closed |
dart-lang/http | 544952244 | Title: How can I change content-type with a POST request?
Question:
username_0: I created a custom api backend...
But I'm getting "TypeError: 'NoneType' object is not subscriptable"
So I'm looking for a way to change the request body to application/json
Answers:
username_1: The `post` and `client.post` methods have a `headers` named argument. If you pass `{'content-type': 'application/json'}` to that argument it will be used for the request.
Status: Issue closed
username_0: Ohhhhh... Thanks 😊
username_2: @username_1
http: ^0.12.2
```dart
const apiTokenAuth = PROD_URL + '/api-token-auth/';
const headers = {"content-type": "application/json"};
var body = {
"username": data.name,
"password": <PASSWORD>,
};
var res = await http.post(apiTokenAuth, headers: headers, body: body);
```
I get this error:
```bash
E/flutter ( 6833): [ERROR:flutter/lib/ui/ui_dart_state.cc(166)] Unhandled Exception: Bad state: Cannot set the body fields of a Request with content-type "application/json".
```
Where am I wrong?
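For reference, a sketch of the usual fix: JSON-encode the body into a String yourself, so the http package doesn't try to treat the map as form fields (reusing the `apiTokenAuth` constant from the snippet above; the fields on `data` are assumed from it too):
```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<http.Response> login() {
  return http.post(
    apiTokenAuth,
    headers: {'content-type': 'application/json'},
    // A String body is sent as-is; a Map body would be form-encoded,
    // which conflicts with an application/json content-type.
    body: jsonEncode({'username': data.name, 'password': data.password}),
  );
}
```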
username_2: Found the [answer](https://stackoverflow.com/questions/27574740/http-post-request-with-json-content-type-in-dartio) |
crusaderky/ndcsv | 404810276 | Title: Resilience to spurious commas
Question:
username_0: When exporting a CSV from Excel, it's common to have spurious commas on the right hand side.
They currently cause a crash:
```
buf = io.StringIO("""
y,y1,y2
x,
x0,1,2,,
""".strip())
ndcsv.read_csv(buf)
ValueError: Length mismatch: Expected axis has 4 elements, new values have 2 elements
```
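Until that's fixed in the library, a caller-side workaround could pre-trim trailing commas on the data rows (a sketch; `n_header_rows=2` is an assumption for the 2-D layout above, and this does not address the 1-D header case described next):
```python
import io
import ndcsv

def read_csv_tolerant(buf, n_header_rows=2):
    """Strip spurious trailing commas from data rows (e.g. Excel exports).

    Header rows are left untouched, since a trailing comma there is
    meaningful in the ndcsv format.
    """
    lines = buf.read().splitlines()
    head, data = lines[:n_header_rows], lines[n_header_rows:]
    cleaned = head + [line.rstrip(",") for line in data]
    return ndcsv.read_csv(io.StringIO("\n".join(cleaned)))
```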
A related problem is specific to the case of 1-dimensional arrays where the header does not end with a comma; in this case, the dimension name is lost:
```
buf = io.StringIO("""
x
x0,1
""".strip())
ndcsv.read_csv(buf)
<xarray.DataArray (dim_0: 1)>
array([1])
Coordinates:
* dim_0 (dim_0) <U2 'x0'
``` |
Pragmatists/open-trapp-ui | 705637957 | Title: Confusing storage of commonly-used tag combinations
Question:
username_0: What's the current way of storing and proposing commonly used tag combinations? I often need to log three different combinations, but OpenTrapp seems to only suggest max. 2 of those, sometimes 3, but if I don't use one for a week or so it jumps back to 2.
Can we have more options stored and suggested?
Answers:
username_1: Now it suggests 2 combinations used most often and 2 recently used. It's possible to suggest more options. I'll try to do that. |
ipfs/go-ds-badger | 418576720 | Title: Periodically garbage collect
Question:
username_0: We should periodically garbage collect in case the user has garbage collection turned off. Otherwise, we'll never delete _anything_.
Answers:
username_1: Why is `gcDiscardRatio` statically configured to 0.1? The struct field is unexported :-(
username_0: Not exporting it was probably a mistake, @username_2?
username_1: Gotcha, I'll submit a PR. Another thing, overriding the error here creates ambiguity: https://github.com/ipfs/go-ds-badger/blob/master/datastore.go#L286. Badger returns nil if one file was collected, and callers may use this as a hint to continue collecting by calling GC again. Overriding the "nothing was collected" error to nil makes it impossible to discriminate whether it's worth calling GC again or not :-(
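The loop pattern being described looks roughly like this (a sketch; 0.5 is an arbitrary discard ratio):
```go
package gc

import "github.com/dgraph-io/badger"

// collectGarbage keeps collecting while each pass makes progress.
func collectGarbage(db *badger.DB) {
	for {
		// nil means a value log file was rewritten, so another pass may help;
		// badger.ErrNoRewrite (or any real error) ends the loop.
		if err := db.RunValueLogGC(0.5); err != nil {
			break
		}
	}
}
```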
username_0: That's the issue. However, for every datastore _except_ badger, re-running GC till it removes nothing is just a waste of time so I'm not sure how useful that is.
See https://github.com/ipfs/go-ds-badger/pull/56 that just does this automatically.
username_2: Yeah, I probably meant to export it, but also set some value that would mostly work to quickly test and never changed it
username_3: @username_0 Is this up for grabs ?
username_0: Yes! (it's also pretty easy to review so it should get merged pretty quickly).
username_0: Fixed in #72.
Status: Issue closed
|
nathanpalmer/example-semantic-ui-ember-cli | 156562981 | Title: 404 errors for font files
Question:
username_0: I still get the errors:

Instead of app.css I have app.less, and in ember-cli-build.js I have:
var app = new EmberApp(defaults, {
lessOptions: {
paths: [
'bower_components/semantic-ui'
]
},
SemanticUI: {
css: false,
javascript: true,
fonts: true
}
});
And I have installed ember-cli-less through npm. I have a theme.config in the root, and app.less and theme.less in the styles folder in the app directory. Am I missing something? The font files are actually in dist/assets/css/themes/default/assets/fonts/ when the ember server is started.
Fonts do not appear. The themes and site folder is also in the styles directory. I cloned the example app here and I get the same errors I get in my project.
Answers:
username_1: I think the issue is because this demo is using semantic-ember 0.9.3.
I have a similar bug with 404s for the font assets, using the semantic-ember 2.0 branch.
Details here: https://github.com/Semantic-Org/Semantic-UI-Ember/issues/162
username_1: Ah, I think the issue is that semantic less uses grunt to do a relative path rewrite for the icon and image assets.
https://github.com/Semantic-Org/Semantic-UI-Ember/issues/162#issuecomment-254669496
How do we correct to get it to work with ember-cli-less?
username_2: @username_1 I haven't revisited this project since I made the original demo and don't use ember-cli-less. Were you able to get it to work?
username_1: @username_2 yes. it was a bit of a hack.
#1 - Define the theme path (default, otherwise) sources in ember-cli-build:
```js
source: {
css: 'bower_components/semantic-ui/dist',
javascript: 'bower_components/semantic-ui/dist',
images: 'bower_components/semantic-ui/dist/themes/default/assets/images', //DO NOT CHANGE THEME. Other themes do not have image assets (flags). Use the default theme here always.
fonts: 'bower_components/semantic-ui/dist/themes/default/assets/fonts'
},
```
#2 - Then you have to do a bunch of overrides to correct the paths, after you've loaded all the Semantic less files.
e.g.
```
/*
*
*
* Start BUGFIX For relative font paths
*/
//icon fix uses lazy-loading, so we can easily adjust any variable below.
@fontPath : "./themes/@{themenameForIcons}/assets/fonts";
/*-------------------
Icon Variables
--------------------*/
@fontName: 'icons';
@fallbackSRC: url("@{fontPath}/@{fontName}.eot");
//conditinal statement on which font files to load depending on theme.
//some of the themes have woff2 files missing.
//declare else function#1
.function (@param, @fontPath, @fontName) when not (@param = "default") {
//themenameForIcons is not equal to default.
@src:
url("@{fontPath}/@{fontName}.eot?#iefix") format('embedded-opentype'),
//url("@{fontPath}/@{fontName}.woff2") format('woff2'), //for example the material theme doesn't have a woff2 file, so we can exclude here.
url("@{fontPath}/@{fontName}.woff") format('woff'),
url("@{fontPath}/@{fontName}.ttf") format('truetype'),
url("@{fontPath}/@{fontName}.svg#icons") format('svg')
;
}
//declare if conditional function#1
.function (@param, @fontPath, @fontName) when (@param = "default") {
//themenameForIcons is equal to default
@src:
url("@{fontPath}/@{fontName}.eot?#iefix") format('embedded-opentype'),
url("@{fontPath}/@{fontName}.woff2") format('woff2'), //for example the material theme doesn't have a woff2 file, so we can exclude here.
url("@{fontPath}/@{fontName}.woff") format('woff'),
url("@{fontPath}/@{fontName}.ttf") format('truetype'),
url("@{fontPath}/@{fontName}.svg#icons") format('svg')
;
}
//run the function
.function (@themenameForIcons, @fontPath, @fontName);
/*******************************
Icon
*******************************/
@font-face {
font-family: 'Icons';
src: @fallbackSRC;
src: @src;
font-style: normal;
font-weight: normal;
font-variant: normal;
text-decoration: inherit;
text-transform: none;
}
```
username_1: Have to do a similiar thing with flags.
```
/*!
* # Semantic UI - Flag
* http://github.com/semantic-org/semantic-ui/
*
*
* Released under the MIT license
* http://opensource.org/licenses/MIT
*
*/
/*******************************
Theme
*******************************/
@type : 'element';
@element : 'flag';
@import (multiple) '../../theme.config';
/*
*
*
* Start Fix
*/
@imagePath : './themes/default/assets/images'; //correct issue with images
@formLoaderPath: "@{imagePath}/loader-large.gif";
@spritePath: "@{imagePath}/flags.png";
/*
*
* End fix
*/
/*******************************
Flag
*******************************/
i.flag:not(.icon) {
display: inline-block;
width: @width;
height: @height;
line-height: @height;
[Truncated]
text-decoration: inherit;
speak: none;
font-smoothing: antialiased;
backface-visibility: hidden;
}
/* Sprite */
i.flag:not(.icon):before {
display: inline-block;
content: '';
background: url(@spritePath) no-repeat -108px -1976px;
width: @width;
height: @height;
}
.loadUIOverrides();
``` |
Mardanjan/Blog | 577044171 | Title: JavaScript: Timers
Question:
username_0: ## setTimeout
+ The first parameter is a function, the second is a time in milliseconds
+ After the specified time has elapsed, the given code is executed (the task is pushed onto the task queue)
+ It runs only once
## setInterval
+ Same parameters as above
+ The return value is the timer's id
+ After each interval elapses, the given code is executed (the task is pushed onto the task queue)
+ It runs repeatedly, and keeps executing until the timer is cleared
+ To clear it: clearInterval(timerId)
## setImmediate
## requestAnimationFrame
I haven't used these last two yet
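A quick example of the setInterval/clearInterval pairing described above:
```javascript
const id = setInterval(() => console.log('tick'), 1000); // runs every second

// Stop it after roughly five ticks:
setTimeout(() => clearInterval(id), 5500);
```
|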
allista/ConfigurableContainers | 321680190 | Title: Add moist instead of sink em all compatibility
Question:
username_0: Don't know what this means, but will try to figure out...
Answers:
username_0: I need clarification on this; as far as I can understand, MOIST contains some configs in a SinkEmAll subfolder that include a patch for CC, which should work.
username_0: Closing as stale issue
Status: Issue closed
|
ARM-software/lisa | 999418157 | Title: Question: enable Arm energy probe (AEP) in ipython script
Question:
username_0: I want to enable AEP in an IPython script, and then invoke the functions to reset the meter and record it for a specific workload. If I remember correctly, there are two interfaces for enabling AEP: one is LISA's own interface and the other is based on devlib. So I am not clear on what a good practice is for enabling AEP in an IPython script.
I searched the documentation a bit; it just suggests using the configuration below:
```
# AEP Energy Meter configuration
aep-conf:
# Channels to use
# type: Mapping
channel-map: ['Device0' : 'BAT']
# Resistor values
# type: TypedList[float]
resistor-values: [0.033]
# List of labels
# type: TypedList[str]
labels: ['aep']
# TTY device
# type: TypedList[str]
device-entry: ['/dev/ttyACM0']
```
If this is the right way to enable AEP, then what function should I use in the IPython script for energy measurement? Thanks for any suggestions!
Answers:
username_1: Hi @username_0 ,
The current situation is:
* The code itself lives in devlib and is therefore shared with workload automation
* Lisa used to have implementation but they have been removed for all the bits that were just plain duplication of devlib
* Lisa now only sticks to providing yaml configuration, which could be useful in a setting like with exekall, but I don't know if there is a real use case right now
If all you want is to instantiate a Python object in a custom script, I would suggest using devlib directly (passing a Lisa target object should work), or using the wrapper class from Lisa if it has useful context managers etc (can't remember if it does). Unless you really need it, avoid the yaml conf, as all the keys are mapped to constructor parameters that you can invoke directly.
Once you have your instance, you should get devlib instrument API on it to get some data
https://devlib.readthedocs.io/en/latest/instrumentation.html
That stack of APIs clearly needs some love and documentation, and there is also an idea floating around in devlib to unify the instrument and collector API. The future is probably in that direction.
username_0: Hi @username_1 ,
Thanks for the guidance. Based on your suggestion, I read devlib and worked out the notebook here:
https://gist.github.com/username_0/df22336a035140d2207b63282a681e12
Seems to me this is not too bad. On the other hand, I'd like to suggest adding some examples to the repo so any user could easily enable the power meter; otherwise, developers need to dive into the source code to find what they want. But this seems to be within devlib's scope and it adds extra project-maintenance workload, so it's up to your plans.
Anyway, thanks a lot for the info; I am trying out other features in LISA. I appreciate the quick response during your holiday (sorry for disturbing you; feel free to wait to reply until you're back!).
Status: Issue closed
|
Dave-Browne/CarND_P3-Behavioral-Cloning | 205125123 | Title: Images for README
Question:
username_0: 
Answers:
username_0: 
 |
rumblesan/improviz | 842802706 | Title: Improviz doesn't accept OSC lists
Question:
username_0: I'm attempting to send random fill values from Pure Data to Improviz via OSC but I'm finding that it's not working.
This is the improviz code that I'm using
```
color = ext(:color, 0)
fill(color)
strokeSize(8)
rotate(time)
cube(4)
```
And the Pure Data patch I'm using is attached, and screenshot below:

[pd_improviz.zip](https://github.com/username_1/improviz/files/6217862/pd_improviz.zip)
If I click on the toggle to generate random values or click on the message box Improviz returns this error: `ERROR: invalid OSC address`
If I send just one number it works but it only changes the fill to shades of grey
Answers:
username_1: Ok, so!
In theory this is an easy-ish change to make, and I prototyped it yesterday to see how feasible it is. But it raises a bit of a problem.
The way that Improviz handles errors in the programs that get sent, is it keeps track of the "last known working" program, so when an error occurs in a program, it blows up, and Improviz just falls back to the previous good state. The problem with allowing the OSC input (and this is actually an issue now that it allows strings, but I think it's become more obvious with arrays) is that a program can be good with one OSC input, get saved as good, and then when the OSC input changes it now crashes, and the last known good state that improviz has will also crash with that input.
Just to be really verbose and explain :D
```
color = ext(:color, 0)
fill(color)
strokeSize(8)
rotate(time)
cube(4)
```
as long as you send a number to `/vars/color` over OSC this will be fine, but if you send an array, then the `fill` function will complain that it can't use an array, and will crash, but this program has already been saved as good, and it will enter a crash loop.
I'm not really sure what the best course of action is, and it really depends on peoples expectations as users.
Things I'm definitely sure of:
* Improviz should not enter a crash loop
* It should be reasonably easy to figure out what the problem is
* It shouldn't be something that is going to impede a live performance in too major a fashion
Things that could be solutions, but I'm unsure about:
* functions that get unexpected values (array or string instead of number) throw a warning in the logs and just do the best they can. (maybe default to 0 value for example)
* Improviz warns if the type of an OSC variable changes ("color was originally a number but now it's an array. I'm not changing it")
* Improviz keeps a history of the OSC variables that were last working with the program, though this could definitely lead to unexpected behaviour
None of those feel especially difficult to implement, but I'm aiming for minimum level of surprise for people who use it |
ChaissonLab/danbing-tk | 793485129 | Title: Boundary expanded VNTRs
Question:
username_0: Dear ChaissonLab,
I was curious if the boundary expanded VNTRs are made available (or can be made available) as described in page 21 of your preprint https://www.biorxiv.org/content/10.1101/2020.08.13.249839v3.full.pdf
If not, can you pinpoint on how to perform this expansion on an assembly or hg38?
Thank you.
Doruk
Answers:
username_1: Hi Doruk,
We can provide the intervals on hg38, and some of the assemblies, but
four are under embargo for
https://www.biorxiv.org/content/10.1101/2020.12.16.423102v1 to be
published. Are you looking to incorporate your own assemblies?
Best,
-Mark
username_2: Hi @username_0,
If you would like to identify the VNTR boundaries with your own assemblies, it's doable by running the [`danbing-tk build`](https://github.com/ChaissonLab/danbing-tk#danbing-tk-build) with an additional option `--until JointTRAnnotation` when invoking snakemake. This will skip steps to generate RPGG. Let me know if any of the documentation is unclear.
Thanks,
-Tony
username_0: Hi Mark ,
It would be nice to have the intervals on hg38 initially, but I would also be happy for the intervals you can provide on assemblies that are not under an embargo. Incorporating our own assemblies would be interesting, but I am curious to start with available intervals as an initial experiment!
Hi Tony,
Thank you for pointing out how. I will look into it. Your approach seems to use the intervals from the assemblies, but is there a default set of intervals you provide?
Best,
Doruk
username_2: Hi Doruk,
I've now added assemblies that we can release at this moment and their VNTR coordinates under [v1.0](https://github.com/ChaissonLab/danbing-tk/tree/v1.0). Hope this helps!
-Tony
username_0: Hi Tony,
Thank you so much for adding the assemblies and their VNTR coordinates. I have already seen one case I am aware of where a VNTR interval split into 2 regions by Tandem Repeats Finder is nicely merged into one in your tr.good.bed file.
A few questions about the data: my understanding is that the tr.good.bed file is a step-by-step filtering of the 84,411 loci down to 73,582 and then to 32,138.
Question 1) Can we also get the 84,411 and 73,582 sets of loci as two separate bed files?
Question 2) Can we run danbing-tk build using the option --until JointTRAnnotation and use hg38.fa reference genome as my assembly to generate its set of expanded VNTRs?
username_2: Hi Doruk,
Thanks for asking, glad to know the initial sets could be useful to others. I've now included the two sets under v1.0 as well.
And yes, annotations on hg38 can be done by running the `pipeline/RefGraph.snakefile` pipeline with the option you mentioned.
Let me know if you have any problems running the pipeline. Thanks!
Best,
-Tony
Status: Issue closed
username_0: Hi Tony,
Thank you for including the unfiltered TR coordinates, and answering my questions!
Of course -- will do so!
Best,
Doruk |
dotnet/roslyn-analyzers | 813977100 | Title: ConfigureAwait analzyer needs to be updated for IAsyncDisposable
Question:
username_0: Found in https://github.com/dotnet/roslyn/pull/51328
The appropriate pattern for IAsyncDisposable (with ConfigureAwait) is to do the following:
```c#
var storage = await persistService.GetStorageAsync(solution, cancellationToken).ConfigureAwait(false);
await using var _ = storage.ConfigureAwait(false);
```
Note the *two* `.ConfigureAwait(false)` calls. One on the Task to get the IAsyncDisposable instance, then one on the IAsyncDisposable in the `using` statement to get an appropriate configured awaiter for the `DisposeAsync()` call.
Right now, you can write:
```c#
await using var storage = await persistService.GetStorageAsync(solution, cancellationToken).ConfigureAwait(false);
```
Which is correct for getting the value, but not for appropriately calling `DisposeAsync()`.<issue_closed>
Status: Issue closed |
okken/pytest-check | 413699197 | Title: Idea: context manager API
Question:
username_0: So one thing in pytest-check that is reminiscent of unittest and nose is the assertion api. While it should work fine with pytest's assertion rewriter you still need to remember what the method name is. Is it `check.less` or `check.less_than`? Is it `check.equal` or `check.is_equal`? You can never tell.
So an alternative:
```python
with check: assert a < 1
with check: assert 1 < a < 10
```
Context managers can "silence" exceptions by returning True from `__exit__`.
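A minimal sketch of what such a collecting context manager could look like (illustrative only, not the plugin's actual implementation):
```python
class Check:
    """Collects AssertionErrors instead of letting them fail the test."""

    def __init__(self):
        self.failures = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is AssertionError:
            self.failures.append(str(exc_value))
            return True  # suppress, so the test keeps running
        return False     # let any other exception propagate
```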
Answers:
username_1: This is brilliant
Status: Issue closed
username_1: 0.3.3 added a `check()` context manager.
So this now works:
```
with check(): assert a < 1
with check(): assert 1 < a < 10
```
username_1: Also, kinda really stoked that you took a look at the plugin.
I've learned a lot from you over the past few years.
username_0: Heh, thanks. Not wanting to nitpick, but I had hoped for a singleton-ish API (no function call, as it seems unnecessary). I'll make a PR to fix it, what do you think?
username_0: Another thing that I noticed - only AssertionErrors are collected - is that intentional?
username_1: I like the Singleton idea. Wasn’t sure how to accomplish it. So please, PR
username_1: Only catching assertions. Ya. It was intentional. But maybe not in line with normal pytest behavior of all exceptions causing failure.
Do you have an opinion?
username_0: So now that I think about it more, I'd probably not like seeing a dozen undefined-variable failures, so it's fine as it is now.
Arquisoft/radarin_es6b | 817212572 | Title: Tipo de base de datos
Question:
username_0: ¿Que tipo de base de datos vamos a usar para la el almacenamiento de localizaciones y otros datos?
Answers:
username_1: Podríamos usar MongoDB para la persistencia de las localizaciones de los usuarios. Para el resto de la información podríamos guardarla en los pods.
Documentación: https://www.mongodb.com/es/what-is-mongodb
username_2: Estoy de acuerdo con @username_1 . Además, en el proyecto inicial que se nos entregó ya utiliza esta base de datos por lo que me parece buena idea mantenerla.
username_0: Estoy de acuerdo en usar MongoDB como una base de datos relacional
username_3: Yo también estoy de acuerdo.
username_4: Sí yo también opino igual en cuanto a usar MongoDB
Status: Issue closed
|
netlify/zip-it-and-ship-it | 553866604 | Title: NPM modules missing with 0.4.0-8
Question:
username_0: IIRC some updates to zip & ship made sure that function modules were automatically installed.
It doesn't look like this is the case with `0.4.0-8`
See build logs: https://app.netlify.com/sites/functions/deploys/5e28e2a39ad3e7000a0d9f25
```
4:03:52 PM: Prepping functions with zip-it-and-ship-it 0.4.0-8
4:03:53 PM: Error: In file "/opt/build/repo/functions/add-example.js": Cannot find module 'git-url-parse/package.json' from '/opt/build/repo/functions'
4:03:53 PM: Error prepping functions
```
Answers:
username_1: Same issue when trying to deploy a function with `[email protected]` dependency:
```
4:41:05 PM: Prepping functions with zip-it-and-ship-it 0.4.0-8
4:41:07 PM: Error: In file "/opt/build/repo/dist/functions/netlify/nextApp/nextApp.js": Cannot find module 'node-sass/package.json' from '/opt/build/repo/node_modules/sass-loader'
4:41:07 PM: Error prepping functions
```
sass-loader has them defined as peerDependencies https://github.com/webpack-contrib/sass-loader/blob/master/package.json
username_0: Interesting! Thanks for debugging that bit. Looks like we will need to account for this in the bundling resolution
username_2: @DavidWeels, this seems to be a problem with the current production version too (`0.3.1`). I cloned the repository and removed the `ZISI_VERSION` environment variable and [got the same error](https://app.netlify.com/sites/lucid-jepsen-560d48/deploys/5e2b511fe3b729d764f67484).
@username_1, could you please try removing the `ZISI_VERSION` environment variable and check if you still get the error? Just to debug whether this is the same issue or a separate one.
username_2: Closing this issue as this is stale. Feel free to re-open if this is still happening.
Status: Issue closed
|
feathericons/feather | 406597421 | Title: Icon request: file-check, file-x, folder-check, folder-x
Question:
username_0: ## Icon Request
* Icon name: “check” and “x” variants of the `file` and `folder` icons
* Use case: There are already `file-plus` and `file-minus` icons (and the equivalent for folders) to symbolize adding and removing these objects, but being able to symbolize success and validation in relation to files and folders would be enormously useful for many apps, especially those involving uploads
* Screenshots of similar icons:
<img width="243" alt="screen shot 2019-02-04 at 8 43 37 pm" src="https://user-images.githubusercontent.com/10377391/52248691-9a0d3e80-28bd-11e9-81cc-50f35c451375.png">
<img width="246" alt="screen shot 2019-02-04 at 8 43 26 pm" src="https://user-images.githubusercontent.com/10377391/52248693-9bd70200-28bd-11e9-8b29-8d75480a6227.png">
(all from [unicons](https://iconscout.com/unicons))
Answers:
username_1: @colebemis @username_2 @username_0 How does [this](https://www.figma.com/file/vMeF1zmyWdW68RxzQsYeCj/file-folder-check-times) look?
username_2: This one looks great, but make sure to be in compliance with #171.
username_1: Yup ... looking like it is in compliance ... going to submit a PR!
username_2: @johnlestey
No they are not.
username_2: Strokes = Borders. This means you should use borders instead of fills. Also, you shouldn't use the masks provided for Figma when submitting your icons. You should use only shapes themselves.
username_1: @username_2 What is wrong?
username_1: Refactoring now ... will update you to have a look when I'm done!
username_1: @username_2 I've updated the icons ... please take a look! They now use SVG shapes instead of components and also I have shrunk the "x" to have the same top and bottom space as the "check"
username_2: Great, but please use a border for the checks and X, instead of a fill. BTW, you should remove the original "file" and "folder" groups from the icons.
username_1: @username_2 The checks and Xs now have a 1px border (is that a violation of #171?) and I have removed all icon groups ... solely vectors now!
username_2: #171 is again very clear: "Every line and shape has a 2px center-aligned stroke with round joins and round caps"
username_1: @username_2 Have changed ... but IMHO, they look ugly now
username_1: @username_2 @username_0 How does it look now?
<img width="477" alt="Screen Shot 2019-07-02 at 7 31 27 AM" src="https://user-images.githubusercontent.com/30328854/60482745-777c2780-9c9b-11e9-8254-17adb22fed35.png">
username_2: Maybe make the checks and crosses smaller.

username_1: @username_2 @ahtohbi4 How does it look now? [Figma](https://www.figma.com/file/C7cEHvYfSsGpvx9utbnFnhJy/Feather-Icons?node-id=0%3A1)
<img width="606" alt="Screen Shot 2019-07-08 at 8 00 15 AM" src="https://user-images.githubusercontent.com/30328854/60784192-a2acbe00-a156-11e9-8d71-72adfceef10a.png">
username_2: Perfect! |
zzzprojects/EntityFramework.Extended | 415023503 | Title: UpdateAsync() -
Question:
username_0: My code:
```
await _dbContext.MyEntity.Where(x => x.Id == myEntity.Id && x.Date < myEntity.Date)
.UpdateAsync(x => new MyEntity { ValueAsJson = myEntity.ProductAsJson, Date = myEntity.Date });
```
Is it possible to tell the UpdateAsync() method what kind of type my parameter "ValueAsJson" is? I know it would work if I set it to "NpgsqlDbType.Jsonb".
Status: Issue closed |
sensu/sensu-go | 705771119 | Title: Silences fail to resolve after cluster restart
Question:
username_0: ## Expected Behavior
After a silence is applied, it should resolve after its period, even through cluster restarts.
## Current Behavior
Silences seem to persist past cluster restarts.
## Possible Solution
Unknown.
## Steps to Reproduce (for bugs)
1. Create silences for pretty much everything, and do a rolling upgrade of Sensu.
2. Observe that silences don't resolve.
## Context
Reported by a customer.
Answers:
username_1: This likely requires a spike.
Status: Issue closed
|
badges/shields | 169691065 | Title: package.json files array is missing measure-text.js
Question:
username_0: When installed using NPM from the GitHub repo (commit fbe13d7), requiring gh-badges results in:
```
module.js:442
throw err;
^
Error: Cannot find module './measure-text'
at Function.Module._resolveFilename (module.js:440:15)
at Function.Module._load (module.js:388:25)
at Module.require (module.js:468:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (/home/drew/tmp/badge-test/node_modules/gh-badges/badge.js:5:24)
at Module._compile (module.js:541:32)
at Object.Module._extensions..js (module.js:550:10)
at Module.load (module.js:458:32)
at tryModuleLoad (module.js:417:12)
at Function.Module._load (module.js:409:3)
```
Adding `"measure-text.js"` to package.json fixes this issue.
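A sketch of what that fix looks like in package.json (the surrounding `files` entries are illustrative):
```json
{
  "files": [
    "badge.js",
    "measure-text.js"
  ]
}
```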
Status: Issue closed |
ElectronNET/Electron.NET | 330206693 | Title: csproj ComputeFilesToPublish section cause "Error occurred during dotnet publish." error on Angular projects
Question:
username_0: Hi,
First of all, thank you for developing such a project. I'm trying to use Electron.NET with the Visual Studio Angular 5 template. To facilitate the publishing process, they have put some post-publish scripts in the csproj file.
```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>netcoreapp2.0</TargetFramework>
<TypeScriptCompileBlocked>true</TypeScriptCompileBlocked>
<TypeScriptToolsVersion>Latest</TypeScriptToolsVersion>
<IsPackable>false</IsPackable>
<SpaRoot>ClientApp\</SpaRoot>
<DefaultItemExcludes>$(DefaultItemExcludes);$(SpaRoot)node_modules\**</DefaultItemExcludes>
<!-- Set this to true if you enable server-side prerendering -->
<BuildServerSideRenderer>false</BuildServerSideRenderer>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="ElectronNET.API" Version="0.0.9" />
<PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.8" />
<PackageReference Include="Microsoft.AspNetCore.SpaServices.Extensions" Version="2.0.0" />
</ItemGroup>
<ItemGroup>
<DotNetCliToolReference Include="ElectronNET.CLI" Version="0.0.9" />
</ItemGroup>
<ItemGroup>
<!-- Don't publish the SPA source files, but do show them in the project files list -->
<Content Remove="$(SpaRoot)**" />
<None Include="$(SpaRoot)**" Exclude="$(SpaRoot)node_modules\**" />
</ItemGroup>
<ItemGroup>
<!-- Files not to publish (note that the 'dist' subfolders are re-added below) -->
<Content Remove="ClientApp\**" />
</ItemGroup>
<Target Name="DebugEnsureNodeEnv" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('$(SpaRoot)node_modules') ">
<!-- Ensure Node.js is installed -->
<Exec Command="node --version" ContinueOnError="true">
<Output TaskParameter="ExitCode" PropertyName="ErrorCode" />
</Exec>
<Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." />
<Message Importance="high" Text="Restoring dependencies using 'npm'. This may take several minutes..." />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
</Target>
<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build -- --prod" />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build:ssr -- --prod" Condition=" '$(BuildServerSideRenderer)' == 'true' " />
<!-- Include the newly-built files in the publish output -->
<ItemGroup>
<DistFiles Include="$(SpaRoot)dist\**; $(SpaRoot)dist-server\**" />
<DistFiles Include="$(SpaRoot)node_modules\**" Condition="'$(BuildServerSideRenderer)' == 'true'" />
<ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
<RelativePath>%(DistFiles.Identity)</RelativePath>
<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
[Truncated]
<ItemGroup>
<Content Update="electron.manifest.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</Content>
</ItemGroup>
</Project>
```
As you can see, they run some npm commands on `PublishRunWebpack`. But if I run the `dotnet electronize start` command, I get the "Error occurred during dotnet publish." error, and if I remove this section, `dotnet electronize start` runs without an error.
I have inspected the source code of ElectronNET.CLI and found which command causes this error.
[StartElectronCommand.cs#L54](https://github.com/ElectronNET/Electron.NET/blob/master/ElectronNET.CLI/Commands/StartElectronCommand.cs#L54)
It's executing `dotnet publish -r win-x64 --output "D:\My\infinity-item\src\InfinityItem\InfinityItem\obj\Host\bin"` in my environment.
And if I run this command manually, it runs without an error.
Then I realized this command also runs successfully with `dotnet electronize start`, because I can see the compiled output in the `D:\My\infinity-item\src\InfinityItem\InfinityItem\obj\Host\bin` path.
So the problem may be related to exit codes when ProcessHelper.CmdExecute is called, even though the npm commands themselves are successful.
Answers:
username_0: I made a Cake addin for Electron.NET to use in my personal project. Maybe it can be useful for someone else too.
https://github.com/username_0/Cake.Electron.Net
username_1: Thank you @username_0
Status: Issue closed
username_2: I solved this problem by suppressing the ng build output. I made these changes in the .csproj:
From:
```xml
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build -- --prod" ConsoleToMSBuild="true" />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build:ssr -- --prod" Condition=" '$(BuildServerSideRenderer)' == 'true' " />
```
To:
```xml
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build -- --prod --no-progress" ConsoleToMSBuild="true" />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build:ssr -- --prod --no-progress" Condition=" '$(BuildServerSideRenderer)' == 'true' " />
```
hidonguyen/Develover.WebUI | 817825313 | Title: Social insurance declaration table
Question:
username_0: Controller name: SocialInsuranceDeclaration
Model name: SocialInsuranceDeclarationViewModel
Property list:
Guid Id,
DateTime EffectiveDate (effective date),
double SIPercentageByEmployer (employer social insurance %),
double HIPercentageByEmployer (employer health insurance %),
double UIPercentageByEmployer (employer unemployment insurance %),
double UFPercentageByEmployer (employer union fee %),
double SIPercentageByEmployee (employee social insurance %),
double HIPercentageByEmployee (employee health insurance %),
double UIPercentageByEmployee (employee unemployment insurance %),
double UFPercentageByEmployee (employee union fee %),
double PersonalIncomeTaxDeduction (personal deduction amount),
double DependentTaxDeduction (deduction amount per dependent),
double PITGroup1From (taxable income from), double PITGroup1To (to), double PITGroup1Percentage (tax %),
double PITGroup2From (taxable income from), double PITGroup2To (to), double PITGroup2Percentage (tax %),
double PITGroup3From (taxable income from), double PITGroup3To (to), double PITGroup3Percentage (tax %),
double PITGroup4From (taxable income from), double PITGroup4To (to), double PITGroup4Percentage (tax %),
double PITGroup5From (taxable income from), double PITGroup5To (to), double PITGroup5Percentage (tax %),
double PITGroup6From (taxable income from), double PITGroup6To (to), double PITGroup6Percentage (tax %),
double PITGroup7From (taxable income from), double PITGroup7To (to), double PITGroup7Percentage (tax %)
Status: Issue closed |
StylishThemes/GitHub-Dark | 172196164 | Title: [new navigation for StackOverflow (beta feature)] the text in 2 of the 3 new tabs appears truncated
Question:
username_0: ### Unstyled Content (not styled properly)
- [X] No other existing issue and/or pull request.
- [ ] URL of unstyled content.
- [ ] If unable to provide a URL, please report the class name of unstyled content.
- [X] Provide steps to reproduce (opening a dialog, etc).
<br>
Affected pages: http://stackoverflow.com/*
I've been testing the [New Navigation for StackOverflow](http://meta.stackexchange.com/questions/256814/new-navigation-for-stack-overflow-is-in-alpha-testing) (beta feature) some months now (using FF 48, Stylish 2.0.7 in win 10 x64).
I've noticed this issue:
The text in 2 of the 3 new tabs, `voted` and `active`, appears truncated after applying the userstyle.
All three tabs have the classname `.intellitab`.
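As a data point for a fix, a hedged sketch of the kind of userstyle override that could restore the clipped labels (the selector comes from this report; the declarations are guesses, not the theme's actual fix):
```css
.intellitab {
  width: auto !important;       /* undo any fixed width from the theme */
  overflow: visible !important; /* let the full label render */
  text-overflow: clip !important;
}
```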
STR
Log in to http://stackoverflow.com having enabled the new-nav feature.
Screenshots:


Answers:
username_1: Hi @username_0!
Thanks, I'll look into this problem! And don't worry about closing this issue and reopening it in the Stackoverflow repo.
Status: Issue closed
username_0: Sorry about that! Here it is https://github.com/StylishThemes/StackOverflow-Dark/issues/33 |
SharePoint/sp-dev-docs | 334405105 | Title: Error - gulp package-solution
Question:
username_0: schema.xml must be in the same folder as elements.xml.
Otherwise, "gulp package-solution" crashes with an error.
---
#### Document details
⚠ *Do not edit this section. It is required for linking docs.microsoft.com to the GitHub article.*
* ID: 2c30373b-3c44-c8bc-75d1-bb788efb479e
* Version Independent ID: 5ad284d8-3320-e7f3-0164-d29d3d82b74d
* Content: [Provisioning SharePoint assets from your client-side SharePoint web part](https://docs.microsoft.com/de-de/sharepoint/dev/spfx/web-parts/get-started/provision-sp-assets-from-package)
* Content Source: [docs/spfx/web-parts/get-started/provision-sp-assets-from-package.md](https://github.com/SharePoint/sp-dev-docs.de-de/blob/live/docs/spfx/web-parts/get-started/provision-sp-assets-from-package.md)
* Product: **sharepoint**
* GitHub Login: @spdevdocs
* Microsoft Alias: **spdevdocs**
Answers:
username_1: That's indeed correct. The packaging does assume that the schema.xml file is located next to the elements.xml file.
Status: Issue closed
|
johnwdubois/rezonator | 713937103 | Title: "Field" Menu Bar Options
Question:
username_0: **What is the background of this feature request? Please describe.**
@johnwdubois We'll need this ticket filled out with a description of what the "Field" option should do.
**Describe what to do for the development of this feature**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
**What is the estimated development time?**
An estimate in hours. Ex. 2.5 hours, 6 hours.
**Describe the individual development tasks that make up this feature (along with a dev hour estimate)**
- [ ] Example Task - 1 hr
- [ ] Example Task- 2 hr
- [ ] Example Task- 3 hr
- [ ] Example Task - 4 hr |
ripple/rippled-network-crawler | 90814465 | Title: In rawcrawl_util.js using normalizePubKey() sometimes produces false ip addresses
Question:
username_0: This is because normalizePubKey adds the default port 51235 to peers that don't have a port defined, so several rippleds on one IP all end up attached to that single ip:port (ipp).
PR #12 should not be merged until this is resolved.
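A minimal sketch of the failure mode described above (the function body is an assumption reconstructed from the description, not the actual source):
```js
// Assumed behavior: append the default port when none is given.
function normalizePubKey(ipp) {
  return ipp.includes(':') ? ipp : ipp + ':51235';
}

// Several rippleds reported on one IP without ports...
console.log(['10.0.0.1', '10.0.0.1'].map(normalizePubKey));
// => ['10.0.0.1:51235', '10.0.0.1:51235'] -- distinct nodes collapse onto one ip:port
```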
Status: Issue closed |
ServiceStack/Issues | 162766263 | Title: ServiceStack in Xamarin
Question:
username_0: I'm having a weird issue with the recent updates. I'm using Visual Studio Enterprise 2015 and Xamarin.Forms 2.3.
I'm using ServiceStack, and everything works fine, but here's where the story goes weird.
I have a REST API built with ServiceStack, and the app is the client. When I request a method, no matter whether it's using POST or GET, the API response is correct, but when it enters the app the values come back empty for a custom property type. I know the response is correct because the API logs say so, and I know it reaches the app with the values present (verified with Wireshark). Furthermore, it was working before I upgraded from Visual Studio 2013 to Visual Studio 2015. I formatted my PC and reinstalled everything; it still doesn't work.
If I run the same code (client methods) in a Windows Forms app it works as expected, but running it on Xamarin.Forms it does not.
The issue happens with a custom type I created, but not when I use the generic type.
The API has a DataProperty property I use to send some encoded data.
```
public class DataProperty
{
protected string base64Value;
public DataProperty();
public string Value { get; }
public byte[] Get();
public DataProperty Set(byte[] data);
public static implicit operator DataProperty(byte[] data);
public static implicit operator byte[] (DataProperty PropertyValue);
}
```
```
public class DataProperty<T> : DataProperty
{
public DataProperty();
public DataProperty<T> Set(byte[] data);
public static implicit operator DataProperty<T>(byte[] data);
public static implicit operator byte[] (DataProperty<T> PropertyValue);
}
```
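For readers skimming the signatures above, a usage sketch of the implicit operators (assuming they simply wrap and unwrap the Base64-encoded byte array):
```cs
DataProperty<string> prop = new byte[] { 0x31, 0x33 }; // implicit byte[] -> DataProperty<T>
byte[] raw = prop;                                     // implicit DataProperty<T> -> byte[]
string base64 = prop.Value;                            // Base64 view of the wrapped bytes
```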
ServiceStack's JSON response returns:
```
[0:] {
"Token": {
"Value": "<KEY>
},
"Id": {
"Value": ""
},
"AId": {
"Value": ""
},
"Iden": {
"Value": ""
},
"ACID": 1522,
"ResponseStatus": null
}
```
[Truncated]
The client implementation method is an `async Task<bool>` and looks like the following:
```
MyMethodRequest request = new MyMethodRequest();
using (JsonServiceClient client = new JsonServiceClient(serviceURL))
{
try
{
var result = await client.PostAsync(request);
Debug.WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
return true;
}
catch (Exception e)
{
Debug.WriteLine("Error:[" + e.Message + "]");
return false;
}
}
```
Any ideas?
Answers:
username_1: Can you provide a small, stand-alone example that reproduces the issue? You likely won't need to call a Service; can you provide an example of a DTO that fails to serialize in Xamarin.Forms?
Also, why are you wrapping values in `DataProperty<T>`? Your DTOs/POCOs shouldn't have behaviors.
username_0: Thanks username_1!
I'm Working on the stand-alone example, I have the client, PCL+WinForms and the same PCL+Xamari.Forms but I'm working on the example service.
I'm wrapping values into a `DataProperty<T>` beacuase I need to encrypt/decrypt it so I know the Type of the encrypted Data to work correctly with it.
username_1: Seems strange to encrypt fields individually instead of just encrypting the whole DTO.
BTW I've just included [Encrypted Messaging](https://github.com/ServiceStack/ServiceStack/wiki/Encrypted-Messaging) support in the Xamarin iOS/Android and Mac client platform builds so you could now use that on iOS/Android if needed.
It's available from v4.0.61 that's now [available on MyGet](https://github.com/ServiceStack/ServiceStack/wiki/MyGet).
username_0: Its complicated, is a bank-like app so we manage AES and RSA encryptions with dynamic keys per device and I can't have an static key to encrypt/decrypt data.
Here is the example of the backend
https://github.com/username_0/ServiceStackBug
username_0: Finally, here is the client https://github.com/username_0/ServiceStackBugApp
You can run it on the Droid client or the WFExample
1. You need to bring up the service from here: https://github.com/username_0/ServiceStackBug
2. Bring up the clients; the examples contain the Droid and the Windows Forms clients. Executing the client on the Droid (Xamarin.Forms), it does execute but no values are returned; on the other hand, executing it on the DesktopApp (Windows Forms), it executes and shows the values.
username_2: I ran ExampleApp.Droid application and got the following output. Are these values correct? Visual Studio Version shows info:
Microsoft Visual Studio Community 2015
Version 14.0.25123.00 Update 2
Xamarin 4.0.3.214 (0dd817c)
Xamarin.Android 6.0.3.5 (a94a03b)
Xamarin.iOS 9.6.1.8 (3a25bf1)
Output:
```
06-29 23:29:17.198 I/mono-stdout( 802): {
"FistValueResponse": {
06-29 23:29:17.207 I/mono-stdout( 802): "FistValueResponse": {
"Value": "MQAzAA=="
06-29 23:29:17.207 I/mono-stdout( 802): "Value": "MQAzAA=="
},
06-29 23:29:17.220 I/mono-stdout( 802): },
"SecondValueReponse": {
06-29 23:29:17.229 I/mono-stdout( 802): "SecondValueReponse": {
"Value": "MQAzAA=="
},
"ThirdValueResponse": {
"Value": "MQAzAA=="
06-29 23:29:17.237 I/mono-stdout( 802): "Value": "MQAzAA=="
06-29 23:29:17.237 I/mono-stdout( 802): },
06-29 23:29:17.237 I/mono-stdout( 802): "ThirdValueResponse": {
06-29 23:29:17.237 I/mono-stdout( 802): "Value": "MQAzAA=="
},
06-29 23:29:17.248 I/mono-stdout( 802): },
"PlainValueResponse": 13,
[0:] {
"FistValueResponse": {
"Value": "MQAzAA=="
},
"SecondValueReponse": {
"Value": "MQAzAA=="
},
"ThirdValueResponse": {
"Value": "MQAzAA=="
},
"PlainValueResponse": 13,
"ResponseStatus": null
}
06-29 23:29:17.259 I/mono-stdout( 802): "PlainValueResponse": 13,
06-29 23:29:17.259 I/mono-stdout( 802): "ResponseStatus": null
06-29 23:29:17.259 I/mono-stdout( 802): }
"ResponseStatus": null
}
```
username_0: Yeah, those values are right.
I have the same Visual Studio version, but on the Xamarin side I have different ones:
Xamarin 4.1.1.3 (34a92cd)
Xamarin.Android 6.1.1.1 (7db2aac)
Xamarin.iOS 9.8.1.4 (3cf8aae)
username_0: Ok I downgraded the versions of Xamarin and it worked correctly but using the current stable versions doesn't.
username_1: If it works in .NET and in a previous release of Xamarin, it's a regression in Xamarin. We'll keep looking to see if we can identify the issue, but the fix is going to need to be in Xamarin's libraries.
username_2: Xamarin which comes with Visual Studio Update 3 also has this issue.
Xamarin 4.1.0.530 (2e39740)
Xamarin.Android 6.1.0.71 (4e27558)
username_1: Hi, the issue is that `Value` doesn't have a public setter. It works when `Value` has a public setter, i.e.:
```
public string Value { get; set; }
```
DTOs should have properties with public setters/getters in order for the property to be retrieved and populated during serialization. There is a change in the underlying Mono reflection APIs that causes the difference in behavior, but you shouldn't be relying on serialization of private setters.
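A minimal side-by-side illustration of the difference described above (sketch only; the property name mirrors the report):
```cs
public class FailingDto
{
    // Private setter: deserializes fine on .NET/older Xamarin,
    // but comes back empty on the affected Xamarin versions.
    public string Value { get; private set; }
}

public class WorkingDto
{
    // Public setter: populated correctly everywhere.
    public string Value { get; set; }
}
```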
Status: Issue closed
username_2: Created bug about Xamarin regression in Xamarin Bugzilla
https://bugzilla.xamarin.com/show_bug.cgi?id=42357
username_0: Thank you guys! I was digging in Xamarin's forums to report the bug, but you did it!
It's a pity I can't use that private setter in particular, but at least we finally know what the issue was.
username_2: @username_0 Support for properties with private setter is added via this commit https://github.com/ServiceStack/ServiceStack.Text/commit/1eb5254f8d98533885e95395b7bb72ca280ba0c1 and is available in v4.0.61 on MyGet
It works with Xamarin v4.1, and now your sample on Android produces the same results as the Windows Forms application.
username_0: Awesome! thank you guys! keep up the good work!! |
uchicago-computation-workshop/ben_golub | 313731221 | Title: Question about broadcasting
Question:
username_0: You mentioned that there are several ways of broadcasting: twitter, TV, newspaper, etc. Do you think different ways of broadcasting might influence your results?
Answers:
username_1:  |
Robadob/sdl_exp | 186260342 | Title: Linux Support
Question:
username_0: Working version on linux would be nice.
Makefile has been created, however OpenGL issues are stopping any rendering (currently)
Answers:
username_1: If you add this code before the context is created, the last line might flush the context flags. It might simply be the case that forward compatibility is enabled on Linux by default, but not on Windows.
```
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, 0);
```
username_1: Using the above note, I've managed to get the branch [4.3compat](https://github.com/username_1/sdl_exp/tree/4.3compat) working; it would be useful to check that this branch now builds/runs on Linux.
username_1: Will also need to adjust stock fonts in `Text` on Linux.
spotbugs/spotbugs-maven-plugin | 303786659 | Title: Too many opened file handles to spotbugs plugins
Question:
username_0: We are running the spotbugs-maven-plugin programmatically in a multi-module Maven project, which makes it difficult to describe the exact spotbugs-maven-plugin configuration; this is why I attach the relevant part of the Maven debug logs, where the configuration can be seen:
[spotbugs-config.txt](https://github.com/spotbugs/spotbugs-maven-plugin/files/1796287/spotbugs-config.txt)
While trying to set up a Jenkins job, we faced ‘Too many open files’ problem causing the build to fail.
We started investigating the problem and found out that there is a huge number of opened files (spotbugs plugins).
I attach here the output of the following command `ls -l /proc/MAVEN_PID/fd/ > openfiles.txt`:
[openfiles.txt](https://github.com/spotbugs/spotbugs-maven-plugin/files/1796374/openfiles.txt)
They are all located in the target directory.
Used spotbugs-maven-plugin/3.1.3 with spotbugs 3.1.2
Answers:
username_1: Notes from my investigation:
* `ICodeBase` and `IClassPath` are `AutoCloseable`.
* `IClassPath` is responsible for closing all `ICodeBase` instances inside it.
* `FindBugs2#clearCaches()` closes the generated `IClassPath`.
* It seems that this plugin doesn't invoke `FindBugs2#clearCaches()` for now.
So I guess we should invoke this `FindBugs2#clearCaches()` method, or create a classloader so that `FindBugs2.classpath`, which is a singleton field, can be garbage-collected?
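A sketch of the kind of cleanup being suggested, assuming the plugin drives `FindBugs2` directly (the method names come from the notes above; the surrounding code is illustrative):
```java
import edu.umd.cs.findbugs.FindBugs2;

public class AnalysisRunner {
    public static void run(FindBugs2 findBugs) throws Exception {
        try {
            findBugs.execute();
        } finally {
            // Closes the generated IClassPath, which in turn closes every
            // ICodeBase (and its jar file handle) opened for the analysis.
            findBugs.clearCaches();
        }
    }
}
```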
username_1: https://github.com/spotbugs/spotbugs/issues/589 is similar to this issue, but this issue's target is mainly the jar files in the classpath to scan.
username_1: @username_0 let me confirm: is your Maven using the `-T` option, to run modules in parallel?
username_1: WIP: https://github.com/spotbugs/spotbugs/compare/fix-resource-leaks
username_0: No, currently it is not being used.
username_1: Almost all of the opened files are not SpotBugs plugins, but libraries in the aux classpath.
In the current architecture, SpotBugs keeps all jar files open during analysis (because the `IClassPath` is kept open), so we are bound to face problems like this. To fix this issue, I think we need to change the internal architecture to handle libraries in the aux & app classpaths as streams.
username_2: Has there been any progress? We are now seeing this frequently on our Jenkins CI server (we are using -T and there can also be concurrent builds). |
IU-IPOD-F20/map-projects-team-7 | 758064218 | Title: Import feature
Question:
username_0: As a user, I want to import a quiz from a file according to the GIFT format to quickly create new quizzes from a file.
Business priority - **SHOULD**
Story points: **8**.
Status: Issue closed
Answers:
username_1: As a user, I want to import a quiz from a file according to the GIFT format to quickly create new quizzes from a file.
Business priority - **SHOULD**
Story points: **13**. |
jgthms/bulma | 326808822 | Title: Using Bulma with Wordpress CSS only
Question:
username_0: Hi,
I really love Bulma, but I would like to use it with CSS only since I'm not familiar with Sass. I want to use Bulma with a child theme, but I'm not able to do some customizations since the style.css does not override anything from the SCSS file. I am using Bulmapress. Is it possible to use only CSS?
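In case it helps, a hedged sketch of pulling in the precompiled `bulma.css` from a child theme without touching Sass (handle names and paths are illustrative; `wp_enqueue_style` is the standard WordPress enqueue API):
```php
<?php
// functions.php of the child theme
function my_child_theme_styles() {
    // Load the prebuilt Bulma stylesheet shipped with the theme...
    wp_enqueue_style( 'bulma', get_stylesheet_directory_uri() . '/css/bulma.min.css' );
    // ...then the child theme's style.css, so its rules win by load order.
    wp_enqueue_style( 'child-style', get_stylesheet_uri(), array( 'bulma' ) );
}
add_action( 'wp_enqueue_scripts', 'my_child_theme_styles' );
```
|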
4teamwork/opengever.core | 100054808 | Title: `paste_clipboard` can be accessed even if `is_pasting_allowed` is False
Question:
username_0: The `paste_clipboard` view can be called even when the `is_pasting_allowed` view returns False. We should address this URL-crafting problem, since checks performed by the view could theoretically be bypassed this way.
Status: Issue closed |
apache/incubator-ponymail-foal | 1053835651 | Title: messages.query shorten parameter does nothing useful anymore
Question:
username_0: https://github.com/apache/incubator-ponymail-foal/blob/53452a767062df8cb2cb5cd4430e53911aa081db/server/plugins/messages.py#L351-L355
The shorten parameter as originally introduced was used to decide whether to truncate [1] the body response to 200 characters.
As it now stands, it only affects whether the '...' truncation marker is added. I don't see a use case for that.
The parameter can just be dropped.
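For reference, a sketch of the behavior as originally introduced (variable names are assumed, not taken from the module):
```python
body = email["body"]
if shorten and len(body) > 200:
    body = body[:200] + "..."  # truncate, then append the truncation marker
```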
[1] https://github.com/apache/incubator-ponymail-foal/blob/d38ea9fe40780048224b9445b7c40ddd7f168246/server/plugins/messages.py#L330
Status: Issue closed |
JECSand/yahoofinancials | 628999582 | Title: Can we get details of Indian mutual funds details?
Question:
username_0: **example**:
**Name of the fund**: _Taurus Banking and Financial Services Fund - Direct Plan - Growth_
**Holdings**:
### NSE symbol
1. HDFCBANK
2. ICICIBANK
3. KOTAKBANK
4. SBIN
5. AXISBANK
6. HDFC
7. BAJFINANCE
8. ICICIPRULI
9. MUTHOOTFIN
10. HDFCLIFE
11. BAJAJFINSV
12. CUB
13. MOTILALOFS
14. SUNDARMFIN
15. JMFINANCIL
16. M&MFIN
17. FEDERALBNK
18. CHOLAFIN
19. SBILIFE
20. HDFCAMC
21. NAM-INDIA
### Name of the stock
1. HDFC Bank Ltd.
2. ICICI Bank Ltd.
3. Kotak Mahindra Bank Ltd.
4. State Bank Of India
5. Axis Bank Ltd.
6. Housing Development Finance Corporation Ltd.
7. Bajaj Finance Ltd.
8. ICICI Prudential Life Insurance Co Ltd.
9. Muthoot Finance Pvt. Ltd.
10. HDFC Standard Life Insurance Company Ltd.
11. Bajaj Finserv Ltd
12. City Union Bank Ltd.
13. Motilal Oswal Financial Services Ltd.
14. Sundaram Finance Limited
15. JM Financial Ltd.
16. Mahindra & Mahindra Financial Services Ltd.
17. Federal Bank Ltd.
18. Cholamandalam Investment & Finance Co. Ltd.
19. SBI Life Insurance Co Ltd.
20. HDFC Asset Management Co. Ltd.
21. Nippon Life India Asset Management Ltd.
### % of total holding
1. 0.2134
2. 0.1948
3. 0.1723
4. 0.0484
5. 0.0332
6. 0.0285
7. 0.0207
8. 0.0203
9. 0.0151
10. 0.0133
11. 0.0127
12. 0.0086
13. 0.0072
14. 0.007
15. 0.0066
16. 0.0063
17. 0.0063
18. 0.0061
19. 0.0056
20. 0.0043
21. 0.0029
Any info on how to get these details would be useful. Thanks.
Answers:
username_1: I think not; instead, use this -> https://pypi.org/project/mftool/
hpi-swa/Squot | 860719738 | Title: `ClassDescription >> #package` silently registers new PackageInfos
Question:
username_0: We (@marceltaeumel and I) observed that the following workflow silently creates a new package in the global package registry, but the user is neither asked about nor informed of this action:
1. Create a new system category in the image (i.e. in the System Browser, "add item" in the left pane). Let's call it `MyPackage-Core`.
2. In the Git Browser, choose a repository and select "Change tracked packages"/"Add or remove packages".
Here, two packages are registered automatically: `MyPackage` and `MyPackage-Core` (see `ClassDescription >> #package` and `PackageInfo class >> #named:`). This is confusing and does not happen if you manually created the package `MyPackage` before.
What could a solution look like? I could imagine only instantiating a pseudo-package in the mentioned method, without registering it right away. When actually chosen in the `SquitPackageChooser`, this object could then be registered in the global registry (be aware of multi-processing here, so check the registry again first). But maybe you have a better approach. :-)
Answers:
username_1: Please try whether it is resolved now. After that commit I do not hit a breakpoint in PackageInfo class>>named: anymore when opening the package chooser.
username_0: Sorry for the late reply! I confirm that the issue no longer occurs (on the `develop` branch). Thank you for fixing! :-)
username_0: Shall we close this issue now or when you release the fix?
username_1: If it is fixed for you on develop, I will close. :-) I release too seldom to keep all the issues open in the meantime.
Status: Issue closed
|
dotnet/csharplang | 213501151 | Title: "this" type
Question:
username_0: # History
I'd like to revive an issues from the roslyn repo:
https://github.com/dotnet/roslyn/issues/13988
* suggests "thistype" type parameter for interfaces
* constrains all subinterfaces / subclasses
# Motivation
I'd like to declare interfaces allowing me to provide meta-functionality operating on types, such as (deep) cloning, converting, synchronizing, wrapping, or subordinating generic classes. To make sure that `myobject.CloneThisOne()` actually returns a clone of the same type as the cloned object, we currently have to use runtime checks. It would be useful if the compiler could guarantee this.
Two use-cases (IClonable-IClonable and IChild-IParent type relationship) are described in https://github.com/dotnet/roslyn/issues/13988
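For context, a sketch of the runtime-check pattern this proposal aims to make unnecessary (the type names are illustrative):
```cs
using System;

public interface ICloneThisOne
{
    ICloneThisOne CloneThisOne();
}

public class MyObject : ICloneThisOne
{
    public ICloneThisOne CloneThisOne() => new MyObject();
}

public static class Demo
{
    public static MyObject SafeClone(MyObject obj)
    {
        // The interface can only promise the base interface type,
        // so callers must check the concrete type at runtime.
        return obj.CloneThisOne() as MyObject
               ?? throw new InvalidCastException();
    }
}
```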
----
**_The following part might change over time, as I try to fully include https://github.com/dotnet/roslyn/issues/4332 without creating inconsistencies._**
https://github.com/dotnet/roslyn/issues/4332
* suggests "this" type constraint for abstract classes
* constrains only the subinterface where the type parameter is filled
----
# Suggestion
## Keyword
We need a keyword that can be used instead of a type name, or instead or together with of the new(), struct or class type constraints. The earlier issues suggest the keywords "this", "subclassed", "concrete", "thistype". Since both suggestions use the keyword in different positions to create similar results, it makes sense to use the same one
The keyword "this" seems to be a good choice, since it does not introduce a new reserved word and has not yet any meaning when used in a Type context (e.g. in `where T : this` -- but also in `typeof(this)`, `new List<this>()`, or `public this ReturnSomething()` out of scope for this suggestion). Using this instead of a type thus should not create any major confusion.
### Drawback:
The usage of "this" defined here might be confused with the use of "this" in extension methods. It can be distinguished by whether we look at a static or instance method:
```cs
static void SomeExtensionMethod(this Type param); // this is used as a "modifier keyword"
void SomeInstanceMethod(this param); // this is used as a type parameter
// not possible, because "this" as a type should not be defined for static methods -- "this" as some kind of variable is neither defined:
static void Weird(this this param);
```
## "this" as generic type parameter on classes (not generic methods)
This part of the suggestion works on all of these types.
Interfaces are easy to define, since they do not implement any method bodies.
Also, sealed classes are easy to implement, since no derived classes can be expected as a return value.
But non-sealed, non-abstract classes may require special handling. To ensure every derived type implements the methods if necessary (e.g. cloning itself), we need to define abstract methods on non-abstract classes, which still need to have a method body.
Using normal method implementations with virtual or non-virtual methods is still possible, but may require some types to match.
### Declaration
We want to use "this" as a special type parameter:
```cs
interface IInterface1 <out this>
[Truncated]
Defining which method returns a value of a specific type might be difficult, since returning might happen:
* as the return value
* via a ref parameter
* via an out parameter
* by generic encapsulation, e.g. when using the return type `IEnumerable<this>` instead of `this`
* by calling a delegate passed as parameter, e.g. an `Action<this>`
* by returning a delegate, e.g. a `Func<this>`
* by any of the combinations above
* by any way I have not thought of.
To avoid this complexity, any method can be abstract and still have a method body, which is only called on non-derived instances.
## Only one "this" type parameter can be specified, there is no `<this T1, this T2, T3, T4>`
Why:
The type parameter is always determined where the object is instantiated, and is always equal to the type of the object. Since two of these type parameters would always be equal, we don't need two or more of them; one without a name is enough.
# Related
This might also solve the use-case of issue https://github.com/dotnet/roslyn/issues/311 ("this" type as return type), by marking the method mentioned in https://github.com/dotnet/roslyn/issues/311#issuecomment-73806696 as abstract, and still providing an implementation.
Answers:
username_1: There is another suggestion to allow any expression in `typeof`; combining these, you could define a dependency property like this:
```cs
public static readonly DependencyProperty NameProperty =
DependencyProperty.Register(nameof(Name), typeof(Name), typeof(Foo));
```
username_2: Cheekily plugging my own similar proposal in #169, that achieves all of this (though isn't implicit on all classes by default, perhaps it could be) through a `where T: this` generic constraint.
It's not immediately clear to me whether this proposal adds any significant extra capabilities that #169 doesn't, or vice versa. #169 is limited to abstract classes and interfaces, that seems like the most significant difference.
I suspect that #169 is much easier to implement as it's basically just a little compiler check on top of existing generics functionality, but this arguably provides a slightly nicer API for developers.
#253 was also just created and references a version of #169 mixed with defaults that brings the APIs a little closer together.
username_3: MyType is a really nice feature that I wish many more languages would have. A lot of the gymnastics people do with F-bounded polymorphism are mainly to emulate a MyType feature. I.e. you are forced to write something like this:
```csharp
interface ICloneable<out T> where T : ICloneable<T>
{
T Clone();
}
class MyObject : ICloneable<MyObject>
{
public MyObject Clone() => new MyObject();
}
// unfortunately, this is not type-safe:
class MyEvilObject : ICloneable<MyObject>
{
public MyObject Clone() => new MyObject();
}
```
instead of
```csharp
interface ICloneable
{
this Clone() { /* … stuff … */ }
}
class MyEvilObject : ICloneable
{
public MyObject Clone() => new MyObject();
// Error: there is no implicit reference conversion from 'MyObject' to 'MyEvilObject'.
}
```
MyTypes are discussed in *Foundations of Object-oriented Languages: Types and Semantics*. D has `this` template parameters which are somewhat related.
Scala has self-type annotations, which are different: they allow you to explicitly annotate `this` with a more precise type, but they don't allow you to refer to the type of `this`. Scala also has singleton types, and you can get the singleton type of `this` using `this.type`, but as the name "singleton type" implies, this type is only inhabited by a single value, namely the one you got it from. So, a return type of `this.type` would only allow you to `return this` but not a clone of `this`. So, you could use `this.type` for method chaining with a mutable builder, but not for emulating a MyType feature.
username_4: IIRC there is a Self type in Swift protocols; this could be used as common ground for this idea.
username_0: @username_2: I also thought of a type constraint alone first, but then I saw some issues depending on the use case with that -- it only solves part of the problem:
```
IClonable<T> where T : this {...}
MyBaseClonable : IClonable<MyBaseClonable> {...}
MyDerivedClonable : MyBaseClonable {...}
```
`MyDerivedClonable.Clone()` will now return a MyBaseClonable. Depending on the situation, this might be acceptable, but in other situations, I want to either
* return the correct type for derived types on my own will, or even
* enforce the correct type for all derived types.
Even the first one won't work that easily; let's try:
```
IClonable<T> where T : this {...}
MyBaseClonable<T> : IClonable<T> where T : this{...}
MyDerivedClonable<T> : MyBaseClonable<T> where T : this {...}
```
The Class Model itself looks good, but I have a major problem: I can't create any instances:
```
var myclonable = new MyDerivedClonable<T>(); // T is unspecified
var myclonable = new MyDerivedClonable<MyDerivedClonable>(); // MyDerivedClonable is undefined
var myclonable = new MyDerivedClonable<MyDerivedClonable<MyDerivedClonable<... infinite recursion ...>>>(); //
```
username_0: @username_1: I don't think I would use it a lot, but it feels inconsistent not to allow the usage of the "this" type anywhere in the class. Method return values are the obvious use case, but there might always be another, e.g. method parameters:
```cs
public void CopySettingsFrom(this template);
```
I'm not yet sure whether `this` is actually the right choice for the keyword. In general, it seems to express the right meaning, but with extension methods, it seems a bit weird. A (static!) class with extension methods might not often need to support this feature; that way, one would probably never see both interpretations of the keyword "this" in the same file.
username_2: @username_0
I'm not sure I entirely understand what you're getting at here to be honest.
```C#
IClonable<T> where T : this {...}
MyBaseClonable<T> : IClonable<T> where T : this{...}
MyDerivedClonable<T> : MyBaseClonable<T> where T : this {...}
```
I would have implemented this as:
```C#
interface IClonable<T> where T : this {...}
abstract class MyBaseClonable<T> : IClonable<T> where T : this{...}
class MyDerivedClonable : MyBaseClonable<MyDerivedClonable> {...}
```
This obviously has the restriction that MyBaseClonable<T> is required to be abstract. Is this the point that you're making? If so, then yes, I absolutely agree that that is an acknowledged limitation under my proposal, for the infinite recursion reason you demonstrated.
I do agree that this limitation is not ideal, and that some way of being able to resolve that recursion would be great. I view my proposal as broadly including the method of implementation, and yours as just being the API you'd like to see. I agree that your API has benefits relative to mine and would probably be preferable, but I don't know how much of a chance it has of actually being implemented any time soon.
The strongest argument I would still make in favour of my implementation is that it's a much smaller change. While ideally the constraint would be implemented in the CLR, it could be approximated pretty well within just C# itself. The compiler could place an attribute on the type parameters for enforcing it internally, and inject a runtime check into the static constructor for interfacing with other languages.
Plugging #255 where I asked about the status of the CLR with this debate in mind.
username_0: Do we already have such features? From what I've read -- sorry, no sources, just my overall impression from the different roslyn issues -- I doubt the "compiler check" will be introduced as a language feature without a CLR check.
LINQ syntax doesn't influence other CLR languages (VB.NET, IronPython), and extension methods are simply static methods in other languages and cannot be defined there -- these are features that don't require a CLR change, because they cannot break anything.
The compiler check could also be implemented with attributes and some postcompiler like Fody -- it probably wouldn't take much more code; an attribute (does AttributeTargets.GenericParameter fit?) on the parameter itself might be enough to check this in a postcompiler. The problem is: as soon as someone using VB.NET instantiates any of these classes without caring about the type constraint, and passes the instance to C# code, which assumes correct behaviour (we have a type constraint!), I expect the implementation to fail horribly -- and someone has to sit there a long time until they can figure out what the actual problem is: the runtime not honoring C# type constraints.
username_0: PS: The static constructor check does seem possible, but I don't think of it as a nice solution.
Also, this would be possible when using the feature as suggested in my syntax -- the implementation of my features above also describes a generic type parameter with a specific constraint; but since `IClonable<T1, T2> where T1 : this where T2 : this` makes no sense, I just reduce them to a type parameter with the keyword "this" instead of a name. Everything else is compiler checks that the compiler can also infer -- simply by making all "abstract" methods with a body just virtual, and preventing type initialization when any subclass implemented in VB.NET does not override them.
I suspect that this might have minor performance impacts and might prevent compiler optimizations, so I would prefer a CLR implementation, but one can implement almost everything without changing the CLR if the classes themselves refuse to load. The constraints would then have to be compiled as attributes.
username_2: @username_0
With regards to point 2, I think this ultimately comes down to the abstract restriction. Ultimately it's all about the restrictions of the infinite recursion. We're largely in agreement here, if we can get a version of this feature that solves that problem that would be absolutely fantastic, I just have doubts about practical viability for now. Perhaps I'm just being a pessimist.
With regards to point 1, we again somewhat agree. I think many situations could be worked around with slightly different patterns and interface variance, but it would be cumbersome to have to do so relative to a solution that magics all of that away. The situations where I've found myself wanting this myself have largely not come up against these scenarios, but I can absolutely see that it would be a frustrating limitation for many. A fix for this would probably be essential for moving this from the area of a niche feature for those occasional awesome APIs when there's no other way, into something people can happily throw around with little care as a sprinkle of niceness everywhere.
I have half of an idea for how this restriction could be worked around nicely actually, I'll think about it and get back to you here if that idea actually turns out to work, though I strongly suspect it won't.
As for the static constructor check, I was imagining that it could perhaps be implemented in such a way that it could be replaced by a CLR constraint at a later date whenever CLR stuff finally happens, and the runtime check could be a temporary thing. I don't know how feasible that is. Other major languages could also independently implement features that respect the attributes without making changes to the CLR, so the problem of dealing with the runtime checks may be more theoretical than practical.
Ultimately, if we can actually get all of the features of your way that would be great, I'd love that!
username_0: One other thought on this: omitting the implicit type parameter is necessary to avoid infinite source code recursion, but naming might still be necessary. Multiple "this" type parameters on the same class don't make any sense, but there might be multiple such parameters in scope when nesting classes.
Maybe something like `thistype(MyClass)`, similar to `default` and `typeof`, would be useful, since we might get name collisions when nesting classes:
```cs
abstract class ClonableList<TElement> : IClonable
where TElement : ClonableList<TElement>.ClonableListElement
{
abstract class ClonableListElement : IClonable
{
virtual thistype(ClonableList<TElement>.ClonableListElement) Clone() { ... }
thistype(ClonableList<TElement>) ParentList {get; set;} // would not be possible, since "thistype" alone would reference ClonableListElement, which is the wrong type.
}
virtual thistype(ClonableList<TElement>) Clone() { ... }
TElement FirstElement {get; set;}
TElement LastElement {get; set;}
}
```
Or simply name the parameter, but prevent filling it in manually:
```cs
// two parameters on declaration:
abstract class ClonableList<thistype TThis1, TElement> : IClonable
// one parameter anywhere else, we do not want to write ClonableList<ClonableList<..., TElement>, TElement>
where TElement : ClonableList<TElement>.ClonableListElement
{
abstract class ClonableListElement<thistype TThis2> : IClonable
{
virtual TThis2 Clone() { ... }
TThis1 ParentList {get; set;}
}
virtual TThis1 Clone() { ... }
TElement FirstElement {get; set;}
TElement LastElement {get; set;}
}
```
Any opinions on this?
username_5: Simply put, today we cannot write `IEquatable<TSelf> where TSelf : self`, which means we cannot have nice things like requiring a type to support a proper `T.Equals(T)` without casting.
Given the success Rust has been having with `Self`, it is a complete shame that C# doesn't support it.
username_2: @username_0
Good spot on the losing access with nesting. Perhaps this would be a good time to introduce something else I've considered before, and allow `SomeClass<T>{}` to have `T` accessed with something like `SomeClass<>.T` from inside? This would universally remove the problem of covering up type parameters with nesting.
username_6: @username_0, thank you for opening this subject; for me it is one of the most important features that I would like to see available in C#.
Guys, please make this available! It is a really valuable feature!
username_7: Another potential plus point for supporting a "this" type is it could potentially be used to avoid boxing issues when using value types. For example interfaces that are implicitly generic on "this" type could be implemented with methods that take "this" as a (hidden/implied?) parameter and call methods on it without the need for boxing.
username_8: There is another use case where "this" type would be very helpful:
```csharp
public class Node
{
public thistype Next { get; set; }
public thistype Previous { get; set; }
}
public class TextNode : Node
{
public string Text{ get; set; }
private void CheckText()
{
if (Next.Text== "abc")
{
// Do something
}
}
}
public class TextValueNode : TextNode
{
public int Value { get; set; }
private void CheckValue()
{
if ((Next.Text == "123") && (Next.Value == 123))
{
// Do something
}
}
}
```
CRTP wouldn't work if you use it in the middle of your implementation. In the example below, `private void CheckValue()` will raise compile error CS1061 _'TextNode' does not contain a definition for 'Value'_:
```csharp
public class Node<T> where T : Node<T>
{
public T Next { get; set; }
public T Previous { get; set; }
}
public class TextNode : Node<TextNode>
{
public string Text{ get; set; }
private void CheckText()
{
if (Next.Text == "abc")
{
// Do something
}
}
}
public class TextValueNode : TextNode
{
public int Value { get; set; }
private void CheckValue()
{
if ((Next.Text== "123") && (Next.Value == 123))
{
// Compile error CS1061 'TextNode' does not contain a definition for 'Value'
}
}
}
```
username_9: @username_8 If you had:
```cs
Node node = new TextNode();
```
What would be the type of `node.Next`? I don't see how it could be anything other than `Node`, but then you'd be able to write `node.Next = new Node()` which would obviously be invalid...
username_0: This assignment would not work due to contravariance -- let's display the implicit type parameter here:
```cs
class Node<in thistype> {...} // a setter requires in, a getter requires out; requiring both is not useful under any circumstance.
Node<Node> node = (Node<Node>)(Node<TextNode>) new TextNode<TextNode>();
```
The cast from `TextNode<TextNode>` to `Node<TextNode>` would be okay, but the cast from `Node<TextNode>` to `Node<Node>` is invalid; contravariance only allows casting `Node<Node>` to `Node<TextNode>`, not the other way round.
username_8: In that case a CS0029 error should be raised (Cannot implicitly convert type), the same way this error is raised in other cases, e.g.:
```cs
List<Node> nodeList = new List<TextNode>();
List <FileSystemInfo> pathList= new List<DirectoryInfo>();
// Error CS0029 Cannot implicitly convert type 'System.Collections.Generic.List<System.IO.DirectoryInfo>' to 'System.Collections.Generic.List<System.IO.FileSystemInfo>'
```
username_10: You then violated `public class TextNode : Node`, which says "TextNode is-a Node". But now you're not allowing TextNodes to be used where Nodes are used. In that case, why do you have inheritance in the first place?
username_9: @username_8 So if `thistype` is used anywhere, you basically just disable all inheritance? That seems really, really weird, and not at all what this issue is about.
username_8: You're right. I see the problem now. Unfortunately I have no idea how to figure it out.
username_7: Seems related to this:
https://github.com/dotnet/csharplang/blob/725763343ad44a9251b03814e6897d87fe553769/proposals/covariant-returns.md
username_0: @username_8: Your Node example rather looks like a case for mixins -- which C# does not support either, but they can maybe be added with some Fody post-compilation magic. Inheritance usually intends two objects to have a similarity, by being exchangeable in some situations -- however, the nodes of linked lists of different types are not intended to have any compatibility, only reused code. Mixins or extension methods might suit your use case better. You don't even need this proposal; it works without it:
```cs
abstract class Node<TNode> { TNode Next { get; set; } /* plus some list iteration functions, maybe */ }
class TextValueNode : Node<TextValueNode> { }
```
username_0: @username_7 : What you linked is a technical detail required to implement this, although it doesn't become obviously visible here when looking at the code, since it would be hidden behind some kind of keyword or type parameter.
username_11: A more general solution for this seems to be #2936.
Status: Issue closed
|
DylanBulmer/TheDocs | 304089547 | Title: Window buttons cover title of the application on Mac OS
Question:
username_0: ## Problem:
The buttons of the application window on Mac OS X is covering the title
## Solution:
Change the style of the frame type to default
Answers:
username_1: Here are my main.js settings for an Electron app with a transparent window border that works fine on Mac:
```js
const electron = require('electron')
const {app, BrowserWindow, Menu} = electron

let win, win2

app.on('ready', () => {
  win = new BrowserWindow({
    width: 800,
    minWidth: 800,
    height: 600,
    title: '',
    titleBarStyle: 'hidden',
    defaultFontFamily: 'fantasy',
    transparent: true,
  })

  const menuTemplate = [
    {
      label: 'MAIN',
      submenu: [
        {
          label: 'About ...',
          click: () => {
            if (win2) return
            win2 = new BrowserWindow({
              width: 256,
              height: 256,
              title: '',
              titleBarStyle: 'hidden',
              defaultFontFamily: 'fantasy',
              transparent: true,
              alwaysOnTop: true,
            });
            win2.loadURL(`file://${__dirname}/about.html`)
            win2.on('closed', () => {
              win2 = null
            })
            console.log('Made by SirFizX');
          }
        }, {
          type: 'separator'
        }, {
          label: 'Quit',
          click: () => {
            app.quit();
          }
        }
      ]
    }
  ];

  const menu = Menu.buildFromTemplate(menuTemplate)
  Menu.setApplicationMenu(menu)

  win.loadURL(`file://${__dirname}/index.html`)
  //win.openDevTools()
})

exports.openWindow = () => {
  let win = new BrowserWindow({height: 600, width: 300})
  win.loadURL(`file://${__dirname}/otherpage.html`)
}
```
username_1: 
username_0: So I did some more research into making my own custom title bar, and to do it I have to actually create my own. I'm going to keep what I have, and I decided to remove the frame as well so Windows gets the same effect. I may also use the following, just to make every button custom to my application:
```
new BrowserWindow({titleBarStyle: 'customButtonsOnHover', frame: false})
```
Right now I'm in the process of developing the title bar within the application with HTML.
Here is how I found this out:
* https://stackoverflow.com/questions/35660043/how-to-customize-the-window-title-bar-of-an-electron-app
* https://github.com/electron/electron/blob/master/docs/api/frameless-window.md
And yes, I will be documenting this with TheDocs!
username_0: This issue has been fixed and will be part of the next release.
Status: Issue closed
|
onevcat/Kingfisher | 261194734 | Title: Found on crashlytics
Question:
username_0: Hi, I found a crash on Crashlytics.
EXC_BAD_ACCESS
Please find the attached crash.
[com.fikra.myu_issue_280_crash_903ffa1cf0c647aabf6a2e0e8d44ea94_7b14495ba36911e79a2456847afe9799_0_v2.txt](https://github.com/username_1/Kingfisher/files/1339543/com.fikra.myu_issue_280_crash_903ffa1cf0c647aabf6a2e0e8d44ea94_7b14495ba36911e79a2456847afe9799_0_v2.txt)
Status: Issue closed
Answers:
username_1: Not sure what happened with this; it seems to be a crash in the Swift `reverse` extension method on `Sequence`. I believe it is not an issue in Kingfisher, and there is actually little we can do about it.
vvksh/reddit_product_search | 777220989 | Title: Identify product names from text
Question:
username_0: see paper: http://keg.cs.tsinghua.edu.cn/jietang/publications/ICDM12-Wu-et-al-Product-mention-recognition.pdf
also see: https://towardsdatascience.com/named-entity-recognition-with-nltk-and-spacy-8c4a7d88e7da
Answers:
username_0: allennlp seems to work; see usage instructions at https://demo.allennlp.org/named-entity-recognition
username_0: Labels can come from 9 classes (B-PER, I-PER, B-LOC, I-LOC, B-ORG, I-ORG, B-MISC, I-MISC, and O) to indicate the Beginning of a named entity, the Inside of a named entity, and the Outside of a named entity.
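For comparison, a minimal spaCy sketch along the lines of the linked article (the model name and whether it emits a PRODUCT label are assumptions about the pretrained model used):
```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I really recommend the Sony WH-1000XM3 over the Bose QC35.")
products = [ent.text for ent in doc.ents if ent.label_ == "PRODUCT"]
print(products)  # often empty; small pretrained models miss many product names
```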
Status: Issue closed
username_0: NER doesn't work too well, it seems.
username_0: Some ideas:
- find a product dataset and cross-reference
- aggregate identified products: the most recommended products will probably appear multiple times
- if a link to Amazon or any other online seller is present, identify products from there
username_0: Some subreddits have a buying guide or wiki with recommendations -> seems like the best place.
username_0: Other ideas:
- some posts have a bullet structure; we can assume bullet points are product names, or they could have Amazon links; just aggregate across various posts
devildrey33/RAVE | 527451340 | Title: Volume with the mouse wheel
Question:
username_0: When changing the volume with the mouse wheel, it is not saved to the DB.
Status: Issue closed
Answers:
username_0: - Added a mouse wheel event to DBarraDesplazamientoEx
- The volume can now be changed with the mouse wheel when the mouse is over the volume bar
- The volume is now saved to the DB when it is changed with the mouse wheel.
GDGVIT/mailer-gui | 589451683 | Title: DLL Load Failed
Question:
username_0: Py version: 3.7.3
After installing all requirements and running the application, I get the error traceback below:
```
File "main_gui.py", line 14, in <module>
from PyQt5.QtWebKit import *
ImportError: DLL load failed: The specified procedure could not be found.
```
Tried solution available at [here](https://stackoverflow.com/questions/42863505/dll-load-failed-when-importing-pyqt5)
Any Suggestions on this?
Answers:
username_1: Just try downgrading the PyQt5 version to 5.9; currently it's 5.14.
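If it helps, the downgrade itself (5.9.2 is one of the published 5.9.x wheels; any 5.9.x should do):
```
pip uninstall PyQt5
pip install PyQt5==5.9.2
```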
username_0: @username_1 already did that.
imain/ocp-doit | 391139641 | Title: ocp_install_env.sh no longer works
Question:
username_0: All of the environment variables have been removed from the installer, so we're now forced to answer everything interactively.
I think we need to move to the installconfig yaml format instead, but not looked into the details yet.
Answers:
username_1: The CI handled this by heredoc'ing an install-config.yaml from the env vars: https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-e2e.yaml#L299-L362
I can put up a patch to do something similar here.
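A heavily abbreviated sketch of that heredoc approach (field set trimmed and assumed; see the linked CI template for the real, full set):
```bash
cat > "${CLUSTER_DIR}/install-config.yaml" <<EOF
baseDomain: ${BASE_DOMAIN}
metadata:
  name: ${CLUSTER_NAME}
pullSecret: '${PULL_SECRET}'
sshKey: '${SSH_PUB_KEY}'
EOF
```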
username_1: fixed by https://github.com/imain/ocp-doit/pull/34
Status: Issue closed
|
plotly/plotly.py | 405612514 | Title: plot function (from plotly.offline) does not support (open) file handles as target
Question:
username_0: Currently, the `filename` parameter of `plot` only supports filenames (`temp-plot.html` is default). It does not support open file handles.
The latter is very useful when writing e.g. command line tools with [click](https://click.palletsprojects.com/en/7.x/) (or similar tools), which [handles opening files specified by a user on the command line transparently](https://click.palletsprojects.com/en/7.x/arguments/#file-arguments) and only provides a file handle to the script.
```python
from plotly.offline import plot
help(plot)
plot(figure_or_data, show_link=False, link_text='Export to plot.ly', validate=True,
     output_type='file', include_plotlyjs=True, filename='temp-plot.html',
     auto_open=True, image=None, image_filename='plot_image', image_width=800,
     image_height=600, config=None, include_mathjax=False)
[...]
```
Answers:
username_0: Just looked at the [relevant source code](https://github.com/plotly/plotly.py/blob/15aff13d8e596e0ed1872d872dbafcc6e070d1e5/plotly/offline/offline.py#L701) - it's fairly easy to add. Would you accept a pull request?
The interesting question then becomes how to handle the stuff between subsequent lines 731 and 744. Ignore `auto_open == True` and `include_plotlyjs == 'directory'` options or raise exceptions if `filename` is an open file handle? |
pytorch/pytorch | 497777740 | Title: Compiling master with PARALLEL_BACKEND=NATIVE_TBB option is failing
Question:
username_0: ## 🐛 Bug
Compiling master with PARALLEL_BACKEND=NATIVE_TBB option is failing, commit `9f3351de81517533ef0f86a9086e99795e936e97`
## To Reproduce
Steps to reproduce the behavior:
1. Clone repo: git clone --recursive https://github.com/pytorch/pytorch
2. Run:
```
export USE_OPENMP=0
export USE_TBB=1
export BLAS=MKL
export MKL_THREADING=TBB
export MKLDNN_THREADING=TBB
export PARALLEL_BACKEND=NATIVE_TBB
python setup.py build
```
3. Relevant error part:
```
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/ext/new_allocator.h:120:23: error: no matching constructor for initialization of 'c10::ivalue::Future'
{ ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/alloc_traits.h:254:8: note: in instantiation of function template specialization '__gnu_cxx::new_allocator<c10::ivalue::Future>::construct<c10::ivalue::Future>' requested here
{ __a.construct(__p, std::forward<_Args>(__args)...); }
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/alloc_traits.h:393:4: note: in instantiation of function template specialization 'std::allocator_traits<std::allocator<c10::ivalue::Future> >::_S_construct<c10::ivalue::Future>' requested here
{ _S_construct(__a, __p, std::forward<_Args>(__args)...); }
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr_base.h:399:30: note: in instantiation of function template specialization 'std::allocator_traits<std::allocator<c10::ivalue::Future> >::construct<c10::ivalue::Future>' requested here
allocator_traits<_Alloc>::construct(__a, _M_impl._M_ptr,
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/ext/new_allocator.h:120:23: note: in instantiation of function template specialization 'std::_Sp_counted_ptr_inplace<c10::ivalue::Future, std::allocator<c10::ivalue::Future>, __gnu_cxx::_S_atomic>::_Sp_counted_ptr_inplace<>' requested here
{ ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/alloc_traits.h:254:8: note: in instantiation of function template specialization '__gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::ivalue::Future, std::allocator<c10::ivalue::Future>, __gnu_cxx::_S_atomic> >::construct<std::_Sp_counted_ptr_inplace<c10::ivalue::Future, std::allocator<c10::ivalue::Future>, __gnu_cxx::_S_atomic>, const std::allocator<c10::ivalue::Future> >' requested here
{ __a.construct(__p, std::forward<_Args>(__args)...); }
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/alloc_traits.h:393:4: note: (skipping 2 contexts in backtrace; use -ftemplate-backtrace-limit=0 to see all)
{ _S_construct(__a, __p, std::forward<_Args>(__args)...); }
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr_base.h:956:14: note: in instantiation of function template specialization 'std::__shared_count<__gnu_cxx::_S_atomic>::__shared_count<c10::ivalue::Future, std::allocator<c10::ivalue::Future>>' requested here
: _M_ptr(), _M_refcount(__tag, (_Tp*)0, __a,
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr.h:316:4: note: in instantiation of function template specialization 'std::__shared_ptr<c10::ivalue::Future, __gnu_cxx::_S_atomic>::__shared_ptr<std::allocator<c10::ivalue::Future>>' requested here
: __shared_ptr<_Tp>(__tag, __a, std::forward<_Args>(__args)...)
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr.h:597:14: note: in instantiation of function template specialization 'std::shared_ptr<c10::ivalue::Future>::shared_ptr<std::allocator<c10::ivalue::Future>>' requested here
return shared_ptr<_Tp>(_Sp_make_shared_tag(), __a,
^
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/bits/shared_ptr.h:613:19: note: in instantiation of function template specialization 'std::allocate_shared<c10::ivalue::Future, std::allocator<c10::ivalue::Future>>' requested here
return std::allocate_shared<_Tp>(std::allocator<_Tp_nc>(),
^
../aten/src/ATen/ParallelNativeTBB.cpp:94:22: note: in instantiation of function template specialization 'std::make_shared<c10::ivalue::Future>' requested here
auto future = std::make_shared<c10::ivalue::Future>();
^
../aten/src/ATen/core/ivalue_inl.h:194:3: note: candidate constructor not viable: requires single argument 'type', but no arguments were provided
Future(TypePtr type) : type_(type) {}
^
../aten/src/ATen/core/ivalue_inl.h:183:27: note: candidate constructor (the implicit copy constructor) not viable: requires 1 argument, but 0 were provided
```
[Truncated]
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
Answers:
username_1: @username_0 himself will be working on it, fyi to everyone else
username_2: cc @username_3 @username_4
username_3: Is this pathway tested in CI?
username_4: we need to add this build option to CI builds, also cc. @kostmo
username_4: I'll work on adding CI builds this week for TBB and NATIVE backends
username_4: The next command should work now:
`USE_OPENMP=0 USE_TBB=1 BLAS=MKL USE_MKLDNN=1 MKL_THREADING=TBB MKLDNN_THREADING=TBB ATEN_THREADING=TBB python setup.py develop --cmake`
Status: Issue closed
|
networkRob/eos-sdk-ip-monitor | 365558705 | Title: Logging Triggering on incorrect Threshold values
Question:
username_0: This occurs when the default PING_THRESHOLD value is overridden, e.g. the default is 3 failed attempts and a customized value of 5 is defined:
```
Oct 1 12:11:12 veos-rtr-01 myIP-MON: %myIP-6-LOG: Failed Count cvx-01: 3
Oct 1 12:11:13 veos-rtr-01 myIP-MON: %myIP-6-LOG: cvx-01 on {IP_REMOVED} has Failed over 5 times!
```
Status: Issue closed |
ClementPinard/SfmLearner-Pytorch | 853305982 | Title: run_inference.py
Question:
username_0: Hi! Thank you very much for your work. When I try to plot depth images, I find the results are exactly the same. How can I get the depth images shown in the paper? Thanks!


Answers:
username_1: Hi, what were the input images ? It is expected that the depth more or less follow the same distribution because it has only been trained with KITTI images. However the depth should not always be the same.
username_0: Thank you for your quick reply!
I used the pretrained nets(dispnet_model_best.pth.tar) you posted on https://drive.google.com/drive/folders/1H1AFqSS8wr_YzwG2xWwAQHTfXN5Moxmx.
And the input images are KITTI-rawdata-road like these


username_1: Indeed there should be a better depth for these pictures. What size did you use ? Normally it should 416x128 . I guess you used the run_inference.py script ?
Can you output the disparity instead of depth ? The resulting colored maps are usually more readable
username_0: Thank you very much! The reason of the problem is the incorrect size of pictures!
username_1: great! :)
Closing the issue then. Don't hesitate to reopen if you have further questions
Status: Issue closed
|
pandas-dev/pandas | 412174847 | Title: BUG: SparseDataFrame indexing sometimes loses `fill_value` of empty columns in 0.24
Question:
username_0: #### Code Sample, a copy-pastable example if possible
```python
import numpy as np
import pandas as pd
X = pd.SparseDataFrame([[0,1], [0,0]], default_fill_value=0.0)
## Good behaviour
X.loc[0].to_numpy()
# array([0., 1.])
X.loc[[0]].to_numpy()
# array([[0., 1.]])
X.iloc[0].to_numpy()
# array([0., 1.])
## Bad behaviour
X.iloc[[0]].to_numpy()
# array([[nan, 1]], dtype=object)
X.loc[[True, False]].to_numpy()
# array([[nan, 1]], dtype=object)
```
#### Problem description
Indexing a SparseDataFrame with `iloc` and more than a single row number should return the same result as indexing the same rows with `loc` and the corresponding indices. Instead, `iloc` drops column `fill_value` for any column with no non-zero entries.
#### Expected Output
All commands _should_ return `array([0., 1.])`. The last two (`iloc` with fancy indexing, and `loc` with boolean indexing) instead return `array([nan, 1.])`.
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.7.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-17763-Microsoft
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.1
pytest: None
pip: 18.0
setuptools: 40.2.0
Cython: 0.29
numpy: 1.15.1
scipy: 1.2.0
pyarrow: None
xarray: None
IPython: 7.1.1
sphinx: 1.6.7
patsy: None
dateutil: 2.7.3
pytz: 2018.3
[Truncated]
matplotlib: 2.2.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
```
</details>
Answers:
username_1: FYI we’re likely deprecating SparseDataFrame. You’re probably better off switching to a regular data frame with sparse columns.
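For anyone migrating, a minimal sketch of that suggestion (`pd.SparseArray` is available in 0.24):
```python
import pandas as pd

# Regular DataFrame whose columns hold sparse arrays
df = pd.DataFrame({
    "a": pd.SparseArray([0, 0], fill_value=0),
    "b": pd.SparseArray([1, 0], fill_value=0),
})
df.iloc[[0]]  # plain-DataFrame indexing semantics apply
```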
username_0: I don't see any mention of this in the documentation. Can you please post a link?
username_1: Not deprecated yet: https://github.com/pandas-dev/pandas/issues/19239
username_2: Marking as a bug for now, but given that this isn't a regression, it would likely be patched in `0.25.0`. However, even with the deprecation, a patch would be welcomed if it isn't too difficult.
Status: Issue closed
|
jstedfast/MimeKit | 1008035627 | Title: Imapclient oauth authentication failed for outlook server
Question:
username_0: Hi
I have been using the code below for a while to authenticate with ImapClient, and it was working fine until a few days ago, but for the last 3 days it has been failing with an error that the ImapClient is not authenticated.
```vb
Dim Client = New ImapClient
Dim oauth2 = New SaslMechanismOAuth2(accountEmailAddress, accessToken)
Await Client.AuthenticateAsync(oauth2)
```
However, the same code works fine with the Gmail server. Please check, thanks.
Answers:
username_1: Access tokens don't last forever; you need to refresh them.
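The shape of the fix, sketched in the same style as the report (`RefreshTokenAsync` is a hypothetical helper standing in for whatever OAuth library issued your tokens):
```vb
' Refresh the access token before authenticating (RefreshTokenAsync is hypothetical)
Dim accessToken = Await RefreshTokenAsync(refreshToken)
Dim oauth2 = New SaslMechanismOAuth2(accountEmailAddress, accessToken)
Await Client.AuthenticateAsync(oauth2)
```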
Status: Issue closed
|
JourneyLimo/Seattle-Towncar-Services | 172579064 | Title: North College Park Town Car Service
Question:
username_0: http://ifttt.com/images/no_image_card.png<br><br><h2>The finest Airport Transportation and Town Car Service in North College Park.</h2>
<p>Offering quality limousine services in North College Park, Washington and surrounding areas. Compare our rates and friendly service to anyone!</p>
<p>We guarantee professional and safe <a href="http://www.journeylimo.com/limo-services/">limousine service</a>. We look forward to making your special occasion or event a memorable and enjoyable experience.</p>
<h3>Call <a>(206) 488-1443</a> to Reserve a North College Park Town Car</h3>
<p>Please call us for more information or to book your <a href="http://www.journeylimo.com/town-car-services/">town car service</a> with us.</p>
<figure id="attachment_57" style="width: 840px;" class="wp-caption alignnone"><img class="wp-image-57 size-large" src="https://i0.wp.com/www.journeylimo.com/wp-content/uploads/2016/08/white-lincoln-town-car-1024x578.jpg" alt="town car service company" width="840" height="474"><br><figcaption class="wp-caption-text">North College Park town car service company</figcaption></figure><h2>Limousines in North College Park, WA</h2>
<p>We’ll take you to the city, in a <a href="http://www.journeylimo.com/seatac-limo/">limo to seatac airport</a>, or wherever you’d like to be. Our drivers are friendly and courteous, if you’re new to the city we can show you around.</p>
<figure id="attachment_48" style="width: 840px;" class="wp-caption alignnone"><img class="size-large wp-image-48" src="https://i2.wp.com/www.journeylimo.com/wp-content/uploads/2016/08/town-car-on-road-1024x768.jpg" alt="seattle town cars" width="840" height="630"><br><figcaption class="wp-caption-text">North College Park town cars</figcaption></figure><h2>The Most Reliable North College Park Limousine Company</h2>
<p>We will be there for you. Just give us a call to arrange your pickup for <a href="http://www.journeylimo.com/car-service/">seattle car service</a>. Whether you are traveling alone or with a group, we will provide you with a vehicle fit for your needs and the service you can trust and rely on.</p>
<h1>The Best Town Car Service in North College Park</h1>
<p>We are the premier North College Park Town Car Service. Stylish late design Lincoln Town Car’s at your service. We have Lincoln Town cars or 7 guest Mountaneers for bigger groups. Our car’s and SUV’s are all geared up with plush leather for the finest convenience. We at Journey Limo believe in tidy late model cars, fantastic client service and timeliness by our motorist as the keys to having our client’s repeated company over and over. Why trouble with other lessor car service, opt for the very best. </p>
<h3>
<strong>Call <a>(206) 488-1443</a></strong> to Make a Reservation OR <a href="http://www.journeylimo.com/contact/">Book Online (48 hours notice required)</a>
</h3>
<div style="float: left; padding: 10px;">
<div style="width: 250px; height: 250px; margin-left: 10px; margin-right: 10px;"><div class="googlemaps"></div></div>
</div>
<p><a href="http://www.showmyweather.com/weather_widget.php?int=0&type=js&country=us&state=Washington&city=North+College+Park&smallicon=1&current=1&forecast=1&background_color=ffffff&color=000000&width=175&padding=10&border_width=1&border_color=000000&font_size=11&font_family=Verdana&showicons=1&measure=F&d=2016-08-22">http://www.showmyweather.com/weather_widget.php?int=0&type=js&country=us&state=Washington&city=North+College+Park&smallicon=1&current=1&forecast=1&background_color=ffffff&color=000000&width=175&padding=10&border_width=1&border_color=000000&font_size=11&font_family=Verdana&showicons=1&measure=F&d=2016-08-22</a></p>
<p>We have the best drivers around serving Washington state. Our personal relationships will help you get wherever you need to go <a href="http://www.journeylimo.com/north-delridge-town-car-service/">from home</a> to seattle or to <a href="http://www.journeylimo.com/north-bellevue-town-car-service/">a nearby town</a>, even if it’s outside of the puget sound area.</p>
<p>The post <a rel="nofollow" href="http://www.journeylimo.com/north-college-park-town-car-service/">North College Park Town Car Service</a> appeared first on <a rel="nofollow" href="http://www.journeylimo.com">Journey Limo</a>.</p>
<div class="embed-journeylimo">
<blockquote class="wp-embedded-content"><a href="http://www.journeylimo.com/north-college-park-town-car-service/">North College Park Town Car Service</a></blockquote>
<p><!--//--><![CDATA[//><!-- !function(a,b){"use strict";function c(){if(!e){e=!0;var a,c,d,f,g=-1!==navigator.appVersion.indexOf("MSIE 10"),h=!!navigator.userAgent.match(/Trident.*rv:11./),i=b.querySelectorAll("iframe.wp-embedded-content");for(c=0;c<i.length;c++)if(d=i[c],!d.getAttribute("data-secret")){if(f=Math.random().toString(36).substr(2,10),d.src+="#?secret="+f,d.setAttribute("data-secret",f),g||h)a=d.cloneNode(!0),a.removeAttribute("security"),d.parentNode.replaceChild(a,d)}else;}}var d=!1,e=!1;if(b.querySelector)if(a.addEventListener)d=!0;if(a.wp=a.wp||{},!a.wp.receiveEmbedMessage)if(a.wp.receiveEmbedMessage=function(c){var d=c.data;if(d.secret||d.message||d.value)if(!/[^a-zA-Z0-9]/.test(d.secret)){var e,f,g,h,i,j=b.querySelectorAll('iframe[data-secret="'+d.secret+'"]'),k=b.querySelectorAll('blockquote[data-secret="'+d.secret+'"]');for(e=0;e<k.length;e++)k[e].style.display="none";for(e=0;e<j.length;e++)if(f=j[e],c.source===f.contentWindow){if(f.removeAttribute("style"),"height"===d.message){if(g=parseInt(d.value,10),g>1e3)g=1e3;else if(~~g<200)g=200;f.height=g}if("link"===d.message)if(h=b.createElement("a"),i=b.createElement("a"),h.href=f.getAttribute("src"),i.href=d.value,i.host===h.host)if(b.activeElement===f)a.top.location.href=d.value}else;}},d)a.addEventListener("message",a.wp.receiveEmbedMessage,!1),b.addEventListener("DOMContentLoaded",c,!1),a.addEventListener("load",c,!1)}(window,document);//--><!]]></p>
</div><br><a rel="nofollow" href="http://feeds.wordpress.com/1.0/gocomments/journeylimo.wordpress.com/291/"><img border="0" src="http://feeds.wordpress.com/1.0/comments/journeylimo.wordpress.com/291/"></a> <img border="0" src="https://pixel.wp.com/b.gif?host=journeylimo.wordpress.com&blog=115440817&post=291&subd=journeylimo&ref=&feed=1" width="1" height="1"><br>
via WordPress https://journeylimo.wordpress.com/2016/08/22/north-college-park-town-car-service/ |
AvaloniaUI/Avalonia | 610703380 | Title: Avalonia Bitmap scan0 constructor problem with DPI
Question:
username_0: Hey,
When I'm using Avalonia.Media.Imaging.Bitmap with the scan0 constructor in the nightly builds with the Bitmap's DPI I'm getting cropped images. As stated by @grokys if I use 96 DPI regardless of the real DPI everything is working fine. (Had this issue with Skia and D2D)
Since this is the situation I think to make it less confusing it will be better to remove the DPI vector from this constructor.
Thanks,
Adir |
Creators-of-Create/Create | 1005983410 | Title: [Suggestion/Request] Lights stay on in Moving Cart Contraptions
Question:
username_0: I've noticed lights turn off while a cart contraption is moving. I'm not sure if this is feasible to change or not, but it would be nice if it could be looked at. Other interactables on a moving contraption would also be nice, if possible.
Answers:
username_1: Are you on the latest version of Create? Is Flywheel enabled?
This is already a feature as I recall (when Flywheel is enabled).
Status: Issue closed
username_0: This issue is fixed with flywheel for 1.16.15. |
jOOQ/jOOQ | 1149073927 | Title: Change default for <pojosEqualsAndHashCode/> code generation option to true
Question:
username_0: jOOQ 3.5 added support for the code generation of `<pojosEqualsAndHashCode/>`: https://github.com/jOOQ/jOOQ/issues/1380
At the time, the flag was probably turned off by default because a lot of edge cases were expected (arrays, large data sets, etc). But today, most issues have been fixed.
The `toString()` value is generated by default, too, and when generating Scala or Kotlin code, the generated classes are `case class` or `data class` by default, which already implement `equals()` and `hashCode()`.
It makes sense to change the default to `true` as also suggested here: https://github.com/jOOQ/jOOQ/issues/13136
Status: Issue closed |
generatorgame/online | 107245297 | Title: Stardoll Hack Updates September 18 2015 at 09:46PM
Question:
username_0: <img src="https://scontent.cdninstagram.com/hphotos-xaf1/t51.2885-15/e15/11856616_1686814604896697_1104903402_n.jpg"><br><div>[NEW] STARDOLL ONLINE HACK WORKS 2015 : www.stardoll.com-hack.ga Add up to 99,999 Starpoints, Starcoins and Stardollars each day : www.stardoll.com-hack.ga Real Hack 100% Guaranteed Free Working Method : www.stardoll.com-hack.ga SHARE this if you want this hack :) HOW TO USE : 1. Go to >>> www.stardoll.com-hack.ga 2. Type your Stardoll Username/ID or Email Address (You don't need to type your password) 3. Insert the amount of Starpoints, Starcoins and Stardollars then click "Generate" 4. Finish verification process and check your account !. More Hack Online Real Working : www.username_0.com #username_0 #onlineusername_0 #stardoll #stardollar #stardoll300 #stardollturkey #stardollars #stardolls #stardollgraphic #stardollfashion #stardollbrazil #stardollhair #stardolllove #stardollgraphics #stardollwig #stardollbr #stardollaccess #stardollpics #stardollbazaar #stardollbrasil #stardollgiveaway #stardollismylife #stardolloffice #stardollgirl #stardollstyle #stardollrussia #stardolllook #stardollfamily #stardollsuite #stardollootd</div> |
codefordurham/adopt-a-drain | 232120075 | Title: Identify Pilot neighborhoods; send out an email.
Question:
username_0: For the pilot, we'd like to pick a couple neighborhoods that we could reach out to. Ideally, if we could schedule a walk around the neighborhood with people that use the app, that'd be great. Maybe the Northgate Park and Ellerbe Creek Association?
Answers:
username_1: We could ping <NAME> (<EMAIL>), President of the Northgate Park Neighborhood Association and organizer of the Big Sweep creek cleanup events.
username_0: The email we sent to NGP:
```
Hello neighbors,
The Durham City Stormwater & GIS Services department and Code for Durham are
collaborating to bring citizens a website that helps citizens and the city work
together to keep stormwater drains clear. Clean drains mean less flooding,
property damage, stream pollution and stress on our city infrastructure.
It is called Adopt-a-Drain.
Use Adopt-a-Drain to find stormwater drains in your neighborhood, and "adopt"
them. The website provides helpful information about cleaning drains. And when you
sign up on the Adopt-a-Drain website, the city will notify you of future cleanup
events in your area.
You can visit the website by following the link below:
https://adoptadraindurham.herokuapp.com/
Also, we have set up a button on the website called 'Give Us Feedback!'. Once you've
tried out the website, we'd love to have your feedback on this tool. It will
help us get ready for a more official launch of the website.
Thank you,
<NAME>
Code for Durham
```
username_1: I think that [downtown Durham](http://durhamhoods.com/downtown/), as well as [Trinity Park](http://durhamhoods.com/trinity-park/), [Northgate Park](http://durhamhoods.com/northgate-park/), [Watts-Hillandale](http://durhamhoods.com/watts-hospital-hillandale/), [Walltown](http://durhamhoods.com/walltown/), [Old West Durham](http://durhamhoods.com/old-west-durham/), [Duke Park](http://durhamhoods.com/duke-park/) and [Morehead Hill](http://durhamhoods.com/morehead-hill/) are particularly suitable neighborhoods for introducing this app. The Duke University community might be very interested in participating as well. |
bbc/simorgh | 636125108 | Title: MAP Media Player should be full-width on mobile
Question:
username_0: MAP video styling is not correct on mobile - it does not go to the edges of the screen. It ought to match the styling of the OD TV video player - as highlighted here https://github.com/bbc/simorgh/pull/6710#pullrequestreview-421984519
**To recreate**
- Visit: https://www.bbc.com/pidgin/tori-51913962
- Resize your window to mobile size
- Notice there is padding on either side, which shouldn't be there.
Status: Issue closed |
baydindima/AutomateLiveTemplatesPlugin | 305493279 | Title: Doesn't work in IDEA 2017.3
Question:
username_0: SCALA_FILE_TYPE
```
java.lang.NoSuchFieldError: SCALA_FILE_TYPE
at scala.edu.jetbrains.plugin.lt.finder.extensions.ScalaFileTypeNodeFilter.fileType(JavaFileTypeNodeFilter.scala:46)
at scala.edu.jetbrains.plugin.lt.LiveTemplateFindAction.$anonfun$actionPerformed$2(LiveTemplateFindAction.scala:40)
```
networktocode/ntc-templates | 926434513 | Title: cisco_nxos_show_interface_brief.texfsm and output modifier unexpected result
Question:
username_0: ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
```python
from netmiko import ConnectHandler
device = {
"device_type": "cisco_nxos",
"ip": ip,
"username": "cisco",
"password": "<PASSWORD>",
}
with ConnectHandler(**device) as net_connect:
# Parse the hostname of the device
hostname = net_connect.send_command("show hostname", use_textfsm=True)[0]["hostname"]
# Parse the show interface brief of the device
dwn_intfs = net_connect.send_command("show interface brief | include Eth.*down.*Link.not.connected", use_textfsm=True)
print(dwn_intfs)
```
##### SAMPLE COMMAND OUTPUT
```cisco
Eth1/6 1 eth access down Link not connected auto(D) --
Eth1/7 1 eth access down Link not connected auto(D) --
Eth1/8 1 eth access down Link not connected auto(D) --
Eth1/9 1 eth access down Link not connected auto(D) --
Eth1/10 1 eth access down Link not connected auto(D) --
...output truncated
```
##### SUMMARY
<!--- Explain the problem briefly -->
I am using Netmiko with TEXTFSM to parse the `down` + `Link not connected` interfaces on a Nexus switch. Basically, this output modifier gets me what I need (`include Eth.*down.*Link.not.connected`), but it is not converted into a list of dictionaries as expected. Running the Python script without the output modifier, I get the expected result `list[dicts]`. So, the problem is with the output modifier after the `|`.
##### STEPS TO REPRODUCE
Run the attached Python sample script.
##### EXPECTED RESULTS
```json
{
"description": "",
"interface": "Eth1/6",
"ip": "",
"mode": "access",
"mtu": "",
"portch": "--",
"reason": "Link not connected",
"speed": "auto(D)",
"status": "down",
"type": "eth",
"vlan": "1",
"vrf": ""
},
{
"description": "",
[Truncated]
"vlan": "1",
"vrf": ""
},
...output truncated
```
##### ACTUAL RESULTS
<!--- What actually happened? -->
<!--- Paste verbatim command output between quotes below -->
```
Eth1/6 1 eth access down Link not connected auto(D) --
Eth1/7 1 eth access down Link not connected auto(D) --
Eth1/8 1 eth access down Link not connected auto(D) --
Eth1/9 1 eth access down Link not connected auto(D) --
Eth1/10 1 eth access down Link not connected auto(D) --
...output truncated
```
Thank you.
Answers:
username_0: Any updates?
username_1: TextFSM can't parse this without the headers. You should likely just send the entire command and do the processing within Python, filtering for the items you are looking for.
username_0: Hello @username_1,
I used the same output modifier with `cisco_ios` instead of `cisco_nxos` and it worked and parsed the expected output.
You can try it out:
```python
from netmiko import ConnectHandler
device = {
"device_type": "cisco_ios",
"ip": "sandbox-iosxe-latest-1.cisco.com",
"username": "developer",
"password": "<PASSWORD>"
}
with ConnectHandler(**device) as net_connect:
output = net_connect.send_command(
command_string="show ip interface brief | include ^GigabitEthernet.*up.*up",
use_textfsm=True,
)
print(output)
```
**output:**
```json
[
{
"intf": "GigabitEthernet1",
"ipaddr": "10.10.20.48",
"status": "up",
"proto": "up",
},
{
"intf": "GigabitEthernet2",
"ipaddr": "10.10.30.48",
"status": "up",
"proto": "up",
},
{
"intf": "GigabitEthernet2.1629",
"ipaddr": "10.198.177.1",
"status": "up",
"proto": "up",
},
]
```
username_1: What happens if you send the command `show interface brief`? You should get the whole table back, parsed. Then pull what you need out of the parsed table.
username_0: I mentioned this in the issue. When I send the command without any output modifier, I get the expected output (list[dict]); otherwise, I get plain text output. So I am wondering why output modifiers work with _cisco_ios_ but not with _cisco_nxos_!
username_1: I don't have a NXOS readily in front of me and have to spend some time to get one spun up. On the NXOS template the start of the parsing is defined on lines 14/15 here: https://github.com/networktocode/ntc-templates/blob/master/ntc_templates/templates/cisco_nxos_show_interface_brief.textfsm#L14
```
Start
^Port\s+VRF\s+Status\s+IP\s+Address\s+Speed\s+MTU -> Management
```
Since those lines do not show up in the filtered output, the TextFSM parser does not know where to start parsing.
The IOS output likely still includes the start portion in the command output, which is why it is able to parse while the NXOS output is not.
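A minimal sketch of that workaround: send the unfiltered command so TextFSM sees the header row, then filter the parsed records in Python (keys match the expected results above):
```python
records = net_connect.send_command("show interface brief", use_textfsm=True)
down_intfs = [
    r for r in records
    if r["status"] == "down" and r["reason"] == "Link not connected"
]
```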
Thanks for the clarification on the question @username_0 as I was thrown off by the title.
username_0: Thanks a lot for your elaboration. I really appreciate it 🙏🏻
Status: Issue closed
|
alexa/ask-toolkit-for-vscode | 705755546 | Title: Credentials for Git-CodeCommit
Question:
username_0: OS: Linux x64 5.7.11-200.fc32.x86_64
Visual Studio Code Version: 1.47.3
Alexa Skills Toolkit Version: 2.0.2
Git Version: git version 2.26.2
**Question**
When trying to import an Alexa skill created in the Alexa Developer console, I am prompted for Git credentials, but I do not know what they should be. I have tried the AWS account credentials I used when creating the skill, but these are not accepted.
**Steps:**
Under Skill management I click "Download and edit skill"
a dropdown appears with the skill listed, which I click.
I am prompted for a folder to download to, so I select an empty folder
I am then prompted for a Username:
`Git: https://git-codecommit.us-east-1.amazonaws.com (Press 'Enter' to confirm or `Escape` to cancel)`
I am then prompted the same, but for a password.
I have tried:
- the email i use to log in to the developer console
- blank
- escape
Escape cancels, email and blank just display a toast saying:
Skill clone failed. Reason: Git folder setup failed for <dir path>. Reason: Failed to execute git {
"exitCode": 128,
"gitErrorCode": "AuthenticationFailed",
"gitCommand": "fetch",
"stdout": "",
"stderr": "fatal: Authentication failed for 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/9f7748b5-b219-440a-89c9-d0ac959a72ab/'\n"
}
It is not clear what credentials are expected, and i dont believe I have any other credentials I could use.
I have added this as a question as there is a good chance I have missed something.
Thanks for any suggestions :)
Answers:
username_1: Hi @username_0,
Thank you for your report.
I found you are using Git 2.26.x. Based on our investigation, there are known issues in the Git 2.25.x and 2.26.x versions. These two versions also cannot correctly fetch the git credentials in the Alexa Skills Toolkit.
Thus, we suggest you update to Git >= 2.27.
Please let us know if the problem persists.
username_2: Thanks for the update @username_1 . To add on that, the `git` credentials that were being asked in the extension, is for the AWS CodeCommit repository that the hosted skill resides at. This is one of the AWS resources `Alexa-hosted skills` service provides when creating a skill, as explained a bit [here](https://developer.amazon.com/en-US/docs/alexa/hosted-skills/build-a-skill-end-to-end-using-an-alexa-hosted-skill.html#overview). Since this resource is under the service account, your personal AWS login credentials doesn't work here.
To get your hosted skills' `git` credentials, you can call this [SMAPI API](https://developer.amazon.com/en-US/docs/alexa/smapi/alexa-hosted-skill.html#generate-credentials) manually, or use this [ASK CLI](https://developer.amazon.com/en-US/docs/alexa/smapi/ask-cli-command-reference.html#git-credentials-helper) command. However, the `git` credentials provided through this API would be short-lived, and needs to be refreshed periodically, to keep your locally cloned hosted skill git access intact.
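For reference, the CLI route looks roughly like this (the command name is from the linked reference; the exact invocation and output shape here are assumptions, so check `--help`):
```
$ ask util git-credentials-helper
username=<generated-user>
password=<short-lived-password>
```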
The `ASK Toolkit` extension removes this pain by automatically configuring the credential helper for the hosted skills that are created or downloaded using the extension. It is in this process, that we observed that some specific versions of `git` cannot configure the credential helper properly, leading to this bug. Hopefully if you update the `git` version, you will never be asked for these credentials again :-).
@username_0 , hope this explains a bit about the credentials and the issue faced.
username_2: Hey @username_0 , just checking back if you got around this issue. Please let us know if you are still facing problems.
username_0: Hi @username_2 apologies for the long delay. I've just tried to update Git on my local machine, but it appears that 2.26.2 is the latest version available in Fedora32, so updating isn't going to be easy unfortunately.
I'll see if I can find a consistent way to get a newer version installed (i.e. via an official repo rather than a manual install, for example) and I'll let you know how it goes.
Thanks for your help though, it's good to know it was a known issue rather than something I was getting wrong! Your help is very much appreciated :+1:
username_2: Closing this issue since we added an FAQ section about this.
Status: Issue closed
|
felipefrancisco/pokemon-go-map | 166329224 | Title: Locations gone
Question:
username_0: I added quite a few locations to Sheffield, UK a few days ago, looked today to see if anyone else has added any, but all my locations have gone and there is nothing in the city. Any idea what is causing this or have my requests been removed?
Answers:
username_1: @username_0 Yesterday the database was failing many times due to the crazy number of people using the map. I was forced to stop the database and migrate a backup to a better server; during this process some hundred markers, locations, reports and sights were lost. I'm really sorry about that.
Do you mind adding them again to the map? There'll be no more database related problems, I can guarantee that now.
username_0: No problem at all, I'll get them all added back on now.
Status: Issue closed
|
ag-grid/ag-grid | 354009312 | Title: Collapse all or expand all function is required in ag grid tool panal.
Question:
username_0: <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING
-->
**I'm submitting a ...** (check one with "x")
```
[] bug report => see 'Providing a Reproducible Scenario'
[x] feature request => do not use Github for feature requests, see 'Customers of ag-Grid'
[] support request => see 'Requesting Community Support'
```
**Customers of ag-Grid**
If you are a customer you are entitled to use the ag-Grid's customer support system (powered by Zendesk). Please use that channel for guaranteed response from the ag-Grid team with regards bugs, feature requests and support.
**Requesting Community Support**
If you are not a customer of ag-Grid, ag-grid staff will label your issue as managed-by-the-community. This means that ag-Grid staff is not going to be actively looking into it and it will get closed if inactive for more than one month. The community is welcome to help with this question/support issue.
**Providing a Reproducible Scenario**
Accepted reproducible scenarios are
- A description of the detailed steps to reproduce your behaviour in one of our examples in the docs.
- A plunker
If you decide to send us a plunkr, from any example in our website use the plunkr button in there to fork your own code by following the steps below:
- Select the framework that is appropriate to you from the drop-down
- Open it in plunker. (Use the button plunker in our example)
- Add your changes so that the behaviour is reproduced
- Save and Freeze the plunker(On the top left corner)
- Send us the link to the plunker(You can copy the URL from the browser)
If reporting a bug make sure to state.
Current behaviour.
Expected behaviour. If possible back this up with our docs/examples if possible
**Current behavior**
<!-- Describe how the bug manifests. -->
**Expected behavior**
<!-- Describe what the behavior would be without the bug. If possible back this up with our docs/examples if possible-->
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **ag-Grid version:** X.X.X
<!-- Check whether this is still an issue in the most recent ag-Grid version -->
* **Browser:**
<!-- Run `navigator.userAgent` in console of all of the browsers where this could be reproduced -->
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
Status: Issue closed
Answers:
username_1: Hi,
Note that as of the latest versions of ag-grid, the tool panel has a button that lets you expand/collapse all columns
Hope this helps. |
naholyr/node-twitter-timeline-cleaner | 33085332 | Title: 404 error when extracting mentions
Question:
username_0: When I try to run it, the following error appears:
```
Extracting Mentions [ ] 0%
/usr/local/share/npm/lib/node_modules/twitter-timeline-cleaner/cli/stats.js:86
if (err) throw err;
^
Error: HTTP Error 404: Not Found
at /usr/local/share/npm/lib/node_modules/twitter-timeline-cleaner/node_modules/twitter/lib/twitter.js:95:14
at passBackControl (/usr/local/share/npm/lib/node_modules/twitter-timeline-cleaner/node_modules/twitter/node_modules/oauth/lib/oauth.js:386:13)
at IncomingMessage.<anonymous> (/usr/local/share/npm/lib/node_modules/twitter-timeline-cleaner/node_modules/twitter/node_modules/oauth/lib/oauth.js:398:9)
at IncomingMessage.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:920:16
at process._tickCallback (node.js:415:13)
```
Answers:
username_1: I've run into this error too. I've only had a few minutes to look at it but might have found the problem, if I can fix it tonight I'll submit a fix/fork it.
username_1: I just realized that I am getting a different error, but it probably stems from the same issue. Here is my error:
Extracting Direct Messages [ ] 0%
/usr/local/lib/node_modules/twitter-timeline-cleaner/cli/stats.js:86
if (err) throw err;
^
Error: HTTP Error 403: Forbidden
at /usr/local/lib/node_modules/twitter-timeline-cleaner/node_modules/twitter/lib/twitter.js:95:14
at passBackControl (/usr/local/lib/node_modules/twitter-timeline-cleaner/node_modules/twitter/node_modules/oauth/lib/oauth.js:397:13)
at IncomingMessage.<anonymous> (/usr/local/lib/node_modules/twitter-timeline-cleaner/node_modules/twitter/node_modules/oauth/lib/oauth.js:409:9)
at IncomingMessage.emit (events.js:129:20)
at _stream_readable.js:908:16
at process._tickCallback (node.js:355:11) |
VKCOM/VKUI | 837190950 | Title: [Bug] DatePicker не имеет фиксированного размера и выходит за пределы контейнера
Question:
username_0: https://vkcom.github.io/VKUI/#datepicker
Выбираем какой-нибудь большой месяц, например Февраля, селект становится больше и выталкивает весь контент вправо, тем самым выходя за главный контейнер.
Answers:
username_0: 

username_1: [Пофикшено](https://github.com/VKCOM/VKUI/releases/tag/v4.4.0).
Status: Issue closed
|
jloser/github.io | 644607503 | Title: Picture does not display
Question:
username_0: Please fix the issue with the picture. It does not display.
- [ ] get new URL
- [ ] change index
- [ ] commit changes
- [ ] push changes
- [ ] create and review pull request
- [ ] merge changes
- [ ] delete branch online
- [ ] delete branch offline
- [ ] sync repository with pull
Status: Issue closed |
microsoft/fluentui | 965190589 | Title: aria-live attribute on Dropdown doesn't apply to child span
Question:
username_0: ### Environment Information
- **Package version(s)**: [email protected]
- **Browser and OS versions**: Edge, os.ver 21390.2025
### Describe the issue:
Setting aria-live on a Dropdown has no effect, as the attribute is not inherited by the span which displays the selected dropdown option. Setting it like so:
<Dropdown aria-live="off" options={options} >
</Dropdown>
Results in:

This is an accessibility issue as we have times where we cascade auto-populate dropdowns in cases of their being only one option, causing Narrator to read out every single dropdown we fill (without actually naming the label of the dropdown being populated, so it is not useful to the user).
### Please provide a reproduction of the issue in a codepen:
https://codepen.io/username_0/pen/OJmrGpy
#### Actual behavior:
aria-live="polite" is set on the span of the dropdown
#### Expected behavior:
aria-live="off" or no attribute is set on the span of the dropdown
### Documentation describing expected behavior
https://microsoft.sharepoint.com/:w:/r/sites/accessibility/_layouts/15/Doc.aspx?sourcedoc=%7B64DD7C5C-AA62-4D53-886E-BCAEE5341C01%7D&file=Guideline%20101%20Recommended%20Assistive%20Technology%20Tools%20for%20MAS%20Testing%20of%20Microsoft%20Digital%20Properties_v3_(CELA).docx&action=default&mobileredirect=true
HCL accessibility team filed us issue on us with the expected result of "Screen reader should announce only correct or relevant information".
Answers:
username_0: I've tried onRenderOption, onRenderItem, and onRenderTitle, and none of them seem to give access to change the span element which sets aria-live="polite" (the selected item in the dropdown). I might just be missing something though.
username_1: @username_0 - Thanks for filing this issue with us and providing the details into the problem you are facing.
@bsunderhus - Would you be able to confirm if this is a regression, or if this behavior is an issue? |