repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
bricelam/EFCore.TextTemplating | 558609844 | Title: To Do
Question:
username_0: - [ ] Update README.md
- [ ] Use separate projects for the design-time code and the scaffolded model
- [ ] Use a database project and Erik's dacpac database model factory
Answers:
username_0: WIP branch: [dacpac](https://github.com/username_0/EFCore.TextTemplating/compare/dacpac)
Status: Issue closed
|
reapit/foundations | 1060061436 | Title: API Not Scoping to Reapit Customer Header
Question:
username_0: **Describe the bug**
Upon querying any REST API Endpoint, and providing the `reapit-customer` header, we are seeing data which is unrelated to the reapit customer we have requested. We get the same data no matter what `reapit-customer` ID we provide.
Example `x-amzn-requestid` requests:
```
# Brings back the incorrect data
346fda63-af18-454b-b577-5c749453cef5
# Brings back the correct data
ae4c5c54-f372-4426-95f8-1f8d04dbadaa
```
**To Reproduce**
Make any request with any valid `reapit-customer` ID, and the same data is returned. This applies to all endpoints.
**Expected behaviour**
For the requests to be scoped based on the provided `reapit-customer` ID
Answers:
username_1: Hi @username_0 - are you getting a new access token when you change customer IDs? As a request comes into the platform, the token and headers are inspected, with the result being cached for 5 minutes. If you change the customer ID within that 5-minute period, the new one will effectively be ignored until the cached version of the token expires, after which the token inspection runs again. Does this sound like the likely cause of the issue you describe? I have found both of those requests from Nov 21 at 00:05:03 and 00:05:04, and both hit our platform with the same customer ID, so I suspect what I've said above is accurate.
username_0: Hi @username_1,
Thank you for your reply.
I have just been looking over your documentation and I am not entirely sure how the `reapit-customer` header works in conjunction with the token request.
We only have one `client_id` and `client_secret`, we are using the client credentials flow.
For the purpose of this example, let's say I have two customers. One customer's ID is `C1` and the other is `C2`.
When I dispatch a request to Negotiators, I use the token which has been generated and pass `reapit-customer=C1` as the header. A few seconds later, I will make the same request again but this time switching the `reapit-customer` header's value to `C2`. But, I am still seeing data from `C1`, not `C2`.
Would this be the caching you explained? Or, do I need to do something different before making the second request for another customer?
username_1: Hi there @username_0, yes that's correct. The access token is cached as the request comes in, so after the first request, changing the reapit-customer header won't make any difference for 5 minutes, until the cache entry reaches its TTL and is purged. You need to obtain a new token for each customer you want to access (so if you're processing a lot of data, it makes sense to do it customer by customer rather than in random record order, to reduce the number of times you need to swap out your token). This probably isn't very clear in the documentation and only applies to machine-to-machine apps, so I'll make a note to get that documentation updated.
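In code, that pattern looks roughly like the sketch below; the token and platform URLs here are placeholders for illustration, not endpoints taken from this thread:
```python
# a minimal sketch of "one token per customer"; TOKEN_URL and API_URL
# are hypothetical placeholders, not documented Reapit endpoints
import requests

TOKEN_URL = "https://example-connect.reapit.cloud/token"      # assumption
API_URL = "https://example-platform.reapit.cloud/negotiators"  # assumption

def get_token(client_id, client_secret):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_for_customer(customer_id, client_id, client_secret):
    # fetch a fresh token for each customer so the platform's 5-minute
    # token/header cache never pairs this token with another customer id
    token = get_token(client_id, client_secret)
    resp = requests.get(API_URL, headers={
        "Authorization": "Bearer " + token,
        "reapit-customer": customer_id,
    })
    resp.raise_for_status()
    return resp.json()
```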
username_0: Thanks for clearing that up @username_1
Based on what has been discussed here, I will make the necessary changes on our side.
Thanks for your help @username_1
username_1: no problem. Thanks for getting in touch
Status: Issue closed
|
creativecommons/creativecommons.github.io-source | 616199592 | Title: Allow support for historical project ideas
Question:
username_0: ## Description
Currently, the way our internship project ideas are set up only allows one set of project ideas to be live at a time. We'd like to show historical project ideas on our website as well.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
- [Project ideas template](https://github.com/creativecommons/creativecommons.github.io-source/blob/master/templates/project-ideas.html) (line 10 hardcodes the project idea location)
- Project ideas models:
- https://github.com/creativecommons/creativecommons.github.io-source/blob/master/models/project-ideas.ini
- https://github.com/creativecommons/creativecommons.github.io-source/blob/master/models/project-ideas-collection.ini
Answers:
username_0: Please put your implementation ideas here for discussion before sending a PR.
username_1: @username_0 We can add project titles under the bullet for each year. Under each, we can give a short description, including the student and mentor names.
The GSoC'19 bullet carries blog links for all projects, so we can also include short descriptions and names there. However, the project links should come under a 'Projects' sub-heading to separate them from other links.
The 'Open Source Blog Posts' link mentioned under various categories is ambiguous to a new user. It should be changed to something like 'Open Source Blog Posts - GSoC'19' for each category, to make clear which round's blogs it links to.
For the content box, we can add bullet links under the GSoC heading for every year.
A similar structure can be used for Outreachy posts, with the project title, a short description, and the participant's name.
username_1: Please give your views on it.
username_0: @username_1 I don't understand the solution you are proposing. Could you explain in more detail with screenshots if you're proposing UI changes or references to lines of code where you're proposing code changes?
Thanks!
username_2: If I understood correctly, we want to host historical GSoC project ideas on our website, as opposed to https://www.google-melange.com/archive/gsoc/. Historical project ideas are currently accessible via the menu **Internships** -> **History**.
We want to port content from **google-melange** and store it in a structured format. The links on the **History** page should take us to the newly formed pages. We can use the existing `models` to create template pages.
username_0: @username_2 that sounds like a great idea but I wasn't thinking about porting content from GSoC's websites or updating the History section, although we could certainly do that once this is implemented
We've had a few rounds of internships so far, but we don't show the previous project ideas for those internship rounds. We'd like to restore them from the old git commits, perhaps in a new section of the **Internships > History** page.
For example, see these PRs that replace the project ideas entirely:
https://github.com/creativecommons/creativecommons.github.io-source/pull/239
https://github.com/creativecommons/creativecommons.github.io-source/pull/170
username_2: We can place all project ideas in the same folder, i.e. `project-ideas-collection`. Archived (historical) and non-archived (live) ideas can be differentiated by a boolean model field `archived`.
The template which renders non-archived ideas can use a filter in the `for` loop.
Our current **History** page uses the `page-with-toc.html` template. We can create a new template which extends `page-with-toc.html` and contains an additional section for **Project Ideas History**. Here we can use a `for` loop similar to the one in the `project-ideas.html` template. The only difference is that here we need to add a filter for archived ideas.
Here is a step-by-step guide:
1. Add a boolean field `archived` with default value `false` in the `project-idea.ini` model. This is used to differentiate between archived and non-archived ideas in the `project-ideas-collection` directory.
2. Create content files for all archived ideas using the `project-idea.ini` model. Set the field `archived` to `true`.
3. For live ideas, set the field `archived = false` in the content.
4. Modify the `project-ideas.html` template so that it iterates over non-archived ideas only:
`{% for idea in ideas.filter(F.archived == false) %} `
This way our current functionality is preserved.
5. Create a new template `internship-history.html` which extends `page-with-toc.html`. Add a section in it for historical project ideas. We can iterate over the archived project ideas and render them. This loop is similar to the one in `project-ideas.html`.
I have a POC ready. Let me know if you want to have a look.
username_0: @username_2 I think you're on the right track! The two main changes I would suggest are:
1. I think we should have a field indicating which internship round each project idea is associated with. It would be great if we could just have a folder for each internship round's project ideas.
2. I think it would be good to use the existing project ideas template and just have a different project ideas page for each round of internships. We can link to archived rounds from the history page. That way, we don't have to make any new templates.
What do you think?
username_2: In this scenario, we don't need the `archived` field. This field was useful when we had all ideas at the same directory level and needed to differentiate between **live** and **archived**.
But if we plan to have different folders, this differentiation is not required. The current live folder can be linked from the **Project Ideas** page. We can create a **gsod-2020** directory inside **project-idea-collections** and put all GSoD 2020 ideas inside it. This **gsod-2020** directory will then be linked from the **project-ideas.html** template.
Similarly, we can have other directories for the other rounds of internships. These can be linked from the **History** page.
This eliminates the need for the `archived` field.
There is one thing which I am unable to figure out. To render currently live project ideas, we hardcode the link to the content directory in the template file, like [this](https://github.com/creativecommons/creativecommons.github.io-source/blob/master/templates/project-ideas.html).
How are we going to achieve that for all rounds of internships? Multiple template files, each with a hardcoded link to its respective content directory? We need some mechanism to iterate over content files. Am I missing anything?
username_0: @username_2 that all sounds right!
We need to remove the hardcoding of the content directory in the template file and instead make it something we pass into the template somehow. I think we should just have a single template file. I am not sure how to solve this; any thoughts are appreciated.
username_3: @username_0 I think this issue is somewhat related to #576, where I moved information about old GSoC projects from the `cc-archive` GitHub org to the **Internships -> History** section for a few specific years (09-13), but info about GSoC projects from 06-08 is lacking. I am still researching; maybe I will find the required data on `cc-archive`.
username_0: @username_3 This is not about finding more information, this is about updating the code to support browsing through project ideas for multiple years.
username_3: @username_0 Ah, so that's the manner in which you want to implement it; I misunderstood the conversation. I want to take over this issue. Is someone already working on it?
username_0: I don't think anyone is working on it.
username_3: @username_0 A little implementation idea from my side. In the website's **Internships -> History** section we have categories such as `GSOC-2019` (just an example); when we click on the main link of `GSOC-2019`, we get redirected to the GSoC website. So what I am going to do is break those links and then implement the above via https://github.com/creativecommons/creativecommons.github.io-source/blob/master/content/gsoc-2019/project-ideas/contents.lr
In this way we keep the user on the CC website itself rather than opening multiple external links in new tabs.
username_0: I'm going to defer to @TimidRobot on questions about this issue since they are going to be taking over maintaining the site.
username_3: @TimidRobot Let me start implementing this issue. |
Royal-Navy/design-system | 918568417 | Title: React hook ESLint rules aren't running
Question:
username_0: The `react-hooks/rules-of-hooks` and `react-hooks/exhaustive-deps` ESLint rules are not running in our app. I've created this as a bug as I believe they were running at some point (but I could be wrong so feel free to change this to a feature request).
The reason they aren't running appears to be that [`airbnb/hooks`](https://github.com/airbnb/javascript/tree/master/packages/eslint-config-airbnb#eslint-config-airbnbhooks) is missing from `extends` in `packages/eslint-config-react/index.js`.
### Steps to reproduce
1. Run `yarn lint` in `packages/react-component-library`
2. Observe there are 0 errors
3. Add `airbnb/hooks` to `extends` in `.eslintrc.js`
4. Run `yarn lint` in `packages/react-component-library` again
5. Observe there are now ~10 errors
Answers:
username_1: After investigation, this is a larger piece of work and is really a feature request, so it should be a lower priority than other bugs. For now it is possible to work around this by extending the configuration in the application.
Status: Issue closed
|
youzan/vant | 930032871 | Title: [Bug Report] The Picker component throws an error with number-type arrays
Question:
username_0: ### Device / Browser
Chrome
### Vant version
3.1.0
### Vue version
3.1.2
### Reproduction link
<a href="https://vant-contrib.gitee.io/vant/v3/#/zh-CN/picker" target="_blank">https://vant-contrib.gitee.io/vant/v3/#/zh-CN/picker</a>
### Describe the problem
The Picker component throws an error when the columns are a number-type array
Answers:
username_1: Please provide a valid reproduction link
https://codesandbox.io/s/m5v3f
Status: Issue closed
username_0: https://codesandbox.io/s/keen-lewin-88icx?file=/index.html
username_1: ### Device / Browser
Chrome
### Vant version
3.1.0
### Vue version
3.1.2
### Reproduction link
[https://codesandbox.io/s/keen-lewin-88icx?file=/index.html](url)
### Describe the problem
The Picker component throws an error when the columns are a number-type array
username_1: Currently the contents of `columns` must be of string type; we will evaluate later whether to support the number type
Status: Issue closed
username_1: The number type is supported as of version 3.1.2 |
scikit-learn-contrib/imbalanced-learn | 400857741 | Title: AttributeError: 'SMOTE' object has no attribute 'fit_resample'
Question:
username_0: <!--
If your issue is a usage question, submit it here instead:
- The imbalanced learn gitter: https://gitter.im/scikit-learn-contrib/imbalanced-learn
-->
```python
from imblearn.over_sampling import SMOTE
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X_trainC, y_trainC)
```
<!-- Instructions For Filing a Bug: https://github.com/scikit-learn-contrib/imbalanced-learn/blob/master/CONTRIBUTING.md#filing-bugs -->
#### Description
<!-- Example: Joblib Error thrown when calling fit on LatentDirichletAllocation with evaluate_every > 0-->
#### Steps/Code to Reproduce
<!--
Example:
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
docs = ["Help I have a bug" for i in range(1000)]
vectorizer = CountVectorizer(input=docs, analyzer='word')
lda_features = vectorizer.fit_transform(docs)
lda_model = LatentDirichletAllocation(
n_topics=10,
learning_method='online',
evaluate_every=10,
n_jobs=4,
)
model = lda_model.fit(lda_features)
```
If the code is too long, feel free to put it in a public gist and link
it in the issue: https://gist.github.com
-->
#### Expected Results
<!-- Example: No error is thrown. Please paste or describe the expected results.-->
#### Actual Results
<!-- Please paste or specifically describe the actual output or traceback. -->
#### Versions
<!--
Please run the following snippet and paste the output below.
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import sklearn; print("Scikit-Learn", sklearn.__version__)
import imblearn; print("Imbalanced-Learn", imblearn.__version__)
-->
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-10-106bdd16e0a2> in <module>()
1 from imblearn.over_sampling import SMOTE
----> 2 X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X_trainC, y_trainC)
3 print(X_resampled.shape)
4 print(X_train.shape)
AttributeError: 'SMOTE' object has no attribute 'fit_resample'
<!-- Thanks for contributing! -->
Answers:
username_1: There must be something wrong with your module version. See an example of working SMOTE here: https://gist.github.com/username_1/d85a08ddb38a18ed444cc68f43d88631
username_0: is there any version that i have to set?
username_1: Maybe try `pip install imblearn -U` or did you use conda?
username_0: i tried pip install imblearn -U. but still the same
username_1: can you try to execute this simple example here:
https://gist.github.com/username_1/d85a08ddb38a18ed444cc68f43d88631
same problem?
username_0: yes. same problem
username_1: Hmm...
Do you use Python 3.6?
username_0: yes 3.6.4
username_0: i also get
TypeError: __init__() got an unexpected keyword argument 'sampling_strategy'
for
ada = SMOTE(sampling_strategy=150/100000,
random_state=42,
)
username_2: Let's start by checking the installed versions, because you don't have the right version for sure.
``` python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import sklearn; print("Scikit-Learn", sklearn.__version__)
```
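That snippet omits imbalanced-learn itself, which is the version that actually matters here. A minimal check (assuming `fit_resample` first appeared in imbalanced-learn 0.4, with `fit_sample` as the older name):
```python
import imblearn; print("Imbalanced-Learn", imblearn.__version__)

# fit_resample only exists from imbalanced-learn 0.4 onwards;
# older releases expose fit_sample instead
from imblearn.over_sampling import SMOTE
print("has fit_resample:", hasattr(SMOTE(), "fit_resample"))
```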
username_0: Windows-10-10.0.17134-SP0
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900
64 bit (AMD64)]
NumPy 1.16.0
SciPy 1.0.0
Scikit-Learn 0.19.1
username_1: I would suggest to create a brand new conda env with python 3.6:
`conda create --name <new_env_name> python=3.6`
Activate that env and then install packages with pip rather than conda.
Then try again.
To manage my pip files I use this:
"Install and Update packages from a File: https://eniak.de/it/pip_how-to#install_and_update_packages_from_a_file
username_2: Could you check the version of imblearn?
The version of scikit-learn you have is too old for imblearn 0.4.
You should update scikit-learn and imbalanced-learn:
`conda update imbalanced-learn` should update things.
Sent from my phone - sorry to be brief and for potential misspellings.
username_3: After reading this I updated scikit learn and imbalanced learn, but the error persists:
Windows-10-10.0.17134-SP0
Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]
NumPy 1.16.1
SciPy 1.1.0
Scikit-Learn 0.20.3
username_4: Using Python 2.7
and running: **pip install imblearn -U**
solved it for me.
maybe try with a clean environment.
_scipy==1.2.1
numpy==1.16.3
scikit-learn==0.20.3_
Status: Issue closed
username_5: @username_2 I get an error "'SMOTE' object has no attribute 'fit_resample'"
the versions that I have on my system are:
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import sklearn; print("Scikit-Learn", sklearn.__version__)
Windows-10-10.0.17763-SP0
Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)]
NumPy 1.16.2
SciPy 1.3.0
Scikit-Learn 0.21.2
username_2: Update imbalanced-learn, and please do not comment on a closed issue |
Backblaze/JavaReedSolomon | 393504279 | Title: "Shards are different sizes"
Question:
username_0: If you try to decodeMissing and the first byte[] is one of the missing arrays, you will get an Exception. This happens because the shard length is checked on the first byte[]. It's an easily fixed problem.<issue_closed>
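For illustration, the fix amounts to taking the shard length from the first *present* shard rather than `shards[0]`; a hedged sketch of that logic in Python (the Java library's actual names may differ):
```python
def shard_length(shards, shard_present):
    # use the first shard that is actually present, since shards[0]
    # may be one of the missing (null) arrays during decodeMissing
    for shard, present in zip(shards, shard_present):
        if present:
            return len(shard)
    raise ValueError("no shards present, cannot determine shard size")
```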
Status: Issue closed |
woocommerce/woocommerce | 361917994 | Title: Add an option to pay for order marked from view-order page
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Although not necessary, it could be helpful to include an option to pay for an order from the view order page within My Account.
**Describe the solution you'd like**
Currently, the only place to see a "pay" option for pending orders is the all orders page. |
pelias/api | 254074737 | Title: Root path ('/') is a 404
Question:
username_0: As of https://github.com/pelias/api/pull/929 we no longer have anything to handle the root URL for Pelias. It used to redirect to `/v1`, which gives some basic information and links to Pelias documentation.
Now, a request to the root URL gives an unhelpful 404:
```
julian@julian-mapzen ~ $ curl localhost:4000
{"error":"not found: invalid path"}
```
This is somewhat similar to what used to happen in https://github.com/pelias/api/issues/511<issue_closed>
Status: Issue closed |
DMTF/Redfish-Tools | 584773186 | Title: Syntax error in OpenApi.yaml hosted at http://redfish.dmtf.org/schemas/v1/openapi.yaml
Question:
username_0: Not sure this is the right place for this, but it seems active, so please feel free to move it to a different repo, or let me know a better place for this...
I am completely new to YAML and the OpenAPI space right now. I've tried a few different OData and OpenAPI generators, but so far none of them have been able to fully parse the data hosted on http://redfish.dmtf.org/schemas/v1/.
Currently I am using VS2019 with the 'Unchase OpenAPI (Swagger API) Connected Service' plugin to generate some C# client classes for accessing a rack of servers which use the Redfish API.
When I load http://redfish.dmtf.org/schemas/v1/openapi.yaml, the tool throws an error that the ? character is unexpected:
`Error: Invalid property identifier character: ?. Path 'paths['/redfish/v1/AccountService/ExternalAccountProviders/{ExternalAccountProviderId}/Certificates/{CertificateId}']', line 1, position 31085`
So, my question is.... Is the ? on line 986 actually valid YAML?
```yaml
? /redfish/v1/AccountService/ExternalAccountProviders/{ExternalAccountProviderId}/Certificates/{CertificateId}/Actions/Certificate.Rekey
: post:
    parameters:
```
Answers:
username_1: Moved to Redfish-Tools since this is the repo that contains the JSON Schema to OpenAPI converter.
username_1: The short answer is, yes, that is valid YAML. YAML defines usage of `?` for complex mappings.
However, the more complex thing about this is why it is even generated that way in the first place. As far as I can tell, the Python `yaml` module seems to do this when a mapping key in the output exceeds a certain length. We haven't had issues with this in the past; when using the same module to read a YAML file, it's able to properly load it into a Python dictionary:
```
with open( "openapi.yaml" ) as openapi_file:
openapi_data = yaml.load( openapi_file, Loader = yaml.FullLoader )
```
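To make the length-based behaviour concrete, here is a hedged sketch; it assumes PyYAML's default emitter, which folds mapping keys past its simple-key length limit (128 characters) into the explicit `? key` / `: value` form:
```python
import yaml

# this Redfish path exceeds PyYAML's 128-character simple-key limit,
# so safe_dump emits it in the explicit "? key / : value" form
long_key = ("/redfish/v1/AccountService/ExternalAccountProviders/"
            "{ExternalAccountProviderId}/Certificates/{CertificateId}/"
            "Actions/Certificate.Rekey")
doc = {long_key: {"post": {"parameters": []}}}

dumped = yaml.safe_dump(doc)
print(dumped)                          # starts with "? /redfish/v1/..."
assert yaml.safe_load(dumped) == doc   # the complex mapping round-trips fine
```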
The same file has been used on some of the tools used in the OpenAPI community (like Swagger Editor).
I'm not aware of anyone trying this out with the module you're referencing though. |
Creators-of-Create/Create | 792743657 | Title: Can't use the filter
Question:
username_0: In version 1.16.4 I found an item called filter. I selected the whitelisted items, but I couldn't find a "filter slot". Is the item unavailable right now, or am I just being stupid?
Answers:
username_1: Brass-based logistics blocks use filters
username_2: If you are on fabulous graphics, it could cause the filter slots to visually disappear.
username_0: Thanks! It was very helpful
Status: Issue closed
|
iheanyi/bandcamp-dl | 207532144 | Title: feature-request: Add the option to add the label as a Grouping meta tag.
Question:
username_0: I've been relying heavily on this in spotify-ripper so I can order my collection by music label, by adding it to the grouping tag in iTunes. Spotify-ripper also uses mutagen, so adding this shouldn't be too big of an issue code-wise, I think. I'd like to submit a PR, but I have to look into this Python code first :D
Answers:
username_1: Seems the label information isn't part of the JS we scrape, though I roughly know where to find it.
It's shoved in a blob of data that I was probably going to add to BandcampJSON at some point anyway.
Once I do that, it's just a matter of throwing in the label metadata along with the rest of the data we currently use, and using TIT1 to set the content group (Grouping) tag. At least in theory; I have no idea how iTunes works or handles its tags, but from what a bit of research tells me, I've got the right idea.
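In mutagen terms that would look roughly like this sketch (assuming an MP3 that already has an ID3 tag, and a `label` value scraped from the page data; neither is the tool's actual code):
```python
from mutagen.id3 import ID3, TIT1

label = "Some Record Label"  # hypothetical value pulled from the page data
tags = ID3("track.mp3")
# TIT1 is the ID3 "content group" frame, shown as "Grouping" in iTunes
tags.add(TIT1(encoding=3, text=label))  # encoding=3 is UTF-8
tags.save()
```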
username_1: Update: Refactored BandcampJSON to return a list of JSON strings instead of creating multiple instances and returning individual strings, in the process added a separate function to retrieve the pagedata json string containing things like the label, other album names, etc
Status: Issue closed
username_0: Super! |
humanmade/Custom-Meta-Boxes | 165074152 | Title: Consider support for register_meta()
Question:
username_0: [WordPress 4.6 introduces a fundamental change to the way meta fields are registered.](https://make.wordpress.org/core/2016/07/08/enhancing-register_meta-in-4-6/) The primary change is the addition of meta metadata such as its type, description, value structure, and whether to expose it in the REST API.
We should consider whether registered CMB meta fields should also call `register_meta()` in order to expose meta fields to this new meta API.
Answers:
username_1: Is there any consideration for this yet to coincide with the upcoming release of the new REST API?
username_2: @username_1 no, there's no consideration for this as of yet, but I'm tagging it for 1.2.
username_2: Closing this issue as we will not be pursuing enhancements.
Status: Issue closed
|
chaoss/augur | 939071268 | Title: Game Oriented Insight Worker Mobile App
Question:
username_0: It would be great to gamify use of a mobile app against insights from the insight worker, or any of the machine learning workers. Completely open to creativity here.
<img width="1159" alt="augur-tech" src="https://user-images.githubusercontent.com/379847/124797431-dbcfc280-df17-11eb-838b-7718a29728a1.png"> |
RippleOSI/Ripple-Showcase-Stack-Project | 450327959 | Title: Hovering over Dashboard Widget Rows
Question:
username_0: When hovering over the rows within any widgets on the dashboard, no row hover effects are being displayed. It’s convention to invert the colour when hovering over any rows within a panel or widget. The mouse cursor should also change to a pointer (cursor: pointer;)
Answers:
username_1: @username_2
Could you please clarify, what do you mean as Dashboard? Which elements of UI should have cursor "pointer"?
Could you please add screenshot?
username_2: @username_1 Apologies, I meant the Patient Summary. The page where we display widgets...

Cursor 'pointer' should be applied to the table rows within these widgets.
Status: Issue closed
username_1: OK! I need 0.2-0.25 h for this.
username_1: When hovering over the rows within any widgets on the dashboard, no row hover effects are being displayed. It’s convention to invert the colour when hovering over any rows within a panel or widget. The mouse cursor should also change to a pointer (cursor: pointer;)
username_3: @tony-shannon @username_0 @username_2
This was done. Result in the video: https://drive.google.com/file/d/1vKU1fLo-r-xzjAIk1wByTygOFKqJAK-A/view
username_2: @username_1 This is caused because the View button doesn't have a white background by default, I think it's transparent. So when you hover over the row, you're seeing a transition on the button between transparent and white, but the row background changes to green immediately.
Also, please could you keep the green border on the View button when not hovering over it? It looks like the View button has no border in its default state.

username_2: @username_1 I've also checked the Patient Summary and the rows within the widgets aren't hovering over in green.
username_1: @username_2
OK, should I add hovering to the Patient Summary panels?
Because I remember that it was done, but we removed it later.
username_2: The only panels we want to add hovers on the patient summary are the rows within widgets i.e.

username_1: @username_2
This issue has been done
https://drive.google.com/file/d/1yCdO-C18uSCybyg-7MVtZYwMSmPmXmUQ/view
username_2: @username_0 Looks good to me. Thanks Bogdan.
Status: Issue closed
|
ProxyConn/UntamedChat | 81424607 | Title: How to use GroupManager ranks?
Question:
username_0: Hello, I'd like to use your plugin on my bungee server, but it removes the formatting for my ranks set up in GroupManager.
Is there a way to set it up so it is like {server} {gmprefix}{player}{gmsuffix}?
Answers:
username_1: Currently this is not possible.
Status: Issue closed
username_0: For those who are sick of lazy programmers who close issues/suggestions like this because they don't feel like improving anything, go here: http://dev.bukkit.org/bukkit-plugins/bungeechatplus/
A much better alternative, and the author actually gives a shit.
username_2: @username_0 this is currently impossible with how UntamedChat is set up; it works at the bungee level and not at the spigot/bukkit level.
username_0: Sounds like a personal problem. BungeeChatPlus simply works as a bukkit bridge and bungee plugin. I think you need to work harder on your excuses.
username_2: Sure looks like you having a good time with that plugin

username_0: It's a compatibility issue with https://github.com/Poweruser/MinetickMod/tree/v1.7.10
It's built for bukkit/spigot, which KCauldron is also built off of, and it works great there. But MineTick is a completely different story, and that's what we run on our HUB.
The author already got on our teamspeak and our server to check things out, and he's working on a fix.
username_2: Congrats, hope that works out for you! Have fun with that plugin, and thanks for stopping by! |
jlippold/tweakCompatible | 564392757 | Title: `LowPowerCharging` working on iOS 13.2.3
Question:
username_0: ```
{
"packageId": "me.jjolano.lowpowercharging",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "me.jjolano.lowpowercharging",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/me.jjolano.lowpowercharging/",
"iOSVersion": "13.2.3",
"packageVersionIndexed": false,
"packageName": "LowPowerCharging",
"category": "Tweaks",
"repository": "jjolano's iOS Tweaks",
"name": "LowPowerCharging",
"installed": "0.0.2",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "me.jjolano.lowpowercharging",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Enables Low Power Mode automatically when charging.",
"latest": "0.0.2",
"author": "jjolano",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
xia-lab/MetaboAnalystR | 367260764 | Title: Error in CrossRefencing: 'curl' call had nonzero exit status
Question:
username_0:
```
Error in download.file(lib.url, destfile = filenm, method = "curl") :
  'curl' call had nonzero exit status
In addition: Warning message:
running command 'curl "http://www.metaboanalyst.ca/resources/libs/compound_db.rds" -o "compound_db.rds"' had status 127
```
Answers:
username_1: Same problem here. The url "http://www.metaboanalyst.ca/resources/libs/compound_db.rds" was not accessible.
username_2: Hi,
It seems like the download of "compound_db.rds" failed; are you using a Windows computer? I've updated the package recently, so please retry fetching this file. If this fails again, I can send you the "compound_db.rds" file directly.
Cheers
username_3: Hi there,
I've been running into this same issue with curl. In my case, I'm trying to run the function `CrossReferencing(my_list_of_compounds, q.type = "name", hmdb = TRUE, pubchem = TRUE, chebi = TRUE, kegg = TRUE, metlin = TRUE)`.
I'm getting the same error message:
```
Error in download.file(lib.url, destfile = filenm, method = "curl") :
'curl' call had nonzero exit status
```
I'm also running R on a Windows machine, though, which I'm guessing is why the system call to curl is failing. You mentioned earlier that you had updated the package recently, but I downloaded MetaboAnalystR just this week using devtools (Option A on the GitHub page for MetaboAnalystR) and still ran into the problem.
Any ideas on how to get around this issue on Windows?
Thanks!
username_2: Hi,
I've not yet found the solution as to why this function works on some Windows platforms (tested on 2 Windows 10 laptops) and not on others. If you shoot me an email I can send you the necessary file.
username_3: Hi,
I was able to download the file that curl was trying to download (compound_db.rds) using a Linux server separately. Is that the file you mean? If so, what's the way to have CrossReferencing() use that file?
Thanks for your help!
username_4: I still have the same issue; could you help in solving it?
Thanks
username_2: According to https://cran.r-project.org/doc/manuals/r-patched/NEWS.pdf, download.file in R 3.5.2 should permit curl to work on Windows.
username_3: I think I figured out the problem. The version of Windows 10 my server is running is from before Windows 10 Version 1803, which it seems was when Windows started including curl natively (see this StackOverflow [comment](https://stackoverflow.com/questions/9507353/how-do-i-install-and-use-curl-on-windows/50200838#50200838)). Since .read.metaboanalyst.lib() eventually calls download.file() with method = "curl", it's giving a 127 status error when run on my Windows server.
My workaround was to use Windows Powershell manually outside of R. I moved to the directory where I was running the R code with MetaboAnalyst, then manually ran the curl calls for both the compound_db.rds and syn_nms.rds files:
```
PS C:\...\my_r_project> curl https://www.metaboanalyst.ca/resources/libs/compound_db.rds -o compound_db.rds
PS C:\...\my_r_project> curl https://www.metaboanalyst.ca/resources/libs/syn_nms.rds -o syn_nms.rds
```
These commands downloaded the files to my project directory, and now CrossReferencing() seems to be running fine from R.
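An equivalent shell-free route for machines without a native curl is to fetch the same two files with Python's standard library (a sketch; run it from the project directory):
```python
# download the two library files that .read.metaboanalyst.lib()
# would otherwise fetch through the missing curl binary
import urllib.request

for name in ("compound_db.rds", "syn_nms.rds"):
    url = "https://www.metaboanalyst.ca/resources/libs/" + name
    urllib.request.urlretrieve(url, name)
    print("saved", name)
```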
For others who may run into this in the future, the call stack when I ran into the problem was...
CrossReferencing() >> MetaboliteMappingExact() >> .read.metaboanalyst.lib() >> download.file()
username_4: Thanks a lot. Yes, I'm using 5.3.1; I'll update. I appreciate your help, and thanks for the super package you made. You are a genius.
username_5: Now it gets a 404 error code... |
icorn/MovieStarts | 121925349 | Title: Push notifications for favorites
Question:
username_0: - Use local notifications
- Can be switched on/off in the settings
- The day and time must be selectable in the settings (3 days before, 2 days before, the day before, on the release day, the day after)<issue_closed>
Status: Issue closed |
ktbyers/netmiko | 912112990 | Title: Netmiko/ How can I make my current code use Multithreading/concurrent.futures?
Question:
username_0: I am new to netmiko/Python scripting. Using online examples, I was able to make a script to take configuration backups. The backup is copied to a text file and the output is saved.
Currently this backup is done sequentially; it does not connect to all devices at once to take the backup. I want to connect to all the devices concurrently.
I understand multithreading or concurrent.futures can solve this issue, but I have not been able to do it so far.
Can anyone please suggest how my existing code can be modified to achieve it? Below is the code.
```python
from netmiko import ConnectHandler
from netmiko.ssh_exception import NetMikoTimeoutException
from paramiko.ssh_exception import SSHException
from netmiko.ssh_exception import AuthenticationException
import getpass
import sys
import time
import os
from datetime import datetime

## getting system date
day = time.strftime('%d')
month = time.strftime('%m')
year = time.strftime('%Y')
today = day + "-" + month + "-" + year

## initialising device
device = {
    'device_type': 'cisco_ios',
    'ip': '192.168.100.21',
    'username': 'Cisco',
    'password': '<PASSWORD>',
    'secret': '<PASSWORD>',
    'session_log': 'log.txt'
}

## opening IP file
ipfile = open("iplist.txt")
print("Script to take backup of devices, please enter your credentials")
device['username'] = input("username ")
device['password'] = getpass.getpass()
print("Enter enable password: ")
device['secret'] = getpass.getpass()

## taking backup
for line in ipfile:
    try:
        device['ip'] = line.strip("\n")
        print("\n\nConnecting to device ", line)
        net_connect = ConnectHandler(**device)
        net_connect.enable()
        time.sleep(1)
        with open('config.txt') as f:
            cmd = f.read().splitlines()
        print("Reading the running config ")
        output = net_connect.send_config_set(cmd)
        output4 = "Failed"
        time.sleep(7)
        filename = device['ip'] + '-' + today + ".txt"
        folder = os.path.join(today)
        file = os.path.join(folder, filename)
        os.makedirs(folder, exist_ok=True)
        saveconfig = open(file, 'w+')
        print("Writing configuration to file")
        saveconfig.write(output)
        saveconfig.close()
        time.sleep(10)
        net_connect.disconnect()
        print("Configuration saved to file", filename)
    except Exception:
        print("Access to ######" + device['ip'] + " failed, backup was not taken")
        output4 = "Failed"
        file = device['ip'] + '-' + today + "Error" + ".txt"
        config = open(file, 'w+')
        config.write(output4)
        config.close()

ipfile.close()
print("\nAll device backup completed")
```
Status: Issue closed
Answers:
username_1: Here are some examples using threads and processes:
https://github.com/twin-bridges/netmiko_course/tree/master/class8/collateral
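As a minimal sketch of the concurrent.futures approach (assuming the per-device try/except body from the script above is moved into a `backup_device(ip)` function that builds its own `device` dict per IP):
```python
# run the existing per-device backup logic across devices in parallel;
# backup_device(ip) is a hypothetical wrapper around the loop body above
from concurrent.futures import ThreadPoolExecutor, as_completed

with open("iplist.txt") as ipfile:
    ips = [line.strip() for line in ipfile if line.strip()]

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = {executor.submit(backup_device, ip): ip for ip in ips}
    for future in as_completed(futures):
        future.result()  # re-raise any exception from the worker thread
        print(futures[future], "backup complete")

print("\nAll device backup completed")
```
Note that each thread needs its own `device` dict (and its own `session_log` path); sharing one mutable dict across threads would mix up IPs.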
You might also want to check out Nornir which has Netmiko plugins available. |
Biovision/biovision-base | 234054657 | Title: Breadcrumbs in the admin panel
Question:
username_0: For more convenient navigation, breadcrumbs should be displayed in all sections of the admin panel.
- [ ] agents
- [ ] browsers
- [ ] codes
- [x] editable_pages
- [ ] metrics
- [x] privilege_groups
- [ ] privileges
- [ ] tokens
- [ ] users<issue_closed>
Status: Issue closed |
timrwood/gulp-consolidate | 63383979 | Title: How can I set the template engine from the file format?
Question:
username_0: The benefit of consolidate is the opportunity not to worry about template languages.
For example I have a gulp task:
```
gulp.src('./layouts/*.*')
.pipe(consolidate(???, data))
.pipe(extRename('html'))
.pipe(gulp.dest(...))
```
How can I set the template engine from the file format?
Answers:
username_1: Create an object to map file extensions to languages. Depending on how involved you want to get, you might consider using `gulp-tap`.
Are you wanting to use multiple template languages in the same task in the same project? That raises a red flag to me
username_1: Unless you're looking to create a consumable project, then it may be pretty nifty :)
Status: Issue closed
username_0: No, I don't use multiple languages in the same project. It was mainly about Markdown and Jade.
I think we can close it now. |
Molunerfinn/hexo-theme-melody | 849088630 | Title: [Feature Request] Darkmode, footer, meta (seo).
Question:
username_0: <!--
IMPORTANT: Please follow the template to create a new issue.
IMPORTANT: Do not ask questions like how to modify the theme or how to change the theme. I can't help you and you need to learn CSS & HTML & JS by yourself.
Or it will be closed!!
IMPORTANT: Please follow this template to submit your issue, and ask in English if possible, since not all users can read Chinese. Your question will also help others~
IMPORTANT: Please do not ask questions such as how to modify or change the theme. I don't have time to help you; you need to learn the relevant CSS & HTML & JS yourself.
Issues are for submitting bugs or proposing new features. Otherwise they will be closed!!
-->
## I want to create a new issue <!-- I want to create a new issue -->
<!-- Check all with "x", especially FAQ & Documentation!! (select with "x") -->
<!-- Please confirm you have read all of the following materials, especially the FAQ and the documentation!! -->
- [x] Yes, I have read [FAQ](https://github.com/Molunerfinn/hexo-theme-melody/blob/dev/FAQ.md).
- [x] Yes, I have read [Hexo Docs page](https://hexo.io/docs/), especially [Templates](https://hexo.io/docs/templates.html), [Variables](https://hexo.io/docs/variables.html), [Helpers](https://hexo.io/docs/helpers.html) and [Troubleshooting](https://hexo.io/docs/troubleshooting.html).
- [x] Yes, I have read [Hexo-theme-melody Documentation](https://molunerfinn.com/hexo-theme-melody-doc/).
- [x] And yes, I already searched for current [issues](https://github.com/Molunerfinn/hexo-theme-melody/issues?utf8=%E2%9C%93&q=is%3Aissue) and this did not help me.
## Feature Request
<!-- If you have any ideas of theme-melody, please write down here and we can have a discussion. -->
<!-- 如果你有任何关于theme-melody的功能方面的想法,可以在这个部分里写下来我们一起讨论 -->
1. Add a dark mode button/switch in the nav bar or footer option.
2. Option to add multiple footer URLs, example: https://gyazo.com/c6c492795cbdea620f0a396bd2effcab
3. Being able to add social icons in the footer would be a nice option.
4. Meta tags / social card preview, example:
A) current: https://gyazo.com/c1a08f8e8191445be9dd2d89ada35fd2
B) requested feature: https://gyazo.com/5f3b21a50f2d412e738ef1306775edb0
@myl7
Thank you.<issue_closed>
Status: Issue closed |
neurodata/hyppo | 1047720566 | Title: k-sample HSIC
Question:
username_0: Don't know if it is already included here, but dHSIC (i.e. k-sample HSIC) tests for joint independence of many distributions and is a critical method in causal discovery algorithms. See the paper here: [https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12235](https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssb.12235)
Status: Issue closed
Answers:
username_1: yep we have someone working on this: https://github.com/neurodata/hyppo/issues/104 |
ikedaosushi/tech-news | 567024669 | Title: GitHub publishes a Japanese translation of the Open Source Guides, consolidating OSS community best practices - Mado no Mori
Question:
username_0: GitHub publishes a Japanese translation of the "Open Source Guides", consolidating OSS community best practices - Mado no Mori (窓の杜)<br>
<br>
https://ift.tt/2V2GLdk |
dasilva333/TowerGhostForDestiny | 72196517 | Title: IOS: Enhancement Suggestion - Support 1Password for login
Question:
username_0: In IOS, logging into things could be simplified by allowing the browser to pull login information from the 1password app. This would make logging in significantly less painful.
(see https://blog.agilebits.com/2014/07/30/introducing-the-1password-app-extension-for-ios-8-apps/ for video and documentation )
Answers:
username_1: This is a Phonegap Build application that doesn't allow me to do any native things such as interacting with other applications; also, logging in on iOS is limited by the NSHTTPCookieAcceptPolicyAlways policy, which I can't change in the WebView. Closing this ticket as it is something I can't fix
Status: Issue closed
|
pandorabox-io/pandorabox.io | 476953212 | Title: Support/Maintenance
Question:
username_0: <img src="https://acimg.auctivacommerce.com/imgdata/0/2/0/4/2/7/webimg/4057844.jpg"/>
Due to the popularity of the server i find myself with less and less time on my hands.
But i think thats a good sign: pandorabox gets more popular!
With that i want to apologize:
**i can't do everything on the issue list.** :confused: (solely anyways)
So, if you have a bug, feel free to open a new issue but don't expect it to be resolved anytime soon.
If (by any chance) you have the technical skills to fix an issue, don't hesitate to open a PR or ask for write permissions on the repo, i don't have problems with giving access to people...
(Thanks to everybody that submitted patches/PR's by the way :smile:)
I don't plan to retire or anything, don't worry. I always invested a few hours a week for development and playing, but that time will not be enough to fix _all_ issues it seems...
Also: i don't want to be dictated into only fixing things:
<img src="http://blogs.elon.edu/gst336/files/2014/06/imgres.jpg"/>
Currently on my have-fun list is:
* Implementing some kind of 3D view on the mapserver
* Scaling the minetest-engine (less lag, more players)
* Building awesome stuff ingame
(this is just a head-up/fyi issue, i hope it gets understood as such...)
Answers:
username_1: I suggest we restructure the modding infrastructure **sanely**.
**The server has 22 forked mods**, and we have [mods](https://github.com/pandorabox-io/mobs_xenomorph) that override [forked mods'](https://github.com/pandorabox-io/mobs_scifi) behaviour (half of the stuff in the custom pandorabox repo shouldn't be stand-alone lua files).
**The server also has 37 mod repos**, and some of the mods are just useless since other mods provide better alternatives IMO
[Not implementing features is hard](http://weblogs.mozillazine.org/roc/archives/2010/06/not_implementin.html).
JWZ’s Law of Software Envelopment: “Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”
username_2: I would like to create a repository (perhaps called "pandorabox-mods" or simply "mods") holding references to all installed mods as git submodules. That would be more useful than [mods.md](https://github.com/pandorabox-io/pandorabox.io/blob/master/doc/mods.md) since it would always be up-to-date and reference the exact commits. As far as I can tell, the version of fancy_vend on the server is not using the upstream repo or the master or pandorabox branches of the forked repo because all three branches have some changes (the customer view) that don’t seem to be on the server. If there are mods with no git repo (the URL for the atm mod was dead when I looked for it), they could be added on the pandorabox-io org.
It’d be possible to reproduce the same version of mods running on the server with `git clone --recurse-submodules`, to identify the change that causes a performance issue by testing the set of mods running at a previous point in time, and to test changes before making them live, with less chance of using the wrong version of the mod or forgetting to update something. I’d like to use that.
username_0: @username_1 Yes, there is a mess right now :)
I reduced the stray repos from 64 to 44 but i have still some cleanup to do: #276
@username_2 you should have an invitation
Status: Issue closed
username_0: closing this: performance is good now (not perfect though), all mods and the engine-modifications are open-source and can be extended if there is an enhancement.
Next issue will be free disk-space, but https://github.com/username_0-mt/mtpurge should take care of that when it is ready... |
ProyectoINCAN/Proyecto | 283062171 | Title: Medications
Question:
username_0: 1. Medications ---- José (done)
1.1. Check why it does not save a medication (verify the complete medication flow; verify quantity and date validation).
1.2. Verify the complete flow for medication types (verify all corresponding validations).
1.3. Add a large "enabled" checkbox to medication
1.4. Verify the response when saving a medication type (it responds with success true).<issue_closed>
Status: Issue closed |
isthatjack/streets | 889556258 | Title: isuee....
Question:
username_0: you smell like poopoo LMOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO |
Senior-Design-0x07/the-hobby-hub | 841023046 | Title: Change test_programs folder name to programs
Question:
username_0: # Overview
<!--
- What is the general idea of the task?
- Any relevant background information that would be useful to the assignee?
- Why is this task important?
-->
"test_programs" is a misleading name going forward.
# Goals
<!--
- What specific features should be added?
- Are there any modifications needed to existing code?
- Is this a bug or feature request?
-->
# Helpful Info
<!--
- As the task creator is there any starting advice you can give the assignee?
- Any libraries or relevant links that might be useful?
- Setup instructions?
- Info on how integration is performed with another subsystem?
-->
Wait until the command line utility is merged in.
Wait until the pin manager backend is merged in.
# Verification Criteria
<!--
- how does the assignee test their code to make sure it meets the task requirements?
-->
Make sure the command line utility still works, the program manager still works, and the pin manager still works.
# Task Advisor
<!--
- whomever the assignee should go to for questions related to this task
--> |
TaskForce47/Liberation | 200493175 | Title: Whitelist/Blacklist Overhaul
Question:
username_0: In gitlab by @mindbl4ster on Jul 21, 2016, 01:21
Link the internal permission system with the TF47 whitelist and put certain douchebags on a blacklist!
Answers:
username_0: In gitlab by @Crewt on Jul 22, 2016, 02:51
Should we put this on ice until more is known about the re-creation of the whitelists?
Status: Issue closed
|
linkml/linkml | 1012662433 | Title: Document attributes behavior and gather use cases
Question:
username_0: Currently our FAQ entry on attributes vs slots is a little vague:
https://linkml.io/linkml/faq/modeling.html#when-should-i-use-attributes-vs-slots
In fact there are nuances in use of attributes. Consider:
```yaml
classes:
C:
attributes:
a:
D:
attributes:
a:
```
The intent of the modeler is not clear here:
- convenient shorthand for not having to use a separate "slots" declaration, "a" is the same slot
- The two "a"s have nothing to do with one another, the purpose of attributes is to insulate
in the above, the internal representation is as follows:
```yaml
slots:
c1__a:
name: c1__a
from_schema: https://w3id.org/linkml/examples/personinfo
range: my_str
slot_uri: https://w3id.org/linkml/examples/personinfo/a
alias: a
owner: C1
domain_of:
- C1
c2__a:
name: c2__a
from_schema: https://w3id.org/linkml/examples/personinfo
range: my_str
slot_uri: https://w3id.org/linkml/examples/personinfo/a
alias: a
owner: C2
domain_of:
- C2
classes:
C1:
name: C1
definition_uri: https://w3id.org/linkml/examples/personinfo/C1
from_schema: https://w3id.org/linkml/examples/personinfo
slots:
- c1__a
attributes:
a:
name: a
class_uri: https://w3id.org/linkml/examples/personinfo/C1
C2:
name: C2
definition_uri: https://w3id.org/linkml/examples/personinfo/C2
from_schema: https://w3id.org/linkml/examples/personinfo
slots:
- c2__a
[Truncated]
```ttl
<https://w3id.org/linkml/examples/personinfo/c1__a> a linkml:SlotDefinition ;
skos:inScheme <https://w3id.org/linkml/examples/personinfo> ;
linkml:alias "a" ;
linkml:domain_of <https://w3id.org/linkml/examples/personinfo/C1> ;
linkml:owner <https://w3id.org/linkml/examples/personinfo/C1> ;
linkml:range <https://w3id.org/linkml/examples/personinfo/my_str> ;
linkml:slot_uri <https://w3id.org/linkml/examples/personinfo/a> .
<https://w3id.org/linkml/examples/personinfo/c2__a> a linkml:SlotDefinition ;
skos:inScheme <https://w3id.org/linkml/examples/personinfo> ;
linkml:alias "a" ;
linkml:domain_of <https://w3id.org/linkml/examples/personinfo/C2> ;
linkml:owner <https://w3id.org/linkml/examples/personinfo/C2> ;
linkml:range <https://w3id.org/linkml/examples/personinfo/my_str> ;
linkml:slot_uri <https://w3id.org/linkml/examples/personinfo/a> .
```
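For what it's worth, the two attribute-derived slots can also be inspected programmatically; a hedged sketch, assuming linkml-runtime's `SchemaView` API and a hypothetical `personinfo.yaml` containing the schema above:
```python
# each class resolves its own "a": SchemaView materializes two separate
# induced slots rather than one shared definition
from linkml_runtime.utils.schemaview import SchemaView

sv = SchemaView("personinfo.yaml")  # hypothetical path to the schema above
a_in_c1 = sv.induced_slot("a", "C1")
a_in_c2 = sv.induced_slot("a", "C2")
print(a_in_c1.owner, a_in_c2.owner)  # C1 and C2: two distinct slots
```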
[in progress, will edit more later] |
MGHollander/home-assistant-config | 831520366 | Title: Use packages as intended
Question:
username_0: Bundle all addon_version config in on base file
Example: https://github.com/klaasnicolaas/Student-homeassistant-config/blob/master/integrations/system_info.yaml
Answers:
username_0: I've made a package for my myStrom button, but decided to leave the addon_version config as is, because I would like to be able to change the addon_version automations via the UI.
Status: Issue closed
|
HERA-Team/hera_qm | 467469435 | Title: implement IERS testing fix used in pyuvdata
Question:
username_0: conftest.py lines 30-35 (we'll need to add a conftest).
In the testing init file: filter the IERS warning (are we just using pyuvdata's testing for checking warnings?).
To test: clear the astropy cache, turn off the internet, and run.
We should get a bunch of extra warnings (ones that aren't checked), but the tests should pass.
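For reference, the pyuvdata-style fix is roughly the following conftest fixture; this is a sketch assuming astropy's `iers.conf` settings, where `auto_max_age = None` stops runs from erroring when the IERS table can't be refreshed offline:
```python
# conftest.py: session-scoped fixture mirroring the pyuvdata approach
import pytest
from astropy.utils import iers

@pytest.fixture(autouse=True, scope="session")
def setup_and_teardown_package():
    # allow stale IERS tables so offline test runs don't raise
    iers.conf.auto_max_age = None
    yield
    iers.conf.auto_max_age = 30.0  # restore astropy's default
```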
Status: Issue closed
Answers:
username_0: This was addressed in #291 |
cibernox/ember-basic-dropdown | 510175578 | Title: Add back eventHandler arguments in V2
Question:
username_0: V2 removed the various event handler arguments (`onMouseEnter`, `onFocusOut` etc) in favour of the `{{on ...}}` element modifier.
This was to better align the add-on with Octane patterns.
However, if an end-user extends `ember-basic-dropdown` to add extra default behaviour for some internal dropdown component, there is a problem insofar as element modifiers are not usable with the `(component)` helper, and the extending component must yield out the `Content` & `Trigger` components in the same way as `ember-basic-dropdown` does: https://github.com/username_1/ember-basic-dropdown/blob/master/addon/templates/components/basic-dropdown.hbs
The problem is further exacerbated by attempting to upgrade to Ember 3.13, where `ember-basic-dropdown` is required because of various deprecations in Ember.
This particular issue seems to be one that is being addressed via an RFC (https://github.com/emberjs/rfcs/issues/497), but until then, this leaves end users such as myself stuck not being able to upgrade to Ember 3.13.
As such, I'd like to propose adding back the arguments for the various event handlers, which can be passed through to various `{{on ...}}` modifiers on `Content` & `Trigger, something like:
```
<Element
id={{this.dropdownId}}
class="ember-basic-dropdown-content ember-basic-dropdown-content--{{@hPosition}} ember-basic-dropdown-content--{{@vPosition}} {{this.animationClass}}{{if @renderInPlace " ember-basic-dropdown-content--in-place"}} {{@defaultClass}}"
...
{{on 'focusout' (optional @onFocusOut)}}
{{on 'mouseleave' (optional @onMouseLeave)}}>
{{yield}}
</Element>
```
This could temporarily be added back in `ember-basic-dropdown` V2 and later removed again in an `ember-basic-dropdown` V3 release when the previously mentioned RFC is landed in Ember.
Happy to help out with the implementation if you agree! 👍
Answers:
username_1: I can certainly consider it, however I'd like to better understand how are you using the `(component)` keyword. I believe that there should be a way today of passing element modifiers if you combine it with `{{let}}`.
username_0: Yeah, I was discussing this on Discord today and the `{{let}}` workaround was mentioned, however, because our component essentially extends the `ember-basic-dropdown` component, the template must yield out an instance of the `Content` & `Trigger` components, essentially copying the `ember-basic-dropdown.hbs` template - here's an overview of our use case:
`components/basic-dropdown-styled.js`
```js
import EmberBasicDropdown from 'ember-basic-dropdown/components/basic-dropdown';
import layout from 'templates/components/basic-dropdown-styled';
export default EmberBasicDropdown.extend({
//Custom component default overrides, eventHandlers etc
})
```
`templates/components/basic-dropdown-styled.hbs`
```hbs
{{yield (hash
uniqueId=this.publicAPI.uniqueId
isOpen=this.publicAPI.isOpen
disabled=this.publicAPI.disabled
actions=this.publicAPI.actions
Trigger=(component "basic-dropdown-trigger"
dropdown=(readonly this.publicAPI)
hPosition=(readonly this.hPosition)
renderInPlace=(readonly this.renderInPlace)
vPosition=(readonly this.vPosition)
onMouseDown=(action this._prevent)
onMouseEnter=(action this._open)
onTouchEnd=(action this._open)
onMouseLeave=(action this._close)
)
Content=(component "basic-dropdown-content"
dropdown=(readonly this.publicAPI)
hPosition=(readonly this.hPosition)
renderInPlace=(readonly this.renderInPlace)
vPosition=(readonly this.vPosition)
destination=(readonly this.destination)
rootEventType=(readonly this.rootEventType)
top=(readonly this.top)
left=(readonly this.left)
right=(readonly this.right)
width=(readonly this.width)
height=(readonly this.height)
onMouseEnter=(action this._open)
onMouseLeave=(action this._close)
onTouchEnd=(action this._open)
onFocusOut=(action this._test)
)
)}}
```
(Notice the added event handlers in the above)
Which allows us to have the same usage as `ember-basic-dropdown`, but with a default set of event handler behaviour defined for our component:
```hbs
<BasicDropdownStyled as |dd|>
<dd.Trigger>Button here</dd.Trigger>
<dd.Content>Content here</dd.Content>
</BasicDropdownStyled>
```
username_0: Cool, I'll likely take a first pass at this tomorrow and submit a PR 👍
Status: Issue closed
|
fuankarion/active-speakers-context | 955855753 | Title: Code for generating face tracking csv file
Question:
username_0: Hi,
Thanks a lot for sharing your model and your code. Your paper is great!
I intend to perform active speaker detection on a bunch of videos.
I just need to apply your pre-trained models, not to train the model.
My understanding is that, in order to do active speaker detection on a video, I first have to perform the face tracking on this video and generate a csv file with similar tracking data as the ones provided with the AVA dataset.
I did not find any face tracking script in the code. Could you provide this script (or tell me where it is in the code if I missed it) ?
More generally could you provide all the missing scripts or give guidance and pointers to perform active speaker detection on any video ?
Thanks in advance. |
introlab/rtabmap | 664276066 | Title: Rtabmap with emulated ZED Crash
Question:
username_0: Hi,
I'm using only the depth point cloud of the simulated ZED on the robot. It was going well until this error occurred:
```
[ INFO] [1595490914.120097131, 1542.035000000]: Assembled 1 obstacle and 0 ground clouds (12235 points, 0.001679s)
[ INFO] [1595490914.120949802, 1542.035000000]: rtabmap (72): Rate=0.33s, Limit=0.000s, RTAB-Map=0.2694s, Maps update=0.0035s pub=0.0025s (local map=53, WM=53)
[ INFO] [1595490914.303735809, 1542.212000000]: Odom: ratio=0.331479, std dev=0.046505m|0.046505rad, update time=0.448586s
[FATAL] (2020-07-23 09:55:15.214) LaserScan.cpp:241::LaserScan() Condition (data.channels() != 3 || (data.channels() == 3 && (format == kXYZ || format == kXYI))) not met! [format=XYZRGB]
terminate called after throwing an instance of 'UException'
what(): [FATAL] (2020-07-23 09:55:15.214) LaserScan.cpp:241::LaserScan() Condition (data.channels() != 3 || (data.channels() == 3 && (format == kXYZ || format == kXYI))) not met! [format=XYZRGB]
[ INFO] [1595490915.395193399, 1543.196000000]: Odom: ratio=0.349150, std dev=0.041732m|0.041732rad, update time=0.436971s
```
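For reference, a plain-Python rendering of the failing check from `LaserScan.cpp:241` in the log above (illustration only) shows why it fires for this cloud:

```python
# Rendering of the LaserScan.cpp:241 assertion from the log above.
# A 3-channel scan is only accepted when its format is XYZ or XYI, so a
# cloud labeled XYZRGB (which needs an extra channel for colour) fails
# the check when it arrives packed as 3 channels.
def laser_scan_condition_ok(channels: int, fmt: str) -> bool:
    return channels != 3 or (channels == 3 and fmt in ("kXYZ", "kXYI"))

print(laser_scan_condition_ok(3, "kXYZ"))     # True  -> accepted
print(laser_scan_condition_ok(3, "kXYZRGB"))  # False -> the crash above
print(laser_scan_condition_ok(4, "kXYZRGB"))  # True  -> accepted
```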
Here is the launch file used
```
<?xml version="1.0"?>
<launch>
<arg name="use_imu" default="true"/> <!-- Assuming IMU fixed to lidar with /velodyne -> /imu_link TF -->
<arg name="imu_topic" default="imu/data"/>
<arg name="scan_20_hz" default="false"/> <!-- If we launch the velodyne with "rpm:=1200" argument -->
<arg name="use_sim_time" default="false"/>
<param if="$(arg use_sim_time)" name="use_sim_time" value="true"/>
<arg name="frame_id" default="base_link_stabilized"/>
<node pkg="tf" type="static_transform_publisher" name="static" args="0 0 0 0 0 0 /odom_combined /base_link_stabilized 100"/>
<group ns="rtabmap">
<node pkg="rtabmap_ros" type="icp_odometry" name="icp_odometry" output="screen">
<remap from="scan_cloud" to="/zed/depth/points"/>
<param name="publish_tf" type="string" value="true"/>
<param name="odom_frame" type="string" value="odom"/>
<param name="wait_for_transform" type="string" value="true"/>
<param name="guess_frame_id" type="string" value="odom_combined"/>
<remap if="$(arg use_imu)" from="imu" to="$(arg imu_topic)"/>
<!-- ICP parameters -->
<param name="Icp/PointToPlane" type="string" value="true"/>
<param name="Icp/Iterations" type="string" value="10"/>
<param name="Icp/VoxelSize" type="string" value="0.1"/>
<param name="Icp/DownsamplingStep" type="string" value="1"/> <!-- cannot be increased with ring-like lidar -->
<param name="Icp/Epsilon" type="string" value="0.001"/>
<param name="Icp/PointToPlaneK" type="string" value="20"/>
<param name="Icp/PointToPlaneRadius" type="string" value="0"/>
<param name="Icp/MaxTranslation" type="string" value="2"/>
<param name="Icp/MaxCorrespondenceDistance" type="string" value="1"/>
<param name="Icp/PM" type="string" value="true"/>
<param name="Icp/PMOutlierRatio" type="string" value="0.7"/>
<param name="Icp/CorrespondenceRatio" type="string" value="0.01"/>
<!-- Odom parameters -->
<param name="Odom/ScanKeyFrameThr" type="string" value="0.9"/>
<param name="Odom/Strategy" type="string" value="0"/>
<param name="OdomF2M/ScanSubtractRadius" type="string" value="0.2"/>
<param name="OdomF2M/ScanMaxSize" type="string" value="15000"/>
</node>
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen" args="-d">
[Truncated]
<param name="Mem/STMSize" type="string" value="30"/>
<!-- param name="Mem/LaserScanVoxelSize" type="string" value="0.1"/ -->
<!-- param name="Mem/LaserScanNormalK" type="string" value="10"/ -->
<!-- param name="Mem/LaserScanRadius" type="string" value="0"/ -->
<param name="Reg/Strategy" type="string" value="1"/>
<param name="Grid/FromDepth" type="string" value="false"/>
<param name="Grid/CellSize" type="string" value="0.10"/>
<param name="Grid/RangeMax" type="string" value="10"/>
<param name="Grid/ClusterRadius" type="string" value="1"/>
<param name="Grid/GroundIsObstacle" type="string" value="false"/>
<param name="Grid/RayTracing" type="string" value="true"/>
</node>
</group>
</launch>
```
Should I force the laser scan subscription parameter to false, or add it?
Answers:
username_0:
```
rtabmap --version
RTAB-Map: 0.19.6
PCL: 1.9.1
With VTK: 7.1.1
OpenCV: 3.4.8
With OpenCV nonfree: false
With ORB OcTree: true
With FastCV: false
With Madgwick: true
With TORO: true
With g2o: false
With GTSAM: false
With Vertigo: true
With CVSBA: false
With Ceres: false
With OpenNI2: true
With Freenect: true
With Freenect2: false
With K4W2: false
With DC1394: true
With FlyCapture2: false
With ZED: false
With RealSense: false
With RealSense SLAM: false
With RealSense2: false
With libpointmatcher: false
With octomap: true
With cpu-tsdf: false
With open chisel: false
With Alice Vision: false
With LOAM: false
With FOVIS: false
With Viso2: false
With DVO: false
With ORB_SLAM2: false
With OKVIS: false
With MSCKF_VIO: false
With VINS-Fusion: false
```
username_1: The newly released binary version is 0.20.0. You may give it a try.
`sudo apt install ros-[rosdistro]-rtabmap-ros` |
abs-lang/abs | 440789271 | Title: Consider to add an abs package manager
Question:
username_0: Hello,
first and foremost, ABS seems fun and practical to code with; thanks for that.
By experience, language with a good associated package manager are easier to adopt for obvious reasons, that's why I would love to code, package and reuse some libraries made with ABS.
It's also something really missing with Bash.
What do you think ?
My 2 cents,
Answers:
username_1: Definitely on the wishlist. The main obstacle right now is defining the structure of the package manager, the commands it can run etc. I am inclined to implement something extremely simple:
* `abs get github.com/user/pkg`
* Installs in a local `vendor` folder by cloning the repo
* then you can require `vendor/user/pkg`
Later on one can add support for lockfiles, sources other than git repos, etc., but as a first step I think this could be something.
@username_2 your input is appreciated as usual :)
username_0: Awesome.
While we're on the news (the Microsoft GitHub acquisition), it reminds me that the package path should be normalized (lowercased, etc.) in order to avoid a possible mess if a repo is renamed.
My 2 cents,
username_2: You can also write a short ABS snippet that tests for the existence of your library package directory (git or otherwise) and then either `clone a repo` or `download and untar an archive` as needed. This can be coded in a few lines as part of your `~/.absrc file` where you would then `source(file)` the file(s) containing the library functions and environment variables into the running ABS environment.
This will give you total control over how you package and distribute all your ABS, BASH, or other shell library functions and does not require you to adopt an ABS-specific distribution model.
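As a rough illustration of that check-then-fetch flow (sketched in Python here rather than ABS, with placeholder paths and repo URL), the logic is just:

```python
import os
import subprocess

# Placeholder sketch: clone a library repo into a vendor folder unless it
# is already there; the real version would live in ~/.absrc as ABS code.
def ensure_package(vendor_dir: str, repo_url: str) -> str:
    name = repo_url.rstrip("/").rsplit("/", 1)[-1]
    target = os.path.join(vendor_dir, name)
    if not os.path.isdir(target):
        subprocess.run(["git", "clone", repo_url, target], check=True)
    return target  # callers can then source()/require() files from here

ensure_package("vendor", "https://github.com/user/pkg")  # hypothetical repo
```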
username_1: The downside with the snippet is that it adds a bit of a barrier for people who are not familiar with ABS itself. I think `abs get github.com/user/package` could work well and be minimal enough.
username_1: I would like to get this done before introducing try / catch, so that we can start writing part of the std library as an "external" module, eg:
```
errors = require("errors")
return errors.new(xyz)
```
I think the basic feature that a package manager would have to have:
* `abs install github.com/user/repo`
* clone the repo in a `./vendor` folder (and I'm ok if we just support Git / GH for now)
* update the `require` function to be able to distinguish between 3rd party packages and built-in modules (standard library)
Once this is done, we can start thinking about try / catch, built-in structure like mutexes / semaphores etc.
username_3: We need to think about a way to distinguish external modules when we require them.
E.g.
```
errors = require("errors")
return errors.new(xyz)
```
Here "errors" could be a local file in the same directory, or an external module in the `./vendor` folder, or a built-in module. Maybe we should enforce the use of `require(./vendor/user/pkg)` for external modules?
username_1: Multiple ways to tackle this but I haven't thought of the best one in my head (all following examples are vendor vs local package):
* `package` vs `./package`
* `@package` vs `package`
* `package` vs `github.com/user/package`
* `:package` vs `package`
My slight bias would be towards option 1, as I'm used to it. Option 3 (like golang does) is also not a bad way to think about it but might be overkill, and it's definitely less pragmatic / more verbose. Since we want to make sure ABS doesn't really force users to type that much, I'd lean on no.1 for now. Something to think about is naming collision, eg. if I `abs install github.com/user/util` and then `abs install github.com/another_user/util`, there should be a way to avoid this -- here my suggestion would be to allow users to specify the full path when package names collide. And in the future we can think of a `.lock` file that we use to sort these collisions out, where the file looks like:
``` json
[
{"package": "github.com/user/util", "version": "xyz", "alias": "util"}
]
```
and when you try to `abs install github.com/another_user/util` the CLI tells you "there is another package aliased to `util`, please pick a new alias for this package or press enter to overwrite the dependency". Fun stuff we can think about later on... :smile:
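A sketch of that alias-collision check against the lockfile shape above (Python for illustration only; the helper name is hypothetical):

```python
import json

# Hypothetical helper mirroring the CLI behaviour described above: refuse
# to install a package whose default alias is already claimed by another
# source in the lockfile.
def check_alias(lockfile_path: str, package: str) -> None:
    with open(lockfile_path) as f:
        entries = json.load(f)
    alias = package.rsplit("/", 1)[-1]
    for entry in entries:
        if entry["alias"] == alias and entry["package"] != package:
            raise SystemExit(
                f"there is another package aliased to `{alias}`; "
                "please pick a new alias or overwrite the dependency"
            )
```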
Also one of the things I think we need to fix is scoping in modules. If a module sets a variable we shouldn't have that variable set at global level. This does not necessarily have to be fixed in this PR but, if you want, it's a nice to have. Things we would need to change are:
* differentiate how we do [source vs require](https://github.com/abs-lang/abs/blob/master/evaluator/functions.go#L324-L332): source can pollute the global environment (like in a shell) but require shouldn't
* fix the [docs](https://www.abs-lang.org/types/builtin-function)
* replace the [global environment for require calls](https://github.com/abs-lang/abs/blob/master/evaluator/functions.go#L1614): we instead need to use an [enclosed environment](https://github.com/abs-lang/abs/blob/master/object/environment.go#L12-L16), which is what we do when a function `f() { ... }` [is called](https://github.com/abs-lang/abs/blob/cc2191ee9911c045dffda8a58cbb2b9cdef59b2e/evaluator/evaluator.go#L1046) (the function inherits the environment but cannot modify the global one eg. set new variables)
Let me know if you'd like me to have a look at this instead. It might get too beefy to work on the package manager as well as differentiating source/require at the same time.
Status: Issue closed
|
phpbrew/phpbrew | 103737863 | Title: opcode error with composer and phpunit
Question:
username_0: I had PHP 5.5.28 installed on my Ubuntu machine with everything working as expected.
I decided to start using phpbrew to make upgrading to PHP 5.6 easier. To start with I just installed the same version of PHP that I have on my machine: `phpbrew install 5.5.28 +default+dbs+debug+apxs2` (I also installed 5.6.12, with xdebug, and got the same errors).
After doing so everything worked great with 2 exceptions: phpunit and composer. Both commands run to completion but right at the end I get fatal opcode errors which cause issues with the CI server.
As an example I get the following at the end of running phpunit (4.6.10):
```
PHP Fatal error: Invalid opcode 65/16/8. in phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php on line 0
Fatal error: Invalid opcode 65/16/8. in phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php on line 0
```
Let me know if you need any further details. Any help would be greatly appreciated. Thanks
Answers:
username_1: Did you install eaccelerator or something similar?
Status: Issue closed
username_0: Hey username_1.
eaccellerator isn't installed. I did install uopz and decided to remove that temporarily and the fatal errors went away so it looks to be an issue with that and not phpbrew.
Thanks for the response and I'll just go ahead and close the issue.
username_1: I think you may be referring to krakjoe/uopz#19, which has an explanation of this fatal error.
18F/handbook | 916583206 | Title: Run Prettier on Handbook code (existing and future)
Question:
username_0: I don't think it would be a terrible amount of work to add Prettier as a check on all PRs via GitHub Actions and run Prettier on all existing Markdown in the project. Doing so would standardize the files, thereby easing any potential burden on new contributors (and be a good aesthetic choice).
Status: Issue closed
Answers:
username_1: Good job! |
helm/helm-www | 1042705719 | Title: Helm dependency documentation omits information on pre-release dependencies
Question:
username_0: The docs for [Helm Dependency](https://helm.sh/docs/helm/helm_dependency/#helm) do not link to the docs for dependency [best practices](https://helm.sh/docs/chart_best_practices/dependencies/), which covers the use of range.
Additionally, the best practices documentation does not cover the inclusion of pre-release charts as dependencies, which is extremely useful for CI/CD pipelines.
Answers:
username_1: We generated those specific doc files via this process: https://github.com/helm/helm-www#updating-the-helm-cli-reference-docs
So the place to edit that file is here:
https://github.com/helm/helm/blob/main/cmd/helm/dependency.go#L28-L74
Unfortunately we cannot edit this doc directly in the helm-www repo, but we would appreciate your changes in the helm code repo! Thanks. |
rsgalloway/pyseq | 461842780 | Title: Issues with strict padding
Question:
username_0: Hi.
I posted a couple of hours ago, in the closed issue #41, about a problem I was having with strict padding.
I tried to handle my issue by turning strict padding off, as it is more likely (at least in my case) that I would run into a sequence without padding (`1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...`) than into multiple files with the same frame number but different padding (`file.1.jpg, file.01.jpg`).
It seemed at first to solve the issue, however this caused some issues with padding and sorting:
```pyhton
import pyseq
pyseq.strict_pad = False
sequence = pyseq.get_sequences(['file.7.jpg', 'file.8.jpg', 'file.9.jpg', 'file.10.jpg', 'file.11.jpg', 'file.12.jpg'])[0]
print sequence[0]
# Prints: file.10.jpg
print sequence._get_padding()
# Prints: %02d
```
Now I can't really tell if the Sequence object is meant to be sorted. It has `append` and `insert` methods, so I guess it's not meant to be; however, `_get_padding()` uses item 0 to determine the padding. That works if every item has the same padding (as with strict padding), but breaks in this case where padding isn't strict.
It sort of brings back what I was asking in #41: whether it might make sense to add a padding property to the sequence, as well as tracking whether it's an inferred or explicit padding (the smallest digit count if no item has zero-padding, becoming explicit once a zero-padded item is added).
I will do a test implementation to see if it still passes unittests and see if it solves my issues in production.
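To make the idea concrete, here is roughly the behaviour I have in mind (a standalone sketch, not the actual pyseq internals):

```python
import re

# Padding is *explicit* as soon as any frame string carries a leading
# zero; otherwise it stays *inferred* and the sequence accepts any width.
def sequence_padding(frames):
    digits = [re.search(r"(\d+)\.\w+$", name).group(1) for name in frames]
    explicit = any(d != str(int(d)) for d in digits)  # any leading zeros?
    if explicit:
        width = min(len(d) for d in digits)
        return "%%0%dd" % width, True   # e.g. '%02d', explicit
    return "%d", False                   # inferred: no zero-padded members

print(sequence_padding(["file.7.jpg", "file.8.jpg", "file.10.jpg"]))
# ('%d', False)
print(sequence_padding(["file.07.jpg", "file.08.jpg", "file.10.jpg"]))
# ('%02d', True)
```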
Answers:
username_0: Did a possible implementation of that.
It's probably not perfect, but I would be curious to hear feedback on it. See merge request #61.
username_0: In hindsight, this was probably overcomplicated. It would be nice though if pyseq understood non-padded sequences as in the example above.
username_1: I can reproduce this issue, thanks for reporting it and apologies for the delay. Also, thanks for taking a stab at a fix ;)
It escaped detection originally because the non-padded example tests/files/z1_002_v1.%d.png only includes 4 single digit frame numbers :/
I have a branch with a potential fix here with a unit test, please test it and let me know if it addresses the issue for you:
https://github.com/username_1/pyseq/tree/issue-60
I've also added a -s option to lss for convenience:
```
$ for i in {1..12}; do touch test.$i.exr; done
$ lss
9 test.%d.exr [1-9]
3 test.%02d.exr [10-12]
$ lss -s
12 test.%d.exr [1-12]
```
username_0:
```
... print frame
...
file.7.jpg
file.8.jpg
file.9.jpg
file.10.jpg
file.11.jpg
file.12.jpg
file.87.jpg
```
username_0: It also looks like maybe you didn't push the code? The branch you linked is the same as master and is a few months old.
username_1: Ah, sorry about that. Should be there now.
username_1: The fix in that branch may be too simple. I did some more testing and it only works if one of the frames is a single digit. I'll dig deeper.
username_0: Are sequences meant to be sorted? The only sorting I'm seeing is in the `get_sequences` method, so maybe not. Sorting and then getting the padding from the first element does produce more consistent results, although it's impossible to tell for a sequence that contains 999 and 1000 if it was %03d, %02d, or %d.
username_1: I pushed another update to the [issue-60](https://github.com/username_1/pyseq/tree/issue-60) branch that hopefully addresses this issue. Give it another test and let me know. I'd like to do a review and merge to master.
There's no sorting during the sequence building process (yet) for performance. Sorting could slow things down and performance is key when you're dealing with 1000s of files. You can always sort the sequence at the end. May look into ordered lists at some point.
username_1: I'm wondering if strict padding should be disabled by default in lss since this issue will be manifest in cases where files have no pad.
```
$ for i in {1..1200}; do touch test.$i.exr; done
$ lss
9 test.%d.exr [1-9]
90 test.%d.exr [10-99]
900 test.%d.exr [100-999]
201 test.%d.exr [1000-1200]
```
with strict pad disabled:
```
$ lss -s
1200 test.%d.exr [1-1200]
```
and it works with padded frames too:
```
$ for i in $(seq -f "%05g" 1 1200); do touch test.$i.exr; done
$ lss -s
1200 test.%05d.exr [1-1200]
```
username_0: I was thinking strict padding should still allow sequences with no padding, but that gives nearly the same outcome.
username_1: I'm not sure that makes sense. By definition strict padding requires the pad lengths to match, literally the number of chars in the frame string must be equal. Otherwise, there's no way to tell for example if test.10000.exr is in the same seq as test.00001.exr. They might be, but if test.9999.exr also exists then you will get different results with and without strict padding.
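Concretely (illustration only), strict matching is a character-length comparison on the frame strings:

```python
# Under strict padding two frames can only join the same sequence when
# their frame strings are the same length, which is what removes the
# ambiguity described above.
def strict_match(frame_a: str, frame_b: str) -> bool:
    return len(frame_a) == len(frame_b)

print(strict_match("00001", "10000"))  # True  -> same sequence
print(strict_match("9999", "10000"))   # False -> split under strict padding
```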
username_0: You're right. I guess it's more about allowing mixed padding within the same sequence or not. I've had very few cases (2 to be exact) where I've received badly formatted sequences where a few frames weren't padded properly.
I've had a lot more cases where I have received non-padded sequences, or sequences where the numbers passed the padding (for example, 2-padded sequences reaching the 100s).
I ended up turning strict padding off because I never really needed it on, as I've not yet encountered a folder containing 2 sequences with the same name but different padding. I can see it could happen, but so far I haven't seen it.
username_1: I've posted a potential fix here: https://github.com/username_1/pyseq/pull/63
cc: @username_3 @username_2
username_2: Need to test here, but as we are mostly running with strict padding anyway, this should not pose a problem...
Thanks for giving the heads-up.
username_3: Nothing pops out to me as problematic in your PR. We're running with strict_pad through the interface so there shouldn't be any issues.
I would also add that `get_sequences` (and `iget_sequences`) do sort, but since it's a standard string sort instead of a natural sort, what they are seeing is expected.
username_1: OK, thanks. Unless there are any concerns I'm going to merge #63 later today.
Status: Issue closed
username_1: this has been tagged and released |
cloudfour/cloudfour.com-patterns | 929671473 | Title: Redesign article subscription interface
Question:
username_0: While reviewing @username_1's work on #1094, we ran into some interesting challenges relating to the interaction between visual control labels and audible control labels. This was summarized nicely by @username_1 here: https://github.com/cloudfour/cloudfour.com-patterns/pull/1318#issuecomment-867210632
We realized that there may be an opportunity to redesign that area in order to accomplish two main goals:
- Make the action to be taken… enabling or disabling notifications… clear to users of all devices without relying on wholly separate labels. This will be in keeping with our commitment to both responsiveness and accessibility.
- Perhaps make the interface element as a whole easier to use in both the current article context and in the article listing, which currently _only_ supports email notifications.
As part of a voice conversation (which also included @dromo77, @AriannaChau and @haysjoey), @username_1 shared a link to an example notification UI from [Inclusive Components](https://inclusive-components.design/toggle-button/):
<img width="848" alt="Screen Shot 2021-06-24 at 3 45 45 PM" src="https://user-images.githubusercontent.com/69633/123341698-4558d580-d503-11eb-9380-f84f56a8e4be.png">
Other ideas included more explanatory copy beforehand.
Outcomes of this issue should be a prototype or wireframes reviewed by me for creative direction and @username_1 for accessibility.
Answers:
username_1: I haven't taken the time to verify the UX, but I noticed Github has a notifications button. Perhaps we can learn something from how they handle it?
<img width="706" alt="Screen Shot 2021-07-21 at 10 53 56 AM" src="https://user-images.githubusercontent.com/459757/126536663-eb5b2681-4df9-4c48-aa54-a7aca91e3482.png"> |
AtlasOfLivingAustralia/data-management | 264424140 | Title: GBIF Audit: 1.4 million records with no scientificName
Question:
username_0: The ALA currently contains 1,413,517 records with no value for the scientificName field. These may not directly be an issue if the vernacularName or a higher level taxonomic field is being used for identification, but it may be an area for investigation in future.
The distribution by data resource for missing scientificName values is:
dataResource | missingScientificNameCount | totalRecordCount
-- | -- | --
dr1089 | 2296 | 3381819
dr1168 | 141 | 342
dr1171 | 1 | 1056
dr1172 | 301 | 685
dr1175 | 357 | 503
dr1176 | 42 | 925
dr1177 | 34 | 69
dr1180 | 257 | 686
dr1181 | 563 | 1484
dr1185 | 102 | 2170
dr1189 | 320 | 320
dr1195 | 57 | 504
dr1196 | 72 | 1100
dr1201 | 293 | 1247
dr1411 | 2724 | 78866
dr1675 | 73 | 771
dr1680 | 1 | 521
dr1681 | 2 | 1500
dr1682 | 97 | 216
dr1683 | 75 | 606
dr1687 | 236 | 1174
dr1697 | 70 | 245
dr1757 | 2 | 587
dr1762 | 26 | 429
dr1763 | 872 | 1179
dr1765 | 116 | 774
dr1791 | 95 | 1085
dr1792 | 22 | 996
dr1840 | 4222 | 4222
dr1848 | 1119 | 1119
dr1849 | 2 | 587
dr1864 | 132035 | 132035
dr1875 | 17 | 487
dr1888 | 1 | 371
dr1902 | 11317 | 83099
dr1917 | 1 | 343
dr1920 | 20 | 1175
dr1922 | 40 | 485
dr1925 | 148 | 1303
dr1926 | 174 | 1229
dr1930 | 956 | 1028
dr2014 | 20880 | 71280
dr2129 | 5 | 647
dr2153 | 134059 | 847143
dr2185 | 94 | 432
dr2186 | 462 | 640
dr2187 | 204 | 595
dr2188 | 339 | 550
dr2244 | 119185 | 148701
dr2262 | 7780 | 7780
dr2277 | 24 | 1388
dr2281 | 42 | 1002
dr2287 | 67 | 523184
dr2288 | 17 | 4703
[Truncated]
dr732 | 6 | 53750
dr736 | 2089 | 2089
dr7430 | 7 | 348
dr7452 | 74 | 31275
dr7453 | 37 | 393
dr7456 | 13 | 13833
dr7706 | 152 | 152
dr7729 | 4712 | 4712
dr7858 | 367 | 35628
dr789 | 20 | 592
dr790 | 11 | 494
dr795 | 8104 | 8104
dr799 | 651 | 2813
dr805 | 3854 | 6092
dr807 | 8490 | 81310
dr835 | 2261 | 2261
dr843 | 366 | 366
dr893 | 64068 | 64068
dr920 | 52055 | 52055
dr968 | 1 | 352 |
getsentry/sentry-unity | 981292889 | Title: iOS Native support for Simulator
Question:
username_0: Through https://github.com/getsentry/sentry-unity/issues/164 we added iOS native support.
The `sentry-cocoa` framework bundled is only for real devices. Customers expect to test things in the simulator. And in addition to get e2e tests in CI, we also need simulator support.
Answers:
username_0: Fixed in #406
Status: Issue closed
|
python/pythondotorg | 174864479 | Title: PSF Sponsor Prospectus 2016
Question:
username_0: @username_1 let me know what wording should go with the PSF Sponsor Prospectus 2016 and under which section: https://www.python.org/psf/sponsorship/
Answers:
username_1: Under "How to become a Sponsor", add the following copy to the beginning of the paragraph:
Download [link to pdf] a copy of the PSF Sponsor Prospectus for more details about sponsorship.
username_0: Linked the PDF https://www.python.org/psf/sponsorship/.
Status: Issue closed
|
nolemmings/compare-json | 121778186 | Title: Add option to process multiple files in groups
Question:
username_0: I have my angular-translate files structured by section, e.g.:
```
index-nl.json
index-en.json
help-nl.json
help-en.json
about-nl.json
about-en.json
```
When running `comparejson *.json` every file is checked against every other file. What I would like is that all files with a common prefix ('index', 'help', or 'about') are checked against each other.
The best I can think of is to set some separator (in my case '-') such that groups can be formed. After forming groups, each group can be processed by running `compareFiles` on it.
Do you have a better suggestion? I can do a PR if you want.
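To illustrate, the grouping step I'm imagining is tiny (sketched in Python here purely for readability; the actual PR would do the same in the tool's JavaScript):

```python
from collections import defaultdict

# Group translation files by the prefix before the last separator so each
# group ('index', 'help', 'about', ...) is compared independently.
def group_by_prefix(files, separator="-"):
    groups = defaultdict(list)
    for name in files:
        stem = name.rsplit(".", 1)[0]          # drop the .json extension
        prefix = stem.rsplit(separator, 1)[0]  # 'index-nl' -> 'index'
        groups[prefix].append(name)
    return dict(groups)

files = ["index-nl.json", "index-en.json", "help-nl.json", "help-en.json"]
print(group_by_prefix(files))
# {'index': ['index-nl.json', 'index-en.json'],
#  'help': ['help-nl.json', 'help-en.json']}
```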
Answers:
username_1: Added feature in 5ead75d90ea17e7a672c54d9d80c3bb1eb260392
See README.
Any feedback is welcome, I'll be releasing 0.2.0 shortly.
Closing for now, feel free to reopen or submit new issue if it doesn't work as expected.
Status: Issue closed
username_0: Thanks! I will review and test this tomorrow evening.
username_0: I have just tested this feature and it works as expected. Thanks again for adding it.
username_2: No errors found |
frontity/frontity | 828195628 | Title: `className` attribute in WordPress content overwrites the `class`.
Question:
username_0: **Observed behavior**
When a user has an element with a `className` attribute in their content, this attribute will be passed as `className` to the final react component created with `Html2React`.
Additionally, when a user has an element with both a `className` and `class` attribute in the WordPress content, that `className` will overwrite the `class`.
**Steps involved to reproduce the problem**
I've created a test case to show the behavior in #746
**Possible solution**
I think that we should either a) ignore the `className` attribute in the user content, or b) merge the content from the `class` and `className` attributes.
Personally, I think that a) is a more reasonable solution and leads to less surprising behavior.
Answers:
username_1: Are there any known cases where we are going to get className from WordPress content?
username_0: Nope, I don't know of any cases at the moment. It's only that it's possible to happen.
username_2: Was reading through this and it hit me that `className` is a property of a DOM node, not an attribute. So, even if it exists in the HTML coming from WP, I think it needs to be simply ignored.
That means we need to skip it when we find it. I pushed a change on that PR, @username_0. Let me know what you think.
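In pseudocode terms the skip is a one-line filter while mapping parsed HTML attributes to React props (Python used only to sketch the logic; the real change is in the PR):

```python
# The html 'class' attribute maps to the React 'className' prop, while a
# literal 'className' attribute found in the content is dropped, since
# className is a DOM property rather than a valid HTML attribute.
def to_react_props(attributes):
    props = {}
    for name, value in attributes.items():
        if name == "className":
            continue  # skip: not a real HTML attribute
        props["className" if name == "class" else name] = value
    return props

print(to_react_props({"class": "a", "className": "b"}))
# {'className': 'a'}  -> 'class' wins, 'className' is ignored
```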
username_0: Yup, I think that this is the reasonable behaviour I was expecting :)
Status: Issue closed
|
btk5h/skript-mirror | 393108138 | Title: log handler error spam and freeze
Question:
username_0: I get this when I start my server after the latest PaperSpigot update:
```
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:580)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 2 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1602)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 8 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1596)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1602)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.Statement.parse(Statement.java:54)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 7 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1596)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 5 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1602)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 8 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1596)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 9 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1596)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 2 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1596)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.Statement.parse(Statement.java:54)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 2 log handlers were not stopped properly! (at ch.njol.skript.lang.SkriptParser.parse_i(SkriptParser.java:1596)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
[10:55:10 ERROR]: [Skript] 1 log handler was not stopped properly! (at ch.njol.skript.lang.SkriptParser.parseSingleExpr(SkriptParser.java:371)) [if you're a server admin and you see this message please file a bug report at https://github.com/bensku/skript/issues if there is not already one]
```
And many more. The server also freezes most of the time, but not always.
The problem goes away when I remove skript-mirror, so I guess that might be a good place to start?
paperspigot 1.13.2 #481
skript-mirror 1.0.0
Answers:
username_1: This has been fixed on the 2.x branch.
username_2: @username_1 Could you release a 1.0.x version with the log handlers fix?
It's impossible to use 2.3.1 and skript-mirror, and 2.x doesn't seem stable enough to be used yet.
Thanks.
username_3: @username_1 I tried 1.0 with the fixes of https://github.com/username_1/skript-mirror/commit/bbaefd71cd90674c1e34404d00b53cbc0408f57a.
The log handler messages don't appear, but the addon doesn't work at all.
The same when using 2.0.
username_1: I have no confirmation that the server freezing is related to this bug.
username_3: @username_1 yes, it is related to that bug. I've compiled the version with that patch and it is working perfectly now.
I don't know why, but when I compiled with Gradle the addon didn't work at all, whereas when I compiled it manually it worked.
username_1: I don't really have the time to backport the fixes from 2.x and test them myself, and skript-mirror 1.x doesn't target newer versions of Skript, so this isn't a very high priority for me.
I will accept a PR from anyone who wants to backport the fixes and test skript-mirror themselves though.
username_4: I am having this issue. Could someone send me a download link for a working version?
SBU-BMI/quip_distro | 238595882 | Title: Need to time out quickly and return error for expired logins
Question:
username_0: It seems google auth information is being cached on the client side but not getting expired properly. If a user keeps logged on to his/her google account for a long time, she will be able to log on to QuIP but not able to access the list of images, etc. The application will keep waiting without a timeout or an error message.
Answers:
username_1: Is this what is causing the flextable page to sit with the circle animation indefinitely?
username_2: Probably. The diagnosis makes sense.
Status: Issue closed
username_3: I modified the select.php file to force a logout if the bindass api_key cached by the user's browser has expired.
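The general shape of the fail-fast check is something like this (an illustrative sketch only, with a placeholder endpoint; not the actual select.php change):

```python
import requests

# Bound the upstream call with a short timeout and treat an auth failure
# as an immediate forced logout, instead of letting the UI wait forever.
def fetch_image_list(api_key):
    resp = requests.get(
        "https://example.org/api/images",   # placeholder endpoint
        headers={"Authorization": api_key},
        timeout=5,                          # fail fast rather than hang
    )
    if resp.status_code in (401, 403):
        raise PermissionError("api_key expired: force logout")
    resp.raise_for_status()
    return resp.json()
```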
jlippold/tweakCompatible | 323814342 | Title: `XenInfo` working on iOS 11.1.2
Question:
username_0: ```
{
"packageId": "com.junesiphone.xeninfo",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.junesiphone.xeninfo",
"deviceId": "iPhone9,4",
"url": "http://cydia.saurik.com/package/com.junesiphone.xeninfo/",
"iOSVersion": "11.1.2",
"packageVersionIndexed": false,
"packageName": "XenInfo",
"category": "Addons (Tweaks)",
"repository": "junesiphone.com",
"name": "XenInfo",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.junesiphone.xeninfo",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.7",
"shortDescription": "Music, Weather, and battery info for XenHTML",
"latest": "0.6",
"author": "JunesiPhone",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": "Working faultless iphone 7plus 11.1.2"
}
```
Answers:
username_1: This issue is being closed because your review was accepted into the tweakCompatible website.
Tweak developers do not monitor or fix issues submitted via this repo.
If you have an issue with a tweak, contact the developer via another method.
Status: Issue closed
|
hiloteam/Hilo3d | 352648671 | Title: Hilo3d does not support scale of normal map?
Question:
username_0: I tried to display some models with normal maps in Hilo3d.
However, Hilo3d does not seem to support the scale property of normal maps.

Hilo3d + Yunomi_normal_20.glb result:
http://jsdo.it/username_0/4MPF
Hilo3d + Material_07.gltf result:
http://jsdo.it/username_0/m6tR
Status: Issue closed
Answers:
username_1: @username_0
I updated the normalMap calculation method according to the [glTF spec](https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#normaltextureinfoscale), but the formula there is not correct.
I made a PR to the spec:
https://github.com/KhronosGroup/glTF/pull/1429
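For reference, the spec formula under discussion boils down to this per-texel math (a numpy sketch):

```python
import numpy as np

# glTF 2.0 normalTextureInfo.scale as written in the spec linked above:
# scaledNormal = normalize((sampledRGB * 2.0 - 1.0) * (scale, scale, 1.0))
def apply_normal_scale(sampled_rgb, scale):
    n = np.asarray(sampled_rgb) * 2.0 - 1.0   # unpack [0,1] -> [-1,1]
    n = n * np.array([scale, scale, 1.0])     # scale only the xy components
    return n / np.linalg.norm(n)              # renormalize

flat = [0.5, 0.5, 1.0]                        # a 'flat' normal-map texel
print(apply_normal_scale(flat, 2.0))          # stays [0, 0, 1]
```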
Status: Issue closed
|
firebase/firebase-tools-ui | 901395612 | Title: Unexpected Firebase Cloud Storage Emulator behaviour when objects uploaded using the Admin SDK and the Web SDK
Question:
username_0: ### Environment info
**firebase-tools:** 9.11.0
**node:** v14.16.0
**Platform:** Manjaro
### Setup
```
git clone https://github.com/username_0/firebase-cloud-storage-emulator-issue.git
cd firebase-cloud-storage-emulator-issue && npm ci
firebase emulators:start`
```
### Test case 1: Uploading with the Firebase Admin SDK
### Steps to reproduce
1. `FIREBASE_STORAGE_EMULATOR_HOST=localhost:9199 node firebase-admin.js`
2. Open emulator UI on Google Chrome and go to the storage emulator.
3. Click on any of the uploaded files in the default bucket.
### Expected behavior
- The detail pane of the image should show like the following:

### Actual behavior
- Whole window goes blank, and does not revert unless you manually reload the page.
- The following is logged to the browser console:
```
It looks like you're using the development build of the Firebase JS SDK.
When deploying Firebase apps to production, it is advisable to only import
the individual SDK components you intend to use.
For the module builds, these are available in the following manner
(replace <PACKAGE> with the name of a component - i.e. auth, database, etc):
CommonJS Modules:
const firebase = require('firebase/app');
require('firebase/<PACKAGE>');
ES Modules:
import firebase from 'firebase/app';
import 'firebase/<PACKAGE>';
Typescript:
import firebase from 'firebase/app';
import 'firebase/<PACKAGE>';
(anonymous) @ index.ts:18
react-dom.production.min.js:225 FirebaseError: Firebase Storage: The given file does not have any download URLs. (storage/no-download-url)
ms @ react-dom.production.min.js:225
scheduler.production.min.js:12 Uncaught FirebaseError: Firebase Storage: The given file does not have any download URLs. (storage/no-download-url)
```
[Truncated]
### Steps to reproduce
1. `FIREBASE_STORAGE_EMULATOR_HOST=localhost:9199 node firebase-web-sdk.js`
2. You need to manually end the node script in the terminal with CTRL+C.
3. Open emulator UI on Google Chrome and go to the storage emulator.
4. Click on any of the uploaded files in the default bucket.
### Expected behavior
- Same as first test case.
### Actual behavior
- The image thumbnail is missing in the detail pane:

- No errors are logged to the browser console.
- Output of command with --debug flag:
[test-case-2-firebase-debug.log](https://github.com/firebase/firebase-tools/files/6519819/test-case-2-firebase-debug.log)
[test-case-2-firestore-debug.log](https://github.com/firebase/firebase-tools/files/6519820/test-case-2-firestore-debug.log)
Status: Issue closed |
DestinyItemManager/DIM | 188942563 | Title: [i18n] Loadout - Create or Edit
Question:
username_0: The class-list dropdown is not translated currently; I could not figure out how to fix it.
Answers:
username_1: @username_0 I'll have a look.
username_1: Are you talking about the drop down for the selection of tiers?

The class list looks like it is translated, but the "Tier %d" string isn't
username_0: Closing in favor of #1212
Status: Issue closed
|
LuaxY/CawotteSrv | 96550717 | Title: Client/Server
Question:
username_0: Client/server socket to listen for and dispatch new connections to client sessions using LibEvent.
Answers:
username_0: Base is done. Remaining improvements:
- Store clients in a vector
- Remove clients from the vector on logout
- Clean up on server stop
apache/couchdb-nano | 595449191 | Title: connect to couchdb with curl works and with nano fails
Question:
username_0: Hello everyone,
connecting to couch using curl returns:
```
http://4b03SQKEIPqrC3U07eRV:[email protected]
{"couchdb":"Welcome","version":"3.0.0","git_sha":"03a77db6c","uuid":"d118f924612337b166a2c8dfbbdc9177","features":["search","access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
```
However, the same setup (password, username, and URL) returns:
`Unable to connect to Couchdb.`
and here is my code:
```ts
private getCouchDBClient(config: CouchConfig): { db: any, endpoint: string } {
  this._logger.info({ log: `Setting up couchdb client for: ${config.url}` });
  let nano: Nano.ServerScope;
  let db: any;
  try {
    const [proto, path] = config.url.split("://", 2);
    const url = `${proto}://${config.username}:${config.password}@${path}`;
    nano = Nano(url);
    if (config.dbName && config.dbName.length > 1) {
      db = nano.db.use(config.dbName);
    } else {
      db = nano.db;
    }
  } catch (err) {
    this._logger.debug({ log: COUCH_CONNECTION_ERROR.message });
    this._logger.debug({ log: `Error: ${err}` });
    throw (new Error(COUCH_CONNECTION_ERROR.message));
  }
  return {
    db: db,
    endpoint: `${config.url}/${config.dbName}`,
  };
}
```
and my env file:
```
SEARCH_RECORDS_DATASTORE_TYPE = 'couchdb'
SEARCH_RECORDS_COUCH_USERNAME = '4b03SQKEIPqrC3U07eRV'
SEARCH_RECORDS_COUCH_PASSWORD = '<PASSWORD>stE4KHX3oLZID5'
SEARCH_RECORDS_COUCH_DB_NAME = 'de-search-records'
SEARCH_RECORDS_COUCH_URL = 'http://default-couch.app.xxx.com'
```
Any idea?
Answers:
username_1: If you just want to check connectivity, a simpler test script should help you identify what's going on. This is the equivalent of the `curl` statement:
```js
const Nano = require('nano')
const nano = Nano('http://4b03SQKEIPqrC3U07eRV:[email protected]')
const db = nano.db.use('de-search-records')
db.info().then(console.log)
```
^ this works with Nano for local CouchDB on http or remote Cloudant over https.
I don't see that the code you supplied actually makes any API calls; neither `db.use` nor the 'constructor' does.
username_0: @username_1 Thanks a lot for your comment. Actually, I noticed there was a stray space in my path URL, and that was causing the issue.
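For anyone else hitting this, a cheap guard is to strip whitespace from config values before assembling the connection string (sketched in Python with placeholder values; the same trim applies in the TypeScript above):

```python
# Whitespace sneaking in from a .env file silently corrupts the URL, so
# trim every config value before building the connection string.
config = {
    "username": "4b03SQKEIPqrC3U07eRV ",   # note the trailing space
    "password": "secret",                  # placeholder value
    "url": "http://default-couch.app.xxx.com",
}
clean = {key: value.strip() for key, value in config.items()}
proto, path = clean["url"].split("://", 1)
print(f"{proto}://{clean['username']}:{clean['password']}@{path}")
```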
Status: Issue closed
|
m2ms/fragalysis-frontend | 1158445703 | Title: Close/open batch navigator
Question:
username_0: Allow the user to hide/show the batch navigator. When hidden, the batch display should expand to occupy the space; likewise, it should adjust to accommodate the navigator panel when it is opened.
Status: Issue closed
Answers:
username_0: This is complete |
microsoft/CCF | 453106433 | Title: The ledger contains the exact same data over sequential transactions
Question:
username_0: When running a test, after setting up a network, the nodes table should be populated and therefore readable by ledger.py.
When reading the ledger, we can see identical node entries appearing on sequential transactions, without any property being changed. This seems concerning, since the ledger should only be storing the delta between versions.
[test_nodes.py.txt](https://github.com/microsoft/CCF/files/3262349/test_nodes.py.txt)
Answers:
username_1: There is a specific transaction that is run 1) when the leader starts up and 2) when a new node is added to the network to set the status of a node as `TRUSTED`. See https://github.com/microsoft/CCF/blob/master/src/node/nodestate.h#L291-L294 and https://github.com/microsoft/CCF/blob/master/src/node/rpc/nodefrontend.h#L36-L37.
Are the nodes info retrieved exactly the same or is the node status slightly different? I would expect the nodes to be added as `PENDING` in the genesis transaction and then transitioned to `TRUSTED` within the first few transactions.
username_0: @username_1 The entries are identical, node state included
username_2: Cannot reproduce with a recent build.
Status: Issue closed
username_1: Re-opening. As far as I could see, this is an issue with the Python ledger reader (`ledger.py`) as the requests do not seem duplicated in the ledger file itself.
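A quick way to confirm where the duplication happens (a diagnostic sketch; how entries are deserialized depends on ledger.py, so `entries` is a placeholder):

```python
# Walk the deserialized transactions and flag consecutive byte-identical
# payloads. If the raw ledger shows no such pairs but the Python reader
# does, the bug is in the reader rather than in the ledger file.
def find_consecutive_duplicates(entries):
    previous = None
    for index, entry in enumerate(entries):
        if previous is not None and entry == previous:
            print(f"duplicate payload at transaction {index}")
        previous = entry

find_consecutive_duplicates([b"node-A", b"node-A", b"node-B"])
# -> duplicate payload at transaction 1
```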
username_2: To clarify, I tested this by grepping for strings in the ledger.
username_1: I can confirm that this was an issue with the infra. See #918
Status: Issue closed
|
rancher/rancher | 557166364 | Title: changes for kontainer-driver-metadata
Question:
username_0: Because we currently have code and data in rancher/kontainer-driver-metadata, vendoring it in different release branches for rancher and rke causes issues.
- Introduce changes to our build process for rke and rancher so that they stop vendoring kdm and have access to local bin data file
- Update rke and rancher to load data from this local file
Answers:
username_1: Having the RKE CLI be able to consume a `data.json` would be amazing
username_2: Discussed with @username_0, the current logic for syncing kdm is use catalog url to fetch `data.json` first, then fallback to vendor file.(Assuming fallback method is needed in case catalog sync failed or air-gap installation). The goal is to move away from vendor file and only use `data.json` as source of truth. Same applies to RKE.
username_0: @username_2 sounds good, just to clarify, we don't currently have any vendor file, as in we don't have vendored data.json but it's just structs
username_2: Related PRs:
https://github.com/rancher/types/pull/1088
https://github.com/rancher/rke/pull/1910
https://github.com/rancher/kontainer-driver-metadata/pull/136
username_2: This issue is available to test. It actually introduces no functionality change, but we need to verify there is no regression.
To verify this, try to provision an RKE cluster in a normal or air-gapped setup. All the images should still be downloaded and the components should be brought up.
username_3: Do the following preparation before running the validations:
- Run Rancher:master-head 7640e89b0 and check the builtin k8s versions as the following:
```
- 1.17.3-rancher1-1 (default)
- 1.16.7-rancher1-1
- 1.15.10-rancher1-1
```
<img width="705" alt="Screen Shot 2020-02-26 at 3 43 35 PM" src="https://user-images.githubusercontent.com/6218999/75396997-0e44c680-58b3-11ea-92d2-32b410e91155.png">
- Copy the builtin `data.json` file and host it at a server so Rancher can download from an URL
**Validation 1:**
- edit the copy to add a new k8s version, say `1.17.4-rancher1-1`
- go to Settings -> rke-metadata-config to change the URL to point to the copy
Results:
- Rancher logs show the following message, which indicates a refresh is triggered:
```
2020/02/27 00:38:53 [INFO] Refreshing driverMetadata in 1440 minutes
2020/02/27 00:38:53 [INFO] driverMetadata: refreshing data from upstream https://raw.githubusercontent.com/username_3/kontainer-driver-metadata/issue-25162/data/data.json
```
- In Rancher UI, go to the cluster provision page, see the new k8s version is showing in the list

**Validation 2:**
- change the default to be `1.16.x` in the data.json file
```
"RancherDefaultK8sVersions": {
"2.3": "v1.17.x",
"2.3.0": "v1.15.x",
"2.3.1": "v1.15.x",
"2.3.2": "v1.15.x",
"2.3.3": "v1.16.x",
"default": "v1.16.x" <--- this line
},
```
- go to tools -> drivers and click `refresh kubernetes metadata`
Results:
- Rancher logs indicate a refresh is triggered
```
2020/02/27 00:46:30 [INFO] driverMetadata: refreshing data from upstream https://raw.githubusercontent.com/username_3/kontainer-driver-metadata/issue-25162/data/data.json
```
- in Rancher UI, the default version showing in the cluster provision page is `v1.16.7-rancher1-1`
- also in the api:
```
"default": "v1.16.7-rancher1-1",
"id": "k8s-version",
```
**Validation 3:**
- go to Settings -> rke-metadata-config to change the refreshing interval to 1
```
"refresh-interval-minutes": "1",
```
- edit the data.json file to add a new k8s version `v1.18.1-rancher1-1`, and change the default to `v1.18.x`
Results:
- in Rancher's log, the following shows every minute which indicates the refresh is triggered automatically
```
2020/02/27 01:07:08 [INFO] Refreshing driverMetadata in 1 minutes
2020/02/27 01:07:08 [INFO] driverMetadata: refreshing data from upstream https://raw.githubusercontent.com/username_3/kontainer-driver-metadata/issue-25162/data/data.json
2020/02/27 01:08:08 [INFO] Refreshing driverMetadata in 1 minutes
2020/02/27 01:08:08 [INFO] driverMetadata: refreshing data from upstream https://raw.githubusercontent.com/username_3/kontainer-driver-metadata/issue-25162/data/data.json
2020/02/27 01:09:08 [INFO] Refreshing driverMetadata in 1 minutes
2020/02/27 01:09:08 [INFO] driverMetadata: refreshing data from upstream https://raw.githubusercontent.com/username_3/kontainer-driver-metadata/issue-25162/data/data.json
```
- in Rancher UI, the new version shows up as the default value

- the values under v3/settings also reflect the changes
username_3: The following validations were done on an air-gapped setup, Rancher v2.4.0-rc1, single install.
Validation 1:
- provision a cluster for each supported k8s version with the embedded metadata
- confirm that clusters are active and use the same set of images as the clusters provisioned in a non-air-gapped setup with the same k8s version
Validation 2:
- host the data.json file and change the url in settings to point to that file
- add a new k8s version to the data.json file, and trigger a refresh in Rancher
- confirm that the new k8s version shows up in Rancher
Status: Issue closed
|
blxzfb307/swag-test | 464839853 | Title: error message
Question:
username_0: ```sh
➜ ~ git clone <EMAIL>:username_1/swag-test.git
Cloning into 'swag-test'...
remote: Enumerating objects: 9, done.
remote: Counting objects: 100% (9/9), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 9 (delta 0), reused 9 (delta 0), pack-reused 0
Receiving objects: 100% (9/9), done.
➜ ~ cd swag-test
➜ swag-test git:(master) swag init -g app/main/mian.go
2019/07/06 18:10:09 Generate swagger docs....
2019/07/06 18:10:09 Generate general API Info, search dir:./, mainAPIFile:app/main/mian.go
app/main
2019/07/06 18:13:10 execute go list command, exit status 1, stdout:, stderr:go: finding github.com/labstack/echo/v4 v4.1.6
go: finding github.com/labstack/gommon v0.2.9
go: finding golang.org/x/sys v0.0.0-20190609082536-301114b31cce
go: finding golang.org/x/tools v0.0.0-20190608022120-eacb66d2a7c3
go: finding golang.org/x/net v0.0.0-20190607181551-461777fb6f67
go: finding golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5
go: finding golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed
go: finding github.com/mattn/go-colorable v0.1.2
can't load package: package swag-test/a: unknown import path "swag-test/a": cannot find package
```
Answers:
username_1: Maybe not exactly the same, but I got this error message.
```
swag init -g app/main/mian.go
2019/07/06 18:23:38 Generate swagger docs....
2019/07/06 18:23:39 Generate general API Info, search dir:./
2019/07/06 18:23:39 execute go list command, exit status 1, stdout:, stderr:can't load package: package swag-test: unknown import path "swag-test": cannot find module providing package swag-test
```
username_0: OK, I will dig into this issue later; thanks for your report.
username_1: @username_0 Thank you for your help. By the way, there is another problem with nested structs when generating the document. If I have the struct below, it will generate a strange string.
```
//@Success 200 {object} Foo
type Foo struct {
Field1 []struct{
Field2 uint
Field3 string
}
}
```
The document will like this
```
Foo {
Field1 [&{%!s(token.Pos=1046) %!s(*ast.FieldList=&{1053 [0xc00036a080 0xc00036a0c0 0xc00036a100 0xc00036a140 0xc00036a180 0xc00036a200 0xc00036a240 0xc00036a280 0xc00036a2c0 0xc00036a300 0xc00036a340 0xc00036a3c0 0xc00036a480] 1695}) %!s(bool=false)}]
}
```
I also want to know if there is any method to use anonymous structs when generating the document. For example, I use this struct only once in the router handler function, like below, and I don't want to name it.
```
response := struct {
Code int `json:"code"`
Data []struct{
Field1 uint `json:"field1"`
Field2 string `json:"field2"`
}
}{}
return c.JSON(200, response)
```
username_0: @username_1 this issue has been fixed by https://github.com/swaggo/swag/commit/91ec3e69be3fcd78e18b7f4d19f9a1785f36919d. Please run `go get -u github.com/swaggo/swag/cmd/swag` to get v1.6.1; it should work.
username_0: As for the anonymous structs issue, it's kind of difficult to fix, but I will do my best to support all cases of anonymous structs. You can raise the issue at https://github.com/swaggo/swag/issues and I will fix it.
username_1: @username_0 sorry for bothering you again, the problem still exists in v1.6.1. Here is the screenshot.

username_0: It should show swag-test/app/main via the go list command. I'm not sure if this is related to the Windows platform, but I can make go list run only when the --parsingDependency flag is on.
username_1: Here is the screenshot of go list. It shows the same error message if I use the relative path.

username_0: Thanks for the information; I will disable go list when the --parsingDependency=on flag is not specified.
username_0: @username_1 please use the latest v1.6.2 to try again, thx.
username_1: It works now, thanks! |
keybase/client | 268226759 | Title: False no space left on device error
Question:
username_0: I have plenty of space on my hard drive, but keybase reports:
ERROR Cannot commit Merkle root to local DB: write /home/jascha/.local/share/keybase/keybase.leveldb/000430.log: no space left on device
Answers:
username_1: Are you sure? What does `df` show?
username_0: $df -h
Filesystem Size Used Avail Use% Mounted on
udev 5.9G 4.0K 5.9G 1% /dev
tmpfs 1.2G 3.5M 1.2G 1% /run
/dev/sda4 92G 28G 60G 32% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 5.9G 309M 5.6G 6% /run/shm
none 100M 28K 100M 1% /run/user
/dev/sdc1 2.7T 713G 1.9T 28% /mnt/sdc1
/dev/sda5 89G 74G 11G 88% /home
/dev/sda1 197M 23M 175M 12% /boot/efi
I had to kill keybase and restart it for it to work once again.
username_2: just had the same problem
`▶ ERROR Cannot commit Merkle root to local DB: write /home/simon/.local/share/keybase/keybase.leveldb/003453.log: no space left on device`
`/dev/mapper/main-root 205G 193G 1,5G 100% /`
keybase version 1.0.37-20171208170249+025062c3a (arch linux)
killing and restarting keybase worked for me, thx
username_3: Saw this today. `df` looks normal, plenty of space on my drive.
`ERROR Cannot commit Merkle root to local DB: write /home/ubuntu/.local/share/keybase/keybase.leveldb/000008.log: no space left on device
▶ WARNING Error in writing UPAK for 10af161581627b002ff34b4d3ef7d619: write /home/ubuntu/.local/share/keybase/keybase.leveldb/000008.log: no space left on device [tags:SELF=kv5V8d-Ma8Ea]`
username_4: Hm, how about `df -i`? Inode exhaustion can also produce "no space left on device" errors even when `df` shows plenty of free blocks.
username_3: Most usage I see on anything is 11% which is on /. There's plenty of space on my drive.
username_5: Same here and it is frustrating... I cannot chat, post, receive information, or even change a password or install a second device. I have 160GB left, which should be more than enough |
Tychobra/shiny-insurance-examples | 410893726 | Title: Cannot open compressed file readRDS("./data/shiny-model-fit-dat.RDS")
Question:
username_0: After downloading the repo (git clone) and trying to run the app, this message is shown:

- I do have the tychobratools library and similar packages installed.
Help, please.
Status: Issue closed
Answers:
username_1: Sorry, but unfortunately this particular Shiny app does not come with the necessary data to run it locally. The data is proprietary, so I could not include it.
username_2: Is the missing data the raw data for these load calls? There are two saved RDS files being loaded.
- This is the first load:
# load data
dat <- readRDS("./data/shiny-model-fit-dat.RDS")
- This is the second load:
preds <- readRDS("model.rds") |
JimmyLv/reading | 382500511 | Title: Blockchain Tutorial for Beginners - Ruan Yifeng's Blog
Question:
username_0: ## Blockchain Tutorial for Beginners - Ruan Yifeng's Blog
Blockchain is the hot topic of the moment: the media reports on it heavily and claims it will create the future. Yet simple, easy-to-understand introductory articles are rare. What blockchain actually is, and what makes it special, is rarely explained. …
November 20, 2018 at 12:56PM
via Instapaper http://www.ruanyifeng.com/blog/2017/12/blockchain-tutorial.html |
hassio-addons/addon-adguard-home | 655741301 | Title: Plugin stopping to block anything
Question:
username_0: Good day!
The AdGuard plugin stops blocking anything from any of the lists. Even when I add "||google.com^" to the custom filtering rules, the check says "Not found in your filter lists".
Only reinstalling the plugin solves this problem, and only for another 10-12 hours.
There are no warnings or errors in the log files, and meanwhile the plugin serves DNS requests normally (including DNS rewrites).
In the dashboard it looks like this:
https://clip2net.com/s/48mUuXx
Hassio ver. 0.112.3, plugin ver. 2.4.2.
Thanks!
Answers:
username_0: Reinstalled the addon again. It worked perfectly for about 2 days, but then it somehow turned itself off (when I opened the control panel, I saw a red "Protection off" label). When I turned it back on it still wouldn't block any addresses, and even the check for custom filtering rules said that nothing was found in any lists.
Status: Issue closed
|
YDLIDAR/ydlidar_ros_driver | 789600961 | Title: what's the difference between ydlidar_ros_driver::LaserFan and sensor_msgs::LaserScan in ROS
Question:
username_0: What's the point of creating a new message type LaserFan?
Answers:
username_1: LaserScan is a standard ROS message type. The LaserFan message type keeps the original angle of every point, while LaserScan loses angle precision because each angle has to be rounded onto the fixed `angle_increment` grid (e.g. with a 1° increment, points at 10.2° and 10.6° collapse into the same index):
```
for(size_t i=0; i < scan.points.size(); i++) {
// Convert angle to index will cause angle precision loss
int index = std::ceil((scan.points[i].angle - scan.config.min_angle)/scan.config.angle_increment);
if(index >=0 && index < size) {
if(scan.points[i].range >= scan.config.min_range) {
scan_msg.ranges[index] = scan.points[i].range;
scan_msg.intensities[index] = scan.points[i].intensity;
}
}
// Keep the original angle
fan.angles.push_back(scan.points[i].angle);
fan.ranges.push_back(scan.points[i].range);
fan.intensities.push_back(scan.points[i].intensity);
}
```
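In short, LaserFan keeps three parallel arrays (angles, ranges, intensities) exactly as measured, at the cost of not being a standard message type, while LaserScan stays compatible with standard ROS tooling but quantizes every angle onto the fixed `angle_increment` grid.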
Status: Issue closed
|
MicrosoftDocs/windows-itpro-docs | 731400087 | Title: Use Group Policy to hide the Microsoft Defender AV interface from users
Question:
username_0: Is it planned to make this available as a user GPO?
If we set it and log on as an administrator, we can't control the settings or findings of Defender via the GUI on the machine.
Since Microsoft has decided to make Windows Security available to all users as an extra Appx shortcut in the Start menu, we can't restrict users from using it.
[Enter feedback here]
---
#### Document details
⚠ *Do not edit this section. It is required for linking docs.microsoft.com to the GitHub article.*
* ID: 117e0cad-123d-5b77-3879-1b4e08271b1c
* Version Independent ID: 13bca309-67d1-5fa4-628d-2ef6f84a70d0
* Content: [Hiding the Microsoft Defender antivirus interface - Windows security](https://docs.microsoft.com/de-de/windows/security/threat-protection/microsoft-defender-antivirus/prevent-end-user-interaction-microsoft-defender-antivirus)
* Content Source: [windows/security/threat-protection/microsoft-defender-antivirus/prevent-end-user-interaction-microsoft-defender-antivirus.md](https://github.com/MicrosoftDocs/windows-itpro-docs/blob/public/windows/security/threat-protection/microsoft-defender-antivirus/prevent-end-user-interaction-microsoft-defender-antivirus.md)
* Product: **w10**
* Technology: **windows**
* GitHub Login: @username_1
* Microsoft Alias: **deniseb**
Answers:
username_1: @username_0 thank you for posting this question. I'm looking into this.
Status: Issue closed
username_1: Hello @username_0. See this article for hiding the Microsoft Defender Antivirus interface from users: [Use Group Policy to hide the Microsoft Defender AV interface from users](https://docs.microsoft.com/windows/security/threat-protection/microsoft-defender-antivirus/prevent-end-user-interaction-microsoft-defender-antivirus#use-group-policy-to-hide-the-microsoft-defender-av-interface-from-users)
If that does not work, please contact technical support.
username_0: Like I said, this Group Policy is just a computer policy, and it also prevents the admin user from accessing Microsoft Defender AV; and the answer is to set the same computer policy? The problem is that Microsoft is removing the working control panel step by step in favor of the new fancy-looking Settings apps. |
zfw1226/gym-unrealcv | 513674603 | Title: Exiting abnormally
Question:
username_0: When I run "UnrealTrack-City1StefaniPath1-DiscreteColor-v1", the following error occurs:
[2019.10.29-03.34.23:452][ 90]LogModuleManager: Shutting down and abandoning module PakFile (2)
[2019.10.29-03.34.23:457][ 90]LogExit: Exiting.
[2019.10.29-03.34.23:457][ 90]LogInit: Tearing down SDL.
Exiting abnormally (error code: 143)
Answers:
username_1: Could you share more details about your environment (OS, GPU, drivers, ...)?
username_0: OS: ubuntu18.04
GPU: Nvidia1080ti
cuda9.0
python3.6
thank you very much! |
gravitee-io/issues | 345733069 | Title: [global] Secure the backend
Question:
username_0: Hi,
I'm still playing with Gravitee, and I have some questions:
My Gravitee instance is hosted at: gravitee.acme.com
My REST API is hosted at: backend.acme.com/api
My Keycloak instance is hosted at: auth.acme.com
I configured Gravitee to deploy my API. When I try to reach it via Gravitee, it works well (with keyless and OAuth2 plans), but my API is still publicly accessible when I try to reach it directly. So here is my question: should I configure my API server to only accept requests from the Gravitee instance? Or should I modify my API code to manage authentication/authorization? I'm confused...
Thank you for your feedback and help.
Best regards
Status: Issue closed
Answers:
username_0: Ok, thank you :-) |
gamburg/margin | 620541146 | Title: Decide how to treat "annotations with children"
Question:
username_0: Originally raised by @burlesona in #11
Answers:
username_1: See https://github.com/username_0/margin/issues/11#issuecomment-628839001 why I think this is confusing and problematic.
username_1: @username_0, you could try out [my parser](https://github.com/username_1/margin-parser/) to see how this would work in practice (you can edit `playground.js` as instructed in the readme). |
att/rcloud | 90182480 | Title: group interface shouldn't allow you to delete yourself as admin
Question:
username_0: We should try to prohibit dangerous operations in protection group management.
One of these is deleting yourself as admin. Maybe best to handle this on the server and then display the message in the error logger built into the dialog.
Answers:
username_0: Right now, the UI shows you deleting yourself successfully, but re-open the dialog and you're back.
It should be an error rather than silently not doing what you asked, especially now that we have the fantastic error display in the dialog.
username_0: Likewise, it should not allow you to move yourself to the members list, which currently succeeds.
username_0: will handle this in `notebook.protection.R`
Status: Issue closed
|
Skamer/Eska-Tracker | 734383476 | Title: Timer not showing in Mythic+ keys
Question:
username_0: **Describe the bug**
When entering a Mythic+ key, there is no timer or countdown. This was fine before the SL pre-patch update was released.
**Do you have an error log of what happened?**
N/A
**To Reproduce**
Steps to reproduce the behavior:
1. Enter a dungeon and activate any mythic plus key
2. No timer will show
**Expected behavior**
A timer should show, as in the images on the Curse addon page for the EskaTracker objectives addon.
**Screenshots**

**What is the version of EskaTracker you use ?**
Version: 1.5.3c-release
**What are the versions of PLoop and Scorpio you use ?**
(You can find these by opening the EskaTracker options)
- PLoop: v55
- Scorpio:v260
**Additional context**
N/A
Answers:
username_1: Thanks for reporting the issue, it's now fixed. |
pmd/pmd | 205586426 | Title: *[java]*
Question:
username_0: The build fails randomly with a PMD exception, see below.
**Description:**
Exception
```
[pmd] :scripts:pmdTestjava.lang.NullPointerException: Inflater has been closed
[pmd] at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
[pmd] at java.util.zip.Inflater.inflate(Inflater.java:257)
[pmd] at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
[pmd] at java.io.FilterInputStream.read(FilterInputStream.java:133)
[pmd] at java.io.FilterInputStream.read(FilterInputStream.java:107)
[pmd] at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1792)
[pmd] at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
[pmd] at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
[pmd] at org.apache.commons.io.IOUtils.toByteArray(IOUtils.java:462)
[pmd] at net.sourceforge.pmd.RuleSetFactoryCompatibility.filterRuleSetFile(RuleSetFactoryCompatibility.java:84)
[pmd] at net.sourceforge.pmd.RuleSetFactory.parseRuleSetNode(RuleSetFactory.java:249)
[pmd] at net.sourceforge.pmd.RuleSetFactory.createRuleSet(RuleSetFactory.java:202)
[pmd] at net.sourceforge.pmd.RuleSetFactory.createRuleSet(RuleSetFactory.java:197)
[pmd] at net.sourceforge.pmd.RuleSetFactory.parseRuleSetReferenceNode(RuleSetFactory.java:359)
[pmd] at net.sourceforge.pmd.RuleSetFactory.parseRuleNode(RuleSetFactory.java:317)
[pmd] at net.sourceforge.pmd.RuleSetFactory.parseRuleSetNode(RuleSetFactory.java:272)
[pmd] at net.sourceforge.pmd.RuleSetFactory.createRuleSet(RuleSetFactory.java:202)
[pmd] at net.sourceforge.pmd.RuleSetFactory.createRuleSet(RuleSetFactory.java:197)
[pmd] at net.sourceforge.pmd.RuleSetFactory.createRuleSets(RuleSetFactory.java:161)
[pmd] at net.sourceforge.pmd.RuleSetFactory.createRuleSets(RuleSetFactory.java:145)
[pmd] at net.sourceforge.pmd.RulesetsFactoryUtils.getRuleSets(RulesetsFactoryUtils.java:31)
[pmd] at net.sourceforge.pmd.processor.AbstractPMDProcessor.createRuleSets(AbstractPMDProcessor.java:63)
[pmd] at net.sourceforge.pmd.processor.MonoThreadProcessor.processFiles(MonoThreadProcessor.java:41)
[pmd] at net.sourceforge.pmd.PMD.processFiles(PMD.java:367)
[pmd] at net.sourceforge.pmd.ant.internal.PMDTaskImpl.doTask(PMDTaskImpl.java:188)
[pmd] at net.sourceforge.pmd.ant.internal.PMDTaskImpl.execute(PMDTaskImpl.java:269)
[pmd] at net.sourceforge.pmd.ant.PMDTask.execute(PMDTask.java:47)
[pmd] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
[pmd] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[pmd] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[pmd] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[pmd] at java.lang.reflect.Method.invoke(Method.java:498)
[pmd] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
[pmd] at groovy.util.AntBuilder.performTask(AntBuilder.java:327)
[pmd] at groovy.util.AntBuilder.nodeCompleted(AntBuilder.java:272)
[pmd] at org.gradle.api.internal.project.ant.BasicAntBuilder.nodeCompleted(BasicAntBuilder.java:78)
[pmd] at sun.reflect.GeneratedMethodAccessor124.invoke(Unknown Source)
[pmd] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[pmd] at java.lang.reflect.Method.invoke(Method.java:498)
[pmd] at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
[pmd] at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
[pmd] at org.gradle.internal.metaobject.BeanDynamicObject$MetaClassAdapter.invokeMethod(BeanDynamicObject.java:382)
[pmd] at org.gradle.internal.metaobject.BeanDynamicObject.invokeMethod(BeanDynamicObject.java:170)
[pmd] at org.gradle.internal.metaobject.AbstractDynamicObject.invokeMethod(AbstractDynamicObject.java:163)
[pmd] at org.gradle.api.internal.project.antbuilder.AntBuilderDelegate.nodeCompleted(AntBuilderDelegate.java:118)
[pmd] at groovy.util.BuilderSupport.doInvokeMethod(BuilderSupport.java:154)
[pmd] at groovy.util.BuilderSupport.invokeMethod(BuilderSupport.java:67)
[pmd] at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
[pmd] at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
[pmd] at org.gradle.api.plugins.quality.internal.PmdInvoker$_invoke_closure2.doCall(PmdInvoker.groovy:62)
[pmd] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[pmd] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[pmd] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[pmd] at java.lang.reflect.Method.invoke(Method.java:498)
[Truncated]
[pmd] at org.gradle.launcher.daemon.server.exec.LogAndCheckHealth.execute(LogAndCheckHealth.java:55)
[pmd] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120)
[pmd] at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:60)
[pmd] at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
[pmd] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120)
[pmd] at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:72)
[pmd] at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
[pmd] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120)
[pmd] at org.gradle.launcher.daemon.server.exec.HintGCAfterBuild.execute(HintGCAfterBuild.java:44)
[pmd] at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:120)
[pmd] at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:50)
[pmd] at org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:293)
[pmd] at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
[pmd] at org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
[pmd] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[pmd] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[pmd] at java.lang.Thread.run(Thread.java:745)
```
**Running PMD through:** *[CLI]* Jenkins
Answers:
username_1: @username_0 thanks for your report.
This seems to be a duplicate of #234
A fix for that issue has already been proposed at #235, and will be included in PMD 5.5.4 and 5.4.5.
In the meantime, you can disable parallel execution in Gradle.
Gradle will run in parallel if:
* `org.gradle.parallel=true` is present in `gradle.properties`
* `--parallel` flag is used when executing gradle
You can override both values using `--max-workers 1` to force a single worker to always be used, but this will also affect tests running in parallel.
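For example, in a CI job you could run Gradle with `--max-workers 1` (or set `org.gradle.parallel=false` in `gradle.properties`) until a PMD version containing the fix is available.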
Status: Issue closed
|
serverless/examples | 357423834 | Title: Example aws-node-rest-api-with-dynamodb-and-offline timeout
Question:
username_0: Running the example (https://github.com/serverless/examples/tree/master/aws-node-rest-api-with-dynamodb-and-offline) results in '[Serverless-Offline] Your λ handler 'xxx' timed out after 30000ms.'
The table seems to get created ok, but all invocations time out.
Steps to reproduce: Follow the instructions in the README.md
Answers:
username_0: Turned out to be a configuration issue. Closing.
Status: Issue closed
|
hankcs/HanLP | 361177670 | Title: Extracting keywords
Question:
username_0: <!--
The notes and the version number are required; otherwise there will be no reply. If you want a quick response, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
  - [README](https://github.com/username_1/HanLP)
  - [wiki](https://github.com/username_1/HanLP/wiki)
  - [FAQ](https://github.com/username_1/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/username_1/HanLP/issues) and found no answer there either.
* I understand that the open-source community is a free community of people gathered by shared interest, and it assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [ ] I type an x inside these brackets to confirm all of the above.
## Version
<!-- For release builds, state the jar file name without the extension; for GitHub builds, state whether it is the master or portable branch -->
The current latest version is: 1.6.8
The version I am using is: 1.6.8
<!-- The items above are required; everything below is free-form -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
Hello hankcs! I would like to ask something: when extracting keywords, the first step is word segmentation, and in the source code I see that segmentation is done with DefaultSegment. What does this segmenter use to do the segmentation? And if I want to use a different segmenter, is that possible? Thanks for contributing such a great resource, and I hope to get your guidance, haha!
## Reproducing the problem
<!-- What did you do to trigger the problem? For example, modified the code? Modified a dictionary or model? -->
### Steps
1. First ...
2. Then ...
3. Next ...
### Triggering code
```
public void testIssue1234() throws Exception
{
CustomDictionary.add("用户词语");
System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### Expected output
<!-- What correct result do you expect? -->
```
expected output
```
### Actual output
<!-- What did HanLP actually output? What effect did it produce? Where is it wrong? -->
```
actual output
```
## Other information
<!-- Any potentially useful information, including screenshots, logs, config files, related issues, etc. -->
Status: Issue closed
Answers:
username_1: ```java
KeywordExtractor extractor = new TextRankKeyword().setSegment(HanLP.newSegment("感知机"));
String content = "程序员(英文Programmer)是从事程序开发、维护的专业人员。" +
"一般将程序员分为程序设计人员和程序编码人员," +
"但两者的界限并不非常清楚,特别是在中国。" +
"软件从业人员分为初级程序员、高级程序员、系统" +
"分析员和项目经理四大类。";
List<String> keywordList = extractor.getKeywords(content, 5);
System.out.println(keywordList);
``` |
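Here "感知机" selects the perceptron analyzer; if I remember correctly, `HanLP.newSegment` also accepts other segmenter names (e.g. "维特比", "CRF"), and any resulting Segment can be plugged into `setSegment` the same way.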
cityofaustin/techstack | 558351684 | Title: Draft the community service timesheet form
Question:
username_0: This form must be printable due to current usage by defendants and community service supervisors. Formstack does not support printed forms. This must be drafted as a PDF.
- [ ] get design template for PDFs from @chasechenevert or maybe @desigonz
- [ ] using feedback from form workshop draft the form
- [ ] submit to @toribr for design review
- [ ] submit to @ablangworthy27 or @srigdon for content review
- [ ] submit to <NAME> for approval
- [ ] create resulting issues
Answers:
username_1: @username_0 Does this need to go into formstack if the process primarily involves a printed PDF?
Status: Issue closed
|
Sententiaregum/flux-container | 159767962 | Title: don't freeze whole store
Question:
username_0: ### Description of the issue
the store itself should not be frozen, as it should be mockable in unit tests
### Steps to reproduce
- build a store using the ``store()`` API.
- try to mock it in a unit test with a mocking engine such as ``sinon``
### Expected behavior
store should be mockable
Status: Issue closed |
noaa-nws-cpc/cpc.geogrids | 260728622 | Title: Extract Lat and Lon values from known grid resolution.
Question:
username_0: At the moment, the known resolutions in the geogrids package return only the list of grid-point coordinates, with the following code:
```
from cpc.geogrids import Geogrid
geogrid = Geogrid('1deg-global')
print(geogrid.lats)
print(geogrid.lons)
```
For plotting any parameter on a basemap, it is mandatory to have lat and lon data. However, extracting lat and lon information from grib2 is a time-consuming affair. Therefore, it is proposed to include a data-extraction routine/function as part of the geogrids package. This may reduce the data extraction time, as the grid values are known. Maybe something like the following:
```
from cpc.geogrids import Geogrid
geogrid = Geogrid('1deg-global')
print(geogrid.lats_data)
print(geogrid.lons_data)
```
Answers:
username_1: I don't quite understand what you want the "lat/lon data" to look like. Can you give me an example of what it should look like?
username_2: Hi Mike,
How do you extract the grid you have created? I'm interested in converting the grid points into a "standard" GeoDataFrame using pandas.
username_1: This code will extract the lats and lons of all grid points. Is this what you need?
```python
from cpc.geogrids import Geogrid
geogrid = Geogrid('1deg-global')
lats = geogrid.lats
lons = geogrid.lons
print(lats)
print(lons)
[-90. -89. -88. -87. -86. -85. -84. -83. -82. -81. -80. -79. -78. -77.
-76. -75. -74. -73. -72. -71. -70. -69. -68. -67. -66. -65. -64. -63.
-62. -61. -60. -59. -58. -57. -56. -55. -54. -53. -52. -51. -50. -49.
-48. -47. -46. -45. -44. -43. -42. -41. -40. -39. -38. -37. -36. -35.
-34. -33. -32. -31. -30. -29. -28. -27. -26. -25. -24. -23. -22. -21.
-20. -19. -18. -17. -16. -15. -14. -13. -12. -11. -10. -9. -8. -7.
-6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.
8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21.
22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35.
36. 37. 38. 39. 40. 41. 42. 43. 44. 45. 46. 47. 48. 49.
50. 51. 52. 53. 54. 55. 56. 57. 58. 59. 60. 61. 62. 63.
64. 65. 66. 67. 68. 69. 70. 71. 72. 73. 74. 75. 76. 77.
78. 79. 80. 81. 82. 83. 84. 85. 86. 87. 88. 89. 90.]
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13.
14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27.
28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41.
42. 43. 44. 45. 46. 47. 48. 49. 50. 51. 52. 53. 54. 55.
56. 57. 58. 59. 60. 61. 62. 63. 64. 65. 66. 67. 68. 69.
70. 71. 72. 73. 74. 75. 76. 77. 78. 79. 80. 81. 82. 83.
84. 85. 86. 87. 88. 89. 90. 91. 92. 93. 94. 95. 96. 97.
98. 99. 100. 101. 102. 103. 104. 105. 106. 107. 108. 109. 110. 111.
112. 113. 114. 115. 116. 117. 118. 119. 120. 121. 122. 123. 124. 125.
126. 127. 128. 129. 130. 131. 132. 133. 134. 135. 136. 137. 138. 139.
140. 141. 142. 143. 144. 145. 146. 147. 148. 149. 150. 151. 152. 153.
154. 155. 156. 157. 158. 159. 160. 161. 162. 163. 164. 165. 166. 167.
168. 169. 170. 171. 172. 173. 174. 175. 176. 177. 178. 179. 180. 181.
182. 183. 184. 185. 186. 187. 188. 189. 190. 191. 192. 193. 194. 195.
196. 197. 198. 199. 200. 201. 202. 203. 204. 205. 206. 207. 208. 209.
210. 211. 212. 213. 214. 215. 216. 217. 218. 219. 220. 221. 222. 223.
224. 225. 226. 227. 228. 229. 230. 231. 232. 233. 234. 235. 236. 237.
238. 239. 240. 241. 242. 243. 244. 245. 246. 247. 248. 249. 250. 251.
252. 253. 254. 255. 256. 257. 258. 259. 260. 261. 262. 263. 264. 265.
266. 267. 268. 269. 270. 271. 272. 273. 274. 275. 276. 277. 278. 279.
280. 281. 282. 283. 284. 285. 286. 287. 288. 289. 290. 291. 292. 293.
294. 295. 296. 297. 298. 299. 300. 301. 302. 303. 304. 305. 306. 307.
308. 309. 310. 311. 312. 313. 314. 315. 316. 317. 318. 319. 320. 321.
322. 323. 324. 325. 326. 327. 328. 329. 330. 331. 332. 333. 334. 335.
336. 337. 338. 339. 340. 341. 342. 343. 344. 345. 346. 347. 348. 349.
350. 351. 352. 353. 354. 355. 356. 357. 358. 359.]
```
rancher/rancher | 398867056 | Title: Stack stuck in "Upgrade in progress"
Question:
username_0: **What kind of request is this (question/bug/enhancement/feature request):**
bug
**Steps to reproduce (least amount of steps as possible):**
Click on "Upgrade" when a catalog entry proposes an upgrade.
**Result:**
Stacks are *sometimes* stuck in "Upgrade in progress" for hours (sometimes days).
**Other details that may be helpful:**
Maybe related to #17393 and/or #17420
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): 1.6.25
- Installation option (single install/HA): single HA
Status: Issue closed |
XX-net/XX-Net | 267065258 | Title: How do I enable IPv6?
Question:
username_0: How do I enable IPv6?
Answers:
username_1: Mate, read the readme first:
https://github.com/XX-net/XX-Net/issues/6918
https://github.com/XX-net/XX-Net/issues/6991
https://github.com/XX-net/XX-Net/issues/7150
https://github.com/XX-net/XX-Net/issues/7164
https://github.com/XX-net/XX-Net/issues/7241 |
prettier/prettier | 548450338 | Title: TypeScript: Class private fields syntax error
Question:
username_0: ```
  | ^
3 | #y: number;
4 |
5 | constructor(x: number, y: number) {
```
**Expected behavior:**
Should not throw a syntax error. This works with parser set to flow, just not typescript.
Answers:
username_1: This is not yet supported by TypeScript in the stable channel (only in beta).
#7263
username_2: Duplicate of #7263
Status: Issue closed
username_3: Is there a target release for when private-field support for TypeScript will move from beta to GA?
Thanks. |
StefH/AzurePipelinesTest3 | 373863766 | Title: Error building .NET 3.5 with Azure Pipelines
Question:
username_0: Error is
```
2018-10-25T09:43:04.5473047Z ##[section]Starting: Build !
2018-10-25T09:43:04.5481345Z ==============================================================================
2018-10-25T09:43:04.5481539Z Task : .NET Core
2018-10-25T09:43:04.5481654Z Description : Build, test, package, or publish a dotnet application, or run a custom dotnet command. For package commands, supports NuGet.org and authenticated feeds like Package Management and MyGet.
2018-10-25T09:43:04.5481810Z Version : 2.141.0
2018-10-25T09:43:04.5481933Z Author : Microsoft Corporation
2018-10-25T09:43:04.5482063Z Help : [More Information](https://go.microsoft.com/fwlink/?linkid=832194)
2018-10-25T09:43:04.5482607Z ==============================================================================
2018-10-25T09:43:06.7139253Z [command]C:\Windows\system32\chcp.com 65001
2018-10-25T09:43:06.8573931Z Active code page: 65001
2018-10-25T09:43:06.9560566Z [command]"C:\Program Files\dotnet\dotnet.exe" build D:\a\1\s\src\ClassLibrary1\ClassLibrary1.csproj /p:Configuration=Release
2018-10-25T09:43:08.2247559Z Microsoft (R) Build Engine version 15.8.166+gd4e8d81a88 for .NET Core
2018-10-25T09:43:08.2248213Z Copyright (C) Microsoft Corporation. All rights reserved.
2018-10-25T09:43:08.2248354Z
2018-10-25T09:43:09.2270163Z Restoring packages for D:\a\1\s\src\ClassLibrary1\ClassLibrary1.csproj...
2018-10-25T09:43:09.3912004Z Generating MSBuild file D:\a\1\s\src\ClassLibrary1\obj\ClassLibrary1.csproj.nuget.g.props.
2018-10-25T09:43:09.3938115Z Generating MSBuild file D:\a\1\s\src\ClassLibrary1\obj\ClassLibrary1.csproj.nuget.g.targets.
2018-10-25T09:43:09.4021942Z Restore completed in 199.62 ms for D:\a\1\s\src\ClassLibrary1\ClassLibrary1.csproj.
2018-10-25T09:43:09.5865545Z C:\Program Files\dotnet\sdk\2.1.402\Microsoft.Common.CurrentVersion.targets(1179,5): error MSB3644: The reference assemblies for framework ".NETFramework,Version=v3.5" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend. [D:\a\1\s\src\ClassLibrary1\ClassLibrary1.csproj]
2018-10-25T09:43:17.5852328Z ClassLibrary1 -> D:\a\1\s\src\ClassLibrary1\bin\Release\netstandard2.0\ClassLibrary1.dll
2018-10-25T09:43:17.6015749Z
2018-10-25T09:43:17.6016344Z Build FAILED.
2018-10-25T09:43:17.6016684Z
2018-10-25T09:43:17.6017100Z C:\Program Files\dotnet\sdk\2.1.402\Microsoft.Common.CurrentVersion.targets(1179,5): error MSB3644: The reference assemblies for framework ".NETFramework,Version=v3.5" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend. [D:\a\1\s\src\ClassLibrary1\ClassLibrary1.csproj]
2018-10-25T09:43:17.6017479Z 0 Warning(s)
2018-10-25T09:43:17.6017660Z 1 Error(s)
2018-10-25T09:43:17.6017782Z
2018-10-25T09:43:17.6017950Z Time Elapsed 00:00:10.08
```
Answers:
username_0: Linked to https://github.com/username_0/System.Linq.Dynamic.Core/issues/209
Status: Issue closed
username_0: The solution is to add `FrameworkPathOverride` in the project file for net35 (typically a conditional property pointing at the .NET 3.5 reference assemblies). |
TPII20162/BankSys | 185764545 | Title: Define the physical structure of the database
Question:
username_0: I will research and define how our database will be built.
Initially I will try to create a database in the cloud so that everyone accesses the same base through its URL and sends queries through it. If that is not possible, I will create a local database and make the creation script available.
Chosen database management system: Postgres.
Answers:
username_1: It's good to define stages for the database project, if only so that more people can work on it. We could define stages such as:
- design the DB itself (which tables and attributes there will be);
- generate the database scripts (or do that automatically);
- define/implement the form of the DB (cloud or local);
- make the connections between the DB and BankSys.
These are just suggestions.
username_0: Great. I had thought about this but hadn't explained it; I thought I had explained it properly... sorry.
In this issue I am doing the third item on your list:
- define/implement the form of the DB (cloud or local);
Item 1 is somewhat complicated; sometimes it gets done and redone...
I'll go after it as soon as I finish item 3. But if someone already wants to open an issue like "Define the database schema", go ahead, because it doesn't depend on item 3.
Item 2 will be created depending on item 3.
Item 4 will be created last because it depends a little on item 3.
Status: Issue closed
|
soheilhy/cmux | 241952641 | Title: problem serve http + tcp with cmux
Question:
username_0: Hi
I am trying to serve an HTTP and a TCP server with cmux. The HTTP server works great, but I cannot fix a problem with the TCP server.
The TCP server for testing is a simple echo service. The problem is that after the client connects and sends the first message, the server sees a split message (read by two successive conn.Read() calls).
Attached the source code.
Server: cmux_http_tcp.go
```
package main
import (
"github.com/username_1/cmux"
"net"
"net/http"
"fmt"
"time"
)
func tcpServer (l net.Listener) {
// echo service
for {
conn, _ := l.Accept()
fmt.Println("tcp connected")
go func() {
for {
data := make([]byte, 100)
conn.SetReadDeadline(time.Now().Add(100*time.Second))
n, err := conn.Read(data)
if err==nil {
fmt.Println(string(data[:n]))
conn.Write(data)
}
}
}()
}
}
type exampleHTTPHandler struct{}
func (h *exampleHTTPHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "example http response")
}
func serveHTTP(l net.Listener) {
s := &http.Server{
Handler: &exampleHTTPHandler{},
}
if err := s.Serve(l); err != cmux.ErrListenerClosed {
panic(err)
}
}
func both(l net.Listener) {
m := cmux.New(l)
httpL := m.Match(cmux.HTTP1Fast())
tcpL := m.Match(cmux.Any())
go tcpServer(tcpL)
[Truncated]
go keepWriting(conn)
keepReading(conn)
}
```
the client log:
```
➜ networking git:(master) ✗ go run cmux_tcp_client.go
connected to server
*** the first message is split ***
sent: hello tcp server
received: hello tc
received: p server
******* ok for following messages *******************
sent: hello tcp server
received: hello tcp server
^Csignal: interrupt
```
Answers:
username_1: Thanks for reporting this, but this is not incorrect behavior. In fact, there is no guarantee that even a naked TCP connection would preserve message boundaries. This happens on cMux because it reads the minimum number of bytes required to classify the HTTP request and buffers them. When you actually read from the connection, cMux first returns the sniffed bytes and then resorts to reading from the connection.
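If your protocol needs whole messages, the usual fix is to add framing on top of TCP instead of relying on read boundaries. A minimal sketch of a newline-delimited read loop for the client (my own illustration; it assumes messages never contain '\n', and keepWriting would then have to terminate each message with '\n'):
```go
import (
	"bufio"
	"fmt"
	"net"
)

// keepReading accumulates bytes until '\n', so arbitrary TCP splits
// (including cMux handing back its sniffed bytes first) are invisible.
func keepReading(conn net.Conn) {
	r := bufio.NewReader(conn)
	for {
		msg, err := r.ReadString('\n')
		if err != nil {
			return
		}
		fmt.Print("received: ", msg)
	}
}
```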
Status: Issue closed
username_0: @username_1 Thanks for the reply. It makes much sense to me. |
ikedaosushi/tech-news | 485301448 | Title: ZOZOTOWN introduces an AI-powered "similar item search" feature today, letting users search for products similar to the one they are viewing; the ZOZO group accelerates its full-scale use of AI to aim for a smoother shopping experience
Question:
username_0: ZOZOTOWN introduces an AI-powered "similar item search" feature today, allowing users to search for products similar to the ones they are viewing, as the ZOZO group accelerates its full-scale use of AI and aims to provide a smoother shopping experience.
ZOZO, Inc. (headquarters: Chiba City, Chiba Prefecture; CEO: Yusaku Maezawa) operates the fashion e-commerce site "ZOZOTOWN" (https://zozo.
https://ift.tt/30CHh1C |
openSUSE/osem | 286571034 | Title: undefined method uuid
Question:
username_0: **I'm submitting a ..**
- [ ] Bug Report
**Current behavior:**


**Steps to reproduce:**
Rails db:migrate:reset
Answers:
username_1: You shouldn't use `db:migrate:reset`. I would suggest running `db:reset` and then `db:migrate`, if needed.
username_2: @username_0 what's the status of this? Does db:schema:load work properly for you?
username_0: @username_2 Yes I tried it and db:schema:load is working for me.
username_3: Confirmed, the issue is still present.
`db:schema:load` does work for me, but it is at best a workaround. If the migrations don't all run, then there's a problem somewhere. It means the test environment may differ from production, which invalidates all the tests.
pombase/fypo | 137951996 | Title: PMID:22771823
Question:
username_0: increased chromatin silencing at centromere central core
increased histone H4K12 acetylation at subtelomere
increased histone H4K8/16 acetylation at subtelomere
Answers:
username_1: increased chromatin silencing at centromere central core FYPO:0005315
increased histone H4-K12 acetylation at subtelomere FYPO:0005316
increased histone H4-K8 and H4-K16 acetylation at centromere outer repeat FYPO:0005317
username_1: corrected FYPO:0005317 name in previous comment
Status: Issue closed
username_1: edit file: 138d806d362c771009d917f669073b54cb4452af
release: 0c9f107ee56df966ee46641cac1897aa7fa31157 |
BSData/runewars | 359573893 | Title: [Anon] Bug report: Daqan_Lords_(2017).catz
Question:
username_0: **File:** Daqan_Lords_(2017).catz
**BattleScribe version:** 2.01.19
**Platform:** iPhone / iPod / iPad
**Dropbox:** No
**Description:** Spearmen are given the option to take a heavy upgrade even if the unit does not consist of a 3x3 tray formation, which is the only formation that can take a heavy upgrade. 2x1, 2x2, and 3x2 formations should not show the menu for heavy upgrades. (NOTE: I accidentally submitted this same issue in the Waiqar bug submission, which is why this is showing up in both this file and the Waiqar report) |
CyclopsMC/EvilCraft | 194241461 | Title: Poison-Sacs Pufferfish, and plurals
Question:
username_0: Well, I've had this mod for a while and I was thinking: if you can use poisoned potatoes and poison sacs, why can't we use pufferfish?
I mean, they poison you when you eat them. You could craft them into poison sacs, or you could just do the same thing you did with the potatoes.
Maybe you could even make it so that you cut the poison sac out of the pufferfish and can then eat the pufferfish meat.
This was just something that I thought would be interesting.
By the way, I've been using this mod together with another called SaltyMod, which adds salt, salt lakes, and that sort of thing. The dark tanks and endless water source block make a furnace-like item in SaltyMod that extracts salt much more useful.
-username_0
Answers:
username_1: Makes sense, that's an easy thing to add, I can look into doing that.
username_0: great!
Status: Issue closed
|
spf13/viper | 401145730 | Title: Possibility to delete keys
Question:
username_0: Sometimes it might be necessary to delete/unset existing keys. For instance, in my case I want to use `viper.WriteConfig` to dump the config to a file, but I want to filter out certain keys first. I've tried the following:
```
viper.SetDefault("config", nil)
viper.Set("config", nil)
```
But it is interpreted as an empty string in the resulting config file. I also ran into https://stackoverflow.com/questions/52339336/removal-of-key-value-pair-from-viper-config-file but I wasn't able to make it work for root-level keys.
Answers:
username_1: The reason why it is not working is that there is no such key as `config`, but something like `config.key`, `config.key2`, etc.
What you can do is fetch all keys from a viper instance using `AllKeys` and set all keys starting with `config.` to null, but I think that wouldn't work either.
Another option is getting all configuration with `AllSettings` and writing it to a file manually.
Please note that neither option will work with `AutomaticEnv` and env vars, as those only work with direct access (using the `Get*` functions).
Unfortunately implementing the feature you want is not trivial, because "unsetting" would trigger the next configuration source.
username_0: `config` is a valid root key in my case, something like:
```
config: /foo/bar
xxx: yyy
```
Basically I'm trying to maintain in-memory config based on inputs from viper and cobra, i.e. from CLI arguments, env variables and multiple config files. As part of CLI arguments or env variables you would be able to pass a path to the config file that later needs to be considered as part of the rest of the input.
When I want to change something in the config, similar to `kubectl config use-context`, I want to take what's in memory, filter out some keys (e.g. why should the config path be stored in the config itself?), and dump it to the file.
I was able to set the key to null, but when I try to save the file, the key ends up in the YAML with an empty value. The expected behavior would be not to have that key at all.
Writing to the file manually is the only option atm, but it would be nice if I could reuse what's already in the library instead of reinventing the wheel.
There are two potential ways I can see this being implemented. One is to go through every map in memory (`config`, `override`, `defaults`, etc.) and delete the key from each. The second is to improve `AllSettings` to interpret a null value as "no key". The latter seems to break backward compatibility, so it might require a feature flag/option.
username_0: Could you please explain why you think so? Any suggestions/alternatives? I'm pretty new to viper; actually, I'm pretty new to Go...
username_2: Yay!! There's already a patch for this, see https://github.com/spf13/viper/pull/519
username_1: @username_0 sorry, missed your answer.
Viper reads configuration from a number of sources. When you write the configuration, everything is written to a file. This poses several issues in itself.
For example, if you configure Viper to read secrets from a secret store and the rest from a file, your secrets would be written to the file as well. That's not what you usually want.
So using Viper for writing to a file only makes sense if you only use a file as the source as well. Even then, you have limited access to the configuration itself (this issue proves that).
Based on what the use case is, I usually prefer using Viper when I need to configure an application, and use custom logic when I need to write files. Eg. use a separate Viper instance for file config. Or just read the config file manually, merge in viper, and write back the original config manually.
Take a look at my suggestions in my first comment as well. My advice: avoid writing with Viper. I'm actually going to propose removing config writing if a v2 Viper ever becomes a thing.
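A minimal sketch of the separate-instance approach (the path and key are illustrative):
```go
// A dedicated Viper instance backed only by the file, so WriteConfig
// never captures values that came from env vars or other sources.
// Assumes: import "github.com/spf13/viper"
func updateFileConfig(path, key string, value interface{}) error {
	fileCfg := viper.New()
	fileCfg.SetConfigFile(path) // e.g. "config.yaml"
	if err := fileCfg.ReadInConfig(); err != nil {
		return err
	}
	fileCfg.Set(key, value) // mutate only what belongs in the file
	return fileCfg.WriteConfig()
}
```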
username_2: @username_0 As a hack (if you are only using a config file), if you really want to use `viper.WriteConfig`, you could do something like the following:
```golang
func Unset(key string) error {
configMap := viper.AllSettings()
delete(configMap, key)
encodedConfig, _ := json.MarshalIndent(configMap, "", " ")
err := viper.ReadConfig(encodedConfig)
if err != nil {
return err
}
return viper.WriteConfig()
}
```
Note: do proper error handling
username_3: I had to convert the `[]byte` here to a reader; otherwise you receive:
```cannot use (type []byte) as type io.Reader in argument to viper.ReadConfig```
Here's the modified hack:
```go
func Unset(key string) error {
configMap := viper.AllSettings()
delete(configMap, key)
encodedConfig, _ := json.MarshalIndent(configMap, "", " ")
err := viper.ReadConfig(bytes.NewReader(encodedConfig))
if err != nil {
return err
}
return viper.WriteConfig()
}
```
username_4: A more complete version of Unset which deals with more than just root-level items.
It's still not perfect: it will save defaults, and it has no concept of where values came from, so it could save values from `ENV` to the config, including secure vars, which isn't desirable.
```golang
func Unset(vars ...string) error {
	cfg := viper.AllSettings()
	for _, key := range vars {
		vals := cfg // reset to the root map for each key
		parts := strings.Split(key, ".")
		for i, k := range parts {
			v, ok := vals[k]
			if !ok {
				// Doesn't exist, no action needed
				break
			}
			switch len(parts) {
			case i + 1:
				// Last part, so delete.
				delete(vals, k)
			default:
				m, ok := v.(map[string]interface{})
				if !ok {
					return fmt.Errorf("unsupported type: %T for %q", v, strings.Join(parts[0:i], "."))
				}
				vals = m
			}
		}
	}
	b, err := json.MarshalIndent(cfg, "", " ")
	if err != nil {
		return err
	}
	if err := viper.ReadConfig(bytes.NewReader(b)); err != nil {
		return err
	}
	return viper.WriteConfig()
}
``` |
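Usage would be e.g. `Unset("app", "db.password")` before writing; just keep in mind the caveat above: the file is rebuilt from the merged `AllSettings()`, so defaults and env-derived values can still end up in it.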
typora/typora-issues | 169962419 | Title: Bug on symbol ` in single code fence
Question:
username_0: the output will not be interpreter correctly on several other online markdown editors too.
Status: Issue closed
Answers:
username_1: In my testing, that is the way what all markdown editors parses inline code.
Should use "<code><code>ctrl+`</code></code>" in this case. |
w3c/low-vision-a11y-tf | 144382607 | Title: Update information on acuity
Question:
username_0: When I have my contacts in -- I am visually impaired but I certainly don't have blurry vision. My vision lacks detail. When I take my contacts out I have blurry vision. Blurry vision is caused by a refraction issue -- the way the light hits the retina. With correction I have no refraction issue but the macula where the fine detail of vision is seen has less sensitivity and thus I don't see those fine details -- but blurry is not an accurate representation of the situation.
Answers:
username_0: I feel blurry is a stereotype of what is often seen by people with low vision and I don't want to perpetuate something that is not totally accurate even if it is common practice. It is the decrease in detail in many cases. Also indicating what causes acuity decrease is useful information IMO to leave in.
username_1: What do you think?
Re: listing the causes — Based on the target audience and the purpose of this document, the Task Force agreed a while ago to put such details in a separate document. Wayne was working on something that I think might provide information such as [this table](http://www.tsbvi.edu/eye-conditions) -- except with conditions, impact on vision, and ICT considerations. He was talking about providing it through a database so it could be presented by conditions or by impact on vision (e.g., visual acuity). Some of his work on this is in [Ontology issue #3](https://github.com/w3c/low-vision-a11y-tf/issues/3) and the [Types of Low Vision wiki page](https://www.w3.org/WAI/GL/low-vision-a11y-tf/wiki/Types_of_Low_Vision).
username_0: This is better -- I'll just accept that we don't agree as I don't want to hold this up further -- so this change is fine.
Regarding the term low visual acuity....
I often have people with correctable vision come up to me and say they are legally blind or have low vision (without the glasses) but with the glasses they see within the typical visual acuity range. They aren't legally blind and they don't have low vision if their vision is correctable. That is what I believe we are trying to say so we need to make it absolutely clear in this section that if your vision is correctable out of the range of low vision and your fields do not classify you as low vision then that person is not generally seen as having low vision. We should tie this back to 2.1.
username_2: Yes, I don't have blurry vision. Not in my head. It's more like small stuff isn't there.
There are many ways low vision occurs. The retina is one big class, refractive error is another, cloudy fluids are another. So I think editing cause is good. Low visual acuity just means you can't see small stuff.
Wayne
username_1: Thoughts?
username_0: I like it without the blurry bit.
Jonathan
Status: Issue closed
username_3: per minutes https://www.w3.org/2016/04/13-lvtf-minutes.html#item01
requirements.html updated 26 apr 2016
username_3: TF member comment, resolved in discussion on telecon. Resolution passed. Marked Reply Sent to indicate issue resolution and documentation. |
FuelRats/pipsqueak | 140874754 | Title: Mecha resiliency and the API
Question:
username_0: Poking all of you since this is somewhat multidisciplinary
@trezy @tyrope @xlexi @kenneaal
(Wall of text crits for 9001 damage).
The current API model for "skunkworks" will be stripping out all of the HTTP functionality and speaking only Websockets. This drastically simplifies some of the design bits, considering it needs WS anyways for notifications and not having to implement both is... convenient.
It's been theoretically possible for Mecha to operate in a hybrid "online, with offline fallback" mode for awhile now, though I hadn't given it much thought of how it'd look. I also don't have a convenient way to test various failure modes since I haven't been able to get the API to run locally to be able to do things like... 'crash' it in the middle of Mecha talking to it.
One part of this is figuring out some of the specifics of when Mecha should consider itself offline vs online, how exactly it should go about checking for restored connectivity, and how it should reconcile vs the API after being disconnected for a set amount of time.
To that end, here's some tidbits on the current setup:
## Property Change Tracking
Mecha tracks "pending" property changes. That is, I can do this:
```
rescue.platform = 'pc'
rescue.quotes.append("quote")
```
and Mecha will know that both `platform` and `quotes` have changed. Furthermore, for some simple cases of modification (like appending to a `list`, add/remove from a `set`, or add/remove/set in a `dict`) it maintains a limited amount of state that can be used to re-apply the same change against an updated collection. (In essence, it's kind of like `git rebase`). More complicated modifications that can't be reliably repeated (like removing or altering an item in a list) flag the collection in such a way where Mecha knows it *can't* reliably merge and instead will overwrite whatever the API provides in its entirety.
This is currently used by the mechanism for saving cases to determine which attributes to send -- once a case is successfully saved, the particular properties are 'committed' which essentially removes them from the set of changed properties and tells them to discard any pending state they might have.
This is all handled under the hood, individual commands just change properties and call `rescue.save()` without having to deal with the minutiae.
## Async saving and applying updates from the API
Mecha immediately applies any changes locally, reports on their affects, and then queues the relevant API call. It does have the ability to report any subsequent failures, but does not have the ability to roll back state.
Mecha also applies any updated rescue messages it receives from the API against a rescue, with one exception: Any properties with pending changes keep their existing (Mecha-supplied) values rather than what the API says. This is *probably* the correct behavior, because if they're flagged as changed in Mecha that means it's probably trying to tell the API about the change and hasn't yet. The exception to this exception is collections: as mentioned above, they'll replay their changes against the API version of the data if they believe it is feasible.
There is one *potential* problem here: If there are multiple pending updates to a case and they end up executing out of order. In theory the change protection should prevent any mayhem from happening. In practice, Mecha has a per-rescue "lock" -- while a case is locked no other command may modify it until the lock is released. If everything is healthy with the API, this should be unnoticeable -- but if there's issues saving cases, it'll take longer for the lock to be released (if the API is slow or calls are otherwise timing out) and may somewhat slow down multiple actions on the same case.
The rescue lock only applies to writes -- !quote, !list, etc are unaffected.
## Timeouts and Retries.
Currently, Mecha allows for a 30 second timeout before giving up on a change. There's not yet any reply mechanism -- the case will just stay out of sync until something triggers it to save again and that save succeeds. This is bad and needs to be fixed.
There's also issues retrying some requests -- appending quotes as a convenience method is great, until this scenario occurs:
1. Mecha tries to append a quote.
2. The request times out or the connection is lost.
3. Unknown to Mecha, the quote is successfully appended.
4. Mecha retries the action
5. The quote gets duplicated.
(Replace "append a quote" with "Create a new case" for a bigger issue.)
One option here might be to treat a timeout as "Okay, we're totally offline" until some other connectivity test proves otherwise and then reconcile state (but we need to figure out how to reconcile state.)
Currently, only appending quotes and opening new cases are not idempotent and thus cannot safely be retried.
## Reconciliation after downtime.
**The big question here is... what happens when whatever issue kept Mecha away from the API is resolved?**
There's a few options here:
- Mecha can assume it's authoritative and completely overwrite the case with its own state. (This can merge in anything that doesn't have pending changes as normal). This is currently what I'm leaning towards, but is not without drawbacks -- anyone using !sub on a case in Mecha will overwrite all of the changes to that case's quotes made via some other API user if the issue is due to Mecha losing its network connection rather than the server.
- Mecha can discard all of its changes and use the API's versions instead. This is probably the wrong option.
- Mecha can pick some hybrid approach *(but what?)* based on case attributes -- like what Mecha has for dateModified vs what the server has. This would be super easy if individual attributes were versioned, but that's also overkill.
- Something else you guys can come up with.
Something else Mecha will need to do is remember all of its closed cases until API connectivity is restored -- but I want a way to be able to access recently closed cases from Mecha anyways. (My thought is they'd use negative index numbers and rotate between, e.g. -1..-10)
## What exactly is downtime, and what is an error.
While connection loss is easy to identify, there's a fine between "the API is being slow/timing out" and "the API is allowing connections, but is completely unresponsive to commands." Figuring out where to draw this line in Mecha for it to switch from online to offline mode is going to be key. Figuring out when functionality is restored is also important.
Also, Mecha needs to be able to distinguish from an error message that is -- say -- complaining about MongoDB or Elasticsearch being down (which corresponds to "API is borked, go offline!"), error messages telling it to retry something/etc (e.g versioning conflicts, if/when the API gets them), and error messages saying "Nope, you screwed up, don't ever try that again." Right now most API errors are pretty vague and nonspecific.
Answers:
username_1: TL;DR? |
SimpleSoftwareIO/simple-qrcode | 256468057 | Title: Class 'QrCode' not found
Question:
username_0: Hi,
i try to use the QrCode::generate('Make me into a QrCode!'); command directly in a view.
but i get the error message Class 'QrCode' not found.
https://imgur.com/a/sbDaq
i use laravel 5.4
my app.php
`<?php
return [
/*
|--------------------------------------------------------------------------
| Application Name
|--------------------------------------------------------------------------
|
| This value is the name of your application. This value is used when the
| framework needs to place the application's name in a notification or
| any other location as required by the application or its packages.
|
*/
'name' => env('APP_NAME', 'Laravel'),
/*
|--------------------------------------------------------------------------
| Application Environment
|--------------------------------------------------------------------------
|
| This value determines the "environment" your application is currently
| running in. This may determine how you prefer to configure various
| services your application utilizes. Set this in your ".env" file.
|
*/
'env' => env('APP_ENV', 'production'),
/*
|--------------------------------------------------------------------------
| Application Debug Mode
|--------------------------------------------------------------------------
|
| When your application is in debug mode, detailed error messages with
| stack traces will be shown on every error that occurs within your
| application. If disabled, a simple generic error page is shown.
|
*/
'debug' => env('APP_DEBUG', false),
/*
|--------------------------------------------------------------------------
| Application URL
|--------------------------------------------------------------------------
|
| This URL is used by the console to properly generate URLs when using
[Truncated]
'Queue' => Illuminate\Support\Facades\Queue::class,
'Redirect' => Illuminate\Support\Facades\Redirect::class,
'Redis' => Illuminate\Support\Facades\Redis::class,
'Request' => Illuminate\Support\Facades\Request::class,
'Response' => Illuminate\Support\Facades\Response::class,
'Route' => Illuminate\Support\Facades\Route::class,
'Schema' => Illuminate\Support\Facades\Schema::class,
'Session' => Illuminate\Support\Facades\Session::class,
'Storage' => Illuminate\Support\Facades\Storage::class,
'URL' => Illuminate\Support\Facades\URL::class,
'Validator' => Illuminate\Support\Facades\Validator::class,
'View' => Illuminate\Support\Facades\View::class,
'QrCode' => SimpleSoftwareIO\QrCode\Facades\QrCode::class,
'Form' => 'Collective\Html\FormFacade',
),
];
```
my
Status: Issue closed
Answers:
username_1: See: https://github.com/SimpleSoftwareIO/simple-qrcode/issues/86 |
dojo/cli-build-webpack | 277052690 | Title: Problems building the app with webpack directly
Question:
username_0: **Bug** <!-- delete as appropriate -->
<!-- Summary of enhancement or bug-->
Package Version: <!-- package version -->
Before `dojo eject`
```
The currently installed groups are:
build (@dojo/cli-build-webpack) 0.2.1
create (@dojo/cli-create-app) 0.2.0
test (@dojo/cli-test-intern) 0.2.0
You are currently running @dojo/cli 0.2.0
```
After `dojo eject`
```
create (@dojo/cli-create-app) 0.2.0
You are currently running @dojo/cli 0.2.0
```
**Code**
If I run `dojo create --name eject-test` and then `dojo eject`, I get an instruction saying that I can run:
```
./node_modules/.bin/webpack --config ./config/build-webpack/webpack.config.js
```
However, I get errors:
```
ERROR in /dojo-2-app/node_modules/@dojo/test-extras/harness.d.ts
(3,10): error TS2305: Module '"/dojo-2-app/node_modules/@dojo/widget-core/interfaces"' has no exported member 'ClassesFunction'.
ERROR in ./src/main.css
Module build failed: ModuleBuildError: Module build failed: Error: No PostCSS Config found in: /dojo-2-app/config/build-webpack/postcss.config.js
at /dojo-2-app/node_modules/postcss-load-config/index.js:51:26
at <anonymous>
at runLoaders (/dojo-2-app/node_modules/webpack/lib/NormalModule.js:192:19)
at /dojo-2-app/node_modules/loader-runner/lib/LoaderRunner.js:364:11
at /dojo-2-app/node_modules/loader-runner/lib/LoaderRunner.js:230:18
at context.callback (/dojo-2-app/node_modules/loader-runner/lib/LoaderRunner.js:111:13)
at Promise.resolve.then.then.catch (/dojo-2-app/node_modules/postcss-loader/lib/index.js:176:71)
at <anonymous>
@ multi @dojo/shim/main @dojo/shim/browser ./src/main.css ./src/main.ts
ERROR in ./src/widgets/styles/HelloWorld.m.css
Module build failed: ModuleBuildError: Module build failed: Error: No PostCSS Config found in: /dojo-2-app/config/build-webpack/postcss.config.js
at /dojo-2-app/node_modules/postcss-load-config/index.js:51:26
at <anonymous>
at runLoaders (/dojo-2-app/node_modules/webpack/lib/NormalModule.js:192:19)
at /dojo-2-app/node_modules/loader-runner/lib/LoaderRunner.js:364:11
at /dojo-2-app/node_modules/loader-runner/lib/LoaderRunner.js:230:18
at context.callback (/dojo-2-app/node_modules/loader-runner/lib/LoaderRunner.js:111:13)
[Truncated]
Module build failed: Error: No PostCSS Config found in: /dojo-2-app/config/build-webpack/postcss.config.js
at /dojo-2-app/node_modules/postcss-load-config/index.js:51:26
at <anonymous>
Child extract-text-webpack-plugin:
[0] ./~/@dojo/webpack-contrib/css-module-decorator-loader!./~/css-loader?modules&sourceMap&importLoaders=1&localIdentName=[hash:base64:8]!./~/postcss-loader/lib?{"config":{"path":"/dojo-2-app/config/build-webpack/postcss.config.js"}}!./src/main.css 340 bytes {0} [built] [failed] [1 error]
ERROR in ./~/@dojo/webpack-contrib/css-module-decorator-loader!./~/css-loader?modules&sourceMap&importLoaders=1&localIdentName=[hash:base64:8]!./~/postcss-loader/lib?{"config":{"path":"/dojo-2-app/config/build-webpack/postcss.config.js"}}!./src/main.css
Module build failed: Error: No PostCSS Config found in: /dojo-2-app/config/build-webpack/postcss.config.js
at /dojo-2-app/node_modules/postcss-load-config/index.js:51:26
at <anonymous>
Child extract-text-webpack-plugin:
[0] ./~/@dojo/webpack-contrib/css-module-decorator-loader!./~/css-loader?modules&sourceMap&importLoaders=1&localIdentName=[hash:base64:8]!./~/postcss-loader/lib?{"config":{"path":"/dojo-2-app/config/build-webpack/postcss.config.js"}}!./~/@dojo/webpack-contrib/css-module-dts-loader?type=css!./src/widgets/styles/HelloWorld.m.css 340 bytes {0} [built] [failed] [1 error]
ERROR in ./~/@dojo/webpack-contrib/css-module-decorator-loader!./~/css-loader?modules&sourceMap&importLoaders=1&localIdentName=[hash:base64:8]!./~/postcss-loader/lib?{"config":{"path":"/dojo-2-app/config/build-webpack/postcss.config.js"}}!./~/@dojo/webpack-contrib/css-module-dts-loader?type=css!./src/widgets/styles/HelloWorld.m.css
Module build failed: Error: No PostCSS Config found in: /dojo-2-app/config/build-webpack/postcss.config.js
at /dojo-2-app/node_modules/postcss-load-config/index.js:51:26
at <anonymous>
```
The contents of `config/build-webpack/` are just `webpack.config.js`
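For anyone hitting this before a fix lands: the "No PostCSS Config found" errors suggest the eject step never wrote a `postcss.config.js` next to the webpack config. A minimal sketch that may unblock the build -- the empty plugin list is an assumption, since the ejected build may rely on specific plugins:

```ts
// config/build-webpack/postcss.config.js -- plain CommonJS, shown here in
// TypeScript-compatible form. An empty plugin list merely satisfies
// postcss-load-config; the real build may need actual plugins here.
module.exports = {
  plugins: []
};
```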
Answers:
username_1: Since we are rewriting this, we might not address this particular issue in advance of the release. Keeping it open to help ensure that the rewrite does not present the same issues when ejecting.
username_2: Eject seems to be working as expected in the new `cli-build-app` command
Status: Issue closed
|
KonduitAI/konduit-serving | 510374722 | Title: Create a GUI for generating InferenceConfiguration
Question:
username_0: A GUI walking a user through creating the desired configuration file for whatever kind of pipeline he wants to serve.
Answers:
username_0: @maxpumperla Ideas here? I have a few things I wanna talk about, maybe on a call. We can list the summary here and break it down into individual issues.
mregni/EmbyStat | 541427116 | Title: Null ref exception in TVDB client
Question:
username_0: Running the sync takes a VERY LONG time and all shows return the same null ref exception.
```
2019-12-13 14:38:40.1826 [INFO] MEDIASYNC-JOB Lets start processing show
2019-12-13 14:38:40.1826 [INFO] MEDIASYNC-JOB Logging in on the Tvdb API.
2019-12-13 14:38:40.1826 [INFO] THETVDB-CLIENT Logging in on theTVDB API with key: <KEY>
2019-12-13 14:38:40.7966 [INFO] MEDIASYNC-JOB Found 169 show for Hentai library
2019-12-13 14:38:41.2000 [INFO] THETVDB-CLIENT Call to THETVDB: https://api.thetvdb.com/series//episodes?page=1
2019-12-13 14:38:41.3277 [ERROR] MEDIASYNC-JOB Can't seem to process show Ai no Katachi: Ecchi na Onnanoko wa Kirai... Desuka?, check the logs for more details!
2019-12-13 14:38:41.3277 [ERROR] System.ArgumentNullException: Value cannot be null.
Parameter name: source
at System.Linq.Enumerable.Where[TSource](IEnumerable`1 source, Func`2 predicate)
at EmbyStat.Clients.Tvdb.TvdbClient.GetEpisodes(String seriesId, CancellationToken cancellationToken) in d:\a\1\s\EmbyStat.Clients.Tvdb\TvdbClient.cs:line 66
at EmbyStat.Jobs.Jobs.Sync.MediaSyncJob.ProgressMissingEpisodesAsync(Show show, CancellationToken cancellationToken) in d:\a\1\s\EmbyStat.Jobs\Jobs\Sync\MediaSyncJob.cs:line 330
at EmbyStat.Jobs.Jobs.Sync.MediaSyncJob.GetMissingEpisodesFromTvdbAsync(Show show, CancellationToken cancellationToken) in d:\a\1\s\EmbyStat.Jobs\Jobs\Sync\MediaSyncJob.cs:line 306 System.ArgumentNullException: Value cannot be null.
Parameter name: source
at System.Linq.Enumerable.Where[TSource](IEnumerable`1 source, Func`2 predicate)
at EmbyStat.Clients.Tvdb.TvdbClient.GetEpisodes(String seriesId, CancellationToken cancellationToken) in d:\a\1\s\EmbyStat.Clients.Tvdb\TvdbClient.cs:line 66
at EmbyStat.Jobs.Jobs.Sync.MediaSyncJob.ProgressMissingEpisodesAsync(Show show, CancellationToken cancellationToken) in d:\a\1\s\EmbyStat.Jobs\Jobs\Sync\MediaSyncJob.cs:line 330
at EmbyStat.Jobs.Jobs.Sync.MediaSyncJob.GetMissingEpisodesFromTvdbAsync(Show show, CancellationToken cancellationToken) in d:\a\1\s\EmbyStat.Jobs\Jobs\Sync\MediaSyncJob.cs:line 306
2019-12-13 14:38:41.7119 [INFO] MEDIASYNC-JOB Processed (0/169) Ai no Katachi: Ecchi na Onnanoko wa Kirai... Desuka?
2019-12-13 14:38:41.7433 [INFO] THETVDB-CLIENT Call to THETVDB: https://api.thetvdb.com/series//episodes?page=1
2019-12-13 14:38:41.8320 [ERROR] MEDIASYNC-JOB Can't seem to process show Akiba Girls, check the logs for more details!
2019-12-13 14:38:41.8320 [ERROR] System.ArgumentNullException: Value cannot be null.
Parameter name: source
```
Answers:
username_1: Just reported this on Emby forum. Series without TVDBid throw this error. But I only have one.
https://emby.media/community/index.php?/topic/56640-developing-a-standalone-embystat-server/?p=825489
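That diagnosis matches the log above -- note the empty path segment in `https://api.thetvdb.com/series//episodes?page=1`. A rough sketch of the kind of guard that would avoid both the bogus call and the null source passed to `Where()` (TypeScript pseudocode with invented names; the actual client is C#):

```ts
interface Episode { seasonNumber: number; episodeNumber: number; }
interface TvdbPage { data: Episode[] | null; }

// Stand-in for the real TVDB client; names are illustrative only.
declare const tvdbClient: { getEpisodes(id: string): Promise<TvdbPage | null> };

async function getMissingEpisodes(show: { tvdbId?: string }): Promise<Episode[]> {
  if (!show.tvdbId) return [];        // no TVDB id: skip the lookup entirely
  const page = await tvdbClient.getEpisodes(show.tvdbId);
  return page?.data ?? [];            // guard the null payload behind the crash
}
```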
username_0: Looks like the same exception indeed! I'll have a look when the annoying duplicate error is fixed!
Status: Issue closed
username_0: This should be fixed in version .15 If not. Feel free to re-open! |
MicrosoftDocs/azure-docs | 401184338 | Title: Unable to proceed with request error
Question:
username_0: I was able to create a CosmosDB database and collection on Azure successfully, but I get this error when trying to run the dot net app:
DocumentClientException: Unable to proceed with the request. Please check the authorization claims to ensure the required permissions to process the request.
ActivityId: f3c72700-95ac-4865-944f-20aca275e247, Microsoft.Azure.Documents.Common/2.1.0.0
I'm using the URI as the 'endpoint' and the Read-Write Primary Key as the 'authKey' values but am not able to get the app to work. Any help would be appreciated. Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4255f572-38cc-08aa-6fb7-08c919713cf9
* Version Independent ID: c04fe4dd-e75d-1028-8f2e-154d8c303da2
* Content: [Build a .NET web app with Azure Cosmos DB using the SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/create-sql-api-dotnet)
* Content Source: [articles/cosmos-db/create-sql-api-dotnet.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cosmos-db/create-sql-api-dotnet.md)
* Service: **cosmos-db**
* GitHub Login: @SnehaGunda
* Microsoft Alias: **sngun**
Answers:
username_0: Hi, thanks for your reply. Yes, I followed those steps exactly and I get the same error message. Additionally, I tried to use both "Tasks" and "ToDoList" as the database, but neither of them worked. I have attached a screenshot of the error message here.
Thanks,
Aditya
username_1: +1.
I do get the same error.
username_2: Same issue here.
username_3: @SnehaGunda Could you please look into this issue, which multiple customers are reporting?

username_2: This error also occurs after commenting out the following two lines:
```csharp
CreateDatabaseIfNotExistsAsync().Wait();
CreateCollectionIfNotExistsAsync().Wait();
```
I had suspected the problem was with creating a database or collection, but I still receive the error within the `GetItemsAsync` method on the line where the query is executed (`results.AddRange(await query.ExecuteNextAsync<T>());`).
I have tried with a brand new Cosmos DB instance, and even run across this error when attempting to read or write to a different database and collection within that instance, from a different application. I've also attempted using older versions of the **Microsoft.Azure.DocumentDB.Core** NuGet package to no avail.
username_4: There is a regression in the portal that turns the firewall on for every newly created account. You can turn it off on the firewall blade. If you have further questions, please ping us at <EMAIL> Thanks!
username_2: @username_4 That worked! How did I miss that? It is more obvious with Azure SQL where it tells you that you need to add your IP to the firewall. I've never seen that as an option for Cosmos DB. Would be nice if there was something that told you that you need to configure it on the Overview blade.
username_4: cool! thanks for the feedback!
username_0: This works for me as well! Thanks!
Status: Issue closed
username_5: We are getting the same exceptions, and we cannot allow all networks.
We are accessing Cosmos DB from Azure Functions and have enabled the following settings:
- Accept connections from within public Azure datacenters
- Allow access from Azure Portal
It was working the day before, and now we are getting the exception.
Thanks in advance. |
ic-labs/django-icekit | 186964222 | Title: Refactor asset library
Question:
username_0: From https://app.assembla.com/spaces/icekit/simple_planner#/ticket:39
Latest commit at icekit.django-icekit2:039ae87bab202c76b63d779c2f4f74f4f45fcbd3
1. All assets have
1.1. Polymorphic subtypes
1.2. List of links to places each asset is used
1.3. Caption
1.4. Title
1.5. Asset category
1.6. Admin notes
1.7. Thumbnail image for listings
Create hackable asset subtypes for:
- Images
- Videos (use oembed vimeo/youtube)
- Audio (use oembed soundcloud)
- Files
- Slideshows
Ideally there should be one admin for all types, with a filter by type/category, and search by title/admin notes and subtype-specific fields if possible.
Answers:
username_0: Since assets will be publishable, remove the is_active field (which isn't used currently AFAIK) |
marktext/marktext | 1148745014 | Title: Not able to download Mac OS version from the homepage.
Question:
username_0: <!--
- Please search for issues that matches the one you want to file and use the thumbs up emoji.
- Please make sure your application version is up to date.
-->
### Description
<!-- Description of the bug -->
- [ ] Can you reproduce the issue? <!-- no: `[ ]` or yes: `[x]` -->
### Steps to reproduce
<!-- Steps how the issue occurred. -->
1. [First step]
2. [Second step]
3. [and so on...]
**Expected behavior:**
<!-- What you expected to happen -->
**Actual behavior:**
<!-- What actually happened -->
**Link to an example: [optional]**
<!-- If you're reporting a bug that's not reproducible, or it's hard to description, please paste a screenshot of reproducing this issue - gif format is appropriate -->
### Versions
- MarkText version:
- Operating system:
Answers:
username_1: Yep. It should have directed you to `marktext-x64.dmg`, not `marktext.dmg`.
username_0: But it points to a broken page now. The button is not working.
username_2: I think this issue belongs to the [MarkText/website](https://github.com/marktext/website.git) repository, so I just created a new issue in that repository and submitted some code to fix the problem.
microbialphenotypes/OMP-ontology | 156753500 | Title: Cryptococcus NTR: Haploid fruiting phenotype
Question:
username_0: NTR
1) Haploid fruiting phenotype
xref: GO:0000905 "sporocarp development involved in asexual reproduction"
2) Presence of haploid fruiting (is_a haploid fruiting phenotype)
3) absence of haploid fruiting (is_a haploid fruiting phenotype)
4) altered haploid fruiting (is_a haploid fruiting phenotype)
5) increased haploid fruiting (is_a altered haploid fruiting)
6) decreased haploid fruiting (is_a altered haploid fruiting)
7) increased rate of haploid fruiting (is_a altered haploid fruiting)
8) decreased rate of haploid fruiting (is_a altered haploid fruiting)
Answers:
username_1: Haploid fruiting, also referred to as "monokaryotic fruiting" in Cryptococcus, involves the formation of a "fruit body" or "fruiting body." This term is used for fungi, bacteria and slime molds, but the structures in these 3 groups are not equivalent. When identical terms are used differently by disparate research communities, a means of distinguishing among these species is important. The term "haploid fruiting body formation" is the term that I am most concerned about. One possible solution is to prepend the terms the way some GO terms are constructed...for example, the terms "cell wall" and "fungal-type cell wall" are used. This is one example of a conflicting term but others may be encountered. The use of the "species-type" has precedent in the GO and is straightforward.
Here are some descriptions for each type of fruiting body.
Fruiting body (fungi), a multicellular structure on which spore-producing structures, such as basidia or asci, are borne.
Fruiting body (bacteria), the aggregation of myxobacterial cells when nutrients are scarce (not to be confused with those in fungi)
Fruiting body (slime mold), the sorophore and sorus of a slime mold (not to be confused with sporocarp, a multicellular structure on which spore-producing structures are borne).
My request is for the fungal terms only. The myxobacterial and slime mold terms are for comparison only.
Possible 'formation' terms,
1) "fungal-type fruiting body formation"
2) "myxobacterial-type fruiting body formation"
3) "slime mold-type fruiting body formation" OR "slime mold-type multicellular fruiting body formation"
Possible process terms, only the fungal term is requested:
1) "fungal-type haploid fruiting" with synonym "monokaryotic fruiting"
2) "myxobacterial-type fruiting"
3) "slime mold-type fruiting"
Please consider this issue and comment. Deciding on a general strategy for resolving term collisions will minimize this problem when it arises.
Thank you,
Diane
username_2: Since 'fruiting body' is used in so many different ways, I suggest that we use 'fruiting body phenotype' as the overall umbrella term for the node and then have branches for distinct types of fruiting bodies, such as 'sporocarp (fungi)' for fungal fruiting bodies. See example:
fruiting body phenotype
.....fruiting body formation phenotype
..........sporocarp (fungi) formation phenotype
..........sporocarp (fungi) morphology phenotype
Question: I'm not sure what the distinction is between fruiting body formation and fruiting? Is one to indicate the presence/absence of fruiting bodies and other phenotypes affecting the structure? and fruiting would be for phenotypes that affect the process of fruiting? For other structures, we've used 'morphology' to cover the former and 'formation' to cover the latter.
Debby
username_1: Hi Debby,
The Basidiomycete "sporocarp" is called a "basidiocarp" and is a sexual structure. Haploid fruiting in Cryptococcus is an asexual process that occurs in the absence of a mating partner and is specifically referred to as "haploid" or "monokaryotic" fruiting to distinguish it from the sexual (dikaryotic) fruiting process. Haploid fruiting does not occur in all fungi and their are other subtle differences between mating (fruiting) and haploid fruiting in Cryptococcus. “Fused clamps” are formed between 2 clamp cells during mating and “unfused clamps” are seen during haploid fruiting.
The NTR for fruiting body formation is #110.
Terms for "haploid fruiting" phenotype and "haploid fruiting body formation" phenotype are both needed. The "haploid fruiting body" is the spore-bearing structure that forms during the process of "haploid fruiting." The fruiting process can be impaired without affecting "haploid fruit body formation" such as when no spores form on the fruit body or if the spores that form are inviable. For an "abolished haploid fruiting body formation" phenotype, haploid fruiting would have to be initiated and a change in the number of shape of the fruiting bodies would be observed. If the haploid fruiting process was absent or impaired prior to the stage when haploid fruiting bodies form, the appropriate term would be "haploid fruiting phenotype."
Let me know if this term request needs further clarification.
Thank you!
Diane
username_1: This term request is for the process of haploid (monokaryotic) fruiting and is distinct from haploid fruiting body formation.
haploid fruiting
synonym: monokaryotic fruiting
Def: A developmental process of asexual reproduction that is similar to fruiting during sexual reproduction but is the result of mating between strains of the same mating-type or self-mating under conditions of nutrient deprivation.
NTR's:
haploid fruiting phenotype
altered haploid fruiting phenotype
increased haploid fruiting phenotype
decreased haploid fruiting phenotype
abolished haploid fruiting phenotype
Thank you,
Diane
username_2: Please review the terms I made for haploid fruiting.
+[Term]
+id: OMP:0007542
+name: haploid fruiting phenotype
+def: "An asexual reproduction phenotype where progeny are contained in a fruiting body that forms as the result of mating between strains of the same mating-type or self-mating." [OMP:DAS]
+[Term]
+id: OMP:0007543
+name: presence of haploid fruiting
+def: "A haploid fruiting phenotype where a microbe can reproduce by the process of haploid fruiting." [OMP:DAS]
+synonym: "presence of monokaryotic fruiting" EXACT []
+is_a: OMP:0007542 ! haploid fruiting phenotype
+[Term]
+id: OMP:0007544
+name: absence of haploid fruiting
+def: "A haploid fruiting phenotype where a microbe is unable to reproduce by the process of haploid fruiting." [OMP:DAS]
+synonym: "absence of monokaryotic fruiting" EXACT []
+is_a: OMP:0007542 ! haploid fruiting phenotype
+[Term]
+id: OMP:0007545
+name: altered haploid fruiting
+def: "A haploid fruiting phenotype where the rate, frequency, timing, or extent of the process of haploid fruiting is altered relative to a designated control." [OMP:DAS]
+synonym: "altered monokaryotic fruiting" EXACT []
+is_a: OMP:0007144 ! altered asexual reproduction
+is_a: OMP:0007542 ! haploid fruiting phenotype
+[Term]
+id: OMP:0007546
+name: increased haploid fruiting
+def: "An altered haploid fruiting phenotype where the rate, frequency, timing, or extent of the process of haploid fruiting is increased relative to a designated control." [OMP:DAS]
+synonym: "increased monokaryotic fruiting" RELATED [] <-- this was changed to EXACT
+is_a: OMP:0007545 ! altered haploid fruiting
+[Term]
+id: OMP:0007547
+name: decreased haploid fruiting
+def: "An altered haploid fruiting phenotype where the rate, frequency, timing, or extent of the process of haploid fruiting is decreased relative to a designated control." [OMP:DAS]
+synonym: "decreased monokaryotic fruiting" EXACT []
+is_a: OMP:0007545 ! altered haploid fruiting
+[Term]
+id: OMP:0007548
+name: abolished haploid fruiting
+def: "A decreased fruiting phenotype where the process of haploid fruiting is abolished." []
+synonym: "abolished monokaryotic fruiting" EXACT []
+is_a: OMP:0007544 ! absence of haploid fruiting
+is_a: OMP:0007547 ! decreased haploid fruiting
username_1: These terms and defs look good. Would one use 'altered haploid fruiting phenotype' if the morphology of the haploid fruiting bodies were abnormal? If so, could that be mentioned in the def?
Thanks!
username_2: I think there should be a set of terms for fruiting body morphology. I would start at the top with a very general definition for fruiting body morphology and then can add terms for haploid fruiting body morphology and diploid fruiting body morphology. What do you think?
Status: Issue closed
|