repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M)
---|---|---|
simonsobs/pixell | 372660457 | Title: pip install is broken
Question:
username_0: * pixell version:
* Python version: 3.7
* Operating System: macOS High Sierra 10.13.6
### Installing from pip silently fails, no error message
Installing pixell via pip (I believe it is [this module on pypi](https://pypi.org/project/pixell/)) failed without any error message. It appears to exit `setup.py` successfully; however, the module cannot be imported, and apparently the install process never finished.
### What I Did
Ran the following:
```
CC=gcc-8 CXX=g++-8 pip install --user pixell
```
Answers:
username_0: The output is pretty long and clashes severely with markdown, so I am attaching a file: [output.txt](https://github.com/simonsobs/pixell/files/2503111/output.txt).
When I do this, if I run `python -c "import pixell"` I get
```
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pixell'
```
if I run `pip uninstall pixell`, I get
```
Skipping pixell as it is not installed.
```
username_1: Use https://pastebin.com/ for long outputs (I'll add this to the issue template).
Hmm, weird, I don't see any obvious errors in that output. That said, the very first line is saying that it can't find gcc-8. Does gcc-8 exist in your path? Also, if you want to force it to use gcc (and not clang), you might also need to add FC=gfortran in the beginning. C and Fortran are the only two compiled languages being used here.
Glad to hear it works from master. Does running py.test in the git directory work? Are you able to get "from pixell import sharp" and "from pixell import interpol" to work from the home directory?
username_0: That is doubly weird, as gcc-8 definitely exists in my path. Good call on the fortran flags.
I can indeed run those imports, and tests/test_pixell.py runs with no errors.
username_1: The pip output says it is using "/usr/local/bin/gfortran" as your gfortran compiler. I know you got it working off master, but would be good to get to the bottom of this if you have time! What's the output of:
"/usr/local/bin/gfortran --version"
username_1: Another question, it says it copied the pixell egg to:
```
/Users/dylan/Library/Python/3.7/lib/python/site-packages/pixell-0.2.0-py3.7.egg-info
```
Does this actually exist? And if yes, would be good to understand why python isn't finding that egg when you import pixell.
username_0: Oh, I wish I had seen that. That explains why my python cannot see it; for research purposes I run python inside a [venv](https://docs.python.org/3/library/venv.html), which is totally isolated from ~/Library/Python/. What is weird is that running pip installs for other packages works just fine inside venvs.
I don't know anything about packaging modules for pip, but could it be that whatever method of path detection pixell uses fails to detect that it should install to the venv directories?
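A quick way to confirm that kind of isolation is to compare the interpreter prefixes and the user site directory; a small diagnostic sketch (independent of pixell):
```python
import site
import sys

# Inside an active venv, sys.prefix points at the venv directory, while
# sys.base_prefix points at the interpreter the venv was created from.
print("prefix:", sys.prefix)
print("base prefix:", sys.base_prefix)

# `pip install --user` places packages here (~/Library/Python/... on macOS);
# a venv created without --system-site-packages never searches this path.
print("user site:", site.getusersitepackages())
```
Since the install above used `pip install --user`, the egg landing in `~/Library/Python` and being invisible to the venv is consistent with this.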
Also, re: gfortran, that is the gfortran installed by homebrew (8.2.0); I don't think macOS actually comes with any flavor of gfortran (while it does come with clang, renamed to gcc). |
OatmealDome/DolphiniOS-Issue-Tracker | 1021549634 | Title: Sonic adventure 2 battle black textures
Question:
username_0: **Describe the issue**
<!--- Describe the issue clearly. -->
I’m on iOS 15.0.1 using the AltStore beta (3.2.0b2). Whenever I try to play Sonic Adventure 2: Battle, it runs with good performance, but all the textures are broken; they seem to be covered in something black. Some textures flash for an instant, showing how the texture is actually supposed to look, so the textures themselves don't seem to be broken or missing.
**How to reproduce**
<!--- Describe how to reproduce the issue. -->
Probably something to do with Vulkan, because I tried with OpenGL and, even though it's really slow, the textures were there.
<!--- For example: -->
<!--- 1. Open Animal Crossing. -->
<!--- 2. Talk to Tom Nook. -->
<!--- 3. Get into debt. -->
**Expected result**
<!--- What did you expect to happen? -->
The game should have working textures, as in some old YouTube videos of the emulator.
**Device information**
<!--- Please fill this out. Jailbreak refers to the Jailbreak method you used, e.g. checkra1n or unc0ver. DolphiniOS version can be found in the settings tab. -->
- Device: iphone 12 pro
- OS: ios 15.0.1
- Jailbreak method: non-jailbroken
- DolphiniOS version: 3.2.0b2
- Emulated game: sonic adventure 2 battle
**Additional information**
<!--- Add any other information here that does not fit into the above sections. -->
Answers:
username_1: This can be fixed by installing the most recent build of DolphiniOS posted here: https://github.com/OatmealDome/dolphin/releases/tag/3.2.0b2-194
username_0: Thank you so much it works now
Status: Issue closed
|
react-component/field-form | 471890208 | Title: Use an object for the hook API?
Question:
username_0: Not that it is worth anything, but I think the hook API would be better if we dropped the array. There is no rule that enforces a hook to return an array, and in that case an object would be better, as it would allow destructuring in a single line:
```js
const {form, getFieldError} = Form.useForm();
```
Ideally, `form` would be just a single ref without the extra API, to keep things DRY.
Answers:
username_1: Since we support multiple forms in one component, with an object the user would need to write:
```js
const { form: form1 } = useForm();
const { form: form2 } = useForm();
```
This is the same convention as `React.useState`, which simplifies the code:
```js
const [ form1 ] = useForm();
const [ form2 ] = useForm();
```
Status: Issue closed
username_1: Also, the array is intended to allow providing additional content if we meet some requirement in the future. Yes, it's not needed currently, but it's also the lowest-cost option if we add more later.
CAVaccineInventory/vaccine-feed-ingest | 865728305 | Title: fetch vaccinespotter for entire us
Question:
username_0: [](https://github.com/CAVaccineInventory/vaccine-feed-ingest/wiki/Runner-pipeline-stages#fetch)
Fetch data from all of the state APIs listed here: https://www.vaccinespotter.org/api (for example, https://www.vaccinespotter.org/api/v0/states/AL.json)
Put your script in a file named: `us/vaccinespotter_org/fetch.sh` (other extensions are ok)
Store it (without processing) in a new file created in the directory passed as the first argument (`sys.argv[1]`).
Check the wiki to learn more about the purpose of the fetch stage and how to get set up for development!
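A minimal sketch of such a fetch script in Python (the state list is truncated here for illustration, and the per-state output filename is an arbitrary choice):
```python
#!/usr/bin/env python3
# us/vaccinespotter_org/fetch.py - illustrative sketch, not the merged script.
import pathlib
import sys

import requests

# Truncated for illustration; a real script would cover all states/territories.
STATES = ["AL", "AK", "AZ", "AR", "CA"]

output_dir = pathlib.Path(sys.argv[1])

for state in STATES:
    url = f"https://www.vaccinespotter.org/api/v0/states/{state}.json"
    response = requests.get(url)
    response.raise_for_status()
    # Store the raw payload without any processing, one file per state.
    (output_dir / f"{state}.json").write_bytes(response.content)
```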
### Tips
1. While working on your code, run it at any point:
```sh
poetry run vaccine-feed-ingest fetch <state>/<site>
```
Status: Issue closed
Answers:
username_0: Thank you @obra and @juleea ! |
networkx/networkx | 504109268 | Title: GN networks (girvan-newman benchmart)
Question:
username_0: We want to make a network with parameters N = 128, l = 4, g = 32, ⟨k⟩ = 16; the range of k_out can be changed from 1 to 8.
Answers:
username_1: I'm not sure what you're asking, but there is an implementation of the [GN algorithm](https://networkx.github.io/documentation/latest/reference/algorithms/generated/networkx.algorithms.community.centrality.girvan_newman.html) if you were looking for that.
username_0: Girvan and Newman introduced an artificial network benchmark on which to test community detection algorithms, and the properties of these networks are as I mentioned above. People always mix up the Girvan-Newman algorithm for detecting communities and Girvan and Newman's artificial network benchmark!
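For reference, a benchmark graph with those parameters can be generated with networkx's planted-partition generator; a sketch (the helper function and its defaults are mine, derived from the parameters above):
```python
import networkx as nx

def gn_benchmark(z_out, n_groups=4, group_size=32, avg_degree=16, seed=None):
    """GN-style benchmark: 128 nodes in 4 groups of 32, expected degree 16,
    split into z_in intra-group and z_out inter-group edges per node."""
    z_in = avg_degree - z_out
    p_in = z_in / (group_size - 1)                 # 31 possible intra-group partners
    p_out = z_out / (group_size * (n_groups - 1))  # 96 possible inter-group partners
    return nx.planted_partition_graph(n_groups, group_size, p_in, p_out, seed=seed)

G = gn_benchmark(z_out=4, seed=0)
```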
Status: Issue closed
|
bibliotechie/client | 584769762 | Title: Run our own (vanilla) client
Question:
username_0: To get experience developing with the code base, we should [follow the instructions for setting up a normal development install](https://h.readthedocs.io/projects/client/en/latest/developers/developing/).
Answers:
username_0: This might be more effort than it's worth. We can't use our own client with the official hypothesis server because it [requires access to a private git repo](https://h.readthedocs.io/en/latest/developing/install/#create-the-development-data-and-settings), so we would need to build our own instance of [h](https://github.com/hypothesis/h), then [get it working with oauth](https://h.readthedocs.io/en/latest/developing/integrating-client/), neither of which is part of our design.
I'm going to try playing around with getting a (broken) version of the client running right on amusewiki, if I don't make much progress then we can come back to this.
username_0: Now that we have the broken client running, I think we can just give up on this; if we need to test what the execution of a properly functioning client looks like, we can just use the actual client.
Status: Issue closed
|
EntangledBits/CUETools.Codecs | 713062154 | Title: Working example of simple conversion of WAV stream to FLAC stream
Question:
username_0: The example on the front page doesn't seem to work as the constructor for the WAVReader is invalid.
I have the Wave in a stream and I want to compress using FLAC to another stream.
Answers:
username_1: Invalid as in VS says it doesn't exist, or invalid as in it throws an error?
username_0: My bad....this was a namespace issue. Apparently I am referencing multiple projects that contain a WAVReader class.
Status: Issue closed
username_1: Ok let me know if anything else comes up, I originally used this on a xamarin project. |
mymarilyn/clickhouse-driver | 576760586 | Title: Feature request: Extend columnar form to support numpy arrays
Question:
username_0: Hi Konstantin, currently the binary raw data are deserialized into Python types. Wouldn't it be great if you could match numpy data types with ClickHouse data types and deserialize into numpy arrays instead?
As far as I understand, there are two ways to do this: either turn Python tuples into numpy arrays, if possible with zero copy, or do the transformation directly on the binary data.
The bonus would be another zero-copy transformation to pyarrow arrays, and that opens the door to the Arrow Flight protocol, which can be great for transferring data at high speed from remote servers.
I would also like to use the numpy array feature in my project, and I am offering to help with testing.
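As a rough illustration of the first route (turning already-deserialized Python columns into numpy arrays), assuming columns obtained with `columnar=True`; note this still makes one copy, so a true zero-copy path would instead need to fill a preallocated buffer during deserialization:
```python
import numpy as np

# A column of Python ints, as returned by execute(..., columnar=True).
column = tuple(range(10_000))

# One copy into a typed, contiguous buffer; afterwards there is no
# per-item Python object overhead.
array = np.fromiter(column, dtype=np.uint64, count=len(column))
```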
Answers:
username_1: Also, maybe https://github.com/Arturus/clickhouse-driver can be backported to the official Python driver?
username_2: Would be really awesome
username_3: Yes, it will be good to have this feature with optional dependencies (numpy, pandas) in upstream project.
I'd like to merge it, but this feature needs to be optional and well-tested.
username_2: @username_3, maybe the author himself can help. Here's his Telegram handle, @asuilin, which I found on his website.
username_4: There's a somewhat tricky possibility to use the NumPy fast-reading feature right now, and _almost_ within this module :)
I described one of the possible solutions as an issue for another module and posted it to this repo by mistake, so I closed it. The solution and a short test are in this closed issue: https://github.com/mymarilyn/clickhouse-driver/issues/132
Not the best, but still a usable workaround: use the two modules at once, this one for everything and Artur's outdated one for the fast reading, though Artur's module has to be patched first. Unfortunately, both modules use the same naming logic. Alternatively, the module could probably be placed on some non-searchable Python path and imported in a nonstandard way; I didn't test that. The patch works, so why not?
username_0: @username_3 any plans to merge this numpy array feature in the near future? By the way, a great alternative would be to support ClickHouse's Arrow columnar format. I made a cross-reference with a relevant issue I opened at ClickHouse/ClickHouse#12284. That may also solve the problem of transferring large volumes of structured data quickly over the wire, i.e. the ArrowStream format. I volunteer to test the new feature.
username_3: NumPy support from https://github.com/Arturus/clickhouse-driver is ready for merging into master branch. This support will be optional and driver will work without `numpy` as usual.
```bash
pip install git+https://github.com/mymarilyn/clickhouse-driver@feature-numpy-support#egg=clickhouse-driver
```
Additional dependencies are `pandas` and `numpy`.
Supported types:
* Float32/64
* [U]Int8/16/32/64
* Date/DateTime(‘timezone’)/DateTime64(‘timezone’)
* String/FixedString(N)
* LowCardinality(T)
Numpy arrays are not used when reading nullable columns and columns of unsupported types.
Examples:
```python
client = Client('localhost', settings={'use_numpy': True})
client.execute(
    'SELECT * FROM system.numbers LIMIT 10000',
    columnar=True
)
```
```python
client = Client('localhost', settings={'use_numpy': True})
client.query_dataframe(
    'SELECT number AS x, (number + 100) AS y '
    'FROM system.numbers LIMIT 10000'
)
```
Looking for feedback.
username_3: Branch feature-numpy-support was merged into master.
username_3: Optional numpy arrays/pandas dataframes writing was also merged into master: https://github.com/mymarilyn/clickhouse-driver/commit/90a49c276d2134cdf886e106882cfe9f833ca9b5
username_3: Should we close this issue since 0.2.0 version has numpy support?
username_5: Does this mean that one can directly load data into a Cudf (Nvidia Rapids) dataframe on the GPU bypassing any intermediary pandas/numpy representation?
username_3: I have no experience with Cudf. It's better just to give it a try.
Status: Issue closed
|
pyiron/pyiron_atomistics | 330215504 | Title: Incomplete visualization plot3d() for atoms with very large number of atoms
Question:
username_0: For `n_atoms > 60000` the visualization is incomplete!
Answers:
username_1: I guess that is something to discuss with the people at https://github.com/arose/nglview . I guess @username_2 already had some discussions with them. In addition, we should confirm that the error remains when using the latest version of NGLview; for these tests you can use the latest docker image https://github.com/pyiron/pyiron-docker
username_2: I guess the most pragmatic solution for now is to write a function that exports the data e.g. in LAMMPS format, so that it can be visualised with Ovito or vmd.
username_2: Anyway, regardless of the purpose, such a function should at any rate exist in my opinion.
username_0: Actually, such a function already exists. `basis.write('struct', format='xyz')` uses the ase writer to do this
username_0: But did you already ask the NGlview developers @username_2? Could you link the issue here if you have?
username_2: No it was about something completely different.
username_0: I just found out that the issue is not with nglview but with ase. NGLview converts our structure to the pdb format using the ase writer and then visualizes it. The ase writer somehow writes only a maximum of 50000 entries in the pdb format. This means we have to report the bug to ase or define our own pdb writer.
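A small script that makes the truncation visible (the structure and size here are arbitrary):
```python
from ase.build import bulk

# ~64000 atoms: a 40x40x40 repetition of the one-atom Cu primitive cell.
atoms = bulk("Cu").repeat((40, 40, 40))
atoms.write("big.pdb")

# Count the ATOM records actually written to the pdb file.
with open("big.pdb") as fh:
    written = sum(1 for line in fh if line.startswith("ATOM"))

print(len(atoms), written)  # expect 64000 vs. the truncated count
```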
username_1: @username_2 Can we now visualise more than 50000 atoms? With the recent update of NGLview https://github.com/pyiron/pyiron_atomistics/pull/15 ?
username_2: 1, 2, 3, 4, ... I'm not sure if I really want to count up to 50,000...
... ok, I just finished counting, and no, it's still the same. It simply stops showing the atoms at 50,000. |
mage2pro/stripe | 389326076 | Title: Create a licence file
Question:
username_0: You should create a licence file (not the test licence page) in your repository. This way people who don't read the readme.md will know, and you have a solid case for issues like this one: [https://mage2.pro/t/topic/5764](https://mage2.pro/t/topic/5764)
Answers:
username_1: It is not your responsibility to tell me what I should to do, therefore just fuck off.
Status: Issue closed
username_0: Just trying to help bro.
username_1: Banned too. Next unpaid teacher? |
microsoft/fhir-server | 637380849 | Title: Container based SQL server for helm chart
Question:
username_0: **User story**
As a user I want to deploy the FHIR server in Kubernetes using SQL server in a container. This will enable deployment locally (minikube) or Azure Stack Hub.
**Acceptance criteria**
1. When deploying with helm chart and `--set database.dataStore=SqlContainer`, the FHIR server is deployed with SQL server running in a container.<issue_closed>
Status: Issue closed |
nokeedev/gradle-native | 1043808829 | Title: Component, Variant, Binary, LanguageSourceSet should all implement Named
Question:
username_0: Those interfaces are our general domain objects. Despite being used as a projection for our universal model, we should make sure they still conform to the vanilla Gradle domain object. This means they should all implement the `Named` interface. The name returned should be the full name (including owner name) as if they were held within a `NamedDomainObjectCollection`/`Container`. Without this restriction, our domain objects won't conform to other domain objects such as `Configuration` and `Task` which has full name.
For example, assuming the ownership `main` (Component) -> `debug` (Variant) -> `executable` (Binary) -> `link` (Task)
The respective name should be:
- Component: `main`
- Variant: `debug` ("main" name are excluded from the full name as per Gradle convention)
- Binary: `debugExecutable`
- Task: `linkDebugExecutable`
It allows for easy hacking without using our model as if everything was vanilla Gradle. For example, one could create an extra task on the binary using this simple code: `tasks.create("generate${binary.name.capitalize()}")`. Using our previous example, it would create the task `generateDebugExecutable`. Although ad-hoc task creation from within the vanilla Gradle configuration is not recommended, if it's an internal task then everything will be fine.<issue_closed>
Status: Issue closed |
dtcenter/METplus | 731743442 | Title: Crash when improperly formatted filename template is used
Question:
username_0: @jvigh discovered this area for improvement.
If a filename template has a closing curly brace '}' without a corresponding opening curly brace '{' before it, the script will crash.
## Describe the Enhancement ##
In the getraw function in metplus/util/config_metplus.py inside the elif character == "}" block there should be a check that in_brackets is set to True. If it is not, it should report an error and return. Logging an error in this function will not automatically increment the number of errors like self.log_error does in the wrappers, so it would be nice to add something to catch this and prevent the wrappers from running. This could potentially be done by returning None from getraw when this happens (also when count >10) and check for None in the configuration validation (validate_configuration_variables in metplus/util/met_util.py)
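A standalone sketch of the proposed check (the names here are illustrative; the actual change would live inside getraw() and return None so the validation step can flag it):
```python
def braces_balanced(template, logger):
    """Log an error and return False if a filename template has unbalanced braces."""
    in_brackets = False
    for character in template:
        if character == "{":
            in_brackets = True
        elif character == "}":
            if not in_brackets:
                logger.error("Unmatched closing brace '}' in template: %s", template)
                return False
            in_brackets = False
    if in_brackets:
        logger.error("Unclosed opening brace '{' in template: %s", template)
        return False
    return True
```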
### Time Estimate ###
1 day
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
- [X] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
None
### Funding Source ###
None
## Define the Metadata ##
### Assignee ###
- [X] Select **engineer(s)** or **no engineer** required
- [X] Select **scientist(s)** or **no scientist** required
### Labels ###
- [X] Select **component(s)**
- [X] Select **priority**
- [X] Select **requestor(s)**
### Projects and Milestone ###
- [X] Review **projects** and select relevant **Repository** and **Organization** ones or add "alert:NEED PROJECT ASSIGNMENT" label
- [X] Select **milestone** to next major version milestone or "Future Versions"
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [X] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Enhancement Checklist ##
See the [METplus Workflow](https://dtcenter.github.io/METplus/Contributors_Guide/github_workflow.html) for details.
- [X] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)**, **Project(s)**, **Milestone**, and **Linked issues**
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
Answers:
username_0: Changes needed for #780 fixed this issue. Once #792 is merged, this issue can be closed as well.
username_0: Resolved with PR #792
Status: Issue closed
|
dart-lang/markdown | 497815534 | Title: v 2.1.0 does not preserve whitespace in <pre> blocks
Question:
username_0: Upgraded to 2.1.0 from 2.0.3 and see a test failure in dartdoc, where for docs of
`Sample class [String]\n<pre> A\n B\n</pre>`
markdown removes the whitespace before `A` and `B` when using `HtmlRenderer().render`.
Answers:
username_1: What did that source use to render? What does it render now? Using the command-line tool, it looks like it just puts the source between `<p>` and `</p>`, w/o parsing any markdown between:
```none
$ dart bin/markdown.dart
Sample class [String]\n<pre> A\n B\n</pre>^D
<p>Sample class [String]\n<pre> A\n B\n</pre></p>
```
username_0: Passing in a node whose text is `<pre>\n A\n B\n</pre>`, I get back the string `<pre>A\nB\n</pre>`
From the test
```
Expected: '<p>Sample class <code>String</code></p><pre class="language-dart"> A\n'
' B\n'
'</pre>'
Actual: '<p>Sample class <code>String</code></p><pre class="language-dart">A\n'
'B\n'
'</pre>'
Which: is different.
Expected: ... age-dart"> A\n B\ ...
Actual: ... age-dart">A\nB\n</pr ...
^
Differ at offset 66
```
Status: Issue closed
|
trailofbits/ebpfpub | 765988978 | Title: Real girls in Lingchuan County, Guilin offering door-to-door service p
Question:
username_0: Which bathhouses in Lingchuan County, Guilin City offer special services ▋╋WeChat: 781372524▋ Late one night, Olympic champion Chen Yibing posted photos on Weibo to celebrate his birthday, captioned: "This year, the best birthday present to myself: a better body." What amazed netizens is that in the photos Chen Yibing compares himself at his heaviest with how he looks now; the difference is striking, and he jokingly added the caption "from steadily fattening up to polished heartthrob". Indeed, Chen Yibing has received many magazine invitations recently; it seems stars only ever get fat for fun. Netizens responded enthusiastically: "I just want to know how he slimmed down"; "What a contrast"; "The left one looks like me around New Year." Disclaimer: China Entertainment Network publishes this article to convey more information; it does not imply endorsement of its views or confirmation of its description. Copyright belongs to the author. For more articles of this kind, please browse: general news 闷寂菩斜本https://github.com/trailofbits/ebpfpub/issues/769?10731 <br />https://github.com/trailofbits/ebpfpub/issues/1954 <br />https://github.com/trailofbits/ebpfpub/issues/576 <br />https://github.com/trailofbits/ebpfpub/issues/2256 <br />https://github.com/trailofbits/ebpfpub/issues/876 <br />https://github.com/trailofbits/ebpfpub/issues/2063 <br />https://github.com/trailofbits/ebpfpub/issues/680?A2AKp <br />tojrfgyjvjlsnquksiuorjwvulzvowgniju
arquivo/pwa-technologies | 167017495 | Title: Increase Suggestions Text Size on Table of Versions for URL Search.
Question:
username_0: The suggestion text-size is too small for URL searches.
Increase text size.

Answers:
username_0: Change to <a> Ver resultados que contêm o texto "$query " </a>
Status: Issue closed
|
jlippold/tweakCompatible | 417248728 | Title: `Flipswitch` working on iOS 12.1
Question:
username_0: ```
{
"packageId": "com.a3tweaks.flipswitch",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.a3tweaks.flipswitch",
"deviceId": "iPhone10,5",
"url": "http://cydia.saurik.com/package/com.a3tweaks.flipswitch/",
"iOSVersion": "12.1",
"packageVersionIndexed": true,
"packageName": "Flipswitch",
"category": "System",
"repository": "rpetrich repo",
"name": "Flipswitch",
"installed": "1.0.16~beta3",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.a3tweaks.flipswitch",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.2",
"shortDescription": "Centralized toggle system for iOS",
"latest": "1.0.16~beta3",
"author": "rpetrich",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
sendgrid/sendgrid-ruby | 367231449 | Title: Run our *.md documents in this repo through the Grammarly service and update
Question:
username_0: #### Issue Summary
We would like to get our English polished up throughout the repo.
#### Acceptance Criteria
* Every .md file in this repo has been run through the Grammarly service and updated accordingly
Answers:
username_1: @username_0 Got it!
Assign this to me.
username_0: Thank you @username_1!
Status: Issue closed
|
EverestAPI/SpringCollab2020 | 553825913 | Title: More variations on the Bird tutorial
Question:
username_0: <!--
Describe the thing you'd like
- A clear and concise description of what you want to happen.
Additional context
- Add any other context or screenshots about the feature request, anything that can help people make this.
-->
<!--Put your issue below this line-->
The following variations of the Tutorial bird have been requested on Collabcord:
- Reverse super
- Wavedash and Reverse wavedash (requested by DanTKO)
Can evolve into a "customizable bird tutorial". In this case, this will be a candidate for integration into Everest instead.
Answers:
username_1: DanTKO no longer needs this.
Keeping the issue open; someone else might need it, and the customization part seems like a good Everest addition.
Status: Issue closed
|
2ndalpha/gasmask | 298139537 | Title: After Being Opened for a While, Uses too Much CPU & RAM
Question:
username_0: Using Gas Mask v0.8.5 and a hosts file from https://github.com/StevenBlack/hosts. After Gas Mask has been open in the tray for several minutes, it starts to consume too much CPU and RAM. I have to force quit to stop it. This issue started recently (2-3 weeks ago). I just updated macOS High Sierra (10.13.3) and the issue still exists. Any ideas what's going on? Suggestions for a fix?
Answers:
username_1: Please provide the log file from `~/Library/Logs/Gas\ Mask.log`
username_0: @username_1
```
[DEBUG] - int - Starting Gas Mask 0.8.5
[DEBUG] - Network - Starting listening for network changes
[DEBUG] - int - Reopen
[DEBUG] - HostsMainController - Creating Hosts Controller
[DEBUG] - Menulet - Initializing Status Bar with Yosemite and later options
[DEBUG] - ApplicationController - Init structure
[DEBUG] - HostsMainController - Adding groups
[DEBUG] - LocalHostsController - Loading local hosts
[DEBUG] - LocalHostsController - Loaded file: "host1.hst"
[DEBUG] - LocalHostsController - Loaded file: "host0.hst"
[DEBUG] - RemoteHostsController - Loading remote hosts
[DEBUG] - RemoteHostsManager - Loading remote hosts properties
[DEBUG] - CombinedHostsController - Loading combined hosts
[INFO] - HostsMainController - All hosts files are loaded
```
username_0: @username_1 the resource usage issue is still happening on the latest version. Any suggestions?
username_2: I see your measly 56,5% CPU time and raise you 99,5%!

I can't even get it to launch by double-clicking the app. The GUI doesn't appear, my laptop's fans spin up to "prepare for take-off" speed and the CPU maxes out. Log below.
```
$: tail -100 ~/Library/Logs/Gas\ Mask.log
[DEBUG] - int - Starting Gas Mask 0.8.6
[DEBUG] - Network - Starting listening for network changes
[DEBUG] - HostsMainController - Creating Hosts Controller
[DEBUG] - Menulet - Initializing Status Bar with Yosemite and later options
[DEBUG] - ApplicationController - Init structure
[DEBUG] - HostsMainController - Adding groups
[DEBUG] - LocalHostsController - Loading local hosts
[DEBUG] - LocalHostsController - Loaded file: "Original File.hst"
[DEBUG] - RemoteHostsController - Loading remote hosts
[DEBUG] - RemoteHostsManager - Loading remote hosts properties
[INFO] - RemoteHostsManager - Starting updater
[DEBUG] - RemoteHostsManager - Searching updates for "raw.githubusercontent.com"
[DEBUG] - HostsDownloader - Downloading: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
[DEBUG] - AbstractHostsManager - Downloading started
[DEBUG] - CombinedHostsController - Loading combined hosts
[DEBUG] - CombinedHosts - Type: Local, name: Original File
[DEBUG] - CombinedHosts - Type: Remote, name: raw.githubusercontent.com
[DEBUG] - CombinedHostsController - Loaded file: "Combined Hosts File.hst"
[INFO] - HostsMainController - All hosts files are loaded
[DEBUG] - ListController - Expanding all items
[INFO] - ListController - Selecting active item: Combined Hosts File
[DEBUG] - Hosts - Loading contents for file "Original File"
[DEBUG] - Hosts - Loading contents for file "raw.githubusercontent.com"
[DEBUG] - AbstractHostsManager - Hosts up to date
[DEBUG] - AbstractHostsManager - Hosts file "raw.githubusercontent.com" is up-to-date
[DEBUG] - RemoteHostsManager - Starting timer for remote hosts files. Interval: 1440 minutes
```
BTW: for what it's worth, I'm using the PR [#155](https://github.com/username_1/gasmask/pull/155) patch by @softwarebouwer, just in case the problem is related to that.
Oddly, if I open the binary directly from a terminal:
```
$: /Applications/Utilities/Gas\ Mask.app/Contents/MacOS/Gas\ Mask
```
the GUI opens fine and CPU usage is negligible. So it looks like it's either a problem with the patch, or the .app package.

username_2: EDIT: Bloody typical!
After seeing this behaviour three times in a row when trying to launch the Gas Mask app, I've just tried it again and it opened normally, without any turbo-prop fan activity. So it looks like one of those annoying intermittent issues.
biopython/biopython | 674724362 | Title: Wrappers for blastdb_aliastool, blastdbcmd
Question:
username_0: Hello,
I'm trying to make a pipeline using the Bio.Blast.Applications module. In the BioPython documentation, there don't seem to be wrappers for tools such as blastdb_aliastool and blastdbcmd. Does anyone know if they exist? If not, would the best alternative be to use the Python subprocess module for command line use?
Thanks!
Answers:
username_1: See https://github.com/biopython/biopython/issues/1463
username_2: Thanks Chris. And yes, subprocess is a good alternative - but see also discussion on #2877
username_0: Thanks so much for pointing out the resources; they are very helpful. I was wondering if anyone could speak a little about platform compatibility (Linux, Windows) for the subprocess module and Blast.Applications? To assemble the command line in subprocess, are there drawbacks to using a string and running it vs. creating a Popen object with a list of commands?
Thank you!
username_2: We tried to hide Windows vs Linux/Mac differences in the ``Bio.Application`` framework although I understand people have had trouble with at least some of the tutorial examples. Please add to the discussion on #2877.
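For reference, a minimal subprocess sketch for one of these tools; passing a list of arguments (rather than a single shell string) avoids `shell=True` and platform-specific quoting issues. The database name and entry id below are made-up placeholders:
```python
import subprocess

# Fetch one sequence as FASTA from a local BLAST database ("mydb" and
# "seq1" are hypothetical placeholders).
result = subprocess.run(
    ["blastdbcmd", "-db", "mydb", "-entry", "seq1", "-outfmt", "%f"],
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError on a non-zero exit code
)
print(result.stdout)
```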
Status: Issue closed
|
naptha/tesseract.js | 502051725 | Title: Remove the decimals in progress percent
Question:
username_0: I can't seem to find a solution for removing the decimals from the progress percentage when displaying it, so maybe we can suggest or cite an example for this. Show 1%, 2%, 3%, ..., 100% (whole numbers) instead of 1.1857142...% and so on. Thank you!
Answers:
username_1: Maybe I am wrong, but I think you can use toPrecision to solve the issue; check the reference [HERE](https://www.w3schools.com/jsref/jsref_toprecision.asp)
Status: Issue closed
username_0: Will try, thanks! |
honeyjonny/rcc-2016-sociality-service | 196372105 | Title: Bug in import local package
Question:
username_0: In [**server.go**](https://github.com/username_1/rcc-2016-sociality-service/blob/master/server.go#L7) we import [**middleware**](https://github.com/username_1/rcc-2016-sociality-service/blob/master/server.go#L7) and [**database**](https://github.com/username_1/rcc-2016-sociality-service/blob/master/server.go#L6) from a repository that does not exist.
```go
import (
"fmt"
"github.com/gin-gonic/gin"
"github.com/username_1/sociality/database"
"github.com/username_1/sociality/middleware"
"github.com/jinzhu/gorm"
"net/http"
"strconv"
_ "time"
)
```
I think it should be corrected to:
```go
import (
"fmt"
"github.com/gin-gonic/gin"
"github.com/username_1/rcc-2016-sociality-service/database"
"github.com/username_1/rcc-2016-sociality-service/middleware"
"github.com/jinzhu/gorm"
"net/http"
"strconv"
_ "time"
)
```
The same problem exists in [**middleware/logic.go**](https://github.com/username_1/rcc-2016-sociality-service/blob/master/middleware/logic.go).
Answers:
username_1: The thing is, the repository was migrated from another git server to GitHub, and apparently I missed this. Thanks for the fix!
Still, a question arises: if I have several git origin servers and the repositories are named differently on each of them, what should I do then?
Тем не менее - возникает вопрос - если у меня несколько git origin серверов и называются репозитории на этих серверах по разному - как тогда быть?
username_0: In that case you can do the following:
1. Place the project in `$GOPATH/src/rcc-2016-sociality-service`
2. And import local packages:
```go
import (
"rcc-2016-sociality-service/middleware"
"net/http"
)
```
This question has already been raised on [stackoverflow](http://stackoverflow.com/a/35511866).
facebook/react | 928619458 | Title: React-redux
Question:
username_0: import 'bootstrap/dist/css/bootstrap.css';
import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter } from 'react-router-dom';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
import './Styles/StyleSheets.css';
import './fontawesome';
import { store, history } from './store';
import { Provider } from "react-redux";
import { ConnectedRouter } from 'connected-react-router';
const baseUrl = document.getElementsByTagName('base')[0].getAttribute('href');
const rootElement = document.getElementById('root');
ReactDOM.render(
<React.StrictMode>
<Provider store={store()}>
<ConnectedRouter history={history}>
<BrowserRouter basename={baseUrl}>
<App />
</BrowserRouter>
</ConnectedRouter>
</Provider>
</React.StrictMode>,
rootElement);
registerServiceWorker();

Status: Issue closed
Answers:
username_1: Support requests filed as GitHub issues often go unanswered. We want you to find the answer you're looking for, so we suggest the following alternatives:
##### Coding Questions
If you have a coding question related to React and React DOM, it might be better suited for Stack Overflow. It's a great place to browse through frequent questions about using React, as well as ask for help with specific questions.
[https://stackoverflow.com/questions/tagged/react](https://stackoverflow.com/questions/tagged/react)
##### Talk to other React developers
There are many online forums which are a great place for discussion about best practices and application architecture as well as the future of React.
[https://reactjs.org/community/support.html](https://reactjs.org/community/support.html#popular-discussion-forums) |
WayofTime/BloodMagic | 134095148 | Title: Well of Suffering state after load
Question:
username_0: Just did some WoS automation and found an easily reproducible issue: the WoS (and maybe other rituals?) remains deactivated after a load and does not respond to any redstone level if you exit while the WoS is deactivated (i.e. a high redstone level is applied). The only way to make it work again is to activate it with an Activation Crystal.
Answers:
username_1: Just tested this, I can reproduce it at least with the WoS.
username_1: https://github.com/username_1/BloodMagic/commit/19bf728da38946a119c7ba8f6e80b5a3e66f5020
Status: Issue closed
|
comic/grand-challenge.org | 31135732 | Title: Add logging/ statistics for downloads
Question:
username_0: Be able to say how many times and preferably by whom datasets were downloaded.
Technical: probably add logging to serve file url, but discriminate between large datasets and smaller files. We don't want to log each time the banner image is served.
Maybe just log serving of any file that requires authentication. Then it's easy to log the user anyway.<issue_closed>
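A rough sketch of that approach, assuming a Django-style serve view (the names here are illustrative, not the actual grand-challenge code):
```python
import logging

from django.http import FileResponse

logger = logging.getLogger("downloads")

def serve_file(request, path):
    # Only authenticated requests reach this view, so the user is known.
    logger.info("download path=%s user=%s", path, request.user.username)
    return FileResponse(open(path, "rb"))
```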
Status: Issue closed |
mapbox/mapbox-gl-js | 794327459 | Title: How to prevent automatic window scrolling when opening popup?
Question:
username_0: <!--
Hello! Thanks for contributing.
The answers to many "how do I...?" questions can be found in our [help documentation](https://mapbox.com/help). If you can't find the answer there, the best place to ask is either [Stack Overflow](https://stackoverflow.com/questions/tagged/mapbox-gl-js) or [Mapbox support](https://mapbox.com/contact/).
However, if you have a question that isn't addressed in the documentation but should be, please do let us know by filling out the template below! As a general rule, if a question is about _how Mapbox GL JS works_ rather than your specific use case, we will try to address it here or by improving the documentation. Otherwise, we might close the issue here and instead recommend asking on Stack Overflow or contacting support.
-->
**mapbox-gl-js version**: 1.13.0
### Question
Opening a popup using mouseenter, click, or any other way sometimes scrolls the window. I get that this so-called feature should make the popup visible, but sometimes it scrolls the page to the top even if the map div is not at the top. Is there any way to prevent or disable the automatic window scrolling?
### Links to related documentation
The use case can be seen on the mapbox docs page https://docs.mapbox.com/mapbox-gl-js/example/popup-on-click/: if you scroll the page down so that the clickable icon is barely visible, then it scrolls the window.
<!-- Include links to the specific section(s) of the documentation where you would have expected to find an answer to this question. -->
Answers:
username_1: Yep, this looks like a bug — needs investigation. Agree that the behavior looks inconsistent. Might be the browser reacting to the opened popup focusing on content, which is a recent accessibility behavior that can be turned off with the `focusAfterOpen` option.
username_0: @username_1 Thanks, setting `focusAfterOpen` to **false** fixed it for me. I don't know why I didn't think to test it.
Status: Issue closed
username_2: This issue should be re-opened. While setting `focusAfterOpen` to `false` serves as a workaround for this particular issue, it shouldn’t be necessary. Focusing the popup as part of opening is desirable and doing so should not cause the window to scroll. |
google/gvisor | 319804869 | Title: docker run --runtime=runsc hello-world failed
Question:
username_0: I can run hello-world with runc, but runsc failed.
$sudo docker run --runtime=runsc hello-world
error reading spec: error unmarshaling spec from file "/var/run/docker/libcontainerd/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/config.json": json: cannot unmarshal array into Go struct field Process.capabilities of type specs.LinuxCapabilities
{"ociVersion":"1.0.0-rc2-dev","platform":{"os":"linux","arch":"amd64"},"process":{"consoleSize":{"height":0,"width":0},"user":{"uid":0,"gid":0},"args":["/hello"],"env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME=bab9e5f44481"],"cwd":"/","capabilities":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE","CAP_SYS_RESOURCE","CAP_SYS_MODULE","CAP_SYS_PTRACE","CAP_SYS_PACCT","CAP_NET_ADMIN","CAP_SYS_ADMIN"]},"root":{"path":"/home/docker/overlay/60442221f3ecdcf8f4fd2db4ebcda9d13c8b705c84d4410f3740c7c9fa1411a8/merged"},"hostname":"bab9e5f44481","mounts":[{"destination":"/proc","type":"proc","source":"proc","options":["nosuid","noexec","nodev"]},{"destination":"/dev","type":"tmpfs","source":"tmpfs","options":["nosuid","strictatime","mode=755"]},{"destination":"/dev/pts","type":"devpts","source":"devpts","options":["nosuid","noexec","newinstance","ptmxmode=0666","mode=0620","gid=5"]},{"destination":"/sys","type":"sysfs","source":"sysfs","options":["nosuid","noexec","nodev","ro"]},{"destination":"/dev/mqueue","type":"mqueue","source":"mqueue","options":["nosuid","noexec","nodev"]},{"destination":"/sys/fs/cgroup","type":"cgroup","source":"cgroup","options":["ro","nosuid","noexec","nodev"]},{"destination":"/etc/resolv.conf","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/resolv.conf","options":["rbind","rprivate"]},{"destination":"/etc/hostname","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/hostname","options":["rbind","rprivate"]},{"destination":"/etc/hosts","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/hosts","options":["rbind","rprivate"]},{"destination":"/dev/shm","type":"bind","source":"/home/docker/containers/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a/shm","options":["rbind","rprivate"]}],"hooks":{"prestart":[{"path":"/usr/bin/dockerd-1.12.6","args":["libnetwork-setkey","bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a","2813e5d5164ba3526568ffd397e2074766a65f2290e686165c22087394efecd1"]}]},"annotations":{"__BlkBufferWriteBps":"0","__BlkBufferWriteSwitch":"0","__BlkFileLevelSwitch":"0","__BlkFileThrottlePath":"","__BlkMetaWriteTps":"0","__ali_network_alinet":"libnetwork-setkey bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a 
2813e5d5164ba3526568ffd397e2074766a65f2290e686165c22087394efecd1","__ali_network_bridge":"docker0","__ali_network_endpoint_id":"713d54c1415be0b263534fe558d5e80314baa72f6a6b7f0cc6f83697d8e8445d","__ali_network_gateway":"192.168.5.1","__ali_network_mac":"02:42:c0:a8:05:02","__ali_network_prefix":"24","__ali_network_type":"bridge","__cput_bvt_warp_ns":"-2","__intel_rdt.l3_cbm":"","__memory_extra_in_bytes":"0","__memory_force_empty_ctl":"-1","__memory_wmark_ratio":"0"},"linux":{"resources":{"devices":[{"allow":false,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":5,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":3,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":9,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":8,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":0,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":1,"access":"rwm"},{"allow":false,"type":"c","major":10,"minor":229,"access":"rwm"}],"disableOOMKiller":false,"oomScoreAdj":0,"memory":{"swappiness":18446744073709551615},"cpu":{"ScheLatSwitch":null},"pids":{"limit":0},"blockIO":{"blkioWeight":0,"ThrottleBufferWriteBpsDevice":null,"ThrottleDeviceIdleTime":null,"ThrottleDeviceLatencyTarget":null}},"cgroupsPath":"/docker/bab9e5f444815a1a612eef51c2637d146ec84bb5ed882310d8909b4b0e900f3a","namespaces":[{"type":"mount"},{"type":"network"},{"type":"uts"},{"type":"pid"},{"type":"ipc"},{"type":"cgroup"}],"devices":[{"path":"/dev/fuse","type":"c","major":10,"minor":229,"fileMode":438,"uid":0,"gid":0}],"maskedPaths":["/proc/kcore","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug"],"readonlyPaths":["/proc/asound","/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]}}
docker: Error response from daemon: containerd: container not started.
Answers:
username_1: What Docker version are you using?
Status: Issue closed
username_0: Thank you very much. I used the latest Docker, and now it's working.
PaddleHQ/Mac-Framework | 171095972 | Title: What's the isSiteLicensed property for?
Question:
username_0: There's a property called "isSiteLicensed" in the Paddle class. I wasn't able to find any documentation on this. Is there an option somewhere to create a site license key? If so, that would be really, really awesome.
Status: Issue closed
Answers:
username_1: Currently there aren't any site license keys but you can set `isSiteLicensed` to `true` which will store a license for the machine rather than just the current user.
username_0: Just to double check, if I set this to true will it resolve the issue where customers have to activate for every user on their Mac? Sent from my BlackBerry Priv
username_1: That's right, however if there is already a license for the user it will not be picked up - the user will need to re-enter a license but then it will be licensed for every user on that machine.
username_0: I'd like to verify this before I turn it on in the next app update - if I set **isSiteLicensed** to **true** will every customer who already has the app activated get prompted to re-enter their license? |
tungstonminer/brunel-3 | 812701487 | Title: Evaluate biomes for appropriate wildlife
Question:
username_0: Each biome should feel appropriately "alive": neither crowded with animals, nor deserted. As a general rule, you should be able to find an animal after 2–3 minutes of walking about. At no point should you be unable to look in at least some direction without seeing an animal.<issue_closed>
Status: Issue closed |
SAP/openui5 | 806147971 | Title: WebIDE cannot build since yesterday.
Question:
username_0: This is all the input it provides.
Answers:
username_1: Hello @username_0 ,
Can you provide a GitHub link to your project? Also, please detail the steps you take to build it.
What URL do you use for WebIDE?
Thanks and regards,
Iliana
username_2: Hi @username_0, we have UI5 build failures in EU2 currently, we're working to resolve it ASAP.
username_0: Exactly, it is on EU2.
username_1: Hello @username_2 ,
Could you please share some more information about when this problem is expected to be fixed? We are receiving multiple requests about this issue.
Regards,
Iliana
username_2: Hi @username_1, we're still working to solve this issue. Currently, I can't estimate when it will be resolved.
username_2: Hi,
This issue has been resolved.
Please log out and log in before trying to build again.
Eliran
username_0: @username_2 In the morning I could build once, but then the problem appeared again.
Even after logging out and back in, it is still there.
username_2: Hi, The build is not stable. Our OSP team is still working on it. You will get failures from time to time; try to log out and log in again.
Eliran
username_2: Hi,
We validated that the builds are now working as expected.
Please validate after logging out and logging in again.
Eliran
username_3: Hello @username_0,
Could you please confirm that the issue is fixed?
The open case may be closed soon due to inactivity, on the assumption that the symptom no longer occurs.
Best regards,
Tereza
Status: Issue closed
|
sgreene570/readme-generator | 205394391 | Title: Add authentication from git to allow more requests per day
Question:
username_0: The request limit is pretty small without authentication. Is it possible to get the user's configured git username and obtain an authentication token for API use?
Answers:
username_1: This can definitely be done with a username and password/token set in a separate file
username_1: I'm going to fix this soon: it just involves writing an "api call" function so I can send either authorized or unauthorized requests, based on whether or not the user provides valid credentials.
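A rough sketch of such a helper using the requests library (hypothetical; where the token comes from is up to the caller):
```python
import requests

def api_call(url, token=None):
    """Send a GitHub API request, authenticated when a token is available.

    Authenticated requests get a much higher rate limit than anonymous ones
    (5000/hour vs. 60/hour at the time of writing).
    """
    headers = {"Accept": "application/vnd.github.v3+json"}
    if token:
        headers["Authorization"] = "token " + token
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    return response.json()
```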
username_1: I added a function to allow for authenticated requests, I'll refactor the code soon™ |
benetech/Imageshare | 612402465 | Title: Report Issue Position On Detail Page Incorrect
Question:
username_0: The report issue link on the detail page needs to be position at the top of the page, closer to the return to search results link. Currently, It could be missed by users placed after the download selected files submit button.
Answers:
username_1: Can we get a quick mock-up for this? I am thinking of an unobtrusive small text link, "report issue".
username_0: I can do that as soon as I get my laptop back from Apple. Should be tomorrow.
username_0: @username_1 what do you think of the report issue link placement? It's floated to the right of the 'back to search results' link. Since we shrunk the content width, I think it makes sense for it to be in the far-right empty column to reconcile that.

username_1: perfect, thanks!
username_2: Closing this because other issues exist for UI/UX now.
Status: Issue closed
|
Azure/azure-functions-host | 868774808 | Title: Requiring ILogger<> during function startup throws InvalidOperationException
Question:
username_0: This is a re-create of issue #5912.
I would like to initialize some caching code within an azure function. This caching service injects an ILogger<> which cannot be injected apparently, since not all required services have been registered.
When running the following line of code (part of the code below). i get the following exception:
`scope.ServiceProvider.GetRequiredService<ICachingService>().Initialize();`
```
An exception of type 'System.InvalidOperationException' occurred in Microsoft.Extensions.DependencyInjection.dll but was not handled in user code
Unable to resolve service for type 'Microsoft.Azure.WebJobs.Script.IFileLoggingStatusManager' while attempting to activate 'Microsoft.Azure.WebJobs.Script.Diagnostics.HostFileLoggerProvider'.
```
#### Investigative information
- Function App version: v3 (Microsoft.NET.Sdk.Functions" Version="3.0.6")
- dot net core 3.1
Startup code:
```
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
builder.Services.AddLogging();
RegisterServices(builder.Services);
var serviceProvider = builder.Services.BuildServiceProvider(true);
InitializeServices(serviceProvider);
}
private static void RegisterServices(IServiceCollection services)
{
var serviceProvider = services.BuildServiceProvider();
services.AddSingleton<ICachingService, CachingService>();
}
private static void InitializeServices(IServiceProvider serviceProvider)
{
using var scope = serviceProvider.CreateScope();
scope.ServiceProvider.GetRequiredService<ICachingService>().Initialize().Wait();
}
}
```
Caching service:
```
public interface ICachingService
{
Task Initialize();
}
public class CachingService : ICachingService
{
private readonly ILogger<CachingService> _logger;
public CachingService(ILogger<CachingService> logger)
{
_logger = logger;
}
public async Task Initialize()
{
_logger.LogInformation("initialization");
await Task.CompletedTask;
}
}
```
The workaround would be:
```
public override void Configure(IFunctionsHostBuilder builder)
{
    builder.Services.AddSingleton<ICachingService>(p =>
    {
        var logger = p.GetService<ILogger<CachingService>>();
        var cachingService = new CachingService(logger);
        cachingService.Initialize().Wait();
        return cachingService;
    });
}
```
It seems the ILogger dependencies are registered too late. In my opinion these should be registered before Configure is called.
Answers:
username_1: Hi @username_0, thank you for re-creating the issue. We will investigate it further and post updates on our progress.
cityofaustin/atd-data-tech | 533449449 | Title: Update Project Backlog: Value Assessment Exercise
Answers:
username_1: @username_2 I think we can do this if we 📌 all the index issues to the pipelines. Is this a good idea?
username_2: i think that's an ok idea. shall we discuss at product sync? what about.....a separate zenhub workspace? <collective groan>
username_1: ~groan~ ... but yeah, I think this is worth discussing. :) I think it might be a better idea. TBH, this is really only an issue with Data Tracker.
Status: Issue closed
|
scikit-learn/scikit-learn | 1009776220 | Title: BUG fresh install on OSX conda env with pip gives a segfault
Question:
username_0: We are using `scikit-learn` our `benchopt` package and we came across a weird behavior when installing the 1.0 release on OSX with `pip` in a `conda` env in the CI, where the import of scikit-learn causes a segfault.
After a bit of debugging, we were able to pinpoint that it comes from an interation with installing `numba` in the conda env beforehand. Here are the PR where we investigated this https://github.com/benchopt/benchOpt/pull/211 and the error log with a minimal reproduction:
https://github.com/benchopt/benchOpt/runs/3733088581?check_suite_focus=true
Step to reproduce:
```
conda create -n test_env -c conda-forge python=3.8 numpy scipy numba
conda activate test_env
pip install scikit-learn
python -c 'from sklearn.linear_model import Lasso'
```
This results in:
```
line 4: 2841 Segmentation fault: 11 python -c 'from sklearn.linear_model import Lasso'
```
We fixed our issue by switching to `conda install` which is probably safer.
Answers:
username_1: I tried to reproduce locally on macOS with M1 processor but we do not have wheels yet for that platform so I recompiled scikit-learn from source using the conda-forge compilers and I do not reproduce the problem in this case.
So it might be related to a bad interaction between runtime libraries of the compiler used to generate the wheels by cibuildwheel on macos and numba / llvmlite from conda-forge installed on the conda env...
username_1: Maybe you could try running on your CI:
```
ulimit -c unlimited && (python -c 'from sklearn.linear_model import Lasso' || (lldb -c `ls -t /cores/* | head -n1` \
--batch -o 'thread backtrace all' -o 'quit' && exit 1))
```
Adapted from: https://stackoverflow.com/questions/26812047/scripting-lldb-to-obtain-a-stack-trace-after-a-crash
username_0: Yes, here is the CI run on this: https://github.com/benchopt/benchOpt/runs/3734031296?check_suite_focus=true
username_2: I can reproduce on OSX intel, using this test file:
```python
from sklearn.linear_model import Lasso
```
with `lldb python test.py` and running
```
Process 6123 stopped
* thread #2, stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
frame #0: 0x000000010179b189 libomp.dylib`void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 36
libomp.dylib`__kmp_suspend_64<false, true>:
-> 0x10179b189 <+36>: movq (%rax,%rcx,8), %r13
0x10179b18d <+40>: movq %r13, %rdi
0x10179b190 <+43>: callq 0x10179a9da ; __kmp_suspend_initialize_thread
0x10179b195 <+48>: movq %r13, %rdi
```
Looking into a fix.
username_2: Since the nightly wheels are built with OpenMP 11 on OSX now, one can check whether the import works with the nightly build:
```bash
pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple scikit-learn
```
The `main` branch and 1.0.X has not diverged too much given 1.0 was release not too long ago. |
libjxl/libjxl | 973228238 | Title: decode_oneshot generates pfm images that are brighter than originals
Question:
username_0: **Describe the bug**
JXL images decoded with `decode_oneshot` look brighter than the originals when viewed with GIMP. (The histogram is squeezed toward the right.) `encode_oneshot` does not appear to be affected. Applying the generated ICC profile does not appear to make any difference. Iteratively applying `encode_oneshot` and `decode_oneshot` results in progressively brighter images. Simple levels adjustments appear to correct the problem, so it appears gamma is being applied multiple times.
**To Reproduce**
* Decode a JXL image with `decode_oneshot`.
* Open generated PFM file with GIMP.
**Expected behavior**
PFM and original should look similar. PFM images generated by GIMP appear the same as the original.
**Environment**
- OS: Ubuntu 21.04; Linux 5.11.??
- Compiler version: gcc 10.3.0
- CPU type: x86_64
- cjxl/djxl version string: 0.6.0 810ecc3; However, previous versions are also affected.
Answers:
username_0: I think the problem is related to gamma/linearity. PFM images are assumed to be linear, by GIMP and by `cjxl`. The `jxl` images I was converting were non-linear. When interpreted as linear, they look brighter. When I used `decode_oneshot` on linear `jxl` files, the resulting `pfm` files displayed as expected. Is there a way to signal to the decoder that the pixels should be linear?
username_0: ImageMagick interprets PFM files as non-linear, while GIMP interprets them as linear. This is independent of whether I pass the ICC profile to IM (`convert decoded.pfm -profile decoded.icc decoded.png`). I'm closing this issue because the problem is with how PFM files are interpreted by different programs.
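For reference, the standard sRGB transfer functions behind the two interpretations (plain numpy; unrelated to the libjxl API):
```python
import numpy as np

def srgb_to_linear(x):
    # IEC 61966-2-1 sRGB EOTF, applied per channel on values in [0, 1].
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(x):
    return np.where(x <= 0.0031308, x * 12.92, 1.055 * np.power(x, 1 / 2.4) - 0.055)

# The same stored value means very different light levels in each encoding,
# which is why a mismatched interpretation shifts apparent brightness:
print(srgb_to_linear(0.5), linear_to_srgb(0.5))  # ~0.214 vs. ~0.735
```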
Status: Issue closed
|
AtomLinter/linter-rubocop | 196072391 | Title: v0.5.2 not automatically linting Chef code
Question:
username_0: Automatic linting of Chef code stopped working in v0.5.1, apparently due to the change to only activate on language-ruby in that release. In v0.5.2, language-ruby-on-rails and language-chef were added to the activationHooks, but with an apparent typo in language-chef ("chec" vs. "chef"). PR #184 opened with a fix.
Thanks for your work on this linter!
Answers:
username_1: Fixed in #184.
Status: Issue closed
|
jburgos1/fls1lambdas | 56404030 | Title: Site Vulnerability
Question:
username_0: Our webhost needs updating... just FYI.
Latest as of 2/3/15
Apache v2.412
CentOs v7.0
Our Host
Apache v2.2.3
CentOS v5.11
just FYI.
http://sitecheck.sucuri.net/results/floridalambdas.com

Answers:
username_1: How do we fix that?
username_0: We can't fix it ourselves; that's on the host. They have to update the server that hosts our website.
username_1: Oh lol. |
bcgov/entity | 504218038 | Title: LEGAL_API: Add field to filings that marks them as paper-only
Question:
username_0: ## Description:
Add a flag that marks the filing as being available on paper only.
**Dependencies**
None
**Acceptance Criteria**
GIVEN I'm a user with a valid JSON token
AND I'm a valid user to query the Businesses Filings
WHEN I do a GET on a businesses filings
AND I request a specific filing that is only available on paper
THEN I get a filing json back that has the availableOnPaperOnly flag in the JSON
AND the flag is set to True
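A minimal sketch of what a test for this could look like; the route, business identifier, and fixtures below are hypothetical, not the actual API:

```python
# Hypothetical sketch only -- endpoint, IDs, and fixtures are illustrative.
def test_paper_only_filing_flag(client, jwt_token):
    rv = client.get(
        "/api/v1/businesses/CP0001234/filings/1",  # hypothetical route
        headers={"Authorization": "Bearer " + jwt_token},
    )
    assert rv.status_code == 200
    assert rv.get_json()["filing"]["header"]["availableOnPaperOnly"] is True
```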
**Validation Rules**
Ready to Build (DoR):
- [ ] Stakeholders have approved
- [ ] User story completed
Acceptance / DoD:
- [ ] Design / Solution accepted by Product Owner
- [ ] Acceptance criteria has been defined (happy path, known sad paths)
- [ ] Test coverage acceptable
- [ ] Peer Reviewed
- [ ] PR Accepted
- [ ] Production burn in completed
Answers:
username_1: probably ready for QA, waiting to hear back from @username_0
Status: Issue closed
|
nongeneric/lsd2dsl | 104002146 | Title: Make failed
Question:
username_0: Hello, I installed the deps and tried to build on Fedora (x64):
```
[kroid@localhost-localdomain lsd2dsl]$ cmake . -DCMAKE_RELEASE=TRUE
-- The C compiler identification is GNU 4.8.3
-- The CXX compiler identification is GNU 4.8.3
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- system
-- program_options
-- filesystem
-- Configuring done
-- Generating done
-- Build files have been written to: /home/kroid/Downloads/lsd2dsl
[kroid@localhost-localdomain lsd2dsl]$ make
Scanning dependencies of target dictlsd
[ 4%] Building CXX object dictlsd/CMakeFiles/dictlsd.dir/lsd.cpp.o
[ 8%] Building CXX object dictlsd/CMakeFiles/dictlsd.dir/tools.cpp.o
[ 12%] Building CXX object dictlsd/CMakeFiles/dictlsd.dir/LenTable.cpp.o
[ 16%] Building CXX object dictlsd/CMakeFiles/dictlsd.dir/BitStream.cpp.o
[ 20%] Building CXX object dictlsd/CMakeFiles/dictlsd.dir/ArticleHeading.cpp.o
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp: In function ‘void dictlsd::foreachReferenceSet(std::vector<dictlsd::ArticleHeading>&, std::function<void(__gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >, __gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >)>, bool)’:
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:366:69: error: parameter declared ‘auto’
it = std::find_if(it, end(groupedHeadings), [ref](auto& h) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp: In lambda function:
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:367:24: error: ‘h’ was not declared in this scope
return h.articleReference() != ref;
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp: In function ‘void dictlsd::collapseVariants(std::vector<dictlsd::ArticleHeading>&)’:
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:378:44: error: parameter declared ‘auto’
foreachReferenceSet(headings, [&](auto first, auto last) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:378:56: error: parameter declared ‘auto’
foreachReferenceSet(headings, [&](auto first, auto last) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp: In lambda function:
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:379:27: error: ‘first’ was not declared in this scope
if (std::distance(first, last) > 1) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:379:34: error: ‘last’ was not declared in this scope
if (std::distance(first, last) > 1) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp: In function ‘void dictlsd::collapseVariants(std::vector<dictlsd::ArticleHeading>&)’:
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:387:6: error: could not convert ‘<lambda closure object>dictlsd::collapseVariants(std::vector<dictlsd::ArticleHeading>&)::__lambda3{(* & toRemove), (* & headings)}’ from ‘dictlsd::collapseVariants(std::vector<dictlsd::ArticleHeading>&)::__lambda3’ to ‘std::function<void(__gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >, __gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >)>’
});
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:390:92: error: parameter declared ‘auto’
std::copy_if(begin(headings), end(headings), std::back_inserter(compressed), [&](auto& h) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp: In lambda function:
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:391:21: error: ‘h’ was not declared in this scope
auto idx = &h - &headings[0];
[Truncated]
it = std::find_if(it, end(groupedHeadings), [ref](auto& h) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:366:70: note: candidate expects 0 arguments, 1 provided
In file included from /usr/include/c++/4.8.3/algorithm:62:0,
from /home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:7:
/usr/include/c++/4.8.3/bits/stl_algo.h:242:23: error: no match for call to ‘(dictlsd::foreachReferenceSet(std::vector<dictlsd::ArticleHeading>&, std::function<void(__gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >, __gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >)>, bool)::__lambda2) (dictlsd::ArticleHeading&)’
if (__pred(*__first))
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:366:61: note: candidate is:
it = std::find_if(it, end(groupedHeadings), [ref](auto& h) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:366:70: note: dictlsd::foreachReferenceSet(std::vector<dictlsd::ArticleHeading>&, std::function<void(__gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >, __gnu_cxx::__normal_iterator<dictlsd::ArticleHeading*, std::vector<dictlsd::ArticleHeading> >)>, bool)::__lambda2
it = std::find_if(it, end(groupedHeadings), [ref](auto& h) {
^
/home/kroid/Downloads/lsd2dsl/dictlsd/ArticleHeading.cpp:366:70: note: candidate expects 0 arguments, 1 provided
make[2]: *** [dictlsd/CMakeFiles/dictlsd.dir/ArticleHeading.cpp.o] Error 1
make[1]: *** [dictlsd/CMakeFiles/dictlsd.dir/all] Error 2
make: *** [all] Error 2
```
Answers:
username_1: Hi. This is because of generic lambdas, a C++14 feature that was only introduced in GCC 4.9 (you are compiling with GCC 4.8.3).
username_0: Thanks.
Status: Issue closed
|
playframework/playframework | 125691647 | Title: WS client get right cookie in case of duplication
Question:
username_0: Today I ran into a problem getting the right cookie by name when the response contains duplicates. I made a login request (POST) to a web site, and the response cookies contain two session_id entries; the second one is the correct cookie, but when I do <code>response.cookie("session_id")</code>
I get the first one, which seems stale. After analyzing this, I am now doing
<code>response.cookies.last</code>
which of course works for now. So is there a standard behaviour here, and is it possible to make this work correctly with the WS client?
I tested with PHP curl; no problem there.
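For reference, the behaviour I expect (the most recent cookie with a given name wins) matches what Python's standard http.cookies does; a comparison sketch only:

```python
# Comparison sketch: a later cookie with the same name replaces the earlier one.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar.load("session_id=old")
jar.load("session_id=new")
print(jar["session_id"].value)  # prints 'new'
```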
Answers:
username_1: Hi, please first ask questions on the [mailing list](https://groups.google.com/forum/m/#!forum/play-framework) rather than this tracker, which is dedicated to confirmed issues. Best regards.
Status: Issue closed
username_0: @username_1 I already looked there: https://groups.google.com/forum/m/#!searchin/play-framework/ws$20client$20cookie/play-framework/B_FXQXQUQtI
But it seems nothing has changed.
kvalue/emp_lc | 508469985 | Title: Se não vender o veículo
Question:
username_0: Good morning! First of all, congratulations, the script turned out great!
Just one observation: if the player can't sell the car (gets arrested first, or something like that), the screen keeps showing "Tracker disabled". A suggestion would be a time limit for the sale, or a command to cancel it. Cheers and much success!
Answers:
username_1: The intent was that if you are caught with the car, the police should destroy it, but I will add a check of the vehicle's health, in case it stops working, etc.
Status: Issue closed
|
appirio-tech/arena-web | 55362846 | Title: 'Back to My Room' button layout issue
Question:
username_0: Description
-----------------------------------------------------------------
'Back to My Room' button layout issue
Steps
-----------------------------------------------------------------
Hit url : arena.topcoder.com
Enter the username and password
Click login
In the 'Active Matches'
Click ENTER on a Match that you haven't registered for
Check the 'Back to My Room' button in the Match Summary
Expected Result
-----------------------------------------------------------------
Must fix the Issue.
- Must not touch the top border
Actual Result
-----------------------------------------------------------------
'Back to My Room' button layout issue
Environment
-----------------------------------------------------------------
Chrome 40.0.2214.91 m in Windows 7 Pro 64bit
Image
-----------------------------------------------------------------

Bug Hunt
-----------------------------------------------------------------
Web Arena QA for 201501 Release
Answers:
username_1: Fixed it already
Status: Issue closed
username_2: fixed
username_2: @username_1 I think your fix introduced a new bug. See below: the "Enter" icon is overlapping the table. Please fix it:

username_2: Description
-----------------------------------------------------------------
'Back to My Room' button layout issue
Steps
-----------------------------------------------------------------
Hit url : arena.topcoder.com
Enter the username and password
Click login
In the 'Active Matches'
Click ENTER on a Match that you haven't registered for
Check the 'Back to My Room' button in the Match Summary
Clicking on the 'Back to My Room' button does nothing
Expected Result
-----------------------------------------------------------------
Must fix the Issue.
- Must not touch the top border
Actual Result
-----------------------------------------------------------------
'Back to My Room' button layout issue
Environment
-----------------------------------------------------------------
Chrome 40.0.2214.91 m in Windows 7 Pro 64bit
Image
-----------------------------------------------------------------

Bug Hunt
-----------------------------------------------------------------
Web Arena QA for 201501 Release
username_1: Fixed!
username_2: fixed
Status: Issue closed
|
ember-fastboot/fastboot-app-server | 562609363 | Title: chunkedResponse turns gzip off
Question:
username_0: The default config in readme: https://github.com/ember-fastboot/fastboot-app-server#quick-start
is setting `gzip: true` and `chunkedResponse: true`, but the second seems to turn the first one off. Is this expected?
AtomLinter/linter-jscs | 134304220 | Title: Could I use this if I installed "linter-eslint"?
Question:
username_0: Could I use this if I installed "linter-eslint"?
Answers:
username_1: Yes, you totally can! You may have to tweak some of your rules in either `.eslintrc` or `.jscsrc` so that they don't cause conflicts. There are some rules that each turns on by default that might contradict each other.
Status: Issue closed
username_2: You could also set both of these packages to only run when there is a configuration present; then only the ones that you have configured will run.
username_0: Thanks for your reply. |
kev007/ARTFUL | 203413069 | Title: Reverse country colorizing
Question:
username_0: At the moment, we create a layer for every country we have data for. Because of this many countries don't have layers, as you can see here:

But instead we should create a layer for EVERY country and just leave the one white which don't have data.
This has to be done here: https://github.com/kev007/ARTFUL/blob/master/server/src/main/webapp/resources/js/app.js#L179-L201<issue_closed>
Status: Issue closed |
LeanneStoDomingo/recurtion | 1160050518 | Title: Critical error - won't run
Question:
username_0: Polling...
Polling...
(node:801) UnhandledPromiseRejectionWarning: TypeError: Cannot set property 'dtstart' of null
at findNextDueDate (/home/runner/recurtion/index.js:70:21)
at /home/runner/recurtion/index.js:138:22
at Array.forEach (<anonymous>)
at Timeout._onTimeout (/home/runner/recurtion/index.js:137:11)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
(node:801) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:801) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
(node:801) UnhandledPromiseRejectionWarning: TypeError: Cannot set property 'dtstart' of null
at findNextDueDate (/home/runner/recurtion/index.js:70:21)
at /home/runner/recurtion/index.js:138:22
at Array.forEach (<anonymous>)
at Timeout._onTimeout (/home/runner/recurtion/index.js:137:11)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
(node:801) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
(node:801) UnhandledPromiseRejectionWarning: TypeError: Cannot set property 'dtstart' of null
at findNextDueDate (/home/runner/recurtion/index.js:70:21)
at /home/runner/recurtion/index.js:138:22
at Array.forEach (<anonymous>)
at Timeout._onTimeout (/home/runner/recurtion/index.js:137:11)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
(node:801) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 3)
(node:801) UnhandledPromiseRejectionWarning: TypeError: Cannot set property 'dtstart' of null
at findNextDueDate (/home/runner/recurtion/index.js:70:21)
at /home/runner/recurtion/index.js:138:22
at Array.forEach (<anonymous>)
at Timeout._onTimeout (/home/runner/recurtion/index.js:137:11)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
(node:801) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 4)
Polling...
Polling...
Polling...
Polling...
Polling..
```
koajs/jwt | 423771256 | Title: "Secret not provided" even if it's configured
Question:
username_0: I am using Koa 2.7 and the koa-jwt 3.5 library. I went through the docs and set up the jwt middleware as below.
1) The secret is set to "123", and the middleware configuration is:
```js
app.use(async (ctx, next) => {
    return next().catch((err) => {
        // error logic
    });
})
    .use(jwt({"123"}).unless({
        path: ["/", /\/public/]
    }))
    .use(helmet())
    .use(bodyParser())
    .use(router.routes());
```
2) In my login function I use the jsonwebtoken library to sign the token:
```js
var token = jwt.sign(payload, "123"); // secret key is 123
response.status = 200;
response.set('Authorization', 'Bearer ' + token);
```
This works fine and I get the token in the header. However, when calling a protected API with the token in the header, it says "Secret not provided". If the header is properly set, I assume verification is handled automatically by koa-jwt? Maybe my middleware order should change?
What am I doing wrong?
Answers:
username_0: Jeez! This was my bad. I think I was drunk... sorry :) The options object was malformed; with `jwt({ secret: "123" })` it works perfectly!
Status: Issue closed
username_1: I met this problem too. How do I resolve it?
username_2: More (much more) information is needed... so far all reports have been errors on the user's side.
What is the code?
username_3: I met the same problem. How did you fix it?
aws/aws-cdk | 707178974 | Title: [aws-lambda] Construct Function, Construct Props logRetention
Question:
username_0: <!--
description of the bug:
-->
### Reproduction Steps
```python
fn = aws_lambda.Function(self.stack, "failing-lambda",
    code=aws_lambda.Code.from_inline(lambda_code),
    runtime=aws_lambda.Runtime.PYTHON_3_7,
    handler='index.failing_lambda',
    function_name=self.workspace.name_for("failing-lambda"),
    timeout=Duration.seconds(30),
    log_retention=aws_logs.RetentionDays.FIVE_DAYS,
)
```
### What did you expect to happen?
I expected the log_retention parameter to set the log retention to FIVE_DAYS accordingly.
### What actually happened?
` botocore.exceptions.ClientError: An error occurred (ValidationError) when calling the CreateStack operation: Parameters: [AssetParameters27b58c1b3f137723c1cdbb881058a4b21230873b55318044de2a913e607a49f9S3Bucket8795CE3D, AssetParameters27b58c1b3f137723c1cdbb881058a4b21230873b55318044de2a913e607a49f9ArtifactHash8DB7EB35, AssetParameters27b58c1b3f137723c1cdbb881058a4b21230873b55318044de2a913e607a49f9S3VersionKeyFC482B2A] must have values`
So I suppose the problem is that the Lambda code is loaded **inline** and not from an **asset**. Is my guess right? Why should it not be possible to set log retention when the code is loaded inline?
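For context, the two code sources look like this; a comparison sketch where the asset path is illustrative:

```python
# Comparison sketch -- the asset directory below is illustrative.
inline_code = aws_lambda.Code.from_inline("def handler(event, ctx): pass")
asset_code = aws_lambda.Code.from_asset("lambda/")  # uploaded to S3 as an asset
```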
### Environment
- **CLI Version:**
- **Framework Version:**
- **Node.js Version:** v14.11.0
- **OS:** macOS Catalina v10.15.6
- **Language (Version):** Python 3.8.5
### Workaround
```
fn = aws_lambda.Function(self.stack, "failing-lambda",
code=aws_lambda.Code.from_inline(lambda_code),
runtime=aws_lambda.Runtime.PYTHON_3_7,
handler='index.failing_lambda',
function_name=self.workspace.name_for("failing-lambda"),
timeout=Duration.seconds(30), log_retention=aws_logs.RetentionDays.FIVE_DAYS
)
lmb_log_group = aws_logs.LogGroup(self.stack, "failing-lambda-log-group",
log_group_name='/aws/lambda/' + fn.function_name,
retention=aws_logs.RetentionDays.FIVE_DAYS,
removal_policy=core.RemovalPolicy.DESTROY)
```
---
This is :bug: Bug Report
Answers:
username_1: This doesn't seem like an error related to `log_retention`. Something else is going on in your app.
This is my Python CDK app that I'm able to deploy successfully -
```python
from aws_cdk import (
core,
aws_lambda,
aws_logs
)
app = core.App()
stack = core.Stack(app, "mystack")
aws_lambda.Function(stack, "failing-lambda",
code=aws_lambda.Code.from_inline("foo"),
runtime=aws_lambda.Runtime.PYTHON_3_7,
handler='index.failing_lambda',
log_retention=aws_logs.RetentionDays.FIVE_DAYS
)
app.synth()
``` |
phin1x/go-ipp | 707830393 | Title: panic: runtime error: makeslice: len out of range
Question:
username_0: The length returned by the readValueLength function in attribute.go is negative in some cases, which makes `make` panic in the decodeString function.
Error info:
```
panic: runtime error: makeslice: len out of range
goroutine 15 [running]:
github.com/username_1/go-ipp.(*AttributeDecoder).decodeString(0xc00051fc10, 0xc000016a00, 0x200a, 0x0, 0x0)
/root/go/pkg/mod/github.com/username_1/[email protected]/attribute.go:456 +0xc5
github.com/username_1/go-ipp.(*AttributeDecoder).Decode(0xc00004cc10, 0xc000428518, 0x1, 0x1, 0x1)
/root/go/pkg/mod/github.com/username_1/[email protected]/attribute.go:411 +0x163
github.com/username_1/go-ipp.(*RequestDecoder).Decode(0xc00051fc80, 0x818e80, 0xc000072bd0, 0x8, 0xc00004cbe8, 0x6ada57)
/root/go/pkg/mod/github.com/username_1/[email protected]/request.go:201 +0x331
```
Answers:
username_0: Suggest changing
```go
if length == 0 {
	return "", nil
}
```
to
```go
if length <= 0 {
	return "", nil
}
```
username_0: Problem cause:
This is caused by `*data = int16(order.Uint16(bs))` in the binary.Read function.
Test code:
```go
var a uint16 = 1<<16 - 1234
fmt.Println(a, int16(a))
```
Result: `64302 -1234`
username_1: The binary package is part of the golang standard library. Please report the issue to the upstream golang github project.
Status: Issue closed
|
nodecg/nodecg | 230261870 | Title: Make end-to-end tests for the Assets system
Question:
username_0: It has none, currently.
Answers:
username_0: Now that we have client-side coverage, adding these tests will be a bit more rewarding. It'll also be possible to tell what code paths are still untested.
username_0: This is important, but I just don't have the will to write more tests right now. This can wait until after 0.9 is out.
username_1: I would like to work on it. Can you guide me on this?
username_0: Hi @username_1!
Great! Here's some links to get you started:
- NodeCG's tests of its Sound system can probably be used as a starting point/reference.
- https://github.com/nodecg/nodecg/blob/master/test/sounds.js
- To learn how to run the tests locally, check out this link:
- https://github.com/nodecg/nodecg#running-tests-locally
- If you want to run the tests more quickly, you can run `ava` directly instead of running `npm t`, which also re-builds all of NodeCG before running the tests. If you're changing client-side code then you'll need to do this, but if you're only changing server-side code or the contents of your test files, then you can save time by just running `ava`.
- You'll first need to install `ava` globally via `npm i -g ava`
- You can also run specific test files or folders with `ava`. For example, `ava test/sounds.js` will only run that one test file, and `ava test/logger` will run all the tests in the `logger` folder.
- Check out [`ava`'s docs](https://github.com/avajs/ava) for more info.
username_0: Done in https://github.com/nodecg/nodecg/commit/dfce675341b61faca92c14d1473c313fc087eb5a
Status: Issue closed
|
devpi/devpi-ldap | 90143187 | Title: Remove password information in debug logs
Question:
username_0: The passwords are present in clear text in the debug log (when starting the `devpi-server` with `--debug`).
Even if it is only for debugging, I think it should be removed and replaced with a clear way to enable printing the password if needed.
Answers:
username_1: Can you paste an example, without the actual password of course? I checked the source and don't see where the password would be logged in debug mode.
username_0: 2015-06-22 16:26:43,030 DEBUG NETWORK:sent 98 bytes via <ldaps://ldap-server - ssl - user: CN=<NAME>,OU=Users,OU=MTL,OU=NCSA,OU=my_company,DC=my_company,DC=org - unbound - open - <local: 127.0.0.1 - remote: 10.129.1.10:636> - tls not started - listening - SyncStrategy>
2015-06-22 16:26:43,058 DEBUG NETWORK:received 22 bytes via <ldaps://ldap-server - ssl - user: CN=<NAME>,OU=Users,OU=MTL,OU=NCSA,OU=my_company,DC=my_company,DC=org - unbound - open - <local: 127.0.0.1 - remote: 10.129.1.10:636> - tls not started - listening - SyncStrategy>
2015-06-22 16:26:43,058 DEBUG NETWORK:received 1 ldap messages via <ldaps://ldap-server - ssl - user: CN=<NAME>,OU=Users,OU=MTL,OU=NCSA,OU=my_company,DC=my_company,DC=org - unbound - open - <local: 127.0.0.1 - remote: 10.129.1.10:636> - tls not started - listening - SyncStrategy>
```
username_1: Hmm, that looks like it's coming from the library we use. Could be kinda tricky to disable from the plugin, since the logging is set up by devpi-server and currently plugins don't have an API to adjust logging configuration. We recently merged a PR which adds logging configuration: https://bitbucket.org/hpk42/devpi/pull-request/225/make-logging-configurable-via-an-external/diff
If that approach works, then we should document it.
username_2: HI, I'm the author of the ldap3 library. The code in dev at https://github.com/username_2/ldap3.git should fix this issue. Can you try to use it? Let me know if you can't get the code from dev and rely on pypi for ldap3 installation.
Bye,
Giovanni
username_1: @username_2 Thanks for the information!
@username_0 did you have a chance to see if that fixes the issue? If so and when there is a new ldap3 release, I would make a new release with updated requirements.
username_2: ldap3 0.9.8.6 has been released. It hides sensitive data in logging by default. Can you check it?
Thanks,
Giovanni
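A quick way to verify could look like the sketch below; the `ldap3.utils.log` helpers are assumptions based on ldap3's logging documentation:

```python
# Sketch to verify that ldap3 >= 0.9.8.6 masks sensitive data in DEBUG logs.
# The helpers below are assumed from ldap3's logging docs.
import logging

from ldap3.utils.log import (
    NETWORK,
    set_library_log_detail_level,
    set_library_log_hide_sensitive_data,
)

logging.basicConfig(level=logging.DEBUG)
set_library_log_detail_level(NETWORK)
set_library_log_hide_sensitive_data(True)  # the new default

# ...perform a simple bind here and check that the DEBUG output no longer
# contains the clear-text password.
```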
username_0: 2015-07-09 09:39:16,592 DEBUG NETWORK:sent 98 bytes via <ldaps://ldap-server - ssl - user: CN=<NAME>,OU=Users,OU=MTL,OU=my_company,DC=my_company,DC=org - unbound - open - <local: 10.128.32.247:57723 - remote: 10.129.1.242:636> - tls not started - listening - SyncStrategy>
```
Is anything needed on the plugin side to make sure it uses the default?
username_0: With ldap3 0.9.8.6 nothing related to ldap is outputted. I guess the password is not visible anymore!
username_1: So your earlier comment was a false alarm? If so, I will update setup.py and make a release.
username_0: Yep, false alarm. I had some issues with my virtualenv. All seems good.
Sorry about that.
>
Status: Issue closed
username_1: Released 1.1.1 which requires the correct minimum ldap3 version. |
django-haystack/django-haystack | 48304277 | Title: Documentation about Woosh Python 3 support
Question:
username_0: The current documentation says Whoosh has partial Python 3 support because of an open issue; however, the link to the issue is dead:
http://django-haystack.readthedocs.org/en/latest/python3.html
Also, the Whoosh 2.0 release notes say Python 3 is supported:
http://whoosh.readthedocs.org/en/latest/releases/2_0.html#improvements
Can we consider Whoosh to have Python 3 support and modify the documentation accordingly?
Answers:
username_1: Good catch – if you have time to contribute a doc update, it'd be appreciated. Otherwise I'll try to fit it into 2.4.
username_0: So this issue the docs are referring to is no longer an issue?
username_1: @username_0 I'm assuming so because the tests are passing on Python 3.3 & 3.4, but it might be worth a quick review to confirm that the whoosh highlighting test is indeed using the Whoosh highlighter:
https://travis-ci.org/django-haystack/django-haystack/jobs/53709852#L1350
username_0: https://github.com/django-haystack/django-haystack/blob/master/test_haystack/whoosh_tests/test_whoosh_backend.py#L262
I assume we can conclude it works. Also over here I see that highlighting is supported:
http://django-haystack.readthedocs.org/en/latest/backend_support.html#backend-support-matrix
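For a quick standalone check (independent of Haystack), a minimal Whoosh highlighting snippet should run cleanly under Python 3:

```python
# Minimal sketch: index one document and print a highlighted excerpt.
import tempfile

from whoosh.fields import ID, TEXT, Schema
from whoosh.index import create_in
from whoosh.qparser import QueryParser

schema = Schema(path=ID(stored=True), content=TEXT(stored=True))
ix = create_in(tempfile.mkdtemp(), schema)

writer = ix.writer()
writer.add_document(path=u"/a", content=u"Whoosh highlighting on Python 3")
writer.commit()

with ix.searcher() as searcher:
    query = QueryParser("content", ix.schema).parse(u"highlighting")
    results = searcher.search(query)
    print(results[0].highlights("content"))
```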
I'll file a pull request
username_1: Great, thanks!
username_0: Here it is: https://github.com/django-haystack/django-haystack/pull/1154
Status: Issue closed
username_2: "The following backends are fully supported under Python 3. However, you may need to update these dependencies if you have a pre-existing setup.
Solr (pysolr>=3.1.0)
Elasticsearch
"
Basically Whoosh is not supported or not "fully" supported? |
pantsbuild/pants | 995069766 | Title: doc site: reorder versions so v2 is first
Question:
username_0: The version selector on the docs site puts the versions in lexical order which puts v1.30 closer to the top than v2.6 and, moreover, puts v2.6 in the middle of the list. At the very least, the list should be in reverse lexical order with v2.6 first and possibly v1.30 right after it. Deprecated versions of v1 should be at the bottom of the list.
<img width="229" alt="Screen Shot 2021-09-13 at 12 03 54 PM" src="https://user-images.githubusercontent.com/901363/133118144-ac7df39a-e2dd-467e-b2de-660690831bc3.png">
Answers:
username_1: This is dependent on Readme.io, unless we do something like change the version value to not start with `v`. But that would break a bunch of hyperlinks. Could you please send a feature request to Readme? https://docs.readme.com/docs/contact-support
Once that's done, we can close this because there is nothing Pantsbuild can do.
username_0: Submitted as requested.
Status: Issue closed
|
rust-lang/rust | 473660295 | Title: -Zprofile and -Clink-dead-code enabled leads to the linking error on macOS
Question:
username_0: I'm integrating [grcov](https://github.com/mozilla/grcov) and I get macOS build failures with a linker error when both the `-Zprofile` and `-Clink-dead-code` flags are enabled.
Consider a cargo project with the following manifest:
```toml
[package]
name = "foo"
version = "0.1.0"
authors = ["me"]
edition = "2018"
[dependencies]
core-foundation = "0.6.4"
```
and the `src/main.rs` file:
```rust
use core_foundation::dictionary::CFMutableDictionary;
use core_foundation::number::CFNumber;
fn main() {
let mut dict = CFMutableDictionary::new();
let key = CFNumber::from(1);
let value = CFNumber::from(2);
dict.add(&key, &value);
dbg!(dict.len());
}
```
Plain `cargo run` successfully compiles and run the binary (no additional env vars set):
```
$ cargo run
Compiling foo v0.1.0 (/Users/apple/Desktop/foo)
Finished dev [unoptimized + debuginfo] target(s) in 0.41s
Running `target/debug/foo`
[src/main.rs:10] dict.len() = 1
```
Now, as described in the `grcov` [readme](https://github.com/mozilla/grcov#grcov-with-travis), I add the necessary flags:
```
$ cargo clean
$ CARGO_INCREMENTAL=0 RUSTFLAGS="-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Zno-landing-pads" cargo run
Compiling libc v0.2.60
Compiling core-foundation-sys v0.6.2
Compiling core-foundation v0.6.4
Compiling foo v0.1.0 (/Users/apple/Desktop/foo)
error: linking with `cc` failed: exit code: 1
|
= note: "cc" "-m64" "-L" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "/Users/apple/Desktop/foo/target/debug/deps/foo-43971b11e9fc60ca.foo.ayycu2cz-cgu.0.rcgu.o" "-o" "/Users/apple/Desktop/foo/target/debug/deps/foo-43971b11e9fc60ca" "/Users/apple/Desktop/foo/target/debug/deps/foo-43971b11e9fc60ca.8f4xgbrv4jev7ka.rcgu.o" "-nodefaultlibs" "-L" "/Users/apple/Desktop/foo/target/debug/deps" "-L" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libtest-2f2c545f20952714.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libterm-d31c4cfba4ff2a7a.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libgetopts-2b60fe103d1e455e.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libunicode_width-7d268ed1d33eabb7.rlib" "/Users/apple/Desktop/foo/target/debug/deps/libcore_foundation-19ef10f3e7242abe.rlib" "/Users/apple/Desktop/foo/target/debug/deps/liblibc-5e68c041a34d0eed.rlib" "/Users/apple/Desktop/foo/target/debug/deps/libcore_foundation_sys-6d86f6aead90fb35.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libprofiler_builtins-f8dafa01db4dca39.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libstd-292d8bc6470467ba.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libpanic_unwind-912dbe632ba1cbae.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libbacktrace-ba5714b629684fb4.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libbacktrace_sys-eac4a78ff89c6e87.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/librustc_demangle-ef9b06bfe5cc2531.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libhashbrown-3b7d42ffe20649f3.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/librustc_std_workspace_alloc-4ad9bb4642dcbb0e.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libunwind-2705756291291215.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcfg_if-4b3e65f59d0d2bb7.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liblibc-654bf98555d844ff.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liballoc-46c32f2ea46d194a.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/librustc_std_workspace_core-ce3fd965850830d8.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcore-46c797561289aff0.rlib" "/Users/apple/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcompiler_builtins-974d425bd7750373.rlib" "-framework" "CoreFoundation" "-lSystem" "-lresolv" "-lc" "-lm"
= note: Undefined symbols for architecture x86_64:
"_CFMutableAttributedStringGetTypeID", referenced from:
_$LT$core_foundation..attributed_string..CFMutableAttributedString$u20$as$u20$core_foundation..base..TCFType$GT$::type_id::h9cd95ae9d283ffee in libcore_foundation-19ef10f3e7242abe.rlib(core_foundation-19ef10f3e7242abe.core_foundation.bhl000vu-cgu.0.rcgu.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
With the `-Clink-dead-code` flag removed from `RUSTFLAGS`, the build compiles successfully:
[Truncated]
## Meta
```
$ rustc --version --verbose
rustc 1.38.0-nightly (a7f28678b 2019-07-23)
binary: rustc
commit-hash: a7f28678bbf4e16893bb6a718e427504167a9494
commit-date: 2019-07-23
host: x86_64-apple-darwin
release: 1.38.0-nightly
LLVM version: 9.0
$ xcrun --show-sdk-version
10.14.1
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.13.6
BuildVersion: 17G65
```
Answers:
username_1: I'm seeing this issue except I get:
```
note: Undefined symbols for architecture x86_64:
"___isPlatformVersionAtLeast", referenced from:
_singleipconnect in libcurl.a(libcurl_la-connect.o)
_sectransp_connect_common in libcurl.a(libcurl_la-sectransp.o)
_sectransp_connect_step2 in libcurl.a(libcurl_la-sectransp.o)
_sectransp_version_from_curl in libcurl.a(libcurl_la-sectransp.o)
```
Unlike in https://github.com/alexcrichton/curl-rust/issues/279 I only get it when trying to build for grcov with `-Zprofile -Clink-dead-code`.
username_2: I'm getting the same error
```
= note: Undefined symbols for architecture x86_64:
"_CFMutableAttributedStringGetTypeID", referenced from:
_$LT$core_foundation..attributed_string..CFMutableAttributedString$u20$as$u20$core_foundation..base..TCFType$GT$::type_id::h58cab0fb10c1f89d in libcore_foundation-264aa8cf11fe1e90.rlib(core_foundation-264aa8cf11fe1e90.core_foundation.1egne0i1-cgu.0.rcgu.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
and I'm on nightly as well:
```
nightly-x86_64-apple-darwin (default)
rustc 1.41.0-nightly (ae1b871cc 2019-12-06)
```
username_1: I'm also seeing a similar issue in an Ubuntu-based Docker container:
```
error: linking with `cc` failed: exit code: 1
|
= note: "cc" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" "-L" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "/project/target/debug/deps/skylight-b16259e4e61f6fd1.skylight.384cbiki-cgu.0.rcgu.o" "-o" "/project/target/debug/deps/skylight-b16259e4e61f6fd1" "/project/target/debug/deps/skylight-b16259e4e61f6fd1.28i8qis0u3yk7bvn.rcgu.o" "-pie" "-Wl,-zrelro" "-Wl,-znow" "-nodefaultlibs" "-L" "/project/target/debug/deps" "-L" "deps/x86_64-linux" "-L" "/usr/lib/x86_64-linux-gnu" "-L" "/project/deps/x86_64-linux/lib" "-L" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,-Bstatic" "/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib" "/project/target/debug/deps/libsocket2-2961fab338b70973.rlib" "/project/target/debug/deps/libcurl_sys-6bb174e487a3b5f6.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libtest-589e8731323ee536.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libterm-f19c446aea8afe57.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libgetopts-60229cb5a4d83162.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libunicode_width-dc535a59a874e08d.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_std_workspace_std-6e69174eaf0d10a8.rlib" "/project/target/debug/deps/libslab-0e5169b0fa9dc45f.rlib" "/project/target/debug/deps/libmio_extras-8dbbe4d46043ed32.rlib" "/project/target/debug/deps/liblazycell-d26a110cec625dcd.rlib" "/project/target/debug/deps/libbytes-223909e0fc55cf73.rlib" "/project/target/debug/deps/libbyteorder-d87a5caa6531b0f0.rlib" "/project/target/debug/deps/libtempfile-e450a0141f8c9f7c.rlib" "/project/target/debug/deps/librand-93c7a37df2ab6599.rlib" "/project/target/debug/deps/librand_chacha-b7d7fc4cd5b9cb15.rlib" "/project/target/debug/deps/libc2_chacha-32c31ca297fd176c.rlib" "/project/target/debug/deps/libppv_lite86-683d6c4f0db0ca82.rlib" "/project/target/debug/deps/librand_core-deb7567b74364c86.rlib" "/project/target/debug/deps/libgetrandom-cbce369a23047747.rlib" "/project/target/debug/deps/libremove_dir_all-9cbeaf41f02bb348.rlib" "/project/target/debug/deps/libmio_uds-6f2ee5ecb50a9367.rlib" "/project/target/debug/deps/libmio-9a0970157657be0c.rlib" "/project/target/debug/deps/libslab-0537c06633a4abc9.rlib" "/project/target/debug/deps/libnet2-ecc6df404c3eadbb.rlib" "/project/target/debug/deps/libiovec-7c7512359b4d0a0b.rlib" "/project/target/debug/deps/libglob-1ba296cdcf0e01b9.rlib" "/project/target/debug/deps/libunix_socket-2fb58c0b8fd15d7a.rlib" "/project/target/debug/deps/librand-0d039ae8d8aea6af.rlib" "/project/target/debug/deps/librand_xorshift-9c9d826cc0de32e7.rlib" "/project/target/debug/deps/librand_pcg-d31e74e9908596f7.rlib" "/project/target/debug/deps/librand_hc-f814ccc9684acc55.rlib" "/project/target/debug/deps/librand_chacha-298f4925e39045d7.rlib" "/project/target/debug/deps/librand_isaac-a0a67e8a09e5017d.rlib" "/project/target/debug/deps/librand_core-73ada910099d71e8.rlib" "/project/target/debug/deps/librand_os-8a020002bc373844.rlib" "/project/target/debug/deps/librand_jitter-f0a9036cfebb833e.rlib" "/project/target/debug/deps/librand_core-82d5e96b6bc6a3fd.rlib" 
"/project/target/debug/deps/liblog4rs-8b92055510bcf8df.rlib" "/project/target/debug/deps/libtypemap-2bfdbd17724bc420.rlib" "/project/target/debug/deps/libunsafe_any-3e684ac4059a451d.rlib" "/project/target/debug/deps/libtraitobject-8a03022144c096bf.rlib" "/project/target/debug/deps/libthread_id-f78e213313ef3a9e.rlib" "/project/target/debug/deps/libserde_yaml-a660bc8499e84ebe.rlib" "/project/target/debug/deps/libdtoa-d56afc5a0f8ea498.rlib" "/project/target/debug/deps/libyaml_rust-0d992a63b4e9fbd6.rlib" "/project/target/debug/deps/liblinked_hash_map-ad81973f5cb83fed.rlib" "/project/target/debug/deps/libserde_value-c3495b725241042e.rlib" "/project/target/debug/deps/libordered_float-81df153054b4260d.rlib" "/project/target/debug/deps/liblog_mdc-3ebfff32d747017d.rlib" "/project/target/debug/deps/libfnv-db31bc8a14ceaafe.rlib" "/project/target/debug/deps/libflate2-85278b5770c1daeb.rlib" "/project/target/debug/deps/libminiz_oxide-9909ad042a9c9894.rlib" "/project/target/debug/deps/libadler32-43a566967e17fb29.rlib" "/project/target/debug/deps/libcrc32fast-1d1e5f713046710b.rlib" "/project/target/debug/deps/libchrono-f6cf678184b3e8fd.rlib" "/project/target/debug/deps/libnum_integer-70c5c2cfd73e7c66.rlib" "/project/target/debug/deps/libnum_traits-ab5d64efb4d1de5a.rlib" "/project/target/debug/deps/libarc_swap-f12b2f4d9eb2705e.rlib" "/project/target/debug/deps/libantidote-d1300fd4e41b7300.rlib" "/project/target/debug/deps/libnix-bb380732871eb4e6.rlib" "/project/target/debug/deps/libbitflags-87a64b28e4c93fa7.rlib" "/project/target/debug/deps/libbloomfilter-39694116dba9ce09.rlib" "/project/target/debug/deps/libsiphasher-76f1c433e1da5d58.rlib" "/project/target/debug/deps/librand-879a54245f237008.rlib" "/project/target/debug/deps/librand-df16d4b7a7dfff86.rlib" "/project/target/debug/deps/libbit_vec-cef18d491377605f.rlib" "/project/target/debug/deps/libpidfile-14227934404ede6a.rlib" "/project/target/debug/deps/liblog-bf7a31298b860d16.rlib" "/project/target/debug/deps/libnix-f2ad7472ae7fa5a5.rlib" "/project/target/debug/deps/libvoid-beb033ed00d89924.rlib" "/project/target/debug/deps/libbitflags-2d516e7f00201bec.rlib" "/project/target/debug/deps/libdocopt-729355d582495ce3.rlib" "/project/target/debug/deps/libstrsim-fb46177850682edf.rlib" "/project/target/debug/deps/libbuffoon-a31af420092a8f91.rlib" "/project/target/debug/deps/libsemver-9d5f04eeeb4cdcbc.rlib" "/project/target/debug/deps/libsemver_parser-ac9b4c719761673f.rlib" "/project/target/debug/deps/liburl-c1801a4cff106351.rlib" "/project/target/debug/deps/libpercent_encoding-35dbc3d881e553ce.rlib" "/project/target/debug/deps/libidna-f90c9f080a00d499.rlib" "/project/target/debug/deps/libunicode_normalization-27c042e472a056d0.rlib" "/project/target/debug/deps/libsmallvec-b0d47fef0ce4e6be.rlib" "/project/target/debug/deps/libunicode_bidi-278b71400887702d.rlib" "/project/target/debug/deps/libmatches-3e46d2bd749c1a05.rlib" "/project/target/debug/deps/libtime-8e5a3591786dc0b0.rlib" "/project/target/debug/deps/libsql_lexer-cfb8ea13a94fa676.rlib" "/project/target/debug/deps/liblalrpop_util-81b9f0e52c902db2.rlib" "/project/target/debug/deps/libenv_logger-70592c63808fd818.rlib" "/project/target/debug/deps/libtermcolor-771d66e06b83ef38.rlib" "/project/target/debug/deps/libatty-601bb16a39a7fdc2.rlib" "/project/target/debug/deps/libhumantime-21f37323cb7869dc.rlib" "/project/target/debug/deps/libquick_error-019894261f1f0347.rlib" "/project/target/debug/deps/libregex-766996c96720bda4.rlib" "/project/target/debug/deps/libthread_local-4ff3b91ef8ffb5e7.rlib" 
"/project/target/debug/deps/liblazy_static-fe9ad5d9a04c4ee2.rlib" "/project/target/debug/deps/libregex_syntax-af25017c0adc159d.rlib" "/project/target/debug/deps/libaho_corasick-64de71e23333f7bb.rlib" "/project/target/debug/deps/libmemchr-fd3c645d60c8da01.rlib" "/project/target/debug/deps/liblibc-86ded2db986cd877.rlib" "/project/target/debug/deps/liblog-de205a072af13734.rlib" "/project/target/debug/deps/libcfg_if-5a2eba7d9e1d582d.rlib" "/project/target/debug/deps/libserde_json-50c59c6a8cdb9028.rlib" "/project/target/debug/deps/libryu-6ce698c492f354ef.rlib" "/project/target/debug/deps/libitoa-c879c1c76ef2ad14.rlib" "/project/target/debug/deps/libserde-b6f4f434facd9e3c.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libprofiler_builtins-c95e10c81e17a8c3.rlib" "-Wl,--start-group" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-87194af682396769.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libpanic_unwind-af80e10d728d9fa0.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libhashbrown-dc72808411834d6e.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_std_workspace_alloc-fa9dccc2dd30bed7.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libbacktrace-1556888cccf238af.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libbacktrace_sys-f2d71e6a92ac1aa5.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_demangle-d875430891f9ff56.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libunwind-30a3a0ef179b2cb8.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcfg_if-d227b879b21a33f0.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liblibc-140f5d932e2c4290.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc-f2c6c629baadf366.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_std_workspace_core-6d0d8b33b8f7527c.rlib" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore-a0edec241ec339ce.rlib" "-Wl,--end-group" "/usr/local/rustup/toolchains/nightly-2020-01-21-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcompiler_builtins-4948ad4bea52b2b6.rlib" "-Wl,-Bdynamic" "-lutil" "-lutil" "-ldl" "-lrt" "-lpthread" "-lgcc_s" "-lc" "-lm" "-lrt" "-lpthread" "-lutil" "-lutil"
= note: /project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `_$LT$curl..error..Error$u20$as$u20$std..error..Error$GT$::description::h784aa74a27071602':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/error.rs:335: undefined reference to `curl_easy_strerror'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `_$LT$curl..error..ShareError$u20$as$u20$std..error..Error$GT$::description::h2a22e51c4016fd6a':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/error.rs:407: undefined reference to `curl_share_strerror'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `_$LT$curl..error..MultiError$u20$as$u20$std..error..Error$GT$::description::hd7b7696ecc2d1f01':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/error.rs:494: undefined reference to `curl_multi_strerror'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::version::Version::num::he5775daefcae7a6c':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/version.rs:27: undefined reference to `curl_version'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::version::Version::get::h9cda8f878f2e86c3':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/version.rs:35: undefined reference to `curl_version_info'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `_$LT$curl..easy..form..Form$u20$as$u20$core..ops..drop..Drop$GT$::drop::h840280cba5b6b6af':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/form.rs:72: undefined reference to `curl_formfree'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::form::Part::add::h15d584fd09939428':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/form.rs:316: undefined reference to `curl_formadd'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::new::h5556842d8101872c':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:610: undefined reference to `curl_easy_init'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::reset::h2265ecaaa01b7e99':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:634: undefined reference to `curl_easy_reset'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::cookies::hdc82f36de1e1c15c':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2564: undefined reference to `curl_easy_getinfo'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::perform::hc54ec8909b1f3750':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2624: undefined reference to `curl_easy_perform'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::unpause_read::hf954754948760006':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2645: undefined reference to `curl_easy_pause'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::unpause_write::h113b260b264e7a22':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2666: undefined reference to `curl_easy_pause'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::url_encode::h6d38c599b5f16355':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2677: undefined reference to `curl_easy_escape'
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2685: undefined reference to `curl_free'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::url_decode::hce7d0338fc734e17':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2710: undefined reference to `curl_easy_unescape'
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2719: undefined reference to `curl_free'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::recv::h8ae7e08849b1b67b':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2759: undefined reference to `curl_easy_recv'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::send::h9d957d0e2f63cebd':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2780: undefined reference to `curl_easy_send'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::setopt_long::hd2176641597aff61':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2812: undefined reference to `curl_easy_setopt'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::setopt_ptr::h5f74f34e93eaeb8e':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2820: undefined reference to `curl_easy_setopt'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::setopt_off_t::hb98bb1a5556639a9':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2829: undefined reference to `curl_easy_setopt'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::getopt_ptr::hc4f958828b2192fd':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2848: undefined reference to `curl_easy_getinfo'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::getopt_long::hc73d68d94e2499b5':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2868: undefined reference to `curl_easy_getinfo'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::handler::Easy2$LT$H$GT$::getopt_double::hdb6f3176b16d4228':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2877: undefined reference to `curl_easy_getinfo'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `_$LT$curl..easy..handler..Easy2$LT$H$GT$$u20$as$u20$core..ops..drop..Drop$GT$::drop::hecaba7b25a0ae8f6':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/handler.rs:2932: undefined reference to `curl_easy_cleanup'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::easy::list::List::append::hc26ebe80e3eb7260':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/list.rs:39: undefined reference to `curl_slist_append'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `_$LT$curl..easy..list..List$u20$as$u20$core..ops..drop..Drop$GT$::drop::h3d625851a8e9b64d':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/easy/list.rs:74: undefined reference to `curl_slist_free_all'
[Truncated]
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::Multi::timeout::h41235987c530c44c':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:511: undefined reference to `curl_multi_socket_action'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::Multi::get_timeout::h13bd31e994e4ec74':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:541: undefined reference to `curl_multi_timeout'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::Multi::wait::h344ef1c8a9e85a81':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:587: undefined reference to `curl_multi_wait'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::Multi::perform::hf633d99e197fc20b':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:641: undefined reference to `curl_multi_perform'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::Multi::fdset2::h69db902c41f168fd':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:688: undefined reference to `curl_multi_fdset'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::Multi::close::h4d24639d5b8d9e60':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:705: undefined reference to `curl_multi_cleanup'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::EasyHandle::set_token::<KEY>':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:795: undefined reference to `curl_easy_setopt'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::multi::Message::token::h04b2b85dbfc5caaf':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/multi.rs:985: undefined reference to `curl_easy_getinfo'
/project/target/debug/deps/libcurl-af0fb6c764673d16.rlib(curl-af0fb6c764673d16.curl.biyx96j4-cgu.0.rcgu.o): In function `curl::init::_$u7b$$u7b$closure$u7d$$u7d$::h2a664f675a467dcd':
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/curl-0.4.25/src/lib.rs:88: undefined reference to `curl_global_init'
collect2: error: ld returned 1 exit status
```
username_3: Interesting. Is there an example project that this fails for? It just works for me on a 2017 mbp with latest nightly (I'm using it on a small project though..).
username_1: @username_3 Unfortunately, I don't currently have a simplified project. I suspect it may be related to the way curl-rust is set up. But I can't say for certain.
username_3: Ok so this could be a somewhat idiosyncratic error rather than being a general error affecting all osx users of grcov...
username_1: @username_3 definitely not unique to macOS since I saw something very similar in an Ubuntu Docker container.
username_4: This issue was fixed in https://github.com/servo/core-foundation-rs/pull/357 so now we should wait for a new release of the `core-foundation` crate and all the dependent crates in the tree (`security-framework`, `native-tls`, etc.)
username_2: @username_4 that's great news. Thanks for the fix!
username_5: I'm getting this issue now as well
```
= note: Undefined symbols for architecture x86_64:
"_CFMutableAttributedStringGetTypeID", referenced from:
_$LT$core_foundation..attributed_string..CFMutableAttributedString$u20$as$u20$core_foundation..base..TCFType$GT$::type_id::h34814b711ea45d36 in libcore_foundation-a5a0f887432b6bbe.rlib(core_foundation-a5a0f887432b6bbe.core_foundation.2kp1ivj9-cgu.0.rcgu.o)
```
```
active toolchain
----------------
stable-x86_64-apple-darwin (default)
rustc 1.46.0 (04488afe3 2020-08-24)
```
username_6: ```
= note: Undefined symbols for architecture x86_64:
"_CFMutableAttributedStringGetTypeID", referenced from:
_$LT$core_foundation..attributed_string..CFMutableAttributedString$u20$as$u20$core_foundation..base..TCFType$GT$::type_id::h5e9b5accdb7d85cb in libcore_foundation-3fe28be0eb571159.rlib(core_foundation-3fe28be0eb571159.core_foundation.77blwz1k-cgu.0.rcgu.o)
```
# Environment
macOS Big Sur 11.1
stable-x86_64-apple-darwin (default)
rustc 1.49.0 (e1884a8e3 2020-12-29)
username_4: There are still a lot of crates which [depend](https://crates.io/crates/core-foundation/reverse_dependencies) on an older version of core-foundation (e.g. 0.7).
Test code (see above) with core-foundation = "0.9.1" and rustc 1.54.0-nightly works as expected without errors; the coverage report is generated successfully.
If you see errors, please check your crate's dependencies; there is probably an old core-foundation version somewhere in the tree.
username_7: @username_4 Does this mean that crates which depend on core-foundation should upgrade to the latest now? I imagine the situation will be similar to when Tokio went from 0.3 to 1.0, and it took a while for crates to upgrade.
username_4: @username_7 Yes. |
opensim-org/opensim-core | 235032450 | Title: Avoid duplication of student and teacher versions for C++ Hopper Device example
Question:
username_0: The MATLAB Hopper Device example has student and teacher versions of some files, but we avoid duplicating entire files by using CMake to generate the student version automatically from the teacher version by extracting the "answers."
We could use CMake to avoid duplication for the C++ version of the Hopper Device example as well, as suggested by @aseth1 in #1737.
Answers:
username_1: @username_0 Is this issue still relevant? |
realm/realm-java | 75816784 | Title: Link Queries example mistake
Question:
username_0: Hi guys,
I think I've found a mistake in the example provided under http://realm.io/docs/java/0.80.0/#link-queries
```java
RealmResults<Contact> contacts = realm.where(Contact.class).equalTo("email.active", true).findAll();
```
Taking into account this:
```java
public class Contact extends RealmObject {
private RealmList<Email> emails;
// Other fields…
}
```
The query should be:
_equalTo("**emails**.active", true)_.
Answers:
username_1: @username_0 Thank you!
username_1: You will be able to see the change at next release (when we update the documentation).
Status: Issue closed
|
galasa-dev/projectmanagement | 564070077 | Title: Go/NoGo for WebUI
Question:
username_0: Quick due dilligence to make sure customers will be happy to work with a web UI rather than IDEs.
(tempered by Galasa team's resource constraints for producing IDE plugins)
I expect they will be OK with WebUI but must check
Set aside time in Sponsor user calls to ask
Answers:
username_1: Needs to be clear that the WebUI is for automation runs only.
Development and local running of tests still remain in the IDE of choice. We will likely have an option to submit runs, monitor and view results in the IDEs, but everything else is likely to be in the WebUI, like configuration and resource management.
username_2: Had so many positive responses to all Galasa interactions that I believe we can close this as done - 'Go'
Status: Issue closed
|
DFranzen/cordova-FileStorage | 286985703 | Title: Plugin Might Need Update
Question:
username_0: Hi. The plugin might need to be updated. It feels broken or works half the time, especially as of late while using Cocoon.io. Thanks.
Answers:
username_0: Also, I advise users to wait at least 5 seconds before reusing "fileStorage.writeToUri" each time.
Kirangaira/Hello-World | 593500639 | Title: WAMVOTP to create an HTML page that says "Hello World" #4
Question:
username_0:
```python
file = open('helloworld.html', 'w')
msg = """<html>
<head>
<title>Hello World</title>
</head>
<body>
<p>Hello World</p>
</body>
</html>
"""
file.write(msg)
file.close()
```
Status: Issue closed
Answers:
username_0: I fixed this in: 7efa27e09db2434b1d10a6e9f37cc9d0207fb823 |
MariaMelnik/flutter_date_pickers | 665342076 | Title: Show dates from other months
Question:
username_0: How about showing dates from other months when we choose a week? When I select a week, I expect to see the full line in the calendar selected, with all 7 days of that week.
Answers:
username_1: @username_0 hi, I suppose it is a useful feature to show days from the previous month. I will add it in the next releases.
Thanks for pointing it out!
username_0: Thanks, Maria! Would love to use it ) |
ContinuumIO/anaconda-issues | 260608071 | Title: Navigator Error
Question:
username_0: ## Main error
An unexpected error occurred on Navigator start-up:
psutil.AccessDenied (pid=2644)
## Traceback
```
Traceback (most recent call last):
File "D:\Programs\Anaconda\lib\site-packages\psutil\_pswindows.py", line 620, in wrapper
return fun(self, *args, **kwargs)
File "D:\Programs\Anaconda\lib\site-packages\psutil\_pswindows.py", line 690, in cmdline
ret = cext.proc_cmdline(self.pid)
PermissionError: [WinError 5] Отказано в доступе
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Programs\Anaconda\lib\site-packages\anaconda_navigator\exceptions.py", line 75, in exception_handler
return_value = func(*args, **kwargs)
File "D:\Programs\Anaconda\lib\site-packages\anaconda_navigator\app\start.py", line 108, in start_app
if misc.load_pid() is None: # A stale lock might be around
File "D:\Programs\Anaconda\lib\site-packages\anaconda_navigator\utils\misc.py", line 384, in load_pid
cmds = process.cmdline()
File "D:\Programs\Anaconda\lib\site-packages\psutil\__init__.py", line 701, in cmdline
return self._proc.cmdline()
File "D:\Programs\Anaconda\lib\site-packages\psutil\_pswindows.py", line 623, in wrapper
raise AccessDenied(self.pid, self._name)
psutil.AccessDenied: psutil.AccessDenied (pid=2644)
```
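The failing call is `process.cmdline()`: `psutil` raises `AccessDenied` for processes the current user is not allowed to inspect. A minimal sketch of a defensive guard around that call, purely for illustration (the actual fix shipped for this bug may differ; see the linked issue below):

```python
# Hedged sketch: return None instead of crashing when a process cannot be
# inspected. Illustration only; not the actual Navigator patch.
import psutil

def cmdline_or_none(pid):
    """Return the command line for pid, or None if it cannot be read."""
    try:
        return psutil.Process(pid).cmdline()
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        return None
```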
## System information
```
python: 3.6.1
language: ru_RU
os: Windows;10;10.0.15063;AMD64;Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
version: 1.6.2
platform: win-64
qt: 5.6.2
py
```
Status: Issue closed
Answers:
username_1: **See Issue #1984 for more information on how to fix this.**
---
Closing as duplicate of #1984
---
Please remember to update to the latest version of Navigator to include
the latest fixes.
Open a terminal (on Linux or Mac) or the Anaconda Command Prompt (on windows)
and type:
```
$ conda update anaconda-navigator
$ conda update navigator-updater
``` |
BPMspaceUG/SQMS2 | 505181564 | Title: When a Answer or SyllabusChapter is set to active the following Checks shall be executed
Question:
username_0: Check if the related and selected Parent [Question or Syllabus] is active then allow, else Error "Question/Syllabus must be active"
1.5 h
Answers:
username_0: 3
username_0: removed the check at **Syllabus** because it conflicted with issue #25
username_0: see the solution at the bottom of issue #26.
username_1: sorry, by accident I moved this back... |
DavBfr/dart_pdf | 594115563 | Title: I can't create a PDF in iOS
Question:
username_0: I've found that the future `await file.writeAsBytes(pdf.save());` is not finishing.
Android works, iOS doesn't.
Does anyone have this problem too?
Dart version: 3.9.0
Flutter: 1.12.13+hotfix.9
iOS: 13.3.1
dart_pdf: ^1.6.0
Answers:
username_0: I've found the problem.
Conditional type: pw.Text(anyBool ? "Great!" : "Nop") makes the future file.writeAsBytes(pdf.save()) await infinitely.
This bug is only for iOS, Android it works perfectly.
username_1: maybe this is because you run the iOS code using release or profile builds. If you use a debug build, you will have some assert that checks you are not doing something wrong. These checks are disabled in release builds for performance reasons.
Status: Issue closed
|
net-snmp/net-snmp | 727400037 | Title: Enhancement request; SCA with CodeChecker
Question:
username_0: CodeChecker is a static analysis infrastructure built on the LLVM/Clang Static Analyzer toolchain.
A GitHub Action is used to build net-snmp with SCA.
The GitHub Actions CodeChecker workflow is started manually.
The result is stored in an artifact with HTML files that can be downloaded and analyzed.
Reference:
- https://github.com/Ericsson/codechecker
Pull request created; #200 |
Ryujinx/Ryujinx-Games-List | 462302766 | Title: 限界凸記 モエロクロニクル H
Question:
username_0: ## 限界凸記 モエロクロニクル H
#### Current on `master` :
* Build Version : 1.0.2784
```
Unhandled Exception: Ryujinx.HLE.Exceptions.UndefinedInstructionException: The instruction at 0x000000000020191c (opcode 0xe6c31002) is undefined!
```
Answers:
username_1: Needs an update!
username_2: ## Moero Chronicle Hyper
#### Current on `master` : 1.0.4682
Game crashes on boot.
```
Last error returned.
00:00:15.482 | HLE.HostThread.0 Emulation CurrentDomain_UnhandledException: Unhandled exception caught: Ryujinx.HLE.Exceptions.UndefinedInstructionException: The instruction at 0x0000000002f11cc4 (opcode 0xf2c0617f) is undefined!
at Ryujinx.HLE.HOS.Kernel.Process.KProcess.UndefinedInstructionHandler(Object sender, InstUndefinedEventArgs e) in C:\projects\ryujinx\Ryujinx.HLE\HOS\Kernel\Process\KProcess.cs:line 1114
at ARMeilleure.State.ExecutionContext.OnUndefined(UInt64 address, Int32 opCode) in C:\projects\ryujinx\ARMeilleure\State\ExecutionContext.cs:line 134
at ARMeilleure.Instructions.NativeInterface.Undefined(UInt64 address, Int32 opCode) in C:\projects\ryujinx\ARMeilleure\Instructions\NativeInterface.cs:line 71
```
#### Outstanding Issues:
https://github.com/Ryujinx/Ryujinx/issues/1005
#### Log file :
[moero-chronicle-hyper.log](https://github.com/Ryujinx/Ryujinx-Games-List/files/4742191/moero-chronicle-hyper.log)
username_3: Updated
username_2: ## Moero Chronicle Hyper
#### Game Update Version : 1.0.0
#### Current on `master` : 1.0.5171
Game stays on a black screen without reacting to anything and without sound, while the logs only spam the `GetActualVibrationValue` stubs
#### Hardware Specs :
##### CPU: i3-7300
##### GPU: NVIDIA Corporation GeForce GTX 1050 Ti/PCIe/SSE2
##### RAM: 8GB
#### Log file :
[moero-chronicle-hyper.zip](https://github.com/Ryujinx/Ryujinx-Games-List/files/5069450/moero-chronicle-hyper.zip)
username_3: Updated
username_3: Same status on 1.0.6807 |
luni64/TeensyStep | 303758450 | Title: deceleration confused
Question:
username_0: Hello,
I noticed a strange behaviour during deceleration. It seems the speed was reduced until zero, not until the pull-in speed. Also, there seems to be a jump from constant speed to a lower value at the beginning of deceleration.
I changed the end of line 124 in StepControl.h
from “… = F_BUS / (sqrt_2a * sqrtf(motorList[0]->target - pos - 1) + 0 * vMin / 2);”
into “… = F_BUS / (sqrt_2a * sqrtf(motorList[0]->target - pos - 1) + vMin);”
Were you playing around with this piece of code and did you forget to change it back to the original? There is a difference in older versions of this file.
Answers:
username_1: deceleration.
Obviously there needs to be a jump from constant to some lower speed if you want to decelerate? Do you think that the jump is too large? Can you please post the motor settings you chose so that I can try to reproduce the behavior you observed.
username_0: "It seems the speed was reduced until zero"
I expect this: . . . constant (max.) speed --> ramp down (until pull-in speed) --> jump to speed = 0 (stand still)
I got that: . . . constant (max.) speed --> a sudden change to a slower speed (jump) --> ramp down until speed = 0 (and of course stand still)
I guess the deceleration ramp starts with [max. speed - pull-in speed]; it ends with zero.
But it should start with max. speed and end with pull-in speed.
You removed the offset 'vMin' (by inserting the factor 0) in line 124 of StepControl.h, which calculates the deceleration. In the acceleration calculation in line 116, 'vMin' plays a role.
You can reproduce this behavior with any motor settings, but use slow acceleration and a pull-in speed of approximately 50 % of max. speed.
In my project I run 2 steppers, but not at the same time yet. I tried a lot of different settings. I guess you don't need my settings to reproduce the behavior, but at the weekend I'll be home and I can post them.
Thanks for the reply, and forgive my poor English (I'm a German too)
<NAME>
username_0: Hi Lutz,
thanks for showing me this way to record the speed profile.
I used the same motor settings as you but i added "motor.setPullInSpeed(4000)" in void setup.
Here is my record:

Do you see what is wrong?
<NAME>
Status: Issue closed
username_1: Oh, that doesn't look nice indeed. As you already observed, the reason is the "0 x vMin" bug in stepControl.h. For small vMin's this is hardly noticeable but for your large vMin this is of course not good.
I changed the corresponding code accordingly and added a small safety check.

The result is probably what you want to see:




As you see in the last graph, the frequency doesn't reach vMin after the deceleration phase; this is due to the fact that I calculate the new period using the current position. It would be better to start deceleration a few steps earlier to ensure that it goes down to vMin at the end. Again, for small vMin's this is no problem at all; for large vMin's this might be a problem. In case you get step losses during deceleration, try to reduce vMin.
Best wishes
Lutz
username_1: Added the branch vMin_Bugfix. It would be great if you could test whether the code in that branch works for you
Lutz
username_0: Hello Lutz,
yes, it works perfectly.
By the way, I preferred your stepper library because it has the ability to adjust the pull-in speed. I missed that in the ordinary AccelStepper library. I need it to save time during acceleration and deceleration and to reduce vibrations and noise.
Now I'm very happy, and I thank you for your cooperation.
Regards
Peter
Status: Issue closed
|
BPSTechServices/pcef-public | 950724039 | Title: Planning grant application
Question:
username_0: Here is a link to the application fields for the portal for planning grants https://portlandoregongov.sharepoint.com/:w:/s/pcef/EfVvW15KIHdGsjoHRRHT5wcB3IKpu96TwZ0NlsVJeRqtKw?e=ijLun5
We also need the attachments section again, with these categories. Docs will likely be in PDF, with the exception of the budget, which should always be in Excel, but we want to be able to accept images (jpg, png, etc.) and Word docs as well.
• Application - required
• Budget (excel) - required
• Financials - required
• Others/optional<issue_closed>
Status: Issue closed |
simatec/ioBroker.backitup | 892764717 | Title: cifs mount - mount: only root can use "--options" option
Question:
username_0: Hello, my tests with manually mounting my Synology CIFS drive have finally been successful.
But it seems Backitup is using some mechanism that prevents non-root users from using the mount.
I have this in my log:
```
backitup.0 | 2021-05-16 23:17:49.053 | error | mount: only root can use "--options" option
backitup.0 | 2021-05-16 23:17:49.053 | error | (28278) Error: Command failed: mount -t cifs -o username=iobroker,password=****,rw,file_mode=0777,dir_mode=0777 //**MYSERVER***/Backup\iobroker /opt/iobroker/backups
```
This is my manual mount with root working:
```
sudo mount -t cifs //**MYSERVER***/Backup/iobroker /var/backups --verbose -o vers=1.0,username=iobroker,password=<PASSWORD>
mount.cifs kernel mount options: ip=fd00:c2b6:b24b:be67:2827:688d:e6a1:6a3b,unc=\\**MYSERVER***\Backup,vers=1.0,user=iobroker,prefixpath=iobroker,pass=********
```
I found that fstab needs to be updated so that it allows non-root users to mount using options, but this is not possible when using the full mount form:
https://community.hpe.com/t5/General/mount-only-root-can-do-that-why/m-p/4467749/highlight/true#M17566
Is there anything I can do to make it work?
Answers:
username_1: The user iobroker needs sudo for the mount.
Please activate the sudo-mount option in the NAS settings of Backitup
username_0: Thank you @username_1, I just tested with this option, but I now get another error:
```
Started iobroker ...
[DEBUG] [mount] - first mount attempt with smb option failed. try next mount attempt without smb option ...
[ERROR] [mount] - [undefined Error: Command failed: sudo mount -t cifs -o username=iobroker,password=****,rw,file_mode=0777,dir_mode=0777 //***SERVER***/Backup\iobroker /opt/iobroker/backups
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
[ERROR] [mount] - [IGNORED] Error: Command failed: sudo mount -t cifs -o username=iobroker,password=****,rw,file_mode=0777,dir_mode=0777 //***SERVER***/Backup\iobroker /opt/iobroker/backups
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
[DEBUG] [iobroker] - host.raspberry2 5751 states saved
[DEBUG] [iobroker] - host.raspberry2 6701 objects saved
[DEBUG] [iobroker] - Backup created: /opt/iobroker/backups/iobroker_2021_05_24-17_45_04_backupiobroker.tar.gz
```
Is anything else wrong now? The manual mount shown above still works fine, including creating files on the NAS.
username_1: Please show your complete config
username_0: @username_1 Please see below:

Or is there another place that gives a better overview?
username_2: Try putting a slash, not a backslash
Backup/iobroker
SMB 1.0 is most likely the wrong protocol version. The whole world should have moved to SMB > 2.0 by now.
Status: Issue closed
|
lbdjana23/Iot_kelompok | 883451357 | Title: Wiring the Relay Module to the ESP32
Question:
username_0: The relay module needs to be wired to the ESP32 board.
**Requirements**
1. There are three pins: 1 is VCC, 2 is TRIG, 3 is GND.
2. Relay VCC pin to ESP32 3V3
3. Relay GND pin to ESP32 GND
4. Relay TRIG pin to GPIO32 (pin 32) on the ESP32; see the sketch below
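A minimal sketch of driving the TRIG pin once the wiring above is in place, assuming the board runs MicroPython (the project may instead use the Arduino toolchain):

```python
# Hedged sketch: pulse the relay via GPIO32, assuming MicroPython firmware.
from machine import Pin
import time

relay = Pin(32, Pin.OUT)  # TRIG pin, wired as in step 4 above

relay.value(1)  # energize the relay
time.sleep(1)
relay.value(0)  # release the relay
```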
**The Mission Succeeds If**
1. Git branch **RelayWiring**
2. A photo is taken and uploaded to the docs folder with the file name **relay_wiring.jpg**
3. Git pull into the main branch
4. Close the issue and attach the git pull as proof<issue_closed>
Status: Issue closed |
yanbaru-expert/team_project_58 | 989994569 | Title: Add text material and video material pages
Question:
username_0: - Create the `texts` controller and the `movies` controller
1. The generated view files must be left empty
- Configure the routing with resources
1. Use `only` to restrict which actions are available
2. The `texts` controller gets `index, show` only; the `movies` controller gets `index` only
3. Set the view for the `texts` controller's `index` action as the **top page**<issue_closed>
Status: Issue closed |
mpc001/end-to-end-lipreading | 518690716 | Title: About the concatenation between the audio and visu
Question:
username_0: Hi, thank you for the code. I want to ask about the concatenation part:
```python
inputs = torch.cat((audio_outputs, video_outputs), dim=2)
outputs = concat_model(inputs)
```
Does this mean it is being concatenated along the feature axis?
Answers:
username_1: Hi,
Audio features and video features are concatenated along the feature axis. We downsample the audio input to 25 frames per second by convolutional layers and pooling layers.
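A small sketch of what that concatenation does to the tensor shapes, assuming `(batch, time, features)` tensors (the sizes below are illustrative, not the model's actual dimensions):

```python
# Hedged sketch: feature-axis concatenation of audio and video streams;
# the shapes are made up for illustration.
import torch

audio_outputs = torch.randn(8, 25, 256)  # (batch, time, features)
video_outputs = torch.randn(8, 25, 256)
inputs = torch.cat((audio_outputs, video_outputs), dim=2)
print(inputs.shape)  # torch.Size([8, 25, 512])
```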
username_2: Hi,
Did you also use convolutional layers when you tried to use MFCC instead of ResNet?
Status: Issue closed
|
etal/cnvkit | 125670003 | Title: Expose MIN_MAPQ in batch?
Question:
username_0: 1. Is this hyperparameter something that you've played with?
2. Is it worth exposing as an option in `batch`? I see it's already exposed in `coverage`.
Answers:
username_1: I've tested this parameter using the `coverage` command. MAPQ filtering seems to make the results significantly worse, even if the filter is as low as `-q 1`. So the ambiguously mapped reads do seem to provide a useful indicator of copy number, in aggregate at least. (This is based on mappings from bwa-mem.)
The CNVnator team found the [same result](http://genome.cshlp.org/content/21/6/974.long) (see Methods / Read placement) with MAQ, BWA's predecessor.
username_0: Cool, sounds like it's best to keep it hidden for most users.
Status: Issue closed
|
JabRef/jabref | 141115832 | Title: Export to MS Office 2007 XML file puts DOI in the StandardNumber tag
Question:
username_0: * JabRef version (available in the About box): 2.11b4
* Operating system and version: Windows 7, Word 2013
* Steps to reproduce:
1. File, Export to MS Office 2007 format
2. Open the XML with Notepad and you can see the tagging
3. In Word: References tab, Manage Sources, Browse, choose the file that was exported, Edit; you will see the DOI in the standard number field
* If applicable, excerpt of the bibliography file, screenshot, and excerpt of log (available in the error console)
Answers:
username_1: Yes.
There is no "DOI" tag or anything else like this. Therefore, the former developers seems to have interpreted the "StandardNumber" tag as a "standardized number" and not as a "number of a standard" and therefore put the DOI/ISBN in there...
What do you propose to change?
username_0: Just change the XML export tagging from <b:StandardNumber> to <b:DOI>. Right now, we do a search and replace on the exported XML file and then import it into Word with the DOI number in the DOI box in Word 2013.
Thanks for Jabref. It has some very nice features.
username_2: What about storing the DOI in the <b:DOI> tag in addition to using the <b:StandardNumber>? When looking at the interface in Word 2010, there is only a standard number field, no DOI field. Because of this, the mapping was done.
http://mahbub.wordpress.com/2007/03/24/details-of-microsoft-office-2007-bibliographic-format-compared-to-bibtex/ is a comparison of bibtex and word.
JabRef also stores ISBN or other numbers in the standard number field, depending on their availability, I think. See `MSBibDatabase` and `MSBibEntry`.
username_1: http://www.ecma-international.org/publications/standards/Ecma-376.htm - this is the standard MS is using. In [ECMA-376 4th edition Part 4]() you can find the XML schema (shared-bibliography.xsd) for the bibliography definition. And there is nothing like "ISBN" or "DOI" defined.
username_2: @username_0 could you provide a screenshot of word which has the doi box?
@username_1 may be office 2013 uses another standard?
Status: Issue closed
username_0: I have attached a screenshot from Word 2013. Word 2010 also has the same fields.
To see the DOI and some other fields, you must check the box "Show All Bibliography Fields".
Thanks

username_0: I found a YouTube video that shows the DOI field in Word 2010. Skip to the 2:31 mark in this video:
https://www.youtube.com/watch?v=oDEF5aYDDEE
username_1: Not available here (MS Word 2010 German edition):

username_1: Additional note: Even after manually creating the `<b:DOI>...</b:DOI>` tag in an exported XML file, the DOI field does not show up in the dialog above.
username_0: Do you have the latest Office updates?
I also found the DOI in a screenshot on page 5 of this US college tutorial, so I know it is not just me.
http://www.allegany.edu/Documents/Library/Tips%20on%20using%20Microsoft%20Word%20to%20create%20Bibliographies%20and%20Citations.pdf
Is there a way for me to create a custom export from Jabref or do I need to download the source code and change it?
Thanks
username_3: Very mysterious. I also have the same dialog as @username_1 in Word 2010 German.
username_4: So what can we do about this?
username_2: What about just exporting the doi in the doi tag additionally? I do not see any downside.
username_1: Well, the resulting file would be schema invalid regarding the ECMA-376 standard... But as Word (in the English version) seems to produce such "invalid" files this should not cause much trouble.
Status: Issue closed
|
home-assistant/addons | 982823940 | Title: Terminal & SSH: keyboard layout
Question:
username_0: ## The problem
Hi, the web terminal uses a different keyboard layout from my system's, and I couldn't find a way to set it.
I would like to have the keyboard working with my system's layout.
There's much more to say... In the past it worked correctly. Last time I used it, maybe one month ago, it worked as expected.
Thanks!
## Environment
- Add-on with the issue: Terminal & SSH
- Add-on release with the issue: 9.1.3
- Last working add-on release (if known): don't know, but it worked correctly in the past
- Operating environment (OS/Supervised): Home Assistant OS 6.2 on a Raspi 4
## Problem-relevant configuration
```yaml
authorized_keys: []
apks: []
password: ''
server:
tcp_forwarding: false
```
Answers:
username_1: Is that working in the emulator here <https://xtermjs.org/>?
username_0: Same issue there. It looks like a matter of browser (Chrome) configuration.
On Firefox I am not having the same issue.
Have you got any hint for that, please? The Chrome configuration doesn't look wrong... it's set to my language and locale, apparently.
Thanks for pointing me in the right direction to investigate further!
username_1: Disable all content/ad blockers |
go-telegram-bot-api/telegram-bot-api | 470818267 | Title: HTTP proxy Error
Question:
username_0: Socks5 proxy and VPN work fine, but HTTP/HTTPS proxy doesn't.
When I create an HTTP client with the proxy:
```go
proxyUrl, err := url.Parse("http_proxy_here")
client := &http.Client{Transport: &http.Transport{Proxy: http.ProxyURL(proxyUrl)}}
```
and pass it to `NewBotAPIWithClient`, I get the error
`Post "https://api.telegram.org/botMY_TOKEN/getMe": Found`
It doesn't look like an error (a 302 Found response isn't an error), but Go panics and the response is nil.
Status: Issue closed
Answers:
username_1: A 302 is not the correct response for the getMe endpoint; I assume the proxy is doing something weird.
materialsvirtuallab/megnet | 820540907 | Title: Atomic embeddings for QM9 models
Question:
username_0: When reading atomic embeddings from QM9 models (MEGNet-simple) from files, for example:
```
model_h = MEGNetModel.from_file('../mvl_models/qm9-2018.6.1/H.hdf5')
embedding_layer = [i for i in model_h.layers if i.name.startswith('embedding')][0]
embedding = embedding_layer.get_weights()[0]
print('Embedding matrix dimension is ', embedding.shape)
```
One obtains the following:
`Embedding matrix dimension is (9, 16)`
How is this matrix embedding atoms H,C,N,O,F if the atomic number goes up to 9 (F)?
Answers:
username_1: @username_0 for historic reasons, the QM9 models were built by using atom types, so the model will first convert the atomic number to the corresponding type before passing it to the model.
https://github.com/materialsvirtuallab/megnet/blob/0047e04f3762433590d0bdda37bbb6ef6cf4cab2/megnet/data/qm9.py#L10
username_1: @username_0 To be precise, our original qm9 models were initially trained on the processed qm9 dataset by Faber et al., J. Chem. Theory Comput. 2017, 13, 11, 5255–5264, where they have atom types (integers) instead of atomic numbers. In later model developments, we decided to use the atomic number for graph construction, so we had to make this conversion to be compatible.
username_0: @username_1 Thanks for your quick reply!
Does that mean that in the code above `embedding[1]` == "H", `embedding[2]` == "C", `embedding[4]` == "N", `embedding[6]` == "O" and `embedding[8]` == "F"?
username_1: Yes you are right @username_0
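In other words, the thread establishes the following atomic-number-to-row mapping; a small sketch (the constant and helper names are illustrative, only the indices come from this thread):

```python
# Hedged sketch: row indices of the (9, 16) embedding matrix per element,
# as confirmed above. Names are illustrative only.
ATOMIC_NUMBER_TO_ROW = {1: 1, 6: 2, 7: 4, 8: 6, 9: 8}  # H, C, N, O, F

def qm9_embedding_row(embedding, atomic_number):
    """Return the 16-dim embedding row for a given atomic number."""
    return embedding[ATOMIC_NUMBER_TO_ROW[atomic_number]]
```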
username_0: Thanks again! That was my initial guess, but I wanted to cross check with you. I think it is useful to know this before trying transfer learning for MEGNet models reading pymatgen molecules directly (i.e., using atomic number).
username_0: BTW, you are welcome to close the issue.
Status: Issue closed
|
nextras/orm | 455255941 | Title: How to annotate property that is managed by database itself
Question:
username_0: hi,
I have properties `createdAt` and `updatedAt` whose values are managed by the DB via `CURRENT_TIMESTAMP`:
```
/**
* @property-read DateTimeImmutable $createdAt {virtual}
* @property-read DateTimeImmutable $updatedAt {virtual}
*/
class UserEntity extends Entity
...
```
The problem is that I must have the `{virtual}` annotation there, otherwise I am getting `Property UserEntity::$createdAt is not set.`. But when it is `{virtual}`, I cannot access the value even using `getRawValue()`, because I am getting `Maximum function nesting level of '256' reached, aborting!` since `getRawValue()` is calling itself.
So does somebody know how to work around this? I would expect that `@property-read` would not raise the `Property is not set.` exception, so maybe this is a bug. But I need to solve it somehow for now, because it is blocking me.
Thanks
Answers:
username_1: Hi, first, some fundamental rules:
1) Do as much as possible in the ORM layer. If not crucially needed, create the datetime in PHP, mainly for consistency: other datetime values will probably be filled by PHP (e.g. users' verified time). Of course, all this handling could be done in the database layer, but that is quite the opposite of what we try to achieve with ORMs.
2) Be true about the actual property types. The entity exists in PHP in an invalid state until you persist it. This is how Orm and developers see it. If you pass the entity somewhere, it will not be fully usable. So, createdAt should be annotated with the `|null` type annotation. That's it. I understand it may be difficult to read the value after persist without unpleasant checks. You may add your own getter that will check for nullability and return a non-nullable datetime.
----
To propagate changes from db, you may use:
- ignore the change for "now"; e.g. in another request it will load the proper createdAt value
- use the `IPersistAutoupdateMapper` interface and implement a method to say which columns you want to update; this works for Postgres, where it uses the `RETURNING` syntax, and for MySQL, where it uses a separate query; [see test implementation](https://github.com/nextras/orm/blob/cf4ad4edfdcc514fbf0a6e539eafe3e4b60b5291/tests/inc/model/bookCollection/BookCollectionsMapper.php)
- use [Model::refresh()](https://nextras.org/orm/docs/3.1/model#toc-refresh)
Status: Issue closed
|
drmohundro/SWXMLHash | 57089152 | Title: lazily create nested enums
Question:
username_0: Reading https://devforums.apple.com/message/1101244#1101244 it makes me think that this would be the right way to create the nested enum structure lazily.
Any thoughts?
Answers:
username_1: `Delayed` sounds like a very interesting way to do this. My initial thought had been to keep a queue of element names and then enumerate the queue when the final `element` was requested, but this might be a more elegant solution. I'll try to get a spike going with some of these ideas.
username_0: Have you had any time to look into this?
I just happen to have a 35MB XML file, with around 500k lines, and it is not pretty ;)
Takes about 16 seconds to parse (in Release mode on my MBP) and takes a whopping 700MB of RAM :D
Profiling shows that most of the time is (kind of obviously) spent in retain and release calls. So it would be hugely beneficial to circumvent Swift's memory management here.
Something like allocating one block of memory and using pointers to address it.
(I guess that should be another issue probably)
Of course, most people will never see these problems when working with small XML files and I could use the raw NSXMLParser. But hey, SWXMLHash is really pretty ;) (and 'Swifty')
username_1: Now *that* is a perfect use case for lazy parsing! :smiley:
I haven't had a chance to yet, but it is definitely on my to do list. Would it be possible to share the XML file (or perhaps a sanitized version of it) so that I can easily compare/contrast the differences in performance, etc.? If it is too big of a hassle then no worries.
username_0: Sadly I am not allowed to give out the data :(
username_1: See [this gist](https://gist.github.com/username_1/ad39a95d917fc74c51f5) for my current thoughts on how I think I might approach lazy loading.
Anything I'm missing with that general approach? I'm conceptually thinking of each subscript operation as a stream or path of operations and, given that, I should be able to parse only the XML elements in that stream.
username_1: I've begun work on this and have created #17 to track this work.
username_1: I think this is ready for a first iteration now - apologies on taking so long getting something going here! @username_0 do you think you could pull down PR #17 and see how it handles your 35MB file? Before I bring this in to master, I'd like to make sure I'm actually improving the performance :)
A few notes... I initially tried to detect when it was safe to stop parsing, but I don't think that will work with all XML fragments, so the `NSXMLParser` will still look through the entire document, but will only load results that match what was requested. The actual parsing doesn't get triggered until a call to `all` or `element` is made (or a call that calls them like `children`).
username_0: Ok, I am speechless :)
Without changing anything in my code (I am still calling just the parse function) parsing now takes only 3.2 seconds! (previously 16-20 seconds)
Memory consumption is also way down to around 200-300MB (it increases though when scrolling the table, where I present text of the nodes)
Those are just quick comparisons.
__And now for the kicker:__
When using your new lazy parsing, I get results almost immediately (hard to measure exactly because I fetch a few nodes and present them in a table.
And the best part, it only takes about 80MB of memory, and when scrolling the table actually never goes higher than maybe 90-100MB and immediately dropping back to 80MB when stopping scrolling.
This is brilliant!
You @username_1 are the man!
I still can't stop grinning :D
username_1: That's awesome! Glad it works!
Quick question - are you targeting Xcode 6.3 or an earlier version? I only ask because if you're targeting an earlier version, I will just merge this into master. There are very few changes in my PR that assume Swift 1.2, so it would be easy to drop them.
username_0: I am only using Xcode 6.3 anymore.
username_1: Okay - I'll just merge into the xcode-6.3 branch then.
username_1: It's now live in the xcode-6.3 branch - I'll go ahead and close this issue. Thanks so much for your help with both the suggestion and with testing this!
Status: Issue closed
|
bazelbuild/rules_go | 191164760 | Title: cgo_library with dependency on types defined in .go files in the same package is broken
Question:
username_0: I can't figure out how to get this case to work. I have a `type Type struct` defined in types.go which is not a cgo file. I have a cgo_foo.go file which does import "C" and uses Type. The go tool handles this but I can't seem to figure out how to get it to work with bazel rules. Here is a shot:
```
go_library(
name = "go_default_library"
srcs = ["types.go"],
library = "foo_cgo",
)
cgo_library(
name = "foo_cgo",
srcs = ["foo_cgo.go"],
)
```
foo_cgo fails to compile `undefined: Type`. go build builds this package.
Answers:
username_1: @username_0 can you paste sample `types.go` and `foo_cgo.go` files? it'd make it easier to repro.
username_0: Ok, I'll setup a sample repo (a repro repo???) later tonight.
username_0: Here is a repo that demonstrates this issue https://github.com/username_0/cgobuild
username_0: And what go build does:
```console
$ go build -x github.com/username_0/cgobuild/pkg
WORK=/tmp/go-build634287837
mkdir -p $WORK/github.com/username_0/cgobuild/pkg/_obj/
mkdir -p $WORK/github.com/username_0/cgobuild/
cd /usr/local/google/home/username_0/go/src/github.com/username_0/cgobuild/pkg
CGO_LDFLAGS="-g" "-O2" /usr/local/google/home/username_0/.gimme/versions/go1.6.3.linux.amd64/pkg/tool/linux_amd64/cgo -objdir $WORK/github.com/username_0/cgobuild/pkg/_obj/ -importpath github.com/username_0/cgobuild/pkg -- -I $WORK/github.com/username_0/cgobuild/pkg/_obj/ cgo_foo.go
gcc -I . -fPIC -m64 -pthread -fmessage-length=0 -I $WORK/github.com/username_0/cgobuild/pkg/_obj/ -g -O2 -o $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_main.o -c $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_main.c
gcc -I . -fPIC -m64 -pthread -fmessage-length=0 -I $WORK/github.com/username_0/cgobuild/pkg/_obj/ -g -O2 -o $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_export.o -c $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_export.c
gcc -I . -fPIC -m64 -pthread -fmessage-length=0 -I $WORK/github.com/username_0/cgobuild/pkg/_obj/ -g -O2 -o $WORK/github.com/username_0/cgobuild/pkg/_obj/cgo_foo.cgo2.o -c $WORK/github.com/username_0/cgobuild/pkg/_obj/cgo_foo.cgo2.c
gcc -I . -fPIC -m64 -pthread -fmessage-length=0 -o $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_.o $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_main.o $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_export.o $WORK/github.com/username_0/cgobuild/pkg/_obj/cgo_foo.cgo2.o -g -O2
/usr/local/google/home/username_0/.gimme/versions/go1.6.3.linux.amd64/pkg/tool/linux_amd64/cgo -objdir $WORK/github.com/username_0/cgobuild/pkg/_obj/ -dynpackage cgobuild -dynimport $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_.o -dynout $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_import.go
cd $WORK
gcc -I . -fPIC -m64 -pthread -fmessage-length=0 -no-pie -c trivial.c
cd /usr/local/google/home/username_0/go/src/github.com/username_0/cgobuild/pkg
gcc -I . -fPIC -m64 -pthread -fmessage-length=0 -o $WORK/github.com/username_0/cgobuild/pkg/_obj/_all.o $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_export.o $WORK/github.com/username_0/cgobuild/pkg/_obj/cgo_foo.cgo2.o -g -O2 -Wl,-r -nostdlib -Wl,--build-id=none
/usr/local/google/home/username_0/.gimme/versions/go1.6.3.linux.amd64/pkg/tool/linux_amd64/compile -o $WORK/github.com/username_0/cgobuild/pkg.a -trimpath $WORK -p github.com/username_0/cgobuild/pkg -buildid e1df783272a56fa029c5c06f60be48a30a2d81a5 -D _/usr/local/google/home/username_0/go/src/github.com/username_0/cgobuild/pkg -I $WORK -pack ./types.go $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_gotypes.go $WORK/github.com/username_0/cgobuild/pkg/_obj/cgo_foo.cgo1.go $WORK/github.com/username_0/cgobuild/pkg/_obj/_cgo_import.go
pack r $WORK/github.com/username_0/cgobuild/pkg.a $WORK/github.com/username_0/cgobuild/pkg/_obj/_all.o # internal
```
username_0: Interestingly in the example repo `//pkg:go_default_library` compiles even though `//pkg:cgo_default_library` does not.
username_0: Here's an inelegant fix that doesn't break compatibility. It implements an option to opt out of creating the go_library from a cgo_library.
Status: Issue closed
|
sfu-db/dataprep | 656893719 | Title: parameter management for `plot`: stage 1
Question:
username_0: **Is your feature request related to a problem? Please describe.**
This is a detailed task related to issue #238.
As an initial stage, we first work on `plot`. We create an API `_plot` like
```
_plot(df,
x,
dtype = {x: Categorical},
display = "auto",
config = {"hist.bins": 10,
"hist.width": "auto"
"hist.height": "auto"})
```
This API is an adapter of current `plot` function. It extracts the parameters needed by current `plot` function and then calls current `plot` function. In the next stage we will replace the old `plot` function with this new `_plot`.
The tasks include:
1. Create configuration classes such as `HistConfig`, `BarChartConfig`, `WordCloudConfig`,.... each class has related attributes from the old `plot` (https://sfu-db.github.io/dataprep/dataprep.eda.html). For example, `bins` should be an attribute in `HistConfig`, and `top_words` should be an attribute in `WordCloudConfig`. Each attribute has a default value as in current `plot` function.
2. Each class can set attribute by a dict. For example, `HistConfig.set({"bins": 10, "width": 100})` will set the `bins` attribute as 10 and `width` attribute as 100.
3. In `_plot` we will build the related configuration object from the input `config` parameter. For example, `_plot(..., config = {"hist.bins": 10, "word_cloud.top_word": 20})` will build a `HistConfig` and `WordCloudConfig` and set related their attributes based on the input.
4. In this stage we just extract the related attributes from step 3 and get the parameter for current `plot`. For example, we extract `top_word` from `WordCloudConfig`, and the pass the `top_word` to current `plot`. In this way we get all the parameters for current `plot` and then we directly call it.
Answers:
username_0: @username_1 Could you estimate the points for this task? I think you could just choose a few fig types to work on, such as the hist and bar chart, such that this task could be finished by next milestone(7.26). Please also consider the time for writing docstring and testing.
username_1: @username_0 I think the first step is to create a histogram class first. If I could finish the first class, then it is not difficult for me to create other classes.
username_2: @username_0 and I want to define what components of `plot(df)` the user can configure in the `display` parameter. We propose the user be able to specify whether or not to show
1. each plot type (histogram and bar chart),
2. the statistics (so all or no statistics),
3. the insights.
By, default, `display` will be `display=["histogram", barchart", "statistics", "insights"]`. Whenever "insights" is in the `display` list, we will compute all the insights we have for `plot(df)` and show them all in the insights tab. The relevant insights will also be distributed to any other component in `display`. For example, if the user specifies `plot(df, display=["histogram", "insights"])`, we will compute all insights (even for categorical columns and stats) and show them all in the insights tab, and distribute the relevant insights associated with numerical columns to the histograms.
Moreover, we think the user may not want to compute all the insights. We propose allowing the user to specify the insight type in `display`. For example, `plot(df, display=["histogram", "insight.skew"])` to compute only the skew insight.
Please let us know if you have any comments about this.
username_3: Thanks for proposing this design. I don't see any issue with it.
username_2: @username_4 @username_0 and I want to move forward with this design but first clarify some details. Since we create a configuration class for each plot type, this raises the problem that different functions generate the same plot type and require different parameters. For example, `plot(df, numerical, numerical)` and `plot(df, numerical, categorical)` each output a boxplot; in the former, `bins` can be set, and in the latter, `ngroups` can be set. But we propose a class with all possible attributes
``` python
class BoxPlot:
bins: int = 50
ngroups: int = 50
...
```
In the documentation, we will make it clear which parameters are relevant to each function. In the how-to guide, we will only show parameters relevant to the function, eg the `plot(df, numerical, numerical)` box plot how-to guide will show `bins` and not `ngroups`.
Another possibility is to have a class for each plot and data type, eg BoxPlotNumerical, BoxPlotCategorical. But for a line chart, we would already have four different classes, and so we prefer the above approach.
### Global parameters
We plan to have a few "global" parameters to make some common parameters easy to set. For example, the global parameter `bins` will function as `plot(df, x, bins=20)= plot(df, x, config={"hist.bins": 20, "kde.bins": 20})`. Other global parameters include `height` and `width`.
Please let us know any comments.
username_3: I like this proposal due to its simplicity.
I want to see how well this design is in terms of extensibility? For example, can you illustrate how to add a new plot (eg, violin plot) to dataprep.eda with this design?
username_0: In this design, adding a new plot would be easy, as most params are local, except for render-related params like `height` and `width`. When adding a new plot, we create a config class of that plot, and then add related params in the config class.
username_4: There are two problems that need your final decision:
1. In plot(df, x), the insights are attached to corresponding plots instead of showing in an overview section. For example uniform and normal insights are shown with histogram only. Then there comes the problem if the user input is plot(df,x, display=["insight.uniform", "insight.normal", "bar", "wordcloud"]). They don't want to show histogram but they want the insights which can only be shown with the histogram tab. In plot(df), as we have an overview section for all the insights it won't be a problem there. But we assume users will use the same display list for other functions(plot(df,x)) which will cause the above problem.
2. A more specific problem associated with the above one is, in plot(df, x) some insights(zeros, negatives, skewness, unique..) are attached to STATS tab. Instead of showing the sentences, we use the highlight(this is a decision made in a previous UI meeting).

Then the problem here is if the user doesn't want an insight, are we gonna turn off the corresponding highlight? Do we also need to delete(not compute) that information from stats tab? Currently, we keep the stats tab fixed.
Jinglin, Brandon, and I had a meeting to discuss today. Please add on the info I missed @username_0 @username_2
Your final decisions are important. @username_3 @username_5
username_3: I guess in most cases, users want to see all insights. In other cases, users may want to disable all insights when sharing a notebook. The scenario that users want to show some of the insights may not be common.
If this is the case, then we only allow users to specify "insights" in the display rather than specify certain types of insights like "insight.uniform", "insight.normal".
For 2, I want to understand how hard it is to implement the following. If it is hard, then I am ok to keep the stats tab fixed.
- display = ["statistics", "insights"] show all stats with highlights (i.e., the figure you show above).
- display = ["statistics"] show all stats without highlights (i.e., "Distinct Count", "Missing", and "Missing (%)" are in black rather than in red).
Status: Issue closed
|
softwaresale/crossclip | 397174867 | Title: Creating a test suite
Question:
username_0: The unit test framework should be created with the `unittest` Python package. It should integrate with the `setup.py` file. For now, create a test for the frontend; this will test the multi-platform nature of the frontend. Each backend can be tested later. A sketch of what this could look like is below.
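A minimal sketch of what such a frontend test could look like with the stdlib `unittest` package (the module path and class name `crossclip.frontend.Frontend` are assumptions about the project layout, not confirmed APIs):

```python
# Hedged sketch: a stdlib-unittest smoke test for the frontend; the
# imported module path and class name are assumed, not confirmed APIs.
import unittest

from crossclip.frontend import Frontend  # assumed project layout


class FrontendTest(unittest.TestCase):
    def test_frontend_can_be_created(self):
        self.assertIsNotNone(Frontend())


if __name__ == "__main__":
    unittest.main()
```

For the `setup.py` integration, the classic setuptools approach is `test_suite="tests"`, or simply running `python -m unittest discover`. |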
google/blockly | 673875333 | Title: Marked workspace tests failing
Question:
username_0: Marked workspace tests are failing after introducing async and event cleanup in #4070 and #4064.
After debugging, the source of the issue appears to be that if the Move events fired after the `modify_` call are processed before getting the block coordinates with `getRelativeToSurfaceXY`, then the asserts for coordinates fail.
An example of how to see this issue more clearly (**only for debugging, not a fix**) would be to add breakpoints at the `modify_` call and at `fireNow_`, and to update a test like this (then observe how `fireNow_` is not called after `modify_` when the test passes, and how the test fails when it is called after `modify_`):
```
test('Cursor on row block', function() {
this.workspace.getCursor().setCurNode(
Blockly.ASTNode.createBlockNode(
this.row_block_1));
this.eventsFireStub.restore(); // restore behavior of events fire, so the events are not fired immediately
this.clock.restore(); // restore clock behavior so that it uses real clock (test behavior before addition of clock stub)
chai.assert.isTrue(Blockly.navigation.modify_());
var pos = this.row_block_1.getRelativeToSurfaceXY();
chai.assert.equal(100, pos.x);
chai.assert.equal(200, pos.y);
});
```
This is could be either a **bug** in keyboard_nav, that was only caught after making sure events were run before asserts in the test, or an error in writing the tests where the expected value is not correct.<issue_closed>
Status: Issue closed |
mangoweb-backend/clock | 828669057 | Title: Participating in standardizing of a System Clock Interface
Question:
username_0: Sorry, I have tried emailing @JanTvrdik and @matej21 via your public github emails, I was hoping you would participate in the standardisation of the System Clock Interface via this Draft PSR:
https://github.com/php-fig/fig-standards/pull/1224/files
We are still looking for working group members. |
gavz63/Project450 | 524647598 | Title: Notification for new connection request
Question:
username_0: * [ ] Notification displayed when request received
* [ ] Clicking notification should navigate directly to Connections->Received
* [ ] Notification should show Name and Avatar of the person who sent you the request |
GuillaumeAmat/knuckle | 305255675 | Title: :sparkles: **Init NPM module** :sparkles:
Question:
username_0: - [ ] Babel config
- [ ] Prettier
- [ ] Commit-lint
- [ ] NPM deploy task
- [ ] README.md
- [ ] CONTRIBUTING.md
Answers:
username_0: For the record, @username_1 set the title of the task to: :sparkles: **Init NPM module** :sparkles:
But Github issues do not accept markdown in their title so I had to rename it. What a shame 🤦♂️
Status: Issue closed
|
ricky71us/VegShopClub | 655174427 | Title: BACKLOG: one spot email
Question:
username_0: On the Send email page,
let's change the flow to check orders by buyer, and remove the individual email option...
Only when all orders of each buyer have been verified would one email button be displayed.
One click of this email button (with validation that all orders have been verified & confirmation that they are ready to send the email) should send an email to every buyer of this order, CCing G.
Answers:
username_0: closed
Status: Issue closed
|
conan-io/conan | 569437878 | Title: [question] conan_basic_setup breaks find_package
Question:
username_0: - I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
Answers:
username_1: Hi @username_0
It is possible that Conan is configuring some settings that might be affecting your build and causing the link error.
If you check the ``conan install`` output you would see some configuration, coming from your ``default`` profile. You can also show it with ``conan profile show default``.
The ``conan_basic_setup()`` will use those and for example can activate the ``libstdc++`` library to link with, but your CppKafka might have been linked with ``libstdc++11``.
You could try:
- Modify your ``default`` profile (it is a text file, and you also have conan commands to change it if you prefer), to match your desired configuration, including the ``compiler.libcxx`` that you want
- If you are using C++17 to build, you might want to also set ``compiler.cppstd=17`` to make sure the binaries for all packages are built with exactly the same standard. Though many times it is possible to link with binaries built with other standards, it is still not 100% guaranteed.
- Depending on the criticality of your application, you will be good with the default, or you might want to ensure all dependencies are built with C++17
- If you decide to go for ``compiler.cppstd=17``, there will not be pre-existing binaries in ConanCenter. You can build them from sources with ``--build``, then upload them to your Artifactory (the CE is totally free), so you can use them later easily without rebuilding.
Please let me know if this clarifies it a bit. Thanks!
username_0: Hi @username_1
Thank you for your help. I'm not sure exactly what fixed it because I was trying a lot of things.
I think it was:
1. Regenerate conan profile using detect
1. Rebuild and Install CppKafka
1. Rebuild my project
My new profile for reference
```
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=9
compiler.libcxx=libstdc++11
build_type=Release
[options]
[build_requires]
[env]
```
I had a theory that this could have been related to CppKafka linking with Boost on my system while Conan linked to its project-scoped version of Boost. This turned out to be fine (both Boost versions are 1.71.0)
Status: Issue closed
username_1: Yes, this could be. A huge problem of the find_xxx.cmake modules that come with CMake and/or different libraries is that they are uncontrollable; some might find dependencies directly in the system. The CMakeLists.txt of libraries might download and build transitive dependencies too, which is another problem. The CMake ecosystem does not have a clean way to provide information about dependencies and to force their use instead of looking for them in the system, for example.
This is one of the reasons why we provide the ``cmake_find_package`` generators, because those find_xxx.cmake files generated by Conan contain the right information to find the dependencies and transitive dependencies from Conan correctly.
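For anyone reading along, a minimal sketch of a consumer recipe using that generator (Conan 1.x style; the `cppkafka` reference is illustrative, not taken from this thread):

```python
# conanfile.py -- minimal sketch; the cppkafka reference is hypothetical,
# adjust it to whatever remote/reference you actually use.
from conans import ConanFile, CMake

class MyAppConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    requires = "cppkafka/0.3.1"
    generators = "cmake_find_package"  # writes Find<pkg>.cmake files for find_package()

    def build(self):
        cmake = CMake(self)  # picks up settings such as compiler.libcxx from the profile
        cmake.configure()
        cmake.build()
```

With that, `find_package(...)` in the CMakeLists.txt resolves against the Conan-generated find modules instead of whatever the system happens to provide.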
As this seems to be solved, I am going to close it, but do not hesitate to re-open or create a new issue if you have any further issue. Thanks! |
ihs-programming/Dirtbox | 300462694 | Title: ##### Command Help Menu Bug
Question:
username_0: When an unknown command is submitted, there should be an indicator that the command is not a real command. This does not work if the incorrect command is a prefix of an existing command. As a result:
``` !viewmessage ```
does not create an error message, although the correct command is
``` !viewmessages```
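For illustration, a minimal Python sketch of the suspected behaviour (the command registry and lookup here are hypothetical; Dirtbox is a Java project, and this only shows how a prefix-based match would swallow the error):

```python
# Hypothetical command registry; only the lookup logic matters here.
COMMANDS = {"viewmessages": lambda: "listing messages..."}

def dispatch_prefix(text):
    """Suspected buggy behaviour: any prefix of a real command is accepted."""
    name = text.lstrip("!")
    for cmd, handler in COMMANDS.items():
        if cmd.startswith(name):          # "!viewmessage" matches "viewmessages"
            return handler()
    return "Unknown command: " + text

def dispatch_exact(text):
    """Expected behaviour: only an exact command name is accepted."""
    handler = COMMANDS.get(text.lstrip("!"))
    return handler() if handler else "Unknown command: " + text

print(dispatch_prefix("!viewmessage"))  # runs viewmessages, no error indicator
print(dispatch_exact("!viewmessage"))   # Unknown command: !viewmessage
```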
Answers:
username_0: How do I reproduce this? As shown in the below image, it works fine

Status: Issue closed
username_1: My bad, that was on my branch |
earthlab-education/ea-bootcamp-hometowns | 509174949 | Title: Add notebook about hometown for jaeyoungjin
Question:
username_0: @jaeyoungjin Follow the GitHub collaborative workflow to:
1. Create a fork of this repository and add to your fork a Jupyter Notebook called `city-state-or-country.ipynb` (e.g. houston-tx.ipynb) with some facts about your hometown (or another chosen city) using [Markdown](https://www.earthdatascience.org/courses/intro-to-earth-data-science/file-formats/use-text-files/format-text-with-markdown-jupyter-notebook/):
- add a subtitle (header) and the information for the latitude and longitude of the main area of the town/city
- add a subtitle (header) and the information for the most recent population figure you can find, plus a hyperlink to the source for this information
- add a subtitle (header) for a local landmark, plus an image and short text description of this landmark
2. Submit a pull request from your fork to this repository, with the following included in the message of your pull request:
- notify the owner of the repository (i.e. your instructor) that you have addressed the issue using `@username_0`
- reference the issue number using `Fixes #issue-number` (e.g. the issue number is above in the title of this issue)<issue_closed>
Status: Issue closed |
platanus/our-boxen | 109505969 | Title: Failed for amosrivera
Question:
username_0: Running on `yosemite.local` (OS X 10.11) under `/bin/zsh`, version 2b5d1443929d700b5ac194640694d49697a26e40 ([compare to master](https://github.com/platanus/our-boxen/compare/2b5d1443929d700b5ac194640694d49697a26e40...master)).
### Changes
```
D .ruby-version
```
### Puppet Command
```
/opt/boxen/repo/bin/puppet apply --group admin --confdir /tmp/boxen/puppet/conf --vardir /tmp/boxen/puppet/var --libdir /opt/boxen/repo/lib --libdir /opt/boxen/repo/.bundle/ruby/2.0.0/gems/boxen-2.8.0/lib --modulepath /opt/boxen/repo/modules:/opt/boxen/repo/shared --hiera_config /opt/boxen/repo/config/hiera.yaml --logdest /opt/boxen/repo/log/boxen.log --logdest console --no-report --detailed-exitcodes --show_diff /opt/boxen/repo/manifests
```
### Output (from /opt/boxen/repo/log/boxen.log)
```
2015-10-02 11:32:10 -0300 Puppet (err): Unable to set ownership of log file
2015-10-02 11:32:17 -0300 Puppet (notice): Compiled catalog for yosemite.local in environment production in 3.95 seconds
2015-10-02 11:32:26 -0300 /Stage[main]/Platanus::Hound/Repository[/Users/username_0/src/hound]/ensure (notice): created
2015-10-02 11:32:26 -0300 /Stage[main]/Platanus::Hound::Ruby/File[/Users/username_0/.rubocop.yml]/ensure (notice): created
2015-10-02 11:32:32 -0300 /Stage[main]/Git/Homebrew::Formula[git]/File[/opt/boxen/homebrew/Library/Taps/boxen/homebrew-brews/git.rb]/content (notice):
--- /opt/boxen/homebrew/Library/Taps/boxen/homebrew-brews/git.rb 2015-09-30 16:51:00.000000000 -0300
+++ /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/puppet-file20151002-37995-xk29w9 2015-10-02 11:32:32.000000000 -0300
@@ -1,25 +1,26 @@
class Git < Formula
desc "Distributed revision control system"
homepage "https://git-scm.com"
- url "https://www.kernel.org/pub/software/scm/git/git-2.5.1.tar.xz"
- sha256 "b3ceb7b118221b8c74d0abdc62ab035a58360dbbd28ca17c53e301e517d4220f"
+ url "https://www.kernel.org/pub/software/scm/git/git-2.4.3.tar.xz"
+ sha256 "f05007a9d1ef28c3d84091ddf7ce5d29c2df2272f34b4dd8b24e09274280d814"
head "https://github.com/git/git.git", :shallow => false
bottle do
- sha256 "2b285b7e989ef95bcafaf23314acffe2d292820eae29eb5981bbe4c1d871a109" => :yosemite
- sha256 "e9bc4b9f4272925364376b75cf3d94c890679fdb10e4004936cbb4b614f066a1" => :mavericks
- sha256 "85ded4d9b2a131d07fb650d3ddb69d30fa5abf7dee4f58aece09daee0284d335" => :mountain_lion
+ revision 1
+ sha256 "452c896f29ac78004482cf5a0de63aab0e393d8f0b24b094fef0147ad94e91cc" => :yosemite
+ sha256 "9bf5b62541989ba6a79705c37b317d5561d4ee44808c70dd3906f8697a6bde10" => :mavericks
+ sha256 "43aa19fb41413aef74d9d20913ac307079c5b1fbd486358986ff48eb4614aa96" => :mountain_lion
end
resource "man" do
- url "https://www.kernel.org/pub/software/scm/git/git-manpages-2.5.1.tar.xz"
- sha256 "6e403070ee71678acad0b7f53bc5327e13b42cebccc6769177fe0b4a11f042e3"
+ url "https://www.kernel.org/pub/software/scm/git/git-manpages-2.4.3.tar.xz"
+ sha256 "91e1a9cb4a35c141bd875063fedc0409ecb676e828204afabc75c3b4c7b844cc"
end
resource "html" do
- url "https://www.kernel.org/pub/software/scm/git/git-htmldocs-2.5.1.tar.xz"
- sha256 "2ebf4761a793d4c8bdf49ff04937c08408c8903160d910eba5714786535d0c83"
+ url "https://www.kernel.org/pub/software/scm/git/git-htmldocs-2.4.3.tar.xz"
+ sha256 "5a6f6ef9c992eef29b62d8e963ba508583b917f14a374e0fbba3fe50a44c111a"
end
option "with-blk-sha1", "Compile with the block-optimized SHA1 implementation"
[Truncated]
2015-10-02 11:39:18 -0300 /Stage[main]/Nodejs::Global/Nodejs::Version[4.1]/Nodejs::Alias[4.1]/File[/opt/nodes/4.1] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Platanus::Hound::Ruby/Ruby_gem[rubocop for all rubies]/ensure (notice): created
2015-10-02 11:48:28 -0300 /Stage[main]/Nodejs::Global/File[/opt/boxen/nodenv/version] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Nodejs::Global/File[/opt/boxen/nodenv/version] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[karma-cli for all nodes] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[karma-cli for all nodes] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[npm for all nodes] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[npm for all nodes] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[generator-platanus-ionic for all nodes] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[generator-platanus-ionic for all nodes] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[ionic for all nodes] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[ionic for all nodes] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[yo for all nodes] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[yo for all nodes] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[grunt-cli for all nodes] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[grunt-cli for all nodes] (warning): Skipping because of failed dependencies
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[gulp for all nodes] (notice): Dependency Nodejs[4.1.1] has failures: true
2015-10-02 11:48:28 -0300 /Stage[main]/Stacks::Node/Npm_module[gulp for all nodes] (warning): Skipping because of failed dependencies
```
Answers:
username_0: Succeeded at version 2b5d1443929d700b5ac194640694d49697a26e40.
Status: Issue closed
|
demokratie-live/democracy-client | 318804386 | Title: Display of search results
Question:
username_0: **Description:**
This issue relates to feedback from a tester.
**Faulty Behaviour:**

**Expected Behaviour:**
"Wenn ich ein Vorhaben favorisiert habe, dann sollte das auch in der Liste sichtbar sein."
Es geht ihm also um die Darstellung der Such-Resultate. Anstatt Untertitel möchte der User eine Ansicht mit Glocke. Es gilt, darüber nachzudenken, ob man die Darstellung aus BurgerMenu/Benachrichtigungen adaptiert oder eine andere?
**App-Version:** 0.7.5
Answers:
username_1: Yes, it is about how the search results are displayed.
However, it is more about the number and the arrow on the side, which did not change after I favorited the item in the detail view.
The subtitle is fine as it is; no bell icon needed.
Status: Issue closed
|
contribute-md/discussion | 99967085 | Title: Awesome-contribute?
Question:
username_0: I would like to start praising repositories and organizations that have awesome contribute files. I think the way to do this is to have an awesome-contribute repository - something like this [awesome-readme](https://github.com/matiassingers/awesome-readme) repository, based off of the @sindresorhus [awesome](https://github.com/sindresorhus/awesome) lists idea. I already have more than a few of these lists to maintain - so maybe we should have contribute-md host the next one? What do you think?
Answers:
username_0: I've started a repository [here](https://github.com/username_0/awesome-contribute). We can switch it to this organization if anyone feels like that would be a good move!
Status: Issue closed
|
justintime/nagios-plugins | 257617112 | Title: Cache reported as negative
Question:
username_0: Hi!
For some time, I was facing an issue where `check_mem` reported high memory usage while `free` reported that memory usage was OK. On closer inspection, I found that the cache reported by `check_mem` was negative:
~~~~
# /usr/local/nagios/libexec/check_mem -u -C -w 90 -c 95
WARNING - 92.9% (11695180 kB) used!|TOTAL=12582912KB;;;; USED=11695180KB;11324620;11953766;; FREE=887732KB;;;; CACHES=-2709560KB;;;;
~~~~
The negative cache value is what causes the issue; if I choose to ignore cache, I don't get the warning:
~~~~
# /usr/local/nagios/libexec/check_mem -u -w 90 -c 95
OK - 71.5% (8997588 kB) used.|TOTAL=12582912KB;;;; USED=8997588KB;11324620;11953766;; FREE=3585324KB;;;; CACHES=-2692876KB;;;;
~~~~
It's a Linux system; details are as follows:
~~~~
# cat /proc/meminfo
MemTotal: 12582912 kB
MemFree: 3591364 kB
Cached: 518712 kB
Buffers: 0 kB
Active: 4081284 kB
Inactive: 1503948 kB
Active(anon): 3923484 kB
Inactive(anon): 1143036 kB
Active(file): 157800 kB
Inactive(file): 360912 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 248 kB
Writeback: 0 kB
AnonPages: 5066520 kB
Shmem: 3292220 kB
Slab: 114076 kB
SReclaimable: 61008 kB
SUnreclaim: 53068 kB
~~~~
~~~~
# uname -sr
Linux 2.6.32-042stab120.16
~~~~
=========================
Tried it on a different system with a more modern kernel, output is as follows:
~~~~
# cat /proc/meminfo
MemTotal: 8171680 kB
MemFree: 282784 kB
MemAvailable: 7199936 kB
Buffers: 640576 kB
Cached: 5530204 kB
SwapCached: 236 kB
Active: 4675496 kB
Inactive: 2069064 kB
[Truncated]
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4610124 kB
Committed_AS: 1730916 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
DirectMap4k: 14200 kB
DirectMap2M: 2082816 kB
DirectMap1G: 8388608 kB
~~~~
~~~~
# uname -sr
Linux 4.9.15-x86_64
~~~~
This looks to be an issue with the `Shmem` handling in the code:
https://github.com/justintime/nagios-plugins/blob/master/check_mem/check_mem.pl#L159
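The numbers above are consistent with the plugin computing roughly `caches = Cached + Buffers + SReclaimable - Shmem`, which goes negative whenever `Shmem` (e.g. a large tmpfs) exceeds the reclaimable caches: 518712 + 0 + 61008 - 3292220 ≈ -2712500 kB on the first system. A minimal Python sketch of that apparent calculation (the formula is inferred from the output, not copied from the Perl source):

```python
# Minimal sketch of the apparent caches calculation, assuming
# caches = Cached + Buffers + SReclaimable - Shmem (all values in kB).
def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.split()[0])  # first token is the kB value
    return info

m = read_meminfo()
caches = m["Cached"] + m.get("Buffers", 0) + m.get("SReclaimable", 0) - m.get("Shmem", 0)
print("caches =", caches, "kB")  # negative when Shmem exceeds the reclaimable caches
```
|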
codeforboston/communityconnect | 418320272 | Title: Org card social media link different in Saved Resources
Question:
username_0: Currently, the social media links at the bottom of the org card in the admin list view are accessible through the respective social media icons. However, when the org card is added to the saved resources to create a unique URL, the social media icons are removed and the social media links are written out directly (in large font).
<img width="823" alt="screen shot 2019-03-07 at 8 50 56 am" src="https://user-images.githubusercontent.com/42190870/53961386-7274e600-40b6-11e9-8e15-12fe88927148.png">
Answers:
username_1: For anyone working on this, we should use the icon from the FontAwesomeIcon package that is currently being used for all social media icons in OrganizationCard.js
Status: Issue closed
|
oemof/DHNx | 729363979 | Title: ThermalNetwork: components.csv and component_attrs need clean-up
Question:
username_0: In components.csv
* [ ] Delete ThermalSubnetwork - does not exist yet
* [ ] TransferStation same
* [ ] ThermalStorage same
* [ ] environment - is it used?
* [ ] Input/Output: Is this used?
In components_attrs:
* [ ] unit rendered as dot
* [ ] node_type necessary?
* [ ] Different variable names (with and without units)
  | ThermalNetwork | NumpySimulationModel |
  | --- | --- |
  | length | length_m |
  | diameter | diameter_mm |
  | heat_transfer_coeff | heat_transfer_coefficient_W/mK |
  | roughness | roughness_mm |
* [ ] In type, mention that it can be either float/series
  - mass_flow
  - temp_drop
  - temp-env<issue_closed>
Status: Issue closed |
kirbydesign/designsystem | 1031481929 | Title: [Research] Chromatic
Question:
username_0: - [x] I have written a [good issue](https://github.com/kirbydesign/designsystem/wiki/The-Good%3A-Issue)
## Describe the enhancement
Find out how chromatic would be used for branch deploys and visual testing.
<hr />
## Checklist:
The following tasks should be carried out in sequence in order to follow [the process of contributing](https://github.com/kirbydesign/designsystem/blob/master/.github/CONTRIBUTING.md/#the-process-of-contributing) correctly.
### Refinement
- [ ] Request that the issue is [UX refined](https://github.com/kirbydesign/designsystem/blob/master/.github/CONTRIBUTING.md/#ux-refinement); do not proceed until this is done.
- [ ] Request that the issue is [tech refined](https://github.com/kirbydesign/designsystem/blob/master/.github/CONTRIBUTING.md/#tech-refinement); do not proceed until this is done.
### Implementation
The contributor who wants to implement this issue should:
- [ ] Make sure you have read: "[Before you get coding](https://github.com/kirbydesign/designsystem/blob/master/.github/CONTRIBUTING.md/#before-you-get-coding)".
- [ ] Signal to others you are working on the issue by assigning yourself.
- [ ] Create a branch from the [master branch](https://github.com/kirbydesign/designsystem/tree/master) following our [branch naming convention](https://github.com/kirbydesign/designsystem/wiki/The-Good%3A-Branch).
- [ ] Publish a WIP implementation to Github as a draft PR and ask for feedback.
- [ ] Make sure you have implemented tests following the guidelines in: "[The good: Test](https://github.com/kirbydesign/designsystem/wiki/The-Good%3A-Test)".
- [ ] Update the [cookbook](https://cookbook.kirby.design) with examples and showcases.
### Review
Once the issue has been implemented and is ready for review:
- [ ] Do a [self-review](https://github.com/kirbydesign/designsystem/wiki/The-Good%3A-Self-review).
- [ ] Create a pull-request. If you created a draft PR during implementation you can just mark that as "ready for review".
Answers:
username_0: I made a quick deploy to Chromatic when working on #1316 that can be seen here:
https://www.chromatic.com/builds?appId=6183eb36ffc092003c641d5e
Our conclusion at that time was that a nice starting point would be to make a test-only Storybook configuration (not to be used as documentation, but simply to showcase as many states of the components as possible) and publish that to Chromatic for visual testing.
Status: Issue closed
|
pytorch/examples | 403207575 | Title: [C++ Frontend] Question about tensor's item() method
Question:
username_0: Hi,
I have a question regarding the use of the `torch::Tensor::item()` method in the mnist sample code.
The Python doc about `item` says it applies to single-value tensors, but in the mnist sample it is applied to the loss tensor (l.80 of `mnist.cpp`). Are loss tensors always single-value tensors?
And by the way, why is the default mean reduction used for training while sum reduction is used for testing?
Thanks for your help,
Regards
Albert
Answers:
username_1: Yes, loss tensors are always scalars (i.e. single-value tensors) by default. We use mean reduction by default.
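For illustration, here is a short sketch in the Python API, which mirrors the C++ frontend on this point (shapes and data are made up):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(64, 10)                 # batch of 64 samples, 10 classes
target = torch.randint(0, 10, (64,))
log_probs = F.log_softmax(logits, dim=1)

loss_mean = F.nll_loss(log_probs, target)                   # default reduction='mean'
loss_sum  = F.nll_loss(log_probs, target, reduction='sum')  # summed over the batch

print(loss_mean.shape, loss_sum.shape)  # torch.Size([]) torch.Size([]) -- both 0-dim
print(loss_mean.item())                 # .item() is safe: the tensor holds one value
```

As for mean vs. sum: the per-batch mean keeps gradient magnitudes independent of the batch size during training, while in a test loop the per-batch sums are typically accumulated and divided by the dataset size at the end, which yields the exact average loss even when the last batch is smaller.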
Status: Issue closed
|
validator/validator | 60287975 | Title: Different JSON structure and details when checking from web or command line
Question:
username_0: When checking the same page, first on the web interface, and then from the command line, I'm getting a different JSON structure and different details. The issues reported are the same, but the web version includes additional details like the extract and hilite:
https://gist.github.com/username_0/4f8bc92c00e8341002b8
I'm using the latest validator release (16 February 2015), on OSX Yosemite with this Java version:
java version "1.6.0_65"
Java(TM) SE Runtime Environment (build 1.6.0_65-b14-466.1-11M4716)
Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-466.1, mixed mode)
Is this difference intentional? Is there a way to get the same JSON structure on both, or at least to also get the extra details that the web version has?
Can this be related to the Java version I'm using, and if so, which one is recommended for running the validator?
Thank you!
Answers:
username_1: Interesting. It’s definitely not intentional, and it almost certainly has nothing to do with your client java version.
But that said, in general for building and running the validator I'd recommend using at least Java7 (1.7) if not Java8. Myself I do most of my development on OSX Yosemite, using Java8. I don't plan on intentionally making any changes that require Java7 or 8, but the thing is, Java6 is now very very old, and has been end-of-lifed by Oracle, so going forward, ensuring continued compatibility with Java6 is not a goal for the validator project.
username_1: Anyway, will try to make some time to investigate this today
username_0: Thanks, I've updated to Java 1.8 and I can confirm this still happens. I'm running the checker like this:
```bash
java -jar vnu.jar --format json http://validationhell.com
```
I get the same results if I pass it a local HTML file.
username_0: I've tried this on Ubuntu Precise 32 and I also get a different JSON structure on the command line version.
Status: Issue closed
username_1: Please test this change using the latest nightly:
https://username_1.net/nightlies/vnu.jar
username_0: Thank you. I just compared the JSON produced by the latest nightly on the CLI vs. the JSON currently produced on http://validator.w3.org/nu, and it's almost the same structure now.
There's just one thing that is still different, as you can see [in this gist](https://gist.github.com/username_0/3fb78fb04d645f5b0ef3): the URL of the checked document is shown at the top level when checking from the web:
```json
{
"url": "http://validationhell.com",
"messages": [{
"type": "info",
"message": "The Content-Type was “text/html”. Using the HTML parser."
}, {
"type": "info",
"message": "Using the schema for HTML with SVG 1.1, MathML 3.0, RDFa 1.1, and ITS 2.0 support."
}, {
"type": "error",
"lastLine": 56,
"firstLine": 55,
"lastColumn": 64,
"firstColumn": 37,
"message": "The “align” attribute on the “img” element is obsolete. Use CSS instead.",
"extract": " href=\"/\"><img\n src=\"/images/fire.png\" align=\"absmiddle\" width=\"30\" hspace=\"5\"><stron",
"hiliteStart": 10,
"hiliteLength": 69
},
```
but when checking on CLI, the URL is included on each message:
```json
{
"messages": [
{
"type": "info",
"url": "http://validationhell.com",
"lastLine": 1,
"lastColumn": 109,
"firstColumn": 1,
"subType": "warning",
"message": "Obsolete doctype. Expected “<!DOCTYPE html>”.",
"extract": "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\n<html",
"hiliteStart": 0,
"hiliteLength": 109
},
{
"type": "error",
"url": "http://validationhell.com",
"lastLine": 56,
"firstLine": 55,
"lastColumn": 64,
"firstColumn": 37,
"message": "The “align” attribute on the “img” element is obsolete. Use CSS instead.",
"extract": " href=\"/\"><img\n src=\"/images/fire.png\" align=\"absmiddle\" width=\"30\" hspace=\"5\"><stron",
"hiliteStart": 10,
"hiliteLength": 69
}
```
I'd go with the version that includes the URL on every message: it makes sense to have it along with the coordinates of the issues found, and if in the future the validator starts checking linked documents (like the CSS validator does), we'll need to specify the URL for each issue.
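In the meantime, a small Python sketch consumers could use to normalize the two shapes shown above (the filename is hypothetical):

```python
import json

def messages_with_url(report):
    """Yield messages so that each carries a 'url', whichever shape was emitted."""
    top_url = report.get("url")          # present in the web-API shape
    for msg in report.get("messages", []):
        msg.setdefault("url", top_url)   # the CLI shape already has per-message urls
        yield msg

with open("vnu-output.json") as f:       # hypothetical dump of either output
    for m in messages_with_url(json.load(f)):
        print(m.get("url"), m["type"], m["message"])
```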
username_1: If we ever do add to the HTML checker the ability to check linked documents the way the CSS validator does, then at that time we can change the backend to add the URL to the messages.
username_0: :+1:, totally agree
username_1: I realize now that they are not entirely different use cases—at least the use cases for the HTTP/Web-services API can be very similar to use cases for the CLI.
So I reckon I'm going to change this, actually. Lemme know if you think that’s a bad idea.
One thing is, even after I make this change, the JSON output and the XML output are still going to also show a URL at the top level of the message output—along with showing a URL for each message. That’s redundant but dropping it from the top level now would break backward compatibility for any apps or third-party tools that are currently relying on the URL being emitted at the top level.
username_0: I don't know, I think that adding the URL to every message doesn't add any value to the output from the web UI currently and if it's still going to be available at the top, it will just make the JSON weigh a bit more.
It would be different if the UI allowed to check multiple documents, in that case that would be needed.
I just found it a bit weird that the output was different, as I just check one single document, so I expected the same output.
Another idea: if you just check one document, have the URL returned at the top level. If you check multiple docs, include their URLs on every message. In other words, make that depend on what you're requesting, not from where.
username_1: True. So I suppose it’s possible to make the behavior be that if you're using the Web frontend, it doesn't add the URL to each message. Does that sound like a good way to resolve this?
username_0: I think I'm a bit lost here. Isn't that the same as leaving things as they are now?
username_0: I don't know about the implementation of vnu; what I mean is that using the CLI, if we're just checking one document like this:
`java -jar vnu.jar --format json http://example1.com`
then it would make sense to have the URL included just at the top level, and not on every message, but if we're checking multiple documents:
`java -jar vnu.jar --format json http://example1.com http://example2.com`
then it would make sense to do the opposite: just include the URLs on the messages, and not at the top level.
Also, if from the CLI we're currently able to check multiple documents, I don't see why we wouldn't be able to do so from the web interface. Let's suppose we add a feature to do batch-checking of several documents from the web interface, in that case we'd be on the second example.
But, again, I don't know about the implementation of vnu, for me it's OK to leave it like it is now, it's just that it looked a bit inconsistent from my point of view.
username_1: Yeah—sorry for being unclear. What I had meant was that we could keep the behavior the same if you're using the Web frontend, but change it to be different if you're using the Web-services API directly, instead of going through the Web frontend.
But after considering it more, I realize there's really nothing that's broken here that needs fixing, so I'm back to thinking we should just keep everything the way it is.
username_0: I agree.
I was always referring to the Web service API, vs. the CLI.
As the web API only accepts one URL, but the CLI accepts multiple URLs, it makes sense that the output structure is different. If some day we change that, so that the web API can accept multiple URLs, it will be the moment to consider this change.
Thanks for your explanations! |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.