platformio/platform-ststm32 | 488289218 | Title: Cannot compile STM32F103C8 core: STM32duino
Question:
username_0: My platformio.ini configuration
```
[env:genericSTM32F103C8]
platform = ststm32
board = genericSTM32F103C8
framework = arduino
board_build.core = STM32Duino
```
Error I get while compiling
```
Processing genericSTM32F103C8 (platform: ststm32; board: genericSTM32F103C8; framework: arduino)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Verbose mode can be enabled via `-v, --verbose` option
CONFIGURATION: https://docs.platformio.org/page/boards/ststm32/genericSTM32F103C8.html
PLATFORM: ST STM32 5.6.0 > STM32F103C8 (20k RAM. 64k Flash)
HARDWARE: STM32F103C8T6 72MHz, 20KB RAM, 64KB Flash
DEBUG: Current (blackmagic) External (blackmagic, jlink, stlink)
PACKAGES: framework-arduinoststm32 3.10601.190716 (1.6.1), toolchain-gccarmnoneeabi 1.70201.0 (7.2.1)
LDF: Library Dependency Finder -> http://bit.ly/configure-pio-ldf
LDF Modes: Finder ~ chain, Compatibility ~ soft
Found 12 compatible libraries
Scanning dependencies...
Dependency Graph
|-- <VL53L0X> 1.0.2
| |-- <Wire> 1.0
Linking .pio\build\genericSTM32F103C8\firmware.elf
arm-none-eabi-g++: error: Sandima\.platformio\packages\framework-arduinoststm32\variants\PILL_F103XX\ldscript.ld: No such file or directory
*** [.pio\build\genericSTM32F103C8\firmware.elf] Error 1
```
Status: Issue closed
Answers:
username_1: There are only two officially supported cores:
`stm32` - https://github.com/stm32duino/Arduino_Core_STM32
`maple` - https://github.com/rogerclarkmelbourne/Arduino_STM32
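For reference, a minimal `platformio.ini` using one of the supported core names from the list above might look like this (a sketch; `stm32` is the lowercase identifier for the official STM32duino core):
```ini
[env:genericSTM32F103C8]
platform = ststm32
board = genericSTM32F103C8
framework = arduino
board_build.core = stm32
```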
username_0: Yes. STM32duino means stm32. There is no issue with that. Can you please help me with the issue I have?
username_2: Would be nice to enumerate both of these options on [this page](https://docs.platformio.org/en/latest/platforms/ststm32.html),
because it is unclear which option I should use to explicitly define the stm32duino core. |
nodeschool/nodeschool.github.io | 56202173 | Title: Node School Social Presence
Question:
username_0: We don't have any official FB page or Twitter account. I guess we should have one to increase our audience. I can make one right now; I am experienced in running a couple of web pages and a Twitter account, so I can run them too.
Any suggestions about the name, contact details, etc.?
Answers:
username_1: There is a [@nodeschool](https://twitter.com/nodeschool) Twitter id that basically automatically retweets everything that has @nodeschool written in it. Maintaining a Facebook page is not a good idea because you will exclude quite a few people in the process. Unless it's a similar bot, -1 from me.
username_2: :scream: I thought it was a living person / multiple people in charge of that account.
username_1: @username_2 That it's a bot is something that I just assumed :)
username_0: So can I make a dedicated FB page under the NodeSchool Lahore name? It's actually easier to interact with people over there, because not everyone is good at GitHub, plus not everyone is on GitHub. We can teach them how to use it in our introductory sessions, though.
username_3: @username_0 I think creating a facebook page for your local nodeschool chapter is no problem. You know best what works for you in your area. :star2:
Apart from that I am not against having a facebook page for NodeSchool in general. I am not sure how to manage it though.
username_4: @nodeschool on twitter is me, not a bot
username_1: @username_4 o_o wow ... :+1: Regarding managing: We could have a repository here that contains an issue board. Every entry will be published on twitter/facebook automatically and people can comment on it using github :)
username_4: for now I'm generally -1 on having a central coordinated nodeschool social presence, and would rather see chapters do it themselves as @username_3 said. would also be nice if we had an organizers handbook that included info on how to run social media as well as all the other stuff involved in running a chapter
username_5: We will be launching the @nodeschool/wroclaw social accounts soon on Twitter/Facebook; they are going to be maintained by a company of one of the mentors (social presence, e-commerce and stuff). Having said that, we can help with creating a social guide on how to drive those channels efficiently.
username_0: @username_5 It will really help if you can provide a social presence guide. I am also thinking of creating a social presence for Lahore. Any help would be appreciated.
username_5: Cool, I am in the process of setting everything up, but hopefully by the end of this week we'll have more details on that.
username_5: Just to give a short update: we are going live on Monday, and after that I am going to write a quick summary with a little help from our sponsor (a social media/e-commerce agency).
Status: Issue closed
|
ipfs/go-ds-badger | 414271308 | Title: badger item doesn't have method `ValueSize`
Question:
username_0: I didn't find a method returning `ValueSize` in badger, but a method which returns the sum of key and value sizes. What version of badger does ipfs use? Do you guys have a forked version of badger?
Answers:
username_0: my current workaround is
```golang
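// Note: badger's EstimatedSize() returns the combined size of key and value
// (per the question above), so subtracting the key length approximates the value size.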
return int(item.EstimatedSize()) - len(key.Bytes()), nil
```
Status: Issue closed
username_0: I see. You are using a commit close to their master. I am trying to find this method in a tagged version |
ShannonPosey/taskinator | 936115611 | Title: Initial Setup
Question:
username_0: ## Requirements
* Create the task tracking HTML page that needs a:
* Header
* Main content area for the task list
* Footer
* Use the style sheet provided
* Add functionality to the button to add tasks to the list
Status: Issue closed |
actframework/actframework | 384118153 | Title: Support `Keyword` matching for param binding of incoming request
Question:
username_0: Some applications tend to use underscore, while others might want to use camelCase as the parameter name. This enhancement is to make it possible to handle all these cases in an Actframework application easily (even transparently).
Status: Issue closed |
redux-offline/redux-offline | 1184358208 | Title: How to dispatch an action from redux-offline dequeue function
Question:
username_0: Scenario:
Go Offline:
Make 3 contacts t0, t1 and t2.
Update t1 to “t1 update”.
Make some transactions in t0 and “t1 update”.
Go online.
Result:
The 3 created contacts t0, t1 and t2 work as expected, meaning their server_id and client_id are synced.
The update call (t1 to “t1 update”) did not work well because, at the time of this call, t1 was not yet created on the server, so there's no id available. We are using [redux-offline](https://github.com/redux-offline/redux-offline), which stores the action with the payload provided at dispatch time; once the connection is restored, it dispatches the update call with a null id.
The same goes for the transaction call, because to make a transaction we need to send the contact id, which is not yet present on the server.
The action looks like this:
```typescript
static addBusinesses(payload: any) {
  const { id, name = '', businessTypeId, clientId } = payload;
  const headers = handers.customHeaders();
  const body = {
    clientId,
    name,
    ...(businessTypeId && { businessTypeId }),
    isSelected: true,
  };
  return {
    type: AccountTypes.ADD_BUSINESSES,
    payload,
    meta: {
      offline: {
        effect: {
          url: End_Points.BUSINESSES(id),
          method: Variable.METHOD.POST,
          json: body,
          headers,
        },
        commit: { type: AccountTypes.ADD_BUSINESSES_COMMIT },
        rollback: { type: AccountTypes.ADD_BUSINESSES_ROLLBACK },
      },
    },
  };
}
```
The root reducer setup is:
```javascript
// Import: Dependencies
import { createStore, applyMiddleware, compose } from 'redux';
import { offline } from '@redux-offline/redux-offline';
import offlineConfig from '@redux-offline/redux-offline/lib/defaults';
import defaultQueue from '@redux-offline/redux-offline/lib/defaults/queue';
import createSagaMiddleware from 'redux-saga';
import { ApiCaller, customConfig } from '../services';
import asyncDispatchMiddleware from '../middleware/asyncDispatch';
// Imports: Redux Root Reducer
import rootReducer from './reducers/index';
// Imports: Redux Root Saga
import { rootSaga } from './saga';
// Middleware: Redux logger
import logger from 'redux-logger';
```
[Truncated]
The fragment below is from the custom `dequeue` function of the offline config:
```javascript
for (let i = 0; i < array.length; i++) {
  const { payload, type } = array[i];
  if (payload?.serverId == null) {
    store.dispatch({
      type,
      payload,
    });
  }
}
array.shift();
return array;
},
},
};
```
I'm making an offline-first application and I want to swap local_id with server_id before making the api call, so I want to check the payload: if the serverId is null, then I want to dispatch the action again.
When I try to dispatch an action in the dequeue function I get this error: **Error: Reducers may not dispatch actions.** |
eghc/emr-lite | 443142798 | Title: As Doctor Clayton or <NAME>, I want to be able to view a patient’s lab results so I can determine if there is an issue.
Question:
username_0: TODO:
1. Create Lab Results model --> Patient ID, date of lab, type of lab, completed (boolean), results (string for PDF), notes (string)
2. When a patient's chart is retrieved, these results should also be returned. |
ericc06/P8-project | 458165841 | Title: Fix: Attach a task to a user.
Question:
username_0: A task must be attached to a user.
Currently, when a task is created, it is not attached to a user.
Make the necessary corrections so that automatically, when the task is saved, the currently authenticated user is attached to the newly created task.
When editing the task, the author cannot be edited.
For tasks already created, they must be attached to an "anonymous" user.
Planned realization time: 3 days
Status: Issue closed
Answers:
username_0: Closed by PR #10. |
egingric/2015-Racing-Game | 65976523 | Title: Material Overhaul
Question:
username_0: Lava needs overhaul.
Ground needs overhaul.
Cavern needs overhaul.
Boss needs overhaul.
Grass needs overhaul.
Hell, even the sky needs an overhaul.
Tombstone and Crypt feel lonely, they need an actual material.
Flora is okay. |
frankmcsherry/differential-dataflow | 125839255 | Title: Relicense under dual MIT/Apache-2.0
Question:
username_0: This issue was automatically generated. Feel free to close without ceremony if
you do not agree with re-licensing or if it is not possible for other reasons.
Respond to @username_0 with any questions or concerns, or pop over to
`#rust-offtopic` on IRC to discuss.
You're receiving this because someone (perhaps the project maintainer)
published a crates.io package with the license as "MIT" xor "Apache-2.0" and
the repository field pointing here.
TL;DR the Rust ecosystem is largely Apache-2.0. Being available under that
license is good for interoperation. The MIT license as an add-on can be nice
for GPLv2 projects to use your code.
# Why?
The MIT license requires reproducing countless copies of the same copyright
header with different names in the copyright field, for every MIT library in
use. The Apache license does not have this drawback. However, this is not the
primary motivation for me creating these issues. The Apache license also has
protections from patent trolls and an explicit contribution licensing clause.
However, the Apache license is incompatible with GPLv2. This is why Rust is
dual-licensed as MIT/Apache (the "primary" license being Apache, MIT only for
GPLv2 compat), and doing so would be wise for this project. This also makes
this crate suitable for inclusion and unrestricted sharing in the Rust
standard distribution and other projects using dual MIT/Apache, such as my
personal ulterior motive, [the Robigalia project](https://robigalia.org).
Some ask, "Does this really apply to binary redistributions? Does MIT really
require reproducing the whole thing?" I'm not a lawyer, and I can't give legal
advice, but some Google Android apps include open source attributions using
this interpretation. [Others also agree with
it](https://www.quora.com/Does-the-MIT-license-require-attribution-in-a-binary-only-distribution).
But, again, the copyright notice redistribution is not the primary motivation
for the dual-licensing. It's stronger protections to licensees and better
interoperation with the wider Rust ecosystem.
# How?
To do this, get explicit approval from each contributor of copyrightable work
(as not all contributions qualify for copyright, due to not being a "creative
work", e.g. a typo fix) and then add the following to your README:
```
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.
```
and in your license headers, if you have them, use the following boilerplate
[Truncated]
license = "MIT OR Apache-2.0"
I'll be going through projects which agree to be relicensed and have approval
by the necessary contributors and doing this changes, so feel free to leave
the heavy lifting to me!
# Contributor checkoff
To agree to relicensing, comment with :
I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to chose either at their option.
Or, if you're a contributor, you can check the box in this repo next to your
name. My scripts will pick this exact phrase up and check your checkbox, but
I'll come through and manually review this issue later as well.
- [ ] @frankmcsherry
- [ ] @username_2
- [ ] @username_1
Answers:
username_1: In regard to the project "differential-dataflow": I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to chose either at their option.
I'm just getting out of the way, do whatever you want.
username_2: Similarly, I don't care. I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to chose either at their option.. |
communi/libcommuni | 479948389 | Title: qtcreator successful build
Question:
username_0: https://github.com/username_0/cannachat.git has been updated to build as a Qt Creator project. It has SSL issues connecting to freenode, though. Feel free to use the changes; also included is a Code::Blocks project.
Answers:
username_1: What is the issue?
Status: Issue closed
|
CodeYourFuture/syllabus | 960747777 | Title: Clarify when the Git coursework should be assigned
Question:
username_0: The git classes are assigned in the background - we should make it a lot more clear when they should be done, and highlight it in those weeks' homeworks.
See also https://codeyourfuture.slack.com/archives/C01DANTAGRK/p1627729083001800 from @ClaireBickley |
Azure/azure-sdk-for-net | 494339457 | Title: [BUG][Storage][Recordings] x-ms-copy-source request header is being URL-encoded in test recordings
Question:
username_0: - FileClient.PutRangeFromUrl() has a x-ms-copy-source header to specify the source URL
- This header is sent non-URL-encoded, but it is URL-encoded in the test recordings, causing the recorded test to fail. |
makiuchi-d/gozxing | 323812662 | Title: Implement QRCodeReader
Question:
username_0: Things needed to implement zxing.qrcode.detector.Detector
- zxing/*
- [ ] ResultPoint
- [ ] ResultPointCallback
- [ ] DecodeHintType
- [ ] NotFoundException
- [ ] FormatException
- zxing/common/*
- [ ] BitMatrix
- [ ] BitArray
- [ ] DetectorResult
- [ ] PerspectiveTransform
- [ ] GridSampler
- zxing/common/detector.*
- [ ] MathUtils
- zxing/qrcode/decoder.*
- [ ] Version
- zxing/qrcode/detector.*
- [ ] FinderPatternFinder
- [ ] FinderPatternInfo
- [ ] FinderPattern
- [ ] AlignmentPattern
- [ ] Detector
Things needed to implement zxing.qrcode.detector.Decoder
- [ ] ...
Status: Issue closed
Answers:
username_0: implemented |
saleae/hdmi-cec-analyzer | 797318685 | Title: Support More Opcodes
Question:
username_0: (Ticket 56822)
For example:
* 0xa3 and 0xa4 for Short Audio Descriptors
* 0xc0 to 0xc5 for ARC feature
More CEC message opcodes can be found here :
https://git.linuxtv.org/v4l-utils.git/tree/include/linux/cec.h
Status: Issue closed |
dotnet/docs | 716108166 | Title: Breaking change: Update FrameworkName to ".NET"
Question:
username_0: ## Breaking change: Update FrameworkName to ".NET"
[`RuntimeInformation.FrameworkDescription(String)`](https://docs.microsoft.com/dotnet/api/system.runtime.interopservices.runtimeinformation.frameworkdescription) returns ".NET" instead of ".NET Core".
### Version introduced
.NET 5
### Old behavior
[`RuntimeInformation.FrameworkDescription(String)`](https://docs.microsoft.com/dotnet/api/system.runtime.interopservices.runtimeinformation.frameworkdescription) returned ".NET Core" as part of the description string, e.g.: ".NET Core 3.1.1".
### New behavior
[`RuntimeInformation.FrameworkDescription(String)`](https://docs.microsoft.com/dotnet/api/system.runtime.interopservices.runtimeinformation.frameworkdescription) returns ".NET" as part of the description string, e.g.: ".NET 5.0.0".
### Reason for change
With .NET 5, `netcoreapp` is replaced by `net` as the short target framework moniker and with that the framework's description is updated as well. The change is cosmetic as the `FrameworkName` isn't encoded anywhere else than in the [`RuntimeInformation.FrameworkDescription(String)`](https://docs.microsoft.com/dotnet/api/system.runtime.interopservices.runtimeinformation.frameworkdescription) property.
### Recommended action
Alter the code that searches for ".NET Core" in the string returned by the API to ".NET".
### Category
- [ ] ASP.NET Core
- [ ] C#
- [ ] Code analysis
- [x] Core .NET libraries
- [ ] Cryptography
- [ ] Data
- [ ] Debugger
- [ ] Deployment
- [ ] Globalization
- [ ] Interop
- [ ] JIT
- [ ] LINQ
- [ ] Managed Extensibility Framework (MEF)
- [ ] MSBuild
- [ ] Networking
- [ ] Printing
- [ ] Security
- [ ] Serialization
- [ ] Visual Basic
- [ ] Windows Forms
- [ ] Windows Presentation Foundation (WPF)
- [ ] XML, XSLT
### Affected APIs
[`RuntimeInformation.FrameworkDescription(String)`](https://docs.microsoft.com/dotnet/api/system.runtime.interopservices.runtimeinformation.frameworkdescription)
<!-- Do not modify anything below this line -->
---
#### Issue metadata
* Issue type: breaking-change
Status: Issue closed |
Azure/azure-sdk-for-js | 652527510 | Title: [KeyVault][Test] min-max testing failure at rush update step
Question:
username_0: See https://dev.azure.com/azure-sdk/internal/_build/results?buildId=449392&view=logs&j=1c9290f0-7664-5780-0e82-5fab6e3e2db5&t=0dd44daa-ba67-5763-8a2a-e40cb7fcc4b0
Status: Issue closed |
abnvanand/os-proj | 529383730 | Title: Extensions
Question:
username_0: - Using consistent hashing to select chunk servers for the write operation:
Currently the master server chooses k (=3) chunk servers randomly from the list of active chunk servers.
- Use leader election for selecting the primary replica and secondary replicas from the chunk servers containing that chunk
Answers:
username_0: Consistent Hashing preventing domino effect: Jump Hash Algorithm
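For reference, a minimal sketch of the jump consistent hash function from the paper linked below; hashing a chunk id into a 64-bit integer key first is an assumption made here for illustration:
```python
def jump_hash(key: int, num_buckets: int) -> int:
    """Map a 64-bit key to a bucket in [0, num_buckets) with minimal remapping."""
    b, j = -1, 0
    while j < num_buckets:
        b = j
        key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF  # 64-bit LCG step
        j = int((b + 1) * ((1 << 31) / ((key >> 33) + 1)))
    return b

# e.g. pick one of 5 chunk servers for a chunk whose id hashes to this key
server_index = jump_hash(0x9E3779B97F4A7C15, 5)
```
Unlike a random choice, adding a server only remaps about 1/n of the keys, which is what prevents the domino effect mentioned above.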
https://arxiv.org/pdf/1406.2294.pdf |
qtile/qtile | 1092124215 | Title: Switching to the current group doesn't do anything anymore
Question:
username_0: Hey!
I have set up multiple groups and keybindings for switching to them (full config linked at the end of the report):
```python
for i, label in enumerate(group_labels, 1):
keys.append(Key([mod], str(i), lazy.group[label].toscreen()))
keys.append(Key([mod, "shift"], str(i), lazy.window.togroup(label)))
```
These work: I can switch workspaces when pressing the correct keys. Until recently, I could also switch back to the group I came from when pressing the same keys again (for example `Mod+3` to go to group 3, `Mod+3` to go back to the group I was in before). However, the switching-back part stopped working (let's say, one or two weeks ago).
Interestingly, switching back works as expected if I click on the group in the `GroupBox` widget in the built-in bar instead of using the keyboard.
The logfile contains:
```plain
2022-01-02 23:57:00,677 WARNING libqtile lifecycle.py:_atexit():L34 Restarting Qtile with os.execv(...)
2022-01-02 23:57:00,915 ERROR libqtile core.py:_xpoll():L336 Got an exception in poll loop
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/libqtile/backend/x11/core.py", line 304, in _xpoll
ret = target(event)
File "/usr/lib/python3.10/site-packages/libqtile/backend/x11/core.py", line 699, in handle_DestroyNotify
if self.qtile.current_window is None:
File "/usr/lib/python3.10/site-packages/libqtile/core/manager.py", line 552, in current_window
return self.current_screen.group.current_window
AttributeError: 'Qtile' object has no attribute 'current_screen'
```
Notably, no more `AttributeError`s appear if I try to switch; I am not sure when exactly the one here is raised.
```shell
$ qtile -v
0.19.0
$ python --version
Python 3.10.1
```
I am switching between 1 and 3 monitors, which all have their own `Bar`; switching back works in none of these setups.
I installed qtile with the Arch package `qtile`, [here is my full config](https://github.com/username_0/dotconfig/blob/3cc95fbe831c3b312f2eeec7774bb5dcafdc1d2e/.config/qtile/config.py).
Any ideas what I am doing wrong? Thank you!
---
I have been using qtile as my daily driver for over a year and am really enjoying it! Thank you for your work :)
Answers:
username_1: I had the same problem, fixed with this:
```python
keys.append(Key([mod], str(i), lazy.group[label].toscreen(toggle=True)))
```
See [changelog](https://github.com/qtile/qtile/blob/70c17e1088df391d7f89040e2e24ef43759a4ab5/CHANGELOG#L25-L26)
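Applied to the loop from the original config, the fixed bindings would look like this (a sketch based on the snippet quoted in the question):
```python
for i, label in enumerate(group_labels, 1):
    # toggle=True restores the pre-0.19 behavior: pressing the same key again
    # switches back to the previously focused group
    keys.append(Key([mod], str(i), lazy.group[label].toscreen(toggle=True)))
    keys.append(Key([mod, "shift"], str(i), lazy.window.togroup(label)))
```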
username_2: @username_0 the above fix should work for you as this was something we changed.
I'm interested in the error message from your log though. That's not related to this issue but I am curious as to what caused it. Are you able to trigger it again/reliably?
username_0: @username_1 Thanks, I missed that update, this works!
I am surprised that toggling still works when clicking on the `GroupBox` widget; it is a bit confusing that the default behaviors are not the same for `toscreen` and `GroupBox`.
username_0: @username_2 I tried for 10 minutes; I can't really reproduce the error anymore today :thinking: I will post again if I spot it again. I suppose I should still close the issue?
Status: Issue closed
|
mapbox/node-pre-gyp | 25700131 | Title: Port key modules to node-pre-gyp
Question:
username_0: We should try porting a few very popular C++ modules and provide pull requests.
- [node-canvas](https://github.com/learnboost/node-canvas)
- modules with issues referencing https://github.com/npm/npm/issues/189: node-sass, node-pcre
- http://registry.npmjs.org/-/scripts?scripts=install,preinstall,postinstall&match=node-gyp | https://gist.github.com/username_0/8449220 | from https://github.com/npm/read-package-json/pull/23
- https://www.npmjs.org/browse/depended/nan
- https://github.com/audreyt/node-webworker-threads
Status: Issue closed
Answers:
username_0: Closing, lots of modules are using node-pre-gyp now. |
yjmenezes/odm_tnova | 226781397 | Title: odm opensfm craches.
Question:
username_0: 1 - maybe 13% side overlap is too little (cropped image from 9700 to 8600 due to git's 25 MB limitation)
2 - missing some Exif tags.
Answers:
username_0: it seems the opensfm crash is exiftag related.
It runs ok without the exiftags written by exiftool.
The georeferencing must be verified.
Can the absence of exiftags be bad for photo alignment, increasing run time?
username_0: the module odm_extact_utm seems to be ok. It takes the GPS lon/lat tags from exif and produces UTM coordinates correctly, like this:
WGS84 UTM 24S
389576 9055149
461.3015332 2520.1112 0
-2647.680307 -2292.16901 0
.... |
getgrav/grav-theme-antimatter | 174900577 | Title: Antimatter: The little menu icon is late appearing when shrinking the window size
Question:
username_0: I have quite a few menu headings that stretch across the top of the page.
When I reduce the page size, the normal menu doesn't collapse into the menu icon soon enough; the whole of the normal menu disappears instead.
When reduced even further, the menu icon does appear and the normal menu disappears. This happens between screen widths of 960px and 1199px, in which range neither the menu toolbar nor the menu icon can be found.
How can I make the normal menu disappear as soon as its leftmost item bumps into the logo/site title?
[Screenshot.pdf](https://github.com/getgrav/grav-theme-antimatter/files/453437/Screenshot.pdf)
Answers:
username_1: I don't have this problem on a plain antimatter site.
Does it happen also with fewer items in the menu (2-3)? |
envoyproxy/envoy | 424427535 | Title: Snap docs/configs/image versions
Question:
username_0: Related to https://github.com/envoyproxy/envoy/pull/6357. We have two different user bases:
1) People that run master
2) People that run tagged versions
We need to do a better job of having a consistent state that works for people. I think this will involve setting up some templating so that when we build the docs, we snap to a specific SHA for everything, including the image tags. This will need more thought since some of our references to `latest` are checked into code (so maybe some of the Docker files also need to be generated.)
Answers:
username_1: @username_0 Thanks for the response on #6363 !
Yeah, I was a little bit surprised to see that the `envoyproxy/envoy:v1.9.0` tag had changed to point to something built just 2 days ago.
IMHO, once tagged with `major.minor.patch` I wouldn't expect it to change.
username_0: I don't think the tag has changed. What changed is `latest` no longer points to the master build but points to v1.9.0.
username_2: The release tag has not changed
```
docker inspect --format '{{ .Created }}' envoyproxy/envoy:v1.9.0
2018-12-20T20:40:31.342647689Z
```
username_2: re: templating and GITSHAs-
Would this approach work for releases, where we use the tag instead of the sha? We build on both events so it seems possible. I think maintaining friendly release tags in the release docs would go a long way for user experience.
If it was the case that the following was true:
- All commits to master have docs with an `envoyproxy/envoy-dev` image
- All releases have docs with their respective `envoyproxy/envoy:vX.X.X` image
Then I think it would solve the inconsistency problem. Then it seems like a matter of making sure folks end up on the right version of documentation and the current docs are great because they have a version selector. This may sound crazy, but what if you changed the default branch visible in Github to the latest release 🤔
username_0: @username_2 we do already build tagged docs. See here: https://www.envoyproxy.io/docs
We still have more to do though, because some of those docs refer back into the repo which links to master. I think some of this is pretty easy to fix.
1) We can make the `:repo:` link use the git SHA of the doc build. I can do this today.
2) I think for anything that refers to docker images, we need to use the SHA and not latest. This is going to take some more thought. I think we could have a macro for references in RST docs, but then we have some MD files, and we also have the Dockerfiles that are checked in. Any thoughts on this one?
username_1: Ah, I was just looking at the dockerhub tag created date, thank you for the clarification! @username_2
username_2: I agree that pinning the Docker images is necessary and understand from a technical standpoint why using the SHA makes sense. The downside I see to this approach is that by using the SHA, correct me if I'm wrong, you'll be using `envoyproxy/envoy-dev`, even for release docs.
username_0: @username_2 take a look at the linked PR, we pivot from envoy-dev to envoy if it's a tagged release.
username_2: Thanks. That's what I get for commenting before coffeeing.
username_0: @username_3 if you are looking for another issue related to the examples you have been working on, do you feel like finishing this one up? The remaining work here is to figure out a way that we don't refer to `:latest` in all of the Dockerfiles for the examples. We need to somehow inject the version to use via an environment variable, and then correctly reference that command in the snapped docs, as I have done for the other commands in the docs (see https://github.com/envoyproxy/envoy/pull/6376).
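One possible shape of that injection, as a hedged sketch (the `ENVOY_VERSION` build-argument name is hypothetical, not from this thread):
```dockerfile
# Declare the build argument before FROM so it can parameterize the base image
ARG ENVOY_VERSION=latest
FROM envoyproxy/envoy:${ENVOY_VERSION}
```
The docs build could then pass the snapped version, e.g. `docker build --build-arg ENVOY_VERSION=v1.9.0 .`.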
username_3: @username_0 Sorry for the late response, just got over the worst cold. Sounds perfect, I'll get right on it. I super appreciate the suggestions for next steps from everyone; it really helps make it approachable to get involved as someone who hasn't contributed to open source before.
Azure/azure-sdk-for-c | 496493308 | Title: [FEATURE REQ] Azure.Core: contract library for validating parameters
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
**Information Checklist**
Kindly make sure that you have added all the following information above and checked off the required fields, otherwise we will treat the issue as an incomplete report
- [ ] Description Added
- [ ] Expected solution specified
Status: Issue closed |
formulahendry/vscode-auto-rename-tag | 405390331 | Title: Extension causes high cpu load
Question:
username_0: - Issue Type: `Performance`
- Extension Name: `auto-rename-tag`
- Extension Version: `0.0.15`
- OS Version: `Darwin x64 17.7.0`
- VSCode version: `1.30.2`
:warning: Make sure to **attach** this file from your *home*-directory: `/Users/goddarmx/formulahendry.auto-rename-tag-unresponsive.cpuprofile.txt` :warning:
Find more details here: https://github.com/Microsoft/vscode/wiki/Explain:-extension-causes-high-cpu-load |
ITfoxtec/ITfoxtec.Identity.Saml2 | 323762660 | Title: Send Signed AuthnRequest
Question:
username_0: Hi, I'm trying to connect to a client IdP and I need to send a signed AuthnRequest (with ds:Signature elements) including the cert value, so I can successfully log in to the IdP with my SP.
I am using a Service Fabric Application (hosted in Azure) with .net Core.
I couldn't find anything on how to create this elements with this solution.
Thanks in advance.
Answers:
username_1: Hi,
You can use either Saml2PostBinding or Saml2RedirectBinding and set the saml2Configuration.SignAuthnRequest to true.
Regards
Anders
Status: Issue closed
|
ESMCI/cime | 205248434 | Title: Create a test for DWAV
Question:
username_0: Create a test for DWAV that does something.
Status: Issue closed
Answers:
username_0: Was this test added?
username_1: Ah - good point. I'm not sure it was added to the scripts regression tests.
This needs to be reopened. I got confused with new DWAV compsets rather
than new DWAV tests in CIME. |
HelioGuilherme66/RIDE | 291948464 | Title: Don't get keyword prompts while adding keywords on Mac OS
Question:
username_0: Hi Guys,
Installed Ride on Mac and it seems to be working fine with a few crashes sometimes - creating this as when your entering keywords and press Command space or contrl space I dont see an keywords prompt I have attached the Recording for the same.
Please let me know if you need any further information on this -
[Jan 26 2018 8_01 AM.webm.zip](https://github.com/HelioGuilherme66/RIDE/files/1668305/Jan.26.2018.8_01.AM.webm.zip)
[Jan 26 2018 8_21 AM.webm.zip](https://github.com/HelioGuilherme66/RIDE/files/1668309/Jan.26.2018.8_21.AM.webm.zip) |
byuoitav/av-api | 230908404 | Title: Getting status if device not responsive
Question:
username_0: This is a twofold thing:
1) If a device isn't responsive, we need to make sure that device is at least present in the return body, maybe with some indicator that we can't get a response (maybe a response code?)
2) We need to include the microservice response in the error we report.
For example, if I query the state of a pjlink device with an incorrect address I get the following log trace from the AV-API:
```
2017/05/24 03:07:41 Sending requqest to http://localhost:8012/0.0.0.0/power/status
2017/05/24 03:07:41 Microservice returned: "error dialing address : dial tcp 0.0.0.0:53595: getsockopt: connection refused"
2017/05/24 03:07:41 Error unmarshalling response from D1
2017/05/24 03:07:41 Sending requqest to http://localhost:8012/0.0.0.0/volume/mute/status
2017/05/24 03:07:41 Microservice returned: "error dialing address : dial tcp 0.0.0.0:53595: getsockopt: connection refused"
2017/05/24 03:07:41 Error unmarshalling response from D1
2017/05/24 03:07:41 Sending requqest to http://localhost:8005/0.0.0.0/input/current
2017/05/24 03:07:41 Microservice returned: "failed to establish a connection with pjlink device. error msg: dial tcp 0.0.0.0:4352: getsockopt: connection refused"
2017/05/24 03:07:41 Error unmarshalling response from D1
2017/05/24 03:07:41 Sending requqest to http://localhost:8012/0.0.0.0/volume/level
2017/05/24 03:07:41 Microservice returned: "error dialing address : dial tcp 0.0.0.0:53595: getsockopt: connection refused"
2017/05/24 03:07:41 Error unmarshalling response from D1
2017/05/24 03:07:41 Writing output to channel...
2017/05/24 03:07:41 outputs from device ITB-1006-D1
2017/05/24 03:07:41 outputs from device ITB-1006-D1
2017/05/24 03:07:41 outputs from device ITB-1006-D1
2017/05/24 03:07:41 outputs from device ITB-1006-D1
2017/05/24 03:07:41 Done acquiring statuses from ITB-1006-D1
2017/05/24 03:07:41 Done. Closing channel...
2017/05/24 03:07:41 Error querying status with destination: D1
2017/05/24 03:07:41 Publishing event: {"hostname":"ITB-1101-CP3","timestamp":"2017-05-24T03:07:41Z","localEnvironment":true,"event":{"type":0,"eventCause":3,"device":"CP3","eventInfoKey":"Error String","eventInfoValue":"Error querying status for destinationD1:json: cannot unmarshal string into Go value of type map[string]interface {}"},"building":"ITB","room":"1101"}
2017/05/24 03:07:41 Appending results of D1 to output
2017/05/24 03:07:41 Error querying status with destination: D1
2017/05/24 03:07:41 Publishing event: {"hostname":"ITB-1101-CP3","timestamp":"2017-05-24T03:07:41Z","localEnvironment":true,"event":{"type":0,"eventCause":3,"device":"CP3","eventInfoKey":"Error String","eventInfoValue":"Error querying status for destinationD1:json: cannot unmarshal string into Go value of type map[string]interface {}"},"building":"ITB","room":"1101"}
2017/05/24 03:07:41 Appending results of D1 to output
2017/05/24 03:07:41 Error querying status with destination: D1
2017/05/24 03:07:41 Publishing event: {"hostname":"ITB-1101-CP3","timestamp":"2017-05-24T03:07:41Z","localEnvironment":true,"event":{"type":0,"eventCause":3,"device":"CP3","eventInfoKey":"Error String","eventInfoValue":"Error querying status for destinationD1:json: cannot unmarshal string into Go value of type map[string]interface {}"},"building":"ITB","room":"1101"}
2017/05/24 03:07:41 Appending results of D1 to output
2017/05/24 03:07:41 Error querying status with destination: D1
2017/05/24 03:07:41 Publishing event: {"hostname":"ITB-1101-CP3","timestamp":"2017-05-24T03:07:41Z","localEnvironment":true,"event":{"type":0,"eventCause":3,"device":"CP3","eventInfoKey":"Error String","eventInfoValue":"Error querying status for destinationD1:json: cannot unmarshal string into Go value of type map[string]interface {}"},"building":"ITB","room":"1101"}
2017/05/24 03:07:41 Appending results of D1 to output
2017/05/24 03:07:41 Evaluating responses...
```
You'll notice that we get a good error here:
`Microservice returned: "failed to establish a connection with pjlink device. error msg: dial tcp 0.0.0.0:4352: getsockopt: connection refused"`
But in the error that is published doesn't include this information it includes a much more generic error:
`"Error String","eventInfoValue":"Error querying status for destinationD1:json: cannot unmarshal string into Go value of type map[string]interface {}"}`
We need to find a way to report the error that the microservice returned in the event.
Answers:
username_1: I fixed it in ```av-api```, but somewhere along the line these events are getting turned into ```AUTOGENERATED``` events, which is another problem. Also, are we trying to use this to diagnose problems with source devices or destination devices?
username_1: Maybe ```event-translator-microservice```? |
watson-developer-cloud/python-sdk | 388419158 | Title: Travis fails in python 3.7
Question:
username_0: #### Expected behavior
A recent PR: https://github.com/watson-developer-cloud/python-sdk/pull/609 added python 3.7 which should pass in travis.
#### Actual behavior
It passes locally, but fails with following on travis:
```
BlockingIOError: [Errno 11] write could not complete without blocking
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BlockingIOError: [Errno 11] write could not complete without blocking
```
https://travis-ci.org/watson-developer-cloud/python-sdk/jobs/464662022
As a workaround tried adding the following: https://github.com/travis-ci/travis-ci/issues/8982#issuecomment-354357640, but still failed
```
before_install:
- python -c "import fcntl; fcntl.fcntl(1, fcntl.F_SETFL, 0)"
```
#### Steps to reproduce the problem
Enable python 3.7 in travis:
```
matrix:
include:
- python: '3.7'
dist: xenial # required for Python >= 3.7 (travis-ci/travis-ci#9069)
```
#### Code snippet (Note: Do not paste your credentials)
NA
#### python version
3.7
Answers:
username_1: where are we with this?
Status: Issue closed
|
JTFouquier/mark2cure | 144082140 | Title: Single quote (apostrophe) highlight bug
Question:
username_0: There seems to be an issue with highlighting terms enclosed by single quotes, in that the highlight will turn grey, and then cannot be further interacted with. This issue has been reported multiple times.
Skyem: Bug - quest/368/4/: Another instance where punctuation prevents highlighting of the attached word. In this case: 'autoinflammatory
Skyem: Same problem in quest/361/2/ for "holdase" and "unfoldase"
Tom: When I attempt to click on a term enclosed in single quotes I cannot change its color.
Skyem: Bug - Quest 161: Small bug in the second line. It won't accept highlighting on the word: 'congential ... I think because the apostrophe is there. It highlights gray instead and doesn't get submitted. (PMID: 22991164)
Skyem: Quest 174 - page 3. Text: ...is the first 'deafness-CDG ' . CDG should be...
Because the first apostophe is directly next to the word it won't accept highlighting of the entire deafness-CDG and goes gray instead
----------------------------------------
- Bitbucket: https://bitbucket.org/sulab/mark2cure/issue/125
- Originally reported by: [gtsueng](http://bitbucket.org/gtsueng)
- Originally created at: 2015-09-02T17:30:23.417 |
PyTables/PyTables | 105483002 | Title: Return unicode strings from stored bytestrings
Question:
username_0: Storing strings in a table in Python 3 with something like:
```python
import tables

namelength = 16  # column width in bytes (value assumed for this example)

class Tags(tables.IsDescription):
    tag = tables.StringCol(namelength)

handle = tables.open_file('testfile.h5', 'a')
table = handle.create_table('/', 'tags', Tags, 'tags')
newtags = ['bark', 'lark', 'snark']
for tag in newtags:
    table.row['tag'] = tag
    table.row.append()
handle.close()
```
and then reading them back with:
```python
handle = tables.open_file('testfile.h5', 'r')
table = handle.get_node('/', 'tags')
tags = [x['tag'] for x in table.read()]
print(tags[0])
```
yields `b'bark'` instead of `'bark'`. This is because PyTables currently stores strings as ascii byte strings instead of unicode.
What solutions exist for this issue? Because unicode strings are variable length, there might not be an easy way to directly support them. Should PyTables call `bytes.decode('ascii')` for every string read, to at least give back unicode strings from its stored bytestrings?
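A user-side workaround (a sketch, assuming the stored bytes are plain ASCII) is to decode on read:
```python
import tables

handle = tables.open_file('testfile.h5', 'r')
table = handle.get_node('/', 'tags')
tags = [x['tag'].decode('ascii') for x in table.read()]  # bytes -> str
print(tags[0])  # 'bark'
handle.close()
```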
This issue was broached in #268, and was discussed briefly in username_0/datreant#10.
Answers:
username_1: Thanks @username_0, this is very valuable.
username_2: If it helps, here's some material on Unicode type mapping from the h5py project. Unfortunately we have not had much success finding a good solution in the general case. If PyTables stores ASCII-only data, you might have better luck.
https://github.com/h5py/h5py/issues/379
http://docs.h5py.org/en/latest/strings.html
username_1: Thanks for helping, @username_2. To me it seems that there's no real Unicode support in HDF5.
You can tag a string as UTF-8 encoded, but you still set a fixed (byte) length, which makes, as you know, little sense because UTF-8 is a variable-length encoding.
Do you know how numpy handles this? It seems that when numpy needs a fixed (byte) length it will use UTF-32.
```
In [26]: s = np.str_("❤♎☀★☂♞☯☭☢€☎⚑❄♫✂")
In [27]: s.itemsize / len(s)
Out[27]: 4.0
In [28]: s.tobytes().decode('utf-32')
Out[28]: '❤♎☀★☂♞☯☭☢€☎⚑❄♫✂'
```
Could that be a solution: use UTF-32 whenever we need a fixed byte length?
username_2: NumPy will use the "U" dtype, which is indeed UTF-32 (4 bytes). This is also the default type when creating an array from strings on Python 3 (e.g. `np.array(["Hello"])`).
Unfortunately there's no equivalent type in HDF5 for "U", so ultimately we decided not to support it in h5py. HDF5 has no native support for UTF-32, or any wide-character encoding. Solutions involving e.g. opaque types are not interoperable with other HDF5 tools, and storing in e.g. variable-length strings means the type doesn't round-trip. :/
username_1: I don't have experience with other HDF5 tools, but I am very curious how they approach the problem, because they must experience the same issues; in the end this is not a Python problem at all.
In abstract terms: I want to store an array of UTF-8 encoded, 1-character-long strings, like HDF5 says it supports; the strings/characters are a and ☎. How do I do it?
Ping @username_3, welcome to github :)
username_2: Folks from the HDF Group may have some more insight, but from what I have seen, you simply can't. The only way to reliably store Unicode in an HDF5 dataset with more than one element is to use variable-length UTF-8 strings. Unfortunately, there is no NumPy dtype for this, although one is desperately needed. In h5py, we use `dtype("O")` object arrays storing regular Python str/unicode objects.
username_1: > The only way to reliably store Unicode in an HDF5 dataset with more than one element is to use variable-length UTF-8 strings.
But HDF5 lets you define a "fixed length, UTF8 encoded string" which is misleading. Now, at Python level, we could just check the passed string fits into the HDF5 string byte size, given we mostly deal with a string at a time.
Anyway, all this could be fixed, but HDF5 has to step up to the unicode challenge :)
The problem @username_0 is having is a matter or consistency, sometimes we return strings and sometimes we return bytes. That's just no good.
username_2: It's a consequence of HDF5 not supporting wide characters, and having only two declared string padding approaches in wide use... NULLTERM and NULLPAD, of which NULLTERM seems to be particularly popular. The breakage happens inside the HDF5 conversion routines when reading from a dataset into memory. Among other things, this is why people get a nasty shock when trying to store arbitrary binary data in HDF5 strings.
I agree we may be getting too far away from @username_0's bug report. :) The great thing is, the upcoming merger will provide an opportunity to change up h5py's (and PyTables') string handling for the better, including changes to the storage types if needed.
username_3: Greetings!
It looks like the issue was settled! For completeness
I would like to refer you to the discussion of UTF-8 support in HDF5 found at https://www.hdfgroup.org/HDF5/doc/Advanced/UsingUnicode/index.html
At this point The HDF Group has no plans to support other encodings. One of the problems (as it was mentioned above) is conversion between different encoding types. The second problem is that it is a pretty big task. As usual, we are more than happy to work with someone who volunteers to implement the feature :-)
Just a few comments:
Andrea is correct that H5Tset_size is used to set up the storage size required for the string, i.e., HDF5 pushes responsibility to the application to allocate enough space to store UTF-8 data. HDF5 doesn't touch the user's data (unless conversion is required). A "fixed-length UTF-8" string simply means that all strings of that type will require a defined number of bytes for storage, i.e., it is not misleading if one remembers the meaning of "size" and lives in the HDF5 world ;-)
Small correction: HDF5 has three string padding types:
- H5T_STR_NULLTERM (0): Null terminate (as C does).
- H5T_STR_NULLPAD (1): Pad with zeros.
- H5T_STR_SPACEPAD (2): Pad with spaces (as FORTRAN does).
See https://www.hdfgroup.org/HDF5/doc/RM/RM_H5T.html#Datatype-SetStrpad
username_1: PyTables' inconsistency derives from numpy's, which has our same problem:
```python
In [122]: z = np.zeros(1, dtype='S10') # 'S' is bytes, not a string
In [123]: z
Out[123]:
array([b''],
dtype='|S10')
In [124]: z[0] = 'ciao' # enters a string
In [125]: z[0] # exit encoded bytes
Out[125]: b'ciao'
In [126]: z[0] = '❤' # and the encoding is just ascii
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
<ipython-input-126-387e70d94037> in <module>()
----> 1 z[0] = '❤'
UnicodeEncodeError: 'ascii' codec can't encode character '\u2764' in position 0: ordinal not in range(128)
```
StringCol maps to numpy type kind 'S' (bytes), so it should not accept strings on the way in but only bytes (and perhaps it should be renamed BytesCol).
So we are lying when we allow `table.row['tag'] = tag`, because we are actually doing `table.row['tag'] = tag.encode('ascii')` behind the user's back.
I am afraid this can't be fixed without partially affecting the api so it has to wait until after 3.2.2.
username_4: Just chiming in to note that the Unicode NumPy dtype can be either UCS2 or UCS4, depending on how the Python interpreter has been compiled. Here is how I got caught in the same trap, and Travis describing the above behavior: https://github.com/numpy/numpy/issues/1123
On a rather historical note, I remember advocating for native UCS4 support in HDF5 many years ago (I still think it would be the best for NumPy-based libraries), but as @username_3 said, UTF-8 was considered the 'blessed' encoding and converting from/to other encodings was considered too involved.
username_4: Urgh, the thing was that the **array scalar** was UCS2 on UCS2 Python interpreters:
```
while the array unicode data-type is always a 4-byte unicode character, the array scalar unicode_ Python type is represented by 2-byte unicode characters.
```
but the array dtype is indeed always UCS4. Sorry for being sloppy in my reading :P
username_1: @username_4 @username_3 I have trouble understanding what the HDF5 UTF-8 support consists of.
The good thing about UTF-8 is that it works seamlessly with all C string functions, with the only exception that `strlen` is the 'length' in bytes and not in characters. If HDF5 doesn't touch the user's data there's very little to do to support UTF-8. I looked at the code and indeed the only thing the library does is to have a tag that says 'this string is UTF-8 encoded'.
Correct me if I am wrong but library users could do that themselves either with attributes or by their own data types. The part where HDF5 messes things up a bit is the padding, which doesn't work with all encodings.
@username_3 I'd be happy to dig into the library code, but the development is not very transparent. Is the bug tracker public? How do I know what people are working on? Moving to GitHub-like tools will enormously increase your chances of attracting casual contributors :)
username_3: Hi Andrea,
username_5: Hi all,
Apologies if my comment is too naive: having ported our large codebase from py2 to py3, this issue of strings not making the round trip with hdf5/pytables is a nasty one, i.e. python string -> bytearray !-> string.
Would this one-liner at least alleviate the issue? I.e. when this check returns true:
`hasattr(bytearray, 'decode') and isinstance(bytearray.decode(), str)`
then the byte array could be decoded behind the scenes in pytables, instead of doing this everywhere in user code. I am sure I am not considering lots of edge cases here; just raising the possibility to reduce user-level complexity.
Thanks for your devotion in improving pytables! |
tudace/tuda_latex_templates | 655213775 | Title: PDF Examples to check for compilation errors
Question:
username_0: May I propose adding a folder of compiled PDF examples to the repo? Just plain examples with random text. That way the user can check if the compiled document actually looks as intended.
As an example: I currently have a few bugs with the LaTeX compiler. Nothing serious, just a broken bibliography and odd spaces, as seen for example here:

Now the main problem is that I cannot figure out whether that space is as big as intended. It does not match the other documents I have, but those could have been made with a different version.
Answers:
username_1: I can't follow the link because I am an external.
The PDF examples exist, just not in the repo. because It's never a good idea to add compiled material to a git repo.
I don't get the point of using PDFs to find compiler issues, because usually that's the task of the logfile, but if this helps,you can find the PDFs for the release in the CTAN bundle (https://github.com/tudace/tuda_latex_templates/releases/latest/) Or single files directly on CTAN: https://ctan.org/pkg/tuda-ci
I hope this helps.
username_2: That's not true. It's usually a bad idea to add compilation products to a git repo as they frequently change but manually adding a small compiled example is absolutely not a bad practice.
Also, if those examples are not linked in the readme, they might as well not exist.
Maybe the CTAN page should be linked in the readme as a suggestion where to find compiled pdfs that can be used to check whether the local installation works correctly (Though I haven't had the experience that they would compile but render incorrectly as could happen with an installation of the old template).
username_3: @username_1 I guess we could set up a free runner (I use circleci for this) and push the PDFs to a doc branch? The infrastructure is already in place ([docker image](https://github.com/dante-ev/docker-texlive) and your release scripts). This makes sense since the repo is typically one release ahead of CTAN. But I'd agree that this does not have very high priority.
username_1: They would change, because dependencies might change. This is why I'd prefer to add them with the release. I can of course add a link to the files directly. Still I don't get the requirement, as they are installed by TeX Live/MikTeX to the doc tree of your TeX distribution.
But I will add an additional link to the GitHub version of the README for the CTAN reference.
@username_3
The problem with a CI setup would be that we create a bunch of unnecessary copies. I'd prefer to directly add the PDFs to the release tag, which I think can be okay; but I still don't really see the requirement, because the log files should give information on that.
username_3: I agree that there is no need to add the PDFs for the releases because they will be at CTAN. Maybe it's also sufficient to just add the CTAN link to each example as a comment? Either way, I agree that the files should not be in the main branch.
username_2: You should not forget that most people using this template are not very knowledgeable in LaTeX and I doubt they would know where to look for the docs if they even knew they existed. Same goes for the CTAN entry.
I did not think of checking that and while I'm certainly not proficient, I do have a few years of experience with LaTeX.
The first place they will go looking for example pdfs is this repository.
A link to the CTAN in the readme and perhaps as @username_3 suggests in the corresponding example tex file would probably help a lot.
username_0: That is exactly what I mean. The average student starts using LaTeX when he is writing his bachelor thesis. He has no idea about the log files, he has no idea about the distribution and, most importantly, he has no idea how to deal with errors.
He downloads / installs the package and then clicks on build.
Most latex projects I worked on had one or two errors. In fact this one has two errors out of the box (Misplaced alignment tab character &. \end and Underfull \hbox (badness 10000) in paragraph, both on an empty example). The most common approach, especially close to the deadline, is to just check if something breaks the layout. Why? Because Latex bug-hunting is very time consuming.
But the problem is even worse if nothing is wrong. The professor might remember an older version of the document. If you present him with the current version he might ask if the spacing is broken or if the design is the wrong style. How can you verify that? Just because you can't see the problem doesn't mean that there is no problem.
A simple PDF that shows how the layout should look allows students who don't understand LaTeX to check whether they even need assistance.
username_1: I thought students might receive at least a simple introduction to the TeX & Friends ecosystem, because the template was not meant to describe the basic usage of LaTeX. The universities I visited/taught at always had workshops on this to get the basics, especially with a focus on debugging LaTeX.
I usually achieve a lot with a 20-minute workshop on this, explaining how TeX output should be read. But sure … sadly not everyone sees the need for handling it.
My problem with this usually is that I think it's dangerous to support users too much in doing “visual” debugging, because this will not make them start reading their logs. In the long term it's much easier to work with the log files, especially because you train yourself to stick to the markup.
username_3: We seem to know very different universities. I know the LaTeX courses of at least three different universities, and nobody ever talked sufficiently about debugging or log files (by the way: all the programming lectures I ever attended also disregarded this topic). I am not saying that this is good though...
username_0: I can only speak for a small part of the engineering department, but there simply is no mandatory (or even CP-rewarded) LaTeX course that I know of. There are workshops, but most people do not know about those. Also, the need to use LaTeX arises during a time when you have a lot on your plate (thesis deadline), so everything that requires extra time is off the table.
I know that this is in no way optimal, it just is that way for many students that I know of.
username_1: I just wanted to do this, but when typing the text for the README I was wondering how people get this information, because if people install the bundle using tlmgr or MikTeX they will not even see it.
I now added it to the README and to the License information of the DEMO files. Would be glad, if one of you could take a look.
And to get back to the off-topic of LaTeX courses: debugging is the easiest thing to teach. This is why I am always confused that people don't include it in their courses. If you know people offering those intros, I can always try to help with the preparation of a small debugging session. Easiest way to save time.
username_0: Usually people find the information via the [TU website](https://www.intern.tu-darmstadt.de/arbeitsmittel/dokumente_formulare/sortiert_von_a_bis_z/details_147072.de.jsp). I strongly advise putting the link there. Most people do not know about MikTeX or even LaTeX before visiting the site.
Latex is something that is usually not taught at all. The first time most students are confronted with it is during their thesis.
username_1: @username_0 okay then. That's not my job.
@username_4: It's your turn I guess. But I can also write an email to the shown contact address. Just guess it will be easier if an intern does ;-)
username_3: I guess people find the tex example files mostly using Google (not on their hard disk), so I would have recommended putting a direct link to CTAN in each tex file. You don't like it? What do you think?
username_1: That's what the license info of the demo files does. But during an online search people will find the PDF before the code anyway.
username_1: I remove the milestone, because the release has been done, and I am waiting for @username_4 to close this when the information is added on the webpage.
Status: Issue closed
|
videojs/video.js | 261700902 | Title: RTMP live, time drift with packet loss, player showing not-live video
Question:
username_0: ## Description
Using RTMP live stream, sometimes users are "behind" the live feed by up to 1 minute. This seems to correlate with packet loss. I have not been able to correlate with any errors in the browser or specific network conditions. The only way a user knows they are "behind" is checking against a second player that has recently been refreshed to see the difference.
## Steps to reproduce
Leave RTMP stream up for extended periods and degrade network quality. Sorry, no specific steps yet.
## Results
### Expected
I would expect the player to "keep live" with the most current packets from the stream or somehow indicate it is buffered and not live, allowing a user to dump all packets and move up to the latest.
### Actual
Player is showing "buffered" or old packets when the stream is supposed to be displaying live video.
## Additional Information
Please include any additional information necessary here, including the following:
### versions
Not browser or platform specific
#### videojs
videojs 5.19.2
### plugins
Also have videojs hls contrib as backup if platform doesn't support flash, however I can confirm in all of these cases, that HLS was NOT being used. |
yurijmikhalevich/jdext | 282252889 | Title: xbox360 wrong file?
Question:
username_0: hi,
is this an error, or did I find the wrong file on my Xbox 360? The file name is original; this 213 MB file looks like a video from Just Dance 2017, but maybe it is not:
```
C:\prv\film>jdext C0DE99990F586558 f1.webm
Error: cannot find video start byte
    at Object.exports.extract (C:\Users\xxx\AppData\Roaming\npm\node_modules\jdext\jdext.js:7:11)
    at Object.<anonymous> (C:\Users\xxx\AppData\Roaming\npm\node_modules\jdext\jdext.js:17:11)
    at Module._compile (module.js:635:30)
    at Object.Module._extensions..js (module.js:646:10)
    at Module.load (module.js:554:32)
    at tryModuleLoad (module.js:497:12)
    at Function.Module._load (module.js:489:3)
    at Function.Module.runMain (module.js:676:10)
    at startup (bootstrap_node.js:187:16)
    at bootstrap_node.js:608:3

please copy log from above and report the issue here https://github.com/39dotyt/jdext/issues

C:\prv\film>
```
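For context, the error above means jdext scanned the file and never found the byte pattern that a WebM stream starts with. A rough stand-in in Python (jdext itself is JavaScript; the helper below is hypothetical) of what that check amounts to:

```python
# Hypothetical illustration: locate the EBML header magic (0x1A 0x45 0xDF 0xA3)
# that every WebM/Matroska stream begins with.
EBML_MAGIC = b"\x1a\x45\xdf\xa3"

def find_video_start(path: str) -> int:
    """Return the byte offset where the WebM data begins, or -1 if absent."""
    with open(path, "rb") as f:
        data = f.read()
    return data.find(EBML_MAGIC)
```

If a scan like this returns -1 for the 213 MB file, the file most likely contains no embedded WebM video at all, i.e. it is not a Just Dance video cache.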
agda/agda | 203418907 | Title: Exact split analysis is too strict when matching on eta record constructor
Question:
username_0: ```agda
{-# OPTIONS --exact-split #-}
open import Common.Bool
open import Common.Equality
record Unit : Set where
eta-equality
constructor unit
f : Unit → Bool → Bool
f unit true = true
f u false = false -- passes without --exact-split
-- All equations pass do hold definitionally:
test-1 : f unit true ≡ true
test-1 = refl
test-2 : ∀{u : Unit} → f u false ≡ false
test-2 = refl
-- Error:
-- Exact splitting is enabled, but not all clauses can be preserved as
-- definitional equalities in the translation to a case tree
-- when checking the definition of f
-- Should succeed.
```
Status: Issue closed
Answers:
username_1: Nothing is quite like having a thesis to write to motivate you to work on other things. I even got the issue number right this time!
username_0: Ah sorry, I am giving you a pretext for procrastination... |
Constellation-Labs/constellation | 770755009 | Title: Store snapshot proposal together with signature
Question:
username_0: With the current implementation we **pull** proposals directly from the nodes which created them.
The gossip protocol reverses the process, so we **push** proposals to nodes through other nodes, which means that we can get someone's proposal from a node which is not the original proposer. In general this is secured by the signature chain.
According to what we agreed on for error handling, there is a case where we want to fall back to a **pull** (if a node didn't get a proposal and has a "gap"), with the difference that we can **pull** not only from the original proposer (as in the pre-gossip implementation) but from any of the nodes which already received the proposal.
Example:
1. `A` makes proposal at height `10` and sends it to `B`-> `C`-> `D` (via Gossip) but `C`->`D` fails.
2. Standard retrying/error handling doesn't work so proposal never reaches `D`. Assuming that `D` is aware of missing proposal `10` from `A` it should then **pull** proposal from `A` or `B` or `C`.
3. To make that secure, all the nodes should store not only proposal in form of `hash` but rather `hash`+`signature`. Thanks to that even if we fetch proposal `A` from `B` (so not from the original proposer) then we can still check if `B` didn't malform `A`'s proposal by checking signature. It can be achieved by storing and signing whole `case class PersistedSnapshotProposal(hash: String, reputation: SortedMap[Id, Double])` as `SignedSnapshotProposal` (we can also add `height: Long` field to `PersistedSnapshotProposal`) instead of just storing `height`+`hash`
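For illustration, a minimal sketch of the `SignedSnapshotProposal` idea (the real implementation is Scala; this Python version uses a placeholder signature check and is only meant to show the data flow):

```python
from dataclasses import dataclass

def check_signature(payload: bytes, signature: bytes, public_key) -> bool:
    """Placeholder for a real cryptographic verification (e.g. Ed25519)."""
    raise NotImplementedError

@dataclass(frozen=True)
class SnapshotProposal:
    height: int
    hash: str
    reputation: dict  # Id -> Double

    def serialize(self) -> bytes:
        # Deterministic serialization so every node signs/verifies the same bytes.
        return repr((self.height, self.hash, sorted(self.reputation.items()))).encode()

@dataclass(frozen=True)
class SignedSnapshotProposal:
    proposal: SnapshotProposal
    signature: bytes  # created once, by the original proposer's private key

def is_authentic(signed: SignedSnapshotProposal, proposer_public_key) -> bool:
    # The check depends only on the proposer's public key, so D can verify
    # A's proposal even when it was fetched from B or C.
    return check_signature(signed.proposal.serialize(), signed.signature, proposer_public_key)
```

Because the signature travels with the proposal, a node pulling from a non-original peer can always detect a malformed or tampered proposal.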
Status: Issue closed
|
AMReX-Astro/Castro | 467729814 | Title: document the SDC solver
Question:
username_0: We need to make clearer the different time integration methods in the flowchart.
Also the Hydrodynamics section needs to be cleaned up to make it clear what parts are CTU and what parts are MOL
Answers:
username_0: a first pass of this has been done.
Status: Issue closed
|
adobe/spectrum-css | 674599229 | Title: Windows High Contrast mode broken
Question:
username_0: ## Description
It is impossible to tell whether checkboxes/radio buttons are checked or not when running in WHCM
## Steps to reproduce
1. Go to http://opensource.adobe.com/spectrum-css/
2. Turn on Windows High Contrast mode (left shift+ left alt + print screen)
3. Run Checkbox page
4. Observe there is no difference between checked and unchecked
5. Run Radio page
6. Observe no difference between checked and unchecked
## Expected behavior
I expect to be able to tell what is selected
## Screenshots
current behavior


behavior before #619 was applied


## Environment
- **Spectrum CSS version:** 2.13.0
- **Browser(s) and OS(s):** Edge 85.0.564.23 on Win 10
## Additional context
likely caused by #619
Seems caused by the addition of background-color on the button/checkbox
Answers:
username_0: Thinking about this a little more, I think the best approach for checkboxes and radios is to show the native checkbox/radio instead of the generated-content version when possible. So something like this for checkboxes.
```
@media (forced-colors: active) {
.spectrum-Checkbox-input {
opacity: 1;
inline-size: var(--spectrum-checkbox-box-size);
block-size: var(--spectrum-checkbox-box-size);
/* Needs some work to position the checkbox in the right place */
}
.spectrum-Checkbox-box {
visibility: hidden;
}
.spectrum-Checkbox.is-indeterminate {
.spectrum-Checkbox-input {
opacity: 0.0001;
}
.spectrum-Checkbox-box,
.spectrum-Checkbox-input:checked + .spectrum-Checkbox-box {
visibility: visible;
&:before {
border-width: var(--spectrum-checkbox-box-border-size);
}
/* Need to fix focus styles for windows high contrast for indeterminate checkboxes */
}
}
}
```
This would solve a lot of the problems without having to modify much code. Ideally we could actually set the indeterminate attribute in JS and we wouldn't even have to special case indeterminate here.
Status: Issue closed
|
BotBuilderCommunity/botbuilder-community-dotnet | 1064451889 | Title: Bot.Builder.Community.Dialogs.Location
Question:
username_0: Hi,
Does this component allow the user to actually select a location on the map? The description says "An open source location picker", but based on the sample provided you need to enter the location instead of picking one from a map.
Is that possible?
Many thanks |
smartdevicelink/sdl_evolution | 321621080 | Title: [In Review] SDL 0170 - SDL behavior in case of LOW_VOLTAGE event
Question:
username_0: Hello SDL community,
The review of "SDL 0170 - SDL behavior in case of LOW_VOLTAGE event" begins now and runs through May 15, 2018. The proposal is available here:
https://github.com/smartdevicelink/sdl_evolution/blob/master/proposals/0170-sdl-behavior-in-case-of-Low-Voltage_mqueue.md
Reviews are an important part of the SDL evolution process. All reviews should be sent to the associated Github issue at:
https://github.com/smartdevicelink/sdl_evolution/issues/489
What goes into a review?
The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of SDL. When writing your review, here are some questions you might want to answer in your review:
* Is the problem being addressed significant enough to warrant a change to SDL?
* Does this proposal fit well with the feel and direction of SDL?
* If you have used competitors with a similar feature, how do you feel that this proposal compares to those?
* How much effort did you put into your review? A glance, a quick reading, or an in-depth study?
Please state explicitly whether you believe that the proposal should be accepted into SDL.
More information about the SDL evolution process is available at
https://github.com/smartdevicelink/sdl_evolution/blob/master/process.md
Thank you,
<NAME>
Program Manager - Livio
<EMAIL>
Answers:
username_1: Please state explicitly whether you believe that the proposal should be accepted into SDL.
A quick reading. If after others weigh in on this proposal it appears that OEMs besides Ford require this sort of functionality, then I believe that this proposal should be accepted into SDL.
username_2: In the HMI API there exists an `OnAwakeSDL` RPC.
```
<function name="OnAwakeSDL" messagetype="notification">
<description>
Sender: HMI->SDL. Must be sent to return SDL to normal operation after 'Suspend' or 'LowVoltage' events
</description>
</function>
```
Also there is a `SUSPEND` enum which is used in OnExitAllApplications.
```
<enum name="ApplicationsCloseReason">
<description>Describes the reasons for exiting all of applications.</description>
<element name="IGNITION_OFF" />
<element name="MASTER_RESET" />
<element name="FACTORY_DEFAULTS" />
<element name="SUSPEND" />
</enum>
```
Instead of creating a new communications channel, can the problem described by the proposal be solved by using "SUSPEND" or adding "LOW_VOLTAGE" to the ApplicationsCloseReason enums as well as using the `OnAwakeSDL` RPC?
username_3: The aim is to suspend SDL when a Low voltage condition occurs. A potential problem with using the existing messages in HMI API is that these messages can be queued behind other HMI API messages and the system will likely suspend SDL before Core gets an opportunity to de-queue and process the messages. Hence the request for a separate communication mechanism.
However, I would like to get feedback from others as well on the likelihood of this problem on their platforms.
username_2: Also I believe the proposal is suggesting we use "mqueue" as the "message queue"?
I thought that "mqueue" was the method of communication from Core to HMI specific to the Sync 3 implementation (in place of websockets). This type of communication between core and the HMI does not exist at all in the open source project. The only form of communications between core and the HMI is through web sockets or dbus.
If this proposal means that mqueue must also be implemented in the project then I feel like there are a lot of details missing in the proposal.
Maybe I am misunderstanding something but the proposal is written as if "mqueue" already exists in the project.
username_4: In case of an additional interface we will need to keep a lot of threads active on select (the whole HMI communication) and keep processing HMI requests. This activity is rather expensive for the low-voltage state.
username_5: "SUSPEND is not the same as LOW_VOLTAGE. SUSPEND is a regular state, for example when you have exited the car but the HU keeps working for some time so it can start up faster in case you return. Low Voltage is not a regular state; it is an exceptional state. In case of low voltage we should consume as few resources as possible. That's why we should shut down all transports and even reject any communication with Applink via the MessageBroker. This proposal is for creating a low-consumption transport channel, without any websockets, JSON parsing, notification subscription management, or heavy adapters. It should just connect to mqueue and be ready to react to a minimal string received from the Applink side."
While I'm not sure we can handle low voltage this way, I do see the advantage of this type of emergency suspension of SDL Core. Since SUSPEND is already used, maybe we could use HIBERNATE or ESTOP/ESUSPEND? I think this type of function could be called in cases other than low voltage.
Regarding the additional communication channel, as the proposal states, I'm also wary of making any particular communication channel a requirement for OEMs. I don't think the communication method has to be tied to this functionality, so I agree with the direction @yang1070 mentioned: we should seek to make this more generic or have the communication method be defined by the OEM.
username_2: After a quick test, this also works in stopping and resuming core
Start Core. Find the PID of Core (mine was 33230).
This will stop core.
```
$ kill -STOP 33230
```
This will resume core to the same place before the stop signal.
```
$ kill -CONT 33230
```
username_3: I like Jack's idea of using user-defined signals. Our preliminary analysis indicates that this is possible. We are doing some more analysis to check whether the same mechanism can be extended to other signals/messages as well.
username_0: The Steering Committee voted to defer this proposal, keeping it in review until our meeting on 2018-05-29, to allow more time for SDLC Members to conduct additional testing and analysis of the alternative option described by username_2: using signals to stop/resume core in the event of low voltage.
username_3: After analysis, we would like to propose the use of 3 signals of the range SIGRTMIN - SIGRTMAX for LowVoltage, WakeUp and IgnitionOff.
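For illustration only — the actual SDL Core implementation is not shown here, and the exact signal offsets below are an assumption — reserving three real-time signals looks roughly like this, sketched with Python's `signal` module on Linux:

```python
import signal

# Hypothetical assignment of three real-time signals to the three events.
SIG_LOW_VOLTAGE  = signal.SIGRTMIN      # suspend, consuming as little as possible
SIG_WAKE_UP      = signal.SIGRTMIN + 1  # resume normal operation
SIG_IGNITION_OFF = signal.SIGRTMIN + 2  # regular shutdown path

def on_low_voltage(signum, frame):
    pass  # stop transports and freeze activity

def on_wake_up(signum, frame):
    pass  # restore transports and resume

def on_ignition_off(signum, frame):
    pass  # persist state and exit cleanly

signal.signal(SIG_LOW_VOLTAGE, on_low_voltage)
signal.signal(SIG_WAKE_UP, on_wake_up)
signal.signal(SIG_IGNITION_OFF, on_ignition_off)
```

The HMI side would then signal Core's PID, much like the `kill -STOP` / `kill -CONT` experiment above, except with handlers that Core itself controls.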
Status: Issue closed
username_0: The Steering Committee has voted to accept this proposal with revisions. The revisions will include the use of 3 signals of the range SIGRTMIN - SIGRTMAX for LowVoltage, WakeUp and IgnitionOff - in place of using mqueue messages.
username_0: @username_4 please advise when a new PR has been entered to update the proposal to reflect the agreed upon revisions. I'll then merge the PR so the proposal is up to date, and enter issues in the respective repositories for implementation. Thanks! |
spring-projects/spring-boot | 1089794747 | Title: JobLauncherCommandLineRunner missing in spring boot autoconfigure 2.6.2
Question:
username_0: I have migrated my application to Spring Boot parent 2.6.2. On execution I am getting the exception below. I dug further and found that the class JobLauncherCommandLineRunner is missing from spring-boot-autoconfigure 2.6.2. Can you please help me fix this issue at the earliest?
Why does the application look for a batch class in my real-time application?
```
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-12-28 05:52:49.995 ERROR 16140 --- [ main] o.s.b.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryConfiguration$EmbeddedTomcat': Initialization of bean failed; nested exception is java.lang.IllegalArgumentException: warning no match for this type name: org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner [Xlint:invalidAbsoluteTypeName]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:163) ~[spring-boot-2.6.2.jar:2.6.2]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:577) ~[spring-context-5.3.14.jar:5.3.14]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:730) [spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:412) [spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:302) [spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1301) [spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1290) [spring-boot-2.6.2.jar:2.6.2]
at com.prudential.policypackage.PolicyPackageApplication.main(PolicyPackageApplication.java:14) [classes/:?]
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryConfiguration$EmbeddedTomcat': Initialization of bean failed; nested exception is java.lang.IllegalArgumentException: warning no match for this type name: org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner [Xlint:invalidAbsoluteTypeName]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:628) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:410) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.getWebServerFactory(ServletWebServerApplicationContext.java:217) ~[spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.createWebServer(ServletWebServerApplicationContext.java:180) ~[spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:160) ~[spring-boot-2.6.2.jar:2.6.2]
... 8 more
Caused by: java.lang.IllegalArgumentException: warning no match for this type name: org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner [Xlint:invalidAbsoluteTypeName]
at org.aspectj.weaver.tools.PointcutParser.parsePointcutExpression(PointcutParser.java:319) ~[aspectjweaver-1.9.7.jar:?]
at org.springframework.aop.aspectj.AspectJExpressionPointcut.buildPointcutExpression(AspectJExpressionPointcut.java:227) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.aspectj.AspectJExpressionPointcut.obtainPointcutExpression(AspectJExpressionPointcut.java:198) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.aspectj.AspectJExpressionPointcut.getClassFilter(AspectJExpressionPointcut.java:177) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:226) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:289) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.support.AopUtils.findAdvisorsThatCanApply(AopUtils.java:321) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findAdvisorsThatCanApply(AbstractAdvisorAutoProxyCreator.java:128) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:97) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:78) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:339) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:291) ~[spring-aop-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:455) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1808) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:620) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:410) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213) ~[spring-beans-5.3.14.jar:5.3.14]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.getWebServerFactory(ServletWebServerApplicationContext.java:217) ~[spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.createWebServer(ServletWebServerApplicationContext.java:180) ~[spring-boot-2.6.2.jar:2.6.2]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:160) ~[spring-boot-2.6.2.jar:2.6.2]
... 8 more
```
Status: Issue closed
Answers:
username_1: `JobLauncherCommandLineRunner` is deprecated since Spring Boot 2.3 (#19442) in favor of `JobLauncherApplicationRunner`.
username_0: Then why are spring-aop-5.3.14 and aspectjweaver-1.9.7.jar referencing JobLauncherCommandLineRunner?
username_1: Because something in your application refers to it and shouldn't. Please take the time to review your own arrangement.
username_0: If I reference the older version (2.5.2) of spring-boot-autoconfigure and remove the 2.6.2 version, then I don't see this error.
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-autoconfigure</artifactId>
    <version>2.5.2</version>
</dependency>
```
username_0: My application is using a custom logger framework with Spring AOP.
username_1: The class has been removed in 2.6.x so I am not surprised this doesn't cause a problem in 2.5.x. If you're looking for support upgrading to 2.6.x, then please share a small sample that we can run ourselves. You can do so by attaching a zip to this issue or sharing a link to a GitHub repository. |
jlippold/tweakCompatible | 574033001 | Title: `AwesomePageDots` working on iOS 13.2.3
Question:
username_0: ```
{
"packageId": "com.shiftcmdk.awesomepagedots",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.shiftcmdk.awesomepagedots",
"deviceId": "iPhone11,8",
"url": "http://cydia.saurik.com/package/com.shiftcmdk.awesomepagedots/",
"iOSVersion": "13.2.3",
"packageVersionIndexed": true,
"packageName": "AwesomePageDots",
"category": "Tweaks",
"repository": "shiftcmdk",
"name": "AwesomePageDots",
"installed": "1.1.3",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.shiftcmdk.awesomepagedots",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Animated page control for the home screen.",
"latest": "1.1.3",
"author": "shiftcmdk",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
tlienart/Franklin.jl | 792590208 | Title: Support Jenkins themes in Franklin
Question:
username_0: I know this may not be possible or even useful to most, but given how popular Jenkins is, it would seemingly make sense to support Jenkins themes in some way.
Answers:
username_1: Right so I should probably pin an issue somewhere but for new themes basically the procedure is as follows:
* someone suggests a theme they would like to use
* we provide a draft port of the theme
* we ask the someone to use the port and ping back and forth with issues so that we can polish the theme together
* theme is released for everyone
There's three currently "draft-mode" theme like this:
- https://github.com/username_1/FranklinTemplates.jl/issues/63
- https://github.com/username_1/FranklinTemplates.jl/issues/112
- https://github.com/username_1/FranklinTemplates.jl/issues/116
Unfortunately the requesters seem quite busy so for now these are a bit slow.
In short: porting theme is relatively easy to do, I'm happy to help, but I do want people to help back. If someone wants a new theme or to start a website with a given theme, I will help them *quite a lot* but the counterpart is that they help me polish the port for others to use.
username_0: You're a rockstar, thank you! I'll wait until I need this more urgently but can definitely help at some point. |
socketio/socket.io | 894319275 | Title: Upgraded to 4.x and objects in socket.request.session no longer exists
Question:
username_0: **Describe the bug**
`socket.request.session` is always a new session even after declaring `req.session.user` with a value.
**To Reproduce**
Setup Socket.io with `express-session` middleware.
Socket.IO server version: `^4.1.1`
*Server*
```
import { Server, Socket } from 'socket.io';
import session from 'express-session';
const io = new Server(http.createServer(/* express app */), { /* cors */ });
const wrapper = (middleware: any) => (socket: Socket, next: any) => middleware(socket.request, {}, next);
io.use(wrapper(session(/* session configs */)));
io.on('connection', (socket: Socket) => {
const req = socket.request
console.log(req.session) // Typescript will warn that session is not in IncomingMessage
// console.log output shows no user
/* Session {
cookie: {
path: '/',
_expires: 2021-05-18T13:06:36.146Z,
originalMaxAge: 3600000,
httpOnly: true,
domain: undefined
}
} */
});
app.use((req, resp, next) => {
console.log(req.session) // user is in session
req.io = io;
next();
});
app.post('/login', (req, resp, next) => {
const user = // login user
req.session.user = user
});
// server listen and stuff
```
Socket.IO client version: `^4.1.1`
*Client*
```js
import { io } from "socket.io-client";
const socket = io("http://localhost", {});
socket.on("connect", () => {
console.log(`connect ${socket.id}`);
});
socket.on("disconnect", () => {
console.log("disconnect");
});
```
**Expected behavior**
`socket.request.session` should have `user` property in it when initializing session in express middleware (eg. - `req.session.user = user`)
**Platform:**
- Device: Desktop and Mobile
- OS: Windows 10 and Android 10
**Additional context**
Answers:
username_0: Downgraded back to 2.4.1 (for the 5th time), changed the imports from named to default and `user` is in `socket.request.session` again.
Not upgrading ever again.
username_1: I'm having the similar issue with the following versions:
express 4.17.1
express-session 1.17.2
socket.io 3.1.2 |
otac0n/Pegasus | 456877875 | Title: Extension for VS 2019
Question:
username_0: Are there any plans to update the extension for VS 2019?
Answers:
username_1: im interested too
username_2: I am also in trouble.
I cannot use the extension for Visual Studio 2019.
username_1: @username_2 i use a little Trick until the extension will be upgraded. I use VS Code with pegjs extension. It has SyntaxHighlighting and kind of intellisense
username_3: this looks nice however I use visual studio 2019 too |
kumahq/kuma | 1080788180 | Title: Support CircuitBreaker on ExternalServices when using ZoneEgress
Question:
username_0: ### Description
ZoneEgress changes the way CircuitBreakers work for external services. We should ensure this still works.
https://docs.google.com/document/d/1CLh-JGRiviijNv<KEY>Ek7_X<KEY>/edit# |
stiletto/bnw | 161405808 | Title: blacklist on the main page
Question:
username_0: [command_show.py](https://github.com/username_1/bnw/blob/master/bnw/handlers/command_show.py)
line 159
```
...
parameters['user']['$nin'] = list(bl['tag'])
...
```
that's a typo, right? apparently that's why blacklisted users are visible on the main page
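The presumed fix — hedged, since commit f8ba4bb itself is not shown here — is to filter by the blacklist's user entries rather than its tags:

```python
# Presumed intent (not verified against f8ba4bb): exclude blacklisted
# *users*, not tags, from the main-page query.
parameters['user']['$nin'] = list(bl['user'])  # was: list(bl['tag'])
```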
Answers:
username_1: fixed in f8ba4bb
Status: Issue closed
|
gridap/MiniQhull.jl | 817096650 | Title: Implement `delaunay!`, in-place version of `delaunay`
Question:
username_0: Now, `delaunay` allocates the output. It would be nice to have the corresponding in-place version `delaunay!`, where the user provides the output. Apart from performance reasons, the in-place version would allow the user to select the container type (as long as it has the correct memory layout).
Once `delaunay!` is available, `delaunay` can be implemented by calling `delaunay!`.
Answers:
username_0: Hi @kahaaga! If you want to further contribute to the project, this would be a nice thing to have.
xmonad/xmonad-contrib | 1063887983 | Title: Build error with failed parsing
Question:
username_0: ### Problem Description
Arch Linux build error.
I'm just trying to do a regular update, but the build fails with unexpected input:
```
==> Starting build()...
Warning: xmonad-contrib.cabal:65:20:
unexpected 'd'
expecting space, white space, opening paren, operator, comma or end of input
Setup.lhs: Failed parsing "./xmonad-contrib.cabal".
==> ERROR: A failure occurred in build().
Aborting...
```
### Steps to Reproduce
Update the git version with a regular paru update.
### Configuration File
It's not a problem in config file.
### Checklist
- [x] I've read [CONTRIBUTING.md](https://github.com/xmonad/xmonad/blob/master/CONTRIBUTING.md)
- I tested my configuration
- [x] With `xmonad` version XXX (commit XXX if using git)
- [x] With `xmonad-contrib` version XXX (commit XXX if using git)
Answers:
username_1: This seems like an error with the AUR version of `xmonad-contrib`, HEAD builds fine here.
Judging from the site [here](https://aur.archlinux.org/packages/xmonad-contrib-git/) they need to add `deepseq` to the dependencies (I don't know what it's called on Arch).
Status: Issue closed
|
topcoder-platform/topcoder-x-ui | 922098666 | Title: [$100] Topcoder-X deployment debugging help
Question:
username_0: As discussed on Slack - help with debugging broken indices.
Answers:
username_0: Challenge https://www.topcoder.com/challenges/932b7083-e7f7-4175-bbbd-53a7908056f3 has been created for this ticket.<br/><br/>```This is an automated message for ghostar2020 via Topcoder X```
username_0: Challenge https://www.topcoder.com/challenges/932b7083-e7f7-4175-bbbd-53a7908056f3 has been assigned to afrisalyp.<br/><br/>```This is an automated message for ghostar2020 via Topcoder X```
Status: Issue closed
username_0: Payment task has been updated: https://www.topcoder.com/challenges/932b7083-e7f7-4175-bbbd-53a7908056f3
*Payments Complete*
Winner: afrisalyp
Copilot: ghostar2020
Challenge `932b7083-e7f7-4175-bbbd-53a7908056f3` has been paid and closed.<br/><br/>```This is an automated message for ghostar2020 via Topcoder X``` |
romanz/electrs | 527962213 | Title: Bitcoin update v0.19.0.1 breaks electrs
Question:
username_0: I have updated Bitcoin Core to the latest version v0.19.0.1.
And now my log is filled with these errors:
```
Nov 25 09:33:51 HC1 electrs[26965]: 2019-11-25T09:33:51.796+00:00 - TRACE - RPC Request("{\"jsonrpc\": \"2.0\", \"method\": \"mempool.get_fee_histogram\", \"id\": 108}\n")
Nov 25 09:33:51 HC1 electrs[26965]: 2019-11-25T09:33:51.797+00:00 - TRACE - RPC Request("{\"jsonrpc\": \"2.0\", \"method\": \"blockchain.estimatefee\", \"id\": 109, \"params\": [25]}\n")
Nov 25 09:33:51 HC1 electrs[26965]: 2019-11-25T09:33:51.797+00:00 - TRACE - RPC Request("{\"jsonrpc\": \"2.0\", \"method\": \"blockchain.estimatefee\", \"id\": 110, \"params\": [10]}\n")
Nov 25 09:33:51 HC1 electrs[26965]: 2019-11-25T09:33:51.797+00:00 - TRACE - RPC Request("{\"jsonrpc\": \"2.0\", \"method\": \"blockchain.estimatefee\", \"id\": 111, \"params\": [5]}\n")
Nov 25 09:33:51 HC1 electrs[26965]: 2019-11-25T09:33:51.797+00:00 - TRACE - RPC Request("{\"jsonrpc\": \"2.0\", \"method\": \"blockchain.estimatefee\", \"id\": 112, \"params\": [2]}\n")
Nov 25 09:33:53 HC1 electrs[26965]: 2019-11-25T09:33:53.144+00:00 - TRACE - RPC PeriodicUpdate
Nov 25 09:33:58 HC1 electrs[26965]: 2019-11-25T09:33:58.442+00:00 - TRACE - RPC PeriodicUpdate
Nov 25 09:34:04 HC1 electrs[26965]: 2019-11-25T09:34:04.010+00:00 - TRACE - RPC PeriodicUpdate
```
However, although Electrum still connects, it does not seem to pick up the new transactions.
Answers:
username_0: OK, sorry, this has been a false alarm.
These messages seem to be the normal function of Electrs.
Downgrading to bitcoin v0.18.1 did not make a difference and transactions are being registered.
Back on Bitcoin Core v0.19.0.1 and working.
Closing now.
Status: Issue closed
|
lesgourg/class_public | 363202285 | Title: Calling GSL functions from CLASS
Question:
username_0: Dear developers,
Quick question. Is there a way to call special functions from external libraries such as GSL from CLASS ? What kind of modifications should be done in the Makefile ?
Many thanks in advance,
Status: Issue closed
|
bytedance/ps-lite | 911237009 | Title: error when send multiple keys in one message
Question:
username_0: I want to apply the RDMA version of ps-lite to the sparse case, in which one message contains multiple keys and multiple values. However, an error occurs: the worker sends 100 keys, but the server only receives 1 key. I wonder whether this implementation only works in the dense case, such as test_benchmark.cpp and the byteps package, in which one message contains only one key and many values. Thank you.
mozilla/addons-server | 242430389 | Title: "View Mobile Site" is not working from themes categories pages
Question:
username_0: Steps to reproduce:
1. Load Themes homepage on you device https://addons.allizom.org/en-US/android/themes/
2. Go to any theme category page i.e. https://addons.allizom.org/en-US/android/themes/abstract/
3. Click "View classic desktop site"
4. Click " View Mobile Site"
Expected results:
The mobile page is loaded again.
Actual results:
Nothing happens.
Notes/Issues:
- issue is not reproducing for extensions
- nothing displayed in the console
- issue is also reproducing while using new desktop pages
Verified on FF54(Android 7.1.2, Win 7). Reproducible around all AMO servers.
Video fro this issue:

Originally filled at https://github.com/mozilla/addons-frontend/issues/2786
Answers:
username_1: I think it's because addons-frontend is capturing:
https://addons.allizom.org/en-US/android/themes/abstract/
but not
https://addons.allizom.org/en-US/android/themes/abstract
username_2: Shoot, okay, we'll need to redirect. I thought we had a bug open for that... I'll investigate.
username_3: @username_2 there's the ops-managed nginx config which defines which URLs addons-frontend can serve if the mamo cookie is set or if it's a mobile UA. If a URL isn't defined you get routed to addons-server.
I think there's a couple of options here:
* Make addons-server and addons-frontend use the same url by changing addons-server
* make sure the old non-slash url redirects correctly
Alternatively change addons-frontend's url config:
* File a bug to have ops update the config to include routing the non-slash variant to addons-frontend, and test on -dev
* Addons-frontend would need to handle 301'ing missing slash maybe it does that already?
username_2: I think the front end already does handle it (search for trailingSlashesMiddleware) but the nginx updates removed its ability to do so. The second one could work pretty easily.
username_1: Can the nginx change be to just capture everything starting with /themes/ and send it to addons-frontend?
username_3: Yep the second option should cover that.
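A hypothetical illustration (written as a Python regex, not the actual nginx config) of the routing rule being discussed — matching the category URL with or without the trailing slash:

```python
import re

# The optional trailing "/?" is the whole point of the fix.
route = re.compile(r"^/en-US/android/themes/[^/]+/?$")

assert route.match("/en-US/android/themes/abstract/")  # currently captured
assert route.match("/en-US/android/themes/abstract")   # currently falls through
```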
username_4: I think this can also be closed by #8115
Status: Issue closed
username_4: Verified as fixed on AMO-dev with FF59, Win10x64 and Android 7.0
Also see results from #7524 |
pradeepkoneti/Softwareassurance | 718741621 | Title: Team Motivation
Question:
username_0: Initially, we had a brainstorming session to identify the assurance cases that fall under our system of interest. In the team meeting we decided that everyone should come up with two different assurance cases based on our system of interest. The team posted all the assurance cases in GitHub and, after discussing the scope of each one, decided to go with five different assurance cases by eliminating duplication and identifying the important claims. We narrowed them down to five significant assurance cases, proactively chose one assurance case each, and decided that each of us would work on one top claim from our discussion.

At our weekly meeting we presented the cases to the team, and after each presentation every team member suggested changes to the claims, rebuttals, and sub-claims we had presented. Later, after receiving valuable comments from teammates, we met with our professor and understood that we should not have repetitive scenarios in our assurance cases; we also identified a few issues with claims and sub-claims and noted that a sub-claim should be strong enough to hold up the top claim. As the sub-claims of one case overlapped with those of other cases, we decided to keep each sub-claim with the top claim it suits best. One more thing to add: even though the team followed the noun-plus-verb pattern for the claims, the statements in the sub-claims were lengthier. It was a little challenging for us to modify the statements in all claims and sub-claims to support the top claims. Based on these challenges, we shared individual thoughts on the claims in all cases and worked in separate breakout sessions on the assurance cases that needed improvement. This approach made it easy to carry out the improvements with all the necessary modifications.

As mentioned above, it was challenging for the entire team to maintain strong sub-claims that support the top claims while confining them to the scope of how Magento handles those cases, but the breakout sessions helped us solve that issue. Moving forward, when someone gets blocked on a specific thing, team members can ask their teammates about the problem in the group project's communication channel, so that everyone is notified and can add valuable comments, which removes the issues the team faces.

Overall, this assignment taught us a lot more about claims and the evidence that supports them, and it helps our team stay organized. Regarding individual contributions, we are in agreement that everyone has been open to receiving responsibilities and successful at meeting the deadlines.<issue_closed>
Status: Issue closed |
CityOfZion/neon-wallet | 318589179 | Title: Calculating claimable GAS failed
Question:
username_0: ## PLEASE NOTE
Can't claim my GAS via current Neon Wallet, please see attached image for error:
<img width="472" alt="cannot_claim_gas" src="https://user-images.githubusercontent.com/1400300/39390509-4c9dc1f8-4a4a-11e8-95e6-90f4775db226.png">
Current neon wallet version: **0.2.2**
Answers:
username_1: I have the same issue with 0.2.3 and nano s
username_2: I have the same issue with 0.2.3, with the error message "Calculating claimable Gas failed". How do I fix this problem? Any ideas?
username_3: Duplicate of #742.
Status: Issue closed
|
christyc1129/startbootstrap-grayscale | 533373575 | Title: Turn.js unavailable.
Question:
username_0: Turn.js cannot be used in the website, so the book cannot be placed in it.
Answers:
username_0: Changed to using an online application to turn the book into a flipping-page format, then embedded it into the website using the link provided by the service provider.
https://fliphtml5.com/fbein/ymch
Status: Issue closed
p12tic/libsimdpp | 559845718 | Title: Docs issue: Dynamic Dispatch Example + CMake
Question:
username_0: The dynamic dispatch example, as [described in documentation](https://p12tic.github.io/libsimdpp/doc/html/index.html) seems to work using the [Makefile approach](https://github.com/p12tic/libsimdpp/tree/master/examples/dynamic_dispatch).
However, with a CMake file, using the appropriate CMake module, the build fails with error:
`main.cpp:(.text+0x5): undefined reference to print_arch()'`
I've made a barebones repository that implements only the dynamic dispatch built with CMake.
It is [available here](https://github.com/username_0/libsimdpp_cmake_mwe).
The CMake documentation was generated in 2013, so it's possible that it is no longer appropriate for the current version of the project.
Any pointers on getting this working for CMake? It would really help me with integrating into my projects.
Answers:
username_1: Added a PR to your repo that works on my machine and on CI:
https://github.com/username_0/libsimdpp_cmake_mwe/pull/1 |
cdnjs/cdnjs | 131396404 | Title: [Request] Add ColorRotator.js
Question:
username_0: **Library name:** ColorRotator.js
**Git repository url:** https://github.com/askupasoftware/color-rotator
**License(s):** GNU GENERAL PUBLIC LICENSE
**Official homepage:** http://products.askupasoftware.com/color-rotator/
Answers:
username_1: @username_0 we need a higher popularity to add a new lib, please read our document first. Feel free to ping me if I miss anything.
Status: Issue closed
|
jimboca/camera-polyglot | 211719361 | Title: NameError: global name 'param' is not defined
Question:
username_0:
```
ERROR [03-02-2017 22:29:15] polyglot.nodeserver_manager: camera: Error handling cmd in function on_cmd: NameError("global name 'param' is not defined",)
Traceback (most recent call last):
  File "/home/pi/development/Polyglot/polyglot/nodeserver_api.py", line 1419, in _recv
    success.append(fun(**data))
  File "/home/pi/development/Polyglot/polyglot/nodeserver_api.py", line 61, in auto_request_report_wrapper
    success = fun(*args, **kwargs)
  File "/home/pi/development/Polyglot/polyglot/nodeserver_api.py", line 1137, in on_cmd
    command, value=value, cmd=command, uom=uom, **kwargs)
  File "/home/pi/development/Polyglot/polyglot/nodeserver_api.py", line 137, in run_cmd
    success = fun(self, **kwargs)
  File "/home/pi/development/camera-polyglot/camera_nodes/FoscamHD2.py", line 329, in _set_irled_state
    self.parent.send_error("_set_irled_state failed to set %s=%s" % (param, value))
NameError: global name 'param' is not defined
```
KappaDistributive/rs2048 | 395753185 | Title: Use a pseudo random number generator instead of ran.rs
Question:
username_0: Currently, a randomly generated but static array is used to generate new tiles. This should be replaced with an honest PRNG.
Answers:
username_1: This is a good first issue, I think. Use the `rand` crate, which should work great for this.
username_0: For reference: This is the implementation in the CLI branch:
```rust
use rand::{thread_rng, Rng}; // module-level import needed for thread_rng()/gen_range below

/// Fill a new random cell randomly with either 2 or 4
pub fn generate_new_cell(&mut self) {
let mut candidates: Vec<(usize, usize)> = Vec::new();
for y in 0..self.size {
for x in 0..self.size {
if self.board.get_state(x, y) == 0 {
candidates.push((x, y));
}
}
}
let candidates_len = candidates.len();
if candidates_len == 0 {
panic!("Game has ended!");
} else {
let mut rng = thread_rng();
let ran: usize = rng.gen_range(0, candidates_len);
let (x, y) = candidates[ran];
let rad: f32 = rng.gen();
match rad < 0.9 {
true => self.board.set_state(x, y, 2),
false => self.board.set_state(x, y, 4),
}
}
}
```
Status: Issue closed
|
expo/expo | 939992195 | Title: Eas Build Failed for ios with error : folly/dynamic.h' file not found
Question:
username_0:
```
 | ^ 'folly/dynamic.h' file not found
11 | #include <jsi/jsi.h>
12 |
13 | namespace facebook {
› Packaging react-native Pods/Yoga » libYoga.a
› Packaging react-native Pods/React-jsinspector » libReact-jsinspector.a
› Compiling react-native Pods/React-jsi » JSCRuntime.cpp
› Executing [CP] Copy XCFrameworks
› Executing expo-file-system Pods/EXFileSystem » [CP] Copy XCFrameworks
› Preparing react-native Pods/React-Core-AccessibilityResources » ResourceBundle-AccessibilityResources-React-Core-Info.plist
› Copying node_modules/react-native/React/AccessibilityResources/en.lproj ➜ Users/expo/Library/Developer/Xcode/DerivedData/TableDiscover-cxmyhmtlknhazugkisymzaqmewlf/Build/Intermediates.noindex/ArchiveIntermediates/TableDiscover/IntermediateBuildFilesPath/UninstalledProducts/iphoneos/AccessibilityResources.bundle/en.lproj
› Compiling unimodules-permissions-interface Pods/UMPermissionsInterface » UMPermissionsMethodsDelegate.m
› Compiling unimodules-permissions-interface Pods/UMPermissionsInterface » UMPermissionsInterface-dummy.m
› Packaging unimodules-app-loader Pods/UMAppLoader » libUMAppLoader.a
› Creating Pods/Stripe-Stripe3DS2 » Stripe3DS2.bundle
› Creating Pods/Stripe-Stripe » Stripe.bundle
▸ ** ARCHIVE FAILED **
▸ The following build commands failed:
▸ CompileC /Users/expo/Library/Developer/Xcode/DerivedData/TableDiscover-cxmyhmtlknhazugkisymzaqmewlf/Build/Intermediates.noindex/ArchiveIntermediates/TableDiscover/IntermediateBuildFilesPath/Pods.build/Release-iphoneos/React-jsi.build/Objects-normal/arm64/JSIDynamic.o /Users/expo/workingdir/build/node_modules/react-native/ReactCommon/jsi/jsi/JSIDynamic.cpp normal arm64 c++ com.apple.compilers.llvm.clang.1_0.compiler
▸ (1 failure)
** ARCHIVE FAILED **
The following build commands failed:
CompileC /Users/expo/Library/Developer/Xcode/DerivedData/TableDiscover-cxmyhmtlknhazugkisymzaqmewlf/Build/Intermediates.noindex/ArchiveIntermediates/TableDiscover/IntermediateBuildFilesPath/Pods.build/Release-iphoneos/React-jsi.build/Objects-normal/arm64/JSIDynamic.o /Users/expo/workingdir/build/node_modules/react-native/ReactCommon/jsi/jsi/JSIDynamic.cpp normal arm64 c++ com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
Exit status: 65
+-------------+-------------------------+
| Build environment |
+-------------+-------------------------+
| xcode_path | /Applications/Xcode.app |
| gym_version | 2.185.1 |
| sdk | iPhoneOS14.5.sdk |
+-------------+-------------------------+
Looks like fastlane ran into a build/archive error with your project
It's hard to tell what's causing the error, so we wrote some guides on how
to troubleshoot build and signing issues: https://docs.fastlane.tools/codesigning/getting-started/
Before submitting an issue on GitHub, please follow the guide above and make
sure your project is set up correctly.
fastlane uses `xcodebuild` commands to generate your binary, you can see the
the full commands printed out in yellow in the above log.
Make sure to inspect the output above, as usually you'll find more error information there
[stderr] [!] Error building the application - see the log above
Error: Fastlane build failed with unknown error. Please refer to the "Run fastlane" and "Xcode Logs" phases.
Fastlane errors in most cases are not printed at the end of the output, so you may not find any useful information in the last lines of output when looking for an error message.
```
### Managed or bare workflow? If you have `ios/` or `android/` directories in your project, the answer is bare!
bare
### What platform(s) does this occur on?
iOS
### SDK Version (managed workflow only)
41
### Environment
Expo CLI 4.7.3 environment info:
[Truncated]
```
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'
platform :ios, '11.0'
target 'TableDiscover' do
use_unimodules!
config = use_native_modules!
use_react_native!(:path => config["reactNativePath"])
# Uncomment to opt-in to using Flipper
#
# if !ENV['CI']
# use_flipper!('Flipper' => '0.75.1', 'Flipper-Folly' => '2.5.3', 'Flipper-RSocket' => '1.3.1')
# post_install do |installer|
# flipper_post_install(installer)
# end
# end
end
``` |
NeuralEnsemble/python-neo | 380363512 | Title: Epoch.time_slice() is currently broken
Question:
username_0: currently, `Epoch.time_slice()` is broken:
```
import neo
import quantities as pq
Epoch = neo.core.Epoch(times=[1,2,3]*pq.s,durations=[1,1,1]*pq.ms,labels=['a','b','c'])
Epoch.time_slice(*(1,2)*pq.s)
## -- End pasted text --
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-d675ce8b57e8> in <module>()
2 import quantities as pq
3 Epoch = neo.core.Epoch(times=[1,2,3]*pq.s,durations=[1,1,1]*pq.ms,labels=['a','b','c'])
----> 4 Epoch.time_slice(*(1,2)*pq.s)
~/Dropbox/python/git_repos/python-neo/neo/core/epoch.py in time_slice(self, t_start, t_stop)
261
262 indices = (self >= _t_start) & (self <= _t_stop)
--> 263 new_epc = self[indices]
264 return new_epc
265
~/Dropbox/python/git_repos/python-neo/neo/core/epoch.py in __getitem__(self, i)
177 obj._copy_data_complement(self)
178 obj.durations = self.durations[i]
--> 179 obj.labels = self.labels[i]
180 return obj
181
TypeError: only integer scalar arrays can be converted to a scalar index
```
`self.labels` is a list, and indexing it with a `np.array()` causes the error. I am making a pull request and will reference this issue.
Answers:
username_1: Thanks for the report.
My guess is that it is already fix in this PR #472
Could test the branch related to this PR to see if the bug is fixed ?
Best
username_0: my bad. All of this error boils down to me passing the labels as a list of strings and not a `np.array` of strings. Apologies ...
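For anyone hitting the same traceback, a minimal sketch of the construction that avoids it — passing `labels` as a NumPy array so boolean-mask indexing works:

```python
import numpy as np
import quantities as pq
import neo

ep = neo.core.Epoch(
    times=[1, 2, 3] * pq.s,
    durations=[1, 1, 1] * pq.ms,
    labels=np.array(['a', 'b', 'c']),  # a plain list raises the TypeError above
)
ep.time_slice(1 * pq.s, 2 * pq.s)
```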
Status: Issue closed
|
gnwl/NotGrid | 264751702 | Title: FPS Issues
Question:
username_0: In the latest version I get major FPS issues (10-20 FPS in a 40-man raid). If I go back to a previous build it's almost fine.
Answers:
username_1: I noticed this myself tonight. I'm looking into it. It's to do with aura checking and gets exponentially worse for every buff/debuff a player has. I'm not surprised since I changed a lot of things to OnUpdate but not sure why symptoms weren't there before. I'm going to push an update that will remove aura checking for the time being.
username_1: I've rewritten it. Kind of went back to the way it was done way back on the first release. That does mean we might get back to some weird situations of certain auras not fading or showing depending on certain circumstances, but it seems to be pretty good from the testing I've done. I'll keep this issue open for a while.
Status: Issue closed
|
demeesterdev/terraform-provider-transip | 456301219 | Title: Add resource transip_domain with nameservers
Question:
username_0: <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* transip_domain
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #0000
Answers:
username_0: fixed with #7
Status: Issue closed
|
dotnet/csharplang | 340803577 | Title: Target-Typed New Idea
Question:
username_0: Hi All,
As I was reading:
https://github.com/dotnet/csharplang/blob/master/meetings/2018/LDM-2018-06-25.md
I had an idea regarding constructors and such. I generally really like using the initializer syntax because it gives me a clear view of the property->value mapping. I've wished this syntax was extended to regular constructors.
For example, given the following class definition:
```
public class Person {
public string FirstName {get; private set;}
public string LastName {get; private set;}
public int Age {get; set; }
public Person(string FirstName, string LastName){
this.FirstName = FirstName;
this.LastName = LastName;
}
public Person(string Name){
var Names = Name.Split(' ');
this.FirstName = Names.FirstOrDefault();
this.LastName = Names.LastOrDefault();
}
}
```
I want to be able to new up a person as follows:
````
var P1 = new Person {
Name = "<NAME>",
Age = 21,
};
//This would be the same as:
var P1 = new Person("<NAME>") {
Age = 21,
};
var P2 = new Person {
FirstName = "Bob",
LastName = "Smith",
Age = 21,
};
//This would be the same as:
var P2 = new Person("Bob", "Smith") {
Age = 21,
};
````
Inside of an initializer, I should be able to specify constructor parameters as if they were properties.
If I'm specifying constructor parameters, I must specify all the parameters for an overload, otherwise it is an error. I'm thinking something similar to the way that PowerShell does parameter sets would be really useful.
Answers:
username_1: Would this only apply when a constructor argument name matches a property name? Would that property have to be read-only?
username_0: The constructor arguments should not be required to match a property name. For example, I should be able to do the following:
````
public class Person {
public string FirstName {get; set;}
public string LastName {get; set;}
public int Age {get; set;}
public Person(string Name_First, string Name_Last){
this.FirstName = Name_First;
this.LastName = Name_Last;
}
}
...
var P = new Person {
Name_First = "Bob",
Name_Last = "Smith", //These two are required and get passed to the constructor.
Age = 21,
FirstName = "Bob" //This is redundant but it calls the property accessor.
};
````
Also, in the example above, the Name_First and Name_Last values would have to be specified since there is no default constructor.
Status: Issue closed
|
ARCJ-fc/the_jolly_scientist | 63047206 | Title: credentials.js
Question:
username_0: Did you mean to put the credentials.js file into the repo?
Status: Issue closed
Answers:
username_0: Still visible: https://github.com/ARCJ-fc/the_jolly_scientist/commit/2669d33a0b26c21566c71bc27ea3fb7b9960ae71
See https://help.github.com/articles/remove-sensitive-data/ |
daenny/climate_group | 622677069 | Title: Deprecation Warning: ClimateDevice is deprecated, modify ClimateGroup to extend ClimateEntity
Question:
username_0: Not a real issue as the integration works for me, but I encountered the following deprecation warning after upgrading to 110.1.
```
Logger: homeassistant.components.climate
Source: components/climate/__init__.py:547
Integration: Climate (documentation, issues)
First occurred: 7:30:44 PM (1 occurrences)
Last logged: 7:30:44 PM
ClimateDevice is deprecated, modify ClimateGroup to extend ClimateEntity
```
Related: https://github.com/username_1/climate_group/blob/master/custom_components/climate_group/climate.py#L96
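For reference, the fix on the integration side is essentially a rename of the base class; a minimal sketch (Home Assistant 0.110 introduced `ClimateEntity` as the replacement for `ClimateDevice`):
```python
# Deprecated (pre-0.110 style):
# from homeassistant.components.climate import ClimateDevice
# class ClimateGroup(ClimateDevice): ...

# Replacement:
from homeassistant.components.climate import ClimateEntity


class ClimateGroup(ClimateEntity):
    """Same implementation as before, only the base class changes."""
```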
Answers:
username_1: Should be solved by: https://github.com/username_1/climate_group/releases/tag/0.4.1-rc2
Status: Issue closed
|
Proteus-Eretes/hoesnelwasik_frontend | 438396844 | Title: Css styling with different phone widths
Question:
username_0: Styling is now very inconsistent.
Answers:
username_0: Might be good to have a look at how display desktop site option looks on the phone. It is a bit small but you do see many crews clearly in an overview so that is a big plus.
username_1: Is the styling still inconsistent? If so, I will take a look at it.
username_1: If you open the hamburger menu at the top left on your phone, the styling of "bekijk: Uitslagen, Loting" is a bit off. |
JasonZigelbaum/jqbx-issues | 383129143 | Title: [Android App] scroll down in chat moves the chat history to the up upon release
Question:
username_0: There is a bug when scrolling through the chat window in the android app.
When scrolling with a _fast_ finger movement to go to the bottom, the chat window actually goes to the top.
Thanks for the development of this app ! |
JetBrains/gradle-intellij-plugin | 381068319 | Title: Allow running tests with all other bundled plugins loaded
Question:
username_0: We've got a conflict between two plugins and it wasn't discovered in tests because the other plugin wasn't loaded (more info: [RIDER-21655](https://youtrack.jetbrains.com/issue/RIDER-21655)). Loading all bundled plugins in tests could prevent this.
Answers:
username_1: Somewhat related, it would be helpful to support a separate configuration for `testPlugins`, so that we can integration test interoperability between plugins which are not bundled, but should work together smoothly, cf. acejump/acejump#380. |
osmbe/working-group-bylaws | 212960348 | Title: Board of directors
Question:
username_0: As suggested by @marcu, let's drop the limit of 3 board members.
Any member can apply to become a board member. When there are more than three candidates for the board, the three candidates with the highest number of votes will become board members. Other candidates that get at least three first-choice votes are also elected.
sudo-project/sudo | 1012539830 | Title: wolfSSL Support
Question:
username_0: Hi @username_1 I'm a software engineer with wolfSSL, a lightweight TLS and cryptography library geared toward embedded systems. Additionally, we sell a commercial version of wolfSSL that's been FIPS-validated (https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/3389). We recently had a customer ask us to port sudo 1.9.5p2 so that it can use wolfSSL instead of OpenSSL or gcrypt. The patch for that work can be found here: https://github.com/wolfSSL/osp/pull/85/files.
Would you be interested in a pull request to add optional wolfSSL support for the latest sudo version? If so, I can adapt the 1.9.5p2 patch so that the changes apply cleanly onto the main branch. I just wanted to gauge interest before proceeding with a pull request. :) Thanks!
Answers:
username_1: Yes, please. You can also just replace the X509_FILETYPE_PEM with SSL_FILETYPE_PEM in the call to SSL_CTX_use_PrivateKey_file() which reduces the diff slightly. |
GoogleChrome/lighthouse | 740082578 | Title: <object> accessiblity audit does not consider alt attribute
Question:
username_0: <!-- Before creating an issue please make sure you are using the latest version and have checked for duplicate issues. -->
<!-- Before creating an Accessibility issue please test that it is reproducible upstream with axe (https://www.deque.com/axe/) first and file the issue there if necessary. -->
#### Provide the steps to reproduce
1. Run LH on https://shoogle.net/overview/
<!-- If your page is only local, or is liable to change, consider uploading a repro so that we can more easily debug the problem. Some services that will help are: https://jsbin.com/, https://surge.sh/ -->
#### What is the current behavior?
Accessibility audit `<object> does not have [alt] text` fails, although the relevant element has `[alt]` text.
When instead following the extra info at https://web.dev/object-alt/#how-to-add-alternative-text-to-lessobjectgreater-elements and putting the alt text as text content inside the `<object>`, the audit passes as expected.
#### What is the expected behavior?
Mentioned audit should pass with `[alt]` text.
#### Environment Information
* Affected Channels: DevTools
* Lighthouse version: 6.4.0
* Chrome version: Version 88.0.4315.5
* Node.js version: -
* Operating System: Debian
I would have time to help out fixing the issue.
Cheers
username_0
Answers:
username_1: Thanks for filing @username_0! I can't reproduce this with the latest axe version, so I'm guessing it's been fixed.
Should be addressed by updating to axe 4 in #11643
username_1: I made a note for us to double check this is fixed when we update, if it's not we'll come back here to reopen and discuss :)
https://github.com/GoogleChrome/lighthouse/issues/11643#issuecomment-724842442
Status: Issue closed
username_0: @username_1 Audit still fails for me using latest master on LH 7.0.0
username_1: Hmm @username_0 I can reproduce in latest axe now, but they have added guidance about why. It seems `alt` is not sufficient for `object` elements.

Seems like this is WAI then. If you think it's still not, then an issue with axe would be the appropriate next step. Not sure why I couldn't repro before, sorry!
username_1: Next steps for LH team: figure out if this is WAI and if it is, update the web.dev docs.
username_2: Just hit the same issue and did some investigating; I think that Lighthouse (well, axe) is correct to flag this, but the documentation could certainly do with flagging this reasonably common use-case of using `<object>` for loading SVGs.
This is fine:
```html
<object>
This is visible text
</object>
```
However this is **not** fine:
```html
<object type="image/svg+xml">
This text is not exposed to the accessibility tree
</object>
```
and neither is this:
```html
<object data="example.svg">
This text is also not exposed to the accessibility tree
</object>
```
In effect, setting the `type` or `data` of the `object` means anything in the element is no longer treated as plain text, and so it can't be used as the accessible name. You can see this is the case by looking at the accessibility tree in Chrome DevTools.
Therefore the last two examples **do** need an accessible name (unless the element is switched to being presentational with either `role="presentation"` or `aria-hidden="true"`), which should be provided by `aria-label`, `aria-labelledby` or `title` (note you cannot use the `alt` attribute on an `object` element). I can confirm that when adding these, the audit no longer fails.
@username_0 it looks like you've updated your object to be `aria-hidden` on your site:
```html
<object data="/svg/map.svg" type="image/svg+xml" id="world" alt="World map" aria-hidden="true"></object>
```
However it probably should be this:
```html
<object data="/svg/map.svg" type="image/svg+xml" id="world" aria-label="World map"></object>
```
username_3: Merging this into https://github.com/GoogleChrome/lighthouse/issues/6146
Status: Issue closed
|
asciidoctor/asciidoctor-reveal.js | 292212419 | Title: Setup instructions unclear
Question:
username_0: When following the README, the presentation is missing reveal.js if you do not clone the reveal.js framework form GitHub.
See the following PR: https://github.com/asciidoctor/asciidoctor-reveal.js/pull/180
I did not try the JS Setup
Answers:
username_1: Fixed by the merge of #180
Status: Issue closed
|
pivotal-cf/docs-rabbitmq-pcf | 656753351 | Title: Repeat this across 1.18.x versions
Question:
username_0: https://github.com/pivotal-cf/docs-rabbitmq-pcf/blob/2769da234312d97d251216adece6faeb73bddbb0/releases.html.md.erb#L616
Hi folks
The above warning isn't repeated across other 1.18 minor versions. When customers upgrade, they can [run into issues](https://pivotal.slack.com/archives/C0RDGG81Z/p1594743113093200) unless they read the release notes for this version.
Please can this be resolved?
Answers:
username_1: Closing as 1.18.x is EOGS
Status: Issue closed
|
facebook/folly | 257455349 | Title: folly::toJson reorders object order
Question:
username_0: I've been building a folly::dynamic object in a specific order and converting it to JSON using folly::toJson(). For some reason it reorders the fields, which doesn't matter for my parser, but is really inconvenient when I want to look through the JSON manually. I know you can sort the parsed JSON, but that's not really the same thing as keeping the original order.
Is there an obvious reason for this? (Perhaps I'm doing something wrong.)
Answers:
username_1: folly json is built on folly::dynamic which uses std::unordered_map under the hood.
Status: Issue closed
username_2: Neither of the two standard library map types preserves insertion order. There was no use-case for `folly::dynamic` requiring preservation of insertion order. So, as @username_1 notes, `folly::dynamic` just uses one of the standard library map types.
Closing, because this answers the original question as asked. |
sigp/lighthouse | 523224809 | Title: Tidy Eth2Config generation at runtime
Question:
username_0: ## Description
Presently, when the `lighthouse` binary is started an `Environment` is created which contains an `Eth2Config`.
Then, [this code](https://github.com/sigp/lighthouse/blob/bdae7e01a7d2b7cbc9bd760f139e4f1c8e3c4487/beacon_node/src/lib.rs#L59-L65) ignores the `environment.eth2_spec` and replaces the `Eth2Config` in it's local context.
This means that there's no guarantee of consistency of the `Eth2Config` from an `Environment`.
## Steps to resolve
- Pull out the code that generates the `Eth2Config` from CLI args (`ArgMatches`) (see [`config.rs`](https://github.com/sigp/lighthouse/blob/eth1/beacon_node/src/config.rs)) and place it in its own function in the `environment` crate.
## Notes
This issue is only valid after #542 is merged.
Answers:
username_1: Hi, I'm addressing this issue, will file a PR to close this. 😃
Status: Issue closed
|
coreos/fedora-coreos-tracker | 979496817 | Title: aws regions for aarch64 image upload
Question:
username_0: AWS doesn't have support for arm64 images in every region, but they are close. Should we upload to every region or put in the work to only upload to regions that have support?
With:
```bash
# list every region, then count the lines of output describing arm64-capable instance types in each
regions=$(aws ec2 describe-regions --all-regions --output=json | jq -r .Regions[].RegionName)
for region in $regions; do
    echo "######## $region"
    aws ec2 describe-instance-types --region=$region --filters Name=processor-info.supported-architecture,Values=arm64 | wc -l
done
```
I see:
```
$ bash /tmp/foo.sh
######## af-south-1
0
######## eu-north-1
1109
######## ap-south-1
1205
######## eu-west-3
1109
######## eu-west-2
1109
######## eu-south-1
1109
######## eu-west-1
2888
######## ap-northeast-3
0
######## ap-northeast-2
1109
######## me-south-1
325
######## ap-northeast-1
2545
######## sa-east-1
1109
######## ca-central-1
1109
######## ap-east-1
1109
######## ap-southeast-1
2234
######## ap-southeast-2
2234
######## eu-central-1
2234
######## us-east-1
2888
######## us-east-2
2888
######## us-west-1
2138
######## us-west-2
2888
```
So that only really leaves `ap-northeast-3` and `af-south-1` on the outside.
Status: Issue closed
Answers:
username_0: Agreed we'll upload to all regions like is done for x86_64. Will close this. |
google/ksp | 709633355 | Title: Lookup of constructor JVM signature throws exception if not found
Question:
username_0: Given this snippet
```kotlin
class Properties {
var a: Int = -1
var b: Int = -1
}
```
If one tries to look up the signature, KSP will throw an exception rather than just return null. I would probably expect that `KSClassDeclaration.primaryConstructor` should be null in this case
```kotlin
resolver.mapToJvmSignature(primaryConstructor)
```
```
e: java.lang.IllegalStateException: unexpected class: class com.google.devtools.ksp.symbol.impl.synthetic.KSConstructorSyntheticImpl
at com.google.devtools.ksp.processing.impl.ResolverImpl.resolveFunctionDeclaration(ResolverImpl.kt:264)
at com.google.devtools.ksp.processing.impl.ResolverImpl.mapToJvmSignature(ResolverImpl.kt:219)
```
Status: Issue closed
Answers:
username_0: What was the change with this one? I noticed the API still returns non-null, so I'm not sure what the expected behavior is now.
username_0: I checked in my project; it just returns the synthetic constructor 👍
cfloquetcapstone/CCC-410-Capstone-Design | 836004127 | Title: Sprint 3.1: Expand Data Collection, Test Accuracy, Implement API
Question:
username_0: For this sprint it's very important for me to expand my current dataset of images. Now that I have nailed down a solid process to take images with the new backlight and process them in a way that spits out a differential cropped image of just that piece, it should be incredibly accurate.
I've already submitted my paperwork and seller documents to Bricklink for approval to get access to their API, so until then I'm focusing on getting as many pictures as I can.
**Checklist (To Do):**
- [ ] Gather 15-20 pictures of each main color for each piece
- [ ] Train algorithm using newly gathered data, testing for overfitting
- [ ] Adjust training_steps, learning_rate to achieve 96-98% accuracy in predictions (see the sketch after this list)
- [ ] Begin working on API to fetch data about specific bricks or pieces
- [ ] Incorporate fetched data into the final UI that is presented to the user.
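A minimal sketch of how those two knobs could look in a Keras-style transfer-learning setup (the base model, image size, and class count are assumptions; the project's actual training script may differ):
```python
import tensorflow as tf

LEARNING_RATE = 0.01    # lower this if training is unstable or overfits
TRAINING_STEPS = 4000   # total optimization steps, i.e. epochs * steps_per_epoch in model.fit

# Frozen pretrained feature extractor plus a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freezing the base limits overfitting on a small dataset

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 piece classes, assumed
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```
|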
Tecnativa/doodba-copier-template | 670891555 | Title: how is the project's `odoo/custom/src/ssh` supposed to propagate inside the container if you only mount src?
Question:
username_0: https://github.com/Tecnativa/doodba-copier-template/blob/9a33b2e3adf4f98b2f93f02ee99189271f8a5461/setup-devel.yaml.jinja#L32
Answers:
username_0: Steps to reproduce:
invoke git-aggregate
Current behavior:
key_load_public: invalid format
Host key verification failed.
fatal: Could not read from remote repository.
inside the container root's .ssh folder contains empty id_rsa
desired behavior:
to use odoo/custom/ssh when dealing with private repositories
username_0: This https://github.com/Tecnativa/doodba-copier-template/pull/81 fixes it, but another problem occurs when running `invoke git-aggregate`: `Bad owner or permissions on /root/.ssh/config`. I only know a workaround for it (`chown root:root config`) and I'm not sure this is the proper way.
username_0: I've found out that there is
::
ONBUILD RUN mkdir -p /opt/odoo/custom/ssh \
&& ln -s /opt/odoo/custom/ssh ~root/.ssh \
&& chmod -R u=rwX,go= /opt/odoo/custom/ssh \
&& sync
so I should put my keys into `odoo/custom/ssh` before build, right?
username_0: or if I change the content of my odoo/custom/ssh folder - I should rebuild the image
username_0: ONBUILD COPY $LOCAL_CUSTOM_DIR /opt/odoo/custom
username_0: `ONBUILD RUN mkdir -p /opt/odoo/custom/ssh \`
It seems that this mkdir doesn't create anything because the dir is already there - we copy all $LOCAL_CUSTOM with ssh in it
username_1: For ssh keys and configs added after the initial build I do a force build that prevents it from using the cache. It does not pick up that the files have changed; I've never bothered investigating further as it's a very uncommon event for us.
Status: Issue closed
username_2: Indeed, that's the fix you're looking for, and the reason is explained by @username_1: it's very uncommon. Closing. |
perfsonar/mesh-config | 90762318 | Title: Ping tests don't support selecting the tool
Question:
username_0: As the title says, there is no option to select the BWCTL tool for ping tests, preventing regular owping from being an option.
Status: Issue closed
Answers:
username_0: I was incorrect, this is already supported, you just have to use force_bwctl_owamp with a perfsonarbuoy/owamp test. |
Azure/azure-cosmos-dotnet-v3 | 956601485 | Title: ReadManyItemsAsync with PartitionKey.None throws ArgumentException
Question:
username_0: **Describe the bug**
Using the `container.ReadManyItemsAsync(..)` function to retrieve multiple documents where the partition key is set to PartitionKey.None throws an ArgumentException with the message
`PartitionKey has fewer components than defined the collection resource.`
**To Reproduce**
Call ReadManyItemsAsync in a similar fashion to:
```csharp
var ids = new List<string> { "0" };
var results = await container.ReadManyItemsAsync(ids.ConvertAll(id => (id, PartitionKey.None)));
```
**Expected behavior**
No ArgumentException thrown.
**Actual behavior**
ArgumentException thrown with the following exception.ToString():
```
System.ArgumentException: PartitionKey has fewer components than defined the collection resource.
at Microsoft.Azure.Documents.Routing.PartitionKeyInternal.GetEffectivePartitionKeyString(PartitionKeyDefinition partitionKeyDefinition, Boolean strict)
at Microsoft.Azure.Cosmos.ReadManyQueryHelper.CreatePartitionKeyRangeItemListMapAsync(IReadOnlyList`1 items, CancellationToken cancellationToken)
at Microsoft.Azure.Cosmos.ReadManyQueryHelper.ExecuteReadManyRequestAsync[T](IReadOnlyList`1 items, ReadManyRequestOptions readManyRequestOptions, ITrace trace, CancellationToken cancellationToken)
at Microsoft.Azure.Cosmos.ContainerCore.ReadManyItemsAsync[T](IReadOnlyList`1 items, ITrace trace, ReadManyRequestOptions readManyRequestOptions, CancellationToken cancellationToken)
at Microsoft.Azure.Cosmos.ClientContextCore.RunWithDiagnosticsHelperAsync[TResult](ITrace trace, Func`2 task)
at Microsoft.Azure.Cosmos.ClientContextCore.OperationHelperWithRootTraceAsync[TResult](String operationName, RequestOptions requestOptions, Func`2 task, TraceComponent traceComponent, TraceLevel traceLevel)
at ...
```
**Environment summary**
SDK Version: 3.20.1
OS Version: Windows 10
**Additional context**
A call to `container.ReadItemAsync("0", PartitionKey.None)` works as expected.
Answers:
username_1: This appears to be a bug. It will be fixed in the next release which should be in the next few weeks.
Status: Issue closed
|
lukeautry/tsoa | 1059086837 | Title: Middleware: AWS API Gateway V2 Support
Question:
username_0: <!--- Provide a general summary of the issue in the Title above -->
## Sorting
- **I'm submitting a ...**
- [ ] bug report
- [x] feature request
- [ ] support request
- I confirm that I
- [x] used the [search](https://github.com/lukeautry/tsoa/search?type=Issues) to make sure that a similar issue hasn't already been submit
Replaces: https://github.com/lukeautry/tsoa/issues/971
Enables: https://github.com/lukeautry/tsoa/issues/1105
## Background
AWS APIGateway V2 is a high availability API as a Service that provides:
- Authentication
- Authorisation
- Routing
It has native support for uploading an OPENAPI declaration as shown in https://github.com/lukeautry/tsoa/issues/1012
## Proposal
This issue proposes supporting API Gateway V2 natively with the following API:
Generation of routes for a Lambda handler that supports TSOA
```json
{
"basePath": "/v1",
"entryFile": "./src/handler.ts",
"middleware": "api-gateway-v2"
}
```
Creation of the `src/handler.ts`
```ts
import { RegisteredRoutes } from './routes'
// A single lambda for all routes
export const handler = (event, context) => RegisteredRoutes
// A lambda solely for the get user route, which invokes the appropriate controller
const getUserHandler = (event, context) => RegisteredRoutes['GET /user/:userId']
```
The Lambda code above for NodeJS is deployed behind an AWS API Gateway V2 instance.
## Justification
API Gateway V2 backed by Lambda is a very popular stack for constructing high-availability (10,000 req/s) APIs. These APIs are free at low volume, making them a good candidate for start-ups and established players alike.
There are few existing frameworks designed to make:
[Truncated]
body?: string | undefined;
pathParameters?: APIGatewayProxyEventPathParameters | undefined;
isBase64Encoded: boolean;
stageVariables?: APIGatewayProxyEventStageVariables | undefined;
}
```
Response:
```ts
export interface APIGatewayProxyStructuredResultV2 {
statusCode?: number | undefined;
headers?: {
[header: string]: boolean | number | string;
} | undefined;
body?: string | undefined;
isBase64Encoded?: boolean | undefined;
cookies?: string[] | undefined;
}
```
Answers:
username_1: +1
Being able to automatically ship to Lambda/API Gateway would be awesome!
username_2: Seconding the above. Right now it looks like we'll be manipulating the YAML in Jenkins or Terraform during deploy.. which is exactly what the old system did that pushed us to move away from it 😂 Thanks for your work on this. |
missionpinball/mpf | 209933026 | Title: Add custom "code:" section to machine configs
Question:
username_0: This is to replace "scriptlets" (which is a dumb name). I guess there's no real need to get rid of scriptlets, since they don't hurt anything, but we should add a `code:` section to the machine config: users add their custom code via `module.Class`, put their modules in their machine folder's `code` folder, and we call their constructor and just pass it the machine.
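A minimal sketch of what such a user class could look like (file and class names here are made up for illustration):
```python
# <machine_folder>/code/my_custom.py, referenced in the machine config as:
#   code: my_custom.MyCustomCode
class MyCustomCode:

    def __init__(self, machine):
        # MPF calls this constructor and passes in the machine controller
        self.machine = machine
```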
Status: Issue closed |
DevSecOps-TTS/AlphaCourse | 669390083 | Title: Rgrace workflow runs on all PRS and pushes to master
Question:
username_0: Hint: https://github.com/DevSecOps-TTS/AlphaCourse/actions/runs/196490655
Answers:
username_1: I fixed it yesterday; maybe it didn't push up, but I changed it using the edit feature. I ran it and it worked fine. Not sure what happened.
|
sameersbn/docker-gitlab | 356187520 | Title: bundle exec rake gitlab:backup:create => backup directory is empty, where is ...gitlab_backup.tar file ?
Question:
username_0: Hi,
I execute:
```
# docker-compose exec gitlab bash
# su git
$ bundle exec rake gitlab:backup:create
Creating backup archive: 1535793325_2018_09_01_11.1.4_gitlab_backup.tar ... done
Uploading backup archive to remote storage ... skipped
Deleting tmp directories ... done
done
done
done
done
done
done
done
Deleting old backups ... done. (1 removed)
$ ls /home/git/data/backups/ -lha
total 8.0K
drwxr-xr-x 2 git git 4.0K Sep 1 11:15 .
drwxr-xr-x 11 git git 4.0K Mar 15 2016 ..
```
Question: why is the backup directory empty?
More information:
```
$ cat /home/git/gitlab/config/gitlab.yml | grep "backup"
backup:
path: "/home/git/data/backups" # Relative paths are relative to Rails.root (default: tmp/backups/)
archive_permissions: 0600 # Permissions for the resulting backup.tar file (default: 0600)
backup:
path: tmp/tests/backups
$ export | grep "backup"
declare -x GITLAB_BACKUP_DIR="/home/git/data/backups"
```
I use this docker image: `sameersbn/gitlab:11.1.4`
Best regards,
Stéphane
Answers:
username_1: Hey @username_0 ,
i tested it and it works for me here are my commands:
```console
# root@58b6e2677e4a:/home/git/gitlab
# su - git
$ bundle exec rake gitlab:backup:create SKIP=registry
$ ls -alh /home/git/data/backups/
drwxr-xr-x 2 git git 4.0K Sep 1 09:40 .
drwxr-xr-x 12 git git 4.0K Jul 1 2017 ..
-rw------- 1 git git 1.2G Sep 1 09:39 1535794798_2018_09_01_11.1.4_gitlab_backup.tar
```
Could it be something with your volume under `/home/git/data` ?
username_0: When I execute:
```
git@d2331e0df6a7:~/gitlab$ bundle exec rake gitlab:backup:create SKIP=repositories
Dumping database ...
Dumping PostgreSQL database gitlabhq_production ... [DONE]
done
Dumping repositories ...
[SKIPPED]
Dumping uploads ...
done
Dumping builds ...
done
Dumping artifacts ...
done
Dumping pages ...
done
Dumping lfs objects ...
done
Dumping container registry images ...
[DISABLED]
Creating backup archive: 1535889827_2018_09_02_11.1.4_gitlab_backup.tar ... done
Uploading backup archive to remote storage ... skipped
Deleting tmp directories ... done
done
done
done
done
done
done
Deleting old backups ... done. (0 removed)
```
I can see archive file:
```
git@d2331e0df6a7:~/gitlab$ ls /home/git/data/backups/ -lha
total 33M
drwxrwxr-x 2 git git 4.0K Sep 2 14:03 .
drwxr-xr-x 11 git git 4.0K Sep 2 13:57 ..
-rw------- 1 git git 33M Sep 2 14:03 1535889827_2018_09_02_11.1.4_gitlab_backup.tar
```
but if I remove `SKIP=repositories`, the backups directory ends up empty:
```
git@d2331e0df6a7:~/gitlab$ bundle exec rake gitlab:backup:create
Dumping database ...
Dumping PostgreSQL database gitlabhq_production ... [DONE]
done
Dumping repositories ...
...
done
Dumping uploads ...
done
Dumping builds ...
done
Dumping artifacts ...
done
Dumping pages ...
[Truncated]
Dumping lfs objects ...
done
Dumping container registry images ...
[DISABLED]
Creating backup archive: 1535889689_2018_09_02_11.1.4_gitlab_backup.tar ... done
Uploading backup archive to remote storage ... skipped
Deleting tmp directories ... done
done
done
done
done
done
done
done
Deleting old backups ... done. (2 removed)
git@d2331e0df6a7:~/gitlab$ ls /home/git/data/backups/ -lha
total 8.0K
drwxrwxr-x 2 git git 4.0K Sep 2 14:01 .
drwxr-xr-x 11 git git 4.0K Sep 2 13:57 ..
```
username_0: 11 GB
username_0: This is my stupid error:
```
GITLAB_BACKUP_EXPIRY=2
```
this is 2 seconds, not 2 archive files.
So, I can close the issue.
Status: Issue closed
|
RelaxedJS/ReLaXed | 324853361 | Title: Is there any way to add footnotes?
Question:
username_0: Footnotes like LaTeX's are necessary for writing up a formal document, but there seems to be no simple way to produce footnotes using HTML/CSS or JavaScript. It's possible to set position: absolute, bottom: 0 on the footnote container, but then each page's margin cannot be arbitrary.
[CSS Generated Content for Paged Media Module](https://www.w3.org/TR/css-gcpm-3/) looks good, but it seems it is not implemented in the current version of Chromium. Not working.
Any other ideas?
Answers:
username_1: LaTeX-like footnotes may be difficult to do indeed. I believe there is a footnote plugin for Markdown-it. Not sure how good it is. There may be also a way to have footnotes with a pug mixin. Either way, it certainly won't be footnotes in the page footer, rather footnotes below the text at the end of a paragraph or a chapter.
username_2: What did you try so far?
So far I positioned it with JS and CSS. See https://github.com/username_2/htmlinvoice/blob/master/invoice.html#L175
username_0: I've tried simply set position of footnotes container element to 'absolute'. It was certainly stuck on the bottom of its page, but I guess there's no way to set margin-bottom using @page at-rule for a specific page like :nth-child(3) pseudo-classes.
If it possible to set margin for each pages, the following step might be useful.
1. calculate footnotes height and its page number
2. set footnotes position absolute with bottom: 0
3. set its page margin-bottom to be default margin + footnotes height
username_1: Sadly it is not possible to have different margins for each page. It's a Chromium limitation.
username_2: Well, it has to be calculated for sure.
https://github.com/username_2/htmlinvoice/blob/master/invoice.html#L168-L183
username_3: Are there any updates on this?
username_2: As you can see there are no updates, at least there is no linked PR or commit.
username_4: I've found this issue when I was looking for a similar solution. After some experimenting I found that this piece of CSS puts the content at the bottom of the page, but it won't be included on all pages the way the footer CSS is. Please note that I'm not a professional, so most likely the CSS can be improved.
I hope this will get you a bit closer to the official footnotes representation 🙂
CSS:
```css
.pagefooter {
  display: flex;
  height: auto;
}
.pagefooter #content {
  position: absolute;
  bottom: 0;
}
```
Pug usage:
```HTML
.pagefooter
#content
.ui.container
.ui.icon.message.yellow.block-center
i.exclamation.triangle.outline.icon
.content
.header Internal usage only !
p
| Please note that this document is for internal use only and may not be
| used for other purposes then the intended purpose of this document or
| externally distributed by any means.
div(style="page-break-before:always")
``` |
dotnet/cli | 154073314 | Title: Runing a Tool with dependency on a different version of Newtonsoft.Json gives FileNotFoundException
Question:
username_0: ## Steps to reproduce
1 File|New Project|.NET Core|Console Application (.NET Core)
2 Call it `dotnet-ToolWithDependencyNetCore1_0`
3 Install-Package `Newtonsoft.Json -Version 7.0.1`
4 Add a dependency on e.g. `JsonSerializer` in `Program.Main()` so that the tool depends on `Newtonsoft.Json v7.0.1`, and also add a `Console.WriteLine("Hello from dotnet-ToolWithDependencyNetCore1_0")` so you can tell if the tool runs correctly.
5 `dotnet restore` and `dotnet pack` this project and place the `.nupkg` in a local feed.
6 Add that local feed to your `Nuget.config` so VS can pick up the tool in the next steps.
7 File|New Project|.NET Core|Console Application (.NET Core)
8 Call this one `Using-dotnet-ToolWithDependencyNetCore1_0`
9 Open the project.json and add a) a dependency on `"dotnet-ToolWithDependencyNetCore1_0": "1.0.0"` and b) add a `tools` section which looks like this:
```
"tools": {
"dotnet-ToolWithDependencyNetCore1_0": {
"version": "1.0.0",
"imports": [ "dnxcore50" ]
}
}
```
10 Install-Package `Newtonsoft.Json -Version 8.0.3`
11 Add a dependency on e.g. `JsonSerializer` in `Program.Main()` so that the project depends on `Newtonsoft.Json v8.0.3`, and also add a `Console.WriteLine("Hello from Referencing Project")` so you can tell if the project runs correctly.
12 Add a `Console.WriteLine()` in the `Program.Main()` method.
13 `dotnet restore` this project
14 `dotnet ToolWithDependencyNetCore1_0` (runs correctly - not sure if this step is needed for the repro)
15 `dotnet run` this project
## Expected behavior
`Hello from Referencing Project` written to the command line.
## Actual behavior
C:\work\RC2ReleaseTest\Using-dotnet-ToolWithDependencyNetCore1_0\src\Using-dotnet-ToolWithDependencyNetCore1_0>dotnet run
Project Using-dotnet-ToolWithDependencyNetCore1_0 (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'System.Runtime.Serialization.Primitives, Version=4.0.0.0, Culture=neutral, PublicKeyToken=<KEY>' or one of its dependencies. The system cannot find the file specified.
at Using_dotnet_ToolWithDependencyNetCore1_0.Program.Main(String[] args)
## Environment data
.NET Command Line Tools (1.0.0-rc2-002673)
Product Information:
Version: 1.0.0-rc2-002673
Commit Sha: c0aeb91d61
Runtime Environment:
OS Name: Windows
OS Version: 10.0.10240
OS Platform: Windows
RID: win10-x86
Status: Issue closed
Answers:
username_1: This isn't a CLI bug; the tool is broken because dependencies are missing.
username_0: Note: You can solve this problem by adding the `aspnetrelease` feed to your `Nuget.config` and adding a dependency on `"System.Runtime.Serialization.Primitives": "4.1.1-rc2-24018"` but I would have thought that that should not be necessary?
username_0: Note: this happens on Windows 10 x86 but not on Windows 10 x64. |
LukeRoss00/gta5-real-mod | 561034815 | Title: Game crashes when loading into Story Mode
Question:
username_0: When i try to load into story mode the game loads for a while and then it crashes.
Valve Index HMD
RTX 2070
i7-8700
Answers:
username_1: Please post your `asiloader.log` and `ScriptHookV.log` files
username_0: [ScriptHookV.log](https://github.com/username_1/gta5-real-mod/files/4165533/ScriptHookV.log)
[asiloader.log](https://github.com/username_1/gta5-real-mod/files/4165534/asiloader.log)
username_1: Mmm... they look OK. Could you add the line `RVRLog = 2` at the end of your `RealVR.ini` and try launching the game again? After the crash you should have a new `RVRLog.txt` file, please post it here
username_0: [RVRLog.txt](https://github.com/username_1/gta5-real-mod/files/4165604/RVRLog.txt)
Here you go
username_1: It looks like you didn't run the `RealConfig.bat`, or perhaps it gave you errors and you didn't notice. Try to run it again
username_0: It gave me errors, something like: Path couldn't be found, so I followed your instructions from another issue I found here.
username_1: Check the steps again, because the game is trying to run with graphics options that are all wrong; that's probably the cause of the crash
username_2: I am also having this exact same problem. Question: did you take into account anyone using OneDrive? Because this is what my GTA5 directory is: "C:\Users\Username\OneDrive\Documents\Rockstar Games\GTA V". Could that have something to do with the issue?
username_1: If your Documents folder has been redirected to OneDrive, you should manually edit the batch file in Notepad and replace `%UserProfile%\Documents` with the full path of your Documents folder, then run the batch file again.
username_1: Please try with [this](https://github.com/username_1/gta5-real-mod/files/4176824/RealConfig.zip) updated batch file (zipped because GitHub doesn't like .bat attachments). If it detects the OneDrive redirection correctly, I might add it as a hotfix to the current release
username_2: Awesome, that worked, quick and simple.Thanks very much for your quick response.
Status: Issue closed
|
rbind/support | 381931330 | Title: Domain request
Question:
username_0: <!--
Please use this template for new rbind.io subdomain requests.
A volunteer will help you create the subdomain later. We don't really have enough human resources here, so please be serious about your website. We hope to see you really make use of your website in the future, instead of simply getting a free subdomain and letting it collect dust in a corner. Thank you!
-->
## Netlify website address
wizardly-montalcini-991c74.netlify.com
## Preferred rbind.io subdomain
nazliozumkafaee.rbind.io
### Agreement
- [x] By submitting this request, I promise I will at least write one blog post or create one web page on my website after I get the rbind.io subdomain.
Answers:
username_1: @username_0 - we have just configured the rbind subdomain you requested. Please [set the rbind subdomain](https://support.rbind.io/about/) in your Netlify account.

Note that there might be a hint "Check DNS configuration" in the domain section after adding the rbind subdomain -- it can be safely ignored.
Thanks!
Irene
username_0: @username_1 thank you very much! I have a question though. I added the custom domain, but it is not active. The site can still be accessed with the old domain but not the new one. Is there anything else I should do?
username_0: Yes, that was the problem. Thank you for the reminder. The domain is active now. |
ccxt/ccxt | 349001327 | Title: Bitmex create order reminds the previous leverage ...
Question:
username_0: So I have something weird over here: the BitMEX documentation states that every order/position initiated will by default receive cross leverage.
https://www.bitmex.com/app/isolatedMargin
"Note that, by default all positions are initially set to “Cross Margin”."
But for some reason, if I have ever made a trade with leverage, for example 5, it does not matter whether it was initiated through the API or the web interface: the next order I initiate will automatically create a position with leverage 5.
This results in the following issue:
- If I create a position with more value than I actually have, it still goes through since the leverage is 5x by default; if I then want to switch back to a lower leverage (which I actually wanted from the start), it shows insufficient funds because I don't have enough BTC to put it on 1x....
I understand why it is not possible to add leverage to an order, but this seems to be a big bug here (a possible workaround is sketched below) ....
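For anyone needing deterministic leverage, one workaround is to set the position's leverage explicitly before placing the order. A sketch using ccxt's implicit API methods (the endpoint is BitMEX's POST /position/leverage; treat the exact method name and parameters as assumptions to verify against the BitMEX API docs):
```python
import ccxt

exchange = ccxt.bitmex({'apiKey': 'YOUR_KEY', 'secret': 'YOUR_SECRET'})

# Implicit method for BitMEX's POST /position/leverage (assumed naming);
# leverage=0 requests cross margin, any other value isolates at that leverage.
exchange.private_post_position_leverage({'symbol': 'XBTUSD', 'leverage': 1})

# Subsequent orders on that symbol then use the leverage set above.
order = exchange.create_order('BTC/USD', 'limit', 'buy', 1, 6000)
```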
Status: Issue closed
Answers:
username_1: @username_0 shouldn't this question be forwarded to BitMEX instead of CCXT? We don't change those values in the library, so if there's an issue with the above, that's on the BitMEX side. You should probably reach out to their tech support. Hope this answers your question.
username_0: yes I was not able to close this for some reason, sorry to disturb
username_1: @username_0 no worries... unfortunately, it's beyond the scope of the library, and we can't really fix it on our side in any way, sorry. We will be happy to answer questions on the library if you have difficulties with it. But, as I said above, we don't substitute values for implicit methods, so whatever values you get from there come directly from BitMEX. |
MicrosoftDocs/windows-itpro-docs | 660159498 | Title: Where can you get account-entry invoices issued in Hebi - Bendibao
Question:
username_0: Where can you get account-entry invoices issued in Hebi - Bendibao. Invoice issuing 【█1 З 5-phone-ЗЗ45-wei-З429█】 Mr. Yang 【QQ/WeChat 1З00█507█З60】. Legitimate tax business agency, 100% genuine invoices, "this information is valid forever". Issued by real companies / full range of items, details available, can be verified before payment. No need to click through, contact directly via the "Baidu snapshot" above. Scene footage released! US "quasi-carrier" burned for 4 days and is still smoking; aircraft dropped water 1,500 times (original title: US "quasi-carrier" burned for 4 days and is still smoking, helicopters dropped water 1,500 times, scene footage released). Haiwainet, July 16: The US Navy amphibious assault ship "Bonhomme Richard" has burned for four days and the fire is still not out. On the 15th, the military released the latest footage of the rescue scene. According to Russia Today, the "Bonhomme Richard" has been on fire since the explosion on July 12, and firefighters have been continuously carrying out firefighting work. The US Navy said in a statement on the 15th that, in order to contain the spread of the fire, helicopters have dropped water more than 1,500 times, and the larger flames have been extinguished. At present, firefighters are going all out to extinguish individual smoldering spots on the warship. A total of 63 people have been injured and received treatment, including 40 crew members and 23 civilians. The US Navy stated that despite the fire and explosions, the hull avoided irreparable damage, saying that "the fuel tanks are not threatened, the hull is stable, and the structure is safe."
Status: Issue closed |
cardigann/cardigann | 184649538 | Title: Docker Questions
Question:
username_0: I setup the cardigann docker and created all the same variables/paths etc for it that jackett had, and I have it working, but.....
1) the config folder is empty, how do I get access to the config.JSON or the definitions?
2) should I use a $HOME or $CONFIG variable or something?
overall, what are the options and settings for using in UnRAID/Docker?
Answers:
username_1: The config dir is set here https://github.com/cardigann/cardigann/blob/master/Dockerfile#L5
Are you saying there isn't any configuration being written to /.config/cardigann ?
username_0: no, I did finally figure out how to add the path correctly in an unRAID docker container so that I can now see the config.json, but there seems to be no way to access the definition files in the same scenario? I would like to be able to add a definition file and/or play with making a new one or fixing one (programmer here)
username_1: If you start cardigann with `--debug` it will show you the paths it looks at to load definition files. They are also enumerated here https://github.com/cardigann/cardigann#definitions. If you mount in the definitions you want to develop to one of those spots, cardigann should load them.
Status: Issue closed
|
python-sprints/python-sprints.github.io | 307403290 | Title: Create resources section with guides
Question:
username_0: This is to be done as the last thing. Very low priority, but we want to have a section for new people with reference material on making a first pull request: cloning, forking, branching, etc.
How to contribute to pandas and anything else that will become useful eventually. |
alfrcr/paginathing | 275592875 | Title: Provide Filter(Search) Fearture.
Question:
username_0: The pagination is very easy to use and understand. Similarly, can you add a feature to search/filter the table data? It would make the plugin more powerful. Just a suggestion.
Thanks.
Answers:
username_1: You can use a filtering library like [fuzzysort](https://github.com/farzher/fuzzysort) with paginathing. This was done for LibreOffice help: https://gerrit.libreoffice.org/plugins/gitiles/help/+/master/help3xsl/help.js
```
// filter the index list based on search field input
var search = document.getElementById('search-bar');
var filter = function() {
var target = search.value.trim();
if (target.length < 1) {
fullList();
return;
}
results = fuzzysort.go(target, bookmarks, {threshold: -15000, key:'text'});
var filtered = '';
results.forEach(function(result) {
filtered += '<a href="' + result.obj['url'] + '" class="' + result.obj['app'] + '">' + fuzzysort.highlight(result) + '</a>';
});
document.getElementsByClassName("index")[0].innerHTML = filtered;
addIds();
Paginator(document.getElementsByClassName("index")[0]);
};
```
Status: Issue closed
username_0: Sorry for the late reply. Thanks bro :) |
holzschu/Carnets | 842618218 | Title: Graphviz
Question:
username_0: I have created a little graphviz program to draw tree diagrams for secondary school students. It runs on my Mac with Jupyter Notebook.
However it does not run in Carnets.
```python
from graphviz import Digraph
# l = [['italian', 'japanese', 'cantonese'], ['menu A', 'menu B']]
def treeDiagram(d, l):
a = ['x']
b = [1]
k = 1
for i in range(len(l)):
b.append(len(l[i]))
for j in range(b[-1]*k):
a.append(l[i][j % b[-1]])
k = k*b[-1]
for i in range(len(a)):
d.node(str(i),a[i])
p = 0
q = 1
for i in range(len(b)-1):
t = 1
r = [(t:=t*v) for v in b][i]
for k in range(r):
for s in range(b[i+1]):
d.edge(str(p),str(q))
q += 1
p += 1
dot = Digraph()
treeDiagram(dot,[['A', 'B'], ['C', 'D', 'E','F']])
dot
```

Answers:
username_1: Hi,
thanks for raising the issue. `graphviz` (the Python package) requires `graphviz` (the shell command), so I'll have to cross-compile it and install it.
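A quick way to confirm this from inside a notebook is to check whether the `dot` executable is on the PATH (standard library only):
```python
import shutil

# The graphviz Python package only builds the DOT source; rendering shells
# out to the `dot` binary, so rendering fails when that binary is missing.
if shutil.which("dot") is None:
    print("Graphviz 'dot' binary not found on PATH; rendering will fail.")
else:
    print("dot found at:", shutil.which("dot"))
```
|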
SharePoint/sp-dev-docs | 316802852 | Title: Dead Link
Question:
username_0: on page
https://docs.microsoft.com/en-us/sharepoint/dev/sp-add-ins/complete-basic-operations-using-sharepoint-rest-endpoints
this is a dead link
https://github.com/OfficeDev/SharePoint-Add-in-REST-OData-BasicDataOperations.md
somewhere in the top 20 rows
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 047ea71c-e866-b2dd-5530-8451de27ba4a
* Version Independent ID: 3b886e81-c3b7-d5c1-136f-4cb827b4952f
* Content: [Complete basic operations using SharePoint REST endpoints](https://docs.microsoft.com/en-us/sharepoint/dev/sp-add-ins/complete-basic-operations-using-sharepoint-rest-endpoints#feedback)
* Content Source: [docs/sp-add-ins/complete-basic-operations-using-sharepoint-rest-endpoints.md](https://github.com/SharePoint/sp-dev-docs/blob/master/docs/sp-add-ins/complete-basic-operations-using-sharepoint-rest-endpoints.md)
* Service: **unspecified**
* GitHub Login: @spdevdocs
* Microsoft Alias: **spdevdocs**
Answers:
username_1: Thanks for catching this!
Status: Issue closed
|