repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M)
---|---|---
aframevr/aframe | 231442330 | Title: Performance problems when rotating camera
Question:
username_0: I've set up two simple test cases:
1) 1000 planes are rendered via `a-plane` tag
2) 1000 plane geometries are added via `new THREE.Mesh(new THREE.PlaneGeometry(1, 1))` and then `scene.add(plane)`
On my PC, performance is roughly the same when the camera is still (~80 fps). However, when I start dragging the mouse to rotate the camera, the 1st test case shows a performance drop to 15 fps, while the 2nd renders the same 80 fps.
Moving camera with wasd controls does not cause fps to drop.
What could be the reason for this performance drop?
The scene doesn't have a cursor, so I guess this cannot be caused by raycaster checking collisions.
- A-Frame Version: 0.5.0 and current master was tested
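For reference, a minimal sketch of the second test case as described above (the exact plane placement is an assumption, since the issue doesn't specify it; this assumes a browser context with an `<a-scene>` present):
```js
// Test case 2: add 1000 plain three.js plane meshes to the underlying scene.
const scene = document.querySelector('a-scene').object3D;
for (let i = 0; i < 1000; i++) {
  const plane = new THREE.Mesh(new THREE.PlaneGeometry(1, 1));
  plane.position.set((i % 40) - 20, Math.floor(i / 40) - 12, -5); // arbitrary grid
  scene.add(plane);
}
```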
Answers:
username_1: Thanks. I made a Glitch https://glitch.com/~abrasive-orchid for this
username_1: I don't get issues with 1000 `<a-plane>`s on my laptop or desktop. Can you check the Glitch and see if you reproduce or see what needs to be changed?
username_0: @username_1 Thank you. Actually, the issue is connected to react or aframe-react, which I'm using; I will file an issue in the proper place
Status: Issue closed
|
slackapi/node-slack-sdk | 854142866 | Title: Add unit tests for @slack/socket-mode
Question:
username_0: ### Description
Related to https://github.com/slackapi/bolt-js/issues/750, we don't have unit tests for `@slack/socket-mode` package.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [ ] bug
- [ ] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [x] testing related
- [ ] discussion
### Requirements (place an `x` in each of the `[ ]`)
* [x] I've read and understood the [Contributing guidelines](https://github.com/slackapi/node-slack-sdk/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [x] I've searched for any related issues and avoided creating a duplicate issue. |
carbon-design-system/carbon-design-kit | 796187979 | Title: [File uploader] pattern size in Sketch Kit
Question:
username_0: ## File uploader pattern size in Sketch Kit
## Detailed description
File uploader size in the Sketch Kit is the same for large, medium and small patterns (and yes, I checked the size is set for original :-)
Theme used - Gray 10
<img width="1124" alt="image" src="https://user-images.githubusercontent.com/24441071/106172018-465a2680-6160-11eb-96a0-34328acc3158.png">
<img width="1117" alt="image" src="https://user-images.githubusercontent.com/24441071/106172390-c7192280-6160-11eb-891b-5c52ce5cee59.png">
Answers:
username_1: The bucketing of `large, medium, small` has to do with the height size of the `primary button` and `uploaded file` for the `default file uploader`. This logic also applies to the `drag and drop file uploader`, but is not as obvious since the uploaded file state shows this size change and there isn't a button in this case. The drag and drop container should stay the same, unless you need to change the size for your usecase.
It was bucketed this way to follow how we bucket other form components that have three different sizes. So you can make sure you are choosing components that have the same sizes if they are being placed on the same page or flow.

Status: Issue closed
|
LiskHQ/lisk-sdk | 624250547 | Title: Update lisk-db to use value as Buffer
Question:
username_0: ### Description
Update `lisk-db` to manage `value` as Buffer
### Motivation
While moving to `lisk-codec`, values should no longer be encoded from JSON
### Acceptance Criteria
- All the tests are updated
- All usages updated to `Buffer`, without using lisk-codec
Status: Issue closed |
envkey/envkey-app | 902059683 | Title: Performance Issue
Question:
username_0: Hi,
As admins of an organization in envkey, we have been experiencing performance of the start-up of the envkey app declining over time as we add more projects into the organization.
We now have ~235 projects in envkey. As org admins, we have access to all projects by default. However, it takes at least 2 minutes at best for the app to be fully loaded & in a usable state when our system is relatively idle. When we are running other CPU-intensive processes (e.g. IDEs, collaboration tools) or run in battery-save mode whilst we start the envkey app, it takes a lot longer, and in some cases it even goes into a loop and won't start. Since we are planning to add more projects, we think it is only going to get worse.
The fact that it gets worse when more projects are added, tells me that the app may be doing (over-)eager fetching of resources of all the projects one has access to at the time of start-up —in an org admin's case it means all the projects. In our use cases, when we start the envkey app, we are only interested in a handful of projects. We don't need the resources of all the projects to be ready & loaded when the app starts.
Is there any room for improvement here? Any chance to move towards a more lazy-loading kind of approach maybe?
I am currently using v1.4.19 for Windows.
Answers:
username_1: Hi Serkan,
Thanks for the comment. This is a priority for our v2 release, which is now fully functional and will be ready for beta in a matter of weeks. It is built to load apps on-demand and also uses a much faster approach to encryption, so it performs drastically better with a large number of apps. We're also working on a data importer to simplify migration from the v1.
Status: Issue closed
username_0: @username_1 is there any news on the v2? It keeps getting worse as we add new projects over time.
username_1: @username_0 It's very close. We're doing final testing (including performance testing) and working on the v1 > v2 importer.
username_0: @username_1 any ETA on v2?
username_2: @username_1 this is really a big issue for us now. An update would be appreciated. |
knative/serving | 339957623 | Title: Validation e2e testing
Question:
username_0: <!--
/area API
/area test-and-release
/kind dev
-->
We now have pretty good lightweight table testing for validation of each of the `knative/serving` resources ([e.g.](https://github.com/knative/serving/blob/4008f7ec221ad15b298f6fd9e5706727662eea13/pkg/apis/serving/v1alpha1/configuration_validation_test.go#L81)).
I would actually like to see us harness the same shared table for testing against the *real* webhook in an e2e test.
I think that this is tantamount to:
1. Exporting the table for each of: `Revision`, `Configuration`, `Route`, and `Service` (a rough sketch follows this list),
1. Writing a new e2e harness that drives a new control loop core from the same table. It should ignore "success" cases (or they should be made to work!).
1. The top-level table tests should be expanded to provide more overall coverage (I generally only covered propagation of error conditions at the top-level, not all of the low-level conditions since I covered those in more tightly scoped table testing).
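To make the first step concrete, here is a hypothetical sketch of what such an exported table could look like; every name below is invented for illustration and is not the actual knative/serving API:
```go
package validation

// Case pairs a resource under validation with the expected webhook outcome.
type Case struct {
	Name    string
	Input   interface{} // a Revision, Configuration, Route, or Service
	WantErr bool        // success cases would be skipped (or made to work) in e2e
}

// ConfigurationCases is exported so the lightweight unit tests and a new e2e
// harness driving the real webhook can range over the same table.
var ConfigurationCases = []Case{
	{Name: "propagates nested spec error", Input: nil, WantErr: true},
}
```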
Answers:
username_0: cc @vagababov
I was talking about this idea several weeks ago. If you have interest in pursuing this in 0.4, I can pull it in.
Status: Issue closed
|
techtek/steempi | 254093007 | Title: Example interface
Question:
username_0: Would be great to see some example screenshots of the interface and the features it offers. It's been on my list of things to try to see what it does, but screenshots would make the decision a lot easier.
Answers:
username_1: You can find some screenshots in our new preview post:
https://steemit.com/steemdev/@username_1/the-steempi-project-is-going-at-full-steem ... :-)
Status: Issue closed
|
ic-labs/django-icekit | 186964013 | Title: Document how to figure out what view is used for a page
Question:
username_0: For new people unfamiliar with the "Pages" (and the number of r'$' URLs in SFMOMA's urls.py), it's difficult to figure out what view is being rendered to get to a page.
This is usually as simple as UrlNode.objects.get_for_path(request.path), to give a Page, then looking for the view rendered by that page (whether default or overridden). But probably an architectural overview would help explain what's going on.
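For illustration, a minimal sketch of that lookup, assuming django-fluent-pages' `UrlNode` manager (treat the exact import path as an assumption):
```python
# A minimal sketch, assuming django-fluent-pages' UrlNode manager API.
from fluent_pages.models import UrlNode

def page_for_request(request):
    # Resolve the Page object that serves this path; its concrete
    # (polymorphic) page type determines which view renders it.
    return UrlNode.objects.get_for_path(request.path)
```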
Status: Issue closed |
PSPDFKit-labs/bypass | 170516290 | Title: Problem trying to access the endpoint
Question:
username_0: Hi! I'm doing
```elixir
test "capture from invoice.created", %{bypass: bypass} do
Bypass.expect bypass, fn conn ->
assert "PUT" == conn.method
Plug.Conn.resp(conn, 201, ~s<{"response": "ok"}>)
end
send_invoice(bypass.port)
end
def send_invoice(port) do
case HTTPoison.put("http://localhost:#{port}/", Poison.encode!(to_send)) do
{:ok, response} -> :ok
{:error, reason} ->
Logger.warn "Failed to callback the invoice. Reason: #{inspect reason}."
:error
end
end
```
But it's failing with this error:
```
No HTTP request arrived at Bypass
stacktrace:
(bypass) lib/bypass.ex:17: anonymous fn/1 in Bypass.open/0
(ex_unit) lib/ex_unit/on_exit_handler.ex:82: ExUnit.OnExitHandler.exec_callback/1
(ex_unit) lib/ex_unit/on_exit_handler.ex:66: ExUnit.OnExitHandler.on_exit_runner_loop/0
```
Am I doing something wrong?
Answers:
username_1: I don't see anything wrong in this code. Could you provide a minimal Mix project that reproduces the problem?
username_0: Sure, just clone and do the ritual `mix deps.get`, `mix test`.
https://github.com/username_0/bypass-issue
username_2: I am having what appears to be the same problem, so I cloned the sample repo and I was able to replicate the problem. Also, I tried inserting a Process.sleep/1 call after bypass starts up. I grabbed the port number and verified that I was able to contact the bypass server via curl on that port. However, the sample app's test was still unable to do so.
username_2: I also tried swapping in HTTPoison for Gun in the tests, since my app and the sample are both using that. It did not cause any tests to fail in a way that would suggest that gun is masking a problem.
username_2: I noticed a small inconsistency while reading the bypass README and the tests. In the readme, it says you should use Plug.Conn.resp/3 to formulate the bypass response. However, in the test they use Plug.Conn.send_resp/3 instead. I tried changing this call in your sample app and it did not appear to change the outcome of the test, but I still wanted to make note of it.
username_2: After further work on this, I think my problem was compound:
1. I needed to use send_resp/3 instead of resp/3 in my bypass blocks (sketched below)
2. I needed to properly start one of my dependency libraries in mix.exs
After fixing those two things, I had much better results. I'll make a note to submit a pull request for the doc change I mentioned in the previous comment.
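For reference, a minimal sketch of the first fix (per the README discrepancy noted above, `resp/3` only sets the response on the conn, while `send_resp/3` actually sends it):
```elixir
# Before: resp/3 only sets the status/body on the conn; nothing is sent yet.
Bypass.expect bypass, fn conn ->
  Plug.Conn.resp(conn, 201, ~s<{"response": "ok"}>)
end

# After: send_resp/3 sends the response to the waiting client.
Bypass.expect bypass, fn conn ->
  Plug.Conn.send_resp(conn, 201, ~s<{"response": "ok"}>)
end
```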
Status: Issue closed
username_1: Closing this since it doesn't look like a bug in Bypass.
username_3: @username_2 What were the dependencies that needed to start properly?
username_2: @username_3 I looked back through some of my repos, but it has been several years and I don't recall the specific dependency. Sorry I can't be of more help.
username_3: I too am facing a similar issue; I've tried a number of things, but to no avail. |
rust-lang/rust | 375321935 | Title: Possible bug when trying to implement hylomorphisms in Rust
Question:
username_0: When I try to compile
```rust
enum ListF<S, T> {
Cons(S, T),
Nil,
}
fn map<F, S, T0, T1>(f: F, x: ListF<S, T0>) -> ListF<S, T1>
where
F: Fn(T0) -> T1,
{
match x {
ListF::Nil => ListF::Nil,
ListF::Cons(y, ys) => ListF::Cons(y, f(ys)),
}
}
fn hylo<F0, F1, S, T0, T1>(f: F0, g: F1, x: T1) -> T0
where
F0: Fn(ListF<S, T0>) -> T0,
F1: Fn(T1) -> ListF<S, T1>,
{
f(map(|a| hylo(|b| f(b), |c| g(c), a), g(x)))
}
fn alg(xs: ListF<i64, i64>) -> i64 {
match xs {
ListF::Nil => 0,
ListF::Cons(y, ys) => y + ys,
}
}
fn coalg(x: i64) -> ListF<i64, i64> {
match x {
0 => ListF::Nil,
y => ListF::Cons(y, y - 1),
}
}
fn sum_to(x: i64) -> i64 {
hylo(|x0| alg(x0), |x1| coalg(x1), x)
}
fn main() {
println!("sum of first 200 integers: {}", sum_to(200));
}
```
I get the following error:
```
error: reached the type-length limit while instantiating `map::<[closure@src/main.rs:21:11: 21:42 f:&[closure@src/main.rs:...`
--> src/main.rs:6:1
|
6 | / fn map<F, S, T0, T1>(f: F, x: ListF<S, T0>) -> ListF<S, T1>
7 | | where
8 | | F: Fn(T0) -> T1,
9 | | {
... |
[Truncated]
fn coalg(x: i64) -> ListF<i64, i64> {
match x {
0 => ListF::Nil,
y => ListF::Cons(y, y - 1),
}
}
fn sum_to(x: i64) -> i64 {
hylo(|x0| alg(x0), |x1| coalg(x1), x)
}
fn main() {
// println!("sum of first 200 integers: {}", sum_to(200));
}
```
Then it compiles just fine, with only warnings about unused functions.
The hint to increase the type length limit does not work; I bumped it several times and all that happened (as far as I can tell) is that it made compile times longer.
Answers:
username_1: The two closures created inside `hylo` appear in their own type due to the recursive call to `hylo`, so the type is infinite. The compiler error is to be expected.
You can avoid it by using point-free form instead of creating closures so that new anonymous types are not generated:
```rust
fn hylo<F0, F1, S, T0, T1>(f: F0, g: F1, x: T1) -> T0
where
F0: Fn(ListF<S, T0>) -> T0 + Copy,
F1: Fn(T1) -> ListF<S, T1> + Copy,
{
f(map(|a| hylo(f, g, a), g(x)))
}
```
username_0: No, I can't. That code generates the following error:
```
error[E0504]: cannot move `f` into closure because it is borrowed
--> src/main.rs:21:20
|
21 | f(map(|a| hylo(f, g, a), g(x)))
| - ^ move into closure occurs here
| |
| borrow of `f` occurs here
error[E0382]: use of moved value: `g`
--> src/main.rs:21:30
|
21 | f(map(|a| hylo(f, g, a), g(x)))
| --- ^ value used here after move
| |
| value moved (into closure) here
|
= note: move occurs because `g` has type `F1`, which does not implement the `Copy` trait
error[E0507]: cannot move out of captured outer variable in an `Fn` closure
--> src/main.rs:21:20
|
16 | fn hylo<F0, F1, S, T0, T1>(f: F0, g: F1, x: T1) -> T0
| - captured outer variable
...
21 | f(map(|a| hylo(f, g, a), g(x)))
| ^ cannot move out of captured outer variable in an `Fn` closure
error[E0507]: cannot move out of captured outer variable in an `Fn` closure
--> src/main.rs:21:23
|
16 | fn hylo<F0, F1, S, T0, T1>(f: F0, g: F1, x: T1) -> T0
| - captured outer variable
...
21 | f(map(|a| hylo(f, g, a), g(x)))
| ^ cannot move out of captured outer variable in an `Fn` closure
error: aborting due to 4 previous errors
Some errors occurred: E0382, E0504, E0507.
For more information about an error, try `rustc --explain E0382`.
```
username_1: https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=10735646f4ada22d5f41546f0a71b93e
username_0: Ah, I mis-copied. However, the original point still stands; that `g` has a different type from `|x0| g(x0)` seems unfortunate.
username_2: Currently every closure defines its own anonymous type that implements the right `Fn*` traits. However, in light of [RFC 1558](https://github.com/rust-lang/rfcs/blob/master/text/1558-closure-to-fn-coercion.md), which adds a coercion from closures to corresponding `fn` references if the closure doesn't capture anything from its environment, maybe this would be considered by the language team if an RFC was written.
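For context, a minimal sketch of the coercion RFC 1558 describes: a closure that captures nothing coerces to a plain `fn` pointer, whose type is nameable (unlike each closure's unique anonymous type):
```rust
fn apply(f: fn(i64) -> i64, x: i64) -> i64 {
    f(x)
}

fn main() {
    // `|x| x + 1` captures nothing from its environment, so it coerces
    // to the function-pointer type `fn(i64) -> i64`.
    println!("{}", apply(|x| x + 1, 41)); // prints 42
}
```
|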
ikedaosushi/tech-news | 345461663 | Title: The world's most productive team chat
Question:
username_0: The world's most productive team chat
Zulip combines the immediacy of Slack with an email threading model. With Zulip, you can catch up on important conversations while ignoring irrelevant ones. Messages sent hours apart are linked in the same topic.
https://zulipchat.com |
juba/rmdformats | 991558302 | Title: downcute_theme: "chaos" not working
Question:
username_0: I am trying to use downcute_theme: "chaos", but I am getting the same downcute theme, not chaos.
```yaml
output:
rmdformats::downcute:
downcute_theme: "chaos"
```
Answers:
username_1: Are you using the development version?
The chaos theme is not (yet) available in the version currently on CRAN.
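For example, one way to install the development version (assuming the `remotes` package is installed):
```r
remotes::install_github("juba/rmdformats")
```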
username_0: I was using the CRAN version; after updating to the development version, the chaos theme is available there.
Thanks for your answer.
Status: Issue closed
|
godotengine/godot | 280307363 | Title: Random Crash - when project stopped or tab clicked
Question:
username_0: Godot Master
Kubuntu
I get spammed with this message.
ERROR: _process_line: Index line=0 out of size (l.offset_caches.size()=0)
At: scene/gui/rich_text_label.cpp:111.
I removed some code to find the issue; now I get this:
```
[1] /lib/x86_64-linux-gnu/libc.so.6(+0x357f0) [0x7f2465fa27f0] (??:0)
[2] Vector<int>::size() const (??:0)
[3] Vector<int>::push_back(int const&) (??:0)
[4] RichTextLabel::_process_line(RichTextLabel::ItemFrame*, Vector2 const&, int&, int, int, RichTextLabel::ProcessMode, Ref<Font> const&, Color const&, Point2i const&, RichTextLabel::Item**, int*, bool*, int) (??:0)
[5] RichTextLabel::_validate_line_caches(RichTextLabel::ItemFrame*) (??:0)
[6] RichTextLabel::_notification(int) (??:0)
[7] RichTextLabel::_notificationv(int, bool) (??:0)
[8] Object::notification(int, bool) (??:0)
[9] CanvasItem::_update_callback() (??:0)
[10] MethodBind0::call(Object*, Variant const**, int, Variant::CallError&) (??:0)
[11] Object::call(StringName const&, Variant const**, int, Variant::CallError&) (??:0)
[12] MessageQueue::_call_function(Object*, StringName const&, Variant const*, int, bool) (??:0)
[13] MessageQueue::flush() (??:0)
[14] SceneTree::iteration(float) (??:0)
[15] Main::iteration() (??:0)
[16] OS_X11::run() (??:0)
[17] .../Volume/godot-master/bin/godot.x11.tools.64(main+0xd5) [0x55f704d92ce5] (??:0)
[18] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f2465f8d3f1] (??:0)
[19] .../godot-master/bin/godot.x11.tools.64(_start+0x2a) [0x55f704d92b0a] (??:0)
```
@hpvb I still get the `??`. Any idea how to solve it?
bin folder

Starting godot with
```./godot.x11.tools.64```
Answers:
username_1: Some ideas that may help:
use `target=release_debug`
copy both files to a local drive
check `readelf -p .gnu_debuglink godot.x11.opt.tools.64`
```String dump of section '.gnu_debuglink':
[ 0] godot.x11.opt.tools.64.debug
[ 21] U^A
```
try to run it in `gdb` (example session below)
make a clean build (`git commit -a` and `git clean -fdx`)
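For the `gdb` suggestion, a typical session looks roughly like this (binary name taken from the report above):
```
gdb ./godot.x11.tools.64
(gdb) run
... reproduce the crash in the editor ...
(gdb) bt
```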
username_0: I am not able to reproduce the bug with
`scons p=x11 target=release_debug`
So I don't know if it still shows the `??`.
username_1: You need the `release_debug` or `debug` target, otherwise IIRC the symbols are not generated by the compiler, hence the ??.
Hmm, there is also
```
if env["tools"]:
print("Tools can only be built with targets 'debug' and 'release_debug'.")
sys.exit(255)
```
in the SConstruct, so it shouldn't be actually possible to build your release target …
username_2: A project that can reliably reproduce the issue and/or detailed steps to reproduce would be welcome, as for now it's a bit light. I guess the project needs at least to have a RichTextLabel? :)
username_2: Thanks for the backtrace, there seems to be something fishy in RichTextLabel indeed. But I still can't reproduce the issue myself, could you explain *how* you get it to crash? Does it happen on an empty project?
Status: Issue closed
|
EOL/tramea | 136285683 | Title: Set up Tramea environment: Solr Cloud installation
Question:
username_0: This is subissue from #156
This issue is for installing solrcloud on tramea environment
Answers:
username_0: I used the following gems for the Solr installation:
- sunspot_rails, which enables us to use Solr via Sunspot.
- For the Solr Cloud installation, I used the sunspot_solr gem in the development group of the Gemfile. This is a packaged distribution of Solr for use with the Sunspot and sunspot_rails gems.
username_1: YAAAAAAAAA-A-A-A-A-A-AYYYYY!!!
(I have been **dying** to use Sunspot for EOL since ... since... since ...
as long as I can remember!)
Status: Issue closed
|
ModCoderPack/MCPBot-Issues | 358221202 | Title: SPacketUpdateRecipesPacket → SPacketUpdateRecipes
Question:
username_0: `net/minecraft/network/play/server/SPacketUpdateRecipesPacket` → `net/minecraft/network/play/server/SPacketUpdateRecipes`
For some reason `Packet` is on there twice...
Status: Issue closed
Answers:
username_1: https://github.com/MinecraftForge/MCPConfig/commit/3292260f68f664e93f44bcaa346299b76e8c1d83
- [x] `net/minecraft/network/play/server/SPacketUpdateRecipesPacket` → `net/minecraft/network/play/server/SPacketUpdateRecipes` |
adevinta/zoe | 642733615 | Title: It doesn't work for me
Question:
username_0: Hi,
Seems like a nice tool, but it has really bad configuration and basically no defaults. I spent 10-20 mins trying to make this work, to no avail.
I understand that your needs included having `runners`; I have been using Kafka for years and never felt the need for this option. If you want to make it friendlier, split the config in two: one with basic things like Kafka hosts and, if you must, the ser/de, and move the runners part somewhere else. Most people will only need to run it locally.
Even more, having a switch like `zoe -h mykafka` overriding any configuration would make it even more usable. In my setup I sometimes run multiple Kafka instances (integration tests) on the same machine, attached to random ports; it would be impossible for me to edit a configuration each time I want to see if my tests created the proper topics.
See my crash below.
My config:
```
---
runners:
default: "local"
config:
lambda:
deploy: null
credentials:
type: "default"
awsRegion: null
enabled: false
kubernetes:
namespace: "default"
context: null
deletePodAfterCompletion: true
cpu: "1"
memory: "512M"
timeoutMs: 300000
image:
registry: "docker.io"
image: "adevinta/zoe-core"
tag: null
local:
enabled: true
storage: null
secrets: null
expressions: {}
clusters:
default:
props:
bootstrap.servers: "xxx:9092"
key.deserializer: "org.apache.kafka.common.serialization.StringDeserializer"
value.deserializer: "org.apache.kafka.common.serialization.StringDeserializer"
key.serializer: "org.apache.kafka.common.serialization.StringSerializer"
value.serializer: "org.apache.kafka.common.serialization.ByteArraySerializer"
groups: {}
```
Running:
```
zoe topics list
```
leads to
```
loading config from url : file:/home/xxx/.zoe/config/default.yml
2020-06-19 17:34:29 INFO zoe: requesting topics...
failure: runner 'local' failed
cause:
failure: Instantiation of [simple type, class com.adevinta.oss.zoe.core.functions.CreateTopicRequest] value failed for JSON property name due to missing (therefore NULL) value for creator parameter name which is a non-nullable type
at [Source: (byte[])"{"props":{"bootstrap.servers":"xxx:9092","key.deserializer":"org.apache.kafka.common.serialization.StringDeserializer","value.deserializer":"org.apache.kafka.common.serialization.StringDeserializer","key.serializer":"org.apache.kafka.common.serialization.StringSerializer","value.serializer":"org.apache.kafka.common.serialization.ByteArraySerializer"}}"; line: 1, column: 379] (through reference chain: com.adevinta.oss.zoe.core.functions.CreateTopicRequest["name"])
```
Thank you,
Answers:
username_1: Hi! And thanks for raising the issue!
You are stumbling into a regression that has already been fixed: https://github.com/adevinta/zoe/issues/1
Can you update your zoe version and try again?
username_0: Hi, thanks for answering. It works with the latest version; apart from the bug, the rest of the stuff is still nice to have, or even required, for me.
username_1: Thanks a lot @username_0 for the feedback. We will soon make it available to override any configuration from the command call :+1:
Status: Issue closed
|
InsertKoinIO/koin | 345571189 | Title: org.koin.error.NoBeanDefFoundException: No definition found to resolve type 'android.content.Context'. Check your module definition error
Question:
username_0: Hi,
I am trying to run DryRunTest and getting the following error from the compiler.
**ERROR:**
org.koin.error.BeanInstanceCreationException: Can't create bean Bean[class=com.zumepizza.app.pielot.utils.ZumeSharedPreferences] due to error :
org.koin.error.NoBeanDefFoundException: No definition found to resolve type 'android.content.Context'. Check your module definition
**My module class:**
```kotlin
val appModule: Module = applicationContext {
    bean { ZumeSharedPreferences(get()) }
    bean { APIClient(get()) }
}
```
**ZumeSharedPreferences class:**
```kotlin
class ZumeSharedPreferences(context: Context) {
    private val PREFERENCE_NAME = "ZSharedPreferences"
    private val preference: SharedPreferences

    init {
        preference = context.getSharedPreferences(PREFERENCE_NAME, Context.MODE_PRIVATE)
    }
    // ...
}
```
I need to inject Context to initialize ZumeSharedPreferences according to my current code. How can I inject Context? Or what is the best way to inject a SharedPreferences class to my application?
Answers:
username_1: You could use androidContext() in your ZumeSharedPreferences bean:
`bean { ZumeSharedPreferences(androidContext()) }`
If you are writing tests, you may need to mock the Application (with say mockito):
```kotlin
startKoin(listOf(appModule)) with mock(Application::class.java)
dryRun()
closeKoin()
```
username_0: Thank you for the answer @username_1. But I guess `bean { ZumeSharedPreferences(get()) }` is doing the same job as `bean { ZumeSharedPreferences(androidContext()) }`. Am I right? I am having trouble while testing: the `with mock(Application::class.java)` part does not change anything. I am still getting the error.
username_2: in Koin `1.0.0` you have `declareMock` to help you: https://beta.insert-koin.io/docs/1.0/quick-references/koin-test/#bring-koin-powers-to-your-test-class
In 0.9.x, you have to use the `with` operator at start to give it a `Application` instance. It will help you inject this instance later.
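A rough sketch of the 1.0.0 `declareMock` approach from the linked docs (imports omitted, and the exact API may differ between Koin versions, so treat this as an assumption-laden outline):
```kotlin
// imports omitted (org.koin.test.*, JUnit, Mockito); APIs vary across Koin versions
class DryRunTest : KoinTest {

    @Test
    fun `resolve beans with a mocked Application`() {
        startKoin(listOf(appModule))
        // Registers a Mockito mock for Application in the Koin context, so
        // beans built from androidContext()/get() can resolve a Context.
        declareMock<Application>()

        val prefs = get<ZumeSharedPreferences>()

        closeKoin()
    }
}
```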
Status: Issue closed
username_2: Closing question issue - Please reopen it or post your next question on stackoverflow: https://stackoverflow.com/questions/tagged/koin |
mekanism/Mekanism | 904083947 | Title: Ability to extract or insert increments of gases/fluids into/out of tanks.
Question:
username_0: **Describe the the feature you'd like**
The ability to control the flow of gases and fluids into and out of machines, pipes and tanks.
**Describe alternatives you've considered**
I have had issues where sometimes I am left with a small amount of fluid or gas in a machine. It is not enough for the machine to process, or be extracted from. In some cases, it is easy enough to refill the machine to full and let it process everything from full to 0. (In this case I probably broke a pipe early and got stuck with a partial amount of material).
However, I have a machine that processes in increments that don't leave the machine completely empty when finished (Chemical Crystallizer). So there doesn't seem to be a way to completely empty it of an incorrect input material.
I think it would be a nice feature if I could extract or insert any increment of a gas or fluid.
This would also be handy if I wanted to tank a tank full of gas or fluid, and split it evenly between two containers.
|
github-nakasho/astroph | 998834416 | Title: A kilonova from an ultra-quick merger of a neutron star binary
Question:
username_0: # 論文概要
2秒以上の発光があるが超新星爆発との関連が見られないハイブリッドγ線バーストGRB 060505を詳細に観測したところ、青いkilonovaであることが示唆された。低金属量で星形成が活発な場所で、連星中性子星が形成後1Myr以内に合体する理論を支持
# 論文を理解する上で重要な図など



# Paper link
https://arxiv.org/abs/2109.07694 |
oh-my-fish/oh-my-fish | 419408982 | Title: randomrussel theme's prompt's [I] and [N] is persistent throughout other themes.
Question:
username_0: I was trying omf themes one by one and tried the randomrussel theme too. When I switch to another theme, the randomrussel prompt indicator is still visible. And I am getting the following errors too:
```
- (line 1): function: Unexpected positional argument 'alias fish_indent=/usr/bin/fish_indent'
function fish_indent --wraps --description 'alias fish_indent=/usr/bin/fish_indent'; /usr/bin/fish_indent $argv; end
^
from sourcing file -
called on line 72 of file /usr/share/fish/functions/alias.fish
in function “alias”
called on line 7 of file /usr/share/fish/functions/fish_indent.fish
with parameter list “fish_indent=/usr/bin/fish_indent”
from sourcing file /usr/share/fish/functions/fish_indent.fish
called on line 263 of file ~/.config/fish/functions/fish_prompt.fish
in command substitution
called on line 263 of file ~/.config/fish/functions/fish_prompt.fish
in command substitution
called on line 527 of file ~/.config/fish/functions/fish_prompt.fish
in function “__budspencer_create_cmd_hist”
called on standard input
in event handler: handler for generic event “fish_prompt”
```
Answers:
username_0: omf doctor output
```
Oh My Fish version: 6-29-gb2d7a44
OS type: Linux
Fish version: fish, version 3.0.1
Git version: git version 2.20.1
Git core.autocrlf: no
Checking for a sane environment...
Your shell is ready to swim.
```
username_0: I am an Idiot.
Turns out this is vi mode. And to disable vi mode just type `fish_default_key_bindings` in fish.
Status: Issue closed
username_1: Glad you got it sorted :) |
awslabs/amazon-kinesis-producer | 293967397 | Title: Unable to post message to AWS Kinesis stream using KPL
Question:
username_0: Hi,
We are trying to test out the following flow for a PoC
Java ( KPL ) --> Kinesis Stream --> FireHose Stream --> AWS ElasticSearch Index
Below is the Java Class that is sending the message.
The records are not sent over to either Kinesis Stream or FireHose Stream.
No Exceptions shown.
The process just exits.
Any suggestion greatly appreciated.
Thanks !!
SRama
-----This is my class ----------
```java
package com.wiley.iss.elk;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;
import java.nio.ByteBuffer;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import java.io.Serializable;

public class PutRecordToAWS implements Serializable {
    private static final long serialVersionUID = 1L;
    private static final Log logger = LogFactory.getLog(PutRecordToAWS.class);

    public static String KinesisPutRecord(String awsStreamName, String userAccessKeyId, String secretAccessKey, String jsonData) {
        try {
            KinesisProducerConfiguration config = new KinesisProducerConfiguration();
            config.setRegion(Regions.US_EAST_1.getName());
            BasicAWSCredentials awsCreds = new BasicAWSCredentials(userAccessKeyId, secretAccessKey);
            config.setCredentialsProvider(new AWSStaticCredentialsProvider(awsCreds));
            KinesisProducer producer = new KinesisProducer(config);
            // Note: wrap the jsonData parameter, not the string literal "jsonData".
            ByteBuffer data = ByteBuffer.wrap(jsonData.getBytes("UTF-8"));
            producer.addUserRecord(awsStreamName, Long.toString(System.currentTimeMillis()), data);
            /* producer.flush();
               producer.destroy(); */
            return "Is it Success";
        } catch (Exception e) {
            e.printStackTrace();
            return e.getMessage();
        }
    }

    @SuppressWarnings("static-access")
    public static void main(String args[]) {
        PutRecordToAWS ss = new PutRecordToAWS();
        System.out.println(ss.KinesisPutRecord("MyStream", "MyUserAccessID", "MySceretKey", "{ \"sample\" : \"checked\" }"));
    }
}
```
Status: Issue closed
Answers:
username_1: The return from `KinesisProducer#addUserRecord` is a future. You need to wait on the future before exiting.
e.g.
``` java
Future<UserRecordResult> f = producer.addUserRecord(...);
UserRecordResult result = f.get();
```
Feel free to reopen if you have further questions.
username_0: Thank you Justin,
It did not help either.
I added the two recommended lines and a couple of `System.out.println`s.
I see this exception:

We are running this KPL code from an on-prem machine. Would that contribute to the issue?
```java
public static String KinesisPutRecord(String awsStreamName, String userAccessKeyId, String secretAccessKey, String jsonData) {
    try {
        KinesisProducerConfiguration config = new KinesisProducerConfiguration();
        config.setRegion(Regions.US_EAST_1.getName());
        BasicAWSCredentials awsCreds = new BasicAWSCredentials(userAccessKeyId, secretAccessKey);
        config.setCredentialsProvider(new AWSStaticCredentialsProvider(awsCreds));
        KinesisProducer producer = new KinesisProducer(config);
        // Note: wrap the jsonData parameter, not the string literal "jsonData".
        ByteBuffer data = ByteBuffer.wrap(jsonData.getBytes("UTF-8"));
        // producer.addUserRecord(awsStreamName, Long.toString(System.currentTimeMillis()), data);
        System.out.println(producer.getOutstandingRecordsCount());
        Future<UserRecordResult> f = producer.addUserRecord(awsStreamName, Long.toString(System.currentTimeMillis()), data);
        System.out.println("After " + producer.getOutstandingRecordsCount());
        System.out.println("is Done ? " + f.isDone());
        UserRecordResult result = f.get();
        System.out.println(result.getSequenceNumber());
        producer.flush();
        producer.destroy();
        return "Success";
    } catch (Exception e) {
        System.out.println("Exception " + e.getMessage());
        e.printStackTrace();
        return e.getMessage();
    }
}
```
Thanks,
username_1: If all attempts to send the records to Kinesis fail the KPL will throw the UserRecordFailedException. You will need to extract the failure reason from the exception by accessing [`UserRecordFailedException#getResult`](https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer/src/main/java/com/amazonaws/services/kinesis/producer/UserRecordFailedException.java#L28). From the result you get the attempts from [`UserRecordResult#getAttempts`](https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer/src/main/java/com/amazonaws/services/kinesis/producer/UserRecordFailedException.java#L28). Finally from that you can get the [error message from `Attempt#getErrorMessage`](https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer/src/main/java/com/amazonaws/services/kinesis/producer/Attempt.java#L68).
The error message from the attempts should let you know why the requests are failing.
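Putting that together, a minimal sketch (method names taken from the sources linked above) that continues from the `f.get()` call in the earlier snippet:
```java
try {
    UserRecordResult result = f.get();
    System.out.println("Put record into shard " + result.getShardId());
} catch (ExecutionException e) {
    if (e.getCause() instanceof UserRecordFailedException) {
        UserRecordResult failed =
                ((UserRecordFailedException) e.getCause()).getResult();
        for (Attempt attempt : failed.getAttempts()) {
            // Each attempt records why that particular put failed.
            System.err.println("Attempt failed: " + attempt.getErrorMessage());
        }
    }
}
```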
username_0: Thank you Justin.
The issue I was facing was with amazon-kinesis-producer.
I was using 0.12.8 and apparently, that has some bug with running on Windows. 0.12.6 worked fine on Windows and Linux.
username_1: If you're having an issue with 0.12.8 on Windows can you please provide any related log messages. 0.12.8 should be working on Windows.
username_2: Hi,
Is there a solution on Windows? I am using 1.12.11 and I am getting the same error message.
Exception in thread "main" java.util.concurrent.ExecutionException: com.amazonaws.services.kinesis.producer.UserRecordFailedException
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at com.kpl.client.KinesisKPLTest.main(KinesisKPLTest.java:71)
Caused by: com.amazonaws.services.kinesis.producer.UserRecordFailedException
at com.amazonaws.services.kinesis.producer.KinesisProducer$MessageHandler.onPutRecordResult(KinesisProducer.java:199)
username_0: Version 0.12.6 worked fine for me. You might want to go over what changed between your version and 0.12.6; I know it could be a lot. Maybe Justin could shine some light here.
You could try out previous stable versions; maybe you'll get lucky (like I did).
username_2: I tried with 0.12.6 as well; it fails with the same error. I am trying this from my local Windows machine.
This is the code, and it fails at `f.get`. Any suggestion would help.
```java
KinesisProducerConfiguration producerConfig = new KinesisProducerConfiguration()
        .setRecordMaxBufferedTime(300)
        .setMaxConnections(1)
        .setRequestTimeout(6000)
        .setRegion("us-east-1")
        .setCredentialsProvider(credentialsProvider);
KinesisProducer kinesis = new KinesisProducer(producerConfig);
List<Future<UserRecordResult>> putFutures = new LinkedList<Future<UserRecordResult>>();
for (int i = 0; i < 100; i++) {
    ByteBuffer data = ByteBuffer.wrap("Testing Data".getBytes("UTF-8"));
    putFutures.add(kinesis.addUserRecord("POCStreamTest", Long.toString(System.currentTimeMillis()), data));
}
for (Future<UserRecordResult> f : putFutures) {
    UserRecordResult result = f.get(); // this does block
    if (result.isSuccessful()) {
        System.out.println("Put record into shard " + result.getShardId());
    } else {
        for (Attempt attempt : result.getAttempts()) {
            // Analyze and respond to the failure
        }
    }
}
```
username_3: Did you get a workaround? |
marbl/metAMOS | 73173296 | Title: Error at the scaffold step when test
Question:
username_0: -redundancy 10 -b /data/yyq/test/test1/Scaffold/in/proba.bnk -repeats /data/y
yq/test/test1/Scaffold/in/proba.reps
*************************DETAILS***********************************
Last 10 commands run before the error (/data/yyq/test/test1/Logs/COMMANDS.log)
|2015-05-04 14:51:08|# [SCAFFOLD]
|2015-05-04 14:51:08| rm -rf /data/yyq/test/test1/Scaffold/in/proba.bnk [12/542]
|2015-05-04 14:54:47| /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/toAmos_new -Q /d
ata/yyq/test/test1/Preprocess/out/lib1.seq -i --min 5 --max 1679 --libname lib1
-b /data/yyq/test/test1/Scaffold/in/proba.bnk
|2015-05-04 14:56:17| /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/toAmos_new -c /d
ata/yyq/test/test1/Assemble/out/proba.asm.tigr -b /data/yyq/test/test1/Scaffold/
in/proba.bnk
|2015-05-04 14:59:45| /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/asmQC -b /data/y
yq/test/test1/Scaffold/in/proba.bnk -scaff -recompute -update -numsd 2
|2015-05-04 14:59:45| perl /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/bank-unlock
/data/yyq/test/test1/Scaffold/in/proba.bnk
|2015-05-04 15:01:57| /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/clk -b /data/yyq
/test/test1/Scaffold/in/proba.bnk
|2015-05-04 15:03:03| /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/Bundler -b /data
/yyq/test/test1/Scaffold/in/proba.bnk
|2015-05-04 21:16:41| /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/MarkRepeats -red
undancy 50 -b /data/yyq/test/test1/Scaffold/in/proba.bnk > /data/yyq/test/test1
/Scaffold/in/proba.reps
|2015-05-04 21:52:32| /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/OrientContigs -m
inRedundancy 5 -all -redundancy 10 -b /data/yyq/test/test1/Scaffold/in/proba.b
nk -repeats /data/yyq/test/test1/Scaffold/in/proba.reps
Last 10 lines of output (/data/yyq/test/test1/Logs/SCAFFOLD.log)
FOR SKIPPED EDGE 926440 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 944273 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 982719 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 983738 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 1019566 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 1026702 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 1070057 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 1072044 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 1072647 SET EDGE STATUS TO BE 5
FOR SKIPPED EDGE 1112734 SET EDGE STATUS TO BE 5
Please veryify input data and restart MetAMOS. If the problem persists please contact the MetAMOS development team.
*************************ERROR***********************************
*****************************************************************
rm: cannot remove ‘/data/yyq/test/test1/Logs/scaffold.ok’: No such file or directory
Oops, MetAMOS finished with errors! see text in red above for details.
Answers:
username_0: s/perl/statistics.pl /data/yyq/test/test1/Postprocess/out/proba.scf.fa > /data/yyq/test/test1/Postprocess/out/html/asmstats.out
*************************DETAILS***********************************
Last 10 commands run before the error (/data/yyq/test/test1/Logs/COMMANDS.log)
|2015-05-05 03:28:57| ln /data/yyq/test/test1/Postprocess/out/abundance.krona.html /data/yyq/test/test1/Postprocess/out/html/Abundance.html
|2015-05-05 03:28:57| touch /data/yyq/test/test1/Postprocess/out/ref.name
|2015-05-05 03:28:57| mv /data/yyq/test/test1/Preprocess/out/*.fastqc /data/yyq/test/test1/Postprocess/out/html
|2015-05-05 03:28:57| unlink /data/yyq/test/test1/Postprocess/out/html/propagate.in.clusters
|2015-05-05 03:28:57| ln /data/yyq/test/test1/Propagate/in/proba.clusters /data/yyq/test/test1/Postprocess/out/html/propagate.in.clusters
|2015-05-05 03:28:57| unlink /data/yyq/test/test1/Postprocess/out/html/propagate.out.clusters
|2015-05-05 03:28:57| ln /data/yyq/test/test1/Propagate/out/proba.clusters /data/yyq/test/test1/Postprocess/out/html/propagate.out.clusters
|2015-05-05 03:28:57| unlink /data/yyq/test/test1/Postprocess/out/html/FunctionalAnnotation.html
|2015-05-05 03:28:57| ln /data/yyq/test/test1/FunctionalAnnotation/out/ec.krona.html /data/yyq/test/test1/Postprocess/out/html/FunctionalAnnotation.html
|2015-05-05 03:28:57| perl -I /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/lib /opt/metAMOS-1.5rc3/Utilities/perl/statistics.pl /data/yyq/test/test1/Postprocess/out/proba.scf.fa > /data/yyq/test/test1/Postprocess/out/html/asmstats.out
Last 10 lines of output (/data/yyq/test/test1/Logs/POSTPROCESS.log)
unlink: cannot unlink ‘/data/yyq/test/test1/Postprocess/out/html/class.classified’: No such file or directory
unlink: cannot unlink ‘/data/yyq/test/test1/Postprocess/out/html/Annotate.html’: No such file or directory
unlink: cannot unlink ‘/data/yyq/test/test1/Postprocess/out/html/Abundance.html’: No such file or directory
mv: cannot stat ‘/data/yyq/test/test1/Preprocess/out/*.fastqc’: No such file or directory
unlink: cannot unlink ‘/data/yyq/test/test1/Postprocess/out/html/propagate.in.clusters’: No such file or directory
unlink: cannot unlink ‘/data/yyq/test/test1/Postprocess/out/html/propagate.out.clusters’: No such file or directory
unlink: cannot unlink ‘/data/yyq/test/test1/Postprocess/out/html/FunctionalAnnotation.html’: No such file or directory
ln: failed to access ‘/data/yyq/test/test1/FunctionalAnnotation/out/ec.krona.html’: No such file or directory
Can't locate Statistics/Descriptive.pm in @INC (@INC contains: /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/lib /opt/metAMOS-1.5rc3/src/phylosift/lib/ /home/yyq/perl5/lib/perl5 /root/bin /root/bin /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /opt/metAMOS-1.5rc3/Utilities/perl/statistics.pl line 7.
BEGIN failed--compilation aborted at /opt/metAMOS-1.5rc3/Utilities/perl/statistics.pl line 7.
Please veryify input data and restart MetAMOS. If the problem persists please contact the MetAMOS development team.
*************************ERROR***********************************
*****************************************************************
rm: cannot remove ‘/data/yyq/test/test1/Logs/postprocess.ok’: No such file or directory
Oops, MetAMOS finished with errors! see text in red above for details.
username_1: Your second error is a missing Perl library called Statistics::Descriptive. You need to install that perl package which will fix your Postprocess error (http://metamos.readthedocs.org/en/v1.5rc3/content/installation.html#prerequisites).
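For example (assuming the `cpanm` client is available; the classic `cpan` shell works too):
```
cpanm Statistics::Descriptive
```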
As for the scaffold error, it is the same error as issue #198. If you can share your data we can try to reproduce it locally.
username_0: hi~ I used the published liver cirrhosis dataset, the link is below:
http://www.ebi.ac.uk/ena/data/view/ERP005860
I just selected the first sample to test the pipeline.
username_0: hi~
Does the command 'OrientContigs' accept a parameter named '-minRedundancy'? I can't find it in its usage...
╰─[$: 1] % /opt/metAMOS-1.5rc3/AMOS/Linux-x86_64/bin/OrientContigs -h
Determine contig order and orientation
USAGE:
OrientContigs -b[ank] <bank_name> [-all] [-noreduce] [-agressive] [-redundancy minLinks] [-repeats fileName] [-skip]
The -all option will force initialization of all contigs, including those that have no links to them, otherwise they remain uninitialized
The -noreduce option will turn off search for common motifs and recursively remove them, thus simplyfing the graph
The -agressive option will not mark edges that move a contig more than 3 STDEVS away as bad and will try to reconcile the positions
The -redundancy option specifies the minimum number of links between two contigs before they will be scaffolded
The -repeats option specifies a file containing a list of contig IIDs which are considered repeats and whose edges will be unused
The -skip option will skip edges that have too low a weight relative to the weights of the other edges connecting their respective nodes.
Options summary:
Bank =
Redundancy = 0
InitAll = 0
Compress = 1
AgressiveScf = 0
Max Overlap = -1
A bank must be specified
username_1: The minRedundancy is a parameter to OrientContigs. The parameters passed to OrientContigs are listed in the configuration file in Utilities/config/bambus.config. The parameter isn't the cause of the error because if invalid parameters were passed it would error immediately but it began running on your dataset before exiting with an error. We will try to reproduce the error on your dataset locally. |
thingsboard/thingsboard | 667873160 | Title: [Question] your title here
Question:
username_0: **Component**
* UI
* Rule Engine
* Installation
* Generic
**Description**
Compiling the source failed.
**Environment**
* OS: win10
* ThingsBoard: latest
* Browser: name and version
[INFO] Thingsboard ........................................ SUCCESS [ 1.979 s]
[INFO] Netty MQTT Client .................................. SUCCESS [ 3.790 s]
[INFO] Thingsboard Server Commons ......................... SUCCESS [ 0.336 s]
[INFO] Thingsboard Server Common Data ..................... SUCCESS [ 7.101 s]
[INFO] Thingsboard Server Common Utils .................... SUCCESS [ 0.827 s]
[INFO] Thingsboard Server Common Messages ................. SUCCESS [ 4.179 s]
[INFO] Thingsboard Actor system ........................... SUCCESS [ 1.526 s]
[INFO] Thingsboard Server Stats ........................... SUCCESS [ 1.303 s]
[INFO] Thingsboard Server Queue components ................ SUCCESS [ 16.532 s]
[INFO] Thingsboard Server Commons ......................... SUCCESS [ 0.105 s]
[INFO] Thingsboard Server Common Transport components ..... SUCCESS [ 2.212 s]
[INFO] Thingsboard MQTT Transport Common .................. SUCCESS [ 2.299 s]
[INFO] Thingsboard HTTP Transport Common .................. SUCCESS [ 1.072 s]
[INFO] Thingsboard CoAP Transport Common .................. SUCCESS [ 1.349 s]
[INFO] Thingsboard Server Common DAO API .................. SUCCESS [ 2.350 s]
[INFO] Thingsboard Extensions ............................. SUCCESS [ 0.088 s]
[INFO] Thingsboard Rule Engine API ........................ SUCCESS [ 1.862 s]
[INFO] Thingsboard Server DAO Layer ....................... SUCCESS [ 13.815 s]
[INFO] Thingsboard Rule Engine Components ................. SUCCESS [ 8.800 s]
[INFO] Thingsboard Server Transport Modules ............... SUCCESS [ 0.150 s]
[INFO] Thingsboard HTTP Transport Service ................. FAILURE [ 28.267 s]
[INFO] Thingsboard MQTT Transport Service ................. SKIPPED
[INFO] Thingsboard CoAP Transport Service ................. SKIPPED
[INFO] ThingsBoard Server UI .............................. SKIPPED
[INFO] Thingsboard Server Tools ........................... SKIPPED
[INFO] Thingsboard Rest Client ............................ SKIPPED
[INFO] ThingsBoard Server Application ..................... SKIPPED
[INFO] ThingsBoard Microservices .......................... SKIPPED
[INFO] ThingsBoard Docker Images .......................... SKIPPED
[INFO] ThingsBoard JavaScript Executor Microservice ....... SKIPPED
[INFO] ThingsBoard Web UI Microservice .................... SKIPPED
[INFO] ThingsBoard Node Microservice ...................... SKIPPED
[INFO] ThingsBoard Transport Microservices ................ SKIPPED
[INFO] ThingsBoard MQTT Transport Microservice ............ SKIPPED
[INFO] ThingsBoard HTTP Transport Microservice ............ SKIPPED
[INFO] ThingsBoard COAP Transport Microservice ............ SKIPPED
[INFO] ThingsBoard Black Box Tests ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:42 min
[INFO] Finished at: 2020-07-29T21:36:15+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.thingsboard:gradle-maven-plugin:1.0.10:invoke (default) on project http: org.gradle.tooling.GradleConnectionException: Could not install Gradle distribution from 'https://services.gradle.org/distributions/gradle-6.3-bin.zip'. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf org.thingsboard.transport:http
Answers:
username_1: I have got the same failure in the thingsboard compilation. Have you found any solution? |
HDVinnie/Private-Trackers-Spreadsheet | 793541281 | Title: Codebase \ Torrent Tracker Platforms
Question:
username_0: I propose to transfer information from this [repository](https://github.com/username_1/Torrent-Tracker-Platforms) to the main table (to a separate tab).
Information can be retrieved via the GitHub API, for example:
https://api.github.com/repos/OPSnet/Gazelle
Answers:
username_1: Good idea. Im currently busy with UNIT3D at the moment but can look into it soon unless you plan to make a PR?
username_0: I tried to figure it out, but my knowledge was not enough.
username_0: I would use these fields (a fetch sketch follows the list):
* full_name
* description
* archived
* watchers
* forks
* contributors - https://api.github.com/repos/OPSnet/Gazelle/contributors
* languages - https://api.github.com/repos/OPSnet/Gazelle/languages
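A minimal sketch of pulling those fields (assuming Python with the `requests` library; unauthenticated API calls are rate-limited):
```python
import requests

# Repository-level fields come straight from the repos endpoint.
repo = requests.get("https://api.github.com/repos/OPSnet/Gazelle").json()
print(repo["full_name"], repo["description"], repo["archived"],
      repo["watchers"], repo["forks"])

# Languages and contributors each have their own endpoint.
languages = requests.get("https://api.github.com/repos/OPSnet/Gazelle/languages").json()
contributors = requests.get("https://api.github.com/repos/OPSnet/Gazelle/contributors").json()
print(list(languages.keys()), len(contributors))
```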
@username_1 maybe you have time, or @tracker-user may help.
username_2: @username_0 @username_1 I'll see what I can do.
Unfortunately GitHub flagged my @tracker-user account so all the issues and pull requests I created are now hidden.
I created a support ticket, hopefully they'll resolve it.
Probably got a false positive because I mentioned ~200 tracker names in another issue (wrote a big comment).
I'll get back to you once my main account is available again. |
TheIndieStone/ProjectZomboidTranslations | 874508439 | Title: Add TV and radio translations
Question:
username_0: As some people have been asking around, maybe it's best to add the translation files here? In the meantime, I've created a rough readme update with #476.
Answers:
username_1: I agree, I think that all translations should be in one place.
username_0: I think the problem is that the files are in a different format and they are waiting for the electricity revamp to unify it, but maybe in the meantime it might be possible for them to upload them as-is on here @username_3? There were some VHS videos teased, so those would need translations as well; maybe it's the right time to take care of it now :)
username_2: But... It is placed in a different folder. It can't just be put in this directory tree.
username_0: If you want to just symlink this folder as-is and pull changes from GitHub, then it could be troublesome (though nothing clever symlinking or even a simple ba{tch,sh} script couldn't solve).
Also, don't the TV translations currently require editing before they are usable in the game in the first place? I'm not sure, as I didn't fiddle with them.
Either way, it's mostly about easily tracking changes and translations :). Ideally, translations could be done via an external source such as Crowdin or Transifex, but placing them on GitHub would still make translating them easier for some, surely. :)
username_2: We can make a repository to do it before TIS did it for us ;)
username_0: Yeah, though I'd love to have it in official one eventually :)
Status: Issue closed
|
neurodata/synaptome-stats | 161745579 | Title: Discussion with CEP 20160621
Question:
username_0: ## Notes that need help
- Run GMM on the whole thing (~billion points), we want to estimate the 24*6 dimensional distribution (full covariance).
- BIC for \hat{k}
- Use FlashGraph in a docker (slacked Disa).
- Let the null be p1 = p2 = ... = pd (run ANOVA) ...
- Collect points thresholding on each channel individually so that there isn't a channel bias.
- Lq Hotelling test?
- ... lowest entropy between distributions.
- choose points such that l_1 distance is ...
- Ripley's K (test?)
- Silverman table (grazing goats?)
Status: Issue closed
Answers:
username_0: Moved [here](https://github.com/neurodata/synaptome-stats/blob/451f5aef3a9980d6ecd2f17641cab819c78b212d/Draft/CEP20160621.md) |
dbalduini/smeago | 261610374 | Title: panic: runtime error: makeslice: len out of range
Question:
username_0: I cannot get it working. Whatever I do, I get a len out of range error.
```
smeago -h http://www.google.com -p 80
2017/09/29 13:16:26 Crawling Host: http://www.google.com:80
2017/09/29 13:16:26 Urlset Loc: http://localhost
2017/09/29 13:16:26 Sitemap File: /Users/x/test/sitemap.xml
2017/09/29 13:16:26 Visiting: http://www.google.com:80/
panic: runtime error: makeslice: len out of range
goroutine 4 [running]:
github.com/username_1/smeago/src.ReadStringSize(0x2a00050, 0xc42000c2c0, 0xffffffffffffffff, 0x2a00050, 0xc42000c2c0, 0xc42008c260)
/Users/x/go/src/github.com/username_1/smeago/src/request.go:19 +0x6e
github.com/username_1/smeago/src.(*Crawler).Crawl(0xc4200f2360, 0x1, 0x129e01c, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/Users/x/go/src/github.com/username_1/smeago/src/crawler.go:57 +0x2bf
created by github.com/username_1/smeago/src.(*CrawlerSupervisor).CrawlJobs
/Users/x/go/src/github.com/username_1/smeago/src/supervisor.go:74 +0x88
```
my go version is go1.9 darwin/amd64
Answers:
username_1: @username_0 Wow, I didn't know someone would use **smeago** besides me.
I will take a look. Maybe i will ask you more information later.
username_0: @username_1 with a cool name like that, how could I not use it? 👍
I would be glad to help.
username_1: Sure, if you find something meanwhile you can send me the PR and i will merge it.
username_1: Done!
Status: Issue closed
|
electron-userland/electron-builder | 201006906 | Title: interface 'platform' doesn't exist
Question:
username_0: electron-builder version 11.2.6
64 bit and 32 bit Linux targets
I am building my snap with this configuration
```
...
"linux": {
"synopsis": "shortened...",
"category": "Audio",
"packageCategory": "GNOME;GTK;AudioVideo;Audio;Player",
"depends": [
"libappindicator1",
"libindicator7",
"libnotify4",
"notify-osd",
"wget",
"unzip",
"tar"
],
"target": [
"AppImage",
"deb",
"zip",
"snap"
]
}
},
"snap": {
"confinement": "devmode",
"grade": "stable"
}, ...
```
I had my package reviewed and it has an interface named 'platform'. Where did this come from, and how do I remove it, given it's automatically generated by electron-builder? Do I need to configure my snap differently, or is this a bug?
```
<NAME> evan.dandrea at canonical.com
Mon Jan 16 09:12:30 UTC 2017
Hi Matthew,
You've set your snap to use the 'platform' interface, but no such interface
exists. If you remove that line and 'unity8', then re-upload, it should
pass review.
Do you recall what you read that referenced a 'platform' interface? If
there's some outdated documentation out there, I'd like to get it fixed.
Let us know if you need any more help, and thanks for snapping Spotify!
```
Answers:
username_1: 1. https://github.com/ubuntu/snapcraft-desktop-helpers/blob/master/snapcraft.yaml#L35
2. https://insights.ubuntu.com/2016/12/08/using-the-ubuntu-app-platform-content-interface-in-app-snaps/
"Using the ‘platform’ plug (content interface) and specifying its default provider (‘ubuntu-app-platform’)"
Bug: this plug should be not added if `ubuntuAppPlatformContent` option is not set.
username_1: Workaround until fixed version is not published: https://github.com/electron-userland/electron-builder/issues/509#issuecomment-269079536
1. edit the generated snapcraft.yaml in `dist/linux-unpacked-snap` (see the sketch below)
2. run `snapcraft snap` (current working directory must be `dist/linux-unpacked-snap`).
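For step 1, the edit is roughly the following (a hypothetical excerpt; the generated file will contain more entries, and the app name is whatever your build produced):
```yaml
# dist/linux-unpacked-snap/snapcraft.yaml (excerpt)
apps:
  myapp:
    plugs:
      - home
      - x11
      - unity7
      # delete these two entries before running `snapcraft snap`:
      # - platform
      # - unity8
```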
username_0: Yeah, I see that I can edit this. Will I need to remove unity7 as well, since I'm also removing unity8?
username_1: Why is unity8 not suitable? unity7 should be here in any case, I think. Please ask why `unity8` is bad.
username_0: It just doesn't pass the reviewing stage on my snap. I will email them, it may just be that unity8 is not ready for my application?
Status: Issue closed
username_1: `unity8` is removed; `platform` is added only if `ubuntuAppPlatformContent` is specified. Eager to know whether it is valid or not.
username_1: Fixed in 11.4.3
username_0: What is `ubuntuAppPlatformContent` supposed to refer to? Scopes?
username_1: @username_0 Please see docs https://github.com/electron-userland/electron-builder/wiki/Options#SnapOptions-ubuntuAppPlatformContent
username_0: Thanks.
It looks like they just use it to save space for Qt apps due to Qt being supported mainstream by Ubuntu. Interesting... :+1:
username_1: @username_0 It is not only Qt, but gconf/gtk/other weird stuff that is required to run electron apps (UI apps). Currently, usability is a nightmare. |
tailwindlabs/tailwindcss | 1121074837 | Title: Grid Template Column - Arbitrary Values Not Working
Question:
username_0: <!-- Please provide all of the information requested below. We're a small team and without all of this information it's not possible for us to help and your bug report will be closed. -->
**What version of Tailwind CSS are you using?**
v3.0.8
**What build tool (or framework if it abstracts the build tool) are you using?**
NextJS v12.0.9
**What version of Node.js are you using?**
v16.13.1
**What browser are you using?**
Firefox v96.0.3
**What operating system are you using?**
Windows 10 Pro with Debian 11 on WSL 2
**Reproduction URL**
Working: https://play.tailwindcss.com/nW7Pq100Wc
Not Working: https://play.tailwindcss.com/x0b1o7wZgO
**Describe your issue**
Trying to use `repeat(auto-fill, 14rem)` inside `grid-col-[]` (so, as: `grid-col-[repeat(auto-fill, 14rem)]`) does not work. As far as I read in the docs, this is how it is supposed to work.
In the working url, I created a separate class as shown below, added it to the div top class, and it works fine.
```css
.grid-list {
grid-template-columns: repeat(auto-fill, 14rem)
}
```
In NextJS, I am able to use `style={{ gridTemplateColumns: "repeat(auto-fill, 14rem)" }}` and it works fine. As shown:

Status: Issue closed
Answers:
username_1: Hey! Thank you for your bug report!
Much appreciated! 🙏
The issue is that you can't use spaces in your arbitrary values because then you create multiple classes. Either remove the spaces or replace them with an `_`. Check out the docs on the topic: https://tailwindcss.com/docs/adding-custom-styles#handling-whitespace
Updated example: https://play.tailwindcss.com/HtJBsQzAke |
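For reference, the class from the report would then look like this (note that the grid utility is spelled `grid-cols-[...]`):
```html
<!-- Underscores inside arbitrary values are converted back to spaces -->
<div class="grid grid-cols-[repeat(auto-fill,_14rem)] gap-4">
  <!-- grid items -->
</div>
```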
reactioncommerce/catalyst | 481370895 | Title: Update catalyst docs navigation to reflect his structure
Question:
username_0: Not all of these pages are v1 but wanted to play out how this structure would work going forward. The content for the proposed pages isn't polished but we can start to use this navigation to pull it together from various notion documents that are floating around.
https://docs.google.com/spreadsheets/d/1qu4q8x4Rl2ftlXr_lRwRNlUsWUapfRXERD1BsH4PUBw/edit#gid=0
Answers:
username_1: <img width="1005" alt="Screen Shot 2019-08-16 at 2 22 47 PM" src="https://user-images.githubusercontent.com/3673236/63199011-5c8a2a80-c031-11e9-9d4f-42223af6cc54.png">
Comparing the doc w/ where we are at now.
I think there will be a need for more dev-focused docs/how to guides in the future. Should all of those go under "Overview", or should we make a separate "Developers" section? Some of these guides are fairly technical and don't apply to everyone: https://catalyst.reactioncommerce.com/#/Introduction/Understanding%20Component%20References https://catalyst.reactioncommerce.com/#/Developers/Developing%20Locally%20Inside%20Another%20Project
username_0: @username_1 Yes, maybe we add a developers section? I'd like to make these docs a little more usable than they are now. Unless they make sense to devs?
username_1: Yeah I think we can put a Dev section in between Foundations & Components.
username_0: @username_1 Can you fill in what items would potentially be in the dev section?
username_1: @username_0 Done. I also added a column on the status of the docs, whether it's written yet or not.
Status: Issue closed
username_2: :tada: This issue has been resolved in version 1.9.4 :tada:
The release is available on:
- [npm package (@latest dist-tag)](https://www.npmjs.com/package/@reactioncommerce/catalyst)
- [GitHub release](https://github.com/reactioncommerce/catalyst/releases/tag/v1.9.4)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket: |
SanderMertens/flecs | 990918307 | Title: Custom relations corrupt ChildOf relations
Question:
username_0: **Describe the bug**
In my code base, adding a custom relation to an entity leads to a crash reported here with 2 screenshots ( https://discord.com/channels/633826290415435777/689025747909607434/884915189735129100 )
A more simple reproduce of ChildOf corruption that does not crash, but yields incorrect program state is listed below
**To Reproduce**
```cpp
#include <flecs.h>
#include <cstdio>

struct MyStruct {
    int Field;
};

void SanityTest(flecs::world &ECS) {
    auto Relation = ECS.entity();
    // Print each entity and its parent as it is destroyed.
    ECS.trigger<const MyStruct>().event(flecs::OnRemove).iter([&](flecs::iter &Iter) {
        for (auto Index : Iter) {
            auto Entity = Iter.entity(Index);
            auto Parent = Entity.get_object(flecs::ChildOf);
            printf("Destroying %s son of %s\n", Entity.name().c_str(),
                   Parent ? Parent.name().c_str() : "<noone>");
        }
    });
    auto Entity = ECS.entity("Entity").add<MyStruct>();
    auto OtherEntity = ECS.entity("OtherEntity");
    ECS.scope(Entity, [&]() {
        auto Child2 = ECS.entity("Child2").add<MyStruct>();
        Child2.add(Relation, OtherEntity);
        auto InnerChild = ECS.entity("InnerChild").child_of(Child2).add<MyStruct>();
    });
    Entity.destruct();
    OtherEntity.destruct();
    Relation.destruct();
}
```
**Expected behavior**
Destroying InnerChild son of **Child2**
Destroying Child2 son of Entity
Destroying Entity son of <noone>
**Actual behavior**
Destroying InnerChild son of **Entity**
Destroying Child2 son of Entity
Destroying Entity son of <noone>
**Additional context**
It repros with both OnAdd and OnRemove, but replacing `.child_of` with `.scope` prevents the repro from occurring.
My idea is that it has something to do with table transitions when custom relations are involved, but I can't really explain the difference between `.child_of` and `.scope` in that case; maybe the state is already corrupted before the InnerChild is even created.
Answers:
username_1: I'll have to look into this, but what I _think_ is happening here is that the `OnRemove` handler is accessing a component from a parent (`Child2`) which has just been deleted itself. Because the delete operation is still in progress, the storage is still "in flux", and as a result the data for the parent looks corrupted.
My guess is that if you print the entity ids vs. the entity names, you should actually see that they are correct.
This is a tricky problem (invoking callbacks in an operation that involves more than one entity/component, while ensuring that for each callback the storage is in a consistent state). I'll check back in when I know more.
username_1: Btw, I don't expect that the custom relationship affects this scenario much. Does the issue still reproduce when you remove this line?
```cpp
Child2.add(Relation, OtherEntity);
```
username_0: But it also applies to OnAdd, and even in case of removal, children should be removed first, right?
Status: Issue closed
username_0: The relation does not matter in the example; accessing parents from triggers is not allowed by design. Closing.
username_0: Updated description and name, as the issue does indeed exist.
username_1: Dumb of me that I didn't see this before: there is an issue in your code! You're using `child_of` inside the `scope` lambda, which causes the entity to have 2 `ChildOf` relations. When I modify the code to this
```cpp
ECS.scope(Entity, [&]() {
auto Child2 = ECS.entity("Child2").add<MyStruct>();
ECS.scope(Child2, [&]() {
auto InnerChild = ECS.entity("InnerChild").add<MyStruct>();
});
});
```
The output is as expected:
```
Destroying InnerChild son of Child2
Destroying Child2 son of Entity
Destroying Entity son of <noone>
```
username_1: Note that the original output is correct for the provided code. `get_object(flecs::ChildOf)` just happens to return the first parent. You can have it return the 2nd parent with `get_object(flecs::ChildOf, 1)`.
Having an entity with multiple `ChildOf` relations can lead to unexpected behavior and is usually not what you want. I'll eventually add a feature that lets you specify that a relation can only have one instance, in which case this code would have triggered an error.
Since the reported behavior is as expected, I'll close the issue.
Status: Issue closed
username_0: Hey, I even wrote it in the original post :D
True, it is correct. Good thing I removed all child_of-s from my code; now I'm safe from such mistakes.
```
Additional context
It repros with both OnAdd and OnRemove, but replacing .child_of with .scope prevents the repro from occurring.
``` |
mtkennerly/dunamai | 787131024 | Title: Tag lookup fails when git refs contain commas
Question:
username_0: Since dunamai parses git tags by running `git log --simplify-by-decoration --topo-order --decorate=full HEAD "--format=%H%d"` and splitting on commas, it will fail with an unhelpful error if any tag contains a comma. For example, with the following tag list:
```
$ git log --simplify-by-decoration --topo-order --decorate=full HEAD "--format=%H%d"
b8847fb838aa45b474537e0c3f8aa5f385cd3ed0 (HEAD -> refs/heads/master, tag: refs/tags/0.0.1,, tag: refs/tags/0.0.1)
```
Running `poetry version` with any poetry-dynamic-versioning config results in this error:
```
$ poetry version

  RuntimeError

  Unable to determine commit offset for ref refs/tags/0.0.1, in data: {'refs/tags/0.0.1': 0}

  at ~/.local/pipx/venvs/poetry/lib/python3.6/site-packages/dunamai/__init__.py:233 in commit_offset
      229│             return self.tag_topo_lookup[self.fullref]
      230│         except KeyError:
      231│             raise RuntimeError(
      232│                 "Unable to determine commit offset for ref {} in data: {}".format(
    → 233│                     self.fullref, self.tag_topo_lookup
      234│                 )
      235│             )
      236│
      237│     @property
```
It's unclear what the issue is, and it's not obvious that the comma is part of the tag here. It might be tricky to parse tags containing commas properly, but it would be nice if the error message were at least more helpful, since we know that commas can trip up the way parsing is currently done.
Answers:
username_1: Thanks for reporting this! Somehow, I didn't realize that commas were valid in a tag. I think it should be easy enough to fix by splitting on `", "` instead of `","` since spaces are not valid in tags, but I'll add some tests to make sure.
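To illustrate the idea, here is a hypothetical sketch (not the actual dunamai code) of parsing the decoration list with the safer separator:
```python
# Split git's decoration list on ", " rather than "," so that commas
# inside tag names survive; spaces are not valid in refs, so ", " is safe.
line = "b8847fb838aa45b474537e0c3f8aa5f385cd3ed0 (HEAD -> refs/heads/master, tag: refs/tags/0.0.1,, tag: refs/tags/0.0.1)"
commit, _, decorations = line.partition(" (")
refs = decorations.rstrip(")").split(", ")
print(refs)
# ['HEAD -> refs/heads/master', 'tag: refs/tags/0.0.1,', 'tag: refs/tags/0.0.1']
```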
Status: Issue closed
username_1: This is now fixed in v1.5.4.
username_0: Thanks for the quick turnaround! |
fatiando/pooch | 725712254 | Title: Update GitHub default branch name for version_dev argument
Question:
username_0: **Description of the desired feature**
After GitHub changed the default branch name from `master` to `main`, we should update Pooch docstrings to include `main` as one of the best choices for the `version_dev` argument, like in:
https://github.com/fatiando/pooch/blob/1f60c6ffe143a233bf15b5d1d1ac5acc648504ba/pooch/core.py#L282-L285
We could plan to change the default value for `version_dev` to `main` after Pooch v2.0.0, deprecating the old default and adding a `DeprecationWarning` in the meantime.
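For illustration, a minimal sketch of the transitional warning (the signature of `pooch.create` is abbreviated and the sentinel handling is an assumption, not the actual Pooch code):
```python
import warnings

_SENTINEL = object()  # distinguishes "not passed" from an explicit "master"

def create(path, base_url, version=None, version_dev=_SENTINEL, **kwargs):
    if version_dev is _SENTINEL:
        warnings.warn(
            "The default value of 'version_dev' will change from 'master' to "
            "'main' after Pooch v2.0.0. Pass version_dev explicitly to silence "
            "this warning.",
            DeprecationWarning,
        )
        version_dev = "master"
    ...
```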
**Are you willing to help implement and maintain this feature?** Yes, but I would let anyone tackle it if they want!
bazelbuild/examples | 896411449 | Title: iOS example failing using Bazel version 4.4.0
Question:
username_0: the error:
```
DEBUG: Rule 'build_bazel_rules_apple' indicated that a canonical reproducible form can be obtained by modifying arguments commit = "adc96e1667416502ca87ccf888b6612821b37a3d", shallow_since = "1522244844 -0400" and dropping ["tag"]
DEBUG: Repository build_bazel_rules_apple instantiated at:
/Users/dongzhao/temp/examples/tutorial/WORKSPACE:3:15: in <toplevel>
Repository rule git_repository defined at:
/private/var/tmp/_bazel_dongzhao/baef564110325a4d5a625796701764d5/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in <toplevel>
ERROR: Traceback (most recent call last):
File "/private/var/tmp/_bazel_dongzhao/baef564110325a4d5a625796701764d5/external/build_bazel_rules_apple/apple/bundling/entitlements.bzl", line 300, column 35, in <toplevel>
"entitlements": attr.label(
Error in label: label() got unexpected keyword argument 'single_file'
ERROR: Skipping '//ios-app:ios-app': error loading package 'ios-app': in /private/var/tmp/_bazel_dongzhao/baef564110325a4d5a625796701764d5/external/build_bazel_rules_apple/apple/ios.bzl: in /private/var/tmp/_bazel_dongzhao/baef564110325a4d5a625796701764d5/external/build_bazel_rules_apple/apple/bundling/binary_support.bzl: Extension file 'apple/bundling/entitlements.bzl' has errors
WARNING: Target pattern parsing failed.
ERROR: error loading package 'ios-app': in /private/var/tmp/_bazel_dongzhao/baef564110325a4d5a625796701764d5/external/build_bazel_rules_apple/apple/ios.bzl: in /private/var/tmp/_bazel_dongzhao/baef564110325a4d5a625796701764d5/external/build_bazel_rules_apple/apple/bundling/binary_support.bzl: Extension file 'apple/bundling/entitlements.bzl' has errors
INFO: Elapsed time: 0.060s
INFO: 0 processes.
```
Answers:
username_1: Doesn't work in bazel 5.0.0 either.
My favorite part of learning this build system is discovering all the tutorials and examples are all flat-out broken. |
jupyterhub/binderhub | 243461602 | Title: Allow copying build logs easily
Question:
username_0: The terminal can be fickle to copy build logs from. We should allow people to download them.
Answers:
username_1: @username_0 do the build logs still exist on the page even though they're no longer visible within the xterm window? Now that we're using the web clipper package could we just create a "copy logs" button?
username_0: They do, and you should be able to make a copy logs button! It might be
slightly tricky, since xterm.js only keeps some amount of scrollback, but
would be not too difficult to add I think.
username_1: Would another option be to do this outside of xterm entirely? Basically any time some new data is written we can append that to a hidden div or something, and then copy/paste just copies from that div?
username_0: Yes, but we have to be cautious about memory usage. We can easily crash a user's browser (at least our tab) that way.
username_2: Something I noticed:
If I do not uncollapse the log viewer right after loading the URL (and instead let Binder do its thing and uncollapse the viewer itself at the end), neither Firefox nor Chromium let me select any text in it.
Firefox does not let me copy the text at all.
Chromium requires me to right-click -> "Copy". Neither Ctrl-C nor Linux' usual select-to-copy copying work.
username_1: Arg - thanks for your input @username_2!! We agree this is a pretty frustrating piece of the Binder UX right now :-/ if we can store the text in an element somewhere on the page, it should be pretty easy to use `clipboard.js` to give a "copy" button, but we need to make sure not to accidentally crash something in the process
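For illustration, a minimal sketch of that idea (the `#copy-logs` button, the `term` instance, and the use of the standard Clipboard API instead of clipboard.js are all assumptions of the sketch):
```js
// Keep a plain-text copy of every log line outside xterm.js,
// since xterm only retains a limited scrollback buffer.
const logLines = [];

function writeLog(line) {
  logLines.push(line); // raw text for copying (mind the memory growth!)
  term.writeln(line);  // still render the line in the xterm.js terminal
}

// Hypothetical "copy logs" button wired to the stored text.
document.querySelector('#copy-logs').addEventListener('click', () => {
  navigator.clipboard.writeText(logLines.join('\n'));
});
```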
Status: Issue closed
username_3: Done in https://github.com/jupyterhub/binderhub/pull/1335
username_2: Thank you! |
novium/achievements | 255982667 | Title: D9 - Documentation
Question:
username_0: # Documentation
_Document the interfaces of non-trivial modules so that an outsider can program against them._
Good documentation is of the utmost importance in development. In a course as short as this one it is hard to experience the benefit of documentation you write yourself, since not enough time passes during the course for what you have developed to fade sufficiently from memory. (Feel free to dig out an old Haskell assignment from PKD and try to follow its logic and change it.)
* It is hard to balance the amount of documentation required to describe something. Who is the audience? What can you expect of the reader? What are they trying to accomplish?
* What is a good balance between too little information and too much? What is an appropriate level of detail?
* How much of the internal implementation should you describe? Why?
* How do you describe complex and ambiguous processes?
In functional languages like Haskell, pre- and postconditions are a good way to document expectations and promises without exposing unnecessary details. Java programs typically have tons of side effects -- what consequences does that have for pre- and postconditions?
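For instance, a pre-/postcondition header in Haskell might look like this (a hypothetical example, not taken from any assignment):
```haskell
{- indexOf x xs
   PRE:  x occurs in xs
   POST: the index of the first occurrence of x in xs
-}
indexOf :: Eq a => a -> [a] -> Int
indexOf x xs = length (takeWhile (/= x) xs)
```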
[Feel free to leave comments and report bugs](https://github.com/IOOPM-UU/achievements/commits/master/D9.md) (click on the latest commit)<issue_closed>
Status: Issue closed |
worldiaday/website | 892353849 | Title: Usability and Accessibility
Question:
username_0: Usability test setup
Usability test report
Accessibility guidelines
Answers:
username_0: @samashlee Would you be able to connect with ilumino about working on a plan for the new website? We are preparing a project document #3 and will be investigating CMS requirements #9 |
beetbox/beets | 157965096 | Title: bs1770gain uses wrong parameters
Question:
username_0: Applying replaygain with bs1770gain uses the parameter `-it` which means get integrated loudness and calculate true peak.
Specifying -i is just superfluous because that's the default anyway and -t is wrong for two reasons:
1. Replaygain specifies sample peak, not any other fancy peak calculation methods
2. Calculating true peak is *horrrrribly* slow, the whole process is like 10x slower with this
Solution: invoke bs1770gain with parameter -p instead to calculate the correct peak gain and get a major speed boost as a bonus.
Answers:
username_1: This does look like a bug! I don't know why those were chosen originally, but let's fix them.
Status: Issue closed
|
open-telemetry/community | 505413770 | Title: CI/CD Build Tooling Information
Question:
username_0: Hey y'all, Bob from the opentelemetry-php community here. Wanted to make sure our SIG understood properly:
(a) Are there common CI/CD tooling bits that we are using for opentelemetry as a whole?
(b) If so, how do we gain access to those?
Answers:
username_1: @username_0 nope, there is no prescriptive or paid for tools that we are all using. For now it is just the best tool for the job.
username_1: please re-open if you have more questions
Status: Issue closed
|
tomLadder/react-native-echarts-wrapper | 494554219 | Title: Tried to register two views with the same name RNCWebView?
Question:
username_0: any help would be appreciated!
Answers:
username_1: Yes it's probably in an issue with the dependencies. I will have a look at it.
username_2: @username_1 Any updates on this? Thanks for you continued great work!
username_1: I'm finished with almost all tests in v2.0.0. Hopefully i can release it this weekend.
Tom
Status: Issue closed
username_1: Just released [v2.0.0](https://www.npmjs.com/package/react-native-echarts-wrapper/v/2.0.0) 🎉 |
phetsims/chipper | 188090947 | Title: move some query parameters to phet-io's schema
Question:
username_0: Please identify which of the query parameters below (currently in chipper.QueryParameters) should be moved to phet-io's schema:
```js
// when running a simulation using phetio.js, outputs states and deltas within the phetioEvents data stream, see phetio.js
'phet-io.emitDeltas': { type: 'flag' },
// when emitting deltas using phetio.js (see phet-io.emitDeltas) emit deltas that are empty, to simplify playback in some systems like Metacog.
'phet-io.emitEmptyDeltas': { type: 'flag' },
// emit the Scenery input events
'phet-io.emitInputEvents': { type: 'flag' },
// when running a simulation using phetio.js, outputs the state at the end of every frame
'phet-io.emitStates': { type: 'flag' },
// will output type documentation to the console, see https://github.com/phetsims/phet-io/issues/218
'phet-io.docs': { type: 'flag' },
// evaluate expressions on phet-io wrapper objects, like: ?phet-io.expressions=[["beaker.beakerScreen.soluteSelector","setVisible",[true]]]
'phet-io.expressions': {
type: 'string',
defaultValue: null
},
// Specifies where to log phetioEvents
'phet-io.log': {
type: 'string',
defaultValue: null,
validValues: [
null, // no logging
'console', // stream to console in JSON format
'lines' // stream colorized human-readable events to the console (Chrome and Firefox only)
]
},
// Causes a phet-io simulation to launch, even without a wrapper "go-ahead" step, see phet-io#181
'phet-io.standalone': { type: 'flag' },
// When running as phet-io assertions are normally thrown when uninstrumented objects are encountered.
// Setting this to false will allow the simulation to proceed. Useful for partially instrumented simulations.
'phet-io.validateTandems': {
type: 'flag',
defaultValue: true
},
```
Answers:
username_0: Searching for `'phet-io.`, the only uses that I see outside of the phet-io repository are:
Sim.js line 60:
```js
if ( phet.chipper.getQueryParameter( 'phet-io.standalone' ) || phet.chipper.brand !== 'phet-io' ) {
```
Tandem.js line 241:
```js
if ( phet.chipper.brand === 'phet-io' && phet.chipper.getQueryParameter( 'phet-io.validateTandems' ) !== 'false' ) {
```
username_0: I'm trying to wrap up https://github.com/phetsims/chipper/issues/516, so raising the priority of this to high.
username_1: However, the simulation query parameters should remain in chipper/initialize-globals. I'll take a look at them shortly.
username_1: I reviewed the usage of phet-io query parameters in initialize-globals, made 2 cleanup commits (above) and everything else looks good.
@username_0 anything else to do here?
username_0: I don't see `phet.chipper.queryParameters` being used for any of these query parameters. I see usages of `QueryStringMachine.get`. E.g. phetio.js line 317:
```js
if ( QueryStringMachine.get( 'phet-io.emitInputEvents', { type: 'flag' } ) ) {
```
... and similarly for all of the query parameters identified in https://github.com/phetsims/chipper/issues/517#issue-188090947 (except 'pheti-io.docs', which was deleted.) You've duplicated the schemas, and this is an opportunity for those query parameters to get out of sync with initialize-globals.js.
username_1: Thanks for clarifying the problem @username_0. I've factored out the simulation query parameters in the above commits. The wrapper query parameters do not (and should not) use chipper's initialize-globals, so I've left them alone for now. Would you like to review?
Status: Issue closed
username_0: 👍 |
cp3-llbb/CommonTools | 107191076 | Title: Link to SAMADhi
Question:
username_0: For now createSampleJson.py crashes, since setup_env.sh (rightfully) does not mention CMSSW and so does not know where to look for SAMADhi.
Answers:
username_0: I don't think this `createSampleJson.py` is actually used by anyone anywhere, if I am not mistaken. Any objection to deleting the script altogether?
username_1: I'm still using it... (didn't follow all the changes to this part yet, where is the equivalent of this with the analysis json?). Maybe we could change it into an optional feature of `runPostCrab.py`?
username_2: We don't use it because the task of creating the JSONs for the factories has been moved to `condorTools` which, by the way, will not work either if simply using `setup_env.sh`.
In the end it might still be useful to have that feature as a standalone script, bearing in mind you'll have to be in a CMS environment to be able to run it? And if @username_1 uses it that settles the question! ;)
username_0: I actually don't remember the use case which lead to opening this issue...
if the script is used, then yes of course let's keep it :) but we can probably close this issue if @username_1 confirms it does work correctly ?
username_1: if it finds SAMADhi (in practice, after `cmsenv`) it works fine indeed, so we can close this issue (we could add a more descriptive error message, or move it to the GridIn or SAMADhi package)
username_0: well, this is about creating a json file specifically for CommonTools usage... so it's probably better to keep it here, and add a more descriptive error message :) I'm changing the title of the issue !
username_1: I can prepare a pull request for that
username_0: I assign you to it then, thanks !
Status: Issue closed
|
Restuta/ingenio-hackathon-2.0 | 113991815 | Title: Name Your Price page
Question:
username_0: Allows a user to select one of 3 options for the price OR allows dragging (I don't see much value in this dragging for now).
What is more important, I think, is to show a time-based simulation of what happens after they press "Negotiate with Advisors" (wording TBD and **is** important)
Answers:
username_0: @muthur we need to polish it a little, like show "negotiating only after we select best deal, etc"
Status: Issue closed
|
rust-lang/cargo | 829396651 | Title: Cargo build forcing stable channel
Question:
username_0: <!-- Thanks for filing a 🐛 bug report 😄! -->
**Problem**
<!-- A clear and concise description of what the bug is. -->
<!-- including what currently happens and what you expected to happen. -->
I'm trying to build a project with `cargo build`. It seems to be locking to the stable channel, even though I'm overriding with a toolchain file as well as with rustup. The project contains a `rust-toolchain` file, locked to `nightly-2021-01-01`. When I run `cargo build`, I get:
```
error: failed to parse manifest at `/home/forest/Documents/git/veloren/Cargo.toml`
Caused by:
the cargo feature `named-profiles` requires a nightly version of Cargo, but this is the `stable` channel
See https://doc.rust-lang.org/book/appendix-07-nightly-rust.html for more information about Rust release channels.
```
The features that we're using in `Cargo.toml` are:
```
cargo-features = ["named-profiles","profile-overrides"]
```
**Steps**
<!-- The steps to reproduce the bug. -->
I tried reproducing this inside a Docker container, but I wasn't able to. I'd be happy to give any other information that is helpful.
**Notes**
Output of `cargo version`:
<!-- Also, any additional context or information you feel may be relevant to the issue. -->
<!-- (e.g rust version, OS platform/distribution/version, target toolchain(s), release channel.. -->
Operating system - Garuda Linux
```
$ cargo version
cargo 1.50.0

$ rustup toolchain list
stable-x86_64-unknown-linux-gnu
nightly-2021-03-01-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default) (override)
```
Answers:
username_1: Do you maybe have a [directory override](https://rust-lang.github.io/rustup/overrides.html#directory-overrides) or `RUSTUP_TOOLCHAIN` env var set? Those take precedence over the rust-toolchain fail (see https://rust-lang.github.io/rustup/overrides.html).
username_1: Oh, I also think `rustup show` will tell you why it is picking a particular toolchain.
username_0: The only Rust env vars I have set up are:
```
$ env | grep RUST
RUSTFLAGS=-C target-cpu=native
RUST_BACKTRACE=1
```
With rustup show I get:
```
Default host: x86_64-unknown-linux-gnu
rustup home: /home/forest/.rustup
installed toolchains
--------------------
nightly-2021-01-01-x86_64-unknown-linux-gnu (default)
nightly-2021-03-01-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu
active toolchain
----------------
nightly-x86_64-unknown-linux-gnu (directory override for '/home/forest/Documents/git/veloren')
rustc 1.52.0-nightly (f98721f88 2021-03-10)
```
I even went and removed the stable toolchain, but got the same result.
username_1: Maybe check that the `cargo` executable is the rustup wrapper. Run `which cargo` and make sure it is `~/.cargo/bin/cargo`. You may have another installation of `cargo` coming from some other directory in your PATH.
username_1: Without more information I'm going to close this issue. If you have further issues with using rustup or setting up overrides, I suggest trying one of the [user forums](https://users.rust-lang.org/) or chat platforms for help. Rustup also has its own issue tracker at https://github.com/rust-lang/rustup/issues.
Status: Issue closed
|
toggl/toggldesktop | 276966486 | Title: [macOS] Multiple workspaces, project with same name problem
Question:
username_0: If I have projects with the same name in different workspaces the project only shows up in the workspace that is higher up in the list.
Status: Issue closed
Answers:
username_1: Same as #2284
username_0: Sorry about that, I searched for a related issue but I guess i missed it. |
alerta/alerta-contrib | 264302704 | Title: Add integration for Rocket chat
Question:
username_0: see https://rocket.chat/
Answers:
username_1: Please excuse me, is there any progress with this case?
username_0: No. I was hoping someone who knew something about Rocket Chat would be inspired to contribute. Having someone willing to test and give feedback would be appreciated though so let me know if you could help with that. Thanks.
Status: Issue closed
username_0: Feedback welcome. |
sxs-collaboration/spectre | 268515235 | Title: Simplify OptionContext
Question:
username_0: If OptionContext can be used without the YAML::Mark, then it would be much simpler and could be placed in a separate header file from Options.hpp. This would allow us to reduce the number of times we need to include Options.hpp, which is a rather large chunk of code.<issue_closed>
Status: Issue closed |
cltk/cltk | 301608660 | Title: Add Swadesh list for Odia
Question:
username_0: @kylepjohnson In reference to #655, I will add Swadesh list for Odia from [here](http://ielex.mpi.nl/language/Oriya/). The order of the list is not in the order of Swadesh lists in Wikipedia. I will rearrange them, maintain the order according to the Swadesh list format and then add it. |
spaam/svtplay-dl | 294225555 | Title: #777 Workaround until svtplay-dl is updated (use youtube-dl)
Question:
username_0: **Related information**
https://github.com/username_1/svtplay-dl/issues/777
You can use **youtube-dl** until the issue with www.dplay.se is fixed in **svtplay-dl**
Maybe some good **python** developer can fix the issue with www.dplay.se in **svtplay-dl** by looking at the source code of **youtube-dl**, which is also developed in **python**
**Website for youtube-dl**
https://rg3.github.io/youtube-dl/
**youtube-dl on github (source code)**
https://github.com/rg3/youtube-dl/
---
I tried to download from www.dplay.se with **youtube-dl** and it works fine!
**Show Quality Formats** = `-F`
`.\youtube-dl.exe -F https://www.dplay.se/videos/sofias-anglar/sofias-anglar-100`
**Output Quality Formats**
```
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading webpage
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading token
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading JSON metadata
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading JSON metadata
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading m3u8 information
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading MPD manifest
[info] Available formats for 40158:
format code              extension  resolution  note
hls-159992mp4a.40.2-und  mp4        audio only  [und]
hls-64000mp4a.40.2-und   mp4        audio only  [und]
hls-64                   mp4        audio only    64k , mp4a.40.2
hls-234                  mp4        320x180      234k , avc1.42C015, 25.0fps, video only
hls-463                  mp4        480x270      463k , avc1.42C01F, 25.0fps, video only
hls-859                  mp4        640x360      859k , avc1.4D401E, 25.0fps, video only
hls-1743                 mp4        960x540     1743k , avc1.4D401F, 25.0fps, video only
hls-3328                 mp4        1280x720    3328k , avc1.64001F, 25.0fps, video only
hls-6508                 mp4        1920x1080   6508k , avc1.640029, 25.0fps, video only (best)
```
**Download movie with 1280x720 resolution and with audio 64k , mp4a.40.2** = `-f`
`.\youtube-dl.exe -f hls-3328+hls-64 https://www.dplay.se/videos/sofias-anglar/sofias-anglar-100`
**Output Downloading**
```
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading webpage
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading token
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading JSON metadata
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading JSON metadata
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading m3u8 information
[DPlay] sofias-anglar/sofias-anglar-1001: Downloading MPD manifest
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 251
[download] Destination: Dödsbudet kom som en chock-40158.fhls-3328.mp4
[download]   1.8% of ~1002.95MiB at 5.26MiB/s ETA 04:08
```
Answers:
username_1: this is not helping fix the issue.
Status: Issue closed
username_2: No subtitles with youtube-dl, and I'm wondering if @username_1 will fix it... but I'll cross my fingers that he does :)
gildor478/forge-dump-migration-test | 713187648 | Title: address to get via http incorrect
Question:
username_0: __This bug has been migrated from artifact #1404 on forge.ocamlcore.org. It was assigned to [user102](https://forge.ocamlcore.org/users/user102). It was closed on 2019-08-27 06:54:29.__
## [user919](https://forge.ocamlcore.org/users/user919) posted on 2014-06-21 15:08:51:
darcs get http://darcs.ocamlcore.org/repos/ounit/ounit
should be,
darcs get http://darcs.ocamlcore.org/repos/ounit ounit
## [user102](https://forge.ocamlcore.org/users/user102) replied on 2019-08-27 06:54:29:
Not using darcs anymore.
The new URL to get ounit can be found there:
https://github.com/username_0/ounit<issue_closed>
Status: Issue closed |
TommyTheBlackbird/CDI | 115066778 | Title: New event type "Mostrar HDMI"
Question:
username_0: For the new displays with an external HDMI source input, a new event type, "Mostrar HDMI" (Show HDMI), needs to be created so that showing the signal coming from the external HDMI input can be scheduled for a specific time.
ZJONSSON/node-unzipper | 482893953 | Title: can not unzip an empty folder
Question:
username_0: hi,
I have a zip called 111.zip.
It was zipped from a folder called 111.
I want to unzip '111.zip', and I don't know whether there is anything in the folder or not.
So when the folder called 111 has nothing in it, I get nothing after unzipping `111.zip`, where I expected an empty folder called 111.
please help~
Answers:
username_1: Can you please clarify: Does the zip file only contain an empty folder called 111? And you want the unzipper to just create this empty folder?
username_0: @username_1
yes, the zip file only contains an empty folder called 111, and when I unzip it with unzipper, I get nothing, where I expect an empty folder.
I use unzipper for lots of zip files called 111.zip; some of them contain a folder with files in it while others just contain an empty folder. Maybe I can handle it as a special case, but I would prefer to just get an empty folder.
username_1: Good point - the expected behavior should be to extract exactly what is in the zipfile, empty folders included.
username_0: @username_1
🎉🎉🎉, so how long will it take until I can try this feature?
I'd like to update my schedule depending on when I can use this feature~
username_2: I just stumbled along this behavior too. I'd be very pleased if this would be added soon.
username_3: @username_1 same issue here, when will you fix that?
Our users drop zip files with some empty folders, like pre-formatted trees. It would be great to be able to detect/create empty folders when unzipping them.
dart-lang/sdk | 154668366 | Title: Dart VM exposes ImmutableMap type from dart:core
Question:
username_0: Consider this program:
```
main() {
print(ImmutableMap);
}
```
When run with the dart VM, this happens:
```
$ dart -c immutable_map.dart
ImmutableMap
```
When compiled with dart2js, this happens:
```
$ dart2js immutable_map.dart
immutable_map.dart:2:9:
Warning: Cannot resolve 'ImmutableMap'.
print(ImmutableMap);
^^^^^^^^^^^^
Dart file (immutable_map.dart) compiled to JavaScript: out.js
```
I would expect that the type ImmutableMap isn't exposed by dart:core on the Dart VM.
Answers:
username_0: This class is defined in [runtime/lib/immutable_map.dart](https://github.com/dart-lang/sdk/blob/master/runtime/lib/immutable_map.dart#L6)
username_0: @lrhn
username_1: I think they should just be made private, like we do for _ImmutableList.
username_2: I see an error now form the CFE in the dart vm.
Status: Issue closed
|
miragejs/miragejs | 699093664 | Title: Unit testing `ECONREFUSED 127.0.0.1:80` with Axios and Mocha
Question:
username_0: I use MirageJs on a Vue.js application with Axios. It works very well.
However, when I run unit tests, no requests are intercepted by MirageJS.
_I use `Mocha` with the `@vue/cli-plugin-unit-mocha` plugin._
```typescript
let server: Server<AppRegistry>;
beforeEach(() => {
server = makeServer({ environment: 'test' });
});
afterEach(() => {
server.shutdown();
});
it('should handle Axios request', async () => {
const url = '/fake-url';
server.get(url, () => new Response(200));
const { status } = await axios.get(url);
assert.strictEqual(status, 200);
});
```
In addition, I get the following error:
```
Error: Error: connect ECONNREFUSED 127.0.0.1:80
at Object.dispatchError (node_modules\jsdom\lib\jsdom\living\xhr-utils.js:54:19)
at Request.<anonymous> (node_modules\jsdom\lib\jsdom\living\xmlhttprequest.js:675:20)
at Request.emit (events.js:228:7)
at Request.onRequestError (node_modules\request\request.js:877:8)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) undefined
Error: Network Error
at createError (dist\js\webpack:\node_modules\axios\lib\core\createError.js:16:1)
at XMLHttpRequest.handleError (dist\js\webpack:\node_modules\axios\lib\adapters\xhr.js:83:1)
at XMLHttpRequest.<anonymous> (node_modules\jsdom\lib\jsdom\living\helpers\create-event-accessor.js:33:32)
at innerInvokeEventListeners (node_modules\jsdom\lib\jsdom\living\events\EventTarget-impl.js:316:27)
at invokeEventListeners (node_modules\jsdom\lib\jsdom\living\events\EventTarget-impl.js:267:3)
at XMLHttpRequestEventTargetImpl._dispatch (node_modules\jsdom\lib\jsdom\living\events\EventTarget-impl.js:214:9)
at fireAnEvent (node_modules\jsdom\lib\jsdom\living\helpers\events.js:17:36)
at requestErrorSteps (node_modules\jsdom\lib\jsdom\living\xhr-utils.js:121:3)
at Object.dispatchError (node_modules\jsdom\lib\jsdom\living\xhr-utils.js:51:3)
at Request.<anonymous> (node_modules\jsdom\lib\jsdom\living\xmlhttprequest.js:675:20)
at Request.onRequestError (node_modules\request\request.js:877:8)
at Socket.socketErrorListener (_http_client.js:415:9)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21)
```
I tried to modify the `urlPrefix`, however the requests are not intercepted and are sent directly to the real server.
I also tried to use `axios` without interceptors or custom config, without success. In issue #307, the problem was related to an Axios interceptor, which is not the case here.
Do you have any suggestions?
Answers:
username_1: Any chance those unit tests are being run in node?
username_0: Yes, it's run with Node `v12.15.0`:
```bash
"C:\Program Files\nodejs\node.exe" C:\...\node_modules\@vue\cli-service\bin\vue-cli-service.js test:unit --watch --require tsconfig-paths/register --require @babel/register --ui bdd C:\...\src\app\AppServer.spec.ts
```
Could the problem be related to compatibility with Node?
username_1: Yeah Mirage doesn't run in non-browser environments. It can work in node if you have something like jsdom running (like Jest has). But sadly today it doesn't work without that.
username_2: What you need is a way to have `XMLHttpRequest` available in your test environment. I am using Jest with jsdom which provides it. However there is a [standalone package that emulates XHR](https://www.npmjs.com/package/xmlhttprequest). Could be worth a try to set `global.XMLHttpRequest` or `window.XMLHttpRequest` using that.
If you manage to have XHR available in your app, all you need to do is to tell Axios to use XHR instead of the default node HTTP module:
```
import XhrAdapter from 'axios/lib/adapters/xhr'
axios.defaults.adapter = XhrAdapter
```
Hope this helps!
username_0: Thank you for all your help.
However, the method with the addition of `jsdom` does not work for me. I had too many problems trying to inject the `XMLHttpRequest` globally.
I've already spent too much time trying to get the unit tests to work with `miragejs` and `jsdom`... We turned to [a package](https://github.com/mswjs/msw) that better suits our needs.
username_1: Thanks for the input + sorry about that! Curious if you ended up using MSW in the browser or in node?
username_0: We use MSW in the browser with the [`setupWorker`](https://mswjs.io/docs/api/setup-worker) method and in our unit tests (Node) with the [`setupServer`](https://mswjs.io/docs/api/setup-server) method.
All the logic remains the same and is sent as a parameter of the function.
username_1: For future travelers: I figured out the issue!
You actually don't need to run
```js
axios.defaults.adapter = XhrAdapter
```
At least in my example, when I ran the test Axios defaulted to using the XHR adapter.
Also, the default Vue Test Utils + Vue CLI setup comes with jsdom, so that wasn't the issue either. There was a `window` object available, so Mirage (via Pretender) was able to successfully monkey patch `window.XMLHttpRequest`.
The problem was this line of Axios' XHR adapter: https://github.com/axios/axios/blob/master/lib/adapters/xhr.js#L28
Since it just calls `new XMLHttpRequest()`, Axios is using a different `XMLHttpRequest` object than `window`'s version, since the node global is not window.
The fix is to force the node global to use `window`'s version in your tests:
```js
describe("HelloWorld.vue", () => {
let server;
let originalXMLHttpRequest = XMLHttpRequest;
before(() => {
server = makeServer({ environment: "test" });
// Force node to use the monkey patched window.XMLHttpRequest
// This needs to come after `makeServer()` is called.
// eslint-disable-next-line no-global-assign
XMLHttpRequest = window.XMLHttpRequest;
});
after(() => {
server.shutdown();
// Restore node's original window.XMLHttpRequest.
// eslint-disable-next-line no-global-assign
XMLHttpRequest = originalXMLHttpRequest;
});
```
You can find the example here: https://github.com/miragejs/examples/tree/master/vue-axios-test-utils
username_1: Closing for now but please leave any more comments if you run into anything!
Status: Issue closed
|
chakra-ui/chakra-ui | 742406986 | Title: docs(gatsby-plugin): update package name
Question:
username_0: # Bug report
## Describe the bug
https://github.com/chakra-ui/chakra-ui/blob/develop/website/pages/guides/integrations/with-gatsby.mdx
shows the old package name.
Maybe in other locations as well.
## Expected behavior
Should be `@chakra-ui/gatsby-plugin`
## Additional context
The package got renamed after v1 release
Status: Issue closed
Answers:
username_1: I didn't realize we had a ticket for this. I fixed it earlier in https://github.com/chakra-ui/chakra-ui/pull/2435
ProPra16/programmierpraktikum-abschlussprojekt-nimmdochirgendeinennamen | 165187811 | Title: gradle stuff
Question:
username_0: The structure that gradle builds does not match ours, which is defined by the almost absolute paths in the imports.
The src folder is not recognized at all this way, which prevents "gradle run" from getting the right references.
I will get to it tomorrow - anyone who wants to can join in. I would need other systems for testing.
Answers:
username_1: How can we help you?
username_0: The restructuring is complete.
Status: Issue closed
|
Jinmo/jekyll-casper | 292124009 | Title: How to set cover?
Question:
username_0: username_1,
I want to know how to set the cover.
Also, how do I make the logo link jump to the home page? (Right now it links to localhost:4000.)
Thanks.
Status: Issue closed
Answers:
username_1: Hello! You can set `cover` property in the post.
username_0: Yep, I found it.
Thanks. |
tlaplus/tlaplus | 407354607 | Title: Add TLA+ user/discussion Google group to Toolbox's list of default RSS feeds
Question:
username_0: Add TLA+ user/discussion Google group to Toolbox's list of default RSS feeds.
If we consider this too annoying, it could still be part of the Toolbox's list of RSS feeds but turned off.
Status: Issue closed
Answers:
username_0: To turn all or individual feeds off:
 |
UBCCM/UBC-DEF | 578234820 | Title: New Component: Tab Accordion
Question:
username_0: - [x] I have performed the [cursory search](https://github.com/issues?utf8=%E2%9C%93&q=is%3Aissue+repo%3Aubccm%2Fubc-def)
**Problem description**
- One of the widely used components in CLF7, tab navigation.
- It's similar to a regular accordion, but with the selector above the content region, allowing users to browse across all available options in a single view.
- Alternate display options will be needed for narrower device widths.
**Proposed solution**
- Starting with the same user experience on mobile devices, it has a mobile first experience to display the whole tab accordion as a regular accordion.
- With space allowed, the selector (accordion headings) will display above content region. Because of this, HTML markup will have to be different from regular accordion (#50).
- If javascript is disabled, content will still be visible from accessibility standpoint. The current use of `hidden` attribute can be used to display content
- User experience will be similar to accordion, with the difference on larger display area (e.g. desktop, tablet), where the selectors will be above the content region.
- Responsive behaviour, as mentioned above, HTML is mobile first with same behaviour as regular accordion. The tab above content will be trigger when reaching certain breakpoints.
- Tab accordion uses ID-based triggers, as it needs to be able to relate the content region from the trigger. This is different from the currently proposed regular accordion.
- Accordion selector will be keyboard accessible, with the content region only accessible when it's visible.
- `aria-expanded` added to the selector, `hidden` attribute on the content block
- a plus/minus sign, bold title, and thicker border bottom to provide a visual indicator on the currently display/selected accordion title
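A minimal markup sketch of the ID-based trigger/content pairing described above (element names, classes, and attributes are illustrative placeholders, not the final UBC-DEF markup):
```html
<div class="tab-accordion">
  <h3>
    <button id="tab-1" aria-controls="panel-1" aria-expanded="true">Section one</button>
  </h3>
  <div id="panel-1" role="region" aria-labelledby="tab-1">Content one</div>

  <h3>
    <button id="tab-2" aria-controls="panel-2" aria-expanded="false">Section two</button>
  </h3>
  <div id="panel-2" role="region" aria-labelledby="tab-2" hidden>Content two</div>
</div>
```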
*Note:*
- For review: the currently proposed accordion markup is different from the other accordion markup; we should consider whether they can use the same markup.
- JavaScript to be considered separately
**Alternatives considered**
- Considered the existing behaviour of the regular accordion, and regular tab system. The proposed solution is the hybrid of the two
**Additional context**
Demo: https://codepen.io/username_0/pen/zYGpPOX
**Feature Review Checklist:**
- [ ] HTML Markup
- [ ] CSS
- [ ] JS (if applicable)
- [ ] Responsive Behaviour
- [ ] UX
- [ ] Design
- [ ] Web Accessibility
- [ ] Documentation |
objectbox/objectbox-java | 254793293 | Title: io.objectbox.exception.DbException
Question:
username_0: Sometimes, the app will crash cause this error:
Could not create directory: /data/user/0/org.rdgy.pinny/files/objectbox/objectbox
org.rdgy.pinny.App.onCreate(App.java:80)
how do solve it?
Answers:
username_1: Closing this issue due to inactivity. :zzz: Feel free to re-open with more details or submit a new issue.
Status: Issue closed
|
Bioconductor/Rsamtools | 502380576 | Title: Error with getSeq from opened fasta file (Rsamtools 2.0.2)
Question:
username_0: Hi,
I am using Rsamtools 2.0.2, and I have run into an issue when reading sequences from fasta files.
My issue is similar to https://github.com/Bioconductor/Rsamtools/issues/5 , but not exactly the same. When I try to `getSeq` from a fasta file I get an error message. This is my script:
```
GTFfile = "Homo_sapiens.GRCh37.87.gtf"
FASTAfile = "Homo_sapiens.GRCh37.dna.toplevel.fa"
FASTA <- FaFile(FASTAfile)
open(FASTA)
getSeq(FASTA, GRanges("chr1", IRanges(1,5)))
```
I get this:
```
Error in value[[3L]](cond) : record 1 (chr1:1-5) failed
file: Homo_sapiens.GRCh37.dna.toplevel.fa
```
If I run it on Windows, I get this:
```
[E::faidx_fetch_seq2] Failed to retrieve block. (Seeking in a compressed, .gzi unindexed, file?)
Error in value[[3L]](cond) : record 1 (chr1:1-5) failed
file: Homo_sapiens.GRCh37.dna.toplevel.fa
```
The reference files are standard Ensembl reference files, downloaded from ftp://ftp.ensembl.org/pub/grch37/release-87/fasta/homo_sapiens/dna .
Thank you for your help.
Farshad
Answers:
username_1: Can you please provide your `sessionInfo()`? Thanks.
username_0: Sorry, I should have added that already:
```
R version 3.5.1 (2018-07-02)
Platform: x86_64-conda_cos6-linux-gnu (64-bit)
Running under: CentOS release 6.9 (Final)
Matrix products: default
BLAS/LAPACK: /data/miniconda3/lib/R/lib/libRblas.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats4 parallel stats graphics grDevices utils datasets
[8] methods base
other attached packages:
[1] xtable_1.8-3 rtracklayer_1.42.2 Rsamtools_1.34.0
[4] Biostrings_2.50.2 XVector_0.22.0 GenomicRanges_1.34.0
[7] GenomeInfoDb_1.18.1 IRanges_2.16.0 S4Vectors_0.20.1
[10] BiocGenerics_0.28.0
loaded via a namespace (and not attached):
[1] zlibbioc_1.28.0 GenomicAlignments_1.18.1
[3] BiocParallel_1.14.2 BSgenome_1.50.0
[5] lattice_0.20-38 tools_3.5.1
[7] SummarizedExperiment_1.12.0 grid_3.5.1
[9] Biobase_2.42.0 matrixStats_0.54.0
[11] yaml_2.2.0 Matrix_1.2-15
[13] GenomeInfoDbData_1.2.1 BiocManager_1.30.4
[15] bitops_1.0-6 RCurl_1.95-4.11
[17] DelayedArray_0.8.0 compiler_3.5.1
[19] XML_3.98-1.16
```
username_1: Thanks but in your original post you said you were using Rsamtools 2.0.2 but now your `sessionInfo()` reports that you were using Rsamtools 1.34.0. We need to see the full transcript of the R session that is producing the error. The transcript should contain the call to `sessionInfo()` at the end of the session. So we know exactly what command you used, what error you got, on which OS you are, and what version of Bioconductor/R you use. Thanks!
username_2:
```
+ try(
+ print(getSeq(FASTA, GRanges(chrom, IRanges(41194312,41194322))))
+ )
+ }
A DNAStringSet instance of length 1
width seq names
[1] 11 TTTTGTTTTGT 1
A DNAStringSet instance of length 1
width seq names
[1] 11 GCCATAAAAAA 2
A DNAStringSet instance of length 1
width seq names
[1] 11 GTATTTACAAA 3
A DNAStringSet instance of length 1
width seq names
[1] 11 CCCAAAGGAAA 4
A DNAStringSet instance of length 1
width seq names
[1] 11 TGGATAAGAAT 5
A DNAStringSet instance of length 1
width seq names
[1] 11 GTAATGCCATT 6
A DNAStringSet instance of length 1
width seq names
[1] 11 TTTGATTTAAG 7
A DNAStringSet instance of length 1
width seq names
[1] 11 ATTCCAGCACC 8
A DNAStringSet instance of length 1
width seq names
[1] 11 TAAGTTGGGGA 9
A DNAStringSet instance of length 1
width seq names
[1] 11 NNNNNNNNNNN 10
A DNAStringSet instance of length 1
width seq names
[1] 11 ATTTTTAAAAC 11
A DNAStringSet instance of length 1
width seq names
[1] 11 TTAAGCAATAT 12
Error in value[[3L]](cond) : record 1 (13:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (14:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (15:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (16:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (17:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (18:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (19:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (20:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (21:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
Error in value[[3L]](cond) : record 1 (22:41194312-41194322) failed
file: F:/ahwan.pandey/Projects/TargetedExome/Project_DG/MOCOG/ref_test/human_g1k_v37.fasta
[Truncated]
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18362)
Matrix products: default
locale:
[1] LC_COLLATE=English_Australia.1252 LC_CTYPE=English_Australia.1252 LC_MONETARY=English_Australia.1252 LC_NUMERIC=C
[5] LC_TIME=English_Australia.1252
attached base packages:
[1] stats4 parallel stats graphics grDevices utils datasets methods base
other attached packages:
[1] Rsamtools_2.2.3 Biostrings_2.54.0 XVector_0.26.0 GenomicRanges_1.38.0 GenomeInfoDb_1.22.1 IRanges_2.20.2 S4Vectors_0.24.4
[8] BiocGenerics_0.32.0
loaded via a namespace (and not attached):
[1] bitops_1.0-6 zlibbioc_1.32.0 data.table_1.12.8 BiocParallel_1.20.1 tools_3.6.3 RCurl_1.98-1.2 compiler_3.6.3
[8] GenomeInfoDbData_1.2.2
```
username_2: I installed `Rsamtools_2.4.0` and the same error persists.
username_2: Another thing I tried was to delete the fasta index and let Rsamtools create the index while running. The index it creates is weird from chr14 onwards.
The third column jumps to values near 2^64 (e.g. "18446744071651186846"), which is consistent with a signed 32-bit file offset overflowing once the offset passes 2^31 bytes and then being printed as an unsigned 64-bit integer:
```
1 249250621 52 60 61
2 243199373 253404903 60 61
3 198022430 500657651 60 61
4 191154276 701980507 60 61
5 180915260 896320740 60 61
6 171115067 1080251307 60 61
7 159138663 1254218344 60 61
8 146364022 1416009371 60 61
9 141213431 1564812846 60 61
10 135534747 1708379889 60 61
11 135006516 1846173603 60 61
12 133851895 1983430282 60 61
13 115169878 2119513096 60 61
14 107349540 18446744071651186846 60 61
15 102531392 18446744071760325599 60 61
16 90354753 18446744071864565901 60 61
17 81195210 18446744071956426620 60 61
18 78077248 18446744072038975137 60 61
19 59128983 18446744072118353726 60 61
20 63025520 18446744072178468246 60 61
21 48129895 18446744072242544245 60 61
22 51304566 18446744072291476358 60 61
X 155270560 18446744072343636053 60 61
Y 59373566 18446744072501494513 60 61
MT 16569 18446744072561857717 70 71
GL000207.1 4262 18446744072561874585 60 61
GL000226.1 15008 18446744072561878981 60 61
GL000229.1 19913 18446744072561894302 60 61
GL000231.1 27386 18446744072561914609 60 61
GL000210.1 27682 18446744072561942514 60 61
GL000239.1 33824 18446744072561970720 60 61
GL000235.1 34474 18446744072562005170 60 61
GL000201.1 36148 18446744072562040281 60 61
GL000247.1 36422 18446744072562077094 60 61
GL000245.1 36651 18446744072562114186 60 61
GL000197.1 37175 18446744072562151510 60 61
GL000203.1 37498 18446744072562189367 60 61
GL000246.1 38154 18446744072562227552 60 61
GL000249.1 38502 18446744072562266404 60 61
GL000196.1 38914 18446744072562305610 60 61
GL000248.1 39786 18446744072562345235 60 61
GL000244.1 39929 18446744072562385747 60 61
GL000238.1 39939 18446744072562426404 60 61
GL000202.1 40103 18446744072562467071 60 61
GL000234.1 40531 18446744072562507905 60 61
GL000232.1 40652 18446744072562549174 60 61
GL000206.1 41001 18446744072562590566 60 61
GL000240.1 41933 18446744072562632313 60 61
GL000236.1 41934 18446744072562675007 60 61
GL000241.1 42152 18446744072562717702 60 61
GL000243.1 43341 18446744072562760619 60 61
GL000242.1 43523 18446744072562804745 60 61
GL000230.1 43691 18446744072562849056 60 61
GL000237.1 45867 18446744072562893538 60 61
GL000233.1 45941 18446744072562940232 60 61
[Truncated]
GL000213.1 164239 18446744072564414327 60 61
GL000211.1 166566 18446744072564581367 60 61
GL000199.1 169874 18446744072564750773 60 61
GL000217.1 172149 18446744072564923542 60 61
GL000216.1 172294 18446744072565098624 60 61
GL000215.1 172545 18446744072565273853 60 61
GL000205.1 174588 18446744072565449337 60 61
GL000219.1 179198 18446744072565626898 60 61
GL000224.1 179693 18446744072565809146 60 61
GL000223.1 180455 18446744072565991897 60 61
GL000195.1 182896 18446744072566175423 60 61
GL000212.1 186858 18446744072566361431 60 61
GL000222.1 186861 18446744072566551467 60 61
GL000200.1 187035 18446744072566741506 60 61
GL000193.1 189789 18446744072566931722 60 61
GL000194.1 191469 18446744072567124738 60 61
GL000225.1 211173 18446744072567319462 60 61
GL000192.1 547496 18446744072567534218 60 61
```
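For what it's worth, these corrupted values are consistent with a byte offset that overflowed a signed 32-bit integer and was then sign-extended and printed as an unsigned 64-bit number. A quick sketch of the arithmetic (the "true" offset below is hypothetical, back-derived from the printed chr14 value):

```python
# Hypothetical illustration of the suspected overflow: a large byte offset
# wraps around a signed 32-bit int, is sign-extended to 64 bits, and is
# then printed as an unsigned 64-bit value.
true_offset = 2236602526                          # > 2**31 - 1

as_int32 = (true_offset + 2**31) % 2**32 - 2**31  # C-style int32 wrap
print(as_int32)                                   # -2058364770

as_uint64 = as_int32 % 2**64                      # printed as unsigned 64-bit
print(as_uint64)                                  # 18446744071651186846 (chr14 row)
```

Notably, the corruption starts exactly where the offsets first exceed 2,147,483,647.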
username_3:
```
Error in value[[3L]](cond) : record 1 (AY172335.1:71-372) failed
file: 2021-02-22_sciatac/GRCm39.fna
```
when using a pre-built index (which looks valid). If I move the pre-built index and let RSamtools build it, the index is similarly invalid-looking (very high constant offset):
```
CM000994.3 195154279 82 80 81
GL456210.1 169725 197593918 80 81
GL456211.1 241735 197765893 80 81
GL456212.1 153618 198010778 80 81
GL456221.1 206961 198166445 80 81
MU069434.1 8412 198376122 80 81
GL456239.1 40056 198384768 80 81
CM000995.3 181755017 198425407 80 81
CM000996.3 159745316 382452444 80 81
CM000997.3 156860686 544194659 80 81
JH584295.1 1976 703016227 80 81
CM000998.3 151758149 703018310 80 81
JH584296.1 199368 856673564 80 81
JH584297.1 205776 856875553 80 81
JH584298.1 184189 857084030 80 81
GL456354.1 195993 857270650 80 81
JH584299.1 953012 857469221 80 81
CM000999.3 149588044 858434228 80 81
CM001000.3 144995196 1009892205 80 81
GL456219.1 175968 1156699969 80 81
CM001001.3 130127694 1156878219 80 81
CM001002.3 124359700 1288632592 80 81
CM001003.3 130530862 1414546872 80 81
CM001004.3 121973369 1546709453 80 81
CM001005.3 120092757 1670207573 80 81
CM001006.3 120883175 1791801573 80 81
CM001007.3 125139656 1914195871 80 81
CM001008.3 104073951 2040899856 80 81
CM001009.3 98008968 2146274815 80 81
CM001010.3 95294699 18446744071660093299 80 81
CM001011.3 90720763 18446744071756579265 80 81
CM001012.3 61420004 18446744071848434121 80 81
CM001013.3 169476592 18446744071910621958 80 81
GL456233.2 559103 18446744072082217136 80 81
CM001014.3 91455967 18446744072082783310 80 81
JH584300.1 182347 18446744072175382599 80 81
JH584301.1 259875 18446744072175567348 80 81
JH584302.1 155838 18446744072175830594 80 81
JH584303.1 158099 18446744072175988502 80 81
GL456367.1 42057 18446744072176148684 80 81
GL456378.1 31602 18446744072176191373 80 81
GL456381.1 25871 18446744072176223477 80 81
GL456382.1 23158 18446744072176249778 80 81
GL456383.1 38659 18446744072176273332 80 81
GL456385.1 35240 18446744072176312581 80 81
GL456390.1 24668 18446744072176348368 80 81
GL456392.1 23629 18446744072176373452 80 81
GL456394.1 24323 18446744072176397484 80 81
GL456359.1 22974 18446744072176422219 80 81
GL456360.1 31704 18446744072176445588 80 81
GL456396.1 21240 18446744072176477796 80 81
GL456372.1 28664 18446744072176499409 80 81
GL456387.1 24685 18446744072176528539 80 81
GL456389.1 28772 18446744072176553640 80 81
GL456370.1 26764 18446744072176582879 80 81
GL456379.1 72385 18446744072176610085 80 81
GL456366.1 47073 18446744072176683482 80 81
GL456368.1 20208 18446744072176731251 80 81
JH584304.1 114452 18446744072176751819 80 81
MU069435.1 31129 18446744072176867809 80 81
AY172335.1 16299 18446744072176899400 80 81
```
This is for the mouse GRCm39 genome. |
andreinafactorio/IndustrialRevolution-Miniloader | 808261176 | Title: Research requirements need reordering
Question:
username_0: Currently, researching any of the miniloaders with Industrial Revolution is only possible close to the end game. Please bring them more in line with the logistics research, or make them available right after the required science packs can be crafted, e.g. after 'bronze analysis' for the green science pack and 'iron analysis' for the blue science pack in the tech tree.
oppia/oppia | 455379724 | Title: TypeError: Unable to get property 'match' of undefined or null reference
Question:
username_0: This error occurred recently in production
```
TypeError: Unable to get property 'match' of undefined or null reference
at compile (https://www.oppia.org/build/third_party/generated/js/third_party.min.a8d0c323da3c9efa572d2a025c4ef104.js:1)
at oa (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:71)
at s (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:59)
at aa (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:57)
```
**General instructions**
There are no specific repro steps available for this bug report. The general procedure to fix server errors should be the following:
* Analyze the code in the file where the error occurred and come up with a hypothesis for the reason.
* Get the logic of the proposed fix validated by an Oppia team member (have this discussion on the issue thread).
* Make a PR that fixes the issue, then close the issue on merging the PR. (If the error reoccurs in production, the issue will be reopened for further investigation.)
Answers:
username_0: This issue only occurred in IE or Edge :(. Looks to be browser dependent. Also, the error is reported on opening a collection viewer page.
username_0: Attaching the full log
```
<text xmlns="http://www.w3.org/2000/svg" font-family="Capriola, Roboto, Arial, sans-serif" font-size="15" alignment-baseline="middle" fill="#e14738" text-anchor="middle" x="210" y="170" ng-if="!collectionPlaythrough.hasStartedCollection()" translate="I18N_START_HERE" />
Unable to get property 'match' of undefined or null reference
TypeError: Unable to get property 'match' of undefined or null reference
at compile (https://www.oppia.org/build/third_party/generated/js/third_party.min.a8d0c323da3c9efa572d2a025c4ef104.js:1:62465)
at oa (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:71:34)
at s (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:59:110)
at aa (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:57:102)
at Anonymous function (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:63:2)
at d (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:59:467)
at m (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:64:51)
at Anonymous function (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:276:469)
at m.prototype.$digest (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:143:181)
at m.prototype.$apply (https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:146:111)
at URL: https://www.oppia.org/collection/53gXGLIR044l
```
username_1: Hm -- looking at this, we may want to try to find a way to incorporate sourcemaps into our toolchain somehow, otherwise the line numbers will all be "line 1" (which doesn't help with debugging :P).
username_2: It seems like String.prototype.match(regexp) is not implemented in IE11.
(see https://docs.microsoft.com/en-us/openspecs/ie_standards/ms-es6/f83fa032-1111-1111-9cd0-373a796671c4)
Looks like 'match' is used in a lot of places in third_party.js. Here are two possible approaches I could think of to handle this:
1. Add a check for browser compatibility and throw an appropriate error.
2. Or, add a polyfill to support 'match' in IE.
/cc @username_1
username_1: Go with polyfill; we've done that before (see bottom of App.ts).
Status: Issue closed
|
Azure/azure-sdk-for-ruby | 320402418 | Title: Media Services SDK generation failure for 2018-03-30-preview version
Question:
username_0: One of the SDKs in Ruby is [azure_media_services](https://github.com/Azure/azure-sdk-for-ruby/tree/master/management/azure_mgmt_media_services). For this Service, we have 2 versions:
1. [2015-10-01](https://github.com/Azure/azure-sdk-for-ruby/tree/master/management/azure_mgmt_media_services/lib/2015-10-01/generated)
2. [2018-03-30-preview](https://github.com/Azure/azure-sdk-for-ruby/tree/master/management/azure_mgmt_media_services/lib/2018-03-30-preview/generated)
As of 05/04/2018, these versions are generated from the following commits:
1. [2015-10-01 - 68a0d93b00f335894fd00b83bbdc8dd27e68b034](https://raw.githubusercontent.com/Azure/azure-rest-api-specs/68a0d93b00f335894fd00b83bbdc8dd27e68b034/specification/mediaservices/resource-manager/readme.md)
2. [2018-03-30-preview - 4fc4a59bcfab6b59a9d46c22a2454bef3230c45a](https://raw.githubusercontent.com/Azure/azure-rest-api-specs/4fc4a59bcfab6b59a9d46c22a2454bef3230c45a/specification/mediaservices/resource-manager/readme.md)
However, the latest version in master produces an error for the 2018-03-30-preview version, and the SDK fails to generate. This needs to be fixed.
Answers:
username_0: Note: The code is generated correctly in @microsoft.azure/[email protected]. The later versions are causing the failure
username_0: I am not able to reproduce this issue today. I have tried multiple scenarios and it is working fine. So, closing this task for now. If the issue happens again, I will reopen it and work on it.
Status: Issue closed
|
NinePts/OntoGraph | 305180175 | Title: Update code to follow a DAO pattern
Question:
username_0: Currently, the code is written assuming a backing Stardog triple store. All of the triple processing is confined to a single file, GraphDBAccess, but the code does not follow a DAO pattern.
This change would make it straightforward to substitute other APIs and repositories. |
bugfender/BugfenderSDK-Flutter | 1032775350 | Title: Why specify `build-tools` separately?
Question:
username_0: As per https://developer.android.com/studio/releases/build-tools, Android Gradle Plugin v3 and higher automatically use a default version of the build tools that the plugin specifies.
Why does this plugin explicitly specify `buildTools`? |
DonBruce64/MinecraftTransportSimulator | 1040564776 | Title: help me: the writing in the config screen is squares, either black, grey or white
Question:
username_0: Here is a link to a photo of the squares, hope it helps: https://imgur.com/a/bo97hpc. I also have 260 mods other than MTS and its packs, including OptiFine and other performance mods (I tried almost all the mods in a separate pack with an almost identical setup and the text was just fine). Sorry for the bad English.
Answers:
username_1: Odd, in that case do you know which mod it is? Or perhaps are you using a texture pack that changes the fonts? Unless I know what causes it I won't know how to fix it.
username_1: Closing due to no response from user.
Status: Issue closed
|
NRGI/rgi-assessment-tool | 134007492 | Title: Law has been removed from dropdown
Question:
username_0: When adding a document reference
Answers:
username_1: I believe this was asked for. are we moving it back in?
username_0: We will send you an updated list of the document reference options required.
username_1: @username_0 when can i get this list?
username_0: It was drafted yesterday, currently getting feedback
username_1: this will be sorted in #169
Status: Issue closed
|
legsem/legstar-core2 | 57731860 | Title: Allow maximum length of composites to exceed largest integer
Question:
username_0: The z/OS COBOL compiler accepts structures like this one:
```cobol
01 DFHCOMMAREA.
05 ODO-COUNTER PIC 9(9) COMP.
05 FIXED-ARRAY OCCURS 16777215 TIMES
DEPENDING ON ODO-COUNTER.
10 FILLER PC X(300).
```
Where the theoretical maximum length exceeds 2,147,483,647, the largest java integer.
It is therefore safer to use longs when returning the length of complex structures.<issue_closed>
Status: Issue closed |
nss-day-cohort-44/nutshell-whiskered-gnomes | 744187026 | Title: Events - store events
Question:
username_0: Given a user has entered in all details of an event
When the user performs a gesture to save the event
Then the event should be rendered in the application in the Events component
And it should show the event name
And it should show the event date
And it should show the event location
And it should show a button labeled "Show Weather"<issue_closed>
Status: Issue closed |
gpertea/gffread | 206069328 | Title: How to extract fasta seq using gene id and not transcript id from stringtie output (gene count matrix file)
Question:
username_0: Hi Geo,
I came across gffread utilities and was able to find information about extracting transcript fasta sequences using this utility. Could you please advise how to extract fasta seqs using only the gene ids generated by stringtie output (gene counts matrix file).
Thanks
Answers:
username_1: A gene has multiple transcripts, so I assume your question is rather: how do I get the fasta sequences for all the transcripts from a specific gene (or belonging to a list of genes).
This is indeed not straightforward (i.e. not just a gffread command). Given a list of gene_ids, I would probably use something like `fgrep -w -f gene_ids.lst` to select all the transcript entries from a target GTF and feed that output into ` | gffread -w selected_transcripts.fasta -g genome.fa -`
Hope this helps.
username_0: I am actually after the sequence of the gene, not the transcripts. Since I am using a gene count matrix for DE analysis of genes (and not a transcript count matrix), is there any other way to find out just the sequence of the gene (this sounds complicated)? Also, does it mean that if a gene is expressed, all of the transcripts of the gene are expressed?
As you have mentioned above, a gene has multiple transcripts; does that mean it makes more sense to conduct a DE analysis based on the transcript count matrix rather than the gene count matrix?
Since I am new to all this, your advice has been very useful and I would like to understand the basics around DE analysis before coming to a conclusion.
Many Thanks
username_1: I'd suggest taking some real course on the topic or at least some introductory classes in this field in order to understand the basics -- and better define your goals in the first place: are you interested in gene DE or transcript DE? Why would you even need the gene sequences of entire gene regions anyway?
Sorry for being so blunt but I think the "issues" section in github is not the proper forum for asking for tutorials on these subjects in order to help you *"understand the basics"* -- please do *that* first, using other resources, and *then* come here in order to post bug reports or more to-the-point questions. I certainly don't have the time to write and post such introductory tutorials here. There is ample online documentation on this topic and nowadays there are also entire free online courses that could help you better than I could (or have the time to) here.
Status: Issue closed
username_0: I understand, thanks for your time. |
MicrosoftDocs/powerapps-docs | 1076380214 | Title: Incorrect action for button
Question:
username_0: ..._Set **OnChange** property on btnReset to this formula:_...
There is no such "OnChange" action for a button control.
It should be replaced by the OnSelect action (or similar). Thanks.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: eca9b750-b686-4eea-009e-90f7d7a88f8f
* Version Independent ID: 47f0ff15-5438-d8f8-80e7-31739748e2a5
* Content: [Add a list box, a drop-down list, or radio buttons to a canvas app - Power Apps](https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/add-list-box-drop-down-list-radio-button#create-a-simple-list)
* Content Source: [powerapps-docs/maker/canvas-apps/add-list-box-drop-down-list-radio-button.md](https://github.com/MicrosoftDocs/powerapps-docs/blob/main/powerapps-docs/maker/canvas-apps/add-list-box-drop-down-list-radio-button.md)
* Service: **powerapps**
* Sub-service: **canvas-maker**
* GitHub Login: @chmoncay
* Microsoft Alias: **chmoncay**
Answers:
username_1: Thank you for reaching out. Documentation has been updated, and will reflect the changes soon.
Status: Issue closed
|
jaws/jaws | 286405882 | Title: Use canonical units
Question:
username_0: units attribute values should be spelled out in lowercase singular SI units with whitespace separating words and exponents indicated as positive integers > 1 (but no plus sign) or negative integers.
```
windspeed_10m:units = "m/s" ; # Wrong
windspeed_10m:units = "meters second-1" ; # Wrong
windspeed_10m:units = "meter second-1" ; # Right
density:units = "m3/kg" ; # Wrong
density:units = "meter3 kilogram-1" ; # Right
```
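For reference, a minimal syntax check for this convention might look like the sketch below (not part of the project; it validates only the word/exponent shape, not whether the words are spelled-out singular SI names):

```python
import re

# One token: a lowercase word, optionally followed by an integer exponent
# that is either > 1 or negative, with no plus sign (per the convention).
TOKEN = re.compile(r"^[a-z]+([2-9]\d*|-[1-9]\d*)?$")

def is_canonical(units: str) -> bool:
    tokens = units.split()
    return bool(tokens) and all(TOKEN.match(t) for t in tokens)

assert is_canonical("meter second-1")
assert is_canonical("meter3 kilogram-1")
assert not is_canonical("m/s")  # slash is not allowed
# Note: plural words like "meters" still pass this shape check; catching
# them would require a list of the singular SI unit names.
```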
Answers:
username_1: Changed for all datasets
Status: Issue closed
|
toggl-open-source/toggldesktop | 702588842 | Title: Linux app crashing
Question:
username_0:
### 💻 Environment
Platform: Linux
OS Version: Arch Linux x86_64
Toggl Version: 7.5.260
### 🐞 Actual behavior
I'm running version 7.5.260 on Arch Linux. After running for some time, Toggl closes unexpectedly. The output in the terminal is "terminate called after throwing an instance of 'Poco::Net::SSLConnectionUnexpectedlyClosedException' terminate called recursively".
### 💯 Expected behavior
App should not crash
### 📦 Additional info
[Slack](https://toggl.slack.com/archives/C029WCX0T/p1600245923184100)
[Case](https://app.intercom.com/a/apps/ayixs927/inbox/inbox/534741/conversations/106481303555457)
Answers:
username_1: [Case reference](https://app.intercom.com/a/apps/ayixs927/inbox/inbox/2097915/conversations/106481304659562) |
mapnik/node-mapnik | 89092891 | Title: No remove_layer in mapnik-node
Question:
username_0: Is it possible to add remove_layer or have some sort of way to modify/set a layer dynamically via node?
The only functions I see are
```
// layer access
NODE_SET_PROTOTYPE_METHOD(lcons, "add_layer", add_layer);
NODE_SET_PROTOTYPE_METHOD(lcons, "get_layer", get_layer);
NODE_SET_PROTOTYPE_METHOD(lcons, "layers", layers);
```
ref: https://github.com/mapnik/node-mapnik/blob/master/src/mapnik_map.cpp
But in mapnik we have https://github.com/mapnik/mapnik/blob/master/src/map.cpp#L334
Answers:
username_1: This omission was intentional: standard practice is to consider instances of a mapnik.Map as immutable. So you create them once / load an XML and after that you don't modify them. This makes async code predictably safe.
If you are writing an app that creates layers one by one and exposes an interface to modify them in place (sounds like you are) then I recommend using your own structure to hold layers + styles and then creating a new mapnik.Map and adding them lazily when you need to render.
If you think the above is not feasible, happy to discuss more. However that is a pretty reliable and simple approach that I would recommend. Loading maps from XML might sound "slow" but its actually been heavily optimized for this kind of case.
Status: Issue closed
username_0: I understand the practice, and will approach it differently.
username_2: @username_1 Isn't `add_layer` also mutating `mapnik.Map`?
I would also find `remove_layer` useful - in my case I'd like to clone a `mapnik.Map` (from a pool), remove some optional layers, render it and throw the clone away.
username_2: @username_1 how to find a reviewer? |
flutter/flutter | 119826827 | Title: Selection Controls gallery page crashes
Question:
username_0: Selecting the 'Selection Controls' gallery page causes a crash.
```
hansmuller@chumley:/builds/dev/flutter$ (cd /builds/dev/flutter/examples/material_gallery; adb logcat -c; flutter start --checked; flutter logs)
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
android: --------- beginning of main
android: --------- beginning of system
android: F/flutter : [FATAL:Paint.cpp(69)] Check failed: length == kNumberOfPaintFields (-1722030723 vs. 11)
android: --------- beginning of crash
android: F/libc : Fatal signal 6 (SIGABRT), code -6 in tid 2004 (Thread-19831)
android: W/ActivityManager: Force finishing activity 1 org.domokit.sky.shell/.SkyActivit
```
Answers:
username_1: @username_2
Status: Issue closed
username_2: This should work now |
Qirui0805/Personal-Blog | 523841021 | Title: Print a Matrix in Clockwise Spiral Order
Question:
username_0: #### Problem description
Given a matrix, print every element in clockwise spiral order, going from the outer ring inward. For example, for the 4 x 4 matrix 1 2 3 4 / 5 6 7 8 / 9 10 11 12 / 13 14 15 16, the output is 1,2,3,4,8,12,16,15,14,13,9,5,6,7,11,10.
[nowcoder](https://www.nowcoder.com/practice/9b4c81a02cd34f76be2659fa0d54342a?tpId=13&tqId=11172&tPage=1&rp=1&ru=%2Fta%2Fcoding-interviews&qru=%2Fta%2Fcoding-interviews%2Fquestion-ranking)
#### Problem type
Arrays
#### Approach
The overall idea needs no elaboration; only the tricky points are worth noting:
- Stop when r >= R || c >= C
- There are a few corner cases:
1. r = R && c <= C (this covers both c < C and c = C)
2. r = R && c > C
3. r < R && c = C
4. r > R && c = C
Case 1 needs a for loop printing from c to C.
Case 2 needs no extra check, because c > C never enters the for loop.
Case 3 is analogous to case 1: print from r to R.
Case 4 likewise needs no extra check.
So the four cases are expressed as two for loops, related by else-if.
#### Implementation
```
public class Solution {
public ArrayList<Integer> printMatrix(int [][] matrix) {
ArrayList<Integer> res = new ArrayList<>();
if (matrix == null || matrix.length == 0) {
return res;
}
int r = 0;
int c = 0;
int R = matrix.length - 1;
int C = matrix[0].length - 1;
for (;r < R && c < C; r++, R--, c++, C--){
for (int i = c; i < C; i++) {
res.add(matrix[r][i]);
}
for (int i = r; i < R; i++) {
res.add(matrix[i][C]);
}
for (int i = C; i > c; i--) {
res.add(matrix[R][i]);
}
for (int i = R; i > r; i--) {
res.add(matrix[i][c]);
}
}
if (r == R){
for (int i = c; i <= C; i++) {
res.add(matrix[r][i]);
}
}
else if (c == C) {
for (int i = r; i <= R; i++) {
res.add(matrix[i][c]);
}
}
return res;
}
}
``` |
ic-hep/pdm | 306376438 | Title: Incorrect algorithm for calculating CS_Token
Question:
username_0: I was thinking about this and the CS_Token generation isn't secure enough:
The current algorithm is hash(cs_salt + hash(pw_salt + user_pw)) but the hashed user password is stored in the database, so an attacker with the database can recover the full cs_token for any user with no effort.
The algorithm should be: hash(cs_salt + hash(pw_salt + user_pw + cs_salt)) so that the hash is only re-calculable if you know the user_pw plain-text.
Answers:
username_0: Yes @martynia, you were right... A per-user salt would be even better, and it would be best to keep the cs_salt as well. The algorithm should be:
```
user_salt = pdm.utils.hashing.get_salt()
hash(user_pw, salt=(user_salt + cs_salt))
```
The salts can be combined just by concatenating them together.
The user_salt will have to be stored in the database.
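A minimal sketch of this scheme (assuming SHA-256 and byte-string salts; the real pdm.utils.hashing helpers may differ):

```python
import hashlib
import os

def get_salt(length: int = 16) -> bytes:
    # Stand-in for pdm.utils.hashing.get_salt().
    return os.urandom(length)

def cs_token(user_pw: str, user_salt: bytes, cs_salt: bytes) -> str:
    # Recomputable only with the plain-text password, so a stolen
    # database (hashes + user_salt) is not enough to forge a token.
    return hashlib.sha256(user_salt + cs_salt + user_pw.encode()).hexdigest()

user_salt = get_salt()  # stored per user in the database
cs_salt = get_salt()    # held by the central service
token = cs_token("correct horse battery staple", user_salt, cs_salt)
```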
Status: Issue closed
username_0: If we move to a myproxy based model, then this becomes less important, so I'm going to close it for now. |
hellobloom/bloom-starter | 437343365 | Title: Integrate Share Kit and Sign Up/Sign In with Email
Question:
username_0: Integrate `Bloom Share Kit` into your live website or app such that an individual can create an account and sign in to an existing account using their BloomID and an attested email address. Submit your work as a pull request on the `bloom-starter` repo including a link to the website/app so that we can test to verify. The Bloom team will review the pull request for accuracy and may request up to two revisions. Submissions will be reviewed by the Bloom team and rewards will be issued when approved.
This bounty is worth 250 DAI and will be rewarded for any and all successful integrations into live apps!
You can find Share Kit [here](https://github.com/hellobloom/share-kit), Receive Kit for advanced integrations [here](https://github.com/hellobloom/receive-kit), and an example set up in the [Bloom Starter repo](https://github.com/hellobloom/bloom-starter).
Answers:
username_0: @username_1 Is the app you want to integrate Share Kit into a brand new app? We're looking for integrations into existing, live websites/apps.
username_1: @username_0, yes that was the plan. Thought it would be a demo project to guide people to do the same.
username_2: @username_0 is this issue still open and ready for work to be submitted ?
username_0: @username_2 Yes it is! Which app or website are you thinking of integrating Bloom into?
Status: Issue closed
|
julianlam/nodebb-plugin-dwnvtr | 631114794 | Title: deletion of a post doesn't cause further fading
Question:
username_0: If a post is downvoted, deleting it doesn't cause the fading that normal posts get...
If several consecutive posts are downvoted to -1, -2, -3..., a deletion isn't obvious among the varying fades...
I think the downvote fading overrides the deletion fading... it would be better if deletion caused the same fading pattern for all posts, independent of the number of votes a post got.
Answers:
username_1: :smirk: |
larvalabs/breaker | 160869684 | Title: Flair overlap in #bombing
Question:
username_0: http://imgur.com/a/ME2DZ
Reported by s7alic
Answers:
username_0: This looks like a problem where the .flair class is resulting in an `inline` display, so the text-indent isn't working. Some other stylesheet is setting it to `inline-block` on reddit, but we're not getting that style for some reason. Would there be a problem with setting all our flair divs to `display: inline-block` @rickhanlonii ?
Status: Issue closed
|
mili-technologies/mili-website | 480681624 | Title: The list of order statuses is not the same in every order status dropdown
Question:
username_0: 


How will the restaurant manager see Completed, Served, and Confirmed orders when these statuses are not included in the filter?
swup/preload-plugin | 454121965 | Title: Preload not working on hover
Question:
username_0: Hi, I'm moving a site from the previous version of swup to the new one, and I've added the preload plugin but it doesn't seem to animate on hover anymore. Has this functionality been removed/broken?
Answers:
username_1: Hi, I guess you mean not preloading on hover. I checked and it was caused by a little typo I had. Update plugin and all should be good.
Thanks for reporting 👍
Status: Issue closed
username_0: Great thanks! |
apache/pulsar | 462628414 | Title: [doc] cause deadlock while use subscribeAsync demo in java client doc.
Question:
username_0: Here is the example in http://pulsar.apache.org/docs/en/client-libraries-java/#multi-topic-subscriptions

Using receive() to get messages inside subscribeAsync(...).thenAccept(...) causes a deadlock: the thread blocks in receive() and can never call sendFlowPermitsToBroker(), so it blocks forever.
Need to update doc http://pulsar.apache.org/docs/en/client-libraries-java/#multi-topic-subscriptions<issue_closed>
Status: Issue closed |
yinhaiying/Blog | 759401421 | Title: Implementing a custom loader, from the ground up
Question:
username_0: ## webpack series
1. [Building a simple module bundler](https://juejin.cn/post/6893809205183479822#heading-14)
## Preface
In a previous article we already implemented a [simple webpack bundler], but as noted at the end of that article, its features are far from complete: for example, it cannot import built-in modules, transform ES6 syntax, or bundle CSS files. All of these can be handled through webpack's loaders and plugins. In this article we look at one of webpack's core features: loaders. As I keep saying, the best way to truly understand something is to implement it, even in its simplest form. So in this article we will build a loader by hand.<br/>
In the [simple webpack module bundler] we did not yet support CSS files, so let's implement that first. For convenience, the earlier code is pasted below.
```javascript
const fs = require("fs");
const path = require("path");
// 获取依赖
function getDependencies(str) {
let reg = /require\(['"](.+?)['"]\)/g;
let result = null;
let dependencies = [];
while ((result = reg.exec(str))) {
dependencies.push(result[1]);
}
return dependencies;
}
let ID = 0;
// 将每个模块转成对象描述形式
function createAsset(filename) {
// readFileSync 读取文件 最好传递绝对路径
let fileContent = fs.readFileSync(filename, "utf-8");
const id = ID++;
return {
id: id,
filename: filename,
dependencies: getDependencies(fileContent),
code: `function(require,exports,module){
${fileContent}
}`,
};
}
// 解析所有的模块得到一个大的数组对象。
function createGraph(filename){
let asset = createAsset(filename);
let queue = [asset];
// 使用let of 进行遍历,是因为我们在遍历过程中会往数组中添加元素,而let of会继续遍历新添加的元素,而不需要像for循环那样,需要进行处理。
for(let asset of queue){
const dirname = path.dirname(asset.filename);
asset.mapping = {};
asset.dependencies.forEach((relativePath) => {
const absolutePath = path.join(dirname,relativePath);
const child = createAsset(absolutePath);
asset.mapping[relativePath] = child.id;
queue.push(child);
})
}
return queue;
}
function createBundle(graph){
let modules = "";
graph.forEach((mod) => {
modules += `${mod.id}:[
${mod.code},
${JSON.stringify(mod.mapping)}
[Truncated]
## Summary
At this point we have covered essentially everything a loader involves, namely:
1. First, we implemented functions similar to css-loader and style-loader, showing that a loader is really just a function.<br>
2. Then we defined a custom replace-loader and enriched it step by step, covering:<br>
- How to implement a loader's functionality: define a function that receives the file content and returns the processed content
- How loaders are referenced: the several ways to reference third-party and custom loaders
- How a loader supports configuration options, and how to read them: via this and `loader-utils`
- How a loader returns values: returning a single value vs. returning multiple values
- Handling synchronous and asynchronous loaders
- How multiple loaders are used together, and how their invocation order matters
By first motivating loaders and then gradually implementing one, introducing its features and configuration along the way, you can build up a mental framework of loaders step by step, instead of finding them deep and intimidating as before. As I said above, **the best way to learn something is to implement it**.<br>
The related code is available on [github](https://github.com/username_0/webpack/tree/main/packages/custom-loader). Stars welcome.
That's a wrap.
## References:
[how to write a loader](https://webpack.js.org/contribute/writing-a-loader/)
[loader docs](https://webpack.js.org/loaders/)
hypesystem/Cryptoflow | 280928960 | Title: Get rid of d3.js!
Question:
username_0: An early experiment in visualization used d3.js, but it turns out the data-driven model makes little sense for the kind of data in Cryptoflow. We should get rid of any use of the library.
This means changing the visualization of the basic block overview (e.g. /blocks/chained_xor/ but not /blocks/chained_xor/innards/) to use the graph-based visualization algorithm rather than the simple d3 version. |
neuronsimulator/nrn | 606610214 | Title: [CoreNEURON] Replace contiguous NrnThread->_data with smaller memory chunks used in the CoreNEURON
Question:
username_0: Currently, NEURON constructs NrnThread for CoreNEURON with one big memory chunk via nt->_data:
```
nt._data = (double*)ecalloc_align(nt._ndata, sizeof(double));
nt._actual_rhs = nt._data + 0 * ne;
nt._actual_d = nt._data + 1 * ne;
nt._actual_a = nt._data + 2 * ne;
nt._actual_b = nt._data + 3 * ne;
nt._actual_v = nt._data + 4 * ne;
nt._actual_area = nt._data + 5 * ne;
nt._actual_diam = ndiam ? nt._data + 6 * ne : nullptr;
for (auto tml = nt.tml; tml; tml = tml->next) {
Memb_list* ml = tml->ml;
ml->data = nt._data + (ml->data - (double*)0);
}
....
int extra_nv = (&nt == nrn_threads) ? nrn_extra_thread0_vdata : 0;
if (nt._nvdata + extra_nv)
nt._vdata = (void**)ecalloc_align(nt._nvdata + extra_nv, sizeof(void*));
```
IIRC, there are also `vdata` members that can have global `offsets` to `nt._data`.
I believe it will be helpful to replace these `global offsets` with more `local ones`, and to replace the one big memory block `nt->_data` with smaller ones, for the following reasons:
* it is currently not possible to allocate different types of memory for different mechanisms or mechanism properties, which we need for GPUs or even KNLs.
* if we decide to run some of the mechanisms on GPU and some on CPU, it is difficult because we have to copy all the data to the GPU (because of the global memory offsets).
* it is not possible to construct the data structures piece by piece or to use nice C++ containers instead of raw pointers.
* @alkino is refactoring how we load the data in `nrn_setup` in https://github.com/BlueBrain/CoreNeuron/pull/283, but these global offsets and the contiguous-memory requirement pose a significant challenge to simplifying the code.
@username_1 : we discussed this in the past but were never able to make it a priority. As Nico is doing very nice work on simplifying the coreneuron code in https://github.com/BlueBrain/CoreNeuron/pull/283, do you think we can make this change? This will also help NMODL and the way we generate code.
Answers:
username_1: This is a very useful project. Past transformations involved the possibility of SoA memory layout and data permutation for higher-performance ordering on GPU. Things have likely become so complex at this point that a complete review is in order. The aspect that currently relies on a single double array for everything is the use of indices instead of pointers into that array. For example, one of the more recent changes is communication of variables (trajectories and single values) between NEURON and CoreNEURON to allow vector recording and graphics. The key transformation is nrn_dblpnt2nrncore in src/nrniv/nrnbbcore_write.cpp on the NEURON side and double* stdindex2ptr in coreneuron/nrniv/nrnsetup.cpp on the CoreNEURON side. Clearly there needs to be some structure more complex than a mere int to replace pointers on the CoreNEURON side.
Status: Issue closed
python/typeshed | 539755387 | Title: functools.cached_property and derived classes
Question:
username_0: `functools.cached_property` doesn't seem to create good annotations for derived classes. Is this a typeshed thing or a mypy thing?
Testcase:
```python
from functools import cached_property
class baseclass:
def __init__(self, data: str):
self.data = data
@cached_property
def characteristic(self) -> int:
return len(self.data)
class derived(baseclass):
@cached_property
def characteristic(self) -> int:
return 10
```
Output from mypy 0.760:
```
$ mypy mypytest.py
mypytest.py:10: error: Signature of "characteristic" incompatible with supertype "baseclass"
Found 1 error in 1 file (checked 1 source file)
```
Answers:
username_1: For what it's worth the typeshed definition is:
```
class cached_property(Generic[_S, _T]):
func: Callable[[_S], _T]
attrname: Optional[str]
def __init__(self, func: Callable[[_S], _T]) -> None: ...
@overload
def __get__(self, instance: None, owner: Optional[Type[_S]] = ...) -> cached_property[_S, _T]: ...
@overload
def __get__(self, instance: _S, owner: Optional[Type[_S]] = ...) -> _T: ...
def __set_name__(self, owner: Type[_S], name: str) -> None: ...
```
It's not immediately obvious to me why mypy is giving the errors you mention—maybe something to do with the invariant TypeVars that `cached_property` is generic over? It may be helpful to `reveal_type(characteristic)` inside the class body.
username_0:
```
mypytest.py:12: note: Revealed type is 'functools.cached_property[mypytest.baseclass*, builtins.int*]'
mypytest.py:18: note: Revealed type is 'functools.cached_property[mypytest.derived*, builtins.int*]'
```
It's the same outside of the classes as well.
username_0: The revealed types _seem_ accurate; the problem seems to be that there's an issue with the link between `baseclass` and `derived`. Which makes this seem like a mypy issue.
But `@property` works fine, though the typeshed definition looks completely different to me, which makes this seem like a typeshed issue.
username_1: `@property` is heavily special-cased in mypy itself, so its typeshed implementation doesn't necessarily tell you much.
I realized the issue here is the first type argument in the stub. If I remove that argument and instead use `Any`, the error goes away. I also tried making this typevar covariant instead, but that leads to `error: Cannot use a covariant type variable as a parameter` and doesn't fix the "incompatible with supertype" error. I'll submit a PR for the `Any` version.
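For reference, the `Any` version would look roughly like this (a sketch; the exact PR may differ):

```python
from typing import Any, Callable, Generic, Optional, Type, TypeVar, overload

_T = TypeVar("_T")

# Generic only over the value type; the owner type becomes Any, so
# overriding the property in a subclass no longer trips the variance check.
class cached_property(Generic[_T]):
    func: Callable[[Any], _T]
    attrname: Optional[str]
    def __init__(self, func: Callable[[Any], _T]) -> None: ...
    @overload
    def __get__(self, instance: None, owner: Optional[Type[Any]] = ...) -> "cached_property[_T]": ...
    @overload
    def __get__(self, instance: object, owner: Optional[Type[Any]] = ...) -> _T: ...
    def __set_name__(self, owner: Type[Any], name: str) -> None: ...
```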
Status: Issue closed
|
thrift-iterator/go | 428123512 | Title: panic when marshal struct
Question:
username_0: The code below panics:
```go
package main

import (
	"fmt"

	"github.com/thrift-iterator/go"
)

type Foo struct {
	Sa string   `thrift:"Sa,1" json:"Sa"`
	Ib int32    `thrift:"Ib,2" json:"Ib"`
	Lc []string `thrift:"Lc,3" json:"Lc"`
}

type Example struct {
	Name string          `thrift:"Name,1" json:"Name"`
	Ia   int64           `thrift:"Ia,2" json:"Ia"`
	Lb   []string        `thrift:"Lb,3" json:"Lb"`
	Mc   map[string]*Foo `thrift:"Mc,4" json:"Mc"`
}

func main() {
	var example = Example{
		Name: "xxxxxxxxxxxxxxxx",
		Ia:   12345678,
		Lb:   []string{"a", "b", "c", "d", "1", "2", "3", "4", "5"},
		Mc: map[string]*Foo{
			"t1": &Foo{Sa: "sss", Ib: 987654321, Lc: []string{"1", "2", "3"}},
		},
	}
	fmt.Printf("example: %v\n", example)
	_, err := thrifter.Marshal(example)
	if err != nil {
		fmt.Printf("err: %v\n", err)
	}
}
```
the panic msg is below:
```
example: {xxxxxxxxxxxxxxxx 12345678 [a b c d 1 2 3 4 5] map[t1:0xc000132450]}
panic: runtime error: growslice: cap out of range

goroutine 1 [running]:
github.com/thrift-iterator/go/protocol/binary.(*Stream).WriteString(0xc000148000, 0xbcce73b5c2737373, 0x203d2120bdbfef73)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/protocol/binary/stream.go:164 +0x116
github.com/thrift-iterator/go/binding/reflection.(*stringEncoder).encode(0xa79998, 0x6cceed, 0x6ff940, 0xc000148000)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/binding/reflection/encode_simple_value.go:24 +0x4a
github.com/thrift-iterator/go/binding/reflection.(*structEncoder).encode(0xc00000d180, 0x6cceed, 0x6ff940, 0xc000148000)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/binding/reflection/encode_struct.go:35 +0x156
github.com/thrift-iterator/go/binding/reflection.(*pointerEncoder).encode(0xc00000d1a0, 0xc000132450, 0x6ff940, 0xc000148000)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/binding/reflection/encode_pointer.go:20 +0x87
github.com/thrift-iterator/go/binding/reflection.(*mapEncoder).encode(0xc000132510, 0xc000132420, 0x6ff940, 0xc000148000)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/binding/reflection/encode_map.go:30 +0x2da
github.com/thrift-iterator/go/binding/reflection.(*structEncoder).encode(0xc00000d1c0, 0xc0000bcc00, 0x6ff940, 0xc000148000)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/binding/reflection/encode_struct.go:35 +0x156
github.com/thrift-iterator/go/binding/reflection.(*valEncoderAdapter).Encode(0xc0000107a0, 0x6aed20, 0xc0000bcc00, 0x6ff940, 0xc000148000)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/binding/reflection/unsafe.go:45 +0x51
github.com/thrift-iterator/go.(*frozenConfig).Marshal(0xc000136000, 0x6aed20, 0xc0000bcc00, 0x6aed20, 0xc0000bcc00, 0xc0000a9f18, 0xc0000bcc00, 0xc0000bcc00)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/config.go:251 +0x175
github.com/thrift-iterator/go.Marshal(0x6aed20, 0xc0000bcc00, 0x6aed20, 0xc0000bcc00, 0x1, 0x4e, 0x0)
	/data01/xuhuaye/go/src/github.com/thrift-iterator/go/api.go:72 +0x49
main.main()
	/data01/xuhuaye/go/src/iven_learn/test_thrifter/test.go:34 +0x317
exit status 2
```
Answers:
username_1: This should fix it: https://github.com/thrift-iterator/go/commit/9b5a67519118594d9dea59cc6d75f5f64583bd3f
There might be more edge cases like this. The fundamental problem is how `interface{}` stores the pointer: if the value itself is a pointer, it is not stored as a pointer to a pointer but embedded directly into the `interface{}`. This is not a problem when using https://github.com/modern-go/reflect2 instead of the reflect package, which relies on `interface{}`. But changing the reflection API of the package to match https://github.com/json-iterator/go is a much bigger project.
|
umputun/remark42 | 974148709 | Title: TypeError: Cannot read property 'host' of undefined
Question:
username_0: I get this error while running on react nextjs

Answers:
username_1: @username_0 It's not a remark42's issue. Pls, use [this discussion](https://github.com/umputun/remark42/discussions/1106) to post any problems and I'll try to help.
Status: Issue closed
|
tessel/tessel.io | 112034024 | Title: Fix syntax highlighting on docs page
Question:
username_0: 
it's disgusting. Er, hard to read.
Answers:
username_0: solving https://github.com/tessel/tessel.io/issues/14 by making docs a gitbook would solve this problem at the same time
username_0: resolved with https://github.com/tessel/t2-docs/issues/50
Status: Issue closed
|
y20k/transistor | 509658409 | Title: selecting the Delete option and cancelling it unexpectedly cancels the sleep timer
Question:
username_0: Hi, I feel this issue is a little annoying; if fixed, it will improve the user experience a lot. I detail the issue as follows.
- I have searched the history issue list, and have not found similar issues.
- The issue was found in the latest release version 3.3.2 from Google Play (this version also matches the latest code commit), and was reproducible on both a Google Pixel 3 and an Android 6.0 emulator.
- Issue
Scenario 1
1. When a station is playing, click the sleep timer to start the timer (the timer starts, correct!)
2. select the ``Rename`` option and cancel it (the sleep timer is still there and correctly counting down, correct!)
3. select the ``Delete`` option and cancel it (the sleep timer is unexpectedly cancelled, why?). It is strange that this no-op action affects the timer.
- Reproducing video

Scenario 2
1. When a station ``A`` is playing, click the sleep timer to start the timer (the timer starts, correct!)
2. Switch to another station ``B``, which is not playing now (the timer is still there and counting down, correct!)
3. Select the ``Delete`` option of station ``B`` and cancel it. Now the timer is unexpectedly cancelled. This is strange because this no-op action does nothing related to station ``A``, where the timer was initially set.
- Reproducing video

Answers:
username_1: This behavior is not really an error. Let me explain.
First, the sleep timer is always cancelled when you stop playback. The main problem is that playback needs to stop in this app when you try to delete a station. The reason is a weakness in the app's main architecture that bites me here and in some other places (e.g. Android Auto), too. The background service that plays streams and the UI that manages the stations do not share a common understanding of the state of the station list. Basically, it is never really safe to assume that the station you are about to delete is NOT the station playing right now. Therefore it is safer to stop playback before a delete action.
This issue is fixable. But since the root is so deep in the app's architecture, I do not intend to fix it in the foreseeable future. Sorry. This needs to wait for the big Kotlin rewrite that eventually has to come.
Status: Issue closed
username_0: Thanks for your detailed explanation, @username_1. I can understand the complexity of fixing this issue. Yep, the asynchrony between the background service and the UI really does bring some trouble, and I can foresee that the effort to fix it would be large if the architecture had to be adjusted. Then let's just keep this as it is for now.
platformio/platformio-core | 509372414 | Title: Home: Could not initialize project
Question:
username_0: PIO Core Call Error: "The current working directory /Users/wqearth/Documents/PlatformIO/Projects/stm8 will be used for the project.\n\nThe next files/directories have been created in /Users/wqearth/Documents/PlatformIO/Projects/stm8\ninclude - Put project header files here\nlib - Put here project specific (private) libraries\nsrc - Put project source files here\nplatformio.ini - Project Configuration File\n\n\nError: Processing stm8sblack (platform: ststm8; board: stm8sblack; framework: spl)\n--------------------------------------------------------------------------------\nPackageManager: Installing framework-ststm8spl @ 0.20301.181217\nframework-ststm8spl @ 0.20301.181217 has been successfully installed!\nVerbose mode can be enabled via `-v, --verbose` option\nCONFIGURATION: https://docs.platformio.org/page/boards/ststm8/stm8sblack.html\nPLATFORM: ST STM8 1.0.1 > ST STM8S105K4T6 Breakout Board\nHARDWARE: STM8S105K4T6 16MHz, 2KB RAM, 16KB Flash\nPACKAGES: toolchain-sdcc 1.30804.10766 (3.8.4), framework-ststm8spl 0.20301.181217 (2.3.1), tool-stm8binutils 0.230.0 (2.30)\nError: Could not parse library files for the target.\nstm8s.h:2723:25: fatal error: stm8s_conf.h: No such file or directory\n\n********************************************************************\n* Looking for stm8s_conf.h dependency? Check our library registry!\n*\n* CLI > platformio lib search \"header:stm8s_conf.h\"\n* Web > https://platformio.org/lib/search?query=header:stm8s_conf.h\n*\n********************************************************************\n\ncompilation terminated.\n========================== [FAILED] Took 7.99 seconds =========================="
Status: Issue closed
Answers:
username_1: Duplicate of platformio/platform-ststm8#6 |
IntelRealSense/realsense-ros | 1186668299 | Title: I can't see the infrared camera
Question:
username_0: When I launch the command `ros2 launch realsense2_camera rs_launch.py`, it tells me that the depth and color streams are enabled, but it says nothing about the infrared. I have also launched it with `/camera/realsense2_camera/enable_infra1:=True`, and the topic for the infrared camera still does not appear.
Device Name: Intel RealSense D435
Ros Version: Ros foxy
Answers:
username_1: Hi @username_0. In the RealSense ROS wrapper, the Infra1 and Infra2 topics are disabled by default and have to be enabled in order to be published.
https://github.com/IntelRealSense/realsense-ros/blob/ros2-beta/realsense2_camera/launch/rs_launch.py#L38-L39
Is the Infra1 topic published if you use the ROS2 launch command below, please?
**ros2 launch realsense2_camera rs_launch.py enable_infra1:=true**
username_0: I have both set to true, but the infra topic still does not appear; I do not know whether I have to enable the infrared stream in some other way.
username_1: Would it be possible to post your ros launch log from the terminal in a comment below, please?
username_0: [INFO] [launch]: All log files can be found below /home/panoimagen/.ros/log/2022-03-31-09-54-04-975959-panoimagen-27516
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [realsense2_camera_node-1]: process started with pid [27518]
[realsense2_camera_node-1] [INFO] [1648720445.493761824] [camera.camera]: RealSense ROS v3.2.3
[realsense2_camera_node-1] [INFO] [1648720445.494025228] [camera.camera]: Built with LibRealSense v2.50.0
[realsense2_camera_node-1] [INFO] [1648720445.494079956] [camera.camera]: Running with LibRealSense v2.50.0
[realsense2_camera_node-1] [INFO] [1648720445.768036832] [camera.camera]: Device with serial number 950122070567 was found.
[realsense2_camera_node-1]
[realsense2_camera_node-1] [INFO] [1648720445.768210247] [camera.camera]: Device with physical ID 2-4-4 was found.
[realsense2_camera_node-1] [INFO] [1648720445.768252095] [camera.camera]: Device with name Intel RealSense D435I was found.
[realsense2_camera_node-1] [INFO] [1648720445.768862429] [camera.camera]: Device with port number 2-4 was found.
[realsense2_camera_node-1] [INFO] [1648720445.768913067] [camera.camera]: Device USB type: 3.2
[realsense2_camera_node-1] [INFO] [1648720445.774758432] [camera.camera]: getParameters...
[realsense2_camera_node-1] [INFO] [1648720445.786362293] [camera.camera]: setupDevice...
[realsense2_camera_node-1] [INFO] [1648720445.786462724] [camera.camera]: JSON file is not provided
[realsense2_camera_node-1] [INFO] [1648720445.786510124] [camera.camera]: Device Name: Intel RealSense D435I
[realsense2_camera_node-1] [INFO] [1648720445.786571551] [camera.camera]: Device physical port: 2-4-4
[realsense2_camera_node-1] [INFO] [1648720445.786609566] [camera.camera]: Device FW version: 05.13.00.50
[realsense2_camera_node-1] [INFO] [1648720445.786644916] [camera.camera]: Device Product ID: 0x0B3A
[realsense2_camera_node-1] [INFO] [1648720445.786682309] [camera.camera]: Enable PointCloud: Off
[realsense2_camera_node-1] [INFO] [1648720445.786720704] [camera.camera]: Align Depth: Off
[realsense2_camera_node-1] [INFO] [1648720445.786756696] [camera.camera]: Sync Mode: Off
[realsense2_camera_node-1] [INFO] [1648720445.786827296] [camera.camera]: Device Sensors:
[realsense2_camera_node-1] [INFO] [1648720445.897900574] [camera.camera]: Stereo Module was found.
[realsense2_camera_node-1] [INFO] [1648720445.921506269] [camera.camera]: RGB Camera was found.
[realsense2_camera_node-1] [INFO] [1648720445.921963349] [camera.camera]: Motion Module was found.
[realsense2_camera_node-1] [INFO] [1648720445.922193673] [camera.camera]: (Infrared, 0) sensor isn't supported by current device! -- Skipping...
[realsense2_camera_node-1] [INFO] [1648720445.922266017] [camera.camera]: (Fisheye, 0) sensor isn't supported by current device! -- Skipping...
[realsense2_camera_node-1] [INFO] [1648720445.922304183] [camera.camera]: (Fisheye, 1) sensor isn't supported by current device! -- Skipping...
[realsense2_camera_node-1] [INFO] [1648720445.922340981] [camera.camera]: (Fisheye, 2) sensor isn't supported by current device! -- Skipping...
[realsense2_camera_node-1] [INFO] [1648720445.922378352] [camera.camera]: (Pose, 0) sensor isn't supported by current device! -- Skipping...
[realsense2_camera_node-1] [INFO] [1648720445.922416142] [camera.camera]: (Confidence, 0) sensor isn't supported by current device! -- Skipping...
[realsense2_camera_node-1] [INFO] [1648720445.922462260] [camera.camera]: num_filters: 0
[realsense2_camera_node-1] [INFO] [1648720445.922497856] [camera.camera]: Setting Dynamic reconfig parameters.
[realsense2_camera_node-1] [INFO] [1648720450.454908192] [camera.camera]: Done Setting Dynamic reconfig parameters.
[realsense2_camera_node-1] [INFO] [1648720450.457736263] [camera.camera]: depth stream is enabled - width: 848, height: 480, fps: 30, Format: Z16
[realsense2_camera_node-1] [INFO] [1648720450.466454023] [camera.camera]: color stream is enabled - width: 1280, height: 720, fps: 30, Format: RGB8
[realsense2_camera_node-1] [INFO] [1648720450.468547482] [camera.camera]: setupPublishers...
[realsense2_camera_node-1] [INFO] [1648720450.485548684] [camera.camera]: setupStreams...
[realsense2_camera_node-1] 31/03 09:54:10,704 WARNING [139822209009408] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
[realsense2_camera_node-1] [INFO] [1648720450.755573629] [camera.camera]: SELECTED BASE:Depth, 0
[realsense2_camera_node-1] [INFO] [1648720450.757533051] [camera.camera]: Device Serial No: 950122070567
[realsense2_camera_node-1] [INFO] [1648720450.757644011] [camera.camera]: RealSense Node Is Up!
username_1: Near the bottom of the log, just above the **RealSense Node is Up!** message that indicates completion of the launch process, the details of the streams that are being published are provided. In your launch, it confirms that depth and color are published but infrared is not.

There is not anything obvious in the log to suggest a problem as the details look fine.
Let's try a launch with a custom stream configuration that will override the stream configuration in the launch file to see whether it makes a difference. This configuration should stream depth and infra1 at 848x480 at 30 FPS and color at 1280x720 and 30 FPS.
**ros2 launch realsense2_camera rs_launch.py enable_infra1:=true depth_width:=848 depth_height:=480 depth_fps:=30.0 infra1_width:=848 infra1_height:=480 infra1_fps:=30.0 color_width:=1280 color_height:=720 color_fps:=30.0**
username_0: I haven't launched anything, everything is the same without the topic and without the infrared stream enabled
username_1: In your opening message you stated that you launched with **ros2 launch realsense2_camera rs_launch.py**
username_0: Yes
username_1: Could you tell me please whether infra1 is published if you use the custom launch instruction that I provided above, please?
username_0: Whether I launch `ros2 launch realsense2_camera rs_launch.py enable_infra1:=true depth_width:=848 depth_height:=480 depth_fps:=30.0 infra1_width:=848 infra1_height:=480 infra1_fps:=30.0 color_width:=1280 color_height:=720 color_fps:=30.0` or plain `ros2 launch realsense2_camera rs_launch.py`, I still get the same result.
username_1: Let's confirm whether Infrared1 is accessible in the RealSense Viewer tool. If you launch the Viewer and expand open the Stereo Module options, are **Infrared** and **Infrared 2** listed or is only Depth listed with no Infrared options?

username_0: If I do it from the Viewer it works for me, but I want to launch it with the ROS modules.
username_1: I asked because there is a problem that can cause the infrared streams to become inaccessible in RealSense applications and so I wanted to confirm that you did not have that problem, which can be indicated by missing infrared options in the Viewer. Thanks very much for the confirmation that Infrared is available to you in the Viewer.
Instead of using roslaunch, what happens if you use this alternative launch method:
**ros2 run realsense2_camera realsense2_camera_node --ros-args -p filters:=colorizer**
username_0: 
username_0: 
All except the infrared
username_1: Thanks very much for your patience. I researched the warning **Given stream configuration is not supported by the device** and found several other past cases where the Infrared streams had been declared unsupported by this message. RealSense team members on those cases advised to check whether the camera has its **Advanced Mode** enabled.
This can be performed in the RealSense Viewer.
1. Left-click on the **More** option near the top of the Viewer's options side-panel to reveal its drop-down menu and look at the **Advanced Mode** menu option.
2. If it has a tick-mark beside it then this means that Advanced Mode is enabled. If there is not a tick-mark, left-click on Advanced Mode to place a tick. Then please test whether the Infrared stream can now be found by the ROS wrapper.
 |
cdhart/cdhart-html | 1116307530 | Title: colorschemez January 27 2022 at 07:28AM
Question:
username_0: > productive magenta afflicted bluish purple hands-off pinkish https://t.co/S0h3gRzPpM
> — colorschemer (@colorschemez) [Jan 27, 2022](https://twitter.com/colorschemez/status/1486692680750952451)

January 27, 2022 at 07:28AM
via Twitter
SailCPU/CProgrammingCurriculum | 274863952 | Title: C Program to Find Largest Number Using Dynamic Memory Allocation
Question:
username_0: Depending upon the number of elements, only the required amount of memory is allocated, which prevents waste. If the allocation fails, an error is displayed and the program terminates.
Output:
```
Enter total number of elements(1 to 100): 10
Enter Number 1: 2.34
Enter Number 2: 3.43
Enter Number 3: 6.78
Enter Number 4: 2.45
Enter Number 5: 7.64
Enter Number 6: 9.05
Enter Number 7: -3.45
Enter Number 8: -9.99
Enter Number 9: 5.67
Enter Number 10: 34.95
Largest element: 34.95
```
ray-project/ray | 489143205 | Title: Ray Cluster ModuleNotFoundError
Question:
username_0: <!--
General questions should be asked on the mailing list <EMAIL>.
Questions about how to use Ray should be asked on
[StackOverflow](https://stackoverflow.com/questions/tagged/ray).
Before submitting an issue, please fill out the following form.
-->
### System information
- **OS Platform and Distribution**: Ubuntu 16.04.2 LTS
- **Ray installed from (source or binary)**: Binary
- **Ray version**: 0.7.2
- **Python version**: 3.6.8
I am trying to build a manual cluster of machines using their IP addresses. However, when I tried to run the PPO algorithm on the cluster, I got an error message from one of the workers complaining about ModuleNotFoundError: No module named "v2i". Here `v2i` is my custom gym environment module. It looks like Ray was not able to sync the files between the different nodes.
Here is the complete traceback. **wsl** is my worker hostname.
```
Traceback (most recent call last):
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 436, in _process_trial
result = self.trial_executor.fetch_result(trial)
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 323, in fetch_result
result = ray.get(trial_future[0])
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/worker.py", line 2195, in get
raise value
ray.exceptions.RayTaskError: ray_PPO:train() (pid=30729, host=rlmac)
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 364, in train
raise e
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 353, in train
result = Trainable.train(self)
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/tune/trainable.py", line 150, in train
result = self._train()
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 126, in _train
fetches = self.optimizer.step()
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/rllib/optimizers/multi_gpu_optimizer.py", line 130, in step
self.num_envs_per_worker, self.train_batch_size)
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/rllib/optimizers/rollout.py", line 29, in collect_samples
next_sample = ray_get_and_free(fut_sample)
File "/home/mayank/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/rllib/utils/memory.py", line 33, in ray_get_and_free
result = ray.get(object_ids)
ray.exceptions.RayTaskError: ray_RolloutWorker:sample() (pid=11974, host=wsl)
File "pyarrow/serialization.pxi", line 461, in pyarrow.lib.deserialize
File "pyarrow/serialization.pxi", line 424, in pyarrow.lib.deserialize_from
File "pyarrow/serialization.pxi", line 275, in pyarrow.lib.SerializedPyObject.deserialize
File "pyarrow/serialization.pxi", line 174, in pyarrow.lib.SerializationContext._deserialize_callback
File "/media/win/MayankPal/miniconda3/envs/v2i/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py", line 965, in subimport
__import__(name)
ModuleNotFoundError: No module named 'v2i'
```
### Source code / logs
* First start the ray head
`ray start --head --redis-port=6666 --num-cpus=22 --num-gpus=1`
* Start ray on worker machine with above redis address
`ray start --redis-address=xxx.xxx.xxx.xxx:6666`
* Start PPO training
`python train.py`
Answers:
username_0: @ericl Can you help?
username_1: I have a similar problem
username_2: @username_1 are you on WSL?
username_1: No. I am on a Mac. The problem I have is that the PYTHONPATH (sys.path in general) is not carried along to the workers. I dumped the paths and they are quite different. I read a couple of issues where people suggested setting the PYTHONPATH on the workers (and/or the cluster), but I do not see a way in the documentation/API to do so.
username_2: Does it work if you set the PYTHONPATH in `os.environ` before calling `ray.init` (assuming you're on a single machine)?
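For concreteness, a minimal sketch of that suggestion on a single machine (`/path/to/project` is a hypothetical directory containing the `v2i` package):
```python
import os

# Hypothetical directory that contains the custom package (e.g. v2i/).
project_dir = "/path/to/project"

# Set PYTHONPATH *before* ray.init so spawned worker processes inherit it.
os.environ["PYTHONPATH"] = project_dir + os.pathsep + os.environ.get("PYTHONPATH", "")

import ray
ray.init()
```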
username_1: Wonderful.
It works. Thank you very much. It was probably a very dumb question :D
Status: Issue closed
username_2: No problem!
username_3: Can you please elaborate (code line example) on what do it means "to set PYTHONPATH"?
username_4: Just in case anyone faces the same issue, let me elaborate on how to apply the above suggestion concretely.
(I am very new to raytune, so please take it with a grain of salt).
OS Platform and Distribution: Ubuntu 20.04.3 LTS
Ray installed from (source or binary): Binary (pip install)
Ray version: 1.10.0
Python version: 3.8.10
Usage: raytune for hyperparameter tuning.
Normally, when you want to use a custom python module, you will have to use the following line in your code.
```
sys.path.append(module_path)
```
However, this seems to take effect only in the main program that you used to call `tune.run`, and will not be carried along to the workers as mentioned [above](https://github.com/ray-project/ray/issues/5635#issuecomment-576894149). As a result, the code inside `tuning_method` will not be able to use the module added through `sys.path.append`. To solve this, we need a way to set the PYTHONPATH in the workers; one way to do it is to set the PYTHONPATH variable before calling `tune.run` as follows (I am guessing that `ray.init` is probably called inside this `tune.run` method):
```
os.environ['PYTHONPATH'] = module_path
tune.run(tuning_method, ...)
```
Alternatively, I believe you can do it like [this](https://github.com/ray-project/ray/issues/1639) as well, but that does not suit my use case, as I am running everything on one machine.
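For completeness, Ray releases around 1.10 also accept a `runtime_env` argument on `ray.init` that propagates environment variables to every worker; a sketch, again with a hypothetical path:
```python
import ray

# env_vars declared in runtime_env are applied in each worker process.
ray.init(runtime_env={"env_vars": {"PYTHONPATH": "/path/to/project"}})
```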
Note: The solution here can also be the answer for the following issue (https://github.com/ray-project/ray/issues/10067). |
18F/micropurchase | 155992854 | Title: Add 'project' to auctions api
Question:
username_0: The api currently does not have a `project` or `project_title` method. To implement the [prototype design](https://pages.18f.gov/micropurchase/), we will need this field to exist.
@mtorres253 @andrewmaier and I have been tossing around the idea of having a Projects page similar to [worklist](https://worklist.net/projects). If we move this direction, it will be useful to have more data for the project/repo that an auction is part of. We may even consider creating a separate Projects model and api endpoint. That being said, having a `project_title` field would be immediately helpful.
Answers:
username_1: If we want to expose project via API, we should have that as a model in our db, right?
If creating this page is prioritized by product, I'd vote for adding the concept of a "project" in the database, which would make exposing project info to the API easy.
username_0: @username_1 I think a project would be great. @adelevie was thinking that we could maybe add a model later when the "project" concept is more hashed out. I think I agree with him. For now I will just use `github_repo` and keep this ticket open for posterity
username_1: Ok cool, works for me! |
jitsi/jitsi-meet | 517612358 | Title: How Can I Change Default Language On Android SDK?
Question:
username_0: I have integrated Jitsi Meet into my Android project. Everything works correctly; now I want Chinese as the default language. Please help me figure out how to do it. My phone language is also Chinese.
Answers:
username_1: The default language (if available) is selected to match that of the system. If we don't support said language, English is used instead.
There is currently no way to force a language in the SDK.
What version of the SDK are you using? We have supported Chinese for a while IIRC.
username_2: I want to force Japanese too
username_1: Do we support Japanese on the web? |