repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M)
---|---|---|
tremlab/built2break | 287226187 | Title: Error in /
Question:
username_0: ## Error in blt2brkJS
**Error** in **/**
Bad thing!
[View on Bugsnag](https://app.bugsnag.com/hackbright/blt2brkjs/errors/5a552b25099599001923d554?event_id=5a552b25099599001923d553&i=gh&m=ci)
## Stacktrace
http://localhost:5000/app.js:61 - sendUnhandled
[View full stacktrace](https://app.bugsnag.com/hackbright/blt2brkjs/errors/5a552b25099599001923d554?event_id=5a552b25099599001923d553&i=gh&m=ci)
*Created automatically via Bugsnag* |
MicrosoftDocs/azure-docs | 681357294 | Title: Need to include password not expire details
Question:
username_0: This article requires details/links to set break-glass account passwords to not expire:
https://docs.microsoft.com/en-us/microsoft-365/admin/add-users/set-password-to-never-expire?view=o365-worldwide
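For reference, one way to do this at the time was the MSOnline PowerShell module described in that article. A hedged sketch (the UPN below is only a placeholder):
```powershell
# Requires the MSOnline module and a prior Connect-MsolService
Set-MsolUser -UserPrincipalName "emergency-admin@contoso.com" -PasswordNeverExpires $true
Get-MsolUser -UserPrincipalName "emergency-admin@contoso.com" | Select-Object UserPrincipalName, PasswordNeverExpires
```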
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: fff5de5d-9a97-73ff-4694-79027720b5c3
* Version Independent ID: 566a8387-bbdd-8e35-13cc-f2d891bd5608
* Content: [Manage emergency access admin accounts - Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-emergency-access)
* Content Source: [articles/active-directory/users-groups-roles/directory-emergency-access.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/users-groups-roles/directory-emergency-access.md)
* Service: **active-directory**
* Sub-service: **users-groups-roles**
* GitHub Login: @markwahl-msft
* Microsoft Alias: **curtand**
Answers:
username_1: @username_0 Thanks for your feedback! We will investigate and update as appropriate.
username_2: #reassign:username_2 Part of the maintenance of the account is password hygiene. This is covered in the doc https://docs.microsoft.com/azure/active-directory/roles/security-emergency-access#validate-accounts-regularly #please-close
Status: Issue closed
|
yanghuan/CSharp.lua | 893883415 | Title: Adding an extension method to a delegate compiles incorrectly
Question:
username_0: ```C#
static class Init
{
public static void Test()
{
TestA testA = new TestA();
testA.action += testA.Add;
testA.action?.Invoke();
}
}
class TestA
{
public Action action;
}
static class TestAExtention
{
public static void Add(this TestA a)
{
Log.Info("test Add ");
}
}
```<issue_closed>
Status: Issue closed |
vlang/v | 1022322908 | Title: Multitype reflection during compile-time
Question:
username_0: Are there ways (or is there a need) to support multi-type reflection like the code below?
```
fn check_type<T>() {
$for field in T.fields{
$if field.typ in [f64 int bool []f64] {
println(field.name)
}
}
}
```
or
```
type UserData = f64 | []f64 | int | bool
fn check_type<T>() {
$for field in T.fields{
$if field.typ in UserData {
println(field.name)
}
}
}
``` |
node-red/node-red-dashboard | 673878191 | Title: Dashboard Gauge node: missing value format when "level" (water level) is selected
Question:
username_0: Animated GIF screenshot of issue [here on the Node-RED forum](https://discourse.nodered.org/t/possible-bug-missing-value-format-field-from-gauge-node-type-level/31193/3)
- Edit a Dashboard Gauge node in Node-RED (Chrome / Firefox, not tested on other browsers).
- Set the gauge type to "level" (the water meter style visualisation)
- The form field labelled "value format" disappears. This should not disappear, as it is still required for this gauge type.
- This is because the containing DIV with id="ui-gauge-format" has its style set to display:none, when you set the gauge type to "Level"
(A workaround for setting the value format is to change the gauge to any other type, edit the field, then set it back to level.)
I am running NodeRED v1.1.2
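If you only need the field visible right away, something along these lines in the browser console should also work while the edit dialog is open (untested sketch; it relies on the `ui-gauge-format` id mentioned above):
```js
// Reveal the hidden "value format" field in the gauge node edit dialog
document.getElementById('ui-gauge-format').style.display = 'block';
```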
Answers:
username_1: Right - so the reason it is hidden is that the field doesn't accept Angular filters like the other gauges do - so you can't do things like truncate the number of decimals etc... as this is a completely different library.
So if we enable it so you can use {{msg.foo}} then someone will no doubt try {{msg.foo | number:0}} and then that won't work - so which is worse... ?
Status: Issue closed
|
siamezzze/bachelor_presentation | 89319177 | Title: Too many lists
Question:
username_0: Masha! I would advise you to break up the text with some visual elements (pictures, tables, and so on). Especially since your topic is pattern recognition.
Answers:
username_1: Yep, there is that. What is here now is closer to an outline of the presentation.
I am actually thinking about the illustrations right now. Besides the ellipses themselves, obviously, I want to add some plots of the running time; right now I am reading up on what to make them with.
Thank you.
username_1: I suspect illustrations from the papers I am using are not allowed, even with the sources cited?
username_0: Any illustrations are fine. Citing the sources is optional. This is not the thesis text itself, it is just a technical means of conveying ideas to the audience. Roughly speaking, you could open someone else's paper at the defense and demonstrate something from it, but that takes too long, which is why presentations are used.
username_1: It should be better now, I have added images and formulas. I will think about what else to do.
Status: Issue closed
|
NickCH-K/vtable | 1056701140 | Title: Display p values as opposed to F values in sumtable
Question:
username_0: Hi! Is there a way we could modify ```independence.test``` such that ```sumtable()``` displays p values as opposed to F values from the regressions?
I tried modifying ```result$`Pr(>F)`[1]``` / ```result$`F value`[1]``` in ```independence.test```, but it didn't seem to work.
Thank you!
Status: Issue closed
Answers:
username_1: Check the `format` option in `independence.test`. `format = '{pval}'` will return p-values.
username_1: And you can pass this option along in `sumtable` by sending the `independence.test()` options as a named list. So `sumtable(group.test = list(format = '{pval}'))`, or for a working example:
`sumtable(mtcars, vars = c('mpg'), group = 'am', group.test= list(format = '{pval}'))` |
xinpianchang/fe-weekly | 761993907 | Title: webpack configuration: optimization
Question:
username_0: ## optimization.runtimeChunk
* [What is runtimeChunk](https://www.jianshu.com/p/714ce38b9fdc)
* In simple terms: a list containing the mapping relationships between chunks
* [runtimeChunk configuration](https://webpack.docschina.org/configuration/optimization/#optimizationruntimechunk)
* Three possible configurations:
1. Default: false
The runtime is embedded directly into each entry chunk
2. true <=> 'multiple' <=> { name: entrypoint => `runtime~${entrypoint.name}`}
Adds an extra chunk containing only the runtime for each entry point
3. 'single' <=> { name: 'runtime' }
Creates a single runtime file shared by all generated chunks (see the sketch below)
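A minimal configuration using the 'single' option could look like this (illustrative sketch only; the file and entry names are made up):
```js
// webpack.config.js (illustrative sketch)
module.exports = {
  entry: { app: './src/index.js' },
  output: { filename: '[name].[contenthash].js' },
  optimization: {
    // extract the chunk-mapping runtime into its own shared file
    runtimeChunk: 'single',
  },
};
```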
* Why override the default value of false
For example: extract this 'list' of chunk mappings from the output bundle (e.g. app.111111.js) into a separate runtime~app.222222.js. When the mapping changes, the hash of app stays the same and only the hash of runtime~app changes, so users can take full advantage of the browser cache: there is no need to re-request app.111111.js, only runtime~app.333333.js |
asinha94/asos | 1162497189 | Title: Draw basic shape with Multiboot provided Video Memory
Question:
username_0: Now that multiboot has been set up to provide us with memory, we should actually make use of it by drawing something, making some boxes, adding some colour, etc...
Answers:
username_0: New approach is to have multiboot detect if we have a screen or not, and use the regular text mode terminal. Graphics is its own can of worms and I think I want to migrate to C++.
DefinitelyTyped/DefinitelyTyped | 410250871 | Title: @types/aws-lambda: There is no types for the communication with Application Load Balancer
Question:
username_0: - [x] I tried using the `@types/aws-lambda` package and had problems.
- [x] I tried using the latest stable version of tsc. https://www.npmjs.com/package/typescript
- [x] I have a question that is inappropriate for [StackOverflow](https://stackoverflow.com/). (Please ask any appropriate questions there).
- [x] [Mention](https://github.com/blog/821-mention-somebody-they-re-notified) the authors (see `Definitions by:` in `index.d.ts`) so they can respond.
- Authors: @dalen @trevor-leach @loikg @pl0xy @daniel-cottone
I am building right now the application with the usage of TypeScript and Serverless.
I replaced the API Gateway with Application Load Balancer which triggers the lambda functions directly.
I found that there are missing interfaces for this type of communication.
Could you tell me if it is a good observation? Maybe I should replace the library with a different one.
Apart from that, I created simple interfaces which are enough for my usage according to the [documentation](https://docs.aws.amazon.com/lambda/latest/dg/services-alb.html) and the article about the [differences](https://serverless-training.com/articles/api-gateway-vs-application-load-balancer-technical-details/).
```
export interface ApplicationLoadBalancerRequestEvent {
requestContext: ApplicationLoadBalancerRequestContext;
httpMethod: string;
path: string;
queryStringParameters?: { [name: string]: string };
headers?: { [name: string]: string };
multiValueQueryStringParameters?: { [name: string]: string[] };
multiValueHeaders?: { [name: string]: string[] };
body: string | null;
isBase64Encoded: boolean;
}
export interface ApplicationLoadBalancerRequestContext {
elb: {
targetGroupArn: string;
};
}
export interface ApplicationLoadBalancerResponse {
statusCode: number;
statusDescription: string;
isBase64Encoded: boolean;
headers?: { [name: string]: string };
multiValueHeaders?: { [name: string]: string[] };
body: string;
}
```
I suppose that it can be enough for me, but not enough for the whole appropriate communication.
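For illustration, a handler using the interfaces sketched above might look like this (hypothetical usage only; these types were not part of `@types/aws-lambda` at the time):
```ts
// Assumes the ApplicationLoadBalancer* interfaces defined in the snippet above are in scope.
export const handler = async (
    event: ApplicationLoadBalancerRequestEvent
): Promise<ApplicationLoadBalancerResponse> => ({
    statusCode: 200,
    statusDescription: '200 OK',
    isBase64Encoded: false,
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ path: event.path, method: event.httpMethod }),
});
```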
Answers:
username_1: 👍 I'd appreciate these as well.
username_1: In my experience, the event's query params are encoded; worth documenting IMHO.
username_0: Definitely, they are encoded. I will try to prepare something better, more specific in the meantime and create a pull request if it is possible. @username_1 we can talk about it then (what should be added or removed).
username_1: The "differences" link is definitely a good resource, and IMO worth adding as well. However, an official resource would be great, since AWS may change the interface.
username_2: The ALBEvent and the response is also documented here
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html#receive-event-from-load-balancer
username_0: @username_1 @username_2 the pull request is ready to review: #33763. Feel free to comment on my work.
username_0: I am closing this issue due to a successful merge of pull request #33763
Status: Issue closed
|
MyersResearchGroup/iBioSim | 187772563 | Title: CHANGE: add port to species reference?
Question:
username_0: Consider adding port to species references, if we can figure out what that means for replacements and deletions.
Chris
Answers:
username_0: If parent is deleted and child is deleted, both are removed from the model and references are invalid.
If parent is deleted and child is replaced, both are removed from the model; references to the child object point to the new object, references to the parent are invalid.
If parent is replaced and child is deleted, both are removed from the model; references to the parent object point to the new object, references to the child are invalid.
If parent is replaced and child is replaced, both are removed from the model and references to both objects point to the new objects. |
mongo-dart/bson | 930874651 | Title: Index out of range when trying to deserialize
Question:
username_0: When trying to use the library using the following code
```
final serialized = BSON().serialize({'id': 42});
final deserialized = BSON().deserialize(serialized);
```
I get
```
Unhandled Exception: RangeError (index): Index out of range: index should be less than 13: 17
#0 Uint8List.[] (dart:typed_data-patch/typed_data_patch.dart:2221:7)
#1 BsonBinary.readByte (package:bson/src/types/binary.dart:256:29)
#2 BsonMap.extractData (package:bson/src/types/map.dart:13:27)
#3 BSON.deserialize (package:bson/src/bson_impl.dart:19:20)
#4 main (package:cool_tests/main.dart:35:31)
```
Am I using it in a wrong way?
Answers:
username_1: You should reset the buffer offset before deserializing:
```dart
final serialized = BSON().serialize({'id': 42});
serialized.offset = 0;
final deserialized = BSON().deserialize(serialized);
```
I will change it so that `deserialize` will do it internally without an explicit call.
username_0: @username_1 Thank you, that works. I did not think of that!
Status: Issue closed
|
ajaniv/django-core-utils | 151444241 | Title: Django rest framework does not invoke model clean
Question:
username_0: To reuse model clean logic, one needs to override the serializer's validation method,
create an instance of the model, and invoke clean.
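A minimal sketch of that workaround might look like the following (the model and serializer names are made up; this is not part of django-core-utils):
```python
from django.core.exceptions import ValidationError as DjangoValidationError
from rest_framework import serializers

from myapp.models import MyModel  # hypothetical model


class MyModelSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyModel
        fields = '__all__'

    def validate(self, attrs):
        # Build a transient instance so the model's clean() logic runs.
        instance = MyModel(**attrs)
        try:
            instance.clean()
        except DjangoValidationError as exc:
            raise serializers.ValidationError(exc.messages)
        return attrs
```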
It does not feel correct to create an instance in order to reuse the model validation. |
FHBielefeld-IFM-WS1718-SWEng1/WebAPI | 290107992 | Title: DELETE on the task route returns a 500 error
Question:
username_0: When executing DELETE on user/task with body
```
{
"id": 30
}
```
I had assumed that the id in this case refers to the task id, as it is also returned by POST?
The response is 500 and:
```
{
"error": "Unknown column 'userid' in 'where clause'"
}
```<issue_closed>
Status: Issue closed |
serde-rs/json | 210686219 | Title: Replace format_escaped_char with a safe implementation
Question:
username_0: #267 added this implementation:
```rust
let mut buf = [0; 4];
write!(&mut buf[..], "{}", value).unwrap();
let s = unsafe { str::from_utf8_unchecked(&buf[0..value.len_utf8()]) };
format_escaped_str(wr, formatter, s)
```
Once we drop support for <1.15.0, the implementation can be safe:
```rust
format_escaped_str(wr, formatter, value.encode_utf8(&mut [0; 4]))
```<issue_closed>
Status: Issue closed |
mesonbuild/meson | 789926268 | Title: Support explicitly enabling/disabling optional dependencies via a cmdline option?
Question:
username_0: continuation of #2452
Seems like this is not solved. #2411 was closed and not merged.
Even with #3376 it actually requires `meson.build` to do the correct thing (which it often does not or the build script is a mix of correct and wrong).
You probably want to have a look at `CMAKE_DISABLE_FIND_PACKAGE_<PackageName>` from CMake to actually learn from it.
On a different but related issue: I also saw people suggesting/looking for something like `<PackageName>_DIR` from CMake
Answers:
username_1: `<package>_DIR` is the directory containing the foo-config.cmake file. meson simply uses pkg-config, which is a standard and comes with PKG_CONFIG_PATH.
`CMAKE_DISABLE_FIND_PACKAGE_<package>` is a terrible, terrible, terrible UI because it does spooky action at a distance and what you actually want is to disable a feature, not the dependency the feature depends on, because otherwise you have no clue why it's being disabled.
"But people don't implement the ability to choose in their meson.build, therefore let's add new core meson features" is not really a convincing argument IMO.
FWIW, I was initially introduced to the fact that this cmake "feature" exists, by someone who was calmly explaining to me why "our project does not need options, see, you could just use CMAKE_DISABLE_FIND_PACKAGE for everything if you know which dependencies are internally used by the features you don't want".
I took a hard pass on that one. :(
username_0: Read the docs: https://cmake.org/cmake/help/latest/module/FeatureSummary.html.
In the end it doesn't matter how good a build system claims to be if the users writing the build scripts don't know how to do it properly. On the other hand the people packaging such libraries need escape hatches to not patch the hell out of every stupid build script they encounter (and no I don't have the time to correct every build script and submit the required changes upstream.).
username_1: That requires the project to both agree to support printing options/packages (and annotate them extensively) and not care about doing it properly with options.
It also requires you to do a full configure cycle, get it wrong, then figure out what to do next time.
username_1: I have actually, for real, told people "cmake sucks, don't use cmake. Cmake lets you do nonsense like DISABLE_FIND_PACKAGE instead of proper options. Use meson instead."
It would feel pretty weird to me if meson then went around and added support for this misfeature.
username_0: It is not nonsense it is just your inability to understand and use it correctly. People will always write bad build scripts no matter which buildsystem is used. I have seen my fair share of broken buildscripts in cmake/meson/autotools/qmake/plain make. It doesn't matter in the end.
username_2: So, if I understand correctly, the main issue here is that some projects do stuff like:
```meson
someDep = dependency('someDep', required: false)
if someDep.found()
# Do stuff you want to explicitly disable even if someDep is found
# and the user did not provide another toggle for this condition
endif
```
If this is the case, I would suggest trying to find a solution for this particular use case, instead of arguing about exactly why, or whether, `CMAKE_DISABLE_FIND_PACKAGE` is bad. Also, we could argue about whether we want to actually do something about this / whether we actually recognize this as a problem. Debating some CMake design decisions won't be to anyone's benefit.
username_1: It is my inability to philosophically view it as a good thing. Please don't tell me I don't "understand" it -- I understand it just fine.
username_2: I can think of two solutions to fix this:
- implementing `CMAKE_DISABLE_FIND_PACKAGE` as options (highly unlikely) or in cross/native files (not ideal but more likely)
- Making use of [`meson.override_dependency`](https://mesonbuild.com/Reference-manual.html#meson-object) to return a not found dependency and making this more accessible.
You should be, theoretically, already be able to do what you want with `override_dependency` and a not found dependency or by creating a super project including the project you want to build as a subproject.
```meson
project('root', ['c'])
meson.override_dependency('zlib', dependency('erferfergwrrgbwrbwr', required: false))
subproject('p1')
```
```meson
project('p1', ['c'])
dep = dependency('zlib', required: false)
if dep.found()
message('P1: FOUND')
else
message('P1: NOT FOUND')
endif
```
This is obviously not an ideal solution, but it shows that meson can theoretically already do this.
Also, can we please stop arguing about what CMake does, whether it is correct, arguing about semantics, and *focus on actually fixing the issue by providing a solution*.
username_1: Given the actual ticket we are in explicitly says to use `CMAKE_DISABLE_FIND_PACKAGE` for inspiration on what cmake is doing that the OP wishes meson did, I think it's pretty relevant why it's bad.
And, my entire argument is "here are the reasons why we don't want to do something about it, because it's bad when cmake does it and it will be bad if meson does it too".
There are two different ways to solve this in meson today:
- submit a project bug report "do not do stuff like this"
- decide you don't have time for that, you need to override it for your own use but not contribute upstream; in this case, `sed` meson.build from `dependency('foo', required: false)` to `dependency('', required: false)`
username_0: ```
someDep = dependency('someDep', required: false)
if someDep.found()
# Do stuff you want to explicitly disable even if someDep is found
# and the user did not provide another toggle for this condition
endif
```
@username_2 exactly
unfortunately some meson.builds try very hard in finding optional deps e.g. (as seen in gtk 4.0.1):
```
someDep = dependency('someDep', required: false)
if not someDep.found()
someDep = cc.find_library('someDep' <or whatever arguments are required>)
endif
if someDep.found()
# Do stuff you want to explicitly disable even if someDep is found
# and the user did not provide another toggle for this condition
endif
```
so I need a way to deactivate the `dependency()` call and the `find_library()`. CMake offers a way to deactivate both without ever touching the build scripts.
I don't mind the deactivation in native/cross files since this is easily added to the native/cross files generated within vcpkg to run the build. Simply add a [override_dependency] section to it? (although this doesn't take care of the find_library call.)
@username_1 hate on CMake all you like.... due to your response I cannot take you seriously any more and will simply ignore you.
The correct call for dependency would probably have been something like : `dependency('someDep', linked_option: '<someoption>')` but that ship sailed already.
username_2: I know that this is a heated discussion, but can we please stop with stuff like this?
---
Another question would be: Do we really need `required: false` anymore if we have features? As in can we safely deprecate it and force users to use our semi new feature options? This way this issue would solve itself over time, we won't need to implement something like `CMAKE_DISABLE_FIND_PACKAGE` and would arguably be a better design overall.
I am fully aware that this change would take years to have any effect and offers no short term solution.
username_2: Also, @username_0 could you please update your first comment (and maybe also the title of this issue) with a better description of this issue so other people have a better understanding of what it actually is you want to solve here?
username_0: In CMake I consider raw `find_library` calls outside a FindModule a code smell.
I don't know if meson has something similar to cmake's FindModules.
username_2: Meson does not have modules. Using the `dependency` function is usually recommended since it also takes care of include directories, version, etc. `find_library` is usually only used for libraries without `*.pc` **and** `*.cmake` files (and there is no built in meson dependency that handles libraries which are an absolute pain [like boost](https://www.boost.org/doc/libs/1_75_0/more/getting_started/windows.html#library-naming)).
A common use case is `cc.find_library('dl')`. This is usually not a code smell but `cc.find_library('boost_filesystem-mt-x86-64')` would be.
username_0: 1. Why would you ever need to ask this question? (and not have an option for it `with_dl`)
2. wouldn't it be better as builtin dep: `dependency('dl')`
username_2: I guess this depends on the definition of module. We don't have modules in the sense that there is no way to include custom meson code that does arbitrary (and potentially stupid) stuff to look up a dependency. We try our best to encourage the use of `*.pc` files since they are not Turing complete, easy to read and write, and actually designed to define how a dependency should be used.
For your last two points:
Ideally yes, but we would have to maintain a lot of additional code in this case if we want to support all/most common libraries like this. And even worse: We would most likely just end up calling `cc.find_library` from python instead of meson which solves basically nothing. We also can't remove `find_library` since some projects do have a legitimate use for it and we can't put everything into system dependencies. We explicitly also don't want another `FindXYZ.meson` eco-system that ends up copy-pasted around. And finally, `cc.find_library('dl')` works well enough as a common case.
username_0: Mark my words: it will happen sooner or later..... if your internal dependency lookup is unstable, users will want that. You are otherwise locking users in to a certain version of meson, especially since you don't fear breaking existing meson.build files. (and this is only one reason they exist in CMake)
So, enough time invested in meson, I now return to the things I care more about ;)
username_1: We definitely need it. Example from one of my projects
```
gpgme = dependency('gpgme',
required : false,
static : get_option('buildstatic'))
# gpgme recently began providing a pkg-config file. Create a fake dependency
# object if it cannot be found, by manually searching for libs.
if not want_gpgme.disabled() and not gpgme.found()
gpgme_config = find_program('gpgme-config', required : want_gpgme)
if gpgme_config.found()
gpgme_version = run_command(gpgme_config, '--version').stdout().strip()
needed_gpgme_version = '>=1.3.0'
if gpgme_version.version_compare(needed_gpgme_version)
gpgme_libs = [
cc.find_library('gpgme',
dirs : [get_option('gpgme-libdir')]),
cc.find_library('gpg-error',
dirs : [get_option('gpgme-libdir')]),
cc.find_library('assuan',
dirs : [get_option('gpgme-libdir')]),
]
gpgme = declare_dependency(dependencies : gpgme_libs)
endif
endif
endif
```
Or one might look for `dependency('python3-embed', required: false)` and fall back to `dependency('python3')` for older versions of python.
Both of these cherry-picked examples are fixed in later versions of meson by special-casing config-tool handling or `import('python3').find_installation()` with the embed keyword. But users need this flexibility in the general case (plus why use the python module just for this).
username_2: Fair point, I was hoping that #4595 might help here (in 99% of all cases)...
username_0: ah yes..... that nonsense looks exactly like the things Qt5 does.
the lookup rules are basically **_always_** incomplete and lacking.
The important thing is to give the user a way to explicitly define the `whatever_dep` var outside of the `meson.build` |
Jimbly/regex-crossword | 873815699 | Title: Greedy matching breaks a clue
Question:
username_0: For `(...?)\1*` the pattern matching is greedy, which will always accept strings of the form ABCABCABCABC, but fail to match strings like ABABABABABAB
This is because for `?` it tries to match the maximum number of letters possible (ABA), and then the rest of the regex doesn't work anymore. Instead, `?` should probably try both options and return True if either is True.
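For comparison, a backtracking engine (here the built-in JavaScript one, used purely for illustration) retries the optional third character and accepts both forms:
```js
// Node.js / browser console
const re = /^(...?)\1*$/;
console.log(re.test('ABCABCABCABC')); // true: the group captures 'ABC'
console.log(re.test('ABABABABABAB')); // true: backtracking shrinks the group to 'AB'
```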
Status: Issue closed
Answers:
username_1: Thanks, good catch! I guess in JavaScript we *do* need to prepend/append `^` and `$` to do a full match... |
facebook/react-native | 125126144 | Title: flex 1.0 doesn't work if the height of parent is determined by parent's siblings
Question:
username_0: 1. **body**: `flex-direction: column`
2. **grandparent**: `align-items: stretch`
3. **parent**: `flex-direction: column;`
4. **me**: `flex: 1.0`
5. **uncle**: `height: 100px`
`uncle` has a fixed height of `100px`, so the height of `grandparent` is the same as `uncle`, and `parent` should also be 100px high.
**The height of `me` is expected to be `100px`, however it's not.**
**Works well in chrome**
<img src="https://raw.githubusercontent.com/username_0/Hi-Git/master/chrome.png" height="120">
**React/Layout.c**
<img src="https://raw.githubusercontent.com/username_0/Hi-Git/master/simulator.png" height="320">
### 2. source
```html
<!DOCTYPE html>
<html>
<head>
<style>
div {
display: flex;
}
.body {
background-color: #fff;
height: 300px;
flex-direction: column;
}
.grandparent {
align-items: stretch;
}
.uncle {
height: 100px;
background-color: #0ae;
}
.parent {
background-color: #fe0;
flex-direction: column;
}
.me {
flex: 1.0;
background-color: #0fc;
}
</style>
</head>
<body id="body" class="body">
<!-- grandparent -->
<div class="grandparent">
<!-- parent -->
<div class="parent">
<div class="me">
<label>"flex: 1.0" doesn't work on "me".</label>
</div>
</div>
<!-- uncle-->
<div class="uncle">
<label>This is uncle</label>
</div>
</div>
</body>
</html>
```
Answers:
username_1: +1
Status: Issue closed
username_2: React Native flexbox isn't supposed to work precisely the same as flexbox in the browser. It's operated by css-layout so it's slightly different. We've improved the docs a lot since January, so I think this is more clear now. I'm going to close this issue but if you think there's still an area where the RN behavior doesn't match the docs, then I think it would be great to open a new issue. A reproduction of the problem on rnplay.org is the best way to be helpful for that - that's somewhat more useful than comparison code to run in a browser. Thanks for pointing this issue out. |
tensorflow/tensorflow | 689104737 | Title: Tensorflow only using a fraction of the GPU power available (NLP model, dual GPU, tf.data pipeline)
Question:
username_0: Hi!
Somewhat recently I got a new training server which is really fast, but I'm currently having trouble utilizing its GPU and CPU to their full potential when training my model.
I'm training an NLP classification model with a string as input and a category as target. When I set the batch size to a reasonably small number, like 16 or 32, only around 10% of each of the 2 GPUs as well as of the CPU is used. Only when I size the batches up to 4096 does the CPU get close to 100% load, but the GPUs still only hit 7-8%. Training is really fast then but extremely inefficient because such batch sizes are b\*\*\*s\*\*\*, so the model converges only very slowly.
I found a sweet spot around bs=256, where only around 20% CPU and 10% GPU load is achieved and gradient descent is still somewhat efficient, which means I get the best results in terms of wall time.
The data pipeline is implemented with [tf.data](https://tf.data), reading the data from several CSVs in parallel from an SSD. I couldn't find any bottlenecks so far.
This is somewhat frustrating because I can only make use of a fraction of the full potential of my new machine. Any ideas on how to improve this?
I'm grateful for any help.
​
​
My specs:
\- AMD Ryzen Threadripper 3960X 24-Core Processor
\- 64 GB RAM
\- two NVIDIA GeForce RTX 2070 SUPER with 8192MiB each
\- Win 10 (unfortunately, the ASrock Creator TRX40 motherboard we bought is currently incompatible with Linux, wtf...)
\- TF 2.1.0 installed from binary (anaconda)
\- Python 3.7.7
\- CUDA Version 10.2.89
​
The relevant part of my code:
class dataset_loader():
def __init__(self, data_dir, csv_file, batch_size, cycle_length, tokenizer=None, n_threads=1, n_prefetch=1):
self.batch_size = batch_size
self.cycle_length = cycle_length
self.n_threads = n_threads
self.n_prefetch = n_prefetch
self.output_size = 3943 ## !!!!!!!!!!! TODO: for testing purposes only, please do not hardcode !!!!!!!!!!!
if(tokenizer is None):
self.tokenizer = tf.keras.preprocessing.text.Tokenizer(char_level=True)
self.tokenizer.fit_on_texts(strat_search_words_with_beginnings)
else:
self.tokenizer = tokenizer
char_dict = list(eval(self.tokenizer.get_config().get("word_index")).keys())[:-1] # hier vorletztes weglassen, da ein Out-Of-Vocabulary slot bei StaticVocabularyTable zwingend angegeben werden muss
char_index = list(eval(self.tokenizer.get_config().get("word_index")).values())[:-1]
char_table_init = tf.lookup.KeyValueTensorInitializer(char_dict, char_index, value_dtype=tf.int64)
self.char_table = tf.lookup.StaticVocabularyTable(char_table_init, 1)
[Truncated]
mirrored_strategy = tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
with mirrored_strategy.scope():
model = keras.models.Sequential([
# keras.layers.GRU(128, return_sequences=True, batch_input_shape=[batch_size, None, max_id+1]),
keras.layers.GRU(128, return_sequences=True, input_shape=[ None, train_data_loader.max_id+1], use_bias=False),
keras.layers.GRU(128, return_sequences=True, use_bias=False),
keras.layers.GRU(128, use_bias=False),
keras.layers.Flatten(),
keras.layers.Dense(train_data_loader.output_size, activation="softmax")
])
model.compile(loss=[focal_loss_umbertogriffo.categorical_focal_loss(alpha=.25, gamma=2)], optimizer="adam", metrics=['accuracy'])
callbacks = list()
callbacks.append(keras.callbacks.EarlyStopping(patience=2))
callbacks.append(keras.callbacks.ModelCheckpoint(filepath = os.path.join(data_dir, "checkpoints"), save_best_only=True))
history = model.fit(train_data_loader.get_dataset(), validation_data=valid_data_loader.get_dataset(), epochs=25, callbacks = callbacks)
Answers:
username_1: @username_0 We do not support Anaconda builds as we don't have complete knowledge of how they were built. Maybe you need to post it in the Anaconda repository.
Generally, pip builds for 2.1 are built with CUDA 10.1 (you mentioned you have CUDA 10.2).Please check [here](https://www.tensorflow.org/install/source_windows#gpu) for more details.
Version | Python version | Compiler | Build tools | cuDNN | CUDA
-- | -- | -- | -- | -- | --
tensorflow_gpu-2.3.0 | 3.5-3.8 | MSVC 2019 | Bazel 3.1.0 | 7.4 | 10.1
tensorflow_gpu-2.2.0 | 3.5-3.8 | MSVC 2019 | Bazel 2.0.0 | 7.4 | 10.1
tensorflow_gpu-2.1.0 | 3.5-3.7 | MSVC 2019 | Bazel 0.27.1-0.29.1 | 7.4 | 10.1
Two options are
(1) post it in Anaconda's repo where experts related to Anaconda will resolve your issue, or
(2) uninstall TF and the CUDA drivers, clean up any remaining files related to CUDA/TF, restart, then reinstall the pip version freshly (a rough sketch below).
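For option (2), the reinstall itself could look roughly like this (adjust the version to the table above; driver cleanup steps depend on your setup):
```bash
pip uninstall -y tensorflow tensorflow-gpu
pip install tensorflow-gpu==2.1.0   # official wheels for TF 2.1 target CUDA 10.1
```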
Hope it helps. Please let us know how it progresses. Thanks! |
bcgov/api-specs | 175801815 | Title: Router planner: change startup to operate against individual GeoJSON inputs
Question:
username_0: There are new street load file conventions created for geocoder v2.1. Modify startup to accommodate the individual inputs now available instead of the old street_load_other.json which is being deprecated. Also it might be prudent to allow the router to work off of street_load_street_segments.json (note the file name may have changed) and have a fall back to street_load_street_segments_post.json
Answers:
username_1: Confirmed in Test
username_2: Verified in TEST migration on Oct 21, 2016
Status: Issue closed
|
GothamElections2017/RandomThoughts | 380702158 | Title: Gross Domestic Product by State 2nd quarter 2018 https://t.co/kl9t2zOz4z
Question:
username_0: <blockquote class="twitter-tweet">
<p lang="en" dir="ltr" xml:lang="en">Gross Domestic Product by State, 2nd quarter 2018 <a href="https://t.co/kl9t2zOz4z">https://t.co/kl9t2zOz4z</a></p>
— <NAME> (@ge_ReedRichards) <a href="https://twitter.com/ge_ReedRichards/status/1062700575119343616?ref_src=twsrc%5Etfw">November 14, 2018</a>
</blockquote>
<br>
<br>
November 14, 2018 at 05:35AM<br>
via Twitter |
mgks/Android-SmartWebView | 746128913 | Title: Not receiving FCM
Question:
username_0: I have tried to set up FCM for the app, but it does not seem to be working. I have followed the docs and nothing seems to work; it does not appear to be connecting to the Firebase console, as the installed device is not being added.
Answers:
username_1: adding the device to what exactly?
also, have you tried sending notification from the firebase panel first?
use the unique FCM_TOKEN present in the cookie & log to push notification from panel. |
thecodingmachine/react-native-boilerplate | 1090590196 | Title: Failed to build iOS project. We ran "xcodebuild" command but it exited with error code 66
Question:
username_0: **Describe the bug**
I tried to use the simulator on macOS and this is what I got

**To Reproduce**
Steps to reproduce the behavior:
1. `npx react-native init MyApp --template @thecodingmachine/react-native-boilerplate`
**Expected behavior**
A simulator should run usual with app opened
**Screenshots**
see above
**Desktop (please complete the following information):**
- OS: MacOS Monterey 12.0.1
- node: 16
- npm: 8
- simulator: 13.2
- xcode: 13.2.1
**Additional context**
```bash
objc[13648]: Class AMSupportURLConnectionDelegate is implemented in both /usr/lib/libamsupport.dylib (0x20876f130) and /Library/Apple/System/Library/PrivateFrameworks/MobileDevice.framework/Versions/A/MobileDevice (0x10679c2c8). One of the two will be used. Which one is undefined.
objc[13648]: Class AMSupportURLSession is implemented in both /usr/lib/libamsupport.dylib (0x20876f180) and /Library/Apple/System/Library/PrivateFrameworks/MobileDevice.framework/Versions/A/MobileDevice (0x10679c318). One of the two will be used. Which one is undefined.
** BUILD FAILED **
The following build commands failed:
Ld /Users/mcsdev/Library/Developer/Xcode/DerivedData/MyApp-awgefowcnwxvevcchywvfktxcjyl/Build/Products/Debug-iphonesimulator/MyApp.app/MyApp normal (in target 'MyApp' from project 'MyApp')
(1 failure)
```
Answers:
username_1: I had the exact problem. It was probably an m1 Mac issue.
The following solution fixed it for me.
https://stackoverflow.com/questions/66369650/undefined-symbol-protocol-descriptor-for-swift-expressiblebyfloatliteral-issu
username_2: I am on vacations, but I will make a new version with all dependencies up to date and I know that there is a method added for M1 issues in the PodFile. Hope it will fix this kind of issues
username_2: For the first message `Unable to boot...`: I have it also, but it compiles and launches even if this message is present.
As for the build failure: I updated the boilerplate today, see if the issue is still there
Status: Issue closed
username_2: feel free to reopen |
snaekobbi/sprints | 97711186 | Title: [4.3:35] The system shall support the braille code used in Denmark, Finland, Norway, Sweden and Switzerland for 8-dot braille.
Question:
username_0: ### Requirement
[[4.3:35]](http://dev.pef-format.org/dp2/index.xhtml#4.3:35) The system shall support the braille code used in Denmark, Finland, Norway, Sweden and Switzerland for 8-dot braille.
### Tasks
### Test results
Answers:
username_1: I imagine that support for 8-dot braille is available in Liblouis for some, but not all, of the languages DK, FI, NO and DE. Am I right?
For SE we would like to support 8-dot braille. Need to consult Joel for how to do it though (in Dotify?).
Downgrade [4.3:35]<http://dev.pef-format.org/dp2/index.xhtml#4.3:35> to prio 2?
Davy has written that there is no 8-dot braille code in the Netherlands (for NL). [4.3:35B]<http://dev.pef-format.org/dp2/index.xhtml#4.3:35B> should be prio 3 accordingly.
username_0: The problem is actually that for several languages no definition of 8-dot braille exists (or it is not known to the agency).
It seems that Liblouis has support for 8-dot braille in Danish, Finnish and German (although I'm not sure if these tables are following an official standard, e.g. Jukka said 8-dot braille doesn't officially exist for Finnish).
username_1: I could send question about it to the braille authorities (in the Nordic countries at least).
username_0: OK, that would be great.
username_2: For German, there is no official definition of 8-dot braille, but there is a good overview on http://www.braille.ch/eb-id-vf.htm. The mapping seems to correspond to Liblouis' de-de-comp8.ctb table.
username_3: In The Netherlands, most people use either the US or German 8-dot tables. I would be interested in validating and expanding these (especially US), albeit not for immediate use in the embossing process. The US 8-dot table is ‘woefully inadequate’ right now.
username_4: @username_0: Norwegian 8-dot spec is specified by mapping each of the 256 8-dot braille characters to Windows code page 1252. The spec was uploaded earlier and is available here: https://github.com/liblouis/braille-specs/tree/master/norwegian#8-punktstabell
@usama49 and I created a table based on it.
- table: https://github.com/snaekobbi/liblouis/blob/1f703b64cfe070bd04de9acefd59531cdd314827/tables/no-no-8dot.utb
- tests: https://github.com/snaekobbi/liblouis/blob/1f703b64cfe070bd04de9acefd59531cdd314827/tests/harness/no_harness_8dot.txt
@username_0: the tests currently fail mainly because we need to figure out how to fall back to the norwegian 6-dot (uncontracted) table for characters not defined in 8-dot.
There's also a couple of characters that liblouis fails to translate it seems:
- [escape character](http://unicode-table.com/en/001B/)
- [no-break space](http://unicode-table.com/en/00A0/)
Status: Issue closed
|
appirio-tech/topcoder-app | 251414424 | Title: Challenge status in 'My filters'
Question:
username_0: Custom filters should only fetch `active` challenges. (Or make it configurable)
For example: I created a custom filter that shows me the challenges from `CODE` track but when I select it the results are full of completed challenges which makes the custom filter useless for me as I need to scroll and search to find an `active` challenge.
 |
frAGILE-development/2340 | 239656725 | Title: M7 - Google Map Display, Persistence and UML Sequence Diagram
Question:
username_0: **Design**
Now we will create sequence diagrams for the application. Go back to your previous design and analysis information and user stories created earlier, and make them into UML sequence diagrams. This is an individual requirement (as always, you do not have to cover for your team mates). Each person should make a sequence diagram that covers their user story and illustrates the dynamic interactions between objects in the design necessary to accomplish that story.
**Implementation**
We will now add the ability to display locations of the lost and found items using Google Maps. **We will also implement persistence** so that you no longer have to re-enter your information every time you restart the application.
You should create a Map Activity and display a map on the phone. There should be pins showing all the locations of both found and lost items. Clicking on a pin should give you some information about the item from the report.
Persistence can be accomplished in many ways. In class, I will demonstrate four techniques and provide sample code: database (extra credit), custom text file, binary serialization, json serialization. You may use any of these (or a completely different method as long as the data is persisted across program invocations). You may also choose how the loading and saving of data occurs (by user command, automatically on startup and shutdown, or real-time (like a database) where data is saved immediately upon entry).
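As a rough illustration of the JSON serialization option, a small helper along these lines could work (Gson is an assumed dependency and `ItemReport` is a placeholder for your own report class, not part of the assignment):
```java
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.lang.reflect.Type;
import java.util.List;

public class ReportStore {
    private static final Gson GSON = new Gson();

    // Save all item reports to a JSON file
    public static void save(List<ItemReport> reports, String path) throws IOException {
        try (FileWriter writer = new FileWriter(path)) {
            GSON.toJson(reports, writer);
        }
    }

    // Load the item reports back from the JSON file
    public static List<ItemReport> load(String path) throws IOException {
        Type listType = new TypeToken<List<ItemReport>>() {}.getType();
        try (FileReader reader = new FileReader(path)) {
            return GSON.fromJson(reader, listType);
        }
    }
}
```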
**Requirements**
Have a way to navigate to the map screen
Display a map screen with pins at report locations
Clicking on a pin shows details about the report
Data is persisted between runs of the program
**Grading Criteria**
Sequence diagram ........... 40
Previous functions work .... 05
There is a way to navigate to the map display screen… 05
Pins are located at report locations ..... 10
Clicking on pin shows report details ...... 10
Data is persisted in application ..... 20
Javadoc and code design ..... 10
**Submissions**
Electronic submissions should be submitted in .PDF format.
Make sure to write your use case title at the top of your Sequence Diagram<issue_closed>
Status: Issue closed |
lukemurray/data-atom | 63215138 | Title: Uncaught TypeError: Cannot read property 'split' of null
Question:
username_0: [Enter steps to reproduce below:]
1. ...
2. ...
**Atom Version**: 0.187.0
**System**: Mac OS X 10.9.4
**Thrown From**: [data-atom](https://github.com/username_1/data-atom) package, v0.4.0
### Stack Trace
Uncaught TypeError: Cannot read property 'split' of null
```
At /Users/ahmet/.atom/packages/data-atom/lib/data-managers/data-manager.coffee:11
TypeError: Cannot read property 'split' of null
at PostgresManager.DataManager (/Users/ahmet/.atom/packages/data-atom/lib/data-managers/data-manager.coffee:11:27)
at new PostgresManager (/Users/ahmet/.atom/packages/data-atom/lib/data-managers/postgres-manager.coffee:8:7)
at DbFactory.createDataManagerForUrl (/Users/ahmet/.atom/packages/data-atom/lib/data-managers/db-factory.coffee:15:17)
at NewConnectionView.onConnectClicked (/Users/ahmet/.atom/packages/data-atom/lib/data-atom-controller.coffee:86:32)
at NewConnectionView.module.exports.NewConnectionView.connect (/Users/ahmet/.atom/packages/data-atom/lib/new-connection-view.coffee:135:8)
at HTMLButtonElement.<anonymous> (/Applications/Atom.app/Contents/Resources/app/node_modules/space-pen/lib/space-pen.js:181:36)
at HTMLButtonElement.handler (/Applications/Atom.app/Contents/Resources/app/src/space-pen-extensions.js:112:34)
at HTMLButtonElement.jQuery.event.dispatch (/Applications/Atom.app/Contents/Resources/app/node_modules/space-pen/vendor/jquery.js:4681:9)
at HTMLButtonElement.elemData.handle (/Applications/Atom.app/Contents/Resources/app/node_modules/space-pen/vendor/jquery.js:4359:46)
```
### Commands
```
-8:57.6 pane:show-item-1 (atom-text-editor.editor)
22x -8:56.1 core:undo (atom-text-editor.editor)
-8:55.0 core:save (atom-text-editor.editor)
-8:54.0 pane:show-item-2 (atom-text-editor.editor)
-8:51.9 rspec:run (atom-text-editor.editor)
3x -1:37.1 core:close (atom-text-editor.editor.is-focused)
-1:27.3 core:confirm (atom-text-editor.editor.mini)
-1:17.9 core:select-all (atom-text-editor.editor.mini)
-1:16.9 core:confirm (atom-text-editor.editor.mini)
-1:06.9 core:select-all (atom-text-editor.editor.mini)
-1:04.9 core:confirm (atom-text-editor.editor.mini)
-0:26.8 data-atom:execute (atom-workspace.workspace.scrollbars-visible-when-scrolling.theme-monokai.theme-flatland-dark-ui)
-0:04.7 core:select-all (atom-text-editor.editor.mini)
```
### Config
```json
{
"core": {
"projectHome": "/Users/ahmet/Projects",
"disabledPackages": [
"atom-bitcoin",
"atom-ctags",
"linter"
],
"themes": [
"flatland-dark-ui",
[Truncated]
autocomplete-plus, v2.6.0
autocomplete-ruby, v0.0.1
data-atom, v0.4.0
file-icons, v1.5.1
flatland-dark-ui, v0.2.3
highlight-line, v0.10.1
highlight-selected, v0.9.1
language-rspec, v0.3.0
linter-rubocop, v0.2.2
linter-ruby, v0.1.4
monokai, v0.12.0
pain-split, v1.3.1
rails-rspec, v0.3.1
rspec, v0.1.9
tabs-to-spaces, v0.9.0
travis-ci-status, v0.13.0
# Dev
No dev packages
```
Answers:
username_1: Fixed in 0.6 (should've been 0.5)
Status: Issue closed
username_0: :+1:
username_2: Getting this in 0.9 on windows
username_1: Any more details? Can't reproduce on my Mac
username_2: I just realized it's probably my fault but the error should still be handled. Had some trouble connecting so i tried this:

Hoping it would provide me with some more information where the process got stuck.
username_1: Ahh ok. I have a fix. It will be in 0.9.1 it should try to connect with no Auth and the results area should show you the connection error if there is one. Thanks for the quick feedback! |
mariobuikhuizen/ipyvuetify | 447709578 | Title: Navigation Drawer open and close
Question:
username_0: I am trying to create a NavigationDrawer but can't see how to show/hide it (I am in a jupyter notebook but don't think that is the issue).
```
def on_click(widget, event, data):
pass # (?)
vnd = v.NavigationDrawer(right=True, children=[v.Btn(color='primary', children=['Drawer button'])])
show_drawer = v.Btn(color='primary', children=['Open'])
show_drawer.on_event('click', on_click)
v.Layout(children=[show_drawer, vnd]) # (?)
```
I see some examples of doing this in Vuetify, but I am not sure how the mapping works of the display method to ipyvuetify (e.g., `https://codepen.io/thiagoaos/pen/KygpxV`)
There is a v_model which I assume maps to the v-model, but it says "!!disabled!!" so not sure if that is working in ipyvuetify?
Any guidance would be appreciated.
Answers:
username_1: You can use either the `v_model` attribute, or the `value` attribute:
```
nav = v.NavigationDrawer(value=False, app=True)
def on_click(*args, **kwargs):
nav.value = not nav.value
btn = v.Btn(children=["click"])
btn.on_event('click', on_click)
```
username_0: Awesome...
Status: Issue closed
username_2: The referenced codepen uses a very old version of vuetify. These examples are up to date: https://vuetifyjs.com/en/components/navigation-drawers
I've adapted your example to the following:
```
def on_click(widget, event, data):
vnd.v_model = not vnd.v_model
drawer_button = v.Btn(color='primary', children=['Drawer button'])
drawer_button.on_event('click', on_click)
vnd = v.NavigationDrawer(v_model=False, absolute=True, right=True, children=[
drawer_button
])
show_drawer = v.Btn(color='primary', children=['Toggle'])
show_drawer.on_event('click', on_click)
v.Layout(children=[vnd, show_drawer])
```
The v_model's initial value '!!disabled!!' means it is not initialized. We can't use null for this, because that's a valid value. Come to think of it !!un_initialized!! is probably more clear.
username_0: :) Disabled confused me. lol |
lathonez/clicker | 161758812 | Title: Setup has changed: static and dynamic platform testing
Question:
username_0: This [PR](https://github.com/angular/angular/pull/8739) breaks the setup documented in [your unit testing walkthrough](http://username_1.github.io/2016/ionic-2-unit-testing/).
Answers:
username_1: Thanks for raising. What's the problem exactly?
username_0: In short, if you follow the walkthrough with the latest angular, it won't work because the module structure changed. I used this and it works:
```typescript
import { setBaseTestProviders } from '@angular/core/testing';
import {
TEST_BROWSER_DYNAMIC_APPLICATION_PROVIDERS,
TEST_BROWSER_DYNAMIC_PLATFORM_PROVIDERS,
} from '@angular/platform-browser-dynamic/testing';
setBaseTestProviders(TEST_BROWSER_DYNAMIC_PLATFORM_PROVIDERS, TEST_BROWSER_DYNAMIC_APPLICATION_PROVIDERS);
```
username_1: I've made [this change](https://github.com/username_1/username_1.github.io/commit/9beed71c32eb67ccd8471c73588a4388bc33122f) which brings the blog in line with the working code in this repo.
What version of angular are you using? ionic are still on .rc1 and I don't move ahead of those guys, more trouble than it's worth.
username_0: I'm using `2.0.0-rc.1`
Status: Issue closed
username_1: Sweet, in which case the above change should sort it.
username_1: Thanks again for raising this @username_0
username_0: Your blog articles have saved me a ton of time. I appreciate what you're doing. |
RocketRobz/RocketVideoPlayer | 748118635 | Title: Stuck on entering directory
Question:
username_0: When I enter the directory my video is in, it is stuck on entering directory.
Answers:
username_1: You probably have a lot of files in that directory then.
username_0: Yes, it is the frames for my video. Is there a way I can fix this?
username_1: Delete them.
Status: Issue closed
|
MycroftAI/mycroft-precise | 600177211 | Title: Got this error pls check
Question:
username_0: Got this error pls check
(.venv) ubuntu@conducive-ringtail:~/mycroft-precise$ sudo python3 -m pip install cython
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 33, in vendored
__import__(vendored_name, globals(), locals(), level=0)
ModuleNotFoundError: No module named 'pip._vendor.cachecontrol'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.6/runpy.py", line 142, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.6/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 22, in <module>
from pip._vendor.requests.packages.urllib3.exceptions import DependencyWarning
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 64, in <module>
vendored("cachecontrol")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 36, in vendored
__import__(modulename, globals(), locals(), level=0)
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
File "/usr/share/python-wheels/CacheControl-0.11.7-py2.py3-none-any.whl/cachecontrol/__init__.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
File "/usr/share/python-wheels/CacheControl-0.11.7-py2.py3-none-any.whl/cachecontrol/wrapper.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
File "/usr/share/python-wheels/CacheControl-0.11.7-py2.py3-none-any.whl/cachecontrol/adapter.py", line 4, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
File "/usr/share/python-wheels/requests-2.18.4-py2.py3-none-any.whl/requests/__init__.py", line 97, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
File "/usr/share/python-wheels/requests-2.18.4-py2.py3-none-any.whl/requests/utils.py", line 11, in <module>
File "/usr/lib/python3.6/cgi.py", line 42, in <module>
import html
File "/usr/lib/python3.6/html/__init__.py", line 6, in <module>
from html.entities import html5 as _html5
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 674, in exec_module
File "<frozen importlib._bootstrap_external>", line 779, in get_code
File "<frozen importlib._bootstrap_external>", line 487, in _compile_bytecode
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-2: invalid continuation byte
[Truncated]
from pip._internal.cli.req_command import RequirementCommand
File "/home/ubuntu/mycroft-precise/.venv/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 15, in <module>
from pip._internal.index.package_finder import PackageFinder
File "/home/ubuntu/mycroft-precise/.venv/lib/python3.6/site-packages/pip/_internal/index/package_finder.py", line 21, in <module>
from pip._internal.index.collector import parse_links
File "/home/ubuntu/mycroft-precise/.venv/lib/python3.6/site-packages/pip/_internal/index/collector.py", line 5, in <module>
import cgi
File "/usr/lib/python3.6/cgi.py", line 42, in <module>
import html
File "/usr/lib/python3.6/html/__init__.py", line 6, in <module>
from html.entities import html5 as _html5
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 674, in exec_module
File "<frozen importlib._bootstrap_external>", line 779, in get_code
File "<frozen importlib._bootstrap_external>", line 487, in _compile_bytecode
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-2: invalid continuation byte
_Originally posted by @username_0 in https://github.com/MycroftAI/mycroft-precise/issues/79#issuecomment-613944505_ |
DispatcherInc/react-native-seekbar-android | 130679869 | Title: Getting weird error
Question:
username_0: 
Answers:
username_1: @username_0 : I recommend taking a quick look at the ProgressBarAndroid implementation -- I adopted a lot of it. Comparing their implementation to the SeekBar will probably point to the problematic code
Let me know if that doesn't help and I will try to take a look later
username_0: @username_1 thanks for the response. I found out that it crashes because of createReactNativeComponentClass. When I replace it with
var requireNativeComponent = require('requireNativeComponent');
at least it won´t crash anymore. Not sure how to solve this.
username_2: Any luck on this? I have the same problem.
username_3: Same Problem. Any Upgrade?
username_0: Since i have updated react-native to v0.20 everything works fine
username_2: Thanks for the update. I will give that a shot since I am currently on v0.18.
username_2: No luck for me, still crashes.
I tried replacing
`var createReactNativeComponentClass = require('createReactNativeComponentClass');`
with
`var requireNativeComponent = require('requireNativeComponent');`
in the index.android.js but it still crashes for me.
username_0: I had rn v0.18 before too. I first removed the node_modules folder, then ran:
watchman watch-del-all
After that I installed only react-native (which might sound weird) with
npm install react-native@0.20
After that run
npm install
then run
react-native start --reset-cache
react-native run-android
Not sure if this helps, but if not, try to upgrade with
react-native upgrade
But don't forget to save your third-party modules/imports in settings.gradle, build.gradle and MainActivity.
Maybe this will solve it....
username_2: Hah okay, thanks.
I ended up using https://github.com/jeanregisser/react-native-slider which seems to fit my needs.
username_4: I'm facing the same issue when i upgrade to [email protected] or later versions.
Has anyone found a solution?
username_1: Hey guys,
Take a look at https://facebook.github.io/react-native/docs/slider.html#content - with React Native 0.24, the slider is now supported on both platforms.
username_5: I'm posting to let people know that this **MYSTERIOUS** bug is caused by a limitation in React Native on the number of subviews on old Android phones (~16-20 views max).
See those 2 issues: https://github.com/facebook/react-native/issues/5404 and https://github.com/facebook/react-native/pull/7416 . Apparently it has been fixed in https://github.com/facebook/react-native/releases/tag/v0.27.0-rc, so let's wait for that.
username_1: Thank you! |
softlayer/softlayer-python | 204950012 | Title: Allow advanced object filters in the slcli call-api
Question:
username_0: ### Expected Behavior
When I use slcli call-api I would like to be able to use betweenDate and some other advanced filters
### Actual Behavior
https://github.com/softlayer/softlayer-python/blob/master/SoftLayer/CLI/call_api.py#L16 actively rejects anything that isn't in a very short list.
### Solutions
1. _build_filters detects if the filter is json, and then just passes it along.
2. Add an "I know what I'm doing" mode (or similar flag) that skips `_build_filters`.
Answers:
username_0: 1. Add `--json-filter` to allow users to submit a raw JSON string to be passed directly as a filter.
2. Add `--json-args` to allow a user to submit a raw JSON string to be passed in as the args.
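For illustration, a call with the proposed flag could look roughly like this (the flag only exists in this proposal, and the `betweenDate` structure below just follows the usual SoftLayer object-filter convention - dates and service are placeholders):
```
slcli call-api Account getVirtualGuests \
    --json-filter '{"virtualGuests": {"createDate": {"operation": "betweenDate",
        "options": [{"name": "startDate", "value": ["01/01/2017 00:00:00"]},
                    {"name": "endDate",   "value": ["02/01/2017 00:00:00"]}]}}}'
```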
Status: Issue closed
|
cyberbotics/urdf2webots | 822832865 | Title: Question: why not introduce a <webots> tag
Question:
username_0: When diving a little bit more into `webots` and this `urdf2webots` parser I started wondering the following:
Why not introduce a `<webots>` XML tag instead of parsing the `<gazebo>` tags? Webots is a different simulator, and your sensor and actuator components are reflected differently in the resulting file (`.proto` file). Wouldn't it be easier (and more complete) to introduce a custom `<webots>` tag that describes the Webots-specific information regarding actuators and sensors?
Looking forward to your thoughts.
Answers:
username_1: Yes, this is a very good idea. We should probably start drafting it here to define its content and see how this would help overcome a number of problems.
username_0: My first thought would simply be to translate all proto props to XML, so:
```
Lidar {
#fields that inherit from the Solid node:
vrmlField SFVec3f translation 0 0 0
vrmlField SFRotation rotation 0 1 0 0
vrmlField SFVec3f scale 1 1 1
vrmlField MFNode children [] # shape and solids fixed to that solid
field SFString name "lidar" # used by wb_robot_get_device()
field SFString model "" # generic name of the solid (eg: "chair")
field SFString description "" # a short (1 line) of description of the solid
field SFString contactMaterial "default" # see ContactProperties node
field MFNode immersionProperties [] # see ImmersionProperties node
field SFNode boundingObject NULL # for collision detection
field SFNode physics NULL # physical properties (Physics node)
field SFBool locked FALSE # to avoid moving objects with the mouse
field SFFloat translationStep 0.01 # step size used by translation manipulator
field SFFloat rotationStep 0.261799387 # step size used by rotation manipulator
field SFFloat radarCrossSection 0.0 # radar cross section of this solid
field MFColor recognitionColors [] # colors returned for this Solid by Cameras with a Recognition node
#fields specific to the Lidar node:
field SFFloat tiltAngle 0.0 # tilt angle of the lasers with respect to the sensor
field SFInt32 horizontalResolution 512 # number of point per revolution per laser
field SFFloat fieldOfView 1.5708 # horizontal field of view of each laser
field SFFloat verticalFieldOfView 0.2 # vertical field of view covered by the lasers
field SFInt32 numberOfLayers 4 # number of laser-layers
field SFFloat near 0.01 # OpenGL near clipping plane (meters)
field SFFloat minRange 0.01 # minimum range (meters)
field SFFloat maxRange 1.0 # maximum range
field SFBool spherical TRUE # to switch between a plane/sphere projection
field SFString{"fixed", "rotating"} type "fixed" # defines whether this is a rotating lidar or not
field SFFloat noise 0.0 # add a noise to the distance values
field SFFloat resolution -1.0 # distance resolution
field SFFloat defaultFrequency 10 # default rotating frequency of the lidar in Hz
field SFFloat minFrequency 1 # minimum rotating frequency of the lidar in Hz
field SFFloat maxFrequency 25 # maximum rotating frequency of the lidar in Hz
field SFNode rotatingHead NULL # Solid rotating head of the lidar
# hidden fields
hiddenField SFVec3f linearVelocity 0 0 0 # (m/s) Solid's initial linear velocity
hiddenField SFVec3f angularVelocity 0 0 0 # (rad/s) Solid's initial angular velocity
}
```
Could be something like this
```
<webots>
<lidar>
<tiltAngle>0.0</tiltAngle>
<horizontalResolution>512</horizontalResolution>
<fieldOfView>1.5708</fieldOfView>
<verticalFieldOfView>0.2</verticalFieldOfView>
<numberOfLayers>4</numberOfLayers>
<near>0.01</near>
<minRange>0.01</minRange>
<maxRange>1.0</maxRange>
<spherical>true</spherical>
<type>fixed</type>
<noise>0.0</noise>
<resolution>-1.0</resolution>
<defaultFrequency>10</defaultFrequency>
<minFrequency>1</minFrequency>
<maxFrequency>25</maxFrequency>
</lidar>
</webots>
```
Just a first example, and maybe we need some reference to URDF links, but this way the user can configure all Webots parameters in the URDF and, if desired, keep their original Gazebo configuration alive so that the URDF can be used with both simulators.
username_1: Yes, that's a very good approach, simple and scalable.
Feel free to go ahead and start the implementation.
We will be happy to review it.
username_0: Thanks! Will try to find time for this within our team in the next sprint. Will get back to you.
username_2: Ideally one can also reference back to an urdf link like this: https://github.com/cyberbotics/urdf2webots/issues/31#issuecomment-544848739
username_3: A different option would be to have a config file which specifies additional components to be added to the proto file.
What do you think about this option?
We thought about implementing this to allow specifying the appearance of our robot's meshes better than in a URDF, since a URDF is limited to a flat color or a texture.
username_0: @username_3 this could still be done when using `<webots>` tags with a reference like @username_2 mentioned right? By introducing additional tags; we could have something like this:
```
<webots reference="link_name">
<sensor type="lidar">
<param1>..</param1>
</sensor>
</webots>
```
and for a material:
```
<webots reference="link_name">
<material>
<!-- optional resources to external resources here -->
</material>
</webots>
```
username_3: I think those are two different ways to solve the same problem. The advantage of putting everything into the URDF is that it is all in one place.
The advantage of putting it in a separate file is that we do not have to worry about breaking any other software by modifying the urdf.
I think in terms of flexibility both would be similarly well working since xml should be powerful enough.
Our problem with this would be that there are multiple visual models in the same link which have different materials.
username_4: This is a great idea, I would use it with xacro, so you could have a single model with a parameter to determine which simulators to render in the urdf.
```
<robot name="robot" xmlns:xacro="http://www.ros.org/wiki/xacro">
  <xacro:arg name="webots" default="true" />
  <xacro:if value="$(arg webots)">
    <xacro:include filename="robot_webots.xacro" />
  </xacro:if>
</robot>
```
username_5: I would love to see a <webots> XML tree implemented. This would give us the freedom to address Webots-specific features without interfering with existing code. The new Webots tag should be ignored by other XML parsers by default, therefore I would not worry too much about having these in the URDF. In general, it is good practice to split Webots-specific definitions (as well as Gazebo stuff) into a different file.
amiller27/OverviewAllWindows | 203372877 | Title: fedora 25 and gnome 3.22 failure
Question:
username_0: I did try to activate it and it shows an error button; I don't know if it's a compatibility problem with GNOME 3.22.
If I try to access the setup menu I get this:
GLib.FileError: Failed to open file '/home/XXX/.local/share/gnome-shell/extensions/OverviewAllWindows@amiller27/schemas/gschemas.compiled': open() failed: No such file or directory
Stack trace:
init@/home/XXX/.local/share/gnome-shell/extensions/OverviewAllWindows@amiller27/prefs.js:15
Application<._getExtensionPrefsModule@resource:///org/gnome/shell/extensionPrefs/main.js:75
wrapper@resource:///org/gnome/gjs/modules/lang.js:178
Application<._selectExtension@resource:///org/gnome/shell/extensionPrefs/main.js:89
wrapper@resource:///org/gnome/gjs/modules/lang.js:178
Application<._onCommandLine@resource:///org/gnome/shell/extensionPrefs/main.js:239
wrapper@resource:///org/gnome/gjs/modules/lang.js:178
main@resource:///org/gnome/shell/extensionPrefs/main.js:377
@<main>:1
Status: Issue closed
Answers:
username_0: ok |
lampepfl/dotty | 232160511 | Title: dotty.epfl.ch/blog alignment issue on phones
Question:
username_0: When in portrait mode the text gets squished against one side. This does not happen in landscape mode. It looks like it is trying to fit some horizontal line on the right.

Answers:
username_0: Can be reproduced in chrome desktop by making the window as narrow as possible.
Note that the horizontal line is part of the `Search API` field.
BoletoSimples/boletosimples-ruby | 295612170 | Title: The cancel method does not change the object's state to cancelled
Question:
username_0: The `cancel` method does not change the object's state to cancelled.
To reproduce:
```ruby
billing_document = ::BoletoSimples::BankBillet.find(id)
billing_document.cancel
p billing_document.status
```<issue_closed>
Status: Issue closed |
dogma-io/react-frost-core | 320560092 | Title: Preact text input onChange
Question:
username_0: In Preact we should be using `onInput` instead of `onChange` for inputs in order for change events to happen on every keystroke. We should find or create a babel plugin to automatically change `onChange` to `onInput` for preact builds.<issue_closed>
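A minimal sketch of what such a plugin could look like (hypothetical - not an existing package - and it assumes JSX parsing is already enabled in the Babel config, e.g. via the Preact preset). It only rewrites intrinsic form elements, so component props are left alone:
```js
// Rename onChange -> onInput on <input>/<textarea>/<select> JSX elements.
module.exports = function renameOnChangeToOnInput() {
  return {
    name: 'rename-onchange-to-oninput',
    visitor: {
      JSXOpeningElement(path) {
        const el = path.node.name;
        // Skip components like <MyInput/>; only rewrite intrinsic elements.
        if (el.type !== 'JSXIdentifier') return;
        if (!['input', 'textarea', 'select'].includes(el.name)) return;
        for (const attr of path.node.attributes) {
          if (
            attr.type === 'JSXAttribute' &&
            attr.name.type === 'JSXIdentifier' &&
            attr.name.name === 'onChange'
          ) {
            attr.name.name = 'onInput';
          }
        }
      },
    },
  };
};
```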
Status: Issue closed |
friskit-china/comments.friskit-china.github.io | 592075942 | Title: Machine Learning Notes: Gradient Descent - Botian's Blog
Question:
username_0: https://blog.friskit.me/2020/03/31/ml-notes-gradient-descent.html
This section analyzes the theory behind the gradient descent algorithm. Gradient descent recap: first, recall that when running gradient descent we need to solve the following optimization problem with a loss function, $\theta^{*}=\arg\min_{\theta}L(\theta)$, where $L$ is the loss function and $\theta$ are the model parameters - that is, find the best parameters that make the loss function smallest. Suppose there are two parameters ${\theta_1,\theta_2}$; using gradient descent, we start from a random...
DigitecGalaxus/ProjectsRuler | 569941322 | Title: Fluent Builder API for Rules
Question:
username_0: While it's perfectly fine to create rules using only the constructor, it might be beneficial to allow a more fluent style for creating rules. Something like
Rule.CreateRule("This is a description")
.For("Package.A")
.Referencing("Package.A.Contracts")
.Kind(RuleKind.Forbidden)
Obviously, the names are up for debate. I myself prefer a sentence-like style, e.g. "Check that package X does not reference package Y", translated as
Check("Description")
.That("Package.A")
.DoesNotReference("Package.B")
but that is just personal preference.
Answers:
username_1: If you find this more readable, just do it! I personally prefer having a list of rules, that I can scan quickly - but then again, that is a personal preference.
Status: Issue closed
|
AutoPas/AutoPas | 442702815 | Title: Cpp 17
Question:
username_0: **Is your feature request related to a problem? Please describe.**
We would like to bump the C++ version to 17.
At the moment this is not possible due to our dependency on CUDA.
Neither approach currently works:
- NVCC does not support C++17
- CMake does not support building CUDA stuff with clang
https://gitlab.kitware.com/cmake/cmake/issues/16586
**Describe the solution you'd like**
Anything....
**Describe alternatives you've considered**
See points above
Answers:
username_0: #194
Status: Issue closed
username_1: fixed through #275 |
sebastian-software/rollup-plugin-rebase | 373517894 | Title: Imports of assets are in relation to the importer, however they are always copied in relation to the output folder.
Question:
username_0: I've created a reproduction for you here: https://github.com/username_0/rollup-plugin-rebase/tree/fix/asset-location-within-bundle-output-when-importer-is-deep
Answers:
username_1: Thanks @username_0. What you report is actually interesting. Your proposed change, though, changes the whole approach of the plugin. It should indeed remove all superfluous folders from the source structure. There is not really a reason for keeping them - at least not for the use case I imagined while developing the plugin. The goal was to move all assets to the top level while updating references inside any JS files to correctly point to the new location and name.
username_1: The use case from @username_2 seems different. I'll still investigate what happens there.
username_1: @username_2 are you able to share some small reproduction of this issue? I figure the woff files you are mentioning are referenced from inside a CSS file, right?
username_1: @username_0 Your test case is valid. It's just that the solution is not what I think is right.
Working on it now.
username_0: Thanks. Yeah, I made an attempt but the solution didn't seem right hence I never PR'd... >_<
username_2: @username_1 I don't remember exactly what my setup was, I'm not working anymore on this project, but yes I was importing the fonts from a file inside the react styleguide lib I was trying to build
username_0: @username_1 Any luck in solving the problem yet?
When I looked I couldn't work out what logic Rollup uses to generate the import paths.
username_3: Sorry, I have too many other things to do right now in my business.
username_1: Thanks for your time - can you check whether the current repository version fixes your issue? Thanks a lot!
username_0: @username_1 I actually don't work at the company that wanted to use this anymore, however I have passed on the news to an ex-colleague @Sergiioo who will look into this at some point.
username_3: Released v3 of the plugin which should fix all these issues. If there are still problems feel free to open another report.
Status: Issue closed
|
gy190/rainbow | 224629503 | Title: secondhand smoke
Question:
username_0: ## phrase
- die from — to die of / because of ..
- exclamatory sentence
- I read — as far as I have read / as far as I understand
- hate doing sth — to dislike / be tired of doing something
## dialog
- It's scary how many people die from secondhand smoke each year.
- I read it's almost 100,000 people every year in China.
- I hate smelling smoke when I go into a restaurant.
- Yeah, whenever that happens I just walk out.
- Breathing in secondhand smoke is just as unhealthy as smoking.
- Right, I've heard that too. |
SublimeLinter/SublimeLinter | 901222773 | Title: Instead of `Linting...` show which linter is actually slow and still running
Question:
username_0: We have the simple `Linting...` indicator via `busy_indicator_view.py`. Make this more useful by showing which linter is still running, for example `mypy...`. Maybe `flake8, mypy...` for multiple linters.
I don't know the exact optics here. `...` stands for the animated `.` thing we have. Typically only one linter is still running and we could just print `mypy...`. If really multiple linters are slow, either switch back to the simplified `Linting...` or spell all slow linters out like `flake8, mypy...`.
Answers:
username_1: I feel like we maybe discussed this before? Dejavu.
Anyway, I think the idea is good. "This thing is slow" is much better of course than "something is slow". We don't want to have multiple "animations" going on in the status bar, but I doubt that happens a lot (or rather, I hope so).
Ideally our view in the status bar is stable, like `linterA(w:1) linterB(e:1)` switching seamlessly to `linterA... linterB(e:1)` and switching seamlessly to `linterA linterB(e:1)`. Maybe we should just drop the animation and go for a simple horizontal ellipsis if we're waiting for a linter.
username_0: Yeah, for now I just wanted to reuse what we have, and we have `active_linter_view` *and* `busy_indicator_view`. Ideally we would have just one. If we have it in one, we face the problem that *mypy* might be slow but we're actually only showing `ok`, no names at all. etc. So it seems the one thing shows the results `ok`, `flake(w:1)`, or `eslint?`. And the other thing is "hold on, the left thing is probably lying; the result is not up to date".
username_1: Well, we always accept that the status bar lags behind what happens in the editor. In my mind it's a question of how much lag to accept before we update that view to say "our mypy result here is out of date, we're waiting for news", and show `mypy... flake(w:1)`
username_0: In the expanded state `mypy flake(w:1)` we can attach something to the name and make it `mypy... flake8(w:1)`, but in the short `ok` state I don't see that we should expand to `flake mypy...` just to go back to `ok` but rather do `ok mypy...`.
We use the animation to tell it is a transient, momentary state we're in, a static ellipsis is like our `?` something that lasts until the user does something, we lint again; it doesn't go away in just another second.
username_1: Yes, totally agree with everything there 👍🏻 |
gsantner/markor | 543355718 | Title: Feature Request: AsciiDoc Support
Question:
username_0: Hello @username_3, I would like to put in a feature request for [AsciiDoc](https://asciidoctor.org/) support.
It is another markup language like Markdown. Currently, there is no app on Android supporting it.
Thank you for your hard work. Love the app.
Answers:
username_1: I would be happy to donate/help however I can. This feature would be a huge plus for me.
username_2: I would love to have asciidoc support too, and would consider donating.
username_3: Implementation is waiting for contribution
username_2: Contribution as in code or donation?
username_3: Code
username_4: I too would love this feature. As asciidoc is plain text, however, the only support it really requires at minimum is the ability to open .adoc files (which is not required--the original Python AsciiDoc actually recommended .txt--but it does help with compatibility, syntax coloring support etc.). The newer AsciiDoctor Ruby port even has Markdown-compatible syntax for things like headers, so even now an AsciiDoc with MD-compatible headers would get partial syntax coloring & preview support with no other change required.
username_3: Can others confirm, that it makes sense to load Markdown highlighting and actions for .adoc file extension?
Otherwise, lets load plaintext format.
md :laugh:, plaintext :hooray:
username_4: Sorry to betray my own premise, but I voted plaintext lol. MD compatible is not the default format, and not everything is compatible anyway so things could get confusing. While full AsciiDoc support would be wonderful in the long term, I'd be just as happy to be able to edit in plaintext.
username_3: Closing, so far nobody stepped up for this. If there is interest, start to work on it please and make a pull request. Thanks.
Status: Issue closed
username_5: Could you please reopen the issue?
For me this is the most important feature request. There is no Android app supporting AsciiDoc, and I still hope that sometime in the future I can switch my notes system from markdown to asciidoc, after some basic support is somehow implemented in Markor.
I have never developed in Java, but maybe you could give some hints on where and how to start in Markor. Maybe I will understand how language support is implemented and I could add some starting implementation. But where to start? And maybe later some other developers can improve it. Maybe this is not so much a question of Java but more about rules.
I understand that we will not get full asciidoc support, but maybe at least some basic support.
username_5: What would be the right way to get asciidoc support into Markor?
- to use https://github.com/asciidoctor/asciidoctorj
- to start some limited implementation
username_3: There is a PR open for txt2tags. It gives a overview of files that need to be touched / created
username_5: Yesterday I converted my markdown based notes system into a asciidoc based notes system, using kramdoc. There is no Android support for asciidoc and Markor is still my preferred Android editor.
I could also live without asciidoc preview on Android, because it a plaintext format and easy to read. But I am curious, if it would be technical possible to use asciidoctor.js or AsciidoctorJ to render and preview.
Even without adoc preview I would like to make the editor to better work with AsciiDoc. There are currently some specific editor settings for different file types: markdown, todo, plaintext. Would it be not too complicated to add asciidoc as additional language with some small changes compared to markdown?
* different header system, based on "="
* different sorted and unsorted list style support
* different html link style. Currently I can't find, how to change the default markdown html link style which is used, when I use "send to Markor" and I started to use copy and paste of the html links. Is there a configuration which I can't find or is this style hard coded?
BTW, the main reason for switching was: I use AsciiDoc and Antora for technical documentation in different projects. And it is a bit hard to interchange content between my markdown based notes system and the technical project documentations.
Still open: switch my jekyll static site hosted as github page to asciidoc, or enable a mix of both.
username_3: Hello @username_3, I would like to put in a feature request for [AsciiDoc](https://asciidoctor.org/) support.
It is another markup language like Markdown. Currently, there is no app on Android supporting it.
Thank you for your hard work. Love the app.
username_5: I understand that until now nobody has worked on this. But maybe as a first step it would be possible to just add asciidoc as a "format type", even without supporting formatting yet. At least a specific toolbox could be implemented for asciidoc. There are several block markers, and sorted lists are different. I can't program in Java, but I could provide a list of useful toolbar items. It would also make it possible to format markdown while not applying markdown formatting to asciidoc content.
username_3: If you implement whatever part of it, or find somebody to do it - happy to review.
But just adding a asciidoc button, which is 100% equal to plaintext usage won't get added. Then you can just use plaintext format.
username_5: Finally, I am looking for more flexibility in the toolbar for different format types in general.
I am using the plain text toolbar for asciidoc, but I can't find how to adapt or extend it for my personal needs. It looks like it is hard coded. Maybe there are other ways to get this flexibility into tool bars? For example a general way to use user defined format types and related to them different user defined tool bars? Or ways to extend and adapt existing tool bars? |
PyTables/PyTables | 195157002 | Title: Empty arrays are not saved
Question:
username_0: ```
Answers:
username_0: It looks like the HDF5 format is capable of saving empty (zero-length) arrays. The JuliaIO HDF5 library supports saving empty arrays.
https://github.com/JuliaIO/HDF5.jl/issues/246
username_1: Well, what you are trying to do is to store a table with some fields that are empty. This is a different scenario than what you reported as working in Julia (an empty dataset). Not sure how NumPy has implemented support for that, but if that covers an important use case for you, and you are interested in providing support for it, we would be happy to accept a PR.
username_0: I'm pretty new to this code base. I cloned the project and started looking at file.py, group.py, leaf.py, node.py, and table.py. Should I be looking at the Pytables C code or perhaps numpy source code instead?
I'd greatly appreciate a nudge in the right direction. |
panoptes/POCS | 564455155 | Title: Check on hanging tests in docker branch.
Question:
username_0: Hi @zacharyt20 , sorry for delay. The tests have been hanging on this for a while, which is an annoying thing with our Travis CI testing that we can't see what is going wrong.
I got some develop permissions from travis yesterday so that I can get access to the logs when this kind of thing happens, so I am going to get to debugging this soon. Thanks!
_Originally posted by @username_0 in https://github.com/panoptes/POCS/pull/934#issuecomment-585424122_
Answers:
username_0: Fixed in #951. Closing for now. New problems should be reopened with specific Issues.
Status: Issue closed
|
onaio/onadata | 1136893124 | Title: Runtime error raised when calling textit service
Question:
username_0: ### Environmental Information
- Onadata version: latest
### Problem description
```
ERROR:root:Service threw exception: dictionary keys changed during iteration
Traceback (most recent call last):
File "/srv/onadata/onadata/apps/restservice/utils.py", line 18, in call_service
service.send(sv.service_url, submission_instance)
File "/srv/onadata/onadata/apps/restservice/services/textit.py", line 23, in send
extra_data = self.clean_keys_of_slashes(submission_instance.json)
File "/srv/onadata/onadata/apps/restservice/services/textit.py", line 45, in clean_keys_of_slashes
for key in record:
RuntimeError: dictionary keys changed during iteration
```
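The traceback points at the usual cause of this error: the loop in `clean_keys_of_slashes` renames keys of the dict it is iterating over. A sketch of the safe pattern (illustrative only - the real onadata helper does more than this):
```python
def clean_keys_of_slashes(record):
    # Build a new dict instead of renaming keys in place while iterating.
    return {
        key.replace('/', '_'): value
        for key, value in record.items()
    }
```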
### Expected behavior
- The service should run without a runtime error
### Steps to reproduce the behavior
### Additional Information
_Logs, [related issues](github.com/onaio/onadata/issues), weird / out of place occurrences, local settings, possible approach to solving this..._<issue_closed>
Status: Issue closed |
graphql-java/graphql-java | 584254516 | Title: Discussion: GraphQL Spec Compliance Tests
Question:
username_0: Hello maintainers
As part of research around simplifying GraphQL spec tests across implementations, I've stumbled upon the spec tests written here for `graphql-java`. I'd really appreciate getting an idea of how difficult it has been to maintain this test suite, and how often it goes out of parity with the actual GraphQL spec. Were there cases when misunderstanding the spec led to issues with the library?
#### Would having a generic set of compatibility tests help with these issues, if any?
As schema definition in SDL format is natively supported by the library, it would be easy to adapt a generic testing suite across different languages.
Answers:
username_1: Hi,
Can you give us a bit more information about what kind of research you are doing and for whom?
An automated test suite is something all maintainers would love to have. But it is a serious effort and involves a couple
challenging details. The best effort so far was this project: https://github.com/graphql-cats/graphql-cats
username_2: For more detail on automated tests for graphql implementations: the `cats` were a good first attempt; however, it's the low-level details that stop this in practice.
For example, let's say we want to have a test for "Fragments must not be circular".
https://spec.graphql.org/June2018/#sec-Fragment-spreads-must-not-form-cycles
Specifying this in cats is straight foward ish however how to `assert` was the challenge
We might have an error message with the text "Circular fragments not allowed" and grpahql-js might have "You are not allowed to make circular fragments"
Now each implementation of cats requires you to map a logical test to your own errors. This greatly reduces the "cats" value and dramtically increased the cost to be 'cats' ready.
It got worse in other test cases.
Ideally we could have common identifiers in errors that cats could know to look for say. Imagine `error.extensions.specList = [5.5.2.2]` which is the number of that rule in the spec.
The initial `cats` tests where too specific to the Scala implementation - it was a great first step in terms of ideas. It got a little better but not by enough so we stop trying to follow it - the value just want not there.
That said we do not have a great way to (quasi) formally prove that we are spec compliant on query execution and graphql type system SDL generation.
username_0: Actually I am researching for the project proposal for the project idea given by `GraphQL Foundation` for `Google Summer of Code 2020` -> [Project Idea](https://github.com/graphql/foundation/tree/master/mentorship/2020/gsoc#1-graphql-compatibility-acceptance-tests-medium)
The idea is to implement a `Compatibility Acceptance Test Suite` which can be integrated into the GraphQL implementation library.
username_0: @username_2 I completely understand that integrating `cats` requires a lot of effort.
Hence there are always some trade-offs, to make the testing more generic the ideal way is to test for only the input-output.
This can be done by spawning an instance of GraphQL server (graphql-java) and test it on various queries & schemas over HTTP. As all the implementations support running over HTTP.
@ maintainers how does that sound in terms of maintainability and community acceptance? Is there any alternative design that you would like to suggest?
username_0: @username_1 the `cats` project was not easy to integrate, as it's been designed to be coupled with the library's testing infra. My idea is to make it decoupled & more of a plug-and-play kind of thing so that it can be easy to integrate and disintegrate.
As mentioned above, the GraphQL server will be tested over HTTP with a client provided. The client will send requests and evaluates the responses.
username_2: Yeah so if you can come up with a more concrete proposal about how this would work and so on then we would be receptive.
Again I urge you to consider more deeply how to identify errors and conditions - these are the low level blockers that halted the other effort.
For example we can run up a HTTP graphql server (given a schema) and you could fire off a request with a circular fragment in it- it will error - BUT how do you know it errored on that condition and only that condition?
How can you look at the response and know its a circular fragment error and not something else.
One could argue that's enough but once you get into this I suspect you will find that you need to know more detailed information for assertions
All that said I can imagine such a system requiring
* a series of schemas that it must accept - it you POST in a schema and the server graphql tester makes up a server for that schema
* POST /useschema/{testname}
* POST /graphql/{testname} # for queries on that server instance
* a based data set that fetchers should give out for a test case
* POST /usedata/{testname}
* POST /graphql/{testname} # for queries on that server instance using that data
username_0: Yeah, you are correct. I am thinking of a master HTTP server with a couple of REST APIs exposed. This server will listen for events on various endpoints from the GraphQL client. Like
- Update the Schema/fakeData (Given schema/fakeData as payload)
- Spawn the GraphQL server with the updated schema
- Restart, kill the GraphQL instance
The client can run in either CLI (for running test in CI) or on a browser (Given master is reachable).
username_2: I think its ok for a CLI client to HTTP to a server even if its on a local machine
username_0: @username_2 the error approach you mentioned is somewhat relatable to the `rust` error handling i.e. the spec has defined the errors and their respective codes.
https://doc.rust-lang.org/error-index.html
Similarly, the TypeScript also comes with error codes for various errors.
https://github.com/microsoft/TypeScript/blob/0aa2e2783c42dd48e7d3085e1a612ac410416ec1/src/compiler/diagnosticMessages.json
If GraphQL spec comes with some sort of error codes then it would be great for users too. As users can directly google the error codes and read the error definition from the official source.
username_0: Is there any RFC regarding the error codes?
username_2: Not that I know of.
Hey, rather than an issue, let's start a discussion for this in Spectrum:
https://spectrum.chat/graphql-java?tab=posts
username_0: That sounds great. Any GraphQL foundation members are there?
Status: Issue closed
|
jfc3/atehere | 280738714 | Title: Add Han Oak Restaurant to the PDX JSON File
Question:
username_0: Need to add Han Oak Restaurant to the PDX JSON file.
Han Oak Restaurant
Han Oak Restaurant is a minimal-chic Korean spot with upscale tasting menus and drink pairings, along with noodle and dumpling nights.
Address: 511 NE 24th Ave, Portland, OR 97232
Phone: (971) 255-0032
URL - http://hanoakpdx.com/
Answers:
username_0: Added Han Oak Restaurant to the PDX JSON file.
Status: Issue closed
|
raiden-network/raiden | 430955831 | Title: Move the handling of pruned blocks to the JSONRPC client or web3 middleware
Question:
username_0: ## Problem Definition
At the moment we are checking for pruned blocks manually by using the `can_query_state_for_block()` function to see if a particular block identifier is corresponding to a pruned block and thus its state should not be queried. Example code [here](https://github.com/raiden-network/raiden/blob/1ecea15cd4f8b178f1500f151373e7f027703b31/raiden/network/proxies/token_network.py#L180-L184).
This is error prone for two reasons:
1. We can't be sure the block is actually pruned. The settings of the ethereum client can differ or the implementation may differ and as such a block may be pruned much later (or earlier?) than our preconfigured number of blocks (64).
2. We would need to be manually adding this check at every point where a call() to the client is added. If we miss a spot then this becomes a bug.
## Task
The solution here would be to move this check to either a web3 middleware or to move this to the `JSONRPCClient` class.
1. Research what is the behaviour for pruned blocks for both parity and geth (and we would have to do the same for each new client we support) as far as the RPC responses are concerned.
2. Write a module that would handle the errors that both clients throw at the lowest level and throw an exception when a pruned block is found and handle it there.<issue_closed>
Status: Issue closed |
orange-cloudfoundry/paas-templates | 496357560 | Title: Redis TLS support
Question:
username_0: ### Expected behavior
As a service user, in order to use redis over the network where untrusted 3rd parties may be listening, I need to connect to redis over TLS.
### Observed behavior
Redis coab in v43 returns credentials without tls details , see https://github.com/orange-cloudfoundry/redis-orange/issues/2
```json
"VCAP_SERVICES": {
"redis-ondemand": [
{
"binding_name": null,
"credentials": {
"host": "192.168.211.57",
"password": "...",
"port": "6379"
},
"instance_name": "redis-smoketest-1568969291",
"label": "redis-ondemand",
"name": "redis-smoketest-1568969291",
"plan": "small",
"provider": null,
"syslog_drain_url": null,
"tags": [
"Redis",
"Document"
]
}
```
### Affected release
Reproduced on version 43.x
Answers:
username_1: Considering the [Redis Security](https://redis.io/topics/security) model and security features in Redis (or the lack of...), we can wonder if this feature is desirable. Redis is designed for performance with the deliberate choice to sacrifice security.
If an application has sensitive information, it might not be the best idea to choose Redis from the start. Other key-value stores like etcd have built-in security features like automatic TLS with optional client cert authentication, and can be a better choice for sensitive information.
username_2: https://redislabs.com/blog/stunnel-secure-redis-ssl/
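For reference, that approach boils down to running stunnel on both ends; a client-side tunnel definition looks roughly like this (hosts, ports and cert paths are illustrative):
```
; clients keep talking plaintext Redis to 127.0.0.1:6379,
; stunnel wraps the connection in TLS towards the remote Redis
[redis-client]
client = yes
accept = 127.0.0.1:6379
connect = redis.internal.example:6380
CAfile = /etc/stunnel/redis-ca.pem
verify = 2
```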
username_1: For sure, using stunnel or spiped is a way to encrypt any flow. However, it's a hack and goes against Redis's design. If an application has strong security requirements, Redis should not be chosen in the first place. It's not only a question of encrypting data in transit: Redis has not been designed for security and has no security features at all. For instance, Redis does not have any mechanism for access control.
username_2: closing as redis is not the correct solution given this ssl constraint.
Might consider alternative product (eg: hazelcast/ memcache) depending on the required use case
Status: Issue closed
username_0: @username_1 @username_2 pivotal redis does support redis over TLS see https://docs.pivotal.io/redis/2-2/preparing-tls.html
Could this be an acceptable transient workaround until an alternative marketplace offer gets proposed ? |
spesmilo/electrum | 229793070 | Title: Transaction rejected by our node. Reason: Transaction is trying to double spend. Input 4 is already spent.
Question:
username_0: Transaction rejected by our node. Reason: Transaction is trying to double spend. Input 4 is already spent.
https://blockchain.info/tx/4e4d46a513a6f619b490baa8b07be671cd8c7c01d6196f0a0399893469c521e9
What happened ?
Answers:
username_1: There is a second transaction that spends the inputs: https://blockchain.info/de/tx/a456361919207c1ca3d9d187318fa079eb7819c958592c667ecd93762db7ee79
username_0: Yes, I try do double spend.
username_0: Status changed:
Transaction rejected by our node. Reason: Transaction was previously accepted but has been pruned from our database.
https://blockchain.info/tx/28ea67c0b9754c435ac7071273fba9ff40b34a6cff5076be13346fde8eb6ccc1
Status: Issue closed
|
swiety85/angular2gridster | 330566071 | Title: Loading dynamic size
Question:
username_0: It seems that I am having the same problem as I had in this issue #116 but this time with the responsive size instead of position.
When dynamically loading the widgets, like in this example, the size w/h is loaded instead of the responsive sizes (wXl, ...).
```
widgets:any = [];
widgetsTemp:any = [
{
x: 0, y: 0, w: 1, h: 2, wSm: 2, hSm: 3, wMd: 2, hMd: 3, wLg: 2, hLg: 3, wXl: 2, hXl: 3, xSm: 0, ySm: 0, xMd: 0, yMd: 0, xLg: 0, yLg: 0, xXl: 5, yXl: 0,
title: 'Basic form inputs 1',
content: 'Widget content'
}
]
...
this.widgets = this.widgetsTemp;
...
```
Answers:
username_1: Hi,
I can't reproduce your problem. Did you set `responsiveSizes: true` in your gridster options? This option is required to use responsive size properties.
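Roughly (only the relevant option shown; the rest of the options object is up to you):
```ts
gridsterOptions = {
  // required for wSm/hSm, wMd/hMd, wLg/hLg, wXl/hXl to take effect
  responsiveSizes: true,
  // ...your other options
};
```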
You can provide a code sandbox where the problem is visible:
[](https://codesandbox.io/s/w77q2rqll)
Status: Issue closed
username_1: Since there is no answer, I will close this issue. |
LoneGazebo/Community-Patch-DLL | 184629398 | Title: granary + bananas
Question:
username_0: _1. Mod version (i.e Date - 4/23):_
10/19 + htofix
_2. Mod list (if using Vox Populi only, leave blank):_
_3. Error description:_
Just a marginally weird issue: the granary doesn't provide +1 Food to bananas on a hill. I just noticed it when I chopped down the jungle.


_4. Steps to reproduce (optional):_
---------------------------
Supporting information:
Please note that you can attach .zip files by dragging-and-dropping them. If possible, zip up all supporting data and post that way.
1. Log files (always attach your Logs folder, located at My Documents/My Games/Sid Meier's Civilization 5. Make sure you have enabled logging before experiencing an error! Go here to find out how: http://forums.civfanatics.com/showthread.php?t=487482):
2. Save game (always attach a save that was made a turn before the error; located at My Documents/My Games/Sid Meier's Civilization 5/ModdedSaves):
3. CvMiniDump.dmp file (attach if experiencing a game crash. Located at Program Files/Steam/steamapps/common/Sid Meier's Civilization V):
4. Screenshots (optional):<issue_closed>
Status: Issue closed |
numpy/numpy | 223620697 | Title: Documentation: numpy.histogram2d array shapes inconsistent
Question:
username_0: In the [`histogram2d` documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html), the number of bins and edges is inconsistent.
If `nx` and `ny` are the bin counts (as stated in the `bins` argument's text), then the return `H` has indeed shape `(nx, ny)`. However, the returned `xedges` and `yedges` will then have shape `(nx+1,)` and `(ny+1,)` respectively (and not as it is currently written `(nx,)` and `(ny,)`).
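A quick check of the shapes (the data is arbitrary):
```python
import numpy as np

x, y = np.random.rand(100), np.random.rand(100)
nx, ny = 5, 8

H, xedges, yedges = np.histogram2d(x, y, bins=(nx, ny))
print(H.shape)       # (5, 8) -> (nx, ny)
print(xedges.shape)  # (6,)   -> (nx + 1,)
print(yedges.shape)  # (9,)   -> (ny + 1,)
```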
Answers:
username_1: Yep, I agree - do you want to submit a patch for this? You should be able to fix it right from the github editor.
Also, printing out the shape of `H` in the example code in that documentation might be handy too
Status: Issue closed
|
andrejmoltok/MiniEmpire | 791932889 | Title: Screenshots of the map
Question:
username_0: 

<issue_closed>
Status: Issue closed |
schliflo/bedrock-docker | 409378383 | Title: Configuration for multisite
Question:
username_0: Is there a way to get this working with multisite subdomains and subdirectories?
Answers:
username_1: You can add multiple domains here https://github.com/username_1/bedrock-docker/blob/master/docker-compose.yml#L40
for example like this:
```
- VIRTUAL_HOST=$PROJECT_NAME.docker,subdomain1.$PROJECT_NAME.docker,subdomain2.$PROJECT_NAME.docker
```
As far as I know bedrock + multisite + nginx requires some fancy configuration. You can edit the nginx.conf here: https://github.com/username_1/bedrock-docker/blob/master/.docker/web/etc/nginx/site.conf.template (you probably need to hardcode the server_name when using multiple hosts)
So in theory you can get this working - the difficulty is configuring bedrock & nginx accordingly.
I do however not plan to support this special case by default with bedrock-docker. Feel free to send a PR though :)
username_1: Closing this for now. If you find a problem specific to bedrock-docker feel free to report here - I'll reopen the issue.
Status: Issue closed
|
Mohist-Community/Mohist | 513391843 | Title: Library file download URL no longer works
Question:
username_0: On the first server start there are no library files, and the automatic download fails with the following error:
`A new version of your library files was detected, or they do not exist; they will be downloaded automatically......
Please restart the server once the library files have been fetched
java.net.UnknownHostException: mohist-community.gitee.io
at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at sun.security.ssl.SSLSocketImpl.connect(Unknown Source)
at sun.net.NetworkClient.doConnect(Unknown Source)
at sun.net.www.http.HttpClient.openServer(Unknown Source)
at sun.net.www.http.HttpClient.openServer(Unknown Source)
at sun.net.www.protocol.https.HttpsClient.<init>(Unknown Source)
at sun.net.www.protocol.https.HttpsClient.New(Unknown Source)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
at java.net.HttpURLConnection.getResponseCode(Unknown Source)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(Unknown Source)
at red.mohist.down.Download.<init>(Download.java:23)
at red.mohist.down.DownloadLibraries.run(DownloadLibraries.java:26)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Exception in thread "pool-1-thread-1" java.lang.RuntimeException: libraries.zipThe file indicated does not exist.
at red.mohist.down.DownloadLibraries.unZip(DownloadLibraries.java:35)
at red.mohist.down.DownloadLibraries.run(DownloadLibraries.java:28)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)`
Accessing mohist-community.gitee.io directly in a browser returns a 404.
Answers:
username_1: I tested it and there is no problem - check your network.
username_0: ...It turns out it did not detect minecraft_server.1.12.2.jar
username_0: I had renamed it.
username_0: I just downloaded Mohist-51fee0d-server.jar
username_1: Download the latest build here: https://ci.codemc.io/job/Mohist-Community/job/Mohist-1.12.2/
username_0: Checking for updates......
If you do not want update checking enabled, set check_update to false in mohist-config/mohist.yml
Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 22
at java.lang.String.substring(Unknown Source)
at red.mohist.down.Update.hasLatestVersion(Update.java:34)
at red.mohist.Mohist.main(Mohist.java:65)
What is this problem? I just checked the libraries and they look fine.
username_1: Just turn off the update check.
username_1: I tested it; the network access probably failed, and then it crashed because it could not read any data.
username_0: It started, thanks!
username_0: Is it normal that VeinMiner reports block IDs as not found?
username_0: Messages like: [VeinMiner] Block id minecraft:XXXXX not found! Ignoring
username_0: VeinMiner is the server-plugin version of chain mining; currently it seems none of the ores are recognized.
username_1: If you have QQ, join the group to discuss the issue: https://jq.qq.com/?_wv=1027&k=5YIRYnH
jlippold/tweakCompatible | 571801502 | Title: `Choicy` working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.opa334.choicy",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.opa334.choicy",
"deviceId": "iPhone10,5",
"url": "http://cydia.saurik.com/package/com.opa334.choicy/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": false,
"packageName": "Choicy",
"category": "Tweaks",
"repository": "BigBoss",
"name": "Choicy",
"installed": "1.1.2",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.opa334.choicy",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Advanced Tweak Configuration!",
"latest": "1.1.4",
"author": "opa334",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
cake-build/cake | 167066862 | Title: Add support for adding messages to the AppVeyor build log
Question:
username_0: Add the ability to send messages to the AppVeyor build log via the:
```appveyor AddMessage``` command.
This could then be used to push specific messages (info, warning and error) to the separate AppVeyor messages tab for a build.
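For reference, the underlying build-worker call is roughly the following (message text and details are placeholders):
```
appveyor AddMessage "Restored NuGet packages" -Category Information -Details "optional details"
```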
I would like to be able to combine this with the TaskSetup and TaskTeardown methods to log build script progress:
```c#
TaskSetup((context, task) =>
{
    var message = string.Format("Task: {0}", task.Task.Name);
    // custom logging
    if(BuildSystem.IsRunningOnAppVeyor) {
        BuildSystem.AppVeyor.AddInformationalMessage(message);
    }
});
TaskTeardown((context, task) =>
{
    var message = string.Format("Task: {0}", task.Task.Name);
    // custom logging
    if(BuildSystem.IsRunningOnAppVeyor) {
        BuildSystem.AppVeyor.AddInformationalMessage(message);
    }
});
```
Then further refine this into a common abstraction that can be agnostic of build system and can report messages specific to the current build environment (teamcity system messages, appveyor messages):
```c#
TaskSetup((context, task) =>
{
    var message = string.Format("Task: {0}", task.Task.Name);
    // custom logging
    BuildSystem.AddInformationalMessage(message);
});
TaskTeardown((context, task) =>
{
    var message = string.Format("Task: {0}", task.Task.Name);
    // custom logging
    BuildSystem.AddInformationalMessage(message);
});
```
Answers:
username_1: @username_0 Thank you for your contribution to Cake!
Status: Issue closed
|
PowerShell/PowerShellEditorServices | 159645833 | Title: Add Get-EditorCommand and $psEditor.GetEditorCommand()
Question:
username_0: The way it is currently set up, it can be very difficult to determine which Editor Command names are loaded. The user is only shown the Display Name, so they will actually have to go look at the source code of the Register-EditorCommand call to get the name.
This makes it difficult to unregister Editor Commands if you do not already know exactly what the command's name is. This will become more problematic once people start integrating their commands into modules.
Answers:
username_1: Good point, will do that |
hadiakhan785/test | 324761027 | Title: Make Responsive
Question:
username_0: Your website is responsive down to about 500-600 pixels. Screens smaller than that break the layout. For example, here is how your page looks on my phone:

Compare this to how the actual mockup website looks on my phone:
<issue_closed>
Status: Issue closed |
Kate-v2/sweater_weather_web | 402566247 | Title: Forecast - Today
Question:
username_0: As a VISITOR or USER,
when I visit a location forecast,
and I see the 'Today' section in the Overview,
I see the:
* weather description
* temperature
* high / low
and the:
* City, State (short)
* Country
* Time
and I have links to:
* change location
* add as favorite (if logged in)<issue_closed>
Status: Issue closed |
cloudfoundry/bosh | 44287911 | Title: getting waiting for agent while deploy bosh
Question:
username_0: Hi, I am using bosh micro to create a bosh VM on AWS.
I did all the research and tried a lot of stemcells; each one got stuck at the same point.
I also tried with bootstrap and it also got stuck at the same point.
Please find the console output:
root@2d62e55:/microbosh/deployments# bosh micro deploy --update ami-979dc6fe
Updating `mybosh/micro_bosh.yml' to `https://172.16.31.10:25555' (type 'yes' to continue): yes
Will deploy due to stemcell changes
Started prepare for update
Started prepare for update > Preserving stemcell. Done (00:00:00)
Started deploy micro bosh
Started deploy micro bosh > Using existing stemcell. Done (00:00:00)
Started deploy micro bosh > Creating VM from ami-979dc6fe
. Done (00:00:49)
Started deploy micro bosh > Waiting for the agent
Unable to connect to Bosh agent. Check logs for more details.
Output from bootstrap:-
Use bundle show [gemname] to see where a bundled gem is installed.
bundle exec bosh micro deployment firstbosh
fatal: Not a git repository (or any of the parent directories): .git
Deployment set to '/.microbosh/deployments/firstbosh/micro_bosh.yml'
bundle exec bosh -n micro deploy --update-if-exists ami-7017b018
fatal: Not a git repository (or any of the parent directories): .git
fatal: Not a git repository (or any of the parent directories): .git
Started deploy micro bosh
Started deploy micro bosh > Using existing stemcell. Done (00:00:00)
Started deploy micro bosh > Creating VM from ami-7017b018. Done (00:00:30)
Started deploy micro bosh > Waiting for the agentUnable to connect to Bosh agent. Check logs for more details.
/usr/local/lib/ruby/1.9.1/rake/file_utils.rb:53:in block in create_shell_runner': Command failed with status (1): [bundle exec bosh -n micro deploy --update-...] (RuntimeError) from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:incall'
from /usr/local/lib/ruby/1.9.1/rake/file_utils.rb:45:in sh' from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-bootstrap-0.13.2/lib/bosh-bootstrap/cli/helpers/bundle.rb:11:inblock in bundle'
from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.6.5/lib/bundler.rb:235:in block in with_clean_env' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.6.5/lib/bundler.rb:222:inwith_original_env'
from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.6.5/lib/bundler.rb:228:in `with_clean_env'
from /usr/local/lib/ruby/gems/1.9.1/gems/bosh-bootstrap-0.13.2/lib/bosh-bootstrap/cli/helpers/bun
logs:-
I, [2014-09-27T03:14:52.975455 #2252] INFO -- : HTTP server is starting on port 25888...
E, [2014-09-27T03:14:53.420720 #2252] ERROR -- : Sinatra::NotFound
D, [2014-09-27T03:15:42.000862 #2252] DEBUG -- : (0.000234s) PRAGMA foreign_keys = 1
D, [2014-09-27T03:15:42.001000 #2252] DEBUG -- : (0.000032s) PRAGMA case_sensitive_like = 1
D, [2014-09-27T03:15:42.001398 #2252] DEBUG -- : (0.000296s) PRAGMA table_info('registry_instances')
D, [2014-09-27T03:15:42.002487 #2252] DEBUG -- : (0.000207s) SELECT * FROM registry_instances WHERE (instance_id = 'i-54d84bb9') LIMIT 1
D, [2014-09-27T03:15:42.003390 #2252] DEBUG -- : (0.000125s) SELECT COUNT(*) AS 'count' FROM registry_instances WHERE (instance_id = 'i-54d84bb9') LIMIT 1
D, [2014-09-27T03:15:42.003879 #2252] DEBUG -- : (0.000072s) SELECT sqlite_version() LIMIT 1
D, [2014-09-27T03:15:42.004034 #2252] DEBUG -- : (0.000060s) BEGIN
D, [2014-09-27T03:15:42.004526 #2252] DEBUG -- : (0.000238s) INSERT INTO registry_instances (instance_id, settings) VALUES ('i-54d84bb9', '{"vm":{"name":"vm-d1930349-1b16-4c38-aee0-3f667247420f"},"agent_id":"bm-b92fb0f6-a42a-4a2e-bae7-0c2c59e2c519","networks":{"bosh":{"cloud_properties":{},"netmask":null,"gateway":null,"ip":null,"dns":null,"type":"dynamic","default":["dns","gateway"]},"vip":{"ip":"172.16.31.10","type":"vip","cloud_properties":{}}},"disks":{"system":"/dev/sda1","ephemeral":"/dev/sdb","persistent":{}},"env":{"bosh":{"password":null}},"ntp":[],"blobstore":{"provider":"local","options":{"blobstore_path":"/var/vcap/micro_bosh/data/cache"}},"mbus":"https://vcap:[email protected]:6868"}')
D, [2014-09-27T03:15:42.004971 #2252] DEBUG -- : (0.000121s) SELECT * FROM registry_instances WHERE (id = 1) LIMIT 1
D, [2014-09-27T03:15:42.007517 #2252] DEBUG -- : (0.002390s) COMMIT
D, [2014-09-27T03:18:57.866806 #2252] DEBUG -- : (0.000314s) SELECT * FROM registry_instances WHERE (instance_id = 'i-54d84bb9') LIMIT 1
I, [2014-09-27T03:32:36.222613 #2252] INFO -- : BOSH Registry shutting down...
Can you please help me.
And as per the documents they selected non-vpc but now AWS by default gives VPC only no option for non-vpc.
Is this for i am getting stuck every time at same point "waiting for the agent"
Answers:
username_1: Hi all,
I am using MicroBOSH and I am hitting the same issue. I am using the latest stemcells and my security group configs are correct.
cfoundry@c-foundry:~/micro-deployment$ bosh micro deploy test/light-bosh-stemcell-2986-aws-xen-hvm-ubuntu-trusty-go_agent.tgz
No `bosh-deployments.yml` file found in current directory.
Conventionally, `bosh-deployments.yml` should be saved in /home/cfoundry.
Is /home/cfoundry/micro-deployment a directory where you can save state? (type 'yes' to continue): yes
Deploying new micro BOSH instance `manifest.yml' to `https://52.7.75.44:25555' (type 'yes' to continue): yes
Verifying stemcell...
File exists and readable OK
Verifying tarball...
Read tarball OK
Manifest exists OK
Stemcell image file OK
Stemcell properties OK
Stemcell info
-------------
Name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
Version: 2986
Started deploy micro bosh
Started deploy micro bosh > Unpacking stemcell. Done (00:00:00)
Started deploy micro bosh > Uploading stemcell. Done (00:00:10)
Started deploy micro bosh > Creating VM from ami-3742b55c light. Done (00:00:35)
Started deploy micro bosh > Waiting for the agent
username_2: I ran into the same "waiting for agent" problem and found it consistent across many 3012 builds for AWS.
Seems to be an issue with the stemcell.
I was successful when using an older 3000 build stemcell @ https://d26ekeud912fhb.cloudfront.net/bosh-stemcell/aws/light-bosh-stemcell-3000-aws-xen-ubuntu-trusty-go_agent.tgz
vitabaks/postgresql_cluster | 855394835 | Title: Problems with pgbackrest
Question:
username_0: when I try to set up a backup, I get this error
sudo -u pgbackrest pgbackrest --stanza=main check
ERROR: [082]: WAL segment 000000010000000000000007 was not archived before the 60000ms timeout
HINT: check the archive_command to ensure that all options are correct (especially --stanza).
HINT: check the PostgreSQL server log for errors.
HINT: run the 'start' command if the stanza was previously stopped.
my config on repo :
[main]
pg1-host=172.16.0.11
pg1-path=/var/lib/postgresql/13/main
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
start-fast=y
config on postgres node:
[global]
log-level-file=detail
log-path=/var/log/pgbackrest
repo1-type=posix
repo1-host=172.16.0.1
repo1-host-user=postgres
[main]
pg1-path=/var/lib/postgresql/13/main
process-max=2
recovery-option=recovery_target_action=promote
Answers:
username_1: This playbook is not currently intended for configuring backups.
Please refer to the documentation: https://pgbackrest.org/user-guide.html
check the PostgreSQL server log for errors.
I can suggest that you have not yet created a stanza
https://pgbackrest.org/user-guide.html#quickstart/create-stanza
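If that is the case, creating and then verifying the stanza would look roughly like this (run on the repository host, with the stanza name from your config):
```
sudo -u pgbackrest pgbackrest --stanza=main stanza-create
sudo -u pgbackrest pgbackrest --stanza=main check
```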
username_1: This is not the first question related to setting up a backup. Apparently it's worth adding this functionality.
username_0: I read the logs and there was a problem with file access, thanks for the answer!
Status: Issue closed
username_0: when I try to set up a backup on pgBackRest 2.32, I get this error
sudo -u pgbackrest pgbackrest --stanza=main check
ERROR: [082]: WAL segment 000000010000000000000007 was not archived before the 60000ms timeout
HINT: check the archive_command to ensure that all options are correct (especially --stanza).
HINT: check the PostgreSQL server log for errors.
HINT: run the 'start' command if the stanza was previously stopped.
my config on repo :
[main]
pg1-host=172.16.0.11
pg1-path=/var/lib/postgresql/13/main
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
start-fast=y
config on postgres node:
[global]
log-level-file=detail
log-path=/var/log/pgbackrest
repo1-type=posix
repo1-host=172.16.0.1
repo1-host-user=postgres
[main]
pg1-path=/var/lib/postgresql/13/main
process-max=2
recovery-option=recovery_target_action=promote
archive_command = 'pgbackrest --stanza=main archive-push %p'
archive_mode = 'True'
username_0: But a question arose: before restoring from a backup, do you need to shut down the entire Patroni cluster or just the master node? How do I properly recover from a backup if I have a Patroni cluster?
username_1: You can run automatic restore of your existing patroni cluster
for PITR, specify the required parameters in the main.yml variable file and run the playbook with the tag:
`ansible-playbook deploy_pgcluster.yml --tags point_in_time_recovery`
See Recovery steps with pgBackRest
https://github.com/username_1/postgresql_cluster#restore-and-cloning
username_0: thanks a lot!
Status: Issue closed
|
hanna-zimmermann/GDMA-1485-Assignment-2 | 577380349 | Title: Friday Feedback
Question:
username_0: Cleanliness
- [ ] - Group styles under correct categories. 78-103 should be under components
- [ ] - Utilize font shorthand
- [ ] - Line 140, separate these styles as needed. Margin: 0 auto only works if a width is applied which means it will not work on text elements
- [ ] - Reduce repeating styles
- [ ] - Remove color category and group styles based on what they’re applying to: navigation, footer, etc.
Semantics
- [ ] - name images so that similarity is first: sprite-facebook
- [ ] - Link google web fonts before main.css so that it loads prior to being called
- [ ] - Images that communicate content should have alt attribute values that communicate the same information
- [ ] - Do not apply a width to parent elements, utilize padding
Index
- [ ] - Line 37, jared chambers should be clickable
- [ ] - Missing address tag around contact information
- [ ] - Do not type text in all caps, utilize css to transform
- [ ] - Phone number linked incorrectly
- [ ] - Third party links should open in a new tab |
libretro/libretro-lutro | 797593562 | Title: Reset in Quick Menu does nothing
Question:
username_0: Currently, there is no way to reset a game in Lutro.
Expected result: Game should restart.
Actual result: Nothing happens
Answers:
username_1: It calls lutro.reset(), but it might be smart to unload and re-launch the game from memory. Not sure how we'd hold onto cart data though.
username_2: I tried to do an unload/reload of the game, but I didn't succeed in my implementation.
I guess for now people have to implement lutro.reset() in their games.
username_0: That sounds reasonable
username_3: Implementing lutro.reset() is fine, some extra work to port an existing Love2D game, but no big deal.
@username_0, running a game directly in Love2D is a different thing; there is no equivalent reset that is called by the Love2D player, so no worries about conflicting or missing functionality related to that.
Status: Issue closed
|
pwa-builder/PWABuilder | 641767607 | Title: [ Forms and validation - PWA Builder - PWA Builder Feature]: No Error Identification or Suggestion Message is shown on searching anything wrong in search field
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional info (please complete the following information):**
- OS: [e.g. Windows 10]
- Browser [e.g. edge, chrome, safari]
- Browser Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
Answers:
username_1: Issue is still repro. Try navigating in scan mode with down arrow keys. Verified on OS build Version 2004 (OS Build 19591.1000), Edge version: Anaheim version Version 85.0.552.1 (Official build) dev (64-bit), URL: https://preview.pwabuilder.com/, Screen Reader: Narrator
Hence Reactivating the bug
Status: Issue closed
username_2: Issue has been fixed on below Test Environment as:
OS: Version 2004 (OS Build20190.1000)
Browser:Version 86.0.608.2 (Official build) dev (64-bit)
Hence, closing the bug. |
hasura/graphql-engine | 802392167 | Title: Metadata allows generating invalid graphql schemas
Question:
username_0: I ran into this issue a bit while trying to use some other graphql tooling, specifically the [Apollo Devtools](https://github.com/apollographql/apollo-client-devtools). I was getting syntax errors whenever it would introspect the schema, and after pulling the schema down and generating the SDL, it turns out that the SDL was invalid.
For example, it included sections like the below examples:
```graphql
"""
update columns of table "channels"
"""
enum channels_update_column
```
```graphql
"""
input type for updating data in table "channels"
"""
input channels_set_input
```
which appear to be syntactically invalid(notes on that below).
It looks like the cause of this are metadata sections like the below:
```yaml
update_permissions:
- role: user
permission:
columns: []
filter: {}
check: null
```
While you'd think that just removing those sections would be enough to fix them, it looks like they're necessary in order to allow upsert semantics where we want to do nothing on conflict `(insert with on_conflict: {update_columns: []})`. NOTE: I'd be happy to be proven wrong about that point!
In any case, I think it makes sense to not allow metadata that results in an invalid graphql schema.
I'll note that, while I first discovered this in Apollo Devtools, I later tested it via both:
- `graphqurl`
- a combination of `get-graphql-schema` and `graphql-inspector`
both of which failed. `graphqurl` refused to print the schema out, giving:
```
Executing query... error
Error: Error: channels_update_column values must be an object with value names as keys.
```
`get-graphql-schema` printed out the schema without complaint, but `graphql-inspector` failed flagging similar issues.
Both of them failed somewhere in `node_modules/graphql/`, which tells me that they're probably using the same core validation logic, and thus either that validation is wrong, or the introspection JSON from Hasura is invalid.
A further detail, just in case it's helpful at all, is that different tools produced slightly different schemas for the same introspection JSON. `get-graphql-schema` produced examples like the above, whereas the Apollo Devtools seems to produce blocks more like:
```graphql
"""
input type for updating data in table "channels"
"""
input channels_set_input {
}
```
I hope all that helps. I'm currently on 1.3.0, and haven't had a chance to test it on 1.3.3 yet.
Answers:
username_1: This is a problem we have seen before: we didn't check whether some constructions could result in empty objects / enums / input objects. We fixed most of them during the rewrite that led to 2.0, so I guess technically this issue is a "won't fix" since it's about 1.3. Furthermore, post v2, we've fixed several bugs related to empty objects in the schema. However, I decided to have a quick look at the status of the code to see if there are things we can improve, to prevent bugs like this from arising in the future.
### Enums
For enums, our parser expects [a non-empty list of values](https://github.com/hasura/graphql-engine/blob/1243da1d54a0f122a68064668be566e04d790dfc/server/src-lib/Hasura/GraphQL/Parser/Internal/Input.hs#L242), meaning we simply cannot construct an empty enum in the schema anymore.
### Input objects
There, however, no such luck: `InputFieldsParser` stores [a list of definitions](https://github.com/hasura/graphql-engine/blob/1243da1d54a0f122a68064668be566e04d790dfc/server/src-lib/Hasura/GraphQL/Parser/Internal/Input.hs#L43); I don't think it's possible to change that and keep the applicative instance meaningful; we could, however change their [definition](https://github.com/hasura/graphql-engine/blob/1243da1d54a0f122a68064668be566e04d790dfc/server/src-lib/Hasura/GraphQL/Parser/Schema.hs#L441) to instead contain a non-empty list. Upside of this: we would fail to build schemas that would result in such invalid objects; downside: there'd be no way for users to still use Hasura with the invalid schema as they can today.
Another idea would be to change the `object` combinator (we even have [a TODO in the code for it](https://github.com/hasura/graphql-engine/blob/1243da1d54a0f122a68064668be566e04d790dfc/server/src-lib/Hasura/GraphQL/Parser/Internal/Input.hs#L267)): one way of doing this would be to return a `Maybe (Parser 'Input m a)`; each call site would then be forced to deal with the fact that it might not be representable. That seems fairly straightforward, I'll give it a try to see if there are any unforeseen issues there.
### Objects
Similarly, our [selection combinators](https://github.com/hasura/graphql-engine/blob/1243da1d54a0f122a68064668be566e04d790dfc/server/src-lib/Hasura/GraphQL/Parser/Internal/Parser.hs#L143) take a list, not a non-empty list. As the comment there suggests, a possibility would be to take non-empty lists everywhere, to enforce at every call site that we are creating objects that make sense.
We are gonna run into one known issue if we do this however: the empty `query_root` object. In short: the spec has an inherent contradiction; it both states that:
- the `query_root` object must always be present in the schema
- objects cannot be empty
As a result, it is impossible to build a "mutation only" server. We of course currently allow it, and generate an empty `query_root` object when that's the case. Most tools do actually have a workaround for that particular case, as it is quite common. However, if we do change our selection combinators, then we won't be able to express the query root at all. One potential solution would be to resurrect my internal pr that [added a placeholder in the case of an empty root](https://github.com/hasura/graphql-engine-mono/pull/148).
### tl;dr
- we won't solve this in 1.3, as most of those problems were fixed in 2.0
- enums are safe by construction
- I'll try to quickly see if we can make input objects safe by construction
- to make objects safe by construction, we would have to solve the problem of the empty root
username_2: @username_0 I'm closing this in light of @username_1's comment above, but please feel free to re-open it for discussion if you aren't satisfied with the resolution!
Status: Issue closed
|
swagger-api/swagger-codegen | 572367231 | Title: [CSharp] Swagger-gen converts required primitive-types to nullable types
Question:
username_0: #### Description
I have a simple class called Test which contains only two primitive properties in my server-side application:
```csharp
public class Test
{
[Required]
public int Number { get; set; }
[Required]
public bool IsEnabled { get; set; }
}
```
When I try to generate the client version of this class, swagger-gen generates a class like this for me:
```csharp
using System;
using System.Linq;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Runtime.Serialization;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using System.ComponentModel.DataAnnotations;
using SwaggerDateConverter = IO.Swagger.Client.SwaggerDateConverter;
namespace IO.Swagger.Model
{
/// <summary>
/// TestValueTypesModelsTest
/// </summary>
[DataContract]
public partial class TestValueTypesModelsTest : IEquatable<TestValueTypesModelsTest>, IValidatableObject
{
/// <summary>
/// Initializes a new instance of the <see cref="TestValueTypesModelsTest" /> class.
/// </summary>
/// <param name="number">number (required).</param>
/// <param name="isEnabled">isEnabled (required).</param>
public TestValueTypesModelsTest(int? number = default(int?), bool? isEnabled = default(bool?))
{
// to ensure "number" is required (not null)
if (number == null)
{
throw new InvalidDataException("number is a required property for TestValueTypesModelsTest and cannot be null");
}
else
{
this.Number = number;
}
// to ensure "isEnabled" is required (not null)
if (isEnabled == null)
{
throw new InvalidDataException("isEnabled is a required property for TestValueTypesModelsTest and cannot be null");
}
[Truncated]
{
throw new InvalidDataException("number is a required property for TestValueTypesModelsTest and cannot be null");
}
else
{
this.Number = number;
}
// to ensure "isEnabled" is required (not null)
if (isEnabled == null)
{
throw new InvalidDataException("isEnabled is a required property for TestValueTypesModelsTest and cannot be null");
}
else
{
this.IsEnabled = isEnabled;
}
}
```
But I couldn't figure out why **required** works for generating the constructor null checks yet does not work for generating non-nullable property types!
Answers:
username_1: Also running into this and I think @username_0 comments are spot on. |
gsilano/CrazyS | 327058848 | Title: Controller does not stop crazyflie when arrived at the goal pose!
Question:
username_0: Hello there,
First of all, thanks for your effort in making a simulator that works with Gazebo and the Crazyflie. I have a project related to quadrotor swarms and I've been observing Crazyflie for some time now. I have built your code (fix-controller branch) and seen that when hovering, the responses are quite good; however, when I publish a waypoint the quadrotor does not stop at the goal pose and continues to fly in the direction it took to reach the goal. I changed the goal in the hovering_example.cpp file to (2.0, 0.0, 1.0) to see how it would respond. Can this be a problem (I don't think so, since similar code is in waypoint_publisher.cpp)?
Secondly, the quadrotor moves quite slowly compared to other models like the firefly. I guess it's because the Crazyflie has a small and light structure and the motors are not powerful enough to respond as fast as the firefly does. Is this a bug (this is an issue with the RotorS simulator as well)?
Thanks in advance...
Burak<issue_closed>
Status: Issue closed |
DrexelOMS/OvercomingMS | 401035521 | Title: Implement code for Timer
Question:
username_0: The goal is something where you can set the amount of time to count down from and have it correctly count down to zero. Keep in mind that a number or slider will be updated every time the value changes, and we don't want the app to freeze while the timer is running, so this needs to be an asynchronous process. It obviously needs a strong understanding of Swift; creating a delegate so that a UI progress bar controller updates itself when the time changes would be great, but this is not necessary if you find a better way.
You can simply test the UI update with a print statement that will later be changed to update the time. Just know that the value in a timer format and a percentage must be accessible to an observer.<issue_closed>
Status: Issue closed |
Kotlin/kotlinx.coroutines | 577878414 | Title: Parent job does not cancel child jobs
Question:
username_0: This issue is more of a question as I'm slowly beginning to really understand how coroutines work but I'm not yet sure if the way I'm using coroutines is right.
I have this class that processes requests and when done returns its results to the caller. I want to make this process cancellable but unfortunately the way I thought it would work doesn't work.
(Don't worry about some strange names. I had to censor the code)
```kotlin
class Processor(private val client: ApiClient) {
...
private val coContext = newSingleThreadContext(Processor::class.java.simpleName)
private var parentJob: Job? = null
fun processRequests(requests: List<Request>, callback: (List<Response>) -> Unit) {
GlobalScope.launch(coContext) {
parentJob?.cancelAndJoin()
parentJob = launch {
val completedRequests = mutableListOf<Response>()
batchProcessRequests(requests, completedRequests)
callback(completedRequests)
}
}
}
private suspend fun batchProcessRequests(requests: List<Request>, completedRequests: MutableList<Response>) = coroutineScope {
requests.forEach { request ->
ensureActive()
launch(Dispatchers.IO) {
val response = client.getResponse(request) // Makes a Retrofit call. See below
completedRequests.add(response)
}
}
}
}
class ApiClient() {
private val api = Api.create(...)
suspend fun getResponse(request: Request): Response {
// Do some extra stuff before sending the request
...
return api.requesting(...)
}
}
interface Api {
@GET("...")
suspend fun requesting(...): Response
...
}
```
If I'm right, the Retrofit suspend function works like `deferred.await()`; that's why I'm launching another coroutine, to not waste time. When I call `processRequests()` shortly after a first call to it, it will call `cancelAndJoin()` and wait for the `parentJob` to complete its cancellation. Unfortunately this cancel call does not cancel any of the inner launched coroutines that are getting the response for a request. Only after `batchProcessRequests()` completes is the `parentJob` cancelled successfully.
Shouldn't `parentJob` cancel all its children?
Answers:
username_1: I don't see any problem that would prevent the cancellation of all the children. Can you provide a self-contained example that we can run to see the problem?
username_0: I once again checked everything I used while testing and I think I found the source of the problem. I was mocking the `ApiClient` with mockk so that no calls are made to the real API in tests. By using the wrong mockk functions for answering a call to `getResponse()` I had turned this cooperative coroutine into a non-cooperative one: the plain `answers` block runs `Thread.sleep()`, which blocks the thread and never suspends, so cancellation is never checked, whereas `coAnswers` with `delay()` suspends and is cancellable.
For everyone who's interested in what code caused this problem:
```kotlin
private val apiClient = mockk<ApiClient>().apply {
coEvery { getResponse(any()) } answers {
Thread.sleep(1000L) // Simulate passed time
Response()
}
}
```
Instead this made it work as expected:
```kotlin
private val apiClient = mockk<ApiClient>().apply {
coEvery { getResponse(any()) } coAnswers {
delay(1000L) // Simulate passed time
Response()
}
}
```
Thank you very much for your feedback Roman. It helped me to change my focus from the processing to the testing component.
I'll close this issue.
Status: Issue closed
|
websockets/ws | 197430935 | Title: how to determine if ws is already open
Question:
username_0: ```js
ws.on('open', function open() {
console.log('client is opened');
ws.send(JSON.stringify({
uuid: uuid,
key: key,
lock: true
}));
});
```
Can we check whether the ws is already open?
something like:
```js
if(ws.isOpen){
ws.send('foo');
}
else{
ws.on('open', function open() {
console.log('client is opened');
ws.send('foo');
});
}
```
Answers:
username_1: You can check the `readyState`.
```js
if (ws.readyState === WebSocket.OPEN) {
...
}
```
username_0: thanks!
What I did for now: I put this at the top of my code:
```
ws.once('open', function open() {
ws.isOpen = true;
});
```
:) but thanks, I will probably do it the more canonical way
Status: Issue closed
username_1: Closing this, please comment or reopen if needed. |
braintree/braintree-android-visa-checkout | 324111639 | Title: Update gradle plugin version to 3.2 when available
Question:
username_0: See [this issue](https://github.com/braintree/braintree-android-drop-in/issues/72) in `braintree-android-drop-in` related to data binding version mismatches in upstream dependencies.
We'll need to update our gradle plugin to version 3.2 when it's available to ensure forwards and backwards compatibility with versions 1 and 2 of the data binding libraries.<issue_closed>
Status: Issue closed |
finnsson/pagerjs | 106374498 | Title: URL parameters with ampersand (&) in value get truncated
Question:
username_0: # Bug
I'm running into this bug right now with pagerjs 1.0.1. I use a page-href like this:
<a data-bind="page-href: { path: '/somepage', params: { param1: 'first & second', param2: 'other value' } }">link</a>
On the page, we have this:
param1 == "first"
param2 == "other value"
Note that param1 is truncated to where the ampersand is in the value.
# Cause
The problem appears to be here
var parseHash = function (hash) {
return $.map(hash.replace(/\+/g, ' ').split('/'), decodeURIComponent);
};
That turns this URL:
/somepage?param1=first+%26+second&param2=other+value
Into this:
['/somepage?param1=first & second&param2=other value']
The problem is that we're decoding the %26 into & too early. Now instead of "param1" having the value "first & second", we have these params:
{
"param1": "first ",
" second": undefined,
"param2": "other value"
}
# Fix
Instead of decoding the entire URL in parseHash, we need to be more selective.
* In parseHash, split off the hash from the rest of the URL (so we'd have "/somepage" and "param1=first+%26+second&param2=other+value" separately).
* Decode just the page part ("/somepage") at this point, leave the hash as-is.
* Later when we call parseStringAsParameters(), we need to split on &, then separately decode each piece.
This will preserve ampersands in values w/o messing up the param splitting.
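To illustrate the split-before-decode principle in isolation (Python here purely as an illustration of the idea, obviously not pagerjs code):
```python
from urllib.parse import parse_qs, unquote_plus

raw = "param1=first+%26+second&param2=other+value"

# Decoding the whole string first (what parseHash effectively does) makes the
# encoded ampersand indistinguishable from a real parameter separator:
print(unquote_plus(raw))   # param1=first & second&param2=other value

# Splitting on '&' first and decoding each piece afterwards keeps the value intact:
print(parse_qs(raw))       # {'param1': ['first & second'], 'param2': ['other value']}
```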
Answers:
username_1: I'm facing this bug right now. Is there a workaround I can use now, before you release a fix?
username_0: @username_1 - my workaround is to double-urlencode all values before setting them in the URL, and decode them when using pagerjs's params. This means that the values are decoded twice; once by pagerjs (and put into the page param observables), then a second time by you in order to get the real value. It's not pretty, and the URLs get a bit longer, but it does work safely (since chars like & that mess up pagerjs are encoded & hidden from pagerjs). Also (probably) fixes any other potential param encoding bugs (like ? and = and who knows what else).
Ideally we'll get a fix in pagerjs. I tried, but it quickly became large (ie. the current code parses way too early in the stack, and we need to pass additional parameters through a lot of functions in order to decode at the right point). It's not too complex, but not something I had time to finish unfortunately :( |
samp-incognito/samp-streamer-plugin | 484871118 | Title: Pickup + SetArrayData
Question:
username_0: Pickups with an array of worlds defined with Streamer_SetArrayData are not updated when the player's virtual world is changed.
Example:
Pickup are in worlds **3, 4, 5**.
Player goes from world **0 to 3**, the pickup **appears**.
Player goes from world **3 to 4**, the pickup **disappears**.
Player goes from world **4 to 5**, the pickup **still disappeared**.
Answers:
username_1: Duplicate #316
Already fixed in #317
Status: Issue closed
|
orchestracities/crate-ce | 806173279 | Title: Phase out CE image
Question:
username_0: As @amotl kindly pointed out (#4), going forward there shouldn't be any need for a custom Crate Docker image---see #4 for the details. Should we:
* deprecate this image---just in case others start using it, we should make it clear we won't maintain it.
* phase this image out---we need some kind of migration plan for our prod envs, e.g. upgrade to Crate 4.5 during 3rd quarter 2021
Thoughts? |
agronholm/apscheduler | 218439337 | Title: use TwistedScheduler run scrapy in linux, first time hung and second time is OK
Question:
username_0: It seems that the first time the job fires, APScheduler loads Scrapy but the crawl hangs; the second time it runs fine. I expect the first run to work as well.
```
#coding=utf-8
from apscheduler.schedulers.twisted import TwistedScheduler
import logging
import sys
import os
import re
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
#from cuspider.spiders.kaili_spider import KailiSpider
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings
from datetime import datetime
from logging.handlers import RotatingFileHandler
reload(sys)
sys.setdefaultencoding('utf-8')
settings = get_project_settings()
#configure_logging(install_root_handler=False)
runner = CrawlerRunner(settings)
# path where the cookie file is saved
cookie_file = './cookie/cookie.txt'
'''TBC: the TimeRotatingFileHandler configuration does not take effect'''
logging.config.dictConfig({
'version': 1,
'disable_existing_loggers': False, # so the spider output also goes to the log
'formatters': {
'verbose': {
'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
'datefmt': "%Y-%m-%d %H:%M:%S"
}
},
'handlers': {
'file': {
'level': 'INFO', # log output level
'class': 'logging.handlers.RotatingFileHandler',
# rotate the log when it reaches maxBytes
'maxBytes': 1024 * 1024 * 10,
# keep at most backupCount files
'backupCount': 50,
# If delay is true,
# then file opening is deferred until the first call to emit().
'delay': True,
'filename': 'log/log.log',
'formatter': 'verbose'
}
},
'loggers': {
'': {
'handlers': ['file'],
'level': 'INFO',
},
}
})
[Truncated]
reactor.run()
print (u'Scheduler stopped at: %s' % datetime.now())
logger.info(u'Scheduler stopped at: %s' % datetime.now())
'''
# delete the cookie file
if True == os.path.isfile(cookie_file):
os.remove(cookie_file)
logger.info(u'Deleted the cookie file')
'''
except (KeyboardInterrupt, SystemExit):
'''TBC: normally this is never reached'''
logger.info(u'Scheduler aborted abnormally at: %s' % datetime.now())
reactor.stop()
scheduler.shutdown()
sys.exit(0)
'''for testing'''
def tick():
print('Tick! The time is: %s' % datetime.now())
```
Answers:
username_1: You realize you haven't actually asked a question or reported a bug? What do you want?
username_0: @username_1 I think it is a bug.
username_1: Is there *any way* you could simplify your script to reproduce the alleged bug with minimal amount of code?
username_0: ```
import scrapy
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from apscheduler.schedulers.twisted import TwistedScheduler
class MySpider1(scrapy.Spider):
# Your first spider definition
...
class MySpider2(scrapy.Spider):
# Your second spider definition
...
configure_logging()
runner = CrawlerRunner()
@defer.inlineCallbacks
def crawl():
yield runner.crawl(MySpider1)
yield runner.crawl(MySpider2)
if __name__=="__main__":
scheduler = TwistedScheduler()
scheduler.add_job(crawl, 'cron', hour='12', minute='00, 01')
scheduler.start()
reactor.run()
```
username_1: Ah, I see the problem now. TwistedScheduler sadly does not support `@inlineCallbacks`, as the default executor just runs everything in a thread pool. Only Asyncio and Tornado schedulers currently support coroutine jobs, and then only on Python 3.
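For reference, a rough sketch of what a coroutine job looks like on the asyncio scheduler (Python 3 only, recent APScheduler 3.x; this is purely illustrative and does not solve the Twisted/Scrapy case above — the job body is just a placeholder):
```python
import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler

async def tick():
    # a native coroutine job; it runs directly on the asyncio event loop
    print('Tick!')

scheduler = AsyncIOScheduler()
scheduler.add_job(tick, 'interval', seconds=5)
scheduler.start()

asyncio.get_event_loop().run_forever()
```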
username_0: @username_1 I hope you can implement `@inlineCallbacks` support for TwistedScheduler in Python 3. I use Python 2.7.13 on CentOS 6.5. For now I use crontab instead of APScheduler on Linux, and still use APScheduler on Windows. Thanks for your effort!
username_1: I have little personal interest in Twisted. If there is going to be `@inlineCallbacks` support in the near future it will have to come as a PR from someone else. I am too busy with higher priority projects for the time being.
username_0: @nikolas @username_3 @c-oreills @username_2 does anyone have time to implement the above?
username_2: Nope, sorry, I am only interested in the asyncio scheduler
username_3: @username_0 sorry, but I'm very busy right now
username_1: Closing as the original question was answered. If someone wants to add support for `@inlineCallbacks`, send a PR.
Status: Issue closed
|
houndci/hound | 209965569 | Title: Trying to install with homebrew, can't seem to bundle
Question:
username_0: I can't seem to bundle past capybara-webkit. I have qt5 installed and linked, but it says QtWebKit is no longer included and I cannot install capybara-webkit. I am trying to contribute or look at the code but can't get past this point. Is there anything I am doing wrong?
☯ ⤖ brew install qt5
Warning: qt5 is a keg-only and another version is linked to opt.
Use `brew install --force` if you want to install this version
☯ ⤖ brew install qt5 --force
Warning: qt5-5.8.0_1 already installed, it's just not linked.
☯ ⤖ brew link --force qt5
Warning: Already linked: /usr/local/Cellar/qt5/5.8.0_1
To relink: brew unlink qt5 && brew link qt5
(add /usr/local/Cellar/qt5/5.8.0_1 to path in .bash)
☯ ⤖ source ~/.bash_profile
☯ ⤖ gem install capybara-webkit
Building native extensions. This could take a while...
ERROR: Error installing capybara-webkit:
ERROR: Failed to build gem native extension.
current directory: /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0
/Users/mdcomputer/.rvm/rubies/ruby-2.3.1/bin/ruby -r ./siteconf20170223-32651-1flr6k1.rb extconf.rb
Info: creating stash file /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0/.qmake.stash
cd src/ && ( test -e Makefile.webkit_server || /usr/local/bin/qmake -o Makefile.webkit_server /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0/src/webkit_server.pro 'LIBS += -L/usr/local/opt/libyaml/lib -L/usr/local/opt/readline/lib -L/usr/local/opt/libksba/lib -L/usr/local/opt/openssl/lib' ) && /Applications/Xcode.app/Contents/Developer/usr/bin/make -f Makefile.webkit_server
Project ERROR: No QtWebKit installation found. QtWebKit is no longer included with Qt 5.6, so you may need to install it separately.
make: *** [sub-src-webkit_server-pro-make_first-ordered] Error 3
Command 'make ' failed
current directory: /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0
make "DESTDIR=" clean
cd src/ && ( test -e Makefile.webkit_server || /usr/local/bin/qmake -o Makefile.webkit_server /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0/src/webkit_server.pro 'LIBS += -L/usr/local/opt/libyaml/lib -L/usr/local/opt/readline/lib -L/usr/local/opt/libksba/lib -L/usr/local/opt/openssl/lib' ) && /Applications/Xcode.app/Contents/Developer/usr/bin/make -f Makefile.webkit_server clean
Project ERROR: No QtWebKit installation found. QtWebKit is no longer included with Qt 5.6, so you may need to install it separately.
make: *** [sub-src-webkit_server-pro-clean-ordered] Error 3
current directory: /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0
make "DESTDIR="
cd src/ && ( test -e Makefile.webkit_server || /usr/local/bin/qmake -o Makefile.webkit_server /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0/src/webkit_server.pro 'LIBS += -L/usr/local/opt/libyaml/lib -L/usr/local/opt/readline/lib -L/usr/local/opt/libksba/lib -L/usr/local/opt/openssl/lib' ) && /Applications/Xcode.app/Contents/Developer/usr/bin/make -f Makefile.webkit_server
Project ERROR: No QtWebKit installation found. QtWebKit is no longer included with Qt 5.6, so you may need to install it separately.
make: *** [sub-src-webkit_server-pro-make_first-ordered] Error 3
make failed, exit code 2
Gem files will remain installed in /Users/mdcomputer/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.12.0 for inspection.
Results logged to /Users/mdcomputer/.rvm/gems/ruby-2.3.1/extensions/x86_64-darwin-16/2.3.0/capybara-webkit-1.12.0/gem_make.out
☯ ⤖
Answers:
username_1: QT 5.5 was sadly the last version capybara-webkit was compatible with. As per their Wiki - https://github.com/thoughtbot/capybara-webkit/wiki/Installing-Qt-and-compiling-capybara-webkit#homebrew
If you install `qt55` from homebrew, you shouldn't have any issues installing capybara-webkit.
Status: Issue closed
username_2: Here https://github.com/thoughtbot/capybara-webkit/wiki/Installing-Qt-and-compiling-capybara-webkit |
mvysny/vaadin-on-kotlin | 242916542 | Title: When editing an updating Grid, I get exception..
Question:
username_0: I've been trying to figure out what is wrong in my code. When I go to edit some of the data in the grid, I get this exception:
10:03:03.833 [qtp279680875-909] ERROR c.vaadin.server.DefaultErrorHandler -
java.lang.IllegalStateException: Duplicate key Person(firstName=matti222, lastName=meikalainen22, sotu=111111)
at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133)
at java.util.HashMap.merge(HashMap.java:1245)
at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1540)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at com.vaadin.data.provider.DataCommunicator$ActiveDataHandler.getActiveData(DataCommunicator.java:165)
at com.vaadin.data.provider.DataCommunicator.refresh(DataCommunicator.java:521)
at com.vaadin.ui.AbstractListing$AbstractListingExtension.refresh(AbstractListing.java:122)
at com.vaadin.ui.components.grid.EditorImpl.save(EditorImpl.java:250)
at com.vaadin.ui.components.grid.EditorImpl$1.save(EditorImpl.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.vaadin.server.ServerRpcManager.applyInvocation(ServerRpcManager.java:155)
at com.vaadin.server.ServerRpcManager.applyInvocation(ServerRpcManager.java:116)
at com.vaadin.server.communication.ServerRpcHandler.handleInvocation(ServerRpcHandler.java:445)
at com.vaadin.server.communication.ServerRpcHandler.handleInvocations(ServerRpcHandler.java:410)
at com.vaadin.server.communication.ServerRpcHandler.handleRpc(ServerRpcHandler.java:274)
at com.vaadin.server.communication.UidlRequestHandler.synchronizedHandleRequest(UidlRequestHandler.java:90)
at com.vaadin.server.SynchronizedRequestHandler.handleRequest(SynchronizedRequestHandler.java:41)
at com.vaadin.server.VaadinService.handleRequest(VaadinService.java:1577)
at com.vaadin.server.VaadinServlet.service(VaadinServlet.java:381)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter.doFilter(WebSocketUpgradeFilter.java:224)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
And here is the code I'm using:
[Truncated]
// automatically create filters, based on the types of values present in particular columns.
appendHeaderRow().generateFilterComponents(this, Person::class)
}
personGrid.addItemClickListener({ event -> Notification.show("Value: " + event.getItem()) })
}
override fun enter(event: ViewChangeListener.ViewChangeEvent?) {
personGrid.dataProvider.refreshAll()
}
}
```
Even if I leave the whole ListDataProvider out of it, I still get the same duplicate key error.
So I'm starting to wonder: is this a bug in Vaadin, or am I just doing something wrong? Is the problem with the ListDataProvider, or with the Binder? I'm just not getting how to use this properly from the docs.
Answers:
username_1: I believe that Grid requires all rows to be unique; or rather, that they have a unique ID that identifies every row without any doubt. The exception would support this belief, but Vaadin's `DataProvider.getId()` makes no such requirement. I would say we should ask the Vaadin gurus on the forums whether it is allowed for a ListDataProvider to contain two items with the same ID or not. Or perhaps we should create a Vaadin bug, since it's not immediately clear from the documentation whether those IDs must be unique or not.
Anyways, assuming that the Grid requires all rows to be unique: by default the `Person` itself is the row ID. You can see this by looking into the `DataProvider.getId()` default implementation, which `ListDataProvider` doesn't override. That means that Grid uses `Person`'s `equals()` and `hashCode()`; with data classes this means that the instances of Person are considered equal if their fields are all equal.
I thus believe that there might be more than one `matti` in your database. The problem should go away by e.g. introducing a primary key column into the `Person` class.
username_0: Yes, I was also thinking that it requires them to be unique. But if the Grid uses the hashCode, it should be different when the variables inside the objects are different. Or did I misunderstand that?
And if the keys are the same, wouldn't the same problem arise when entering the view that lists the objects in the Grid?
So this problem basically happens only after editing something. And it does not matter if you edit it so that it's unique. In that test there are only two objects, and it happens even if I edit them so that they really are unique.
I also tested this without using the DataProvider. I just assigned the person list to the grid, and still the same thing.
Did you already file a bug report with Vaadin, or should I?
username_1: Ah, I hadn't realized that this only happens in edit mode - I'm sorry. Well, then you are way more skilled in this area than me ;) Can you please open the bug report? I think you will be more capable of providing all the necessary details than I can.
Well, assigning a person list to the Grid will actually create a `ListDataProvider` in the background I believe.
username_1: Also, if I understand correctly, you're migrating your project to Vaadin 8 and Kotlin? I would be thrilled to learn of the project and how the migration went; also I believe that Vaadin marketing guys would love to hear that, you should drop a mail to all of us - in Finnish, of course :-)
username_0: Yeah, I'm basically rewriting an old Java and Vaadin 7.7 based project to use Kotlin and Vaadin 8. The whole backend is now rewritten in Kotlin; I'm only struggling with the UI now :) And it seems like the Vaadin documentation is not the clearest regarding the Grid component.
Btw, are there any major issues if I try to use Vaadin 8.0.6 with vaadin-on-kotlin? Are there any features that will not work with the older Vaadin version?
username_1: Just open a bug report at https://vaadin.com/bug while Vaadin 8.1 is still in RC1. I think that guys will want to learn of the docu shortcomings and will be able to fix it until the 8.1 final is out.
I believe you should be able to use Vaadin 8.0.6 just fine with Vaadin-on-Kotlin. I'm personally using 8.1 RC1 because of ComponentRenderer support added in 8.1, and it's working quite nicely: aedict-online.eu , also https://martin.app.fi/karibudsl/
username_1: Please, when you do open a bug, just link it from here so that I can be notified when it's fixed upstream - thanks!
username_0: Here's the link to the bug: [https://github.com/vaadin/framework/issues/9678](https://github.com/vaadin/framework/issues/9678)
username_2: I have hit similar but not identical symptoms -- may or may not be the same.
In my case, the grid started behaving *very strangely* but didn't always crash -- I debugged/resolved it by following the hints in the docs about unique IDs. The DataProvider getId() uses Object.equals() in some places - basically the same rules for 'identity' as Java collections.
Depending on the specifics of the object -- in Kotlin - variants on interface/class/data class, @JvmField, @JvmStatic, extension properties etc. can lead to 2 objects being 'the same' even if they are not (or vice versa).
I solved it with a derived DataProvider with a custom getId()
NOTE: the object in question came from vok-db, where "id" is a special property that's required for data binding and DB access. For a while I was confused into thinking that this was what DataProvider was using.
Note: I've found the 'safest' classes to use for Grid are 'pure' 'data class' Kotlin classes with nothing fancy. Vaadin does a 'very good' job of discovering properties - often 'too good' for Kotlin - finding functions or extension properties not intended for UI purposes. I haven't yet found a clean way to ignore these in general, to the point that I often dynamically generate classes at runtime just to put them in a grid in a reproducible/reliable way with consistent naming.
kan-bayashi/ParallelWaveGAN | 811755088 | Title: "fro" norm loss function
Question:
username_0: Hi, thank you for your amazing project.
I notice that at commit https://github.com/kan-bayashi/ParallelWaveGAN/commit/3524d8b7210425d25b96bbdc4749d715e0b3800 , you modified the spectral convergence loss function. The new function computes `fro norm ` over all dimensions (including the batch dimension). If so, then one data sample can affect the loss values of other data samples.
If I'm not wrong, the original `spectral convergence loss` at https://arxiv.org/pdf/1808.06719.pdf is intended to compute the `fro` norm over the ``time and frequency`` dimensions only (``dim=(1,2)``).
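For illustration, a per-sample variant would look roughly like this (a rough PyTorch sketch assuming magnitude spectrograms shaped `(batch, frames, freq_bins)`; this is not the repository's actual code):
```python
import torch

def spectral_convergence_loss(x_mag, y_mag):
    # x_mag, y_mag: (batch, frames, freq_bins) magnitude spectrograms
    num = torch.norm(y_mag - x_mag, p="fro", dim=(1, 2))  # per-sample numerator
    den = torch.norm(y_mag, p="fro", dim=(1, 2))          # per-sample denominator
    return (num / den).mean()  # average the per-sample losses over the batch
```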
I would be very happy if you can share your opinions on this issue.<issue_closed>
Status: Issue closed |
dexidp/dex | 407131352 | Title: background health check
Question:
username_0: ☝️ just a guess, all suggestions are welcome 😃
Answers:
username_1: I wouldn't mind taking this on, extending health check retries on subsequent successful attempt makes sense. That being said I propose capping the time limit to a value that makes sense (5 minutes is my initial thought).
username_0: @username_1 great, thank you 😃 A five-minute cap seems reasonable, too. (💭 I guess it could be made configurable; but that doesn't have to go into the first iteration.)
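To sketch the idea (illustration only, in Python rather than Go, and none of these names exist in dex): the wait between checks grows while the check keeps succeeding, capped at five minutes, and resets after a failure:
```python
import time

def health_check_loop(check, base=15, cap=300):
    interval = base
    while True:
        if check():
            interval = min(interval * 2, cap)  # back off while healthy, capped at 5 min
        else:
            interval = base                    # re-check quickly after a failure
        time.sleep(interval)
```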
username_1: /assign @username_1 |
QubesOS/qubes-issues | 83058761 | Title: Consider changing "dom0" to "qubes" for better UX
Question:
username_0: I'm not sure if this is the right place to file what I consider UX bugs / improvements - if it's not, please tell me where else to do so :-)
I propose changing the root VM currently called `dom0` to be called `qubes` as I wager it would be considerably more user friendly in testing. My reasoning is:
- It is less technical and jargony
- Does not introduce a new term a user must understand / remember
- Makes more logical sense "I installed Qubes OS on my computer, thus `qubes` is the main domain"
Answers:
username_1: I don't think it is a good idea to name it `qubes`, because it can be
ambiguous. It will be even worse when we implement the GUI VM (the current
`dom0` will be inaccessible to the user directly then). But we are
slowly moving to the name `AdminVM` (currently you can see it as a type for
dom0).
username_2: I think Brennan's heart is in the right place, but I have to agree with Marek. To elaborate:
1. a. "Qubes OS" refers to the entire OS, which includes Xen, `dom0`, and multiple `domU`s. If we start also referring to `dom0` as "Qubes," then the term "Qubes" will be ambiguous between the whole OS and a single VM inside that OS.
b. Joanna [makes a distinction](http://blog.invisiblethings.org/2013/03/21/introducing-qubes-odyssey-framework.html) between "Qubes OS" and "Qubes." As I understand this distinction, "Qubes OS" refers to the whole OS which we currently have, while "Qubes" *simpliciter* refers to the **Hypervisor Abstraction Layer (HAL)**, which is not by itself a complete OS.
2. Once `dom0` is bifurcated into the `GUIVM` and the `AdminVM`, there will not be a compelling reason to refer to only one of these VMs as "Qubes" instead of the other, and it would be strange to refer to *both* of them as "Qubes," since then someone could say, "I'm having a problem in Qubes," and there will be three(!) different things they could be referring to.
username_3: sounds like we have agreement that it should be `admin`?
the thrust of this ticket is to change from `dom0` which has no meaning to users and is a confusing term. There have been more recent in-person discussions about this with brennan, joanna and I think we all agree `qubes` is not the right term since that will be the future name for AppVMs most likely.
username_0: I definitely think `admin` is an improvement over `dom0` and per our in person chats... I think @username_3 is referring to what we discussed at the usability work session at the OTF Summit, which was calling VMs in general a `qube` and in this specific case it would be the `admin qube` which seems pretty logical and decently intuitive!
username_1: I don't think calling 'VM' a 'qube' is a good idea - this would make
virtually any non-Qubes documentation useless on Qubes...
username_4: I agree. I don't think this kind of complexity (term: VM) can be hidden.
We'd still explain somewhere, that a qube is a VM. So we wouldn't reduce
complexity, but add another term on top, qube. And thereby increasing
complexity.
username_0: I think that's a fundamentally wrong assumption from a highly
technical user's POV. It can *totally* be hidden from
non-technical users. A VM does not need to be explained in the UI
via button names, terms, etc. Normal users will never need to
read / nor try to understand that a "qube" actually is a "VM"
Normal users never need to learn when they "connect to the
internet" or "go on the web" they are *actually* "making a TCP/IP
connection using the HTTP protocol to interact with to a
webserver". Or when they "access a website securely" they are
*actually* "doing public / private key cryptography using SSL /
TLS certificates." Yes, this admittedly dumbs things down, which
is entirely the point- it is essential for achieving a simple
"out of the box" UX for non-technical users. As long as honest
documentation & easy access to that technical information exists
for the curious / more technical, I don't see a problem here.
We have to think in terms of *initial first impression* of a user
when thinking of usability. Given the majority of users are
non-technical, is the term & concept of a "VM" helpful to anyone
if they don't already understand what a VM is? Would a user who's
never heard the term VM understand what a VM is? Furthermore,
would it make sense in the Qubes context after reading one to two
sentences? Almost certainly not. Therefore, eliminate jargon &
complexity and go for simple metaphors and try.
username_3: yep agreed. it would drastically increase the usefulness of the Qubes name & brand in explaining how Qubes works to non-technical users.
username_5: Maybe we could settle on the term „domain” which is both more
user-friendly than „VM” and already established in literature, most
importantly (and prominently) in libvirt, which we use.
username_0: If we try to make our UI naming conventions match things in the
literature, we are bound to end up with things that make sense to
technical users who've read "the literature" probably at the
expense of those who have not :-)
username_2: When I first encountered Qubes and saw the Qubes logo (a stack of three cubes, at the time), it was fairly obvious that "Qubes" was a unique spelling of "cubes," and that the "cubes" represented VMs. For this reason, I found it strange that nothing in Qubes was ever referred to as a "qube."
However, if the *primary* reason to start referring to VMs as "qubes" *now* is that the OS is already named "Qubes OS" and has had that name for many years, then this strikes me as a rather weak reason. There should be some compelling, *independent* reason to introduce a new term which abstracts away from VMs.
Here's one potential reason: In Qubes, there are important distinctions between different types of VMs. For example, TemplateVMs are typically more trusted than most of their child TemplateBasedVMs, and the compromise of a TemplateVM is generally (but not always) a worse outcome. We also have StandaloneVMs and HVMs, the latter of which can themselves be either TemplateVMs or StandaloneVMs. There are also ProxyVMs and NetVMs, which are typically less trusted than other types of VMs (but not always, e.g., TorVMs). Finally, we use the term "AppVM" to refer primarily to TemplateBasedVMs, but sometimes also to other types (rather ambiguously, unfortunately).
Given this complex situation, the term "qube" might be most helpful if it were used to refer only to a *proper subset* of these different types of VMs, namely the ones with which less technical users would primarily interact. For example, we might say:
A qube is where you do your work and store your files.
Qubes OS lets you compartmentalize your digital life into securely isolated qubes.
So, a "qube" would refer to most AppVMs, StandaloneVMs, (non-template) HVMs, TemplateBasedVMs, etc. Meanwhile, things like TemplateVMs, ProxyVMs, NetVMs, and dom0 would not be referred to as "qubes." Instead, these would be made less visible to less technical users in order to protect them from making common but fundamental mistakes (like trying to do work in a TemplateVM or in dom0).
(Note that this is just a provisional idea, not a finalized recommendation. My aim is to point out that there are more useful ways to leverage a term of abstraction like "qube" rather than as a 1:1 replacement for "VM.")
username_3: yes exactly, agreed.
this is what brennan, joanna, and myself had discussed in DC - at the most basic level, that AppVMs are `qubes`, while everything else ("service VMs" = proxyvm, netvm, dom0) is not.
I don't think non-technical users will be using StandaloneVMs, HVMs, TemplateBasedVMs, etc.
username_6: So, I think I still like the "VM" most, b/c it's shortest and also (for the slightly more technical crowd) immediately explains how the isolation is being done.
As for dom0 renaming -- we definitely need to do that: not only is "Dom0" meaningless to most people, but also given Qubes HAL architecture it's too Xen-specific. I suggest replacing it with AdminVM, or just Admin (or 'admin' depending what looks nicer). In the future (Qubes 4) we will also have GUI domain, which should also be listed in the manager.
username_4: I think this was a mistake or a confusion about terms.
AppVMs can be one type of TemplateBasedVMs.
https://www.qubes-os.org/doc/glossary/
"Template-BasedVM Opposite of a Standalone(VM). A VM, that depends on
another TemplateVM for its root filesystem."
TemplateBasedVMs can be AppVMs, ProxyVMs, NetVMs, HVMs.
username_3: corrected in comment, thanks https://github.com/QubesOS/qubes-issues/issues/1015#issuecomment-155606604
username_0: More on this in another issue at a later date :)
username_2: This sounds like a response to the proposal of performing a 1:1 replacement of "VM" with "qube," but that was not the only proposal made. I pointed out that there are more useful ways to leverage a term of abstraction like "qube," e.g., by using it to refer only to a *proper subset* of the VMs in Qubes OS, such as the ones in which users are supposed to run programs and storing data.
username_7: I think this goes to the heart of the problem, although I'm quite surprised you found it so difficult. (HVM is pretty standard.) I would have thought that the solution was to improve documentation so that these terms are not so hard to understand. (Although this wont help people who wont read.)
But this is to make (technical) sense to (relatively) technical people, and you are trying to make a contribution at a technical level. You shouldn't need *any* of this to improve your security using Qubes.
Why is Bromium so neat? Because it's fast and there is **no** learning curve. A Windows user can just use it straight off.
I agree with @username_0 that detail can *totally* be hidden from non-technical users. But I don't think that the answer is to come up with new terms. It's to drastically simplify the offering (Is this what's implied in "recipes"?). Completely hide the implementation for those users.
I've been experimenting with a Torified Debian Qubes , with set VMs , and a single "standard" menu. No reference to VMs, no "domains" in the menu, no manager. Restricted applications running in the "email" and bank domain - all files are passed off to DispVMs for opening. Banking browser restricted to bank domains - this is the only thing that isn't easy to automate.
I don't have a mother to try it on, but I have given it to assorted folk, most completely non technical. The only complaints have been about browsing speed - that's Tor for you. Oh, and boot time. And that funny copy/paste thing. And slow boot times for VMs - not, of course, put in those terms. ( I mitigate that by prestarting standard VMs on login. ) And maybe some other minor stuff.
The point is that no explanation is required as long as it just works. And these are people who wouldn't find talk about VMs, recipes, qube(singular or plural) helpful. But they *do* want to be a little safer online.
Maybe that isn't the target?
The thing is that security is really difficult. You can improve security by educating people or by giving them tools to use, minimise the chances of misuse by enforcing use and taking away user choice. Once you let folk loose on the detail of the implementation then you had better hope they understand what they are doing and what the risks are: they wont unless they have some grasp of the technical background.
It's a moot point whether a terminology change will improve matters. I think not.
username_3: from Brennan's beautiful mock-ups and axon's points, it sounds like concept of a `qube` could be extended from just a replacement for the term AppVM to also incorporate some understanding of its netVM (ie how it connects to the internet, if it does).
So to tweak the mock-up I would say that a `qube` does not connect to a "Networking Qube", but rather one of the aspects of the `qube` itself is whether and how it networks.
That could be one of the "sides" of the `qube` (maybe the top?). Black if it is non-networked, purple if it is Torified, some-color for VPN, orange/yellow/red for regular clearnet.
@username_7 Your experiment sounds similar to discussions we have had about a "Beginner mode" for Qubes where there is not any domain management, just one AppVM and using the DispVM functionality for opening files/attachments. It would still be a step-up in security from plain Linux (or Windows...). Then there'd be "install some recipes" (Normal mode) and "I'll do it all myself" (Advanced mode).
I don't think this discussion is coming out of disagreement of target audience, just of what we think would be most helpful for new users. We all want non-technical, non-Linux users to be able to understand what Qubes does and integrate it into their existing workflows with minimum confusion. And terminology can help with that process.
username_6: 1. I really like the Brennan's mock-ups (except, I think we should avoid Americanisms, such as "super simple" and keep it more emotionally neutral, say just "simple").
2. I feel persuaded we should replace the term "VM" with a more catchy term, and I like the reasoning for using the term "qubes" for that.
3. I think we should use this "qubes" term for all the VMs, including the service VMs, not just AppVMs. This is to minimize the number of different classes of abstract terms we're introducing. Also because the service VMs can be interacted with in the same way the user interacts with his or her AppVMs (e.g. I can start Terminal in my TorVM).
4. I do _not_ like the idea of squashing the inter-VM (sorry, inter-qube) connectivity picture onto the properties of a single, 3D-visualized qube. Not only might that be technically difficult (it's hard to show 3D pictures on a 2D screen or paper; also mind the color-blind users if we wanted to use colors), it really is not correct (how would you represent a situation where 2 or more VMs use the same networking VM?) and it also misses one of the important points of Qubes OS -- being able to separate networking, Tor, USB, etc. (so everything we represent by a "qube") from the user AppVMs (so from the user "qubes").
5. So, I think we should perhaps advertise the term "qubes" as a container for running "things" -- be that user apps, be that (admittedly abstract to many users) networking.
6. Also thanks @username_7 for the detailed write-up, I agree with most of what you wrote there. As Michael mentioned, we definitely want to use the "recipe" infrastructure to implement "super easy", out of the box configurations.
username_2: Minor issue: "Qube" is a geometric/structural metaphor, while "recipe" is a cooking metaphor. We should avoid mixing metaphors. A "recipe" is a way to "cook up" a qube. But that's weird, because qubes aren't really "consumed". It's also the opposite of what you're going for with *pre-configured* qubes. A recipe is a set of instructions you follow when you cook something yourself. Pre-configured qubes are the opposite of that, because they're already configured ("cooked") for you.
username_2: Again, what about TemplateVMs?
In 3, you say you want to call *all* VMs "qubes," so TemplateVMs would be called qubes. Then in 5, you say that qubes should be advertised as containers for running "things." But users are **not** supposed to be running "things" in TemplateVMs (except, of course, to install and update software)!
username_6: That one would be the "Template for qube(s)", right? :)
username_2: So, a TemplateVM is a "Template for qube(s)"? Is it itself also a qube? If so, then since a qube should be advertised as a container for running "things" (by 5), it follows that a Template qube should also be advertised as a container for running "things" (false). If not, then since "qube" refers to all VMs (by 3), it follows that a TemplateVM is not a VM (false).
username_8: I need some kind of decision on which kind of vocabulary I should use in Qubes OS CLI tools, manpages and the Python API.
username_9: Yeah, this portion of the terminology is a messy transitional state right now. As far as documentation is concerned, we're slowly (very slowly) moving toward the term "qube" for all documentation that is intended to be consumed by non-technical end users. (For technical end users, more precise terms like "AppVM" and "TemplateBasedVM" are fine.) I've attempted to delineate this more clearly here: https://www.qubes-os.org/doc/glossary/
username_8: @username_9 It's more about qubes vs domains vs vm and not about "AppVM" and "TemplateBasedVM"
This is a part from the `qvm-block ls --help`:
```
positional arguments:
VMNAME list volumes from specified domain(s)
optional arguments:
--all perform the action on all domains
--exclude EXCLUDE exclude the domains from --all
```
Should it look like this? :
```
positional arguments:
QUBE_NAME list volumes from specified qube(s)
optional arguments:
--all perform the action on all qubes
--exclude EXCLUDE exclude the qubes from --all
```
username_9: According to our current "official glossary," `qube`, `domain`, and `VM` are all equivalent and interchangeable terms, so I suppose it depends on whether you think it's a problem to have multiple terms for the same thing. In some contexts (e.g., variable names?) maybe it is, but in other contexts (e.g., mailing lists), probably not.
username_0: Correct. I think getting rid of `domain` is the right path forward here. Using VM in technical documentation (and CLI tools) is alright (e.g. `qvm-move-to` is acceptable), with the GUI being the only place using `qube` in text and docs!
However, changing `qvm-move-to` to `qubes-move-to` would be nicer, if it's not going to be a major headache, as there are other tools that start with `qubes-`
username_6: That was me tweeting. The reason for the use of 'domain' in that context was that 'qube' makes no sense outside of the Qubes OS.
username_9: In this case, the problem is not using "domain" instead of "qube." Rather, it's using "domain" (or "qube") to mean "workflow" instead of "VM."
But perhaps this is actually an opportunity. Instead of completely eliminating "domain" from Qubes' vocabulary, we could instead re-purpose it to mean something like "workflow."
username_3: not sure there is a conflict, you create a qube ("banking") to protect a particular (security) domain ("my banking"). A qube is the Qubes OS implementation/reflection of a real-life security domain.
So in that way I think in the glossary the AppVM, NetVM, ProxyVM should be defined as qubes (done, not done, not done), the qube should be defined as application or service VMs for particular security domains (~done but ambiguous), and the domain should not be defined as a software implementation but instead the real-life security domain a user is trying to have reflected within Qubes OS (opposite of what is currently there).
I don't think domain is equivalent to workflow, a workflow can involve multiple qubes (and security domains): for example your "work email" workflow could involve torified "email" and offline "split-gpg" qubes. this is something that could be clarified by the VM Manager by enabling the user to group qubes into workflows (using recipes, manually, etc).
username_9: Not so sure about that. Sounds like a complicated UI/UX decision.
username_9: One problem with not using "domain" as a synonym for "VM" is that this diverges from Xen's usage (e.g., "dom0" is short for **domain** zero), but I think it's fine for us to diverge from Xen, since it seems that our goal is to mostly hide Xen from the user.
username_9: As a general point, one advantage of using the term of abstraction "qube" instead of "VM" is that, if Qubes OS were ever to implement a form of compartmentalization that does not use virtual machines (but rather some other kind of underlying container), the term "qube" could usefully encompass all types of compartments used in Qubes OS.
username_10: I have to agree with the initial reactions here, despite liking the idea. Whenever we referred to 'qube' in the plural (qubes) there would be high potential for confusion with Qubes OS itself.
Looking afield, we see that Docker has 'containers'. No confusion there.
OTOH, should we use a term to hide the VM aspect? Probably, since Qubes is supposed to be more technology agnostic than that. Qubes would *also* benefit in the ecosystem sense if its architecture produced specific object types with common names (like 'container'); Then you have a userbase that can grow by making the objects available (and, therefore, tempting to try).
For example, a VPN provider could configure a proxy vm (proxy container?) entirely in the private.img /rw/config so a user would only have to download the vm and (um, somehow) install it, perhaps using a special mode of qvm-backup-restore.
I see some options here:
1. Go with 'qube' and suffer the conversational problems
2. Adopt Docker's term, as 'Qubes container'
3. Return to Xen 'domain', accept it on other platforms for historical reasons
4. Use another word like: box, vessel, canister, drum, case, chamber
5. Use a word that relates to the concept: encapsulation=capsule, isolation=solon? isol? cordon?
6. Play on geometry: sphere, ring
7. Coin a permutation such as qbox, qdrum, qcase, qring
8. Abbreviation, like 'qu' from 'Qubes domU' which we could playfully pronounce as simply /kyoo/ or maybe /kyoo-yoo/.
Choosing a permutation or rare word brings a semantic bonus: Greater clarity in Internet searches and greater ownership of the concept. You also don't have to qualify it as "Qubes domain" nearly as often if you just say 'qbox'.
My favorites so far: Qu, cordon, qbox, isol, qcase
...but if you're going to be conservative (which 'qube' is) then I think container or domain is better.
username_9: We actually have an issue for that functionality: https://github.com/QubesOS/qubes-issues/issues/1747
username_10: Some more terms:
qapsule
fort
brig, qbrig
briq
I really like **briq**...
username_11: I would call a TemplateVM a Slab, Foundation or a Base upon which a Qube is placed to create ..... (need an analogy for container).
Base is a geometric term - https://en.wikipedia.org/wiki/Base_(geometry)
username_9: In https://github.com/QubesOS/qubes-doc/pull/764, it looks like we're going to introduce new users to "qubes" and "templates."
@username_1, are we still committed to using the term "qube" in this way?
username_1: I think the general idea of using "qube" is good, but we need to use it consistently.
We have multiple cases:
1. Generic name for a VM ("qube settings", "create qube")
2. Names of VM types (TemplateVM, StandaloneVM, AppVM, DisposableVM)
3. VM types removed in R4.0 (NetVM, ProxyVM) - in R4.0 those are just VMs with `provides_network` property set to true.
4. Functional names, like UpdateVM, ClockVM, soon GuiVM. And also `default_dispvm` and `management_dispvm` properties (but see below).
Anything else?
I think the first point is simple - just using "qube" there should be ok.
The second one is more tricky though, as it applies to both the presentation layer (how tools display them) and internal code. Maybe types should have the "VM" suffix stripped, so it would be "Template", "Standalone", "App", etc.? The "App" sounds strange, any better idea?
While at it, we probably should rename "Disp(VM)" to the full "Disposable(VM)". I've seen at least one case where this abbreviation resulted in confusion (Display VM).
The third point may be just about dropping those terms, but looking at various docs, it would be convenient to have a term for "qube with provides_network property set to true", especially as this may be any qube type. "Proxy qube" may not be a good idea, as it may be confused with an actual qube type. Any better idea? Or maybe it isn't a problem?
While we're discussing terminology, another related issue we should solve is "DVM template". This term has both "VM" and the obscure abbreviation "DVM" from "DispVM" (which itself comes from Disposable VM).
Additionally we have the "default_dispvm" property, which actually points at the DVM template, not the Disp(osable)VM. "Disposable qube template" I think is too long, but I don't have any better idea.
As for the fourth point, maybe we should simply rename those to `UpdateQube`, `ClockQube`? This is both about presented name, and actual global properties. I'm not sure if that's the best idea...
There is also the qrexec policy, which has various keywords containing "VM": `@anyvm`, `@dispvm`.
We should figure it out before the Qubes 4.1 release, as some of this may require incompatible changes (like renaming VM types). We'd also need to decide how to handle it in various tools - should old names be rejected (breaking various user custom scripts), or automatically translated to new names (potentially resulting in some confusion, like asking to create an "AppVM" resulting in an "App" or whatever the new name will be)?
@username_9 we need your help here. @username_3 maybe you have some ideas here? @marmarta?
username_12: Breaking a lot of scripts "just" for renaming things doesn't sound nice, especially after the switch to 4.0 already had a lot of churn. OTOH having code to handle the old names in a lot of places sounds worse in the long term, so I would go with changing the names consistently if we want to use "qube" consistently.
---
When all those naming decisions are done we should not only update the glossary with the updated meaning but also with the preferred form/usage (for example "Use qube instead of VM in Qubes documentation and code.", or whatever gets decided).
username_9: No, that was intentional, because "dom0" and "domU" are the accurate and precise technical terms, and it is especially important to be accurate and precise in QSBs, which are, by their nature, technical documents. However, I can imagine restricting technical language to certain sections of the document and trying to make the user-actionable parts as accessible as possible.
username_12: I think at least part of this comes from the fact that there are two meanings of "AppVM". One is the "a VM that is intended for running software applications [through the user]" definition. The other is the `AppVM` class as used in 4.0, which is used for all template-based VMs (as opposed to `AdminVM`, `DispVM`, `StandaloneVM` and `TemplateVM`).
username_1: Yes, this is also a problem. Similar to ProxyVM term ("net qube"). So, maybe we should have "app qube" and "TemplateBased" qube type?
username_9: That would at least preserve the two distinct concepts denoted by "AppVM" at different times.
username_13: One possible source of confusion, should "VM" be replaced with "qube" in docs/UI/configs, is that the plural VMs becomes "qubes". Then it may be less clear, depending upon context, if this is meant to refer to multiple VMs or to the OS as a whole.
Brendan
username_14: Naming should inform mental models of how the broader ecosystem of QubesOS works. As such, I personally like calling VMs "qubes," which depending on the context _can_ get confusing. There is never an absolutely-perfect answer for any/every context, imho—especially because VMs in Qubes are used for so many different things. Common mental-models of VMs are, "Oh, I need to run another OS" or "Oh, I need a dev box," not "Oh, I need to isolate devices from internet from apps from where my library of distributions are kept."
I feel like there are bigger fish to fry for the time being. There also needs to be a friendly introduction to QubesOS, separate from the docs, that explains how the ecosystem works to newbies. Something Marta and I have discussed a bit. Docs imho need to be more for nitty-gritty details stuff, and outlining best practices; less for "here is generally how this thing works."
username_9: Out of curiosity, what medium do you have in mind? (In other words, if not a doc, would it be a video? An infographic? An interactive software tutorial?)
username_15: Just my 2 cents on this. I'd be inclined to this approach (being discussed here: https://github.com/QubesOS/qubes-issues/issues/1395) complemented with some video tutorials (updated for r4.1). I've have some thoughts on this that I'd like to share at some point.
username_9: You have a misconception. The use of "VM" in *technical* contexts, such as this technical software issue tracker, is not only entirely appropriate, but *necessary*. There is no way that technical people in *any* field would be able to get things done if they were restricted to user-facing language and unable to use their own technical terminology. We should *not* be trying to get contributors to stop using terms like "VM" in technical discussions.
username_14: @username_9 I'm not at all suggesting that technical discussions should abide by user-facing language. However there is a "gray zone" where UX work needs to map to how functionality is exposed to a user. If even developers cannot get those user-facing conventions correct, that to me is indicative of a problem with the mental model those names inform. Mental models help people intuitively piece-together how a complex system works, without memorizing 1:1 names to functional ideas. To shape a system to be as intuitive and usable as possible, they are important.
That comment you quote, above, was mostly me trying to be lighthearted with Sven, on an aside. This discussion has gotten a bit tense, and I am not at all seeking to create tension. Only to help make Qubes more intuitive, to newer and non-Linux native users, while also not alienating existing and highly technical users. That is not an easy task. User "experience" as a practice separate from designing "user interfaces," does need to look at everything from documentation, to naming, to performance, to accessibility, and to the visual interface itself. I am truly not seeking to undermine or overlook a lot of community effort already involved.
username_16: One problem with using “qube” is that (per internal discussion) it leads to confusion with “Qubes” (the complete system). In some languages (such as French), trailing “s”s are often silent, which makes the matter worse.
username_9: Sorry if I was too harsh. Seeing all of us go in circles over this renaming stuff (as in, literally repeating conversations from years ago that are still on the very same pages, just closer to the top) has been frustrating for me as a lover of efficiency and a hater of repeating myself. 😆
username_7: I don't know who the internal discussion was between, but in my experience with new users this doesn't give rise to any confusion. In fact, as was said years ago, it's a natural usage for most people.
But I think it's been said - we don't want to spend time on this unless there is good reason (and evidence) to do so.
username_9: Example of "qube" causing some user confusion in this thread:
https://twitter.com/Piniora/status/1462043541237022722
username_3: I read that feedback as highlighting inconsistency in usage, and confusion with using "Qubes" to refer to "Qubes OS".
We probably want all references to "Qubes" (meaning "Qubes OS") to be "Qubes OS", so "Qubes OS Global Settings", etc.
username_7: Well, OK - but is this a genuine confusion, or manufactured?
username_3: well more consistency/clarity is always helpful.
but maybe also in some of these cases it is simplest & clearest to be removing words. like maybe "Qubes Global Settings" should just be "Global Settings", etc
username_14: <img width="520" alt="Screen Shot 2021-11-22 at 2 45 53 PM" src="https://user-images.githubusercontent.com/8262612/142946382-130cb451-2c78-4faa-86a5-1fadb62ab326.png">
The above are all perfect to resolve in a styleguide for naming. Took a screencap here.
A comments thread on an issue could go on forever. Perhaps we begin style-guiding terms use in a wiki? Working in Figma mockups has helped me identify many of the above problems by contextualizing them in the UI itself. I highly recommend those sorts of studies to help with an exercise like a style-guide.
username_14: Also, per the title of this issue, I just wanted to offer a correction: the currently discussed idea is retiring "VM" and instead using "qube" to also meet the needs of "domain", so it's not specific to VMs.
Instead of App VM, it would be App Qube. Likewise, Service Qube and Template Qube, even tho "Templates" and "Standalones" are acknowledged colloquial nomenclature.
username_9: - "Create Qubes VM" -> "Create qube"
- "Qubes Global Settings" -> "Global settings"
- "Qubes Update" -> "System update"
- "Qubes Template Manager" -> "Template manager"
- "Backup/Restore Qubes" -> "Backup and restore"
username_4: "Create Qube"
Nitpick perhaps, but there is a chance that the lowercase could introduce confusion. Anything can.
username_3: "System settings"
i like Nina's suggestion of "System settings", it also parallels "System update" better. also is more clear, as it is referring to the system rather than referring to a metaphor of a globe/earth to mean the computer.
otherwise they look good and seem more clear!
username_7: They are Global, in that they apply to everything.
People who are confused by "Qube Manager" won't understand why "System Settings" doesn't encompass networking, USB support and so on.
username_14: @username_7 In 4.2, it should. See #6898 😄
username_9: No strong opinion on "global" vs. "system." I'd be fine with either one. |
nager/Nager.Date | 777527680 | Title: County API
Question:
username_0: The holidays are resolved to county level. However, the counties are enumerated in ISO 3166-2 format:
```
{
"date": "2021-01-06",
"localName": "<NAME>",
"name": "Epiphany",
"countryCode": "DE",
"fixed": true,
"global": false,
"counties": [
"DE-BW",
"DE-BY",
"DE-ST"
],
"launchYear": 1967,
"type": "Public"
}
```
For further processing, a resolution from ISO 3166-2 token to name would be useful, e.g. `DE-BY` -> `Bayern`.
This could be implemented in an endpoint like `/Api/v2/CountyInfo/{countryCode}`, which lists a country and its counties with token and name.
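A rough sketch of how a client could use such an endpoint (TypeScript; the endpoint path and the `{ code, name }` response shape below are only my suggestion, not an existing API):
```ts
// Hypothetical client for the suggested /Api/v2/CountyInfo/{countryCode} endpoint.
// Both the endpoint and the response shape are assumptions for illustration.
interface CountyInfo {
  code: string; // ISO 3166-2 token, e.g. "DE-BY"
  name: string; // resolved name, e.g. "Bayern"
}

async function fetchCountyNames(countryCode: string): Promise<Map<string, string>> {
  const res = await fetch(`https://date.nager.at/Api/v2/CountyInfo/${countryCode}`);
  if (!res.ok) {
    throw new Error(`CountyInfo request failed with status ${res.status}`);
  }
  const counties: CountyInfo[] = await res.json();
  // Lookup table so the "counties" tokens of a holiday can be resolved to names.
  return new Map(counties.map((c) => [c.code, c.name]));
}

// Example: fetchCountyNames("DE").then((m) => console.log(m.get("DE-BY"))); // "Bayern"
```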
Answers:
username_1: For this we would again need a lot of translations for various languages. Maybe I'll add this logic to my [country project](https://github.com/nager/Nager.Country).
Status: Issue closed
|
bazaarvoice/HostedUIResources | 66938780 | Title: PHP notices for uninitialized variables assumed to have value
Question:
username_0: I am getting the following notices when using the PHP SEO SDK. I realize 'notices' themselves do not cause problems, but they are there to help write better code.
Notice: Undefined variable: bvparam in Base->_getPageNumber() (line 372 of ~/bvseosdk.php).
Notice: Undefined offset: 1 in Base->_getPageNumber() (line 373 of ~/bvseosdk.php).
Notice: Undefined variable: bvparam in Base->_getPageNumber() (line 376 of ~/bvseosdk.php).
If I am doing something wrong, please let me know.
Answers:
username_1: Thanks for reaching out. Our updated SEO integration docs have moved to our [Knowledgebase](http://knowledge.bazaarvoice.com/wp-content/conversations/en_US/KB/Default.htm#SEO/SEO_About.htm), so we are no longer updating SDK information here. If you require any integration assistance, please contact your implementation team or reach out to our support team via [Spark](http://spark.bazaarvoice.com) or <EMAIL>.
Closing due to age and SEO docs have moved.
Status: Issue closed
|
embroider-build/embroider | 1099616827 | Title: `unable to resolve package {addon-name}` when using HFTNB syntax to reference components in the same addon
Question:
username_0: repo to reproduce: https://github.com/username_0/test-hftnb-embroider
Ran into this build error when I use HFTNB syntax in an addon to reference other components / helpers in the same addon.
E.g.
```hbs
{{! test-hftnb-embroider/addon/components/foo-bar.hbs }}
<TestHftnbEmbroider$Lorem>
{{yield}}
</TestHftnbEmbroider$Lorem>
```
I have set `excludeNestedAddonTransforms` to `true`, and the `static*****` options to `true` as well ([ember-cli-build.js](https://github.com/username_0/test-hftnb-embroider/blob/main/ember-cli-build.js)).
Build output:
```
$ ember b --environment test
Building into /private/<KEY>embroider/72ccc5
Environment: test
⠇ building... [@embroider/webpack]assets by chunk 640 KiB (id hint: vendors)
asset chunk.952b5c45014cea2d5253.js 580 KiB [emitted] [immutable] [big] (id hint: vendors)
asset chunk.a946c09be6f023797849.js 60.2 KiB [emitted] [immutable] (id hint: vendors)
asset chunk.36103af21939696d2b1e.js 34.9 KiB [emitted] [immutable] (name: assets/test.js)
asset chunk.a313cbe39e682ad36e3f.js 23 KiB [emitted] [immutable] (name: assets/dummy.js)
Entrypoint assets/dummy.js 83.2 KiB = chunk.a946c09be6f023797849.js 60.2 KiB chunk.a313cbe39e682ad36e3f.js 23 KiB
Entrypoint assets/test.js [big] 675 KiB = chunk.a946c09be6f023797849.js 60.2 KiB chunk.952b5c45014cea2d5253.js 580 KiB chunk.36103af21939696d2b1e.js 34.9 KiB
runtime modules 6.74 KiB 13 modules
modules by path ../../node_modules/ 258 KiB 78 modules
modules by path ./ 8.3 KiB 11 modules
modules by path ../../../ 243 KiB
modules by path ../../../../../../../../../../Users/ypiao/ui/tests/test-hftnb-embroider/node_modules/@babel/runtime/helpers/esm/*.js 1.63 KiB 4 modules
../../../../../../../../../../Users/ypiao/ui/tests/test-hftnb-embroider/node_modules/qunit/qunit/qunit.js 242 KiB [built] [code generated]
../../../externals/require.js 108 bytes [built] [code generated]
modules by path ../../components/ 612 bytes
modules by path ../../components/*.js 346 bytes 2 modules
modules by path ../../components/*.hbs 266 bytes
../../components/foo-bar.hbs 1 bytes [built] [code generated] [1 error]
../../components/lorem.hbs 265 bytes [built] [code generated]
ERROR in ../../components/foo-bar.hbs
Module Error (from ../../../../../../../../../../Users/ypiao/ui/tests/test-hftnb-embroider/node_modules/thread-loader/dist/cjs.js):
unable to resolve package test-hftnb-embroider from $TMPDIR/embroider/72ccc5
Thread Loader (Worker 1)
unable to resolve package test-hftnb-embroider from $TMPDIR/embroider/72ccc5
at PackageCache.resolve (/Users/ypiao/ui/tests/test-hftnb-embroider/node_modules/@embroider/shared-internals/src/package-cache.js:29:21)
at CompatResolver.tryComponent (/Users/ypiao/ui/tests/test-hftnb-embroider/node_modules/@embroider/compat/src/resolver.js:411:77)
at CompatResolver.resolveElement (/Users/ypiao/ui/tests/test-hftnb-embroider/node_modules/@embroider/compat/src/resolver.js:576:26)
at enter (/Users/ypiao/ui/tests/test-hftnb-embroider/node_modules/@embroider/compat/src/resolver-transform.js:141:57)
at visitNode ($TMPDIR/embroider/72ccc5/node_modules/ember-source/vendor/ember/ember-template-compiler.js:11763:16)
at visitArray ($TMPDIR/embroider/72ccc5/node_modules/ember-source/vendor/ember/ember-template-compiler.js:11855:20)
at visitKey ($TMPDIR/embroider/72ccc5/node_modules/ember-source/vendor/ember/ember-template-compiler.js:11831:7)
at visitNode ($TMPDIR/embroider/72ccc5/node_modules/ember-source/vendor/ember/ember-template-compiler.js:11785:9)
at traverse ($TMPDIR/embroider/72ccc5/node_modules/ember-source/vendor/ember/ember-template-compiler.js:11896:5)
at preprocess ($TMPDIR/embroider/72ccc5/node_modules/ember-source/vendor/ember/ember-template-compiler.js:13445:9)
@ ../../components/foo-bar.js 2:0-37 3:36-44
@ ./tests/integration/components/foo-bar-test.js 1:0-54 3:9-11
@ ./assets/test.js 4:9-67
webpack 5.65.0 compiled with 1 error in 7009 ms
```
Answers:
username_1: If somebody wants to debug and fix this I will merge PRs, but please understand that `ember-holy-futuristic-template-namespacing-batman` is very, very far down my list of priorities.
That addon **tells you not to use it**, it was deliberately named with a silly name so people wouldn't use it in production.
username_2: After looking at the reproduction briefly, it seems that there is no way for this to ever resolve "itself":
https://github.com/embroider-build/embroider/blob/f7a4ad0e4d939446ba47c5a0a1adb4a4bb463b9d/packages/shared-internals/src/package-cache.ts#L8-L26
If you called `cache.resolve('some-addon-name', packageForSomeAddonName)`, it will throw an error.
Changing that code above to something like this might work (though @username_1 would have to chime in RE: the validity):
```js
let result = getOrCreate(cache, packageName, () => {
// the type cast is needed because resolvePackagePath itself is erroneously typed as `any`.
let packagePath = resolvePackagePath(packageName, this.basedir(fromPackage)) as string | null;
if (!packagePath) {
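// when a v2 addon references its own package name (as the HFTNB syntax above does), fall back to the package we started from instead of failing to resolve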
if (fromPackage.isV2Addon() && fromPackage.name === packageName) {
return fromPackage;
} else {
// this gets our null into the cache so we don't keep trying to resolve
// a thing that is not found
return null;
}
}
return this.get(dirname(packagePath));
});
```
username_2: Should be resolved by #1170 |
github-nakasho/astroph | 714479761 | Title: Deep Learning for Line Intensity Mapping Observations: Information Extraction from Noisy Maps
Question:
username_0: # Paper summary
Line intensity mapping (LIM) is useful for probing large-scale fluctuations in line emission from distant galaxies, but contamination from foreground and background emission and from noise is a problem. To address this, the authors develop conditional generative adversarial networks (cGANs). ApJL.
# Paper link
https://arxiv.org/abs/2010.00809
Status: Issue closed |
aws/aws-iot-device-sdk-cpp | 396747537 | Title: Disconnection and Reconnection handling in C++ SDK
Question:
username_0: Hi,
I have two questions
1. How do we turn off the keep alive operation?
When I set the keep alive interval to 0 or 1 seconds in the SampleConfig.json file, the program immediately enters disconnect mode even when the network connection is available. Since a minimum of 2 seconds needs to be given to the keep alive interval for proper operation, when the connection gets disconnected the disconnect callback does not kick in immediately, but gets triggered only after 1.5X the keep alive interval. This causes loss of the data generated by the IoT device in between the time the disconnection happens and the time the disconnect callback is called.
2. Where is the publish data queued when disconnection happens?
When I set the keep alive interval to 30 seconds, during disconnection the data generated by the IoT device gets queued up for up to 30 seconds. If a reconnection happens within that time period, the queued-up data is pushed to the cloud. But I am not able to identify the container that queues up the data locally when the disconnection happens.
Please let me know your thoughts on what needs to be done to establish a smooth transition when disconnection happens, without any loss of data. |
pingcap/dm | 568113859 | Title: update DM-ansible for new HA model
Question:
username_0: ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
DM-ansible does not support managing the new HA version of DM.
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
update DM-ansible for new HA model
Status: Issue closed |
aspnet/AzureSignalR-samples | 351879801 | Title: Realtime Sign-in Example using Azure SignalR Service - SignIn Hub Code
Question:
username_0: Please post a full sample, not only a function.json that references a C# function DLL. Thanks in advance.
Answers:
username_1: You can find the source code at [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/RealtimeSignIn/function). Specifically, the function's source code is at [here](https://github.com/aspnet/AzureSignalR-samples/blob/master/samples/RealtimeSignIn/function/SignInFunction.cs).
username_2: The source code of the Hub is missing.
```json
"scriptFile": "../RealtimeSignIn.dll",
"entryPoint": "RealtimeSignIn.SignInFunction.Run"
}
```
I want the source code of that dll: RealtimeSignIn.dll
username_1: There is no C# `Hub` class in this example.
The page allows anonymous access. The `SignIn` function will generate an authentication token for all clients in [here](https://github.com/aspnet/AzureSignalR-samples/blob/master/samples/RealtimeSignIn/function/SignInFunction.cs#L37-L41). Each client will use the token to authenticate with Azure SignalR Service in [here](https://github.com/aspnet/AzureSignalR-samples/blob/master/samples/RealtimeSignIn/content/index.html#L217). Also, each client will listen for updates on the broadcast channel in [here](https://github.com/aspnet/AzureSignalR-samples/blob/master/samples/RealtimeSignIn/content/index.html#L210).
username_2: Where is that "signin" specified in here?
`await signalR.SendAsync("signin", "updateSignInStats", stats.TotalNumber, stats.ByOS, stats.ByBrowser);`
Somewhere that hub has to be created or black magic is going on...
username_1: The `signin` is a logical hub in this sample. It will be created and maintained within Azure SignalR Service.
Because in the Serverless scenario, clients only listen for messages. So there is no need to have a web server and a C# `hub` hosted in the server.
From Azure SignalR Service perspective, the hub name is included in the URL, which is generated [here](https://github.com/aspnet/AzureSignalR-samples/blob/master/samples/RealtimeSignIn/function/SignInFunction.cs#L39).
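To make that concrete, here is a minimal TypeScript sketch of the client side, assuming the `@microsoft/signalr` client package and a function endpoint that returns the service URL (with the `signin` hub embedded) plus an access token; the endpoint name and response shape are illustrative, not taken from the sample:
```ts
import * as signalR from "@microsoft/signalr";

// Assumed shape of what the Azure Function hands back to the browser.
interface SignInInfo {
  serviceUrl: string;  // Azure SignalR Service client URL, hub name included
  accessToken: string; // token signed with the service access key
}

async function listenForSignInStats(functionEndpoint: string): Promise<void> {
  const res = await fetch(functionEndpoint);
  const info: SignInInfo = await res.json();

  const connection = new signalR.HubConnectionBuilder()
    .withUrl(info.serviceUrl, { accessTokenFactory: () => info.accessToken })
    .build();

  // Matches the "updateSignInStats" target broadcast on the "signin" logical hub.
  connection.on("updateSignInStats", (total: number, byOS: unknown, byBrowser: unknown) => {
    console.log("sign-ins:", total, byOS, byBrowser);
  });

  await connection.start();
}
```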
username_2: So I could name the logical hub `mylogicalazure-signalr-hub` and it will work, too? Ok, thanks @username_1 for your intense clarifications. Well, serverless sometimes makes my head explode...
username_1: Yes, correct.
You are welcome. We are happy to help out. 😄
username_2: Found this article, which describes it all very well: https://www.c-sharpcorner.com/article/azure-signalr-messaging-with-net-core-console-app-server-and-client/
username_1: That's great. Do you get all you need? If so, please close the issue.
Status: Issue closed
username_0: @username_1 Thanks for getting back. For others, it would help if the official Microsoft documentation could be enhanced with a proper description like the provided article. Currently the documentation is very thin for Azure SignalR Service. I know you guys are still ahead of the GA rollout, but please forward the request to the documentation team.
username_1: @username_0 Thanks for the feedback!
We are aware of that our documentation is not in the perfect place yet. We will try our best to enrich documentation for GA. |
zaragoza-sedeelectronica/zaragoza-sedeelectronica.github.io | 69136145 | Title: Consistency between SOLR and SPARQL IDs
Question:
username_0: SOLR returns event IDs in the format:
`"id": "acto-117081",`
SPARQL returns the IDs as:
`"id": "117081",`
I suggest using exactly the same ID (just the number, as an integer) so that we reusers don't have to parse things by hand if we use both APIs in the same application.
Thanks, regards!
Answers:
username_1: In SOLR, records must have an identifier, and since things other than activities are stored there too, we decided to add a prefix to distinguish them and avoid repeated identifiers (a facility with id=1234 and an event with id=1234 could both exist).
username_0: And could that internal ID be processed before the API returns it?
username_1: That internal ID is not used in the API; it is only used in SOLR, and it cannot be processed from there because the index is published directly.
username_0: At least document it at http://www.zaragoza.es/ciudad/risp/camposindizados.htm#agenda ...
username_0: @username_1 About what you mentioned on April 20... are facilities and events persisted as if they were the same thing, sharing IDs? 😳
username_1: They don't share IDs, because we add the prefix to distinguish them: event 1234 is stored in SOLR with id=acto-1234 and facility 1234 is stored in SOLR with id=recurso-1234
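For reusers who need to join the two APIs, a small normalisation helper along these lines (TypeScript sketch; it simply assumes the `<type>-<number>` pattern described above) is enough to map SOLR ids onto the SPARQL ones:
```ts
// Strip the "acto-" / "recurso-" style prefix from a SOLR id and return the bare
// numeric id used by the SPARQL endpoint. Assumes the "<type>-<number>" pattern.
function normalizeSolrId(solrId: string): number {
  const match = solrId.match(/^[a-z]+-(\d+)$/i);
  if (!match) {
    throw new Error(`Unexpected SOLR id format: ${solrId}`);
  }
  return Number(match[1]);
}

// normalizeSolrId("acto-117081")  === 117081
// normalizeSolrId("recurso-1234") === 1234
```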
Status: Issue closed
|
s3bk/tuple | 248163396 | Title: Mapping over a tuple
Question:
username_0: Not sure if this is possible with Rust macros at all, but I would like to create a tuple by calling a method on the elements of an existing tuple. So the expanded code would look lile
```rust
(self.0.foo(), self.1.foo(), self.2.foo(), ...)
```
Answers:
username_1: This is definitely possible, however it requires to specify the number of elements.
This could be done via `map!(old_tuple, 5, foo)` or `map5!(old_tuple, foo)`.
If all elements have the same type, then you can already use `old_tuple.map(|x| x.foo())`
Status: Issue closed
username_0: Ah I see, thank you! |
commercetools/nodejs | 269596133 | Title: [Discount code Import]: Reduce maximum batch size
Question:
username_0: ### Description
<!-- If you're describing a bug, please let us know the steps to reproduce your problem. -->
Reduce the maximum number of concurrent imports that can be executed
Related to [this](https://github.com/commercetools/express-impex/issues/349)
Status: Issue closed |
haskell/hackage-server | 176230225 | Title: CDN issue causing 00-index.tar to be out of sync with available tarballs
Question:
username_0: I've experienced this personally in running the all-cabal-hashes mirror, and have received user reports. Relevant links:
* https://twitter.com/username_0/status/773847008721395712
* https://github.com/username_0/yaml/issues/97
The idea is: you download the `00-index.tar.gz` file from Hackage (e.g., via `cabal update`), and it includes a `.cabal` file for a certain package/version combo (like `yaml-0.8.18.6.cabal`). But when you try to download `yaml-0.8.18.6.tar.gz` from Hackage, you get a 404 for a while, which eventually corrects itself. I've experienced situations where two different build servers - both in the US - returned a 404, while downloading from my house in Israel worked. This leads me to believe it's a regional caching issue with the CDN.
Just a complete guess here: perhaps it's worth disabling CDN caching for non-200 responses?
Answers:
username_1: CC'ing @davean who manages our Fastly CDN
More generally, we may be able to reduce the time-window in which a client can experience an inconsistent view of the Hackage repository *temporarily*, but we will never be able to fully eliminate that window, unless the CDN or any cloud storage provides stronger guarantees. IOW, there's no way around clients needing to be able to cope with transient object access failures as long as the communication path is made up of potentially unreliable components. When `cabal` is used with `hackage-security`'s object retrieval logic this is taken into account to some degree already.
username_2: Yes, can't be eliminated completely but reduced caching of 404s sounds like a good idea.
username_0: Just to give a little more detail of what I'm doing in case it helps: instead of running through the full mirroring scripts every 1/5/10 minutes, I'm moving over to a watch script which respects the `ETag` header, so that less bandwidth/CPU/disk access is used. You can see this script at:
https://github.com/fpco/hackage-mirror/blob/383666ff71c1fcafecd2d0a1a72a47e75d5fcda3/main/Watcher.hs
Due to this issue, I've included an arbitrary 10-run forced synchronization in case the index.tar.gz file got out-of-sync with the sdist tarballs.
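The underlying idea is just an ETag-conditional GET loop: re-download and re-sync only when the index actually changed. A minimal TypeScript sketch of that pattern (not the actual Haskell watcher; URL handling and the polling interval are illustrative):
```ts
// Illustration of the ETag-conditional polling pattern the watcher script uses.
// The real mirroring logic lives in the Haskell Watcher.hs linked above.
async function pollIndex(url: string, onChanged: (body: ArrayBuffer) => Promise<void>): Promise<void> {
  let lastETag: string | undefined;

  while (true) {
    const headers: Record<string, string> = {};
    if (lastETag) {
      headers["If-None-Match"] = lastETag;
    }

    const res = await fetch(url, { headers });
    if (res.status === 200) {
      lastETag = res.headers.get("etag") ?? undefined;
      await onChanged(await res.arrayBuffer()); // index changed: trigger a sync
    } else if (res.status !== 304) {
      console.warn(`unexpected status ${res.status} for ${url}`); // 304 = unchanged
    }

    // wait a minute before the next conditional check
    await new Promise((resolve) => setTimeout(resolve, 60_000));
  }
}
```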
username_0: One last detail: it may seem like it would be reasonable to just confirm that all tarballs are available instead of using the arbitrary 10-run cutoff. Unfortunately, there's another issue that prevents that from being possible: #436. Since there are some tarballs which legitimately fail the download (because they have been deleted for copyright purposes, but not removed from the index), and other tarballs that fail due to CDN caching issues, I don't see a way to detect that we should ignore the `ETag` and try synchronizing tarballs again.
username_1:
```
< HTTP/1.1 410 Gone
< Server: nginx/1.8.1
< Date: Tue, 13 Sep 2016 08:52:15 GMT
< Content-Type: text/html
< Content-Length: 158
< Connection: keep-alive
<
<html>
<head><title>410 Gone</title></head>
<body bgcolor="white">
<center><h1>410 Gone</h1></center>
<hr><center>nginx/1.8.1</center>
</body>
</html>
* Connection #0 to host hackage.haskell.org left intact
```
username_0: I never noticed that. That could be very useful, thanks!
username_2: @username_0 since you're running a mirror, it'd be perfectly reasonable to bypass the CDN entirely. Then you get to choose if/how to respect the cache-control hints etc. If you'd like to do that, let us know and we can give you the details (ie IP address etc).
Also, if you'd like to take part in the public mirroring of hackage (ie serving in the same original format) then you may like to use https://github.com/username_1/hackage-mirror-tool and optionally have your mirror added to the public mirror list http://hackage.haskell.org/mirrors.json . If so, just let us know.
username_2: Update: @username_0 has set up a new mirror and it is now listed as an official public mirror in the upstream http://hackage.haskell.org/mirrors.json
Since the out-of-sync caching/proxying issue does not at appear to be a problem for `cabal` clients at the moment then we'll close this for now. The hackage-security client code has logic to cope with caching proxies but if this proves not enough for our CDN then we can switch things around so that we use our mirrors as primaries for clients rather than only as secondary / backups.
Status: Issue closed
username_0: Just to give one last note on all of this: I put a new page on stackage.org to track the relative up-to-dateness of Hackage vs mirrors and Git repos, you can see it at:
https://ci.stackage.org/status/mirror
I've configured the page to return a status 500 if the lag time is ever more than an hour, so using normal HTTP monitoring tools can give an alert if the mirroring functionality ever stops working.
username_1: @username_0 that's coincidentally similar to something half-finished I've been hacking on as well (sans the Git repos status), since we needed that for haskell.org too... except less html'y: just a plain-text .cgi script which validates the TUF meta-data for freshness :-)
username_0: If it would be helpful to add a few more URLs to that table, just say so. It's no big deal for me too track the last-modified of a few more files.
username_1: @username_0 it may be interesting to add "http://objects-us-west-1.dream.io/hackage-mirror/01-index.tar.gz" there, as well as the `../timestamp.json` files (since that one's updated last by my tool)
username_0: Cool, commit pushed, should be live in a few minutes.
username_0: The mirror I've been running went down about 8 hours ago (see: https://github.com/commercialhaskell/all-cabal-hashes/issues/13). AFAICT, the problem is that the privately provided IP address for the upstream server (behind the CDN) changed. I've switched the mirror to use hackage-origin.haskell.org, is that correct?
username_3: That should be correct, yes. |
pallymore/wkhtmltopdf-binary-edge | 333710807 | Title: update to 0.12.5
Question:
username_0: 0.12.5 is out :)
Is it possible to update the gem to this version?
It solves a lot of issues with Mac OS :)
Thanks!
Answers:
username_1: +1
username_2: Uhh sorry, somehow I missed this one.
Working on this right now
username_0: thanks !
Status: Issue closed
|
wenzhixin/bootstrap-table | 191649163 | Title: Do I have to include bootstrap's CSS to use bootstrap-table?
Question:
username_0: http://bootstrap-table.username_1.net.cn/zh-cn/getting-started/
I only included:
<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/bootstrap-table/1.11.0/bootstrap-table.min.css">
but the data does not display. Do I also need to include bootstrap's CSS?
Answers:
username_1: Yes, you do, because bootstrap-table is developed on top of bootstrap.
Status: Issue closed
|
sdqali/hugo | 440409807 | Title: Accessing Environment Variables in Gradle
Question:
username_0: Comments for [Accessing Environment Variables in Gradle](https://username_0.in/blog/2013/10/01/accessing-environment-variables-in-gradle/index.html)
Answers:
username_1: Where to set the variables?
username_0: @username_1 You would set the env vars in a place where Gradle can access them. For example, in your shell:
```
export FOO=bar
./gradle <some-target>
```
haskell-infra/hackage-trustees | 385389460 | Title: Relax repa upper bound on QuickCheck
Question:
username_0: I attempted to email the repa mailing list (http://groups.google.com/d/forum/haskell-repa) requesting this constraint bump a while ago, but I don't think my post was ever approved by the moderators to go out to the list.
Repa currently has the constraint `QuickCheck <2.12`. Can this please be bumped to `QuickCheck <2.13`?
cc <EMAIL>
Answers:
username_0: This was handled by @tmcdonell. Cheers!
Status: Issue closed
|