repo_name | issue_id | text
---|---|---|
qiscus/qiscus-chat-sdk-flutter | 581986104 | Title: Phase 1 core functionality
Question:
username_0: - [ ] `update_comment_status`
- [ ] `get_nonce`
- [ ] `block_user`
- [ ] `unblock_user`
- [ ] `get_user_data`
- [ ] `update_profile`
- [ ] `get_user_list`
- [ ] `register_device_token`
- [ ] `unregister_device_token`<issue_closed>
Status: Issue closed |
unoplatform/uno | 1170799922 | Title: `FlipView` does not flip on Skia
Question:
username_0: ### Current behavior

### Expected behavior

### How to reproduce it (as minimally and precisely as possible)
`FlipView` samples in Samples app
### Workaround
-
### Works on UWP/WinUI
Yes
### Environment
Uno.UI / Uno.UI.WebAssembly / Uno.UI.Skia, Uno.WinUI / Uno.WinUI.WebAssembly / Uno.WinUI.Skia
### NuGet package version(s)
Latest dev
### Affected platforms
Skia (WPF), Skia (GTK on Linux/macOS/Windows), Skia (Tizen)
### IDE
Visual Studio 2022
### IDE version
_No response_
### Relevant plugins
_No response_
### Anything else we need to know?
_No response_ |
pomeloJiang/BlogComments | 318262809 | Title: Android Framework: Activity Startup Flow (Part 1) | 柚子 | pomeloJiang
Question:
username_0: https://pomelojiang.github.io/android_framework_start_activity_1.html#more
The analysis in this article is based on the Android 8.1 source code. The topic will be explained in three parts. Part 2: Android Framework: Activity Startup Flow (Part 2); Part 3: Android Framework: Activity Startup Flow (Part 3).
A sequence diagram is inserted at the start of the article, so you can see the conclusion before walking through the process.
Zygote: in the Android system, ActivityManagerService is responsible for creating new processes for applications, while the ActivityManagerService process itself is started by Zygote. Translated literally, "Zygote" means a fertilized egg. … |
auth0-extensions/canirequire | 918928105 | Title: Add Segment NodeJS module
Question:
username_0: ## ✏️ Request Form
### Package info
Package Name: analytics-node
Package Version: 4.0.1
NPM Url: https://www.npmjs.com/package/analytics-node
### Reason for Request
I would like this package added because we want to integrate analytics via Segment and the Auth0 Rules are one of the prime locations to make the `identify()` call for the Segment API<issue_closed>
Status: Issue closed |
WayofTime/BloodMagic | 183188350 | Title: Ritual FPS drops
Question:
username_0: #### Issue Description:
Looking directly at the Master Ritual Stone for some activated rituals causes fps drops of varying severity whilst holding Ritual Tinkerer in Main hand. (all tested with fps limit removed, averaging 180-200 fps normally (occasionally getting 300-400 when looking up or down). based on debug overlay)
Insignificant (>80fps, decided to mention these, but not of importance as they don't affect fps much. unless you're used to >80fps):
Suppression
Gathering of the Forsaken Souls
Minor (50-60fps, usually unnoticeable):
Well of Suffering
Aura of Expulsion
Crash of the Timberman
Moderate (~20-30fps):
Feathered Knife
Regeneration
Hymn of Siphoning
Severe (<10fps):
Satiated Stomach
#### What happens:
Significant fps drop
#### What you expected to happen:
Stable fps
#### Steps to reproduce:
1. Create rituals mentioned in description above
2. Look at MRS with Ritual Tinkerer in Main hand
3. Monitor framerate
____
#### Affected Versions (Do *not* use "latest"):
- BloodMagic: 2.1.0-65
- Minecraft: 1.10.2
- Forge: 12.18.1.2092
Answers:
username_1: I didn't believe you. Went to test it.

Status: Issue closed
|
kpumuk/meta-tags | 47556802 | Title: Remove :author (Google has discontinued support)
Question:
username_0: Google no longer supports Google+ Authors, so you can remove the code and demo for it.
https://support.google.com/webmasters/answer/6083347?rd=1
Answers:
username_1: Also, the documentation has two use cases for `author` and doesn't seem to be returning what was expected.
1. set_meta_tags author: "http://yourgplusprofile.com/profile/url"
# <link rel="author" href="http://yourgplusprofile.com/profile/url" />
2. set_meta_tags author: [ "<NAME>", "<NAME>" ]
# <meta name="author" content="<NAME>"/>
# <meta name="author" content="<NAME>"/>
The second one is not working as expected. Instead of two meta tags, it creates a single link with both names in one string:
<link rel="author" href="<NAME> <NAME>" />
Status: Issue closed
username_2: Resolved in https://github.com/username_2/meta-tags/pull/101 |
SelfKeyFoundation/Identity-Wallet | 496955372 | Title: Add link on document name - usability fix
Question:
username_0: Under the SelfKey profile, to view the content of an added document you currently have to click on the edit icon and then on the name of the uploaded document.
For better usability, we need to add a link directly on the document name in the listing table -> https://gyazo.com/db9c24115e271375be6f6e0ce69fe344
Answers:
username_1: @username_0 Could you please QA this ticket?
username_2: Looks good. Pass
http://recordit.co/R9fL2Lrk2B
Status: Issue closed
|
yiisoft/yii2 | 544560931 | Title: Can yii2 support yar rpc framework?
Question:
username_0: In a Controller:
```php
$server = new \Yar_Server(new \micro\models\testModel());
$server->handle();
```
and I get this headers already sent error.
I noticed #14329
### Additional info
| Q | A
| ---------------- | ---
| Yii version | 2.0.31
| PHP version | 7.4
| Operating system | macOS
Answers:
username_1: Please attach full stacktrace and show action's code
username_0: @username_1 I updated the problem description.
username_1: If you don't want to send any headers, you may simply set [`Yii::$app->response->isSent = true`](https://www.yiiframework.com/doc/api/2.0/yii-web-response#$isSent-detail) or call [`Yii::$app->end()`](https://www.yiiframework.com/doc/api/2.0/yii-base-application#end()-detail) at the end of the action code.
username_2: @username_1 is correct. Yar seems to deal with headers and output itself, so you have to adjust Yii so it doesn't try to add headers at a point where that is no longer possible.
Status: Issue closed
username_0: @username_1 Thank you so much! The result code:
```php
<?php
namespace micro\controllers;
use Yii;
use yii\rest\Controller;
class RpcController extends Controller
{
public $enableCsrfValidation = false;
public function actionIndex()
{
$server = new \Yar_Server(new \micro\models\testModel());
$server->handle();
Yii::$app->response->isSent = true;
}
}
``` |
iuap-design/tinper-bee | 546670761 | Title: The table reference inside a modal is not destroyed; the value assigned via initialValue has no effect, and the previously selected value keeps showing
Question:
username_0: ## 环境及版本信息
- `tinper-bee` 版本号:
^2.1.5
- 若使用单个组件,请标明该组件版本号:
"ref-multiple-table": "2.0.1",
- 当前项目中`react`的版本号:
16.6.3
- 所使用的操作系统:
win10
- 所使用的浏览器:
google
## 您所在的领域、行业或项目组:
香港海外研发部
项目:icbc工商国际
## 描述这个问题:
模态框里面有个表参照,初始化值为A,修改参照值为B,取消模态框,再打开,参照值显示的值应该是A才对,但是显示的B。
1。模态框再打开的时候会初始化模态框里的表单数据
2。打印出初始化后参照的initialValue值,是A,但是仍然显示的是之前修改的值B。
3。 其他的非参照表单,在初始化默认数据后,并无这种问题,会正常销毁或者显示。
### 1、组件相关代码
<!-- 请详细描述问题复现步骤,并把代码粘贴到下面的 demo 区域 -->
代码粘贴区域:
```
<FormItemPro
  errorMsg={getFieldError('mainAccount')}
>
  <Label className="label-basic"><FormattedMessage id="js.fatca.0022" defaultMessage="主账户" /></Label>
  <RefCLientPerson
    {...getFieldProps('mainAccount', {
      initialValue: JSON.stringify({'refname':fatcaInfo.mainAccountName, 'refpk': fatcaInfo.mainAccount}),
      // rules: [{
      //   required: true, message: this.formatMessage({id:"js.legalPerson.related.0037", defaultMessage:"请选择相关法人"}),
      // }],
    })}
    pkclient={this.props.clientInfoId}
  />
</FormItemPro>

```
### 2. Error message
<!-- Please describe the problem in detail -->
<!-- Screenshot -->
## Current behavior: effect (screenshots welcome) and description of actions
<!-- Please describe the current behavior in detail so we can reproduce and locate the problem -->
<!-- Screenshot -->
## Expected behavior:
Please handle this as soon as possible, or show me a workaround; the project goes to production on Friday and the customer needs to see it. Thank you.
Answers:
username_0: </RefMultipleTableWithInput>
)
}
```
username_0: 
I just found a workaround, a bit of a detour: when closing the modal, call this.props.form.resetFields().
That clears all the values, and then the initial values can be assigned again.
If there is a better approach, please let me know.
username_1: Hi, I looked at your usage. Since you pass a url, we need to confirm whether you are using the pap-refer package or the ref-multiple-table package, because the usage shown in the demo code should be pap-refer.
username_0: import RefMultipleTableWithInput, { RefMultipleTable } from 'pap-refer/lib/pap-common-table/src/index';
It is this one.
username_1: The cause of the problem: when the modal is closed, the reference is not actually destroyed; it always exists and therefore keeps the previously selected value. The reference value only updates when a different value is passed in, but here A is passed in both before and after.
Why this.props.form.resetFields() changes the value: after initialization the reference value is A; calling resetFields() effectively sets it to '', and when the modal is opened again A is assigned once more, so the reference displays A.
One available workaround is not to use getFieldProps's initialValue, and instead pass value directly to the reference component, changing value dynamically to update the reference's value.
username_0: Understood, thank you! |
haxegon/haxegon | 300654134 | Title: Simplify the assets guide
Question:
username_0: It's almost 10kb. Just put the basics in it, and direct readers to the haxegon website for more info.
Answers:
username_0: Also, while you're at it: fix the loadcsv bug - there are two loadcsv functions, and I'm not sure I'm using the right one.
Status: Issue closed
|
retest/retest-model | 362855506 | Title: Improve DefaultValueFinder interface
Question:
username_0: The `DefaultValueFinder` interface currently offers a single method:
`getDefaultValue( IdentifyingAttributes, String ): Serializable`
While working on [this branch](https://github.com/retest/recheck-web/tree/feature/fix-DefaultValueFinder), I wonder if this rather should be something like:
`isDefaultValue( ... ): boolean`
This is what we are doing in `recheck-web`'s `DefaultValuesProvider`, which doesn't use the standard mechanism for default values. But even if we did, `AttributesDifferenceFinder` is currently not able to handle defaults the way we need them in `recheck-web`—more or less an indication of flaws in the interface and the way `AttributesDifferenceFinder` uses it.<issue_closed>
Status: Issue closed |
aws/aws-parallelcluster | 407367428 | Title: ParallelCluster compute nodes not scaling down when cloud formation stack is in status 'UPDATE_COMPLETE'
Question:
username_0: **Environment:**
- AWS ParallelCluster version aws-parallelcluster-2.0.2
- OS: alinux
- Scheduler: SGE
- Master instance type: m5.2xlarge
- Compute instance type: r5.2xlarge
**Bug description and how to reproduce:**
I noticed that after making any updates to the CloudFormation stack that was created for the parallel cluster, the scale-down functionality handled by nodewatcher on the compute nodes no longer works. Even if the nodes have had no jobs for the inactivity-threshold period (i.e. 10 min), the compute nodes are still not spun down. This results in a loss of valuable cash. Looking at the nodewatcher logs, it always reports stack creation completion as False.
2019-02-06 12:45:07,040 INFO [nodewatcher:main] parallelcluster-fs-galprod1805-1a creation complete: False
I believe I have found the cause, though I am not sure of the reasoning behind coding it this way. Looking at nodewatcher.py, the function stackCreationComplete only ever returns true if the stack is at status CREATE_COMPLETE. However, if anyone makes an update to their stack, the status changes to UPDATE_COMPLETE, breaking this functionality:
```
def stackCreationComplete(stack_name, region, proxy_config):
log.info('Checking for status of the stack %s' % stack_name)
cfn_client = boto3.client('cloudformation', region_name=region, config=proxy_config)
stacks = cfn_client.describe_stacks(StackName=stack_name)
log.info('Status %s' % stacks['Stacks'][0]['StackStatus'])
return stacks['Stacks'][0]['StackStatus'] == 'CREATE_COMPLETE'
```
I would recommend this function be changed to handle this situation. We often make changes to the stack, for example changing the compute node type. Possibly change the return in that function to the following (this is what I have done as a temporary patch):
```
return stacks['Stacks'][0]['StackStatus'] == 'CREATE_COMPLETE' or stacks['Stacks'][0]['StackStatus'] == 'UPDATE_COMPLETE'
```
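A minimal sketch of that patch as a standalone helper (the function and constant names here are hypothetical, not the actual nodewatcher code):

```python
# Hypothetical helper factoring out the status check, so a stack that has
# been updated (UPDATE_COMPLETE) is treated the same as a freshly created
# one (CREATE_COMPLETE).
COMPLETE_STATUSES = frozenset({"CREATE_COMPLETE", "UPDATE_COMPLETE"})

def stack_operation_complete(stack_status):
    """Return True once the CloudFormation stack has finished creating or updating."""
    return stack_status in COMPLETE_STATUSES
```

stackCreationComplete would then end with `return stack_operation_complete(stacks['Stacks'][0]['StackStatus'])`.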
Answers:
username_1: This has been fixed, just not released yet:
https://github.com/aws/aws-parallelcluster-node/blob/develop/nodewatcher/nodewatcher.py#L159
username_0: Ok, perfect. Is there an easy way I can get my compute nodes to use this version versus the current one?
username_1: yep, clone the repo and update the cluster to use a custom node package.
See: https://aws-parallelcluster.readthedocs.io/en/latest/custom_node_package.html
Note, this will only apply to newly launched compute nodes. You can scale the asg to 0 nodes, then back up. Basically:
```
pcluster stop [your_cluster]
# wait a few mins for scale down
pcluster start [your_cluster]
```
Status: Issue closed
username_1: Resolving. Feel free to re-open |
kubernetes/website | 320723530 | Title: Issue with k8s.io/docs/concepts/api-extension/custom-resources/
Question:
username_0: <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [ ] Feature Request
- [ ] Bug Report
**Problem:**
"Comparing ease of use" table is broken
**Proposed Solution:**
**Page to Update:**
https://kubernetes.io/...
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
Answers:
username_1: I think this is a duplicate of #8271.
username_0: It very well may be, but it is still broken.
Right now (in Safari on Mac):

username_2: Is this still an issue? The table renders correctly for me on Firefox.
username_3: Confirmed the table works on Safari now.
username_3: /close
username_0: Now OK. Thanks! |
tensorflow/agents | 814772575 | Title: metric_utils.eager_compute() executes very slowly
Question:
username_0: I wrote a reinforcement learning code using tf_agents at https://github.com/username_0/rl-tfagents/blob/main/tfagent_dqn.py. This code is very similar to the example code of https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/examples/v2/train_eval.py .
My code has the following setup:
+ uses python3.7, nvidia driver: 450.80.02, cuda 11.0
+ runs inside a docker container with tensorflow/tensorflow:2.4.0-gpu as base image
+ implements DQN to solve Breakout:v4 environment
+ uses multiple parallel environments for training and evaluation
+ the agent training loop is interleaved with evaluation
+ evaluation is performed as follows
```python3.7
metric_utils.eager_compute(eval_metrics, eval_tf_env, eval_policy, eval_num_episodes, train_step, eval_summary_writer, 'Metrics - Evaluation')
```
Problem:
+ The training runs fast and utilises up to 32% of the GPU.
+ However, during the evaluation period, the GPU utilisation drops to 6% and the evaluation runs very slowly.
Questions:
+ Why is `metric_utils.eager_compute()` very slow and why does it not utilise the GPU?
+ How can I perform evaluation of the trained agent policy at a faster speed by utilising GPU?
Answers:
username_1: One possibility is that during evaluation you're constantly going back and forth with the CPU to run your environment, then sending this data back to GPU! In contrast, when training we can send all the time frames to the GPU at once.
For this reason we recommend that for eval/data collection you use the CPU only. The cost of copying data back and forth from GPU to CPU for each time step slows everything down. It's only useful to use the GPU if you have an extremely complicated/large per-step model and the cost of doing compute on CPU overshadows the cost of CPU<->GPU transfer for each frame.
Let us know if that helps the speed. I'll close for now, but please reopen if you feel this doesn't address your question!
Status: Issue closed
|
appirio-tech/connect-app | 352892836 | Title: Remove delivery milestone duration (timeline ends)
Question:
username_0: Remove delivery milestone duration (timeline ends). Once the work is done we do a delivery as a single point in time, and the customer has no further option to extend/expand the request for fixes. The delivery literally marks the end of the phase and provides a link (or links) for the customer to download their artifacts/deliverables.
Continued from #2301
Answers:
username_0: @username_1 @username_2 I didn't understand this requirement. Our delivery milestone is dynamic, i.e. it renders itself immediately after `final-designs` in a design phase, with the option to either accept the designs or request final fixes. Only after the user chooses one of these options do we know where to move in the timeline. If the user chooses to request fixes, we move backward and enable the `final-fix` milestone instead, disabling the delivery milestone; if the user chooses to accept the designs, we keep the delivery milestone active, but the manager has to add the links to the final deliverables, so we keep this milestone open until the manager adds the links. So, in a nutshell, we need a valid duration for this milestone to be rendered correctly. Right now we have only 1 day (the minimum duration for a milestone in our system) as the default duration for all types of timelines, and I am not seeing any side effect of it.
username_1: The delivery milestone should not have a duration as it will expand the phase duration? Or we have a standard 1-3 days for code delivery? I'm not sure if delivery blocks any of the phases that follow.
Deferring to @username_2
username_2: Hi @username_0 - I think your explanation above makes sense. Let's leave the delivery milestone as-is with a default duration of 1 day.
Status: Issue closed
username_2: Remove delivery milestone duration (timeline ends). Once the work is done we do a delivery as a single point in time, and the customer has no further option to extend/expand the request for fixes. The delivery literally marks the end of the phase and provides a link (or links) for the customer to download their artifacts/deliverables.
Continued from #2301
Aha! Link: https://topcoder.aha.io/features/TCCONNECT-379
Status: Issue closed
username_0: Thanks @username_2 . Closing the issue. |
rigetti/pyquil | 421127875 | Title: Observable expectation value estimation with readout error mitigation is exponentially slow
Question:
username_0: Currently readout correction relies on readout symmetrization to work properly. The only symmetrization method currently provided is exhaustive symmetrization -- where all arrangements of bit flips are measured for any given observable that is to be measured. This means that, for an observable with `n` qubits, we need to run `2**n` different measurements to get a symmetrized expectation value.
This is particularly pathological when running something like DFE, which has constant overhead wrt number of qubits, but gets exponentially slower when using readout error mitigation.
Exhaustive symmetrization is not an altogether bad idea -- it is the more efficient way to symmetrize small programs. So we need this option to exist.
The fix would be to add another method to symmetrize expectation values based on random sampling of bit flips for the readout symmetrization.
Yet another improvement would be to have symmetrization happen at lower levels of the stack, see e.g., rigetti/qvm#52
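To make the scaling argument concrete, here is a sketch (not pyquil's actual API; the function names are hypothetical) contrasting exhaustive flip patterns, which grow as 2**n, with random sampling, which is constant in n:

```python
import random

def exhaustive_flip_patterns(n_qubits):
    # One pattern per bitstring: 2**n_qubits measurement settings.
    return [[(i >> q) & 1 for q in range(n_qubits)]
            for i in range(2 ** n_qubits)]

def sampled_flip_patterns(n_qubits, n_samples, seed=None):
    # A fixed number of randomly drawn flip patterns, independent of n_qubits.
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_qubits)]
            for _ in range(n_samples)]
```

For small observables the exhaustive list is the more efficient choice; the sampled variant is what keeps protocols like DFE from picking up an exponential overhead.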
Answers:
username_0: Note that this is the default behavior! |
nfischer/rainbows-lang | 374828506 | Title: Better binary tools
Question:
username_0: Type inferencer (`bin/rain-infer.js`)
- [ ] should accept a `rain` file as stdin
- [ ] should optionally output a type dictionary to stdout
- [ ] (maybe) optionally pretty-print JSON
- [ ] should do something useful if there's no existing `raint` type dictionary
Interpreter (`bin/rain-interp.js`)
- [ ] should do something useful if there's no existing `raint` type dictionary |
OSGeo/gdal | 859673163 | Title: OGR GML Reader : error on CircleByCenterPoint with uom in kilometers
Question:
username_0: Parsing this piece of GML (an extract from Aviation AIXM data) with **GDAL 3.1.4** generates a wrong geometry:
```xml
<aixm:Surface srsName="urn:ogc:def:crs:EPSG::4326" gml:id="id.98dc2683-1bd8-4a1d-882f-014288ba51a7">
  <gml:patches>
    <gml:PolygonPatch interpolation="planar">
      <gml:exterior>
        <gml:Ring>
          <gml:curveMember xlink:type="simple">
            <gml:Curve srsName="urn:ogc:def:crs:EPSG::4326" gml:id="id.c3079e23-1331-4aab-ac72-050221db93da">
              <gml:segments>
                <gml:CircleByCenterPoint interpolation="circularArcCenterPointWithRadius" numArc="1">
                  <gml:pos>47.233333333 0.166666667</gml:pos>
                  <gml:radius uom="km">5.0</gml:radius>
                </gml:CircleByCenterPoint>
              </gml:segments>
            </gml:Curve>
          </gml:curveMember>
        </gml:Ring>
      </gml:exterior>
    </gml:PolygonPatch>
  </gml:patches>
</aixm:Surface>
```
When I change the uom with "**[nmi_i]**" or even "**m**", it generates the **expected** geometry.
It seems "**km**" is not recognized and therefore GDAL assumes a radius of 5 deg by default (I checked that on QGIS, as the measure of the circle radius equals to 300 NM, aka 5*60 NM).
Result of ogrinfo command (only beginning) when _uom="km"_ is used :
_Layer name: Airspace
Geometry: **Curve Polygon**
Feature Count: 1
Extent: (42.233333, -4.833333) - (52.233333, 5.166667)_
Result of ogrinfo command (beginning only with one feature in the GML file) when _uom="[nmi_i]" or uom="m"_ is used :
_Layer name: Airspace
Geometry: **Polygon**
Feature Count: 1
Extent: (47.188333, 0.100433) - (47.278333, 0.232901)_
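The symptom is consistent with a unit table that simply lacks a `km` entry. As an illustration only (this is not GDAL's actual code; the table and function are hypothetical), such a lookup might look like:

```python
# Hypothetical uom-to-meters table; a missing "km" entry would explain why
# the radius falls back to being interpreted as degrees.
UOM_TO_METERS = {
    "m": 1.0,
    "km": 1000.0,
    "[nmi_i]": 1852.0,  # international nautical mile
}

def radius_in_meters(value, uom):
    if uom not in UOM_TO_METERS:
        raise ValueError("unrecognized uom: %s" % uom)
    return value * UOM_TO_METERS[uom]
```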
Answers:
username_1: See https://github.com/OSGeo/gdal/issues/3118 and test with a newer GDAL version.
Status: Issue closed
|
sfsoul/personalBlog | 689030175 | Title: React
Question:
username_0: ## How to add multiple classNames in React
- [How to add multiple classNames in React](https://segmentfault.com/q/1010000005664656)
- [Using className to apply multiple class styles in React](https://blog.csdn.net/qq_36742720/article/details/85766757)
Answers:
username_0: ## Error Boundaries
### About event handlers
**Error boundaries do not catch errors thrown inside event handlers. Unlike the `render` method and lifecycle methods, event handlers do not run during rendering, so if they throw, React still knows what to display on screen. To catch errors inside an event handler, use JavaScript's `try/catch` statement.**
username_0: ## Why event handlers need `this` bound in React class components
- [Why you need to bind this for event handlers in React class components](https://juejin.im/post/6844903605984559118)
username_0: ## What is the difference between function components and class components?
username_0: ## What is the difference between stateless and stateful components?
### Stateless components
A stateless component is the most basic form of component. Since no state is involved, it serves purely for static display. Its basic structure is props plus a callback function. Because no state updates are involved, this kind of component has the highest reusability.
```
export const Header = (props) => {
return (
<div>无状态组件</div>
)
}
```
### Stateful components
Building on stateless components: when a component contains internal state that changes in response to events or external messages, it is a stateful component. Stateful components usually have a lifecycle, used to trigger state updates at different points in time.
```
export class Home extends Component {
constructor(props) {
super(props);
}
render() {
return (
<Header />
)
}
}
```
- [React series: do you really know stateless components?](https://juejin.im/entry/6844903493816303624)
- [React component patterns](https://www.html.cn/archives/9458)
username_0: ## Parent-child component communication in React
- [React parent-child value passing: child component modifies the parent's state](https://www.imooc.com/article/263942)
- [React dynamic data binding: updating the corresponding text content from an input](https://blog.csdn.net/qq_40190624/article/details/88779371)
username_0: ### When to use Refs
- Managing focus, text selection, or media playback;
- Triggering imperative animations;
- Integrating with third-party DOM libraries
### Creating Refs
Refs are created with `React.createRef()` and attached to React elements via the `ref` attribute. Refs are commonly assigned to an instance property when the component is constructed so they can be referenced throughout the component.
```
```
username_0: ## Component lifecycle
### Mounting
**When a component instance is created and inserted into the DOM, its lifecycle methods are called in this order:**
- `constructor`
- `render()`
- `componentDidMount()`
### Updating
**An update is triggered when a component's props or state change. These lifecycle methods are called in order:**
- `shouldComponentUpdate()`
- `render()`
- `componentDidUpdate()`
### Unmounting
**Called when the component is removed from the DOM:**
- `componentWillUnmount()`
### Commonly used lifecycle methods
username_0: **To write an uncontrolled component, instead of writing a data-handling function for every state update, you can use a ref to get form data from the DOM node.**
```
class NameForm extends React.Component {
constructor(props) {
super(props);
this.handleSubmit = this.handleSubmit.bind(this);
this.input = React.createRef();
}
handleSubmit(event) {
alert('A name was submitted: ' + this.input.current.value);
event.preventDefault();
}
render() {
return (
<form onSubmit={this.handleSubmit}>
<label>
Name:
<input type="text" ref={this.input} />
</label>
<input type="submit" value="Submit" />
</form>
);
}
}
```
#### Default value
You can give the component an initial value by specifying a `defaultValue` attribute.
```
render() {
return (
<form onSubmit={this.handleSubmit}>
<label>
Name:
<input
defaultValue="Bob"
type="text"
ref={this.input} />
</label>
<input type="submit" value="Submit" />
</form>
);
}
``` |
cafejojo/schaapi | 324708801 | Title: Rename `TestableGenerator` class to `ClassGenerator`
Question:
username_0: In most cases, the class names are identical to the interfaces they implement, or an amalgamation of names such as JavaMavenProject, which implements JavaProject and MavenProject, or simply an extension of the name, as is the case with the filters.
However, `TestableGenerator` seems to be the exception, so would it perhaps be an idea to rename the class to ClassGenerator, which is more indicative of what it actually does anyway.
Answers:
username_1: In the original design, `TestableWriter` and `TestableGenerator` were separated. However, the writer part is no longer part of the interface. Therefore, we should be able to call the `TestableGenerator` whatever we want. We could even call it the `ClassWriter` or the `FooBarBar`.
username_2: ~~+1 for `FooBarBar`~~
How about `TestableClassGenerator`?
username_1: @username_2 Considering that the current `TestableGenerator` does not do anything specific to tests, I think `ClassWriter`/`ClassGenerator` would be a better name.
username_2: Hmm yeah, fair point. In that case, ClassWriter seems the best option to me
Status: Issue closed
|
skywind3000/asyncrun.vim | 808502018 | Title: update info on compatibility with fugitive
Question:
username_0: The [wiki entry on vim-fugitive](https://github.com/username_1/asyncrun.vim/wiki/Cooperate-with-famous-plugins#fugitive) is out of date since https://github.com/tpope/vim-fugitive/commit/d4bcc75ef6449c0e5592513fb1e0a42b017db9ca
Instead, the custom commands
```vim
command! -bang -bar -nargs=* Gpush execute 'AsyncRun<bang> -cwd=' .
\ fnameescape(FugitiveGitDir()) 'git push' <q-args>
command! -bang -bar -nargs=* Gfetch execute 'AsyncRun<bang> -cwd=' .
\ fnameescape(FugitiveGitDir()) 'git fetch' <q-args>
```
must be added.
Answers:
username_1: fixed, thanks
Status: Issue closed
username_0: Now `FugitiveGitDir` returns the path of the `.git` subdirectory instead of that of the repository. To make the custom commands work again, replace `fnameescape(FugitiveGitDir())` with `fnamemodify(FugitiveGitDir(), ":h:S")` in all lines.
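Under that suggestion, the updated commands would presumably look like this (a sketch; verify against the current fugitive API before relying on it):

```vim
command! -bang -bar -nargs=* Gpush execute 'AsyncRun<bang> -cwd=' .
      \ fnamemodify(FugitiveGitDir(), ':h:S') 'git push' <q-args>
command! -bang -bar -nargs=* Gfetch execute 'AsyncRun<bang> -cwd=' .
      \ fnamemodify(FugitiveGitDir(), ':h:S') 'git fetch' <q-args>
```

Here `:h` strips the trailing `.git` component to get the work-tree root, and `:S` shell-escapes the result.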
kitesurfer1404/WS2812FX | 256451535 | Title: Add support for animation direction, and virtual strips
Question:
username_0: Okay, since the title is limited, let me explain the virtual strips part.
Say you have a few various shaped "strips", "bits", "rings", etc. made of one or more WS2812 LEDs. Say you want to control them all, but from a single pin (because your project includes more and more devices that an ESP8266 or an Arduino can support).
So it would be nice to be able to define a single strip (say, 3x 24-LED rings in series, first one connected to a pin, second one connected to first one's output, et cetera), and be able to make segments out of it, animated separately. I'm thinking something similar to this:
```cpp
#include <WS2812FX.h>
#define LED_COUNT 72
#define LED_PIN 15
WS2812FX strip = WS2812FX(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);
WS2812FXSegment segment1 = strip.segment(0, 23);
WS2812FXSegment segment2 = strip.segment(24, 47);
WS2812FXSegment segment3 = strip.segment(48, 71);
void setup() {
strip.init();
strip.setBrightness(255); // Modifying the strip object will modify all segments too
strip.setSpeed(100);
strip.setMode(FX_MODE_FLASH_SPARKLE);
segment2.setMode(FX_MODE_RAINBOW_CYCLE); // setting the segment options after the strip will override those settings for said segment
segment2.setSpeed(150);
strip.start(); // starting the strip should go over all defined segments and start them separately
}
void loop() {
strip.service(); // servicing the strip should go over all defined segments and service them too
}
```
And about the direction of animations. Especially on rings, alignment is a bitch. Say you want two 24-LED rings do the same animation, and attach them back to back. Well you can't, since the directions won't match. So it would be nice if separate WS2812FX objects (and, if implemented, segments too) could have a direction set (either NORMAL, which means the current direction and addressing `0..n`, and REVERSE, which means reversed direction addressing `n..0`).
Answers:
username_1: I fully understand and I need a this in the near future for my Christmas decoration. I have three idetical windows that will get strips oround them. Running in circles etc. is a well needed improvement.
I can not promise to implement this immediately. But it's on the list.
username_0: Technically speaking, segmenting (at least in a properly OO language, say, C# or Java) is easy to solve: each segment is practically a WS2812FX object, but it maps only part of the parent object's LEDs (e.g. it translates 48 to 0, 49 to 1, and 71 to 23, to extend my example).
Animation direction is again relatively simple, and I might give it a go in the near future.
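As a language-agnostic sketch of that translation (names hypothetical), a segment is just a start/stop window on the parent strip plus a direction flag, and the reversed direction also covers the back-to-back ring alignment case:

```python
def make_segment(start, stop, reverse=False):
    """Return a function mapping a segment-local index to a physical LED index.

    start and stop are inclusive physical indices on the parent strip.
    reverse=True addresses the segment as n..0 instead of 0..n.
    """
    length = stop - start + 1

    def to_physical(i):
        if not 0 <= i < length:
            raise IndexError("segment index out of range: %d" % i)
        # REVERSE direction counts down from stop; NORMAL counts up from start.
        return stop - i if reverse else start + i

    return to_physical

# Extending the example: segment3 covers physical LEDs 48..71.
seg3 = make_segment(48, 71)                    # seg3(0) -> 48, seg3(23) -> 71
seg3_rev = make_segment(48, 71, reverse=True)  # seg3_rev(0) -> 71
```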
username_2: I would totally dig this.. I build ws2812 clocks.. I could break out the different segments for each of the hands and other animation..
I absolutely love this library.. It has been useful in many projects.. Your work makes mine a pleasure.



Status: Issue closed
|
solgenomics/sgn | 336231914 | Title: Allow users to add a description when adding new list name
Question:
username_0: Note: just need to add an input box in the dialog and js. AJAX/List and CXGN/List are ready for storing description.
For Bugs:
---------
### Environment
<!-- Where did you encounter the error. -->
#### Steps to Reproduce
<!-- Provide an example, or an unambiguous set of steps to reproduce -->
<!-- this bug. Include code to reproduce, if relevant. -->
Answers:
username_1: @username_0 I take the opportunity of this issue to list a couple of other points about lists:
- request for a list "intersect" option in addition to the existing "combine"
- batch deletion of lists does not work if you increase the datatable display above 10 entries
- list name and deletion #1974
Status: Issue closed
username_1: moved remaining point to a separate issue #2230 |
Johannafendrich/my-homie | 595834537 | Title: Add Setup
Question:
username_0: As a developer, I want to work with a modern development setup:
- [ ] create React App
- [ ] eslint
- [ ] prettier
- [ ] storybook
- [ ] emotion
- [ ] express
- [ ] dotenv
- [ ] mongoDB
- [ ] nodemon
- [ ] husky
- [ ] concurrently
- [ ] link with DeepScan<issue_closed>
Status: Issue closed |
libgdx/libgdx | 230962019 | Title: native crash on libc.
Question:
username_0: Please ensure you have given all the following requested information in your report.
#### Issue details
Crash on native libc and don't konw why, app just exit .I happends some times. please help !!
#### Reproduction steps/code
**just open the app**
I don't know if I am using the API in the wrong way; this issue is not always reproducible.
#### Version of LibGDX and/or relevant dependencies
```shell
compile "com.badlogicgames.gdx:gdx-backend-android:1.9.6"
natives "com.badlogicgames.gdx:gdx-platform:1.9.6:natives-armeabi"
natives "com.badlogicgames.gdx:gdx-platform:1.9.6:natives-armeabi-v7a"
natives "com.badlogicgames.gdx:gdx-platform:1.9.6:natives-arm64-v8a"
natives "com.badlogicgames.gdx:gdx-platform:1.9.6:natives-x86"
natives "com.badlogicgames.gdx:gdx-platform:1.9.6:natives-x86_64"
compile "com.badlogicgames.gdx:gdx-box2d:1.9.6"
natives "com.badlogicgames.gdx:gdx-box2d-platform:1.9.6:natives-armeabi"
natives "com.badlogicgames.gdx:gdx-box2d-platform:1.9.6:natives-armeabi-v7a"
natives "com.badlogicgames.gdx:gdx-box2d-platform:1.9.6:natives-arm64-v8a"
natives "com.badlogicgames.gdx:gdx-box2d-platform:1.9.6:natives-x86"
natives "com.badlogicgames.gdx:gdx-box2d-platform:1.9.6:natives-x86_64"
```
#### Stacktrace
```java
05-24 04:23:40.930 10974-10974/alex.com.gdxdemo A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x58 in tid 10974 (lex.com.gdxdemo)
[ 05-24 04:23:40.931 255: 255 W/ ]
debuggerd: handling request: pid=10974 uid=10087 gid=10087 tid=10974
```
#### Please select the affected platforms
- [x] Android
Nexus6 android 7.1.1
Answers:
username_0: Update: refer to [this Stack Overflow question](https://stackoverflow.com/questions/36336141/libgdx-box2d-android-crash-fatal-signal-11-sigsegv-code-1-fault-addr).
I found I was adding a body to the world outside the GL thread, which may lead to a `synchronization problem with the world object`.
My solution was just moving the add action into the render() method, and I have not met this crash since.
I don't know if this is the right solution. Monitoring.
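For reference, the general pattern behind this kind of fix (in libGDX you would typically use `Gdx.app.postRunnable(...)` or defer the work to `render()`) is to queue world mutations made from other threads and drain the queue on the simulation thread between steps. A minimal language-neutral sketch in Python, with all names hypothetical (this is not libGDX/Box2D code):

```python
import queue


class World:
    """Stand-in for a physics world that must only be mutated on one thread."""

    def __init__(self):
        self.bodies = []
        self.pending = queue.Queue()  # mutation requests from other threads

    def create_body_later(self, body):
        # Safe to call from any thread: just enqueue the request.
        self.pending.put(body)

    def step(self):
        # Called once per frame on the simulation/render thread.
        # Apply queued mutations while the world is NOT stepping.
        while not self.pending.empty():
            self.bodies.append(self.pending.get())
        # ... the actual world.step(dt) would run here ...


w = World()
w.create_body_later("ball")  # e.g. from a network callback thread
w.step()                     # applied safely inside the frame loop
print(w.bodies)              # → ['ball']
```

The point is that the world object is never touched concurrently: other threads only touch the thread-safe queue.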
username_0: Solved. I was using the world object to destroy a body while the world was in its step process.
Status: Issue closed
username_1: https://github.com/libgdx/libgdx/wiki/Issue-Tracker |
gitbucket/gitbucket | 115454917 | Title: Still cannot fork big repository
Question:
username_0: I have a repository about 1 GB in size. I get a timeout error every time I try to fork it.
It copies about half of the disk content into my repository folder.
Answers:
username_1: Is the size because of many small files, or because of a few big ones ( #1101 )?
username_2: Repository forking should be run asynchronously. |
birtona/rent-a-role-model | 56433006 | Title: some profile picture do not appear
Question:
username_0: For an unknown reason, about half of the users' profile pictures don't show, neither in the profile show view nor in the users index view. I don't see any pattern in which show and which don't; my first guess would be some difficulties with the XING API. Anyone interested in having a look at this and maybe fixing it?
Answers:
username_1: It seems the requested assets are not available anymore:
Stacktrace:
```
The requested resource
/pubimg/users/7/1/c/0dccbbf58.14888391,2.140x185.jpg
is no longer available on this server and there is no forwarding address. Please remove all references to this resource.
``` |
Door43/ulb-en | 204360407 | Title: PHP 2:8 tr
Question:
username_0: 8He humbled himself and became obedient **_until death_**, the death of the cross.
The readers interpret the words 'until death' to mean until the point of death and not through his death. They suggested using the word "unto" instead of "until".
Where the following versions have:
NIV - becoming obedient to death-- even death on a cross!
ESV - becoming obedient to the point of death, even death
NASB - by becoming obedient to the point of death, even
Status: Issue closed
Answers:
username_1: The ULB of PHP 2:8 now reads:
\v 8 He humbled himself and became obedient to the point of death,
\q even death of a cross! |
gilbarbara/react-joyride | 308316280 | Title: Cannot read property 'ease' of undefined
Question:
username_0:
```
at Object.scroll [as top] (index.js:18)
at Joyride.componentDidUpdate (index.js:494)
at commitLifeCycles (react-dom.development.js:8778)
at commitAllLifeCycles (react-dom.development.js:9946)
at HTMLUnknownElement.callCallback (react-dom.development.js:542)
at Object.invokeGuardedCallbackDev (react-dom.development.js:581)
at invokeGuardedCallback (react-dom.development.js:438)
at commitRoot (react-dom.development.js:10050)
at performWorkOnRoot (react-dom.development.js:11017)
at performWork (react-dom.development.js:10967)
at batchedUpdates (react-dom.development.js:11086)
at batchedUpdates (react-dom.development.js:2330)
at dispatchEvent (react-dom.development.js:3421)
```
Answers:
username_1: Duplicate of #320
Status: Issue closed
|
dropbox/dropbox-sdk-js | 164691350 | Title: longpoll uses api.dropboxapi.com host; should use notify.dropboxapi.com
Question:
username_0: https://www.dropbox.com/developers/documentation/http/documentation#files-list_folder-longpoll
Currently I am getting 404 errors since the API is using the api host.
Looks like host parameter (api or notify) for request() in routes.js is not used.
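For illustration, the per-route host selection the fix requires could look like the following sketch (Python for brevity; the `api`/`content`/`notify` hosts are Dropbox API v2 hosts, but the function and table names here are hypothetical, not the SDK's actual code):

```python
# Hypothetical route -> host table; longpoll must go to the notify host.
ROUTE_HOSTS = {
    "files/list_folder": "api",
    "files/list_folder/longpoll": "notify",
    "files/download": "content",
}


def url_for(route):
    """Build the request URL, defaulting unknown routes to the api host."""
    host = ROUTE_HOSTS.get(route, "api")
    return "https://{}.dropboxapi.com/2/{}".format(host, route)


print(url_for("files/list_folder/longpoll"))
# → https://notify.dropboxapi.com/2/files/list_folder/longpoll
```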
Answers:
username_1: this is related to #42
username_2: I believe this issue should be closed. It looks like #79 fixed it, however #85 still appears to be broken and longpoll is unusable at the moment.
Status: Issue closed
username_3: Yes, thanks for pointing that out! |
akkadotnet/akka.net | 168870945 | Title: Racy spec: TailChoppingSpec.Tail_chopping_group_router_must_throw_exception_if_no_result_will_arrive_within_the_given_time
Question:
username_0: It fails with message
```
Failed: Expected a message of type Akka.Actor.Status+Failure, but received {ack} (type System.String) instead from [akka://RuntimeType-120/deadLetters]
Expected: True
Actual: False
at Akka.TestKit.Xunit2.XunitAssertions.Fail(String format, Object[] args) in D:\BuildAgent\work\d26c9d7f36545acd\src\contrib\testkits\Akka.TestKit.Xunit2\XunitAssertions.cs:line 22
at Akka.TestKit.TestKitBase.InternalExpectMsgEnvelope[T](Nullable`1 timeout, Action`2 assert, String hint, Boolean shouldLog) in D:\BuildAgent\work\d26c9d7f36545acd\src\core\Akka.TestKit\TestKitBase_Expect.cs:line 206
at Akka.TestKit.TestKitBase.InternalExpectMsgEnvelope[T](Nullable`1 timeout, Action`1 msgAssert, Action`1 senderAssert, String hint) in D:\BuildAgent\work\d26c9d7f36545acd\src\core\Akka.TestKit\TestKitBase_Expect.cs:line 182
at Akka.TestKit.TestKitBase.InternalExpectMsg[T](Nullable`1 timeout, Action`1 msgAssert, String hint) in D:\BuildAgent\work\d26c9d7f36545acd\src\core\Akka.TestKit\TestKitBase_Expect.cs:line 157
at Akka.TestKit.TestKitBase.ExpectMsg[T](Nullable`1 duration, String hint) in D:\BuildAgent\work\d26c9d7f36545acd\src\core\Akka.TestKit\TestKitBase_Expect.cs:line 29
at Akka.Tests.Routing.TailChoppingSpec.Tail_chopping_group_router_must_throw_exception_if_no_result_will_arrive_within_the_given_time() in D:\BuildAgent\work\d26c9d7f36545acd\src\core\Akka.Tests\Routing\TailChoppingSpec.cs:line 138
```
Answers:
username_1: fixed in #2721
Status: Issue closed
|
acquia/acsf-tools | 261955279 | Title: Support ACSF stacks
Question:
username_0: This tool currently assumes a single webserver and sitename. This is not the case for all customers.
Answers:
username_1: Hi @username_0 - this probably exposes a gap in my ACSF knowledge around the Stacks feature. Doesn't each Stack get its own Repository (and therefore would have different configured `site_id`s)? When developing locally, are Stacks treated as separate installs (with separate docroots)? If the answer is yes to both, then this should work as is, unless I'm missing something. Maybe you an I could get together sometime and you can show me where you're running into an issue.
username_2: We have three stacks and the assumption of stack `01` is not correct. Sometimes we want to run on stack `02`
username_3: I don't think there is any assumption about the stack id (01, 02, ...) anymore, at least in the main commands (ml, dump, restore, ...).
Regarding supporting stacks, I am not sure it makes sense given that each stack may run a different code base. It is even considered a different subscription in Acquia Cloud, so it would be complex to execute the command on all the stacks.
username_4: I agree with Vincent on the fact that I have not seen any dependency on stacks in the commands I used frequently (on a project with 5 stacks).
Does anyone have an example of a broken command in a multi-stack context?
Status: Issue closed
|
autopkg/scriptingosx-recipes | 550473199 | Title: Python 3 pkg includes annoying behavior
Question:
username_0: Hello Armin! Thanks for these recipes.
Would it be possible to suppress the postinstall script for the Python_Documentation.pkg subpackage? It includes this script:
```
#!/bin/sh
PYVER="3.8"
FWK="/Library/Frameworks/Python.framework/Versions/${PYVER}"
FWK_DOCDIR_SUBPATH="Resources/English.lproj/Documentation"
FWK_DOCDIR="${FWK}/${FWK_DOCDIR_SUBPATH}"
APPDIR="/Applications/Python ${PYVER}"
SHARE_DIR="${FWK}/share"
SHARE_DOCDIR="${SHARE_DIR}/doc/python${PYVER}"
SHARE_DOCDIR_TO_FWK="../../.."
# make link in /Applications/Python m.n/ for Finder users
if [ -d "${APPDIR}" ]; then
ln -fhs "${FWK_DOCDIR}/index.html" "${APPDIR}/Python Documentation.html"
open "${APPDIR}" || true # open the applications folder
fi
# make share/doc link in framework for command line users
if [ -d "${SHARE_DIR}" ]; then
mkdir -m 775 -p "${SHARE_DOCDIR}"
# make relative link to html doc directory
ln -fhs "${SHARE_DOCDIR_TO_FWK}/${FWK_DOCDIR_SUBPATH}" "${SHARE_DOCDIR}/html"
fi
```
Although Python 3 can be installed silently, opening a Finder window is disruptive to users. Ideally, the script could check for the "command line install" flag, correct?
If you feel that it's better to ask the Python maintainers directly, let me know.
Thanks!
Mike
Answers:
username_1: We definitely _should_ ask the Python maintainers. At the very least they should wrap any UI actions in a test for the `$COMMAND_LINE_INSTALL` variable.
That said, we could add an action to replace this script or just a line in this script. it would be very fragile for changes in updates, though.
username_2: If the Python.org maintainers don't/won't fix this, I personally think a better approach is to split the distribution pkg into its component pkgs, and to install _only_ the Python_Framework.pkg, Python_Command_Line_Tools.pkg, and Python_Install_Pip.pkg.
The other subpackages are not really necessary and can be disruptive, as you've seen.
username_0: Thank you both! I'll submit a ticket for the Python maintainers and ask that they test for the `$COMMAND_LINE_INSTALL` variable before opening a Finder window.
Since we're using Munki, I'm going to try embedding a ChoiceChangesXML file into the pkginfo file, to exclude the problematic pkgs. Greg, are these what you had in mind?
```
<array>
<dict>
<key>attributeSetting</key>
<integer>1</integer>
<key>choiceAttribute</key>
<string>selected</string>
<key>choiceIdentifier</key>
<string>org.python.Python.PythonFramework-3.8</string>
</dict>
<dict>
<key>attributeSetting</key>
<integer>0</integer>
<key>choiceAttribute</key>
<string>selected</string>
<key>choiceIdentifier</key>
<string>org.python.Python.PythonApplications-3.8</string>
</dict>
<dict>
<key>attributeSetting</key>
<integer>1</integer>
<key>choiceAttribute</key>
<string>selected</string>
<key>choiceIdentifier</key>
<string>org.python.Python.PythonUnixTools-3.8</string>
</dict>
<dict>
<key>attributeSetting</key>
<integer>0</integer>
<key>choiceAttribute</key>
<string>selected</string>
<key>choiceIdentifier</key>
<string>org.python.Python.PythonDocumentation-3.8</string>
</dict>
<dict>
<key>attributeSetting</key>
<integer>0</integer>
<key>choiceAttribute</key>
<string>selected</string>
<key>choiceIdentifier</key>
<string>org.python.Python.PythonProfileChanges-3.8</string>
</dict>
<dict>
<key>attributeSetting</key>
<integer>1</integer>
<key>choiceAttribute</key>
<string>selected</string>
<key>choiceIdentifier</key>
<string>org.python.Python.PythonInstallPip-3.8</string>
</dict>
</array>
```
I'd submit a pull request for the recipe, but I'm not sure how to handle marking the pkg receipts as optional in AutoPkg...hmm. Splitting out the pkgs might be a better idea, yeah.
username_0: Also, I just noticed the installs array for `/Applications/Python 3.8/Python Launcher.app`. If I'm not installing `org.python.Python.PythonApplications-3.8`, that'd probably cause `managedsoftwareupdate` to loop.
username_0: Here's the issue on python.org's bug tracker:
https://bugs.python.org/issue39580
username_3: I've made an upstream PR to add a check. https://github.com/python/cpython/pull/20271 |
soutaro/steep | 842925052 | Title: Steepfile `library` doesn't work with sources besides rubygems.org.
Question:
username_0: I have my own RubyGem server at https://jubigems.org.
If I list my gem in my `Gemfile` as:
```
source 'https://www.jubigems.org' do
gem 'core'
end
```
`steep check` errors saying it can't find the library `core`. However if I list the same gem as:
```
gem 'core', github: 'username_0/core'
```
Using my GitHub source, it works fine.
Status: Issue closed
Answers:
username_0: Ah I need to add the files to the gemspec, like:
```
spec.files = Dir['lib/**/*.rb'] + Dir['sig/**/*.rbs']
``` |
ChkBuk/myexamples | 394433083 | Title: How to deploy war file on Tomcat installed in compute engine instance
Question:
username_0: I need to deploy my spring-boot application on compute engine in Google cloud platform. I have already created an instance and through SSH Apache and Maven have been installed. Further, war file has been uploaded into the bucket. Anybody can provide me with the remaining commands to deploy the war file on tomcat instance.
Thanks |
adobe/aio-cli-plugin-app | 930024924 | Title: @adobe/aio-lib-ims will require node-12 or greater starting July 14th, 2021
Question:
username_0: Your module is dependent on `@adobe/aio-lib-ims`.
Adobe IMS is retiring `TLS 1.0` and `TLS 1.1` support on **July 14th, 2021**.
`TLSv1.2` is the default for node-12 or higher: https://nodejs.org/api/cli.html#cli_tls_min_v1_2
Therefore any calls to Adobe IMS using `@adobe/aio-lib-ims` using `node-10` will fail after the retirement date above.
**This module should impose a minimum requirement of `node-12` going forward.**
Add this to your `package.json`:
```
{
"engineStrict": true,
"engines" : {
"node" : ">=12"
}
}
```
Status: Issue closed |
sctplab/pr-sctp-improved | 200885455 | Title: Timing-Problems on Sender Side Implementation
Question:
username_0: In the test-case https://github.com/nplab/PR_SCTP_Testsuite/blob/master/forward-tsn/sender-side-implementation/ttl-policy/sender-side-implementation-7_1.pkt the FORWARD-TSN-Chunk in line 84 is not transmitted like expected at the relative time +1.9 seconds after the receiving of the SACK-Chunk.
Currently the test-case also fails, because the FORWARD-TSN-Chunk is not bundled with the still outstanding DATA-Chunk in line 81. This Issue is therefore related to https://github.com/sctplab/pr-sctp-improved/issues/5. |
groundwired/salesforce-food-bank | 217747650 | Title: duplicated and unduplicated is tracking incorrectly
Question:
username_0: <NAME>
to Laura, me
Hi Laura and Evan,
I just wanted to give some feedback that the duplicated and unduplicated status is tracking incorrectly. I logged my first visit and it was marked as duplicated. The second visit logged as unduplicated. This should have been reversed. I tried it by adding a couple of clients, and that happened each time. Your first visit is unduplicated, and your repeat visit is duplicated.
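For reference, the intended classification can be sketched as: a visit is unduplicated only when it is the client's first visit in the reporting period, and duplicated otherwise. A minimal sketch (all names hypothetical; this is not the actual Salesforce implementation):

```python
def classify_visits(visits):
    """visits: list of (client_id, date) tuples in chronological order.

    Returns a label per visit: the first visit by each client is
    'unduplicated'; every repeat visit is 'duplicated'.
    """
    seen = set()
    labels = []
    for client_id, _date in visits:
        if client_id in seen:
            labels.append("duplicated")
        else:
            labels.append("unduplicated")
            seen.add(client_id)
    return labels


print(classify_visits([("a", 1), ("a", 2), ("b", 3)]))
# → ['unduplicated', 'duplicated', 'unduplicated']
```

The reported bug is the reverse of this: the first visit was being labeled duplicated.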
Status: Issue closed |
GoogleCloudPlatform/ruby-docs-samples | 178121381 | Title: Ruby Storage Sample doesn't have a README
Question:
username_0: Story: A Rubyist looks for a README.md to understand how [Storage](https://github.com/GoogleCloudPlatform/ruby-docs-samples/tree/master/storage) should be used similar to [Vision](https://github.com/GoogleCloudPlatform/ruby-docs-samples/tree/master/vision).
Issue: There is no README.md in the [Storage](https://github.com/GoogleCloudPlatform/ruby-docs-samples/tree/master/storage) sample.
Possible-Solution: The Storage sample should have a README similar to [Vision](https://github.com/GoogleCloudPlatform/ruby-docs-samples/tree/master/vision).
Answers:
username_1: This fix will come with the new Storage sample ([branch](https://github.com/GoogleCloudPlatform/ruby-docs-samples/tree/new-storage-samples/storage))
Related (should fix this): #70
Status: Issue closed
|
ikedaosushi/tech-news | 365971629 | Title: If you don't hire juniors you don't deserve seniors
Question:
username_0: If you don't hire juniors, you don't deserve seniors<br>
Let me tell you the story of a very successful company that made a very big, dumb decision. We don’t hire junior developers or interns…if you don’t get a puppy, you don’t have to clean up its messes.<br>
https://ift.tt/2xFyGhw |
oozcitak/xmlbuilder-js | 127187068 | Title: Not working for emojis
Question:
username_0: If I try to stringify text that contains an emoji, the error 'Invalid Character' gets thrown. I tried using both `{version: '1.0', encoding: 'UTF-8'}` and UTF-32 encoding; neither works. What am I missing?
Answers:
username_1: Did you see Issue #98?
Status: Issue closed
username_0: Awesome, thanks. Sorry did not see the closed issues. |
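For context: emoji are valid XML 1.0 characters (the spec's `Char` production allows U+10000 through U+10FFFF), so 'Invalid Character' errors with emoji typically come from surrogate-pair handling in JavaScript rather than from the XML rules themselves. A quick Python check of the XML 1.0 `Char` ranges:

```python
def is_valid_xml10_char(cp):
    """XML 1.0 Char production:
    #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
    """
    return (cp in (0x9, 0xA, 0xD)
            or 0x20 <= cp <= 0xD7FF
            or 0xE000 <= cp <= 0xFFFD
            or 0x10000 <= cp <= 0x10FFFF)


print(is_valid_xml10_char(ord("😀")))  # → True: emoji are allowed
print(is_valid_xml10_char(0x0B))       # → False: vertical tab is not
```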
flutter/flutter | 514443276 | Title: type 'List<dynamic>' is not a subtype of type 'List<String>'
Question:
Answers:
username_1: I believe this is a typing issue. Please, can you provide your code so we can provide help?
username_0: This is the flutter doctor output:
```
/Users/pro/Developer/flutter/bin/flutter doctor --verbose
[✓] Flutter (Channel stable, v1.9.1+hotfix.6, on Mac OS X 10.14.6 18G84, locale en-IN)
• Flutter version 1.9.1+hotfix.6 at /Users/pro/Developer/flutter
• Framework revision 68587a0916 (7 weeks ago), 2019-09-13 19:46:58 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/pro/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 11.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.1, Build version 11A1027
• CocoaPods version 1.8.4
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 39.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] Connected device (1 available)
• Amit’s iphoneXR • 00008020-001C4D3E0E52002E • ios • iOS 12.3.1
! Doctor found issues in 1 category.
Process finished with exit code 0
```
berk/will_filter | 246794276 | Title: The images on the wiki documentation return not found
Question:
username_0: I am trying to use this gem and I am perusing the documentation. I observe that the images that illustrate how the tags look are not displaying. I tried copying their src attribute and loading it directly in the browser, but the page returned `Not Found`. Here is the link to the wiki document I am talking about:
https://github.com/berk/will_filter/wiki/Customizing-Filters |
gorilla/websocket | 335857684 | Title: Fragment Size is not buffer size
Question:
username_0: messages.
Your implementation limits the size of the fragment to the size of the buffer (by default 4K).
As a result traefik proxy server can't send to backend messages longer then 4K.
Answers:
username_1: This package can send a message of any size and will fragment messages in some circumstances. As the quote from the RFC states, message fragmenting is allowed.
This package can receive a message of any size, independent of any fragmenting.
If the traefik server cannot handle fragmented messages, then report the issue with the traefik server.
Status: Issue closed
username_0: Sorry.
I use websockets to send a document to an editor in the browser and back to the server.
Sending to the browser works without problems. I assume the Python websocket implementation uses fragmentation.
But on sending from the browser to the server, the packet was trimmed.
When I increased the defaultReadBufferSize and defaultWriteBufferSize in gorilla/websocket/conn.go, the problem disappeared, but it appeared again with a document larger than the buffer size.
And traefik uses your latest version of the code.
username_1: This package does not trim messages. Based on a quick look at the traefik code, I don't see anything there that will trim a message. It sounds like the backend server does not support fragmented messages. What is the backend server?
username_0: When working without proxy problem absent.
On the backend side [gevent-websocket](https://github.com/jgelens/gevent-websocket). Citation:
`
def read_message(self):
message = bytearray()
while(True):
header, payload = self.read_frame()
...
message += payload
if header.fin:
break
return message
`
username_0: And:
```python
header.fin = first_byte & FIN_MASK == FIN_MASK
FIN_MASK = 0x80
```
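To make the receiver-side contract concrete: a compliant reader keeps appending fragment payloads until it sees a frame whose FIN bit is set, so the sender's fragment size never limits the total message size. A minimal sketch, modeling frames as `(first_byte, payload)` pairs with `FIN_MASK = 0x80` as in the gevent-websocket code quoted above (names here are illustrative, not a real wire parser):

```python
FIN_MASK = 0x80


def read_message(frames):
    """Reassemble one message from an iterable of (first_byte, payload) frames."""
    message = bytearray()
    for first_byte, payload in frames:
        message += payload
        if first_byte & FIN_MASK == FIN_MASK:  # FIN bit set: last fragment
            break
    return bytes(message)


# Three fragments of 4 KiB each; only the last has the FIN bit set.
frames = [(0x02, b"a" * 4096), (0x00, b"b" * 4096), (0x80, b"c" * 4096)]
msg = read_message(frames)
print(len(msg))  # → 12288: the full message, regardless of fragment size
```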
username_1: Does gevent-websocket report an error when reading the fragmented message? Can the application continue to read the websocket after reading the fragmented message?
username_2: I can take a look at this on the weekend. If the OP can help reproduce with the topology described by Gary, it'd help speed things up.
username_2: @sergeVL
Can you document the parts at play here? Where does socket.io fit in, since this is the first time you’ve mentioned it. Diagram (ASCII is OK) would be great.
I want to spin up the same client, Traefik, gevent backend to replicate this first. We need a standalone repro with an echo backend, and then we can look at Traefik & client-side by process of elimination.
username_0: I will make some test/simplified environment
Status: Issue closed
username_1: Closing because no response from OP in a month. |
kartoza/gfdrr_oondra | 157850891 | Title: Download world pop data and ingest it in geonode
Question:
username_0: Write a script to download world pop data and publish it in geonode.
Answers:
username_1: @Nyakudya please update this with what you did with the geoserver REST API and then describe the service that supersedes that (which @username_2 mentioned) where you can already get worldpop via a service.
- [ ] find and reference the inaSAFE metadata structure specification in a wiki page in OONDRA Github
- [ ] then create a metadata record for the worldpop service in the oondra geonode instance according to that metadata specificiation
username_0: Downloading the data using the API created by Samweli only produces vector output. The tool has already been incorporated into InaSAFE ([InaSAFE WorldPop integration](https://github.com/Samweli/inasafe/tree/worldpop_integration)) but needs to be extended so that it can download WorldPop as a raster layer.

The source code for world pop api can be sourced from: [world pop api source code](https://github.com/codeforresilience/worldpop-api).
username_2: Hi
Probably better to file an upstream ticket with Samweli too for this (if you didn't already).
Regards
Tim
username_1: @username_0 please do items in https://github.com/kartoza/gfdrr_oondra/issues/20#issuecomment-229941270 today.
We do need worldpop as a raster layer for InaSAFE as a priority. @username_2 can we task (and subcontract?) Samweli to do this? We need it asap so if we can't bank on him to do it soon then we'll have to do it in-house (who would be best?).
username_0: I have uploaded all the Tanzania rasters and ingested them into InaSAFE. I updated the wiki with what I have done and the layer naming conventions.
username_1: @username_0 tick the outstanding item in https://github.com/kartoza/gfdrr_oondra/issues/20#issuecomment-229941270 asap then close
username_0: The only missing part is the integration of Samweli's InaSAFE branch to get the UI directly in InaSAFE, or merging his branch into GeoSAFE.
Status: Issue closed
username_1: @username_0 please read my comment carefully before closing
username_1: Publish worldpop service (minimum coverage Tanzania) metadata
Status: Issue closed
|
jpdev832/FeatureService | 59503011 | Title: Standardize REST api
Question:
username_0: Add the following REST endpoints.
Post - /place
Put - /place
Get - /place with request args
Get - /place/{id}
Post - /place/{id}
Post - /place/feature
Put - /place/feature
Get - /place/feature with request args
Get - /place/feature/{id}
Post - /place/feature/{id}
Status: Issue closed |
Mysteryquy/Roguelike | 447284570 | Title: Multiple Branches
Question:
username_0: Hey
someone is always opening new branches instead of just using the main branch. How to fix???
Answers:
username_1: Hmm, maybe he understands that it's stupid and stops :) Maybe he would actually stop if he was certain that every collaborator is informed about main branch patches, and not only pulls?
Status: Issue closed
|
numfocus/gsoc | 147507955 | Title: Improving reproducibility in science by adding provenance tracking to the EcoData Retriever
Question:
username_0: I had a look at the provenance library. Let us discuss the requirements so that we can get started on the design of the problem, and see if this is a good fit or not.
Link -> http://archive.is/74MON
Status: Issue closed
Answers:
username_1: @username_0 You probably want to move this discussion to https://github.com/weecology/retriever. |
blackbaud/skyux-forms | 657679364 | Title: Radio and Checkbox have weird spacing between Label and Control Group
Question:
username_0: If this is expected behavior feel free to close but it does feel weird enough to point out.
### Expected behavior
Radio and Checkbox to have proper spacing between the form control and the label when using a `sky-control-label` class on the label.
### Actual behavior
There seems to be a chunk of space added which throws off the consistency compared to other Sky label/input groups.
### Steps to reproduce
1. Open the [Stackblitz](https://stackblitz.com/edit/radio-label-spacing)
2. Browser may matter -- I was using Chrome.
3. Look at the spacing between the label and the control radio group.
It's related to the **User Agent Stylesheet** styles on the **ul** element (below). If we remove those and use the provided Sky styling, it looks as I'd expect.
```scss
ul {
display: block;
list-style-type: disc;
margin-block-start: 1em;
margin-block-end: 1em;
margin-inline-start: 0px;
margin-inline-end: 0px;
padding-inline-start: 40px;
}
```
### Resources


Example of the issue when next to other form inputs
<issue_closed>
Status: Issue closed |
dcmjs-org/dcmjs | 460934466 | Title: Convert JSON TO dicom object
Question:
username_0: Hi, I want to convert JSON to a DICOM object. I read #64, but the input is a DICOM file in that thread.
My JSON would be like:
```json
{
  "00080018": {
    "vr": "UI",
    "Value": ["1.3.12.2.1107.5.4.3.321890.19960124.162922.29"]
  },
  "00080020": {
    "vr": "DA",
    "Value": ["19511013"]
  }
}
```
...and so on for the remaining tags.
Answers:
username_1: It's basically the same. You can populate the javascript object from scratch rather than using one loaded from a file. You can follow the basic style shown in #64 (use tag names and the hex and vr are pulled from the dictionary).
username_0: I loaded my JS file and edited the tag, but I'm unable to save it correctly. I also used "WriteBufferStream", but it's not working:
```js
const dcmjs = require("dcmjs");
const fs = require("fs");
const filePath = require("./data.json");

var dataset = dcmjs.data.DicomMetaDictionary.naturalizeDataset(filePath);
dataset.PatientName = "DemoPatient";
dataset = dcmjs.data.DicomMetaDictionary.denaturalizeDataset(dataset);

let new_file_WriterBuffer = new Buffer.from(JSON.stringify(dataset));
fs.writeFileSync("./seriesImage1.dcm", new_file_WriterBuffer);
```
username_1: Hard to say at a glance, but the best bet is probably just to print out each variable as you go and confirm it has the value you expect. Or start with the other working example and make incremental changes and confirm each one. This week is kind of busy for me, but probably next week we should make a some examples (tests) that cover this use case.
username_0: I have added a debugger and watched each and every value.
`let DicomDict = dcmjs.data.DicomMessage.readFile(arrayBuffer);`
This line converts the DICOM file from a buffer to an object. The output is an object which has `meta` and `dict` objects, and the same object calls the write function:
`DicomDict.write();`
I made the same JS object in the `DicomDict` variable as given in the example and tried it (without `let DicomDict = dcmjs.data.DicomMessage.readFile(arrayBuffer);`), and then called
`DicomDict.write();`
Error => `DicomDict.write() is not a function.`
It would be really appreciated if you could help me with this in the comments. Thanks.
username_1: Hi - maybe you already figured this out, but if not this code might help.
Basically this WIP code converts between part10 and json files. It doesn't handle PixelData or other data that should be base64 encoded but it works for other data.
https://github.com/dcmjs-org/dcmjs/blob/add-commander/bin/dcmjs-cli.js#L32-L43
username_0: But I need the PixelData as well.
username_1: Yes, I plan to add that to the dump/undump with base64 encode/decode according to the standard (or maybe as a separate file so the json is easier to manipulate).
In any case you should be able to just set PixelData to be the right kind of TypedArray.
I hope this is enough to get you going. If you get something working please post it here for future reference. Or if it's not working let me know and I can try to flesh out some more examples (obviously this project needs more documentation anyway 😄 )
username_0: I have written a new function and am passing JSON to it. (I hardcoded some meta just for testing purposes, and added pixel data to the 7fe00010 tag.) It writes the .dcm file, and the meta is correct, but viewers (e.g. RadiAnt) could not open it.
```js
function toArrayBuffer(buf) {
  var ab = new ArrayBuffer(buf.length);
  var view = new Uint8Array(ab);
  for (var i = 0; i < buf.length; ++i) {
    view[i] = buf[i];
  }
  return ab;
}

var meta = { '7fe00010': { vr: 'OB', Value: [] } };
var buff = Buffer.from(buffer['7fe00010'].InlineBinary);
meta['7fe00010'].Value.push(toArrayBuffer(buff));

var meta = {};
meta['00020002'] = { vr: 'UI', Value: [buffer['00080016'].Value[0]] };
meta['00020003'] = { vr: 'UI', Value: [buffer['00080018'].Value[0]] };
meta['00020010'] = { vr: 'UI', Value: ["1.2.840.10008.1.2"] };
meta['00020012'] = { vr: 'UI', Value: ["2.25.661331934852393026330823390029087831751"] };
meta['00020013'] = { vr: 'SH', Value: ["DICOMzero-0.0"] };

delete buffer['7fe00010'];

var dicomDict = new DicomDict(meta);
dicomDict.dict = buffer;
return dicomDict;
```
username_1: Here's some code to generate an image instance from scratch. The resulting file can be loaded in 3D Slicer, so it should be a good starting point for you:
https://github.com/dcmjs-org/dcmjs/blob/add-commander/examples/nodejs/generate.js
username_2: Anybody know how to convert a DICOM dataset to JSON?
```json
{
  "00080018": {
    "vr": "UI",
    "Value": ["1.3.12.2.1107.5.4.3.321890.19960124.162922.29"]
  },
  "00080020": {
    "vr": "DA",
    "Value": ["19511013"]
  }
}
```
...and so on.
username_1: The example linked above does this, but using a dataset with names (a `namified` version of the dicom json model).
You can see a round trip from part10 to json model back to edited part10 here:
https://github.com/dcmjs-org/dcmjs/blob/add-commander/examples/nodejs/readwrite.js
username_2: Thanks @username_1
Does dcmjs support converting a DICOM dataset to the DICOM JSON model?
As in: http://dicom.nema.org/dicom/2013/output/chtml/part18/sect_F.4.html
username_1: That should be `DicomDict.dict` in the example (I can't doublecheck right now - I'm away from my regular desk until next week).
https://github.com/dcmjs-org/dcmjs/blob/add-commander/examples/nodejs/readwrite.js#L9
What's your overall goal? You can also look at the dicomweb-server code which works with that form.
username_2: Thank you @username_1 . Let me check.
My overall goal is
I'd like to convert a DICOM dataset returned over DIMSE (using the dimse-dicom package) to the DICOM JSON model, for DICOMweb clients such as the OHIF viewer (React version).
username_1: Sounds good @username_2, let us know how it goes.
For reference, we are also looking at the option of building a DIMSE gateway on the server side as described in [this issue](https://github.com/dcmjs-org/dicomweb-server/issues/18). We are leaning toward the dcmtk-node option since dimse-dicom isn't actively developed at the moment.
username_2: Hi @username_1
1. I fixed my issues with converting a DICOM dataset to the DICOM JSON model by writing new code based on `DicomMetaDictionary.denaturalizeDataset()`.
2. DIMSE gateway
I could not get dimse-dicom to build yet, so I used the OHIF Meteor version to make a DIMSE gateway:
- Removed unused packages and files from OHIF
- Added new RESTful APIs (QIDO, WADO-RS) => not sure this is right!
- Tested the flow: OHIF React version <--> DIMSE gateway <--> CC PACS
DICOM can be displayed correctly!
3. Other
I will try researching https://github.com/jmhmd/dcmtk-node!
https://github.com/sync-for-science/dcmrs-broker is a DIMSE gateway using the dcm4che lib.
Currently only the retrieval of DICOM Part 10 objects is supported; the retrieval of bulk data and metadata is not supported. |
nolaneo/SeaEye | 157732229 | Title: Display build start time and / or estimated end time instead of current time
Question:
username_0: Displaying the current time is not very helpful. It would be great to display the estimated end time of a running build instead.
<img width="301" alt="screenshot 2016-05-31 19 55 47" src="https://cloud.githubusercontent.com/assets/135148/15684911/af17ffca-2769-11e6-854a-40456386e412.png"> |
tiangolo/uwsgi-nginx-flask-docker | 286409167 | Title: prestart.sh not run in Ubuntu 16.04
Question:
username_0: Hi,
I have encountered a problem regarding restart.sh. Not sure if it is because of my error or not.
I have a Ubuntu 16.04 server. If I execute `docker run username_1/uwsgi-nginx-flask:python3.6` on it, I get the following output:
```
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2018-01-05 21:23:59,700 CRIT Supervisor running as root (no user in config file)
2018-01-05 21:23:59,700 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2018-01-05 21:23:59,721 INFO RPC interface 'supervisor' initialized
2018-01-05 21:23:59,721 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-01-05 21:23:59,722 INFO supervisord started with pid 1
2018-01-05 21:24:00,725 INFO spawned: 'nginx' with pid 9
2018-01-05 21:24:00,727 INFO spawned: 'uwsgi' with pid 10
[uWSGI] getting INI configuration from /app/uwsgi.ini
[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
*** Starting uWSGI 2.0.15 (64bit) on [Fri Jan 5 21:24:00 2018] ***
compiled with version: 4.9.2 on 10 August 2017 16:32:21
os: Linux-4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016
nodename: cb776ddb69b0
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.6.2 (default, Jul 24 2017, 19:47:39) [GCC 4.9.2]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x238af00
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1237056 bytes (1208 KB) for 16 cores
*** Operational MODE: preforking ***
2018-01-05 21:24:01,792 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-05 21:24:01,792 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x238af00 pid: 10 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 10)
spawned uWSGI worker 1 (pid: 13, cores: 1)
spawned uWSGI worker 2 (pid: 14, cores: 1)
```
Answers:
username_1: You probably have an old version of the image in your remote server.
The [`prestart.sh` support was added recently](https://github.com/username_1/uwsgi-nginx-flask-docker#whats-new) (about a month ago), so that's quite possible.
Try updating the base image in your remote server:
```bash
docker pull username_1/uwsgi-nginx-flask:python3.6
```
Then re-run the command:
```bash
docker run username_1/uwsgi-nginx-flask:python3.6
```
Let me know how it goes.
Status: Issue closed
username_0: Yes, it works. Why hasn't that occurred to me...
In my own Dockerfile, I always just use `FROM username_1/uwsgi-nginx-flask:python3.6`. So I assumed it would always be up to date.
Thank you for the help and this awesome container!
username_1: Awesome! Thanks for reporting back and closing the issue. |
bietkul/react-reactive-form | 584493735 | Title: Edit form still invalid after update values in field
Question:
username_0: In Edit mode after update the field values form Invalid until we enter & change something in required fields.
Answers:
username_1: @username_0 Can you please share a code sandbox link to reproduce the issue?
If you're using the `patchValue` method to pre-fill the form then it should work fine. |
Adobe-Consulting-Services/adobe-consulting-services.github.io | 735451345 | Title: Oak Index for Audit Log Search is outdated
Question:
username_0: The index package provided does not use a compatVersion=2 Lucene index, which in my experience causes it to not always work properly. I'd suggest we update this to use an index definition more along the lines of
```
<cqAuditEvent
jcr:primaryType="oak:QueryIndexDefinition"
async="async"
compatVersion="{Long}2"
type="lucene">
<indexRules jcr:primaryType="nt:unstructured">
<cq:AuditEvent jcr:primaryType="nt:unstructured">
<properties jcr:primaryType="nt:unstructured">
<path
jcr:primaryType="nt:unstructured"
name="cq:path"
propertyIndex="{Boolean}true"/>
<time
jcr:primaryType="nt:unstructured"
name="cq:time"
propertyIndex="{Boolean}true"/>
<type
jcr:primaryType="nt:unstructured"
name="cq:type"
propertyIndex="{Boolean}true"/>
<user
jcr:primaryType="nt:unstructured"
name="cq:userid"
propertyIndex="{Boolean}true"/>
</properties>
</cq:AuditEvent>
</indexRules>
</cqAuditEvent>
``` |
polkadot-js/apps | 711504053 | Title: min amount to receive rewards
Question:
username_0: it would be useful for a nominator to be able to see the minimum amount for the validator to get rewards when choosing validators to nominate or to be able to search for validators that give rewards using the amount to filter.
Answers:
username_1: I'm not sure I understand the request. Nominators will get rewards per era if they are nominating at least one active validator (with less than 100% comm.). It might be very small if the bonded amount is very small.
The Network > Staking > Target gives you the profit / era, given a certain amount (this can be defined in the field "amount to use for estimation").
username_0: Yes, but if I'm a nominator with a small amount and many active validators (without 100% commission) are oversubscribed, I'll only receive rewards from them if my amount ranks within a limit (currently the top 256 on Polkadot and 128 on Kusama).
In the end I don't receive rewards if I'm the 257th nominator, or if the chosen validator has a 100% fee.
Many people decide to use services like Kraken because this phase is simpler there.
username_2: The issue here is that the assignments are determined by the validator election via Phragmén. For this there is no "single correct" result; rather each validator computes within the selection window, submits, and the best one gets selected for the next era.
Even if a validator has 512 nominators and these all end up nominating multiples, he may still end up with fewer than 256 (stake distributions during election).
And it obviously varies depending on the submissions, for instance currently (as of right now), there is not a single oversubscribed validator due to the last selection.
So it is tricky.
username_0: Thanks for the attention. This is a thought born from the constant questions about it in our chats, where most members are nominators with small amounts.
username_2: It makes sense and I always love the feedback. Technically, this is not actually quite doable as of now. However, the algorithm is changing (the generation that will go up in the next upgrade, as above on Polkadot, will yield fewer oversubscribed validators).
So while basically "impossible" (nothing ever is "impossible", so take that with a grain of salt), there may be other avenues to look at. For instance, we may not filter on stake, but may on numbers. (So at the moment targets show the nominators that are active; the actual number may be much higher.)
TL;DR: While the feature as suggested may not quite be possible, this does open up questions and, agreed, we need to do something to help out.
username_0: I like your ideas very much; they are a good starting point. This way we could change the nomination guide to explain it, bringing the solution closer.
username_2: Applicable discussion https://polkadot.polkassembly.io/post/114
username_2: Closing this since the current election does things in a different way (yet again). Lots of small 1.x DOT amounts bonded and getting paid.
Status: Issue closed
|
palantir/gradle-docker | 332105709 | Title: Pushing just the defined tags
Question:
username_0: Using version `0.19.2` with the following script snippet, when we run the `dockerPush` task we're seeing the plugin push _all_ images, not just the version specified in `tags` and the `latest` as defined in `docker.tags`. Is there a way to get it to only push these two tags?
```groovy
docker.name = 'mycompany/myname'
docker {
tags project.version, 'latest'
... other items omitted for brevity ...
}
tasks.dockerPush.dependsOn { tasks.dockerTag }
dockerPush {
dependsOn { tasks.dockerTag }
}
```
Answers:
username_1: @username_0
Simple solution that does not need support from the Plugin Authors, add the following to your build script:
```
private static String ucfirst(String str) {
StringBuffer sb = new StringBuffer(str);
sb.replace(0, 1, str.substring(0, 1).toUpperCase());
return sb.toString();
}
task dockerPublish {
for(tag in docker.tags)
dependsOn "dockerPush"+ucfirst(tag)
}
```
now running dockerPublish should only push the tags you mentioned
username_2: Just want to second this issue. I used the following configuration:
```
docker {
tags "bras-test1"
...
}
```
When I ran `gradle dockerPush` and checked my repository after, the repository contained `bras-test1`, `bras-test2`, `bras-test3`, `bras-test4`, and `bras-test`, most of which were test images I'd created at a previous time.
username_3: See PR #205, and use the `dockerTagPush` task to push the tagged image only, and the `dockerAllPush` task to push all |
NYU-DevOps-Fall2017-PromotionsTeam/promotions | 261883987 | Title: Query a Promo Based on Attributes
Question:
username_0: **As a** development team member
**I need** to set up 'query promo' functionality
**So that** a group of promo can be shown to users as the searching result
**Assumptions:**
* Attributes are given in standard format.
**Acceptance Criteria:**
```
Given a list of value following the standard attributes
When all query requirements are matched
Then return the result list of promo
```
Answers:
username_1: Some thoughts:
* On the Server side this could be done by reading the "request.args" dict from the Flask request and then just passing that dict to the Promotion model.
* On the Model side, you'd probably just have to write some code that loops over all items in the List and checks to see if each promotion has the parameters defined by the dict passed in.
* When we begin using a real Database, this could easily be done with a where clause in raw SQL
```
SELECT * FROM Promotions
WHERE Promotions.promo_type="$" AND Promotions.value < 10
```
you get the picture...
but I'm sure sqlalchemy has some sort of Python interface that abstracts away the raw SQL, so we can save SQL stuff for the next sprint.
So for now, we just need a simple cover that can do some basic filtering on the list....
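A sketch of that in-memory filter (in Python; the promotions are modeled here as plain dicts purely for illustration, not the actual model code):

```python
def query_promotions(promotions, params):
    """Return the promotions whose attributes match every key/value in params.

    promotions: list of dicts, one per promotion
    params: dict parsed from Flask's request.args, e.g. {"promo_type": "$"}
    """
    def matches(promo):
        # Compare as strings since request.args values always arrive as strings.
        return all(str(promo.get(key)) == str(value) for key, value in params.items())

    return [promo for promo in promotions if matches(promo)]

promos = [
    {"id": 1, "promo_type": "$", "value": 5},
    {"id": 2, "promo_type": "%", "value": 10},
]
print(query_promotions(promos, {"promo_type": "$"}))
# [{'id': 1, 'promo_type': '$', 'value': 5}]
```

An empty params dict matches everything, which conveniently makes "no query string" mean "list all".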
Status: Issue closed
|
pytorch/pytorch | 1052354126 | Title: CI: test cases run in subprocesses cannot be disabled
Question:
username_0: For more context please read https://github.com/pytorch/pytorch/issues/68173#issuecomment-967153656
This is bad because most distributed tests are now run in a subprocess, and they cannot be disabled by an issue currently. |
smallrye/smallrye-config | 917001484 | Title: Error in trying to implement HOCON Config Source for Quarkus
Question:
username_0: Hello,
I am trying to configure my Quarkus application to use the HOCON file format for its ConfigSource implementation.
Here are the steps that I took:
1) Add the following to my pom.xml:

```xml
<dependency>
    <groupId>io.smallrye.config</groupId>
    <artifactId>smallrye-config-source-hocon</artifactId>
    <version>1.10.0</version>
</dependency>
```
2) Created a file with the following content and saved it as META-INF/microprofile-config.conf:

```json
{
  "accepted.counties": [
    {"name": "Fiji", "code": "FJ"},
    {"name": "Finland", "code": "FI"},
    {"name": "France", "code": "FR"},
    {"name": "French Guiana", "code": "GF"},
    {"name": "French Polynesia", "code": "PF"},
    {"name": "French Southern Territories", "code": "TF"},
    {"name": "Gabon", "code": "GA"},
    {"name": "Gambia", "code": "GM"}
  ]
}
```
I am getting `com.typesafe.config.ConfigException$WrongType`: "accepted.counties" has type LIST rather than STRING.
Which step am I getting wrong?
Kind Regards,
Nweike
Answers:
username_1: I think it is because only [ConfigSyntax.CONF](https://github.com/smallrye/smallrye-config/blob/1110d8022dd9ed6af233b3d10c1789f00b0f7454/sources/hocon/src/main/java/io/smallrye/config/source/hocon/HoconConfigSourceProvider.java#L19) is supported ?
username_2: The issue seems to be here:
https://github.com/smallrye/smallrye-config/blob/c2b3707e1dd6d90b9d3ed367e415bb43e4f7c320/sources/hocon/src/main/java/io/smallrye/config/source/hocon/HoconConfigSource.java#L16
All values are being read as Strings. Let me see what I can do.
@username_0 do you have more complex examples that I can use to try and test?
username_0: Hello @username_2 ,
Right now my use case is any JSON object. At the moment I am passing a JSON array; I think that's as complicated as it gets.
username_2: Ok. Thank you!
Do you mind if I ask why you prefer to use a HOCON source instead of the properties one, or even YAML? Do you want to have your configuration in JSON?
username_2: @username_0 Please check #599. Most likely we still need some more work there, but at least lists should be working.
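For context, the general idea behind mapping a nested document like the one in the question onto flat config keys is to index the list elements. This is only an illustration of the concept in Python, not SmallRye Config's actual implementation, and the exact key syntax SmallRye uses may differ:

```python
def flatten(node, prefix=""):
    """Flatten nested dicts/lists into dotted, indexed keys,
    e.g. accepted.counties[0].name."""
    flat = {}
    if isinstance(node, dict):
        for key, value in node.items():
            path = f"{prefix}.{key}" if prefix else key
            flat.update(flatten(value, path))
    elif isinstance(node, list):
        for index, value in enumerate(node):
            flat.update(flatten(value, f"{prefix}[{index}]"))
    else:
        flat[prefix] = str(node)  # config values end up as strings
    return flat

config = {"accepted": {"counties": [{"name": "Fiji", "code": "FJ"}]}}
print(flatten(config))
# {'accepted.counties[0].name': 'Fiji', 'accepted.counties[0].code': 'FJ'}
```

With the whole tree flattened this way, a list-valued node never has to be read as a single string, which is the failure mode in the reported exception.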
Status: Issue closed
|
JanDeDobbeleer/oh-my-posh | 1064942682 | Title: Windows Terminal - Green vertical bar between glyphs
Question:
username_0: ### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### What happened?

There is a strange vertical line on some glyphs. I'm using the recommended MesloLGM NF font.
### Theme
PowerLevel10k - Rainbow
### What OS are you seeing the problem on?
Windows
### Which shell are you using?
powershell
### Log output
```shell
Version: 6.17.0
Segments:
ConsoleTitle(true) - 0 ms - pwsh in ~
os(true) - 0 ms - ╭─
path(true) - 0 ms - ~
git(false) - 3 ms -
node(false) - 0 ms -
go(false) - 0 ms -
julia(false) - 0 ms -
python(false) - 1 ms -
ruby(false) - 0 ms -
azfunc(false) - 0 ms -
aws(false) - 0 ms -
root(false) - 0 ms -
executiontime(true) - 0 ms - ﮫ9.001s⠀
executiontime(true) - 0 ms - 9.001s
exit(true) - 0 ms - 1337
time(true) - 0 ms - 05:56:35 ─╮
text(true) - 0 ms - ╰─
text(true) - 0 ms - ─╯
Run duration: 5.6296ms
Logs:
2021/11/27 05:56:35 debug: getenv
C:\Users\shjor\AppData\Local
2021/11/27 05:56:35 getenv duration: 0s, args: LOCALAPPDATA
2021/11/27 05:56:35 getCachePath duration: 0s, args:
2021/11/27 05:56:35 getArgs duration: 0s, args:
2021/11/27 05:56:35 getArgs duration: 0s, args:
2021/11/27 05:56:35 getShellName duration: 11.3186ms, args:
2021/11/27 05:56:35 debug: getenv
[Truncated]
2021/11/27 05:56:35 getcwd duration: 0s, args:
2021/11/27 05:56:35 lastErrorCode duration: 0s, args:
2021/11/27 05:56:35 getcwd duration: 0s, args:
2021/11/27 05:56:35 getcwd duration: 0s, args:
2021/11/27 05:56:35 isRunningAsRoot duration: 0s, args:
2021/11/27 05:56:35 getcwd duration: 0s, args:
2021/11/27 05:56:35 getPathSeperator duration: 0s, args:
2021/11/27 05:56:35 getPathSeperator duration: 0s, args:
2021/11/27 05:56:35 getShellName duration: 0s, args:
2021/11/27 05:56:35 getCurrentUser duration: 0s, args:
2021/11/27 05:56:35 getHostName duration: 0s, args:
2021/11/27 05:56:35 getcwd duration: 0s, args:
2021/11/27 05:56:35 isRunningAsRoot duration: 0s, args:
2021/11/27 05:56:35 getcwd duration: 0s, args:
2021/11/27 05:56:35 getPathSeperator duration: 0s, args:
2021/11/27 05:56:35 getPathSeperator duration: 0s, args:
2021/11/27 05:56:35 getShellName duration: 0s, args:
2021/11/27 05:56:35 getCurrentUser duration: 0s, args:
2021/11/27 05:56:35 getHostName duration: 0s, args:
```
Answers:
username_1: I noticed something similar in #1327, so maybe the same solution works: try increasing/decreasing your font size by 1.
username_2: I don't see the vertical line in the screenshot though, but if it's indeed the same issue as @username_1 was experiencing, changing the font size might positively affect it.
username_0: It's between the Windows glyph and the right-sided triangle. Adjusting the font size didn't help. I tried from 9 to 14.
username_0: For comparison, here is the terminal at font size 14:

username_2: @username_0 I would ask for assistance as we can't solve that for you (oh-my-posh can't influence that rendering).
Status: Issue closed
|
Sitecore/Sitecore-Instance-Manager | 684232919 | Title: Reinstall 9.3 xm1 fails
Question:
username_0: **Scenario:**
1. Install Sitecore 9.3 xm1;
2. Reinstall it;
3. Check connection strings the CD server;
**Results**
- after step #1 login/pass in 'security' and 'web' connection strings are correct;
- after step #3 login/pass in 'security' and 'web' connection strings are wrong (and are different from the connection strings of the CM instance).
Answers:
username_1: @username_0 fixed
Status: Issue closed
|
MediaMath/t1-python | 190098002 | Title: Deals microservice support
Question:
username_0: With the deals microservice we need to be able to support different api_bases, url parameters and response bodies per entity type. At the moment all `get()`s of t1 entities go to the 'mgmt' path.
Proposed changes:
- introduce a lookup map of entity name to path name; use this to look up the correct `API_BASE` for each entity name
- create an intermediate specialised 'Service' class (name TBD, we already have a `service` module) to deal with URL building, parameter validation and entity instantiation
- move all URL parameters into a 'request_params' dict passed into each Service's HTTP call functions
This of course raises a deprecation issue, both for the existing deals endpoint (and any future endpoints which will be decomposed into a separate service) and its support in t1-python, and for a potential new interface exposed to t1-python (at the very least we are instantiating options dicts instead of a param list, and we could potentially call each service directly). Open to discuss how this should look.
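The lookup map from the first proposal could be as small as this sketch (the service path values here are placeholders for illustration, not the real endpoint names):

```python
# Hypothetical mapping of entity name -> API base path; anything not
# listed falls back to the existing management endpoint.
SERVICE_PATHS = {
    "deal": "deals/v1.0",
    "deal_group": "deals/v1.0",
}
MGMT_PATH = "mgmt"

def api_base(entity_name):
    """Return the API base path to use for a given entity name."""
    return SERVICE_PATHS.get(entity_name, MGMT_PATH)

print(api_base("deal"))      # deals/v1.0
print(api_base("campaign"))  # mgmt
```

Because unknown names fall through to the management path, existing entities keep working unchanged while new service-backed entities just add a map entry.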
Answers:
username_1: +1 on Lookup map of entity names.
'uri helpers' for the service name
How will the functions developers use change when switching to the new service? Ideally this should be an infrastructure change, rather than one that requires changes by SDK users.
username_2: Any updates on this?
username_0: @username_1 I don't think there'll be any change to the public interface, no. `t1.get()` can delegate, I think.
@username_2 working on it!
username_0: @username_2 release 1.6.0 has preliminary support for Deals.
Status: Issue closed
|
suhdonghwi/haneul | 688471487 | Title: Add a Language Server so the language can be supported in multiple IDEs
Question:
username_0: It would be great if a Language Server were added so that the Nuri (누리) language could be supported in other IDEs!
https://microsoft.github.io/language-server-protocol/
If we build a Language Server, language support becomes available in any IDE that supports LS (for example VSCode, Vim, IntelliJ).
---
It would be a huge challenge, but I think it would be worth attempting as a way to take the project one step further.
Answers:
username_1: I'd really love to try it, but it really is a huge challenge! Thanks for the great suggestion 😎 |
256dpi/gomqtt | 270277361 | Title: Service automatic resubscription
Question:
username_0: It should be possible to configure the service to automatically resubscribe to topics if required.
Answers:
username_1: Just use a `map[string]packet.Subscription` to record all subscriptions, with the topic as the key. If the topics are the same, the new one will replace the old one.
[overlap]
When Clients make subscriptions with Topic Filters that include wildcards, it is possible for a Client’s subscriptions to overlap so that a published message might match multiple filters. In this case the Server MUST deliver the message to the Client respecting the maximum QoS of all the matching subscriptions [MQTT-3.3.5-1]. In addition, the Server MAY deliver further copies of the message, one for each additional matching subscription and respecting the subscription’s QoS in each case.
[identical]
If a Server receives a SUBSCRIBE Packet containing a Topic Filter that is identical to an existing Subscription’s Topic Filter then it MUST completely replace that existing Subscription with a new Subscription. The Topic Filter in the new Subscription will be identical to that in the previous Subscription, although its maximum QoS value could be different. Any existing retained messages matching the Topic Filter MUST be re-sent, but the flow of publications MUST NOT be interrupted [MQTT-3.8.4-3].
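Sketched in Python for brevity, the bookkeeping username_1 describes (a Go `map[string]packet.Subscription` in gomqtt's case) amounts to something like this; the class and method names are illustrative only:

```python
class SubscriptionStore:
    """Track active subscriptions by topic filter so they can be replayed on reconnect."""

    def __init__(self):
        self._subs = {}  # topic filter -> qos

    def add(self, topic, qos):
        # An identical topic filter replaces the previous subscription,
        # mirroring the broker-side rule quoted above (MQTT-3.8.4-3).
        self._subs[topic] = qos

    def remove(self, topic):
        self._subs.pop(topic, None)

    def resubscribe_list(self):
        # What to replay after a reconnect.
        return sorted(self._subs.items())

store = SubscriptionStore()
store.add("sensors/#", 0)
store.add("sensors/#", 1)  # replaces the QoS 0 subscription
print(store.resubscribe_list())  # [('sensors/#', 1)]
```

Keying on the raw topic filter string means overlapping filters (e.g. `a/#` and `a/b`) are kept as distinct subscriptions, which matches the overlap semantics quoted above.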
username_0: Added in release `v0.9.0`.
Status: Issue closed
|
ant-design/ant-design | 650099584 | Title: Improvement to Date Picker
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
Makes it more intuitive and natural.
### What does the proposed API look like?
Expected Behavior:
=================
The purpose of allowing users to type the dates in manually is to make it easier for them. The glitches and restrictions listed here don't necessarily make it easier.
`Issue 1`. Tab key should work to navigate from field to field, and it should not wipe out the date values.
`Issue 2`. Navigating with the mouse clicks also should not wipe out the date values.
`Issue 3`. The manual date entry should work for the format mm/dd/yyyy when the user enters something like 1/1/2019 or 01/3/18
These examples refer specifically to using Start and End Date together.
`Issue 1`: Entry does not persist when navigating with the tab key
1. Enter Start Date by typing it
2. Press the tab key. <— this is what users normally do when filling in forms to move from one field to another
3. It doesn’t do anything the first time. Press the tab key again.
4. Observe the cursor now moved to the End Date, and also Start Date entry gets wiped out.
`Issue 2`: Entry does not persist when using the mouse either
1. Enter Start Date by typing it, do not press enter or tab.
2. Instead use the mouse pointer to click on the End Date.
3. Type in the End Date manually also instead of selecting a date from the calendar popup.
4. Press enter
5. Observe how dates are wiped out
`Issue 3`: Entry persists when pressing the Enter key but the format is restricted
1. Ensure date formats are set to MM/DD/YYYY
2. Start typing in the Start Date manually: 3/3/2020
3. Press the Enter key, and observe that the entry gets wiped out.
4. Enter the Start Date again and this time use: 03/03/2020
5. Press the Enter key, and observe that everything works this time.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers:
username_1: Looks very similar to https://github.com/ant-design/ant-design/issues/25403
These issues would be great to have fixed, thank you for writing it up
username_0: Thanks @username_1 I see [#25403](https://github.com/ant-design/ant-design/issues/25403#issue-650065414) and it seems to be related to the arrow usage. I have tried to include a much wider scope to make the usage more intuitive, especially when using the keyboard. Hopefully it can be looked into.
username_0: Thank you for acknowledging it. I am tight on cycles and am extremely unlikely to pick this up.
But I see you already have the help wanted tag and hopefully someone else can step in.
Fingers Crossed |
baumblatt/capacitor-firebase-auth | 1117667896 | Title: Step 2 Issues
Question:
username_0: I'm having issues with Step 2:
In file android/app/src/main/java/.../MainActivity.java add the reference to the Capacitor Firebase Auth plugin inside the Bridge initialization.
```java
[...]
import com.baumblatt.capacitor.firebase.auth.CapacitorFirebaseAuth;
// Initializes the Bridge
this.init(savedInstanceState, new ArrayList<Class<? extends Plugin>>() {{
    // Additional plugins you've installed go here
    // Ex: add(TotallyAwesomePlugin.class);
    add(CapacitorFirebaseAuth.class);
}});
[...]
```
I put the code in the MainActivity.java but it throws errors. My MainActivity.java looks like this after I've added it.
```java
package io.ionic.goal_tracker;

import com.getcapacitor.BridgeActivity;
import com.baumblatt.capacitor.firebase.auth.CapacitorFirebaseAuth;

// Initializes the Bridge
this.init(savedInstanceState, new ArrayList<Class<? extends Plugin>>() {{
    // Additional plugins you've installed go here
    // Ex: add(TotallyAwesomePlugin.class);
    add(CapacitorFirebaseAuth.class);
}});

public class MainActivity extends BridgeActivity {}
```
Any clarification would be much appreciated, thank you!
Answers:
username_1: I think they changed it so that you don't have to initialize this anymore. It will do it automatically.
```java
package com.mypackage.app;

import com.getcapacitor.BridgeActivity;

public class MainActivity extends BridgeActivity {
}
```
Ryujinx/Ryujinx-Games-List | 772380322 | Title: Peasant Knight
Question:
username_0: ## Peasant Knight
#### Game Update Version : 1.0.0
#### Current on `master` : 1.0.6127
Game plays at a high FPS without issues.
#### Hardware Specs :
##### CPU: AMD Ryzen 5 2600
##### GPU: NVIDIA GTX 1660 SUPER
##### RAM: 16GB
#### Screenshots :




#### Log file :
[PeasantKnight.log](https://github.com/Ryujinx/Ryujinx-Games-List/files/5725899/PeasantKnight.log)
Answers:
username_1: . |
jelhan/croodle | 50312597 | Title: [Layout][/#/create | /#/create/meta | #/create/options ] - do not use full monitor resolution for small forms
Question:
username_0: In #/create/options:
Do not use full monitor resolution to present creation forms, because the needed space ist kind of fixed. Please use a box big enough for the text in the middle of the monitor instead.<issue_closed>
Status: Issue closed |
eifinger/open_route_service | 728179595 | Title: Add config_flow
Question:
username_0: **Is your feature request related to a problem? Please describe.**
The configuration is not configurable via UI
**Describe the solution you'd like**
**Describe alternatives you've considered**
**Additional context** |
MelvorIdle/melvoridle.github.io | 577588958 | Title: Popover stays open when bank items are rearranged
Question:
username_0: This bug was DM'd to me. Posting here for tracking.
**Describe the bug**
Rearranging bank items when the popover menu is open keeps the menu open, causing weird and unintentional behavior.
**To Reproduce**
Steps to reproduce the behavior:
1. Click on a bank item
2. Click and drag that item to a different location
3. The popover will stay on the screen until you refresh.
**Expected behavior**
The popover should disappear upon moving a bank item.
**Screenshots**
https://i.imgur.com/ugt1FBE.gifv
**Browser**
Unsure
**Console output**
Unsure<issue_closed>
Status: Issue closed |
fbatioja/pruebas_e2e | 904491688 | Title: Week 8 - Processing indicator
Question:
username_0: **Processing indicator**
The "Sign in" button's gif, which indicates that a request is in progress while authenticating with the system, stays in an infinite loading state even after the server has returned its response; this happens when access is denied.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'http://localhost:2368/ghost/#/signin'
2. Fill in the form fields with invalid values
3. Click the 'Sign in' button
4. Observe the gif spinning
**Expected behavior**
The sign-in button should be hidden or set to "loading = false" when an unsuccessful response is returned.
**Screenshots**
[Screenshot](https://github.com/username_0/ghost-issues/blob/main/issuesImgs/signing-loadstate-issue.PNG)
**Environment (please complete the following information):**
- Device: [Desktop]
- OS: [Windows 10]
- Browser [Chrome 89.0.4389.114]
- Version [Ghost 3.42.5] |
bytecodealliance/wasmtime | 537821673 | Title: Optionally print backtraces after error
Question:
username_0: When I run `cargo run --package wasmtime-cli --bin wast -- --enable-simd -d tests/spec_testsuite/proposals/simd/simd_conversions.wast` I see the following error (as I expect):
```
for directive on tests/spec_testsuite/proposals/simd/simd_conversions.wast:3:1
Caused by:
0: Failed to setup a module
1: Validation error: module did not validate: Unknown 0xfd opcode (at offset 629)
```
The error chain printing is great but I thought I would be able to add the actual backtrace to this output by setting `RUST_BACKTRACE=1` in the environment. Unfortunately, I am either doing this wrong or we need to add some code to `src/wast.rs`, `src/wasmtime.rs`, and `src/wasm2obj.rs`. @username_1, @joshtriplett: any thoughts on this?
Answers:
username_1: I believe that if you use the nightly toolchain and `RUST_BACKTRACE=1` this should work? Can you try testing that out and see if it produces a good enough backtrace for your use case?
username_0: Cool, that worked!
```
cargo +nightly run --package wasmtime-cli --bin wast -- --enable-simd -d tests/spec_testsuite/proposals/simd/simd_conversions.wast
```
Status: Issue closed
|
SaucyPigeon/RW-Realistic-Planets-Fan-Update | 572992520 | Title: Feature to disable new biomes provided by this mod
Question:
username_0: brightsideguy 2 Feb @ 12:39pm
Would there be a way in the XML files for me to (safely) disable the new biomes this creates? I really want to try using only another mod's biomes while still utilizing the axial tilt + arid/watery sliders that this fantastic mod has.
Status: Issue closed
Answers:
username_0: It may be better to add compatibility for other mods by using their biomes instead. Otherwise, it would create too many cases that are hard to test.
legalscienceio/monqcle | 239233404 | Title: None
Question:
username_0: Google already crawled this prior to fixing the problem (adding "nofollow" to hidden links).
Answers:
username_0: Will request that Google remove the link, though they may not comply.
username_0: Assigning and reassigning label, using this issue to test webhook
username_0: so I'll make a lot of comments from me to the bot.
username_0: so one more comment now that I've set daemon to run all the time.
Status: Issue closed
username_0: Closing this out.
Made request with Google, nofollow exists, shouldn't happen again. |
webpack-contrib/mini-css-extract-plugin | 308615412 | Title: Allow defining which chunks will be loaded dynamically
Question:
username_0: It would be useful to have means to define which chunks are loaded dynamically and which are not. I did a small implementation that allows this:
```javascript
...
module.exports = {
plugins: [
new CssExtractPlugin({
loadAsync: chunk => chunk.modulesSize() >= 200000, // condition
captureCss: CssExtractPlugin.writeManifest('css-extract.json') // action
}),
]
};
```
I'm not sure of the naming but basically this does the trick for me. I write a manifest which I use later to inline the CSS but you could easily write a CSS file through webpack here as well. You can find my code [here](https://gist.github.com/username_0/19482ac8de5e06068c65e1ee648f9b96).
Let me know if you want a PR.
Answers:
username_1: All this chunking is getting so complicated. Could the new `splitChunks` option not unify all the chunking across the project, so you define a css group and use the `minChunks` option (in your example - `loadAsync`) in order to determine if it should be chunked? Seems there is so much duplicate of functionality across webpack which is why it's getting so complex.
username_0: @username_1 I think in the ideal case this plugin wouldn't be needed. I hope webpack 5 simplifies this. I had to implement the code above by necessity. 👍
username_2: @username_0 so were are already waiting for Webpack 5 to solve things that Webpack 4 didn't address 😜 . Thanks for the example, I've resorted to using `react-loadable` and inlining `link` tags for my page specific chunks. This results in double loading CSS though when the loaded async JS inserts the `link` tag for the async CSS. This wouldn't be a problem if I wasn't doing isomorphic rendering of JS but because I am the async loaded CSS causes an initial rendering of unstyled HTML for the page specific components.
username_0: @username_2 I have an SSR use case too. First I write a manifest through my extension and then use it to write critical CSS inline into the initial HTML. This works well.
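The inlining step username_0 describes might look roughly like this. The manifest name and shape below are assumptions for illustration, not the actual output of the gist's plugin:

```javascript
// Sketch of the SSR critical-CSS inlining step: read the manifest written
// during the build (here hardcoded; in practice something like
// require('./css-extract.json')) and splice a <style> tag into the HTML.
const criticalManifest = {
  'main.css': 'body{margin:0}', // assumed manifest entry, for illustration only
};

function inlineCriticalCss(html, manifest) {
  const css = Object.values(manifest).join('\n');
  return html.replace('</head>', `<style>${css}</style></head>`);
}

const page = inlineCriticalCss(
  '<html><head></head><body></body></html>',
  criticalManifest
);
console.log(page.includes('<style>body{margin:0}</style>')); // true
```

The server can then skip emitting `<link>` tags for the chunks it already inlined.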
username_2: @username_0 that sounds cool and similar to what I’m doing. How do you prevent the request for the chunked CSS that you’ve already inlined? Your plugin looks cool but I was hoping to do this with the mini-css-extract-plugin as I thought it was going to be the maintained solution moving forward.
username_3: @username_2 @username_5 added this a day or two ago https://github.com/webpack-contrib/mini-css-extract-plugin#using-preloaded-or-inlined-css
username_2: @username_3 thanks for the heads up on this....it works great. Only problem is to get it to work I need to compile the `dist` directory for the plugin. @username_5 Wondering when we can expect a release?
username_4: Today there will be a release, also with a fix for the incremental problem.
username_5: It's released, but the incremental problem is not fixed yet, as it required fixes to webpack too.
username_5: Regarding the original issue. This plugin just enables webpack to use CSS in chunks. You can use the usual webpack plugins/options to modify the chunk graph. i. e. `splitChunks` or `AggressiveMergingPlugin`.
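For reference, username_5's pointer might translate into something like the following webpack config sketch. Option names follow `splitChunks`, but treat the exact values as assumptions rather than a recommendation:

```javascript
// Sketch only: route all CSS into a single cache group, and let size
// thresholds influence which chunks get split out (and thus loaded on demand).
const config = {
  optimization: {
    splitChunks: {
      cacheGroups: {
        styles: {
          name: 'styles',
          test: /\.css$/,
          chunks: 'all',
          minSize: 200000, // mirrors the 200 kB condition from the loadAsync example
        },
      },
    },
  },
};

module.exports = config;
```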
Status: Issue closed
username_5: Inlining could be a html-webpack-plugin**-plugin**. I guess there is already one. |
commercetools/commercetools-payone-integration | 125412836 | Title: Extract key parameters into configuration properties
Question:
username_0: It could be useful to extract currently hardcoded parameters into configuration properties.
for example "ScheduledJob": sinceDate = ZonedDateTime.now().minusDays(2) (get all messages since 2 days ago)
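A minimal sketch of how the lookback could be made configurable. The property name `scheduledjob.since.days` is invented here for illustration; it is not an existing key in this integration:

```java
import java.time.ZonedDateTime;

// Sketch: replace the hardcoded minusDays(2) with a configurable value,
// falling back to the current behaviour when nothing is configured.
public class ScheduledJobConfig {
    static final int DEFAULT_SINCE_DAYS = 2;

    static int sinceDays(String configured) {
        if (configured == null || configured.isEmpty()) {
            return DEFAULT_SINCE_DAYS; // current hardcoded behaviour
        }
        return Integer.parseInt(configured);
    }

    static ZonedDateTime sinceDate(String configured) {
        return ZonedDateTime.now().minusDays(sinceDays(configured));
    }

    public static void main(String[] args) {
        // e.g. pass -Dscheduledjob.since.days=5 on the JVM command line
        System.out.println(sinceDate(System.getProperty("scheduledjob.since.days")));
    }
}
```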
Answers:
username_1: Not needed anymore, the scheduledJob was removed from the payment adapter
MicrosoftDocs/OfficeDocs-SharePoint | 498491479 | Title: Missing errors
Question:
username_0: Missing errors:
"Failed","SERVER FAILURE","Errors or timeout for Server Processing the file:Job Fatal Error","0x01710006"
"Failed","SERVER FAILURE","Errors or timeout for Server Processing the file:Not all the items in the package have been migrated","0x01710009"
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: cfa08336-690e-0d21-6491-3f14f6d13391
* Version Independent ID: 8927975a-7956-fc72-055e-36603e4eddaf
* Content: [Troubleshooting SharePoint Migration Tool](https://docs.microsoft.com/en-us/sharepointmigration/troubleshooting-common-spmt-issues#feedback)
* Content Source: [migration/troubleshooting-common-spmt-issues.md](https://github.com/MicrosoftDocs/OfficeDocs-SharePoint/blob/live/migration/troubleshooting-common-spmt-issues.md)
* Product: **sharepoint-server-itpro**
* GitHub Login: @JoanneHendrickson
* Microsoft Alias: **jhendr**
Answers:
username_1: @username_0 Thank you for submitting feedback and contributing to the docs. We are currently investigating this.
username_2: Another missing error:
"Failed","UPLOADING FAILURE","Failed to Upload the Job to Server:Upload file failed during migration","0x01510001"
username_1: @username_0 Thank you for submitting feedback. We understand that this issue has been resolved.
Please feel free to re-open this issue if there is a specific area of the docs that we can improve or make better. Thank you.
Status: Issue closed
|
chingu-voyage5/Bears-Team-3 | 327704290 | Title: Non-Profit Details
Question:
username_0: # Story
As a user, I want to learn more about a specific non-profit so that I can gather more specific information.
# Acceptance Criteria
GIVEN some precondition
AND some other precondition
WHEN some action by the actor
AND some other action
THEN some testable outcome is achieved
AND something else we can check happens too
# Notes
## Non-Profit Details
Answers:
username_0: @ElleeB Can you add the non-profit details (name, phone number, website, etc.) to the GitHub issue? |
Yoast/plugin-development-docker | 666908433 | Title: Slow docker possible cause
Question:
username_0: With 10.000 files in my wp-content/plugins directory, Docker slowed down to a crawl. Removing these (empty, non extension) files sped up the site significantly.
My guess is that the node_modules and other folders with a lot of files in the plugin repos might cause this slowness as well. If we can exclude them from mounting in the Docker container, maybe we can speed things up. |
malensek/3RVX | 444509554 | Title: Request: How about mute MIC?
Question:
username_0: Hi guys,
Not an issue really, a feature request. Dont know íf there is a proper procedure for that.
But how about to mute microphone? That would be great.
I use a desktop mic without mute button, so doing a conf call would be really useful to be able to mute the microphone as a feature
Thanks and regards
Answers:
username_1: I would disable the hotkey feature for mic mute in 3RVX because my keyboard has a key mapped to that function at the system level, **BUT** I would absolutely _love_ to have a floating (always-on-top) mic mute indicator!
My vision is that it would always show if an application is using the mic, and pop up / fade out on status change if nothing is using the input. Honestly, not being able to find anything like that already in existence, I'm considering trying to learn how to build a simple Windows app to do it myself, but it'd also be great to have built-in to this fairly well established project.
username_2: +1.
Totally agree with this feature suggestion. I've been using 3RVX "Mute Sound" feature for years, "Toggle Microphone" would be a very nice addition. |
seperman/deepdiff | 1086626507 | Title: [Question] Is deepdiff stable across different machine and python versions?
Question:
username_0: Thanks for your amazing package.
I just wonder whether `deepdiff` is stable across different machines and Python versions?
Answers:
username_1: Hi @username_0
You can see the testing matrix here: https://github.com/username_1/deepdiff/blob/master/.github/workflows/main.yaml#L14
Thanks
Status: Issue closed
|
apache/trafficcontrol | 639096851 | Title: Fix type query parameter for servers API
Question:
username_0: <!--
************ STOP!! ************
If this issue identifies a security vulnerability, DO NOT submit it! Instead, contact
the Apache Software Foundation Security Team at <EMAIL> and follow the
guidelines at https://www.apache.org/security/ regarding vulnerability disclosure.
-->
<!--
- For *SUPPORT QUESTIONS*, use the
[Traffic Control slack channels](https://traffic-control-cdn.slack.com) or [Traffic Control mailing lists](http://trafficcontrol.apache.org/mailing_lists/).
- Before submitting, please **SEARCH GITHUB** for a similar issue or PR. -->
## I'm submitting a ...
<!-- (check all that apply with "[x]") -->
<!--- security vulnerability (STOP!! - see above)-->
- [x] bug report
- [ ] new feature / enhancement request
- [ ] improvement request (usability, performance, tech debt, etc.)
- [ ] other <!--(Please do not submit support requests here - see above)-->
## Traffic Control components affected ...
<!-- (check all that apply with "[x]") -->
- [ ] CDN in a Box
- [ ] Documentation
- [ ] Grove
- [ ] Traffic Control Client
- [ ] Traffic Monitor
- [x] Traffic Ops
- [ ] Traffic Ops ORT
- [ ] Traffic Portal
- [ ] Traffic Router
- [ ] Traffic Stats
- [ ] Traffic Vault
- [ ] unknown
## Current behavior:
<!-- Describe how the bug manifests / how the current features are insufficient. -->
When a user performs a GET on `https://{{TO_BASE_URL}}/api/3.0/servers?type=MID` with any type (MID, EDGE, ...),
the API returns a 500 Internal Server Error:
```
{
"alerts": [
{
"text": "Internal Server Error",
"level": "error"
}
]
}
```
## Expected / new behavior:
<!-- Describe what the behavior would be without the bug / how the feature would improve Traffic Control -->
The API should return all servers of the requested type.
## Minimal reproduction of the problem with instructions:
<!--
If the current behavior is a bug or you can illustrate your feature request better with an example,
please provide the *STEPS TO REPRODUCE* and include the applicable TC version.
-->
## Anything else:
<!-- e.g. stacktraces, related issues, suggestions how to fix -->
Answers:
username_1: I believe this is duplicated by #4789, can that one be closed?
username_1: actually, I take that back. The dsId issue has a different root cause.
username_0: @username_1 sure
Status: Issue closed
|
rafaqz/GeoData.jl | 975896908 | Title: Plot a georeferenced image as image
Question:
username_0: Is it possible to plot a georeferenced image as an image? This is an RGB image.
Running
```
using GeoData, Plots, ArchGDAL
fl = download("https://data.geo.admin.ch/ch.swisstopo.swissimage-dop10/swissimage-dop10_2020_2671-1161/swissimage-dop10_2020_2671-1161_2_2056.tif", "ortho.tif")
a = geoarray(fl)
plot(a)
```
gives

but I expect

(the image extents are only approximate)
`PyPlot.imshow(a[1:2:end,1:2:end,:])` works (minus the missing coordinates of course).
Here the type of `a`:
```
julia> typeof(a)
GDALarray{UInt8, 3, String, Tuple{X{LinRange{Float64}, Projected{Ordered{ForwardIndex, ForwardArray, ForwardRelation}, Regular{Float64}, Intervals{Start}, WellKnownText{GeoFormatTypes.CRS, String}, Nothing}, Metadata{:GDAL, Dict{Any, Any}}}, Y{LinRange{Float64}, Projected{Ordered{ReverseIndex, ReverseArray, ForwardRelation}, Regular{Float64}, Intervals{Start}, WellKnownText{GeoFormatTypes.CRS, String}, Nothing}, Metadata{:GDAL, Dict{Any, Any}}}, Band{UnitRange{Int64}, Categorical{Ordered{ForwardIndex, ForwardArray, ForwardRelation}}, NoMetadata}}, Tuple{}, Symbol, Metadata{:GDAL, Dict{Symbol, Any}}, Nothing, Tuple{Int64, Int64, Int64}}
````
Answers:
username_1: Sure. I think ArchGDAL has image handling now too. The question is how to set/detect that a particular tiff can be shown as an image and how to handle plotting it.
1. a command to load a file as an image?
2. a command to plot a `GeoArray` as an image?
3. autodetect and add some trait to the `Band` dimension? Like the mode is not Categorical but Image?
Then you could also do `A = set(A, Band=RGBimage())` or something, to set it if not detected so normal `plot` would be an image.
I'm not fully across band coloring in geotiff, but we should probably try to accommodate the standard
username_0: I don't know either what makes a GeoTiff an image. Sounds like 2 is the easiest? 3 seems to be overkill for starters. 1 sounds ok too.
username_1: Ok we can use some flag like `plot(A; series=:image)` in the plot recipes, that does seem easy. If you want to look into how bands are usually shown in RGB in GeoTiff and similar we can do it roughly how it's normally done?
username_1: R raster has `plotRGB` that does pretty much what you want here:
```R
plotRGB(b, r=1, g=2, b=3)
```
https://rspatial.org/terra/pkg/6-plotting.html
It's cool that you can set which layer gets which color. But we don't want a plots dep, so better to add a keyword with a `NamedTuple`
```julia
plot(A; rgb_bands=(r=1, g=2, b=3))
```
Then we just need a conditional check for the `rgb_bands` (or a different keyword) in our recipe, and combine bands if we find it. @username_0 does that sound reasonable?
username_2: Here's what I use to plot. I think it can easily be turned into a recipe.
```julia
using ImageCore
using Statistics
using Plots
using Rasters
using DimensionalData
const DD = DimensionalData
function eachband(r::Raster)
bands = dims(r, Band)
return (view(r, Band(b)) for b in bands)
end
skipmissing_(r::Raster) = (v for v in r if v != r.missingval)
function bandwise_quantile(raster::Raster, value)
(x -> quantile(skipmissing_(x), value)).(eachband(raster))
end
function normalize!(raster, low=0.1, high=0.9)
plow = bandwise_quantile(raster, low)
phigh = bandwise_quantile(raster, high)
for (l, h, band) in zip(plow, phigh, eachband(raster))
band .-= l
band ./= h - l + eps(float(eltype(raster)))
band .= clamp.(band, zero(eltype(raster)), one(eltype(raster)))
end
return raster
end
function plot_raster(r::Raster; bands=[1,2,3], low=0.02, high=0.98)
img = float32.(copy(r[Band([bands...])]))
normalize!(img, low, high)
img = permutedims(img, (Band, X, Y))
img = DimensionalData.reorder(img, DD.ForwardOrdered)
x = DD.index(reorder(dims(img, X), DD.ForwardOrdered))
y = DD.index(reorder(dims(img, Y), DD.ForwardOrdered))
plottable_img = colorview(RGB, parent(img))
Plots.plot(x,y,plottable_img,
title = string(name(r)),
xlabel = label(dims(r, X)),
ylabel = label(dims(r, Y)),
)
end
```
username_1: If you want to write it up that would be awesome.
We just need a keyword to pass to `plot` to trigger it - preferably an existing one. Is there still an `:image` series type in Plots.jl?
username_2: I'll write it up when I find the time. Is it possible to add plot recipes for Plots and Makie at the same time without depending on them?
username_2: Yes, but this will introduce a new dependency on MakieCore, correct?
username_1: Totally. This use case is a reason that got separated out ;)
https://github.com/JuliaPlots/Makie.jl/issues/996 |
dotnet-websharper/core | 883952094 | Title: compiler ignores parameter operation in match
Question:
username_0: ```
[<JavaScript>]
let Ajax (method: string) (url: string) (data: AjaxData option) = //: Async<string> =
Async.FromContinuations <| fun (ok, ko, _) ->
let ajSetting : AjaxSettings =
let ajs = JQuery.AjaxSettings()
ajs.Url <- url
ajs.Type <- As<JQuery.RequestType> method
ajs.ContentType <- Union<bool, string>.Union2Of2 "application/json"
ajs.DataType <- JQuery.DataType.Text
match data with
| Some ajdt ->
match ajdt with
| Serialized sdt ->
ajs.Data <- sdt
| NonSerialized odt ->
ajs.Data <- odt
| None -> ()
ajs.Success <- System.Action<obj, string, JqXHR>((fun result _ _ -> ok (result :?> string)))
ajs.Error <- System.Action<JqXHR, string, string>((fun jqXHR _ _ -> ko (System.Exception(jqXHR.ResponseText))))
ajs
JQuery.Ajax(ajSetting)
|> ignore
```
After WSFC compiling, the generated code looks like this
```
AjaxModule.Ajax=function(method,url,data)
{
function a(ok,ko)
{
$.ajax({
url:url,
type:method,
contentType:"application/json",
dataType:"text",
success:function(result)
{
return ok(result);
},
error:function(jqXHR)
{
return ko(new Global.Error(jqXHR.responseText));
}
});
}
return Concurrency.FromContinuations(function($1,$2,$3)
{
return a.apply(null,[$1,$2,$3]);
});
```
The `data` parameter is missing from the generated code....
Answers:
username_0: Even changed to this, it still doesn't work.
```
[<JavaScript>]
let Ajax (method: string) (url: string) (data: AjaxData option) = //: Async<string> =
Async.FromContinuations <| fun (ok, ko, _) ->
let dtObj : obj =
match data with
| Some ajdt ->
match ajdt with
| Serialized sdt ->
box sdt
| NonSerialized odt ->
odt
| None ->
null
let ajSetting : AjaxSettings =
let ajs = JQuery.AjaxSettings()
ajs.Url <- url
ajs.Type <- As<JQuery.RequestType> method
ajs.ContentType <- Union<bool, string>.Union2Of2 "application/json"
ajs.DataType <- JQuery.DataType.Text
ajs.Data <- dtObj
ajs.Success <- System.Action<obj, string, JqXHR>((fun result _ _ -> ok (result :?> string)))
ajs.Error <- System.Action<JqXHR, string, string>((fun jqXHR _ _ -> ko (System.Exception(jqXHR.ResponseText))))
ajs
JQuery.Ajax(ajSetting)
|> ignore
```
```
AjaxModule.Ajax=function(method,url,data)
{
function a(ok,ko)
{
var ajdt;
$.ajax({
url:url,
type:method,
contentType:"application/json",
dataType:"text",
data:data==null?null:(ajdt=data.$0,ajdt.$==1?ajdt.$0:ajdt.$0),
success:function(result)
{
return ok(result);
},
error:function(jqXHR)
{
return ko(new Global.Error(jqXHR.responseText));
}
});
}
return Concurrency.FromContinuations(function($1,$2,$3)
{
return a.apply(null,[$1,$2,$3]);
});
```
username_1: @username_0 Thanks for the report, indeed seems like a bug in the optimizer that converts list of setters to object expression. In original code, encountering anything else than `ajs.X <- Y` (like the `match data with` expression) should break gathering up the setters into an object expression, I will check into what's going on.
username_2: That's an unusual optimization. So if anything breaks that up, the generated code falls back to translating each statement separately?
username_1: @username_2 The reason behind this is to translate property setters on object construction nicely (like `MyObj(MyProperty = X)` in F# and `new MyObj() { MyProperty = X }` in C#) into a single JS object expression where possible. They are represented in the AST as sequential setter calls after the constructor, so in the F# AST it is indistinguishable from doing the property sets in new lines as above. I have an idea just now about what to check.
username_1: @username_0 Fix is released in https://github.com/dotnet-websharper/core/releases/tag/4.7.2.445, sorry for the long delay, build stack is working well again now, and we are investigating better options for quicker turnaround going forward.
Status: Issue closed
|
MicrosoftDocs/azure-docs | 597574069 | Title: Adding Wildcard Domain via API
Question:
username_0: Where is the documentation/sample for adding wildcard domains via the API?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 81a99fee-aaee-89d5-958b-ab645d8f6a00
* Version Independent ID: edeb5885-5b26-299d-8ffe-6d0be1533769
* Content: [Azure Front Door - Support for wildcard domains](https://docs.microsoft.com/en-us/azure/frontdoor/front-door-wildcard-domain#feedback)
* Content Source: [articles/frontdoor/front-door-wildcard-domain.md](https://github.com/Microsoft/azure-docs/blob/master/articles/frontdoor/front-door-wildcard-domain.md)
* Service: **frontdoor**
* GitHub Login: @sharad4u
* Microsoft Alias: **sharadag**
Answers:
username_1: Thanks for the feedback and bringing this to our notice. At this time we are reviewing the feedback and will update the document as appropriate.
username_2: @username_0, what do you mean by via API? via .NET SDK?
username_3: #reassign: amitsriva
username_4: Hello @username_0, could you please elaborate on what @jessica-ms asked?
username_4: #reassign: jessie-jyy
username_4: @username_0 since we haven't gotten a response back from you, we'll be archiving this issue for now. Please feel free to reach back out again. #please-close
Status: Issue closed
|
aws/aws-lambda-dotnet | 428569411 | Title: Lambda@Edge Events Package
Question:
username_0: I don't see a NuGet package with Lambda@Edge event models. (Similar to Amazon.Lambda.S3Events, Amazon.Lambda.DynamoDBEvents, etc..)
I've only seen Node examples of @Edge functions when I look around online but from the Lambda console it looks like you can deploy functions in any language. I'm still in very early stages of development for my app and would be happy to create a PR for whatever I end up using but was just wondering if there's already a package in the works.
I'll be rolling my own in the meantime based on this: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html
Answers:
username_1: @username_0, thanks for offering a PR. That would be appreciated.
username_0: .Net Core runtime isn't currently supported unfortunately
Status: Issue closed
username_2: still can't find a nuget package for |
taichi-dev/taichi | 1043032991 | Title: revert https://github.com/taichi-dev/taichi/pull/2486
Question:
username_0: Macros are not supported on GLES and we had to hack around it in https://github.com/taichi-dev/taichi/pull/3358/files. We might prefer to revert back to the version not using macros.
I gave this a quick try using git revert but it (unsurprisingly) has conflicts. So we'll have to revert this manually.
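For the record, the manual flow is roughly: revert the merge commit of #2486 (e.g. `git revert -m 1 <merge-sha>` — the actual sha isn't shown here), then resolve the conflicts by hand. A toy repro of the revert step in a scratch repo, with made-up filenames:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
echo "uses macros" > shader.txt
git add shader.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "switch shaders to macros (#2486)"
# Reverting the offending commit restores the pre-macro state:
git -c user.email=demo@example.com -c user.name=demo revert --no-edit HEAD
git log --format=%s
```

With real history the revert will stop on conflicts, which is where the manual resolution comes in.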
Status: Issue closed
Answers:
username_0: Closing via #3369 and #3376, thanks @k-ye ! |
donalgrant/p6-equations | 398431000 | Title: better test scripts
Question:
username_0: 1. Replace note/say/pull with diag in all test scripts.
2. Add subtest blocks in all test scripts.
3. Provide either -quiet or -verbose options to test scripts so that diagnostic messages can be turned off. Ideally, this would coordinate with issue #7 so that diagnostic messages (and conversely, debugging messages) could be enabled or disabled.
One goal is to be able to run a regression test on the entire t/ folder (possibly via 'prove') without generating pages of output.
Answers:
username_0: diag used with opt('verbose'); msg used elsewhere.
username_0: subtest blocks added to all test scripts
username_0: `prove -e perl6 t` command now operates quietly if set_opt('quiet') is used at beginning of otherwise "noisy" scripts. The only script which is noisy (currently) is Player.t.
username_0: The only thing left is to allow command line options for 'quiet' or 'verbose' (and perhaps 'debug' labels) for the test scripts. Perhaps a new command line parsing function to be added to Globals.pm6
Status: Issue closed
username_0: MAIN routine used in all scripts, with verbose and debug named arguments, and integration with Globals opt and debug. Completed with commit 0b9b61e. |
acidanthera/bugtracker | 1116804873 | Title: brightness slider and brightness keys doesn't change brightness of screen
Question:
username_0: my notebook is legion 5
comet lake cpu 10750h
gpus intel HD 630 and rtx 3050ti disabled by Whatevergreen
The problem is that the brightness slider and brightness keys work, but they have no effect on the screen brightness.
The kernel log contains this line:
2022-01-28 00:15:25.277313+0200 localhost kernel[0]: (AGDCBacklightControl) AGDCBacklightControl: Didn't find DPMicro.
and this kext isn't loaded
attached my acpi folder and config file of OpenCore
[Archive.zip](https://github.com/acidanthera/bugtracker/files/7954223/Archive.zip)
Answers:
username_1: Edit the iGPU patch; you should inject the device id as well. I see `enable-backlight-smoother` is in use, but if the brightness does not work it makes no sense. If I were you, I would make a new iGPU patch: https://dortania.github.io/OpenCore-Install-Guide/config-laptop.plist/coffee-lake-plus.html#deviceproperties
username_1: Yours is an EFI/config problem, so this issue makes little sense here. Write on the forums if you want support.
username_0: my igpu dev id is 0x9BC4 so it is natively supported so I think I don't need to inject anther dev id even I tried it before and didn't made any changes on this situation

this notebook contain mux switch may be problem from this
also anther person on this guide he complains from this problem also
https://github.com/yusfklncc/Lenovo-Legion-5-Hackintosh
username_0: this is my ioreg dump and dsdt
[Archive.zip](https://github.com/acidanthera/bugtracker/files/7958611/Archive.zip)
and this is method q11 and q12
```asl
Method (_Q11, 0, NotSerialized)  // _Qxx: EC Query, xx=0x00-0xFF
{
    If (IGDS)
    {
        P80B = 0x11
        Notify (^^^GFX0.DD1F, 0x87)  // Device-Specific
    }
    Else
    {
        P80B = 0x11
        Notify (^^^PEG0.PEGP.EDP0, 0x87)  // Device-Specific
    }
    Notify (VPC0, 0x80)  // Status Change
}
Method (_Q12, 0, NotSerialized)  // _Qxx: EC Query, xx=0x00-0xFF
{
    If (IGDS)
    {
        P80B = 0x12
        Notify (^^^GFX0.DD1F, 0x86)  // Device-Specific
    }
    Else
    {
        P80B = 0x12
        Notify (^^^PEG0.PEGP.EDP0, 0x86)  // Device-Specific
    }
    Notify (VPC0, 0x80)  // Status Change
}
```
username_1: try this
[config.plist.zip](https://github.com/acidanthera/bugtracker/files/7958919/config.plist.zip)
username_1: i see ioreg, and the pnlf is loaded correctly for cfl +
0xFF7B is correct and _UID is correct.

username_0: Thanks for your reply.
Tried it, same problem.
I think the problem is at the level of the ACPI tables, since the same problem also occurs on Ubuntu. It may also be that the Nvidia card controls the brightness, I don't know.
I even tried the following without disabling the Nvidia card, but nothing changed:
https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/extended/post-issues.html#no-brightness-control-on-dual-gpu-laptops
username_2: If your screen is OLED then it's a common problem: OLED screens can't have their brightness adjusted this way. The slider and buttons work but have no effect on the screen. Make sure your internal screen is not OLED.
username_0: it is IPS
15.6" FHD (1920x1080) IPS 300nits Anti-glare, 165Hz, 100% sRGB, Dolby Vision, G-Sync, DC dimmer
username_3: Not supported. PWM only.
Status: Issue closed
|
dotnet/cli | 136435065 | Title: Add RID support in universal host
Question:
username_0: We need to have two scenarios supported:
1) Get the RID corresponding to OS on which the host is running. @ericstj @ellismg What is the way to get this information? This will be required in the shim part of the host.
2) Add support for fat package assets by parsing RID fallback graph. This needs https://github.com/dotnet/cli/issues/1584 to generate the graph in app.deps.json. https://github.com/dotnet/cli/blob/anurse/update-deps-json-spec/Documentation/specs/runtime-configuration-file.md#portable-deployment-model has more details.
Answers:
username_0: This is completed.
Status: Issue closed
|
GoogleChrome/lighthouse | 331101240 | Title: DevTools Error: TRACING_ALREADY_STARTED
Question:
username_0: **Initial URL**: https://www.ne.se/info/
**Chrome Version**: 66.0.3359.181
**Error Message**: TRACING_ALREADY_STARTED
**Stack Trace**:
```
LHError: TRACING_ALREADY_STARTED
at Function.fromProtocolMessage (chrome-devtools://devtools/remote/serve_file/@a10<KEY>/audits2_worker/audits2_worker_module.js:1096:263)
at callback.resolve.Promise.resolve.then._ (chrome-devtools://devtools/remote/serve_file/@<KEY>/audits2_worker/audits2_worker_module.js:797:410)
``` |
itaditya/trick-or-treat-game | 503142663 | Title: Convert the Website into a PWA App
Question:
username_0: convert the Website into a PWA app so that it can be installed on a mobile/desktop and can be played even when offline.
Answers:
username_1: A prerequisite of that is to add a service worker for caching assets with **workbox**. See https://github.com/username_1/trick-or-treat-game/issues/14
I got two PRs both without Workbox so I'm rejecting them. Feel free to pick up that task if you're interested.
Once that's done we can start working on this.
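Agreed on the workbox prerequisite. For anyone picking this up, the heart of it is a cache-first fetch strategy; workbox ships this ready-made, but the logic itself is small. Here it is factored as a plain function so it can be tested outside a browser — the in-memory cache is a stand-in for the Cache API, purely for illustration:

```javascript
// Cache-first: serve from cache when possible, otherwise fetch and cache.
// In a real service worker this runs inside a 'fetch' event handler.
async function cacheFirst(cache, fetchFn, request) {
  const cached = await cache.match(request);
  if (cached) return cached;
  const response = await fetchFn(request);
  await cache.put(request, response);
  return response;
}

// Tiny in-memory stand-in for the browser Cache API.
function memoryCache() {
  const store = new Map();
  return {
    match: async (req) => store.get(req),
    put: async (req, res) => { store.set(req, res); },
  };
}

async function demo() {
  const cache = memoryCache();
  let networkHits = 0;
  const fakeFetch = async (req) => { networkHits += 1; return `body of ${req}`; };
  await cacheFirst(cache, fakeFetch, '/game.js'); // goes to the network
  await cacheFirst(cache, fakeFetch, '/game.js'); // served from cache
  return networkHits;
}

demo().then((hits) => console.log('network hits:', hits)); // network hits: 1
```

This is what makes the game playable offline once its assets have been cached.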
username_0: @username_1 Can I still work on this issue? |
learningequality/kolibri | 262252917 | Title: License instances are overwriting each other when being merged into main DB
Question:
username_0: ## Summary
Saw a few cases of the wrong license showing under content. Turns out we use an auto-incrementing ID on License in the Kolibri models, and only export the licenses used in a channel, so the pk's don't always correspond to a particular license. Then, when we merge them into the joint DB, they overwrite each other (or don't, but in any case many license FK's end up pointing to the wrong license).
In fixing this, we'll need to think holistically across Studio models, the export process, and the import process, and take into account Special Permissions, which come with custom text (so there's more than one Special Permissions license, in effect).
## How to reproduce
1. Import 66cef05505fa550b970e69c3623e82ba and 8b28761bac075deeb66adc6c80ef119c
1. Go to content pages
1. Depending on which channel was loaded first, content from one channel will show the wrong license.
## Screenshots

## Real-life consequences
Boatloads of legal trouble for everyone! :)
Answers:
username_0: Flagging for @username_2 and @username_1.
username_1: Joy. The other issue will be remapping the license_id field to the new pk as well.
username_1: (if we want a quick fix)
username_0: One option would be to keep an integer PK, to avoid breaking things, but have it come from a unique table on Studio rather than being autoincrement?
username_2: Can we use the license name as the pk? That should remain the same
username_0: Would add a bit more complexity on import, if we're maintaining forward-backward compat, but not sure how compat will work out anyway, so may be worth considering some sort of constant value. We'd still need to separately handle Special Permissions, though. How are those stored on the Studio?
username_2: It's just another license constant, and the description is stored on the ContentNode
username_0: Ok, yeah -- we may want to just transfer that same approach over to the Kolibri models.
username_2: Will this require a breaking change?
username_3: One small thing to watch out for is preserving the ability to change the display name of licenses if necessary. E.g. if we had "CC BY ND" and later wanted to rename it "CC BY-ND"
username_1: My preferred solution would be to fix this on the Kolibri side first, enhance the import logic to deal with existing license data, and then move the change to studio once we have our ducks in a row on backwards compatibility.
username_1: The proposed fix for this is to emulate Studio behaviour and write license data directly onto ContentNode models.
This will involve:
- Deleting the License model.
- Adding license_name and license_description fields to the ContentNode model. For now, this will involve complete denormalization of the license data onto the ContentNode.
In future, when we have frontend support for known licenses (like CC-BY), the license_name will be an identifier, rather than a title, and the frontend will handle display of the license name and description for known license types.
In the case of unknown license types the frontend will fall back to using the license_name and license_description from the database.
Note - this approach also allows for proper internationalization of known licenses in the frontend.
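A minimal sketch of the data migration this proposal implies — copy each node's license fields across before dropping the `License` table. The dataclasses below are illustrative stand-ins, not Kolibri's actual Django models:

```python
from dataclasses import dataclass


# Stand-ins for the ORM rows involved; field names follow the proposal,
# but these are simplified structures, not the real Kolibri models.
@dataclass
class License:
    id: int
    license_name: str
    license_description: str


@dataclass
class ContentNode:
    title: str
    license_id: int           # old FK onto the License table
    license_name: str = ""    # new denormalized fields
    license_description: str = ""


def denormalize_licenses(nodes, licenses):
    """Copy license fields onto each node, emulating the data migration
    that would run before the License model is deleted."""
    by_id = {lic.id: lic for lic in licenses}
    for node in nodes:
        lic = by_id[node.license_id]
        node.license_name = lic.license_name
        node.license_description = lic.license_description
    return nodes
```

After a migration along these lines, the integer `license_id` no longer needs to be stable across channel imports, which is the root of the pk-collision problem described above.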
username_1: Fixed in #2368
Status: Issue closed
|
COVID19Tracking/website | 586583146 | Title: Home page (website restructure)
Question:
username_0: Implement 1.0 Homepage
Requirements:
https://docs.google.com/document/d/1NPypuNap-yFaOpzd2SKkXbI9GnkzI5QOsozLACSBuww/edit#bookmark=id.gdoqjntnfhdf
Answers:
username_1: - **Press links**, maybe like six?
- **Look up your state’s data report card** and **find out how to help**
Status: Issue closed
|
rundeck/rundeck | 235040302 | Title: Provide API to update an existing token
Question:
username_0: Issue type: Enhancement Request
**My Rundeck detail**
* Rundeck version: 2.8.2
* install type: rpm
* OS Name/version: centos7
* DB Type/version: mysql
According to [API docs](http://rundeck.org/docs/api/#authentication-tokens) we can only GET, CREATE and DELETE tokens. Since tokens now include more details such as expiration date or groups, I would like to see an UPDATE action for existing tokens, for updating groups or renewing expiration dates.
Answers:
username_1: I would call this an argument against UPDATE.
What happens to your session token when you UPDATE when using the API?
I am using create, get old token, delete old token, make current token old token, save new token.
This saves our currently running long jobs from failing due to a token change.
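One simplified reading of that create-then-retire sequence is sketched below. The client interface is hypothetical (illustrative method names wrapping the create/get/delete operations the API already supports — not a real Rundeck SDK); the point is only the ordering: mint the replacement token before the old one is retired.

```python
def rotate_token(client, user):
    """Mint a replacement token before retiring the old one, so jobs
    that already hold the previous token are not cut off mid-run the
    way an in-place UPDATE would cut them off.

    `client` is any object exposing list/create/delete operations;
    the method names are illustrative, not a real Rundeck SDK.
    """
    old_tokens = client.list_tokens(user)
    new_token = client.create_token(user)   # new credential exists first
    for token in old_tokens:
        client.delete_token(token["id"])    # then the old ones are retired
    return new_token


class FakeTokenClient:
    """Tiny in-memory stand-in, just to demonstrate the call ordering."""

    def __init__(self):
        self.tokens = [{"id": "old"}]
        self.calls = []

    def list_tokens(self, user):
        return list(self.tokens)

    def create_token(self, user):
        self.calls.append("create")
        token = {"id": "new"}
        self.tokens.append(token)
        return token

    def delete_token(self, token_id):
        self.calls.append("delete")
        self.tokens = [t for t in self.tokens if t["id"] != token_id]
```

A real implementation would hold the retired token until long-running jobs drain, as described in this comment, rather than deleting it immediately.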
We have multiple jobs running remotely that can take hours. If you update that token what happens to the stored token in an inline script? Job Fails.
So for update to work you would have to issue the update to any current jobs that are using the RD API. Some jobs are in a timeout for 30 minutes before they poll again. Security concerns cause us to use external secure token storage. |
HeyImMatt/taskli | 611077640 | Title: Quick Add New Project
Question:
username_0: Modify the New Project behavior to open an empty Name field in the side nav projects list. Then, on blur or save button click, load the project page and make the rest of the project details fields available for editing.<issue_closed>
Status: Issue closed |
socrata/opendatanetwork.com | 175806022 | Title: change default map landing up a zoom level for counties
Question:
username_0: go to: https://opendatanetwork-staging.herokuapp.com/entity/0500000US53033/King_County_WA/jobs.job_proximity.jobs-prox-idx-mean?year=2015
note: zoom level is far in
should be something like the attached image. think this is only happening w/ counties for some reason.


Status: Issue closed
Answers:
username_1: Fixed this. You can adjust the padding used for auto zoom here: https://github.com/socrata/opendatanetwork.com/blob/staging/src/maps/constants.js#L39 |
hankcs/HanLP | 678478438 | Title: In dependency parsing, I loaded the model SEMEVAL15_PAS_BIAFFINE_ZH, but the result is:
Question:
username_0: <!--
Thank you for reporting a possible bug in HanLP.
Please fill in the template below to bypass our spam filter.
The fields below are required; otherwise the issue will be closed directly.
-->
**Describe the bug**
A clear and concise description of what the bug is.
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
```python
```
**Describe the current behavior**
A clear and concise description of what happened.
**Expected behavior**
A clear and concise description of what you expected to happen.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:
- HanLP version:
**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
* [ ] I've completed this form and searched the web for solutions.
Status: Issue closed
Answers:
username_1: [auto-reply] Thanks for your comment. However, the essential information is required. Please carefully fill out the form when open a new issue. |
ComputeCanada/wheels_builder | 732729731 | Title: build_wheel deps with hyphen
Question:
username_0: If the dependency that was downloaded is `Pillow-SIMD-7.0.0.post3.tar.gz`, the script tries to build just `Pillow` instead of `Pillow-SIMD`. The script assumes that there are no hyphens in the package name:
https://github.com/ComputeCanada/wheels_builder/blob/d56ea31f17be0776bf3aef5e3b3055a29a60acd2/build_wheel.sh#L182
Instead of taking the first hyphen-separated token, could we remove from the right? Not sure if we can count on a fixed number of tokens...
Answers:
username_1: Indeed, good catch! File names...a regex will be better suited here
username_1: `grep -Po '^[\w-_]+-' | sed 's/.$//'` should do the trick. It matches the longest leading run of word characters (`a-z`, `A-Z`, `0-9`, `_`) and hyphens that still ends in `-`; sed then strips that last character from every line.
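For reference, the same stripping logic can be checked outside the shell pipeline. This Python snippet mirrors the proposed `grep`/`sed` combination — a sketch of the matching behaviour, not the code that went into `build_wheel.sh`:

```python
import re

# Mirrors `grep -Po '^[\w-_]+-'` followed by `sed 's/.$//'`: take the
# longest leading run of word characters and hyphens that still ends in
# '-', then drop that trailing '-'.  (The `_` in the shell character
# class is redundant, since \w already includes it.)
_NAME_RE = re.compile(r"^[\w-]+-")


def package_name(sdist_filename):
    match = _NAME_RE.match(sdist_filename)
    if match is None:
        return None
    return match.group(0)[:-1]  # strip the trailing '-'
```

Because the version component starts with a digit followed by a dot, backtracking stops the match at the hyphen just before the version, so hyphenated names like `Pillow-SIMD` survive intact.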
Status: Issue closed
username_1: Fixed in e87dad8d884c738bbb1d8b6dbb7b5c0128a19eaf |
w3c/csswg-drafts | 332637645 | Title: Allow at-rules inside declaration blocks
Question:
username_0: There were draft errata posted at https://lists.w3.org/Archives/Public/www-style/2013Nov/0157.html
The actual published errata are: https://www.w3.org/Style/css2-updates/REC-CSS2-20110607-errata.html#s.4.1.1e
Note that @dbaron filed https://github.com/w3c/csswg-drafts/issues/991 on the published errata.
Note this section is becoming informative per WG resolution (https://github.com/w3c/csswg-drafts/issues/2224#issuecomment-384353286). |