repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
PanthR/panthrLang | 153497849 | Title: Add R's "<<-" inherited assignment
Question:
username_0: Normal assignment assigns to current frame regardless of whether the variable exists in a parent frame.
Inherited assignment assigns to the frame the variable is defined in, or else to the global frame.<issue_closed>
Status: Issue closed |
tdryer/hangups | 1111994124 | Title: AttributeError: 'ConversationEventListWalker' object has no attribute 'positions'
Question:
username_0: If I hit End while browsing the conversation history, I am presented with:
```
Traceback (most recent call last):
File "/usr/bin/hangups", line 33, in <module>
sys.exit(load_entry_point('hangups==0.4.17', 'console_scripts', 'hangups')())
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 1215, in main
ChatUI(
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 153, in __init__
raise self._exception # pylint: disable=raising-bad-type
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/usr/lib/python3.10/site-packages/urwid/raw_display.py", line 416, in <lambda>
wrapper = lambda: self.parse_input(
File "/usr/lib/python3.10/site-packages/urwid/raw_display.py", line 515, in parse_input
callback(processed, processed_codes)
File "/usr/lib/python3.10/site-packages/urwid/main_loop.py", line 412, in _update
self.process_input(keys)
File "/usr/lib/python3.10/site-packages/urwid/main_loop.py", line 513, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 1012, in keypress
key = super().keypress(size, key)
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 304, in keypress
return super().keypress(size, key)
File "/usr/lib/python3.10/site-packages/urwid/container.py", line 1626, in keypress
key = self.focus.keypress(tsize, key)
File "/usr/lib/python3.10/site-packages/urwid/container.py", line 1135, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 923, in keypress
return super().keypress(size, key)
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 304, in keypress
return super().keypress(size, key)
File "/usr/lib/python3.10/site-packages/urwid/container.py", line 1626, in keypress
key = self.focus.keypress(tsize, key)
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 436, in keypress
key = super().keypress(size, key)
File "/usr/lib/python3.10/site-packages/hangups/ui/__main__.py", line 304, in keypress
return super().keypress(size, key)
File "/usr/lib/python3.10/site-packages/urwid/listbox.py", line 994, in keypress
return actual_key(self._keypress_max_right((maxcol, maxrow)))
File "/usr/lib/python3.10/site-packages/urwid/listbox.py", line 1004, in _keypress_max_right
self.focus_position = next(iter(self.body.positions(reverse=True)))
AttributeError: 'ConversationEventListWalker' object has no attribute 'positions'
```
Arch Linux, x86_64, using latest snapshot from git |
DRSDavidSoft/additional-hosts | 585543798 | Title: PiHole compatibility
Question:
username_0: Hi David,
Would it be possible to make this more PiHole-friendly and remove the domains starting with *., as these wildcards do not work with PiHole?
Answers:
username_0: @username_2 * does not work in hostsfiles/Pihole so please remove these entries:
*.doubleclick.net
doubleclick.*
*.doublecklick.net
*.2mdn.net
*.admob.com
*.admob.xiaomi.com
admob.*
*.adnxs.com
*.applift.com
*.applovin.com
*.applvn.com
*.adswizz.com
*.batmobi.net
*.batmobil.net
*.comscore.com
*.mycomscore.net
*.mycomscore.com
settings-crashlytics-*.us-east-1.elb.amazonaws.com
settings-crashlytics-*.*.elb.amazonaws.com
*.fastclick.net
#*.intercom.io
*.ads.linkedin.com
*.pubmatic.com
*.mojiva.com
*.ads.mojiva.com
#.ads*.mojiva.com
*.mocean.mobi
#img.ads*.mocean.mobi
*.voicefive.com
*.thewhizmarketing.com
*.lp.mydas.mobi
*.ads.mp.mydas.mobi
*.w.inmobi.com
*.inmobi.com
ads-*.spotify.com
*.audio2.spotify.com
pixel*.spotify.com
*.video-ak.cdn.spotify.com
*.er.spo.spotify.com
*.spotx.tv
#audio-sp-*.spotify.com
#ash2-accesspoint-*.ap.spotify.com
#ash2-accesspoint-*.ash2.spotify.com
#lon2-accesspoint-*.lon.spotify.com
#lon2-accesspoint-*.lon2.spotify.com
#lon2-accesspoint-*.ap.spotify.com
#lon3-accesspoint-*.lon3.spotify.com
#lon3-accesspoint-*.ap.spotify.com
#lon6-accesspoint-*.ap.spotify.com
#lon6-accesspoint-*.lon6.spotify.com
#gew1-accesspoint-*.ap.spotify.com
#sjc1-accesspoint-*.sjc1.spotify.com
#gew1.ap.spotify.com
#sjc1-weblb-*.sjc1.spotify.com
#ash2-idp-*.ash2.spotify.com
#ash2-weblb-*.ash2.spotify.com
#ash2-msproxy-*.ash2.spotify.com
#lon3-msproxy-*.lon3.spotify.com
[Truncated]
*.msecn.net
#*.vo.msecnd.net # interfere with some useful products (e.g. VS Code)
*.atdmt.com
*.amazon-adsystem.com
*.media-match.com
*.omaze.com
*.tune.com
*.moatads.com
#*.s.moatpixel.com
#*.z.moatpixel.com
*.smaato.net
*.jumptap.com
*.appads.com
jupiter*.appads.com
neptune*.appads.com
saturn*.appads.com
req*.appads.com
*.adinfuse.com
*.smartadserver.com
*.advertising.com
username_1: 57 domains invalid!
username_2: @username_0 @username_1 Thank you for reporting this issue, and bringing this to me. I'll validate the lines and remove them from the list so it won't contain the invalid domains.
@username_1 Can you please also post the 57 domains here? Thanks
username_0: I posted them in the original post
If you meant to have wildcards in there, they don't break anything, but note they aren't valid if used as a hosts file (they will be ignored... doesn't break anything though)
Status: Issue closed
username_2: Sorry that this issue has been open for so long. I moved the wildcard domains into another file, so Pi-hole would be compatible! |
karpenoktem/kninfra | 1175228455 | Title: Make planning templates usable
Question:
username_0: Currently the board does not use the website for planning. Maybe we should figure out why and fix that. See also #508 and the branch [yorickvp/update-planning](https://github.com/karpenoktem/kninfra/tree/yorickvp/update-planning) |
kfrozen/HeaderCollapsibleLayout | 350734222 | Title: RecyclerView: how to resolve scroll conflicts when pull-to-refresh and load-more are added
Question:
username_0: If pull-to-refresh and load-more are added to the RecyclerView, it can get into a state where, after being pushed up, it cannot be pulled back down.
Answers:
username_1: @username_0 Hello, could you please post the relevant layout XML code so we can take a look?
username_0: <?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/container"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fitsSystemWindows="true"
tools:context=".function.baby.activity.BabyDetailActivity">
<com.troy.collapsibleheaderlayout.HeaderCollapsibleLayout
android:id="@+id/default_header_collapsible_layout_id"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:bottomPanelLayoutId="@layout/comp_collapsible_layout_body"
app:overshootDistance="3000"
app:topPanelLayoutId="@layout/comp_collapsible_layout_header" />
<!--app:supportAutoExpand="false" />-->
<include layout="@layout/common_toolbar" />
</FrameLayout>
username_0: <?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent">
<com.scwang.smartrefresh.layout.SmartRefreshLayout
android:id="@+id/srl_baby_log"
android:layout_width="match_parent"
android:layout_height="match_parent">
<android.support.v7.widget.RecyclerView
android:id="@+id/rv_baby_log"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</com.scwang.smartrefresh.layout.SmartRefreshLayout>
</FrameLayout>
username_0: <?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/default_header_placeholder_id"
android:layout_width="match_parent"
android:layout_height="250dp"
android:background="@mipmap/my_bg">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="@dimen/dp_60"
android:layout_marginLeft="@dimen/dp_15"
android:layout_marginRight="@dimen/dp_15">
<de.hdodenhof.circleimageview.CircleImageView
android:id="@+id/civ_baby_head"
android:layout_width="@dimen/dp_60"
android:layout_height="@dimen/dp_60"
android:src="@mipmap/default_head_icon" />
<TextView
android:id="@+id/tv_name"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_toRightOf="@+id/civ_baby_head"
android:layout_marginLeft="@dimen/dp_15"
android:layout_gravity="center_vertical"
android:text="彭于晏"
android:textStyle="bold"
android:textSize="@dimen/sp_14"
android:textColor="@color/white"
android:padding="@dimen/dp_5"
android:shadowColor="#ff000000"
android:shadowRadius="1"
android:shadowDx="5"
android:shadowDy="5" />
</LinearLayout>
<LinearLayout
android:layout_width="match_parent"
android:layout_height="@dimen/dp_40"
android:layout_above="@id/bottom_line">
<TextView
android:id="@+id/tv_tab_log"
android:layout_width="0dp"
android:layout_weight="1"
android:layout_height="wrap_content"
android:gravity="center"
android:text="日志"
android:textStyle="bold"
android:textSize="@dimen/sp_14"
android:textColor="@color/white"
android:padding="@dimen/dp_5"
android:shadowColor="#ff000000"
android:shadowRadius="1"
android:shadowDx="5"
android:shadowDy="5" />
[Truncated]
android:gravity="center"
android:text="成长记录"
android:textStyle="bold"
android:textSize="@dimen/sp_14"
android:textColor="@color/white"
android:padding="@dimen/dp_5"
android:shadowColor="#ff000000"
android:shadowRadius="1"
android:shadowDx="5"
android:shadowDy="5" />
</LinearLayout>
<include layout="@layout/invitation_code"
android:id="@+id/bottom_line"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"/>
</RelativeLayout>
username_1: @username_0 I tried the scenario you described on my machine. On my side I'm using the official SwipeRefreshLayout, and it works fine. Below is the code of my body layout:
<?xml version="1.0" encoding="utf-8"?>
<android.support.v4.widget.SwipeRefreshLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/swipe_refresh"
android:layout_width="match_parent"
android:layout_height="match_parent">
<android.support.v7.widget.RecyclerView
android:id="@+id/recycler_view"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</android.support.v4.widget.SwipeRefreshLayout>
If it doesn't work on your side, I'd first suggest checking whether SmartRefreshLayout intercepts scroll events; second, if your use case allows it, consider moving the swipe refresh outside of the HeaderCollapsibleLayout, which will also give a smoother scrolling experience. |
pivorakmeetup/pivorak-web-app | 203806282 | Title: Add sitemap
Question:
username_0: Ofc
Answers:
username_1: do we need it for SEO ?
username_0: Ofc
username_1: there is [article in rubyweekly](https://www.sitepoint.com/start-your-seo-right-with-sitemaps-on-rails/?utm_source=rubyweekly&utm_medium=email). Is it good for sitemap implementation ?
username_0: Yes.
Status: Issue closed
|
cosmos/cosmos-sdk | 306699139 | Title: Tx interface
Question:
username_0: Wrote about recent Tx structure discussion here: https://github.com/cosmos/cosmos-sdk/issues/669
I think this is all great, but in looking at the interplay with baseapp, I'm not sure we want to thread a struct through that. It's really just the AnteHandler that sees the transactions, and I can foresee many forms of ante handler (we have one, ethermint has another, surely there are more).
So instead, I propose we have `type Tx interface{}` (an empty interface), since baseapp doesn't really need anything from a Tx. Then in ante handlers, we'd do the type assertions. I suspect this pattern might emerge anyways, so maybe it's best to make it explicit up front. The AnteHandler can also include the TxDecoder and SignBytes functions, since those are tightly coupled to the relationship between bytes and Tx Structure.
In sum, `AnteHandler` is responsible for the map between an empty Tx interface and the underlying Tx and Serialization structures. StdTx is our primary (sdk default) way of making a Tx, and its AnteHandler is `auth`.
I think the baseapp is a really useful abstraction over ABCI that can exist more independently of the Tx specifics (possibly MultiStore too ...).
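A minimal Go sketch of the idea described above: an empty `Tx` interface with the type assertion living in the ante handler. The `StdTx` and `AnteHandler` shapes here are illustrative stand-ins, not the actual cosmos-sdk definitions:
```go
package main

import (
	"errors"
	"fmt"
)

// Tx is deliberately an empty interface: baseapp would not need anything
// from it, and each ante handler decides what concrete types it accepts.
type Tx interface{}

// StdTx is a hypothetical stand-in for an "sdk default" transaction type,
// not the real cosmos-sdk struct.
type StdTx struct {
	Msg       string
	Signature string
}

// AnteHandler owns the mapping between the opaque Tx and the concrete
// type(s) it understands.
type AnteHandler func(tx Tx) error

// stdAnteHandler only accepts StdTx; an ethermint-style handler would do
// its own assertion for its own transaction type.
func stdAnteHandler(tx Tx) error {
	stdTx, ok := tx.(StdTx) // the type assertion lives in the ante handler
	if !ok {
		return errors.New("unsupported tx type")
	}
	if stdTx.Signature == "" {
		return errors.New("missing signature")
	}
	return nil
}

func main() {
	var ante AnteHandler = stdAnteHandler
	fmt.Println(ante(StdTx{Msg: "send", Signature: "sig"})) // <nil>
	fmt.Println(ante("not a StdTx"))                        // unsupported tx type
}
```
In this shape, a different ante handler (with its own decoder and sign-bytes logic) can plug in without baseapp knowing anything about the concrete transaction struct.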
Answers:
username_1: Per SDK design meeting, punted for now.
username_0: This can be closed. Still need to reduce baseapp dependency on auth - noted in https://github.com/cosmos/cosmos-sdk/pull/1579
Status: Issue closed
|
DavBfr/dart_pdf | 873642520 | Title: Printing not working in some windows devices on release mode
Question:
username_0: Printing is not working on some devices.
Answers:
username_1: Any more information?
username_0: I got the issue cleared by copying the fonts folder from data/flutter_assets/fonts to the root folder.
On release builds, fonts and all other assets are automatically copied to data/flutter_assets.
username_1: Don't hesitate to [Buy Me A Coffee](https://www.buymeacoffee.com/JORBmbw9h).
Status: Issue closed
|
redux-saga/redux-saga | 650129889 | Title: How to change the number of forked tasks waiting for channels dynamically?
Question:
username_0: What the documentation says is that we can have a queue implemented easily with Channels:
```javascript
import { channel } from 'redux-saga'
import { take, fork, ... } from 'redux-saga/effects'
function* watchRequests() {
// create a channel to queue incoming requests
const chan = yield call(channel)
// create 3 worker 'threads'
for (var i = 0; i < 3; i++) {
yield fork(handleRequest, chan)
}
while (true) {
const {payload} = yield take('REQUEST')
yield put(chan, payload)
}
}
function* handleRequest(chan) {
while (true) {
const payload = yield take(chan)
// process the request
}
}
```
I noticed that we can determine the number of concurrent tasks, and in the sample there were 3 tasks running in parallel:
```javascript
for (var i = 0; i < 3; i++) {
yield fork(handleRequest, chan)
}
```
However,
how can I change the number 3 to another number dynamically without cancelling the tasks already running in the forks?
If I want to increase the 3 to 5 it's easy, I can just add more forks whenever I want, but what about decreasing this number without losing the running tasks?
Answers:
username_1: You would have to implement a more sophisticated logic:
1. have a saga responsible for managing worker pool
2. create a channel for it
3. pass this channel to each forked saga
4. notify the manager saga about worker state changes (`processing: boolean`)
5. knowing about processing status of your workers you could quite easily implement logic in which you would "wait" for a worker to complete before shutting it down
username_0: Great, thanks.
I will check if I can get a solution for it.
username_0: Alright, I got a solution:
- Firstly I've given an id for each fork task because I haven't found a way to retrieve the fork id in the current task:
```javascript
let taskCounter = 0
const forks = {}
...
const chan = yield call(channel)
for (let i = 0; i < queueSize; i += 1) {
taskCounter += 1
const taskID = taskCounter
forks[taskID] = {
isRunning: true,
fork: yield fork(doSomething, { chan, taskID }),
}
}
```
- After that I've created a saga to watch the REMOVE_CHANNEL_FORK action:
The action:
```javascript
export const REMOVE_CHANNEL_FORK = 'REMOVE_CHANNEL_FORK'
export const removeChannelFork = ({ taskID }) => ({
type: REMOVE_CHANNEL_FORK,
taskID,
})
```
The saga:
```javascript
function* watchFinishFork() {
while (true) {
const { taskID } = yield take(REMOVE_CHANNEL_FORK)
forks[taskID].isRunning = false
}
}
```
- And finally I implemented the main saga to take from the channel, conditioned on the isRunning flag:
```javascript
function* doSomething({ chan, taskID }) {
while (true) {
const payload = yield take(chan)
yield executeTheSomething(chan, payload)
if (!forks[taskID].isRunning) {
delete forks[taskID]
break //this means that the fork will finish its execution
}
}
}
```
So thank you for the guidance @username_1 👏
Status: Issue closed
|
angular/material | 144267930 | Title: iOS: mechanical md-sidenav (menu) scrolling
Question:
username_0: Scrolling in the md-sidenav menu in iOS is very mechanical compared to Android. On an Android device, a scroll will decelerate before it comes to a halt. On an iOS device, a scroll stops abruptly, which is in conflict with the guidelines [(MD-Mass and weight)](https://www.google.com/design/spec/animation/authentic-motion.html#authentic-motion-mass-weight).
How to reproduce:
1. Open the md-sidenav menu (not the sidenav component)
2. Open the "Demo"-accordian to make the md-sidenav scrollable
3. Scroll up and down
4. Observe that the scroll stops instantly when using an iOS device
Tested on:
Version: Angular Material 1.1.0-RC1
Model: iPad mini
Browser: Safari 9.3
Operating system: 9.3 (13E237)
Tested on:
Version: Angular Material 1.1.0-RC1
Model: iPhone 6
Browser: Safari 9.3
Operating system: 9.3 (13E233)
Answers:
username_1: Wasn't aware that we had this on sidenav as well, I added https://github.com/angular/material/pull/7751 recently to address the same issue on the demos. I'll take care of this one.
Status: Issue closed
|
wxWidgets/Phoenix | 697287759 | Title: demo.py: the module version is deprecated for python3?
Question:
username_0: **Description of the problem**:
python3 ModuleNotFoundError: No module named 'version'
Answers:
username_1: The file `demo/version.py` is now generated as part of the build, so you're probably using a version direct from git, or using the demo archive from the 4.1.0 release (IIRC) which forgot to include the file. I suggest getting the demo archive from one of the recent snapshot builds or you can just add the file yourself. It just needs to contain a line like: `VERSION_STRING = "4.1.0"`.
Also, see #1711 and #1690 (Always search for existing issues!) :-)
Status: Issue closed
|
bitcoinops/bitcoinops.github.io | 527783022 | Title: Topic Page Improvements
Question:
username_0: Two issues (one is more of a question) found while translating newsletter 73
- Topic page update: NL 73 includes links to Topic pages such as "[Trampoline payments](https://bitcoinops.org/en/topics/trampoline-payments/)" and "[Multipath payments](https://bitcoinops.org/en/topics/multipath-payments/)", but in the "Optech newsletter and website mentions" section of these pages, newsletter 73 is not mentioned. I'm not quite familiar with Jekyll's functionality, but it'd be better to check whether the Topic page can dynamically pull in all the mentions to it or, if not, to make sure the topic page is updated on every NL release. FYI, reading the topic page source code, it looks pretty manual. If it's a known issue then I can make a PR to add NL73 to these pages.
- Topic page translation: Do we have a plan to translate the Topic pages too? I saw the `en` directory under `_topics`, which has the same structure as `_posts`, hence I was wondering if that's the case. Can I create a `ja` directory for topics and make a PR?
Answers:
username_1: That won't automatically work, sorry. It'll just add a Japanese-language page to the full index. Someone would have to go in and globalize the topic pages, like @bitschmidty did for the newsletter pages, before localized content can be added.
username_0: Thanks for your note! Hello @bitschmidty, do you have any plan to localize the topic pages too? Let me know if you have anything in your mind!
username_2: My two cents: if we choose to translate the topic pages, we should only translate the 'excerpt' and 'extended summary' sections (which are fairly static), and not try to translate the names of the links (which will be updated regularly).
@username_0 - there are currently ~40 topic pages, and we plan to bring that up to ~100 by the end of 2020. Are you interested in translating those?
username_0: @username_2 Agree with what you pointed out, i.e. translating only 'excerpt' and 'extended summary' sections. I'm happy to translate the ~100 topics. Please let me know how I can get started.
username_2: @bitschmidty - can you help here? @username_0 wants to translate the topics indexes. Are we able to incorporate those into the site? |
amplication/amplication | 737048386 | Title: Server: Changing and then changing back to original state is still counted as change
Answers:
username_1: Likewise for creating new entity then deleting entity, still shows up as a pending change.
username_2: same mechanics, but the github code diff doesn't show any resulting line changes (except the push version) which is good
Status: Issue closed
|
CartoDB/carto-vl | 520049955 | Title: Blog map issue
Question:
username_0: Hey team!
It seems that this CARTO VL map on one of our recent blog posts isn't working.
https://carto.com/blog/retail-revenue-prediction-data-science/
Any help would be appreciated! cc: @makella
Answers:
username_1: Fixed by @makella ✨
For these kind of visualizations, instead of using personal accounts for generating the final result, it'd be great to use a common account that anyone can access in case there's a failure 👍
Status: Issue closed
|
jhomlala/betterplayer | 952302895 | Title: [BUG] memory issue
Question:
username_0: W/hallengebattle(28007): Throwing OutOfMemoryError "Failed to allocate a 32 byte allocation with 0 free bytes and 0B until OOM, target footprint 201326592, growth limit 201326592" (VmSize 32717780 kB, recursive case)
W/hallengebattle(28007): "ExoPlayer:Loader:ProgressiveMediaPeriod" prio=5 tid=66 Runnable
W/hallengebattle(28007): | group="main" sCount=0 dsCount=0 flags=2 obj=0x12c85660 self=0x7a3bfb2a60
W/hallengebattle(28007): | sysTid=28265 nice=0 cgrp=default sched=0/0 handle=0x7812d73cc0
W/hallengebattle(28007): | state=R schedstat=( 540769736 20927864 155 ) utm=51 stm=2 core=5 HZ=100
W/hallengebattle(28007): | stack=0x7812c70000-0x7812c72000 stackSize=1043KB
W/hallengebattle(28007): | held mutexes= "mutator lock"(shared held)
W/hallengebattle(28007): at java.util.Arrays.copyOf(Arrays.java:3136)
W/hallengebattle(28007): at java.util.Arrays.copyOf(Arrays.java:3106)
W/hallengebattle(28007): at java.util.ArrayList.grow(ArrayList.java:275)
W/hallengebattle(28007): at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:249)
W/hallengebattle(28007): at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:241)
W/hallengebattle(28007): at java.util.ArrayList.add(ArrayList.java:467)
W/hallengebattle(28007): at com.android.okhttp.internal.http.OkHeaders.toMultimap(OkHeaders.java:104)
W/hallengebattle(28007): at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getHeaderFields(HttpURLConnectionImpl.java:227)
W/hallengebattle(28007): at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getHeaderFields(DelegatingHttpsURLConnection.java:179)
W/hallengebattle(28007): at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getHeaderFields(HttpsURLConnectionImpl.java:30)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.DefaultHttpDataSource.getResponseHeaders(DefaultHttpDataSource.java:307)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.DefaultDataSource.getResponseHeaders(DefaultDataSource.java:217)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.cache.CacheDataSource.getResponseHeaders(CacheDataSource.java:649)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.StatsDataSource.getResponseHeaders(StatsDataSource.java:107)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.StatsDataSource.open(StatsDataSource.java:86)
W/hallengebattle(28007): at com.google.android.exoplayer2.source.ProgressiveMediaPeriod$ExtractingLoadable.load(ProgressiveMediaPeriod.java:1016)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.Loader$LoadTask.run(Loader.java:417)
W/hallengebattle(28007): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
W/hallengebattle(28007): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
W/hallengebattle(28007): at java.lang.Thread.run(Thread.java:923)
W/hallengebattle(28007): "ExoPlayer:Loader:ProgressiveMediaPeriod" prio=5 tid=66 Runnable
W/hallengebattle(28007): | group="main" sCount=0 dsCount=0 flags=2 obj=0x12c85660 self=0x7a3bfb2a60
W/hallengebattle(28007): | sysTid=28265 nice=0 cgrp=default sched=0/0 handle=0x7812d73cc0
W/hallengebattle(28007): | state=R schedstat=( 541447913 20927864 156 ) utm=51 stm=2 core=4 HZ=100
W/hallengebattle(28007): | stack=0x7812c70000-0x7812c72000 stackSize=1043KB
W/hallengebattle(28007): | held mutexes= "mutator lock"(shared held)
W/hallengebattle(28007): at java.util.Arrays.copyOf(Arrays.java:3136)
W/hallengebattle(28007): at java.util.Arrays.copyOf(Arrays.java:3106)
W/hallengebattle(28007): at java.util.ArrayList.grow(ArrayList.java:275)
W/hallengebattle(28007): at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:249)
W/hallengebattle(28007): at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:241)
W/hallengebattle(28007): at java.util.ArrayList.add(ArrayList.java:467)
W/hallengebattle(28007): at com.android.okhttp.internal.http.OkHeaders.toMultimap(OkHeaders.java:104)
W/hallengebattle(28007): at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getHeaderFields(HttpURLConnectionImpl.java:227)
W/hallengebattle(28007): at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getHeaderFields(DelegatingHttpsURLConnection.java:179)
W/hallengebattle(28007): at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getHeaderFields(HttpsURLConnectionImpl.java:30)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.DefaultHttpDataSource.getResponseHeaders(DefaultHttpDataSource.java:307)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.DefaultDataSource.getResponseHeaders(DefaultDataSource.java:217)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.cache.CacheDataSource.getResponseHeaders(CacheDataSource.java:649)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.StatsDataSource.getResponseHeaders(StatsDataSource.java:107)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.StatsDataSource.open(StatsDataSource.java:86)
W/hallengebattle(28007): at com.google.android.exoplayer2.source.ProgressiveMediaPeriod$ExtractingLoadable.load(ProgressiveMediaPeriod.java:1016)
W/hallengebattle(28007): at com.google.android.exoplayer2.upstream.Loader$LoadTask.run(Loader.java:417)
W/hallengebattle(28007): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
W/hallengebattle(28007): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
W/hallengebattle(28007): at java.lang.Thread.run(Thread.java:923)
I/hallengebattle(28007): Waiting for a blocking GC Alloc
I/hallengebattle(28007): Alloc young concurrent copying GC freed 0(0B) AllocSpace objects, 0(0B) LOS objects, 0% free, 192MB/192MB, paused 53us total 23.362ms
Status: Issue closed
Answers:
username_1: https://github.com/username_1/betterplayer/issues/641#issuecomment-894697721 |
caolan/async | 110518159 | Title: optional callback never called in async.each when finished with no errors
Question:
username_0: I do not see the optional callback being called after all items are processed, if there are no errors.
for example:
function iterator(item, callback) { if (item==4) callback('oops!'); else console.log(item); }
function callback(err) { if (err) console.log('there was an error: ' + err); else console.log('all done, no errors'); }
async.each([1,2,3], iterator, callback);
I would expect to see 'all done, no errors' logged to the console, but the callback(err) function passed to async.each never seems to be called.
Answers:
username_1: @username_0 you need to run the callback without arguments or with an explicit null argument, see https://github.com/caolan/async#eacharr-iterator-callback.
``` javascript
function iterator(item, callback) {
  if (item==4) {
    callback('oops!');
  } else {
    console.log(item);
    callback(); // <--- run the callback without arguments or with an explicit null argument
  }
}
...
```
Status: Issue closed
|
zwbetz-gh/papercss-hugo-theme | 754065571 | Title: When using something like domain.com/blog, the main brand image redirects to domain.com instead of domain.com/blog
Question:
username_0: **Is your feature request related to a problem? Please describe.**
I setup my Hugo config.toml to have a `baseURL` as https://some-domain.com/blog. The main title nav (the nav-brand) logo links to https://some-domain.com which is some weird behavior.
**Describe the solution you'd like**
Some way in the config to specify this link, like the other nav parameters.
**Describe alternatives you've considered**
Tried tinkering around with the config but couldn't fix it from there.
**Additional context**
This is the link I am talking about.

Answers:
username_1: The nav title link can now be set by using the `navTitleLink` param. See commit 573b3dd0be2a4af66437e03ed8b7c149d202acb1
Status: Issue closed
|
zaragoza-sedeelectronica/zaragoza-sedeelectronica.github.io | 64720945 | Title: Provide structured price information
Question:
username_0: The price field is provided as free-text information; to be able to work with this field, the information would need to be offered in a more structured form.
Answers:
username_1: ## Analysis of some possible alternatives
It is important to keep in mind that there is no specific standard for specifying the prices of places (shops, facilities, etc.), so the solution adopted now could be modified in the future if a standard eventually emerges in this area. Some of the proposals made so far are the following:
* https://schema.org/PriceSpecification and https://schema.org/price allow specifying prices for items (for example, tickets) offered in an online marketplace. It is based on the GoodRelations vocabulary and is also used for specifying public procurement data (http://contsem.unizar.es/def/sector-publico/pproc), which is being used in the Zaragoza API.
* The Foursquare API (https://developer.foursquare.com/docs/venues/explore) allows exploring places and provides basic information about their price. Basically, it uses four possible values, from 1 to 4, according to a place's price range. Taken directly from the documentation: Currently the valid range of price points are [1,2,3,4], 1 being the least expensive, 4 being the most expensive. For food venues, in the United States, 1 is < $10 an entree, 2 is $10-$20 an entree, 3 is $20-$30 an entree, 4 is > $30 an entree.
Other services such as the Google Places API do not provide this type of information.
The simple proposal by @username_3 in his feedback on the use of the Zaragoza API (https://gist.github.com/username_3/8707031fb834d80a7828) can also be considered.
username_1: Taking into account the comments provided above, the proposal that can be considered most suitable is a small variation on what @username_3 proposed.
"price": {
"zaragoza-card": true,
"fares": [
{ "@type": "gr:UnitPriceSpecification", "gr:hasCurrencyValue": "3"^^xsd:float, "gr:hasCurrency": "EUR", "fareGroup": "regular", "minSize": "1"},
{ "@type": "gr:UnitPriceSpecification", "gr:hasCurrencyValue": "5"^^xsd:float, "gr:hasCurrency": "EUR", "fareGroup": "group", "minSize": "10"}
....
]
}
username_2: Taking into account the different proposals and contributions, we are thinking of opting for the following price description:
```json
{
"price": [
{
"hasCurrencyValue": 3,
"hasCurrency": "EUR",
"fareGroup": "regular",
"minSize": "1"
},
{
"hasCurrencyValue": 0,
"hasCurrency": "EUR",
"fareGroup": "zaragoza-card",
"minSize": "1"
},
{
"hasCurrencyValue": 5,
"hasCurrency": "EUR",
"fareGroup": "group",
"minSize": "10"
}
]
}
```
username_3: At first glance, regarding the parameter names, I don't know whether `hasCurrency` and `hasCurrencyValue` sound natural in semantic web terms, but for a REST API `currency` sounds better to me for the first one, and `amount`, `price`, or `fare` for the second. Also, `minSize`: at first I didn't understand that it referred to groups of people... maybe something more descriptive like `minPeople`? And `fareType` instead of `fareGroup`?
username_1: The currency ones are important, since they are the ones also used in schema.org, based on GoodRelations, and they are used on many e-commerce sites. The others can be changed, no problem, if they turn out to be more intuitive that way.
username_2: The price information ends up as follows:
```javascript
"price": [
{
"fareGroup": "string",
"hasCurrencyValue": 0,
"hasCurrency": "string",
"minSize": "string"
}
]
```
Through the *price* array, multiple prices can be set by making different combinations with the *fareGroup* field, which specifies the ticket type/group; a different price and the required number of people (an individual ticket has the value 1) can also be set for each one:
- Gratuita (free)
- Anticipada (advance purchase)
- Taquilla (box office)
- Normal (regular)
- Grupo (group)
- Jóvenes (youth)
- Niños/as (children)
- Jubilados/Pensionistas (retirees/pensioners)
- Desempleados (unemployed)
**Note**: The types/groups may grow in the future.
Example of an activity:
```json
{
"id": 000000,
"title": "Título de la actividad",
"price": [
{
"fareGroup": "Anticipada",
"hasCurrencyValue": 8,
"hasCurrency": "EUR",
"minSize": "1"
},
{
"fareGroup": "Normal",
"hasCurrencyValue": 10,
"hasCurrency": "EUR",
"minSize": "1"
},
{
"fareGroup": "Desempleados",
"hasCurrencyValue": 6,
"hasCurrency": "EUR",
"minSize": "1"
},
],
...
}
```
The **precioEntrada** and **comentariosEntrada** fields are kept at the activity level, allowing free text for those activities that have some special price-related information and for those activities where a structured price cannot be established.
Status: Issue closed
|
cybercongress/cyber | 582846591 | Title: White Paper translations
Question:
username_0: We believe that it is vital to spread the information about our protocol for its success. It is important to get this information to as many people as possible, hence we need translations.
We are ready to offer bounties for this work. Either via Gitcoin or (possibly) in our own tokens.
The languages we require first-hand, are:
(based on virality, the crypto market, and censorship of information related to that particular language):
- [ ] Mandarin
- [ ] Korean
- [ ] Japanese
- [ ] Russian
- [ ] Spanish
- [ ] Turkish
- [ ] French
- [ ] Arabic
- [ ] Portuguese
- [ ] Hindi
- [ ] German
- [ ] Punjabi
- [ ] Bengali
- [ ] Vietnamese
*If it's not marked with a V, the bounty is up for grabs. Let's discuss it right here!
Answers:
username_1: Hello
I'm ready to do the Arabic translation if still needed
username_0: Hey @username_1 - yes, you can go ahead. The bounty is still for grabs too
username_1: Thank you @username_0, I will do it ASAP.
username_0: Don't start just yet please. There is an update coming till the end of the week. I have reserved the translation for you though. I will ping you on here as soon as the update is pushed (2-3 days)
username_2: I want to make russian translation. Have a big experience
username_0: Do you mind showing any of your previous work In Russian (I mean crypto)? One example would be enough
username_2: https://bitcointalk.org/index.php?topic=5216188.msg53562043#msg53562043
username_0: You can surely go ahead if you wish (please do not start till I give the green light in a few days); however, please bear in mind that the Russian translation will be the one with the most questions for us (what I mean is, this is a WP translation, so please be prepared for corrections while translating, etc)
username_3: I am willing to make Korean translation if needed. This is my previous job
https://docs.google.com/spreadsheets/d/1Y66YdNb76ZUxcLd8NNhEAtyAzuVTpenpYIyQ8Dfrm-U/edit?usp=sharing
username_0: Hey! Korean is not taken yet! Please share a contact, I can contact you by to confirm your intentions
username_3: This is my email address: <EMAIL>
my telegram: @username_3
Thanks very much
username_0: @username_1 can you provide me your contact please?
username_1: Sure
Telegram : @MostafaMohamedGamal
username_4: Can I make a translation in Chinese? I have had experience translating Whitepaper, ANN, Bounty, Web .... This is my previous job: https://docs.google.com/spreadsheets/d/1noNeg6__CF_UdinBosyiZuS5JIgd9jLi3B9Um78hffk/edit#gid=0
Telegram: @Thompsonal
Gmail: <EMAIL>
**PM me if accepted**
username_2: So, what about russian translation?
username_0: @username_2 It is reserved already
@username_4 i did, no answer
username_5: Dear manager, I would like to be dedicated to the Portuguese translation, please send me a message, if accepted. Telegram username: @Takisn This is my previous job
Portfolio link: https://docs.google.com/spreadsheets/d/1gegVNewxc9W7dJxj8ZlM4VoazmgBqzCdHSpjvqdXLi4/edit?usp=sharing
username_6: Dear Sir. I am ready for Japanese translation
Bct name: Amaraly
username_0: @username_6 Awesome! You may start IF you agree with the translation guidelines. I have sent you a PM on BTT and published them on the BTT thread
username_7: Hello. I'm ready and replied to your pm on Bitcointalk. My bitcointalk username is: anobtc, My Telegram id: @cajmartin
Thanks!
username_0: Excellent! I replied to you
username_0: Hey @username_1 are you still working on the translation?
username_0: @username_1 Please note, that I took you off the reserve for this bounty
username_8: hello do you need turkish translation i can join translation
username_9: @username_0 I can't click the #filipino.
username_10: I can do Finnish! I'm a native Finn and study English at uni.
username_0: Already reserved it (Ken in brackets). The V is after its done =)
username_0: Interesting. Any translations you've done before?
username_11: hello i want to join italian translation if you need to contact me on telegram https://t.me/username_11
username_0: Will contact you!
username_12: Hello Cyber team, I'm username_12 from bitcointalk. I'm a Polish translator and community manager; I have been in crypto since 2019 and have a lot of experience in community management
You can see my proposal here:
https://docs.google.com/spreadsheets/d/1sNaYIO2AswslUa_1LKTwyHRX5mx9yI9juw__gLpMwx4/edit?usp=sharing
I will create social media channels Cyber Polish and promote on it. Moreover, I have a good relationship with major crypto communities and KOLs in Polish, which can help your project with many investors in Polish.
My email: <EMAIL> and telegram: https://t.me/username_12
Wait to hear from you.
username_13: hello i would like to put the Arabic translation as a link to my previous work : https://bom.to/cIncxO . Please contact telegram if you need : @abicll
username_0: @username_12 Hey! We do not hire community managers. The project lives on donations and community governance. Please see the ambassador bounty program for this: https://cybercongress.ai/post/obep/
username_0: @username_13 Have you done any technical translations? Cyber's WP is a bit more technical than your previous work
username_13: this is my job . I used to use the m.d file for my work, I guarantee I will do a good job
https://bom.to/a2NTXm
username_0: OBEP is irrelevant
Status: Issue closed
|
shallinta/material-ui-tree | 386067460 | Title: How to avoid displaying + or - icon for leaf nodes that have no children?
Question:
username_0: Currently, if children is an empty array at a leaf node, there is still a + or - icon on the left.
Is it possible to hide the icon for those nodes with no children?
Answers:
username_1: When clicking the '+' of a node with no children, the function `requestChildrenData` will be called. You are able to add some children to this node dynamically, say by starting an ajax request. So it is not easy to tell whether a node with no children really has no children or not, and there is no certain flag that could be used to hide the icon. You may clone this repo and try to improve this component if you have an idea. I'd be glad to see a PR.
P.S. `material-ui-tree` is inefficient. I suggest switching to another tree component or library, like `json-formatter-js`.
username_0: Thanks for your candor. I appreciate that. Do you have any recommendations for something that works well as a react component like yours? There's another material ui treeview component, but I find yours to be better.
username_2: I have the same problem :( |
tjwilson90/turbo-hearts | 579397520 | Title: Fix double charging
Question:
username_0: On subsequent rounds of charging you can select already charged cards.
Status: Issue closed
Answers:
username_1: https://github.com/tjwilson90/turbo-hearts/commit/239849de8b06d315ad723a80e9c6cd74fd8e5fb2
username_1: On subsequent rounds of charging you can select already charged cards.
Status: Issue closed
username_1: https://github.com/tjwilson90/turbo-hearts/pull/77 |
UniStuttgart-VISUS/damast | 1095270807 | Title: Error/bug loading state
Question:
username_0: After changing the layout I used the _Persist state_ feature to save a json locally. Then, I used the _Reset layout_ button. After that, I loaded the visualization state resulting in the following error:
```
Could not load state! Reason:
[
{
"instanceLocation": "#",
"keyword": "properties",
"keywordLocation": "#/properties",
"error": "Property \"map-state\" does not match schema."
},
{
"instanceLocation": "#/map-state",
"keyword": "properties",
"keywordLocation": "#/properties/map-state/properties",
"error": "Property \"zoom\" does not match schema."
},
{
"instanceLocation": "#/map-state/zoom",
"keyword": "type",
"keywordLocation": "#/properties/map-state/properties/zoom/type",
"error": "Instance type \"number\" is invalid. Expected \"integer\"."
},
{
"instanceLocation": "#/map-state",
"keyword": "additionalProperties",
"keywordLocation": "#/properties/map-state/additionalProperties",
"error": "Property \"zoom\" does not match additional properties schema."
},
{
"instanceLocation": "#/map-state/zoom",
"keyword": "false",
"keywordLocation": "#/map-state/zoom",
"error": "False boolean schema."
},
{
"instanceLocation": "#",
"keyword": "additionalProperties",
"keywordLocation": "#/additionalProperties",
"error": "Property \"map-state\" does not match additional properties schema."
},
{
"instanceLocation": "#/map-state",
"keyword": "false",
"keywordLocation": "#/map-state",
"error": "False boolean schema."
}
]
```
Answers:
username_0: Update: After logging out and in again and uploading the json right away, I seem to encounter the same error.
username_1: Yes, that is my fault. I recently updated the default center and zoom level of the map to better show the *"public"* data extent. In that vein, I also allowed zoom levels that are not integers, and set the default level to, I think, 4.5. I forgot that the schema expects an integer there. I will fix it shortly.
Status: Issue closed
|
linkerd/linkerd2 | 795483492 | Title: multicluster extension cannot be uninstalled if Linkerd is not installed
Question:
username_0: If the main Linkerd control plane has been uninstalled, it is no longer possible to uninstall the multicluster extension.
Uninstalling the multicluster extension should not require the main control plane.
```
bin/linkerd mc uninstall | k delete -f -
Error: you need Linkerd to be installed in order to install multicluster addons
Usage:
linkerd multicluster uninstall [flags]
```
Answers:
username_1: I read the issue and the related files, and I found this error to be triggered from https://github.com/linkerd/linkerd2/blob/main/multicluster/cmd/uninstall.go#L63, where `buildMulticlusterInstallValues()` is being called from https://github.com/linkerd/linkerd2/blob/main/multicluster/cmd/install.go#L175
The error is output when we try to extract `linkerd-config-map`, and the values from the config map are used to create a new instance of InstallValues for multicluster.
I thought about removing the step where we try to extract the `linkerd-config-map`, but that's not possible.
Any views??
username_2: @username_1 I'd recommend looking at how uninstall works for other extensions such as Jaeger and Viz. Instead of building install values, we look for the `linkerd.io/extension=...` label and delete those resources. Take a look at Jaeger's uninstall [here](https://github.com/linkerd/linkerd2/blob/74950e94076b416813125460df02ca43a16aaa2b/jaeger/cmd/uninstall.go#L37-L39).
If we change Multicluster's uninstall to work in a similar way we should be able to avoid the issue you described above.
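For illustration only, here is a generic client-go sketch of that approach: list cluster-scoped resources by the extension label, delete them, then delete the extension namespace (which removes the namespaced resources with it). It does not use linkerd's own helpers, and the label and namespace names are taken from this discussion rather than from the codebase:
```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	// Label discussed above; assumed to be present on the extension's resources.
	opts := metav1.ListOptions{LabelSelector: "linkerd.io/extension=linkerd-multicluster"}

	// Cluster-scoped resources are not removed when the namespace is deleted,
	// so they are looked up by label and deleted explicitly.
	// ClusterRoleBindings and CRDs would be handled the same way.
	clusterRoles, err := clientset.RbacV1().ClusterRoles().List(ctx, opts)
	if err != nil {
		log.Fatal(err)
	}
	for _, cr := range clusterRoles.Items {
		fmt.Println("deleting ClusterRole", cr.Name)
		if err := clientset.RbacV1().ClusterRoles().Delete(ctx, cr.Name, metav1.DeleteOptions{}); err != nil {
			log.Fatal(err)
		}
	}

	// Deleting the extension namespace removes the namespaced resources with it.
	if err := clientset.CoreV1().Namespaces().Delete(ctx, "linkerd-multicluster", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
}
```
The sample the contributor shares below uses linkerd's internal `resource.FetchKubernetesResources` helper and renders the matched resources for `kubectl delete` instead of deleting them directly.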
username_1: Hi, @username_2 @username_0 I looked into the uninstallation process of Jaeger and Viz. The Multicluster process is more complex than Viz and Jaeger, so I thought I'd show you sample code before starting to make changes in the file.
So we have to change the code after "un-linking the cluster to uninstall the multicluster", and instead of building a new instance of install values [(here)](https://github.com/linkerd/linkerd2/blob/main/multicluster/cmd/uninstall.go#L63),
change it to something like this:
```
resources, err := resource.FetchKubernetesResources(ctx, k8sAPI,
metav1.ListOptions{LabelSelector: "linkerd.io/extension=linkerd-multicluster"},
)
if err != nil {
return err
}
for _, r := range resources {
if err := r.RenderResource(os.Stdout); err != nil {
return fmt.Errorf("error rendering Kubernetes resource: %v", err)
}
}
return nil
```
i.e. fetching the related resources and deleting them
username_2: @username_1 Yep that is more similar to Jaeger and Viz extension uninstalls, so it's what I would expect.
I think something you'll need to double check is that the resources in the multicluster templates that need the `linkerd.io/extension: linkerd-multicluster` label have it.
The fact that `namespace.yaml` has it is good because once that is deleted, it should also delete all the namespace scoped resources meaning that the deployment and services don't need it—they'd be included in deleting the namespace.
The cluster role and cluster role binding in `remote-access-service-mirror-rbac.yaml` may need the label added? I would look at the Jaeger extension for reference on what `Kind`s of resource should have the `linkerd.io/extension` label. Also just testing your change and making sure everything is deleted is good too; those resources may be fine without the label.
Also, feel free to leave any more questions! I'm happy to help out.
username_3: Hi @username_1 , just touching base :wave: Is there anything we can help you with?
username_1: @username_2 I checked that the multicluster templates do contain the label `linkerd.io/extension: linkerd-multicluster`.
I took the rbac.yaml files of Jaeger and Viz as references to replicate, and added the label to the `kind`s of resources that should have it (ClusterRole, ClusterRoleBinding); (ServiceAccount had the label in the case of Viz but not in Jaeger. Should I add the label?)
I think the next step is to replace the deletion code with the one I mentioned above :fire:
@username_3 Sorry for no update for so many days, I was away due to bad health. And I do need a favor, as Kevin mentioned I should test my changes, could you explain how exactly I should test, in an efficient way :sweat_smile:
username_3: You can use the instructions in #5374 to set up a multicluster instance. Then you can check the output of `linkerd mc uninstall` and verify it lists all the multicluster-related resources that are in your source cluster.
username_1: Hi, @username_2 I tested out the changes. These are the outputs with latest linkerd edge release
```
$ linkerd mc uninstall | kubectl delete -f -
namespace "linkerd-multicluster" deleted
configmap "linkerd-gateway-config" deleted
deployment.apps "linkerd-gateway" deleted
service "linkerd-gateway" deleted
serviceaccount "linkerd-gateway" deleted
clusterrole.rbac.authorization.k8s.io "linkerd-service-mirror-remote-access-default" deleted
serviceaccount "linkerd-service-mirror-remote-access-default" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-service-mirror-remote-access-default" deleted
customresourcedefinition.apiextensions.k8s.io "links.multicluster.linkerd.io" deleted
```
and these are the outputs after making changes and adding labels to rbac.yaml
```
$ bin/linkerd mc uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-service-mirror-remote-access-default" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-service-mirror-remote-access-default" deleted
namespace "linkerd-multicluster" deleted
```
username_1: I only added the label i.e. `linkerd.io/extension: linkerd-multicluster` to ClusterRole and ClusterRoleBinding
username_1: Hey, @username_2 @username_3
After testing out multiple times, I added the label to all the resources of multicluster. However, when I tried to delete the resources, only a few of them were deleted
```
$ bin/linkerd mc uninstall | kubectl delete -f -
clusterrole.rbac.authorization.k8s.io "linkerd-service-mirror-remote-access-default" deleted
clusterrolebinding.rbac.authorization.k8s.io "linkerd-service-mirror-remote-access-default" deleted
customresourcedefinition.apiextensions.k8s.io "links.multicluster.linkerd.io" deleted
namespace "linkerd-multicluster" deleted
```
I raised a PR to showcase the changes I made.
username_3: Thanks @username_1, I'll comment in #5744
Status: Issue closed
|
squalrus/merge-bot | 866851977 | Title: `No commit found for SHA: fix_3353` from `ref` as `heads/branch_name` vs `sha`
Question:
username_0: After updating `ref` to `heads/branch_name` rather than `sha` (https://github.com/username_0/merge-bot/pull/49), it was [reported](https://github.com/username_0/merge-bot/pull/49#issuecomment-821207553) that a PR failed with a `No commit found for SHA: fix_3353` error. |
ansible-semaphore/semaphore | 115175832 | Title: Is there a reason why bugsnag is a dependency?
Question:
username_0: I am trying to understand why bugsnag is a dependency. I reviewed bugsnag and they don't seem to have a free plan. They have a free trial plan, but after the current 30 day limit the bugsnag dependency will cause issues.
It would be nice if the dependency were ignored when the configuration was not supplied in the credentials.json file.
Answers:
username_0: This was my mistake. It doesn't seem to be a dependency for running the application. I just didn't run it with proper credentials.
Status: Issue closed
username_1: Well it shouldn't really be there, because it is viewed as a third-party analytics tool (ublock blocks it.)..
See #41 for an ongoing discussion with myself about the topic :P |
edvin/tornadofx | 291500899 | Title: Exception by SmartResizePolicy if table.isTableMenuButtonVisible is enabled
Question:
username_0: I'm using a `treetableview` and have set the property `isTableMenuButtonVisible` to true.
When I hide/show one of the columns, the SmartResizePolicy throws an exception.
```
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at tornadofx.SmartResizeKt$resizeColumnsToFitContent$doResize$1.invoke(SmartResize.kt:366)
at tornadofx.SmartResizeKt$resizeColumnsToFitContent$doResize$1.invoke(SmartResize.kt)
at tornadofx.SmartResizeKt.resizeColumnsToFitContent(SmartResize.kt:374)
at tornadofx.SmartResizeKt.resizeColumnsToFitContent$default(SmartResize.kt:361)
at tornadofx.SmartResizeKt.resizeCall(SmartResize.kt:412)
at tornadofx.TreeTableSmartResize.call(SmartResize.kt:122)
at tornadofx.TreeTableSmartResize.call(SmartResize.kt:120)
at javafx.scene.control.TreeTableView.updateVisibleLeafColumns(TreeTableView.java:1908)
at javafx.scene.control.TreeTableView.lambda$new$117(TreeTableView.java:816)
at javafx.beans.WeakInvalidationListener.invalidated(WeakInvalidationListener.java:83)
at com.sun.javafx.binding.ExpressionHelper$Generic.fireValueChangedEvent(ExpressionHelper.java:349)
at com.sun.javafx.binding.ExpressionHelper.fireValueChangedEvent(ExpressionHelper.java:81)
at javafx.beans.property.BooleanPropertyBase.fireValueChangedEvent(BooleanPropertyBase.java:103)
at javafx.beans.property.BooleanPropertyBase.markInvalid(BooleanPropertyBase.java:110)
at javafx.beans.property.BooleanPropertyBase.set(BooleanPropertyBase.java:144)
at com.sun.javafx.binding.BidirectionalBinding$BidirectionalBooleanBinding.changed(BidirectionalBinding.java:264)
at com.sun.javafx.binding.BidirectionalBinding$BidirectionalBooleanBinding.changed(BidirectionalBinding.java:227)
at com.sun.javafx.binding.ExpressionHelper$Generic.fireValueChangedEvent(ExpressionHelper.java:361)
at com.sun.javafx.binding.ExpressionHelper.fireValueChangedEvent(ExpressionHelper.java:81)
at javafx.beans.property.BooleanPropertyBase.fireValueChangedEvent(BooleanPropertyBase.java:103)
at javafx.beans.property.BooleanPropertyBase.markInvalid(BooleanPropertyBase.java:110)
at javafx.beans.property.BooleanPropertyBase.set(BooleanPropertyBase.java:144)
at javafx.scene.control.CheckMenuItem.setSelected(CheckMenuItem.java:132)
at com.sun.javafx.scene.control.skin.ContextMenuContent$MenuItemContainer.doSelect(ContextMenuContent.java:1394)
at com.sun.javafx.scene.control.skin.ContextMenuContent$MenuItemContainer.lambda$createChildren$343(ContextMenuContent.java:1358)
at com.sun.javafx.event.CompositeEventHandler$NormalEventHandlerRecord.handleBubblingEvent(CompositeEventHandler.java:218)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:80)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:54)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Scene$MouseHandler.process(Scene.java:3757)
at javafx.scene.Scene$MouseHandler.access$1500(Scene.java:3485)
at javafx.scene.Scene.impl_processMouseEvent(Scene.java:1762)
at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2494)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:394)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:295)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.lambda$handleMouseEvent$353(GlassViewEventHandler.java:432)
at com.sun.javafx.tk.quantum.QuantumToolkit.runWithoutRenderLock(QuantumToolkit.java:389)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:431)
at com.sun.glass.ui.View.handleMouseEvent(View.java:555)
at com.sun.glass.ui.View.notifyMouse(View.java:937)
at com.sun.glass.ui.gtk.GtkApplication._runLoop(Native Method)
at com.sun.glass.ui.gtk.GtkApplication.lambda$null$48(GtkApplication.java:139)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at com.sun.javafx.scene.control.skin.TreeTableViewSkin.resizeColumnToFitContent(TreeTableViewSkin.java:346)
... 59 more
```
Answers:
username_1: Hi! This has been reported several times earlier, and in every case it has been because one or more table columns don't have a defined header label value. This is required for the JavaFX function `TreeTableViewSkin.resizeColumnToFitContent()` to run correctly. Please verify this. If that doesn't solve your problem, can you post a minimal sample app that showcases the issue?
username_0: @username_1 Thanks for the reply!
Here's an example to reproduce this. As soon as I hide two columns, the exception occurs.
I'm running this with Kotlin 1.2.21, Java 1.8_162 and tornadofx 1.7.14.
```kotlin
package eu.jadev

import javafx.application.Application
import javafx.scene.control.TreeItem
import tornadofx.*

class MyApp : App(TreeTableView::class)

class TreeTableView : View() {
    private val rootnode: TreeItem<String> = TreeItem("root node")

    init {
        rootnode.children.add(TreeItem("child"))
    }

    override val root = treetableview(rootnode) {
        isTableMenuButtonVisible = true
        smartResize()
        column<String, String>("Name 1", { it.value.value.toProperty() })
        column<String, String>("Name 2", { it.value.value.toProperty() })
        column<String, String>("Name 3", { it.value.value.toProperty() })
    }
}

fun main(args: Array<String>) {
    Application.launch(MyApp::class.java, *args)
}
```
username_1: Perfect, thanks! I was able to solve the problem by not calling resize for hidden columns. The same bug applied to TableView, so that's fixed now as well. I've pushed a new snapshot to sonatype if you want to try it out.
username_0: @username_1 Thanks!
Hiding is working now.
Showing the hidden columns again is still throwing that exception, though...
username_1: I've committed another fix. When you change a column, the skin is in a state where resize just can't run, so I'm simply catching that error now.
username_0: @username_1 Thanks! I'll test with the next snapshot :)
username_2: I am using IntelliJ IDEA 2018.2, tornadofx 1.7.17, and Kotlin 1.2.51.
```kotlin
class myview : View("basic") {
    override val root = hbox {
        label("hello world") {
        }
    }
}
```
but this problem arises:
`java.lang.reflect.InvocationTargetException`
Help me please, I am a beginner in tornadofx.
Status: Issue closed
username_1: @username_2 Don't hijack other issues :) Create a new issue for your problem, including the complete code and complete stack trace showing the issue. Also include your Java version.
username_2: ok |
sef-global/sef-site | 547497472 | Title: Set text align to Justify on flagship programs section on home page
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently, the text in the flagship programs section on the home page is center-aligned, which makes it hard to read.
**Describe the solution you'd like**
Set text-align to justify.
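A minimal sketch of the CSS change (the selector here is hypothetical and should be matched to the section's actual class name):
```css
/* hypothetical selector for the flagship programs section */
.flagship-programs p {
  text-align: justify;
}
```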
Answers:
username_1: I'm working on it.
Status: Issue closed
|
pengqiangsheng/hexo | 440564100 | Title: Using nginx as a reverse proxy to map subdomains to applications on different ports of a public IP | 知与南_南风知我意,吹梦到西周
Question:
username_0: https://inner.ink/2019/04/21/%E7%94%A8nginx%E9%85%8D%E7%BD%AE%E5%8F%8D%E5%90%91%E4%BB%A3%E7%90%86%E5%AE%9E%E7%8E%B0%E4%BA%8C%E7%BA%A7%E5%9F%9F%E5%90%8D%E9%85%8D%E7%BD%AE%E5%88%B0%E5%85%AC%E7%BD%91ip%E7%9A%84%E4%B8%8D%E5%90%8C%E7%AB%AF%E5%8F%A3%E7%9A%84%E5%BA%94%E7%94%A8/#more
How do you use nginx to map subdomains to different ports on one IP? As everyone knows, a web application on ip:80 can be reached directly by the IP, because when you type a bare IP the browser defaults to requesting ip:80. Taking advantage of this, we can point a domain at the IP so that visitors only need to type www.domain.com to reach the web page; all the major sites work this way. But what about ports other than 80? A single server is unlikely to host just one web application, so the others have to be reached via IP + port…
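A minimal sketch of the kind of server block this is asking about; the subdomain, upstream port, and headers are assumptions rather than values from the post:
```nginx
server {
    listen 80;
    server_name blog.example.com;          # the subdomain to expose

    location / {
        proxy_pass http://127.0.0.1:4000;  # the app listening on a non-80 port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
|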
NationalSecurityAgency/ghidra | 871772937 | Title: PPC: e_li D, SIMM20 incorrectly calculates the immediate value
Question:
username_0: **Describe the bug**
SIMM20 only exports the lowest 16 bits of a 20-bit immediate value. This causes any immediate greater than 0xFFFF or less than 0x0000 to be incorrect.
https://github.com/NationalSecurityAgency/ghidra/blob/5234839b24c418219e35d0902161ca47ce98ce91/Ghidra/Processors/PowerPC/data/languages/ppc_vle.sinc#L48
**To Reproduce**
Find any `e_li` instruction that uses an immediate larger than 16 bits.
Ghidra's assembler works for testing negative values, just remember to set the VLE register to 1.
**Expected behavior**
SIMM20 should export all 20 bits of the immediate. Values like -0x1 and 0x12345 should work.
```
Actual: 70 7f 07 ff e_li r3, 0xffff
Expected: 70 7f 07 ff e_li r3, -0x1
```
```
Actual: 70 82 52 34 e_li r4, 0x1234
Expected: 70 82 52 34 e_li r4, 0x12345
```
**Environment (please complete the following information):**
- OS: Windows 10 build 19041
- Java Version: 14.0.2
- Ghidra Version: 9.2.0
- Ghidra Origin: official ghidra-sre.org distro
**Additional context**
Changing the size of the export to 3 bytes instead of 2 seems to fix the problem, but I'm not sure this is the best solution.
Answers:
username_1: We have a fix in for this that should be ready in the next couple of days. I'll make a note of this issue when it gets merged.
username_2: @username_1 if you're looking into PPC fixes, any chance for some attention on 1672?
username_1: Fixed by b7499e1bc1c4f097de2312fad3acae990fb2ac43
Status: Issue closed
|
ecomfe/vue-echarts | 864922144 | Title: Update data from sibling component
Question:
username_0: ## The type of this issue / Issue 类型
- [x] Feature request / 新特性需求
- [ ] Bug report / Bug 报告
## Not introduced by ECharts / 非 ECharts 本身问题
Problems about ECharts itself are not handled in this repo. / 本 repo 不负责处理 ECharts 本身的问题。
- [ ] I've checked it's not a problem of ECharts itself. / 我已检查过,这个问题非 ECharts 本身的问题。
## Details / 详情
### Vue version / Vue 版本
- [ ] Vue 3
- [x] Vue 2
### How are you importing Vue-ECharts? / 你是如何引入 Vue-ECharts 的?
- [x] Importing `vue-echarts` with a bundler environment / 在 webpack 等打包工具环境下引入 `vue-echarts`
- [ ] Using the global variable by including `<script>` tags / 通过 `<script>` 标签引入全局变量
### The version of Vue-ECharts you are using / Vue-ECharts 的版本
`"vue-echarts": "^6.0.0-rc.4"`
---
This is my scenario:
I have a page where a load two components:
```vue
<template>
<div>
<component-a />
<component-b />
</div>
</template>
```
In `ComponentA` I have a calendar where I can select a given date. Once the date is changed, I then request some data from an API.
In `ComponentB` I have an `Echart`:
```vue
<template>
<v-chart
class="__chart"
:option="this.$store.state.chartData"
/>
</template>
```
As you can see, I bound `option` to the data in the store that I use for this page.
Therefore, once I get the data from the API in `ComponentA`, I save it in `chartData`. The goal was then to automatically update the chart in `ComponentB`.
The issue is that I get the following error messages:
```
[Vue warn]: Error in nextTick: "Error: `setOption` should not be called during main process."
```
```
Error: `setOption` should not be called during main process.
```
Any ideas?
Answers:
username_1: Please provide a runnable and minimal reproduction. Thank you.
username_0: Thank you.
I have just found out that this was caused by not setting `yAxis` in `chartData`. Everything is working now.
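For anyone hitting the same error: the option object bound to the chart needs a complete axis definition. A minimal sketch, with placeholder axis types and series data rather than the real ones used here:
```js
// Shape of the option object stored in the store; values are placeholders.
const chartData = {
  xAxis: { type: 'category', data: ['a', 'b', 'c'] },
  yAxis: { type: 'value' }, // omitting this is what triggered the `setOption` error
  series: [{ type: 'bar', data: [1, 2, 3] }]
}
```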
Status: Issue closed
|
Ailln/image-classification | 753309777 | Title: I change data source include multi classes,but it's not work 'IndexError: Target 3 is out of bounds.'
Question:
username_0: I have no idea.
Answers:
username_1: try to change the class config in datas/class_to_idx.json
username_0: I want to train a model with seven classes, but I don't know how to modify it. Can you be more detailed? I just want to add a few more classes.
username_1: You can compare it against the content in that file. (If you don't understand the process, it is difficult to run.)
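If that file maps class names to indices (check your copy to confirm the exact shape), a seven-class version might look like the following, with the placeholder names replaced by your own labels:
```json
{
  "class_0": 0,
  "class_1": 1,
  "class_2": 2,
  "class_3": 3,
  "class_4": 4,
  "class_5": 5,
  "class_6": 6
}
```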
username_0: Thanks, it works now.
Status: Issue closed
|
PrismarineJS/mineflayer | 666924713 | Title: Please help using bot.clickWindow(slot, mouseButton, mode, cb)
Question:
username_0: what values should be used in this request? bot.clickWindow(slot, mouseButton, mode, cb)
Answers:
username_1: Just take a look at the docs...
1. https://github.com/PrismarineJS/mineflayer/blob/master/docs/api.md#botclickwindowslot-mousebutton-mode-cb
2. https://wiki.vg/Protocol#Click_Window
username_0: what is cb?
username_1: Docs are your friend: https://github.com/PrismarineJS/mineflayer/blob/master/docs/tutorial.md#callbacks
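Putting the two answers together, a call might look like the following; the slot number is an illustrative assumption, while button 0 and mode 0 correspond to a plain left-click per the protocol docs linked above:
```js
// Left-click (mouseButton 0) with a normal click (mode 0) on slot 36 of the open window.
bot.clickWindow(36, 0, 0, (err) => {
  if (err) {
    console.log('click failed:', err)
  } else {
    console.log('click succeeded')
  }
})
```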
Status: Issue closed
username_2: what values should be used in this request? bot.clickWindow(slot, mouseButton, mode, cb)
Status: Issue closed
|
gogs/gogs | 1173041889 | Title: How-To document request (MySQL to Postgresql migration)
Question:
username_0: Greetings!
I've been trying to migrate my gogs database from MySQL to Postgres. I simply used `pgloader` and, though it seems to have been a success, my gogs instance presents no pre-existing data.
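For reference, a typical `pgloader` invocation for this kind of migration looks roughly like the following; the credentials, hosts, and database names are placeholders:
```
pgloader mysql://gogs:secret@127.0.0.1/gogs postgresql://gogs:secret@127.0.0.1:5432/gogs
```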
My gogs instance is not the latest. Must I upgrade first?
```
VERSION: 0.12.3
```
I have not found anything useful in the logs, even after setting to Trace.
I see that a guide is referenced from a [pre-existing issue](https://github.com/gogs/gogs/issues/1388), but `discuss.gogs.io` appears to be down.
Can I read it elsewhere? Is there anything I can do to find helpful logs? Perhaps an `app.ini` example?
Here is the text from the aforementioned issue:
Hi @aventrax , forgot to update this issue. This feature has been supported quite a while, you can read more about how to do that: https://discuss.gogs.io/t/how-to-backup-restore-and-migrate/991
_Originally posted by @unknwon in https://github.com/gogs/gogs/issues/1388#issuecomment-331248655_ |
Sidoine/Ovale | 348748672 | Title: Can't find icon group
Question:
username_0: Just downloaded the alpha today, and I'm having a problem that does crop up for me every once in a while. I've confirmed the addon is enabled in-game, but I can't find the icon group anywhere. This is a fresh install, so I'm not sure what the deal is. In the past, if this somehow happens, I reinstall the addon, so I don't know what to do when it is from scratch, heh.
Related suggestion: Any chance the options menu could include something like "reset group position" or "move to center of screen." Just thinking out loud.
Answers:
username_1: Could you tell me which specialization/class you get that problem with, or is it on all of your toons?
You can "reset" the position by setting the Horizontal/Vertical offset to 0 in the Icon group menu option.
username_0: I'm a monk, and I just confirmed that it's missing with all three specs. I also checked that the offset was 0, and even moved those sliders around on the off chance that I'd see the icon group slide by, ha.
No luck.
username_0: It's here for my Arcane Mage alt that I just logged into! So that's some info.
username_0: It's incredibly wonky, though. I don't know if Arcane Mage isn't ready yet, or it's the fact that she's level 38, but the group goes blank when I go into combat. I got ready to attack something with four charges good to go, and it didn't recommend Arcane Barrage, either.
Sorry about the multi-post. Let me know if this info isn't useful, just trying to do what little I can.
username_1: Unfortunately Mage scripts are not updated so you will have errors/missing icons etc.
But monk should work it was one of the first updated specs.
So from what I understand if you click on minimap icon and select script nothing happens ?
Can you make some simple custom script and see if icon shows up ?
Something like
```
Include(ovale_common)
Include(ovale_monk_spells)

AddIcon help=main specialization=windwalker
{
    Spell(tiger_palm)
}

AddIcon help=main specialization=brewmaster
{
    Spell(tiger_palm)
}

AddIcon help=main specialization=mistweaver
{
    Spell(tiger_palm)
}
```
Status: Issue closed
|
DestinyItemManager/DIM | 261107297 | Title: Add Light Level Calculator Preview
Question:
username_0: With the introduction of mods, it's hard to know if infusing will bring up your overall light level. I made a demo calculator for light level here: https://jsfiddle.net/m0Lcyj6b/show/
It was based on someone's previous HTML/JavaScript/CSS.
I did math calculations based on this post from D1 https://www.reddit.com/r/DestinyTheGame/comments/3kwmvh/how_overall_light_level_is_calculated/
It takes 7 light to increase your overall light level by 100% for the Kinetic, Energy, and Power slots.
It takes 8.4 light to increase your overall light level by 100% for the helmet, gauntlets, chest armor, leg armor slots
It takes 10.5 light to increase your overall light level by 100% for your class item.
By doing some math and averaging, as in my demo, DIM could show whether chasing a specific gear slot, or infusing a legendary/blue into a legendary with a legendary +5 mod, will help you achieve the next overall light level.
The decimal weightings are based on:
```
weapons = (pweapon + sweapon + hweapon) / 3 * 0.42858;
armor = ((helmet + gauntlets + chest + legs) / 4) * 0.4762;
class = (classitem) * 0.09524;
```
those were calculated by:
```
CLASS   1/10.5 = 0.09524 (rounded up)
ARMOR   1/8.4  = 0.11905; (0.11905 * 4) = 0.4762 (rounded up)
WEAPONS 1/7    = 0.1429;  (0.1429 * 3) = 0.42858 (rounded up)
```
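A small TypeScript sketch of that weighting; the slot names are mine for illustration, not DIM's internal ones:
```ts
interface Loadout {
  kinetic: number; energy: number; power: number;
  helmet: number; gauntlets: number; chest: number; legs: number;
  classItem: number;
}

// Weighted average of the slots using the per-slot weights derived above.
function overallLight(l: Loadout): number {
  const weapons = (l.kinetic + l.energy + l.power) / 3 * 0.42858;
  const armor = (l.helmet + l.gauntlets + l.chest + l.legs) / 4 * 0.4762;
  const classItem = l.classItem * 0.09524;
  return weapons + armor + classItem;
}
```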
In the game, you can calculate what level your blue engrams are going to drop at by looking at your base power level. The easiest way is to just look at a faction vendor and see what level the engrams they have will drop at; that will also be the level blues drop at. It's approx. 10 light under your current overall light level when you have legendary +5 mods in every slot. This scaling starts at around 265 base power level, at which point blues (rares) will start dropping at levels higher than 260. Once a guardian hits 300 base power level, blues/legendaries will drop at random power levels. Every time your base power level goes up by 1, the blues/legendaries will also go up by one. So it is possible to tell a user that they can gain 1 overall light level by getting a specific blue (usually from public events) and infusing it into their legendary with a legendary mod (super helpful).
One user claimed the planet vendors rotate the power level of their stock every 30 minutes for the Mida Multi Tool/Mida Mini Tool/Sturm/Drang.
Also, luminous engrams will not drop max level 300 gear until your base power level is 298 (299 at 297 bpl, 298 at 296 bpl, etc.). |
XX-net/XX-Net | 168874293 | Title: The global proxy can access Google and other sites, but the PAC smart proxy cannot
Question:
username_0: The global proxy can access Google and other sites, but the PAC smart proxy cannot.
Answers:
username_0: Aug 02 20:26:53.672 - [INFO] SSL use version:TLSv1_2
Aug 02 20:26:54.500 - [INFO] OpenSSL support alpn
Aug 02 20:26:54.563 - [INFO] load ip range file:D:\Program Files\XX-Net\code\default\gae_proxy\local\ip_range.txt
Aug 02 20:26:54.672 - [INFO] detected Win10, enable connect concurrency control, interval:40
Aug 02 20:26:54.719 - [DEBUG] network is ok, cost:2015 ms
Aug 02 20:26:54.734 - [INFO] load google ip_list num:116, gws num:116
Aug 02 20:26:54.766 - [INFO] OpenSSL support alpn
Aug 02 20:26:54.766 - [DEBUG] get ip:192.168.127.12 t:187
Aug 02 20:26:54.813 - [DEBUG] get ip:172.16.58.3 t:188
Aug 02 20:26:54.891 - [INFO] load manual.ini success
Aug 02 20:26:54.891 - [DEBUG] ## GAEProxy set keep_running: True
Aug 02 20:26:54.891 - [INFO] ------------------------------------------------------
Aug 02 20:26:54.891 - [INFO] Python Version : 2.7.11
Aug 02 20:26:54.891 - [INFO] OS : Version:10-0; Build:10586; Platform:2; CSD:; ServicePack:0-0; Suite:256; ProductType:0
Aug 02 20:26:54.891 - [INFO] Listen Address : 127.0.0.1:8087
Aug 02 20:26:54.891 - [INFO] GAE APPID : digital-elysium-137711|tidal-guild-137723|fleet-reserve-137711|glass-ranger-137723|main-depot-137723|midyear-pattern-137723|cobalt-chalice-137723|upheld-modem-137723|advance-medium-137723|nice-theater-137723|steadfast-sound-137723|true-shoreline-137723|linen-setting-137723|burnished-city-137711
Aug 02 20:26:54.891 - [INFO] Pac Server : http://127.0.0.1:8086/proxy.pac
Aug 02 20:26:54.891 - [INFO] ------------------------------------------------------
Aug 02 20:26:55.219 - [DEBUG] create_ssl update ip:172.16.58.3 time:203 h2:0
Aug 02 20:26:55.219 - [DEBUG] create_ssl update ip:192.168.127.12 time:235 h2:0
Aug 02 20:26:55.328 - [DEBUG] get ip:172.16.17.32 t:204
Aug 02 20:26:55.625 - [DEBUG] create_ssl update ip:172.16.17.32 time:203 h2:0
Aug 02 20:27:02.234 - [INFO] GAE CONNECT xxnet-update.appspot.com:443
Aug 02 20:27:02.280 - [DEBUG] GAE CONNECT Direct GET https://xxnet-update.appspot.com/update.json?uuid=a278f2b5-8614-4f52-8ef4-d2161e32e8f5&version=3.2.5&platform=Windows-10-10.0.10586
Aug 02 20:27:02.593 - [INFO] DIRECT t:312 s:1674 200 xxnet-update.appspot.com /update.json?uuid=a278f2b5-8614-4f52-8ef4-d2161e32e8f5&version=3.2.5&platform=Windows-10-10.0.10586
Aug 02 20:27:45.279 - [DEBUG] get ip:192.168.3.11 t:204
Aug 02 20:27:46.678 - [DEBUG] create_ssl update ip:192.168.3.11 time:205 h2:0
Aug 02 20:27:46.678 - [DEBUG] get ip:192.168.3.11 t:206
Aug 02 20:27:47.025 - [DEBUG] create_ssl update ip:192.168.3.11 time:205 h2:0
Aug 02 20:27:50.274 - [DEBUG] 172.16.58.3 close:host pool alive_timeout
[Truncated]
Aug 02 20:44:15.373 - [DEBUG] GAEProxy web_control 127.0.0.1:55212 POST /config?cmd=get_config
Aug 02 20:44:15.373 - [DEBUG] GAEProxy Web_control 127.0.0.1:55212 GET /scan_ip?cmd=get_range
Aug 02 20:44:15.373 - [INFO] load ip range file:D:\Program Files\XX-Net\code\default\gae_proxy\local\ip_range.txt
Aug 02 20:44:32.716 - [DEBUG] ssl_closed 172.16.58.3
Aug 02 20:44:32.717 - [DEBUG] 172.16.58.3 worker close:idle timeout
Aug 02 20:44:37.055 - [INFO] scan_ip add ip:192.168.127.12 time:1093
Aug 02 20:44:47.189 - [DEBUG] get ip:192.168.127.12 t:378
Aug 02 20:44:47.709 - [DEBUG] create_ssl update ip:192.168.127.12 time:394 h2:1
Aug 02 20:44:47.709 - [DEBUG] get ip:172.16.58.3 t:385
Aug 02 20:44:48.339 - [DEBUG] create_ssl update ip:172.16.58.3 time:392 h2:1
username_0: Aug 02 20:26:53.672 - [INFO] SSL use version:TLSv1_2
Aug 02 20:26:54.500 - [INFO] OpenSSL support alpn
Aug 02 20:26:54.563 - [INFO] load ip range file:D:\Program Files\XX-Net\code\default\gae_proxy\local\ip_range.txt
Aug 02 20:26:54.672 - [INFO] detected Win10, enable connect concurrency control, interval:40
Aug 02 20:26:54.719 - [DEBUG] network is ok, cost:2015 ms
Aug 02 20:26:54.734 - [INFO] load google ip_list num:116, gws num:116
Aug 02 20:26:54.766 - [INFO] OpenSSL support alpn
Aug 02 20:26:54.766 - [DEBUG] get ip:192.168.127.12 t:187
Aug 02 20:26:54.813 - [DEBUG] get ip:172.16.58.3 t:188
Aug 02 20:26:54.891 - [INFO] load manual.ini success
Aug 02 20:26:54.891 - [DEBUG] ## GAEProxy set keep_running: True
Aug 02 20:26:54.891 - [INFO] ------------------------------------------------------
Aug 02 20:26:54.891 - [INFO] Python Version : 2.7.11
Aug 02 20:26:54.891 - [INFO] OS : Version:10-0; Build:10586; Platform:2; CSD:; ServicePack:0-0; Suite:256; ProductType:0
Aug 02 20:26:54.891 - [INFO] Listen Address : 127.0.0.1:8087
Aug 02 20:26:54.891 - [INFO] GAE APPID :
Aug 02 20:26:54.891 - [INFO] Pac Server : http://127.0.0.1:8086/proxy.pac
Aug 02 20:26:54.891 - [INFO] ------------------------------------------------------
Aug 02 20:26:55.219 - [DEBUG] create_ssl update ip:172.16.58.3 time:203 h2:0
Aug 02 20:26:55.219 - [DEBUG] create_ssl update ip:192.168.127.12 time:235 h2:0
Aug 02 20:26:55.328 - [DEBUG] get ip:172.16.17.32 t:204
Aug 02 20:26:55.625 - [DEBUG] create_ssl update ip:172.16.17.32 time:203 h2:0
Aug 02 20:27:02.234 - [INFO] GAE CONNECT xxnet-update.appspot.com:443
Aug 02 20:27:02.280 - [DEBUG] GAE CONNECT Direct GET https://xxnet-update.appspot.com/update.json?uuid=a278f2b5-8614-4f52-8ef4-d2161e32e8f5&version=3.2.5&platform=Windows-10-10.0.10586
Aug 02 20:27:02.593 - [INFO] DIRECT t:312 s:1674 200 xxnet-update.appspot.com /update.json?uuid=a278f2b5-8614-4f52-8ef4-d2161e32e8f5&version=3.2.5&platform=Windows-10-10.0.10586
Aug 02 20:27:45.279 - [DEBUG] get ip:192.168.3.11 t:204
Aug 02 20:27:46.678 - [DEBUG] create_ssl update ip:192.168.3.11 time:205 h2:0
Aug 02 20:27:46.678 - [DEBUG] get ip:192.168.3.11 t:206
Aug 02 20:27:47.025 - [DEBUG] create_ssl update ip:192.168.3.11 time:205 h2:0
Aug 02 20:27:50.274 - [DEBUG] 172.16.58.3 close:host pool alive_timeout
Aug 02 20:27:50.275 - [DEBUG] ssl_closed 172.16.58.3
[Truncated]
Aug 02 20:44:15.373 - [DEBUG] GAEProxy web_control 127.0.0.1:55212 POST /config?cmd=get_config
Aug 02 20:44:15.373 - [DEBUG] GAEProxy Web_control 127.0.0.1:55212 GET /scan_ip?cmd=get_range
Aug 02 20:44:15.373 - [INFO] load ip range file:D:\Program Files\XX-Net\code\default\gae_proxy\local\ip_range.txt
Aug 02 20:44:32.716 - [DEBUG] ssl_closed 172.16.58.3
Aug 02 20:44:32.717 - [DEBUG] 172.16.58.3 worker close:idle timeout
Aug 02 20:44:37.055 - [INFO] scan_ip add ip:192.168.127.12 time:1093
Aug 02 20:44:47.189 - [DEBUG] get ip:192.168.127.12 t:378
Aug 02 20:44:47.709 - [DEBUG] create_ssl update ip:192.168.127.12 time:394 h2:1
Aug 02 20:44:47.709 - [DEBUG] get ip:172.16.58.3 t:385
Aug 02 20:44:48.339 - [DEBUG] create_ssl update ip:172.16.58.3 time:392 h2:1
username_1: My guess is that the google domain didn't make it into the PAC file; you can open the PAC file and check.
username_1: Reportedly Edge doesn't support a 127.0.0.1 proxy by default; you could try another browser, such as Chrome.
username_0: Chrome freezes, so I can only use Firefox.
username_1: Then just use FF; they're all about the same.
Status: Issue closed
|
greenplum-db/gpdb | 368515214 | Title: Improve the performance of cdbhash function
Question:
username_0: https://github.com/greenplum-db/gpdb/blob/996639e03598e479b6004155a4fec82f5aba2cc6/src/backend/cdb/cdbhash.c#L253-L260
When we calculate the hash value, we have to check whether the data type is an `enum` type or a `range` type. This leads to poor performance because the cdbhash function is called very frequently.
If we assume that `enum` and `range` types are not often used as hash keys, we can move the `typeIsEnumType` and `typeIsRangeType` checks to the end; the logic would look like this:
```
bool find = true;

switch (type)
{
    case INT4OID:
    case INT2OID:
    ……    /* all other directly hashable type OIDs */
        break;
    default:
        find = false;
}

if (!find)
{
    /* only uncommon types pay for the expensive enum/range checks */
    typeIsEnumType
    typeIsRangeType
    switch (type)
    {
        case INT4OID:
        case INT2OID:
        default:
            ereport()
    }
    ……
}
```
**I tested these cases with a special table that has 20 hash keys:**
```
shzhang=# create table test(a int, b int, c int, d int, e int, f int, g int, h int, i int, j int, k int, l int, m int, n int, o int, p int, q int, r int, s int, t int) distributed by (a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t);
CREATE TABLE
```
**It needs 30s+ to insert 10,000,000 tuples before the refactoring:**
```
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 35267.372 ms
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 39541.941 ms
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 38777.887 ms
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 91216.136 ms
[Truncated]
**After refactoring, it only needs about 20s.**
```
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 19845.996 ms
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 22774.828 ms
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 25103.703 ms
shzhang=# insert into test select i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i from generate_series(1,10000000) i;
INSERT 0 10000000
Time: 20768.090 ms
```
Answers:
username_1: We could do the same optimization in isGreenplumDbHashable() as well.
username_0: @username_1 @danielgustafsson
Good catch. We will file a PR soon to do this optimization.
Status: Issue closed
username_2: improved by the following PR and commit:
https://github.com/greenplum-db/gpdb/pull/6123
ce38fc230d4ef717d4bbbd4708ffe1674028ecda |
jlippold/tweakCompatible | 343018238 | Title: `WhoozItPro` working on iOS 11.1.2
Question:
username_0: ```
{
"packageId": "com.yourepo.kiiimo.whoozitrcrackedios9",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.yourepo.kiiimo.whoozitrcrackedios9",
"deviceId": "iPhone10,3",
"url": "http://cydia.saurik.com/package/com.yourepo.kiiimo.whoozitrcrackedios9/",
"iOSVersion": "11.1.2",
"packageVersionIndexed": false,
"packageName": "WhoozItPro",
"category": "Utilities",
"repository": "A Kiiimo Repositoty .ð",
"name": "WhoozItPro",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.yourepo.kiiimo.whoozitrcrackedios9",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.7",
"shortDescription": "Now with iOS 11 support. The all in one perfect caller ID solution for your iDevice. No clue who is calling you and don't know whom you missed? Don't worry, now this all in one perfect caller ID solution will make your life is more simple. Just install this and TrueCaller on your device and just start searchin the calls from you stack apps like Phone, Messages & Notification Center and even from the Recents (tweak).",
"latest": "2.0.6-beta-7k",
"author": "IArrays",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
siconos/siconos | 159746965 | Title: siconos/mechanics/swig/mechanics/collision is missing
Question:
username_0: It seems that since (1c4a7f343fa2643640174e6078fe90f52cc44d92), the directory
siconos/mechanics/swig/mechanics/collision
is missing for the compilation.
Answers:
username_1: Oh no, I tried to be so careful. Is it possible I did a "mv" instead of "git mv"? I'm afraid I left my computer at work, can't check it until monday! I have a mac here that I can maybe I install xcode, I'll see what I can do
username_1: Reverted mechanics/swig, can you test?
username_0: No stress. Let us fix that on monday after the coffee :-)
Status: Issue closed
username_1: Reverted in 3f0ab095ab |
hansemannn/titanium-review-dialog | 220129976 | Title: isSupported
Question:
username_0: Hey,
if I ask for _isSupported_ OR ask for 10.3 myself, the review dialog is shown. But when I select the stars, I can't send the review. Do you know why?
Does the version has to match the version number in the app store? Or may not be beta or something?
Answers:
username_1: That's all handled internally. You cannot use it in Simulator and probably also not on non-distribution-provisioned devices. Please try a Test-Flight build and ensure you didn't review before. No idea what Apple thought when doing that API. There is no error-handling, no events and no official "supported" method.
Status: Issue closed
|
alien4cloud/alien4cloud | 50837468 | Title: Compute name linux rule
Question:
username_0: The name field allows the underscore ('_') character, but Linux does not allow this character.
Meanwhile, Linux and most systems allow the '-' character, yet Alien does not.

Answers:
username_1: Fixed in SM25.
Status: Issue closed
|
open-policy-agent/opa | 608709574 | Title: Chained rule head locations could be improved
Question:
username_0: Follow on from the thread in https://github.com/open-policy-agent/opa/pull/2337#issuecomment-620774527
As-is the parsed AST node locations for a chained rule's head are set (recursively) to be the same as the rule body. This leads to some inconsistency when compared to a normal rule.
For reference this happens in https://github.com/open-policy-agent/opa/blob/f7747e7826e5be13c16336c1d09ab9f9402747b7/ast/parser.go#L341-L354 on line 351.
As mentioned in the discussion on #2337 it isn't as simple as setting the location to the original spot. There are implications for UX with how errors are generated so some additional design/thinking will need to go into deciding how we can have both accurate location info (deciding what that means exactly too) _and_ having useful information for OPA users. |
yasunaga3/hello-github | 352758573 | Title: Create the music playback screen
Question:
username_0: # To do
- [ ] Implement the player feature
- [ ] Add the UI assets to the project
- [ ] Place the UI components
- [ ] Verify that it works
<issue_closed>
Status: Issue closed |
wradlib/wradvis | 174728144 | Title: HowTo add new features
Question:
username_0: For adding new features I suggest the following:
- put new features in separate branches
- intensive testing
- merging (probably after squashing devel commits) when feature is completed
Answers:
username_0: Started the wiki with this. Closing.
Status: Issue closed
|
dotnet/SqlClient | 1147818325 | Title: Unhandled exception. System.PlatformNotSupportedException: Strings.PlatformNotSupported_DataSqlClient
Question:
username_0: We are deploying azure webjob and when the webjob is deployed we are getting the following error:
```
[02/23/2022 08:33:01 > 57fccd: ERR ] Unhandled exception. System.PlatformNotSupportedException: Strings.PlatformNotSupported_DataSqlClient
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.Data.SqlClient.SqlConnection..ctor(String connectionString)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerConnection.CreateDbConnection()
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.get_DbConnection()
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.Open(Boolean errorsExpected)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerDatabaseCreator.<>c__DisplayClass18_0.<Exists>b__0(DateTime giveUp)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.<>c__DisplayClass12_0`2.<Execute>b__0(DbContext c, TState s)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.Execute[TState,TResult](IExecutionStrategy strategy, TState state, Func`2 operation, Func`2 verifySucceeded)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.Execute[TState,TResult](IExecutionStrategy strategy, TState state, Func`2 operation)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerDatabaseCreator.Exists(Boolean retryOnNotExists)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerDatabaseCreator.Exists()
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.Migrations.HistoryRepository.Exists()
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.Migrate(String targetMigration)
[02/23/2022 08:33:01 > 57fccd: ERR ] at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.Migrate(DatabaseFacade databaseFacade)
```
```
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net5.0</TargetFramework>
<ErrorOnDuplicatePublishOutputFiles>false</ErrorOnDuplicatePublishOutputFiles>
<SkipFunctionsDepsCopy>true</SkipFunctionsDepsCopy>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Configuration.Abstractions" Version="5.0.0" />
<PackageReference Include="Nethereum.Web3" Version="4.2.0" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
<PackageReference Include="System.Data.SqlClient" Version="4.6.0" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.17.0" />
<PackageReference Include="Microsoft.Extensions.Configuration.AzureKeyVault" Version="3.1.16" />
<PackageReference Include="Microsoft.Extensions.Hosting" Version="5.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="5.0.0" />
<PackageReference Include="FlexLabs.EntityFrameworkCore.Upsert" Version="4.0.0" />
<PackageReference Include="RestSharp" Version="106.11.7" />
<PackageReference Include="RestSharp.Serializers.NewtonsoftJson" Version="106.11.7" />
<PackageReference Include="Serilog.Enrichers.Environment" Version="2.1.3" />
<PackageReference Include="Serilog.Extensions.Hosting" Version="4.1.2" />
<PackageReference Include="Serilog.Sinks.ApplicationInsights" Version="3.1.0" />
<PackageReference Include="Serilog.Sinks.Console" Version="3.1.1" />
<PackageReference Include="Serilog.Filters.Expressions" Version="2.1.0" />
<PackageReference Include="Serilog.Sinks.ApplicationInsights" Version="3.1.0" />
<PackageReference Include="Serilog.AspNetCore" Version="4.1.0" />
<PackageReference Include="Serilog.Sinks.Seq" Version="5.0.1" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="CVV.RateGate" Version="1.0.1" />
<PackageReference Include="SocketIOClient" Version="1.0.3.13" />
</ItemGroup>
<ItemGroup>
[Truncated]
<PackageReference Include="NServiceBus.Serilog" Version="7.15.0" />
<PackageReference Include="NServiceBus.Transport.AzureServiceBus" Version="1.9.0" />
<PackageReference Include="Serilog.Enrichers.Environment" Version="2.1.3" />
<PackageReference Include="Serilog.Extensions.Hosting" Version="4.1.2" />
<PackageReference Include="Serilog.Settings.Configuration" Version="3.1.0" />
<PackageReference Include="NServiceBus.Callbacks" Version="3.0.0" />
<PackageReference Include="NServiceBus.Extensions.Hosting" Version="1.1.0" />
<PackageReference Include="NServiceBus.Persistence.Sql" Version="6.2.0" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\Shared.Domain\Shared.Domain.csproj" />
</ItemGroup>
</Project>
```
Firstly, we had Microsoft.Data.SqlClient 3.0.0 as a dependency. Then I moved to System.Data.SqlClient, as you can see from the *.csproj file. I always get the same error.
I am using an Azure DevOps pipeline to make the deployment.
Answers:
username_1: @username_0 can you post a sample repro please? something minimal that reproduces the issue.
Thank you.
username_0: ```c#
using System;
using System.Data;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using NServiceBus;
using NServiceBus.Logging;
using NServiceBus.Serilog;
using Serilog;
namespace Shared.NServiceBus
{
public static class HostBuilderExtensions
{
/// <summary>
/// Configures the host to start an NServiceBus endpoint.
/// </summary>
public static void ConfigureNServiceBus(EndpointConfiguration endpointConfiguration, HostBuilderContext context)
{
endpointConfiguration.License(LicenseHelper.GetLicense);
// Serialization
var settings = new JsonSerializerSettings
{
ContractResolver = new NonPublicPropertiesResolver(),
NullValueHandling = NullValueHandling.Ignore,
ConstructorHandling = ConstructorHandling.AllowNonPublicDefaultConstructor
};
settings.Converters.Add(new StringEnumConverter());
var serialization = endpointConfiguration.UseSerialization<NewtonsoftSerializer>();
serialization.Settings(settings);
LogManager.Use<SerilogFactory>();
var serilogTracing = endpointConfiguration.EnableSerilogTracing(Log.Logger);
serilogTracing.EnableMessageTracing();
serilogTracing.EnableSagaTracing();
// Audit
endpointConfiguration.SendFailedMessagesTo("Error");
endpointConfiguration.AuditProcessedMessagesTo("Audit");
if (bool.Parse(context.Configuration["Queues:SagaAuditEnabled"]))
endpointConfiguration.AuditSagaStateChanges("Audit");
// Installers
endpointConfiguration.EnableInstallers();
// Transport
var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();
transport.ConnectionString(context.Configuration.GetConnectionString("ServiceBus"));
transport.Transactions(TransportTransactionMode.ReceiveOnly);
transport.EntityMaximumSize(1);
var outboxSettings = endpointConfiguration.EnableOutbox();
outboxSettings.UsePessimisticConcurrencyControl();
outboxSettings.TransactionIsolationLevel(IsolationLevel.ReadCommitted);
outboxSettings.KeepDeduplicationDataFor(TimeSpan.FromDays(7));
outboxSettings.RunDeduplicationDataCleanupEvery(TimeSpan.FromDays(1));
var persistence = endpointConfiguration.UsePersistence<SqlPersistence>();
var sagaSettings = persistence.SagaSettings();
sagaSettings.JsonSettings(new JsonSerializerSettings
{
TypeNameHandling = TypeNameHandling.Auto,
MetadataPropertyHandling = MetadataPropertyHandling.ReadAhead
});
persistence.SqlDialect<SqlDialect.MsSqlServer>();
persistence.ConnectionBuilder(() => new SqlConnection(context.Configuration.GetConnectionString("OperationalDatabase")));
}
}
}
```
I can send only this part of the code. This is the only place where Microsoft.Data.SqlClient is used. |
micrometer-metrics/micrometer | 483908788 | Title: Synchronization during meter registration may result in deadlock
Question:
username_0: A [Spring Boot issue](https://github.com/spring-projects/spring-boot/issues/17765) has been opened reporting a deadlock. There appears to be two parts to the problem, both caused by calls to foreign code while holding a lock. One part is in Spring Framework and the other is in Micrometer.
Here's a minimal example that reproduces a similar problem without Spring Framework's involvement:
```
package example;
import org.junit.jupiter.api.Test;
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleConfig;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
public class MeterRegistrationDeadlockTests {
@Test
public void meterRegistration() {
ExampleSimpleConfig config = new ExampleSimpleConfig();
MeterRegistry meterRegistry = new SimpleMeterRegistry(config, Clock.SYSTEM);
config.setMeterRegistry(meterRegistry);
Timer.builder("example").register(meterRegistry);
}
static class ExampleSimpleConfig implements SimpleConfig {
private volatile MeterRegistry meterRegistry;
@Override
public String get(String key) {
Thread thread = new Thread(() -> {
Timer.builder("example-1").register(meterRegistry);
});
thread.start();
try {
thread.join();
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
return null;
}
void setMeterRegistry(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
}
}
}
```
The `meterRegistration` test will hang because Micrometer takes a lock during meter creation before calling `ExampleSimpleConfig.get()`. The thread that it starts also attempts to register a meter, which results in it attempting to take the same lock. It is unable to do so as the lock is already held by the main thread.
While this arrangement is rather contrived, it shows that the problem can occur without any locking beyond that performed by Micrometer. In the example reported in the Spring Boot issue tracker, the `SimpleConfig` implementation is `@Validated` which results in calls being made into the application context when its methods are called and Framework's [problematic locking behaviour](https://github.com/spring-projects/spring-framework/issues/23501) comes into the picture as well.
There is a risk of deadlock whenever foreign code is called while holding lock and, as such, it is best avoided. I wonder if Micrometer could be modified such that it sets a marker to indicate that a meter is being created and then performs the creation without holding any lock. Once created, the lock can be taken again, the meter stored, and the marker removed.
Answers:
username_0: Here's a zip of a complete project containing the test in the comment above: [meter-registration-deadlock.zip](https://github.com/micrometer-metrics/micrometer/files/3529652/meter-registration-deadlock.zip) |
fabric8io/docker-maven-plugin | 445711597 | Title: plugin execution out-of-order with packaging:docker
Question:
username_0: ### Description
Building a spring-boot application with packaging type 'docker' runs the spring-boot-maven-plugin:repackage either out of order (mvn verify) or not at all (mvn package).
```
mvn verify
...
[INFO] --- docker-maven-plugin:0.27.2:build (default-build) @ docker-it ---
... --> image doesn't work as spring-boot deps and starter are missing in artifact
[INFO] --- spring-boot-maven-plugin:2.1.4.RELEASE:repackage (repackage) @ docker-it ---
[INFO] Replacing main artifact with repackaged archive
[INFO]
[INFO] --- docker-maven-plugin:0.27.2:start (default-start) @ docker-it ---
```
### Info
Following the documentation and building the image in phase *pre-integration-test* builds the image twice, the second image actually works:
```
<execution>
<id>default-start</id>
<phase>pre-integration-test</phase>
<goals>
<goal>build</goal>
<goal>start</goal>
</goals>
</execution>
INFO] --- docker-maven-plugin:0.27.2:build (default-build) @ docker-it ---
...
[INFO] --- spring-boot-maven-plugin:2.1.4.RELEASE:repackage (repackage) @ docker-it ---
[INFO] Replacing main artifact with repackaged archive
[INFO] --- docker-maven-plugin:0.27.2:build (default-start) @ docker-it ---
... --> working image
```
Building the first (non-working) image can be avoided by disabling the *default-build* execution like below:
```
<execution>
<id>default-build</id>
<phase>none</phase>
</execution>
[INFO] --- spring-boot-maven-plugin:2.1.4.RELEASE:repackage (repackage) @ docker-it ---
...
[INFO] --- docker-maven-plugin:0.27.2:build (default-start) @ docker-it ---
[INFO] Copying files to D:\src\git\maven-debug\docker-IT\target\docker\docker-it\1.0-SNAPSHOT\build\maven
...
[INFO] --- docker-maven-plugin:0.27.2:start (default-start) @ docker-it ---
[INFO] DOCKER> [docker-it:1.0-SNAPSHOT]: Start container f780513ce2b7
```
* d-m-p version : 0.30.0 (same behaviour with 0.27.2 and 0.29.0)
* Maven version (`mvn -v`) : 3.3.9 (IntelliJ bundled version)
* Docker version : 18.03.0-ce, build 0520e24302
* If it's a bug, how to reproduce :
Actually, I'm not sure if this qualifies as bug. It might be as well lack of understanding for the process on my side. *mvn package* not building a working image is rather unexpected, especially as the out-of-the-box configuration for packaging *docker* adds the build step at phase *package*. |
SySeVR/SySeVR | 598208969 | Title: AttributeError: 'ProgbarLogger' object has no attribute 'log_values'
Question:
username_0: Does anyone know how to fix this error? I am getting it while running bgru.py. The following is the error I am getting:
Train...
(0, 0)
start
Epoch 1/10
Traceback (most recent call last):
File "bgru.py", line 220, in <module>
main(traindataSetPath, testdataSetPath, realtestdataSetPath, weightPath, resultPath, batchSize, maxLen, vectorDim, layers, dropout)
File "bgru.py", line 85, in main
model.fit_generator(train_generator, steps_per_epoch=steps_epoch, epochs=10)
File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1426, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_generator.py", line 229, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/usr/local/lib/python2.7/dist-packages/keras/callbacks.py", line 77, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/usr/local/lib/python2.7/dist-packages/keras/callbacks.py", line 336, in on_epoch_end
self.progbar.update(self.seen, self.log_values)
AttributeError: 'ProgbarLogger' object has no attribute 'log_values' |
fjcaetano/ios-simulator-gif | 675318771 | Title: Bug: Video is always recorded on Portrait orientation
Question:
username_0: Even if the device is in Landscape orientation, the video is always recorded in Portrait.
<issue_closed>
Status: Issue closed |
eschnett/SIMD.jl | 165588221 | Title: Code generation problems
Question:
username_0: This code
```julia
using SIMD
function foo(x::Vector{Float32})
N = length(x)
y = Array(Vec{4, Float32}, N)
for k = 1:N
@inbounds y[k] = Vec{4, Float32}(x[k])
end
return y
end
x = rand(Float32, 100000);
foo(x);
@time foo(x);
```
run with a Julia 0.5 of today allocates way too much memory:
```julia
julia> @time foo(x);
0.614021 seconds (200.13 k allocations: 7.637 MB, 12.62% gc time)
```
code_llvm produces a mess including a scary call to jl_apply_generic.
In a 48 days old Julia I had available this is much better, `6 allocations: 1.526 MB`, unless `@inbounds` is removed, in which case it's back to 200k allocations. There code_llvm also produces a mess but at least without any jl_apply_generic.
I'm not sure if I'm doing something I shouldn't but at least it looks like something has regressed with recent Julia.
```
julia> versioninfo()
Julia Version 0.5.0-dev+5429
Commit 828f7ae* (2016-07-14 09:21 UTC)
Platform Info:
System: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.7.1 (ORCJIT, haswell)
```
Answers:
username_1: Methinks this might be a type instability. The array and tuple handling code in Julia is currently changing. I will have a look.
username_1: Here is a shorter code to reproduce the problem:
```Julia
using SIMD
f(a) = @inbounds a[1] = Vec{4,Float32}(1)
@code_llvm f(Array{Vec{4,Float32}}(1))
```
I see that the code that creates the SIMD vector is translated fine; it is inline, and as simple as it should be. I thus think that the array indexing is causing problems.
The problem disappears when I use `NTuple` instead of `Vec`.
@username_2 Any ideas?
Here is the generated LLVM code:
```
define void @julia_f_67463(%jl_value_t*) #0 !dbg !6 {
top:
%1 = call %jl_value_t*** @jl_get_ptls_states()
%2 = alloca [6 x %jl_value_t*], align 8
%.sub = getelementptr inbounds [6 x %jl_value_t*], [6 x %jl_value_t*]* %2, i64 0, i64 0
%3 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %2, i64 0, i64 3
%4 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %2, i64 0, i64 2
%5 = bitcast %jl_value_t** %3 to i8*
call void @llvm.memset.p0i8.i32(i8* %5, i8 0, i32 24, i32 8, i1 false)
%6 = bitcast [6 x %jl_value_t*]* %2 to i64*
store i64 8, i64* %6, align 8
%7 = bitcast %jl_value_t*** %1 to i64*
%8 = load i64, i64* %7, align 8
%9 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %2, i64 0, i64 1
%10 = bitcast %jl_value_t** %9 to i64*
store i64 %8, i64* %10, align 8
store %jl_value_t** %.sub, %jl_value_t*** %1, align 8
store %jl_value_t* null, %jl_value_t** %4, align 8
%11 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %2, i64 0, i64 5
%12 = getelementptr [6 x %jl_value_t*], [6 x %jl_value_t*]* %2, i64 0, i64 4
store %jl_value_t* inttoptr (i64 4571598584 to %jl_value_t*), %jl_value_t** %3, align 8
store %jl_value_t* inttoptr (i64 4647558576 to %jl_value_t*), %jl_value_t** %12, align 8
%13 = bitcast %jl_value_t*** %1 to i8*
%14 = getelementptr %jl_value_t**, %jl_value_t*** %1, i64 176
%15 = bitcast %jl_value_t*** %14 to i8*
%16 = call %jl_value_t* @jl_gc_pool_alloc(i8* %13, i8* %15, i32 32, i32 16328)
%17 = getelementptr inbounds %jl_value_t, %jl_value_t* %16, i64 -1, i32 0
store %jl_value_t* inttoptr (i64 4647152816 to %jl_value_t*), %jl_value_t** %17, align 8
%18 = bitcast %jl_value_t* %16 to <4 x float>*
store <4 x float> <float 1.000000e+00, float 1.000000e+00, float 1.000000e+00, float 1.000000e+00>, <4 x float>* %18, align 8
store %jl_value_t* %16, %jl_value_t** %11, align 8
%19 = call %jl_value_t* @jl_apply_generic(%jl_value_t** %3, i32 3)
store %jl_value_t* %19, %jl_value_t** %4, align 8
%20 = bitcast %jl_value_t* %19 to float*
%21 = load float, float* %20, align 16
%22 = insertelement <4 x float> undef, float %21, i32 0
%23 = bitcast %jl_value_t* %19 to i8*
%24 = getelementptr i8, i8* %23, i64 4
%25 = bitcast i8* %24 to float*
%26 = load float, float* %25, align 4
%27 = insertelement <4 x float> %22, float %26, i32 1
%28 = getelementptr %jl_value_t, %jl_value_t* %19, i64 1
%29 = bitcast %jl_value_t* %28 to float*
%30 = load float, float* %29, align 8
%31 = insertelement <4 x float> %27, float %30, i32 2
%32 = getelementptr i8, i8* %23, i64 12
%33 = bitcast i8* %32 to float*
%34 = load float, float* %33, align 4
%35 = insertelement <4 x float> %31, float %34, i32 3
%36 = bitcast %jl_value_t* %0 to <4 x float>**
%37 = load <4 x float>*, <4 x float>** %36, align 8
store <4 x float> %35, <4 x float>* %37, align 8
%38 = load i64, i64* %10, align 8
store i64 %38, i64* %7, align 8
ret void
}
```
Given this, I assume that the call to `jl_apply_generic` returns the address of the array element, and the following four `insertelement` instructions are the assignment of the `Vec` tuple to the array element.
username_2: This doesn't seem like it could possibly be affected by any of my recent array changes:
```jl
julia> using SIMD
julia> v = Vec{4,Float32}(1)
4-element SIMD.Vec{4,Float32}:
Float32⟨1.0,1.0,1.0,1.0⟩
julia> a = Array{Vec{4,Float32}}(1)
1-element Array{SIMD.Vec{4,Float32},1}:
Float32⟨-0.00026230933,4.5644e-41,-0.02648136,4.5644e-41⟩
julia> @which a[1] = v
setindex!{T}(A::Array{T,N<:Any}, x, i1::Real) at array.jl:372
```
That line is [here](https://github.com/JuliaLang/julia/blob/edb112af0b058d1f5a815fc6cb524f586cd18bdd/base/array.jl#L372). This problem seems to be fixed (or at least, better) if you [comment out this line](https://github.com/username_1/SIMD.jl/blob/b907af52666a261af1bac914be63f00b560ac9ca/src/SIMD.jl#L127). So I'd look into your `convert` method.
username_3: Ref #6
username_0: This looks relevant:
```julia
julia> using SIMD
julia> code_llvm(Tuple, (Vec{4, Float32},))
define void @julia_Type_67726([4 x float]* noalias sret, %jl_value_t*, %Vec*) #0 {
[...]
%19 = call %jl_value_t* @jl_apply_generic(%jl_value_t** %4, i32 3)
[...]
}
```
In my older Julia (`Commit bc56e32* (49 days old master)`) this doesn't seem completely sane but might explain the observed regression:
```julia
julia> using SIMD
julia> code_llvm(Tuple, (Vec{4, Float32},))
define void @julia_Type_50122([4 x float]* sret, %jl_value_t*, %Vec*) #0 {
[...]
%18 = call %jl_value_t* @jl_apply_generic(%jl_value_t** %5, i32 3)
[...]
}
julia> Tuple(Vec{4, Float32}(0))
(0.0f0,0.0f0,0.0f0,0.0f0)
julia> code_llvm(Tuple, (Vec{4, Float32},))
define void @julia_Type_50122([4 x float]* sret, %jl_value_t*, %Vec*) #0 {
top:
%3 = alloca [4 x float], align 4
call void @julia_convert_50126([4 x float]* nonnull sret %3, %jl_value_t* inttoptr (i64 140529556360832 to %jl_value_t*), %Vec* %2) #0
%4 = bitcast [4 x float]* %0 to i8*
%5 = bitcast [4 x float]* %3 to i8*
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %4, i8* %5, i64 16, i32 4, i1 false)
ret void
}
```
In current Julia the simpler code can't be provoked by running the constructor once.
username_1: @username_0 Thanks for pointing to #6; yes, this was the problem. Apologies for not understanding the main point of your pull request when you requested it three months ago.
Status: Issue closed
username_2: (I think you meant @username_3.)
username_1: @username_2 @username_3 Yes, sorry again. Not my day today.
username_3: :) |
laravel/horizon | 607725221 | Title: Behavior on multiple servers (horizontally) in a deploy
Question:
username_0: I'm deploying multiple pods with Laravel Horizon to Kubernetes. If a pod is processing a long job and the pod is removed by Kubernetes, what happens? Do I lose that job? How can I reprocess it? Thanks!
Answers:
username_1: Hi there,
Thanks for reporting but it looks like this is a question which can be asked on a support channel. Please only use this issue tracker for reporting bugs with the library itself. If you have a question on how to use functionality provided by this repo you can try one of the following channels:
- [Laracasts Forums](https://laracasts.com/discuss)
- [Laravel.io Forums](https://laravel.io/forum)
- [StackOverflow](https://stackoverflow.com/questions/tagged/laravel)
- [Discord](https://discordapp.com/invite/KxwQuKb)
- [Larachat](https://larachat.co)
- [IRC](https://webchat.freenode.net/?nick=laravelnewbie&channels=%23laravel&prompt=1)
However, this issue will not be locked and everyone is still free to discuss solutions to your problem!
Thanks.
Status: Issue closed
|
MicrosoftDocs/azure-docs | 546141969 | Title: Dead link
Question:
username_0: The Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity link is dead
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a05ea89e-d4d4-fc01-0519-93a337dd2182
* Version Independent ID: c1e1d3da-3e85-49f1-6570-a8f2bd1dfea4
* Content: [Use Azure Storage analytics to collect logs and metrics data](https://docs.microsoft.com/en-us/azure/storage/common/storage-analytics?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#feedback)
* Content Source: [articles/storage/common/storage-analytics.md](https://github.com/Microsoft/azure-docs/blob/master/articles/storage/common/storage-analytics.md)
* Service: **storage**
* Sub-service: **common**
* GitHub Login: @normesta
* Microsoft Alias: **normesta**
Answers:
username_1: Hi @username_0 Thanks for your feedback. We will review and update as appropriate.
Status: Issue closed
username_2: @username_0 PR request has been raised.
We will now proceed to close this thread. If there are further questions regarding this matter, please tag me in your reply. We will gladly continue the discussion and we will reopen the issue. |
openfl/openfl | 186687487 | Title: Stage3D texture issues
Question:
username_0: 1. uploadFromTypedArray in Texture constructor is not a meaningless
call, please don't remove it. It's needed for render to texture.
2. What's low memory mode? Downscaling? That may break rendering if a texture atlas is used. I guess many real-world applications use a texture atlas in some way, so I don't think this is useful for users. I'd rather implement this at a higher level, since Stage3D is meant as a lower-level abstraction. For example, you could make some space by removing old textures from the cache if you run out of memory.
3. Does memory usage for Texture include all mip levels? It looks like only the size for level 0 is used.
I've heard that Flash Stage3D textures allocate all mip levels even if you don't need them, but should I do that in OpenFL? That's bad for memory usage.
RectangleTexture doesn't have such problems since it doesn't support mipmaps.
Answers:
username_1: 1. Could you help share an example of where you are using `uploadFromTypedArray`? I think you can do something like `uploadFromByteArray (typeArray.buffer)` instead, though perhaps we could allow direct conversion (without referencing `.buffer`) in the future
2. As you (may) know, the Stage3D layer is a port from another platform, it appears to have code for a low-memory mode where it uses smaller textures, but this is disabled right now
3. Similarly, the code had ways to track memory usage, but it is not fully correct (as you have seen) with tracking multiple miplevels
I think the question of miplevels is whether you enable mipmapping for the texture or not
username_0: I meant you shouldn't delete this part. https://github.com/openfl/openfl/blob/aca5608a67d5294ab1db947af09f96c677b57722/openfl/display3D/textures/Texture.hx#L39
To be able to use Texture you should allocate at least first level using glTexImage2D. Since you don't use Texture.uploadFrom* calls in the case of render to texture, removing this part could cause OpenGL errors.
username_0: Where can I read more about low-memory mode? It's technically impossible to implement that by downscaling textures, as texture atlases/filters rely on texture resolution.
It might be possible to implement that with 16bit color texture though.
Status: Issue closed
username_1: We can discuss more off of Github issues, the `uploadFromTypedArray` issue should be resolved :smile: |
newrelic/newrelic-java-agent | 1005978428 | Title: Support dev.zio:zio_2.13:2.0.0-M3 and above
Question:
username_0: Verifier output:
```
Creating user classloader with custom classpath:
/home/jenkins/.gradle/caches/modules-2/files-2.1/dev.zio/zio_2.13/2.0.0-M3/e945850d55f40c6803b67746b0e8117b1d65eb58/zio_2.13-2.0.0-M3.jar
/home/jenkins/.gradle/caches/modules-2/files-2.1/dev.zio/zio-stacktracer_2.13/2.0.0-M3/786f50c686729f0d8e1f890eaa6f6cae74888929/zio-stacktracer_2.13-2.0.0-M3.jar
/home/jenkins/.gradle/caches/modules-2/files-2.1/dev.zio/izumi-reflect_2.13/2.0.0/2a5dfbbc1d414999c82a769a1e3262058c99fb91/izumi-reflect_2.13-2.0.0.jar
/home/jenkins/.gradle/caches/modules-2/files-2.1/dev.zio/izumi-reflect-thirdparty-boopickle-shaded_2.13/2.0.0/d5ded42c39067d7cd08c7ad4eea44cf40e1ff5f8/izumi-reflect-thirdparty-boopickle-shaded_2.13-2.0.0.jar
/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-library/2.13.6/ed7a2f528c7389ea65746c22a01031613d98ab3d/scala-library-2.13.6.jar
WeaveViolation{type=MISSING_ORIGINAL_BYTECODE, clazz=zio/internal/Executor}
```
Add instrumentation that supports newer versions. |
cyber-dojo-languages/kotlin-test | 544703640 | Title: New image wasn't generated
Question:
username_0: Hey guys!
At #1 we updated the jars to a newer version. However, I believe a new image wasn't generated, and the old jars are still being used.
How can I trigger the generation of a new image?
Answers:
username_1: I will look into it today.
username_1: Hi Leonardo.
Fixed now. Thank you for highlighting this was not working.
Details... There is a service called puller which pulls the latest versions of all the language-test-framework images from dockerhub every 24 hours as a cron job. I broke it because of a refactoring which had changed the name of some dependent services. Sorry.
Status: Issue closed
username_0: Nice catch!
Thanks for the support :) |
docker/compose | 90773992 | Title: affinity:container messes up environment variables
Question:
username_0: Compose v1.3.0+ adds a set of `affinity:container` environment variables inside containers when those are **recreated** (second time a `docker-compose up -d` command is issued).
```
root@web:/var/www# env|grep affinity
VOYA_DB_1_ENV_affinity:container==f75d8c6ced29ed8e7f12cd60e3716e62844031caabb003d1feb74168bde942bc
DB_ENV_affinity:container==f75d8c6ced29ed8e7f12cd60e3716e62844031caabb003d1feb74168bde942bc
affinity:container==76df92d213821482e706251a81d9c9fde5416515f43c6813dcaf3816777a9849
DB_1_ENV_affinity:container==f75d8c6ced29ed8e7f12cd60e3716e62844031caabb003d1feb74168bde942bc
```
Such variables have invalid identifiers and cannot be exported/read by shell.
```
root@web:/var/www# export {affinity:container}==f75d8c6ced29ed8e7f12cd60e3716e62844031caabb003d1feb74168bde942bc
bash: export: `{affinity:container}==f75d8c6ced29ed8e7f12cd60e3716e62844031caabb003d1feb74168bde942bc': not a valid identifier
```
```
printf %q ${affinity:container}
''
```
Unfortunately there are fatal side effects of the empty ENV variables.
In my case all ENV variables are written into `/etc/php5/fpm/pool.d/env.conf` by a startup.sh script. The `affinity:container` variables' values cannot be read properly, thus an empty value is written.
php-fpm does not tolerate empty variables in env.conf and simply crashes.
I added checks in the script; however, it would be nice if docker-compose did not inject bad environment variables.
Answers:
username_1: :+1:
username_2: :+1:
Status: Issue closed
username_3: Thanks for the bug report.
Unfortunately this is not something we can change on the compose side. This is the key that swarm expects. I believe https://github.com/docker/swarm/issues/288 is the issue to change this to use labels instead.
Since this isn't something we can change from the compose side, I'm going to close this issue. As a workaround you should be able to strip out the environment variables you don't want with a wrapper script that unsets things.
username_4: How do I unset such variables?
I have:
MYSQL_1_ENV_affinity:container==7c9320f488cb47242157e41a10476e0a59d4f14d849ce91914de09ca47379172
And if I do:
```
root@a5a210f22223:/var/www# unset MYSQL_1_ENV_affinity:container
bash: unset: `MYSQL_1_ENV_affinity:container': not a valid identifier
root@a5a210f22223:/var/www# unset MYSQL_1_ENV_affinity:container=
bash: unset: `MYSQL_1_ENV_affinity:container=': not a valid identifier
root@a5a210f22223:/var/www#
```
username_5: @username_4 I have not yet figured out how to do that in bash, but in zsh or ash you are able to unset those.
username_5: @username_4 the only way I have found how to do that in bash is that if you have `mycommand` that breaks because of affinity:container you can do `env --unset='affinity:container' mycommand`
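If a wrapper script is preferred (as suggested above), the same kind of filtering can also be done before launching the real process. Here is a small sketch in Python; `mycommand` is just the placeholder name from this thread:

```python
# Drop environment variables whose names the shell cannot handle (e.g. "affinity:container"),
# then run the real command with the cleaned environment.
import os
import re
import subprocess

clean_env = {k: v for k, v in os.environ.items()
             if re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", k)}
subprocess.run(["mycommand"], env=clean_env, check=True)
```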
username_0: @username_4 If you are looking for a fix for PHP, take a look at this commit - https://github.com/blinkreaction/docker-drupal-cli/commit/217c24b9839e324e7b29e78e9865411f8928f0bd
It simplifies things considerably in my case.
username_4: Thanks. @username_0 the problem is that `clear_env` doesn't work with older versions of PHP 5 and Ubuntu 14.04 doesn't provide newer versions of PHP either so I had to compile PHP to make it work. I think clear_env was introduced in PHP ~5.5.10 |
KC3Kai/KC3Kai | 275959276 | Title: No LD indicator or prediction on Boss gauge
Question:
username_0: Example: [#1](https://cdn.discordapp.com/attachments/205766705463427072/382764137274540032/unknown.png) ([api data](https://cdn.discordapp.com/attachments/205766705463427072/382779389416308737/recorded_kcsapi_7.json))
Answers:
username_0: Should be fixed by #2355
Status: Issue closed
|
Sklore/HL_DD_5e_Colab | 208681031 | Title: COM_5ePack_PHB - Feats.user
Question:
username_0: The following is the corrected code for the eval script:
~ If we're disabled, do nothing &
doneif (tagis[Helper.Disable] = 1)
~ If not active get out now!
doneif (field[abilActive].value = 0)
~ Give an untyped bonus to melee attack damage
hero.childfound[Damage].field[dmmBonus].value += field[abValue].value
Status: Issue closed
Answers:
username_1: This will be fixed in the next update. |
QuEST-Kit/QuEST | 560366982 | Title: Support for multi-node computer clusters with CUDA
Question:
username_0: Will you support QuEST on multi-node computer clusters with CUDA in the future, or is it already in the development plan?
Answers:
username_1: Hi Jiachen,
There are no plans currently to combine distribution with GPU-acceleration. Note there are a few ways this can be done, and I suspect none really align with QuEST's design philosophy, nor are practical due to memory overheads. I've wanted to pen these thoughts for a while, so read on below if interested! :)
Firstly, QuEST uses its hardware to accelerate the simulation of a single quantum register at a time. While I think there are good uses of multi-GPU to speedup simultaneous simulation of _multiple_ registers, this would be a totally new pattern to QuEST's simulation style. So let's consider using multi-GPU to accelerate a _single_ register.
There are a few ways you can have "multiple GPUs":
- **multiple NVlinked GPUs**
This is when you have multiple GPUs tightly connected with a high-bandwidth fabric (e.g. [this](https://www.nvidia.com/en-gb/data-center/nvlink/)). The bandwidth is enough that you sort of *can* imagine it as a single big GPU, and hence it *would* be worthwhile for accelerating single-register simulation. However, this only exists right now as NVLink and NVSwitch, compatible only with IBM's POWER architecture - you could argue this is still esoteric, and not worth a big refactor. Note it wouldn't actually be very hard to refactor QuEST for this platform - indeed QuEST works out-of-the-box with POWER8. But it's not something on our TODO list currently.
- **multiple local GPUs**
This is when you have multiple GPUs on the same machine, but maybe on different sockets and hence with a much lower bandwidth between them. The most common case is two GPUs - is it worthwhile using *two* GPUs over *one* to speedup single register simulation? Often, no!
In big QC simulation, having to move memory around is often the big killer, and should be avoided where possible. Unfortunately, simulating unitaries on registers often requires moving memory. If all the memory *stays* in the GPU (very high "internal bandwidth"), this is ok, but copying memory to the other GPU (across the socket) will introduce a *huge* per-gate overhead!
Hence, using two GPUs to simulate the same register size can be slower than using just one, especially as the simulation size grows and saturates the sockets!
There's hardly a benefit from the extra VRAM too, because doubling the memory enables simulation of *one* additional qubit. This is not worth the slowdown, or the hardware!
Even with more than two GPUs, the connections are likely hierarchical and so even more prone to saturation.
- **distributed GPUs**
This is when you have a GPU(s) on each distributed node of a cluster. In this circumstance, simulating a unitary gate which requires data exchange not only costs us a VRAM to RAM overhead (similar to before), but a *networking* overhead to talk to the other nodes! This can be somewhat improved by having a direct GPU to network-card connection (and MPI abstraction), but I believe that's pretty cutting-edge.
Let's say you have *n* nodes, each with a GPU and a multicore CPU, and you're resolved to a distributed simulation. When is it worthwhile to pay the extra memory overhead locally copying from RAM to VRAM (and use the GPU), over using just the CPUs? This is now the same trade-off to consider in the previous cases. So may or may not be worthwhile.
TL;DR: besides the somewhat esoteric case of having multiple tightly-connected GPUs, multi-GPU simulation introduces a new memory overhead that doesn't exist in single-GPU simulation. This overhead is almost always *way* longer than the time the GPU spends simulating the gate. Whether the whole simulation is sped up by the use of multi-GPU is system- and simulation-specific.
Status: Issue closed
|
the-blue-alliance/the-blue-alliance | 466564804 | Title: 2019 FMS schedule reports dropped "Match" column, breaks event wizard schedule import
Question:
username_0: Specifically, this check fails because the "Match" column is no longer present:
https://github.com/the-blue-alliance/the-blue-alliance/blob/22daf5ee5bd45567862350951918568bcf5f6adc/static/javascript/tba_js/eventwizard_apiwrite.js#L110
A couple FIM offseasons have run into this, but I forgot to report it - we added the column to the spreadsheet manually at first, then I put together a script to import a CSV schedule report.
Fortunately, the "Description" column does contain the number that used to be in the "Match" column in a relatively easy-to-parse format: either `Qualification X` for quals or `(#X)` for playoffs.
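For illustration (in Python rather than the event wizard's JavaScript), pulling the match number out of that column could look roughly like this; the exact description strings are assumptions based on the two formats above:

```python
# Extract the match number from the "Description" column:
# "Qualification X" for quals, "(#X)" for playoff matches.
import re

def match_number(description):
    m = re.search(r"Qualification (\d+)", description) or re.search(r"\(#(\d+)\)", description)
    return int(m.group(1)) if m else None
```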
This year's FMS can also export schedules as CSV - is that worth supporting in the event wizard as well? It could allow dropping the XLSX parser dependency eventually (although that might not be a good idea this year, in case FMS offseason doesn't support CSV exports or people are still running an older version for some reason).
2019mibb schedules attached for reference:
[MIBB2019_ScheduleReport_all.zip](https://github.com/the-blue-alliance/the-blue-alliance/files/3379656/MIBB2019_ScheduleReport_all.zip)<issue_closed>
Status: Issue closed |
uBlockOrigin/uBlock-issues | 377052142 | Title: Crash uBlock v1.17.3b4 | Linux x64 Chrome/72.0.3595.2
Question:
username_0: <!-- Do NOT delete this template or any part of it when submitting your issue -->
### Prerequisites
all sites
<!-- Check the appropriate boxes after you submit your issue -->
<!-- Speculated performance issues will be marked as invalid and closed if they do not come with actual profiling data + analysis supporting the claim -->
<!-- Opening issues for adding new filter lists is now disallowed and such issues will be declined and closed -->
- [ ] I verified that this is not a filter issue
- Filter issues MUST be reported at [filter issue tracker](https://github.com/uBlockOrigin/uAssets/issues)
<!--
- If disabling uBO <https://github.com/gorhill/uBlock/wiki/Quick-guide:-popup-user-interface#the-large-power-button> makes the issue go away, then in all likelihood this is a filter issue.
- See what the logger <https://github.com/gorhill/uBlock/wiki/The-logger> reports when you reproduce the issue, this will help you determine whether this is a filter issue.
-->
- [ ] This is not a support issue or a question
- Support issues and questions are handled at [/r/uBlockOrigin](https://old.reddit.com/r/uBlockOrigin/)
<!-- Such issue will be closed as invalid -->
- [ ] I performed a cursory search of the issue tracker to avoid opening a duplicate issue
- Your issue may already be reported.
- I tried to reproduce the issue when...
- [ ] uBlock Origin is the only extension
- [ ] uBlock Origin with default lists/settings
- [ ] using a new, unmodified browser profile
- [ ] I am running the latest version of uBlock Origin
- [ ] I checked the [documentation](https://github.com/gorhill/uBlock/wiki) to understand that the issue I report is not a normal behavior
### Description
Optional permission 'file:///*' is redundant with the required permissions;this permission will be omitted.
{
"author": "All uBlock Origin contributors",
"background": {
"page": "background.html"
},
"browser_action": {
"default_icon": {
"16": "img/icon_16.png",
"32": "img/icon_32.png"
},
"default_popup": "popup.html",
"default_title": "uBlock Origin dev build"
},
"commands": {
"launch-element-picker": {
"description": "__MSG_popupTipPicker__"
},
"launch-element-zapper": {
"description": "__MSG_popupTipZapper__"
},
"launch-logger": {
"description": "__MSG_popupTipLog__"
}
},
"content_scripts": [ {
"all_frames": true,
"js": [ "/js/vapi.js", "/js/vapi-client.js", "/js/contentscript.js" ],
"matches": [ "http://*/*", "https://*/*" ],
"run_at": "document_start"
[Truncated]
### Steps to Reproduce
1. [First Step]
2. [Second Step]
3. [and so on...]
### Expected behavior:
[What you expected to happen]
### Actual behavior:
[What actually happened]
### Your environment
* uBlock Origin version: 1.17.3b4
* Browser Name and version: Chrome/72.0.3595.2
* Operating System and version: Lubuntu 18.10 x64
Answers:
username_1: @gorhill
Status: Issue closed
|
XX-net/XX-Net | 377116664 | Title: No nonsense: Google is unreachable and YouTube search doesn't work. What should I do?
Question:
username_0: Visiting Google at https://www.google.com/ncr is fine, but searching for anything falls apart.
Frequent 504 errors; looking for a solution.
Answers:
username_1: Please post your configuration and logs.
username_0: Configuration is as follows:
Property | Value
-- | --
IPv4 status | OK
IPv6 status (how to enable) | OK
Valid IPv4 IP count | 0
Valid IPv6 IP count | 100
Total IP count | 100
XX-Net status | Working
IP latency | 306
Connection pool (help) | New: 0 h1: 0 h2: 0
Browser proxy settings | OK
CA certificate status (download) | OK
IP scan thread count (settings) | 1
username_0: Nov 04 13:47:38.119 - [WARNING] task GET www.google.com /sorry/index?continue=https://www.google.com/search%3Fnewwindow%3D1%26source%3Dhp%26ei%3D5IfeW_fjL6KHrwSc0qGICQ%26q%3Dsdn%26oq%3Dsdn%26gs_l%3Dpsy-ab.3..0l2j0i131k1j0l2j0i131k1j0j0i131k1j0l2.13789.14288.0.14915.3.3.0.0.0.0.424.424.4-1.1.0....0...1c.1j4.64.psy-ab..2.1.423....0.h1BciphyRJU&q=EhAgAQAAU6oGTBT6r-WOFAsCGPaP-t4FIhkA8aeDS1Gzn4bOcu5f5e6wVd9b1AIyDgP-MgFy timeout
Nov 04 13:47:38.120 - [WARNING] IP:fc00:db20:35b:7399::5 host:www.google.com not support GAE, server type:HTTP server (unknown) status:503
Nov 04 13:47:38.121 - [INFO] remove ip:fc00:db20:35b:7399::5 left amount:98 target_num:98
Nov 04 13:47:38.122 - [DEBUG] fc00:db20:35b:7399::5 worker close:
Nov 04 13:47:38.122 - [WARNING] read fail, ip:fc00:db20:35b:7399::5, chunk:0 url:GET www.google.com/sorry/index?continue=https://www.google.com/search%3Fnewwindow%3D1%26source%3Dhp%26ei%3D5IfeW_fjL6KHrwSc0qGICQ%26q%3Dsdn%26oq%3Dsdn%26gs_l%3Dpsy-ab.3..0l2j0i131k1j0l2j0i131k1j0j0i131k1j0l2.13789.14288.0.14915.3.3.0.0.0.0.424.424.4-1.1.0....0...1c.1j4.64.psy-ab..2.1.423....0.h1BciphyRJU&q=EhAgAQAAU6oGTBT6r-WOFAsCGPaP-t4FIhkA8aeDS1Gzn4bOcu5f5e6wVd9b1AIyDgP-MgFy task.timeout:56 e:error(9, 'Bad file descriptor')
Nov 04 13:47:38.123 - [WARNING] fc00:db20:35b:7399::5 trace: 0:connect, 1790:init, 276:request,:299, processed:0, transfered:0, sni:give-clothes.org
username_0: The configuration is default, nothing special.
username_0: Then the page just shows a 504 gaeproxy timeout.
username_1: It looks like the quota ran out but wasn't detected.
Download this fixed build: https://github.com/XX-net/XX-Net/issues/11479#ref-commit-038b421
username_0: But I'm already using the latest version, 3.2.11?
username_0: I'm using the latest source code, 3.2.11.
username_1: https://github.com/XX-net/XX-Net/archive/master.zip
This is the latest.
Status: Issue closed
|
rust-lang/rust | 106237359 | Title: Invalid "error: source trait is inaccessible" error
Question:
username_0: This code _*does not*_ compile ([playpen](http://is.gd/uKBOfD)):
```rust
pub mod mymod{
mod foo {
pub trait ToFoo{
fn to_foo(self) -> i32;
}
}
pub use self::foo::ToFoo;
}
pub fn foobar<T: mymod::ToFoo>(x: T) {
let v = x.to_foo();
}
fn main() {}
```
Output:
```
<anon>:14:13: 14:23 error: source trait is inaccessible
<anon>:14 let v = x.to_foo();
^~~~~~~~~~
<anon>:14:13: 14:23 note: module `foo` is private
<anon>:14 let v = x.to_foo();
^~~~~~~~~~
error: aborting due to previous error
playpen: application terminated with error code 101
```
Strangely as @bluss pointed out on IRC, this style _does_ compile ([playpen](http://is.gd/2GABjB)):
```
pub mod mymod{
mod foo {
pub trait ToFoo{
fn to_foo(self) -> i32;
}
}
pub use self::foo::ToFoo;
}
use mymod::ToFoo;
pub fn foobar<T: ToFoo>(x: T) {
let v = ToFoo::to_foo(x);
}
fn main() {}
```
Answers:
username_1: Duplicate of #16264.
Status: Issue closed
|
ember-engines/ember-engines | 236506250 | Title: Dependency dedup issue from 0.5.4 -> 0.5.8
Question:
username_0: When upgrading from 0.5.4 -> 0.5.8, our engine's vendor file got a lot larger:
0.5.4
```
- dist/engines-dist/admin/assets/engine-vendor: 0 B
```
0.5.8
```
- dist/engines-dist/admin/assets/engine-vendor.js: 157.52 KB (38.49 KB gzipped)
```
There are several dependencies defined in the engine's `package.json`, however none of these are unique to the engine (they are all also defined in the parent app). Shouldn't these be de-duping?
Ember CLI: 2.13.2
Ember: 2.13.3
Answers:
username_0: After some further digging, it appears the issue is from [this commit](https://github.com/ember-engines/ember-engines/commit/b16f8553376a13bced4adb73ecf6827de1013d22). Going to keep digging further to understand what is happening (and if this an issue with our setup).
username_1: It is likely that the addons being used have custom trees and are opting out of the default caching semantics from Ember-CLI. Thus, since we now take into account the caching key, it is no longer being deduped.
username_0: [Here](https://github.com/username_0/ember-engine-test/tree/engines-vendor-test) is an example repo. It is just a basic ember app with an in-repo lazy-loaded engine. The in-repo engine includes this in its `package.json`:
```
"dependencies": {
"ember-data": "*",
"ember-cli-babel": "*",
"ember-cli-htmlbars": "*"
}
```
In 0.5.4 this will build and the engine-vendor file will be 0 bytes. Checked out to [this commit](https://github.com/ember-engines/ember-engines/commit/b16f8553376a13bced4adb73ecf6827de1013d22), it will include a copy of ember-data in the vendor file.
I do not know enough about broccoli to debug much further, but it appears that `cacheKeyForTree` returns a different cache key for each addon per tree. So, if I am understanding correctly, this cannot de-dup because the cache keys never match.
username_0: @username_1 sorry I posted before seeing your comment. Is there something specific about the engine or addon that would opt it out of the default caching?
username_2: The general gist of the caching was proposed in [ember-cli/rfcs#90](https://github.com/ember-cli/rfcs/blob/master/complete/0090-addon-tree-caching.md). The solution in ember-data would be to implement the `cacheKeyForTree` hook and return a stable value (indicating that its addon tree is not customized per-consumer).
https://github.com/emberjs/data/pull/5110 adds the hook to ember-data.
username_0: Thanks! That makes sense. So is the path forward here to submit PR's on each addon that is not being properly cached?
Status: Issue closed
|
styled-components/babel-plugin-styled-components | 411755062 | Title: Ability to write full path to a component's file in className
Question:
username_0: Hi there!
Our common pattern is to write template in `index.tsx` (or `view.tsx`) and styles in `styles.tsx`. So the most common component dir structure looks like:
client/components/AwesomeComponent:
- index.tsx
- styles.tsx
That's why classes look like `styles__Header-sc-1ei6mxn-18` and you can't easily identify which `Header` it is, the one from `Component1/styles.tsx` or the one from `Component2/styles.tsx`. It would be great for us to write the full path to the file from the cwd in the className, like so: `client_components_component1_styles__Header-sc-1ei6mxn-18`.
What do you think?
btw we absolutely love styled components
Answers:
username_1: What do you think @mxstbr? My only concern is this could greatly increase the payload size of both the CSS and the HTML, though some of it would get gzipped away. Perhaps it could make sense behind a flag that isn't enabled by default.
username_0: +1 for flag, it should be used in development only
username_2: I added a flag for it in this PR:
https://github.com/styled-components/babel-plugin-styled-components/pull/259 |
HW-PlayersPatch/Development | 482618568 | Title: MP: Power Multiplier
Question:
username_0: Add a game option that increases a player's ship health and damage. Would need 8 game options, one for each player to select 100%-400% for health. Damage would go up by half that much.
# Explanation
I thought more about the health modifiers that other rts games offer. 200% health and 150% attack should make for a fair 1v2. Example: 1 ion frig with 200health and 150 attack vs 2 ion frigs with 100health and 100attack. Both sides have 200health total. Solo guy always has 150 attack, the other guys start with 200 combined attack, but then the first frig dies half way through battle so they end up with only 100attack. The fight should end even or with one side at 1% health.
Solo guy always has the mono control coordination advantage, but if on teamspeak the other team should be well coordinated. The other team should have the overall edge due to two players being able to do multipronged attacks etc.
# Implementation
_Damage Multiplier_
Similar to the CPU Attack Damage multiplier.
_Health multiplier_
Because the upgrades don't stack (hw2c lv1 1.3x, lv2 1.6x), we need to initialize a custom research file for hw2 races. So at game start give 2.0x health, and lv1 becomes 2.6x health.
leveldata\multiplayer\lib\research.lua loops through each player and loads the research file by race. Just add some if statements here: 'if player 1 Power Multiplier game option = 200%, and race = hiig, then load "Hiigaran Research200.lua"'.
http://forums.gearboxsoftware.com/t/modular-data-code-design-introducing-doscanpath/542045
If you're a modder and you want your frigates to have twice as much health you can set up your mod to specifically include:
doscanpath("data:Scripts/Races/Hiigaran/Scripts/Research", "Frigate_Research.lua")
doscanpath("data:Scripts/Races/Hiigaran/Scripts/Research", "Frigate_Upgrade_Speed.lua")
but add in your own:
doscanpath("data:Scripts/Races/Hiigaran/Scripts/MyMod/Research", "Frigate_Upgrade_2xHealth.lua")
Answers:
username_0: Closing. This feature didn't work out well in practice.
Status: Issue closed
|
randigtroja/Flashback | 253777138 | Title: Keyboard shortcuts
Question:
username_0: We should support keyboard shortcuts for certain actions in the app when running on desktop,
for example:
Reload: F5
browse between pages in threads: Ctrl-Left / Ctrl-Right
maybe something for switching between all the menu items in the hamburger menu? |
durhamatletico/durhamatletico.github.io | 104767922 | Title: Need to locate blog post of 9/3 and publish
Question:
username_0: I made a mess of trying to blog. You should see a saved draft in a folder called "2015." The mistake I made was failing to create a proper file path (specifically I botched the date of the post).
In trying to fix this I also created another new post. This should be deleted.
Answers:
username_1: --
<EMAIL>
Status: Issue closed
|
CougsInSpace/CougSat1-Hardware | 318759510 | Title: Fitting components (end of summer)
Question:
username_0: o Get attitude’s final mounting mechanism (end of summer)
o Get attidue’s prototyped mounting mechanism (end of May)
o Get power’s final board dimensions (end of summer)
o Send power backplane board dimensions (end of May) – Colin<issue_closed>
Status: Issue closed |
MicrosoftDocs/azure-docs | 313179606 | Title: What are public certificates for?
Question:
username_0: Sorry, this may be common knowledge but I am having a surprisingly difficult time establishing why one would want to upload a public certificate. Can the use case for this be added to this document? The linked blog does not elaborate.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8dfb8f30-3236-ad55-eabf-6940a7e16329
* Version Independent ID: 8f4385a4-9ac2-38da-d243-ed3c00b0fd36
* Content: [Bind an existing custom SSL certificate to Azure Web Apps](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-custom-ssl#public-certificates-optional)
* Content Source: [articles/app-service/app-service-web-tutorial-custom-ssl.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/app-service-web-tutorial-custom-ssl.md)
* Service: **app-service-web**
* GitHub Login: @username_3
* Microsoft Alias: **username_3**
Answers:
username_1: @username_0 Thanks for the feedback! I have assigned the issue to the content author to investigate further and update the document as appropriate.
username_2: @SunBuild Could you potentially help us update this document?
username_3: @username_2 This has been addressed by https://github.com/MicrosoftDocs/azure-docs/pull/7337.
username_3: #please-close
Status: Issue closed
username_1: @username_0 We will now proceed to close this thread. If there are further questions regarding this matter, please reopen it and we will gladly continue the discussion. |
riscv/riscv-debug-spec | 288020620 | Title: Clarify trigger match logic
Question:
username_0: I have two questions about the trigger match logic:
Q1. When mcontrol.match is 1 and tdata2 is b10,
does a LW instruction to address 0 hit the trigger?
Q2. When mcontrol.match is 1 and tdata2 is b101,
does a LW instruction to address 3 hit the trigger?
It seems the trigger compares only the base address, without taking the access size into account?
--
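For reference, here is how the NAPOT (match=1) ranges in the two questions decode, as a non-normative Python sketch based on my reading of the match=1 description (it is not spec text):

```python
def napot_range(tdata2):
    # match=1: the low bits of tdata2 up to and including its least-significant 0 bit
    # are don't-care, so the matched region is a naturally aligned power-of-two range.
    i = 0
    while (tdata2 >> i) & 1:
        i += 1
    size = 1 << (i + 1)
    base = tdata2 & ~(size - 1)
    return base, size

napot_range(0b10)   # -> (2, 2): addresses 2..3, while a LW at 0 touches bytes 0..3
napot_range(0b101)  # -> (4, 4): addresses 4..7, while a LW at 3 touches bytes 3..6
```

In both questions the base address of the load falls outside the decoded range while the accessed bytes overlap it, which is exactly why the access-size question matters.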
Answers:
username_1: Looks like it. I think we did have support for access size at some point,
but evidently it got removed somewhere. Is this something you need?
Tim
username_0: Current spec fulfills our usages. I just want to make sure our implementations are compatible with spec.
The first case occurs when GDB sets a watchpoint on a halfword and the halfword is accessed by a LW instruction. To handle the case, OpenOCD needs to set a trigger on a full word, but that also introduces false matches. OpenOCD needs to parse the instruction type to check whether the watchpoint is hit.
When misaligned load/store is supported, the trigger needs to compare a wider range.
Using mcontrol.match 2 and 3 only reduces false matches, and it takes more triggers.
It would be nice to have a new feature for improving the above usage, such as adding one bit in mcontrol for comparing the full accessed range of an instruction.
Zhonh-Ho
username_1: That's a good suggestion. Do you have any idea what such a bit should be called?
username_0: I did not notice that bit 20 is the last bit left in mcontrol for RV32.
I am hesitant to consume the last bit. For now, it's okay to leave the feature pending.
Status: Issue closed
username_1: Closing due to lack of interest/activity. |
Tarskin/HappyTools | 276598548 | Title: Automatic edge adjustment of peak detection fails on sequentially overlapping peaks
Question:
username_0: This was caused by the i['Peak'] value no longer being in the middle of i['Data'] after
the border was changed. The fix was to always compare the relevant border of i['Data'] to
i['Peak'] rather than using the two opposite ends of i['Data'].
Detailed Explanation:
--- ORIGINAL ---
Peak 1: 15.00-15.50 - Intensity: 1
Peak 2: 15.25-15.75 - Intensity: 1
Peak 3: 15.70-16.20 - Intensity: 1
--- PROCESSED --
Peak 1: 15.000-15.375 - Intensity: 1
Peak 2: 15.375-15.750 - Intensity: 1
Peak 3: 15.700-16.200 - Intensity: 1
--- CAUSE ---
15.750 - 15.375 = 0.375 (down from 0.5)
15.50 (Center) + 0.5*0.375 = 15.6875
Peak 2 Border (Expected) = 15.6875 - Peak 2 Border (Actual) = 15.750
No overlap between Peak 3 (Lower) and Peak 2 (Expected)
No adjustment made between peak 2 and 3
Answers:
username_0: --- FIX ---
Window = abs(Peak Edge - Peak Max) instead of abs(Peak Lower Edge - Peak Upper Edge)
This ensures that the algorithm knows the actual 'edge' of the peak window.
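Using the numbers from the example above, the difference can be sketched like this (illustrative Python only, not the actual HappyTools code; the variable names are made up):

```python
peak_max = 15.500                        # i['Peak'] for peak 2
lower_edge, upper_edge = 15.375, 15.750  # i['Data'] borders after peak 1 was adjusted

# Old: half the full span of i['Data'], which assumes peak_max is still centred in it
old_window = abs(upper_edge - lower_edge) / 2   # 0.1875
old_expected_upper = peak_max + old_window      # 15.6875 -> no overlap with peak 3 at 15.700

# Fixed: distance from the peak maximum to the relevant border of i['Data']
new_window = abs(upper_edge - peak_max)         # 0.250
new_expected_upper = peak_max + new_window      # 15.750 -> overlap with peak 3 detected
```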
Status: Issue closed
username_0: This bug also affects the generated peak list.
Status: Issue closed
|
pombase/fypo | 93494405 | Title: a possible cell size homeostasis phenotype
Question:
username_0: I am curating some of the seminal Peter Fantes/ Fantes plot papers which were mentioned so much at the meeting.
Despite these being really important papers, they are really experiments which just use a mutant and then observe the restoration of size homeostasis after a temperature shift.
The first one PMID:893551 is demonstrating
restoration of division length homeostasis over successive cell cycles (condition after shift from non-permissive to permissive temperature)
I can only annotate the elongated and normal length phenotypes for cdc2-33
I *could* make an annotation to reflect this observation
restoration of division length homeostasis over successive cell cycles (condition after shift from non-permissive to permissive temperature)
if you can think of a way that I could capture it....Have you done something else similar to this, it rings a bell.....I guess the point is that the size does not return instantly to normal but can only be corrected to a certain degree in each generation.
If you can't think of a way that I can do this I can add a note...
Answers:
username_1: The catch with this is that the cdc mutant is the only thing they used to make long cells in the first place; there was no way to get cells that had a wild-type genotype but were long. That means there's no way to tell whether the mutant differs from wild type in how long it takes to get back to "normal" length at division.
username_0: OK, I will just leave this and add comments to the paper. We need to do something else with these types of papers at some point....
username_1: Closing because we can't really do anything without a wt/mutant comparison.
Status: Issue closed
|
wecodemore/grunt-githooks | 57352615 | Title: Shell template HTML-escapes command
Question:
username_0: The shell template includes {{command}} which follows handlebars rules and HTML escapes the content e.g. a quote becomes `&quot;`
This hinders writing some commands using the provided template.
My fix would be to change {{command}} to {{{command}}} which is non-escaping. I can do this with tests tomorrow if that sounds ok.
Answers:
username_1: Dan, please use proper MarkUp here. Use backticks to format code so it's readable in mails as well. Thanks in advance.
Also could you please show some examples of what you are trying to do before you file a pull request? I don't want you to put any effort (and maybe waste your time) in something that could actually be a XY problem.
username_0: You're right, I rushed this issue. I've updated it now, sorry about that.
I'm pretty much trying to implement [this](https://gist.github.com/sindresorhus/7996717) as a grunt-githook
Here is what I have:
```js
githooks: {
all: {
'post-merge': {
hashbang: '#!/bin/sh',
template: './node_modules/grunt-githooks/templates/shell.hb',
command: 'changed_files=\"$(git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD)" && function run_if_changed() { echo "$changed_files" | grep --quiet "$1" && eval "$2"; } && run_if_changed "client/package.json" "npm install"',
taskNames: "",
startMarker: '## GRUNT GITHOOKS START',
endMarker: '## GRUNT GITHOOKS END'
}
}
}
```
The command contains quotes and ampersands that get escaped to `&quot;` and `&amp;`. I could just pull the command into its own file I suppose.
Thanks
username_1: In short: You are trying to update your `npm` modules whenever you merge a branch? Sidenote: You might want to use `npm update` and not `npm install` in that case.
username_1: Additional info: You can build your own template as well. The provided templates are blueprints anyway.
Status: Issue closed
|
pcingola/SnpEff | 74657543 | Title: Issues with directory separator in Windows ( '\' )
Question:
username_0: From user:
I am trying to use the snpsift.jar phastcons function, however my Windows batch file has trouble finding the documents. When I do the same procedure with my Linux sh file, everything works correctly. I guess the problem is the "\" vs "/" difference between Linux and Windows platforms.
Answers:
username_0: Closing old issues.
Status: Issue closed
|
knative/serving | 783406502 | Title: Autoscaler memory usage increases even with total number of ksvcs/revisions kept constant
Question:
username_0: /area monitoring
## What version of Knative?
0.18.2 but should affect any version greater than this one.
## Expected Behavior
Creating/deleting a number of ksvcs while keeping the final number constant eg zero should not create any difference
in the heap memory allocated before and after the crud ops.
## Actual Behavior
Running the reproducer.sh soak test (see attached file), which repeatedly creates and deletes ksvcs while keeping the total number of ksvcs constant, the autoscaler gets OOMKilled after 4 hours.

## Steps to Reproduce the Problem
Running the `reproducer.sh` soak test (see attached file), which repeatedly creates and deletes ksvcs while keeping the total number of ksvcs constant, the autoscaler gets OOMKilled after 4 hours.
```
[root@ocp-dynamic-6653 ~]# go tool pprof --base "knative-serving.autoscaler-699dff8cff-hk27b.heap.2021-01-10_09:24:04-05:00.pb.gz" "knative-serving.autoscaler-699dff8cff-hk27b.heap.2021-01-10_10:47:41-05:00.pb.gz"
File: autoscaler
Type: inuse_space
Time: Jan 10, 2021 at 9:24am (EST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 47.90MB, 85.65% of 55.93MB total
Dropped 5 nodes (cum <= 0.28MB)
Showing top 10 nodes out of 130
flat flat% sum% cum cum%
26.89MB 48.08% 48.08% 26.89MB 48.08% go.opencensus.io/stats/view.NewMeter
6MB 10.73% 58.81% 6MB 10.73% go.opencensus.io/stats/view.(*collector).addSample
3.50MB 6.26% 65.07% 9MB 16.10% go.opencensus.io/stats/view.(*worker).tryRegisterView
3MB 5.36% 70.44% 3MB 5.36% go.opencensus.io/stats/view.viewToMetricDescriptor
2.50MB 4.47% 74.91% 2.50MB 4.47% knative.dev/pkg/metrics.copyViews
1.51MB 2.69% 77.60% 1.51MB 2.69% k8s.io/apimachinery/pkg/apis/meta/v1.(*FieldsV1).UnmarshalJSON
1.50MB 2.68% 80.28% 1.50MB 2.68% go.opencensus.io/stats/view.(*worker).getMeasureRef (inline)
1MB 1.79% 82.07% 1MB 1.79% go.opencensus.io/stats/view.(*worker).RegisterExporter
1MB 1.79% 83.86% 2MB 3.58% go.opencensus.io/stats/view.(*collector).collectedRows
1MB 1.79% 85.65% 4MB 7.15% go.opencensus.io/stats/view.newViewInternal (inline)
```
[autoscaler.heap.zip](https://github.com/knative/serving/files/5796157/autoscaler.heap.zip) contains the data for heap comparison.
This happens because a NewMeter call happens every time we record a metric and we never delete any meter from [this](https://github.com/knative/pkg/blob/93874f0ea7c0f1b45d873235b81c3aedcacf9527/metrics/resource_view.go#L66) map. It seems like a bug on the knative.dev/pkg metrics side.
Credit goes to @maschmid for discovering this.
Answers:
username_1: (ノಠ益ಠ)ノ彡┻━┻
metrics again :)
username_1: /cc @username_2
username_2: I'd expect https://github.com/knative/pkg/pull/1741 fixes the problem, let me see if it got pulled in...
username_2: Yes, it looks like the release-0.18 branch does not have that patch.
Do you want me to try to back-port a cherrypick of that change?
username_0: @username_2 yes that would be great! I have a question though: in that PR there is a 10-minute TTL beyond which all exporters are flushed and meters are removed. From a semantics perspective, if a revision is just inactive (nothing happens, no traffic hits it, and no scaling-related event occurs), would that mean we remove its metrics, so that Prometheus for example would show no metrics when it scrapes the metrics endpoint?
username_2: Yes, that's what will happen. We can probably adjust the TTL, but tying it in with the informer cache is probably a much larger PR.
username_0: Ok cool, I guess we can cleanup things when we get a delete event like [here](https://github.com/knative/serving/blob/3ec271c55beb2099086966bb3035011894a4c2b7/pkg/reconciler/revision/controller.go#L122) per resource.
username_2: It would be nice if we could keep the Meter or a set of [Counters/Observers](https://pkg.go.dev/go.opentelemetry.io/[email protected]/metric#Meter.NewFloat64Counter) alongside the informer in the cache, and then load them into the Context when handling something for that Resource.
username_2: (Unfortunately, the current code is structured with a sort-of-global singleton for the Metrics, which isn't right for our use case, but I started the migration from the inside rather than the outside.)
username_0: @username_2 do you have a pointer for the migration? Any way I can help?
username_0: @username_2 even with the fix we are hitting the same issue; the activator and autoscaler are restarting in a 12h soak test (here is a dump diff with a 25-minute time difference):
```
[root@ocp-dynamic-20 soak]# go tool pprof --base "knative-serving_autoscaler-6d76d9d7f4-ddrvt_heap_2021-01-29T09:34:17-05:00.pb.gz" "knative-serving_autoscaler-6d76d9d7f4-ddrvt_heap_2021-01-29T10:00:30-05:00.pb.gz"
File: autoscaler
Type: inuse_space
Time: Jan 29, 2021 at 9:34am (EST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top20
Showing nodes accounting for 6789.61kB, 38.01% of 17861.85kB total
Dropped 7 nodes (cum <= 89.31kB)
Showing top 20 nodes out of 156
flat flat% sum% cum cum%
2048.56kB 11.47% 11.47% 2048.56kB 11.47% go.opencensus.io/stats/view.(*collector).addSample
1552.15kB 8.69% 20.16% 1552.15kB 8.69% go.opencensus.io/stats/view.NewMeter
1537.44kB 8.61% 28.77% 3073.53kB 17.21% go.opencensus.io/stats/view.(*worker).tryRegisterView
1026.91kB 5.75% 34.52% 1026.91kB 5.75% k8s.io/apimachinery/pkg/apis/meta/v1.(*FieldsV1).UnmarshalJSON
1024.08kB 5.73% 40.25% 1024.08kB 5.73% go.opencensus.io/stats/view.viewToMetricDescriptor
1024.08kB 5.73% 45.98% 1024.08kB 5.73% knative.dev/pkg/metrics.copyViews
-902.59kB 5.05% 40.93% -902.59kB 5.05% compress/flate.NewWriter
-528.17kB 2.96% 37.97% -528.17kB 2.96% encoding/json.(*Decoder).refill
528.17kB 2.96% 40.93% 528.17kB 2.96% regexp.(*bitState).reset
-518.65kB 2.90% 38.03% -518.65kB 2.90% github.com/json-iterator/go.(*Iterator).stopCapture
-514kB 2.88% 35.15% -514kB 2.88% bufio.NewReaderSize
-512.28kB 2.87% 32.28% -512.28kB 2.87% encoding/json.(*decodeState).objectInterface
-512.25kB 2.87% 29.41% -512.25kB 2.87% reflect.New
512.14kB 2.87% 32.28% 512.14kB 2.87% k8s.io/apimachinery/pkg/apis/meta/v1.(*ObjectMeta).DeepCopyInto
-512.10kB 2.87% 29.41% -512.10kB 2.87% go.uber.org/zap.getCallerFrame
512.05kB 2.87% 32.28% 512.05kB 2.87% context.(*cancelCtx).Done
512.05kB 2.87% 35.15% 512.05kB 2.87% golang.org/x/net/http2.(*ClientConn).newStream
512.05kB 2.87% 38.01% 512.05kB 2.87% net/http.(*persistConn).roundTrip
-512.04kB 2.87% 35.15% -512.04kB 2.87% go.opencensus.io/stats/view.(*worker).reportView
512.03kB 2.87% 38.01% 512.03kB 2.87% context.WithCancel
```
I think the reason for this is that we create the views here: https://github.com/knative/pkg/blob/ca02ef752ac658f5a315b619e56ec2f87fafdeff/metrics/resource_view.go#L131 but we never unregister them (so old metrics are still exported). During cleanup there is an exporter flush https://github.com/knative/pkg/blob/ca02ef752ac658f5a315b619e56ec2f87fafdeff/metrics/resource_view.go#L87 but the exporter is always nil. There are not that many metrics per resource, hence the slow OOM. So we have to unregister the views during cleanup.
username_0: I think we can close this as there is a fix merged. We should keep an eye though.
Status: Issue closed
username_3: @username_0 I am investigating an issue where I see a metric (e.g. `knative.dev/internal/serving/revision/request_count`) being emitted fine to Stackdriver, but at the same time I see the same metric being rejected because it has a timestamp earlier than the latest timestamp reported for that metric.
So, it seems that the same metric is somehow exported twice and in the failing cases, the time window is super large:
```
start_time_seconds: [1614485706], // 2021-02-27 20:15:06-08:00
end_time_seconds: [1616396367] // 2021-03-21 23:59:27-07:00 [NOW AT TIME OF CAPTURE]
```
Unfortunately, I can't get a process dump and I have very limited access to that cluster.
I am wondering if this could have anything to do with views not being unregistered properly and being kept in the background and trying to export the metric forever. |
greenstatic/bigbluebutton-exporter | 624453949 | Title: Unable to start bigbluebutton-exporter service
Question:
username_0: Hello Gregor,
Thanks for your great job. I'm a Grafana fan (and newbie as well).
As I'm not fond of Docker (I'd rather understand what I'm running and how it is set up), I decided to install the systemd version, following the howto step by step.
Now I'm stuck unable to start the bigbluebutton service.
When I check the status :
jfvarin@labo:/etc/bigbluebutton-exporter$ sudo systemctl status bigbluebutton-exporter
● bigbluebutton-exporter.service - BigBlueButton Exporter
Loaded: loaded (/lib/systemd/system/bigbluebutton-exporter.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since lun. 2020-05-25 21:09:21 CEST; 2s ago
Process: 17957 ExecStart=/usr/bin/python3 /opt/bigbluebutton-exporter/bbb-exporter/server.py (code=exited, status=1/FAILURE)
Main PID: 17957 (code=exited, status=1/FAILURE)
mai 25 21:09:21 labo systemd[1]: bigbluebutton-exporter.service: Unit entered failed state.
mai 25 21:09:21 labo systemd[1]: bigbluebutton-exporter.service: Failed with result 'exit-code'.
So I went a step further by running
jfvarin@labo:/etc/bigbluebutton-exporter$ sudo /usr/bin/python3 /opt/bigbluebutton-exporter/bbb-exporter/server.py
Traceback (most recent call last):
File "/opt/bigbluebutton-exporter/bbb-exporter/server.py", line 6, in <module>
import settings
File "/opt/bigbluebutton-exporter/bbb-exporter/settings.py", line 17, in <module>
API_BASE_URL = validate_api_base_url(os.environ["API_BASE_URL"])
File "/usr/lib/python3.5/os.py", line 725, in __getitem__
raise KeyError(key) from None
KeyError: 'API_BASE_URL'
I triple checked the API_BASE_URL and the API_SECRET which work like a charm with the API Mate but it seems that server.py is not able to get the info.
My conf is quite standard: Ubuntu 16.04 with BBB and Greenlight + SSL + prometheus + python3-pip...
If someone can give me a clue, I will really appreciate it, as I want to avoid running Docker (sorry to the Docker fans).
Thanks in advance for your help, I still have a lot to do to see it fully running.
Best regards from Paris.
JF
Answers:
username_1: Seems to me like you didn't edit the settings properly. Did you follow [steps 4-5](https://bigbluebutton-exporter.username_1.dev/installation/bigbluebutton_exporter/#4-copy-systemd-unit-service-and-example-settings)?
username_1: Also, for future reference, it's much easier to read error snippets if you insert them as "code".
https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#code-and-syntax-highlighting
username_1: Just letting you know, I updated the docs regarding systemd installation with a bit safer `cp` command in step 4. It is functionally equivalent to the previous command but a bit safer if anybody mistypes anything. Just so you don't bang your head that something "looks different" than before.
username_0: Hello Gregor,
Thanks for your quick answer.
I did tons of experiments in order to both understand and fix the issue.
It's good to learn new techniques after 60 ;-)
I checked all required Python modules, checked all versions...
Re-installed with the new version (included cp) and the result stayed the same.
My conclusion was that the script was not able to load the 2 env variables, so I modified the `settings.py` script by adding the `dotenv` module and pointing it at the `settings.env` file,
then ran the script both as user and with sudo and it worked like a charm, but I was still unable to run it as a service...
It will probably still take some time but once it's done it's done.
I apologize for not using Markdown highlighting syntax, but it was my first GitHub post ever. I learn every day.
Kind regards from France
username_1: @username_0 if I will be so eager to learn new techniques at 60 I will be trilled :)
Can you report back with the output of:
```shell
$ sudo journalctl -u bigbluebutton-exporter
```
username_0: My pleasure.
I just think that the day I will not learn anything anymore I will be dead ;-)
I also put the modified script I tried.
JF
[journalctl2705.txt](https://github.com/username_1/bigbluebutton-exporter/files/4689571/journalctl2705.txt)
[settings.txt](https://github.com/username_1/bigbluebutton-exporter/files/4689580/settings.txt)
username_1: Ah I see the issue. Your second original snippet mislead me - since you are running it manually without having the environment variables in your session.
You are getting in your errors:
```
mai 27 16:56:05 labo systemd[1]: Stopped BigBlueButton Exporter.
mai 27 16:56:05 labo systemd[1]: Started BigBlueButton Exporter.
mai 27 16:56:05 labo python3[25933]: Traceback (most recent call last):
mai 27 16:56:05 labo python3[25933]: File "/opt/bigbluebutton-exporter/bbb-exporter/server.py", line 5, in <module>
mai 27 16:56:05 labo python3[25933]: from prometheus_client import start_http_server, REGISTRY
mai 27 16:56:05 labo python3[25933]: ImportError: No module named 'prometheus_client'
mai 27 16:56:05 labo systemd[1]: bigbluebutton-exporter.service: Main process exited, code=exited, status=1/FAILURE
mai 27 16:56:05 labo systemd[1]: bigbluebutton-exporter.service: Unit entered failed state.
mai 27 16:56:05 labo systemd[1]: bigbluebutton-exporter.service: Failed with result 'exit-code'.
```
This is a library issue. The documentation is missing a vital step, installing via pip the Python dependencies. Looks like I had them already installed when I was writing the documentation.
Give this a try:
```
$ sudo apt update
$ sudo apt install python3-pip # installs Python3 pip if you don't already have it
$ sudo pip3 install -r /opt/bigbluebutton-exporter/requirements.txt # install Python3 dependencies system-wide
```
Report back if it works, so we can fix the documentation.
username_0: Thanks for the idea of using `requierements.txt` to install dependencies,
I did it manually first and using the script allowed me to update some versions.
Anyway, it still don't grab the vars either when manually running `/usr/bin/python3 /opt/bigbluebutton-exporter/bbb-exporter/server.py` after failing `sudo systemctl start bigbluebutton-exporter`
I reboot the machine, logged in and started the service first and checked status just after.
Status was green with an error message (no module named xmltodict)...
For your information, in my previous tests, the first issue I face was xmltodict I installed manually...
THAT was my error !!!
Then turn back to grey.
Took a breath... uninstalled all xmltodict versions (pip, pip3) and reinstalled with the script.
Restarted the service, status check, netstat check and now it works ;-)
Hope for you not too much (old) people want to install it without Docker ;-)
JF
username_1: No problem, that's why the instructions are there :)
username_0: I'll keep you posted once everything runs.
I'll be pleased to help noobiewise speaking.
JF
Status: Issue closed
username_0: Hi Gregor,
As I told you, quick update.
Even if it's not perfect (still some issues with the metrics), my Grafana serveur is up and running grabing from my lab bbb serveur.
I just want to point out anothe mistake I did yesterday which was to run `sudo apt install prometheus` directly on the 16.04.
This version is completely obsolete and `prometheus.yml` files are not interpreted correctly.
So I moved to the latest version of Prometheus.
Again, thanks for your help.
Kind regards
JF
username_1: What kind of issues are you having with the metrics? |
gbif/portal-feedback | 303823474 | Title: Are pelicans now dinosaurs?
Question:
username_0: **Are pelicans now dinosaurs?**
We noticed yesterday that a number of bird genera are now considered dinosaurs (Reptilia, Dinosauria). Is this for real?
(cc @username_2)
-----
User provided contact info: @username_1
System: Chrome 64.0.3282 / Mac OS X 10.13.3
User: [See in registry](https://www.gbif.org/api/feedback/user/efa55fede42d54f6d1a7addbd011d3d6:cdeb946d719397b0f4f6f7ba3799d983ef1a8284a1432a94be266449652d8c202f844bfeea62139ae68ef1aee410ded39ba7c0b4390047a8c8c707a88a173209)
Referer: https://www.gbif.org/occurrence/gallery?taxon_key=9369825
Window size: width 1631 - height 953
[API log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2018-03-09T12:15:10.172Z',mode:absolute,to:'2018-03-09T12:21:10.172Z'))&_a=(columns:!(_source),index:'prod-varnish-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc)))
[Site log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2018-03-09T12:15:10.172Z',mode:absolute,to:'2018-03-09T12:21:10.172Z'))&_a=(columns:!(_source),index:'prod-portal-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc)))
System health at time of feedback: OPERATIONAL
Answers:
username_1: Also Charadrius, Crypturellus, Dromaius, Gygis, Oceanodroma...
Status: Issue closed
username_2: Apparently yes, see #1213 and please continue there |
JavaMoney/JavaMoney.github.io | 75917291 | Title: Blog frontpage does not get updated by JBake
Question:
username_0: At the moment the "Blog" feature of the JavaMoney site uses the blog "archive" (http://javamoney.github.io/archive.html) because for some reason JBake no longer updates the blog "frontpage" (http://javamoney.github.io/blog.html) when baking. Unless it's a JBake related problem (please feel free to try with different versions locally) some meta-information could be wrong/missing, or the number of blog entries may have exceeded this format and template. The "archive" page does the job, so it's not a critical issue, but the front page would be slightly more appealing if it can be reactivated.<issue_closed>
Status: Issue closed |
VTL-Community/VTL-Community | 1094392904 | Title: The stringOperators rule is ambiguous
Question:
username_0: The [`instrAtom` alternative](https://github.com/InseeFr/Trevas/blob/master/vtl-parser/src/main/antlr4/fr/insee/vtl/parser/Vtl.g4#L179) in the [`stringOperators` rule](https://github.com/InseeFr/Trevas/blob/master/vtl-parser/src/main/antlr4/fr/insee/vtl/parser/Vtl.g4#L175) is ambiguous. Optional parameters should be nested.
Example of ambiguous expression: `instr("Hello world world", "world", 11)`
The correct rule should be:
```
INSTR LPAREN expr COMMA param=expr ( COMMA optionalExpr (COMMA optionalExpr)?)? RPAREN # instrAtom
```
Same thing for [`stringOperatorsComponent`](https://github.com/InseeFr/Trevas/blob/master/vtl-parser/src/main/antlr4/fr/insee/vtl/parser/Vtl.g4#L182).
Answers:
username_0: (copied from https://github.com/InseeFr/Trevas-JS/issues/41). |
graphql-java/graphql-java | 748600397 | Title: Illegal character in module name.
Question:
username_0: **Describe the bug**
Illegal character in module name.
Java module names are not allowed to have dashes (-) in their names.
**To Reproduce**
Create a minimal java project using modules and try to add a requires com.graphql-java.
Observe that it fails to build.<issue_closed>
Status: Issue closed |
sparkydogX/sparkydogx_blog_comment | 419802061 | Title: Setting up Wake-on-LAN in Ubuntu | SparkydogX Blog
Question:
username_0: https://sparkydogx.github.io/2019/01/16/ubuntu-wake-on-lan/
Wake-on-LAN is essentially remote power-on. First, the motherboard must support Wake on LAN (most motherboards made after 2000 support WoL), and the machine also needs to stay connected to power. BIOS setup: enable the WoL feature in the BIOS settings; the specific steps can usually be found on the manufacturer's website. I use an ASUS motherboard, and the setup steps are here: How to enable & disable Wake-on-LAN in the BIOS. Setting it up in Ubuntu: install ethtool: sudo apt-get install et |
scikit-learn/scikit-learn | 57253353 | Title: different results on different versions of sklearn
Question:
username_0: I reinstalled my sklearn to the latest 0.15 and re-ran the same code with the same input on same setting (k-fold splitting, random seed) using SVC, but the classification results are quite different (like ~75% vs 93%). Previously I was using 0.14.
Answers:
username_1: It's possible the cross validation implementation has changed. Can you
provide your code?
username_2: Can you reproduce your 0.14 results if you install that again? And can you try to git-bisect it?
If you can share the code + data we can help.
username_0: I can reproduce the results in 0.14.
I was using StratifiedKFold to only get the train and test indices and the rest of learning and testing was implemented by myself.
I quickly checked the train/test indices generated by StratifiedKFold, the ones from 0.14 is different from 0.15. It looks like this function might have been changed?
I can't release the data now, but I can share the code later if needed.
username_2: Yeah I think that function changed. If you shuffle the data for 0.14 before doing the cross-validation
```python
from sklearn.utils import shuffle
X, y = shuffle(X, y, random_state=0)
```
does the result on 0.14 change?
username_2: Or maybe more interestingly, if you run on 0.15, do you get better results again? If so, your data is ordered in some way, and you should ask yourself if you want to keep this ordering in your cross-validation or not.
username_0: I didn't try the shuffling, but I can imagine if I shuffle the data the results can be quite different.
But I was doing stratified k-fold splitting and that's why I used StratifiedKFold, not KFold. And I don't think the split should be different on different versions?
Actually the new results on 0.15 are worse (w/o shuffling) and I did order the file by class but I don't think that will affect the results by doing stratified CV?
username_0: Aha, wait, I misread your previous answer.
Let me try shuffling the data before SKFing it.
username_2: The reason we change the splitting was that it was done in a pretty unnatural way before. You'd expect the first from a given label to be in the first fold, which was not the case before. So what was done before could hide correlation issues inside the data, because it was basically shuffling it.
If your results vary widely when using different cross-validation folds, you should think hard about your data and whether your results are meaningful.
username_0: That makes sense.
And shuffling the data before SKFing it solves the problem. Now 0.15 produces similar results.
Thanks!
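Put together, the workaround looks roughly like this (sketched against the 0.1x-era API; StratifiedKFold now lives in sklearn.model_selection and takes n_splits, and the data here is synthetic just to make the snippet runnable):

```python
import numpy as np
from sklearn.utils import shuffle
from sklearn.cross_validation import StratifiedKFold  # sklearn.model_selection in modern releases

# X, y stand in for the real feature matrix and labels
X = np.random.rand(100, 5)
y = np.repeat([0, 1], 50)

# Break any ordering (e.g. sorted by class) before building the stratified folds
X, y = shuffle(X, y, random_state=0)

for train_idx, test_idx in StratifiedKFold(y, n_folds=5):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # fit the SVC on X_train/y_train and evaluate on X_test/y_test here
```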
username_2: Ok so then everything is as expected ;) The point is that if the ordering in your data is something like when the data arrived, and your test data will arrive at an even newer point in time, then the more pessimistic 0.15 result is more realistic for new test data.
Status: Issue closed
|
rancher-plugins/kontainer-engine-driver-oke | 849519703 | Title: Add support for setting custom pod and service CIDR blocks
Question:
username_0: 
Answers:
username_0: @adi13
username_0: @username_1
username_0: The two driver flags `--pod-cidr`, and `--service-cidr` need to be plumbed through to https://github.com/rancher/terraform-provider-rancher2
username_0: PR merged on the driver side: https://github.com/rancher-plugins/kontainer-engine-driver-oke/pull/36
username_1: This PR is merged into master https://github.com/rancher/terraform-provider-rancher2
Status: Issue closed
|
rafaelmardojai/firefox-gnome-theme | 758460490 | Title: Is it possible to use system style for window actions without enabling the title bar
Question:
username_0: Hello.
When the title bar is disabled (goal state) the close, maximize and minimize icons are styled for Gnome. Is it possible to make the window use the ones from the system theme? (Currently running mojave-gtk) |
viritin/viritin | 165731483 | Title: Cleaning containers up
Question:
username_0: We have different "flavors" of the ListContainer in viritin:
- `ListContainer` - the basic variant
- `FilterableListContainer extends ListContainer` - this one supports filtering
- `GeneratedPropertyListContainer extends ListContainer` - this one is used for generated columns in `MGrid`
Would it not be easier to merge `ListContainer` and `FilterableListContainer`, so we have one container for `MTable` and keep the `GeneratedPropertyListContainer` for `MGrid`? This would also make it easier to use filtering on `MGrid` - right now one has to use `setContainerDatasource` with a `FilterableListContainer` to get filter support (see #206), and it would also allow the combination of generated properties and filtering in `MGrid`, see #189.
Answers:
username_1: Not quite sure on this, I'll need to look into this more with fresh brains at some point. To me it is a design flaw that filtering is done at the "container level". I'd rather see extensions or examples for (M)Grid so that editing "filtering fields" just changes the queries, which are delegated to the backend.
username_0: `I'll need to look into this more with fresh brains at some point. ` - that's never wrong ;-)
But I think filtering on "container level" is fine, as long as we don't use `LazyList`. In the `LazyList` case we should also delegate the filtering to the backend.
We would then end up with two different cases:
1. Do sorting and filtering on container level - maybe poor performance, high memory consumption but easy to implement - and fine if the dataset is not too big
2. Do it lazy: sorting, filtering in the backend - more to implement for the user, but more efficient.
I wonder if in the 2nd case we wouldn't just be duplicating the LazyQueryContainer:
http://github.com/tlaukkan/vaadin-lazyquerycontainer |
Altinn/altinn-studio | 641802532 | Title: Change partition key for applications collection to appId
Question:
username_0: ## Considerations
## Acceptance criteria
- [ ] Partition key is appId
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
- [ ] Update partition key in code/config
- [ ] Update queries for application to use the new key
## Definition of done
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] QA
- [ ] Manual test is complete
- [ ] Automated test is implemented
- [ ] All tasks in this userstory are closed
Answers:
username_0: Issue not well justified. Closing issue. Re-open if there is an uneven distribution of requests across the partitions.
Status: Issue closed
|
python-cmd2/cmd2 | 685169295 | Title: RecursionError when printing argparse Namespace
Question:
username_0: I've narrowed down the cause to this line in [decorators.py](https://github.com/python-cmd2/cmd2/blob/master/cmd2/decorators.py)
```Python
setattr(ns, 'get_handler', types.MethodType(get_handler, ns))
```
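For anyone curious, a minimal repro of what seems to be going on: the bound method's repr embeds `repr(ns)`, which embeds the method again, with no recursion guard (whereas `functools.partial` has a guarded repr):
```Python
import argparse
import types

ns = argparse.Namespace()
ns.get_handler = types.MethodType(lambda self: None, ns)
print(ns)  # RecursionError: repr(ns) -> repr(bound method) -> repr(ns) -> ...
```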
This change fixes it
```Python
setattr(ns, 'get_handler', functools.partial(get_handler, ns))
```<issue_closed>
Status: Issue closed |
sql-machine-learning/sqlflow | 608229999 | Title: SQLFlow error message examples
Question:
username_0: # SQLFlow Error Message Examples
This issue collects some error cases in the following format:
``` text
1. The submitted SQL
2. Error message
3. What we wanted (optional)
```
## Database Connection Error
``` bash
ERROR: runSQLProgram error: GetAlisaTask got a bad result, response={"returnCode":"11020295002","requestId":"0ab411db15880473516257764d0bda","returnMessage":"调用Alisa失败:java.net.SocketException: Connection reset","returnErrorSolution":""}
```
We wanted:
``` bash
Alisa database driver error: {"returnCode":"11020295002","requestId":"0ab411db15880473516257764d0bda","returnMessage":"调用Alisa失败:java.net.SocketException: Connection reset","returnErrorSolution":""}
```
## PAI submitter program runtime error
``` sql
SELECT * FROM alifin_jtest_dev.sqlflow_titanic_train
TO TRAIN DNNClassifier1 WITH
model.hidden_units=[200, 200, 200]
LABEL survived
INTO my_titanic_model;
```
``` text
runSQLProgram error: failed: pai -name tensorflow1150 -project
****, detailed error message on URL: http://pai-logviwer-url
```
What we wanted:
``` text
Model Zoo check error: DNNClassifier1 does not exist, please check the available model list at: http://model_zoo.sqlflow.org/list
```
## Syntax Error
``` sql
SELECT * FROM alifin_jtest_dev.sqlflow_titanic_train
TO TO TRAIN DNNClassifier WITH
model.hidden_units=[200, 200, 200]
model.no_exits=abc,
LABEL survived
INTO my_titanic_model;
```
Error Message:
``` text
runSQLProgram error: syntax error: at (3 ~ 5)-th runes near "TO TRAIN D"
```
[Truncated]
``` sql
SELECT * FROM alifin_jtest_dev.sqlflow_titanic_train
TO TRAIN DNNClassifier WITH
model.hidden_units=[200, 200, 200],
model.no_exits=abc
LABEL survived
INTO my_titanic_model;
```
Error Message:
``` text
runSQLProgram error: unsupported attribute model.no_exits
```
What we wanted:
``` text
attribute check error: unsupported attribute model.no_exits in DNNClassifier; for allowed attributes of DNNClassifier please go to http://sqlflow.org/models/dnnclassifier
```
Answers:
username_1: There are many other errors involving **attribute checking** or other *semantic checks*:
### Type Checking Failed
```sql
SELECT * from iris.train TO TRAIN DNNClassifier WITH model.hidden_units=x LABEL class INTO iris.xgb;
```
Error Message:
```bash
Traceback (most recent call last):
File "<stdin>", line 136, in <module>
File "/root/go/src/sqlflow.org/sqlflow/python/sqlflow_submitter/tensorflow/train.py", line 126, in train
save_checkpoints_steps, validation_metrics)
File "/root/go/src/sqlflow.org/sqlflow/python/sqlflow_submitter/tensorflow/train_estimator.py", line 65, in estimator_train_and_save
eval_start_delay_secs, eval_throttle_secs)
File "/root/go/src/sqlflow.org/sqlflow/python/sqlflow_submitter/tensorflow/train_estimator.py", line 105, in estimator_train_compiled
estimator.train(lambda: train_dataset_fn(), max_steps=train_max_steps)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1160, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1190, in _train_model_default
features, labels, ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1148, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/dnn.py", line 764, in _model_fn
batch_norm=batch_norm)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/dnn.py", line 573, in dnn_model_fn_v2
mode=mode)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/dnn.py", line 508, in _dnn_model_fn_builder_v2
name='dnn')
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/dnn.py", line 328, in __init__
name=hidden_shared_name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/core.py", line 995, in __init__
self.units = int(units)
ValueError: invalid literal for int() with base 10: 'x'
[Truncated]
Start training XGBoost model...
[12:39:48] 110x4 matrix with 440 entries loaded from train.txt_0
Traceback (most recent call last):
File "<stdin>", line 33, in <module>
File "/root/go/src/sqlflow.org/sqlflow/python/sqlflow_submitter/xgboost/train.py", line 64, in train
**train_params)
File "/usr/local/lib/python3.6/dist-packages/xgboost/training.py", line 216, in train
xgb_model=xgb_model, callbacks=callbacks)
File "/usr/local/lib/python3.6/dist-packages/xgboost/training.py", line 74, in _train_internal
bst.update(dtrain, i, obj)
File "/usr/local/lib/python3.6/dist-packages/xgboost/core.py", line 1109, in update
dtrain.handle))
File "/usr/local/lib/python3.6/dist-packages/xgboost/core.py", line 176, in _check_call
raise XGBoostError(py_str(_LIB.XGBGetLastError()))
xgboost.core.XGBoostError: value 0 for Parameter num_class should be greater equal to 1
```
What we want:
```
error: invalid argument: num_class of xgboost.gbtree should be greater or equal to 1
```
username_0: Came from @llxxxll
## Column Check Error
``` sql
SELECT * from fake.train TO TRAIN xgboost.gbtree
WITH objective=multi:softmax
INTO fake.xgb;
```
XGBoost's feature types should be numerical. If some column of `fake.train` has string type, e.g. `cat`, `dog`, we should tell users that it does not work on string feature column types.
What we have:
TBD
What we want:
``` text
column type error: data type of input feature column `f` should be numerical for xgboost.gbtree estimator.
```
username_2: @username_0 and @username_1, thank you so much for summarizing these error reports. We definitely need to do something to make this better. Here are my few cents.
### Error Messaging in Go Code
In my mind, I have the following error handling schema in Go code.
1. No `panic` and `recover` unless justifiable; instead,
1. make every function return an error.
Given these two points, we can have error handling like shown in the following code snippet.
```go
func a_func(params Something) (SomethingElse, error) {
	if e := do_some_thing(); e != nil {
		return nil, fmt.Errorf("a_func: wanted to do ... but cannot do some thing:\n%v", e)
	}
	return result, nil
}
func grpc_server_Run() (Response, error) {
	r, e := a_func(params)
	if e != nil {
		return nil, fmt.Errorf("grpc_server_Run: wanted to ... but ...:\n%v", e)
	}
	return r, nil
}
func grpc_client_main() {
	r, e := stub.Call(a_func)
	if e != nil {
		log.Error(e)
	}
	_ = r // use the response
}
``` |
beyondallrepair/gobinance | 738338238 | Title: Implement Order Book Market Data Endpoint
Question:
username_0: Add an implementation of the Order Book market data endpoint to the `Client` in the `gobinance` package.
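A rough sketch of the shape this could take (the `OrderBook` method and type below are placeholders, not existing gobinance API, and a real implementation would presumably reuse whatever shared request plumbing `Client` already has instead of `http.DefaultClient`):
```go
package gobinance

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// OrderBook mirrors the /api/v3/depth response documented by Binance.
type OrderBook struct {
	LastUpdateID int64      `json:"lastUpdateId"`
	Bids         [][]string `json:"bids"` // each entry is [price, quantity]
	Asks         [][]string `json:"asks"`
}

// OrderBook fetches the order book for symbol with up to limit levels per side.
func (c *Client) OrderBook(ctx context.Context, symbol string, limit int) (*OrderBook, error) {
	q := url.Values{}
	q.Set("symbol", symbol)
	q.Set("limit", fmt.Sprint(limit))

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"https://api.binance.com/api/v3/depth?"+q.Encode(), nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("order book: unexpected status %s", resp.Status)
	}

	var out OrderBook
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return &out, nil
}
```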
Documentation is available here: https://github.com/binance-exchange/binance-official-api-docs/blob/master/rest-api.md#order-book |
cryptocoinjs/secp256k1-node | 351096655 | Title: Why the msg to be signed is limited to 32 bytes ?
Question:
username_0: The message to be signed should be able to be any string; why is it limited to a 32-byte string here?
Answers:
username_1: Because signing is a calculation with big numbers, and these numbers are limited to 32 bytes. If you want to sign a string of arbitrary length, you should calculate a hash first and then sign the hash.
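For example (Node.js; this sketch assumes the v4-style API names, while older releases expose `sign`/`verify` instead of `ecdsaSign`/`ecdsaVerify`):
```js
const { createHash, randomBytes } = require('crypto')
const secp256k1 = require('secp256k1')

// any valid 32-byte private key
let privateKey
do {
  privateKey = randomBytes(32)
} while (!secp256k1.privateKeyVerify(privateKey))

// hash the arbitrary-length message down to exactly 32 bytes, then sign the digest
const msgHash = createHash('sha256').update('a message of any length').digest()
const { signature } = secp256k1.ecdsaSign(msgHash, privateKey)

const publicKey = secp256k1.publicKeyCreate(privateKey)
console.log(secp256k1.ecdsaVerify(signature, msgHash, publicKey)) // true
```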
Status: Issue closed
|
nidhaloff/igel | 919769029 | Title: re-write the cli using click (or maybe typer?)
Question:
username_0: ### Description
I'm the creator and only maintainer of the project at the moment. I'm working on adding new features, and thus I would like to leave this issue open for newcomers who want to contribute to the project.
Basically, I wrote the cli using [argparse](https://docs.python.org/3/library/argparse.html) since it is already part of the standard library. However, I'm starting to rethink this choice because it has some issues that the [click](https://click.palletsprojects.com/en/8.0.x/why/#why-not-argparse) library has already overcome.
With that said, it would be great to re-write the cli in click or even in [typer](https://github.com/tiangolo/typer), which also uses click under the hood but adds more features.
If someone wants to work on this, please feel free to start directly, you don't need to ask for permission.
_PS: Feel free to suggest other libraries. I just suggested click since I'm familiar with it_
**I hope not, but if this issue stays open for a long time, then I will start working on it myself.**
Answers:
username_1: Hi. Adding my 2 cents: maybe https://github.com/google/python-fire can help speed up the process :)
username_2: I use [docopt](http://docopt.org/) for most of my projects now as click is usually overkill for what I need.
Based on what I see I think click is probably going to be your best option for your project as it has great modularity.
username_0: @username_1 @username_2 I'm not familiar with the two packages but thanks for the suggestion.
Yes I think click is a good choice for igel. Maybe typer would be even better but I'm not sure. I must check it out but it is also based on click and has similar syntax so that's the advantage
username_0: @username_2 At the moment there are no tests for the cli. I implemented the first simple version using argparse, which had limitations and this is exactly one of the reasons why I want to switch to click.
If you want to give it a try, I would happily review and eventually update your PR. You don't have to be familiar with ML, the cli code has nothing to do with ML, it's just python.
username_3: Hi @username_0, I am new to open source and would like to start working on it. I would try to replace argparse with click in cli.py
username_4: @username_0 Could you please assign the issue to me? I have begun working on rewriting the code.
username_5: I would try to replace argparse with click in cli.py
username_5: @username_0 can you please share what issues you faced with argparse and why we need to rewrite it in click? Maybe I can fix that in argparse itself.
username_0: @username_4 I don't have to assign you the issue. Just work on it and make a PR. Many people told me in the past that they started working on it, but I still did not receive any PR from them, so I will not assign the issue to anyone. Just make a PR when you are done.
@username_5 there are no issues with argparse. Click is just more convenient to use, and the code would be cleaner. I also like the click group feature, where you can group many subcommands under one parent command: https://click.palletsprojects.com/en/1.x-maintenance/api.html#click.option
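For example, something along these lines (just a sketch; the commands and options here are made up for illustration, not igel's final CLI):
```python
import click


@click.group()
def cli():
    """igel command line interface."""


@cli.command()
@click.option("--data_path", "-dp", required=True, help="path to the training dataset")
@click.option("--yaml_path", "-yml", required=True, help="path to the igel yaml file")
def fit(data_path, yaml_path):
    """Fit/train a machine learning model."""
    click.echo(f"training on {data_path} with config {yaml_path}")


@cli.command()
@click.option("--data_path", "-dp", required=True, help="path to the evaluation dataset")
def evaluate(data_path):
    """Evaluate a pre-fitted model."""
    click.echo(f"evaluating on {data_path}")


if __name__ == "__main__":
    cli()
```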
If you have something in mind that you can improve using argparse, then create an issue for it and make a PR ;)
username_6: Hi @username_0, I have written something for command line arguments using click. I am still not sure how to integrate it and would really appreciate some suggestions. You can access the code from here: https://github.com/username_6/igel/blob/master/igel/click_cli.py.
annict/annict | 337272763 | Title: filter_unwatched in GET /v1/me/programs is not working
Question:
username_0: My account's GGO watched status is `watched` like this:

I called GET /v1/me/programs with `filter_unwatched`
```
{
"filter_started_at_lt": "2018/07/01 07:59",
"filter_unwatched": true,
"filter_work_ids": 5553,
"page": 1,
"sort_started_at": "desc",
}
```
Then I got following response:
```
{
"next_page": null,
"prev_page": null,
"programs": Array [
Object {
"channel": Object {
"id": 19,
"name": "<NAME>",
},
"episode": Object {
"id": 101214,
"number": "12",
"number_text": "#12",
"record_comments_count": 7,
"records_count": 56,
"sort_number": 130,
"title": "拍手",
},
"id": 88529,
"is_rebroadcast": false,
"started_at": "2018-06-30T15:00:00.000Z",
"work": Object {
"episodes_count": 12,
"id": 5553,
"images": Object {
"facebook": Object {
"og_image_url": "http://gungale-online.net/assets/images/og_image.png",
},
"recommended_url": "http://gungale-online.net/assets/images/og_image.png",
"twitter": Object {
"bigger_avatar_url": "https://twitter.com/ggo_anime/profile_image?size=bigger",
"image_url": "",
"mini_avatar_url": "https://twitter.com/ggo_anime/profile_image?size=mini",
"normal_avatar_url": "https://twitter.com/ggo_anime/profile_image?size=normal",
"original_avatar_url": "https://twitter.com/ggo_anime/profile_image?size=original",
},
},
"mal_anime_id": "36475",
"media": "tv",
"media_text": "TV",
"no_episodes": false,
"official_site_url": "http://gungale-online.net/",
"released_on": "",
[Truncated]
"season_name_text": "2018年春",
"title": "ソードアート・オンライン オルタナティブ ガンゲイル・オンライン",
"title_kana": "そーどあーとおんらいんおるたなてぃぶがんげいるおんらいん",
"twitter_hashtag": "GGO_anime",
"twitter_username": "ggo_anime",
"watchers_count": 1032,
"wikipedia_url": "https://ja.wikipedia.org/wiki/%E3%82%BD%E3%83%BC%E3%83%89%E3%82%A2%E3%83%BC%E3%83%88%E3%83%BB%E3%82%AA%E3%83%B3%E3%83%A9%E3%82%A4%E3%83%B3_%E3%82%AA%E3%83%AB%E3%82%BF%E3%83%8A%E3%83%86%E3%82%A3%E3%83%96_%E3%82%AC%E3%83%B3%E3%82%B2%E3%82%A4%E3%83%AB%E3%83%BB%E3%82%AA%E3%83%B3%E3%83%A9%E3%82%A4%E3%83%B3",
},
},
],
"total_count": 2,
}
```
API should not include `#11 イカれたレン`.
The other episodes look ok.

Why does the API include this episode with `filter_unwatched`, and why are the other episodes not included (which is correct)?
Answers:
username_0: @username_1 I reported about strange `GET /v1/me/programs` API response. plz check it out :)
username_1: Thank you for letting me know! I've looked into the problem but I can't tell what's wrong right away... this is so weird. 😇 I'll continue to investigate.
Thanks!
username_1: @username_0 I've checked this issue again but the episode seems to be hidden currently. Did you do anything? If not, the cache might have expired.
If you encounter this issue again, please reopen this issue and let me know. I'll check cache data then. 🙇
Thanks!
Status: Issue closed
username_0: I got it.
So `POST /v1/me/records` does not affect the `GET /v1/me/programs` result until the cache expires?
I called `POST /v1/me/records` with another episode, and then `GET /v1/me/programs` with `filter_unwatched` still returned the episode.
300brand/ocular8 | 79481887 | Title: Apply Categories to Uncategorized Pubs
Question:
username_0: Single-use script needed to walk through every `pub` without categories and pull most recent `article`.
Process raw source (XML) and extract categories to apply back to `pub`
Status: Issue closed
Answers:
username_0: Previous category-less pub count: 41,348
Current category-less pub count: 18,781 |
wilg/headlines | 1019498363 | Title: Site seems to be broken
Question:
username_0: Hi, I've noticed for the past few weeks that this site doesn't seem to be working properly. There's no pressure; I know you're probably busy (I am too, with college), but I wanted to let you know.
Answers:
username_1: Looks like we blew the database limit, fixing now, should be up in a few hours.
username_1: This may more or less be fixed |
sul-dlss/sul-requests | 137727546 | Title: Add paging schedule for Rumsey
Question:
username_0: When paging to RUMSEYMAP before 10:00am the item will arrive after 11:00pm 1 day later
When paging to RUMSEYMAP after 10:00am the item will arrive after 11:00pm 2 days later
Answers:
username_0: done.

Status: Issue closed
|
dart-lang/ffigen | 873623268 | Title: Add `ignore_for_file` comments to head to avoid messages by analyzer
Question:
username_0: The analyzer complains about `unused_field` and `unused_element` for my generated file. One might configure ffigen to only output definitions that are really used, but I guess most people will just use the generated code as-is. Therefore the generated file could contain statements that disable these lints for the specific file. This can be done by including comments at the top of the file:
```dart
// ignore_for_file: unused_field
// ignore_for_file: unused_element
```
It is possible to configure a `preamble`, so my current workaround is to include this configuration:
```yaml
preamble: |
// ignore_for_file: unused_field
// ignore_for_file: unused_element
```
Answers:
username_1: The preamble was really meant for this purpose (Adding lint ignores, license headers, etc).
I don't think we should handle all possible lint warnings, since that would probably require `analyzer` as a dependency.
We can probably add a default value for preamble -
```
// ignore_for_file: unused_field
// ignore_for_file: unused_element
```
so users can override the preamble if they'd like.
cc @username_2
username_2: Yes, the preamble is meant for this.
I prefer to not add a default:
1. It would create weird behavior when people add their own preamble and it does not contain it.
2. Some people might not create a preamble and use the lints for seeing whether they include the right list of symbols.
Status: Issue closed
username_0: Sure, that makes sense - `preamble` works fine. I wonder if we want to update the documentation. The default value currently references the header that is always added to the file anyway. Maybe `ignore_for_file` would be a better example? Besides that, I think we can close this ticket.
sass/sass | 67739367 | Title: sass and sass-convert output mixed-precision numbers
Question:
username_0: I don't know if I should expect 2.00px to be unchanged or changed to 2px, but the result should be consistent:
```bash
$ echo 'a { b: 2.00px; c: 2.00px }' > x.scss
$ sass x.scss
a {
b: 2.00px;
c: 2px; }
$ sass-convert x.scss
a
b: 2.00px
c: 2px
$ sass-convert --to scss x.scss
a {
b: 2.00px;
c: 2px;
}
```
Status: Issue closed
Answers:
username_1: This is caused by a shortcut Sass takes wherein it doesn't parse properties that it can easily prove are completely static. `b` triggers this, and so comes through exactly as written; `c` does not, since it doesn't have a trailing semicolon, so it's parsed and re-emitted which causes the format to change. |
SWThurlow/mars-mission-gamejam | 943327668 | Title: Win/lose logic and print corresponding messages to screen.
Answers:
username_1: I have added win/lose logic to the change-modals branch, so it should be able to be merged once reviewed, even if other changes aren't merged as well.
username_1: Simple win/lose logic has been added by merging the change-modals branch.
Status: Issue closed
|
ferro/ferro | 23427299 | Title: Backspace causes back navigation in current tab
Question:
username_0: Open ferro while a tab with any history is in the foreground, type in anything, hit backspace. In Windows 7 SP1 x64, Chrome will navigate backwards in the foreground tab history. It does not have this behavior in Chrome under Ubuntu, so this seems like a Windows-specific problem.
Google Chrome 31.0.1650.57 (Official Build 235101) m
OS Windows
User Agent Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36
Answers:
username_1: I've noticed that if I'm careful to only hit the backspace ONCE (which deletes ALL the typed text), it doesn't exhibit this behavior. |