repo_name (string, 4-136 chars) | issue_id (string, 5-10 chars) | text (string, 37-4.84M chars)
michaelcheng1991/Golf-Training-Aid-
110506344
Title: LSM9DS1 use without library Question: username_0: https://github.com/kriswiner/LSM9DS1/blob/master/LSM9DS1_MS5611_BasicAHRS_t3.ino Answers: username_0: This is the library you used for the LSM9DS0; it works: https://www.nordevx.com/content/lsm9ds1-9-dof-accelerometer-magnetometer-and-gyro
chatid/iframe-transport
152702337
Title: Remove debounce Question: username_0: Broadcasts should be 'reliable', i.e. you should receive all broadcasts in order. The `debounce` makes it so that broadcasts may be missed, which only works when the broadcast data is the complete state (rather than incremental changes).
MicrosoftDocs/azure-docs
462937315
Title: POST Closest Point No Longer Working? Question: username_0: ## Creating an issue We prefer that you create documentation feedback issues using the Feedback link on the published article - the feedback control on the doc page creates an issue that contains all the article details so you can focus on the feedback part. * **I don't see a "Give Feedback" area in the POST Closest Point docs page.** You can also create a feedback issue here in the repo. If you do this, please make sure your issue lists: - [X] The relevant Azure service or technology. - Azure Maps API: POST Closest Point - [X] A link to the published documentation article that you have feedback about. - [Link to Example](https://docs.microsoft.com/en-us/rest/api/maps/spatial/postclosestpoint#examples) - [X] Clear, specific feedback that the author can act on. - **The Example POST body no longer works**. - **Steps to reproduce:** - Click "Try It" - `format` : `json` - `api-version` : `1` - `lat` : `47` - `lon` : `-122` - `subscription-key` : `<my Azure Maps API key>` - **Header:** `Content-Type` : `application/json` - **Request Body:** (straight from the example) ``` { "FeatureCollection": { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": { "geometryId": 1001 }, "geometry": { "type": "Point", "coordinates": [ -105.02860293715861, 40.516153406773952 ] } }, { "type": "Feature", "properties": { "geometryId": 1002 }, "geometry": { "type": "Point", "coordinates": [ -105.02860381672178, 40.515990990037309 ] } }, { "type": "Feature", "properties": { "geometryId": 1003 }, "geometry": { [Truncated] Copy content-length: 428 content-type: application/json; charset=utf-8 date: Mon, 01 Jul 2019 22:21:44 GMT strict-transport-security: max-age=31536000; includeSubDomains x-content-type-options: nosniff x-correlation-id: fc81cd35-a61d-470a-99e1-f3217a6320ec x-ms-azuremaps-region: West US 2 Body { "error": { "code": "UserData", "message": "The property 'type' MUST be defined with a valid value of either point, linestring, polygon, multipoint, multilinestring, multipolygon, geometrycollection, feature, featurecollection.; The value '' is not valid for the property 'type'. The valid GeoJSON types include point, linestring, polygon, multipoint, multilinestring, multipolygon, geometrycollection, feature, featurecollection." } } ``` My production runs which are based on this example no longer work either. **Has something changed?** Thank you Answers: username_1: Hi @username_0 Thank you for your feedback! Since this is a channel for driving improvements towards MS Docs, could you reference the URL of a specific documentation that you were following? That way, we're able to connect you with the right team that can assist you better :) username_0: Hi there @username_1 , its right there in the OP. Second bullet. username_0: After troubleshooting a while, I believe I figured out the issue: - The documentation should NOT have... ``` { "FeatureCollection": ``` ... on the front end of the POST body. The final `}` should also be removed to produce valid JSON. username_0: I want a pony and a raise Status: Issue closed username_0: PS. Here is a geojson linter to help out in the future: http://geojsonlint.com/ username_0: PPS, I had Microsoft Flows running based on the Azure Maps Docs so either: 1. The API Schema changed and the docs didn't keep up OR 2. The docs changed Since there is no edit date [on the docs](https://docs.microsoft.com/en-us/rest/api/maps/spatial/postclosestpoint), I can't tell which is which.
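For reference, applying username_0's fix to the example body above yields valid GeoJSON shaped like this (trimmed to a single feature; the remaining features follow the same pattern):
```
{
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {
                "geometryId": 1001
            },
            "geometry": {
                "type": "Point",
                "coordinates": [
                    -105.02860293715861,
                    40.516153406773952
                ]
            }
        }
    ]
}
```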
Decathlon/ara
440051130
Title: Add templates to this repository Question: username_0: # Add templates to this repository ## Context Currently, everyone can write an issue with as much information as this person put in it. ## Expected Add issue templates for Feature Requests and for Bugs to help contributors know what kind of information is expected. Status: Issue closed Answers: username_0: Implemented by #149
google/gapid
333481113
Title: Capture of graphics trace fails as app starts up Question: username_0: I have a Kindle 7th gen device with debugging correctly configured in dev options, I was able to use the [...] button to find the main activity for Package/Action as such: android.intent.action.MAIN:arrow.agjunction.com.arrow/arrow.agjunction.com.arrow.ui.main.MainActivity My main activity uses an GLSurfaceView, which I call to native GL to render from GLSurfaceView.Renderer interface implementation only. What I see is the capture start, claims to begin capturing for a few messages (screen is completely white), then just says EOF and doesn't go further. At that point, the app on the device just closes itself. Has anyone seen this issue? Log below: Press enter to stop capturing... 00:32:31.760 I: [Try ABI: armeabi-v7a⇒gapidapk.EnsureInstalled] <gapit> Examining gapid.apk on host... 00:32:31.764 I: [Try ABI: armeabi-v7a⇒gapidapk.EnsureInstalled] <gapit> Looking for gapid.apk... 00:32:32.212 I: [Try ABI: armeabi-v7a⇒gapidapk.EnsureInstalled] <gapit> Found gapid package... 00:32:32.212 I: [startDevInfoService] <gapit> Attempt to start service: com.google.android.gapid.DeviceInfoService 00:32:32.860 I: <gapit> Adding new device 00:32:32.860 I: <gapit> Device list: 00:32:32.860 I: <gapit> G0W0MA077444F035 00:32:33.736 I: <gapit> Package is debuggable 00:32:33.782 I: [start] <gapit> Turning device screen on 00:32:33.806 I: [start] <gapit> Checking for lockscreen 00:32:33.830 I: [start] <gapit> Checking gapid.apk is installed 00:32:33.830 I: [start] <gapit> Forwarding 00:32:33.832 I: [start] <gapit> Starting activity in debug mode 00:32:34.781 I: [start] <gapit> Forwarding TCP port 39894 -> JDWP pid 11025 00:32:34.783 I: [start] <gapit> Connecting to JDWP 00:32:34.786 I: [start] <gapit> Waiting for ApplicationLoaders.getClassLoader() 00:32:34.788 W: [start] <gapit> Couldn't break in ApplicationLoaders.getClassLoader. Vulkan will not be supported. 00:32:34.788 I: [start] <gapit> Waiting for Application Creation 00:32:34.789 I: [start] <gapit> Waiting for Application.<init>() 00:32:38.103 I: [start] <gapit> Waiting for arrow.agjunction.com.arrow.ArrowApplication.onCreate() 00:32:38.477 I: [start] <gapit> Installing interceptor libraries 00:32:39.100 I: [start] <gapit> GVR library not found 00:32:39.169 I: [start] <gapit> Waiting for connection to localhost:37312... 00:32:41.110 W: [start] <gapit> Failed to read packet. Error: read tcp 127.0.0.1:38444->127.0.0.1:39894: use of closed network connection 00:32:41.112 I: <gapit> Creating file '/home/adowdy/arrow_20180618_1731.gfxtrace' 00:32:41.612 I: <gapit> Capturing: 16B in 0s 00:32:42.612 I: <gapit> Capturing: 16B in 1s 00:32:43.613 I: <gapit> Capturing: 16B in 2s 00:32:44.613 I: <gapit> Capturing: 16B in 3s 00:32:44.978 I: <gapit> EOF: 16B Answers: username_1: Thanks for the report. We try to support most devices, but sometimes drivers will do things that are unexpected. W/GAPID (12547): [gapii/cc/android/installer.cpp:120] Interceptor error: Intercepting function at 0xb6d3af64 failed: End of function reached after 4 byte when rewriting 8 bytes @ben-clayton This looks like our old friend PLT again. username_0: No problem. Really great tool you guys provide here, I was able to use it on other devices :) username_1: A bunch of PLT fixes went in for 1.4.0. This should be fixed now. Status: Issue closed
oreporan/wePlayMin
104132423
Title: refactor to be MVC Question: username_0: Move all the schemas to the /models directory and put all DB things in the model Answers: username_0: @username_1 - I have finished doing this with User, please do the same with League, it's very easy, all one-liners username_1: Done for leagues, need to push before closing this issue Status: Issue closed username_1: All functions were added to leagues, need to write tests
abclinuxu/abclinuxu
880656254
Title: Abc is not sending emails (Bugzilla Bug 1001) Question: username_0: This issue was created automatically with bugzilla2github # Bugzilla Bug 1001 Date: 2008-04-14 16:58:04 -0400 From: <NAME> &lt;<<EMAIL>>&gt; To: <NAME> &lt;<<EMAIL>>&gt; Last updated: 2008-10-08 19:21:01 -0400 ## Comment 3208 Date: 2008-04-14 16:58:04 -0400 From: <NAME> &lt;<<EMAIL>>&gt; Automatic emails (the weekly digest, notifications to admins about new messages, info about deleted news items) are not being sent. ## Comment 3215 Date: 2008-04-14 23:28:32 -0400 From: danoh &lt;<<EMAIL>>&gt; Well, it should be resolved now. ## Comment 3216 Date: 2008-04-15 07:52:53 -0400 From: <NAME> &lt;<<EMAIL>>&gt; Is there any chance that the emails which previously failed to go out will still be sent retroactively? ## Comment 3808 Date: 2008-10-08 19:21:01 -0400 From: <NAME> &lt;<<EMAIL>>&gt; No, there isn't; closing.<issue_closed> Status: Issue closed
prydin/vrops-import-hostprops
754569348
Title: Error when importing alert.xml Question: username_0: The XML file is incorrect at line 2, column 15. cvc-elt.1.a: Cannot find the declaration of element 'alertContent'. I get this error when I try to import alert.xml into vROps under Dashboard -> Views. We are using Version: 8.1.0 (16202959), Edition: Advanced. Is it because this hasn't been updated in 3 years? Should I create the new view manually? If so, could someone please walk me through the different parameters, since it's confusing. When creating a new view, there are 5 sections: Name, Presentation, Subjects, Data, Visibility. I'm confused about Subjects and Data. I think Presentation should be list view.
PaddleHQ/Mac-Framework-V4
357272618
Title: The framework will not codesign correctly when implementing Question: username_0: ![screen shot 2018-09-05 at 8 01 35 am](https://user-images.githubusercontent.com/793774/45102176-ede2e400-b0e1-11e8-9d52-e3f413b7feb6.png) When I went through and manually replaced the folders with the proper symlinks the Framework codesigned correctly in Xcode. Answers: username_1: I have the same issue. username_2: Thanks @username_0 & @username_1 - we'll look into this. username_3: @username_0 @username_1 This has been fixed now. Could you please try again? username_1: @username_3 Seems to be fixed on my end! 👍 Status: Issue closed
rythmengine/rythmengine
160048475
Title: How to debug build Question: username_0: When trying to debug issues I run into the situation that there is a non-debuggable build call that hides how the template is rendered. In TemplateBase.java the method __internalBuild calls build(), which is implemented by a class created at runtime. How can we get this into a design for testability / design for debuggability? ``` /** * Not to be used in user application or template */ protected void __internalBuild() { w_ = null; // reset output destination try { long l = 0l; if (__logTime()) { l = System.currentTimeMillis(); } final String code = secureCode; Sandbox.enterRestrictedZone(code); try { __internalInit(); build(); } finally { __finally(); Sandbox.leaveCurZone(code); } if (__logTime()) { __logger.debug("<<<<<<<<<<<< [%s] build: %sms", getClass().getName(), System.currentTimeMillis() - l); } } catch (RythmException e) { throw e; } catch (Throwable e) { handleThrowable(e); } if (null != w_) { try { IO.writeContent(toString(), w_); w_ = null; } catch (Exception e) { Logger.error(e, "failed to write template content to output destination"); } } } ``` Answers: username_1: My approach is to have the IDE set up the project to include a special source code dir: the rythmengine's HOME_TMP dir. Then load the generated Java source code into the IDE and set breakpoints inside it.
hamika/ocs
249858675
Title: Practice Exercises vol. 3 Question: username_0: ## Create a Car class Let's create a Car class that has the following three properties (that is, methods): - maker - weight - color ## Create a constructor for the Car class ```rb car = Car.new('toyota', 5000, 'red') ``` Let's write a constructor so that the class can be instantiated with new as shown above. ## Working with many Car instances ```rb cars = [ Car.new('toyota', 5000, 'red'), Car.new('honda', 4560, 'blue'), Car.new('toyota', 3200, 'green') ] ``` Using the cars above, let's work through the following tasks: - [ ] Calculate the total weight and the average weight of all the cars - [ ] Work out which maker appears most often Answers: username_1: @username_0 Solved problem 1! => e6149bf I wrote it so that the total and the average can still be computed even if the contents of `cars` grow. username_1: @username_0 Solved problem 2. => 542a6e6 Was it fine to simply call puts from inside the block? username_1: @username_0 Solved problem 3. => 7b432be I used `.gsub`; if that is cheating I'll redo it. username_1: @username_0 Solved problem 4 as well. => e6fe2ac Please review it when you have a spare moment. username_1: @username_0 I can't come up with a clean way to handle Q3 and Q4. Could you give me a hint? Thank you in advance. username_0: @username_1 I have left comments; please take a look. username_1: @username_0 This has nothing to do with this exercise, but I wanted to try building [hundred-square calculation](https://ja.wikipedia.org/wiki/%E7%99%BE%E3%81%BE%E3%81%99%E8%A8%88%E7%AE%97) using arrays, so I made it in a file called `handred.rb`. => 350f2fc There are various places I'd like to fix, such as making the generated random numbers unique, but it runs for now, so I'm pushing it up. username_1: @username_0 It took me a while, but using `.sort` I finally got the records sorted and running in ascending <=> descending order. I searched with all sorts of keywords, and in the end the official documentation felt like the most useful reference. [instance method Hash#sort](https://docs.ruby-lang.org/ja/latest/method/Hash/i/sort.html) I increased the number of records and ran it, so I believe it's implemented correctly. Please review when you have a spare moment. Also, about `min, max` returning the wrong values: when used on a hash they return an array sorted by key, so the comparison has to be written explicitly in a block; I saw this in the book I'm currently reading and tried it out. Is this understanding correct? ```ruby puts "#{ company_to_count }" #=> {"toyota"=>5, "honda"=>3, "mazda"=>2, "suzuki"=>2, "subaru"=>3, "nissan"=>1} puts "#{ company_to_count.max { |a, b| a[1] <=> b[1] } }\t#{ company_to_count.min { |a, b| a[1] <=> b[1] } }" #=> ["toyota", 5] ["nissan", 1] puts "#{ company_to_count.max }\t#{ company_to_count.min }" #=> ["toyota", 5] ["honda", 3] # this one displays the keys in alphabetical order ``` username_0: You mean the earlier answer, right? Back then you computed the maximum through keys and values, as in `tmp.keys.max` and `tmp.values.max`. It's worth understanding what those mean: what value did each of them actually compute? Your understanding of why the block is used is correct. Here the block is used so that the programmer can specify the sort criterion themselves (blocks are used with various meanings; you use them with each and so on too. Be careful: their meaning changes depending on which method they're used with). Status: Issue closed
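For reference, a minimal Ruby sketch of one way to solve the two checklist tasks above (only the constructor shape is given in the exercise; the `attr_reader` names are illustrative):
```rb
class Car
  attr_reader :maker, :weight, :color

  def initialize(maker, weight, color)
    @maker = maker
    @weight = weight
    @color = color
  end
end

cars = [
  Car.new('toyota', 5000, 'red'),
  Car.new('honda', 4560, 'blue'),
  Car.new('toyota', 3200, 'green')
]

total = cars.sum(&:weight)       # total weight of all cars
average = total.to_f / cars.size # average weight

# group the cars by maker and pick the group with the most entries
top_maker = cars
  .group_by(&:maker)
  .max_by { |_maker, group| group.size }
  .first

puts "total: #{total}, average: #{average}, most common maker: #{top_maker}"
```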
ITJagraj/u-develop-it
895416150
Title: Create a database that contains the candidates table Question: username_0: * As a user, I can request a list of all potential candidates. * As a user, I can request a single candidate's information. * As a user, I want to delete a candidate. * As a user, I want to create a candidate.<issue_closed> Status: Issue closed
CocoaLumberjack/CocoaLumberjack
166373736
Title: CocoaLumberjack in Swift with ASL Question: username_0: Hi: As you know, in Swift the `print()` API will not put the log into the Apple System Log (ASL), and CocoaLumberjack works with ASL. How does ASL work in Swift? In other words, how does CocoaLumberjack get the logs in Swift, where ASL is unavailable? Answers: username_1: The Swift code is a wrapper around Objective-C, where ASL is available. I don't know of any place in the Swift code in the framework that uses `print()` Status: Issue closed
prestodb/presto
192955773
Title: Investigate why tasks are in "canceled" state for failed queries Question: username_0: The problem is likely here https://github.com/prestodb/presto/blob/master/presto-main/src/main/java/com/facebook/presto/execution/scheduler/SqlQueryScheduler.java#L278. I think this should be: ```java stage.addStateChangeListener(newState -> { if (newState.isDone()) { if (newState == ABORTED || newState == FAILED) { // a failed or aborted stage should abort its children, // so their tasks are not left in the "canceled" state childStages.stream().forEach(SqlStageExecution::abort); } else { childStages.stream().forEach(SqlStageExecution::cancel); } } }); ``` These types of changes must be carefully tested since any mistake can cause correctness bugs or distributed memory leaks.
bobthecow/psysh
403407656
Title: Use declarations are ignored for built-in commands Question: username_0: Answers: username_1: Thank you for reporting this! It's fixed in `develop` and will go out with the next release. <img width="762" alt="screen shot 2019-01-26 at 2 33 10 pm" src="https://user-images.githubusercontent.com/53660/51794121-edc2ae00-2180-11e9-9c53-7c90b38e1d40.png"> Status: Issue closed
spring-projects/spring-data-mongodb
776530585
Title: Make Spring Data MongoDB work with java.time types out-of-the-box [DATAMONGO-2458] Question: username_0: **[<NAME>](https://jira.spring.io/secure/ViewProfile.jspa?name=jesperdj)** opened **[DATAMONGO-2458](https://jira.spring.io/browse/DATAMONGO-2458?redirect=false)** and commented Spring Data MongoDB does not support `java.time` types out-of-the-box. When you use these types in your entities, for example `java.time.ZonedDateTime`, without manually configuring converters, you will get an exception because Spring Data MongoDB does not know how to convert `ZonedDateTime` to a value that can be stored in the database. One way to make it work is to write some converters and register them using `MongoCustomConversions`. The `java.time` API has been available for quite some time now, and the current version of Spring requires Java 8, which includes the `java.time` API - so Spring can safely assume that this API is available. It would make sense to support the `java.time` API out-of-the-box, without the need to write your own custom converters. Example converters: Converts `ZonedDateTime` to `java.util.Date` for writing to the database: ``` import org.springframework.core.convert.converter.Converter; import java.time.ZonedDateTime; import java.util.Date; public class ZonedDateTimeToDateConverter implements Converter<ZonedDateTime, Date> { @Override public Date convert(ZonedDateTime zonedDateTime) { return Date.from(zonedDateTime.toInstant()); } } ``` Converts `java.util.Date` to `ZonedDateTime` for reading from the database: ``` import org.springframework.core.convert.converter.Converter; import java.time.ZoneId; import java.time.ZonedDateTime; import java.util.Date; public class DateToZonedDateTimeConverter implements Converter<Date, ZonedDateTime> { @Override public ZonedDateTime convert(Date date) { return date.toInstant().atZone(ZoneId.systemDefault()); } } ``` To register these converters: ``` @Bean public MongoCustomConversions mongoCustomConversions() { return new MongoCustomConversions(Arrays.asList(new DateToZonedDateTimeConverter(), new ZonedDateTimeToDateConverter())); } ``` --- **Affects:** 2.2.4 (Moore SR4) **Issue Links:** - [DATAMONGO-2400](https://jira.spring.io/browse/DATAMONGO-2400) Read/write converters not working Answers: username_0: If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed. Status: Issue closed username_0: Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
Rapptz/discord.py
598363116
Title: voice_client getting in bad state with non-normal ConnectionClosed codes Question: username_0: **Note**: I added a TLDR note at the bottom which kindof sums up my findings. If you don't have time to look at all the info I collected please at least take a look at that. Versions python 3.6.8 discord.py 1.3.3 Ubuntu 18.04.03 LTS I've had some issues in the past with my bot getting into a bad state for voice channels and not being able to get out. Recently I've decided to dig in and try to figure out what's going on. The big problem with this is that it is a very inconsistent problem, and I can only debug it when the voice channel gets into a bad state, and I haven't figured out a way to trigger that manually. What I've done is I added a bunch of extra logging to voice_client.py and waited till I see the issue, and then try a couple things and see what happens. After taking a look at the logs, I think I've found a bug with voice_client.py. I'll start by explaining the issue I've been having and then show the logs that show the bug I've found. **Note:** My apologies if I've put too much log info and text in here. I just figure for debugging these kinds of things, more information is always helpful **Note:** As I said, I made some changes to voice_client.py. The main changes were generating a random number and saving it to voice_client.vcid when a new one is created, generating a random number at the beginning of poll_voice_ws. Those were to help keep track of which voice client and loop we're currently in. I also added some functionality that I thought might fix the problem, where if we get a ConnectionClosed that isnt in the acceptable close codes list, we try to do a disconnect before reconnecting. That's what the "Doin a disconnetaroni" is talking about. Note that that fix didn't seem to have an effect on the behavior im seeing. I've attached these voice_client.py changes here as a diff file: [voice_client_logging.diff.txt](https://github.com/username_2/discord.py/files/4465529/voice_client_logging.diff.txt) **Another Note:** This issue is not specific to just one voice channel, it sometimes happens to just one, and sometimes to a bunch, and there doesn't seem to be anything special about the voice channels its happening to. # The Problem The main problem that I'm having that started this investigation, is the voice client gets in a bad state where it thinks it is connected but really isnt. What is happening in voice_client.py is that in poll_voice_wc, we get a ConnectionClosed with a non-normal code (usually 1001 or 1006, but I've seen other codes in the past). What happens is that poll_voice_wc will infinitely loop trying to reconnect, but always getting the same bad code. I know that it isn't a problem with the thing its trying to connect to, because restarting the bot fixes the problem and it is able to connect just fine. For clarity and information, here's what the log looks like when this is happening: <details> <summary>First it encounters the Error, and schedules a reconnect in x seconds:</summary> ``` INFO:discord.voice_client:Bad code 1006 in poll_voice_ws (channel=175384431383543809) (reconnect=True), (id=0.5783516900838339) INFO:discord.voice_client:in poll_voice_ws loop for channel 175384431383543809 (id=0.5783516900838339) (loopid=0.16362357981923203) ERROR:discord.voice_client:Disconnected from voice channel 175384431383543809... Reconnecting in 257.33s. 
Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/websockets/protocol.py", line 528, in transfer_data msg = yield from self.read_message() File "/usr/local/lib/python3.6/dist-packages/websockets/protocol.py", line 580, in read_message frame = yield from self.read_data_frame(max_size=self.max_size) File "/usr/local/lib/python3.6/dist-packages/websockets/protocol.py", line 645, in read_data_frame frame = yield from self.read_frame(max_size) File "/usr/local/lib/python3.6/dist-packages/websockets/protocol.py", line 710, in read_frame extensions=self.extensions, File "/usr/local/lib/python3.6/dist-packages/websockets/framing.py", line 100, in read data = yield from reader(2) File "/usr/lib/python3.6/asyncio/streams.py", line 672, in readexactly raise IncompleteReadError(incomplete, n) asyncio.streams.IncompleteReadError: 0 bytes read on a total of 2 expected bytes The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/discord/gateway.py", line 747, in poll_event msg = await asyncio.wait_for(self.recv(), timeout=30.0) File "/usr/lib/python3.6/asyncio/tasks.py", line 358, in wait_for return fut.result() File "/usr/local/lib/python3.6/dist-packages/websockets/protocol.py", line 350, in recv yield from self.ensure_open() File "/usr/local/lib/python3.6/dist-packages/websockets/protocol.py", line 501, in ensure_open self.close_code, self.close_reason) from self.transfer_data_exc websockets.exceptions.ConnectionClosed: WebSocket connection is closed: code = 1006 (connection closed abnormally [internal]), no reason The above exception was the direct cause of the following exception: Traceback (most recent call last): [Truncated] INFO:discord.voice_client:Disconnecting from voice normally, close code 4014. (channel=175384431383543809), (id=0.5789144089966105), (loopid=0.9866699842304079) INFO:discord.voice_client:Calling disconnect for channel=175384431383543809 (id=0.5789144089966105) INFO:discord.voice_client:The voice handshake is being terminated for Channel ID 175384431383543809 (Guild ID 175384431383543808) INFO:discord.voice_client:The voice client has been removed for Channel ID 175384431383543809 (Guild ID 175384431383543808) INFO:discord.voice_client:Exiting poll_voice_ws for channel=175384431383543809, (id=0.5789144089966105) (loopid=0.9866699842304079) INFO:discord.voice_client:Connecting to voice channel 175384431383543809... INFO:discord.voice_client:Starting voice handshake... ``` </details> The first thing to notice about the above logs is that there are two different ids. These are the randomly generated ids that I added that are created when a voice_client is initialized. One is from the original voice_client, which has been infinitly looping in poll_voice_ws for a while (id=0.6783516900838339), and the other is from the new voice_client, which started up when I called the `?summon` command (id=0.5789144089966105). It appears that there are 2 voice_clients existing for the same server/channel. What seemed to happen as a result of that is that the old voice_client called disconnect, which successfully closed the new client (code 4014 is a anormal close code), and then tried (and failed) to reconnect by itself. This means that once it gets in this bad state, I don't think theres a way for it to get out. # TLDR Based on the above, it looks to me like there are 2 main issues that I've found: 1. 
voice_client.poll_voice_ws can get in a bad state if given non-normal close codes, and has no way to correctly recover. As I've show in the last set of logs, starting up a new voice_client is able to successfully connect, although it does get killed by the first one. That means that it's possible to change the implementation to fix the bad state, but I'm not sure how. 2. There can be 2 instances of voice_client for the same server/guild. This doesn't seems like it is intended. They interfere with each other and can cause problems. Does anyone here know why/how this could happen? Thanks for taking the time to look at this. It has been a big problem with my bot for a while and it's made the voice commands super inconsistent. Since it is hard to reproduce (idk how to manually trigger a bad close code), its super hard to debug. Please let me know if you have any advice of things to try or of extra logging to add. Since it usually starts happening at least once after 4+ hours of the bot running, I can try changes out and see if we can get more information. Thank you! Answers: username_1: I've run into this issue as well. I've found an easy reproduction method, while a VoiceClient is in a channel, change the server region. `2020-08-24 22:40:36,958:DEBUG:protocol.py: client x code = 4000, reason = [no reason]` Either turning does not seem to matter if you pass reconnect=False to the connect method. The bot ends up in a bad state where you cannot disconnect even with force=True. Nor does manually disconnecting the client from the discord ui. The voice_client is never removed, you can confirm by checking the voice_clients list in the main discord.Client. username_0: Good news is i think I've found an actual code change fix for it. I've created a pull request (linked above), which should hopefully solve the main issue of the zombie voice client. username_2: This seems fixed by the VoiceProtocol redesign Status: Issue closed
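As a footnote to username_1's zombie-client observation above, the check can be done with public discord.py APIs; a minimal sketch, assuming a `discord.Client` (or `commands.Bot`) instance named `bot`:
```python
# List every voice client the library still tracks and whether it
# believes it is connected; a "zombie" keeps showing up here even
# after disconnecting from the Discord UI.
for vc in bot.voice_clients:
    print(vc.channel, vc.is_connected())
```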
simplybusiness/Kiln
699204786
Title: [FEAT] Add additional context to log messages Question: username_0: **Is your feature request related to a problem? Please describe.** *As a* Kiln Operator *I want* messages that caused an error to be included in log output *So that* I can debug issues more easily **Describe the solution you'd like** When an error is encountered that is caused by an invalid message (e.g. bytes that can't be parsed as Avro), the message content should be included in the generated log message under a custom field (to avoid conflicting with ECS field names). **Additional context** The data-collector already logs HTTP message bodies, so this only applies to the Report-parser and Slack-connector.
rust-lang/rust
1090047165
Title: SIGTRAP with custom target Question: username_0: Compiler crashes with SIGTRAP (probably somewhere in LLVM) when building no_std crate that uses custom target specification based on built-in `x86_64-unknown-none` with `-sse2` changed to `+sse2`. Simple reproduction: ``` git clone https://github.com/username_0/rust-crash-sigtrap cd rust-crash-sigtrap cargo build ``` <details> <summary>Reproducing without using repo</summary> 1. Start with basic no_std application. For example, see [smallest-no-std](https://docs.rust-embedded.org/embedonomicon/smallest-no-std.html) 2. Then create target specification: ``` rustc +nightly -Z unstable-options --print target-spec-json --target x86_64-unknown-none > x86_64-custom.json sed -i 's/"is-builtin": true/"is-builtin": false/' x86_64-custom.json sed -i 's/-sse2/+sse2/' x86_64-custom.json ``` 3. Build it! ``` cargo build --target x86_64-custom.json -Zbuild-std=core ``` </details> I've expected successful compilation (or any more meaningful error message). Instead, when `compiler_builtins` are compiling, following error occurs: ``` error: could not compile `compiler_builtins` Caused by: process didn't exit successfully: `rustc --crate-name compiler_builtins ~/.cargo/registry/src/github.com-1ecc6299db9ec823/compiler_builtins-0.1.66/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debuginfo=2 --cfg 'feature="compiler-builtins"' --cfg 'feature="core"' --cfg 'feature="default"' --cfg 'feature="mem"' --cfg 'feature="rustc-dep-of-std"' -C metadata=87b7f5e35c5dfb1a -C extra-filename=-87b7f5e35c5dfb1a --out-dir ~/minimal/target/x86_64-custom/debug/deps --target ~/minimal/x86_64-custom.json -Z force-unstable-if-unmarked -L dependency=~/minimal/target/x86_64-custom/debug/deps -L dependency=~/minimal/target/debug/deps --extern core=~/minimal/target/x86_64-custom/debug/deps/librustc_std_workspace_core-65e9596df1fda648.rmeta --cap-lints allow --cfg 'feature="unstable"' --cfg 'feature="mem-unaligned"'` \ (signal: 5, SIGTRAP: trace/breakpoint trap) ``` ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.59.0-nightly (f8abed9ed 2021-12-26) binary: rustc commit-hash: f8abed9ed48bace6be0087bcd44ed534e239b8d8 commit-date: 2021-12-26 host: x86_64-unknown-linux-gnu release: 1.59.0-nightly LLVM version: 13.0.0 ``` <details> <summary>GDB backtrace</summary> ``` #0 0x00007ffff1bfbf78 in ?? 
() from ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/../lib/../lib/libLLVM-13-rust-1.59.0-nightly.so #1 0x00007ffff180c288 in foldCONCAT_VECTORS(llvm::SDLoc const&, llvm::EVT, llvm::ArrayRef<llvm::SDValue>, llvm::SelectionDAG&) [clone .llvm.9555667491127074190] () from ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/../lib/../lib/libLLVM-13-rust-1.59.0-nightly.so [Truncated] #18 0x00007ffff664b3d6 in rustc_codegen_llvm::back::write::codegen () from ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/../lib/librustc_driver-86b6ef79da72f228.so #19 0x00007ffff65e123b in rustc_codegen_ssa::back::write::finish_intra_module_work::<rustc_codegen_llvm::LlvmCodegenBackend> () from ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/../lib/librustc_driver-86b6ef79da72f228.so #20 0x00007ffff65e049b in rustc_codegen_ssa::back::write::execute_work_item::<rustc_codegen_llvm::LlvmCodegenBackend> () from ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/../lib/librustc_driver-86b6ef79da72f228.so #21 0x00007ffff663010f in std::sys_common::backtrace::__rust_begin_short_backtrace::<<rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::ExtraBackendMethods>::spawn_named_thread<rustc_codegen_ssa::back::write::spawn_work<rustc_codegen_llvm::LlvmCodegenBackend>::{closure#0}, ()>::{closure#0}, ()> () from ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/../lib/librustc_driver-86b6ef79da72f228.so #22 0x00007ffff663b7e3 in <<std::thread::Builder>::spawn_unchecked<<rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::ExtraBackendMethods>::spawn_named_thread<rustc_codegen_ssa::back::write::spawn_work<rustc_codegen_llvm::LlvmCodegenBackend>::{closure#0}, ()>::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} () from ~/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/../lib/librustc_driver-86b6ef79da72f228.so #23 0x00007ffff3fa9da3 in alloc::boxed::{impl#44}::call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> () at /rustc/51e8031e14a899477a5e2d78ce461cab31123354/library/alloc/src/boxed.rs:1811 #24 alloc::boxed::{impl#44}::call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> () at /rustc/51e8031e14a899477a5e2d78ce461cab31123354/library/alloc/src/boxed.rs:1811 #25 std::sys::unix::thread::{impl#2}::new::thread_start () at library/std/src/sys/unix/thread.rs:108 #26 0x00007ffff3ea9259 in start_thread () from /usr/lib/libpthread.so.0 #27 0x00007ffff3dc75e3 in clone () from /usr/lib/libc.so.6 ``` </details> @rustbot label +requires-nightly +I-crash Answers: username_0: Addition: disabling soft-float (`+soft-float` -> `-soft-float`) fixes this. username_1: @username_0 can you confirm that the crash does not occur on the latest stable and/or beta? Trying to figure out if this is a regression or a bug that has always existed. username_0: As far as I know, it is not possible to use `-Zbuild-std` on non-nightly channels, so I've tried `1.57.0-nightly (5d2a410ff 2021-09-04)` which is the first 1.57.0 nightly — got same SIGTRAP. username_2: Not all feature combinations are supported by LLVM. `+soft-float,+sse2` in particular has been known to cause problems. We have issue #89586 for showing a warning for known-bad combinations. username_0: Thanks for the comment! I'll close this issue then. Hopefully, it would be helpful for those who will google this problem. Status: Issue closed
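Following the sed pattern from the repro steps, the workaround username_0 describes (leaving soft-float disabled) would be:
```
sed -i 's/+soft-float/-soft-float/' x86_64-custom.json
```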
uestccokey/EZFilter
312942670
Title: Some devices are crashing while start record a portrait imported video Question: username_0: **While testing app in certain devices, some crashes while start recording a portrait oriented video from camera or video saved by EZFilter (whatsapp doesn't crash for... reason?), that's same bug I was having months ago** Code for import: ```java if (requestCode == REQUEST_CODE_CHOOSE && resultCode == RESULT_OK) { final List<String> paths = Matisse.obtainPathResult(data); if (!paths.isEmpty()) { String mimeType = URLConnection.guessContentTypeFromName(paths.get(0)); if(mimeType.startsWith("video")) { mRenderPipeline = EZFilter.input(Uri.parse(paths.get(0))) .setLoop(true) .into(mRenderView); ``` **Error appear when record camera/app videos not in landscape mode:** ```java 04-10 16:05:50.201 30626-30856/com.owner.filtertest E/ACodec: [OMX.Exynos.AVC.Encoder] failed to set input port definition parameters. 04-10 16:05:50.201 30626-30856/com.owner.filtertest E/ACodec: configureCodec multi window instance fail appPid : 30626 04-10 16:05:50.211 30626-30856/com.owner.filtertest E/ACodec: [OMX.Exynos.AVC.Encoder] configureCodec returning error -5001 signalError(omxError 0x80001001, internalError -5001) 04-10 16:05:50.211 30626-30855/com.owner.filtertest E/MediaCodec: Codec reported err 0xffffec77, actionCode 0, while in state 3 04-10 16:05:50.211 30626-30626/com.owner.filtertest E/MediaCodec: configure failed with err 0xffffec77, resetting... 04-10 16:05:50.231 30626-30626/com.owner.filtertest E/AndroidRuntime: FATAL EXCEPTION: main Process: com.owner.filtertest , PID: 30626 android.media.MediaCodec$CodecException: Error 0xffffec77 at android.media.MediaCodec.native_configure(Native Method) at android.media.MediaCodec.configure(MediaCodec.java:1778) at cn.ezandroid.ezfilter.media.record.MediaVideoEncoder.prepare(MediaVideoEncoder.java:60) at cn.ezandroid.ezfilter.media.record.MediaMuxerWrapper.prepare(MediaMuxerWrapper.java:40) at cn.ezandroid.ezfilter.media.record.RecordableEndPointRender.startRecording(RecordableEndPointRender.java:124) at cn.ezandroid.ezfilter.core.RenderPipeline.startRecording(RenderPipeline.java:634) at com.owner.filtertest .CameraActivity.startRecording(CameraActivity.java:690) at com.owner.filtertest .CameraActivity.bridge$lambda$0$CameraActivity(CameraActivity.java) at com.owner.filtertest .CameraActivity$$Lambda$3.run(Unknown Source) at android.os.Handler.handleCallback(Handler.java:739) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:158) at android.app.ActivityThread.main(ActivityThread.java:7230) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120) ``` Answers: username_1: In the develop branch I made a big refactoring, you can test it, in my project, there is no problem with crashes. Some api changes, you can refer to demo code username_0: Finally! I will update to new version after the refactor will include mRenderPipeline.enableRecordAudio, mRenderPipeline.setRecordOutputPath and old utilities. It would be nice if after video rendering is finished it will automatically refresh the output path, to make the video appear in Gallery, or maybe set a rendering listener! I'm having issues in 60% devices with manually refreshing output, I think it's caused by refreshing it before it's full rendered Thanks for all :) username_1: Oh, I'm sorry, I was very busy last week. 
In version 2.0.0, an interface IRecordListener is added to the RecordableRender class, and the onFinish function will be callback after the rendering is completed. username_0: Thank you for the update! 👍 Status: Issue closed
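As a pointer for the gallery-refresh question above, a hypothetical sketch of hooking the new IRecordListener; the registration method name `setRecordListener` and the `context`/`outputPath` variables are assumptions based on the thread, not the library's documented API:
```java
// Hypothetical: how the 2.0.0 IRecordListener might be wired up.
// setRecordListener() is an assumed name, not a confirmed API.
mRenderPipeline.setRecordListener(new IRecordListener() {
    @Override
    public void onFinish() {
        // The file is fully rendered here, so it is now safe to ask
        // the media scanner to make the video visible in the Gallery.
        MediaScannerConnection.scanFile(context,
                new String[]{outputPath}, null, null);
    }
});
```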
driusan/dkim
371413908
Title: Please, add a Readme with some basic setup information Question: username_0: How far I got * golang setup * letsencrypt tls setup * chasquid configured and running now... ```bash go get github.com/username_1/dkim cd $GOPATH/src/github.com/username_1/dkim/cmd/dkimkeygen go build mv dkimkeygen /bin cd $GOPATH/src/github.com/username_1/dkim/cmd/dkimsign go build mv dkimsign /bin cd $GOPATH/src/github.com/username_1/dkim/cmd/dkimverify go build mv dkimverify /bin ``` I ran dkimkeygen and out plopped two files: ``dns.txt``, which is probably what needs to be put in the DNS records, alongside ``private.pem``, which I know is some sort of certificate, but from here on out I'm not sure how to bang everything together in a way that makes my emails have valid DKIM signatures. Send help, many thanks. Answers: username_1: I'll try and add a README this weekend, but the tools are effectively intended to communicate via stdin/stdout and exit codes. The dns.txt is, indeed, the TXT record that should be set up at selector._domainkey.example.com for some selector of your choice. I've never used chasquid, but it looks like dkimverify would fit in with the post-data hook. Following the example in their source, you would want to do something like: ``` if command -v dkimverify >/dev/null; then if ! dkimverify < "$TF" ; then echo "X-DKIM-Verify: Failed verification" else echo "X-DKIM-Verify: pass" fi fi ``` in order to add an X-DKIM-Verify header to the incoming email. (Note that according to the DKIM standard you *shouldn't* reject an email if the verification fails. Also, I haven't tried this, I just wrote it in a GitHub comment box, so it might have syntax errors or other problems.) Signing outgoing mail is more complicated, because I can't find any hooks that chasquid has to invoke a script before sending data. dkimsign reads from stdin and writes a signed version to stdout. I use it as part of a pipe on 9front with the parameters "dkimsign -h From:Subject:To -s selector -d example.com -key /absolute/path/private.pem" where /absolute/path/private.pem is the path to the private.pem file you generated to go with the public key encoded in dns.txt, 'selector' matches the 'selector' part of the DNS entry, and example.com is the domain. I emailed the chasquid developer to ask him if there's any pre-send hook (I didn't see any) where this would work, I'll let you know if I hear back. username_1: (Note that you can also test the DNS part by sending an unsigned email to yourself, copying the raw email to a file, and then playing with the dkimsign options locally and redirecting to a file, then running dkimverify on the file that you redirected from.) username_0: Wonderful, thank you! username_0: Ah good, the readme is there, thanks a lot. 👍 Status: Issue closed
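To make the signing step concrete, the pipe username_1 describes can be tried locally like this (`message.txt` and `signed.txt` are illustrative file names; the flags are exactly the ones quoted above):
```
dkimsign -h From:Subject:To -s selector -d example.com -key /absolute/path/private.pem < message.txt > signed.txt
dkimverify < signed.txt
```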
fsi-open/admin-bundle
208411544
Title: Inject the EventDispatcher to ControllerAbstract constructor Question: username_0: It should replace the [template](https://github.com/fsi-open/admin-bundle/blob/master/Controller/ControllerAbstract.php#L53) parameter. This will remove the need for the `setEventDispatcher` method and the relevant compiler pass.<issue_closed> Status: Issue closed
inkle/inky
631269280
Title: freezing after switching window Question: username_0: I noticed that if you leave the word count up and switch windows (cmd-tab), Inky will stop responding, and you will lose unsaved changes (you won't be able to save or get back to the window -- at least not that I've seen). It isn't a problem now that I figured out why it was happening, but it did seem weird to me (I'm on macOS Mojave, version 10.14.6, if that helps). Let me know if I can provide any other information.
cosmos/cosmos-sdk
304411189
Title: Implement IBC MVP relayer process Question: username_0: The Relayer process relays the outgoing IBC messages to the destination chain. The functionalities of the Relayer process are (sketched in code below): * Connect to 2 Tendermint full node RPCs * Query for `IBCTransferOut` messages from each chain * Submit `IBCTransferIn` messages to the other chain<issue_closed> Status: Issue closed
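To make the shape of the process concrete, here is a hypothetical Go sketch of the relay loop; every identifier in it (`Chain`, `QueryTransferOut`, `SubmitTransferIn`) is an illustrative placeholder, not the actual Cosmos SDK or Tendermint RPC API:
```go
package relayer

// Msg stands in for an outgoing IBC message; a placeholder for this sketch.
type Msg struct{ Payload []byte }

// Chain abstracts one Tendermint full-node RPC connection.
// Both method names are illustrative, not real SDK calls.
type Chain interface {
	QueryTransferOut() ([]Msg, error) // poll for outgoing IBCTransferOut messages
	SubmitTransferIn(m Msg) error     // submit an IBCTransferIn message
}

// Relay pumps messages from the source chain to the destination chain.
func Relay(src, dst Chain) error {
	for {
		msgs, err := src.QueryTransferOut()
		if err != nil {
			return err
		}
		for _, m := range msgs {
			if err := dst.SubmitTransferIn(m); err != nil {
				return err
			}
		}
	}
}
```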
csscomb/sublime-csscomb
29029713
Title: node.js path Question: username_0: I am getting the error "node is not recognized as ..." I can't seem to find where this node's path will go? I have node.js installed Status: Issue closed Answers: username_1: You can now set custom path to `node` in settings: https://github.com/csscomb/sublime-csscomb/blob/master/CSScomb.sublime-settings#L4 username_2: Thank you! username_3: I am having the same issue with CSS Comb. I added my Node path in the settings like so: "node-path" : "C:/Program Files (x86)/nodejs/node", And I added the path in my system variables as well: C:\Program Files (x86)\nodejs When I go to run CSS Comb I am still getting the error: CSScomb error: 'node' is not recognized as an internal or external command, operable program or batch file. Any more suggestions? username_4: Help please same error above. username_5: @username_4 what system are you using and how did you install nodejs?
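For anyone stuck here, the user setting username_1 links to takes a string path; a sketch of what it would look like on Windows (the exact path and the `.exe` suffix are assumptions about a typical install, not confirmed in this thread):
```
{
    "node-path": "C:/Program Files (x86)/nodejs/node.exe"
}
```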
mahnunchik/mag
123160940
Title: ISO formatted time Question: username_0: Hi, I found that you're using the `toLocaleTimeString()` method to write the timestamp. It might cause problems in some cases (i.e. if a different locale is set on different environments and on local developers' machines). So, for example, if the locale is set to `en-us`, the timestamp looks like `4:32:46 PM.765`. I think it's better to use ISO-formatted time, i.e. `16:32:46.765`. Answers: username_1: Hi @username_0 Yes, using the `toLocaleTimeString` method isn't a good idea. But for the `mag-fallback` module I think it will be enough to have only the time in the message. I will try to implement it using `getUTC*` methods.
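For illustration, one locale-independent way to produce exactly that format in JavaScript (a sketch; the maintainer mentions `getUTC*` methods as the planned implementation):
```js
// toISOString() gives e.g. "2015-12-19T16:32:46.765Z";
// slicing out characters 11-22 yields "16:32:46.765".
const time = new Date().toISOString().slice(11, 23);
```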
exoplanet-dev/exoplanet
930319770
Title: Case Study: rv-multi.ipynb MCMC sampler RuntimeError: Chain 0 failed Question: username_0: **Describe the bug** Case Study: rv-multi.ipynb MCMC sampler RuntimeError: Chain 0 failed **To Reproduce** Python 3.8.5 exoplanet.__version__ = '0.5.1' Run case study rv-multi.ipynb at https://gallery.exoplanet.codes/tutorials/rv-multi/ **Expected behavior** Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 51 seconds. **Your setup (please complete the following information):** - Version of exoplanet: exoplanet.__version__ = '0.5.1' - Operating system: MacOS Big sur 11.3 - Python version & installation method (pip, conda, etc.): Python 3.8.5 conda environment **Additional context** Add any other context about the problem here. See attached screen print <img width="1743" alt="screen1" src="https://user-images.githubusercontent.com/37123616/123455089-4cd5b880-d5af-11eb-911d-1a89e93f499a.png"> <img width="1743" alt="screen2" src="https://user-images.githubusercontent.com/37123616/123455107-552df380-d5af-11eb-8842-a45ab3b3d711.png"> <img width="1743" alt="screen3" src="https://user-images.githubusercontent.com/37123616/123455123-59f2a780-d5af-11eb-816d-73e97d111af4.png">
jhipster/generator-jhipster
116362659
Title: Spanish translation is broken Question: username_0: ## Overview of the issue Installing Spanish as a language and then selecting it in the dropdown menu results in a javascript error of `Unexpected token }` that breaks the application. ## Motivation for or Use Case Using spanish language in a jhipster app. ## JHipster Version(s) 2.23.1. I don't think it happened in previous versions but not entirely sure. ## Browsers and Operating System Chrome and Firefox on Linux ## Reproduce the error Create a new application, install Spanish language and then select it in the dropdown menu. ## Suggest a fix It's a reaaaally easy fix, just have to remove a trailing comma in line 110 of es/global.json file (error.size property). I'm sorry I can't provide a PR right now despite such an easy fix, but I won't be available until the end of the month. Status: Issue closed Answers: username_1: I think, it's already fixed in 3affe0760b7f10549259aa6cc98bffe9f2c32070
GridSpace/grid-apps
858543471
Title: Add option for 1st layer height. Question: username_0: Very useful when printing with thin layers to get good bed adhesion. Answers: username_1: There is no such thing as first layer height on the belt. But Kiri:Moto already has a setting under `Base` called `Belt Offset`, which is what you are thinking of. It defaults to `0` but you can increase it to add more spacing between the part and the belt. Status: Issue closed username_0: -- Mvh. <NAME> +47 4666 5222 username_1: it really depends on how you configured and leveled your belt ... how far the nozzle is from the belt, sometimes the nozzle size if it's been upgraded, etc
knpuniversity/javascript
208224089
Title: Registration functionality is not working. Question: username_0: 500 Internal Server Error - InvalidArgumentException Any comments? Answers: username_0: I already know what the cause is. In my services.yml file the code was the following:
```yaml
services:
    app.form.type.registration_type:
        class: AppBundle\Form\Type\RegistrationType
        arguments: [%fos_user.model.user.class%]
        tags:
            - { name: form.type }
```
But there was a deprecation message about not quoting a scalar starting with the % indicator character, deprecated since Symfony 3.1, so I changed it after starting the tutorial to this:
```yaml
services:
    app.form.type.registration_type:
        class: AppBundle\Form\Type\RegistrationType
        arguments: [fos_user.model.user.class]
        tags:
            - { name: form.type }
```
and that is why I couldn't get the registration form properly. Sorry for submitting an issue for this. Status: Issue closed username_0: According to @javiereguiluz [issue](https://github.com/symfony/symfony-demo/issues/246), the fix for this deprecation in the tutorial code should be:
```yaml
services:
    app.form.type.registration_type:
        class: AppBundle\Form\Type\RegistrationType
        arguments: ['%fos_user.model.user.class%']
        tags:
            - { name: form.type }
```
with single quotes instead of double quotes. I hope this helps someone. username_1: Thanks for the issue! You're right about the deprecated, unquoted strings! I just fixed it at sha: fb92ec18a60dfe431af658d4a410f378adce659d Thank you!
santoshvijapure/DS_with_hacktoberfest
500062630
Title: Implement Queue Question: username_0: Implement Queue data structure. Answers: username_1: can i take this issue ? please assign on my name username_0: Done! @username_1 username_1: I am working on this issue . Please assign username_2: I have implemented Queue in Swift #124 username_3: @username_0 Can you assign this issue to me...I have a better and efficient algorithm for this... Status: Issue closed
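Since the thread never shows the code itself, here is an illustrative array-backed queue in Swift (the language username_2 mentions); a generic sketch, not the implementation from the linked PR:
```swift
// Minimal FIFO queue backed by an array.
struct Queue<T> {
    private var elements: [T] = []

    // Add a value to the back of the queue.
    mutating func enqueue(_ value: T) {
        elements.append(value)
    }

    // Remove and return the front value, or nil if the queue is empty.
    mutating func dequeue() -> T? {
        elements.isEmpty ? nil : elements.removeFirst()
    }

    var front: T? { elements.first }
    var isEmpty: Bool { elements.isEmpty }
}
```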
jstime/jstime
688470929
Title: Stack traces regressed Question: username_0: Seems like we really do need some sort of basic test runner to avoid regressions like this. It would appear that b7a6d994d3b89288da638ea78f883eb852eb77df regressed our stack trace support /cc @devsnek Answers: username_0: Historical stacktrace code for posterity module: https://github.com/jstime/jstime/commit/b7a6d994d3b89288da638ea78f883eb852eb77df#diff-cc1cadebb5770328d1404803e7a56327L136-L156 script: https://github.com/jstime/jstime/commit/b7a6d994d3b89288da638ea78f883eb852eb77df#diff-420c68e7483ee630e16762977da06437L28-L36 username_1: @username_0 ohh my majestic creation is broken (?)! I could check for this if you want me to! I can see there's a PR already indeed. Status: Issue closed
ethereum-optimism/optimism
866516378
Title: Relay event only emitted if call doesn't revert Question: username_0: <!-- Need help? Refer to our contributing guidelines for additional information about making a good issue: https://github.com/ethereum-optimism/.github/blob/master/CONTRIBUTING.md --> **Describe the bug** @username_1 reported this bug recently in our Discord and could provide some more context. I'll quote what he said here and let him expand on what was said: """ 11. What happens if the contract a cross domain message calls reverts? 1. Doesn’t seem like I’m like able to pick that up with the watcher 1. Would like to be able to get the revert messages of relayed relayed messages, is that not possible? 2. Relay event only emitted if call doesn’t revert 3. Watcher looks for relay events 2. Does the RelayedMessage event have to be emitted only on success? 1. Don’t see why that can’t also be on failure (it literally gets stored as a relayed message even if it reverts) 2. Maybe a separate event like SuccessfulMessage or something would be useful? 3. Being able to detect a relayed message that didn’t succeed seems pretty important """ **To Reproduce** Steps to reproduce the behavior: @username_1 could you possibly provide a repro case? **Expected behavior** Should be able to retrieve revert reasons from cross-domain message call reversions. Answers: username_1: This was already solved @username_0 https://github.com/ethereum-optimism/optimism/issues/587 username_1: However this change should be followed up with a PR to allow the watcher to handle the new event. I have created an issue for discussion: https://github.com/ethereum-optimism/optimism/issues/602 Status: Issue closed
ticgal/gapp
756003088
Title: Problem of connection with GLPI 9.5.3 Question: username_0: Hello I have a problem since I updated my GLPI server, from version 9.5.1 to 9.5.3, unable to reconnect to the server by the GAPP application. I have tested with a basic user type "glpi / glpi", and it works. So I deduced that this comes from the fact that we go through a mail server to connect to GLPI. We therefore go through our mail server which is at Google Workspace (formerly Gsuite), and so here is our connection procedure on the web version: - **username**: user's email address - **password**: an application password generated in the Google account (different from the main password) - **choice of connection mode**: via the Google profile which therefore allows you to go through our mail server rather than through the internal GLPI database ![image](https://user-images.githubusercontent.com/45215079/100986269-42899980-354d-11eb-92c3-03dc6281535e.png) When we apply the application ID and password in GAPP, we have a username or password error ![image](https://user-images.githubusercontent.com/45215079/100986929-20444b80-354e-11eb-8937-01aeb6291a39.png) **Let me add again, with the version of GLPI in 9.5.1, the connection worked fine.** We tested from smartphones running Android 10 and Android 11, and it's exactly the same problem Can you see what it is and tell me if there is a solution? Thank you Answers: username_0: Hi, someone have an idea? username_1: Hi, Sorry about the late reply. We don't have enough information to debug this issue. Could you please post your system information? Setup > General > System > Information about system installation and configuration, add the information between the code tags. You can also use our debug tool GAT https://tic.gal/en/project/gat/ and post the generated file. Regards, username_0: Hi Oscar, Here is my system informations, thanks [code]   GLPI 9.5.3 ( => /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL) Installation mode: TARBALL -- Operating system: Linux vpscloud.innotec-sa.com 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 PHP 7.2.34 fpm-fcgi (Core, PDO, PDO_ODBC, Phar, Reflection, SPL, SimpleXML, Zend OPcache, bcmath, bz2, calendar, cgi-fcgi, ctype, curl, date, dba, dom, enchant, exif, fileinfo, filter, ftp, gd, gettext, gmp, hash, iconv, imagick, imap, intl, ionCube Loader, json, ldap, libxml, mbstring, mysqli, mysqlnd, odbc, openssl, pcre, pdo_mysql, pdo_pgsql, pdo_sqlite, pgsql, posix, pspell, redis, session, soap, sockets, sodium, sqlite3, standard, sysvmsg, sysvsem, sysvshm, tidy, tokenizer, xml, xmlreader, xmlrpc, xmlwriter, xsl, zip, zlib) Setup: max_execution_time="600" memory_limit="128M" post_max_size="8M" safe_mode="" session.save_handler="files" upload_max_filesize="16M" Software: Apache (Apache Server at support.proxl.fr Port 443) Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36 Server Software: (Ubuntu) Server Version: 5.7.32-0ubuntu0.16.04.1 Server SQL Mode: Parameters: jlacour@localhost/proxl_support Host info: Localhost via UNIX socket PHP version is at least 7.2.0 - Perfect! Sessions support is available - Perfect! Allocated memory > 64 Mio - Perfect! 
mysqli extension is installed ctype extension is installed fileinfo extension is installed json extension is installed mbstring extension is installed iconv extension is installed zlib extension is installed curl extension is installed gd extension is installed simplexml extension is installed intl extension is installed ldap extension is installed apcu extension is not present Zend OPcache extension is installed xmlrpc extension is installed CAS extension is not present exif extension is installed zip extension is installed bz2 extension is installed sodium extension is installed Database version seems correct (5.7.32) - Perfect! Access to timezone database (mysql) is not allowed. The log file has been created successfully. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/config has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_dumps has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_sessions has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_cron has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_graphs has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_lock has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_plugins has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_tmp has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_cache has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_rss has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_uploads has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_pictures has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/marketplace has been validated. 
Web access to files directory is protected GLPI_ROOT: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL GLPI_CONFIG_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/config GLPI_VAR_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files GLPI_MARKETPLACE_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/marketplace GLPI_USE_CSRF_CHECK: 1 GLPI_CSRF_EXPIRES: 7200 GLPI_CSRF_MAX_TOKENS: 100 GLPI_USE_IDOR_CHECK: 1 GLPI_IDOR_EXPIRES: 7200 GLPI_TELEMETRY_URI: https://telemetry.glpi-project.org GLPI_INSTALL_MODE: TARBALL GLPI_NETWORK_MAIL: <EMAIL> GLPI_NETWORK_SERVICES: https://services.glpi-network.com GLPI_MARKETPLACE_PRERELEASES: GLPI_USER_AGENT_EXTRA_COMMENTS: GLPI_AJAX_DASHBOARD: 1 GLPI_CALDAV_IMPORT_STATE: 0 GLPI_DEMO_MODE: 0 GLPI_FORCE_EMPTY_SQL_MODE: 1 GLPI_DOC_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files GLPI_CACHE_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_cache GLPI_CRON_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_cron GLPI_DUMP_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_dumps GLPI_GRAPH_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_graphs GLPI_LOCAL_I18N_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_locales GLPI_LOCK_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_lock GLPI_LOG_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_log GLPI_PICTURE_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_pictures GLPI_PLUGIN_DOC_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_plugins GLPI_RSS_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_rss GLPI_SESSION_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_sessions GLPI_TMP_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_tmp GLPI_UPLOAD_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/files/_uploads GLPI_NETWORK_REGISTRATION_API_URL: https://services.glpi-network.com/api/registration/ GLPI_MARKETPLACE_PLUGINS_API_URI: https://services.glpi-network.com/api/glpi-plugins/ GLPI_I18N_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/locales GLPI_VERSION: 9.5.3 GLPI_SCHEMA_VERSION: 9.5.3 GLPI_MIN_PHP: 7.2.0 GLPI_YEAR: 2020 htmlawed/htmlawed version 1.2.5 in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/htmlawed/htmlawed) phpmailer/phpmailer version 6.1.6 in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/phpmailer/phpmailer/src) simplepie/simplepie version 1.5.6 in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/simplepie/simplepie/library) tecnickcom/tcpdf version 6.3.5 in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/tecnickcom/tcpdf) michelf/php-markdown in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/michelf/php-markdown/Michelf) true/punycode in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/true/punycode/src) iamcal/lib_autolink in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/iamcal/lib_autolink) sabre/dav in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/sabre/dav/lib/DAV) sabre/http in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/sabre/http/lib) sabre/uri in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/sabre/uri/lib) sabre/vobject in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/sabre/vobject/lib) laminas/laminas-cache in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/laminas/laminas-cache/src) 
laminas/laminas-i18n in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/laminas/laminas-i18n/src) laminas/laminas-serializer in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/laminas/laminas-serializer/src) monolog/monolog in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/monolog/monolog/src/Monolog) sebastian/diff in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/sebastian/diff/src) elvanto/litemoji in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/elvanto/litemoji/src) symfony/console in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/symfony/console) scssphp/scssphp in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/scssphp/scssphp/src) laminas/laminas-mail in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/laminas/laminas-mail/src/Protocol) laminas/laminas-mime in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/laminas/laminas-mime/src) rlanvin/php-rrule in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/rlanvin/php-rrule/src) blueimp/jquery-file-upload in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/blueimp/jquery-file-upload/server/php) ramsey/uuid in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/ramsey/uuid/src) psr/log in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/psr/log/Psr/Log) psr/simple-cache in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/psr/simple-cache/src) mexitek/phpcolors in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/mexitek/phpcolors/src/Mexitek/PHPColors) guzzlehttp/guzzle in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/guzzlehttp/guzzle/src) guzzlehttp/psr7 in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/guzzlehttp/psr7/src) wapmorgan/unified-archive in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/wapmorgan/unified-archive/src) paragonie/sodium_compat in (/var/www/vhosts/innotec-sa.com/Serveurs/SupportPROXL/vendor/paragonie/sodium_compat/src) Not active Way of sending emails: SMTP+TLS (<EMAIL>) Name: '<EMAIL>' Active: Yes Server: '{imap.gmail.com/imap/ssl/novalidate-cert/notls}Assistance PROXL' Login: '<EMAIL>' Password: Yes actualtime Name: ActualTime Version: 1.4.0 State: Enabled fields Name: Champs supplémentaires Version: 1.12.0 State: Enabled datainjection Name: Data Injection Version: 2.8.1 State: Enabled formcreator Name: Form Creator Version: 2.10.4 State: Enabled gappessentials Name: Gapp Essentials Version: 1.2.0 State: Enabled gdrive Name: GDrive Version: 1.3.0 State: Enabled [/code] username_1: GAT information will be needed finally, https://tic.gal/en/project/gat/ and post the generated file. Thanks username_0: I do the file with glpi user, with my credentials, i have this message ![image](https://user-images.githubusercontent.com/45215079/102468632-12b0ba80-4052-11eb-96be-a57b32cb6989.png) [support.proxl.fr_2020-12-17_10-21-58.txt](https://github.com/ticgal/gapp/files/5708249/support.proxl.fr_2020-12-17_10-21-58.txt) username_1: Did you modify the txt in any way? Your session token is set to 000000. Login is OK, since API returns information about your config. I don't understand that GLPI behaviour. I suggest running a GLPI test environment to test your auth method. No data, no plugins, only test the auth. username_0: I don't open this file ?? i just use the user glpi with internal database . i go to test with a test environment and i tell you. 
Thanks username_0: Oscar, i just install a new GLPI on my webserver, i don't install any plugin, and so it's the same problem ![image](https://user-images.githubusercontent.com/45215079/102480763-0b44dd80-4061-11eb-9168-6ccb46344dc2.png) so i do a GAT file with user "glpi" [glpi.proxl.fr_2020-12-17_12-09-36.txt](https://github.com/ticgal/gapp/files/5708904/glpi.proxl.fr_2020-12-17_12-09-36.txt) and system informations [code]   GLPI 9.5.3 ( => /var/www/vhosts/innotec-sa.com/Serveurs/glpitest) Installation mode: TARBALL -- Operating system: Linux vpscloud.innotec-sa.com 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 PHP 7.2.34 fpm-fcgi (Core, PDO, PDO_ODBC, Phar, Reflection, SPL, SimpleXML, Zend OPcache, bcmath, bz2, calendar, cgi-fcgi, ctype, curl, date, dba, dom, enchant, exif, fileinfo, filter, ftp, gd, gettext, gmp, hash, iconv, imagick, imap, intl, ionCube Loader, json, ldap, libxml, mbstring, mysqli, mysqlnd, odbc, openssl, pcre, pdo_mysql, pdo_pgsql, pdo_sqlite, pgsql, posix, pspell, redis, session, soap, sockets, sodium, sqlite3, standard, sysvmsg, sysvsem, sysvshm, tidy, tokenizer, xml, xmlreader, xmlrpc, xmlwriter, xsl, zip, zlib) Setup: max_execution_time="30" memory_limit="128M" post_max_size="8M" safe_mode="" session.save_handler="files" upload_max_filesize="2M" Software: Apache (Apache Server at glpi.proxl.fr Port 443) Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36 Server Software: (Ubuntu) Server Version: 5.7.32-0ubuntu0.16.04.1 Server SQL Mode: Parameters: username_0@localhost/glpi_test Host info: Localhost via UNIX socket PHP version is at least 7.2.0 - Perfect! Sessions support is available - Perfect! Allocated memory > 64 Mio - Perfect! mysqli extension is installed ctype extension is installed fileinfo extension is installed json extension is installed mbstring extension is installed iconv extension is installed zlib extension is installed curl extension is installed gd extension is installed simplexml extension is installed intl extension is installed ldap extension is installed apcu extension is not present Zend OPcache extension is installed xmlrpc extension is installed CAS extension is not present exif extension is installed zip extension is installed bz2 extension is installed sodium extension is installed Database version seems correct (5.7.32) - Perfect! Access to timezone database (mysql) is not allowed. The log file has been created successfully. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/config has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_dumps has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_sessions has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_cron has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_graphs has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_lock has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_plugins has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_tmp has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_cache has been validated. 
Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_rss has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_uploads has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_pictures has been validated. Write access to /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/marketplace has been validated. Web access to files directory is protected GLPI_ROOT: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest GLPI_CONFIG_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/config GLPI_VAR_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files GLPI_MARKETPLACE_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/marketplace GLPI_USE_CSRF_CHECK: 1 GLPI_CSRF_EXPIRES: 7200 GLPI_CSRF_MAX_TOKENS: 100 GLPI_USE_IDOR_CHECK: 1 GLPI_IDOR_EXPIRES: 7200 GLPI_TELEMETRY_URI: https://telemetry.glpi-project.org GLPI_INSTALL_MODE: TARBALL GLPI_NETWORK_MAIL: <EMAIL> GLPI_NETWORK_SERVICES: https://services.glpi-network.com GLPI_MARKETPLACE_PRERELEASES: GLPI_USER_AGENT_EXTRA_COMMENTS: GLPI_AJAX_DASHBOARD: 1 GLPI_CALDAV_IMPORT_STATE: 0 GLPI_DEMO_MODE: 0 GLPI_FORCE_EMPTY_SQL_MODE: 1 GLPI_DOC_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files GLPI_CACHE_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_cache GLPI_CRON_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_cron GLPI_DUMP_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_dumps GLPI_GRAPH_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_graphs GLPI_LOCAL_I18N_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_locales GLPI_LOCK_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_lock GLPI_LOG_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_log GLPI_PICTURE_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_pictures GLPI_PLUGIN_DOC_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_plugins GLPI_RSS_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_rss GLPI_SESSION_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_sessions GLPI_TMP_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_tmp GLPI_UPLOAD_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/files/_uploads GLPI_NETWORK_REGISTRATION_API_URL: https://services.glpi-network.com/api/registration/ GLPI_MARKETPLACE_PLUGINS_API_URI: https://services.glpi-network.com/api/glpi-plugins/ GLPI_I18N_DIR: /var/www/vhosts/innotec-sa.com/Serveurs/glpitest/locales GLPI_VERSION: 9.5.3 GLPI_SCHEMA_VERSION: 9.5.3 GLPI_MIN_PHP: 7.2.0 GLPI_YEAR: 2020 htmlawed/htmlawed version 1.2.5 in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/htmlawed/htmlawed) phpmailer/phpmailer version 6.1.6 in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/phpmailer/phpmailer/src) simplepie/simplepie version 1.5.6 in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/simplepie/simplepie/library) tecnickcom/tcpdf version 6.3.5 in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/tecnickcom/tcpdf) michelf/php-markdown in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/michelf/php-markdown/Michelf) true/punycode in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/true/punycode/src) iamcal/lib_autolink in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/iamcal/lib_autolink) sabre/dav in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/sabre/dav/lib/DAV) sabre/http in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/sabre/http/lib) sabre/uri in 
(/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/sabre/uri/lib) sabre/vobject in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/sabre/vobject/lib) laminas/laminas-cache in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/laminas/laminas-cache/src) laminas/laminas-i18n in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/laminas/laminas-i18n/src) laminas/laminas-serializer in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/laminas/laminas-serializer/src) monolog/monolog in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/monolog/monolog/src/Monolog) sebastian/diff in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/sebastian/diff/src) elvanto/litemoji in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/elvanto/litemoji/src) symfony/console in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/symfony/console) scssphp/scssphp in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/scssphp/scssphp/src) laminas/laminas-mail in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/laminas/laminas-mail/src/Protocol) laminas/laminas-mime in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/laminas/laminas-mime/src) rlanvin/php-rrule in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/rlanvin/php-rrule/src) blueimp/jquery-file-upload in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/blueimp/jquery-file-upload/server/php) ramsey/uuid in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/ramsey/uuid/src) psr/log in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/psr/log/Psr/Log) psr/simple-cache in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/psr/simple-cache/src) mexitek/phpcolors in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/mexitek/phpcolors/src/Mexitek/PHPColors) guzzlehttp/guzzle in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/guzzlehttp/guzzle/src) guzzlehttp/psr7 in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/guzzlehttp/psr7/src) wapmorgan/unified-archive in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/wapmorgan/unified-archive/src) paragonie/sodium_compat in (/var/www/vhosts/innotec-sa.com/Serveurs/glpitest/vendor/paragonie/sodium_compat/src) Not active Way of sending emails: PHP   gappessentials Name: Gapp Essentials Version: 1.2.0 State: Not installed [/code] username_1: We are unable to replicate. We have tested on 9.5.3 and works well. Could you please provide a test user/pass? username_2: <p>Testing</p><blockquote>Created with <a href='https://tic.gal'>GitSync</a> in GLPI by <NAME></blockquote> username_0: can i send you in private message the testing user/pass? username_0: i send you on mail <EMAIL> thanks username_0: Hi Oscar, i don't understand your messages from your GLPI support, i sended you our login for our glpi test, i see you create a ticket, but i still cannot connect to this with this login, by the web site, yes, but not by the app. username_1: Hi, I have already answer on our ticket. We have investigated the issue, and the GLPI API apparently is not considering IMAP authentication. Because of this, Gapp is unable to authenticate. You could open an issue at the GLPI project, to report it. Regards, Status: Issue closed
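For anyone hitting the same wall: a quick way to check whether the GLPI REST API itself accepts the IMAP-backed credentials, independently of Gapp, is a direct `initSession` call. This is only a sketch — the URL, App-Token, and credentials below are placeholders, and it assumes the REST API is enabled under Setup > General > API:
```bash
# Placeholder host, App-Token, and credentials; requires the GLPI REST API
# to be enabled. A JSON reply containing "session_token" means the API
# accepted the login; an ERROR_GLPI_LOGIN reply reproduces what Gapp sees.
curl -s \
  -H "Content-Type: application/json" \
  -H "App-Token: YOUR_APP_TOKEN" \
  -H "Authorization: Basic $(echo -n 'user@example.com:app-password' | base64)" \
  "https://glpi.example.com/apirest.php/initSession"
```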
microsoft/PowerToys
733736441
Title: Prompted automatic update from 0.23.2 to 0.25.0 failed after PowerToys uninstalls itself Question: username_0: ## ℹ Computer information - PowerToys version: Release 0.23.2 -> 0.25.0 - PowerToy Utility: N.A. - Running PowerToys as Admin: Yes - Windows build number: 19042.608 ## 📝 Provide detailed reproduction steps (if any) 1. Get prompted that an update from Release 0.23.2 -> 0.25.0 is available. 2. Click "Update now"; PowerToys uninstalls itself first, which is normal, then a notification is shown that the update has failed. 3. PowerToys is nowhere to be found on the computer. ### ✔️ Expected result If I understand correctly, the update should only start once the new release has been downloaded. Normally, after the self-uninstall, PowerToys re-installs itself with the new version. I am reporting this bug so that any underlying bugs that lead to this can be found and will not affect future releases. I have only tried it on one of my computers so far. ### ❌ Actual result PowerToys is completely uninstalled. ## 📷 Screenshots No screenshot available. Answers: username_0: Here's an update on this bug; hopefully this feedback provides some clues. I downloaded the new 0.25.0 PowerToys installer and clicked install. A notification appeared saying "installing new PowerToys version", just as it would during a normal auto-update. This means the "watch process" (I didn't note exactly what it is, but I suspect a process runs during the auto-update to show notifications and guide the installation of the new version) was still running after the aforementioned failed update. However, when this manual re-installation completed, all user settings were lost, and PowerToys had to be re-configured from scratch. From my perspective, this might have something to do with that "watch process". username_1: possible dup #7649 username_2: Duplicate of #7649 Status: Issue closed
RyanSchuster/vos64
120615894
Title: Initial documentation Question: username_0: All items from the stable setup milestone should be included both on the wiki and in the HTML document versioned in the repo. - [ ] Skeleton wiki page for docs - [ ] Folder/file structure - [ ] Build process and toolchain minutiae - [ ] Boot process and memory layout - [ ] Use of source scanners Status: Issue closed
AbsaOSS/cobrix
481215700
Title: Seemingly spurious records and missing variable length data Question: username_0: @username_1 - Thanks so much with your help on #147 . We were able to re-export the data to include the RDW. However, we're still facing some issues. *Background* I'm reading a single file with two records that use the same copybook. However, when I trie to save the dataframe to JSON, I see four records and sections of the JSON that should include repeating values (i.e. sections of the copybook that use OCCURS...DEPENDING ON) are empty. The relevant sections of the copybook are here. ``` 02 FI-IP-SNF-CLM-REC. 04 FI-IP-SNF-CLM-FIX-GRP. 06 CLM-REC-IDENT-GRP. 08 REC-LNGTH-CNT PIC S9(5) COMP-3. ... 06 IP-REV-CNTR-CD-I-CNT PIC 99. ... 06 CLM-REV-CNTR-GRP OCCURS 0 TO 45 TIMES DEPENDING ON IP-REV-CNTR-CD-I-CNT OF FI-IP-SNF-CLM-REC. ... ``` Cobrix logs the following. ``` -------- FIELD LEVEL/NAME --------- --ATTRIBS-- FLD START END LENGTH FI_IP_SNF_CLM_REC 1 31656 31656 4 FI_IP_SNF_CLM_FIX_GRP 244 1 2058 2058 6 CLM_REC_IDENT_GRP 7 1 8 8 8 REC_LNGTH_CNT 3 1 3 3 ... 6 IP_REV_CNTR_CD_I_CNT D 153 1249 1250 2 ... 6 CLM_REV_CNTR_GRP [] 360 4384 31653 27270 ... ``` Here's my code: ``` val inpk_df = spark .read .format("cobol") .option("copybook", "data/UTLIPSNK.txt") .option("generate_record_id", true) .option("is_record_sequence", "true") .option("is_rdw_big_endian", "true") .load("data/in/file1") inpk_df.write.json("data/out/file1") ``` This produces JSON that looks like this. ``` { "File_Id": 0, "Record_Id": 0, "FI_IP_SNF_CLM_REC": {...} } { [Truncated] I'm guessing there is a mismatch between what the RDW is indicating and the actual data. Do you have some pointers for troubleshooting that and working around it? *Second Question* The second question is, how come the nested JSON array isn't populated for the variable length field values? The value of the `IP-REV-CNTR-CD-I-CNT` field in the JSON for the first record looks like this: ``` ... "IP_REV_CNTR_CD_I_CNT": 23, ... ``` So, I expect 23 records to be populated. However, the value of the `"CLM_REV_CNTR_GRP"` key is an array of 23 elements, but they are all empty. The first 20 elements are all objects where each key has an empty value. The last three are just empty objects. Any ideas? Thanks so much for your help!!! Answers: username_1: Thanks for providing so much context. Still hard to tell what exactly went wrong, but here some ideas: * When RDWs are used the number of records is determined by them. So if RDWs are correct, then the file contains a sequence of 4 records. Or RDWs are wrong or biased and need adjustment. You can verify it this way. If each of the records starts with valid (properly parsed/decoded) values, then RDW is likely to be correct and there are actually 4 records (see the next idea). * If your data is hierarchical it might be that you have 2 root records having 2 child records. Although from a logical perspective it might be considered 2 records (child record being part of a root record), from the file layout perspective it might have 4 records (just an idea, I'm not suggesting this is the case). * An example RDW (from the error message) says the record size is `64*256 + 7 = 16391`, but the size of the copybook is 31653. Your segments redefine each other in the copybook, right? * The fact that the parsed data contains 23 empty elements of an array might be a sign that the copybook doesn't completely match the data. * The `REC_LNGTH_CNT must be an integral type` issue is interesting since `REC_LNGTH_CNT` is definitely integral. 
We are going to release a `1.0.0-SNAPSHOT` soon with the rewritten parser. I'm wondering if the issue is still valid for that version. username_1: Please try this snapshot and let me know if it works for you: ```xml <dependency> <groupId>za.co.absa.cobrix</groupId> <artifactId>spark-cobol</artifactId> <version>1.0.1-SNAPSHOT</version> </dependency> ``` You also need to use this option: ``` .option("variable_size_occurs", "true") ``` Status: Issue closed
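For completeness, here is how the reader call from the original post looks with the suggested option added — a sketch assuming the `1.0.1-SNAPSHOT` artifact is on the classpath; everything else is unchanged from the question:
```scala
// Same reader options as in the question, plus the new flag that makes
// the parser populate OCCURS ... DEPENDING ON groups (1.0.1-SNAPSHOT).
val inpk_df = spark
  .read
  .format("cobol")
  .option("copybook", "data/UTLIPSNK.txt")
  .option("generate_record_id", true)
  .option("is_record_sequence", "true")
  .option("is_rdw_big_endian", "true")
  .option("variable_size_occurs", "true")
  .load("data/in/file1")
```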
flutter/flutter
209590858
Title: App Bar with no drawer has too much left padding on Android Question: username_0: ## Steps to Reproduce ``` flutter create myapp cd myapp flutter run ``` On iOS, the app bar is centered. On Android, you get this: ![center aligned](https://cloud.githubusercontent.com/assets/394889/23233739/a5ed0108-f904-11e6-9aab-585182d996d1.png) Because app bars are left aligned on Android, it should look like this: ![left aligned](https://cloud.githubusercontent.com/assets/394889/23233759/b35bfff6-f904-11e6-8459-09c5e1475263.png) @username_1 said he was interested in fixing this. ## Flutter Doctor [✓] Flutter (on Mac OS, channel master) • Flutter at /Users/jackson/git/flutter • Framework revision 9610ff6b8e (5 days ago), 2017-02-17 11:17:05 • Engine revision ab09530927 • Tools Dart version 1.23.0-dev.0.0 [✓] Android toolchain - develop for Android devices (Android SDK 25.0.2) • Android SDK at /Users/jackson/Library/Android/sdk • Platform android-25, build-tools 25.0.2 • ANDROID_HOME = /Users/jackson/Library/Android/ • Java(TM) SE Runtime Environment (build 1.8.0_91-b14) [✓] iOS toolchain - develop for iOS devices (Xcode 8.2.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Xcode 8.2.1, Build version 8C1002 • ios-deploy 1.9.1 [✓] IntelliJ IDEA Community Edition (version 2016.3.4) • Dart plugin version 163.13137 • Flutter plugin version 0.1.10 [✓] Connected devices • Nexus 5 • 05efc788006b0a48 • android-arm • Android 6.0.1 (API 23) • iPhone 7 Plus • 153C0032-F485-4E73-9732-8E60C7D75B11 • ios • iOS 10.2 (simulator) Answers: username_1: /cc @username_2, who was just looking at this code for icon hit regions. username_2: Ah, I must have broken it. Will take a look username_2: Oh never mind, it was always like that. Will implement. username_2: I'm going for 16px based on this ![flutter_07](https://cloud.githubusercontent.com/assets/156888/23235050/f0a502b4-f908-11e6-9c77-eb2cb7cdc0fa.png) Status: Issue closed username_3: @username_2 I can't find an answer there for this particular question, but as a general rule the authority for what to do for widgets is the material spec at material.google.com.
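For apps that want the platform-correct alignment before (or independent of) the framework fix, `AppBar` already exposes `centerTitle`; a minimal sketch, noting that this only controls alignment and does not add the Material-spec 16px inset the fix introduces:
```dart
import 'package:flutter/material.dart';

// Sketch: center the title on iOS, left-align it on Android.
AppBar buildAppBar(BuildContext context) {
  return AppBar(
    title: const Text('myapp'),
    centerTitle: Theme.of(context).platform == TargetPlatform.iOS,
  );
}
```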
jlippold/tweakCompatible
310273244
Title: `CountMyMessages` working on iOS 11.1.2 Question: username_0: ``` { "packageId": "com.imkpatil.countmymessages", "action": "working", "userInfo": { "arch32": false, "packageId": "com.imkpatil.countmymessages", "deviceId": "iPhone9,1", "url": "http://cydia.saurik.com/package/com.imkpatil.countmymessages/", "iOSVersion": "11.1.2", "packageVersionIndexed": false, "packageName": "CountMyMessages", "category": "Tweaks", "repository": "Kiran Patil's Repo", "name": "CountMyMessages", "packageIndexed": false, "packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.", "id": "com.imkpatil.countmymessages", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.0.7", "shortDescription": "A Simple tweak to count my Total messages!", "latest": "1.0", "author": "<NAME>", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```
r-hub/rhub
185207112
Title: Select platforms for cran test Question: username_0: Having a `check_for_cran` wrapper is very nice; CRAN repo policy suggests r-release and r-devel, so I would prefer to have an option to select (just) these two (or even make it the default). Also, an aggregate result output data.frame would be sweet. Answers: username_1: :+1: Status: Issue closed
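A sketch of what selecting just those two platforms could look like with the `platform` argument of `rhub::check()` — the platform names below are illustrative, and the real ones should be taken from `rhub::platforms()`:
```r
library(rhub)

platforms()  # list the available platform names

# Check only on an r-release and an r-devel platform,
# per CRAN repository policy.
chk_release <- check(".", platform = "debian-gcc-release")
chk_devel   <- check(".", platform = "debian-gcc-devel")
```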
iu-parfunc/accelerack
120961071
Title: Cleanup: for starters, no redundant types. Question: username_0: Whew, it's hard to know where to start. This code has layers of renamings and wrappings that are basically deferred refactorings that need to *happen*. Structs should be defined in one place, high up in the module dependence graph. There should be one notion of `acc-array?`, which we should go ahead and rename `acc-manifest-array?`. The "racket ops" module must work directly against this type... not work on the C data and then have separate wrappers that work over the proper ADT type!! Answers: username_0: The C data definitions are all very nice and serve our purposes well. They just need to be isolated in some module called, e.g., "c_arraydata.rkt". That module should expose an *absolutely minimal interface* (probably smaller than `acc_allocate` does now), and that should be all the rest of the modules know about CData. username_0: Almost everything in `acc_header` is internal details of the "C data". But the `define-cstruct _acc-array`... that's currently our `acc-manifest-array?` struct as I understand it. username_0: Huh, along these lines, I can't remember why we need `rkt-acc-array` in `acc_header.rkt` -- is it still needed? username_1: I will remove the rkt-acc-array... It is not needed anymore... username_0: Great... I think the success criterion will be no longer needing any of the `require (prefix ...` business. username_1: Should racket_ops be able to work over normal lists? Is (acc-map add1 '(1 2 3 4)) expected to produce '(2 3 4 5)? username_1: Almost everything in header.rkt is C data definitions. Should we rename "header.rkt" to "c_arraydata.rkt"? Status: Issue closed username_1: wrappers.rkt provides the necessary wrappers for racket_ops. "prefix-in" requirements have been removed for racket_ops functions (they remain only for importing map from racket/base). This issue is fixed. The only thing pending is the renaming of "header.rkt" to "c_arraydata.rkt", if necessary. Closing this issue.
ohjay/stable_fluids
396156289
Title: README images Question: username_0: ![stable_2d](https://user-images.githubusercontent.com/8358648/50723766-d10deb80-1096-11e9-9ea2-509681ff63a9.gif) ![stable_3d](https://user-images.githubusercontent.com/8358648/50723767-d2d7af00-1096-11e9-935e-06322229eee8.gif) ![stable_face1](https://user-images.githubusercontent.com/8358648/50723768-d4a17280-1096-11e9-8ecb-68ae34f848b9.gif) ![stable_face2](https://user-images.githubusercontent.com/8358648/50723769-d66b3600-1096-11e9-931a-a6a85a6425ac.gif) ![stable_s1](https://user-images.githubusercontent.com/8358648/50723771-d9febd00-1096-11e9-956a-ff5c9198bf03.gif) ![stable_s2](https://user-images.githubusercontent.com/8358648/50723772-dbc88080-1096-11e9-88be-88fa2ccaf1b2.gif)
rundeck/docs
473045620
Title: Text on page doesn't wrap properly manpages/man5/aclpolicy-v10.md Question: username_0: **Describe the bug** Lines of text in the v3.1 doc pages wrap somewhere off-page, which makes using the pages difficult and frustrating. ![Screen Shot 2019-07-25 at 12 57 20 PM](https://user-images.githubusercontent.com/13004946/61904707-382ca980-aedc-11e9-9f95-186664da589c.png) The screenshot shows an example on this aclpolicy page, but I believe it is probably happening on all pages. **Source page** https://docs.rundeck.com/3.1.0-rc2/man5/aclpolicy.html Answers: username_1: This seems OK in the 3.1.0 final docs; please see https://docs.rundeck.com/3.1.0/man5/aclpolicy.html#actions-element Status: Issue closed
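For anyone debugging the same symptom on their own docs theme: the usual culprit is a code block styled with `white-space: pre` and no overflow handling. A minimal CSS sketch (the selectors are illustrative, not Rundeck's actual stylesheet):
```css
/* Let long lines in code blocks wrap instead of running off-page. */
pre, code {
  white-space: pre-wrap;      /* keep formatting but allow wrapping */
  overflow-wrap: break-word;  /* break very long tokens such as URLs */
}
```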
pelya/xserver-xsdl
1014907264
Title: XSDL on Google Cloud Question: username_0: Hi, I'm a newbie. I tried connecting a Google Cloud instance to my smartphone with XSDL: I typed the command line shown by XSDL into the Google Cloud instance and waited for it to render on XSDL, but nothing is showing. What do I need to do? P.S. I have installed xfce4 on the Google Cloud instance.
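No one answered here, but the usual pitfall is that X11 is a connect-back protocol: the Google Cloud VM has to reach the X server that XSDL runs on the phone, and a phone behind mobile/Wi-Fi NAT is normally not reachable from the cloud. One common workaround, sketched below with placeholder names (run the ssh client on the phone, e.g. in Termux; XSDL's X server listens on display :0, TCP port 6000):
```bash
# On the phone: forward the VM's port 6001 back to XSDL's port 6000.
ssh -R 6001:localhost:6000 user@VM_EXTERNAL_IP

# Then, inside that SSH session on the VM:
export DISPLAY=localhost:1   # display :1 corresponds to TCP port 6001
xfce4-session &
```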
nf-core/smrnaseq
738308386
Title: Set up AWS megatests Question: username_0: AWS megatests is now running nicely and we’re trying to set up all (most) nf-core pipelines to run a big dataset. We need to identify a set of public data to run benchmarks for the pipeline. The idea is that this will run automatically for every release of the nf-core/smrnaseq pipeline. The results will then be publicly accessible from s3 and viewable through the website: https://nf-co.re/smrnaseq/results - this means that people can manually compare differences in output between pipeline releases if they wish. We need a dataset that is as “normal” as possible, mouse or human, sequenced relatively recently and with a bunch of replicates etc. It can be a fairly large project I'm hoping that @username_1 can help here, but suggestions from anyone and everyone are more than welcome! ✋🏻 In practical terms, once decided we need to: - [ ] Upload the FastQ files to s3: `s3://nf-core-awsmegatests/smrnaseq/input_data/` (I can help with this) - [ ] Update [`test_full.config`](https://github.com/nf-core/smrnaseq/blob/dev/conf/test_full.config) to work with these file paths - [ ] Check [`.github/workflows/awsfulltest.yml`](https://github.com/nf-core/smrnaseq/blob/dev/.github/workflows/awsfulltest.yml) (should be no changes required I think?) - [ ] Merge, and try running the `dev` branch manually Answers: username_1: can we use this one: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE97285 username_0: Looks great! <details> <summary><code>download.sh</code></summary> ```bash #!/usr/bin/env bash curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/005/SRR5398625/SRR5398625.fastq.gz -o SRR5398625_GSM2560978_control_preclinic3_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/003/SRR5398623/SRR5398623.fastq.gz -o SRR5398623_GSM2560976_control_preclinic1_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/006/SRR5398626/SRR5398626.fastq.gz -o SRR5398626_GSM2560979_control_preclinic4_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/004/SRR5398624/SRR5398624.fastq.gz -o SRR5398624_GSM2560977_control_preclinic2_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/007/SRR5398627/SRR5398627.fastq.gz -o SRR5398627_GSM2560980_control_preclinic5_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/008/SRR5398628/SRR5398628.fastq.gz -o SRR5398628_GSM2560981_control_preclinic6_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/001/SRR5398631/SRR5398631.fastq.gz -o SRR5398631_GSM2560984_preclinic3_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/009/SRR5398629/SRR5398629.fastq.gz -o SRR5398629_GSM2560982_preclinic1_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/000/SRR5398630/SRR5398630.fastq.gz -o SRR5398630_GSM2560983_preclinic2_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/002/SRR5398632/SRR5398632.fastq.gz -o SRR5398632_GSM2560985_preclinic4_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/003/SRR5398633/SRR5398633.fastq.gz -o SRR5398633_GSM2560986_preclinic5_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/004/SRR5398634/SRR5398634.fastq.gz -o SRR5398634_GSM2560987_preclinic6_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/006/SRR5398636/SRR5398636.fastq.gz -o 
SRR5398636_GSM2560989_preclinic7_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/009/SRR5398639/SRR5398639.fastq.gz -o SRR5398639_GSM2560992_control_clinic3_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/005/SRR5398635/SRR5398635.fastq.gz -o SRR5398635_GSM2560988_control_preclinic7_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/008/SRR5398638/SRR5398638.fastq.gz -o SRR5398638_GSM2560991_control_clinic2_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/007/SRR5398637/SRR5398637.fastq.gz -o SRR5398637_GSM2560990_control_clinic1_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/000/SRR5398640/SRR5398640.fastq.gz -o SRR5398640_GSM2560993_control_clinic4_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/001/SRR5398641/SRR5398641.fastq.gz -o SRR5398641_GSM2560994_control_clinic5_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/003/SRR5398643/SRR5398643.fastq.gz -o SRR5398643_GSM2560996_clinic1_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/002/SRR5398642/SRR5398642.fastq.gz -o SRR5398642_GSM2560995_control_clinic6_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/004/SRR5398644/SRR5398644.fastq.gz -o SRR5398644_GSM2560997_clinic2_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/005/SRR5398645/SRR5398645.fastq.gz -o SRR5398645_GSM2560998_clinic3_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/006/SRR5398646/SRR5398646.fastq.gz -o SRR5398646_GSM2560999_clinic4_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/009/SRR5398649/SRR5398649.fastq.gz -o SRR5398649_GSM2561002_control_clinic7_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/007/SRR5398647/SRR5398647.fastq.gz -o SRR5398647_GSM2561000_clinic5_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/008/SRR5398648/SRR5398648.fastq.gz -o SRR5398648_GSM2561001_clinic6_Homo_sapiens_ncRNA-Seq.fastq.gz curl -L ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR539/000/SRR5398650/SRR5398650.fastq.gz -o SRR5398650_GSM2561003_clinic7_Homo_sapiens_ncRNA-Seq.fastq.gz ``` </details> username_0: I'll get onto syncing this to s3 ASAP username_0: ok, data is now on s3 - should be able to see it here: https://nf-co.re/smrnaseq/dev/results#smrnaseq/input_data/
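A sketch of what the `test_full.config` update could look like once the FastQs are in the bucket — the profile-name parameters follow the usual nf-core template, but the exact input parameter for this pipeline version (e.g. `input` vs. an older name) should be checked against the pipeline schema:
```nextflow
params {
    config_profile_name        = 'Full test profile'
    config_profile_description = 'Full test dataset (GSE97285) to check pipeline function'

    // FastQ files uploaded to the megatests bucket
    input    = 's3://nf-core-awsmegatests/smrnaseq/input_data/*.fastq.gz'
    genome   = 'GRCh38'
    protocol = 'illumina'
}
```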
sxs-collaboration/spectre
1121445772
Title: GH+CCE runs on wheeler Question: username_0: # Bug reports: ### Expected behavior: <!-- describe the expected behavior --> ### Current behavior: <!-- describe the current behavior and how to reproduce --> I'm trying to run the CCE-GH executable #2323 on wheeler. The GH domain is as follows ``` DomainCreator: Shell: InnerRadius: 1.9 OuterRadius: 200. InitialGridPoints: [8,8] InitialRefinement: 2 UseEquiangularMap: true AspectRatio: 1.0 WhichWedges: All TimeDependence: None RadialDistribution: [Logarithmic,Logarithmic,Logarithmic,Logarithmic,Logarithmic,Logarithmic,Logarithmic,Logarithmic,Logarithmic,Logarithmic,Logarithmic] RadialPartitioning: [3.0, 5.0, 7.0, 10.0, 20.0,40.0, 60.0,80,120,160] BoundaryConditions: OuterBoundary: ConstraintPreservingBjorhus: Type: ConstraintPreservingPhysical InnerBoundary: Outflow ``` and the CCE grid is ``` Cce: LMax: 14 ExtractionRadius: 198 NumberOfRadialPoints: 58 ``` Running on 4 nodes, the system proceeds at least one time step per second. However, if I add one more radial or angular grid point for each element (namely `InitialGridPoints: [9,8]` or `InitialGridPoints: [8,9]`), the system won't take even a single time step within five minutes. This issue doesn't happen all the time but pretty frequently, and this issue is gone after I request fewer nodes. The CCE grid doesn't have an impact on this issue. ### Environment: Using all modules in `wheeler_clang.sh` ### Detailed discussion: Answers: username_0: Using 3 nodes, the run dies after `25M`. The error message is as follows ``` Shortened stack trace is: /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild() [0x2946d58] Address for addr2line: 0x2946d58 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild() [0x2946b16] Address for addr2line: 0x2946b16 /usr/local/gcc/7.3.0/lib64/libstdc++.so.6(+0x8efe6) [0x7f317bc44fe6] Address for addr2line: 0x8efe6 /usr/local/gcc/7.3.0/lib64/libstdc++.so.6(+0x8f031) [0x7f317bc45031] Address for addr2line: 0x8f031 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild() [0x1d2ca0b] Address for addr2line: 0x1d2ca0b 
/panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_ZN22CkIndex_AlgorithmArrayI14DgElementArrayI17EvolutionMetavarsIN19GeneralizedHarmonic9Solutions9WrappedGrIN2gr9Solutions10KerrSchildEEES8_EN7brigand4listIJN8Parallel12PhaseActionsIN27GeneralizedHarmonicDefaults5PhaseELSF_0ENSB_IJN7Actions12SetupDataBoxEN14Initialization7Actions15TimeAndTimeStepIS9_EEN9evolution2dg14Initialization6DomainILm3ELb0EEENSJ_21NonconservativeSystemINS2_6SystemILm3EEEEENSM_14Initialization7Actions12SetVariablesIN6domain4Tags11CoordinatesILm3EN5Frame7LogicalEEEEENSJ_18TimeStepperHistoryIS9_EENSJ_17InitializeCcmTagsIS9_EENSJ_22InitializeCcmOtherTagsIS9_EENS2_7Actions30InitializeGhAnd3Plus1VariablesILm3EEENSJ_14AddComputeTagsINSB_IJNSZ_25MinimumGridSpacingComputeILm3ENS11_8InertialEEENS2_4Tags33ComputeLargestCharacteristicSpeedILm3ES1G_EENSZ_20SizeOfElementComputeILm3EEENSM_4Tags15AnalyticComputeILm3EN4Tags16AnalyticSolutionIS8_EENSB_IJNS5_4Tags15SpacetimeMetricILm3ES1G_10DataVectorEENS1I_2PiILm3ES1G_EENS1I_3PhiILm3ES1G_EEEEEEEEEEEENSO_7MortarsILm3EST_EEN5intrp7Actions23ElementInitInterpPointsINS26_4Tags15InterpPointInfoIS9_EEEENSJ_30RemoveOptionsAndTerminatePhaseEEEEEENSD_ISF_LSF_3ENSB_IJNS2_6gauges7Actions24InitializeDampedHarmonicILm3ELb1EEENS1B_21InitializeConstraintsILm3EEENSC_7Actions14TerminatePhaseEEEEEENSD_ISF_LSF_4ENSB_IJN9SelfStart7Actions10InitializeIST_EENSG_5LabelINS2Q_6detail10PhaseStartEEENS2R_18CheckForCompletionINS2V_8PhaseEndEST_EENSG_11AdvanceTimeENS2R_21CheckForOrderIncreaseEN3Cce7Actions17SendNextTimeToCceINS9_18CceWorldtubeTargetEEENS27_19InterpolateToTargetIS36_EENS1B_14ReceiveCCEDataIS9_EENSN_7Actions21ComputeTimeDerivativeIS9_EENS3C_24ApplyBoundaryCorrectionsIS9_EENSG_21RecordTimeStepperDataI10NoSuchTypeEENSG_7UpdateUIS3I_EEN2dg7Actions6FilterIN7Filters11ExponentialILm0EEES20_EENSG_4GotoIS2W_EENS2U_IS2Z_EENS2R_7CleanupES31_S2N_EEEEENSD_ISF_LSF_5ENSB_IJNSB_IJN9observers7Actions27RegisterEventsWithObserversENS27_31RegisterElementWithInterpolatorEEEES2N_EEEEENSD_ISF_LSF_8ENSB_IJNSG_20RunEventsAndTriggersENSG_14ChangeSlabSizeENSB_IJS37_S39_S3B_S3E_S3G_NSB_IJS3J_S3L_EEES3S_EEES31_N12PhaseControl7Actions18ExecutePhaseChangeINSB_IJNS4A_10Registrars14VisitAndReturnI31GeneralizedHarmonicTemplateBaseIS9_ELSF_6EEENS4D_31CheckpointAndExitAfterWallclockIS4G_EEEEEEEEEEEEEEEE9ElementIdILm3EEE26reg_receive_data_marshall8INSN_4Tags36BoundaryCorrectionAndGhostCellsInboxILm3EEESt4pairIS4X_I9DirectionILm3EES4R_ESt5tupleIJ4MeshILm2EESt8optionalISt6vectorIdSaIdEEES58_10TimeStepIdEEEEEiv+0) [0x1f723f0] Address for addr2line: 0x1f723f0 
/panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_ZN22CkIndex_AlgorithmArrayI14DgElementArrayI17EvolutionMetavarsIN19GeneralizedHarmonic9Solutions9WrappedGrIN2gr9Solutions10KerrSchildEEES8_EN7brigand4listIJN8Parallel12PhaseActionsIN27GeneralizedHarmonicDefaults5PhaseELSF_0ENSB_IJN7Actions12SetupDataBoxEN14Initialization7Actions15TimeAndTimeStepIS9_EEN9evolution2dg14Initialization6DomainILm3ELb0EEENSJ_21NonconservativeSystemINS2_6SystemILm3EEEEENSM_14Initialization7Actions12SetVariablesIN6domain4Tags11CoordinatesILm3EN5Frame7LogicalEEEEENSJ_18TimeStepperHistoryIS9_EENSJ_17InitializeCcmTagsIS9_EENSJ_22InitializeCcmOtherTagsIS9_EENS2_7Actions30InitializeGhAnd3Plus1VariablesILm3EEENSJ_14AddComputeTagsINSB_IJNSZ_25MinimumGridSpacingComputeILm3ENS11_8InertialEEENS2_4Tags33ComputeLargestCharacteristicSpeedILm3ES1G_EENSZ_20SizeOfElementComputeILm3EEENSM_4Tags15AnalyticComputeILm3EN4Tags16AnalyticSolutionIS8_EENSB_IJNS5_4Tags15SpacetimeMetricILm3ES1G_10DataVectorEENS1I_2PiILm3ES1G_EENS1I_3PhiILm3ES1G_EEEEEEEEEEEENSO_7MortarsILm3EST_EEN5intrp7Actions23ElementInitInterpPointsINS26_4Tags15InterpPointInfoIS9_EEEENSJ_30RemoveOptionsAndTerminatePhaseEEEEEENSD_ISF_LSF_3ENSB_IJNS2_6gauges7Actions24InitializeDampedHarmonicILm3ELb1EEENS1B_21InitializeConstraintsILm3EEENSC_7Actions14TerminatePhaseEEEEEENSD_ISF_LSF_4ENSB_IJN9SelfStart7Actions10InitializeIST_EENSG_5LabelINS2Q_6detail10PhaseStartEEENS2R_18CheckForCompletionINS2V_8PhaseEndEST_EENSG_11AdvanceTimeENS2R_21CheckForOrderIncreaseEN3Cce7Actions17SendNextTimeToCceINS9_18CceWorldtubeTargetEEENS27_19InterpolateToTargetIS36_EENS1B_14ReceiveCCEDataIS9_EENSN_7Actions21ComputeTimeDerivativeIS9_EENS3C_24ApplyBoundaryCorrectionsIS9_EENSG_21RecordTimeStepperDataI10NoSuchTypeEENSG_7UpdateUIS3I_EEN2dg7Actions6FilterIN7Filters11ExponentialILm0EEES20_EENSG_4GotoIS2W_EENS2U_IS2Z_EENS2R_7CleanupES31_S2N_EEEEENSD_ISF_LSF_5ENSB_IJNSB_IJN9observers7Actions27RegisterEventsWithObserversENS27_31RegisterElementWithInterpolatorEEEES2N_EEEEENSD_ISF_LSF_8ENSB_IJNSG_20RunEventsAndTriggersENSG_14ChangeSlabSizeENSB_IJS37_S39_S3B_S3E_S3G_NSB_IJS3J_S3L_EEES3S_EEES31_N12PhaseControl7Actions18ExecutePhaseChangeINSB_IJNS4A_10Registrars14VisitAndReturnI31GeneralizedHarmonicTemplateBaseIS9_ELSF_6EEENS4D_31CheckpointAndExitAfterWallclockIS4G_EEEEEEEEEEEEEEEE9ElementIdILm3EEE28_call_receive_data_marshall8INSN_4Tags36BoundaryCorrectionAndGhostCellsInboxILm3EEESt4pairIS4X_I9DirectionILm3EES4R_ESt5tupleIJ4MeshILm2EESt8optionalISt6vectorIdSaIdEEES58_10TimeStepIdEEEEEvPvS5C_+0x193) [0x1f72a33] Address for addr2line: 0x1f72a33 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(CkDeliverMessageFree+0x21) [0x4930481] Address for addr2line: 0x4930481 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_ZN8CkLocRec11invokeEntryEP12CkMigratablePvib+0x41) [0x4955f21] Address for addr2line: 0x4955f21 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_Z15_processHandlerPvP11CkCoreState+0x359) [0x4937e49] Address for addr2line: 0x4937e49 End shortened stack trace. Node: 2 Proc: 46 Line: 17 of /home/sma/spectre/src/Parallel/InitializationFunctions.cpp Function: auto setup_error_handling()::(anonymous class)::operator()() const Terminated due to an uncaught exception: vector::_M_default_append ``` username_1: The GR domain you are using seems to be overkill in resolution (44 radial elements by 96 angular elements, each with 512 grid points is over 2 million grid points...) username_1: Also have you run the problem with an executable compiled with build type Debug instead of Release? 
username_0: I do need more than 2 million grid points otherwise the ringdown is pretty noisy. username_0: The run died due to floating point exception after I switched to debug mode username_0: ``` ############ ERROR ############ Shortened stack trace is: /panfs/ds09/sxs/sma/restart/GH1/EvolveGhCceKerrSchild() [0xb05978d] Address for addr2line: 0xb05978d /usr/lib64/libc.so.6(+0x35670) [0x7f945c726670] Address for addr2line: 0x35670 /usr/local/openblas/0.2.18/lib/libopenblas.so.0(dgemm_kernel+0x19b8) [0x7f945dc1e5b8] Address for addr2line: 0x2dd5b8 End shortened stack trace. Node: 2 Proc: 67 Line: 23 of /home/sma/spectre/src/Utilities/ErrorHandling/FloatingPointExceptions.cpp Function: void (anonymous namespace)::fpe_signal_handler(int) Floating point exception! ############ ERROR ############ ``` username_0: The error message when address sanitizer is on. ``` ==14984==ERROR: AddressSanitizer failed to allocate 0xdfff0001000 (15392894357504) bytes at address 2008fff7000 (errno: 12) ==24349==ERROR: AddressSanitizer failed to allocate 0xdfff0001000 (15392894357504) bytes at address 2008fff7000 (errno: 12) ==6129==ERROR: AddressSanitizer failed to allocate 0xdfff0001000 (15392894357504) bytes at address 2008fff7000 (errno: 12) srun: error: wheeler085: task 0: Aborted ==14984==ReserveShadowMemoryRange failed while trying to map 0xdfff0001000 bytes. Perhaps you're using ulimit -v srun: error: wheeler087: task 2: Aborted srun: error: wheeler086: task 1: Aborted ==24349==ReserveShadowMemoryRange failed while trying to map 0xdfff0001000 bytes. Perhaps you're using ulimit -v ==6129==ReserveShadowMemoryRange failed while trying to map 0xdfff0001000 bytes. Perhaps you're using ulimit -v ``` username_2: That's almost 14 terabytes. Something is weird here. Every test seems to give a completely different error. username_2: Does the run still fail if you reduce the number of elements enough that you can run on one node? It may not be a useful run, but single-node runs are easier to debug. username_0: Yes, it still fails after I reduce the resolution and run it on a single node. ``` Evolution: InitialTime: 0.0 InitialTimeStep: 0.002 TimeStepper: AdamsBashforthN: Order: 3 DomainCreator: Shell: InnerRadius: 1.9 OuterRadius: 200. InitialGridPoints: [2,2] InitialRefinement: 0 UseEquiangularMap: true AspectRatio: 1.0 WhichWedges: All TimeDependence: None RadialDistribution: [Logarithmic,Logarithmic] RadialPartitioning: [3.0] BoundaryConditions: OuterBoundary: ConstraintPreservingBjorhus: Type: ConstraintPreservingPhysical InnerBoundary: Outflow AnalyticSolution: KerrSchild: Mass: 1.0 Spin: [0.0, 0.0, 0.0] Center: [0.0, 0.0, 0.0] EvolutionSystem: GeneralizedHarmonic: # The parameter choices here come from our experience with the Spectral # Einstein Code (SpEC). They should be suitable for evolutions of a # perturbation of a Kerr-Schild black hole. 
DhGaugeParameters: RollOnStartTime: 100000.0 RollOnTimeWindow: 100.0 SpatialDecayWidth: 50.0 Amplitudes: [1.0, 1.0, 1.0] Exponents: [4, 4, 4] DampingFunctionGamma0: GaussianPlusConstant: Constant: 0.001 Amplitude: 3.0 Width: 11.313708499 Center: [0.0, 0.0, 0.0] DampingFunctionGamma1: GaussianPlusConstant: Constant: -1.0 Amplitude: 0.0 Width: 11.313708499 Center: [0.0, 0.0, 0.0] DampingFunctionGamma2: GaussianPlusConstant: [Truncated] # sbatch Wheeler.sh export SPECTRE_BUILD_DIR=/home/sma/spectre/build/ export RUN_DIR=${PWD}/Run export SPECTRE_EXECUTABLE=${PWD}/EvolveGhCceKerrSchild export SPECTRE_INPUT_FILE=${PWD}/KerrSchildWithCce.yaml ############################################################################ # Set desired permissions for files created with this script umask 0022 export PATH=${SPECTRE_BUILD_DIR}/bin:$PATH mkdir ${RUN_DIR} cd ${RUN_DIR} # The 23 is there because Charm++ uses one thread per node for communication srun -n ${SLURM_JOB_NUM_NODES} -c 24 \ ${SPECTRE_EXECUTABLE} ++ppn 23 \ --input-file ${SPECTRE_INPUT_FILE} ``` username_2: OK, good. If you are running on one node, you should be able to log into that node and attach gdb to the process (`gdb -p <PID>`) and let it run until it fails. Don't forget to give the gdb command `catch throw` before continuing the execution so you will see C++ exceptions. I don't know how familiar you are with gdb, but you shouldn't need anything esoteric for this. Googling for a basic gdb guide should get you what you need if you're not familiar. username_0: The run failed immediately with the address sanitizer on. I didn't have a chance to try `gdb`. username_2: What if you try it with the address sanitizer off? username_0: The run fails immediately if I use debug mode The error message is as follows ``` ############ ERROR ############ Shortened stack trace is: /panfs/ds09/sxs/sma/restart/GH1/EvolveGhCceKerrSchild() [0xb05978d] Address for addr2line: 0xb05978d /usr/lib64/libc.so.6(+0x35670) [0x7f945c726670] Address for addr2line: 0x35670 /usr/local/openblas/0.2.18/lib/libopenblas.so.0(dgemm_kernel+0x19b8) [0x7f945dc1e5b8] Address for addr2line: 0x2dd5b8 End shortened stack trace. Node: 2 Proc: 67 Line: 23 of /home/sma/spectre/src/Utilities/ErrorHandling/FloatingPointExceptions.cpp Function: void (anonymous namespace)::fpe_signal_handler(int) Floating point exception! ############ ERROR ############ ``` The run starts normally if I use release mode, but it fails after `25M`. 
The error message is ``` Shortened stack trace is: /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild() [0x2946d58] Address for addr2line: 0x2946d58 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild() [0x2946b16] Address for addr2line: 0x2946b16 /usr/local/gcc/7.3.0/lib64/libstdc++.so.6(+0x8efe6) [0x7f317bc44fe6] Address for addr2line: 0x8efe6 /usr/local/gcc/7.3.0/lib64/libstdc++.so.6(+0x8f031) [0x7f317bc45031] Address for addr2line: 0x8f031 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild() [0x1d2ca0b] Address for addr2line: 0x1d2ca0b /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_ZN22CkIndex_AlgorithmArrayI14DgElementArrayI17EvolutionMetavarsIN19GeneralizedHarmonic9Solutions9WrappedGrIN2gr9Solutions10KerrSchildEEES8_EN7brigand4listIJN8Parallel12PhaseActionsIN27GeneralizedHarmonicDefaults5PhaseELSF_0ENSB_IJN7Actions12SetupDataBoxEN14Initialization7Actions15TimeAndTimeStepIS9_EEN9evolution2dg14Initialization6DomainILm3ELb0EEENSJ_21NonconservativeSystemINS2_6SystemILm3EEEEENSM_14Initialization7Actions12SetVariablesIN6domain4Tags11CoordinatesILm3EN5Frame7LogicalEEEEENSJ_18TimeStepperHistoryIS9_EENSJ_17InitializeCcmTagsIS9_EENSJ_22InitializeCcmOtherTagsIS9_EENS2_7Actions30InitializeGhAnd3Plus1VariablesILm3EEENSJ_14AddComputeTagsINSB_IJNSZ_25MinimumGridSpacingComputeILm3ENS11_8InertialEEENS2_4Tags33ComputeLargestCharacteristicSpeedILm3ES1G_EENSZ_20SizeOfElementComputeILm3EEENSM_4Tags15AnalyticComputeILm3EN4Tags16AnalyticSolutionIS8_EENSB_IJNS5_4Tags15SpacetimeMetricILm3ES1G_10DataVectorEENS1I_2PiILm3ES1G_EENS1I_3PhiILm3ES1G_EEEEEEEEEEEENSO_7MortarsILm3EST_EEN5intrp7Actions23ElementInitInterpPointsINS26_4Tags15InterpPointInfoIS9_EEEENSJ_30RemoveOptionsAndTerminatePhaseEEEEEENSD_ISF_LSF_3ENSB_IJNS2_6gauges7Actions24InitializeDampedHarmonicILm3ELb1EEENS1B_21InitializeConstraintsILm3EEENSC_7Actions14TerminatePhaseEEEEEENSD_ISF_LSF_4ENSB_IJN9SelfStart7Actions10InitializeIST_EENSG_5LabelINS2Q_6detail10PhaseStartEEENS2R_18CheckForCompletionINS2V_8PhaseEndEST_EENSG_11AdvanceTimeENS2R_21CheckForOrderIncreaseEN3Cce7Actions17SendNextTimeToCceINS9_18CceWorldtubeTargetEEENS27_19InterpolateToTargetIS36_EENS1B_14ReceiveCCEDataIS9_EENSN_7Actions21ComputeTimeDerivativeIS9_EENS3C_24ApplyBoundaryCorrectionsIS9_EENSG_21RecordTimeStepperDataI10NoSuchTypeEENSG_7UpdateUIS3I_EEN2dg7Actions6FilterIN7Filters11ExponentialILm0EEES20_EENSG_4GotoIS2W_EENS2U_IS2Z_EENS2R_7CleanupES31_S2N_EEEEENSD_ISF_LSF_5ENSB_IJNSB_IJN9observers7Actions27RegisterEventsWithObserversENS27_31RegisterElementWithInterpolatorEEEES2N_EEEEENSD_ISF_LSF_8ENSB_IJNSG_20RunEventsAndTriggersENSG_14ChangeSlabSizeENSB_IJS37_S39_S3B_S3E_S3G_NSB_IJS3J_S3L_EEES3S_EEES31_N12PhaseControl7Actions18ExecutePhaseChangeINSB_IJNS4A_10Registrars14VisitAndReturnI31GeneralizedHarmonicTemplateBaseIS9_ELSF_6EEENS4D_31CheckpointAndExitAfterWallclockIS4G_EEEEEEEEEEEEEEEE9ElementIdILm3EEE26reg_receive_data_marshall8INSN_4Tags36BoundaryCorrectionAndGhostCellsInboxILm3EEESt4pairIS4X_I9DirectionILm3EES4R_ESt5tupleIJ4MeshILm2EESt8optionalISt6vectorIdSaIdEEES58_10TimeStepIdEEEEEiv+0) [0x1f723f0] Address for addr2line: 0x1f723f0 
/panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_ZN22CkIndex_AlgorithmArrayI14DgElementArrayI17EvolutionMetavarsIN19GeneralizedHarmonic9Solutions9WrappedGrIN2gr9Solutions10KerrSchildEEES8_EN7brigand4listIJN8Parallel12PhaseActionsIN27GeneralizedHarmonicDefaults5PhaseELSF_0ENSB_IJN7Actions12SetupDataBoxEN14Initialization7Actions15TimeAndTimeStepIS9_EEN9evolution2dg14Initialization6DomainILm3ELb0EEENSJ_21NonconservativeSystemINS2_6SystemILm3EEEEENSM_14Initialization7Actions12SetVariablesIN6domain4Tags11CoordinatesILm3EN5Frame7LogicalEEEEENSJ_18TimeStepperHistoryIS9_EENSJ_17InitializeCcmTagsIS9_EENSJ_22InitializeCcmOtherTagsIS9_EENS2_7Actions30InitializeGhAnd3Plus1VariablesILm3EEENSJ_14AddComputeTagsINSB_IJNSZ_25MinimumGridSpacingComputeILm3ENS11_8InertialEEENS2_4Tags33ComputeLargestCharacteristicSpeedILm3ES1G_EENSZ_20SizeOfElementComputeILm3EEENSM_4Tags15AnalyticComputeILm3EN4Tags16AnalyticSolutionIS8_EENSB_IJNS5_4Tags15SpacetimeMetricILm3ES1G_10DataVectorEENS1I_2PiILm3ES1G_EENS1I_3PhiILm3ES1G_EEEEEEEEEEEENSO_7MortarsILm3EST_EEN5intrp7Actions23ElementInitInterpPointsINS26_4Tags15InterpPointInfoIS9_EEEENSJ_30RemoveOptionsAndTerminatePhaseEEEEEENSD_ISF_LSF_3ENSB_IJNS2_6gauges7Actions24InitializeDampedHarmonicILm3ELb1EEENS1B_21InitializeConstraintsILm3EEENSC_7Actions14TerminatePhaseEEEEEENSD_ISF_LSF_4ENSB_IJN9SelfStart7Actions10InitializeIST_EENSG_5LabelINS2Q_6detail10PhaseStartEEENS2R_18CheckForCompletionINS2V_8PhaseEndEST_EENSG_11AdvanceTimeENS2R_21CheckForOrderIncreaseEN3Cce7Actions17SendNextTimeToCceINS9_18CceWorldtubeTargetEEENS27_19InterpolateToTargetIS36_EENS1B_14ReceiveCCEDataIS9_EENSN_7Actions21ComputeTimeDerivativeIS9_EENS3C_24ApplyBoundaryCorrectionsIS9_EENSG_21RecordTimeStepperDataI10NoSuchTypeEENSG_7UpdateUIS3I_EEN2dg7Actions6FilterIN7Filters11ExponentialILm0EEES20_EENSG_4GotoIS2W_EENS2U_IS2Z_EENS2R_7CleanupES31_S2N_EEEEENSD_ISF_LSF_5ENSB_IJNSB_IJN9observers7Actions27RegisterEventsWithObserversENS27_31RegisterElementWithInterpolatorEEEES2N_EEEEENSD_ISF_LSF_8ENSB_IJNSG_20RunEventsAndTriggersENSG_14ChangeSlabSizeENSB_IJS37_S39_S3B_S3E_S3G_NSB_IJS3J_S3L_EEES3S_EEES31_N12PhaseControl7Actions18ExecutePhaseChangeINSB_IJNS4A_10Registrars14VisitAndReturnI31GeneralizedHarmonicTemplateBaseIS9_ELSF_6EEENS4D_31CheckpointAndExitAfterWallclockIS4G_EEEEEEEEEEEEEEEE9ElementIdILm3EEE28_call_receive_data_marshall8INSN_4Tags36BoundaryCorrectionAndGhostCellsInboxILm3EEESt4pairIS4X_I9DirectionILm3EES4R_ESt5tupleIJ4MeshILm2EESt8optionalISt6vectorIdSaIdEEES58_10TimeStepIdEEEEEvPvS5C_+0x193) [0x1f72a33] Address for addr2line: 0x1f72a33 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(CkDeliverMessageFree+0x21) [0x4930481] Address for addr2line: 0x4930481 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_ZN8CkLocRec11invokeEntryEP12CkMigratablePvib+0x41) [0x4955f21] Address for addr2line: 0x4955f21 /panfs/ds09/sxs/sma/restart/test/EvolveGhCceKerrSchild(_Z15_processHandlerPvP11CkCoreState+0x359) [0x4937e49] Address for addr2line: 0x4937e49 End shortened stack trace. Node: 2 Proc: 46 Line: 17 of /home/sma/spectre/src/Parallel/InitializationFunctions.cpp Function: auto setup_error_handling()::(anonymous class)::operator()() const Terminated due to an uncaught exception: vector::_M_default_append ``` username_0: The same run proceeds normally on Frontera without failure username_1: Can you do a run on wheeler using `EvolveGhKerrSchild` with the same input file (commenting out the CCE parts) username_0: The `EvolveGhKerrSchild ` one goes normally (debug mode). 
username_1: The floating point exceptions are being generated by interpolating NaNs for the time derivatives of the GH variables to the world tube. This will happen in debug mode (and not release mode) if a DataMesh/Variables/Tensor is allocated but not initialized. (In debug mode the allocation initializes the DataMesh to NaN in order to catch this problem. In release mode the executable will use whatever values happen to be in memory, leading to random behavior). My suspicion is that the culprit is the `change the order of step_actions` commit on your branch, which moved computing the time derivative to after sending the next time to CCE and interpolating to the target.
username_0: Indeed, the FPE is gone after I switch the order back. Since the Bjorhus boundary condition is applied within `ComputeTimeDerivative`, all CCE calculations need to be done before it so that I can use the Weyl scalar psi0 to complete the boundary condition. In practice, the time derivatives of the GH variables are not used by the CCE component, so their random values shouldn't matter.
username_1: So if the GH variables are not used by CCE, why are they being passed to CCE?
fdimuccio/play2-sockjs
132671156
Title: Details concerning failed xhr_tests?
Question: username_0: Could someone elaborate on the reasons why the two mentioned xhr tests fail (test_abort_xhr_streaming and test_abort_xhr_polling)? Is it caused by Netty not detecting that the connection has been closed, or is it directly related to the implementation of play2-sockjs?
Answers: username_1: Hi, the problem isn't within Netty (it works fine) but in how Play Iteratees work. When the client aborts the connection, the Iteratee doesn't detect it until data is pushed to it. This causes the tests to fail because, according to them, the sockjs session is not yet closed (it will be as soon as the heartbeat signal or any data is emitted). With Play 2.5, which is based on akka-stream, this is no longer true and those tests pass as expected; you can try using the 0.5.0-SNAPSHOT. The only tests that don't pass are the ones related to the WebSocket Hixie-76 protocol, and I have to do further investigation before reporting the bug to the Play developers.
Status: Issue closed
Sharoku-haga/anagrAmble
232250381
Title: Create the StageObj-related classes and implement part of their functionality (rendering)
Question: username_0: This is task 2 of #37. I will create the StageObj-related classes and implement part of their functionality (rendering).
- [ ] 1. Rework the ObjBase-related classes and the NormalBlock class, and implement the GroundBlock class
- [ ] 2. Create and implement the remaining classes one by one

The classes to be created follow the class diagram below.
![stageobjtype](https://cloud.githubusercontent.com/assets/20719722/26585080/0fc31ac2-4586-11e7-8d23-24baf32c6468.png)
Status: Issue closed
Answers: username_0: The work is complete, so I am closing this.
jlippold/tweakCompatible
627581240
Title: `Playing` working on iOS 13.5 Question: username_0: ``` { "packageId": "dev.hyper.playing", "action": "working", "userInfo": { "arch32": false, "packageId": "dev.hyper.playing", "deviceId": "iPhone11,2", "url": "http://cydia.saurik.com/package/dev.hyper.playing/", "iOSVersion": "13.5", "packageVersionIndexed": true, "packageName": "Playing", "category": "Tweaks", "repository": "Dynastic Repo", "name": "Playing", "installed": "1.1.2", "packageIndexed": true, "packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.", "id": "dev.hyper.playing", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Get a notification when a new song starts! — Playing is a tweak that allows you to get a notification when a new song starts.", "latest": "1.1.2", "author": "ConorTheDev", "packageStatus": "Working" }, "base64": "<KEY> "chosenStatus": "working", "notes": "iPhone XS #unc0ver" } ```<issue_closed> Status: Issue closed
ga-wdi-exercises/project4
243834567
Title: cannot figure out how to connect backend seed data to front end
Question: username_0: **Below is my server.js**
```js
'use strict'

var express = require(`express`);
// var mongoose = require(`mongoose`);
var bodyParser = require(`body-parser`);
var mongoose = require("../db/connection");

var app = express();
var router = express.Router();
var port = process.env.API_PORT || 3001;

app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

router.get('/', function(req, res) {
  res.json({ message: 'API Initialized!' });
});

app.use('/api', router);

app.listen(port, function() {
  console.log(`api running on port ${port}`);
});
```
**Below is seeds.js**
```js
var mongoose = require("./connection.js")
var seedData = require("./seeds.json")

var Post = mongoose.model("Post")
var Comment = mongoose.model("Comment")

Post.remove({}).then(() => {
  Post.collection.insert(seedData).then(() => {
    process.exit()
  })
})

Comment.remove({}).then(() => {
  Comment.collection.insert(seedData).then(() => {
    process.exit()
  })
})
```
Answers: username_1: what are you using for your front end?
username_0: React
username_1: Just messaged you on Slack. Let's meet in person
Status: Issue closed
vimeo/psalm
1158264202
Title: `InvalidReturnStatement` reported when using a template type of array shape as return type, such as `@psalm-template TReturn of array{foo: int}`
Question: username_0: Consider the following example:

```php
<?php

/**
 * @psalm-template TParams as array{
 *     licenseId: int
 * }
 */
final class MyReportDefinition
{
    public function __construct(private int $licenseId) {}

    /**
     * @psalm-return TParams
     */
    public function getParams(): array
    {
        return [
            'licenseId' => $this->licenseId,
        ];
    }
}
```

Psalm reports ( https://psalm.dev/r/119b68d58f ):

```
Psalm output (using commit 766fc17):

ERROR: [InvalidReturnStatement](https://psalm.dev/128) - 16:16 - The inferred type 'array{licenseId: int}' does not match the declared return type 'TParams:MyReportDefinition as array{licenseId: int}' for MyReportDefinition::getParams

ERROR: [InvalidReturnType](https://psalm.dev/011) - 12:22 - The declared return type 'TParams:MyReportDefinition as array{licenseId: int}' for MyReportDefinition::getParams is incorrect, got 'array{licenseId: int}'

Psalm detected 1 [fixable issue(s)](https://psalm.dev/docs/manipulating_code/fixing/)
```

Overall, I think this is related to the quirkiness of `@template` with `array{ ... }` shape type definitions, but I don't have better words to match it with other potentially pre-existing issues. Is this outright a bug, or a limitation?
Answers: username_1: I'm not sure I see what the goal is here, but I'd expect TParams to be bounded at some point. Otherwise, you basically have a more complex version of this: https://psalm.dev/r/64b3326670, which outputs the same errors.
username_1: The issue is the same, IMHO. How do you intend to implement a method that can satisfy this: https://psalm.dev/r/bf23b1d902? Remember that `array{licenseId: int}` is equivalent to `array{licenseId: int, otherKey: mixed}`, so `array{licenseId: int, otherKey: int}` is a valid value for your template.
mars-project/mars
667110734
Title: [BUG] df.groupby().size() failed when executed in distributed mode Question: username_0: <!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> **Describe the bug** `AttributeError: _sortorder` was raised in the tiling of aggregation. **To Reproduce** ``` Python In [3]: import pandas as pd In [4]: data = pd.DataFrame(np.arange(20).reshape((4, 5)) + 1, columns=['a', 'b', 'c', 'd', 'e']) In [6]: df = md.DataFrame(data) In [7]: df.groupby(['a','b']).size().execute() Unexpected exception occurred in enter_build_mode.<locals>.inner. Traceback (most recent call last): File "/Users/username_0/Documents/mars_dev/mars/mars/utils.py", line 353, in _wrapped return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/utils.py", line 493, in inner return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/scheduler/graph.py", line 633, in prepare_graph self._target_tileable_datas + fetch_tileables, tileable_graph) File "/Users/username_0/Documents/mars_dev/mars/mars/utils.py", line 399, in _wrapped return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/utils.py", line 493, in inner return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/tiles.py", line 350, in build tileables, tileable_graph=tileable_graph) File "/Users/username_0/Documents/mars_dev/mars/mars/utils.py", line 399, in _wrapped return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/utils.py", line 493, in inner return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/tiles.py", line 263, in build self._on_tile_failure(tileable_data.op, exc_info) File "/Users/username_0/Documents/mars_dev/mars/mars/tiles.py", line 302, in inner raise exc_info[1].with_traceback(exc_info[2]) from None File "/Users/username_0/Documents/mars_dev/mars/mars/tiles.py", line 243, in build tiled = self._tile(tileable_data, tileable_graph) File "/Users/username_0/Documents/mars_dev/mars/mars/tiles.py", line 338, in _tile return super()._tile(tileable_data, tileable_graph) File "/Users/username_0/Documents/mars_dev/mars/mars/tiles.py", line 203, in _tile tds = on_tile(tileable_data.op.outputs, tds) File "/Users/username_0/Documents/mars_dev/mars/mars/scheduler/graph.py", line 615, in on_tile return self.context.wraps(handler.dispatch)(first.op) File "/Users/username_0/Documents/mars_dev/mars/mars/context.py", line 69, in h return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/utils.py", line 399, in _wrapped return func(*args, **kwargs) File "/Users/username_0/Documents/mars_dev/mars/mars/tiles.py", line 119, in dispatch tiled = op_cls.tile(op) File "/Users/username_0/Documents/mars_dev/mars/mars/dataframe/groupby/aggregation.py", line 481, in tile return cls._tile_with_tree(op) File "/Users/username_0/Documents/mars_dev/mars/mars/dataframe/groupby/aggregation.py", line 412, in _tile_with_tree index = out_df.index_value.to_pandas() File "/Users/username_0/Documents/mars_dev/mars/mars/dataframe/core.py", line 283, in to_pandas return self._index_value.to_pandas() File "/Users/username_0/Documents/mars_dev/mars/mars/dataframe/core.py", line 197, in to_pandas sortorder=self._sortorder, names=self._names) AttributeError: _sortorder ```<issue_closed> Status: Issue closed
stfwi/rsgauges
701355125
Title: Development issues
Question: username_0: Hello, I was trying to address a bug I had in the mod and tried setting up the dev environment. Unfortunately, the build failed saying it couldn't find the keystore file. I feel this should be optional for contributors; after all, we don't need to sign the jars since we aren't releasing new versions of the mod. I don't have the exact line since I was using a different computer.
Answers: username_1: Hey man, aye, I'll check if that can be circumvented; it should only be a condition in the build.gradle. Ty for the PR!
username_1: Ok, done it is ;-).
Status: Issue closed
10up/theme-scaffold
328490552
Title: Skip to content link is missing
Question: username_0: Yes, but there isn't any HTML markup like `<main>` to add it to.
Answers: username_1: Do you feel like an ID should be added to enable inclusion of the skip link @username_0?
username_0: Yes, but there isn't any HTML markup like `<main>` to add it to.
username_1: Which leads to my next question of _should we add landmarks?_ :thinking:
username_0: Absolutely yes. I understand why the theme is almost blank. But at the same time, it should provide the minimum markup that every site needs to have. Here is the [a11y handbook page for ARIA landmarks](https://make.wordpress.org/accessibility/handbook/best-practices/markup/aria-landmarks/).
username_2: This is a tough one because we don't know the makeup of the site that will be built. Best practice docs might be a better place for it. I think there's an open issue to add it as well.
username_2: Yeah, it's right here: https://github.com/10up/Engineering-Best-Practices/issues/150
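For reference, a minimal sketch of the skip link plus landmark markup being discussed above. The class names and the `primary` ID are illustrative assumptions, not the scaffold's actual markup:

```html
<a class="skip-link screen-reader-text" href="#primary">Skip to content</a>

<header role="banner"><!-- site header --></header>

<main id="primary" role="main">
  <!-- page content -->
</main>

<footer role="contentinfo"><!-- site footer --></footer>
```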
hikalium/liumos
461038388
Title: GPD Micro PC
Question: username_0:
```
UEFI 2.6: PI 1.4
8192MB Memory
Celeron N4100 CPU @ 1.10GHz
```
Answers: username_0: Failed to load files via SimpleFileSystemProtocol
username_0: The cause was that the filesystem mounted by OpenVolume was not the one containing the BOOTX64.EFI that had been booted. It is still unclear how to work around this behavior. I did confirm that it seems to boot correctly when started via Boot Order Override from the UEFI settings screen.
username_0: https://www.storange.jp/2017/09/macosefi.html
username_0:
```
Max CPUID: 0x18
Max Extended CPUID: 0x80000008
family  : 0x6
model   : 0x7A
stepping: 0x1
Intel(R) Celeron(R) N4100 CPU @ 1.10GHz
MAX_PHY_ADDR: 0x27
phy_addr_mask: 0x7FFFFFFFFF
CLFLUSH supported: True
CLFLUSHOPT supported: True
```
username_0:
```
/02/00/0 10EC:8168 RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
BAR0(I/O): 0x0000E001
BAR2(Mem): 0xA1104004 type = 2
BAR4(Mem): 0xA1100004 type = 2
/01/00/0 8086:3165 (Unknown)
BAR0(Mem): 0xA1200004 type = 2
/00/1F/1 8086:31D4 (Unknown)
BAR0(Mem): 0xA1316004 type = 2
BAR4(I/O): 0x0000F041
/00/1F/0 8086:31E8 (Unknown)
/00/1C/0 8086:31CC (Unknown)
BAR0(Mem): 0xA1318004 type = 2
BAR2(Mem): 0xA1317004 type = 2
/00/19/2 8086:31C6 (Unknown)
BAR0(Mem): 0xA131A004 type = 2
BAR2(Mem): 0xA1319004 type = 2
/00/19/1 8086:31C4 (Unknown)
BAR0(Mem): 0xA131C004 type = 2
BAR2(Mem): 0xA131B004 type = 2
/00/18/2 8086:31C0 (Unknown)
BAR0(Mem): 0xFEA10004 type = 2
BAR2(Mem): 0x00000004 type = 2
/00/18/3 8086:31EE (Unknown)
BAR0(Mem): 0xA1320004 type = 2
BAR2(Mem): 0xA131F004 type = 2
/00/18/1 8086:31BE (Unknown)
BAR0(Mem): 0xA1322004 type = 2
BAR2(Mem): 0xA1321004 type = 2
/00/18/0 8086:31BC (Unknown)
BAR0(Mem): 0xA1324004 type = 2
BAR2(Mem): 0xA1323004 type = 2
/00/17/2 8086:31B8 (Unknown)
BAR0(Mem): 0xA1328004 type = 2
BAR2(Mem): 0xA1327004 type = 2
/00/17/1 8086:31B6 (Unknown)
BAR0(Mem): 0xA132A004 type = 2
BAR2(Mem): 0xA1329004 type = 2
/00/02/0 8086:3185 (Unknown)
BAR0(Mem): 0xA0000004 type = 2
BAR2(Mem): 0x9000000C type = 2
BAR4(I/O): 0x0000F001
/00/12/0 8086:31E3 (Unknown)
BAR0(Mem): 0xA1314000 type = 0
BAR1(Mem): 0xA1336000 type = 0
BAR2(I/O): 0x0000F091
BAR3(I/O): 0x0000F081
BAR4(I/O): 0x0000F061
BAR5(Mem): 0xA1335000 type = 0
/00/17/0 8086:31B4 (Unknown)
BAR0(Mem): 0xA132C004 type = 2
BAR2(Mem): 0xA132B004 type = 2
/00/16/3 8086:31B2 (Unknown)
BAR0(Mem): 0xA132E004 type = 2
BAR2(Mem): 0xA132D004 type = 2
/00/16/2 8086:31B0 (Unknown)
BAR0(Mem): 0xA1330004 type = 2
BAR2(Mem): 0xA132F004 type = 2
/00/15/0 8086:31A8 Intel XHCI Controller
BAR0(Mem): 0xA1300004 type = 2
[Truncated]
BAR0(Mem): 0xA1334004 type = 2
BAR2(Mem): 0xA1333004 type = 2
/00/0F/0 8086:319A (Unknown)
BAR0(Mem): 0xA1337004 type = 2
/00/00/0 8086:31F0 (Unknown)
/00/19/0 8086:31C2 (Unknown)
BAR0(Mem): 0xA131E004 type = 2
BAR2(Mem): 0xA131D004 type = 2
/00/0E/0 8086:3198 (Unknown)
BAR0(Mem): 0xA1310004 type = 2
BAR4(Mem): 0xA1000004 type = 2
/00/16/1 8086:31AE (Unknown)
BAR0(Mem): 0xA1332004 type = 2
BAR2(Mem): 0xA1331004 type = 2
/00/00/1 8086:318C (Unknown)
BAR0(Mem): 0x80000004 type = 2
/00/17/3 8086:31BA (Unknown)
BAR0(Mem): 0xA1326004 type = 2
BAR2(Mem): 0xA1325004 type = 2
```
MicrosoftDocs/azure-docs
741901248
Title: Table Storage Documentation - is "Number of entities in a partition" correct? Question: username_0: This page lists the following table storage limit: **Resource** | **Target** ------------ | ------------- Number of entities in a partition | Limited only by the capacity of the storage account It was my understanding that there is some limit on the physical size of a partition that is smaller than the storage account or table size limits. Compare that to the documented [CosmosDB Partition Size](https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits) where it is documented as 20GB: **Resource** | **Target** ------------ | ------------- Maximum storage across all items per (logical) partition | 20 GB I've opened this issue to double check and ask whether this documentation for the table storage partition size is correct? As it reads, it's saying that a single partition can be 500TiB. One way I could see this documentation being clarified would be something like _"The number of entities in partition is only limited by the partition's physical size limit, which is X size."_ Thanks in advance for checking! --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 68cfcaa9-f221-e16d-4778-909f74e6135c * Version Independent ID: 8f279858-4a77-6239-e5d1-915d64e9eaab * Content: [Scalability and performance targets for Table storage - Azure Storage](https://docs.microsoft.com/en-us/azure/storage/tables/scalability-targets) * Content Source: [articles/storage/tables/scalability-targets.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/storage/tables/scalability-targets.md) * Service: **storage** * Sub-service: **tables** * GitHub Login: @tamram * Microsoft Alias: **tamram** Answers: username_1: @username_0 Thanks a lot for reaching out to us, our team will review the query and circle back at the earliest. username_2: Thanks for the feedback. I have assigned the issue to the content author to investigate further and update the document as appropriate. Status: Issue closed
OpenPEPPOL/poacc-upgrade-3
556209680
Title: LineItem/Item/Description and Name for Order transaction
Question: username_0: Does https://github.com/OpenPEPPOL/poacc-upgrade-3/blob/master/structure/syntax/ubl-order.xml contain the correct sequence for the Description and Name elements in LineItem/Item? I'm not sure the element sequence is checked by the XSD, but UBL 2.1 puts Description before Name: http://www.datypic.com/sc/ubl21/e-cac_Item.html. Thanks!
HowardHinnant/date
313301081
Title: from_stream(stream&, fmt, duration&) rejects units longer than hours
Question: username_0: Specifically, [this line](https://github.com/username_1/date/blob/master/include/date/date.h#L7391) assumes that durations may not be measured in days, months, or years.
Answers: username_1: This is by design. `from_stream` is tailored to the [parsing flags](https://howardhinnant.github.io/date/date.html#from_stream_formatting), which include %H, %M and %S for durations. `from_stream` is not set up to handle more general units of durations such as weeks, fortnights, and centuries. I have in the past coded a generalized stream extraction operator for durations that basically reversed the effects of the stream insertion operator. That remains a possibility for the future. But for now, `from_stream` is really just `strptime` with better manners.
username_0: Sorry, I can't understand your reasoning. `from_stream(stream&, same_format_string, time_point&)` parses years just fine. What's the difference?
username_1: It doesn't parse the _duration_ `years`. It parses the partial calendar type `year`, which can be controlled by the flags %Y, %y, %C, %G, or %g. There is no flag to parse a duration equal to the length of the average civil year (365.2425 days). I admit to being a _little_ inventive with the [POSIX strptime flags](http://pubs.opengroup.org/onlinepubs/009695399/functions/strptime.html), but I tried to keep that to a minimum. The rationale for that strategy is that I wanted the format/parse flags to already be familiar to my customers, as opposed to them having to learn a completely new "mini language" for parsing.
username_0: To clarify my question: a duration becomes a time point only when a time system is specified, e.g. '2008-04-12T07:39:15.57' is still a duration, and '2008-04-12T07:39:15.57 TDB' is a time point. If a string does not contain a time system name, the library can't know whether it is a duration or a time point. My use case: I read a time stamp from an image file, and it is defined in a spacecraft clock system. The system was synchronised with UTC at a point in the past, but now those time stamps are just offsets in its drifting time system. I want to reflect that in my program and return duration objects, but I can't read them directly and have to read `time_point` instead.
username_1: Thanks for the clarification of the problem you are facing. It sounds most interesting. May I suggest that you parse into a `local_time<Duration>` instead of a `duration`? `local_time` is a `time_point`, but with an as-yet-unspecified clock. It is much like the `void*` of pointers. This gives you a type system to keep these timestamps distinct from both durations and UTC (`sys_time`/`utc_time`). If at some point you discover the correct offset (say during a syncing process), you can convert the `local_time` to a `sys_time` by extracting the `.time_since_epoch()` and applying the offset correction. The parse flags will all work with `local_time` of whatever precision you require.
username_0: Do I understand you correctly that for durations you want to use only stable units, because `from_stream()` cannot inform or obtain a hint about what to do with leap seconds and days?
username_1: Actually there is a `from_stream` overload that takes `utc_time`. `utc_time` is just like `sys_time`, but takes leap seconds into account. The rationale for the current design is that `from_stream` evolves from the POSIX C function `strptime`, which most of my clients are migrating from.
`strptime` has a relatively well-specified set of parsing flags, and I have tried to deviate from these established flags as little as possible. When I do deviate, the deviation is in ways that most people will see as minor corrections to the POSIX spec, as opposed to a redesign.
username_0: Thank you, `local_time` suits my purposes better. Yet I can't understand how to handle leap seconds/days in that case. The spacecraft clock follows UTC rules, and when reading from a string I was assuming that, in converting the string to a duration, leap seconds would be counted in the same way as they are for the UTC system. But an unspecified clock can't have leap seconds, right?
username_1: That is true. The parse into `local_time` will fail if you try to parse 60 in the seconds field. You'll need to parse into `utc_time` to handle that. To keep your type safety, perhaps it would be best to immediately turn your `utc_time` into some other time_point, perhaps `local_time`, or perhaps `chrono::time_point<spacecraft_clock, some_duration>`. In the latter case, you could write your own `from_stream` overload that calls the `utc_time` `from_stream` overload in order to implement the parsing. I don't know enough about your problem to know if the `spacecraft_clock` idea is a good one or not.
username_0: Thank you! A `spacecraft_clock` class plus i/o overloads seems to be the most logical solution.
Status: Issue closed
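To make that closing suggestion concrete, here is a rough sketch of the `spacecraft_clock` idea from this thread. The clock definition, its millisecond precision, and the epoch handling are illustrative assumptions; the overload simply delegates to the `utc_time` overload so that leap seconds ("60" in the seconds field) still parse:

```cpp
#include <chrono>
#include <istream>
#include "date/tz.h"  // date::utc_time and its from_stream overload

// Hypothetical clock for the spacecraft timestamps discussed above.
struct spacecraft_clock
{
    using duration   = std::chrono::milliseconds;
    using time_point = std::chrono::time_point<spacecraft_clock, duration>;
};

// Parse via the utc_time overload, then re-label the result as a
// spacecraft_clock time_point. Any offset/drift correction discovered
// during syncing would be applied to ut.time_since_epoch() here.
template <class CharT, class Traits>
std::basic_istream<CharT, Traits>&
from_stream(std::basic_istream<CharT, Traits>& is, const CharT* fmt,
            spacecraft_clock::time_point& tp)
{
    date::utc_time<spacecraft_clock::duration> ut;
    date::from_stream(is, fmt, ut);
    if (!is.fail())
        tp = spacecraft_clock::time_point{ut.time_since_epoch()};
    return is;
}
```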
structure7/waterLeakSensor
209911236
Title: Re-evaluate while (Blynk.connect() == false)
Question: username_0:
```
while (Blynk.connect() == false) {
  // Wait until connected
}
```
If I take this out will the sketch just push on? Need to test. Maybe I could just put this argument elsewhere in my sketch (where Blynk is actually being run).
Answers: username_0: Like Costa did something like this:
```
while (Blynk.connect() == false) {
  if (((millis() / 1000) - timeout) > 10) { // issue msg if not connected to Blynk in more than 10 seconds
    firstrow = "Check the";  // for OLED
    secondrow = " Router ";  // for OLED
    if (DEBUG) {
      Serial.println("Check the router");
    }
    break;
  }
}
```
(http://community.blynk.cc/t/code-isnt-working-without-connected-blynk/5624/10?u=username_0)
username_0: Especially considering I want MQTT to punch through regardless.
SparkDevNetwork/Rock
262877113
Title: Add Giving Pattern frequency to Data View
Question: username_0: ### Prerequisites

* [x] Put an X between the brackets on this line if you have done all of the following:
    * Can you reproduce the problem on a fresh install or the [demo site](http://rock.rocksolidchurchdemo.com/)?
    * Did you include your Rock version number and [client culture](https://github.com/SparkDevNetwork/Rock/wiki/Environment-and-Diagnostics-Information) setting?
    * Did you [perform a cursory search](https://github.com/issues?q=is%3Aissue+user%3ASparkDevNetwork+-repo%3ASparkDevNetwork%2FSlack) to see if your bug or enhancement is already reported?

### Description

It would be awesome to include the giving frequency in data views / reporting (currently included in the Giving Analytics pattern filter). (Currently you can filter on the 52-week and 6-week calculated giving frequencies.)

![screenshot 2017-10-04 10 52 45](https://user-images.githubusercontent.com/374209/31191176-35de094a-a8f2-11e7-95e6-1cb032f10f19.png)

### Versions
* **Rock Version:** Prealpha
* **Client Culture Setting:** en-US

Answers: username_1: We now have a new Ideas area on our Rock Community website (https://community.rockrms.com/community) for posting, discussing and voting on enhancements to Rock. Since this issue is tagged as an enhancement, we are closing it and requesting that, if you still feel it has merit, you add it as an idea. This allows the full community to speak into it and help us determine its priority.
Status: Issue closed
python/planet
411881254
Title: Add feed Question: username_0: **My Name/Blog Name**: <NAME> **My Blog RSS or ATOM Python specific feed url**: https://www.username_0.com/feeds/python.tag.atom.xml ## I checked the following required validations: (mark all 4 with [x]) 1. [x] My feed is valid, I checked using https://validator.w3.org/feed/check.cgi?url=MY_FEED_URL and it is valid! 2. [x] My feed is a **Python Specific** feed, e.g: I am proposing the filtered tag or categorized feed url 3. [x] I only post content to this feed which is related to the Python language and its components and libraries. Or content that I consider interesting for the Python community. 4. [x] I am aware that once my feed is added it can take a few hours to start being fetched (according to the server update cycle) Thanks in advance for adding my feed to the PythonPlanet! :+1: Answers: username_1: Could you update the `config/config.ini` file and submit a PR? Thank you Status: Issue closed
feross/simple-peer
437864609
Title: Add simple way to capture first signal emitted Question: username_0: Docs state that the `on('signal')` should be installed right after initializing the SimplePeer object so the initial signal won't be missed in the case that this is the initiator side of the connection. I ran into a situation where the moment I initialize the object, and the moment I start to actually move around signals, are in two pretty separate places. I still need to get that initial signal event somehow, so my workaround is: 1. install an event listener right after initialization to capture any signal event, which adds the signal obj to some cache (array) 2. when time comes, empty cache into the signaling mechanism 3. install a real signal event handler which sends the signal through the signaling mechanism 4. remove initial signal-to-cache event handler That's a hassle I would love to eliminate. The way I see it, if SimplePeer could expose a `getFirstSignal` API, and guarantee that no more signals will be created until first signal is sent to peer, that will make things simpler. I would love to share my actual situation and workaround, but I didn't get it to work yet. Please nag me about it if it is relevant to the discussion. What is your take on this? Maybe there is an easier workaround on my side that I missed? Answers: username_1: EventEmitters typically don't cache events. If we did add caching of the `signal` event, we should be consistent for all events (ie you might not want to miss an important `stream` or `data` event, which can fire before you attach a listener). Taking an action that will fire an event before you're listening for that event is unusual in event-driven programming. I'd be interested to hear why you need to instantiate peer objects before setting up your signalling channel. It's really easy to do this caching yourself (but I don't recommend it): ```javascript var cache = [] const peer = new Peer() peer.on('signal', (data) => { // immediately after instantiating peer if (signalChannel.ready) { // ideally, your signaling channel should be ready before ever creating the peer signalChannel.write(data) } else { cache.push(data) } }) signalChannel.on('ready', ()=>{ cache.forEach(data => signalChannel.write(data)) cache = [] }) ``` Status: Issue closed username_0: After reviewing, I found that after all I could setup the peer obj and the handlers at the same routine, so no need to patch simple-peer. thank you for your attention to my subject!
dr4xor/flutter_mopub
1014967251
Title: Do you plan to support this repo in the future?
Question: username_0: Hello, I am a novice Flutter developer and want to use MoPub in my app. There is no official Flutter plugin for that, but I found yours. May I have your permission to use this in my app, and do you plan to support this repo in the future? If so, I would ask you to upload this repo to pub.dev so that more people can help improve it and more people have a plugin to use. Thanks!
ikedaosushi/tech-news
499788144
Title: One month after the AWS outage, the issues it brought to light
Question: username_0: One month after the AWS outage, the issues it brought to light.
Easier to use, more visual! The Nikkei digital edition is completely revamping its design and page layout. You can start by viewing the new top page.
https://ift.tt/2ndxkJj
dotnet/runtime
559150683
Title: Should Switch.System.Net.DontEnableSchSendAuxRecord have any effect in .NET Core 3.1? Question: username_0: I am migrating a .NET 4.6 application to .NET Core 3.1. The application makes a TLS 1.0 connection to a legacy device that is not going to receive any updates (it doesn't support TLS 1.1 or TLS 1.2) The issue that I'm facing is related to a mitigation introduced in https://support.microsoft.com/en-ca/help/3155464/ms16-065-description-of-the-tls-ssl-protocol-information-disclosure-vu The workaround that I had in .NET 4.6 was to set this AppContext switch to true: ```csharp AppContext.SetSwitch("Switch.System.Net.DontEnableSchSendAuxRecord" , true); ``` This switch does not seem to be doing anything in .NET Core 3.1.The communication to the device breaks as it does not know how to handle the mitigation (Mitigation is to split the first application data to 1 byte then send the remaining data). Would it be possible to disable the mitigation in .NET Core 3.1? I hate to be asking this question, but we are stuck with legacy devices and customers are probably not going to upgrade them. Thanks Answers: username_1: The switch doesn't exist in .NET Core. username_0: @username_1 Thanks for the confirmation. Is there a workaround in .NET Core? username_1: There is not. In general, .NET Core doesn't have those AppContext or registry key switches to temporarily turn off security fixes (such as SCH_SEND_AUX_RECORD) that .NET Framework did. username_0: Thank you. Status: Issue closed
juniorUsca/get-things-done
924542468
Title: Test that the Google login with Firebase works, with a screenshot in the README.md
Question: username_0: The technologies used to get this functionality working were installed first with the following commands:
npm install firebase
yarn add firebase-admin
Answers: username_1: It's not clear exactly what you want to convey with this issue; please explain it better.
username_0: Well, I'm publishing this issue so that people can see how the Google login functionality via Firebase should look, and I'm documenting it through the README, since logging in with Google doesn't send you to any view yet, but it can be seen via a console.log.
username_0: I also forgot to mention that to achieve this, the dependencies listed above had to be installed.
username_0: Also, I'm going to submit this issue in a single pull request.
username_2: Don't forget to document your code.
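For context, a minimal sketch of the kind of Firebase Google sign-in described above (the v8-style Firebase API), logging the signed-in user to the console instead of navigating to a view. This is illustrative, not the project's actual code, and assumes `firebase.initializeApp(config)` has already run:

```js
import firebase from 'firebase/app';
import 'firebase/auth';

// Sign in with Google and log the result, matching the console.log-based
// check described in this thread.
const provider = new firebase.auth.GoogleAuthProvider();

firebase.auth().signInWithPopup(provider)
  .then((result) => {
    // result.user holds the signed-in Google account
    console.log('Signed in as:', result.user.displayName, result.user.email);
  })
  .catch((error) => {
    console.error('Google sign-in failed:', error.code, error.message);
  });
```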
silverstripe/silverstripe-framework
284128364
Title: BASE_URL / Director.alternate_base_url issues & inconsistency
Question: username_0: Affects 3.6.3. See also https://github.com/silverstripe/silverstripe-framework/issues/5109.

## Issue 1:

```yml
Director:
  alternate_base_url: 'http://mysite.test'
```

This results in "Undefined index: path" here: https://github.com/silverstripe/silverstripe-framework/blob/3.6.3/control/Session.php#L358

Adding a trailing slash fixes the issue. I can't find any documentation for `Director.alternate_base_url` which indicates whether or not a trailing slash should be present.

## Issue 2:

```php
define('BASE_URL', 'http://mysite.test');
```

Visiting `http://mysite.test?flush` results in a redirection to the wrong URL: `http://mysite.test/http:/mysite.test/?flush=&flushtoken=1234`

This is because `ParameterConfirmationToken` treats the `BASE_URL` constant as a relative path, not an absolute URL. `BASE_URL` appears to be treated like a relative path in most cases, though the [code docs for it](https://github.com/silverstripe/silverstripe-framework/blob/3.6.3/core/Constants.php#L11) suggest that it should be an absolute URL. The code to calculate `BASE_PATH` if it's not defined will also always return a directory name.

## Issue 3:

`BASE_URL` is used as a fallback for `Director.alternate_base_url` when calling `Director::baseURL()`, so they should be consistent. Adding a trailing slash to `BASE_URL` results in double-slashes in URLs, and removing the trailing slash from `Director.alternate_base_url` results in issue 1 above.

## Suggested action:

To save a massive refactor to support relative/absolute urls and trailing/non-trailing slashes for both, I'm advocating mostly documentation changes and simple patches.

- Update the code docs for `BASE_URL` to indicate that it should be a relative path to the document root from the domain name (with no trailing slash);
- Add documentation for `Director.alternate_base_url` to indicate that a trailing slash should be present;
- Bonus round: patch `Director::baseURL()` to force a trailing slash

Answers: username_1: `BASE_URL` and `alternative_base_url` have been pretty inconsistent and all round terrible for a while. It's part of the reason I originally had `SS_HOST` in SS 4 to remove this weird ambiguity around whether these vars should be absolute or not. See #6337.
username_2: This is still an issue because although we have SS_BASE_URL in SS4, this setting is only a fallback. So there still appears to be [no `.env` setting to tell SS the correct protocol, domain, and path](https://docs.silverstripe.org/en/4/getting_started/environment_management/) to use for the BASE tag. SS seldom gets the BASE tag correct because it doesn't know about load balancers, CDNs, and WAFs that are terminating TLS and/or proxying to an origin domain name and/or rewriting the base path. There ought to be a `.env` setting that can actually override SS's frequently incorrect guess at the protocol and/or domain name.
username_2: This was our standard fix for SS3, we included this for every site to fix the protocol and domain in the base href, without also breaking the cookie path.
```
Config::inst()->update('Director', 'alternate_base_url', (rtrim(getenv('WEBSITE_URL'), '/') . 
'/'));
// Need to set cookie_path to dodge a bug in inst_start() in framework/control/Session.php
Config::inst()->update('Session', 'cookie_path', '/');
```
username_1: @username_2 this is possible behind load balancers and other layers that terminate SSL - you just have to make sure that your load balancer / proxy is correctly whitelisted with SilverStripe and that the correct `X-Forwarded-*` headers are passed along
username_1: That doesn't change the fact that the base tag is a fallback, that's correct. This is because using it as a source of truth will break other things like forcing www / https as well as figuring out the actual domain name being used. If you don't want to have SS_BASE_URL as a fallback you should be using `Director.alternate_base_url` in the config (as you are), though you'd want to set this in yaml for performance reasons and not PHP.
username_1: Well, yes, I agree, I don't see it as the application's role either, but whilst those features exist they have to work. The problem with making `SS_BASE_URL` the source of truth is that it can lead to problems when accessing the site from another domain or protocol (eg: if I access over https but the SS_BASE_URL is http, then the base tag would be written with http, causing mixed content issues). In SS4 I believe you can use env vars to populate config by wrapping them in back ticks. See https://github.com/silverstripe/silverstripe-framework/blob/4.0.0/src/Control/Director.php#L96
username_3: Alternate_base_url is a relic intended only for testing to override constants. Maybe since we are using environment vars now we should encourage this instead of this config.
username_0: The back tick syntax currently only works for `Injector` service definitions (https://docs.silverstripe.org/en/4/developer_guides/extending/injector/#using-constants-as-variables): `Director.default_base_url` is a one-off special case where we convert the backticks manually
username_1: ah :(
username_4: This came up internally again today, I think we need to support it for all config rather than just in injector definitions
username_1: have you set up trusted proxies? see https://docs.silverstripe.org/en/4/developer_guides/security/secure_coding/#request-hostname-forgery
username_5: Ah, no, I haven't - need to look at that one. Thanks!
username_0: This won't be fixed for SS3
Status: Issue closed
username_2: Hi @username_0, this is still an open issue for SS4. Although the original report was for SS3, it still hasn't been fixed in SS4 either.
username_6: I'm having a nightmare today launching our first site since making the switch to SS4. Our sites sit behind an nginx proxy which does the SSL stuff, so SilverStripe thinks it is using http when to the world it is using https. This causes lots of obvious problems which we used to get around in SS3 with alternate_protocol. The introduction of environment variables is a step in the right direction, but here is the bit I can't understand:
```
# vendor/silverstripe/framework/src/includes/constants.php
104 if ($base) {
105
106     // Strip relative path from SS_BASE_URL
107     return rtrim(parse_url($base, PHP_URL_PATH), '/');
108 }
```
So even though we explicitly declare the correct SS_BASE_URL (something like https://example.com), silverstripe passes it to parse_url and extracts only the path. This means that BASE_URL is always defined as an empty string, which means when SS goes to generate the base tag it has to fall back to its old way of trying to figure it out itself, which doesn't work.
I can't understand why we are setting the BASE_URL to the url path. Surely a BASE_URL includes the full URL including scheme and host? Am I missing something? username_6: this is my ugly hack in mysite/_config.php ``` Director::config()->set('alternate_base_url', Environment::getEnv('SS_BASE_URL')); ``` This has at least got me live today username_6: Thank you @username_4 we will look into that. Any idea as to why SS is stripping the host and protocol out of the BASE_URL in constants.php?
imdrasil/jennifer.cr
255852648
Title: Change default behavior of #includes method
Question: username_0: - Change the default behavior of the `#includes` method to that of `#preload`, and remove the latter.
- Add a method `#extends` with the old behavior of `#includes`.

These changes allow newcomers from the Rails world to switch to Jennifer more easily, since the default behavior of ActiveRecord's `#includes` is to preload records rather than to join.
Answers: username_0: will go into the next release
Status: Issue closed
Phanx/wow-addon-updater
312840490
Title: Enable compatibility with Lua 5.3.4 (latest version)
Question: username_0: When I run `lua update.lua` with Lua 5.3.4 I get the following error message:
```
lua: sites.lua:233: invalid escape sequence near ''<a href="fileinfo.php\?'
stack traceback:
	[C]: in function 'dofile'
	update.lua:11: in main chunk
	[C]: in ?
```
This can be fixed by escaping the backslash. Line 233 in sites.lua would then be `local id, name = tr:match('<a href="fileinfo.php\\?[^"]*id=(%d+)[^"]*">(.-)</a>')`. This doesn't seem to affect Lua 5.1 negatively, as far as I can see.
_Disclaimer:_ I have only tested update.lua so far.
_Edit:_ I forgot that I fixed another string earlier: on line 94 of sites.lua the backslash has to be removed. It then reads: `if url:find("/downloads/landing.php") then`
Answers: username_0: Added pull request if needed: #6
Status: Issue closed
username_1: Actually this one was caused by me escaping things with a backslash out of habit (I primarily work in JavaScript) instead of a percent sign.
aesculus/EVTO-App-Feedback
217001563
Title: In the 'What Car are You Driving' dialog, construct a meaningful car name Question: username_0: For example, if they've specified a Model X 90D, construct 'My Model X 90D' rather than just 'My Car'. Folks initializing with multiple cars will appreciate that. Also, when naming a car, you need to check that it's not duplicating an existing car name. Answers: username_1: This is fixed in V1.1.2 username_1: Will need to validate it's not a duplicate name username_1: Fixed in V1.1.3 Status: Issue closed
goharbor/harbor
537438208
Title: docker login with an OIDC account is unauthorized
Question: What can we help you with? @steven-zou Hello, harbor-core did not register oidc_auth at startup:
![image](https://user-images.githubusercontent.com/18571340/70789061-41f07400-1dcd-11ea-8a15-5deebfe82581.png)
Logging in with `docker login -u cloud-operator -p qgj6282hzf943he0zgpxksq6q70sfjjx harbor91.iop.com` fails and reports unauthorized:
![image](https://user-images.githubusercontent.com/18571340/70789136-6ba99b00-1dcd-11ea-9690-014caed16bf3.png)
The core logs:
![image](https://user-images.githubusercontent.com/18571340/70789181-7f550180-1dcd-11ea-95e4-4ca3a88b132b.png)
Answers: username_1: @username_0 what version of build are you using?
username_0: @username_1 I'm using 1.8.1
username_1: is `cloud-operator` onboarded via the OIDC authn flow, and is the password a valid CLI secret?
username_0: The OIDC user can log in to the Harbor console; the CLI password was obtained from the Harbor console by following the documentation, as shown below:
![image](https://user-images.githubusercontent.com/18571340/70885759-bbc66e80-2014-11ea-98f7-6c54825fdd86.png)
username_0: @username_1 Found the problem. My OIDC user is cloud-operator, but once it reaches Harbor it becomes cloud_operator; that's where the problem lies.
username_2: it seems an OIDC user can't log in directly from the CLI; it has to use the "secret" after logging in from the GUI, right?
username_1: @username_0 Harbor places restrictions on usernames, so a conversion is applied during onboarding. @username_2 yes, because the CLI does not have support for the SSO authentication flow. Closing this issue; the problem is fixed.
Status: Issue closed
DP-1313/data_pipelining
564378773
Title: npm run dev isn't working
Question: username_0:
```
[ wait ]  starting the development server ...
[ info ]  waiting on http://localhost:3000 ...
Port 3000 is already in use. Use `npm run dev -- -p <some other port>`.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] dev: `next`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] dev script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/ubuntu/.npm/_logs/2020-02-13T01_08_21_029Z-debug.log
```
This error comes up and it just won't run.
Answers: username_0: fix!
Status: Issue closed
Rebirth-of-the-Night/Rebirth-Of-The-Night
988369255
Title: Ore Dictionary Update
Question: username_0: Certain mod recipes are not accepting materials from Biomes O' Plenty:
- all wind chimes from Better With Mods
- all stone and wood bridges from Macaw's Bridges

Recommended solutions:
- Complex: create a recipe for every wood type added by mods
- Simple: have the ID of any mod-added wood treated as the oak wood ID for the chime and bridge recipes
- Simpler: create a recipe that can convert any mod-added wood into oak wood (same as the Underground Biomes stone-to-coade-stone conversion recipe)

I'll make sure to update this as I find other recipes, but so far these are the incompatible ones I have found.
dotnet/aspnetcore
775986681
Title: UriHelper.BuildAbsolute: opportunity for performance improvement
Question: username_0: ## Summary

`UriHelper.BuildAbsolute` creates an intermediary string for the combined path that is used only for concatenating with the other components to create the final URL. It also uses a non-pooled `StringBuilder` that is instantiated on every invocation. Although optimized in size, it is a heap allocation with an intermediary buffer.

```csharp
public static string BuildAbsolute(
    string scheme,
    HostString host,
    PathString pathBase = new PathString(),
    PathString path = new PathString(),
    QueryString query = new QueryString(),
    FragmentString fragment = new FragmentString())
{
    if (scheme == null)
    {
        throw new ArgumentNullException(nameof(scheme));
    }

    var combinedPath = (pathBase.HasValue || path.HasValue)
        ? (pathBase + path).ToString()
        : "/";

    var encodedHost = host.ToString();
    var encodedQuery = query.ToString();
    var encodedFragment = fragment.ToString();

    // PERF: Calculate string length to allocate correct buffer size for StringBuilder.
    var length = scheme.Length + SchemeDelimiter.Length + encodedHost.Length
        + combinedPath.Length + encodedQuery.Length + encodedFragment.Length;

    return new StringBuilder(length)
        .Append(scheme)
        .Append(SchemeDelimiter)
        .Append(encodedHost)
        .Append(combinedPath)
        .Append(encodedQuery)
        .Append(encodedFragment)
        .ToString();
}
```

## Motivation and goals

This method is frequently used in hot paths like redirect and rewrite rules.

# Detailed design

## StringBuilder_WithoutCombinedPathGeneration

Just by not generating the intermediary `combinedPath`, there are memory usage improvements when the number of components is higher. There are also time improvements in those cases, but it's worse in the other cases.

## String_Concat_WithArrayArgument

Given that the final URL is composed of more than 4 parts, the use of `string.Concat` incurs an array allocation. But it still always performs better in terms of time and memory usage than using a `StringBuilder`.

## String_Create

`string.Create` excels here in comparison to all the other options. It was created exactly for these use cases. (A sketch of this approach appears below, after this thread.)

[Truncated]
```csharp
{
    foreach (var host in hosts)
    {
        foreach (var basePath in basePaths)
        {
            foreach (var path in paths)
            {
                foreach (var query in queries)
                {
                    foreach (var fragment in fragments)
                    {
                        yield return new object[]
                        {
                            new HostString(host),
                            new PathString(basePath),
                            new PathString(path),
                            new QueryString(query),
                            new FragmentString(fragment),
                        };
                    }
                }
            }
        }
    }
}
```

Answers: username_1: Send the PR using string.Create
username_0: @username_1, this will have a direct benefit on #28899 and #28903. Similar issues: #28904 and #28906.
username_1: Thanks @username_0 !
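As referenced above, a minimal sketch of the `string.Create`-based approach. This is illustrative only, not the actual PR: it takes plain strings instead of `HostString`/`PathString`/etc., and the `UriBuildSketch`/`Write` names are invented.

```csharp
using System;

public static class UriBuildSketch
{
    // Compute the exact final length once, then write each component
    // directly into the new string's buffer: no combined-path string,
    // no StringBuilder, no params-array allocation.
    public static string BuildAbsolute(
        string scheme, string host, string pathBase,
        string path, string query, string fragment)
    {
        var hasPath = pathBase.Length > 0 || path.Length > 0;
        var pathLength = hasPath ? pathBase.Length + path.Length : 1; // bare "/"

        var length = scheme.Length + 3 /* "://" */ + host.Length
                     + pathLength + query.Length + fragment.Length;

        return string.Create(
            length,
            (scheme, host, pathBase, path, query, fragment, hasPath),
            static (span, s) =>
            {
                Write(ref span, s.scheme);
                Write(ref span, "://");
                Write(ref span, s.host);
                if (s.hasPath)
                {
                    Write(ref span, s.pathBase);
                    Write(ref span, s.path);
                }
                else
                {
                    Write(ref span, "/");
                }
                Write(ref span, s.query);
                Write(ref span, s.fragment);
            });
    }

    // Copies value into the front of span and advances the span past it.
    private static void Write(ref Span<char> span, string value)
    {
        value.AsSpan().CopyTo(span);
        span = span.Slice(value.Length);
    }
}
```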
Cimpress/react-cimpress-comment
282126291
Title: Adjust the component to the latest API changes
Question: username_0: There were some API changes based on this issue: https://cimpress.githost.io/trdelnik/comment-service/issues/21
It would be nice to update the React component to match those.
Answers: username_1: @username_0 what exactly is missing?
username_0: There don't seem to be any breaking changes, actually. When I created this issue in December the component didn't work with the latest version, but I'm not sure why. The API changes I had in mind should not affect it at all: it's not using PUT, and it doesn't seem to be POST-ing on `/resources` with a creator field in the `comment` object. So it should be fine.
username_1: I think it is using PUT for editing an existing comment.
username_0: Yes, but that's on a different endpoint. That's fine.
Status: Issue closed
username_1: ok - it is not doing that. closing!
data-skeptic/bot-survey-engine
276179642
Title: Error during orders dialog Question: username_0: ``` ...BotBuilder:prompt-text - Session.send() ...BotBuilder:prompt-text - Session.sendBatch() sending 1 message(s) {"type":"send","message":"For privacy reasons, I won't confirm I have your order. But assuming it's there, what seems to be the issue?","user":"User"} ChatConnector: message received. {"type":"recieve","message":"baladf","user":"User"} UniversalBot("*") routing "baladf" from "emulator" Library("BotBuilder").findRoutes() explanation: ActiveDialog(0.5) ...BotBuilder:prompt-text - Prompt.returning(baladf) ...BotBuilder:prompt-text - Session.endDialogWithResult() ..store - waterfall() step 5 of 5 ReferenceError: ses is not defined at Array.waterfall (/Users/username_0/git/ds/dataskeptic-bot/dialogs/store.js:50:3) at /Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/dialogs/WaterfallDialog.js:67:39 at next (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/dialogs/WaterfallDialog.js:92:21) at WaterfallDialog.beforeStep (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/dialogs/WaterfallDialog.js:99:9) at WaterfallDialog.doStep (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/dialogs/WaterfallDialog.js:61:14) at WaterfallDialog.dialogResumed (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/dialogs/WaterfallDialog.js:46:14) at Session.endDialogWithResult (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/Session.js:358:28) at PromptText.Prompt.invokeIntent (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/dialogs/Prompt.js:331:21) at PromptText.Prompt.replyReceived (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/dialogs/Prompt.js:147:18) at Session.routeToActiveDialog (/Users/username_0/git/ds/dataskeptic-bot/node_modules/botbuilder/lib/Session.js:525:24) ..store - ERROR: ses is not defined ..store - Session.endConversation() Session.sendBatch() sending 2 message(s) {"type":"send","message":"Oops. Something went wrong and we need to start over.","user":"User"} {"type":"send","user":"User"} ``` Answers: username_0: Opened by accident, this should be in the other repo Status: Issue closed
opencontainers/runc
508308600
Title: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"process_linux.go:396: setting cgroup config for procHooks process caused \\\\\\\"failed to write 100000 to cpu.cfs_period_us Question: username_0: Hello, I am facing below error on my environment, and doesn't seem to be somehow reproducible. `Oct 16 14:24:19 agnsrlnicp1n10 hyperkube[1421]: E1016 14:24:19.003982 1421 kuberuntime_manager.go:744] container start failed: RunContainerError: failed to start container "a52afb440febae5ffc83430349aa38b801bbf4de331dcd124af4d63695f72b5a": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:396: setting cgroup config for procHooks process caused \\\"failed to write 100000 to cpu.cfs_period_us: write /sys/fs/cgroup/cpu,cpuacct/kubepods/pod248fb7bf-f05b-11e9-8ad1-005056b7990b/a52afb440febae5ffc83430349aa38b801bbf4de331dcd124af4d63695f72b5a/cpu.cfs_period_us: invalid argument\\\"\"": unknown Oct 16 14:24:19 agnsrlnicp1n10 hyperkube[1421]: E1016 14:24:19.004037 1421 pod_workers.go:186] Error syncing pod 248fb7bf-f05b-11e9-8ad1-005056b7990b ("sla-collector-77cbd9cbb9-9q88h_sladev(248fb7bf-f05b-11e9-8ad1-005056b7990b)"), skipping: failed to "StartContainer" for "collector" with RunContainerError: "failed to start container \"a52afb440febae5ffc83430349aa38b801bbf4de331dcd124af4d63695f72b5a\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"process_linux.go:430: container init caused \\\"process_linux.go:396: setting cgroup config for procHooks process caused \\\\\\\"failed to write 100000 to cpu.cfs_period_us: write /sys/fs/cgroup/cpu,cpuacct/kubepods/pod248fb7bf-f05b-11e9-8ad1-005056b7990b/a52afb440febae5ffc83430349aa38b801bbf4de331dcd124af4d63695f72b5a/cpu.cfs_period_us: invalid argument\\\\\\\"\\\"\": unknown"` Distribution: **Ubuntu 16.04** Kernel Version: **4.4.0-164-generic** ``` runc --version runc version 1.0.0-rc8 commit: <PASSWORD> spec: 1.0.1-dev ``` Answers: username_1: Any updates here? ! username_2: Hi. I have been experiencing this issue for some weeks now. I'd appreciate some related information and fix. username_3: Can you provide some more information -- such as how often this happens, what type of workload you're trying to run, and so on. It's a bit difficult to figure out the root cause of an issue where the bug report says that the problem isn't reproducible. In the hopes of finding something out, I checked where you can get an `EINVAL` from when setting `cfs_period_us` but it didn't really help too much: * If the `cfs_period_us` value would overflow a u64 when represented in nanoseconds -- which isn't true for `100000`. ``` if ((u64)cfs_period_us > U64_MAX / NSEC_PER_USEC) return -EINVAL; ``` * If the cgroup being set is the root cgroup, which definitely isn't true ``` if (tg == &root_task_group) return -EINVAL; ``` * If the period is smaller than `min_cfs_quota_period` (`1ms`) -- but `100000us` is `100ms`. ``` /* * Ensure we have at some amount of bandwidth every period. This is * to prevent reaching a state of large arrears when throttled via * entity_tick() resulting in prolonged exit starvation. */ if (quota < min_cfs_quota_period || period < min_cfs_quota_period) return -EINVAL; ``` * If the period is larger than `max_cfs_quota_period` (`1s`) -- but `100000us` is `100ms`. 
```
/*
 * Likewise, bound things on the otherside by preventing insane quota
 * periods. This also allows us to normalize in computing quota
 * feasibility.
 */
if (period > max_cfs_quota_period)
    return -EINVAL;
```
And that's it. All the other `EINVAL`s within the `cpu` cgroup code are unrelated to setting `cfs_period_us`. So given that I can't tell what error path might be being hit, there's not much I can do without any more information. This could be a kernel bug for all I know.
username_0: @username_3 thanks for your response, could this be related to https://github.com/kubernetes/kubernetes/issues/72878 ? Unfortunately the kernel patch is not backported, and I will not be able to test that kernel.
username_0: Closing the issue after the kernel was patched. The issue no longer exists.
Status: Issue closed
EdinburghGenomics/Analysis-Driver
386917211
Title: Ensure all files have been transferred before starting run processing
Question: username_0: In some cases, some BCL files were missing at the beginning of the run processing because the transfer was lagging behind. We should check that all the files are present and, if they're not, add another 5-minute wait.
Answers: username_1: Wasn't there also a problem with files being present but incomplete?
username_0: The current process does not differentiate between missing and incomplete files, so in short, I don't know. We should obviously check for both presence and completeness.
username_2: @username_0 @username_1 Do you happen to have an Asana issue related to this issue, so I can pinpoint where the process is failing please?
username_2: Following a discussion with @username_0 today, the function that needs to be changed was confirmed as "check_bcls" in analysis_driver/quality_control/bcl_validation.py.
username_2: One possible approach would be to calculate the number of BCLs expected for a run, and then to check the number of BCLs on the file system to confirm whether this matches. This check would be in a while loop, followed by a 5-minute sleep, which would allow any file transfers which are in progress to complete. An upper waiting-time limit should be set, to prevent the while loop from continuing forever. (A sketch of this loop follows below.)
Status: Issue closed
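As referenced above, a minimal sketch of the proposed waiting loop. This is not the actual `check_bcls` implementation; the function name, the glob pattern, and the expected-count argument are assumptions for illustration.

```python
import time
from pathlib import Path


def wait_for_bcls(run_dir, expected_bcl_count, interval=300, max_wait=7200):
    """Poll until all expected BCLs are on disk, or give up after max_wait seconds."""
    waited = 0
    while True:
        # count BCL files currently visible on the file system
        found = len(list(Path(run_dir).rglob('*.bcl*')))
        if found >= expected_bcl_count:
            return True  # completeness/size checks could also go here
        if waited >= max_wait:
            return False  # upper limit reached; some files never arrived
        time.sleep(interval)  # give in-flight transfers time to finish
        waited += interval
```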
ScandinavianSection-UCLA/Nexus2
381002599
Title: Book view tabs should return to their last page when re-selected Question: username_0: Currently when you leave a book view and return to its tab, it reverts to the chapter that it was first loaded to, even if you switch the page. The last viewed page should be re-loaded. Answers: username_0: So this works fine on Safari, but saves one page behind where it should be saved on Firefox. Status: Issue closed
masashi-y/abduction_kbc
444866586
Title: Pretrained models are missing
Question: username_0: Hi, I successfully compiled coqlib.v on Ubuntu 18.04 on AWS EC2. The pretrained models (but also the directory cl.naist.jp/~username_1/resources/ ), however, seem to be missing...
Answers: username_1: Hello. Thank you for playing with my code. I uploaded the models to Google Drive and changed the links in the README accordingly. Please download them from there!
username_0: Successfully downloaded. Thanks!! I'll try following the directions ^^
Status: Issue closed
cypress-io/cypress
367983556
Title: Cypress run, screenshots show wrong character encoding Question: username_0: 
### Current behavior:
![39aa79b3d48a232c8912d08598897c33](https://user-images.githubusercontent.com/35747908/46637965-e2f6f580-cb5e-11e8-8525-e6d9af33c390.png)
Everything else is fine, tests are running normally and the output/results are also fine. Only screenshots/videos show the wrong character encoding. Running 'cypress open' normally on a Windows machine works perfectly fine. Using the docker image cypress/base:8 is also perfectly fine.
### Desired behavior:
Normal character encoding (UTF-8?) on the headless Electron runner (screenshots)
### Steps to reproduce:
Running the following Dockerimage jenkinsxio/builder-nodejs:0.0.388 https://github.com/jenkins-x/builder-nodejs/blob/v0.0.388/Dockerfile . Installed Cypress via yarn add cypress --dev, then just cypress run
### Versions
Cypress v3.1.0
Dockerimage jenkinsxio/builder-nodejs:0.0.388 https://github.com/jenkins-x/builder-nodejs/blob/v0.0.388/Dockerfile
Browser Electron
Answers: username_1: @username_0 Hey, are you able to run the tests with `cypress:open` and select the Electron browser there to verify the encoding works fine there and also that screenshots taken there have characters shown correctly?
<img width="321" alt="screen shot 2018-10-09 at 2 14 52 pm" src="https://user-images.githubusercontent.com/1271364/46689392-bd550500-cbcd-11e8-87ab-f2b308f75553.png">
username_0: @username_1 Hi, locally on my Windows machine, using cypress:open with Electron, the encoding works fine! I am not sure how I can get screenshots while using cypress:open... (I tried using cy.screenshot(), but that only takes a picture of the browser.) Also, the screenshot from the opening post is from a failed test while using cypress:run (in a docker image). The weird encoding only affects the 'left' part of the screenshot; the attached browser page is perfectly fine though!
username_1: @username_0 To take a screenshot during `cypress:open` of the left side, you could make a test fail or try `cy.screenshot({ capture: 'runner' })`
username_2: Did you get anywhere with that, @username_0? I'm facing the same issue.
username_0: @username_2, unfortunately no. cypress:open (on my Windows machine) and the screenshots from it have normal encoding. During the Jenkins CI run in the Linux container (cypress:run), it still has the weird encoding.
username_3: Seeing the same issue using Jenkins 2.138.1 on a CentOS 7 VM; running Cypress 3.1.0 in headless mode with the Electron browser version 59 through `npx cypress run`. The screenshot below shows garbled characters on the status pane and the browser pane. Any screenshots taken of test failures show the web app has rendered with a readable character set. Any idea if it might be an issue with fonts installed on the system running the test?
![cypress-gobbledygook](https://user-images.githubusercontent.com/11302134/48216174-d40b8a80-e338-11e8-9ad0-629f86d2e61d.png)
username_4: You need to add the proper fonts to your Docker image. I had the same issue and added some fonts to the Dockerfile. I have to look it up when I am back at the office, since I don't have the Dockerfile on my current laptop.
username_1: Hey @username_4 - this would be very helpful to share. @username_2 @username_3 @username_0 Can you confirm the proper fonts are installed in your system?
username_3: @username_1 I started using one of the docker images provided by Cypress with a built-in Chrome browser and haven't reproduced the issue.
username_5: I just stumbled over the same issue with a Docker image running on RHEL7. For people struggling with this too, the only thing I had to do was install a monospace font. I chose Roboto, which is fairly easy to install: `yum install -y google-roboto-mono-fonts`
w3c/activitystreams
314113535
Title: width, height in Image Question: username_0: Example 80 has `Image` objects with `width` and `height` properties. I don't think these are actually allowed in `Image`. ```json { "@context": "https://www.w3.org/ns/activitystreams", "summary": "A simple note", "type": "Note", "content": "A simple note", "icon": [ { "type": "Image", "summary": "Note (16x16)", "url": "http://example.org/note1.png", "width": 16, "height": 16 }, { "type": "Image", "summary": "Note (32x32)", "url": "http://example.org/note2.png", "width": 32, "height": 32 } ] } ``` Answers: username_1: oh wow, that's an oversight. Pretty certain a previous draft allowed for it. The domain for both `width` and `height` should include `Link` and `Image`. username_0: Example 51 shows an `Image` with an `url` property that's a `Link`, which *can* have a width and height. ```json { "@context": "https://www.w3.org/ns/activitystreams", "type": "Image", "name": "Cat Jumping on Wagon", "url": [ { "type": "Link", "href": "http://example.org/image.jpeg", "mediaType": "image/jpeg" }, { "type": "Link", "href": "http://example.org/image.png", "mediaType": "image/png" } ] } ``` Status: Issue closed
pandas-dev/pandas
282776401
Title: No automatic type casting in complete indexing of a large MultiIndex Question: username_0: #### Code Sample, a copy-pastable example if possible
```python
In [2]: pd.MultiIndex.from_product([[1., 2.], range(5000)]).get_loc((1, 0))
Out[2]: 0

In [3]: pd.MultiIndex.from_product([[1., 2.], range(5001)]).get_loc((1., 0))
Out[3]: 0

In [4]: pd.MultiIndex.from_product([[1., 2.], range(5001)]).get_loc(1)
Out[4]: slice(0, 5001, None)

In [5]: pd.MultiIndex.from_product([[1., 2.], range(5001)]).get_loc((1, 0))
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-5-a8f6fcc9f5d9> in <module>()
----> 1 pd.MultiIndex.from_product([[1., 2.], range(5001)]).get_loc((1, 0))

/home/nobackup/repo/pandas/pandas/core/indexes/multi.py in get_loc(self, key, method)
2141 key = _values_from_object(key)
2142 key = tuple(map(_maybe_str_to_time_stamp, key, self.levels))
-> 2143 return self._engine.get_loc(key)
2144
2145 # -- partial selection or non-unique index

/home/nobackup/repo/pandas/pandas/_libs/index.pyx in pandas._libs.index.MultiIndexHashEngine.get_loc (pandas/_libs/index.c:15856)()
645 return algos.pad_object(values, other, limit=limit)
646
--> 647 cpdef get_loc(self, object val):
648 if is_definitely_invalid_key(val):
649 raise TypeError("'{val}' is an invalid key".format(val=val))

/home/nobackup/repo/pandas/pandas/_libs/index.pyx in pandas._libs.index.MultiIndexHashEngine.get_loc (pandas/_libs/index.c:15703)()
654
655 try:
--> 656 return self.mapping.get_item(val)
657 except TypeError:
658 raise KeyError(val)

/home/nobackup/repo/pandas/pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.MultiIndexHashTable.get_item (pandas/_libs/hashtable.c:24621)()
1446 return False
1447
-> 1448 cpdef get_item(self, object key):
1449 cdef:
1450 khiter_t k

/home/nobackup/repo/pandas/pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.MultiIndexHashTable.get_item (pandas/_libs/hashtable.c:24576)()
1460 return loc
1461 else:
-> 1462 raise KeyError(key)
1463
1464 cpdef set_item(self, object key, Py_ssize_t val):

KeyError: (1, 0)
```
#### Problem description
Related to #18519 - this is an inherent limit of the current design of the engine for large ``MultiIndex``es, and an [improved](https://github.com/pandas-dev/pandas/issues/18519#issuecomment-347323053) ``MultiIndexEngine`` should solve this too.
[Truncated]
feather: 0.3.1
matplotlib: 2.0.0
openpyxl: 2.3.0
xlrd: 1.0.0
xlwt: 1.3.0
xlsxwriter: 0.9.6
lxml: 4.1.1
bs4: 4.5.3
html5lib: 0.999999999
sqlalchemy: 1.0.15
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: 0.2.1
</details>
Status: Issue closed
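Until the engine is reworked, a user-side workaround is to cast each element of the lookup key to the dtype of its index level before calling `get_loc`, so that large (hash-based) and small (object-based) indexes behave the same way. A sketch, written against the behaviour shown above:

```python
import pandas as pd

def get_loc_casted(mi, key):
    # Cast each key element to its level's dtype so that, e.g., the int 1
    # matches the float 1.0 stored in a float64 level.
    casted = tuple(lvl.dtype.type(k) for lvl, k in zip(mi.levels, key))
    return mi.get_loc(casted)

mi = pd.MultiIndex.from_product([[1., 2.], range(5001)])
print(get_loc_casted(mi, (1, 0)))  # 0, instead of raising KeyError
```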
cloudfoundry/bosh
254348713
Title: Ability to trigger monit reload Question: username_0: In some deployments, after changing persistent storage (e.g. config files), I need to reload the processes. Most of the time I end up doing a `bosh deploy --recreate`. I would prefer to issue a _monit_ reload with bosh, as it would take a lot less time. Answers: username_1: Hi @username_0, `monit reload` is used for reloading the configuration for the `monit` daemon, which lives in `/var/vcap/monit`. This should not be tied to changing persistent storage. If you want to restart the processes that `monit` is managing, you can use `bosh restart` to do so. I'm going to close this issue. Please feel free to reopen this issue if you have any further questions. Best, @username_1, CF BOSH Status: Issue closed
lsds/sgx-lkl
625574519
Title: Compile flag variables are confusing (SGXLKL_CFLAGS vs SGXLKL_CFLAGS_EXTRA etc.) Question: username_0: The SGXLKL_CFLAGS/SGXLKL_CFLAGS_EXTRA makefile variables are quite confusing, as it's not clear from the names which one is used where. It seems like SGXLKL_CFLAGS is used *only* for the *.c files in tools/ and SGXLKL_CFLAGS_EXTRA for all other SGX-LKL code (host and enclave), but excluding third-party dependencies (for which there is THIRD_PARTY_CFLAGS). I suggest replacing the above SGXLKL_* variables with new `SGXLKL_HOST_CFLAGS` and `SGXLKL_ENCLAVE_CFLAGS` variables. Similarly, there should be `THIRD_PARTY_HOST_CFLAGS` and `THIRD_PARTY_ENCLAVE_CFLAGS`. @username_1 Does this make sense to you? Am I missing something? Answers: username_1: @username_0 won't this problem go away once https://github.com/lsds/sgx-lkl/issues/326 is done? username_0: I hope so :) You can very easily do nasty stuff in CMake as well though. username_0: Closing this, I'm trusting @davidchisnall to do the right thing. Status: Issue closed
nipreps/fmriprep
905029276
Title: Incorrect interpolation of discrete labels in the FreeSurfer anatomical segmentation Question: username_0: This issue is in the anatomical processing pipeline of fmriprep.
### What version of fMRIPrep are you using?
20.2.1
### What kind of installation are you using? Containers (Singularity, Docker), or "bare-metal"?
Singularity container derived directly from Docker hub image
### What is the exact command-line you used?
N/A
### Have you checked that your inputs are BIDS valid?
Yes
### Did fMRIPrep generate the visual report for this particular subject? If yes, could you share it?
Yes, can provide if requested.
### Can you find some traces of the error reported in the visual report (at the bottom) or in *crashfiles*?
N/A
### Are you reusing previously computed results (e.g., FreeSurfer, Anatomical derivatives, work directory of previous run)?
No, first run on raw scans.
### fMRIPrep log
Can be provided if necessary.
Our T1w scans have a voxel size of 1.0x1.0x1.2 mm3. Scans are processed correctly, with FreeSurfer resampling the volumes to 1 mm isotropic resolution for internal use. fmriprep makes the segmentations conveniently available as *_desc-aparcaseg_dseg.nii.gz and *_desc-aseg_dseg.nii.gz in the fmriprep/subjectID/anat/ folder, resampled to match the native resolution of the T1w scan. However, an interpolation method is used that is not suitable for volumes with discrete labels. On the left is an example of the aparcaseg_dseg produced by fmriprep. On the right is the original aparc+aseg from the FreeSurfer output.
![image](https://user-images.githubusercontent.com/5820515/119954521-5a435700-bf9f-11eb-8ce0-2e78df64523f.png)
![image](https://user-images.githubusercontent.com/5820515/119954550-629b9200-bf9f-11eb-9186-0eb31597d6c7.png)
Answers: username_1: I'm seeing the same issue. I'm also using Singularity with fmriprep 20.2.1.
username_2: I saw that also with singularity fmriprep 20.2.1, as described in #2387.
username_3: Hi all, just a note that we are working on this. Apologies for the slow responses... It's quite a busy time.
username_2: Thank you @username_3 !
username_2: Same thing with singularity fmriprep 20.2.2.
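For reference, resampling a discrete label volume requires nearest-neighbour (order-0) interpolation; any continuous interpolant will blend neighbouring label values, as in the left image above. A minimal sketch of regenerating a native-resolution segmentation correctly with nibabel (file names are examples, and `resample_from_to` needs scipy installed):

```python
import nibabel as nib
from nibabel.processing import resample_from_to

aseg = nib.load("aparc+aseg.mgz")    # discrete labels, FreeSurfer 1 mm grid
t1w = nib.load("sub-01_T1w.nii.gz")  # native-resolution target grid

# order=0 selects nearest-neighbour interpolation, which preserves the
# integer label values instead of averaging them.
resampled = resample_from_to(aseg, t1w, order=0)
nib.save(resampled, "desc-aparcaseg_dseg_native.nii.gz")
```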
microsoft/pylance-release
754345193
Title: "Unexpected token at end of expression" in f-string Question: username_0: ## Environment data - Language Server version: 2020.11.2 - OS and version: MacOS 10.15.7 - Python version: 3.8.6 ## Expected behaviour When using string formatting alongside f-string debugging, no error should be shown as this code is valid in Python 3.8 and above. Example: `f'{a = :.2f}'` ## Actual behaviour A red underline appears inside this f-string. <img width="198" alt="Screenshot 2020-12-01 at 12 09 05" src="https://user-images.githubusercontent.com/3770312/100738781-054acd80-33ce-11eb-8f41-7aa26ad43a11.png"> ## Code Snippet Input: ```python a = 3.1415 print(f'{a:.2f}') print(f'{a = }') print(f'{a = :.2f}') ``` Output: ``` 3.14 a = 3.1415 a = 3.14 ```
scala/bug
223677198
Title: Specialization regression in 2.12 Question: username_0: It seems that some specialized methods are not getting properly generated in Scala 2.12. I noticed this when porting Reactors to 2.12. Here is a simplified set of classes (from [this file](https://github.com/reactors-io/reactors/blob/master/reactors-core/shared/src/main/scala/io/reactors/Events.scala)): ``` trait Observer[@specialized(Int, Long, Double) T] { def react(x: T, hint: Any): Unit } trait Events[@specialized(Int, Long, Double) T] { def onEvent(observer: T => Unit): Unit = { onReaction(new Observer[T] { def react(x: T, hint: Any): Unit = observer(x) }) } def onReaction(obs: Observer[T]): Unit } trait Push[@specialized(Int, Long, Double) T] extends Events[T] { private[reactors] var demux: AnyRef = null def onReaction(obs: Observer[T]): Unit = { demux = obs } protected[reactors] def reactAll(value: T, hint: Any) { demux match { case null => case obs: Observer[_] => // some code ... } } } class Emitter[@spec(Int, Long, Double) T] extends Push[T] with Events[T] { private var closed = false def react(x: T): Unit = react(x, null) def react(x: T, hint: Any): Unit = if (!closed) { reactAll(x, hint) } } ``` I also have a test that instruments calls to `BoxesRuntime` and measures the amount of boxing: ``` def benchOnX(sz: Int): Unit = { var sum = 0 val emitter = new Emitter[Int] emitter.onEvent(sum += _) var i = 0 while (i < sz) { emitter.react(i) i += 1 } } ``` The boxing happens in the loop above, in the call to `react`. Inspection of the `benchOnX` bytecode revealed no boxing is going on above. The following code for `react` is generated for `Emitter$mcI$sp`: ``` public void react$mcI$sp(int, java.lang.Object); [Truncated] Nothing overridden from the superclass, which seems fishy. That means that the `react$mcI$sp` must be coming from the superclass, that is `Push.class`: ``` public void reactAll$mcI$sp(int, java.lang.Object); descriptor: (ILjava/lang/Object;)V flags: ACC_PUBLIC Code: stack=3, locals=3, args_size=3 0: aload_0 1: iload_1 2: invokestatic #314 // Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer; 5: aload_2 6: invokeinterface #153, 3 // InterfaceMethod reactAll:(Ljava/lang/Object;Ljava/lang/Object;)V 11: return ``` The base class implementation calls `boxToInteger`, naturally - this is because, in the base class, the `reactAll$mcI$sp$` is just the bridge the generic version. Expected behavior would be to that this method is overridden in the `Push$mcI$sp` class. Any idea why this is not the case any more? cc @username_4 @username_1 Answers: username_0: Here's another potentially related problem, affecting ScalaJS. 
Suppose you have this trait: ``` trait Arrayable[@specialized(Byte, Short, Int, Float, Long, Double) T] extends Serializable { val classTag: ClassTag[T] val nil: T def newArray(sz: Int): Array[T] def newRawArray(sz: Int): Array[T] def apply(array: Array[T], idx: Int): T def update(array: Array[T], idx: Int, v: T): Unit def withNil(n: T) = new Arrayable.WithNil(this, classTag, n) } object Arrayable { implicit val long: Arrayable[Long] = new Arrayable[Long] { val classTag = implicitly[ClassTag[Long]] val nil = Long.MinValue def newArray(sz: Int) = { val a = new Array[Long](sz) var i = 0 while (i < sz) { a(i) = nil i += 1 } a } def newRawArray(sz: Int) = new Array[Long](sz) def apply(array: Array[Long], idx: Int): Long = array(idx) def update(array: Array[Long], idx: Int, v: Long): Unit = array(idx) = v } } ``` The code that gets generated (Scala 2.12.2): ``` public interface io.reactors.Arrayable<T extends java.lang.Object> extends scala.Serializable ... public abstract int nil$mcI$sp(); descriptor: ()I flags: ACC_PUBLIC, ACC_ABSTRACT ``` Method `nil$mcI$sp` is not overridden in the following interface: ``` public interface io.reactors.Arrayable$mcJ$sp extends io.reactors.Arrayable<java.lang.Object> ``` It is also not overridden in the following concrete class: ``` public final class io.reactors.Arrayable$$anon$1 implements io.reactors.Arrayable$mcJ$sp ``` When ScalaJS tries to link against this, it breaks: ``` [error] Referring to non-existent method io.reactors.Arrayable$$anon$1.nil$mcI$sp()scala.Int [Truncated] [error] called from io.reactors.Reactor$$anon$1.initialValue()io.reactors.Reactor$MarshalContext [error] called from io.reactors.Reactor$$anon$1.initialValue()java.lang.Object [error] called from java.lang.ThreadLocal.get()java.lang.Object [error] called from io.reactors.Reactor$.currentFrame()io.reactors.concurrent.Frame [error] called from io.reactors.Reactor$.currentReactor()io.reactors.Reactor [error] called from io.reactors.Reactor$.selfAs()io.reactors.Reactor [error] called from io.reactors.Reactor$.self()io.reactors.Reactor [error] called from io.reactors.concurrent.Services$.loggingTag()java.lang.String [error] called from io.reactors.services.Log.$$anonfun$apply$1(java.lang.Object)scala.Unit [error] called from io.reactors.services.Log.<init>(io.reactors.ReactorSystem) [error] called from io.reactors.services.Log.<clinit>() [error] called from core module analyzer [error] involving instantiated classes: [error] io.reactors.Reactor$$anon$1 [error] io.reactors.Reactor$$anon$2 [error] java.lang.ThreadLocal [error] io.reactors.Reactor$ [error] io.reactors.concurrent.Services$ [error] io.reactors.services.Log ``` username_0: FYI: here is a self-contained reproducible Gist with the ScalaJS-related error: https://gist.github.com/username_0/d208a0612679a0eee9f9305508bd807e I can also try to produce a self-contained example of the first problem (although, I believe that the two are related, as explained above). username_1: ... default public int nil$mcI$sp() { return BoxesRunTime.unboxToInt(this.nil()); } ... ``` So `nil$mcI$sp` is a default method, not abstract. username_1: I can reproduce / see the problem of your original report. 
Here's a minimized version ```scala trait A[@specialized(Int) T] { def f(x: T): Unit } trait B[@specialized(Int) T] { def g(x: T): Unit = () } class C[@specialized(Int) T] extends A[T] with B[T] { def f(x: T): Unit = g(x) } ``` We get: ```java public class C$mcI$sp extends C<Object> implements B$mcI$sp, A$mcI$sp { public void f$mcI$sp(int x) { this.g$mcI$sp(x); } // no override of g$mcI$sp here } public class C<T> implements A<T>, B<T> { public void g$mcI$sp(int x) { B.g$mcI$sp$(this, x); } } public interface B$mcI$sp extends B<Object> { // no override here } public interface B<T> { default public void g$mcI$sp(int x) { this.g(BoxesRunTime.boxToInteger((int)x)); } } ``` In 2.11.x, the specialized copy of `g` is called: ```java public class C$mcI$sp extends C<Object> implements B$mcI$sp, A$mcI$sp { public void g$mcI$sp(int x) { B$mcI$sp$class.g$mcI$sp(this, x); } public void f$mcI$sp(int x) { this.g$mcI$sp(x); } } public abstract class B$mcI$sp$class { public static void g$mcI$sp(B$mcI$sp $this, int x) { return; } } ``` username_0: Thanks a lot for looking at this! Your example is indeed more minimal than my gist link above. username_0: I think that the problem only appears with multiple specialized classes, but I was not able to prune my example much further while triggering the problem. I think that your example with `A`, `B` and `C` is likely related to the same underlying issue. username_2: @username_3 just curious the reason of why this is moved to the backlog (other that it probably being dreadfully difficult to fix, of course ...). This has been blocking certainly libraries from upgrading to 2.12 for some time (in particular, Reactors, mentioned above, that I just happen to be interested in :-) ). Though I'm not sure I'm up to the challenge, I'll try to take a look, if anyone knows what areas of the compiler might be a good place to start (in both 2.11 and 2.12+) username_3: I don't think any of us were aware that there were people out there who considered this a blocker. Thanks for letting us know. I moved it to "Backlog" simply because there was no activity on the ticket for almost a year — no more specific motivation than that. Let's see what @username_1 says. Lukas, you self-assigned this, is it still something you realistically expect to tackle...? And/or, can you point Brandon in the right direction or where the fix might lie? username_2: Sorry, I'd have chimed up sooner! I just figured it was probably difficult and maybe low priority (though I imagine both of these are still partly true). Thanks for taking a look. username_1: I guess backlog is OK, it's one of those issues I'd like to take a look at, but I'm not sure when I can find the time.. I'm not familiar enough with specialization to be able to say what exactly should but doesn't happen in the example(s). I guess some assumptions in specialization broke with the new encoding for traits. username_4: Looks like I introduced this regression in 2bde3928833ae194fc7e2094b8955112b70fd31f. It might be tricky to fix in a minor release because of binary incompatibilities when specialized traits and subclasses are separately compiled with different compiler minor versions. The workaround is to use an abstract class rather than a trait for `TypedColumn`. 
username_5: I had actually explored that; however, our type hierarchy is a little more complicated, and contains something like this:
```
trait Column
trait TypedColumn[@specialized(Long, Double) T] extends Column
trait CompositeTypedColumn[@specialized(Long, Double) T] extends TypedColumn[T]
```
This will result in a warning if `TypedColumn` is turned into an abstract class.
username_6: Could a fix be hidden behind a compiler flag? For those that don't do separate compilation with differing minor versions, it would be nice to be able to opt in to the old behavior.
Status: Issue closed
concourse/concourse
413118751
Title: Freeze before PUT/build docker image Question: username_0: I have deployed Concourse CI using docker-compose. Everything worked well on my local machine (Arch Linux), but on the destination server (CentOS 7) something is wrong, and the build freezes for some time.
![screenshot_2019-02-21 build backend 5 - concourse](https://user-images.githubusercontent.com/317401/53200601-36577500-3622-11e9-950e-abeb37d169be.png)
As you can see in the log, the time between one part and the next is about 40 minutes. My question is: how can I debug it further? I can read the log on the worker, but I have no idea what to look for, so it will be very hard for me to find anything.
Standard data, pipeline:
```
resources:
- name: version
  type: semver
  source:
    driver: git
    initial_version: 0.0.1
    uri: https://github.com/username_0/cashcat.git
    username: {{gitUsername}}
    password: <PASSWORD>}}
    branch: master
    file: backend/version
- name: cashcat-git
  type: git
  source:
    uri: https://github.com/username_0/cashcat.git
    branch: versions
    username: {{gitUsername}}
    password: {{<PASSWORD>}}
- name: cashcat-image
  type: docker-image
  source:
    email: ((dockerEmail))
    username: ((dockerUsername))
    password: ((docker<PASSWORD>))
    repository: username_0/cashcat
- name: python-image
  type: docker-image
  source:
    repository: python
    tag: 3.7
jobs:
- name: Build Backend
  public: true
  plan:
  - get: cashcat-git
  - put: version
    params:
      bump: minor
  - get: python-image
    params:
      save: true
  - put: cashcat-image
    params:
      build: cashcat-git/backend
  - task: run
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          load_base: cashcat-image
[Truncated]
      - default
  worker2:
    image: concourse/concourse
    command: worker --garden-dns-server 8.8.8.8 --garden-insecure-docker-registry registry-1.docker.io
    privileged: true
    volumes:
      - ./keys/worker:/concourse-keys
    env_file: env
    networks:
      - default
volumes:
  pgdata:
networks:
  apps:
    external:
      name: apps
```
Answers: username_1: This could be just because it's taking time to stream the inputs to the `put` step - by default, all artifacts created thus far are transferred over, including that `get: python-image` you've got just before it. These `put` steps also run privileged (because they use the `docker-image` resource), so there could be overhead from having to namespace all the files going into it.
Status: Issue closed
facebook/react-native
436043959
Title: undefined is not a function (evaluating 'Uint8Array.from(this._hash)') Question: username_0: ## 🐛 Bug Report ``` 04-23 15:01:42.781 17632 17720 E AndroidRuntime: com.facebook.react.common.JavascriptException: undefined is not a function (evaluating 'Uint8Array.from(this._hash)'), stack: 04-23 15:01:42.781 17632 17720 E AndroidRuntime: bytes@695:666 04-23 15:01:42.781 17632 17720 E AndroidRuntime: encodeDestinationSetupFrame@702:862 04-23 15:01:42.781 17632 17720 E AndroidRuntime: encodeFrame@699:276 04-23 15:01:42.781 17632 17720 E AndroidRuntime: create@629:4198 04-23 15:01:42.781 17632 17720 E AndroidRuntime: <unknown>@627:123 04-23 15:01:42.781 17632 17720 E AndroidRuntime: _@2:1514 04-23 15:01:42.781 17632 17720 E AndroidRuntime: d@2:967 04-23 15:01:42.781 17632 17720 E AndroidRuntime: o@2:435 04-23 15:01:42.781 17632 17720 E AndroidRuntime: <unknown>@623:550 04-23 15:01:42.781 17632 17720 E AndroidRuntime: _@2:1514 04-23 15:01:42.781 17632 17720 E AndroidRuntime: d@2:967 04-23 15:01:42.781 17632 17720 E AndroidRuntime: o@2:435 04-23 15:01:42.781 17632 17720 E AndroidRuntime: <unknown>@359:293 04-23 15:01:42.781 17632 17720 E AndroidRuntime: _@2:1514 04-23 15:01:42.781 17632 17720 E AndroidRuntime: d@2:967 04-23 15:01:42.781 17632 17720 E AndroidRuntime: o@2:435 04-23 15:01:42.781 17632 17720 E AndroidRuntime: <unknown>@11:78 04-23 15:01:42.781 17632 17720 E AndroidRuntime: _@2:1514 04-23 15:01:42.781 17632 17720 E AndroidRuntime: d@2:897 04-23 15:01:42.781 17632 17720 E AndroidRuntime: o@2:435 04-23 15:01:42.781 17632 17720 E AndroidRuntime: global code@1270:4 04-23 15:01:42.781 17632 17720 E AndroidRuntime: 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at com.facebook.react.modules.core.ExceptionsManagerModule.showOrThrowError(ExceptionsManagerModule.java:54) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at com.facebook.react.modules.core.ExceptionsManagerModule.reportFatalException(ExceptionsManagerModule.java:38) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at java.lang.reflect.Method.invoke(Native Method) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at com.facebook.react.bridge.JavaMethodWrapper.invoke(JavaMethodWrapper.java:372) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at com.facebook.react.bridge.JavaModuleWrapper.invoke(JavaModuleWrapper.java:158) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at com.facebook.react.bridge.queue.NativeRunnable.run(Native Method) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at android.os.Handler.handleCallback(Handler.java:891) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at android.os.Handler.dispatchMessage(Handler.java:102) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage(MessageQueueThreadHandler.java:29) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at android.os.Looper.loop(Looper.java:207) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at com.facebook.react.bridge.queue.MessageQueueThreadImpl$3.run(MessageQueueThreadImpl.java:192) 04-23 15:01:42.781 17632 17720 E AndroidRuntime: at java.lang.Thread.run(Thread.java:784) ``` ## Environment React Native 0.58 Answers: username_1: Hello, I have the same problem for some reason Uint8Array dont have the function from (undefined). 
Bug report: TypeError: TypeError: TypeError: undefined is not a function (evaluating 'Uint8Array.from((0, _base.decode)(validFor), function (c) { return c.charCodeAt(0); })') This error is located at: in List (at ProductsTabScreen.js:26) in RCTView (at View.js:43) in ProductsTabScreen (created by Connect(ProductsTabScreen)) in Connect(ProductsTabScreen) (at SceneView.js:9) in SceneView (at createTabNavigator.js:39) in RCTView (at View.js:43) in RCTView (at View.js:43) in ResourceSavingScene (at createBottomTabNavigator.js:108) in RCTView (at View.js:43) in ScreenContainer (at createBottomTabNavigator.js:98) in RCTView (at View.js:43) in TabNavigationView (at createTabNavigator.js:178) in NavigationView (at createNavigator.js:57) in Navigator (at createNavigationContainer.js:376) in NavigationContainer (at SceneView.js:9) in SceneView (at StackViewLayout.js:478) in RCTView (at View.js:43) in RCTView (at View.js:43) in RCTView (at View.js:43) in AnimatedComponent (at screens.native.js:59) in Screen (at StackViewCard.js:42) in Card (at createPointerEventsContainer.js:26) in Container (at StackViewLayout.js:507) in RCTView (at View.js:43) in ScreenContainer (at StackViewLayout.js:401) in RCTView (at View.js:43) in StackViewLayout (at withOrientation.js:30) in withOrientation (at StackView.js:49) in RCTView (at View.js:43) in Transitioner (at StackView.js:19) in StackView (at createNavigator.js:57) in Navigator (at createKeyboardAwareNavigator.js:11) in KeyboardAwareNavigator (at createNavigationContainer.js:376) in NavigationContainer (at AuthNavigation.js:93) in Unknown (created by Connect(Component)) in Connect(Component) (at app.js:60) in RCTView (at View.js:43) in PersistGate (at app.js:56) in Provider (at app.js:55) in App (at renderApplication.js:32) in RCTView (at View.js:43) in RCTView (at View.js:43) in AppContainer (at renderApplication.js:31) This error is located at: in NavigationContainer (at SceneView.js:9) in SceneView (at StackViewLayout.js:478) in RCTView (at View.js:43) in RCTView (at View.js:43) in RCTView (at View.js:43) in AnimatedComponent (at screens.native.js:59) [Truncated] Node: 10.13.0 - /usr/local/bin/node Yarn: 1.13.0 - /usr/local/bin/yarn npm: 6.9.0 - /usr/local/bin/npm Watchman: 4.9.0 - /usr/local/bin/watchman SDKs: iOS SDK: Platforms: iOS 12.2, macOS 10.14, tvOS 12.2, watchOS 5.2 Android SDK: API Levels: 21, 22, 23, 24, 25, 26, 27, 28 Build Tools: 27.0.3, 28.0.3 System Images: android-27 | Android TV Intel x86 Atom, android-27 | Intel x86 Atom, android-27 | Intel x86 Atom_64, android-27 | Google APIs Intel x86 Atom, android-27 | Google Play Intel x86 Atom, android-28 | Intel x86 Atom, android-28 | Google APIs Intel x86 Atom, android-28 | Google Play Intel x86 Atom IDEs: Xcode: 10.2.1/10E1001 - /usr/bin/xcodebuild npmPackages: react: 16.4.1 => 16.4.1 react-native: 0.56.1 => 0.56.1 npmGlobalPackages: create-react-native-app: 2.0.2 react-native-cli: 2.0.1 react-native-git-upgrade: 0.2.7
expo/expo
437606430
Title: can't get Location.getHeadingAsync() in Android Tablet Question: username_0: ## 🐛 Bug Report
I have the same problem with all Android tablet emulators. Location.getHeadingAsync() does not return the value when manually setting the magnetic field, but only when the GPS position changes, e.g. via geo fix. Sometimes, even when changing the GPS position with geo fix, Location.getHeadingAsync() does not return the value. When attaching a then callback, it does not get executed.
### Environment
Expo CLI 2.15.4 environment info:
System:
OS: Linux 5.0 Ubuntu 19.04 (Disco Dingo)
Shell: 5.0.3 - /bin/bash
Binaries:
Node: 10.15.2 - /usr/bin/node
Yarn: 1.13.0 - ~/.yarn/bin/yarn
npm: 5.8.0 - /usr/bin/npm
npmPackages:
expo: ^32.0.0 => 32.0.6
react: 16.5.0 => 16.5.0
react-native: https://github.com/expo/react-native/archive/sdk-32.0.0.tar.gz => 0.57.1
react-navigation: ^3.6.1 => 3.8.1
npmGlobalPackages:
expo-cli: 2.15.4
The problem is common to all Android tablet emulators, but it works fine in all Android phone emulators. Right now I can't test it on iOS.
ExpoKit project.
### Steps to Reproduce
Attach a then callback to Location.getHeadingAsync(), or just get the value with await/async, and manually set the magnetic field in the virtual sensors of the Android emulator. You can't get the heading value in a tablet emulator, but it works well in phone emulators. It starts working when the GPS position is changed with geo fix, but sometimes it does not work even then. For example:
``` javascript
Location.getHeadingAsync().then(headObj => {
head = (headObj.trueHeading != -1) ? headObj.trueHeading : headObj.magHeading;
console.log('heading value', head);
});
```
1. if I change the magnetic-field sensor values in the Android emulator, the head value does not get logged to the console in the case of an Android tablet emulator, but it works well in phone emulators;
2. if I change the GPS position with geo fix, I can see the heading value logged in the console, but sometimes it does not work either;
Again, I can get the heading logged properly in all Android phone emulators but not in tablets.
### Expected Behavior
Return the heading value on Android tablets like in Android phone emulators
### Actual Behavior
Heading value is not returned on Android tablets when changing magnetic-field sensor values
Answers: username_1: Could you check if this bug still occurs when you change `getHeadingAsync` to `watchHeadingAsync`?
Status: Issue closed
username_1: I'm closing this issue because I couldn't reproduce this bug and you didn't provide me with more information. Feel free to reopen this issue if this bug still exists.
pingcap/tidb
977905217
Title: update mode Question: username_0: ## General Question
I use TiSpark TiBatchWrite to write data into TiDB like this:
```scala
val df = spark.sql("select * from szfky_znjc.ill_condition_detail")
df.show(false)
//append
df.write
.format("tidb")
.option("tidb.user", "root")
.option("tidb.password", "<PASSWORD>")
.option("database","szfky_znjc")
.option("table","ill_condition_detail_copy1")
.mode("append")
.save()
```
but it went wrong with this error:
```
User class threw exception: com.pingcap.tikv.exception.TiBatchWriteException: currently user provided auto increment value is only supported in update mode! please set parameter replace to true!
```
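Following the exception text, the write presumably needs the `replace` parameter set to true when rows carry user-provided auto-increment values. A sketch of the same write in PySpark — the option names mirror the Scala call above, and `replace` is taken directly from the error message rather than from documentation, so treat it as an assumption to verify:

```python
# Assumes an active SparkSession named `spark` (as in pyspark/spark-shell)
# with the TiSpark "tidb" data source on the classpath.
df = spark.sql("select * from szfky_znjc.ill_condition_detail")

(df.write
   .format("tidb")
   .option("tidb.user", "root")
   .option("tidb.password", "<password>")
   .option("database", "szfky_znjc")
   .option("table", "ill_condition_detail_copy1")
   .option("replace", "true")  # per the TiBatchWriteException message
   .mode("append")
   .save())
```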
martincostello/alexa-london-travel-site
226863033
Title: Reduce cookie size Question: username_0: Recently noticed this warning in the Chrome developer console: ``` Set-Cookie header is ignored in response from url: https://londontravel.username_0.com/account/external-sign-in-callback/. Cookie length should be less then or equal to 4096 characters. ``` It looks like the sign-in infrastructure creates cookies that are too large, which might cause sign in issues and might explain the missing correlation cookie warnings in the logs. [This](https://hajekj.net/2017/03/20/cookie-size-and-cookie-authentication-in-asp-net-core/) blog post sounds interesting regarding this issue. Answers: username_0: I'm not sure this is actually a problem, as the `ChunkingCookieManager` splits the cookies up so they shouldn't get too big. Status: Issue closed
scrapinghub/spidermon
465160723
Title: How to monitor specific spiders? Question: username_0: I'm following the 'Getting Started' tutorial in the docs, and the examples given only include one spider. In my Scrapy project I have multiple spiders, and I'm wondering how to have my monitors run on specific spiders only. Status: Issue closed
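One pattern that fits here — a sketch, not taken from the tutorial itself — is to let the monitor check which spider produced the data and skip otherwise, since Spidermon monitors are unittest-style test cases. The attribute access (`self.data.spider`, `self.data.stats`) follows Spidermon's monitor examples, but should be verified against your version:

```python
from spidermon import Monitor, monitors

@monitors.name("Item count")
class ItemCountMonitor(Monitor):

    @monitors.name("Minimum items scraped")
    def test_minimum_number_of_items(self):
        # Only enforce this check for one specific spider; other spiders
        # in the project skip it. The spider name is a placeholder.
        if self.data.spider.name != "my_target_spider":
            self.skipTest("monitor only applies to my_target_spider")
        item_count = self.data.stats.get("item_scraped_count", 0)
        self.assertGreaterEqual(item_count, 10)
```

Alternatively, monitor suites can be enabled per spider via each spider's `custom_settings` (for example pointing `SPIDERMON_SPIDER_CLOSE_MONITORS` at a different suite), so that only the chosen spiders run any monitors at all.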
localstack/localstack-java-utils
609052997
Title: Version 0.2.1 - Unknown port mapping for service: dynamodb Question: username_0: Really strange behaviour: what works on a colleague's machine is not working on mine, which I thought was not meant to be the case when running Docker images.
Maven Dependency
```
<dependency>
    <groupId>cloud.localstack</groupId>
    <artifactId>localstack-utils</artifactId>
    <version>0.2.1</version>
    <scope>test</scope>
</dependency>
```
Getting the error
```
Apr 29, 2020 2:07:18 PM cloud.localstack.docker.annotation.LocalstackDockerAnnotationProcessor getExternalHostName
INFO: External host name is set to: localhost
Apr 29, 2020 2:07:21 PM cloud.localstack.docker.Container createLocalstackContainer
INFO: Started container: c356b2e82a2acbeefb4cb1f1032d814886b6474639476bc400dffb75124e02f8
Apr 29, 2020 2:07:21 PM cloud.localstack.Localstack startup
INFO: Waiting for LocalStack container to be ready...
java.lang.IllegalArgumentException: Unknown port mapping for service: dynamodb
at cloud.localstack.Localstack.endpointForService(Localstack.java:204)
at cloud.localstack.Localstack.getEndpointDynamoDB(Localstack.java:131)
at cloud.localstack.TestUtils.getEndpointConfigurationDynamoDB(TestUtils.java:187)
at cloud.localstack.TestUtils.getClientDynamoDB(TestUtils.java:143)
at com.lmig.global.eventframeworktrackerinfra.DyamodbCRUDTest.setUp(DyamodbCRUDTest.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at cloud.localstack.LocalstackTestRunner.run(LocalstackTestRunner.java:42)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Apr 29, 2020 2:07:50 PM cloud.localstack.docker.Container stop
INFO: Stopped container: c356b2e82a2acbeefb4cb1f1032d814886b6474639476bc400dffb75124e02f8
Process finished with exit code 255
```
Here is my class if it helps
```
[Truncated]
* This test is to test the dynamo db INSERT operation with the db created through the setUp
*/
@Test
public void testInsertIntoTable() {
Table table = dynamoDB.getTable(ERROR_TRACKER);
// Build the item
Item item = new Item() .withPrimaryKey("messageId", "f46gsfkv789") .withString("Subscriber", "FivaTestSubscriber") .withString("Timestamp", Timestamp.now().toString()) .withString("Status", "Failure") .withString("Description", String.valueOf(new IOException("Error accessing file").getStackTrace())); // Write the item to the table PutItemOutcome outcome = table.putItem(item); } } ``` Answers: username_1: Thanks for reporting this issue @username_0 . The default port mapping has been recently updated, and in the latest version all services now map to the edge service (on port 4566 by default). The Java library has been updated accordingly - can you please give it another try with the latest release version 0.2.2? Please report here if the problem persists. Thanks! Status: Issue closed
mkommar/brightlight
222821661
Title: H.J.Res. 43 - Providing for congressional disapproval under chapter 8 of title 5, United States Code, of the final rule submitted by Secretary of Health and Human Services relating to compliance with title X requirements by project recipients in selecting subrecipients. Question: username_0: The President signed H.J.Res. 43 into law on April 13th, 2017<br><br><i><b>H.J.Res. 43</b> - Providing for congressional disapproval under chapter 8 of title 5, United States Code, of the final rule submitted by Secretary of Health and Human Services relating to compliance with title X requirements by project recipients in selecting subrecipients.</i><br><br><b>Sponsor</b><br> Rep. <NAME><br><br> via Sunlight Foundation http://ift.tt/2oP33gn
dotnet/wpf
461162504
Title: Snap master to release/3.0 and rebrand preview 7 -> preview 8 Question: username_0: - [ ] snap https://github.com/dotnet/wpf master -> release/3.0 - [ ] rebrand https://github.com/dotnet/wpf master to preview 8 - [ ] snap https://dev.azure.com/dnceng/internal/_git/dotnet-wpf-int master -> release/3.0 - [ ] rebrand https://dev.azure.com/dnceng/internal/_git/dotnet-wpf-int master to preview 8 Answers: username_0: https://github.com/dotnet/wpf/pull/1099 is opened to snap https://github.com/dotnet/wpf master -> release/3.0 username_0: https://github.com/dotnet/wpf/pull/1102 rebrands master to preview 8 Status: Issue closed
burnash/gspread
128415621
Title: Error when trying to create a .exe using PyInstaller Question: username_0: Hi, I'm currently working on a project that heavily relies on gspread. I'm in charge of creating a .exe but I've been running into some problems... I have PyOpenSSL installed and the program works fine from the .py.
I'm doing some testing on a simple (and working!) program that does the following:
- Connects to a spreadsheet using gspread
- Gets information from a specific cell of the spreadsheet
- Updates information of a specific cell on the spreadsheet
When trying to make the .exe using PyInstaller, I get the following:
Traceback (most recent call last):
File "<string>", line 15, in <module>
File "CurrentMouse.py", line 53, in spreadsheetOpen
File "site-packages\oauth2client\util.py", line 140, in positional_wrapper
File "site-packages\oauth2client\client.py", line 1630, in __init__
File "site-packages\oauth2client\client.py", line 1581, in _RequireCryptoOrDie
oauth2client.client.CryptoUnavailableError: No crypto library available
google22nov returned -1
Status: Issue closed
Answers: username_1: From the traceback, it looks like the issue is related to [oauth2client](https://github.com/google/oauth2client) which is a separate project. I'm closing this issue.
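The `CryptoUnavailableError` comes from oauth2client importing its crypto backend dynamically, which PyInstaller's static analysis can miss when bundling. A small diagnostic that can be frozen into the executable to confirm which backends made it into the bundle (the module names are the usual candidates, not verified against this exact setup):

```python
# Diagnostic sketch: check which crypto backends are importable at runtime.
# oauth2client looks for PyOpenSSL ("OpenSSL") or PyCrypto ("Crypto").
for name in ("OpenSSL", "Crypto"):
    try:
        __import__(name)
        print(name, "is available")
    except ImportError as exc:
        print(name, "is missing:", exc)
```

If a backend turns out to be missing from the frozen build, forcing it in with PyInstaller's `--hidden-import` flag (for example `--hidden-import=OpenSSL`) is a reasonable first thing to try.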
gbif/data-mobilization
263941369
Title: Project Roadkill Question: username_0: Dataset link: http://roadkill.at/en
Region: global
Taxon: all life
Type: occurrence
Why is this important: The goal of this citizen science project is to get an overview of the numbers and patterns of road-killed vertebrates, in the hope of helping local authorities direct mitigation actions toward roadkill hotspots. The project combines the expertise and the commitment of citizens with open data to address biodiversity loss and road safety.
Priority: medium
License: Unspecified
Comments: international project based in Vienna
Dataholders contact information: [<NAME>](<EMAIL>), Institut für Zoologie, Department für Integrative Biologie und Biodiversitätsforschung, Universität für Bodenkultur Wien
Users contact info: @username_0
Answers: username_1: The project is registered as a publisher (https://www.gbif.org/publisher/cec55b6c-5728-473a-b1b5-f045b3f494f6). They started working with BioCASe to get the data published, but we have no recent status update on this. The start of the collaboration pre-dated the establishment of the European regional IPT, and may need follow-up.
username_1: Last status from October 2019: in preparation of a data paper to be published alongside the data. Follow-up from GBIF in Dec 2020, so far without response or further news.
username_1: https://www.gbif.org/dataset/d0d5ef85-71b2-4da6-b6f6-c1c3d60987d3 closing
Status: Issue closed
vadikom/smartmenus
404271883
Title: Click on right + expand - collapse back Question: username_0: Hi, and thank you for sharing this. I have generated the following menu using KNP Menus and Nested Sets in Symfony, and I have the following issue:
- when on mobile, clicking the + buttons which appear gives a fast expand and then a collapse back of the sub-items.
Any hints please?
<ul class="nav navbar-nav sm sm-blue sm-collapsible" id="main-menu" data-smartmenus-id="15487651135439114">
<li class="only-full"> <a href="/app_dev.php/categorii/1.html"> 1 </a></li>
<li class="only-full"> <a href="/app_dev.php/categorii/2.html" class="has-submenu" id="sm-15487651135366206-1" aria-haspopup="true" aria-controls="sm-15487651135366206-2" aria-expanded="false"> 2 <span class="sub-arrow"></span></a>
<ul class="menu_level_1" id="sm-15487651135366206-2" role="group" aria-hidden="true" aria-labelledby="sm-15487651135366206-1" aria-expanded="false" style="width: auto; display: none;">
<li class="only-full first"> <a href="/app_dev.php/categorii/2-1.html" class="has-submenu" id="sm-15487651135366206-3" aria-haspopup="true" aria-controls="sm-15487651135366206-4" aria-expanded="false"> 2.1 <span class="sub-arrow"></span></a>
<ul class="menu_level_2" id="sm-15487651135366206-4" role="group" aria-hidden="true" aria-labelledby="sm-15487651135366206-3" aria-expanded="false" style="width: auto; display: none;">
<li class="only-full first"> <a href="/app_dev.php/categorii/2-1-1.html"> 2.1.1 </a></li>
<li class="only-full last"> <a href="/app_dev.php/categorii/2-1-2.html"> 2.1.2 </a></li>
</ul>
</li>
<li class="only-full last"> <a href="/app_dev.php/categorii/2-2.html" class="has-submenu" id="sm-15487651135366206-5" aria-haspopup="true" aria-controls="sm-15487651135366206-6" aria-expanded="false"> 2.2 <span class="sub-arrow"></span></a>
<ul class="menu_level_2 sm-nowrap" id="sm-15487651135366206-6" role="group" aria-hidden="true" aria-labelledby="sm-15487651135366206-5" aria-expanded="false" style="width: auto; min-width: 10em; display: none; max-width: 20em; top: auto; left: 0px; margin-left: 136px; margin-top: -46.6667px;">
<li class="only-full first"> <a href="/app_dev.php/categorii/2-2-1.html"> 2.2.1 </a></li>
<li class="only-full last"> <a href="/app_dev.php/categorii/2-2-2.html"> 2.2.2 </a></li>
</ul>
</li>
</ul>
</li>
<li class="only-full last"> <a href="/app_dev.php/categorii/3.html"> 3 </a></li>
</ul>
Answers: username_0: ![image](https://user-images.githubusercontent.com/5830216/51971323-89099b00-2481-11e9-95e9-959848b2adf0.png)
See that after clicking the + sign I am still left with an unopened menu entry. The desktop version works OK, but on mobile it opens and then closes back quickly...