repo_name: string (length 4-136)
issue_id: string (length 5-10)
text: string (length 37-4.84M)
stellar/java-stellar-sdk
391947916
Title: Effects Streaming Api is timing out Question: username_0: I am just testing the streaming under EffectsRequestBuilder and I am getting timeout. Is the streaming api using the read timeout set in the okhttpclient? ``` 2018-12-17 19:27:37.558 2863-2934/com.blockeq.stellarwallet E/AndroidRuntime: FATAL EXCEPTION: OkHttp Dispatcher Process: com.blockeq.stellarwallet, PID: 2863 java.lang.IllegalStateException: Failed 200 at org.stellar.sdk.requests.SSEStream$StellarEventSourceListener.onFailure(SSEStream.java:174) at shadow.okhttp3.internal.sse.RealEventSource.processResponse(RealEventSource.java:83) at shadow.okhttp3.internal.sse.RealEventSource.onResponse(RealEventSource.java:54) at shadow.okhttp3.RealCall$AsyncCall.execute(RealCall.java:153) at shadow.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.lang.Thread.run(Thread.java:764) Caused by: java.net.SocketTimeoutException: timeout at shadow.okio.Okio$4.newTimeoutException(Okio.java:232) at shadow.okio.AsyncTimeout.exit(AsyncTimeout.java:285) at shadow.okio.AsyncTimeout$2.read(AsyncTimeout.java:241) at shadow.okio.RealBufferedSource.request(RealBufferedSource.java:68) at shadow.okio.RealBufferedSource.require(RealBufferedSource.java:61) at shadow.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:304) at shadow.okhttp3.internal.http1.Http1Codec$ChunkedSource.readChunkSize(Http1Codec.java:469) at shadow.okhttp3.internal.http1.Http1Codec$ChunkedSource.read(Http1Codec.java:449) at shadow.okio.RealBufferedSource.indexOfElement(RealBufferedSource.java:394) at shadow.okio.RealBufferedSource.indexOfElement(RealBufferedSource.java:383) at shadow.okhttp3.internal.sse.ServerSentEventReader.processNextEvent(ServerSentEventReader.java:61) at shadow.okhttp3.internal.sse.RealEventSource.processResponse(RealEventSource.java:80) at shadow.okhttp3.internal.sse.RealEventSource.onResponse(RealEventSource.java:54)  at shadow.okhttp3.RealCall$AsyncCall.execute(RealCall.java:153)  at shadow.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)  at java.lang.Thread.run(Thread.java:764)  Caused by: java.net.SocketException: socket is closed at com.android.org.conscrypt.ConscryptFileDescriptorSocket$SSLInputStream.read(ConscryptFileDescriptorSocket.java:551) at shadow.okio.Okio$2.read(Okio.java:140) at shadow.okio.AsyncTimeout$2.read(AsyncTimeout.java:237) at shadow.okio.RealBufferedSource.request(RealBufferedSource.java:68)  at shadow.okio.RealBufferedSource.require(RealBufferedSource.java:61)  at shadow.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:304)  at shadow.okhttp3.internal.http1.Http1Codec$ChunkedSource.readChunkSize(Http1Codec.java:469)  at shadow.okhttp3.internal.http1.Http1Codec$ChunkedSource.read(Http1Codec.java:449)  at shadow.okio.RealBufferedSource.indexOfElement(RealBufferedSource.java:394)  at shadow.okio.RealBufferedSource.indexOfElement(RealBufferedSource.java:383)  at shadow.okhttp3.internal.sse.ServerSentEventReader.processNextEvent(ServerSentEventReader.java:61)  at shadow.okhttp3.internal.sse.RealEventSource.processResponse(RealEventSource.java:80)  at shadow.okhttp3.internal.sse.RealEventSource.onResponse(RealEventSource.java:54)  
at shadow.okhttp3.RealCall$AsyncCall.execute(RealCall.java:153)  at shadow.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)  at java.lang.Thread.run(Thread.java:764)  ``` Answers: username_1: ah,I got the same error info while using PaymentsRequestBuilder... username_2: Me too... 2018-12-19 21:50:58.939 17883-17943/? E/Fabric: Settings request failed. io.fabric.sdk.android.services.network.HttpRequest$HttpRequestException: java.net.SocketTimeoutException: connect timed out at io.fabric.sdk.android.services.network.HttpRequest.code(HttpRequest.java:1355) at io.fabric.sdk.android.services.settings.DefaultSettingsSpiCall.handleResponse(DefaultSettingsSpiCall.java:106) at io.fabric.sdk.android.services.settings.DefaultSettingsSpiCall.invoke(DefaultSettingsSpiCall.java:90) at io.fabric.sdk.android.services.settings.DefaultSettingsController.loadSettingsData(DefaultSettingsController.java:80) at io.fabric.sdk.android.services.settings.DefaultSettingsController.loadSettingsData(DefaultSettingsController.java:64) at io.fabric.sdk.android.services.settings.Settings.loadSettingsData(Settings.java:155) at io.fabric.sdk.android.Onboarding.retrieveSettingsData(Onboarding.java:126) at io.fabric.sdk.android.Onboarding.doInBackground(Onboarding.java:99) at io.fabric.sdk.android.Onboarding.doInBackground(Onboarding.java:45) at io.fabric.sdk.android.InitializationTask.doInBackground(InitializationTask.java:63) at io.fabric.sdk.android.InitializationTask.doInBackground(InitializationTask.java:28) at io.fabric.sdk.android.services.concurrency.AsyncTask$2.call(AsyncTask.java:311) at java.util.concurrent.FutureTask.run(FutureTask.java:237) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:428) at java.util.concurrent.FutureTask.run(FutureTask.java:237) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607) at java.lang.Thread.run(Thread.java:762) Caused by: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:334) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:196) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:356) at java.net.Socket.connect(Socket.java:586) at com.android.okhttp.internal.Platform.connectSocket(Platform.java:113) at com.android.okhttp.Connection.connectSocket(Connection.java:1455) at com.android.okhttp.Connection.connect(Connection.java:1413) at com.android.okhttp.Connection.connectAndSetOwner(Connection.java:1700) at com.android.okhttp.OkHttpClient$1.connectAndSetOwner(OkHttpClient.java:133) at com.android.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:466) at com.android.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:371) at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:503) at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:438) at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:567) at 
com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:105) at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java) at com.google.firebase.perf.network.zze.getResponseCode(Unknown Source) at com.google.firebase.perf.network.zzd.getResponseCode(Unknown Source) at io.fabric.sdk.android.services.network.HttpRequest.code(HttpRequest.java:1353) at io.fabric.sdk.android.services.settings.DefaultSettingsSpiCall.handleResponse(DefaultSettingsSpiCall.java:106)  at io.fabric.sdk.android.services.settings.DefaultSettingsSpiCall.invoke(DefaultSettingsSpiCall.java:90)  at io.fabric.sdk.android.services.settings.DefaultSettingsController.loadSettingsData(DefaultSettingsController.java:80)  at io.fabric.sdk.android.services.settings.DefaultSettingsController.loadSettingsData(DefaultSettingsController.java:64)  at io.fabric.sdk.android.services.settings.Settings.loadSettingsData(Settings.java:155)  at io.fabric.sdk.android.Onboarding.retrieveSettingsData(Onboarding.java:126)  at io.fabric.sdk.android.Onboarding.doInBackground(Onboarding.java:99)  at io.fabric.sdk.android.Onboarding.doInBackground(Onboarding.java:45)  at io.fabric.sdk.android.InitializationTask.doInBackground(InitializationTask.java:63)  at io.fabric.sdk.android.InitializationTask.doInBackground(InitializationTask.java:28)  at io.fabric.sdk.android.services.concurrency.AsyncTask$2.call(AsyncTask.java:311)  at java.util.concurrent.FutureTask.run(FutureTask.java:237)  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:428)  at java.util.concurrent.FutureTask.run(FutureTask.java:237)  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)  at java.lang.Thread.run(Thread.java:762) Status: Issue closed username_3: Thanks for reports. Fixed in [0.4.1](https://github.com/stellar/java-stellar-sdk/releases/tag/0.4.1). username_2: Thanks for the quick response, it's working for me now.
dotnet/core
512277429
Title: .Net Core 3.0 WebApi Reports Error When Return OK(List<dynamic>) Question: username_0: When .net core 3.0 webapi returns Ok(List<dynamic>) or Ok(List<object>), this error occurs: System.InvalidCastException: Unable to cast object of type '<GetEnumerator>d__0' to type 'System.Collections.IDictionaryEnumerator'. Answers: username_1: This issue was moved to aspnet/AspNetCore#16610 Status: Issue closed
MarkUsProject/Markus
863481
Title: Sanitize filename and convert file content to UTF-8 of uploaded files Question: username_0: We seem to have several problems related to bad filenames and messed up character encodings. I think it would be a good idea to implement the following: 1. Filenames get sanitized: new_filename = old_filename.gsub(/[^a-zA-Z0-9\.\-_]/,'_') I.e. replace all characters which aren't ASCII-alphanumeric, dash, dot or underscore. I think we are doing this already. Maybe we need to spell things out more explicitly in order to avoid accents, etc. in filenames. I'm not sure how ruby's \w character class is defined nor if [a-z] would match an accented e. 2. Convert the file content of uploaded files as they are uploaded to UTF-8 (at least make an attempt). I think we won't lose here, since we don't do any conversion at the moment and with the conversion in place we'd catch a few more encoding related errors. 3. Make sure we update JavaScript code of the student's file-manager code in order to account for changed file sanitation. For 2. we should probably use the cmess gem [1]. What do you think? [1] http://prometheus.rubyforge.org/cmess/ Answers: username_1: (1) seems to work now. Closing until we hear a new complaint about file contents encoding. Status: Issue closed
JunilHwang/TIL
780346297
Title: Large images Question: username_0: ![SODA_1581508095-ANIMATION](https://user-images.githubusercontent.com/18749057/103750785-a0c50300-504a-11eb-8eaa-524351ffcf31.gif) Status: Issue closed Answers: username_0: ![image](https://user-images.githubusercontent.com/18749057/132424789-d232dd06-5987-4156-a81b-2e38e7b4cde4.png)
cuberite/Core
34379555
Title: Add player count to web chat. Question: username_0: For a quick overview it might be useful to have a player count in the webadmin chat. Answers: username_1: Can we close this issue? It looks like PR [cuberite/cuberite/#1691](https://github.com/cuberite/cuberite/pull/1691) solves this. Status: Issue closed
Wechat-Group/WxJava
798904414
Title: Can the temporary folder created by createTmpFile also get deleted? Question: username_0: # Before asking, please make sure you have read the project homepage and the relevant wiki development documentation. After filling in the content, please remove useless content including this sentence so it does not affect other readers, otherwise the issue will be closed directly. Thanks for your cooperation~ ### Brief description ### Official documentation link __Please provide the link to the official WeChat documentation for the requested feature so that it can be confirmed.__ Answers: username_1: The temporary folder is placed under the system temp directory, so normally the system should clean it up automatically, unless you have disabled the relevant configuration yourself. username_0: I don't know why either; those files just sit in Tomcat's temp folder and have been there for a long time. My server runs Windows Server. username_1: Please handle it yourself, either in code or with the help of a tool. Status: Issue closed
jlengstorf/netlify-plugin-gatsby-cache
761814305
Title: Does not detect change to component on index page Question: username_0: Does not detect change to component on index page Answers: username_0: It will detect change with this mainPosts: allContentfulPost( limit: 5, sort: {fields: [publishDate], order: DESC}, filter: {coverStory: {eq: true},} ){ edges { node { id slug title categories { title slug } image { title fluid(maxWidth: 1800, maxHeight:1000) { ...GatsbyContentfulFluid } ogimg: resize(width: 1800) { src } } copy { childMarkdownRemark { html excerpt(pruneLength: 80) } } } } } but randomly not this contentfulModularPage(title: {eq: "Home"}) { id title sections { ... on Node { ... on ContentfulAdModule { id title image { id fluid(maxWidth: 730, quality: 80) { ...GatsbyContentfulFluid } } adPosition link } } ... on Node { ... on ContentfulModule { id title moduleDesignType [Truncated] } } metaImage { fluid(maxWidth: 768, maxHeight:512, quality: 70 ) { ...GatsbyContentfulFluid } } copy { childMarkdownRemark { html excerpt(pruneLength: 120) } } } } } } } } } username_1: hey there — can you provide a reproduction of this issue? this plugin doesn't affect the build, so my hunch is that something else is going on username_0: sorry i'm not sure how to show you. but just 10 minutes ago I did a build which is using your plugin and the what i would call the more complicated components on the index page didn't update. then when I do a build and clear cache it updates. this is the build where it didn't update and I can see where is says Updated entries 5 and that is accurate 3:04:27 PM: Build ready to start 3:04:29 PM: build-image version: 53b83b6bede2920f236b25b6f5a95334320dc849 3:04:29 PM: build-image tag: v3.6.0 3:04:29 PM: buildbot version: 94ed42511c4e70547a960237db686b74875fef2d 3:04:29 PM: Fetching cached dependencies 3:04:29 PM: Starting to download cache of 216.0MB 3:04:30 PM: Finished downloading cache in 1.234516481s 3:04:30 PM: Starting to extract cache 3:04:40 PM: Finished extracting cache in 9.93791295s 3:04:40 PM: Finished fetching cache in 11.230020883s 3:04:40 PM: Starting to prepare the repo for build 3:04:41 PM: Preparing Git Reference refs/heads/master 3:04:42 PM: Different publish path detected, going to use the one specified in the Netlify configuration file: 'public' versus 'public/' in the Netlify UI 3:04:42 PM: Starting build script 3:04:42 PM: Installing dependencies 3:04:42 PM: Python version set to 2.7 3:04:43 PM: Started restoring cached node version 3:04:46 PM: Finished restoring cached node version 3:04:47 PM: v12.18.0 is already installed. 
3:04:47 PM: Now using node v12.18.0 (npm v6.14.4) 3:04:47 PM: Started restoring cached build plugins 3:04:47 PM: Finished restoring cached build plugins 3:04:47 PM: Attempting ruby version 2.7.1, read from environment 3:04:49 PM: Using ruby version 2.7.1 3:04:49 PM: Using PHP version 5.6 3:04:49 PM: Started restoring cached node modules 3:04:49 PM: Finished restoring cached node modules 3:04:49 PM: Started restoring cached go cache 3:04:49 PM: Finished restoring cached go cache 3:04:50 PM: go version go1.14.4 linux/amd64 3:04:50 PM: go version go1.14.4 linux/amd64 3:04:50 PM: Installing missing commands 3:04:50 PM: Verify run directory 3:04:51 PM: ​ 3:04:51 PM: ──────────────────────────────────────────────────────────────── 3:04:51 PM: Netlify Build 3:04:51 PM: ──────────────────────────────────────────────────────────────── 3:04:51 PM: ​ 3:04:51 PM: ❯ Version 3:04:51 PM: @netlify/build 7.0.0 3:04:51 PM: ​ 3:04:51 PM: ❯ Flags 3:04:51 PM: deployId: 5fd6e44b8fce2119b38896f6 3:04:51 PM: mode: buildbot 3:04:51 PM: ​ 3:04:51 PM: ❯ Current directory 3:04:51 PM: /opt/build/repo 3:04:51 PM: ​ 3:04:51 PM: ❯ Config file 3:04:51 PM: /opt/build/repo/netlify.toml 3:04:51 PM: ​ 3:04:51 PM: ❯ Context 3:04:51 PM: production 3:04:52 PM: ​ 3:04:52 PM: ❯ Loading plugins 3:04:52 PM: - [email protected] from Netlify app 3:04:52 PM: ​ [Truncated] 3:06:14 PM: Finished saving rust rustup cache 3:06:14 PM: Started saving rust cargo bin cache 3:06:14 PM: Finished saving rust cargo bin cache 3:06:14 PM: Started saving go dependencies 3:06:14 PM: Finished saving go dependencies 3:06:14 PM: Build script success 3:06:14 PM: Starting to deploy site from 'public' 3:06:14 PM: Creating deploy tree 3:06:14 PM: Creating deploy upload records 3:06:14 PM: 103 new files to upload 3:06:14 PM: 0 new functions to upload 3:06:18 PM: Starting post processing 3:06:18 PM: Post processing - HTML 3:06:24 PM: Post processing - header rules 3:06:24 PM: Post processing - redirect rules 3:06:24 PM: Post processing done 3:06:25 PM: Site is live ✨ 3:06:57 PM: Finished processing build request in 2m28.600973241s I have the same code across multiple sites. It seems to happen randomly. username_0: sorry I'm probably not being very helpful username_0: I made a copy of the site here https://github.com/username_0/gatsby-cache-issue-goulburn-phoenix username_1: it looks like you've got a Service Worker installed — the next time you see this, can you please clear the Service Worker in the browser and see if that clears it up before you clear the cache and rebuild? username_0: I will do that. Thank you!!!! username_1: of course! we'll get to the bottom of this 😄 username_0: I have been doing some testing and It is looking a lot like it was my dodgy service worker set up. Just wanted to say thank you again. Champion!!!!!! Status: Issue closed username_1: happy to hear you got it sorted out! thanks for following up!
amuntasim/how_to
175972145
Title: Can I use this in Rails 5? Question: username_0: Thank you. I will try and let you know. Answers: username_1: I didn't try it with that, but you can give it a try. username_1: You can copy how_to/application.js and how_to/application.css.scss to your app/assets and replace the *= require how_to/bootstrap_and_overrides with your app's bootstrap. You may need to change the how_to/application.js as well. Hope it works for you, I will push another version with the latest bootstrap soon. username_0: Thank you. I will try and let you know.
dask/dask
437769433
Title: Repartition min/max behavior Question: username_0: I’m working in the dataframe/multi.py file and had the following issue. When I call the merge method, the use of align_partitions often causes the desired behavior of divisions to no longer be adhered. That is, it might cause a DataFrame to have divisions d_0, d_1, …, d_n where the ith part doesn’t necessarily have minimum index d_i. Indeed, the merge function sometimes causes parts to be completely empty. My question is: in general, how strictly does Dask obey the divisions rule? What I’m implementing requires that the minimum of the ith division is d_i. Is there any way for me to enforce this rule? I tried making a list containing the minimum value of each division and then repartitioning on that list. On another similar note, is there any good way to enforce that the upper bound of each division is strictly non-inclusive? That is, I want the ith partition to not contain any indices of maximum value equal to d_{i+1}. Would df.repartition(divisions=df.divisions) do that or am I thinking about it wrong? Answers: username_1: Thanks for asking the question. I recommend reading the following related issue: https://github.com/dask/dask/issues/3384 cc @username_2 username_2: Nope. We can't enforce that a value exists over all operations. For example we might call something like `df.sample(frac=0.0001)` and it would be very unlikely that the minimum value continued to exist. The only way to achieve this would be to compute every operation as we go, which would destroy laziness. Status: Issue closed username_2: I think that this question has been answered. Closing.
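The question above mentions building a list of each partition's minimum index and repartitioning on it. Below is a hedged sketch of that idea (not an official dask recipe, and only one possible approach); it forces a computation of the per-partition minima, so it gives up some laziness:

```python
import pandas as pd
import dask
import dask.dataframe as dd

# Toy stand-in for a frame whose divisions may no longer reflect the real
# per-partition minima after merge/align_partitions.
pdf = pd.DataFrame({"x": range(12)}, index=range(12))
df = dd.from_pandas(pdf, npartitions=3)

# Compute the true minimum index of each partition (this assumes no empty
# partitions; an empty partition would yield NaN and break the divisions).
parts = df.to_delayed()
mins = dask.compute(*[p.index.min() for p in parts])
last = df.index.max().compute()

# Repartition so that division i is exactly the minimum index of partition i.
df = df.repartition(divisions=sorted(mins) + [last], force=True)
print(df.divisions)
```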
chicasprogramando/coffeemap
284736943
Title: [Component] visual-cafe-detail Question: username_0: ![visual-cafe-detail](https://user-images.githubusercontent.com/1291730/34384579-c6ea23ba-eafb-11e7-834f-19b9df6263b2.JPG) **Information:** This component is going to contain several other components. It will most likely work as a modal. @gianko could you take care of the base at the functional level? The other members' components would then be added on top of it. Answers: username_0: @gianko @lucianamendez @puribey I am going to leave this component generated and used on the **/custom-map** page so you can see how it works. The idea is that it receives as a prop the object with all the information for that particular café/venue. Status: Issue closed
AnarkisGaming/ATC-Feedback
1092807610
Title: map icons don't change status Question: username_0: after getting attacked by fauna they don't change status to hostile/occupied or undefended ![image](https://user-images.githubusercontent.com/91120787/147978403-a123d3ac-80c1-4018-91c2-59af54d0056a.png) ![image](https://user-images.githubusercontent.com/91120787/147978470-f4755c37-1f33-46d1-8a20-54184961f40a.png) Answers: username_1: The menu itself probably doesn't refresh fully / in real time. until closed/reopened. I'll have a look. username_0: there seems to be bug with outpost ownership in general... they seem to be reverting to previous owner!? [090 outpost.zip](https://github.com/AnarkisGaming/ATC-Feedback/files/7810914/090.outpost.zip) ![image](https://user-images.githubusercontent.com/91120787/148135218-87e49daa-9254-40c0-9f92-3781f22346d2.png)
projectdiscovery/nuclei-templates
688313154
Title: CVE-2017-7529: exploit Question: username_0: For one of the target, it shows it is vulnerable to `CVE-2017-7529`. How to exploit it. What is the POC for it? How `nuclei` detects that the target is vulnerable. Answers: username_0: And when I run the below script, it shows the target is not vulnerable: https://gist.githubusercontent.com/BlackVirusScript/75fae10a037c376555b0ad3f3da1a966/raw/d1cc081053636711881ea45c84e0971d5babe103/CVE-2017-7529.py username_1: Hey, you can check out https://github.com/projectdiscovery/nuclei-templates/blob/747aa48d09941369777c9c15777c2386e13cb4b6/nuclei-templates/cves/CVE-2017-7529.yaml to know how it detects and follow the reference to exploit. Status: Issue closed username_0: Manual inspection by the script shows not vulnerable, so does it mean nuclei gives false result? username_1: For one of the target, it shows it is vulnerable to `CVE-2017-7529`. How to exploit it. What is the POC for it? How `nuclei` detects that the target is vulnerable. username_1: Pinged you on the discord for the result validity. username_0: DM'ed. Please check. username_0: Please reach out to `ioribrn` , who is the author of this nuclei-template. Please check why python script is giving as not vulnerable, but nuclei shows that it is vulnerable username_1: Closing this as this is not or can not be verified, we will review this when we have more information about this issue. Status: Issue closed username_0: Why you closed this? It might be an issue in nuclei template right? username_0: I just checked, the target is running on nginx/1.16.1 and the reported CVE affect only Nginx versions from 0.5.6 up to and including 1.13.2, which means nuclei template gave a false positive result. Can we re-open the issue? username_1: For one of the target, it shows it is vulnerable to `CVE-2017-7529`. How to exploit it. What is the POC for it? How `nuclei` detects that the target is vulnerable. username_0: I believe the author has taken the reference from here: https://gist.github.com/thehappydinoa/bc3278aea845b4f578362e9363c51115 If you see, in the response it shows the Nginx version as 1.13.1 . The version of the target makes it vulnerable, not that `Range` payload that is currently being sent in the nuclei-template. username_1: This template has been removed from the master as this is verified producing false-positive results. Status: Issue closed username_0: So, now how does it validating that the host is vulnerable? `"Server: nginx"` does not mean that the host is vulnerable. It should be checking for the version. username_2: Hi @username_0, I know, but the nginx version can also be hidden in the config file with: ```conf server_tokens off; ``` At very least, it can reduce a lot of false-positive results. So users must cross-check vulnerable targets. username_0: Hi @username_2 , In this case, if the application is running on `nginx` it will report as vulnerable. So basically, we are again to square 1 where it reports the false positive.
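To illustrate the version-range point made in this thread, here is a rough detection-only sketch in Python (not part of nuclei; the helper name is made up and `requests` is assumed to be available). It only flags a target when the Server header actually reports an nginx version in the affected range 0.5.6 through 1.13.2, rather than matching on "Server: nginx" alone:

```python
import re
import requests  # assumed to be available

AFFECTED_MIN = (0, 5, 6)
AFFECTED_MAX = (1, 13, 2)

def server_version_in_affected_range(url: str) -> bool:
    """Return True only when the Server header exposes an nginx version
    inside the range affected by CVE-2017-7529."""
    server = requests.get(url, timeout=10).headers.get("Server", "")
    match = re.search(r"nginx/(\d+)\.(\d+)\.(\d+)", server)
    if not match:
        # Version hidden (e.g. server_tokens off) or not nginx: cannot decide
        # from the header alone, so do not report it as vulnerable.
        return False
    version = tuple(int(part) for part in match.groups())
    return AFFECTED_MIN <= version <= AFFECTED_MAX

print(server_version_in_affected_range("https://example.com"))
```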
gmamaladze/globalmousekeyhook
112022832
Title: How to get process's name which triggered mouse click Question: username_0: Hi, Is there a param that contains the process (id or name) which triggered the events (click, key, ...)? Answers: username_1: No, but you can get the name of the process that owns the active window somewhere else. e.g. [DllImport("user32.dll")] public static extern IntPtr GetWindowThreadProcessId(IntPtr hWnd, out uint ProcessId); [DllImport("user32.dll")] private static extern IntPtr GetForegroundWindow(); string GetActiveProcessFileName() { IntPtr hwnd = GetForegroundWindow(); uint pid; GetWindowThreadProcessId(hwnd, out pid); Process p = Process.GetProcessById((int)pid); return p.MainModule.FileName; } username_2: @username_0 did you ever find a way to get the id or name of the process that triggered the event? Status: Issue closed username_1: Hi, Is there a param that contains the process (id or name) which triggered the events (click, key, ...)?
azyuqian/POEModule
49818819
Title: Visualize accelerometer output on client side Question: username_0: Visualize accelerometer output (x, y and z) data on a laptop for demonstration Answers: username_0: Code is done. But oct2py is very slow for plotting all the data at once. Temporarily disable plotting of accelerometer and other data, only leave joystick plotting running. Status: Issue closed
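Since oct2py plotting was the bottleneck here, a possible client-side alternative is to plot the x, y and z channels directly with matplotlib. This is purely a sketch, not the project's actual code; the sample arrays below are placeholders for whatever the sensor streams:

```python
import matplotlib.pyplot as plt

# Placeholder samples standing in for the streamed accelerometer data.
samples = list(range(200))
x = [0.0] * 200
y = [0.5] * 200
z = [1.0] * 200

fig, ax = plt.subplots()
for label, series in (("x", x), ("y", y), ("z", z)):
    ax.plot(samples, series, label=label)
ax.set_xlabel("sample")
ax.set_ylabel("acceleration")
ax.legend()
plt.show()
```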
gaenseklein/slidenotes
384575959
Title: Introduce a comment tag in md-code Question: username_0: It should be possible to insert comments via //. That means: // marks the text up to the end of the line as a comment, which is not shown in the output and not interpreted in any other way. This applies everywhere md-code is implemented, but should also be respected by other modules as far as possible. Or should it apply globally? Answers: username_0: Parsing problems: the comment character // is a valid character inside sources/urls of images and links. At the same time, links and images should also be able to be commented out, I suppose. If I parse it last, all one-line elements are still parsed even when I have commented them out. This affects images, links, inlinecode, pagebreak, anchors. All other per-line elements need a line start, so they are not interpreted anyway when // is at the beginning of the line - so they cause no problems. Comments therefore logically have to be parsed before all one-line elements, while making sure they are not an image source or url. Valid urls: http:// https:// file:/// some/url//is/not/valid/or/is/it? What about //url/to/somewhere? For now I have implemented it so that we look for a colon before the //. If there is one, it is not a comment. That covers all the cases above, because after the first // of file:/// it jumps directly to the last one and not to the second one (since it has already interpreted the first // as part of the url). This way not only images and links can be used, but urls can also be written in the text (otherwise those would be interpreted as comments too). The next problem is inlinecode (code blocks should not be a problem, since they are excluded from parsing). Here we have the chicken-and-egg problem again - which should be parsed first? If I parse inlinecode first, it is possible to write // inside inlinecode. But then it is no longer possible to comment out inlinecode, right? Or rather, an inlinecode is still parsed inside the comment. Is that bad? Yes, it is. It has to be solved, otherwise the following errors occur in the editor: - errors are shown where there should be none (an unfinished inlinecode) - inlinecode is rendered inside the comment. In the presentation an error occurs, because the wrong code gets deleted by the comment, or rather the text has already changed beforehand because of the html code inserted for the inlinecode. So I probably have no choice but to check this after the comment has been applied: query the map insertedhtmlinline and overwrite all elements that lie inside the comment with a "deleted" type without html and md-code. That is not as elegant as taking them out of the array, but otherwise I would have to filter them out of all arrays, which is error-prone and possibly even slower. In perror I have to remove all errors that would lie inside a comment. That is done with a splice. Works now. username_0: The handling inside data blocks is left to the respective plugins, unless they allow md-code; in that case comments are parsed there as well. Comments are also shown in headers by default. To make it possible, e.g., to really comment out options in the header - meaning they are also not processed - they still have to be removed from the data objects, ideally right when the data object is added. Done.
username_0: @username_1 I think this is done. Please test it briefly and tell me whether the functionality matches what you have in mind. I would then close the issue. username_1: Comments should only be triggered by // at the beginning of a line and extend to the next return character (paragraph). Exception: in sections, // may appear as the last part of the markup syntax on the markup line. ```layout // here I can note a comment for myself about what I want to show with this section, without this comment being shown in the presentation``` In running text I find comments awkward, because they have no "close tag". The convention in JavaScript is also that // extends to the end of one line. Presentation programs likewise only have an extra text field for comments, no back-and-forth inside the running text. Let's not bring unnecessary complexity with additional closing tags in here. One more thing that occurred to me for the parser (I don't know how this currently looks): an empty line becomes <p>, etc. - a single line break means <br> inside <p> etc. username_0: OK, so comments only at the beginning of a line and in plugin headers. The convention in JavaScript is that everything from the comment // to the end of the line is interpreted as a comment and not as valid code. That is how I have simply implemented it for now. But a comment does not have to be at the start of a line; it can be anywhere, just not inside an open statement - e.g. "//" is not treated as a comment, nor is bla(//). We have a similar case with the sources. But if our rule is that only whole lines are interpreted as comments, except in headers of ```code - then parsing it is no problem. username_0: I have now implemented it as discussed above. Comments now only work on whole lines and inside section headers. Status: Issue closed
YunoHost-Apps/gitea_ynh
304110966
Title: Fix Readme Question: username_0: Image / links to "Integration level" and "Build Status" are broken. If I click on the image "Install with Yunohost" I'm redirected to a page stating that the app is neither maintained officially nor by the community. This makes me wonder: can I install this app yet? Also there are some issues/PRs over at https://github.com/Bridouz/gitea_ynh/issues, are they resolved here or should we re-open them to track them? Answers: username_1: Yes, because this app is not finished yet I have not added it to the application list. But I'm working on it right now username_2: @username_1 Can you add it on https://ci-apps.yunohost.org/jenkins/ please? username_1: Migrated to Framagit https://framagit.org/YunoHost-Apps/gitea_ynh/issues/2 You can login with your GitHub account Status: Issue closed username_3: Probably fixed
TheAlmightyBob/Calendars
148894218
Title: Android master event (for recurrence) returns end time of 1969 Question: username_0: (the fact that recurring event instances are being returned with the master event's values is a separate issue, #14 , but it's still possible to retrieve the master event itself by ID) In Android, a recurring event (as opposed to an event instance) has no end time. Only a start time and recurrence properties. In fact, trying to set an end time will throw an exception. This library does not expect that, so currently translates that lack of end time into an end time in 1969. Not ideal. Should we expose this and make End nullable? Or replicate the iOS/Windows model by filling in Start/End with the values from the first instance of the event? Answers: username_0: Removing this from the 1.0 milestone because: - Retrieving the master recurrence event is currently only possible via GetEventByID, and we currently do not support creating/editing recurring events, so this is a bit of an edge case. - If/when support for recurring events is added, that may dictate the solution here.
nipy/nipype
1048779899
Title: [enhancement] ICC_rep_anova could be significantly faster than it is when running on images Question: username_0: ### Summary ICC_rep_anova calculates ICC(3,1) on a table of subjects and repeated measures. Setting up the design matrix is far more computationally expensive than actually calculating ICC from the input data (by around a factor of 100 for 2000 subjects and 2 repeated measures). For repeated calculations (such as calculating ICC on every voxel of a set of images), this goes much more slowly than it should. I made a modified version of ICC_rep_anova for a project I'm working on where I needed the speedup for the calculation to be practical; It's simple enough to cache the design matrix calculations and put in some if statements to decide if they need to be calculated/recalculated. I don't know if this is something anybody else actually ever wants to do, but if so, it's an easy fix. ### Actual behavior As implemented, the design matrix setup is performed on each call. ### Expected behavior The setup only depends on the shape of the input table. When running the calculation on an image, this shape will be the same for each voxel, so does not need to be redone. If the routine is uninitialized, the design information should be calculated and cached. If the routine is called again, and the design matrix is unchanged, this information should be retrieved and used. If the design matrix is changed, the cached information should be discarded and recalculated. ### Script/Workflow details [A proposed fix is here in lines 89 and 96-123](https://github.com/username_0/nipype/blob/master/nipype/algorithms/icc.py) This seems to pass all of the tests. Answers: username_1: Hi Blaise. Yes, we should definitely refactor this for efficiency. Would you care to open a PR? That will make it easier to comment with specific suggestions. username_0: Sure thing. Just following the proper order from the guidelines (issue first, PR next!) username_1: Ah, possibly we should revise those. Personally I solve my own problems and then open a PR to discuss whether they can be merged upstream. The issue -> PR pipeline makes more sense to me when I haven't already written code and wouldn't bother if I knew it would never get merged. username_0: I realized that there is one off topic addition to this PR - I added a nan_to_num to the final ICC calculation because I was very occasionally getting back NaNs for some particularly weird data. I'm happy to lose that if that's not considered best practice (i.e. maybe you should get the NaN back so that you know your input data is somehow unsound).
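A minimal sketch of the caching idea described in this issue (the function and cache names here are illustrative, not nipype's actual API): the design-matrix pieces only depend on the (n subjects, n repeats) shape of the input table, so they can be computed once per shape and reused for every voxel of an image.

```python
import numpy as np

_design_cache = {}

def _design_for_shape(shape):
    """Compute (and cache) the shape-dependent part of the ICC(3,1) setup."""
    if shape not in _design_cache:
        nb_subjects, nb_conditions = shape
        x = np.kron(np.eye(nb_conditions), np.ones((nb_subjects, 1)))  # sessions effect
        x0 = np.tile(np.eye(nb_subjects), (nb_conditions, 1))          # subjects effect
        X = np.hstack([x, x0])
        # The pseudo-inverse/projection is the expensive piece; it depends
        # only on the table shape, not on the voxel values.
        projection = X @ np.linalg.pinv(X.T @ X) @ X.T
        _design_cache[shape] = projection
    return _design_cache[shape]

def icc_for_voxel(Y):
    projection = _design_for_shape(Y.shape)
    predicted = projection @ Y.flatten("F")
    # ...continue with the cheap per-voxel part of the ICC(3,1) computation...
    return predicted
```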
wilicc/gpu-burn
604300911
Title: [Makefile:10: drv] Error 127 Question: username_0: ``` git clone https://github.com/username_1/gpu-burn Cloning into 'gpu-burn'... remote: Enumerating objects: 19, done. remote: Counting objects: 100% (19/19), done. remote: Compressing objects: 100% (15/15), done. remote: Total 66 (delta 9), reused 11 (delta 4), pack-reused 47 Unpacking objects: 100% (66/66), 25.71 KiB | 110.00 KiB/s, done. [nikolay@localhost Downloads]$ cd gpu-burn [nikolay@localhost gpu-burn]$ make bash: make: command not found... Install package 'make' to provide command 'make'? [N/y] y * Waiting in queue... * Loading list of packages.... The following packages have to be installed: guile22-2.2.6-2.fc31.x86_64 A GNU implementation of Scheme for application extensibility make-1:4.2.1-15.fc31.x86_64 A GNU tool which simplifies the build process for users Proceed with changes? [N/y] y * Waiting in queue... * Waiting for authentication... * Waiting in queue... * Downloading packages... * Requesting data... * Testing changes... * Installing packages... PATH=/home/nikolay/.local/bin:/home/nikolay/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/nikolay/.composer/vendor/bin:/var/lib/snapd/snap/bin:.:/usr/bin:/home/nikolay/.local/bin:/home/nikolay/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/nikolay/.composer/vendor/bin:/var/lib/snapd/snap/bin /usr/local/cuda/bin/nvcc -I/usr/local/cuda/include -arch=compute_30 -ptx compare.cu -o compare.ptx /bin/sh: /usr/local/cuda/bin/nvcc: No such file or directory make: *** [Makefile:10: drv] Error 127 ``` Answers: username_1: @username_0 You need to install CUDA. username_2: same problem!! I was installed cuda-11.0, ubuntu 18.04 fix this problem with ln -s /usr/local/cuda-11.0 /usr/local/cuda username_3: See also, https://github.com/username_1/gpu-burn/pull/29 username_4: The requirement of CUDA should be in the README in my opinion. username_5: I'm still getting this with CUDA 11.1 . Install via lambdalabs Ubuntu 20.04 changed Makefile: CUDAPATH to /usr/include/cuda and NVCC to /usr/bin/nvcc to work around.
valor-software/ngx-bootstrap
1042542337
Title: Icons on the All Components page don't match the type of the site components Question: username_0: **Environment**: OS Linux mint 20.2 **Precondition**: Go to the site by using: https://ngx-bootstrap-demo--pr6305-feat-new-design-hk2yeo8v.web.app/ngx-bootstrap/#/ Steps to reproduce: 1. Click on the "Get started" button on the Start Page. > You reach the sites' "Start Page" 2. Click on the "Components" dropdowns in the left side menu > The Components Page opens **Expected Result**: The Accordion, Alerts, Dropdowns, Popover, Ppogressbar, Rating, Sortable, Tabs, Timepicker, Collapse, Typeahead, and Tooltip icons on the All Components page don't match the type of the presented on the site components **Actual Result**: The Accordion, Alerts, Dropdown, Popover, Ppogressbar, Rating, Sortable, Tabs, Timepicker, Collapse, Typeahead, and Tooltip icons on the All Components page correspond to the type of components shown on the site. Answers: username_0: Examples of better matching icons: 1. Accordion component icon: https://codyhouse.co/assets/img/gems/hero/multi-level-accordion-menu-featured.svg: 2. Alert component icon: https://www.jqueryscript.net/images/Bootstrap-Alerts-Plugin-jQuery-bsalert.jpg; 3. Dropdowns component icon: https://cms-assets.tutsplus.com/uploads/users/30/posts/35212/preview_image/dropdown.png; 4. Popover component icon: https://tutorialsplane.com/wp-content/uploads/2015/05/popoverplugin.png; 5. Progressbar component icon: https://d2d3qesrx8xj6s.cloudfront.net/img/screenshots/3c9c6afb4afed6ce8b6b674d6904e87e50468f14.jpg; 6. Rating component icon: https://d2d3qesrx8xj6s.cloudfront.net/img/screenshots/3c9c6afb4afed6ce8b6b674d6904e87e50468f14.jpg; 7. Rating component icon: https://www.jqueryscript.net/images/Lightweight-Star-Rating-Plugin-jQuery-Glyphicons-J-Rating.jpg; 8. Sortaible component icon: https://mdbcdn.b-cdn.net/wp-content/uploads/2018/10/plugin-sortable-jquery.jpg; 9. Tabs component icon: https://screenshots.imgix.net/react-bootstrap/react-bootstrap/tabs/1.0.0/5c87ddd251d445001a0e1e57/86cc96cc-a164-47c3-a824-eca38df2e352.png; 10. Timepicker component icon: https://i.stack.imgur.com/oVTZD.jpg; 11. Collapse component icon: https://csharpcorner-mindcrackerinc.netdna-ssl.com/UploadFile/BlogImages/12092017035706AM/accordion1.png; 12. Typeahead component icon: https://i.stack.imgur.com/ehSGK.png; 13. Tooltip component icon: https://res.cloudinary.com/practicaldev/image/fetch/s--_PU1Snvq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zmikkjwvkip7uca2qnfk.png Status: Issue closed
rust-lang/cargo
925120948
Title: `cargo tree` doesn't handle transitive duplications properly Question: username_0: **Problem** There are some situations where `cargo tree` will mark a package as a duplicate `(*)` when it probably shouldn't (and shows the wrong features with `--no-dedupe`). This arises when a package is built twice with the same features, but with different dependencies. A real-world example is when looking at the following: ```toml [package] name = "foo" version = "0.1.0" resolver = "2" [dependencies] diesel = { version = "=1.4.7", features = ["postgres"] } diesel_migrations = "=1.4.0" ``` Running `cargo tree -f '{p} {f}'` results in: ``` ~/Proj/rust/cargo/scratch/diesel-issue> cargo tree -f '{p} {f}' diesel-issue v0.1.0 (/Users/eric/Proj/rust/cargo/scratch/diesel-issue) ├── diesel v1.4.7 32-column-tables,bitflags,default,postgres,pq-sys,with-deprecated │ ├── bitflags v1.2.1 default │ ├── byteorder v1.4.3 default,std │ ├── diesel_derives v1.4.1 (proc-macro) default,postgres │ │ ├── proc-macro2 v1.0.27 default,proc-macro │ │ │ └── unicode-xid v0.2.2 default │ │ ├── quote v1.0.9 default,proc-macro │ │ │ └── proc-macro2 v1.0.27 default,proc-macro (*) │ │ └── syn v1.0.73 clone-impls,default,derive,extra-traits,fold,full,parsing,printing,proc-macro,quote │ │ ├── proc-macro2 v1.0.27 default,proc-macro (*) │ │ ├── quote v1.0.9 default,proc-macro (*) │ │ └── unicode-xid v0.2.2 default │ └── pq-sys v0.4.6 └── diesel_migrations v1.4.0 default ├── migrations_internals v1.4.1 default │ └── diesel v1.4.7 32-column-tables,bitflags,default,postgres,pq-sys,with-deprecated (*) └── migrations_macros v1.4.2 (proc-macro) default ├── migrations_internals v1.4.1 default (*) <-- PROBLEM IS HERE ├── proc-macro2 v1.0.27 default,proc-macro (*) ├── quote v1.0.9 default,proc-macro (*) └── syn v1.0.73 clone-impls,default,derive,extra-traits,fold,full,parsing,printing,proc-macro,quote (*) ``` The problem is the diesel-issue → diesel_migrations → migrations_macros → migrations_internals. It has a `(*)` to indicate that it is duplicated. However, migrations_internals is built twice, and this is deceiving because the second migrations_internals has a dependency on diesel with no features. Running with `--no-dedupe` is even worse, because it shows the wrong features for `diesel` under migrations_internals. **Steps** The following is an example in Cargo's testsuite format: ```rust #[cargo_test] fn dedupe_transitive_features2() { Package::new("differs", "1.0.0") .feature("some-feat", &[]) .publish(); Package::new("shared", "1.0.0") .dep("differs", "1.0") .publish(); [Truncated] ├── bar v0.1.0 [..] │ [build-dependencies] │ └── shared v1.0.0 [] │ └── differs v1.0.0 [] ├── differs v1.0.0 [some-feat] └── shared v1.0.0 [] └── differs v1.0.0 [some-feat] ", ) .run(); ``` **Possible Solution(s)** The issue is that `cargo tree` doesn't have the same logic that was added in #8701 to accommodate dependencies that are the same, but link to different dependencies. I have toyed with the idea of changing `cargo tree` to use the `UnitGraph` computed for a normal build instead of trying to recreate how some of these things are computed. There are some complexities and downsides to that approach, but I continue to feel that trying to replicate this logic in multiple places is not a good idea. **Notes** Output of `cargo version`: cargo 1.54.0-nightly (44456677b 2021-06-12) Answers: username_1: I also feel not right to replicate the logic. Love to know the downsides about reuse `UnitGraph`. Could you explain more? 
username_0: The main problem is that UnitGraph is based on compilation units, whereas `cargo tree` wants the dependencies between packages. There are dependencies in UnitGraph that need to be filtered out. For example, all the intra-package dependencies would need to be removed. Another example is build scripts, where the "run" unit depends on all *transitive* build scripts being run. Those are not real dependencies to be displayed. And conversely there is information missing from UnitGraph (like which dependencies are build-dependencies or dev-dependencies). That information is lost since the Resolve is discarded after the graph is computed. Another problem is that the Graph built for `cargo tree` has a fundamentally different structure, and changes based on command-line options. All that code for build the graph will probably need to stay to handle all of that. A concern is that attempting to share the code will end up needing *more* code to handle the impedance mismatch, and will be harder to maintain. So, I think this is a fairly hard problem, mostly trying to balance the maintainability of the code. I don't really know what the right solution is.
SAP/node-rfc
499389366
Title: [Request] Connect/Disconnected event emitter Question: username_0: It would be nice to have an event that can be subscribed to for the connection state. Right now I'm not sure how to tell when the connection is severed (which happens when the remote system is rebooted, or if the network drops for example). Answers: username_1: The SAP NWRFC SDK currently does not expose a mechanism to notify the consumer on connection break-up, backend system reboot etc. Could you please create the feature request for `SAP Cloud Platform Connectivity Service` subtopic of the [SAP Cloud Platform - Integration Suite](https://influence.sap.com/sap/ino/#/campaign/2282) campaign. Please mention the SAP NWRFC SDK in the request title and share the request link here.
datalad/datalad
223880568
Title: New-style command handling needs logic to ensure that a message is only logged once Question: username_0: My initial approach to pop the messages off the result dict would have worked. However, in our tests we can benefit from keeping them in -- avoids all the log swallowing and regex searches. Needs a good idea how to achieve both. It hasn't come to me yet. Answers: username_0: Here is my solution: if a `logger` is in the result it will be logged. If not, not. Hence when a message has been logged the logger needs to go. This is different from the status quo, where a default logger is used if none is given. IMHO, it makes perfect sense to do it this way: give a logger if you want logging, withhold it if not. Status: Issue closed
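A tiny illustrative sketch of the proposal above (generic Python, not datalad's actual API): a result is only logged while it still carries a 'logger' entry, and the logger is popped afterwards, so the same message cannot be emitted twice while the rest of the result dict stays available to the tests.

```python
def log_result_once(result):
    # Log only if a logger was supplied; remove it so a second pass over the
    # same result cannot log the message again. The 'message' key stays in
    # the dict for test assertions.
    logger = result.pop('logger', None)
    if logger is not None:
        logger(result.get('message', ''))
    return result

# Example: the first call logs, the second call is silent.
res = {'status': 'ok', 'message': 'installed dataset', 'logger': print}
log_result_once(res)
log_result_once(res)
```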
fullcalendar/fullcalendar
116908983
Title: addListener today button but never work Question: username_0: This is a demo: http://plnkr.co/edit/ZYh6EINRbgSfc4c6tRlK?p=preview I add listeners for prev, next and today, but today never works. I found the code at https://github.com/fullcalendar/fullcalendar/blob/master/dist/fullcalendar.js#L9234 It looks like the button is disabled before the event fires. Answers: username_1: If the today button is NOT already disabled at the moment when the user clicks the today button, then this should be considered a legit bug. a `setTimeout` will probably solve it username_2: @username_1 Where should I add the setTimeout function to solve the issue of the today button not triggering the event? My code is currently as follows: `element.find('.fc-today-button').click(function(){ console.info("today"); var view = element.fullCalendar('getView'); scope.trigger({start:view.start._d.getFullYear(),end:view.end._d.getFullYear()}); });` username_1: nevermind, setTimeout probably won't work. the button is being disabled before the event is able to bubble up to the document, so event delegation probably won't work. i'd try doing what you're doing in that last snippet you posted.. attaching the handler directly to the element (probably via viewRender). tho i can't guarantee it will work username_3: @username_1 Any update on this? username_1: fixed in v5: https://codepen.io/username_1/pen/vYLZOMZ?editable=true&editors=001 Status: Issue closed
deeplearning4j/deeplearning4j
324348897
Title: Nd4J: Broadcast ops don't have shape validation Question: username_0: ``` INDArray x = Nd4j.create(10,10,60); INDArray y = Nd4j.ones(10,6); Nd4j.getExecutioner().exec(new BroadcastMulOp(x,y,x, 0, 2)); ``` Array y can't be broadcast to array x shape - but instead of an error, we just get a ton of shape errors logged (but no actual exception at the java level - everything continues along it's way as if nothing was wrong...) ``` H: Index 0 [10] must not be >= shape[10]. H: Index 0 [10] must not be >= shape[10]. H: Index 0 [10] must not be >= shape[10]. H: Index 0 [10] must not be >= shape[10]. H: Index 0 [10] must not be >= shape[10]. H: Index 0 [10] must not be >= shape[10]. H: Index 0 [11] must not be >= shape[10]. ``` Answers: username_1: Already fixed, exception is thrown: java.lang.IllegalStateException: Shapes do not match: dimensions[1] - x[2] must match y[1], x shape [10, 10, 60], y shape [10, 6], dimensions [0, 2] Status: Issue closed
hazzik/DelegateDecompiler
213409296
Title: The method 'Skip' is only supported for sorted input in LINQ to Entities. Question: username_0: Receiving the following error message when using the Decompile() method with a Kendo MVC UI: "The method 'Skip' is only supported for sorted input in LINQ to Entities.". I am not using the Skip method anywhere in the code. ` return Json(EmpLiteRepo.GetCompanyListForSelection().ToDataSourceResult(request), JsonRequestBehavior.AllowGet);` The definition for GetCompanyListForSelection() is as follows: ``` public IQueryable<CompanySelection> GetCompanyListForSelection() { try { var query = ( from company in _getCompanyQry() select new CompanySelection { CompanyCode = company.CompanyCode, CompanyName = company.CompanyNameDisplay, //company.CompanyCode + " - " + company.CompanyName }).Decompile(); return query; } catch (Exception ex) { throw ex; } } ``` Where _getCompanyQry() is a helper function that loads permissions. CompanyNameDisplay is a reference to property that must be decompiled and lifted. It is working in other similar queries where it is referenced, but for some reason breaks on this one. Answers: username_1: I don't know what is `ToDataSourceResult` but I assume this is Telekrik's extension. I have to assume that it adds Skip&Take to the `IQueryable<>`. What it is saying is that you need to add default sort order to the query. Status: Issue closed username_1: Also, It seems that you are using EF, so, please use DelegateDecompiler.EntityFramework and it's `DecompileAsync` method. username_0: I can confirm that adding an OrderBy clause to the code has resolved the error. Thanks, username_1! username_1: np
Praqma/helmsman
547486077
Title: Helmsman CRD's in --dry-run mode Question: username_0: Hi, I have run my helmsman with these flags "--dry-run --show-diff --suppress-diff-secrets --keep-untracked-releases" to check if prometheus-operator will be installed correctly. After that check I got the message "WARNING: Validation skipped because CRDs are not installed". It seems like helmsman in dry-run mode doesn't install CRDs. Do you have a workaround for it? Answers: username_1: that's probably because those CRDs are installed with a hook; a dry run won't trigger the hooks, and it shouldn't, or it wouldn't be a dry run. you can install those CRDs yourself manually, helm isn't great at managing CRDs anyway, what I normally do is have the CRDs in a separate helm chart that I install first but I think helm v3 has some solution for that, I haven't investigated that yet. username_2: I've got the same opinion on how to manage CRDs with Helmsman as @username_1. We usually have a helm chart named crds and we always install it before anything else, and I'm not letting other charts install their CRDs on their own. Not as clean as it should be, but pays the bills for now. username_0: It helped. Thanks guys! Status: Issue closed
cosmos/cosmos-sdk
728218444
Title: Register MsgServer type URLs in AppModuleBasic.RegisterInterfaces Question: username_0: ## Context In #7519, `MsgServer` request types are registered with custom type URLs in the `InterfaceRegistry` in `AppModule.RegisterServices`. Clients will not have these URLs registered just by calling `AppModuleBasic.RegisterInterfaces` so the registration needs to happen earlier for clients (CLI and grpc-gateway). ## Implementation First add a function `RegisterMsgServiceDesc(InterfaceRegistry, grpc.ServiceDesc)` which will be used in `RegisterInterfaces`: ```go // x/bank/types/codec.go func RegisterInterfaces(registry types.InterfaceRegistry) { ... RegisterMsgServiceDesc(registry, _Msg_serviceDesc) // _Msg_serviceDesc is generated by proto-gen } ``` `RegisterMsgServiceDesc` should do the same thing under the hood that `MsgServiceRouter.RegisterServices` is now doing under the hood: ```go // NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. // This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. // We use a no-op interceptor to avoid actually calling into the handler itself. _, _ = methodHandler(nil, context.Background(), func(i interface{}) error { msg, ok := i.(proto.Message) if !ok { // We panic here because there is no other alternative and the app cannot be initialized correctly // this should only happen if there is a problem with code generation in which case the app won't // work correctly anyway. panic(fmt.Errorf("can't register request type %T for service method %s", i, fqMethod)) } msr.interfaceRegistry.RegisterCustomTypeURL((*sdk.MsgRequest)(nil), fqMethod, msg) return nil }, noopInterceptor) ``` `MsgServiceRouter.RegisterServices` should then be updated to return an error if interfaces are not already registered. It can check what is registered using the `InterfaceRegistry.Resolve` method. That error should be very clear and explain how to use `RegisterMsgServiceDesc`.
ClickHouse/ClickHouse
1111208284
Title: same select on distributed table elapse takes different time, one is 80x slower than another... Question: username_0: 17:06:59.706266 [ 5911 ] <Trace> ContextAccess (default): Access granted: xxx 17:06:59.706787 [ 5911 ] <Debug> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Key condition: unknown, (column 1 in ['xxx', 'xxx']), and, unknown, and, unknown, and, (column 0 in [1, 1]), and, unknown, (column 1 in ['xxx', 'xxx']), and, unknown, and, unknown, and, and 17:06:59.706814 [ 5911 ] <Debug> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): MinMax index condition: (column 0 in [19011, 19011]), unknown, and, unknown, and, unknown, and, unknown, and, (column 0 in [19011, 19011]), unknown, and, unknown, and, unknown, and, and 17:06:59.707140 [ 5911 ] <Trace> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Running binary search on index range for part 20220119_0_6744_5 (236818 marks) 17:06:59.707170 [ 5911 ] <Trace> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Found (LEFT) boundary mark: 0 17:06:59.707192 [ 5911 ] <Trace> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Found (RIGHT) boundary mark: 2671 17:06:59.707198 [ 5911 ] <Trace> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Found continuous range in 35 steps 17:06:59.710851 [ 5911 ] <Debug> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Index `c1_minmax_index` has dropped 0/891 granules. 17:06:59.710863 [ 5911 ] <Debug> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Index `test_minmax_index` has dropped 0/891 granules. 17:06:59.710867 [ 5911 ] <Debug> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Selected 1/44 parts by partition key, 1 parts by primary key, 2671/236818 marks by primary key, 2671 marks to read from 1 ranges 17:06:59.711018 [ 5911 ] <Trace> default.table_test (889c1c33-c0f9-43b6-ba77-4a88119470ec) (SelectExecutor): Reading approx. 21880832 rows with 16 streams 17:06:59.711895 [ 5911 ] <Trace> InterpreterSelectQuery: FetchColumns -> WithMergeableState 17:06:59.712406 [ 5911 ] <Trace> InterpreterSelectQuery: WithMergeableState -> Complete 17:06:59.748571 [ 3536 ] <Trace> StorageDistributed (table_test_all): (clickhousehost1:9000) Cancelling query because enough data has been read 17:06:59.897405 [ 22032 ] <Trace> StorageDistributed (table_test_all): (clickhousehost2:9000) Cancelling query because enough data has been read 17:06:59.920821 [ 31850 ] <Trace> StorageDistributed (table_test_all): (clickhousehost3:9000) Cancelling query because enough data has been read 17:07:01.429952 [ 22032 ] <Trace> StorageDistributed (table_test_all): (clickhousehost4:9000) Cancelling query because enough data has been read 17:07:06.158797 [ 5911 ] <Information> executeQuery: **Read 131072 rows, 123.07 MiB in 6.457007691 sec., 20299 rows/sec., 19.06 MiB/sec.** 17:07:06.158845 [ 5911 ] <Debug> MemoryTracker: Peak memory usage (for query): 635.66 MiB. actually they both scan 131072 rows, seems the difference is only RemoteQueryExecutor schedule, do you have any idea for this? Answers: username_1: Is it reproducible ? username_0: yes, by now it exists always,,, Continuous insert into this table ,and with some other query being executed, scenario is very simple.. anything else do you need.. username_2: You should look at the query metrics and do query profiling. 
https://clickhouse.com/docs/en/operations/system-tables/query_log/ - see ProfileEvents https://clickhouse.com/docs/en/operations/optimizing-performance/sampling-query-profiler/
username_0: Here are the ProfileEvents:
'SelectQuery' 1
'FileOpen' 4
'Seek' 2
'ReadBufferFromFileDescriptorRead' 6
'ReadBufferFromFileDescriptorReadBytes' 3826612
'ReadCompressedBytes' 37576
'CompressedReadBufferBlocks' 3
'CompressedReadBufferBytes' 131082
'IOBufferAllocs' 8
'IOBufferAllocBytes' 5006582
'FunctionExecute' 171
'MarkCacheHits'
'MarkCacheMisses'
'ReadBackoff'
'CreatedReadBufferOrdinary' 4
'DiskReadElapsedMicroseconds' 1030
'NetworkSendElapsedMicroseconds' 1459
'SlowRead'
'SelectedParts' 1
'SelectedRanges' 1
'SelectedMarks' 2671
'SelectedRows'
'SelectedBytes'
'ContextLock' 280
'RWLockAcquiredReadLocks' 3
'RealTimeMicroseconds' 6457298
'UserTimeMicroseconds' 13058
'SystemTimeMicroseconds' 5122
'SoftPageFaults' 50
I'm trying to get an understanding of them... I would be glad if you could let me know which one looks abnormal. As of now it seems 'ContextLock' of 280 is a little high... username_0: The sampling profiler shows that 80-90% of the time of the three slower executions is spent waiting here: https://github.com/ClickHouse/ClickHouse/blob/b92692e00103113b26f9731de8c93d7bcb9ac84e/src/Processors/Executors/PipelineExecutor.cpp#L561 username_1: Disk is overloaded
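To act on username_2's suggestion, the fast and slow runs of this query can be compared directly from `system.query_log`. This is only a minimal sketch: it assumes a recent ClickHouse where `ProfileEvents` is a Map column (older releases expose `ProfileEvents.Names` / `ProfileEvents.Values` arrays instead), and the LIKE filter must be adapted to the real query text.
```sql
SELECT
    event_time,
    query_id,
    query_duration_ms,
    ProfileEvents['SelectedMarks']               AS selected_marks,
    ProfileEvents['ContextLock']                 AS context_lock,
    ProfileEvents['DiskReadElapsedMicroseconds'] AS disk_read_us,
    ProfileEvents['RealTimeMicroseconds']        AS real_time_us
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_date = today()
  AND query LIKE '%table_test_all%'
ORDER BY event_time DESC
LIMIT 10;
```
Lining up the fast and slow rows this way makes it obvious whether the extra six seconds show up in read counters, lock counters, or (as the profiler above suggests) purely in wall-clock wait time.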
clearlinux/distribution
483217680
Title: Please update `stack` to 2.1.1 Question: username_0: Current version of `stack` is 1.9.3, but the latest version is 2.1.1. Answers: username_1: In fact there's 2.1.3 already username_2: I integrated the update to 2.1.3, and it should be available within a couple of days. username_2: The update to 2.1.3 landed in release `31030`. Status: Issue closed
alibaba/git-repo-go
1055784622
Title: Under the SSH (git) protocol, when a commit contains LFS objects, push works but `git pr` fails Question: username_0:
➜ git-repo-lfs-test git:(master) git pr
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
batch request: invalid character '0' after top-level value
error: failed to push some refs to 'codeup.aliyun.com:606bafcd32fceefde379cc0e/git-repo-lfs-test.git'
----------------------------------------------------------------------
[FAILED] ./ master (upload failed: exit status 1)
With the HTTPS protocol, `git pr` works fine.
Answers: username_1: This can be reproduced with `git pr -vv`; it looks like a git-lfs problem with authentication and push over the SSH protocol.
```
$ GIT_TRACE=1 GIT_TRACE_PACKET=1 git pr -vv
...
09:08:09.111199 git.c:742 trace: exec: git-lfs pre-push ssh <EMAIL>:gotgit/gotgit/git-lfs-test.git
09:08:09.111894 run-command.c:668 trace: run_command: git-lfs pre-push ssh <EMAIL>:gotgit/gotgit/git-lfs-test.git
09:08:09.125115 trace git-lfs: exec: git 'version'
09:08:09.148006 trace git-lfs: exec: git '-c' 'filter.lfs.smudge=' '-c' 'filter.lfs.clean=' '-c' 'filter.lfs.process=' '-c' 'filter.lfs.required=false' 'rev-parse' 'HEAD' '--symbolic-full-name' 'HEAD'
09:08:09.163391 trace git-lfs: exec: git 'config' '-l'
09:08:09.171766 trace git-lfs: pre-push: refs/heads/jx/ssh-protocol 77adac519753891c0c3d7bffe4ff72ec90dcbcab refs/for/master/jx/ssh-protocol 0000000000000000000000000000000000000000
09:08:09.948407 trace git-lfs: tq: running as batched queue, batch size of 100
09:08:09.948642 trace git-lfs: run_command: git rev-list --objects --ignore-missing --not --remotes=ssh --stdin --
09:08:09.960441 trace git-lfs: tq: sending batch of size 1
09:08:09.960687 trace git-lfs: run_command: sh -c ssh -o SendEnv=AGIT_FLOW -- <EMAIL> 'git-lfs-authenticate gotgit/gotgit/git-lfs-test.git upload'
09:08:10.567195 trace git-lfs: ssh: <EMAIL> failed, error: invalid character '0' after top-level value, message: (try: 0/6)
09:08:10.567336 trace git-lfs: run_command: sh -c ssh -o SendEnv=AGIT_FLOW -- <EMAIL> 'git-lfs-authenticate gotgit/gotgit/git-lfs-test.git upload'
09:08:11.339378 trace git-lfs: ssh: <EMAIL> failed, error: invalid character '0' after top-level value, message: (try: 1/6)
09:08:11.339472 trace git-lfs: run_command: sh -c ssh -o SendEnv=AGIT_FLOW -- <EMAIL> 'git-lfs-authenticate gotgit/gotgit/git-lfs-test.git upload'
09:08:12.267201 trace git-lfs: ssh: <EMAIL> failed, error: invalid character '0' after top-level value, message: (try: 2/6)
09:08:12.267309 trace git-lfs: run_command: sh -c ssh -o SendEnv=AGIT_FLOW -- <EMAIL> 'git-lfs-authenticate gotgit/gotgit/git-lfs-test.git upload'
09:08:13.059273 trace git-lfs: ssh: <EMAIL> failed, error: invalid character '0' after top-level value, message: (try: 3/6)
09:08:13.059384 trace git-lfs: run_command: sh -c ssh -o SendEnv=AGIT_FLOW -- <EMAIL> 'git-lfs-authenticate gotgit/gotgit/git-lfs-test.git upload'
09:08:13.714400 trace git-lfs: ssh: <EMAIL> failed, error: invalid character '0' after top-level value, message: (try: 4/6)
09:08:13.714468 trace git-lfs: run_command: sh -c ssh -o SendEnv=AGIT_FLOW -- <EMAIL> 'git-lfs-authenticate gotgit/gotgit/git-lfs-test.git upload'
09:08:14.405591 trace git-lfs: ssh: <EMAIL> failed, error: invalid character '0' after top-level value, message: (try: 5/6)
batch request: invalid character '0' after top-level value
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
error: failed to push some refs to 'codeup.aliyun.com:gotgit/gotgit/git-lfs-test.git'
09:08:14.408263 pkt-line.c:80 packet: push> 0000
```
username_2: There is a bug in how the server-side SSH parses the AGIT_FLOW environment variable. A fix has been tested and will ship in next week's release window; until then please use the HTTP protocol. username_1: Verified that yesterday's release has fixed this issue. Thanks @username_0 for the report. Status: Issue closed
angular/angular
469953471
Title: Ivy preview: ERROR in Unable to write a reference to AngularEditorComponent Question: username_0: I try to build my library with Ivy and got this error: ``` Compiling @username_0/angular-editor : es2015 as esm2015 10% building 3/3 modules 0 activei 「wds」: Project is running at http://localhost:4200/webpack-dev-server/ i 「wds」: webpack output is served from / i 「wds」: 404s will fallback to //index.html chunk {main} main.js, main.js.map (main) 2.02 kB [initial] [rendered] chunk {polyfills} polyfills.js, polyfills.js.map (polyfills) 694 bytes [initial] [rendered] chunk {runtime} runtime.js, runtime.js.map (runtime) 6.09 kB [entry] [rendered] chunk {styles} styles.js, styles.js.map (styles) 17.6 kB [initial] [rendered] chunk {vendor} vendor.js, vendor.js.map (vendor) 339 kB [initial] [rendered] Date: 2019-07-18T18:36:18.554Z - Hash: 8001275a78f05bdf7f5b - Time: 25892ms ERROR in Unable to write a reference to AngularEditorComponent in D:/Go/angular-editor/projects/angular-editor/src/lib/angular-editor.component.ts from D:/Go/angular-editor/projects/angular-editor/src/lib/angular-editor.module.ts ** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ ** i 「wdm」: Failed to compile. i 「wdm」: Compiling... Date: 2019-07-18T18:36:20.396Z - Hash: 8001275a78f05bdf7f5b 5 unchanged chunks Time: 1425ms ERROR in Unable to write a reference to AngularEditorComponent in D:/Go/angular-editor/projects/angular-editor/src/lib/angular-editor.component.ts from D:/Go/angular-editor/projects/angular-editor/src/lib/angular-editor.module.ts i 「wdm」: Failed to compile. D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\transform\src\compilation.js:228 finally { if (e_2) throw e_2.error; } ^ Error: Unable to write a reference to AngularEditorComponent in D:/Go/angular-editor/projects/angular-editor/src/lib/angular-editor.component.ts from D: /Go/angular-editor/projects/angular-editor/src/lib/angular-editor.module.ts at ReferenceEmitter.emit (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\imports\src\emitter.js:55:19) at Object.toR3Reference (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\annotations\src\util.js:134:31) at NgModuleDecoratorHandler._toR3Reference (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\annotations\src\ng_module.js:288:31) at D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\annotations\src\ng_module.js:132:83 at Array.map (<anonymous>) at NgModuleDecoratorHandler.analyze (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\annotations\src\ng_module.js:132:48) at analyze (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\transform\src\compilation.js:172:60) at _loop_1 (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\transform\src\compilation.js:212:25) at analyzeClass (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\transform\src\compilation.js:220:25) at visit (D:\Go\angular-editor\node_modules\@angular\compiler-cli\src\ngtsc\transform\src\compilation.js:234:21) ``` Whats wrong? https://github.com/username_0/angular-editor/tree/ivy-preview Answers: username_1: Related/dup https://github.com/angular/angular/issues/29361 username_2: Closing as a dup. Status: Issue closed
ory/keto
332100060
Title: Roles without members are not shown in the roles list Question: username_0: Hello! In our case we have some roles without members, and they are not shown in the roles list; but if you request such an empty role's detail by its id, everything works fine. Answers: username_1: I think this is because keto selects roles from the keto_role_member table, and not from the keto_role table https://github.com/ory/keto/blob/48dd9f1ffbeaa99ac8dc27085c5a50f9244bf9c3/role/manager_sql.go#L63-L70 https://github.com/ory/keto/blob/48dd9f1ffbeaa99ac8dc27085c5a50f9244bf9c3/role/manager_sql.go#L185-L191 username_2: Yeah, seems like a bug! Thanks for reporting and investigating! Status: Issue closed
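To illustrate username_1's point, the listing behaviour comes down to which table drives the query. A sketch of the two shapes, using assumed table and column names `keto_role(id)` and `keto_role_member(role_id, member)` purely for illustration (the real schema in manager_sql.go may differ):
```sql
-- Driven by the membership table: roles with zero members can never appear.
SELECT DISTINCT role_id FROM keto_role_member LIMIT 10 OFFSET 0;

-- Driven by the role table: empty roles are listed too, members attached when present.
SELECT r.id, m.member
FROM keto_role AS r
LEFT JOIN keto_role_member AS m ON m.role_id = r.id
ORDER BY r.id
LIMIT 10 OFFSET 0;
```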
ros-visualization/python_qt_binding
147294846
Title: Installation fails when alone in workspace Question: username_0: This may or may not be a duplicated of #18, but it definitely happens with regular old catkin_make: ``` mkdir -p pyqt_ws/src && cd pyqt_ws/src git clone https://github.com/ros-visualization/python_qt_binding.git cd .. source /opt/ros/indigo/setup.bash catkin_make install ``` Result: ``` Install the project... -- Install configuration: "" -- Installing: /home/administrator/pyqt_ws/install/_setup_util.py -- Installing: /home/administrator/pyqt_ws/install/env.sh -- Installing: /home/administrator/pyqt_ws/install/setup.bash -- Installing: /home/administrator/pyqt_ws/install/setup.sh -- Installing: /home/administrator/pyqt_ws/install/setup.zsh -- Installing: /home/administrator/pyqt_ws/install/.rosinstall -- Installing: /home/administrator/pyqt_ws/install/lib/pkgconfig/python_qt_binding.pc -- Installing: /home/administrator/pyqt_ws/install/share/python_qt_binding/cmake/python_qt_binding-extras.cmake -- Installing: /home/administrator/pyqt_ws/install/share/python_qt_binding/cmake/python_qt_bindingConfig.cmake -- Installing: /home/administrator/pyqt_ws/install/share/python_qt_binding/cmake/python_qt_bindingConfig-version.cmake -- Installing: /home/administrator/pyqt_ws/install/share/python_qt_binding/package.xml + cd /home/administrator/pyqt_ws/src/python_qt_binding + /usr/bin/env PYTHONPATH=/home/administrator/pyqt_ws/install/lib/python2.7/dist-packages:/home/administrator/pyqt_ws/build/lib/python2.7/dist-packages:/opt/ros/indigo/lib/python2.7/dist-packages CATKIN_BINARY_DIR=/home/administrator/pyqt_ws/build /usr/bin/python /home/administrator/pyqt_ws/src/python_qt_binding/setup.py build --build-base /home/administrator/pyqt_ws/build/python_qt_binding install --install-layout=deb --prefix=/home/administrator/pyqt_ws/install --install-scripts=/home/administrator/pyqt_ws/install/bin running build running build_py creating /home/administrator/pyqt_ws/build/python_qt_binding/lib.linux-x86_64-2.7 creating /home/administrator/pyqt_ws/build/python_qt_binding/lib.linux-x86_64-2.7/python_qt_binding copying src/python_qt_binding/QtBindingHelper.py -> /home/administrator/pyqt_ws/build/python_qt_binding/lib.linux-x86_64-2.7/python_qt_binding copying src/python_qt_binding/__init__.py -> /home/administrator/pyqt_ws/build/python_qt_binding/lib.linux-x86_64-2.7/python_qt_binding copying src/python_qt_binding/binding_helper.py -> /home/administrator/pyqt_ws/build/python_qt_binding/lib.linux-x86_64-2.7/python_qt_binding running install error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 2] No such file or directory: '/home/administrator/pyqt_ws/install/lib/python2.7/dist-packages/test-easy-install-23151.write-test' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /home/administrator/pyqt_ws/install/lib/python2.7/dist-packages/ This directory does not currently exist. Please create it and try again, or choose a different installation directory (using the -d or --install-dir option). 
CMake Error at python_qt_binding/catkin_generated/safe_execute_install.cmake:4 (message): execute_process(/home/administrator/pyqt_ws/build/python_qt_binding/catkin_generated/python_distutils_install.sh) returned error code Call Stack (most recent call first): python_qt_binding/cmake_install.cmake:55 (INCLUDE) cmake_install.cmake:120 (INCLUDE) make: *** [install] Error 1 Invoking "make install -j2 -l2" failed ``` Looks like [the buildfarm job passes](http://build.ros.org/view/Jbin_uT64/job/Jbin_uT64__python_qt_binding__ubuntu_trusty_amd64__binary/3/console#console-section-1) due to a sneaky little mkdir that happens in there, but seems not to happen for a regular old user like me trying to build this at home: ``` 09:09:40 Install the project... 09:09:40 /usr/bin/cmake -P cmake_install.cmake 09:09:40 -- Install configuration: "None" 09:09:40 -- Installing: /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/debian/ros-jade-python-qt-binding/opt/ros/jade/lib/pkgconfig/python_qt_binding.pc 09:09:40 -- Installing: /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/debian/ros-jade-python-qt-binding/opt/ros/jade/share/python_qt_binding/cmake/python_qt_binding-extras.cmake 09:09:40 -- Installing: /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/debian/ros-jade-python-qt-binding/opt/ros/jade/share/python_qt_binding/cmake/python_qt_bindingConfig.cmake 09:09:40 -- Installing: /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/debian/ros-jade-python-qt-binding/opt/ros/jade/share/python_qt_binding/cmake/python_qt_bindingConfig-version.cmake 09:09:40 -- Installing: /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/debian/ros-jade-python-qt-binding/opt/ros/jade/share/python_qt_binding/package.xml 09:09:40 + cd /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19 09:09:40 + mkdir -p /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/debian/ros-jade-python-qt-binding/opt/ros/jade/lib/python2.7/dist-packages 09:09:40 + /usr/bin/env PYTHONPATH=/opt/ros/jade/lib/python2.7/dist-packages:/tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/obj-x86_64-linux-gnu/lib/python2.7/dist-packages:/opt/ros/jade/lib/python2.7/dist-packages:/tmp/ros_buildfarm: CATKIN_BINARY_DIR=/tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/obj-x86_64-linux-gnu /usr/bin/python /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/setup.py build --build-base /tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/obj-x86_64-linux-gnu install --root=/tmp/binarydeb/ros-jade-python-qt-binding-0.2.19/debian/ros-jade-python-qt-binding --install-layout=deb --prefix=/opt/ros/jade --install-scripts=/opt/ros/jade/bin ``` Answers: username_0: Wait, nevermind. The issue is that I'm on catkin 0.6.16, which predated this fix: https://github.com/ros/catkin/pull/775 Status: Issue closed
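The buildfarm log above only succeeds because of the extra `mkdir -p` it runs before `setup.py install`. Besides upgrading catkin past the linked fix, a workaround along these lines (paths taken from the failing log; adjust to your workspace) lets the install step proceed:
```bash
# Pre-create the install target that distutils refuses to create on its own
# (mirrors the `mkdir -p` step visible in the passing buildfarm job), then retry.
mkdir -p ~/pyqt_ws/install/lib/python2.7/dist-packages
cd ~/pyqt_ws
catkin_make install
```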
fortrabbit/craft-copy
349175689
Title: gitignore generated `craft-copy-transfer.sql` Question: username_0: This is only just a question: Should the generated `storage/craft-copy-transfer.sql` be commited into source control or shoud it be `.gitignore`d? Answers: username_1: It should be ignored ``` *.sql ``` This is definitely missing here: https://github.com/fortrabbit/craft-copy/blob/master/src/.gitignore.example Please keep the ticket open and add it manually for now. username_2: Personally I'm using this one in my root .gitignore ``` # CRAFT STORAGE /storage/* !/storage/.gitkeep ```
joakin/loot
113829373
Title: Read up mobileapps service, find common interests Question: username_0: https://git.wikimedia.org/summary/mediawiki%2Fservices%2Fmobileapps How do we benefit each other? How do we share transforms? What are they using for making the transformations? Read up and sync with Bernd.
Beckhoff/ADS
557114974
Title: ADS-State Notification invalid value Question: username_0: Hi, if i add a notification to the ads-state: ![image](https://user-images.githubusercontent.com/26095455/73396769-60112580-42e2-11ea-8f5d-fdc5bf3ebbd4.png) i get the notification and an handle (see line 1): ![image](https://user-images.githubusercontent.com/26095455/73396832-7d45f400-42e2-11ea-91fa-4ee4d96d207b.png) I also received the initial notification after the add call. In the notification header i received this: ![image](https://user-images.githubusercontent.com/26095455/73397259-6653d180-42e3-11ea-9a70-f60d7f189c78.png) The given value seems to be wrong. With the TcAdsDll everything work fine. Answers: username_1: Do you use the TcAdsDll header? It might be related to issue #12 . See my old [comment](https://github.com/Beckhoff/ADS/issues/12#issuecomment-148304863) for details. If you want to use the same header, you can try a patch like this: https://github.com/dabrowne/ADS/commit/e181ca260d5cdcdd7afc4bf07c80393f4ce96ddc username_0: In my wrapper i have added to the struct AdsNotificationHeader the entry "data[ANYSIZE_ARRAY]" like TcAdsDll. The problem was that the notification datas was copied after the entry "data[ANYSIZE_ARRAY]" because in the marked line you want to jump to the data section of the callback with pointer arithmetic. ![image](https://user-images.githubusercontent.com/26095455/73640394-74507c00-466e-11ea-9ae0-7b7db69dd8f4.png) Status: Issue closed
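A rough C++ sketch of the layout clash described above; the member names and packing are illustrative only and not the real Beckhoff definitions. The point is that when the callback locates the payload with pointer arithmetic from the header size, adding a trailing `data[ANYSIZE_ARRAY]` member changes where the two sides think the sample starts.
```cpp
#include <cstdint>

// Illustrative layout only -- not the actual TcAdsDll / Beckhoff ADS headers.
#define ANYSIZE_ARRAY 1

#pragma pack(push, 1)
struct NotificationHeaderPlain {        // header as the library lays it out
    uint64_t nTimeStamp;
    uint32_t hNotification;
    uint32_t cbSampleSize;
};

struct NotificationHeaderWithData {     // TcAdsDll-style header with trailing array
    uint64_t nTimeStamp;
    uint32_t hNotification;
    uint32_t cbSampleSize;
    uint8_t  data[ANYSIZE_ARRAY];       // payload is expected to start here
};
#pragma pack(pop)

// The callback finds the payload by stepping past the plain header:
inline const uint8_t* payload(const NotificationHeaderPlain* header) {
    return reinterpret_cast<const uint8_t*>(header + 1);
}
// Reading NotificationHeaderWithData::data from a buffer written for the plain
// header (or vice versa) shifts the apparent start of the sample, which matches
// the bogus values shown in the screenshots.
```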
loevgaard/SyliusBrandPlugin
480139876
Title: Getting <<Variable "remote_url" does not exist>> in a completely different file Question: username_0: Hi, it's me again, I'm soo close to getting this working now. ![remote_url_does_not_exist](https://user-images.githubusercontent.com/1257298/62942685-1d619c80-bdd9-11e9-9b50-90444ebec1c4.png) I would have assumed that this was my fault somehow if it wasn't for the fact that this error is mentioned in your documentation. Albeit in a completely different file. Now I'm getting this error in one of the stock (vendor) templates, not in something I modified myself. Now, I have barely started to learn the inner workings of Sylius so I may be completely off target here, but the documentation mentions that you inject something globally. Could your "remote_url" be conflicting with some other "remote_url"? These are the relevant packages in my composer.json: "require": { "php": "^7.2", "loevgaard/sylius-brand-plugin": "^2.0", "sylius/shop-api-plugin": "^1.0@RC", "sylius/sylius": "~1.5.0", Answers: username_1: Hi. This error occurs when you haven't override form template (and so not defined that `remote_url` in field's options) with some autocomplete field. Follow docs and override templates as mentioned there and error will gone. username_0: Thank you for replying @username_1 ! I did follow the instructions as best I could but I'm missing something. Do I have to copy the files in SyliusBrandPlugin/src/Resources/views to... somewhere? Because that is never specified in the installation instructions. I don't have a templates/bundles/SyliusAdminBundle/views/Product folder. Should I create it and add the files from the SyliusBrandPlugin repo as mentioned above? username_0: Probably unrelated but I'm throwing it in here anyway. This bit under "Configure": sylius_product: resources: product: classes: model: App\Model\Product repository: App\Doctrine\ORM\ProductRepository Gives me "Warning: class_implements(): Class App\Doctrine\ORM\ProductRepository does not exist and could not be loaded" Not adding that bit to services.yaml instead lands me on the previous remote_url error. username_0: Alright, new day, new attempts. As previously stated I don't have a **templates/bundles/SyliusAdminBundle/views/Product/Tab/_details.html.twig** template at all, I'm not upgrading per se, I just installed the older version because my composer didn't have minimum-stability set to dev, so it gave me 1.3 (I think) until I manually requested 2.0. So, I've uninstalled the plugin, installed 2.0 from scratch, following all the steps in the documentation, and if I do everything then I get the "Warning: class_implements()" error mentioned above. If I skip that part of the installation instructions and **don't** edit config/services.yaml then I get to the remote_url error. Next I'm going to install a fresh Sylius in an empty directory and then try to follow the instructions step by step again, if I get the same error then I can be reasonably sure that all this is not my fault. username_0: So, the first problem is that **composer require loevgaard/sylius-brand-plugin** gives you a version of the plugin that is not compatible with the documentation, you really should change it to **composer require loevgaard/sylius-brand-plugin:^2.0** then at least users won't end up where I ended up. Also include in the instructions that they need to add **"minimum-stability": "dev",** to their composer.json. Now, fresh install. Under step 4, in the example code, the namespace for Product is **App\Entity\Product** in the latest version. 
In step 5 I get **Warning: class_implements(): Class App\Model\Product does not exist and could not be loaded** (when trying to run migrations) This is because the instructions in step 4 under "Configure" specifies **model: App\Model\Product** which should be **model: App\Entity\Product\Product** After that the plugin works. So I'm guessing the template woes I'm having in my actual project is due to something else. Thanks for reading my rants.
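For anyone following the same steps, here is a sketch of the corrected resource block from step 4 with the namespace fix described above; the class names are the current Sylius-Standard defaults and the repository line is kept as in the original instructions, so adjust both to your own project:
```yaml
sylius_product:
    resources:
        product:
            classes:
                # The product model lives under App\Entity\Product in current Sylius-Standard
                model: App\Entity\Product\Product
                repository: App\Doctrine\ORM\ProductRepository
```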
KAIST-IS521/TeamFive
228623633
Title: Issues conventions Question: username_0: 이슈에 말머리를 (`[Bot] PGP Login` 처럼) 달면 좋을 것 같습니다. 아래 처럼 구분해서 합시다. - Back: (Board, Auth, DB, ...) - Front: (Static homepage, UI, ...) - Bot: WebDriver bot - Flag: Flag updater - Test: SLA checker - 말머리 없음: 기타 각 마일스톤 별로 기능을 적절히 구분해서 이슈로 만들어 주시고, "Milestone" 에 추가해주세요. 그리고 이슈에 대한 작업을 시작할 때 "Assignees"를 세팅해서 서로 작업하는 내용이 충돌하지 않도록 주의해 주세요. 질문이나 discussion 할 내용이 있을 때도 가볍게 이슈를 올려서 해주세요.
peterramsing/lost
144749188
Title: Removing a gutter does not adjust column width Question: username_0: I've created two columns—7/8 and 1/8. I removed the gutter for the 7/8 column with `lost-column: 7/8 0 0;`. It doesn't adjust the column widths though, and I'm left with a `30px` space on my right. ![test](https://cloud.githubusercontent.com/assets/1240667/14161062/4977714e-f6a6-11e5-9f1b-9ab3f6b984c1.gif)<issue_closed> Status: Issue closed
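A minimal sketch of the reported setup (the selectors are assumptions), which also suggests where the leftover `30px` most likely comes from: the 7/8 column has its gutter zeroed while the 1/8 column is still sized against lost's default 30px gutter, so the two widths no longer fill the row.
```css
/* assumed markup: <div class="wide"></div><div class="narrow"></div> */
.wide {
  lost-column: 7/8 0 0; /* gutter removed, as in the report */
}

.narrow {
  lost-column: 1/8; /* still sized against the default 30px gutter */
}
```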
chaoss/augur
838024402
Title: Add time period parameterization to merge request (PR) visualization endpionts. Question: username_0: To have the merge request (PR) endpoints include time period parameterization. As @ABrain7710 and I discussed, they do not currently, because @username_0 decided at some point we did not need that for those endpoints. @username_0 was wrong. average PR Duration: Not respecting the month parameter. http://spos.osshealth.io:5099/api/unstable/pull_request_reports/Average_PR_duration/?repo_id=25531&remove_outliers=0&group_by=month http://spos.osshealth.io:5099/api/unstable/pull_request_reports/Average_PR_duration/?repo_id=25530&remove_outliers=0&group_by=month http://spos.osshealth.io:5099/api/unstable/pull_request_reports/Average_PR_duration/?repo_id=25158&remove_outliers=0&group_by=month Answers: username_1: Hello, I would like to work on this issue. username_2: hey, this is my first time in the open source community and i would love to work on this issue. could you please assign it to me
microsoft/accessibility-insights-web
902759561
Title: Mac signed build fails with electron-builder 22.10.5 Question: username_0: ## Describe the bug When we upgraded from electron-builder 22.9.1 to 22.10.5 (PR #4237), our Mac signed builds started failing ([example failing build](https://dev.azure.com/accessibility-insights-private/Accessibility%20Insights%20(private)/_build/results?buildId=22651&view=results)). As a short term workaround, we reverted back to electron-builder 22.9.1 (PR #4284). **To Reproduce** Steps to reproduce the behavior: 1. Upgrade electron-builder to 22.10.5 2. Run a signed Mac build 3. Observe failures ## Expected behavior The signed Mac build succeeds ## Context (please complete the following information) - Pool: Azure Pipelines - Image: macOS-10.15 - Agent: Hosted Agent ## Additional context We have a bit more information about the failure from the signing team; ask @username_0 for the email "Apple Sign Failures for Accessibility Insights" if you don't already have it. Answers: username_0: As an update, it appears electron-builder switched how it adds license files to DMGs [with this commit](https://github.com/electron-userland/electron-builder/commit/9e7b3c39ad3c59b88d78e3f5a5948cf4ced3f92a). I'm having trouble validating this, but it's possible we haven't been (and aren't currently) showing our license on Mac. If that's true, the only reason signed builds weren't already failing was because of the bug fixed by the above commit. We can unblock the electron-builder upgrade by removing our license terms from Mac, but that isn't optimal either. username_0: electron-builder now up to date as of #4459 Status: Issue closed
joabakk/signalk-polar
919664056
Title: Time-stamp error/unable to parse when feeding data into influxDB Question: username_0: I have constant errors on the data from this plugin when I want to feed them into influx. "Jun 12 23:18:44 Error: A 400 Bad Request error occurred: {"error":"partial write: unable to parse 'performance.beatAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.9876577115566776 NaN': bad timestamp\nunable to parse 'performance.beatAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1381119060994607 NaN': bad timestamp\nunable to parse 'performance.beatAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.4573499257480211 NaN': bad timestamp\nunable to parse 'performance.targetAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.9876577115566776 NaN': bad timestamp\nunable to parse 'performance.targetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1381119060994607 NaN': bad timestamp\nunable to parse 'performance.gybeAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1285888647430173 NaN': bad timestamp\nunable to parse 'performance.gybeAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=13.015447741691206 NaN': bad timestamp\nunable to parse 'performance.gybeAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=-13.014347310694532 NaN': bad timestamp\nunable to parse 'performance.polarSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=5.504556950043316 NaN': bad timestamp\nunable to parse 'performance.polarSpeedRatio,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'environment.wind.speedTrue,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=11.27 NaN': bad timestamp\nunable to parse 'environment.wind.angleTrueWater,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.4573499257480211 NaN': bad timestamp\nunable to parse 'performance.velocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'performance.beatAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.6151575271528897 NaN': bad timestamp\nunable to parse 'performance.beatAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=2.8 NaN': bad timestamp\nunable to parse 'performance.beatAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.4667 NaN': bad timestamp\nunable to parse 'performance.targetAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.6151575271528897 NaN': bad 
timestamp\nunable to parse 'performance.targetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=2.8 NaN': bad timestamp\nunable to parse 'performance.gybeAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1182664161777955 NaN': bad timestamp\nunable to parse 'performance.gybeAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=13.015447741691206 NaN': bad timestamp\nunable to parse 'performance.gybeAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=-13.01190696279637 NaN': bad timestamp\nunable to parse 'performance.polarSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=2.1606672140356946 NaN': bad timestamp\nunable to parse 'performance.polarSpeedRatio,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'environment.wind.speedTrue,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=9.99 NaN': bad timestamp\nunable to parse 'environment.wind.angleTrueWater,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.6063 NaN': bad timestamp\nunable to parse 'performance.velocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'performance.beatAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.6151575271528897 NaN': bad timestamp\nunable to parse 'performance.beatAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=2.8 NaN': bad timestamp\nunable to parse 'performance.beatAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.6063 NaN': bad timestamp\nunable to parse 'performance.gybeAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1182664161777955 NaN': bad timestamp\nunable to parse 'performance.gybeAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=13.015447741691206 NaN': bad timestamp\nunable to parse 'performance.gybeAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=-13.01190696279637 NaN': bad timestamp\nunable to parse 'performance.targetAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1182664161777955 NaN': bad timestamp\nunable to parse 'performance.targetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=13.015447741691206 NaN': bad timestamp\nunable to parse 'performance.polarSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.68 NaN': bad timestamp\nunable to parse 
'performance.polarSpeedRatio,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'environment.wind.speedTrue,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=10.340335952885109 NaN': bad timestamp\nunable to parse 'environment.wind.angleTrueWater,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.3619 NaN': bad timestamp\nunable to parse 'performance.velocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'performance.beatAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.9876577115566776 NaN': bad timestamp\nunable to parse 'performance.beatAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1381119060994607 NaN': bad timestamp\nunable to parse 'performance.beatAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.3619 NaN': bad timestamp\nunable to parse 'performance.targetAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.9876577115566776 NaN': bad timestamp\nunable to parse 'performance.targetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1381119060994607 NaN': bad timestamp\nunable to parse 'performance.gybeAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1285888647430173 NaN': bad timestamp\nunable to parse 'performance.gybeAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=13.015447741691206 NaN': bad timestamp\nunable to parse 'performance.gybeAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=-13.014347310694532 NaN': bad timestamp\nunable to parse 'performance.polarSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.55 NaN': bad timestamp\nunable to parse 'performance.polarSpeedRatio,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'environment.wind.speedTrue,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=10.35 NaN': bad timestamp\nunable to parse 'environment.wind.angleTrueWater,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.4143 NaN': bad timestamp\nunable to parse 'performance.velocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp\nunable to parse 'performance.beatAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.9876577115566776 NaN': bad timestamp\nunable to parse 
'performance.beatAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1381119060994607 NaN': bad timestamp\nunable to parse 'performance.beatAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=1.2921 NaN': bad timestamp\nunable to parse 'performance.targetAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0.9876577115566776 NaN': bad timestamp\nunable to parse 'performance.targetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1381119060994607 NaN': bad timestamp\nunable to parse 'performance.gybeAngle,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=3.1285888647430173 NaN': bad timestamp\nunable to parse 'performance.gybeAngleTargetSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=13.015447741691206 NaN': bad timestamp\nunable to parse 'performance.gybeAngleVelocityMadeGood,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=-13.014347310694532 NaN': bad timestamp\nunable to parse 'performance.polarSpeed,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=2.2121116715127345 NaN': bad timestamp\nunable to parse 'performance.polarSpeedRatio,context=vessels.urn:mrn:signalk:uuid:52133a24-d0a3-4239-9d61-b0fd66fc39a2,source=calculated.signalk-polar value=0 NaN': bad timestamp dropped=0"} at IncomingMessage.Create.res.on (/home/pi/.signalk/node_modules/influx/lib/src/pool.js:50:38) at IncomingMessage.emit (events.js:203:15) at endReadableNT (_stream_readable.js:1145:12) at process._tickCallback (internal/process/next_tick.js:63:19) " Answers: username_1: the pushDelta function should be replaced with app.handleMessage. This way, the timestamp will be added and this message should disappear. Status: Issue closed
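To make username_1's suggestion concrete: instead of emitting deltas through the plugin's own pushDelta helper, handing them to the server lets it stamp and route them before they reach consumers such as the InfluxDB writer. A minimal sketch, assuming the standard Signal K plugin API and with the path and value as placeholders:
```js
// Inside the plugin, wherever a computed value is ready to be published:
const delta = {
  updates: [
    {
      values: [
        { path: 'performance.polarSpeed', value: polarSpeed } // placeholder value
      ]
    }
  ]
};

// Let the server add the timestamp and source instead of writing the delta out directly.
app.handleMessage(plugin.id, delta);
```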
MaibornWolff/codecharta
655087232
Title: Don't break line for secondary metrics span Question: username_0: # Bug ## Expected Behavior Let's not break the line for `Secondary Metrics`. Instead get the string on one line. ## Screenshots ![image](https://user-images.githubusercontent.com/12533626/87210806-2d4e6d00-c317-11ea-96cb-e4cbec35d209.png) ## Steps to Reproduce the Problem 1. Select a building ## Specifications - is released in [online-demo](https://maibornwolff.github.io/codecharta/visualization/app/index.html?file=codecharta.cc.json&file=codecharta_analysis.cc.json): yes - CodeCharta Version: 1.50.0 - OS: Windows and Mac - Browser: Chrome<issue_closed> Status: Issue closed
Oldpost/Oldpost.github.io
430739584
Title: Vue built-in components: keep-alive | 梵唱 | MC Question: username_0: https://blog.silviaxu.com/2019/01/25/Vue/vue-keepAlive/ Requirement: when entering the roster list page from the roster view/create/edit pages, keep the current search state; when entering the roster list page from any other page, the page should be reinitialized, i.e. the search conditions cleared and the current list refreshed. Analysis: per the requirement, the roster list page may be cached while the other pages are not. Call the roster list page page A, the view/create/edit pages page B, and all other pages page C. B -> A does not refresh the current page, C -> A refreshes the current page. Vue's built-in keep-alive component: it is a
UsaidPro/FRC-2015-Java-code
60368986
Title: Issue with Eclipse and Git Question: username_0: I'm having an issue with Eclipse and Git. Almost all of Eclipse's tools and functionalities do not work. The error that appears when I try to indirectly use a tool is "The resource is not on the build path of a Java project". Did anyone else have this problem? If so, did you fix it? I can barely work on the code since I can't open declarations, use JavaDocs, and the error finder doesn't seem to work . . . . . . ![image](https://cloud.githubusercontent.com/assets/7944753/6559023/00e6c850-c64e-11e4-8e11-0549147cf546.png) Answers: username_0: Fixed. I had to import the project again from git into Eclipse :( Status: Issue closed
rossfuhrman/_why_the_lucky_markov
435336841
Title: Princess rail against the clouds called Heaven. The lilypad is very slow to start, but it also makes for a method called like other methods. Question: username_0: Toot: Princess rail against the clouds called Heaven. The lilypad is very slow to start, but it also makes for a method called like other methods. One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots
uber-go/tally
358522214
Title: Feature request: dynamic gauge Question: username_0: In many cases it is necessary to report the current value of a certain variable. Other metric systems (such as Dropwizard one on Java) allow an user to bind a metric name to a function. That function gets called on every metric aggregation scan and its returned value is applied directly as momentary metric value. Something to the tune of: ``` func measureChannel() float64 { return float64(len(someChan)) } //.... g := tallyScope.DynamicGauge("channel_gauge", measureChannel) ``` Whereupon, `channel_gauge` metric will be set to a value returned by `measureChannel` function, when called on metric reporter defined intervals.
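Until tally offers a callback-driven gauge like the one requested above, the usual workaround is to sample the variable on a timer and push it through the existing Gauge API. A rough sketch in Go; the reporting interval, package layout, and scope wiring are assumptions:
```go
package metrics

import (
	"time"

	"github.com/uber-go/tally"
)

// PollChannelDepth samples len(ch) on a fixed interval and publishes it through
// a regular tally gauge, approximating the "bind a metric to a function"
// behaviour described in the request.
func PollChannelDepth(scope tally.Scope, ch chan int, every time.Duration, stop <-chan struct{}) {
	gauge := scope.Gauge("channel_gauge")
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			gauge.Update(float64(len(ch)))
		case <-stop:
			return
		}
	}
}
```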
axios/axios
722359963
Title: Error "Request failed 503" Question: #### Describe the issue Hello! I'm trying to check, first, the validity of a URL, and if it works I take some data from the link with Cheerio (some HTML data). The link I'm trying works perfectly (in my case Amazon links), but about 60% of the time either the validity check or the data-fetching function returns the error `Error: Request failed with code 503`. I've already checked the connection and the site; both work perfectly with no problems. #### Example Code The function to see if the link works:
```js
let myAxiosResult = await axios.get(uri).then( (response) => {
      // console.log("response data: "+response.data, response.status);
      return true;
    }).catch(error => {
      console.log('Axios error on url: '+uri, " --> "+error);
      return false;
    });
```
and the second function to retrieve data:
```js
getBaseProductDatas: async function(link) {
    let obtainLink = await axios.get(link).then( (response) => {
      if ( response.status == 200) {
        const html = response.data;
        var $ = cheerio.load(html);
        const imageLink = $('#product-image').attr('src');
        return imageLink;
      }
    })
    return obtainLink;
  },
```
Both functions are async/await and I call them in async methods with `await function(link)`. #### Expected behavior, if applicable Obtain no errors for a working link #### Environment
- Axios Version: 0.20.0
- Node.js Version: v14.9.0
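For what it's worth, a 503 from Amazon in this pattern usually means the site is throttling or blocking requests it classifies as automated, not that axios is misbehaving. A hedged sketch of a common mitigation, sending a browser-like User-Agent and retrying 503s with a short backoff; the header values and retry counts here are arbitrary choices, not a guaranteed fix:
```js
const axios = require('axios');

async function fetchWithRetry(url, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      const response = await axios.get(url, {
        headers: {
          // Some sites answer 503 to clients without a browser-like User-Agent.
          'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
          'Accept-Language': 'en-US,en;q=0.9',
        },
      });
      return response.data;
    } catch (err) {
      const status = err.response && err.response.status;
      if (status !== 503 || i === attempts - 1) throw err;
      // Back off a little before retrying the throttled request.
      await new Promise((resolve) => setTimeout(resolve, 1000 * (i + 1)));
    }
  }
}
```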
moul/assh
496774473
Title: Invalid multiplex command on Ubuntu 18.04 Question: username_0: When using the native ssh client on Ubuntu 18.04 assh invokes an invalid ssh command (`ssh -o false -O false test.example.com`) Example: ```bash $ /usr/bin/ssh -V OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017 $ assh --config /dev/null -D --verbose wrapper ssh test.example.com 2019-09-22T14:42:27.650+0200 DEBUG assh.pkg.commands commands/wrapper.go:62 Wrapper called {"bin": "/usr/bin/ssh", "target": "test.example.com", "command": [], "options": ["-O", "false", "-o", "false"], "sshArgs": ["ssh", "-O", "false", "-o", "false", "test.example.com"]} 2019-09-22T14:42:27.650+0200 DEBUG assh.pkg.config config/config.go:518 Loading config file {"file": "/dev/null"} 2019-09-22T14:42:27.651+0200 DEBUG assh.pkg.config config/config.go:536 Loaded config file {"file": "/dev/null", "num-host-before": 0, "num-host-after": 0, "num-host-diff": 0} Invalid multiplex command. ``` Answers: username_0: I managed to fix it with this: ``` diff --git pkg/commands/wrapper.go pkg/commands/wrapper.go index 71af6314..65e9c474 100644 --- pkg/commands/wrapper.go +++ pkg/commands/wrapper.go @@ -46,6 +46,14 @@ func runSSHWrapperCommand(cmd *cobra.Command, args []string) error { } for _, flag := range config.SSHStringFlags { for _, val := range viper.GetStringSlice(flag) { + if (flag == "o" || flag == "O") && val == "false" { + logger().Debug( + "Skip invalid option:", + zap.String("flag", flag), + zap.String("val", val), + ) + continue + } options = append(options, fmt.Sprintf("-%s", flag)) options = append(options, val) } ``` Status: Issue closed
neo4j-contrib/neo4j-helm
1075950184
Title: Issue when setting apoc export or import configurations Question: username_0: Hi, I have followed the steps on how to setup a standalone neo4j instance in a kubernetes cluster from this documentation: https://neo4j.com/docs/operations-manual/current/kubernetes/quickstart-standalone/ Everything went well, however i would like to use the apoc import and export features from neo4j but when i try to do so in neo4j browser, it says export or import are not enabled. However, when i exec into the pod, i can see the configurations in the neo4j.config file. Here is my helm config yaml file content: neo4j: resources: cpu: "1" memory: "500Mi" password: "*********************************" edition: "enterprise" acceptLicenseAgreement: "yes" volumes: data: mode: "volume" volume: azureDisk: diskName: "myDiskName" diskURI: "myDiskURI" kind: Managed import: mode: "volume" volume: azureFile: secretName: mySecretName shareName: myShareName readOnly: false config: dbms.directories.plugins: "/var/lib/neo4j/labs" dbms.security.procedures.unrestricted: "apoc.*" env: { NEO4J_apoc_import_file_enabled: "true", NEO4J_apoc_export_file_enabled: "true", NEO4J_apoc_import_file_use__neo4j__config: "true", NEO4J_dbms_security_procedures_unrestricted: "apoc.*" } Can someone help me achieve what i am looking for? Answers: username_1: Hi @username_0 this can depend on which version of APOC you're using, the docs you're following may be referring to an older version of APOC which placed its config in neo4j.conf. More recent versions of APOC expect a separate file in conf/apoc.conf. To do this, rather than using env vars what you would probably do is create a configmap mounted as a file to that location to configure APOC. You could do this once, and then have the file mount go to all of your containers. For APOC config, check here: https://neo4j.com/labs/apoc/4.3/config/ Also, please take note that this repo is deprecated, see this message for next steps: https://github.com/neo4j-contrib/neo4j-helm#deprecated username_0: Thank you very much for this clear and precise answer, i will definitely take a look and comment the solution that i found. username_0: I got it to work, i added the config map below and now it can read the configurations: apiVersion: v1 kind: ConfigMap metadata: name: "my-neo4j-release-apoc-config" namespace: "default" data: apoc.conf: | apoc.export.file.enabled=true apoc.import.file.enabled=true apoc.import.file.use_neo4j_config=true Thanks again for your guidance. Status: Issue closed
bigeasy/transcript
951026829
Title: Release Transcript version 0.1.5. Question: username_0:
* Fix GitHub Actions workflow.
* Build with Node.js 16 on GitHub Actions.
* Build with Node.js 16 on Travis CI.
* Remove silly, unused argument.
* Attempt to fix package name.
* Go port of Transcript.<issue_closed>
Status: Issue closed
ordina-jworks/ordina-jworks.github.io
290785268
Title: Images of Spring IO 2015 blogposts don't work anymore Question: username_0: They link back to the old ordina.be website (which is gone). New images should be included instead. Answers: username_1: Apparently there are more blogposts with this issue (images and URLs). I will check them all and fix the images/links where possible. Blogposts with links to the old ordina.be website: - [Applying software patterns to microservices](https://ordina-jworks.github.io/microservices/2015/10/13/MicroservicesPatterns.html) - ["Bootiful" Microservices in CloudFoundry & One on One with <NAME>](https://ordina-jworks.github.io/spring/2015/05/08/SpringIO15-Microservices.html) - [JOIN 2015 from a developer''s perspective](https://ordina-jworks.github.io/join/2015/09/15/JOIN2015.html) - [JOIN 2016](https://ordina-jworks.github.io/conference/2016/09/27/JOIN-2016.html) - [Kickstarter Trajectory 2017](https://ordina-jworks.github.io/kickstarters/2017/10/05/Kickstarter-Trajectory-2017.html) - [Master Spring Boot auto-configuration](https://ordina-jworks.github.io/spring/2015/05/08/SpringIO15-Autoconfig.html) - [A production Spring reference application & One on One talk with <NAME>](https://ordina-jworks.github.io/spring/2015/05/08/SpringIO15-Sagan.html) - [Spring I/O 2015 Barcelona](https://ordina-jworks.github.io/spring/2015/05/09/SpringIO15-General.html) - [SpringOne Platform](https://ordina-jworks.github.io/conference/2016/08/09/s1p.html) - [A web of trusted commits](https://ordina-jworks.github.io/security/2016/07/25/Web-of-trusted-commits.html) username_1: @kencoenen Do you have any idea which images you used in your [blogpost about JAX London 2015](https://ordina-jworks.github.io/microservices/2015/10/13/MicroservicesPatterns.html)? Can I pick a random image for 'Microservices patterns' and 'Gartner Hype Cycle'? username_2: Awesome, @username_1 ! 💪🏼 Status: Issue closed
EvotecIT/PSWriteHTML
909854546
Title: Unable to sort multiple columns in opposite directions Question: username_0: I am trying to create a custom ticketing dashboard, and am running into an issue. I would like to sort first by priority of tickets (high = 1/med = 2/low = 3) and then by ticket ID (larger means newer) so I see high priority tickets first, then newest to oldest. Problem is that means Priority should be sorting Descending (1,2,3) but ID should be sorting Ascending (3,2,1). I've verified I can specify multiple sort columns (int or string arrays depending on specifying column indexes or names) via arrays, but I can only specify one way to sort the columns. Is there any solution/recommendation on how I might be able to do this as-is, or is this a limitation to the current implementation? https://github.com/EvotecIT/PSWriteHTML/blob/e62d662b1df9bef11c95cd3e8656bae0ef5b051e/Public/New-HTMLTable.ps1#L31-L33 Here's an example of what I'm after (sorting by PRI descending, then ID ascending): | PRI | ID | Summary | |-----|----|------------------| | 1 | 7 | Placeholder Text | | 1 | 5 | Placeholder Text | | 2 | 4 | Placeholder Text | | 2 | 2 | Placeholder Text | | 2 | 1 | Placeholder Text | | 3 | 6 | Placeholder Text | | 3 | 3 | Placeholder Text | Answers: username_0: Looking at this a bit more, I think the solution would be to convert $DefaultSortOrder from `[string]` to `[string[]]`, and within the `foreach()` loops create some kind of localized `$sort` variable. And at this point I've put in enough effort I should just create a PR... https://github.com/EvotecIT/PSWriteHTML/blob/e62d662b1df9bef11c95cd3e8656bae0ef5b051e/Public/New-HTMLTable.ps1#L33 https://github.com/EvotecIT/PSWriteHTML/blob/e62d662b1df9bef11c95cd3e8656bae0ef5b051e/Public/New-HTMLTable.ps1#L706-L726 Additional note: From reviewing this chunk of code, I'd imagine lines 714 and 722 (`if ($Column -ne - 1) {`) should be `-1` and not `- 1` with a space? username_1: I am not sure if we should do it this way: - https://datatables.net/examples/basic_init/multi_col_sort.html - https://datatables.net/forums/discussion/48105/sort-multiple-columns-by-default I'm trying to understand the difference between the current solution and the solution from the first link. Yes, this is a bug (-1). Most likely introduce while formatting 🙀 username_1: Two bugs: - https://github.com/EvotecIT/PSWriteHTML/blob/96e672bacf109d87868867f489b9e44fcae8d744/Public/New-HTMLTable.ps1#L722-L722 - https://github.com/EvotecIT/PSWriteHTML/blob/96e672bacf109d87868867f489b9e44fcae8d744/Public/New-HTMLTable.ps1#L714-L714 username_1: Ok, going thru the implementation. By default when you use DefaultSortColumn it will always sort it according to DefaultSortOrder. If there are 5 columns to sort on, It will have to be either all ascending or all descending. There are 2 choices: 1. Implement two arrays as you say but then deal with problems with having disprepency 2 to 1, 5 to 1, 7 to 3 between columns. It's possible possible people will use different 2. Implement it in the `New-TableColumnOption` and allow to define it per column Maybe even have both done. https://github.com/EvotecIT/PSWriteHTML/blob/96e672bacf109d87868867f489b9e44fcae8d744/Public/New-TableColumnOption.ps1#L1-L1 username_0: I've thrown together a PR (https://github.com/EvotecIT/PSWriteHTML/pull/242) that I think should fix it in a non-breaking way (backwards compatible, but makes reasonable enough sense). Haven't actually run/tested it yet. username_1: I'll take a look tomorrow - my brain is shutting down. 
Thank you for your effort. username_0: No, thank you for the awesome module! I've only dabbled with it a couple times, but it's fantastic to work with both on the coding side and the outputs. Status: Issue closed
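For reference, the behaviour requested above maps onto DataTables' `order` option, which already accepts a direction per column; the PR essentially exposes that from New-HTMLTable. A minimal sketch of the raw DataTables configuration behind the example table, where the table id and column indexes (0 = PRI, 1 = ID) are assumptions:
```js
// Plain DataTables initialisation with a different sort direction per column:
// PRI ascending (1 = high first), ID descending (newest first) within each priority.
$('#tickets').DataTable({
  order: [
    [0, 'asc'],
    [1, 'desc']
  ]
});
```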
mrdoob/three.js
428249116
Title: Suggestion: Making BufferGeometry.attributes/index private Question: username_0: 2. For VAO implementation. We've been working on VAO implementation in #14600. We proposed assigning one VAO to one geometry which manages the set of attributes and index. VAO is for performance, so we need to quickly detect whether we need to update VAO or not. Taking time each frame to judge if we need the update doesn't make sense. So I proposed attributes/index addition and removal observation. But currently Three.js API is too flexible that users can directly access `attributes/index` properties. This makes doing it hard. Making the properties private can have it easy because we only need to observe accessing methods. **Disadvantage** 1. Breaking core/examples/user code. I see a lot of codes directly accessing `geometry.attributes/index`. If we make the properties private, they need to be updated. 2. We need to add `.getAttributeNames()` method (or something) I sometimes see this type of code. ```javascript for ( var attributeName in geometry.attributes ) { var attribute = geometry.attributes[ attributeName ]; .... } ``` It can't be handled with the current attribute accessing methods. Probably we will need to add a new method `.getAttributeNames()` which returns attribute names as an array. Feedback is welcome. ##### Three.js version - [x] Dev - [ ] r103 - [ ] ... ##### Browser - [x] All of them - [ ] Chrome - [ ] Firefox - [ ] Internet Explorer ##### OS - [x] All of them - [ ] Windows - [ ] macOS - [ ] Linux - [ ] Android - [ ] iOS ##### Hardware Requirements (graphics card, VR Device, ...) Answers: username_1: To clarify, that statement is patently false. username_2: I think a sensible implementation of VAOs is only possible if this suggestions is accepted. It's more or less a general decision whether performance is more important for the engine than flexibility. My former view on the `three.js` was that project was always focused on flexibility. However, certain performance features like VAOs or caching of matrices require a more strict API. The latter one for example will require to make `Object3D.matrix` private which is a similar breaking change. I personally tend to performance (since better performance was requested multiple times) but it is also true that `three.js` was criticized in the past for its less stable API (for example [here](http://www.html5gamedevs.com/topic/37703-babylonjs-vs-threejs-choosing-a-webgl-framework-for-sony/)). Because of this, I'm very unsure about how to decide in this issue... username_3: I don't understand this part... what benefit does making `geometry.attributes` private give that just making it readonly would not? Are we worried about users doing: ```js // this? geometry.attributes.position = new THREE.BufferAttribute(...); // or this? geometry.attributes.position.updateRange(...); ``` If the latter is one of our concerns, then just changing it to `getAttribute('position').updateRange(...)` doesn't really solve anything. Making `geometry.attributes` an immutable object, rather than a private variable, would avoid a breaking change without – as far as I can tell – any downside for VAO support. Aside: My impression has always been that `addAttribute` and `removeAttribute` were the right way to go, and that direct assignments/deletions were never guaranteed to work. But I didn't know `getAttribute` existed, and have never used it. 😅 username_2: This would be bad. 
All direct assignments or deletions to `BufferGeometry.attributes` don't raise the new `BufferGeometry.version` property which is introduced in #14600. username_3: Right, but if `geometry.attributes.position.updateRange(...);` is not bad, then I don't see why we need `geometry.attributes` to be private rather than just immutable. Not necessarily opposed to that either, but it doesn't seem necessary for the goals here. username_4: You could make geometry.attributes a Proxy() object to handle direct access correctly and allow existing applications to continue functioning, but Proxy() is not supported on IE11. username_2: How would this look like? Maybe some code would clarify things^^ username_3: At the simplest... ```js class BufferGeometry { setAttribute( name, attribute ) { this.attributes = Object.assign( {}, this.attributes, { [ name ]: attribute } ); Object.freeze( this.attributes ); return this; } } ``` ... but there are any number of ways to implement this, which would all make it obvious to users that `geometry.attributes` is readonly. Making the right method obvious is probably more important than making it impervious to monkeypatching here. I'm offering this as a way to avoid breaking code that reads from `geometry.attributes`, not necessarily as advice. If we think that `getAttribute(name)` is a better API for whatever reason, we can also figure out a way to make that migration with limited inconvenience to users. username_5: I just wanted to comment that I could help converting the examples that use direct attribute access to the method api. username_4: The only examples using direct access for assignment appear to be examples/js/loaders/LWOLoader.js examples/webgl_buffergeometry_instancing2.html examples/webgl_buffergeometry_instancing_dynamic.html username_2: I'm not sure reading from `BufferGeometry.attributes` is the actual issue. The problematic part is the configuration of the `BufferGeometry`. And introducing VAO would mean a migration effort in any case, even if `.attributes/index` would be readonly. Against this backdrop, it could be worth to make `BufferGeometry.attributes` directly private in order to enable VAO and a cleaner API all at once. username_3: I agree that reading from `geometry.attributes` is not an issue, but this means making `.attributes` private (as opposed to readonly) is not actually necessary to enable VAO. In my opinion, `getAttribute('foo')` is cleaner than `.attributes.foo`, since setters are required to modify it anyway. Whether it is actually worth deprecating _reads_ from `.attributes.foo` and breaking a lot of existing code depends (to me) on how many other breaking changes VAO requires, and the performance benefits we see from that. username_2: It would be best to verify if this is actually possible and all relevant use cases are supported. In this way, the discussion avoids to promote a solution that might be inappropriate. username_3: I don't think there's any technical blocker here. We can use getters/setters: ```js Object.defineProperty(this, 'attributes', { get: function() { console.warn( 'THREE.BufferGeometry: Please use getAttribute().' ); return _attributes; }, set: function() { throw new Error( 'THREE.BufferGeometry: Please use addAttribute() / removeAttribute().' ); } }); ``` username_4: @username_3 surely that only handles assignment or getting the entire attribute object itself. not properties of the attribute object? 
username_3: Yes, but calling `geometry.attributes.position.count()` still accesses the attributes property, and will still log the same warning. :) username_4: Ah, of course. Although `set:` will only catch `this.attribute = foo` rather than `this.attribute.foo = foo`, so it doesn't handle that case with the right message. username_0: It seems that we can't hook `geometry.attributes.position = bufferAttribute;` and call `geometry.addAttribute( 'position', bufferAttribute )` instead (other than by using `Proxy`). So the getter should be like this?
```javascript
Object.defineProperty(this, 'attributes', {
	get: function() {
		console.warn( 'THREE.BufferGeometry: Please use getAttribute(), addAttribute(), or removeAttribute().' );
		this.version ++;
		return _attributes;
	}
});
```
It bumps `geometry.version`, leading to a VAO update even when just getting an attribute, but it keeps working. username_0: The current VAO implementation in #14600 requires
- Deprecating direct `geometry.attributes/index` access
- Making the following properties immutable
```
BufferAttribute: .normalized, .itemSize, .type,
InterleavedBufferAttribute: .normalized, .itemSize, .type, .offset, .data.stride
InstancedBufferAttribute/InstancedInterleavedBuffer: .meshPerAttribute.
```
https://github.com/mrdoob/three.js/pull/14600#issuecomment-478768913 The performance benefit is a 10–15% overall application performance gain in cases with significant VBO switching. Maybe I need to test more general cases, but this performance benefit looks good even though we sacrifice something. IMO the second requirement seems OK, because those properties usually aren't updated at runtime. The first requirement might have a bigger impact, breaking a lot of code. But deprecating direct access makes for a cleaner API, so I'd like to go with it. username_0: OK, I thought we didn't want to break any user code/applications even if the side effects could be solved by updating their code. But it seems I misunderstood the discussion. username_0: Made a draft PR to show what the change looks like: #16232 username_0: Closing for now because I made a VAO PR (#16287) without breaking any existing APIs. We can revisit later if needed. Status: Issue closed
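To make the accessor-based API discussed in this thread concrete, here is a minimal sketch; the `getAttributeNames()` helper and the `version` bump are assumptions taken from the proposal above, not the actual three.js implementation:
```javascript
// Hypothetical sketch of the accessor-based geometry API proposed above.
// Method names and the version counter are assumptions, not three.js code.
class GeometrySketch {

	constructor() {

		this._attributes = {}; // kept "private" by convention
		this.version = 0; // bumped on structural changes so a stale VAO can be detected cheaply

	}

	addAttribute( name, attribute ) {

		this._attributes[ name ] = attribute;
		this.version ++; // structural change: the VAO for this geometry must be rebuilt
		return this;

	}

	removeAttribute( name ) {

		delete this._attributes[ name ];
		this.version ++;
		return this;

	}

	getAttribute( name ) {

		return this._attributes[ name ];

	}

	// replacement for `for ( name in geometry.attributes )` loops once attributes are no longer public
	getAttributeNames() {

		return Object.keys( this._attributes );

	}

}

// usage: iterate without touching the private backing object
var geometry = new GeometrySketch();
geometry.addAttribute( 'position', { itemSize: 3 } );
for ( var name of geometry.getAttributeNames() ) {

	var attribute = geometry.getAttribute( name );
	console.log( name, attribute );

}
```
A renderer-side check then only needs to compare a stored `version` against the geometry's current `version` to decide whether a VAO rebuild is required.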
hep7agon/city-feedback-hub
142338134
Title: Constant sync with open311 Question: username_0: CHF should have a constant sync with open311. All the data that is in the feedback system should be available at feedback.hel.ninja. Status: Issue closed Answers: username_1: A cron job runs the command to synchronize feedbacks every five minutes. username_0: Excellent!
banzaicloud/jwt-to-rbac
753496123
Title: Leveraging SA token to access EKS cluster Question: username_0: **Is your feature request related to a problem? Please describe.** Having just the SA token is not sufficient to access an EKS cluster. **Describe the solution you'd like to see** Since this solution aims to solve the managed OIDC authentication issue for managed clusters, it would be great to show how to do it for the biggest cloud providers: AKS, EKS, GKE **Describe alternatives you've considered** n/a **Additional context** For EKS clusters, performing just:
```
k --token $token --certificate-authority ca.crt --server $APISERVER get po
error: You must be logged in to the server (Unauthorized)
```
after annotating the service account with an AWS role:
```
k --token $token --certificate-authority ca.crt --server $APISERVER get po
error: the server doesn't have a resource type "po"
```
Answers: username_0: I am really curious whether you have ever had this issue. If so, how have you solved it? Regards username_1: Maybe it is just what it says? That "po" does not exist? Did you try with "pod"? username_0: Yes, I did. Same error. username_2: Hi @username_0 , I can authenticate with an SA token to my EKS cluster.
```
kubectl --token $TOKEN --server https://0A9D8FA06B1F6CEB05E4210D0A7A089F.gr7.us-west-2.eks.amazonaws.com get pod -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-96g2r             1/1     Running   0          65m
aws-node-dv79l             1/1     Running   0          66m
aws-node-m7hv4             1/1     Running   0          66m
coredns-5946c5d67c-tjbwl   1/1     Running   0          71m
coredns-5946c5d67c-tkdnb   1/1     Running   0          71m
kube-proxy-46sxk           1/1     Running   0          65m
kube-proxy-h5ndg           1/1     Running   0          66m
kube-proxy-jjhht           1/1     Running   0          66m
```
username_0: @username_2 thanks for your answer. I did not decode the secret with `base64 --decode` and that was the issue. The jwt-to-rbac works out of the box - great! It's a very promising solution for having cloud-native OIDC login to the cluster. Btw, I am going to integrate `jwt-to-rbac` with the containers platform that I and other folks are working on. Take a look, there is a lot of valuable and interesting stuff: https://github.com/redkubes/otomi-core username_2: I'll close the issue, but feel free to reopen it if you have additional questions. Status: Issue closed
big-data-europe/docker-spark
290878520
Title: entrypoint.sh not executed when starting the container Question: username_0: I'm using this docker image to launch a series of slave/masters and execute workloads using hdfs as the file system. I set the environmental variables needed such as CORE_CONF_fs_defaultFS or CLUSTER_NAME but they don't seem to have any effect on the hadoop configuration files. I've realised that the script [entrypoint.sh](https://github.com/big-data-europe/docker-hadoop/blob/master/base/Dockerfile) that uses these environmental variables is not executed. This script is included in the [docker-hadoop](https://github.com/big-data-europe/docker-hadoop) image, which is the base of this docker-spark image. Now, I'm not sure if launching a docker container means launching all of the inherited entry points of the parent images but the HDFS configuration part is not working. Answers: username_1: Hi @username_0 ! Which version/branch are you using? please provide minimal docker-compose which recreate your issue. username_0: It's basically like this. Is not a docker-compose but a Marathon JSON. It's understandable anyways with the version of the image under the image entry. { "id": "/hdfssparkstandalone", "groups": [ { "id": "/hdfssparkstandalone/namenode", "apps":[ { "id": "/hdfssparkstandalone/namenode/namenode", "cpus": 1, "mem": 2048, "container": { "type": "DOCKER", "docker": { "image": "uhopper/hadoop-namenode:2.8.1", "forcePullImage": true }, "volumes": [ { "containerPath": "/hadoop/dfs/name", "hostPath": "/home/vagrant/name", "mode": "RW" } ], "portMappings": [ { "hostPort": 0, "containerPort": 8020}, { "hostPort": 0, "containerPort": 50070 } ] }, "networks": [ { "mode": "container", "name": "dcos" } ], "env" : { "MULTIHOMED_NETWORK" : "0", "CLUSTER_NAME" : "hdfs-cluster", "HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check" : "false", "HDFS_CONF_dfs_client_use_datanode_hostname" : "false", "HDFS_CONF_dfs_datanode_use_datanode_ip_hostname" : "false", "HDFS_CONF_dfs_datanode_use_datanode_hostname" : "false" }, "instances": 1 } ] }, { "id": "/hdfssparkstandalone/datanode", "dependencies": ["/hdfssparkstandalone/namenode"], "apps":[ { "id": "/hdfssparkstandalone/datanode/datanode", "cpus": 1, "mem": 2048, "container": { [Truncated] "MULTIHOMED_NETWORK" : "0", "CLUSTER_NAME" : "hdfs-cluster", "CORE_CONF_fs_defaultFS" : "hdfs://namenode-namenode-hdfssparkstandalone.marathon-user.containerip.dcos.thisdcos.directory:8020", "HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check" : "false", "HDFS_CONF_dfs_client_use_datanode_hostname" : "false", "HDFS_CONF_dfs_datanode_use_datanode_ip_hostname" : "false", "HDFS_CONF_dfs_datanode_use_datanode_hostname" : "false", "SPARK_MASTER" : "spark://sparkmaster-sparkmaster-hdfssparkstandalone.marathon-user.containerip.dcos.thisdcos.directory:7077", "YARN_CONF_yarn_nodemanager_resource_memory___mb" : "4096", "YARN_CONF_yarn_nodemanager_resource_cpu___vcores" : "1" } } ] } ] } username_1: fixing the problem now username_1: @username_0 I fixed it on the branch you are using, you can remove images, pull and that should work for you now. Status: Issue closed
pcbend/GRSISort
68460901
Title: Checks in TAnalysisTreeBuilder Question: username_0: I have a few checks in the analysis tree builder which check to see if GRIFFIN is really a SCEPTAR or not. This is likely not the place to put this, and analysis scripts would be better to place this, but I have it in the main code for now. This issue is here to remind myself to move it. Answers: username_1: Where are these checks? I'd like to have a glance at them. Also, when did you include them? username_0: I think they might be causing problems anyway. We are currently taking a look at this. Will follow up. > username_0: They are under TAnalysisTreeBuilder::ProcessEvent() username_1: These have disappeared from leanGRSI. I think that they probably shouldn't be implemented in the code itself, but @username_0 and @jpore need to be aware of this. username_0: Does anyone have any ideas for how we should implement something like this? I'm thinking we can use some of our extra flags in the detector hits to say "this had the wrong detector type" or something upon building hits. At the very least we could use one of the reserved flags for "This hit is questionable" although we lose a bunch of information doing that. However, the current way of doing things loses far more information. username_1: Encountered this problem again, so *bump*. username_1: I like the "We can tell that this interaction has the wrong address, which means we don't trust the information, which means we should not use it" line of thought here. username_2: Is this issue still relevant? I don't think that we have as many problems with wrong addresses with new data, but maybe we should have this for the older data and the (few?) times this happens in newer data? username_2: If this is still relevant, I think we should make an entry on the wiki "Sorting old data" or something like that. There we can post any code that should be added to check for these issues. Status: Issue closed
postalsys/imapflow
997416919
Title: exists handler not being invoked Question: username_0: **Describe the bug** It looks like there is an issue with "exists" handler invocation whenever a new email arrives in a subscribed mailbox. **To Reproduce** I attached my backend log with detailed descriptions of the significant steps: https://gist.github.com/username_0/605b519f9d7f2a035461596d82fcc848 **Expected behavior** Whenever a new email arrives in a mailbox that the client is subscribed to, the "exists" handler should be invoked. I'd appreciate it if you could provide a clue about a possible issue in my processing, but based on the detailed logger info from the library, the server responds as expected, yet the client library does not process the events accurately. Thank you in advance! Answers: username_1: What server is this? Is there an untagged FETCH after EXPUNGE or is it plain EXPUNGE without the updated message count? username_0: @username_1 , thanks for looking into it. It's CommuniGate Pro. As for the second question, I'm afraid I'm not that well versed in IMAP implementation details. Is there any diagnostic I can provide you with from the server side which could help to troubleshoot it? Luckily it's an internal server, so I have access to the logs through our admin. username_1: It seems that ImapFlow is not completely compatible with that server. It is proprietary, so I haven't been able to test against it. Based on your log I think I have a hunch of what needs improving; I'll look into it. username_1: Could you try v1.0.68 and see if it made things better or not username_0: Thank you, @username_1! It's working fine now Status: Issue closed
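For anyone hitting this, here is a minimal sketch of how the `exists` event is typically wired up with ImapFlow; the host and credentials below are placeholders, and the idle/lock handling is an assumption that should be checked against the ImapFlow documentation:
```javascript
const { ImapFlow } = require('imapflow');

async function watchInbox() {
    // placeholder connection details
    const client = new ImapFlow({
        host: 'imap.example.com',
        port: 993,
        secure: true,
        auth: { user: 'user@example.com', pass: 'secret' }
    });

    // fires whenever the message count of the currently opened mailbox changes
    client.on('exists', data => {
        console.log(`Message count in ${data.path} changed from ${data.prevCount} to ${data.count}`);
    });

    await client.connect();

    // keep the mailbox open so the server can push untagged EXISTS updates
    const lock = await client.getMailboxLock('INBOX');
    try {
        await client.idle();
    } finally {
        lock.release();
    }
}

watchInbox().catch(console.error);
```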
mzechmeister/serval
428177010
Title: WARNING: Matrix is not positive definite. Zero or negative yerr values for 0 at [] Question: username_0: This can happen if only a few data points are available. Low-SNR data should not be normalised with a cubic polynomial. Answers: username_1: Sometimes, even when using the -pspline option, the error persists and no RVs are computed. Any suggestions? username_0: Can you mail me the input command and the spectrum list?
sbt/sbt
417751446
Title: Wrong Type Sage repository when creating a Dotty template Question: username_0: It seems like `sbt` points to the wrong Type Safe repository when creating a Dotty template. ## steps 1. I create a Dotty template, as explained in [here](http://dotty.epfl.ch/docs/usage/getting-started.html): `sbt new lampepfl/dotty.g8`. 2. I run the command `sbt` and get errors. ## problem I get errors related to not finding the dependency (I think). Here is the [full output](https://gist.github.com/username_0/d97a572530fd67d340f2aeaa2158259b) and below just a small snippet: ``` :::::::::::::::::::::::::::::::::::::::::::::: [warn] [warn] Note: Some unresolved dependencies have extra attributes. Check that these dependencies exist with the requested attributes. [warn] ch.epfl.lamp:sbt-dotty:0.3.0 (scalaVersion=2.12, sbtVersion=1.0) [warn] [warn] Note: Unresolved dependencies path: [warn] ch.epfl.lamp:sbt-dotty:0.3.0 (scalaVersion=2.12, sbtVersion=1.0) (/Users/username_0reyes/Code/Godot/godotty/project/plugins.sbt#L1-2) [warn] +- default:godotty-build:0.1.0-SNAPSHOT (scalaVersion=2.12, sbtVersion=1.0) [error] sbt.librarymanagement.ResolveException: unresolved dependency: ch.epfl.lamp#sbt-dotty;0.3.0: Resolution failed several times for dependency: ch.epfl.lamp#sbt-dotty;0.3.0 {compile=[default(compile)]}:: ``` ## expectation I was expecting to run `sbt` successfully or any other `sbt` command, such as compilation. I am able to by-pass the issue by adding the `sbtPlugin := false` at the end of the `plugins.sbt`, which look like: ``` addSbtPlugin("ch.epfl.lamp" % "sbt-dotty" % "0.3.0") sbtPlugin := false ``` ## notes sbt version: *1.2.8* Can this somehow be related to #3100 ? P.S. Please, feel free to update the description with more meaningful information. Answers: username_1: Are you using Ubuntu? If so could you make sure to use Java 8, or Java 11 with certificates etc? username_2: I don't think the SSL handshake is the problem; after trying to debug this issue for myself, I noticed some key points: the resolver failures come back as ``` [error] typesafe-ivy-releases: unable to get resource for ch.epfl.lamp#sbt-dotty;0.3.0: res=https://repo.typesafe.com/typesafe/ivy-releases/ch.epfl.lamp/sbt-dotty/scala_2.12/sbt_1.0/0.3.0/ivys/ivy.xml: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target [error] sbt-plugin-releases: unable to get resource for ch.epfl.lamp#sbt-dotty;0.3.0: res=https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/ch.epfl.lamp/sbt-dotty/scala_2.12/sbt_1.0/0.3.0/ivys/ivy.xml: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target [error] typesafe-ivy-releases: unable to get resource for ch.epfl.lamp#sbt-dotty;0.3.0: res=https://repo.typesafe.com/typesafe/ivy-releases/ch.epfl.lamp/sbt-dotty/0.3.0/jars/sbt-dotty.jar: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target ``` which is expected-- the dotty sbt plugin is hosted on maven central (http://repo1.maven.org/maven2/ch/epfl/lamp/sbt-dotty_2.12_1.0/0.3.0/). 
Running in debug mode and searching out dotty gives us an interesting behavior: ```[debug] local: no ivy file found for ch.epfl.lamp#sbt-dotty;0.3.0 [debug] tried https://repo1.maven.org/maven2/ch/epfl/lamp/sbt-dotty_2.12_1.0/0.3.0/sbt-dotty-0.3.0.pom [debug] public: found md file for ch.epfl.lamp#sbt-dotty;0.3.0 [debug] => https://repo1.maven.org/maven2/ch/epfl/lamp/sbt-dotty_2.12_1.0/0.3.0/sbt-dotty-0.3.0.pom (0.3.0) [debug] downloading https://repo1.maven.org/maven2/ch/epfl/lamp/sbt-dotty_2.12_1.0/0.3.0/sbt-dotty-0.3.0.pom ... [debug] public: downloading https://repo1.maven.org/maven2/ch/epfl/lamp/sbt-dotty_2.12_1.0/0.3.0/sbt-dotty-0.3.0.pom [debug] public: downloading https://repo1.maven.org/maven2/ch/epfl/lamp/sbt-dotty_2.12_1.0/0.3.0/sbt-dotty-0.3.0.pom.sha1 [debug] sha1 OK for https://repo1.maven.org/maven2/ch/epfl/lamp/sbt-dotty_2.12_1.0/0.3.0/sbt-dotty-0.3.0.pom [debug] [SUCCESSFUL ] ch.epfl.lamp#sbt-dotty;0.3.0!sbt-dotty.pom(pom.original) (112ms) [debug] tried https://repo.typesafe.com/typesafe/ivy-releases/ch.epfl.lamp/sbt-dotty/0.3.0/jars/sbt-dotty.jar``` Even though the package is found in maven central, it fails the download through `repo.typesafe`. username_0: Hi, I am using OSX Mojave 10.14.3. As mentioned, I only know that it does not get the package from maven central, which was hinted by @smarter. Thanks for looking into this. username_2: I can add a little more information from another debugging session: I tried manually adding maven central as a resolver, which caused the error message to become: ``` [error] sbt.librarymanagement.ResolveException: unresolved dependency: ch.epfl.lamp#sbt-dotty;0.2.6: Resolution failed several times for dependency: ch.epfl.lamp#sbt-dotty;0.2.6 {compile=[default(compile)]}:: [error] typesafe-ivy-releases: unable to get resource for ch.epfl.lamp#sbt-dotty;0.2.6: res=https://repo.typesafe.com/typesafe/ivy-releases/ch.epfl.lamp/sbt-dotty/scala_2.12/sbt_1.0/0.2.6/ivys/ivy.xml: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target [error] sbt-plugin-releases: unable to get resource for ch.epfl.lamp#sbt-dotty;0.2.6: res=https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/ch.epfl.lamp/sbt-dotty/scala_2.12/sbt_1.0/0.2.6/ivys/ivy.xml: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target [error] typesafe-ivy-releases: unable to get resource for ch.epfl.lamp#sbt-dotty;0.2.6: res=https://repo.typesafe.com/typesafe/ivy-releases/ch.epfl.lamp/sbt-dotty/0.2.6/jars/sbt-dotty.jar: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target [error] typesafe-ivy-releases: unable to get resource for ch.epfl.lamp#sbt-dotty;0.2.6: res=https://repo.typesafe.com/typesafe/ivy-releases/ch.epfl.lamp/sbt-dotty/0.2.6/jars/sbt-dotty.jar: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target ``` Note that the last two (which should have been default maven central and manually added maven 
central) refer to typesafe-ivy-releases; I think something is causing SBT to rewrite the URL to the typesafe ivy repository. username_1: I am on macOS as well, and the template works for me.
```
$ java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
```
Ivy goes through the resolvers to find the artifact, so if one is erroring out that could prevent the resolution from going forward. username_2: When I tried, I wasn't using the template, just declaring it as an SBT plugin in my project. SBT seems to resolve other resources from typesafe-ivy-releases fine, and as I noted above, I can clearly see SBT finding the information in maven central, then switching back to try to download the actual jar from typesafe-ivy-releases. Perhaps I should open a new issue related to that; it seems like the problem I am facing is expected: if the resource is found in maven central, it should finish downloading from maven central, not switch back to a different resolver and error out. (I just tested again; trying to download the plugin still fails for the same reasons.) @username_1 Did the dotty plugin perhaps resolve from your ivy cache instead of from the internet? username_1: To make sure I don't have things cached on my machine, I just used a fresh Ubuntu image on Docker, and was able to install AdoptOpenJDK JDK 11, sbt 1.3.2, and run the Dotty template. https://gist.github.com/username_1/ef7a4972539c8014c662e1e6d021bf81
```
[info] Done packaging.
[info] Running Main
Hello world! I was compiled by dotty :)
[success] Total time: 14 s, completed Sep 29, 2019, 9:09:40 PM
```
For now I am going to close this issue, but it might be useful to reproduce this on Travis CI or some other repeatable infrastructure to figure out what might be going on? Status: Issue closed
JuliaLang/julia
379039739
Title: ccall((function, library), ...) does not work in __init__ when the library is PIE Question: username_0: Original discussion at: https://github.com/JuliaPy/PyCall.jl/issues/612 Executing `ccall((function_name, library), ...)` when `library` is a position independent executable (PIE) Python yields a segmentation fault when done in `__init__`. However, it works if (1) it's done in REPL or script (i.e., top-level statements) or (2) `Libdl.dlsym` is used. This is [maybe](https://github.com/JuliaPy/PyCall.jl/issues/612#issuecomment-437256636) Linux-specific. I can reproduce this with Julia 1.0.1 and master. Here is a demo: clone https://gist.github.com/username_0/871c893d8bebf6741e10ee14f46c30fe and run `make` or ```console $ conda create --prefix py defaults::python ... $ file -L py/bin/python py/bin/python: ELF 64-bit LSB pie executable x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, with debug_info, not stripped $ cat PIEPyCall-0.jl module PIEPyCall using Libdl const pyhome = abspath("py") const libpython = "$pyhome/bin/python3" const wPYTHONHOME = Base.cconvert(Cwstring, string(pyhome, ':', pyhome)) const wpyprogramname = Base.cconvert(Cwstring, libpython) function __init__() h = Libdl.dlopen(libpython, Libdl.RTLD_LAZY|Libdl.RTLD_GLOBAL) @show unsafe_string(ccall((:Py_GetVersion, libpython), Ptr{UInt8}, ())) @show ccall((:Py_IsInitialized, libpython), Cint, ()) ccall((:Py_SetProgramName, libpython), Cvoid, (Ptr{Cwchar_t},), wpyprogramname) ccall((:Py_SetPythonHome, libpython), Cvoid, (Ptr{Cwchar_t},), wPYTHONHOME) ccall((:Py_InitializeEx, libpython), Cvoid, (Cint,), 0) @show ccall((:Py_IsInitialized, libpython), Cint, ()) end end $ julia --startup-file=no PIEPyCall-0.jl unsafe_string(ccall((:Py_GetVersion, libpython), Ptr{UInt8}, ())) = "3.7.1 (default, Oct 23 2018, 19:19:42) \n[GCC 7.3.0]" ccall((:Py_IsInitialized, libpython), Cint, ()) = 0 signal (11): Segmentation fault in expression starting at /home/takafumi/repos/scratch/gist/871c893d8bebf6741e10ee14f46c30fe/PIEPyCall-0.jl:10 fileno_unlocked at /usr/lib/libc.so.6 (unknown line) _PySys_BeginInit at /tmp/build/80754af9/python_1540319607830/work/Python/sysmodule.c:2292 _Py_InitializeCore_impl at /tmp/build/80754af9/python_1540319607830/work/Python/pylifecycle.c:753 _Py_InitializeCore at /tmp/build/80754af9/python_1540319607830/work/Python/pylifecycle.c:859 _Py_InitializeFromConfig at /tmp/build/80754af9/python_1540319607830/work/Python/pylifecycle.c:1002 Py_InitializeEx at /tmp/build/80754af9/python_1540319607830/work/Python/pylifecycle.c:1034 __init__ at /home/takafumi/repos/scratch/gist/871c893d8bebf6741e10ee14f46c30fe/PIEPyCall-0.jl:16 unknown function (ip: 0x7fc31c786c3c) jl_fptr_trampoline at /buildworker/worker/package_linux64/build/src/gf.c:1831 jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2184 jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1537 [inlined] jl_module_run_initializer at /buildworker/worker/package_linux64/build/src/toplevel.c:90 jl_module_load_time_initialize at /buildworker/worker/package_linux64/build/src/toplevel.c:122 [inlined] jl_eval_module_expr at /buildworker/worker/package_linux64/build/src/toplevel.c:276 jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:651 jl_parse_eval_all at /buildworker/worker/package_linux64/build/src/ast.c:838 jl_load at /buildworker/worker/package_linux64/build/src/toplevel.c:847 [Truncated] - @show ccall((:Py_IsInitialized, libpython), Cint, 
())
+ @show ccall(Libdl.dlsym(h,:Py_IsInitialized), Cint, ())
- ccall((:Py_SetProgramName, libpython), Cvoid, (Ptr{Cwchar_t},), wpyprogramname)
- ccall((:Py_SetPythonHome, libpython), Cvoid, (Ptr{Cwchar_t},), wPYTHONHOME)
+ ccall(Libdl.dlsym(h,:Py_SetProgramName), Cvoid, (Ptr{Cwchar_t},), wpyprogramname)
+ ccall(Libdl.dlsym(h,:Py_SetPythonHome), Cvoid, (Ptr{Cwchar_t},), wPYTHONHOME)
- ccall((:Py_InitializeEx, libpython), Cvoid, (Cint,), 0)
+ ccall(Libdl.dlsym(h,:Py_InitializeEx), Cvoid, (Cint,), 0)
- @show ccall((:Py_IsInitialized, libpython), Cint, ())
+ @show ccall(Libdl.dlsym(h,:Py_IsInitialized), Cint, ())
end
end
```
cc @isuruf @stevengj Status: Issue closed
microsoft/botframework-solutions
534316640
Title: [Personality Chat Dataset] Update Hospitality Assistant Sample to use the Caring dataset Question: username_0: ## User Story As ...a virtual assistant developer of a hospitality assistant I want ...to leverage the latest caring personality chat datasets in my hospitality assistant so that ...I get the best-quality multi-language caring capabilities in an out-of-the-box hospitality assistant ## Acceptance Criteria *List your requirements.* Update to use the latest Caring personality dataset for the following languages in the Virtual Assistant solution https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets
- [ ] English
- [ ] Chinese
- [ ] German
- [ ] Spanish
- [ ] French
- [ ] Italian
Update the documentation for the hospitality assistant sample to indicate this new update: https://microsoft.github.io/botframework-solutions/solution-accelerators/assistants/hospitality-assistant/ <!-- Internal Use Only ### Time Estimations ⏱ Documentation: ___ ⏱ Development: ___ ⏱ Tests: ___ --> Answers: username_1: @username_0 Should we update the ones already in the original Chitchat? For example, special ones like "What's your name" "My name is the Virtual Assistant! Nice to meet you :)". username_2: @username_1 I've updated the Hospitality VA. With the latest VirtualAssistantsSample, you're free to update the KB whenever username_1: @username_0 I just added several examples to indicate the update. Please help to update the documents about the caring chitchat dataset. @username_2 Hotel Faq is removed now; should we remove the examples part from the document? Status: Issue closed
gatsbyjs/gatsby
1052385874
Title: GatsbyImage css classes specificity overtakes Tailwind only on build, not develop Question: username_0: ### Preliminary Checks - [X] This issue is not a duplicate. Before opening a new issue, please search existing issues: https://github.com/gatsbyjs/gatsby/issues - [X] This issue is not a question, feature request, RFC, or anything other than a bug report directly related to Gatsby. Please post those things in GitHub Discussions: https://github.com/gatsbyjs/gatsby/discussions ### Description When working with [Tailwind CSS aspect ratio](https://github.com/tailwindlabs/tailwindcss-aspect-ratio) and GatsbyImage, everything looks and works fine locally when working with `gatsby develop`, but once deployed I saw that GatsbyImage is providing css classes that override Tailwind specificity and produce unexpected results. I found [this discussion around StaticImage](https://github.com/gatsbyjs/gatsby/discussions/33565), but two things come to mind: - This should work the same in development as in production, I shouldn't be surprised by build time css specificity issues - For aspect ratio, Tailwind doesn't add !important when using the `important: true` directive in their config, so the workaround specified in the discussion won't work here - There should be an explicit workaround for these cases in the gatsby-plugin-image documentation You can see the live version of this here: https://inspiring-hamilton-bd685d.netlify.app/ ### Reproduction Link https://github.com/username_0/gatsby-plugin-image-tailwind-specificity ### Steps to Reproduce 1. Clone the repo & `npm install` 2. `gatsby develop` and go to localhost:8000 3. Stop the dev server and `gatsby build && gatsby serve` 4. Go to localhost:9000 ### Expected Result - Things should look the same both in develop and build versions of the site. - GatsbyImage should not be injecting css classes that conflict with any classes that I write ### Actual Result - Things look and work differently between develop and build versions of the site - GatsbyImage messes with my css ### Environment ```shell System: OS: macOS 11.5.2 CPU: (8) arm64 Apple M1 Shell: 5.8 - /bin/zsh Binaries: Node: 16.9.1 - ~/.nvm/versions/node/v16.9.1/bin/node npm: 7.21.1 - ~/.nvm/versions/node/v16.9.1/bin/npm Languages: Python: 2.7.16 - /usr/bin/python Browsers: Chrome: 95.0.4638.69 Safari: 14.1.2 npmPackages: gatsby: ^4.1.3 => 4.1.3 gatsby-plugin-image: ^2.1.2 => 2.1.2 gatsby-plugin-mdx: ^3.1.3 => 3.1.3 gatsby-plugin-postcss: ^5.1.0 => 5.1.0 gatsby-plugin-sharp: ^4.1.3 => 4.1.3 gatsby-source-filesystem: ^4.1.2 => 4.1.2 gatsby-transformer-sharp: ^4.1.0 => 4.1.0 npmGlobalPackages: gatsby-cli: 4.1.1 ``` ### Config Flags _No response_ Answers: username_0: As is traditional in software, I figured out a workaround after spending a huge amount of time documenting the core issue. If you enable `jit` in your tailwind config: ``` module.exports = { mode: 'jit', purge: [ './public/**/*.html', './src/**/*.{js,jsx,ts,tsx,vue}', ], darkMode: false, theme: {}, variants: { extend: {}, }, plugins: [ require("@tailwindcss/aspect-ratio"), require("@tailwindcss/typography"), require("@tailwindcss/forms"), ], } ``` You can then add a ! in front of the utility, as specified here: https://tailwindcss.com/docs/just-in-time-mode#built-in-important-modifier In my case, I put it in front of the aspect ratio classes, like so: `<div className="!aspect-w-3 aspect-h-2 sm:!aspect-w-4 sm:aspect-h-3">` I still think this is a bit kludgey, but it works! Hopefully this will benefit future internet travelers. 
username_1: Hi! Thanks for the issue. For the develop/build mismatch you can try https://github.com/gatsbyjs/gatsby/discussions/28138 and we're working on getting it into the hands of everyone by default. That the component overrides your custom styles is expected; that's how the CSS cascade works. We're not putting the CSS into a CSS file as that would mean an additional request, so it's either in a `<style>` tag or inline styles -- and those take precedence. Using `!important` is a well-known workaround for these cases. Status: Issue closed username_2: Why is this closed? I'm having the same issue, and regardless of the build/dev environment difference, the issue is caused by `gatsby-plugin-image` injecting its styles later than we're able to do via `gatsby-browser`. Thus it beats any custom CSS we add, due to equal specificity but a later position in the document. Shouldn't `gatsby-plugin-image` inject its styles at the very top of the DOM tree to make this easier to override? @username_1
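As an illustration of the precedence point above, passing a `style` prop to `GatsbyImage` is one way to out-rank the injected class styles, since inline styles win the cascade. The data shape and the absolute positioning below are only assumptions about what the Tailwind aspect-ratio wrapper expects, so treat this as a sketch rather than a drop-in fix:
```jsx
import * as React from 'react';
import { GatsbyImage, getImage } from 'gatsby-plugin-image';

// Inline styles take precedence over the <style> block that gatsby-plugin-image
// injects, so the wrapper positioning can be forced from the style prop.
export function RatioImage({ data }) {
  const image = getImage(data.file); // assumes a `file` node with gatsbyImageData
  return (
    <div className="aspect-w-3 aspect-h-2">
      <GatsbyImage
        image={image}
        alt=""
        style={{ position: 'absolute', top: 0, left: 0, width: '100%', height: '100%' }}
      />
    </div>
  );
}
```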
GrahamGoudeau/WMFO
226310033
Title: Add create account form Question: username_0: Will need these fields:
* email
* password
* confirm password
* unique code (the exec board will create an account under the new DJ's email, and the new DJ will get an email with a unique code that they enter here to finalize their account)
Status: Issue closed
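A rough sketch of client-side validation for the fields listed above; the function name, field names, and rules are hypothetical placeholders for illustration, not the actual WMFO form or API:
```javascript
// Hypothetical validation helper for the create-account form described above.
function validateCreateAccountForm({ email, password, confirmPassword, uniqueCode }) {
  const errors = [];

  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.push('Please enter a valid email address.');
  }
  if (!password || password.length < 8) {
    errors.push('Password must be at least 8 characters long.');
  }
  if (password !== confirmPassword) {
    errors.push('Password and confirmation do not match.');
  }
  if (!uniqueCode || uniqueCode.trim().length === 0) {
    errors.push('Please enter the unique code from your invitation email.');
  }

  return errors;
}

// usage: an empty array means the form is valid
const errors = validateCreateAccountForm({
  email: 'dj@example.com',
  password: 'hunter2hunter2',
  confirmPassword: 'hunter2hunter2',
  uniqueCode: 'ABC123',
});
console.log(errors);
```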
ListenerJubatus/smtheme-fiftyOne
298453330
Title: Is the tournament edition still being worked on? Question: username_0: Apologies, didn't know how else to reach you. About a year ago you posted the following thread on ZiV: https://zenius-i-vanisher.com/v5.2/thread?threadid=9226 I'm curious if this edition of the theme is still being worked on and if there was a branch I could fork in order to test out and possibly contribute to. Thanks! Status: Issue closed Answers: username_1: _Actually_ about that... stay tuned. username_0: ooh good to hear, I was taking a look at branch you had from a while ago and wasn't sure if you had brought it into 5.1 from 5.2 or not. in any case, exciting!
robolectric/robolectric
293228577
Title: Firebase and FirebaseAnalytics initialization exception Question: username_0: ### Description I'm new to unit testing and Robolectric so this is probably the cause of all my problem but still I need your help. My test class looks line this: ``` @RunWith(RobolectricTestRunner.class) @Config(sdk = { 16, 17, 18, 19, 21, 22, 23, 24, 25}) //ALL_SDKS public class ViewTest { MainActivity view; @Before public void setup() { Robolectric.getBackgroundThreadScheduler().pause(); Robolectric.getForegroundThreadScheduler().pause(); view = Robolectric.setupActivity(MainActivity.class); Robolectric.getBackgroundThreadScheduler().unPause(); Robolectric.getForegroundThreadScheduler().unPause(); } @Test public void test() { view.hideContent(); assertThat(view.binding.activityContent.getVisibility(), is(View.GONE)); } } ``` When I ran the test at first I had similar problem to the one described [here](https://stackoverflow.com/questions/38713427/firebase-roboelectric-java-lang-illegalstateexception-firebaseapp-with-name?rq=1) and after using responses provided in that SO post I move forward to following error: ``` java.lang.BootstrapMethodError: call site initialization exception at java.lang.invoke.CallSite.makeSite(CallSite.java:341) at java.lang.invoke.MethodHandleNatives.linkCallSiteImpl(MethodHandleNatives.java:307) at java.lang.invoke.MethodHandleNatives.linkCallSite(MethodHandleNatives.java:297) at android.app.job.JobScheduler.<init>(JobScheduler.java) at android.app.JobSchedulerImpl.<init>(JobSchedulerImpl.java) at org.robolectric.util.ReflectionHelpers.callConstructor(ReflectionHelpers.java:339) at org.robolectric.internal.bytecode.ShadowImpl.newInstanceOf(ShadowImpl.java:21) at org.robolectric.shadow.api.Shadow.newInstanceOf(Shadow.java:29) at org.robolectric.shadows.ShadowContextImpl.getSystemService(ShadowContextImpl.java:202) at android.app.ContextImpl.getSystemService(ContextImpl.java) at android.content.ContextWrapper.getSystemService(ContextWrapper.java:659) at com.google.android.gms.internal.zzcjg.zzzq(Unknown Source) at com.google.android.gms.internal.zzcjg.zzjD(Unknown Source) at com.google.android.gms.internal.zzchj.initialize(Unknown Source) at com.google.android.gms.internal.zzcgl.<init>(Unknown Source) at com.google.android.gms.internal.zzcgl.zzbj(Unknown Source) at com.google.firebase.analytics.FirebaseAnalytics.getInstance(Unknown Source) at com.example.android.App.setupFirebaseServices(App.java:124) at com.example.android.App.onCreate(App.java:73) at org.robolectric.android.internal.ParallelUniverse.lambda$setUpApplicationState$0(ParallelUniverse.java:173) at org.robolectric.util.PerfStatsCollector.measure(PerfStatsCollector.java:72) at org.robolectric.android.internal.ParallelUniverse.setUpApplicationState(ParallelUniverse.java:172) at org.robolectric.RobolectricTestRunner.beforeTest(RobolectricTestRunner.java:328) at org.robolectric.internal.SandboxTestRunner$2.evaluate(SandboxTestRunner.java:241) at org.robolectric.internal.SandboxTestRunner.runChild(SandboxTestRunner.java:123) at org.robolectric.internal.SandboxTestRunner.runChild(SandboxTestRunner.java:42) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [Truncated] ``` compileSdkVersion 25 buildToolsVersion '26.0.2' ``` ``` compile 'com.google.firebase:firebase-crash:11.0.4' compile 'com.google.firebase:firebase-core:11.0.4' compile 'com.google.firebase:firebase-config:11.0.4' compile 'com.google.firebase:firebase-messaging:11.0.4' compile 
'com.google.firebase:firebase-perf:11.0.4'
compile 'com.google.android.gms:play-services-analytics:11.0.4'
```
I'm aware that these gms and firebase dependencies aren't the newest, but I can't change them for production at the moment. I've tried upgrading just to test whether that would help, and I'm afraid that's not the case.
```
testImplementation 'junit:junit:4.12'
testImplementation "org.robolectric:robolectric:3.6.1"
```
Answers: username_1: @username_0 have you found any solution for this? username_0: Unfortunately no; tests for this project were abandoned a long time ago, so I forgot about it.
openjournals/joss-reviews
920361400
Title: [PRE REVIEW]: Kinematics-vis: A Visualization Tool for the Mathematics of Human Motion Question: username_0: **Submitting author:** @username_2 (<a href="http://orcid.org/0000-0002-3635-1289"><NAME></a>) **Repository:** <a href="https://github.com/klevis-a/kinematics-vis.git" target ="_blank">https://github.com/klevis-a/kinematics-vis.git</a> **Version:** v.1.0.0 **Editor:** Pending **Reviewer:** Pending **Managing EiC:** <NAME> **:warning: JOSS reduced service mode :warning:** Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in [our blog post](https://blog.joss.theoj.org/2020/05/reopening-joss). ## Status [![status](https://joss.theoj.org/papers/ef7f95fc8216dd7c6d37a6679374b175/status.svg)](https://joss.theoj.org/papers/ef7f95fc8216dd7c6d37a6679374b175) Status badge code: ``` HTML: <a href="https://joss.theoj.org/papers/ef7f95fc8216dd7c6d37a6679374b175"><img src="https://joss.theoj.org/papers/ef7f95fc8216dd7c6d37a6679374b175/status.svg"></a> Markdown: [![status](https://joss.theoj.org/papers/ef7f95fc8216dd7c6d37a6679374b175/status.svg)](https://joss.theoj.org/papers/ef7f95fc8216dd7c6d37a6679374b175) ``` **Author instructions** Thanks for submitting your paper to JOSS @username_2. **Currently, there isn't an JOSS editor assigned** to your paper. @username_2 if you have any suggestions for potential reviewers then please mention them here in this thread (without tagging them with an @). In addition, [this list of people](https://bit.ly/joss-reviewers) have already agreed to review for JOSS and may be suitable for this submission (please start at the bottom of the list). **Editor instructions** The JOSS submission bot @username_0 is here to help you find and assign reviewers and start the main review. To find out what @username_0 can do for you type: ``` @username_0 commands ``` Answers: username_0: Hello human, I'm @username_0, a robot that can help you with some common editorial tasks. **:warning: JOSS reduced service mode :warning:** Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in [our blog post](https://blog.joss.theoj.org/2020/05/reopening-joss). For a list of things I can do to help you, just type: ``` @username_0 commands ``` For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type: ``` @username_0 generate pdf ``` username_0: ``` Software report (experimental): github.com/AlDanial/cloc v 1.88 T=0.04 s (1046.5 files/s, 129444.5 lines/s) ------------------------------------------------------------------------------- Language files blank comment code ------------------------------------------------------------------------------- JavaScript 32 621 173 3716 CSS 3 60 2 416 HTML 2 13 0 220 Markdown 1 12 0 31 Bourne Shell 2 0 0 27 DOS Batch 2 0 0 27 SVG 1 0 0 1 ------------------------------------------------------------------------------- SUM: 43 706 175 4438 ------------------------------------------------------------------------------- Statistical information for the repository '5dd976596eb34769f3b69782' was gathered on 2021/06/14. 
The following historical commit information, by author, was found: Author Commits Insertions Deletions % of changes Klevis 135 8091 3929 97.00 <NAME> 5 360 12 3.00 Below are the number of rows from each author that have survived and are still intact in the current revision: Author Rows Stability Age % in comments <NAME> 4510 1252.8 6.9 3.84 ``` username_0: PDF failed to compile for issue #3367 with the following error: ``` Can't find any papers to compile :-( ``` username_1: @username_0 check references from branch joos username_0: ``` Attempting to check references... from custom branch joos ``` username_0: ``` Reference check summary (note 'MISSING' DOIs are suggestions that need verification): OK DOIs - 10.1016/j.jbiomech.2004.05.042 is OK - 10.1016/j.jbiomech.2010.10.042 is OK - 10.1016/s0021-9290(00)00032-4 is OK - 10.1115/1.2894884 is OK - 10.1016/j.gaitpost.2021.05.004 is OK - 10.1016/j.jbiomech.2011.04.031 is OK - 10.2106/jbjs.g.01483 is OK - 10.1007/s11517-018-1903-3 is OK MISSING DOIs - None INVALID DOIs - None ``` username_1: @username_0 generate pdf from branch joos username_0: ``` Attempting PDF compilation from custom branch joos. Reticulating splines etc... ``` username_0: :point_right::page_facing_up: [Download article proof](https://raw.githubusercontent.com/openjournals/joss-papers/joss.03367/joss.03367/10.21105.joss.03367.pdf) :page_facing_up: [View article proof on GitHub](https://github.com/openjournals/joss-papers/blob/joss.03367/joss.03367/10.21105.joss.03367.pdf) :page_facing_up: :point_left: username_1: 👋 @username_2 - thanks for your submission. Due to a current shortage of editors (that we are working to resolve), I'm going to put it on our waiting list until an appropriate editor is available. username_2: @username_0 commands username_0: Here are some things you can ask me to do: ``` # List Whedon's capabilities @username_0 commands # List of editor GitHub usernames @username_0 list editors # List of reviewers together with programming language preferences and domain expertise @username_0 list reviewers EDITORIAL TASKS # Compile the paper @username_0 generate pdf # Compile the paper from alternative branch @username_0 generate pdf from branch custom-branch-name # Ask Whedon to check the references for missing DOIs @username_0 check references # Ask Whedon to check repository statistics for the submitted software @username_0 check repository ``` username_2: Possible reviewers for Javascript motion analysis/biomechanics/robotics applications. Great fit of language and domains: username_6 username_4 jizecn username_9 username_5 akuederle username_7 OK fit of language and domains: dhrubajyoti98 thelinuxmaniac dglmoore sixpearls ixjlyons malmaud Jgoldfar username_2: Possible reviewers for Javascript motion analysis/biomechanics/robotics applications. Great fit of language and domains: username_6 username_4 jizecn username_9 username_5 akuederle username_7 OK fit of language and domains: dhrubajyoti98 thelinuxmaniac dglmoore sixpearls ixjlyons malmaud Jgoldfar username_3: @username_0 invite @username_4 as editor :wave: @username_4 – would you be willing to edit this submission for JOSS? (Last invite for now I promise!) username_0: @username_4 has been invited to edit this submission. username_4: @username_0 assign @username_4 as editor username_0: OK, the editor is @username_4 username_4: Hi @username_2, I will be the editor for this paper. 
To get things going, could you please suggest 5-6 reviewers from [this](https://docs.google.com/spreadsheets/d/1PAPRJ63yq9aPC1COLjaQp8mHmEq3rZUzwUYxTulyu78/edit#gid=856801822) list that you consider appropriate to vet this paper? While selecting, you should confirm that the domain expertise of the reviewer matches the subject matter of your paper and software. Thank you! username_2: Already included this a few posts prior. I even had you as a great fit for review! Possible reviewers for Javascript motion analysis/biomechanics/robotics applications. Great fit of language and domains: username_6 username_4 (if you really want to :-) jizecn username_9 username_5 akuederle username_7 OK fit of language and domains: dhrubajyoti98 thelinuxmaniac dglmoore sixpearls ixjlyons malmaud Jgoldfar username_4: @username_2 Missed that, thank you! Let me try finding a couple of other reviewers and if we are unable to do so, I'll be happy to act as one of the reviewers as well username_4: @username_6 @jizecn @username_9 @username_5 @akuederle @username_7 - Would any of you be interested in reviewing this paper? I'd greatly appreciate your help! username_5: I am working to finish up my PhD, so I will have to pass on this one. username_6: I'm unfortunately fully booked for the foreseeable future, my apologies. username_4: @sixpearls @ixjlyons @malmaud @Jgoldfar @dhrubajyoti98 @thelinuxmaniac @dglmoore @jizecn @username_9 @AKuederle @username_7 - Can any of you please help review this paper? We'd greatly appreciate your contribution! username_7: I can review username_8: -- Best, <NAME> +1 (321) 895 4184 username_4: @username_7 - awesome! Thank you! @username_0 assign @username_7 as reviewer username_1: @username_4 - commands to @username_0 have to be the first thing in a comment - ☝️ didn't work username_4: @username_0 assign @username_7 as reviewer username_0: OK, @username_7 is now a reviewer username_9: @username_4 What's the Deadline for the review? I am next few weeks on vacation. I could submit the review mid August username_4: @username_9 we can definitely shoot for a mid-August completion to work around your vacation plans. I'll assign you as a reviewer and kickstart the review. You can get around to it whenever you're ready. username_4: @username_0 assign @username_9 as reviewer username_0: OK, @username_9 is now a reviewer username_4: @username_0 add @username_7 as reviewer username_0: OK, @username_7 is now a reviewer username_4: @username_0 start review username_0: OK, I've started the review over in https://github.com/openjournals/joss-reviews/issues/3490. Status: Issue closed
Sage-Bionetworks/nlp-sandbox
721958542
Title: Add LABEL org.opencontainers.image.source to docker image in CI/CD Question: username_0: org.opencontainers.image.source=${{ github.event.repository.clone_url }} Note: It looks like `${{ github.event.repository.clone_url }}` is empty when the GitHub action is triggered by `on: schedule`. This would be an issue for the nightly build of the image. Depends on https://github.com/Sage-Bionetworks/nlp-sandbox-schemas/issues/32 Answers: username_0: Here are other examples of labels to consider: org.opencontainers.image.created=${{ steps.prep.outputs.created }} org.opencontainers.image.url=${{ github.event.repository.html_url }} org.opencontainers.image.source=${{ github.event.repository.clone_url }} org.opencontainers.image.version=${{ steps.prep.outputs.version }} org.opencontainers.image.revision=${{ github.sha }} org.opencontainers.image.licenses=${{ github.event.repository.license.spdx_id }}
KrystianKjjk/CodersCamp2020.Project.FullStack-Node-React.OrganizationApp
811231446
Title: Investigate the authentication solutions Question: username_0: We need to select a tool/library for handling authentication (the solutions suggested in the previous course section should be sufficient, but maybe there is something more). Additionally, please check how the external mailing services are integrated for authentication purposes. Status: Issue closed
open-telemetry/opentelemetry-dotnet
1097742330
Title: Question - Can we automatically emit well-known ASP.NET Core event counters as metrics? Question: username_0: # Question **Describe your environment.** .NET based API in a Linux container. **What are you trying to achieve?** We are investigating whether we can emit the [well-known ASP.NET](https://docs.microsoft.com/en-us/dotnet/core/diagnostics/available-counters) event counters as OTEL metrics. ## Additional Context Event counters are new to me, so maybe I'm overlooking something. Answers: username_1: This needs https://github.com/open-telemetry/opentelemetry-dotnet/issues/2521. We can close this issue as #2521 is tracking the general EventCounter support.
SimonDedman/gbm.auto
795666815
Title: Loop: add prediction metrics summary Question: username_0: /home/simon/Documents/Si Work/PostDoc Work/Gbmauto help/Chuck Bangley/Mig/MLEvalMetricsBin.csv Scrape details from each MLEvalMetricsBin.csv, use same coding style as L267 (I think? Need to find a loop output, waiting for NAS to repopulate my local folders) Answers: username_0: Could do plots but need csv file equivalents of the Bin_Eval_???.png files in order to agglomerate them
electron-react-boilerplate/electron-react-boilerplate
656490788
Title: "import 'bootstrap/dist/css/bootstrap.min.css';" doesn't work inthe program Question: username_0: <!-- Please use the following issue template or your issue will be closed. --> <!--- Provide a general summary of the issue in the Title above --> I tried using `import 'bootstrap/dist/css/bootstrap.min.css` to import **Bootstrap**, but this code doesn't work as, ![图片](https://user-images.githubusercontent.com/27288110/87409564-aee41a80-c5f7-11ea-806a-b1ee57ff7d49.png) ## Prerequisites <!-- If the following boxes are not ALL checked, your issue is likely to be closed. --> - [x] Using yarn - [x] Using node 10.x - [x] Using an up-to-date [`master` branch](https://github.com/electron-react-boilerplate/electron-react-boilerplate/tree/master) - [x] Using latest version of devtools. See [wiki for howto update](https://electron-react-boilerplate.js.org/docs/dev-tools/) - [ ] Link to stacktrace in a Gist (for bugs) - [x] For issue in production release, devtools output of `DEBUG_PROD=true yarn build && yarn start` - [x] Tried solutions mentioned in [#400](https://github.com/electron-react-boilerplate/electron-react-boilerplate/issues/400) **Note for using latest version of devtools:** The command for updating extensions version cannot work on Windows PowerShell, which need to be confirmed. **Note for link to stacktrace in a Gist (for bugs):** I am sorry for that, as a beginner, I don't know how to do that. ## Expected Behavior <!--- If you're describing a bug, tell us what should happen --> `import 'bootstrap/dist/css/bootstrap.min.css` can work normally as, ![图片](https://user-images.githubusercontent.com/27288110/87411824-d5f01b80-c5fa-11ea-9096-ee5772399adf.png) <!--- If you're suggesting a change/improvement, tell us how it should work --> ## Current Behavior <!--- If describing a bug, tell us what happens instead of the expected behavior --> `import 'bootstrap/dist/css/bootstrap.min.css` seemed not to take effect, and it shows as, ![图片](https://user-images.githubusercontent.com/27288110/87409564-aee41a80-c5f7-11ea-806a-b1ee57ff7d49.png) <!--- If suggesting a change/improvement, explain the difference from current behavior --> ## Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug, --> <!--- or ideas how to implement the addition or change --> ## Steps to Reproduce (for bugs) <!--- Provide a link to a live example, or an unambiguous set of steps to --> <!--- reproduce this bug. Include code to reproduce, if relevant --> 1. Git clone the electron-react-boilerplate; 2. Install Bootstrap using `yarn add react-bootstrap bootstrap`; 3. 
Change the code of HomePage.tsx as, ```js import React from 'react'; import 'bootstrap/dist/css/bootstrap.min.css'; export default function HomePage() { [Truncated] <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> - Node version : v12.18.2 - Version or Branch used : 1.2.0 - Operating System and version : Windows 10 - Link to your project : <!--- ❗️❗️ Also, please consider donating (https://opencollective.com/electron-react-boilerplate-594) ❗️❗️ Donations will ensure the following: 🔨 Long term maintenance of the project 🛣 Progress on the roadmap 🐛 Quick responses to bug reports and help requests --> Answers: username_1: To import CSS you need to import like this: `import style from 'path/to/css';` then, use styles like this: `<div className={style.someClassProperty} />` In this case, you can import Bootstrap CSS from CDN on app.html `<link />` to use classNames as strings, but if you want to use your own CSS or Bootstrap CSS in the right way, I strongly recommend you to do like the above. username_0: Do I need to add some `loaders` in the `webpack.config.renderer.dev.babel.js`? username_1: if you haven't deleted anyone, no. but you need to have a `css.d.ts` on your components folder with the following code: ``` declare module '*.scss' { const content: { [className: string]: string }; export default content; } declare module '*.css' { const content: { [className: string]: string }; export default content; } ``` username_0: In fact, I found that the Bootstrap could work if add this line of code: ` <link rel="stylesheet" href="../node_modules/bootstrap/dist/css/bootstrap.min.css">` But, if I just use `import`, the Bootstrap still cannot take effect. username_0: Could you please add this feature, which can support `import`-style for importing `.css` file, like: `import 'bootstrap/dist/css/bootstrap.min.css';` username_2: I also have this issue. @username_1 In regards to: ``` To import CSS you need to import like this: import style from 'path/to/css'; then, use styles like this: <div className={style.someClassProperty} /> ``` How does that work for css classes with dashes in them? Because javascript cant use dashes, ie this doesnt work: ``` import bootstrapStyle from 'bootstrap/dist/css/bootstrap.min.css' <div className={bootstrapStyle.no-gutters} /> ``` Even if there's a workaround to use those types of class names (which bootstrap has a lot of) I think it would be much more convenient to just import the entire css as is, and then do the traditional bootstrap way of: ``` <div className="row no-gutters" /> ``` because bootstrap has a lot of class names that are optimized to be short, and relatively easy to remember. Basically it's nicer to write code like the above instead of: ``` <div className={`${bootstrapStyle.row} ${bootstrapStyle.no-gutters}`} /> ``` username_1: I actually don't know how to override this style-name problem but I was searching on internet for a solution and came across with this: https://react-bootstrap.github.io/ It can become very useful for your problem. username_3: [Here's an example](https://github.com/username_3/popcorn-time-desktop/blob/master/app/app.global.scss#L13 ) of an app that's using react-bootstrap and importing bootstrap's SASS username_2: @username_1 @username_3 Yeah react bootstrap is nice. Ive used it in the past. The second link looks worth looking into, thanks. 
In the meantime I have also found that I can just copy the bootstrap.min.css file into the app.global.css file and it works as expected. (i.e. I can do: `<div className="row no-gutters" />`) Just curious, but it seems the loading behavior of app.global.css is different than other css imports like `import 'bootstrap/dist/css/bootstrap.min.css'`; is that a configuration somewhere in the webpack config? username_3: @username_2 yeah, check out the webpack render config Status: Issue closed username_0: Hi, @username_2 , so what is your favourite UI style framework now? username_2: @username_0 Well I mainly just use bootstrap for its layouts `bootstrap.grid.css`, everything else is a bunch of custom css. I know it's bad, but for my needs it's fine. Also, here's what I figured out about the css loading: In `configs/webpack.config.renderer.dev.babel.js` (and `configs/webpack.config.renderer.prod.babel.js`) there are some css loading options such as: (Note the comments)
```
// Extract all .global.css to style.css as is
{
  test: /\.global\.css$/,
  use: [
    {
      loader: MiniCssExtractPlugin.loader,
      options: {
        publicPath: './',
      },
    },
    {
      loader: 'css-loader',
      options: {
        sourceMap: true,
      },
    },
  ],
},
// Pipe other styles through css modules and append to style.css
{
  test: /^((?!\.global).)*\.css$/,
  use: [
    {
      loader: MiniCssExtractPlugin.loader,
    },
    {
      loader: 'css-loader',
      options: {
        modules: {
          localIdentName: '[name]__[local]__[hash:base64:5]',
        },
        sourceMap: true,
      },
    },
  ],
},
```
so anything marked as .global.css will be loaded as-is, and otherwise it generates a unique name to avoid collisions: `localIdentName: '[name]__[local]__[hash:base64:5]',` so you can have the best of both worlds :) What I did was copy the bootstrap min css into my actual project folder and rename it to end in `.global.css`, and now it gets loaded as is.
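If you would rather keep `import 'bootstrap/dist/css/bootstrap.min.css'` working as a plain global import instead of copying the file, one option (an assumption, not part of the boilerplate's shipped config) is to add a rule that exempts CSS coming from `node_modules` from the CSS-modules pipeline:
```js
// Sketch only (not the boilerplate's shipped config): treat stylesheets that
// come from node_modules as plain global CSS instead of CSS modules.
module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        include: /node_modules/,
        use: [
          { loader: 'style-loader' },
          { loader: 'css-loader', options: { sourceMap: true } },
        ],
      },
      // the existing CSS-modules rule would then also need `exclude: /node_modules/`
      // so that both rules don't match the same file
    ],
  },
};
```
A related detail from the thread: class names containing dashes can still be used with CSS modules via bracket notation, e.g. `style['no-gutters']`, so the dash limitation is about JavaScript property access, not the loader itself.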
MicrosoftDocs/powerbi-docs
451375912
Title: Role-based security Question: username_0: I would very much appreciate exhaustive information regarding the capabilities and limitations of Power BI ability to relate to the permissions defined in SSAS roles. I suppose the following pertains to members being restricted from using entire dimensions (which seems to be supported in Power BI): "When a user has dimension security applied, the respective dimension members are not seen by the user in Power BI." When using the term 'cell' in the following, does this mean, for example, a specific member of a dimension? And this is not supported (would cause the user to be unable to connect to the cube at all?!)? If so, Microsoft should definitely work hard to provide such support, as this is one of the most important and useful aspects of restricting data access! "When a user has a cell security permission defined, where certain cells are restricted, then that user cannot connect to the cube using Power BI." --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 00aefcea-9c15-e4a5-b66b-59b6f446348e * Version Independent ID: 098101d4-0cdb-913f-1d6d-f6b74cc25426 * Content: [Analysis Services Multidimensional data in Power BI Desktop - Power BI](https://docs.microsoft.com/en-us/power-bi/desktop-ssas-multidimensional) * Content Source: [powerbi-docs/desktop-ssas-multidimensional.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/live/powerbi-docs/desktop-ssas-multidimensional.md) * Service: **powerbi** * Sub-service: **powerbi-desktop** * GitHub Login: @username_2 * Microsoft Alias: **davidi** Answers: username_1: Thanks for your comment, @username_0. This issue has been assigned to the doc author for review. username_2: Adding members for review. username_3: Hi @username_0 Cell level security is described here: https://docs.microsoft.com/en-us/analysis-services/multidimensional-models/grant-custom-access-to-cell-data-analysis-services Dimension security doesn't necessarily mean the user doesn't have access to the whole dimension, it can be just the restricted members. Further info here: https://docs.microsoft.com/en-us/analysis-services/multidimensional-models/grant-custom-access-to-dimension-data-analysis-services username_3: #please-close @username_1 Status: Issue closed
dotnetcore/WebApiClient
738192754
Title: Need to temporarily store the query string and request body generated by PathQueryAttribute and HttpContentAttribute
Question:
username_0: This would allow a signature to be computed in a later pipeline attribute. Please consider enhancing the related methods. For example, the CreateUri method in PathQueryAttribute could be changed to take an ApiParameterContext context parameter, so that the keyValues can be stored in context.Properties.
Answers:
username_1: Besides PathQueryAttribute, are there any other places that need similar functionality?
username_1: A SerializeToKeyValues virtual method has been added for it (FormContentAttribute and FormDataContentAttribute already have this method); see https://github.com/dotnetcore/WebApiClient/commit/9aec427ab81a3a233e969850603b883af0b8f643
username_0: HttpContentAttribute needs it too
username_1: HttpContentAttribute is an abstract class with no implementation
username_0: JsonContentAttribute and XmlContentAttribute then; actually I think it would be better to put it in the base class
username_1: How would JsonContentAttribute and XmlContentAttribute have keyValues?
username_0: The body needs to be saved temporarily
Status: Issue closed
username_1: Update to 1.1.3; the related functionality has been added
username_0: OK, thanks
tofukko/filter
996913412
Title: firecross.jp video playback
Question:
username_0:
### ■URL
https://firecross.jp/
https://firecross.jp/ebook/comics
etc.

### ■Description
The videos that have to be watched to earn BP cannot be played.

### ■Comment

Status: Issue closed
Answers:
username_0: https://github.com/username_0/filter/commit/e5b046e00087915a9c009df9ccd044409c8e1ee1 https://github.com/username_0/filter/commit/3b78201d4b778c1175ab1e6c983ae11b3c44e19f
conda-forge/iminuit-feedstock
357101869
Title: Re-activating support for win-32?
Question:
username_0: I noticed that iminuit-1.2 was also built for the win-32 platform, while iminuit-1.3.2 is only built for *-64 platforms. Should we still support win-32?
Answers:
username_1: I suggest dropping win-32. There are already close to no downloads at https://anaconda.org/conda-forge/iminuit/files , and the number of users that want it now might be zero.
username_0: Ok, that's fine with me.
Status: Issue closed
xforce/anno1800-mod-loader
1073327773
Title: Update
Question:
username_0: It would be nice if you would finally release an update, because many people have the problem that mods can no longer be loaded and the game does not even start. So it would be very, very kind if you would do something about it. Many thanks.
Answers:
username_1: This can be closed, I guess.
Status: Issue closed
lasso-js/lasso
124404523
Title: Example custom bundle writer
Question:
username_0: The README mentions the ability to "Upload bundles to a resource server that backs a CDN instead of writing them to disk". An example of implementing these hooks would help immensely. @username_1 suggested opening a ticket on gitter.
Answers:
username_1: Hey @username_0, I'll go ahead and put some content here, but it needs to be cleaned up before it gets moved to the docs.

------

__Step 1) Create a Lasso.js plugin that configures Lasso.js to use a custom bundle/resource writer:__

```javascript
var myWriter = require('./my-writer');

module.exports = function(myLasso, pluginConfig) {
    myLasso.config.writer = myWriter;
    ...
};
```

__Step 2) Implement your custom writer:__

```javascript
function uploadBundle(bundle, reader, callback) {
    var readableBundleStream = reader.readBundle();
    var bundleName = bundle.name;
    var contentType = bundle.contentType; // Either 'css' or 'js'
    ...
}

function uploadResource(path, reader, callback) {
    var readableResourceStream = reader.readResource();
    ...
}

module.exports = {
    /**
     * This will be called for JS and CSS bundles
     */
    writeBundle: function(reader, lassoContext, callback) {
        var bundle = lassoContext.bundle;
        uploadBundle(
            bundle,
            reader,
            function(err, result) {
                if (err) {
                    return callback(err);
                }

                callback(null, {
                    url: result.url
                });
            });
    },

    /**
     * This will be called for front-end assets such as images, fonts, etc.
     */
    writeResource: function(reader, lassoContext, callback) {
        var path = lassoContext.path;
        uploadResource(
            path,
[Truncated]
};
```

__Step 3) Register the plugin when initializing Lasso.js:__

```javascript
require('lasso').configure({
    plugins: [
        {
            plugin: require('./my-plugin'), // the plugin created in Step 1
            enabled: true
        }
    ]
    ...
});
```

------

Hopefully that is enough to get you started. Please let us know if you have questions or get stuck.
username_2: Perhaps this helps too https://gist.github.com/username_2/e77e495b0c21a64f811a9c90f004001a
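Since the body of `uploadBundle` is left elided above, here is a minimal sketch of one possible shape for it. The `cdnClient` module, its `put(key, stream, options, callback)` signature and the returned `result.url` are hypothetical placeholders rather than anything provided by Lasso.js; a real implementation would substitute its own upload client.

```javascript
// Hypothetical sketch only: the CDN client below is a placeholder module.
var cdnClient = require('./my-cdn-client');

function uploadBundle(bundle, reader, callback) {
    var readableBundleStream = reader.readBundle();
    var key = 'bundles/' + bundle.name;

    // Stream the bundle contents to the (assumed) CDN-backed store.
    cdnClient.put(key, readableBundleStream, { contentType: bundle.contentType }, function(err, result) {
        if (err) {
            return callback(err);
        }

        // The writer only needs a URL back so the page can reference the bundle.
        callback(null, { url: result.url });
    });
}

module.exports = uploadBundle;
```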
MicrosoftDocs/powerbi-docs
638853665
Title: RLS on app not working
Question:
username_0: I have tried many times, but the part where you set up RLS and say that, by updating/sharing the app, each user can only see their own data (according to the RLS) just isn't working for me. At first I used some DAX code to extract the last part of our company email address, which was then used to filter the data on company. But after much trying I switched to the straight email-to-company link as in the example (user table), and it still didn't work.

---
#### Document Details

⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*

* ID: f70ccdae-1552-2465-be6f-d47cd260ad8e
* Version Independent ID: 1ffea5bb-64b0-2661-801e-9eec46e6eec1
* Content: [Distribute Power BI content to external guest users using Azure Active Directory B2B - Power BI](https://docs.microsoft.com/en-us/power-bi/guidance/whitepaper-azure-b2b-power-bi#case-study-building-a-bi-portal-using-power-bi--azure-ad-b2b--step-by-step-instructions)
* Content Source: [powerbi-docs/guidance/whitepaper-azure-b2b-power-bi.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/live/powerbi-docs/guidance/whitepaper-azure-b2b-power-bi.md)
* Service: **powerbi**
* Sub-service: **powerbi-service**
* GitHub Login: @username_1
* Microsoft Alias: **davidi**

Answers:
username_1: Thanks for your comment, @username_0, and sorry for the trouble you're encountering. This looks like a support request rather than documentation feedback. You might find a solution in the Power BI Community, found at https://community.powerbi.com. If not, you can get support by going to https://support.powerbi.com.
Status: Issue closed
chonla/timer
386722999
Title: [Feature Request] Add sound when the timer ends
Question:
username_0: Sorry, maybe it's because of how I use Bayo Timer. I always connect via HDMI, and I might have forgotten to switch the audio output from HDMI to another device.
Answers:
username_1: This feature has already been implemented.
Status: Issue closed
username_0: Or it could be any feature that draws more attention when time is up.
username_1: Please open another FR.
lervag/vimtex
470782633
Title: [docs-update request] Provide a more direct link to `latexrefman` in the documentation Question: username_0: In the documentation page for `vimtex`, I notice that the link to `latexreman` is given as `https://puszcza.gnu.org.ua/projects/latexrefman/` This is in-fact the GNU project-tracker page, which includes a lot more "meta-info" than the actual Latex-help contents of `latexrefman`. It would be more helpful to readers if the link was updated to directly point to the actual documentation, i.e. `https://latexref.xyz/` which is what most users are looking for. Answers: username_1: Thanks! This is resolved by #1432 and my recent commit. Status: Issue closed username_0: You are awesome! Thanks @username_1 username_1: Thanks! :)
enarx/enarx
540284070
Title: Open Source Summit North America 2020 Question: username_0: There's also Embedded Linux Conference at the same time/place, but it seems less obvious a conference for Enarx. **Dates** - CFP opens: 17 December 2019 - CFP closes: 23:59pm PST on Sunday 16 February 2020 - CFP results: Monday 23 March 2020 - Slides Due Date: Monday 15 June 2020 - Event: 22-24 June 2020 **Location** W Marriott Austin, Texas, USA **CFP** https://events.linuxfoundation.org/open-source-summit-north-america/program/cfp/ Answers: username_1: Title: Running sensitive workloads on untrusted hosts: Project Enarx Presenters: <NAME> & <NAME> Abstract: There are many sensitive workloads that you shouldn't entrust to a public cloud to run, or may even have concerns about running on on-premises systems. Trusted Execution Environments (TEEs) such as Intel's SGX and AMD's SEV provide a great opportunity to help secure these workloads, but they're not easy to use. Enarx is a project which aims to make it simple to deploy workloads to a variety of different TEE technologies in the cloud, on premises or elsewhere, whilst ensuring that your application workload is as secure as possible. It is written in Rust, and provides a WebAssembly (WASI) runtime. This session will outline the various parts of the sensitive workload deployment problem and how Enarx is architected to solve them, including attestation, a key pain point when considering deployment on TEEs. It will also include a demo of the current state of Enarx and talk about opportunities for involvement in the project. Enarx is a fully open source project, under the Apache 2.0 license, and a part of the Confidential Computing Consortium, a Linux Foundation community project. Differences from previous version: - new code, architecture, demo - accent on open source & ecosystem Status: Issue closed
aws/aws-sdk-ruby
253976249
Title: No Changelog for version 3 Question: username_0: Version 3.0.0 was released on [Rubygems](https://rubygems.org/gems/aws-sdk) yesterday, but there's no changelog entry for it and `master` on this repo still points to [v2.10.35](https://github.com/aws/aws-sdk-ruby/blob/master/VERSION). Can we get an update on what's going on, and where the code for version 3.0.0 is? Answers: username_1: After checking some branches it seems to be located in this branch: https://github.com/aws/aws-sdk-ruby/tree/code-generation username_2: We're moving V3 to the master branch soon, we have to coordinate that change with some internal tooling. For now V3 lives in code-generation, correct. username_3: No release tag either. username_0: Thanks for the update. Would be awesome if you could add the above details to the changelog, or a GitHub release - at the moment anyone doing a bundle update without a pinned version will be getting V3 but won't be able to find details of what's changed / currently happening easily. (My particular interest is that I run [Dependabot](https://dependabot.com) which automatically creates PRs to bump dependencies - we've had a few people ask if there's been an error on our side because they haven't been able to find any details of the new version in this repo!) username_2: Release tags are a more interesting issue now that we have modularized into 100+ gems. `aws-sdk` will probably be on `3.0.0` for a long time, it's the underlying service gems which will be updating. I'm open to ideas about how to add dozens of new GitHub releases/tags to this repo without it getting overwhelming. The CHANGELOGs in the `code-generation` branch are current and will soon be in `master`. Similarly, the V2 CHANGELOG will continue to update once it has moved to a branch. username_0: Interesting. Release notes aren't going to work for you with the above setup, but you should be able to get Rubygems to point to the right changelogs, which will help a lot - I'll create a PR for that now. username_2: The use case you have will be interesting with V3. We actually will be having hundreds of CHANGELOGs, and users will be migrating to only using a subset of service gems relevant to their use cases. It's unfortunately impractical for us to create repos for each gem, so a "single" CHANGELOG doesn't fit well going forward. Happy to discuss what we do have, or if there are ways to make our multiple changelogs easier to consume. username_0: Got a solution for you on that one: https://github.com/aws/aws-sdk-ruby/pull/1585 username_2: Yep, that looks great. We're going to take on a task to add this to our generation process. username_2: Version 3 is now on the `master` branch. Leaving open while we work on your PR. username_4: The PR #1586 update is merged, will be out in the next release : ), closing Status: Issue closed username_0: Nice, thanks!
tastybento/ASkyBlock-Bugs-N-Features
221528343
Title: Startup Error
Question:
username_0: Sorry for my bad English, I'm from Germany.
Answers:
username_1: Update your server to a newer version and that will fix the problem.
username_0: Thanks, which version should it be? It's 1.9 now.
username_1: Well, I'd upgrade to 1.11.2, but if you really want to stay on 1.9.x for some reason, at least update to 1.9.4. That should fix it. There were bugs in 1.9 and this potion class was missing.
username_0: Thanks so much, it works. I've updated to 1.9.4.
Status: Issue closed
Fuyukai/OWAPI
344798515
Title: Can't start the application Question: username_0: Hello, I cannot start the application on my Ubuntu VPS. At first I would use the new version, but this gave me an error, that its need the Python 3.5.3 version (I have 3.5.2 installed) and so I have to used an older version, but there I got an error: Warning: activate_this.py not found. Your environment is most certainly not activated. Continuing anyway... INFO:asphalt.core.runner:Running in development mode 2018-07-26 11:04:45,047 - [INFO] asphalt.core.runner -> Starting application 2018-07-26 11:04:45,126 - [ERROR] asphalt.core.runner -> Error during application startup Traceback (most recent call last): File "/opt/OWAPI2/OWAPI/venv/lib/python3.5/site-packages/asphalt/core/runner.py", line 111, in run_application event_loop.run_until_complete(coro) File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete return future.result() File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result raise self._exception File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step result = coro.send(None) File "/usr/lib/python3.5/asyncio/tasks.py", line 392, in wait_for return fut.result() File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result raise self._exception File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step result = coro.send(None) File "/opt/OWAPI2/OWAPI/owapi/app.py", line 133, in start from asphalt.redis.component import RedisComponent File "/opt/OWAPI2/OWAPI/venv/lib/python3.5/site-packages/asphalt/redis/component.py", line 6, in <module> from aioredis import create_reconnecting_redis ImportError: cannot import name 'create_reconnecting_redis' 2018-07-26 11:04:45,130 - [INFO] asphalt.core.runner -> Stopping application 2018-07-26 11:04:45,132 - [INFO] asphalt.core.runner -> Application stopped 2018-07-26 11:04:45,176 - [ERROR] asyncio -> Unclosed client session client_session: <aiohttp.client.ClientSession object at 0x7f46fa551860> Maybe you can help me here. Answers: username_1: Hi username_0, I think you had a error while setting up the venv. Please delete the venv folder and retry setting up the venv. username_0: Hi username_1, I got the same error message. It's very interesting. I have another VPS and there it works (the same OS Template was used), but this VPS will be deactivated soon :( username_0: Hmm ok, I think there is a problem with the "asphalt" plugin. I have copied the "site-packages" folder from the other machine and replaced it. Now it works! username_2: The aioredis version is pinned in the Pipfile. If you installed it using `pipenv install` on 3.5.3 it should've installed with the correct requirements.
wso2/analytics-apim
299268989
Title: [Alerts] [HealthAvailability] Streams perMinuteRequests and perMinuteResponses get events with a wrong timeout window
Question:
username_0: **Description:** In APIMAnalytics-RequestSummarizer-RequestSummarizer-realtime1.siddhiql, the streams perMinuteRequests and perMinuteResponses receive events from a uniqueExternalTimeBatch window whose timeout value is smaller than the window length. This occurs when events are inserted into the perMinuteRequestsTemp stream from the perSecondRequestsWithUniqueId stream. The time window is defined as 60 seconds but the timeout value is 10 seconds, so events are flushed into the perMinuteRequests stream multiple times within 60 seconds and the 60-second window is effectively ignored. [The same happens when events are inserted into the perMinuteResponsesTemp stream from the perSecondResponsesWithUniqueId stream.]
This bug affects the triggering of the health-availability alert based on request/response count percentiles: the pattern count can get incremented more than once per minute, so alerts get triggered by fewer occurrences than expected.
**Affected Product Version:** 2.0.0, 2.1.0, 2.2.0
Status: Issue closed
GMLC-TDC/helics_benchmark_results
547032107
Title: Spikes in graphs
Question:
username_0: Several of the benchmark results have spikes in them, presumably due to two values existing at the same x-axis location (see attached). We need to investigate what is causing the spike in the graph and fix the graphing code or (more likely) the parsing of the source data to remove/clean up the error. ![Zu60n_aUZF6_echo_inprocCore](https://user-images.githubusercontent.com/8697189/72005246-04cba600-3202-11ea-9c03-bc9e986d14a4.png)
Answers:
username_0: As part of investigating #7, I realized the key benchmark files ("bmk_...") are getting processed alongside the others. I'm guessing that there are results that overlap when combined into a single dataset for graphing. I'll keep investigating.
username_0: The spikes in the graphs were the result of including key benchmark results in the general dataset. When this was corrected as part of cleaning up #14, the spikes in the graphs went away.
Status: Issue closed
username_0: The latest run on 2020-01-09 bScQ6 shows vertical spikes in the messageSend2 graph.
khval/mplayer-amigaos
69677206
Title: Add support for external codecs Question: username_0: ``` A couple of places where binary codecs could be found: https://player.helixcommunity.org/2005/downloads/ http://www.mplayerhq.hu/design7/dload.html (Binary Codec Packages) Could be CoreAVC supported as well? http://code.google.com/p/coreavc-for-linux/ ``` Original issue reported on code.google.com by `<EMAIL>` on 28 Dec 2010 at 12:40 Answers: username_1: Not going to do it; and no one else is going to do it. Status: Issue closed
clangd/clangd
607011155
Title: Can you add <bits/stdc++.h>?
Question:
username_0: I know it is a GCC-specific header, but it is used in competitive programming. When I use it, the include is flagged as an error and every function from it is also flagged as an error.
Answers:
username_1: This is part of the libstdc++ version of the C++ standard library. clangd doesn't ship the C++ standard library but rather uses one found on your system (generally, the same one clang uses). If you're on Mac or FreeBSD, libc++ is the default and you'll need to install libstdc++ and add `-stdlib=libstdc++` to your compilation database. If you're on another platform, maybe libstdc++ isn't properly installed. Can you compile things with clang or gcc?
username_0: I can compile with clang and also gcc. But it says that the library is unsupported in clangd.
username_1: I've verified that this header *can* work with clangd when libstdc++ is correctly installed (it works out-of-the-box on a fairly standard Debian system). There's not really anything to be done on the clangd side here (we're unlikely to get involved in distributing the standard library, and if we did it would be libc++, which does not include this header). You'd need to provide a lot more information to troubleshoot here, but this isn't a support forum in any case. I do hope you manage to get it working.
Status: Issue closed
username_2: Perhaps the issue is opening the `<bits/stdc++.h>` file itself in the editor and getting spurious errors? I've reported that previously in #136.
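As a sketch of what "add `-stdlib=libstdc++` to your compilation database" can look like for a small competitive-programming setup without a build system, a `compile_flags.txt` file in the project root hands clangd one flag per line. The `-std=c++17` line is an assumption and should match whatever standard the code actually targets.

```
-std=c++17
-stdlib=libstdc++
```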
dotnet/roslyn
70135301
Title: Add support for marking analyzers with supported project flavors Question: username_0: DiagnosticAnalyzerAttribute takes arguments for supported languages for which to enable the owning DiagnosticAnalyzer. This allows IDE to selectively load an analyzer assembly only if it has a supported analyzer for current project's language. This works fine. However, we should add extra functionality to allow marking supported project flavors for analyzers. This will enable writing analyzers target specific project flavors for the supported languages. For example, then one can then write a shared project VSIX analyzer, which is only loaded for shared projects. Answers: username_1: @username_0 Except that shared projects aren't projects at the Roslyn level. All we see are the code files linked in from the shared project. username_2: I don't think teaching analyzers about a specific project system concept is a good idea. Is there a mechanism for filtering this way for regular MEF components? If not, I don't think we should add it for analyzers either. username_3: @username_2 @username_0 I think forcing every diagnostic analyzers to must be host agnostic is dictating on our own decision rather than responding to customer need. I don't believe every diagnostic analyzer wants to be host agnostic or even care to be run on command line. some of them, just want to provide extra information as diagnostic in IDE and only care about VS. (they probably even require some VS service to function) for such people, we do have DocumentDiagnosticAnalyzer. (which is currently internal). we can think of exposing it to public. and for DocumentDiagnosticAnalyzer, let author annotate it with more metadata such as project flavor. username_0: Why do we need to tie this metadata just to document analyzers? You can as well have any metadata applied to regular diagnostic analyzers via workspace layer attributes and every host can decide whether or not the metadata applies to it. For example, if we had an attribute which takes strings for supported project file extensions, this would be respected when analyzer is installed as VSIX analyzer for live IDE analysis. If analyzer is being executed for command line builds, such attribute will be ignored (why would you add an analyzer reference to a project file or on command line if you want to never execute it for that compilation?). If we were to implement this, we should allow host specific metadata on analyzers without forcing the analyzer itself to be tied to a host. username_0: @username_2 We are not tying up or teaching analyzers about a host. This would add ability to add host specific metadata to analyzers, so it can be configured/filtered in different ways for different hosts. Such workspace specific metadata will be ignored by command line analyzer host, but will be respected for VSIX analyzers in IDE. username_0: @username_1 shared projects was a bad example, I was referring more generally to different project flavors of C# and VB projects. username_3: @username_0 sure, we could go that way, but not sure why we would go that way if there is a way to get VS services for IDE only analyzers. and every host deciding what to do with some random metadata on diagnostic analyzer seems just a way to say behavior will become random... username_0: @username_3 Not sure how it would be random, the attribute/metadata will clearly be tied to a specific set of hosts. 
Our 2 primary vehicles to enable analyzers right now are: (a) Project OR compilation specific analyzer reference and (b) Host specific analyzer reference, e.g. VSIX installed analyzers for VS session. This metadata will only apply for the later mode. Approach you are suggesting seems too restrictive. It will force an analyzer, which is fully capable of being enabled via either (a) OR (b), to become only capable of being enabled via (b). An analyzer shouldn't be forced to become a host only analyzer just so that it can be configured for specific hosts. It seems extremely useful to retain the ability to still allow it to execute in command line builds as a project/compilation specific analyzer, even it wants additional configuration when enabled as a host analyzer. username_3: @username_0 ya, point was making it restrict to a host the metadata has meaning to. didn't want people to use the metadata as property bag to pass random data that each host interpret it differently. also, if an analyzer require specific host, I assume they do so since it requires host specific information. and I wanted to guide them to right pattern rather than create its own pattern like our own rename diagnostic. I bet nuget diagnostic analyzer also require some vs specific info to create diagnostic. username_0: How about I want write a simple symbol analyzer that works for just project-k projects? Are you suggesting that such analyzer can either (a) work only in IDE with project-k specific flavor filtering OR (b) work in IDE + command line, but then can't apply project flavor filtering. I think this is a too restrictive model. I would want such an analyzer to work in both IDE and command line, so can't be a Document analyzer AND also don't want it to run on non-project k projects when installed as a host (VSIX) analyzer in IDE. This is perfectly legitimate use case and our implementation shouldn't prevent it. username_3: @username_0 are you saying in compiler, there is a way to figure out something is project k? username_0: No, but why would you specify a project-k specific analyzer reference on command line/project file for a non project-k project? username_3: @username_0 ya, that is why I think it would be just simpler for people to use host specific diagnostic analyzer and have metadata there only. then, people who don't care about it doesn't need to see those or know it exist. if we made it either really visible in common DiagnosticAnalyzer, then someone who doesn't need to know about it need to figure out what that is, or if we hide it, then someone who needs it need to wonder around to figure out how to annotate those. I think it could be just simpler if we have those quite visible (and make them to think about it since it is relevant to the host they are targeting) and make it available only on the host specific analyzer. username_0: @username_3 @username_2 How about this approach: instead of adding metadata to analyzers for project flavors, we can do so for VSIX analyzer references itself. i.e. the Analyzer asset in the VSIX manifest can take attributes for project flavors, and the IDE host analyzer manager can filter such analyzer references for unsupported project flavors. This way we neither need to make analyzers know anything about project flavors, nor care about adding host specific information to analyzer metadata. Metadata that we care about for IDE host analyzers goes along with the mechanism to enable host analyzers, i.e. in the vsixmanifest file itself. 
``` <Asset Type="Microsoft.VisualStudio.Analyzer" Path="MyProjectKAnalyzers.dll" ProjectFlavors=".kproj;.xproj" /> ``` username_0: Additionally, this way I can also write flavor specific analyzers on top of DiagnosticAnalyzer, instead of DocumentDiagnosticAnalyzer, and have the same analyzer added as a project analyzer reference or installed as a VSIX. username_3: @username_0 that seems good idea if that is only metadata we need for host specific analyzer. I think we are talking about slightly two different thing. host specific and host agnostic analyzer and project flavor. when I say host specific, I assume it would require some host specific service other host doesn't have. so, using it as project analyzer will cause issue since those service will not exist in other host. that is why I want host specific analyzer to be really explicit. and people can't confused and use it under different host. username_0: Agreed, but the current requirement of filtering based on flavors is when the analyzer executes in IDE host, and is needed for all analyzers, regardless of whether they are host specific or agnostic. So the mechanism we provide should not tie with whether analyzer is DiagnosticAnalyzer or DocumentDiagnosticAnalyzer. username_3: @username_0 project flavor sounds like very VS host specific though. is there any other host that has project flavor concept? username_0: Yes, project flavor and filtering based on that is host specific, but that shouldn't require the analyzer to be a host specific Document analyzer. Let's chat offline as we are looping over on the same point :) username_2: This was discussed and we decided against doing it. Status: Issue closed
craftcms/cms
593772686
Title: Config option for disabling minification of JS code appended using the {% js %} tag
Question:
username_0: There are sometimes situations where having minified JS is detrimental, for example while debugging errors. Right now we can disable minification via the app.php file, but it would be quicker and more comfortable to do that via the general.php config file.
Status: Issue closed
Answers:
username_1: I forgot this was already the case when we talked on Discord. You can use the [useCompressedJs](https://docs.craftcms.com/v3/config/config-settings.html#usecompressedjs) config setting to disable JS minification.

```php
'useCompressedJs' => false,
```
js-200/js-200-examples
454508025
Title: Beginner 56) Understanding the module system: there seems to be an error in the example code
Question:
username_0:

```
import { hello } from './hello.js';
hello('es6 module');

export function hello(name) {
  console.log(`hello ${name}`);
}
```

I tried running it in Visual Studio Code and in Chrome. Visual Studio Code gives

```
import { hello } from './hello.js';
^
SyntaxError: Unexpected token

export function hello(name) {
^^^^^^
SyntaxError: Unexpected token export
```

so it throws these errors, and in Chrome no result is displayed at all.
Answers:
username_1: For the Beginner 56 example, please run index.html with a live server, as noted in the Note. This came up because the explanation there is not detailed enough; we will improve this part in the second edition. Thank you.
actions/runner
767628160
Title: HOME is overridden for containers Question: username_0: **Describe the bug** If passing the `HOME` environment variable to a container configuration, it will get ignored and set to `/github/home` anyway as per https://github.com/actions/runner/blob/4e85b8f3b7493aa1adf9dd0a574a9fa2bf1add71/src/Runner.Worker/ContainerOperationProvider.cs#L258 **To Reproduce** A workflow with ```yaml jobs: main: runs-on: ubuntu-latest container: image: 'rokmoln/sf-ubuntu-bionic-common:0.2.80' options: '--user sf:sf' # this user's home is /home/sf env: HOME: /home/sf # tried to change HOME from /github/home to /home/sf. doesn't work steps: - shell: bash run: | echo $HOME ``` will echo `/github/home` instead of `/home/sf`. **Expected behavior** Expected HOME to be left as is, as `sf` user's HOME, or at the very least to be allowed to enforce it via the `env` dictionary. ## Runner Version and Platform 2.274.2 ## What's not working? N/A ## Job Log Output N/A ## Runner and Worker's Diagnostic Logs N/A Answers: username_0: If I may suggest a possible solution: * set `HOME` only if not already set * set `GITHUB_HOME` or `GITHUB_RUNNER_HOME` in order to allow devs not to hardcode `/github/home` in case there's a use-case to access the runner's home from inside a container. username_1: 1. Create the `/github` directory 2. Create a Symlink with `$HOME` See: ```bash mkdir -p /github ln -s /root $HOME # Create Symlink /github/home ``` username_2: I created a bit similar issue about issues caused by setting `HOME`: #1146. username_3: Overriding `$HOME` causes all kinds of trouble, to no obvious benefit... username_4: Hola gracias salientes desde Los Mochis Sinaloa México. Soy mexicano y de familia numerosa saludos again asta luego.bestamosven línea. username_5: Also, at build the `HOME` is defined by image/distribution to `/root` for example. Then when running, the `HOME` is overwritten by GitHub to `/home/github`. This is causing issues with some tools installed in image. username_0: I can update this thread for future readers with my current workaround, that doesn't require messing with the filesystem as in https://github.com/actions/runner/issues/863#issuecomment-757230259 , but does require `sudo`, maybe `tee`. Add a first step to the job like ```yml - shell: bash run: 'echo HOME=/home/sf | sudo tee -a $GITHUB_ENV' ``` FWIW the `sudo tee` is needed because `$GITHUB_ENV` will be owned by another user. If you want to skip `tee`, you can run `sudo sh -c "echo HOME=/home/sf >> $GITHUB_ENV"` Following steps will start with the expected HOME, as well as source resource files correctly, like `~.profile` `~/.bash_profile` etc username_0: On another topic, driven by https://github.com/actions/runner/issues/863#issuecomment-997443323 "no obvious benefit", I have started thinking what can such a benefit be and I've come with: 1. persist state between container startups/shutdowns 1. persist state with host 1. persist state between containers 1. control the environment via `.profile`, `.bash_profile`, `.curlrc`, etc Let's start from the bottom, with 4. `/github/home` is empty. So the only way to reason around that is if Github wants NO environment, in case they found cases where settings in resource files create unexpected behaviour. Sounds far fetched. Regarding 3 - there's a specific `volumes` setting. "You can use volumes to share data between services or other steps in a job." Interesting formulation. First of all, **did anyone see if Github overrides $HOME for services as well?** I didn't. 
Secondly "other steps in a job" ?! I have just ran a test, and Github doesn't shuts down the container after each step. State is maintained between steps, and furthermore, action steps (i.e. steps with `uses:some/action@ref`) also run in the same container, having access to the same state. Regarding 2. Which host? Either you run the job in a container or in a real/virtual machine. Regarding 1. As mentioned for 3., state seems to be maintained. There's one particular case that I haven't tested, because I don't know where to look to understand how it works, how it should be invoked - docker hub action https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#example-using-a-docker-hub-action . Like ok, you reference an image hosted by hub.docker.com, what then? The docs reference an alpine image. How do you run shell commands? Or do they mean that you can package a docker image with a `action.yml` file in the WORKDIR, and they would run that. I really don't know how that works. But it could be that they start another container based on that image, set HOME to the same /github/home mount, same with GITHUB_WORKSPACE and maybe others, and thus share state though your job and the action step run isolated otherwise. Summary: * ❓ couldn't find a benefit * 🙏 can someone enlighten me if a service also has $HOME mounted as /github/home ? * 🙏 can someone enlighten me how a docker hub action step is intended to be used, how it works? ** 🚨 THE BEST thing would be if someone at Github can spend some minutes just to clarify WHY? I'm starting from the premise that there is a benefit. Something that evades some of us. If we understand why, then maybe we can even start using Github in different/better ways.** username_0: ping @TingluoHuang username_0: As the comment says, there's no citation, no documentation around that, AND it also doesn't hold in reality. I ran a test here https://github.com/ysoftwareab/yplatform-test/actions/runs/1783086868 . test3 jobs depends on test1 and test2, and test3 doesn't have access to the test1 and test2 files, neither in the real $HOME, neither in /github/home, neither in $GITHUB_WORKSPACE. Afaik, one uses upload-artifacts and download-artifacts actions for sharing files between jobs. Just wanted to be exhaustive. username_0: Looking at the runner's sourcecode, there's another mention of this mount https://github.com/actions/runner/blame/9a829995e02d2db64efb939dc2f283002595d4d9/src/Runner.Worker/Handlers/ContainerActionHandler.cs#L174 I ran out of time so I cannot test this to be sure, but **I guess the whole idea of overriding $HOME is to share state with a "docker container action"** https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action 😞 username_0: @username_6 why do you say it is working as expected? Where is it documented that "variables CANNOT be overridden" is normal behaviour? As far as I, and others on this thread, can see - this is a **BUG** . A step can set new, but cannot override previous variables. That's some peculiar "expectation". username_6: Hi @username_0, We are experiencing similar problems with the [Issue 434](https://github.com/actions/runner/issues/434), which is related to how we mount the fs to have those files re-used. After the discussion with the team, we marked it as a feature that we should work on in the near future. That problem is also very important to us. 
This issue is related to the HOME environment, which is used to share the state between the host system where the runner is executing, and the containerized application. Since issue 434 is considered a feature, I don't see why this issue would not fall under the same umbrella and should be picked up together. username_0: @username_6 Apologies. My [comment](https://github.com/actions/runner/issues/863#issuecomment-1083302504) above was wrong, because I thought we were talking about https://github.com/actions/runner/issues/789 (not an excuse but I have a cold and I wasn't thinking right). I will read up on #434 and come back if needed. Thanks and once again sorry for the noise. username_6: @username_0, no problem 😊. I misunderstood the issue [789](https://github.com/actions/runner/issues/789), so I re-opened it once I got it. There is a third party PR now related to that issue so it should be fixed as soon as it is merged.
PeculiarVentures/PKI.js
199369905
Title: Warning: WebCrypto is not available
Question:
username_0: I always get the warning "WebCrypto is not available" from this code:

```js
var crypto = org.pkijs.getCrypto();
if (console.log) console.log(crypto);
if (typeof crypto === 'undefined') {
    warn('WebCrypto is not available');
    // Don't show any signature information
    this._capability.resolve(emptyResult);
    return;
}
```

Answers:
username_1: Yesterday the PKI.js "master" branch was changed to the V2 code. The V2 code is incompatible with the V1 code of PKI.js. If you do need PKI.js V1, then please specify the correct NPM package version when installing PKI.js.
Status: Issue closed
username_0: thank you very much Yury... I'll try...
username_0: Hey Yury, I can't find the V1 version of PKI.js on npm... :(
Ensembl/ensembl-vep
467886375
Title: VEP filter --tab vs --vcf count results differ
Question:
username_0: Hello VEP team, I am enjoying using VEP for annotation, including its custom annotations and plugins. Amazing tool. For testing purposes, I annotated ensembl-vep/examples/homo_sapiens_GRCh38.vcf with some custom annotations, in two formats (--vcf and --tab):

- gnomAD (gnomad.genomes.r2.0.1.sites.GRCh38.noVEP.vcf.gz)
- kaviar (Kaviar-160204-Public-hg38-trim.vcf.gz)
- goNL (multisample.parents_only.info_only.vcf.gz)

I am trying to count the rare variants (AF < 0.01) using filter_vep as follows:

```
filter_vep \
--force_overwrite \
--input_file inVCF \
--output_file outVCF \
--filter "gnomAD_AF < 0.01" \
--filter "kaviar_AF < 0.01" \
--filter "goNl_AF < 0.01"
```

However, I noticed that I get two different counts from the two formats.
The TAB format gives me: AF < 0.01: 71 variants
The VCF format gives me: AF < 0.01: 7 variants
What might be the reason for that? Thank you in advance.
Answers:
username_1: Hi
We are pleased to hear you are enjoying using VEP for annotation. We appreciate feedback from users.
The VCF and tab outputs will not contain the same number of lines if a variant overlaps more than 1 transcript. filter_vep will then also report a different number of matched lines. Does the number of matched lines differ, or the number of variants?
Information on VEP output formats is at: https://www.ensembl.org/info/docs/tools/vep/vep_formats.html#output
In the VCF output, each prediction for a given variant is separated by the character "," in the CSQ INFO field (e.g. when a variant overlaps more than 1 transcript). The results are on a single line in the VCF output.
In the tab output, each transcript is on a separate line. Filtering tab output could therefore return more matched lines.
Helen
username_0: Hello, Thank you very much for the quick reply. Indeed the number of (unique) variants in the TAB is the same as the number of variants in the VCF format. Cedrick
Status: Issue closed
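To illustrate Helen's explanation with a made-up variant that overlaps two transcripts (all identifiers and columns below are invented and heavily simplified), the VCF output keeps everything on one line with two comma-separated CSQ entries, while the tab output spends one line per transcript, so filter_vep counts one matching line in the first case and two in the second.

```
# VCF output: a single line, two predictions joined by "," in the CSQ INFO field
1  12345  var1  A  G  .  .  CSQ=G|missense_variant|ENST00000000001|...,G|intron_variant|ENST00000000002|...

# tab output: one line per overlapped transcript
var1  1:12345  G  ENST00000000001  missense_variant  ...
var1  1:12345  G  ENST00000000002  intron_variant    ...
```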
vais-ral/CCPi-Framework
835899194
Title: ADMM axpby
Question:
username_0: This needs handling properly with a `use_axpby` flag, as PDHG does: https://github.com/vais-ral/CCPi-Framework/blob/61a0286b6947592754acec139dab6ed999b29cc8/Wrappers/Python/cil/optimisation/algorithms/ADMM.py#L112
Answers:
username_1: at least remove the comment line
username_0: https://github.com/vais-ral/CCPi-Framework/blob/ca1147e8901abb9cb81ebd40190d3fc814d18b13/Wrappers/Python/cil/optimisation/algorithms/ADMM.py#L113-L118 this needs changing to use axpby
Status: Issue closed
username_1: closed by #825
thkl/homebridge-homematic
207078987
Title: Variable as Window or Door?
Question:
username_0: Is it possible to expose a variable from my CCU2 as a window or a door? I tried to add the name of the variable to the window/door list in my config.json, but it seems that's not the right way. ^^ Is there another solution?
Status: Issue closed
Answers:
username_1: you can add a boolean variable to HomeKit. Just put the name of that variable into the section "variables" of your config.json
username_0: Thanks for your answer. I don't think I described my problem clearly enough. 😕 Sure, adding variables to the "variables" section works well! But in the Home app I'm only able to choose fan, light or switch as the type of this new "device". My problem is that I haven't found a way to change the type to window, door or garage opener. In my case the variable represents a self-made garage opener and I want to say to Siri "Open garage door" instead of "Turn garage door on". I know that's just a small difference, but it makes it harder to use for non-nerds. 😉 I tried to add the name of the variable to the window/door list in my config.json, but it seems that's not the right way. Is there another solution?
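For reference, the "variables" section username_1 mentions is just a list of CCU variable names inside the platform block of config.json. In the sketch below only the `variables` key reflects what was said in this thread; the platform name, the bridge fields and the variable name itself are guesses for illustration, so check the plugin's README for the exact schema.

```json
{
  "platform": "HomeMatic",
  "name": "HomeMatic CCU",
  "ccu_ip": "192.168.0.10",
  "variables": ["GarageDoor"]
}
```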
daiki510/Subsc_app
432035047
Title: Add popover tooltips to each icon link
Question:
username_0: Some icons are hard to understand on their own, so show an explanatory note on hover.

- [ ] Related links
- [ ] Detail button
- [ ] Add button
- [ ] User edit

Answers:
username_0: Add the following inside the tag.

```html
'data-toggle' => "popover",
'data-content' => "ユーザー編集"
```

Add this to the JS. Note that without turbolinks:load it will only show up after a full page reload.

```js
// Popover
$(document).on("turbolinks:load", function () {
  $('.detail-info').popover({
    trigger: 'hover',
    placement: 'bottom',
  });
});
```

Status: Issue closed