repo_name: string (length 4 to 136)
issue_id: string (length 5 to 10)
text: string (length 37 to 4.84M)
aforren1/replan_finger
207640813
Title: Checklist
Question:
username_0:
- [ ] Feedback for timing
- [ ] Ensure all (relevant) data is saved
- [ ] Save truncated, easy-to-digest data separately
- [ ] Visual feedback for press/no press?
- [ ] Handle no-press case (hang until one, see finger-6)<issue_closed>
Status: Issue closed
crux-toolkit/crux-toolkit
222165667
Title: tide-search does not use the seed parameter
Question:
username_0: When tide-search is run with a fasta file, it shuffles the peptides to create decoys. Currently, this shuffling is done the same way every time, even if the --seed parameter is set. The decoys should be different if we use a different seed. I created an example, but it's too big to upload. I will email it directly to Kaipo.
Answers:
username_1: I'm guessing the seed parameter will work if tide-index is called on its own, but not if tide-search calls tide-index. tide-index gets called as part of tide-search's initialization function, and the random number generator is seeded in the same function, but only afterwards. So I think we can just rearrange the code in the initialization; here is a patch. [seed.txt](https://github.com/crux-toolkit/crux-toolkit/files/926439/seed.txt)
username_2: Yes, this seems to fix the problem, and the changes to the code look fine. Please go ahead and check this in, and resolve this issue.
Status: Issue closed
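The ordering bug discussed in this thread, consuming randomness before the seed is applied, is easy to reproduce in miniature. Below is a hedged TypeScript sketch of the same idea (not the actual crux C++ patch; `mulberry32`, `shuffle`, and `initialize` are illustrative names): the seeded RNG must be constructed before the decoy-shuffling step runs, otherwise every run produces identical decoys regardless of the seed.

```ts
// Tiny seedable PRNG (mulberry32); Math.random is not seedable in JS/TS.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), a | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher-Yates shuffle driven by an injected RNG, standing in for decoy generation.
function shuffle<T>(items: T[], rng: () => number): T[] {
  const out = [...items];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rng() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// The reported bug: the index (and its shuffle) was built before the seed was
// applied. The fix mirrors the patch's idea: seed first, then build the index.
function initialize(peptides: string[], seed: number): string[] {
  const rng = mulberry32(seed); // seed *before* any randomness is consumed
  return shuffle(peptides, rng); // decoy shuffling now honors --seed
}
```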
chingu-voyages/v13-bears-team-04
542235886
Title: Topics
Question:
username_0: Ran into this while working on the Create Community page. It shows the user suggested topics and is also searchable. Will need a model and some routes on the backend. Research shows that topics need the following:
- text (string) --name
- isRecommended (boolean) --recommended community
- communities (array) --communities tagged with this topic
- posts (array) --posts tagged with this topic
- id --auto-generated
Answers:
username_0: Merged here: #29
Status: Issue closed
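The field list above maps naturally onto a backend schema. Here is a hedged TypeScript sketch using Mongoose; the field names come from the issue, while the Mongoose stack, the ref names ("Community", "Post"), and the constraints are assumptions for illustration, not the team's confirmed implementation.

```ts
// Hypothetical Topic model matching the fields listed in the issue.
import { Schema, model } from "mongoose";

const topicSchema = new Schema({
  text: { type: String, required: true, unique: true }, // --name
  isRecommended: { type: Boolean, default: false }, // --recommended community
  communities: [{ type: Schema.Types.ObjectId, ref: "Community" }], // tagged communities
  posts: [{ type: Schema.Types.ObjectId, ref: "Post" }], // tagged posts
  // id: Mongoose auto-generates _id, covering the "auto-generated" item.
});

export const Topic = model("Topic", topicSchema);
```

A suggested-topics route could then query `{ isRecommended: true }`, and the search box could filter `text` with a case-insensitive regex.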
Sumwunn/DoomRPG
440272662
Title: Nightmare Soul, Hellmine and De-Vile don't give XP on death
Question:
username_0: Pretty much what the title says. The De-Vile still drops credits and has stats assigned to it, but it gives you no XP when killed.
Answers:
username_1: Confirmed for Nightmare Souls. Tried an Oblige-generated level with only Lost Soul enemies on Nightmare difficulty and was not able to receive a smidge of XP.
username_2: Yeah, the Nightmare Soul & Hellmine both have the MF_NOXP | MF_NOAURA | MF_NODROPS flags. The original idea behind this is that Lost Souls get spewed by Elementals, which can result in XP spam. However, I do not recall seeing any monsters spew those two guys? I'll probably take away the flag in that case, but I'll leave the others, because I don't think Souls are worthy of those flags, haha. The De-Vile, though, has no flags; so that's strange.
username_1: IIRC the Nightmare Elemental spawns those when not directly engaged with a player... or was it the Nightmare Elemental mission bosses? Well, one of those summons Nightmare Souls.
username_0: Regular Lost Souls give you XP as normal when spawned in the map, but not when spawned by Pain Elementals. Perhaps you could ensure the same is true for the other Lost Soul-type monsters?
username_2: I've made some progress on this. With ZScript I can detect whether a Lost Soul is map-based, so for anything else I can apply the appropriate flags. However, I can only do this reliably with WorldThingDied. That means I can't set the no-aura flag on the fly; it can only be statically defined. I think it would be best if Lost Souls had no auras whatsoever? I'm undecided at the moment.
username_1: Map Lost Souls could have auras; aura Lost Souls usually aren't a problem. It's when you get into Hell Knight territory and higher that aura monsters start to be dangerous. Anything that comes from a Pain Elemental should not have any auras, or else the Pain Elemental boss will be able to overwhelm you much more easily.
username_0: Lost Souls not having auras at all sounds fine to me, if it's too cumbersome to implement map-based Lost Souls with auras/XP/drops. As long as the behaviour is consistent across all Lost Soul-type monsters.
username_2: Hey guys, I believe I've almost cracked this issue. Now, Lost Souls other than map-based ones will not give you XP, and this covers mission-based ones as well. Does anybody think that's unfair? I mean, you still get paid for completing the mission. I'm just asking because it saves me a chunk of work not to have to detect mission-based monsters.
username_3: I'm totally fine with that. The fact it was done at all is impressive to me!
username_2: Alrighty then! I've uploaded the changes into the experimental branch for now. Post back here if any oddities are discovered, as I still need to test it more.
username_2: I've tested this plenty since and have now merged the changes into master. Enjoy.
Status: Issue closed
rodrigopivi/Chatito
354800415
Title: Markdown output format
Question:
username_0: Do you have any plans to allow for a markdown output format (instead of JSON)? Currently I use Rasa's function to convert it, but it could save us the trouble if this were an option here. E.g.:

```
from rasa_nlu.training_data import load_data
load_data(json_training_file).as_markdown()
```

Answers:
username_1: Not planned; creating a custom adapter should be pretty easy, feel free to open a PR for that. For now, maintaining Rasa's JSON format is enough, I think.
username_2: Nice approach. I was looking for a way to convert it until I saw this.
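For anyone who wants to avoid the Rasa round-trip: the conversion itself is a small transform over the Rasa NLU JSON shape (`rasa_nlu_data.common_examples`) into Rasa's markdown format (`## intent:<name>` sections with `- example` lines). Below is a hedged TypeScript sketch of that transform; it is not Chatito's actual adapter interface, and entity annotations (`[value](entity)`) are omitted for brevity.

```ts
// Hedged JSON-to-markdown sketch over the Rasa NLU training-data format.
interface RasaExample { text: string; intent: string; }
interface RasaTrainingData { rasa_nlu_data: { common_examples: RasaExample[] }; }

function toMarkdown(data: RasaTrainingData): string {
  // Group example texts by intent.
  const byIntent = new Map<string, string[]>();
  for (const ex of data.rasa_nlu_data.common_examples) {
    const list = byIntent.get(ex.intent) ?? [];
    list.push(ex.text);
    byIntent.set(ex.intent, list);
  }
  // Emit one "## intent:<name>" section per intent.
  const sections: string[] = [];
  for (const [intent, texts] of byIntent) {
    sections.push(`## intent:${intent}\n${texts.map(t => `- ${t}`).join("\n")}`);
  }
  return sections.join("\n\n");
}
```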
parse-community/parse-server
227715226
Title: How to set userSensitiveFields from env.
Question:
username_0: Hello, I'm using pm2 to start parse-server and hence define all my parse-server attributes in the env of my ecosystem file. I can't figure out how to set the userSensitiveFields attribute from my ecosystem. I checked [here](https://github.com/parse-community/parse-server/blob/master/src/cli/definitions/parse-server.js#L184) and found that there's no "env" key for userSensitiveFields. Any ideas?
Answers:
username_1: +1
username_2: +2
username_3: You should pass it through a configuration file or the CLI at the moment. Also note that this won't override the default email sensitive field.
username_2: @username_3 I'm fairly certain most of us need this specifically to override the default email sensitive field. You don't need to highlight the security risks. We have clients in the wild running queries that expect the email field to be present. For example, in my application we were using the existence of the email field to determine whether the user was anonymous (not a good idea, but it is what it is). The email field not being included breaks our app in the wild. Do you know of any way to achieve this short of hacking the codebase? Thanks.
username_3: The email field should always be present for the authenticated user, so your usage should still be OK. As discussed many times, this is not something we're willing to budge on.
Status: Issue closed
username_2: @username_3 OK, I respect the decision. Unfortunately, this does indeed affect clients in the wild when you're querying for users who are not the authenticated user. For those who need a temporary workaround until the clients catch up, this is what I've found to work. In node_modules/parse-server/lib/ParseServer.js:
`//userSensitiveFields = Array.from(new Set(userSensitiveFields.concat(_defaults2.default.userSensitiveFields, userSensitiveFields)));`
`userSensitiveFields = [];`
username_4: @username_2 Why not create a cloud function that does the same query as the client, but with the master key, and update the client to call the function instead?
username_4: @username_2 Well, if you could magically update iOS clients, you probably wouldn't be asking for help here :) As you can imagine, I was referring to releasing updates (as you did) and dropping support for older versions...
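For readers hitting the same limitation: the configuration-file route the maintainer mentions also works when parse-server is started programmatically, since `userSensitiveFields` is accepted in the options object even without an env mapping. A hedged TypeScript sketch follows; the option name comes from the issue, the surrounding options are the usual ParseServer basics with placeholder values, and middleware-style mounting matches the docs of that era rather than every later version.

```ts
// Hedged sketch: set userSensitiveFields via programmatic config instead of env.
import express from "express";
import { ParseServer } from "parse-server";

const app = express();

const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI ?? "mongodb://localhost:27017/dev",
  appId: process.env.APP_ID ?? "myAppId",
  masterKey: process.env.MASTER_KEY ?? "myMasterKey",
  serverURL: process.env.SERVER_URL ?? "http://localhost:1337/parse",
  // Not mappable from an env variable at the time of this issue. Note the
  // maintainer's point: the default "email" entry is not overridable this way.
  userSensitiveFields: ["phoneNumber"],
});

app.use("/parse", api);
app.listen(1337);
```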
dbeaver/dbeaver
358674778
Title: Incorrect indentation in SQL formatter
Question:
username_0: When I try to auto format the following statements, I get an incorrectly indented result. See below.

**example**
```sql
CREATE TABLE t_test1 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE t_test2 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE t_test3 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE t_test4 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
SELECT * FROM t_test1;
```

**Ctrl+Shift+F (default settings)**
```sql
CREATE TABLE t_test1 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE t_test2 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE t_test3 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE t_test4 (id INTEGER AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
SELECT * FROM t_test1;
```

Tested with a new installation on Windows 7 64 and Linux Ubuntu 64.<issue_closed>
Status: Issue closed
claranet/ssha
244603512
Title: SSH options
Question:
username_0: Some users will want to connect with a different username, not the current user on their laptop.

```
[ssh]
username = "rbutcher"
```

Answers:
username_0: SSH options are supported in the settings file now. Adding the ability to override any setting will be done in #9
Status: Issue closed
webpack-contrib/mini-css-extract-plugin
446063583
Title: moduleFilename option has no effect
Question:
username_0: I'm trying to use the moduleFilename option to output the extracted css in a different directory but it has no effect.

* Operating System: Windows 10
* Node Version: 8.15.0
* NPM Version: 6.4.1
* webpack Version: 4.32.0
* mini-css-extract-plugin Version: 0.6.0

### Expected Behavior

Change the name of the outputted file

### Actual Behavior

Nothing changes, the default filename option is used

### Code

https://gist.github.com/username_0/166a066360e9013b77226cae9f42babf

Answers:
username_1: It is not published right now
Status: Issue closed
username_2: @username_1 I'd love to see this feature being published. Any ETA on this?
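For anyone landing here: the explanation above is that the option existed in the docs before it shipped in a release. A hedged TypeScript sketch of how `moduleFilename` was meant to be used once published, roughly following the plugin's README of the time (the `/js/` to `/css/` rewrite is the documented-style example; the rest of the webpack config is minimal boilerplate, and the option typing is assumed):

```ts
// Hedged sketch of intended moduleFilename usage in a webpack config.
import MiniCssExtractPlugin from "mini-css-extract-plugin";

const config = {
  plugins: [
    new MiniCssExtractPlugin({
      filename: "[name].css",
      // Derive the emitted css path from the chunk name.
      moduleFilename: ({ name }: { name: string }) =>
        `${name.replace("/js/", "/css/")}.css`,
    }),
  ],
  module: {
    rules: [
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, "css-loader"] },
    ],
  },
};

export default config;
```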
NYCPlanning/db-pluto
475880612
Title: Data quality for 2 lots on num buildings and resunits
Question:
username_0: 620 Webster Ave (2033600077) should be 1 building and 123 ResUnits. New building. Here is the DOB zoning documents link: http://a810-bisweb.nyc.gov/bisweb/BScanJobDocumentServlet?requestid=4&passjobnumber=220471503&passdocnumber=01&allbin=2121025&scancode=ESHS7711893

Looks like a 2-story, 2-unit building was demo'd at 3055 Valentine Avenue (http://a810-bisweb.nyc.gov/bisweb/JobsQueryByNumberServlet?requestid=3&passjobnumber=220242342&passdocnumber=01) and an 8-story, 30-unit apartment building is proposed (http://a810-bisweb.nyc.gov/bisweb/BScanJobDocumentServlet?requestid=4&passjobnumber=220446211&passdocnumber=01&allbin=2094743&scancode=ESHS5756641). Right now the lot is vacant and no C of O's have been issued.
kamontia/qs
352165877
Title: Error handling by hard reset
Question:
username_0: Currently, the qs command cannot recover the git-rebase context when a conflict occurs. In this situation, the qs command should leave resolution to the user's manual operation.
Answers:
username_0: @username_1 Hi, I have already implemented the logic for this issue on the branch (feature/Error-handling-by-hard-reset), but I have no idea how to confirm whether this program works or not. Do you have any good ideas? Simply put, I just want to detect the conflicted state in the qs command.
username_0: @username_1 I figured out how to reproduce the situation. I will implement the test cases and evaluate them.
Status: Issue closed
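On detecting the conflicted state: git itself records an in-progress rebase in `.git/rebase-merge` or `.git/rebase-apply`, so a tool can check for those directories before (or after) invoking rebase. Below is a hedged TypeScript sketch of that check; it is illustrative only, not qs's actual implementation (qs is a Go project), and `rebaseInProgress` is a hypothetical name.

```ts
// Hedged sketch: detect an in-progress (possibly conflicted) rebase so the
// tool can stop and defer to the user's manual resolution.
import { existsSync } from "node:fs";
import { join } from "node:path";

function rebaseInProgress(repoRoot: string): boolean {
  const gitDir = join(repoRoot, ".git");
  // git keeps rebase state in one of these two directories.
  return (
    existsSync(join(gitDir, "rebase-merge")) ||
    existsSync(join(gitDir, "rebase-apply"))
  );
}

if (rebaseInProgress(process.cwd())) {
  console.error("qs: rebase in progress; resolve the conflict manually, then re-run.");
  process.exit(1);
}
```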
clangd/clangd
754340303
Title: Configuration -x flag issues
Question:
username_0: I have a weird issue when trying to apply the `-x` flag via configuration. Setup (simplified, since I can't share the exact config):

```
If:
  PathMatch: .*\.h
CompileFlags:
  Add: -xobjective-c++-header
---
If:
  PathMatch: .*\.cpp
CompileFlags:
  Add: -xobjective-c++
```

```
[
  {
    "file": "source.cpp", // relies on some objective-c++ includes
    "command": "clang++ -ILOTSOFINCLUDES source.cpp",
    "directory": "xyzrepo"
  },
]
```

I can see that the flags are appended correctly in the output log:

```
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ --driver-mode=g++ -ILOTSOFINCLUDES source.cpp -xobjective-c++
```

---

**However, clangd does not appear to parse the file as Objective-C++; I get a lot of Objective-C related errors.** The same thing happens if I append `-xobjective-c++` manually to the compdb (with the config (.clangd) disabled):

```
[
  {
    "file": "source.cpp", // relies on some objective-c++ includes
    "command": "clang++ -ILOTSOFINCLUDES source.cpp -xobjective-c++",
    "directory": "xyzrepo"
  },
]
```

Output:

```
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ --driver-mode=g++ -ILOTSOFINCLUDES source.cpp -xobjective-c++
```

---

But if I insert `-xobjective-c++` manually into the compdb **in front of the input file**, it works fine and I get no errors:

```
[
  {
    "file": "source.cpp", // relies on some objective-c++ includes
    "command": "clang++ -ILOTSOFINCLUDES -xobjective-c++ source.cpp",
    "directory": "xyzrepo"
  },
]
```

Output:

```
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ --driver-mode=g++ -ILOTSOFINCLUDES -xobjective-c++ source.cpp
```

---

**TL;DR** `-xobjective-c++` only seems to work when inserted **in front** of the input file; therefore this flag can't be added via a configuration file.
Status: Issue closed
Answers:
username_1: Thanks for the bug report. This is a known issue. We are tracking it in #555
elastic/apm-agent-dotnet
719183062
Title: APM agent logs basic authentication user/password
Question:
username_0: We had some transient network errors happen which caused the APM agent to log some errors. We have the error logging connected to our central logging system. We discovered that the error logs included fields like `ApmServerUrl`, `Url` and `EventsIntakeAbsoluteUrl`, which all included our Basic HTTP Authorization username and password in the URL. The errors were logged mainly from `CentralConfigFetcher` and `PayloadSenderV2`. These should probably be removed or obfuscated before being logged.
Answers:
username_1: @username_0 could you tell me which logs these are (of course without the username/pw)? I found 2 of those that could have this issue.

From `PayloadSenderV2`:
```
Failed sending events. Following events were not transferred successfully to the server ({ApmServerUrl}):\n{SerializedItems}
```
From `CentralConfigFetcher`:
```
Exception was thrown while fetching configuration from APM Server and parsing it. ...
```
Anything missing here? Any additional log where you have this problem? We sanitize the URL for the incoming HTTP request, and we specifically hide the username/pw in case of basic HTTP authentication on those URLs. My idea is that we'll just put the server URL through the same logic.
username_0: Seems like it's only those two. I found two unique messageTemplates in the logs (and they correspond to those you posted).

## **CentralConfigFetcher:**
```
{{{Scope}}} Exception was thrown while fetching configuration from APM Server and parsing it. ETag: `{ETag}'. URL: `{Url}'. Apm Server base URL: `{ApmServerUrl}'. WaitInterval: {WaitInterval}. dbgIterationsCount: {dbgIterationsCount}.\n+-> Request:{HttpRequest}\n+-> Response:{HttpResponse}\n+-> Response body [length: {HttpResponseBodyLength}]:{HttpResponseBody}
```
Here, both `{Url}` and `{ApmServerUrl}` contain the username and password.
```json { "_index": "removed", "_type": "_doc", "_id": "-C2ZDXUBkB0mQNlz7CgG", "_version": 1, "_score": null, "_source": { "exception": { "Message": "Resource temporarily unavailable", "RemoteStackTraceString": null, "Depth": 0, "HelpURL": null, "ClassName": "System.Net.Http.HttpRequestException", "StackTraceString": " at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)\n at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean allowHttp2, CancellationToken cancellationToken)\n at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)\n at System.Net.Http.HttpConnectionPool.GetHttpConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)\n at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)\n at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)\n at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)\n at Elastic.Apm.BackendComm.CentralConfigFetcher.FetchConfigHttpResponseImplAsync(HttpRequestMessage httpRequest)\n at Elastic.Apm.Helpers.AgentTimerExtensions.TryAwaitOrTimeout(IAgentTimer agentTimer, Task taskToAwait, AgentTimeInstant until, CancellationToken cancellationToken)\n at Elastic.Apm.Helpers.AgentTimerExtensions.TryAwaitOrTimeout[TResult](IAgentTimer agentTimer, Task`1 taskToAwait, AgentTimeInstant until, CancellationToken cancellationToken)\n at Elastic.Apm.Helpers.AgentTimerExtensions.AwaitOrTimeout[TResult](IAgentTimer agentTimer, Task`1 taskToAwait, AgentTimeInstant until, CancellationToken cancellationToken)\n at Elastic.Apm.BackendComm.CentralConfigFetcher.FetchConfigHttpResponseAsync(HttpRequestMessage httpRequest)\n at Elastic.Apm.BackendComm.CentralConfigFetcher.WorkLoopIteration()", "RemoteStackIndex": 0, "HResult": -2147467259, "Source": "System.Net.Http", "innerException": { "Message": "Resource temporarily unavailable", "RemoteStackTraceString": null, "Depth": 1, "HelpURL": null, "ClassName": "System.Net.Sockets.SocketException", "StackTraceString": " at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)", "RemoteStackIndex": 0, "HResult": -2147467259, "Source": "System.Private.CoreLib" } }, "fields": { "Scope": "CentralConfigFetcher", "SourceContext": "Elastic.Apm", "ExceptionDetail": { "HResult": -2147467259, "Message": "Resource temporarily unavailable", "Source": "System.Net.Http", "InnerException": { "Message": "Resource temporarily unavailable", "ErrorCode": 11, "Type": "System.Net.Sockets.SocketException", "HResult": -2147467259, "Source": "System.Private.CoreLib", "NativeErrorCode": 11, "SocketErrorCode": "TryAgain" }, "Type": "System.Net.Http.HttpRequestException" }, "HttpResponseBody": " N/A", "ETag": "<null>", "dbgIterationsCount": 1, [Truncated] "ApmServerResponseStatusCode": "ServiceUnavailable", "Index": "removed", "ThreadId": 10 }, "message": "{\"PayloadSenderV2\"} Failed sending event. Events intake API absolute URL: http://dp_apm_writer:*****@******:8200/intake/v2/events. 
APM Server response: status code: ServiceUnavailable, content: \n\"{\\\"accepted\\\":0,\\\"errors\\\":[{\\\"message\\\":\\\"queue is full\\\"}]}\n\"", "@version": "1", "messageTemplate": "{{{Scope}}} Failed sending event. Events intake API absolute URL: {EventsIntakeAbsoluteUrl}. APM Server response: status code: {ApmServerResponseStatusCode}, content: \n{ApmServerResponseContent}", "level": "Error", "@timestamp": "2020-10-10T18:34:44.854Z" }, "fields": { "@timestamp": [ "2020-10-10T18:34:44.854Z" ] }, "sort": [ 1602354884854 ] } ``` username_1: Perfect, thanks @username_0 . Status: Issue closed
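The maintainer's plan above, running the server URL through the same sanitizer already used for incoming request URLs, amounts to stripping the userinfo portion of the URL before it reaches any log template. A hedged TypeScript sketch of that idea follows; the real agent is C#, and `sanitizeUrl` is an illustrative name, not the agent's API.

```ts
// Hedged sketch: mask basic-auth credentials in a URL before logging it.
function sanitizeUrl(raw: string): string {
  try {
    const url = new URL(raw);
    if (url.username || url.password) {
      // Replace userinfo with a fixed mask, mirroring the agent's "*****" style.
      url.username = "*****";
      url.password = "*****";
    }
    return url.toString();
  } catch {
    return raw; // not a parseable URL; log unchanged
  }
}

// e.g. sanitizeUrl("http://user:secret@apm.example.com:8200/intake/v2/events")
//   -> "http://*****:*****@apm.example.com:8200/intake/v2/events"
```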
metacall/core
486101494
Title: Include directory
Question:
username_0: It's unclear what directory I need to include to utilize this library from C/C++. In the `build` folder, there's a `source` folder with many subfolders containing `include` directories. Do I need to include all of those? Just for context, I'm working in Xcode trying to build this for iOS/macOS. Side note: if I can get this library to work for what I need, I'd really love to donate to you. This library is a godsend and would widen my options for other 3rd-party libraries in my project.
Answers:
username_0: I have this set up as a submodule and would rather not have my build script install folders to root directories. Is there a way I can set an install prefix manually?
rasyidf/Rasyidf.Localization
587661585
Title: NullReferenceException issue
Question:
username_0: Hi, why do I get a NullReferenceException? This is how I get the string:
`string test = LocalizationService.GetString("3", "Text", "آیا مایل به تغییر زبان برنامه هستید؟");`
and this is my JSON:

```
{
  "Languages": [
    {
      "EnglishName": "English",
      "CultureName": "United States",
      "Culture": "en-US",
      "RTL": "false"
    },
    {
      "EnglishName": "Persian",
      "CultureName": "Farsi",
      "Culture": "fa-IR",
      "RTL": "true"
    }
  ],
  "Data": [
    {
      "data": [
        {
          "Id": 2,
          "Header": { "en-US": "Exit", "fa-IR": "بستن" }
        },
        {
          "Id": 3,
          "Text": {
            "en-US": "do you want to change language?",
            "fa-IR": "آیا مایل به تغییر زبان برنامه هستید؟"
          }
        },
        {
          "Id": 10,
          "Content": { "en-US": "Subtitle", "fa-IR": "زیرنویس" }
        }
      ]
    }
  ]
}
```

Answers:
username_1: I'm still searching for the cause. Thank you for your feedback.
username_2: This usually occurs because localization is requested before it loads. I fixed this and implemented some cool features [here](https://gitlab.com/username_2/localization); feel free to look and open a pull request (I don't know how to interact between GitLab and GitHub).
username_1: Well, thank you. I'll try to bring it over to GitHub as well. I don't know how to interact between the two either.
username_2: Wait a few hours; I'll move it over to GitHub.
username_1: Yes, thank you :)
username_1: Then I'll close this issue.
Status: Issue closed
pouchdb/express-pouchdb
46507443
Title: README note about bodyParser() outdated.
Question:
username_0: Hi, Express 4 seems to have outdated the README note about bodyParser(). I got this code to work (at least for the trivial test of /db). Look right? Is it possible to run tests against the server to see if it's working correctly? Thanks!

```
// The cookieParser should be above session
app.use(cookieParser());
// Request body parsing middleware should be above methodOverride
app.use(expressValidator());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(methodOverride());
// Integrate Express-pouchdb
app.use('/db', require('express-pouchdb')(require('pouchdb')));
```

Answers:
username_1: I reworded this in #162 to simply warn that middleware may conflict, and used bodyParser as an example. Theoretically it **can** conflict; I'm not sure if we test that case though.
Status: Issue closed
username_1: Fixed in 29c016b28d93b7dad52d0df5a35bea6ec493da29
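On the closing question, whether the mount can be tested: yes, a quick smoke test works. A hedged TypeScript sketch using supertest under Jest follows; both tools are assumed dev dependencies, `./app` is a hypothetical module exporting the Express app configured above, and the exact shape of the welcome JSON is an assumption (CouchDB-compatible roots report a `version` field).

```ts
// Hedged smoke test for the mounted express-pouchdb instance.
import request from "supertest";
import app from "./app"; // hypothetical module exporting the Express app

describe("express-pouchdb mount", () => {
  it("responds on GET /db", async () => {
    const res = await request(app).get("/db").expect(200);
    // A CouchDB-style root endpoint should report a version field.
    expect(res.body).toHaveProperty("version");
  });
});
```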
bradnoble/msc-vuejs
385080582
Title: Download errors
Question:
username_0: Generally have issues downloading any document: "Cannot GET /login". Chris says he has a fix, but it needs to be pushed to the site and tested.
Answers:
username_1: Fixed v2.00.150
Status: Issue closed
username_0: Reopened, waiting for deployment
username_0: Verified on deployed site.
Status: Issue closed
apcountryman/cmake-utilities
802822002
Title: Standardize copyright notice
Question:
username_0: Updating the copyright dates in a file when that specific file is updated, and having each individual contributor add their own copyright notice, results in a non-standard copyright notice format, which creates a maintenance burden. ["Copyright Notices in Open Source Software Projects"](https://www.linuxfoundation.org/blog/copyright-notices-in-open-source-software-projects/) and ["Copyright notices for open source projects"](https://ben.balter.com/2015/06/03/copyright-notices-for-websites-and-open-source-projects/) discuss some of these maintenance burdens. Additionally, commits that contain copyright notice updates in addition to other changes cannot be reverted without messing up the copyright notices. Standardizing the copyright notice text, and committing copyright date updates independently of other changes, eliminates these maintenance burdens. `Copyright [years], <NAME> <<EMAIL>> and the cmake-utilities contributors` will be the standard copyright notice, where `[years]` includes every year during which the project is actively worked on.<issue_closed>
Status: Issue closed
Esri/spatial-framework-for-hadoop
64348141
Title: Update releases Question: username_0: The [labeled releases for spatial-framework-for-hadoop](https://github.com/Esri/spatial-framework-for-hadoop/releases) have nothing newer than 1.0.2 of October 2013. Specifically, there is no numbered release containing the [substantial performance upgrade of 2014](https://github.com/Esri/spatial-framework-for-hadoop/pull/56) and [binning functions](https://github.com/Esri/spatial-framework-for-hadoop/pull/59). In the [gis-tools-for-hadoop labeled releases](https://github.com/Esri/gis-tools-for-hadoop/releases), gis-tools-for-hadoop release 2.0 info references Spatial Framework For Hadoop 1.1, which does not exist. In gis-tools-for-hadoop samples/lib, there is spatial-sdk-hadoop.jar of 2014/08 that does not match any labeled release of spatial-framework-for-hadoop, includes the performance speedup, but lacks the [patch for correctness](https://github.com/Esri/spatial-framework-for-hadoop/issues/65). Thus everyone who uses that JAR shipped with gis-tools-for-hadoop will have a buggy library for their first experience with the spatial-framework-for-hadoop. Answers: username_0: https://github.com/Esri/spatial-framework-for-hadoop/releases/tag/v1.1 https://github.com/Esri/gis-tools-for-hadoop/commit/4bdf7afa63c14a1cff208f347bcbd87a0da1b4e3 Status: Issue closed
apache/shardingsphere
697706320
Title: Can Sharding-jdbc support join? Question: username_0: I have a query to join 2 tables ( t_order, t_order_item). **sql:** SELECT i.goods_pic, o.order_id, o.create_time, i.store_id AS shop_id, i.store_name AS shop_name, o.total_amount, o.receiver_detail_address AS receiver_address, o.freight_amount, o.`status`, i.goods_id, i.goods_name, i.goods_price, i.goods_brand, i.create_time FROM t_order AS o, t_order_item AS i WHERE o.order_id = i.order_id AND o.user_id = '322'; I found that very strange. That sql run in sharding-proxy is good, but not in sharding-jdbc. ![image](https://user-images.githubusercontent.com/6037435/92714444-cf79e800-f38e-11ea-8b86-df968702a3ea.png) **ENV:** mybatis: 3.5.0 sharding-jdbc: 4.1.1 spring-cloud: **Log:** ### Error querying database. Cause: groovy.lang.MissingMethodException: No signature of method: java.lang.String.mod() is applicable for argument types: (java.lang.Integer) values: [2] Possible solutions: drop(int), any(), find(), find(groovy.lang.Closure), find(java.util.regex.Pattern), is(java.lang.Object) ### The error may exist in file [E:\workspace\my project\mall\mall\mall\order-service\target\classes\mapper\OrderMapper.xml] ### The error may involve com.codebattery.repository.OrderMapper.getMemberOrders-Inline ### The error occurred while setting parameters ### SQL: SELECT i.goods_pic, o.order_id, o.create_time, i.store_id AS shop_id, i.store_name AS shop_name, o.total_amount, o.receiver_detail_address AS receiver_address, o.freight_amount, o.`status`, i.goods_id, i.goods_name, i.goods_price, i.goods_brand, i.create_time FROM t_order AS o, t_order_item AS i WHERE o.order_id = i.order_id and o.user_id = ? ### Cause: groovy.lang.MissingMethodException: No signature of method: java.lang.String.mod() is applicable for argument types: (java.lang.Integer) values: [2] Possible solutions: drop(int), any(), find(), find(groovy.lang.Closure), find(java.util.regex.Pattern), is(java.lang.Object)] with root cause groovy.lang.MissingMethodException: No signature of method: java.lang.String.mod() is applicable for argument types: (java.lang.Integer) values: [2] Possible solutions: drop(int), any(), find(), find(groovy.lang.Closure), find(java.util.regex.Pattern), is(java.lang.Object) at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:58) ~[groovy-2.4.5-indy.jar:2.4.5] at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:49) ~[groovy-2.4.5-indy.jar:2.4.5] at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) ~[groovy-2.4.5-indy.jar:2.4.5] at Script3$_run_closure1.doCall(Script3.groovy:1) ~[na:na] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_51] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_51] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51] at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51] at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) ~[groovy-2.4.5-indy.jar:2.4.5] at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) ~[groovy-2.4.5-indy.jar:2.4.5] at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) ~[groovy-2.4.5-indy.jar:2.4.5] at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019) ~[groovy-2.4.5-indy.jar:2.4.5] at groovy.lang.Closure.call(Closure.java:426) ~[groovy-2.4.5-indy.jar:2.4.5] at 
groovy.lang.Closure.call(Closure.java:420) ~[groovy-2.4.5-indy.jar:2.4.5] at org.apache.shardingsphere.core.strategy.route.inline.InlineShardingStrategy.execute(InlineShardingStrategy.java:94) ~[sharding-core-common-4.1.1.jar:4.1.1] [Truncated] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.27.jar:9.0.27] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat-embed-core-9.0.27.jar:9.0.27] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-9.0.27.jar:9.0.27] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:861) [tomcat-embed-core-9.0.27.jar:9.0.27] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1579) [tomcat-embed-core-9.0.27.jar:9.0.27] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.27.jar:9.0.27] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_51] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_51] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.27.jar:9.0.27] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] **Files:** ddl: [2 table ddl.zip](https://github.com/apache/shardingsphere/files/5200689/2.table.ddl.zip) sharding-rule: [ShardingDataSourceConfig.txt](https://github.com/apache/shardingsphere/files/5200713/ShardingDataSourceConfig.txt) **help, please**<issue_closed> Status: Issue closed
SAP/fundamental
326199311
Title: Docs: List Group Question: username_0: Page Title - **List Group** Lists and tables are similar as both usually contain a vertical list of data, but lists generally contain basic data and tables tend to hold more complex data. If the list is a complex hierarchy, it is best to use a tree. **Simple List** A link can be used to allow the user to access more details about the item. **Lists with Action** The List item can contain quick actions. **List with Checkboxes** Checkboxes can be include on the left of each line for such purposes as bulk actions.<issue_closed> Status: Issue closed
nunit/nunit3-vs-adapter
485475939
Title: NUnit3TestAdapter 3.15.0 fails to run test: "NUnit failed to load" Question: username_0: When reporting a bug, please provide the following information to speed up triage: * NUnit and NUnit3TestAdapter versions Matches `App.sln` in repo below: <PackageReference Include="nunit" Version="3.8.1" /> <PackageReference Include="NUnit3TestAdapter" Version="3.15.0" /> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.2.0" /> * Visual Studio edition and full version number (see Help About) `16.2.3` * A short repro, preferably attached or pointing to a git repo or gist Full reproduction https://github.com/username_0/NUnit-Test-Adapter-Bug-Repro `App.sln` is using <PackageReference Include="nunit" Version="3.8.1" /> <PackageReference Include="NUnit3TestAdapter" Version="3.15.0" /> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.2.0" /> `App2.sln` is using <PackageReference Include="nunit" Version="3.8.1" /> <PackageReference Include="NUnit3TestAdapter" Version="3.14.0" /> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.2.0" /> Test fails to run for `NUnit3TestAdapter` `3.15.0` but runs correctly for `3.14.0` * What .net platform and version is being targeted `net472` (but with the new Common Project System / SDK style project) * If TFS/VSTS issue, what version, hosted, on-premises, and what build task you see this in On-prem in Visual Studio 2019 Output Window - Tests (for `3.14.0`) ``` [8/26/2019 6:02:11.191 PM Diagnostic] Enqueue operation 'RunSelectedOperation', hashcode:16874580 [8/26/2019 6:02:11.192 PM Diagnostic] Operation left in the the queue: 1 [8/26/2019 6:02:11.192 PM Diagnostic] 'RunSelectedOperation', hashcode:16874580 [8/26/2019 6:02:11.192 PM Diagnostic] [8/26/2019 6:02:11.192 PM Diagnostic] Operation Dequeue : 'RunSelectedOperation' [8/26/2019 6:02:11.548 PM Diagnostic] Starting programmatic build of containers... 
[8/26/2019 6:02:11.605 PM Diagnostic] Loading Project 4b5691bb-da6f-4216-89bf-09783092ca8a [1] => === Id: 4b5691bb-da6f-4216-89bf-09783092ca8a ProjectFilePath: C:\dev\workspace\NUnit-Test-Adapter-Bug-Repro\App2.Tests\App2.Tests.csproj DefaultOutputPath: C:\dev\workspace\NUnit-Test-Adapter-Bug-Repro\App2.Tests\bin\Debug\net472\App2.Tests.dll DefaultTargetFramework: net472 ProjectName: App2.Tests Capabilities: UseFileGlobs,AppDesigner,AssemblyReferences,Managed,SupportAvailableItemName,FolderPublish,CPS,ProjectConfigurationsDeclaredDimensions,RelativePathDerivedDefaultNamespace,OpenProjectFile,UserSourceItems,VisualStudioWellKnownOutputGroups,TestContainer,DynamicDependentFile,NoGeneralDependentFileIcon,CSharp,SharedProjectReferences,Publish,ReferenceManagerProjects,EditAndContinue,Microsoft.VisualStudio.ProjectSystem.RetailRuntime,ReferenceManagerWinRT,AllTargetOutputGroups,SingleFileGenerators,ProjectReferences,ReferenceManagerSharedProjects,HostSetActiveProjectConfiguration,ClassDesigner,PackageReferences,GenerateDocumentationFile,AppSettings,HandlesOwnReload,RunningInVisualStudio,WinRTReferences,Pack,LaunchProfiles,ReferenceManagerCOM,LanguageService,ReferenceManagerAssemblies,ReferenceManagerBrowse,PersistDesignTimeDataOutOfProject,DependenciesTree,DeclaredSourceItems,PreserveFormatting,.NET,DataSourceWindow,OutputGroups,COMReferences IsAppContainer: False IsCpsProject: True [Truncated] [8/26/2019 5:57:02.299 PM Diagnostic] Tests run settings for C:\dev\workspace\NUnit-Test-Adapter-Bug-Repro\App.Tests\bin\Debug\net472\App.Tests.dll: <RunSettings> <RunConfiguration> <ResultsDirectory>C:\dev\workspace\NUnit-Test-Adapter-Bug-Repro\TestResults</ResultsDirectory> <SolutionDirectory>C:\dev\workspace\NUnit-Test-Adapter-Bug-Repro\</SolutionDirectory> <TargetPlatform>X86</TargetPlatform> <CollectSourceInformation>False</CollectSourceInformation> </RunConfiguration> </RunSettings>. [8/26/2019 5:57:02.601 PM Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [8/26/2019 5:57:02.886 PM Informational] NUnit Adapter 3.15.0.0: Test execution started [8/26/2019 5:57:02.914 PM Informational] Running selected tests in C:\dev\workspace\NUnit-Test-Adapter-Bug-Repro\App.Tests\bin\Debug\net472\App.Tests.dll [8/26/2019 5:57:02.996 PM Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [8/26/2019 5:57:03.162 PM Informational] NUnit failed to load C:\dev\workspace\NUnit-Test-Adapter-Bug-Repro\App.Tests\bin\Debug\net472\App.Tests.dll [8/26/2019 5:57:03.170 PM Informational] NUnit Adapter 3.15.0.0: Test execution complete [8/26/2019 5:57:03.284 PM Diagnostic] Project App.Tests references test adapter: NUnit3TestAdapter, version 3.15.0 [8/26/2019 5:57:03.286 PM Informational] ========== Run finished: 0 tests run (0:00:00.9445475) ========== [8/26/2019 5:57:03.526 PM Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [8/26/2019 5:57:03.901 PM Diagnostic] UpdateSummary Detail Unchanged: SKIPPED ``` Answers: username_1: Seeing the same when trying to launch Selenium GUI tests on Visual Studio 2019 using adapter version 3.15; reverting to 3.14 resolves the problem locally. 3.15 does seem to function correctly on my AzureDevOps pipeline however. It seems like the test adapter is not locating the test cases. 
```
InternalTrace: Initializing at level Debug
11:01:21.528 Debug [12] DefaultTestAssemblyBuilder: Loading C:\Repos\example\Project\bin\Debug\xyz.dll in AppDomain domain-9c416945-Project.dll
11:01:21.552 Debug [12] DefaultTestAssemblyBuilder: Examining assembly for test fixtures
11:01:21.645 Debug [12] DefaultTestAssemblyBuilder: Found 1 classes to examine
11:01:21.683 Debug [12] DefaultTestAssemblyBuilder: Found 1 fixtures with 0 test cases
11:01:21.811 Info [12] DefaultTestAssemblyRunner: Running tests
```
Example test case:
```
[TestCase(UsernameKeys.Sales, TestName = "ABC")]
[Description("This will test ABC.")]
[Category("Regression Test Pack")]
[Retry(2)]
public void Test_Case_ABC(UsernameKeys userKey)
{
    Assert.That(() =>
    {
        //start test
    }, Throws.Nothing);
}
```
username_2: @username_0 I see you use NUnit 3.8. If I update to NUnit 3.12, it works for me. @username_1 This is hard to say; need a repro to confirm.
username_2: @username_0 There is no explicit requirement about that, so I am a bit surprised by why this pops up with 3.15.
username_3: Most people in my team have this problem as well. We use NUnit 3.10 and NUnit3TestAdapter 3.15 with VS2017. When trying to run or debug a test, **half of the attempts** would end up with this error. The other half of the attempts successfully start running/debugging the test. It fails with this error **on every second attempt**. Accurate like a clock:
- first time, running a test starts properly
- second time, NUnit failed to load
- third time, starts properly
- fourth time, failed to load
and so on... I am guessing that the run that starts properly changes some persisting state (a file?) and sets it in an invalid way, so that the next one can't start but can reset the state... so that the next one can start and screw it up again. I hope this helps narrow down the possible problem.
username_4: Same issue in my team with NUnit 3.9 and Visual Studio 2015 (NUnit3TestAdapter 3.15). We can run unit tests with the Run All button, but loading the dll fails for a selected test. Internal trace I got:

RunAll test:
`InternalTrace: Initializing at level Debug 15:56:36.333 Debug [16] DefaultTestAssemblyBuilder: Loading <dll Path> in AppDomain domain-1ee0f9a4-MTDWRServices.Test.dll 15:56:36.344 Debug [16] DefaultTestAssemblyBuilder: Examining assembly for test fixtures 15:56:36.358 Debug [16] DefaultTestAssemblyBuilder: Found 422 classes to examine 15:56:36.511 Debug [16] DefaultTestAssemblyBuilder: Found 46 fixtures with 1544 test cases`

one Unit Test:
`InternalTrace: Initializing at level Debug 16:40:32.401 Debug [11] DefaultTestAssemblyBuilder: Loading <dll Path> in AppDomain domain-1ee0f9a4-MTDWRServices.Test.dll 16:40:32.476 Debug [11] DefaultTestAssemblyBuilder: Examining assembly for test fixtures 16:40:32.525 Debug [11] DefaultTestAssemblyBuilder: Found 0 classes to examine 16:40:32.525 Debug [11] DefaultTestAssemblyBuilder: Found 0 fixtures with 0 test cases`

Hope this helps!
username_2: @username_7 When I enable verbosity, I see that when using NUnit 3.8.1 it fails to load the dll: "NUnit failed to load D:\repos\nunit\issues\issue648\NUnit-Test-Adapter-Bug-Repro\App.Tests\bin\Debug\net472\App.Tests.dll [28.08.2019 7:06:00.037 Informational]" It loads in NUnit 3.12, and it loads using NUnit3TestAdapter 3.14. Any ideas?
username_2: @username_7 That will work, because you're then running All Tests; that is a separate method being called.
You need to go into the Test Explorer in Visual Studio and select some tests; that is what really breaks it, and also triggers the pre-filter. The pre-filter doesn't work for any command-line case.
username_2: I noticed one issue where they reported it also didn't work for Azure Pipeline builds, which use the command line, but I can't see how that can be. Nothing is changed, so I think that is a fluke, until some more people report the same. All the rest of the issues are about this particular case with the pre-filter. This case here is weird though... but it may point to something...
username_2: Acceptance test: yeah, we need to create a test harness that executes that other interface, simulating calling it from VS.
username_2: Thanks @username_5! I think this pretty much clears it up :-) We should add an NUnit issue to fix this then, and refer to it from this issue. And thanks a ton for helping clear all of this up.
username_2: Ok, so:

| NUnit | Adapter | Result |
|---|---|---|
| <=3.11 | 3.15 | fail |
| 3.12 | 3.15 or 3.14 | works |
| <=3.11 | 3.14 | works |

That must mean there is something between adapter 3.14 and 3.15 that drags in something wrong?
username_5: @username_2 it's pretty clear that setup fixtures will disappear unless you select the entire namespace in which it appears. At run time the parent namespace suites are all executed but the setup fixtures themselves have disappeared. This is an error in my initial implementation, which was not reported early on. Simplest fix would be to not remove any setup fixtures, since they cause no overhead outside of any static initialization. username_6: We already have nunit/nunit#3356, but it needs some more info found in this thread. username_2: @username_5 We get the test names from VS Test Explorer, and that is the base for the pre-filtering. The SetupFixtures are not included there, afaik, would be very strange if they did, since it is not a VSTest concept. So we are not removing anything, just adding what we know, and we dont know about the setupfixtures. Not sure if we can add anything really, so I think we need to have a fix in the framework/engine for this, so that pre-filtering doesnt affect things like this. The other option is see is to add the featureflag to disable it (or enable if we choose default off, not sure what is best here). username_2: Option 2 is a quick way of fixing this, default off :-) Then Option 3 can come as soon as it is possible to get it out, but it would not be critical then. I assume it would be good to add a issue in the framework repo about the setupfixture issue. username_2: @username_7 1. I will add in the runsettings flag, default off, for a 3.15.1 version. Will get that out by end of week. 2. Perhaps mention this in the newsletter, that we have observed issues in some special cases in 3.15 and that a fix will be out with 3.15.1. As you said, none of those testing this version had any issues, but the number of people hitting it afterwards were quite a bunch, so this is obviously being used. I think it deserves to be mentioned. username_2: @username_5 Take a look at the code around https://github.com/nunit/nunit3-vs-adapter/blob/3d976ae705de2a72a431cfbabb5b69bcfb647226/src/NUnitTestAdapter/NUnitTestAdapter.cs#L208 username_0: @username_7 Unfortunately, there currently isn't any possibility for me to upgrade my NUnit instance beyond 3.8.1. The package that I rely on (https://www.nuget.org/packages/Kentico.Libraries.Tests/) states it works with any version of NUnit >= 3.8.1, but my tests fail when upgrading to NUnit 3.11 or 3.12 with the following error: <blockquote> System.MissingMethodException : Method not found: 'NUnit.Framework.Interfaces.IPropertyBag TestAdapter.get_Properties()'. </blockquote> I'll send this thread to Kentico support to see if there is anything they can do on their end. username_5: Message suggests you are somehow mixing versions. Your code compiles against one version of the framework but a different version is present at runtime. username_0: Yup, that's exactly what's happening. The NuGet package I require (nuget.org/packages/Kentico.Libraries.Tests) takes a dependency on an NUnit API that no longer exists in the latest versions. At runtime the call fails. username_7: Unfortunately when we made test properties read-only, we discussed source-breaking changes (and decided they would be rare and went ahead in spite of them) but we did not think about binary-breaking changes. The .NET runtime and ECMA spec no longer recognizes a method as the same method if the return type changes to a new type that is not assignable to the old type. Some languages overload calls on return type. 
There aren't tools out there to help remind us when we miss things, so this is just too bad. The complexity of both source- and binary-breaking changes is so high that I'm going to advocate waiting for v4 so that users and library developers can plan and reason around our releases with fewer pitfalls. Kentico will need to recompile against NUnit 3.9 and set the version range to a minimum of NUnit 3.9. username_2: @username_0 There is now a beta version of the fix in https://www.myget.org/feed/nunit/package/nuget/NUnit3TestAdapter/3.15.1-dev-01134 . Would appreciate if you checked it. Status: Issue closed username_2: @username_0 3.15.1 hotfix released now. username_8: Hi. I'm having the same problem with 3.15.1 in [this project](https://github.com/username_8/Reactive4.NET). Visual Studio Community 16.3.9, Windows 10 x64 1809. NUnit 3.12. The Test Explorer window is populated but neither a specific nor all tests run. I did play with the X86 and X64 config to no avail. ``` [2019. 11. 15. 1:20:04.688 de. Diagnostic] Enqueue operation 'RunAllOperation', hashcode:36562506 [2019. 11. 15. 1:20:04.688 de. Diagnostic] Operation left in the the queue: 1 [2019. 11. 15. 1:20:04.688 de. Diagnostic] 'RunAllOperation', hashcode:36562506 [2019. 11. 15. 1:20:04.688 de. Diagnostic] [2019. 11. 15. 1:20:04.689 de. Diagnostic] Operation Dequeue : 'RunAllOperation' [2019. 11. 15. 1:20:04.737 de. Diagnostic] Starting programmatic build of containers... [2019. 11. 15. 1:20:04.847 de. Diagnostic] Completed programmatic build of containers. [2019. 11. 15. 1:20:04.847 de. Diagnostic] TestContainer update (build) complete : 109 ms [2019. 11. 15. 1:20:04.849 de. Diagnostic] test container discoverer executor://projectoutputcontainerdiscoverer/v1, discovered 2 containers [2019. 11. 15. 1:20:04.849 de. Diagnostic] Containers from 'Microsoft.VisualStudio.TestWindow.Client.TestContainer.ProjectOutputContainerDiscoverer' : [2019. 11. 15. 1:20:04.849 de. Diagnostic] C:\Users\username_8\git\Reactive4.NET\Reactive4.NET\bin\Debug\Reactive4.NET.dll [2019. 11. 15. 1:20:04.849 de. Diagnostic] C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll [2019. 11. 15. 1:20:04.976 de. Diagnostic] test container discoverer executor://orderedtestadapter/v1, discovered 0 containers [2019. 11. 15. 1:20:04.976 de. Diagnostic] No containers found from 'Microsoft.VisualStudio.MSTest.TestWindow.OrderedTestContainerDiscoverer' : [2019. 11. 15. 1:20:05.019 de. Diagnostic] test container discoverer executor://generictestadapter/v1, discovered 0 containers [2019. 11. 15. 1:20:05.020 de. Diagnostic] No containers found from 'Microsoft.VisualStudio.MSTest.TestWindow.GenericTestContainerDiscoverer' : [2019. 11. 15. 1:20:05.237 de. Diagnostic] test container discoverer executor://webtestadapter/v1, discovered 0 containers [2019. 11. 15. 1:20:05.237 de. Diagnostic] No containers found from 'Microsoft.VisualStudio.MSTest.TestWindow.WebTestContainerDiscoverer' : [2019. 11. 15. 1:20:05.238 de. Diagnostic] DiscoveryOperation<RunAllOperation> FinishedChangedCotainers, changed container count is 2 [2019. 11. 15. 1:20:05.238 de. Diagnostic] Discovering the following containers : [2019. 11. 15. 1:20:05.238 de. Diagnostic] C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll [2019. 11. 15. 1:20:05.238 de. Diagnostic] C:\Users\username_8\git\Reactive4.NET\Reactive4.NET\bin\Debug\Reactive4.NET.dll [2019. 11. 15. 1:20:05.310 de. Informational] ---------- Discovery started ---------- [2019. 11. 
15. 1:20:05.414 de. Diagnostic] Project Reactive4.NET.Test references test adapter: NUnit3TestAdapter, version 3.15.1 [2019. 11. 15. 1:20:05.453 de. Diagnostic] TelemetrySession: Creating the event: VS/UnitTest/TestWindow/Ext/RunSettingsService [2019. 11. 15. 1:20:05.453 de. Diagnostic] Event:VS/UnitTest/TestWindow/Ext/RunSettingsService key: VS.UnitTest.TestWindow.RunSettingsService.Name value:VSTest Run Configuration [2019. 11. 15. 1:20:05.453 de. Diagnostic] TelemetrySession: Creating the event: VS/UnitTest/TestWindow/Ext/RunSettings [2019. 11. 15. 1:20:05.453 de. Diagnostic] Event:VS/UnitTest/TestWindow/Ext/RunSettings key: VS.UnitTest.TestWindow.RunSettings.Services value:1 [2019. 11. 15. 1:20:05.454 de. Diagnostic] File timestamp remains 2019. 11. 15. 1:10:12 for C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll [2019. 11. 15. 1:20:05.454 de. Informational] ========== Discovery skipped: All test containers are up to date ========== [2019. 11. 15. 1:20:05.454 de. Diagnostic] File timestamp remains 2019. 11. 15. 1:10:12 for C:\Users\username_8\git\Reactive4.NET\Reactive4.NET\bin\Debug\Reactive4.NET.dll [2019. 11. 15. 1:20:05.564 de. Informational] ---------- Run started ---------- [2019. 11. 15. 1:20:05.565 de. Diagnostic] TelemetrySession: Creating the event: VS/UnitTest/TestWindow/Ext/RunSettingsService [2019. 11. 15. 1:20:05.565 de. Diagnostic] Event:VS/UnitTest/TestWindow/Ext/RunSettingsService key: VS.UnitTest.TestWindow.RunSettingsService.Name value:VSTest Run Configuration [2019. 11. 15. 1:20:05.565 de. Diagnostic] TelemetrySession: Creating the event: VS/UnitTest/TestWindow/Ext/RunSettings [2019. 11. 15. 1:20:05.565 de. Diagnostic] Event:VS/UnitTest/TestWindow/Ext/RunSettings key: VS.UnitTest.TestWindow.RunSettings.Services value:1 [2019. 11. 15. 1:20:05.579 de. Diagnostic] Grouped C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll : (X64, Framework45, net452, ) [2019. 11. 15. 1:20:06.068 de. Diagnostic] Some tests from the test run selection do not have origin VSTestAdapterDiscovery and will be executed by name. [2019. 11. 15. 1:20:06.689 de. Diagnostic] Some tests from the test run selection do not have origin VSTestAdapterDiscovery and will be executed by name. [2019. 11. 15. 1:20:07.040 de. Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [2019. 11. 15. 1:20:07.450 de. Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [2019. 11. 15. 1:20:07.861 de. Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [2019. 11. 15. 1:20:08.138 de. Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [2019. 11. 15. 1:20:08.315 de. Diagnostic] Some tests from the test run selection do not have origin VSTestAdapterDiscovery and will be executed by name. [2019. 11. 15. 1:20:08.315 de. Diagnostic] Tests run settings for C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll: <RunSettings> <RunConfiguration> <ResultsDirectory>C:\Users\username_8\git\Reactive4.NET\TestResults</ResultsDirectory> <SolutionDirectory>C:\Users\username_8\git\Reactive4.NET\</SolutionDirectory> <TargetPlatform>X64</TargetPlatform> <CollectSourceInformation>False</CollectSourceInformation> </RunConfiguration> </RunSettings>. [2019. 11. 15. 1:20:08.403 de. Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [2019. 11. 15. 1:20:08.856 de. 
Informational] Logging TestHost Diagnostics in file: C:\Users\username_8\AppData\Local\Temp\TestPlatformLogs\6644_11_15_2019_01_17_38\logs.host.19-11-15_01-20-08_41858_8.txt [2019. 11. 15. 1:20:09.473 de. Informational] NUnit Adapter 3.15.1.0: Test execution started [2019. 11. 15. 1:20:09.487 de. Informational] Running all tests in C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll [2019. 11. 15. 1:20:09.899 de. Informational] NUnit failed to load C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll [2019. 11. 15. 1:20:09.905 de. Informational] NUnit Adapter 3.15.1.0: Test execution complete [2019. 11. 15. 1:20:09.915 de. Warning] No test matches the given testcase filter `FullyQualifiedName=Reactive4.NET.Test.ArrayQueueTest.Normal|FullyQualifiedName=Reactive4.NET.Test.AsyncProcessor1Tck.Optional_spec104_mustSignalOnErrorWhenFails|FullyQualifiedName=Reactive4.NET.Test.AsyncProcessor1Tck.Optional_spec105_emptyStreamMustTermin...` in C:\Users\username_8\git\Reactive4.NET\Reactive4.NET.Test\bin\Debug\Reactive4.NET.Test.dll [2019. 11. 15. 1:20:10.092 de. Diagnostic] UpdateSummary Detail Unchanged: SKIPPED [2019. 11. 15. 1:20:10.263 de. Informational] ========== Run finished: 0 tests run (0:00:04,6826326) ========== [2019. 11. 15. 1:20:10.348 de. Diagnostic] UpdateSummary Detail Unchanged: SKIPPED ``` username_2: @username_8 I can confirm this behavior, but it is caused by the underlying Reactive.Streams.tck package which has a dependency on NUnit 3.6.1, and has hardwired that pretty much into their package. I made my own version of, where I upgraded to 3.12, and then it all worked fine. ![image](https://user-images.githubusercontent.com/203432/68953981-56881f00-07c3-11ea-9690-3519293a3d97.png) (You have an huge amount of tests, and some of them are really slow, btw. ) username_8: Thanks for the info (I have very limited understanding of the .NET ecosystem unfortunately). The TCK specifies NUnit >= 3.6.1, shouldn't that automatically work with 3.12 given the target 4.5.2 supposedly does auto-redirect? Is it possible to workaround this in my project or does the TCK project have to release a newer version? username_2: The TCK project has to be updated, but I see it seems rarely updated. I can fork it and add my changes there and raise a PR to them. There is some breaking change somewhere in the range from 3.6 to 3.12. I noted your project tried to downgrade using the binding redirects. That is not a good way, it is better to directly specify you want to run the 3.6.1. Anyway, it just *might* run if you downgrade your project to 3.6.1, but I am not sure. (PS: I have asked the other NUnit core guys if we have the exact point somewhere about the breaking change. ). username_8: Thanks @username_2, that would be great! (I suppose one only needs to change the NUnit dependency versions all around the subprojects, right?) username_2: Yes, that is so. But they need to publish it, I just wonder if the maintainers are still around. username_8: I'm not sure about the maintainers. It is supposed to be the standard library & TCK for .NET so taking bits of it for my project wouldn't be interoperable. I meant to post a PR regarding NetCore3 support where I now had to upgrade the NUnit anyway (https://github.com/reactive-streams/reactive-streams-dotnet/pull/46). The problem is, RS.NET now fails its own verification test with NUnit 3.12 and I have no idea how or why. 
username_2: @username_8 I was just about to raise the PR when I noticed you had already done so. Hope they accept that you do two things in one PR. You can link to this issue: https://github.com/reactive-streams/reactive-streams-dotnet/issues/47
username_2: I assume I see the same errors in my fork. There are several tests failing, but the code is not that easy to understand, so I think this has to be up to the maintainers. I can't see anything related to upgrading NUnit here, but it could be that they do in fact have something related.
username_8: Some failures are due to these:
```cs
try {
    Assert.Fail(message, exception);
}
catch (Exception) {
    AsyncErrors.Enqueue(exception);
}
```
As I understand it, in NUnit 3.6, Assert.Fail would throw and the exception would get suppressed. In newer NUnit, the framework remembers Fail, so even if the test passes otherwise, the end result is still a test failure.
username_2: That's a very good point. And why is the code written that way? It is better just to create an exception with the given message and use that.
username_8: No idea.
username_2: And there seem to be tests that are designed to fail, calling FlopAndFail. I don't understand this code at all. They seem to expect certain exceptions... and have then relied on the implementation of NUnit asserts. Very messy...
username_8: It is a test compatibility kit to verify that 3rd party implementations honor a specification, by providing NUnit test templates prepared with behavior. However, one must test the TCK itself to know it fails when it should fail, hence triggering a failure mode and then checking if the right AssertionExceptions are thrown.
username_2: Ok, but then they should just queue up the AssertionExceptions themselves, and not rely on the Assert methods. What they do now is a very indirect way of achieving this.
username_2: I started out with 55 failed tests. After replacing the Asserts with AssertionExceptions I am down to 15 failed tests. Those tests fail due to errors in the sequences they check, mismatched messages, and so on...
username_8: I've been working on the tests myself on that PR. Most tests now pass, except 3-5 that time out when the entire class is run but not when the individual tests run. I don't understand why a particular test case would time out after 500, 1000, or 2000 milliseconds but work with 3000 milliseconds. As if Task.Run would need more than two seconds to spin up and do a trivial notification.
username_2: I notice there are multiple Thread.Sleep calls in some methods. Can it be related to those?
username_8: No. Those wait for things not to happen. For example, that the source didn't signal an item within 1 second. The failures I'm debugging are cases where the source should have signaled an item within 1 second yet it didn't.
username_2: I see. Anyway, good that you are down to 3-5. That means you have a fix for the ones I have left, so I'll leave this project all to you :-) For anyone who wants to look, the repo @username_8 is fixing is: https://github.com/reactive-streams/reactive-streams-dotnet
username_8: FWIW, I figured out why the tests kept failing due to the timeout. What happens is that when running the code on .NETCore3, the ThreadPool may end up filled with tasks blocked or doing NUnit work, and the pool waits 500ms before creating a new worker thread. Some tests use code that executes `Task.Run`, which would then wait multiples of 500 milliseconds to get running, so they randomly time out on my 4 core machine (no HT).
username_9: I am experiencing the issue with NUnit 3.12 and NUnit3TestAdapter 3.15.1:
```
DistributedTests: Test run is aborted. Logging details of the run logs.
DistributedTests: New test run created. Test Run queued for Project Collection Build Service (Products).
DistributedTests: Test discovery started.
DistributedTests: Test Run Discovery Aborted . Test run id : 55716
DistributedTests: Unexpected error occurred during test execution. Try again.
DistributedTests: Error : NUnit Adapter 3.15.1.0: Test discovery complete
DistributedTests: Test run aborted. Test run id: 55716
System.Exception: The test run was aborted, failing the task.
PowerShell script completed with 1 errors.
```
It's possible that my issue is different, because for me it only works with adapter version 3.7, which is the last one that had the Tools folder in the package. We are using this suggestion to deploy the adapter on the target machine: [https://stackoverflow.com/questions/36622707/error-while-executing-run-functional-test-task-in-vsts](https://stackoverflow.com/questions/36622707/error-while-executing-run-functional-test-task-in-vsts)
username_2: @username_9 First, please raise new issues, don't add to a closed one. You should not need to add any binaries at the target at all, just the nuget package. Please also add a repro solution if you want us to have a closer look at your issue.
dreamRs/billboarder
983068709
Title: legend in RMarkdown
Question: username_0: I am trying to add billboarder charts in my .Rmd, but if a chart is not located on the landing page, its legend isn't loaded correctly. This is a simple example of the problem:
````
---
title: "Untitled"
output:
  flexdashboard::flex_dashboard:
    orientation: columns
    vertical_layout: fill
---

```{r setup, include=FALSE}
library(flexdashboard)
library(billboarder)
library(data.table)
```

Column {.tabset}
-----------------------------------------------------------------------

### Dummy tab

### Chart A

```{r}
billboarder() %>%
  bb_linechart(melt(data.table(rock, keep.rownames = TRUE)[, area := as.numeric(area)],
                    id.vars = "rn", measure.vars = c("area", "peri")),
               bbaes(rn, value, variable))
```
````
apache/cordova-android
415273235
Title: [Android 9 - TargetSDK 28] Backbutton not working
Question: username_0: Hi all. I am developing an app with Cordova. The behavior of the back button on my Android 9 device is not what I expected: setting the targetSDK to 28 (from Android Studio >> Project Structure >> app >> Flavors), the backbutton listener doesn't work; pressing back kills the application directly. On the other hand, if I downgrade the targetSDK, the backbutton works correctly. This behavior only occurs on devices with Android 9 and a project targetSDK level of 28. On other devices with other versions of Android it works correctly. This is how I'm subscribing to the backbutton event:
```
document.addEventListener("deviceready", onDeviceReady, false);

function onDeviceReady() {
    document.addEventListener("backbutton", onBackKeyDown, false);
}

function onBackKeyDown() {
    //Do something...
}
```
I have tried different versions of cordova, cordova-android and Android Studio, but I have not been able to solve the problem. Could somebody help me? Thank you!
Answers:
username_1: I tried this personally and I could not replicate from Cordova master. Which Cordova version are you using?
username_0: Thank you for your feedback @username_1 These are the details of my environment:
- Android Studio version: 3.3.10
- Gradle version: 4.10.1
- Cordova version: 8.1.2 ([email protected])
- Cordova-android version: 7.1.4
I set min and target SDK from Android Studio >> "Project Structure"
When I set the target SDK to API 28, the backbutton begins to work correctly only after:
- Unfolding the keyboard
- Switching to another application and returning to this one
- Locking and unlocking the phone
Any idea what it could be? Thank you very much
username_2: Same issue here. Looking forward to a solution.
username_2: Btw, I'm using cordova-android 8 with the default target SDK setting, which is 28. The back button closes the app directly (no 'backbutton' event triggered on the JS side) until some interaction with the app has happened. When switching to cordova-android 7.1.4 with the default target SDK setting, everything works fine.
username_3: Can't replicate...
```
cordova create hello com.example.hello HelloWorld
cd hello/
cordova platform add android@^7.1.4
cordova run
cordova platform rm android
cordova platform add android@^8.0.0
cordova run
```
my javascript
```javascript
// Application Constructor
initialize: function() {
    document.addEventListener('deviceready', this.onDeviceReady.bind(this), false);
    document.addEventListener("backbutton", this.onBackKeyDown, false);
},
onBackKeyDown: function() {
    console.log("something");
},
```
Tried 7.1.4, 8.0.0, tried to do the Android Studio thing, tested on Android 8 and 9, and it worked 100% as expected (logged "something").
```
Node version: v8.10.0
Cordova version: 8.1.1
```
@username_0 would you mind running the above commands to create a brand new app, add the javascript and see if the problem persists?
username_0: Hello @username_3, thank you for your feedback. I have performed the indicated tests and the backbutton works correctly, but I have detected that when I add the plugin "cordova-plugin-splashscreen" and set the targetSDK to API 28 (from Android Studio), the backbutton no longer triggers; it kills the app. Could you do that test and tell me if you find the same bug? What do you think? Regards, Fabricio.
username_4: We have the same problem, and we also have the cordova-plugin-splashscreen...
@ionic/cli-utils : 1.19.2
ionic (Ionic CLI) : 4.10.3
cordova (Cordova CLI) : 8.1.2
Cordova Platforms : android 8.0.0
Ionic Framework : ionic1 master
username_1: I think since it is related to Splash Screen, this ticket can be closed here and a new ticket can be created in https://github.com/apache/cordova-plugin-splashscreen because it does not look like it is a general issue.
username_0: This issue has been fixed: https://github.com/apache/cordova-plugin-splashscreen/issues/186
Status: Issue closed
username_5: I am also having this issue. The weird thing is that the backbutton works again if I resume the app; it doesn't make any sense.
username_6: Same issue here. Sorted with https://github.com/prageeth/cordova-plugin-splashscreen. It would be great to merge the fix into master...
username_7: Let's please keep the discussions in apache/cordova-plugin-splashscreen#186 (issue) and the proposed fix in apache/cordova-plugin-splashscreen#225. In case of an issue with cordova-android itself, please raise a new issue. We do not have the time or mental bandwidth to support closed issues.
username_6: Oops, sorry. I had different pages open in the browser and wrote the comment in the wrong one.
username_8: Same issue here. But I'm not using "cordova-plugin-splashscreen"; my plugins are:
cordova-plugin-browsersync | npm | ^1.1.0
cordova-plugin-geolocation | npm | ^4.0.2 | 4.0.2
cordova-plugin-inappbrowser | npm | * | 3.2.0
cordova-plugin-network-information | npm | * | 2.0.2
cordova-plugin-whitelist | npm | * | 1.3.4
And my config is: cli-9.0.0, Android 8.0.0
Schlagen-Management/FitnessLeaderBoard
601233124
Title: Create Login component
Question: username_0: Include external providers, i.e., FB and Google to start with. This will include the page that follows sign-in, where the user registers with the site.
Answers:
username_0: Fixed with commit: 0d8fae3a524ec57dc0b578d3d7721d2116d13177
Status: Issue closed
lortizb/GENE8940
519323844
Title: Project progress update (week2)
Question: username_0: Weekly progress update of my project
Answers:
username_0: Activities done:
1- Copied all raw data into a Sapelo2 folder called "Nanopore". In this folder I have all the files obtained during the run, but I will use only the fastq files that pass the QC (above 10).
2- Used _Porechop_ to demultiplex samples. It worked very well. I have 10 different folders (one for each sample).
3- I am working on the Canu script to perform the assembly for each sample.
username_1: - good progress for first update 2/2
- please provide links to your code and location of files on the cluster in your next update.
jwly101/gitalk
464249407
Title: 戒为良药 (Quitting Is Good Medicine), Season 41: A Detailed Guide to Controlling Nocturnal Emissions | 戒为良药
Question: username_0: https://jwly.cf/2019/07/03/jwly041/
Preface: Quite a few members of the 戒色吧 (rebooting) forum share their own experiences of indulgence. Sharing experiences is good; it can serve as a warning and an inspiration to others. But when sharing stories of indulgence, take care to avoid the details, because those sensitive passages can easily give rise to impure thoughts and so lead to a relapse. Some members relapse right while reading; others cannot calm down for a long time afterwards, because the text stirs up memories of their own indulgence. So when we share our experiences, we should play down the specific process and details of the indulgence and emphasize the harms of the habit instead. That way the account serves its warning purpose without tempting readers into impure thoughts.
Winter break and Chinese New Year
aws/aws-cdk
642081670
Title: (aws-ecs-patterns): ScheduledFargateTask fails to teardown cluster if tasks are running Question: username_0: <!-- description of the bug: --> Cluster deletions fail while the cluster has tasks running ### Reproduction Steps <!-- minimal amount of code that causes the bug (if possible) or a reference: --> Create a stack containing a `ScheduledFargateTask`, and then try to delete it while a task is running. The stack deletion will fail ### Error Log <!-- what is the error message you are seeing? --> ### Environment - **CLI Version :** 1.44 - **Framework Version:** - **Node.js Version:** <!-- Version of Node.js (run the command `node -v`) --> 12.17 - **OS :** macos 10.15.5 - **Language (Version):** <!-- [all | TypeScript (3.8.3) | Java (8)| Python (3.7.3) | etc... ] --> TS 3.9.5 ### Other <!-- e.g. detailed explanation, stacktraces, related issues, suggestions on how to fix, links for us to have context, eg. associated pull-request, stackoverflow, gitter, etc --> I have worked around this by having a custom resource that will find all tasks running on the cluster, stop them, and then wait for them to be stopped. --- This is :bug: Bug Report Answers: username_1: Hi @username_0, thanks for reaching out about this. It's currently expected behavior that cluster deletions fail while tasks are running, unfortunately--I think that's a tradeoff made in CF, not necessarily the CDK construct. I think your options here are the following: * Your custom resources solution * Use a combination of `aws ecs list-clusters`, `aws ecs list-tasks --cluster <yourCLuster>`, and `aws ecs stop-task --task <yourTask>` to programmatically kill tasks I apologize for the inconvenience, but there's nothing we can do on the cdk side of things to automate this as the CDK uses CF under the hood. I hope this helps. Please feel free to reopen if you have further questions. Status: Issue closed
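A sketch of username_1's second option as a one-off script, combining the listed commands (the cluster name is a placeholder, not anything from the stack above):
```bash
#!/usr/bin/env bash
# Stop every running task on a cluster so the CloudFormation
# stack deletion can proceed. CLUSTER is a placeholder value.
CLUSTER="my-scheduled-task-cluster"

for task in $(aws ecs list-tasks --cluster "$CLUSTER" \
                --query 'taskArns[]' --output text); do
  aws ecs stop-task --cluster "$CLUSTER" --task "$task"
done
```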
waynerobinson/xeroizer
780714974
Title: Timeout option not set correctly on OAuth 2 client
Question: username_0: It seems that the OAuth 2 client uses a `connection_opts` hash to set the timeout options, and this is currently empty even when initializing `Xeroizer::OAuth2Application` with a timeout option, per issue #296. The options I'm seeing on the `OAuth2::Client` when making a request are:
```
:authorize_url => "https://login.xero.com/identity/connect/authorize"
:token_url => "https://identity.xero.com/connect/token"
:token_method => :post
:auth_scheme => :request_body
:connection_opts => Empty Hash
:connection_build => nil
:max_redirects => 5
:raise_errors => false
:xero_url => "https://api.xero.com/api.xro/2.0"
:tenets_url => "https://api.xero.com/connections"
:unitdp => 4
:timeout => 20
:access_token => "the-access-token"
:tenant_id => "the-tenant-id"
```
Is there a different way we should be initializing the `Xeroizer::OAuth2Application` to pass in the timeout options? See [Slow OAuth Providers](https://github.com/oauth-xx/oauth2/wiki/Advanced-usage#slow-oauth-providers) in the oauth2 docs for more info.
Answers:
username_1: I think I started noticing Faraday::TimeoutError: Net::ReadTimeout exceptions due to this.
username_1: @username_0 I think you can pass `connection_opts: { request: { timeout: 300 } }` directly through the `Xeroizer::OAuth2Application` options:
```ruby
Xeroizer::OAuth2Application.new(client_id, client_secret,
  connection_opts: {
    request: {
      timeout: 300
    }
  },
  rate_limit_sleep: 2,
  unitdp: 4,
  tenant_id: tenant_id
)
```
From what I'm seeing, these options are passed directly from the `Xeroizer::OAuth2Application` initializer (https://github.com/waynerobinson/xeroizer/blob/master/lib/xeroizer/oauth2_application.rb#L32) to the `Xeroizer::OAuth2` initializer (https://github.com/waynerobinson/xeroizer/blob/master/lib/xeroizer/oauth2.rb#L9) and from there to the regular `OAuth2::Client`.
username_2: Documented: https://github.com/waynerobinson/xeroizer/wiki/OAuth-Client-Connection-Options---Timeouts
Not an issue to fix, closing.
Status: Issue closed
choria-io/go-choria
1170067045
Title: Use go 1.18 version info instead of external tool and build time vars
Question: username_0:
```
go version

The go command now embeds version control information in binaries. It includes the currently checked-out revision, commit time, and a flag indicating whether edited or untracked files are present. Version control information is embedded if the go command is invoked in a directory within a Git, Mercurial, Fossil, or Bazaar repository, and the main package and its containing main module are in the same repository. This information may be omitted using the flag -buildvcs=false.

Additionally, the go command embeds information about the build, including build and tool tags (set with -tags), compiler, assembler, and linker flags (like -gcflags), whether cgo was enabled, and if it was, the values of the cgo environment variables (like CGO_CFLAGS). Both VCS and build information may be read together with module information using go version -m file or runtime/debug.ReadBuildInfo (for the currently running binary) or the new debug/buildinfo (https://go.dev/doc/go1.18#debug/buildinfo) package.

The underlying data format of the embedded build information can change with new go releases, so an older version of go may not handle the build information produced with a newer version of go. To read the version information from a binary built with go 1.18, use the go version command and the debug/buildinfo package from go 1.18+.
```
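A minimal sketch of reading the embedded info at runtime with `runtime/debug.ReadBuildInfo`, as quoted above (the `vcs.*` setting keys are only populated when the binary was built inside a VCS checkout; the output format here is just an illustration):
```go
package main

import (
	"fmt"
	"runtime/debug"
)

// buildVersion extracts VCS details embedded by the Go 1.18+ toolchain,
// replacing values previously injected at build time via -ldflags.
func buildVersion() string {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		return "unknown (built without module support)"
	}

	var revision, vcstime, modified string
	for _, s := range info.Settings {
		switch s.Key {
		case "vcs.revision":
			revision = s.Value
		case "vcs.time":
			vcstime = s.Value
		case "vcs.modified":
			modified = s.Value
		}
	}
	return fmt.Sprintf("%s (commit %s, commit time %s, dirty=%s)",
		info.Main.Version, revision, vcstime, modified)
}

func main() {
	fmt.Println(buildVersion())
}
```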
mrdoob/three.js
151821008
Title: Wrong Label: 2 Height Segments in Editor Question: username_0: ##### Description of the problem There are 2 "Height Segments" in the editor's properties pane. I think one of them should be "Depth Segments". ##### Three.js version - [ ] Dev - [x] r76 - [ ] ... ##### Browser - [x] All of them - [ ] Chrome - [ ] Firefox - [ ] Internet Explorer ##### OS - [x] All of them - [ ] Windows - [ ] Linux - [ ] Android - [ ] IOS ##### Hardware Requirements (graphics card, VR Device, ...) Answers: username_1: Looks like a copy-paste error... Status: Issue closed
department-of-veterans-affairs/va.gov-team
810310581
Title: CLP Refinement Punch List - Internal UAT
Question: username_0: Issue Description
Some refinement is needed post-PR for the Campaign Landing Page; the punch list for the FE work effort is below. We also plan to capture any internal bugs or needed fixes from our team walkthrough this afternoon via this ticket.
CLP UAT Refinement Task List
- [ ] For Preview state, we should only show the Panels that are used – not all the empty panels.
- [ ] Tier 1 Issue: There is an Add to Calendar link/component in the This Page is For that should not be there.
- [ ] Video in Media Library did not display on the front end. (Need to fix this element or validate when the code fix will be deployed)
- [ ] Remove the caret in the "Read the press release for details"? (per Randi, to be consistent we can leave the caret off)
Answers:
username_1: Social media share buttons won't work properly until we are on staging (not the preview servers) since it's taking your `hostUrl`
username_1: @username_0 I'm a bit confused by this one; what can I do to resolve it?
username_0: Good question @username_1, let me see if we can get you some clarity. I believe the issue was to hard-code the CTA. I think it's just using a default value ATM.
username_2: @username_1 I'm seeing quite a few of the FE issues resolved -- should @RLHecht and I wait until another deployment (today or Monday) to take a second look and validate all FE issues are closed? Don't want to keep reloading all day -- just easier for me (at least) to set aside time for a panel-by-panel look.
username_1: I would wait until today's deployment so I can get [this PR](https://github.com/department-of-veterans-affairs/vets-website/pull/16108) merged + deployed 🙂
username_3: My understanding is that the Panel 6 CTA text was hardcoded in FE as "See more stories". Content editors would not be able to specify that.
username_2: Great reminder @username_3 -- you're right and that approach works.
Status: Issue closed
splunk/splunk-sdk-javascript
114321019
Title: Cannot overwrite Content-Type on Http.post Question: username_0: I need to be able to overwrite the Content-Type on the Http.post method for resources that require a value of `application/json` instead of the value `application/x-www-form-urlencoded` https://github.com/splunk/splunk-sdk-javascript/blob/2afaa209594795af0e5882a9b0d4e8666207a492/lib/http.js#L164 One resource that requires `application/json` is the KV Store, when adding / updating data in a collection: http://dev.splunk.com/view/SP-CAAAEZG Existing method: ``` post: function(url, headers, params, timeout, callback) { headers["Content-Type"] = "application/x-www-form-urlencoded"; var message = { method: "POST", headers: headers, timeout: timeout, post: params }; return this.request(url, message, callback); }, ``` Status: Issue closed Answers: username_1: Closing due to inactivity.
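One possible shape of the requested change, sketched against the method quoted above (not an actual patch from the maintainers): only apply the form-encoded default when the caller hasn't already chosen a Content-Type.
```javascript
post: function(url, headers, params, timeout, callback) {
    // Only apply the form-encoded default when the caller hasn't
    // already set a Content-Type (e.g. "application/json" for KV Store).
    if (!headers["Content-Type"]) {
        headers["Content-Type"] = "application/x-www-form-urlencoded";
    }

    var message = {
        method: "POST",
        headers: headers,
        timeout: timeout,
        post: params
    };

    return this.request(url, message, callback);
},
```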
kelektiv/node.bcrypt.js
226233773
Title: install error
Question: username_0: Cannot install it at all; here are the logs: ![Uploading dd.png…]()
Answers:
username_1: Hi there! There are no images, so we can't really see what your problem is. But make sure you've already installed what is listed [here](https://github.com/kelektiv/node.bcrypt.js/wiki/Installation-Instructions)
username_0: ![image](https://cloud.githubusercontent.com/assets/19831373/26277447/21a66222-3dba-11e7-8673-9453a89dd083.png)
![image](https://cloud.githubusercontent.com/assets/19831373/26277455/5778b6f2-3dba-11e7-96d6-21e9ab645ccf.png)
Confused. What happened? Why?
username_1: Have you done `npm install --global --production windows-build-tools` on PowerShell with administrator permission? If yes, then try running the install as admin. If it still fails, it might be a problem on your side, because from the picture you sent us it looks like a problem with Microsoft .NET.
username_1: You're also running an outdated version of npm; update npm with `npm install -g npm` and also install the latest version of node.js.
username_0: I tried `npm instal --global ---production window-build-tools` but failed to npm install anything from package.json. By the way, I give up!
username_1: Your package.json must be broken, attach it
username_2: You are using a very old version of bcrypt. Pre-built packages for it are not available and it fails to compile on newer node versions.
Status: Issue closed
username_2: Closing as the issue was due to using a very old version of bcrypt with unsatisfied dependencies.
basho/riak_kv
940165298
Title: Installation of Riak KV on Ubuntu 20.04
Question: username_0: Hi, which is the latest stable version I can install on Ubuntu 20? I tried to install Riak KV version 2.2.3 from the repo, but I get a response from packagecloud with error status 402 Payment Required. Can you help me? PS: I do not want to use the Docker images.
Answers:
username_1: Sorry, packagecloud is out of date, and we never set up a proper open source account following basho's demise. Packages can be found here: https://files.tiot.jp/riak/kv/
3.0.6 is the latest version. To see the history of changes, read from https://github.com/basho/riak/blob/develop-3.0/RELEASE-NOTES.md
username_0: Thanks a lot, I installed it, but now I have a problem with setting up my cluster: I do not have the riak-admin command. How can I get it? Do you perhaps not have documentation for the 3.0.6 version?
username_1: If you want to clear a node, clear the `platform data directory` (ring information is in the cluster_meta if you want to specifically clear this). The data directory is defined in riak.conf:
```
##
## Default: ./data
##
## Acceptable values:
##   - the path to a directory
platform_data_dir = ./data
```
I don't know of a way of being more dynamic about IP. I'm not sure if it would be practical to try and do something with [`reip`](https://docs.riak.com/riak/kv/2.2.3/using/admin/riak-admin/index.html#reip).
Documentation for the latest version is a problem right now. New feature documentation is scattered about markdown files in the docs section of the repo. Old documentation is generally valid, but with some hidden deltas (e.g. `riak admin` not `riak-admin`). We do need to get the online documentation updated; I apologise that it isn't.
noisebridge/buildout-capp
925490798
Title: Bike rack on wall somewhere
Question: username_0: We can probably use more but this is awesome already
![51289132910_fec4a3bb93_c](https://user-images.githubusercontent.com/227661/124372738-7f338580-dc41-11eb-9ce0-279394978d58.jpg)
Status: Issue closed
Answers:
username_1: I think a ceiling dropdown might be better, to get them out of the way. Like this: https://www.amazon.com/dp/B00D4BTJQI/?coliid=IYOELXIZX5FEU&colid=1KCEJN0CN3CEN&psc=1
Bikes were often the most common thing that blocked the elevator access area during events (when the elevator worked)
username_2: We currently have 5 racks mounted on the wall near the laser cutter and lockers
Status: Issue closed
blakeblackshear/frigate
1104750832
Title: [Support]: HomeAssistant camera entity live stream blank, not detecting cars/motorcycles, birdseye blank screen with frigate logo Question: username_0: ### Describe the problem you are having After setting up a reverse proxy (traefik) with frigate on docker-compose, the addon on home assistant doesn't show the live stream when clicked into the entity like before, it just shows up a grey screen, but the display/image does update every few minutes. Besides that, as for the zones, both has only been detecting person and no cars or motorcycle along with birdseye just being a black screen with frigate logo in the middle. ### Version 0.9.4-26AE608 ### Frigate config file ```yaml mqtt: host: IP user: USER password: <PASSWORD> database: path: /media/frigate/database/frigate.db birdseye: enabled: True detect: width: 1920 height: 1080 fps: 5 enabled: True record: enabled: True retain_days: 28 events: max_seconds: 300 # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones) required_zones: [] # Optional: Retention settings for recordings of events retain: # Required: Default retention days (default: shown below) default: 10 snapshots: enabled: True # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones) required_zones: [] retain: # Required: Default retention days (default: shown below) default: 10 cameras: carport-front: ffmpeg: inputs: - path: rtsp://[]:[]@10.69.25.10:554/Streaming/Channels/101?transportmode=unicast&profile=Profile_1 roles: - detect - rtmp zones: insidecarport: coordinates: 1920,1080,1920,933,1841,747,1704,495,1351,257,0,276,0,1080 objects: - person - motorcycle outsidecarport: coordinates: 713,260,1255,240,1684,436,1920,866,1920,0,0,0,0,346 [Truncated] - /home/mingsoonang/frigate/config:/config - /mnt/CCTV:/media/frigate labels: - "traefik.enable=true" - "traefik.http.routers.frigate.entrypoints=http" - "traefik.http.routers.frigate.rule=Host(`LINK`)" - "traefik.http.middlewares.frigate-https-redirect.redirectscheme.scheme=https" - "traefik.http.routers.frigate.middlewares=frigate-https-redirect" - "traefik.http.routers.frigate-secure.entrypoints=https" - "traefik.http.routers.frigate-secure.rule=Host(`LINK`)" - "traefik.http.routers.frigate-secure.tls=true" - "traefik.http.routers.frigate-secure.service=frigate" - "traefik.http.services.frigate.loadbalancer.server.port=5000" - "traefik.docker.network=proxy" networks: proxy: external: true ```` Answers: username_1: Not sure about your other issues, but for the car and motorcyle, as per [the docs](https://docs.frigate.video/configuration/index) you need to also define the objects you are tracking for in the objects list of the camera, not just the zone level. The zone level is for filtering out the ones that are set to be tracked at the camera / global level. username_0: I have added objects and I see them under tracked objects now, thank you :) username_1: @username_0 Glad that worked! Also as per the docs, for the birdseye view cameras will only show up if they have tracked an object in the last 30 seconds. If you want the birdseye to show all cameras continuously then you will need to adjust your configuration file. 
```yaml
# Optional: birdseye configuration
birdseye:
  # Optional: Enable birdseye view (default: shown below)
  enabled: True
  # Optional: Width of the output resolution (default: shown below)
  width: 1280
  # Optional: Height of the output resolution (default: shown below)
  height: 720
  # Optional: Encoding quality of the mpeg1 feed (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 8
  # Optional: Mode of the view. Available options are: objects, motion, and continuous
  #   objects - cameras are included if they have had a tracked object within the last 30 seconds
  #   motion - cameras are included if motion was detected in the last 30 seconds
  #   continuous - all cameras are included always
  mode: objects
```
username_0: I see, it seems like I missed some stuff out. I changed the mode to continuous, and it's working now :) As for my Home Assistant issue, it's probably because of port 1935 for the RTMP feeds; if I get that fixed in my docker-compose then all should be good. Thanks again for helping out, much appreciated
Status: Issue closed
eHealthAfrica/direct-delivery-dashboard
108137923
Title: npm install fails on Node v4.x Question: username_0: Likely node-sass needs updating: ``` fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. make: *** [Release/obj.target/binding/binding.o] Error 1 gyp ERR! build error gyp ERR! stack Error: `make` failed with exit code: 2 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:270:23) gyp ERR! stack at emitTwo (events.js:87:13) gyp ERR! stack at ChildProcess.emit (events.js:172:7) gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12) gyp ERR! System Darwin 14.5.0 gyp ERR! command "/usr/local/bin/iojs" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild" gyp ERR! cwd /Users/tom/Projects/client/ehealth-africa/direct-delivery-dashboard/node_modules/gulp-sass/node_modules/node-sass gyp ERR! node -v v4.0.0 gyp ERR! node-gyp -v v3.0.1 gyp ERR! not ok Build failed npm ERR! Darwin 14.5.0 npm ERR! argv "/usr/local/bin/iojs" "/usr/local/bin/npm" "i" npm ERR! node v4.0.0 npm ERR! npm v2.14.2 npm ERR! code ELIFECYCLE npm ERR! [email protected] install: `node build.js` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] install script 'node build.js'. npm ERR! This is most likely a problem with the node-sass package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! node build.js npm ERR! You can get their info via: npm ERR! npm owner ls node-sass npm ERR! There is likely additional logging output above. npm ERR! Please include the following file with any support request: npm ERR! /Users/tom/Projects/client/ehealth-africa/direct-delivery-dashboard/npm-debug.log ```
gurkenlabs/litiengine
651352493
Title: Utility doesn't work on Ubuntu
Question: username_0: **Describe the bug**
When utility.jar is opened, it just creates a blank window
**To Reproduce**
Simply run the jar
**Expected behavior**
For the utility editor to open
**Your System:**
- OS: Ubuntu
- LITIENGINE version: 0.4.20-alpha
Answers:
username_1: Please provide the stacktrace generated in the `crash.txt`, as suggested in the bug issue template.
username_0: A crash.txt file was never generated, due to the fact that it did not crash but instead just shows a blank window
username_1: Okay, maybe the name is misleading. However, the crash.txt is usually generated as soon as any exception comes up, not necessarily crashing the program entirely. It should be located in the same folder as the .jar you are trying to open. If you open the .jar via console, does it log any errors?
username_0: I have been opening it through the terminal, since I can't close it if I open it in any other way, and it has never shown any errors.
username_1: Please download the latest release and tell us if the problem persists. We have tested it on Ubuntu successfully.
username_2: Unable to reproduce with:
Ubuntu 18.04.5 LTS
utiLITI v0.5.0-beta
username_1: That's good news! I hope OP will finally be able to use it as well.
username_1: Closing this for now, feel free to reopen if the problem wasn't fixed with the latest release.
Status: Issue closed
filecoin-project/lotus
704053940
Title: lotus-miner run cannot modify the default port 2345
Question: username_0:
```
lotus-miner run --api=5432
ERROR: could not get API info: failed to parse multiaddr "5432": must begin with /
```
Answers:
username_0: https://github.com/filecoin-project/lotus/blob/4121063ca45cedb766c963f360522efa285bd23b/cli/cmd.go#L133
```go
func GetAPIInfo(ctx *cli.Context, t repo.RepoType) (APIInfo, error) {
	// Check if there was a flag passed with the listen address of the API
	// server (only used by the tests)
	apiFlag := flagForAPI(t)
	if ctx.IsSet(apiFlag) {
		strma := ctx.String(apiFlag)
		strma = strings.TrimSpace(strma)

		apima, err := multiaddr.NewMultiaddr(strma)
		if err != nil {
			return APIInfo{}, err
		}
		return APIInfo{Addr: apima}, nil
	}
```
username_1: I think this issue can be closed now, since we have the `--miner-api` flag on the newer versions of Lotus. #rengjøring
Status: Issue closed
pyrocms/pyrocms
219867085
Title: [form-module] view options break the entries table view
Question: username_0: When I added, for example, 'entry.event.id' to the view options, I got the following error:
`Call to a member function getType() on null` in `anomaly/streams-platform/src/Ui/Table/Component/Filter/Type/FieldFilter.php` on `line 33`
I noticed that when 'disabling' the filters, it works as expected. (by commenting out `line 75` in `\Anomaly\FormsModule\Http\Controller\Admin\EntriesController`)
Status: Issue closed
Answers:
username_1: This is to be expected when erroneous fields are requested in config.
dotnet/roslyn
1052425183
Title: EditorConfig makes changes to my .editorconfig when switching the Code Style tab
Question: username_0: **Version Used**: Version 17.1.0 Preview 2.0 [31910.343.main]
Editors should not modify files unless I make changes.
**Steps to Reproduce**:
1. git clone https://github.com/username_0/audio-switcher
2. git checkout 4f834870cadadc5b90a4ed8d50bc05f1eb0c03f5
3. Open src\AudioSwitcher.sln
4. Navigate to Solution Items\.editorconfig and open it
5. Switch tabs to Code Style
**Expected Behavior**:
Does not modify .editorconfig
**Actual Behavior**:
Modifies .editorconfig, adding the following:
![image](https://user-images.githubusercontent.com/1103906/141537416-bfb4a687-7188-4f6e-8e57-db21e839fcfe.png)
Status: Issue closed
Answers:
username_1: Duplicate of https://github.com/dotnet/roslyn/issues/59325
kframework/java-semantics
573433387
Title: wrong precision for floats
Question: username_0: Some of my tests showed that K-Java uses the wrong precision for float values. I assume that float values were just encoded as double-precision values. As a result, I could observe that in some calculations K-Java produces results that significantly diverge from the results of Java. Moreover, K-Java sometimes produces a concrete value whereas Java returns Infinity, which is shown in the following example:
```
float a = (float) 1 - 1473150328275393513L / 0.16f * 6854191149854506640L * -128033296 * 23412341234123414L;
System.out.println("" + a); //K-Java shows 1.8916947877681303e+62, Java shows Infinity

double b = (double) 1 - 1473150328275393513L / 0.16 * 6854191149854506640L * -128033296 * 23412341234123414L;
System.out.println("" + b); //K-Java and Java show 1.8916947877681303e+62
```
Additionally, I noticed that K-Java wrongly represents the infinity value (e.g. -Infinity.0) of double or float calculations, and the exponent representation is also a bit different from Java's.
uoregon-libraries/digital-exhibits-spotlight
469881852
Title: Heading options needed
Question: username_0: When editing text on a page (when text is highlighted, a pop-up appears for formatting), currently there is only an H1 option for headings. There needs to be a menu with H1 – H6 options for accessibility.
GlotPress/gp-locales
146718868
Title: Add an automated way to validate locales
Question: username_0: Properties which can be automatically tested:

* ISO codes: https://www.loc.gov/standards/iso639-2/php/code_list.php, http://www-01.sil.org/iso639-3/codes.asp?order=639_3&letter=a
* Country codes: https://www.iso.org/obp/ui/#search
* English name: http://www.unicode.org/cldr/charts/26/by_type/locale_display_names.languages__a-d_.html
* Google code: https://cloud.google.com/translate/v2/using_rest#language-params
* Facebook code: https://www.facebook.com/translations/FacebookLocales.xml
dotnet/cli
139711068
Title: Newtonsoft JObject fails to compile.
Question: username_0: ## Steps to reproduce
[MinimumRepro.zip](https://github.com/dotnet/cli/files/166154/MinimumRepro.zip)
With the latest CLI (1672 as of this writing), run `dotnet restore` and then `dotnet pack` on the above project.
## Expected behavior
Packaging should complete without issue.
## Actual behavior
```
Compiling MusicStore for DNX,Version=v4.5.1
Can not add property System.Collections to Newtonsoft.Json.Linq.JObject. Property with the same name already exists on object.
```
## Environment data
`dotnet --version` output:
```
.NET Command Line Tools (1.0.0-beta-001672)

Product Information:
 Version:     1.0.0-beta-001672
 Commit Sha:  12dd8d6112

Runtime Environment:
 OS Name:     Windows
 OS Version:  10.0.10586
 OS Platform: Windows
 Runtime Id:  win10-x64
```
Answers:
username_1: Seems to be the same as https://github.com/dotnet/cli/issues/1775
@piotrpMSFT Can someone look into this please?
Status: Issue closed
username_2: This worked at 1.0.0-beta-002071.
JuliaPlots/Plots.jl
702578124
Title: [BUG] Cycling arguments
Question: username_0: But I think this cycling of arguments is not a good idea. (My opinion being biased by having been sent into a 3-day rabbit-hole expedition because of that "feature".) While I understand the appeal of being able to do things like `scatter(1:4, [1])`, I think it exposes the risk that someone (like me) mistakenly plots `scatter(x,y)` with `x` and `y` of different lengths, and cannot figure out what is happening. This is especially true for large-ish datasets that produce a somewhat dense cloud of data.

I would personally prefer the behavior to be valid only when passing a scalar, i.e. for `scatter(1:4, 1)` to actually do what `scatter(1:4, [1])` currently does, but I think that may conflict with other things (e.g., what is discussed in #2129).

### Backends

This bug occurs on ( insert `x` below )

Backend      | yes | no  | untested
-------------|-----|-----|---------
gr (default) | x   |     |
pyplot       | x   |     |
plotly       |     | x   |
plotlyjs     |     |     | x
pgfplotsx    |     |     | x
inspectdr    |     |     | x
unicodeplots | x   |     |

### Versions

- Plots.jl version: v1.6.4
- Backend version (`]st -m`):
  - GR v0.52.0
  - PyPlot v2.9.0
  - UnicodePlots v1.3.0
- Output of `versioninfo()`:
```julia
julia> versioninfo()
Julia Version 1.5.1
Commit 697e782ab8 (2020-08-25 20:08 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin19.5.0)
  CPU: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-9.0.1 (ORCJIT, skylake)
```
Answers:
username_1: FWIW I've changed my mind on this, and agree it's unfortunate.
username_2: The question is, what behaviour do we want? Personally, I like the `zip`-like behaviour of the unicode backend.
username_0: I don't know what you will decide, but IMHO if it does not error, I really think it should throw a warning!
username_1: I don't know what the unicode backend does - but wouldn't it be most consistent to error but have a recipe for `scatter(1:10, Iterators.cycle(1:3))`?
username_0: I showed the output of UnicodePlots up there! With it, `scatter(1:10, 1:3)` plots the same as `scatter((1:10)[1:3], 1:3)`. Here it is again:
<img width=70% src="https://user-images.githubusercontent.com/4486578/93313566-fbdcab00-f84b-11ea-98b4-3c5d0f790138.png">
username_1: Oh sorry! Yeah I can't think of a situation when that would be the desired behaviour.
username_3: I agree that erroring would be best, and if we do this breaking change, I would extend it to other cases where we currently cycle, which seems to be some pseudo-random backend-dependent subset of plots attributes taking vector values. Would be nice if we can abolish `Plots._cycle`. Maybe we can directly support arbitrary iterables, so things like `scatter(1:10, Iterators.cycle(1:3))` can work? I think these look quite nice:
```
using Base.Iterators: repeated, cycle

plot(cycle([0,1]), y)
scatter(x, repeated(0))
plot(y, color=cycle([1,2]), fillrange=repeated(0))

# each column against the same vector:
plot(repeated(x), eachcol(Y))
# or maybe
plot(repeated(x), Y)
```
I think this can work by `zip`ping and collecting the positional inputs, and only then proceeding to dispatch by types according to the recipe system. Just need to error early if none of the inputs has known finite length (based on `Base.IteratorSize`).
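A minimal sketch of that zip-and-collect idea, assuming the goal is just to pair up two positional inputs and error early when no input has finite length (the function name and error message are hypothetical, not Plots internals):
```julia
using Base.Iterators: repeated, cycle

# Hypothetical front-end step: pair up two positional inputs,
# truncating to the shorter one, exactly as zip does.
function resolve_xy(x, y)
    infinite(itr) = Base.IteratorSize(itr) isa Base.IsInfinite
    if infinite(x) && infinite(y)
        throw(ArgumentError("at least one positional argument must have finite length"))
    end
    pairs = collect(zip(x, y))
    return first.(pairs), last.(pairs)
end

resolve_xy(1:10, repeated(0))   # ten points at y = 0
resolve_xy(cycle([0, 1]), 1:5)  # x alternates 0,1 for five points
resolve_xy(1:10, 1:3)           # zip semantics: truncates to three points
```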
Opentrons/opentrons
853823635
Title: bug: PD: Batch Edit - Starting Deck State scrolls over control bar
Question: username_0: 1. Enter Batch Edit mode
2. Scroll down to see the steps below; STARTING DECK STATE scrolls over the control bar. (screenshot attachment missing)
Status: Issue closed
jlippold/tweakCompatible
527547588
Title: `LongerCallButton` working on iOS 13.3 Question: username_0: ``` { "packageId": "com.icraze.longercallbutton", "action": "working", "userInfo": { "arch32": false, "packageId": "com.icraze.longercallbutton", "deviceId": "iPhone9,1", "url": "http://cydia.saurik.com/package/com.icraze.longercallbutton/", "iOSVersion": "13.3", "packageVersionIndexed": true, "packageName": "LongerCallButton", "category": "Tweaks", "repository": "Packix", "name": "LongerCallButton", "installed": "1.5", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "com.icraze.longercallbutton", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Make The Call Button Longer!", "latest": "1.5", "author": "iCraze", "packageStatus": "Unknown" }, "base64": "<KEY>", "chosenStatus": "working", "notes": "" } ``` Answers: username_0: 👍 Status: Issue closed
pulumi/pulumi
560654520
Title: Pulumi unit tests are stateful and depend on either pre-existing stacks (or stacks created by other tests). Question: username_0: Discovered by @username_1 If you have a no stacks in Pulumi.com and you run the following: ``` go test -count=1 github.com/pulumi/pulumi/cmd -run TestCreatingStackWithArgsSpecifiedFullNameSucceeds ``` Then you'll fail with: <details> ``` Created project 'test-env878886300' --- FAIL: TestCreatingStackWithArgsSpecifiedFullNameSucceeds (1.51s) /Users/username_1/workspace/pulumi/pulumi/cmd/new_test.go:141: Error Trace: new_test.go:141 Error: Received unexpected error: provided project name "test_project" doesn't match Pulumi.yaml github.com/pulumi/pulumi/pkg/backend/httpstate.(*cloudBackend).CreateStack /Users/username_1/workspace/pulumi/pulumi/pkg/backend/httpstate/backend.go:699 github.com/pulumi/pulumi/cmd.createStack /Users/username_1/workspace/pulumi/pulumi/cmd/util.go:147 github.com/pulumi/pulumi/cmd.stackInit /Users/username_1/workspace/pulumi/pulumi/cmd/new.go:542 github.com/pulumi/pulumi/cmd.promptAndCreateStack /Users/username_1/workspace/pulumi/pulumi/cmd/new.go:505 github.com/pulumi/pulumi/cmd.runNew /Users/username_1/workspace/pulumi/pulumi/cmd/new.go:248 github.com/pulumi/pulumi/cmd.TestCreatingStackWithArgsSpecifiedFullNameSucceeds /Users/username_1/workspace/pulumi/pulumi/cmd/new_test.go:140 testing.tRunner /usr/local/Cellar/go/1.13.7/libexec/src/testing/testing.go:909 runtime.goexit /usr/local/Cellar/go/1.13.7/libexec/src/runtime/asm_amd64.s:1357 could not create stack github.com/pulumi/pulumi/cmd.createStack /Users/username_1/workspace/pulumi/pulumi/cmd/util.go:156 github.com/pulumi/pulumi/cmd.stackInit /Users/username_1/workspace/pulumi/pulumi/cmd/new.go:542 github.com/pulumi/pulumi/cmd.promptAndCreateStack /Users/username_1/workspace/pulumi/pulumi/cmd/new.go:505 github.com/pulumi/pulumi/cmd.runNew /Users/username_1/workspace/pulumi/pulumi/cmd/new.go:248 github.com/pulumi/pulumi/cmd.TestCreatingStackWithArgsSpecifiedFullNameSucceeds /Users/username_1/workspace/pulumi/pulumi/cmd/new_test.go:140 testing.tRunner /usr/local/Cellar/go/1.13.7/libexec/src/testing/testing.go:909 runtime.goexit /usr/local/Cellar/go/1.13.7/libexec/src/runtime/asm_amd64.s:1357 Test: TestCreatingStackWithArgsSpecifiedFullNameSucceeds /Users/username_1/workspace/pulumi/pulumi/cmd/new_test.go:143: Error Trace: new_test.go:143 Error: Not equal: expected: "username_1/test_project/test_stack" actual : "" Diff: --- Expected +++ Actual @@ -1 +1 @@ -username_1/test_project/test_stack + Test: TestCreatingStackWithArgsSpecifiedFullNameSucceeds FAIL FAIL github.com/pulumi/pulumi/cmd 1.842s FAIL Error: Tests failed. ``` </details> The issue here is that this test (in isolation) will end up creating a stack whose test-name doesn't match its generated name. In non-isolation this passes as the test finds the stack from the real backend (possibly due to the non-deterministic execution of a previous *different* test) and doesn't validate anything about it. This is obviously not good. Our tests should be idempotent and not dependent on external factors like this. Answers: username_0: Giving to @username_1 as he is interested in spelunking here. That said, @pgavlin, is this something we have a general way to do better in tests? I'm not really familiar with these unit tests, but maybe you know of something? 
username_1: For now, I'm addressing the test failure with https://github.com/pulumi/pulumi/pull/3909 which doesn't remove the need for the external backend, but at least removes the statefulness of the test (and fixes the test). In the future, we should look into removing the need for the remote backend to complete this test. However, at the moment, local backends don't support the x/y/z fully-qualified stack naming style, so that's not a possibility until they do. Status: Issue closed
photopea/photopea
964902450
Title: Ability to Download Photopea
Question: username_0: A downloadable .exe might be a good choice for people who want to use it on their desktops while there is no internet
Status: Issue closed
Answers:
username_1: Photopea.com should be openable even when you don't have internet. I just disconnected from the internet and I can open www.Photopea.com in Chrome.
git/git-scm.com
126581330
Title: Linux downloads not available
Question: username_0: When you get to this page: http://git-scm.com/download/linux there are no links to downloads, and none have automatically started.
Answers:
username_0: I got to the page above by clicking from here: http://git-scm.com/downloads
Status: Issue closed
hyb1996-guest/AutoJsIssueReport
319170723
Title: [163]com.stardust.autojs.runtime.exception.ScriptInterruptedException: java.io.IOException: Error running exec(). Command: [su] Working Directory: null Environment: null Question: username_0: Description: --- com.stardust.autojs.runtime.exception.ScriptInterruptedException: java.io.IOException: Error running exec(). Command: [su] Working Directory: null Environment: null at com.stardust.autojs.runtime.api.ProcessShell.execCommand(ProcessShell.java:206) at com.stardust.autojs.runtime.api.ProcessShell.execCommand(ProcessShell.java:236) at com.stardust.scriptdroid.tool.AccessibilityServiceTool.enableAccessibilityServiceByRoot(AccessibilityServiceTool.java:59) at com.stardust.scriptdroid.tool.AccessibilityServiceTool.enableAccessibilityService(AccessibilityServiceTool.java:26) at com.stardust.scriptdroid.ui.main.MainActivity$2.onClick(MainActivity.java:167) at com.afollestad.materialdialogs.MaterialDialog.onClick(MaterialDialog.java:361) at android.view.View.performClick(View.java:4855) at android.view.View$PerformClick.run(View.java:20273) at android.os.Handler.handleCallback(Handler.java:815) at android.os.Handler.dispatchMessage(Handler.java:104) at android.os.Looper.loop(Looper.java:192) at android.app.ActivityThread.main(ActivityThread.java:5618) at java.lang.reflect.Method.invoke(Native Method) at java.lang.reflect.Method.invoke(Method.java:372) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:976) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:771) Caused by: java.io.IOException: Error running exec(). Command: [su] Working Directory: null Environment: null at java.lang.ProcessManager.exec(ProcessManager.java:211) at java.lang.Runtime.exec(Runtime.java:174) at java.lang.Runtime.exec(Runtime.java:247) at java.lang.Runtime.exec(Runtime.java:190) at com.stardust.autojs.runtime.api.ProcessShell.execCommand(ProcessShell.java:189) ... 15 more Caused by: java.io.IOException: Permission denied at java.lang.ProcessManager.exec(Native Method) at java.lang.ProcessManager.exec(ProcessManager.java:209) ... 19 more Device info: --- <table> <tr><td>App version</td><td>2.0.16 Beta2</td></tr> <tr><td>App version code</td><td>163</td></tr> <tr><td>Android build version</td><td>1465793145</td></tr> <tr><td>Android release version</td><td>5.1</td></tr> <tr><td>Android SDK version</td><td>22</td></tr> <tr><td>Android build ID</td><td>ALPS.L1.MP3.V2.95_ALI6735M.35GC.A.L1_P36</td></tr> <tr><td>Device brand</td><td>DAXIAN</td></tr> <tr><td>Device manufacturer</td><td>DAXIAN</td></tr> <tr><td>Device name</td><td>ali6735m_35gc_a_l1</td></tr> <tr><td>Device model</td><td>DAXIAN R7</td></tr> <tr><td>Device product name</td><td>DAXIAN R7</td></tr> <tr><td>Device hardware name</td><td>mt6735</td></tr> <tr><td>ABIs</td><td>[armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (64bit)</td><td>[]</td></tr> </table>
tidyverse/haven
958223253
Title: dataset with zero observations fails to parse, one observation reads in cleanly
Question: username_0: Not sure if `haven` fails to read datasets with zero observations? [These two sas files.zip](https://github.com/tidyverse/haven/files/6917714/two.sas.files.zip) were created off the same source SAS file, keeping zero and then one record with these data steps:
```sas
data x.zero_observations;
  set x.pu2018 (obs=0);
run;

data x.one_observation;
  set x.pu2018 (obs=1);
run;
```
SAS pops up an error when opening the zero-observation dataset, but then allows the user to examine the column names and labels. `haven` doesn't allow them to be read in at all:
```r
x <- haven::read_sas("one_observation.sas7bdat")
# works

y <- haven::read_sas("zero_observations.sas7bdat")
# Error: Failed to parse C:/Users/AnthonyD/Desktop/zero_observations.sas7bdat: Invalid file, or file has unsupported features.
```
thanks!!!
Answers:
username_1: Hey @username_0, this is an error coming from ReadStat. @evanmiller can you please have a look? I've tried the linked zero observations file directly in ReadStat and get the same error.
tornadoweb/tornado
358313923
Title: Unable to connect to the Tornado SSL based server from Tornado Client Question: username_0: I am new to the ssl and stuff, I have generated the self signed certificates using **openssl**. `openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 3650 -out certificate.pem` Where **Server** has the following Code. if __name__ == "__main__": context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2) context.load_cert_chain("/home/rootkit/ssl/certificate.pem", "/home/rootkit/ssl/key.pem") http_server = tornado.httpserver.HTTPServer(Application(), ssl_options=context) # # http_server = tornado.httpserver.HTTPServer(Application(), ssl_options={ # 'certfile': '/home/rootkit/ssl/certificate.pem', # 'keyfile': '/home/rootkit/ssl/key.pem', # }) http_server.listen(8888) tornado.ioloop.IOLoop.current().start() When I access the url from chrome it just give the exception because it is not signed by any authority so I proceed it as unsafe. But if I see the traffic via **wireshark** it shows the encrypted traffic. But when I tried to connect with the **Tornado Client** it throws the following error. WARNING:tornado.general:SSL Error on 6 ('127.0.0.1', 8888): [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645) ERROR:tornado.application:Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0xb72e514c>, <Task finished coro=<check_status() done, defined at /home/rootkit/PycharmProjects/websocketserver/file_upload/websocketclient.py:82> exception=SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)')>) Traceback (most recent call last): File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/ioloop.py", line 758, in _run_callback ret = callback() File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/ioloop.py", line 779, in _discard_future_result future.result() File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result raise self._exception File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step result = coro.throw(exc) File "/home/rootkit/PycharmProjects/websocketserver/file_upload/websocketclient.py", line 89, in check_status param = await client.fetch(request) File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__ yield self # This tells Task to wait for completion. File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup future.result() File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result raise self._exception File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/simple_httpclient.py", line 272, in run max_buffer_size=self.max_buffer_size) File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/gen.py", line 1133, in run value = future.result() File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result raise self._exception File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/gen.py", line 1141, in run yielded = self.gen.throw(*exc_info) File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/tcpclient.py", line 242, in connect server_hostname=host) File "/home/rootkit/.local/lib/python3.5/site-packages/tornado/gen.py", line 1133, in run [Truncated] self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645) Here is the **Client** code. 
async def check_status(): url = "https://127.0.0.1:8888/" request = httpclient.HTTPRequest(url=url, method="GET", client_key="/home/rootkit/client.key", client_cert="/home/rootkit/ssl/client.pem") client = httpclient.AsyncHTTPClient() param = await client.fetch(request) print(param) I have generated the client certificates using the came command I used for the server. What could be the possible issue. What I am missing ? Answers: username_1: The "client" certificate is a totally different thing: a way for the server to authenticate the client, so called "mutual authentication". It does nothing in this case because the server is not set up to check the client's certificate. It does not cause the client to skip validation of the server's certificate. To do that like you do for chrome, use `validate_cert=False`. (standard disclaimer that you need to make sure that you don't accidentally leave `validate_cert=False` in when this code makes it into some real-world product or service) username_0: So what should be the base requirement for the "mutual authentication" ? What do you mean by server is not setup to check the client's certificate ? What should I do for the ssl ? username_1: If you don't know what all this stuff is already, then just ignore and don't use "client" certificates, you don't need them. For testing with the self-signed server certificate you generated, just use `validate_cert=False` for the client. For "real production use" you probably want to generate a real trusted server certificate for your real dns domain, for example with "Let's Encrypt". This is all unrelated to tornado. username_1: yes Status: Issue closed username_3: how to use tornado with lets encrypt certificate and a domain name?
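For reference, a minimal sketch of username_1's suggestion applied to the client code above, for testing against the self-signed certificate only (per the disclaimer above, `validate_cert=False` must never ship in production code):
```python
from tornado import httpclient

async def check_status():
    url = "https://127.0.0.1:8888/"
    # Skip verification of the self-signed server certificate for local testing.
    # The client_key/client_cert options are dropped: the server is not
    # configured for mutual authentication, so they had no effect anyway.
    request = httpclient.HTTPRequest(url=url, method="GET", validate_cert=False)
    client = httpclient.AsyncHTTPClient()
    response = await client.fetch(request)
    print(response.body)
```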
mosra/corrade
19706969
Title: Better compiler compatibility checks
Question: username_0: Currently `UseCorrade.cmake` checks and enforces that the proper GCC version is used if `GCC4*_COMPATIBILITY` is enabled. For example, GCC 4.8 can't be used in a depending project if Corrade was built with compatibility mode for GCC 4.7. However, this doesn't cover other cases:

1. Using e.g. Clang with libstdc++-4.7 (`GCC47_COMPATIBILITY` is not enforced and then compilation fails with missing library features, see #1). I don't know if it is possible to check the libstdc++ version from CMake, so I'm not sure how to handle this one.
2. Using Clang for Corrade and GCC for a depending project (might cause linker issues, see username_0/magnum-examples#2). However, I don't like the idea of adding more preprocessor defines (like `BUILT_WITH_CLANG`), as they could be abused to write unportable or weirdly behaving code. The long-term goal is portability (thus getting rid of all `*_COMPATIBILITY` options) and this would hurt it. Luckily there are no compatibility issues with Clang (the old 3.1 with a proper libstdc++ can handle everything), so we don't need to add any `CLANG3*_COMPATIBILITY`.

MSVC is a totally different story, though.
Status: Issue closed
Answers:
username_0: This applied to old GCC/Clang versions with poor C++11 support. Since all GCC compatibility flags will be removed together with removing GCC 4.7 support (https://github.com/username_0/magnum/issues/274), none of this applies anymore. There is another sort of potential issue with the MSVC frontend and Clang backend, but that's hopefully better detectable than this (and doesn't have issues with an incompatible standard lib).
LogoFX/logofx-client-testing-integration-nunit
403523572
Title: Release 2.0.0 Question: username_0: - [x] Update assemblies versions - [x] Publish pre-release nuget packages and update internal nuget dependencies - [ ] Make sure the pre-release nuget packages don't break anything - [ ] Create release at GitHub - [ ] Publish release nuget packages<issue_closed> Status: Issue closed
khellang/Middleware
718371385
Title: Configurable behaviour for Cache-Control, Expires, Pragma headers
Question: username_0: Please consider adding support for overriding the default behaviour of adding Cache-Control, Expires and Pragma headers. This could easily be done via an Action<IHeaderDictionary> property on ProblemDetailsOptions, similar to other extensibility points. My motivation for this request is admittedly needing to adhere to my own company's HTTP header standards and having to add an OnBeforeWriteDetails workaround to suit them, but also:
1. Pragma is deprecated, so usage of it should not be hard-coded.
2. Expires is unnecessary with Cache-Control: no-store, which implies max-age=0.
Happy to submit a PR for this if you're willing to accept, thanks.
Answers:
username_1: Hello @username_0! 👋 Thanks for filing this issue. Those headers were all added for a reason 😉 The caching headers are based off [the ASP.NET Core exception handler middleware](https://github.com/dotnet/aspnetcore/blob/36f6242e6f84208bc1c9d2c4cac94cbb134196df/src/Middleware/Diagnostics/src/ExceptionHandler/ExceptionHandlerMiddleware.cs#L176-L178) and [this StackOverflow answer](https://stackoverflow.com/q/49547). It's basically trying to be as compatible as possible. Before adding yet another knob to the options API, I would like to know a bit more about why one would care about these headers. Is it because of the byte count on the wire? Security? Especially since there are existing workarounds (either using an existing ProblemDetails hook or having a custom MW that strips these headers, which would cover other components using them as well) and this is the first time anyone's cared about it 😅
Status: Issue closed
username_1: Hi @username_0! 👋🏻 I just pushed a new version, v5.2.0, to NuGet with this feature if you're interested in testing it out 😄 It can be configured like this:
```csharp
services.AddProblemDetails(x =>
{
    x.AppendCacheHeaders = (ctx, headers) =>
    {
        // TODO: Append headers here...
    };
});
```
cockroachdb/docs
377405128
Title: Better support for schema changes in transactions Question: username_0: Background: https://airtable.com/tblD3oZPLJgGhCmch/viw5Jsp2TrY3it9Yr/reci8v1XFZBL8aVU8 PM: @username_2 Eng: @username_3 Answers: username_1: @username_3 I tried running all of the [Examples of statements that fail due to the no schema changes within txns limitation](https://www.cockroachlabs.com/docs/v19.1/online-schema-changes.html#examples-of-statements-that-fail) and they all still fail in CRDB version `v19.1.0-beta.20190304`. Therefore it seems like there is no additional documentation work needed here. Do you agree? username_2: @username_3 why do these still fail? username_3: Yes those still fail and for good reasons. We don't expect to fix them anytime soon. In fact we can't fix them because they involve updating the schema cache and the schema store in the same txn. username_1: @username_3 and @username_2 based on: - The conversation in this issue - Re-reading the current [Online Schema Changes](XXX) docs and re-running the various SQL statements described there with `CockroachDB CCL v19.1.0-beta.20190318` and getting the same results - Re-reading and testing SQL from the CockroachDB issue linked from the AirTable entry (https://github.com/cockroachdb/cockroach/issues/24919) I don't think there is additional documentation work needed here, and this issue can be closed. Do you agree with that assessment? username_2: I agree Status: Issue closed
DarthSim/overmind
383153922
Title: .overmind.rc
Question: username_0: As an overmind user
I want to specify a single overmind configuration
To remove special scripts that start overmind with `--formation` etc.

I imagine the `procfile` and the `rcfile` as a manifest tuple that represents the desired startup state.
Answers:
username_1: If I understood you correctly, `.overmind.env` is what you need. Please take a look at https://github.com/username_1/overmind#overmind-environment
username_1: Closing due to no activity here
Status: Issue closed
tokenly/swapbot
95839837
Title: SEO Friendly Bot Names
Question: username_0: Allow users to create a URL for their bot based on a WordPress-style slug. This would let Swapbot URLs take the form of:
http://swapbot.tokenly.com/adam/adam-b-levines-ltbdisplay-swapbot
Should we enforce the username in the URL? Or can we even drop it down to:
http://swapbot.tokenly.com/adam-b-levines-ltbdisplay-swapbot
Each URL would need to be unique across all Swapbots.
Answers:
username_0: I think we should keep the username but still require the URL slug to be unique across all swapbots. So there can't be a /adam/tokenly-for-btc-bot and a /devon/tokenly-for-btc-bot. There will be only one or the other.
username_1: I think the /user/bot naming convention makes sense
Status: Issue closed
SharpenedMinecraft/SM2
394826118
Title: Add Packet Play Clientbound (0x0C) Question: username_0: _This issue has been created automatically_ Add Packet Play Clientbound Id: 0x0C From Client: False From Server: True Instructions required to write: ``` write uuid a write enum b switch TYPE jl$1.a[this.b.ordinal()] ```
knative/eventing
591483670
Title: pkg/utils has redundancies with knative/pkg Question: username_0: In particular the handling of cluster domains is redundant with: https://github.com/knative/pkg/blob/master/network/domain.go /kind good-first-issue Answers: username_1: Can I get assigned here please? username_2: /assign username_3: Is there anyone still dealing with this problem? If not, I could help. username_2: /unassign @username_1 /assign @username_3 Thanks @username_3 for taking a look at this one!
metallb/metallb
1114845917
Title: E2E test flake: metric metallb_speaker_announced not found Question: username_0: The following test is failing in CI from time to time: ``` 2022-01-23T08:02:49.0120353Z • Failure [124.416 seconds] 2022-01-23T08:02:49.0120967Z L2 2022-01-23T08:02:49.0122088Z /home/runner/work/metallb/metallb/e2etest/l2tests/l2.go:56 2022-01-23T08:02:49.0122435Z metrics 2022-01-23T08:02:49.0122912Z /home/runner/work/metallb/metallb/e2etest/l2tests/l2.go:330 2022-01-23T08:02:49.0123326Z should be exposed by the controller 2022-01-23T08:02:49.0123898Z /home/runner/work/metallb/metallb/e2etest/l2tests/l2.go:349 2022-01-23T08:02:49.0124422Z IPV4 - Checking service [It] 2022-01-23T08:02:49.0125044Z /home/runner/work/metallb/metallb/e2etest/l2tests/l2.go:468 2022-01-23T08:02:49.0125251Z 2022-01-23T08:02:49.0125810Z Jan 23 08:02:48.382: Timed out after 120.001s. 2022-01-23T08:02:49.0126110Z Expected 2022-01-23T08:02:49.0126603Z <*errors.errorString | 0xc00062d980>: { 2022-01-23T08:02:49.0127180Z s: "metric metallb_speaker_announced not found", 2022-01-23T08:02:49.0127475Z } 2022-01-23T08:02:49.0127791Z to be nil 2022-01-23T08:02:49.0127934Z 2022-01-23T08:02:49.0128355Z /home/runner/work/metallb/metallb/e2etest/l2tests/l2.go:456 ``` Answers: username_0: I'll work on this. username_1: thanks!
googlevr/cardboard
608985038
Title: Create layer compositor for cardboard
Question: username_0: The ability to render layers directly (omitting an intermediate render texture) is essential for VR on Android devices with limited bandwidth and power. For example, Oculus built a compositor for the most common layer shapes and supports its usage on the Oculus Go. Also, as far as I can tell, the gvr library had support for 2D rectangular layers, but is missing more complex shapes like 360° video.
I would like to propose adding a compositor to the cardboard lib that handles as many layer shapes as possible. While VDDC is not the most performant solution for rendering 2D UI layers etc., it would be really easy to adopt (since it is already open source for Cardboard) and automatically supports more complex shapes like 360° video. In a next step you could write special shaders for common surface shapes like 2D quads or equirectangular surfaces. For 2D quads it should be fairly easy ([see](https://github.com/googlevr/cardboard/issues/30)) but I haven't had time to figure out the math yet.
What do you think? I'd love to contribute. An example renderer for 360° video can be found here: [code](https://github.com/username_0/RenderingX/tree/master/RenderingXExample/src/main/java/constantin/renderingx/example/stereo/video360degree)
Answers:
username_1: Thanks for the feature request. We will add this to our backlog. However, if you would like to add these changes, please read CONTRIBUTING.md and follow the process. Patches are welcome. The right place for adding this feature would be within the CardboardDistortionRenderer module, where a variant of _renderEyeToDisplay would be needed that takes in a list of layer descriptions, each of them containing type, geometry and a texture. We are not familiar with VDDC and are uncertain if it makes sense to add it as a dependency.
username_0: Hello username_1,
Let me first explain what dependencies / modifications would be needed for adding V.D.D.C to Cardboard:
1. The cardboard undistortion already distorts vertices instead of texture coordinates. However, its lens distortion model (respectively the ViewportParams) is calculated for a full-screen OpenGL viewport, in contrast to V.D.D.C, which renders the left and right eye in 2 different draw calls with a viewport of half the screen. Therefore, I had to make some changes to [lens_distortion](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/DistortionCorrection/LensDistortion/MLensDistortion.h). Most notably, adding a new struct for calculating the undistortion `ViewportParams` in HSNDC (half-screen normalized device coordinates).
2. V.D.D.C needs a `PolynomialRadialInverse` to calculate the inverse radial distortion more efficiently. I have placed all this code in its own folder [PolynomialRadialDistortion](https://github.com/username_0/RenderingX/tree/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/DistortionCorrection/PolynomialRadialDistortion). It adds a [PolynomialRadialInverse](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/DistortionCorrection/PolynomialRadialDistortion/PolynomialRadialInverse.h) together with the math for calculating it.
(Note: the old Cardboard Java library had the appropriate methods for calculating a PolynomialRadialInverse; I mostly only translated Google Java code to C++ there.)
3. For rendering textured geometry with V.D.D.C I have a simple OpenGL shader that can sample from both a 'normal' and an 'external' texture and optionally applies V.D.D.C to the vertices: [GLProgramTexture](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/GLPrograms/GLProgramTexture.h). You can find more details / helpers for V.D.D.C in OpenGL shaders in this file: [VDDC.h](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/DistortionCorrection/VDDC.h)
Additionally, these files come with some header-only dependencies, most notably 'glm' for matrices, and some helpers: [GLHelper](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/GLHelper/GLHelper.hpp) [GLBuffer](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/GLHelper/GLBuffer.hpp). Also, they are compiled using C++17 (I think Cardboard still uses C++11).
username_0: Benefits of adding V.D.D.C:
As already mentioned above, using V.D.D.C we can add all layer types that are, for example, already supported on Oculus: [unity-ovroverlay](https://developer.oculus.com/documentation/unity/unity-ovroverlay/)
Later, depending on their adoption (for example, I expect a high demand for the **Equirect** layer, aka the 360° video layer), we can add customized shaders for these specific layer types. I already experimented with creating a [VrCompositorRenderer.h](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXCore/src/main/cpp/DistortionCorrection/VrCompositorRenderer.h) that holds some `VRLayer`s and draws all of them using V.D.D.C. I also created an example that contains both a 360° video layer and a 2D UI layer: [Renderer360Video](https://github.com/username_0/RenderingX/blob/33c9e9f95b85e580be4c0290848a95b8c7125700/RenderingXExample/src/main/cpp/stereo/video360Degree/Renderer360Video.cpp)
username_1: Sorry for the delay here. Your proposed changes seem reasonable and valuable for the Cardboard SDK and we would like to take this to the PR phase. Taking into account the magnitude of the required changes, another good option would be to propose a design first before moving forward with the required changes. Please read CONTRIBUTING.md and follow the process. Thanks for your proposal!
spaceconcordia/space-netman
76198891
Title: Error when running mock satellite interaction
Question: username_0: Run the python ground commander 'mock_sat' interaction, and run a gettime command:
```
In a separate terminal window, run the following command
tail -f /home/logs/NETMAN20150514.log /home/logs/COMMANDER20150514.log /home/logs/HE10020150514.log /home/logs/GROUND_NETMAN20150514.log
GO >> 02:13:18 >> gt
GO >> 02:13:22 >>
GO >> 02:13:25 >> confirm
GO >> 02:13:36 >> gnd: src/gnd_main.c:161: void loop_until_session_closed(netman_t*): Assertion `0' failed.
NOGO >> 02:14:20 >> gnd: src/gnd_main.c:161: void loop_until_session_closed(netman_t*): Assertion `0' failed.
```
TechnionYP5779/team4
378279152
Title: use tdd to define a class that defines a list of pairs of real numbers
Question: username_0: This list could be used, e.g., for representing a set of data points to draw in an xy graph, e.g., x is time and y is weight. The series is the list of data points.
- [ ] function record(x,y)
- [ ] iteration over pairs sorted by
- [ ] statistics, such as count of data points.
- [ ] implement linear regression https://en.wikipedia.org/wiki/Simple_linear_regression
Answers:
username_1: duplicate. #76
Status: Issue closed
cashapp/sqldelight
579402488
Title: Can't migration test multiple databases in the same gradle module Question: username_0: If a gradle module has more than one database, there doesn't appear to be a way to support migration testing for each database. The documentation says to place the .db migration test files under src/main/sqldelight -- rather than src/main/sqldelight/<my_database_package> -- so it looks like only one database is supported. https://cashapp.github.io/sqldelight/migrations/ I checked with some Square folks and they said this should probably be supported and to file an issue here. Status: Issue closed Answers: username_1: You can accomplish this by using different sources for the databases. I've added a test case to showcase the behavior: https://github.com/cashapp/sqldelight/pull/1730
RocketChat/Rocket.Chat.Livechat
493331670
Title: Latest version of Livechat Widget does not pop up in the web page
Question: username_0: ### Description:
We have an infrastructure based on multiple Rocket.Chat instances running in containers, installed outside the root (/). After installing update v1.3.0, the new Livechat widget does not pop up on the sites.
E.g.:
domain.com.br/rocketchat1
domain.com.br/rocketchat2
### Steps to reproduce:
1. Set up the Rocket.Chat config on Docker
2. Set up the latest version of Rocket.Chat Livechat on your website
3. Go to the web page with the embedded Livechat widget
### Expected behavior:
The Livechat widget pops up in the lower right corner of the web page.
### Actual behavior:
The widget does not show anywhere on the site.
### Server Setup Information:
- Version of Rocket.Chat Server: 1.3.0
- Operating System: Debian 9 (Stretch)
- Deployment Method: Docker
- Number of Running Instances: 3
- MongoDB Version: 4.0
![image](https://user-images.githubusercontent.com/38132954/63458163-a2c3fd00-c428-11e9-9545-5cec3c93f44b.png)
This is what my console shows when the configurations are applied.
Answers:
username_1: I upgraded to 2.0 and did some tests. Maybe this will help: when I use this code for the widget:
`<script type="text/javascript"> (function(w, d, s, u) { w.RocketChat = function(c) { w.RocketChat._.push(c) }; w.RocketChat._ = []; w.RocketChat.url = u; var h = d.getElementsByTagName(s)[0], j = d.createElement(s); j.async = true; j.src = '**https**://domain/livechat/rocketchat-livechat.min.js?_=201903270000'; h.parentNode.insertBefore(j, h); })(window, document, 'script', '**https**://domain/livechat'); </script>`
Livechat shows in Firefox and in Chrome in incognito mode.
When I am using this code:
`<script type="text/javascript"> (function(w, d, s, u) { w.RocketChat = function(c) { w.RocketChat._.push(c) }; w.RocketChat._ = []; w.RocketChat.url = u; var h = d.getElementsByTagName(s)[0], j = d.createElement(s); j.async = true; j.src = '**http**://domain:**3000**/livechat/rocketchat-livechat.min.js?_=201903270000'; h.parentNode.insertBefore(j, h); })(window, document, 'script', '**http**://domain**:3000**/livechat'); </script>`
It works in Firefox and in Chrome in both modes (normal and incognito).
RasaHQ/rasa
593412848
Title: Rasa domain model
Question: username_0: The ultimate goal is to have a single [Domain Model](https://en.wikipedia.org/wiki/Domain_model) among all repos/docs at Rasa. This could be done in steps:
1. Analyse the current situation and come up with the Domain model
2. Split the task and create separate issues for all repos (Rasa Open Source, SDK, Rasa X etc)
That will also include revisiting endpoint names.
Answers:
username_1: I think there was already an issue from @username_2 for this one, can we link / replace that one?
username_2: That's the issue I created a while back: https://github.com/RasaHQ/roadmap/issues/649 . I'll add the relevant parts to this issue and close it
Status: Issue closed
OpenLiberty/ci.maven
511529006
Title: Not installing configured version of WLP Question: username_0: I'm using the 3.1 plugin and am running into an issue where I attempt to download a nightly build of OpenLiberty to use, but the Liberty Maven Plugin ends up installing the latest GA version of OpenLiberty (192.168.127.12). I can see in the maven output that it does download the nightly build, but the version of Liberty installed is 192.168.127.12. Here is my pom.xml file: https://github.com/OpenLiberty/sample-mp-graphql/blob/68e699f625f4ec77dd17dd1f21581c00fc10a7c4/pom.xml Here is the mvn output that I received: ``` $ mvn clean install liberty:run [INFO] Scanning for projects... [INFO] [INFO] -------------------< io.openliberty:mpGraphQLSample >------------------- [INFO] Building mpGraphQLSample 1.0-SNAPSHOT [INFO] --------------------------------[ war ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ mpGraphQLSample --- [INFO] Deleting /Users/andymc/dev/sample-mp-graphql/target [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ mpGraphQLSample --- [WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent! [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory /Users/andymc/dev/sample-mp-graphql/src/main/resources [INFO] Copying 1 resource [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ mpGraphQLSample --- [INFO] Changes detected - recompiling the module! [WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent! [INFO] Compiling 3 source files to /Users/andymc/dev/sample-mp-graphql/target/classes [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ mpGraphQLSample --- [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory /Users/andymc/dev/sample-mp-graphql/src/test/resources [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ mpGraphQLSample --- [INFO] No sources to compile [INFO] [INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ mpGraphQLSample --- [INFO] No tests to run. [INFO] [INFO] --- maven-war-plugin:2.2:war (default-war) @ mpGraphQLSample --- WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by com.thoughtworks.xstream.core.util.Fields (file:/Users/andymc/.m2/repository/com/thoughtworks/xstream/xstream/1.3.1/xstream-1.3.1.jar) to field java.util.Properties.defaults WARNING: Please consider reporting this to the maintainers of com.thoughtworks.xstream.core.util.Fields WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release [INFO] Packaging webapp [INFO] Assembling webapp [mpGraphQLSample] in [/Users/andymc/dev/sample-mp-graphql/target/mpGraphQLSample] [INFO] Processing war project [INFO] Copying webapp resources [/Users/andymc/dev/sample-mp-graphql/src/main/webapp] [INFO] Webapp assembled in [22 msecs] [INFO] Building war: /Users/andymc/dev/sample-mp-graphql/target/mpGraphQLSample.war [INFO] [INFO] --- liberty-maven-plugin:3.1:install-server (install-server) @ mpGraphQLSample --- [INFO] CWWKM2102I: Using installDirectory : /Users/andymc/dev/sample-mp-graphql/target/liberty/wlp. 
[INFO] CWWKM2102I: Using serverName : mpGraphQLSample. [INFO] CWWKM2102I: Using serverDirectory : /Users/andymc/dev/sample-mp-graphql/target/liberty/wlp/usr/servers/mpGraphQLSample. [INFO] CWWKM2185I: The liberty-maven-plugin configuration parameter "appsDirectory" value defaults to "dropins". [INFO] Getting: https://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/nightly/2019-10-23_1229/openliberty-all-19.0.0.11-cl191120191023-1229.zip [INFO] To: /Users/andymc/.m2/repository/wlp-cache/openliberty-all-19.0.0.11-cl191120191023-1229.zip [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ mpGraphQLSample --- [Truncated] [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException ``` Note that it downloaded `https://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/nightly/2019-10-23_1229/openliberty-all-19.0.0.11-cl191120191023-1229.zip` But here is the output of `./productInfo version`: ``` $ /Users/andymc/dev/sample-mp-graphql/target/liberty/wlp/bin/productInfo version Product name: Open Liberty Product version: 192.168.127.12 Product edition: Open ``` If you want to reproduce the issue, you can clone and attempt to run my sample project at: https://github.com/OpenLiberty/sample-mp-graphql Answers: username_0: Update: When I changed this stanza (under the liberty-maven-plugin plugin element): ``` <executions> <execution> <id>install-server</id> <phase>pre-integration-test</phase> <goals> <goal>install-server</goal> </goals> <configuration> <install> <runtimeUrl>https://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/nightly/2019-10-23_1229/openliberty-all-19.0.0.11-cl191120191023-1229.zip</runtimeUrl> </install> </configuration> </execution> </executions> ``` to: ``` <configuration> <install> <runtimeUrl>https://public.dhe.ibm.com/ibmdl/export/pub/software/openliberty/runtime/nightly/2019-10-23_1229/openliberty-all-19.0.0.11-cl191120191023-1229.zip</runtimeUrl> </install> ... </configuration> ``` then everything worked as expected - the right version of Liberty was downloaded and executed. If you want to close this issue that is fine, but it might also be worth updating the docs at: https://github.com/OpenLiberty/ci.maven/blob/master/docs/install-server.md#install-server I based my original pom on example 3 (where install->runtimeUrl was under the install-server execution). Thanks for the help on this! username_1: @username_0 if you want your configuration to apply to all goals for a plugin you want to add the `<configuration>` at the plugin level, not just an execution level. There's a reference [here](https://maven.apache.org/guides/mini/guide-default-execution-ids.html) if you wanted to read more, but that might be all you need to know. When you're running the `liberty:run` goal your execution-level config isn't going to get picked up. So you're picking up the default behavior of the run goal, which isn't defined in POM but is actually coded in the Mojo Java implementation. So it's adding the config at the plugin level rather than removing the `install-server` goal which is making the difference for you. Though you probably don't need that goal anyway since the direction has been to do a lot more in the `run` goal. 
I don't think that doc is wrong either because that's more describing a case where you're binding goals to phases in the lifecycle for your build, not just doing `liberty:run` from the command line. That all said, I'm a bit surprised the `mvn ... install liberty:run` would result in the `run` goal just installing (v 192.168.127.12) right over your installation from `install-server` at the different OL version (172.16.58.3). Maybe someone else could take a look and make sure that part of it looks right. But in any case, you probably just want to move the config.
CoolProp/CoolProp
192881387
Title: Wrong Enthalpy for r1233zd
Question: username_0: I'm a new user of CoolProp inside Mathcad. While learning and testing different fluids, the values for some fluids seem correct and some do not. I searched old issues and found that SES36 gave a wrong enthalpy. For me it still does (around 15% difference). A new fluid, r1233zd, also differs.
![skarmklipp](https://cloud.githubusercontent.com/assets/24297794/20799832/b686b7f0-b7e3-11e6-8e25-36bb4145b0cd.PNG)
**test** is in REFPROP and graphics from Honeywell = 262,35 kJ/kg and **test2** = 439,95 kJ/kg
I also tested r134a without remarks. Are there any selected properties that are harder to calculate? Or do the values differ by around 10-15%? What accuracy can I expect? It's good to know when we design processes.
Status: Issue closed
Answers:
username_1: See question #1 in the FAQ: https://github.com/CoolProp/CoolProp/blob/master/FAQ.md
username_0: Thanx. So if I add a constant offset to the output value, then I can use them. Is it the same offset value for all enthalpy calculations or is it variable depending on other parameters?
username_1: It's a constant offset. Basically, never compare the enthalpy between two libraries at a given T,p, always compare the difference in enthalpy between two state points. You can also change the reference state: http://www.coolprop.org/dev/coolprop/HighLevelAPI.html#reference-states
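A short sketch of the comparison username_1 recommends, using CoolProp's Python bindings (the fluid string and state points here are illustrative assumptions):

```python
from CoolProp.CoolProp import PropsSI

fluid = "R1233zd(E)"  # assumed CoolProp name for r1233zd

# Absolute enthalpies depend on each library's reference state, so only
# the difference between two state points is comparable across libraries.
h1 = PropsSI("H", "T", 300.0, "P", 101325.0, fluid)  # J/kg at state 1
h2 = PropsSI("H", "T", 350.0, "P", 101325.0, fluid)  # J/kg at state 2
print("dh =", (h2 - h1) / 1000.0, "kJ/kg")  # compare this delta with REFPROP
```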
cypress-io/cypress
765346975
Title: [spam advertisement, unrelated to the repository]
Question: username_0: [Spam advertisement in Chinese for commercial "tea" services, padded with unrelated copied text; no repository-related content.]<issue_closed>
Status: Issue closed
mpromonet/webrtc-streamer
317071949
Title: error while loading shared libraries: libX11.so.6: cannot open shared object file
Question: username_0: Hi, I tried to run the latest release on Ubuntu Xenial, but this error occurred:
./webrtc-streamer: error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory.
Any hints for fixing it?
Answers:
username_1: Hi Chris,
The latest release uses X11 to capture the desktop; you need to install libgtk-3.
Best Regards,
Michel.
username_0: Hi, Thank you! It solves the issue. Closing the issue now.
Best regards,
Chris
Status: Issue closed
ultiledger/ULT-FAQ
502897858
Title: When the core enterprise has no business with individuals, how can an individual settle and obtain financing with the core enterprise through chain notes (链单)? Can chain notes be sold to the secondary market through the mainnet?
Question: username_0: Yes, it is possible. With a blockchain, the credit of the core enterprise can be put into circulation. The consensus participants on the blockchain guarantee that the credit is reliable at its source, and the blockchain guarantees that circulation is reliable. A supplier holding a chain note can then use it for financing. The financing counterparty can be a next-tier supplier, a factoring company, or an individual (from a supplier or the core enterprise, or even an ordinary person). Going further, the chain notes of multiple companies can be packaged into an ABS and sold to the secondary market. The mainnet is of course one of the most convenient trading platforms, and this is entirely feasible technically. Another option is to sell directly through a government platform.
Status: Issue closed
react-bootstrap/react-bootstrap
103137585
Title: Process is not defined
Question: username_0: After updating to 0.25, the whole application stops working.
```Uncaught ReferenceError: process is not defined```
I don't have sourcemaps, so I can't provide a more detailed stacktrace. It fails in Accordion (maybe in imported files?)
__domUtils: createDeprecationWrapper(_utilsDomUtils2['default'], 'utils/domUtils', 'npm install dom-helpers')__
```
exports.FormControls = _FormControls;

var utils = {
  childrenValueInputValidation: _utilsChildrenValueInputValidation2['default'],
  createChainedFunction: _utilsCreateChainedFunction2['default'],
  ValidComponentChildren: _utilsValidComponentChildren2['default'],
  CustomPropTypes: _utilsCustomPropTypes2['default'],

  domUtils: createDeprecationWrapper(_utilsDomUtils2['default'], 'utils/domUtils', 'npm install dom-helpers')
};
```
Here is createDeprecationWrapper:
```
function createDeprecationWrapper(obj, deprecated, instead, link) {
  var wrapper = {};

  if (process.env.NODE_ENV === 'production') {
    return obj;
  }
```
Answers:
username_1: How are you building or bundling React-Bootstrap? The built bundles don't have this. If you're building it yourself you'll need either envify or `DefinePlugin`.
username_0: I have no idea what you are speaking about. I use browserify, npm, react. When trying to run the server, everything is browserified without any errors, but when I open it, the error occurs in the console.
username_0: I am using gulp as the builder
username_2: when using browserify you need to use [envify](https://www.npmjs.com/package/envify) when bundling react from NPM, to provide values for `process` and `NODE_ENV`. This isn't specific to react-bootstrap, but the entire react ecosystem.
Status: Issue closed
vshaxe/hashlink-debugger
313002488
Title: HXML file field should be replaced by vshaxe API
Question: username_0: Currently an HXML file is required by the launch config for classpaths; this should be replaced with a vshaxe API to retrieve the classpaths. See also vshaxe/vshaxe#221.
The reasons for this are that there's not always an HXML file:
- compiler arguments might be in tasks.json only
- it might be a Lime project or some other third-party build tool without hxml
______________
Moved from https://github.com/ncannasse/hashlink-debugger/issues/4<issue_closed>
Status: Issue closed
jmespath/jmespath.site
423678160
Title: How to find all by key?
Question: username_0: Could you please help me **find all** `condition` nodes?
![image](https://user-images.githubusercontent.com/3842386/54748513-881bfc80-4bda-11e9-80b3-ca2e0da49888.png)
JSON example:
```json
{
  "WHERE": {
    "AND": [
      {
        "condition": {
          "AND": [
            {
              "condition": {
                "name": "cond1"
              }
            },
            {
              "condition": {
                "name": "cond2"
              }
            }
          ],
          "name": "cond-group"
        }
      },
      {
        "condition": {
          "name": "cond3"
        }
      },
      {
        "condition": {
          "name": "cond4"
        }
      }
    ]
  }
}
```
I would like to find all `condition` nodes no matter where they are. The query should probably look like `*[?condition]` or `values(*)[?condition]`, something like that.
Answers:
username_0: This is probably related to https://github.com/jmespath/jmespath.py/issues/110
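As far as the current spec goes, JMESPath has no recursive-descent operator (the linked jmespath.py issue tracks that limitation), so a plain-Python fallback is one way to collect the nodes; this sketch is ours, not part of JMESPath:

```python
import json

def find_all(obj, key):
    """Recursively collect every value stored under `key`, at any depth."""
    found = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                found.append(v)
            found.extend(find_all(v, key))
    elif isinstance(obj, list):
        for item in obj:
            found.extend(find_all(item, key))
    return found

data = json.loads(document)           # `document` is the JSON string above
conditions = find_all(data, "condition")
print(len(conditions))                # 5 for the example document
```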
w3c/ash-nazg
1044845345
Title: Rename WICG/ScrollToTextFragment to WICG/scroll-to-text-fragment Question: username_0: The repo was renamed long ago to match WICG naming style but it looks like the IPR bot is still looking for the old name at https://labs.w3.org/repo-manager/repos which is causing failed checks on pull requests. Would appreciate guidance/help, thanks! Answers: username_1: Hi, The repository manager doesn't support repository renaming yet. I've updated the name manually so the problem should be fixed for your repository. Are you able to redeliver the webhook payload for the 2 pending PRs https://github.com/WICG/scroll-to-text-fragment/pull/175 and https://github.com/WICG/scroll-to-text-fragment/pull/176? If you aren't sure how to do this, I'm happy to help but I'll need admin rights to the repository to get access to the webhook. Status: Issue closed username_0: I think I figured it out. At least, I got the IPR success check on those PRs so I think it worked? Thanks for the help!
mrdoob/three.js
841825667
Title: deviceOrientationControls: add warning about the need of https on documentation or code
Question: username_0: **Is your feature request related to a problem? Please describe.**
Using `deviceOrientationControls` with http calls doesn't work, and it is not explained anywhere (or I didn't see it anywhere). I wasted a lot of time that could have been avoided if there were a warning on the documentation page or in the code saying that it is a must to use the controls over https.
**Describe the solution you'd like**
Add a warning to the documentation explaining the need for the https protocol.
**Describe alternatives you've considered**
Add a warning in the code also.
Answers:
username_1: SSL is required for other modern Web APIs like WebXR, too. However, I do not vote to enhance the documentation in that regard since you could then argue to add all Web API specific requirements to the documentation. This is clearly out of scope.
Status: Issue closed
username_2: @username_1 We can add this to the code though:
```js
if ( window.isSecureContext === false ) {

	console.error( 'THREE.DeviceOrientationControls: DeviceOrientationEvent is only available in secure contexts (https)' );

}
```
[We do similar things for WebXR](https://github.com/username_2/three.js/blob/dev/examples/jsm/webxr/VRButton.js#L143-L148).
username_1: I was not aware of this check in `VRButton`. Okay, let me file a PR with this code^^.
Status: Issue closed
astaxie/beego
283928875
Title: How can I make Beego my master server and use TCP?
Question: username_0: If I want to make Beego my master server and need TCP to control my slave servers, how can I do that? In other words: if I want to use Beego as the master server, managing the individual slave servers and doing load balancing, then the Beego master needs to verify data and stay connected with each slave server, and this has to be done over TCP. What should I do?<issue_closed>
Status: Issue closed
hassio-addons/addon-grafana
564736653
Title: Unable to activate direct access to addon
Question: username_0: # Problem/Motivation
I just tried to avoid the problem with the HTTP 401 response while accessing graphs via ingress.
## Expected behavior
The addon is running and I am able to access it using ingress and the previously specified port (3000) as well.
## Actual behavior
The addon is not started at all. And I see the following error in the logs:
20-02-13 14:28:48 ERROR (SyncWorker_4) [hassio.docker] Can't start addon_a0d7b954_grafana: 500 Server Error: Internal Server Error ("driver failed programming external connectivity on endpoint addon_a0d7b954_grafana (aff17550576f96d77082849df45add5c8e09e9cfa5323157167409a5578f2bb0): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 3000 -j DNAT --to-destination 172.30.33.1:80 ! -i hassio: iptables: No chain/target/match by that name. (exit status 1))")
## Steps to reproduce
Just set a custom port and restart the addon.
Answers:
username_1: What Operating System do you run?
username_0: sergiy@home:~$ uname -a
Linux home 5.4.0-2-amd64 #1 SMP Debian 5.4.8-1 (2020-01-05) x86_64 GNU/Linux
sergiy@home:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux bullseye/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
username_1: The issue is Docker failing to specify the port; it isn't an issue with the addon itself but with Docker and iptables.
username_2: That sounds really bad; do you have custom firewall rules or something that manages your iptables? If so, that would collide with the Supervisor.
username_0: Well, yes, I have custom rules, since the server is a gateway. And I need to protect my external interface (enp5s0). Here they are:
# Generated by xtables-save v1.8.2 on Fri Jan 24 14:52:27 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o enp5s0 -j MASQUERADE
COMMIT
# Completed on Fri Jan 24 14:52:27 2020
# Generated by xtables-save v1.8.2 on Fri Jan 24 14:52:27 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 3000 -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 8112 -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 8123 -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 9000 -j ACCEPT
-A INPUT -i enp5s0 -p tcp -m tcp --dport 50000:50100 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i enp5s0 -j DROP
COMMIT
# Completed on Fri Jan 24 14:52:27 2020
username_0: Thank you for the swift support. I will try to figure out what that might be.
username_2: Well, I understand, but it seems like you've dropped the hassio chain in the process...
username_0: It seems so..... Thank You!
Status: Issue closed
barnoldsrs/MatchMarch21
858110577
Title: Timed mode: refresh screen so time value changes every second
Question: username_0: We would like the screen to be refreshed every second so users can see the time elapsed (or remaining). The current implementation doesn't update once the scene is live.
Answers:
username_0: This is related to Issue #28, and will be closed when that issue is closed.
Status: Issue closed
username_0: A commit on May 9, 2021 fixed this.
tensorflow/tensorflow
507800903
Title: Specifying output_shape is not working in tf.keras Lambda Layer
Question: username_0: **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7.3
- CUDA/cuDNN version: 10.0/7

**Describe the current behavior**
When creating a keras Model using a lambda function with a specified output shape, the shape is not assigned to the resulting tensor. dense_net from the example below:
`<tf.Tensor 'lambda_6/Identity:0' shape=(None, None, None) dtype=float32>`
If used before another layer like Dense, this error appears:
`ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.`

**Describe the expected behavior**
dense_net should have shape information:
`<tf.Tensor 'lambda_6/Identity:0' shape=(None, 10, 5) dtype=float32>`

**Code to reproduce the issue**
```
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.backend import to_dense

test_input = Input((10, 5), sparse=True)
dense_net = Lambda(to_dense, output_shape=(None, 10, 5))(test_input)
test_net = Dense(50)(dense_net)
```
Answers:
username_0: Edit: Found a workaround:
```
from tensorflow.keras.layers import Input, Dense, Lambda, Layer, Reshape
from tensorflow.keras.models import Model
from tensorflow.keras.backend import to_dense

test_input = Input((10, 5), sparse=True)
dense_net = Lambda(to_dense)(test_input)
reshape_net = Reshape((10, 5))(dense_net)
test_net = Dense(50)(reshape_net)
```
The bug still remains, though.
username_1: Issue replicating for TF-2.0, please find the [gist](https://colab.sandbox.google.com/gist/username_1/2071cc1a42c3c1991d99da279c98b051/33422.ipynb) for the same. Thanks!
username_0: Trying a custom Layer instead of the Lambda Layer results in the same error:
```
class ToDenseLayer(Layer):
    def __init__(self, out_shape: int):
        super(ToDenseLayer, self).__init__()
        self.out_shape = out_shape

    def call(self, inputs, **kwargs):
        return tf.sparse.to_dense(inputs)

    def compute_output_shape(self, input_shape):
        return None, None, self.out_shape
```
username_2: `output_shape` in the Lambda Layer is used to help Keras do shape inference when in eager execution (or otherwise when shape information is not available), but it does not override TF shape inference on tensors, so it does not affect the tensor.shape attribute of the outputs. To set the shape of a symbolic tensor with partial shape information, you should use the `set_shape` method.
Status: Issue closed
username_3: Received this error in TF 2.4.1. Used a keras Lambda with tf.squeeze and `output_shape` defined, but got the error within the next layer, which is Dense. I checked the internals of the sequential model, and it did not infer the output shape of the Lambda layer (`output_tensor` is `<KerasTensor: shape=<unknown> dtype=float32 (created by layer 'lambda')>`). Now I'm looking for a "clean" solution rather than a workaround. My question: when `output_shape` is defined and shape inference for the output is unsuccessful, can the `output_shape` defined by the user be used?
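A minimal sketch of the `set_shape` workaround username_2 points to, with the shapes taken from the example above (the helper name is ours, not a TensorFlow API):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda

def to_dense_with_shape(x):
    out = tf.sparse.to_dense(x)
    # Restore the static shape that TF shape inference loses;
    # the batch dimension stays dynamic.
    out.set_shape([None, 10, 5])
    return out

test_input = Input((10, 5), sparse=True)
dense_net = Lambda(to_dense_with_shape)(test_input)
test_net = Dense(50)(dense_net)  # now sees a defined last dimension
```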
jrvansuita/PickImage
359842816
Title: PickImage Provider With Android File Provider
Question: username_0: How can I use the PickImage provider together with Android's file provider? I can only use one at a time.
![screenshot_11](https://user-images.githubusercontent.com/20544271/45484129-980add80-b76c-11e8-868b-a23e583e86a8.png)
Answers:
username_0: I want to use both, but that gives an error.
username_1: I'm trying to do the same, did you find a solution?
Status: Issue closed
i18next/i18next-parser
1125897364
Title: dynamically computed namespaces are parsed as the variable name
Question: username_0: ## 🐛 Bug Report
When computing the namespace dynamically (for example, if I want to use the same namespace for all Trans components), i18next-parser uses the namespace variable name as the namespace.
## To Reproduce
```ts
const translationNS = 'my-own-ns'
const {t} = useTranslation(translationNS);

return (
  <div>
    <Trans ns={translationNS}>something</Trans>
    {t('more')}
  </div>
```
## Expected behavior
I expect to have a new json file named `my-own-ns.json` with the following:
```
{
  something: 'something',
  more: 'more'
}
```
## Your Environment
- *runtime version*: node v16
- *i18next version*: 21.3.3
- *os*: Linux
Answers:
username_1: The parser does not run or interpret your code. If you don't use a string literal, it will not work. That said, it should throw a warning and not grab the variable name.
username_1: I double checked and it ignores the namespace if it is a variable, so I'm closing this.
Status: Issue closed
pt07/raw_gps_utils
150504176
Title: Bug: missing #include
Question: username_0: I had to add `#include <vector>` in satellites_obs.h. I cannot push into your repo.
Status: Issue closed
Answers:
username_1: Hi! Sorry, I just saw the issue. I fixed it! Now I'll give you permission to push, if you need it for future things.
bihealth/digestiflow-demux
423608098
Title: Unindexed runs do not produce Undetermined.fastq.gz, but snakemake expects them
Question: username_0: # Bug
Lanes without index reads do not produce Undetermined reads, as all reads belong to the one sample loaded.
```Undetermined_*': No such file or directory```
# Fix
In case of no index reads configured (no B in bases mask), do not expect Undetermined* output files.
Answers:
username_0: @username_1 Fixed in the bcl2fastq2 wrapper, which does not try to copy non-existing `Undetermined*` files anymore. Fixed in the snakemake workflow for flow cells with no index read. Not fixed for flow cells with *perfect* index reads.
username_1: Fixed in current master.
Status: Issue closed
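A sketch of the fix logic described above as a small Python helper, as one might use it in a Snakemake workflow (the function names are hypothetical, not the project's actual API):

```python
def has_index_reads(bases_mask: str) -> bool:
    """No 'B' in the bases mask means no index read was configured."""
    return "B" in bases_mask.upper()

def expected_outputs(sample_fastqs, bases_mask):
    """Only expect Undetermined output when there is an index read."""
    outputs = list(sample_fastqs)
    if has_index_reads(bases_mask):
        outputs.append("Undetermined.fastq.gz")
    return outputs
```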
jlippold/tweakCompatible
317966567
Title: `Hyperion [Members Plus]` working on iOS 11.1.2 Question: username_0: ``` { "packageId": "com.spark.hyperion", "action": "working", "userInfo": { "arch32": false, "packageId": "com.spark.hyperion", "deviceId": "iPhone10,6", "url": "http://cydia.saurik.com/package/com.spark.hyperion/", "iOSVersion": "11.1.2", "packageVersionIndexed": false, "packageName": "Hyperion [Members Plus]", "category": "Tweaks", "repository": "Spark's Beta Repo", "name": "Hyperion [Members Plus]", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "com.spark.hyperion", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.0.7", "shortDescription": "OLED notification tweak", "latest": "0.0.9-3", "author": "Spark", "packageStatus": "Unknown" }, "base64": "<KEY>", "chosenStatus": "working", "notes": "" } ```
scylladb/scylla-migrator
739645075
Title: migrator tasks are failing with "Can't get more results because the continuous query has failed already" Question: username_0: java.util.concurrent.CancellationException: Can't get more results because the continuous query has failed already. Most likely this is because the query was cancelled at com.datastax.dse.driver.internal.core.cql.continuous.ContinuousRequestHandlerBase$NodeResponseCallback.cancelledResultSetFuture(ContinuousRequestHandlerBase.java:1531) at com.datastax.dse.driver.internal.core.cql.continuous.ContinuousRequestHandlerBase$NodeResponseCallback.dequeueOrCreatePending(ContinuousRequestHandlerBase.java:1225) at com.datastax.dse.driver.internal.core.cql.continuous.ContinuousRequestHandlerBase.lambda$fetchNextPage$2(ContinuousRequestHandlerBase.java:314) at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:792) at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2153) at com.datastax.dse.driver.internal.core.cql.continuous.ContinuousRequestHandlerBase.fetchNextPage(ContinuousRequestHandlerBase.java:308) at com.datastax.dse.driver.internal.core.cql.continuous.DefaultContinuousAsyncResultSet.fetchNextPage(DefaultContinuousAsyncResultSet.java:102) at com.datastax.dse.driver.internal.core.cql.continuous.DefaultContinuousResultSet$RowIterator.maybeMoveToNextPage(DefaultContinuousResultSet.java:109) at com.datastax.dse.driver.internal.core.cql.continuous.DefaultContinuousResultSet$RowIterator.computeNext(DefaultContinuousResultSet.java:101) at com.datastax.dse.driver.internal.core.cql.continuous.DefaultContinuousResultSet$RowIterator.computeNext(DefaultContinuousResultSet.java:88) at com.datastax.oss.driver.internal.core.util.CountingIterator.tryToComputeNext(CountingIterator.java:91) at com.datastax.oss.driver.internal.core.util.CountingIterator.hasNext(CountingIterator.java:86) at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12) at com.datastax.spark.connector.writer.GroupingBatchBuilder.hasNext(GroupingBatchBuilder.scala:100) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:30) at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$2.apply(TableWriter.scala:241) at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$2.apply(TableWriter.scala:210) at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:112) at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:111) at 
com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:129) at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111) at com.datastax.spark.connector.writer.TableWriter.writeInternal(TableWriter.scala:210) at com.datastax.spark.connector.writer.TableWriter.insert(TableWriter.scala:188) at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:175) at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:38) at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:38) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:123) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)
desktop/desktop
777886581
Title: Error: read EFAULT
Question: username_0: ### Describe the bug
The app crashes.
### Version & OS
GH Desktop v2.6.1
MacOS Big Sur v11.1
### Steps to reproduce the behavior
It happened several times, but I'm not really able to provide steps to reproduce the behavior. I will post new messages if I find out how. As it happened several times, I thought it was important to report it even if I can't provide these steps.
### Logs
```sh
2021-01-04T07:23:47.865Z - info: [ui] [AppStore.withAuthenticatingUser] account found for repository: DefinitelyTyped - username_0 (has token)
2021-01-04T07:23:48.907Z - info: [ui] [BranchPruner] Last prune took place in 3 hours - skipping
2021-01-04T07:23:50.654Z - info: [ui] Executing fetch: git -c credential.helper= -c protocol.version=2 fetch --progress --prune origin (took 1.791s)
2021-01-04T07:23:52.944Z - error: [main] Error: read EFAULT at Pipe.onStreamRead (internal/stream_base_commons.js:200:27)
2021-01-04T07:23:53.381Z - info: [main] Error report submitted
2021-01-04T07:23:57.658Z - info: [main] opening in browser: https://github.com/desktop/desktop/issues
2021-01-04T07:25:01.145Z - info: [ui] [AppStore] loading 52 repositories from store
2021-01-04T07:25:01.147Z - info: [ui] [AppStore] found account: username_0 (Androz)
2021-01-04T07:25:03.756Z - info: [ui] [BranchPruner] Last prune took place in 3 hours - skipping
2021-01-04T07:25:03.909Z - info: [ui] launching: 2.6.1 (Mac OS 10.16.0)
2021-01-04T07:25:03.910Z - info: [ui] execPath: '/Applications/GitHub Desktop.app/Contents/Frameworks/GitHub Desktop Helper (Renderer).app/Contents/MacOS/GitHub Desktop Helper (Renderer)'
2021-01-04T07:25:04.646Z - info: [ui] Stats reported.
2021-01-04T07:25:46.079Z - info: [main] opening in browser: https://desktop.github.com/release-notes/
2021-01-04T07:28:47.167Z - info: [ui] [AppStore.withAuthenticatingUser] account found for repository: ManageInvite - username_0 (has token)
2021-01-04T07:28:58.463Z - info: [ui] Executing fetch: git -c credential.helper= -c protocol.version=2 fetch --progress --prune origin (took 11.250s)
```
![Capture d’écran 2020-12-06 à 12 55 33](https://user-images.githubusercontent.com/42497995/103511288-0bb5e480-4e67-11eb-829d-b62e99ce1bcc.png)
Answers:
username_1: Hi @username_0, we've seen this come up in #11068 and #11258 as well. Are you by chance using dropbox to sync your repo?
username_0: No, I'm not using Dropbox
username_0: And it happens even if I don't try to commit something
username_2: Hey @username_0. Are you running GitHub Desktop on the new Apple ARM processor (M1) or on a previous generation mac?
username_0: Hello, I'm running GitHub Desktop on the new Apple ARM processor (Mac Mini M1)
username_3: Since we have had a couple of separate reports about this I am going to close this out to consolidate our issues. Please follow along in https://github.com/desktop/desktop/issues/11258 for updates.
Status: Issue closed
jhipster/generator-jhipster
495735677
Title: Old logo used for webpack build notifications Question: username_0: #10131

### **Overview of the issue**
When webpack finishes execution, we display a notification which shows our old hipster logo. We should either use one of our new hipsters or just the bowtie (I would suggest just using the bowtie).

##### **Motivation for or Use Case**
We have replaced all old logos except that one.

##### **Reproduce the error**
Create a default application and execute e.g. `npm run build`.

##### **Related issues**
https://github.com/jhipster/generator-jhipster/issues/8679

##### **Suggest a Fix**
Replace https://github.com/jhipster/generator-jhipster/blob/master/generators/client/templates/angular/webpack/logo-jhipster.png with the new logo.

##### **JHipster Version(s)**
All versions up to the latest master.

- [x] Checking this box is mandatory (this is just to show you read everything)
Answers: username_0: I will provide a PR for this.
Status: Issue closed
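For orientation: the logo file is only referenced from the generated webpack config's build notifier, so swapping the PNG is the whole fix. A hedged sketch of that wiring follows; the option names come from webpack-notifier's documented API, but the exact file and shape may differ between JHipster versions.

```js
// webpack.dev.js (sketch): the post-build notification uses whatever image
// contentImage points at, so replacing the PNG file changes the notification.
const path = require('path');
const WebpackNotifierPlugin = require('webpack-notifier');

module.exports = {
  plugins: [
    new WebpackNotifierPlugin({
      title: 'JHipster',
      contentImage: path.join(__dirname, 'logo-jhipster.png') // file to replace
    })
  ]
};
```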
dpkp/kafka-python
411326738
Title: Consumer drops messages posted to kafka. Question: username_0: My Kafka consumer does not receive all the messages posted to the Kafka topic. If I look at the Kafka logs of the topic, I see more messages than what was received by the consumer.

For some of the messages, the time taken for processing goes beyond `request_timeout`. There were warnings in the logs saying the heartbeat session expired, and:

WARNING:kafka.coordinator.consumer:Auto offset commit failed for group 45e933de-33b1-11e7-a919-92ebcb67fe33: CommitFailedError: Commit cannot be completed since the group has already
            rebalanced and assigned the partitions to another member.
            This means that the time between subsequent calls to poll()
            was longer than the configured max_poll_interval_ms, which
            typically implies that the poll loop is spending too much
            time message processing. You can address this either by
            increasing the rebalance timeout with max_poll_interval_ms,
            or by reducing the maximum size of batches returned in poll()
            with max_poll_records.

This is the log for the heartbeat session expiry:

WARNING:kafka.coordinator:Heartbeat session expired, marking coordinator dead
WARNING:kafka.coordinator:Marking the coordinator dead (node 1) for group 45e933de-33b1-11e7-a919-92ebcb67fe33: Heartbeat session expired.

We are using the iterator interface to retrieve messages, and the kafka-python version is 1.4.4.

Any help is much appreciated.
Answers: username_1: This isn't a problem with `kafka-python`; the problem is that your configuration isn't appropriate for your workload... The error message explains it--try increasing `max_poll_interval_ms` so that you don't get rebalances. If this doesn't make sense, I suggest reading up on how kafka group rebalancing works and then providing a bit more context here...
Status: Issue closed
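A minimal sketch of the tuning suggested above, using kafka-python's documented consumer options. The topic, group id, broker address, and `handle` function are placeholders:

```python
from kafka import KafkaConsumer

# Give each poll loop up to 10 minutes before the group rebalances, and
# fetch smaller batches so slow per-message processing fits the window.
consumer = KafkaConsumer(
    "my-topic",                       # placeholder topic
    group_id="my-group",              # placeholder group id
    bootstrap_servers="localhost:9092",
    max_poll_interval_ms=600000,      # default is 300000 (5 minutes)
    max_poll_records=50,              # default is 500
)

for message in consumer:              # the iterator interface from the report
    handle(message)                   # placeholder for the slow processing step
```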
starkat99/half-rs
304817419
Title: to_f32() visibility Question: username_0: Is there a reason why to_f32() is private?
Answers: username_1: Isn't the implementation of `From<f16> for f32` enough?
username_0: Maybe it's just personal preference, but I think it's more concise. This is also the interface of Rust's num traits (https://rust-num.github.io/num/num_traits/cast/trait.ToPrimitive.html)
Status: Issue closed
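For comparison, the two call styles under discussion, as a small sketch assuming the crate's `f16` type and its `From<f16> for f32` impl. The commented-out method form mirrors the reporter's request; note that num-traits' `ToPrimitive::to_f32` returns `Option<f32>` rather than `f32`:

```rust
use half::f16;

fn main() {
    let h = f16::from_f32(1.5);

    // The conversion the maintainer points to: From<f16> for f32.
    let a: f32 = f32::from(h);
    let b: f32 = h.into(); // same conversion, via Into

    // The method form requested in the issue (private at the time):
    // let c = h.to_f32();

    assert_eq!(a, 1.5);
    assert_eq!(b, 1.5);
}
```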
cerebris/jsonapi-resources
218733037
Title: String resource attributes Question: username_0: I have been hunting down some bugs in an API I've started working on, and I finally realized that every bug stems from one issue: every resource defines its `attributes` as Strings, not Symbols. For my project, for the time being, I can just convert them to Symbols; however, this does seem to me to be the kind of thing that should be brought up.

I have read through a decent portion of the source code, but I haven't yet found where the perfect place to address this issue would be. I can say that, for me, I have had issues with the `fields[{resource-type}]` param and with PATCHing and POSTing because of this issue. In short, wherever request attributes interact with resource attributes, if the resource attributes are Strings, the entire process will fail.

I'm happy to help with this problem, although I know I am posting this Issue without having a clear sense of direction. I apologize for that.

This is a truly wonderful gem; well-built, and amazingly functional. Thank you for all of your hard work!

stephen
Answers: username_1: @username_0 Sorry for the delay in fixing this. I'll cherry-pick #1034 into a new 0.9.x release once it's reviewed and merged.
Status: Issue closed
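To make the failure mode concrete, here is a minimal sketch using jsonapi-resources' `attributes` DSL. The resource and attribute names are invented for illustration:

```ruby
require 'jsonapi-resources'

# Problematic: String attribute names. Per the report above, internal
# lookups that compare against Symbols (e.g. fields[posts]=title) miss.
class PostResource < JSONAPI::Resource
  attributes 'title', 'body'
end

# Safe: Symbol attribute names, the form the documentation uses.
class AuthorResource < JSONAPI::Resource
  attributes :name, :email
end

# The stop-gap the reporter mentions: symbolize an existing String list.
class CommentResource < JSONAPI::Resource
  attributes(*%w[body rating].map(&:to_sym))
end
```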
leggett/simplify
634630101
Title: Bug: Layout Update? Inbox view is now requiring me to scroll horizontally Question: username_0: Looks like something changed with the container sizing? Depending on the size of my window, I'm required to scroll to the right side to see the date/time when the email was sent.

![Sent_Mail_-_username_0_circleci_com_-_CircleCI_Mail](https://user-images.githubusercontent.com/337445/84037303-62f4e480-a96c-11ea-9ef3-f25834413ee5.png)
Answers: username_1: Thanks for the report, @username_0. I'm not seeing this in the inbox, but I am seeing it in label views when I have the reading pane enabled and set to "No Split". I will get a fix in asap.

Also, fwiw, I'm in the middle of building Simplify v2, which is a total re-write. It should make issues like this much less likely to happen.
username_1: Found and fixed the problem. Uploading to the Firefox, Chrome, and Edge stores now.
username_1: Submitted.
Status: Issue closed
geoadmin/mf-geoadmin3
365482660
Title: Permalink: Zoom parameter is overruled when showTooltip is active Question: username_0: MeteoSwiss application

Within an iframe on the MeteoSwiss website [1], the user can switch between layers and stations, triggered by events; the iframe then centers on the station. When the yellow circle is drawn, this triggers an additional function on the MeteoSwiss website (it shows the station below the map).

The call MeteoSwiss uses is
https://map.geo.admin.ch/?lang=de&topic=ech&bgLayer=voidLayer&showTooltip=true&layers=ch.bafu.gefahren-basiskarte,ch.meteoschweiz.messwerte-globalstrahlung-10min&ch.meteoschweiz.messwerte-globalstrahlung-10min=EGO&layers_opacity=0.4,1&E=2642913.00&N=1225540.00&zoom=8

The issue: the map is always zoomed to zoom=8 when the showTooltip parameter is used. When e.g. zoom=4 is entered,
https://map.geo.admin.ch/?lang=de&topic=ech&bgLayer=voidLayer&showTooltip=true&layers=ch.bafu.gefahren-basiskarte,ch.meteoschweiz.messwerte-globalstrahlung-10min&ch.meteoschweiz.messwerte-globalstrahlung-10min=EGO&layers_opacity=0.4,1&E=2642913.00&N=1225540.00&zoom=4
it zooms to level 8, not 4.

Expected result: the zoom parameter is NOT overwritten when the showTooltip parameter is used.

[1] http://cms.test.mchrel.quatico.com/cf#/content/meteoswiss-testpages/de/home/products/mchws-1060-messwertekomponente-v3.html (U/P will go to username_1 and Marc)
Answers: username_1: Relevant code is here: https://github.com/geoadmin/mf-geoadmin3/blob/master/src/components/map/PermalinkFeaturesService.js and here: https://github.com/geoadmin/mf-geoadmin3/blob/master/src/components/map/PreviewFeaturesService.js

I propose the following: when a preview feature is in the permalink (form: `layerID=featureID`) AND a zoom factor is also specified, we apply the specified zoom factor. If no zoom factor is specified, we apply the existing logic (define the zoom based on the feature). A sketch of this follows below.

So we probably need to extend the addBodFeatures function with a zoom factor (which will come from the permalink feature service). I can give this a go for the next deploy.
username_1: PR ready. @danduk82 @procrastinatio can merge if the review is good.
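A rough sketch of that proposal. The function signature and helper names are assumptions based on the comment above, not the actual service code; only the OpenLayers view calls (`setCenter`, `setZoom`, `fit`) are real API:

```js
// Sketch: thread an explicit permalink zoom through to the preview-feature
// handler, and only derive a zoom from the feature when none was given.
function addBodFeatures(map, featuresByLayerId, permalinkZoom) {
  fetchFeatures(featuresByLayerId).then(function (features) { // hypothetical helper
    var extent = extentOfFeatures(features);                  // hypothetical helper
    var view = map.getView();
    if (permalinkZoom !== undefined) {
      // zoom=N was present in the URL: center on the feature, keep the zoom.
      view.setCenter(ol.extent.getCenter(extent));
      view.setZoom(permalinkZoom);
    } else {
      // Existing behavior: let the feature extent define the zoom.
      view.fit(extent);
    }
  });
}
```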
phan/phan
291833083
Title: InvalidVariableIssetPlugin should respect the "ignore_undeclared_variables_in_global_scope" option Question: username_0: test.php ```php <?php isset($this->a['b']); ``` .phan/config.php ```php <?php return [ 'directory_list' => ['./'], 'ignore_undeclared_variables_in_global_scope' => true, 'plugins' => ['InvalidVariableIssetPlugin'], ]; ``` Observed behavior: ```text ./test.php:3 PhanPluginUndeclaredVariableIsset undeclared variable $this in isset() ``` Expected behavior: no warnings Phan version: 32e0d52<issue_closed> Status: Issue closed
JetBrains/gradle-intellij-plugin
442543618
Title: `:buildSearchableOptions` task fails on Linux Question: username_0: ```
```
Answers: username_0: The workaround is to add `buildSearchableOptions.enabled = false`. Not sure what the implications of skipping the task are, though.
username_1: What's the IDE version and Java version you use for running the IDE?
username_1: also, please take a look at this comment: https://youtrack.jetbrains.com/issue/JBR-34#focus=streamItem-27-1615609.0-0
username_0: The build is being run in a Docker container on a CI system.
```
Linux version 4.18.7-64 Red Hat 7.3.1-5
```
```
intellij {
    version '191.6707.61'
    ...
}
```
username_0: What do I lose if I set
```
buildSearchableOptions.enabled = false
```
username_1: Settings of your plugin won't be searchable in the Settings and Find Action popup.
username_0: No, because of the build constraints. But I don't think the YouTrack issue you referred to applies here. There they had `libfreetype.so` and it was crashing inside it.

I did some digging, and it appears that the Linux machines running my build don't have the freetype library (`libfreetype.so`) installed on them. So this sort of staging of an AWT app in `buildSearchableOptions` won't be possible for me.
Status: Issue closed
username_1: well, then `buildSearchableOptions.enabled = false` is a proper solution for this case. Closing the issue.
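For reference, the workaround in build-script form: a minimal Groovy sketch. The task name and property come straight from the thread; the comment restates the maintainer's answer about search indexing.

```groovy
// build.gradle: skip the searchable-options step entirely. Per the
// maintainer, the only cost is that the plugin's settings won't show up
// in the IDE's Settings search or the Find Action popup.
buildSearchableOptions {
    enabled = false
}
```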
mezzio/mezzio-problem-details
672807239
Title: `ProblemDetailsExceptionInterface` should extend `Throwable` Question: username_0: ### Feature Request

| Q | A
|------------ | ------
| New Feature | yes
| RFC | no
| BC Break | yes

#### Summary

The `ProblemDetailsExceptionInterface` does not extend `Throwable`, and thus it cannot be used as the type in a `@throws` annotation. This doesn't matter unless you are using static code analysers.

I want to propose having `ProblemDetailsExceptionInterface` extend the `Throwable` interface, as in most cases this is the use case anyway.
Answers: username_1: Overall, `ProblemDetailsExceptionInterface` is designed as a marker for `Throwable` types anyway, so adding this is not really a BC break.
username_0: Can you explain this more? I don't get it, as technically it would be a BC break. Just trying to understand this. You mean, because the `ProblemDetailsResponseFactory` is checking for the interface within the `fromThrowable` method?
username_1: You labeled this issue as a `BC Break`, but it isn't (in my opinion). The BC break is only for third-party consumers that decided to implement `ProblemDetailsExceptionInterface` on their own and did so on non-`Throwable` implementations.
username_2: @username_0 A "marker interface" is one that defines no behavior and only exists to differentiate types. In this case, what @username_1 is saying is that since the interface exists only to decorate/differentiate `Throwable` types, having it extend that type is not a BC break.
username_2: (Sorry for adding my comment - screen hadn't refreshed to show the one from @username_1 (grrrrrr)).
Status: Issue closed
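A minimal sketch of the proposed change and of what it buys a static analyser. The three methods shown are an illustrative subset of the interface, and the example function is hypothetical:

```php
<?php

declare(strict_types=1);

// Proposed: extending Throwable guarantees every implementation can be
// thrown, so analysers accept the interface as a @throws type.
interface ProblemDetailsExceptionInterface extends Throwable
{
    public function getStatus(): int;
    public function getType(): string;
    public function getTitle(): string;
}

/**
 * With the change above, this annotation satisfies static analysers:
 *
 * @throws ProblemDetailsExceptionInterface
 */
function fetchResource(string $id): array
{
    throw new class ('Not Found') extends RuntimeException implements ProblemDetailsExceptionInterface {
        public function getStatus(): int { return 404; }
        public function getType(): string { return 'https://example.com/problems/not-found'; }
        public function getTitle(): string { return 'Not Found'; }
    };
}
```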