| column | type | string length |
| --- | --- | --- |
| repo_name | string | 4–136 |
| issue_id | string | 5–10 |
| text | string | 37–4.84M |
WeNeedCoffee/FoundDiamonds
653232528
Title: /stop causes exception error
Question: username_0: Hi there, I have to be honest and say I haven't had time to update to the Spigot upload yet. But just in case this is something that affects both versions, I thought I wouldn't ignore it and would share this exception error regardless.

```
[13:46:53] [Server thread/INFO]: [FoundDiamonds] Disabling FoundDiamonds v3.6.6
[13:46:53] [Server thread/INFO]: [FoundDiamonds] Saving all data...
[13:46:53] [Server thread/ERROR]: Error occurred while disabling FoundDiamonds v3.6.6 (Is it up to date?)
java.lang.NullPointerException: null
    at co.proxa.founddiamonds.file.FileHandler.writeBlocksToFile(FileHandler.java:99) ~[?:?]
    at co.proxa.founddiamonds.file.FileHandler.saveFlatFileData(FileHandler.java:239) ~[?:?]
    at co.proxa.founddiamonds.FoundDiamonds.onDisable(FoundDiamonds.java:101) ~[?:?]
    at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:265) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at org.bukkit.plugin.java.JavaPluginLoader.disablePlugin(JavaPluginLoader.java:376) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at org.bukkit.plugin.SimplePluginManager.disablePlugin(SimplePluginManager.java:501) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at org.bukkit.plugin.SimplePluginManager.disablePlugins(SimplePluginManager.java:493) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at org.bukkit.craftbukkit.v1_16_R1.CraftServer.disablePlugins(CraftServer.java:427) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at net.minecraft.server.v1_16_R1.MinecraftServer.stop(MinecraftServer.java:717) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at net.minecraft.server.v1_16_R1.DedicatedServer.stop(DedicatedServer.java:644) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at net.minecraft.server.v1_16_R1.MinecraftServer.v(MinecraftServer.java:889) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at net.minecraft.server.v1_16_R1.MinecraftServer.lambda$0(MinecraftServer.java:164) ~[spigot-1.16.1.jar:git-Spigot-c3a49df-2f18108]
    at java.lang.Thread.run(Thread.java:834) [?:?]
[13:46:53] [Server thread/INFO]: Saving players
```

Each time I close the server, I notice this.
[config.yml.zip](https://github.com/WeNeedCoffee/FoundDiamonds/files/4890405/config.yml.zip)
Status: Issue closed
Answers: username_1: https://github.com/WeNeedCoffee/FoundDiamonds/releases/tag/4.0.4-SNAPSHOT Please give this a try and let me know
centreon/centreon
97046440
Title: External command error: Malformed command on Nagios 4.0.3 and Centreon 2.5.0
Question: username_0:
---
Author Name: **<NAME>** (<NAME>)
Original Redmine Issue: 5338, https://forge.username_0.com/issues/5338
Original Date: 2014-03-06
Original Assignee: <NAME>
---
We updated our system to the following versions:
- Centreon 2.5.0
- Nagios 4.0.3
- NDO Utils 2.0.0

It seems that everything works well, but for every command ("downtime", "acknowledge", "check immediate", ...) we get an error in the Nagios logfile, even though the command itself seems to be executed correctly:

[1394107089] EXTERNAL COMMAND: SCHEDULE_SVC_DOWNTIME;Osiris_2.0;MySQL index usage;1394107200;1394114400;1;0;3600;admin;Downtime set by admin
[1394107089] External command error: Malformed command

The same command sent from the Nagios interface works without the error message.<issue_closed>
Status: Issue closed
naser44/1
103760301
Title: The person you don't want to see, glory be to God, pops up everywhere
Question: username_0: <a href="http://ift.tt/1Enbsx3">The person you don't want to see, glory be to God, pops up everywhere</a>
dtugroupd/TasksIssueTracker
538624478
Title: Introduction // Suggested Solution
Question: username_0: Present our general idea: what Campus Connect is and what the general ideas behind it are. It should be fairly brief rather than deeply in-depth, as the specifics will be presented a bit later as use cases, user stories, and actual requirements.
SODALITE-EU/semantic-reasoner
853276757
Title: TOSCA Mapper - adjust names of types/templates
Question: username_0: PDS would add namespaces to nodes_types and node_templates. Reasoner would need to parse it and trim namespaces if necessary. In such a way, we will have a valid TOSCA, and Reasoner would not need to guess the namespaces. The output would be like this:

```
tosca_definitions_version: tosca_simple_yaml_1_3
data_types:
  openstack_testbed/sodalite.datatypes.OpenStack.SecurityRule:
    derived_from: tosca.datatypes.Root
    properties:
      protocol:
        required: True
        type: string
        default: tcp
        constraints:
          - valid_values: ['tcp', 'udp', 'icmp']
      port_range_min:
        required: True
        type: tosca.datatypes.network.PortDef
      port_range_max:
        type: tosca.datatypes.network.PortDef
        required: True
      remote_ip_prefix:
        default: 0.0.0.0/0
        required: True
        type: string
node_types:
  openstack_testbed/sodalite.nodes.OpenStack.SecurityRules:
    derived_from: tosca.nodes.Root
    properties:
      group_name:
        description: Name of the security group in openstack.
        required: True
        type: string
      ports:
        required: False
        constraints:
          - min_length: 1
        type: map
        entry_schema:
          type: openstack_testbed/sodalite.datatypes.OpenStack.SecurityRule
  openstack_testbed/sodalite.nodes.OpenStack.KeyPair:
    derived_from: tosca.nodes.Root
    properties:
      name:
        type: string
        description: OpenStack Key Pair name
topology_template:
  node_templates:
    openstack_testbed/security-rules-database-access:
      type: openstack_testbed/sodalite.nodes.OpenStack.SecurityRules
      properties:
        ports:
          ports-tcp-5432-5432:
            port_range_max: 5432
            remote_ip_prefix: 0.0.0.0/0
            port_range_min: 5432
            protocol: tcp
        group_name: database-access
    openstack_testbed/test-key:
      type: openstack_testbed/sodalite.nodes.OpenStack.KeyPair
      properties:
        name: test-key
      requirements:
        - dependency: openstack_testbed/security-rules-database-access
```

Answers: username_1: I think this issue can be closed
Status: Issue closed
username_0: Yes indeed. Closed
perlancar/perl-Data-Sah-Coerce
437955675
Title: Tests fail on js-date.t
Question: username_0:

```
Loading internal logger. Log::Log4perl recommended for better logging
Reading '/home/username_0/.cpan/Metadata'
Database was generated on Sat, 27 Apr 2019 14:30:34 GMT
Running install for module 'Data::Sah::Coerce'
CPAN: Digest::SHA loaded ok (v6.01)
CPAN: Compress::Zlib loaded ok (v2.074)
Checksum for /home/username_0/.cpan/sources/authors/id/P/PE/PERLANCAR/Data-Sah-Coerce-0.033.tar.gz ok
CPAN: Archive::Tar loaded ok (v2.30)
Data-Sah-Coerce-0.033/
Data-Sah-Coerce-0.033/weaver.ini
Data-Sah-Coerce-0.033/META.json
Data-Sah-Coerce-0.033/lib/
Data-Sah-Coerce-0.033/lib/Data/
Data-Sah-Coerce-0.033/lib/Data/Sah/
Data-Sah-Coerce-0.033/lib/Data/Sah/CoerceJS.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/CoerceCommon.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/duration/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/duration/str_iso8601.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/duration/float_secs.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/bool/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/bool/float.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/bool/str.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datenotime/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datenotime/str.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datenotime/obj_Date.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datenotime/float_epoch.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datetime/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datetime/str.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datetime/obj_Date.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/datetime/float_epoch.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/date/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/date/str.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/date/obj_Date.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/date/float_epoch.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/timeofday/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/js/timeofday/str_hms.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/duration/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/duration/str_human.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/duration/obj_DateTimeDuration.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/duration/str_iso8601.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/duration/float_secs.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/float/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/float/str_percent.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/bool/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/bool/str.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datenotime/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datenotime/obj_DateTime.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datenotime/obj_TimeMoment.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datenotime/float_epoch_always.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datenotime/float_epoch.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datenotime/str_iso8601.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/int/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/int/str_percent.pm
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datetime/
Data-Sah-Coerce-0.033/lib/Data/Sah/Coerce/perl/datetime/obj_DateTime.pm
[Truncated]
t/perl-timeofday.t ....... ok
t/release-rinci.t ........ skipped: these tests are for release candidate testing

Test Summary Report
-------------------
t/js-date.t (Wstat: 256 Tests: 1 Failed: 1)
  Failed test:  1
  Non-zero exit status: 1
Files=16, Tests=60, 7 wallclock secs ( 0.07 usr 0.03 sys + 5.68 cusr 0.88 csys = 6.66 CPU)
Result: FAIL
Failed 1/16 test programs. 1/60 subtests failed.
make: *** [Makefile:1039: test_dynamic] Error 255
  PERLANCAR/Data-Sah-Coerce-0.033.tar.gz
  /usr/bin/make test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
  reports PERLANCAR/Data-Sah-Coerce-0.033.tar.gz
```

also see https://rt.cpan.org/Public/Bug/Display.html?id=128427 .
Answers: username_1: Could you try again with the latest version? Also could you describe your system (OS, node.js version, time zone setting)? Thanks.
Status: Issue closed
username_0: Works fine on debian x86-64 testing. Closing.
username_1: Thanks. I tried it. It doesn't seem to list all though?

```
perl-Perinci-CmdLine-Lite 4 0 0
perl-App-sshwrap-hostcolor 3 0 0
perl-Getopt-Long-More 1 17 0
alt-pm 1 2 0
o-bips 1 0 0
perl-File-MoreUtil 1 0 0
```

I know I have more open issues than just these.
username_0: did you try the `-a` flag?
username_1: That's it! Thanks.
username_0: You're welcome: https://www.youtube.com/watch?v=79DijItQXMM . :)
Azure/azure-cli
651189141
Title: az rest - Retrieving Functions keys results in InternalServerError
Question: username_0: ### **This is autogenerated. Please review and update as needed.**

## Describe the bug

We have a script which currently calls the HTTP endpoint to retrieve the Function Keys from a newly created FunctionApp to place into Key Vault. The HTTP call seems to fail with an InternalServerError. If we wait long enough, it then succeeds. We suspect this might be some timing issue related to how the FunctionApp gets created from ARM templates, or that there is a period of time during which the newly created resource cannot be queried.

**Command Name**
`az rest`

**Errors:**

```
Bad Request({"Code":"BadRequest","Message":"Encountered an error (InternalServerError) from host runtime.","Target":null,"Details":[{"Message":"Encountered an error (InternalServerError) from host runtime."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","Message":"Encountered an error (InternalServerError) from host runtime."}}],"Innererror":null})
```

## To Reproduce:

Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.

- _Put any pre-requisite steps here..._
- `az rest --method {} --uri {}`

## Expected Behavior

HTTP call to always succeed

## Environment Summary

```
Windows-10-10.0.18362-SP0
Python 3.6.6
Installer: MSI

azure-cli 2.5.1 *

Extensions:
interactive 0.4.3
```

## Additional Context

<!--Please don't remove this:-->
<!--auto-generated-->

Answers: username_1: add to S173
username_2: `az rest` currently doesn't support polling [asynchronous Azure operations](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/async-operations) (tracked at https://github.com/Azure/azure-cli/issues/14223). The script is responsible for querying the provisioning status of the prerequisite resources. An example can be found in the scenario test: https://github.com/Azure/azure-cli/blob/5dca36acfa2b95e7870f7ab0df77f89887c07407/src/azure-cli/azure/cli/command_modules/util/tests/latest/test_rest.py#L25-L32
username_0: @username_2 What operation are we supposed to inspect for the 'async' scenario? We provision the function app via a previous Azure DevOps task using ARM templates. Would this not imply that the resource already exists at this point? We are following the recommendation from https://github.com/Azure/azure-cli/issues/4041#issuecomment-603894815
username_2: `az rest` simply makes the REST call and doesn't care about the business logic. Could you create a **support ticket** to the **Function App** team to investigate? You may provide the `--debug` log of `az rest` (containing the raw HTTP request and response). A support engineer can help you identify the issue from the service side. Thanks for understanding.
Status: Issue closed
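A rough Python sketch of the polling approach username_2 describes above. The resource ID placeholder, the timeout values, and the host key-listing URI shape are assumptions for illustration, not details taken from this issue:

```python
# Sketch: wait for the Function App's provisioningState to reach "Succeeded"
# before calling the host listkeys endpoint, instead of calling it right after
# the ARM deployment returns. Resource ID and timing values are assumptions.
import json
import subprocess
import time

def az(*args):
    """Run an az CLI command and parse its JSON output."""
    result = subprocess.run(["az", *args, "--output", "json"],
                            check=True, capture_output=True, text=True)
    return json.loads(result.stdout)

def wait_until_provisioned(resource_id, timeout_s=300, poll_s=10):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resource = az("resource", "show", "--ids", resource_id)
        if resource["properties"].get("provisioningState") == "Succeeded":
            return
        time.sleep(poll_s)
    raise TimeoutError(f"{resource_id} did not reach Succeeded in {timeout_s}s")

# Hypothetical resource ID; only fetch the keys once the app is provisioned.
app_id = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app>"
wait_until_provisioned(app_id)
keys = az("rest", "--method", "post",
          "--uri", f"https://management.azure.com{app_id}/host/default/listkeys?api-version=2019-08-01")
```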
uwec-innovation-labs/energy-dashboard-frontend
541180180
Title: Researching Tools and Libraries for Testing Question: username_0: Look into the different libraries and tools we can use for testing the frontend. Answers: username_0: I was able to setup render testing on all the components which will run through everything and check for basic errors. Anytime I create a new component, I will be adding a test file for testing its rendering. Status: Issue closed
KeppySoftware/OmniMIDI
594563576
Title: OmniMIDI ASIO doesn't work with PFA
Question: username_0: ![Capture](https://user-images.githubusercontent.com/63203180/78505077-9e89ed00-7736-11ea-91e3-638e8318e95d.PNG)
So I was looking at some of the renderers and I saw ASIO (Audio Stream Input/Output), and I selected it because people say it's the best renderer. I tried it, and when I launched Piano From Above, it gave me that error (I put the picture on top). By the way, I do have ASIO installed, but the weird thing is that when I already have it open and then select ASIO, it does work, which is weird. So if anyone can help me, that would be great.
Answers: username_1: can't seem to get anything similar here, I installed [ASIO4ALL](http://www.asio4all.org/), switched OmniMIDI to use ASIO then restarted all programs using OmniMIDI and it worked. What ASIO did you install or are trying to use?
username_0: Nevermind, fixed it. I saw a closed post or something. Thanks for trying to help me though
Status: Issue closed
Etnath/YAMP
333024434
Title: Loading songs is too slow.
Question: username_0: Local tests show that loading ~300 songs takes around 10 seconds.
Answers: username_0: I suspect parsing the tags takes most of the time. Tags should be cached
username_0: Caching the tags improves loading times by an order of magnitude
Status: Issue closed
username_0: Done
googlecolab/colabtools
1021325344
Title: Pro+
Question: username_0: After subscribing to Pro+ I come across ONLY slow GPUs, and ALWAYS with 12 GB of RAM. On another account with a $10 tariff, I always get GPUs with at least 25 GB of RAM. This is clearly a mistake, or Google is simply bestial towards its customers.
Answers: username_1: Yes, I am in the same boat. These vague specs need to be stopped, just make it pay-per-minute already
username_2: I have the same problem too. With Colab Pro+, I don't get much more RAM; it is just 12 GB in GPU mode
username_3: Ever since I subscribed I haven't been able to even have a GPU runtime. Keep in mind that I haven't seriously used Colab's GPU resources in almost a year, and when I had Pro, I never encountered ANY issues similar to these.
username_1: I don't know law, but is it allowed to sell random undetermined service time, with no minimum per day?
username_4: I have the same problem too.
username_1: @username_5
Status: Issue closed
api-platform/core
418351069
Title: [GraphQL] Unable to use a variable as id
Question: username_0: When I try to get a user by its IRI, I get the result correctly

```
{
  user(id: "/users/1") {
    id
  }
}
```

Response
```
{
  "data": {
    "user": {
      "id": "/users/1"
    }
  }
}
```

When I try to use a variable instead, I obtain an error. With `{"iri": "/users/1"}`:

```
query GetUser ($iri: String!) {
  user(id: $iri) {
    id
  }
}
```

```
{
  "errors": [
    {
      "message": "Variable \"$iri\" of type \"String!\" used in position expecting type \"ID\".",
      "extensions": {
        "category": "graphql"
      },
      "locations": [
        {
          "line": 1,
          "column": 15
        },
        {
          "line": 2,
          "column": 12
        }
      ]
    }
  ]
}
```

Answers: username_1: You should write it like this:
```graphql
query GetUser ($iri: ID!) {
  user(id: $iri) {
    id
  }
}
```
username_0: Thanks a lot!
Status: Issue closed
ryancrawcour/cosmosdb-graph-bulkexecutor
507691579
Title: Consider alternatives to reflection
Question: username_0: The reflection approach, while quicker to bang out, is going to be quite costly on a per-call basis. If you switched to building out expression trees and baking/caching delegates, you could pay those costs one time per type and eliminate almost all ongoing runtime costs on subsequent calls.

_Originally posted by @drub0y in https://github.com/username_0/cosmosdb-graph-bulkexecutor/pull/1#issuecomment-542396034_
Answers: username_1: Expression trees are used to read the dynamic values in #7
username_0: Thanks. That's only for dynamic objects, right? The rest still uses reflection?
username_1: Yeah, only dynamic. I didn't change the approach for regular reads. I'm not sure if using expression trees would be vastly different, as you still need to use the reflection component to determine the name of the property you want to read.
RC-MODULE/media.hpp
85541596
Title: Crash after a few seconds (minutes) of operation
Question: username_0: Somewhere in the SDK a crash happens after 10+ seconds of operation if two packets with an identical pts are passed to push in sequence. The program either simply crashes with a segfault, or sometimes prints messages like double free or corruption (fasttop): 0xb4900be0 ***
Answers: username_0: The glitch is intermittent; it doesn't show up every time. So far I've been able to determine that it often crashes somewhere between 1300 and 2000 processed packets.
username_1: Can you send the code, a dump, and a stack trace? With this little information I can't say anything.
username_0: The problem is that there's nothing in the stack, just 2 lines with two question marks. I would gladly tell you exactly where the problem is, but all I have is a disassembly of who knows what.
username_1: Can you make an example that reproduces this problem?
username_0: I'll try it with your RTP program; I remember it also used to crash with a segfault. If that's the case, I'll send you the file on which the crash happens. The crash almost always happens after 4 or more minutes of playback.
username_1: I looked at the stack you sent; as far as I understand, the following happened: for some reason the library objects started being destroyed (the topmost library call is the system_clock destructor), and apparently media::video::sink was destroyed before system_clock. The system_clock schedule still held frames for display that were allocated by video::sink. Unfortunately, before 9de85f70cc960a7efcac1b7aa1528f94ff20993e, calling a frame's destructor after the video::sink destructor led to a segfault.
Status: Issue closed
Forien/foundryvtt-forien-unidentified-items
768331956
Title: Duplication of other Modules in Item Menu
Question: username_0: Hi, big fan of the module; I've been a user of Forien's modules for the last 4 months. Ever since the big Foundry update I haven't been able to use Unidentified Items because of how it doubles Magic Items and Better Rolls in the item menu (see example below).

![image](https://user-images.githubusercontent.com/1498139/102288334-4385d800-3f0a-11eb-903e-593e7c88049c.png)

I confirmed that somehow Unidentified Items is doing it, through a process of disabling modules while leaving all others enabled. For now I will leave it disabled, but I thought I should report it. I love the quest log module and use it every session. Big thanks!!
CartoDB/cartoframes
517653441
Title: Add friendly message when DO is disabled
Question: username_0: Now when a user wants to subscribe to a Dataset/Geography, after pressing the button the user gets an exception:

![](https://user-images.githubusercontent.com/13675438/68156736-59108c00-ff1a-11e9-8335-3e667557f612.png)

We should add a field in the `subscription_info` endpoint with this information, `do_enabled: true/false`, so we can show a better message with a contact link instead of the buttons to subscribe.
Answers: username_1: Message suggestion: "We are sorry. The Data Observatory is not enabled for your account. Contact your customer success manager or send an email to <EMAIL> to get access to it." @username_2 could you confirm this can be the message to show to people that don't have the DO enabled?
username_2: w/ some minor changes in case you find them useful: "We are sorry, the Data Observatory is not enabled for your account yet. Please contact your customer success manager or send an email to <EMAIL> to request access to it."
username_1: 👍
username_3: We need to extend this feature to all the methods of DO: subscribe, enrich, download, etc., not only when we want to subscribe. Thus, we need to add a more generic endpoint that we could request to check the status of the user. @username_4 what would be the right place for this endpoint?
username_4: If I'm not wrong, right now if a user does not have DO enabled the APIs raise an exception, as stated in the description of the issue. Shouldn't it be enough to capture the error in each case? This concrete issue is a different case: what @username_0 proposes is to know in advance whether the user has DO enabled or not, to adapt the UI. In case we want to implement that, I think a good place could be the public `/api/v4/me` endpoint.
username_3: Cool, `/api/v4/me` makes lots of sense to me. I'd like to show an error if the user tries to perform an enrichment, for example. CF should check it before running an operation and raise the exception if the user has no access to DO.
username_4: I think we can decide about this on a per-case basis. Right now in enrichment, BQ throws an error (a 403 or similar) and we catch it in CF. Wouldn't that just be enough? For other cases, it might be interesting for the APIs to be the ones doing the validation, so we can store usage/interest metrics even when the user does not have DO enabled (just thinking out loud, not sure if this makes any sense).
username_3: I think the following workflow would be easier, since you don't need to catch the error at each method:
- Have an endpoint which returns whether the user has access to DO.
- Store the response in the credentials object.
- Add a decorator for each method that needs access to DO.

We could just create a generic decorator to catch generic error responses from BQ and our API, but it would be more difficult.
username_5: Right now we are checking if DO is enabled for the `token`, `subscribe` and `unsubscribe` endpoints, but not for `subscriptions` or `subscription_info`. If we also check it when calling `subscription_info` (I think it doesn't make sense to show the info if you can't do anything else), this issue would be solved, without having to make an extra call and add more information to `/me`. And as [`token` is also protected](https://github.com/CartoDB/cartodb-central/blob/c08342318ec171c91371d044e7eee1d12f660b16/app/controllers/api/users_controller.rb#L78), I think all the flows from CARTOframes are covered, aren't they? @username_6?
username_4: I have one question here... shouldn't subscriptions be open to anyone? Even when they don't have DO enabled, it'd be useful to understand possible interested users.
username_6: We have different cases about that.

### User out of team account (without DO):
- **subscription_info**: ForbiddenErrorException: Access denied
- **subscribe**: ForbiddenErrorException: Access denied
- **subscriptions**: Works!
- **enrichment**: ServerErrorException: ['The user does not have Data Observatory enabled']
- **download**: ServerErrorException: ['The user does not have Data Observatory enabled']

### Team user without DO:
- **subscription_info**: Works
- **subscribe**: Message to subscribe and then ServerErrorException: ['The user does not have Data Observatory enabled']
- **subscriptions**: Works!
- **enrichment**: ServerErrorException: ['The user does not have Data Observatory enabled']
- **download**: ServerErrorException: ['The user does not have Data Observatory enabled']

### What I think we should offer to any user without DO:
- **subscription_info**: Works
- **subscribe**: "You don't have access" message
- **subscriptions**: "You don't have access" message
- **enrichment**: "You don't have access" message
- **download**: "You don't have access" message

### What I think we should do internally:
I think cartoframes should do the request to the backend to allow us to track the interest (to contact them, metrics, ...).

username_3: After a call with @username_6 we've found a solution: we're going to use the `token` endpoint to validate whether the user has access to DO.

1. We're going to create a decorator `@do_access_required` in CF.
2. The decorator will call the `token` endpoint.
   2.1. If a token is returned, the decorator will call the decorated function.
   2.2. If no token is returned because of a lack of DO permissions, we'll catch the error and display it; the decorated function won't be executed.
   2.3. In both cases CF will save the result (in memory) to avoid future calls to the `token` endpoint.

A decorator is just a suggestion; feel free to use another approach if it fits better.
username_0: In case the user has no access to DO, should we show the subscription info, with a different message to contact CARTO? https://github.com/CartoDB/cartoframes/issues/1155#issue-517653441
username_3: No, the subscription info cannot be fetched if the user has no access to DO. The message to contact CARTO would be great any time we say the user has no access to DO: enrichment, subscribe, download, etc...
username_5: Ok, then we have to check if it's enabled from the `subscription_info` endpoint as well and catch the error from CARTOframes properly :+1: Do you think we should improve the message `The user does not have Data Observatory enabled` to include instructions or something?
username_3: I think we can include the email to write to support or a link to a contact form. @username_1 please advise
username_1: We have already talked about messaging above. I have updated the first comment with the message agreed 🙂
username_5: Oops, sorry! :sweat_smile:
username_3: Because of an issue with speed licensing, instead of implementing this via the `token` endpoint we're going to do it via `/api/v4/me`

Blocked until https://github.com/CartoDB/cartodb/issues/15254 is completed
username_0: This is unblocked with https://github.com/CartoDB/cartodb-central/pull/2620
username_0: Related to https://github.com/CartoDB/cartoframes/issues/1331
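A minimal Python sketch of the decorator flow agreed in this thread (steps 1 through 2.3, with the check later re-pointed from `token` to `/api/v4/me`). The helper names `check_do_enabled`, `DOError`, and the `_do_enabled` cache attribute are invented for illustration; they are not CARTOframes' actual API:

```python
# Hedged sketch of @do_access_required: validate DO access once, cache the
# outcome in memory on the credentials object, and short-circuit decorated
# methods with the agreed message when DO is not enabled.
import functools

DO_MESSAGE = (
    "We are sorry, the Data Observatory is not enabled for your account yet. "
    "Please contact your customer success manager or send an email to "
    "<EMAIL> to request access to it."
)

class DOError(Exception):
    """Stand-in for the real 'DO not enabled' error raised by the API call."""

def check_do_enabled(credentials):
    """Placeholder for the real request to `token` (later `/api/v4/me`)."""
    raise DOError("The user does not have Data Observatory enabled")

def do_access_required(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        creds = self.credentials
        if getattr(creds, "_do_enabled", None) is None:  # first call: ask once
            try:
                check_do_enabled(creds)
                creds._do_enabled = True
            except DOError:
                creds._do_enabled = False
        if not creds._do_enabled:  # cached negative result: skip the API call
            raise DOError(DO_MESSAGE)
        return func(self, *args, **kwargs)
    return wrapper
```

Each DO-touching method (subscribe, enrichment, download, ...) would then simply be wrapped with `@do_access_required`, so the access check and the friendly message live in one place.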
linode/manager
233292152
Title: NodeBalancer - Field validation missing for Port field
Question: username_0: https://cloud.linode.com/nodebalancers/balancer-1/configs/create

When submitting a new NodeBalancer config, the port does not get validated and defaults to 80 (the helper text) when no value is entered in the field, rather than prompting the user to enter a value.

![image](https://cloud.githubusercontent.com/assets/19841047/26743695/d1706108-47b0-11e7-9434-3de943a0dff4.png)<issue_closed>
Status: Issue closed
MicrosoftDocs/windows-itpro-docs
660146348
Title: Where in Hengyang can you get cost accounting invoices issued - Bendibao
Question: username_0: Where in Hengyang can you get cost accounting invoices issued - Bendibao. Invoicing 【█1 З 5-电-ЗЗ45-嶶-З429█】 Mr. Yang 【σσ/WeChat 1З00█507█З60】 Regular tax business agency. 100% genuine invoices. This information is permanently valid. Issued by a real company / details: full range of items, can be verified before issuing. No need to open; contact directly by clicking the "Baidu snapshot" above. On-site footage! US "quasi-carrier" burned for 4 days and is still smoking; aircraft dropped water 1,500 times (original title: US "quasi-carrier" burned for 4 days and still smoking, helicopters dropped water 1,500 times, scene exposed). Haiwainet, July 16: The US Navy amphibious assault ship "Bonhomme Richard" has burned for four days and the fire is still not out. The military released the latest footage of the rescue scene on the 15th. According to RT, firefighters have been continuously fighting the fire since the "Bonhomme Richard" exploded and caught fire on July 12. In a statement on the 15th, the US Navy said that to contain the spread of the blaze, helicopters have dropped water more than 1,500 times, and the larger flames have been extinguished. Firefighters are currently going all out to put out the individual smoldering spots on the ship. A total of 63 people have been injured and treated, including 40 crew members and 23 civilians. The US Navy said that despite the fire and explosions, the hull avoided irreparable damage, stating that "the fuel tanks are not threatened, the ship is stable, and the structure is safe."<issue_closed>
Status: Issue closed
TechforgoodCAST/chc-referrals
373020888
Title: When a referral is not approved through Typeform it still takes up a slot
Question: username_0: It's okay if these need to come into the system, but they need to not take up slots until someone on the CHC looks at them and declines them again. Happy to have a chat about it if the issue is confusing.
Answers: username_1: Is this when a response from Typeform is [automatically declined](https://github.com/TechforgoodCAST/chc-referrals/blob/master/app/models/referral.rb#L66)? I'll take a look. Can you share an example referral that shouldn't have taken up a slot, e.g. `../british-red-cross/referrals/12`?
username_1: _Reminder:_ Typeforms that want automatic declining as a feature should use the quiz functionality of Typeform to return a negative score when the form is submitted. E.g. you could ensure this by giving a score of -9999 when an undesirable answer is given.
Status: Issue closed
Jzvd/JZVideo
1084016476
Title: 7.7.0 issue
Question: username_0:

    Fatal Exception: java.lang.NullPointerException: Attempt to invoke virtual method 'int android.media.AudioManager.getStreamVolume(int)' on a null object reference
        at cn.jzvd.Jzvd.touchActionMove(Jzvd.java:471)
        at cn.jzvd.Jzvd.onTouch(Jzvd.java:404)
        at cn.jzvd.JzvdStd.onTouch(JzvdStd.java:323)
        at android.view.View.dispatchTouchEvent(View.java:14364)

This problem only appears after upgrading from 7.6 to 7.7, and the error rate is extremely high.
Answers: username_2: I have this problem too
DraqueT/PolyGlot
199559710
Title: First pronunciation match lingering
Question: username_0: If there is an autogenerated pronunciation for a word and you hit backspace on the conword itself until it's completely deleted, the last deleted/matching pronunciation-generation rule will leave its first related value in the pronunciation field. This won't come up much, since it only shows up if you have a blank conword field... but still.<issue_closed>
Status: Issue closed
TrueCar/gluestick
205552119
Title: [V2] - Debug tests for generated app are broken
Question: username_0:
- In V1, debugging generated tests works
- We might need to move the `test` command into `gluestick-cli`

Answers: username_1: Also, projects that use, for example, aphrodite use browser globals like `document`, so when debugging, those variables might not be defined.
username_1: Some refs: https://github.com/facebook/jest/issues/1652 https://github.com/nodejs/node/issues/7593
Status: Issue closed
username_1: Fixed in node `8.4.0`+
alibaba/canal
323879928
Title: Table structure parsing fails, unknow column
Question: username_0: tag: canal-1.0.26-preview-2
The table has a column named: conf_key

    2018-05-17 13:28:36.412 [destination = xxxxx , address = xxxxx , EventParser] ERROR c.a.otter.canal.parse.inbound.mysql.MysqlEventParser - dump address /10.4.217.125:5002 has an error, retrying. caused by
    java.lang.RuntimeException: unknow column : `conf_key`(8)
        at com.alibaba.otter.canal.parse.inbound.TableMeta.getFieldMetaByName(TableMeta.java:74) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.MemoryTableMeta.processTableElement(MemoryTableMeta.java:227) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.MemoryTableMeta.parse(MemoryTableMeta.java:153) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.MemoryTableMeta.find(MemoryTableMeta.java:108) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.DatabaseTableMeta.compareTableMetaDbAndMemory(DatabaseTableMeta.java:289) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.DatabaseTableMeta.applySnapshotToDB(DatabaseTableMeta.java:251) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.DatabaseTableMeta.rollback(DatabaseTableMeta.java:129) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.processTableMeta(AbstractMysqlEventParser.java:72) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at com.alibaba.otter.canal.parse.inbound.AbstractEventParser$3.run(AbstractEventParser.java:170) ~[canal.parse-1.0.26-SNAPSHOT.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]

Answers: username_0: ![image](https://user-images.githubusercontent.com/1378499/40158776-dc4bd130-59d8-11e8-9f1c-7c82d3d4774a.png)
Compared with 1.0.25: after deleting these few lines from MemoryTableMeta, the error no longer occurs.
Status: Issue closed
Documentive/TemplateBuddy
710098184
Title: [FEATURE] Clone this template into HTML (template23)
Question: username_0: **Describe the solution you'd like**
Given below is the link to the template. Download it and create the HTML format.
Click [here](https://drive.google.com/file/d/1yPBAgSj3KRCtl0fxMmnHqcbBj6StuHY3/view?usp=sharing) to download the file.
There would be a file format.txt in static/resume_templates/format. This file would contain all the placeholders that are to be put exactly in your clone.
<strong>Note:</strong> You can only start working on the feature if a maintainer approves that request in the discussion and the maintainer assigns it to you. **Your PR will be merged only if you are assigned to this issue.** This issue will be assigned to only a single user.
Answers: username_1: Assign it to me
username_2: Sure @username_1. Assigning this to you. Thanks for showing interest to contribute.
username_2: Sure @souravroy-test. Assigning this to you. Thanks for showing interest to contribute.
username_3: I would like to work on this issue
username_0: Sure @username_3, will assign this to you.
Status: Issue closed
simplworld/simpl.world.website
329928033
Title: Remove Frank's "SIMPL Token" from github Question: username_0: During the transition from githost -> github, in order to keep dependencies' ability to install themselves, I created a limited scope Github Token that needs to be removed once the libraries are opened up. Answers: username_1: This needs to be done after the repos are public
BlueWallet/BlueWallet
1113669982
Title: error Failed to install the app. Make sure you have the Android development environment Question: username_0: dependency's AAR metadata (META-INF/com/android/build/gradle/aar-metadata.properties) is greater than this module's compileSdkVersion (android-30). Dependency: androidx.sqlite:sqlite-framework:2.2.0. AAR metadata file: C:\Users\NeayNie\.gradle\caches\transforms-3\f5a0ade1653c808a85dcfa7d197c25c2\transformed\sqlite-framework-2.2.0\META-INF\com\android\build\gradle\aar-metadata.properties. * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. * Get more help at https://help.gradle.org BUILD FAILED in 11s at makeError (D:\LndHub\BlueWallet\node_modules\execa\index.js:174:9) at D:\LndHub\BlueWallet\node_modules\execa\index.js:278:16 at processTicksAndRejections (internal/process/task_queues.js:95:5) at async runOnAllDevices (D:\LndHub\BlueWallet\node_modules\@react-native-community\cli-platform-android\build\commands\runAndroid\runOnAllDevices.js:94:5) at async Command.handleAction (D:\LndHub\BlueWallet\node_modules\react-native\node_modules\@react-native-community\cli\build\index.js:186:9) info Run CLI with --verbose flag for more details. ``` ### file build.gradle ``` // Top-level build file where you can add configuration options common to all sub-projects/modules. buildscript { ext { minSdkVersion = 28 supportLibVersion = "28.0.0" buildToolsVersion = "30.0.3" compileSdkVersion = 30 targetSdkVersion = 30 googlePlayServicesVersion = "16.+" googlePlayServicesIidVersion = "16.0.1" firebaseVersion = "17.3.4" firebaseMessagingVersion = "20.2.1" ndkVersion = "23.0.7599858" } repositories { google() mavenCentral() } dependencies { classpath('com.android.tools.build:gradle:4.2.2') classpath("com.bugsnag:bugsnag-android-gradle-plugin:5.+") classpath 'com.google.gms:google-services:4.3.10' // Google Services plugin // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } } allprojects { repositories { jcenter() { content { includeModule("com.facebook.yoga", "proguard-annotations") includeModule("com.facebook.fbjni", "fbjni-java-only") includeModule("com.facebook.fresco", "fresco") includeModule("com.facebook.fresco", "stetho") includeModule("com.facebook.fresco", "fbcore") [Truncated] } google() maven { url 'https://www.jitpack.io' } } } subprojects { afterEvaluate {project -> if (project.hasProperty("android")) { android { compileSdkVersion 30 buildToolsVersion '30.0.3' defaultConfig { minSdkVersion 28 } } } } } ``` Answers: username_1: We use macOS and Linux for development. Windows has not been compiled against. username_1: I'd be lying if I list anything out since none of our environments contain Windows. Status: Issue closed
WoWManiaUK/Blackwing-Lair
473654832
Title: [Spell] Death Knight - Death Coil
Question: username_0: **Links:** (BfA link, but same spell ID in game)
https://fr.wowhead.com/spell=47541/voile-mortel

**What is happening:**
Death Coil doesn't heal enough according to the spell's tooltip. Dr.Damage only gives the damage coefficient, not the healing one.
Here, the self-healing: https://puu.sh/DY2EH/9d726fbab0.png
Here, the tooltip: https://puu.sh/DY36h/28a745db32.png
Healing output is about 10% effective.

**What should happen:**
The healing ratio has to be changed; I don't have the correct value nor a database though.<issue_closed>
Status: Issue closed
sindresorhus/np
218874403
Title: Disable dist-tag check when running with `--no-publish`
Question: username_0:

```
❯ np 2.0.0-beta.2 --no-publish

 ❯ Prerequisite check
   → You must specify a dist-tag using --tag when publishing a pre-release version. This prevents accidentally tagging unstable versions as "latest". https:/…
   ✔ Validate version
   ✖ Check for pre-release version
     → You must specify a dist-tag using --tag when publishing a pre-release version. This prevents accidentally tagging unstable versions as "latest". https…
     Check npm version
     Check git tag existence
   Git
   Cleanup
   Installing dependencies using npm
   Running tests
   Bumping version
   Pushing tags

✖ You must specify a dist-tag using --tag when publishing a pre-release version. This prevents accidentally tagging unstable versions as "latest". https://docs.npmjs.com/cli/dist-tag
```

It makes no sense to have that check when you don't intend on publishing to npm. I also noticed the output is printed twice and cut off. // @username_1
Answers: username_1: Will have a look at this!
username_1: I'm looking into this. Was the output cut off because it exceeded the length of the terminal?
username_1: Need [this](https://github.com/username_0/log-update/pull/16) in order to render the output on multiple lines :).
username_0: Any idea why it's printed twice?
Status: Issue closed
username_1: You mean in the parent task and the subtask? It will be fixed in the next release of that renderer. Want to fix the log-update cut off as well first.
username_0: 👍
laristra/flecsi
300894207
Title: Collection of requirements and initial interface design of set topology interface Question: username_0: **Task** Identification of requirements for PIC and MPM particle methods, and initial interface design. **Deliverables** * Initial set topology interface with data handle type and attributes that can be used as a proof-of-concept for co-design process with PIC and MPM teams. * Unit tests to cover set topology interface and types.<issue_closed> Status: Issue closed
gabrielcsapo/node-git-server
602682873
Title: How is authentication supposed to work?
Question: username_0: In your examples you are not too specific about how your authenticate function is supposed to work. You just trace the username/password and proceed. What am I supposed to do in order to achieve proper authentication?

So far I have tried this:

Server
```
const repos = new Server(path.resolve(__dirname, 'tmp'), {
  autoCreate: true,
  authenticate: ({ type, repo, username, password, headers }, next) => {
    console.log(type, repo, username, password);
    return new Promise((resolve, reject) => {
      if (username === 'foo') {
        return resolve();
      }
      return reject("sorry you don't have access to this content");
    });
  }
});
```

The client calls this like so:
```
~/Documents/tmp/git $ git push http://kms:7005/what master
Username for 'http://kms:7005': foo
Password for 'http://foo@kms:7005':
remote: sorry you don't have access to this content
fatal: Authentication failed for 'http://kms:7005/what/'
```

I'm asked client-side for a username/password; I enter them, but get a rejection. Server-side this is logged: `push what undefined undefined`
Answers: username_0: OK, figured it out:
```
const repos = new Server(path.resolve(__dirname, 'tmp'), {
  autoCreate: true,
  authenticate: ({ type, repo, user, headers }, next) => {
    user((username, password) => {
      console.log(username, password);
      if (username == "foo" && password == "bar") next()
      else next("authentication failed")
    });
  }
});
```
Status: Issue closed
username_1: This definitely needs more docs
username_0: But it works well :)
datosgobar/series-tiempo-ar-api
424372938
Title: Add new categorical filters to `search`
Question: username_0: In a similar way to how the "dataset_source" or "dataset_publisher_name" filters work, implement new filters (their auxiliary endpoints, their "aggregations", and the new arguments available in the `search` endpoint).

+ "frequency": returning end-user-facing texts ("Anual", "Semestral", "Trimestral", "Mensual", "Diaria").
+ "is_updated": returning the texts "Actualizada" and "Desactualizada"
+ "dataset_license"
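For illustration only, a guess at how the new arguments might be exercised once implemented, assuming they take the same query-string form as the existing `dataset_source` filter. The parameter names come from this issue; the accepted values are an assumption based on the display texts listed above:

```python
# Hypothetical queries against the series API's search endpoint once the new
# categorical filters exist; the value strings are assumptions.
import requests

BASE_URL = "https://apis.datos.gob.ar/series/api"

monthly = requests.get(f"{BASE_URL}/search",
                       params={"q": "ipc", "frequency": "Mensual"}).json()
updated = requests.get(f"{BASE_URL}/search",
                       params={"q": "ipc", "is_updated": "Actualizada"}).json()
```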
wsvincent/djangoforprofessionals
631466137
Title: psycopg2.errors.UndefinedTable: relation "users_customuser" does not exist
Question: username_0: I have finished the first book and started the second one. Having a great time and really enjoying the material, but I reached a dead end, hence I am writing here. As I was following the book and coding along, I came across this error right after entering the username for the createsuperuser command.

    PS C:\Users\... \books> docker-compose exec web python manage.py createsuperuser
    Username: dm
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 86, in _execute
        return self.cursor.execute(sql, params)
    psycopg2.errors.UndefinedTable: relation "users_customuser" does not exist
    LINE 1: ...is_active", "users_customuser"."date_joined" FROM "users_cus...

    The above exception was the direct cause of the following exception:
    ....

I tried to solve it myself and it didn't go well. Then I shut down Docker, deleted the project folder, and started from the beginning. And then I came across the same problem. I then used the code provided here at GitHub and the same problem keeps occurring. Any suggestions/ideas on how to explore this error further and where to find a resolution?
Status: Issue closed
Answers: username_1: Hi @username_0,

The `users` database table is not created for some reason. Good that you tried shutting everything down and starting from scratch, which is always my first step :)

Have you run `migrate` on the database before trying `createsuperuser`? That's the issue here, the table for `users` isn't there, though I'm not completely sure why.

Closing for now. Reopen if you're continuing to have issues.
zxing/zxing
116344024
Title: Barcode scanner scans different format and number
Question: username_0: When scanning with ZXing, I sometimes get a different format and number. It happens when I am not moving my camera away from the barcode: it scans multiple times and then I get a different result. What's the problem here?
Answers: username_1: No problem, it's just a false positive read. You can disable formats you don't want to scan for to help this.
Status: Issue closed
username_0: Thanks for your help. I only enabled EAN-13 and got the same result. Why is this happening?
username_1: I don't know. What result do you see? You don't describe the problem
username_0: ![img_20151112_104011](https://cloud.githubusercontent.com/assets/5268958/11112187/a32b4abe-8921-11e5-826d-1aaff3ae5438.jpg)
I scanned this barcode 10 times and sometimes I see a wrong result, i.e. (981409130867), but the original is 8991389730867. I only enabled EAN-13 for the test.
username_1: It scans as 8991389730867 -- http://zxing.org/w/decode?u=https%3A%2F%2Fcloud.githubusercontent.com%2Fassets%2F5268958%2F11112187%2Fa32b4abe-8921-11e5-826d-1aaff3ae5438.jpg
It's possible, though rare, that you can get a misread for EAN.
iterative/dvc.org
556686812
Title: Custom header IDs mechanism breaks some pages
Question: username_0: Examples:
- https://dvc.org/doc/user-guide/running-dvc-on-windows
- https://dvc.org/doc/user-guide/external-dependencies

The error happens when a header node has children inside other than the text. In the case of `external-dependencies`, the error was caused by this header:

```
## Example: `import-url` command
```

Because `import-url` was rendered as a tag, an incorrect id value was assigned to the header in:
https://github.com/iterative/dvc.org/blob/master/src/Documentation/Markdown/Markdown.js#L60-L65
Answers: username_1: @username_0 thanks for reporting this! will you take a look? if you don't have time, let's just disable the custom rendering, I'll fix the issue tomorrow. Should be something minor.
username_0: @username_1 fixing it ATM.
Status: Issue closed
bcgov/wps
1125400045
Title: Part -2 - Workshop - Roles and Responsibilities Question: username_0: **Describe the task** Part 2 meeting to continue with the discussion - Feb 10 **Acceptance Criteria** - [ ] first - [ ] second - [ ] third **Additional context** - Add any other context about the task here. - Or here
OGGM/oggm
279089770
Title: Glacier Domain and Preprocessing Question: username_0: Very small question: when you change the `cfg.PARAMS['border']` parameter to address the error `RuntimeError('Glacier exceeds domain boundaries.')`, do you need to re-run all of the preprocessing? Answers: username_1: Yes, there is no way around it. Sorry about that! Status: Issue closed
spinnaker/spinnaker
690918392
Title: Applications do not have a pager duty service key
Question: username_0: ### Issue Summary:

The `Page app Owner` function sounds promising. Unfortunately, I did not find any documentation on how to set this up. It really seems like a pretty good idea to be notified if my application suddenly stops working. At first, I was expecting to get mail (because there is always a stated owner mail). How do I get this to work, please?

### Cloud Provider(s):

OpenStack
Kubernetes: v1.18.6
Spinnaker: 1.22.1

### Description:

I can simply click on the red button `Page app Owner` and write a message. After confirmation I always get the error:

```
<application> does not have a pager duty service key.
```

### Additional Details:

Here are the orca logs regarding this error:

```
2020-09-02 10:35:07.917 WARN 1 --- [ handlers-20] c.n.s.o.e.DefaultExceptionHandler : [<user>] Error occurred during task pageApplicationOwner

java.lang.IllegalStateException: <application> does not have a pager duty service key.
    at com.netflix.spinnaker.orca.echo.tasks.PageApplicationOwnerTask.execute(PageApplicationOwnerTask.groovy:73)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$handle$1$1$1.invoke(RunTaskHandler.kt:132)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$handle$1$1$1.invoke(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withLoggingContext(RunTaskHandler.kt:397)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.access$withLoggingContext(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$handle$1$1.invoke(RunTaskHandler.kt:94)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$handle$1$1.invoke(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.AuthenticationAware$sam$java_util_concurrent_Callable$0.call(AuthenticationAware.kt)
    at com.netflix.spinnaker.security.AuthenticatedRequest.lambda$wrapCallableForPrincipal$0(AuthenticatedRequest.java:272)
    at com.netflix.spinnaker.orca.q.handler.AuthenticationAware$DefaultImpls.withAuth(AuthenticationAware.kt:53)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withAuth(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$handle$1.invoke(RunTaskHandler.kt:93)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$handle$1.invoke(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$withTask$1.invoke(RunTaskHandler.kt:225)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler$withTask$1.invoke(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$withTask$1.invoke(OrcaMessageHandler.kt:68)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$withTask$1.invoke(OrcaMessageHandler.kt:46)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$withStage$1.invoke(OrcaMessageHandler.kt:85)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$withStage$1.invoke(OrcaMessageHandler.kt:46)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.withExecution(OrcaMessageHandler.kt:95)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withExecution(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.withStage(OrcaMessageHandler.kt:74)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withStage(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.withTask(OrcaMessageHandler.kt:60)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withTask(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withTask(RunTaskHandler.kt:214)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.handle(RunTaskHandler.kt:90)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.handle(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.q.MessageHandler$DefaultImpls.invoke(MessageHandler.kt:36)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.invoke(OrcaMessageHandler.kt)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.invoke(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.audit.ExecutionTrackingMessageHandlerPostProcessor$ExecutionTrackingMessageHandlerProxy.invoke(ExecutionTrackingMessageHandlerPostProcessor.kt:72)
    at com.netflix.spinnaker.q.QueueProcessor$callback$1$1.run(QueueProcessor.kt:89)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

2020-09-02 10:35:07.917 ERROR 1 --- [ handlers-20] c.n.s.orca.q.handler.RunTaskHandler : [<user>] Error running PageApplicationOwnerTask for orchestration[01EH764VJ6NKXNJ0GE061N01V9]

java.lang.IllegalStateException: <application> does not have a pager duty service key.
    at com.netflix.spinnaker.orca.echo.tasks.PageApplicationOwnerTask.execute(PageApplicationOwnerTask.groovy:73)
[Truncated]
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$withStage$1.invoke(OrcaMessageHandler.kt:46)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.withExecution(OrcaMessageHandler.kt:95)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withExecution(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.withStage(OrcaMessageHandler.kt:74)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withStage(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.withTask(OrcaMessageHandler.kt:60)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withTask(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.withTask(RunTaskHandler.kt:214)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.handle(RunTaskHandler.kt:90)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.handle(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.q.MessageHandler$DefaultImpls.invoke(MessageHandler.kt:36)
    at com.netflix.spinnaker.orca.q.handler.OrcaMessageHandler$DefaultImpls.invoke(OrcaMessageHandler.kt)
    at com.netflix.spinnaker.orca.q.handler.RunTaskHandler.invoke(RunTaskHandler.kt:75)
    at com.netflix.spinnaker.orca.q.audit.ExecutionTrackingMessageHandlerPostProcessor$ExecutionTrackingMessageHandlerProxy.invoke(ExecutionTrackingMessageHandlerPostProcessor.kt:72)
    at com.netflix.spinnaker.q.QueueProcessor$callback$1$1.run(QueueProcessor.kt:89)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
```

Answers: username_1: Having the same issue using:

**Cloud Provider(s):**
EKS
Kubernetes: v1.16.13
Spinnaker: 1.22.1

It would be great to have the know-how to configure this documented.
username_2: Is there any way to disable this on Spinnaker?
channable/hoff
550111763
Title: Make branch deletion configurable Question: username_0: Hoff deletes the branch of the PR when merging it. This causes problems when there was another PR that used that branch as a base. It will be closed by github in that case. Either Hoff shouldn't delete the branch in that case, or, maybe even easier, hoff could just not delete branches in general and instead rely on the github auto-deletion feature for merged branches. That feature already correctly handles branches that are used as bases for other PRs. Answers: username_1: Also, as the base branch doesn't exist any more, GH won't allow reopening the PR. username_2: We can probably just delete all the code that deals with deleting branches, now that GitHub has auto-deletion there is not much added value for Hoff to do it. Status: Issue closed
egoist/tsup
1164788665
Title: Feature request: add tsup-run command
Question: username_0: tsup is a great bundler! While it may take some finagling, I was able to get all of my code to build with tsup, no small feat. Other tools, like ts-node and others, are unable to play nicely with ESM like tsup does. Plus, tsup already has all the polyfills and other options I need. I would love a `tsup-run` so that I can just use tsup for everything, including one-off scripts.
Answers: username_1: For now, `tsup src/index.ts --onSuccess "node dist/index.js"` is pretty much `tsup-run src/index.ts`
username_0: @username_1 wow
username_0: For reusability, I wrapped it in a shell script:

```
#!/usr/bin/env bash
# pattern matching https://reactgo.com/bash-check-string-ends-with-other/
# elif https://www.tutorialkart.com/bash-shell-scripting/bash-else-if/
# prefix https://stackoverflow.com/a/16623897
# suffix https://stackoverflow.com/a/61294531
# tsup-run https://github.com/username_1/tsup/issues/582
# exit https://unix.stackexchange.com/questions/308207/exit-code-at-the-end-of-a-bash-script
filename=${1#.\/}
if [[ "$1" == *ts ]]; then
  filename="${filename%.ts}"
  npx tsup-node "$filename.ts" --onSuccess "node dist/$filename.js"
elif [[ "$1" == *tsx ]]; then
  filename="${filename%.tsx}"
  npx tsup-node "$filename.tsx" --onSuccess "node dist/$filename.js"
else
  echo "Unsupported file extension: $1"; exit 1
fi
```

Status: Issue closed
scssphp/scssphp
730343339
Title: LibSass is Deprecated
Question: username_0: As I saw [this](https://github.com/scssphp/scssphp/blob/243a16a35f404dd2c546192f5cd9d184a8a26d34/src/Compiler.php#L5828), I wonder whether the deprecation of LibSass has an impact on this repo?
https://sass-lang.com/blog/libsass-is-deprecated

Thank you for this repo; it has often helped me. Now I have a question.
Answers: username_1: This repo does not rely on libsass, so it is not directly impacted. And the discussion in #145 shows that there is still interest in a pure PHP implementation of Sass, so it is not planned to deprecate scssphp.

Lots of work is happening right now to increase the spec compliance of scssphp to bring it closer to dart-sass (support for Sass modules will take a long time to happen though, as lots of other things need to be done first and it is a huge amount of work; follow #55 to track this).
username_0: Thank you very much for your answer.
Status: Issue closed
electron-userland/electron-builder
293535047
Title: Deprecated documentation about docker images
Question: username_0:
* **Version**: 19.55.3
* **Target**: Windows/Linux

The documentation (https://www.electron.build/multi-platform-build) states:
```
builder:wine — Wine, NodeJS 8 and required system dependencies. Based on builder:8. Use this image if you need to build Windows targets
```
However, the `builder:wine` image actually contains NodeJS 9:
```
$ sudo docker run electronuserland/builder:wine node -v
v9.4.0
```
Is there a way to get an image ready for building Windows targets with a specific NodeJS version?
Status: Issue closed
Answers: username_2: But my question would be: how can I upgrade/change the node version that is used? Is adding this to the Dockerfile ok?
```
FROM electronuserland/builder:wine
RUN node install -g n
RUN n 9.11.1
```
username_2: RUN node install -g n
 ---> Running in f333fa3c40e7
module.js:557
 throw err;
 ^
Error: Cannot find module '/project/install'
 at Function.Module._resolveFilename (module.js:555:15)
 at Function.Module._load (module.js:482:25)
 at Function.Module.runMain (module.js:701:10)
 at startup (bootstrap_node.js:194:16)
 at bootstrap_node.js:618:3
username_2: So funny... the correct Dockerfile is:
```
FROM electronuserland/builder:wine
RUN npm install -g n
RUN n 9.11.1
```
and it works; I can use node 9.11.1 or whatever else via n.
HXLStandard/hxl-proxy
318102302
Title: Update /pcodes to work around inconsistent iTOS API behaviour
Question: username_0: From iTOS via @username_1:
What you are running into is the fact that some services include a PP layer, while others do not. In the case where PP data is present, it takes the "0" layer, "1" is the country layer, "2" is the first admin level, etc. In cases (like BDI) where PP data doesn't exist, "0" becomes the admin0 layer, "1" becomes the admin1 layer, etc.
You can use the API to get the layer list for each country with their ids:
http://gistmaps.itos.uga.edu/arcgis/rest/services/COD_External/BDI_pcode/MapServer/layers?f=pjson
"id": 0,
"name": "Admin0",
"type": "Feature Layer",
http://gistmaps.itos.uga.edu/arcgis/rest/services/COD_External/SLE_pcode/MapServer/layers?f=pjson
"id": 0,
"name": "Populated Places",
"type": "Feature Layer",
Answers: username_0: If the Proxy makes that call every time, the service may become much slower (at least until we add better output caching). Will try that first, though.
username_0: Added an input cache from iTOS (default: 1 week) to speed up performance. Completed. Confirmation link: https://beta.proxy.hxlstandard.org/pcodes/bdi-adm2.csv
username_0: @username_1 - please confirm and close if OK
Status: Issue closed
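One way to implement the workaround described above is to query each country's `layers` endpoint once and build a name-to-id map, instead of assuming fixed layer numbers. A sketch, assuming the standard ArcGIS REST response shape; the function name is illustrative and the input cache mentioned in the thread is omitted for brevity:

```python
import requests

ITOS_URL = ("http://gistmaps.itos.uga.edu/arcgis/rest/services/"
            "COD_External/{iso3}_pcode/MapServer/layers")

def admin_layer_ids(iso3: str) -> dict:
    """Map layer names (e.g. 'Admin1') to layer ids for one country.

    Works around the inconsistent numbering: services that include a
    Populated Places layer put it at id 0, shifting the admin layers.
    """
    data = requests.get(ITOS_URL.format(iso3=iso3), params={"f": "pjson"}).json()
    return {layer["name"]: layer["id"] for layer in data.get("layers", [])}

# e.g. admin_layer_ids("SLE") may give {'Populated Places': 0, 'Admin0': 1, ...}
# while admin_layer_ids("BDI") gives {'Admin0': 0, 'Admin1': 1, ...}
```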
JetBrains/Exposed
240187381
Title: Exception when trying to add a column with a default value
Question: username_0: After I added a new column like this
```kotlin
val test = char("test").default('X')
```
I received the following exception:
```
Caused by: org.postgresql.util.PSQLException: ERROR: column "x" does not exist
 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2476)
 at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2189)
 at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
 at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
 at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
 at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:169)
 at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:136)
 at org.jetbrains.exposed.sql.Transaction$exec$2.executeInternal(Transaction.kt:91)
 at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_main(Statement.kt:57)
 at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:108)
 at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:102)
 at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:87)
 at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:81)
 at org.jetbrains.exposed.sql.SchemaUtils$createMissingTablesAndColumns$$inlined$with$lambda$1.invoke(SchemaUtils.kt:128)
 at org.jetbrains.exposed.sql.SchemaUtils$createMissingTablesAndColumns$$inlined$with$lambda$1.invoke(SchemaUtils.kt:9)
 at org.jetbrains.exposed.sql.SchemaUtils.withDataBaseLock(SchemaUtils.kt:152)
 ... 9 more
```
Although the SQL-to-stdout logger is active, I couldn't find the SQL statement that caused the error. So I started playing around and found the issue: you are trying to add the column like this:
```sql
ALTER TABLE foo ADD COLUMN test CHAR DEFAULT X;
```
This will fail, of course, because the default value for the column `test` would then come from the column `x`, which doesn't exist. Please quote the value to get the desired SQL statement:
```sql
ALTER TABLE foo ADD COLUMN test CHAR DEFAULT 'X';
```
Answers: username_0: Adding to the above behaviour: when I create the column myself, my application crashes on startup because of a syntax error near `MODIFY`. Removing the default value fixes this syntax error, but that is not what I want. So it looks like you are trying to update the default value even though it is already correct, and that this query has a syntax error. Unfortunately, Exposed didn't show me the faulty SQL statement.
Status: Issue closed
yse/easy_profiler
229288977
Title: Profiler freezes on reconnect
Question: username_0: Reconnecting seems to be broken, as the profiler application freezes on reconnect. Closing the game window does not terminate the process, as background music can still be heard even after the window disappears. Terminating the frozen profiler application makes the game process exit completely.
Steps to reproduce:
1. Start the profiler and the profiled application
2. Hit the `Connect` button in the profiler - data is gathered properly
3. Hit the `Stop` button in the profiler - data is displayed properly
4. Hit the `Disconnect` button in the profiler - shouldn't the profiler have disconnected in the previous step already?
5. Hit the `Connect` button in the profiler - the profiler is frozen
I should also point out that we added [this fix](https://github.com/username_0/AtomicGameEngine/blob/fc77dac30231926f35e02df1810fe61046872ced/Source/ThirdParty/easy_profiler/easy_profiler_core/profile_manager.cpp#L1680-L1690) to get the profiler into working order. For some reason, without this loop the profiler freezes on the very first attempt to connect to a running application. Killing the profiled application shows a message that the profiler could not connect.
This was tested on Linux and macOS. While I test things with our slightly modified version, our modifications should not have any impact, as they are pretty much all for macOS, plus the loop I just linked.
Answers: username_0: @username_1 ping :)
username_1: @username_0 sorry for the delay, we don't have much time to work on easy_profiler at the moment, but @username_2 is already working on the problem
username_0: After seeing some activity yesterday, I can't wait for the changes to hit develop so I can test. Thanks 👍
username_2: Well... it was a really stupid bug. We forgot to close the socket on disconnect :-|
Status: Issue closed
broadinstitute/cromwell
316967520
Title: Job labels can't be set to an empty string in all cases Question: username_0: To reproduce: 1. create a new label (has to be new) with a value by passing `{"key-1":"value-1"}` to /api/workflows/{version}/{id}/labels 2. set that label to an empty string `{"key-1":""}` 3. set that label to something other than an empty string `{"key-1":"value-2"}` 4. set that label to an empty string `{"key-1":""}` again The first time the label is set to "", it works; the second time, it isn't updated (although the response from /api/workflows/{version}/{id}/labels makes it look like it was)<issue_closed> Status: Issue closed
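The repro steps above translate directly into a small script against the labels endpoint. A sketch, assuming a Cromwell server on localhost:8000 and PATCH semantics for label updates; the workflow id is a placeholder:

```python
import requests

BASE = "http://localhost:8000/api/workflows/v1"
WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"  # placeholder id

def set_label(value: str) -> dict:
    """Set the label 'key-1' on the workflow and return the response body."""
    resp = requests.patch(f"{BASE}/{WORKFLOW_ID}/labels", json={"key-1": value})
    resp.raise_for_status()
    return resp.json()

# Steps from the report: new value, empty string, non-empty, empty again.
# Per the bug, the final call reports success but the value is not persisted.
for value in ["value-1", "", "value-2", ""]:
    print(set_label(value))
```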
xamarin/Xamarin.Forms
287056397
Title: XF002 - Xamarin.Forms tasks do not match targets
Question: username_0:
### Description
Working in VS Enterprise 15.5.2 and Xamarin 2.5.0.121934. The error XF002 - Xamarin.Forms tasks do not match targets appeared in the error list after the 3rd compile of a Xamarin project.
### Steps to Reproduce
1. I opened an existing Xamarin project
2. Debugged 3 times in a row via F5
3. The 4th time I hit F5, the error XF002 appeared in both the UWP and Common projects (those are the only two that I am compiling)

I believe the most interesting detail here is that between the 3rd and 4th debug runs I opened 2 more VS solutions, one of which was the Xamarin.Forms repo, which stresses the memory on my system.
### Expected Behavior
No error message
### Actual Behavior
Error XF002
### Basic Information
- Platform Target Frameworks:
  - iOS: NA
  - Android: NA
  - UWP: 16299
- Android Support Library Version: NA
Answers: username_1: Also seeing this with 2.5.0.122203 in 15.5.3 and in the 15.6 preview.
username_2: @username_0 it's been a while since you reported this, but next time, if you open or have had another solution open referencing Xamarin.Forms that throws this error, check whether the versions differed between those solutions. In too many cases, things staying in memory between solutions trigger this error. The most reliable solution is to clean, close all instances of VS, and reopen the solution you want to work on.
username_3: @username_2 I just did the following:
- Launched VS
- Opened MvvmCross (my branch that has v3 of XF referenced)
- Cleaned solution
- Closed and relaunched VS
- Opened MvvmCross
- Attempted to rebuild MvvmCross.Forms project
- Build failed, but not related to this issue

I guess this proves your point about it being related to upgrading/changing XF versions and VS holding onto references.
username_4: Updated Xamarin Forms this morning and now I cannot compile the solution. The error message is:
/Users/<username>/<solution_name>/packages/Xamarin.Forms.3.0.0.482510/build/netstandard1.0/Xamarin.Forms.targets(3,3): Error XF002: Xamarin.Forms tasks do not match targets. Please ensure that all projects reference the same version of Xamarin.Forms, and if the error persists, please restart the IDE. (XF002)
The detail of this error is:
/Users/<username>/<solution_name>/packages/Xamarin.Forms.3.0.0.482510/build/netstandard1.0/Xamarin.Forms.targets(44,3)
Notice that the error message and the detail both refer to Xamarin.Forms.targets, but with different numbers between the (). I have tried cleaning the solution, quitting VS and restarting the Mac, but to no avail.
username_5: This helped me resolve this error:
1. Do a build of the PCL/Standard project; this will probably fail.
2. Next, after the build, do a Clean of the PCL/Standard project; this might fail too.
3. After the above 2 steps, go ahead and Rebuild. Should work now.
4. You can proceed to build any of the platform (Android or iOS) projects after the above steps. Just remember to always opt for a rebuild FIRST for the platform project.
username_6: @username_5 Why are we talking about PCLs? Any good reason not to use a .NET Standard library?
username_7: This just worked for me:
1. Revert Xamarin.Forms to its prior version in the Forms project and your native project(s) (for me, I had to go from 3.1.0.583944 down one version to 3.0.0.561731 for the Forms project and iOS)
2. Clean/Build (this still failed)
3. Update the packages in both projects back to current, and THEN Clean/Build, which seemed to work 🤷‍♂️

Mileage may vary. I'm not sure if any of the other countless clean/build sequences leading up to these steps were also relevant, but I'd be interested to see if this works for anyone else (and for me the next time this inevitably happens..)
username_5: My discoveries so far: there is a mismatch between the platform/CPU architecture targets for the Standard project and the native projects (x86 or AnyCPU). Make sure the targets are the same for both before running or building. Most times, when it bombs with the "tasks do not match targets" error, this is probably the cause; fixing it makes the error go away, or so it seems. The only snag is that while debugging the app in a simulator or on a physical device, if an exception gets thrown and you try to rerun the app after making some changes, Visual Studio throws up a "deployment errors" exception. I typically fix this by changing the CPU target to, say, AnyCPU from x86 or vice versa. Man, this is tiring...
username_8: **The solution is to close VS in both cases, then restart VS and load the project.**
username_9: I resolved the issue by cleaning the solution first, then closing the VS instance. Delete the bin and obj folders, delete the temporary .vs folder, and then reopen the solution.
username_10: username_9 - thank you for that. Did the job. It is amazing what you have to know when you are using Xamarin; it seems a very unstable product.
username_11: Here we go again. I just updated to Xamarin.Forms 3.1.0.697729 and this is happening all over again. Cleaning the solution fails (with the same error). Deleting bin and obj (no fix). Restarting VS (doesn't fix). I'm stuck in the water again, rolling back to earlier versions. Took me hours last time. Come on Xamarin people. This is basic. And it's unacceptable.
username_2: @username_11 can you please confirm there are absolutely no other versions of Xamarin.Forms referenced in your solution? Inspect each csproj or other config. If all is well there, please provide us with details: version of VS, project types and config in your solution, solution files/proj files, or better yet the solution that exhibits this behavior.
username_10: @username_11 I have your version of Forms installed and came here to fix this problem accordingly. I found username_9's solution fixes it.
username_12: Had the same issue. Removing the *Xamarin.Forms* package and adding it again resolved the issue for me.
username_13: ![image](https://user-images.githubusercontent.com/24257201/44934339-19587e00-ad32-11e8-8c4c-d995d8b6a3ce.png)
MSBuild.exe and VBCSCompiler.exe were still there even though I had just compiled. Deleting them and building again worked for me.
username_14: @username_12 thanks, it works. Removing and re-adding only Xamarin.Forms from the PCL. My Xamarin.Forms version is 3.1.0.697729; I think that is the latest now.
username_15: I had the same error. To resolve this problem, I migrated my Core libraries to .NET Standard 2.0 (1.6 before) and my UWP projects to the min version Fall Creators Update (Anniversary Update before). However, errors with Xamarin Forms are always complicated, and we lose a lot of time debugging. It's frustrating; we need Microsoft to improve the tools!
username_16: The only thing that worked for me was to delete the error tag in the Xamarin.Forms.targets file
username_17: This shouldn't happen on the latest version of XF (because we do not throw the XF002 anymore). Some weird behavior can still happen if you have multiple versions of XF loaded at the same time (same solution, or in a different IDE), but we try to mitigate that by strong-signing our task assemblies.
Status: Issue closed
JuliaLang/Pkg.jl
555520348
Title: Artifact progress bar doesn't work on Windows
Question: username_0: On Windows, there is no progress bar, and this causes https://github.com/JuliaLang/Pkg.jl/blob/54eae6c1b591070918a65a581b7ab3a0c6d9d90c/src/Artifacts.jl#L910 to make the "Downloading artifact: ..." messages overwrite each other.
Answers: username_0: Should be fixed by using Downloads.jl
Status: Issue closed
irods-contrib/irods_tools_ingest
205990343
Title: python ingest, registration and sync tool
Question: username_0: We have a requirement to ingest and/or register a considerable amount of data at rest in large, sometimes parallel, file systems. A design consideration is a new iRODS user with hundreds of millions of files, and also possibly an existing user who wishes to periodically sync a large volume with the iRODS catalog.
Given a target directory, an initial list of features would include (a sketch of the first two items follows below):
* operates in parallel for all possible speed - recursively descend a file system and push fully qualified paths into a worker queue for ingest threads
* option to wait N seconds to ensure a file is at rest - landing zone style behavior
* option to register or ingest files
* option to checksum
* option to provide regular expressions to skip
* externalized metadata extraction using a DSL, inherited interface, or other mechanism for defining the rules to generate or extract metadata from the at-rest data
* option to set iRODS ACLs after data is ingested
* option for a target collection, or collections given a mapping function
* idempotent - ability to skip unchanged, properly ingested data and metadata

Other possible features:
* proxy as other iRODS users for ingest
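A minimal sketch of the first two bullet points; the `ingest` function, thread count, settle time and root path are placeholders where the real register/ingest, checksumming and metadata extraction would plug in:

```python
import os
import queue
import threading
import time

SETTLE_SECONDS = 30  # "wait N seconds to ensure file is at rest"
work = queue.Queue()

def scan(root: str) -> None:
    """Recursively descend the tree and push fully qualified paths."""
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            scan(entry.path)
        elif entry.is_file(follow_symlinks=False):
            work.put(entry.path)

def ingest(path: str) -> None:
    print("would ingest", path)  # placeholder for register/ingest actions

def worker() -> None:
    while True:
        path = work.get()
        try:
            # Landing-zone behaviour: requeue files modified too recently.
            if time.time() - os.path.getmtime(path) < SETTLE_SECONDS:
                work.put(path)
                continue
            ingest(path)
        finally:
            work.task_done()

threads = [threading.Thread(target=worker, daemon=True) for _ in range(8)]
for t in threads:
    t.start()
scan("/data/landing")  # placeholder target directory
work.join()
```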
FrankerFaceZ/FrankerFaceZ
390092869
Title: Emotes overlap
Question: username_0:
**Web Browser**: Chrome
**Do you use BetterTTV or other Twitch extensions**: No, only FFZ + Addon Pack
**FFZ Logs (via FFZ Control Center > Home > Feedback >> Log; if Applicable)**: https://putco.de/OTczMg.brainfuck
**Bug / Idea**: The latest version seems to have broken emotes: they now overlap underneath each other instead of being padded and separated neatly.
**Steps to Reproduce (if Applicable)**: When I spam a lot of emotes (around 12+), the emotes overlap.
Answers: username_1: Looks like I need to update the Emote Alignment CSS to go with the changed emote DOM. I'll get this fixed tomorrow. Thanks for the report.
Status: Issue closed
radiovisual/birdwatch
158504863
Title: Don't attach direct link to tweet in tweet markup
Question: username_0: Twitter must have made an update in the way they serve tweets, or I have some line of code lurking in the source that appends the direct link to the tweet at the end of the markup. What's worse is that this link doesn't appear to be clickable, which means it was never seen by tweet-patch and/or was added at the end of the process.
For example, this tweet:
![image](https://cloud.githubusercontent.com/assets/5614571/15799280/0b15a83a-2a4f-11e6-81ae-13547d7bb71a.png)
should not have the link at the end (which just takes you to the tweet itself)
Answers: username_0: I have confirmed that this is a tweet-patch problem. The link to the tweet is being included in the returned text from Twitter, so tweet-patch should be converting it to a valid link.
I won't be trying to remove the link from the tweet data, because there are certain cases where that final link could have been typed by the tweet author, and I don't want to mess with the tweet composition.
username_0: This was addressed by fixing / updating tweet-patch.
Status: Issue closed
pkumod/gAnswer
693324292
Title: gStore DBpedia triple file is not accessible outside China - PLEASE FIX IT
Question: username_0: Dear developers, your REST API and your gStore are not accessible in some countries outside China. We want to install a local version of gAnswer, but we need to set up a local gStore and we cannot download the DBpedia_triple file from Baidu netdisk. We cannot create a Baidu account outside China. Please fix these issues; otherwise your project is completely unavailable outside China. We cannot replicate experiments on gStore and we are not able to include your system in our research activities.
Answers: username_1: Hi pipokill
Our server manager has banned some IPs to avoid DDoS attacks, so the APIs may not be available in some regions. We just uploaded the triples to Google Drive; the link is https://drive.google.com/file/d/1c7h0PsR_eV4pYTwFIOPTGBn8Hi-5pXP9/view?usp=sharing
weka-2016/weka-2016
188989634
Title: 8.3 Technical Blog ~ 1 hr
Question: username_0: # 8.3 Technical Blog ~ 1 hr
- [ ] Start your Toggl timer.
**Write blog post**
- [ ] Create a `username.github.io/blogs/t6-scope.html` file.
**Explain to a non-tech friend**
- [ ] What 'scope' is, and how it works in JavaScript.
**Link your blog to the main page**
- [ ] On your `index` (home) page, create a link to your technical blog post.
- [ ] Stage and commit with a meaningful commit message.
- [ ] Push to GitHub to make it live!
- [ ] Paste a link to your live blog in the waffle ticket comments below.
**Share it!**
- [ ] On your `cohort-specific` Slack channel, share the link to your home page using the hashtag `#techblog[sprintNum]`, for example #techblog5.
Status: Issue closed
linq2db/linq2db
559817386
Title: using "distinct on" in postgres, is this possible? Question: username_0: is it possible to a query like here: https://www.sisense.com/blog/first-row-per-group-5x-faster/ in oracle it seems to be possible via first_value https://stackoverflow.com/questions/10515391/oracle-equivalent-of-postgres-distinct-on maybe we have a extension method for example "DistinctBy<>" where we could hand over a column or a new class with multiple columns? Answers: username_1: Please post SQL that you need and and information about database provider. Experiment yourself what is faster for your situation and we advise you how to create such SQL via linq2db. username_0: the sql is this select distinct on (customer_id) * from jobs order by customer_id, priority desc, created_at postgres now does a distinct only on customer_id, and fetches the rest of the data from the first row it gets (respecting the orderbys) It would be nice if we‘d get a extension method „distinctby“ wich could create this on postgres and use window functions on the others, like this: select * from jobs where id in ( select id from ( select id, row_number() over (partition by customer_id order by priority desc, created_at) as row_num from jobs ) as ordered_jobs where row_num = 1 ) username_1: I think it is "easily" to create such function even without linq2db source code change. linq2db supports window functions so we just have to create appropriate method chain expression ```cs static IQueryable<T> DistinctBy(this IQueryable<T> source, Expression<Func<T, object>> partition, IQueryable<T> oder) { // magic transformation code here } ``` And sample usage: ```cs var quert = from j in db.GetTable<Jobs>() from oj in db.GetTable<Jobs>().DistinctBy(oj => oj.CustomerId, q => q.OrderByDecending(it => it.Priority).ThenBy(it => it.CreatedAt)) .InnerJoin(oj => oj.Id == j.Id) select j ``` If you have a time i can guide you how to do such transformation. username_1: Code inside function should dynamically create this expression: ```cs var distinct = source.Select(q => select new RowNumberHolder<T> { RN = Sql.Ext.RowNumber().Over().PartitionBy(q.CutomerId) .OrderByDesc(q.Priority).ThenBy(q => q.CreatedAt).ToValue(), Value = q }) .Where(r => r.RN == 1) .Select(r => r.Value); ``` username_0: Yes, in the other dialacts we could create the method chain, but in postgres special SQL should be created! Cause Postgres supports "distinct on" directly in SQL! username_1: So we have to move our time to implement additional syntax which will work only if it is a Postgres and indexes on table are correct? FromSql is best solution for such cases. username_0: The synatx will also work if index are not correct ;-) No, only wanted to know if it is possible. Maybe We could reaturn a Expression Tree and for a special provider we could (somehow magicaly via attribute) provide another method wich creates the sql username_0: I look if I could generate the Expression tree for the "Windows functions". Maybe you can look when I'm finish if it looks okay. username_1: We have to implement new method and extend SelectQuery to handle distinct fields. I do not see other ways to do that right now. username_1: Implement window functions variant, and then we will invent/find way how to switch implementation for specific provider. Looks like it will be good solution. username_0: Will do username_0: @username_1 where do I find the "RowNumberHolder<T>" ? or do I need to implement it?
aws/aws-toolkit-jetbrains
544698882
Title: Sam build fails if executed within 60 seconds of a previous sam build command.
Question: username_0:
**Expected behavior**
It should be possible to invoke a Python Lambda function through the IDE within 60 seconds of the previous invocation.
**Your Environment**
- OS: Mac OS X (10.14.6, x86_64)
- JetBrains' Product: PyCharm 2019.3.1 Build #PY-193.5662.61 December 18, 2019
- JetBrains' Product Version: 2019.3
- Toolkit Version: AWS Toolkit (1.9-193), AWS CloudFormation (0.6.18)
- SAM CLI Version: SAM CLI, version 0.39.0
- JVM/Python Version: 11.0.5+10-b520.17x86_64, python 3.7.5
Answers: username_1: I too get this issue, however it's on every second run (unless I delete the build directory manually). This is because the sam cli build command deletes the entire build directory before restarting a build, and hence the requirements.txt that was in the build directory is no longer there. Once the build directory is gone, a subsequent run of the lambda function works, because it uses the requirements.txt in the root of my project.
An additional side effect of this is that all the requirements are downloaded every time the lambda function is run. Considering the time that takes even when just boto3 is installed, this makes for a very cumbersome and relatively unproductive environment for debugging lambda functions locally, with over a minute of build time for each code change.
For this particular issue a nice solution would be: if the requirements.txt in the root directory is the same as the one in the build directory, then do not do a full sam cli build, but instead just replace the lambda file and re-deploy to the docker container (see the sketch below). This would definitely make debugging lambda locally much faster and better. Thanks for any attention you can pay to this issue!
Windows 10 Professional
Pycharm Professional 2019.3.4
Python 3.7
SAM CLI version 0.45.0
aws-cli/1.17.10 Python/3.6.0 Windows/10 botocore/1.14.10
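A rough sketch of the suggested optimisation: skip the full `sam build` when `requirements.txt` is unchanged and only refresh the handler code. The build path and handler file name are placeholders for whatever the SAM template actually defines:

```python
import hashlib
import shutil
import subprocess
from pathlib import Path

ROOT_REQS = Path("requirements.txt")
# Placeholder: SAM writes builds under .aws-sam/build/<FunctionLogicalId>/
BUILT_REQS = Path(".aws-sam/build/HelloWorldFunction/requirements.txt")

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_if_needed() -> None:
    if BUILT_REQS.exists() and digest(ROOT_REQS) == digest(BUILT_REQS):
        # Dependencies unchanged: just refresh the handler code in place.
        shutil.copy("app.py", BUILT_REQS.parent / "app.py")  # placeholder handler
    else:
        subprocess.run(["sam", "build"], check=True)

build_if_needed()
```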
rui314/mold
1180513546
Title: RISC-V: Testing as-needed ... collect2: fatal error
Question: username_0: Please let me know which information should be shared regarding my build environment.
Answers: username_1: What is your machine and distro? I'm using a SiFive Unmatched board with Ubuntu.
username_0: The guest machine does not allow SSH access from the host's side because of a wrong configuration, so please also check Qemu's configuration. There is an option to perform automated builds and run test suites within the guest machine (scripts). Also, remote access via SSH could allow some CI integration to be done (against Qemu's RISC-V guest environment).
username_0: @username_1, were you able to reproduce?
DavideViolante/Angular-Full-Stack
284999334
Title: Deploy on VPS in VPN ?
Question: username_0: Hi, I have a problem deploying this project, or any Angular project, in prod mode. It works well on my localhost and everything is good, but I want to set it up on my VPN IP. I have changed MONGO_URL in the `.env` file to my VPN IP and that works. My question is how, and where, to change the localhost to my IP. I can change it only in package.json (`ng serve --host {{My IP}}`), but that is only the frontend host (the backend is not working, and it's only in dev mode). How do I change the configuration to make `npm run prod` work with another IP for both the frontend and the backend? Thanks for the help.
Status: Issue closed
iterative/dvc
548359512
Title: refactor: use ABCs for `Remote` base class instead of a plain object
Question: username_0: This would make the interface more explicit.
Answers: username_1: Not a big fan. It won't make our code significantly easier to read; we are not writing Java, after all. However, there are possible big gains from separating remote and cache functionality.
username_0: Let's close it then, @username_1. I also think it is better to rethink remote and cache :eyes:
Status: Issue closed
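For context, this is roughly what the proposal would have looked like; the method names are illustrative, not DVC's actual interface:

```python
from abc import ABC, abstractmethod

class Remote(ABC):
    """Explicit interface that every concrete remote must implement."""

    @abstractmethod
    def exists(self, path: str) -> bool: ...

    @abstractmethod
    def upload(self, src: str, dst: str) -> None: ...

    @abstractmethod
    def download(self, src: str, dst: str) -> None: ...

class LocalRemote(Remote):
    def exists(self, path: str) -> bool:
        return False  # stub

    def upload(self, src: str, dst: str) -> None:
        pass  # stub

    def download(self, src: str, dst: str) -> None:
        pass  # stub

# Remote() now raises TypeError, and a subclass missing a method fails at
# instantiation time instead of with AttributeError deep inside a transfer.
```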
aws/aws-parallelcluster
433754524
Title: SGE jobs remain in running state after compute instance is terminated
Question: username_0:
**Environment:**
- AWS ParallelCluster 2.2.1
- base_os: Ubuntu 16.04
- Scheduler: SGE
- Master instance type: default
- Compute instance type: c5.large

When a compute instance is terminated, either manually or because it is a spot instance, jobs running on those nodes do not get properly cancelled or marked as failed. Instead, they remain in the running state forever. Also, the instances are not removed from SGE; e.g. `qhost` gives the following output:
```
HOSTNAME ARCH NCPU NSOC NCOR NTHR LOAD MEMTOT MEMUSE SWAPTO SWAPUS
----------------------------------------------------------------------------------------------
global   -        - - - - -    -    -      -   -
ip-      lx-amd64 2 1 1 2 0.00 3.6G 241.0M 0.0 0.0
ip-      lx-amd64 2 1 1 2 -    3.6G -      0.0 -
ip-      lx-amd64 2 1 1 2 -    3.6G -      0.0 -
ip-    0 lx-amd64 2 1 1 2 0.00 3.6G 242.0M 0.0 0.0
```
I understand that variants of this problem have been mentioned in closed issues already (#687, #148). However, those fixes either do not apply to my environment or there is a new problem.
Answers: username_0: I now see an additional issue (#334) where it is said that this is expected behavior if there is still a job running on a terminated instance. Is there any way to delete all jobs running on an instance before it is terminated? Would you consider adding this as a feature? I think it is quite important for enabling work with spot instances and fault-tolerant applications.
username_1: Hi @username_0, better handling of the cases when an instance is reclaimed by spot (or manually terminated) is on the roadmap. Right now you would have to (force) delete the jobs, and the hosts will be removed in the next sqswatcher cycle.
username_0: Thank you for the clarification. I am glad it is being considered. At the moment, (force) deleting the jobs would be perfectly fine with me. Do you have a recommendation for how to do this automatically when a node is terminated?
username_2: I run the following /bin/csh script as part of the post_install script on the master node, to delete any jobs running on nodes that SGE considers non-responsive (see /opt/sge/bin/dead-nodes). My Python job scripts use DRMAA, which returns exception code 24 (no job exit code available) from SGE, which I use to resubmit the job. I tested this by manually deleting instances OR picking spot instances that are in short supply (e.g. c5n.18xlarge in us-east-2c), resulting in many spot cancellations, and my jobs recover on their own.
#!/bin/csh -f
setenv SGE_ROOT /opt/sge
setenv PATH "${PATH}:/opt/sge/bin/lx-amd64:/opt/sge/bin"
while ( 1 )
  foreach X ( `dead-nodes` )
    qmod -d all.q@$X
    foreach J ( `qstat -u \* | egrep $X | awk '{print $1}'` )
      qdel -f $J
    end
  end
  sleep 5
end
username_0: Thank you @username_2! Looks great.
username_3: You can set `reschedule_unknown` in sge_conf, which will cause jobs to be automatically rescheduled if the executing node is in an unknown state for long enough. See `man sge_conf`. This has been handling 99% of my rescheduling without intervention.
username_4: Yep, and this option will be enabled by default starting with the next release of ParallelCluster!
username_5: Because we have announced that we will be deprecating support for SGE in the near future (see: https://github.com/aws/aws-parallelcluster/wiki/Deprecation-of-SGE-and-Torque-in-ParallelCluster), we will not be performing additional enhancements specific to SGE. I am going to close this issue.
If you would like to request a similar enhancement for one of our other supported schedulers (Slurm or AWS Batch), please feel free to create a new issue. Status: Issue closed
The-Compiler/pytest-xvfb
222184929
Title: XIO: fatal IO error 0 (Success) on X server ":1001"
Question: username_0: I tried to run the test suite of [`ltfatpy`](https://gitlab.lif.univ-mrs.fr/dev/ltfatpy/tree/master) using `pytest-xvfb`, but it fails with the following error:
```
XIO: fatal IO error 0 (Success) on X server ":1001"
after 158 requests (158 known processed) with 1 events remaining.
```
The test suite succeeds if I call pytest with `xvfb-run -a`, and the verbose log indicates that no test has failed in either case. Any idea what could cause this?
Answers: username_1: Hmm, that's odd, especially the "Success" part :wink: Does it happen at the very end of the testsuite by any chance? What GUI toolkit are you using there?
username_0: TkInter (`matplotlib` TkAgg backend)
username_0: I am afraid I won't be of any help. It was mostly a heads-up in case someone else had trouble using it with a different UI toolkit. Worst case, we still have good old manual calls to `xvfb-run`, which worked in my case.
username_2: I have the following test that allows me to get the same "success" message at the end of pytest:
```py
from PyQt4 import QtGui, QtCore

class Window(QtGui.QWidget):
    def __init__(self):
        QtGui.QWidget.__init__(self)
        self.button = QtGui.QPushButton('Test', self)
        self.button.clicked.connect(self.handleButton)
        layout = QtGui.QVBoxLayout(self)
        layout.addWidget(self.button)

    def handleButton(self):
        print('Hello World')

def test_qtgui(qtbot, capsys):
    w = Window()
    qtbot.mouseClick(w.button, QtCore.Qt.LeftButton)
    out, err = capsys.readouterr()
    assert 'Hello World' in out
```
```
pytest test_gui.py -v
============================= test session starts ==============================
platform linux -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0 -- /home/thomas/miniconda/envs/pyqttests/bin/python
cachedir: .cache
PyQt4 4.11.4 -- Qt runtime 4.8.7 -- Qt compiled 4.8.7
rootdir: /home/thomas/gitrepos/pyqttest, inifile:
plugins: xvfb-1.0.0, qt-2.3.1
collected 1 item

test_gui.py::test_qtgui PASSED                                           [100%]

=========================== 1 passed in 0.07 seconds ===========================
python: Fatal IO error 0 (Success) on X server :1005.
```
username_1: FWIW I see something like this as well with Qt5 nowadays (I think it came with the new PyQt exit scheme). Interestingly, it only happens when I run a subset of my tests, not when I run all of them - so there might be *something* in there that fixes it.
username_3: I just ran into the same issue with PyQt5. I'm getting it when I'm repeatedly opening and closing a dialog which performs threaded tasks; the tasks and threads are being cleaned up before the dialog closes, and the main window freezes.
username_3: FWIW my issue was with a thread being closed and improperly closing sqlite3 resources.
username_1: For qutebrowser tests, it seems to depend on how much time passes after the widget tests have finished. For example, `tox -e py38-pyqt515-cov -- tests/unit/misc/test_miscwidgets.py` crashes, so does `tests/unit/misc/test_miscwidgets.py tests/unit/utils/test_utils.py`, but `tox -e py38-pyqt515-cov -- tests/unit/misc/test_miscwidgets.py tests/unit/utils/` works. Maybe we need to change something so we close the server *after* pytest-qt has had a chance to clean up widgets?
username_4: I'm having a similar problem using the TkAgg backend. The unit tests all run fine; the failure is in the cleanup. Using the latest version of matplotlib etc.
username_1: @username_4 Any chance you can show things (the exact output, ideally the code triggering it) instead of describing them? That might help in tracking this down.
username_4: The project is https://github.com/username_4/spatialmath-python and the raw log is:
2020-08-19T06:52:56.4452400Z env:
2020-08-19T06:52:56.4452578Z   pythonLocation: /opt/hostedtoolcache/Python/3.7.8/x64
2020-08-19T06:52:56.4452696Z   MPLBACKEND: TkAgg
2020-08-19T06:52:56.4452858Z ##[endgroup]
2020-08-19T06:53:03.1543067Z ============================= test session starts ==============================
2020-08-19T06:53:03.1545002Z platform linux -- Python 3.7.8, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
2020-08-19T06:53:03.1545596Z rootdir: /home/runner/work/spatialmath-python/spatialmath-python
2020-08-19T06:53:03.1546170Z plugins: timeout-1.4.2, cov-2.10.1, xvfb-2.0.0
2020-08-19T06:53:03.1546435Z timeout: 50.0s
2020-08-19T06:53:03.1546665Z timeout method: thread
2020-08-19T06:53:03.1546953Z timeout func_only: False
2020-08-19T06:53:03.1550184Z collected 151 items
2020-08-19T06:53:03.1550291Z
2020-08-19T06:53:03.7592137Z spatialmath/test_pose2d.py ..in log
2020-08-19T06:53:03.9149222Z ....... 17 deg
2020-08-19T06:53:03.9403330Z .......in log
2020-08-19T06:53:03.9502632Z .in log
2020-08-19T06:53:03.9571865Z .t = 0.15, 0.72; -43 deg
2020-08-19T06:53:04.0336046Z ........
2020-08-19T06:53:04.1289054Z spatialmath/test_pose3d.py ........ rpy/zyx = 17, 0, 0 deg
2020-08-19T06:53:04.1961094Z ............
2020-08-19T06:53:04.3084803Z spatialmath/test_quaternion.py ..........................
2020-08-19T06:53:04.3912650Z spatialmath/test_twist2d.py ............
2020-08-19T06:53:04.4234089Z spatialmath/test_twist3d.py .............
2020-08-19T06:53:04.4338518Z spatialmath/base/test_argcheck.py .....
2020-08-19T06:53:04.4508418Z spatialmath/base/test_quaternions.py ....
2020-08-19T06:53:11.6089887Z spatialmath/base/test_transforms.py .............................................
2020-08-19T06:53:11.6090129Z
2020-08-19T06:53:11.6090380Z =============================== warnings summary ===============================
2020-08-19T06:53:11.6090796Z spatialmath/test_pose2d.py::TestSO2::test_conversions
2020-08-19T06:53:11.6091080Z /opt/hostedtoolcache/Python/3.7.8/x64/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
2020-08-19T06:53:11.6091347Z   return f(*args, **kwds)
2020-08-19T06:53:11.6091434Z
2020-08-19T06:53:11.6092392Z -- Docs: https://docs.pytest.org/en/stable/warnings.html
2020-08-19T06:53:11.6092651Z ======================= 151 passed, 1 warning in 12.08s ========================
2020-08-19T06:53:11.6095921Z XIO: fatal IO error 0 (Success) on X server ":0"
2020-08-19T06:53:11.6096325Z after 3611 requests (3608 known processed) with 0 events remaining.
2020-08-19T06:53:11.6179884Z ##[error]Process completed with exit code 1.
2020-08-19T06:53:11.6236503Z Post job cleanup.
The last test file, spatialmath/base/test_transforms.py, does a bunch of matplotlib 3D plots and 3D animations using FuncAnimation. All graphical windows are closed after the tests using a tearDownClass() handler. The tests all pass, but it looks like the error occurs as pytest is exiting.
From what I can see of the code, a SIGTERM is sent to xvfb, but maybe this error is caused by the pipe closing before it dies? This is running on some version of Ubuntu provided by GH, apparently "latest". I'm happy to add debug output if you can advise what to add. Thanks.
username_4: @jhavl and I did some digging on this. The error happens in unconfigure, which calls code in the pyvirtualdisplay package. The error happens as soon as we attempt to terminate the Xvfb subprocess when THERE ARE STILL OPEN WINDOWS. This is really more an Xvfb issue than a pytest-xvfb issue, but there are lots of people in forums asking about this bug. The lesson: close all your windows after each unit test.
username_1: I still haven't had a chance to dig into this more, I'm afraid - thanks for the useful information though, @username_4! If that's all there is to it, I might be able to add some code to pytest-xvfb which force-closes all windows before pytest exits. I now found a way to reproduce this when running the entire testsuite of qutebrowser (rather than just a part):
```diff
diff --git a/tests/unit/utils/test_utils.py b/tests/unit/utils/test_utils.py
index d674dd694..bbeeff5cc 100644
--- a/tests/unit/utils/test_utils.py
+++ b/tests/unit/utils/test_utils.py
@@ -706,9 +706,9 @@ def test_set_unsupported_selection(self, clipboard_mock):
         (False, 'clipboard', 'fake text', 'fake text'),
         (False, 'clipboard', 'füb', r'f\u00fcb'),
     ])
-    def test_set_logging(self, clipboard_mock, caplog, selection, what,
+    def test_set_logging(self, monkeypatch, clipboard_mock, caplog, selection, what,
                          text, expected):
-        utils.log_clipboard = True
+        monkeypatch.setattr(utils, 'log_clipboard', True)
         utils.set_clipboard(text, selection=selection)
         assert not clipboard_mock.setText.called
         expected = 'Setting fake {}: "{}"'.format(what, expected)
```
Why? No idea.
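Until such a force-close lands in the plugin, the "close all your windows after each unit test" lesson can be automated from a project's own test suite. A sketch of a `conftest.py` teardown for the matplotlib/TkAgg case discussed here; a Qt suite would close its top-level widgets instead:

```python
# conftest.py
import matplotlib.pyplot as plt
import pytest

@pytest.fixture(autouse=True)
def close_figures():
    """Ensure no Tk windows are left open when Xvfb receives SIGTERM."""
    yield
    plt.close("all")
```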
coralproject/talk
286635543
Title: Comment Flagging status Flag/Flagged/?Moderated?
Question: username_0: ### Expected behavior
Once a moderator has actioned a flag, it should show a change in status. For example, having flagged my own post, I would like to know it was seen by a moderator.
### Actual behavior
I suspect it (my flag action) remains permanently flagged so that I don't do it again by mistake.
### Steps to reproduce behavior
This would have to be tested by moderation staff. Could I get a link to where the flag is documented?
Answers: username_1: Hey @username_0! The behavior here is that once a comment is Reported, it shows as reported; you can see that here:
<img width="529" alt="reported" src="https://user-images.githubusercontent.com/1077300/34680998-86b6cffc-f468-11e7-94f4-30a57e448332.png">
A comment that is reported shows in the Reported queue for moderation.
<img width="1273" alt="reported_queue" src="https://user-images.githubusercontent.com/1077300/34681015-930d46a0-f468-11e7-94a7-f8ab140910dc.png">
Depending on the newsroom and the customizations they've made to the software, there can be lots of other queues, report types, etc., but this is basic out-of-the-box functionality for Talk. We don't have any plans to alert users of moderation actions that moderators have taken; one reason for that is that commenters could easily game the system. But I will bring this up in our next product meeting and can let you know if/when it comes up on our roadmap. Thank you!
Status: Issue closed
username_1: Confirmed this is also working as expected on TI:
<img width="709" alt="reported_ti" src="https://user-images.githubusercontent.com/1077300/34681654-8d0493ba-f46a-11e7-9df2-c6cca5f37cda.png">
username_0: This is going to sound bizarre, but do my View Options include an alphabetical list?
sldrcyang/nMAGMA
783683798
Title: gene-set.txt file in tutorial
Question: username_0: Hello, can you please tell me how you created the gene-set.txt file in your tutorial? Which software did you use, and what is the format of that file?
Thanks
Ana
Answers: username_1: Hi,
We downloaded gene sets directly from the Molecular Signatures Database (MSigDB, v 7.1, http://www.gsea-msigdb.org/gsea/msigdb) (e.g. ontology gene sets, as also described in our manuscript) and organized them into the format required by MAGMA, named gene-set.txt (which looks like: gene-set1 gene1 gene2 gene3... \ngene-set2 gene1 gene2...). You can download the gene sets you need from MSigDB. Good luck!
Yang
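For anyone scripting this: MSigDB distributes gene sets as GMT files (one set per line: set name, a description field, then the genes, tab-separated), so the conversion to the format described above takes only a few lines. A sketch; the input and output file names are illustrative:

```python
def gmt_to_magma(gmt_path: str, out_path: str) -> None:
    """Convert an MSigDB .gmt file to MAGMA's gene-set format:
    one set per line, set name followed by its genes, space-separated."""
    with open(gmt_path) as src, open(out_path, "w") as dst:
        for line in src:
            fields = line.rstrip("\n").split("\t")
            name, _description, genes = fields[0], fields[1], fields[2:]
            dst.write(name + " " + " ".join(genes) + "\n")

gmt_to_magma("c5.go.v7.1.symbols.gmt", "gene-set.txt")  # illustrative names
```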
open-contracting/kingfisher-process
416179534
Title: At time of scrape, store a copy of all extensions used
Question: username_0: Scenario:
* We scrape some data that uses extensions
* 3 months pass .......
* The extension changes, and it's an unversioned extension, so it just changes
* 3 months pass .......
* We load the data again because we want to evaluate it, so we load it, and check it

PROBLEM! The data now fails validation because we are checking old data against a new extension schema, and a bunch of fields are missing, have the wrong type, etc .... this is unfair! We really need to be checking the data against the extension as it was at the time of the original scrape!
SOLUTION: When we get data, also get copies of the extensions, schemas, codelists, etc .... and save those alongside the files! When we recheck later, use these copies.
ARE EXTENSIONS VERSIONED OR NOT? Obviously, if the extension is versioned properly and we trust that versioning to be done well, then this won't be a problem at all. So the question is: how many extensions are unversioned? How bad a problem is this?
Realised when reading https://github.com/open-contracting/lib-cove-ocds/issues/9#issuecomment-468706578
Answers: username_1: Most extensions are unversioned, and that's not likely to change to the point that most are versioned, i.e. we will have to handle the case of unversioned extensions for the foreseeable future.
Thinking through different scenarios: if a publisher has data, then changes its extensions in a backwards-incompatible way, but doesn't update its old data, then that data should fail. It doesn't matter that, at one time, its data and schema matched such that it would pass. We require that any presently accessible data match any presently referenced schema.
So, in the above scenario, if the publisher did go back and change their old data to match the updated extension, we should re-download that old data before re-checking it. If they didn't go back, and now their old data has errors according to the updated extension, then that is a true error and isn't unfair. Publishers shouldn't be making backwards-incompatible changes to their extensions, and if they are, then they should at least version them (or publish them at different URLs).
username_2: We're planning on keeping a record of pretty much everything we ever do on Kingfisher, right? So we should still be able to say with confidence that a certain publisher's data passed validation on a certain date, even if we can't now reproduce that because the extensions it uses have changed?
username_1: Yes to the second question – I don't think we have a use case for re-checking year-old data against year-old extensions, but we do have a use case to say "publisher X passed validation at time Y" – though, regarding the first question, that doesn't necessarily need database support, as we'll have logged that in feedback reports, MEL measurements, etc.
username_0: Are we planning on doing that outside Kingfisher? At the moment, if you delete a collection from Kingfisher, you delete the check results too.
username_1: Leave it up to the user. Feedback reports and MEL reports will mention results; granular results can then be discarded.
When Kingfisher is used for another purpose, I assume the relevant results will be captured at least as prose somewhere… If anyone uses Kingfisher and never reports any results anywhere else, then I assume that person won't be deleting their collections… username_1: These commits might be relevant in the old Kingfisher: https://github.com/open-contracting/kingfisher/commit/59a131b5164b0cca663295bd2896636f98628145 https://github.com/open-contracting/kingfisher/commit/03f969b3aede3d2c8134626bfa21fe1f0621c623 https://github.com/open-contracting/kingfisher/commit/c20cadc55dcc6e0c1d1439ffb2f998191fc6d5c6 username_1: The next version of the Extension Registry Python Package means that, if Kingfisher Process downloads all unique extensions referenced by packages, then the package's ProfileBuilder can use those downloaded extensions to generate an ad-hoc 'profile', which can be made available to other steps (e.g. the check step – if/when lib-cove-ocds allows passing in a schema) – so that those don't need to be retrieved at the time the check is performed. username_1: The Extension Registry Python Package can now generate extended package schema (like CoVE).
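As a sketch of the "save copies alongside files" idea from the original post: archive every extension URL referenced by a package at scrape time, keyed by URL, so later re-checks can use the schema as it was. OCDS packages list extension URLs in an `extensions` array; the storage layout and function name here are illustrative, not Kingfisher's actual implementation:

```python
import hashlib
import json
from pathlib import Path

import requests

def archive_extensions(package: dict, archive_dir: str = "extensions") -> None:
    """Download each referenced extension once, keyed by a URL hash,
    so re-checks can use the extension as it was at scrape time."""
    out = Path(archive_dir)
    out.mkdir(exist_ok=True)
    for url in package.get("extensions", []):
        target = out / (hashlib.sha256(url.encode()).hexdigest() + ".json")
        if not target.exists():  # idempotent across collections
            target.write_text(requests.get(url).text)

with open("release_package.json") as f:  # illustrative input file
    archive_extensions(json.load(f))
```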
operable/cog
141691971
Title: Support '.yml' and '.yaml' extensions
Question: username_0: Config files are in YAML. Currently we only support files with the `.yaml` extension. Since YAML files can also have a `.yml` extension, we should support both.
Answers: username_0: ## Motivation
Even though `.yaml` is the official extension, some folks use `.yml`. We should support both to make things as easy for our users as possible.
## Objective
Support both `.yaml` and `.yml` in an easily maintainable and modifiable way.
## Research
There are several spots that will need some work to support multiple file extensions. Most concern relay: both `Relay.Bundle.Installer` and `Relay.Bundle.Scanner` depend on knowing the correct file extension. Additionally, there are several places scattered throughout relay that reference config file extensions.
`Relay.Bundle.Scanner` only references the extension in one function, `pending_bundle_files/0`. `Path.wildcard` supports alternates (http://elixir-lang.org/docs/stable/elixir/Path.html#wildcard/2), so it should be an easy fix for the scanner.
`Relay.Bundle.Installer` is a bit more complicated. We often use the pattern of checking whether a path string ends with the config file extension to make decisions about what work needs to be done; for example: https://github.com/operable/relay/blob/599299d9592c402172d3a9850fdf9027288ca78d/lib/relay/bundle/installer.ex#L65. I think we can shift some of that logic to spanner and simplify things a bit.
## Plan
We need to add some utilities to spanner to help us identify what type of bundle is being installed. Spanner should also return the list of file extensions that we support for config files. Relay will then need to be modified to accept a list of said extensions and act accordingly.
- [ ] Add helper functions to spanner for bundle identification
- [ ] Have spanner return a list of supported extensions
- [ ] Update relay to work with the new spanner utility function/s and list of extensions
toggl/toggldesktop
485133653
Title: Group TEs from different workspaces without project separately
Question: username_0: ### 💻 Environment
Platform: all platforms
### 📒 Description
Currently, the following is the case in desktop: when there are multiple TEs on the same day with the same description, no project assigned, and in different workspaces, they are grouped together when TE grouping is on. That should change so that they are grouped separately, since they are in different workspaces, and everywhere else (e.g. reports, the mobile app, etc.) they will not be grouped or summed up.
### ⭐️ Why do you want this?
Based on the discussion here: https://github.com/toggl/discussions/issues/66
Based on that discussion, it was agreed that this should be the behavior across platforms (web, desktop & mobile)
Status: Issue closed
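In grouping terms, the requested change amounts to making the workspace part of the group key for project-less entries. A small illustrative sketch; the field names are assumptions, not the client's actual data model:

```python
from collections import defaultdict

def group_key(te: dict) -> tuple:
    # Workspace is part of the key, so identical descriptions without a
    # project no longer merge across workspaces.
    return (te["workspace_id"], te["project_id"], te["description"], te["date"])

def group(entries: list) -> dict:
    groups = defaultdict(list)
    for te in entries:
        groups[group_key(te)].append(te)
    return groups
```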
imgix/drift
435428980
Title: safari: strange inline pane cuts in safari
Question: username_0: Safari does not display the image in the zoom pane correctly. After the bigger zoom image is loaded, the image is shown but mostly cut off at varying heights. I provided an example to reproduce it, along with screenshots. I already tried to fix this, but I had no idea where to start... it seems like Safari does not fully display the transformed image...
**Reproduce**
Try this in Safari (clearing the cache makes it easier to reproduce). Fastest way: start at the top and move the cursor to the bottom of the image. https://codepen.io/studio-08-thomas/pen/GLGrKN
**Screenshots**
![image](https://user-images.githubusercontent.com/20873706/56461129-f3114c80-63ad-11e9-9338-cd7ecd22a03a.png)
![image](https://user-images.githubusercontent.com/20873706/56461109-9f066800-63ad-11e9-9902-3917c9aaf998.png)
![image](https://user-images.githubusercontent.com/20873706/56461206-1688c700-63af-11e9-972b-151ef00579e7.png)
**Expected behaviour**
The image should be fully visible, not cut off.
**Information:**
- drift version: 1.3.3 / 1.3.4 (have not tried other versions)
- browser version: safari 12.1 & safari tp 12.2
**Additional context**
Works in all other major browsers (have not tested it in IE/Edge so far).
Thanks for the great stuff - it's really nice to see and great to use :)
Answers: username_1: Hey @username_0, sorry for the silence on this issue so far; it's taken a few tries, but I finally have some updates.
First, to address the inline cuts: I noticed that the overlay image used in your example has `Content-Disposition: attachment` in its response header. I suspect that might be why the inline cuts occur in _some_ cases, as it may [interact strangely](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition#As_a_response_header_for_the_main_body) with Safari, depending on how it handles those requests. In this [codepen](https://codepen.io/username_1/pen/VwZWRKy), I've included a similar example using imgix images and can see that the same issue does not occur. So I suspect that by changing where you request your images from, you may be able to avoid this issue.
Incidentally, this issue also highlighted a different bug caused by [webkit not firing `load` events](https://stackoverflow.com/a/5024181) on second hover, causing the "loading" animation that can be seen on the cut-off portion of the image. I'll be working on a fix for this shortly.
username_0: Hey @username_1! Nothing to be sorry about. I am actually happy you got a look at it and gave me feedback. Thank you for this information; it's pretty useful, and I am looking forward to the patch!
A small off-topic question: is there a convenient way to change the zoom factor immediately, without having to hack the mouseenter/mousemove events (for example, manually handled mousewheel zoom or touch gestures)? Should I create a new issue for it? (That would be a nice feature.)
username_1: If I understand your question correctly, no, I don't think there is a convenient way to change the zoom factor on the fly. But I agree that would be a cool feature! Please feel free to open a new issue for it, thanks.
Status: Issue closed
ludens-reklamebyra/react-crisscross
345522200
Title: Add table of contents to readme
Question: username_0: **Is your feature request related to a problem? Please describe.**
The readme is now of a length where it would be beneficial to have a TOC in order to find what you are looking for more easily.
**Describe the solution you'd like**
Create a table of contents with links to the different sections. There may be a need to change some of the structure of the document as well.
Answers: username_1: I agree that the docs are getting big. A TOC is one solution, but we should also discuss the possibility of making a docs website.
username_1: If we make a docs website, we can also have an examples page in there with live examples/code (maybe embedded from CodeSandbox). These are just suggestions, but I would actually like having a website to gather all the docs.
username_0: Maybe a TOC could be seen as a temporary solution then? Until a docs website is in place?
username_1: We can do that.
Status: Issue closed
constantinpape/affogato
1098226172
Title: Enable interactive mws out of core
Question: username_0: From the meeting today: we want to enable interactive mws out of core.
High-level API discussion:
- `segment`: just segment everything given the current constraints (respecting locked segments)
  - in core: already there, corresponding to the [call function of the interactive mws class](https://github.com/username_0/affogato/blob/master/src/python/module/affogato/segmentation/interactive_mws.py#L122) in the prototype
  - out of core: a prototype exists for a block-wise mws, but it cannot ingest user-defined constraints
- `segment_current_fov`: segment everything in the current field of view, applying some "padding" (e.g. by extending to the next neighbors in the rag) for fast iteration (both in core / out of core); not implemented yet

Thinking about this more, a challenge for running the interactive mws out of core is that (in the current iteration) we need a grid graph built up for the entire image. This happens in memory and is also not economical to do for a very large volume out of core, because we would need to serialize every node (= pixel) and edge (= all local and long-range offsets).
A potential solution is to never construct the big grid graph, but instead construct small graphs for the field of view on the fly when a user interacts with the plugin, and then use this grid graph to export to the constraint list. To make this work fully out of core, the block-wise mws would then need to be implemented s.t. it computes the grid graph for its current block and ingests the extra constraints. I think this does not need many changes compared to our current implementation; we just need to pass the shape and strides of the full array when computing the grid graph, s.t. the node index is treated globally. For the block-wise mws we also need to think about how to handle affinities between block boundaries (probably just by adding a corresponding halo).
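The "node index treated globally" idea from the last paragraph is cheap to express with numpy: derive each block's node ids from its offset within the full volume, so that per-block grid graphs and exported constraints agree on a common numbering. A sketch assuming C-order flattening, which matches numpy defaults; the function name is illustrative:

```python
import numpy as np

def global_node_ids(block_offset, block_shape, full_shape):
    """Node ids for a block, numbered as if the grid graph covered the
    full volume, so constraints can reference nodes globally."""
    coords = np.meshgrid(
        *[np.arange(o, o + s) for o, s in zip(block_offset, block_shape)],
        indexing="ij",
    )
    return np.ravel_multi_index([c.ravel() for c in coords], full_shape)

# Example: the 2x2x2 block at offset (10, 20, 30) of a (100, 200, 300) volume.
ids = global_node_ids((10, 20, 30), (2, 2, 2), (100, 200, 300))
```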
jooaodanieel/GCommit
371263806
Title: Not Python 2 friendly
Question: username_0: When I try to execute it using Python version 2.7, it shows this error:
`Traceback (most recent call last):
File "/usr/local/bin/git-gcommit", line 100, in <module>
main()
File "/usr/local/bin/git-gcommit", line 95, in main
except FileNotFoundError:
NameError: global name 'FileNotFoundError' is not defined`
Answers: username_1: I will take this one.
Status: Issue closed
username_2: @username_0 did you see? This issue was quickly closed! Now it might work for you :rocket:
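For reference, `FileNotFoundError` only exists on Python 3; the usual compatibility shim is to alias it on Python 2. A sketch of the general pattern, not necessarily how the project resolved it:

```python
try:
    FileNotFoundError  # exists on Python 3
except NameError:      # Python 2: fall back to the closest built-in
    FileNotFoundError = IOError

def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None  # works on both Python 2 and 3
```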
wulf7/utouch
976355071
Title: mice (not mouses)
Question: username_0: Today I discovered that it's debatable <https://english.stackexchange.com/q/9836/11504>; however, my sense (as a native speaker of British English) is that **_mice_** is more correct and more commonly used.
![image](https://user-images.githubusercontent.com/192271/130353648-d52e0f85-7083-44bb-b69b-bb29282b3061.png)
----
Random side note: when I first saw _mouses_, I thought of a cartoon series that I watched around fifty years ago: <https://www.youtube.com/watch?v=E-Nrykw3m3I>
Answers: username_1: Fixed. Thank you!
P.S. At least Walmart uses "mouses" as the plural form: https://www.walmart.com/c/kp/microsoft-wireless-mouses
dof-dss/architecture-catalogue
610703467
Title: Record Views to be Captured in Audit Logs / Usage Tracker
Question: username_0: As an NICS EA user, I want the ability to see when a user has viewed a catalogue entry. At present, only Create, Update and Delete actions are being captured. Reads should be captured as well.
Answers: username_1: Only individual full record views are recorded, not records returned as a collection via browse or search.
username_0: This appears to be working as expected (the JSON associated with the records viewed appeared in the Audit portal). This will be tested more comprehensively when testing the Audit functionality in detail. Closing.
Status: Issue closed
France-ioi/taskgrader
609958264
Title: taskgrader on edx scrollbar minimum height?
Question: username_0: Here is the bug: https://github.com/username_0/c-programming-with-linux-MOOC-issues-tracker/issues/635 Thanks for the help!
Answers: username_0: More globally: https://github.com/username_0/c-programming-with-linux-MOOC-issues-tracker/issues/636
username_0: Hi @mblockelet and @mathias-hiron, would it be possible to look into this issue? Thanks!
sifive/riscv-debug-spec
209420031
Title: "Debug mode" should be changed to "Halt mode". Question: username_0: When I reviewed the reset in 8.3, the term Debug Mode showed up: "If the halt signal is asserted when a core comes out of reset, the core must enter Debug Mode before executing any instructions " However, the "debug mode" should be "halt mode". So, every reference to debug mode should be changed to halt mode. Moreover, the privileged spec use the term "debug mode" so we will need to sync with privileged spec for the term. Answers: username_1: I think I fixed the debug spec. I'd like to hold off on changing the privileged spec until we're happy with the state of the debug spec. username_0: yes, I feel the same for the privileged part. Thanks for your fixing. username_2: We could also avoid changing the priv. spec by saying something like "Halt Mode is synonymous with Debug Mode described in Priv Spec" username_1: That seems unnecessarily confusing, especially since the Debug Mode description in the priv spec was created by reading an older version of the debug spec. username_2: @username_3 FYI we are talking about removing the concept of "Debug Mode" entirely from this spec, it's just called "Halt Mode". username_3: The RISC-V privileged architecture refers to privilege modes by a single letter, and there's already an H-mode, for hypervisor. So this name collision is not ideal. Specifically, in the SiFive implementation, the additional privilege mode used by the debugger should continue to be named D-mode. Or at least something that doesn't begin with U, S, H, M, or V. username_4: I am wondering: Why exactly did we name it "halt mode" and not "debug mode"? username_0: I don't know, so I suggest we could change it from now. Status: Issue closed username_2: Thanks this has been changed.
TkTech/pysimdjson
709495109
Title: Question: Comparison with other Python-JSON packages
Question: username_0: I'm currently comparing various Python packages for handling JSON ([link](https://github.com/username_0/algorithms/tree/master/Python/json-benchmark): cpython 3.8.6 json, simplejson, ujson, simdjson, orjson, rapidjson). I'm trying to understand their differences. I would be super happy if you could help me with that by answering some questions:
1. Is there any difference in features between the JSON packages?
2. I compared reading/dumping a 2.3MB GeoJSON, a 631KB Twitter JSON, and a 2MB JSON full of floats. Is there anything else you think I should compare for benchmarking? Do you have internal benchmarks?
3. Are you in contact with the other Python JSON package developers? Do you maybe share the way you benchmark, or test cases?
4. Are you in contact with JSON package developers from other languages?
5. Are there other packages / articles for comparison I should have a look at?
6. My benchmarks show that pysimdjson is pretty fast for reading, but rather slow for writing. Where does that come from?
7. What is the relationship to [libpy-simdjson](https://pypi.org/project/libpy-simdjson/)?
8. Have you considered adding another maintainer or moving the git repository to an organization? This could build trust that the fate of the project does not stand or fall with a single developer.
9. According to the Trove classifiers, you consider this package to be in "alpha" state. Considering the fact that it is at version 3.x, over a year old, has multiple contributors and is used by other people... is that still current?
Answers: username_0: I think one difference is that with simdjson, I could re-use the parser object. I haven't seen that before.
Status: Issue closed
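A small sketch of the parser-reuse pattern that last comment refers to, based on the documented pysimdjson API (names may differ slightly between versions; the sample document is made up):
```python
import simdjson  # pysimdjson

parser = simdjson.Parser()                  # one parser, reused across documents
doc = parser.parse(b'{"user": {"id": 1}}')  # parse without eagerly building Python objects
user_id = doc['user']['id']                 # lazy, dict-like access into the document
# Caveat: calling parser.parse() again recycles the internal buffer,
# which invalidates proxy objects from the previous parse.
```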
IBM-Cloud/ibm-cloud-cli-sdk
405342648
Title: ibmcloud slow
Question: username_0: (I hope this is the appropriate place to report this)
I've installed the `ibmcloud` executable in a variety of ways (all the available ones, basically), and they all run very slowly:
```
$ time ibmcloud -v
ibmcloud version 0.13.1+0536e96-2018-12-20T10:00:05+00:00

real	19m15.144s
user	0m0.370s
sys	0m0.136s
```
I'm on Fedora 29, and my machine is doing fine -- disk usage is normal, memory is normal, cpu usage is normal, nothing that could easily explain why real time is so much higher than user/sys time.
Any idea as to why this is happening?
Answers: username_1: is it repeatable?
username_0: I'm not sure you can reproduce it, but yes, I can't use the CLI because of this; I had to deploy my app from another computer because it was so slow.
syndesisio/syndesis
370609905
Title: Google Calendar connector: Update/Create event actions date parsing issue
Question: username_0: ## This is a...
<pre><code>
[ ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Documentation issue or request
</code></pre>
## Description
The createGoogleEvent method relies on concatenating the date and time parts of the datetime, but the time part can be missing. The resulting stack traces with more info:
- Create event
```
java.text.ParseException: Unparseable date: "2018-10-01 null"
    at java.text.DateFormat.parse(DateFormat.java:366)
    at io.syndesis.connector.calendar.GoogleCalendarSendEventCustomizer.createGoogleEvent(GoogleCalendarSendEventCustomizer.java:169)
    at io.syndesis.connector.calendar.GoogleCalendarSendEventCustomizer.beforeProducer(GoogleCalendarSendEventCustomizer.java:112)
    at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
    at io.syndesis.integration.component.proxy.ComponentProxyProducer.process(ComponentProxyProducer.java:44)
    at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
    at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:110)
    at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
    at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:110)
    at io.syndesis.integration.runtime.logging.ActivityTrackingInterceptStrategy$EventProcessor.process(ActivityTrackingInterceptStrategy.java:79)
    at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:110)
    at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
    at org.apache.camel.processor.Pipeline.access$100(Pipeline.java:43)
    at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:157)
    at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:262)
    at org.apache.camel.processor.RedeliveryErrorHandler$2.done(RedeliveryErrorHandler.java:560)
    at io.syndesis.integration.runtime.logging.ActivityTrackingInterceptStrategy$EventProcessor.lambda$process$0(ActivityTrackingInterceptStrategy.java:93)
    at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:166)
    at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:262)
    at org.apache.camel.processor.RedeliveryErrorHandler$2.done(RedeliveryErrorHandler.java:560)
    at org.apache.camel.processor.SendProcessor$1.done(SendProcessor.java:160)
    at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:166)
    at org.apache.camel.util.component.AbstractApiProducer$1.run(AbstractApiProducer.java:98)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
- Update event
```
java.text.ParseException: Unparseable date: "2018-10-01 null"
    at java.text.DateFormat.parse(DateFormat.java:366)
    at io.syndesis.connector.calendar.GoogleCalendarUpdateEventCustomizer.createGoogleEvent(GoogleCalendarUpdateEventCustomizer.java:174)
    at io.syndesis.connector.calendar.GoogleCalendarUpdateEventCustomizer.beforeProducer(GoogleCalendarUpdateEventCustomizer.java:116)
    at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
[Truncated]
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
    at org.apache.camel.processor.Pipeline.access$100(Pipeline.java:43)
    at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:157)
    at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:262)
    at org.apache.camel.processor.RedeliveryErrorHandler$2.done(RedeliveryErrorHandler.java:560)
    at io.syndesis.integration.runtime.logging.ActivityTrackingInterceptStrategy$EventProcessor.lambda$process$0(ActivityTrackingInterceptStrategy.java:93)
    at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:166)
    at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:262)
    at org.apache.camel.processor.RedeliveryErrorHandler$2.done(RedeliveryErrorHandler.java:560)
    at org.apache.camel.processor.SendProcessor$1.done(SendProcessor.java:160)
    at org.apache.camel.processor.Pipeline$1.done(Pipeline.java:166)
    at org.apache.camel.util.component.AbstractApiProducer$1.run(AbstractApiProducer.java:98)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
Answers: username_2: @username_0 please verify the fix
Status: Issue closed
TechieGuy12/PlexServerAutoUpdater
225516416
Title: Error: object reference not set to an instance of an object
Question: username_0: When launching psupdate.exe I get the error: object reference not set to an instance of an object.
On: Windows Server 2016
Answers: username_1: What version of PlexServerAutoUpdater are you using? Are you logged in with an admin or standard user account, or the System account?
Status: Issue closed
username_0: Can be resolved - I installed the old (3.5) .NET feature and rebooted, and it is now running.
uikit/uikit
230253594
Title: 3.0.0-beta.23 - uk-section-primary and uk-alert heading
Question: username_0: ### UIkit version
3.0.0-beta.23
### Browser
Chrome / Firefox / Safari
### Reproduction Link
https://codepen.io/anon/pen/gWQZwN
### Steps to reproduce
Have a section that is primary, without the `uk-preserve-color` class, and within that an alert with a heading.
### What is Expected?
Headings within alerts shouldn't get their colour overwritten.
### What is actually happening?
The heading colour is overwritten; it should act like it does with `uk-preserve-color` on the section.
switch-model/switch
770270983
Title: Python 3.0? Question: username_0: I was wondering if there were plans to update to Python 3? https://docs.python.org/3/howto/pyporting.html If not, is there an appetite for such a move? Status: Issue closed Answers: username_1: Switch has run on Python 3 since version 2.0.4. Please see [here](https://github.com/switch-model/switch/releases) for a list of releases. The easiest way to install the latest release is via `conda` or `pip`, as described [here](https://github.com/switch-model/switch/blob/master/INSTALL.md).
keplergl/kepler.gl
446583350
Title: Cluster layer goes blank
Question: username_0: **Describe the bug**
As stated, the `Cluster Layer` goes blank when zooming out at small zoom levels; the official demo can reproduce it. Change the `earthquakes` demo point layer to cluster:
![image](https://user-images.githubusercontent.com/1115805/58094923-5e944700-7c04-11e9-9e4f-bb81d77f68ca.png)
Then zoom out.
![image](https://user-images.githubusercontent.com/1115805/58094946-6d7af980-7c04-11e9-9163-e2a869b9dace.png)
Answers: username_0: I figured out that guarding ClusterLayer's setState can avoid it, but I don't think it's the right way. Any help?
```diff
- this.setState({clusters});
+ if (clusters !== null && clusters.length) {
+   this.setState({clusters});
+ }
```
username_1: hmm, it doesn't seem to happen here, are you on the latest kepler.gl?
<img width="1394" alt="Screen Shot 2019-05-27 at 7 19 59 PM" src="https://user-images.githubusercontent.com/3605556/58446178-87a95180-80b4-11e9-9784-8741fc1c43ce.png">
username_0: Yes, the version number in the screenshot is `1.0.0-2`, which is https://kepler.gl/demo/earthquakes. As I said, you have to change it to the **Cluster** layer. The layer in your capture is a Point layer, am I right?
username_0: ![image](https://media.giphy.com/media/U3ms0LiYwWcpRrPVjZ/giphy.gif)
I made a gif
username_1: My bad, you were talking about the cluster layer, yea I was able to reproduce this, will investigate
username_1: Looks like this is inherited from the clustering algorithm in [supercluster](https://github.com/mapbox/supercluster). When the zoom is at a very low level (<4) and the radius is relatively small, supercluster will return 0 clusters.
username_0: But supercluster's official demo with Leaflet works well at very low zoom levels.
![image](https://user-images.githubusercontent.com/1115805/59582029-75499300-9109-11e9-8f4e-2338b1f93cb2.png)
username_2: @username_3 Can you look into this please? Thank you
username_3: TLDR: https://github.com/mapbox/supercluster/issues/86
----
Ok, I tracked down the issue. At low zoom levels it's possible for `geoViewport.bounds` to return a minimum longitude that is < -180 or > 180, resulting in a bounding box where the minimum is larger than the maximum. So for example:
```
longitude: -109.6070260234451
latitude: 30.862131808723703
zoom: 4
width: 1680
height: 851

const bbox = geoViewport.bounds([longitude, latitude], zoom, [width, height])
// [-183.427734375, -4.959615024698014, -35.771484375, 57.06463027327855]
// note the -183 longitude
```
A simple fix for this is to clamp the longitude to between -180 and 180, as [Supercluster already does](https://github.com/mapbox/supercluster/blob/v3.0.3/index.js#L77) for latitudes <s>(not sure why longitudes don't get this fix)</s>. **Update: it looks like they already fixed this: https://github.com/mapbox/supercluster/issues/86. Will bump Kepler's package version.** IMO this is a bug in `geoViewport.getBounds` since it is an invalid coordinate, but this fix should work for now.
ampproject/amphtml
194232563
Title: Forbid invoking hasOwnProperty as a method
Question: username_0: `#hasOwnProperty` is almost exclusively used to test objects you didn't create, and it breaks when that object is prototypeless. A much safer way is to use `Function#call` to invoke it with the object as the context:
```js
const hasOwnProperty = Object.prototype.hasOwnProperty;
if (hasOwnProperty.call(foreignObject, property)) {
  // ...
}
```
Re: https://github.com/ampproject/amphtml/pull/6542
Answers: username_1: Is this request prioritized for this year? Thanks.
username_0: Should be, it's a pretty simple change.
username_2: I'll work on it this week after my on-call duty ends
SeleniumHQ/selenium-ide
413993500
Title: Is there any way in the new Selenium IDE where we can reuse the same lines of steps across tests
Question: username_0: ## 🚀 Feature Proposal
In the old Selenium IDE we had a concept called rollup, plus a user-extensions.js upload as a developer option, where we could write small methods. If we uploaded the user extensions file, those methods would appear as (custom) commands to use in Selenium IDE. Rollup is a command where we give a method name as the target and pass different parameters to the method. Rollup helps us reuse the same lines of code across multiple test cases in a suite.
## Motivation
Lots of duplicate steps will be removed, and there is a single point of maintenance if something is changed.
## Example
Command: rollup
Target: login (method name in the js file)
Value: username=aa, Password=bb
Is there any such feature or command in the new Selenium IDE? If not, can we get this rollup and user-extensions.js upload feature in the new Selenium IDE as well?
Answers: username_1: Please take a look at the [run](https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#run) command. Attached an example [test-case-reuse.zip](https://github.com/SeleniumHQ/selenium-ide/files/2899732/test-case-reuse.zip). Refer to #96 for information on `user-extensions.js`
Status: Issue closed
rackerlabs/nexus-control
108409748
Title: Table lines are not appearing in grey-background rows on Firefox
Question: username_0: For example, see http://staging.developer.rackspace.com/docs-cloud-files/#cors-headers-for-objects on Firefox on Mac. The even-numbered rows do not appear to have lines. On Chrome they do have lines.
Answers: username_0: There's a separate issue about the spacing of words in that particular table; it isn't related, but I'm noting it here in case you see it while investigating the table lines. https://github.com/rackerlabs/docs-cloud-files/issues/12
username_1: Hooray, you've found a 4-year-old Firefox bug that nobody wants to fix! https://bugzilla.mozilla.org/show_bug.cgi?id=688556
username_0: Ok great! Glad it's not in the source. Marking "DONTCARE" :)
Status: Issue closed
operable/cog
191124380
Title: relay smoke test
Question: username_0: Note: This will most likely be one of several issues for smoke testing communications between the various components of Cog.
Cog admins need an easy way to smoke test a Cog installation. Previously we were using Docker's healthcheck hook to provide some insight into Cog's status, but that proved to be a bit too broad to be useful. So instead we will provide a simple smoke test script to be installed alongside relay. This should give admins a bit more flexibility and provide a sanity check during installation and/or troubleshooting. The smoke test script should be easily accessible via `docker exec`, so we should copy it to somewhere in the path when building the image.
Checks
- [ ] the host running Cog is accessible
- [ ] the services endpoint is accessible
Done Criteria
- [ ] write a simple smoke test script
- [ ] update the Dockerfile to install the script in the path (admins should be able to run the script with something like `docker exec cog_smoke_test`)
- [ ] add usage documentation to the Cog book
Answers: username_1: This issue was moved to operable/go-relay#75
Status: Issue closed
brockpetrie/vue-moment
278700989
Title: Adopt code styling rules Question: username_0: @username_1 I'm going to go through and clean up the code a bit. Any JS style guide you prefer? I'm partial to Airbnb, but willing to go with something different if you hate it. Answers: username_1: I use Airbnb as a base for my styleguides so that would be perfect. Are you thinking of adding eslint and editorconfig? Status: Issue closed
da2k/curso-reactjs-ninja
400023441
Title: M2#A128 - Cannot convert undefined or null to object
Question: username_0:
```js
              <li key={fileId}>
                <button onClick={handleOpenFile(fileId)}>{files[fileId].title}</button>
              </li>
            ))}
```
I copied the code from the course repository and tested it, and the same error appears anyway.
All the code in the files is identical to the course repository, line by line.
<!-- Do not delete from here down! -->
@username_1
Answers: username_1: Hi @username_0! I tested it here and there really is a bug when localStorage starts out empty =)
I'll create a "lesson from the future" to fix the problem; thanks for reporting :D
To fix it, just modify [this line](https://github.com/da2k/curso-reactjs-ninja/blob/master/examples/m02/applications/markdown-editor/src/app.js#L99), in the `src/app.js` file, to use `files` only if it exists. Otherwise, we use an empty object. The line should look like this:
```js
this.setState({ files: files || {} })
```
Thanks once again for reporting, and sorry for the slip-up =)
username_0: No worries! I thought it was my mistake. Everything is working fine now. Thanks! 👍
username_1: Great! I'll record the "lesson from the future" and upload it there =) Thanks again, and if you have any questions, just let me know :D
Status: Issue closed
softonic/axios-retry
374325498
Title: update timeout value when retrying
Question: username_0: Is it possible to update the timeout value (not reset it) on the first retry, update it again on the second retry, and so on?
Answers: username_1: Yes you can. The main readme details this:
```js
// Custom retry delay
axiosRetry(axios, { retryDelay: (retryCount) => {
  return retryCount * 1000;
}});
```
username_2: Looks like this is going to be the default behaviour with Axios 0.19.x. We are currently working on supporting it.
username_3: I simply skipped over the problem; hoping this is helpful:
```js
let retries = 0;

axiosRetry(axios, {
  retries: 3,
  retryCondition: (error) => {
    const config = error.config;
    if (!config) {
      return false;
    }
    retries = retries + 1;
    if (retries >= 3) {
      retries = 0;
      return false;
    }
    return true;
    // do something
    // axiosRetry.isNetworkError
    // axiosRetry.isRetryableError
    // axiosRetry.isSafeRequestError
    // axiosRetry.isIdempotentRequestError
    // axiosRetry.isNetworkOrIdempotentRequestError
    // axiosRetry.exponentialDelay
  }
});
```
Status: Issue closed
username_2: All good if it works for your use case!
csuermann/node-red-contrib-presence-faker
788913079
Title: Starting time has 1h offset
Question: username_0: Dear Cornelius,
I seem to have trouble with the clock in my system. I am new to Node-RED and your presence-faker and was just doing initial tests.
First I wanted to post that my presence-faker node would not send messages, although it was successfully deployed and enabled. But then I was distracted for a while by a business call, which was good, because one hour later I learned that the presence-faker was doing its job, but with a 1h delay. But why?
![image](https://user-images.githubusercontent.com/57750722/105022931-9d6a4800-5a4a-11eb-8ec5-a53025a798c5.png)
In the debugging window you see what I find odd: Node-RED is putting the correct timestamp on this message (my local time was 10:17:00 AM when that alert appeared). So I would say that my system is set up correctly. However, as you see within the debug message, the time window for the faker was set to 09:17 AM though.
Any recommendations on where to look or what to do? As you and I are both from Germany, I do not expect that this is a bug in your code, but rather a wrong setting on my server... But on the other hand, since Node-RED uses the correct time, I don't know what to do. Of course I know that I could configure an offset of 1 hour when setting up the faker, but that seems like a lousy workaround.
Answers: username_1: Hi @username_0,
Could you please share the debug output generated by this simple flow?
`[{"id":"bc3d8224.38cea","type":"inject","z":"7ecdfb0b.fa7794","name":"","props":[{"p":"payload"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":810,"y":860,"wires":[["f528217.ce14ae"]]},{"id":"f528217.ce14ae","type":"function","z":"7ecdfb0b.fa7794","name":"now","func":"const now = new Date()\n\nreturn {\n payload: {\n now_utc: now.toISOString(),\n now_local: now.toString()\n }\n};","outputs":1,"noerr":0,"initialize":"","finalize":"","x":970,"y":860,"wires":[["dc36b050.b9127"]]},{"id":"dc36b050.b9127","type":"debug","z":"7ecdfb0b.fa7794","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","statusVal":"","statusType":"auto","x":1130,"y":860,"wires":[]}]`
username_0: Thank you for the extremely quick response! Sure, here it is:
```
1/19/2021, 12:00:32 PM  node: dc36b050.b9127
msg.payload : Object
object
  now_utc: "2021-01-19T11:00:32.101Z"
  now_local: "Tue Jan 19 2021 11:00:32 GMT+0000 (Coordinated Universal Time)"
```
username_1: Thanks, I think that confirms my hypothesis of your server being set to the UTC timezone.
This is the output of the test flow on my system:
![image](https://user-images.githubusercontent.com/10939809/105032540-2e472080-5a57-11eb-88c0-adc18486b2f4.png)
Is your Node-RED running on the same machine as your browser or a different one?
username_0: Thank you so far. My NR is running on a Linux-based Timberwolf server (a home automation server) and my browser is on a Windows system. Within the server's time settings there is a page saying the following (Server = Timberwolf, client = Windows browser):
![image](https://user-images.githubusercontent.com/57750722/105034337-9eef3c80-5a59-11eb-9a24-20465a38cf87.png)
username_0: So I guess the mistake may be in the deployment of my Node-RED container process... maybe I need to set some environment variable differently.
username_0: Problem solved...
I found the solution in the Timberwolf forum, but it may occur on other Docker installations as well, so here is what I did:
I duplicated the container with an additional ENV variable: name=TZ value=Europe/Amsterdam
After deployment I deleted the "old" container... all settings, bindings and panels survived :-)
Status: Issue closed
Remix-Design/RemixIcon
941256161
Title: User Preferences
Question: username_0: First of all, I would like to thank all those who make this wonderful site, RemixIcon. I hope the site will become even easier to use.
When I choose an icon and want to set its color, I have a HEX color code; but when I choose another icon, I have to type the HEX code again. If the colors I choose could be saved and kept available, I could easily pick my HEX code instead of writing it every time I choose a different icon.
So I hope there will be something like a personal profile for each user that stores preferences for the appearance of icons, such as colors and size.
ethereum/web3.py
318587345
Title: Add new autoproviders for geth --dev and Infura
Question: username_0: ### What was wrong?
Connecting to common places is still a little hairy, for example `geth --dev` and Infura. Inspired by #719.
### How can it be fixed?
Add new auto providers:
- [ ] Add a `from web3.auto.gethdev import w3`
- [ ] Add a `from web3.auto.infura import w3`
Answers: username_1: @username_0 can this be closed?
Status: Issue closed
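A sketch of the intended usage, assuming the module paths from the checklist above; everything else here is illustrative (and the Infura variant would additionally need Infura credentials configured):
```python
# Illustrative only: connect to a local `geth --dev` node via the
# proposed auto provider and read some chain state.
from web3.auto.gethdev import w3

assert w3.isConnected()    # the provider found the running dev node
print(w3.eth.blockNumber)  # latest block on the dev chain
```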
opencv/opencv_contrib
262310711
Title: Generator error: unable to resolve base Algorithm for ximgproc_SuperpixelSLIC
Question: username_0: `
Followed by `make -j4`. It gives the following error:
```
[ 98%] Built target opencv_videostab
[ 98%] Generating pyopencv_generated_include.h, pyopencv_generated_funcs.h, pyopencv_generated_types.h, pyopencv_generated_type_reg.h, pyopencv_generated_ns_reg.h
Generator error: unable to resolve base Algorithm for ximgproc_SuperpixelSLIC
make[2]: *** [modules/python2/pyopencv_generated_include.h] Error 255
make[1]: *** [modules/python2/CMakeFiles/opencv_python2.dir/all] Error 2
make: *** [all] Error 2
```
How to resolve this?
Answers: username_0: [here](https://gist.github.com/filitchp/5645d5eebfefe374218fa2cbf89189aa)
Status: Issue closed
microsoft/WebTemplateStudio
707997720
Title: Review and remove unused packages from client and extension
Question: username_0: We should review the client and extension for unused packages and remove them.
Answers: username_1: After trying, we are not 100% confident that it won't cause any issues. After removing most of those packages, some errors appeared, although we are not sure if they were there before. We will only remove lodash for now, because we tracked the history and are sure it is not currently used.
<img width="458" alt="client" src="https://user-images.githubusercontent.com/1333036/96997678-83e05f80-1532-11eb-851c-a53a2a3f6db5.png">
<img width="448" alt="extension" src="https://user-images.githubusercontent.com/1333036/96997680-8478f600-1532-11eb-8ab9-0bf161e37df6.png">
Status: Issue closed
aws/aws-cdk
691247712
Title: [lambda] Function.currentVersion only updates for new-style assets
Question: username_0: This code seems to assume that the Function's CloudFormation representation is good enough to determine the full source hash, which is not true as soon as indirection through a { Ref } or something comes into play:
https://github.com/aws/aws-cdk/blob/master/packages/@aws-cdk/aws-lambda/lib/function-hash.ts
The Code class needs a way to mix in another value in there, so that if the Code comes from an asset, even if that asset is referenced through { Ref }s or whatever, it still has an opportunity to mix in a sourceHash of some sort to cause the Version to be updated.
---
This is :bug: Bug Report
Answers: username_1: Can you provide an example?
username_0: Yes, but not publicly.
username_2: Why not just use environment variables as a way to include additional info in the hash?
username_3: That is what we have been doing to work around this. It's not elegant, but it's functional.
username_2: We can add some "formal" way to salt the version if that feels better. What do you think?
username_4: Would it be an idea to use `currentVersionOptions.codeSha256` as _the formal way_ to salt the version?
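For illustration, a minimal Python CDK sketch of the environment-variable workaround username_2 and username_3 describe above. The salt value is assumed to be computed elsewhere (e.g. by hashing the asset directory yourself); the stack, runtime and paths are made up:
```python
from aws_cdk import core, aws_lambda as lambda_

class MyStack(core.Stack):
    def __init__(self, scope, id):
        super().__init__(scope, id)
        fn = lambda_.Function(self, "Fn",
            runtime=lambda_.Runtime.PYTHON_3_8,
            handler="index.handler",
            code=lambda_.Code.from_asset("lambda"),
        )
        # Any change to the code changes the salt, which changes the function's
        # CloudFormation properties and forces current_version to mint a new Version.
        my_source_hash = "abc123"  # assumed: computed elsewhere from the asset contents
        fn.add_environment("CODE_VERSION_SALT", my_source_hash)
```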
jsdf/react-native-htmlview
371477031
Title: Last link on page only works with double-tap
Question: username_0: I have a strange issue. Links on the page work well, but the last link on a page only works if I tap it several times. See GIF:
![lastlink_bug](https://user-images.githubusercontent.com/1778068/47150349-9c2f9b00-d2d6-11e8-8430-6676fa1944fd.gif)
Answers: username_1: This seems like it might be "fixed" by adding a lineHeight to increase the clickable area:
```js
<HTMLView
  value={product.content}
  stylesheet={{
    a: {
      lineHeight: 21,
      color: '#45b6fe',
    },
  }}
  onLinkPress={async (url) => await WebBrowser.openBrowserAsync(url)}
/>
```
edmontongo/presentations
162601785
Title: July 2016 talks Question: username_0: July 25, 2016 Answers: username_0: Perhaps I'll talk about GopherCon. Last year I didn't prepare well enough to do that, so it wasn't great. Anyone else interested in presenting? username_1: I can't agree, found that both the visuals and the story telling of last year's GopherCon were outstanding. Am still interested in go+unikernels, but am not really sure that is of general interest. Maybe they are 'varsity kernels'. username_0: @username_1 there was a unikernels talk at GopherCon :-) I don't imagine the videos will be out until August though. username_1: Did you mean [AtmanOS](https://github.com/gophercon/2016-talks/tree/master/BernerdSchaefer-GoWithoutTheOperatingSystem)? Had not heard of it before, sounds interesting. Am amazed at the number of projects around this topic. username_0: Yup. Status: Issue closed
NaoHTWK/HTWKVision
293486151
Title: Missing files
Question: username_0: Is it on purpose that many files (i.e. the interesting ones) are missing? It seems that just the previously existing files have been committed.
Status: Issue closed
Answers: username_1: oops, that happens when you get too used to mercurial... fixed
kyma-project/kyma
355878047
Title: Push Integration tests using Jaeger and K8S APIs
Question: username_0: ### Goal
It should be possible to run integration tests with Jaeger and K8S APIs to ensure stability and correctness of the Event Bus functionality.
* Do not mock Jaeger. Instead, run it in-memory as we do for NATS Streaming.
* Use K8S fake clients or a similar approach that guarantees K8S integration.
Scenario:
* Enhance the current push integration tests for Jaeger and K8S integration.
Ideas:
* For K8S APIs, consider using fakes generated as a part of the code generation using the K8S client code generator.
* For Jaeger, explore if it is possible to run Jaeger in memory.
AC:
* One or more tests that verify Event Bus features with tracing, NATS Streaming and K8S integration.
Answers: username_0: K8S API integration is already solved via the PR https://github.com/kyma-project/kyma/pull/1057
Status: Issue closed
karmapa/ketaka-lite
145184937
Title: Search or replace: the view should only jump down to a match when a new keyword is entered (video)
Question: username_0: v0.1.75
With the current search/replace, if the search bar is reopened and still contains an old keyword from before, the view automatically jumps down to the next matching word.
Should this be restricted so that the view only jumps down to the next match when a new keyword is entered?
If the search bar is reopened with an old keyword left in it, it would still compute the match count for the old keyword and highlight the matching words, but the view would not move unless the user clicks search next or search previous.
KETAKA-Lite's current implementation, with an explanation below the video:
https://www.youtube.com/watch?v=ZAB5lqQRVW0&feature=youtu.be
A Sublime example, with an explanation below the video:
https://www.youtube.com/watch?v=1AO5OYyfcSM&feature=youtu.be
Answers: username_1: @username_0 I tested the current version and it doesn't seem to have this problem?
Status: Issue closed
phphe/he-tree-vue
1095683194
Title: Is this library deprecated
Question: username_0: Is this library deprecated? Are we supposed to move to https://github.com/username_1/he-tree? I had constructed my treeview by extending this tree component and using the overrideSlotDefault method. Is the same supported in the new component?
Answers: username_1: Not deprecated. I will not add new features to this library. In he-tree, you need to wrap the component and use slots.
imagemin/mozjpeg-bin
535802262
Title: Trying to install mozjpeg - behind corporate network
Question: username_0: Hello, I'm trying to install Gatsby using a default template, which requires mozjpeg. However, I'm getting this error:
```
C:\dev>npm i -g mozjpeg
C:\Users\a\AppData\Roaming\npm\mozjpeg -> C:\Users\a\AppData\Roaming\npm\node_modules\mozjpeg\cli.js

> [email protected] postinstall C:\Users\a\AppData\Roaming\npm\node_modules\mozjpeg
> node lib/install.js

(node:13428) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
  ‼ self signed certificate in certificate chain
  ‼ mozjpeg pre-build test failed
  i compiling from source
  × RequestError: self signed certificate in certificate chain
    at ClientRequest.<anonymous> (C:\Users\a\AppData\Roaming\npm\node_modules\mozjpeg\node_modules\got\index.js:111:21)
    at Object.onceWrapper (events.js:284:20)
    at ClientRequest.emit (events.js:196:13)
    at TLSSocket.socketErrorListener (_http_client.js:402:9)
    at TLSSocket.emit (events.js:196:13)
    at emitErrorNT (internal/streams/destroy.js:91:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
    at processTicksAndRejections (internal/process/task_queues.js:84:17)
+ [email protected]
updated 1 package in 13.251s
```
I've already set `strict-ssl` and `NODE_TLS_REJECT_UNAUTHORIZED` to false, but can't seem to get around this issue. Hopefully someone can help?
Thanks,
Lawrence
Answers: username_1: Seems related to https://github.com/imagemin/jpegtran-bin/pull/90.
username_2: I'm behind a corporate proxy and was running into this same issue. In my case, the issue seemed to be due to the fact that mozjpeg's install script was invoking another copy of node, and node didn't have the certs for my corp's proxy. Here's how I fixed it (note that I'm on macOS):
- Grabbed a copy of the proxy's certificate, threw it into a file called cacert.pem (`openssl s_client -connect raw.githubusercontent.com:443 -showcerts >cacert.pem`, then Ctrl+D after it spits out its output)
- Run `NODE_EXTRA_CA_CERTS=cacert.pem npm install`
nuxt-community/moment-module
689786757
Title: How to update the en locale's relative time settings using this module?
Question: username_0:
- I am trying to display relative times using moment(...).fromNow()
- I am trying to get the output in a certain format, like '14 D' instead of the default '14 days ago'
- As per [THIS answer on stackoverflow](https://stackoverflow.com/questions/38367038/format-relative-time-in-momentjs), it is possible by updating the relativeTime setting
- How can I do this with this module?
- Thank you for your time and effort
Answers: username_0:
```
export default (context) => {
  context.$moment.updateLocale('en', {
    relativeTime: {
      future: '%s',
      past: '%s',
      s: '1 s',
      ss: '%d seconds',
      m: '1 m',
      mm: '%d m',
      h: '1 h',
      hh: '%d h',
      d: '1 d',
      dd: '%d d',
      M: '1 M',
      MM: '%d M',
      y: '1 Y',
      yy: '%d Y',
    },
  })
}
```
Make a plugin out of it.
Status: Issue closed