repo_name: string (lengths 4 to 136)
issue_id: string (lengths 5 to 10)
text: string (lengths 37 to 4.84M)
anjlab/android-inapp-billing-v3
109921235
Title: multidex with google play service lib Question: username_0: Hi, I am using android-inapp-billing-v3, but I have a problem: a multidex error on com/android/vending/billing/IInAppBillingService. I configured multiDexEnabled true but still can't resolve this problem. Could you please help me or suggest anything? Thank you so much, tran Answers: username_1: Extend your application class from MultiDexApplication? Status: Issue closed username_0: Thank you for your support, I solved the problem. Your lib works very well, thanks again.
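A minimal sketch of username_1's one-line suggestion (not taken from the thread): it assumes the support-library multidex dependency (`com.android.support:multidex`, or `androidx.multidex:multidex` in newer projects) is on the classpath, and the class name below is a placeholder.

```java
// Hypothetical Application subclass; the class name is a placeholder.
// Extending MultiDexApplication makes the framework install the secondary
// dex files in attachBaseContext(), before any library classes are loaded.
import android.support.multidex.MultiDexApplication;

public class MyApplication extends MultiDexApplication {
    // Nothing else is required here. Remember to register this class in
    // AndroidManifest.xml via android:name=".MyApplication" on <application>.
}
```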
acceptbitcoincash/acceptbitcoincash
327247757
Title: Add 'KiT systems' to the 'Air conditioning equipment and security systems' category Question: username_0: Requesting to add 'KiT systems' to the 'Air conditioning equipment and security systems' category. Details follow: ```yml - name: KiT systems url: https://www.kitsystem.ru/ img: email_address: <EMAIL> country: RU bch: Yes btc: No othercrypto: Yes doc: https://www.kitsystem.ru/blog/info/oplachivayte-tovary-i-uslugi-kriptovalyutoy ``` Resources for adding this merchant: Logo Provided: ![KiT systems](https://www.kitsystem.ru/sites/default/files/0_my_files/img/logo/logo_ok.png) [Link to KiT systems](https://www.kitsystem.ru/) Maybe you might want to check their Alexa Rank: [https://www.alexa.com/siteinfo/www.kitsystem.ru/](https://www.alexa.com/siteinfo/www.kitsystem.ru/) - Verify site is legitimate and safe to list. - Correct data in form if any is inaccurate. If everything looks okay, Add it to the site: - Assign to yourself when you begin work. - Download and resize the image and put it into the proper img folder. - Add listing alphabetically to proper .yml file. - Commit changes mentioning this issue number with `closes #[ISSUE NUMBER HERE]`. Answers: username_1: I removed the country tag as this vendor does ship to CIS member states. I also added the lang tag because this page is only in Russian. And I added the Facebook page. Status: Issue closed
justinzm/gopup
765502025
Title: Where to find bathhouses with special services in Shinan District, Qingdao _ Phoenix Fashion Question: username_0: Real in-call girl service in Shinan District, Qingdao ╋WeChat: 781372524 beauties] Dozens of celebrities, hundreds of millions in red envelopes. Suning's big Double Eleven red envelopes - did you grab yours? That evening, Hunan TV's Suning.com shopping night featured top stars such as Wang Yibo, Xiao Zhan, Yang Yang, Jiang Shuying, Zhao Wei, Huang Xiaoming, Zhou Dongyu and Wu Yifan, joining fans for Double Eleven shopping - a double harvest of entertainment and spending. The tens-of-billions subsidy program also returned, with popular products such as "fairy water", Shiseido and Dyson priced directly against the lowest in the industry. Sale venues covering every category - air conditioners, refrigerators and washers, kitchen and bath, digital devices, mother-and-baby goods, sports, communications and general merchandise - keep the energy high. Huawei phones took the title sponsorship of the Hunan TV Suning.com night, with members on site and online getting free draws for Huawei phones. While the guests performed, Suning.com simultaneously launched celebrity-exclusive red-envelope rains, distributing hundreds of millions in red envelopes to all users over several rounds; tonight the happy fan girls watched their idols, grabbed red envelopes and bought things, doubling their joy. On site, Rocket Girls delivered a fiery group dance, Super Girls Zhou Bichang, Zhang Liangying and Shang Wenjie reunited, and Coco Lee, Jolin Tsai and Wu Qingfeng performed classic hits live - everyone could find their own youth in the show. In addition, Chinese women's volleyball coach Lang Ping appeared with players Ding Xia, Zhu Ting and Zhang Changning, interacting on stage with Suning Holdings Group vice president Zhang Kangyang in support of Chinese manufacturing, giving the audience a whole container of "Made in China gift packs" including Hongqi, Haier multi-door refrigerators, Lenovo laptops and Midea air conditioners, igniting the venue. Netizens remarked: national brands are going global, and the winners really make people jealous. 敖戳侠牙埔https://github.com/justinzm/gopup/issues/11912?D6yON <br />https://github.com/trailofbits/manticore/issues/1866 <br />https://github.com/justinzm/gopup/issues/11788?hjtef <br />https://github.com/justinzm/gopup/issues/12088?68750 <br />https://github.com/getmeli/meli/issues/20?49734 <br />
blunden/DoNotDisturbSync
449453579
Title: App missing in Wear OS 2.6 Question: username_0: Recently updated my watch to Wear OS 2.6 and the app is no longer in the list. Google Play still shows it as installed, though. Also it's not syncing anymore. This is a great little piece of software but unfortunately seems to be broken in newer versions. Answers: username_1: It's not working for me either on Wear OS 2.8; I have the same problems listed above. Status: Issue closed username_2: Yes, it was removed from the Play Store because it uses legacy APIs. See my response linked below for how to make it work on newer Wear OS versions before a new version is published. https://github.com/username_2/DoNotDisturbSync/issues/6#issuecomment-1012545839
Holzhaus/mixxx-gh-issue-migration
873232016
Title: Cannot export or import playlists, blank file selection dialog Question: username_0: I've had this issue since 1.10, and with 1.11 it's still there. Essentially, whenever I try to export a playlist, the file selection dialog appears as a blank window, making it impossible to continue (see attached screenshot). A couple of notes: - It seems someone already reported it as bug 887429, which seems to have expired a while ago - I'm using Ubuntu 13.10, and with 13.04 I could reproduce it as well - Exporting crates shows exactly the same problem - Importing a playlist shows a similar problem, although it seems there there is a small area in the file dialog where the file system is shown. Nevertheless, it also prevents from using the dialog, so playlists cannot be imported either (see the other attached screenshot)
discordjs/discord.js
1008345321
Title: "Unknown Interaction" but the interaction isn't "Unknown" Question: username_0: ### Issue description 1. Create a awaitMessageComponent 2. update it ### Code sample ```typescript let battleMenuResponse = await battleMessage.awaitMessageComponent({ filter, componentType: "SELECT_MENU", time: ms("50 seconds") }) console.log(battleMenuResponse.update) await battleMenuResponse.update({ content: "yes" }) console: ``` [AsyncFunction: update] DiscordAPIError: Unknown interaction ``` this does not make sense, i also logged `battleMenuResponse` and it logged ``` SelectMenuInteraction { type: 'MESSAGE_COMPONENT', id: '892081287748259920', applicationId: '886550857926213633', channelId: '886479373753004045', guildId: '886479373753004042', ``` ``` ### discord.js version 13 ### Node.js version 16 ### Operating system windows ### Priority this issue should have High (immediate attention needed) ### Which partials do you have configured? No Partials ### Which gateway intents are you subscribing to? GUILDS, GUILD_MEMBERS, GUILD_PRESENCES, GUILD_MESSAGES, GUILD_MESSAGE_REACTIONS ### I have tested this issue on a development release _No response_<issue_closed> Status: Issue closed
zhuochun/md-writer
123983895
Title: Consider changing some of your default keybindings Question: username_0: Your keybindings for indenting lists and continuing lists on enter override the autocomplete feature in atom, which can be quite a painful issue to track down because the atom-global keys for auto-completion (enter or tab) just wouldn't work on any markdown files because of markdown-writer's bindings. I fixed it now by disabling all your keybindings and putting this in my keymap: ```cson "atom-workspace atom-text-editor:not([mini])[data-grammar~='gfm']": "ctrl-enter": "markdown-writer:insert-new-line" "ctrl-tab" : "markdown-writer:indent-list-line" ``` but I just wanted to suggest you also change those bindings. It may confuse people updating the package, but I think badness-by-default is not a good idea. Status: Issue closed Answers: username_1: Duplicate of #98. Will be fixed in the next release.
canjs/can-define-stream
183565029
Title: Complete stream API Question: username_0: Complete the `stream` property definition behavior from https://github.com/canjs/can-stream/issues/2 ```js stream( stream ) { const fooStream = this.stream('foo') .map( foo => foo.toUpperCase() ); return stream .merge(fooStream) .combineLatest(this.stream('bar')); }, ```<issue_closed> Status: Issue closed
jetstack/version-checker
684009823
Title: Give info statement on runtime if test-all-containers is false Question: username_0: If the argument `test-all-containers` is false (the default), chances are that it won't process a thing. Therefore place a clear info message in the logs stating that it is running with it set to false. That gives a better indication of what is going on.
MiniMau5/AirBNBFakesDataReport
209045811
Title: possible text Question: username_0: ***Please do not book instant book before contact me!*** mim@ bnbvacation . rentals Remove the spaces from the email address so you can contact me All the bookings made without prior contact will be canceled ! To see if the dates are available and so I can send you more photos email me at: m<EMAIL>. rentals The calendar is updating so please send me first your dates at: mim@<EMAIL> .rentals
elasticdog/transcrypt
312250931
Title: Document creating your own adapter Question: username_0: Omg I’m so sorry. I had two tabs open on my phone and created all these issues on the wrong repo! I’ll close these when I get on. So sorry about that! > Answers: username_1: Adapter? Can you explain what you mean by this request? username_0: Omg I’m so sorry. I had two tabs open on my phone and created all these issues on the wrong repo! I’ll close these when I get on. So sorry about that! > username_1: No problem...I can close them out. Status: Issue closed
dbeaver/dbeaver
565198502
Title: Hot key F3 doesn't work for the first time - continuance (#7792) Question: username_0: **System information:** Windows 7 (64-bit) Build 7601 (Service Pack 1) DBeaver 6.3.4.202002011957 **Connection specification:** PostgreSQL 10.10 PostgreSQL JDBC Driver 42.2.9 **Describe the problem you're observing:** Occasionally after the first (initial) press of hot key F3 ("SQL Editor") on Database Navigator over a Connection, the "SQL Scripts" window temporarily appears and immediately closes. After pressing F3 a second time, it appears again and stays permanently. However, I observe that clicking "SQL Editor" in the right-mouse pop-up menu never causes such behaviour. See https://youtu.be/ALuZm5EDDaU for more details. Answers: username_1: Thanks for the bug report username_2: Fixed. This bug was caused by the new tooltip renderer. It forces a focus change on tooltip hide and thus closes the popup panel. Status: Issue closed username_3: verified
skroutz/elasticsearch-analysis-turkishstemmer
233811311
Title: How to install Question: username_0: How can I install the stemmer plugin on Elastic 5.x? Is it compatible? Answers: username_1: Thanks for getting in touch. I've updated our README with a quick installation guide. Yes, it's compatible with ES5.4. I hope it helps. Status: Issue closed
Azure/azure-sdk-for-net
238232095
Title: ArgumentException on Mono Question: username_0: We have some components that target .NET Framework 4.6 when built, but run on Linux via Mono. In one of those components, we use the Azure KeyVault client, which uses the `Microsoft.Rest.ServiceClient` class from here. We had [one exception with ADAL](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/issues/509), which we have a workaround for and they are fixing for the next version. However we also have another exception, only when running on Linux: ``` Unhandled Exception: System.ArgumentException: Value does not fall within the expected range. at System.Net.Http.Headers.Parser+Token.Check (System.String s) [0x00019] in <f9ac0c719f3449a0aa7ac0136a1ad250>:0 at System.Net.Http.Headers.ProductHeaderValue..ctor (System.String name, System.String version) [0x0000a] in <f9ac0c719f3449a0aa7ac0136a1ad250>:0 at System.Net.Http.Headers.ProductInfoHeaderValue..ctor (System.String productName, System.String productVersion) [0x00006] in <f9ac0c719f3449a0aa7ac0136a1ad250>:0 at Microsoft.Rest.ServiceClient`1[T].get_DefaultUserAgentInfoList () [0x0003f] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T].SetUserAgent (System.String productName, System.String version) [0x00010] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T].InitializeHttpClient (System.Net.Http.HttpClient httpClient, System.Net.Http.HttpClientHandler httpClientHandler, System.Net.Http.DelegatingHandler[] handlers) [0x00092] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T].InitializeHttpClient (System.Net.Http.HttpClientHandler httpClientHandler, System.Net.Http.DelegatingHandler[] handlers) [0x00000] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T]..ctor (System.Net.Http.HttpClientHandler rootHandler, System.Net.Http.DelegatingHandler[] handlers) [0x00006] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T]..ctor (System.Net.Http.DelegatingHandler[] handlers) [0x00006] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Azure.KeyVault.KeyVaultClient..ctor (System.Net.Http.DelegatingHandler[] handlers) [0x00000] in <b66b7a9568464c98a370d795acdef11c>:0 at Microsoft.Azure.KeyVault.KeyVaultClient..ctor (Microsoft.Rest.ServiceClientCredentials credentials, System.Net.Http.DelegatingHandler[] handlers) [0x00000] in <b66b7a9568464c98a370d795acdef11c>:0 at Microsoft.Azure.KeyVault.KeyVaultClient..ctor (Microsoft.Azure.KeyVault.KeyVaultClient+AuthenticationCallback authenticationCallback, System.Net.Http.DelegatingHandler[] handlers) [0x00007] in <b66b7a9568464c98a370d795acdef11c>:0 at KeyVaultSecrets.KeyVaultHelper.RetrieveSecret (System.String secretKey) [0x00081] in <23498eee002143eda79164cb7d001b9d>:0 at KeyVaultSecrets.Program.Main (System.String[] args) [0x0001c] in <23498eee002143eda79164cb7d001b9d>:0 [ERROR] FATAL UNHANDLED EXCEPTION: System.ArgumentException: Value does not fall within the expected range. 
at System.Net.Http.Headers.Parser+Token.Check (System.String s) [0x00019] in <f9ac0c719f3449a0aa7ac0136a1ad250>:0 at System.Net.Http.Headers.ProductHeaderValue..ctor (System.String name, System.String version) [0x0000a] in <f9ac0c719f3449a0aa7ac0136a1ad250>:0 at System.Net.Http.Headers.ProductInfoHeaderValue..ctor (System.String productName, System.String productVersion) [0x00006] in <f9ac0c719f3449a0aa7ac0136a1ad250>:0 at Microsoft.Rest.ServiceClient`1[T].get_DefaultUserAgentInfoList () [0x0003f] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T].SetUserAgent (System.String productName, System.String version) [0x00010] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T].InitializeHttpClient (System.Net.Http.HttpClient httpClient, System.Net.Http.HttpClientHandler httpClientHandler, System.Net.Http.DelegatingHandler[] handlers) [0x00092] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T].InitializeHttpClient (System.Net.Http.HttpClientHandler httpClientHandler, System.Net.Http.DelegatingHandler[] handlers) [0x00000] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T]..ctor (System.Net.Http.HttpClientHandler rootHandler, System.Net.Http.DelegatingHandler[] handlers) [0x00006] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Rest.ServiceClient`1[T]..ctor (System.Net.Http.DelegatingHandler[] handlers) [0x00006] in <7d943830d9bc44fab4bd01d7a9130a9c>:0 at Microsoft.Azure.KeyVault.KeyVaultClient..ctor (System.Net.Http.DelegatingHandler[] handlers) [0x00000] in <b66b7a9568464c98a370d795acdef11c>:0 at Microsoft.Azure.KeyVault.KeyVaultClient..ctor (Microsoft.Rest.ServiceClientCredentials credentials, System.Net.Http.DelegatingHandler[] handlers) [0x00000] in <b66b7a9568464c98a370d795acdef11c>:0 at Microsoft.Azure.KeyVault.KeyVaultClient..ctor (Microsoft.Azure.KeyVault.KeyVaultClient+AuthenticationCallback authenticationCallback, System.Net.Http.DelegatingHandler[] handlers) [0x00007] in <b66b7a9568464c98a370d795acdef11c>:0 ``` I've traced the problem to [this code](https://github.com/Azure/azure-sdk-for-net/blob/d6c9636ef100346afee80619beb191d31c5cba07/src/SdkCommon/ClientRuntime/ClientRuntime/ServiceClient.cs#L134-L137), which assumes that its safe to get the OS Name and Version from the registry if compiled with `FullNetFx`. However, that directive only tells you that the compilation target was the full framework. It doesn't tell you that the runtime environment is Windows. It's not safe to call Win32 functions from there, which the `OsName` and `OsVersion` properties do [here](https://github.com/Azure/azure-sdk-for-net/blob/71c19f830bcaffdacfb043b3f8f4278c39f653b4/src/SdkCommon/ClientRuntime/ClientRuntime/ServiceClient.cs#L64-L100). I've got a temporary workaround, which is to create fake entries in Mono's emulated registry that return something for these values, but the real fix would be to not rely on registry at all, but rather use the information in `System.Environment.OSVersion` for the full framework, or from `System.Runtime.InteropServices.RuntimeInformation` if you were to target .NET Standard. Answers: username_1: @username_0 thank you for reporting this. We will have this in our plans for our next release, but seems you have a workaround that unblocks you but generating the required entries in your virtual hive username_0: Yes, thanks. :) username_2: I'm experiencing this exact issue when using the KeyVaultClient running Mono on macOS. 
username_2: @username_0 Do you happen to know how to create fake entries in the Mono virtual registry on macOS? The path seems to be `/Library/Frameworks/Mono.framework/Versions/Current/etc/mono/registry`, but that directory only contains an empty `LocalMachine` directory on my machine. How exactly do you create the fake entries to get around this issue? username_2: Never mind, I figured it out! If anyone else is having this problem, you can add the fake registry keys by doing something along these lines: `Registry.LocalMachine.CreateSubKey("SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion").SetValue("ProductName", "Windows 8.1 Pro")` username_3: @username_0, I have the same issue while trying to use Azure Service Bus library on Mono on Linux. How did you do the workaround? How to add the fake entries in Mono's emulated registry? username_0: @username_3 - see the comment above yours for how to create them programatically. Or, if desired, you can create the files manually. They sit in `/etc/mono/registry`. I don't want to write a tutorial here, but I'm sure you can find some examples on the web. username_4: I also stumbled on this problem when creating a `ServiceBusManagementClient` instance (from the [Microsoft.Azure.Management.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.Management.ServiceBus/) package). Here is what I did to workaround the problem on macOS. 1. Create the fake registry directory structure: ``` $ cd /Library/Frameworks/Mono.framework/Versions/Current/etc/mono/registry/LocalMachine $ sudo mkdir -p "SOFTWARE/Microsoft/Windows NT/CurrentVersion" $ cd "SOFTWARE/Microsoft/Windows NT/CurrentVersion" ``` 2. Check my actual system values: ``` $ cat /System/Library/CoreServices/SystemVersion.plist ``` ```xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>ProductBuildVersion</key> <string>16G1314</string> <key>ProductCopyright</key> <string>1983-2018 Apple Inc.</string> <key>ProductName</key> <string>Mac OS X</string> <key>ProductUserVisibleVersion</key> <string>10.12.6</string> <key>ProductVersion</key> <string>10.12.6</string> </dict> </plist> ``` Create a `values.xml` file in the previously created directory structure, i.e.`/Library/Frameworks/Mono.framework/Versions/Current/etc/mono/registry/LocalMachine/SOFTWARE/Microsoft/Windows NT/CurrentVersion`: ```xml <values> <value name="ProductName" type="string">Mac OS X</value> <value name="CurrentVersion" type="string">10.12.6</value> <value name="CurrentBuild" type="string">16G1314</value> </values> ``` After saving this file, I was able to successfully create a `ServiceBusManagementClient`. username_5: This is no longer a problem in the new Azure.Core that replaces this library. All libraries will be moving to be based on Azure.Core over time. Hopefully this year for many of the management libraries. Until then, we recommend using the workaround. Status: Issue closed
Nordstrom/kafka-connect-lambda
607141368
Title: AwsRegionProviderChain class not found Question: username_0: I get this error when I try to create a connector: ``` "java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory\n\tat com.amazonaws.regions.AwsRegionProviderChain.<clinit>(AwsRegionProviderChain.java:33)\n\tat com.amazonaws.client.builder.AwsClientBuilder.<clinit>(AwsClientBuilder.java:60)\n\tat com.nordstrom.kafka.connect.lambda.InvocationClient$Builder.<init>(InvocationClient.java:126)\n\tat com.nordstrom.kafka.connect.lambda.InvocationClientConfig.<init>(InvocationClientConfig.java:55)\n\tat com.nordstrom.kafka.connect.lambda.LambdaSinkConnectorConfig.<init>(LambdaSinkConnectorConfig.java:52)\n\tat com.nordstrom.kafka.connect.lambda.LambdaSinkConnector.start(LambdaSinkConnector.java:31)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:110)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:135)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)\n\tat org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:257)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1183)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1400(DistributedHerder.java:125)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:1199)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:1195)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory\n\tat java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)\n\tat java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:589)\n\tat org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)\n\tat java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)\n\t... 18 more\n" ``` Connector config: ``` { "tasks.max": "1", "connector.class": "com.nordstrom.kafka.connect.lambda.LambdaSinkConnector", "topics": "<my-topic>", "key.converter": "org.apache.kafka.connect.storage.StringConverter", "value.converter": "org.apache.kafka.connect.storage.StringConverter", "aws.region": "us-east-1", "aws.lambda.function.arn": "arn:aws:lambda:us-east-1:<my-arn>", "aws.lambda.batch.enabled": "true", "aws.lambda.invocation.mode": "ASYNC", "aws.lambda.invocation.failure.mode": "DROP" } ``` v1.2.2 of connector and Kafka Connect from Kafka v2.12-2.4.1 on Ubuntu 18.04.04. I get my AWS credentials from the EC2 instance profile. Answers: username_1: I do not recognize this. Do you have other Connect plugins installed? username_0: Yes I have the Confluent S3 and Lambda sinks both installed and working. username_2: `LogFactory` is a class found in the `commons-logging-1.2.jar` which is usually part of the Kakfa Connect distribution. For example, it can be found in `$CONFLUENT_HOME/share/java/kafka. common-logging-1.2.ja`r It is not included in the kafka-connect-lambda jar to avoid version conflict at runtime. username_0: Makes sense. Is there anything I can troubleshoot to figure out why it wouldn't be finding the library? 
I am using some confluent connector plugins but I am running the community Kafka Connect binary. The confluent connector plugins are not distributed as uber jars and I notice that `common-loggings` is included in their `lib/` directories. I don't see it in my `kafka/libs` though. username_2: You are right. I don't see `commons-logging-1.2.jar` in the Apache tarball, but do see it in several places (including in connector directories) in a Confluent install. As a work-around, you can put the logging jar file somewhere on your kafka-connect classpath, or in the same directory as the `kafka-connect-lambda.jar`. We'll look into including the logging jar into a future release. username_0: Thanks, that work-around worked username_1: This is addressed with 1.3.0 release. Status: Issue closed
noamt/elasticsearch-grails-plugin
160325306
Title: Error on Grails V2.5.0 with Elasticsearch plugin V0.1.0 Question: username_0: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'searchableClassMappingConfigurator': Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: Mapper for [applicableSeries] conflicts with existing mapping in other types[object mapping [applicableSeries] can't be changed from nested to non-nested] at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.IllegalArgumentException: Mapper for [applicableSeries] conflicts with existing mapping in other types[object mapping [applicableSeries] can't be changed from nested to non-nested] at org.elasticsearch.index.mapper.MapperService.checkNewMappersCompatibility(MapperService.java:363) at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:319) at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:265) at org.elasticsearch.cluster.metadata.MetaDataMappingService$2.execute(MetaDataMappingService.java:444) at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:388) at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231) at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194) ... 3 more Config: Status: Issue closed Answers: username_0: Other Domain has the same name but different nodes ``` class C{ static searchable={ .... applicableSeries component: 'inner' } } ```
famsf/pecl-pdo-4d
214881741
Title: 'fourd.h' file not found Question: username_0: `pecl install pdo_4D-0.3` Generates error: In file included from /private/tmp/pear/temp/pdo_4d/pdo_4d.c:28: 'fourd.h' file not found Does anyone know what is causing this? My PHP version is: 5.6.2<issue_closed> Status: Issue closed
swsnu/swpp2019-team10
520899445
Title: [New API Needed] About user backend Question: username_0: I think we need some modifications / additional APIs: 1. I think our services don't **have to** require the user's `age`, `phone_number`, `gender`. It would be good to keep them as optional fields. I think we can let users choose to submit additional (non-essential) information on sign-up or edit it on their `my page`. 2. We need an API which returns the user's `id` when the user's `username` (not `nickname`) is given (`-1` when not found). We have to develop a duplicate-ID check feature. Answers: username_1: Should I make the branch from dev or master? username_0: I think making a branch is good. I realized we should modify the `POST` API of `signup` to return the dict of `{"id": int, "username":string, "<PASSWORD>":<PASSWORD>, "phone_number":string, "age":int, "gender":string}` so that I can implement the profile picture upload feature. username_0: I think we need to modify the `signup` POST method to return the dictionary of the newly created user's information with its ID. Using that we can implement profile picture uploading. username_2: I have changed the code @username_0 mentioned. 1. The signup POST method returns the id. 2. The signup POST method and user PUT method don't require ```age```, ```phone_number```, ```gender```. 3. A new signup_dupcheck POST method was added. url : /api/signup_dupcheck/ requires : content-type = application/json, ```{'username': string}``` returns : json type ```{'id': some integer if username exist, -1 o.w.}```
ropensci/tidyhydat
330731191
Title: Battery Voltage Information Question: username_0: Is it possible to view the Battery Voltage data from tidyhydat? Sometimes transmission errors in flow and level values can be caught using battery voltage information. An example of how a battery voltage graph appears is shown below: ![battery_volt](https://user-images.githubusercontent.com/39829190/41170572-710f7e70-6b27-11e8-8bdd-209c305922fd.png) Answers: username_1: Do you have a public-facing data source for this? username_0: From our Provincial site but not through Water Survey Canada: Stn# - NF03OB0042 username_1: Until this is a public-facing data source it will have to stay out of tidyhydat. Status: Issue closed
creativecommons/creativecommons.github.io-source
568985162
Title: Back to top button not working on small screens once the user has reached the footer. Question: username_0: ## Describe the bug The Back to top button is not working on small screens (around 700px or less) once the user navigates to the footer of the page. ## To Reproduce Steps to reproduce the behaviour: 1. Visit any page which has the Back to top button visible, scroll down to the footer of the page and try clicking on the Back to top button; it will get stuck. ## Expected behavior It must take you back to the top. ## Screenshots ![back to top](https://user-images.githubusercontent.com/35863227/75041895-80985e00-54e3-11ea-9fbb-93cb5e7d8244.png) ## Desktop (please complete the following information) - OS: (ex. iOS) - Browser (ex. chrome, safari) - Version (ex. 22) ## Smartphone (please complete the following information) - Device: (ex. iPhone6) - OS: (ex. iOS8.1) - Browser (ex. stock browser, safari) - Version (ex. 22) ## Additional context Add any other context about the problem here. Answers: username_0: @username_1 Whenever the Back to top button floats over the footer, it stops working on **small screen devices**. I have verified this from my phone and also by minimizing the browser on my laptop. username_1: Thanks for reporting this @username_0 username_2: Hey! I would like to work on this issue. username_0: @username_2 please go ahead. To edit the content you will have to go to the **templates** folder and then into the **layout.html** file. username_2: Hey, I think that this could be fixed by changing the `z-index` css property of the back-to-top button. But I realised that the CSS files are built and are not part of the repository. How should I modify the css file that is built? username_1: CSS is built from this file in the repo: https://github.com/creativecommons/creativecommons.github.io-source/blob/master/webpack/sass/main.scss username_0: @username_2 Yeah, that's perfect. username_2: Waiting for review! Status: Issue closed
peak/s5cmd
611167113
Title: Add 'Why s5cmd' section to README Question: username_0: We should keep a list of - why this tool is written and why we think it's awesome - why people should prefer it over awscli - written articles about `s5cmd`. [1](https://aws.amazon.com/blogs/opensource/parallelizing-s3-workloads-s5cmd/), [2](https://medium.com/@joshua_robinson/s5cmd-for-high-performance-object-storage-7071352cc09d), [3](https://medium.com/@joshua_robinson/s5cmd-hits-v1-0-and-intro-to-advanced-usage-37ad02f7e895) Status: Issue closed Answers: username_1: Closed by: #164
indiana-university/rivet-source
400319633
Title: Combobox/typeahead lookup component Question: username_0: ## New component Another frequent request and Q3 priority is a typeahead lookup component where the user can start typing a search term and be presented with a list of matches from a set of data. This is another design pattern that is very tricky to actually make accessible to screen readers and keyboard-only users. The gov.UK team has done a lot of work and testing on their solution, which they have also open-sourced here: https://github.com/alphagov/accessible-autocomplete You can see some examples of the different configuration options here: https://alphagov.github.io/accessible-autocomplete/examples/ @ahlord did some testing with this a while back, and it seems to be a pretty good solution to a difficult problem. The gov.uk plugin has a lot of nice options including the ability to pass in a `cssNamespace` option that can be used for theming. I've done a bit more on the theming and a simplified use case based on Austin's original prototype here: https://codepen.io/levimcg/pen/gjRaLB Answers: username_1: [mtwagner commented on Oct 25, 2018](https://github.iu.edu/UITS/rivet-source/issues/332#issuecomment-25367) *** will the source be able to be the returned results of an ajax call? Thanks. Sorry about the duplicate. username_1: [lmcgrana commented on Oct 25, 2018](https://github.iu.edu/UITS/rivet-source/issues/332#issuecomment-25368) *** @mtwagner No problem at all! That's a good question. There My initial thoughts are to make the source/data that gets filtered part of a config option. This one will be a bit different form other Rivet JavaScript components/plugins in that each autocomplete/combo box will need to be instantiated and configured so that you can control things like the data source. So I'm thinking some thing like: ```js const myFirstAutocomplete = new RivetAutoComplete({ data: sourceData, // Other config options }); I think we could design it so that sourceData is either: ``` An array of synchronous/static results (Strings) e.g. `['<NAME>', '<NAME>', '<NAME>'] A function that returns an array of results (Strings) That way if data in the example above is a function you could do your ajax calls there. Haven't given this tons of thought, but that's what I'd like to end up with. I'm still working my way through this one as it's a real beast in terms of making it accessible/usable for folks using screen readers. Here's a link to a very early prototype I started. There's still a lot of work to be done in terms of accessibility (keyboard navigation, labeling, etc.). https://codepen.io/levimcg/pen/XPyxyW Would love to hear some thoughts on how you see it working and what all types of options you'd like to have. Thanks! 🙌 Status: Issue closed
lerna/lerna
350556857
Title: How to ignore node_modules of internal dependencies on bootstrapping packages Question: username_0: First, thank you for the nice mono repo manager. I'm working on Angular applications as my packages. For internal dependencies, when package1 is installed in the "node_modules" of package2, if it (package1) includes "node_modules", I get an error like "not found index.ts", and after adding it in tsConfig.json I face another issue, "No NgModule metadata found for 'AppModule'", when I'm trying to run the app or build it. But if I remove the "node_modules" folder from package2/node_modules/@MyPackages/package1/**node_modules**, everything is fine. So how can I ignore the "node_modules" of internal dependencies on bootstrapping packages? ## Environment "@angular/core": "^6.1.2", "@angular/cli": "^6.1.3", "typescript": "~2.9.2" | Executable | Version | | ---: | :--- | | `lerna --version` | ^2.11.0| | `npm --version` | 5.6.0| | `node --version` | v8.11.3| | OS | Version | | --- | --- | | Windows7| | Answers: username_1: This looks like a TypeScript or Angular bug, not Lerna. One thing to note is that lerna manages npm packages, not magical ng-packagr artifacts. Wherever your leaf package.json's [`main`](https://docs.npmjs.com/files/package.json#main) property points to is where node will resolve `import` (or `require()`) paths. username_0: You're right. After I upgraded Angular from 5 to 6, this issue appeared. But can I bootstrap the internal packages while ignoring the "node_modules" folder in Lerna? Because this approach is going to solve my issue. Right now I'm doing it manually. username_1: I'm afraid I don't understand what you mean by "bootstrap internal packages while ignoring `node_modules`". username_0: Actually the packages I'm working on are runnable applications as well, and I face the issue only when I want to run each package. I think there is nothing that Lerna can do for me because the concept of Lerna is different from my expectation. Lerna only creates a shortcut to each internal dependency. Thanks again for your quick response. username_1: Ah, yes, that sounds like it. Happy to help clarify usage expectations. Status: Issue closed username_2: @username_0, So how did you fix it? Because I got the same error, and for now I'm fixing it by removing node_modules manually
buildkite/lifecycled
614403655
Title: Using lifecycled to handle Autoscaling terminations and spot terminations Question: username_0: Is using a single lifecycled daemon to listen and respond to Autoscaling terminations and Spot terminations a use case that is expected to be working? I'm seeing some inconsistent results when attempting to use it in this fashion. When I start the daemon with `--no-spot` it is able to handle the Autoscaling terminations just fine, so thanks! However, when I remove that flag and attempt to have it handle the Spot termination warning events as well, I'm not having success. Additionally, when I restart the daemon (without changing configurations) sometimes I will see it logging that the `spot` listener is polling sqs for messages, which to me, is unexpected. Other times the `spot` listener will log that it is polling ec2 metadata for spot termination notices, which is more of what I expect. This happens on the same host, not changing configurations, it appears that just stopping and starting the daemon results in different behavior. Example log excerpt with expected log entries: ``` time="2020-05-07T18:35:39Z" level=info msg="Looking up instance id from metadata service" time="2020-05-07T18:35:39Z" level=info msg="Starting listener" instanceId=i-instanceid listener=spot time="2020-05-07T18:35:39Z" level=info msg="Starting listener" instanceId=i-instanceid listener=autoscaling time="2020-05-07T18:35:39Z" level=info msg="Waiting for termination notices" instanceId=i-instanceid time="2020-05-07T18:35:39Z" level=debug msg="Creating sqs queue" instanceId=i-instanceid listener=autoscaling queue=lifecycled-i-instanceid time="2020-05-07T18:35:39Z" level=debug msg="Subscribing queue to sns topic" instanceId=i-instanceid listener=autoscaling topic="arn:aws:sns:us-east-1:account:terminate-hook-name" time="2020-05-07T18:35:40Z" level=debug msg="Polling sqs for messages" instanceId=i-instanceid listener=autoscaling queueURL="https://sqs.us-east-1.amazonaws.com/account/lifecycle d-i-instanceid" time="2020-05-07T18:35:44Z" level=debug msg="Polling ec2 metadata for spot termination notices" instanceId=i-instanceid listener=spot time="2020-05-07T18:35:49Z" level=debug msg="Polling ec2 metadata for spot termination notices" instanceId=i-instanceid listener=spot time="2020-05-07T18:35:54Z" level=debug msg="Polling ec2 metadata for spot termination notices" instanceId=i-instanceid listener=spot time="2020-05-07T18:35:59Z" level=debug msg="Polling ec2 metadata for spot termination notices" instanceId=i-instanceid listener=spot time="2020-05-07T18:36:00Z" level=debug msg="Polling sqs for messages" instanceId=i-instanceid listener=autoscaling queueURL="https://sqs.us-east-1.amazonaws.com/account/lifecycle d-i-instanceid" ``` Example log excerpt with unexpected log entries: ``` time="2020-05-07T23:09:21Z" level=info msg="Looking up instance id from metadata service" time="2020-05-07T23:09:21Z" level=info msg="Starting listener" instanceId=i-instanceid listener=spot time="2020-05-07T23:09:21Z" level=info msg="Starting listener" instanceId=i-instanceid listener=autoscaling time="2020-05-07T23:09:21Z" level=info msg="Waiting for termination notices" instanceId=i-instanceid time="2020-05-07T23:09:21Z" level=debug msg="Creating sqs queue" instanceId=i-instanceid listener=autoscaling queue=lifecycled-i-instanceid time="2020-05-07T23:09:21Z" level=debug msg="Creating sqs queue" instanceId=i-instanceid listener=spot queue=lifecycled-i-instanceid time="2020-05-07T23:09:21Z" level=debug msg="Subscribing queue to sns topic" 
instanceId=i-instanceid listener=autoscaling topic="arn:aws:sns:us-east-1:account:terminate-hook-name" time="2020-05-07T23:09:21Z" level=debug msg="Subscribing queue to sns topic" instanceId=i-instanceid listener=spot topic="arn:aws:sns:us-east-1:account:terminate-hook-name" time="2020-05-07T23:09:21Z" level=debug msg="Polling sqs for messages" instanceId=i-instanceid listener=spot queueURL="https://sqs.us-east-1.amazonaws.com/account/lifecycled-i-instanceid" time="2020-05-07T23:09:22Z" level=debug msg="Polling sqs for messages" instanceId=i-instanceid listener=autoscaling queueURL="https://sqs.us-east-1.amazonaws.com/account/lifecycled-i-instanceid" ``` Answers: username_1: Hmmm, that is odd @username_0. Did #85 fix the issue for you? username_0: Yes, it did. :) Status: Issue closed
kubevirt/cluster-network-addons-operator
650127872
Title: [ref_id:1182]Probes for readiness should succeed [test_id:1200][posneg:positive]with working HTTP probe and http server Question: username_0: /kind bug /triage build-officer **What happened**: `[ref_id:1182]Probes for readiness should succeed [test_id:1200][posneg:positive]with working HTTP probe and http server /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:43 Timed out after 60.000s. Expected <v1.ConditionStatus>: False to equal <v1.ConditionStatus>: True /root/go/src/kubevirt.io/kubevirt/tests/probes_test.go:82` [Prow job log](https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/3726/pull-kubevirt-e2e-k8s-1.18/1278299846477877248) **Lane:** pull-kubevirt-e2e-k8s-1.18 Answers: username_0: /close @username_1 - why this is appearing on the CNAO flakefinder report page? https://storage.googleapis.com/kubevirt-prow/reports/flakefinder/kubevirt/cluster-network-addons-operator/flakefinder-2020-07-02-024h.html username_1: Because the [job configuration](https://github.com/kubevirt/project-infra/blob/master/github/ci/prow/files/jobs/kubevirt/cluster-network-addons-operator/cluster-network-addons-operator-periodics.yaml) misses required parameters, `org` and `repo` have to be set, otherwise they both default to `kubevirt` username_1: /cc @phoracek @qinqon
RustAudio/cpal
834415997
Title: PipeWire support Question: username_0: PipeWire will soon be replacing both JACK and PulseAudio on Linux starting with Fedora 34 which will be released in a few weeks. Arch currently has packages that make it easy to switch to PipeWire. I am unclear about Ubuntu's plans and just [asked them about that](https://bugs.launchpad.net/ubuntu/+source/pipewire/+bug/1802533/comments/36). PipeWire reimplements the JACK and PulseAudio APIs. Currently the PipeWire developer is not recommending to use the new PipeWire API if applications already work well with PipeWire via JACK or PulseAudio. I am not familiar enough with cpal to know, but there might be some benefits to using the PipeWire API instead of using PipeWire via the JACK API. I have attempted to test cpal's JACK backend with PipeWire but have not gotten it to work. The `pw-uninstalled.sh` script sets the `LD_LIBRARY_PATH` environment variable so PipeWire's reimplementation of libjack is loaded before JACK's original library. I know Rust links statically to other Rust libraries but I am unclear how Rust links to dynamic C libraries. Maybe the `LD_LIBRARY_PATH` trick does not work with Rust? ``` cpal on  master is 📦 v0.13.2 via 🦀 v1.49.0 ❯ cargo run --release --example beep --features jack Finished release [optimized] target(s) in 0.03s Running `target/release/examples/beep` Output device: default Default output config: SupportedStreamConfig { channels: 2, sample_rate: SampleRate(44100), buffer_size: Range { min: 170, max: 1466015503 }, sample_format: F32 } memory allocation of 5404319552844632832 bytes failed fish: “cargo run --release --example b…” terminated by signal SIGABRT (Abort) cpal on  master is 📦 v0.13.2 via 🦀 v1.49.0 ❯ ~/sw/pipewire/pw-uninstalled.sh Using default build directory: /home/be/sw/pipewire/build Welcome to fish, the friendly interactive shell Type `help` for instructions on how to use fish cpal on  master is 📦 v0.13.2 via 🦀 v1.49.0 ❯ cargo run --release --example beep --features jack Finished release [optimized] target(s) in 0.03s Running `target/release/examples/beep` Output device: default Default output config: SupportedStreamConfig { channels: 2, sample_rate: SampleRate(44100), buffer_size: Range { min: 170, max: 1466015503 }, sample_format: F32 } memory allocation of 5404319552844632832 bytes failed fish: “cargo run --release --example b…” terminated by signal SIGABRT (Abort) ``` Related discussions regarding PipeWire and PortAudio: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/130 https://github.com/PortAudio/portaudio/issues/425 Answers: username_0: The PipeWire developer is very responsive and working hard on PipeWire every day. If anyone is interested in working on this for cpal, I recommend getting in touch with him through [PipeWire's issue tracker](https://gitlab.freedesktop.org/pipewire/pipewire/-/issues) and/or #pipewire on Freenode. username_1: I'm running Arch + pipewire{,-jack,-pulse} and the jack example works here. What distro do you use? ``` Running `target/release/examples/beep` Output device: default Default output config: SupportedStreamConfig { channels: 2, sample_rate: SampleRate(44100), buffer_size: Range { min: 3, max: 4194304 }, sample_format: F32 } ``` username_1: Oh, actually I made a mistake; the above one is probably using PipeWire's ALSA integration. When testing JACK you should run with `cargo run --release --example beep --features jack -- --jack`. username_1: When I run with pipewire-jack it doesn't crash but doesn't make sound either. Strange. 
username_1: Seems that we have the same regex problem as PortAudio. Fix incoming. username_0: Hopefully it doesn't take 3 days to figure out how to find and replace a string in Rust like it did for me in C :laughing: username_0: I recommend reading the [PipeWire wiki](https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ). There is a lot of good information there. username_1: FYI it's https://github.com/RustAudio/cpal/blob/5cfa09042c629c1ba39bbac1a6ae9278847a9ac1/src/host/jack/stream.rs#L152 and https://github.com/RustAudio/cpal/blob/5cfa09042c629c1ba39bbac1a6ae9278847a9ac1/src/host/jack/stream.rs#L178. But I'm planning to change to a different logic based on [mpv's](https://github.com/mpv-player/mpv/blob/5824d9fff829f8bb2beebe0b82df88c8e0330115/audio/out/ao_jack.c#L146). username_0: Hardcoding using the `system` ports in JACK is definitely not right and won't work with PipeWire at all. I am wondering why that was done. Was it just a quick hack to get something working or does this reflect a bigger problem in the cpal API? PortAudio's JACK implementation is quite quirky. Normally JACK applications create all their ports which any application communicating with the server can connect to arbitrarily. But PortAudio's abstractions flip that relationship upside down so PortAudio creates a port when the application asks to start a stream; all of the PortAudio application's ports are not available for other applications to connect to until the PortAudio application opens the stream. Does cpal's API have the same problem? username_1: As usual, our development effort is limited and the JACK backend is quite close to a proof-of-concept. We do care about making the abstractions not leaky, though. We register a port when you create a `Stream`... I hope it's fine? You can then start or stop the `Stream` through the methods. https://github.com/RustAudio/cpal/blob/5cfa09042c629c1ba39bbac1a6ae9278847a9ac1/src/host/jack/stream.rs#L37 username_0: As long as the application can create JACK/PipeWire ports for other applications or the session manager to connect to without the cpal application necessarily using all of them, I think it can work. There may need to be a mechanism for signaling the cpal application when another application or hardware device has connected to/disconnected from the cpal application's port, but I'm not familiar enough with any of these APIs to be sure about that. That may be related to hotplug support (#373). username_0: This means that a CPAL application using PipeWire could request a small buffer size for low latency. When the application goes idle, it could call Stream::pause and yield control of the buffer size back to PipeWire. This would allow keeping a low latency application open in the background, pausing it, then watching a YouTube video in Firefox without having to close any applications or mess with reconfiguring anything. If I force a JACK application using the PipeWire JACK reimplementation to use a small buffer size with the PIPEWIRE_LATENCY environment variable, YouTube videos in Firefox have glitchy audio while the other application remains running. JACK has an API somewhat like this with [int jack_deactivate (jack_client_t * client)](https://jackaudio.org/api/group__ClientFunctions.html#ga2c6447b766b13d3aa356aba2b48e51fa), but this does not fit with CPAL's abstractions. To use this, CPAL would need to track the state of each Device's streams and call `jack_deactivate` when all the CPAL Streams for a Device are paused. 
Related discussion on the PipeWire issue tracker: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/722 username_0: @username_1 if you or anyone else works on a new PipeWire backend for CPAL, I would be happy to help test it. I am afraid writing it myself would be above my skill level though. username_2: It kinda sounds like you have the most domain expertise here. Why not have a go? username_0: I think that would be a bit ambitious for a first Rust project. username_0: FWIW, here is SDL's PipeWire backend: https://github.com/libsdl-org/SDL/pull/4094 username_0: This would be great for [Whisperfish](https://gitlab.com/whisperfish/whisperfish/-/issues/73)! username_3: Any updates on this? username_1: As far as I have tested, it seems the JACK backend (+pw-jack) works fine for low-latency use cases on PipeWire. If you have any specific thing missing that requires directly integrating with PipeWire, then please let me know with a comment. username_4: I've been looking at developing a PW backend, but one major problem I see is the main loop. A PipeWire application requires a main loop object to be created. An application calls the run method, which never returns (it can be terminated but you do it through sources). This would be a major problem with CPALand anything consuming it since it would never allow anything to run beyond the PW main loop. I'm guessingother audio backendsdo this, but I can't seem to find any way of stepping through the loop without using sources (e.g. timers). Thoughts about getting around this? username_1: I don't really know, but can you run that loop on another thread? It's common to create additional helper threads in cpal backends. username_4: I'll look at doing that -- I didn't think of that lol. username_0: With https://github.com/RustAudio/rust-jack/pull/154 I confirm that the cpal examples work with PipeWire's implementation of JACK. The `enumerate` example is only listing the ports cpal crates. I don't know if that's an issue with the `enumerate` example or cpal: ``` cpal on  HEAD (59ee8ee) [!] is 📦 v0.13.4 via 🦀 v1.56.1 ❯ cargo run --release --example enumerate --features jack Finished release [optimized] target(s) in 0.05s Running `target/release/examples/enumerate` Supported hosts: [Jack, Alsa] Available hosts: [Jack, Alsa] JACK Default Input Device: Some("cpal_client_in") Default Output Device: Some("cpal_client_out") Devices: 1. "cpal_client_in" Default input stream config: SupportedStreamConfig { channels: 2, sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } All supported input stream configs: 1.1. SupportedStreamConfigRange { channels: 1, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.2. SupportedStreamConfigRange { channels: 2, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.3. SupportedStreamConfigRange { channels: 4, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.4. SupportedStreamConfigRange { channels: 6, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.5. 
SupportedStreamConfigRange { channels: 8, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.6. SupportedStreamConfigRange { channels: 16, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.7. SupportedStreamConfigRange { channels: 24, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.8. SupportedStreamConfigRange { channels: 32, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.9. SupportedStreamConfigRange { channels: 48, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.10. SupportedStreamConfigRange { channels: 64, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } Default output stream config: SupportedStreamConfig { channels: 2, sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } All supported output stream configs: 1.1. SupportedStreamConfigRange { channels: 1, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.2. SupportedStreamConfigRange { channels: 2, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.3. SupportedStreamConfigRange { channels: 4, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.4. SupportedStreamConfigRange { channels: 6, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.5. SupportedStreamConfigRange { channels: 8, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.6. SupportedStreamConfigRange { channels: 16, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.7. SupportedStreamConfigRange { channels: 24, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.8. SupportedStreamConfigRange { channels: 32, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.9. SupportedStreamConfigRange { channels: 48, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 1.10. SupportedStreamConfigRange { channels: 64, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2. "cpal_client_out" Default input stream config: SupportedStreamConfig { channels: 2, sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } All supported input stream configs: 2.1. 
SupportedStreamConfigRange { channels: 1, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.2. SupportedStreamConfigRange { channels: 2, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.3. SupportedStreamConfigRange { channels: 4, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.4. SupportedStreamConfigRange { channels: 6, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.5. SupportedStreamConfigRange { channels: 8, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.6. SupportedStreamConfigRange { channels: 16, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.7. SupportedStreamConfigRange { channels: 24, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.8. SupportedStreamConfigRange { channels: 32, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.9. SupportedStreamConfigRange { channels: 48, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.10. SupportedStreamConfigRange { channels: 64, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } [Truncated] SupportedStreamConfig { channels: 2, sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } All supported output stream configs: 2.1. SupportedStreamConfigRange { channels: 1, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.2. SupportedStreamConfigRange { channels: 2, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.3. SupportedStreamConfigRange { channels: 4, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.4. SupportedStreamConfigRange { channels: 6, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.5. SupportedStreamConfigRange { channels: 8, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.6. SupportedStreamConfigRange { channels: 16, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.7. SupportedStreamConfigRange { channels: 24, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.8. SupportedStreamConfigRange { channels: 32, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.9. 
SupportedStreamConfigRange { channels: 48, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } 2.10. SupportedStreamConfigRange { channels: 64, min_sample_rate: SampleRate(48000), max_sample_rate: SampleRate(48000), buffer_size: Range { min: 1024, max: 1024 }, sample_format: F32 } ALSA Default Input Device: Some("default") Default Output Device: Some("default") Devices: ... ``` username_0: https://github.com/RustAudio/cpal/pull/624 fixes the build using Pipewire for the jack feature. username_5: I've started working on a PipeWire host here: https://github.com/username_5/cpal/tree/pipewire-host Any help/thoughts would be appreciated. I think I'm fairly far along (I haven't pushed some changes yet), but I can't find any way aside from the PipeWire Stream API to actually write/read data, and I'm not sure if my node creation code is correct. username_0: Cool! Could you make a draft pull request? username_0: The pw_filter API does not have Rust bindings yet. This is in progress: https://gitlab.freedesktop.org/pipewire/pipewire-rs/-/merge_requests/112 username_0: If you're creating pw_nodes manually then I don't think you strictly need pw_filter, but my understanding is that using pw_filter would be easier to implement.
ppb/pursuedpybear
605828244
Title: Gamepad Support Question: username_0: We should support gamepads. * [SDL docs](https://wiki.libsdl.org/CategoryGameController) * [MDN on Gamepad API](https://developer.mozilla.org/en-US/docs/Web/API/Gamepad_API) This should include: * Normalization of layout (SDL: [GameControllerDB](https://github.com/gabomdq/SDL_GameControllerDB)) * Hotplug support * Metadata about button labels * Multiple device handling Note that this does not include joysticks, basically because none of the underlying libraries seem to care about them any more. This might be significantly easier with #97. Answers: username_0: Pardon, SDL does have a separate [joystick API](https://wiki.libsdl.org/CategoryJoystick) but the web does not have a distinct API. username_1: Also, the SDL_GameControllerDatabase: https://github.com/gabomdq/sdl_gamecontrollerdb
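As a rough illustration of the SDL side of this request, the plain C sketch below uses the SDL2 GameController API linked above: layout normalization via the community GameControllerDB mappings, hotplug via `SDL_CONTROLLERDEVICEADDED`, and per-device instance IDs. It is not ppb code; the `gamecontrollerdb.txt` path and the event-loop shape are assumptions for illustration.

```c
#include <SDL.h>
#include <stdio.h>

int main(void) {
    /* SDL_INIT_GAMECONTROLLER implies SDL_INIT_JOYSTICK. */
    if (SDL_Init(SDL_INIT_GAMECONTROLLER) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Optional: load extra mappings from the community GameControllerDB,
       which normalizes button/axis layout across many devices
       (file path is an assumption). */
    SDL_GameControllerAddMappingsFromFile("gamecontrollerdb.txt");

    /* Hotplug: SDL delivers added/removed events, so pads can be opened
       while the game is already running. */
    SDL_Event e;
    while (SDL_WaitEvent(&e)) {
        if (e.type == SDL_CONTROLLERDEVICEADDED) {
            SDL_GameController *pad = SDL_GameControllerOpen(e.cdevice.which);
            if (pad) {
                printf("opened: %s\n", SDL_GameControllerName(pad));
            }
        } else if (e.type == SDL_CONTROLLERBUTTONDOWN &&
                   e.cbutton.button == SDL_CONTROLLER_BUTTON_A) {
            /* Buttons are reported in the normalized "Xbox-style" layout. */
            printf("A pressed on controller instance %d\n", (int)e.cbutton.which);
        } else if (e.type == SDL_QUIT) {
            break;
        }
    }

    SDL_Quit();
    return 0;
}
```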
simple-icons/simple-icons
873097743
Title: ![PostgreSQL](https://img.shields.io/badge/-PostgreSQL-black?style=flat-square&logo=PostgreSQL) Question: username_0: **Brand Name:** **Website:** **Alexa rank:** **Official resources for icon and color:** Answers: username_1: Hi @username_0 - could you please elaborate on what your issue is? PostgreSQL is already part of our library, and is actually in discussion for update at the moment (see #5561). I'll look to close this issue out over the weekend as a duplicate, unless there's another reason for you opening it. Status: Issue closed
Azure/custom-script-extension-linux
507332348
Title: Does not work for Azure VM scale set Question: username_0: I am using the command ```` az vmss extension set --resource-group $resourceGroup --vmss-name $ssName --settings customscript.json --protected-settings customscript-protected.json --name CustomScript --publisher Microsoft.Azure.Extensions --version 2.0 ```` to install the Custom Script extension and run an initialization script on every Linux (Ubuntu 18 LTS) VM in a scale set, but nothing happens. The folder /var/log/azure/custom-script/ is empty, and the waagent.log does not contain a single line pertaining to the Custom Script extension. On the other hand, the equivalent command against a single VM, `az vm extension set`, works fine and does what's required. Any thoughts on how to fix that? Answers: username_0: CustomScript only executes automatically when the VM scale set is created with an automatic instance upgrade option. If not, each instance must be manually upgraded to execute the Custom Script. Status: Issue closed
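For readers hitting the same thing, here is a hedged Azure CLI sketch of the two options the answer above describes; `$resourceGroup`/`$ssName` are the same placeholders as in the report, and the exact flags should be checked against your CLI version.

```sh
# Option 1: create the scale set with an automatic upgrade policy so that
# model changes such as adding the CustomScript extension roll out to
# instances on their own.
az vmss create \
  --resource-group "$resourceGroup" \
  --name "$ssName" \
  --image UbuntuLTS \
  --upgrade-policy-mode automatic

# Option 2: with a Manual upgrade policy, push the updated model to the
# existing instances yourself after running "az vmss extension set".
az vmss update-instances \
  --resource-group "$resourceGroup" \
  --name "$ssName" \
  --instance-ids "*"
```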
bagustris/SER_ICSigSys2019
913615429
Title: model.add(LSTM(512, return_sequences=True, input_shape=(100, 34))) Question: username_0: Why does input_shape equal (100, 34)? Does 100 mean time_steps? How should I understand it? Thank you very much! Answers: username_0: I am very glad to receive your reply. I got it. Thank you very much! I would also like to ask how long each frame is. Between 20 and 30 ms? username_1: Typically the frame is between 15 and 30 ms. But in this paper I used 200 ms. See the [save_feature.py code](../blob/master/code/python_files/save_feature.py) in this repo (window_n = 0.2). So it makes sense that the number of time steps is only 200 sequences. In my other research, I used 25 ms frames, resulting in around 3000 timesteps, for instance here: https://github.com/username_1/ravdess_song_speech/
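To make the shape semantics in this thread concrete, here is a small Keras sketch: `input_shape=(100, 34)` means 100 time steps (frames) per utterance and 34 features per frame. The utterance length, the second layer, and the 4-class output are assumptions for illustration only, not the paper's actual model.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Assumed framing: a 20 s utterance cut into 200 ms windows -> 100 frames,
# with 34 acoustic features extracted per frame.
utterance_seconds = 20.0
window_seconds = 0.2          # window_n = 0.2 mentioned in the answer
n_features = 34

time_steps = int(utterance_seconds / window_seconds)   # 100

# Dummy batch of 8 utterances shaped (batch, time_steps, n_features).
x = np.random.rand(8, time_steps, n_features).astype("float32")

model = Sequential()
# input_shape=(100, 34) == (time steps per utterance, features per frame)
model.add(LSTM(512, return_sequences=True, input_shape=(time_steps, n_features)))
model.add(LSTM(256, return_sequences=False))
model.add(Dense(4, activation="softmax"))   # e.g. 4 emotion classes (assumption)

model.compile(optimizer="adam", loss="categorical_crossentropy")
print(model.predict(x).shape)   # (8, 4)
```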
vijayrkn/AzFunTools
749951545
Title: Better experience when adding bindings Question: username_0: https://twitter.com/gabrymartinez/status/1327695777058189312 One more +1000 for a better experience when adding bindings; also, could you please add VS Code support for this one? Answers: username_0: https://twitter.com/uploadtocloud/status/1327681929597116416 It would be amazing if Visual Studio had better binding support, e.g. when you type “[Table”, VS would suggest/autocomplete the table binding.
ceres-solver/ceres-solver
1034409139
Title: jet.h has some errors Question: username_0: When I try to run helloworld.cc, it shows the following error; could someone please teach me how to fix it? ![image](https://user-images.githubusercontent.com/63281608/138595061-02ae2f78-a13b-4a98-b756-bd215551fd87.png) Answers: username_1: I am going to close this issue; if you have more questions/concerns, please feel free to re-open it. Status: Issue closed
iterative/dvc
753900428
Title: dvc install --use-pre-commit doesn't install pre-push and post-checkout Question: username_0: ## Bug Report `dvc install` can't be used if a repository is already using `pre-commit`. While you can use `dvc install --use-pre-commit`, the `pre-push` and `post-checkout` hooks are not added to the git hooks. This behavior is not documented and feels misleading. It may be personal opinion but, I would expect there to be a way to add those hooks when using `pre-commit` and right now that doesn't appear to be the case. In order to get all of the hooks I had to delete the `pre-commit` hooks and run the following. ```console dvc install pre-commit install dvc install --use-pre-commit ``` This seems overly difficult and needless. Please let me know if I am just missing something obvious. ```console DVC version: 1.10.2 (pip) --------------------------------- Platform: Python 3.8.2 on macOS-10.16-x86_64-i386-64bit Supports: http, https, s3 Cache types: reflink, hardlink, symlink Caches: local Remotes: s3 Repo: dvc, git ``` Answers: username_1: This is really a pre-commit behavior thing, and is not DVC specific. The `--use-pre-commit` flag just modifies your existing `.pre-commit-config.yaml` and adds the DVC configuration. If you are already using pre-commit then you still have to tell pre-commit to install the hook stages you wish to use. By default pre-commit only ever installs the `pre-commit` hook, and will ignore any other stages unless you explicitly tell pre-commit to install them. So for a repo which is already using pre-commit you should be running: ``` pre-commit install --hook-type pre-push,post-checkout dvc install --use-pre-commit ``` (this can be done in either order) username_1: This is something that we can note in the DVC docs though, so I'll transfer this issue to the dvc.org docs repo
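For context, a hedged sketch of what the combined `.pre-commit-config.yaml` roughly looks like after `dvc install --use-pre-commit`; the `rev` value is a placeholder and the hook ids/stages should be double-checked against your DVC version's published `.pre-commit-hooks.yaml`.

```yaml
# .pre-commit-config.yaml (illustrative; rev is a placeholder)
repos:
  # ...your existing pre-commit repos/hooks stay here...
  - repo: https://github.com/iterative/dvc
    rev: 1.10.2
    hooks:
      - id: dvc-pre-commit
        stages: [commit]
      - id: dvc-pre-push
        stages: [push]
      - id: dvc-post-checkout
        stages: [post-checkout]
        always_run: true
```

Even with this config in place, pre-commit only installs the plain `pre-commit` stage by default, which is why the extra `pre-commit install --hook-type ...` step described above is still needed.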
facebook/buck
517501349
Title: Kotlin internal modifier wired incorrectly Question: username_0: The -module-name argument used to specify the name mangling used for Kotlin bytecode generation is incorrect. It's currently using the invokingRule shortname, which usually results in methods like `myMethod$src_release()`. This will frequently conflict and allow expanded access from Java. The documentation states "a Gradle source set (with the exception that the test source set can access the internal declarations of main);" Gradle uses the fully qualified module; we should probably use the full invokingRule sans the specific action after the ":" to allow tests and all other targets within one BUCK file to have access. [KotlincToJarStepFactory.java](https://github.com/facebook/buck/blob/master/src/com/facebook/buck/jvm/kotlin/KotlincToJarStepFactory.java) Answers: username_0: @IanChilds @kageiit @thalescm @kurtisnelson @leland-takamine username_1: I believe this issue was fixed by https://github.com/facebook/buck/commit/7ddc43c4ab5883d17208e930207028144ec5b856#diff-311c95e6c277e00e1457a5ba290e0739
```
private static String getModuleName(BuildTarget invokingRule) {
  return new StringBuilder()
      .append(invokingRule.getBasePath().toString().replace(File.separatorChar, '.'))
      .append(".")
      .append(invokingRule.getShortName())
      .toString();
}
```
Status: Issue closed
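As a hedged illustration of what the report describes (file and member names are invented; this is not Buck code):

```kotlin
// Session.kt — assume it is compiled with: kotlinc -module-name src_release Session.kt
class Session {
    // On the JVM this member is emitted as public with a mangled name,
    // myMethod$src_release(); the `internal` visibility is only enforced by
    // the Kotlin compiler for callers in the same module.
    internal fun myMethod() = println("only meant for this module")
}

// If every BUCK target reuses the rule's short name (e.g. "src_release") as its
// -module-name, unrelated targets end up with matching mangled names, and Java
// code — which knows nothing about `internal` — can simply call
// new Session().myMethod$src_release() directly. Deriving the module name from
// the full target path keeps the mangled names distinct per target.
```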
theloopkr/loopchain
293179681
Title: Radiostation RESTful API documentation spec error Question: username_0: Hello. I have been testing loopchain, and there are parts that differ from the spec stated in the documentation. [GET /api/v1/peer/status?peer_id={Peer ID}&group_id={Peer group ID}&channel={channel_name}](https://github.com/theloopkr/loopchain/blob/master/radiostation_proxy_restful_api.md#get-apiv1peerstatuspeer_idpeer%EC%9D%98-idgroup_idpeer%EC%9D%98-group-idchannelchannel_name) specifies the Response Body spec as follows:
```json
{
    "response_code": "int : RES_CODE",
    "data": {
        "made_block_count": "number of blocks created by this Peer",
        "status": "connection status of the Peer",
        "audience_count": "number of Peers subscribing to this Peer",
        "consensus": "consensus method (none:0, default:1, siever:2)",
        "peer_id": "ID of the Peer",
        "peer_type": "whether the Peer is the Leader Peer",
        "block_height": "number of Blocks created in the Blockchain",
        "total_tx": "int: number of Transactions created in the Blockchain",
        "peer_target": "gRPC target info of the Peer"
    }
}
```
However, when I start a test server and send a request to the API, it returns the following:
```json
{
    "made_block_count": 0,
    "status": "Service is online: 0",
    "peer_type": "0",
    "audience_count": "0",
    "consensus": "siever",
    "peer_id": "8ab941<PASSWORD>",
    "block_height": 1,
    "total_tx": 1,
    "peer_target": "192.168.1.80:7103",
    "leader_complaint": 1
}
```
dotnet/maui
626950938
Title: Is MVU really necessary? Question: username_0: I think MAUI should stick with only one way of designing UI, that's: **XAML** **Blazor Syntex** is okay, but MVU seems totally unnecessary mess to me. If it is to attract Flutter Devs, please, let them stay with Flutter; DO NOT destroy the beauty of XAML; Answers: username_1: Flutter has an entire [page](https://flutter.dev/docs/get-started/flutter-for/xamarin-forms-devs) dedicated to attracting people from Xamarin.Forms. You are saying we should ignore competition. Really? username_2: Blazor bindings are beautiful! I'm just starting out with them and they offer the simplicity that Flutter does. username_3: @davidortinau as I said in the other thread. The MAUI blog post created massive confussion. People now seem to think MVU = view as code/DSL. But this is completely independent from what MVU is. MVU is perfectly possible with XAML. It has nothing to do with how you write the view. It's only about creating an immutable model + update function which takes a model and msg and builds a new model and also a view function which does not mutate the model directly but sends new commands (messages) into the update loop. username_4: It is meant for C# and .NET dev. username_5: It has never been one way only. Code-based UIs have been supported through Xamarin.Forms from the beginning. Making that more approachable makes sense. And by the way: MVU can be used [easily with XAML](https://fsprojects.github.io/Fabulous/Fabulous.StaticView/). username_0: I know. Sometimes we do write `new Button() { .... }`, but this [post](https://devblogs.microsoft.com/dotnet/introducing-net-multi-platform-app-ui/) (**Image 1**) confused me, and many other, I believe. username_1: LOL. Imagine a page dedicated to "Windows Forms for WPF devs". username_6: XAML is just a "tool" on top of the object model... You can use xaml, c#. You can architect your app using MVVM (with or without XAML) or with MVU (to be fair the examples provided were not "real" MVU but this is another topic). If you dont like coded ui or the MVU approach just ignore it :) There is no need to push it back. I dont think this is just to attract flutter developer. The MVU pattern is on the rise, and is very well suited for mobile development. username_6: Also coded UI is on the rise... react, flutter, swiftUI, ecc... theyy are gaining a LOT of popularity username_0: # What I wanted to say: What will we choose between **Flutter/Swift/Coded-UI** thing and **WPF/XAML** with a **GUI Editor** like **Blend for Visual Studio**? username_5: Sometimes people write entire XF apps without touching XAML – and they are happy about it ;-). username_0: @username_5 I'm surprised..!! 😢 However, not for them, but for people like me, who wants **Blend** for Xamarin/MAUI, is unhappy: Android Studio **Motion Editor** https://developer.android.com/studio/write/motion-editor https://developer.android.com/studio/images/write/motion_animation_preview.gif Status: Issue closed username_3: @username_0 I wonder if you still want to have blend support once you worked in a system with properly working hot reload. Usually people prefer that a lot username_7: Coming from a XAML / Blend background, my initial thoughts around UI in code was to recoil but once I tried it, there were many benefits that I saw that I had simply not considered. 
The removal of the need for - what now seems like massively overly-complex but at the time felt totally reasonable - features such as converters, resources and similar have made me a real believer in code-first UIs. username_8: @username_0 - while a capable designer sounds like a great productivity tool, if you've been around for a while, you might have worked on "legacy" codebases, where the designer has been broken and stopped working a few Visual Studio versions back, and you'd have to understand and edit thousands of lines in .designer.cs by hand. As making even the tiniest of changes (like aligning a button) in such codebases might take a day or two - all those productivity benefits get reconsidered. (Had those experiences with both WinForms and WebForms previously). @dsyme talks about reliance on heavy tooling in [this talk about Fabulous](https://youtu.be/ZCRYBivH9BM?t=922) with a section dedicated to "The Problem with XAML". Even though Fabulous has lots of problems, it's hard to disagree with many of the points being raised. username_9: Blazor binding has multiple issues: 1 - different syntax between one-way and two-way binding (Value = @Value for one-way and Value-Changed = @Value for two-way); 2 - binding does not support IValueConverter, so we must convert values inline, and this is not good for reusing code; 3 - we can control UI updates via INotifyPropertyChanged from the viewmodel or model, but in Blazor this is done automatically, and this is a performance issue. Please keep XAML alive with its designer.
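To make the MVU description earlier in this thread concrete, here is a minimal, framework-free C# sketch of the model/update/view loop: an immutable model, an update function that builds a new model from a message, and a view function that only reads the model. It is not MAUI's actual API, just the pattern, and it works the same whether the view is XAML or code.

```csharp
using System;

// Immutable model
record Model(int Count);

// Messages describe what happened
enum Msg { Increment, Decrement }

static class CounterApp
{
    // update: (Model, Msg) -> new Model; never mutates the model in place
    static Model Update(Model model, Msg msg) => msg switch
    {
        Msg.Increment => model with { Count = model.Count + 1 },
        Msg.Decrement => model with { Count = model.Count - 1 },
        _ => model
    };

    // view: Model -> some representation; a real framework would diff/render it
    static string View(Model model) => $"Count = {model.Count}";

    static void Main()
    {
        var model = new Model(0);
        foreach (var msg in new[] { Msg.Increment, Msg.Increment, Msg.Decrement })
        {
            model = Update(model, msg);       // the "update loop"
            Console.WriteLine(View(model));   // re-render from the new model
        }
    }
}
```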
restsharp/RestSharp
868410954
Title: IRestResponse does not contain a definition for Data Question: username_0: ## Expected Behavior Return deserialized simple object ## Actual Behavior IRestResponse does not contain a definition of Data ## Steps to Reproduce the Problem 1. IRestResponse response = client.Execute<LoginDto>(request); 2. return response.Data I need to do a further deserialization with JsonConvert.DeserializeObject<LoginDto>(response.Content) before it works. When a breakpoint is placed on the Execute call, however, I can see the Data property with the correct deserialized object. Even when I try `? response.Data` in the Command Window at the breakpoint, it gives the same error, though I can browse and see the Data property in the response object. ## Specifications - Version: 106.11.7 - Platform: VS 2019, .NetFramework 4.7 - Subsystem: Answers: username_1: IRestResponse does not indeed. You should use IRestResponse<T> and Execute<T>, otherwise you won't get the Data property. https://github.com/restsharp/RestSharp/blob/0ed7b0a6b64ab4b9838c2c0cb76a1808facebe09/src/RestSharp/IRestResponse.cs#L127 username_0: Great, thank you @username_1 for the direction. Status: Issue closed
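A hedged sketch of the generic overload the answer points to, against the RestSharp 106.x API used in the report; the URL and DTO shape are placeholders.

```csharp
using RestSharp;

public class LoginDto
{
    public string Token { get; set; }
    public string UserName { get; set; }
}

public static class LoginClient
{
    public static LoginDto Login(string user, string password)
    {
        var client = new RestClient("https://api.example.com"); // placeholder URL
        var request = new RestRequest("login", Method.POST);
        request.AddJsonBody(new { user, password });

        // The generic Execute<T> returns IRestResponse<T>, which exposes .Data;
        // the non-generic IRestResponse does not have that property.
        IRestResponse<LoginDto> response = client.Execute<LoginDto>(request);

        // response.Data is already deserialized from response.Content,
        // so no extra JsonConvert call is needed.
        return response.Data;
    }
}
```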
Terkwood/BUGOUT
832003192
Title: about territory calculation Question: username_0: I would like to make the territory calculation more explicit and accurate. In particular, dead stones are difficult to handle cleanly. Answers: username_1: Right now the project uses Sabaki's implementation for dead stones & estimates: https://github.com/SabakiHQ/deadstones username_1: I think the biggest question here is, "how do we improve the calculation of dead stones?" The Sabaki/dead stones lib uses Monte Carlo. What approaches are used by other implementations of Go?
benzngf/MoshingGraphic-AE
221350047
Title: compile issues Question: username_0: cannot open file 'opencv_world310d.lib' Would you explain about that? Thanks. Answers: username_1: Hi, sorry about that. I tried using opencv (an open source image processing library) at first but failed. That is an old part where I forgot to remove the compile dependency completely. I've pushed a new version now and it should no longer depend on file 'opencv_world310d.lib' Status: Issue closed
spacetx/starfish
481638951
Title: Run starfish CI on windows Question: username_0: **As a** starfish user on a windows platform **I want** starfish to be tested on windows **so that** I am confident that starfish will work on my operating system. ### Acceptance Criteria 1. Windows added to starfish CI Answers: username_0: Tagging @dany-fu. Thanks for the bug report on the community call yesterday! Status: Issue closed username_1: DONE
ualbertalib/HydraNorth
149835879
Title: date_modified used the wrong datetime value/zone when record is modified Question: username_0: @username_3 has noticed that when a record is updated, the date_modified date is set incorrect, with current MST but set in UTC timezone. eg: an object updated on 11:01 MST shows date_modified as: Wed, 20 Apr 2016 11:01:30 +0000 Further investigation discovered that this is a bug in older version of Sufia: https://github.com/projecthydra/sufia/blob/v6.2.0/app/controllers/concerns/sufia/batch_edits_controller_behavior.rb#L51 It uses Time.now instead of Time.current or Time.zone.now to get the time. This will affect OAI harvesting. Answers: username_1: Ugh, Date/Time code is such a mess. I got curious, and when you fix this you might want to sanity check this, as well: ActiveFedora's Solr indexer will dump a DateTime object with MDT or MST offset (depending on the time of year) into Solr if dateModified is blank https://github.com/projecthydra/active_fedora/blob/e1391fb0dd4108923b02a3dfb344a19dd971a6f8/lib/active_fedora/indexing_service.rb#L66. Depending on how the actual serialization in Solr takes place, the dates *may* all get normalized, but since we rely heavily on Solr for caching it might be worth checking to see whether or not there are any weird bugs that crop up from inconsistencies between raw UTC in that field, or Timezoned values. username_2: Great, yes, dcterms:modified should then be mapped for OAI datestamp, I'll add this info to the relevant ticket #820 Status: Issue closed
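A hedged Ruby illustration of the difference being discussed; the values assume a Rails app configured for Mountain Time, as in the example above, and are illustrative only.

```ruby
# Illustrative only — assumes config.time_zone = "Mountain Time (US & Canada)"
# and a server clock also running in Mountain time.

t_bad  = Time.now      # 2016-04-20 11:01:30 -0600  (plain Time in the system zone; what old Sufia used)
t_good = Time.current  # Wed, 20 Apr 2016 11:01:30 MDT -06:00 (TimeWithZone; same as Time.zone.now)

# A zone-aware value converts correctly when written out in UTC:
t_good.utc.iso8601     # => "2016-04-20T17:01:30Z"

# The reported bug is the local wall-clock reading being labelled as UTC,
# e.g. by formatting the Time.now components with a +0000 offset:
t_bad.strftime("%a, %d %b %Y %H:%M:%S +0000")  # => "Wed, 20 Apr 2016 11:01:30 +0000"
```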
fh1ch/passport-gitlab2
231651616
Title: Why not make 'api' scope default? Question: username_0: In what cases would one use only the 'read_user' scope? After the authentication, the profile is retrieved using the `api/v3/user` URL - using only the 'read_user' scope will result in a 403 error: `{"error":"insufficient_scope","error_description":"The request requires higher privileges than provided by the access token.","scope":"api"}` Answers: username_1: Hi @username_0 First of all, sorry for the delayed answer. I investigated a bit and it seems that the `read_user` scope is indeed the right one to use, as it also worked perfectly fine in GitLab-9.0 to 9.1. However GitLab-9.2 has a regression issue in this area: https://gitlab.com/gitlab-org/gitlab-ce/issues/33022 I therefore hope that it will be fixed in one of the GitLab patch releases (e.g. `9.2.3`) or at least in the next version (e.g. `9.3.0`). If you don't want to change the scope programmatically (see https://github.com/username_1/passport-gitlab2#how-do-i-change-permissions--scope-when-obtaining-a-user-profile), you can use version `2` of this package (`npm install --save passport-gitlab2@2`) which still has the `api` scope by default. I hope this helps you. Cheers
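A hedged sketch of pinning the scope explicitly, following the README section linked above; the option names mirror common passport-oauth2-based strategies and should be checked against the package's own docs.

```js
const passport = require('passport');
const GitLabStrategy = require('passport-gitlab2').Strategy;

passport.use(new GitLabStrategy({
  clientID: process.env.GITLAB_APP_ID,
  clientSecret: process.env.GITLAB_APP_SECRET,
  callbackURL: 'https://example.com/auth/gitlab/callback', // placeholder
  // Work around the GitLab 9.2 regression by requesting the broader scope
  // explicitly (drop this again once read_user works on your GitLab version).
  scope: ['api'],
}, (accessToken, refreshToken, profile, done) => {
  done(null, profile);
}));
```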
alanxmvp/week12
458829411
Title: Comment Question: username_0: In general, the main comment is that you should stick to convention in naming your fields, classes, and types. Your current naming is all over the place. The convention is as follows: 1. If the field is plural you must name it as plural, i.e, question has many `answers`, then on the Question class, it should have a field called `answers` 2. Class names should always have the first letter as uppercase. 3. Field names should be snake case (i.e all lower case where space is denoted by `_`) 4. Class names (and by extension file names ) should be camelCase for javascript, and should not be plural for entities. 5. Arguments for functions should not be capitalized. ### Quora The relationship between a question and a question vote is one to many. Each question vote record represents a vote made by a user for this question, hence it is not a one to one relationship. Same for answer and answer votes. ### AirBnB The properties_tags table is the join table for the many to many relationship between properties and tags. Hence you should generate the ManyToMany relationship between these 2, instead of properties_tags. You can also use OneToMany between properties and properties_tags with tags.
NiGhTTraX/strong-mock
920429295
Title: use of `...` on `instance` is unsafe Question: username_0: ```ts
type Env = {
  sleep: (ms: number) => Promise<void>
  log: Logger
}

declare const withContext: (c: string, l: Logger) => Logger

function someFunction(env_: Env) {
  const env = {
    ...env_,
    log: withContext("bla bla", env_.log)
  }

  // this will throw TypeError: env.sleep is not a function
  env.sleep(10)

  // ...
}

const mockEnv = M.mock<Env>();
M.when(mockEnv.log).thenReturn(mkLogger()).anyTimes();

someFunction(M.instance(mockEnv));

M.verify(mockEnv);
``` Unfortunately I don't quite see how that could be fixed at all :( Answers: username_0: Even if I try to add `M.when(env.sleep).thenThrow().times(0);` it still fails. username_1: Hey @username_0, thanks for raising this. I think it's reasonable to return any properties that have expectations, so I'll keep the issue open until I address this. With that in place, you should be able to get the `sleep` property after spreading, provided that you have an expectation set on it. --- P.S.: `times(0)` is incorrect — the correct way of expecting that something won't be called is to simply not set any expectations on it. Any unexpected property access will automatically throw. In your example above it looks like you do intend to call `env.sleep`, so it should have a matching expectation. username_0: Correct, I just wanted to fix the `env.sleep is not a function` error. Though even if there is no expectation for `sleep` to be called, I think it would be nice to get an error like "sleep must not be called" vs "sleep is not a function". username_1: Hey @username_0, this is now fixed in [7.0.0](https://github.com/username_1/strong-mock/blob/b7a7472a8fc8d06c7e74169d070640f5c851cfcd/CHANGELOG.md#700-2021-06-17). username_0: That's great @username_1! Though even with that fix, if there are no expectations for accessing a method prop and then the prop is accessed and called, you get 2 different behaviours: if spread was used you get an "undefined is not a function" error (at call time), vs the much nicer unexpected-prop-access error (at access time). I think leaving a note about this issue in the readme is a good idea. username_1: Correct, you will get `undefined` instead of the `UnexpectedAccess` error. I've updated the [docs](https://github.com/username_1/strong-mock#can-i-spreadenumerate-a-mock-instance) with this, thank you for the feedback!
real-logic/artio
410407841
Title: Unable to purge archive Question: username_0: Upon startup it takes a long time to index messages in the archive. Most of them are irrelevant as only recent messages have any value and old ones can be discarded safely. Simply deleting the files in the archive doesn't work as their position is referenced by the replay-positions file. Do I understand correctly that in order to reduce the archive the application needs to 1) read the positions in replay-positions file 2) use catalog to work out whether messages are still relevant 3) remove the irrelevant positions from the file 4) remove the files from the archive Thanks. Answers: username_1: Thanks for raising this issue. My expected usage is that people periodically just rotate their log data and archive it. For example at an end of the day event. Is this not possible in your case? I've had a bit of a think and I'm wondering why it takes so long to index messages on startup. The way Artio is designed the indexer agent should be keeping up with the progress of the Framer agent. The scan on startup should just be of files that we are behind with. So that indicates to me that either: 1) Your Archiver is behind your Framer. You should be able to see from the Aeron counters what the position of the Archiver subscription is, or from the contents of the position indexes. Is this the case? 2) Our logic for scanning these logs on startup is suboptimal and we're scanning more data that we need to. username_0: Just to make sure I understand this: we can only rotate the log data when we reset the sequence numbers (i.e. beginning of the week)? Otherwise with both replay-index and replay-positions gone we would not be able to replay any fix messages, right? I also got to the bottom of my long start ups. For some reason when i was looking at the code i got an impression that Artio re-indexes each time, which is my mistake. After your comments i went through the logs and noticed an interesting thing: 1550845136049:main[INDEX] : Catchup [ReplayIndex]: recordingId = 1, recordingStopped @ 57920, indexStopped @ 50496 ... 1550845136284:main[INDEX] : Catchup [ReplayIndex]: recordingId = 9, recordingStopped @ 21783072, indexStopped @ 21783008 **1550845136364:main[INDEX] : Catchup [ReplayIndex]: recordingId = 13, recordingStopped @ 6245024, indexStopped @ 4096** 1550845141146:main[INDEX] : Catchup [ReplayIndex]: recordingId = 17, recordingStopped @ 17888, ... 1550845144585:main[INDEX] : Catchup [ReplayIndex]: recordingId = 11, recordingStopped @ 8113024, indexStopped @ 8112960 **1550845145018:main[INDEX] : Catchup [ReplayIndex]: recordingId = 15, recordingStopped @ 6256096, indexStopped @ 6496** 1550845149839:main[INDEX] : Catchup [ReplayIndex]: recordingId = 19, recordingStopped @ 23808, indexStopped @ 22016 Most recordings are almost at the end and just 13 and 15 are way off. I replayed those manually and apparently these only have hearbeats (TEMPLATE_ID == 16). My guess is that at some point we got disconnected from the venue but both library and engine were running. Problem is that ReplayIndex seems to be dropping everything but fix messages (https://github.com/real-logic/artio/blob/master/artio-core/src/main/java/uk/co/real_logic/artio/engine/logger/ReplayIndex.java:128) So I was wondering whether it should update position in case of a heartbeat? Many many thanks in advance. username_1: Hi, Yes - you're right that's exactly what should happen. Thanks for the debug information. 
username_1: Closing this issue since the `resetState()` operation solves this requirement I think. Status: Issue closed
NickMcConnell/FAangband
1087298114
Title: Dwarf witch tile Question: username_0: I came across one and it displayed as a purple "p". In the monster list there is an entry for witch but not dwarf witch, which makes me think the code assigns a random race to the witch monster for each instance? If the code uses a single tile for all instances of witch, isn't that problematic? Perhaps something along the lines of: if a monster doesn't have a tile assigned, use a generic race tile? Were it other than a dwarf, you might even have two generic tiles for each race, distinguishing sex. Answers: username_1: Yes, "player race monsters" (they get the PLAYER flag in monster.txt) have a player race attached. I think on the whole it's best to have a tile for witch rather than for the player race; all witches have the same spells, HP, etc, whereas the only real effect of the player races is how likely they are to be hostile to the player.
Killeroo/MaChe-Messenger
144950163
Title: Server breaks when client GUI disconnects Question: username_0: When the GUI client and another client are connected to the server, and the GUI client unexpectedly disconnects, the server can have issues establishing new client connections. Status: Issue closed Answers: username_0: Caused by the client connection not being closed and removed when the client unexpectedly disconnects.
netbox-community/netbox
772272886
Title: Custom field for interfaces and inventory item Question: username_0: ### Environment * NetBox version: 2.10.1 ### Proposed Functionality Refer to #1254 and #2281; this was not implemented due to the complexity of the previous CustomField value model. Now that each custom field resides within each model, is it possible to have this feature? I know that many people would like this feature to be available. ### Use Case I'd like to use custom fields to store some automated data, e.g. dot1x enabled / security-mode. Status: Issue closed Answers: username_1: Thank you for submitting this issue; however, it appears that this topic has already been raised. Please see issue #5401 for further discussion.
kubernetes/kubernetes
137352376
Title: Document new 1.2 liveness/readiness probe options Question: username_0: See #15967 Answers: username_1: What sort of documentation are you looking for? The parameters are already documented in the [API reference](http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_probe), and these are advanced features which I'm not sure belong in the user guide examples. username_2: @username_1 Could we document at least the criteria used to consider probes successful? I am using `httpGet` probes and I am working under the assumption that kubernetes is just looking at the response code and expecting 200? Is this correct? I think a separate issue could be giving the option of configuring expectation similar to LTM monitors. username_3: Can you please add a reference somewhere [here](http://kubernetes.io/docs/user-guide/liveness/) I know that I wanted to configure periodic timeout so I started to look for it and eventually found what I wanted in this thread (by looking through several issues), however if there were a link to API it would be helpful. I also believe that not everyone expects more options are available. username_1: Sure. It looks like all the information is there, but spread across multiple documents: - http://kubernetes.io/docs/user-guide/pod-states/#container-probes - http://kubernetes.io/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks - http://kubernetes.io/docs/user-guide/liveness/ - http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_probe /cc @devin-donnelly username_3: @username_1 well actually none of the guides mentions a way to configure how often to perform the probe. I think that if every page you mentioned above had a link to the relevant API it would help a lot. This is how I got to this thread: 1. First I found this page: (https://github.com/kubernetes/kubernetes/issues/12866) 2. From there I got to (https://github.com/kubernetes/kubernetes/pull/15967) 3. And finally to this thread. username_4: +1 for @username_2 's question. I could not find the following in the docs: If we use `httpGet` probes, does a 200 status code mean the check passed, and anything else means the check failed?
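Pulling the scattered pieces of this thread together, a hedged example of where the probe timing knobs live; note that for `httpGet` probes any status code >= 200 and < 400 counts as success, not just 200. Names, paths, and ports are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                      # placeholder
spec:
  containers:
    - name: app
      image: example.com/app:latest     # placeholder
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15   # wait before the first probe
        periodSeconds: 10         # how often to probe
        timeoutSeconds: 1         # per-probe timeout
        failureThreshold: 3       # restart after 3 consecutive failures
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
        successThreshold: 1
```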
gosu/gosu
775485127
Title: Look into CI failures Question: username_0: Over the course of the last days I have seen the strangest CI failures: * Building on macOS 10.14 fails because `sdl2-config --static-libs` wants to link to CoreHaptics.framework, which has only been introduced in 10.15 * AppVeyor builds with Ruby 2.5 and 2.7 had test failures, although 2.6 was fine? https://ci.appveyor.com/project/gosu-ci/gosu/builds/37023536/job/t838vlwo2gcb2ek8 * C++ builds fail on Travis with Ubuntu 20.04 (focal), which is why we're still only using bionic. Not sure if these are worth looking into, or if Gosu should start using GitHub actions instead (although AppVeyor has been absolutely fantastic so far). Status: Issue closed Answers: username_0: Gosu now uses GitHub actions, and I have not noticed any oddities yet
dalejung/nbx
164678286
Title: Exiting attached console will kill the kernel. Question: username_0: Something changed where exiting an attached kernel via the console kills the kernel. Exiting an attached kernel should detach and leave the kernel alone. Answers: username_0: Found this: https://github.com/jupyter/jupyter_console/issues/48 username_0: Using the solution from that ticket. Status: Issue closed
swagger-api/swagger-ui
53649280
Title: Building swagger-ui on windows 7 does not work Question: username_0: Hello, I'm trying to build a swagger-ui distribution on windows 7, but I have some issues with npm, cake and handlebars configuration. I found a previous bug fix #68 but it doesn't work for me. Has anyone already done that ? Note: Ok, I've already created an issue about this, but I think like posting a new subject here is more appropriate. I have a couple fixes and functionalities that I'd like to submit as pull resquests on swagger-ui but... first I need to compile swagger-ui from the source code. Until now, I've done all the work straight in javascript on the dist code... So, I back on the right track :) step 1: npm run-script build [email protected] build c:\workspace\swagger-ui PATH=$PATH:./node_modules/.bin cake dist but cake does not started.. nothing happens. step 2 : I tried to run cake on command line : cake dist Build distribution in ./dist path.existsSync is now called fs.existsSync. : Reading src/main/coffeescript/SwaggerUi.coffee : Reading src/main/coffeescript/view/HeaderView.coffee : Reading src/main/coffeescript/view/MainView.coffee : Reading src/main/coffeescript/view/ResourceView.coffee : Reading src/main/coffeescript/view/OperationView.coffee : Reading src/main/coffeescript/view/StatusCodeView.coffee : Reading src/main/coffeescript/view/ParameterView.coffee : Reading src/main/coffeescript/view/SignatureView.coffee : Reading src/main/coffeescript/view/ContentTypeView.coffee : Precompiling templates... : Compiling src/main/template/content_type.handlebars : Compiling src/main/template/main.handlebars : Compiling src/main/template/operation.handlebars : Compiling src/main/template/param.handlebars : Compiling src/main/template/param_list.handlebars : Compiling src/main/template/param_readonly.handlebars : Compiling src/main/template/param_readonly_required.handlebars : Compiling src/main/template/param_required.handlebars : Compiling src/main/template/resource.handlebars : Compiling src/main/template/signature.handlebars : Compiling src/main/template/status_code.handlebars c:\workspace\pfmediation\pfs\pfs-tools\swagger-ui\swagger-ui-master\swagger-ui\Cakefile:56 throw err; ^ Error: Command failed: at ChildProcess.exithandler (child_process.js:637:15) at ChildProcess.EventEmitter.emit (events.js:98:17) at maybeClose (child_process.js:735:16) at Process.ChildProcess.handle.onexit (childprocess.js:802:5) handlebars templates are not generated. exec command does not work properly. step3 : I found a previous issue that seems to be related : #68 I tried to install handlebars and cake as global node modules still the same errors... step5 : I tried to generate the handlebars template on command line. ok this work, the template is correctly generated. so I don't understand why the exec command is not working, it seems that launching handlebars from cakefile does not work on windows 7... finally I'm kind of stuck in the middle of the build... Thanks for your help all. Best Regards Bruno Answers: username_1: I was able to build successfully on my Windows machine by manually manipulating my path to include ./node_modules/.bin and the [GOW utilities](https://github.com/bmatzelle/gow). Then you can run `cake dist` from the command line. 
username_2: I needed to patch the cakefile to make it work on Windows: https://github.com/username_2/swagger-ui/commit/9fe0737ca68ead6c5b0aa61e82ea51f0036f3882 (haven't pull-requested it because I guess it won't work on *ix systems now) The other gotcha is that you must git-clone with `core.autocrlf=false`, otherwise you end up with lots of `\r` chars in the output files... username_3: Now that gulp has been added in develop_2.0, the cakefile and Windows issues should be resolved. Please reopen if you see otherwise. Status: Issue closed username_4: How did you fix this? What are the exact steps? username_3: ```
npm install -g gulp
npm install gulp
``` username_5: or `npm run build` username_5: You shouldn't use `npm run` scripts in Windows. Please follow @username_3's instructions and make sure you have Node.js installed and also that you have the latest version. username_4: I did do this, but I don't have a gulp file. I am using 2.7 of Swagger, does it only work with the newest build? username_6: There's no 2.7 of Swagger... username_4: Sorry, that is RestExpress. 2.1.0-alpha.7 is what I have for the UI project. username_6: I'd suggest pulling from master as it includes quite a few fixes for issues in 2.1.0-alpha.7. If you still have an issue with that, please let us know. username_4: Yep, latest worked with gulp, thanks guys. username_6: :+1: username_7: Windows 7/64-bit: swagger-ui was built but is only able to process the file from your server, http://petstore.swagger.io/v2/swagger.json. It does not take a local JSON file - neither from the swagger-ui/dist directory, placed as ../m.json (error: Please specify the protocol for file:///C:/a_work/Carnegie/swagger-ui/dist/index.html/../m.json), nor from a local website (error: Can't read from server. It may not have the appropriate access-control-origin settings). I tried to launch Chrome with the option --allow-file-access-from-files or -allow-file-access-from-files - it is not working anymore... Any idea how to use swagger-ui for a local JSON file on Windows? The file is legit - I can see it in the swagger editor.
getsentry/sentry-react-native
1139701160
Title: Automatic Instrumentation don't track correctly navigation with react-native-navigation Question: username_0: Hello ! I have some struggle with **Automatic Instrumentation** and `react-native-navigation` ! I currently have only one screen tracked with the correct name. Other don't appear in the performance dashboard. <img width="1529" alt="image" src="https://user-images.githubusercontent.com/62896243/154224752-2b63d6e4-1291-4797-b381-28b6e743df64.png"> it's look like other navigation events are capture as Error by sentry ![image](https://user-images.githubusercontent.com/62896243/154224844-94e765cd-9066-499b-9cf6-fc84bb220f6a.png) ![image](https://user-images.githubusercontent.com/62896243/154225089-9fb98797-2e00-4099-bd55-7279c340a17e.png) ### Environment How do you use Sentry? **Sentry SaaS (sentry.io)** Which SDK and version? ```js // package.json "@sentry/react-native": "3.2.14-beta.1", <= to fix route name problem // podfile.lock - RNSentry (3.2.14-beta.1): - React-Core - Sentry (= 7.9.0) - Sentry (7.9.0): - Sentry/Core (= 7.9.0) - Sentry/Core (7.9.0) ``` Answers: username_0: Also, it's look like sometimes transaction still keep **Route Name** name <img width="1530" alt="image" src="https://user-images.githubusercontent.com/62896243/154227771-1cfdfada-0f75-440c-88b6-95293e5f5ab5.png"> <img width="331" alt="image" src="https://user-images.githubusercontent.com/62896243/154228200-33407575-08f1-4d36-ae01-c647b61507c8.png"> <img width="1185" alt="image" src="https://user-images.githubusercontent.com/62896243/154227348-de05a1a3-d727-4d7f-9d50-e126bab18e5a.png"> username_1: Is this fixed in beta? I am still seeing 'Route Change' username_0: Partially fixed! I started to see some screen name but not all the time username_2: cc @username_3 please have a look at it, apparently there's still edge cases for https://github.com/getsentry/sentry-react-native/pull/2053 username_3: I will take a look at reproducing this, looks like a similar issue as https://github.com/getsentry/sentry-react-native/pull/2053 but for `react-native-navigation` Status: Issue closed
crapos/piavpn-portforward
145352860
Title: Port check results in cloudflare http response Question: username_0: This is what I get back when I run the port check in this codebase: ` Why do I have to complete a CAPTCHA? Completing the CAPTCHA proves you are a human and gives you temporary access to the web property. What can I do to prevent this in the future? If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware. If you are at an office or shared network, you can ask the network administrator to run a scan across the network looking for misconfigured or infected devices. CloudFlare Ray ID: xxxxxxxxxxxx • Your IP: 46.166.xxx.xxx • Performance & security by CloudFlare`
jlippold/tweakCompatible
416478901
Title: `Core Utilities` working on iOS 11.3 Question: username_0: ``` { "packageId": "coreutils", "action": "working", "userInfo": { "arch32": false, "packageId": "coreutils", "deviceId": "iPhone8,1", "url": "http://cydia.saurik.com/package/coreutils/", "iOSVersion": "11.3", "packageVersionIndexed": true, "packageName": "Core Utilities", "category": "Utilities", "repository": "Bingner/Elucubratus", "name": "Core Utilities", "installed": "8.30-2", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "coreutils", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.0", "shortDescription": "core set of Unix shell utilities from GNU", "latest": "8.30-2", "author": "(null)", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed
ColinFay/ur-first-5k
788051780
Title: felixmil - W03D2-Recovery Run Question: username_0: Hey @felixmil, here is W03D2-Recovery Run: • Run, easy pace, 15 minutes. • Cool down, 5 to 10 minutes. • Stretch. Feel free to join the discussion at [ur-first-5k/discussions/66](https://github.com/username_0/ur-first-5k/discussions/66) Answers: username_1: ![image](https://user-images.githubusercontent.com/34234913/105169969-914ebb00-5b1c-11eb-8c7b-a8328f65400d.jpeg) Status: Issue closed
RGLab/ncdfFlow
17342539
Title: Error occurs when reading ncdfFlowSet with phenoData Question: username_0: R Under development (unstable) (2013-05-01 r62700) Platform: x86_64-unknown-linux-gnu (64-bit) locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 [7] LC_PAPER=C LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] tools parallel grid stats graphics grDevices utils [8] datasets methods base other attached packages: [1] MIMOSA_0.9.8 modeest_2.1 MCMCpack_1.3-3 [4] coda_0.16-1 pracma_1.4.5 openCyto_0.99.1 [7] flowIncubator_0.99.6 QUALIFIER_1.5.9 RColorBrewer_1.0-5 [10] reshape_0.8.4 flowWorkspace_2.7.25 ncdfFlow_2.7.07 [13] Rgraphviz_2.5.0 gridExtra_0.9.1 hexbin_1.26.2 [16] IDPmisc_1.1.17 flowViz_1.25.5 flowCore_1.25.17 [19] rrcov_1.3-3 pcaPP_1.9-49 mvtnorm_0.9-9994 [22] robustbase_0.9-7 Biobase_2.21.0 XML_3.96-1.1 [25] RBGL_1.37.2 graph_1.39.0 BiocGenerics_0.7.2 [28] Cairo_1.5-2 Rcpp_0.10.3 data.table_1.8.8 [31] doMC_1.3.0 iterators_1.0.6 foreach_1.4.0 [34] ROCR_1.0-4 gplots_2.11.0.1 MASS_7.3-26 [37] KernSmooth_2.23-10 caTools_1.14 gdata_2.12.0.2 [40] gtools_2.7.1 glmnet_1.9-3 Matrix_1.0-12 [43] lattice_0.20-15 xtable_1.7-1 lubridate_1.3.0 [46] stringr_0.6.2 ggplot2_0.9.3.1 plyr_1.8 [49] reshape2_1.2.2 ProjectTemplate_0.4-2 testthat_0.7.1 [52] devtools_1.3 loaded via a namespace (and not attached): [1] bitops_1.0-5 clue_0.3-46 cluster_1.14.4 [4] codetools_0.2-8 colorspace_1.2-2 compositions_1.30-1 [7] dichromat_2.0-0 digest_0.6.3 evaluate_0.4.3 [10] fda_2.3.4 feature_1.2.8 flowClust_3.1.1 [13] flowStats_2.19.9 Formula_1.1-1 gtable_0.1.2 [16] httr_0.2 hwriter_1.3 ks_1.8.12 [19] labeling_0.1 latticeExtra_0.6-24 memoise_0.1 [22] munsell_0.4 mvoutlier_1.9.9 proto_0.3-10 [25] RCurl_1.95-4.1 rgl_0.93.935 RSVGTipsDevice_1.0-4 [28] scales_0.2.3 stats4_3.1.0 tensorA_0.36 [31] whisker_0.3-2 ```<issue_closed> Status: Issue closed
latex3/babel
1028421501
Title: Babel recent update and babel-spanish Question: username_0: The most recent update of babel 60021 broke babel-spanish options such as es-noquoting, es-noshorthands. Answers: username_1: I refactored the code and sadly there isn't a quick fix. So, I'll revert it altogether and restore the old code. I'll upload a new version asap (this very afternoon).
Kiskae/compile-testing-extension
1058519479
Title: Add to compile-testing Question: username_0: Hello, since the original maintainers don't seem to be updating to junit 5 anytime soon. I have published a fork of compile-testing here: https://github.com/jbock-java/compile-testing I have already made a couple of releases on maven central. For the next release, I am planning to move to junit 5 and would like to add `CompilationExtension`. Even though I don't fully understand yet what's going on with all that async stuff. It looks solid code and seems to work fine. Would you be willing to contribute to the repo with a PR? Answers: username_0: This project depends on google's compile-testing and therefore still pulls JUnit 4. I would like to add this class to the fork mentioned above, which would then not depend on JUnit 4 anymore. Would you mind if I add `CompilationExtension` to it? Because `CompilationRule` will be gone in the next release, and it really needs an alternative. username_0: There was no reply, so I went ahead and added it myself. I hope you that's OK. username_1: Apologies for the lack of a reaction, I never ended up being notified about the issue. I'm honestly fine with you including the code, just make sure that whatever you do complies with the license: compile-testing-extension/blob/master/LICENSE (google's compile-testing code is covered by the same license) Status: Issue closed
mathblogging/mathblogging.org
112015255
Title: SASS it up Question: username_0: Set up sass files correctly so jekyll builds out a single css file. Status: Issue closed Answers: username_0: Hm.... Should we revert this? It adds a lot of unnecessary changes and makes my PR invalid 😖 username_1: I don’t know what PR is. The change seemed trivial enough to me, that’s why I did it. Sam username_0: Re-opening. I've reverted the commit that closed this (but it's saved in the branch username_1-sass). I've also modified my branch to copy your work (and extend it a little bit). Could you review #3? Status: Issue closed
peterarsentev/job4j_features_bugs
1101469628
Title: Typos in the assignment Question: username_0: ![image](https://user-images.githubusercontent.com/13130875/149308414-cf46f961-090c-44df-bd11-e1ae68588e9e.png) [https://job4j.ru/profile/exercise/81/task-view/439](https://job4j.ru/profile/exercise/81/task-view/439)
Atlantiss/NetherwingBugtracker
362965804
Title: Rogue's Stealth Question: username_0: **Description**: When you are in a group with a rogue, you can't see him while stealthed, but you should be able to. **Current behaviour**: When you are in a group with a rogue, you can't see him while stealthed. **Expected behaviour**: You should be able to see your group mates, even if they are stealthed. **Server Revision**: Answers: username_1: Fixed in 2110. Status: Issue closed
dotnet/roslyn
743946222
Title: None Question: username_0: @username_1 Looks like it might be caused by some codeflow issue, the fix is stuck in this PR https://github.com/dotnet/roslyn/pull/49817 Answers: username_1: <!-- runfo report start --> Runfo Tracking Issue: [TestCompleteParenthesisForExtensionMethodImportCompletionProvider is flaky](https://runfo.azurewebsites.net/tracking/issue/101) |Build|Definition|Kind|Run Name| |---|---|---|---| |[916343](https://dev.azure.com/dnceng/public/_build/results?buildId=916343)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49842](https://github.com/dotnet/roslyn/pull/49842)|Test Windows Desktop Spanish Debug 32| |[916343](https://dev.azure.com/dnceng/public/_build/results?buildId=916343)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49842](https://github.com/dotnet/roslyn/pull/49842)|Test Windows Desktop Spanish Debug 32| |[916343](https://dev.azure.com/dnceng/public/_build/results?buildId=916343)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49842](https://github.com/dotnet/roslyn/pull/49842)|Test Windows Desktop Spanish Debug 32| |[911314](https://dev.azure.com/dnceng/public/_build/results?buildId=911314)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49804](https://github.com/dotnet/roslyn/pull/49804)|Test Windows Desktop Spanish Debug 32| |[911314](https://dev.azure.com/dnceng/public/_build/results?buildId=911314)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49804](https://github.com/dotnet/roslyn/pull/49804)|Test Windows Desktop Spanish Debug 32| |[911314](https://dev.azure.com/dnceng/public/_build/results?buildId=911314)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49804](https://github.com/dotnet/roslyn/pull/49804)|Test Windows Desktop Spanish Debug 32| |[909466](https://dev.azure.com/dnceng/public/_build/results?buildId=909466)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49804](https://github.com/dotnet/roslyn/pull/49804)|Test Windows Desktop Debug 32| |[909466](https://dev.azure.com/dnceng/public/_build/results?buildId=909466)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49804](https://github.com/dotnet/roslyn/pull/49804)|Test Windows Desktop Debug 32| |[908884](https://dev.azure.com/dnceng/public/_build/results?buildId=908884)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Release 64| |[908884](https://dev.azure.com/dnceng/public/_build/results?buildId=908884)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Release 64| |[908698](https://dev.azure.com/dnceng/public/_build/results?buildId=908698)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49536](https://github.com/dotnet/roslyn/pull/49536)|Test Windows Desktop Debug 32| |[908698](https://dev.azure.com/dnceng/public/_build/results?buildId=908698)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49536](https://github.com/dotnet/roslyn/pull/49536)|Test Windows Desktop Debug 32| |[908391](https://dev.azure.com/dnceng/public/_build/results?buildId=908391)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Debug 64| 
|[908391](https://dev.azure.com/dnceng/public/_build/results?buildId=908391)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Debug 64| |[908246](https://dev.azure.com/dnceng/public/_build/results?buildId=908246)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49781](https://github.com/dotnet/roslyn/pull/49781)|Test Windows Desktop Debug 64| |[908246](https://dev.azure.com/dnceng/public/_build/results?buildId=908246)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49781](https://github.com/dotnet/roslyn/pull/49781)|Test Windows Desktop Debug 64| |[907915](https://dev.azure.com/dnceng/public/_build/results?buildId=907915)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49773](https://github.com/dotnet/roslyn/pull/49773)|Test Windows Desktop Spanish Debug 32| |[907915](https://dev.azure.com/dnceng/public/_build/results?buildId=907915)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49773](https://github.com/dotnet/roslyn/pull/49773)|Test Windows Desktop Spanish Debug 32| |[907318](https://dev.azure.com/dnceng/public/_build/results?buildId=907318)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49765](https://github.com/dotnet/roslyn/pull/49765)|Test Windows Desktop Debug 32| |[907318](https://dev.azure.com/dnceng/public/_build/results?buildId=907318)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49765](https://github.com/dotnet/roslyn/pull/49765)|Test Windows Desktop Debug 32| |[906853](https://dev.azure.com/dnceng/public/_build/results?buildId=906853)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49763](https://github.com/dotnet/roslyn/pull/49763)|Test Windows Desktop Debug 64| |[906853](https://dev.azure.com/dnceng/public/_build/results?buildId=906853)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49763](https://github.com/dotnet/roslyn/pull/49763)|Test Windows Desktop Debug 64| |[906388](https://dev.azure.com/dnceng/public/_build/results?buildId=906388)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Release 64| |[906388](https://dev.azure.com/dnceng/public/_build/results?buildId=906388)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Release 64| |[906388](https://dev.azure.com/dnceng/public/_build/results?buildId=906388)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Release 32| |[906388](https://dev.azure.com/dnceng/public/_build/results?buildId=906388)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Release 32| |[906248](https://dev.azure.com/dnceng/public/_build/results?buildId=906248)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49711](https://github.com/dotnet/roslyn/pull/49711)|Test Windows Desktop Debug 32| |[906248](https://dev.azure.com/dnceng/public/_build/results?buildId=906248)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49711](https://github.com/dotnet/roslyn/pull/49711)|Test Windows Desktop Debug 32| |[906248](https://dev.azure.com/dnceng/public/_build/results?buildId=906248)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 
49711](https://github.com/dotnet/roslyn/pull/49711)|Test Windows Desktop Release 32| |[906248](https://dev.azure.com/dnceng/public/_build/results?buildId=906248)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49711](https://github.com/dotnet/roslyn/pull/49711)|Test Windows Desktop Release 32| |[905612](https://dev.azure.com/dnceng/public/_build/results?buildId=905612)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 48629](https://github.com/dotnet/roslyn/pull/48629)|Test Windows Desktop Debug 64| |[905612](https://dev.azure.com/dnceng/public/_build/results?buildId=905612)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 48629](https://github.com/dotnet/roslyn/pull/48629)|Test Windows Desktop Debug 64| |[905312](https://dev.azure.com/dnceng/public/_build/results?buildId=905312)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49707](https://github.com/dotnet/roslyn/pull/49707)|Test Windows Desktop Spanish Debug 32| |[905312](https://dev.azure.com/dnceng/public/_build/results?buildId=905312)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49707](https://github.com/dotnet/roslyn/pull/49707)|Test Windows Desktop Spanish Debug 32| |[905149](https://dev.azure.com/dnceng/public/_build/results?buildId=905149)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49711](https://github.com/dotnet/roslyn/pull/49711)|Test Windows Desktop Debug 64| |[905149](https://dev.azure.com/dnceng/public/_build/results?buildId=905149)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49711](https://github.com/dotnet/roslyn/pull/49711)|Test Windows Desktop Debug 64| |[905018](https://dev.azure.com/dnceng/public/_build/results?buildId=905018)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49705](https://github.com/dotnet/roslyn/pull/49705)|Test Windows Desktop Spanish Debug 32| |[905018](https://dev.azure.com/dnceng/public/_build/results?buildId=905018)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49705](https://github.com/dotnet/roslyn/pull/49705)|Test Windows Desktop Spanish Debug 32| |[905018](https://dev.azure.com/dnceng/public/_build/results?buildId=905018)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49705](https://github.com/dotnet/roslyn/pull/49705)|Test Windows Desktop Spanish Debug 32| |[904788](https://dev.azure.com/dnceng/public/_build/results?buildId=904788)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49703](https://github.com/dotnet/roslyn/pull/49703)|Test Windows Desktop Release 64| |[904788](https://dev.azure.com/dnceng/public/_build/results?buildId=904788)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49703](https://github.com/dotnet/roslyn/pull/49703)|Test Windows Desktop Release 64| |[904786](https://dev.azure.com/dnceng/public/_build/results?buildId=904786)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49702](https://github.com/dotnet/roslyn/pull/49702)|Test Windows Desktop Debug 64| |[904786](https://dev.azure.com/dnceng/public/_build/results?buildId=904786)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49702](https://github.com/dotnet/roslyn/pull/49702)|Test Windows Desktop Debug 64| 
|[903407](https://dev.azure.com/dnceng/public/_build/results?buildId=903407)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49671](https://github.com/dotnet/roslyn/pull/49671)|Test Windows Desktop Spanish Debug 32| |[903407](https://dev.azure.com/dnceng/public/_build/results?buildId=903407)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49671](https://github.com/dotnet/roslyn/pull/49671)|Test Windows Desktop Spanish Debug 32| |[902497](https://dev.azure.com/dnceng/public/_build/results?buildId=902497)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49661](https://github.com/dotnet/roslyn/pull/49661)|Test Windows Desktop Release 32| |[902497](https://dev.azure.com/dnceng/public/_build/results?buildId=902497)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49661](https://github.com/dotnet/roslyn/pull/49661)|Test Windows Desktop Release 32| |[898982](https://dev.azure.com/dnceng/public/_build/results?buildId=898982)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49481](https://github.com/dotnet/roslyn/pull/49481)|Test Windows Desktop Debug 64| |[898982](https://dev.azure.com/dnceng/public/_build/results?buildId=898982)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49481](https://github.com/dotnet/roslyn/pull/49481)|Test Windows Desktop Debug 64| |[898602](https://dev.azure.com/dnceng/public/_build/results?buildId=898602)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49607](https://github.com/dotnet/roslyn/pull/49607)|Test Windows Desktop Debug 64| |[898602](https://dev.azure.com/dnceng/public/_build/results?buildId=898602)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49607](https://github.com/dotnet/roslyn/pull/49607)|Test Windows Desktop Debug 64| |[897580](https://dev.azure.com/dnceng/public/_build/results?buildId=897580)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49588](https://github.com/dotnet/roslyn/pull/49588)|Test Windows Desktop Release 32| |[897580](https://dev.azure.com/dnceng/public/_build/results?buildId=897580)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49588](https://github.com/dotnet/roslyn/pull/49588)|Test Windows Desktop Release 32| |[897497](https://dev.azure.com/dnceng/public/_build/results?buildId=897497)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49576](https://github.com/dotnet/roslyn/pull/49576)|Test Windows Desktop Release 32| |[897497](https://dev.azure.com/dnceng/public/_build/results?buildId=897497)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49576](https://github.com/dotnet/roslyn/pull/49576)|Test Windows Desktop Release 32| |[897284](https://dev.azure.com/dnceng/public/_build/results?buildId=897284)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Debug 32| [Truncated] |[890713](https://dev.azure.com/dnceng/public/_build/results?buildId=890713)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 48274](https://github.com/dotnet/roslyn/pull/48274)|Test Windows Desktop Release 64| |[890713](https://dev.azure.com/dnceng/public/_build/results?buildId=890713)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 48274](https://github.com/dotnet/roslyn/pull/48274)|Test 
Windows Desktop Release 64| |[890464](https://dev.azure.com/dnceng/public/_build/results?buildId=890464)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Spanish Debug 32| |[890464](https://dev.azure.com/dnceng/public/_build/results?buildId=890464)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|Rolling|Test Windows Desktop Spanish Debug 32| |[890232](https://dev.azure.com/dnceng/public/_build/results?buildId=890232)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49429](https://github.com/dotnet/roslyn/pull/49429)|Test Windows Desktop Debug 32| |[890232](https://dev.azure.com/dnceng/public/_build/results?buildId=890232)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49429](https://github.com/dotnet/roslyn/pull/49429)|Test Windows Desktop Debug 32| |[889716](https://dev.azure.com/dnceng/public/_build/results?buildId=889716)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49447](https://github.com/dotnet/roslyn/pull/49447)|Test Windows Desktop Release 64| |[889716](https://dev.azure.com/dnceng/public/_build/results?buildId=889716)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 49447](https://github.com/dotnet/roslyn/pull/49447)|Test Windows Desktop Release 64| |[889589](https://dev.azure.com/dnceng/public/_build/results?buildId=889589)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 48976](https://github.com/dotnet/roslyn/pull/48976)|Test Windows Desktop Debug 64| |[889589](https://dev.azure.com/dnceng/public/_build/results?buildId=889589)|[roslyn-CI](https://dnceng.visualstudio.com/public/_build?definitionId=15)|[PR 48976](https://github.com/dotnet/roslyn/pull/48976)|Test Windows Desktop Debug 64| Displaying 100 of 110 results Build Result Summary |Day Hit Count|Week Hit Count|Month Hit Count| |---|---|---| |1|7|48| <!-- runfo report end --> username_1: @username_0 reactivated this bug because it has occurred several times since the fix was merged in. username_0: @username_1 Looks like it might be caused by some codeflow issue, the fix is stuck in this PR https://github.com/dotnet/roslyn/pull/49817 username_1: Good catch. I didn't realize Chris' PRs were targeting the feature branch here vs. master. Re-closing. Will re-open if this shows up after the merge. Status: Issue closed
Gabryjiel/UploadSystemPRO
781502441
Title: Unable to create an account Question: username_0: I've hooked up a fresh postgresql database to my local server and ran `php artisan migrate`. Everything seemed to be fine (the database had all the required tables). When trying to log in with random credentials I would get `401 (Unauthorized)` (duuuh...). Problems appear when trying to create an account. The server replies with `500` ([production version](https://uploadsystempro.herokuapp.com/) possibly faces the same problem). A small preview of the problem: ![image](https://user-images.githubusercontent.com/8137764/103927075-7a729680-5122-11eb-9fa2-503398038f3c.png) Status: Issue closed Answers: username_0: It fixed the issue on my machine. I think that the live version possibly has some issues with the database. Right now it always responds with `index.html`:
```
C:\Users\dasph>curl https://uploadsystempro.herokuapp.com/api/register -H "Content-Type: application/json" -d '{\"name":\"ssd\", \"email\":\"<EMAIL>\",\"password\":\"<PASSWORD>\",\"password_confirmation\":\"<PASSWORD>\"}"
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,user-scalable=no,initial-scale=1">
  <meta name="theme-color" content="#f5f5f5">
  <title>Upload System PRO</title>
  <link rel="stylesheet" href="styles/loader.a1ed872e.css">
  <link rel="stylesheet" href="styles/tailwind.9fbcb616.css">
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Quicksand:300,400,500&display=swap">
  <link rel="icon" type="image/png" href="favicon.f01a234d.png">
  <script defer="defer" src="bundle.79ea2ec3.js"></script>
</head>
<body>
  <div class="app">
    <div class="loader"><div></div><div></div><div></div><div></div></div>
  </div>
</body>
</html>
```
username_1: I don't think it's a problem with the database, because I was able to run migrations on production. I think it's some kind of a problem with the deployment. Maybe wrong entries in the config variables (.env). For now (2 weeks) I wouldn't focus so much on production. Status: Issue closed
verdaccio/verdaccio
1167633714
Title: Hostname should be a per connection thing Question: username_0: <!-- PLEASE READ THIS: - If you are not sure is a bug, OPEN a DISCUSSION, if is a legitimate bug, is easy to create a bug from a discussion. - Empty reports won't be considered and eventually be closed by a bot. - Include debugging notes will help to fix it faster, HOW TO: https://github.com/verdaccio/verdaccio/wiki/Debugging-Verdaccio - If you remove this template, ticket will be closed immediately. - No English perfect is required, use public translators if is need it, we will do our best to help you. - Extra bonus: The most complete this report is delivered, the faster you will get a response. - Extra bonus: include screenshots, logs (remove sensitive data). - If you are willing to fix it, there is a checkbox at the bottom. --> **Your Environment** <!-- bug below the version 5.x will be closed, see SECURITY.md for more details --> * **verdaccio version**: 5.7.0 * **node version** [12.x.x, 14.x.x]: 14.18.2 * **package manager**: [npm@7, pnpm@6, yarn@2] docker * **os**: [mac, windows@10, linux] docker * **platform**: [npm, docker, helm, other] docker **Describe the bug** <!-- A clear and concise description of what the bug is. --> Hostname is not a per connection thing. Seems the hostname from first req is persist forever. **To Reproduce** <!-- IMPORTANT: - How to reproduce the issue - Steps to reproduce the issue Be aware, the lack of reproducible steps the issue might cause your ticket to be closed. --> ```sh $ docker-compose down $ docker-compose up -d $ curl http://aaa:4873/foobar | grep tarball "tarball": "http://aaa:4873/xxx.tgz" $ curl http://bbb:4873/foobar | grep tarball "tarball": "http://aaa:4873/xxx.tgz" $ docker-compose down $ docker-compose up -d $ curl http://bbb:4873/foobar | grep tarball "tarball": "http://bbb:4873/xxx.tgz" $ curl http://aaa:4873/foobar | grep tarball "tarball": "http://bbb:4873/xxx.tgz" ``` **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> ```sh $ docker-compose down $ docker-compose up -d $ curl http://aaa:4873/foobar | grep tarball "tarball": "http://aaa:4873/xxx.tgz" $ curl http://bbb:4873/foobar | grep tarball "tarball": "http://bbb:4873/xxx.tgz" [Truncated] **Environment information** <!-- Please paste the results of running `verdaccio --info` --> **Debugging output** - `$ NODE_DEBUG=request verdaccio` display request calls (verdaccio <--> uplinks) - `$ DEBUG=verdaccio* verdaccio` enable extreme verdaccio debug mode (verdaccio api) - `$ npm -ddd` prints: - `$ npm config get registry` prints: **Contribute to Verdaccio** - [ ] I'm willing to fix this bug 🥇 <!-- IMPORTANT: please do not attach external files, all content should be visible from any device. --> Answers: username_1: You might need to reveal what's behind your logic, if you don't provide the `host` header the url returned for the tarball would be whatever you have in the storage and that's by design. username_0: @username_1 1. two different hostnames, points to one same instance 2. two hostnames are different, because they are from different network username_0: Is there a way to alter tarball host, according to client's per connection HTTP_HOST? username_1: You can read logic here https://github.com/verdaccio/verdaccio/blob/5.x/src/lib/utils.ts#L145 username_0: @username_1 It's weird. I just checked the source. Get different result outside docker using `v6.0.0-6-next.30`, that result is expected. 
username_0: First request using 127.0.0.1 Without docker, v6.0.0-6-next.30, second req ``` $ curl x.x.x.x:14873/foobar |grep tarball "tarball": "http://x.x.x.x:14873/foobar.tgz" ``` With docker, 5.7.0, second req ``` $ curl x.x.x.x:24873/foobar |grep tarball "tarball": "http://127.0.0.1:24873/foobar.tgz" ``` username_0: Seems 5.X has this bug, v6.0.0-6-next.30 fixed it. Wish for v6 docker image. Thanks. :) Status: Issue closed
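To make the maintainer's point concrete, here is a minimal sketch — not verdaccio's actual code, and the package path is made up — of deriving the tarball host from each request's `Host` header instead of from a value cached at startup, which matches the per-connection behaviour the reporter sees fixed in v6.0.0-6-next.30:
```ts
import * as http from 'http';

// Illustration only: build the tarball URL from this connection's Host header,
// so two hostnames pointing at the same instance each get their own base URL.
http.createServer((req, res) => {
  const host = req.headers.host ?? 'localhost:4873';
  const tarball = `http://${host}/foobar/-/foobar-1.0.0.tgz`; // hypothetical package path
  res.setHeader('content-type', 'application/json');
  res.end(JSON.stringify({ dist: { tarball } }));
}).listen(4873);
```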
gbif/portal-feedback
649227919
Title: cannot access blog Question: username_0: **cannot access blog** ----- Github user: @username_2 User: [See in registry](https://www.gbif.org/api/feedback/user/b05b2addc2b547bad98ab1431118c138:9bde2147b065dff58278f8faba8e192108dd199c5c630bf54dd651ddd0102fde739b70f8c5d7b956121fa827bff893f622c4b659640647da13304351c0284f19) System: Chrome 83.0.4103 / Windows 10.0.0 Referer: https://www.gbif.org/ Window size: width 1745 - height 970 [API log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2020-07-01T19:00:26.274Z',mode:absolute,to:'2020-07-01T19:06:26.274Z'))&_a=(columns:!(_source),index:'prod-varnish-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc))) [Site log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2020-07-01T19:00:26.274Z',mode:absolute,to:'2020-07-01T19:06:26.274Z'))&_a=(columns:!(_source),index:'prod-portal-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc))) System health at time of feedback: OPERATIONAL Answers: username_1: @username_2, do you mean https://data-blog.gbif.org/? Where were you trying to access it from, and using what device? username_2: GBIF PC, Chrome. Works now, but not last evening. username_1: It worked when I looked yesterday, but it's hosted by an external service which I don't monitor. If it happens again, try somewhere else (e.g. a phone) to see if the problem is your computer on the website. Status: Issue closed
att/rcloud
188098389
Title: Intermittent JS failure of R Cloud Platform Shiny support Question: username_0: It is common to experience an intermittent JavaScript failure while using Shiny. It seems to be a race condition issue where a variable is undefined in some cases and defined in other cases. The error received is below. If this error is received, a refresh seems to solve the problem, but it is disconcerting because users may not know to refresh, and the initial draw of our page already takes up to 1.5 minutes in R Cloud as is. Sometimes the JavaScript function at "extension.js:48" experiences an error: "Uncaught ReferenceError: _ is not defined(_)", which is caused by "mini.js:62" throwing "Uncaught TypeError: Cannot read property 'on_data' of undefined(_)"<issue_closed> Status: Issue closed
KC3Kai/KC3Kai
233430295
Title: Suggestion: Display CV effective airstrike shelling firepower Question: username_0: Can we make the stat tooltips display the following? * for CV and its variants: effective shelling firepower after the carrier bombing calculation (FP+55+bombers) * for DD, CL and CA: effective night battle firepower. Answers: username_1: Test the current progress on the `update-cumulative` branch; it can be found in the ship tool-tip. Implemented for now: * day shelling power (or ASW power if it can do OASW, or torpedo power for SS, or bombing power for CV) * night fire/torpedo power * partial pre-cap; post-cap can be applied. CI modifiers included. Can be configured in settings. TODO * aerial combat power * CVCI things * many factors still uncertain/unknown username_0: Why did carrier shelling have lower firepower than the carrier shelling formula would suggest? Hiyou = 55 + (29*1.5) = 98.5 (expected FP when empty) > 68 displayed FP
ZEISS/precise-ui
552382731
Title: When using defaultSelectedIndex on Accordion it switches to controlled mode. Question: username_0: ## Bug Report ### Prerequisites - [x] Can you reproduce the problem in a [MWE](https://en.wikipedia.org/wiki/Minimal_working_example)? - [x] Are you running the latest version? - [x] Did you check the FAQs to see if that helps you? - [x] Are you reporting to the correct repository? - [x] Did you perform a search in the issues? For more information, see the `CONTRIBUTING` guide. ### Description When using an Accordion in uncontrolled mode and trying to select a default index, that makes all accordion tabs impossible to expand/collapse. ### Steps to Reproduce 1. Go to KitchenSink 2. Create an Accordion with a defaultSelectedIndex. 3. Try to close it or other accordion tabs **Expected behavior:** They should still be closed/opened normally. **Actual behavior:** Upon clicking, nothing happens. **Environment details:** MacOS, Chrome 79<issue_closed> Status: Issue closed
BabylonJS/Babylon.js
258268655
Title: Building WebGL libraries that work with each other in the same webgl context. Question: username_0: ##### Description of the problem It's difficult and complex these days to mix WebGL libraries together, to draw things using different libraries in the same webgl context. (examples of difficulties: https://github.com/mrdoob/three.js/issues/8147, https://github.com/pixijs/pixi.js/issues/3230, https://github.com/pixijs/pixi.js/issues/3345, https://github.com/pixijs/pixi.js/issues/1366, https://github.com/pixijs/pixi.js/issues/298, https://github.com/jonobr1/two.js/issues/233) For example, suppose we would like to render Three, Babylon, and Pixi objects into the same WebGL context, in the same 3D space. It is currently very difficult to do this because each WebGL library manages state of the context in their own ways, and these private internals often change and break solutions that people come up with because there's no standard way to do it. Pixi.js v4 goes through [efforts to make Pixi compatible with Three.js](https://github.com/pixijs/pixi.js/issues/3230#issuecomment-257381561), so that it can render in a Three scene, but this is obviously fragile. ##### Solution Enter [Regl](http://regl.party) to the party. Maybe if the foundations for each library (Three, Babylon, Pixi, Two, etc) were built on Regl, we'd have a common way of rendering to a single context. Regl makes an abstraction just on top of raw WebGL for managing WebGL state. It doesn't render for you all the things that Three.js can, it only provides the minimal foundation for working with raw WebGL in a stateful way that is easy to manage. It seems that libraries like Three, Babylon, PlayCanvas, Pixi, Two, etc, could benefit from using a standardized way for managing WebGL state, which would make it easy to combine renderings from any of these libraries into the same WebGL context. --- What are your thoughts on refactoring the foundation of Babylon.js to use Regl for managing WebGL state? Does Regl offer enough flexibility for Babylon.js to do what it needs to do on top of Regl? Answers: username_1: Hello:) interesting topic. We offer the engine.wipeCache() to reset all caches to unknown state. This way we successfully mixed bjs and pixi in the past for example. While the question is interesting, I see no big demand so far for such a feature and the work involved will probably not worth the gain. Status: Issue closed
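As a rough sketch of the workaround username_1 describes (mixing Babylon.js with another library on one canvas and resetting Babylon's cached GL state before it renders again) — note that the thread calls the method `engine.wipeCache()`, while recent Babylon.js builds expose it as `wipeCaches(bruteForce)`, so treat the exact name and signature as an assumption to verify against the docs for your version:
```ts
import { Engine, Scene } from '@babylonjs/core';

const canvas = document.getElementById('renderCanvas') as HTMLCanvasElement;
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

engine.runRenderLoop(() => {
  // ...another library (e.g. Pixi) draws into the same WebGL context here...

  // Reset Babylon's cached GL state so it does not rely on values the other
  // library may have changed, then render the Babylon scene.
  engine.wipeCaches(true);
  scene.render();
});
```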
godotengine/godot
638011276
Title: Double-size Godot when the window goes from a high-DPI screen to a standard screen Question: username_0: **Godot version:** Godot 3.2.2 RC 1 **OS/device including version:** macOS Catalina **Issue description:** My laptop has a high-DPI screen, but my external monitor does not. If I open Godot on the high-DPI screen and move the window to the other one, the window is drawn at double size. (It does the opposite if it opens on the other monitor.) **Steps to reproduce:** Open Godot and move the window to another screen with a different DPI. Status: Issue closed Answers: username_1: Duplicate of #30880 (and possibly other issues).
pagarme/pagarme-magento
365882192
Title: Cardholder name shown incorrectly on the order screen Question: username_0:
## Context
Even when a different _cardholder name_ is filled in at checkout, the _customer's name_ is shown in the payment information block on the order screen.
## Environment
* Magento Community 1.9.3.4
* Pagar.me module 3.14.5
* Linux server
* PHP 7.0
* Apache 2.4
* MySQL 5.7
* Redis
## Steps to reproduce
1. Place an order paid by credit card, filling in a cardholder name different from the customer's name
2. Open the order in the Magento admin panel
3. Check the cardholder name in the payment information block
## Expected result
The payment information block should show the cardholder name entered at checkout.
## Actual result
The payment information block shows the customer's name.
Answers: username_1: Hi @username_0, does this problem refer to the v2 branch or to master? username_0: V2 username_1: Thanks for the information, Rafael! We will fix the problem as soon as possible =) username_1: Fixed in #370 Status: Issue closed
Bgallag5/Find-nFresh
935861260
Title: Local Storage: Question: username_0: Save zip codes and/or ingredients to local storage for the next browsing session. Status: Issue closed Answers: username_1: Local storage currently saves zip codes, the ingredient last searched for, and recipes. It just needs to be merged into develop.
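A minimal sketch of what that persistence can look like with the standard `localStorage` API — the key names here are illustrative assumptions, not the app's actual keys:
```ts
// Save the last search so it survives the next browsing session.
function saveSearch(zip: string, ingredients: string[]): void {
  localStorage.setItem('lastZip', zip);
  localStorage.setItem('lastIngredients', JSON.stringify(ingredients));
}

// Restore it on page load, falling back to empty defaults.
function loadSearch(): { zip: string; ingredients: string[] } {
  return {
    zip: localStorage.getItem('lastZip') ?? '',
    ingredients: JSON.parse(localStorage.getItem('lastIngredients') ?? '[]') as string[],
  };
}
```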
GeorgeIpsum/dl-vimeo-account
459404239
Title: Make this an easier-to-use command line utility for pete's sake! Question: username_0: "Even though I'd only ever need to use this once, having to clone this repo, run npm install, and then run npm start is too much for me! I hate that! Export it to my path please!" "Also, you should allow things like file paths or Vimeo API keys to be passed in via command-line arguments or in a file so I don't have to copy-paste those!" Answers: username_0: Now this is something that I'll probably do in the future.
relferreira/kubedev
1061698768
Title: GoReleaser Question: username_0: Hi, I just wanted to check with you folks whether it's okay if I contribute the GoReleaser integration. Answers: username_1: Hello @username_0. I didn't know about GoReleaser; it seems to be a nice addition to the project. Nowadays we only have automatic releases of Docker images using GitHub Actions. Feel free to contribute and let me know if I can help with something.
leancloud/js-realtime-sdk
153946063
Title: [Suggestion] Could 3.0 avoid the dependency on avoscloud-sdk? Question: username_0: Putting the APP KEY in front-end code isn't very safe after all... The rich-media dependency on avoscloud-sdk seems to be just AV.File and AV.GeoPoint, but users like me who upload files through an existing backend API don't need to call methods like save. I hope that for users like us there could be a simple wrapper around just _lcfile and _lcloc, enough to send and receive rich-media messages, while users who need to upload files from the front end could separately pull in avoscloud-sdk for uploading. That way the APP KEY would not need to be used on the front end.
Answers: username_1: The Realtime SDK itself does not depend on `avoscloud-sdk`; rich-media messages are supported through the extra `leancloud-realtime-typed-messages` package, and it is that package which depends on `avoscloud-sdk`. https://leancloud.cn/docs/realtime_guide-js.html#leancloud_realtime_typed_messages If you don't want the rich-media message classes we provide, you can create your own: https://leancloud.cn/docs/realtime_guide-js.html#创建新的消息类型
username_2: Nice design!
username_1: @username_0 In fact, for the need you describe you can use the rich-media message classes we provide based on `avoscloud-sdk`. If you only receive messages, it is fine not to initialize `avoscloud-sdk`.
username_0: @username_1 Maybe I didn't describe it clearly enough; my question isn't about custom message types. I already use leancloud-realtime-typed-messages, but I'd like this package to be usable without depending on avoscloud-sdk, because I don't think it is safe to put the APP KEY in front-end code. I tried not initializing avoscloud-sdk and got errors in the console... In practice I could also bypass this package: for receiving I could wrap message.content myself, and for sending, since we already have a LeanEngine-based backend, we could call the RESTful API to send messages (I haven't looked closely at whether I can construct a Message instance directly and send it...). But if the SDK directly offered a usage that doesn't depend on avoscloud-sdk, that would surely be more elegant, haha. I'd guess there are quite a few users like us who have a backend and also need realtime messaging...
username_1: If you mean `Get current user failed. It seems this runtime use an async storage system, please new AV.File in the callback of AV.User.currentAsync().`, that is just a warning; ignoring it causes no problem. You can try these demos: [receive](http://jsplay.avosapps.com/xep/edit?html,js,console) [send](http://jsplay.avosapps.com/vug/embed?html,js,console,output) As for leancloud-realtime-typed-messages: if you want to use the AV.File class we provide, then avoscloud-sdk cannot be avoided. If you don't, just define a custom YourFileMessage message type. In fact, the FileMessage we provide is built through the same extension mechanism: https://github.com/leancloud/js-realtime-sdk/blob/next/typed-messages/src/file-message.js.
username_0: Tested. For receiving, that error can indeed be ignored without problems, but sending does not work, because the file being sent must be saved on the server side... Also, when fetching message history some images have an _lcfile without an id, and that gets thrown directly... I don't quite understand why it has to be saved first; using withURL should also work, shouldn't it?
username_1: It can. http://jsplay.avosapps.com/poj/embed?js,console Actually I'd still recommend a custom message type, because as I understand it what you need to send is not a complete AV.File but just a URL (after uploading the file through your existing backend API). In that case there is no real need to use the rich-media message types we provide.
```javascript
class FileMessage extends AV.TypedMessage {
  constructor(url) {
    super();
    this.url = url;
  }
}
AV.messageType(1);
AV.messageField('url');
```
Let me know whether I understood the requirement correctly.
username_0: But my message types need to stay compatible with the existing chat history and with the other clients... Custom types are positive numbers, which wouldn't match the default negative ones. For that reason I pretty much have to use the rich-media messages you provide. I could also patch typed-messages.js myself... but then upgrading versions becomes a hassle later.
username_1: Understood. The SDK checks file.id before sending to make sure the file has already been uploaded; after uploading through your own API you can return the relevant information and construct one yourself. You can refer to how the SDK constructs an already-existing AV.File: https://github.com/leancloud/js-realtime-sdk/blob/next/typed-messages/src/file-message.js#L42-L53
username_1: If all else fails there is a blunt workaround: construct an `AV.Message` directly, assemble the `_lctype`, `_lcfile`, etc. fields yourself, and send whatever you want. Of course, from a maintainability point of view I don't recommend this.
username_0: OK, constructing with File.withURL and then force-assigning file.id lets sending succeed. But when receiving, many files carried over from 2.x have no id; it seems we users cannot inject an id before parse to make parsing succeed? I hope this can be made compatible.
username_1: ```javascript
class YourFileMessage extends AV.FileMessage {
  static parse(json, message) {
    json.id = json.id || 'just fake one';
  }
}
```
The design supports this, though I haven't tried it myself 😁.
username_0: Subclasses like ImageMessage can't reuse this subclass though...
username_1: Right. I will make the missing-ID case compatible and consider adding a beforeParse hook to Realtime.
username_0: Great, thanks! Status: Issue closed
kurozenzen/r34-react
1038880876
Title: Download Images as PDF Question: username_0: **Describe the feature** A feature to easily download the site/images as a PDF for offline viewing. **Additional Information** On iOS, downloading the site as a PDF only shows the pictures currently visible on screen and is blank everywhere else. Answers: username_1: I don't think this is a use case I want to solve with my application. There are lots of good r34 download tools out there. Status: Issue closed
flutter/flutter
628387526
Title: HtmlElementView goes inside main view. Question: username_0: Hi, I am trying to add an external script in my Flutter web app using HtmlElementView, but the view goes inside the main view.
```dart
import 'package:flutter/material.dart';
import 'dart:ui' as ui;
import 'package:universal_html/html.dart' as html;

class TestHTMLElement extends StatefulWidget {
  @override
  _TestHTMLElementState createState() => _TestHTMLElementState();
}

class _TestHTMLElementState extends State<TestHTMLElement> {
  html.DivElement _element;

  @override
  void initState() {
    super.initState();
    _element = html.DivElement()
      ..id = 'waitingRoom'
      ..append(html.ScriptElement()
        ..text = """
          var domain = "meet.jit.si";
          var options = {
            roomName: "DP",
            width: 700,
            height: 380,
            parentNode: undefined,
            configOverwrite: {},
            interfaceConfigOverwrite: {
              filmStripOnly: true,
              'TOOLBAR_BUTTONS': ['camera']
            }
          }
          var api = new JitsiMeetExternalAPI(domain, options);
          api.executeCommand('displayName', 'DP');
        """);
    // ignore: undefined_prefixed_name
    ui.platformViewRegistry
        .registerViewFactory('meeting-view', (int viewId) => _element);
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Colors.black38,
      body: Center(
        child: HtmlElementView(key: UniqueKey(), viewType: "meeting-view"),
      ),
    );
  }
}
```
For reference, please find the attachment. Any idea what the issue is? ![Screenshot from 2020-06-01 17-31-32](https://user-images.githubusercontent.com/5373539/83407195-d018ef00-a42d-11ea-8a1e-156716d26d58.png) Answers: username_1: Hi @username_0 From what I can see, the issue is related to a [3rd party plugin](https://pub.dev/packages/universal_html) rather than to Flutter itself. Please open the issue in the dedicated [repository](https://github.com/dint-dev/web_browser/issues). Closing, as this isn't an issue with Flutter itself. If you disagree, please write in the comments, possibly providing a minimal reproducible code sample that does not use 3rd party plugins, and I will reopen it. Thank you Status: Issue closed
aitos-io/BoAT-X-Framework
1009357030
Title: Input variables are not checked Question: username_0: In the function `BOAT_RESULT BoatRandom(BUINT8 *output, BUINT32 outputLen, void *rsvd)`
```c
BOAT_RESULT BoatRandom(BUINT8 *output, BUINT32 outputLen, void *rsvd)
{
    BOAT_RESULT result = BOAT_SUCCESS;
    (void)rsvd;

    random_buffer(output, outputLen);

    return result;
}
```
the input variables are not checked. The `result` value is therefore meaningless, and the return value is always success.<issue_closed> Status: Issue closed
lima1/PureCN
926254148
Title: Issue running normalDB Question: username_0: Hello Markus, I've been running PureCN without error for a long time. Recently started working on a new dataset and got stuck on the NormaDB step. I didn't see an error before neither could find any similar issues reported. The data that I am working on includes about 340 genes with ~80X-200X coverage. I tried running without GC corrected coverage and got the same error. Here are my commands ``` ls PureCN_Coverage_Control/*loess.txt.gz > normal_coverages.list Rscript $PURECN_DIR/NormalDB.R --outdir purecn_reference_files_hg38_344Genes --coveragefiles normal_coverages.list --genome hg38 --assay SeqCap_VCRome --normal_panel Combined_PON_Filter_AC3.vcf.gz ``` Here is the error I am getting. ``` INFO [2021-06-21 09:52:12] Loading PureCN 1.22.1... INFO [2021-06-21 09:52:26] Creating mapping bias database. INFO [2021-06-21 09:52:27] Processing variants 1 to 5000... WARN [2021-06-21 09:53:36] Found 4238 variants in mapping bias database. INFO [2021-06-21 09:53:36] Creating normalDB... Error in (function (fmt, ...) : invalid format '%i'; use format %f, %e, %g or %a for numeric objects Calls: createNormalDatabase ... flog.info -> .log_level -> layout -> do.call -> <Anonymous> Execution halted ``` Thanks, Nihir Answers: username_1: Just to make sure, you get the error also without the --normal_panel ? username_0: Yup tried that too and just re-ran. Here is the log ``` INFO [2021-06-21 10:15:39] Loading PureCN 1.22.1... INFO [2021-06-21 10:15:49] Creating normalDB... Error in (function (fmt, ...) : invalid format '%i'; use format %f, %e, %g or %a for numeric objects Calls: createNormalDatabase ... flog.info -> .log_level -> layout -> do.call -> <Anonymous> Execution halted ``` username_1: Then looks an issue with the coverage files. You sure that ls PureCN_Coverage_Control/*loess.txt.gz > normal_coverages.list works as expected, i.e. lists the paths to the files, one sample per line? username_0: Yes, the list includes one file per line. To be sure I provide the full path to files. Also thought that may be one of the files is corrupted so tried a different set of files as well but the same error (Also re-ran coverage on all the files) . If it helps, here are a few lines from one of the files. ``` Target total_coverage counts on_target duplication_rate chr1:501-197168 0 0 0 NA chr1:197169-393836 0 0 0 NA chr1:393837-590505 0 0 0 NA chr1:590506-787173 0 0 0 NA chr1:787174-983841 0 0 0 NA chr1:983842-1180510 0 0 0 NA chr1:1180511-1377178 0 0 0 NA chr1:1377179-1573847 0 0 0 NA chr1:1573848-1770515 0 0 0 NA chr1:1770516-1967183 0 0 0 NA chr1:1967184-2163852 0 0 0 NA chr1:2163853-2360520 0 0 0 NA chr1:2360521-2557189 0 0 0 NA chr1:2557690-2557884 59097.7347166776 708.569258325414 1 0.0770252324037185 chr1:2558352-2558504 15023.7744561826 190.889394101633 1 0.0531400966183575 chr1:2559824-2559987 35695.6808445429 483.361917232912 1 0.0769230769230769 chr1:2560581-2560766 34717.247884892 395.103394591787 1 0.0451127819548872 chr1:2561426-2561584 25628.6181280921 324.207121744426 1 0.0217983651226158 chr1:2561665-2561851 13708.4229173423 175.985265305308 1 0.0103626943005182 ``` username_1: Hmm, ok. I'll dig deeper where this could happen. Not sure, never seen it before. username_0: Thanks for your help and prompt response. I will try on my end too and will keep you posted. 
If you want I can email few files to you for debugging username_1: Can you load the first sample in this file with x <- PureCN::readCoverageFile(files[1]) username_0: GRanges object with 20945 ranges and 5 metadata columns: seqnames ranges strand | coverage average.coverage <Rle> <IRanges> <Rle> | <numeric> <numeric> [1] chr1 501-197168 * | 0 0 [2] chr1 197169-393836 * | 0 0 [3] chr1 393837-590505 * | 0 0 [4] chr1 590506-787173 * | 0 0 [5] chr1 787174-983841 * | 0 0 ... ... ... ... . ... ... [20941] chrX 155041021-155240895 * | 0 0 [20942] chrX 155240896-155440770 * | 0 0 [20943] chrX 155440771-155640645 * | 0 0 [20944] chrX 155640646-155840520 * | 0 0 [20945] chrX 155840521-156040395 * | 0 0 counts on.target duplication.rate <numeric> <logical> <numeric> [1] 0 FALSE NA [2] 0 FALSE NA [3] 0 FALSE NA [4] 0 FALSE NA [5] 0 FALSE NA ... ... ... ... [20941] 0 FALSE NA [20942] 0 FALSE NA [20943] 0 FALSE NA [20944] 0 FALSE NA [20945] 0 FALSE NA ------- seqinfo: 23 sequences from an unspecified genome; no seqlengths ``` username_1: Ok, running out of ideas. If you are willing to share some minimal example data for me to reproduce, let me know by email and I'll share a link to upload (unless it fits in an attachment). Status: Issue closed username_1: Run it on the whole bam files and then extract your regions of interest downstream. PureCN will benefit a lot from the additional data. No need for off-target with WES, so simply generate the interval file for the WES baits file without --offtarget and follow the best practices. username_1: (if the additional runtime of WES is an issue and parallelization isn't an option, you can filter the VCF to contain only 10,000 or so heterozygous SNPs on average. Keep a high density of SNPs close to your genes of interest. You can for example filter SNPs in baits with high and low GC-content outside those ROIs. That should provide high quality coverage and still an even tiling of SNPs.) username_0: Thank you for the suggestion
getgauge/gauge
435072219
Title: Log level is not honored for some gauge commands Question: username_0: **Expected Behaviour** Log level flag is a global flag and should be honored for all gauge sub commands **Actual Behaviour** Log level is not honored for some gauge sub commands like : init, install, list, man, telemetry, uninstall, update, version. Steps to reproduce: - Run `gauge init js -l debug` (or any of the subcommands mentioned above with log level debug) - No debug statements are printed. Status: Issue closed Answers: username_1: Debug statements are displaying properly. This has been verified and found fixed with the version ``` gauge-1.0.5.nightly-2019-04-22```
rethinkdb/docs
172196416
Title: Consider adding the option names to the Command Syntax section on API pages Question: username_0: Currently in the API pages the "Command syntax" labels the options only as `options`. This is probably a space-saving means for the main page, and that makes sense there. But I would rather see the options spelled out (including the defaults), as they are in the Python docs. For example I would like to see the [`connect` Python page](http://rethinkdb.com/api/python/connect/) go from: ``` r.connect(options) → connection ``` to: ``` r.connect(host='localhost', port=28015, db='test', user='admin', password='', timeout=20, ssl=None) → connection ``` Answers: username_1: Since the main (index) page is now auto-generated from the individual pages, we need to weight the benefits of this against the extra work (and potential for mistake) of maintaining the signatures separately for the index and detail pages. Currently some commands have the full set of options in the syntax (e.g. `insert`), and others don't (e.g. `connect` or `run`). I assume this is depending on how many options there are for the particular command. I'd like to hear what @username_2's opinion on this is when he comes back. username_2: Yep. There's no hard and fast rule for this, but once a command gets more than three or four options I tend to take them out of the API header. There's only about a half-dozen commands that this is the case for. The API header for `run` if we documented everything would look something like this: ``` query.run(conn[, read_mode="single", time_format="native", profile=False, durability="soft", group_format="native", noreply=False, db="test", array_limit=100000, binary_format="native", min_batch_rows=8, max_batch_rows=<unlimited>, max_batch_seconds=0.5, first_batch_scaledown_factor=4]) → cursor query.run(conn[, read_mode="single", time_format="native", profile=False, durability="soft", group_format="native", noreply=False, db="test", array_limit=100000, binary_format="native", min_batch_rows=8, max_batch_rows=<unlimited>, max_batch_seconds=0.5, first_batch_scaledown_factor=4]) → object ``` ...and while this is admittedly the worst case to pick, I'm not convinced this would aid clarity. I could be convinced otherwise, but we have a couple commands that get into the 6+ option range—and sometimes have more than one form, like `run` here, and thus have to be repeated for each form. username_1: My opinion is that our current signatures are fine (we should maybe standardize on a specific cut-off value for the future, e.g. 4+ options). Consistency is good, but readability also matters.
dotnet/Kerberos.NET
903147484
Title: Implement MS-SFU on the KDC side Question: username_0: **Is your feature request related to a problem? Please describe.** The library implements MS-SFU within the client, but doesn't implement it on the server side. **Describe the solution you'd like** Implement the server portion of MS-SFU. [MS-SFU](https://docs.microsoft.com/en-us/openspecs/windows_protocols/MS-SFU/3bff5864-8135-400e-bdd9-33b552051d94)
VictoriaMetrics/VictoriaMetrics
1154716702
Title: Make documentation friendlier to non-Prometheus users Question: username_0: **Is your feature request related to a problem? Please describe.** While searching for an alternative to InfluxDB for storing financial data, I found VictoriaMetrics. I've never used Prometheus, but the documentation refers to it heavily, which makes it hard to understand what exactly VM does (maybe it's [not](https://docs.victoriametrics.com/FAQ.html#what-is-the-main-purpose-of-victoriametrics) a good solution for financial data after all?). The README includes a lot of low-level technical details, and after spending a couple hours reading https://docs.victoriametrics.com, I'm still unclear on the key concepts. For users who've never used a TSDB, VictoriaMetrics may be even harder to understand. Does VM deserve a place as a standalone TSDB, not just as a drop-in replacement for Prometheus or InfluxDB? **Describe the solution you'd like** A nice, clear, introduction to users new to TSDBs, similar to the InfluxDB one, which doesn't assume knowledge of any particular TSDB. InfluxDB has an easy-to-grasp documentation, which enabled me to get started quickly: https://docs.influxdata.com/influxdb/v1.8/introduction/get-started/. Also, its https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/ page was super useful. Explaining or linking to some terms _considered_ known ("helm charts"? scraping? - I've been using Influx for years and never ran into these concepts) would be great. **Describe alternatives you've considered** Look around for random blogs introducing VictoriaMetrics? Answers: username_1: Hi @username_0 ! Thanks for your request! You can do it by sending us a PR with your suggestions. There is an app directory path https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/app where you can find source code and documentation of VM utils or services. For example in https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/app/vmagent you will find **README.md** which is used for https://docs.victoriametrics.com/vmagent.html. username_0: I stopped because at that point, the user needs to know what a "label" is, which hasn't been explained yet, despite the tutorial being aimed at beginners (beginners in which technology?), and I haven't yet managed to understand what exactly a label is, though this is probably very simple, just not mentioned yet. The problem with assumptions is that they lead to tortuous sections. For example, "Careful readers could notice that [Grafana](http://docs.grafana.org/features/datasources/prometheus/) draws constantly growing lines for all the queries above". This wouldn't be necessary if the article explained the concept of a counter first, and how it's always increasing. What I think would help illustrate these concepts: - an example data set that doesn't require any prior knowledge of tools like node_exporter. The InfluxDB docs does this very easily and naturally in their [sample data](https://docs.influxdata.com/influxdb/v1.8/concepts/key_concepts/#sample-data) section. 
A TSDB beginner probably knows the high-level concepts of fields, values, tags and so on but it's important to clarify their names, because various TSDB tools have different names for them, and the concepts *are* overlapping sufficiently that questions like "What is the difference between fields and values and tags" [still](https://dba.stackexchange.com/questions/163292/understanding-how-to-choose-between-fields-and-tags-in-influxdb) get [asked](https://stackoverflow.com/questions/54550715/telegraf-whats-the-difference-between-a-field-and-a-tag-in-a-metric). Clear documentation would clarify that, but I haven't found it yet for VictoriaMetrics. - a list of knowledge/prerequisites required before reading the tutorial (e.g. "good to have some experience with Grafana") - having someone new to TSDBs go through the tutorial and confirm that now they understand the concepts of VictoriaMetrics. While I'm not new to TSDBs, I can pt myself in a beginner's shoes, so I volunteer to go through such a tutorial. username_2: I found this issue due to the exact same problem. My google search was "victoriametrics for non prometheus". I am exploring VM as a way to store readings from IOT sensors and not for monitoring infrastructure (although I may use it for that also) and it's very hard finding a good starting point from which to start exploring that doesn't assume VM is being used as a data store for prometheus.
mwanji/toml4j
208922206
Title: Way to iterate/extract arrays of inline tables? Question: username_0: It doesn't seem clear to me if there is a way using the toml4j API to iterate over arrays of inline tables, such as: ```toml somekey = [ { name = "foo" }, { name = "bar" } ] ``` I did find a test that had an example using an array index in the key, however it feels like this is something that could be better surfaced in the API, or at least better documented? Status: Issue closed Answers: username_0: My bad, getTables covers this username_1: Yes, it does, but I've long thought it was poorly named. I think it should be getTableArray. Any opinions? username_0: Yea I think that would make it a little more obvious
raywo/MMM-MonthlyCalendar
640691494
Title: Events Not Viewing in Calendar Question: username_0: Hello, I followed all the directions, and everything went really well. I was able to get my calendar to sync to the local .ics file. When I launch MM, I get the monthly calendar, but no fields are filled in. My first thought was there is an issue with my calendar file. I was able to load it ok with the built in calendar app. This is a brand new calendar I created and has only a few entries in it. Is there anything I can look for or do from my end? Thanks
twilblog/twilblog.github.io
1100160274
Title: "Recovering a deleted merged Git branch" blog is wrong Question: username_0: [This](https://github.com/twilblog/twilblog.github.io/blob/master/_posts/2015-10-06-git-recover-deleted-branch.md) post states given this merge commit: ``` commit <very long SHA here> Merge: <more SHA stuff> Author: <NAME> <<EMAIL>> Date: Thu Nov 26 10:09:00 2015 +0000 Merged in branch-to-recover (pull request #100) ``` You recover the merged and deleted branch from "commit: <very long SHA here>". This is **wrong**. The deleted branch's head commit is one of the SHA1 hashes in "<more SHA stuff>". You recover it with that hash. I realize it's an old small personal blog post but the post is fairly highly ranked on search engines. Can you please fix this?
opencv/opencv
258713842
Title: Cannot load batchnorm layer of TensorFlow(tf.contrib.layers.batch_norm) Question: username_0: python3.5 TensorFlow 1.3 Ubuntu 16.04.1 Minimum codes: ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf features = tf.placeholder(tf.float32, [None, 128, 128, 2], name = 'input') model = tf.reshape(features, [-1, 128*128*2]) model = tf.contrib.layers.batch_norm(model, is_training = False) model = tf.layers.dense(inputs = model, units = 8, name = 'regression_output') with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver = tf.train.Saver() saver.save(sess, 'reshape.ckpt') tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph.pb') ``` Freeze the model: python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph=graph.pb --input_checkpoint=reshape.ckpt --output_graph=frozen_graph.pb --output_node_names=regression_output/BiasAdd Load the model: ``` #include <opencv2/core.hpp> #include <opencv2/dnn.hpp> #include <opencv2/highgui.hpp> #include <iostream> using namespace cv; int main()try { std::string const model("/home/whatever/deep_homography/cnn/tensorflow/frozen_graph.pb"); dnn::Net net = dnn::readNetFromTensorflow(model); if(net.empty()){ std::cerr<<"Can't load network by using the mode file:"<<std::endl; std::cerr<<model<<std::endl; return -1; } return 0; }catch(std::exception const &ex){ std::cerr<<ex.what()<<std::endl; } ``` Error message: OpenCV Error: Unspecified error (More than one input is Const op) in getConstBlob, file /home/whatever/opencv/modules/dnn/src/tensorflow/tf_importer.cpp, line 513 /home/whatever/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:513: error: (-2) More than one input is Const op in function getConstBlob Answers: username_1: @username_0, try to fuse constant ops by ```bash ~/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph \ --in_graph=frozen_graph.pb \ --out_graph=fused_graph.pb \ --inputs=input_node_name \ --outputs=output_node_name \ --transforms="fold_constants sort_by_execution_order" ``` username_0: @username_1 Tried it, still pop out same error message OpenCV Error: Unspecified error (More than one input is Const op) in getConstBlob, file /home/whatever/opencv/modules/dnn/src/tensorflow/tf_importer.cpp, line 513 /home/whatever/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:513: error: (-2) More than one input is Const op in function getConstBlob username_1: @username_0, let me ask a question about the model. Did you want to apply a batch normalization over the input image or not? If I'm correct and the model is `input->bn->reshape->fc` the following sample works for the latest state of OpenCV: ```python features = tf.placeholder(tf.float32, [None, 128, 128, 2], name = 'input') model = tf.contrib.layers.batch_norm(features, is_training = False) model = tf.reshape(model, [-1, 128*128*2]) model = tf.layers.dense(inputs = model, units = 8, name = 'regression_output') ``` Then save a checkpoint and the graph definition. 
```bash python ~/tensorflow/tensorflow/python/tools/freeze_graph.py \ --input_graph=graph.pb \ --input_checkpoint=reshape.ckpt \ --output_graph=frozen_graph.pb \ --output_node_names=regression_output/BiasAdd python ~/tensorflow/tensorflow/python/tools/optimize_for_inference.py \ --input frozen_graph.pb \ --output opt_graph.pb \ --frozen_graph True \ --input_names input \ --output_names regression_output/BiasAdd ~/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph \ --in_graph=opt_graph.pb \ --out_graph=fused_graph.pb \ --inputs=input \ --outputs=regression_output/BiasAdd \ --transforms="fold_constants sort_by_execution_order" ``` Launch: ```python import cv2 as cv import numpy as np net = cv.dnn.readNetFromTensorflow('fused_graph.pb') inp = np.random.standard_normal([1, 2, 128, 128]).astype(np.float32) net.setInput(inp) out = net.forward() ``` username_0: @username_1 Thanks, after I pull latest codes from master branch and rebuild, it works now. I would suggest someone write down how to deploy tensorflow network with batchnorm on wiki, I do not have the right to access the wiki, else I will do it myself. rough example: ``` 1. clone https://github.com/tensorflow/tensorflow 2. cd to tensorflow 3. install bazel if you haven't 4. enter "/.configure" 5. enter "bazel build tensorflow/tools/graph_transforms:transform_graph" 6. freeze graph python ~/tensorflow/tensorflow/python/tools/freeze_graph.py \ --input_graph=graph.pb \ --input_checkpoint=reshape.ckpt \ --output_graph=frozen_graph.pb \ --output_node_names=regression_output/BiasAdd 7. optimize graph python ~/tensorflow/tensorflow/python/tools/optimize_for_inference.py \ --input frozen_graph.pb \ --output opt_graph.pb \ --frozen_graph True \ --input_names input \ --output_names regression_output/BiasAdd 8. transform graph ~/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph \ --in_graph=opt_graph.pb \ --out_graph=fused_graph.pb \ --inputs=input \ --outputs=regression_output/BiasAdd \ --transforms="fold_constants sort_by_execution_order" ``` username_2: I don't know why this issue is closed. The problem still exists in OpenCV 3.4.1. username_1: @username_2, please provide a minimal example to reproduce the problem.
carbon-design-system/carbon
506019252
Title: Component skeletons (loading states) need a11y improvements for screen reader users Question: username_0: ref #4176 ref #4307 attn @snidersd The loading states for our components (referred to as skeletons) are not accessible for screen reader users. Answers: username_1: In #4307, I've addressed these attributes for the following components (in React): * Checkbox * NumberInput * Select * Slider * TextArea * TextInput username_0: Closing this issue. We're exploring alternate solutions to this problem rather than addressing loading component by component 👍 Status: Issue closed username_2: ref #4176 ref #4307 attn @snidersd I'm specifically referring to our React package. The loading states for our components (referred to as skeletons) are not accessible for screen reader users. I'm referencing requirements around status messages [found here](https://www.ibm.com/able/guidelines/ci162/status_messages_71.html). Improvements: - add `tabindex="0"` - add `role="status"` - add `aria-live="assertive"` - add `aria-label="loading {components name}"` Components with Skeletons: - [ ] Accordion - [ ] Breadcrumb - [ ] Button - [x] Checkbox - [ ] CodeSnippet - [ ] DataTable - [ ] DatePicker - [ ] Dropdown - [ ] FileUploader - [ ] Icon - [x] NumberInput - [ ] ProgressIndicator - [ ] RadioButton - [ ] Search - [x] Select - [ ] Text - [x] Slider - [ ] StructuredList - [ ] Tabs - [ ] Tag - [x] TextArea - [x] TextInput - [ ] Toggle - [ ] ToggleSmall Status: Issue closed
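A minimal sketch — not Carbon's actual implementation — of the four attributes listed above applied to a generic skeleton placeholder, using plain DOM APIs:
```ts
function makeSkeletonPlaceholder(componentName: string): HTMLDivElement {
  const el = document.createElement('div');
  el.tabIndex = 0;                                            // tabindex="0"
  el.setAttribute('role', 'status');                          // role="status"
  el.setAttribute('aria-live', 'assertive');                  // aria-live="assertive"
  el.setAttribute('aria-label', `loading ${componentName}`);  // aria-label="loading {component name}"
  return el;
}

document.body.appendChild(makeSkeletonPlaceholder('text input'));
```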
marchaos/jest-mock-extended
561321247
Title: Comparison with ts-mockito Question: username_0: `jest-mock-extended` looks nice, but what is the advantage of using it vs. [`ts-mockito`](https://github.com/NagRock/ts-mockito)? Answers: username_1: Well, you can't really compare the two, as ts-mockito is a standalone library, whereas jest-mock-extended, as the name suggests, is an extension to jest that adds mocking capabilities using a jest-compatible API. If you're using jest, I think the choice is obvious. Status: Issue closed
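To illustrate the "jest-compatible API" point: jest-mock-extended mocks are built from `jest.fn()`, so the usual jest helpers and matchers apply directly. This is a sketch based on the package's documented `mock<T>()` helper (ts-mockito instead uses its own `when()`/`instance()`/`verify()` style, so pick whichever fits the rest of your test suite):
```ts
import { mock } from 'jest-mock-extended';

interface UserService {
  getName(id: number): string;
}

test('mocked methods are plain jest.fn()s', () => {
  const service = mock<UserService>();
  service.getName.mockReturnValue('Ada');

  expect(service.getName(1)).toBe('Ada');
  expect(service.getName).toHaveBeenCalledWith(1);
});
```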
tensorflow/tensorflow
337614957
Title: tf.data.Dataset.from_tensor_slices incompatible with tuples? Question: username_0: ... for ds in [ds_l, ds_t]: ... it = ds.make_one_shot_iterator().get_next() ... while True: ... try: ... print(session.run(it)) ... except tf.errors.OutOfRangeError: ... break ... ['tensors:' 'A' 'nested' 'structure' 'of' 'tensors,' 'each' 'having' 'the' 'same' 'size' 'in' 'the' '0th' 'dimension.'] ('tensors:', 'A', 'nested', 'structure', 'of', 'tensors,', 'each', 'having', 'the', 'same', 'size', 'in', 'the', '0th', 'dimension.') ``` ### Describe the problem Apparently one can not use `tf.data.Dataset.from_tensor_slices` with tuples. This is very counter-intuitive as they almost everywhere have the same behavior as lists. Also using `tf.data.Dataset.from_tensors` is no option. Even though this seems to handle tuples properly, one only gets a single element instead of `n` elements. This is in alignment with the documentation but does not fulfill the same functionality as `tf.data.Dataset.from_tensor_slices`. Am I just using it wrong, is the documentation to ambiguous or should this be fixed?
nfarina/homebridge-tesla
220729156
Title: Error login in Question: username_0: [Model S] Error logging into Tesla: Error: 401: {"response":"undefined_method_`strip'_for_nil:nilclass"} any clues why I'm getting this error? Thanks Answers: username_1: Hmm .. perhaps something to do with your account? I just now tried using Siri to check lock/unlock state and it logged in for me without issue. That error is particular bizarre - speaks to a server problem on their end. username_2: I have the same problem, on a "Rasberry Pi Nano Wireless" install. Everything works like a charm on my other x86 linux box. But on my Pi nano, i get: [Model S] Error logging into Tesla: Error: 401: {"response":"undefined_method_`strip'_for_nil:nilclass"} -- Where do i go from here? Teslacmd works like it should, im able to get information from my car, also from my RasPi Nano... username_3: howdi, did you manage to find a solution? I am facing the same problem It happens when I try to retreive access token, using postman as testtool. this is my request: POST /oauth/token HTTP/1.1 Host: owner-api.teslamotors.com Content-Type: application/x-www-form-urlencoded Cache-Control: no-cache grant_type=password&client_id=81527cff06843c8634fdc09e8ac0abefb46ac849f38fe1e431c2ef2106796384&client_secret=<KEY>&username=emailofowner%40ownerdomain.xx&password=<PASSWORD>ie username_2: Yes, i configured the npm package homebridge-tesla to use username and password instead of token, i think that was the workaround that made the problem go away. username_3: thx, I have found the solution that gives me a working token: all example documentation of Get-Token uses **username**= , instead the request for the token should use **email**= _Tools like Postman use username as well when Request Token is issued ( which fails)._ the full request is: POST /oauth/token HTTP/1.1 Host: owner-api.teslamotors.com Content-Type: application/x-www-form-urlencoded Cache-Control: no-cache grant_type=password&client_id=81527cff06843c8634fdc09e8ac0abefb46ac849f38fe1e431c2ef2106796384&client_secret=<KEY>&email=emailofowner%40ownerdomain.xx&password=<PASSWORD>ie username_4: I think this issue can be closed as of current version fixing it. username_1: Thanks! Status: Issue closed
SolarDrew/skill-rpgchar
323779151
Title: Skill/ability check improvements Question: username_0: Checks are only semi-active at the moment, in that the DM triggers a check and the bot rolls it for the character. It would be nice to have some more flexibility on that, including features like:

- [ ] Passive checks
- [ ] Players triggering their own checks
- [ ] Checks reporting to the DM private chat rather than in the main room
- [ ] Saves - not strictly checks but will use a lot of the same tech
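To make the passive-versus-rolled distinction concrete, the kind of logic being asked for presumably looks something like the sketch below. This is purely illustrative, based on the usual d20 convention (a rolled check is d20 + modifier against a DC, a passive check is a flat 10 + modifier); the function names are invented here and it is not this skill's actual code or API.

```python
import random

def rolled_check(modifier: int, dc: int) -> bool:
    """Active check: roll a d20, add the character's modifier, compare to the DC."""
    roll = random.randint(1, 20)
    return roll + modifier >= dc

def passive_check(modifier: int, dc: int) -> bool:
    """Passive check: no roll at all; a flat 10 + modifier is compared to the DC."""
    return 10 + modifier >= dc
```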
mrlacey/Rapid-XAML-Toolkit
648253577
Title: rename master branch Question: username_0: Change it to "main". Check all links/paths/etc. that also need to be updated. Answers: username_0:
- "main" branch created
- default branch changed to "main"
- docs links changed
- appveyor settings updated

username_0: Will leave the master branch around to avoid breaking any existing links, but will remove it in 3 months. username_0: Removing it now, as GitHub now redirects missing branches to the default on the website. Status: Issue closed
yiisoft/yii2
291416634
Title: Error when using filterWhere(): Invalid parameter number: number of bound variables does not match number of tokens Question: username_0: While running the following code:

```php
$dbExpression = new \yii\db\Expression(
    'ST_Distance_Sphere(POINT(:lng, :lat), address_point) / 1000'
);
$query = Address::find()->select('*');
$query->addSelect(["$dbExpression AS Distance"])->params([':lng' => -123, ':lat' => 46]);
$query->filterWhere(['LIKE', 'address_city', 'Vancouver']);
//$query->where("address_city LIKE '%Vancouver%'");
$activeDataProvider = new ActiveDataProvider(['query' => $query]);
```

I want the following SQL query to be executed:

```sql
SELECT *, ST_Distance_Sphere(POINT(-123, 46), address_point) / 1000 AS Distance FROM `address` WHERE address_city LIKE '%Vancouver%'
```

However, I get an error:

```
SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens
The SQL being executed was: SELECT COUNT(*) FROM `address` WHERE `address_city` LIKE '%Vancouver%'
```

If I replace filterWhere() with where() (see the commented-out code) then it works. If I comment out the line with addSelect() and params() then filterWhere() works too.

### Additional info

| Q | A
| ---------------- | ---
| Yii version | 2.0.13.1
| PHP version | 7.1.12
| Operating system | Ubuntu 14.04.5 LTS

Stack trace:

```
"name": "Database Exception",
"message": "SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens\nThe SQL being executed was: SELECT COUNT(*) FROM `address` WHERE `address_city` LIKE '%Vancouver%'",
"code": 0,
"type": "yii\\db\\Exception",
"file": "/home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/db/Schema.php",
"line": 595,
"stack-trace": [
    "#0 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/db/Command.php(1082): yii\\db\\Schema->convertException(Object(PDOException), 'SELECT COUNT(*)...')",
    "#1 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/db/Command.php(412): yii\\db\\Command->queryInternal('fetchColumn', 0)",
    "#2 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/db/Query.php(448): yii\\db\\Command->queryScalar()",
    "#3 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/db/ActiveQuery.php(337): yii\\db\\Query->queryScalar('COUNT(*)', Object(yii\\db\\Connection))",
    "#4 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/db/Query.php(332): yii\\db\\ActiveQuery->queryScalar('COUNT(*)', Object(yii\\db\\Connection))",
    "#5 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/data/ActiveDataProvider.php(169): yii\\db\\Query->count('*', NULL)",
    "#6 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/data/BaseDataProvider.php(169): yii\\data\\ActiveDataProvider->prepareTotalCount()",
    "#7 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/data/ActiveDataProvider.php(106): yii\\data\\BaseDataProvider->getTotalCount()",
    "#8 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/data/BaseDataProvider.php(101): yii\\data\\ActiveDataProvider->prepareModels()",
    "#9 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/data/BaseDataProvider.php(114): yii\\data\\BaseDataProvider->prepare()",
    "#10 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/rest/Serializer.php(186): yii\\data\\BaseDataProvider->getModels()",
    "#11 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/rest/Serializer.php(152): yii\\rest\\Serializer->serializeDataProvider(Object(yii\\data\\ActiveDataProvider))",
    "#12 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/rest/Controller.php(99): yii\\rest\\Serializer->serialize(Object(yii\\data\\ActiveDataProvider))",
    "#13 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/rest/Controller.php(77): yii\\rest\\Controller->serializeData(Object(yii\\data\\ActiveDataProvider))",
    "#14 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/base/Controller.php(159): yii\\rest\\Controller->afterAction(Object(yii\\rest\\IndexAction), Object(yii\\data\\ActiveDataProvider))",
    "#15 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/base/Module.php(528): yii\\base\\Controller->runAction('index', Array)",
    "#16 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/web/Application.php(103): yii\\base\\Module->runAction('api/location/in...', Array)",
    "#17 /home/ubuntu/workspace/project/basic/vendor/yiisoft/yii2/base/Application.php(386): yii\\web\\Application->handleRequest(Object(yii\\web\\Request))",
    "#18 /home/ubuntu/workspace/project/basic/web/index.php(12): yii\\base\\Application->run()",
    "#19 {main}"
]
```
gamesir123/app
747164972
Title: Android 10 not connecting, please fix version support Question: username_0: Cannot open the gamepad. Answers: username_1: Can someone please fix this? The software's compatibility with PUBG MOBILE is programmed so badly that I can't play with my GameSir T1s. username_2: I have 3 GameSir controllers (G5, Z2, T4 Pro), but they don't work on my Xiaomi model, which runs on Android 10. When I tried them on my other phones with Android 9 they work just fine. I have tried the latest APK v4.0.7, as well as 4.0.6, 4.0.5 and 3.7.6 (which I read about in a comment around here), but it is still not working, and probably the same goes for the rest of the versions. It is so bad that 2020 phones are getting good but GameSir cannot tell us why their controllers aren't working on Android 10. Maybe it is time to switch to Flydigi. P.S. I have used GameSir controllers for a long time and never had problems till Android 10. If anybody has a solution for this, please let me know.
swt2-intro-exercise/rails-exercise-19-fabianhe
517642909
Title: New author page should exist Question: username_0: Did you mean? new_polymorphic_path *1/44 exercise tests have passed* Answers: username_0: Did you mean? new_polymorphic_path *If you have problems solving this task, please don't hesitate to contact the teaching team!* Status: Issue closed
foolin/goview
443782601
Title: Auto complete Extensions Question: username_0: The extension should not be added if it is already provided in the include instruction. Example: I use the default configuration of goview (extensions is ".html").

error.html:
```
{{define "head"}}
{{include "style.css"}}
{{end}}

{{define "content"}}
<h1>Error</h1>
{{end}}
```

style.css:
```
body {
    background-color: red;
}
```

The error I get:
```
ViewEngine execute template error: template: error:2:6: executing "head" at <include "style.css">: error calling include: ViewEngine render read name:style.css, path:<gopath>/github.com/<username>/<projectname>/service.web.oauth/views/style.css.html, error: open <gopath>/github.com/<username>/<projectname>/service.web.oauth/views/style.css.html: no such file or directory
```

Answers: username_1: Oh, it's a bug. Let me think about how to fix this. username_1: By the way, why do you need to include the style css? You don't have to do it like this; you can solve the problem in one of these ways:

-------------------------
1. Link the style css:
```
{{define "head"}}
<link rel="stylesheet" type="text/css" href="/public/style.css" />
{{end}}

{{define "content"}}
<h1>Error</h1>
{{end}}
```
-------------------------
2. Use a base html template:

base.html
```html
<style type="text/css">
body {
    background-color: red;
}
</style>
```

xxxx.html
```
{{include "base.html"}}

{{define "content"}}
<h1>Error</h1>
{{end}}
```

username_2: Hi username_1! How do I convert content (a string) to HTML using a func in a template? Thanks. username_1: Please see this: https://github.com/username_1/goview/issues/3 username_0: It was only for the demonstration. username_1: If it was only for the demonstration, I'll close this issue now; if you have any problem, then reopen. Status: Issue closed
Praqma/praqma.github.io
122242709
Title: Adjust sizes of headline levels Question: username_0: Right now, level 3 headlines are bigger than level 2 headlines. Answers: username_1: Came up with this. Looks pretty good on all pages except this one: ![image](https://cloud.githubusercontent.com/assets/797993/12117030/f87c94c8-b3ce-11e5-805f-62c9579b127c.png) Status: Issue closed username_2: Looking fine - I'm closing it.
ES2-UFPI/Unichat
446354529
Title: Create language selection screen Question: username_0: The language screen will support switching languages by the user. That is, if their native language is English but they want to receive a message in Spanish or Japanese, they can change the language on this screen. Status: Issue closed