ayufan/easy-wireguard
735544122
Title: Concatenation of PSK when emitting config Question: username_0: Hello, I would like to know what the benefit of concatenating PSKs (function `gen_psk`) is when generating the configuration, if any? Indeed, on my side, wireguard-tools (v1.0.20200827) tells me it does not recognize this key format, so I had to remove the server PSK from it to make it work. Answers: username_1: I saw that too. It appears that this is not valid base64. I think my intent was to create a PSK that contains both parties. username_1: Maybe I should always use a client one? On Tue, Nov 3, 2020 at 7:27 PM <NAME> <<EMAIL>> wrote: > I saw that too. It appears that this is not a valid base64. I think my > intent was to create a PSK that contains both parties. > > username_0: I think you should. I'm no expert on the subject, but it makes the most sense to me - and it works well in my configurations.
containerd/nerdctl
1023861720
Title: client certificates are not sent when using `nerdctl pull`? Question: username_0: With `docker` I had my certificates in `~/.docker/certs.d/host:port/{ca.crt, client.cert, client.key}`. After moving these to `/etc/docker/certs.d/host:port/..` in the lima machine, nerdctl recognises them and I was able to log in to the registry (`nerdctl login ..`). Before moving the files, I got a 400 error when trying to log in. Pulling images from this private (gitlab) registry is still not possible:

```bash
$ nerdctl --debug-full pull $PRIVATE_REGISTRY:5000/web/docker-web:latest
DEBU[0000] rootless parent main: executing "/usr/bin/nsenter" with [-r/ -w/home/dennis.linux --preserve-credentials -m -n -U -t 832 -F nerdctl --debug-full pull $PRIVATE_REGISTRY:5000/web/docker-web:latest]
DEBU[0000] fetching image="$PRIVATE_REGISTRY:5000/web/docker-web:latest"
DEBU[0000] resolving host="$PRIVATE_REGISTRY:5000"
DEBU[0000] do request host="$PRIVATE_REGISTRY:5000" request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/1.5.0+unknown request.method=HEAD url="https://$PRIVATE_REGISTRY:5000/v2/web/docker-web/manifests/latest"
DEBU[0000] fetch response received host="$PRIVATE_REGISTRY:5000" response.header.content-length=237 response.header.content-type=text/html response.header.date="Tue, 12 Oct 2021 13:51:41 GMT" response.header.server=nginx/1.17.8 response.status="400 Bad Request" url="https://$PRIVATE_REGISTRY:5000/v2/web/docker-web/manifests/latest"
FATA[0000] failed to resolve reference "$PRIVATE_REGISTRY:5000/web/docker-web:latest": pulling from host $PRIVATE_REGISTRY:5000 failed with status code [manifests latest]: 400 Bad Request
```

My assumption is that nerdctl does not send the client certificate when trying to pull .. any ideas?
Answers: username_1: Support for /etc/docker/certs.d is unimplemented
flutter/flutter
727004603
Title: [Webview_flutter] When will macOS be supported Question: username_0: [Flutter has previewed desktop applications](https://flutter.dev/desktop), but I can't use webview_flutter on macOS (or Windows/Linux). Does webview_flutter have any plans to support the desktop? Answers: username_1: Platform views are not supported on desktop currently, so it is not possible to make webview_flutter support desktops right now. Status: Issue closed username_2: Closing this as a duplicate of [41725](https://github.com/flutter/flutter/issues/41725)
liuyueyi/spring-boot-demo
698661916
Title: [DB Series] h2database integration demo | 一灰灰Blog Question: username_0: http://spring.hhui.top/spring-blog/2020/09/11/200911-SpringBoot%E7%B3%BB%E5%88%97h2databse%E9%9B%86%E6%88%90%E7%A4%BA%E4%BE%8Bdemo/ H2 is an in-memory database, most often seen in embedded scenarios; it has a small dependency footprint and a complete feature set. Generally speaking, ordinary commercial projects rarely use it, but it is quite useful in some special cases, such as unit tests, business caches, and simple demos. This article walks you step by step through creating a project that integrates h2database and supports importing a predefined schema and data from SQL.
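As a sketch of the kind of setup the linked article describes (the database name `testdb` and the property values here are illustrative, not taken from the post), a Spring Boot project typically needs only the `com.h2database:h2` dependency plus a few properties; Spring Boot then runs `schema.sql` and `data.sql` from `src/main/resources` at startup:

```properties
# In-memory H2 instance; the database disappears when the JVM exits.
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

# Browsable web console at /h2-console (for local development only).
spring.h2.console.enabled=true
```

With this in place, predefined schema and data load automatically, which is exactly the unit-test and demo use case the article highlights.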
nrwl/nx
793496035
Title: Using environment variables in `run-commands` Question: username_0:
```
nx : Not Found
@nrwl/angular : 10.4.7
@nrwl/cli : 10.3.1
@nrwl/cypress : 10.3.1
@nrwl/eslint-plugin-nx : 10.3.1
@nrwl/express : 11.0.2
@nrwl/jest : 10.3.1
@nrwl/linter : 10.3.1
@nrwl/nest : 10.4.7
@nrwl/next : Not Found
@nrwl/node : 10.3.1
@nrwl/react : 11.0.2
@nrwl/schematics : Not Found
@nrwl/tao : 10.3.1
@nrwl/web : 11.0.2
@nrwl/workspace : 10.3.1
typescript : 4.0.3
```
Answers: username_1: Perhaps it would be nice if we could use environment variables in the same way we use args, so that we don't have to worry about PowerShell `$env:SOMETHING`, bash `$SOMETHING`, or CMD `%SOMETHING%`? **For example:**
```json
"echo": {
  "builder": "@nrwl/workspace:run-commands",
  "options": {
    "command": "echo {env.SOMETHING}",
    "parallel": false
  }
}
```
username_1: It would be a shame for this to be closed. I am still of the opinion that it would be nice to be able to use env vars as described in my earlier comment. username_2: I was looking for a way to achieve the same, came across this issue, and tried the example from @username_0 by replacing the `echo` target with:
```json
"echo": {
  "builder": "@nrwl/workspace:run-commands",
  "options": {
    "commands": ["echo $TEST"],
    "parallel": false
  }
},
```
Since it worked this way, I thought I would let people know in case they land here after a Google search :)
teal-language/tl
740804212
Title: Alternative grammar to generate railroad diagram? Question: username_0: Based on the grammar in the documentation I created this alternative grammar that can be used to generate a railroad diagram with https://bottlecaps.de/rr/ui ; it will probably need a review.
```
chunk ::= block
block ::= stat* retstat?
stat ::= ';' |
   varlist '=' explist |
   functioncall |
   label |
   'break' |
   'goto' Name |
   'do' block 'end' |
   'while' exp 'do' block 'end' |
   'repeat' block 'until' exp |
   'if' exp 'then' block ('elseif' exp 'then' block)* ('else' block)? 'end' |
   'for' Name '=' exp ',' exp (',' exp)? 'do' block 'end' |
   'for' namelist 'in' explist 'do' block 'end' |
   'function' funcname funcbody |
   'local' attnamelist (':' typelist)? ('=' explist)? |
   'local' 'function' Name funcbody |
   'local' 'record' Name recordbody |
   'local' 'enum' Name enumbody |
   'local' 'type' Name '=' newtype |
   'global' attnamelist ':' typelist ('=' explist)? |
   'global' attnamelist '=' explist |
   'global' 'function' Name funcbody |
   'global' 'record' Name recordbody |
   'global' 'enum' Name enumbody |
   'global' 'type' Name '=' newtype
attnamelist ::= Name attrib? (',' Name attrib?)*
attrib ::= '<' Name '>'
retstat ::= 'return' explist? ';'?
label ::= '::' Name '::'
funcname ::= Name ('.' Name)* (':' Name)?
varlist ::= var (',' var)*
var ::= Name | prefixexp '[' exp ']' | prefixexp '.' Name
namelist ::= Name (',' Name)*
explist ::= exp (',' exp)*
exp ::= 'nil' | 'false' | 'true' | Numeral | LiteralString | '...' | functiondef | prefixexp
[Truncated]
typeargs ::= '<' Name ( ',' Name )* '>'
newtype ::= 'record' recordbody | 'enum' enumbody | type
recordbody ::= typeargs? ('{' type '}')? (Name '=' newtype)* (Name ':' type)* 'end'
enumbody ::= LiteralString* 'end'
functiontype ::= 'function' typeargs? '(' partypelist ')' (':' retlist)?
partypelist ::= partype (',' partype)*
partype ::= (Name ':')? type
parnamelist ::= parname (',' parname)*
parname ::= Name (':' type)?
```
Answers: username_1: Oh, the graphs on that website look nice! One thing I wanted to keep in the grammar, though, was for it to be easily compared with the grammar from the Lua Reference Manual, so that people could easily see in which ways Teal is syntactically a superset of Lua. I don't think we should maintain two versions of the grammar, though, so if we were to have the railroad diagram it would be best to have its input format generated from that grammar through a script or something. I haven't compared them line by line; which changes did you have to make, in general? username_0: I just updated to the latest grammar; the changes are basically replacing `[...]` with `(...)?` and `{...}` with `(...)*`. username_0: Also replacing `´` quotation marks with `"` and removing the `*+` at the beginning of some lines. username_0: Also, looking at the railroad diagram I can see that the duplicated rules for `attnamelist` don't match between `local` and `global`:
```
"local" attnamelist (":" typelist)? ("=" explist)? |
...
"global" attnamelist ":" typelist ("=" explist)? |
"global" attnamelist "=" explist |
```
Is this intended? username_1: Yes, the rules for "global" are stated more strictly because it is an all-new Teal construct. The less strict interpretation for "local" is used when the Teal compiler is processing code in lax mode, which is a transitional mode of operation when converting `.lua` code into `.tl` code. It is common to have grammar descriptions that describe a language larger than the set of valid programs, because entire classes of errors are caught by phases later than parsing. It is also common to leave to later phases some classes of errors that _could_ be detected via parsing, for the sake of making a parser simpler to implement or the grammar easier for humans to read.
Ultimately, the goal of the Teal grammar in the documentation is the same as that of the one in the Lua reference manual: neither of them represents its respective parser as implemented; rather, they are intended as a reader-friendly way of giving a formal description of the syntax. Teal's has the additional intention of serving as a "diff" from the one in the Lua reference manual, so keeping its form close to the one in the Lua manual is important. For this reason, I still don't think we should maintain alternative grammars in the documentation (it is already enough work to keep it up-to-date with the parser!), nor change it to make it more suitable for machine consumption at this time, so I'm closing this issue. Hope this clarifies! Status: Issue closed username_0: Where in the grammar is this `{Error}` https://github.com/teal-language/tl/blob/ef4fcda5ba0a0c83d2a4dbfac67baf186e8c4252/tl.tl#L6448 represented? username_1: The line you linked is not part of the parser: these are not parse errors, but errors from a later semantic phase, so it has nothing to do with syntax or the grammar. But in any case, as a general comment unrelated to the implementation of Teal: in parsing theory, errors are not usually represented in grammars. Instead, anything that is _not_ described by the grammar is an error; in other words, errors are represented by omission. In some parsers implementing certain kinds of grammars, you will see explicit representations of errors (see for example labels in lpeglabel grammars), but that's a different thing.
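The "errors by omission" point can be illustrated with a toy recursive-descent parser (this is a generic sketch, not Teal's parser): the grammar `B ::= ( '(' B ')' )*` never mentions errors anywhere, yet every string it does not describe is rejected simply because no production matches it:

```javascript
// Toy grammar: B ::= ( '(' B ')' )*  -- balanced parentheses.
// No error rule exists; an input is an error purely by omission.
function parseBalanced(input) {
  let pos = 0;
  function b() {
    // Match '(' B ')' as long as an opening paren is next.
    while (input[pos] === '(') {
      pos++;
      b();
      if (input[pos] !== ')') throw new Error(`expected ')' at ${pos}`);
      pos++;
    }
  }
  b();
  // Leftover input means no production covered it: an error by omission.
  if (pos !== input.length) throw new Error(`unexpected '${input[pos]}' at ${pos}`);
  return true;
}

console.log(parseBalanced('(())()')); // described by the grammar
```

Anything outside the described language, such as `)(` or an unclosed `(`, throws, even though the grammar itself contains no notion of an error.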
onfido/api-javascript-client
406270555
Title: Latest version not published to NPM Question: username_0: The latest available version on npm is 1.0.1: https://www.npmjs.com/package/onfido; however, the current version on GitHub is 1.5.0. I'm getting around this for now by installing via `yarn add onfido/api-javascript-client`. Answers: username_1: The version on npm doesn't look like it is being maintained by onfido. Using the recommended install path of the git repository is really bad; the major version bump broke our code without warning because it just pulls the latest from master. Please can you set up an org on npm and publish legit, versioned packages? It would really help. Thanks Status: Issue closed username_2: Hi @username_0, the latest version is now on npm. Hi @username_1, sorry about that, I wasn't aware that installing from GitHub would automatically pull in breaking changes. I've added a couple of older versions (1.5.0, 1.6.0) to npm as well; hopefully you can use those and fix your code. I've updated the README and will publish a patch version tomorrow so it's updated on npm. In the future, newer versions will be published to npm automatically.
editor-extensions/vscode-insert-special-symbol
707228240
Title: How to use this extension? Question: username_0: Hello, can you provide more detailed documentation on how to use this extension? I tried selecting pi, autocompleting, and typing it into the Ctrl+P box, but nothing replaced it with the Greek pi. Answers: username_1:
1. Type theta, then select it
2. Press Ctrl+Shift+P
3. Select the insert special character command
etiennestuder/gradle-credentials-plugin
1165305249
Title: How to set credentialsPassphrase for build.gradle Question: username_0: Hi, I am trying to get an encrypted value from the credentials plugin. To do this I add a secret value with `gradlew addCredentials --key keyPassword --value 'asdf@|!sdf' -PcredentialsPassphrase='<PASSWORD>'`. The entry is in the file ~/.gradle/gradle.E6EDACFABAB055B8D9EEC4D4E33EF714.encrypted.properties. To get it back in my build.gradle file I use `String password = credentials.forKey('keyPassword')` and get "null". If I add keyPassword without credentialsPassphrase, it works well. How can I configure the plugin to read the credentialsPassphrase from a file or from the environment? I read this: `If a custom passphrase is passed through the credentialsPassphrase project property when starting the build, the credentials container is initialized with all credentials persisted in the passphrase-specific GRADLE_USER_HOME/gradle.MD5HASH.encrypted.properties where the MD5HASH is calculated from the specified passphrase.` But the big question is: where do I set it? The build runs inside a Jenkins instance. Next question in the same direction: how do I work with multiple projects on my build machine? I would like to use something like `project1.credentialsPassphrase='<PASSWORD>'` and `project2.credentialsPassphrase='<PASSWORD>'`. Is this possible? I don't want to store the credentialsPassphrase inside a build.gradle file that lives in git. Regards Stephan
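Not an answer from the plugin's maintainer, but one standard Gradle mechanism fits the Jenkins case described above: Gradle maps any environment variable named `ORG_GRADLE_PROJECT_<prop>` onto the project property `<prop>`, so the passphrase can come from a Jenkins credential binding instead of a file in git. A sketch, with a made-up placeholder value (whether this plugin reads the property early enough in the build is something to verify):

```shell
# Hypothetical value -- in Jenkins, inject this from a stored credential
# rather than hard-coding it in the job. Gradle exposes it to the build
# as the project property 'credentialsPassphrase', exactly as if it had
# been passed on the command line with -PcredentialsPassphrase=...
export ORG_GRADLE_PROJECT_credentialsPassphrase='example-passphrase'

# Per-project passphrases then come from per-job environments, so
# nothing secret is committed to the repository.
echo "passphrase is set: ${ORG_GRADLE_PROJECT_credentialsPassphrase:+yes}"
```

A `./gradlew build` launched from the same shell would then see `credentialsPassphrase` as a project property without it appearing in any checked-in file.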
InnovaLangues/CollecticielBundle
86593675
Title: Add a tab Question: username_0: In the "Teacher" view, a tab needs to be added in order to see the learner's collecticiel. Status: Issue closed Answers: username_0: See https://github.com/InnovaLangues/CollecticielBundle/issues/40 and https://github.com/InnovaLangues/CollecticielBundle/issues/38
react-navigation/react-navigation
352218460
Title: Wrong animation for BottomTabNavigator after closing a modal screen from StackNavigator container Question: username_0: ### Current Behavior **Problem:** I have a StackNavigator with `mode: modal` that holds a TabNavigator and a standalone screen. Navigating between tabs works properly, but after I navigate to the standalone modal screen and back, the first tab navigation uses the modal animation. ![ezgif-3-a7808cee85](https://user-images.githubusercontent.com/6773794/44354861-55155d00-a481-11e8-84bb-374245065e3b.gif) ### Expected Behavior TabNavigator should use the non-modal animation at all times. ### How to reproduce
```
const TabNavigator = createBottomTabNavigator({
  TabA,
  TabB,
});

const StackNavigator = createStackNavigator({
  TabNavigator,
  ModalScreen,
}, {
  mode: 'modal'
});
```
### Your Environment
| software | version |
| ---------------- | ------- |
| react-navigation | ^2.2.2 |
| react-native | 0.55.4 |
| node | v6.11.1 |
| npm or yarn | yarn 1.3.2 |

Answers: username_1: hi! Thanks for reporting. It is unclear whether `TabA` and `TabB` are screen components or stack navigators (that may be a workaround). Can you create a reproducible demo on https://snack.expo.io/? username_0: Hey, @username_1, thanks for replying. They are other stack navigators, yes! I've tried to reproduce my problem as best as I could on Snack, but it doesn't seem to happen there: https://snack.expo.io/SytyuRY87 What I have tried so far: - Switching between `goBack()`, `goBack(null)` and `goBack(navigation.state.key)` inside the Header component; - Removing `{...headerProps}` from the Header component; - Removing `LayoutAnimation.configureNext` inside the ModalScreen (I animate a View inside the ModalScreen when the keyboard appears / disappears). - Setting `mode: card` inside TabNavigator, TabA and TabB. username_2: hey @username_0! that's indeed quite strange!
please ping me again when you manage to get it reproducing in a minimal setup (snack or otherwise) username_3: I've seen the same thing before, and it's usually caused by a component that calls LayoutAnimation.configureNext. @username_0 In your case, do you have an event listener on the keyboard that triggers LayoutAnimation? I updated your example to force the glitch to occur. (Just make sure you have triggered the modal before the timer on the green page kicks in.) https://snack.expo.io/Hk57Y4yT7 username_2: Nothing we can do here about LayoutAnimation, unfortunately; it's a global API and it'll impact any state change in the frame that it's called in. I'd recommend staying away from it if this is a concern for you Status: Issue closed username_0: Thanks for the reproduction, @username_3! We are still having this issue and, at least for now, it's in the backlog since it's not that big of a deal. A shame to see it won't have a fix, since it's not a proper bug, but... at least now I can explain it better when it comes up again, and explain that we have to tone down LayoutAnimation use.
ticgal/gapp
1170949680
Title: Not Found when changing the status via the GAPP app Question: username_0: When opening a ticket and trying to change its status, the message "Not Found" is shown on the screen. https://user-images.githubusercontent.com/98896237/158588632-5592ef2e-2117-497f-8f3b-b53890a12309.mov Steps to reproduce the behavior:
1. Log in to GAPP
2. Open any ticket
3. In the top right corner, try to change the status
4. "Not Found" appears in the center of the screen and the status is not changed
lucasmafra/type-dynamo
376387231
Title: aws-sdk as dev dependency Question: username_0: Is there any special reason for aws-sdk being a direct dependency? This increases my bundle size significantly, since webpack is not able to exclude a dependency of a dependency properly. I've done some workarounds, but it would be much easier if aws-sdk were included as a devDependency or peerDependency. An example, my code:
```
import { APIGatewayEvent, Context, Callback } from 'aws-lambda'
import { TypeDynamo } from 'type-dynamo'

export const typeDynamo = new TypeDynamo({
  region: process.env.REGION || 'us-east-1'
})

export async function teste(event: APIGatewayEvent, context: Context, callback: Callback) {
  try {
    console.log(typeDynamo)
    callback(undefined, {})
  } catch (err) {
    callback(undefined, { err })
  }
}
```
And this is my bundle: <img width="1399" alt="captura de tela 2018-11-01 as 10 03 26" src="https://user-images.githubusercontent.com/12135688/47853863-e167c880-ddbe-11e8-9509-f24e15ac0bfd.png"> Answers: username_1: Hi @username_0, TypeDynamo is a wrapper around the AWS SDK and its implementation relies on it to provide all of its features, which means that it needs the AWS SDK to run not only in **development** mode but also in **production** (and please note that, for a library, production code refers to the code generated after the build process that is distributed through npm, regardless of any kind of application that the users of the library might employ it in), and therefore it does not make sense to consider it a **dev** dependency (if you take a look at the project's dev dependencies you will see they are only there for development and test purposes). I'm aware that the AWS SDK is not a lightweight library and maybe there's room for optimization (i.e. TypeDynamo could depend only on the Dynamo-specific parts instead of the whole SDK), but it is inevitable that the AWS SDK **must** be available in the Node.js application where TypeDynamo is running.
Regarding your specific use case, I don't know what in your webpack configuration is leading your bundle to have multiple instances of aws-sdk, but webpack has built-in features to avoid that kind of scenario, such as [Tree Shaking](https://webpack.js.org/guides/tree-shaking/). username_1: Also, if you're running on Lambda, I recommend you take a look at this TypeDynamo [example application](https://github.com/username_1/type-dynamo-examples/tree/master/serverless-todo-application). username_0: I'm aware of the need for aws-sdk, and I'm also aware that TypeDynamo is not an "AWS Lambda only" library, but it makes no sense to bundle it and deploy it to Lambda, since AWS exposes the aws-sdk for you inside their Node environment; you are just duplicating code. If you want a solution that embraces all environments, you can consider declaring aws-sdk as a [peerDependency](https://nodejs.org/en/blog/npm/peer-dependencies/); this forces the application to have aws-sdk installed (either as a devDependency or a regular dependency). Regarding my specific use case, this is a pretty common problem with the serverless-webpack library (https://github.com/serverless-heaven/serverless-webpack/issues/306); as I mentioned, there are some workarounds, but I have not found a final solution. Also, I already have a project set up, running on Lambda; the bundle I printed was generated using webpack/tsconfig files similar to the example application's. I tried just replacing my config files with yours and I still get the same bundle sizes.
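The peerDependency arrangement suggested above would look roughly like this in the library's package.json (the version ranges are illustrative, not TypeDynamo's actual manifest): the library stops shipping aws-sdk itself and instead declares that the consuming application must provide it, which also matches the Lambda runtime where aws-sdk is preinstalled:

```json
{
  "name": "type-dynamo",
  "peerDependencies": {
    "aws-sdk": "^2.0.0"
  },
  "devDependencies": {
    "aws-sdk": "^2.354.0"
  }
}
```

The devDependencies entry keeps the SDK available for the library's own tests and local development, while bundlers targeting Lambda can safely mark `aws-sdk` as external.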
appium/appium
75773763
Title: Instrument launching time out when app has Question: username_0: If the app requires some access permission when first launched (e.g. location), then the test doesn't get started and times out. This happens often, but not always. Simulator iOS version: 8.3, noReset: false. Here is the Appium log:
```
warn: Instruments socket client never checked in; timing out (global)
info: [debug] Killall instruments
info: [debug] [INSTSERVER] Instruments exited with code null
info: [debug] Killall instruments
info: [debug] Instruments never checked in
info: [debug] Attempting to retry launching instruments, this is retry #1
info: [debug] Killall iOS Simulator
```
Status: Issue closed Answers: username_1: This is a duplicate of #4178 and a known bug in Apple Instruments. The workaround is to put a delay into your app before requiring the service which generates the alert.
uiowa/uiowa
1168783221
Title: Consistent field_tags widget across platform Question: username_0: Follow-up from https://github.com/uiowa/uiowa/issues/4042 For consistency, other field_tags fields should be updated.
- All media types
- https://github.com/uiowa/uiowa/blob/main/config/features/event/field.field.node.event.field_tags.yml
- https://github.com/uiowa/uiowa/blob/main/config/features/topic_page/field.field.node.topic_collection.field_topic_collection_tags.yml
- Grad - https://github.com/uiowa/uiowa/blob/main/config/sites/grad.uiowa.edu/field.field.node.student_profile.field_tags.yml
rslint/rslint
812939414
Title: Is there a way to configure 'ignored' directories list? Question: username_0: Hi, me again here. I was experimenting with rslint, but after compiling TypeScript to JS, rslint tries to lint the `dist` directory too. In the documentation, I was unable to find information on how to set an ignored-directories list like `.eslintignore` in ESLint. I would love to open a pull request, but I am still too new to Rust (I practically only know how to print "hello world") and I will wait for a new version if there is no option to ignore it. Thanks for all Answers: username_1: Currently there isn't, but I am planning to add either a `.rslintignore` file, an ignored key in the config, an ignore flag, or all of those. username_2: That's definitely a good option. Does eslint also ignore files/folders that are inside the `.gitignore`? That sounds like a good idea username_0: As far as I know, instead of using `.gitignore` itself they configure `.eslintignore` separately username_2: @username_0 Do you think it's a good idea to enable `.gitignore` support by default, and still have an extra `.rslintignore`? username_0: I think support for `.gitignore` is better enabled by putting `.gitignore` in the `.rslintignore` file (something like extending other ignore configs), because there are some js test files that are gitignored but need to be linted too username_3: Is the `.rslintignore` feature being worked on right now? If not, can I help contribute? username_2: I'm currently working on it, and I already have a working prototype that just needs some cleaning. However, I do not have much time right now, so it might take a few days Status: Issue closed
Krypton-Suite/Standard-Toolkit
744761170
Title: [Feature Request]: TextBox item for KryptonContextMenu-Items Question: username_0: Is it possible to create a TextBox item for KryptonContextMenu, like the ComboBox one? Thanks for your great work! Answers: username_1: Hi @username_0, I think I've already added this in the extended toolkit in the `Krypton.Toolkit.Suite.Extended.Tool.Strip.Items` module. username_2: @username_0 Is this available in the normal Visual Designer, without having to add a "third-party / your own" control?
zeplin/zeplin-extension-documentation
307550259
Title: Mode or language? Question: username_0: According to docs, there is `mode` property in [CodeExportObject](https://github.com/zeplin/zeplin-extension-documentation/blob/master/model/extension.md#CodeExportObject) but you use `language` property in [react-native-extension example](https://github.com/zeplin/react-native-extension/blob/ad8ef2940c528a46dd6091811fa344bd3cbd401f/src/index.js#L13-L16). What should I use? Is it related to this [saving issue](https://github.com/username_0/zepcode/issues/16)? Answers: username_1: @username_0 Sorry for the confusion! The name of that property was `mode` initially but we thought that `language` is semantically better. Actually, both of them are supported but you can consider `mode` as deprecated. Use `language` to be on the safe side. I just updated the docs. Thanks for reporting this! Status: Issue closed
ronisbr/TextUserInterfaces.jl
607168861
Title: Problems running examples in Julia 1.3 Question: username_0: Running the `windows_and_widgets` example gives:
```
ERROR: LoadError: UndefVarError: @connect_signal not defined
in expression starting at REPL[4]:45
```
Running the tic tac toe example:
```
julia> using TextUserInterfaces.NCurses
ERROR: UndefVarError: NCurses not defined
```
Version info:
```
julia> versioninfo()
Julia Version 1.3.1
Commit 2d<PASSWORD> (2019-12-30 21:36 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, broadwell)
```
Answers: username_0: Actually, it seems like the registered version is `v0.0.1`, which is about 104 commits behind master. Upgrading to master fixes all the problems. Maybe it's time to tag a new release? username_1: Hi @username_0, there is still a long way to go until a stable version with the API I want. This package should be considered stable only if you want to use NCurses directly. In that case, it should be working pretty well. However, I do not want to tag a new release until I finish the API design and documentation. The goal is to have a package that provides an easier way to build TUIs than using NCurses directly. If you want to try the examples for now, you will need to use the development version. Note that I would really appreciate help to build the API and test the code. However, there are no docs, so you will need to look at the code if you want to take this path :) Status: Issue closed
Azure/azure-cli
279142092
Title: vmss: default logic for handling 100+ instances is broken Question: username_0: The command `az vmss create --resource-group vmsssuccess --name vmsssuccess --image UbuntuLTS --disable-overprovision --instance-count 101` fails with the error `Required property 'name' expects a value but got null. Path 'properties.template.resources[2]', line 1, position 2986.'.` Looking at the [code](https://github.com/Azure/azure-cli/blob/dev/src/command_modules/azure-cli-vm/azure/cli/command_modules/vm/custom.py#L1922), the `app_gateway` clearly can be None Status: Issue closed Answers: username_0: dupe of #5033
mapillary/inplace_abn
488868261
Title: Cannot compile on Win10 + PyTorch 1.0 + VS2017 + Cuda 10.0.130 Question: username_0: Has anyone been able to compile it on Win10 + PyTorch 1.0 + VS2017 + Cuda 10.0.130? When I compile it I get the error message below:
-----------------------------------------------------------------------------------------------
src\inplace_abn_cpu.cpp(15): warning C4244: 'return': conversion from 'int64_t' to 'int32_t', possible loss of data
src\inplace_abn_cpu.cpp(130): error C2440: 'initializing': cannot convert from 'c10::ScalarType' to 'const at::Type &'
src\inplace_abn_cpu.cpp(130): note: Reason: cannot convert from 'c10::ScalarType' to 'const at::Type'
src\inplace_abn_cpu.cpp(130): note: No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
error: command 'D:\\VS2017\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX64\\x64\\cl.exe' failed with exit status 2
---------------------------------------------------------------------------------------------------
The change log entry for 08 Jan. 2019 says **Requires now PyTorch 1.0**, but the current requirement in the README says "NOTE: our code requires **PyTorch v1.1** or later," and requirements.txt only asks for **torch>=1.0**. So I am confused about whether I still have a chance to compile on Win10 + PyTorch 1.0 + VS2017 + Cuda 10.0.130. Please comment. Status: Issue closed Answers: username_1: @username_0 The references to PyTorch 1.0 are outdated; the code actually requires version 1.1. I'll update the README and requirements files accordingly. As an aside, I would like to point out that, unfortunately, we are unable to provide support for Windows systems. I'm sorry about that.
atomiks/tippyjs
503021722
Title: How to use Tippy with Webpack? Question: username_0: I'm using Laravel-mix, which uses Webpack, and I'm used to requiring libraries like this:
```
var $ = require('jquery');
var time_helper = require('time_helper');
var moment = require('moment');
```
I would LOVE to install Tippy v5 but have been unable to get it working over the past few hours. https://dev.to/iggredible/what-the-heck-are-cjs-amd-umd-and-esm-ikm leads me to think that I need the CJS version. I thought maybe `const tippy = require('tippy.js/dist/tippy.cjs');` and then `tippy(myTooltip, tippySettings);` would work, but there seems to be no effect (and no error). Do you know what I'm doing wrong? Sorry for being such a newbie. Answers: username_1: The default import should work?:
```js
var tippy = require('tippy.js');
```
username_0: Thanks for your quick response. I was just about to reply that I see https://username_1.github.io/tippyjs/faq/ mentions `const tippy = require('tippy.js');`. But it's not working for me. I get no error but no tooltip either. Tippy 2.6 used to work with `const tippy = require('tippy.js');`. username_1: What is
```js
console.log(tippy)
```
username_0: Logging it, I see:
```
Module {…}
  animateFill: (...)
  createSingleton: (...)
  createTippyWithPlugins: (...)
  default: (...)
  delegate: (...)
  followCursor: (...)
  hideAll: (...)
  inlinePositioning: (...)
  roundArrow: (...)
  sticky: (...)
  Symbol(Symbol.toStringTag): "Module"
  __esModule: true
  get animateFill: ƒ ()
  get createSingleton: ƒ ()
  get createTippyWithPlugins: ƒ ()
  get default: ƒ ()
  get delegate: ƒ ()
  get followCursor: ƒ ()
  get hideAll: ƒ ()
  get inlinePositioning: ƒ ()
  get roundArrow: ƒ ()
  get sticky: ƒ ()
  __proto__: Object
```
I'm using "[email protected]", which seems to use "webpack": "^4.27.1". username_1: You need to append `.default`, it seems (v4 was the same):
```js
const tippy = require('tippy.js').default;
```
However, I don't recommend using the CJS version, because tree-shaking doesn't work.
Use ES module imports:
```js
import tippy from 'tippy.js'
```
Status: Issue closed username_0: I'd love to learn how to use ESM with Laravel Mix but haven't been able to figure it out after hours and hours. For now, it seems `const tippy = require('tippy.js').default;` is getting me in the right direction. Thanks! Now I see that I just need to change the settings and the object and whatever else has changed since a few versions ago.
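The `.default` dance above is generic ESM-to-CJS interop, not anything tippy-specific. A self-contained sketch of what a transpiled module's exports object looks like, and why a bare `require` hands back the namespace rather than the callable default export (the module shape here is simulated, not tippy's actual build):

```javascript
// Simulate what a bundler emits for `export default function tippy(){}`
// plus named exports: a namespace object with an __esModule marker.
const fakeTippyModule = {
  __esModule: true,
  default: function tippy(targets) { return `tippy(${targets})`; },
  hideAll: function hideAll() { return 'hidden'; },
};

// `require('some-lib')` hands you the whole namespace object...
const ns = fakeTippyModule;
console.log(typeof ns); // 'object' -- calling ns() directly would throw

// ...so the callable function lives on `.default`:
const tippy = ns.default;
console.log(tippy('.btn')); // tippy(.btn)

// Native ESM `import tippy from '...'` (or Babel's interop helper, which
// checks the __esModule flag) does this unwrapping for you automatically.
```

This is why `require('tippy.js')` silently "does nothing": calling properties like `ns.setDefaultProps` would work, but the module object itself is not the `tippy` function.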
OpenLiberty/ci.maven
495474504
Title: liberty:package cannot build tar.gz Question: username_0: Before `mvn package` can generate .tar.gz file which can be used by the ADD command of Dockerfile (i.e. the archived file can be unzipped to the wlp directory) ``` <plugin> <groupId>net.wasdev.wlp.maven.plugins</groupId> <artifactId>liberty-maven-plugin</artifactId> <configuration> ... <packageFile> ${project.build.directory}/${app.name}.tar.gz </packageFile> </configuration> </plugin> ``` For io.openliberty.tools liberty-maven-plugin 3.0.2-SNAPSHOT, `mvn liberty:package -Dinclude=usr -DpackageName=serviceName.tar.gz` generates serviceName.tar.gz.zip file and it cannot be used by the ADD command of Dockerfile. (i.e. the archived file cannot be unzipped by the ADD command) Expectation: serviceName.tar.gz should be generated and it can be used by the ADD command of Dockerfile. Answers: username_1: After investigating this further, the `server package` command in Open Liberty does not seem to support generating a tar.gz package. The fact that the Liberty Maven plugin did not detect the invalid packageFile name extension before is evidence for why we changed the package goal to have `packageType`, `packageDirectory` and `packageName` parameters instead. The code previously thought it was generating a .zip when the packageFile did not end with .jar, and then simply named the resulting zip archive with the provided packageFile name. Here is the documentation for the Open Liberty package goal (https://openliberty.io/docs/ref/command/#server-package.html). See the description for `--archive`. username_0: When I run the command `server package defaultServer --archive=/Users/gkwan/tasks/CNAI/guides/username_0/guide-getting-started/finish/target/guide-getting-started.tar.gz --include=usr` directly under target, the generated archived is named .tar.gz and it can be consumed by the ADD command. 
But, the output of the `mvn liberty:package -Dinclude=usr -DpackageName=guide-getting-started.tar.gz` showed ``` [INFO] CWWKM2001I: Invoke command is [/Users/gkwan/tasks/CNAI/guides/username_0/guide-getting-started/start/target/liberty/wlp/bin/server, package, defaultServer, --archive=/Users/gkwan/tasks/CNAI/guides/username_0/guide-getting-started/start/target/guide-getting-started.tar.gz.zip, --include=usr]. ``` Look like the change from the plugin instead of the server command. username_1: Did you look at the documentation that I referenced? I didn't say the server command changed. I said it looks like it only supports zip and jar. I believe it must be generating a zip and simply uses the name you provided ending in tar.gz. Is it really a tar.gz file though? Can you find documentation to say that format **is** supported on the `server package` command? username_1: And `packageName` is not supposed to contain the file extension. That comes from the `packageType`. username_0: why only zip and jar? Is .zip most common in Unix/Linux or .tar.gz? Is it possible to support .tar.gz too? username_1: So the online documentation differs from the command line documentation. Here is what the cmd line help says: <img width="563" alt="Screen Shot 2019-09-19 at 10 32 48 AM" src="https://user-images.githubusercontent.com/29436451/65258834-3e4c9a00-dac9-11e9-91e9-1173bc833027.png"> Based on that, and the code I found in the PackageCommand.java in OL shown below, we should add support for `tar` and `tar.gz` in `packageType` on the `package` goal. <img width="321" alt="Screen Shot 2019-09-19 at 9 53 34 AM" src="https://user-images.githubusercontent.com/29436451/65258923-64723a00-dac9-11e9-940d-d6c1ee733fe7.png"> username_1: The `server package` command supports creating a self-extracting jar as well as a runnable jar. We need to add back the support for specifying `include=runnable` to support that. I also see the `server package` command has a new option `server-root`. 
I will add support for that while I'm updating our package goal. <img width="453" alt="Screen Shot 2019-09-20 at 9 32 33 AM" src="https://user-images.githubusercontent.com/29436451/65335216-a06cd400-db89-11e9-91bc-394125042474.png"> Status: Issue closed
FAForever/fa
146398849
Title: Glorious new abuse of mass fabricators (pretty brutal too) Question: username_0: forget megalith egg abuse you can stack the adjacency bonus! woohoooooo! produce percivals for free! (aren't you glad that e clan exists?) http://puu.sh/o8qMg/4b4b291075.jpg http://puu.sh/o8qN7/1579f3ffdc.jpg http://puu.sh/o8qWK/47376b1f34.jpg and finally the best one http://puu.sh/o8u6s/983e91c2e8.jpg actually this one is even better: http://puu.sh/o8uAZ/9f8b7ef273.jpg must be the whole point of engy mod i guess from what i can tell: 1.build fabs first 2.turn off fabs 3.build factory and start production 4.receive free adjacency bonus with turned off fabs! 5.turn on fabs 6.receive double the adjacency bonus :D:D:D:D:D:D oh and also that adjacency bonus still works when the fabs are disabled (not double though) and it seems that the double adjacency is carried onto the future percies so you only need to setup once. graciously discovered AND reported by e clan. #VoteeclanForCouncilors #WeAreTheLeadersYouNeedButNotTheOnesYouDeserve #AllPowerToeClan #HopeForTheBest Answers: username_1: Please don't be in beta... The number of times we've fixed fabs working when turned off... username_2: Both current patch and beta username_0: it looks like the bonuses are being applied when the fac is finished being built and when the fab is enabled. so if you finish building and THEN enable they stack. there should be a check for any existing bonuses when fab is enabled? i dunno username_1: 94f9097c1a7962b91f54e726aa62a4702b25c410 Ta daa. Status: Issue closed
apotdevin/thunderhub
776071694
Title: During any rebalancing action through thunderhub, red box in the UI pops up says "ExpectedFsToRebalance" Question: username_0: **Describe the problem/bug** During any rebalancing action through thunderhub, red box in the UI pops up says "ExpectedFsToRebalance. The exact error in logs is: ``` 2020-12-29 15:23:22 info [THUB]: Rebalance Params: { out_channels: [ [length]: 0 ], avoid: [ [length]: 0 ] } 2020-12-29 15:23:22 error [THUB]: [ 400, 'ExpectedFsToRebalance', [length]: 2 ] ``` **Your environment** * Version of ThunderHub: 12.2 * Deployment method: manual nothing special. all integrated on one machine. **To Reproduce** Steps to reproduce the behavior: 1. Go to rebalance 2. Click on advanced, then rebalance (or any specific rebalancing action 3. Scroll down to '....' 4. See error ExpectedFsToRebalance **Additional context** This suddenly started happening. I have BOS installed and it works fine. Perhaps I have to set the bos account in some way for thunderhub to recognize, but I found no guide on how to do this (ex. on bos I run "bos rebalance --node=<node name>" Answers: username_1: Thanks for opening this issue! Fixed with 3c8fb65975a8e1f65099c1177a500c90426503b4 and will be in the next release username_1: Is now in version 0.12.6 Status: Issue closed
SharePoint/sp-dev-docs
411158179
Title: Creating a tenant section and using the tenant prefix Question: username_0: For the uninitiated, explain what a 'tenant' is, how to confirm status, and how to find what the tenant prefix is. Not everyone knows that 'tenant' is really just an instance of Office 365 and that you need to get it, independent of signing up for the Developer program - say that! Also make the instructions idiot proof, as far as what the tenant prefix is. I spent hours stumbling around before I figured it out. Googling wasn't helpful either. Very aggravating --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 037842f6-772f-456d-6f44-0c46939bcddd * Version Independent ID: 608ab68a-92a1-ca34-5ce6-2ae71691f6e6 * Content: [Set up your Office 365 tenant](https://docs.microsoft.com/en-us/sharepoint/dev/spfx/set-up-your-developer-tenant) * Content Source: [docs/spfx/set-up-your-developer-tenant.md](https://github.com/SharePoint/sp-dev-docs/blob/master/docs/spfx/set-up-your-developer-tenant.md) * Product: **sharepoint** * GitHub Login: @spdevdocs * Microsoft Alias: **spdevdocs** Answers: username_1: Would a link to [this doc](https://docs.microsoft.com/en-us/office365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings) help here? That page does a good job of explaining what a tenant is in relation to user accounts, subscriptions, licenses. This doc does make the assumption you know what that is... so if you did know, does the rest of the doc make sense? It does to me, but I understand it so to improve it, would need to get perspective of someone else. TIA!
numba/numba
43644728
Title: Implement two argument round() Question: username_0: We do not yet support the form `round(x, ndigits)` for float32 or float64. Python (3.4) has two different rounding implementations. One tries to ensure that the string representation of the float has the desired number of digits: https://hg.python.org/cpython/file/6dcc96fa3970/Objects/floatobject.c#l865. The other uses the simpler method of multiplying the value by powers of 10, rounding to an integer, and then dividing by the same factor: https://hg.python.org/cpython/file/6dcc96fa3970/Objects/floatobject.c#l923 We could most easily support the second form in Numba.<issue_closed> Status: Issue closed
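A minimal sketch of the second (power-of-10) approach in plain Python — the helper name is illustrative, and this is not Numba's actual implementation:

```python
def round_ndigits(x, ndigits):
    # Sketch of the simpler CPython fallback strategy: scale by a power
    # of 10, round to the nearest integer (Python 3's round() rounds
    # ties to even), then divide the scale back out.
    pow10 = 10.0 ** ndigits
    return round(x * pow10) / pow10
```

Note that this can differ from CPython's correctly rounded, string-representation-based result for some inputs, which is exactly the trade-off between the two implementations described above.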
pyscripter/python4delphi
754278969
Title: ValueError: set_wakeup_fd only works in main thread Question: username_0:
Traceback (most recent call last):
  File "<string>", line 8, in <module>
  File "C:\Users\545\AppData\Roaming\Python\Python38\site-packages\telethon\client\telegrambaseclient.py", line 245, in __init__
    self._loop = asyncio.get_event_loop()
  File "C:\Users\545\AppData\Local\Programs\Python\Python38\Lib\asyncio\events.py", line 636, in get_event_loop
    self.set_event_loop(self.new_event_loop())
  File "C:\Users\545\AppData\Local\Programs\Python\Python38\Lib\asyncio\events.py", line 656, in new_event_loop
    return self._loop_factory()
  File "C:\Users\545\AppData\Local\Programs\Python\Python38\Lib\asyncio\windows_events.py", line 310, in __init__
    super().__init__(proactor)
  File "C:\Users\545\AppData\Local\Programs\Python\Python38\Lib\asyncio\proactor_events.py", line 632, in __init__
    signal.set_wakeup_fd(self._csock.fileno())
ValueError: set_wakeup_fd only works in main thread

I'm trying to call a Python function from a Delphi thread, and it raises this error; if I call it from a button on the form, everything works fine.
Status: Issue closed
Answers: username_1: Why do you assume this is a P4D bug? Your application crashes because a function that is supposed to be called only from the main thread is called from another thread. Anyway, mixing asyncio with the Delphi message loop is very tricky.
username_0: How can I fix it? Are there any options?
username_1: Sorry, this is a bug tracker, not a "help me fix a problem in my code" forum.
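The traceback shows `asyncio.get_event_loop()` creating a Windows proactor loop inside a worker thread, which installs a signal wakeup fd — something only the main thread may do. A hedged, P4D-independent sketch of one workaround is to create a selector-based loop explicitly in the worker thread (a library that calls `get_event_loop()` internally also needs `asyncio.set_event_loop(loop)` in that thread):

```python
import asyncio
import threading

results = []

def worker():
    # Create a selector-based loop explicitly instead of relying on
    # get_event_loop(); the Windows proactor loop installs a signal
    # wakeup fd, which is only allowed in the main thread.
    loop = asyncio.SelectorEventLoop()
    asyncio.set_event_loop(loop)
    try:
        results.append(loop.run_until_complete(asyncio.sleep(0, result="ok")))
    finally:
        loop.close()

t = threading.Thread(target=worker)
t.start()
t.join()
```

This only demonstrates the general pattern; how well it works with a specific library such as Telethon depends on how that library obtains its event loop.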
corona-warn-app/cwa-app-android
727697002
Title: URSACHE: 4000 Question: username_0: Something went wrong. (Who would have thought?)
error during web request, http status 901
Android 6.0 on Acer T04
![Screenshot_20201022-220532](https://user-images.githubusercontent.com/6900273/96926020-50abbb00-14b5-11eb-8a47-54a05b785528.png)
Answers: username_1: The app cannot reach the servers. Do you get the same message when Wi-Fi is disabled?
username_0: I have no mobile data connection without Wi-Fi (no need, no contract), and my Wi-Fi works perfectly.
username_2: I have received http status 901 before, when I was travelling and had a poor data connection. If this is a home Wi-Fi network, I would try restarting the router and the phone as well. Does the error occur permanently or only sporadically?
username_3: @username_0 I also have a few questions...
- Does the error occur regularly/always? Or did it just appear for the first time?
- Can the phone use Wi-Fi in standby as well (corresponding option enabled in the Wi-Fi settings)?
- UnknownHostException suggests that the DNS server was not reachable... Is anything blocking the connection? For example, anything listed [here](https://www.coronawarn.app/de/faq/?search=timeout#cause9002_timeout) (or here: #998)? Even if no 'timeout' is displayed, the causes mentioned in the linked sources could contribute to this error.
- Does one of these issues help: #984 or #785, or can they narrow down the source of the error?
username_0: The error appeared for the first time and did not occur again yesterday evening. Yes, the phone can use Wi-Fi permanently. No, nothing is blocking it. My notebook reaches all destinations over the same Wi-Fi. I will follow up on the two tips for narrowing down the error, thanks.
username_4: Dear @username_0, Thanks for your posts. Is your reported issue still persistent? If not, it seems to have been a one-off occurrence due to a bad internet connection. In that case, I would suggest closing the issue.
Best wishes, DS Corona-Warn-App Open Source Team
username_4: Duplicate of https://github.com/corona-warn-app/cwa-app-android/issues/785
username_4: Dear @username_0, dear community, I will close this issue now. Thanks for contributing. If necessary, re-open or post in this topic: https://github.com/corona-warn-app/cwa-app-android/issues/785. Best wishes, DS --- Corona-Warn-App Open Source Team Status: Issue closed
pandas-dev/pandas
232356015
Title: BUG: Joining a DataFrame with a PeriodIndex fails Question: username_0: #### Code Sample ```python In [19]: dates = pd.period_range('20100101','20100105', freq='D') In [20]: weights = pd.DataFrame(np.random.randn(5, 5), index=dates, columns = ['g1_%d' % x for x in range(5)]) In [21]: weights.join(pd.DataFrame(np.random.randn(5,5), index=dates, columns = ['g2_%d' % x for x in range(5)])) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-21-2fdb8b02f5a4> in <module>() 1 weights.join( ----> 2 pd.DataFrame(np.random.randn(5,5), index=dates, columns = ['g2_%d' % x for x in range(5)])) /usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in join(self, other, on, how, lsuffix, rsuffix, sort) 4765 # For SparseDataFrame's benefit 4766 return self._join_compat(other, on=on, how=how, lsuffix=lsuffix, -> 4767 rsuffix=rsuffix, sort=sort) 4768 4769 def _join_compat(self, other, on=None, how='left', lsuffix='', rsuffix='', /usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in _join_compat(self, other, on, how, lsuffix, rsuffix, sort) 4780 return merge(self, other, left_on=on, how=how, 4781 left_index=on is None, right_index=True, -> 4782 suffixes=(lsuffix, rsuffix), sort=sort) 4783 else: 4784 if on is not None: /usr/local/lib/python2.7/dist-packages/pandas/core/reshape/merge.pyc in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator) 52 right_index=right_index, sort=sort, suffixes=suffixes, 53 copy=copy, indicator=indicator) ---> 54 return op.get_result() 55 56 /usr/local/lib/python2.7/dist-packages/pandas/core/reshape/merge.pyc in get_result(self) 567 self.left, self.right) 568 --> 569 join_index, left_indexer, right_indexer = self._get_join_info() 570 571 ldata, rdata = self.left._data, self.right._data /usr/local/lib/python2.7/dist-packages/pandas/core/reshape/merge.pyc in _get_join_info(self) 720 join_index, 
left_indexer, right_indexer = \ 721 left_ax.join(right_ax, how=self.how, return_indexers=True, --> 722 sort=self.sort) 723 elif self.right_index and self.how == 'left': 724 join_index, left_indexer, right_indexer = \ TypeError: join() got an unexpected keyword argument 'sort' ``` It seems the sort kwarg is invalid, but the internals are passing it in regardless #### Output of ``pd.show_versions()`` <details> In [22]: pd.show_versions() /usr/local/lib/python2.7/dist-packages/xarray/core/formatting.py:16: FutureWarning: The pandas.tslib module is deprecated and will be removed in a future version. [Truncated] bottleneck: 1.2.0 tables: None numexpr: 2.6.2 feather: None matplotlib: 2.0.1 openpyxl: None xlrd: None xlwt: 1.2.0 xlsxwriter: None lxml: None bs4: None html5lib: 0.999999999 sqlalchemy: 1.1.9 pymysql: None psycopg2: None jinja2: 2.9.6 s3fs: None pandas_gbq: 0.1.6 pandas_datareader: None </details> Answers: username_1: this fixes if you can do a PR ``` diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py index 15fd9b7..50d5958 100644 --- a/pandas/core/indexes/period.py +++ b/pandas/core/indexes/period.py @@ -919,14 +919,16 @@ class PeriodIndex(DatelikeOps, DatetimeIndexOpsMixin, Int64Index): self[loc:].asi8)) return self._shallow_copy(idx) - def join(self, other, how='left', level=None, return_indexers=False): + def join(self, other, how='left', level=None, return_indexers=False, + sort=False): """ See Index.join """ self._assert_can_do_setop(other) result = Int64Index.join(self, other, how=how, level=level, - return_indexers=return_indexers) + return_indexers=return_indexers, + sort=sort) if return_indexers: result, lidx, ridx = result ``` obviously need some more tests on the index join methods as well :> Here is the tests for datetimes in ``pandas/tests/indexes/datetimes/test_datetimes.py`` need to do something like this in periods/test_period.py ``` def test_join_self(self): index = date_range('1/1/2000', periods=10) kinds = 'outer', 
'inner', 'left', 'right' for kind in kinds: joined = index.join(index, how=kind) assert index is joined ``` username_1: if you can do this in next day or 2 can get into 0.20.2 (end of week) username_2: @username_1 Is this issue still open? username_0: PR waiting here: https://github.com/pandas-dev/pandas/pull/16586 username_3: So I used my version 0.20.1 to add the fix suggested above by @username_1 , and that fixed the problem for me, but then a different one cropped up. Not sure if I should just put this in a different issue. In my use case, I took dates and made them a monthly period, and there are duplicates. Here is a way to make it happen: ``` perindex = pd.period_range('2016-01-01', periods=16, freq='M') perdf = pd.DataFrame([i for i in range(len(perindex))], index=perindex, columns=['pnum']) df2 = pd.concat([perdf, perdf]) perdf.merge(df2, left_index=True, right_index=True, how='outer') ``` This gives this sequence of errors: ``` TypeError Traceback (most recent call last) <ipython-input-45-a9a1ea5d6a78> in <module>() 1 df2 = pd.concat([perdf, perdf]) ----> 2 perdf.merge(df2, left_index=True, right_index=True, how='outer') C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\frame.py in merge(self, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator) 4818 right_on=right_on, left_index=left_index, 4819 right_index=right_index, sort=sort, suffixes=suffixes, -> 4820 copy=copy, indicator=indicator) 4821 4822 def round(self, decimals=0, *args, **kwargs): C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\reshape\merge.py in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator) 52 right_index=right_index, sort=sort, suffixes=suffixes, 53 copy=copy, indicator=indicator) ---> 54 return op.get_result() 55 56 C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\reshape\merge.py in get_result(self) 567 self.left, self.right) 568 --> 569 join_index, left_indexer, right_indexer 
= self._get_join_info() 570 571 ldata, rdata = self.left._data, self.right._data C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\reshape\merge.py in _get_join_info(self) 720 join_index, left_indexer, right_indexer = \ 721 left_ax.join(right_ax, how=self.how, return_indexers=True, --> 722 sort=self.sort) 723 elif self.right_index and self.how == 'left': 724 join_index, left_indexer, right_indexer = \ C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\indexes\period.py in join(self, other, how, level, return_indexers, sort) 927 928 result = Int64Index.join(self, other, how=how, level=level, --> 929 return_indexers=return_indexers, sort=sort) 930 931 if return_indexers: C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\indexes\base.py in join(self, other, how, level, return_indexers, sort) 2995 else: 2996 return self._join_non_unique(other, how=how, -> 2997 return_indexers=return_indexers) 2998 elif self.is_monotonic and other.is_monotonic: 2999 try: C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\indexes\base.py in _join_non_unique(self, other, how, return_indexers) 3076 left_idx, right_idx = _get_join_indexers([self.values], 3077 [other._values], how=how, [Truncated] C:\Anaconda3\envs\py36\lib\site-packages\pandas\core\algorithms.py in sort_mixed(values) 469 str_pos = np.array([isinstance(x, string_types) for x in values], 470 dtype=bool) --> 471 nums = np.sort(values[~str_pos]) 472 strs = np.sort(values[str_pos]) 473 return _ensure_object(np.concatenate([nums, strs])) C:\Anaconda3\envs\py36\lib\site-packages\numpy\core\fromnumeric.py in sort(a, axis, kind, order) 820 else: 821 a = asanyarray(a).copy(order="K") --> 822 a.sort(axis=axis, kind=kind, order=order) 823 return a 824 pandas\_libs\period.pyx in pandas._libs.period._Period.__richcmp__ (pandas\_libs\period.c:12067)() TypeError: Cannot compare type 'Period' with type 'int' ``` Let me know if I should open up a new issue, given that this bug happens when applying the above fix. 
username_0: Do you get the error when running on that PR? If so, I would open a new issue. username_3: @username_0 I did a hand edit of pandas 0.20.1 to implement what is in the PR, and got the error. To test it against all PRs, I think I'd need that PR to be merged into master, and then I can pull master and test. username_0: Great! FYI, you can pull someone's PR locally for convenience, rather than hand-editing. Status: Issue closed
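Until a fixed build is available, one possible workaround (a sketch using only public pandas API; the column names follow the example above) is to join via a `DatetimeIndex` and restore the `PeriodIndex` afterwards:

```python
import numpy as np
import pandas as pd

dates = pd.period_range('20100101', '20100105', freq='D')
left = pd.DataFrame(np.random.randn(5, 2), index=dates, columns=['g1_0', 'g1_1'])
right = pd.DataFrame(np.random.randn(5, 2), index=dates, columns=['g2_0', 'g2_1'])

# Workaround sketch: convert the PeriodIndex to timestamps, join on the
# resulting DatetimeIndex (which supports the sort kwarg), then convert back.
left_ts = left.copy()
left_ts.index = left.index.to_timestamp()
right_ts = right.copy()
right_ts.index = right.index.to_timestamp()

joined = left_ts.join(right_ts)
joined.index = joined.index.to_period('D')
```

This sidesteps `PeriodIndex.join` entirely, at the cost of a round-trip conversion of the index.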
ConsumerDataStandardsAustralia/standards-maintenance
708674010
Title: Performance Requirements - Improvement Question: username_0: **Description** Xero's and other high volume data recipients would see a degradation in service unless Performance Requirements are improved **Area Affected** Performance Requirements **Change Proposed** Xero has concerns regarding the Performance Requirements published in the Data Standards section available on GitHub. As a cloud-based online accounting software provider, timely access to transactional data is critical to the customer experience and is an ingrained customer expectation. Currently Xero makes the majority of customers’ daily transactional data available before 9am the following day. Based on the published performance requirements, the volume of Xero customer transactions and a potential 4-8 hour processing window, this customer expectation could no longer be met. Although this is Xero’s specific use case, other entities requiring large volumes of data accessed within a similar processing window could experience the same challenges. Xero has been experiencing performance issues with the UK’s Open Banking APIs and has been working with the OBIE and banks directly for months to ensure we're aligned on customer expectations and minimal performance standards. Xero would like to see the CDR avoid similar issues and is recommending a change in the CDRs Performance Requirements as shown below. Please note Xero’s proposal would only require these NFRs apply to the Banks referred to in the rules as “Initial Data Holders”, due to the large volume of transactions they generate. Please see the attached document for details on the proposed NFRs. [GitHub Submission Change Request - Submitted.docx](https://github.com/ConsumerDataStandardsAustralia/standards-maintenance/files/5280930/GitHub.Submission.Change.Request.-.Submitted.docx) Xero welcomes further engagement with Data61 to progress this proposed change. 
Answers: username_1: The above TPS seems to be low when all API endpoints are taken into consideration for all ADRs combined.
moo-ai/moo-ai.github.io
436983879
Title: [FATAL][2019-04-25 02:36:30] The online openlab deployment <slave> has Down, Please recovery asap! Question: username_0: For recover the ENV, you need to do the following things manually. The target node otc-openlab-nodepool in slave deployment is failed to be accessed with IP 192.168.211.84. Have a try: ssh [email protected] And try to login the cloud to check whether the resource exists.<issue_closed> Status: Issue closed
AugurProject/augur
788683875
Title: graph addLiquidity transaction share price doesn't add up Question: username_0: Formula I'm using to calc user's average price of shares from add liquidity, in this case user ends up with Yes Shares after adding liquidity to 60/40 Yes/No market My guess is yesShareCashValue get mixed up with noShareCashValue (no share - yes shares) / yesShareCashValue `(15000000 - 9999999) / 3999999.36` = `1.25` ![Screen Shot 2021-01-18 at 21 46 40](https://user-images.githubusercontent.com/3970376/104985661-14fe8f80-59d7-11eb-95d4-240e0331cbdf.png)
CSC-308/image-aggregate
868153668
Title: Create new Collection Question: username_0: User should be able to create a new Collection from the Collections page. Status: Issue closed Answers: username_0: Finished 4/30. Took approximately 1 hour. username_0: User should be able to create a new Collection from the Collections page. username_0: Currently only implemented in the frontend. Going to leave open until also implemented in backend. Status: Issue closed
OpenAPITools/openapi-generator
440777859
Title: [BUG] Javascript babel, weird output for comment block Question: username_0: #### Bug Report Checklist - [x] Have you provided a full/minimal spec to reproduce the issue? - [x] Have you validated the input using an OpenAPI validator ([example](https://apidevtools.org/swagger-parser/online/))? - [x] What's the version of OpenAPI Generator used? - [x] Have you search for related issues/PRs? - [ ] What's the actual output vs expected output? - [ ] [Optional] Bounty to sponsor the fix ([example](https://www.bountysource.com/issues/66123212-javascript-client-produces-a-wrong-object-for-a-string-enum-type-that-is-used-with-ref)) <!-- Please follow the issue template below for bug reports. Also please indicate in the issue title which language/library is concerned. Eg: [BUG][JAVA] Bug generating foo with bar --> ##### Description <!-- describe what is the question, suggestion or issue and why this is a problem for you. --> Using the javascript client module, when using `npm run build`, the first block comment in the files in `src` get distorted with a lot of blank spaces and renders navigation of the code really slow. Initial block comment: ``` /** * Custom API * Custom API * * OpenAPI spec version: 0.0.1 * * * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech). * https://openapi-generator.tech * Do not edit the class manually. * */ ``` Block comment modified by babel and code added: ``` "use strict"; Object.defineProperty(exports, "__esModule", { value: true }); var _typeof = typeof Symbol === "function" && typeof Symbol.iterator === "symbol" ? function (obj) { return typeof obj; } : function (obj) { return obj && typeof Symbol === "function" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
"symbol" : typeof obj; }; var _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }(); /** * Custom API * Custom API * * OpenAPI spec version: 0.0.1 * * * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech). * https://openapi-generator.tech * Do not edit the class manually. * */ [Truncated] <!-- unambiguous set of steps to reproduce the bug.--> After generating the module: ```bash cd custom_api_javascript npm install npm run build ``` ##### Related issues/PRs <!-- has a similar issue/PR been reported/opened before? Please do a search in https://github.com/openapitools/openapi-generator/issues?utf8=%E2%9C%93&q=is%3Aissue%20 --> None ##### Suggest a fix <!-- if you can't fix the bug yourself, perhaps you can point to what might be causing the problem (line of code or commit), or simply make a suggestion --> Remove `.babelrc`, but it might be used for other things. Answers: username_1: What about using the file post-processing hook to format the JS source code as part of the code generation? username_0: Do you mean replacing `prepack` by `prepare` in package.json ? That would be nice, although I am not sure how it relates to the space characters issue. It seems to be specifically related to the `env` preset in `.babelrc`, but it can also be solved by addind an empty line at the top of the files.
alejandroautalan/pygubu
821513040
Title: Tool-tips would be a good addition Question: username_0: This app is really good. What do you think about adding tooltips to the fields? For instance, I don't know what is padx and ipadx. With a tooltip, this would be much easier. ![image](https://user-images.githubusercontent.com/1277920/109876581-b91f5b80-7c50-11eb-9159-57ea4cc8e779.png) Answers: username_1: Hi Leandro, Yes, it is a good idea. Even I forget what some properties are for. I will try to add this in the next version of pygubu-designer. Regards <NAME>. username_0: Hey, not really related to this issue here. But I was able to redesign a window in one day of work, from scratch. Your tool is really good @username_1 It was the first time doing GUI. You can see the pictures here. https://github.com/Gasman2014/KiCad-Diff/pull/45 username_1: Hello, Thanks for your words! I am glad that pygubu is useful to you. Greetings! <NAME>.
baidu/san
705540250
Title: Provide a directive similar to :is Question: username_0: When each item in a list needs to decide its presentation dynamically, the child component type has to be determined dynamically. In Vue this is done with the `v-bind:is` directive, for example:
```html
<component v-bind:is="currentTabComponent"></component>
```
The equivalent in san is the `getComponentType(aNode: ANode)` method, but it has two problems:

1. It is too dynamic. An SSR implementation cannot tell which DOM node might be a component, unless `getComponentType()` is assumed to be data-independent (in practice it usually depends on data, e.g. a template/type field on each item of a data list that identifies the component type).
2. It is somewhat cumbersome to use, and the API is rather hidden. Writing it as a directive is usually simpler.

Answers: username_1: Let's discuss the form. I see two points:

1. It should be possible to choose the child component from data via an expression. That part seems clear.
2. Do we also need a convenient static form?

For example, suppose there is a child component called sub-component:

If we design it as `s-is="name"`, that satisfies point 1 but not point 2, because for an explicit child component I would have to write `s-is="{{'sub-component'}}"`.

If we design it as `s-is="sub-component"`, then to use an expression I would have to write `s-is="{{name}}"`.

username_0: Wouldn't it be better to stay consistent with ordinary attributes, i.e. the `s-is="{{name}}"` form?
username_2: 👍
username_1: Looks like @username_2 supports the directive. What about you, @username_0?
username_0: @username_1 I have no strong opinion; either is fine.
username_2: ### Reasons for `s-is`

Most scenarios that need a dynamic component evaluate an expression, so `s-is="name"` keeps things simple.

### Reasons for `is`

`is` and `slot` are both custom-element concepts, so consistency should be considered.

username_1: That feels like it could conflict, although `slot` already conflicts.
username_1: So the question becomes: do we want to support mixing the san component system with custom elements? Do we want to support outputting an HTML element that is actually a re-selected custom element?
username_3: I don't feel `is` is a directive. Directives change the compiler's behavior (flow), while `is` and `slot` are mostly about the component itself, so `is` should stand on its own, like `slot` does today.
username_4: In today's community, how common is it to use custom elements via `is`? Given that in practice everyone resets nearly all styles, I expect `is` would be used rarely, only for a few elements like `<a>` and `<input>`.
username_1: Using custom elements at all already feels rare (laughing through tears) @username_4
Status: Issue closed
dotnet/machinelearning
365067994
Title: Static pipe doesn't validate sufficiently Question: username_0: Here is an example: https://github.com/dotnet/machinelearning/blob/master/test/Microsoft.ML.Tests/Scenarios/Api/CookbookSamples/CookbookSamples.cs#L657 If I change the line to be `PredictedLabel: c.KeyU4.TextValues.Vector`, I will essentially (wrongly) claim that the model's `PredictedLabel` is a vector of keys. But `Fit` will work just fine, although even the basic `GetOutputSchema` call would verify that this is an invalid pipeline. Status: Issue closed Answers: username_1: closing this as the static API is no longer being developed.
SeleniumHQ/selenium
183854045
Title: window().maximize() throws unknown command for firefox geckodriver Question: username_0: ## Meta - OS: Window 7 <!-- Windows 10? OSX? --> Selenium Version: 3.0.0 beta4 <!-- 2.52.0, IDE, etc --> Browser: Firefox 49.0.1 Browser Version: 49.0.1 <!-- e.g.: 49.0.1 (64-bit) --> ## Expected Behavior - maximize() should work for firefox browser. ## Actual Behavior - WARNING: Exception thrown org.openqa.selenium.UnsupportedCommandException: POST /session/572a2c7b-5d3d-4250-8a00-3b3529006bb5/window/current/maximize did not match a known command (WARNING: The server did not provide any stacktrace information) Command duration or timeout: 29 milliseconds Build info: version: '3.0.0', revision: '350cf60', time: '2016-10-13 10:48:57 -0700' System info: host: 'PAS-VXI-142', ip: '10.50.14.142', os.name: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '1.8.0_101' Driver info: org.openqa.selenium.firefox.FirefoxDriver Capabilities [{rotatable=false, raisesAccessibilityExceptions=false, marionette=true, firefoxOptions={args=[], profile=UEsDBBQACAgIACSTUkkAAAAAAAAAAAAAAAAHAAAAdXNlci5qc61Xy27jNhTd9ysCr6ZAxEkmnc10lcYpUCCYFOMGszFAUOSVxZgiWT6suF/fSz1sJ5YlZ9qVTfF1H+ecexk9OGodFB9muTM1johnBTT/pV6RiqmaOSCgWa5AzC4vCq<KEY>3upUwVPtq2j9v3lf8CUEsHCJqI+tHQBAAAehAAAFBLAQIUABQACAgIACSTUkmaiPrR0AQAAHoQAAAHAAAAAAAAAAAAAAAAAAAAAAB1c2VyLmpzUEsFBgAAAAABAAEANQAAAAUFAAAAAA==}, appBuildId=20160922113459, version=, platform=XP, proxy={}, command_id=1, specificationLevel=0, acceptSslCerts=false, processId=11040, browserVersion=49.0.1, platformVersion=6.1, XULappId={ec8030f7-c20a-464f-9b0e-13a3a9e97384}, browserName=firefox, takesScreenshot=true, takesElementScreenshot=true, platformName=windows_nt, device=desktop, firefox_profile=UEsDBBQACAgIACOTUkkAAAAAAAAAA...}] Session ID: 572a2c7b-5d3d-4250-8a00-3b3529006bb5 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.openqa.selenium.remote.ErrorHandler.createThrowable(ErrorHandler.java:216) at org.openqa.selenium.remote.ErrorHandler.throwIfResponseFailed(ErrorHandler.java:168) at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:780) at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:799) at org.openqa.selenium.remote.RemoteWebDriver$RemoteWebDriverOptions$RemoteWindow.maximize(RemoteWebDriver.java:1030) at org.openqa.selenium.support.events.EventFiringWebDriver$EventFiringWindow.maximize(EventFiringWebDriver.java:640) at org.openqa.selenium.remote.server.handler.MaximizeWindow.call(MaximizeWindow.java:30) at org.openqa.selenium.remote.server.handler.MaximizeWindow.call(MaximizeWindow.java:22) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.openqa.selenium.remote.server.DefaultSession$1.run(DefaultSession.java:176) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Oct 18, 2016 6:25:40 PM org.openqa.selenium.remote.server.rest.ResultConfig handle WARNING: Exception: POST /session/572a2c7b-5d3d-4250-8a00-3b3529006bb5/window/current/maximize did not match a known command (WARNING: The server did not provide any stacktrace information) Command duration or timeout: 29 milliseconds Build info: version: '3.0.0', revision: '350cf60', time: '2016-10-13 10:48:57 -0700' System info: host: 'PAS-VXI-142', ip: '10.50.14.142', os.name: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '1.8.0_101' Driver info: org.openqa.selenium.firefox.FirefoxDriver Capabilities [{rotatable=false, raisesAccessibilityExceptions=false, marionette=true, firefoxOptions={args=[], 
profile=UEsDBBQACAgIACSTUkkAAAAAAAAAAAAAAAAHAAAAdXNlci5qc61Xy27jNhTd9ysCr6<KEY>QMvnNJsr54MP3Rx+8+CRbpY9d3z5awA/UpPH4DosJxieXKsVTy1UZ1ImW5WBrf+OP1IKdJJEKXwINnGfp9eDgTYz2zWt6bCA98Mkw0b3upUwVPtq2j9v3lf8CUEsHCJqI+tHQBAAAehAAAFBLAQIUABQACAgIACSTUkmaiPrR0AQAAHoQAAAHAAAAAAAAAAAAAAAAAAAAAAB1c2VyLmpzUEsFBgAAAAABAAEANQAAAAUFAAAAAA==}, appBuildId=20160922113459, version=, platform=XP, proxy={}, command_id=1, specificationLevel=0, acceptSslCerts=false, processId=11040, browserVersion=49.0.1, platformVersion=6.1, XULappId={ec8030f7-c20a-464f-9b0e-13a3a9e97384}, browserName=firefox, takesScreenshot=true, takesElementScreenshot=true, platformName=windows_nt, device=desktop, firefox_profile=UEsDBBQACAgIACOTUkkAAAAAAAAAA...}] Session ID: 572a2c7b-5d3d-4250-8a00-3b3529006bb5 ## Steps to reproduce - 1. Start selenium server ,POST maximize() command for firefox 49.0.1 using Geckodriver ,like below: 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "POST /session/572a2c7b-5d3d-4250-8a00-3b3529006bb5/ window/current/maximize HTTP/1.1[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "Content-Type: application/json; charset=utf-8[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "Content-Length: 45[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "Host: localhost:42969[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "Connection: Keep-Alive[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "User-Agent: Apache-HttpClient/4.5.2 ( Java/1.8.0_101)[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "Accept-Encoding: gzip,deflate[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 >> "[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:86 - http-outgoing-11 >> "{"windowHandle":"current","handle":"current"}" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 << "HTTP/1.1 404 Not Found[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 << "Connection: close[\r][\n]" 2016-10-18 18:25:40 
DEBUG wire:72 - http-outgoing-11 << "Content-Length: 144[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 << "Content-Type: application/json[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 << "Date: Wed, 19 Oct 2016 01:25:40 GMT[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:72 - http-outgoing-11 << "[\r][\n]" 2016-10-18 18:25:40 DEBUG wire:86 - http-outgoing-11 << "{"error":"unknown command","message":"POST / session/572a2c7b-5d3d-4250-8a00-3b3529006bb5/window/current/maximize did not match a known command"}" 2016-10-18 18:25:40 DEBUG headers:124 - http-outgoing-11 << HTTP/1.1 404 Not Found I had raised a defect to geckodriver team and from their side confirmed that the maxinum command url should be `/session/{session id}/window/maximize`,But from our [JsonHttpCommandCodec](https://github.com/SeleniumHQ/selenium/blob/master/java/client/src/org/openqa/selenium/remote/http/JsonHttpCommandCodec.java) i saw we use command is ` defineCommand(MAXIMIZE_CURRENT_WINDOW, post("/session/:sessionId/window/:windowHandle/maximize"));` I think this's why geckodriver is failed for firefox using this command .please advise how to fix this issue ,thanks . See geckodriver issue I mentioned here: https://github.com/mozilla/geckodriver/issues/276 Answers: username_1: Can you show the selenium code / setup that you are using to reproduce this? Using java directly goes through the W3CHttpCommandCodec rather than the JsonHttpCommandCodec. And works for me. username_1: also I just used a standalone server and python from selenium import webdriver as w d = w.Remote(desired_capabilities={'browserName':'firefox'}) d.maximize_window() and works successfully. Please provide a way to reproduce, when you do we can reopen the issue. 
Status: Issue closed username_0: It's very simple if you just use Selenium 3.0.0 beta4 and geckodriver 0.11.1: System.setProperty("webdriver.gecko.driver","C:\\geckodriver.exe"); FirefoxDriver driver=new FirefoxDriver(); driver.manage().window().maximize(); driver.get("https://www.google.com"); username_1: please upgrade to 3.0.0 or 3.0.1 then username_0: thanks @username_1, I found the root cause: I had rewritten the back-end selenium code but did not notice that in the new selenium 3.0.0 there are two implementations. One is the old selenium 2 implementation using `JsonHttpCommandCodec`: when the JSON string returned by `createSession` has a `status` node, `JsonHttpCommandCodec` is used; otherwise `W3CHttpCommandCodec` is used. Thanks very much, I have rewritten the code, we can close it.
TheHolyWaffle/TeamSpeak-3-Java-API
203711966
Title: getIconId from DatabaseClient doesn't work Question: username_0: Hey, if I want to get the icon id from api.getDatabaseClientbyUId("UID").getIconId(); I get 0, but if I get the icon id from api.getClientByUId("UID").getIconID(); I get the correct icon id. Answers: username_1: Heyo! I just tested this on a local server and got: ``` clientinfo clid=1 [...] client_icon_id=424599930 [...] error id=0 msg=ok clientdbinfo cldbid=2 [...] client_icon_id=0 [...] error id=0 msg=ok ``` In other words: The problem seems to be caused by the TS3 server and not the API. I found [a thread on the TS3 forums](http://forum.teamspeak.com/threads/119644-clientdbinfo-client_icon_id-is-always-0) reporting the same issue more than a year ago - seems like it hasn't been fixed yet. If you really need to get the icons used by offline clients, you could try using `clientdblist` (`getDatabaseClients()`) and then filtering out the clients you need. This is a terrible workaround, but it might just work 😛 Status: Issue closed
umbraco/Umbraco-CMS
738259679
Title: Can't register multiple trees in same tree group Question: username_0: There is supposedly 2 ways to register custom trees: - Creating a controller that inherits from `TreeController` and decorating it with the `[Tree(...)]` attribute - this is then type scanned and auto-registered - Creating a controller that inherits from `TreeControllerBase` and registering the tree in a composer with `composition.Trees().AddTree()` A [comment](https://github.com/umbraco/Umbraco-CMS/blob/v8/contrib/src/Umbraco.Web/Trees/TreeCollectionBuilder.cs#L24) on the latter method says "this is useful for ... a single tree controller for different tree aliases ... the tree controller cannot be decorated with the TreeAttribute". However when using this method, with a shared tree controller, but more than 1 tree is registered within the same "tree group" in the same "section" this fails. A cast error is thrown as it is expected the tree will implement `TreeController` which it cannot in this case. Tested on the latest 8.9.0, but looking at the code this likely applies to all versions of V8. Answers: username_1: Fixed in #9358 Status: Issue closed
red/red
233759722
Title: [GUI] WISH: face for multiple line text rendering Question: username_0: 'TEXT face can show multiple lines, but I have to put LF characters in the string. 'AREA face can show multiple lines, but it's for inputting, not for static text rendering. `TEXT-LIST face can show multiple lines, but it's a list, not a paragraph. Answers: username_1: @username_3 Isn't that covered by `text-box` in 0.6.4? username_0: I wish there would be an option ```Red [ multi-line?: true ] ``` to support this feature. Status: Issue closed username_0: I wish there would be an option ```Red [ auto-break-lines?: true ] ``` to support this feature. username_2: @username_0, it can sometimes help to raise ideas like this in a chat room, rather than as tickets. Then they can be fleshed out a bit, and save the team time in reviewing open tickets. username_0: On macOS, 'Text face is multi-line automatically. But on Windows, it's not. username_3: Using `wrap` makes it work: ``` view [text "a very very very very very long string" 100x200 wrap] ``` username_4: https://github.com/red/red/pull/3683 partially fixes this (for Windows systems). What's left is to bring the MacOS backend to support the same wrapping rules: "^/" should always start a new line, `sp` and `tab` - depending on the `wrap` flag of the `para` facet.
staudenmeir/eloquent-json-relations
608996349
Title: Has many with optional array items Question: username_0: Hey. Great package, I was really happy when I found this exists. I just tried to use it and ran into an issue. I was not sure how to name it, but here it goes: Let's say I have this relationship: ```php public function conditionGroups() { return $this->belongsToJson(Group::class, 'data->conditions[]->group->id'); } ``` And the `conditions` array contains: ``` [ {group: {id: 1}, anotherKey: true}, {group: {id: 2}, anotherKey: true}, {field: {id: 1}, anotherKey: true}, {field: {id: 2}, anotherKey: true} ] ``` I would expect the query to be `SELECT * FROM groups WHERE id IN(1,2)`, but what I actually see is `SELECT * FROM groups WHERE id IN(0,1,2)`. As you can see there is an additional zero coming from somewhere. Would you agree this is a bug? Answers: username_1: Yeah, that's a bug. The relationship expects every record in the array to actually have a foreign key. The ones without a key return `null` and Laravel converts them to zero. username_0: Cool, thanks for confirming. Unless you would like to fix it yourself, I could work on a PR to fix it 👍 username_1: You're welcome to submit a PR. username_0: Fixed in #35. Status: Issue closed username_0: Hey @username_1, any ideas when the fixed version will be released? Thanks. username_1: I wanted to wait with the release until I finish a new feature I'm working on because I didn't think it was urgent. I thought the bug would only cause incorrect results if related models with an `id` of `0` actually exist. Are you getting incorrect results from the query? username_0: No, all good. No rush. Was really just wondering that is all. Thank you for the response. username_1: I've released the fix.
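The behaviour being asked for here — skip array records that have no foreign key at the configured path instead of letting `null` collapse to `0` — can be sketched in a language-agnostic way. This Python snippet is purely illustrative (the package itself is PHP, and the `extract_foreign_keys` helper is a hypothetical name of mine, not part of its API):

```python
def extract_foreign_keys(records, path=("group", "id")):
    """Walk `path` into each record; collect values, skipping missing keys."""
    keys = []
    for record in records:
        value = record
        for segment in path:
            if not isinstance(value, dict) or segment not in value:
                value = None
                break
            value = value[segment]
        if value is not None:  # the fix: drop the record instead of emitting 0
            keys.append(value)
    return keys

conditions = [
    {"group": {"id": 1}, "anotherKey": True},
    {"group": {"id": 2}, "anotherKey": True},
    {"field": {"id": 1}, "anotherKey": True},
    {"field": {"id": 2}, "anotherKey": True},
]
print(extract_foreign_keys(conditions))  # [1, 2] -> WHERE id IN (1, 2)
```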
alphagov/govuk-design-system
724523693
Title: Add guidance and example for default / initial values for selects Question: username_0: The design system doesn't include any guidance for default values for selects. All the existing examples pre-fill with a chosen answer - this might be correct for sorting, but for other uses of selects you don't want a preselected option. I think we should include an example with a default option that's more of a placeholder. It's common in selects to do this: ![Screenshot 2020-10-19 at 12 16 30](https://user-images.githubusercontent.com/2204224/96443716-f4227480-1204-11eb-8702-b61f9582d92e.png) ![Screenshot 2020-10-19 at 12 16 36](https://user-images.githubusercontent.com/2204224/96443722-f5ec3800-1204-11eb-97e3-8a11983d4b17.png) Reasons you might do this: * You're using a select as a non-progressively enhanced option for the accessible autocomplete. You don't want an answer pre-filled (we don't pre-fill normally) * A pre-filled answer doesn't make sense There are a couple of ways you might have a default option, hence I think it would be good if the design system had guidance on it. The other reason to do it is that we shouldn't give the false impression that selects should always default to one of the final options. ## Disabled first option ![disabled-first-option](https://user-images.githubusercontent.com/2204224/96444215-cc7fdc00-1205-11eb-900a-a03d40159a07.gif) ## Disabled and hidden first option* ![disabled-hidden-first-option](https://user-images.githubusercontent.com/2204224/96444228-d275bd00-1205-11eb-87cc-4964c77e7ddb.gif) ## Blank first option ![blank-first-option](https://user-images.githubusercontent.com/2204224/96444240-d570ad80-1205-11eb-8dcf-86dd71ea3a60.gif) ## Blank and hidden first option* ![blank-hidden-first-option](https://user-images.githubusercontent.com/2204224/96444246-d7d30780-1205-11eb-864c-af743248046e.gif) * it looks like the macros don't currently support the `hidden` attribute - though you could pass it with the attributes block.
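For reference, the "disabled and hidden first option" variant above is typically marked up along these lines (a sketch, not actual govuk-frontend macro output — as noted, the `hidden` attribute would need to be passed through the attributes block for now):

```html
<label class="govuk-label" for="location">Choose location</label>
<select class="govuk-select" id="location" name="location">
  <!-- selected: shown initially; disabled: cannot be re-chosen;
       hidden: not offered in the open list -->
  <option value="" selected disabled hidden>Select an option</option>
  <option value="choice1">Choice 1</option>
  <option value="choice2">Choice 2</option>
</select>
```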
kubernetes-sigs/kind
407947923
Title: IPv6 Support Question: username_0: Kubernetes currently lacks IPv6 CI support. Some work has been done to add such support via DinD here (https://github.com/kubernetes/test-infra/pull/7529). However, similar support in KinD is very desirable. Furthermore, there is an effort in the community to bring dual-stack support to kubernetes (https://github.com/kubernetes/enhancements/pull/648). Having an IPv6 CI in place will greatly aid that work. Full IPv6 support in KinD can progress in small steps, which I will try to outline below. 1. Add support for other CNIs (beyond weave); in particular the bridge and host-local IPAM plugins. Bridge and host-local IPAM are both simplistic, which is desirable from a CI point of view as plugin-specific problems can be minimized. Both plugins are under the CNCF umbrella (https://github.com/containernetworking) and have had IPv6 support for some time. 2. Support bridge+host-local on a single node with IPv4 addressing. 3. Support bridge+host-local on a single node with IPv6 addressing. 4. Support bridge+host-local on a multi-node cluster with IPv4 addressing. 5. Support bridge+host-local on a multi-node cluster with IPv6 addressing. I would envision the above features coming in as a series of small PRs. I have WIP that brings both 1 and 2 above to KinD. I will share that shortly and would love to get feedback. Answers: username_1: We are tracking / discussing 1.) in #205 / #278 FYI. In general I would love to see this 🙃 username_2: kubeadm _should be_ ready for this, but don't quote me on that. we had some recent PRs that fixed some IPv4 assumptions last cycle. but we don't have test signal for IPv6, so /shrug username_0: Hi @username_1 @username_2 Here is one approach https://github.com/kubernetes-sigs/kind/pull/281 Please have a look. It's just a WIP that I thought I'd share to get early feedback. I'm sure there are a multitude of ways we could go about doing this. :) username_0: Hmm.
Looks like I need to sign the CLA before you guys can look at the PR above :( I'll try to get this resolved asap with my company. username_3: Google doc with a proposal to support IPv6 and dual-stack clusters in kind https://docs.google.com/document/d/17e3TWWLfnIZrsVxpln9wNi4x0JVn2oHIHDYjaeENdVE/edit?usp=sharing username_1: we're making progress on this front, should be shipping an IPv6-capable CNI soon, will need some more work on top of that. username_1: https://github.com/kubernetes-sigs/kind/pull/500 and https://github.com/kubernetes-sigs/kind/pull/524 are big steps in this direction username_1: The v0.4.0 release will have this!
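For anyone landing on this thread later: kind did ship IPv6 support, and in later releases a single-stack IPv6 cluster can be requested from the cluster config (sketch below; the `ipFamily` field landed after the releases discussed above, so check the docs for your kind version):

```yaml
# kind cluster configuration for an IPv6 single-stack cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv6
```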
dotnet/cli
159516123
Title: What does "only when sourcing script" mean? Question: username_0: dotnet-install: Extracting zip dotnet-install: Adding to current process PATH: /home/bb/dotnet. Note: This change will be visible only when sourcing script. dotnet-install: Installation finished successfuly. ## Environment data `dotnet --info` output: ``` [bb@bb-centos72-3 ~]$ dotnet --info -bash: dotnet: command not found ``` Or, more usefully: ``` [bb@bb-centos72-3 ~]$ ./dotnet/dotnet --info .NET Command Line Tools (1.0.0-preview2-002996) Product Information: Version: 1.0.0-preview2-002996 Commit SHA-1 hash: cff4f37456 Runtime Environment: OS Name: centos OS Version: 7 OS Platform: Linux RID: centos.7-x64 ``` Answers: username_1: http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x237.html username_0: I think mainly I'm confused as to why it's printed at all. And/or it needs a "this" in the message. username_2: @username_1 is spot on @username_0 what experience do you suggest? @blackdwarf username_0: Since you're presenting information to me: What do you expect me to do with it? If "nothing", don't print it. Otherwise, notice that the message uses an open "when sourcing script". I can source lots of script(s) that has nothing to do with adding the dotnet install-dir to my $PATH. If the message has an interesting value that I'm not seeing, it needs to change from an open qualifier (suggesting you've somehow patched bash) to a closed qualifier, such as "when sourcing __this__ script". But, really, I can't see what the purpose for the message is. A reader of the script? Then make it a comment, not an echo. username_3: I completely agree. It confused me two. I think two changes are needed. 1. Add the word "this" @username_0 said above. So that it's clear that its talking about sourcing "this" script. However, that's actually too late, because by the time you see it, you've _already_ run the script. So 2. 
Get the documentation changed, here: https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-install-script That page quite clearly says "By default, the script will modify the PATH, which makes the CLI tools available immediately after install." But in reality _none_ of the examples on the page actually do that, because none of them source the script. It should be changed to say "*If you source the script*, the script will modify the PATH, which makes the CLI tools available immediately after install." and an example like this should be added, with a comment to say that _these_ examples will modify the path `. ./dotnet-install.sh --channel Future` or `source ./dotnet-install.sh --channel Future` username_4: @username_3 you're right - I'm not a native speaker and despite knowing the theory I really have to focus hard to spot the difference between "a", "the", "this" or no article 😉 My 2c - I think we should not set the PATH at all in the script (and remove that sentence) - when you source the dotnet-install.sh you also do `set -e`, which later closes the shell on the first command that exits with a non-zero code. If we want to set the PATH we should change the one-liner description in the guidelines username_5: It would be really nice to see this addressed. Since the guidance calls out *sourcing* the script, it shouldn't set options that dirty the shell. It took me a while to chase down why my shell was exiting on a failed, unrelated command after sourcing dotnet-install. username_6: It's 2018 and it still does not add to the path, I tried the `chmod +x`, executing as `bash ./dotnet-install.sh1`, etc (see referenced issue 9234 above). Can you not give an example that does work in the message? username_7: I'm just after a command that I can run on an Ubuntu container to install the dotnet sdk, and then use `dotnet` CLI commands in the same script. Nothing on the docs site so far works.
For example using this example command from the docs in a DOCKERFILE doesn't seem to do that: ``` RUN curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin -Channel 2.0 ``` username_8: @username_7 the best way is to set the `-InstallDir` (see [doc](https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-install-script#description)) and add this direcotry to the `$PATH`. As alternative create a symlink: `ln -s $installdir/dotnet /usr/local/bin`. Where `$installdir` is the given installation directory from above. username_7: @username_8 I was hoping to use `apt-get` but there are a few things missing on the sdk 3.0 debian10 based docker image that prevent this from being straightforward. Here is finally a working DOCKERFILE that shows the issues I had to overcome in order to apt-get dotnet core sdk 2.1 and build a Blazor client / server project. The issues were: 1. `sudo` is not installed. 2. `apt-get install -y dotnet-sdk-2.1` fails becuase of two missing libraries: - libicu57_57.1-6+deb9u2_amd64.deb - libssl1.0.2_1.0.2r-1~deb9u1_amd64.deb 3. microsoft package sources need to be added. 
Dockerfile: ``` #FROM mcr.microsoft.com/dotnet/core/sdk:3.0 #FROM mcr.microsoft.com/dotnet/core/sdk:3.0.100-preview5-nanoserver-1809 #3.0.100-preview5-buster-arm64v8 FROM mcr.microsoft.com/dotnet/core/sdk:3.0 ARG BUILD_CONFIGURATION=Debug ENV ASPNETCORE_ENVIRONMENT=Development ENV DOTNET_USE_POLLING_FILE_WATCHER=true EXPOSE 80 WORKDIR /src RUN apt-get update && apt-get install -y sudo RUN wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg RUN sudo mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/ RUN sudo chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg RUN wget -q https://packages.microsoft.com/config/debian/9/prod.list RUN sudo mv prod.list /etc/apt/sources.list.d/microsoft-prod.list RUN sudo chown root:root /etc/apt/sources.list.d/microsoft-prod.list RUN sudo apt-get update # dotnet sdk 2.1 dependencies fail to install with apt-get unless this libraries are installed manually first. RUN wget http://debian.mirrors.uk2.net/pool/main/i/icu/libicu57_57.1-6+deb9u2_amd64.deb RUN wget http://debian.mirrors.uk2.net/pool/main/o/openssl1.0/libssl1.0.2_1.0.2r-1~deb9u1_amd64.deb RUN dpkg -i libicu57*.deb RUN dpkg -i libssl1.0.2*.deb # Now we can atleast install dotnet sdk 2.1 RUN sudo apt-get install -y dotnet-sdk-2.1 # Finally we can build a blazor server project that references a blazor client project. COPY ["Hub.Platform.Client/Hub.Platform.Client.csproj", "Hub.Platform.Client/"] COPY ["Hub.Platform.Core/Hub.Platform.Core.csproj", "Hub.Platform.Core/"] COPY ["Hub.Platform.Server/Hub.Platform.Server.csproj", "Hub.Platform.Server/"] COPY ["Hub.Platform.Shared/Hub.Platform.Shared.csproj", "Hub.Platform.Shared/"] # I install the blazor templates, not absolutely sure certain this is necessary. RUN dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview5-19227-01 RUN dotnet restore "Hub.Platform.Server/Hub.Platform.Server.csproj" COPY . . 
WORKDIR "/src/Hub.Platform.Server" RUN dotnet build --no-restore "Hub.Platform.Server.csproj" -c $BUILD_CONFIGURATION RUN echo "exec dotnet run --no-build --no-launch-profile -c $BUILD_CONFIGURATION --" > /entrypoint.sh ENTRYPOINT ["/bin/bash", "/entrypoint.sh"] ``` username_8: Aside: your dockerfile has too many layers. See [Best practices for writing Dockerfiles -- Minimize the number of layers](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#minimize-the-number-of-layers). --- Try something like this: ```Dockerfile FROM mcr.microsoft.com/dotnet/core/sdk:3.0 RUN curl https://dot.net/v1/dotnet-install.sh > dotnet-install.sh && chmod +x *.sh \ && ./dotnet-install.sh -Channel 2.0 -InstallDir /usr/share/dotnet \ && rm * ``` So ```sh root@6931308c0a7b:/src# dotnet --list-sdks 2.1.202 [/usr/share/dotnet/sdk] 3.0.100-preview5-011568 [/usr/share/dotnet/sdk] root@6931308c0a7b:/src# ``` username_7: RAZORTAGHELPER : Failed to load hF�, error : libunwind.so.8: cannot open shared object file: No such file or directory [/src/Hub.Platform.Core/Hub.Platform.Core.csproj] 5 Warning(s) 1 Error(s) Time Elapsed 00:00:24.97 The command '/bin/sh -c dotnet build --no-restore "Hub.Platform.Server.csproj" -c $BUILD_CONFIGURATION' returned a non-zero code: 1 Something about the non-optimised version I have shown, results in some difference that allows this build to succeed. Perhaps the apt-get approach results in additional tools being installed? no idea. I have just about run out of time debugging this issue now. username_7: oh i'm guessing it's the `rm *` you've put at the end deleting stuff it shouldn't? I'll give a quick try without that. username_8: The `rm *` is there to delete the downloaded `dotnet-install.sh`. But this can be specified -- i.e. without the wildcard, so it becomes `rm dotnet-install.sh` (I was just lazy).
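The source-versus-execute distinction at the heart of this thread can be demonstrated with a stand-in install script (a sketch: `fake-install.sh` and `/opt/dotnet` are placeholders, not the real installer — it only does the one thing dotnet-install's message is about, prepending a directory to PATH):

```shell
cd "$(mktemp -d)"

# Stand-in for dotnet-install.sh: it only prepends a directory to PATH.
cat > fake-install.sh <<'EOF'
export PATH="/opt/dotnet:$PATH"
EOF

# Executing runs the script in a child shell; the parent's PATH is untouched.
bash ./fake-install.sh
case ":$PATH:" in
  *":/opt/dotnet:"*) echo "after executing: PATH changed" ;;
  *)                 echo "after executing: PATH unchanged" ;;   # this branch runs
esac

# Sourcing runs the script in the *current* shell; the PATH change sticks.
. ./fake-install.sh
case ":$PATH:" in
  *":/opt/dotnet:"*) echo "after sourcing: PATH changed" ;;      # this branch runs
  *)                 echo "after sourcing: PATH unchanged" ;;
esac
```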
morepath/morepath
37371623
Title: a WSGI setup documentation page Question: username_0: See here for what Flask does. http://flask.pocoo.org/docs/deploying/wsgi-standalone/ Answers: username_0: Someone who has no particular experience with Morepath internals could still write this, so marking this entry level. Status: Issue closed username_1: This seems to be a duplicate of #205 to some extent, so let's close this ticket in favour of the other one.
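A standalone-WSGI snippet of the kind such a docs page could open with, using only the standard library (a sketch: `application` here is a stand-in callable — a Morepath `App` instance is itself a WSGI callable and could be served the same way):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Stand-in WSGI app; replace with your Morepath App instance.
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Hello from WSGI\n"]

def serve(app, host="127.0.0.1", port=8080):
    # Stdlib development server; in production, point gunicorn, uWSGI or
    # waitress at the same WSGI callable instead.
    with make_server(host, port, app) as server:
        server.serve_forever()

# To run locally: serve(application)
```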
Submitty/Submitty
454872172
Title: Add discussion forum email functionality Question: username_0: **What problem are you trying to solve with Submitty** Emails regarding discussion forum notifications such as changes to posts, replies, announcements, etc., are not being made. **Additional context** The settings are there but there is no code to implement the emails. ![discussion_forum_emails](https://user-images.githubusercontent.com/15002610/59301750-53ab6000-8c60-11e9-83dd-8470a9005e67.JPG)<issue_closed> Status: Issue closed
ShikOfTheRa/scarab-osd
156320835
Title: Dronin Altitude incorrect + no Speed Question: username_0: I'm not sure if I have MWOSD configured incorrectly or not but the Speed indicator always reads zero and the default Altitude reads -21 meters although it's pretty accurate in incrementing (-19 at 2 meters of height). I've added the GPS Altitude and it appears to start at 0 and increments but is not as accurate as the barometer Altitude which I'd prefer to use. I'm using a Sparky 1.1 on a 250 frame with MWOSD 1.6 Master on a Micro MinimOSD and dRonin 2016-04-09.2 Release ("Tanto") #234 Answers: username_1: We send the wrong altitude by the looks. It needs to be inverted in the case where it comes from PositionActual.Down (PositionActual uses the NED co-ordinate system). https://github.com/d-ronin/dRonin/blob/next/flight/Modules/UAVOMSPBridge/UAVOMSPBridge.c#L461 username_2: Thanks username_1. Doesn't sound like anything for MWOSD. Can someone advise if / when a patch is made available in dRonin username_3: You can just close this bug. It's a dRonin thing. I thought it had been fixed over there, but not sure if it's come out in a release. Status: Issue closed username_1: Issue raised to fix this. username_2: Re-opened to keep track username_0: Will you also be looking at the zero speed display as well? username_1: Can do, on brief inspection looks like the only place we send a speed through is the raw GPS message.
username_1: @username_0 Can you please test d-ronin/dRonin#1035 and see if it fixes your issues? Thanks in advance. username_0: Thanks. Will do. username_1: Confirmed fixed by @username_0. 👍 username_2: Brill Status: Issue closed
concourse/concourse
281894332
Title: Compacting teams on the dashboard for better space utilization Question: username_1: # Feature Request ## What challenge are you facing? Many Concourse users choose to display their pipelines and pipeline dashboards on large TV monitors. It is important that such an "information radiator" be able to show as many pipelines as possible since a user will likely be unable to scroll through their pipelines if they are on a monitor. The goal is for the user to be able to glance at the TV monitor and know immediately if there is an issue with any of their pipelines. ## A Modest Proposal Lindsay and I have created the below "presentation mode" view that will display 130+ pipelines in a single 1920x1080 viewport. Pipelines are represented by rectangular cards with a strip of color on the left side that shows the current status. To avoid the need to scroll, the cards are small and contain less detail than cards in the existing beta dashboard view. The cards of pipelines which are failing will be entirely red and will radiate a lower opacity red as well. Font size has been enlarged for reading at a distance. ![high density proposal](https://user-images.githubusercontent.com/35495560/35520908-6377f44c-04e6-11e8-9e8d-c886d4c3536f.png) We arrived at this solution by considering various layouts that would allow us to increase information density. After several rounds of feedback from Concourse users inside Pivotal, we came to the conclusion that rearranging the view in this way would not be too confusing and might make life easier for operators and engineers who monitor multiple teams' pipelines. Answers: username_2: Is the old beta dashboard going to just be dropped entirely? It's really great for single-team CI displays, and I'd be sad to lose being able to see what part of the pipeline is broken at a glance. username_0: @username_2 NOPE. This is our "high density view" improvement of the existing dashboard. If you look in the status bar right beside the "pending" hint you'll see a toggle for "High Density". The idea would be to show the current dashboard by default, and if you flip the switch you get dumped into this new view for high density TV view-age. Hope that makes sense username_0: We've implemented the first pass of the designs, which you can check out here: https://ci.concourse.ci/beta/dashboard/hd I'm going to close the issue for now, but we definitely want to keep tweaking it. As usual, this feature is gonna be rolled under beta. We'll be keeping an eye out for: - how the column wrapping performs with different team sizes and pipelines - does the team label look weird to you? - legibility of team names, pipeline names and pipeline status Status: Issue closed
yegor256/veils
616020926
Title: still broken Question: username_0: @username_1 release, tag is `0.1.1` Answers: username_1: @username_0 OK, I will release it now. Please check the progress [here](https://www.username_1.com/t/22024-626814463) username_1: @username_0 Done! FYI, the full log is [here](https://www.username_1.com/t/22024-626814463) (took me 2min) Status: Issue closed
tekartik/sqflite
325545966
Title: Data deleted Question: username_0: I've been using sqflite for a bit and have noticed that the data can disappear when I restart the app or when the app crashes. Answers: username_1: Thanks for the report. I would be interested in being able to reproduce this, as I have never had such an issue (SQLite is known to be robust here). The only thing I can think of would be that you open the database multiple times. If you have sample code to reproduce it, I would be interested. I'm not sure what you mean by "the data can disappear". Do you lose your entire database, or only the most recently added data? Personally I keep one global Database reference in my Flutter application to avoid lock issues. Opening the database should be safe if called multiple times. Keeping a reference only in a widget can cause issues with hot reload if the reference is lost (and the database not closed yet). username_0: Sorry, it was my fault. Still learning Flutter and Dart. I did learn that I can close the database in the dispose function. I also learned that I need to create an auto-incrementing id, retrieve that id from the insert call, and set the id in my local class. Otherwise, I can't delete or update it later. I have written some classes that help me write out the SQL statements. I have created an Android SQLite library that uses reflection to build the database and would like something like that (ORM type), but it looks like Flutter doesn't have the equivalent (mirrors). Status: Issue closed
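The pattern username_0 describes (an auto-incrementing primary key whose generated id is captured at insert time and stored on the local object, so later update/delete calls can target the row) is the same in any SQLite binding. A minimal sketch, using Python's sqlite3 module in place of Dart/sqflite purely for illustration; the `Note` class and `notes` table are made up:

```python
import sqlite3

class Note:
    def __init__(self, text, id=None):
        self.id = id      # None until the row has been inserted
        self.text = text

def insert_note(conn, note):
    # cursor.lastrowid exposes the auto-generated INTEGER PRIMARY KEY,
    # playing the role of the int that sqflite's insert() returns.
    cur = conn.execute("INSERT INTO notes (text) VALUES (?)", (note.text,))
    note.id = cur.lastrowid
    return note

def delete_note(conn, note):
    # Without the stored id there is no way to target this exact row.
    conn.execute("DELETE FROM notes WHERE id = ?", (note.id,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY AUTOINCREMENT, text TEXT)")
note = insert_note(conn, Note("hello"))
delete_note(conn, note)
```

In sqflite the equivalent id comes back from `await db.insert(...)`; the point is only that the object must carry the key the database handed back.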
fga-eps-mds/2019.2-Amika-Backend
517242681
Title: Label of the type choice in the Agenda model Question: username_0: **Bug description** <!-- Describe the bug clearly and concisely. --> It should be ['Grupo', 'Grupo'] and not ['Individual', 'Grupo']. https://github.com/fga-eps-mds/2019.2-Amika-Backend/blob/develop/amika/models.py#L30 **Checklist**
- [x] The issue has a meaningful name.
- [x] The issue has a meaningful description.
- [ ] The issue has screenshots when necessary.
- [x] The issue has labels. Status: Issue closed
randombit/botan
744445183
Title: The DTLS server (1.2) does not properly handle retransmissions of flight 5 from the client Question: username_0: Problem: The retransmissions of Flight 5 made by the DTLS client are not handled well and end with an Alert.
Prerequisites:
• Botan cli as a DTLS server
• OpenSSL as a DTLS client
• A delay introduced in the DTLS server (on receiving the Client Key Exchange) to trigger the DTLS retransmissions by the client
Reproduction:
• Running the Botan cli server, e.g.: botan tls_server my_cert.pem my_key.pem --port=8080 --type=udp
• Running the openssl client, e.g.: openssl s_client -connect 127.0.0.1:8080 -debug -msg -dtls1_2 -CAfile my_cert.pem
• Sending any data through the client
Current behavior:
• DTLS session not established correctly, even though the client thinks that everything went fine
• Received Application Data triggers an alert on the server side (*)
Expected behavior:
• DTLS session successfully established, despite the retransmissions of the flight 5 messages (Client Key Exchange, Change Cipher Spec, Finished)
• Application Data received and decrypted on the server side
More details / output from the test:
Server:
```
botan tls_server my_cert.pem my_key.pem --port=8080 --type=udp
Listening for new connections on udp port 8080
Handshake complete, DTLS v1.2 using ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
Session ID 5FB2367026312B133DC67909107C3805729C841AD509ACE077E1A56319CDA548
Connection problem: Can't interleave application and handshake data (*)
```
Client:
```
openssl s_client -connect 127.0.0.1:8080 -debug -msg -dtls1_2 -CAfile my_cert.pem
CONNECTED(00000003)
...
New, TLSv1.2, Cipher is ECDHE-ECDSA-CHACHA20-POLY1305
Server public key is 256 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : DTLSv1.2
Cipher : ECDHE-ECDSA-CHACHA20-POLY1305
Session-ID: 5FB2367026312B133DC67909107C3805729C841AD509ACE077E1A56319CDA548
Session-ID-ctx:
Master-Key: ...
PSK identity: None PSK identity hint: None SRP username: None Start Time: 1605514864 Timeout : 7200 (sec) Verify return code: 21 (unable to verify the first certificate) Extended master secret: yes --- test [Truncated] TCP dump: ``` No. Time Source Destination Protocol Length Info 49 11.854230676 127.0.0.1 127.0.0.1 DTLSv1.2 263 Client Hello 50 11.855164514 127.0.0.1 127.0.0.1 DTLSv1.2 154 Server Hello 51 11.855189227 127.0.0.1 127.0.0.1 DTLSv1.2 578 Certificate 52 11.855963587 127.0.0.1 127.0.0.1 DTLSv1.2 178 Server Key Exchange 53 11.855991738 127.0.0.1 127.0.0.1 DTLSv1.2 67 Server Hello Done 54 11.860806511 127.0.0.1 127.0.0.1 DTLSv1.2 167 Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message 59 12.869055992 127.0.0.1 127.0.0.1 DTLSv1.2 100 Client Key Exchange 60 12.869268235 127.0.0.1 127.0.0.1 DTLSv1.2 56 Change Cipher Spec 61 12.869480623 127.0.0.1 127.0.0.1 DTLSv1.2 95 Encrypted Handshake Message 70 14.884943509 127.0.0.1 127.0.0.1 DTLSv1.2 100 Client Key Exchange 71 14.885115231 127.0.0.1 127.0.0.1 DTLSv1.2 56 Change Cipher Spec 72 14.885248954 127.0.0.1 127.0.0.1 DTLSv1.2 95 Encrypted Handshake Message 81 16.864198336 127.0.0.1 127.0.0.1 DTLSv1.2 56 Change Cipher Spec 82 16.864369445 127.0.0.1 127.0.0.1 DTLSv1.2 95 Encrypted Handshake Message 95 19.508932296 127.0.0.1 127.0.0.1 DTLSv1.2 76 Application Data 96 19.509003712 127.0.0.1 127.0.0.1 DTLSv1.2 73 Encrypted Alert ``` Answers: username_1: There might be a bug with our retransmit logic. I'll look at this asap (probably this weekend or sometime next week) username_2: Thank you. I suspect that the problem is that the Flight 5 messages are not lost but just delayed. In the end the Flight 5 is received few times by the dTLS server. When it is received first time, then the session is already established. 
When next / retransmitted messages are received, below condition is already fulfilled: (https://github.com/username_1/botan/blob/master/src/lib/tls/tls_channel.cpp#L331) And this probably lead to some issue within the logic. username_0: Thank you. I suspect that the problem is that the Flight 5 messages are not lost but just delayed. In the end the Flight 5 is received few times by the dTLS server. When it is received first time, then the session is already established. When next / retransmitted messages are received, below condition is already fulfilled: (https://github.com/username_1/botan/blob/master/src/lib/tls/tls_channel.cpp#L331) And this probably lead to some issue within the logic. username_0: I can see few issues in dTLS. First one is the check about `epoch0_restart` in the `tls_channel.cpp`: ``` const bool epoch0_restart = m_is_datagram && record.epoch() == 0 && active_state(); BOTAN_ASSERT_IMPLICATION(epoch0_restart, allow_epoch0_restart, "Allowed state"); ``` In case of retransmission of Flight 5 messages, and not allowed epoch0_restart, the state is already not allowed. So in case of setting the policy: `allow_dtls_epoch0_restart()` to false, the retransmission of Flight 5 is already not allowed. Then handling the sequence/epoch looks fine for `Datagram_Sequence_Numbers`. Retransmitted Client Key Exchange and Change Cipher Spec messages are skipped as 'already seen' (sequence lower than m_window_highest) but then there is an encrypted 'Finished' message (epoch = 1, sequence = 1) which is accepted as the sequence is fine. It is then handled in Channel's `process_handshake_ccs`, where the `pending_state` is re-created (even if there is already an `active_state`). Then a check is made: `auto msg = pending->get_next_handshake_msg();`, which returns expected: HANDSHAKE_NONE and breaks the loop. As a result, there are both: pending_state and active_state available. 
If the client sends APPLICATION_DATA, the following check triggers an alert:
```
if(pending_state() != nullptr)
   throw TLS_Exception(Alert::UNEXPECTED_MESSAGE, "Can't interleave application and handshake data");
```
From the peer's perspective the handshake completes successfully, as Botan already responded with 'Change Cipher Spec' and 'Finished' to the original (not retransmitted) Flight 5 messages. The message flow also looks fine from the network side, but the retransmitted Flight 5 then results in the re-creation of the `pending_state` inside Botan's channel and in the APPLICATION_DATA being rejected. username_0: At least changing the condition mentioned above to:
```
if(pending_state() != nullptr && active_state() == nullptr)
   throw TLS_Exception(Alert::UNEXPECTED_MESSAGE, "Can't interleave application and handshake data");
```
fixes the problem, but I'm not sure whether it breaks any other scenarios.
Azure/go-autorest
236567120
Title: Add a retry mechanism for authentication Question: username_0: Investigate adding a mechanism for automatically refreshing OAuth tokens. Answers: username_1: OAuth tokens are refreshed [just before every request](https://github.com/Azure/go-autorest/blob/master/autorest/client.go#L178). The `WithAuthorization()` implementation [includes the refresh](https://github.com/Azure/go-autorest/blob/master/autorest/authorization.go#L43). username_0: Does storage use this implementation or just ARM? username_1: The storage data plane generates a new token with each request. The Authorization header is very different in storage. username_0: Additional context from when I opened this issue: "Autorest appears to handle failure/retry (just from glancing at the code, haven't taken a close look yet). However, it appears that OAuth authentication happens early and the caller has to handle auth failures. It doesn't retry later on, and calls to ZonesClient or RecordSetsClient fail because there's no valid auth token. Is there a way to configure retrying auth failures at the next call to a client function?" username_0: @xtophs Looking at the code, the OAuth token is refreshed just before every request (see @username_1's previous comment). Can you provide more info (or even some sample code) showing where you're seeing this happen? username_0: Closing as this is not actionable. Status: Issue closed
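The refresh-before-every-request flow username_1 points at can be sketched generically. This is a hedged illustration in Python, not the actual go-autorest API (the `Token` and `RefreshingAuthorizer` names are invented): a cached token is re-fetched whenever it is missing or about to expire, just before the Authorization header is set.

```python
import time

class Token:
    def __init__(self, value, expires_at):
        self.value = value
        self.expires_at = expires_at

    def is_expired(self, skew=60):
        # Treat tokens as expired slightly early to avoid racing the server.
        return time.time() >= self.expires_at - skew

class RefreshingAuthorizer:
    """Refresh the token just before each request, mirroring the
    check-then-authorize flow described in the thread."""
    def __init__(self, fetch_token):
        self._fetch_token = fetch_token   # callable returning a new Token
        self._token = None

    def authorize(self, headers):
        if self._token is None or self._token.is_expired():
            self._token = self._fetch_token()
        headers["Authorization"] = "Bearer " + self._token.value
        return headers
```

The key design point is that refresh happens lazily at call time, so a long-lived client never holds a stale credential across requests.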
rolfwessels/Command.Bot
306173336
Title: Bot unable to handle messages from IFTTT Question: username_0: The bot is unable to parse the command when it is sent from IFTTT, but it works fine with a normal user. The exception below is thrown for IFTTT messages.
```
Unable to parse message: Newtonsoft.Json.JsonReaderException: Error reading JObject from JsonReader. Path '', line 0, position 0.
at Newtonsoft.Json.Linq.JObject.Load(JsonReader reader)
at Newtonsoft.Json.Linq.JObject.Parse(String json)
at SlackConnector.Connections.Sockets.Messages.Inbound.MessageInterpreter.ParseMessageType(String json)
at SlackConnector.Connections.Sockets.Messages.Inbound.MessageInterpreter.InterpretMessage(String json)
```
Answers: username_1: Hi, if I look at the stack trace it seems to be an issue with the SlackConnector that I use for integrating with Slack. Not sure that there is much I can do about this. I will try to find some time this week to update the NuGet package. Hopefully it has been fixed already. Otherwise I would suggest posting this issue on their issues page (https://github.com/noobot/slackconnector/issues). Status: Issue closed username_1: Hi, sorry for the delay. I have updated all 3rd party tools and made a new release. Have a look and let me know if there are any further issues. Regards Rolf
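The trace shows `JObject.Parse` being handed input that is not a JSON object, likely an empty or non-JSON frame coming via the IFTTT integration ("Path '', line 0, position 0" points at empty input). One defensive pattern for a message interpreter is to skip such frames instead of throwing; sketched here in Python rather than the library's C#, with an illustrative function name:

```python
import json

def interpret_message(raw):
    """Return the parsed message dict, or None for frames that are
    empty or not JSON objects (e.g. some bot/integration payloads)."""
    if not raw or not raw.strip():
        return None
    try:
        msg = json.loads(raw)
    except ValueError:
        # Covers json.JSONDecodeError; an unparseable frame is dropped,
        # not fatal, so the socket loop keeps running.
        return None
    return msg if isinstance(msg, dict) else None
```

A real fix belongs in the connector library, but the same guard can wrap any point where inbound frames are deserialized.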
TryGhost/Ghost-CLI
296883523
Title: GhostCLI creates remote mysql users with the wrong host Question: username_0: ## This issue is a * [x] Bug Report * [ ] Feature Request ### Summary GhostCLI creates remote mysql database users as 'ghost-xxx'@'db hostname' instead of 'ghost-xxx'@'web hostname' or 'ghost-xxx'@'%' This causes connections from a remote web server to fail (e.g. during `setup migrate`). ### Steps to Reproduce (for a bug report) 1. Setup two servers, one with GhostCLI, another with MySQL. Have a remote mysql root user that can create users, databases, and has the GRANT option. 2. Run `ghost install` specifying the remote MySQL server and your remote-capable mysql root user. 3. The installation will fail with the following error: ``` [20:41:03] Running database migrations [failed] [20:41:03] → Invalid database username or password A ConfigError occurred. Error detected in the production configuration. Message: Invalid database username or password Configuration Key(s): database.connection.user / database.connection.password Current Value(s): ghost-872 / ***password hidden*** ``` Running `ghost setup -V migrate` shows the following error ``` $ ghost setup -V migrate [20:42:37] Running database migrations [started] Running sudo command: /usr/share/nginx/html/current/node_modules/.bin/knex-migrator-migrate --init --mgpath /usr/share/nginx/html/current [20:42:38] Running database migrations [failed] [20:42:38] → The database migration in Ghost encountered an error. A ProcessError occurred. Message: The database migration in Ghost encountered an error. Help: https://docs.ghost.org/v1/docs/troubleshooting#section-general-update-error --------------- stderr --------------- [2018-02-13 20:42:38] ERROR NAME: RollbackError CODE: ER_DBACCESS_DENIED_ERROR MESSAGE: ER_DBACCESS_DENIED_ERROR: Access denied for user 'ghost-872'@'%' to database 'ghoster' level:normal OuterError: The server has encountered an error. 
RollbackError: ER_DBACCESS_DENIED_ERROR: Access denied for user 'ghost-872'@'%' to database 'ghoster' ``` ### Workaround To work around this issue, I manually modified the created user to 'ghost-872'@'%' and granted the user permissions to the GhostCLI-created ghost database. After that, `setup migrate` succeeded and I was able to start ghost. ### Technical details (for a bug report) This is automatically output by Ghost-CLI if an error occurs, please copy & paste: * OS: Ubuntu, v16.04 * Node Version: v6.12.3 * Ghost-CLI Version: 1.5.2 * Environment: production * Command: 'ghost install' I will admit I did not ask about this in Slack. Am I in an unsupported setup? - [ ] Tried to find help in Slack & Docs - [x] Checked for existing issues - [x] Attached log file - [x] Provided technical details incl. operating system Answers: username_1: yeah - I can see where this would occur. Remote mysql databases are definitely supported, so this is a bug. Thanks for reporting! Status: Issue closed
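username_0's workaround boils down to recreating the generated database user with a wildcard host and granting it the Ghost database. A sketch that merely builds those MySQL statements (user and database names taken from the report; `remote_user_fix` is a made-up helper, and the statements should be reviewed and run through your own MySQL client):

```python
def remote_user_fix(user, password, database, host="%"):
    """Build the MySQL statements for recreating a Ghost database user
    with a host pattern that lets a remote web server connect."""
    quoted = f"'{user}'@'{host}'"
    return [
        f"CREATE USER IF NOT EXISTS {quoted} IDENTIFIED BY '{password}';",
        f"GRANT ALL PRIVILEGES ON `{database}`.* TO {quoted};",
        "FLUSH PRIVILEGES;",
    ]
```

The default `%` host matches any client address; substituting the web server's hostname instead of `%` is the tighter variant of the same fix.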
sockjs/sockjs-client
35295842
Title: Closure Compiler compatibility Question: username_0: Since keeping a small JS footprint is critical for our application (see #178), it would be great to be able to compile SockJS via Google Closure Compiler with ADVANCED_OPTIMIZATIONS. Is compatibility with GCC on the v1.x roadmap?
m13253/ltowrapper
806548228
Title: Doesn't work in Gitlab CI Question: username_0: Hi, thanks for this script. I have an issue though when using it in Gitlab CI:
```bash
$ ./LTOWrapper
Created "ar"
Created "nm"
Created "ranlib"
Created "cc"
Created "c++"
Created "gcc"
Created "g++"
Starting "/bin/bash" shell, type "exit" when you finish.
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
root@runner-0277ea0f-project-8572288-concurrent-0:/builds/lebiniou/lebiniou# exit
removed '/tmp/ltowrapper.pqyRo9oUBM/gcc-ranlib'
removed '/tmp/ltowrapper.pqyRo9oUBM/cc'
removed '/tmp/ltowrapper.pqyRo9oUBM/gcc-nm'
removed '/tmp/ltowrapper.pqyRo9oUBM/gcc'
removed '/tmp/ltowrapper.pqyRo9oUBM/gcc-ar'
removed '/tmp/ltowrapper.pqyRo9oUBM/c++'
removed '/tmp/ltowrapper.pqyRo9oUBM/nm'
removed '/tmp/ltowrapper.pqyRo9oUBM/ar'
removed '/tmp/ltowrapper.pqyRo9oUBM/ranlib'
removed '/tmp/ltowrapper.pqyRo9oUBM/g++'
removed directory '/tmp/ltowrapper.pqyRo9oUBM'
```
Any workaround? Answers: username_1: I think this error is because Gitlab CI is not using a PTY for output redirection. This is only a warning. I think you can ignore it. username_0: As you can see, this error immediately triggers a shell "exit". username_1: Oh I see. Please try
```
$ ./LTOWrapper bash
```
and see whether it works. username_0: It looks [OK](https://gitlab.com/lebiniou/lebiniou/-/jobs/1027345762#L3609) now, thanks! Maybe worth documenting? username_1: Sorry for that, I will do it later. Currently I only wrote this in the `--help` page:
```
Usage:
Interactive mode: $0
Batch mode: $0 - command
Cross compile mode: $0 host-triple command
```
username_1: If you are using LTOWrapper in an automated environment, I suggest that you put everything you want to run inside the environment into a script. Then you can call `./LTOWrapper - ./yourscript.sh`. Note the `-`: that's the slot for the cross-compile triple. If you don't need cross-compiling, put a `-` there.
username_0: I do [this](https://gitlab.com/lebiniou/lebiniou/-/blob/master/.gitlab-ci.yml#L175-198), works like a charm. username_1: I am not sure whether it can work. First, your `./configure` is not covered by LTOWrapper, so the configuration script may not select the correct compiler. Second, in the log I didn't see LTOWrapper being called even once. Would you please double-check it? username_0: Oops, good catch. Updated [here](https://gitlab.com/lebiniou/lebiniou/-/blob/json_config_uglifyjs_cleancss/.gitlab-ci.yml#L184-208). But it looks like it still [fails](https://gitlab.com/lebiniou/lebiniou/-/jobs/1027459824#L3275)... username_1: Note that CI is not interactive; if you call `./LTOWrapper bash`, it actually spawns a bash and waits for your keyboard input. Since there is no keyboard, this may fail. Anyway I suggest that you (a) either write a script, (b) create a here-document using bash then plug it in, (c) use something like `CC=gcc-10 ./LTOWrapper - bash -c './configure && make'` username_0: Damn logs. [Here](https://storage.googleapis.com/gitlab-gprd-artifacts/65/e1/65e1e64b7f5667275a6bd7d2ade5bcaa89c62c2b77b3597c86746c9861dade18/2021_02_12/1027459824/1120803571/job.log?response-content-type=text%2Fplain%3B%20charset%3Dutf-8&response-content-disposition=inline&GoogleAccessId=<EMAIL>&Signature=Cg9FWhs2NLacemJ7YTu40VUePPd6ICGvSDRPER36qub%2BMCOYwn6KbdGPtIRv%0AHZyrXk1qLzqePP2wYkX1e9tYyxqJHXQbJ7fmjEuyax3PKwoY%2F0PNdE9ISXE1%0AaP6TBw4AydPjAvORDRouDtI5%2BFGQIDw5xHfsfLNAGeeyBGR3x%2BPw8FqOjREf%0Ay3Hs44qKReURn%2FPL0Y%2BRQ7igYWKDKl3zT6GI8RmyjmLC6SxwrTkJfK4N15l9%0A9tWivod7ozTLyXzcJzLBSkMlLLPIkCca3v3oeeYTvRIW%2BrU2LUSGf2xqqatK%0A1ni9%2FO82qvK5PrplZ2dFxpjUPoPjJM2vz2MFnxZx%2FA%3D%3D&Expires=1613161345). username_0: Ok, let me try (c).
username_0: [There](https://gitlab.com/lebiniou/lebiniou/-/jobs/1027483008/raw) you go :) username_0: Updated [.gitlab-ci.yml](https://gitlab.com/lebiniou/lebiniou/-/blob/json_config_uglifyjs_cleancss/.gitlab-ci.yml#L184-206). username_1:
```
checking for gcc... /tmp/ltowrapper.ml4TbHWtBL/gcc
```
Yay! It worked. username_0: Yep, saw that :) username_1: By the way, I think my script ignores your request for gcc-10. If you really need gcc-10, you might need to make some tweaks to make sure `which gcc` points to your gcc-10. username_0: Right, I'll move `CC=gcc-10` before `./configure`. I guess this should work? username_1: This should work as long as you don't care whether it is gcc-10 or an older version. If you really need gcc-10, you perhaps need to do something. username_0: In fact, I don't need this. gcc-10 is the default compiler in Debian/sid. username_0:
```
$ gcc --version
gcc (Debian 10.2.1-6) 10.2.1 20210110
```
username_1: That's great! username_0: Now, some [advertising](https://biniou.net) :) Status: Issue closed
astraw/stdeb
2161043
Title: call any registered install_data command during postinst Question: username_0: Sometimes packages need to perform some kind of post-install step. The distutils way of doing this seems to be by registering an [install_data command](http://docs.python.org/distutils/apiref.html#module-distutils.command.install_data) (see [this example](http://wiki.python.org/moin/Distutils/Tutorial)). We could generate a debian/<python-packagename>.postinst script that calls this command. For now, the workaround is to do it manually: Do "python setup.py debianize" to create a debian directory with stdeb. (You need stdeb in your --command-packages to do this step.) Then create a file at debian/<python-packagename>.postinst which will be run when the package is installed. (Unfortunately, this would not be carried in the plain distutils source, but only in the debian/ directory. Which is why I filed this ticket.) Answers: username_1: Note: the debianize solution seems to be broken: #132
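What the ticket asks stdeb to generate is a small Debian maintainer script. A sketch of rendering such a file; the `render_postinst` helper, the package name, and the post-install command are all illustrative placeholders, not what stdeb actually emits:

```python
def render_postinst(package, command):
    """Render a minimal debian/<package>.postinst maintainer script
    that runs one post-install command when the package is installed."""
    lines = [
        "#!/bin/sh",
        "set -e",
        f"# post-install step for {package}",
        command,
        "#DEBHELPER#",   # placeholder debhelper expands at build time
        "",              # trailing newline
    ]
    return "\n".join(lines)

script = render_postinst("python-mypkg", "python3 -m mypkg.post_install")
```

Hooking this into the workflow would mean writing the rendered text into the debian/ directory that "python setup.py debianize" creates, so the step travels with the package rather than only with the local debian/ tree.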
arkenfox/user.js
822144567
Title: `user.js` Customization Interface Question: username_0: I think a lot of us who are on this repository believe that hardening the browser itself is one of the most effective privacy measures for casual users and experts alike. This is normally done through security/privacy extensions, as they're easy to install. However, I am afraid that the sophistication that goes into configuring `user.js` may deter users from trying it. So, it would be beneficial if we developed some kind of interface. _At first, one of us may jump to the conclusion that if this is going to be made, it should be done through the terminal (similar to how npm initializes a `package.json` file); but let's not forget that if we want to reach the audience, we have to meet them halfway, because some people may not even know about the/a command prompt._ My first proposal is that we start with HTML and build it into an application with a framework. If that fails, other languages are always an option, such as V, Python, and the C family. When it comes to the interface, the user is given the option to either use the shell/batch files to update the script or implement it into their browser. For first-timers, they are shown choices to customize their own `user.js` based on their priorities and threat model (potentially accompanied by pictures). Maybe this could be a separate project - maybe it could be a separate branch. Who knows? Nevertheless, I'd love to hear your thoughts on this enhancement.
Answers: username_1: i think a lot of people would appreciate this, however to gain max exposure it would have to be (eventually) an add-on on AMO it doesn't seem to make any sense to have a UI with 100+ options, therefore presets would have to be determined which adds a bit of complexity lastly, i'm guessing (actually not because this was brought up b4) pants will not want anything to do with this so it would have to be a separate project, and preferably not a fork, rather just a UI i would think username_0: @username_1 What is "AMO", who is pants, and where/when was this previously brought up? username_2: AMO = addons.mozilla.org pants = username_3 username_1: 1. addons.mozilla.org 2. the lead project manager 3. i dunno, but it was, long ago... by myself, maybe others username_0: Oh! That's a possibility! Having it be a browser suggestion (though I am not sure how we would implement the shell scripts)! Gotcha, I tried giving the issues a search myself, but I was not successful. username_1: it would take some think time to figure it all out i suppose - the question is, is anyone interested (and capable) in doing it? 
you could start with a web page, as you mentioned, and go from there - it would be a matter of figuring out what presets to have and how to implement that and i suspect this would not be a trivial project because user.js gets updated often and so you'd have to figure out whether a change affects a preset, whether a pref needs to be moved from 1 preset to another, etc regarding the scripts, i don't believe there's an API to modify files in the profile, so the cleaner script, at a min., may still be necessary - then to actually update user.js, the web page/add-on would have to spit out the text which the user would have to save as user.js, though i think this could be automated using a 3rd party service, as is the case with (some) add-ons that allow adding/removing search engines this is all off the top of my head, so there's that username_3: I wouldn't mind an interactive page, from within the arkenfox repo: e.g. `arkenfox/config/` - that way it is on the same eTLD+1. It's on my wish list ever since - @overdodactyl and #569, #574, #578 - https://nwmviewer.shinyapps.io/ghacks_userjs/ (not sure what old version it is currently pulling, it used to be the live one) - also see @icpantsparti and #608 - https://icpantsparti.github.io/firefox-user.js-tool/userjs-tool.html I personally find nwmviewer to be very slow to load (5+ seconds), always have. And I would prefer to use no external libraries or anything third party. I can create a new repo and assign members. I definitely do not want to do this in here. And TBH, it is not high on my priorities, as I consider the user.js already as easy as it gets. I mean come on, it's a TEXT file, and has 17 `[SETUP-WEB`, 13 `[SETUP-CHROME` and one `[SETUP-HARDEN` tags. It doesn't get any easier. I see two parts:
- create an interactive page that lets you show/hide relevant things
- allow altering values to create an `overrides` text
username_0: @username_3 What is eTLD+1?
I will gladly look at the sources you have provided; and I am excited that you also like to have your projects as vanilla as possible. username_4: @username_0 `https://web.dev/same-site-same-origin/` username_0: Thank you for the useful information @username_4! Concerning @username_3's wish(es) for the project, I think it would be doable to the extent of merely providing a customized `user.js` file. username_0: Although this project provides a comprehensive interface, it lacks simplicity. username_0: It may help the repo if this was done, but I'll mention that if the default file is templated with the interface, it could be updated client-side. And hey! What if we find that the shell scripts come in handy? username_0: **I'll sum up my review of this project by saying that I have noticed a pattern in what is the biggest obstacle. There is a bridge to cross when transitioning the data from `user.js` to be displayed on these visuals. As a result, I now stand with the idea of keeping the interface a separate repository from this one.** username_0: @username_3 Would it be beneficial if we revisited this old, HTML version of the project? username_3: If anyone wants to take a copy of the user.js, and build an HTML file in their own repo, that parses it with no third parties, and returns an interactive HTML page, then go for it. When there is a proof of concept, then we can add a new arkenfox repo and work on styling. The user.js is pretty regimented in its syntax for comments, notes, settings, test pages, references, setup tags, version numbers, etc username_3: It still needs to pull in the live master user.js: it's pointless using a static or modified one. A modified one requires upkeep, and a static one doesn't solve parsing issues username_0: ^ If I could pin that statement, I totally would.
**It is safe to say that we can close this issue until those requirements are met.** I can start a repo and begin working on a parser for it, but I would need to know what information in particular should be displayed. If all of it should, then in what order of priority? Status: Issue closed username_0: Even though the issue is closed, somebody please let me know. username_3: All of it up to and including the 4500 section. It's more a case of collapsing/hiding some content. What I envisage is some toggle buttons and drop-downs. Only one applies at a time - e.g. all | section | version | win/mac/linux | settings | setup-tags | test | default | active-inactive | warnings - click active and all you see are active prefs etc - then click setup-tags and all you see are the setup items Another option, independent of the toggle buttons: collapsed/full username_0: In the making of this, would NodeJS's default library be considered vanilla? username_3: Why do you need anyone else's libraries? I get that it's a little complicated, and the rules are in my head, but it's not that complicated. All you need to do is parse using the syntax/rules logic that the user.js follows. You're going to need some generic functions, like stripping multiple spaces, search and replace - you don't need a massive bloated external library for that, IMO username_0: I was asking because, while I understand how the parser will work, I thought file reading and writing (including JSON formatting) would be a must-have.
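username_3's point, that the user.js syntax is regimented enough to parse with a few generic functions, can be demonstrated with a first cut at the pref-line step. This is an illustrative sketch, not the tool the thread asks for: the regex handles only single-line `user_pref` statements (active or commented out) and ignores section headers, setup tags, and any multi-line cases a real parser would also need:

```python
import re

# Matches active and commented-out prefs, e.g.
#   user_pref("keyword.enabled", false);
#   // user_pref("network.dns.disableIPv6", true);
PREF_RE = re.compile(
    r'^\s*(?P<inactive>//\s*)?user_pref\("(?P<name>[^"]+)",\s*(?P<value>.+?)\);'
)

def parse_userjs(text):
    """Collect pref name, raw value, and active/inactive state
    from user.js-style text, one entry per matching line."""
    prefs = []
    for line in text.splitlines():
        m = PREF_RE.match(line)
        if m:
            prefs.append({
                "name": m.group("name"),
                "value": m.group("value").strip(),
                "active": m.group("inactive") is None,
            })
    return prefs
```

An interactive page would run the same pass client-side over the live master user.js and then drive the show/hide toggles from the extracted fields.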
Yohannfra/Vim-Vim-Project
854143533
Title: What about root Question: username_0: What about adding a command that goes to the project root, like https://github.com/airblade/vim-rooter does? Answers: username_1: I don't understand what you'd want. In the vim-rooter README there is this part <img width="885" alt="Capture d’écran 2021-04-09 à 19 41 46" src="https://user-images.githubusercontent.com/36271388/114219934-a133a980-996b-11eb-8c94-98122ae0ccea.png"> You could then just use vim-rooter and vim-vim-Project together and do something like
```vim
let g:rooter_patterns = ['.vimproject']
```
Status: Issue closed username_0: ok
topcoder-platform/topcoder-x-ui
337235376
Title: [$50] Allow user to change TC Direct ID Question: username_0: Topcoder Direct IDs change as contracts are renewed. We need to change this to allow the user to edit the TC Direct ID. ![screen shot 2018-06-30 at 1 50 13 pm](https://user-images.githubusercontent.com/21790/42128247-a6294c2a-7c6c-11e8-84bf-66f960403cf2.png) Status: Issue closed
zxing/zxing
41010806
Title: Add Support for Micro QR Code? Question: username_0: It would be nice to be able to scan Micro QR codes with the famous zxing library. Are you planning to support it? Answers: username_1: Micro QR was adopted as JIS X 0510, and it was released in November 2011. username_2: Micro QR was standardized earlier in the main ISO spec, but it's an optional element, and I have never seen it used. At this stage I wouldn't consider a patch for it since the project is in maintenance mode only.
vegastrike/Assets-Masters
995410684
Title: Rename Files for their actual formats Question: username_0: There are a number of files that have the incorrect file extension, so the filename can be misleading.
- [ ] Textures should be `*.dds` instead of `*.png`
- [ ] Sprites should be `*.spr`
There may be others too. Answers: username_0: @username_1 did a tool for PWCU to help with some of this. He's a good resource for some of these changes. We may also need to change some hard-coded stuff in the assets/engine too. username_1: As I said in the main gitter channel, the various image libraries in the VS engine already know how to decode based on the actual content of the files, regardless of their extension. To wit, if this hadn't worked, then none of the current assets that are actually DDS encoded would've been able to load, as their extension is currently .png. Hence, it may make more sense to simply rename all image files `foo.image` and then update all sprites to simply reference the relevant `foo.image` files? In the future, we might tack on extra .image _virtualised prefixes_ such as `bar.stereo.image` too (as suggested by MinisterOfInformation). username_0: So my main objection to just `.image` is support for tooling outside VS - GIMP, etc. - that might not like using `.image`.
1. It's not as intuitive
2. Some programs make it harder to open files that don't use their recognized naming patterns
3. Some operating systems (Windows) and even desktop environments use file extensions to register applications
So while VS itself may be fine, we also need to consider the tooling and environments that contributors will be using. Also, `bar.stereo.dds` is just as good as `bar.stereo.image`, and it will be easier for tooling to recognize or filter via their Open Dialog filters. username_1: All fair points.
In that case, I think the sanest approach is to build a small python-tool (for distribution with the engine), which can take a look at a known list of (image) extensions, check their magic type and rename the extension to the proper format and update sprites accordingly? Possibly with a `--report` or `--dry-run` flag that allows asset maintainers to check and verify that no booboos have crept in unnoticed. This is predicated on the idea that having to manually edit and change stuff which can be trivially machine checked/verified/changed is not a good use of maintainer time.
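The tool proposed above is straightforward to prototype. A minimal Python sketch, assuming we only care about the PNG/DDS pair discussed in this thread (the magic numbers are the standard file signatures; the paths, function names, and dry-run behaviour are illustrative, not part of any existing PWCU tool):

```python
import os

# Standard file signatures for the two formats discussed in the thread.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": ".png",  # 8-byte PNG signature
    b"DDS ": ".dds",               # 4-byte DDS magic
}

def detect_extension(path):
    """Return the extension matching the file's magic bytes, or None."""
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, ext in SIGNATURES.items():
        if header.startswith(magic):
            return ext
    return None

def plan_renames(root, dry_run=True):
    """Walk `root` and collect files whose extension disagrees with content.

    With dry_run=True (the --dry-run / --report mode suggested above),
    nothing is touched; the caller just gets the (src, dst) pairs.
    """
    renames = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            actual = detect_extension(path)
            claimed = os.path.splitext(name)[1].lower()
            if actual is not None and actual != claimed:
                renames.append((path, os.path.splitext(path)[0] + actual))
    if not dry_run:
        for src, dst in renames:
            os.rename(src, dst)
    return renames
```

Updating sprite files to reference the renamed images would be a second pass over the `*.spr` files, left out here.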
VIKING-GARAGE/operations
340516377
Title: Create a 'feedback loop' document Question: username_0: please, check it out @username_1: https://docs.google.com/spreadsheets/d/1NZDaI2xpbN2azGjh0Rd8B2EssZHzqzni3oIFF456jc8/edit#gid=0 Answers: username_1: no @username_0 I meant the KPI tracking document, where we made the calculations including the adwords costs and the funnel dropout tracking... it was a spreadsheet, done a few months ago. username_0: Got it: https://docs.google.com/spreadsheets/d/1eTrhUTnhIKXzjUvf9Q2JKXvlmTQMB1hfZk90xNpYJCc/edit?usp=sharing I added my document here; I am not sure what you mean - should I just copy it? Please, let me know. username_1: @username_0 yeah, that's the one. I think we should use the 'adwords KPIs' sheet to monitor the feedback loops. username_0: @username_1 I have no idea how you imagine that. I mean - for adwords we can do that, but for other tasks I created that document. username_1: alright, let me change the doc username_0: Ok, could you go through it tomorrow? username_0: okay, fixed. In the columns of this document I want to record issues that have been completed and have an impact on the results. Let the backlog stay on GitHub. Let me know what you think. username_0: @username_1 please give me feedback on the document here. I only have these comments from Slack: <img width="727" alt="zrzut ekranu 2018-07-20 o 23 50 58" src="https://user-images.githubusercontent.com/28931514/43019463-a966141c-8c8f-11e8-8c27-c512d8d23510.png"> I guess we did not understand each other during our phone conversation when we discussed the document changes. Unfortunately. I wanted something simple; I don't know how you imagine the final result of the document. username_0: up.
username_1: ok @username_0 great work on this one, I think the document https://docs.google.com/spreadsheets/d/1eTrhUTnhIKXzjUvf9Q2JKXvlmTQMB1hfZk90xNpYJCc/edit#gid=0 is exactly what we need.
- [ ] check we can easily monitor everything we want
- [ ] update the last weeks
- [ ] start monitoring
microsoft/BotFramework-Composer
816937027
Title: Bugbash v1.3.1 Composer UI will become frozen if it enters a long, intense task, such as calling bf-orchestrator to create a large snapshot file Question: username_0: <!-- Please search for your feature request before creating a new one. -->
<!-- Complete the necessary portions of this template and delete the rest. -->
## Describe the bug
<!-- Give a clear and concise description of what the bug is. -->

## Version
<!-- What version of the Composer are you using? Paste the build SHA found on the about page (`/about`). -->

## Browser
<!-- What browser are you using? -->
- [ ] Electron distribution
- [ ] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge

## OS
<!-- What operating system are you using? -->
- [ ] macOS
- [ ] Windows
- [ ] Ubuntu

## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

## Expected behavior
<!-- Give a clear and concise description of what you expected to happen. -->

## Screenshots
<!-- If applicable, add screenshots/gif/video to help explain your problem. -->

## Additional context
<!-- Add any other context about the problem here. -->
Answers:
username_1: dup to #5986
Status: Issue closed
ValveSoftware/steam-for-linux
876519536
Title: Error installing Steam on Linux Mint Question: username_0: #### Your system information * Steam client version (build number or date): 1:1.0.0.70 * Distribution (e.g. Ubuntu): Linux Mint 20.1 * Opted into Steam client beta?: [Yes/No] No * Have you checked for system updates?: [Yes/No] Yes #### Please describe your issue in as much detail as possible: Hello, my Steam won't open. I have all the drives updated and the operating system as well. The error started to happen when I installed the new version of Linux Mint (20.1) before I used 19.3. Below I leave the photo of the error, I already tried to install the dependencies, however, I get the message that they are already installed. I also added the i386. #### Steps for reproducing this issue: 1. Installing Mint 20.1 2. Steam does not work ![error](https://user-images.githubusercontent.com/70028939/117158633-f9f13900-ad95-11eb-9b14-85dbe2c01129.png) Answers: username_1: Hello @username_0, if you run `steam` from a terminal, does the terminal spew give a hint? Also, please share the output of `apt policy libgl1-mesa-glx:i386` username_1: Thanks, let's ponder the output of `sudo apt install libc6:i386 libgl1:i386 libgl1-mesa-dri:i386`. If apt asks to remove a bunch of packages **do not** let the command proceed. username_0: $ sudo apt install libc6: i386 libgl1: i386 libgl1-mesa-dri: i386 [sudo] password for wesley: Reading package lists ... Done Building dependency tree Reading status information ... Ready Some packages could not be installed. This may mean that you requested an impossible situation or, if you are using a unstable distribution, that some required packages have not been created yet or have been removed from "Incoming". 
The following information can help to resolve the situation: The following packages have conflicting dependencies: common dictionaries: Depends on: debconf (> = 1.5.5) but will not be installed or debconf-2.0 Depends: libtext-iconv-perl but will not be installed E: Error, pkgProblemResolver :: Resolver generated failures, this can be packaged by collective packages (hold). username_1: `libtext-iconv-perl` is a weird dependency to get hung up on, and I don't think we've seen someone hit that packaging snafu on this issue tracker. Unfortunately, that also means that I'm personally not going to be much help figuring out what's going on here, but someone else may have some ideas to try. There should be some folks in your distro's community who can also help try to figure out how to resolve this package conflict. username_1: I suspect there's some kind of auto-translation at work here and `dictionaries-common` is the actual name of the package apt is refusing to install. username_1: Can you check if that package is installed with something like `apt policy dictionaries-common` and complain the same way as 32 bit mesa if you try `sudo apt install dictionaries-common`? username_0: Yes, there is a translation. I am Brazilian and I translate (Translator) into English. apt policy dictionaries-common dictionaries-common: Instalado: 1.28.1 Candidato: 1.28.1 Tabela de versão: *** 1.28.1 500 500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages 500 http://archive.ubuntu.com/ubuntu focal/main i386 Packages 100 /var/lib/dpkg/status sudo apt install dictionaries-common [sudo] senha para wesley: Lendo listas de pacotes... Pronto Construindo árvore de dependências Lendo informação de estado... Pronto dictionaries-common is already the newest version (1.28.1). 0 pacotes atualizados, 0 pacotes novos instalados, 0 a serem removidos e 6 não atualizados. username_2: If it asks to uninstall a large amount of packages, please cancel and submit a bug report to Linux Mint. 
username_0: Sources list # Do not edit this file manually, use Software Sources instead. deb http://packages.linuxmint.com ulyssa main upstream import backport #id:linuxmint_main deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu/ focal-security main restricted universe multiverse deb http://archive.canonical.com/ubuntu/ focal partner LANG=C sudo apt --fix-broken install [sudo] password for wesley: apt Usage: apt command [options] apt help command [options] Commands: add-repository - Add entries to apt sources.list autoclean - Erase old downloaded archive files autoremove - Remove automatically all unused packages build - Build binary or source packages from sources build-dep - Configure build-dependencies for source packages changelog - View a package's changelog check - Verify that there are no broken dependencies clean - Erase downloaded archive files contains - List packages containing a file content - List files contained in a package deb - Install a .deb package depends - Show raw dependency information for a package dist-upgrade - Upgrade the system by removing/installing/upgrading packages download - Download the .deb file for a package edit-sources - Edit /etc/apt/sources.list with your preferred text editor dselect-upgrade - Follow dselect selections full-upgrade - Same as 'dist-upgrade' held - List all held packages help - Show help for a command hold - Hold a package install - Install/upgrade packages list - List packages based on package names policy - Show policy settings purge - Remove packages and their configuration files recommends - List missing recommended packages for a particular package rdepends - Show reverse dependency information for a package reinstall - Download and (possibly) reinstall a currently 
installed package remove - Remove packages search - Search for a package by name and/or expression show - Display detailed information about a package showhold - Same as 'held' showsrc - Display all the source package records that match the given package name source - Download source archives sources - Same as 'edit-sources' unhold - Unhold a package update - Download lists of new/upgradable packages upgrade - Perform a safe upgrade version - Show the installed version of a package LANG=C sudo dpkg --configure -a
AdguardTeam/AdguardForMac
355522743
Title: Safari delays before page starts loading Question: username_0: 1. it always happens when visiting a URL for the first time 2. the subsequent visits on the same URL/sub-URL will be normal 3. come back to the visited URL some time later, and it delays again. Looks like after AG validates everything on a domain (which takes time), it then gives a fast-track pass to the subsequent visits until the pass expires, something like that. If I turn off AG and use browser-extension ad blockers like ABP or uBlock Origin, with the exact same filters subscribed, I don't have this issue at all; pages always start loading immediately. Standalone version of AdGuard 1.5.8 Last Mojave beta, but the same also happens with High Sierra Safari Version 12.0 (14606.192.168.3.11) Answers: username_1: Hi, I can't repeat this issue on Safari 12. Could you please send me your adguard settings and filters list? username_0: Hi, all stock filters and stock settings. Also tried different combinations, but more or less nothing changed. username_2: @username_0 thanks for moving my post to here :-) looks like you encountered a similar problem. @karina-archaz I originally posted on the AdGuard forum (https://forum.adguard.com/index.php?threads/safari-delays-before-page-starts-loading.29305/), but that place looks like a ghost town... hope GitHub is a better place to discuss and troubleshoot... I tried to purchase multiple licenses for all the Macs at home as Mojave is coming, and AdGuard seems to be the only ad blocker that allows subscribing to filters under Mojave/Safari 12, but I am really irresolute after the try-out, due to this performance issue. The delay is obvious when visiting any URL, by clicking a link, opening from a bookmark, or entering it from the address bar: if the domain has not been visited for some time, there is a noticeable delay (or stall) before Safari shows page loading progress. This happens even on a local web site. I am on a 1 Gbps ISP, and I don't have any Internet latency issue when AG is turned off.
It's not only page loading that's delayed; some other actions are also delayed. One sample is http://www.speedtest.net/: after clicking the "Go" button, it takes 10 seconds to start speed testing when AG is on. You can compare it with AG off, or with another ad blocker. I am not sure if AG needs to call home to validate something; it doesn't look normal if it is a purely local app, but this intermittent delay really drives me nuts. username_2: Below are 2 network activity recordings of the same URL (https://github.com/AdguardTeam/AdguardForMac/issues). AdGuard spent more time establishing the connection and on the SSL handshake. AdGuard ![adguard](https://user-images.githubusercontent.com/3245152/45010378-d2bf9800-b03f-11e8-8e3c-62550fd6c53e.jpg) uBlock Origin ![ubo](https://user-images.githubusercontent.com/3245152/45010382-d81ce280-b03f-11e8-839b-9674bd3723d0.jpg) username_0: I think there will not be a solution. The same problem also exists on Windows. I think it's by design. username_2: I guess the performance problem is due to SSL; it takes too long to complete the handshake compared with browsers. In fact it causes broken requests when visiting internal HTTPS corp sites (error message below, in Win10 IE11), so I have to exclude internal sites... ![snipaste_2018-09-05_11-22-51](https://user-images.githubusercontent.com/3245152/45069362-b0408400-b0fe-11e8-9bf5-4342427d160e.jpg) username_0: Why does no dev reply to this? username_0: I think this problem will never be solved. Bah... username_3: @username_0 sorry for the delay. Guys, the thing is that we're in the process of migrating to a brand new filtering engine (https://adguard.com/en/blog/introducing-corelibs/), and all the networking issues are postponed till the first beta of the new Mac version is out. username_3: I'd say 99% of the performance issues are finally resolved in today's AdGuard for Mac [nightly version](https://adguard.com/en/blog/adguard-for-mac-gets-nightly/) -- [direct link](https://agrd.io/mac_nightly).
Most of them were about the overhead added by https filtering when the HTTPS connection is established. We cannot get rid of it completely for obvious reasons - there are two encrypted connections instead of one, but we had to make it as low as possible. On my MBP with the previous nightly (and stable version as well), the overhead was about 100-120ms, and on the new one, it is about 20-30ms. Once the connection is established, the overhead is close to zero. And thanks to HTTP/2 support, these connections are kept alive and reused for a long period of time (unlike HTTP/1, browsers generally do not let them live for too long). Status: Issue closed
bpmn-io/bpmn-js
157845344
Title: PhantomJS tests crash on Windows Question: username_0: Running the tests in PhantomJS < 2.0.0 on Windows causes PhantomJS to disconnect and crash. Using PhantomJS > 2.0.0 fixes that. ``` .01 06 2016 09:52:09.935:ERROR [launcher]: PhantomJS crashed. 01 06 2016 09:52:09.942:INFO [launcher]: Trying to start PhantomJS again (1/2). 01 06 2016 09:52:40.450:WARN [PhantomJS 1.9.8 (Windows 8 0.0.0)]: Disconnected (1 times), because no message in 30000 ms. PhantomJS 1.9.8 (Windows 8 0.0.0): Executed 572 of 819 (skipped 5) DISCONNECTED (2 mins 6.725 secs / 30.176 secs) Warning: Task "karma:single" failed. Use --force to continue. Aborted due to warnings. Execution Time (2016-06-01 07:50:21 UTC) jshint:src 2s ■ 1% karma:single 2m 16.7s ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 98% Total 2m 19s ``` Answers: username_1: @username_0 Is this still the case? username_2: Hello, I'm trying to run BPMN and I have the same issue. Did you find any solution to solve it? Thank you. username_1: Closing this due to inactivity. Status: Issue closed
tektoncd/cli
549715206
Title: Unable to use multiple doc YAML with tkn create commands Question: username_0: # Version and Operating System macOS Catalina **tkn Version:** `Client version: v0.6.0` **Operating System:** macOS # Expected Behavior The tkn create commands for `tkn task create` or `tkn res create` should accept Tekton resource YAML with multiple documents. # Actual Behavior Right now only the first document is parsed and processed. # Steps to Reproduce the Problem 1. tkn res create -f https://gist.githubusercontent.com/username_0/90a2031c0d3b55dc982e3789900dac4f/raw/5ead712636f5f758195158cd92aba77f0aeaf539/build-resources.yaml Answers: username_0: CC: @hrishin , @piyush-garg username_1: Hey @username_0. Thanks for opening this. I believe we will actually be going with a different approach in #582 where `tkn` actually directly invokes `kubectl create` and `kubectl apply` and repurposing `tkn <resource> create` commands to focus on interactive creation and working more directly with the [catalog of tasks](https://github.com/tektoncd/catalog) for tekton. This is also a duplicate of #575, so I am going to close this for now. Please feel free to add any comments or feedback I perhaps am missing in the issues currently opened. Status: Issue closed username_1: Please also note #574 for the approach in #582.
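For context, the multi-document support being requested amounts to splitting the stream on `---` separator lines and creating each document in turn. A naive, dependency-free Python sketch of just the splitting step (a real YAML parser, e.g. PyYAML's `safe_load_all`, also handles `---` inside block scalars and `...` end-of-document markers, which this does not):

```python
def split_yaml_documents(stream: str):
    """Split a multi-document YAML string on `---` separator lines.

    Naive sketch only: it treats any line that is exactly `---` as a
    document boundary and drops empty documents.
    """
    docs, current = [], []
    for line in stream.splitlines():
        if line.strip() == "---":
            if current:
                docs.append("\n".join(current))
            current = []
        else:
            current.append(line)
    if current:
        docs.append("\n".join(current))
    return [d for d in docs if d.strip()]
```

Each resulting document could then be handed to the existing single-document code path.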
igvteam/igv
589408115
Title: Compatibility between Relative Paths in Saved Sessions for OSX and Windows Question: username_0: I'm having issues opening a saved session that was created on a Windows 10 PC on my MacBook (macOS 10.12.6). The reverse is also true with a saved session created on macOS and opened on Windows 10. I think the issue is with how these operating systems use slashes in their paths, with Windows using backslashes and macOS using forward slashes. Is there a workaround for this?
Answers:
username_1: One possible workaround is keeping things on the cloud (AWS), so (S3) paths can be shared across multiple participants... it requires some time investment to set it up though:
https://umccr.org/blog/igv-amazon/
https://umccr.org/blog/igv-amazon-backend-setup/
IMHO, it can be rather complex to disambiguate paths across heterogeneous storage and authentication systems.
username_2: @username_0 Could you share one that is giving you an issue? You don't have to share the whole session file, just the problematic path.
Status: Issue closed
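Until the client normalizes separators itself, one workaround is to rewrite the paths in the session file before moving it across operating systems. A hedged Python sketch (the helper names are made up for illustration; the blunt whole-file rewrite is only safe if backslashes never appear outside of paths):

```python
from pathlib import PureWindowsPath

def to_posix_relative(path: str) -> str:
    """Convert a Windows-style relative path to the forward-slash form.

    Absolute paths with drive letters cannot be translated mechanically
    and are returned unchanged.
    """
    p = PureWindowsPath(path)
    if p.drive:  # e.g. 'C:' has no meaningful macOS equivalent
        return path
    return p.as_posix()

def normalize_session_text(text: str) -> str:
    """Very blunt fallback: rewrite every backslash in the session file."""
    return text.replace("\\", "/")
```

`PureWindowsPath` accepts both separators, which is why the relative conversion works regardless of which OS wrote the session.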
epogrebnyak/cbr-db
137345889
Title: ERROR 1449 (HY000) at line 1: The user specified as a definer ('test_user'@'%') does not exist Question: username_0: I have `test_user` in MySQL but cannot pass a definer. Answers: username_1: @username_0 Please check the user host with the following command (should be %):
SELECT host, user FROM mysql.user;
You can also try to re-create the user:
DROP USER test_user;
CREATE USER test_user IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON dbf_db.* TO test_user;
GRANT ALL PRIVILEGES ON cbr_db.* TO test_user;
GRANT FILE ON *.* TO test_user;
username_0: Thanks for the code, will try re-creating the user.
username_0: What if host is not '%'? Relogin?
username_1: @username_0 What do you see in `mysql.user` after re-creating `test_user`?
Status: Issue closed
username_0: Fixed by running:
```sql
CREATE USER test_user IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON dbf_db.* TO test_user;
GRANT ALL PRIVILEGES ON cbr_db.* TO test_user;
GRANT FILE ON *.* TO test_user;
```
before that I had:
```
host;user
127.0.0.1;root
localhost;excel
localhost;pma
localhost;root
localhost;test_user
```
after the "Create user" script:
```
host;user
%;test_user
127.0.0.1;root
localhost;excel
localhost;pma
localhost;root
localhost;test_user
```
adyeths/u2o
280726240
Title: Validate OSIS again after running orefs.py Question: username_0: Just in case there are some unexpected contents within references (e.g. an Arabic comma that escaped our attention), it makes good sense to check that the OSIS is still valid. This is also one good reason to keep the two scripts _separate_: it's easier to find the root cause of invalid OSIS before fixing xrefs and potentially "making things worse". Answers: username_1: An additional validation step isn't necessary. The only thing that changes in the OSIS is the addition of an osisRef attribute to the reference tags. And the way orefs is written, there's only one particular case that I can think of where there could potentially be an issue. And that's the case where there is a single space after the reference and a single word following, such as this reference: `Deuteronomy 32:43 LXX`. The LXX part (or whatever would be in its place) is simply appended to the front of the osisRef as "LXX:"... which will be problematic at some point. I'm still thinking about how to handle this better. As for keeping orefs separate, I've been thinking that might be a good idea as well. I just haven't gotten around to changing the orefs readme to reflect this decision yet. username_0: I didn't detect an Arabic comma until I attempted to validate after running my bespoke TextPipe filter. Just sharing a potentially useful observation. Status: Issue closed
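The re-validation step proposed above can be partially automated with a quick well-formedness pass; note that a stray character inside an attribute value additionally needs schema validation (e.g. with `xmllint --noout --schema osisCore.2.1.1.xsd file.xml`), which this minimal sketch does not attempt:

```python
import xml.etree.ElementTree as ET

def check_well_formed(xml_text: str):
    """Return (True, None) if the document parses, else (False, message).

    This only checks XML well-formedness, not conformance to the OSIS
    schema, so it catches structural breakage rather than bad osisRefs.
    """
    try:
        ET.fromstring(xml_text)
        return True, None
    except ET.ParseError as e:
        return False, str(e)
```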
EijsinkDev/FormCheckBox
341639558
Title: Duplicate icon Question: username_0: When setting an icon for the checkbox, it appears twice (at each side of the checkbox). Further details: form layout, Vaadin 8.4. ![image](https://user-images.githubusercontent.com/1122000/42777877-aa95366c-893b-11e8-8d79-0810f167ded1.png)
juha-h/baresip-studio
484774450
Title: TLS configuration Question: username_0: Hello! I have a asterisk SIP TLS with self-signed certificates working with baresip linux client with TLS. I'd like to use baresip android app with TLS. However, to enable the certificates, I do not see any baresip configuration on a few android 8.x phones. I'm checking got baresip configuration on android phone at storage/emulated/0 Could anyone help me on how to get baresip TLS config going on android phone? Answers: username_1: Configuring of certificate and ca files is not currently supported, but will be when I have extra time. Status: Issue closed username_1: This has now been implemented in version 9.0.0, which should be available soon in Google Play store and in a few days in F-Droid. Re-open if problems exist. username_0: Brilliant! Thank you will test and report
ikedaosushi/tech-news
333000336
Title: A cryptocurrency-mining bot was hiding in container images that anyone can download from Docker Hub Question: username_0: A cryptocurrency-mining bot was hiding in container images that anyone can download from Docker Hub.
The security companies Fortinet and Kromtech found 17 tainted Docker containers. They are downloadable images, but a program that mines cryptocurrency is hiding inside them. Further investigation showed that they had been downloaded 5 million times…
https://ift.tt/2tk9U4r
pachyderm/pachyderm
250402059
Title: Datums vs Runs Question: username_0: In reviewing the stats changes on the UI repo, some of the more unnecessarily complex parts are the contortions we go through to model/link between "Datums". Right now, Datums from the API have an "ID" field, but the message content isn't actually uniquely identified by that ID, because the info contained is tied to a specific Job. The result is that we have to generate composite JobID-DatumID client-side IDs that _actually_ uniquely identify a given Datum message, and then store the server-side "ID" in a separate field, which gets used for display, generating URLs etc. The reason, of course, that two Datums with the same ID can simultaneously have different content is because our Datum message doesn't actually just contain Datum info, it contains runtime (`Run`?) info. It seems pretty clear to me that these should be two separate types of objects. Specifically, I'm proposing that:
1. A Datum consists of a list of files to be processed together. Its ID is a composite hash of those file hashes. It is computable given solely an input tree (i.e. glob patterns applied to a set of specific commits on input repos; it's a descriptor of data)
2. A Run contains the info about the processing of a particular Datum within a particular Job. It is therefore identified by a `Job`-`Datum` tuple.
3. The list of Datums to be processed by a Job can be computed (and displayed) as soon as a Job exists
4. The details about Runs aren't available until the Job has terminated
5. (Preferably) the list of Runs should also be available as soon as a Job exists (presumably this would use the `ListDatum` API internally), but the details are unpopulated, and the states are all 'pending' or something
6.
Ideally, eventually (but not today, and maybe never), `Run`s should:
- cycle through 'queued' and 'active' before settling into either a 'completed', 'skipped', or 'error' state
- have info about the specific worker to which they were assigned
- have live-updated stats when in 'active' state
- have a many-to-one relation with Datums in our standard execution model
- (presumably) have a many-to-many relation with Datums in a user-batching execution model

aaand, in case bringing all this up isn't enough to annoy you, I'm going to mention again that I find the term "Datum" makes the system more tricky than necessary to understand[1]. Given the specific definition of `Datum` that I'm proposing, `FileGrouping` (`FileGroup`?) or `FileSet` both seem to be more immediately obvious/descriptive alternatives. That being said, I do recognize how invasive it would be to change at this point.

[1]: At best, it's a new, foreign technical term to learn; at worst, people already have a conception of what it means, that is almost certainly different from the way in which we're using it
Answers:
username_0: This separation also makes it clear what info is available for all jobs (a list of Datums) and what is only available for jobs that have stats enabled (the Run info). This is important, because at the moment we've lost the ability to view logs in the UI for jobs that don't have stats enabled
username_1: Is this still relevant?
username_2: Hasn't gotten any less relevant, but also hasn't gotten any easier.
Status: Issue closed
username_3: stale
username_0: was just talking about this with @brycemcanally @username_2 and @msteffen as part of the datum improvements we want to tackle in 2019
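As an illustration only, the proposed Datum/Run split could be modeled like this (the names and the hashing scheme are a sketch of the proposal, not Pachyderm's actual types):

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Datum:
    """A grouping of input files, identified purely by its contents."""
    file_hashes: tuple  # hashes of the files processed together

    @property
    def id(self) -> str:
        # Composite hash of the file hashes; sorted so that the ID is
        # independent of input order.
        h = hashlib.sha256()
        for fh in sorted(self.file_hashes):
            h.update(fh.encode())
        return h.hexdigest()

@dataclass
class Run:
    """The processing of one Datum within one Job."""
    job_id: str
    datum: Datum
    state: str = "pending"   # queued / active / completed / skipped / error
    worker: Optional[str] = None

    @property
    def id(self) -> tuple:
        # Uniquely identified by the (Job, Datum) tuple.
        return (self.job_id, self.datum.id)
```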
afeinstein20/eleanor
1098321083
Title: New point release 2.0.4 Question: username_0: Hey, Can you release a new version, 2.0.4 with what is currently out? Right now in 2.0.3 you can't get any ecliptic plane data because `eleanor.Update` is failing because it's looking for targets in the CVZ. It seems like Ben fixed this in a PR in September, but to get that working you have to download from source instead of using pip upgrades. Our group has some novice python users where downloading and installing from source is a bit of a challenge. Answers: username_1: done! username_0: Wow that was fast! Thanks! Might be worth a tweet to notify people to upgrade to get access to the ecliptic. Status: Issue closed
JirkaDellOro/Softwaredesign
340812748
Title: Questions Question: username_0: How do you pass an empty inventory list as a property of an object of type Character?
Should Inventory be a Dictionary, so that only one items list (and not separate Health and Gear lists) appears in the inventory output?
https://github.com/username_0/SoftwareDesign/blob/master/FinaleAbgabe/GameData.cs
For the Fight() method: it is not possible to access Enemy.Lifepoints, only a specific Character, because of the Dictionary.
Navigation: _currentLocation is printed correctly but is not overwritten on a room change. It starts again in the initial room. -> CheckNonFightCases()
https://github.com/username_0/SoftwareDesign/blob/master/FinaleAbgabe/MethodStore.cs Answers: username_1: Unfortunately, the questions are phrased so poorly that I cannot understand them. Please open one thread per question, and please try to phrase the questions understandably. Ask each other first, and once you think you have found an understandable phrasing, post it. Status: Issue closed
ooni/probe
419349702
Title: Estimated Time & Data for test cards should vary based on the tests enabled Question: username_0: Currently, the estimated test time and data usage in each card assumes that all tests of that test card are configured to be run. If individual tests are disabled, though, the estimate does not change. In the case of Performance, this difference can be quite big, especially if DASH is disabled: ![image](https://user-images.githubusercontent.com/5436686/54110001-884d1880-43e0-11e9-9d7c-200e472521b5.png) We should try to more accurately estimate data and time. So instead of having an estimate for a default state of a test card, we have one for each individual test and show the sum of the estimations in the test card. I remember @hellais created a spreadsheet with some averages regarding these estimations. It could be good to revisit these for individual tests. What do you think? cc @bassosimone since this is conceptually MK related as well.
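The "sum of per-test estimates" idea can be sketched as follows; the per-test numbers here are invented placeholders standing in for the measured averages from the spreadsheet:

```python
# Hypothetical per-test estimates as (seconds, megabytes). Real values
# would come from measured averages, not from this sketch.
ESTIMATES = {
    "ndt": (30, 25.0),
    "dash": (45, 150.0),
}

def card_estimate(enabled_tests):
    """Sum time/data estimates over only the tests actually enabled."""
    total_s = sum(ESTIMATES[t][0] for t in enabled_tests)
    total_mb = sum(ESTIMATES[t][1] for t in enabled_tests)
    return total_s, total_mb
```

Disabling DASH then shrinks the Performance card's displayed estimate instead of leaving it at the all-tests default.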
dmitrie43/QueryBuilder
476804594
Title: private $connect Question: username_0: https://github.com/dmitrie43/QueryBuilder/blob/e2a974e307024bd55dbd336f3b98fa20634a16e6/application/lib/QueryBuilder.php#L15 if `$connect` is declared private, subclasses will not be able to access it
tekkosu/TheBusyBeavers
752860803
Title: 'Create a New Account' page design does not match the rest of the webpage Question: username_0: The 'Create a New Account' page design does not match the rest of the webpage. To be specific, the 'submit' button style does not match the button styles used in the landing, recipe, and ingredients pages. You can see the issue by browsing to the 'Create a New Account' page and comparing the submit button to the button on the main landing page (landing.html). ![image](https://user-images.githubusercontent.com/38876214/100536190-087e6780-31dc-11eb-9ffb-6b88e7b95f67.png)
gridap/Gridap.jl
794248970
Title: Support `ascii=true` option in `writevtk` Question: username_0: It would be convenient if Gridap's version of `writevtk` supported the `ascii=true` option of the `WriteVTK` `writevtk` function. This would help debugging. It would also allow testing Gridap's output in the `VtkTests.jl` test case. Answers: username_1: Good point @username_0 ! In general, we want a way of passing all options supported by `WriteVTK.vtk_grid`. The signature of `Gridap.writevtk` has to be modified with care in order not to mess up the arguments redirected to `visualization_data`. In file https://github.com/gridap/Gridap.jl/blob/master/src/Visualization/Vtk.jl I would do something in this direction:
```julia
struct VtkOptions{T}
  kwargs::T
end

VtkOptions(;kwargs...) = VtkOptions(kwargs)

function writevtk(options::VtkOptions,args...;kwargs...)
  map(visualization_data(args...;kwargs...)) do visdata
    write_vtk_file(
      options,visdata.grid,visdata.filebase,celldata=visdata.celldata,nodaldata=visdata.nodaldata)
  end
end

# Needed for backwards compatibility
function writevtk(args...;kwargs...)
  writevtk(VtkOptions(compress=false),args...;kwargs...)
end
```
then propagate `options` until we reach the call to `vtk_grid`, which now should be called as
```julia
vtkfile = vtk_grid(filebase, points, cells; options...)
```
username_1: From the user perspective when visualizing a FEFunction:
```julia
# Default options (like now)
writevtk(trian,"trian",cellfields=["uh"=>uh])

# Custom options
options = VtkOptions(compress=false,ascii=true)
writevtk(options,trian,"trian",cellfields=["uh"=>uh])
```
username_1: Idem for `createvtk`
Perfare/AssetStudio
775192710
Title: Some assets not showing Question: username_0: I've been trying to view the files of Battlestar Galactica: Deadlock; however, when I load all the asset files, some textures and models for ships do not appear. For example: I can see the Viper Mark II ship model and its texture file, and I can also see the Viper Mark I ship model, but I can NOT see its texture file, meaning I cannot view the texture, and this is the same for several other textures in the game. ![image](https://user-images.githubusercontent.com/5975421/103190636-bb201380-48c9-11eb-8f3d-7a123e362a59.png) Answers: username_1: Please upload the files that you think would contain the texture. username_0: It should be in this one [sharedassets27.zip](https://github.com/username_2/AssetStudio/files/5759266/sharedassets27.zip) username_1: You need to include the asset.res file as well. As there are likely many files, it may be better to upload all the asset files if you can. username_0: I can't upload everything because it's 1.86 GB altogether, but I can try to give you this one asset file that has the VM2 texture but not the VM1 [sharedassets27.zip](https://github.com/username_2/AssetStudio/files/5759413/sharedassets27.zip) username_1: It does not look like the texture in question would be there, unless it has a different name. Upload the asset files containing the mesh, or upload all the asset files to a cloud storage service like Google Drive or MEGA. username_0: Here https://drive.google.com/file/d/1Ajdpa1OpGo5wpcn_wbfaEHxUpgbtCo0n/view?usp=sharing username_1: That share is set to request-only; you may want to change it to public. username_0: changed username_1: I do not think the HS VM1 has a texture, based on: 1. The material of the model in Blender is called "Ghost"; normally the material indicates the texture or material, and in this case the closest match is a transparency shader of the same name. 2.
The top-level object/entity containing the mesh is called "ColonialViperMkIProjection", possibly a projection/hologram 3. Other ships with names containing "Projection" do not extract with textures. 4. This is an LOD model. Those 4 indicate to me that this model only appears in a projection/hologram in-game, and does not have a texture. However, I can see that you did not include the .level files in your upload; it may be that the non-LOD, textured model you are looking for is in there. username_1: Additionally, using steamdb.info I can see that there are files in "BSG_Data/StreamingAssets". Try loading the root/main game folder in AssetStudio instead of individual files, if you are not doing this already. username_0: I was loading the entirety of the BSG_Data folder (minus what's in the 4 sub-folders, as they aren't asset files and one of them crashes the loader) username_0: https://drive.google.com/file/d/1MTXb9zkZWHDtZsVJ9xKkxL3-aiaKNKLu/view?usp=sharing Here are the levels and the rest of the files username_1: I have had a look at the level assets and none of them reference the Viper MK1 or MK7 [ViperSquads.zip](https://github.com/username_2/AssetStudio/files/5761338/ViperSquads.zip) I suspect that the MK1 and 7 are not used, as neither of them have textures or are referenced beyond projections. username_0: They are used; the Mark I is the first fighter you have in the entire game, and I found their textures when I did a Ninja rip of an inspect mode of it. The textures ARE there but somehow they are not shown Status: Issue closed username_2: I added an error message for this in the new version. If you don't encounter any error message during loading, it means that all assets in the file have been loaded.
taller2-2018-1-grupo2/python-server
317024442
Title: Fix internal server error Question: username_0: When hitting the Heroku endpoint of the app server after the application has been idle for some time (due to Heroku's inactivity policy), the app server returns a 500 error saying "internal server error". We would need to determine the source of this error and fix it. Probably having better logging (#2) would help us understand what is going on.<issue_closed> Status: Issue closed
easylist/easylist
471246070
Title: dainese.com [Incorrect blocking] Question: username_0: ### List the website(s) you're having issues: `https://www.dainese.com/row/en/motorbike/jackets/d-explorer-2-gore-tex-jacket-201593993.html?cgid=motorbike-jackets&dwvar_201593993_color=EBONY%2FBLACK#start=1` ### What happens? broken site. Can't view product description when adblock is enabled <details> Possible fix: `@@||dainese-cdn.thron.com/shared/plugins/tracking/current/tracking-library-min.js$domain=dainese.com` ![image](https://user-images.githubusercontent.com/8361299/61655517-65543e80-acc7-11e9-9105-b0d7e358b6a9.png) </details> ### List Subscriptions you're using: Easylist, Easyprivacy Answers: username_1: PR: #3818 Status: Issue closed
siznax/wptools
257585116
Title: Mixin get_wikidata(), get_restbase() in wptools.page Question: username_0: We should be able to use ``get_wikidata()`` directly from ``wptools.wikidata`` and remove this function from ``wptools.page``: ```python def get_wikidata(self, show=True, proxy=None, timeout=0): """ Envoke wptools.wikidata.get_wikidata() """ kwargs = {} kwargs.update(self.params) kwargs.update(self.flags) wdobj = WPToolsWikidata(self.params.get('title'), **kwargs) wdobj.cache.update(self.cache) wdobj.data.update(self.data) wdobj.get_wikidata(False, proxy, timeout) self.cache.update(wdobj.cache) self.data.update(wdobj.data) self.flags.update(wdobj.flags) self.params.update(wdobj.params) self._update_imageinfo() self._update_params() if show: self.show() return self ``` Likewise for ``get_restbase()``<issue_closed> Status: Issue closed
realm/realm-java
235776288
Title: create xxRealmProxy.java error, help Question: username_0: Note: Version 3.3.2 of Realm is now available: http://static.realm.io/downloads/java/latest
Note: Processing class EstateInfoDb
Note: Processing class MeterInfoDb
Note: Processing class MeterSystemInfo
Note: Creating DefaultRealmModule
/Users/xx/Desktop/Project/Android/YCB/app/build/generated/source/apt/debug/io/realm/EstateInfoDbRealmProxy.java:27: error: EstateInfoDbRealmProxy is not abstract and does not override abstract method realmGet$proxyState() in RealmObjectProxy
public class EstateInfoDbRealmProxy extends EstateInfoDb
^
/Users/xx/Desktop/Project/Android/YCB/app/build/generated/source/apt/debug/io/realm/EstateInfoDbRealmProxy.java:30: error: EstateInfoDbColumnInfo is not abstract and does not override abstract method copy(ColumnInfo,ColumnInfo) in ColumnInfo
static final class EstateInfoDbColumnInfo extends ColumnInfo {
^
/Users/xx/Desktop/Project/Android/YCB/app/build/generated/source/apt/debug/io/realm/EstateInfoDbRealmProxy.java:48: error: no suitable constructor found for ColumnInfo(no arguments)
EstateInfoDbColumnInfo(String path, Table table) {
^
constructor ColumnInfo.ColumnInfo(int) is not applicable
(actual and formal argument lists differ in length)
constructor ColumnInfo.ColumnInfo(ColumnInfo,boolean) is not applicable
(actual and formal argument lists differ in length)
constructor ColumnInfo.ColumnInfo(int,boolean) is not applicable
(actual and formal argument lists differ in length)
/Users/xx/Desktop/Project/Android/YCB/app/build/generated/source/apt/debug/io/realm/EstateInfoDbRealmProxy.java:50: error: cannot find symbol
this._idIndex = getValidColumnIndex(path, table, "EstateInfoDb", "_id");
^
symbol: method getValidColumnIndex(String,Table,String,String)
location: class EstateInfoDbColumnInfo
/Users/xx/Desktop/Project/Android/YCB/app/build/generated/source/apt/debug/io/realm/EstateInfoDbRealmProxy.java:53: error: cannot find symbol
this.nameIndex = getValidColumnIndex(path, table, "EstateInfoDb", "name"); Answers: username_1: Could you provide generated `EstateInfoDbRealmProxy.java` under `/Users/xx/Desktop/Project/Android/YCB/app/build/generated/source/apt/debug/io/realm/`? username_2: What version of Realm are you using?
username_0: @username_1 EstateInfoDbRealmProxy.java was generated under that path, but it has errors username_0: @username_2 3.3.2. The day before yesterday I was using 3.3.1 and it was OK, without this error. Could you help? username_0: @realm-ci what is this problem? Could you answer it? username_2: I'd actually just try a clean+rebuild. username_3: hope `clean+rebuild` solved your problem. Status: Issue closed
keylime/rust-keylime
514195006
Title: Secure Mount using /tmp and too wide a search string. Question: username_0: Several things need changing in the secure mount implementation. 1. We mount under `/tmp`, in fact at `/tmp/secure` - this is a security risk as it's easy to guess (in fact, no guessing is needed at all), which makes it straightforward to mount attacks such as swapping objects out for nefarious ones. Also, `/tmp` is world-writable. 2. We search for `tmpfs` to establish whether a secure mount is already present. My own machine has numerous instances of a tmpfs label present: ```
mount |grep tmpfs
dev on /dev type devtmpfs (rw,nosuid,relatime,size=16352116k,nr_inodes=4088029,mode=755)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
tpmfs on /tmp type tmpfs (rw,noatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=3273344k,mode=700,uid=1000,gid=985)
```
I recommend we use a more unique name instead. I also think we should mount to `/var/lib/keylime/secure` as a default.<issue_closed> Status: Issue closed
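A minimal sketch of the randomized-mount-point idea (illustrative Python, not the rust-keylime implementation; the prefix and the tmpfs options in the comment are assumptions):

```python
import os
import tempfile

def make_secure_mountpoint(base=None):
    # tempfile.mkdtemp creates a directory with a random suffix and mode 0700,
    # so the path is neither guessable nor world-accessible (unlike /tmp/secure).
    path = tempfile.mkdtemp(prefix="keylime-secure.", dir=base)
    # The real agent (running as root) would then mount a tmpfs on it, e.g.:
    #   mount -t tmpfs -o size=1m,mode=0700 keylime-secure <path>
    # and could detect its own mount by that unique source name rather than
    # by grepping for the generic "tmpfs" label.
    return path

mp = make_secure_mountpoint()
print(mp)
```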
Codeception/Codeception
237164942
Title: Cannot pass to next page via amOnPage function Question: username_0: Hello, I'm trying to place an order on my page. Everything was working smoothly: I chose the specific item I wanted and added it to the cart, then moved on in the cart by choosing a specific transportation and payment type. But when I moved to the section where you have to fill in personal info, I ran into a problem. Clicking to move to the page I wanted passes. Then I called the amOnPage function with the page I wanted as the argument and it passes as well, but when I try to see whether the element I want to fill is there, it fails, and when I check for an element from the previous page, it passes. So it's acting like I am still on the page I was on before, even though I clicked to move forward and called the amOnPage function. Thank you for any advice.

$I->click("Pro kluky");
$I->see("Auta");
$I->click("Auta");
$I->click("Traktor Kubota + vlek");
$I->amOnPage("/traktor-kubota-vlek");
$I->click("Do košíku");//adding "to the cart" //this is making order of specific item
$I->amOnPage("/objednavka");
$I->click("Pokračovat v objednávce");
$I->amOnpage("/doprava-a-platba");
$I->seeElement(['xpath' => '//*[@id="payment_for_1003"]/div[2]']);
$I->click(['xpath' => '//*[@id="payment_for_1003"]/div[2]']);
$I->click(['xpath' => '//*[@id="delivery_methods"]/div[1]']);
$I->click("Pokračovat v objednávce");//this should forward me to the "/dorucovaci-udaje"
$I->amOnPage('/dorucovaci-udaje');
$I->seeElement(['xpath'=>'//*[@id="ibod"]/div']);//this should fail but it pass, element is not on page "/dorucovaci udaje"
$I->seeElement(['xpath' => '//*[@id="email"]']);//this should pass, but it fails

* Codeception version: 2.1
* PHP Version: 5.3
* Operating System: Windows 10
* Installation type: Composer
* List of installed packages (`composer show`) - I am using webception
* Suite configuration:

```yml
class_name: WebGuy
modules:
enabled:
- PhpBrowser
- WebHelper
- REST
config:
REST:
depends: PhpBrowser
url:
'https://www.bambule.cz/'
timeout: 90
depends: WebHelper
PhpBrowser:
url: 'https://www.bambule.cz/'
curl:
CURLOPT_RETURNTRANSFER: true
CURLOPT_FOLLOWLOCATION: true
```
Answers: username_1: Use `seeCurrentUrlEquals` instead of `amOnPage` to validate the current URL. You don't need to call `amOnPage` after `click` because `click` opens the page targeted by the link or form. Are you sure that Codeception 2.1 runs on PHP 5.3? It shouldn't. username_0: Sorry, the version is 7.1.4. Yes, I have used seeCurrentUrlEquals to check whether I am on the page I want; it passes, but seeing an element that should not be there also passed. (I'm definitely sure that it is not supposed to be there.) username_1: Then use `dontSeeElement` in your test. Run this test with the `-vv` flag to see what requests it is actually making. username_0: I have deleted the amOnPage lines, just to be clear that I'm not redirecting to the same page after every click. Still no progress; it looks like it didn't load the next page. username_1: Run this test with the `-vv` flag to see what requests it is actually making. username_0: As a parameter of that PHP script? username_1: `codecept run tests/acceptance/FileCept.php -vv` username_0: It told me the option doesn't exist. (It's a Cest file.) username_1: Upgrade to Codeception 2.3 and try again. Codeception 2.1 is unsupported. username_0: Thank you so much for your time and help, but even after upgrading to 2.3 it says the option does not exist. I don't really know what's wrong. I think the core of the problem is that the Ajax request did not redirect to the other page. I'm probably going to start all over again, reconfiguring everything from scratch. username_1: PhpBrowser does not execute Javascript. username_0: I have rewritten my config file, but it doesn't work. Can someone please tell me what I have done wrong in this?
```yml
class_name: WebGuy
modules:
    enabled:
        - WebDriver
        - Db
    config:
        WebDriver:
            url: 'https://www.bambule.cz/'
            browser: 'chrome'
#            curl:
#                CURLOPT_RETURNTRANSFER: true
#                CURLOPT_FOLLOWLOCATION: true
        Db:
            cleanup: false
            repopulate: false
```
This is the exception that webception gives me:

[Codeception\Exception\ConnectionException]
Curl error thrown for http POST to /session with params: {"desiredCapabilities":{"browserName":"firefox"}}
Failed to connect to 127.0.0.1 port 4444: Connection refused
Please make sure that Selenium Server or PhantomJS is running.

username_1: You have to start Selenium: http://codeception.com/docs/03-AcceptanceTests#Selenium-Standalone-Server username_0: I've started Selenium with chromedriver in its folder and ran the test; as the test started it opened a blank page with :data in the URL, but in webception it gave me this exception:

[Codeception\Exception\ModuleException]
Db: invalid data source name while creating PDO connection

username_0: I've deleted the Db module and it worked, but gave me an unknown error:

[Facebook\WebDriver\Exception\UnknownServerException]
unknown error: Element <a href="/traktorkubotavlek" class="product_name">...</a> is not clickable at point (634, 999).
Other element would receive the click: <div id="cookies">...</div>
(Session info: chrome=58.0.3029.110)
(Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 10.0.14393 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 166 milliseconds
Build info: version: '3.4.0', revision: 'unknown', time: 'unknown'
System info: host: 'DESKTOPIDJLL9S', ip: '192.168.200.157', os.name: 'Windows 10', os.arch: 'x86', os.version: '10.0', java.version: '1.8.0_131'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41), userDataDir=C:\Users\inveo\AppData\Local\Temp\scoped_dir4140_1627}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=58.0.3029.110, platform=XP, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, unexpectedAlertBehaviour=}] username_1: I've seen this error in https://github.com/Codeception/Codeception/issues/4231#issuecomment-303144413 Probably you have to `scrollTo` the element before clicking it. username_0: Sir, you helped me so much. Bless you! Right now I am on the right path and everything looks smooth. I've got to the same point where the problem started. I will let you know if I run into any issues while filling in the shipping details form. Status: Issue closed
FranckLab/FIDVC
199121421
Title: Problem regarding the boundary of the image Question: username_0: Dear sir, To check your software on my experiment images, I warp my image with a known displacement field to create the stressed image. Then I compare the displacement field with what I applied. My image is a sphere under uniform expansion (400 Pa). Your software perfectly matched the distorted image with the initial one. But there is a problem on the boundary of the sphere. As you can see in the attached pictures, the stress should be 400 Pa within the sphere, but there is a significant error on the edges (you can see the surface plot of the displacement, and its gradient is different near the boundary due to interpolation). Since I want the traction, this has caused a serious problem for me. (I just attached the middle xy stack for representation.) The first image is the actual and distorted image under uniform expansion. The second image shows the good match after iterations. The fourth is a comparison of the actual and FIDVC displacement. The fifth one shows sigmaxx, and the red dots are the periphery of the sphere in the middle section. I would really appreciate it if you could help me mitigate this issue. If you think sending my data will help, please let me know. Thanks in advance Yours sincerely
![1](https://cloud.githubusercontent.com/assets/21693418/21707483/2880b01e-d395-11e6-9419-dfb024a6cb72.png)
![2](https://cloud.githubusercontent.com/assets/21693418/21707484/2c6e1766-d395-11e6-8857-c69f368b97bf.png)
![3](https://cloud.githubusercontent.com/assets/21693418/21707485/2e8a7c56-d395-11e6-9254-973edb5af898.png)
![4](https://cloud.githubusercontent.com/assets/21693418/21707487/30a023e2-d395-11e6-8297-97327ceaaeec.png)
![5](https://cloud.githubusercontent.com/assets/21693418/21707488/33485a56-d395-11e6-8a3c-a260222c8985.png)
Answers: username_1: Hi, I'm <NAME>, a graduate student from Franck lab. I'm currently traveling on vacation. I will get back to you soon on this issue. Can you wait for a while?
Please let me know. Best, Mohak username_0: Hi, Thanks for your response. Sure, I really appreciate your help. Thanks in advance Yours sincerely username_1: Hi Erfan, Thank you for waiting patiently. I'm back at work and can help you with your query. FIDVC is a DIC-based tracking method, which tracks displacement by matching subsets of the images using cross-correlation. As a result, the displacement value near the border of the images is more error-prone, as the image subset gets cropped. This is one of the reasons why you have errors near the border of the images. FIDVC also requires a high-contrast speckle pattern for displacement measurement. From the raw images, it seems that the particle seeding density in the images is too low. Increasing the seeding density and decreasing the particle size will help improve the tracking performance of FIDVC. These are the two main reasons why I think you are getting errors with FIDVC. In any case, can you send me your raw images? It will help me look into the issue in detail. My email id is: <EMAIL> I hope this helps. Best, Mohak Status: Issue closed
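The subset-matching idea Mohak describes can be illustrated in 1-D (a toy NumPy sketch, not FIDVC's actual 3-D algorithm): the displacement is recovered as the lag of the cross-correlation peak, and near the image borders the overlapping region shrinks, which is why the estimate degrades there.

```python
import numpy as np

def estimate_shift(ref, deformed):
    # Cross-correlate the deformed signal against the reference; the lag of
    # the correlation peak is the estimated (integer) displacement.
    corr = np.correlate(deformed, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Toy "speckle" reference and a copy displaced by a known 3-sample shift
ref = np.zeros(64)
ref[20], ref[35] = 1.0, 0.7
deformed = np.roll(ref, 3)
print(estimate_shift(ref, deformed))  # 3
```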
PnEcrins/FollowDem-admin-front
549602057
Title: When adding an animal/device/attribute, it would be nice to have more specific information when the fields are filled in incorrectly Question: username_0: (outline the incorrect field in red, show a more explicit error message, etc.) Answers: username_1: Currently there is no error message at all when the form has a problem (incorrect date or otherwise). The form simply fails to save.
nuxt-community/auth-module
285111019
Title: Problem with updateToken action Question: username_0: When the store tries to update the token, this error appears
<img width="1326" alt="image" src="https://user-images.githubusercontent.com/12446271/34440396-7dfde97c-ec7a-11e7-977b-e9df684ca4d7.png">
It started to happen after the last update. Answers: username_1: Hi. This problem should be fixed now. Would you please test against [email protected] ? Status: Issue closed username_0: I updated the package and another problem arose
<img width="1307" alt="screen shot 2017-12-29 at 10 18 26" src="https://user-images.githubusercontent.com/12446271/34441479-a3f0a988-ec81-11e7-8ad5-6ee02dfa3a74.png">
username_0: Fixed with version 3.4.1, Thanks
dale-roberts/MouseUnSnag
559296404
Title: Crash when display settings change Question: username_0: Sometimes (not always) when unplugging external monitors and plugging them back in again, the application will crash. When this happens it goes from 2 screens (dual monitor, main monitor disabled) to a single monitor (main monitor, externals now detached). Also, the main monitor is 4K but is configured for 1080p resolution; on some occasions (despite being configured not to have ANY 4K resolution), Windows 10 will still temporarily switch to the native 4K resolution. That last part is worth mentioning even though it _probably_ doesn't have anything to do with the failure (it is not evident in the console log below, either). Here's a screenshot of the console at the time it freezes/crashes: ![2020-02-03_11-44-44](https://user-images.githubusercontent.com/4269377/73685667-681eeb80-467b-11ea-820f-53980a345293.png)
happyprime/automated-workflows
501674549
Title: Add configuration for Babel Answers: username_1: Here's the config I have so far in the [Front-end Tools](https://github.com/happyprime/front-end-tools) repo: ``` "babel": { "comments": false, "minified": true, "presets": [ "@babel/preset-env" ] }, ``` And here's the `browserslist` config: ``` "browserslist": [ "> 1%", "last 2 versions", "not dead" ], ```
theodi/dashboards
32344470
Title: Switch order of colours and quarters? Question: username_0: Q1 turquoise Q2 lightBlue Q3 midBlue Q4 darkBlue ![screen shot 2014-04-28 at 10 37 15](https://cloud.githubusercontent.com/assets/1837585/2815361/e08b3402-ceb8-11e3-9dbd-339e902352ac.png) Answers: username_1: ![bikeshed](http://www.shedscene.com/image/cache/Apex%20Bike%20Shed-500x500.jpg) Status: Issue closed username_0: This shed does not follow ODI style guidelines.
manusa/actions-setup-openshift
640914778
Title: DNS resolution fails within the Pods Question: username_0: ## Description
Trying to access an Internet domain from within a Pod is not possible due to a domain name resolution failure (access to Internet/external addresses is possible).
The file `/etc/resolv.conf` provided by OpenShift has the following content:
```
nameserver 172.30.0.2
search myproject.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
## Related issues
- https://github.com/openshift/origin/issues/23495
- https://github.com/openshift/origin/issues/19877
- https://github.com/openshift/origin/issues/18358
Answers: username_0: The following fix works, but requires an oc cluster up, down, up cycle. Modify the file `./openshift.local.clusterup/node/node-config.yaml` to set the `dnsIP` field to an external DNS server (e.g. `8.8.8.8`). Once the cluster is restarted, Pods will have the new entry in the propagated `/etc/resolv.conf` file:
```
nameserver 8.8.8.8
nameserver 172.30.0.2
search myproject.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
username_0: The previous fix doesn't work in GH but does locally. In the end I added [this workaround](https://github.com/openshift/origin/issues/23495#issuecomment-523725456) too. Status: Issue closed
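A sketch of automating that `dnsIP` edit (a plain-text patch; the function name is illustrative, and it assumes the file has a top-level `dnsIP:` line as in the default cluster-up node config):

```python
def set_dns_ip(config_text, dns_ip="8.8.8.8"):
    """Rewrite the top-level dnsIP entry of a node-config.yaml to dns_ip."""
    lines = []
    for line in config_text.splitlines():
        # Only touch the top-level (unindented) dnsIP key
        if line.startswith("dnsIP:"):
            line = "dnsIP: " + dns_ip
        lines.append(line)
    return "\n".join(lines)

sample = "kind: NodeConfig\ndnsIP: ''\ndnsDomain: cluster.local"
print(set_dns_ip(sample))  # the dnsIP line becomes "dnsIP: 8.8.8.8"
```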
meetanubhav/email-scheduler
365930674
Title: Minimising a PyQt-based application in Python Question: username_0: **Minimizing a PyQt5-based Python application to the system dock or a Windows/system background process.**
Repo: https://github.com/username_0/email-scheduler
Customizations along with screenshots are accepted
Answers: username_1: Are you asking about minimizing like something like this?
```python
x = "Hello!"
# to
x="Hello!"
```
username_0: No. I want to minimise my application to the Windows dock/system tray, like the minimise button you have in any application.
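For reference, the usual PyQt5 approach is a `QSystemTrayIcon` plus overriding `closeEvent` so the window hides instead of quitting. A minimal, hedged sketch (not the email-scheduler code; the icon, window title, and menu entries are placeholders, and it needs a display to run):

```python
import sys
from PyQt5 import QtWidgets, QtGui

class TrayWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # Placeholder icon: a real app would point this at its own .png/.ico
        icon = QtGui.QIcon.fromTheme("mail-unread")
        self.tray = QtWidgets.QSystemTrayIcon(icon, self)
        menu = QtWidgets.QMenu()
        menu.addAction("Show", self.showNormal)
        menu.addAction("Quit", QtWidgets.qApp.quit)
        self.tray.setContextMenu(menu)
        # Restore the window when the tray icon is clicked
        self.tray.activated.connect(lambda _reason: self.showNormal())
        self.tray.show()

    def closeEvent(self, event):
        # Hide to the tray instead of exiting when the window is closed
        event.ignore()
        self.hide()
        self.tray.showMessage("email-scheduler", "Still running in the tray")

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    w = TrayWindow()
    w.show()
    sys.exit(app.exec_())
```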