Columns: repo_name (string, length 4 to 136), issue_id (string, length 5 to 10), text (string, length 37 to 4.84M)
thomasloven/lovelace-layout-card
849856905
Title: Break not found Question: username_0: My Home Assistant version: 2021.3.4 Layout-card version (FROM BROWSER CONSOLE): 2.2.3 the break option has stopped working ... Is it normal? ![alt text](https://i.ibb.co/xsWpb7G/break-layout.jpg) Answers: username_1: Hey @username_0 I get the exact same problem on the same version of Home Assistant. I hope this will be fixed as soon as possible. - Victor username_2: How did you install layout-card? username_1: Hey @username_2 I installed it trough HACS community store. ![image](https://user-images.githubusercontent.com/26089698/113614904-738cee80-9653-11eb-8c3c-f8b425a6d8c0.png) username_0: Hi @username_2 After installing the last stable version of homeassistant this error appeared, before it was working correctly. I have also updated layout-card from HACS I have created a new configuration, and it does not accept -break Status: Issue closed username_2: The last 7 releases of layout-card are named - in order: - 2.0.0: **BREAKING!** Layout-card 2.0 - 2.1.0: **2.0 is BREAKING** - Make grid layouts fully responsive - 2.1.1: **2.0 is BREAKING** - Improved grid layout responsiveness - 2.2.0: **BREAKING!** layout is renamed view_layout - 2.2.1: **2.2.0 IS BREAKING** - 2.2.2: **IF YOU'RE UPDATING FROM EARLIER THAN 2.0 - YOUR. CONFIG. WILL. NOT. WORK. WITHOUT. CHANGES.** - 2.2.3: **PLEASE READ THE RELEASE NOTES FOR THE LAST COUPLE OF RELEASES** This is also helpfully displayed every time you click the update button in HACS: ![image](https://user-images.githubusercontent.com/1299821/113782032-d785e480-9731-11eb-96d7-ee7022c5fefc.png)
joltup/rn-fetch-blob
1034147339
Title: iOS 15: app crashes when downloading a file Question: username_0: On iOS 15 the app crashes when downloading a file @ihavenoface5 ![Screenshot 2021-10-23 at 19 38 31](https://user-images.githubusercontent.com/49268822/138554433-ce66a45d-9fc3-4799-b0fa-7bb06521dc9c.png) @ihavenoface5 Answers: username_1: This module is unmaintained ⚠️! Please use the fork: https://github.com/RonRadtke/react-native-blob-util
protocolbuffers/protobuf
997395317
Title: 3.18.0 isn't compatible with Python 2 despite being installable Question: username_0: <!-- NOTE: this form is for bug reports only. For questions or troubleshooting, please post on the protobuf mailing list: https://groups.google.com/forum/#!forum/protobuf Stack Overflow is also a useful if unofficial resource https://stackoverflow.com/questions/tagged/protocol-buffers --> **What version of protobuf and what language are you using?** Version: v3.18.0 Language: C++/Java/Python/C#/Ruby/PHP/Objective-C/Javascript **What operating system (Linux, Windows, ...) and version?** macOS Mojave 10.14.6 **What runtime / compiler are you using (e.g., python version or gcc version)** ```bash Python 2.7.18 (default, Mar 30 2021, 14:20:09) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)] on darwin ``` **What did you do?** Steps to reproduce the behavior: 1. Activate a Python 2 environment 2. `pip install protobuf==3.18.0` 3. `python -c "from google.protobuf import descriptor"` **What did you expect to see** Either: - `ERROR: Could not find a version that satisfies the requirement protobuf==3.18.0` in (2) - no errors in (3) **What did you see instead?** ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/username_0/.pyenv/versions/dev2/lib/python2.7/site-packages/google/protobuf/descriptor.py", line 113 class DescriptorBase(metaclass=DescriptorMetaclass): ^ SyntaxError: invalid syntax ``` in (3) Answers: username_1: This breaks [nrfutil](https://www.nordicsemi.com/Products/Development-tools/nRF-Command-Line-Tools/Download#infotabs), rendering a major provider's CLI tool for DFU upgrades inoperable with no way to pin this dependency. username_2: https://github.com/googleapis/python-api-core depends on protobuf and this version breaks Python2.7-based builds. My workaround was to force v3.17.3 in `requirements.txt`. username_3: @username_2 I just released a 1.x version of google-api-core that pins to protobuf <3.18.0. https://pypi.org/project/google-api-core/1.31.3/ username_4: https://github.com/protocolbuffers/protobuf/issues/9045 username_0: —https://github.com/protocolbuffers/protobuf/issues/9045#issuecomment-943640239 Perfect! Status: Issue closed
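The workaround that recurs in this thread is pinning `protobuf` below 3.18.0 in Python 2 environments. Below is a minimal sketch of an explicit guard that fails fast with a readable message instead of the opaque `SyntaxError` above; the function name and the suggested pin string are illustrative, only the 3.18.0 boundary and the 3.17.3 fallback come from the issue.

```python
# Hypothetical guard for Python 2 environments: refuse to start if an
# incompatible protobuf release is installed, and point at the pin used as a
# workaround in this thread (protobuf==3.17.3).
import sys

import pkg_resources  # shipped with setuptools on both Python 2 and 3


def check_protobuf_compat():
    if sys.version_info[0] != 2:
        return  # Python 3: any protobuf release is fine
    installed = pkg_resources.get_distribution("protobuf").version
    if pkg_resources.parse_version(installed) >= pkg_resources.parse_version("3.18.0"):
        raise RuntimeError(
            "protobuf %s dropped Python 2 support; pin 'protobuf<3.18,>=3.17.3' "
            "in requirements.txt" % installed
        )


check_protobuf_compat()
```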
google/closure-compiler
589517733
Title: Compiler emits invalid syntax in for-of if iteree is a comma expr Question: username_0: TypeScript 3.7 [emits a comma expression][1] for `??`. When this is passed to Closure as the parenthesized iteree of a for-of loop, the code printer [removes the parens][2], causing a syntax error. The original TypeScript ```typescript declare const y: { x: number[] | null }; for (const a of y.x ?? []) {} ``` transpiles to ```javascript var _a; for (const a of (_a = y.x, (_a !== null && _a !== void 0 ? _a : []))) { } ``` which becomes ```javascript 'use strict'; var _a; for (const a of _a = y.x, _a !== null && _a !== void 0 ? _a : []) { } ; ``` which produces ``` Uncaught SyntaxError: Unexpected token ',' ``` The code printer needs to retain these parens. A work-around is to upgrade to TS 3.8, which transpiles it a little more efficiently. But it's still a problem that Closure is generating broken code. [1]: https://www.typescriptlang.org/play/?ts=3.7.5#code/CYUwxgNghgTiAEYD2A7AzgF3gTwFzwG94APfFAVwFsAjEGAbQF14AfeCiCeAXwG4BYAFAAzJDHgAKZOixR4SYTgB0xeAH418JgEpC3IA [2]: https://closure-compiler-debugger.appspot.com/gwt_debugger.html#input0%3Dvar%2520_a%253B%250Afor%2520(const%2520a%2520of%2520(_a%2520%253D%2520y.x%252C%2520(_a%2520!%253D%253D%2520null%2520%2526%2526%2520_a%2520!%253D%253D%2520void%25200%2520%253F%2520_a%2520%253A%2520%255B%255D)))%2520%257B%2520%257D%250A%26input1%26conformanceConfig%26externs%26refasterjs-template%26CHECK_TYPES%3Dtrue%26REWRITE_MODULES_BEFORE_TYPECHECKING%3Dtrue%26CLOSURE_PASS%3Dtrue%26PRESERVE_TYPE_ANNOTATIONS%3Dtrue%26PRETTY_PRINT%3Dtrue<issue_closed> Status: Issue closed
go-playground/validator
295111902
Title: Hostname beginning with digits is not accepted by the validator. Question: username_0: The hostname regex is: ```go hostnameRegexString = `^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$` ``` A hostname like `1.foo.com` is rejected by the validator. According to [RFC1123](https://tools.ietf.org/html/rfc1123), valid DNS hostnames can start with digits. Answers: username_1: Thanks @username_0 I’ll take a look as soon as I can; if you wanted to make a PR, that’d be awesome too :) username_1: Hey @username_0 I added a new validation in [Release v9.10.0](https://github.com/go-playground/validator/releases/tag/v9.10.0) for RFC 1123 hostnames, called `hostname_rfc1123`. I put a note in the release on how to continue to use the `hostname` tag with it. Status: Issue closed
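For readers who want to check the RFC 1123 rule itself, here is a small illustrative check in Python, not the Go library's implementation; the label pattern below is an assumption derived from RFC 1123 and only shows why `1.foo.com` should validate.

```python
# Minimal, illustrative RFC 1123 hostname check (not go-playground/validator's
# code): each label is 1-63 letters, digits, or hyphens, must not start or end
# with a hyphen, and MAY start with a digit.
import re

LABEL = r"[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
HOSTNAME_RFC1123 = re.compile(r"^(" + LABEL + r")(\." + LABEL + r")*$")


def is_rfc1123_hostname(name):
    return len(name) <= 253 and HOSTNAME_RFC1123.match(name) is not None


print(is_rfc1123_hostname("1.foo.com"))     # True: leading digits are allowed
print(is_rfc1123_hostname("-bad.example"))  # False: labels cannot start with '-'
```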
Anuken/Mindustry-Suggestions
1102199056
Title: Create mods from within the game Question: username_0: ### Describe the content or mechanics you are proposing. I would love to be able to create mods from within the app, with an easy-to-use interface, so that anyone can publish or play their mods; the game could use a little more content, but it seems great 100 🌟 ### Describe how you think this content will improve the game. If you're proposing new content, mention how it may add more gameplay options or how it will fill a new niche. 🤚 ### Before making this issue, check the boxes below to confirm that you have acknowledged them. - [ ] I have checked the [Trello](https://trello.com/b/aE2tcUwF/mindustry-trello) to make sure my suggestion isn't planned or implemented in a development version. - [X] I am familiar with all the content already in the game or have glanced at the wiki to make sure my suggestion doesn't exist in the game yet. - [X] I have read [README.md](https://github.com/Anuken/Mindustry-Suggestions/blob/master/README.md) to make sure my idea is not listed under the "A few things you shouldn't suggest" category. Answers: username_1: unfortunately, making mods is mostly just coding. you're much better off just using an external IDE for that username_2: they're probably talking about json mods. that's actually possible but useless. username_3: I smell #1645
nodemcu/nodemcu-firmware
223251870
Title: Cron Bug while Disabled schedules called again Question: username_0: Make sure you read and understand http://nodemcu.readthedocs.io/en/dev/en/support/. Use one of the two templates below and delete the rest. 8<------------------------ BUG REPORT ----------------------------------------- ### Expected behavior first = cron.schedule("* * * * *", function(e) print("First Tick") end) second = cron.schedule("*/2 * * * *", function(e) print("second Tick") end) third = cron.schedule("*/3 * * * *", function(e) print("third Tick") end) first:unschedule() first:schedule("* * * * *") At each minute it should print First Tick ### Actual behavior For Each Two and third Minutes I am getting second Tick and Third Tick but not the first tick ### Test code Provide a [Minimal, Complete, and Verifiable example](http://stackoverflow.com/help/mcve) which will reproduce the problem. ```Lua first = cron.schedule("* * * * *", function(e) print("First Tick") end) second = cron.schedule("*/2 * * * *", function(e) print("second Tick") end) third = cron.schedule("*/3 * * * *", function(e) print("third Tick") end) ``` # Answers: username_0: @djphoenix Did you had a look of the above mentioned problem???? I think the basic problem lies in this section ` luaL_unref(L, LUA_REGISTRYINDEX, cronent_list[i]); memmove(cronent_list + i, cronent_list + i + 1, sizeof(int) * cronent_count - i - 1); cronent_count--; ` username_1: I'm not even sure how this ought to work. The implementation seems to require that the entry returned by cron.schedule is retained in a variable, otherwise it gets deleted by the gc. However, the documentation doesn't retain the reference: ``` cron.schedule("* * * * *", function(e) print("Every minute") end) ``` My inclination would be to have disabled entries deleted by the garbage collector, but otherwise retained. username_0: @username_1 well `cron.schedule("* * * * *", function(e) print("Every minute") end)` this code would be working perfect for one cron schedule , but if we schedule more than 3 cron schedule and try to unschedule the second one, this would corrupt the first and third as well, so that what I was asking here username_2: That's a tricky one. Is there a sensible use case where you'd disable a schedule only to to re-schedule it a few seconds/minutes/hours later? If the GC were to clear this one then calling `disable` would (implicitly) be the same as setting the schedule variable to `nil`. I'd find that a bit odd to be honest. username_1: If the gc can't reach a disabled entry, then it can't be re-enabled -- so there is no risk to delete it. If it can be reached, then it is possible that it will be re-enabled. If the entry is enabled, then it shouldn't be deleted by the gc. username_2: in which case it must not be GC'ed. If unscheduled entries _couldn't_ be rescheduled what would be the point of the `unschedule` function (might as well call `ent = nil`)? Status: Issue closed
facebook/react-native
265643149
Title: iOS linker error with Google Nearby Messages API Question: username_0: Following Google Nearby Message installation guide for iOS, after installing cocoapods and adding **NearbyMessages** compiling the Xcode project build failed with a linker error as follows: ``` duplicate symbol __ZN6google10AddLogSinkEPNS_7LogSinkE in: /Users/dimutu.k/Library/Developer/Xcode/DerivedData/nearby-axzoinmlhtvrrpfvlpxeomerfvml/Build/Products/Debug-iphoneos/libReact.a(logging.o) /Work/WINK/nearby/ios/gns/NearbyMessages/Libraries/libGNSMessages.a(logging.o) duplicate symbol __ZN6google13RemoveLogSinkEPNS_7LogSinkE in: /Users/dimutu.k/Library/Developer/Xcode/DerivedData/nearby-axzoinmlhtvrrpfvlpxeomerfvml/Build/Products/Debug-iphoneos/libReact.a(logging.o) /Work/WINK/nearby/ios/gns/NearbyMessages/Libraries/libGNSMessages.a(logging.o) ld: 2 duplicate symbols for architecture arm64 clang: error: linker command failed with exit code 1 (use -v to see invocation) ``` also have tried directly linking the libraries without pods which ended up with same error as above. seems like duplication with libGNSMessages.a static library with React/ThirdParty/glog/glog/logging.cc did anyone else had this issue trying the Google API? https://developers.google.com/nearby/messages/ios/get-started React Native 0.49.3
eslint/eslint
238514761
Title: Feature request: better error for forgotten 'async' when await is used. Question: username_0: This message is correct, but vague. Thanks for listening. I know the current behavior is correct, but `await` is becoming very popular and so is forgetting to add `async` to the parent function! Answers: username_1: Unfortunately this is something that the parser throws before ESLint can lint the file (the message may look like a lint message, but in reality it's a parser error). I'm all for fixing this, but I think we would have to fix this upstream in espree. username_2: Closing because this isn't something we can fix in ESLint -- we just get the error messages from whatever parser is configured. Feel free to open an issue on [acorn](https://github.com/ternjs/acorn) if you'd like to see this improve in our default parser. Status: Issue closed
flowtsohg/mdx-m3-viewer
600573257
Title: Material priorityPlane Mdx read/write type mismatch Question: username_0: The priorityPlane is being read as int32 but written as uint32. It probably doesn't matter, but thought I'd point it out :) https://github.com/username_1/mdx-m3-viewer/blob/3c94d049d76ee6f15d938f3020ccd777b66ce83d/src/parsers/mdlx/material.ts#L20 Answers: username_1: Will be fixed whenever I'll commit again, thanks 👍 Status: Issue closed
Whiley/WhileyRewriteLanguage
121075569
Title: Bug with Incremental Automaton Minimiser Question: username_0: The following is causing a problem: <pre> assert "loop invariant not restored": forall (int j, int i): if: i < 3 j >= 3 then: i + 1 >= 0 </pre><issue_closed> Status: Issue closed
GSG-G10/gifty
1013997850
Title: /product/:productId , GET Question: username_0: #### Description When the user enters any product page he can see the data about this product and the comments or reviews on it #### Files - Database -> Queries -> getPostQuery.js - Database -> Queries -> index.js - Controllers -> getPost.js - Controllers -> index.js - Routes -> index.js -> app.get(); --- #### Tasks - [ ] Create DB query to get the product from the database. - [ ] Import the query in controllers from DB>QUERIES>index.js and return to the front. - [ ] Create GET Route in Routes>index.js<issue_closed> Status: Issue closed
jeremylong/DependencyCheck
971791983
Title: I am using CLI to run dependency check, while running I am getting the following issue Question: username_0: @jeremylong , can you please let me know which command for a workaround. I tried this dependency-check cveUrlModified=http://mirror-url/nist/*.json.gz nothing happened not sure if I am using the command in the right way or not. Thanks Answers: username_0: @jeremylong , can you please let me know which command for a workaround. I tried this dependency-check cveUrlModified=http://mirror-url/nist/*.json.gz nothing happened not sure if I am using the command in the right way or not. Thanks username_1: duplicate of #3499 (and [other tickets](https://github.com/jeremylong/DependencyCheck/issues?q=is%3Aissue+SSLHandshakeException) ) your Java setup (or possibly network/proxy setup) is broken, nothing we can do about that username_0: Thank you for your comment, able to resolve the issue. Status: Issue closed username_3: How are you able to solve it ? I faced the same issue with my machine when trying to run `dependencyCheckAnalyze` command username_2: @username_3 As it works for your colleagues my gut feel would be (assuming your company has a re-encrypting proxy in the path to internet for antivirus/malware scanning) that they have already imported the proxy CA certificate in their truststore (as per my remark on the quoted issue: https://github.com/jeremylong/DependencyCheck/issues/3499#issuecomment-893490943) Another item to check would be if you use the same Java version, as java uses its own CA-trust and that trust gets updated periodically, so if your Java version is much older it might be that a regular CA signing certificate was not yet in the truststore. username_0: I had issues with Java setup, uninstalled and installed Java and added certs to keystore again. username_3: And how did you add the certs to keystore again if I may ask ? Apologies for asking a lot of questions username_0: You need to be on this path in cli "**C:\Program Files\Java\jre1.8.0_301\bin**" and you can use this command to import **" keytool -importcert -noprompt -trustcacerts –alias give your certificate name -file "path where it is stored" -keystore "C:\Program Files\Java\jre1.8.0_301\lib\security\cacerts" -storepass**
iv-org/documentation
978319248
Title: Link to instances.invidious.io/Invidious-Instances.md returns 404 Question: username_0: *\<!-- Please use the search function to check if the bug you found has already been reported by someone else --\>* Okay, found it: iv-org/invidious#2139 **Describe the bug** **Logs** **Screenshots** **Additional context** Continuing discussion in the existing issue is prohibited. (???) Answers: username_1: @username_2 username_2: Again?!.. Status: Issue closed
keboola/google-analytics-extractor
397238366
Title: Argument 3 passed to Keboola\Google\ClientBundle\Google\RestApi::decideRetry() must implement interface Psr\Http\Message\ResponseInterface, null given Question: username_0: ``` { "output": "Running query 'SessionIDTransactionID' Retrying request (0x) Using antisampling algorithm 'dailyWalk' Using antisampling algorithm 'dailyWalk' Using antisampling algorithm 'dailyWalk' Using antisampling algorithm 'dailyWalk' Retrying request (0x) Retrying request (0x) Running query 'SessionIDTime' Using antisampling algorithm 'dailyWalk' Retrying request (0x) Retrying request (0x) Fatal error: Uncaught TypeError: Argument 3 passed to Keboola\Google\ClientBundle\Google\RestApi::decideRetry() must implement interface Psr\Http\Message\ResponseInterface, null given, called in /code/vendor/keboola/google-client-bundle/Keboola/Google/ClientBundle/Google/RestApi.php on line 75 and defined in /code/vendor/keboola/google-client-bundle/Keboola/Google/ClientBundle/Google/RestApi.php:317 Stack trace: #0 /code/vendor/keboola/google-client-bundle/Keboola/Google/ClientBundle/Google/RestApi.php(75): Keboola\Google\ClientBundle\Google\RestApi->decideRetry(0, 8, NULL) #1 [internal function]: Keboola\Google\ClientBundle\Google\RestApi->Keboola\Google\ClientBundle\Google\{closure}(0, Object(GuzzleHttp\Psr7\Request), NULL, Object(GuzzleHttp\Exception\RequestException)) #2 /code/vendor/keboola/google-client-bundle/Keboola/Google/ClientBundle/Guzzle/RetryCallbackMiddleware.php(108): call_user_func(Object(Closure), 0, Object(GuzzleHttp\Psr7\Request), NULL, Object(GuzzleHttp\Exception\RequestException)) #3 /code/vendor/guz in /code/vendor/keboola/google-client-bundle/Keboola/Google/ClientBundle/Google/RestApi.php on line 317", "errorOutput": "Report contains sampled data. Sampling rate is 41%. Report contains sampled data. Sampling rate is 91%. Report contains sampled data. Sampling rate is 46%.", "container": { "id": "21658472-21644051.21658473--0-keboola-ex-google-analytics-v4", "image": "147946154733.dkr.ecr.us-east-1.amazonaws.com/developer-portal-v2/keboola.ex-google-analytics-v4:3.4.0" } ``` exceptionId: `docker-3a872c20eab72fe991441449586e4290`<issue_closed> Status: Issue closed
home-assistant/core
916070103
Title: Some cameras ONVIF don't work after update to 2021.6 Question: username_0: ### The problem Some cameras gave an unavailable error. If the camera is removed from the integration, it will not be found again. The EZVIZ camera works fine, only the Chinese cameras are missing. ### What is version of Home Assistant Core has the issue? core-2021.6.3 ### What was the last working version of Home Assistant Core? core-2021.5.1 ### What type of installation are you running? Home Assistant Supervised ### Integration causing the issue onvif ### Link to integration documentation on our website _No response_ ### Example YAML snippet _No response_ ### Anything in the logs that might be useful for us? ```txt Logger: homeassistant.config_entries Source: util/dt.py:54 First occurred: 11:02:37 (2 occurrences) Last logged: 11:02:40 Error setting up entry 360cam - 2e98012d2030 for onvif Error setting up entry street360Cam1 - 9a68012d2324 for onvif Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/config_entries.py", line 293, in async_setup result = await component.async_setup_entry(hass, self) # type: ignore File "/usr/src/homeassistant/homeassistant/components/onvif/__init__.py", line 72, in async_setup_entry if not await device.async_setup(): File "/usr/src/homeassistant/homeassistant/components/onvif/device.py", line 98, in async_setup await self.async_check_date_and_time() File "/usr/src/homeassistant/homeassistant/components/onvif/device.py", line 172, in async_check_date_and_time dt_util.get_time_zone(device_time.TimeZone) File "/usr/src/homeassistant/homeassistant/util/dt.py", line 54, in get_time_zone return cast(dt.tzinfo, zoneinfo.ZoneInfo(time_zone_str)) File "/usr/local/lib/python3.8/site-packages/backports/zoneinfo/_tzpath.py", line 95, in find_tzfile _validate_tzfile_path(key) File "/usr/local/lib/python3.8/site-packages/backports/zoneinfo/_tzpath.py", line 108, in _validate_tzfile_path if os.path.isabs(path): File "/usr/local/lib/python3.8/posixpath.py", line 62, in isabs s = os.fspath(s) TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ### Additional information _No response_ Answers: username_1: Fixed this via #51620 Status: Issue closed
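The traceback bottoms out in `zoneinfo`, which only accepts a string key, so a camera that reports no `TimeZone` ends up passing `None` through to the lookup. The fix referenced above is #51620; the snippet below is only an illustrative reproduction of the root cause with a defensive guard, not Home Assistant's code.

```python
# Minimal reproduction of the root cause seen in the traceback (illustrative):
# ZoneInfo requires a string key, so a device that reports no TimeZone (None)
# must be guarded before the lookup.
try:
    from zoneinfo import ZoneInfo            # Python 3.9+
except ImportError:                           # pragma: no cover
    from backports.zoneinfo import ZoneInfo  # the backport shown in the log


def get_time_zone(key):
    """Return a tzinfo for *key*, or None when the device sent no time zone."""
    if not key:          # None or empty string from the ONVIF device
        return None
    try:
        return ZoneInfo(key)
    except Exception:    # unknown or malformed zone names from cheap cameras
        return None


print(get_time_zone("UTC"))   # a ZoneInfo object
print(get_time_zone(None))    # None instead of a TypeError
```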
pyca/cryptography
530118781
Title: 'Type[...] has no attribute "Raw"' Question: username_0: With following example code in `test.py` from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey private_key = X448PrivateKey.generate() public_key = private_key.public_key().public_bytes(encoding=Encoding.Raw, format=PublicFormat.Raw) # type: bytes peer_public_key = X448PrivateKey.generate().public_key() shared_key = private_key.exchange(peer_public_key) Running `$ python3.7 -m mypy test.py` Shows warnings `test.py:7: error: "Type[Encoding]" has no attribute "Raw"` `test.py:8: error: "Type[PublicFormat]" has no attribute "Raw"` Is this a typeshed issue, mypy issue, or does the library have a bug? **Python version**: `Python 3.7.5 (default, Nov 20 2019, 09:21:52)` `GCC 9.2.1 20191008] on linux` **cryptography version**: `2.8` **mypy version**: `0.740` Answers: username_1: typeshed issue probably, we don't maintain the type hints for cryptography. Status: Issue closed username_0: Ah I see. Thanks! (Looks like mypy 0.750 was released the same day this ticket was created and typeshed updated at the same time: mypy no longer shows the warnings.)
ngonghia19092000/Front-End-2021
935385618
Title: Carry out data mining Question: username_0: Collect data for 100 helmet products; the team has 2 members: each person mines data for 50 products @18130016 @username_0 Answers: username_0: Data collection is done. The data is contained in the file product.json username_0: updated the data into the file data.json
alexploner/Heatplus
268022579
Title: Suggestion: consistent use of nrow without row/column labels Question: username_0: As pointed out in [Issue 8](https://github.com/username_0/Heatplus/issues/8), the argument `nrow` does not really work if the underlying data matrix does not have row- or column names. For the sake of consistency and flexibility, it would be nice to be able to use this to finetune distances even if there are no dimension names defined... provided this does not break the clumsy interface even further.
bovesan/mistika-hyperspeed
250396374
Title: Implement installation of Stacks Question: username_0: dat: Change D(project/PRIVATE/basename.dat) to s(/hyperspeed/path/basename.glsl) glsl: Change s(basename.glsl) to s(/hyperspeed/path/basename.glsl) lut: Change s(basename.lut) to s(/hyperspeed/path/basename.lut) highres: Change p(/original/path/) to p(/hyperspeed/path/) lowres: Change p(/original/path/) to p(/hyperspeed/path/) audio: Change p(/original/path/) to p(/hyperspeed/path/) lnk: Change F(/original/path/basename.lnk) to F(/hyperspeed/path/basename.lnk) font: Copy to /usr/fonts/mistika Status: Issue closed Answers: username_0: dat: Change D(project/PRIVATE/basename.dat) to s(/hyperspeed/path/basename.glsl) glsl: Change s(basename.glsl) to s(/hyperspeed/path/basename.glsl) lut: Change s(basename.lut) to s(/hyperspeed/path/basename.lut) highres: Change p(/original/path/) to p(/hyperspeed/path/) lowres: Change p(/original/path/) to p(/hyperspeed/path/) audio: Change p(/original/path/) to p(/hyperspeed/path/) lnk: Change F(/original/path/basename.lnk) to F(/hyperspeed/path/basename.lnk) font: Copy to /usr/fonts/mistika Status: Issue closed
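The list above is a specification of string rewrites over Mistika's resource references. As an illustration only (the helper and its regex are hypothetical; just the `s(...)`/`p(...)`/`F(...)` token shapes and the `/hyperspeed/path/` placeholder come from the issue), the relinking could be sketched like this:

```python
# Hypothetical sketch of the rewrite rules listed above: point s(...), p(...)
# and F(...) references at their installed copies under the hyperspeed path.
import re


def relink_resource(line, old_path, new_path):
    """Rewrite token(old_path...) to token(new_path...) in one stack line."""
    pattern = re.compile(r"([spF])\(" + re.escape(old_path) + r"([^)]*)\)")
    return pattern.sub(lambda m: "%s(%s%s)" % (m.group(1), new_path, m.group(2)), line)


print(relink_resource("p(/original/path/)", "/original/path/", "/hyperspeed/path/"))
# -> p(/hyperspeed/path/)
```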
rxaviers/globalize-webpack-plugin
155732580
Title: Uncaught TypeError in production mode Question: username_0: When running in production mode, I get this js error: ![image](https://cloud.githubusercontent.com/assets/19405711/15394377/e2d23236-1dd2-11e6-8c70-3a47fe5f3bc3.png) It happens in the globalize compiled data files. Everything runs fine in development mode. My package.json has these dependencies: ``` "dependencies": { "classnames": "^2.2.3", "globalize": "^1.1.0-rc.5", "history": "^2.0.1", "immutable": "^3.7.6", "jest-cli": "*", "jquery": "2.2.2", "jsonschema": "^1.1.0", "keymirror": "^0.1.1", "moment": "^2.13.0", "object-assign": "^4.0.1", "react": "^15.0.1", "react-addons-css-transition-group": "^15.0.1", "react-addons-pure-render-mixin": "^15.0.1", "react-bootstrap": "^0.28.5", "react-dom": "^15.0.1", "react-onclickoutside": "^5.1.0", "react-redux": "^4.4.2", "react-router": "^2.0.0", "react-router-bootstrap": "^0.22.1", "react-router-redux": "^4.0.4", "react-textarea-autosize": "^4.0.0", "redux": "^3.3.1", "redux-form": "^5.0.1", "redux-logger": "^2.6.1", "redux-react-router": "^1.0.0-beta3", "redux-thunk": "^2.0.1", "superagent": "^1.8.3" }, "devDependencies": { "babel-cli": "^6.6.5", "babel-core": "^6.7.5", "babel-jest": "^11.0.0", "babel-loader": "^6.2.4", "babel-preset-stage-2": "^6.5.0", "babel-preset-es2015": "^6.6.0", "babel-preset-react": "^6.5.0", "chai": "^3.5.0", "chai-immutable": "^1.5.4", "expect": "^1.16.0", "html-webpack-plugin": "^2.12.0", "mocha": "^2.4.5", "react-addons-test-utils": "^15.0.1", "react-hot-loader": "^1.3.0", "webpack": "^1.12.14", "webpack-dev-server": "^1.14.1", "cldr-data": ">=25", "globalize-webpack-plugin": "0.3.4" } ``` My webpack plugin for production is: ``` new GlobalizePlugin({ production: true, developmentLocale: "en", supportedLocales: [ "da", "en" ], messages: "./src/main/js/messages/[locale].json", output: "i18n/[locale].[hash].js" }) ``` I have attached a pretty-formatted version of the en compiled data file in case that's helpful. [en.1a708029e517eaeb22eb.js.txt](https://github.com/username_1/globalize-webpack-plugin/files/272557/en.1a708029e517eaeb22eb.js.txt) The error happens in line 359. Answers: username_1: Hi @username_0, thanks for reporting this issue, but it's really hard to spot what's going on from this info. Please, could you export a reduced demo that reproduces the bug you see? Perhaps https://github.com/jquery/globalize/tree/master/examples/app-npm-webpack could be used as a baseline. Thanks Status: Issue closed username_0: Hi. Ok, pointing me to the example above helped. Turns out I was missing the whole ``` vendor: [ "globalize", "globalize/dist/globalize-runtime/number", "globalize/dist/globalize-runtime/currency", "globalize/dist/globalize-runtime/date", "globalize/dist/globalize-runtime/message", "globalize/dist/globalize-runtime/plural", "globalize/dist/globalize-runtime/relative-time", "globalize/dist/globalize-runtime/unit" ] ``` part in my webpack.config. Thank you. Closing issue. username_1: Awesome. By the way, using a hardcoded list like that in the example is cumbersome. Note you can make it dynamically by using the `PathChunkPlugin`: ``` new PathChunkPlugin({ name: "vendor", test: "node_modules/" }) ``` Ideally, we should get that example updated using it as well.
deezer/spleeter
842281879
Title: [Bug] Outdated wiki information about "spleeter separate -h"? Question: username_0: - [x] I didn't find a similar issue already open. - [x] I read the documentation (README AND Wiki) - [x] I have installed FFMpeg - [x] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others) ## Description When I'm trying to get help and type `spleeter separate -h` as stated on the page [2. Getting Started](https://github.com/deezer/spleeter/wiki/2.-Getting-started#usage) I get an error. ## Step to reproduce 1. Installed using `pip3 install spleeter` 2. Run as `python3 spleeter separate -h` 3. Got `Error: no such option: -h` error ## Output ``` $ python3 spleeter separate -h Usage: spleeter separate [OPTIONS] FILES... Try 'spleeter separate --help' for help. Error: no such option: -h ``` ## Environment <!-- Fill the following table --> | | | | ----------------- | ------------------------------- | | OS | Linux | | Installation type | pip | | RAM available | N/A | | Hardware spec | N/A | ## Additional context `spleeter separate --help` works correctly. Answers: username_1: Hi @username_0, Indeed, the last version of spleeter uses the `typer` package as CLI argument manager, so the short `-h` form for `--help` no longer works. We'll have a look if we can bring it back easily with typer. In the meantime I've updated the wiki with the long `--help` form. Thank you for spotting this. Status: Issue closed
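Since the change comes from the switch to typer, one commonly cited way to get the short flag back is to forward click's `help_option_names` context setting. The snippet below is a sketch of a possible fix, not spleeter's actual CLI code, and it assumes typer forwards `context_settings` to click as documented; the command body is a placeholder.

```python
# Sketch of a typer CLI that accepts both -h and --help; the app and command
# here are placeholders, not spleeter's real entry points.
import typer

app = typer.Typer(context_settings={"help_option_names": ["-h", "--help"]})


@app.command()
def separate(files: str):
    """Separate audio files (placeholder body)."""
    typer.echo(f"separating {files}")


if __name__ == "__main__":
    app()
```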
syndesisio/syndesis
367754890
Title: Template Step Design, Documentation and Wording Problems Answers: username_1: I created this issue to change the wording of the step description. I think we must do that: https://github.com/syndesisio/syndesis/issues/3757 I can think of possible scenarios where it would be useful to have more than one Template step in an integration so I think we should allow that. For example, some connections provide a huge number of data fields. You might want to capture a few of those fields and send them to a high level executiive. And then capture many more of them and send them to an engineer who is trying to troubleshoot. So I vote for changing the text as I specified in the issue that I pointed to above. Allow more than one template step in an integration. Allow the user to drag in any type of file. username_2: Thank you @username_0 and @username_1 for brining up these issues. - File type I think .tmpl file type was mentioned in the [user story](https://github.com/syndesisio/syndesis/issues/3047), but I'm totally fine with accepting different text based file types (e.g. .txt, .xml, .adoc). However, I think it would still be good to have the ability to restrict file type in case that's desired in the futre. - Multiple Template steps Agreed that it's possbile that an integration could have more than one Template step, and it should be allowed. My question is, can users have back-to-back template steps? cc: @username_0 @username_1 @username_3 @username_4 username_3: I'm good with accepting text based file types. For multiple template steps, I'm ok with leaving that in also, but not a likely use case I think. username_4: Can we try and address the issue with the `dragleave` event firing unexpectedly when you drag a file over the editor in this issue? Currently if you drag a file over the editor, if you hover over the border around the editor, the "drop file here to upload" text appears, then disappears when your mouse actually goes into the editor.
bcgov/wps
940153439
Title: Reference R code source and constant inputs Question: username_0: We should be very clear on how the values are calculated. We should reference the underlying library we're using: https://r-forge.r-project.org/projects/cffdrs/ Also we should reference the constants for crown fuel load as per "Development and Structure of the Canadian Forest Fire Behaviour Prediction System" from Forestry Canada Fire Danger Group, Information Report ST-X-3, 1992 **As a** *(User Type/Persona)* **I want** *(Feature/enhancement)* **So That** *(Value, why is this wanted, what is the user trying to accomplish)* **Additional Context** - enter text here - enter text here **Acceptance Criteria** - [ ] Given (Context), When (action carried out), Then (expected outcome) - [ ] Given (Context), When (action carried out), Then (expected outcome) Answers: username_1: I've been thinking if this should be surfaced on the UI as well for transparency. Some sort of dialog/banner/message that describes all assumptions/constants, where they originate from, etc. IMO if I were a user I'd want to know these things right off the bat. username_1: As a data point, feedback from describing us using the published `ccfdrs` R package was very position Status: Issue closed
react-hook-form/react-hook-form
866922054
Title: When I use `shouldUnregister`, validation is not working after you trigger a check for a conditional field. Question: username_0: When I use `shouldUnregister`, validation is not working after you trigger a check for a conditional field. Test: https://codesandbox.io/s/validation-not-working-with-shouldunregister-cpjxy?file=/src/index.js Step to reproduce the error: 1. Enter test longer than 3 letters in "First name" field 2. Click checkbox 3. Click "submit" button and you can submit form, which should be prevented by validation for length of "First name" field in line 19. Is this an expected behaviour, or am I using `shouldUnregister` in a wrong way? _Originally posted by @AkariNishii in https://github.com/react-hook-form/react-hook-form/discussions/4940_<issue_closed> Status: Issue closed
aws/aws-sdk-php
984525491
Title: Field "ContentMD5" not listed for S3 PutObject operation Answers: username_1: @username_0 Thanks for bringing this up :) It appears this was done on purpose in this commit: https://github.com/aws/aws-sdk-php/commit/e671f28842f5e3fc0248ccf6c1c8b869114b5778#diff-850aaf548a36b64042d2ddaa1c7d26afe884fd61b5e63cf9548fb7551df97ebbR521 I'm not sure why he would have done that, potentially for some legacy reason that no longer apply? I will do some research and testing around it, and update when I'm able username_2: Linking https://github.com/aws/aws-sdk-php/issues/2256
sybrenstuvel/python-rsa
169816522
Title: BytesWarning in rsa.PrivateKey.load_pkcs1 Question: username_0: Running the code in the [Generating keys](https://stuvel.eu/python-rsa-doc/usage.html#generating-keys) documentation and enabling `BytesWarning`s with `-bb` results in a `BytesWarning`: ```python import rsa with open('private.pem') as privatefile: keydata = privatefile.read() pubkey = rsa.PrivateKey.load_pkcs1(keydata) ``` ```sh C:\Python35\python.exe -bb "C:/Users/<NAME>/PycharmProjects/test/a.py" Traceback (most recent call last): File "C:/Users/<NAME>/PycharmProjects/test/a.py", line 6, in <module> pubkey = rsa.PrivateKey.load_pkcs1(keydata) File "C:\Python35\lib\site-packages\rsa\key.py", line 75, in load_pkcs1 return method(keyfile) File "C:\Python35\lib\site-packages\rsa\key.py", line 511, in _load_pkcs1_pem return cls._load_pkcs1_der(der) File "C:\Python35\lib\site-packages\rsa\key.py", line 439, in _load_pkcs1_der (priv, _) = decoder.decode(keyfile) File "C:\Python35\lib\site-packages\pyasn1\codec\ber\decoder.py", line 825, in __call__ stGetValueDecoder, self, substrateFun File "C:\Python35\lib\site-packages\pyasn1\codec\ber\decoder.py", line 342, in valueDecoder component, head = decodeFun(head, asn1Spec) File "C:\Python35\lib\site-packages\pyasn1\codec\ber\decoder.py", line 825, in __call__ stGetValueDecoder, self, substrateFun File "C:\Python35\lib\site-packages\pyasn1\codec\ber\decoder.py", line 95, in valueDecoder if head in self.precomputedValues: BytesWarning: Comparison between bytes and string ``` Tested on Windows 10 with Python 3.5.2. Answers: username_1: Thanks for including an example in the report, it makes it much easier to triage. I've tried different things in feeding the `pyasn1` decoder the DER value, but unfortunately there doesn't seem to be a way on our side to address this. Fortunately, this [has apparently been fixed](https://sourceforge.net/p/pyasn1/discussion/324581/thread/45aadffe/) in the master branch of `pyasn1`. I'll keep this bug report open, to remind ourselves to update the minimum required version in `requirements.txt`, once this fix in `pyasn1` has actually been released. username_2: (Awesome library, btw) I'm not sure if this is the same issue. But I get an error when trying to read a PEM encoded private key as well (but from OpenSSL). DER encoding works fine, though: ``` # coding: utf-8 import rsa from OpenSSL import crypto # get an RSA key from OpenSSL k = crypto.PKey() k.generate_key(crypto.TYPE_RSA, 4096) # convert from OpenSSL format to pure python RSA format serialized_key = crypto.dump_privatekey(crypto.FILETYPE_PEM, k) # PEM doesn't work... 
rsak = rsa.PrivateKey.load_pkcs1( serialized_key.replace(b'PRIVATE', b'RSA PRIVATE'), 'PEM') ``` ``` (venv) D:\My Data\Projects\SslTesting\ssltester>python problem.py Traceback (most recent call last): File "problem.py", line 14, in <module> serialized_key.replace(b'PRIVATE', b'RSA PRIVATE'), 'PEM') File "D:\My Data\Projects\SslTesting\ssltester\venv\lib\site-packages\rsa\key.py", line 75, in load_pkcs1 return method(keyfile) File "D:\My Data\Projects\SslTesting\ssltester\venv\lib\site-packages\rsa\key.py", line 511, in _load_pkcs1_pem return cls._load_pkcs1_der(der) File "D:\My Data\Projects\SslTesting\ssltester\venv\lib\site-packages\rsa\key.py", line 459, in _load_pkcs1_der as_ints = tuple(int(x) for x in priv[1:9]) File "D:\My Data\Projects\SslTesting\ssltester\venv\lib\site-packages\rsa\key.py", line 459, in <genexpr> as_ints = tuple(int(x) for x in priv[1:9]) TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence' ``` username_3: Just a quick note that pyasn1 0.2.1 is out. ;) username_1: The issue seems to be fixed with the latest pyasn1 :) I won't increase the minimum version of pyasn1 in `setup.py`, to keep things compatible with slightly older installs. Status: Issue closed username_4: I still have an similar problem: `self.rsaKey = rsa.PrivateKey.load_pkcs1(data, format='PEM')` ``` File "/usr/local/lib/python3.6/dist-packages/rsa/key.py", line 75, in load_pkcs1 return method(keyfile) File "/usr/local/lib/python3.6/dist-packages/rsa/key.py", line 511, in _load_pkcs1_pem return cls._load_pkcs1_der(der) File "/usr/local/lib/python3.6/dist-packages/rsa/key.py", line 459, in _load_pkcs1_der as_ints = tuple(int(x) for x in priv[1:9]) File "/usr/local/lib/python3.6/dist-packages/rsa/key.py", line 459, in <genexpr> as_ints = tuple(int(x) for x in priv[1:9]) TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence' ``` (Ubuntu 17.10, Python 3.6, pyasn1 0.4.2 and downgraded 0.2.1) Any advice? username_3: Feels like you are feeding ASN.1 SEQUENCE to `int()` which understandably fails. Can it be some other data structure that you accidentally operate on, not RSA key? Also, I am not sure which version of the package do you use because line numbers in master [are different](https://github.com/username_1/python-rsa/blob/master/rsa/key.py#L459). A reproducer would definitely help understanding it. 
username_4: Thank you for your answer :smile: I have followed your [example](https://stuvel.eu/python-rsa-doc/usage.html#generating-keys) like this: ``` with open('private.pem', mode='rb') as key: data = key.read() print(data) # b'-----BEGIN RSA PRIVATE KEY-----\nMIIEvAIBADANBgkqhkiG9w0....iuq56RYf/m20w==\n-----END RSA PRIVATE KEY-----' self.rsaKey = rsa.PrivateKey.load_pkcs1(data, format='PEM') ``` I have installed Version: 3.4.2 (latest version via pip) The error appears in [this line](https://github.com/username_1/python-rsa/blob/fd70d79610ac1af8b072aa27fadf660b4a64797c/rsa/key.py#L459) username_4: Update: I have installed the version from master and debugged step by step throw the code: Error message: ``` self.rsaKey = rsa.PrivateKey.load_pkcs1(data) File "/usr/local/lib/python3.6/dist-packages/rsa/key.py", line 118, in load_pkcs1 return method(keyfile) File "/usr/local/lib/python3.6/dist-packages/rsa/key.py", line 560, in _load_pkcs1_pem return cls._load_pkcs1_der(der) File "/usr/local/lib/python3.6/dist-packages/rsa/key.py", line 495, in _load_pkcs1_der key = cls(*as_ints) TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence' ``` ([key.py line 495](https://github.com/username_1/python-rsa/blob/master/rsa/key.py#L495)) --- my debugger can not look into the map `as_ints` (only says `map object at 0xhex`) but I have found out that `as_ints` is created from `priv` which is type `Sequence` ( `(priv, _) = decoder.decode(keyfile)`) I think this is the problem :arrow_up: username_1: Can you paste the output of `pip3 freeze`? username_4: I have installed a lot :sweat_smile: ``` $ pip3 freeze Adafruit-BMP==1.5.2 Adafruit-DHT==1.1.1 Adafruit-GPIO==1.0.3 Adafruit-PureIO==0.2.1 apt-xapian-index==0.47 apturl==0.5.2 asn1crypto==0.22.0 beautifulsoup4==4.6.0 blinker==1.3 Brlapi==0.6.5 certifi==2017.4.17 chardet==3.0.4 checkbox-ng==0.23 checkbox-support==0.22 command-not-found==0.3 coverage==4.4.1 cryptography==1.9 cupshelpers==1.0 defer==1.0.6 devscripts===2.17.9build1 distro-info==0.17 feedparser==5.1.3 file-magic==0.3.0 guacamole==0.9.2 html5lib==0.999999999 httplib2==0.9.2 idna==2.5 implements==0.1.3 Jinja2==2.9.6 keyring==10.4.0 keyrings.alt==2.2 language-selector==0.1 launchpadlib==1.10.5 lazr.restfulclient==0.13.5 lazr.uri==1.0.3 louis==3.0.0 lxc==0.1 lxml==4.0.0 Mako==1.0.7 Markdown==2.6.9 MarkupSafe==1.0 meld==3.18.0 mock==2.0.0 numpy==1.13.1 oauth==1.0.1 oauthlib==2.0.1 olefile==0.44 onboard==1.4.1 padme==1.1.1 pbr==2.0.0 pexpect==4.2.1 Pillow==4.1.1 piston-mini-client==0.7.5 plainbox==0.25 pyasn1==0.2.1 pycrypto==2.6.1 [Truncated] unity-scope-openclipart==0.1 unity-scope-texdoc==0.1 unity-scope-tomboy==0.1 unity-scope-virtualbox==0.1 unity-scope-yelp==0.1 unity-scope-zotero==0.1 urllib3==1.21.1 usb-creator==0.3.3 vboxapi==1.0 wadllib==1.3.2 webencodings==0.5 websocket-client==0.46.0 xdiagnose==3.8.8 xkit==0.0.0 XlsxWriter==0.9.6 zope.interface==4.3.2 $ ``` If it is necessary I can create an `virtualenv`... 
username_4: Still same error on a virtualenv: ``` (env) $ pip freeze pyasn1==0.4.2 rsa==4.0a0 (env) $ ``` username_5: Same issue here: ` File "/usr/local/lib/python3.6/site-packages/rsa/key.py", line 75, in load_pkcs1 return method(keyfile) File "/usr/local/lib/python3.6/site-packages/rsa/key.py", line 511, in _load_pkcs1_pem return cls._load_pkcs1_der(der) File "/usr/local/lib/python3.6/site-packages/rsa/key.py", line 459, in _load_pkcs1_der as_ints = tuple(int(x) for x in priv[1:9]) File "/usr/local/lib/python3.6/site-packages/rsa/key.py", line 459, in <genexpr> as_ints = tuple(int(x) for x in priv[1:9]) TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence'` Pip freeze tells me: ` pyasn1==0.4.2 rsa==3.4.2 ` username_6: I get the same error! How to solve it later? username_3: It seems that you are running some other version of `rsa` because master has it like [priv[1:6]](https://github.com/username_1/python-rsa/blob/master/rsa/key.py#L495) while your traceback shows `priv[1:9]`. I can imagine that if you get some `Sequence` at the end of `priv`, the whole thing fails at casting it into `int`. So upgrade and try again. Another guess is that it has something to do with the PKCS#1 data you are feeding it. The way how it's being decoded (e.g. w/o `asn1Spec` given to decoder) allows for any valid ASN.1 data structure to be created out of the input. To figure out what's actually being decoded let's add this statement: print(priv.prettyPrint()) at [line 493](https://github.com/username_1/python-rsa/blob/master/rsa/key.py#L493) and watch the output. username_6: I just test like this,and it works: I got the same error when the key string has multi-lines: ``` -----BEGIN PRIVATE KEY----- <KEY> -----END PRIVATE KEY----- ``` When I set the string reshape to one-line will be OK: ``` -----BEGIN PRIVATE KEY----- <KEY> -----END PRIVATE KEY----- ``` username_7: Getting the same error: `TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence'` How to circumvent this without editing `.pem` files? username_8: The same thing happens to me too, `TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence'` It happens once I try to do this: `priv_key = rsa.PrivateKey.load_pkcs1(pem)` It is working with a single PEM I'm using but once I tried to replace it with another, it stopped. both are a one-line PEMs username_9: I met the save issue, it worked fine with PCSK#1. But failed when use PKCS#1.5 OpenSSL. ` as_ints = tuple(int(x) for x in priv[1:9]) TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence' ` @username_1 Can you please help to this. username_10: `TypeError: int() argument must be a string or a number, not 'Sequence'` is still happening in both python2 and python3, are we still waiting for an upstream release from pyasn1? I've tried multi-line and single-line PEM. Is there any workaround? username_3: Reading through the issue, I am not sure there is anything to be fixed on the pyasn1 side. Tell me is this is a misunderstanding. BTW, the latest pyasn1 master has been released a couple of weeks ago. username_10: Thanks for the quick reply! I did try with the latest master, same error! I've been trying to analyze the code in key.py, but I'm afraid I don't know enough about what is expected and what is actually returned by the decoder to attempt a fix. username_11: I have same question. Have you solved this problem? And How? 
` File "C:\Python37\lib\site-packages\rsa\key.py", line 118, in load_pkcs1 return method(keyfile) File "C:\Python37\lib\site-packages\rsa\key.py", line 560, in _load_pkcs1_pem return cls._load_pkcs1_der(der) File "C:\Python37\lib\site-packages\rsa\key.py", line 495, in _load_pkcs1_der key = cls(*as_ints) TypeError: int() argument must be a string, a bytes-like object or a number, not 'Sequence' ` username_3: As a way to debug this, try adding this print statement before line 495: ``` print(priv.prettyPrint()) print(as_ints.prettyPrint()) ``` I suspect it has something to do with the structure of the private key. username_12: The private key string must be in PKCS#1 format, because the python rsa library does not support PKCS#8! Just convert the format manually. username_13: http://www.metools.info/code/c84.html username_14: That works, thank you.
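The recurring `TypeError ... not 'Sequence'` in the later comments matches that explanation: a PKCS#8 PEM (header `BEGIN PRIVATE KEY`) is being fed to `load_pkcs1()`, which only understands PKCS#1 (`BEGIN RSA PRIVATE KEY`). Below is a small illustrative guard, not part of the rsa library; the helper name is hypothetical and the openssl conversion command is a common recipe rather than something from the thread.

```python
# Illustrative guard: load_pkcs1() only understands PKCS#1 PEM blocks, so
# detect a PKCS#8 header up front and give a clearer error than the decoder's
# TypeError about 'Sequence'.
import rsa


def load_pkcs1_private_key(pem_bytes):
    if b"BEGIN PRIVATE KEY" in pem_bytes and b"BEGIN RSA PRIVATE KEY" not in pem_bytes:
        raise ValueError(
            "This looks like a PKCS#8 key ('BEGIN PRIVATE KEY'); convert it to "
            "PKCS#1 first, e.g. with: openssl rsa -in key.pem -out key_pkcs1.pem"
        )
    return rsa.PrivateKey.load_pkcs1(pem_bytes, format="PEM")


with open("private.pem", "rb") as handle:
    key = load_pkcs1_private_key(handle.read())
```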
coleifer/peewee
300074860
Title: Peewee+PSQL: create_tables for complex object relationships broken when DeferredRelations in play Question: username_0: As in the docs, circular FK dependencies are a code smell, and I agree, but here we are. I've got a big object domain with some of these objects having circular foreign keys. In `peewee` versions in the 2.8 line, `create_tables` seemed to be able to resolve any `DeferredRelation`s used for circular FK relationships and tell PSQL to create the tables appropriately. I see in the [docs](http://docs.peewee-orm.com/en/2.10.2/peewee/models.html#circular-foreign-key-dependencies) that there's a more manual approach documented, but I've never encountered need for it until upgrading from 2.8.x to 2.10.x. Two questions: 1) Am I correct in finding that the behavior of automatically doing this was done by `peewee` prior, and is now no longer supported? (otherwise, what magic could it have been that made this work before?) 2) Do you have any recommended methods for enabling a more opinionated methodology to creating tables in this way, or automating the process as documented? My CI environment does a build/teardown of tables every run (sometimes every test) and I'm thinking that having a special codepath for each `DeferredRelation`'s setup is a little unclean. I know it's a way uncommon use-case, but I'm hoping you might have some top-of-the-head insight as to what in particular changed; the changelog between the versions don't really show anything that appears related. As always, thanks for your continued work and support on this tool -- it's a wonderful piece of tech. Status: Issue closed Answers: username_1: I'm not sure about the answer to 1, but I'm pretty sure that before the only approach that worked was to substitute the circular instance of the `ForeignKeyField` with an `IntegerField` (or whatever), e.g.: ```python class User(Model): favorite_tweet_id = IntegerField() class Tweet(Model): user = ForeignKeyField(User) ``` In the above instance no constraints are created. The approach described in the docs for 2.10 allows you to declare the foreign-key *and* create the constraint: ```python # Create a reference object to stand in for our as-yet-undefined Tweet model. DeferredTweet = DeferredRelation() class User(Model): username = CharField() # Tweet has not been defined yet so use the deferred reference. favorite_tweet = ForeignKeyField(DeferredTweet, null=True) class Tweet(Model): message = TextField() user = ForeignKeyField(User, related_name='tweets') # Now that Tweet is defined, we can initialize the reference. DeferredTweet.set_model(Tweet) # Foreign key constraint from User -> Tweet will NOT be created because the # Tweet table does not exist yet. `favorite_tweet` will just be a regular # integer field: User.create_table() # Foreign key constraint from Tweet -> User will be created normally. Tweet.create_table() # Now that both tables exist, we can create the foreign key from User -> Tweet: # NOTE: this will not work in SQLite! db.create_foreign_key(User, User.favorite_tweet) ``` In 3.0 it has gotten somewhat easier: ```python class User(Model): username = CharField() # Tweet has not been defined yet so use the deferred reference. favorite_tweet = DeferredForeignKey('Tweet', null=True) class Tweet(Model): message = TextField() user = ForeignKeyField(User, backref='tweets') db.create_tables([User, Tweet]) ``` The 3.x code will not create the constraint, as there's no equivalent `db.create_foreign_key` method in 3.x. 
So if you wish to add the constraint, you'll want to use the migrator: ```python migrator = SchemaMigrator.from_database(my_db) migrator.migrate( migrator.add_foreign_key_constraint('table', 'column', 'rel table', 'rel column'), ) ``` username_1: As in the docs, circular FK dependencies are a code smell, and I agree, but here we are. I've got a big object domain with some of these objects having circular foreign keys. In `peewee` versions in the 2.8 line, `create_tables` seemed to be able to resolve any `DeferredRelation`s used for circular FK relationships and tell PSQL to create the tables appropriately. I see in the [docs](http://docs.peewee-orm.com/en/2.10.2/peewee/models.html#circular-foreign-key-dependencies) that there's a more manual approach documented, but I've never encountered need for it until upgrading from 2.8.x to 2.10.x. Two questions: 1) Am I correct in finding that the behavior of automatically doing this was done by `peewee` prior, and is now no longer supported? (otherwise, what magic could it have been that made this work before?) 2) Do you have any recommended methods for enabling a more opinionated methodology to creating tables in this way, or automating the process as documented? My CI environment does a build/teardown of tables every run (sometimes every test) and I'm thinking that having a special codepath for each `DeferredRelation`'s setup is a little unclean. I know it's a way uncommon use-case (and a noncurrent version on top of that), but I'm hoping you might have some top-of-the-head insight as to what in particular changed; the changelog between the versions don't really show anything that appears related. As always, thanks for your continued work and support on this tool -- it's a wonderful piece of tech. username_1: I think I'll reopen this to add the "create_foreign_key()" API back. username_1: See commit c0c5a0af9ee985bbcb1c4350d992925c1d5b7799 -- updated above comments to reflect changes. Status: Issue closed username_1: Documentation for 3.x updated: * http://docs.peewee-orm.com/en/latest/peewee/models.html#circular-foreign-key-dependencies * http://docs.peewee-orm.com/en/latest/peewee/api.html#SchemaManager.create_foreign_key username_0: My goodness you're fantastic. Thanks for the quick turnaround and thoughts. Any thoughts as to why it was just working before? I was using the `test_database` context manager / decorator from `playhouse` which in turn uses `create_model_tables`. It hadn't had any problems or complaints until I upgraded from `2.8.0` (woah, just realized how far behind I'd been; that's like 2 years!) just recently. username_1: By "working before" do you mean that the constraints were all set-up in the database schema? I'm not sure that that's the case. I'm also not sure what specific problems/complaints you ran into after upgrading? username_0: Sorry, to clarify: 2.8.0: using `test_database` decorator had no problem building the entire graph of tables (circular FK relationships included) from scratch in my test suite, for each test. >2.8.0: some tables just fail to create because their depended-upon tables don't yet exist. > username_1: You're positive that *both* foreign key constraints are being created? Hmm... and what version are you using now? It'd be great if you could show me the versions you're using and the exact issues you're experiencing with each. username_0: I'll have to pull together an example for you without proprietary info. 
I'm not certain the actual FK constraint is being generated, frankly, but at least the tables get successfully created. I am certain that `2.8.0` it "works" on -- insofar that at least the tables don't fail to be created. If you're okay with waiting a bit, I'll try to come up with a simplified example and find the version breakpoint. username_1: As in the docs, circular FK dependencies are a code smell, and I agree, but here we are. I've got a big object domain with some of these objects having circular foreign keys. In `peewee` versions in the 2.8 line, `create_tables` seemed to be able to resolve any `DeferredRelation`s used for circular FK relationships and tell PSQL to create the tables appropriately. I see in the [docs](http://docs.peewee-orm.com/en/2.10.2/peewee/models.html#circular-foreign-key-dependencies) that there's a more manual approach documented, but I've never encountered need for it until upgrading from 2.8.x to 2.10.x. Two questions: 1) Am I correct in finding that the behavior of automatically doing this was done by `peewee` prior, and is now no longer supported? (otherwise, what magic could it have been that made this work before?) 2) Do you have any recommended methods for enabling a more opinionated methodology to creating tables in this way, or automating the process as documented? My CI environment does a build/teardown of tables every run (sometimes every test) and I'm thinking that having a special codepath for each `DeferredRelation`'s setup is a little unclean. I know it's a way uncommon use-case (and a noncurrent version on top of that), but I'm hoping you might have some top-of-the-head insight as to what in particular changed; the changelog between the versions don't really show anything that appears related. As always, thanks for your continued work and support on this tool -- it's a wonderful piece of tech. username_1: Thanks, that'd be great. I'm willing to bet that the constraint isn't being created in the 2.8 example, so it appears to work, but if you wanted you would be able to break referential integrity. You could test this with your existing code: ```python class User(Model): favorite_tweet = DeferredForeignKey('Tweet') username = TextField() class Tweet(Model): user = ForeignKeyField(User, backref='tweets') content = TextField() db.create_tables([User, Tweet]) # If the constraint exists, the following will raise an error since the # tweet with the corresponding ID (12345 in the example) wouldn't exist: u = User.create(favorite_tweet=12345, username='foo') ``` username_1: Re-reading the issue, I'm a little unclear what exact issue you're encountering? Is there a specific error you're running into after upgrading? Is this error related to `create_tables()` or does it only happen when using some test setup? For what it's worth, peewee 3.x removes `test_database`. There are a few alternatives described in #1440 and the peewee tests themselves. username_0: I'll poke around the 2.10.x changelog and commits to see what might have happened, but I'd guess we're zeroing in on some change that more strictly enforces the creation of the constraint straightaway. Status: Issue closed username_1: Hmm... it looks like changes to the way deferred foreign-keys were resolved, although there are quite a few changes so I'm not positive. At any rate, I don't think there's a whole lot I can offer, as Peewee is now at 3.x and any fixes would pertain to how the code functions in 3.x (which is somewhat different from 2.10). Sorry not to be of more help. 
username_0: I appreciate your help with the investigation. I'm looking to get to 3.x eventually, but wanted to understand what was going wrong here before I opened that can of worms.
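To tie the two linked docs pages together, the 3.x shape of this pattern looks roughly like the sketch below, reusing the User/Tweet example from earlier in the thread. The exact `create_foreign_key()` call shape is an assumption taken from the SchemaManager docs linked above, so double-check the signature there; the database connection values are placeholders.

```python
# Rough sketch of the peewee 3.x circular-FK pattern from the linked docs.
# The create_foreign_key() call shape is an assumption based on the
# SchemaManager documentation referenced above; verify the exact signature.
from peewee import DeferredForeignKey, ForeignKeyField, Model, PostgresqlDatabase, TextField

db = PostgresqlDatabase("my_app_db")  # placeholder connection; the thread uses PostgreSQL

class BaseModel(Model):
    class Meta:
        database = db

class User(BaseModel):
    username = TextField()
    # Declared as deferred because Tweet does not exist yet at this point.
    favorite_tweet = DeferredForeignKey("Tweet", null=True)

class Tweet(BaseModel):
    user = ForeignKeyField(User, backref="tweets")
    content = TextField()

db.create_tables([User, Tweet])
# The deferred constraint is not part of CREATE TABLE, so add it afterwards:
User._schema.create_foreign_key(User.favorite_tweet)
```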
microsoft/playwright
1149845311
Title: [Question] Is there a way to configure playwright test runner NOT to launch a browser before executing test cases? Question: username_0: My use cases are around the following: - converting existing tests from Jest Runner to Playwright Test Runner: - these existing tests launch a browser within test script already - these tests may not need a browser at all as these are only API tests - writing new only API tests or any other tests which do not require a browser context to work In the above use cases, preparation of a browser context before running test scripts is a wasted compute resource and time in CI. Is there a way to use Playwright test runner without it launching a browser by default? PS: sorry if I missed something in the documentation, I read it all about the configuration and have not found an answer. I am evaluating Playwright **test runner** as a standalone alternative to Jest runner. Answers: username_1: As long as you don't use any of the fixtures that depend on the browser (ie page, context, or browser) no browsers should be launched. Additionally, you can install `@playwright/test` package (and not install `playwright`). username_0: Perfect! Thanks username_1: This applies to the [built-in fixtures](https://playwright.dev/docs/test-fixtures#built-in-fixtures) as well as fixtures you define. As you evaluate Playwright, feel free to ask questions as they come up! Fixtures are one of my favorite features/idioms of Playwright Test! 😄 [^1]: https://playwright.dev/docs/test-fixtures#with-fixtures username_0: Very cool. A lot of useful features packed in fixtures. One question: is there any 'teardown' / 'disconnect' / 'close' or 'destroy' for a fixture instance? For example, sometimes we re-use the same database client across tests in the same worker, but sometimes we need this client to be 're-created', (i.e. old client destroyed, connections closed, resources recycled and the new client instance created maybe with the same, maybe we new parameters). username_1: Let's say the parameter is the DB user role. If you only have a few, you can define a fixture per each (`adminDB`, `readonlyDB`), and then use the fixture(s) in whichever tests need them. If you have a lot of unique parameters you're changing in each test, you can use a helper method (e.g. `connectToDB: (connectionDetails: ConnectionParams) => Promise<DB>`) which could itself be a fixture (depending on the use case). Status: Issue closed
meelgroup/DeQuS
949488145
Title: Error in the smt-file generation Question: username_0: Hi everybody,
I think there is an error in the generation of the smt file. If you have a `synth-fun` with an argument `d`, then the replace in line 50 of smt2.py will transform `(declare-fun` into `eclare-fun`.
If you replace `line.replace("("+var,"")` with `line.replace("("+var+" ","")`, this should fix the problem.
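The collision is easy to see by running the two calls side by side; the snippet below is only an illustration (the input line is made up, it is not the real line 50 context from smt2.py).

```python
# Reproduces the substitution bug described above: with a synth-fun argument
# named "d", stripping "(" + var also eats the "(d" prefix of "(declare-fun".
var = "d"
line = "(declare-fun x () Int)"   # invented example line

print(line.replace("(" + var, ""))        # eclare-fun x () Int)   (too greedy)
print(line.replace("(" + var + " ", ""))  # (declare-fun x () Int) (unchanged)
```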
ant-design/ant-design
149763367
Title: [Website] Part of the Menu popup demo is covered up Question: username_0: <img width="473" alt="qq20160420-1 2x" src="https://cloud.githubusercontent.com/assets/2723376/14675558/2acdc120-073c-11e6-9fbf-734293df1c84.png"> Answers: username_1: https://github.com/ant-design/ant-design/commit/27e6460fde270495a18d5b097f187da6e72a078a This is probably what caused it.
username_0: Just verified it, and that is indeed the cause.
username_2: I followed the example on the website and the menu has a problem. Please take a look, **右哥**.
![image](https://cloud.githubusercontent.com/assets/13232941/14697765/a3e873c0-07b9-11e6-8a55-f3e2556959ac.png)
Status: Issue closed
apollographql/apollo-server
386130280
Title: Defer usage Question: username_0: On my Apollo Server I am running: ``` "apollo-server": "^2.2.5", "express": "^4.16.4", "graphql": "^14.0.2", "graphql-tools": "^4.0.3", ``` But I am unable to try out the `defer` directive, as my server is returning: `Unknown directive "defer".` I am unsure how to enable this feature. I have read all information I could find on `defer`, but no matter what I do I can not enable it. On my client I am also running latest version of React Apollo. How do I enable `defer`? :-) Answers: username_1: I got the same issue using `apollo-server` and `apollo-server-express`. `package.json`: ``` "apollo-server": "^2.4.0-alpha.0", "apollo-server-express": "^2.4.0-alpha.0" ``` Within the defer support doc [it says that I have to explicit enable the support ](https://github.com/apollographql/apollo-server/blob/defer-support/docs/source/defer-support.md#apollo-server-variants) when using `apollo-server-express` but without actually explaining how to add my custom handler. username_2: can you please give a hint on the docs that `@defer` is currently not working and a upcoming feature? username_3: I'm curious if there is a reason to not release the client-side support on it's own, especially given that `graphql-java` just merged defer support. username_4: We're tracking new `@defer` / `@stream` work in https://github.com/apollographql/apollo-server/issues/5893. Thanks! Status: Issue closed
adriens/colisnc-sdk
527951498
Title: getLatestStatusForColisList fails if one of the parcels does not exist Question: username_0: ```
List<String> aListOfColis = Arrays.asList(new String[]{"RP733152095CN", "XXX", "CA107308006SI", "7A53946342222"});
List<ColisDataRow> latestStatus = ColisCrawler.getLatestStatusForColisList(aListOfColis);
Iterator<ColisDataRow> iterLatest = latestStatus.iterator();
ColisDataRow aRow;
while(iterLatest.hasNext()){
    aRow = iterLatest.next();
    System.out.println(aRow);
}
```
now returns:
```
Date/Heure : <07/06/2019 11:25:23>
Localisation : <DUMBEA MAIRIE>
Status : <COLIS_LIVRE>
Colis: <RP733152095CN>

Date/Heure : <25/11/2019 08:18:31>
Localisation : <NOUMEA AGENCE PRINCIPALE>
Status : <COLIS_LIVRE>
Colis: <CA107308006SI>

Date/Heure : <09/09/2019 09:41:13>
Localisation : <NOUMEA CDC>
Status : <COLIS_LIVRE>
Colis: <7A53946342222>

Date/Heure : <07/06/2019 11:25:23>
Localisation : <DUMBEA MAIRIE>
Status : <COLIS_LIVRE>
```
Status: Issue closed Answers: username_0: Fixed 👍
vaadin/vaadin-combo-box
128753956
Title: Deleting characters do not always return cursor to correct position on iOS Question: username_0: Typing and deleting a character does not always return the cursor to correct position on iOS. Visually it looks like there is an extra space in the beginning. Pressing backspace again will delete it and return cursor to the beginning of the field. See attached Gif for demo of the situation. Notice that the extra space also automatically disappears if one starts typing again while the space is visible (i.e. resulting typed text without any space in the beginning). ![delete-cursor-position](https://cloud.githubusercontent.com/assets/368220/12575547/f1685260-c415-11e5-9fc1-12e1b093d13c.gif) Answers: username_1: Can't reproduce on iOS (10) Safari/Chrome/FF. Closing this issue now. Probably been fixed in some paper-input update. Status: Issue closed
TensorSpeech/TensorFlowTTS
793875093
Title: Evaluation issues for Tacotron2 during training Question: username_0: I'm currently trying to train a voice using Tacotron2, basically following the Tacotron2 example. I'm using the LJSpeech pre-trained model as a base model and am using additional data (150 English sentences in LJSpeech format) to adapt the voice to that of a target speaker. At the moment I'm running it off Google Colab due to personal hardware restrictions so therefore would sometimes need to reinstall TensorflowTTS with pip. The command I'm executing is this: ``` !CUDA_VISIBLE_DEVICES=0 python examples/tacotron2/train_tacotron2.py \ --train-dir ./dump_akl_nz_sh/train/ \ --dev-dir ./dump_akl_nz_sh/valid/ \ --outdir ./examples/tacotron2/exp/train.tacotron2.v1/ \ --config ./examples/tacotron2/conf/tacotron2.v1.yaml \ --use-norm 1 \ --mixed_precision 0 \ --pretrained ./examples/tacotron2/exp/train.tacotron2.v1/checkpoints/model-120000_LJSpeech.h5 ``` The command was working previously earlier in the month. However. over the last few days I'm having an issue where during evaluation after 500 steps the UnboundLocalError is thrown: ``` [train]: 0% 485/200000 [41:17<277:26:54, 5.01s/it]2021-01-26 02:13:11,548 (base_trainer:140) INFO: (Steps: 485) Finished 97 epoch training (5 steps per epoch). [train]: 0% 490/200000 [41:42<275:16:32, 4.97s/it]2021-01-26 02:13:36,405 (base_trainer:140) INFO: (Steps: 490) Finished 98 epoch training (5 steps per epoch). [train]: 0% 495/200000 [42:07<276:56:32, 5.00s/it]2021-01-26 02:14:01,579 (base_trainer:140) INFO: (Steps: 495) Finished 99 epoch training (5 steps per epoch). [train]: 0% 500/200000 [42:32<276:38:55, 4.99s/it]2021-01-26 02:14:26,457 (base_trainer:883) INFO: (Steps: 500) Start evaluation. [eval]: 0it [00:00, ?it/s]2021-01-26 02:14:26.620295: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:906] Skipping loop optimization for Merge node with control input: cond/branch_executed/_8 [eval]: 0it [00:00, ?it/s] Traceback (most recent call last): File "examples/tacotron2/train_tacotron2.py", line 513, in <module> main() File "examples/tacotron2/train_tacotron2.py", line 505, in main resume=args.resume, File "/usr/local/lib/python3.6/dist-packages/tensorflow_tts/trainers/base_trainer.py", line 999, in fit self.run() File "/usr/local/lib/python3.6/dist-packages/tensorflow_tts/trainers/base_trainer.py", line 103, in run self._train_epoch() File "/usr/local/lib/python3.6/dist-packages/tensorflow_tts/trainers/base_trainer.py", line 129, in _train_epoch self._check_eval_interval() File "/usr/local/lib/python3.6/dist-packages/tensorflow_tts/trainers/base_trainer.py", line 166, in _check_eval_interval self._eval_epoch() File "/usr/local/lib/python3.6/dist-packages/tensorflow_tts/trainers/base_trainer.py", line 897, in _eval_epoch f"(Steps: {self.steps}) Finished evaluation " UnboundLocalError: local variable 'eval_steps_per_epoch' referenced before assignment [train]: 0% 500/200000 [42:32<282:55:56, 5.11s/it] ``` Any help to resolve this would be greatly appreciated! Answers: username_1: @username_0 i do not know why it happened, see the code here (https://github.com/TensorSpeech/TensorFlowTTS/blob/master/tensorflow_tts/trainers/base_trainer.py#L897), i don't think there is a bug ? username_2: @username_0 This may be an issue with your batch size. Make sure that the batch-size in the config file is greater than the number of audio clips in your dataset. See #498 username_3: @username_1 it seems like pip package is not up to date. I am also getting the above error.
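The `eval_steps_per_epoch` variable in that traceback is only assigned inside the evaluation loop, so an evaluation dataset that yields zero batches (for example, an eval split smaller than the batch size with remainders dropped) leaves it unbound. The sketch below is not the trainer code, just a minimal reproduction of the failure mode and the kind of default binding that avoids it.

```python
# Minimal reproduction of the UnboundLocalError above: the loop variable is only
# bound if the eval loader yields at least one batch. Not the actual trainer code.
import logging

logging.basicConfig(level=logging.INFO)

def eval_epoch(eval_batches, steps):
    for eval_steps_per_epoch, batch in enumerate(eval_batches, 1):
        pass  # the real trainer would run an eval step here
    # Raises UnboundLocalError when eval_batches produced no batches at all.
    logging.info("(Steps: %d) Finished evaluation (%d steps per epoch)",
                 steps, eval_steps_per_epoch)

def eval_epoch_guarded(eval_batches, steps):
    eval_steps_per_epoch = 0  # default binding makes an empty loader harmless
    for eval_steps_per_epoch, batch in enumerate(eval_batches, 1):
        pass
    logging.info("(Steps: %d) Finished evaluation (%d steps per epoch)",
                 steps, eval_steps_per_epoch)

eval_epoch_guarded([], steps=500)  # logs "0 steps per epoch"
eval_epoch([], steps=500)          # raises UnboundLocalError, like the report above
```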
titouancreach/bomberman
148491246
Title: Move assets in the output directory Question: username_0: At the moment Gradle doesn't copy the asset directory into `./build/exe/bomberman/release` and the other output directories. The task I added is sometimes buggy (it gets marked as always up to date) and sometimes the assets are not copied properly.
Moreover, if I only want to build the debug variant with the task `bombermanDebugExecutable`, I don't want my assets to be copied into the release directory.
We should have custom rules for each of the build types (the status given by `gradle properties` might be useful)<issue_closed>
Status: Issue closed
operator-framework/operator-sdk
442132912
Title: how to CGO_ENABLED=1 in "operator-sdk build" Question: username_0: <!-- Thanks for filing an issue! Before hitting the button, please answer these questions. Fill in as much of the template below as you can. If you leave out information, we can't help you as well. We will try our best to answer the question, but we also have a mailing list and slack channel for any other questions. --> ## Type of question **Are you asking about community best practices, how to implement a specific feature, or about general context and help around the operator-sdk?** ## Question **What did you do?** ``` operator-sdk build my project ``` **What did you expect to see?** build successfully **What did you see instead? Under which circumstances?** ``` operator-sdk build harbor.haodai.net/db/codis-operator:v0.0.1 # g.haodai.net/chenmin/codis-operator/vendor/github.com/CodisLabs/codis/pkg/utils/unsafe2 vendor/github.com/CodisLabs/codis/pkg/utils/unsafe2/cgo_slice.go:31:7: undefined: cgo_malloc vendor/github.com/CodisLabs/codis/pkg/utils/unsafe2/cgo_slice.go:58:2: undefined: cgo_free # g.haodai.net/chenmin/codis-operator/vendor/github.com/CodisLabs/codis/pkg/utils vendor/github.com/CodisLabs/codis/pkg/utils/usage.go:8:43: undefined: Usage vendor/github.com/CodisLabs/codis/pkg/utils/usage.go:10:12: undefined: GetUsage vendor/github.com/CodisLabs/codis/pkg/utils/usage.go:15:12: undefined: GetUsage Error: failed to build operator binary: (failed to exec []string{"go", "build", "-gcflags", "all=-trimpath=${GOPATH}", "-asmflags", "all=-trimpath=${GOPATH}", "-o", "/Users/klutz/go/src/g.haodai.net/chenmin/codis-operator/build/_output/bin/codis-operator", "g.haodai.net/chenmin/codis-operator/cmd/manager"}: exit status 2) Usage: operator-sdk build <image> [flags] Flags: --docker-build-args string Extra docker build arguments as one string such as "--build-arg https_proxy=$https_proxy" --enable-tests Enable in-cluster testing by adding test binary to the image -h, --help help for build --namespaced-manifest string Path of namespaced resources manifest for tests (default "deploy/operator.yaml") --test-location string Location of tests (default "./test/e2e") ``` **Environment** * operator-sdk version: operator-sdk version v0.5.0+git * Kubernetes version information: Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.6", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2019-02-26T12:59:46Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff<PASSWORD>ad<PASSWORD>", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"} * Kubernetes cluster kind: minikube version: v0.33.1 **Additional context** I have tested, when export CGO_ENABLED=1, "operator-sdk up local" is fine, but when export CGO_ENABLED=0, "operator-sdk up local" will failed. github.com/CodisLabs/codis depends on github.com/spinlock/jemalloc-go and my project depends on github.com/CodisLabs/codis, so CGO_ENABLED must be opened.Thanks. Answers: username_1: Yes we seem to override that while building. https://github.com/operator-framework/operator-sdk/blob/master/cmd/operator-sdk/build/cmd.go#L175 Let me make turn this into a feature request, I can understand the need for this. But in the meantime, maybe just perform the build steps one by one instead of using the helper command. username_2: I am looking for a feature where I can specify Go build arguments. 
The flag could be similar to `operator-sdk up local --go-ldflags`. This will be useful to inject build-time values as described in this blog: https://blog.alexellis.io/inject-build-time-vars-golang/ username_3: @username_2 Can you please open up a new issue for that so we can track the feature request? From the looks of it, passing in Go build args to `operator-sdk build` shouldn't be too difficult. Status: Issue closed username_2: @username_3 I have created a feature request here: https://github.com/operator-framework/operator-sdk/issues/1435
connect-foundation/2019-12
531627047
Title: Partial font updates and logo change Question: username_0: ### Explanation
- Updated the favicon
![image](https://user-images.githubusercontent.com/28642472/70005891-76cf2080-15ae-11ea-8657-cc883ce007d5.png)
- Updated the logo
![image](https://user-images.githubusercontent.com/28642472/70005919-9403ef00-15ae-11ea-8af6-2fbea81c10dd.png)
- Font updates
  - Adopted SpoqaHanSans-kr
  - Adopted S-CoreDream-8Heavy as the emphasis font (main page, etc.)
Status: Issue closed
Answers: username_0: Merged in #178
whatwg/html
137042674
Title: Documenting upstream dependencies Question: username_0: I was wondering if we should create something where we document when we need to take action if upstream changes something. See some discussion in https://github.com/tc39/ecma262/pull/398. In particular, these kind of changes in ECMAScript will require corresponding changes to HTML: * New objects (requires changing StructuredClone) * New hooks (usually Symbol-based, e.g., `@@toPrimitive`, requires changing security infrastructure) Perhaps if we document these in `UPSTREAM.md` or `DEPENDENCIES.md` it's more likely that someone points out we missed something. Thoughts? Answers: username_0: It doesn't seem like we can turn this into some kind of process. Just hope that folks do their best. Status: Issue closed
keithbrink/amazon-mws-laravel
841114859
Title: fulfillment returns Question: username_0: is there nothing in this package for creating fulfillment returns? read through every file and haven't seen anything about fulfillment returns, just fulfillment order creation Answers: username_1: No, there is no support for fulfillment returns yet. Would love a PR for it, let me know if you need any guidance.
SAP/spartacus
648139110
Title: Deferred Loading still doesn't work properly in SSR mode Question: username_0: ## Overview Deferred Loading makes double loading (screen blinking) during first page rendering in SSR mode. ## Steps to Reproduce 1. Activate deferredLoading inside layout configuration with the following parameters: ``` deferredLoading: { strategy: DeferLoadingStrategy.DEFER, intersectionMargin: '50px' } ``` 2. Build and start application in SSR mode. 3. Open any page in Browser with cleared cache. 4. Page loads twice. Looks like SPA application reloads view rendered by SSR. ## Expected behaviour Page correctly loads without screen blinking. ## Environment Details - Spartacus: *-2.0.0 - Desktop browser: Chrome - Other environment details: Node.js 10.16.0 ## Additional context P.S. See original issue #6449 for previous Spartacus version. Answers: username_1: @username_0 just checking, can this be related to the page layout? We have mobile vs desktop page layouts in standard Spartacus for the header of the page. Since Spartacus/SSR doesn't have device detection as of now, it means that SSR will always produce the mobile version. If you load this on a larger device (desktop), the layout will be rebuild. username_0: @username_1 hi! I'm so sorry for such long delay with the answer. Honestly saying I'm not sure about page layout cuz I can reproduce the issue in mobile version also. username_2: It was already fixed for 2.1, we had a regression in 3.0 which was fixed by #10718. Status: Issue closed
nwillbanks/NinjaSquirrels
376211951
Title: MVP 3 Question: username_0: ## User Story
I want to see pictures of the artist playing the music that is currently playing so that I get a visual reference
## Acceptance Criteria
- [ ] Add pictures of the artist currently playing below the table
mlposey/cadtra
274478968
Title: Google id tokens make automated testing hard Question: username_0: Most of the API resources require a Google Id Token that Google must authenticate. This makes automated testing hard because the CI server would need to generate one for each build. Token generation can be done through the [OAuth Playground](https://developers.google.com/oauthplayground/), but the generated tokens do not contain this application's client id; the API *needs* this application's client id. Task List: - [ ] Create a test flag that will disable client id checks in the API code. - [ ] Store (on the CI server) the refresh token<sup>1</sup> of a test Google account. - [ ] Create a test harness to POST to the playground and retrieve an Id Token. --- Notes: 1. Refresh tokens never expire, so that token can be reused to generate the Id tokens.
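For the third item on that task list, one possible shape for the harness is to exchange the stored refresh token for a fresh ID token against Google's OAuth token endpoint on each CI run. The endpoint and parameter names below follow Google's OAuth 2.0 documentation; the environment variable names are placeholders, and the `id_token` field is only present when the original consent included the `openid` scope.

```python
# Sketch of the CI harness step: exchange the stored refresh token for a fresh
# Google ID token before running the API tests. Credential values are
# placeholders read from the CI environment.
import os

import requests

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def fetch_id_token() -> str:
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "refresh_token",
            "client_id": os.environ["GOOGLE_CLIENT_ID"],
            "client_secret": os.environ["GOOGLE_CLIENT_SECRET"],
            "refresh_token": os.environ["TEST_ACCOUNT_REFRESH_TOKEN"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    # id_token is only included when the original consent used the openid scope.
    return payload["id_token"]

if __name__ == "__main__":
    print(fetch_id_token())
```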
amberframework/homebrew-amber
800692706
Title: Amber won't install with Crystal 0.36.1 Question: username_0: ```
$ brew install amber
==> Installing amber from amberframework/amber
==> Downloading https://github.com/amberframework/amber/archive/v0.35.0.tar.gz
Already downloaded: /Users/<username>/Library/Caches/Homebrew/downloads/bedbbc4a79eadc1005e0656b6d87641a5ce690e5c4f43320d8c4e1ae01b2ebf9--amber-0.35.0.tar.gz
==> shards install
Last 15 lines from /Users/<username>/Library/Logs/Homebrew/amber/01.shards:
Dependencies are satisfied
Building: ameba
Error target ameba failed to compile:
Showing last frame. Use --error-trace for full trace.

In /usr/local/Cellar/crystal/0.36.1/src/yaml/from_yaml.cr:12:3

 12 | new(YAML::ParseContext.new, parse_yaml(string_or_io))
      ^-- Error: wrong number of arguments for 'Ameba::Rule::Lint::Syntax.new' (given 2, expected 0..1)

Overloads are:
 - Ameba::Rule::Lint::Syntax.new(config = nil)
make: *** [bin/ameba] Error 1

If reporting this issue please do so at (not Homebrew/brew or Homebrew/core):
https://github.com/amberframework/homebrew-amber/issues

These open issues may also help:
can't install amber 0.30.0 https://github.com/amberframework/homebrew-amber/issues/5
```
<issue_closed> Status: Issue closed
spring-cloud/spring-cloud-deployer-kubernetes
894267789
Title: Make IT tests run on minikube Question: username_0: Looks like some trouble: ``` 2021-05-18T10:10:35.3284807Z [ERROR] KubernetesAppDeployerIntegrationTests.testDeploymentWithGroupAndIndex:569->verifyAppEnv:493 cannot get service information for foo-app-4c36bf84-7614-474a 2021-05-18T10:10:35.3308428Z [ERROR] KubernetesAppDeployerIntegrationTests.testDeploymentWithLoadBalancerHasUrlAndAnnotation:321 2021-05-18T10:10:35.3320204Z Expected: map containing ["url"->ANYTHING], trying at most 300 times 2021-05-18T10:10:35.3320868Z but: failed after 300*2000=600000ms: ``` Answers: username_0: With #443 and spring-cloud/spring-cloud-deployer#346, we can now run it tests with minikube. Status: Issue closed
LLNL/axom
717630447
Title: Installs sparsehash when the external sparsehash package is found Question: username_0: Here: https://github.com/LLNL/axom/blob/develop/src/thirdparty/CMakeLists.txt#L184 Despite configure saying that sparsehash is found: ``` -- Sparsehash configured with '<functional>' header ``` Answers: username_1: We should update sparsehash at the same time this is addressed. username_2: If I understand the request correctly, it would be to make it possible to have headers from an external installation of sparsehash override the headers that we distribute. This would be useful to users that already maintain a sparsehash installation that they use elsewhere in their application. That would mean creating an optional CMake symbol `SPARSEHASH_DIR` and writing the necessary CMake code to point to headers in `$(SPARSEHASH_DIR)/include` when a user provides an installation path. Is that the way to go? username_0: Sparsehash is available as a package almost on all systems: https://repology.org/project/sparsehash/versions There should be no need to distribute it, you should just require the external ```sparsehash``` package. username_3: This is now guarded by the `axom` namespace and won't conflict with any users downstream. Status: Issue closed
gligoran/cordova-set-version
837126405
Title: Add version in more places in config.xml Question: username_0: First of all, great package, thanks, very useful. I have this as dependency in all my Cordova projects. Though I realised I needed to insert the version number in more places in config.xml What do you think of an extra option of simply replacing the string `{{cordova-set-version}}` by the version? The script would read the config.xml in plain text and just replace. Answers: username_0: If you agree with the concept, I try to do a PR username_0: Hi @username_1 , to play on the safe side, note that the characters `{` and `}` are perfectly [allowed](https://stackoverflow.com/questions/866706/which-characters-are-invalid-unless-encoded-in-an-xml-attribute) in XML attributes. Thus one could have in `config.xml`: ```xml <preference name="OverrideUserAgent" value="APP/com.my.android.app v.{{cordova-set-version}}"/> ``` What do you think? username_1: Hi, thank you for the suggestion. Sadly, I currently can't find the time to research this approach, but It's definitely an interesting idea. In the little spare time I can find I'm still working on making the tests a lot more adjustable so they don't need to be rewritten with every bigger change. username_0: I can make that upgrade via a PR. But I wanted to know your opinion beforehand, otherwise you wouldn't approve it :)
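The package itself is a Node.js tool, so the snippet below is only a language-neutral sketch, written in Python, of the plain-text pass proposed above. The marker string is the one suggested in the issue; the file path and version number are example values.

```python
# Sketch of the proposed extra option: read config.xml as plain text,
# substitute every occurrence of the marker, and write the file back.
from pathlib import Path

MARKER = "{{cordova-set-version}}"  # marker proposed in the issue

def stamp_version(config_path: str, version: str) -> None:
    config = Path(config_path)
    text = config.read_text(encoding="utf-8")
    config.write_text(text.replace(MARKER, version), encoding="utf-8")

stamp_version("config.xml", "1.2.3")  # example invocation
```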
influxdata/ui
950379909
Title: Map: Coloring of pins Question: username_0: **Proposal:** The pins displayed in the map visualization should be able to indicate different datapoints/datagroups/etc. as different colored pins to distinguish between them. **Current behavior:** All lat/lon measures of a query are currently shown in the same color in the map visualization. **Desired behavior:** If the user adds specific tags to measures, the pins should be differently colored. For example, similar to the lat/lon definition: Define a specific tag (e.g. "Group"), which can be provided for lat/lon datapoints. All datapoints with the same value in this tag will have the same color as a pin, while other valued datapoints will have different colored pins. Also the "customize" tab when selecting the visualization type "Map" only directs to the documentation of the Map visualization, but does not provide any further customization methods. Maybe here some options to choose a specific tag and colors could be provided. **Alternatives considered:** As an alternative we are currently using multiple cells in the dashboard to display the differently tagged GPS data to be able to distinguish between two different "paths". An alternative to display differently grouped datapoints inside one map could be different forms of pins (although that would be unpractical I guess). **Use case:** We have a project in which we will start a weather balloon and track it with GPS signals. Further, we have a prediction software running which also return GPS data. All of those are written into the DB. An optimal solution would be to show the actual GPS data and the prediction in one map with different colors to compare them. Currently we are showing them as two cells in a dashboard, which works fine for itself, but takes up more space on the dashboard and is not as effective and convenient in terms of comparing the two paths. https://community.influxdata.com/t/coloring-pins-in-the-map-visualization/20570/5 Answers: username_1: @username_0 we are releasing color thresholds based on value this week. that will allow you to have different colors based on the value column, which might work for this use case. username_2: Fixed with thresholds for maps. Status: Issue closed
JuPedSim/simulator
744163711
Title: Worldbuilder Question: username_0: World should be created using a WorldBuilder with python bindings. - [ ] API for adding new floors, walls and special areas - [ ] Collecting the new geometry structures - [ ] Method for creating the world from collected structure (`buildWorld()`) - [ ] Sanity Checks for definition and implementation
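One possible shape for such a builder, sketched in Python. Every class and method name here is invented for illustration and is not the simulator's actual API; it only shows the "collect structures, then `buildWorld()` with sanity checks" flow from the checklist above.

```python
# Hypothetical WorldBuilder shape matching the checklist above: collect floors,
# walls and special areas, then run sanity checks when the world is built.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class WorldBuilder:
    _floors: List[dict] = field(default_factory=list)
    _walls: List[dict] = field(default_factory=list)
    _special_areas: List[dict] = field(default_factory=list)

    def add_floor(self, level: int, outline: List[Point]) -> "WorldBuilder":
        self._floors.append({"level": level, "outline": outline})
        return self

    def add_wall(self, level: int, start: Point, end: Point) -> "WorldBuilder":
        self._walls.append({"level": level, "start": start, "end": end})
        return self

    def add_special_area(self, level: int, outline: List[Point], kind: str) -> "WorldBuilder":
        self._special_areas.append({"level": level, "outline": outline, "kind": kind})
        return self

    def _check(self) -> None:
        # Sanity checks: every wall and area must reference a declared floor level.
        levels = {f["level"] for f in self._floors}
        for wall in self._walls:
            assert wall["level"] in levels, "wall references an unknown floor"
        for area in self._special_areas:
            assert area["level"] in levels, "special area references an unknown floor"

    def build_world(self) -> dict:
        self._check()
        return {"floors": self._floors, "walls": self._walls,
                "special_areas": self._special_areas}

world = (WorldBuilder()
         .add_floor(0, [(0, 0), (10, 0), (10, 10), (0, 10)])
         .add_wall(0, (0, 0), (10, 0))
         .add_special_area(0, [(4, 4), (6, 4), (6, 6), (4, 6)], kind="exit")
         .build_world())
```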
SidOfc/browserino
372577147
Title: undefined method `gsub' for nil:NilClass when the user agent is nil Question: username_0: When the User-Agent is `nil` a `undefined method `gsub' for nil:NilClass` Exception is thrown. I have got this error in a Rails-Application when i testet using Capybara with the default RackTest-Driver. The User-Agent seems to be `nil` there by default. Answers: username_1: Hi @username_0! Thank you for reporting the issue, indeed I make the assumption that the user agent is present. I will fix it right away. Status: Issue closed username_0: Hi, thanks for the quick fix. Yes it works now 👍
logstash-plugins/logstash-filter-mutate
51478449
Title: Feature: Regexp modifiers for gsub mutations Question: username_0: Since the needle / pattern of the gsub mutation can only be a string, there's no way to set regexp modifiers like case insensitive and multiline on the regular expression. This limits the usability of the mutation.
This can be done in one of two ways:

1. With a `gsub_modifiers` option that will be applied to all the regular expressions
2. With a fourth value that's passed along with the field name, regular expression and replacement

Option 1:
~~~
filter {
  mutate {
    gsub => [ "message", "/Hello.*We saw the following error/", ""]
    gsub_modifiers => "m"
  }
}
~~~

Option 2:
~~~
filter {
  mutate {
    gsub => [ "message", "/Hello.*We saw the following error/", "m", ""]
  }
}
~~~
Answers: username_1: +1 - this feature would make many things easier
username_2: +1 - just ran into this myself while trying to 'patch' some multi-line inputs.
username_3: You can do this in-line in your regexp:

```
filter {
  mutate {
    gsub => [ "myfield", "(?m)dot '.' will now match the line terminator", "whatever" ]
  }
}
```

The `(?m)` flag will set the multiline (m) flag for the whole regexp. I'm sorry we don't document this :(
You can learn more, for now, here: http://www.geocities.jp/kosako3/oniguruma/doc/RE.txt

An excerpt from the above link:

```
7. Extended groups
...
  (?imx-imx)         option on/off
                       i: ignore case
                       m: multi-line (dot(.) match newline)
                       x: extended form
  (?imx-imx:subexp)  option on/off for subexp
```

Hope this helps :)
username_2: Thanks, that helps! I had checked the standard JRuby Regexp.new() constructor args and it looked like a separate parameter was required, not realizing that gsub() doesn't use the standard Ruby libraries.
username_4: does anyone have a copy of the link that @username_3 provided? It appears to be dead now.
username_5: Long time since the last comment, but if someone is still looking for the link to [oniguruma regex](https://github.com/kkos/oniguruma/blob/master/doc/RE).
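For anyone mapping this to Python rather than Oniguruma/JRuby: the same inline-flag idea exists in Python's `re` module, with the caveat that Oniguruma's `m` ("dot matches newline") corresponds to Python's `s` (DOTALL) flag, not Python's `m`. The snippet below is only an analogy, not Logstash configuration.

```python
# Analogy only: inline flags inside the pattern, as in the Logstash example above.
# Note that Oniguruma's "m" (dot matches newline) maps to Python's "s" flag.
import re

text = "Hello!\nWe saw the following error: disk full"

# Without an inline flag, "." stops at the newline, so nothing is replaced:
print(re.sub(r"Hello.*We saw the following error", "", text))
# With the inline (?s) flag, "." also matches the line terminator:
print(re.sub(r"(?s)Hello.*We saw the following error", "", text))
```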
SteamDatabase/BrowserExtension
380936922
Title: browser extension shows me UAH instead of RUR can we have a setting to select a currency? Question: username_0: browser extension shows me UAH instead of RUR can we have a setting to select a currency? Answers: username_1: Extension uses the country that you have already set by Steam in cookies. Does Steam detect you as being in Ukraine or something? What prices does it show you? username_1: I deployed a change on the server, can you tell me if this still happens to you? It should be using currency first to find correct price now. username_0: now it's great :) thank you a lot! Status: Issue closed username_1: Great!
spring-cloud/spring-cloud-skipper
475101146
Title: Merge release only CI plans into the corresponding regular CI plans Question: username_0: Currently, we have CI plans created for doing the release while there are separate plans to push the snapshot builds. We can probably merge the release only plans into the regular snapshot CI plans and use them for the releases.<issue_closed> Status: Issue closed
accuratencom/form-sender
216452995
Title: The form crashes when the email field is left empty Question: username_0: **Current behavior**
1. A user fills out a request form on our site (name and phone number)
2. The request is submitted to the PHP send handler
3. The form runs the Slack step and sends the message to the channel
4. No email arrives (naturally, since the user did not fill in the email field)
5. The AJAX request hangs with neither an error nor a success response, and the form freezes
6. With JS disabled, the script crashes when trying to send via Mailgun and complains about the input data

**What is needed**
1. If the data is insufficient, return an error in the AJAX response
2. If there is no email, simply skip sending the email via Mailgun, but still post the request to Slack<issue_closed>
Status: Issue closed
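The sender itself is PHP, so the snippet below is only a sketch, in Python, of the control flow requested above. The two helper functions are stand-ins for the real Slack and Mailgun calls, and the field names are assumptions.

```python
# Sketch of the requested flow: reject incomplete data, always post to Slack,
# and only call Mailgun when an email address was actually provided.
# The two helpers are stand-ins; the real project calls Slack/Mailgun from PHP.

def post_to_slack(**fields):
    print("slack:", fields)          # stand-in for the Slack webhook call

def send_via_mailgun(**fields):
    print("mailgun:", fields)        # stand-in for the Mailgun API call

def handle_submission(form: dict) -> dict:
    name = form.get("name", "").strip()
    phone = form.get("phone", "").strip()
    email = form.get("email", "").strip()

    # 1. Not enough data: return an error for the AJAX caller instead of hanging.
    if not name or not phone:
        return {"status": "error", "message": "name and phone are required"}

    post_to_slack(name=name, phone=phone, email=email or None)

    # 2. No email: simply skip Mailgun instead of crashing on bad input.
    if email:
        send_via_mailgun(to=email, name=name)

    return {"status": "ok"}

print(handle_submission({"name": "Ivan", "phone": "+7 900 000-00-00"}))
```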
EbookFoundation/free-programming-books
728782328
Title: Different types of hyphen Question: username_0: In some Markdown files the hyphen between the link and the author is longer than the "normal" one, so maybe the linter should check for this. Example:

* [HTML](http://www.peterkropff.de/site/html/html.htm) — <NAME>ropff [Online, PDF] <- Wrong
* [HTML](http://www.peterkropff.de/site/html/html.htm) - <NAME> [Online, PDF]

If you want, I can check the files for this issue and correct them.
Answers: username_1: Has it caused any problems? That sort of change should wait till we've cleared our backlog because it will introduce many merge conflicts.
username_0: No real problems, it just looks inconsistent. So it is not urgent 👍
username_2: these are the instances where the longer "-" appears:
```
% grep -R — *
free-courses-en.md:* [Android Developer Fundamentals (Version 2) — Codelab](https://developer.android.com/courses/fundamentals-training/toc-v2)
free-courses-en.md:* [Android Developer Fundamentals (Version 2) — Concepts](https://google-developer-training.github.io/android-developer-fundamentals-course-concepts-v2/index.html)
free-programming-books-de.md:* [CSS](http://www.peterkropff.de/site/css/css.htm) — <NAME> (Grundlagen, OOP, MySQLi, PDO) [Online, PDF]
free-programming-books-de.md:* [HTML](http://www.peterkropff.de/site/html/html.htm) — <NAME> [Online, PDF]
free-programming-books-de.md:* [JavaScript](http://www.peterkropff.de/site/javascript/javascript.htm) — <NAME> (Grundlagen, AJAX, DOM, OOP) [Online, PDF]
free-programming-books-de.md:* [MySQL](http://www.peterkropff.de/site/mysql/mysql.htm) — <NAME> [Online, PDF]
free-programming-books-de.md:* [PHP](http://www.peterkropff.de/site/php/php.htm) — <NAME> (Grundlagen, OOP, MySQLi, PDO) [Online, PDF]
free-programming-books-ua.md:* [Розплутаний ClojureScript](https://lambdabooks.github.io/clojurescript-unraveled) — <NAME> (LambdaBooks)
free-programming-books-ua.md:* [Розуміння ECMAScript 6](http://understandinges6.denysdovhan.com) — <NAME> (LambdaBooks)
free-programming-books-ua.md:* [Маленька книга про Ruby](https://lambdabooks.github.io/thelittlebookofruby) — Сергій Гіба (LambdaBooks)
free-programming-books-zh.md:* [Docker —— 从入门到实践](https://github.com/yeasy/docker_practice)
```
username_1: fixing this now should cause no problems
username_1: I would avoid changing it in titles; in the Chinese title, for example —— may be meant to be the character for "one".
username_1: files have been reorganized; now is a good time to propose changes
Status: Issue closed
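A rough sketch of the kind of check the linter could run, assuming the convention shown above (plain hyphen between the closing link parenthesis and the author). The pattern, message text and exit-code handling are illustrative only, not the project's actual lint tooling.

```python
# Rough sketch of the proposed lint check: flag list entries that use an
# em dash (U+2014) between the link and the author instead of a plain "-".
import re
import sys

SEPARATOR = re.compile(r"\)\s+\u2014\s+")  # ") <U+2014> " right after the link

def check(path: str) -> int:
    problems = 0
    with open(path, encoding="utf-8") as handle:
        for number, line in enumerate(handle, 1):
            if line.lstrip().startswith("*") and SEPARATOR.search(line):
                print(f"{path}:{number}: use '-' between the link and the author")
                problems += 1
    return problems

if __name__ == "__main__":
    total = sum(check(path) for path in sys.argv[1:])
    sys.exit(1 if total else 0)
```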
craftcms/commerce-paypal
431092316
Title: PayPal Pro and PSD2 / SCA support Question: username_0: I have setup a charity website including a simple online store using Craft Commerce 2 and the PayPal Pro gateway. With the upcoming European regulation changes, it means that all sites will need to support the PSD2 requirements and Strong Customer Authentication (SCA). The changes will be compulsory from 14th September 2019. Are there any plans for the PayPal Pro element of this gateway to support the changes? In a PayPal email it suggested using a free authentication merchant plug-in from CardinalCommerce to activate 3D-Secure but I think that then needs to be integrated with the gateway. In the meantime I can switch my client to PayPal Express to use a hosted payment form. Answers: username_1: We built the plugin according to their documentation and our experience, but we are unable to ensure that it meets specific countries/area regulatory requirements at the moment. We have reached out to cardinal commerce to figure out a way we can get listed in their "paypal 3d secure registration page", and anything we need to do to the plugin to get ready for the regulations. Status: Issue closed
espressif/ESP8266_RTOS_SDK
416269238
Title: esp_register_freertos_idle_hook & esp_register_freertos_tick_hook don't work. Question: username_0: esp_register_freertos_idle_hook & esp_register_freertos_tick_hook don't work.
```c
#include "freertos/FreeRTOS.h"
#include "task.h"
#include "esp_log.h"
#include "esp_freertos_hooks.h"

long idle_count = 0;
long tick_count = 0;

void ConsumptionTick(int delay) {
    TickType_t startTick;
    TickType_t endTick;
    TickType_t nowTick;
    startTick = xTaskGetTickCount();
    endTick = startTick + delay;
    //ESP_LOGI(pcTaskGetName(0),"startTick=%d endTick=%d\n",startTick,endTick);
    while(1) {
        nowTick = xTaskGetTickCount();
        if (nowTick > endTick) break;
    }
}

bool ApplicationIdleHook(void) {
    idle_count++;
    return true;
}

void ApplicationTickHook(void) {
    tick_count++;
}

void clearHook() {
    idle_count = 0;
    tick_count = 0;
}

void printHook(char * title) {
    ESP_LOGI(pcTaskGetName(0),"%s",title);
    ESP_LOGI(pcTaskGetName(0),"idle_count=%ld tick_count=%ld",idle_count,tick_count);
}

void app_main(void)
{
    ESP_LOGI(pcTaskGetName(0),"start");
    esp_register_freertos_idle_hook(&ApplicationIdleHook);
    esp_register_freertos_tick_hook(&ApplicationTickHook);

    clearHook();
    ConsumptionTick(200);
    printHook("Busy 200");

    clearHook();
    vTaskDelay(200);
    printHook("Delay 200");
}
```
Answers: username_1: we are checking about this. Status: Issue closed username_1: @username_0 please pull the master, use menuconfig Component config -> FreeRTOS -> Use FreeRTOS extened hooks to enable the hook firstly, then retry. thanks
zkboys/jquery-modal
254532554
Title: Add prompt Question: username_0: Please, add prompt. It's missing. Status: Issue closed Answers: username_0: Please, add function ".load()" of JQuery
username_1: prompt is available now, see the index.html
username_1: What do you mean 'Please, add function ".load()" of JQuery'?
username_0: Hello! Are you okay? What I mean is very simple. I would like you to add a JavaScript or jQuery function to load content from another web page or HTML file. Is it possible? Thank you!
username_0: I just discovered that your "modal" plugin does not work with "jquery mobile". This is a big problem.
PecanProject/BETYdb-YABA
495406032
Title: Fix addition of elevation of polygons when loading sites Question: username_0: The code at https://github.com/PecanProject/BETYdb-YABA/blob/14625fb17168af3ce004fb8d2616f6aef691ca7b/app/Meta.py#L148 adds a Z value (elevation) of 115. This needs to be changed to first check for a Z value, and if it's missing adding on a 0 (zero) Z value Status: Issue closed Answers: username_0: Closing due to issue being placed in wrong organization
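A sketch of the check described above, using shapely for illustration; the loader in app/Meta.py may build its geometries differently, and only polygon shells are handled here (interior rings are omitted for brevity).

```python
# Only force a Z coordinate when the incoming geometry has none, and use 0
# rather than a hard-coded 115. Illustrative sketch, not the YABA loader code.
from shapely import wkt
from shapely.geometry import Polygon

def ensure_z(geometry_wkt: str, default_z: float = 0.0) -> str:
    geom = wkt.loads(geometry_wkt)
    if geom.has_z:                        # Z already present: leave it alone
        return geom.wkt
    if geom.geom_type == "Polygon":
        shell = [(x, y, default_z) for x, y in geom.exterior.coords]
        return Polygon(shell).wkt
    raise ValueError(f"unhandled geometry type: {geom.geom_type}")

print(ensure_z("POLYGON ((0 0, 1 0, 1 1, 0 0))"))            # Z 0 appended
print(ensure_z("POLYGON Z ((0 0 5, 1 0 5, 1 1 5, 0 0 5))"))  # left unchanged
```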
philnguyen/soft-contract
275800402
Title: struct accessors always return ● Question: username_0: I expected these 2 programs to verify. I think they fail because SCV thinks a struct accessor can return anything. #### Program 1 Defines a struct, provides an accessor, does not provide the constructor ``` #lang racket (struct foo (x)) (provide (contract-out (foo-x (-> foo? integer?)))) ;(blame ; (line 5 col 30) ; (violator ; : ; "/..../struct-access-0.rkt") ; (contract ; from ; : ; "/..../struct-access-0.rkt") ; (contracts : integer?) ; (values : ●)) ``` #### Program 2 Has two modules. One defines a struct & exports it with a contract. The other tries to use the struct. ``` #lang racket ;; struct-access-1.rkt (struct foo (x)) (provide (contract-out (struct foo ((x exact-integer?))))) ``` ``` #lang racket ;; struct-access-2.rkt (require "struct-access-1.rkt") (define (getx f) (foo-x f)) (provide (contract-out (getx (-> foo? exact-integer?)))) ``` Error message from `raco scv struct-access-2.rkt`: ``` - dependency: /..../struct-access-1.rkt for `idY7` (blame (line 5 col 31) (violator : "/..../struct-access-2.rkt") (contract from : "/..../struct-access-1.rkt") (contracts : exact-integer?) (values : ●)) ``` Answers: username_1: This is a current limitation of SCV. It doesn't know that certain structs have their instances created in a controlled way. Predicate `foo?` is just a tag check. We can for now get around this by using `(struct/c foo integer?)` instead of `foo?`. Program 1 can go through by: ```racket #lang racket (struct foo (x)) (define foo/c (struct/c foo integer?)) (provide (contract-out (foo-x (-> foo/c integer?)))) ``` Program 2 can be fixed/hacked in the same way, although I noticed a different problem. SCV somehow thinks struct `foo` may not have been defined in the second module, which I'll fix soon. Status: Issue closed
JobbeDeluxe/sfdl-bash-loader
409990381
Title: 22_V6_WITHPW_MultiPack2 Question: username_0: filearray: Array mit 0 Elementen ... DirectoryRoot: Array mit 22 Elementen ... Lade Index (lftp): Minion_Banana_Techno_Song FEHLER: Es konnte kein Index der FTP-Daten erstellt werden! 22_V6_WITHPW_MultiPack2.sfdl wird uebersprungen! find: Zugriff nicht möglich:550 No such directory. (/_testing_Data/folder)/_testing_Data)
XiaoMi/soar
373364270
Title: -report-type=text always get 100 score Question: username_0: Please answer these questions before submitting your issue. Thanks! 1. What did you do? If possible, provide a recipe for reproducing the error. ``` [root@34d3c3030590 ~]# echo "select title from sakila.film" | soar -log-output=soar.log -report-type=text Query: select title from sakila.film ★ ★ ★ ★ ★ 100分 ID: 25807E6B94BEA72C Item: CLA.001 Severity: L4 Summary: 最外层SELECT未指定WHERE条件 Content: SELECT语句没有WHERE子句,可能检查比预期更多的行(全表扫描)。对于SELECT COUNT(*)类型的请求如果不要求精度,建议使用SHOW TABLE STATUS或EXPLAIN替代。 [root@34d3c3030590 ~]# echo "select title from sakila.film" | soar -log-output=soar.log -report-type=markdown # Query: 25807E6B94BEA72C ★ ★ ★ ★ ☆ 80分 ```sql SELECT title FROM sakila. film ``` ## 最外层SELECT未指定WHERE条件 * **Item:** CLA.001 * **Severity:** L4 * **Content:** SELECT语句没有WHERE子句,可能检查比预期更多的行(全表扫描)。对于SELECT COUNT(\*)类型的请求如果不要求精度,建议使用SHOW TABLE STATUS或EXPLAIN替代。 ``` 2. What did you expect to see? 3. What did you see instead? 4. What version of are you using (`soar -version`)? 0.8.1 Answers: username_1: -report-type [markdown|html] will give a score of SQL, in other report-types soar doesn't give the score of SQL now. username_0: @username_1 why `-report-type text` show the score`100` ? Status: Issue closed
conveyal/analysis-ui
890180085
Title: Cannot import modifications to newly created project Question: username_0: **Describe the bug** Importing modifications to a newly created project tries to read a regionId property of undefined. **To Reproduce** Follow advanced test "Upload route alignment shapefile to an empty region". After creating the bundle as described with the Ctran feed, click the "new project" button to make an associated project. This will take you to the modification editing page. Click the import modifications button. This causes an error: `TypeError: Cannot read property 'regionId' of undefined at S (project-title.tsx:13)` **Desktop (please complete the following information):** - OS: MacOS 11.3.1 - Browser: Brave 1.23.71 with shields down<issue_closed> Status: Issue closed
MicrosoftDocs/azure-docs
530489986
Title: broken link Question: username_0: https://github.com/Azure-Samples/java-functions-eventhub-cosmosdb link is broken... --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 618b2135-6547-4b07-ebfb-3bc86553665d * Version Independent ID: de9607ad-d780-95ca-df06-44207c813822 * Content: [Tutorial: Use Java functions with Azure Cosmos DB and Event Hubs](https://docs.microsoft.com/en-us/azure/azure-functions/functions-event-hub-cosmos-db) * Content Source: [articles/azure-functions/functions-event-hub-cosmos-db.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-functions/functions-event-hub-cosmos-db.md) * Service: **azure-functions** * GitHub Login: @username_2 * Microsoft Alias: **karler** Answers: username_1: @username_0 Thanks for your feedback! We will investigate and update as appropriate. username_1: Thanks! I made a pull request to remove the link. @username_2 and @jeffhollan can you confirm where the repository moved? There seems to be a related one here. https://github.com/jeffhollan/functions-java-eventhub-to-cosmosdb username_1: I'm closing this out since the broken link has been removed but I will update this thread with the correct repository if applicable. Status: Issue closed username_0: Thank you! It would be good to understand what happened with the Azure Sample which was behind this link - it was interesting sample and even though there are code snippets in the article it is always great to have actual running code sample... username_2: @username_0 I've fixed the link and restored it to the topic. Sorry for the inconvenience! The [sample repo](https://github.com/Azure-Samples/java-functions-eventhub-cosmosdb) is available now.
Javastro/jsofa
972373686
Title: No tag for newest version Question: username_0: The current IAU SOFA version is 18 (2021-05-12), and this is already committed here. However, it has no tag attached (latest tag is 20210125). Is there a specific reason for that? If not, could you tag the latest release? Answers: username_1: no reason - I think that I did something wrong - I had a local tag - anyway it should be there now Status: Issue closed
postmanlabs/postman-app-support
507355048
Title: Disable Automatic Updates Question: username_0: <!-- Please read through the [guidelines](https://github.com/postmanlabs/postman-app-support#guidelines-for-reporting-issues) before creating a new issue. --> **Describe the bug** Due to security concerns, all development is done in an offline environment. There is no connection to the wider Internet and there will never be one. So while I am in Postman there is a red dot on the wrench telling me that the updates failed. **To Reproduce** Steps to reproduce the behavior: 1. Install Postman on an offline machine 2. Wait about 2 to 5 minutes 3. At the top right hand wrench, the red dot will appear to get my attention about something 4. Go check to see if it is just the update message or an additional message **Expected behavior** I understand the concern to keep up to date on minor updates and security patches, but it will never be available in this environment. **Screenshots** If applicable, add screenshots to help explain your problem. **App information (please complete the following information):** - App Type [e.g. Chrome App, Native App] - Postman Version [e.g. 7.0.0] - OS: [e.g. macOS High Sierra 10.13.2] **Additional context** Our development environment will never allow a connection to the Internet, so the updates can't ever be retrieved by that method. I would need to know they are available, download them, copy to some media, transport to the development machine, then install them. Thanks, Tom Answers: username_1: I understand the concern @username_0 but taking into account that large section of users depend on Postman to dependable and have security expectation disabling minor updates are not something we provide, saying that we do give an option to turn off major version updates. Status: Issue closed
fission/fission
201668530
Title: Autoscaling for function pods Question: username_0: We need a way to autoscale specialized function pods up (and down) based on load. There are a few possible metrics to scale based upon (CPU usage, latency, queue depth) and there's some design work needed around how these metrics should be tracked, what component should make the autoscaling decision, and so on. Answers: username_1: Just came across a page discussing micro scaling which might inspire http://blog.microscaling.com/2016/04/microscaling-with-nsq-queue.html username_2: @username_0 K8s [horizontal-pod-autoscaling](https://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/) seems is doing the right job to autoscale the pods based upon (CPU usage) and customed metrics. So in 22f8240, seems we are going to scale the pods on our own. I think we can implement it in two ways: one based on autoscaling provided by k8s HPA, which is easier, then we can control the pods number on our own for fine-grained control over pods. username_0: @username_2 agree on those being the options. With the work on the autoscaling branch both options remain open. As you say HPA is simpler; we can even plug in our own metrics, such as req/sec. A good next step would be to come up with a cpu-intensive workload and experiment with HPA on that workload. username_2: @username_0 Hi, I just started [fission-benchmark](https://github.com/username_2/fission-benchmark) to provide benchmark tools and multiple workloads for fission. I make a simple workload on hello world function. The result is [here](https://github.com/username_2/fission-benchmark/blob/master/workloads/helloworld-workload/Result.md). The result shows under low stress fission performs almost the same as baseline (k8s service as control group). However, under an unlimited qps, 500 request and 30 concurrence, fission returns eight `502` and slowest request is over 50 seconds. So the router seems to be a bottleneck. I will add more workloads and test fission, k8s service baseline and k8s HPA baseline. username_3: Any thoughts of using something lightweight like Linkerd to handle the autoscaling + metrics of when to scale/not? It can be the "man in the middle" if between router <>service, Although would require the router to configure routes with linkerd for this so automatically work. username_2: Thanks @username_3 for your suggestions. I am going to take a look at linkerd. The router have to invoke poolmgr to create new instances for cold functions. I am not sure we can simply replace it with linkerd. For now, we are using k8s service and auto scaling to do the job. The router is then pointed to a service of the target function. The default metric is CPU usage. The next step is to pull metrics collected by Prometheus and use these customized metrics such as QPS, memory usage, message queue length. I will try this method in k8s v1.7. Router is the bottle neck, but it is stateless so we can scale it once the custom metrics are ready. username_4: Hi is there a sense of when this will be ready for release? username_5: What is the status of handling high scales in Fission? From what I get from this thread, there's quite a limit currently to the number of QPS Fission can handle. Is that still the case? username_0: Hey folks, Thanks for your patience, I know this has been a long standing ask. We're currently working on function pod autoscaling. Very quick overview of how that's going to work: 1. FunctionSpec will have parameters on the desired scale 2. 
Poolmgr service is renamed to an `executor` service, which is an abstraction with poolmgr as an executor "backend" (PR #384, merged). 3. Executor will dispatch function requests to a new backend when FunctionSpec needs autoscaling. The "new deploy" backend of executor will create a Deployment, a Service, and a Horizontal Pod Autoscaler (HPA) for the function. This work is being done by @vishal-biyani in PR #387. 5. That should give us the first cut of working autoscaling. Then comes benchmarking, beta testing with some real use cases, fine tuning. username_6: Have you thought about moving fission out of the critical path altogether? I think this is possible by using the nginx ingress's concept of a fallback service. This is a secondary service that requests are sent to if there are no pods in the service that would normally match the incoming request. This approach would mean that fission machinery is technically only invoked for cold starts. You'd need slightly different bootup and specialization logic for the autoscaling codepath, but I think it would have much cleaner scaling properties. Thoughts? username_0: So there's two parts of the fission "machinery" -- router and poolmgr. The poolmgr is the heavier one. We've already removed poolmgr from the critical path in the autoscaling case. Autoscaling uses the new backend of the executor, and uses the standard Kubernetes HPA. The router is lightweight, stateless, and so it's quite easy to scale up. It's also the thing that keeps track of which functions are idle and which aren't. So if we remove the router we have to have something else that tracks that, and exposes it in a way that fission can consume. We're integrating with istio/envoy right now, and maybe the envoy sidecar proxy can handle this for us. Alternatively the runtime environments themselves could handle it, but that's code that has to be written in each language runtime env, which is something we try to avoid. username_0: We support autoscaling now (since 0.5.0, with improvements in 0.6.0 and future releases). Status: Issue closed
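For readers who want to see what "a Deployment, a Service, and an HPA per function" looks like in practice, here is a minimal illustration using the official Kubernetes Python client (fission itself is written in Go, so this is not fission code). Only the HPA is shown; the Deployment and Service are created analogously, and the names, namespace and thresholds are invented.

```python
# Illustration of the per-function HPA described above, via the official
# Kubernetes Python client. autoscaling/v1 only supports the CPU metric.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="hello-func", namespace="fission-function"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="hello-func"),
        min_replicas=1,
        max_replicas=6,
        target_cpu_utilization_percentage=80,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="fission-function", body=hpa)
```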
compnerd/swift-win32
898713157
Title: Change `Button` to owner drawn class Question: username_0: Convert the `Button` implementation to use an owner drawn button. This should allow for changing the button to include an image and title simultaneously. Additionally, it would allow for embedding a `Label` and `ImageView` members which can then be made available to the user to control the button more thoroughly. Answers: username_1: Last few days I am playing with `BS_OWNERDRAW` on Button, and while my drawing code works (for background, border and text colors), I am having issues with passing `Button` ref to the WndProc which handles the drawing of the `Button`. I can't promise anything at the moment. username_0: Ah, you mean the reference to the `Button` class? You cannot do that directly; the way to accomplish that is by squirreling away the pointer into `GWLP_USERDATA`. There are a couple of examples of that in the tree already.
pascalabcnet/pascalabcnet
499302718
Title: Internal compiler error when comparing a method with nil Question: username_0: ```pas
type T = class
public procedure p := exit;
public procedure p1;
begin
if p = nil then ;
end;
end;

begin

end.
```

```
Внутренняя ошибка компилятора в модуле [pabcnetc.exe] :'System.Exception: System.NullReferenceException: Ссылка на объект не указывает на экземпляр объекта.
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.find_operator(String name, expression_node left, expression_node right, location loc, Boolean no_search_in_extension_methods)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(bin_expr _bin_expr)
в PascalABCCompiler.TreeConverter.returner.visit(expression expr)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.convert_strong(expression expr)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(if_node _if_node)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.convert_strong(statement st)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(statement_list _statement_list)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.convert_strong(statement st)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit_program_code(statement_list program_code)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(block _block)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit_class_member_realizations(class_body_list _class_body)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(class_body_list _class_body)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(class_definition _class_definition)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(type_declaration _type_declaration)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(type_declarations _type_declarations)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(declarations _subprogram_definitions)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(block _block)
в PascalABCCompiler.TreeConverter.syntax_tree_visitor.visit(program_module _program_module)
в PascalABCCompiler.TreeConverter.SyntaxTreeToSemanticTreeConverter.CompileInterface(compilation_unit SyntaxUnit, unit_node_list UsedUnits, List`1 ErrorsList, List`1 WarningsList, SyntaxError parser_error, Hashtable bad_nodes, using_namespace_list namespaces, Dictionary`2 docs, Boolean debug, Boolean debugging)
в PascalABCCompiler.Compiler.CompileUnit(unit_node_list Units, unit_or_namespace SyntaxUsesUnit)
в PascalABCCompiler.Compiler.Compile()'
```<issue_closed>
Status: Issue closed
ruby-china/rubygems-mirror
141463389
Title: Implement a new Proxy Cache Server Question: username_0: ```
[GET /gems/foo-1.0.0.gem]
            |
         [Nginx]
            |
      [My Proxy App]
            |
        {Router}
            |
 ------------------------------------
 |                                  |
[cache hit no expire]      [cache miss/expired]
 |                                  |
[response cache file]      ------------------ [async fetch new data]
 |                                  |                      |
[status 200]        [success: rewrite cache]       [fail: do nothing]
```
Answers: username_0: Nginx's proxy_cache_use_stale does exactly this: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale
username_1: ![image](https://user-images.githubusercontent.com/1147375/13855672/8a6ff1b6-ecac-11e5-9ac8-aeec9365fbcd.png)
For the static files, I suggest using the origin-pull service of Tencent Cloud COS. These static files generally never change, so once you use COS the files are stored on COS after a user's first request, and every later request is served directly from the domestic network.
The `location ~ /(gems|quick)` and `location ~* \.(4.8|4.8.gz)` blocks in rubygems.conf can have the server answer with a 302 to a request like `http://your-cos-appname.qcloud.com/gems/rails.gz`. After receiving that response, the user requests COS directly.
In nginx, the home page and the API pages keep going to the overseas host. The gems and .gz files also go to the overseas host, but the host just spits out a 302.
As for a CDN: if your files are accessed frequently and you set a cache TTL, it does speed things up. For dynamic requests, though, the network of the Tencent Cloud overseas host itself is not bad, so adding a CDN only adds one more network layer. Files on a CDN also expire; even if you set the cache time to one year, they only stay cached for a year while users keep requesting them, after which the CDN pulls from the origin again. It will not keep occupying disk for files that are rarely accessed.
So for all of your scattered long-tail gems, their .gz files are very likely not in the CDN cache. With your current configuration that means every time a user downloads a less popular gem, most of the traffic for that request flows from your overseas host back into China. With COS, most of the user's traffic stays inside China. COS stores files permanently; it is just that its update policy is less flexible.
PS: all of the above assumes your file cache strategy targets files that are never updated. If that is not the case, this suggestion is useless.

----
On the Tencent Cloud CDN side, after talking with the team internally today: our business focus is still on domestic users, and the quality of overseas origin pulls still needs improvement. (Ahem, I am putting that diplomatically.)
Suppose you used a really good CDN with good overseas origin-pull quality, and suppose the CDN layer added no network overhead at all. Then the home page would not 502 as often as it does today. The best case we can imagine is that dynamic requests through the CDN are as fast as hitting the origin directly.
But the slowness of less popular gems that I described above would still exist. Even with a great CDN, those files are not in the CDN cache, so most traffic misses and goes back to the origin. A Beijing user installing the rails package only gets that package cached on a single Beijing edge node; Tencent Cloud's CDN has hundreds or thousands of edge nodes across China, and getting all of them to cache it is just too hard.
Better to move the files into China.
PS: the reason I think the traffic of gems.ruby-china.org is not enough for a CDN to help much is by comparison with the download volume of http://npm.taobao.org/.
username_2: Is COS reliable when it pulls a file from the origin on the first request? If the speed and stability are both decent, I think COS is worth trying.
username_1: @username_2 About the stability of COS overseas origin pulls, I will ask the COS team whether there is a dedicated Hong Kong origin-pull line we can use. If there is, I will go and apply for one.
username_1: On the COS question, let me ask around and get back to you. Right now overseas origin pulls are not allowed for security-policy reasons.........................
username_0: Implemented and deployed. Status: Issue closed
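A minimal sketch of the routing logic in the diagram above (serve the cached file, refresh asynchronously when it has expired, fetch synchronously on a miss), written with Flask and a background thread. The upstream URL, cache directory and TTL are assumed example values, not the mirror's real configuration.

```python
# Sketch of the "serve cached file, refresh asynchronously" flow from the diagram.
# UPSTREAM, CACHE_DIR and TTL are illustrative values only.
import threading
import time
from pathlib import Path

import requests
from flask import Flask, abort, send_file

UPSTREAM = "https://rubygems.org"          # assumed upstream origin
CACHE_DIR = Path("/var/cache/rubygems")    # assumed cache location
TTL = 300                                  # seconds before a cached file counts as expired

app = Flask(__name__)

def refresh(path: str, target: Path) -> None:
    """Fetch a fresh copy from the upstream and overwrite the cache on success."""
    try:
        resp = requests.get(f"{UPSTREAM}{path}", timeout=30)
        if resp.ok:
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(resp.content)   # success: rewrite cache
    except requests.RequestException:
        pass                                   # fail: do nothing, keep the stale copy

@app.route("/gems/<name>")
def gem(name: str):
    target = CACHE_DIR / "gems" / name
    if target.exists():
        expired = time.time() - target.stat().st_mtime > TTL
        if expired:
            # cache expired: answer with the stale copy, refresh in the background
            threading.Thread(target=refresh, args=(f"/gems/{name}", target),
                             daemon=True).start()
        return send_file(str(target))          # cache hit: status 200 from disk
    # cache miss: fetch synchronously once, then serve
    refresh(f"/gems/{name}", target)
    if not target.exists():
        abort(502)
    return send_file(str(target))
```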
casi/flatshare_app
176016016
Title: Wrong date and time in news article page Question: username_0: Wrong date and time in the news article page (created & updated at). Will fix this in the next few days, along with the localization of the output. Status: Issue closed Answers: username_0: solved. Status: Issue closed
ZinedineMess/ZinedineMessahel_3_11122020
783467029
Title: Gradient and code verification Question: username_0: ### Gradient and code verification The heart must be modified so that its color is a gradient, and not a plain color. - [ ] gradient on the heart on the home page and the menu pages.<issue_closed> Status: Issue closed
adafruit/circuitpython
689374414
Title: cannot make hex file for BOARD=makerdiary_nrf52840_mdk_usb_dongle Question: username_0: like in the Readme.md described. But I've got every time the error `make: *** No rule to make target 'hex'. Stop.` Can someone help me Answers: username_1: Try without the "bin" ``` make BOARD=makerdiary_nrf52840_mdk_usb_dongle SD=s140 V=1 -j4 ``` worked for me ``` Create build-makerdiary_nrf52840_mdk_usb_dongle-s140/firmware.bin Create build-makerdiary_nrf52840_mdk_usb_dongle-s140/firmware.hex Create build-makerdiary_nrf52840_mdk_usb_dongle-s140/firmware.uf2 python3 ../../tools/uf2/utils/uf2conv.py -f 0xADA52840 -c -o "build-makerdiary_nrf52840_mdk_usb_dongle-s140/firmware.uf2" build-makerdiary_nrf52840_mdk_usb_dongle-s140/firmware.hex Converting to uf2, output size: 836608, start address: 0x26000 Wrote 836608 bytes to build-makerdiary_nrf52840_mdk_usb_dongle-s140/firmware.uf2 [Jerry-desktop-mini:circuitpython/ports/nrf] username_1% ls -lrt build-makerdiary_nrf52840_mdk_usb_dongle-s140/ total 66776 drwxr-xr-x 3 username_1 staff 102 Aug 31 13:37 peripherals drwxr-xr-x 4 username_1 staff 136 Aug 31 13:37 nrfx drwxr-xr-x 12 username_1 staff 408 Aug 31 13:37 lib drwxr-xr-x 3 username_1 staff 102 Aug 31 13:37 device drwxr-xr-x 21 username_1 staff 714 Aug 31 13:37 common-hal drwxr-xr-x 3 username_1 staff 102 Aug 31 13:37 boards drwxr-xr-x 29 username_1 staff 986 Aug 31 13:37 shared-module -rw-r--r-- 1 username_1 staff 11176 Aug 31 13:37 autogen_usb_descriptor.c -rw-r--r-- 1 username_1 staff 1886650 Aug 31 13:37 ld_defines.pp -rw-r--r-- 1 username_1 staff 7648 Aug 31 13:37 common.ld drwxr-xr-x 3 username_1 staff 510 Aug 31 13:38 genhdr drwxr-xr-x 2 username_1 staff 7956 Aug 31 13:38 py drwxr-xr-x 3 username_1 staff 1802 Aug 31 13:38 extmod -rw-r--r-- 1 username_1 staff 981696 Aug 31 13:38 main.o -rw-r--r-- 1 username_1 staff 20942 Aug 31 13:38 main.P drwxr-xr-x 2 username_1 staff 136 Aug 31 13:38 build-makerdiary_nrf52840_mdk_usb_dongle-s140 -rw-r--r-- 1 username_1 staff 825208 Aug 31 13:38 fatfs_port.o -rw-r--r-- 1 username_1 staff 10840 Aug 31 13:38 fatfs_port.P -rw-r--r-- 1 username_1 staff 874900 Aug 31 13:38 background.o -rw-r--r-- 1 username_1 staff 18508 Aug 31 13:38 background.P -rw-r--r-- 1 username_1 staff 5064 Aug 31 13:38 autogen_display_resources.c drwxr-xr-x 2 username_1 staff 136 Aug 31 13:38 bluetooth -rw-r--r-- 1 username_1 staff 827780 Aug 31 13:38 sd_mutex.o -rw-r--r-- 1 username_1 staff 10656 Aug 31 13:38 sd_mutex.P drwxr-xr-x 42 username_1 staff 1496 Aug 31 13:39 shared-bindings drwxr-xr-x 3 username_1 staff 408 Aug 31 13:39 supervisor -rw-r--r-- 1 username_1 staff 807556 Aug 31 13:39 autogen_display_resources.o -rw-r--r-- 1 username_1 staff 11345754 Aug 31 13:40 firmware.elf.map -rwxr-xr-x 1 username_1 staff 14078984 Aug 31 13:40 firmware.elf -rw-r--r-- 1 username_1 staff 1176045 Aug 31 13:40 firmware.hex -rwxr-xr-x 1 username_1 staff 421936 Aug 31 13:40 firmware.bin -rw-r--r-- 1 username_1 staff 836608 Aug 31 13:40 firmware.uf2 ``` username_0: i have tried it with bin but both dosen't work. I use manjaro and gcc 10.1.0 username_1: Just to be sure, did you try the command I gave above -- no "hex"and no "bin" username_0: yeah i have tried your command im running it in ciruitpython/ports/nrf username_1: OK -- I don't have any better suggestions - sorry. username_2: `make: arm-none-eabi-gcc: No such file or directory` is pretty suspicious. Can you run `arm-none-eabi-gcc` from the command line? Did you download the toolchain file and unpack it? 
See https://learn.adafruit.com/building-circuitpython username_0: Nope, I'm installing it right now. But is it possible to include that information in the Readme?
open-mmlab/mmhuman3d
1073077950
Title: ImportError: cannot import name 'axis_angle_to_quaternion' from 'pytorch3d.transforms' (/home/user/anaconda3/envs/open-mmlab/lib/python3.8/site-packages/pytorch3d/transforms/__init__.py) Question: username_0: ImportError: cannot import name 'axis_angle_to_quaternion' from 'pytorch3d.transforms' (/home/user/anaconda3/envs/open-mmlab/lib/python3.8/site-packages/pytorch3d/transforms/__init__.py) (open-mmlab) root@slave01:~/lol/mmhuman3d# conda list # packages in environment at /home/user/anaconda3/envs/open-mmlab: # # Name Version Build Channel _libgcc_mutex 0.1 main defaults _openmp_mutex 4.5 1_gnu defaults addict 2.4.0 pypi_0 pypi aiohttp 3.8.1 pypi_0 pypi aiosignal 1.2.0 pypi_0 pypi async-timeout 4.0.1 pypi_0 pypi attrs 21.2.0 pypi_0 pypi autobahn 21.11.1 pypi_0 pypi automat 20.2.0 pypi_0 pypi blas 1.0 mkl defaults bzip2 1.0.8 h7b6447c_0 defaults ca-certificates 2021.10.26 h06a4308_2 defaults cdflib 0.3.20 pypi_0 pypi certifi 2021.10.8 py38h06a4308_0 defaults cffi 1.15.0 pypi_0 pypi charset-normalizer 2.0.9 pypi_0 pypi chumpy 0.70 pypi_0 pypi colorama 0.4.4 pypi_0 pypi colorlog 6.6.0 pypi_0 pypi colormap 1.0.4 pypi_0 pypi constantly 15.1.0 pypi_0 pypi cpuonly 1.0 0 pytorch cryptography 36.0.0 pypi_0 pypi cudatoolkit 10.0.130 0 defaults cycler 0.11.0 pypi_0 pypi cython 0.29.25 pypi_0 pypi deprecated 1.2.13 pypi_0 pypi dotty-dict 1.3.0 pypi_0 pypi easydev 0.12.0 pypi_0 pypi ffmpeg 4.2.2 h20bf706_0 defaults flake8 4.0.1 pypi_0 pypi flake8-import-order 0.18.1 pypi_0 pypi fonttools 4.28.3 pypi_0 pypi freetype 2.11.0 h70c0345_0 defaults frozenlist 1.2.0 pypi_0 pypi fvcore 0.1.5.post20211023 pypi_0 pypi giflib 5.2.1 h7b6447c_0 defaults gmp 6.2.1 h2531618_2 defaults gnutls 3.6.15 he1e5248_0 defaults h5py 3.6.0 pypi_0 pypi hyperlink 21.0.0 pypi_0 pypi idna 3.3 pypi_0 pypi imageio 2.13.2 pypi_0 pypi incremental 21.3.0 pypi_0 pypi iniconfig 1.1.1 pypi_0 pypi intel-openmp 2021.4.0 h06a4308_3561 defaults iopath 0.1.9 pypi_0 pypi jpeg 9b 0 defaults json-tricks 3.15.5 pypi_0 pypi kiwisolver 1.3.2 pypi_0 pypi lame 3.100 h7b6447c_0 defaults lcms2 2.12 h3be6417_0 defaults ld_impl_linux-64 2.35.1 h7274673_9 defaults [Truncated] torchvision 0.9.0 py38_cpu [cpuonly] pytorch tqdm 4.62.3 pypi_0 pypi twisted 21.7.0 pypi_0 pypi txaio 21.2.1 pypi_0 pypi typing_extensions 3.10.0.2 pyh06a4308_0 defaults vedo 2021.0.7 pypi_0 pypi vtk 9.0.3 pypi_0 pypi wheel 0.37.0 pyhd3eb1b0_1 defaults wrapt 1.13.3 pypi_0 pypi wslink 1.2.0 pypi_0 pypi x264 1!157.20191217 h7b6447c_0 defaults xmltodict 0.12.0 pypi_0 pypi xtcocotools 1.10 pypi_0 pypi xz 5.2.5 h7b6447c_0 defaults yacs 0.1.8 pypi_0 pypi yapf 0.31.0 pypi_0 pypi yarl 1.7.2 pypi_0 pypi zlib 1.2.11 h7b6447c_3 defaults zope-interface 5.4.0 pypi_0 pypi zstd 1.4.9 haebb681_0 defaults Answers: username_1: Could you do this? ```shell echo "import torch;device=torch.device('cuda');\ from pytorch3d.utils import torus;\ Torus = torus(r=10, R=20, sides=100, rings=100, device=device);\ print(Torus.verts_padded());"|python ``` Maybe you have not install3d pytorch3d right. username_1: Please install the rely libs and use conda install username_1: This will be closed if no further information. Status: Issue closed
phetsims/vegas
905954284
Title: Define focus behavior for Game transitions Question: username_0: This came up in the context of Fourier, but is relevant to sims that have a game screen. A game screen is sort of like multiple screens in one. It as a Node that shows level-selection buttons, and a Node for each level. See example screenshots from Fourier below. Pressing one of the level-selection buttons does an animated transition (usually a left-to-right wipe) to that level Node. Pressing the Back button on a level does an animated transition (right-to-left wipe) to the level-selection buttons. TransitionNode (which lives in twixt) is responsible for the transition --- it's a general class, and does animated transitions between any 2 Nodes, not just game components. In Fourier (and probably other sims), after transitioning from level-selection buttons to a level, nothing has focus. And pressing tab moves focus to the navbar - which seems wrong. How should this behave? Should something have focus after a transition? <img width="539" alt="screenshot_1004" src="https://user-images.githubusercontent.com/3046552/120034473-58d35800-bfba-11eb-85f1-c4fbfc4c92e2.png"> <img width="538" alt="screenshot_1005" src="https://user-images.githubusercontent.com/3046552/120034482-5b35b200-bfba-11eb-820d-08c213650a88.png"> Answers: username_0: Slack discusion with @username_1: <details> <summary>Slack discussion</summary> <NAME> 1:13 PM In typical game screens, there’s a set of level-selection buttons. Press one of them, and the screen does an animated transition (usually a left-to-right wipe) to that level. TransitionNode is responsible for that transition. In Fourier (and probably other sims), after that transition occurs, nothing for the level has focus. And pressing tab takes me to the navbar. Thoughts on what the behavior should be, and how to make it so? 1:16 The first element in pDomOrder for the level is the Back button in the status bar. (Upper left corner of screen) 1:17 After pressing the level-selection button, if I press Shift-Tab, focus moves to the element that is LAST in the pDomOrder for the level — the “New Waveform” button. <NAME> 1:19 PM I am not certain where focus is after the transition, but my guess would be the document <body> based on how you describe it. Probably the default browser behavior because the element that had focus was removed from the document. It seems like focus should remain on screen in this case, and at the end of the transition maybe TransitionNode could move focus to whatever element seems most appropriate? <NAME> 1:20 PM Putting focus on the "Back" button would be very intuitive for me. <NAME> 1:21 PM Yes, I was thinking that TransitionNode (common code) probably needs to set focus. But I’m unclear on whether setting focus is OK, or whether the user should have to press tab in the level (like when they go to a screen). (edited) 1:22 Should this work like a screen, where there’s some description for the level, and the user needs to press tab to move focus to the first element (the Back button)? <NAME> 1:23 PM Its a lot like a Screen change, probably for the screen reader reasons we decided there we would want to put focus on the <h1> again. <NAME> 1:23 PM Similar problem when pressing the back button — what happens when we go back to the level-selection UI? Is there a description, like a screen? Does something get focus? 1:23 Game screens are kind of like multiple screens in one. And most games are like this. 1:24 There’s the level-selection “screen”, and a “screen” for each level. 
1:24 … and we transition between them. <NAME> 1:25 PM That <h1> is part of the ScreenView, but it is private, there isn't a way to request focus be moved to it. <NAME> 1:25 PM I suspect that scenes (used in many sims) may require a similar discussion. Or maybe you’ve had that discussion already… <NAME> 1:25 PM No, this is the first time weve thought about this for scenes like this. <NAME> 1:26 PM So… What do you recommend? Should TransitionNode just move focus to the first element in the pDomOrder for the Node that it is transitioning to? (edited) <NAME> 1:26 PM What if we added a function to ScreenView that requested DOM focus be moved back to the top. Would be called when the Screen changes and could be called whenever a scene or other game thing changes. Would you have access to the ScreenView to call such a function? <NAME> 1:27 PM I guess that would work. But do you really want to describe the Wave Game screen, or do you want to describe the UI that is currently visible (and that we have transitioned to) ? <NAME> 1:29 PM Likely what has been transitioned to, perhaps not the <h1> for the ScreenView then <NAME> 1:29 PM For example, each level in a game is currently described in the Info dialog on the level-selection UI. And there’s also a message line in the level’s status bar that typically provides instructions for that level. Is that important to read to the user? (edited) 1:31 Since this hasn’t come up before, and Games are new…. Do we need a design meeting to discuss this with the team? [Truncated] <NAME> 1:34 PM OK. So in the meantime, I’ll modify TransitionNode to move focus to the first element in pDomOrder for the Node that it has transitioned to. For Fourier, that’s the Back button for a level, and the “Level 1” button for the level-selection UI. <NAME> 1:35 PM Yea, that sounds good, thanks! pdomOrder may be null for a lot of cases if the Node doesn't have one set or hasn't been instrumented yet. <NAME> 1:36 PM I’ll create the issue in vegas, even though vegas is unlikely to provide direct support, and TransitionNode lives in twixt. <NAME> 1:36 PM Ah OK, good to know <NAME> 1:36 PM Right — if pDomOrder is null, it will of course do nothing. <NAME> 1:36 PM Sounds good </details> username_1: @username_2 (and @terracoda if you have time) can we schedule a brief design meeting about this? We will want to decide what the long term behavior should be and what the behavior should be for Fourier 1.0 (if we need to defer this). username_1: Discussed today with @terracoda and @username_2: The behavior should be like this: - After transitioning from level selection to a level scene, focus should move to the first focusable element in the level scene. - After transitioning from the level scene to the level selection scene, focus should be on the button for the level that we just left. - When switching to the game screen from navigation bar or home screen focus should be on the h1 of the screen, which matches focus behavior for all screens. username_0: Looks like there's nothing to do here, it seems to be working as desired. username_0: This pattern was taken from the Home screen. username_0: Over in https://github.com/phetsims/twixt/issues/30, @username_1 and I agreed that this funtionality does not belong in TransitionNode -- it's much too general, and used for things other than games. It might be the case that we could build support for this behavior (and the UI framework) into vegas. But that's a big project, better tackled after we have more a11y experience with game screens. 
So for the time being, I've implemented this behavior in Fourier-specific code, see the above commit. @username_2 please verify that the behavior matches https://github.com/phetsims/vegas/issues/90#issuecomment-854034816. You can close this issue if it looks OK. Status: Issue closed username_2: I remember reviewing the focus order and keyboard nav behavior in general throughout the design process for Fourier: Making Waves, but it looks like this issue slipped through the cracks. That said, the published sim does have the behavior outlined in https://github.com/phetsims/vegas/issues/90#issuecomment-854034816, so closing.
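A hedged sketch of the focus rules agreed on above, as plain TypeScript; `SceneryNode`, `pdomOrder`, `focusable` and `focus()` are stand-in names for illustration, not the real scenery/twixt API.

```typescript
// Stand-in for a scenery Node with accessible (PDOM) children.
interface SceneryNode {
  pdomOrder: SceneryNode[] | null; // accessible traversal order, may be null
  focusable: boolean;
  focus(): void;
}

// After the wipe transition completes, move focus into the Node we landed on:
// the first focusable element in its pdomOrder (e.g. the level's Back button).
function focusAfterTransition(target: SceneryNode): void {
  const order = target.pdomOrder ?? [];
  const first = order.find(node => node.focusable);
  first?.focus(); // do nothing when the Node has no focusable children
}

// Going back to level selection: restore focus to the button for the level we just left.
function focusLevelButton(levelButtons: SceneryNode[], levelIndex: number): void {
  levelButtons[levelIndex]?.focus();
}
```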
jagrosh/GiveawayBot
728852894
Title: Editing on-going Giveaways Question: username_0: In the Discord Giveaways's #support channel, it's been asked many, many times to edit on-going Giveaways. Perhaps this should become a feature. I can see the disadvantages of editing Giveaways though, especially when editing them shortly before they end. Perhaps it's possible to allow users to edit a Giveaway for a few (10-15) minutes after creation? This could become an interesting discussion... Answers: username_1: This would not be hard at all. I still feel like users should make up what they give away before creating a giveaway. They also should double check the data they enter. While we are at feature things, many people also ask for translation. And as a side note, parsing time formats like 4d3h is implemented but never used / supported. I might also want to change that. username_2: There are 3 primary reasons why this hasn't been added: 1. Because giveaways are updated in a separate process from where they are set up, giveaways might not show their updated information for a significant period of time, or the two processes could end up competing to edit the message. Editing the end time of a giveaway could also negatively impact the internal 'status' of a giveaway. 2. Users could end up entering a giveaway they don't want (or weren't expecting) as the contents could end up changing after they have entered. User consistency is an important consideration in all features. 3. Editing the time of a giveaway could end up being syntactically confusing; using duration is already a struggle for some users (as opposed to using specific date times), but is currently used because duration is timezone-independent. That said, if a solution was found that addresses these concerns, I think it would be a feasible addition.
facebook/react-native
127535461
Title: Prevent touch when processing another one Question: username_0: Hello there. I'm using ToolbarAndroid with 2 action buttons. var toolbarActions = [ { title: 'Exit', icon: require('../img/exit.png'), show: 'always', showWithText: false }, { title: 'Save', icon: require('../img/save.png'), show: 'always', showWithText: false}, ]; render: function() { return ( <ToolbarAndroid actions={toolbarActions} onActionSelected={this._onActionSelected}> </ToolbarAndroid> ) }, _onActionSelected: function(position) { switch position { case 0: // Exit without save this.props.navigator.pop(); case 1: // Save and exit this._saveData(); // Save some data this.props.navigator.pop(); } }, When I push to action 1 button and then push fast to action 0 button this.props.navigator.pop() executing twice. Is it possible to prevent toolbar action handling while another action is in process? Answers: username_1: @username_0 When you have clicked on a Toolbar button, you should disable the user from clicking the other buttons. To do that, you will need to somehow keep track of if the navigator is currently in the process of popping or not. Perhaps you can use some state to track this. You will also probably need to hook into the navigator lifecycle events as mentioned here https://facebook.github.io/react-native/docs/navigator.html#onwillfocus username_0: @username_1 Thanks a lot for your answer. I try to track with state, but it doesn't work. State does not changing before the handler is called for the second time. I'll try to track state with 'willfocus' listener. username_0: Now I'm preventing second touch by code: _onActionSelected: function(position) { if (this.state.isBusy) { return; } this.setState({isBusy: true}); switch position { case 0: // Exit if (this.state.isDataChanged) { Alert.alert( "Warning!", "Data was changed. Save changes?", [ {text: 'No', onPress: () => this.props.navigator.pop()}, {text: 'Cancel'}, {text: 'Save', onPress: () => { if (this._saveData()) { this.props.navigator.pop(); } }}, ] ); } else { this.props.navigator.pop(); } break; case 1: // Save and exit if(this._saveData()) { // Save some data this.props.navigator.pop(); return; } } this.setState({isBusy: false}); // If do not pop current view in previous code }, Does it mean that I must use this preventing schema for all touchable components? Is there native way to make touchable* components single-touch? username_2: Hi there! This issue is being closed because it has been inactive for a while. But don't worry, it will live on with ProductPains! Check out its new home: https://productpains.com/post/react-native/prevent-toolbarandroid-action-touch-while-processing-another-one ProductPains helps the community prioritize the most important issues thanks to its voting feature. It is easy to use - just login with GitHub. GitHub issues have voting too, nevertheless Product Pains has been very useful in highlighting the top bugs and feature requests: https://productpains.com/product/react-native?tab=top Also, if this issue is a bug, **please consider sending a pull request with a fix**. We're a small team and rely on the community for bug fixes of issues that don't affect fb apps. Status: Issue closed
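A sketch of the "ignore taps while busy" pattern recommended in the answers, pulled into a reusable TypeScript helper so each handler does not have to track its own flag by hand. The helper name is an illustrative assumption; this is not a React Native API.

```typescript
// Wrap a handler so that taps arriving while the first one is still being
// processed are silently dropped.
function makeSingleAction<Args extends unknown[]>(
  handler: (...args: Args) => Promise<void> | void
): (...args: Args) => Promise<void> {
  let busy = false;
  return async (...args: Args) => {
    if (busy) return;   // a second fast tap lands here and is ignored
    busy = true;
    try {
      await handler(...args);
    } finally {
      busy = false;     // re-enable the handler once saving/navigation is done
    }
  };
}

// Usage sketch: wrap the toolbar callback once, so a double tap cannot pop the
// navigator twice.
// const onActionSelected = makeSingleAction(async (position: number) => { /* ... */ });
```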
nodejs/diagnostics
580063174
Title: General Crash Reporting questions Question: username_0: Thanks all for being on the call yesterday and for being willing to answer my questions around Node crash reporting. Specifically, I am interested in a few things. I want to get a better understanding of how crash reporting works, and whether or not I can send crash reports to a third party. I'm interested in solutions to how users get and send their crash reports from Node and Electron (which I realize is out of scope for this project) to developers, especially when there are Node native plugins involved.
- Is there a better place to ask these questions?
- Is there any way to hook into internal Node APIs for a third party vendor of crash reports?
- Does `--experimental-report` use a third party vendor to create reports? Is it possible to choose a different module or vendor to create crash reports?
- Where in the Node.js core codebase is the code for crash reporting, in general?
- Is it possible to use `--experimental-report` in an Electron app? Are there issues with containers storing these logs https://github.com/nodejs/node/issues/31576? Are these the same question?
- For people using Node native plugins, what do crash reports look like? Are they the same as using Node, in general? I may be misunderstanding what node native plugins are.
- As far as I can tell, there are two exits from Node when a crash happens: an uncaught exception, or an unhandled Promise rejection. Does this sound right?
- Is there a way to add a plugin to N-API or nan in order to compile crash reports?
- Any resources for the industry standard on Node native crash reporting?
If any of these questions are unintelligible or phrased badly, it's possibly because my lack of knowledge makes framing them hard. Let me know if that seems to be the case!
Answers: username_1: we are trying to build one, please review https://github.com/nodejs/diagnostics/issues/295 I hope I provided answers to some, but feel free to expand wherever things are lacking; I or others can help explain
username_2: An unhandled Promise rejection will not exit the Node.js process by default. There's a flag to set this behavior though (`--unhandled-rejections=strict`). Native (C++) crashes will also cause the process to exit.
username_0: Thank you, @username_2 and @username_1! This is really quite helpful. I'm following up with my team on all of these, and will get back to you! I apologise for the delay here. :)
username_0: @username_1 Thank you, again. I took a look at #295, and left a comment there thinking how I can move that forward. Hope it helps! As for my questions above - I think your answers actually answered the questions for the team I was working with. As this isn't actionable, I am going to close it. I hope others are able to find use out of this, too. If any of this could or should be merged into any FAQ-like docs (should we make an FAQ.md doc?) to help others out, let me know.
Status: Issue closed
username_1: Looks like a great idea to me. Shall we discuss this in the WG? (probably needs an issue in itself)
username_0: Let me know if it is. Happy to stub out a PR.
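A small illustration of the exit paths mentioned in the answers, using only public Node APIs (`process.on`); whether an unhandled rejection is fatal depends on the Node version and the `--unhandled-rejections` flag, as hedged in the comments.

```typescript
// Last chance to write a crash report; without this handler Node prints the
// error and exits on its own.
process.on("uncaughtException", (err: Error) => {
  console.error("writing crash report for:", err.message);
  process.exit(1);
});

// 1) Unhandled rejection: on older Node versions this is only a warning by
//    default; with --unhandled-rejections=strict (or on newer Node defaults)
//    it is treated as fatal.
Promise.reject(new Error("rejected and never handled"));

// 2) Uncaught exception thrown from a callback: reaches the handler above.
setTimeout(() => {
  throw new Error("uncaught exception");
}, 100);
```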
Gnomorian/WYE-Mod
56120068
Title: Head Collector Question: username_0: A sword that, when it kills a mob, gives a 2% chance to drop that mob's head, so the head can be used to create the crowns.
- [X] Recipe.
- [ ] 2% chance that a killed mob drops its head.
- [ ] Texture. Status: Issue closed Answers: username_0: The texture may need to be shrunk in width; it looks like a fly swatter.
steelewool/iraf
223113300
Title: libVO.a not being made Question: username_0: Error shows up on line 15751 of the log12.txt file. This appears to be a graphics library package. [log12.txt](https://github.com/username_1/iraf/files/944262/log12.txt) Answers: username_1: Tried building using Linux as the target architecture. That worked no better than when I used Linux64. username_2: The code for 'xc' (unix/boot/xc.c) is failing to map a path for this specific library file: ``` $ xc -d iraflib: libmain.o -> /home/username_2/Dev/iraf/bin.linux64/libmain.o mkfname:/home/username_2/Dev/iraf/bin.linux64/libmain.o iraflib: libex.a -> /home/username_2/Dev/iraf/bin.linux64/libex.a mkfname:/home/username_2/Dev/iraf/bin.linux64/libex.a iraflib: libsys.a -> /home/username_2/Dev/iraf/bin.linux64/libsys.a mkfname:/home/username_2/Dev/iraf/bin.linux64/libsys.a iraflib: libvops.a -> /home/username_2/Dev/iraf/bin.linux64/libvops.a mkfname:/home/username_2/Dev/iraf/bin.linux64/libvops.a iraflib: libos.a -> /home/username_2/Dev/iraf/unix/bin.linux64/libos.a mkfname:/home/username_2/Dev/iraf/unix/bin.linux64/libos.a iraflib: libVO.a -> libVO.a mkfname:libVO.a iraflib: libcfitsio.a -> /home/username_2/Dev/iraf/bin.linux64/libcfitsio.a mkfname:/home/username_2/Dev/iraf/bin.linux64/libcfitsio.a iraflib: libf2c.a -> /home/username_2/Dev/iraf/unix/bin.linux64/libf2c.a gcc -o T_ /home/username_2/Dev/iraf/bin.linux64/libmain.o /home/username_2/Dev/iraf/bin.linux64/libex.a /home/username_2/Dev/iraf/bin.linux64/libsys.a /home/username_2/Dev/iraf/bin.linux64/libvops.a /home/username_2/Dev/iraf/unix/bin.linux64/libos.a libVO.a /home/username_2/Dev/iraf/bin.linux64/libcfitsio.a /home/username_2/Dev/iraf/unix/bin.linux64/libf2c.a -lm -lpthread -lm -lrt gcc: error: libVO.a: No such file or directory ``` username_0: The directory iraf/vendor/voclient/libvo contains a Makefile that appears to be designed to build libVO.a - but isn't working. username_0: One of the first things that happen when I do a 'make sysgen' is that all of the .a files are removed from the bin.linux64 directory. So, I created a perserve-linux64.bin directory and copied over all of the .a files before tring to do a make. Then after starting 'make sysgen' I copied the libVO from the perserve-linux64.bin file into the bin.linux64 directory before the file was required by the make. The results wlll be posted here shortly. But, I need to find out where the .a files are being deleted. Its seems that if the make file was set up correctly and the dependicies set correctly the make file could figure out if there was even a need to build new .a files.
ucm-vertnet/ucm-egg
752431433
Title: Monthly VertNet data use report for 2020-4, resource ucm_egg Question: username_0: Your monthly VertNet data use report is ready! You can see the HTML rendered version of this report at: http://tools-usagestats.vertnet-portal.appspot.com/reports/472a9647-bca4-4975-a5c4-613d1064ea69/202004/ Raw text and JSON-formatted versions of the report are also available for download from this link. A copy of the text version has also been uploaded to your GitHub repository under the "reports" folder at: https://github.com/ucm-vertnet/ucm-egg/tree/master/reports A full list of all available reports can be accessed from: http://tools-usagestats.vertnet-portal.appspot.com/reports/472a9647-bca4-4975-a5c4-613d1064ea69/ You can find more information on the reporting system, along with an explanation of each metric, at: http://www.vertnet.org/resources/usagereportingguide.html Please post any comments or questions to: http://www.vertnet.org/feedback/contact.html Thank you for being a part of VertNet.
jlippold/tweakCompatible
347748779
Title: `libCSColorPicker` working on iOS 11.3 Question: username_0: ``` { "packageId": "com.creaturesurvive.libcscolorpicker", "action": "working", "userInfo": { "arch32": false, "packageId": "com.creaturesurvive.libcscolorpicker", "deviceId": "iPod7,1", "url": "http://cydia.saurik.com/package/com.creaturesurvive.libcscolorpicker/", "iOSVersion": "11.3", "packageVersionIndexed": true, "packageName": "libCSColorPicker", "category": "Development", "repository": "BigBoss", "name": "libCSColorPicker", "installed": "0.7.3", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "com.creaturesurvive.libcscolorpicker", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.0", "shortDescription": "A minimal color picker library for developers", "latest": "0.7.3", "author": "CreatureSurvive", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```
Cyclid/Cyclid
222353663
Title: Overly broad Github OAuth scope Question: username_0: Is a known Github issue with OAuth scope granularity: https://github.com/dear-github/dear-github/issues/113 Might be worth adding the boilerplate provided by @mezis to the Github plugin page, to help explain why the scope seems so broad.
redwoodjs/redwood
811410444
Title: Allow running a single test by file path Question: username_0: It would be nice if you could run only a single file with our test runner… you can normally with Jest, but we’ve intercepted the second argument and require that it either be web or api to run the suite for one whole side: ``` rob$ yarn rw test web/src/components/Article/Article.test.js Invalid values: Argument: side, Given: "web/src/components/Article/Article.test.js", Choices: "web", "api" ``` I'm guessing this is a yargs thing...instead of requiring one or the other we should allow *anything* and then do a little additional work to determine if it's a valid side, otherwise assume it's a full path to a file. Answers: username_1: It would be amazing if we wrapped Jest the same way that we're wrapping Prisma's commands. So that you can pass any default arguments, but we provide some sane defaults for Redwood's context. username_0: So the command would become `yarn rw jest`? username_2: @username_0 wouldn't we be able to alias `rw jest` to `rw test`? username_0: That's what we're doing now, but from @username_1's comment it sounds like he's suggesting doing what we do for Prisma: we used to have `yarn rw db` but we got rid of that and as of 0.25 just proxy to the *real* Prisma commands with `yarn rw prisma`. This makes it easier to find documentation and not worry about trying to keep up with Prisma's changing migrate API. username_2: What I meant was to still proxy the command over to `jest`, but then do that with `test`. So basically everything behind `rw test` will be forwarded to `yarn run jest` username_0: Yep we definitely could, I'm just not sure how literal @username_1 was being! :) Status: Issue closed
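A sketch of the argument handling change discussed above: accept any positional value, keep treating `web`/`api` as sides, and forward anything else to Jest as a test-path filter. Function and option names are assumptions for illustration, not Redwood's actual CLI code.

```typescript
const SIDES = ["web", "api"] as const;

interface TestArgs {
  filters: string[]; // whatever the user typed after `yarn rw test`
}

function buildJestArgs({ filters }: TestArgs): { sides: string[]; jestArgs: string[] } {
  const sides: string[] = [];
  const jestArgs: string[] = [];

  for (const filter of filters) {
    if ((SIDES as readonly string[]).includes(filter)) {
      sides.push(filter);    // run a whole side, as today
    } else {
      jestArgs.push(filter); // e.g. web/src/components/Article/Article.test.js
    }
  }

  // Default to both sides when none was named; Jest then narrows by the path filter.
  return { sides: sides.length ? sides : [...SIDES], jestArgs };
}

// buildJestArgs({ filters: ["web/src/components/Article/Article.test.js"] })
//   -> { sides: ["web", "api"], jestArgs: ["web/src/components/Article/Article.test.js"] }
```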
schmittjoh/serializer
47402320
Title: GenericSerializationVisitor and shouldSerializeNull Question: username_0: Why does it check only string keys? ( https://github.com/schmittjoh/serializer/blob/master/src/JMS/Serializer/GenericSerializationVisitor.php#L104)
```
if (null === $v && (!is_string($k) || !$context->shouldSerializeNull())) {
```
If I have this array and **shouldSerializeNull == true**
```
array('test1' => null, 1 => null);
```
I get:
```
array('test1' => null);
```
In my opinion, this is not correct - or is there some caveat I'm missing?
Answers: username_1: Ping... Same problem here: an array `[null, 1, null, 2]` will be serialized to something like `[1, 2]`, which is obviously not correct.
username_2: will be fixed with https://github.com/schmittjoh/serializer/pull/626 Status: Issue closed
dotnet/roslyn
278592159
Title: PathMap should accept and ignore empty mapping Question: username_0: **Version Used**: master
**Steps to Reproduce**:
1. ```csc a.cs /pathmap:a=b,```
**Expected Behavior**:
The empty mapping is ignored.
**Actual Behavior**:
```
error CS8101: The pathmap option was incorrectly formatted.
```
Not accepting an empty mapping makes logic that concatenates multiple mappings, for example in MSBuild targets files, unnecessarily complicated, as it needs to avoid inserting an extra ```,```.
Answers: username_1: Tagging @khyperia as this may relate to https://github.com/dotnet/roslyn/pull/23053 Status: Issue closed
AirtestProject/Airtest
547349259
Title: New version: test reports generated with Airtest_selenium cannot display images or content Question: username_0: **Device:**
- Model: Chrome 79
- System: Windows 10
- (other information)

**Other relevant environment information**
Version 1.2.1 can display the operations but has no screenshots
![image](https://user-images.githubusercontent.com/38344643/72054547-257c2480-3304-11ea-9d03-3dbce727c955.png)

Answers: username_1: Open the report with the Chrome browser instead; note that old versions of Chrome are not supported either.
Or use a newer airtest version, for example 1.1.1 Status: Issue closed
kblin/ncbi-genome-download
641269172
Title: Issue download protozoa genomes Question: username_0: Same issue as in [#118](https://github.com/username_1/ncbi-genome-download/issues/118) Running version 0.2.12 installed from conda How do I get the fix? ```sh Traceback (most recent call last): File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/bin/ncbi-genome-download", line 6, in <module> exit(main()) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/ncbi_genome_download /__main__.py", line 25, in main ret = args_download(args) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/ncbi_genome_download /core.py", line 159, in args_download return config_download(config) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/ncbi_genome_download /core.py", line 196, in config_download curr_jobs = create_downloadjob(entry, group, config) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/ncbi_genome_download /core.py", line 397, in create_downloadjob checksums = grab_checksums_file(entry) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/ncbi_genome_download /core.py", line 465, in grab_checksums_file req = requests.get(full_url) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/requests/api.py", li ne 76, in get return request('get', url, params=params, **kwargs) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/requests/api.py", li ne 61, in request return session.request(method=method, url=url, **kwargs) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/requests/sessions.py ", line 516, in request prep = self.prepare_request(req) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/requests/sessions.py ", line 449, in prepare_request p.prepare( File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/requests/models.py", line 314, in prepare self.prepare_url(url, params) File "/exports/cmvm/eddie/eb/groups/watson_grp/software/mickpython/ncbi-genome-download/lib/python3.8/site-packages/requests/models.py", line 388, in prepare_url raise MissingSchema(error) requests.exceptions.MissingSchema: Invalid URL 'na/md5checksums.txt': No schema supplied. Perhaps you meant http://na/md5checksums.txt? ``` Answers: username_1: Yeah, it's the same issue as #118, that's fixed in master by #114, but I didn't get around to relase the new version with that fix yet. username_0: Thanks Kai, so are their instructions on how to install from (a specific branch of) github? username_1: The new release isn't out because this was planned to be the large "drop python2 support and do other breaking changes in the API" release, but I can do a quick 0.2.13 patch release before the breaking changes happen. username_1: But I think you can also just point pip at a git repo. `pip install git+https://github.com/username_1/ncbi-genome-download.git` should do the trick, and should give you the current master if I remember correctly. 
username_1: or `pip install -U git+https://github.com/username_1/ncbi-genome-download.git` if you're updating, of course. username_1: Fixed in 0.3.0 Status: Issue closed username_0: Great stuff Kai! I realise you have already done a lot of work, and thank you for that, just wondering if this will be pushed to BioConda at some point? 😊 Cheers Mick username_1: Bioconda version is in progress: https://github.com/bioconda/bioconda-recipes/pull/22778
arshadansari27/fitranginew
72080459
Title: "Upcoming Trips" Menus - Make them visible Question: username_0: "Upcoming Trips" Menus 1) Upcoming Trips - right now it keeps only one menu, but soon we will be adding the following menus: 2) Camp Sites 3) Adventure Parks & Resorts 4) Logistics & Travel Answers: username_1: Is this the submenu under upcoming trips? username_0: Yes....Submenu Under Upcoming Trips! username_1: so let's add this when we have a mechanism to attach trips to one of the given categories.. right now the functionality isn't there in the backend.. so let's not add a new menu for now.. username_1: if it is needed, we will reopen it later or create a new one.. Status: Issue closed
rust-lang/rust
321718373
Title: Rebuilding webrender/wrench takes a long time Question: username_0: ``` $ git clone https://github.com/servo/webrender/ $ cd wrench $ cargo build $ [do some small change to wrench/src/rawtest.rs] $ cargo build Compiling wrench v0.3.0 (file:///tmp/webrender/wrench) Finished dev [unoptimized + debuginfo] target(s) in 5.15 secs ``` These timings are from linux. It's quite a bit worse on Mac, but a lot of that time probably comes from dsymutil (https://github.com/rust-lang/rust/pull/47784) Answers: username_1: @ishitatsuyuki There's no harm in marking issues like this one with the `WG-compiler-performance` label. I personally view the label as a tool for discoverability, so it's better to tag too much than too little. username_1: I can reproduce this. Rebuilding takes 5.6 seconds on my system. Of that, 2.8 seconds are spent in the linker. The compiler also spends 0.780 seconds in LLVM, but it only re-compiles the two object files corresponding to `rawtest.rs`, so that's pretty much the best case. Note that using different linkers gives different results: | | rebuild time | |----------|------------------------| | ld | 9.38s | | gold | 5.65s | | lld | 3.41s | username_2: @username_1 can you clarify why that is the best case? =) that is, can we break down a bit more where the time is going? username_1: It is the best case in the sense that only the two object files get re-compiled that correspond to the changed module while all the others are re-used. Iow, the best case with respect to object file re-use. I'll post some `-Ztime-passes` and `perf focus` numbers later, so we can take a closer look. username_1: By the way, in order to compile with LLD, I had to use `clang` as the linker: ``` RUSTFLAGS="-Clinker=clang -Clink-arg=-fuse-ld=lld" cargo build ``` username_3: On my Linux box, using Nightly rustc (from 2018-05-22), linking takes about 4.8s out of a total time of 6.3s. That's with whatever the default linker is. username_1: Really? That's very surprising. Maybe it's bottlenecked on I/O on that system? username_3: @username_1: how can I tell which linker I am using? And how can I change the linker? username_1: @username_3 You can change the linker as described [above](https://github.com/rust-lang/rust/issues/50584#issuecomment-390898777). You can use `-Zprint-link-args` to view the exact linker invocation. username_3: I just re-measured on my Linux box. For "some small change to wrench/src/rawtest.rs" I just `touch`ed the file. Results: - `RUSTFLAGS="-Clinker=clang -Clink-arg=-fuse-ld=ld" time cargo build`: 7.6 seconds - `RUSTFLAGS="-Clinker=clang -Clink-arg=-fuse-ld=gold" time cargo build`: 4.4 seconds - `RUSTFLAGS="-Clinker=clang -Clink-arg=-fuse-ld=lld" time cargo build`: 2.9 seconds I had to run `sudo apt install lld` because `lld` wasn't installed by default on my system. Right, so what can we do about getting `lld` as the default linker? username_3: #39915 is the PR for that. username_4: @username_3 we're actually currently shipping LLD on nightly, but it's currently only intended for wasm and "accidentally" also works for MSVC. For targets like OSX and Linux the story is a bit trickier due to the other object files that clang/gcc typically insert. The next step for stabilizing LLD would be to get a flag, like `-Z linker-flavor=lld`, working for all targets (Windows + Mac + Linux). It'd do whatever it needs to do to work across the various platforms. Once that's done we can advertise it to the community, asking for feedback. 
Here we can gain both timing information as well as bug reports to send to LLD. If everything goes smoothly (which is sort of doubtful with a whole brand new linker, but hey you never know!) we can turn it on by default, otherwise we can work to stabilize the selection of LLD and then add an option to Cargo.toml so projects can at least opt-in to it. username_0: This is still pretty bad ``` $ touch src/rawtest.rs $ cargo build Compiling wrench v0.3.0 (/Users/username_0/src/webrender/wrench) Finished dev [unoptimized + debuginfo] target(s) in 12.56s ``` username_5: It does seem like an odd thing to track a particular crate though -- if this was reopened as "X pattern makes compiler super slow" then that would make more sense to me. We also have a somewhat long-standing issue I believe on rustc-perf to add webrender/wrench as a benchmark so I'm going to loosely close in favor of that. Status: Issue closed
alisdairjsmyth/node-red-contrib-blindcontroller
348287593
Title: Temperature / Clouds Threshold position improvements Question: username_0: Hi, I'm wondering if you'd be interested in adding a Temperature / Clouds Threshold **position** variable. Currently, if the Temperature Threshold is met, the blind will move to the Max Closed position. Ideally I'd like to have a Temperature / Clouds Threshold **position** variable where (pseudo code): If Current Temp > Temperature Threshold then position = Temperature Threshold Position || Max Closed position. The idea is that under **normal** circumstances I'd like my blinds to be at least 50% open (Max Closed Position), but if ambient temperature is > 40c, for example, and the sun is in the window, then close to, say, 90% - makes sense? If interested I can do a pull request Answers: username_1: Sounds like a useful feature. I would be happy to accept such a pull request. In its implementation, may I suggest two separate position parameters: one for temperature and one for clouds. username_0: https://github.com/username_1/node-red-contrib-blindcontroller/pull/20 Status: Issue closed username_1: Incorporated into v4.5.0
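A plain-code sketch of the requested behaviour, including the maintainer's suggestion of separate override positions for temperature and clouds. The property names are illustrative assumptions, not the node's actual configuration schema; positions are expressed as percent closed.

```typescript
interface BlindConfig {
  maxClosed: number;                      // normal limit, e.g. 50 (% closed)
  temperatureThreshold?: number;          // e.g. 40 (°C)
  temperatureThresholdPosition?: number;  // e.g. 90 (% closed) on hot days
  cloudsThreshold?: number;               // e.g. 0.8 cloud cover
  cloudsThresholdPosition?: number;       // position to use when overcast
}

function blindPosition(cfg: BlindConfig, temperature: number, clouds: number, sunInWindow: boolean): number {
  if (!sunInWindow) return 0; // fully open when the sun is not in the window

  if (cfg.temperatureThreshold !== undefined && temperature > cfg.temperatureThreshold) {
    // Hot day: close further than normal, falling back to maxClosed when no
    // dedicated position is configured (the "|| Max Closed position" case).
    return cfg.temperatureThresholdPosition ?? cfg.maxClosed;
  }
  if (cfg.cloudsThreshold !== undefined && clouds > cfg.cloudsThreshold) {
    return cfg.cloudsThresholdPosition ?? cfg.maxClosed;
  }
  return cfg.maxClosed; // normal sun-protection position
}

// blindPosition({ maxClosed: 50, temperatureThreshold: 40, temperatureThresholdPosition: 90 }, 42, 0.1, true) === 90
```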
GoogleCloudPlatform/nodejs-docs-samples
380877771
Title: Blurred image written to bucket isn't blurred Question: username_0: Steps to repro
-- Set up cloud function as in readme steps 1-4
-- upload zombie picture as in readme step 5

Expected result
-- Offensive content is detected
-- Image is copied locally, blurred, then uploaded as new object
-- Viewing new object shows blurred image

Actual result
-- Offensive content is detected
-- Image is copied locally, acted on, and new object is uploaded
-- Viewing new object shows unblurred copy of original image
Answers: username_1: Is this PR addressing the issue? https://github.com/GoogleCloudPlatform/nodejs-docs-samples/pull/893
username_0: yes - the proposed change does fix the issue. Thanks!
username_2: Fixed with https://github.com/GoogleCloudPlatform/nodejs-docs-samples/pull/893.

Status: Issue closed
tconnor/goodpy
232934109
Title: Artifact Correction Question: username_0: The quartz artifact is a pain, and there is a way to handle that. There should be a feature to make a flat-correction from artifact frames. Answers: username_0: I've added a step 2a to the development branch that handles this. It needs testing -- and probably some debugging. username_0: Closing for now; the banding correction mitigates the quartz artifact, anyway. Status: Issue closed
dbeaver/dbeaver
587555726
Title: [*Possible* BUG] - Did DBeaver lose its "smart home key" functionality? Question: username_0: Is it me, or hitting the "HOME" key used to go to the first character of the line, and now it goes to the beginning of the line? :-S
Answers: username_1: Hi @username_0 , what version do you use? In 7.0.1 it works ok for me - hitting the Home button still goes to the first character
username_0: 7.0.1 Linux Mate 18.04.
username_2: Same here with 7.0.2 on Windows x64
username_1: Hm, reproduced for me after changing the default formatter. Also, resetting the UI settings helps me. btw, @username_0 , @username_2 what formatters do you use?
username_0: "I don't know" which spells "The default one" :-D ;-)
username_2: A bit late, but I was also using the default formatter.
username_3: Can't reproduce. `Home` always navigates to the first non-whitespace character (spaces, tabs). You can enable "Show whitespace characters" in text editor preferences to see what is actually in the beginning of the line.
username_4: @username_0 , @username_2 Did Serge's advice help you?
username_1: there has been no update on this issue for a long time. It is probably solved. If it is still relevant for you - feel free to reopen the ticket
Status: Issue closed
cbsd/cbsd
242810007
Title: Does not detect the presence of the PF module in the kernel Question: username_0: # env workdir="$target" /usr/local/cbsd/sudoexec/initenv
...
Do you want to modify /boot/loader.conf to set pf_load=YES ? [yes(1) or no(0)]
n
grep: /usr/jails/etc/pfnat.conf: No such file or directory
...
Even though:
# kldstat -v | grep ' pf$'
 408 pf
Answers: username_1: Only the grep behavior needs to be fixed here ( No such file or directory ). Apart from that, this step does not check whether the module is present or not, because this is CBSD initialization, and in this dialog prompt it prepares and configures the parameters required for operation. The fact that the module is already loaded may be the result of loading pf by hand, which does not guarantee that the module will be loaded after a server reboot and would therefore leave CBSD non-functional. So the question about editing the config here only means editing the config, so that the user knows what is being changed on their system; the system will not change anything in the system settings without asking.
username_1: Corrected, thanks!
Status: Issue closed
nguyenq/tess4j
226180221
Title: Is tess4j thread-safe? Question: username_0: Is tess4j thread-safe? Answers: username_1: This [link](https://sourceforge.net/p/tess4j/discussion/1202293/thread/4562eccb/#799c) has some more information regarding it. It apparently depends on the version of Tesseract being used. username_2: Yes, it is thread-safe: no shared state between different instances. Status: Issue closed
iovisor/bcc
641488969
Title: Return in LSM_PROBE not working as expected Question: username_0: A few days ago, I filed #2961 because I was confused that the LSM_PROBE program type was not behaving as expected. In particular, returning something like `-EPERM` should cause the operation to fail with EPERM. I have done some experiments with https://github.com/iovisor/bpftrace/pull/1347 (bpftrace implementation for LSM probes) and compared generated bytecode from bcc with generated bytecode from bpftrace. I have found the problem. bcc seems to be ignoring return value and always generating something like `r0 = 0; exit`, whereas bpftrace sets `r0` according to the specified value. Indeed, the following modifications to the below example produce the expected behavior: ```c // Original code that doesn't work #include <linux/fs.h> LSM_PROBE(inode_permission, struct inode *inode, int mask) { bpf_trace_printk("lsm\\n"); return -1; } ``` ```c // Modified code that does work #include <linux/fs.h> LSM_PROBE(inode_permission, struct inode *inode, int mask) { bpf_trace_printk("lsm\\n"); asm("r0 = -1; exit"); return 0; } ``` I think that LSM probes should be made to work like they do in https://github.com/iovisor/bpftrace/pull/1347, setting r0 according to the return value specified. Is this something that would be possible in bcc? If not, perhaps we could add a helper instead like `lsm_return` that results in the bytecode above? Answers: username_1: @username_0 Could you check whether the pull request https://github.com/iovisor/bcc/pull/2978 fixed your problem or not? username_0: @username_1 Yes this does fix the problem. Thanks! username_0: #2978 closes Status: Issue closed
PMA-2020/datalab
503556027
Title: Possibly needed env file updates Question: username_0: Copy/pasta from my workflowy: ``` Update datalab env - set datalab.pma2020.org bucket env to use this url: - https://pma-api.herokuapp.com/ --> https://api.pma2020.org - edit directly on s3 - env file - network file - main js ```
munki/munki-pkg
419982387
Title: Is it possible to install to different paths? Question: username_0: I have a GUI application and a separate launchd daemon that it needs to talk with. So my GUI bundle needs to go to `/Applications` but the launchd script will go to `/Library/LaunchDaemons`. Is this possible with munki-pkg? Answers: username_1: Of course it is. The payload directory would contain Applications/Your.app and Library/LaunchDaemons/your.plist Status: Issue closed
jlippold/tweakCompatible
426624475
Title: `BatteryBar` working on iOS 10.2.1 Question: username_0: ``` { "packageId": "com.dgh0st.batterybar", "action": "working", "userInfo": { "arch32": false, "packageId": "com.dgh0st.batterybar", "deviceId": "iPhone6,1", "url": "http://cydia.saurik.com/package/com.dgh0st.batterybar/", "iOSVersion": "10.2.1", "packageVersionIndexed": false, "packageName": "BatteryBar", "category": "Tweaks", "repository": "BigBoss", "name": "BatteryBar", "installed": "0.0.3-4", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "com.dgh0st.batterybar", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Display a bar that represents the battery percentage in the status bar.", "latest": "0.0.3-4", "author": "DGh0st", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed