repo_name | issue_id | text
---|---|---
owncloud/ocis-reva | 621467697 | Title: XML properties in webdav response not properly encoded
Question:
username_0: In the XML WebDAV responses, some properties are not properly encoded.
For example:
| filename | expected value (Value returned by oc10) | Actual value |
|---|---|---|
| /folder &2.txt | webdav\/file%20%262\.txt | webdav/file%20&2.txt|
| /C++ folder | webdav\/C%2b%2b%20folder | webdav/C++%20folder/ |
Answers:
username_1: is this not the same as https://github.com/owncloud/ocis-reva/issues/122 ?
I see that issue is closed but the tests are still skipped
username_1: I see you mentioned this in https://github.com/owncloud/core/pull/37388
In the future please always create connections between the tickets to make it easier to find the context
username_1: never mind, it was referenced indirectly.
usually I think it can be useful to add a sentence in the top post that says where this was discovered and what it related to, to avoid confusion
username_1: seems we're using Golang's `URL.EscapedPath` which apparently decided to not encode the spaces and plus signs
username_1: we could use `url.QueryEscape()` but it needs to be done individually on each path section
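(As an illustration of the per-segment idea, here is a minimal sketch; Python and the `encode_path` helper are used purely for illustration, since the actual fix would live in the project's Go code. Note that `urllib.parse.quote` emits uppercase hex escapes.)
```python
from urllib.parse import quote

def encode_path(path):
    # Escape each path segment separately so the "/" separators survive,
    # since quote(..., safe="") would otherwise encode them too.
    return "/".join(quote(seg, safe="") for seg in path.split("/"))

print(encode_path("webdav/file &2.txt"))  # webdav/file%20%262.txt
print(encode_path("webdav/C++ folder"))   # webdav/C%2B%2B%20folder
```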
username_1: so I just tried it locally but it encoded with lower case instead of upper case. "%2d" instead of "%2D", so the test also fails :disappointed: |
nextcloud/richdocuments | 598800665 | Title: Collaborative editing is not possible if the display language is Japanese
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Both users have their display language set to Japanese.
2. Two users open the same file
3. Either one tries to select/move cell
4. Error: load timeout / clientzoom error
**Expected behavior**
Each user can edit.
**Workaround**
Either user sets the display language to something other than Japanese.
There seems to be a problem when both are set to Japanese display.
**Client details:**
- OS: Windows10
- Browser Firefox 75, 68ESR, Vivaldi 1.12, new Edge
- Device: Desktop
## Server details
**Operating system**:
CentOS 7.7
**Web server:**
nginx 1.16.1
**Database:**
MariaDB 10.2.29
**PHP version:**
PHP 7.3.16
**Nextcloud version:**
18.0.3
**Version of the richdocuments app**
3.5.3
**Version of Collabora Online**
6.2.8(loolwsd 4.2.2)
Status: Issue closed
Answers:
username_0: Sorry, this was a problem with Collabora itself. When I tried co-editing from other applications via WOPI, the same phenomenon occurred. |
yt-dlp/yt-dlp | 1046383518 | Title: Can not download the full playlist in curiositystream
Question:
username_0: ### Checklist
- [X] I'm reporting a feature request
- [X] I've verified that I'm running yt-dlp version **2021.10.22**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Description
If I want to download a series, I cannot download all the episodes. It downloads only the video that I have put into the command line.
here is the log: https://ghostbin.com/jMyOD
Answers:
username_1: Of course it only downloads the video when you pass the "video" URL on the command line
If you want to download a series, pass the URL of the series. It should look something like `https://curiositystream.com/series/2`
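(For anyone scripting this, a hedged sketch of the same advice using yt-dlp's Python API; the series URL is the example from the comment above, and the credentials are hypothetical placeholders, since CuriosityStream extraction requires an account.)
```python
import yt_dlp

ydl_opts = {
    "username": "you@example.com",  # placeholder account credentials
    "password": "hunter2",
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    # Passing the series URL (rather than a single video URL)
    # downloads every episode in the series.
    ydl.download(["https://curiositystream.com/series/2"])
```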
Status: Issue closed
username_0: Then how can I select the series that I want to download? |
pololu/vl53l1x-st-api-arduino | 619123936 | Title: esp8266 + VL53L1X init failed
Question:
username_0: VL53L1X Model_ID: EF
VL53L1X Module_Type: EF
VL53L1X: EFFE
Autonomous Ranging Test
VL53L1_StartMeasurement failed
Answers:
username_1: If you are still looking for help troubleshooting this, please post on [the Pololu forum](https://forum.pololu.com/) and include pictures showing your connections. (The GitHub issues are intended for tracking specific problems with and feature requests for the software, and this does not currently appear to indicate a bug in the code itself.)
Status: Issue closed
|
aws-amplify/amplify-cli | 378930806 | Title: Merge additions to configuration file without destroying customer updates
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Yes - The JavaScript library and iOS/Android SDKs rely on `aws-exports.js` and `awsconfiguration.json` to interact with services not only managed through the CLI, but also independent resources they created in their AWS account. In some situations today a customer might add something (like Kinesis information) and then run `amplify add auth` which would overwrite the configuration file.
**Describe the solution you'd like**
Union the customer provided configuration along with the Amplify CLI controlled config into the same file, with neither negatively impacting the other.
**Describe alternatives you've considered**
Manually copying everything before running Amplify CLI commands.
**Additional context**
Out of scope is the use case when a customer provides resource information (Cognito, AppSync, etc.) that the Amplify CLI provides as well. In these cases where there is conflict we should default to not overwriting the customer information.
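(A minimal sketch of the requested union-with-customer-priority semantics, in Python purely for illustration; the function name and the config keys are made up, and the real Amplify CLI is JavaScript.)
```python
def merge_config(cli_generated, customer):
    """Union two config dicts; on conflict the customer's value wins,
    matching the 'do not overwrite customer information' rule above."""
    merged = dict(cli_generated)
    for key, value in customer.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# e.g. customer-added Kinesis config survives an `amplify add auth` write:
customer = {"Kinesis": {"region": "us-east-1"}}
generated = {"Auth": {"userPoolId": "example"}}
print(merge_config(generated, customer))
# {'Auth': {'userPoolId': 'example'}, 'Kinesis': {'region': 'us-east-1'}}
```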
Answers:
username_1: @username_0 We added this feature as a part of the latest version of the CLI :)
Status: Issue closed
username_2: @username_1 Could you please give an example of how this works now? Don't find an example in the docs. :-) |
hug-sun/element3 | 764810238 | Title: refactor badge tasking
Question:
username_0: - [x] The displayed content can be set; numbers and strings are supported
- [x] A maximum value can be set (see the sketch after this list)
* If the value exceeds the maximum, e.g. max = 10, display "10+"
* If the value does not exceed the maximum, e.g. max = 10 and value = 8, display 8
- [x] Whether to show a small dot; if the dot is shown, the input content is not displayed
- [x] Control whether the badge is hidden; if hidden, it is not displayed
- [x] The color can be controlled; available values are primary / success / warning / danger / info<issue_closed>
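(The max-value display rule above is language-agnostic; here is a hedged Python sketch of it, purely illustrative since the actual component is Vue.)
```python
def badge_text(value, maximum):
    # Checklist rule: numeric values above `maximum` render as
    # "<maximum>+"; anything else (including strings) renders as-is.
    if isinstance(value, (int, float)) and value > maximum:
        return f"{maximum}+"
    return str(value)

assert badge_text(11, 10) == "10+"
assert badge_text(8, 10) == "8"
```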
Status: Issue closed |
denoland/deno_std | 423281345 | Title: Cannot resolve module "deno" (NOT FOUND)
Question:
username_0: On this line
https://github.com/denoland/deno_std/blob/master/http/server.ts#L2
I get this error:
```
Downloading https://deno.land/x/[email protected]/deno... NOT FOUND
Uncaught NotFound: Cannot resolve module "deno" from "/home/username_0/.cache/deno/deps/https/deno.land/x/[email protected]/server.ts"
at DenoError (deno/js/errors.ts:22:5)
at maybeError (deno/js/errors.ts:33:12)
at maybeThrowError (deno/js/errors.ts:39:15)
at sendSync (deno/js/dispatch.ts:75:5)
at fetchModuleMetaData (deno/js/os.ts:80:19)
at _resolveModule (deno/js/compiler.ts:249:38)
at resolveModuleNames (deno/js/compiler.ts:479:35)
at compilerHost.resolveModuleNames (deno/third_party/node_modules/typescript/lib/typescript.js:118650:138)
at resolveModuleNamesWorker (deno/third_party/node_modules/typescript/lib/typescript.js:86767:127)
at resolveModuleNamesReusingOldState (deno/third_party/node_modules/typescript/lib/typescript.js:87001:24)
```
https://deno.land/x/[email protected]/deno gives a 404 error.
deno: 0.3.3
v8: 7.4.238
typescript: 3.3.3333
Answers:
username_1: The 'deno' builtin module was removed in [email protected]. You need to update the import path to `https://deno.land/[email protected]/http/server.ts`
username_0: The issue was from this file https://deno.sh/[email protected]/package.ts
Nevermind.
Status: Issue closed
|
Azure-Samples/cognitive-services-speech-sdk | 754610153 | Title: Improved intent recognizer responses
Question:
username_0: **Is your feature request related to a problem? Please describe.**
We've been using the Intent Recognizer capability that's part of the Speech SDK to perform both speech-to-text and LUIS-based intent recognition in a single call. However, we've been missing some of the capabilities that are available when using these services separately. Specifically, the ability to get prediction scores for all intents does not seem to be available via Intent Recognizer (but is available from LUIS directly). Other capabilities, like sentiment and options around profanity are also less available.
**Describe the solution you'd like**
Provide an option to retrieve the full LUIS response from the Intent Recognizer response. This would include content like scores for all intents (not just the top scoring one; against the V2 LUIS endpoint, this requires verbose=true, and against the V3 LUIS endpoint, this requires show-all-intents=true). Providing access to other parts of the LUIS response, like text sentiment (when enabled for the model), would also be useful.
**Describe alternatives you've considered**
While some of the benefits of Intent Recognizer, like speech priming, are highly useful, the missing features highlighted above are leading us to consider switching away from this approach and using Speech API + LUIS API separately to get the full power of each. We've also considered just adding just a subset of intents to the Intent Recognizer; however, the lack of an ability to remove intents after starting limits this capability in practice.
**Additional context**
-
Answers:
username_1: @username_0 Thanks for the request; we are working on this and targeting support for all intents from the JSON with the Speech SDK API (v2-based LUIS) solution in the 1.15.0 release. The release has an ETA around mid-January 2021.
username_2: Closed since there's a work item with ETA on our backlog. Please open a new issue if you need further support.
Status: Issue closed
|
skynav/ttt | 74742859 | Title: missing line when display align is after
Question:
username_0: When display alignment is 'after', e.g., with [this input](../blob/master/ttt-cap2tt/src/test/resources/com/skynav/cap2tt/app/test-013-placement-and-alignment-default.expected.xml), too much space is inserted, resulting in line(s) being clipped at the after edge.
Answers:
username_0: Fixed in e5b404c14e07aff17e28d19df4184e95203c8474.
Status: Issue closed
|
roeldewit/PSE | 118407439 | Title: CustomLocaleResolver unhandled exception when string "lang" not present in session
Question:
username_0: Line 21: String lang = request.getSession().getAttribute("lang").toString();
Maybe add a check, and give a default language if empty.
Answers:
username_1: If the string "lang" is not present in the request, a NullPointerException will be thrown and caught. In the catch block a default language "nl" will be set.
Status: Issue closed
|
pennersr/django-allauth | 194415203 | Title: next URL in Email Confirmation
Question:
username_0: So I had the following issue today with a project:
- User clicks on a link, that needs an account and email verification. I added a method to the custom user model to achieve that:
```python
def has_verified_email(self):
return self.emailaddress_set.filter(verified=True).exists()
```
- User gets redirected to the registration page (I combined login and signup page into one)
- User registers and receives a confirmation email
- If the user clicks on the confirmation link, he should be redirected to the original page from where he was redirected to the login/signup page (this works with login)
Thus, the ``?next=/some_url/`` should be added to the Email confirmation link, so that the user does not need to click on the same button again.
I solved the Issue in the following way:
```python
# settings.py
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_EMAIL_VERIFICATION = 'optional'
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
ACCOUNT_ADAPTER = 'bosg.users.adapters.AccountAdapter'
# adapters.AccountAdapter
class AccountAdapter(DefaultAccountAdapter):
def is_open_for_signup(self, request):
return getattr(settings, 'ACCOUNT_ALLOW_REGISTRATION', True)
def render_mail(self, template_prefix, email, context):
context['next_url'] = '?next=%s' % self.request.path # <-- Append path to email context
return super(AccountAdapter, self).render_mail(template_prefix, email, context)
def get_email_confirmation_redirect_url(self, request):
"""
The URL to return to after successful e-mail confirmation.
"""
if request.user.is_authenticated():
if app_settings.EMAIL_CONFIRMATION_AUTHENTICATED_REDIRECT_URL:
return \
app_settings.EMAIL_CONFIRMATION_AUTHENTICATED_REDIRECT_URL
elif request.GET.get('next'): # <-- Check if request has 'next' parameter
return request.GET.get('next') # <-- Return the next parameter instead
else:
return self.get_login_redirect_url(request)
else:
return app_settings.EMAIL_CONFIRMATION_ANONYMOUS_REDIRECT_URL
# templates/account/email/email_confirmation_message.txt
...
To confirm this is correct, go to {{ activate_url }}{{ next_url }}
...
```
So this works properly. If a user wants to access, let's say, ``/dashboard/``, but email verification is enabled for this URL, then he will have the URL ``example.com/dashboard/``, but will see the **Verify Your E-mail Address** page. ``/dashboard/`` will be added to the email ``context['next_url']``. If the user now clicks on the activation link, he will be automatically logged in and redirected to the original URL he tried to access without a verified email.
It would be amazing, if you can implement this generically. It would be nice, if the default adapter recognizes, if there is a ``?next`` parameter for the signup page or if a user tried to access a page that required email verification. Once that is recognized, it should take higher priority in redirection. If nothing was provided, then the adapter can use the regular logic for redirection.
Let me know what you think!
Answers:
username_1: I am also trying to implement this and I came to pretty much the equivalent code to yours, but in my case the link in the email is always `?next=/accounts/signup/`. For some reason, my adapter's `get_email_confirmation_redirect_url()` method is not being called at all.
As a side note, be careful with `ACCOUNT_CONFIRM_EMAIL_ON_GET = True` - this means anyone with the confirmation link can log in as the user who received the link, over and over again until you delete the email confirmation record from the database.
A better option is to change the confirmation page template to not have a confirmation button and instead do a `POST` via AJAX (simulating the user clicking the "confirm" button) and then redirecting to wherever the server responds.
username_1: Nevermind, I misunderstood the order in which the adapter methods were called. `get_email_confirmation_redirect_url()` isn't called until the user clicks the confirmation button on the confirmation page (or gets automatically confirmed just by visiting the page, depending on the setting I mentioned in my previous comment).
I was assuming this method was called to generate the URL that gets sent in the email, which, now that I think of it, doesn't even make much sense.
And an important difference in our use cases that I forgot to mention: I'm trying to achieve the redirection after the user confirms his email after sign up, which means that setting `context['next_url'] = '?next=%s' % self.request.path` will always yield `/accounts/signup` as the `next` url.
That's what you get for programming when you're sleepy.
The side note from my previous comment is still valid, though.
username_0: @username_1 Your sidenote is true, I read about it. But I was thinking: if someone has access to your email account to retrieve the link, then being able to log in to the portal I am programming is of little concern :P. Correct me if I am wrong.
username_0: Plus, I am hoping that the people who develop all-auth implement this feature soon, so I don't have to use a hacky solution. Till then, the above code works for that use case.
username_1: Well, I agree that if someone has access to your email then you have another problem to worry about. However, the one I mentioned is still a problem, because now the attacker has access to another account of yours, and depending on what that account grants access to, it can be a major headache.
In other words, the fact that someone was able to intercept the link from the user is no excuse for us programmers to make that user's life even harder by allowing access to his account on our application.
There's another subtle detail here: note that I never said an attacker needs to gain access to the user's email account in order to obtain the link. There are other ways to do that, including sniffing unencrypted traffic to your application, via social engineering, by taking a photo of the user's screen when they're looking at the account confirmation email... use your imagination and take your pick.
The assumption that if an attacker has intercepted the confirmation URL then he also has access to the user's email account, and thus the user has bigger concerns, is not always correct, and that makes it even less justifiable for us programmers to make the attacker's life easier.
I also hope something like this gets implemented but there's a problem with the approach we both took (I mentioned it previously): if the user is signing up, this code will always take the user to /accounts/signup. I ended up using session variables to identify where the user was going, which is also not perfect but at least works for my use case.
username_0: Yeah you're right. That's why I posted my solution here in the hope that the original developers of all-auth will implement such a feature in a solid way. I hope they look at this.
Can you post your solution? Maybe I can switch to that instead, if it is safer.
username_1: I'm afraid I no longer have access to the code, it was a private repository of a project I was brought in to help for a short period and I'm no longer associated with it.
username_2: ```
from allauth.account.adapter import DefaultAccountAdapter
class CustomAccountAdapter(DefaultAccountAdapter):
def get_email_confirmation_url(self, request, emailconfirmation):
next_url = request.POST.get('next')
email_conf_url = super(CustomAccountAdapter, self).get_email_confirmation_url(request, emailconfirmation)
if next_url:
return '%s?next=%s' % (email_conf_url, next_url)
else:
return email_conf_url
def get_email_confirmation_redirect_url(self, request):
next_url = request.GET.get('next')
if next_url:
return next_url
else:
return super(CustomAccountAdapter, self).get_email_confirmation_redirect_url(request)
```
username_3: Just need to add the new adapter to the settings:
`ACCOUNT_ADAPTER = 'my.path.to.CustomAccountAdapter'`
username_4: To avoid using `ACCOUNT_CONFIRM_EMAIL_ON_GET` because of the problems above, it is also possible to store the next URL in the session. Unfortunately this works only if the user uses the same browser right after signing up. Similarly, `ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION` needs the user to use the same browser just after signing up.
For example the code could be:
```python
class MyAccountAdapter(DefaultAccountAdapter):
def get_email_confirmation_url(self, request, emailconfirmation):
next_url = request.POST.get('next')
request.session['next_on_email_confirmation'] = next_url
email_conf_url = super(MyAccountAdapter, self).get_email_confirmation_url(
request, emailconfirmation)
return email_conf_url
def get_email_confirmation_redirect_url(self, request):
if 'next_on_email_confirmation' in request.session:
return request.session['next_on_email_confirmation']
else:
return super(MyAccountAdapter, self).get_email_confirmation_redirect_url(request)
``` |
itinance/react-native-fs | 830047116 | Title: EISDIR: illegal operation on a directory, using with React Native for Windows
Question:
username_0: I made a React Native for Windows app for testing RNFS.
This is my `App.ts`:
```
import React, { Component } from 'react';
import { Text, View } from 'react-native';
import RNFS from 'react-native-fs';
const str = RNFS.MainBundlePath;
class App extends Component
{
render()
{
return React.createElement(View, null, React.createElement(Text, null, str));
}
}
export default App;
```
But when I ran the `react-native run-windows` command, this error happened:
```
Error: EISDIR: illegal operation on a directory, read
at Object.readSync (fs.js:592:3)
at tryReadSync (fs.js:366:20)
at Object.readFileSync (fs.js:403:19)
at UnableToResolveError.buildCodeFrameMessage (C:\Users\gpffl\Desktop\foo\node_modules\metro\src\node-haste\DependencyGraph\ModuleResolution.js:304:17)
at new UnableToResolveError (C:\Users\gpffl\Desktop\foo\node_modules\metro\src\node-haste\DependencyGraph\ModuleResolution.js:290:35)
at ModuleResolver.resolveDependency (C:\Users\gpffl\Desktop\foo\node_modules\metro\src\node-haste\DependencyGraph\ModuleResolution.js:191:15)
at DependencyGraph.resolveDependency (C:\Users\gpffl\Desktop\foo\node_modules\metro\src\node-haste\DependencyGraph.js:353:43)
at C:\Users\gpffl\Desktop\foo\node_modules\metro\src\lib\transformHelpers.js:271:42
at C:\Users\gpffl\Desktop\foo\node_modules\metro\src\Server.js:1097:37
at Generator.next (<anonymous>)
```
And this is my system spec:
```
System:
OS: Windows 10 10.0.19041
CPU: (4) x64 Intel(R) Core(TM) i5-4200M CPU @ 2.50GHz
Memory: 534.39 MB / 3.92 GB
Binaries:
Node: 14.15.4 - C:\Program Files\nodejs\node.EXE
Yarn: 1.22.10 - ~\AppData\Roaming\npm\yarn.CMD
npm: 6.14.10 - C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK: Not Found
Windows SDK:
AllowDevelopmentWithoutDevLicense: Enabled
AllowAllTrustedApps: Enabled
Versions: 10.0.18362.0
IDEs:
Android Studio: Not Found
Visual Studio: 16.8.30907.101 (Visual Studio Community 2019)
JetBrains Webstorm: 2020.3.1
Languages:
Java: Not Found
Python: 2.7.18
npmPackages:
@react-native-community/cli: Not Found
react: 16.13.1 => 16.13.1
react-native: 0.63.4 => 0.63.4
react-native-windows: ^0.63.0-0 => 0.63.24
npmGlobalPackages:
*react-native*: Not Found
```
Why does this happen?
Answers:
username_1: Had the same issue while testing something in react-native 0.63.2 on macOS
username_2: I have the same issue on macOS but I don't use `react-native-fs`. This looks more like a react native or metro issue.
username_1: @username_2 it's a React Native issue I think. The error is linked to the `metro` dependencies inside node_modules.
username_3: Just so you're aware, the `EISDIR` error is likely to be a misreported error due to [a bug](https://github.com/facebook/metro/pull/567/files) in metro that has been fixed in metro 0.65. The underlying error is likely to be an import issue (similar to [what I found here]) but that needs to be verified. I faced that problem with a bunch of other libraries.
You can try to manually patch the metro library in your `node_modules` if you want to see what is the actual error.
username_4: Definitely can be a misreported error. Just had this when I had a typo in a style property name.
username_5: how to fix that?
username_6: Adding some logs into metro's Server.js helped me get to the bottom of why it was throwing this error, which ended up being a debugger-ui source map issue. In fact, when I looked at the Chrome console logs I was seeing specific file warnings every time the EISDIR showed in the expo logs
username_7: Same for me: when I enable "Debug" and http://localhost:8081/debugger-ui/ opens, the error shows, but my app still works
username_8: facing the same issue on Mac
username_9: To unmask the [misreported issue](https://github.com/itinance/react-native-fs/issues/991#issuecomment-811053625) by using Metro `0.65.0` or later, when using yarn, add this to `package.json`:
```json
"resolutions": {
"metro": "^0.65.0",
"metro-config": "^0.65.0",
"metro-core": "^0.65.0",
"metro-react-native-babel-transformer": "^0.65.0",
"metro-resolver": "^0.65.0",
"metro-runtime": "^0.65.0"
},
```
That will override the Metro version installed by React Native.
username_10: same here every time I run debug mode in react native. does anyone here have a fix for this?
username_9: Add `patch-package` to your project with `yarn add patch-package` and the `"postinstall": "patch-package"` script (see its installation instructions).
Then create a patch file `patches/metro+0.59.0.patch` with these contents:
```
diff --git a/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js b/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js
index 92c8a7e..ebab5d0 100644
--- a/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js
+++ b/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js
@@ -303,7 +303,7 @@ class UnableToResolveError extends Error {
try {
file = fs.readFileSync(this.originModulePath, "utf8");
} catch (error) {
- if (error.code === "ENOENT") {
+ if (error.code === "ENOENT" || error.code === 'EISDIR') {
// We're probably dealing with a virtualised file system where
// `this.originModulePath` doesn't actually exist on disk.
// We can't show a code frame, but there's no need to let this I/O
```
Now run `yarn` again or just `yarn postinstall` and Metro should be patched to unmask the EISDIR error and reveal the actual problem with your setup.
Note: If you're using RN 0.64 I believe you will be on Metro 0.64 and not 0.59. In that case you will need to create your own patch file for the new version by following the `patch-package` instructions.
username_11: In my case it was Reanimated (with React Navigation 5); removing this code from MainApplication.java makes debugging work fine:
@Override
protected JSIModulePackage getJSIModulePackage() {
return new ReanimatedJSIModulePackage(); // <- add
}
username_12: Same issue on RN `0.64.1`, any solution?
username_9: @username_12 RN 0.64 uses Metro 0.64 and the fix is in Metro 0.65, so you still need to patch Metro to **unmask the actual issue** you are having. See my comment about creating a patch with `patch-package` above.
username_13: how to fix that?
username_14: If you are using Metro 0.64, this is the patch. Make a file `patches/metro+0.64.0.patch` with this content:
```
diff --git a/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js b/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js
index 5f32fc5..2b80fda 100644
--- a/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js
+++ b/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js
@@ -346,7 +346,7 @@ class UnableToResolveError extends Error {
try {
file = fs.readFileSync(this.originModulePath, "utf8");
} catch (error) {
- if (error.code === "ENOENT") {
+ if (error.code === "ENOENT" || error.code === 'EISDIR') {
// We're probably dealing with a virtualised file system where
// `this.originModulePath` doesn't actually exist on disk.
// We can't show a code frame, but there's no need to let this I/O
```
and follow the instructions of @username_9
username_14: look at my comment above
username_15: it works for me, https://stackoverflow.com/questions/66771543/metro-bundler-error-eisdir-illegal-operation-on-a-directory-read
username_16: Is it possible to upgrade the Metro version to 0.66.0 while RN remains at 0.64?
username_17: I moved index.js to the src folder, which broke it. Moving it back, per the Stack Overflow link @username_15 provided, fixed the issue.
username_18: this error came up when i forgot to add a default case in my redux reducer
username_19: 入口文件不对
js项目的package.json 里面配置的main
"main":"src/index.js",
在java的入口文件修改未”src/index“,默认是”index"
@Override
protected String getJSMainModuleName() {
return "src/index"; <---add this
}
username_20: Try using the Windows app of React Native Debugger. I was getting the error with the Chrome extension of React Native Debugger, but after installing the latest exe the error got resolved.
username_21: I was having this problem on macOS, and simply running `$ yarn start --reset-cache` solved it
username_22: If you modify `sourceExts` in **metro.config.js**, you should specify **all** extensions.
Wrong: `sourceExts: ['tsx']`. Correct:
```js
module.exports = {
resolver: {
sourceExts: ['js', 'jsx', 'ts', 'tsx'],
},
transformer: {
getTransformOptions: async () => ({
transform: {
experimentalImportSupport: false,
inlineRequires: true,
},
}),
},
};
```
username_23: Thanks, this was the only solution that worked for me having tried all the other codemods in here! 🥳
username_24: Stopping remote debugging and reloading the iOS simulator fixes the issue for me
username_25: I got around the frozen splash screen by:
Device -> Shake
"Stop Debugging"
Reload
Then re-enable debugging
username_26: I am using the Expo Go application and I got the same error unexpectedly. To fix this:
1. Shake your device while the application is running in Expo Go.
2. Stop remote debugging.
3. Refresh the application.
It fixed my problem.
username_27: I had the same issue and this helped, but when I ran `expo start` and connected a phone, this error came out:
```
Error: Unable to resolve module ./debugger-ui/debuggerWorker.aca173c4 from /home/rainy/native/hakjum/.:
None of these files exist:
* debugger-ui/debuggerWorker.aca173c4(.native|.native.ts|.ts|.native.tsx|.tsx|.native.js|.js|.native.jsx|.jsx|.native.json|.json)
* debugger-ui/debuggerWorker.aca173c4/index(.native|.native.ts|.ts|.native.tsx|.tsx|.native.js|.js|.native.jsx|.jsx|.native.json|.json)
at ModuleResolver.resolveDependency (/home/rainy/native/hakjum/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js:211:15)
at DependencyGraph.resolveDependency (/home/rainy/native/hakjum/node_modules/metro/src/node-haste/DependencyGraph.js:413:43)
at /home/rainy/native/hakjum/node_modules/metro/src/lib/transformHelpers.js:317:42
at /home/rainy/native/hakjum/node_modules/metro/src/Server.js:1471:14
at Generator.next (<anonymous>)
at asyncGeneratorStep (/home/rainy/native/hakjum/node_modules/metro/src/Server.js:146:24)
at _next (/home/rainy/native/hakjum/node_modules/metro/src/Server.js:168:9)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
```
username_28: for reanimated2 users: this is expected, see https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/installation#installing-the-package
username_29: I'm having the exact same problem now :( Have you been able to solve this? It's honestly super exhausting to work with expo now
username_27: @username_29 Use `expo start --web`; this allows you to open your app on the web and debug your code. You can also use `expo build:android` or `ios` to build your app. If you really want to use Expo Go on your phone, reinstall expo and make the project again from the beginning until it works out. I've actually made four different projects and copied the code from the original project in order to clean the code; somehow it is working.
(Though the EISDIR thing still show up, the app does work.)
username_0: Sorry for the late response. I was busy with other things for a while, so I wasn't able to check this issue.
I will test on React Native 0.67. Thanks, everyone!
username_30: I'm on Expo SDK 42 and I'm running into this. Any fixes?
username_31: After following these instructions, I was able to resolve this error, but a new error popped up, which is specified below.
I somehow managed to solve this error too, so I'm leaving my solution for anyone who bumps into this one.
I am using Expo 44.0
```
None of these files exist:
* index(.native|.ios..ts|.native..ts|..ts|.ios..tsx|.native..tsx|..tsx|.ios..js|.native..js|..js|.ios..jsx|.native..jsx|..jsx|.ios..json|.native..json|..json|.ios.json|.native.json|.json)
* index/index(.native|.ios..ts|.native..ts|..ts|.ios..tsx|.native..tsx|..tsx|.ios..js|.native..js|..js|.ios..jsx|.native..jsx|..jsx|.ios..json|.native..json|..json|.ios.json|.native.json|.json)
at ModuleResolver.resolveDependency (/Users/antares/Documents/Innovaid/dentlink-mobile/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js:211:15)
at DependencyGraph.resolveDependency (/Users/antares/Documents/Innovaid/dentlink-mobile/node_modules/metro/src/node-haste/DependencyGraph.js:413:43)
at /Users/antares/Documents/Innovaid/dentlink-mobile/node_modules/metro/src/lib/transformHelpers.js:317:42
at /Users/antares/Documents/Innovaid/dentlink-mobile/node_modules/metro/src/Server.js:1471:14
at Generator.next (<anonymous>)
at asyncGeneratorStep (/Users/antares/Documents/Innovaid/dentlink-mobile/node_modules/metro/src/Server.js:146:24)
at _next (/Users/antares/Documents/Innovaid/dentlink-mobile/node_modules/metro/src/Server.js:168:9)
```
my package.json was like this
```json
{
"version": "1.0.0",
"main": "./App.tsx",
...
}
```
and my App.tsx was like this.
```tsx
import { registerRootComponent } from "expo";
export default function App() {
return (
......
);
}
registerRootComponent(App);
```
As per the error message, Metro was looking for index.*, so I changed my app's entry point to index.js
on package.json
```json
{
"main":"./index.js"
}
```
and made index.js like this
```js
import { registerRootComponent } from "expo";
import App from "./src/App";
registerRootComponent(App);
```
and finally, per the error message, the .js extension was not supported by the metro config that expo/metro-config provides,
so I added some extensions on metro.config.js
```js
// Learn more https://docs.expo.io/guides/customizing-metro
const { getDefaultConfig } = require("expo/metro-config");
const defaultConfig = getDefaultConfig(__dirname);
// concat() returns a new array, so assign it back; Metro's sourceExts entries have no leading dot
defaultConfig.resolver.sourceExts = defaultConfig.resolver.sourceExts.concat(["ts", "tsx", "js", "jsx"]);
module.exports = defaultConfig;
```
and I was able to run my application.
username_32: I got the same error, but all went well after I ran `yarn global remove wml`
```
yarn global remove wml
```
that saved my life |
mwilliamson/jq.py | 655861659 | Title: Regex in compile doesn't escape "."
Question:
username_0: I am trying to use a regex to search for IPs, and a ValueError/compile error is thrown when executing the code below:
```python
import jq
re1 = '127\\.'
match = "127.0.0.1"
query = jq.compile(re1).input(match).all()
print(query)
```
Traceback
```
Traceback (most recent call last):
File "/Users/project/path/components/troubleshooting/test.py", line 7, in <module>
query = jq.compile(re1).input(match).all()
File "jq.pyx", line 56, in jq.compile
File "jq.pyx", line 160, in jq._Program.__cinit__
File "jq.pyx", line 131, in jq._JqStatePool.__cinit__
File "jq.pyx", line 84, in jq._compile
File "jq.pyx", line 72, in jq._compile
File "jq.pyx", line 78, in jq._compile
ValueError: jq: error: syntax error, unexpected INVALID_CHARACTER, expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
127\.
jq: 1 compile error
```
Answers:
username_1: Could you give an example of how you'd use jq (as in the command line tool) to do what you're looking for? It seems like the program you've written would just return the string `127\.` If you're trying to filter, I'd imagine you want to do something like:
```
jq.compile(r'select(test("127\\."))')
```
username_0: So this would be the command line
`jq '.asset_names[] | test("127\\.|10\\.|172\\.1[6-9]\\.|172\\.2[0-9]\\.|172\\.3[0-1]\\.|192\\.168\\.")' test`
and the test file looks like this:
{
"asset_names": [
"127.0.0.1",
"127.0.0.1",
"127.0.0.1",
"127.0.0.1",
"127.0.0.1"
]
}
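(For reference, a hedged sketch of how that command-line filter maps onto jq.py; the original compile error came from passing the bare regex as the whole jq program, whereas wrapping it in `test(...)` as suggested above compiles. The sample data here is made up.)
```python
import jq

data = {"asset_names": ["127.0.0.1", "10.0.0.5", "8.8.8.8"]}
# The double backslash inside the jq string escapes the dot for
# jq's regex engine, exactly as in the CLI example above.
program = jq.compile(r'.asset_names[] | test("127\\.|10\\.")')
print(program.input(data).all())  # [True, True, False]
```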
username_0: I think this is the fix. I'm testing it on a few more things now, but so far it's working
username_0: That was it. Thank you!
Status: Issue closed
|
translation-cards/translation-cards | 144637114 | Title: Button redesign in a deck
Question:
username_0: **As an** NGO worker
**I want** to be able to easily locate the create card and share deck buttons in a deck
**So that** I can perform these actions more easily
**Context**
To make the design consistent and the application easy to navigate, we are changing the position and design of the Create card and Share deck buttons.
**In Scope**
* Replacing the card creation fab with a button for card creation
* Adding a button for sharing the deck if there are cards in the deck
**Out of scope**
* Adding overflow menu in the upper right corner
**Dependencies**
* Issue #95: Empty Deck State
**Acceptance Criteria**
**1. Create card button**
When I am in the deck
Then I see a button for card creation
**2. No create card fab**
When I am in a deck
Then I do not see a card creation fab
**3. Share deck button**
Given I have one or more cards in my deck
When I am in the deck
Then I see a button for deck sharing
**Mockups**
<img width="220" alt="screen shot 2016-03-30 at 12 05 19 am" src="https://cloud.githubusercontent.com/assets/8908668/14132426/71d568f8-f60b-11e5-85b8-aed82056d897.png">
Answers:
username_0: Blocker: Waiting for UX specs from Olly
username_1: This card is no longer applicable because we are moving share functionality to My Decks screen. Closing Issue.
Status: Issue closed
username_1: **As an** NGO worker
**I want** to be able to easily locate the create card and share deck buttons in a deck
**So that** I can perform these actions more easily
**Context**
To make the design consistent and the application easy to navigate, we are changing the position and design of the Create card and Share deck buttons.
**In Scope**
* Adding a button for sharing the deck if there are cards in the deck
**Out of scope**
* Adding overflow menu in the upper right corner
* Replacing the card creation fab with a button for card creation
**Dependencies**
* Issue #95: Empty Deck State (Issue #95 should be played first.)
**Acceptance Criteria**
- [ ] **1. Share deck button**
Given I have one or more cards in my deck
When I am in the deck
Then I see a button for deck sharing
**Mockups**
<img width="220" alt="screen shot 2016-03-30 at 12 05 19 am" src="https://cloud.githubusercontent.com/assets/8908668/14132426/71d568f8-f60b-11e5-85b8-aed82056d897.png">
Status: Issue closed
|
jmfernandes/robin_stocks | 1060316623 | Title: python written for MAC, running on Windows
Question:
username_0: I'm trying to log in to Robinhood on a Windows PC, using code written for Mac. Is there anything I need to change to make it run on a PC? I'm having trouble logging in -- it's not recognizing the 2FA number I'm entering that is being issued by Google Authenticator.
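(A hedged, untested sketch of robin_stocks' documented TOTP login flow, which behaves the same on Windows and macOS; the base32 secret is a placeholder for the one Robinhood shows when you enable app-based 2FA, and the module path assumes robin_stocks 2.x. Codes retyped from Google Authenticator can expire before the login request completes, so generating the current code programmatically tends to be more reliable.)
```python
import pyotp
import robin_stocks.robinhood as rh

# Placeholder base32 secret from enabling app-based 2FA on Robinhood.
totp = pyotp.TOTP("JBSWY3DPEHPK3PXP").now()
login = rh.login("you@example.com", "password", mfa_code=totp)
```
|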
sean-codes/atom-windows-titlebar | 253583273 | Title: "Alt" key doesn't always work
Question:
username_0: On my Windows 10 Pro laptop, if I start Atom with your theme, pressing "Alt" key shows the default menu, but if I switch window using the "alt + tab" keys, and then switch back to Atom with the same method, "Alt" stops working and the menu doesn't show up anymore. Need to restart Atom to get it work again.
Can this be solved?
Thank you in advance :)
Answers:
username_1: Same happens for me (but not always, just sometimes) Win10 Home
username_0: Don't ask me how, but I just discovered that, when the menu does not open with "alt", if you press "alt + win + t" (I guess it's the same with "alt + cmd + t") it does nothing but it makes the "alt" key work again.
Yes, I'm messing up with Atom and keyboard shortcuts ahah
username_1: @username_0 That doesn't work for me
username_0: @username_1 Let's wait for the dev to answer, and see if he has a better explanation 😅
username_2: @username_0 @username_1 Fix for this is up! I was using a silly trick to get that menu to show up. I moved it to a proper atom keybinding! Please let me know if you run into any issues or have any questions!
Status: Issue closed
username_0: Updated and it works, thank you 😄
But now there's a minor problem: when hitting "alt + tab" to change window, the menu appears as soon as I press "alt" (even before pressing "tab")...
username_2: @username_0 Okay, that was sort of why I was using the trick. I'm going to take a second look!
Status: Issue closed
username_2: @username_0 Looks like there is a new way for binds, so we can add a `^` for the alt key; this should fix us up! Please let me know if you are still experiencing it after the update!
username_0: Yippie!
Updated, restarted & perfectly working!
Thank you sir for really fast fix!
username_0: Last update: using AltGr (in any key combination, like to write a square bracket) triggers the menu...
username_2: @username_0 Darn we are going rounds with this. New update might be the winner :]
username_2: @username_0 By the way I appreciate you sticking with me and help with testing!
username_0: @username_2 Updated to 0.11.0 (lmao, just yesterday it was 0.9.0) and it looks like it's working 😄
Anyway, I'm happy you consider this testing, but I'm actually just doing some coding stuff these days and I find that menu bar popping up when I don't need it and not showing up when I need it quite annoying 🤣
Glad you like my attempt to help 😄
Status: Issue closed
|
DoctorMcKay/node-steam-user | 319276018 | Title: Token dumper no longer working
Question:
username_0: [tokendumper.js](https://github.com/username_1/node-steam-user/blob/master/examples/tokendumper.js) no longer gives any results.
The method it uses doesn't work anymore. This is how SteamDB does it nowadays:
```cs
List<uint> list = new List<uint>();
for (uint num = 1u; num <= 1000000u; num += 1u)
{
list.Add(num);
}
Console.WriteLine("Requesting tokens, please wait...");
Program.steamApps.PICSGetAccessTokens(list, Enumerable.Empty<uint>());
return;
```
<issue_closed>
Status: Issue closed |
SAP/fundamental-styles | 706377830 | Title: List with icons
Question:
username_0: When certain items have icons and others do not, all of the list items still align right (as though they all had icons), like this:


Currently, all the examples show that the items without icons shift back left

Answers:
username_1: Refactored in #1677
Status: Issue closed
|
jishi/node-sonos-http-api | 66921847 | Title: How to send status update to other devices?
Question:
username_0: Hi!
First I want to tell you that your work is really impressive! Great job there!!
I have more of a question than an issue. Could you point me to how to add notifications to other devices? What I'm trying to do is to send an HTTP request to update the status in real time.
For example, I have an amp that I can control through a box that sends IR codes and that I can trigger through an HTTP request. My goal would be that if I start/stop the Sonos player that is hooked to that amp, an HTTP request is sent to turn the amp on/off.
How could I add this to your existing code? I've tried to look into it, but I don't know enough about node to figure it out.
Many thanks for any help!
David
Answers:
username_1: Hi, thank you for your kind words. It's nice to see that people find it useful.
Your problem isn't easily fixed without some rewrites, and you would need to modify the application a bit to achieve what you want.
I would probably isolate it as a separate application, using the node-sonos-discovery module directly. It will trigger a change event when a player changes state (from play to pause, and vice versa), which could be used to turn your amp on and off (with a delay, preferably). However, you need to take grouping etc. into account, since it will only be the coordinator that sends the event, and you need to know which coordinator is in charge of the amp you are controlling. It's doable, but requires some logic that might not be as straightforward as one would hope.
What is the level of your programming skills?
username_0: Hi!
Thanks for responding so quickly!
Yeah, I thought about the grouping issue...
My programming skills are not too shabby but I just discovered node and I'm not quite familiar with it yet. And I haven't been able to figure out yet how to listen for an event. I've started to look into node for this reason, but it's still unclear for me. This real time thing is bit confusing for me...
What I have done so far is regularly monitor the state of the player, and when I see a change I trigger the amp. But I don't like it so much, since I'm constantly polling the state waiting for a change, and I would rather listen for the event.
Thanks again!
<NAME>
username_1: I have been planning on adding a configurable URL-hook to the http api, which would partly solve your issue (but you would still need to build the logic somewhere). I can make an outline of an a node.js application and give you the implementation positions where you would add your HTTP requests, if that would help.
username_0: I'm working on the logic right now: getting the zone my player belongs to and the coordinator of this zone. This works so far. If I send the name of my player, I get its coordinator in return.
So now I need to listen for the event, see if the event comes from that coordinator, and if so trigger the amp.
If you can make an outline of a node.js application, that would be more than great! Thanks!!
<NAME>
username_1: Hi, I created a simple stub here:
https://github.com/username_1/sonos-event-stub
Fork it and see if ti fits your need!
Status: Issue closed
username_0: Tu!
Thanks so much.
I will check that!
<NAME>
username_0: Hi!
Works perfectly.
Thanks a lot for the help!!
<NAME> |
cvxgrp/scs | 280677796 | Title: Example
Question:
username_0: Hi,
Are there some examples of using SCS in C? I am probably looking in the wrong place; I wasn't finding much except the one/two-liners on the GitHub main page.
Thank you.
Answers:
username_1: The tests have some examples, e.g.
https://github.com/cvxgrp/scs/blob/master/test/problems/rob_gauss_cov_est.h
and
https://github.com/cvxgrp/scs/blob/master/test/problems/small_lp.h
Status: Issue closed
|
dcuddeback/libusb-sys | 337546052 | Title: Support hotplug feature
Question:
username_0: The hotplug feature has been supported since v1.0.16 of libusb.
Answers:
username_1: I just opened #11 which should add hotplug support. I will add it to the -rs crate as well to give nicer support.
username_2: Unfortunately the project looks abandoned, if PRs are not reviewed and considered for merging... :(
username_1: I think that's correct, although for what it's worth the code I added in #11 does not appear to work for me, and I was never able to diagnose why (it might just be my peripheral?).
I would be happy to maintain this crate if @username_3 would prefer, although I am not an expert in libusb so it would be more of a steward than a technical leader.
username_2: I have a similar problem. A couple of methods are missing, and they use callbacks, which are tricky, I suppose.
I'm trying to figure out how to use them - https://stackoverflow.com/questions/41081240/idiomatic-callbacks-in-rust
Probably you can try to write a simple example and put it into a PR, so maybe someone else could check your code as well.
username_1: So roughly speaking that's what #11 is: the smallest possible bridge to allow registering Rust callbacks (but it would want some nicer high-level code in the `libusb` crate). It has been a while since I thought about this, but from memory invoking the code in #11 didn't work; neither did a native C implementation of the same thing, though, so I wrote it off to either my system or my peripheral having issues. From memory there are some constraints with hotplug in libusb.
username_1: I can try to find if I still have that prototype somewhere but I'm not sure the chances are good.
Status: Issue closed
username_3: Fixed by #11. |
tweag/asterius | 507843005 | Title: `stack build` fails in Docker container with "no such protocol name: tcp"
Question:
username_0: This is similar to https://github.com/snoyberg/http-client/issues/292.
```
$ docker run -it -v ~/mirror:/mirror terrorjack/asterius
root@fbd6f87dfab8:~/asterius# stack build
Exception while reading snapshot from lts-14.8 (8af5eb80734f02621d37e82cc0cde614af2ddc9c320610acb0b1b6d9ac162930,524789):
HttpExceptionRequest Request {
host = "raw.githubusercontent.com"
port = 443
secure = True
requestHeaders = [("User-Agent","Haskell pantry package")]
path = "/commercialhaskell/stackage-snapshots/master/lts/14/8.yaml"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
(ConnectionFailure Network.BSD.getProtocolByName: does not exist (no such protocol name: tcp))
```
This works:
```
root@think5:~/asterius# apt-get update && apt-get install netbase
[...]
root@think5:~/asterius# stack build
[...]
```
My suggestion would be to install `netbase` in `Dockerfile` (and leave it installed).
Answers:
username_1: Thanks for reporting. I think the real problem is lack of clarification of what the "docker image" is about; we ship it as a way of running prebuilt asterius, not as a dev environment for the asterius project itself. But still it may make sense to do a "dev environment" image which doesn't uninstall anything after booting; if you find such an image useful, it'd be nice to open a separate ticket for this.
Status: Issue closed
|
snowplow-referer-parser/nodejs-referer-parser | 493616394 | Title: js-yaml dependency vulnerability
Question:
username_0: There is a vulnerability in the used version of `js-yaml`
https://www.npmjs.com/advisories/813
Please upgrade to fixed version.
Thanks.
Answers:
username_1: solved by:
https://github.com/snowplow-referer-parser/nodejs-referer-parser/pull/8 |
EddyVerbruggen/nativescript-printer | 334766319 | Title: starting trouble
Question:
username_0: i require the plugin in js. then i wrote var printer = new Printer()
it says Printer is not a constructor....
solve this error please.............
--------------------------------------------------------------------------------------------------------------
TypeError: Printer is not a constructor
File: "file:///data/data/org.nativescript.preview/files/app/views/shared/view-models/sub-product-view-model.js, line: 116, column: 18
StackTrace:
Frame: function:'SubProductListViewModel.viewModel.camera', file:'file:///data/data/org.nativescript.preview/files/app/views/shared/view-models/sub-product-view-model.js', line: 116, column: 19
Frame: function:'exports.camera', file:'file:///data/data/org.nativescript.preview/files/app/views/itemDetails/itemDetails.js', line: 30, column: 16
Frame: function:'Observable.notify', file:'file:///data/data/org.nativescript.preview/files/app/tns_modules/tns-core-modules/data/observable/observable.js', line: 110, column: 23
Frame: function:'Observable._emit', file:'file:///data/data/org.nativescript.preview/files/app/tns_modules/tns-core-modules/data/observable/observable.js', line: 127, column: 18
Frame: function:'ClickListenerImpl.onClick', file:'file:///data/data/org.nativescript.preview/files/app/tns_modules/tns-core-modules/ui/button/button.js', line: 26, column: 23
at com.tns.Runtime.callJSMethodNative(Native Method)
at com.tns.Runtime.dispatchCallJSMethodNative(Runtime.java:1101)
at com.tns.Runtime.callJSMethodImpl(Runtime.java:983)
at com.tns.Runtime.callJSMethod(Runtime.java:970)
at com.tns.Runtime.callJSMethod(Runtime.java:954)
at com.tns.Runtime.callJSMethod(Runtime.java:946)
at com.tns.gen.java.lang.Object_button_19_32_ClickListenerImpl.onClick(Object_button_19_32_ClickListenerImpl.java:17)
at android.view.View.performClick(View.java:5215)
at android.view.View$PerformClick.run(View.java:21193)
at android.os.Handler.handleCallback(Handler.java:742)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:157)
at android.app.ActivityThread.main(ActivityThread.java:5571)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:745)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:635) |
equalsraf/neovim-qt | 998743258 | Title: nvim-lspconfig not working with neovim-qt
Question:
username_0: So nvim-lspconfig doesn't work with neovim-qt and it works just fine in terminal neovim. This can be easily checked by running :LspInfo in neovim and neovim-qt. In terminal neovim it doesn't error and in neovim-qt it errors with:
E5108: Error executing lua . . .nvim/plugged/nvim-lspconfig/lua/lspconfig/ui/windows.lua:106: Invalid key 'border'
This also causes autocomplete with nvim-cmp and built-in lsp to not work. I tried this with both neovim and neovim-qt fresh from git and it doesn't change anything.
Answers:
username_1: @username_0
I cannot reproduce this issue (Linux or Windows). Since this is a plugin, could there be a configuration issue or conflict?
It would be helpful if you can provide a `minimal.vim` and set of instructions that reproduces the issue.
You can load `mininal.vim` with the command: `nvim-qt -- -u minimal.vim`.
username_0: Yeah this is a weird one. I tried making a minimal config to reproduce this, but then I can't reproduce it. I will start looking into my normal init.lua to see what causes this.
username_1: This sounds like a configuration issue...
One long-shot idea:
Try running `nvim-qt -- somefile.txt` from your terminal. Our desktop shortcut is a little weird and opens files by passing them as arguments to `nvim`. I don't see how this would impact the scenario, but it is a possible difference between the cases.
Status: Issue closed
|
tomjaguarpaw/haskell-opaleye | 88403946 | Title: Remove all/most of the Internal modules
Question:
username_0: They confuse things, bloat the interface, and should never be used anyway. Take feature requests for the ones that are in use, or Google for the module names.
Answers:
username_1: I'm not sure what you mean. I want every single implementation detail exposed in case it is useful to someone. I've lost count of the number of times I've been bitten by packages that didn't expose some internal I needed access to.
username_0: As a user, having 1000's of functions I "probably won't need but might" is a real pain. It massively increases the API, adding a lot of "chaff" for exceptionally little "wheat". That said, if all the good stuff is in a single module, then I can find it easily. Even with that, I can't help but feel exposed modules (even Internal ones) with no module header are probably a bad idea.
username_1: I can see the benefit of what you suggest. One fairly left-field solution to kill two birds would be to have an opaleye-internal package, and then an opaleye package which exists purely to reexport the public API into the `Opaleye` namespace.
username_0: That works for me - although seems more effort for you. If there was only one "public" module for Opaleye it becomes a much less severe problem.
username_1: Also, do we agree that at root this is a documentation tooling issue and the suggested fixes are just workarounds?
username_0: I think there is some value to restricting the API - I certainly never expose internal modules. But I can see that you want to, and once #42 is fixed, I think we could close this "who cares" - it has no real impact on users.
username_0: BTW, never trust users not to abuse your internals if you put them on display... I can tell you stories of people who used to work in teams you now work in who totally abused internal details, which took months to get rid of before essential changes could be made to the internal details they abused.
username_2: Oh my, please don't *remove* these modules! I've written multiple new interfaces on top of `opaleye` that would be impossible without the `Internal` modules! I would however be ok if it got re-namespaced to be clearly different in the Haddocks. `Z.Opaleye.Internal` is a hack but at least gets it all out of the way ;)
username_1: @username_2 I'd be interested to know which internal bits you have been using. My aim is that Opaleye should be perfectly extensible without need to access anything in `Internal`. If it's not I'd like to fix that.
username_2: Tom, it might be easiest if you just check out the rel8 repository and see the modules there for imports. Not a huge amount of code
username_1: Closing this as stale. I don't think I'm ever going to remove the `Internal` modules.
Status: Issue closed
|
zachholcomb/monster_shop_2001 | 595930740 | Title: User Story 49, Merchant sees an order show page
Question:
username_0: As a merchant employee
When I visit an order show page from my dashboard
I see the recipients name and address that was used to create this order
I only see the items in the order that are being purchased from my merchant
I do not see any items in the order being purchased from other merchants
For each item, I see the following information:
- the name of the item, which is a link to my item's show page
- an image of the item
- my price for the item
- the quantity the user wants to purchase<issue_closed>
Status: Issue closed |
cilium/cilium | 631355160 | Title: When Cilium handles K8s Nodeport services, IPTables sees packets that trigger invalid connection tracking state checks
Question:
username_0: When Cilium is configured with
tunnel: disabled
enable-host-reachable-services: "true"
enable-external-ips: "true"
enable-node-port: "true"
AND
an IPTables rule of the form `-m conntrack --ctstate INVALID -j DROP`
exists, then a tcp handshake can not be completed for traffic from an external client addressed to a nodeport service delivered to a node hosting a backend pod.
We believe the sequence of operations that happens is something like this -
1. SYN from client {client_ip, nodeport_service_ip}
2. eth0 bpf hook translates this to {client_ip, pod_ip}, delivers to linux stack for routing since the pod is hosted locally
3. nf_conntrack entry for {client_ip, pod_ip} with initial state UNREPLIED
4. pod sends SYN_ACK with {pod_ip, client_ip}
5. veth bpf hook sees that this is a reply, translates to {nodeport_service_ip, client_ip}
6. nf_conntrack does *not* consider this part of the same flow, so does not transition entry out of UNREPLIED, but instead creates a new entry.
7. SYN_ACK packet makes its way back to the client
8. client sends ACK packet with {client_ip, nodeport_service_ip}
9. eth0 bpf hook translates this to {client_ip, pod_ip} by looking up BPF connection table
10. IPTables sees an ACK packet {client_ip, pod_ip} from client for a connection entry in state UNREPLIED, drops it.
At step 5) the un-dnat is happening at a point in the packet state machine that is different from where the dnat happened.
Is our understanding correct? Is this a known issue with Cilium, and is there a way to configure it to handle all K8s services (nodeport, loadbalancer and clusterip) but to only do the nat translation in one place?
The --ctstate INVALID -j DROP is unfortunately quite common for many users of IPTables (including kube-proxy, which writes this rule). As such, it seems like a very dangerous state to run in.
Answers:
username_1: Thanks for the issue. The reasoning looks correct: `--ctstate INVALID -j DROP` should be responsible for dropping the packets.
We can work around this by getting rid of `NO_REDIRECT`. This would make the request to bypass the stack (step 2.), and thus no netfilter CT entry should be created in your case. The PR is out (https://github.com/cilium/cilium/pull/11557), but it needs some troubleshooting to figure out why the tests were failing.
username_0: It looks like we do want the ability to support IPTables even in cases where eBPF is handling the Kubernetes services. Getting rid of NO_REDIRECT would mean that no traffic (even not bound to k8s services) would skip over the netfilter hooks, correct?
An ideal solution would be for the un-dnat to happen at the same point in the linux stack that did the dnat. This probably requires some more state to be maintained in the ct maps though, is there some existing experiment/analysis done already that we can start from?
username_2: We've discussed two possible solutions:
1. Mark NodePort (and similar) packets with an FWMARK. Then insert an IPtables rule to match on this FWMARK and bypass conntrack table (i.e. -A PREROUTING -m mark --mark BYPASS_MARK -j NOTRACK)
2. Keep the current ingress behavior. On egress:
* Extract connection state from conntrack.
* If packet is NodePort, skip "un-DNAT" in bpf_lxc, and mark the packet.
* In bpf_host match packet with the mark, lookup in conntrack, and do "un-DNAT".
(2) Seems to be more solid long-term solution. We could start with simpler (1), but one concern is that it might be hard to upgrade to (1) without node reboot.
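For (1), concretely, the bypass rule would sit in iptables' raw table, something like this (the mark value is a placeholder, not the one Cilium would actually reserve):
```
iptables -t raw -A PREROUTING -m mark --mark 0x800/0x800 -j NOTRACK
```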
username_3: @username_2 I assume this is on your radar now?
username_2: It's on our radar but we have not made any progress.
I think the last state is as follows:
1. @username_1 had a prototype
2. Prototype in (1) was not yet discussed/approved by broader set of Cilium decision makers.
3. This feature is very important and is a GA blocker for us because as it stands right now one can't do IPtables conntrack when Cilium runs in kube-proxy free mode.
username_3: @username_2 thanks for the update, feel free to bring this up e.g. in sig-datapath meeting.
username_1: The prototype: https://github.com/cilium/cilium/tree/pr/username_1/bpf-lxc-no-redirect.
Status: Issue closed
username_1: Considering that #15354 has been reverted (cc @krishgobinath), reopening the issue.
jerry54604/my-profile | 213227060 | Title: What's this??
Question:
username_0: :open_mouth: what is this??
pro typescript :open_mouth:
Answers:
username_1: my profile lol
username_0: I only see **Loading** and **app works!**
username_1: thats what it suppose to show, show i'm a noob 😢
username_0: walao totally pro
write so much pro code just to do that one thing :+1:
username_1: lol, i didnt write so much, just use ng new and ng serve XD
username_0: lol still pro 😆
username_0: https://github.com/username_1/jinyong-legend-py
WHAT IS THIS!?!?!? :open_mouth: :open_mouth: :open_mouth:
Game???? :open_mouth: :open_mouth: :open_mouth: :open_mouth:
username_1: ya, a game
Proseph like that also know 😮
username_0: lol pro game developer
Text based or what? :open_mouth:
username_1: 2d sprite based?
just forking ppl's code where got pro 😢
username_0: lol fork means want to make changes to it.
Planning to make changes to it means know what this is. :open_mouth:
Wow got sound and graphics also LOL
IMBA game. Thought can play at work. :disappointed:
username_1: lol
still trying to make sense of what it is, im too noob 😢
username_0: I totally don't know what this is
username_1: at least you know it is game 😮
username_0: lol because of the title
Status: Issue closed
|
PacktPublishing/Spring-Boot-2.0-Projects | 369393639 | Title: Hi
Question:
username_0: Hi, There are some compile time errors w.r.t Article model. Article setters and getters are not working in any other classes. Please check once.
Answers:
username_1: Hi @username_0 Thanks for your reply. The source code uses Lombok to generate getters, setters, equals, etc. If you are opening the source in an IDE, install the Lombok plugin for that IDE, or use **mvn clean install** to build.
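For context, Lombok generates that boilerplate at compile time from annotations; a minimal sketch (the Article fields here are illustrative, not necessarily the book's exact model):
```java
import lombok.Data;

@Data // generates getters, setters, equals, hashCode and toString
public class Article {
    private Long id;
    private String title;
    private String content;
}
```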
username_0: Hi Shazin, I have used mvn clean install only but couldn't able to run blog project. Please fix the issues and push the code to git.
username_1: @username_0 As I said the code uses Lombok. There is nothing wrong with the code. After building the code with mvn clean install you can use mvn spring-boot:run as these are Spring Boot Applications. This source code is associated with the book Spring Boot 2.0 Projects available at https://www.amazon.com/dp/B07CSLQ2M8/ref=cm_sw_r_cp_ep_dp_b6nKBbADS90FP. All instructions are available in the book.
username_0: Sir, please check once, the code is not running.
username_0: Please find the error stack for Spring boot blog project:
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2018-10-19 19:40:30.211 ERROR 7088 --- [ restartedMain] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Failed to execute ApplicationRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:784) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:771) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:335) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1234) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at com.packtpub.springboot2blog.SpringBoot2BlogApplication.main(SpringBoot2BlogApplication.java:29) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) [spring-boot-devtools-2.0.0.RELEASE.jar:2.0.0.RELEASE]
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{bG7RY1kXQsSnQqoKM0yU7Q}{localhost}{127.0.0.1:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.doScroll(ElasticsearchTemplate.java:776) ~[spring-data-elasticsearch-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.startScroll(ElasticsearchTemplate.java:790) ~[spring-data-elasticsearch-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.delete(ElasticsearchTemplate.java:690) ~[spring-data-elasticsearch-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.deleteAll(AbstractElasticsearchRepository.java:256) ~[spring-data-elasticsearch-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]
at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:377) ~[spring-data-commons-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:200) ~[spring-data-commons-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:629) ~[spring-data-commons-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:593) ~[spring-data-commons-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:578) ~[spring-data-commons-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:59) ~[spring-data-commons-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:61) ~[spring-data-commons-2.0.5.RELEASE.jar:2.0.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at com.sun.proxy.$Proxy68.deleteAll(Unknown Source) ~[na:na]
at com.packtpub.springboot2blog.service.UserService.deleteAll(UserService.java:41) ~[classes/:na]
at com.packtpub.springboot2blog.config.SecurityConfig.lambda$applicationRunner$0(SecurityConfig.java:77) ~[classes/:na]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:781) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
... 10 common frames omitted
2018-10-19 19:40:30.227 INFO 7088 --- [ restartedMain] onfigReactiveWebServerApplicationContext : Closing org.springframework.boot.web.reactive.context.AnnotationConfigReactiveWebServerApplicationContext@6456aae3: startup date [Fri Oct 19 19:40:11 CDT 2018]; root of context hierarchy
2018-10-19 19:40:30.243 INFO 7088 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-10-19 19:40:31.291 INFO 7088 --- [ restartedMain] r.ipc.netty.tcp.BlockingNettyContext : Stopped HttpServer on /0.0.0.0:8080
```
username_0: I am getting one more error in the Spring Boot Twitter project:
```
C:\workspaces\Spring-Boot-2.0-Projects\Chapter07\spring-boot-2-twitter-clone>mvn spring-boot:run
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] spring-boot-2-twitter-clone [pom]
[INFO] backend [jar]
[INFO]
[INFO] --------------< com.packtpub:spring-boot-2-twitter-clone >--------------
[INFO] Building spring-boot-2-twitter-clone 0.0.1-SNAPSHOT [1/2]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] >>> spring-boot-maven-plugin:2.0.2.RELEASE:run (default-cli) > test-compile @ spring-boot-2-twitter-clone >>>
[INFO]
[INFO] <<< spring-boot-maven-plugin:2.0.2.RELEASE:run (default-cli) < test-compile @ spring-boot-2-twitter-clone <<<
[INFO]
[INFO]
[INFO] --- spring-boot-maven-plugin:2.0.2.RELEASE:run (default-cli) @ spring-boot-2-twitter-clone ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] spring-boot-2-twitter-clone 0.0.1-SNAPSHOT ......... FAILURE [ 2.243 s]
[INFO] backend 0.0.1-SNAPSHOT ............................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.115 s
[INFO] Finished at: 2018-10-19T19:58:08-05:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.0.2.RELEASE:run (default-cli) on project spring-boot-2-twitter-clone: Unable to find a suitable main class, please add a 'mainClass' property -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
C:\workspaces\Spring-Boot-2.0-Projects\Chapter07\spring-boot-2-twitter-clone>
```
username_1: You need to start the elasticsearch server. All the instructions are available in the book.
Status: Issue closed
|
grpc-ecosystem/grpc-gateway | 365844703 | Title: proto-gen-swagger: provide default description for HTTP 200 responses
Question:
username_0: In the swagger specification it says the following about description fields:
Field Name | Type | Description
-- | -- | --
description | string | REQUIRED. A short description of the response. CommonMark syntax MAY be used for rich text representation.
https://swagger.io/specification/#responseObject
Currently proto-gen-swagger will generate an empty string for this field on a 200 response, but I am unsure if this is enough to meet the specification. This has some ramifications as at least one of the generators in `swagger-codegen` or `openapi-codegen`, `rust-server` will not compile if the description is set to an empty string without manual intervention.
It seems to be this line which is responsible, so it may just be a matter of setting a default value, though I am not sure what this value should actually be.
https://github.com/grpc-ecosystem/grpc-gateway/blob/master/protoc-gen-swagger/genswagger/template.go#L697
Answers:
username_1: Thanks for submitting this issue @username_0! This does indeed seem to be in breach of the spec. What can I do to help you get a pull request in to fix this? I think it should be okay to introduce this backwards-incompatibility since any comments or explicit description will override it. How about just `A successful response`?
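Something like this around that template line should do it (a sketch; the function and variable names are assumptions, not the actual identifiers in template.go):
```go
// sketch: fall back to a non-empty default when no comment provides one
func responseDescription(comment string) string {
	if comment == "" {
		return "A successful response."
	}
	return comment
}
```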
username_0: I have created a pull request for the change, but I am unsure how best to provide automated tests. I am also interested to see how the default value looks when used with streaming responses.
username_1: Closed with #767
Status: Issue closed
|
prometheus/prometheus | 219604816 | Title: Add query hints to remote read
Question:
username_0: As discussed back at promcon, we should add a map[string]string with hints like the function wrapping the selector, the range and the step. This is unstructured and experimental as we don't know yet what if any of this will prove useful for things like downsampling optimisations on the other end.
At some point (likely after the rest of remote read is considered non-experimental), this should be changed to proper structured data when we have a good idea from multiple implementations what information is needed.
Answers:
username_1: I don't see an immediate benefit to making it unstructured initially. If you change something, the other end won't work anymore either way. Just that you won't notice it at compile time with `map[string]string`.
None is more or less work than the other really.
username_1: This would be another argument to `Select` I assume, like `Select(*Params, ...labels.Matcher)`?
What I can see immediately would be:
```go
type Aggregation int
const (
AggrNone Aggregation = iota
AggrAvg
AggrMin
AggrMax
AggrCounter
AggrCount
)
type Param struct {
Step int64
Aggr Aggregation
}
```
The aggregation would make some assumptions already about what a given PromQL function needs. We could just make it a function name of course. But I'd believe that every implementation making use of these would just end up mapping our dozens of functions themselves, which introduces lots of room for error.
username_0: I don't think we should make any assumptions about how downsampling works and what information it finds useful for a first version. Thus why I propose passing it down and only finalising this API after there's more experience.
username_1: Then just the raw info
```
type Params struct {
Step int64
Func string
}
```
Strictly speaking `sum/avg/...` are not functions. Should this go into an extra `Aggr string` field?
username_0: There's no overlap between function names and operator names, so for a first pass I'd suggest putting them all in one field.
username_2: I'd like to work on this - I've started taking a look and I'll report back here if I run into anything, otherwise will hopefully have a PR soon.
Status: Issue closed
|
BookStackApp/BookStack | 853870210 | Title: Redis Unix Socket
Question:
username_0: I'm unable to use a unix_socket for the Redis cache.
I have tried the following options:
```
REDIS_SERVERS=localhost;unix_socket=/var/run/redis/redis-server.sock
REDIS_SERVERS=127.0.0.1;unix_socket=/var/run/redis/redis-server.sock
REDIS_SERVERS=unix_socket=/var/run/redis/redis-server.sock
REDIS_SERVERS=127.0.0.7:0:0
# Cache & Session driver to use
# Can be 'file', 'database', 'memcached' or 'redis'
CACHE_DRIVER=redis
SESSION_DRIVER=redis
```
Everything I tried leaves a blank page in my browser.
Bookstack 0.30.4, PHP 7.3, NGNIX, Debian 10
Answers:
username_1: Hi @username_0,
Looks like we don't currently support a way to use redis via a unix socket via our standard configuration options, Just TCP usage.
Out of interest, is there a specific reason why socket may be required over tcp?
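For reference, the supported TCP form is host:port:database, e.g. (the trailing database index is from memory, so treat that detail as an assumption):
```
REDIS_SERVERS=127.0.0.1:6379:0
```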
username_2: Hi @username_0, Have you tired: `/var/run/redis/redis-server.sock` as the value for `REDIS_SERVERS`?
Also, please confirm you have php `redis` extension installed?
username_0: PHP-redis is installed and active. The value /var/run/redis/redis-server.sock did not work; still a blank page.
The main reason to use a unix socket is latency: a TCP socket on the loopback interface is slower than a unix socket.
username_1: Sure, but is it significant enough to worry about supporting it? My intentions in supporting redis are really for the purposes of remote or high-availability cache/session support; otherwise the default filesystem driver is good enough.
Status: Issue closed
username_0: Close. Was just a question. I thought I configured something wrong |
ess-dmsc/essdaq | 424786079 | Title: essdaq on CentOS fixes
Question:
username_0: netdev/max_backlog -> netdev_max_backlog
force setting buffersizes?
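The corrected sysctl key can be set like this (the value is just an example):
```
sudo sysctl -w net.core.netdev_max_backlog=250000
```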
Answers:
username_0: Doc grafana
Show (using efustats.py) how to configure a Grafana dashboard
username_0: Firewall rule to allow Grafana and UDP data on port 9000
```
sudo firewall-cmd --zone=public --add-port=9000/udp --permanent
sudo firewall-cmd --zone=public --add-port=3000/tcp --permanent
sudo firewall-cmd --reload
```
JuanMTech/google_dark_theme | 728626976 | Title: state-icon-color and paper-item-icon-color should match
Question:
username_0: In both Light and Dark themes these colors are inverted:
paper-item-icon-color = state-icon-**active**-color
paper-item-icon-**active**-color = state-icon-color
Should be (see the sketch below):
paper-item-icon-color = state-icon-color
paper-item-icon-active-color = state-icon-active-color
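In theme YAML terms, the fix would look roughly like this (a sketch; the var() indirection is one way to express it, and the rest of the theme's keys are omitted):
```yaml
paper-item-icon-color: "var(--state-icon-color)"
paper-item-icon-active-color: "var(--state-icon-active-color)"
```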
Answers:
username_1: That's the way I wanted them.
Status: Issue closed
|
MikeMillar/Expresso-Project | 346386109 | Title: Exceeds Expectations
Question:
username_0: Really nice job on your project. You passed all of the tests and your project works! Great work building all of your SQL queries and handling the routing. Just keep in mind string interpolation. It can help you when you have variables in a string and also to format long strings. All in all, great job and congrats on finishing your final Capstone Project!
Answers:
username_1: Thank you for your review, I'll keep the interpolation in mind when working with it further.
Status: Issue closed
|
mpetroff/pannellum | 168464141 | Title: adding images to hotspots
Question:
username_0: Hi,
I love your library, and I have a question: I want to replace the hotspots with custom images, and I want those images to move as I move in the panoramic view. So I pushed them into the hotspot div (see the sketch below), but when the image is at the side, it moves correctly but won't stretch the right way, so it looks quite bad.
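Concretely, I'm doing something like this (a rough sketch; the `.pnlm-hotspot` class name and the image path are assumptions):
```js
var hotspot = document.querySelector(".pnlm-hotspot");
var img = document.createElement("img");
img.src = "marker.png"; // placeholder image
hotspot.appendChild(img);
```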
I want it to look like a part of the background.
So, is there a way to replace the hotspots with an image and make it look like a natural part of the panorama?
I'm using the cube multires view.
thanks!
Answers:
username_1: This isn't possible at the moment. I plan to make it easier to use custom styles for hot spots fairly soon, but support for background-type hot spots is much further out.
username_0: Good to know. thank you for the quick answer
Status: Issue closed
|
alibaba/Sentinel | 398722598 | Title: Add an id or extension identifier to Sentinel flow control rules
Question:
username_0:
## Issue Description
I plan to build some alerting features on top of Sentinel. Currently, when a rule check fails, the exception type only tells me what kind of rule was violated; there is no way to know which specific rule failed, so customized alerting is impossible.
Type: *bug report* or *feature request*
feature request
### Describe what happened (or what feature you want)
1. Add an id field to each rule so that rules are uniquely identifiable.
2. Add an alert registration module that lets users register custom alert handlers, e.g. a flow-control alert handler or a circuit-breaking/degradation alert handler.
3. In the Slot, when a rule check fails, first send alert information to the alert listeners, then throw the exception.
### Describe what you expected to happen
1. Add an id field to each rule so that rules are uniquely identifiable.
2. Add an alert registration module that lets users register custom alert handlers (see the sketch below), e.g. a flow-control alert handler or a circuit-breaking/degradation alert handler.
3. In the Slot, when a rule check fails, first send alert information to the alert listeners, then throw the exception.
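A minimal sketch of what I have in mind (the handler and manager names are hypothetical, not existing Sentinel APIs; AbstractRule and FlowRule are the existing rule types):
```java
public interface WarningHandler {
    void onRuleBlocked(AbstractRule rule, String resource);
}
// registration module (hypothetical): one handler per rule type, e.g.
// WarningManager.register(FlowRule.class, new FlowWarningHandler());
```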
### How to reproduce it (as minimally and precisely as possible)
### Tell us your environment
Windows 10, IntelliJ IDEA
### Anything else we need to know?
nothing
Answers:
username_1: Refer to https://github.com/alibaba/Sentinel/pull/418#issuecomment-458024816
Status: Issue closed
|
saildeep/SmmCaptureEvolve.Altis | 314971450 | Title: Zeus / Ai control
Question:
username_0: Can someone have a look at:
https://steamcommunity.com/sharedfiles/filedetails/?id=491016790
it seems to be very intuitive and is ace compatible.
Status: Issue closed
Answers:
username_1: I tried before and it doesn't always work. For this to work, the AI needs to be commanded by the player, which will create additional problems (e.g. locality change). Also, I do not see Zeus compatibility. Therefore this won't be useful for us. I also aim for keeping external dependencies to a minimum. @username_0 please use more descriptive titles for your issues ;) |
VisionLearningGroup/SSDA_MME | 707336533 | Title: Hyper-parameters used in the tSNE plot
Question:
username_0: Dear Authors,
Thank you for making the code open source! I had a small question while reading the paper. I was wondering if you could let me know the approximate hyper-parameters used while getting the tSNE embeddings as given in the paper, i.e. the perplexity, number of iterations, learning rate among others. I am plotting the tSNE [using this function](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html). Please do let me know if it would be possible.
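For reference, I'm currently calling it with scikit-learn's defaults, roughly like this (`features` stands for my extracted feature matrix; the values below are my guesses, not the paper's):
```python
from sklearn.manifold import TSNE

embeddings = TSNE(n_components=2, perplexity=30.0,
                  learning_rate=200.0, n_iter=1000).fit_transform(features)
```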
Thanks,
Megh
Status: Issue closed |
jenkinsci/azure-vm-agents-plugin | 1045110617 | Title: Spot VM evictions are not reported to Jenkins, so builds hang and status not reported
Question:
username_0:
### Version report
Jenkins and plugins versions report:
```
Result
Jenkins: 2.303.2
OS: Linux - 4.19.0-16-cloud-amd64
---
ace-editor:1.1
ant:1.11
antisamy-markup-formatter:2.1
apache-httpcomponents-client-4-api:4.5.13-1.0
authentication-tokens:1.4
authorize-project:1.4.0
azure-acs:1.0.4
azure-ad:180.v8b1e80e6f242
azure-artifact-manager:86.va2aa4b1038c7
azure-commons:1.1.3
azure-container-registry-tasks:0.6.5
azure-credentials:182.v3ccd4a755864
azure-iot-edge:2.0.0
azure-sdk:23.v5682688d0eef
azure-vm-agents:783.v58077630847d
basic-branch-build-strategies:1.3.2
blueocean:1.24.3
blueocean-autofavorite:1.2.4
blueocean-bitbucket-pipeline:1.24.3
blueocean-commons:1.24.6
blueocean-config:1.24.3
blueocean-core-js:1.24.3
blueocean-dashboard:1.24.3
blueocean-display-url:2.4.0
blueocean-events:1.24.3
blueocean-git-pipeline:1.24.3
blueocean-github-pipeline:1.24.3
blueocean-i18n:1.24.3
blueocean-jira:1.24.3
blueocean-jwt:1.24.3
blueocean-personalization:1.24.3
blueocean-pipeline-api-impl:1.24.3
blueocean-pipeline-editor:1.24.3
blueocean-pipeline-scm-api:1.24.3
blueocean-rest:1.24.6
blueocean-rest-impl:1.24.3
blueocean-web:1.24.3
bootstrap4-api:4.5.3-1
bootstrap5-api:5.1.0-3
bouncycastle-api:2.20
branch-api:2.6.5
build-timeout:1.20
caffeine-api:2.9.2-29.v717aac953ff3
[Truncated]
```
The build should be reported as FAILURE. At least if it is marked failed, we can have the pipeline re-run it. Ideally, eviction would deallocate the VM and Jenkins could allocate a new spot VM with the same disk and restart the failed stage.
Actual result:
Build hangs indefinitely until aborted. Logs report the following:
```
Connection was broken
java.io.EOFException
at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2872)
at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3367)
at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:936)
at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:379)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:142)
at hudson.remoting.Command.readFrom(Command.java:128)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
```
xmake-io/xmake | 909617056 | Title: Add a way for xmake packages to override their vs_runtime
Question:
username_0: ### Is your feature request related to a problem? Please describe.
For now, xmake will automatically install every package and report the vs_runtime to them:
```
lynix@OrdinateurElise:/mnt/c/Users/lynix/Documents/GitHub/NazaraEngine$ xmake.exe project -k vsxmake
checking for platform ... windows
checking for architecture ... x64
checking for Microsoft Visual Studio (x64) version ... 2019
note: install or modify (m) these packages (pass -y to skip confirm)?
in xmake-repo:
-> zlib 1.2.11 [vs_runtime:MD, from:assimp]
-> assimp 5.0.1 [vs_runtime:MD]
-> chipmunk2d 7.0.3 [vs_runtime:MD]
-> dr_wav 0.12.19 [vs_runtime:MD]
-> freetype 2.10.4 [vs_runtime:MD]
-> libogg v1.3.4 [vs_runtime:MD, from:libvorbis,libflac]
-> libflac 1.3.3 [vs_runtime:MD]
-> libsdl 2.0.14 [vs_runtime:MD]
-> minimp3 2021.05.29 [vs_runtime:MD]
-> stb 0.0 [vs_runtime:MD]
-> libvorbis 1.3.7 [with_vorbisenc:n, vs_runtime:MD]
-> newtondynamics v3.14d [vs_runtime:MD]
in local-repo:
-> nodeeditor 2.1.3 [optional, vs_runtime:MD]
please input: y (y/n/m)
```
But some of theses packages don't care about vs_runtime, either because:
- they're C libraries and aren't affected by vs_runtime
- they're header-only libraries
### Describe the solution you'd like
A way for a package to override the vs_runtime it uses (except maybe if the project forces it using configs)
### Describe alternatives you've considered
Setting vs_runtime in my project myself for every package and their dependency, which is a bit annoying.
Answers:
username_1: At present, I did not think of good interfaces to configure them in the package definition, because in addition to vs_runtime, there are many other builtin configurations that have to be processed. It is also very cumbersome to configure it for each headeronly package.
Therefore, you can first use `add_requireconfs` to rewrite packages that do not require vs_runtime. In your project, there are not many such packages. you need not set vs_runtime separately for each package.
username_0: Yeah it's cumbersome to have to do it in each xmake project as well, which I why I think it could be handled inside the package definition.
On a side note, not really related to this, I've noticed xmake takes vs_runtime into account on Linux, I think it could be forcely set to nil for other platforms than Windows
username_1: I improved xmake on dev and mark some libs as headeronly. them will ignore vs_runtime.
```lua
package("dr_flac")
set_kind("library", {headeronly = true})
```
you can try it again.
Status: Issue closed
username_0: Oh this is great, thank you!
I tested it and it works like a charm.
username_0: Oh, I've noticed that headeronly libs are still separated by their arch (x86/x64) and the debug and shared config flags; wouldn't it be a good idea to ignore these as well for header-only libs?
username_0: ### Is your feature request related to a problem? Please describe.
For now, xmake will automatically install every package and report the vs_runtime to them:
```
lynix@OrdinateurElise:/mnt/c/Users/lynix/Documents/GitHub/NazaraEngine$ xmake.exe project -k vsxmake
checking for platform ... windows
checking for architecture ... x64
checking for Microsoft Visual Studio (x64) version ... 2019
note: install or modify (m) these packages (pass -y to skip confirm)?
in xmake-repo:
-> zlib 1.2.11 [vs_runtime:MD, from:assimp]
-> assimp 5.0.1 [vs_runtime:MD]
-> chipmunk2d 7.0.3 [vs_runtime:MD]
-> dr_wav 0.12.19 [vs_runtime:MD]
-> freetype 2.10.4 [vs_runtime:MD]
-> libogg v1.3.4 [vs_runtime:MD, from:libvorbis,libflac]
-> libflac 1.3.3 [vs_runtime:MD]
-> libsdl 2.0.14 [vs_runtime:MD]
-> minimp3 2021.05.29 [vs_runtime:MD]
-> stb 0.0 [vs_runtime:MD]
-> libvorbis 1.3.7 [with_vorbisenc:n, vs_runtime:MD]
-> newtondynamics v3.14d [vs_runtime:MD]
in local-repo:
-> nodeeditor 2.1.3 [optional, vs_runtime:MD]
please input: y (y/n/m)
```
But some of theses packages don't care about vs_runtime, either because:
- they're C shared libraries and aren't affected by vs_runtime (ex: SDL2 under windows)
- they're header-only libraries (ex: dr_wav, minimp3)
### Describe the solution you'd like
A way for a package to override the vs_runtime it uses (except maybe if the project forces it using configs)
### Describe alternatives you've considered
Setting vs_runtime in my project myself for every package and their dependency, which is a bit annoying.
username_1: Some special packages may depend on the architecture to export specific macro definitions. Therefore, I cannot completely ignore them for the time being.
```lua
package("xxx")
if is_plat("x86") then
add_defines("XXX_ARCH_X86")
end
on_load(function (package)
if package:debug() then
package:add("defines", "XXX_DEBUG")
end
end)
```
username_0: I understand.
Here's an idea: what about adding a hook to packages to give them the ability to get and change their configuration?
This would enable packages that don't rely on this type of config to discard it (reset it to default/nil), and would also allow handling packages such as libsdl on Windows, which is always shared for now (which means their shared/debug/vs_runtime configs don't have any effect).
username_0: for example:
```lua
package("libsdl")
on_config("windows", function (config)
config.debug = nil
config.shared = true
config.vs_runtime = nil
end)
```
username_1: This is too complicated and will make the package definition more complicated. I don't plan to consider it at the moment, just deal with the headeronly packet type.
username_0: I understand it's a bit cumbersome but it would only be required for a few packages (especially the binary ones) as xmake already properly handle all other packages the right way (+ headeronly feature now).
The headeronly feature you added is already great, thanks for that.
username_1: debug/shared is nil by default, so this will not cause users to download and install repeatedly unless the user explicitly sets them.
username_0: By default shared is false, which means that doing this:
```lua
add_requires("libsdl", { configs = { shared = true }))
```
will install the package once, no problem so far.
Then he may want to link libsdl statically:
```lua
add_requires("libsdl", { configs = { shared = false }))
```
For now, xmake will install libsdl a second time, but it will be identical to the first installation, because on Windows libsdl is always downloaded from the precompiled binaries, and the user won't understand why libsdl is not statically linked (I've been there).
An `on_config` or `check_config` hook would be a good place to override the configuration (to prevent installing a second time; it's not a big deal for libsdl, but it may be for bigger packages, as it takes time and disk space).
For example:
```lua
package("libsdl")
on_config("windows", function (config)
if config.shared == false then
warn("libsdl is only supported a shared library on windows for now")
end
if config.debug == true then
warn("libsdl doesn't support debug builds for now")
end
config.shared = true
config.debug = nil
config.vs_runtime = nil
end)
```
Also, users can easily set some configurations on some packages that don't support it, for example:
```lua
add_requires("chipmunk2d", "dr_wav", "entt", "freetype", "libflac", "libsdl", "minimp3", "stb", { debug = is_mode("debug") )
```
username_1: so we only add `assert` to check it in on_load/on_install, like this
https://github.com/xmake-io/xmake-repo/blob/7e26ab1473213d447f2e6f5f99e55a327bf3c5a5/packages/c/cef/xmake.lua#L37
https://github.com/xmake-io/xmake-repo/blob/7e26ab1473213d447f2e6f5f99e55a327bf3c5a5/packages/s/spdlog/xmake.lua#L43-L45
username_2: Provide a default value for `shared` could also help.
```lua
add_configs("shared", {description = "Build shared library.", default = is_plat("windows"), type = "boolean"})
```
username_0: Yes, but this doesn't prevent installing the library a second time even if the option does nothing (example: libsdl in static mode), and the assert approach is a hard error. In my example it's simply a warning that will be emitted every time the user builds the project until he changes the option; I find it a bit better than a failed assert.
username_1: I have improved it, and add `readonly = true` attribute to avoid user to modify it's value.
```lua
package("libsdl")
if is_plat("windows", "mingw") then
add_configs("shared", {description = "Build shared library.", default = true, type = "boolean", readonly = true})
add_configs("vs_runtime", {description = "Set vs compiler runtime.", default = "MD", readonly = true})
end
```
username_2: It's not required to fix the `vs_runtime` config, since on Windows a shared library can be linked to either the static runtime or the dynamic runtime
username_1: Different vs_runtime configurations will generate different buildhashes, so every time the user modifies vs_runtime, the libsdl package will be repeatedly installed into different buildhash directories.
But the binary files of libsdl are the same, so it should only need to be installed once. For example:
```lua
set_runtimes("MD")
add_requires("libsdl")
```
```lua
set_runtimes("MDd")
add_requires("libsdl")
```
```lua
add_requires("libsdl", {configs = {vs_runtime = "MT"}})
```
If we do not fix it, then when the user modifies the configuration three times, the package will be installed three times.
username_0: This sure is an improvement, thank you.
Perhaps it would be great to have a warning if a user sets a readonly config on a package with a different value than the default one?
username_1: If there are too many packages, set_runtimes may trigger a lot of warnings. Each package configuration prompt can basically see some information.
username_0: I understand, maybe trigger a warning except for set_runtimes since it's the only automatic config (I think?).
username_1: It is a bit troublesome to distinguish them, so I won't consider it for now.
Status: Issue closed
|
kubermatic/kubermatic | 893027116 | Title: Fix Grafana access for KKP Admins
Question:
username_0: KKP Admin users should be able to access data of all organizations (= KKP Projects) in Grafana UI.
Answers:
username_0: At this point, KKP Users can see all Grafana organizations (KKP projects), but they cannot switch into them. |
Cnilton/react-native-floating-label-input | 831667731 | Title: Upgrade reanimated to version 2
Question:
username_0: I propose reanimated upgrade to version 2. To do that all usages of `Easing` have to be changed into `EasingNode`. To keep compatibility I would propose to publish
- library for reanimated 2 as 2.x.x
- library for reanimated 1 as 1.x.x.
This way it would be possible to use it with both versions.
Additionally I think it would be a good idea to clean the released files - delete:
- node_modules
- eslint config
- prettier
- watchman
- app.json
- jest
- babel
- metro
from the released package.
Answers:
username_0: I created a PR for that #58.
username_1: Hi @username_0, thanks for your contribution. I'll test it out and then release a new version.
username_1: Hi @username_0, please try version 1.3.5 following the guide within the README.md.
username_0: Hi @username_1
It works in our project.
Thanks
Status: Issue closed
|
joaomilho/Enterprise | 997368609 | Title: Quadruple Licensing NEEDED
Question:
username_0: ### Issue Template
**Describe the problem here:**
**Add some debug logs:**
**Provide some information about your OS (any other information goes into the relevant sections above please!)**:
My new [Enterprise B2B Startup](https://github.com/Enty-Young-A-I-Cloud-Sharing-Startup) will sue you. We recruited a business communications expert whose slideshow told us there is no "Contact Sales" button on your product page. Also, where are the .msi's for our Windows Server 2012 R2 Cloud™? How do we even install this Enterprise™ thing when there's no way to accept the license requirements?
I propose a transparent Quadruple Licensing Scheme:
1. MIT-License for use in every Commercial Closed-Source Project that's being developed with Enterprise™, at the MIT
2. "Enterprise™ | First" EULA that's free for personal purposes and comes with a slightly reduced feature set (everything included except execution of your Enterprise™ programs)
3. "Enterprise™ | On-Prem™" special $$$ license which allows you to use "Enterprise™ | First" on your own hardware. It's essentially the same as 2. but it restricts the use of other devices.
4. "Enterprise™ | Enterprise" $$$$$$, allowing you to sell programs created with "Enterprise™ | Enterprise". Please note that this doesn't include the right to ship an Enterprise runtime, which the user has to obtain a license for. Life isn't a bowl of cherries - get that already!
Answers:
username_1: TOO OPEN SOURCE. Let's replace item 1 with:
1. Freemium license: Free to use but we'll collect ALL YOUR DATA. Freedom has a cost! Let's call it "Freedom™".
username_0: Set the database password to "<PASSWORD>" and I'm SOLD!
username_2: Note that the proposal was to authorize using the MIT license only for projects *at the MIT*, yielding the business value of people not understanding this restriction, and enabling us to sue them.
I still approve of the idea of a freemium license and propose a holistic and flexible Sixtuple Licensing Scheme, further enhancing our customer's license experience:
5. Apache-License for use of the Chiricahua, Jicarilla, Lipan, Mescalero, Mimbreño, Ndendahe, Salinero, Plains and Western Apache peoples.
6. Freemium license: Free to use but we'll collect ALL YOUR DATA *AND YOUR FIRSTBORN CHILD*. Freedom has a cost! We call it "Freedom™". |
docker/for-mac | 174826442 | Title: Cannot start docker on reboot -Mac
Question:
username_0: I was setting up a set of containers for a new project. Everything was going OK until I forgot to pass a -d to compose and did a Control-C; since then I have been having problems and cannot get the containers to run. Now, after restarting, I can't get Docker itself to run. I need to figure out how to start with a clean slate.
### Information
```
Diagnostic ID: B22742E9-6EA0-4C87-987D-724DB69E9238
Docker for Mac: 1.12.1-beta24 (Build 11525)
macOS: Version 10.12 (Build 16A313a)
[ERROR] docker-cli
docker ps failed
[OK] virtualization kern.hv_support
[OK] menubar
[OK] moby-syslog
[OK] dns
[OK] disk
[OK] system
[OK] app
[OK] osxfs
[OK] virtualization VT-X
[OK] db
[OK] slirp
[OK] logs
[OK] env
[OK] vmnetd
[OK] moby-console
[OK] moby
[OK] driver.amd64-linux
```
Answers:
username_1: Sierra related issues are mainly resolved in beta 25: https://download.docker.com/mac/beta/Docker.dmg
Stable should be releasing soon.
If you still run into problems, please report, and we will look into it
Thank you for using Docker!
Status: Issue closed
username_2: Docker for Mac beta 26 has improved support for macOS 10.12 Sierra: all the major show stopper issues which we are aware of have been fixed.
If you have the opportunity, please give it a try on the latest Sierra GM seed and open fresh issues for any problems you find.
- Release notes: https://download.docker.com/mac/beta/1.12.1.12048/NOTES
- Download: https://download.docker.com/mac/beta/Docker.dmg
Thanks for your help testing Docker for Mac with Sierra! |
GermanBluefox/ioBroker.statistics | 370954346 | Title: No table displayed
Question:
username_0: [iobroker.2018-10-17 - Kopie.log](https://github.com/username_1/ioBroker.statistics/files/2486650/iobroker.2018-10-17.-.Kopie.log)
Hello Bluefox,
adapter installed, but unfortunately no table is displayed; it always just says **loading**.


Answers:
username_1: Browser Console Errors? Which browser?
username_0: Google Chrome is up to date.
Version 70.0.3538.67 (Official Build) (64-bit)
F12:


username_1: How did you get the idea to use "statistics.X" as the data source?
You have to store the statistics data like all other data via sql/history/influx and display it from there.
username_1: 
username_0: OK, thanks, but why can it be selected then?
username_1: That is another question, which I now have to investigate :)
ThevisionCK/blogtalk | 615566496 | Title: issues
Question:
username_0:
```js
var gitalk = new Gitalk({
clientID: 'eef8ed87a696d85079b4',
clientSecret: '<KEY>',
repo: 'thevisionck.github.io',
owner: 'thevisionck',
admin: ['thevisionck'],
id: location.pathname,
distractionFreeMode: false
})
```
Status: Issue closed |
Suplanus/Suplanus.Sepla | 914421203 | Title: EplanApplicationInfo.GetActiveEplanVersion for EPLAN 2022 Beta OverflowException
Question:
username_0: Substitution of $(EPLAN_VERSION) returns 2022.0.1
https://github.com/Suplanus/Suplanus.Sepla/blob/df2a86ce001fd1d16cc47a8023c13129aec27d69/Suplanus.Sepla/Application/EplanApplicationInfo.cs#L50
Int16.MaxValue = 32767 |
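For illustration, any parsing that packs the combined version parts into an Int16 blows past that limit here (just a guess at the failure mode; the plugin's actual parsing may differ):
```csharp
int combined = 2022 * 100 + 0;        // e.g. major * 100 + minor -> 202200
short s = checked((short)combined);   // throws OverflowException (> 32767)
```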
MicrosoftDocs/windows-itpro-docs | 920707170 | Title: Better document
Question:
username_0: [Enter feedback here]
Please can you improve the documentation to provide a complete example on how you are supposed to update policies? The docs should show how you go from user/group names in AzureAD to the values required in this file.
Someone else had posted this script which converts from the GUID to the numerical S-1-12-1.
```
param(
[String] $ObjectId
)
$bytes = [Guid]::Parse("$ObjectId").ToByteArray()
$array = New-Object 'UInt32[]' 4
[Buffer]::BlockCopy($bytes, 0, $array, 0, 16)
$sid = "S-1-12-1-$array".Replace(' ', '-')
Write-Output $sid
```
This would be a lot easier if you could just use something like AzureADObjectID\[GUID] rather than having to jump through a lot of hoops and internet searches to find out what to do.
I'm getting the following error when trying to add a policy for /Vendor/MSFT/Policy/Config/LocalUsersAndGroups/Configure
Syncml(400): The requested command could not be performed because of malformed syntax in the command.
This error isn't very useful as it doesn't tell me what the error is or what part of the file has the error.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8b0c55a7-9aaa-d395-409e-59c2595c8e9e
* Version Independent ID: cf8e5a71-a8fc-1d43-6eed-03d3f2490693
* Content: [Policy CSP - LocalUsersAndGroups - Windows Client Management](https://docs.microsoft.com/en-us/windows/client-management/mdm/policy-csp-localusersandgroups)
* Content Source: [windows/client-management/mdm/policy-csp-localusersandgroups.md](https://github.com/MicrosoftDocs/windows-itpro-docs/blob/public/windows/client-management/mdm/policy-csp-localusersandgroups.md)
* Product: **w10**
* Technology: **windows**
* GitHub Login: @ManikaDhiman
* Microsoft Alias: **dansimp**
Answers:
username_0: Googling shows that this might now work on Windows 10 Business edition, can you confirm this?
username_1: Hello @username_0
**This would be a lot easier if you could just use something like AzureADObjectID[GUID] rather than having to jump through a lot of hoops and internet searches to find out what to do** - as it was mentioned in #9683, Graph API should be used to retrieve SID
**I'm getting the following error when trying to add a policy** - is it possible to provide the command?
Thank you
Status: Issue closed
|
dji-sdk/RoboMaster-SDK | 772720822 | Title: Please add support for the Apple M1 chip
Question:
username_0: 
The RoboMaster SDK cannot be installed on a MacBook Pro with the M1 chip
Python version: installation fails on both Python 3.9 and Python 3.8
Answers:
username_0: Solved by installing a Python 3.7 environment via Anaconda and then running pip install robomaster
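e.g. (the environment name is arbitrary):
```
conda create -n robomaster python=3.7
conda activate robomaster
pip install robomaster
```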
Status: Issue closed
username_1: It still won't install even with Python 3.7.
username_0: Please attach your environment details and the exact error output
serenity-bdd/serenity-core | 188428539 | Title: Running serenity with Geb gives Tests skipped
Question:
username_0: This is my Runner Class:
@RunWith(CucumberWithSerenity.class)
@CucumberOptions(features="src/test/groovy/Features/",format = ["pretty","json:target/cucumber.json"],tags=["@smoketest"])
class GoogleRunnerSpec {
}
This is my test class:
class GoogleGSpec extends GebTest {
@Given("^I open the Google Page")
public void i_open_the_Google_Page(){
to GooglePage
}
@Then("I type \"(.*?)\" in Search")
public void search(String searchText){
insertTextInSearchTextBox(searchText)
}
}
This is my feature File:
Feature: New Feature
@smoketest
Scenario: As a user, I should be able to search for a text
Given I open the Google Page
Then I type "Subrahmanyam" in Search
Running the feature file directly runs my tests but onrunning the Runner class it skips the tests.
Can someone let me know what I am missing ?
Answers:
username_1: You are combining Geb with Serenity, which isn't really supported.
username_0: @username_1 : Thanls for responding, But I was able to run the feature file and test cases were running. Its only when I run the Runner class, it says Tests Skipped. I would like to share my sample framework which I created. If you can just give me your email-id,I can share it with you.
Thanks. Waiting for your response.
username_0: @username_1 : It worked now !! The reason why the tests were skipping were because the Step definitions and Runner Class were not under the same package. I kept them under the same package now and now the tests are running.
Integartion of Serenity with Geb is now sucessfully running.
username_1: Thanks for the update!
Status: Issue closed
|
cisco-ie/pipeline-gnmi | 445661679 | Title: TestCodecGPBBasic fails
Question:
username_0: ```
Trying to determine which should be correct...
Answers:
username_0: This test passes in the original bigmuddy repo and the testdata and test itself is the same - this is a failure introduced in this repo.
username_1: Very interesting. Does the old repo test against recent Go versions? Looking at your description this issue seems like something related to JSON tags such as```omitempty```.
username_0: This does appear to be due to the updated vendor dependencies. Copying the `vendor/` folder from this project to original pipeline repository yields the same failures.
The failing line: `data_gpb:<nil> data_gpbkv:[]`
The proto: https://github.com/cisco-ie/pipeline-gnmi/blob/master/vendor/github.com/cisco/bigmuddy-network-telemetry-proto/proto_go/telemetry.pb.go#L85-L89
Should these fields be in the output?
username_1: These values should not be there if marshaled correctly. Interestingly these are the only pointers in the struct and although the definition of ```empty``` should be straightforward these are very nested custom types. |
dmaicher/doctrine-test-bundle | 403576865 | Title: Incorrect listener class
Question:
username_0: At least with Symfony 4.2.2 it was necessary to replace
````
<listener class="\DAMA\DoctrineTestBundle\PHPUnit\PHPUnitListener" />
````
with
````
<listener class="DAMA\DoctrineTestBundle\PHPUnit\PHPUnitListener" />
````
Answers:
username_0: Curious. My untouched `phpunit.xml.dist` from which I created a `phpunit.xml` has `<listener class="Symfony\Bridge\PhpUnit\SymfonyTestsListener" />`. Because of that, and the could not find message I got, I modified to `<listener class="DAMA\DoctrineTestBundle\PHPUnit\PHPUnitListener" />`.
I doubt there's much value in trying to sort that out so I'll close this if you don't mind.
Status: Issue closed
|
cjmling/findings | 465694033 | Title: php a collection return type
Question:
username_0: For example database query code in laravel
```
public function find()
{
return $this->mysql
->table('history')
->select(.....)
->get();
}
```
This function will return either an empty collection or a collection with data.
The return type in this example is `\Illuminate\Support\Collection`.
Though when we use the result of this query, we can use it exactly like an array; that is because this class implements the ArrayAccess interface. So it acts like an array.
You can check with `print_r(class_implements('\Illuminate\Support\Collection'));`
But it will throw an error if we put the return type as
`public function find(): array`
The correct way is
`public function find(): \Illuminate\Support\Collection`
It is also correct to code like
`public function find(): object`
as `\Illuminate\Support\Collection` is an `object` too, right.
4Q-s-r-o/ota_update | 550183993 | Title: App closes when starts installing
Question:
username_0: Since I updated to Flutter 1.12, ota_update doesn't work. When I call it, it downloads the new APK, and when the Status = INSTALLING the app crashes. It logs nothing, with or without verbose.
Answers:
username_1: Works fine for me.
I have currently fixed example implementation as there was some errors. Can you try to run example?
Version 2.0.3 will be released today.
username_1: Version 2.0.3 is published. Please try to run example.
username_0: It doesn't work. This time, I bring you all the `flutter run --verbose` logs and my code. Btw, thanks for v2.1.0: I was really waiting for custom apk name.
*What happens*: When I open the app, a dialog shows up asking me if I want to update. I tap "OK" and the code below happens. It shows my progress, logs `INSTALLING`, and then it crashes. At no point do I see the app installer.
_To consider_: Happened after upgrading to Flutter 1.12
<details>
<summary>Code</summary>
```dart
import 'package:animu/widgets/dialog_button.dart';
import 'package:flutter/material.dart';
import 'package:ota_update/ota_update.dart';
import 'package:pub_semver/pub_semver.dart';

class Updater extends StatefulWidget {
  final Version currentVersion;
  final Version latestVersion;
  final String downloadURL;

  Updater({
    Key key,
    this.currentVersion,
    this.latestVersion,
    this.downloadURL,
  }) : super(key: key);

  @override
  _UpdaterState createState() => _UpdaterState();
}

class _UpdaterState extends State<Updater> {
  bool accepted = false;
  String progress = '0';
  OtaStatus status = OtaStatus.DOWNLOADING;

  void ota() {
    setState(() => accepted = true);

    try {
      OtaUpdate()
          .execute(widget.downloadURL, destinationFilename: 'animu-release.apk')
          .listen(
        (OtaEvent event) {
          setState(() {
            progress = event.value;
            status = event.status;
          });
        },
      );
    } catch (e) {
      print('Failed to make OTA update. Details: $e');
    }
  }

  @override
  Widget build(BuildContext context) {
    if (!accepted)
      return AlertDialog(
        title: Text('Nueva versión'),
        content: Text(
[Truncated]
[ +6 ms] Sending to VM service: getIsolate({isolateId: isolates/865008030491535})
[ +3 ms] Sending to VM service: _flutter.listViews({})
[ +9 ms] Result: {type: FlutterViewList, views: [{type: FlutterView, id: _flutterView/0x70a0e6c420, isolate: {type: @Isolate, fixedId: true, id: isolates/865008030491535, name: main.dart$main-865008030491535, number: 865008030491535}}]}
[ +3 ms] DevFS: Creating new filesystem on the device (null)
[ ] Sending to VM service: _createDevFS({fsName: animu})
[ +81 ms] Result: {type: FileSystem, name: animu, uri: file:///data/user/0/com.juanm04.animu.dev/code_cache/animuYNVFGV/animu/}
[ ] DevFS: Created new filesystem on the device (file:///data/user/0/com.juanm04.animu.dev/code_cache/animuYNVFGV/animu/)
[ +1 ms] Updating assets
[ +76 ms] Syncing files to device ONEPLUS A6013...
[ +1 ms] Scanning asset files
[ +1 ms] <- reset
[ ] Compiling dart to kernel with 0 updated files
[ +5 ms] /home/juanm04/dev/tools/flutter/bin/cache/dart-sdk/bin/dart /home/juanm04/dev/tools/flutter/bin/cache/artifacts/engine/linux-x64/frontend_server.dart.snapshot --sdk-root /home/juanm04/dev/tools/flutter/bin/cache/artifacts/engine/common/flutter_patched_sdk/ --incremental --target=flutter -Ddart.developer.causal_async_stacks=true --output-dill /tmp/flutter_tool.PVNHPS/app.dill --packages /home/juanm04/dev/projects/animu/.packages -Ddart.vm.profile=false -Ddart.vm.product=false --bytecode-options=source-positions,local-var-info,debugger-stops,instance-field-initializers,keep-unreachable-code,avoid-closure-call-instructions --enable-asserts --track-widget-creation --filesystem-scheme org-dartlang-root
[ +147 ms] Result: {type: Isolate, id: isolates/865008030491535, name: main, number: 865008030491535, _originNumber: 865008030491535, startTime: 1579327523941, _heaps: {new: {type: HeapSpace, name: new, vmName: Scavenger, collections: 0, avgCollectionPeriodMillis: 0...
[ +24 ms] <- compile package:animu/main.dart
[+8937 ms] Service protocol connection closed.
[ +1 ms] Lost connection to device.
```
</details>
username_1: Thanks for response.
Flutter logs look OK; I can't see anything wrong. But they only show the build and installation process. I was testing the example on 1.12.
Maybe logcat could provide more info? Since you are getting the INSTALLING event, it looks like the download was OK and you should have the APK in the downloads folder. If you run that directly, does it work?
username_0: If you want me to open the APK manually to test if it's okay... yes, it works perfectly
username_1: Hmm. Do you run your app from Android Studio or IntelliJ?
logcat may provide more information. If you are running the app from the command line, you can get more logs by running
``adb logcat``
See: https://developer.android.com/studio/command-line/logcat
username_0: So... I found this
```
01-19 11:54:00.978 10215 10346 D FLUTTER OTA: OTA UPDATE TRACK DOWNLOAD RUNNING
01-19 11:54:01.088 26488 10347 D DownloadManager: [6787] Finished with status SUCCESS
01-19 11:54:01.147 10215 10215 D AndroidRuntime: Shutting down VM
01-19 11:54:01.153 571 571 E SELinux : avc: denied { find } for interface=vendor.qti.hardware.servicetracker::IServicetracker sid=u:r:system_server:s0 pid=1287 scontext=u:r:system_server:s0 tcontext=u:object_r:default_android_hwservice:s0 tclass=hwservice_manager permissive=0
01-19 11:54:01.154 9378 9378 D NotificationListener: onNotificationRemoved# hash: 49233398 sbn: StatusBarNotification(pkg=com.android.providers.downloads user=UserHandle{0} id=0 tag=1:com.juanm04.animu.dev key=0|com.android.providers.downloads|0|1:com.juanm04.animu.dev|10016: Notification(channel=active pri=0 contentView=null vibrate=null sound=null defaults=0x0 flags=0xa color=0xff607d8b actions=1 vis=PRIVATE))
01-19 11:54:01.155 10215 10215 E AndroidRuntime: FATAL EXCEPTION: main
01-19 11:54:01.155 10215 10215 E AndroidRuntime: Process: com.juanm04.animu.dev, PID: 10215
01-19 11:54:01.155 10215 10215 E AndroidRuntime: java.lang.RuntimeException: Error receiving broadcast Intent { act=android.intent.action.DOWNLOAD_COMPLETE flg=0x10 pkg=com.juanm04.animu.dev (has extras) } in sk.fourq.otaupdate.OtaUpdatePlugin$2@af0c1d2
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.app.LoadedApk$ReceiverDispatcher$Args.lambda$getRunnable$0$LoadedApk$ReceiverDispatcher$Args(LoadedApk.java:1575)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.app.-$$Lambda$LoadedApk$ReceiverDispatcher$Args$_BumDX2UKsnxLVrE6UJsJZkotuA.run(Unknown Source:2)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.os.Handler.handleCallback(Handler.java:883)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.os.Handler.dispatchMessage(Handler.java:100)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.os.Looper.loop(Looper.java:214)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.app.ActivityThread.main(ActivityThread.java:7682)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at java.lang.reflect.Method.invoke(Native Method)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:516)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:950)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: Caused by: android.util.AndroidRuntimeException: Calling startActivity() from outside of an Activity context requires the FLAG_ACTIVITY_NEW_TASK flag. Is this really what you want?
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.app.ContextImpl.startActivity(ContextImpl.java:964)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.app.ContextImpl.startActivity(ContextImpl.java:940)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.content.ContextWrapper.startActivity(ContextWrapper.java:383)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at sk.fourq.otaupdate.OtaUpdatePlugin$2.onReceive(OtaUpdatePlugin.java:198)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: at android.app.LoadedApk$ReceiverDispatcher$Args.lambda$getRunnable$0$LoadedApk$ReceiverDispatcher$Args(LoadedApk.java:1560)
01-19 11:54:01.155 10215 10215 E AndroidRuntime: ... 8 more
```
username_1: What version of Android are you testing with?
We are currently setting this for Android below API level 24; Androids above this should not need it, but that may have changed.
username_0: I'm using my phone: API level 29
username_1: It is a different intent: above API 24 there is an intent to start the installation; below 24 it only opens the APK file. I will have to test it.
Thanks for the feedback.
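For reference, the stack trace above points at `startActivity()` being called from the `DOWNLOAD_COMPLETE` BroadcastReceiver's context; a plausible shape of the fix (a sketch, not the actual plugin code) is to add `FLAG_ACTIVITY_NEW_TASK` to the install intent:
```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

final class InstallHelper {
    // Sketch only; the real code in OtaUpdatePlugin.java may differ.
    static void openApk(Context context, Uri apkUri) {
        Intent install = new Intent(Intent.ACTION_VIEW);
        install.setDataAndType(apkUri, "application/vnd.android.package-archive");
        install.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
        // Required when startActivity() is called from a non-Activity context
        // such as a BroadcastReceiver; this is what the crash complains about.
        install.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(install);
    }
}
```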
username_0: Thanks _you_!
username_1: Version 2.1.1 is published. It should fix your problem. Feel free to reopen if problem persists.
Status: Issue closed
username_0: :clap: :clap: :clap:
Works perfectly. I hope nothing else is broken now |
yurake/k8s-3tier-webapp | 584359741 | Title: Install Loki
Question:
username_0: ### Description
- [ ] add loki with helm
- [ ] monitor system out through loki
- [ ] create application log
- [ ] monitor log through loki
### Related links
* xxx
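For the first task, a minimal sketch of a helm-based install (chart and repo names as published by Grafana at the time of writing; verify against the current docs):
```sh
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# loki-stack bundles Loki plus Promtail for log collection.
helm install loki grafana/loki-stack --namespace monitoring --create-namespace
```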
Answers:
username_0: Did not try the rest of the two actions:
* create application log
* monitor log through loki
Status: Issue closed
|
cydran/cydran | 1165654184 | Title: Externalize error messages
Question:
username_0: ## Summary ##
Currently Cydran has a number of error messages that are useful to the user but increase the size of the library non-trivially. This ticket would create a way to externalize error messages into a separate, optional js file deliverable.
Each discrete error case in the library would require a distinct error code in order to index into the list of supplied error strings.
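One possible shape of this, as a sketch (names are illustrative, not Cydran's actual API): the library raises errors by code, and the optional bundle registers the human-readable text.
```typescript
// Populated at load time by the optional, separately shipped bundle.
const MESSAGES: { [code: string]: string } = {};

export function registerErrorMessages(bundle: { [code: string]: string }): void {
	Object.assign(MESSAGES, bundle);
}

// Library code raises errors by code; text is attached only if available.
export function cydranError(code: string): Error {
	const text = MESSAGES[code];
	return new Error("CYDRAN-" + code + (text ? ": " + text : ""));
}
```
A production build that omits the bundle then pays only for the codes, while a dev build gets the full text.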
## Benefits ##
- Developers could omit the error strings bundle for their production build and save the overhead of the message text
- Each discrete error case in the library would require a distinct error code in order to index and therefore could be documented with extra detail in the documentation with regards to exactly what the problem is and how to avoid it inclusive of example code and diagrams
- The Cydran source code can be easily audited via future automation to enforce that no two places in the code surface the same discrete error code |
amelchio/eternalegypt | 354184978 | Title: Documentation: LB1120 tested!
Question:
username_0: The LB1120 I have works properly with this code. I have HomeAssistant using it, successfully. No changes were needed. Feel free to add this to the README. Cheers.
Answers:
username_1: Awesome. Can you tell me the firmware version that you have tested?
username_0: `M18Q2_v12.09.163431`
Status: Issue closed
|
xws-bench/battles | 131746103 | Title: Human:200 Computer:0
Question:
username_0: Tycho_Celchu*Push_the_Limit*Proton_Rockets*Autothrusters*A-Wing_Test_Pilot*Wired.Keyan_Farlander*Opportunist.Wes_Janson*Veteran_Instincts*BB-8*Integrated_Astromech.VSPatrol_Leader.Academy_Pilot.Academy_Pilot.Academy_Pilot.Academy_Pilot.Academy_Pilot.<br>
http://bit.ly/23O7tTR<br> |
matheo/angular | 866714121 | Title: Datepicker: Start of week
Question:
username_0: ## Describe the bug
The picker is not changing first day of week depending on locale like original picker does.
Could be luxon-related, since [ngx-material-luxon](https://www.npmjs.com/package/ngx-material-luxon) has a specific configuration for the first day of the week
## Minimal Reproduction
Stackblitz gives ngcc error atm unfortunately.
There's an example of french locale here:
https://material.angular.io/components/datepicker/examples

(lu = monday)
@username_1/datepicker:

## Expected behavior
Should change the first day of the week depending on locale.
Alternatively with configuration like
## Your Environment
**Angular Version:**
@angular/core: ^11.2.9,
@username_1/[email protected]
Issue on macOS both chrome and safari
Answers:
username_1: Yea, Stackblitz has problems connecting `luxon` to `@username_1/datepicker/luxon` :(
and indeed, you will need to provide your firstDayOfWeek function:
```typescript
import { MAT_LUXON_DATE_ADAPTER_OPTIONS } from '@username_1/datepicker/luxon';

@NgModule({
  providers: [
    {
      provide: MAT_LUXON_DATE_ADAPTER_OPTIONS,
      useValue: {
        firstDayOfWeek: (locale: string) => {
          // return 0 = Sunday, 1 = Monday
          return locale === 'fr' ? 1 : 0; // illustrative only
        },
      },
    },
  ],
})
export class AppModule {}
```
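If you want the function to follow the locale automatically, one hedged option is the `Intl.Locale` week info (browser support varies, so treat this as a sketch):
```typescript
function firstDayOfWeek(locale: string): number {
  // weekInfo.firstDay is 1 (Monday) through 7 (Sunday); the adapter
  // expects 0 = Sunday, 1 = Monday, so map 7 to 0.
  const info = (new Intl.Locale(locale) as any).weekInfo;
  const firstDay: number = info ? info.firstDay : 7; // default to Sunday
  return firstDay % 7;
}
```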
username_0: I forgot to change the import from `ngx-material-luxon` to `@username_1/datepicker/luxon`.
Thanks!
Status: Issue closed
|
ANYbotics/elevation_mapping | 622524666 | Title: debug issue
Question:
username_0: Dear ANYbotics team,
Thank you for sharing such a great package within the robotics community!
I have been using elevation mapping, however, I am facing the elevation mapping node crash when I set the node logging level to debug. Here is the debug output:
[DEBUG] [1590069916.836895896, 1589904836.722827935]: Requested to fuse an area of the elevation map with center at (6.967468, -3.480633) and side lengths (50.000000, 50.000000)
[DEBUG] [1590069916.836922000, 1589904836.722827935]: Fusing elevation map...
[DEBUG] [1590069916.966533748, 1589904836.984662168]: Elevation map is running visibility cleanup.
terminate called after throwing an instance of 'std::runtime_error'564956
what(): Time is out of dual 32-bit range
Could you please suggest the possible solution to the problem?
Regards,
Ahmad
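For context on the exception itself: it is the error ROS time raises when a timestamp cannot be packed into the two 32-bit sec/nsec fields, e.g. a negative stamp. A minimal reproduction sketch, from memory of roscpp's behavior (not elevation_mapping code):
```cpp
#include <ros/time.h>

int main() {
  ros::Time::init();
  // Packing a negative floating-point stamp into sec/nsec throws
  // std::runtime_error("Time is out of dual 32-bit range").
  ros::Time t;
  t.fromSec(-1.0);
  return 0;
}
```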
Answers:
username_1: This looks like a timing issue. What clock mode are you running?
And can you also point to where the code is crashing?
username_1: Hey @username_0, do you still have the issue? |
MicrosoftDocs/feedback | 408882528 | Title: Clicking "Getting Started" on Xamarin.Forms main page gives a 404 error
Question:
username_0: **Link to article:**
https://docs.microsoft.com/en-us/xamarin/xamarin-forms/
**Problem:**
Clicking "Getting Started" on the above link gives me a 404 error.
Answers:
username_1: /cc @username_2
username_2: @username_0 thanks for letting us know! We recently updated our "Get Started" experience, the correct link is https://docs.microsoft.com/xamarin/get-started/ - I hope you find it useful.
I have just fixed the broken link, which should be published soon.
#please-close
username_0: @username_2
Thanks for the quick response. I had already found the correct link, but appreciate it.
Status: Issue closed
|
AllenDowney/LittleBookOfSemaphores | 777490543 | Title: A handful of Makefile fixes
Question:
username_0: Attached git patches against 23d7177. Files renamed to *.txt as GitHub doesn't like *.patch
[0001-Need-another-run-of-LaTeX.patch.txt](https://github.com/AllenDowney/LittleBookOfSemaphores/files/5760545/0001-Need-another-run-of-LaTeX.patch.txt)
[0002-Complete-dependencies.patch.txt](https://github.com/AllenDowney/LittleBookOfSemaphores/files/5760546/0002-Complete-dependencies.patch.txt)
[0003-Generate-table.eps-via-Makefile.patch.txt](https://github.com/AllenDowney/LittleBookOfSemaphores/files/5760547/0003-Generate-table.eps-via-Makefile.patch.txt)
[0004-Add-clean-target-to-Makefile.patch.txt](https://github.com/AllenDowney/LittleBookOfSemaphores/files/5760548/0004-Add-clean-target-to-Makefile.patch.txt) |
microsoft/PowerToys | 713459189 | Title: FancyZones forgets Virtual Desktop layouts
Question:
username_0: <!--
**Important: When reporting BSODs or security issues, DO NOT attach memory dumps, logs, or traces to Github issues**.
Instead, send dumps/traces to <EMAIL>, referencing this GitHub issue.
-->
## ℹ Computer information
- PowerToys version: 0.230
- PowerToy Utility: FancyZones
- Running PowerToys as Admin: Yes
- Windows build number: 2004 (19041.508)
## 📝 Provide detailed reproduction steps (if any)
1. Apply specific custom layouts to different virtual desktops on Windows 10
### ✔️ Expected result
The layouts I applied to be maintained on each virtual desktop
### ❌ Actual result
The unique layouts are maintained for a while but eventually (and I have no idea why) the layout from one desktop gets applied to all other virtual desktops, removing any existing layouts that were applied there.
## 📷 Screenshots
Answers:
username_1: @username_0
did you delete and recreate the Virtual Desktops?
Can you please upload `%localappdata%\Microsoft\PowerToys\FancyZones\zones-settings.json`?
username_2: I have the same problem, it's annoying that I have to set the proper custom zone setup for desktop #2 all the time. My file is attached.
[zones-settings.zip](https://github.com/microsoft/PowerToys/files/5328119/zones-settings.zip)
username_1: @username_2
you didn't specify if this is after the layout setting was lost or after you re-applied.
username_2: Found that it resets when the monitor goes to sleep or is disconnected on the laptop, so essentially display switching.
I'm attaching 3 files:
zones-settings-bad-pre - initial state, it was reset (both desktops on TopGrid A)
zones-settings-good-set - when I set the good config (TopGrid A and TopGrid B for desktop 1 and 2)
zones-settings-bad-after - disconnected the monitor and reconnected after a few seconds which caused the reset to the first state
I hope this helps!
[zones-settings-bad-after.zip](https://github.com/microsoft/PowerToys/files/5335623/zones-settings-bad-after.zip)
[zones-settings-bad-pre.zip](https://github.com/microsoft/PowerToys/files/5335624/zones-settings-bad-pre.zip)
[zones-settings-good-set.zip](https://github.com/microsoft/PowerToys/files/5335626/zones-settings-good-set.zip)
username_1: @username_0 @username_2
if for you it's acceptable to install a private unsigned build, I've attached a build that may fix the issue.
We understand if you prefer to not install an unsigned build.
[PowerToysSetup-0.23.1-x64.zip](https://github.com/microsoft/PowerToys/files/5339425/PowerToysSetup-0.23.1-x64.zip)
username_3: I confirm this also occurs for me since updating to 0.23 ... it was working fine with 0.21.
username_3: This seems to solve it.
username_2: Yup, the fix works. Will this auto-update to the official version when the fix is released, or will I need to manually reinstall?
Status: Issue closed
username_4: Fixed in https://github.com/microsoft/PowerToys/releases/tag/v0.23.2
username_5: @username_1
Unfortunately, this bug appears to have come back for me with v.0.25.0.
username_4: @username_5 please create a new issue
username_5: Already working on it with @username_1 over at https://github.com/microsoft/PowerToys/issues/7790 . |
rust-lang/rust | 520764808 | Title: thread 'rustc' panicked at 'called `Option::unwrap()` on a `None` value'
Question:
username_0: The following Rust code crashes `rustc 1.39.0 (4560ea788 2019-11-04) running on x86_64-unknown-linux-gnu`.
```rust
extern crate wasm_bindgen;
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub extern fn third(a: Vec(u32)) -> u32 {
return a[3]
}
```
(Note, changing `Vec(u32)` to `Vec<u32>` fixes the crash.)
## Meta
Running with RUST_BACKTRACE, I get the following error message.
```
Compiling wasm_sample v0.1.0 (/home/david/Documents/rust/wasm_sample_2)
thread 'rustc' panicked at 'called `Option::unwrap()` on a `None` value', src/libcore/option.rs:378:21
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:76
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:60
4: core::fmt::write
at src/libcore/fmt/mod.rs:1030
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1412
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:64
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:196
9: std::panicking::default_hook
at src/libstd/panicking.rs:210
10: rustc_driver::report_ice
11: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:477
12: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:380
13: rust_begin_unwind
at src/libstd/panicking.rs:307
14: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
15: core::panicking::panic
at src/libcore/panicking.rs:49
16: rustc::hir::lowering::LoweringContext::lower_path_segment
17: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
18: <rustc::hir::ptr::P<[T]> as core::iter::traits::collect::FromIterator<T>>::from_iter
19: rustc::hir::lowering::LoweringContext::lower_qpath
20: rustc::hir::lowering::LoweringContext::lower_path_ty
21: rustc::hir::lowering::LoweringContext::lower_ty_direct
22: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
23: <rustc::hir::ptr::P<[T]> as core::iter::traits::collect::FromIterator<T>>::from_iter
24: rustc::hir::lowering::LoweringContext::lower_fn_decl
25: rustc::hir::lowering::item::<impl rustc::hir::lowering::LoweringContext>::lower_item
[Truncated]
query stack during panic:
end of query stack
error: could not compile `wasm_sample`.
To learn more, run the command again with --verbose.
```
`rustc --version --verbose` yields
```
rustc 1.39.0 (4560ea788 2019-11-04)
binary: rustc
commit-hash: 4560ea788cb760f0a34127156c78e2552949f734
commit-date: 2019-11-04
host: x86_64-unknown-linux-gnu
release: 1.39.0
LLVM version: 9.0
```
Thank you.
Answers:
username_1: I reduced the proc macro to the following:
```rust
use wasm_bindgen_macro::wasm_bindgen;
#[wasm_bindgen]
pub extern fn third(_: Vec(u32)) -> u32 {
0
}
```
```rust
extern crate proc_macro;
use proc_macro::TokenStream;
use quote::ToTokens;
#[proc_macro_attribute]
pub fn wasm_bindgen(attr: TokenStream, input: TokenStream) -> TokenStream {
let mut tokens = proc_macro2::TokenStream::new();
let item = syn::parse::<syn::Item>(input).expect("parse");
if let syn::Item::Fn(f) = item {
f.to_tokens(&mut tokens);
}
tokens.into()
}
```
with tokens being:
```text
[wmacro/src/lib.rs:15] &tokens = TokenStream [
Ident {
ident: "pub",
span: #0 bytes(0..0),
},
Ident {
ident: "extern",
span: #0 bytes(0..0),
},
Literal { lit: Lit { kind: Str, symbol: C, suffix: None }, span: Span { lo: BytePos(0), hi: BytePos(0), ctxt: #0 } },
Ident {
ident: "fn",
span: #0 bytes(0..0),
},
Ident {
ident: "third",
span: #0 bytes(0..0),
},
Group {
delimiter: Parenthesis,
stream: TokenStream [
Ident {
ident: "_",
span: #0 bytes(0..0),
},
Punct {
ch: ':',
[Truncated]
span: #0 bytes(0..0),
},
Punct {
ch: '>',
spacing: Alone,
span: #0 bytes(0..0),
},
Ident {
ident: "u32",
span: #0 bytes(0..0),
},
Group {
delimiter: Brace,
stream: TokenStream [
Literal { lit: Lit { kind: Integer, symbol: 0, suffix: None }, span: Span { lo: BytePos(0), hi: BytePos(0), ctxt: #0 } },
],
span: #0 bytes(0..0),
},
]
```
username_2: https://github.com/rust-lang/rust/blob/9124f7a096007b5f96300e61e8f5817df10b315a/src/librustc/hir/lowering.rs#L1863
The panic is due to not finding `(` while trying to suggest changing `Vec(T)` to `Vec<T>`.
cc @estebank
username_3: triage: P-high. Removing I-nominated.
Status: Issue closed
|
PartiPirate/personae | 405489993 | Title: Customized display of groups
Question:
username_0: In the "My groups" category, display only the groups the user is a member of, by theme category (AP - Teams - Crews).
Do not show the group's members on the "My Groups" page; they can be reached by clicking on the group name.
The groups the user is a member of could be shown as smaller buttons (normalized size, adjusted text; 3D buttons if possible).
Answers:
username_0: Maybe a tree-style display of the groups/themes?
RadioAstronomySoftwareGroup/pyuvdata | 621385119 | Title: Documentation for UVData.baseline_array
Question:
username_0: The docs for the `UVData.baseline_array` parameter say that it contains "baseline indices", but I believe this is incorrect. I think it contains "baseline numbers". This is important because baseline numbers can be used as keys in `UVData.get_*` methods, and labeling the entries of the array as indices makes it sound like they cannot be used as keys (given the distinction between antenna numbers and indices).
Answers:
username_0: @username_1
Status: Issue closed
|
IBMPredictiveAnalytics/SPSSINC_MODIFY_OUTPUT | 663066617 | Title: SPSSINC MODIFY OUTPUT, excelexport
Question:
username_0: Dear <NAME>,
thanks for looking after SPSSINC_MODIFY_OUTPUT. Unfortunately, the new version 2.0.0 is not working as it used to (compared to version 1.3.6) regarding the following:
SPSSINC MODIFY OUTPUT TABLES
/IF ITEMTITLESTART="Table_" PROCESS=ALL
/CUSTOM FUNCTION='customoutputfunctions.excelexport(file="xxx",' + 'sheet="Table_#",action="CreateWorksheet")'.
This command is meant to collect all tables in an output whose names start with "Table_" and write them to an xls file. The issue is that "customfunction" has not been initialized before usage. Could you fix this asap, please?
I am thanking you in anticipation.
Best regards,
Martin
(<EMAIL>) |
huaweicloud/huaweicloud-sdk-go-v3 | 922385907 | Title: Suggestion for the ValueOf method in region.go
Question:
username_0: For a region that does not exist, there seems to be no need to panic; returning nil would be more reasonable.
Answers:
username_1: Hello, we have received your suggestion and will discuss it internally first. The panic here was added because, if the region cannot be found, the required endpoint in SDK Core cannot be obtained and an exception would occur anyway when sending the request, so we intercept it early.
username_0: I originally planned to use this method to look up the region; if it cannot be found I would handle that myself and would not send a request. As it stands, this method is simply unusable for that.
username_2: Hello, for the panic scenario we suggest taking a look at the [recover mechanism](https://blog.golang.org/defer-panic-and-recover) for handling panics; please check whether that satisfies your use case.
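A minimal self-contained sketch of the suggested recover pattern (the `ValueOf` below is a stand-in for the SDK's panicking lookup, not its real implementation):
```go
package main

import "fmt"

// Stand-in for the SDK's region lookup, which panics on unknown regions.
func ValueOf(regionID string) string {
	if regionID != "cn-north-4" {
		panic(fmt.Sprintf("unexpected region id: %s", regionID))
	}
	return "https://example-endpoint"
}

// safeValueOf converts the panic into an ok=false result via recover.
func safeValueOf(regionID string) (endpoint string, ok bool) {
	defer func() {
		if recover() != nil {
			endpoint, ok = "", false
		}
	}()
	return ValueOf(regionID), true
}

func main() {
	if ep, ok := safeValueOf("no-such-region"); ok {
		fmt.Println("endpoint:", ep)
	} else {
		fmt.Println("region not found, skipping request")
	}
}
```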
intellij-rust/intellij-rust | 717980046 | Title: NoSuchMethodError with 203.4449 EAPs
Question:
username_0: <!--
Hello and thank you for the issue!
If you would like to report a bug, we have added some points below that you can fill out.
Feel free to remove all the irrelevant text to request a new feature.
-->
## Environment
* **IntelliJ Rust plugin version:** 0.3.133.3402-203-nightly
* **Rust toolchain version:** 1.47.0 (18bf6b4f0 2020-10-07) x86_64-apple-darwin
* **IDE name and version:** IntelliJ IDEA 2020.3 EAP Ultimate Edition (IU-203.4449.2)
* **Operating system:** macOS 10.15.6
## Problem description
<details>
```
Error during macro expansion
java.lang.NoSuchMethodError
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:603)
at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:678)
at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:737)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:919)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at kotlin.streams.jdk8.StreamsKt.toList(Streams.kt:60)
at org.rust.lang.core.macros.MacroExpansionTaskBase$submitExpansionTask$1.invoke(MacroExpansionTask.kt:202)
at org.rust.lang.core.macros.MacroExpansionTaskBase$submitExpansionTask$1.invoke(MacroExpansionTask.kt:48)
at org.rust.stdext.CompletableFutureKt$supplyAsync$1.get(CompletableFuture.kt:18)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1692)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.NoSuchMethodError: 'void com.intellij.util.containers.ConcurrentWeakKeySoftValueHashMap.<init>(int, float, int, gnu.trove.TObjectHashingStrategy)'
at org.rust.lang.core.resolve.ref.RsResolveCacheKt$createWeakMap$1.<init>(RsResolveCache.kt:270)
at org.rust.lang.core.resolve.ref.RsResolveCacheKt.createWeakMap(RsResolveCache.kt:274)
at org.rust.lang.core.resolve.ref.RsResolveCacheKt.getOrCreateMap(RsResolveCache.kt:264)
at org.rust.lang.core.resolve.ref.RsResolveCacheKt.access$getOrCreateMap(RsResolveCache.kt:1)
at org.rust.lang.core.resolve.ref.RsResolveCache.getRustStructureDependentCache(RsResolveCache.kt:57)
at org.rust.lang.core.resolve.ref.RsResolveCache.getCacheFor(RsResolveCache.kt:151)
at org.rust.lang.core.resolve.ref.RsResolveCache.resolveWithCaching(RsResolveCache.kt:97)
at org.rust.lang.core.resolve.ref.RsReferenceCached.cachedMultiResolve(RsReferenceCached.kt:27)
at org.rust.lang.core.resolve.ref.RsReferenceCached.multiResolve(RsReferenceCached.kt:20)
at com.intellij.psi.PsiPolyVariantReferenceBase.resolve(PsiPolyVariantReferenceBase.java:47)
at org.rust.lang.core.resolve.ref.RsReferenceBase.resolve(RsReferenceBase.kt:28)
at org.rust.lang.core.stubs.index.RsModulesIndex$Companion$getDeclarationsFor$1$1.process(RsModulesIndex.kt:46)
at org.rust.lang.core.stubs.index.RsModulesIndex$Companion$getDeclarationsFor$1$1.process(RsModulesIndex.kt:31)
at com.intellij.psi.stubs.StubProcessingHelperBase.processStubsInFile(StubProcessingHelperBase.java:73)
at com.intellij.psi.stubs.StubIndexImpl.lambda$processElements$2(StubIndexImpl.java:289)
at com.intellij.psi.stubs.StubIndexImpl.processElements(StubIndexImpl.java:325)
at com.intellij.psi.stubs.StubIndex.processElements(StubIndex.java:49)
[Truncated]
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:63)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:165)
at com.intellij.openapi.progress.ProgressManager.runProcess(ProgressManager.java:59)
at com.intellij.openapi.progress.util.ProgressIndicatorUtils.runWithWriteActionPriority(ProgressIndicatorUtils.java:110)
at org.rust.openapiext.UtilsKt.runWithWriteActionPriority(utils.kt:297)
at org.rust.openapiext.UtilsKt.executeUnderProgressWithWriteActionPriorityWithRetries(utils.kt:284)
at org.rust.lang.core.macros.MacroExpansionTaskBase$submitExpansionTask$1$stages1$1.apply(MacroExpansionTask.kt:186)
at org.rust.lang.core.macros.MacroExpansionTaskBase$submitExpansionTask$1$stages1$1.apply(MacroExpansionTask.kt:48)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:952)
at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:926)
at java.base/java.util.stream.AbstractTask.compute(AbstractTask.java:327)
at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:746)
... 5 more
```
</details> |
bcallaway11/did | 954028956 | Title: Count Outcomes
Question:
username_0: Does the _did_ package accommodate count outcomes? If so, how is this achieved? Or is this question a moot point? In the TWFE framework (which I am trying to get away from), I have used negative binomial or Poisson regression to model count outcomes.
Answers:
username_1: This is tricky question, and I think the answer sort of depends on what viewpoint you take:
- Mechanically, the answer is yes. You can just "ignore" that the outcome is a count variable and run the code in the `did` package. If you interpret parallel trends (the main assumption underlying DID) as being just a reduced form assumption that might hold in the data (which you could in some ways check in pre-treatment periods), then this would be reasonable. My sense is that this is very commonly done in applied work (both with count data or the same sorts of issues would come up if you had, say, a binary outcome).
- That being said, it is going to be very hard to write down a model for potential outcomes like a poisson regression with fixed effects that leads to parallel trends holding. I'm sensitive to this point (perhaps more than most), but I think that carefully checking pre-trends and ignoring issues related to nonlinear models is very common.
One last comment, I don't think there is going to be a "fix" for this if you were to use TWFE, our approach, or some other approach. This is a more general issue of fixed effects in nonlinear models.
Brant
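Mechanically, "just running it" would look like the usual call, sketched below (argument names follow recent versions of the did package; the data and variable names are hypothetical):
```r
library(did)

# Treat the count outcome like any other outcome variable.
res <- att_gt(
  yname = "visits",          # hypothetical count outcome
  tname = "year",
  idname = "county",
  gname = "first_treated",   # period of first treatment
  data = mydata
)
summary(res)
```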
username_0: I greatly appreciate your thorough response, Brant!
I have one more related question, which I am ashamed to say is rather rudimentary.
In this context, how should treatment effects be interpreted for count outcomes? In the employment example provided by the authors, treatment effects are reported in terms of a percentage decrease/increase. I suspect this may have something to do with their logged outcome. Or, am I off base here?
username_1: I think you should just interpret as how much outcomes increase on average for the treated group due to participating in the treatment --- so I don't think there is anything different about interpreting the results due to the count outcome relative to a continuous outcome. I think the bigger issue is the one we were talking about previously.
You are also right about the interpretation in our application being due to the logged outcome.
Status: Issue closed
|
TablePlus/TablePlus | 591506155 | Title: Keychain duplicate entries
Question:
username_0: 1. Which driver are you using and version of it (Ex: PostgreSQL 10.0):
**Doesn't matter.**
2. Which TablePlus build number are you using (the number on the welcome screen, Ex: build 81):
**Build 300**
3. The steps to reproduce this issue:
Use TablePlus and look at the Keychain Access app.
<img width="1286" alt="image" src="https://user-images.githubusercontent.com/273809/78084271-ba337300-7385-11ea-9bea-87015cb315d7.png">
I'm assuming this is a bug with storing the key.
Answers:
username_1: Hi @username_0, that is intentional behavior. TablePlus stores the password of each connection separately (around 0~4 records per connection). If you remove a connection, the number of records will be reduced.
Status: Issue closed
|
rancher/rancher | 311559427 | Title: [2.0.0-beta1] Cluster provisioning issue. invalid mount config
Question:
username_0: Hi all,
I'm getting an issue provisioning the Kubernetes cluster with the new Rancher beta. This did work with the latest alpha-24.
This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready.
[workerPlane] Failed to bring up Worker Plane: Failed to create [kubelet] container on host [10.132.194.37]: Error response from daemon: invalid mount config: must use either propagation mode "rslave" or "rshared" when mount source is within the daemon root, daemon root: "/var/lib/docker", bind mount source: "/var/lib/docker", propagation: "rprivate"
Regards
---
Kubernetes 1.9.5, CoreOS 1688.5.3, Docker 17.12.1-ce, etcd 3.1.12

| Useful | Info |
| :-- | :-- |
|Versions|Rancher `v2.0.0-beta1` UI: `v2.0.33` |
|Access|`local` `admin`|
|Route|`authenticated.cluster.index`|
Answers:
username_1: This is a Docker 17.12.1 (and up) specific error, and was broken before beta. Docker 17.12.0 should work, but only up to 17.03.2 is supported from a Kubernetes point of view. Let me know if using a different Docker version doesn't solve this.
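For reference, the daemon error is about bind-mount propagation: newer Docker rejects an `rprivate` bind mount whose source lives under the daemon root, so such a mount has to be declared shared, e.g. (sketch):
```sh
# rshared (or rslave) satisfies the propagation check that the
# "invalid mount config" error message describes.
docker run -v /var/lib/docker:/var/lib/docker:rshared ...
```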
Status: Issue closed
username_0: Yes confirmed, Thank you! |
aliyun/aliyun-tablestore-nodejs-sdk | 857669663 | Title: TypeError: name.endsWith is not a function when performing a match-all query
Question:
username_0: While writing code following https://help.aliyun.com/document_detail/100613.html?spm=a2c4g.11186623.6.1015.74103645q4lQ7l an error was thrown, but my code should be fine. Is this a bug? The error information is below.
My code:
``` JavaScript
var client = require('./client');
var TableStore = require('tablestore');

client.search({
    tableName: "JeolShareSendFailDataTable",
    indexName: "idx_file_nm",
    searchQuery: {
        offset: 0,
        limit: 10, // If you only want the row count and don't need the actual data, you can set limit=0, i.e. return no rows at all.
        query: {
            queryType: TableStore.QueryType.MATCH_ALL_QUERY
        },
        getTotalCount: true // TotalCount in the result gives the total number of rows in the table; defaults to false, meaning it is not returned.
    },
    columnToGet: { // Column return setting: RETURN_SPECIFIED (custom), RETURN_ALL (all columns), or RETURN_NONE (no columns).
        returnType: TableStore.ColumnReturnType.RETURN_SPECIFIED,
        returnNames: ["file_name"]
    }
}, function (err, data) {
    if (err) {
        console.log('error:', err);
    } else {
        console.log('success:', JSON.stringify(data, null, 2));
    }
});
```
Error message:
H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:66
throw err;
^
TypeError: name.endsWith is not a function
at formatError (internal/util/inspect.js:914:13)
at formatRaw (internal/util/inspect.js:703:14)
at formatValue (internal/util/inspect.js:591:10)
at inspect (internal/util/inspect.js:221:10)
at formatWithOptions (internal/util/inspect.js:1693:40)
at Object.Console.<computed> (internal/console/constructor.js:272:10)
at Object.log (internal/console/constructor.js:282:61)
at Response.<anonymous> (H:\work_file\nodejs-pj\search.js:22:17)
at Request.<anonymous> (H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:162:18)
at Request.callListeners (H:\work_file\nodejs-pj\node_modules\tablestore\lib\sequential_executor.js:113:20)
Answers:
username_0: The error after updating Node.js to the latest version (the version above was 12):
error: 400:
**OTSUnsupportOperation(Unsupported operation: 'Empty endpoint'.**
at Request.extractError (H:\work_file\nodejs-pj\node_modules\tablestore\lib\client.js:92:44)
at Request.callListeners (H:\work_file\nodejs-pj\node_modules\tablestore\lib\sequential_executor.js:113:20)
at Request.emit (H:\work_file\nodejs-pj\node_modules\tablestore\lib\sequential_executor.js:81:10)
at Request.emit (H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:189:14)
at Request.transition (H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:57:10)
at AcceptorStateMachine.runTo (H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:24:12)
at H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:36:10
at Request.<anonymous> (H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:73:9)
at Request.<anonymous> (H:\work_file\nodejs-pj\node_modules\tablestore\lib\request.js:191:12)
at Request.callListeners (H:\work_file\nodejs-pj\node_modules\tablestore\lib\sequential_executor.js:90:20) {
code: 400,
headers: {
date: 'Wed, 14 Apr 2021 13:34:32 GMT',
'transfer-encoding': 'chunked',
connection: 'keep-alive',
authorization: 'OTS LTAI5tR2n6fLdRm7axqy3w6G:kIv79PZBnTAoHw81k0p5fWPkP9o=',
'x-ots-contentmd5': 'omCQ6u5oxD8DR6byfuIcwQ==',
'x-ots-contenttype': 'protocol buffer',
'x-ots-date': '2021-04-14T13:34:32.427008Z',
'x-ots-requestid': '0005bfee-ce93-2ced-5e6b-ca0b0298f962'
},
time: 2021-04-14T13:34:31.637Z,
retryable: false
}
username_0: Resolved. The cause was that search indexes (multi-index) are not supported in the Japan (Tokyo) region.
typeorm/typeorm | 748257081 | Title: { select: false } doesn't work with relationship columns
Question:
username_0: ## Issue Description
### Expected Behavior
TypeORM should not return the `imageId` when querying a model with the following fields in the model:
```typescript
@Column({ name: 'image_id', type: 'int', select: false })
imageId: number;
@OneToOne(() => File, { eager: true })
@JoinColumn({ name: 'image_id' })
image: File;
```
### Actual Behavior
`imageId` is returned by default. If we remove the relationship field, then it's hidden as it should be.
```json
{
"id": "85a41473-2898-4cc8-aaa7-dc220ea1307d",
"title": "wololo",
"description": null,
"destinationUrl": null,
"imageId": "0523f629-4dec-4d51-8039-20cadce0cb3c",
"image": {
"id": "0523f629-4dec-4d51-8039-20cadce0cb3c",
"key": "picture.jpg",
"url": "http://example.com/picture.jpg",
"mimetype": "image/jpeg",
"misc": {
"k": "v"
}
}
}
```
### Steps to Reproduce
1. Add a one to one relationship to a model
2. Include a custom `@Column()` for the relationship
3. Add `{ select: false }` to the custom `@Column()`
4. Query the model
### My Environment
| Dependency | Version |
| --- | --- |
| Operating System | macOS 11.0.1 |
| Node.js version | v14.9.0 |
| Typescript version | v4.0.5 |
| TypeORM version | v0.2.29 |
### Relevant Database Driver(s)
- [ ] `aurora-data-api`
- [ ] `aurora-data-api-pg`
- [ ] `better-sqlite3`
- [ ] `cockroachdb`
[Truncated]
- [ ] `mysql`
- [ ] `nativescript`
- [ ] `oracle`
- [x] `postgres`
- [ ] `react-native`
- [ ] `sap`
- [ ] `sqlite`
- [ ] `sqlite-abstract`
- [ ] `sqljs`
- [ ] `sqlserver`
### Are you willing to resolve this issue by submitting a Pull Request?
- [x] Yes, I have the time, and I know how to start.
- [x] Yes, I have the time, but I don't know how to start. I would need guidance.
- [ ] No, I don't have the time, although I believe I could do it if I had the time...
- [ ] No, I don't have the time and I wouldn't even know how to start.
Not sure if I know exactly how to start, but more or less I do.
Answers:
username_1: This is probably indirectly caused by:
https://github.com/typeorm/typeorm/blob/c4a36da62593469436b074873eba186f0f8b990d/src/query-builder/SelectQueryBuilder.ts#L1775
The alias generated for `image.id` will be the same as for `imageId`, both `"image_id"`, and when the raw results are being mapped back to the object it will assign both.
https://github.com/typeorm/typeorm/blob/c4a36da62593469436b074873eba186f0f8b990d/src/driver/DriverUtils.ts#L48-L56
Perhaps `__` should be used instead of `_`? Or should this be delegated to the naming strategy? It is an issue in this example because the columns are snake_case, so it makes sense to allow the user to decide an alternative aliasing system. This would also allow for better solutions to the `maxAliasLength` hashing. |
bcgov/entity | 822549695 | Title: GL codes for vital stats
Question:
username_0: #### Description
To disburse money to vital stats, we need to configure proper GL to the database. Ticket is to collect the GL and update in DB for PROD.
#### Tasks
- [ ] When ticket has been created, post the ticket in RocketChat '#Operations Tasks' channel
- [ ] Add **entity** or **relationships** label to zenhub ticket
- [ ] Add 'Priority1' label to zenhub ticket
- [ ] Assign zenhub ticket to milestone: current, and place in pipeline: sprint backlog
- [ ] Reply All to IT Ops email and provide zenhub ticket number opened and which team it was assigned to
- [ ] Dev/BAs to complete work & close zenhub ticket
- [ ] Author of zenhub ticket to mark ServiceNow ticket as resolved or ask IT Ops to do so
Answers:
username_1: GL coding received from SBC Finance sent by email to Sumesh and Jyoti. |
jcberquist/commandbox-cfformat | 811482669 | Title: Too much spacing
Question:
username_0: With "binary_operators.padding": true, each successive time the file is formatted it adds more space:
```
a = b
a  =  b
a   =   b
```
Also seeing strangeness with adding too many tabs.
`TAB TAB TAB TAB local.bed1=local.rooms[local.selectedRoom].beds[local.rooms[local.selectedRoom].beds.find(local.bedFinder(unhousedMember.wheelchair, unhousedMember.no_stairs))]`
becomes
`TAB TAB TAB TAB TAB TAB TAB TAB local.bed1=local.rooms[local.selectedRoom].beds[
TAB TAB TAB TAB TAB local.rooms[local.selectedRoom].beds.find(local.bedFinder(unhousedMember.wheelchair, unhousedMember.no_stairs))
TAB TAB TAB TAB]`
Answers:
username_0: Just noticed this as well:
if (!local.info.keyExists("spouse")) local.info.spouse = 0 // API may send nulls which throw reference errors in CF
if (!local.info.keyExists("type")) local.info.type = "" // API may send nulls which throw reference errors in CF
if (!local.info.keyExists("rank")) local.info.rank = "" // API may send nulls which throw reference errors in CF
if (!local.info.keyExists("approved")) local.info.approved = false // API may send nulls which throw reference errors in CF
if (!local.info.keyExists("sessions")) local.info.sessions = [] // API may send nulls which throw reference errors in CF
becomes
if (!local.info.keyExists("spouse"))
TAB local.info.spouse = 0 // API may send nulls which throw reference errors in CF
TAB if (!local.info.keyExists("type"))
TAB TAB local.info.type = "" // API may send nulls which throw reference errors in CF
TAB TAB if (!local.info.keyExists("rank"))
TAB TAB TAB local.info.rank = "" // API may send nulls which throw reference errors in CF
TAB TAB TAB if (!local.info.keyExists("approved"))
TAB TAB TAB TAB local.info.approved = false // API may send nulls which throw reference errors in CF
TAB TAB TAB TAB if (!local.info.keyExists("sessions"))
TAB TAB TAB TAB TAB local.info.sessions = [] // API may send nulls which throw reference errors in CF
Seems that when the curly braces are not used after an IF it goes to lunch.
username_0: {
"alignment.consecutive.assignments": false,
"alignment.consecutive.params": false,
"alignment.consecutive.properties": false,
"array.empty_padding": false,
"array.multiline.comma_dangle": false,
"array.multiline.element_count": 4,
"array.multiline.leading_comma": false,
"array.multiline.leading_comma.padding": true,
"array.multiline.min_length": 120,
"array.padding": false,
"binary_operators.padding": true,
"brackets.padding": false,
"comment.asterisks": "align",
"for_loop_semicolons.padding": true,
"function_anonymous.empty_padding": false,
"function_anonymous.group_to_block_spacing": "spaced",
"function_anonymous.multiline.comma_dangle": false,
"function_anonymous.multiline.element_count": 4,
"function_anonymous.multiline.leading_comma": false,
"function_anonymous.multiline.leading_comma.padding": true,
"function_anonymous.multiline.min_length": 120,
"function_anonymous.padding": false,
"function_anonymous.spacing_to_group": false,
"function_call.casing.builtin": "pascal",
"function_call.casing.userdefined": "camel",
"function_call.empty_padding": false,
"function_call.multiline.comma_dangle": false,
"function_call.multiline.element_count": 4,
"function_call.multiline.leading_comma": false,
"function_call.multiline.leading_comma.padding": true,
"function_call.multiline.min_length": 120,
"function_call.padding": false,
"function_declaration.empty_padding": false,
"function_declaration.group_to_block_spacing": "spaced",
"function_declaration.multiline.comma_dangle": false,
"function_declaration.multiline.element_count": 1,
"function_declaration.multiline.leading_comma": false,
"function_declaration.multiline.leading_comma.padding": true,
"function_declaration.multiline.min_length": 1,
"function_declaration.padding": false,
"function_declaration.spacing_to_group": false,
"indent_size": 4,
"keywords.block_to_keyword_spacing": "spaced",
"keywords.empty_group_spacing": false,
"keywords.group_to_block_spacing": "spaced",
"keywords.padding_inside_group": false,
"keywords.spacing_to_block": "spaced",
"keywords.spacing_to_group": true,
"max_columns": 150,
"metadata.multiline.element_count": 4,
"metadata.multiline.min_length": 120,
"method_call.chain.multiline": 3,
"newline": "os",
"parentheses.padding": false,
"property.multiline.element_count": 4,
"property.multiline.min_length": 120,
"strings.attributes.quote": "double",
"strings.convertNestedQuotes": false,
"strings.quote": "double",
"struct.empty_padding": false,
"struct.multiline.comma_dangle": false,
"struct.multiline.element_count": 4,
"struct.multiline.leading_comma": false,
"struct.multiline.leading_comma.padding": true,
"struct.multiline.min_length": 120,
"struct.padding": false,
"struct.quote_keys": false,
"struct.separator": " = ",
"tab_indent": true,
"tags.lowercase": true
}
username_1: I can reproduce the issue with the `if` statements. The missing semicolons are tripping up the formatter. I should be able to improve that, but again, as I noted in the other issue, this sort of parsing/formatting is hampered by the fact that cfformat does not currently perform enough of a parse to easily recognize where statements end.
I am going to need more information to try and reproduce the issues in the first comment. Could you possibly supply more context for them? As it stands, with the settings you supplied, and those sample statements on their own, I am not seeing the issues you describe.
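Until that lands, a workaround sketch: giving each single-statement `if` braces (or a terminating semicolon) removes the ambiguity the formatter trips on.
```cfc
// Braced form that cfformat handles correctly.
if (!local.info.keyExists("spouse")) { local.info.spouse = 0; }
if (!local.info.keyExists("type")) { local.info.type = ""; }
```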
username_0: Here is an example file that has been formatted 1, 2, and 3 times.
[formatting.zip](https://github.com/username_1/commandbox-cfformat/files/6031228/formatting.zip)
username_0: How's this going? We would love to use cfformat but this problem is a deal breaker. Let me know if there's more you need from me.
username_1: I have still not dealt with the `if` statements. There was an issue with `#`'s not inside strings which has been addressed. So this is not completely fixed, sorry.
username_1: Could you try v0.15.19 which I just published. At least with the sample file you sent, I think it performs better.
username_0: Will do. It's on my list but work has been rough lately.
username_0: Long time no see. It's been a busy summer. This works well for the tabbing. Just need to get the IF situation figured out.
Status: Issue closed
|
jjdltc/jjdltc-cordova-plugin-sftp | 925772738 | Title: callback does not seem to work?
Question:
username_0: trying this on android 8 oreo with ionic capacitor project
even though the plugin works perfectly, the success callback doesn't seem to even run here.
Was it not implemented?
Answers:
username_0: nvm it works... sorry for bothering
Status: Issue closed
|
adamhammes/cs362 | 138422713 | Title: Adam's Printing Duties
Question:
username_0: Use Case Elaborations:
* Add User
* Add Book
* Add Version
Interaction Diagrams:
* Add User
* Add Book
* Add Version
Code:
* `DatabaseSupport.java`
In total, you should have 6 pieces of paper plus whatever `DatabaseSupport.java` takes up.<issue_closed>
Status: Issue closed |
google/guava | 638437014 | Title: Extend CachesExplained Wiki with CacheLoader.asyncReloading
Question:
username_0: I have added a paragraph and example to the CachesExplained wiki page to make the asyncReloading wrapper known.
This was requested in https://github.com/google/guava/issues/1625#issuecomment-257410069
Here is a preview Wiki: https://github.com/username_0/guava/wiki/CachesExplained#Refresh
Here is the commit in my wiki clone (it seems like you can't make a pull request out of it):
https://github.com/username_0/guava/wiki/CachesExplained/_compare/56f27a6970053b3f67f30eba2691fe65e0bf49c0...bddb8259da5dd51217c117a8e9e49cb1863cc5e2
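In case the preview links rot, the gist of the wiki addition is along these lines (a sketch; the wiki's exact wording and example may differ):
```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncReloadExample {
  public static void main(String[] args) {
    Executor executor = Executors.newFixedThreadPool(4);
    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(5, TimeUnit.MINUTES)
        // asyncReloading makes reload() run on the executor, so the thread
        // that triggers a refresh is not blocked while the value reloads.
        .build(CacheLoader.asyncReloading(
            new CacheLoader<String, String>() {
              @Override
              public String load(String key) {
                return expensiveLookup(key);
              }
            },
            executor));
    System.out.println(cache.getUnchecked("hello"));
  }

  private static String expensiveLookup(String key) {
    return key.toUpperCase(); // stand-in for a slow backend call
  }
}
```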
I agree to the Code of Conduct and submit the doc and code change as a trivial change, let me know if you need a CLA anyway.
Answers:
username_0: In case it matters now, my employer has signed the Company CLA |
tlkio/feedback | 555053285 | Title: Self hosted version?
Question:
username_0: Hi,
Do you guys plan to release a on-premise version? Like a docker image would be great. :-)
Answers:
username_1: Hey @username_0!
We unfortunately don't offer this right now as tlk.io is written in Ruby and running it on a 3rd party machine can potentially leak the source code. What's your use case?
username_0: Hey @username_1, sorry for my late response.
I was looking for a pluggable messaging system which could extend our current webpage.
So basically the idea was: as one of our subpages loads, the system would look up a topic message board by ID and embed it within our page. If the topic does not yet exist, it would be created on the fly.
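For what it's worth, embedding a channel by ID is roughly what tlk.io's embed snippet already does (recalled from memory; check tlk.io for the current markup):
```html
<div id="tlkio" data-channel="page-1234" style="width:100%;height:400px;"></div>
<script src="https://tlk.io/embed.js" type="text/javascript"></script>
```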
username_1: This should be possible even with the current system. Do you have a sense of how active these chat rooms are expected to be and how many would you need?
username_0: Sure, but I can’t let the content be on the public web.
username_1: @username_0 would something like end-to-end encryption on our side solve this issue? |
tpm2-software/tpm2-tss | 341355733 | Title: AppVeyor build times are bad.
Question:
username_0: The current configuration mirrors the linux build using GNU TLS / libgcrypt. In most cases I'd say this is the right thing to do but downloading and building gcrypt and all of its dependencies increases build times by nearly 4x. This is way too much time. We need to either mirror pre-built binaries or hopefully something easier.
Answers:
username_0: @davidjmaria : Interested in your input since the proposed solution in #1098 may have an impact on your work.
Status: Issue closed
|
andmorefine/since-co | 655310734 | Title: A new way of advertising.
Question:
username_0: Hi! username_0.com
Did you know that it is possible to send an appeal that is utterly legit?
We are offering a new unique way of sending requests through feedback forms. Such forms are located on many sites.
When such business proposals are sent, no personal data is used, and messages are sent to forms specifically designed to receive messages and appeals.
Also, messages sent through feedback forms do not get into spam because such messages are considered important.
We offer you to test our service for free. We will send up to 50,000 messages for you.
The cost of sending one million messages is 49 USD.
This message is created automatically. Please use the contact details below to contact us.
Contact us.
Telegram - @FeedbackFormEU
Skype FeedbackForm2019
WhatsApp - +375259112693<issue_closed>
Status: Issue closed |
wodby/php | 404211234 | Title: Failed to execute remote command by Drush v9 alias
Question:
username_0: I cannot execute remote Drush commands because of the error `env: can't execute 'php': No such file or directory`. It seems the issue is caused by a wrong `PATH` env var in the PHP container's sshd session:
```
PATH=/bin:/usr/bin:/sbin:/usr/sbin
```
Aliases in Drush v9 [do not allow](https://github.com/drush-ops/drush/blob/9.5.x/examples/example.site.yml#L122) specifying env vars, so I cannot override the `PATH` value.
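One hedged interim workaround is to let sshd inject a fuller `PATH` for non-interactive sessions via OpenSSH's `PermitUserEnvironment` (a sketch; the exact paths depend on the image):
```sh
# /etc/ssh/sshd_config: allow per-user environment files
PermitUserEnvironment yes

# ~/.ssh/environment for the ssh user
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```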
Answers:
username_1: Included in 4.11.0
Status: Issue closed
username_2: Now available in Drush 10. |
swc-project/swc | 889735149 | Title: Conditional chain + await in assignment is transformed incorrectly
Question:
username_0: **Describe the bug**
When combining an assignment, conditional chaining, and `await` the output is incorrect.
**Input code**
```js
const cache = {}
async function getThing(key) {
const it = cache[key] || (await fetchThing(key))
return it
}
function fetchThing(key) {
return Promise.resolve(key.toUpperCase()).then(val => (cache[key] = val))
}
```
**Config**
```json
{}
```
**Output code**
```js
function asyncGeneratorStep(gen, resolve, reject, _next, _throw, key, arg) {
try {
var info = gen[key](arg);
var value = info.value;
} catch (error) {
reject(error);
return;
}
if (info.done) {
resolve(value);
} else {
Promise.resolve(value).then(_next, _throw);
}
}
function _asyncToGenerator(fn) {
return function() {
var self = this, args = arguments;
return new Promise(function(resolve, reject) {
var gen = fn.apply(self, args);
function _next(value) {
asyncGeneratorStep(gen, resolve, reject, _next, _throw, "next", value);
}
function _throw(err) {
asyncGeneratorStep(gen, resolve, reject, _next, _throw, "throw", err);
}
_next(undefined);
});
};
}
var regeneratorRuntime = require("regenerator-runtime");
var _marked = regeneratorRuntime.mark(_getThing);
var cache = {
};
function _getThing() {
[Truncated]
In the case where `cache[key]` is truthy, the code above will return `undefined` because of the line `it = _ctx.t0` and (as far as I can see) `_ctx.t0` is never assigned. I am not sure if the issue here is with `swc` or `regenerator`.
Rewriting the `getThing` function look like this:
```js
async function getThing(key) {
let it = cache[key]
if (!it) {
it = await fetchThing(key)
}
return it
}
```
Produces working output, so I can work around the issue in my project.
**Version**
The version of @swc/core: 1.2.57
Version of regenerator-runtime: 0.13.5<issue_closed>
Status: Issue closed |
corona10/PayCut | 152201568 | Title: Do we need to unify the test environment?
Question:
username_0: If so, I'd like to know the device model and Android version.
Answers:
username_1: Since Android Wear development is only possible from Lollipop onward, please set the emulator to Lollipop as well. The device model shouldn't matter, but it should be fine as long as it runs without problems on the LG smartwatch we are developing for.
The project probably already has the API level configured, so please keep that part as is.
username_1: The SDK version is set to 23. If the build doesn't work after importing the project, please let me know.
itchio/itch | 122343290 | Title: Support .love games out of the box
Question:
username_0: Only question is how do they let us know which love version they need?
Answers:
username_1: you could extract https://love2d.org/wiki/Config_Files from the .love and try to extract the version number (foolproof would be to execute the lua file, but then you'd need to embed lua lol)
username_0: There's a discussion going on in the itch community forums as well: http://itch.io/t/11981/lve-2d-love-file-support
username_0: ..or just a lua parser! [luaparse](https://github.com/oxyc/luaparse) is only 90k with deps, 23k zipped.
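A sketch of that idea: pull `t.version = "x.y.z"` out of a .love's conf.lua with luaparse (AST details from memory, so verify against luaparse's docs; zip extraction and error handling omitted):
```js
const luaparse = require('luaparse');

function loveVersion(confLuaSource) {
  const ast = luaparse.parse(confLuaSource);
  let version = null;
  // Look inside function bodies (love.conf) for `<x>.version = "..."`.
  ast.body.forEach((node) => {
    if (node.type !== 'FunctionDeclaration') return;
    node.body.forEach((stmt) => {
      if (stmt.type !== 'AssignmentStatement') return;
      stmt.variables.forEach((v, i) => {
        const init = stmt.init[i];
        if (v.type === 'MemberExpression' &&
            v.identifier.name === 'version' &&
            init && init.type === 'StringLiteral') {
          version = String(init.raw).replace(/^['"]|['"]$/g, '');
        }
      });
    });
  });
  return version;
}

const conf = 'function love.conf(t)\n  t.version = "0.10.1"\nend';
console.log(loveVersion(conf)); // "0.10.1"
```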
username_2: An alternative is to get people using a launcher file if they aren't using a normal executable, Steam does this.
username_0: So do we, we launch `.bat` files in priority on Windows, and `.sh` on Linux and OSX in the absence of an .app bundle
username_3: I just posted on the forum before checking this page!
Maybe how about starting with an easy(fast) solution and then improve it over time? For now, any .love game will not work in linux. Instead of bundling dependencies, or trying to install stuff, just assume that the developer has notified that love needs to be installed, and run "love file.love" from the itch.io app. Probably it will be easy to detect if "love" is a valid command or not, and notify the user if its not installed.
username_0: :+1:
username_4: Assuming itch wants users to provide their own love install, it might be fine to ignore version numbers entirely:
1. Most users will only have the latest stable version of love installed
2. There is no standard way to have multiple love installs side-by-side
3. Opening a .love file in the wrong version of love will generate a warning anyways
so for the times that doing `love the-game.love` doesn't work the best thing you can do is to get the user to resolve the situation themselves, either by manually pointing to a love install or using a [wrapper script](https://github.com/username_4/flirt).
username_5: Considering [you just released a .love game](https://username_0.itch.io/capitalism-simulator) you may wish to make this a higher priority. It's a bit silly for the Itch app to not support the apps from the creator of Itch :stuck_out_tongue: |
npgsql/efcore.pg | 1168970916 | Title: Cannot use enum literal in EF query when column type is "text[]"
Question:
username_0: I attached to this issue a minimum working example.
[NpgsqlEfEnums.zip](https://github.com/npgsql/efcore.pg/files/8248819/NpgsqlEfEnums.zip)
Answers:
username_1: Thanks, I can confirm this is a bug. I've fixed it for 6.0.4.
Status: Issue closed
|
greg2010/antinub-gregbot | 906256206 | Title: Instead of supplying a logo, use server logos
Question:
username_0: Instead of supplying a url to a logo per relay via config, grab the server logo. Allow to supply a default fallback logo for cases when servers do not have a logo.
For DMs use the person's avatar instead.
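In pseudo-code the fallback could look like this (a sketch assuming a discord.py-style API; `pick_logo` and `default_logo_url` are made-up names):
```python
# Sketch: choose which image to show for a relayed message.
def pick_logo(message, default_logo_url):
    if message.guild is None:
        # DM: use the sender's avatar instead of a guild logo
        return str(message.author.avatar_url)
    if message.guild.icon_url:
        # the source server has a logo set, so use it
        return str(message.guild.icon_url)
    # the server has no logo: fall back to the configured default
    return default_logo_url
```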
This helps with clarity when one relay has multiple guilds as sources. |
ethereumclassic/ECIPs | 399047210 | Title: Make this repo a Jekyll site that generates a GH Pages site
Question:
username_0: Hey all. The Ethereum main-net repo is formatted as a Jekyll site, so it can be hosted on Github Pages or elsewhere and be browsed as a website. This means making some formatting changes and a few other config files.
I know Jekyll and if this is of interest, I can put a bit of time into it.
Answers:
username_1: Yes i think this would make the process much more approachable
username_2: @username_0 is this an example of minimal YAML header for each ECIP?
```YAML
---
layout: ecip
ecip: 1010
title: Delay Difficulty Bomb Explosion
author: <NAME> <<EMAIL>>
status: Final
type: Standard
created: 2016-09-13
---
```
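And a minimal `_config.yml` to go with it might look something like this (a sketch; the collection name and layout are assumptions, not necessarily what the Ethereum repo actually uses):
```yaml
# Minimal Jekyll config sketch: treat the ECIPs folder as a collection
# so each .md file with front matter like the above gets its own page.
collections:
  ecips:
    output: true
    permalink: /:collection/:path/
defaults:
  - scope:
      path: ""
      type: ecips
    values:
      layout: ecip
```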
username_1: Can this issue be closed?
username_3: https://ecips.ethereumclassic.org/ ?
username_0: @username_1 there was @username_5 who overwrote a bunch of changes in #12, so it’s no longer valid.
You can close it if you don’t want a web view, otherwise I still recommend you do this.
username_4: i would like a web view. minimal though. I think you can literally just have a folder of all `.md` files plus a config pointing to that folder and it will work.
username_5: @username_0 @username_4 I believe this issue can be closed! https://ecips.ethereumclassic.org/
Status: Issue closed
|
krakenjs/kraken-js | 73873309 | Title: Can the "public" directory be changed? I use versions 1.1 and 1.2 in the "public" directory
Question:
username_0: 
Answers:
username_0: How do I set a variable in the config.json file? :)
username_1: What's the goal here? To change which directory based on a URL parameter, or on a header?
username_0: My application has versions 1.1 and 1.5; version 1.5 has different js, css, and .dust files from version 1.1.
username_1: Yeah. Why not use `public` as the root, with URLs like `/public/1.1` mapping to the `1.1` folder?
username_0: But when I render the dust template, how do I get the right one?
my code:
return res.render('/1.1/topic',{topic:resultObj,locale:res.locals.context.locality});
username_0: 
username_0: 
username_1: That is more complex. You may want to consider sub-apps to have separate configurations for adaro/engine-munger/etc for each version.
username_1: The alternative is to have a structure like `public/templates/1.1/template.dust`, then you can `render('1.1/template')`.
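In a controller that would look roughly like this (a sketch; the route, the `topic` data, and the hard-coded version are placeholders):
```js
// Hypothetical kraken controller: render the versioned template.
module.exports = function (router) {
    router.get('/topic', function (req, res) {
        var version = '1.1'; // e.g. picked from config or the request
        res.render(version + '/topic', {
            topic: {},
            locale: res.locals.context.locality
        });
    });
};
```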
username_1: Or you can edit all the grunt tasks, and duplicate them for each separate set of templates for the localizr/dustjs production build, and set up separate kraken devtools for each public, and set up three static server middlewares. It's a lot of restructuring to have completely independent public folders.
username_0: not "public/templates/1.1/template.dust" , it is "public/1.1/templates/template.dust" in my code
username_1: Correct. I'm suggesting using a different layout to work more easily with a single instance of the tools.
username_0: But the js and css are all different in version 1.1 and version 1.2.
username_1: Yes. You'd need to namespace those, too. Either as is or changing `version/type` to `type/version`.
username_0: Change the directory like the following? And then I can use render('1.1/template')?

username_1: No:
```
public/
templates/
1.1/
dust files here
1.5/
dust files here
js/
1.1/
js files here, etc.
```
Though depending on what tools you're using in the rest of the stack, the non-template parts of the re-arrangement may not be needed.
username_0: OK,let me have a try
username_2: Though I'm with @username_1 that the original question deserves a little reconsideration, to answer the question for posterity:
``` json
// config/*.json
{
"middleware": {
"static": {
"module": {
"arguments": [ "path:./somewhereElse" ]
}
}
}
}
```
username_0: "arguments": [ "path:./somewhereElse" ]// can use variable? because of in version 1.1 it is "path:./public/1.1",and in version 1.5 it is "path:./public/1.5"
username_0: 
my code in the controllers: return res.render('/1.1/topic',{topic:resultObj,locale:res.locals.context.locality});
logs:Error: Failed to lookup view "errors/500" in views directory "/code/Server-API2/public/templates"
username_0: @username_1 @username_2
username_1: Correct. If you change the name of the error views, the error handling middleware you are using will need to be updated accordingly.
username_0: return res.render('/1.1/topic',{topic:resultObj,locale:res.locals.context.locality});
This code runs with an error, so it goes looking for errors/500.dust.
Why does this code produce an error? :(
username_0: 
return res.render('/1.1/haha',{topic:resultObj,locale:res.locals.context.locality});
--logs:Error: Failed to lookup view "errors/500" in views directory "/Users/whw/code/Server-API2/public/templates"
username_1: I don't know what your error is there -- the error view will display it if you fix the error handler.
username_0: Can config.json be set dynamically? I want to set the "static" parameter of config.json from my code.
username_1: You can update the config with `config.set`, but I don't think you want what you think you want. Instantiating new static middlewares -- or altering the configuration of one live, if that's even possible with its implementation -- can have very strange consequences under concurrency. Requests don't start and end in the same tick, and changing it midstream could be very confusing.
username_0: 
username_0: In the onconfig function, how do I set the "static" parameter? I want to set "path"="public/1.1" or "path"="public/1.5" dynamically.
----
"static" - serves static files from a specific folder
Priority - 40
Module - "serve-static" (npm)
Arguments (Array)
String - local path to serve static files from (default: "path:./public")
username_1: Yeah, you don't want to do that. Changing the value after the middleware is instantiated won't do anything. It's never referred to again.
username_0: But then how do I add version subdirectories under the public directory? :(
username_0:
```
public/
  templates/
    1.1/
      dust files here
    1.5/
      dust files here
  js/
    1.1/
      js files here, etc.
```
--- Use this method?
username_1: I do suggest that method, yes.
username_0: So the "path" parameter cannot be changed dynamically?

username_1: https://github.com/expressjs/serve-static is the implementation.
Imagine this:
```
var config = { "path": "/public" };
var static = serveStatic(config.path);
config.path = "/other";
```
There is no way for the change on the third line to affect the value on line 2. Just because you can change the value in the config doesn't mean anyone is listening and cares.
username_0: OK, I see. Thank you, I will try this method:
```
public/
  templates/
    1.1/
      dust files here
    1.5/
      dust files here
  js/
    1.1/
      js files here, etc.
```
----- but when I tried this method 10 hours ago it failed; it may not find haha.dust:
my code: return res.render('/1.1/haha',{topic:resultObj,locale:res.locals.context.locality});
--logs:Error: Failed to lookup view "errors/500" in views directory "/Users/whw/code/Server-API2/public/templates"
username_1: Yes. Why are you versioning error templates?
username_0: It is OK now.
return res.render('/1.1/haha',{topic:resultObj,locale:res.locals.context.locality}); // not ok
return res.render('1.1/haha',{topic:resultObj,locale:res.locals.context.locality});// ok
username_1: Yep! You will want `public/templates/errors/500.dust` though, and the other error templates outside the versioning.
username_0: Thank you very much @username_1 :)
username_1: You are welcome.
username_0: 
username_1: And where did you move your less files?
username_1: Oh that is interesting. Not sure why `tasks/less.js` isn't picking that up. In theory, it matches its glob.
username_0: Do you know how to set it up so that tasks/less.js can pick them up?
username_1: In `tasks/less.js`, does your configuration look like so:
```
build: {
options: {
cleancss: false
},
build: {
files: [{
expand: true,
cwd: 'public/css',
src: ['**/*.less'],
dest: '.build/css/',
ext: '.css'
}]
}
}
```
particularly the `build` object inside the `build` object?
If so, remove the wrapper like so:
```
build: {
options: {
cleancss: false
},
files: [{
expand: true,
cwd: 'public/css',
src: ['**/*.less'],
dest: '.build/css/',
ext: '.css'
}]
}
```
username_0: 
username_1: Yes. Look in `tasks/less.js`
username_0: 
username_0: It is OK now, thank you :) :) :)
username_1: You are welcome! Sorry for the bug -- that's https://github.com/krakenjs/generator-kraken/issues/111 but not fixed in the version you used!
username_0: OK:)
Status: Issue closed
username_1: I'm going to close this issue. Feel free to open another if you've other questions.
username_0: OK~
username_0: <div id="wrap">
<footer id="footer">
<div class="logo">
<img src="/img/1.1/logo.png" >
<div class="text">
<h2>{topic.title}</h2>
<span>{@pre type="content" key="subTitle"/}</span>
</div>
</div>
<a class="download" href="" >{@pre type="content" key="download"/}</a>
</footer>
</div>
---render this dust,it is error
username_0: 
--- rendering this dust, it is an error
username_0: 
-- when I remove all the {@pre} helpers and render this dust, it is OK
username_0: It is OK now. The "locales" directory must include versions 1.1 and 1.5, like this:

username_1: Yep. Lots of the internal tools map things 1:1. I'm working on changing that for future revisions, but the current versions assume 1:1 maps. |
uspki/policies | 420617116 | Title: Section 5.7.3 - OCSP responses
Question:
username_0: Based on reviewing some of the trust store policies, I don't think this needs to be addressed in this version. In fact, there seems to be a gap / hole in this area. It's not exactly reasonable to address this problem in a policy statement on the first shipment of policy and practices / systems / functionality.
- Private key compromise (5.7.3)
- The validity dates on OCSP responses would not be aligned
I haven't researched Apple / Safari in depth but I believe Edge(MSFT), Chromium and Mozilla all have procedures to push out configurations to stop trust in a CA (and subscriber certs) when a private key compromise occurs.
Status: Issue closed
|
dfm/emcee | 348780483 | Title: FloatingPointError: invalid value encountered in subtract
Question:
username_0:
```
array([-4964.73310394, -inf, -4711.26571368, -4970.54669421,
-inf, -4820.70787686, -4835.9471765 , -inf,
-4931.87203438, -4754.92616193, -4761.74519579, -4873.98569949,
-4836.12453068, -4796.68892564, -4987.50754428, -4714.68606488,
-inf, -4769.82785569, -4943.65543064, -4756.48557144,
-4811.02015636, -4873.93479207, -4966.71378152, -4923.9794561 ,
-4808.14407535])
```
### What have you tried so far?:
If I just copy/paste these arrays into a new file:
```
import numpy as np
newlnprob = np.array([-4948.91638591, -np.inf, -4672.16752718, -4973.36611267,
-np.inf, -4847.46764344, -4803.06307637, -np.inf,
-np.inf, -4805.5986002 , -4677.69154104, -4733.59151249,
-4831.54153719, -4797.05214905, -4842.05819292, -4728.12722244,
-np.inf, -4923.49027218, -4879.18207818, -np.inf,
-4790.03177514, -4802.55740805, -4872.55457856, -np.inf,
-4773.87951696])
lnprob0 = np.array([-4964.73310394, -np.inf, -4711.26571368, -4970.54669421,
-np.inf, -4820.70787686, -4835.9471765 , -np.inf,
-4931.87203438, -4754.92616193, -4761.74519579, -4873.98569949,
-4836.12453068, -4796.68892564, -4987.50754428, -4714.68606488,
-np.inf, -4769.82785569, -4943.65543064, -4756.48557144,
-4811.02015636, -4873.93479207, -4966.71378152, -4923.9794561 ,
-4808.14407535])
print(newlnprob - lnprob0)
# Output:
# [ 15.81671803 nan 39.0981865 -2.81941846 nan
#  -26.75976658 32.88410013 nan -inf -50.67243827
#  84.05365475 140.394187 4.58299349 -0.36322341 145.44935136
#  -13.44115756 nan -153.66241649 64.47335246 -inf
#  20.98838122 71.37738402 94.15920296 -inf 34.26455839]
```
I no longer get the error. If I mask the `-inf` elements in `newlnprob` with `np.nan` the error no longer appears.
This **only** happens if I use the `emcee.utils.sample_ball()` function. If I generate my own sample ball, I do not run into this error.
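For reference, rolling the ball by hand is just a Gaussian perturbation (a sketch; `p0`, `std`, and `nwalkers` here are placeholders, not values from my run):
```python
import numpy as np

# Roughly equivalent to emcee.utils.sample_ball(p0, std, size=nwalkers):
# perturb the starting point p0 with per-dimension Gaussian noise std.
p0 = np.zeros(3)           # placeholder starting position
std = 1e-4 * np.ones(3)    # placeholder per-dimension scales
nwalkers = 25
ball = p0 + std * np.random.randn(nwalkers, len(p0))
```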
### Minimal example:
<!-- In this section, you should include or link to a code snippet that demonstrates this issue. -->
```python
import emcee
# sample code goes here...
```
Status: Issue closed
Answers:
username_0: I had a `np.seterr(all='raise')` line lost somewhere in the code that was causing this. Again, sorry for the noise. |
apache/couchdb | 275080624 | Title: Time-out issue when running native erlang views in 2.x on
Question:
username_0: I am seeing a time out when indexing a view written in native erlang. The erlang view works with 1.6.x for databases of any size; the view also works on 2.x, for database with fewer records or smaller documents. Ran against a database with many large(ish) documents, I see the following error:
[error] 2017-11-17T03:00:11.072015Z couchdb@localhost <0.12.1196> 19a93c5b89 rexi_server throw:{timeout,{gen_server,call,[<0.9106.1195>,{prompt,[...]}}]}]}]}} [{couch_mrview_util,get_view,4,[{file,"src/couch_mrview_util.erl"},{line,56}]},{couch_mrview,query_view,6,[{file,"src/couch_mrview.erl"},{line,244}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
If you later try to call a view that failed due to the timeout, you get a second error:
[error] 2017-11-17T05:01:27.670419Z couchdb@localhost <0.26156.1198> d00b01bc7d rexi_server exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,286}]},{couch_mrview,map_fold,3,[{file,"src/couch_mrview.erl"},{line,503}]},{couch_mrview_util,fold_fun,4,[{file,"src/couch_mrview_util.erl"},{line,360}]},{couch_btree,stream_kv_node2,8,[{file,"src/couch_btree.erl"},{line,783}]},{couch_btree,stream_kp_node,7,[{file,"src/couch_btree.erl"},{line,710}]},{couch_btree,fold,4,[{file,"src/couch_btree.erl"},{line,217}]}]
I suspect the gen_server timeout needs to be extended. With multiple nodes, some index tasks might be preempted and thus time out.
I used the Ubuntu package couchdb 2.1.1-1 on xenial; I also replicated the same error on a an earlier 2.0 version running under snap.
Answers:
username_1: Is there any progress with this? I ran into the same issue today when I was using `fabric:query_view` for view filtering. I have this error:
`mfa: fabric_rpc:map_view/5 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,308}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,644}]},{couch_mrview,query_view,5,[{file,"src/couch_mrview.erl"},{line,263}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]`
username_2: I have that same error message in my log. Is your CPU usage getting spiked really hard by the erlang process too? That's the problem I'm trying to solve. I'm using 2.1.1
username_3: Same error (on verify installation, installed via snap on Ubuntu 16.04).
username_4: Anyone have a way to duplicate this? The view engine changed between 1.6 and 2.x but the way Erlang functions are invoked shouldn't be any different so that's a bit odd. I'd also be interested in which version of Erlang is used as well.
username_5: @username_4 see https://github.com/apache/couchdb/issues/1142 for more context
username_4: Aha, updated their but also this seems like two different errors to me.
username_6: Same error , Erlang 6.2
[error] 2018-04-10T10:10:59.860288Z nonode@nohost <0.14711.9> bcb49f8b6b rexi_server: from: nonode@nohost(<0.32062.6>) mfa: fabric_rpc:reduce_view/4 throw:{timeout,{gen_server,call,[couch_proc_manager,{get_proc,<<"javascript">>},5000]}} [{couch_mrview_util,get_view_index_state,5,[{file,"src/couch_mrview_util.erl"},{line,101}]},{couch_mrview_util,get_view,4,[{file,"src/couch_mrview_util.erl"},{line,45}]},{couch_mrview,query_view,6,[{file,"src/couch_mrview.erl"},{line,244}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
username_5: Closing until a reproducible test case is provided.
Status: Issue closed
username_0: Hi @username_5 , @username_7 , @username_4
I have created a test suite that generates the timeout mentioned above.
https://github.com/username_0/couchapp-erlang-example.git
There is a python script that generates a large number of (fairly big) documents. There is a javascript and an erlang version of each view. Try the script with 5 documents to see it working. Then try 500 and you should see timeouts on the erlang view. The javascript view may crash too, restarting the server.
I last ran it on Couchdb 2.2, on ubuntu 18.04 from the http://apache.bintray.com/couchdb-deb bionic package. I ran it on a NUC 7i with 4 cores and 15G of memory (n=1,q=8). I've also seen it on (n=1,q=1) and over three NUCs (n=3,q=8). The same erlang views ran on 1.6x without issue.
Perhaps there is a configurable timeout that needs tweaking?
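If it is the external view-server timeout, one knob worth trying (a guess on my part, and 60000 is an arbitrary value) would be in local.ini:
```ini
; local.ini sketch: raise the view/query server call timeout (milliseconds);
; the default 5000 matches the timeout shown in the gen_server errors above
[couchdb]
os_process_timeout = 60000
```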
username_8: Are there any known work arounds for this issue?
username_0: I can confirm I still see the same memory problem with the test suite above with a doc count of 500. When I run it there is no longer an error message; the process runs for some time until it quietly runs out of memory and restarts. (I am using the snap installation on ubuntu 19.04, version 2.3.1; erts-8.3.5.4; n=1; q=8 on a NUC with 8 cores and 15GB).
I don't see the problem with larger databases with smaller documents. I suspect it also isn't only the size of the documents, but also the depth of nested structures. Memory management between erlang and the NIF is the likely culprit.
In my real-life database, as a workaround, I did a bit of everything: i) increased memory; ii) increased nodes n=5 (shared the problem around); iii) decreased the document size; iv) re-ran indexing multiple times. In my case, it now works on the second or third attempt of a full index. Incremental indexing is fine.
username_8: Our database is not big, it just has over 100 databases. Each one maybe 200Mb in size, with a few thousand documents in each. This issue occurs randomly for us, one of the nodes will simply start responding to requests very slowly (over 20 second delay). When looking at the processes, I see 2 couchdb processes pegging a few CPUs. The logs are showing similar messages to OP. Running 32GB RAM, 16 processors, 1TB SSD drives. Not sure what I can tweak to help remedy this.
username_0: Hi @username_5 , @username_7 , @username_4
This problem disappeared after I rebuilt couchdb using jiffy 1.04 (see https://github.com/username_4/jiffy/commit/0ba322e42171bb48ffdd0c053cef28a42eb35fc9). Thanks @username_4 for the fix.
Status: Issue closed
|
robinvdvleuten/vuex-persistedstate | 478641192 | Title: Mismatching childNodes vs. VNodes
Question:
username_0: I'm using nuxt
nuxt.config.js
`{ src: '~/plugins/localStorage.js', ssr: false }`
localStorage.js
```js
import createPersistedState from 'vuex-persistedstate'

export default ({ store }) => {
  createPersistedState({
    key: 'key',
    paths: ['seen']
  })(store)
}
```
usage:
```html
<div v-if='seen'>
  <CardPreview v-for='item in seen' />
</div>
```
And I'm getting the error:
```
warn Parent: <div class="catalogue page-layout__body">…</div>
warn Mismatching childNodes vs. VNodes
```
Answers:
username_0: Solved it by wrapping into `<no-ssr>`
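Concretely, something like this (a sketch based on the snippet above; note that in Nuxt 2.9+ the component is called `<client-only>`):
```html
<!-- Render the persisted `seen` list only on the client, so the
     server-rendered markup and the client's VNodes match. -->
<no-ssr>
  <div v-if='seen'>
    <CardPreview v-for='item in seen' />
  </div>
</no-ssr>
```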
Status: Issue closed
|
tensorflow/tensorrt | 935589083 | Title: convert int8 engine failed
Question:
username_0: Background:
I'm converting yolov3 from MMDetection (PyTorch) to a tensorrt engine. The input shape is 1*3*320*320, using the config in
'https://github.com/open-mmlab/mmdetection/blob/master/configs/yolo/yolov3_d53_320_273e_coco.py', and the tensorrt version is 7.2.2.4.
Issue:
While converting it into an int8 engine, I found the int8 engine size abnormal: the onnx is about 234M, fp16 is 122M, and int8 is 234M.
I traced the root cause and found that when the onnx model contains the "Greater" or "Less" operators, trt can't generate a proper int8 engine.
So, does it not support "Greater"/"Less" in int8 mode? |
dpkp/kafka-python | 858125755 | Title: Working with multiple Kafka: Invalid file object: None
Question:
username_0: ```
---
```
# requirements.txt
kafka-python==2.0.2
```
```yaml
# one of node configuration
version: '3'
services:
zoo1:
image: confluentinc/cp-zookeeper:6.1.1
hostname: zoo2
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
environment:
ZOOKEEPER_SERVER_ID: 3
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
ZOOKEEPER_SERVERS: 10.10.10.10:2888:3888;172.16.17.32:2888:3888;0.0.0.0:2888:3888
kafka1:
image: confluentinc/cp-kafka:6.1.1
hostname: kafka2
depends_on:
- zoo1
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 3
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
KAFKA_ZOOKEEPER_CONNECT: 10.10.10.10:2181,20.20.20.20:2181,30.30.30.30:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.58.30:9092
```
Status: Issue closed
Answers:
username_0: Sorry, but I migrated to https://github.com/confluentinc/kafka-images |
kitsunenosaraT/Mozilla-Italia-l10n-guide | 223337106 | Title: Chapter 3 – Flusso di lavoro
Question:
username_0: I would like to suggest a few *false friends* to add to the list…
Answers:
username_1: Comment about the workflow using Pontoon.
It could be worth adding a note about the "notification bell" icon in the top part of the Pontoon platform.
E.g.:
1. in order to see if there are pending translations for the localizations you contribute to, log in to Pontoon with your account;
2. check the "notification bell" in the top part of the page (near your name);
3. if the bell is red, updates are available for you! Check the list of untranslated items and start to cooperate ;)
4. if the bell is gray, no notifications are waiting for your attention.
username_1: Link to Chapter 3 is not updated (bad link)
username_0: Excellent suggestion about the notifications, @username_1
I fixed the link in the opening message; it broke when I moved all the chapters into the /IT subfolder and forgot to update it 😅
username_0: While testing Internet Health #28, @elio pointed out how inconvenient it is not to be able to read the comments on corrections directly in Pontoon.
In fact, Pontoon currently only lets us add alternative translations. If we want to explain the reasons behind our changes, we have to use an external tool, such as email, GitHub issues, etc.
Among the improvements we proposed for Pontoon, one was to implement a comment feature, so that all the QA could be done directly in Pontoon.
What do you think? Do you like this idea? Would you prefer an alternative?
username_1: Thanks Sara,
the comment feature would be a really nice thing... basically I imagine the flow like this:
1. I translate and use the blue "suggest" button
2. the reviewer does the QA, ticks the "comment" checkbox, enters the comment, and then uses the "suggest" button
3. the translator sees the proposal and reads the comment
4. if everything is fine, the review is approved
I like the idea: everything integrated and practical.
GitHub then remains for more general communication about workflows, not about a specific localization.
username_1: Hi, I'm not sure this is the right issue, but I'd like to report something about Pontoon.
I was translating a string and, for the first time, noticed that the available characters were not enough. So, as you can see in the screenshot, I had to translate a word as "salvare" instead of "memorizzare", which was better and less ambiguous in that context. As you can see in the screenshot, it says "Translation too long". I honestly don't think this is right. What do you think?

username_2: The character check applies to certain strings, such as the ones used as tweets (the limit is 140, even though Twitter seems to be raising it amid mixed reactions from users) or the snippets for the home page (the ones that appear on Firefox's about:home page), where you have to try to be more concise. Unfortunately you're right, Italian is a language with a certain verbosity, but others are worse off.
It happens to me on Twitter too: I write a nice tweet and then have to trim here and there to stay within the limit. Sara is very good at keeping strings concise, and she will surely give you more useful suggestions.
In these cases, if you can, try to see whether you can express the same concept by cutting the superfluous or keeping only the essential.
username_2: In effetti è complicato, ho provato a inserire qualche suggerimento
anche io, però alcune le trovo difficili, tipo quella col pilota
automatico.
Get è un verbo fastidioso da tradurre, nel 90% ottenere non è mai il
traducente giusto, se si parla di applicazione ti consiglio di usare
Scaricare/Installare, nel caso di Sync penso Attivare sia quello più
corretto (è una funzione che va, appunto, attivata registrando l'account
o facendo il login).
Abilitare non usarlo mai, al suo posto usa sempre attivare.
Se pagina iniziale non ci sta, io preferisco abbreviare piuttosto che
tenere homepage, tipo Imp. pagina iniziale. (ho messo anche un
suggerimento errato perché confondo sempre pagina iniziale e principale).
Aspettiamo comunque Sara che come ti avevo già detto è molto brava ad
accorciare e tenere il senso delle stringhe originali.
username_1: Well, thanks a lot for the information in the meantime. On this specific point, of course, I'll wait for Sara. Thanks.
username_0: Well, username_2 has already highlighted many important things. I would just like to add a few points.
First of all, character limits can be imposed for various reasons; to understand how to handle them, you first need to consider the kind of text in front of you. Limits are usually found for:
- Space problems in the user interface: by now almost everything uses flexible or adaptive layouts that automatically "accommodate" the content. However, even flexibility has a limit: a button or a context menu entry cannot be 300 characters long. In these cases it is best to follow the rules of technical translation (be direct, use the second person for actions, e.g. *Scarica Firefox* instead of *Scaricare Firefox*, and when possible omit articles and deictics, e.g. *Modifica Impostazioni* instead of *Modifica le impostazioni da qui*).
- Character limits on Twitter or in snippets: these are usually promotional messages, so, unlike the previous case, you can use a casual, informal tone (which in many cases is more concise). You are even allowed to break up the sentence to save a few characters, e.g. *Se il tuo browser è troppo lento, prova Firefox* can become *Browser lento? Prova Firefox*
- Limits in graphic assets: brochures, captions, posters, flyers, slides, etc. Here you really have to consider the text parts graphically, e.g. strings set in huge type cannot be too long, certain strings cannot wrap onto two lines, and so on. You can often get a better result by turning titles into noun phrases, e.g. *Cosa abbiamo appreso dal sondaggio* becomes *I risultati del sondaggio*. P.S. All these graphic assets usually get an additional QA pass to check the final result on the artwork. The person who places the text in the image is usually not Italian, so they often break lines in odd places or introduce spelling mistakes without noticing.
Then there is the case of subtitles, which, however, does not concern us here.
In any case, whenever space runs out, the right way to proceed is this:
- Identify the most important piece of information in the string, the message that absolutely must get through to the reader.
- Triage the parts of the sentence, ranking them from most to least important. Try to fit in the most important information first, then, if there is room, the less important parts in that order.
E.g. if on the Firefox page we find the button *Click here to download Firefox*, we would literally translate it as *Fai clic qui per scaricare Firefox*. Too long?
The key words are *scaricare* and *Firefox*. *Fai clic* and *qui* are redundant information: it is a button, so naturally you have to click it. So we can safely shorten it to *Scarica Firefox*.
If even that is not enough, what do we choose to keep between "Scarica" and "Firefox"?
Look at the context: the Firefox name already appears on the page; the genuinely new information on the button is the action *scarica* = if you click this button, the download will start. So, if worst comes to worst, our cramped little button can simply become *Scarica*.
- And of course, it helps a lot to use your own creativity and move away from the original to find a solution that works better in context.
username_1: Thanks, this is really interesting; even though it is a special case, it definitely deserves attention. If you think about it, it probably also helps translation in general.
Everything is really clear.
Thanks again.
username_3: In "Strumenti di traduzione collaborativa" forse sarebbe il caso di aggiungere "Crowdin" dal momento che lo abbiamo usato alcune volte
username_3: Let me suggest some feedback.
In the `Il QA: le tre fasi per una traduzione di qualità` section I would add that, when requesting QA, you should not insert the `@mention` of a specific reviewer unless it is really necessary, since all users (reviewers included) receive the notifications, and using the "mention" feature this way seems a bit of an abuse to me.
What do you think?
username_3: @username_4 thanks for the report. As soon as possible we will update _chapter 3_ with the new link, which is [this one](https://www.microsoft.com/en-us/language/search).
username_3: In the "Opzione filtro" (filter option) section, add (possibly with screenshots) the procedure for using this feature: namely, that you need to already be on the page listing a project's strings (perhaps suggest pressing "All strings").
We recently had 2 cases of volunteers who could not quite figure out how to use this filter, because they could not find the icon on the project page (not knowing, indeed, that you first need to open the page with the actual strings). |
msys2/MINGW-packages | 782773867 | Title: evince 3.38 isn't working
Question:
username_0: ``` (application/octet-stream) is not supported ``` when trying to open documents (pdf, djvu, etc.)
```bash
(evince.exe:5928): GLib-GIO-CRITICAL **: 08:54:57.785: g_content_type_is_mime_type: assertion 'type != NULL' failed
```
evince version 3.38.0
Answers:
username_1: I have tried with a random PDF file and it works, with those warnings. Did you try that file with another application? Are all msys2/mingw packages updated?
username_0: My msys2/mingw64 packages were up to date. I can open those documents with other programs but not evince.
evince spams the terminal with ``` (evince:2400): GLib-GIO-CRITICAL **: 00:45:56.438: g_content_type_is_mime_type: assertion 'type !=NULL' failed ``` when I try to open any documents.
username_1: Would you like to check if a simple rebuild of the package solves the issue?
username_1: Here is a rebuild I did. Download the artifacts from here https://github.com/username_1/MINGW-packages/runs/1695353675. Extract a .tar.zst file from the zip file. Install that package with `pacman -U` command.
username_0: Binary from https://github.com/username_1/MINGW-packages/runs/1695353675 wouldn't work. The same error assertion 'type != NULL' failed still occurs.
username_2: Came here to report a slight evince installation problem; found this existing issue.
The problem (warnings during installation):
- Downloaded latest msys2*.exe, several hours ago
- Installed
- Updated
- Installed a few packages
- Got to evince
- Got many warnings of the following form:
- warning given when extracting /mingw64/share/help/.../figures/*.png (Can't create /same/path/*.png)
- Tried once again, got slightly different warnings:
- warning: could not get file information for mingw64/share/help/.../figures/*.png
...those figures did not show up.
Otherwise, evince seems to work, after all.
Anyway, just reporting.
username_3: Still doesn't work for me, I get the same message as the original report, many times in the console.
When I tried to open a pdf file from inside evince (from the dialog box), I noticed that pdf files are not showing up at all in the listing, not even when I explicitly pick "pdf" from the type select box. The only file I know I can see in the filesystem through evince is `.tif` files (when set to All Documents) and they open file, but nothing else, not even images. |