MicrosoftDocs/cpp-docs | 1046975618 | Title: CRT: fsopen: _SH_COMPAT is no longer defined.
Question:
username_0: This document talks about `_SH_COMPAT`, but `_SH_COMPAT` is not defined in any header,
and it does not mention `_SH_SECURE`, which is defined in `share.h`.
Please remove the documentation for `_SH_COMPAT`, and add `_SH_SECURE`.
```cpp
// testcode.cpp
#include <stdio.h>
#include <stdlib.h>
#include <share.h>
int main()
{
FILE* fp;
fp = _fsopen("test", "w", _SH_DENYRW); // <== OK.
fp = _fsopen("test", "w", _SH_COMPAT); // <== error C2065: '_SH_COMPAT':
fp = _fsopen("test", "w", _SH_SECURE); // <== OK.
return EXIT_SUCCESS;
}
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 19df9b55-9ae6-9c08-b8e4-28518eaeb3bd
* Version Independent ID: c5941a32-c208-1b0a-c2be-6050cafcb33b
* Content: [_fsopen, _wfsopen](https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/fsopen-wfsopen?view=msvc-160#feedback)
* Content Source: [docs/c-runtime-library/reference/fsopen-wfsopen.md](https://github.com/Microsoft/cpp-docs/blob/master/docs/c-runtime-library/reference/fsopen-wfsopen.md)
* Product: **visual-cpp**
* Technology: **cpp-ucrt**
* GitHub Login: @TylerMSFT
* Microsoft Alias: **twhitney** |
libgit2/libgit2sharp | 229759933 | Title: Crash when using GlobalSettings.Version.ToString() in 0.23.1
Question:
username_0: In 0.23.1 the generated libgit2sharp_hash.txt file contains 'HEAD' rather than a commit hash.
If you call LibGit2.GlobalSettings.Version.ToString(), this causes a crash, as it tries to extract 7 characters from a 4-character string.
See https://github.com/libgit2/libgit2sharp/blob/master/LibGit2Sharp/Version.cs#L65 and https://github.com/libgit2/libgit2sharp/blob/master/LibGit2Sharp/Version.cs#L72
Status: Issue closed
Answers:
username_1: Thanks for reporting this and for the fix! |
Anuken/Mindustry-Suggestions | 687000578 | Title: make mass conveyor overdriveable
Question:
username_0: **Make the mass conveyor overdriveable.**
*With this it would be possible to transport more units at the same time, which would help in PvP, for example; it would be possible to "compact" units so they can be transported more easily, which again would help a lot. And it's a mass **CONVEYOR**, so it makes sense, because conveyors are overdriveable.*
**Before making this issue, replace the spaces in the following boxes with an `X` to confirm that you have acknowledged them.** *Failure to do so may result in your request being closed automatically.*
1. - [X] I have done a quick search in the list of suggestions to make sure this has not been suggested yet.
2. - [X] I have checked the [Trello](https://trello.com/b/aE2tcUwF/mindustry-trello) to make sure my suggestion isn't planned or implemented in a development version.
3. - [X] I am familiar with all the content already in the game or have glanced at the wiki to make sure my suggestion doesn't exist in the game yet.
Answers:
username_1: n
username_2: I honestly see no point in this, as the time needed to produce even the fastest unit is many times longer than what the mass conveyor's throughput would require.
username_0: But it makes sense. Did I overdo it a little? Yes, but it makes sense.
Status: Issue closed
|
openservicemesh/osm | 718759094 | Title: make all book demo services in one namespace
Question:
username_0: I have run the osm demo and found that osm creates one namespace for each application, i.e. bookbuyer, bookstore, bookthief, bookwarehouse.
I think deploying the demo in a single namespace would make it simpler and a more suitable use of namespaces.
Answers:
username_0: \assign
username_0: I think I can try to resolve it. Can someone assign it to me?
username_1: The reason the demo uses multiple namespaces is that there are scenarios where we want to showcase communication between services in different namespaces.
That being said, the app namespaces are driven via environment variables, so by default we could set the app namespaces to be the same: https://github.com/openservicemesh/osm/blob/main/demo/run-osm-demo.sh#L20-L23
Status: Issue closed
|
metatron-app/metatron-discovery | 394305399 | Title: Support for formatting x-axis datetime representations.
Question:
username_0: **Is your feature request related to a problem? Please describe.**
- When displaying time information on the x-axis, a formatter should be supported so that the time can be presented in the desired form.
**Describe the solution you'd like**
- Support formatter settings for the x-axis time information when configuring filters; support for languages such as Korean also needs to be considered (e.g. 4월 1일(목), i.e. "April 1 (Thu)")
- It should be possible to hide information such as the year and month and display only the time
- When the time information on the x-axis is hidden because of the minimap window being resized, there should be a way to show the time information in a tooltip when hovering over the label
Answers:
username_1: We will discuss the format for time values with the common UX team, reflect it in the spec, and share it with you.
username_1: @username_0 @username_2 @username_3 @minjung-cho
- When the time information on the x-axis is hidden because of the minimap window being resized, there should be a way to show the time information in a tooltip when hovering over the label
The above is not only about the time format; it looks like a common issue that whenever a chart's axis-label text is ellipsized, the information should be shown in a mouse-over tooltip.
I suggest we scope this issue to adding the time-format feature and handle the item above separately. Please share your opinions.
username_2: @username_1 Yes, please proceed with them separately.
@eltriny As I recall, the axis tick label display was a limitation of the chart library. Was this improved in an ECharts update?
username_1: @username_2 Sorry. :( After some more internal planning discussion, we reached the conclusion that this cannot be solved at the dashboard level. In the end it looks like we have to add the date format on the chart side. Please revert the label changes.
@username_3
As you may remember, at the last F2F meeting we decided that the date format would be configured once at the dashboard level and applied to every chart in that dashboard...
But after talking further with @minjung-cho today, we concluded that if the granularity differs between charts, configuring this in one place would be far too inefficient UX-wise, so it seems we have to go back to per-chart settings.
How about renaming the current 'Number format' to 'Format' and putting both the number format and the date format inside the format settings? Please share your opinions.
username_3: @username_1 Yes, having a format setting as you described sounds good. Please go ahead!
username_2: @metatron-app/design I'd like the time-format settings to be done in 3.3. Please check the related planning schedule. |
ucsd-progsys/liquidhaskell | 199975418 | Title: Can't promote pattern-matched functions with default options
Question:
username_0: ```haskell
{-@ reflect fib @-}
{-@ fib :: Nat -> Nat @-}
fib :: Int -> Int
fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
```
yields
```
<no location info>: Error: Uh oh.
/home/username_0/ucsd/cse202/assignment1.hs:5:13: Error: Cannot promote Haskell function fib to logic
altToLg on Default
```
but
```haskell
fib n = if (n == 0) || (n == 1) then 1 else fib (n-1) + fib (n-2)
```
works fine.
Answers:
username_1: We will almost surely need to support the former and not just the latter version if we are to get the triggers and axioms to work idiot
username_0: Yes, I agree?
username_1: That "idiot" is Siri being silly... wretched voice recognition ...
username_1: My apologies! :)
username_2: Hehe...
But I am not sure I am following how the Haskell code related with the triggering.
Even if we supported the first it would get translated in the logic exactly as the second.
username_1: It means we should NOT be translating it into the second if we want better
control of the axioms... of course this is a hunch at this point.
username_2: But what should we translate it to?
There is no pattern matching in the logic.
username_1: See the long email I sent about how I think this needs to work...
|
davidgohel/officer | 270279328 | Title: How to change the sequence of existing slides
Question:
username_0: For example, I have 4 slides: 1, 2, 3, 4. How could I change the order to 1, 3, 4, 2?
Likewise, how could I add one slide at the very beginning? For example, a new slide 0, followed by 1, 2, 3, 4?
Answers:
username_1: This is not implemented, that's why it is not documented ;)
Status: Issue closed
|
sarsbiker/betterparents | 281942965 | Title: How to tell the story
Question:
username_0: Stories we could possibly tell
Answers:
username_0: Story one:
- Event 1: When A was young, he suffered verbal abuse at school, which made him insecure and lacking in confidence, and he occasionally found himself swearing at others without realizing it. When his child was about to start kindergarten, he heard from other parents that some teachers and children at the kindergarten swear at kids. Afraid that his child would be affected and become withdrawn and depressed like himself, he tried many things. Unexpectedly, after starting school the child really did begin to change.
- Event 2: He tried playing games and using a wristband; the wristband helps the child notice problems, and the games let the child figure out how to solve them.
- Event 3: After a while, the child knows how to respond when encountering verbal abuse, can regulate their emotions, and influences others with a positive attitude. They grow up happily.
Story two:
- Event 1: A happy little hedgehog goes off to school. But after a year, the little hedgehog's personality has changed: it has become quiet and somewhat timid, and even some of its spines have fallen out.
- Event 2: Dad teaches the little hedgehog what to do.
- Event 3: The little hedgehog learns how to face other people's spines: using its own spines to block theirs, never letting itself get hurt, never hurting anyone else, and never losing its own spines.
Story three:
- Event 1: A child's personality changes; an adult yells at them and gets yelled back at; the adult realizes there is a problem and reflects on the past.
- Event 2: The adult feels they were wrong, apologizes to the child, works out what the problem is, and tries to change.
- Event 3: Through XX they both change, both realize the problem, and get along happily. |
ant-design/ant-design-pro | 962357068 | Title: [Warning] The initialValues only take effect when the form is initialized, if you need to load asynchronously recommended request, or the initialValues?<Form/>:null ⚠️
Question:
username_0: ```
控制台就报以上警告。
### © 版本信息
- Ant Design Pro 版本: [e.g. 5.0.0]
- umi 版本 [3.4.0]
- 浏览器环境 [Chrome]
- 开发环境 [e.g. mac OS]
- `@ant-design/pro-form: 1.18.3`
Answers:
username_1: fastRefresh works by constantly updating props, so this warning can be ignored.
Also, the warning only appears once, but it greatly reduces our support burden; otherwise we would get issues about this every week.
Status: Issue closed
username_2: I'm also seeing this problem, but I'm not using a form at all... Can I just ignore it?
username_1: Yep, it can be ignored. I've tweaked the check.
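For reference, here is a minimal sketch of the pattern the warning itself recommends: render the form only once asynchronously loaded initial values exist. The `/api/defaults` endpoint and the surrounding component are illustrative assumptions, not Pro Form's actual API:
```jsx
import React, { useEffect, useState } from 'react';
import ProForm from '@ant-design/pro-form';

export default function Demo() {
  const [initialValues, setInitialValues] = useState();

  useEffect(() => {
    // hypothetical async loader for the form's initial values
    fetch('/api/defaults')
      .then((res) => res.json())
      .then(setInitialValues);
  }, []);

  // initialValues is only read when the form first mounts, so render
  // the form only after the values have arrived
  return initialValues ? <ProForm initialValues={initialValues} /> : null;
}
```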
|
sendgrid/docs | 387671297 | Title: Support page not working
Question:
username_0: Hi,
the support page is not working. I'm only getting a disabled Continue button.
Eg.
https://support.sendgrid.com/hc/en-us/requests/new?ticket_form_id=43796
Brgs
<NAME>
Statnett SF
Answers:
username_0: Support is working.
Ticket id 1717612 is registered.
Status: Issue closed
|
dotnet/runtime | 984363898 | Title: Intermittent hang/deadlock in .net core 5.0.400 while debugging
Question:
username_0: ### Description
This is a duplicate of https://github.com/dotnet/runtime/issues/42375 as far as symptoms and behavior go but I am still encountering the exact same symptoms on 5.0.400. I can reproduce this on Mac OS and Windows.
When debugging, our dev team encounters sporadic hangs (about 30% of the time). There does not seem to be any specific reproducible pattern of when in the program execution the hang occurs. When it happens, the diagnostics logger stops updating
and I cannot break or terminate the program (screenshots omitted).
If I try to `dotnet trace collect` on a hung process, `dotnet trace` hangs as well.
I have tried taking and analyzing a memory dump using wpr as described [here](https://stackoverflow.com/questions/68746658/net-core-application-cpu-hang#comment121755761_68746658), but I have not been able to find anything meaningful.
### Configuration
Reproduced on 5.0.400 on Mac OS and Windows. In Visual Studio and Rider IDE.
### Regression?
This issue seems to have started when we upgraded from netcoreapp3.1 to net5.0
### Other information
The amount of logging and amount of asynchronous operations seems to make the issue more/less prevalent. For example, turning down the log level makes the issue happen about 20% of the time instead of 30% of the time.
Answers:
username_1: Is there any way you can share a dump from when everything seems stuck (both the debugger and the target process), or an ETW trace with CPU stacks? If dumps, we'll likely need one dump of MSVSMON and one of your app.
username_0: @username_1 - sure - how would you like me to deliver it to you? Do you have a preference of Mac vs Windows, Rider vs Visual Studio?
username_1: You can open a feedback ticket on https://developercommunity.visualstudio.com/ and attach them there if possible (ensures proper deletion of customer data) and post it here. And Windows VS would probably be the easiest for us to examine.
username_0: @username_1 - all set - https://developercommunity.visualstudio.com/t/intermittent-hangdeadlock-in-net-core-50400-while/1519798
username_0: Any update?
username_0: @username_1 bump...
username_1: Sorry @username_0, things are building up around the .NET 6 release and with the long weekend I haven't had a chance to get to this yet. I'll try my best to sink some time into the analysis this week and come back to you.
username_0: thanks @username_1 - this is a real painful experience for us
username_0: @username_1 any luck? this is making using .NET 5 untenable.
username_1: Hey @username_0. I've started taking a look; it took most of today. I still don't have a good sense of why it's locked. I see that there were three threads sending notifications of a thread starting up; a thread sends the event over to the debugger side, and then all threads are stopped, waiting for the debugger to notify that it's OK to continue. The runtime never got the event to continue, so not much to do. I need to think about the debugger side, and I am still thinking about how to look at it. Given that it repros both on macOS and Windows, and in Rider and VS, I highly doubt that it's a bug in those layers (if they are all the same bug). I just need to think of a way to see what's happening without having a repro.
username_0: Thanks @username_1
is there anything I can do to help? Or anything I can do to narrow down the cause at runtime when debugging?
we really would love to get this resolved.
username_0: @username_1 - any thoughts?
username_1: Sorry @username_0; I took a look the other day, but since then I haven't been able to look much, as .NET 6 is about to ship and it has required my full attention. Without a repro I expect progress on this one to be slow. Not sure how big your project is, but does it only happen on one project, or on anything you try? The only thing I can think of that could help without a repro is logging.
username_0: This is the core software for our product, so it puts us in an untenable situation.
This happens randomly about 30% to 50% of the time we debug.
Should we downgrade to a version of .NET that works? It would be a shame to determine that .NET 5.0 isn’t suitable for use.
username_1: I understand it's painful, and I'm sorry it got there. 30-50% is definitely REALLY high. I am surprised we haven't gotten any other reports of something like this. Looping @dotnet/dotnet-diag to see if someone has more time to check this one. The problem is most threads are in `Thread::WaitSuspendEventsHelper` waiting on `m_DebugSuspendEvent`. The only part where we set it is ReleaseFromSuspension called from `SysResumeFromDebug`. The only way to end up there is `DB_IPCE_CONTINUE` or `DB_IPCE_DETACH_FROM_PROCESS` getting received in the `DebuggerRCThread`. That thread didn't report any messages not getting passed down.
username_0: We really appreciate any help you can provide.
I am surprised that this isn’t more widespread as well. I have to think it has to do with the highly asynchronous nature of our software. It processes many hundreds of http calls to remote APIs in parallel and tracks thousands of TaskCompletionSources to orchestrate the concurrency. The debugger hang seems to happen when the process is spawning off all these tasks and then WhenAll’ing on them
I’m not sure if that helps shed any light on where we can look.
username_2: @username_1 do we leave the debugger logging in release mode? It seems like if we had logs of the left and right sides it would be helpful here
username_1: I'd say that's the only thing that could really help. I haven't gone fishing for which stress log statements could help us, or whether a private build will be needed.
username_2: I took a quick look, we have more stress log statements than I expected, they might be enough.
@username_0 could you collect more data? We're probably going to have to iterate a bit on what data exactly to collect, usually we like to have a local repro we can debug ourselves. The logging should be enough to get to the bottom of this, but we may have to go back and forth and potentially even ask you to run a privately built runtime before we have the right set of data.
For right now, we should start with the StressLog data, because it doesn't require any private builds. StressLog is an in memory circular buffer we use to collect additional data for hard to debug scenarios. Setting the following environment variables before startup on the debuggee process (testhost.exe) will tell it to create as large of a log as possible and only collect debugger events:
```
set COMPlus_StressLog=1
set COMPlus_LogFacility=8200
set COMPlus_LogLevel=10
set COMPlus_StressLogSize=80000
set COMPlus_TotalStressLogSize=8000000
```
Then with these all set, run your repro and once it hangs collect another dump and send it to us. We can extract the logging with SOS, our debugger extension. Depending on how curious you are, you are free to look at the logs yourself by loading it in [WinDBG](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/windbg-install-preview) and running the !dumplog command - it will create a StressLog.txt in the directory with the StressLog contents.
There should be lots of CORDB statements:
```
63bc 13.513426000 : `CORDB` D::SendIPCEvent DB_IPCE_SYNC_COMPLETE to outofproc appD 0x0,
63bc 13.513425400 : `CORDB` GetIPCEventSendBuffer called in SendSyncCompleteIPCEvent
63bc 13.513425100 : `CORDB` D::SSCIPCE: sync complete.
63bc 13.513424900 : `CORDB` D::SC: suspension complete
63bc 13.513424700 : `CORDB` DRCT::ML:: wait set empty after sweep.
1954 13.513412400 : `CORDB` D::SSCIPCE: Calling IsRCThreadReady()
1954 13.513198100 : `CORDB` D::TART: Trapping all Runtime threads.
```
username_0: Thanks - I checked out the dmp file in windbg but still don't see any red flags that I can recognize. I'd be very interested in learning from what you find.
Here is the new ticket link: https://developercommunity.visualstudio.com/t/intermittent-hangdeadlock-in-net-core-50400-while-1/1533767
username_2: @username_0 I don't see the dump attached to the new issue. It's entirely possible I'm doing something wrong, can you verify if it succeeded in uploading so I can figure out if the problem is on my end?
username_0: ...I see it's not there as well....seems like something is broken with the site.
I tried to upload again, this time by adding a comment with an attachment. Please let me know if you see it now.
Thanks.
username_2: Thanks, I can see the file is uploaded now. We will take a look, I'm not sure exactly when but hopefully early this week
username_0: Thanks very much @username_2
username_1: Just wanted to let you know @username_0 I haven't forgotten about this. I am looking at your new dump - I don't have the debugger side of the log, but the in-process control is saying what I suspected - total time in your app reported is ~410 sec, out of which 300 seconds is failure to suspend the runtime. The thing is - I am trying to see why we can't suspend the runtime. There are threads, as I said last time, that are just trapped trying to pulse the GC. To do so they are waiting for the debugger event to be set - but that only gets toggled in `ReleaseFromSuspension`, that is - we are waiting for the debugger to tell us to continue essentially. I'll continue taking a look.
username_0: Thanks very much @username_1 - let me know if I can do anything to assist.
username_0: @username_1 - do you want me to capture msvsmon as well?
username_1: @username_0 that would be really helpful. To do this - set the environment that was mentioned before like:
```
set COMPlus_StressLog=1
set COMPlus_RSStressLog=1
set COMPlus_LogFacility=8200
set COMPlus_LogLevel=10
set COMPlus_StressLogSize=80000
set COMPlus_TotalStressLogSize=8000000
```
before opening vs from the commandline (probably something like `devenv <solution>`); then you can capture the dump of MSVSMON and your test process.
Another thing - this process seems to be in a weird state. Is it possible for you - in case it's needed - to use a special build of the runtime with more logging?
And thanks for working with us on this.
username_0: Sure - and thank you for your help @username_1
Just uploaded a new dump of both testhost.exe and msvsmon.exe
We'd be happy to reproduce on a special build of the runtime if it will help.
username_1: Hey @username_0. Did you upload them to https://developercommunity.visualstudio.com/t/intermittent-hangdeadlock-in-net-core-50400-while-1/1533767 ? I can't see any recent uploads there.
username_0: @username_1 - yes I uploaded a file "dumps.zip" with both dmp files. Do you not see it? I will upload them again as a new comment now...
username_0: Any update @username_1 ?
Thanks.
username_1: I am looking at it. Missed the ping. I still don't have an answer, but if I can find something I'll ping this thread back. Sorry it's taking so long.
username_1: Sorry, it's meant to be 0
username_0: 0 or false? It looks like you updated the example to be false
username_1: Yeah, false. I made a mistake. The environment variable is 0, the property is false.
username_1: Thanks for getting back so quickly @username_0. Sorry this has dragged on for so long - I know this must be painful for your team. I'm surprised the suggested workaround didn't work. From your dumps+logs I was able to get a lot more information, which you can read later in this comment in case you are interested. I thought that the interaction between the debugger and tiered compilation was the root cause of your issue. I still think we can provide a good workaround while we get the fix into a servicing release. Meanwhile, would you mind collecting a dump with the suggested workaround applied once you reach the unresponsive state? I can take a look at it over the next couple of days to see if there's another edge case I am not accounting for, or if it's a different bug, so that we can get this solved for you ASAP.
### Investigation
The debugger received an event and is waiting for the runtime to stop all threads. The runtime side (your app) is waiting for all threads to reach suspension. All of your application dumps show the same state: all threads are in what we call preemptive mode - where managed code can't run - which is one of the requirements for suspension. There's another requirement, though, that one of your threads doesn't meet in all three cases: `InForbidSuspendForDebuggerRegion` is true for this thread, and that's blocking the whole thing from going forward. The thread that entered this state did so in `StopAndDeleteAllCallCountingStubs`:
https://github.com/dotnet/runtime/blob/6d98f6dcc6e924a265f591f2e4d69f2e84d04807/src/coreclr/src/vm/callcounting.cpp#L955-L960
I suspect when it entered the holder it didn't have `TS_DebugSuspendPending` in the thread state, so it acquired the lock and set the forbid region on the thread for the debugger. Then it tries to suspend the runtime. At this point the debugger had likely already started enumerating threads and marked them for suspension (in the dump, I can see the thread marked for debugger suspension). It's unable to succeed in `SweepThreadsForDebug` and thus never sends the sync complete event. The thing is, `InForbidSuspendForDebuggerRegion` will never be set to false until we exit `StopAndDeleteAllCallCountingStubs`. That will never succeed because the debugger already started the suspension.
@kouvel, do you expect the work around to have worked? I'll start an email thread to discuss possible solutions to this beyond just a work around.
username_1: @username_0, also, if it's possible, Can you please setup the logging to
```
set COMPlus_StressLog=1
set COMPlus_RSStressLog=1
set COMPlus_LogFacility=8240
set COMPlus_LogLevel=10
set COMPlus_StressLogSize=80000
set COMPlus_TotalStressLogSize=8000000
```
in case you have the time to collect this? We are looking at the definitive solution, and there's some events there that might help us see why is it happening in your case in terms of timing. Appreciate all the help here to improve this. I really hope we can make this right so that you can get the productivity back without needing to roll back and so that you have an upgrade story going forward.
username_1: @username_0 We continued looking at this and have a new theory of what's causing the deadlock. I also saw that the suggested workaround, although properly applied, didn't help in the way I thought it would; I'm sorry about that.
username_0: All good @username_1
if you have any other workarounds you can suggest we would greatly appreciate it
username_3: You're not crazy, I've noticed the same pattern. Sent some new dumps and I think there's some progress being made on this issue, let's hope for a resolution soon.
username_0: hi @username_1 - any update on a workaround/resolution?
username_0: Looks like this is affecting more people: https://github.com/dotnet/runtime/issues/38736#issuecomment-968563625
Would love to get some kind of workaround or something. Our team is losing hours of productivity a week.
username_1: (Sorry - this seems to have gotten stuck in my outbox limbo. That's what I get for trying to reply to GitHub from my email.)
Hey @username_0. I think we might have an idea of what is causing this issue. While it might take a while for me to make sure I am on the right trail, there's something that might help as a workaround, and it will definitely be easier for you to confirm whether it helps than anything I can do on my side.
I was talking to @username_2 and he realized that my suggestion to disable tiering was not complete. There's a feature in the profiler that uses that same mechanism, which I believe is a player in the current issue you see. So in addition to needing `DOTNET_TieredCompilation=0`/`COMPLUS_TieredCompilation=0`, you should set `COMPlus_ProfApi_RejitOnAttach=0` and see if it helps.
username_4: @username_1 I had already tried this (setting both `COMPlus_TieredCompilation` and `COMPlus_ProfApi_RejitOnAttach` to `0` in system environment variables) after seeing [this](https://github.com/dotnet/runtime/issues/42375#issuecomment-696261366) comment. But it did not solve the issue. :(
username_4: Ok! I was fumbling around and I think I fixed the issue. But I cannot confirm *what exactly* fixed it. Maybe someone else can try what I did and confirm if it worked. Note that I am using Visual Studio 2022 17.0.1
After the fix:
* I can successfully hit breakpoints in the code
* I was able to `Stop` debugging without VS freezing up
* I let VS run in debug mode for 30 mins, and it did not freeze up
What I did:
* Add the two environment variables like @username_1 mentioned above.
* FWIW, this step might be optional and might break your dev setup, so tread carefully. I kept only the latest revision of each major version. I simply deleted the folders from Windows Explorer. Thankfully nothing broke. This is the output of `dotnet --info` after the cleanup. Sorry, I don't have a "before".
```
.NET SDK (reflecting any global.json):
Version: 6.0.100
Commit: 9e8b04bbff
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22000
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\6.0.100\
Host (useful for support):
Version: 6.0.0
Commit: <PASSWORD>
.NET SDKs installed:
3.1.415 [C:\Program Files\dotnet\sdk]
5.0.403 [C:\Program Files\dotnet\sdk]
6.0.100 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 3.1.21 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 5.0.12 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 6.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.21 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 5.0.12 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 6.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.21 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 5.0.12 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 6.0.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
To install additional .NET runtimes or SDKs:
https://aka.ms/dotnet-download
```
* In Visual Studio, disable resource usage limits from the Diagnostic Tools Property Pages. Steps:
* As soon as debugging starts, from the `Diagnostic Tools` tab, click `Select Tools` -> `Settings`
* Then uncheck `Enable resource usage limits`. `Apply` -> `Ok`.
Tagging the other open issue here for [reference](https://github.com/dotnet/runtime/issues/38736)
username_0: @username_1 - is this a supported fix?
username_5: Hi @username_0, I am a coworker of @username_1's. He has been out on vacation for the Christmas holidays, but now that I am back from my own holiday vacation I'm going to fill in for him and help get this moving. I assisted with some of the earlier investigation, so I think I am already mostly up to speed on this. My understanding so far is that:
- You sent us four sets of dumps (thanks!). The first 3 all had tiered compilation enabled and then the last one from Oct 22nd had tiered compilation disabled.
- When we investigated the dumps with tiered compilation enabled we saw each of them had an identical deadlock involving a thread doing call counting for tiered compilation inside a region of code where the debugger is forbidden from stopping the app. This is a bug in the runtime and I think we understand this part of the problem.
Dump 1 (thread 17):
```
02 0000006f`0467f570 00007ffc`dffba656 coreclr!ThreadSuspend::SuspendEE+0x228 [D:\workspace\_work\1\s\src\coreclr\src\vm\threadsuspend.cpp @ 6097]
03 0000006f`0467f710 00007ffc`dfe5f3d9 coreclr!CallCountingManager::StopAndDeleteAllCallCountingStubs+0xa9182 [D:\workspace\_work\1\s\src\coreclr\src\vm\callcounting.cpp @ 960]
```
Dump 2 (thread 30):
```
07 00000006`6b49f600 00007ffc`cd57a656 coreclr!ThreadSuspend::SuspendEE+0x449 [D:\workspace\_work\1\s\src\coreclr\src\vm\threadsuspend.cpp @ 6236]
08 00000006`6b49f7a0 00007ffc`cd41f3d9 coreclr!CallCountingManager::StopAndDeleteAllCallCountingStubs+0xa9182 [D:\workspace\_work\1\s\src\coreclr\src\vm\callcounting.cpp @ 960]
```
Dump 3(thread 17):
```
03 000000ed`b157f670 00007ffe`57baa656 coreclr!ThreadSuspend::SuspendEE+0x283 [D:\workspace\_work\1\s\src\coreclr\src\vm\threadsuspend.cpp @ 6144]
04 000000ed`b157f810 00007ffe`57a4f3d9 coreclr!CallCountingManager::StopAndDeleteAllCallCountingStubs+0xa9182 [D:\workspace\_work\1\s\src\coreclr\src\vm\callcounting.cpp @ 960]
```
- Then you disabled tiered compilation in the 4th dump and the problem still reproduced. When we investigated the dump that time we saw the situation was different. This means disabling tiered compilation did have an effect, it just wasn't sufficient because there were additional variations of the problem. We also have an understanding of this 2nd variation now and it too is a runtime bug.
```
Dump 4, thread 27
06 00000004`f1778030 00007ffc`85992b0a coreclr!CrstBase::Enter+0x5a [D:\workspace\_work\1\s\src\coreclr\src\vm\crst.cpp @ 330]
07 (Inline Function) --------`-------- coreclr!CrstBase::AcquireLock+0x5 [D:\workspace\_work\1\s\src\coreclr\src\vm\crst.h @ 187]
08 (Inline Function) --------`-------- coreclr!CrstBase::CrstAndForbidSuspendForDebuggerHolder::{ctor}+0x5db [D:\workspace\_work\1\s\src\coreclr\src\vm\crst.cpp @ 819]
09 (Inline Function) --------`-------- coreclr!MethodDescBackpatchInfoTracker::ConditionalLockHolderForGCCoop::{ctor}+0x5db [D:\workspace\_work\1\s\src\coreclr\src\vm\methoddescbackpatchinfo.h @ 134]
0a 00000004`f1778060 00007ffc`85991f6c coreclr!CodeVersionManager::PublishVersionableCodeIfNecessary+0x8ba [D:\workspace\_work\1\s\src\coreclr\src\vm\codeversion.cpp @ 1762]
```
- Since we underestimated the scope of the problem the first time we tried to identify all possible codepaths that enter the problematic `ForbidSuspendForDebugger` region to completely eliminate it. We think disabling tiered compilation and rejit together should have been sufficient to accomplish that, but we have reports from you and @username_4 that it still didn't work and we don't yet know why. It is possible our analysis missed something, or that there is yet another variation of the problem we are still unaware of, or there was some mistake in how we had you set up the most recent experiment. I'm sorry to keep asking for dumps but if you can capture one where the app is deadlocked and both tiered compilation and RejitOnAttach are disabled that will help us resolve this part of the puzzle.
In the meantime I am working on a fix for the portions of the bug we do understand from the dumps you already provided. However given that disabling both tiered compilation and rejit didn't work suggests our understanding of the issue is incomplete and anything I do to fix the part we do understand isn't going to be sufficient to fully solve this for you.
Next steps:
- For me I'll be working on a code fix in the runtime for the portion of the bug we currently understand
- For you my request is to capture a dump with the app deadlocked after setting the environment variables that disable both tiered compilation and RejitOnAttach. As soon as you can upload that I'll investigate it and let you know what it shows. Apologies again for the trouble I imagine this is causing. Your help solving it by collecting these dumps is really appreciated. Thanks!
username_0: Sorry for the delayed reply @username_5. Ultimately, we decided to move to Rider on MacOS for dev along with AVD VMs where Windows is strictly required. While expensive, the cost of the hardware is nominal in comparison to the productivity lost or the effort in downgrading to an earlier version of dotnet.
I still would like to help get you the information you need, but it will take some time to get a new dev environment set up where I can reproduce.
username_5: No worries on the timing at all @username_0 and sorry that it came to a new hardware purchase just to avoid this issue : ( I certainly appreciate any time you choose to spend helping diagnose the issue whenever that is. |
servicemesher/istio-official-translation | 524768400 | Title: /docs/tasks/security/citadel-config/plugin-ca-cert/index.md
Question:
username_0: Source File: [/docs/tasks/security/citadel-config/plugin-ca-cert/index.md](https://github.com/istio/istio.io/tree/master/content/en/docs/tasks/security/citadel-config/plugin-ca-cert/index.md)
Answers:
username_1: /accept
username_1: duplicate issue (https://github.com/servicemesher/istio-official-translation/issues/1388)
Status: Issue closed
|
darkreader/darkreader | 779432130 | Title: [Broken Website] icloud.com/mail
Question:
username_0: **Website**
[icloud.com/mail](https://icloud.com/mail) after signing in of course
**Problem**
Text of mail messages are not inverted
**Steps to Reproduce**
1. Sign into iCloud.com
2. Visit icloud.com/mail
3. Select a message from your inbox
4. Observe that the text of the message is not inverted
**Expected behavior**
The text of the mail message should be inverted
**Actual behavior**
It is not
**Screenshots**
<img width="50%" src="https://user-images.githubusercontent.com/8371943/103687309-b0165480-4f55-11eb-89b4-f86cf631fece.png">
**System info:**
- OS: macOS 10.15
- Browser: Chrome 87
- Darkreader Version: 4.9.26
Answers:
username_1: Hey @username_0!
Unfortunately I cannot get into iCloud Mail. However, I know that GitHub notifications use `raw` messages. My best guess is that the color currently applied to that text is `initial`, which falls back to a dark-ish color. Due to the nature of `initial` we cannot apply a good style for it. So:
1. Does this happen with other mails (especially ones which don't use `raw` messages)?
2. Can you check in the devtools what the currently applied styles are for those texts?
Regards,
username_1
username_0: I think the issue is that it loads the message in an `iframe`; the color is not `initial`.
username_1: IFrames shouldn't be a problem for Dark Reader, and the browser should inject into them properly. Where does the color of the messages inherit from?
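For anyone checking, a quick devtools console snippet for that question (run it in the iframe's context, with the message text element selected so `$0` points at it):
```js
// Inspect the computed color of the selected element and its parent
// to see where the text color is inherited from.
console.log(getComputedStyle($0).color);
console.log(getComputedStyle($0.parentElement).color);
```
|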
yeoman/environment | 892230316 | Title: Can't save promptValues in the project .yo-rc.json without disabling
Question:
username_0: ## Description
When using `yo@^4`, the `store: true` setting prompts no longer saves the answer in the local `.yo-rc.json` like it did with `yo@^3`.
As shown below, that is because `skipLocalCache` is defaulted to `true`. Was this change intentional? Should it only disable the local cache in experimental mode?
https://github.com/yeoman/environment/blob/550757187894de6038546e1d93e2a6ac2be51135/lib/environment.js#L344-L349
### Recreating
#### Generator Code:
``` js
const Generator = require('yeoman-generator');
module.exports = class extends Generator {
async prompting() {
await super.prompt([{
name: 'projectName',
message: 'What is the name of your project?',
store: true
}]);
}
async writing() {
this.config.set('some config', 'some value')
}
};
```
#### Yeoman 3 .yo-rc.json:
`npm i -g [email protected] && yo ./generators/app`
``` json
{
"@deere/generator-node-server": {
"some config": "some value",
"promptValues": {
"projectName": "something"
}
}
}
```
#### Yeoman 4 .yo-rc.json:
`npm i -g [email protected] && yo ./generators/app`
``` json
{
"@deere/generator-node-server": {
"some config": "some value"
}
}
```
#### Workaround Generator Code:
If I add `this.options.skipLocalCache = false;` to the generator, `[email protected]` and `[email protected]` both save the `promptValues`.
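For context, a minimal sketch of where that workaround line might live, assuming a standard `yeoman-generator` subclass (the constructor placement is an assumption, not from the original report):
``` js
const Generator = require('yeoman-generator');

module.exports = class extends Generator {
  constructor(args, opts) {
    super(args, opts);
    // Restore the Yeoman 3 behavior of also caching prompt answers
    // in the project-local .yo-rc.json.
    this.options.skipLocalCache = false;
  }
};
```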
### Change
It looks like the root cause is that the default value for `skipLocalCache` changed to `true` even when not running in `experimental` mode [as part of this change](https://github.com/yeoman/environment/pull/244/files#diff-24984142d58a41889c987b60103bcd93653f3551f71f6cdf22b5762e885c9cd6R264-L268).
I don't see anything in the related commit, PR, or Release Notes.
Answers:
username_1: This was intentional, but we forgot about the consequences for old generators.
Reverting will be another breaking change.
`store: true` is useful in some use cases where the answer won't change even using a different project.
Like author, preferred license, email.
Project specific should use:
```
await super.prompt([{
name: 'projectName',
message: 'What is the name of your project?',
[askAnswered: true] // to always show the prompt
}], this.config);
this.config.get('projectName');
```
username_0: I have existing projects that used `store: true` with Yeoman 3.x and I want to use those answers when running an updated generator.
If I use `store: true`, the generator will use the `promptValues` from the local `.yo-rc.json` but will not update them if the user changes the existing value.
To get the desired behavior, I can modify your solution to move the `promptValues` into `this.config` and clean them up so that there aren't duplicate values in the project `.yo-rc.json`:
``` js
const Generator = require('yeoman-generator');
module.exports = class extends Generator {
async prompting() {
this.config.set(this.config.get('promptValues'));
await super.prompt([{
name: 'projectName',
message: 'What is the name of your project?',
askAnswered: true
}], this.config);
}
async writing() {
const projectName = this.config.get('projectName');
this.log(`The project name is: ${projectName}`)
this.config.delete('promptValues');
}
};
```
In general, I also want to save the answers in the `.yo-rc-global.json` because the answers are usually the same across projects but may be custom for a specific project. If I add `store: true` to the above code, it will use the value in `.yo-rc-global.json` **over** the value in `this.config`.
Basically, I want exactly the Yeoman 3 behavior to default to the local `.yo-rc.json` for existing projects, fall back to `.yo-rc-global.json` for new projects, and keep the values in sync in both. The only way I see to do that is to explicitly set `skipLocalCache = false`. Am I missing something? Is there a downside?
username_1: Fixed in 5.3.0 https://github.com/yeoman/generator/commit/c1c847dbed005e40ded29ea46d1efd4dbac5cc51 |
agnoster/base32-js | 425547359 | Title: NOT CROCKFORD COMPATIBLE
Question:
username_0: Please: why did you use U instead of S?
The Crockford alphabet is: `0123456789abcdefghjkmnpqrstvwxyz`
The one you use is: `0123456789abcdefghjkmnpqrtuvwxyz`
Note that S is not present in your alphabet while U is.
This is not a correct implementation of Crockford Base32.
Sources: https://en.wikipedia.org/wiki/Base32 and https://www.crockford.com/base32.html and every other package in every other language (e.g. Python for my backend...)
I've lost 4 hours figuring out what happened...
Answers:
username_1: It's also emitting ones "1" and zeros "0".
username_0: This is normal.
Here is the official Crockford Base32 specification: https://www.crockford.com/base32.html
0, O, and o can all be decoded as the same value 0x00, but the value 0x00 is always encoded as 0 (zero).
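For illustration, a minimal sketch of that decode-side normalization under the standard Crockford rules (this is not this library's code):
```js
// Crockford Base32 folds look-alike characters together on decode:
// O/o decode as 0 and I/i/L/l decode as 1, while an encoder always
// emits the canonical digits 0 and 1.
function normalizeCrockford(input) {
  return input
    .toUpperCase()
    .replace(/O/g, '0')
    .replace(/[IL]/g, '1');
}

console.log(normalizeCrockford('oLi0')); // "0110"
```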
username_2: I don't know if this is the same issue, but in case it is, here's another example where this library disagrees with Linux tools and the competing top result (hi-base32?)
```bash
echo '6<KEY>' | xxd -r -p | base32
# prints NT2HP5L5Z65KMETV2M3MHORPY7OSR3HODCSPEDAAI4ZIPDL5
```
and then this unexpected result:
```js
#!/usr/bin/env node
var base32 = require('base32');
var echo = '6cf477f57dcf<KEY>ecee18a4f20c004732878d7d'
var xxdStep = Buffer.from(echo, 'hex');
//xxdStep = xxdStep.toString();
var b32 = base32.encode(xxdStep);
console.log(b32);
// prints dku7fxbxtyxac4knucvc7ehfrzejhv7e32jf43008wt8f3bx
```
I mean, granted, this library has a disclaimer on it, and I'm grateful for that, or else I would have spent even more time figuring this out |
systemjs/systemjs | 222354298 | Title: plugin hbs stopped working for 0.20
Question:
username_0: Getting:
```
TypeError: Cannot read property 'meta' of undefined
    at addDefaultExtension (C:\test\node_modules\systemjs\src\resolve.js:502:27)
    at applyPackageConfigSync (C:\test\node_modules\systemjs\src\resolve.js:551:25)
    at packageResolveSync (C:\test\node_modules\systemjs\src\resolve.js:244:10)
    at doMapSync (C:\test\node_modules\systemjs\src\resolve.js:574:29)
    at SystemJSLoader$1.packageResolveSync (C:\test\node_modules\systemjs\src\resolve.js:221:20)
    at SystemJSLoader$1.normalizeSync (C:\test\node_modules\systemjs\src\resolve.js:178:29)
    at SystemJSLoader$1.translate (file:///C:/test/src/vendor/github/davis/[email protected]/hbs.js!transpiled:6:44)
    at C:\test\node_modules\systemjs\src\instantiate.js:210:41
```
Answers:
username_1: Odd it seems to be working ok for me. Perhaps post an issue to the Handlebars project. Also it would help to know your replication, because I just replicated it my side and didn't get any issues.
username_0: I'm using es2015 module syntax:
`import "test.hbs"`
and:
System.import("test.hbs")
Not the ! syntax.
And configured jspm config (for the transpiled bundled version):
```
meta: {
  "*.hbs": {
    "defaultExtension": false
  }
}
```
and configured jspm config dev (for transpiling on the fly during development):
```
meta: {
  "*.hbs": {
    "loader": "hbs"
  }
}
```
username_2: Getting the same thing:
```
system.js:4 Uncaught (in promise) Error: Cannot read property 'meta' of undefined
Instantiating https://localhost:8443/assets/lib/appLayoutView.hbs!https://localhost:8443/assets/jspm_packages/github/davis/[email protected]/hbs.js
Loading https://localhost:8443/assets/lib/appLayoutView.js
Loading https://localhost:8443/assets/lib/app.js
Loading https://localhost:8443/assets/lib/main.js
Loading lib/main.js
at oe (system.js:4)
at ie (system.js:4)
at Q (system.js:4)
at se (system.js:4)
at Ve.Q (system.js:4)
at Ve.X (system.js:4)
at Ve.translate (hbs.js:7)
at system.js:4
at <anonymous>
```
username_3: Same issue here; this forced me to return to system.js `0.19.41`.
`0.20.14` produces the above error when using the [hbs plugin](https://github.com/davis/plugin-hbs).
Reproduction steps:
1. use SystemJS version `0.20.14`.
2. Install the hbs plugin via jspm: `jspm install hbs`
3. Configure `jspm.config.js` to support loading of `.hbs` files without `!` while doing imports:
```
SystemJS.config({
packages: {
"app": {
"meta": {
"*.hbs": {
"loader": "hbs"
}
}
}
}
});
```
4. Use the following code imported via SystemJS:
```
import render from './template.hbs';
console.log(render({title: "Hello World"}));
```
5. Contents of `template.hbs`
```
<p>{{title}}</p>
```
By the looks of it, this might be an issue on the plugin's side rather than SystemJS's, but it appears there have been changes between 0.19 and 0.20 that break this plugin. Admittedly, I didn't dig too much, as I'm not a plugin developer, but a guy who's got stuff to work on. If I get the time, I'll gladly report what breaks and why, possibly including a fix too. In the meantime, if anyone manages to sort this one out - that'd be awesome.
Cheers guys.
username_0: Hi Guy,
I've managed to find the bug.
I'll be submitting a pull shortly.
With kind regards
username_4: Hi! I've just been looking into the same error and was able to pinpoint it to another (earlier) spot, resulting in an undefined `loader` parameter of `applyPackageConfigSync`, which then gets passed on to `addDefaultExtension` and triggers this 'meta undefined' issue.
Although the above pull request fixes `applyPackageConfigSync` by swapping `loader` for `config` (and `addDefaultExtension`'s signature also implies using `config`), the origin lies in [doMapSync](https://github.com/systemjs/systemjs/blob/master/src/resolve.js#L563)
```
function doMapSync (loader, config, pkg, pkgKey, mapMatch, path, metadata, skipExtensions) {
if (path[path.length - 1] === '/')
path = path.substr(0, path.length - 1);
var mapped = pkg.map[mapMatch];
if (typeof mapped === 'object')
throw new Error('Synchronous conditional normalization not supported sync normalizing ' + mapMatch + ' in ' + pkgKey);
if (!validMapping(mapMatch, mapped, path) || typeof mapped !== 'string')
return;
return packageResolveSync.call(this, config, mapped + path.substr(mapMatch.length), pkgKey + '/', metadata, metadata, skipExtensions);
}
```
where `packageResolveSync.call` gets called with `this` as the function's context, BUT `doMapSync` is only called directly throughout the code, never through `apply` or `call`, so it should use its own `loader` parameter to pass on to `packageResolveSync`. Function [doMap](https://github.com/systemjs/systemjs/blob/master/src/resolve.js#L614) actually does this right.
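A sketch of the fix described above - the last line of `doMapSync`, passing its own `loader` parameter instead of the dynamic `this`:
```
return packageResolveSync.call(loader, config, mapped + path.substr(mapMatch.length),
    pkgKey + '/', metadata, metadata, skipExtensions);
```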
I will write an extra pull request for this issue if you guys don't mind.
Status: Issue closed
username_1: Thanks, released in 0.20.18. |
Wynncraft/Issues | 140712827 | Title: Resource pack not working
Question:
username_0: The resource pack doesn't work. Each time I log in, it always tells me "making request: 100%", but it never downloads the resource pack. I don't receive any error message, apart from the one in the chat telling me that I don't have the resource pack on. I've tried to follow the advice of deleting the packs in my "server-resource-pack" folder, but I don't have any pack whose name indicates that it is the Wynncraft pack. I've tried deleting them all, just in case, but it still doesn't work.
Could someone help me fix this please?
Answers:
username_1: The Wynncraft resource pack should be titled 'legacyzip' in your server-resource-pack folder. Also, may I recommend something?
Try putting a '.' between legacy and zip, then move the resulting file into your resource packs.
username_2: @username_0 , did you try username_1's advice? Is the problem fixed?
username_3: Usually what you need to do to fix the problem is to go to the server list, click on the Wynncraft server and then click the Edit button; next, click the *Server Resource Packs* button and make sure it says Enabled. When it's done, log back into the server and it should be working. Hope it helped :p
username_4: Closed due to inactivity over ten days
Status: Issue closed
|
arkayenro/arkinventory | 757633404 | Title: LUA Error on click the Checkboxes in "BonusIDs" Tab
Question:
username_0: hi there,
if I click either the `Item Suffix` or the `Corruption` checkbox on the `BonusIDs` tab, I get this error:
```
30x ArkInventory\ArkInventoryPT.lua:538: bad argument #1 to 'wipe' (table expected, got nil)
[string "=[C]"]: in function `wipe'
[string "@ArkInventory\ArkInventoryPT.lua"]:538: in function `PT_BonusIDIsWantedClear'
[string "@ArkInventory\ArkInventoryStorage.lua"]:5128: in function `ObjectIDBonusClear'
[string "@ArkInventory\ArkInventoryStorage.lua"]:5201: in function `ObjectIDCountClear'
[string "@ArkInventoryConfig\ArkInventoryConfig-30936.lua"]:1670: in function <...aceArkInventoryConfig\ArkInventoryConfig.lua:1667>
[string "=[C]"]: ?
[string "@Ace3\AceConfig-3.0-3\AceConfigDialog-3.0\AceConfigDialog-3.0-79.lua"]:51: in function <...nfig-3.0\AceConfigDialog-3.0\AceConfigDialog-3.0.lua:49>
[string "@Ace3\AceConfig-3.0-3\AceConfigDialog-3.0\AceConfigDialog-3.0-79.lua"]:843: in function <...nfig-3.0\AceConfigDialog-3.0\AceConfigDialog-3.0.lua:664>
[string "=[C]"]: ?
[string "@Ace3\AceGUI-3.0\AceGUI-3.0-41.lua"]:72: in function <Ace3\AceGUI-3.0\AceGUI-3.0.lua:70>
[string "@Ace3\AceGUI-3.0\AceGUI-3.0-41.lua"]:306: in function `Fire'
[string "@Ace3\AceGUI-3.0-41\widgets\AceGUIWidget-CheckBox.lua"]:68: in function <...ns\Ace3\AceGUI-3.0\widgets\AceGUIWidget-CheckBox.lua:57>
```
greetings
fuba |
pokeclicker-dev/pokeclicker | 506318740 | Title: stop moving me plz
Question:
username_0: when you exit the farm/daycare/underground, the game puts you on the route where that location is... it's logical, but annoying
Answers:
username_1: haha, yeah, this annoys the heck out of me too, makes me not go there most of the time. It is a godsend that you get the List button with area2 and you don't have to go to route5 alllll the time xD
username_2: This only happens on the Underground and Farm.
Clicking the little Daycare square will not move you to Route 5.
username_3: Think the daycare only stops moving you once you get to the second region.
Maybe we could implement something similar for Underground/Farm when we add shortcuts to those.
username_1: Yeah, sounds good. But there needs to be a way to access these in a region even if no shortcut has been unlocked yet. |
madetech/academy | 207286514 | Title: Coding Paradigms
Question:
username_0: Having had lots of success onboarding interns on Clean Architecture projects in the past, but typically a harder time with classic MVC-style projects, my gut feeling is that we should be leaning towards Clean Architecture.
My only hesitation is that we have lots of Rails projects, and will probably still deliver Rails projects in the future. I believe that moving away from this is the future of Made Tech, but perhaps we should begin at the "front door", so to speak, to get the ball moving forward?
Answers:
username_1: I'm not sure how feasible this will be given the current level of adoption of CA at Made Tech.
username_1: @username_0 maybe we should find some time for academy members to get involved with the CA learning objectives? |
garris/BackstopJS | 104466540 | Title: Specify path for the backstop.json config file
Question:
username_0: Is it possible to specify a path for the backstop.json config file?
Answers:
username_1: +1
username_2: The short answer is no. You can specify locations of the screenshots (as of v6.0) but that's about it. If this were to be enabled how would you expect it to be implemented?
username_1: Probably as a parameter you pass to the gulp tasks? Or even better, as a property in the `package.json`?
username_2: Oh right, that _would_ be pretty useful. I owe the project a new roadmap. I will add this feature to it.
username_1: If you let me know which way you prefer I'd be happy to contribute a PR
username_2: That would be fabulous.
I like the command-line approach because it provides a very universal API -- without requiring any specialized knowledge about how gulp works.
I think you would only need to modify paths.js to make this work...
https://github.com/username_2/BackstopJS/blob/master/gulp/util/paths.js
You would want to check for a supplied path parameter; if one exists, check for a valid file, and if that exists, set `paths.activeCaptureConfigPath` with the supplied path.
Should not be too difficult.
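A rough sketch of that check, assuming `minimist` for argument parsing; the `--backstopConfigPath` flag name is hypothetical, not the actual BackstopJS API:
```js
var fs = require('fs');
var minimist = require('minimist');

var argv = minimist(process.argv.slice(2));

// Inside gulp/util/paths.js, where `paths` is the module's config object:
// if a path parameter was supplied and points at a real file, use it
// as the active capture config.
if (argv.backstopConfigPath && fs.existsSync(argv.backstopConfigPath)) {
  paths.activeCaptureConfigPath = argv.backstopConfigPath;
}
```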
Status: Issue closed
username_2: Ok. just tested this. Works beautifully! It's merged to master. Release forthcoming! |
rust-lang/rust | 69362071 | Title: dead code (+ unused assignment, etc) warnings in macros do more harm than good
Question:
username_0: (imported from improperly closed bug #17427)
Consider the following code:
```rust
fn g(x: u8) -> u8 {
1 - x
}
fn main() {
let mut x = 1u8;
macro_rules! m {
() => {{ x = g(x);
if x == 0 { x = 1; }
}}
}
m!();
g(1/x);
m!();
}
```
[playpen](http://is.gd/tUWPua)
The `unused_assignments` lint fires from the expansion of the second occurrence of `m!()`. But if you follow the advice of the lint and remove the assignment, you discover that the assignment was in fact significant, because when you remove the assignment, the side effect from the *first* occurrence of `m!()` is lost, and so the call to `g` divides by zero.
There are a number of different ways to handle this.
* The simplest would be to disable such lints for code with spans that are inside macro definitions, as was essentially the original suggestion of #17427
* Another option would be to revise our strategy for such lints, and to warn about an unused assignment (or other dead code) for a given span *only if* *ALL* such occurrences trigger the warning. I.e. as soon as one expansion proves that that code in the macro is useful, then you do not get any warning about it.
Answers:
username_1: I would like this. http://doc.rust-lang.org/bitflags/bitflags/macro.bitflags!.html is really annoying without it.
username_1: In the mean time I'm using the following workaround:
```
impl Sides {
// Execute all the code paths to shut up warnings.
// FIX: https://github.com/rust-lang/rust/issues/24580
#[allow(dead_code)]
fn _dead_code(&mut self) {
self.is_all();
self.is_empty();
self.bits();
self.intersects(*self);
self.remove(SIDE_RIGHT);
self.toggle(SIDE_RIGHT);
Sides::from_bits(0b00000000);
Sides::from_bits_truncate(0b00000000);
}
}
```
It's not generic - you'll need to customize it for your struct - but at least it's something.
username_2: Still repros.
username_2: @username_0 Are you still passionate about this?
username_3: I would still want this if possible. I have this macro:
~~~
macro_rules! assert_packet {
($var:ident, $( $x:expr ),+ ) => {
{
let mut index = 0;
$(
match $x.into() {
PacketMatchers::Str(a) => assert_eq!(a, $var[index]),
PacketMatchers::Regex(a) => {
let reg = regex::Regex::new(a).unwrap();
if !reg.is_match(&$var[index]) {
panic!("'{}' does not match regex: '{}'.", $var[index], a);
}
}
}
index += 1;
)*
}
};
}
~~~
This is for testing variable-length packets (lists of strings) in a custom protocol. Usage is as follows:
~~~
assert_packet!(packet, "LOGIN", "testuser", Regex(r"[A-Z]+"));
~~~
I am rather pleased that the macro and assert code are easy to read and understand, but for the last instance of the macro expansion I always get `warning: value assigned to `index` is never read`, which is displeasing.
Maybe there is a way to avoid it, but I have not found it yet - I added a semantically nonsensical `assert!(index >= 0)` to shut this up for now.
username_4: +1 for me. I'm trying to figure out a workaround for these warnings as well. |
analogdevicesinc/libsmu | 786039325 | Title: pysmu throws ImportError on fresh Ubuntu (18.04 and 20.04)
Question:
username_0: Source:
https://stackoverflow.com/a/21171861
Answers:
username_0: Replacing line 24 with:
`from urllib.request import urlretrieve`
fixes the initial issue.
But a new error is thrown:
```
Traceback (most recent call last):
File "/usr/bin/pysmu", line 25, in <module>
import urllib2
ModuleNotFoundError: No module named 'urllib2'
```
username_0: `urllib2` cannot be used with Python 3.
[A topic on StackOverflow about this](https://stackoverflow.com/questions/2792650/import-error-no-module-name-urllib2)
Removing the import fixes the issue, but I do not know if this causes invisible side effects.
username_1: Hi @username_0 !
I tried to reproduce this issue on a clean Ubuntu 20 image, but it all seems to work good for me. I tried both installing libsmu and pysmu with the .deb packages from [the latest release](https://github.com/analogdevicesinc/libsmu/releases/tag/v1.0.3) and building them manually and both methods worked as expected. I will take a look on your Docker file as well.
I will try to reproduce this issue as well in order to understand where it comes from and if your suggested solution causes other side effects. In the meanwhile, if something doesn't work with your version, you might try installing libsmu/pysmu using the .deb packages from the latest Release and see if this solves the issue for you.
Thank you!
username_0: Dear @username_1 ,
Thanks for your reply.
I have tried the installation only in simple Docker containers. (I have not tried it in a GUI VM.)
As you will see, my Dockerfile is as simple as possible; I just replace every "18" with "20" to switch between 18.04 and 20.04.
In the Dockerfile, I download the .deb packages and apt install them.
Installation is OK; the error pops up only when executing
`libsmu/bindings/python/bin/pysmu`
Let me know if you want me to try further tests.
I also would be glad to help on any python feature.
I am not a very experienced programmer but I may help on simple features.
You may also be able to answer an ancillary question of mine: I could not tell whether the Python bindings are for Python 2 or Python 3; they seem to be written for both. But, as shown in my previous comments, some Python 2 modules are not available in Python 3, and I was expecting some "switches" in pysmu for the appropriate imports. Do you have more information about the Python part of the project?
username_1: I understand now. Initially I thought that the issue appears when importing pysmu in a Python script. But after your last message I figured out that you were trying to use the pysmu CLI.
About the bug: you are right, the issue is the one described in the posts you presented in the first comments. The problem is that urllib changed its API from Python 2 to Python 3, and the version in pysmu is the one for Python 2. The solution for this issue would be something like this:
```
# Try importing urllib for Python 3. If the script is run using Python 2, the fallback imports will be used.
try:
    from urllib.request import urlretrieve, urlopen
except ImportError:
    from urllib import urlretrieve
    from urllib2 import urlopen
```
About the libsmu Python bindings: pysmu is the resulted package after compiling libsmu with the Python bindings. You can find more details about its implementation in its [main script](https://github.com/analogdevicesinc/libsmu/blob/master/bindings/python/pysmu/libsmu.pyx). As you can see, in this file there are "switches" for appropriate imports, based on the Python version you are using. You can choose which Python version is used for pysmu when building the libsmu library manually, using the USE_PYTHON2 CMake option. You can read more about the CMake option in the [README page](https://github.com/analogdevicesinc/libsmu#clone-configure-and-build). If USE_PYTHON is set on 1 (which is also its default value), Python2 will be used. Otherwise Python3 will be used. The resulting pysmu from the build process is Python version specific (i.e. it will work only for the desired Python version).
I think your confusion came from the fact that both the module and the CLI (libsmu/bindings/python/bin/pysmu) have the same name. You can use the CLI for its specific defined methods (calibration, firmware update etc.), but if you want to use pysmu for further development, the pysmu module is the one you need (the one that is built and which you can import in a Python script).
As already said, the pysmu module contains switches for the library versions, based on the Python version you are using. Since most people are using the module and not the Python CLI, this issue existed for quite a while and we didn't notice it. Thanks for pointing it out!
About the .deb packages: we only released the packages for Python3 because Python2 is out of support since the 1st of January 2020. Anyway, pysmu further supports Python2 if manually built, so the CLI should contain such "switches" for the dependencies imports.
Please let me know if you have any further questions or if I didn't explain something clear enough! Thank you!
username_0: Many thanks for your explanations; everything is clear
Status: Issue closed
|
GothamElections2017/RandomThoughts | 266909941 | Title: EPA Appoints <NAME> as Region 10 Administrator https://t.co/4ScFu5ZTv9
Question:
username_0: EPA Appoints <NAME> as Region 10 Administrator https://t.co/4ScFu5ZTv9
— <NAME> (@Ge_Dawn_Granger) October 19, 2017
October 19, 2017 at 04:34PM
via Twitter
hashrocket/dotmatrix | 249892313 | Title: hr vimbundles, command not found. Readme or command needs update.
Question:
username_0: Hello, on a factory-restored MacBook computer,
the readme for dotmatrix says to run:
```cd dotmatrix```
```bin/install```
Then
```hr vimbundles```
The result: ```hr command not found.```
The fix:
when in the dotmatrix directory,
run the hr bin script:
```./hr/bin/hr```
Now the hr command seems to be found in future use.
<b>Summary: perhaps ```bin/install``` is not performing the install for the hr command.</b>
Answers:
username_1: Did you restart your shell after running `bin/install`?
username_0: @username_1, apologies for taking so long to respond.
Yep, I had tried restarting the shell.
I think the solution needs to be coded and that the script is not being run.
I can make a pull request if there is interest.
To show how small the change would be.
The big question, "will it actually make a difference?"
That question will not be answered until the issue is able to be replicated by others.
The trouble, to replicate the issue you almost have to factory restore your computer.
Thereby making yourself a user with no preinstalled programmer tools.
username_2: @username_0 thanks for opening this issue!
I noticed in your issue description that you spelled the command `hr vimbundles` which is not correct. The correct command is `hr vimbundle` (singular as opposed to plural). This may have been the cause for the `command not found` message you were seeing. 🙂
The solution we use to install Vim plugins has recently changed to Vim Plug (see #64). The issue you were experiencing was prior to that change. I recommend you get the latest dotmatrix and follow the instructions here https://github.com/hashrocket/dotmatrix/blob/master/README.md#vim-plugins to see if this issue is still replicatable.
The short of it is that you no longer need to use `hr vimbundle` instead you will be able to install plugins using `:PlugInstall` directly from Vim and `:PlugUpdate` to update your plugins.
I recently installed the latest dotmatrix (w/ Vim Plug) on a fresh Ubuntu machine and did not encounter any issues at all. If you are still able to replicate the problem on a fresh VirtualBox machine please let us know, otherwise feel free to close this issue.
username_0: @username_2, great idea.
I believe the code was a fresh clone of dotmatrix because I was on a new programmer's computer.
I will check on the typo.
However, I remember that after running the install script, 'hr vimbundle' was working.
Status: Issue closed
username_2: Closing this issue for now. If you find a way to replicate the issue please let me know and I'll reopen.
username_0: @username_2, that's fair.
If I get my computer into the correct environment to replicate it many times,
I will be back. 🙂 👍
nodeca/image-blob-reduce | 654664183 | Title: Simplify passing unsharp opions
Question:
username_0: Currently users are expected to modify processing with ease. But passing unsharp options probably needs something special.
At first glance, possible alternatives are:
- pass unsharp params the same way as `{ max: ... }` (see the sketch below)
  - we'd need to limit the list of allowed params (currently `unsharpAmount`, `unsharpRadius`, `unsharpThreshold`).
- pass unsharp params as `{ max: ..., pica_opts: ... }` - separate object to merge on pica call.
  - if the user doesn't know about pica, this looks unnatural.
- suggest passing pica into the constructor with an overridden .resize()
  - mad science :)
Additional questions:
- do we need to pass pica's `alpha`? Its logic is hardcoded now. Can be changed with `_.transform()` override only.
- do we need to pass jpeg `quality` option? Can be changed with `.toBlob()` override, not a big deal.
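For illustration, a minimal caller-side sketch of the first alternative (the unsharp option names mirror pica's; treat this as a hypothetical API, not something already implemented):
```js
const ImageBlobReduce = require('image-blob-reduce');
const reduce = ImageBlobReduce();

// Hypothetical: unsharp params accepted next to `max` and forwarded to pica
reduce.toBlob(blob, {
  max: 1000,
  unsharpAmount: 80,
  unsharpRadius: 0.6,
  unsharpThreshold: 2
}).then(result => {
  // use the reduced blob
});
```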
Answers:
username_0: https://github.com/nodeca/image-blob-reduce/commit/4b11846692c250c4be4d4a486972342263da6ed5
Status: Issue closed
|
AlexsLemonade/refinebio | 318918380 | Title: Speed up end-to-end tests
Question:
username_0: ### Context
With the current base image, end to end tests cannot be done quickly because pulling the image and pushing it to a local registry take forever. However with the completion of #55 we will not have such a large image so this can be done quicker.
### Problem or idea
We should become paying clients of CircleCI so that we can parallelize our builds. Once we do so we should shoot to get the tests running in under a half hour.
### Solution or next step
My initial suggestion for how to parallelize is to have these test threads running simultaneously:
- [ ] One to run the general tests that are fairly lightweight.
- [ ] One to run individual processors.
- [ ] One to run end to end tests.
These may or may not all be necessary or the best way to do this.
SCAN.UPC is the processor that requires the annotation packages, which blow up the size of our Docker images the most. Therefore, if we just make sure we don't run it during end-to-end tests, we won't need to push the image to a local registry, which will speed things up a lot. We can instead pull that image and run a processor to test it, which I believe will probably be the longest test we'll have.
### New Issue Checklist
- [x] The title is short and descriptive.
- [x] You have explained the context that led you to write this issue.
- [x] You have reported a problem or idea.
- [x] You have proposed a solution or next step.
Answers:
username_1: I just looked into this. As an open source project, we do have parallelism already (4 containers). However, it needs to be enabled in the config file:
https://circleci.com/docs/2.0/configuration-reference/#jobs
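For reference, a minimal sketch of what that looks like in `.circleci/config.yml` (job name and image are placeholders):
```yaml
version: 2
jobs:
  build:
    parallelism: 4
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      # Split work across the 4 containers using CIRCLE_NODE_INDEX / CIRCLE_NODE_TOTAL
      - run: ./run_tests.sh
```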
username_0: Oh awesome then.
username_0: I dropped [this comment](https://github.com/AlexsLemonade/refinebio/issues/233#issuecomment-385493027) on https://github.com/AlexsLemonade/refinebio/issues/233 which potentially could have sped things up but it doesn't work.
Status: Issue closed
|
apple/swift-nio | 391314274 | Title: Document how to build NIO 2 before Swift 5 is public
Question:
username_0: Seems like the steps are
- new toolchain
- switch to legacy build system in Xcode (in project settings)
CC @tanner0101
Answers:
username_0: Yay, @tkremenek to the rescue: https://forums.swift.org/t/how-to-set-swift-version-5-for-recent-dev-snapshots-in-xcode-build-settings/18692/20
username_0: Done a while ago in readme
Status: Issue closed
|
Cinchoo/ChoETL | 578422191 | Title: Quotes in header throws exception
Question:
username_0: Presuming I have quotes around the fields in the header, is there any way I can use ChoCSVReader to parse it?
`"5108","mesg_s_umid","mesg_validation_requested","mesg_validation_passed","mesg_class","mesg_related_s_umid","mesg_is_text_readonly","mesg_is_delete_inhibited","mesg_is_text_modified","mesg_is_partial","mesg_status","mesg_crea_appl_serv_name","mesg_crea_mpfn_name","mesg_crea_rp_name","mesg_crea_oper_nickname","mesg_crea_date_time","mesg_mod_oper_nickname","mesg_mod_date_time","mesg_verf_oper_nickname","mesg_recordversion",":E:mesg_account_name",":E:mesg_account_number",":E:mesg_appl_sender_referencereference",":E:mesg_approver_nickname",":E:mesg_auth_delv_notif_req",":E:mesg_authoriser_dn",":E:mesg_b2b_instructions",":E:mesg_batch_reference",":E:mesg_beneficiary_account",":E:mesg_beneficiary_address",":E:mesg_beneficiary_bank_code",":E:mesg_beneficiary_bank_name",":E:mesg_beneficiary_name",":E:mesg_cas_sender_reference",":E:mesg_cas_target_rp_name",":E:mesg_charges",":E:mesg_contract_id",":E:mesg_copy_recipient_dn",":E:mesg_copy_service_cid",":E:mesg_copy_service_id",":E:mesg_copy_state",":E:mesg_copy_type",":E:mesg_credeb_account_number",":E:mesg_credit_amount",":E:mesg_credit_ccy",":E:mesg_custom_keyword1",":E:mesg_custom_keyword2",":E:mesg_custom_keyword3",":E:mesg_data_keyword1",":E:mesg_data_keyword2",":E:mesg_data_keyword3",":E:mesg_debit_account_instructing",":E:mesg_debit_amount",":E:mesg_debit_ccy",":E:mesg_delv_notif_req_mtype",":E:mesg_delv_notif_req_recDN",":E:mesg_delv_overdue_warn_req",":E:mesg_digest_value",":E:mesg_dirty",":E:mesg_e2e_transaction_reference",":E:mesg_exchange_rate",":E:mesg_expiry_date_time",":E:mesg_file_desc",":E:mesg_file_digest_algo",":E:mesg_file_digest_value",":E:mesg_file_header_info",":E:mesg_file_info",":E:mesg_file_logical_name",":E:mesg_file_size",":E:mesg_fin_ccy_amount",":E:mesg_fin_inform_release_info",":E:mesg_fin_value_date",":E:mesg_force_completed",":E:mesg_frmt_name",":E:mesg_has_verifiable_field",":E:mesg_identifier",":E:mesg_instructions",":E:mesg_intermediary_bank_code",":E:mesg_is_copy",":E:mesg_is_copy_required",":E:mesg_is_live",":E:mesg_is_recipient_list_public",":E:mesg_is_retrieved",":E:mesg_is_simplified_screen",":E:mesg_locked_template",":E:mesg_mesg_user_group",":E:mesg_nature",":E:mesg_network_appl_ind",":E:mesg_network_delv_notif_req",":E:mesg_network_obso_period",":E:mesg_network_priority",":E:mesg_nrs_required_sig_nbr",":E:mesg_odering_party_address",":E:mesg_odering_party_institution",":E:mesg_odering_party_name",":E:mesg_orig_snf_ref",":E:mesg_originator_account_id",":E:mesg_originator_address",":E:mesg_originator_name",":E:mesg_overdue_warning_delay",":E:mesg_overdue_warning_time",":E:mesg_payload_attribute_name",":E:mesg_payload_attribute_value",":E:mesg_payload_type",":E:mesg_pc_info_for_receiver",":E:mesg_possible_dup_creation",":E:mesg_product_name",":E:mesg_product_version",":E:mesg_rcv_charges_amount",":E:mesg_rcv_charges_curr",":E:mesg_receiver_alia_name",":E:mesg_receiver_swift_address",":E:mesg_recipient_list",":E:mesg_recovery_accept_info",":E:mesg_regulatory_reporting",":E:mesg_rel_trn_ref",":E:mesg_release_info",":E:mesg_remit_info",":E:mesg_request_e2e_control",":E:mesg_request_type",":E:mesg_requestor_dn",":E:mesg_retrieval_info",":E:mesg_rma_checked",":E:mesg_security_iapp_name",":E:mesg_security_required",":E:mesg_sender_X1",":E:mesg_sender_X2",":E:mesg_sender_X3",":E:mesg_sender_X4",":E:mesg_sender_branch_info",":E:mesg_sender_city_name",":E:mesg_sender_corr_type",":E:mesg_sender_ctry_code",":E:mesg_sender_ctry_name",":E:mesg_sender_institution_name",":E:mesg_sender_location",":E:mesg_sender_swift_addres
s",":E:mesg_service",":E:mesg_signature_digest_reference",":E:mesg_signature_digest_value",":E:mesg_sla",":E:mesg_snd_charges_amount",":E:mesg_snd_charges_curr",":E:mesg_snep_profile_name",":E:mesg_source_template_name",":E:mesg_sub_format",":E:mesg_syntax_table_ver",":E:mesg_templ_prep_status",":E:mesg_template_descr",":E:mesg_template_name",":E:mesg_third_party_list",":E:mesg_tran_num",":E:mesg_transaction_date",":E:mesg_transfer_desc",":E:mesg_transfer_info",":E:mesg_trn_ref",":E:mesg_type",":E:mesg_use_pki_signature",":E:mesg_user_issued_as_pde",":E:mesg_user_priority_code",":E:mesg_user_reference_text",":E:mesg_uumid",":E:mesg_uumid_suffix",":E:mesg_vendor_name",":E:mesg_xml_query_ref1",":E:mesg_xml_query_ref2",":E:mesg_xml_query_ref3",":E:mesg_xmlv2_digest_value",":E:mesg_xmty_upload_id",":E:mesg_xmty_uri",":E:mesg_xsvc_uri",":E:mesg_z_approval_trailer",":E:mesg_zz41_is_possible_dup",
`
```
foreach (dynamic rec in new ChoCSVReader(@"D:\user\Downloads\temp\20200124\mesg_20200124.txt").WithFirstLineHeader())
{
Console.WriteLine($"Id: {rec.mesg_s_umid}");
Console.WriteLine($"Name: {rec.mesg_validation_requested}");
}
```
```
ChoETL.ChoParserException: Atleast one of the field header is empty. Please check the field headers at [174].
at ChoETL.ChoCSVRecordReader.GetHeaders(String line)
at ChoETL.ChoCSVRecordReader.<>c__DisplayClass21_0.<AsEnumerable>b__0(Tuple`2 pair)
at ChoETL.ChoPeekEnumerator`1.MoveToNext()
at ChoETL.ChoPeekEnumerator`1.TryFetchPeek()
at ChoETL.ChoPeekEnumerator`1.get_Peek()
at ChoETL.ChoCSVRecordReader.AsEnumerable(Object source, TraceSwitch traceSwitch, Func`2 filterFunc)+MoveNext()
at ChoETL.ChoCSVReader`1.<>c__DisplayClass49_0.<GetEnumerator>b__0()
at ChoETL.ChoEnumeratorWrapper.ChoEnumeratorWrapperInternal`1.MoveNext()
at ChoETL.ChoEnumeratorWrapper.BuildEnumerable[T](Func`1 moveNext, Func`1 current, Action dispose)+MoveNext()
```
Answers:
username_0: Investigating it further, it seems that the quotes are not the issue here, but rather the trailing comma in the header. Is there a way to ignore the last comma if there is no value after it?
username_0: Got it by passing `IgnoreColumnsWithEmptyHeader = true` in ConfigureHeader().
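For anyone landing here later, a sketch of the full call (assuming ChoETL's `ConfigureHeader` hook carries this setting; adjust to your version):
```csharp
using (var reader = new ChoCSVReader(@"D:\user\Downloads\temp\20200124\mesg_20200124.txt")
    .WithFirstLineHeader()
    .ConfigureHeader(h => h.IgnoreColumnsWithEmptyHeader = true))
{
    foreach (dynamic rec in reader)
    {
        Console.WriteLine($"Id: {rec.mesg_s_umid}");
    }
}
```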
Status: Issue closed
|
cmangos/issues | 808461654 | Title: 🐛 [Bug Report] Ironforge Weapon Master's gossip menu is not available
Question:
username_0: ## 🐛 Bugreport
Ironforge Weapon Master's gossip menu is not available, and clicking "I'd like some weapon training" after clicking the gossip menu has no response.


### Expected behavior
The gossip menu is available.
### Version & Environment
Client Version:
"2.4.3" (TBC)
CMaNGOS Repo & Commit Hash:
b402c1d0ac0019967b62f33271839cb1487d2644
Database Repo & Commit Hash:
145967aa2f009fcecce09453c1fa151ad8f6de19
Operating System:
Linux
### Steps to reproduce
Status: Issue closed
Answers:
username_1: https://github.com/cmangos/tbc-db/commit/af66da8d8816d692e3997fdf172e386fa8878ca0 |
alibaba/canal | 452463858 | Title: In parallel parsing mode, the meta-fetching logic has a thread-safety problem
Question:
username_0:
```java
// In TableMetaCache.getTableMetaByDB(...), which is invoked concurrently in parallel parsing mode
try {
ResultSetPacket packet = connection.query("show create table " + fullname);
String[] names = StringUtils.split(fullname, "`.`");
String schema = names[0];
String table = names[1].substring(0, names[1].length());
return new TableMeta(schema, table, parseTableMeta(schema, table, packet));
} catch (Throwable e) { // fallback to desc table
ResultSetPacket packet = connection.query("desc " + fullname);
String[] names = StringUtils.split(fullname, "`.`");
String schema = names[0];
String table = names[1].substring(0, names[1].length());
return new TableMeta(schema, table, parseTableMetaByDesc(packet));
}
}
```
The exception stack trace is as follows:
```
2019-05-31 21:34:51.678 [destination =*, address =* , EventParser] ERROR com.alibaba.otter.canal.common.alarm.LogAlarmHandler - destination:*
[com.alibaba.otter.canal.parse.exception.CanalParseException: com.alibaba.otter.canal.parse.exception.CanalParseException: parse row data failed.
Caused by: com.alibaba.otter.canal.parse.exception.CanalParseException: parse row data failed.
Caused by: com.alibaba.otter.canal.parse.exception.CanalParseException: com.google.common.util.concurrent.UncheckedExecutionException: com.alibaba.otter.canal.parse.exception.CanalParseException: fetch failed by table meta:`stress_test`.`d1`
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: com.alibaba.otter.canal.parse.exception.CanalParseException: fetch failed by table meta:`stress_test`.`d1`
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4830)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.TableMetaCache.getTableMeta(TableMetaCache.java:196)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.LogEventConvert.getTableMeta(LogEventConvert.java:956)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.LogEventConvert.parseRowsEventForTableMeta(LogEventConvert.java:504)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.LogEventConvert.parseRowsEvent(LogEventConvert.java:525)
at com.alibaba.otter.canal.parse.inbound.mysql.MysqlMultiStageCoprocessor$DmlParserStage.onEvent(MysqlMultiStageCoprocessor.java:330)
at com.alibaba.otter.canal.parse.inbound.mysql.MysqlMultiStageCoprocessor$DmlParserStage.onEvent(MysqlMultiStageCoprocessor.java:316)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:143)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.alibaba.otter.canal.parse.exception.CanalParseException: fetch failed by table meta:`stress_test`.`d1`
Caused by: java.io.IOException: should execute connector.connect() first
at com.alibaba.otter.canal.parse.driver.mysql.MysqlQueryExecutor.<init>(MysqlQueryExecutor.java:30)
at com.alibaba.otter.canal.parse.inbound.mysql.MysqlConnection.query(MysqlConnection.java:104)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.TableMetaCache.getTableMetaByDB(TableMetaCache.java:93)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.TableMetaCache.access$000(TableMetaCache.java:32)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.TableMetaCache$1.load(TableMetaCache.java:63)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.TableMetaCache$1.load(TableMetaCache.java:53)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4830)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.TableMetaCache.getTableMeta(TableMetaCache.java:196)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.LogEventConvert.getTableMeta(LogEventConvert.java:956)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.LogEventConvert.parseRowsEventForTableMeta(LogEventConvert.java:504)
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.LogEventConvert.parseRowsEvent(LogEventConvert.java:525)
at com.alibaba.otter.canal.parse.inbound.mysql.MysqlMultiStageCoprocessor$DmlParserStage.onEvent(MysqlMultiStageCoprocessor.java:330)
at com.alibaba.otter.canal.parse.inbound.mysql.MysqlMultiStageCoprocessor$DmlParserStage.onEvent(MysqlMultiStageCoprocessor.java:316)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:143)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```
### Steps to reproduce
Just create a scenario where the table meta becomes stale. For example:
the number of fields in the binlog differs from the number of fields in MySQL; at that point, restart the canal process (non-tsdb).
The PR that fixes this issue (by fetching the meta synchronously) is below:
https://github.com/alibaba/canal/pull/1866
Answers:
username_1: I don't quite understand the thread-safety problem.
username_0: Go find the method getTableMetaByDB and look at its implementation. In parallel parsing mode this method is called concurrently, and the connection it uses (which wraps a socket) is not thread-safe, hence the exception above.
@username_1
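A minimal sketch of the direction of the fix (hypothetical; not necessarily the code in the merged PR): serialize access to the shared connection so concurrent parser stages cannot interleave packets on the same socket.
```java
private final Object connLock = new Object();

// Hypothetical fix sketch for TableMetaCache.getTableMetaByDB(...)
private TableMeta getTableMetaByDB(String fullname) throws IOException {
    synchronized (connLock) { // the underlying connection/socket is not thread-safe
        ResultSetPacket packet = connection.query("show create table " + fullname);
        String[] names = StringUtils.split(fullname, "`.`");
        String schema = names[0];
        String table = names[1];
        return new TableMeta(schema, table, parseTableMeta(schema, table, packet));
    }
}
```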
Status: Issue closed
username_2: The code has been merged.
facebookresearch/ParlAI | 229173846 | Title: Cuda Out of Memory in LSTM SQUAD model
Question:
username_0: Does this example require too much VRAM? I am trying to run this example on a box with an Nvidia 970m (3GB VRAM) but I got this error:
```
05/16/2017 04:02:32 PM: [ Ok, let's go... ]
05/16/2017 04:02:32 PM: [ Training for 1000 iters... ]
05/16/2017 04:02:36 PM: [train] updates = 10 | train loss = 9.83 | exs = 310
05/16/2017 04:02:39 PM: [train] updates = 20 | train loss = 9.79 | exs = 623
05/16/2017 04:02:42 PM: [train] updates = 30 | train loss = 9.75 | exs = 938
THCudaCheck FAIL file=/py/conda-bld/pytorch_1493680494901/work/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
Traceback (most recent call last):
File "examples/drqa/train.py", line 178, in <module>
main(opt)
File "examples/drqa/train.py", line 113, in main
train_world.parley()
File "/home/ivan/ParlAI/parlai/core/worlds.py", line 505, in parley
batch_act = self.batch_act(index, batch_observations[index])
File "/home/ivan/ParlAI/parlai/core/worlds.py", line 479, in batch_act
batch_actions = a.batch_act(batch_observation)
File "/home/ivan/ParlAI/parlai/agents/drqa/agents.py", line 192, in batch_act
self.model.update(batch)
File "/home/ivan/ParlAI/parlai/agents/drqa/model.py", line 113, in update
self.optimizer.step()
File "/home/ivan/anaconda3/lib/python3.6/site-packages/torch/optim/adamax.py", line 68, in step
torch.max(norm_buf, 0, out=(exp_inf, exp_inf.new().long()))
RuntimeError: cuda runtime error (2) : out of memory at /py/conda-bld/pytorch_1493680494901/work/torch/lib/THC/generic/THCStorage.cu:66
```
I haven't dived into the code, so I am not sure if this is a bug or if I just need more VRAM. Thank you.
Status: Issue closed
Answers:
username_0: I just changed the batch size to make it work. |
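For reference, that change was roughly a command-line tweak like the following (flag name assumed from ParlAI's standard argument parser; the exact flag may differ by version, and the value depends on your GPU):
```
python examples/drqa/train.py --batchsize 16
```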
dbankier/JAST | 61021273 | Title: Problems with node >= 0.11.x
Question:
username_0: I'm struggling to get JAST working with any version of Node from 0.11.x upwards. The latest version I can get the stack to work with is 0.10.37.
It looks like the problem is with the STSS dependency but before I raise an issue over there, I wanted to make sure that it isn't just me being silly.
So, has anyone got JAST to work with a version of Node higher than 0.10.37?
Answers:
username_1: yep - errors with node-sass on 0.12. Stick to 0.10 for the moment if you can...
username_0: Ok - that's as I thought.
For anyone else reading this, I had issues with TiShadow not reloading properly using an earlier version of Node. Upgrading to 0.10.37 seems to have fixed these issues.
Status: Issue closed
username_2: `grunt-stss` has been updated to support node > 0.12. Update the package.json to the latest version to fix. |
dotnet/roslyn | 241245990 | Title: ReplacementChangesSemantics fails to detect certain breaking changes
Question:
username_0: **Version Used**: 15.3 Preview 3
As described in https://github.com/dotnet/roslyn/pull/20596#issuecomment-312662668, `ReplacementChangesSemantics` fails to detect certain semantic changes, leading to broken diagnostics and code fixes for some cases. @username_1 has identified a potential solution in https://github.com/dotnet/roslyn/pull/20596#issuecomment-312857730 which could fix several outstanding bugs in this area.
Answers:
username_1: Regarding #20596.
I've found some strange code (at least from my point of view) in the src\compilers\csharp\portable\binder\Binder_Operators.cs file:
```csharp
var best = this.UnaryOperatorOverloadResolution(kind, operand, node, diagnostics, out resultKind, out originalUserDefinedOperators);
if (!best.HasValue)
{
ReportUnaryOperatorError(node, diagnostics, operatorText, operand, resultKind);
return new BoundUnaryOperator(
node,
kind,
operand,
ConstantValue.NotAvailable,
null,
resultKind,
originalUserDefinedOperators,
GetSpecialType(SpecialType.System_Object, diagnostics, node),
hasErrors: true);
}
```
Why is ```SpecialType.System_Object``` used and not an ErrorType, since the expression is invalid?
username_0: 💭 It seems like ideally the symbol returned would be the type of the operand, meaning it behaves in IntelliSense like the operand even if the operand (or operator) doesn't resolve. In almost all cases the unary operators do not change the type of the argument. |
feast-dev/feast | 1004128203 | Title: Be able to override the TTL parameter at retrieval time
Question:
username_0: **Is your feature request related to a problem? Please describe.**
While feature views are meant to be reusable, not all the teams/use cases have the same need for TTL. Therefore we should be able to pass a TTL at retrieval time that will override the one registered in the feature views.
**Describe the solution you'd like**
**Option 1:**
Add a "ttl" parameter that will override the TTL **for all feature views** for the retrieval query
```python
from datetime import timedelta
get_historical_features(
...,
ttl=timedelta(days=365)
)
```
**Option 2:**
Be able to pass a `List[FeatureView]` to the `get_historical_features()` method so that users could tweak the FeatureView information before performing a retrieval operation
```python
from datetime import timedelta
...
# fv1 & fv2 are FeatureViews loaded from the registry
fv1.ttl = timedelta(days=365)
get_historical_features(
features=[
fv1,
fv2,
]
)
```
**Option 3:**
Add a "ttl" parameter but be able to specify which ttl applies to which feature view of the retrieval query
```python
from datetime import timedelta
get_historical_features(
...,
ttl={
"fv1": timedelta(days=365),
}
)
```
votca/xtp | 394987935 | Title: Generating overview_stochastic.eps trigggers an internal error in inkscape on s390x
Question:
username_0: When testing 1.5_rc1 on s390x, I get:
```
[ 89%] Generating fig/stochastic/overview_stochastic.eps
cd /builddir/build/BUILD/votca-1.5_rc1/s390x-redhat-linux-gnu/xtp/manual/fig/stochastic && /usr/bin/inkscape -f /builddir/build/BUILD/votca-1.5_rc1/xtp/manual/fig/stochastic/overview_stochastic.svg -E /builddir/build/BUILD/votca-1.5_rc1/s390x-redhat-linux-gnu/xtp/manual/fig/stochastic/overview_stochastic.eps
make[2]: Leaving directory '/builddir/build/BUILD/votca-1.5_rc1/s390x-redhat-linux-gnu'
BUILDSTDERR: terminate called after throwing an instance of 'std::logic_error'
BUILDSTDERR: what(): basic_string::_M_construct null not valid
BUILDSTDERR: Emergency save activated!
BUILDSTDERR: Emergency save completed. Inkscape will close now.
BUILDSTDERR: If you can reproduce this crash, please file a bug at www.inkscape.org
BUILDSTDERR: with a detailed description of the steps leading to the crash, so we can fix it.
BUILDSTDERR: ** Message: 03:23:39.190: Error: Inkscape encountered an internal error and will close now.
```
Answers:
username_0: This one is still valid! We should report this to Inkscape upstream.
username_0: https://bugs.launchpad.net/votca/+bug/1810425
username_0: The problem seems to be the length of the input (`-f` option) and output (`-E` option) paths.
username_1: Fixing the inkscape command-line bug boils down to ugly CMake hackery? I would rather push eps figures to the manual repository. Also I do not have this platform installed to check if one can fix it in a different fashion.
username_0: Adding the eps files has the disadvantage that all users will have to download more data as the tarball is bigger! I think the fix wasn’t that ugly....
username_1: If you have access to s390x, can you check if
```
inkscape --file=<dir>/<file>.svg --export-eps=<dir>/<file>.eps
```
works? Or even put the paths in quotes: "<dir>/<file>.svg" and "<dir>/<file>.eps"
Status: Issue closed
|
openshift/jenkins-client-plugin | 448866192 | Title: openshift.process fails converting stdOut to Json
Question:
username_0: I'm trying to `openshift.process(.....)` my template, and then `openshift.apply(....)` it.
When invoking
```
def configYaml = script.readYaml file: deploymentConfigFile
def openshift = script.openshift
openshift.logLevel(3)
openshift.withCluster(clusterName) {
openshift.withProject(project) {
if (configYaml.kind == "Template") {
def list = openshift.process(configYaml, "-p", "PROJECT=${project}")
...
```
I sometimes get this message:
```
Verbose sub-step output:
Command> oc --server=https://myserver --insecure-skip-tls-verify --namespace=myproject --loglevel=3 --token=XXXXX process -f /home/jenkins/workspace/myJenkinsProject/DeployTo-DEV3.9/process2542071550542573450.markup -p PROJECT=myproject -o=json
Status> 0
StdOut>I0527 13:09:35.632846 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource
I0527 13:09:35.644406 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource
I0527 13:09:35.651263 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource
I0527 13:09:35.654638 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
```
It looks like the response from the server isn't properly parsed, resulting in
```
Exception encountered: JsonException (Unable to determine the current character, it is not a string, number, array, or object
The current character read is 'I' with an int value of 73
Unable to determine the current character, it is not a string, number, array, or object
line number 1
index number 0
I0527 13:09:35.632846 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource
^)
```
How can this be fixed or worked around?
Answers:
username_0: I removed the logLevel(3) - but then I cannot see the
`oc --namespace --server......` commands anymore. These are required in the log files for better traceability.
I would suggest changing the method and adding a statement to "extract" the JSON part:
```
// OpenShiftDSL.groovy
public HashMap serializableMap(String json) {
// drop any leading *.go log output from the passed argument (present when a loglevel is set)
String jsonStripped = json.drop(json.indexOf('{'))
....
}
```
username_1: Hello @username_0 !
openshift.process is only meant to parse templates; if you pass in anything else, it should fail to read the contents of the file, as the provided resource is not a template. I tested this out with the `oc process` command for a given file with a Service defined and got the following error.
```
error: unable to parse "nodejsexheadless.yaml", not a valid Template but *v1.Service
```
When I tested openshift.process with the following in the Jenkins Pipeline
```
openshift.withCluster() {
def template = openshift.process(templatePath)
println(template.kind)
oc.apply(template)
}
```
it worked, with the following output in the logs noting the `kind` of all the resources in the template, and it also created all the resources in the template.
```
[Secret, Service, Route, ImageStream, BuildConfig, DeploymentConfig, PersistentVolumeClaim, Service, DeploymentConfig]
```
You won't be able to `openshift.process` any other resource apart from Templates successfully, as it is not supported by openshift.process.
username_0: Hello @username_1, I'm passing a valid template.yaml to openshift.process(.....); see my code sample:
```
def configYaml = script.readYaml file: deploymentConfigFile
def openshift = script.openshift
openshift.logLevel(3)
openshift.withCluster(clusterName) {
openshift.withProject(project) {
if (configYaml.kind == "Template") {
def list = openshift.process(configYaml, "-p", "PROJECT=${project}")
...
```
**BUT**: if you enable **_logLevel(3)_**, the openshift.process(....) call will internally first get back a String starting with some *.go output:
```
StdOut>I0527 13:09:35.632846 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource I0527 13:09:35.644406 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource I0527 13:09:35.651263 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource I0527 13:09:35.654638 27 cached_discovery.go:77] skipped caching discovery info due to the server could not find the requested resource ..........
```
... and after those *.go debug statements, the interesting *.json output:
```
.....{ "kind": "List", "apiVersion": "v1", "metadata": {}, "items": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": {
```
This whole (debug statements + JSON) r.out String will then be passed to serializableMap(String json) - with the assumption that it's plain JSON - but it is NOT!
```
public ArrayList<HashMap> process(Object obj,Object... oargs) throws AbortException {
...
// Output should be JSON; unmarshall into a map and transform into a list of objects.
return unwrapOpenShiftList(serializableMap(r.out));
}
```
=> serializableMap(String out) should first apply a RegEx like ^.*(\{.*\})$ or the mentioned drop(....), to extract the *.json from the given output.
username_2: @username_0 @username_1 we need the contents of the `configYaml` to debug
We've got test cases in our regression bucket that look like @username_0 's example usage of `openshift.process` but the contents of the yaml are the key. More pain wrt groovy string parsing. There has been pain with this in the past.
Also @username_0, just as a sanity check, clarify the plugin version just so we know we are talking the same level (though I suspect you are at or near the latest, if I recall prior interactions).
username_0: Hi @username_2, sorry for missing the version number, but it's just the latest version.
To be able to reproduce the issue you can try with any Template, but you need to set the logLevel(..) - and you have to make sure that the CLI will complain with a message (which may depend on the current state of your openshift cluster, for example if the image was deployed before or whatever).
In my example I mentioned the output `skipped caching discovery info due to the server could not find the requested resource` from the CLI, before the *.json parts.
The Result.out (r.out) will contain the **whole** output of the underlying API call(s) - and **NOT** only the *.json structure! Whenever you want to convert this into a Map, you have to _strip off any lines that are NOT *.json_.
Anyway, here's my DeploymentConfig (replace ${project.version} with a valid string).
```
---
kind: Template
apiVersion: v1
metadata:
name: ms-azure-rawdata-extract-deployment
annotations:
description: Create ms-azure-rawdata-extract-deployment deployment on Openshift.
tags: ms-azure-rawdata-extract-deployment
labels:
template: ms-azure-rawdata-extract-deployment
group: ms-azure-rawdata-extract-deployment
message: ms-azure-rawdata-extract-deployment deployment on Openshift
objects:
- apiVersion: v1
kind: DeploymentConfig
metadata:
labels:
app: iqpress-ol-ms-azure-rawdata-extract
description: "iqpress-ol-ms-azure-rawdata-extract"
name: ms-azure-rawdata-extract-deployment
spec:
replicas: 1
selector:
deploymentconfig: ms-azure-rawdata-extract-deployment
strategy:
activeDeadlineSeconds: 21600
recreateParams:
timeoutSeconds: 600
resources: {}
type: Recreate
template:
metadata:
annotations:
prometheus.io/path: /metrics
prometheus.io/port: "8081"
prometheus.io/scheme: https
prometheus.io/scrape: "true"
labels:
app: iqpress-ol-ms-azure-rawdata-extract
description: "RawData-Extraction"
deploymentconfig: ms-azure-rawdata-extract-deployment
spec:
containers:
- name: iqpress-ol-ms-azure-rawdata-extract
# TODO need to qualify the internal registry here ?
image: docker-registry.default.svc:5000/${PROJECT}/iqpress-ol-ms-azure-rawdata-extract:${project.version}
imagePullPolicy: Always
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "confluent-kafka-service:9092"
- name: KAFKA_TOPIC_INGRESS
value: "RawData"
- name: KAFKA_TOPIC_EGRESS
[Truncated]
readinessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 8081
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
memory: 256Mi
triggers: []
parameters:
- name: PROJECT
description: "project need to be set if different from default 'iqpress'"
value: "iqpress"
```
username_3: I am out of the office until 03.06.2019.
I will answer your message after my return.
In urgent cases, please contact <NAME> (AC-6606).
Note: This is an automatic reply to your message "Re: [openshift/jenkins-client-plugin] openshift.process fails converting stdOut to Json (#271)" sent on 31.05.2019 08:48:05.
This is the only notification you will receive while this person is away.
username_2: thanks for the clarification @username_0
I've got it now. And I've confirmed that the k8s code that `oc` uses prints that particular log to StdOut, and that is why things are getting muddled.
This is tricky to reproduce since you need to be on a system with the right client/server/apiObject combination to produce this.
And it *might* be tricky to fix since the stdout/stderr assumptions have been violated. There might be some simple character tests to discern that something is json/yaml, or that we are dealing with a log message. But it is going to get kludgy ... we'll see. Otherwise this will turn into a restriction.
Back to reproducing, your template above certainly reveals the api objects. Nothing crazy there.
Can you clarify the version of your `oc` binary in this case, and the version of the cluster, when this occurs so we can approximate the failure on the group/api version lookup?
And is it possible for you to try out patches that we build?
username_2: If you can run with a manually built hpi file @username_0, here it is
[client-plugin-1.zip](https://github.com/openshift/jenkins-client-plugin/files/3242391/client-plugin-1.zip)
If not, I may just cut a new version with the above PR sometime Monday / June 3 my time
username_2: v1.0.31 has been initiated at the jenkins update center |
datadvance/DjangoChannelsGraphqlWs | 336797195 | Title: Could you please provide an working example with django and channel2
Question:
username_0: I tried to use the one you provided in the readme. However, I could not get it to work. I am wondering if you can provide a working example in this project with Django and Channels 2.
Thanks a lot.
Answers:
username_1: Indeed it is necessary. We will add this soon.
username_2: It would be interesting to add to the readme the files to edit or create, to make it easier to implement DjangoChannelsGraphqlWs.
username_1: Dear, @username_2 I am not sure I understand your previous comment. Could you please explain?
Status: Issue closed
username_1: Example with GraphiQL browser is added in v0.2.0. Enjoy ;-) |
torilmud/issues | 664583260 | Title: Level 6 Magic Missile Scroll Casting as though I were Level 15+
Question:
username_0:
```
You recite an intricate divination scroll which turns to dust in your hands.
You feel informed:
Name 'a minute magical scroll'
Keyword 'minute magical scroll magicmissile', Item type: SCROLL
Item can be worn on: NOBITS
Item will give you following abilities: NOBITS
Item is: NOBITSNOBITS
Weight: 0, Value: 5000
Level 6 spells of:
magic missile
< 892h/892H 170v/170V > recite scroll device
You recite a minute magical scroll which turns to dust in your hands.
Your spell is partially absorbed by A large metal device.
You watch with self-pride as the magic missile hits A large metal device.
Your spell is partially absorbed by A large metal device.
You watch with self-pride as the magic missile hits A large metal device.
Your spell is partially absorbed by A large metal device.
You watch with self-pride as the magic missile hits A large metal device.
Your spell is partially absorbed by A large metal device.
You watch with self-pride as the magic missile hits A large metal device.
Your spell is partially absorbed by A large metal device.
You watch with self-pride as the magic missile hits A large metal device.
```
A level 6 magic missile should only produce 2 missiles, not 5.
Answers:
username_1: This was resolved in the last patch.
Status: Issue closed
|
google/go-cloud | 357393588 | Title: Add listing of objects in a bucket based on some prefix
Question:
username_0: First of all, this is a super interesting project that is much needed in cloud infrastructure.
I think it would be neat to add the ability to get a list of all objects in a bucket based on some prefix. For instance, I often find myself doing stuff like
```
gsutil ls gs://my_bucket/my_prefix/*
```
whether using the command line or in code.
Happy to discuss further on a scope for this functionality.
Status: Issue closed
Answers:
username_1: Hi Daniel, thanks for checking out Go Cloud!
I think this issue is a dupe of #241. |
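For readers arriving later: prefix listing was tracked in #241, and the `blob` package now exposes a `List` API. A rough sketch of using it (bucket URL and prefix are placeholders):
```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob" // registers the gs:// scheme
)

func main() {
	ctx := context.Background()
	bucket, err := blob.OpenBucket(ctx, "gs://my_bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// List everything under the prefix, like `gsutil ls gs://my_bucket/my_prefix/*`
	iter := bucket.List(&blob.ListOptions{Prefix: "my_prefix/"})
	for {
		obj, err := iter.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(obj.Key)
	}
}
```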
jlippold/tweakCompatible | 482084503 | Title: `Apps Manager` working on iOS 12.4
Question:
username_0: ```
{
"packageId": "com.tigisoftware.appdatamanager",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.tigisoftware.appdatamanager",
"deviceId": "iPhone10,5",
"url": "http://cydia.saurik.com/package/com.tigisoftware.appdatamanager/",
"iOSVersion": "12.4",
"packageVersionIndexed": true,
"packageName": "Apps Manager",
"category": "Utilities",
"repository": "tigisoftware.com",
"name": "Apps Manager",
"installed": "1.5.0-12",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.tigisoftware.appdatamanager",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "This tool provides the way to WIPE, BACKUP, RESTORE AppData for installed Apps",
"latest": "1.5.0-12",
"author": "TIGI Software",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
snowplow/snowplow | 15064927 | Title: Snowplow CLI: re-implement S3 file moves to Processing using S3DistCp
Question:
username_0: Evaluate how feasible this is.
Performance?
File renaming options?
Answers:
username_0: Blocked by #1775
username_1: Here's the brute force way of doing things I came up with:
- keep the original folder structures (no flattening): as a result there wouldn't be any overwrite and we would have to glob the input path of the enrich step
- handle the --end and --start flags on a per collector format basis
Advantages:
- everything is s3distcp
- fairly generic (if we wish to add another format, we don't have to write and support another script or binary)
Drawbacks:
- no renaming, correct me if I'm wrong but since our folder structure is not flattened that wouldn't be an issue
Would love feedback, as I don't know if the file renaming serves other purposes.
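For illustration, the kind of S3DistCp invocation this approach implies (an untested sketch; bucket names and the pattern are placeholders):
```
s3-dist-cp --src s3://my-raw-bucket/in/ --dest s3://my-raw-bucket/processing/ --srcPattern '.*\.lzo'
```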
username_0: Hey @username_1 - TBH I am happy to drop the `--end` and `--start` arguments - we have never used these (although I know a few in the community did) and I think they have outlived their purpose and are unnecessarily complicated.
I don't think the renaming is essential. We just need to be careful that the sub-folder structure is preserved through the pipeline to prevent accidental overwrites.
username_1: The sub-folder structure will be fairly short-lived as it'll only be persisted up to enrich which will have a flat output just like right now.
I'll create a ticket for removing --end and --start then :+1: .
username_0: Right - but the sub-folder structure needs to be persisted when archiving the raw files out of staging...
username_1: True, that's something I haven't investigated yet.
Status: Issue closed
|
MicrosoftDocs/azure-docs | 921725918 | Title: Add Note for Redirect URI's for Azure Government
Question:
username_0: Is it possible to make a note for the Redirect URI as it relates to Government customers?
Could the verbiage be something to the effect of "If the Redirect URI is left as www.microsoft.com, the user will not need to sign in" or "Please update the Redirect URI to a website for the user to sign in, i.e. Web Application, Azure Gov Portal, My Applications Portal, etc."?
Reason: a customer encountered an issue when attempting to configure Shared Image Gallery access across tenants, as there was no pop-up or notification to indicate success. CSP Administrators do not have a viable way of assisting with troubleshooting from the tenant level.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b4c09d03-24b1-f376-34fd-cbfd8d29d3d7
* Version Independent ID: 763073bf-734e-e079-2d4c-97c097eb7c88
* Content: [Share gallery images across tenants in Azure - Azure Virtual Machines](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/share-images-across-tenants)
* Content Source: [articles/virtual-machines/windows/share-images-across-tenants.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/windows/share-images-across-tenants.md)
* Service: **virtual-machines**
* Sub-service: **shared-image-gallery**
* GitHub Login: @axayjo
* Microsoft Alias: **akjosh**
Answers:
username_1: Thanks for the feedback! I have assigned the issue to the content author to investigate further and update the document as appropriate. |
PublicarNuevosNegocios/ProveedoresOnLine | 98204289 | Title: DESIGN
Question:
username_0: Review the stars that are being rendered over the following provider.

Answers:
username_1: I don't see the issue on my computer; no matter how much I look at the styles, they look fine to me.
Status: Issue closed
|
sqlalchemy/sqlalchemy | 510404589 | Title: Question about UnicodeDecodeError
Question:
username_0: (1) Error code:
`df_result = pd.read_sql_query(sql, engine)`
(2) Error message:
`UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 41: invalid start byte`
(3) Traceback:
```
site-packages\pandas\io\sql.py, line 314
site-packages\pandas\io\sql.py, line 1108
site-packages\sqlalchemy\engine\result.py, line 1216
site-packages\sqlalchemy\engine\base.py, line 1475
site-packages\sqlalchemy\engine\result.py, line 1211
site-packages\sqlalchemy\engine\result.py, line 1161
```
(4) Error Reason:
DB encoding is iso-8859-1, nls_lang is AMERICAN_AMERICA.WE8ISO8859P1,
but the DB table contains a non-breaking space character (encoded as 0xa0), which UTF-8 can't decode
(5) Solution:
change sqlalchemy version to 1.2.6, it's ok
`pip install SQLAlchemy==1.2.6`
(6) So my question is: what's the difference between version 1.2.6 and version 1.3.10? Could you help me?
Answers:
username_1: there is an encoding_errors flag that will be in 1.3.11:
https://docs.sqlalchemy.org/en/13/changelog/changelog_13.html#change-ecabed6215423c25dbaabffd240cc534
try out from git:
https://github.com/sqlalchemy/sqlalchemy/archive/rel_1_3.zip
then set encoding_errors='ignore' in create_engine()
pls confirm it works thanks
username_1: #4799
username_1: 1.2 is not using SQLAlchemy's decoder; it uses cx_Oracle's, which I am beginning to suspect implicitly ignores encoding errors, even though this is [configurable](https://cx-oracle.readthedocs.io/en/latest/api_manual/cursor.html). What cx_Oracle version are you using, please?
username_1: Oh, also: is this Python 2 you are using? In Python 3 there's no difference.
username_0: @username_1 Thanks for your help, it works very well. I will list my operation here.
Install:
1. python setup.py install
2. easy_install SQLAlchemy-1.3.11.dev0-py2.7-win-amd64.egg
3. pip list (sqlalchemy version is 1.3.11.dev0)
Try:
engine = create_engine(ProdConfig.SQLALCHEMY_DATABASE_URI, encoding_errors='ignore')
It works fine!
username_1: are you using python 2 or 3 ?
username_0: Python2.7.15
username_1: ok that makes sense then, wait for 1.3.11
Status: Issue closed
|
TerriaJS/nationalmap | 335613577 | Title: Investigate traffic data from data.vic.gov.au - error on NationalMap
Question:
username_0: When trying to add the Traffic Count locations (under Victorian Government group) harvested from data.vic.gov.au it doesn't work (it's trying to load without succeeding).
Data is available in multiple formats here: https://www.data.vic.gov.au/data/dataset/traffic_count_locations
VIC team responded that data sources are working and are accessible.
Answers:
username_1: The KML (the one NationalMap chooses by default) doesn't work because it's big, 68 MB.
username_0: So the quickest solution would be to make the csv a csv-geo-au?
username_1: Just tried fixing the column names, and even the CSV is too big to work well (it crashes the tab). It has over 57 thousand points. The quickest solution is to expose it as an Esri MapServer. If that's not an option, we'd have to do some significant work to make this work well.
username_0: ok, thanks, I'll let the VIC team know. I don't think it will be a priority to fix at our end, since they have more transport data on NatMap which works fine (although not traffic specific).
username_2: Dataset no longer available at that link
Status: Issue closed
|
ant-media/Ant-Media-Server | 326034700 | Title: Any URL or string can be added as Stream Source
Question:
username_0: Stream Source URL only supports RTMP, HLS, RTSP streams.
If the string does not match this format, do not let the form be submitted.
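A minimal client-side check along these lines (a sketch only; the accepted schemes/extensions should match whatever the server actually supports):
```js
// Hypothetical validation before allowing form submission
function isValidStreamSourceUrl(url) {
  // RTMP/RTSP by scheme; HLS as an http(s) URL ending in .m3u8
  if (/^rtmps?:\/\/.+/i.test(url) || /^rtsp:\/\/.+/i.test(url)) return true;
  return /^https?:\/\/.+\.m3u8(\?.*)?$/i.test(url);
}
```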
Status: Issue closed
Answers:
username_1: Issues found during testing were recorded as new items in #130.
Status: Issue closed
username_2: comments are written in issue #130
godotengine/godot-proposals | 563310126 | Title: GDScript: Make print use var2str for basic variables and auto-separation
Question:
username_0:
**Describe the project you are working on:**
Any.
**Describe the problem or limitation you are having in your project:**
**print** really only exists for debugging purposes but `print(1, 1.0, "1")` gives `111` which is very confusing.
**Describe how this feature / enhancement will help you overcome this problem or limitation:**
Makes it much quicker to debug, and less confusing.
**Show a mock up screenshots/video or a flow diagram explaining how your proposal will work:**
`print(1, 1.0, "1")` now instead gives `1, 1.0, "1"` which is very clear and easy.
**Describe implementation detail for your proposal (in code), if possible:**
Implementation is simple. A single string argument, and things like Nodes etc., should not use var2str. The only question would be whether to support different separators, but I don't think it's necessary, and since GDScript still doesn't support named arguments it wouldn't fit well.
I've already implemented it myself but I think it's a good candidate for someone who's never contributed before.
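For anyone wanting this behavior today, a script-level approximation (Godot 3.x syntax; a sketch, not the actual engine patch):
```gdscript
# Prints each argument through var2str, comma-separated, so
# print_vars([1, 1.0, "1"]) outputs: 1, 1.0, "1"
func print_vars(args: Array) -> void:
    var parts := PoolStringArray()
    for a in args:
        parts.append(var2str(a))
    print(parts.join(", "))
```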
**If this enhancement will not be used often, can it be worked around with a few lines of script?:**
It will be used often and by everyone.
**Is there a reason why this should be core and not an add-on in the asset library?:**
It is a core feature that everyone benefits from all the time.
Answers:
username_1: Are you aware of `prints` and `printt`?
username_0: Yes, in my opinion, prints is a pretty much useless function since a simple space is not useful to clearly delineate between strings (which may have spaces in them) and sometimes not even between numbers; it can get confusing easily. printt might be useful sometimes, but I certainly prefer commas since that's how you write code, so I've never actually used that one either, but most importantly neither function uses var2str.
I'd add var2str to all of them, but I don't think most people even know about them. I only found them when I patched the print function. I wouldn't mind if they were removed even - I had already mostly forgotten about them. I just remembered to make this proposal today since the print function should probably be changed for 4.0.
username_2: Many people use multiple `print()` arguments as a substitute for string concatenation, how would this play with the existing situation?
username_0: @username_2 This does break compatibility (although it's a minor thing). It would require people (who use it the way you say) to change from `print(a, b, c)` to `print(a+b+c)` and use `str(a)` to concatenate non-strings to get the exact same behavior. I would say using + is a better way of doing it although having to use str for some variables is inconvenient.
We also have format strings with % for pretty printing. I'm not against a dump function however, although I don't think I'd want each variable on it's own line most of the time (you "run out" of lines so quickly). Would dump also write the name of the variable? That would certainly be convenient like `dump(a, b, c)` gives `int a = 1; float b = 1.0; str c = "1";` or something. And maybe no arguments prints out all variables organized by scope or something - in that case I'd want each scope on a separate line.
I think if GDScript could get named optional arguments we'd be able to make both print and a potential dump function much better. print could have sep=" ", end="\n" as default like in Python so you can print and dump easily without options (or only the options you want to change) but a sep="" can be used for current behavior of print and dump could have similar options.
Of course, named optional arguments would be super useful all over the place.
username_2: I would really like to implement this, but unfortunately, I have no idea how to do it. |
elishacloud/dinputto8 | 501774880 | Title: Cursor randomly flickers onto screen in Shogo
Question:
username_0: This is an issue that's been known about for a while but mostly ignored/tolerated; I was reading into solutions for the problem and really couldn't find any. We think it's maybe caused by dgVoodoo 2, but it's necessary to use that utility, otherwise the game doesn't render or capture correctly.
Answers:
username_1: I have never seen this cursor flickering myself, but granted I have not played the game more than a few minutes at any given time. If this is caused by dgVoodoo2, it should be easy to find out by just removing dgVoodoo2 and playing the game natively to see if the issue happens.
For me, I use [dxwrapper](https://github.com/username_1/dxwrapper)'s [`Dd7to9`](https://github.com/username_1/dxwrapper/wiki/DirectDraw-to-Direct3D9-Conversion) feature, which can convert the game to use Direct3D9. This feature requires software rendering because I have not yet implemented the 3D APIs.
Here is what I am using: [dxwrapper.zip](https://github.com/username_1/dinputto8/files/3684491/dxwrapper.zip)
_Note: make sure you use software rendering with this!_

username_1: I should also mention that the quality is much better with 3D rendering. If you want to play with 3D rendering you can use this version. It will even let you play in 4K resolution.
Updated files with 3D rendering support: [dxwrapper.zip](https://github.com/username_1/dinputto8/files/3684524/dxwrapper.zip)
username_0: I was personally thinking that this might relate to the issue at hand: https://www.gamedev.net/forums/topic/332051-dinput--how-to-hide-the-mouse/
username_1: The mouse could definitely be hidden that way. Try this build. I just hard coded it to hide the mouse.
Just unzip this into the Shogo game folder: [dinput.zip](https://github.com/username_1/dinputto8/files/3684583/dinput.zip)
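For reference, the hard-coded approach conceptually amounts to something like this (a Win32 sketch, not the actual patch):
```cpp
#include <windows.h>

// Hide the Windows cursor. ShowCursor keeps an internal display counter,
// so decrement until the cursor is actually hidden (count < 0).
static void HideSystemCursor()
{
    while (ShowCursor(FALSE) >= 0)
    {
    }
}
```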
username_0: Ah, that is precisely what we were looking for, thanks! It would be nice to have a customization .ini for dinputto8 that gave one an option to toggle this and other things, but I think this is pretty great otherwise. |
grafana/agent | 1106765517 | Title: Allow specifying the name of a job in a {Service,…}Monitor
Question:
username_0: I'm using grafana-agent-operator, and have it manage the `default/kubelet` service.
I created a `ServiceMonitor` to scrape metrics from there:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app: kubelet
name: kubelet
namespace: default
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
port: https-metrics
relabelings:
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
path: /metrics/cadvisor
port: https-metrics
relabelings:
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
path: /metrics/probes
port: https-metrics
relabelings:
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
path: /metrics/resource
port: https-metrics
relabelings:
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
[Truncated]
- action: keep
regex: scheduler_(.+)
sourceLabels:
- __name__
- replacement: kube-scheduler
targetLabel: job
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
namespaceSelector:
matchNames:
- default
selector:
matchLabels:
app.kubernetes.io/name: kubelet
k8s-app: kubelet
```
However, it seems it's not possible to influence the name of the job, even with the above relabelConfig - in my case, it still picks "kubelet".
Answers:
username_0: I checked the generated `/var/lib/grafana-agent/config.agent.yaml` in the pod. While it lists the config:
```yaml
#[…]
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
honor_labels: true
job_name: serviceMonitor/default/kube-scheduler/0
kubernetes_sd_configs:
- namespaces:
names:
- default
role: endpoints
relabel_configs:
- source_labels:
- job
target_label: __tmp_prometheus_job_name
- action: keep
regex: kubelet
source_labels:
- __meta_kubernetes_service_label_app_kubernetes_io_name
- action: keep
regex: kubelet
source_labels:
- __meta_kubernetes_service_label_k8s_app
- action: keep
regex: https-metrics
source_labels:
- __meta_kubernetes_endpoint_port_name
- regex: Node;(.*)
replacement: $1
separator: ;
source_labels:
- __meta_kubernetes_endpoint_address_target_kind
- __meta_kubernetes_endpoint_address_target_name
target_label: node
- regex: Pod;(.*)
replacement: $1
separator: ;
source_labels:
- __meta_kubernetes_endpoint_address_target_kind
- __meta_kubernetes_endpoint_address_target_name
target_label: pod
- source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- source_labels:
- __meta_kubernetes_service_name
target_label: service
- source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- source_labels:
- __meta_kubernetes_pod_container_name
target_label: container
- replacement: $1
source_labels:
- __meta_kubernetes_service_name
target_label: job
- replacement: https-metrics
target_label: endpoint
[Truncated]
target_label: __tmp_hash
- action: keep
regex: 0
source_labels:
- __tmp_hash
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
#[…]
```
This doesn't show up in the targets that grafana-agent then knows about:
```
❯ curl http://localhost:8080/agent/api/v1/targets | jq | grep serviceMonitor/default/kube-scheduler/0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 136k 0 136k 0 0 215k 0 --:--:-- --:--:-- --:--:-- 215k
```
username_1: 🤔 Hmm, it's weird that it's not showing up in the targets list. The scrape configs seem the same. Is the other service monitor showing up? i.e., `serviceMonitor/default/kubelet/X`?
Silly question: how long did you wait after making the latest change? I've found that the config reloader we use can take a minute or two for changes to the generated config to propagate.
username_0: Nope, I also kicked the grafana-agent pod - no luck.
The other service monitors are showing up:
```
❯ curl http://localhost:8080/agent/api/v1/targets | jq ".data[].target_group" | grep serviceMonitor
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 136k 0 136k 0 0 199k 0 --:--:-- --:--:-- --:--:-- 199k
"serviceMonitor/cert-manager/cert-manager/0"
"serviceMonitor/default/kubelet/0"
"serviceMonitor/default/kubelet/0"
"serviceMonitor/default/kubelet/0"
"serviceMonitor/default/kubelet/0"
"serviceMonitor/default/kubelet/0"
"serviceMonitor/default/kubelet/0"
"serviceMonitor/default/kubelet/1"
"serviceMonitor/default/kubelet/1"
"serviceMonitor/default/kubelet/1"
"serviceMonitor/default/kubelet/1"
"serviceMonitor/default/kubelet/1"
"serviceMonitor/default/kubelet/1"
"serviceMonitor/default/kubelet/2"
"serviceMonitor/default/kubelet/2"
"serviceMonitor/default/kubelet/2"
"serviceMonitor/default/kubelet/2"
"serviceMonitor/default/kubelet/2"
"serviceMonitor/default/kubelet/2"
"serviceMonitor/default/kubelet/3"
"serviceMonitor/default/kubelet/3"
"serviceMonitor/default/kubelet/3"
"serviceMonitor/default/kubelet/3"
"serviceMonitor/default/kubelet/3"
"serviceMonitor/default/kubelet/3"
"serviceMonitor/ingress-nginx/ingress-nginx-controller/0"
"serviceMonitor/ingress-nginx/ingress-nginx-controller/0"
"serviceMonitor/ingress-nginx/ingress-nginx-controller/0"
"serviceMonitor/ingress-nginx/ingress-nginx-controller/0"
"serviceMonitor/ingress-nginx/ingress-nginx-controller/0"
"serviceMonitor/kube-system/coredns/0"
"serviceMonitor/kube-system/kube-state-metrics/0"
"serviceMonitor/node-exporter/node-exporter/0"
"serviceMonitor/node-exporter/node-exporter/0"
"serviceMonitor/node-exporter/node-exporter/0"
"serviceMonitor/node-exporter/node-exporter/0"
"serviceMonitor/node-exporter/node-exporter/0"
"serviceMonitor/node-exporter/node-exporter/0"
```
username_1: Hmm, that's odd. Can you upload the entire generated config?
username_0: Please find the generated config here: [agent.yaml.gz](https://github.com/grafana/agent/files/7889713/agent.yaml.gz)
In `scrape_configs`, I removed some lines (and censored some URLs/usernames), but it should contain the kube-scheduler/apiserver scrape configs, as well as the kubelet config.
username_1: I think I found the problem:
```yaml
action: keep
regex: scheduler_.+
```
Did you mean for this to be `metricRelabelings`? Unless there's a label called `scheduler_*` it will drop the target.
```yaml
metricRelabelings:
- source_labels: [__name__]
regex: scheduler_.+
action: keep
```
username_0: Why `metricRelabelings`, and not `relabelings`? I'd assume it wouldn't affect whether something shows up in the list of targets or not.
Edit: Indeed, moving this to `metricRelabelings` (plus the kube-scheduler fix) made things show up in the list of targets.
Uuugh, I assume the `__name__` label isn't available (yet) in `relabelings` (so we dropped everything during the discovery phase)? Is there a way to improve the UX here?
username_1: Right, so `metricRelabelings` is for relabeling/dropping individual metrics from a target, while `relabelings` is for the entire target itself - it is processed before any scrape even happens. If a target gets dropped as part of the relabel rules, you won't get any metrics at all. They're separated for performance reasons; it'll be way cheaper to apply common relabelings at the target level rather than for each metric a target exposes.
Like you guessed, `__name__` will map to the metric name from individual metrics and there is no appropriate target-wide value.
Do you have any suggestions for how you'd expect this to work that would help improve the UX?
username_1: I imagine log spam would be the argument from the Prometheus side.
Would it help if there was an endpoint to also show targets pre-relabeling? That way you can do a diff between the two and notice that relabeling dropped something. It wouldn't help as much for _why_ something got dropped, though.
username_0: Wait, isn't the `__name__` label a semi-reserved thing, that can't be used in `relabeling`? At least in that case, that's for sure a user error, so could be logged.
username_1: Sort of, maybe. I think Prometheus reserves the right to create new labels prefixed with `__`, but it's valid to use them as source labels.
`__name__` at the target level doesn't really have any meaning; it'd get overridden by individual metrics and wouldn't show up (AFAIK).
It sounds like two things would've helped here:
1. Some kind of tooling/logging/whatever to examine a relabel rules and point out that a config doesn't make sense
2. Some tooling to examine targets pre-relabeling and maybe some way to find out what happens to those targets post-relabeling (i.e., "target dropped at rule X") |
ngneat/forms-manager | 537940897 | Title: Improve Typings
Question:
username_0: Currently, when we use the `path` parameter in the `selectValue`, and `selectControl` methods, we need to pass a generic that indicates what should be the type of the `value` property:
```ts
selectValue<string>('login', 'name');
```
This isn't the end of the world, but we can employ the technique used in this [question](https://stackoverflow.com/questions/59318560/typescript-type-object-nested-path-string/59320963#59320963) to infer it automatically. So the expected usage should be:
```ts
selectValue('login', 'name');
selectValue<string>('group', ['some', 'nested', 'control']);
```
Answers:
username_1: I can take this on tomorrow
username_1: I'm having some issues trying to actually implement this with the current setup of how the `path` is resolved, and with the technique referenced in the issue.
It will return a flattened list of all keys in the `FormState`; however, `selectValue`'s `path` argument allows `.` to separate further nesting.
If we have a `FormState` of:
```ts
export interface AppForms {
onboarding: {
name: string;
age: number;
address: {
street: string;
city: string;
country: string;
}
}
}
```
and want to get the `city` value we would use `selectValue('onboarding', 'address.city')` which would not match a type using the technique above as the available types would be:
```ts
'onboarding' | 'name' | 'age' | 'address' | 'street' | 'city' | 'country'
```
Next, having a flattened list such as this will prevent us from inferring the type of the `path` argument, as we cannot do `FormState['city']`; it would need to be `FormState['onboarding']['address']['city']`.
The technique also raises an interesting dilemma where TypeScript currently only supports recursively mapping a Type to 50 levels of depth. Anything beyond this will throw an error.
This also raises an interesting question about whether all our Type information should live in one app-specific Interface, or whether multiple forms should be allowed to use their own Types that do not have to be declared at the app level.
My thinking here is for people using Nx, where their form may live in a lib that can be pulled into any app, and also for third party libs that may setup forms that the app can then query from.
This issue of inferring the type certainly seems to be a lot more complex than I had originally believed it would be. I'm not sure if I am simply misunderstanding something or if I am overthinking it, but I think that for the time being, having to manually state what type is expected to be returned may be the only solution.
username_0: You probably missed the example in the first comment. We need to change the dot notation to an array:
```
selectValue('group', ['some', 'nested', 'control']);
```
username_1: I'll give this another attempt later. Using the array notation for nested levels should hopefully help. I'll let you know how I get on with it.
username_1: ? T[1] extends null
? A[R][T[0]]
: A[R][T[0]][T[1]]
: A[R][T[0]][T[1]][T[2]];
```
Again, pretty hairy.
However, it does work as can be shown in the example below:
```ts
class Something {
a = {
b: {
c: {
d: 'string',
e: 2,
},
},
};
h = {
num: 1,
};
}
function myFunc<T extends keyof Something, R extends NestedControlKey<Something, T>>(
form: T,
path: R
): NestedControlType<Something, T, R> {
return undefined;
}
const myValue = myFunc('h', ['num']);
const mySecondValue = myFunc('a', ['b', 'c', 'd']);
```
TypeScript can infer the types correctly :


It also provides intelligent suggestions when completing the array of values:

@username_0 If you have any thoughts, improvements etc, please let me know.
I spent a long time trying to get the recursion method to work but the close I got was this a result that would require the api to look like this:
```ts
selectValue('a', ['b', ['c', ['d']]]);
```
As I could not flatten the array after each recursive call popped off the stack.
username_0: Yes, I see what you mean. I need to investigate it further. Let's leave it as is for now. Thanks for the work. I appreciate it.
Btw, what do you think about this [issue](https://github.com/ngneat/forms-manager/issues/3)?
Status: Issue closed
|
quasarframework/quasar | 291133657 | Title: QModal not closing on iOS 9.3.x
Question:
username_0: Hello, I'm trying to use the QModal on an iOS device running 9.3.x, and the modal will open, but not close again.
example code:
```html
<q-modal ref="bookingModal">
<q-btn @click="$refs.bookingModal.close()">Close</q-btn>
</q-modal>
```
It **will** close on newest android, firefox, iOS etc.
I know this is probably because of the old version of iOS, just thought you should know.
Answers:
username_0: I think something else is fucking it up, my bad
Status: Issue closed
username_0: No, it's not me, it's sporadic; sometimes it works, and sometimes it doesn't
username_0: Hello, I'm trying to use the QModal on an iOS device running 9.3.x, and the modal will open, but not close again.
example code:
```html
<q-modal ref="modal">
<q-btn @click="$refs.modal.close()">Close</q-btn>
</q-modal>
<q-btn @click="$refs.modal.open()">Open</q-btn>
```
It **will** close on newest android, firefox, iOS etc.
I know this is probably because of the old version of iOS, just thought you should know.
Quasar: 0.14.7
iOS: 9.3.x
username_1: This may happen only while developing due to HMR & window history API issues. It does not happen in production. Similar tickets have been reopened and this has been addressed in v0.15, so closing it.
Status: Issue closed
username_0: Oh, okay, sorry.. I should have used the search bar.. |
biocompibens/pySpacell | 596356605 | Title: 3D support?
Question:
username_0: Hello,
I like your approach to make it simpler to do spatial neighbourhood analysis on images.
I was just wondering, is there a way to feed 3D information to these algorithms? By looking at the code it seems that only XY coordinates are supported.
Answers:
username_1: Dear Dominik,
unfortunately, only 2D images are supported right now.
Best,
France.
Status: Issue closed
|
fatturaelettronicaphp/FatturaElettronica | 1099203285 | Title: Error in $eDocument->isValid()
Question:
username_0: Hi, I tried to validate an XML with:
```php
var_dump($eDocument->isValid());
```
but I get this warning:
```
Warning: DOMDocument::schemaValidateSource(): Invalid Schema in C:\wamp64\www\fatturaelettronicaphp\vendor\fatturaelettronicaphp\fattura-elettronica\src\Validator\DigitalDocumentValidator.php on line 43
```
and the validation result is "false".
Answers:
username_1: I see you're using Windows. That error points to a problem loading the `core.xsd` file that we load locally; maybe your PHP environment has a problem loading that file?
Try the fix/xsd branch to see if it resolves the issue; otherwise I need more info on why your PHP installation on Windows doesn't load the file correctly.
username_0: I tried the other branch without success.
If I try to modify the method (see the commented line):
```php
protected function getSchema(): string
{
    //$schemaFile = $this->document->isSimplified() ? 'semplificata_1.0.xsd' : 'pa_1.2.1.xsd';
    $schemaFile = $this->document->isSimplified() ? 'semplificata_1.0.xsd' : 'core.xsd';
    $xsd = file_get_contents(__DIR__ . '/xsd/' . $schemaFile);
    $xmldsigFilename = __DIR__ . '/xsd/xmldsig-core-schema.xsd';
    $xsd = preg_replace('/(\bschemaLocation=")[^"]+"/', sprintf('\1%s"', $xmldsigFilename), $xsd);
    return $xsd;
}
```
the warning goes away, but the result of `$errors = $eDocument->validate()->errors();` is:
```
{http' => string '/ivaservizi.agenziaentrate.gov.it/docs/xsd/fatture/v1.2}FatturaElettronica': No matching global declaration available for the validation root.
```
username_0: To tell the truth, I hadn't used validation before now because I was still working on building the XML. Is there a way to get old releases so I can try them?
username_1: They are all tagged on git, and you can get them either from Git or from Composer.
username_0: Yes, I'm trying them with Composer... I went back as far as 2.0 but no luck... I think I read on the GitHub of the author of the code that handles validation that there's apparently just no way to get it working on Windows.
username_1: It looks like I got it working by using the file path instead of the source string.
https://github.com/fatturaelettronicaphp/FatturaElettronica/actions/runs/1683442542
Try the `fix/xsd` branch again.
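Conceptually the change amounts to this (a sketch; `$pathToXsdFile` is illustrative):
```php
// validate against the XSD file on disk instead of an in-memory string
$dom = new DOMDocument();
$dom->loadXML($xml);
$isValid = $dom->schemaValidate($pathToXsdFile);
```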
username_0: Magnificent! It works!
username_1: Released in 2.5.3.
Status: Issue closed
|
twilio/twilio-java | 310448683 | Title: username field included in null check in getRestClient()
Question:
username_0:
### Code Snippet
```java
/**
* Returns (and initializes if not initialized) the Twilio Rest Client.
*
* @return the Twilio Rest Client
* @throws AuthenticationException if initialization required and either accountSid or authToken is null
*/
public static TwilioRestClient getRestClient() {
if (Twilio.restClient == null) {
// why do we include username null check here when accountSid and authToken are essentially required?
if (Twilio.username == null || Twilio.password == null) {
throw new AuthenticationException(
"TwilioRestClient was used before AccountSid and AuthToken were set, please call Twilio.init()"
);
}
TwilioRestClient.Builder builder = new TwilioRestClient.Builder(Twilio.username, Twilio.password);
if (Twilio.accountSid != null) {
builder.accountSid(Twilio.accountSid);
}
Twilio.restClient = builder.build();
}
return Twilio.restClient;
}
```
Status: Issue closed |
cuba-platform/cuba | 334521570 | Title: Login button does nothing in Firefox
Question:
username_0: ### Environment
- Platform version: 6.8.5
- Client type: Polymer
- Browser: Firefox
- Operating system: Windows (probably, not only Windows)
### Description of the bug or enhancement
1. Create new application and add `polymer-client` module.
2. Run app and open Polymer UI.
3. Fill login form and click button.
E.R. Successfully logged in
A.R. Nothing happens. There is no any request in developer console.
Answers:
username_1: Same behaviour in Safari
username_2: Polyfill [issue](webcomponentsjs/issues/958)
Works in latest Safari with native Shadow Dom support
Status: Issue closed
|
williamgrosset/turtle | 300503268 | Title: wat
Question:
username_0: 
Answers:
username_0: this looks like some cool stuff Will, I'll have to check it out
username_1: 
Status: Issue closed
|
jeffreykemp/clicksend-plsql-api | 171615241 | Title: upload
Question:
username_0: Upload a file, optionally do a conversion.
function **upload** (file_content in blob, convert in [fax, mms, csv]) returns url
Add an overload for **send_mms** that accepts file_content as a blob instead of a url. |
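A possible signature sketch (PL/SQL; parameter and return types are assumptions, not the final API):
```sql
FUNCTION upload (
  file_content IN BLOB,
  convert      IN VARCHAR2 DEFAULT NULL -- e.g. 'fax', 'mms' or 'csv'
) RETURN VARCHAR2; -- URL of the uploaded file
```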
duffelhq/paginator | 502501734 | Title: Order by & preloads + descending order
Question:
username_0: Hi,
My first question is: how can I set order_by on some field? Let's say I've got a query like this:
```
defp compose_query({"order_by", "title_ascending"}, query) do
order_by(query, [advert], advert.title)
end
```
Setting cursor fields to [title: :asc] doesn't work for me; it seems the metadata is somehow broken. `Repo.paginate` responds with a first page of correct entries, but `metadata` looks like this:
```
%Paginator.Page.Metadata{
after: "g2wAAAABZAADbmlsag==", #this is wrong
before: nil,
limit: 4,
total_count: 7,
total_count_cap_exceeded: false
}
```
Any advice? 😄
___
My second question is about sorting by preloaded association field. Let's say I've got an `advert` that belongs to `user` & `user` has many `adverts`. I'm trying to paginate adverts ordered by user name.
My question is: is this possible somehow?
I'm performing a query which looks like this:
```
defp compose_query({"order_by", "user_email_ascending"}, query) do
join(query, :left, [advert], user in assoc(advert, :user))
|> order_by([advert, user], user.email)
end
```
My guess would be that I should set `cursor_fields` to `[[:user, :email]]`, but it doesn't work.
I really hope for a fast answer 😄
Answers:
username_1: Add sort_direction: :desc / sort_direction: :asc into paginate as an option.
E.g.: `Repo.paginate(query, include_total_count: true, cursor_fields: cursor_fields, sort_direction: :asc, limit: 15)`
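Applied to the title example from the question, that might look like this (a sketch; it assumes the cursor field mirrors the order_by clause and that an Advert schema and a Repo using Paginator exist):
```elixir
import Ecto.Query

query = from(a in Advert, order_by: [asc: a.title])

Repo.paginate(query,
  cursor_fields: [:title],
  sort_direction: :asc,
  limit: 4
)
```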
Status: Issue closed
username_0: Yea, thank you for the answer. It would be nice to add a bit more info about `cursor_fields` in the documentation :)
rolling-scopes-school/support | 598703206 | Title: Cross-Check ‘virtual-keyboard’ Shevel
Question:
username_0: https://shevel.github.io/virtual-keyboard/
https://github.com/username_0/virtual-keyboard
Self-check score: ~55 points
Student 4 gave a score of 0.
Some items were completed, so I categorically disagree with a score of 0.

Answers:
username_1: According to the rules:
- An appeal is only processed if the difference between the score you expected and the arithmetic mean score obtained in step #4 is **15 or more**.
Even if we extrapolate the difference in points:
75 * 15/100 = 11.25
Status: Issue closed
|
trustedci/OSCTP | 178357446 | Title: Embargoed data example process
Question:
username_0: The example process for our embargoed data asset needs to include specific System and/or Hardware assets, showing how the AoA are exposed/broadened/revealed(?) as more systems/hardware become involved in the concerns about an asset. Likewise, the embargoed data asset diagram needs to reference other assets in its AoA (similar to the other data assets).
Answers:
username_1: Dop Todo: Update embargoed data diagram and update example.
Status: Issue closed
|
google/ExoPlayer | 1172435536 | Title: Bug with counting time Exoplayer 2.17.0
Question:
username_0: ExoPlayer version 2.17.0
Android version 7
Android device - Amazon Fire tv stick
When watching a video (streamed from a link), playback reaches the end of the video but the seconds counter keeps running (for example, with a 30-second video, the picture freezes at the end but the counter keeps going, e.g. 56/30). Before updating the library, the onPlayerStateChanged method of the Player.EventListener class worked fine; now it fires erratically: it can fire immediately, or 30 seconds past the video duration (59 out of 30 seconds), or after a minute (90 out of 30 seconds).
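(For reference, the listener being described is wired up roughly like this; a sketch against the 2.17 API, where the callback is now `Player.Listener#onPlaybackStateChanged`:)
```java
player.addListener(new Player.Listener() {
    @Override
    public void onPlaybackStateChanged(int playbackState) {
        if (playbackState == Player.STATE_ENDED) {
            // expected to fire once the 30-second video actually ends
        }
    }
});
```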
I can attach an APK for you to test, or example code from the app.
My company has been using your player for a long time and we have never had a problem with it before, but this one came up.
Thank you for help!!

Answers:
username_0: This is my link for stream video - https://cdn.brid.tv/live/partners/7343/sd/974099.mp4
username_1: I was not able to reproduce the issue with either of the videos, and neither seem to correspond to the video for which you initially reported the problem. Please could you double check. If you're sure one of these videos reproduces the issue, please:
1. Specify which one we should use (we don't need two)
2. Let us know what duration the UI is displaying, and at what duration playback actually finishes (if at all)
3. Let us know which version of ExoPlayer you're using
username_0: Hi @username_1
1. I leave one video link - https://cdn.brid.tv/live/partners/7343/hd/443852.mp4
2. Playback finished at 04:53/03:45
3. I am using ExoPlayer version 2.17.

username_0: I tested in a browser and it works fine, but on the Fire Stick I have this problem.
username_1: I am unable to reproduce the issue on a Pixel device. If this observed behavior is only on Fire TV Stick, I'd suggest you report it to Amazon because in that case it's likely to be device specific. You could try asking [here](https://github.com/amzn/exoplayer-amazon-port/issues). I would also suggest that you try and reproduce with the [demo app](https://exoplayer.dev/demo-application.html), to rule out any issues in your own code.
If you can reproduce the issue on an official Android device (i.e., not a device running Fire OS) with our demo app, please let us know the exact combination you're using so that we can try and reproduce. Else I'm not sure there's anything actionable for us to do here.
username_1: Tentatively marking this as a device specific issue. |
ros-infrastructure/ros_buildfarm | 172223432 | Title: pull request builds do not appear to be retriggering correctly when a slave goes offline
Question:
username_0: ```
I think that this is an issue for upstream, but I wanted to get it documented here first as it's more likely for our users to find it. Such as @rhaschke in https://github.com/ros/ros_comm/pull/871
Answers:
username_0: Digging around it looks like this might be related to the new security policy: https://wiki.jenkins-ci.org/display/JENKINS/Plugins+affected+by+fix+for+SECURITY-170
We're running an older ghprb plugin but a newer jenkins instance. 1.30 so I'm upgrading the plugin and will retry.
username_0: Fixing that plugin wasn't enough, and there's a lot of plugins out of date and new jenkins version available. We should probably schedule a full upgrade.
When that happens we'll need to update the configurations to have the right plugin versions embedded. And there's apparently some new config fields that need to be integrated.
username_1: The problem is that the requeue plugin triggers a new build but without any of the previous parameters (https://github.com/jenkinsci/jobrequeue-plugin/blob/c46cfba120328cfdd2d6322d2c6140604e6fabaa/src/main/java/org/jenkinsci/plugins/requeuejob/RequeueJobProperty.java#L39). That clearly doesn't work for a job triggered by the GitHub pull request build which requires numerous parameters.
A pull request to extend the plugin to support that use case is in: jenkinsci/jobrequeue-plugin#3 I have deployed a custom built version of that plugin to build.ros.org.
Status: Issue closed
|
openshift/openshift-docs | 130506039 | Title: explain node selectors
Question:
username_0: There is basically no doc I can find on our support for node selectors.
Here are the basics for a pod:
http://kubernetes.io/v1.1/docs/user-guide/node-selection/README.html
However we need to describe how this interacts with the per-project selector (oadm new-project --node-selector=...) and the defaultNodeSelector in master config, and we really don't have anything to say about it.
Answers:
username_1: @username_0 That's currently WIP here:
https://github.com/openshift/openshift-docs/pull/1490/files#diff-e27713e5f0f507233118f9edeb62ddeeR105
username_2: @username_0 If I'm understanding correctly, I think we're covering this info in the PR. If there's something missing, please let us know.
Status: Issue closed
username_2: The above PR merged. Nodeselectors info is in the docs. I'll close this issue. If there's anything more here, let us know. |
sympy/sympy | 28949676 | Title: Implement Lienard ODEs to solve Lane-Emden equation.
Question:
username_0: sin(x)/x (at least for the usual bc) is the solution
Original issue for [#4414](https://github.com/sympy/sympy/issues/4414): http://code.google.com/p/sympy/issues/detail?id=1315
Original author: https://code.google.com/u/103943415108140063177/
Answers:
username_1: This is now solved by the Bessel equation solver:
```julia
In [4]: eq = f(x).diff(x, x)+2/x*f(x).diff(x)+f(x)
In [5]: sol = dsolve(eq)
In [6]: sol
Out[6]:
C₁⋅besselj(1/2, x) + C₂⋅bessely(1/2, x)
f(x) = ───────────────────────────────────────
√x
In [7]: checkodesol(eq, sol)
Out[7]: (True, 0)
In [8]: sol.simplify()
Out[8]:
√2⋅(C₁⋅sin(x) - C₂⋅cos(x))
f(x) = ──────────────────────────
√π⋅x
In [9]: classify_ode(eq, f(x))
Out[9]: ('2nd_linear_bessel', '2nd_power_series_regular')
```
This issue can be closed if a test is added for this particular equation (or if a test is already present).
username_2: It may be useful to have a solver for the Lienard ODE as well https://www.maplesoft.com/support/help/AddOns/view.aspx?path=odeadvisor/Lienard
username_1: It looks like Lienard ODEs can be solved by substitution leading to 1st order linear ODE:
https://en.wikipedia.org/wiki/Li%C3%A9nard_equation
It woudl be useful if dsolve could perform substitutions to solve this kind of equation
username_1: More generally such a substitution can be used to reduce any 2nd order autonomous ODE to a 1st order ODE. Not all such ODEs have analytic solutions though.
username_3: @username_1, I added the test case, but I would like to work further on the ideas you mentioned. Kindly elaborate on the idea of substitution.
Thanks!
username_1: The substitution for Lienard ODEs is described on the wikipedia page.
#17590 is the best way to implement this kind of solver.
username_1: Given a 2nd order autonomous ODE:
```
d2x/dt2 + f(x)dx/dt + g(x) = 0
```
we can use the substitution `v = dx/dt` and then
```
d2x/dt2 = dv/dt = dv/dx dx/dt = v dv/dx
```
so the equation becomes
```
v dv/dx + f(x) v + g(x) = 0 (1)
```
which is a 1st order (nonlinear) ODE for `v(x)`. If we can solve that for `v` in terms of `x` say `v = h(x)` then since `v = dx/dt` we can get the solution for `x` by solving another 1st order separable ODE:
```
dx/dt = h(x) (2)
```
This technique actually applies to any 2nd order autonomous ODE (not just Lienard-type). How useful it is in practice depends on being able to solve `(1)`. At least for `(2)` we can always write the solution implicitly.
In the case of Lienard equations `(1)` is a degenerate case of an Abel equation of the second kind and there are various articles describing how to find exact solutions of those.
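As a concrete sketch of the reduction in SymPy (the choice f(x) = x, g(x) = x is mine, purely for illustration; this is not the proposed solver):
```python
from sympy import Function, Symbol, Eq, Derivative, dsolve

x = Symbol('x')    # the original dependent variable
v = Function('v')  # v(x) stands for dx/dt after the substitution

# Lienard form v*dv/dx + f(x)*v + g(x) = 0, illustrated with f(x) = x, g(x) = x
eq1 = Eq(v(x)*Derivative(v(x), x) + x*v(x) + x, 0)

# step (1): solve the reduced 1st order ODE for v(x)
sol1 = dsolve(eq1, v(x))
print(sol1)

# step (2) would then treat dx/dt = h(x) as a separable 1st order ODE,
# whose solution can always be written down implicitly.
```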
username_1: Autonomous and some other cases are discussed here:
https://mathworld.wolfram.com/Second-OrderOrdinaryDifferentialEquation.html
username_3: @username_1, tried a function for making substitution in any general 2nd order autonomous ODE
``` py
t,x=symbols('t x')
f,g=symbols('f g', cls=Function)
eq=Derivative(x,t,t)+f(x)*Derivative(x,t)+g(x)
v=Symbol('v')
z=Derivative(x,t)
eqv,eqv1,eqv2,eqv3=symbols('eqv eqv1 eqv2 eqv3')
for term in eq.args:
if term.has(Derivative):
if term.args[len(term.args)-1].has(2):
eqv1=(v*term.subs(z,v).subs(t,x))
else:
eqv2=term.subs(z,v)
else:
eqv3=term
eqv=eqv1+eqv2+eqv3
assert eqv == v*Derivative(v,x)+f(x)*v+g(x)
```
I tried making a function for it so that after converting the equation to a 1st order (nonlinear) ODE, we can find the solution without using the Bessel equation solver, as specified in the equations above.
Kindly suggest if it could be made better.
Can you tell me how we can add a new function to sympy? Directly adding one and then calling it in the terminal gives a `NameError` saying the called function is not defined.
Thanks!
username_1: When we have a Bessel equation we want to solve it using the Bessel solver which already works quite well.
Lienard ODEs are something different and it might be good to have a solver for those.
username_3: Does the above code helps in implementing the solver for Lienard ODEs, we can use the new equation in terms of `(v,x)` and then we just need to solve `v=Derivative(x,t)` as you mentioned.
Is there some documentation, where steps are written for adding new functions to sympy, I tried to search but couldn't find it?
username_1: I'm not sure I understand what you mean. You can just add a new function to a `.py` file and then import it from that file.
username_3: @username_1, I am trying to add a solver for any general 2nd order autonomous ODE as:
``` py
class SecondAutonomous(SinglePatternODESolver):
"""
docstring
"""
hint = "2nd_autonomous"
has_integral = False
order = [2]
def _wilds(self, f, x, order):
a = Wild('a', exclude=[f(x).diff(x,x), f(x).diff(x), x])
b = Wild('b', exclude=[f(x).diff(x,x), f(x).diff(x), x])
return a, b
def _equation(self, fx, x, order):
a, b = self.wilds()
return fx.diff(x,x) + a*fx.diff(x) + b
def _get_general_solution(self, *, simplify: bool = True):
fx = self.ode_problem.func
x = self.ode_problem.sym
eq = self.ode_problem.eq
from sympy import symbols
eqv1, eqv2, eqv3 = symbols('eqv1 eqv2 eqv3')
v=Symbol('v')
z=Derivative(fx,x)
for term in eq.args:
if term.has(Derivative):
if term.args[len(term.args)-1].has(2):
eqv1=(v*term.subs(z,v).subs(x,fx))
else:
eqv2=term.subs(z,v)
else:
eqv3=term
geneq = eqv1 + eqv2 + eqv3
sol1 = dsolve(geneq, v)
sol2 = dsolve(sol1 - fx.diff(x))
return sol2
```
But I am getting:
```
File "C:\Users\<NAME>\sympy\sympy\solvers\ode\ode.py", line 657, in dsolve
hints = _desolve(eq, func=func,
File "C:\Users\<NAME>\sympy\sympy\solvers\deutils.py", line 268, in _desolve
raise ValueError(string + str(eq) + " does not match hint " + hint)
ValueError: ODE f(x) + Derivative(f(x), (x, 2)) + 2*Derivative(f(x), x)/x does not match hint 2nd_autonomous
```
I am not able to resolve this error. What I could figure out is that for an equation like
`dsolve(f(x).diff(x, x)+2/x*f(x).diff(x)+f(x),f(x),hint='2nd_autonomous')`, it goes inside `_equation()` but it doesn't proceed any further.
Kindly guide me to solve this issue.
Thanks. |
ocaml-ppx/ocamlformat | 267332891 | Title: Add support for the OCaml object language
Question:
username_0: The object language of OCaml is largely unimplemented. Designing the formatting of these constructs would be best done by someone with a lot of experience with them, who knows the readability pitfalls to avoid, etc.
Answers:
username_1: A non-intrusive first step could be to support the `a#b` syntax. That would support code that uses object APIs without too much effort. My best guess is that `a#b` can be formatted with the same rules as `a.b`, but this is just a suggestion.
username_2: Fyi, I'm working on this.
Status: Issue closed
|
AngleSharp/AngleSharp | 110733184 | Title: Cannot access style property of a element
Question:
username_0: Hi there,
Love what this library is trying to do!!! I'm actually trying to take advantage of your JavaScript evaluation functionality, but am having an issue accessing a property off of the element object. I've boiled it down using a modified version of your example online for reference. This runs fine in the browser, but the `document.write(node.style.fontSize);` line seems to blow up when using AngleSharp. Any help you can provide is appreciated!
```c#
//We require a custom configuration
var config = Configuration.Default.WithJavaScript();
//Let's create a new parser using this configuration
var parser = new HtmlParser(config);
//This is our sample source, we will set the title and write on the document
var source = @"<!doctype html>
<html>
<head><title>Sample</title></head>
<body>
<div id='full-name' style='display:inline-block;font-size:40px;line-height:40px;margin:10px auto 0 auto;word-wrap:none;white-space:nowrap;'>Sample</div>
<script>
document.title = 'Simple manipulation...';
var node = document.getElementById('full-name');
document.write(node.id);
document.write(node.style.fontSize);
</script>
</body>
</html>";
var document = parser.Parse(source);
//Modified HTML will be output
Console.WriteLine(document.DocumentElement.OuterHtml);
Console.ReadLine();
```
Answers:
username_1: Alright so the problem I see here is that the CSS engine is not loaded. Can you try to modify the code to use
```csharp
var config = Configuration.Default.WithJavaScript().WithCss();
```
That will provide the default style engine with CSS.
username_0: Yep, agreed per your reversed comment above. That is all set.
Any chance you could tell me when you guys think you will have support for window added in? I need to get the rendering size of elements in my script... something like `window.getComputedStyle(node, null).getPropertyValue('width')`. Is that something you will support, or, for that matter, have a workaround for?
username_1: This scenario will definitely be supported, but unfortunately I can't tell you when it will be available. BTW: `window` is already available (and I think the `getComputedStyle` method works as well), but the results could differ from browsers.
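In that case, something along these lines may already work for the width query (a sketch combining the snippets above; untested, and as noted the results may differ from real browsers):
```csharp
var config = Configuration.Default.WithJavaScript().WithCss();
var parser = new HtmlParser(config);
var document = parser.Parse(@"<!doctype html>
<html><body>
<div id='full-name' style='width:200px'>Sample</div>
<script>
var node = document.getElementById('full-name');
document.write(window.getComputedStyle(node, null).getPropertyValue('width'));
</script>
</body></html>");
```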
Status: Issue closed
|
dweb-camp-2019/organizing | 452765836 | Title: Create Scavenger Hunt across The Farm with Passport & Stamps (Dietrich, Arkadiy, Andi)
Question:
username_0: When we all arrive, one goal is to get grounded with the Farm landscape. We are hoping to create a game that will encourage groups to go explore and figure out where things are. A scavenger or Treasure hunt is one suggestion. You get your Swag item(s) when you finish the hunt.
Or everyone gets a Passport. At each station, there is a stamp to stamp your passport.
When you finish, you get your "prize".
<NAME> and <NAME> have volunteered to organize this, probably during the BUILD. If you want to help them, contact them through the Matrix channel. <NAME> is asked: will you design the passports/stamps?
Status: Issue closed |
sequelize/sequelize | 53741070 | Title: Sequelize is trying to update a virtual field
Question:
username_0: Hi, I have this code
```javascript
var Sequelize = require('sequelize');
var config = require(__dirname + '/config/database.json')['production'];
var sequelize = new Sequelize(config.database, config.username, config.password, config);
var User = sequelize.define("User", {
login: {
type : Sequelize.STRING,
unique : true,
allowNull : false,
validate : {
len : {
args: [6, Infinity],
msg: 'Login is too short'
},
isUnique : function(value, next) {
User.find({
where: Sequelize.and({login: value}, ['id <> ?', this.id]),
attributes: ['login']
})
.then(function(pf) {
if (pf)
return next('User is taken!');
next();
})
.catch(function(err) {
return next(err);
});
}
}
},
pass: {
type : Sequelize.STRING,
allowNull : false,
validate : {
len : {
args: [6, Infinity],
msg: 'Password is too short. Min 6 characters'
}
}
},
check_pass: {
type : Sequelize.VIRTUAL,
allowNull : false,
validate : {
match : function (val) {
if (val !== this.pass) { // compare against the real password field
throw new Error('Wrong pass.');
}
}
[Truncated]
where: undefined,
file: 'src\\backend\\parser\\analyze.c',
line: '2001',
routine: 'transformUpdateStmt',
sql: 'UPDATE "Users" SET "id"=1,"login"=\'testing\',"pass"=\'<PASSWORD>\',"check_pass"=\'<PASSWORD>\',"updatedAt"=\'2015-01-08 08:25:33.191 -03:00\' WHERE "id"=1' },
sql: 'UPDATE "Users" SET "id"=1,"login"=\'testing\',"pass"=\'<PASSWORD>\',"check_pass"=\'<PASSWORD>\',"updatedAt"=\'2015-01-08 08:25:33.191 -03:00\' WHERE "id"=1' }
SequelizeDatabaseError: coluna "check_pass" da relação "Users" não existe // meaning: column "check_pass" of relation "Users" does not exist
at module.exports.Query.formatError (C:\Users\Guilherme\Documents\node\iapo1\node_modules\sequelize\lib\dialects\postgres\query.js:301:16)
at null.<anonymous> (C:\Users\Guilherme\Documents\node\iapo1\node_modules\sequelize\lib\dialects\postgres\query.js:64:21)
at emit (events.js:95:17)
at Query.handleError (C:\Users\Guilherme\Documents\node\iapo1\node_modules\pg\lib\query.js:99:8)
at null.<anonymous> (C:\Users\Guilherme\Documents\node\iapo1\node_modules\pg\lib\client.js:166:26)
at emit (events.js:95:17)
at Socket.<anonymous> (C:\Users\Guilherme\Documents\node\iapo1\node_modules\pg\lib\connection.js:109:12)
at Socket.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:765:14)
at Socket.emit (events.js:92:17)
```
It validates that pass equals check_pass and inserts; it runs the validation again on update, but it doesn't remove the check_pass field before trying to update it.
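A possible workaround until this is fixed (a sketch; it assumes Sequelize's `fields` save option, so that only real columns reach the UPDATE):
```javascript
// restrict the UPDATE to real columns so the VIRTUAL check_pass
// attribute never ends up in the generated SQL
user.login = 'testing';
user.save({ fields: ['login', 'pass'] })
  .then(function(saved) {
    console.log('updated', saved.login);
  });
```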
Answers:
username_1: Yup, I'm having just the same issue...
Status: Issue closed
|
biryu2205/Biryu | 262457408 | Title: Hackerrank Java 2D Array
Question:
username_0: ```java
import java.util.Scanner;
public class Solution {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int n = scan.nextInt();           // number of values to read
        int a[] = new int[n];
        for (int i = 0; i < n; i++) {     // read the n values into the array
            int val = scan.nextInt();
            a[i] = val;
        }
        scan.close();
        for (int i = 0; i < a.length; i++) {  // print each value on its own line
            System.out.println(a[i]);
        }
    }
}
``` |
stitchdata/stitch-js | 446847004 | Title: only authorize source
Question:
username_0: @username_1
I have tried using the "only-authorize" feature committed on Oct. 25th, 2018 and the pop up window closes right after signing in to the 3rd party (Facebook Ads in my case). The authorization is not truly saved on Stitch. The UI shows an error as attached:

I believe the storage of authorization in Stitch takes place after the pop up windows returns from the 3rd party, an account ID is selected and the "Check and Save" button is clicked.
Remark: on this click, Stitch automatically allows the selection of tables/schema which defeats the purpose of this feature.
I don't believe an "only-authorize" feature can be truly implemented as intended unless the "Check and Save" can be manually clicked, and automatically trigger/click the "skip this step for now" hyperlink in the following page (Choose Your Data) to prevent the ability of selecting streams/schemas.
Answers:
username_1: Andres,
Facebook and Google, both AdWords and Analytics, require an additional step after the OAuth step to configure the connection prior to getting to field and table selection. This is the profile selections step. Profile IDs are returned by an endpoint at the third-party's site. These profiles are required to know what data to extract as a single user who logs in via OAuth might have multiple profiles available.
During the development of this feature, it was known that sources with a profile selection step were not going to work with the ONLY_AUTHORIZE feature, but it was neglected to be documented. The clients that were using FB and Google sources were not using ONLY_AUTHORIZE and the client that requested ONLY_AUTHORIZE does not require FB or Google sources.
I am unaware of any plans to extend stitch-js to support those connection types with ONLY_AUTHORIZE at this time, but will have a discussion with our product team to see if it can be addressed
Thank you.
username_0: Thanks for the update @username_1 ! I appreciate the confirmation on the status of this feature and your prompt reply.
Status: Issue closed
|
zhanghang1989/PyTorch-Encoding | 840644692 | Title: About train time
Question:
username_0: I train DeepLab on 4 GPUs (16 GB each), but each epoch takes more than an hour. Is this normal?
Answers:
username_1: The training using DataParallel is really slow (due to syncbn). You may try using `train_dist.py`, which is much faster. |
Borewit/music-metadata | 894472318 | Title: Wav file duration is wrong for some files
Question:
username_0: **Bug description**
I generated a wav file using AWS Polly's pcm format. The wav file duration is wrong when the file is run through this library.
**Audio file demonstrating the problem**
https://drive.google.com/file/d/1pL3zk97KACmBokqGROZM1bXDAlAxW9Fr/view?usp=sharing
Answers:
username_1: Caused by an oddly high WAV chunk length.
Status: Issue closed
|
ppy/osu | 775655147 | Title: osu!lazer screen bug.
Question:
username_0: 
Every time I join osu!lazer, this happens.
Answers:
username_1: Please do the following:
1. Press WinKey+R
2. Type "dxdiag"
3. In the window that opens, click "Save All Information..."
4. Post it here.
Status: Issue closed
username_0: Nevermind, I fixed it.
username_1: Can you explain how you fixed it, there have been like a few reports of the same thing (https://github.com/ppy/osu/issues/11192, https://github.com/ppy/osu/issues/11136, etc.) and the fix for them was to update drivers / change GPU.
username_0: I was missing some of the assets. |
zooniverse/Panoptes-Front-End | 91752539 | Title: Search not finding subjects that are named in existing talk board
Question:
username_0: I made a comment about a subject on whales. The subject ID shows up on the talk board. But if I search for that subject ID, nothing shows up.


Answers:
username_1: Not sure what the intended behavior is for subject ID searching. @username_2 ?
username_2: As intended, subjects aren't indexed. The only attribute you could even search for is by id, which basically isn't a search. On the other hand, I could also pack the subject id into the content field indexed by comments -- comments about the subject would show up for queries of the subject id
username_2: Yep. I'll go ahead and do that:

Status: Issue closed
|
thought-machine/please | 323860922 | Title: java_test - Forced to declare dependencies twice.
Question:
username_0: When I declare tests for a package using java_test, testing only succeeds if I also list the deps used by the corresponding java_library target explicitly in the java_test deps (which seems really redundant). If I simply rely on `:lib`, I get the following error:
```
Error building target //src/java/com/xyz/validation:_lib_test#lib: /home/ubuntu/workspace/bizdev/plz-out/tmp/src/java/com/xyz/validation/_lib_test#lib._build/src/java/com/xyz/validation/EmailTest.java:17:33: cannot access javax.validation.ConstraintValidator class file for javax.validation.ConstraintValidator not found
```
Do I really need to include the deps twice?
The BUILD file in question:
```
package(default_visibility = ["PUBLIC"])
java_library(
name = "lib",
srcs = glob(["*.java"], exclude=glob(["*Test.java"])),
deps = [
"//src/java:validation_api",
],
)
java_test(
name = "lib_test",
srcs = glob(["*Test.java"]),
deps = [
":lib",
"//src/java:junit",
"//src/java:hamcrest",
],
test_package = "com.xyz.validation",
size = "small",
)
```
The supporting 3rd party declarations:
```
package(default_visibility = ["PUBLIC"])
maven_jars(
name = "aws_s3",
id = "com.amazonaws:aws-java-sdk-s3:1.11.321",
)
maven_jars(
name = "commons_validator",
id = "commons-validator:commons-validator:1.5.0",
)
maven_jars(
name = "guava",
id = "com.google.guava:guava:18.0",
)
maven_jars(
name = "hamcrest",
id = "org.hamcrest:hamcrest-all:1.3",
)
maven_jars(
name = "junit",
id = "junit:junit:4.12",
)
maven_jars(
name = "validation_api",
id = "javax.validation:validation-api:2.0.0.Final",
)
```
Ubuntu: jessie
Please: 12.2.5
Java: 8
Answers:
username_0: After a little more digging, this is only a problem for the `validation_api` deps. All other 3rd party dependencies work as expected.
username_1: There is an `exported_deps` attribute of `java_library` which can be used for this - if your library uses dependencies that all its dependents will need, they will get access to them implicitly. I think that would resolve your issue for `validation_api` in your test?
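For example, a sketch of the library target from the question using it (only the names from the original BUILD file):
```python
java_library(
    name = "lib",
    srcs = glob(["*.java"], exclude = glob(["*Test.java"])),
    exported_deps = [
        # dependents such as :lib_test see this dependency implicitly
        "//src/java:validation_api",
    ],
)
```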
username_0: That did it (and makes perfect sense). Sorry for missing that attribute.
Status: Issue closed
|
yamadapc/js-written-number | 244915843 | Title: Do not round decimal number
Question:
username_0: Hey, guys.
How can I avoid rounding up numbers?
Here: 342 877,50
I expect: **trezentos e quarenta e dois mil oitocentos e setenta e sete e cinquenta cêntimos**
I got: **trezentos e quarenta e dois mil oitocentos e setenta e oito**
Thanks
Answers:
username_1: That's not implemented
username_1: Closing as #2 already exists, PR's are wanted and accepted, but there's a non-negligible amount of work involved. Starting with one language would be ok.
Status: Issue closed
username_0: Sure. Thanks! |
video-dev/hls.js | 440001883 | Title: Downloading the video as a mp4
Question:
username_0: Let's say I'm watching a video through hls.js. But I want to save it to my computer. Can I download the video as a mp4 or other video formats?
Status: Issue closed
Answers:
username_1: In the demo, you can check the `Dump transmuxed fMP4 data` checkbox, and then call `window.createFMP4(type)`, where `type` is `audio` or `video`. You cannot download the video in a non-debug player.
username_2: `window.createFMP4(type)` does not work; you need to use `createfMP4(type)` instead.
markbirbeck/docker-engine | 332992223 | Title: Support the use of SSH tunnelling without requiring separate app
Question:
username_0: To connect to a Docker Swarm on AWS we usually launch an SSH tunnel along the following lines:
```shell
ssh -i <path-to-ssh-key> -NL localhost:2374:/var/run/docker.sock docker@<ssh-host> &
```
(See [Connect via SSH](https://docs.docker.com/docker-for-aws/deploy/#connect-via-ssh) in the Docker for AWS guide.)
It should be possible to incorporate this functionality into `docker-engine` so that we don't need to run a separate app (and therefore also be platform-independent).
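A minimal sketch of what incorporating it might look like, assuming Node.js and shelling out to the system ssh client (the function and option names are illustrative):
```javascript
const { spawn } = require('child_process');

// Forward the remote Docker socket to localhost:2374, mirroring the
// ssh command above, but launched by the tool itself.
function openTunnel({ keyPath, host, localPort = 2374 }) {
  return spawn('ssh', [
    '-i', keyPath,
    '-N',
    '-L', `localhost:${localPort}:/var/run/docker.sock`,
    `docker@${host}`,
  ], { stdio: 'inherit' });
}
```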
Status: Issue closed |
BigDaddy-Germany/EduMon | 68083265 | Title: window is not defined (EduMon.Util)
Question:
username_0: 
Answers:
username_0: something something time of execution ...?
username_1: is it in the worker context?
username_0: nope, main app (see log) and I have no idea why
Status: Issue closed
|
qfishpear/fishrss | 1076941137 | Title: bencode error
Question:
username_0: The script was working fine for quite a long time, up until two weeks ago, with nothing changed on my end (python, etc.).
```
user:~/fishrss$ /usr/bin/python3 "/full/path/to/fishrss/filter.py" --file "/full/path/to/fishrss/test.torrent" --deluge
2021-12-10 15:52:09,817 - INFO - deluge is connected: True
2021-12-10 15:52:09,817 - INFO - api or filter of dic is NOT set
2021-12-10 15:52:09,818 - INFO - api or filter of red is NOT set
2021-12-10 15:52:09,819 - INFO - api and filter of ops are set
2021-12-10 15:52:09,819 - INFO - api or filter of snake is NOT set
2021-12-10 15:52:09,820 - INFO - Traceback (most recent call last):
File "/full/path/to/fishrss/common.py", line 153, in error_catcher
func(*args, **kwargs)
File "/full/path/to/fishrss/filter.py", line 175, in handle_file
torrent = bencode.decode(raw)
AttributeError: module 'bencode' has no attribute 'decode'
`
Tried uninstalling and reinstall bencode to no avail. The closest I got was manually installing bencode.py, but that caused an error with IPython. This gives the same module 'bencode' has no attribute 'decode' error when running it out of irssi/autodl.
Python 3.6.13
Answers:
username_1: The bencode library my script using is called "bencode.py". Try uninstall other bencode libraries because they may cause namespace conflict.
username_0: That's not it. Serverside updated python and the script is fubar, and since this guy has no interest in keeping it up to date, I switched to varroa.
Status: Issue closed
|
pelias/model | 56592010 | Title: disallow names containing only whitespace
Question:
username_0: The setters currently only throw exceptions for empty strings (since they're `false`y), but should do the same for strings that contain only whitespace. I'm thinking about `.setName()`, but this probably applies to `.setAdmin()` and any others as well.
Answers:
username_1: yes, we should sanitize all texts (strip out leading/trailing white space for instance) before the setters set them and check for truthiness.
username_2: sounds like we need a new `transform` to do string `trim()`
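A minimal sketch of such a transform (the names are assumed from this discussion, not the actual module layout):
```javascript
// strip leading/trailing whitespace before the setter checks truthiness
function trim(value) {
  return typeof value === 'string' ? value.trim() : value;
}

// e.g. inside a setter: a whitespace-only name becomes '' and is rejected
function setName(name) {
  name = trim(name);
  if (!name) {
    throw new Error('invalid name: empty or whitespace-only');
  }
  this.name = name;
  return this;
}
```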
username_3: Write test to verify it's been fixed.
Status: Issue closed
|
sbg/sevenbridges-python | 165202569 | Title: typo in quickstart
Question:
username_0: Missing comma before 'run' [here](http://sevenbridges-python.readthedocs.io/en/latest/quickstart/#managing-tasks):
```python
try:
    task = api.tasks.create(name=name, project=project, app=app,
        inputs=inputs, batch_input=batch_input, batch_by=batch_by run=True)
```
Answers:
username_1: Will be fixed in milestone 0.4.0.
Thank you, jack, for reporting this.
username_1: Resolved in the latest 0.4.0 version
Status: Issue closed
|
mgolokhov/dodroid | 156914445 | Title: Change colour of feedback toast to smth more readable
Question:
username_0: green is fine
red should be changed
Answers:
username_1: Possible alternative? What about a hidden image view which displays for a few seconds with either a large green checkmark or large red x in response to question answer?
Status: Issue closed
username_0: It could be a custom toast like:


username_1: Like the custom error and success toasts - a lot. Perhaps 3 custom toasts: 1) success, 2) first error, 3) second error which informs user to move to next question.
username_0: Clickable images:
<img src=https://cloud.githubusercontent.com/assets/294512/16455505/3e7fe01e-3e1d-11e6-96e2-95c3a324fe4f.png width=200><img src=https://cloud.githubusercontent.com/assets/294512/16455516/4625281a-3e1d-11e6-89b5-57329fe4b91f.png width=200>
username_0: 
Status: Issue closed
|
RamWire/NinaPagerView | 214303337 | Title: The following bug occurs when a navigationBar is present
Question:
username_0: At first the top tab bar sits at (0,0), and only after about 1 second does it move to (0,64). So right after entering there is no tab bar; it flashes once and then the tab bar appears. I set the pagerView's frame to CGRectMake(0, 0, KscreenWidth, KscreenHeight). If I change it to CGRectMake(0, 64, KscreenWidth, KscreenHeight), it looks correct on entry (below the navigationBar), but after 1 second it shifts down by 64 points, leaving a blank area. Please reply as soon as you see this, thanks.
Answers:
username_1: Have you tried testing on a real device?
username_0: It's been resolved.
Status: Issue closed
|
achilikin/bdfe | 166693239 | Title: segfault trying to memcpy
Question:
username_0: I got this to build on OS X by stubbing out the I2C drivers (which would be awesome, but alas).
However, after generating few characters I'm getting a segfault at ~bdf.c:259 which is:
memcpy(gout, gin, dy);
It seems that gin gets corrupted somewhere (gdb reports it's "out of bounds").
Debugging the pointers shows, for example:
memcpy(0x10080360c, 0x100103765, 16)
memcpy(0x10080361c, 0x100103770, 16)
memcpy(0x10080362c, 0x100103770, 16)
memcpy(0x10080363c, 0x100103770, 16)
memcpy(0x10080364c, 0x103771, 16)
< segfault >
I'm using a BFD file I generated. Maybe you could post one that's known working?
Answers:
username_0: The pointer gets corrupted by the line
`gin -= displacement + ascender`
In this case, `displacement` == 0xFFFFFFFF. Changing its type to `int` instead of `unsigned` fixes the crash for me. I don't know the code as well as you do, so I don't know if it's more appropriate to guard against unreasonably large `displacement` values, or change it to signed (and guard against negative values?).
Anyways, hope this helps.
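To see why, here is a standalone sketch of the wraparound (illustrative only, not the bdf.c code itself):
```c
#include <stdio.h>

/* standalone illustration: why an unsigned displacement corrupts the pointer */
int main(void) {
    unsigned u_disp = (unsigned)(5 - 6); /* wraps to 0xFFFFFFFF */
    int s_disp = 5 - 6;                  /* stays -1 and can be guarded */
    if (s_disp < 0)
        s_disp = 0;                      /* clamp instead of wrapping */
    printf("unsigned: %u, signed+guarded: %d\n", u_disp, s_disp);
    return 0;
}
```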
username_1: I have not seen any fonts with the displacement going negative, but there's no harm in using int instead of unsigned.
username_1: I've changed unsigned to int in the latest commit.
Could you please give me your file which causes the negative displacement? I just want to trace it and see why it is happening.
Status: Issue closed
username_0: Unfortunately I don't recall which files I used that crashed. I hacked up my copy trying to support large font sizes, and ultimately went a different direction. I switched the declaration back to `unsigned` but I can't reproduce the crash anymore.
I suspect it was a larger point-size than supported that generated the crash; my application is a watch that benefits from a nearly 64-pixel-tall font. One is attached, but not sure if it'll help or not.
[system_big.bdf.zip](https://github.com/username_1/bdfe/files/517755/system_big.bdf.zip) |
blesta/module-pterodactyl | 575847869 | Title: Configurable Options
Question:
username_0: It would have been nice to be able to configure eggid and memory as configurable options.
Answers:
username_1: I could see an argument for memory, but eggid? That completely changes the kind of product they are ordering and fields they are required to submit.
username_0: Yes. Then you could choose between Spigot or Paper MC, for example.
username_0: But maybe just making memory configurable would be better.
Status: Issue closed
username_1: Did you close this because of the lack of response?
username_0: Yes, I did. After all, it is not a problem but a suggestion. Label it as a suggestion... |
Azure/azure-sdk-for-js | 793013544 | Title: GeographyPoint from search results has switched latitude and longitude
Question:
username_0: - **Package Name**: @azure/search-documents
- **Package Version**: 11.0.3
- **Operating system**:
- [x] **nodejs**
- **version**: 10.16.3
**Describe the bug**
Geography coordinates returned from search are parsed into a `GeographyPoint` incorrectly. Its constructor expects the arguments (`latitude`, `longitude`); however, it is being passed the result coordinates in their original order, which is longitude then latitude.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a cognitive search index that has a Edm.GeographyPoint field
2. Use the JS SDK to search that index
3. Inspect the result and check the "latitude" and "longitude" values (they are switched)
```ts
// Assuming you have a client setup and a field named "coordinate" that is an Edm.GeographyPoint
const searchResults = await client.search('*', {});
for await (const result of searchResults.results) {
  console.log('latitude', (result.document as any).coordinate.latitude);
  console.log('longitude', (result.document as any).coordinate.longitude);
}
```
**Expected behavior**
Expect to see the correct latitude and longitude set on the response object (GeographyPoint)
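Until a fixed version ships, one possible client-side workaround is to swap the two values back after reading them (a sketch, reusing the `coordinate` field from the repro above):
```ts
// workaround sketch: the SDK currently swaps the values, so swap them back
const searchResults = await client.search('*', {});
for await (const result of searchResults.results) {
  const point = (result.document as any).coordinate;
  const latitude = point.longitude;   // holds the real latitude today
  const longitude = point.latitude;   // holds the real longitude today
  console.log('latitude', latitude, 'longitude', longitude);
}
```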
Answers:
username_1: This issue has been fixed with PR: https://github.com/Azure/azure-sdk-for-js/pull/13405. The code changes will be included in the next scheduled release.
Status: Issue closed
|
unhosted/store | 99877137 | Title: Invalid certificate?
Question:
username_0: When opening https://unhosted-store.5apps.com/ the browser redirects to https://apps.unhosted.org/, then a popup about an invalid certificate appears.
Answers:
username_1: Yes, @username_2 needs to add a CNAME entry to validate the new one. Not sure what the cause of the delay in adding it is. Maybe on vacation or something.
username_0: I am pretty sure @username_2 is easing into his new job
username_2: CNAME added, sorry for the delay, I was indeed busy at my new job! :)
username_0: Well, the banner does not show now... but the certificate did expire on 2nd of August, 2015.

username_1: New cert is deployed. Some browsers might use a cached version of the old one for a bit longer.
Status: Issue closed
|
wagtail/wagtail | 283277771 | Title: Users with the "add" and "publish" permissions can unpublish pages, but can't publish pages not owned by them
Question:
username_0: So it looks like users with a combination of the "add" and "publish" permissions should be able to publish and unpublish pages
### Technical details
* Django version: `Django==1.11.8`
* Wagtail version: `wagtail==1.13.1` |
pypa/pip | 90899000 | Title: Struggling with "setuptools must be installed to install from a source distribution"
Question:
username_0: I create a virtualenv using the latest release (13.0.3), but then I cannot install this package from source using pip: https://pypi.python.org/pypi/username_0-rambutan3/1.7.1
I get the dreaded "setuptools must be installed to install from a source distribution". I've googled high and low on this issue and I am clueless. I saw some bug reports in this project, but none of the tips / solutions worked for me.
I tried upgrading setuptools in the virtualenv to the latest, but still no luck. I also tried upgrading pip, but my virtualenv already has the latest.
I am a Python3.4.3 user on Linux.
Any ideas?
Apologies if this issue seems vague. I am truly baffled. I can provide more debug info if you need it.
Status: Issue closed
Answers:
username_0: This is related to "import setuptools" from pip/req/req_install.py (~line 334). I tried manually on the python3 in my virtualenv. Due to bad linking (my fault) when building my python3 binary, the _ctypes module did not build correctly. As a result, setuptools cannot be imported. What a crazy red herring. Clearly not the fault of pip!
However, is it possible to provide the reason from ImportError in the InstallationError exception raised? It may be helpful in the future.
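Something along these lines in `pip/req/req_install.py` could surface the underlying cause (a sketch, not pip's actual implementation):
```python
# sketch only -- pip's real code differs
try:
    import setuptools
except ImportError as exc:
    raise InstallationError(
        "setuptools must be installed to install from a source "
        "distribution (importing it failed: %s)" % exc
    )
```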
username_1: A colleague just got bitten by this one and the `setuptools must be installed to install from a source distribution` error message did not help.
The traceback when running `import setuptools` was:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/dev/venvs/some_venv/lib/python3.4/site-packages/setuptools/__init__.py", line 12, in <module>
    from setuptools.extension import Extension
  File "/home/user/dev/venvs/some_venv/lib/python3.4/site-packages/setuptools/extension.py", line 8, in <module>
    from .dist import _get_unpatched
  File "/home/user/dev/venvs/some_venv/lib/python3.4/site-packages/setuptools/dist.py", line 18, in <module>
    from setuptools import windows_support
  File "/home/user/dev/venvs/some_venv/lib/python3.4/site-packages/setuptools/windows_support.py", line 2, in <module>
    import ctypes
  File "/usr/lib/python3.4/ctypes/__init__.py", line 7, in <module>
    from _ctypes import Union, Structure, Array
ImportError: /home/user/dev/venvs/some_venv/lib/python3.4/lib-dynload/_ctypes.cpython-34m-x86_64-linux-gnu.so: undefined symbol: _PyTraceback_Add
```
As @username_0 suggested, I think we should improve the error message and include the exception.
Status: Issue closed
username_2: @username_1, I am also getting the same error while trying to install from a source distribution.
What is the fix/workaround for this?
My pip list shows setuptools (18.4).
I am using virtualenvwrapper (4.6.0) and Python 3.4.0
username_3: I just got bitten by the exact same thing, probably due to a bug over at travis, cf. https://travis-ci.org/username_3/matplotlib2tikz/builds/87418012.
username_1: From my understanding of the `venv/lib/python3.4/lib-dynload/_ctypes.cpython-34m-x86_64-linux-gnu.so: undefined symbol: _PyTraceback_Add` issue, this happened after a minor upgrade of the distribution-provided Python 3.4.
Your venv contains **symlinks** to the standard library located in /usr/lib/python3.4 but a **copy** of the interpreter.
During the python upgrade, the symlinked standard library was updated but not the copied interpreter, making them incompatible. A possible solution is to replace your venv python (located in `path/to/your/venv/bin/python3.4`) with the one from your system (`/usr/bin/python3.4`); a simple `cp /usr/bin/python3.4 path/to/your/venv/bin/python3.4` should do it.
Another, even simpler, solution is to destroy and recreate your venv from scratch :)
username_2: @username_1, destroying and recreating the venv solved the issue.
Thanks for the solution :)
username_4: Same issue here, not using venv but the good old virtualenv. Trying to fix it now...
username_5: So how do I fix this?
username_6: The @username_1 solution worked for me as well. Thanks!
username_7: That solution worked for me as well! Not using venv, but virtualenv. Thanks!
username_8: Thanks for the tip. Copying the updated python interpreter to the virtualenv worked perfectly. Quite scary that this happened though...
username_9: I think this happens because the setuptools version is not the newest. Just try updating your setuptools to v20.3.6 or newer. I also found this happening on Linux, maybe because the default setuptools version that ships with Python on Linux is 18.x.
username_10: In this case try "pip install setuptools"
username_11: I found the above recreating a virtualenv to not work. However I destroyed my virtualenv, and then upgraded pip and virtualenv and this issue resolved itself.
```
pip install -U virtualenv
pip install -U pip
```
username_12: Thanks! username_11 solution fixed it! :) |
mozilla/addons-server | 443258409 | Title: Add more statsd logging for git-extraction related tasks/methods
Question:
username_0: We need more statsd visibility for git-extraction related tasks.
More specifically:
* AddonGitRepository.extract_and_commit_source_from_version
* AddonGitRepository.extract_and_commit_from_version
* addons.tasks:migrate_webextensions_to_git_storage
For the extraction, add specific timings for the following:
* Temporary workdir wrapped code (extraction, git add -a, committing etc)
* time `extract_extension_to_dest` separately to get more granular timings (a sketch of this follows below)
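A sketch of the requested instrumentation, assuming a statsd client whose `timer` works as a context manager (the import path and key names below are assumptions, not olympia's actual code):
```python
from django_statsd.clients import statsd  # assumed client; the real import may differ

def extract_and_commit_from_version(self, version):
    # outer timer covers the temporary-workdir block: extraction, `git add -A`, commit
    with statsd.timer('git.extraction.extract_and_commit_from_version'):
        with statsd.timer('git.extraction.extract_extension_to_dest'):
            extract_extension_to_dest(...)  # timed separately for granularity
        ...  # staging and committing, covered by the outer timer only
```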
Status: Issue closed |
publiccodenet/about | 508958426 | Title: Add argumentation on public code as industrial policy
Question:
username_0: ## Aim:
Develop argumentation around public code as industrial policy
## Output:
A concise set of arguments for why public code is a viable industrial policy/strategy for the entrepreneurial state.
## User stories:
* While we are currently focusing on municipalities because they are the most agile of public organizations, the value we create is typically something sought after and the ambition of regional/national/supranational organizations.
The running assumption is a) that these large bodies will be more likely to cover our projected operational costs, and b) these might need a different set of arguments for how public code can help them achieve their ambitions.
## Initial draft ideas:
* Public code as model to boost local economic development and kick-start something akin to an export oriented industrialization strategy
* Public code as maximization of value creation from pubic expenditure
* Public code as the professionalization of public management over digitalised operations
Answers:
username_0: Deprioritising this for now
Status: Issue closed
|
stripe/stripe-android | 312050913 | Title: Tested the code, any idea why CardInputWidget is so slow?
Question:
username_0: Basically it is not usable: you cannot type the credit card number continuously; after each digit you need to pause a little before typing the next one. Any ideas?
Answers:
username_1: Thank you for reporting this issue! Could you tell me what phone you're seeing the issue on to help us focus our investigation?
username_2: I'm also seeing this on a Samsung Galaxy S7 Edge running android 7
username_3: The problem also appears on Galaxy S8 with lib version 7.0.0 in the credit card number input of a CardMultilineWidget
username_4: also happens on Samsung Galaxy SM-T350 running Android 7.1.1
username_2: Happens on an S7 Edge running 8.0 also
username_5: Hi Everyone! Thanks for bringing this to our attention -- adding it to our backlog for prioritization.
ANDROID-270
Status: Issue closed
username_6: Was this issue fixed? I am still facing it on Android 6 (Samsung Note 4) and Android 7 (Samsung Galaxy S6). It's working fine on a Pixel 3 XL running Android 9.
username_7: Fixed in #1787 |
bash/toby | 293825101 | Title: Add [env] to project config
Question:
username_0: Add a new section `[env]` to the project config which allows setting additional environment variables that are passed to scripts.
```toml
[env]
SUPER_AWESOME="yes"
```
Answers:
username_0: Update: Implementation uses `[environment]` instead of `[env]`
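So the example above becomes:
```toml
[environment]
SUPER_AWESOME="yes"
```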
Status: Issue closed
|
inutano/chip-atlas | 499598098 | Title: empty bigwig files
Question:
username_0: Hi,
I see that some of my GSM files have empty bigwig files. I just wanted to ask if there is a specific reason. (Otherwise I will download the raw fastqs and run them through the same pipeline on my end.)
THANK YOU VERY MUCH FOR THIS CRAZY VALUABLE DATABASE. If you hadn't processed this data, I would have had to analyse 150 ChIP-seq files myself. I appreciate your work and effort!
How I installed the files:
```sh
cat interested_gsm-srx.txt | parallel --verbose --col-sep "\t" -j10 "wget --no-check-certificate -O {1}.bigWig http://dbarchive.biosciencedbc.jp/kyushu-u/hg19/eachData/bw/{2}.bw"
```
The empty bigwig files:
```
(base) [tmorova@linuxsrv006 raw-data]$ cat problematic
GSM3290773
GSM3290774
GSM3290777
GSM3290778
GSM503906
GSM503907
GSM696843
GSM696844
```
Answers:
username_1: Hi,
Thank you for using ChIP-Atlas.
I found that the GSMs of your interest are not archived in ChIP-Atlas for the following reasons:
GSM329077*: The data were submitted in Aug 2019, so they will be available from the next update of ChIP-Atlas.
GSM50390*: The "Library Strategy" is "MNase-Seq".
GSM69684*: The "Library Strategy" is "OTHER".
(The "Library Strategy" values archived in ChIP-Atlas are "ChIP-seq" or "DNase-Hypersensitivity".)
I would appreciate your understanding of our update interval and the data-collection policy of ChIP-Atlas.
Cheers,
Shinya
Status: Issue closed
username_0: oh thank you very much for the explanation.
Best regards,
T. |
microsoft/vscode-cmake-tools | 893398933 | Title: Unable to resolve resource cmake-tools-schema:/schemas/CMakePresets-schema.json.
Question:
username_0: Issue Type: <b>Bug</b>
1. Create the new file `CMakePresets.json`
2. Enter `{}`
Actual result:
Unable to load schema from 'cmake-tools-schema:///schemas/CMakePresets-schema.json': cannot open cmake-tools-schema:/schemas/CMakePresets-schema.json. Detail: Unable to resolve resource cmake-tools-schema:/schemas/CMakePresets-schema.json.
Extension version: 1.7.3
VS Code version: Code 1.56.2 (054a9295330880ed74ceaedda236253b4f39a335, 2021-05-12T17:13:13.157Z)
OS version: Windows_NT x64 10.0.19042
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 9 3900X 12-Core Processor (24 x 3793)|
|GPU Status|2d_canvas: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>oop_rasterization: enabled<br>opengl: enabled_on<br>rasterization: enabled<br>skia_renderer: enabled_on<br>video_decode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|63.92GB (56.89GB free)|
|Process Argv|--crash-reporter-id ee5daf53-921f-4db9-9a5d-ec3244a157c7|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vsreu685:30147344
python383cf:30185419
pythonvspyt700cf:30270857
pythonvspyt602:30300191
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyt639:30300192
pythontb:30283811
pythonvspyt551cf:30291415
vspre833:30267464
pythonptprofiler:30281270
vshan820:30294714
pythondataviewer:30285071
vscus158cf:30286554
vscgsv2ct:30301613
vscorehovct:30302760
bridgeflightcf:30302070
vscod805cf:30301675
```
</details>
<!-- generated by issue reporter -->
Answers:
username_1: I am unable to reproduce this by following the steps you shared. If you are still experiencing this issue and can provide more information about how you hit the issue (e.g. settings, folder layout), we can take another look.
Status: Issue closed
username_0: 
username_0: Changed CMake Tools settings:
```
"cmake.cmakePath": "C:/Dev/Tools/CMake/bin/cmake.exe",
"cmake.clearOutputBeforeBuild": false,
"cmake.skipConfigureIfCachePresent": true,
```
username_2: I am not able to reproduce either. I am getting 'Missing property "version"' instead, which is correct.
I also tried with the 1.7.3 exact release, since I suspected that maybe we fixed this since then.
username_3: I just got this as well. It's likely that my file is invalid somehow. I was playing with this cmake generator https://github.com/friendlyanon/cmake-init and i hadn't had previous experience with the presets file before.
I opened up the generated project and it told me that vscode doesn't support version 1, so i changed it to version 2, and then I got this message. |
typescript-cheatsheets/react | 698984996 | Title: A copy-paste example of a basic ErrorBoundary component
Question:
username_0: Class components are not very popular anymore so it's a bit hard to recall how to write one in TypeScript. Here's a snippet to be used:
```ts
type ErrorBoundaryProps = {}
type ErrorBoundaryState = {
  hasError: boolean
}

class ErrorBoundary extends React.Component<ErrorBoundaryProps, ErrorBoundaryState> {
  state: ErrorBoundaryState = {
    hasError: false
  }

  static getDerivedStateFromError(_: Error): ErrorBoundaryState {
    // Update state so the next render will show the fallback UI.
    return { hasError: true } as ErrorBoundaryState;
  }

  componentDidCatch(error: Error, errorInfo: Object) {
    console.error("Uncaught error:", error, errorInfo);
  }

  render() {
    if (this.state.hasError) {
      // You can render any custom fallback UI
      return <h1>Something went wrong.</h1>;
    }

    return this.props.children;
  }
}
```
Maybe it could fit into the corresponding docs section: https://react-typescript-cheatsheet.netlify.app/docs/basic/getting-started/error_boundaries/
Answers:
username_1: thank you! that's exactly where it would go. PR welcome or I'll get around to this sometime
username_2: Hi, was also looking at this a bit. A few things worth noting:
1. You can type the `info: ErrorInfo` from `componentDidCatch(error: Error, info: ErrorInfo)`, as it's in the main React type defs:
`import { ErrorInfo } from "react"`
2. To avoid class components - would also suggest users to look at - [react-error-boundary](https://github.com/bvaughn/react-error-boundary) - that also has type definitions.
Also happy to supply a PR for this if it helps.
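For point 1 above, the boundary could be typed like this (a sketch of the same component with `errorInfo: Object` swapped for React's `ErrorInfo`):
```ts
import { Component, ErrorInfo } from "react";

// sketch: same boundary as above, using React's own ErrorInfo type
class TypedErrorBoundary extends Component<{}, { hasError: boolean }> {
  state = { hasError: false };

  componentDidCatch(error: Error, errorInfo: ErrorInfo) {
    console.error("Uncaught error:", error, errorInfo);
  }

  render() {
    return this.state.hasError ? <h1>Something went wrong.</h1> : this.props.children;
  }
}
```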
username_1: go for it! i like showing both but recommending that people use r-e-b
Status: Issue closed
|
vercel/next.js | 806931533 | Title: using the + sign in a url makes it refresh constantly in development mode
Question:
username_0: **What version of Next.js are you using?**
v10.0.7-canary.7
**What version of Node.js are you using?**
v15.7.0
**What browser are you using?**
Chrome
**What operating system are you using?**
Linux 5.4
**How are you deploying your application?**
next dev (development issue)
**Describe the Bug**
When using + in a URL (filename like `/pages/hello+world.js`) in development mode, the page will refresh every few seconds as if I was changing the file, even if I'm not.
**Expected Behavior**
Either reject the invalid character + (it technically isn't allowed in URLs, but works fine), or do not constantly refresh the page.
**To Reproduce**
1. project prep
```
yarn create next-app test
cd test
cp pages/index.js pages/hello+world.js
yarn dev
```
2. then open localhost:3000/hello+world
3. open developer tools, enable preserve log, and watch the page reload a ton
Status: Issue closed
Answers:
username_2: This issue has been automatically locked due to no recent activity. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you. |
swagger-api/swagger-codegen | 106084071 | Title: CodegenOperation nonempty wrong
Question:
username_0: All `getHasXxxParam` methods in `CodegenOperation` use the following:
return nonempty(bodyParams);
They all use `bodyParams`, regardless the collection they are supposed to check, such as: `queryParams`, `headerParams`, `pathParams`.
As a result, any of these (`queryParams`, `headerParams`, `pathParams`) only show up if there are `bodyParams`.
Answers:
username_1: @username_0 thanks for reporting the issue. Agreed that these are bugs and we'll update that.
I couldn't find anywhere in code in which the `getHasXxxParam` methods are used. If you've a spec that has issue using codegen due to this bug, please share with me so that I can try to repeat the issue. (even though these methods are not used, we should still fix it and again thanks for reporting the bug)
username_0: The `CodegenOperation` is available in the Mustache templating. Hence things like `{{#hasQueryParams}}` are broken, even in the `htmlDocs/index.mustache` that's part of the standard distribution.
username_1: @username_0 you're right. It's done by this commit: https://github.com/swagger-api/swagger-codegen/commit/b14edffc795083ea13ff35e733069989ac042744
I'll file a PR to fix that.
username_0: I'm wondering if there's a workaround. I tried putting an alternative `io.swagger.codegen.CodegenOperation` on the classpath of the `swagger-codegen-maven-plugin`, but that seems not to work. I'd rather not have a temporary fork of the `swagger-codegen` in my environment.
username_1: @username_2 submitted #1225 to address the issue.
username_2: @username_1 have I though?
username_1: @username_2 sorry for incorrectly tagging you in this thread.
username_2: 😎 it's cool.
Status: Issue closed
|