repo_name | issue_id | text
---|---|---|
android-js/androidjs | 555551672 | Title: Packaging error
Question:
username_0: Hey so.. when I try to build the app with `androidjs b` or `androidjs b -f` (the app being [The Story app](https://github.com/android-js/sample-app/tree/master/story-app))
it says
copying user app done.
User data copied
skipped to copy core module !
Core Modules Copied !
reading C:\Users\mm\Desktop\eead\dist\app-debug\AndroidManifest.xml
package name com.androidjs.mypkg
{ '$': { 'android:name': 'android.permission.INTERNET' } }
{
'$': { 'android:name': 'android.permission.WRITE_EXTERNAL_STORAGE' }
}
{ '$': { 'android:name': 'android.permission.READ_EXTERNAL_STORAGE' } }
Done!
AndroidManifest updated!
changing app name C:\Users\mm\Desktop\eead\dist\app-debug\res\values\strings.xml
{ _: 'myapp', '$': { name: 'app_name' } }
App Name updated!
Icon updated!
Building...
(node:7584) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 end listeners added to [ReadStream]. Use emitter.setMaxListeners() to increase limit
events.js:187
throw er; // Unhandled 'error' event
^
Error: spawn java ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:264:19)
at onErrorNT (internal/child_process.js:456:16)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
Emitted 'error' event on ChildProcess instance at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:270:12)
at onErrorNT (internal/child_process.js:456:16)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: 'ENOENT',
code: 'ENOENT',
syscall: 'spawn java',
path: 'java',
spawnargs: [
'-jar',
'C:\\Users\\mm\\AppData\\Roaming\\npm\\node_modules\\androidjs-builder\\bin\\build_tools\\apktool.jar',
'b',
'C:\\Users\\mm\\Desktop\\eead\\dist\\app-debug',
'-o',
'C:\\Users\\mm\\Desktop\\eead\\dist\\app.apk'
]
}
any help will be appreciated
Status: Issue closed
Answers:
username_0: Fixed by tweaking and messing around
username_1: Hi, I'm having this problem too. Could you let me know how you fixed it and what versions of androidjs and androidjs-builder you are using?
username_2: You should have Java installed; this error is generated because the `java` executable cannot be found.
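For reference, `spawn java ENOENT` means Node could not find a `java` executable on the PATH. A quick way to check on Windows (a sketch; assumes a JDK is installed):
```shell
java -version
where java
```
If `where java` finds nothing, install a JDK and add its `bin` folder to the PATH environment variable, then re-run `androidjs b`.
|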
MarimerLLC/cslaforum | 154067067 | Title: Analyzer: Ensure DP Operations Return Void
Question:
username_0: At a client, someone decided to define `DataPortal_Execute()` like this:
private string DataPortal_Execute() ...
They thought they'd be able to return a `string` from a `DataPortal.Execute()` call, which, of course, didn't work.
This got me to thinking....should I create an analyzer that ensures that any DP operation has a return of `void`? I'm thinking that this would be an error condition.
What cases am I missing if I do this?
Answers:
username_1: They can return `void` or `Task`.
Good idea for an analyzer.
username_0: Since I'm on a roll recently with analyzers, @username_1, want to move this to a CSLA issue? I'd want to focus strictly on checking the return value for an operation method and say it can be either `void` or a `Task`. Checking to see if someone is invoking an operation directly should be a separate issue.
With respect to this second idea, are you saying code should *never* invoke an operation method directly, whether they're in an `ObjectFactory` instance or not?
username_1: Yes, this would be a good analyzer.
It is true, nobody should be calling DP_XYZ methods directly. I'd like to say that's true of `ObjectFactory` methods also, but really we're mostly hands-off with how/when people build and use those factories, so I'd say we should _not_ block direct calls to those methods.
username_0: OK, to be clear....
* Will you be adding an issue to CSLA's repo for the return value issue?
* Should we handle the second case?
username_1: Locking this thread so all discussion goes to the actual work issue.
username_0: Hey @username_1 any reason this should still stay open, or close it? I think this is done...
Status: Issue closed
|
rossta/montrose | 525611398 | Title: Defining `during` range to work across days
Question:
username_0: Hi, thanks again for the `during` implementation for the time range. However, recently we've stumbled into an issue where a user defined a time range for scheduled events during 11:00 pm-5:30 am. This call fails and will run indefinitely:
```ruby
Montrose.every(1.hours).during("11:00 pm-5:30 am").starts(Time.now).events.first
```
One way to cater for this is to call two separate queries and merge both arrays of events, but it would be better if the montrose gem could solve this instead of us patching around it. Is that possible?
Answers:
username_1: Yes I believe it would be possible, but I may not get to it soon. I will gladly accept patches.
username_2: A bit old, but here is a workaround: split the range on midnight and build a schedule. I leave the split logic for others to implement, as I'm not sure of the best way to do it offhand.
```ruby
options = {... , during: "11:00 pm-5:30 am", ...}
Montrose::Schedule.build do |s|
  during_split_midnight(options[:during]).map do |during|
    options[:during] = during
    s << Montrose.r(options)
  end
end
```
`during_split_midnight` just splits it into start-to-midnight and midnight-to-end as an array if the range passes midnight; otherwise it returns `[during]`
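In case it helps anyone, one possible sketch of that helper (my own guess at an implementation, not code from this gem):
```ruby
require "time"

# Splits a "start-end" time-of-day range at midnight when it wraps,
# e.g. "11:00 pm-5:30 am" => ["11:00 pm-11:59 pm", "12:00 am-5:30 am"].
def during_split_midnight(during)
  start_s, end_s = during.split("-").map(&:strip)
  if Time.parse(end_s) <= Time.parse(start_s) # range crosses midnight
    ["#{start_s}-11:59 pm", "12:00 am-#{end_s}"]
  else
    [during]
  end
end
```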
Status: Issue closed
|
agdsn/sipa | 109663738 | Title: Add comment in the html source inviting to join
Question:
username_0: Beni had the idea to include some fancy HTML comment at the top of the source code inviting people to join us, perhaps with an ASCII-art-like logo.
Giving it a thought, this could be really effective, because many people like to take a peek behind the actual pages, know that there is this “show source” function in their browser, and click it for fun.
The text could be something like “We like where you're looking at. Come join us for a beer! https://agdsn.de/pages/about_us/join” (note that this page does not exist yet).<issue_closed>
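A minimal sketch of what that could look like (wording adapted from above; the ASCII art is only a placeholder):
```html
<!--
  (ASCII-art logo placeholder)

  Like what you see behind the pages?
  Come join us for a beer!
  https://agdsn.de/pages/about_us/join
-->
```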
Status: Issue closed |
awslabs/aws-sdk-android-samples | 212022073 | Title: About S3transferutility
Question:
username_0: I wrote my **pool Id** and my **bucket** name in Constants.java.
But it doesn't work.
Should I write
```java
observer = transferUtility.upload(
        "bucketname",
        "entityname",
        file);
```
this?
Please tell me what I should do..
Answers:
username_1: Did you follow the [ReadMe](https://github.com/awslabs/aws-sdk-android-samples/blob/master/S3TransferUtilitySample/README.md)?
username_0: @username_1
Thank you for your response!
Yes. I already read the ReadMe and followed all the steps
(wrote my Pool ID and my bucket name..)
And when I run the app, the text that says IN_PROGRESS becomes FAILED.
But the percentage becomes 100%.
Also the image is not uploaded......
I don't know why it is not working..

username_2: Hi, could you please paste your logs so that we could help you debug the issue?
Thanks,
Rohan
username_0: @username_2
Thank you for your response!
When I click the 'Upload an Image' button,
there was no tag message.. but a different error appeared.

and should I add some code here?

I really want to know why it is not working..
username_2: Hello, I am unable to find any error logs in the screen shots above. Do you have the correct permissions to read the file? Can you paste the error logs which you get the when the file upload is cancelled?
Thanks,
Rohan
username_0: @username_2
I really appreciate your response.
Here is the error screen.

Can you give me some solution..?
username_2: Hello @username_0
We have identified the issue and you are right, it's a bug related to the bucket region. As a fix, you need to add the following line:
`sS3Client.setRegion(Region.getRegion(Regions.AP_NORTHEAST_2));`
On line #73 in Util.java of the sample app. We will update once we have a fixed app in the repo.
Thanks,
Rohan
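For anyone following along, in context the change looks roughly like this (a sketch: only the `setRegion` line comes from this thread; the surrounding lines are an assumed outline of the sample's Util.java):
```java
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;

// inside Util.java, where the sample creates its S3 client (assumed outline):
sS3Client = new AmazonS3Client(sCredProvider);
// the fix from this thread: point the client at the bucket's region
sS3Client.setRegion(Region.getRegion(Regions.AP_NORTHEAST_2));
```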
username_1: fixed with 2.4.0
Status: Issue closed
|
rwth-acis/VIAProMa | 640272581 | Title: Raise the "Draw Line" button a bit
Question:
username_0: If the "Draw Line" button in the connection lines menu is fully pressed, its blue background disappears in the menu's background. You should raise the button a bit so that the blue background stays visible.
<issue_closed>
Status: Issue closed |
maestrith/AHK-Studio | 210015249 | Title: Run Selected Text: An older instance of this script is already running Message
Question:
username_0: 1) When running this code `MsgBox, Hello` via "Run Selected Text", a message pops up: "An older instance of this script is already running." Running the same code after commenting it out (+F1) does not trigger the message box. IMO the latter is the expected/correct behavior.
2) selecting and running this code
```
#SingleInstance Force
MsgBox, Hello
```
will also trigger the message
Answers:
username_1: Addressed in 1.003.13. Please close this thread if it is fixed.
username_0: fixed
Status: Issue closed
|
dotnet/roslyn-analyzers | 120037016 | Title: Port FxCop rule CA1003: UseGenericEventHandlerInstances
Question:
username_0: **Title:** Use generic event handler instances
**Description:**
A type contains a delegate that returns void, whose signature contains two parameters (the first an object and the second a type that is assignable to EventArgs), and the containing assembly targets Microsoft .NET Framework 2.0.
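For illustration, the pattern this rule targets looks roughly like this (a sketch, not FxCop's actual test case):
```csharp
using System;

public class ItemChangedEventArgs : EventArgs { }

public class Catalog
{
    // CA1003 would flag declaring a dedicated delegate type for the event...
    public delegate void ItemChangedHandler(object sender, ItemChangedEventArgs e);
    public event ItemChangedHandler ItemChangedLegacy;

    // ...when the generic handler already covers it:
    public event EventHandler<ItemChangedEventArgs> ItemChanged;
}
```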
**Proposed analyzer:** Microsoft.ApiDesignGuidelines
**Notes:**<issue_closed>
Status: Issue closed |
scalameta/scalameta | 115560828 | Title: we can probably do away with installNavigationLinks
Question:
username_0: Since our converting traversal is top-down anyway, we can propagate custom information from parents to children. I've been doing this for a while already, and this technique seems to work. See one of the many examples here: https://github.com/scalameta/scalameta/blob/b3febd76693e69695e6588635b86d6701acbb3e3/scalahost/src/main/scala/scala/meta/internal/hosts/scalac/reflect/LogicalTrees.scala#L240.
Answers:
username_0: One useful thing about installNavigationLinks is that it enables easy crash reporting without having to maintain the stack of parents.
Status: Issue closed
|
StylishThemes/GitHub-Dark | 599714475 | Title: Use CSS Variables/Properties instead of magic values
Question:
username_0: There are 23 instances of `#202020` in the [file](https://stylishthemes.github.io/GitHub-Dark/github-dark.user.css). If GHD used variables, changing/customizing these values would be easier.
I'm trying to make it even darker (`#000`) but customizing the value forces me to copy huge selectors.
Answers:
username_1: GHD is a complex theme that uses generators; for instance, https://github.com/StylishThemes/GitHub-Dark/blob/ea8443d0e06294b5701bac1775e7e9bc97b4b5e6/tools/generate.js#L21-L23 is exactly what generates https://github.com/StylishThemes/GitHub-Dark/blob/ea8443d0e06294b5701bac1775e7e9bc97b4b5e6/github-dark.css#L644-L884
So, syntax themes aside, what new config colors exactly are you proposing?
username_0: I never talked about options/UI.
Currently I'm overriding whole selectors in my extra userstyle for GitHub. If GHD used variables, I could just set:
```css
:root {
--level-0: #000;
--level-1: #050505;
}
```
username_0: I mean, even the Code colors as they are configured now shouldn't be part of the stylesheet anyway, given the complexity they bring. They could just be additional stylesheets.
username_1: Yea, we could use root values, but separate stylesheets are already used; as such, we use different moz-doc sections.
The generator doesn't know about these different moz-doc sections and doesn't generate CSS into those areas, and until it does, your design suggestion won't work very well.
I don't even know if the generator supports root values as is.
username_0: Or, if `var()` was used everywhere, just a _separate style to override the current **variables**_ rather than whole rules.
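For example (a sketch; the variable names are invented):
```css
/* hypothetical generated rule in GHD */
body, .bg-white { background-color: var(--ghd-bg, #202020) !important; }
```
Then a user override shrinks to:
```css
:root { --ghd-bg: #000; }
```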
username_1: We can't have separate stylesheets, only @-moz-document sections.
And again, IDK if the generator supports `var()`. For the sake of a darker style you can use this:
```css
/* ==UserStyle==
@name GitHub Prefer Color Scheme
@namespace StylishThemes
@version 0.0.2
@description Supports
@author StylishThemes
@homepageURL https://github.com/StylishThemes/Feature-Override-Styles/
@supportURL https://github.com/StylishThemes/Feature-Override-Styles/issues/new/choose
@updateURL https://raw.githubusercontent.com/StylishThemes/Feature-Override-Styles/master/dark-mode-for-github.user.css
@license CC-BY-SA-4.0
@var select media-query "Require system appearance" [
"None",
"Dark",
"Light"
]
@var color darkC "Dark background color" #111
@var color lightC "Light background color" #eee
@preprocessor stylus
==/UserStyle== */
@-moz-document regexp("^https?://((education|gist|guides|help|lab|raw|resources|status|developer|support)\\.)?github\\.com/((?!generated_pages/preview).)*$"), domain("githubusercontent.com"), domain("graphql-explorer.githubapp.com"), domain("www.githubstatus.com") {
if media-query == "None" {
@media (prefers-color-scheme: no-preference) {
html body, html .bg-white,
html body .selected.reponav-item {
background-color: inherit !important;
}
}
}
else if media-query == "Dark" {
@media (prefers-color-scheme: dark) {
html body, html .bg-white,
html body .selected.reponav-item {
background-color: darkC !important
}
}
}
else if media-query == "Light" {
@media (prefers-color-scheme: dark) {
html body, html .bg-white,
body.selected.reponav-item {
background-color: lightC !important
}
}
}
}
```
That's something I wrote and am testing for resolving https://github.com/StylishThemes/GitHub-Dark/issues/960
But at this time it's not fully ready and seems to be slightly buggy, though it will darken this style to whatever colors you set.
Also, a smaller style can be made to overwrite what you want. I don't have anything ready and can't promise I'll make one, but you get the idea; it's faster than waiting, since the change you want is probably more work than it's worth imo at this time.
username_0: What you wrote ***is*** a separate style. I already wrote one with 3 of the biggest selectors I copied from GHD, no additional options.
username_1: A separate style is faster at this time; the amount of work to implement what you want is not something I'm personally prepared to undertake right now.
Also, it needs to work with the generator; this is integral. I reiterate my latest thoughts.
As for your approach, it is not ideal; you can't just copy huge rules out, because these are automatically generated. This style is not intended to be overwritten the way you currently do it.
username_0: That’s the whole point of this issue. Use `var()` (when possible) and my stylesheet drops to 5 lines
username_1: PR's welcome
username_2: `var()` would be nice, but we first need a solid naming scheme for the variables which is rather hard when you have like 40 shades of grey to name. Any advice/best practices for this?
username_0: This would also be good.
username_1: Using this http://www.css-color-extractor.com/ we can see pretty much all the colors we use, for the first moz-doc section pasted in.

username_0: Excellent! I was looking for that tool exactly but couldn't find it.
Yeah, there are a lot of nearly-identical colors that could be merged. I'm sure those with transparency are also duplicated, but to deduplicate them you'd have to [use RGB colors everywhere](https://stackoverflow.com/a/41265350/288906) (and maybe it's not necessary)
Also not all of them would have to be variables, maybe just the most common ones: either because they appear multiple times in the CSS or because they appear often on the site.
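For reference, a sketch of the technique behind that link (the channel values live in the variable so the alpha can vary):
```css
:root { --fg-rgb: 200, 200, 200; }
.text  { color: rgb(var(--fg-rgb)); }
.muted { color: rgba(var(--fg-rgb), 0.6); }
```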
username_1: Well, if this is indeed going forward, leaving a mix-and-match is not my idea of a job done.
username_3: No need to change the grey colors, they are grey in dark or light themes too :-)
mid-intensity rgbA colors with transparency are universal too.
Apart from that go for the CSS color variables !
The CSS rule **@media screen and (prefers-color-scheme: dark) { }** would work,
but it would enforce one color only, and only for the users that already have that rule ON.
No way to switch it ON or OFF during sunny moments, or give it to all as a clickable option.
To resolve that, I did the hard work.
Made and tested the code for a CSS color switch with memory. Great for the night and evening time.
See live demo on GitHub page:
https://dorson.github.io/CSS-Dark-Mode-and-color-switch/
Have fun, feel free to fork, copy, adopt, port, etc...
username_2: Regarding syntax rework, I think we should include a "base" dark syntax theme in the style and other syntaxes would be separate usercss files. That way we can get rid of the theme sections and probably use `default` preprocessor that provides css variables.
username_4: @username_0 Fredrico, is your goal in making it darker to decrease overall brightness, increase contrast, or something else?
username_1: This request has now pretty much handled itself, now that GitHub provides CSS vars.
Pretty much 80% of all generated rules aren't needed anymore, since all it takes is reassigning colors to GitHub's existing CSS vars.
IMO a generator is now overkill for the style, with all the redundant stuff for GH domains.
username_2: GitHub is already at something like 500+ css vars. I'd say they are overkilling it with making too specific vars and not sharing enough of them, and it's still a continuous maintenance requirement when they add new vars for every tiny thing in the UI. |
18F/api.data.gov | 98313871 | Title: Ensure team is getting app-level error notification alerts
Question:
username_0: When an application or code-level error occurs on our api.data.gov systems, it currently gets logged and sent to [Rollbar](https://rollbar.com/) for tracking and alerting purposes. However, it's currently getting logged in a Rollbar account that belongs to NREL, which obviously isn't ideal for api.data.gov. This also means that I'm currently the only one getting these notifications for application-level errors (this is how I found out about #266). These type of errors aren't very frequent (knock on wood), so I had sort of forgotten about this when I was transferring all the ops accounts over to 18F proper, but this would also be a good thing to sort out and make sure everyone on the api.data.gov team has access to.
As a short-term fix, what I think might be easiest is to setup a free Rollbar account for api.data.gov and switch things to point there (our volume should be fine for the free account). As a longer-term fix, switching all of this over to New Relic (which provides similar functionality), would probably make most sense since we're already using New Relic for the servers. However, setting up New Relic will require just a tad more work to get all the proper integrations in place for alerting, so that's why a temporary Rollbar account might be the quickest initial option (since the integrations are already in place in the apps).<issue_closed>
Status: Issue closed |
GeoGuideProject/geoguide | 274628090 | Title: UX Improvements
Question:
username_0: - [ ] The name of the file should be default value for TITLE (@TiagoLisboa)
- [ ] Button “Explore” → “Highlight” (@TiagoLisboa)
- [ ] Let’s have a small configuration button on the top right, by the side of the heatmap. When it is clicked, it shows a popup with configuration options. Then we can safely remove the “Tuning Parameters” section. (@username_0)
- [ ] Let’s ask in the beginning which attributes are needed to be shown for each point (@username_0)
- [ ] Move filters to bottom (@chicobentojr)<issue_closed>
Status: Issue closed |
SIWECOS/siwecos-website | 457477022 | Title: Efficiently track marketing campaigns and external communication
Question:
username_0: Hallo,
könnt Ihr bitte unsere IP-Adresse 192.168.3.11 über das Matomo Backend ausschließen. Am besten eure IPs auch, damit die internen Zugriffe die Statistiken nicht verwässern.
Ich empfehle auch Ziele einzurichten (Scan ohne Registrierung, Registrierung, Anmeldung, Scans im eingeloggten Zustand und Downloads. Bin nicht sicher ob noch weitere Ereignisse eine Rolle spielen.
Habe gerade gesehen, das es eine Ziel-Seite gibt: https://siwecos.de/willkommen-bei-siwecos/ -- hier können doppelte Aufrufe entstehen, wäre aber der Einfachheit halber ohne Aufwand über das Backend, als Ziel einzurichten.
Wenn Ihr da Unterstützung benötigt haben wir einen AP im Haus der Support geben kann.
Answers:
username_1: 192.168.3.11 has been removed from tracking
username_0: Could you possibly exclude your IP or IPs as well? If not, depending on the hypothesis, inaccuracies in the data are not the end of the world :)
Status: Issue closed
|
peeringdb/peeringdb | 773324056 | Title: Add Carrier Record Type
Question:
username_0: Hello PDB Team,
I'd like to request that we add a new object type: Carrier.
It could be related to both the IX and FAC objects. Lots of discussion needed in terms of how relations would work, etc.
But I'd really like to add the capability to document which carriers are at which facility and IXP.
Answers:
username_1: Is this issue intended to provoke discussion on what the definition of a "Network" is?
username_0: Actually no, it's meant to leverage PeeringDB as a carrier list for Facilities and a "reseller/transport" list for IXPs.
An effort to actually add value vs rejecting people.
username_1: Gotcha.
At EPF 2019 I suggested the PDB 'object type' data model should perhaps be reviewed to pave the way for more types of operations to be documented in PDB.
[Snijders_EPF2019_blurring_lines_ixp_isp.pdf](https://github.com/peeringdb/peeringdb/files/5732618/Snijders_EPF2019_blurring_lines_ixp_isp.pdf)
username_2: You simply tried to blur the line between `ixp` and `net` but did not contribute an extension. A `carrier` object makes perfect sense, especially as it opens up a lot of new relations.
username_2: @peeringdb/pc; please take a **closer** look
username_1: Hmmmm, the slides suggest otherwise. I brought up "optical transport providers" as an example of something that doesn't fit the current model.
username_2: They do not. The basic line of your presentation is: **Anyone should be able to self-declare “I am an Exchange” or, “I am an IP service provider**
And this simply doesn't work. There has to be a **common** understanding what an **IXP** and what an **ISP** is.
username_1: I'd like to better understand what is meant with "Carrier", the word seems to mean different things to different people in the global community. It would be helpful to iterate which technologies are in use and how they are applied.
Many IP networks call themselves "carriers".
I think focus on technological interconnection models is another way to solve these categorization issues.
username_2: Sure. And they most likely are.
* A carrier is an organisation that provides transparent OSI layer 1/2 connections between point A and point Z
username_0: My thought process was that "carriers" or facilities could announce carrier presence in a few ways:
1) Facility-based DF
2) Leased DF
3) Wave/Lit L1ish
4) Ethernet
4a-z) flavors of Ethernet handoff?
On the IXP side:
1) Reseller VLANs
2) L2 connectivity to FAC for IXP connection
Obviously lots to think through, but I think it would be a good space for PDB to get into.
username_3: If we were to go ahead with this enhancement, how should the new `carrier` object differ from existing objects? Also, is it possible to separate the design and development of the new object from any policy or process needed to support users who want to create `carrier` objects?
username_1: I think from a 'peeringdb API technology and concept perspective' a `carrier` object would be very much like a `facility` object. In terms of peeringdb development effort I'd probably gauge it to be in the ballpark of "a copy of the facility object type" called `carrier`. And similar wizard & search properties related to the facility type.
username_2: IMO the `carrier` object differs from the `fac` object. I would expect that the `carrier` object has information about what transport technologies are supported. E.g. Ethernet, DF, Waves, you name it. So it's more like the `ix` object. And besides the `carrier` object we would also have
- `netcarrier`: `net` is reachable via `carrier`
- `ixcarrier`: carrier provides transport to `ix`
- `carrierfac`: `carrier` is available in `fac`
Does that make sense?
username_4: No. Not at all. None of this is in plain English, so those of us who haven't been
buried in PDB for the last five years can't understand it.
username_2: IMO new objects should be in line with what we already have.
- `netcarrier` corresponds to `netfac` and `netixlan`
- `ixcarrier`corresponds to `ixfac`
- `carrierfac` corresponds to `ixfac` and `netfac`
Neither of them is in English as well.
username_4: The question I would have is: what is the business purpose of the carrier
object? Is it to record the existence of a carrier network so it can then be bound to
a facility, presumably a fac record?
username_5: @username_4 my understanding is it's to represent organizations that offer Layer 1/2 services vs a network which generally offers Layer 3 services. @username_2 is proposing a data model that we would let such an org say they offer such services to reach a given network, a given ix or a given facility. Much like a given network can do now.
There's lots of ways we could implement this, this is just one proposal that's fairly similar to the way peeringdb is currently structured (we have a netix object to say a network can reach an ix. A netfac object to say a network can reach a facility. An ixfac object to say an ix can reach a facility, etc).
username_4: Thanks @username_5. Going back to the top of the thread @username_0 seems to say he simply wants to be able to relate a carrier to a facility. See https://www.peeringdb.com/fac/7878 and notes for what I would like to see us avoid doing by implementation of such an object. @username_0 may have additional thoughts.
username_2: @username_4, that is what @username_0 is proposing in this issue. And in my [earlier post](https://github.com/peeringdb/peeringdb/issues/909#issuecomment-781669317) I've ironed out how this could look like.
username_2: Why, @username_4?
username_4: @username_2 you should direct those questions to Mr. @username_0 since he made them. Thanks.
username_5: One reason I tagged @username_6 is he had proposed something similarly previously, partially because there existed carriers who offered services to connect to IXes. I think we're all in agreement we want carrier <=> facility representation. Let's see if there exist use cases for carrier <=> ix and carrier <=> network or reasons to explicitly not want to model them here.
username_2: Looks like I misunderstood "... and notes for what I would like to see us avoid doing ..."
username_2: use cases for carrier <=> ix: IX resellers
username_0: The point of the request was to inventory what L1/L2 carriers are present at a Facility and what L1/L2 resellers are available for an iX. I don't really see the point for Networks.
username_6: to let networks signal that they reach an ix thru a given carrier?
username_4: who cares about Ix’s? its about interconnection opportunity.
username_0: Yan,
I’m not sure that we want to inventory what IX participant rides on what carrier in PDB, that’s more of a contractual relationship.
I see Fac and IX.
-Chris
username_6: yeah, agreed, that one is not a credible use case
username_2: Would the last one need (carrier, fac, ix) then? I.e. while the carrier and ix are both in fac1 and fac2, the carrier may only hand off a connection to the ix in fac1.
username_3: What is the status of this discussion? Do we want to create a `carrier` object and update a subset of other objects?
username_2: IMO this discussion is still ongoing. Esp. if we create `carrier` how it should look like and which composed objects (`ixcarrier` and `carrierfac`) we also need.
username_3: It would be good to get input from the rest of @peeringdb/pc on this. If there's consensus, perhaps we could schedule this for 2.28.0?
username_3: The PC ran two focus groups at the end of June. The discussion and next steps were summarized in [this blog post](https://docs.peeringdb.com/blog/carrier_object/). This issue will be updated when there is a decision on whether to create a new object type. If there is a decision to proceed there will be further engagement with PeeringDB users over the design and name. |
impossibl/pgjdbc-ng | 518438360 | Title: Unhandled message: C @ com.impossibl.postgres.protocol.v30.PrepareRequest$Handler java.lang.IllegalStateException: Unhandled message: C
Question:
username_0: I'm now getting this:
```
2019.11.06 13:21:31 [pool-2-thread-9] [null] [F397101258136ZGZMAO] ERROR no.officenet.origo.core.integration.QueueActor.$anonfun.applyOrElse:88:30 - Error in QUEUE_CONSUMER: Unhandled message: C @ com.impossibl.postgres.protocol.v30.PrepareRequest$Handler
java.lang.IllegalStateException: Unhandled message: C @ com.impossibl.postgres.protocol.v30.PrepareRequest$Handler
at com.impossibl.postgres.protocol.v30.MessageDispatchHandler.dispatch(MessageDispatchHandler.java:214)
at com.impossibl.postgres.protocol.v30.MessageDispatchHandler.channelRead(MessageDispatchHandler.java:149)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at com.impossibl.shadow.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328)
at com.impossibl.shadow.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:315)
at com.impossibl.shadow.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:429)
at com.impossibl.shadow.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at com.impossibl.shadow.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at com.impossibl.shadow.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at com.impossibl.shadow.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
at com.impossibl.shadow.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at com.impossibl.shadow.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
at com.impossibl.shadow.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
at com.impossibl.shadow.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
at com.impossibl.shadow.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
at com.impossibl.shadow.io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at com.impossibl.shadow.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:830)
```
How shall I proceed with debugging this?
Answers:
username_1: Closing as a duplicate of #457 Please attach this information on that issue.
Status: Issue closed
username_0: Actually, this issue comes from the fact that EclipseLink sends a "pingSQL", defined in `PostgreSQLPlatform.pingSQL`, after the `TooLongFrameException` has occurred.
This are the last log-lines on the server (PG-log)
```
2019-11-07 14:36:52.075 CET [16700-52] username_0@rev LOG: execute cached-65948c5b: SELECT DISTINCT t1.entity_id, t1.part_type, t1.filename, t1.part_index, t1.mimetype, t1.raw_content_filename, t1.size, t1.uid, t1.version, t1.message_id, t1.parent_id, t1.charset, t1.string_content, t1.multipart_type, t0.entity_id, t0.header_index, t0.header_name, t0.version, t0.part_id FROM origo_email_part t1 LEFT OUTER JOIN origo_email_part_header t0 ON (t0.part_id = t1.entity_id) WHERE (t1.parent_id = $1) ORDER BY t1.part_index ASC
2019-11-07 14:36:52.075 CET [16700-53] username_0@rev DETAIL: parameters: $1 = '1'
2019-11-07 14:36:52.632 CET [16700-54] username_0@rev LOG: execute lo.close: select lo_close($1)
2019-11-07 14:36:52.632 CET [16700-55] username_0@rev DETAIL: parameters: $1 = '1'
2019-11-07 14:36:52.632 CET [16700-56] username_0@rev LOG: execute TR: ROLLBACK
2019-11-07 14:36:52.632 CET [16700-57] username_0@rev ERROR: prepared statement "cached-3fcfe755" already exists
2019-11-07 14:36:52.632 CET [16700-58] username_0@rev STATEMENT: SELECT 1
```
The last `SELECT 1` is from when EclipseLink calls `DatabasePlatform.wasFailureCommunicationBased` to check if the connection is still valid after the `TooLongFrameException: Adjusted frame length exceeds 20971520: 29028470 - discarded` error has occurred, masking the error. So I don't really consider this issue closed or fixed, as it should be clear what the error is, not just `Unhandled message`, which isn't very helpful.
username_1: Keep this database handy, if you can. I agree that a more "graceful" failure here is wanted.
Having TooLongFrameException generate a much better error message (easy to do) and resetting the message dispatch state so as not to be "confused" (much harder to do) would be really helpful here.
username_0: It's no problem keeping this database around. |
ww156/comment-course | 468083667 | Title: HTML CSS - Introductory Programming Tutorial_w3cschool
Question:
username_0: https://www.lovefree.cc/html/html-css/
HTML Styles - CSS. CSS (Cascading Style Sheets) is used to render the styles of HTML element tags.
Look! Styles and colors Manipulate Text Colors, Boxes and more… How to use CSS: CSS came into use with HTML 4 and was introduced to render HTML elements better.
CSS can be added to HTML in the following ways:
Inline styles - use the "style" attribute on HTML elements. Internal style sheet - include CSS with a <style> element in the <head> section of the HTML document. External reference - use an external CSS file. The best way is to reference CSS files externally.
In this site's HTML tutorial we used inline CSS styles to present the examples; this keeps the examples simple and also makes it easier for you to edit the code and run the examples online.
You can learn more about CSS through this site's CSS tutorial.
Inline styles: when a special style needs to be applied to an individual element, you can use an inline style. To use an inline style, add the style attribute to the relevant tag. The style attribute can contain any CSS property. The following example shows how to change the color and left margin of a paragraph.
<p style="color:blue;margin-left:20px;">This is a paragraph.</p> To learn more about styles, please visit the CSS tutorial.
HTML style example - background color: the background-color property defines an element's background color:
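The example this excerpt leads into is cut off here; it would presumably look something like:
```html
<body style="background-color:yellow;">
```
|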
codebar/chapter-organiser-meetings | 400670028 | Title: Next steps for the codebar website
Question:
username_0: We've been struggling for years with performance and reliability on the codebar website. The central London chapter isn't able to send out emails anymore because of the cost, and because sending so many emails at once takes down the website server.
At different times, people have floated the idea of rebuilding the website a bit at a time, using the '[strangler pattern](https://www.martinfowler.com/bliki/StranglerApplication.html)'. e.g. You could
- start by just replacing the evergreen landing pages with a static service
- create a new event page and event admin page that starts with workshops but gradually replaces all the different types of events
- just build a new email system
However, the codebar website is complex, and rebuilding even a small part of it could take a skilled developer weeks or months of full-time work. I'm interested in people's suggestions:
- What needs replacing most urgently?
- What should we replace it with? It's mostly written in Ruby - would it be more maintainable by more developers in the long term if it were migrated to JS?
- How do we go about getting this work done? Do we know any companies who would be willing to donate a large amount of developer time to help get this done?
Answers:
username_1: Some quick ideas (please correct me if I have anything wrong). The web site comprises the following:
- Landing pages
- Blog
- Tutorials
- Members (students, coaches) and workshop management
The tutorials are already separate and use a static web service (Jekyll) so it feels like a sensible technology to use to migrate the static (landing pages) and low update content (blog).
The members and workshop part needs a dynamic system and the current one works, with the exception of emails. So the first thing to look at would be the email service: what is being used now and should it be migrated to something else? (I use Mailgun at work and I find it great to work with) Going forward, it may be worth migrating the dynamic part to something else. One technology I'm experimenting with at work is a headless CMS with a lightweight dynamic site on top: I'm happy to report back on how it works once I know a bit more.
Based on that, the priorities seem to be:
1. Fix the email problem
2. Separate out the static and low update content to a technology like Jekyll
3. Work out a path for the dynamic part of the site
An additional question I have: considering there are more and more chapters, should the web site support multiple languages at some point?
username_2: I'm encouraging a group of people to start **codebar Rio de Janeiro** and one of the hurdles is the fact the website is all in English, so I can certainly see the appeal for this.
username_3: We have an ongoing initiative in Barcelona to have the landing page in EN, ES and CAT(alan). As it is now, codebar Barcelona is not accessible for people who do not speak English. Because we use English, the majority of our students/coaches are expats. We want to have more locals, but until now only EN was used online and during the workshops.
Having multi-language support will definitely help with this.
username_4: @username_1 @username_2 the website already supports multiple languages, you can add translations via [Crowdin](https://crowdin.com/). For example check out: https://codebar.io/?locale=de. Of course we could improve upon the current implementation with a language select dropdown in the footer. |
NoseF17/APPR-2019-20 | 575379553 | Title: Selecting rows for a chart
Question:
username_0: Imam tabelo vseh evropskih držav, ki po letih prikazujejo število delovnih ur na teden.
Zanima me, kako lahko iz celotne tabele (v mojem primeru A1) izberem dve državi in nato le te dve prikažem v grafu, ki prikazuje spreminjanje delovnih ur po letih.
Answers:
username_1: You can do it like this, for example:
```r
library(dplyr)
library(ggplot2)

g3 <- ggplot(data = A1 %>% filter(Drzava %in% c("Belgium", "Denmark")),
             aes(x = Leto, y = SteviloDelovnihUr, color = Spol, shape = Drzava)) +
  geom_point() + ggtitle("Število delovnih ur po letih") +
  theme(panel.background = element_rect(fill = "grey"))
```
username_0: Thanks for the help.
Kitteh6660/Corruption-of-Champions-Mod | 334840405 | Title: Bug
Question:
username_0: Please prefix the title with the issue type: [Bug], [Enhancement], [Question], [Other]_
### I have checked the following before submitting this issue:
[ ] The bug is still present in the latest release
[x] Searched the existing issues so I do not open a duplicate issue
[x] Filled out the template, so developers are less likely to ask for more info
### Overview
Vag Fuck option for Urta does not recognise male genitals correctly for player character.
### Game version
1.4.13 android
### Expected behaviour
Urta should recognise player male genitals correctly
### Actual behaviour
During her dialog at the end of the scene she always recognises them as human and ordinary-looking. Also, if a knot is present she will always recognise your male genitals as dog-like, even if they are a different type (like equine with a knot), and her last dialog for a human dong still plays.
### How often can this be reproduced?
Always
### Steps to reproduce the issue
1. Become lovers with Urta
2. In her sober state, choose her place and pick the Vag Fuck scene
If possible, attach a save file that reproduces the issue.
Please be aware that any files uploaded here are publicly accessible and cannot be deleted (however, the link to them can be removed).<issue_closed>
Status: Issue closed |
huggingface/transformers | 521192076 | Title: MNLI: BERT No Training Progress
Question:
username_0: ## 🐛 Bug
I am using BERT and have successfully set up pipelines for 7/8 GLUE tasks, and I find comparably good accuracy on all of them. However, for the MNLI task, the training loss does not decrease. I am correctly using 3 classes. In fact, I have even reduced the scope of MNLI to a 2-class problem (entailment vs neutral/contradiction) for testing purposes, and the model fails to converge here as well. The exact same code (with only the dataset switched) works for MRPC, for example.
Model I am using (Bert, XLNet....): **BERT**
Language I am using the model on (English, Chinese....): **English**
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: **MNLI**
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
See linked co-lab for minimum viable example
https://colab.research.google.com/drive/1Thdrpvp0uX2TCaCpqsLoUVn7DuRLlILQ
## Expected behavior
I expect to see good training progress, but the training loss only oscillates around 0.64.
## Environment
* OS: **Linux**
* Python version: **3.6**
* PyTorch version: **2.1.1**
* PyTorch Transformers version (or branch): **Main release**
* Using GPU ? **Yes**
* Distributed of parallel setup ? **No, single GPU**
* Any other relevant information: **None**
Answers:
username_1: I'm assuming you did multiple runs with ≠ seeds?
username_0: Yes -- I have tried with multiple different seeds.
username_2: @username_0 Have you found a solution? I'm also having issues with MNLI data.
username_0: @username_2 Nope. I think one resolution was to not use BERT large.
username_3: I am facing the same issue with BERT and ALBERT as well. The model does not converge on fine-tuning and the loss does not decrease even over 5 epochs. Did anyone manage to solve this issue?
distriqt/ANE-CustomResources | 232091524 | Title: Tool versions working with Windows 10?
Question:
username_0: Can you confirm what versions of:
Ant
Java SDK
AIR SDK
were used to get this working on 10?
I have
Ant 1.10.1
jdk1.8.0_131
Air 26
Latest Android SDK
Trying to build firebase android config ANE
Here's what I get:
[console.txt](https://github.com/distriqt/ANE-CustomResources/files/1036592/console.txt)
Please advise
Answers:
username_1: Can you make sure you have the latest version of the script? That console output looks like it's from the old version.
username_0: I copied from latest firebase code. I think that needs to be updated?
username_1: Yes, the firebase version is going to be removed.
username_0: Buildfile: C:\Users\Debug\Desktop\config\build.xml
clean:
clean_actionscript:
clean_default:
clean_android:
build:
build_actionscript:
[echo] Building actionscript library...
[compc] Loading configuration: C:\Program Files\Adobe\Adobe Flash Builder 4.7 (64 Bit)\sdks\4.6.0\frameworks\air-config.xml
[compc]
[compc]
[compc] 2172 bytes written to C:\Users\Debug\Desktop\config\platform\actionscript\bin\distriqt.extension.config.swc in 0.777 seconds
[compc]
[echo] done
build_default:
[echo] Building default library...
[compc] Loading configuration: C:\Program Files\Adobe\Adobe Flash Builder 4.7 (64 Bit)\sdks\4.6.0\frameworks\air-config.xml
[compc]
[compc]
[compc] 2016 bytes written to C:\Users\Debug\Desktop\config\platform\default\bin\distriqt.extension.config.default.swc in 0.669 seconds
[compc]
[echo] done
create_android_project:
[mkdir] Created dir: C:\Users\Debug\Desktop\config\platform\android\app
[copy] Copying 4 files to C:\Users\Debug\Desktop\config\platform\android\app
[mkdir] Created dir: C:\Users\Debug\Desktop\config\platform\android\app\src\main\java\com\mapinn\mycardmap
[copy] Copying 2 files to C:\Users\Debug\Desktop\config\platform\android\app\src\main\java\com\mapinn\mycardmap
[mkdir] Created dir: C:\Users\Debug\Desktop\config\platform\android\app\src\main\res
[copy] Copying 1 file to C:\Users\Debug\Desktop\config\platform\android\app\src\main\res
[copy] Copying 1 file to C:\Users\Debug\Desktop\config\platform\android
[copy] Copying 1 file to C:\Users\Debug\Desktop\config\platform\android\app\libs
build_android:
[echo] Building Android library...
build_android_osx:
build_android_windows:
BUILD FAILED
C:\Users\Debug\Desktop\config\build.xml:215: The following error occurred while executing this line:
C:\Users\Debug\Desktop\config\build.xml:146: The following error occurred while executing this line:
C:\Users\Debug\Desktop\config\build.xml:163: Execute failed: java.io.IOException: Cannot run program "gradlew.bat" (in directory "C:\Users\Debug\Desktop\config\platform\android"): CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:620)
at org.apache.tools.ant.taskdefs.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:58)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:426)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:440)
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:629)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:670)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:496)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
[Truncated]
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
at java.lang.ProcessImpl.start(ProcessImpl.java:137)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 49 more
Total time: 2 seconds
```
Following are the environment variables:
ANT_HOME = C:\apache-ant\apache-ant-1.10.1
JAVA_HOME = C:\Program Files\Java\jdk1.8.0_131
JRE_HOME = C:\Program Files\Java\jre1.8.0_131
Amended Path variable with: %JAVA_HOME%\bin;%JRE_HOME%\bin;%ANT_HOME%\bin
Note: I was able to generate the ANE via macOS, but that was a loaner Mac, and I need to use Windows for this to work and to make further changes in the future. Please advise.
username_0: I just pulled the latest version and got the same issue in Windows 10.
username_1: Can you confirm you have gradlew.bat file in `C:\Users\Debug\Desktop\config\platform\android` and that it has executable permissions?
username_0: Yes its there. Executable permissions are also there.
username_1: How are you executing ant? Are you in the same directory as build.xml?
username_0: Yes
username_0: ```
PS D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master> ls
Directory: D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master
Mode LastWriteTime Length Name
---- ------------- ------ ----
da---- 31-May-17 1:36 AM build_config
da---- 31-May-17 1:36 AM images
da---- 31-May-17 1:36 AM platform
da---- 31-May-17 2:45 PM res
-a---- 31-May-17 1:36 AM 12 .gitignore
-a---- 31-May-17 1:36 AM 9118 build.xml
-a---- 31-May-17 1:36 AM 32 CHANGELOG.md
-a---- 31-May-17 1:36 AM 4857 README.md
PS D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master> ant
Buildfile: D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\build.xml
clean:
clean_actionscript:
[delete] Deleting directory D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\actionscript\bin
clean_default:
[delete] Deleting directory D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\default\bin
clean_android:
[delete] Deleting directory D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\android\app
build:
build_actionscript:
[echo] Building actionscript library...
[compc] Loading configuration: C:\airsdk\frameworks\air-config.xml
[compc]
[compc]
[compc] 2172 bytes written to D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\actionscript\bin\distriqt.extension.firebase.swc in 1.059 seconds
[compc]
[echo] done
build_default:
[echo] Building default library...
[compc] Loading configuration: C:\airsdk\frameworks\air-config.xml
[compc]
[compc]
[compc] 2016 bytes written to D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\default\bin\distriqt.extension.firebase.default.swc in 0.786 seconds
[compc]
[echo] done
create_android_project:
[mkdir] Created dir: D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\android\app
[copy] Copying 4 files to D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\android\app
[mkdir] Created dir: D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\android\app\src\main\java\com\project\test
[copy] Copying 2 files to D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\android\app\src\main\java\com\project\test
[mkdir] Created dir: D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\android\app\src\main\res
[copy] Copying 1 file to D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master\platform\android\app\src\main\res
[Truncated]
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405)
at org.apache.tools.ant.Project.executeTarget(Project.java:1376)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1260)
at org.apache.tools.ant.Main.runBuild(Main.java:857)
at org.apache.tools.ant.Main.startAnt(Main.java:236)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:287)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:113)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
at java.lang.ProcessImpl.start(ProcessImpl.java:137)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 49 more
Total time: 3 seconds
PS D:\downloads\ANE-CustomResources-master\ANE-CustomResources-master>
```
username_0: I also opened a CMD console with "Run as Administrator" and then ran ant, and got the same results. :(
username_1: Have you installed android studio and run at least once to install dependencies?
username_0: Yes, I have installed android sdk 23 to 25 on my PC. What other dependencies are we looking at here?
username_0: I think your build script is wrong for Windows
I have googled, and what I noticed was that a "/c" argument is used with gradlew.bat. Not sure how to get this going in ant, as I'm not very good with it; I will definitely play around with it today and see if I can get any good results.
username_0: So here's what I did to get it to work:
Replaced the build_android_windows in build.xml:
```
<target name="build_android_windows" if="is_windows">
<exec executable="gradlew.bat" dir="${android.dir}">
<arg line="wrapper" />
</exec>
<exec executable="gradlew.bat" dir="${android.dir}">
<arg line="assemble" />
</exec>
</target>
```
With this:
```
<target name="build_android_windows" if="is_windows">
<exec executable="cmd" dir="${android.dir}">
<arg value="/c"/>
<arg value="gradlew.bat"/>
<arg value="wrapper" />
</exec>
<exec executable="cmd" dir="${android.dir}">
<arg value="/c"/>
<arg value="gradlew.bat"/>
<arg value="assemble" />
</exec>
</target>
```
Removed all of this below:
```
<!-- PROPERTIES -->
<copy file="${android.dir}/template/local.properties" tofile="${android.dir}/local.properties" overwrite="true" >
<filterchain>
<tokenfilter>
<replacestring from="@ANDROIDSDK@" to="${android.sdk}"/>
</tokenfilter>
</filterchain>
</copy>
```
Created local.properties file in template folder and added this line:
sdk.dir=C:\\Users\\Debug\\AppData\\Local\\Android\\sdk
And it worked!
username_1: Ah interesting, nice work!
Will try those changes out on our Windows machines, but they all seem reasonable replacements.
Status: Issue closed
|
geneontology/go-ontology | 294766823 | Title: Ligase activity GO terms whose EC xref is too broad
Question:
username_0: I suggest the following EC xrefs be deleted - they are too broad for the given GO terms:
GO:0004774: remove EC:6.2.1 xref
GO:0015645: remove EC:6.2.1 xref
GO:0016421: remove EC:6.4.1 xref
GO:0003909: remove EC:6.5.1 xref
GO:0008452: remove EC:6.5.1 xref
Answers:
username_1: These are on my spreadsheet list I am going through, but I'll just do these now to close the ticket (there are many more)
username_1: done
Status: Issue closed
username_0: Thanks Harold! (Sorry to pre-empt your spreadsheet clean-up, but wanted to report these as part of completing my D.melanogaster ligase review.) |
wieni/wmdummy_data | 535677516 | Title: Move custom Faker providers to separate packages
Question:
username_0: ## Summary
Move custom Faker providers to separate packages and add them as Composer dependencies.
### Motivation
These providers provide functionality completely separate from this module. Moving them to separate packages allows other developers to use them outside of the context of this module. We could also submit them to the _Third-Party Libraries Extending/Based On Faker_ section of the README of the official package.
Answers:
username_0: Maybe it's not worth creating separate packages for these four generators, given their size and complexity.
ant-design/ant-design | 1091119210 | Title: wheel is disabled inside virtual-list, preventing horizontal scrolling
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
Allow the tree to scroll horizontally with large data sets.
### What does the proposed API look like?
Make this spot configurable or remove it... https://github.com/react-component/virtual-list/blob/master/src/hooks/useFrameWheel.ts#L41
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
https://codesandbox.io/s/antd-reproduction-template-forked-19h38?file=/index.js
Answers:
username_0: This component has a lot of pitfalls...
1. Since `component` is supported, shouldn't `componentProps` be added as well...
2. You already wrap `onScroll` in a layer, so why not pass along `offsetTop` or `scrollTop`??
username_0: Another request: would you consider adding virtual scrolling to the collapse panel, or letting Collapse wrap virtual-list directly...
Then this would work:
```jsx
<VirtualList component={Collapse} componentProps={collapseProps}>
<Collapse.Panel></Collapse.Panel>
</VirtualList>
```
username_0: One more suggestion: virtual scrolling should be a standalone component rather than embedded directly in the tree; usage should be like what I wrote above for Collapse.
username_1: https://github.com/ant-design/ant-design/issues/33849
Is this the same issue?
username_2: Duplicate of #33849
Status: Issue closed
username_0: Shouldn't you have closed the later one and linked it to this issue??
username_2: That one has a more complete description.
username_0: Up to you; here's my hacky workaround...
```jsx
const bodyHolder = dom.getElementsByClassName(
'ant-tree-list-holder',
)[0] as HTMLDivElement;
bodyHolder.addEventListener('wheel', e => {
// horizontal scrolling gets blocked
// remove the preventDefault in virtual-list: https://github.com/react-component/virtual-list/blob/master/src/hooks/useFrameWheel.ts#L41
e.preventDefault = () => {};
});
```
username_3:
```jsx
bodyHolder.addEventListener('wheel', e => {
  // xxx
}, true)
```
Tried it; `addEventListener` also needs its third argument set to `true`, otherwise there is no guarantee that `e.preventDefault` is replaced before useFrameWheel runs.
huggingface/transformers | 1124933919 | Title: Error converting fine-tuned GPTNeoForCausalLM model to ONNX
Question:
username_0: Hi, I am trying to convert a fine-tuned GPT-Neo (125M) model to ONNX using the code below:
```
from transformers import pipeline, convert_graph_to_onnx, GPTNeoForCausalLM, GPT2Tokenizer
from pathlib import Path
import torch

model_name = "EleutherAI/gpt-neo-125M"
pipeline_name = "text-generation"

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M", bos_token='<|startoftext|>',
                                          eos_token='', pad_token='<|pad|>')
nlp = pipeline(pipeline_name, model=model_name, tokenizer=tokenizer)

with torch.no_grad():
    (
        input_names,
        output_names,
        dynamic_axes,
        tokens,
    ) = convert_graph_to_onnx.infer_shapes(nlp, "pt")
    ordered_input_names, model_args = convert_graph_to_onnx.ensure_valid_input(
        nlp.model, tokens, input_names
    )

model_name = 'gpt-neo'
predictor_path = './' + model_name
model2 = GPTNeoForCausalLM.from_pretrained(predictor_path)

text = "I feel happy and "
prompt = f'<|startoftext|>Review: {text} Sentiment: '
encodings_dict = nlp.tokenizer(prompt, truncation=True, max_length=300, padding="max_length", return_tensors="pt")

torch.onnx.export(
    model2,
    (encodings_dict['input_ids'], encodings_dict['attention_mask']),
    'model_test.onnx',
    input_names=input_names,
    output_names=output_names,
    dynamic_axes=dynamic_axes,
    do_constant_folding=True,
    use_external_data_format=True,  # Needed because of model size
    enable_onnx_checker=True,
    opset_version=13
)
```
But I get this error:
```
ValueError Traceback (most recent call last)
<ipython-input-15-024e093371a4> in <module>()
43 use_external_data_format=True, # Needed because of model size
44 enable_onnx_checker=True,
---> 45 opset_version=13
46 )
3 frames
[Truncated]
```
IndexError Traceback (most recent call last)
<ipython-input-16-89ddee8c10f8> in <module>()
43 use_external_data_format=True, # Needed because of model size
44 enable_onnx_checker=True,
---> 45 opset_version=13
46 )
15 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
545 past_key_values = tuple([None] * len(self.h))
546 else:
--> 547 past_length = past_key_values[0][0].size(-2)
548
549 device = input_ids.device if input_ids is not None else inputs_embeds.device
IndexError: dimension specified as -2 but tensor has no dimensions
```
Can someone help me please?
Answers:
username_1: cc @lewtun @michaelbenayoun |
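One thing worth checking: `GPTNeoForCausalLM.forward` takes `past_key_values` as its second positional argument, so the positional tuple passed to `torch.onnx.export` above ends up feeding `attention_mask` into that slot, which matches the `past_key_values[0][0].size(-2)` IndexError. A minimal sketch of the export call passing `attention_mask` by keyword instead (same names as in the question; treat this as a guess at the cause, not a confirmed fix):
```python
torch.onnx.export(
    model2,
    # a trailing dict in the args tuple is treated as keyword arguments,
    # so attention_mask no longer lands in the past_key_values slot
    (encodings_dict['input_ids'], {'attention_mask': encodings_dict['attention_mask']}),
    'model_test.onnx',
    input_names=input_names,
    output_names=output_names,
    dynamic_axes=dynamic_axes,
    do_constant_folding=True,
    use_external_data_format=True,
    opset_version=13,
)
```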
StefanSchroeder/Gocal | 428799990 | Title: dependencies from soniakeys repo were moved under github.com/soniakeys/meeus/v3/
Question:
username_0:
```
package github.com/soniakeys/meeus/julian: cannot find package "github.com/soniakeys/meeus/julian" in any of:
/usr/local/Cellar/go/1.9.4/libexec/src/github.com/soniakeys/meeus/julian (from $GOROOT)
/Users/dlelf/go/src/github.com/soniakeys/meeus/julian (from $GOPATH)
package github.com/soniakeys/meeus/moonphase: cannot find package "github.com/soniakeys/meeus/moonphase" in any of:
/usr/local/Cellar/go/1.9.4/libexec/src/github.com/soniakeys/meeus/moonphase (from $GOROOT)
/Users/dlelf/go/src/github.com/soniakeys/meeus/moonphase (from $GOPATH)
```
Answers:
username_1: Thanks for the comment. I am going to fix it. |
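For reference, a sketch of what the updated import paths presumably look like after the move, assuming the project is built with Go modules (the new module path per the title of this issue):
```go
// after fetching the dependency with: go get github.com/soniakeys/meeus/v3
import (
	"github.com/soniakeys/meeus/v3/julian"
	"github.com/soniakeys/meeus/v3/moonphase"
)
```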
kubernetes-monitoring/kubernetes-mixin | 568707305 | Title: custom alert rules message, rules.annotations.message doesn't get parsed with variable
Question:
username_0: Trying to customise the alert rule message, but the variable doesn't get parsed.
In config.libsonnet
```jsonnet
showMultiCluster: true,
clusterLabel: 'eks-staging',
```
In file alerts/apps_alerts.libsonnet
line 5, 6
```jsonnet
prefixedNamespaceSelector: if self.namespaceSelector != null then self.namespaceSelector + ',' else '',
clusterLabel: self.clusterLabel
```
line 22
```jsonnet
annotations: {
message: 'Cluster, %(clusterLabel)s, Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is restarting {{ printf "%.2f" $value }} times / 5 minutes.',
},
```
run make
```shell
$ make prometheus_alerts.yaml
```
Result:
```json
"message": "Cluster: %(clusterLabel), Pod {{ $labels.cluster }}/{{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is restarting {{ printf \"%.2f\" $value }} times / 5 minutes."
```
desired result
```json
"message": "Cluster: "eks-staging", Pod {{ $labels.cluster }}/{{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is restarting {{ printf \"%.2f\" $value }} times / 5 minutes."
```
Answers:
username_1: clusterLabel is the label-name that describes the cluster, not the cluster name itself. If you set the clusterLabel then you need to make sure it’s part of the alert when it goes out, for example by setting the external labels on your prometheus server.
username_0: @username_1 Thanks for the answer. In my Prometheus I have inserted the variable per cluster using the "cluster" keyword. This is in line with the Grafana dashboards. The Prometheus rules also work perfectly. What's left is that the alert rule message doesn't tell its origin.
Basically, I have federated the Prometheus metrics from each cluster to a centralised Prometheus, and now I want to use these centralised metrics to send alerts.
I want a solution where, as I increase the number of k8s clusters, this framework can scale with a single variable change.
Btw I am using prometheus-operator.
For example,
up{cluster="eks-staging"} and up{cluster="eks-production"} are valid labels in my setup.
So in other words, I can do this, right?
```json
"message": "Pod {{ $labels.cluster }}/{{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is restarting {{ printf \"%.2f\" $value }} times / 5 minutes."
```
username_0: @username_1 btw my actual question is why "%(clusterLabel)s" doesn't get parsed into a string. Many thanks.
username_2: @username_0 do you use this as a format string anywhere else? Otherwise it should work fine when you add the key mapping:
```
annotations: {
message: 'Cluster %(clusterLabel)s, [...].' % obj,
},
```
(replace `obj` with the object holding the `clusterLabel` value)
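Applied to the snippet from the issue, that would look roughly like this, assuming the alert file has the mixin's `$._config` in scope like the other rule files in this repo. Note that once the `%` format operator is applied, any literal `%` in the message (e.g. inside `printf "%.2f"`) must be doubled to `%%`:
```jsonnet
annotations: {
  message: 'Cluster %(clusterLabel)s: Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is restarting {{ printf "%%.2f" $value }} times / 5 minutes.' % $._config,
},
```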
digibyte/digibyte | 930671038 | Title: Add Bitlive.pro to exchange list
Question:
username_0: Hello
Our non-custodial exchange service provides our customers with the ability to exchange cryptocurrencies in over 90,000 exchange pairs.
Our service allows customers to exchange DigiByte to more than 250 cryptocurrencies; we provide the most favorable exchange rates and minimal exchange fees.
We would like to suggest placing a mention of our service in the exchange partners section of your website for DigiByte cryptocurrency holders.
Listing our service in your exchange partners section will give DigiByte holders the opportunity to make instant exchanges at favorable rates with minimal commissions, without registration.
The advantages of our service: non-custodial exchange, best exchange rates, round-the-clock support, exchange without registration, and a user-friendly interface.
If anything needs to be provided to place our service in your partners section, please contact us and we will provide everything you need.
Official website https://bitlive.pro/
Regards, Bitlive.pro Affiliate Department |
data2viz/data2viz | 368301561 | Title: Deploy js lib to npm
Question:
username_0: To facilitate the use of the library by client applications (demos, samples, documentation), it should be available on npm.
Answers:
username_0: To respect the structure of the data2viz project (data2viz, geojson, ...), we should probably use scoped packages: https://docs.npmjs.com/getting-started/scoped-packages
```
{
"name": "@data2viz/data2viz"
}
```
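Worth noting: scoped packages are treated as private by default on the npm registry, so the first publish of a public scoped package needs the access flag:
```
npm publish --access public
```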
username_0: Done in that commit:
https://github.com/data2viz/data2viz/commit/c5ee0e5e10f11ba86c5499fb2f2fbd8be0c416ec
Status: Issue closed
|
josdejong/jsoneditor | 126910135 | Title: Missing 'search' option in v5.1.0
Question:
username_0: I'm getting a console warning about the 'search' option being unknown.
Status: Issue closed
Answers:
username_1: Thanks for reporting this. The option still works but is wrongly displaying a warning in the console. The new options validator was missing the option `search`. Fixed now in the develop branch.
username_0: Yeah, I was about to submit a PR, but got stuck with my GitHub SSH permissions for some yet unknown reasons... :)
username_1: thanks anyway! |
iMoe037/scrapy-automotive | 212692956 | Title: Not getting any data returned?
Question:
username_0: I am not getting any data returned in my JSON file, although I have used your command "scrapy crawl automotive -o mybetternameforafile.json". It creates the file, but the file is empty.
This is my shell output:
2017-03-08 11:12:49 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: automotive)
2017-03-08 11:12:49 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'automotive.spiders', 'FEED_URI': 'mybetternameforafile.json', 'SPIDER_MODULES': ['automotive.spiders'], 'BOT_NAME': 'automotive', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}
2017-03-08 11:12:49 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-03-08 11:12:49 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-03-08 11:12:49 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-03-08 11:12:49 [scrapy.middleware] INFO: Enabled item pipelines:
['automotive.pipelines.AutomotivePipeline']
2017-03-08 11:12:49 [scrapy.core.engine] INFO: Spider opened
2017-03-08 11:12:49 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-03-08 11:12:49 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/robots.txt> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/bugatti> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/acura> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/alfa-romeo> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/bentley> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/buick> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/bmw> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/audi> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/chrysler> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/aston-martin> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/cadillac> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/dodge> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/chevrolet> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/ferrari> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/fiat> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/genesis> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/gmc> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/honda> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/infiniti> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/jaguar> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/hyundai> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/koenigsegg> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/lamborghini> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/ford> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/jeep> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/lexus> (referer: None)
2017-03-08 11:12:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/lincoln> (referer: None)
[Truncated]
2017-03-08 11:12:50 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.caranddriver.com/toyota> (referer: None)
2017-03-08 11:12:50 [scrapy.core.engine] INFO: Closing spider (finished)
2017-03-08 11:12:50 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 11473,
'downloader/request_count': 51,
'downloader/request_method_count/GET': 51,
'downloader/response_bytes': 1285842,
'downloader/response_count': 51,
'downloader/response_status_count/200': 51,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 3, 8, 10, 12, 50, 178655),
'log_count/DEBUG': 52,
'log_count/INFO': 7,
'response_received_count': 51,
'scheduler/dequeued': 50,
'scheduler/dequeued/memory': 50,
'scheduler/enqueued': 50,
'scheduler/enqueued/memory': 50,
'start_time': datetime.datetime(2017, 3, 8, 10, 12, 49, 239611)}
2017-03-08 11:12:50 [scrapy.core.engine] INFO: Spider closed (finished)
Answers:
username_1: Hey @username_0
Unfortunately caranddriver went from static HTML to using Angular Components. Since they changed the structure of their HTML, the script no longer works. The project is not being actively maintained and it would require a bit of an overhaul and some time. If you need the data, send me your email and I'll send the files over. |
econ-ark/HARK | 356296502 | Title: setAndUpdateValues method
Question:
username_0: Thank you for providing HARK. I am trying to work my way through the ConsIndShockModel.py example. I have a question about the setAndUpdateValues method, defined on line 664, under the ConsIndShockSetup class. The method takes apart the IncomeDstn list, but I am getting an error when I try to replicate the method step-by-step.
Specifically, I do the following:
1. First, I create an instance of the idiosyncratic shock consumer type:
`tester = IndShockConsumerType(**Params.init_idiosyncratic_shocks)`
2. Line 693, under setAndUpdateValues, is as follows:
`self.PermShkValsNext = IncomeDstn[1]`
3. However, if I try to run a similar command manually:
`tester.IncomeDstn[1]`
I get the following error:
```python
Traceback (most recent call last):
File "<ipython-input-60-d66a500b81d3>", line 1, in <module>
tester.IncomeDstn[1]
IndexError: list index out of range
```
4. If I run the following command, there is no error:
`tester.IncomeDstn[0][1]`
This is because IncomeDstn appears to be a list inside of a list (if that makes sense).
5. Elsewhere in the file, on line 1901, the calcBoundingValues method appears to use the "expected" indexing:
`PermShkValsNext = self.IncomeDstn[0][1]`
My question: why does the setAndUpdateValues method work? I have tried to trace all the steps that lead to that method, but I don't see anywhere where the IncomeDstn attribute is "stripped" down to its inner list before being passed to the setAndUpdateValues method. How come there is an error when I run the command on line 693 manually, but not when the entire code is run?
Answers:
username_1: Hi, welcome to HARK!
Short answer: The "missing" step where `tester.IncomeDstn[0]` in your test becomes `self.IncomeDstn` in the solver code is in HARK's built in `solve` method.
Long answer: First, note that `self` in the solver code and the thing that you're calling `tester` are *different* objects of *different* classes. Your `tester` is a representation of the agents themselves: an instance of `IndShockConsumerType`. This agent class' `solveOnePeriod` function is called `solveConsIndShock`, whose code (other than function/input definition and docstring) is a whopping 5 lines long. When called, it merely creates an instance of ConsIndShockSolver, executes a couple methods, and returns the result. As you've found, all of the actual work happens inside of methods of ConsIndShockSolver (and its parent classes).
When the `solve` method of an `AgentType` subclass instance is called, HARK loops through the periods of that agent's "cycle", feeding the correct inputs to the one period solver for each period. That is, the information in `tester.IncomeDstn` is not passed directly to `solveConsIndShock`, only `tester.IncomeDstn[t]`, where `t` is a time index that HARK is looping over.
The inputs to the "one period solver function" for each micro model in HARK are partitioned into time-invariant inputs (named in `time_inv`) and time-varying inputs (named in `time_vary`); note that `time_vary` also includes other attributes that vary over time, but are not passed to the solver. One of the elements of `time_vary` for instances of `IndShockConsumerType` is `'IncomeDstn'`, so HARK treats that attribute as a time-varying list of one period values, rather than the same thing every single period. In contrast, `time_inv` includes (inter alia) `'CRRA'`, so the entire contents of `tester.CRRA` (2.0, in the example) are passed to the solver every period.
At this point, you might be asking, "Wait, what do you mean 'cycle' of periods? It's an infinite horizon problem!" HARK is set up to handle both infinite and finite (e.g. lifecycle) horizon variations of models. The integer-valued attribute `cycles` indicates the nature of the problem for a particular instance: `cycles=0` means it's an infinite horizon problem whose cycle of periods should repeat until convergence when solving; `cycles=1` means it's a lifecycle problem whose cycle of periods runs once from beginning to end; `cycles=N` means the cycle of periods runs N times cyclically.
The solver inputs named in `time_vary` should be lists of the same length, also stored in the attribute `T_cycle` in some (all?) models. In the example you're looking at, the "cycle" is only one period long, with `cycles=0`: the agent lives out the same "kind" of period over and over again until they die. Other included examples show how to set up simple lifecycle models, as well as infinite horizon problems with (say) a four period "season" cycle.
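To make the partition concrete, here is a stylized sketch of what the solve loop does with those two lists each period (simplified pseudo-HARK, not the actual library code; the real loop also skips time-varying attributes that are not solver inputs):
```python
# assembling the inputs for the period-t solver
solve_dict = {}
for name in agent.time_inv:
    solve_dict[name] = getattr(agent, name)      # whole attribute, same every period
for name in agent.time_vary:
    solve_dict[name] = getattr(agent, name)[t]   # element t of a T_cycle-long list
solution_t = agent.solveOnePeriod(solution_next=solution_next, **solve_dict)
```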
Let me know if this explanation helped, or if you have other questions.
username_0: Great, I totally follow what you are saying. If this weren't an infinite horizon model, tester.IncomeDstn might have a different distribution for each period, t, but in my case, there is only one period needed, hence the "list inside of a list."
So, I suppose, this line is grabbing the one income distribution:
```python
solve_dict[name] = eval('agent.' + name + '[t]')
```
Thank you for the prompt response!
username_1: That's exactly it. And that line really ought to be rewritten as:
`solve_dict[name] = getattr(agent,name)[t]`
I wrote the core HARK code before I was very familiar with Python, and used bad style in a few (read: many) cases.
username_1: Closing issue, might apply some tag about "issues encountered by newcomers" or "learning HARK" so discussions like this can be easily recovered.
Status: Issue closed
|
PhileCMS/phileInlineImage | 40165463 | Title: assign a version tag to releases
Question:
username_0: without version tags:
- composer throws an error on the suggested install method (`php composer.phar require phile/inline-image:*`) because there's no stable release
- using dev-master is not a good solution for automated deployment (a supposedly content-only update may deliver sudden "surprises")
(also noticed that on the phile/rss-feed plugin)
Answers:
username_1: +1
That's not even a code change. Just `git tag 1.0.0 && git push --tags` (and make sure Packagist updates). pinging @username_2 @NeoBlack
username_2: Done. https://github.com/PhileCMS/phileInlineImage/releases/tag/1.0.0
Status: Issue closed
|
jsvine/pdfplumber | 261480582 | Title: Has anyone created a pdf out of the extracted results and checked with the original?
Question:
username_0: I was looking for a way to recreate the original document from the character data that I have extracted using this amazing tool. Has anyone tried to do this before? I just wanted to see, or get any recommendations on which packages to use for it.
Status: Issue closed |
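Since pdfplumber exposes per-character positions, a rough reconstruction is possible with something like reportlab. Below is a sketch, not a faithful round-trip: extracted font names are usually not registered with reportlab (hence the Helvetica fallback), and graphics, images, and exact kerning are lost.
```python
import pdfplumber
from reportlab.pdfgen import canvas

with pdfplumber.open("original.pdf") as pdf:
    page = pdf.pages[0]
    c = canvas.Canvas("reconstructed.pdf",
                      pagesize=(float(page.width), float(page.height)))
    for ch in page.chars:
        try:
            c.setFont(ch["fontname"], float(ch["size"]))
        except KeyError:
            c.setFont("Helvetica", float(ch["size"]))  # extracted font not registered
        # pdfplumber's y0 is measured from the page bottom, matching reportlab's origin
        c.drawString(float(ch["x0"]), float(ch["y0"]), ch["text"])
    c.save()
```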
react-ui-org/react-ui | 817661042 | Title: Document all accessibility features
Question:
username_0: On top of the existing docs, also document:
- AT friendliness (test first?)
- user font size, UI scalability
- color contrast (does it actually work?)
- prefer reduced motion
- dark mode ready (thanks to theme) |
apache/camel-kafka-connector-examples | 1166035644 | Title: Split Examples into 0.11.x and 1.0.x
Question:
username_0: Since we have two LTS release trains now, we should have two folders supporting the two different releases:
- 0.11.x with old approach
- 1.0.x with kamelet approach (where we have Kamelets for connector) |
cruise-automation/k-rail | 514330204 | Title: webhook for enforced violations
Question:
username_0: Add a configurable webhook to call when violations are enforced.
It should have a configurable endpoint, method, and body that can be templated via the [go template format](https://golang.org/pkg/text/template/) to support injection of violation parameters.
This would be useful for notifying when a policy has been enforced to a Slack channel, for example. In our experience enforced violations are rare when configured correctly, but we generally want to know about them when they happen. |
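Hypothetically, the configuration could look something like this (all field names below are invented for illustration, not an existing k-rail schema; the template fields stand in for whatever violation parameters get exposed):
```yaml
violation_webhook:
  endpoint: https://hooks.slack.com/services/T000/B000/XXXX
  method: POST
  body_template: |
    {"text": "k-rail enforced {{ .PolicyName }} on {{ .ResourceName }} in {{ .Namespace }}"}
```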
HEPData/hepdata_lib | 861518823 | Title: Check that uncertainties are not all zero
Question:
username_0: In February 2020, I modified the [hepdata-validator](https://github.com/HEPData/hepdata-validator) code (see HEPData/hepdata-validator#7) to invalidate bins where all uncertainties are zero, in response to [problems](https://gitlab.com/hepcedar/rivet/-/issues/69) raised by the Rivet developers in fitting applications. I've just added a paragraph of explanation to the [Uncertainties](https://hepdata-submission.readthedocs.io/en/latest/data_yaml.html#uncertainties) section of the submission documentation. A similar check should be made in the `hepdata_lib` code, perhaps before the [lines](https://github.com/HEPData/hepdata_lib/blob/f7f85a9b8b86ef86f4b04fd373a7bb6cd530504b/hepdata_lib/__init__.py#L176-L193) constructing the `errors` key. If all uncertainties are zero, the errors key could be omitted from the dictionary or set to an empty list, and a warning message should be printed out. This issue was prompted by a [question](https://hepdata-forum.cern.ch/t/uncertainties-should-not-all-be-zero-error/22/3?u=graemewatt) in the [HEPData Forum](https://hepdata-forum.cern.ch).
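A rough sketch of the kind of guard meant here (attribute names mimic hepdata_lib's Variable/Uncertainty objects, but treat the details as illustrative):
```python
import warnings

def errors_for_bin(uncertainties, i):
    """Build the 'errors' list for bin i, or return None if every uncertainty is zero."""
    def is_zero(v):
        # symmetric errors are numbers; asymmetric ones are (minus, plus) tuples
        return all(x == 0 for x in v) if isinstance(v, tuple) else v == 0
    if all(is_zero(unc.values[i]) for unc in uncertainties):
        warnings.warn("All uncertainties are zero for this bin; omitting the 'errors' key.")
        return None
    return [...]  # placeholder: construct the errors list as before
```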
Status: Issue closed |
MicrosoftDocs/msteams-docs | 1068073152 | Title: Inconsistent handling of newlines in markdown in adaptive cards
Question:
username_0: According to the documentation, \n\n should be used when adding a newline in markdown text in a TextBlock in an adaptive card.
Example card:
```
{
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"type": "AdaptiveCard",
"version": "1.2",
"body": [
{
"type": "TextBlock",
"text": "<at>User</a> Your package will arrive on {{DATE(2017-02-14T06:00:00Z, SHORT)}} at {{TIME(2017-02-14T06:00:00Z)}}",
"wrap": true
},
{
"type": "TextBlock",
"text": "**bold**\n\n_cursive_",
"wrap": true
}
],
"msteams": {
"entities": userAboveMentioned,
"width": "Full"
}
}
```
The bot sends this card with a mention to a **group chat** and the mentioned User opens client:
Works fine on desktop and web, \n\n = 1 newline
Android: \n\n = 2 newlines
iOS: \n\n = 0 newlines, needs \r\n and the user mention completely breaks rendering the card (date function and markdown shown in raw text) (see https://github.com/MicrosoftDocs/msteams-docs/issues/4253)
The bot sends this card with a mention to a **channel** (creates a new conversation using adapter.createConversation):
Desktop, web and Android: \n\n = 2 newlines
iOS: \n\n = 0 newlines, needs \r\n and the user mention completely breaks rendering the card (date function and markdown shown in raw text) (see https://github.com/MicrosoftDocs/msteams-docs/issues/4253)
Solution: Please make \n\n work the same everywhere
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 001a12ea-30fa-d626-18a9-02a518bfb37f
* Version Independent ID: e5f5f69e-66e9-d1b2-e19a-485d0ffcd6fa
* Content: [Text formatting in cards - Teams](https://docs.microsoft.com/en-us/microsoftteams/platform/task-modules-and-cards/cards/cards-format?tabs=adaptive-md%2Cconnector-html)
* Content Source: [msteams-platform/task-modules-and-cards/cards/cards-format.md](https://github.com/MicrosoftDocs/msteams-docs/blob/master/msteams-platform/task-modules-and-cards/cards/cards-format.md)
* Product: **msteams**
* GitHub Login: @surbhigupta
* Microsoft Alias: **lajanuar**
Answers:
username_1: @username_0 - We are looking into this; I will get back to you soon.
username_2: Having the same issue. So far, `\r\r` will produce consistent line break across platform (iOS/Android/PC/Web), you can use that as a workaround.
username_3: @username_0 - We were able to repro this issue. We have raised a bug for the same.
username_0: @username_3 It works better now on desktop and web, and iOS has also improved slightly, but Android still has the issue of \n\n in markdown TextBlocks causing too many newlines. It would be great if this could be fixed.
hexpm/hex | 148856491 | Title: Improve error message for transient registry update failure
Question:
username_0: When running `mix deps.get` I got a registry update failure but all deps were fetched.
```sh
✪ mdg
Registry update failed (http_error)
{:failed_connect, [{:to_address, {'s3.hex.pm.global.prod.fastly.net', 80}}, {:inet, [:inet], :nxdomain}]}
Running dependency resolution
Dependency resolution completed
cowboy: 1.0.4
cowlib: 1.0.2
plug: 1.1.3
ranch: 1.2.1
* Getting cowboy (Hex package)
Checking package (http://s3.hex.pm.global.prod.fastly.net/tarballs/cowboy-1.0.4.tar)
Using locally cached package
* Getting plug (Hex package)
Checking package (http://s3.hex.pm.global.prod.fastly.net/tarballs/plug-1.1.3.tar)
Fetched package
* Getting cowlib (Hex package)
Checking package (http://s3.hex.pm.global.prod.fastly.net/tarballs/cowlib-1.0.2.tar)
Using locally cached package
* Getting ranch (Hex package)
Checking package (http://s3.hex.pm.global.prod.fastly.net/tarballs/ranch-1.2.1.tar)
Using locally cached package
```
I assumed this must be a transient error; however, I wasn't sure why it might occur or what I could do about it.
One way of increasing the amount of information about this error might be to handle an initial failure by retrying immediately and only reporting an error after 2 failures.
Another strategy might be: if there is still a failure after all deps have been fetched successfully (indicating that there is clearly no general network error), then retry a third time, and on 3 successive failures report that there might be:
- a registry server error (and here is the current status of the registry and when we'd expect it back up)
and if it keeps happening,
- someone might be preventing you from receiving updates and exploiting your use of old, buggy software
Answers:
username_1: This is unlikely to help since we fetch packages and the registry from the same location.
username_1: We can definitely add retries but I think we should keep the error message as it is today. The reason something failed to fetch could be anything and I think that starting to suggest reasons will just confuse users.
username_1: We should also add retries for fetching package tarballs. Even more important now with the httpc proxy issue https://github.com/hexpm/hex/issues/227.
Status: Issue closed
|
jlippold/tweakCompatible | 405970053 | Title: `MING - LS Widget` working on iOS 11.4.1
Question:
username_0: ```
{
"packageId": "com.mshl.ming",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.mshl.ming",
"deviceId": "iPad7,5",
"url": "http://cydia.saurik.com/package/com.mshl.ming/",
"iOSVersion": "11.4.1",
"packageVersionIndexed": true,
"packageName": "MING - LS Widget",
"category": "Widgets",
"repository": "Packix",
"name": "MING - LS Widget",
"installed": "1.3.1",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.mshl.ming",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "lockscreen widget",
"latest": "1.3.1",
"author": "Meshal (@thatMshl)",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
pelias/openaddresses | 136153875 | Title: Configure admin lookup and deduplication via pelias.config
Question:
username_0: Unlike most of our other importers, this importer _only_ allows configuration of admin lookup and deduplication via command line flags. This makes it difficult to point to a single place where configuration changes can be made, whether in our production/dev builds, in the vagrant image, or even on a local dev setup. It also makes our Chef configuration more complicated, as the chef recipes have to know about our openaddresses configuration, rather than just starting the importer and letting the importer worry about configuration.
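Hypothetically, the result could be a single block in pelias.json along these lines (key names invented for illustration; the actual schema would be whatever pelias-config settles on):
```json
{
  "imports": {
    "openaddresses": {
      "datapath": "/data/openaddresses",
      "adminLookup": true,
      "deduplicate": true
    }
  }
}
```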
Answers:
username_1: +1000
Status: Issue closed
|
digitalmethodsinitiative/dmi-tcat | 162449693 | Title: domain/analysis/ not responding
Question:
username_0: Hi there,
I have been having an issue these last days when I try to log in to domain/analysis/. It is just not responding; it does not give an error or anything, just a white page ("waiting for server to respond", according to Chrome). In the tab, instead of "DMI Twitter Capturing and Analysis Toolset (DMI-TCAT)", it shows "reading-list://".
/capture/ works well: it loads and collects data.
I don't know if that might be because of the size of the database (more than 10M tweets), the server being extremely busy (I am hitting constant rate limits, so a huge quantity of data is being collected), or a lack of RAM (the server has 3.75 GB). However, checking the monitoring tools that Amazon EC2 provides, the server does not seem extremely overloaded.
Many thanks for the help!
Javier
Answers:
username_1: Hi Javier,
could you visit the analysis page again and then send the last 100 lines of your php error log?
Thanks,
Erik
username_0: Hi Erik,
Thanks for following up. My belief is that I ran out of memory. My server on Amazon has 3.75 GB of RAM, and the dataset contains around 15M tweets. If I open ip/analysis now, it works properly because it opens only one day (yesterday) and the number of tweets in the last days has been much lower. However, if I change the sample to the entire dataset, or to a period where the number of tweets was very high, then it stops responding.
root@ip-172-31-16-34:/home/admin# sudo cat /var/log/apache2/error.log
[Thu Jul 07 06:25:05.697672 2016] [mpm_prefork:notice] [pid 424] AH00163: Apache/2.4.10 (Debian) configured -- resuming normal operations
[Thu Jul 07 06:25:05.697694 2016] [core:notice] [pid 424] AH00094: Command line: '/usr/sbin/apache2'
[Thu Jul 07 10:05:38.253280 2016] [:error] [pid 25437] [client 172.16.31.10:65005] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:05:52.002220 2016] [:error] [pid 25437] [client 172.16.31.10:65005] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:06:05.334251 2016] [:error] [pid 25437] [client 172.16.31.10:65005] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:06:14.454135 2016] [:error] [pid 25437] [client 172.16.31.10:65005] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:06:32.610264 2016] [:error] [pid 25437] [client 172.16.31.10:65005] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:07:50.862966 2016] [:error] [pid 25444] [client 172.16.31.10:65021] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:11:03.091827 2016] [:error] [pid 25444] [client 172.16.31.10:65021] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:15:17.914879 2016] [:error] [pid 25444] [client 172.16.31.10:65021] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
[Thu Jul 07 10:18:03.567680 2016] [:error] [pid 25444] [client 172.16.31.10:65021] PHP Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in /var/www/dmi-tcat/analysis/common/functions.php on line 849
username_1: Hi @username_0,
The only information that is loaded/calculated on the first page of analysis is the number of tweets per data set, and the number of tweets per 'interval' given a specific data set (and possibly queries within that data set). It seems unlikely that PHP will run out of memory, unless you have specified that PHP can only use a limited amount of RAM. Rather, I think that you are running into a timeout, either from PHP, or from your browser.
Above you have pasted the output of one of TCAT's error logs. Try to locate the error log of PHP itself and see if that indicates whether PHP has not been assigned enough RAM (through `ini_set('memory_limit', VALUE);`) or whether PHP runs into its maximum execution time (set via `ini_set('max_execution_time', SECONDS);`). Both can be increased in tcat/config.php
Hope this helps,
Erik
username_2: Hi @username_0
Any progress on this?
username_0: Hi there. Yes. Sorry.
Problem solved. I just changed the instance to a bigger one (with 30 GB of RAM) and now it processes everything. It takes a bit of time, but it works.
Status: Issue closed
|
nextflow-io/nextflow | 192805346 | Title: Allow multiple `publishDir` directive for a single process
Question:
username_0: It can sometimes be useful to publish output files into multiple directories.
A use case could be if I have a process that produces two files and I would like to publish them in two different folders based on their names. E.g.:
```groovy
process test {
publishDir 'foos', pattern: 'foo.*'
publishDir 'bars', pattern: 'bar.*'
output:
file('*.txt')
'''
echo -e "foo\nbar" | awk '{f=$1".txt"; print ("hello", $1) > f}'
'''
}
```
Answers:
username_1: I'd love to see that too ;-)
username_2: +1
username_1: It works in fact, if you do something like that:
```
publishDir '.', saveAs: { it.startsWith('foo.') ? "foos/$it" : "bars/$it" }
```
username_2: @username_1 Thanks but my use-case involves outputting the same file in multiple folders
username_3: +1
username_3: My work-around for now to publish in two different directories: running the script twice with two different options. This way:
publishDir path: out_option == 'id' ? "path1" : "path2"
username_1: I think that you could use the `copyTo` method to copy your file in a second directory without running your script twice.
https://www.nextflow.io/docs/latest/script.html#copy-files
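For reference, a rough sketch of that idea (process, channel, and directory names are made up; `copyTo` is available on Nextflow file objects per the docs above, and the target directory is assumed to exist):
```groovy
process makeReport {
    publishDir 'primary_dir', mode: 'copy'

    output:
    file('report.txt') into reports

    """
    echo done > report.txt
    """
}

// mirror every emitted file into a second directory
reports.subscribe { f -> f.copyTo('secondary_dir') }
```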
username_3: Thx Max. I could not make it work with the copyTo method, but it might do the trick indeed.
username_4: I think it's time to give a try to this.
username_4: Available in version `0.29.0-RC1`.
Status: Issue closed
username_5: I have a similar issue but a different use case; I want to publish the same files into two different `publishDir`'s. Is this supported? For example, something like this:
```
process tsv_2_sqlite {
// convert TSV files into SQLite databases
tag "${caller}-${sampleID}"
publishDir "output_per_sample/${sampleID}/${caller}", mode: 'copy', overwrite: true
publishDir "output/tsv_2_sqlite"
input:
set val(caller), val(sampleID), file(sample_tsv) from samples_updated_tsvs
output:
set val(caller), val(sampleID), file("${sampleID}.sqlite") into samples_sqlite
script:
"""
table2sqlite.py -i "${sample_tsv}" -o "${sampleID}.sqlite" -t variants -d '\t'
"""
}
```
Essentially, I want to have the same set of files but published under two different directory structures. One for a 'per-sample' output directory structure, and another for a 'per-process' output directory structure.
Not sure how to see what implementation was made in version `0.29.0-RC1`. |
PAK90/mtg-hunter | 243523745 | Title: QueryString not working
Question:
username_0:
```jsx
<SearchBox
  translations={{"searchbox.placeholder": "Search comments"}}
  queryOptions={{"minimum_should_match": this.state.matchPercent}}
  prefixQueryFields={["comment"]}
  autofocus={true}
  searchOnChange={true}
  searchThrottleTime={1000}
  queryFields={["comment^1"]}
  queryBuilder={QueryString} {/* the problematic prop */}
/>
```
This does not seem to work. Can you please help? I keep getting the error message below, using Elasticsearch 5.4:
```json
{
  "error": {
    "root_cause": [
      {
        "type": "parsing_exception",
        "reason": "no [query] registered for [queryString]",
        "line": 1,
        "col": 44
      }
    ],
    "type": "parsing_exception",
    "reason": "no [query] registered for [queryString]",
    "line": 1,
    "col": 44
  },
  "status": 400
}
```
Answers:
username_1: Site is still using older version of Elasticsearch, the query structure has changed for v5, hence why removing the queryBuilder fixes it.
username_0: Can you please let me know what needs to be change to be compatible with ES 5.4 ? really need it working for my internal demo.
username_1: Oh, I missed the fact that you're using Searchkit's Searchbox component. That's been updated to v2.x to work with ES5.4, so as long as you update your Searchkit version that should be fine.
username_0: Yes, I am using searchkit "^2.1.1" and trying to integrate QueryString, which is not working.
I wanted to do OR, AND, NOT, etc., so I was looking at your code, and that is what throws the above exception:
```
export function QueryString(query, options:QueryStringOptions={}){
if(!query){
return;
}
// Add escapes to the query's +, - and / chars.
query = query.replace(/\//g, "\\/").replace(/\+/g, "\\+").replace(/\-/g, "\\-");
return {
"queryString":assign({"fields":["name"],"query":query,"defaultOperator":"AND"})
}
}
```
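For what it's worth, the error above ("no [query] registered for [queryString]") is Elasticsearch 5.x rejecting the camelCase key: the ES query DSL expects `query_string`. A sketch of the same builder emitting ES5-style keys (untested against searchkit 2.x internals, so treat it as a starting point):
```js
export function QueryString(query, options = {}) {
  if (!query) {
    return;
  }
  // Add escapes to the query's /, + and - chars.
  query = query.replace(/\//g, "\\/").replace(/\+/g, "\\+").replace(/\-/g, "\\-");
  return {
    query_string: Object.assign(
      { fields: ["name"], default_operator: "AND" },
      options,
      { query }
    )
  };
}
```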
username_1: I'm honestly not sure, since I never actually got around to re-working my code for ES5.x. Try asking in the [searchkit gitter channel](https://gitter.im/searchkit/searchkit) for some help.
Out of curiosity, what are you trying to construct for your internal demo?
username_0: A demo for our time-series data, which is stored in ES. I like the sidebar on your site http://mtg-hunter.com with the "AND-OR" feature.
username_1: Ohh I see. So the reason those work for me is that I modified some Searchkit code to let those go through. I modified the RefinementList component as well as some other ones that it depends on; this is all visible in the /src folder.
username_0: Yes i saw that, thanks :) |
tensorflow/tensorflow | 534562100 | Title: Memory leak with tf.shuffle, doesn't release buffer memory
Question:
username_0: **System information**
- OS Platform and Distribution Linux Ubuntu 16.04
- Python: 2.7.17 |Anaconda, Inc.| (default, Oct 21 2019, 19:04:46) [GCC 7.3.0]
- Tensorflow: 1.12.0
- Numpy: 1.16.5
- GPU: GeForce RTX 2080 Ti
- CUDA: 9.2
**Describe the current behavior**
CPU memory gradually increases after each epoch until the program restarts. I suspect that dataset.shuffle doesn't release the buffer memory. Tested with TF 1.15; same situation.
**Code to reproduce the issue**
```
import numpy as np
import tensorflow as tf

class ASRDataGenerator(object):
    def __init__(self, num):
        self.num = num

    def __call__(self):
        for i in range(self.num):
            for j in range(106):
                yield 'a', 'b', np.random.randn(100, 120)

class TFASRDataSet(object):
    def __init__(self, num, batch_size):
        self.num = num
        self.batch_size = batch_size
        self.asrDataGenerator = ASRDataGenerator(num)

    def setDataSetIterator(self):
        dataset = tf.data.Dataset.from_generator(self.asrDataGenerator, (tf.string, tf.string, tf.float32))
        dataset = dataset.shuffle(30000)
        dataset = dataset.map(lambda s1, s2, feat: [s1, s2, feat])
        dataset = dataset.batch(self.batch_size, drop_remainder=True)
        self.iterator = dataset.make_initializable_iterator()

test_tfASRDataSet = TFASRDataSet(248, 192)
test_tfASRDataSet.setDataSetIterator()
test_iter = test_tfASRDataSet.iterator
test_next = test_iter.get_next()

run_config = tf.ConfigProto()
run_config.gpu_options.allow_growth = True
run_config.allow_soft_placement = True

with tf.Session(config=run_config) as sess:
    for i in range(100):
        sess.run(test_iter.initializer)
        while True:
            try:
                loss_list = sess.run([test_next])
                print(len(loss_list[0]))
            except tf.errors.OutOfRangeError:
                print("train epoch %d finish" % (i+1))
                break
```
Answers:
username_1: I could replicate the issue with Tf 1.15.
Please take a look at the [gist](https://colab.sandbox.google.com/gist/username_1/9f8ff40136953326a476bfadc15f1e46/untitled297.ipynb). Thanks!
username_2: @username_1 the user program does not provide evidence of the issue, so I am not sure what you mean by "replicating" the issue.
When I modify the program with tracking of memory use:
```
...
def view_used_mem():
    used_mem = psutil.virtual_memory().used
    print("used memory: {} Mb".format(used_mem / 1024 / 1024))

def main(argv):
    del argv
    test_tfASRDataSet = TFASRDataSet(248, 192)
    test_tfASRDataSet.setDataSetIterator()
    test_iter = test_tfASRDataSet.iterator
    test_next = test_iter.get_next()

    run_config = tf.ConfigProto()
    run_config.gpu_options.allow_growth = True
    run_config.allow_soft_placement = True

    with tf.Session(config=run_config) as sess:
        for i in range(100):
            sess.run(test_iter.initializer)
            while True:
                try:
                    loss_list = sess.run([test_next])
                except tf.errors.OutOfRangeError:
                    print('train epoch %d finish' % (i + 1))
                    view_used_mem()
                    break
...
```
and run it using internal TF 1.15 build, the memory is not increasing between epochs.
Furthermore, I have confirmed that the shuffle buffer is properly disposed of between epochs by enabling logging of tf.data iterator constructors / destructors:
```
...
I0102 13:32:09.063783 34490 dataset.h:887] Iterator::Model constructor
I0102 13:32:09.063821 34490 dataset.h:887] Iterator::Model::MapAndBatch constructor
I0102 13:32:09.063834 34490 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle constructor
I0102 13:32:09.064474 34490 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle destructor
I0102 13:32:09.064498 34490 dataset.h:891] Iterator::Model::MapAndBatch destructor
I0102 13:32:09.064547 34490 dataset.h:891] Iterator::Model destructor
I0102 13:32:09.100376 34570 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap constructor
I0102 13:32:09.100427 34570 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap::FromTensor constructor
I0102 13:32:09.101287 34570 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap[0]::Generator constructor
I0102 13:32:19.100455 34570 shuffle_dataset_op.cc:185] Filling up shuffle buffer (this may take a while): 10188 of 30000
I0102 13:32:29.100570 34570 shuffle_dataset_op.cc:185] Filling up shuffle buffer (this may take a while): 20548 of 30000
I0102 13:32:34.942694 34570 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap[0]::Generator destructor
I0102 13:32:34.944426 34570 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap::FromTensor destructor
I0102 13:32:34.944455 34570 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap constructor
I0102 13:32:34.944461 34570 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap destructor
I0102 13:32:34.944477 34570 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap::FromTensor constructor
[Truncated]
I0102 13:33:02.945827 34490 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle constructor
I0102 13:33:02.946734 34490 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle destructor
I0102 13:33:02.946763 34490 dataset.h:891] Iterator::Model::MapAndBatch destructor
I0102 13:33:02.946774 34490 dataset.h:891] Iterator::Model destructor
I0102 13:33:02.973794 34643 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap constructor
I0102 13:33:02.973845 34643 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap::FromTensor constructor
I0102 13:33:02.974667 34643 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap[0]::Generator constructor
I0102 13:33:12.974602 34643 shuffle_dataset_op.cc:185] Filling up shuffle buffer (this may take a while): 10238 of 30000
I0102 13:33:22.973838 34643 shuffle_dataset_op.cc:185] Filling up shuffle buffer (this may take a while): 20354 of 30000
I0102 13:33:28.369214 34643 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap[0]::Generator destructor
I0102 13:33:28.369300 34643 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap::FromTensor destructor
I0102 13:33:28.369320 34643 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap constructor
I0102 13:33:28.369330 34643 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap destructor
I0102 13:33:28.369357 34643 dataset.h:887] Iterator::Model::MapAndBatch::Shuffle::FlatMap::FromTensor constructor
I0102 13:33:28.369455 34643 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap::FromTensor destructor
I0102 13:33:28.369468 34643 dataset.h:891] Iterator::Model::MapAndBatch::Shuffle::FlatMap destructor
I0102 13:33:28.369478 34643 shuffle_dataset_op.cc:234] Shuffle buffer filled.
train epoch 4 finish
used memory: 17145.87109375 Mb
```
username_3: The same issue also occurs with TensorFlow 2.0. After each iteration over the dataset, the buffer gets filled again and the old buffer is not released.
username_4: Can confirm the same issue exists with Tensorflow 2.3
username_2: @username_3 and @username_4 please create a separate issue with instructions on how to reproduce (and evidence that leads you to believe that the shuffle buffer is not released). You can for instance run your workload with `--vmodule=dataset=2` to check whether the shuffle dataset iterator (which owns the buffer) is destructed at the end of each epoch. As per my response from January 2nd, I am not able to reproduce the issue with the instructions posted on this issue.
username_5: @username_2 could you please provide information about how to use this flag or where we can find information about it? Thanks
username_4: @username_2 we would like to reproduce the issue using `vmodule=dataset=2` but cannot figure out how to provide that flag in TensorFlow 2.0. Please advise.
username_2: Based on [this](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/platform/default/logging.h), I believe that executing your program with the environment variable `TF_CPP_VMODULE` set to `dataset=2` should have the desired effect.
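For example, when launching the program from a shell (the script name is a placeholder):
```
TF_CPP_VMODULE=dataset=2 python your_script.py
```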
username_5: @username_2 here's a google collab where our (@username_4 and I) issue can be confirmed by looking at the runtime logs. No 'Shuffle buffer filled message' appears and there are some constructors that seem to not be destroyed. https://colab.research.google.com/drive/1cjRs3FjEXtJBnhELkbsf_ueiS2U3pZ_C?usp=sharing
username_2: @username_5 your colab runs a single epoch of training, which means the shuffle transformation (and its underlying buffer) will be created once at the beginning and then destroyed (so there should certainly not be any leaking of shuffle buffer) ... when I ran your repro using a standalone Python binary with logging enabled, I receive the following log output:
```
I0903 16:01:32.081323 1038857 dataset.cc:495] Iterator::Model constructor
I0903 16:01:32.081376 1038857 dataset.cc:495] Iterator::Model::Prefetch constructor
I0903 16:01:32.081398 1038857 dataset.cc:495] Iterator::Model::Prefetch::BatchV2 constructor
I0903 16:01:32.081406 1038857 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle constructor
I0903 16:01:32.100963 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap constructor
I0903 16:01:32.100995 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2 constructor
I0903 16:01:32.101010 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle constructor
I0903 16:01:32.301694 1039272 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4 constructor
I0903 16:01:32.301746 1039272 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4::TensorSlice constructor
I0903 16:01:32.402904 1039313 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip constructor
I0903 16:01:32.402943 1039313 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[0]::FlatMap constructor
I0903 16:01:32.402955 1039313 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[0]::FlatMap::TensorSlice constructor
I0903 16:01:32.406277 1039313 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[1]::TensorSlice constructor
I0903 16:01:32.406596 1039316 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[0]::FlatMap[0]::TFRecord constructor
I0903 16:01:32.419646 1039316 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[0]::FlatMap[0]::TFRecord destructor
I0903 16:01:32.419682 1039316 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[0]::FlatMap::TensorSlice destructor
I0903 16:01:32.419698 1039316 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[1]::TensorSlice destructor
I0903 16:01:32.419709 1039316 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip[0]::FlatMap destructor
I0903 16:01:32.419722 1039316 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4[0]::Zip destructor
I0903 16:01:32.422107 1039272 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4 constructor
I0903 16:01:32.422194 1039272 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4::TensorSlice destructor
I0903 16:01:32.422208 1039272 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4 destructor
I0903 16:01:32.422235 1039272 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4::TensorSlice constructor
I0903 16:01:32.422360 1039272 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4::TensorSlice destructor
I0903 16:01:32.422374 1039272 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle::ParallelInterleaveV4 destructor
I0903 16:01:32.623497 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap[0]::TensorSlice constructor
I0903 16:01:32.915468 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap[0]::TensorSlice destructor
I0903 16:01:32.956434 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap[1]::TensorSlice constructor
I0903 16:01:33.213744 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap[1]::TensorSlice destructor
I0903 16:01:33.216261 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle destructor
I0903 16:01:33.216274 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2 destructor
I0903 16:01:33.216282 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap constructor
I0903 16:01:33.216284 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap destructor
I0903 16:01:33.216294 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2 constructor
I0903 16:01:33.216301 1039269 dataset.cc:495] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle constructor
I0903 16:01:33.384797 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2::Shuffle destructor
I0903 16:01:33.384828 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap::ParallelMapV2 destructor
I0903 16:01:33.384833 1039269 dataset.cc:499] Iterator::Model::Prefetch::BatchV2::Shuffle::FlatMap destructor
```
which indicates that every `shuffle` constructor is paired with destructor.
username_4: Hi, so the issue happens over multiple epochs / passes over the dataset. But also there are multiple shuffles happening here (one over the record names, and one over the flattened batches). Looking at the logs, I count 3 shuffle constructors and only 2 shuffle destructors. I'm not sure why there are 3 shuffle constructors in the first place, but it appears that one of them is not being destructed.
username_4: Oops, my mistake there I just noticed the third shuffle destructor. Disregard that last comment. Anyway, can you try running this for more than one epoch? I have a feeling that the issue only happens some of the time, but it will always happen at least once over a few epochs
username_2: There are multiple shuffles per epoch because your input pipeline contains multiple shuffles -- one before `flat_map` and one after `flat_map`. The one before `flat_map` will result in a shuffle buffer being created and destroyed for each input element of `flat_map` (as opposed to once per epoch).
I ran your example for 100 epochs using my standalone binary repro and do not see memory increase. In other words, I cannot reproduce your issue.
Have you confirmed that the issue goes away when you remove the shuffle buffer? If so, can you extend your colab to run the epoch within a loop and print the amount of memory allocated at the end of each loop?
username_6: Hi There,
We are checking to see if you still need help on this, as you are using an older version of TensorFlow which is officially considered end of life. We recommend that you upgrade to the latest 2.x version and let us know if the issue still persists in newer versions. Please open a new issue for any help you need against 2.x, and we will get you the right help.
This issue will be closed automatically 7 days from now. If you still need help with this issue, please provide us with more information.
Status: Issue closed
username_2: I have recently investigated the memory growth observed for OSS version of TensorFlow when `shuffle` is used. The conclusion of my investigation is that the memory growth is related to poor performance of memory allocator (TensorFlow uses system malloc by default). In my experiments, switching to use TCMalloc (details below) resulted in constant memory usage.
For the evaluation, I used the following simple input pipeline:
```
import tensorflow as tf
import psutil

dataset = tf.data.Dataset.range(int(1e7))
iterator = dataset.shuffle(int(1e7)).batch(int(1e6))
for _ in iterator:
    used_mem = psutil.virtual_memory().used
    print("used memory: {} Mb".format(used_mem / 1024 / 1024))
```
When executed on workstation, it produces the following output:
```
$ python example.py
used memory: 19853.52734375 Mb
used memory: 19905.6484375 Mb
used memory: 19958.109375 Mb
used memory: 20014.796875 Mb
used memory: 20064.8359375 Mb
used memory: 20061.375 Mb
used memory: 20117.23828125 Mb
used memory: 20172.8515625 Mb
used memory: 20228.18359375 Mb
used memory: 20278.62890625 Mb
```
I then installed tcmalloc using `sudo apt-get install libtcmalloc-minimal4` and used it for the same program, as follows:
```
LD_PRELOAD=/path/to/libtcmalloc_minimal.so.4 python example.py
used memory: 19291.0859375 Mb
used memory: 19307.90234375 Mb
used memory: 19315.859375 Mb
used memory: 19315.859375 Mb
used memory: 19315.875 Mb
used memory: 19317.8671875 Mb
used memory: 19311.14453125 Mb
used memory: 19317.3515625 Mb
used memory: 19317.34765625 Mb
used memory: 19316.96484375 Mb
```
Not only did the gradual memory growth disappear, but the program also ran 2x faster.
username_7: Did the TF 2.5 release solve this issue for anyone?
username_8: I am on 2.7 and still have the problem. |
RandomWireTechnologies/ctrl-o-models | 51645547 | Title: Windowless Cover Mount is too tight
Question:
username_0: Need to add more tolerance on the dimensions of the interface between the wall mount and the cover to allow the cover to attach when the wall mount is attached to a flat surface.
Answers:
username_0: Fixed, current version at commit 8b26da6aee7367e58d3160cb1cfcfe894323eb3d works. It is a bit tight, but definitely mountable.
Status: Issue closed
|
godotengine/godot | 486475774 | Title: Godot crashes in ubuntu studio 18.04 (i586 architecture)
Question:
username_0: Dumping the backtrace. Please include this when reporting the bug on https://github.com/godotengine/godot/issues
[1] linux-gate.so.1(__kernel_sigreturn+0) [0xb7f18d14] (??:0)
-- END OF BACKTRACE --
handle_crash: Program crashed with signal 11
Dumping the backtrace. Please include this when reporting the bug on https://github.com/godotengine/godot/issues
[1] linux-gate.so.1(__kernel_sigreturn+0) [0xb7f18d14] (??:0)
-- END OF BACKTRACE --
handle_crash: Program crashed with signal 11
Dumping the backtrace. Please include this when reporting the bug on https://github.com/godotengine/godot/issues
[1] linux-gate.so.1(__kernel_sigreturn+0) [0xb7f18d14] (??:0)
-- END OF BACKTRACE --
**Steps to reproduce:**
Enter these commands in a terminal:
1. sudo su
2. cd <path to folder where godot is placed>
3. cp <godot's file name> /usr/bin/<godot's file name>
4. <godot's file name>
**Minimal reproduction project:**
None. Just run Godot via the terminal.
Answers:
username_1: You're not supposed to run Godot as root. What happens if you run it normally without being root?
Additionally, I would advise against putting custom binaries in `/usr/bin`; this path is reserved for system packages. If you really need to have it in the path, I'd advise putting a symlink in `/usr/local/bin/godot` pointing to where you keep the binary in your `$HOME`.
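For example (assuming the binary lives at `~/bin/godot`):
```
sudo ln -s ~/bin/godot /usr/local/bin/godot
```
This keeps `/usr/bin` untouched while still making `godot` available on the `PATH`.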
username_0:
```
CPU Work mode: 32-bit
Bit order: Little Endian
CPU: 1
List of active CPU: 0
Model name: Intel(R) Celeron(R) M processor 1.60GHz
```
**I know it is an old PC, but it ran Godot half a year ago, so when I found the error I reported it as an issue. And I will try to run Godot on that PC.**
username_1: Can you post the output of `lscpu`? Are you using the 32-bit version of Godot?
username_1: I mean type `lscpu` in a terminal, and post what it prints.
username_0: Sorry, but I don't know what you mean. I'm totally confused. In my last post on this problem I posted the output from the Ubuntu Studio system terminal. If this is not what you mean, please try to specify what you mean.
username_1: And instructions including the use of `sudo` and `cp`.
It means that you know how to start a terminal.
So I ask you to start a terminal, then type `lscpu`, press Enter, and copy paste the text that will be printed by the `lscpu` command in the terminal...
username_0: OK. Thanks a lot, by the way. Wait a minute. I have to rewrite the text from the 2nd PC on the 1st PC (I have 2 PCs) and translate it. So, I will add a new comment when the text is (finally) ready.
username_1: Well, you don't need to type everything; I'm mostly interested in the first few lines about the exact CPU model, architecture, etc.
username_0: Hehe 😅
username_2: run godot in a terminal with -v like this:
```
godot_name -v
```
and paste all the output here, not just the backtrace.
Also does a window open? Describe a little what exactly happens.
username_1: Does Godot 2.1.6 work on this computer?
https://downloads.tuxfamily.org/godotengine/2.1.6/Godot_v2.1.6-stable_x11.32.zip
I'm not sure what the issue would be, as official binaries are built on Ubuntu 14.04 LTS so they should run fine on 18.04 32-bit.
Maybe lack of SSE2? I don't think we enable it ourselves, but maybe a third-party library has a bogus check to enable it.
Can you post the output of `lscpu | grep Flags`?
username_0: Wait man... When i will be at home, I will do that.
username_0: So... is that bug fixed? Do you need any help from me?
**Please don't be angry about this; I don't know whether to open it as a new issue, so I'll write briefly.** I have bugs with the VideoPlayer node, and I think Godot should get 100% support for DLL libraries. It would be nice to export the game so that its files from the source code are next to the game's executable file. (I know that you can ship the Godot binary next to the project, but due to the mentioned VideoPlayer bug this will not have the intended effect, because the intro will not start.)
username_3: It's still relevant in 3.2 beta 4 (tested on freshly installed Lubuntu 18.04 and Debian 10.2.0 32-bit x86 VMs); a version compiled on the 32-bit VM itself works fine.
username_4: It's still relevant in 3.2 beta 5 (tested on Linux Mint 64-bit and Windows 10 64-bit).
username_5: @username_4 The issue reported here is very different, as it's reported on 32-bit Linux. Please create a new issue and make sure to include the requested information.
username_1: Did some tests in an Ubuntu Studio 18.04 i586 VM, and I can reproduce the issue with official binaries.
It seems to be caused by our use of `use_static_cpp=yes`. I also checked `use_lto=yes`, but this one doesn't seem to be problematic, i.e. (scons invocations sketched below):
- Official: `debug_symbols=no use_lto=yes use_static_cpp=yes`: crash.
- Test 1: `debug_symbols=yes use_lto=yes use_static_cpp=no`: works (both as is and once stripped).
- Test 2: `debug_symbols=yes use_lto=no use_static_cpp=yes`: crash.
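For reference, those configurations correspond to scons invocations roughly like the following (a sketch; the exact flags used for the official builds may differ):
```
scons platform=x11 bits=32 tools=yes target=release_debug debug_symbols=no use_lto=yes use_static_cpp=yes   # official-like: crash
scons platform=x11 bits=32 tools=yes target=release_debug debug_symbols=yes use_lto=yes use_static_cpp=no   # test 1: works
scons platform=x11 bits=32 tools=yes target=release_debug debug_symbols=yes use_lto=no use_static_cpp=yes   # test 2: crash
```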
Backtrace for the crash:
```
Godot Engine v3.2.beta5.90.official - https://godotengine.org
handle_crash: Program crashed with signal 11
Dumping the backtrace. Please include this when reporting the bug on https://github.com/godotengine/godot/issues
[1] linux-gate.so.1(__kernel_sigreturn+0) [0xb7fd6d14] (??:0)
-- END OF BACKTRACE --
handle_crash: Program crashed with signal 11
Dumping the backtrace. Please include this when reporting the bug on https://github.com/godotengine/godot/issues
[1] linux-gate.so.1(__kernel_sigreturn+0) [0xb7fd6d14] (??:0)
-- END OF BACKTRACE --
warning: Error reading shared library list entry at 0x2490
Thread 1 "godot.x11.opt.t" received signal SIGSEGV, Segmentation fault.
0x00000000 in ?? ()
(gdb) bt
#0 0x00000000 in ?? ()
#1 0xb7c3c845 in __pthread_once_slow (once_control=0xb5d62f2c,
init_routine=0xac0e3a0 <__once_proxy>) at pthread_once.c:116
#2 0xb237cd27 in void std::call_once<void (&)()>(std::once_flag&, void (&)()) ()
from /usr/lib/i386-linux-gnu/libLLVM-6.0.so.1
#3 0xb237cdb6 in llvm::ManagedStaticBase::RegisterManagedStatic(void* (*)(), void (*)(void*)) const () from /usr/lib/i386-linux-gnu/libLLVM-6.0.so.1
#4 0xb23581f9 in llvm::cl::OptionCategory::registerCategory() ()
from /usr/lib/i386-linux-gnu/libLLVM-6.0.so.1
#5 0xb2282f35 in ?? () from /usr/lib/i386-linux-gnu/libLLVM-6.0.so.1
#6 0xb7fe77db in call_init (l=<optimized out>, argc=argc@entry=1, argv=argv@entry=0xbffff244,
env=0xbffff24c) at dl-init.c:72
#7 0xb7fe78de in call_init (env=0xbffff24c, argv=0xbffff244, argc=1, l=<optimized out>)
at dl-init.c:30
#8 _dl_init (main_map=<optimized out>, argc=1, argv=0xbffff244, env=0xbffff24c)
at dl-init.c:119
#9 0xb7feb855 in dl_open_worker (a=<optimized out>) at dl-open.c:522
#10 0xb7a7dc20 in __GI__dl_catch_exception (exception=0xbfffe070,
operate=0xb7feb390 <dl_open_worker>, args=0xbfffe07c) at dl-error-skeleton.c:196
#11 0xb7feafe6 in _dl_open (file=0xbfffe2d4 "/usr/lib/i386-linux-gnu/dri/swrast_dri.so",
mode=-2147483390, caller_dlopen=0xb6c67bf8, nsid=<optimized out>, argc=1, argv=0xbffff244,
env=0xbffff24c) at dl-open.c:605
#12 0xb7c27c65 in dlopen_doit (a=0xbfffe27c) at dlopen.c:66
#13 0xb7a7dc20 in __GI__dl_catch_exception (exception=0xbfffe210,
operate=0xb7c27bf0 <dlopen_doit>, args=0xbfffe27c) at dl-error-skeleton.c:196
#14 0xb7a7dcd0 in __GI__dl_catch_error (objname=0xc3e918c, errstring=0xc3e9190,
mallocedp=0xc3e9188, operate=0xb7c27bf0 <dlopen_doit>, args=0xbfffe27c)
at dl-error-skeleton.c:215
#15 0xb7c283c1 in _dlerror_run (operate=0xb7c27bf0 <dlopen_doit>, args=0xbfffe27c)
at dlerror.c:162
#16 0xb7c27d28 in __dlopen (file=0xbfffe2d4 "/usr/lib/i386-linux-gnu/dri/swrast_dri.so",
mode=258) at dlopen.c:87
#17 0xb6c67bf8 in ?? () from /usr/lib/i386-linux-gnu/libGLX_mesa.so.0
#18 0xb6c6728a in ?? () from /usr/lib/i386-linux-gnu/libGLX_mesa.so.0
#19 0xb6c4388e in ?? () from /usr/lib/i386-linux-gnu/libGLX_mesa.so.0
#20 0xb6c3f094 in ?? () from /usr/lib/i386-linux-gnu/libGLX_mesa.so.0
#21 0xb6c404a1 in ?? () from /usr/lib/i386-linux-gnu/libGLX_mesa.so.0
#22 0xb77ee4f9 in glXChooseFBConfig () from /usr/lib/i386-linux-gnu/libGLX.so.0
#23 0x0843de24 in ContextGL_X11::initialize () at platform/x11/context_gl_x11.cpp:157
#24 0x084484ba in OS_X11::initialize(OS::VideoMode const&, int, int) ()
at platform/x11/os_x11.cpp:304
#25 0x08465aa6 in Main::setup2(unsigned long long) () at main/main.cpp:1201
#26 0x0846b97e in Main::setup(char const*, int, char**, bool) () at main/main.cpp:1146
#27 0x08438cd4 in main () at platform/x11/godot_x11.cpp:49
#28 0xb7961e81 in __libc_start_main (main=0x8438c30 <main>, argc=1, argv=0xbffff244,
init=0xac6a440 <__libc_csu_init>, fini=0xac6a4b0 <__libc_csu_fini>,
rtld_fini=0xb7fe79b0 <_dl_fini>, stack_end=0xbffff23c) at ../csu/libc-start.c:310
#29 0x0843ba74 in _start ()
```
The multiple segfaults are due to the PRIME detection that we have for X11, but it crashes even if we bypass it with `DRI_PRIME=0` (with the same backtrace as above).
username_1: I found that the export templates seem to work fine even though they're also compiled with `use_static_cpp=yes`.
So it seems something editor-specific crashes when using `-static-libgcc -static-libstdc++`, which doesn't affect the runtime.
username_1: 3.2 RC 1 and later should now work fine on 32-bit systems; they no longer link libgcc and libstdc++ statically.
username_1: Reminder to self: use the same workaround for 3.1.3 or rebuild 3.1.2 binaries.
username_0: OK. I will check it soon and write back about what happened.
username_0: Not working.
Output here:
```
Godot Engine v3.2.rc1.official - https://godotengine.org
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 155 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 27
Current serial number in output stream: 25
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 155 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 27
Current serial number in output stream: 25
ERROR: initialize: Condition ' ctxErrorOccurred || !p->glx_context ' is true. returned: ERR_UNCONFIGURED
At: platform/x11/context_gl_x11.cpp:190.
OpenGL ES 2.0 Renderer: ATI RC410
ERROR: initialize: Directional shadow framebuffer status invalid
At: drivers/gles2/rasterizer_scene_gles2.cpp:4053.
handle_crash: Program crashed with signal 11
Dumping the backtrace. Please include this when reporting the bug on https://github.com/godotengine/godot/issues
[1] linux-gate.so.1(__kernel_sigreturn+0) [0xb7edcd14] (??:0)
[2] [0xb7ecb2c3] (??:0)
[3] /usr/lib/i386-linux-gnu/dri/r300_dri.so(+0x4e7f48) [0xb5f3cf48] (??:0)
-- END OF BACKTRACE --
Aborted (core dumped)
```
username_0: Anything else I can add: the system is using packages for the i386 architecture, and some programs like the Steam client and Blender crash. Maybe the source of those issues is the hardware.
username_1: Indeed, if Steam, Blender and Godot all crash on this system, it's not something we can handle further in Godot. The last backtrace and the GLX errors show that the crash happens in your drivers, which do not properly support OpenGL 2.1. You might have some luck upgrading to the latest version of Mesa available (either via a PPA, or trying Ubuntu Studio 19.10 if they still provide i386 support), but I wouldn't bet on it.
I'll keep this issue open though as the crash described in https://github.com/godotengine/godot/issues/31743#issuecomment-573013032 is still valid if using `use_static_cpp=yes` when compiling Godot for 32-bit Linux (though I could only reproduce it on Ubuntu derivatives).
username_0: OK. I already updated the system and Godot seems to be working, but I have to update the Mesa drivers, because when I press F6 or F5 there is a popup telling me I have an old GPU or old drivers. Anyway, I can finally open the project, so that's very good news.
username_6: When debugging #26887 I added a print_line to the end of `OS_Unix::get_ticks_usec()`, which adds enough overhead to trigger this bug. I am on the latest master.
username_0: Sadly..., No
username_7: Can anyone still reproduce this bug in Godot 3.2.3 or any later release?
username_1: I haven't tried, but I think the tests I did with early 3.2 in https://github.com/godotengine/godot/issues/31743#issuecomment-573013032 would still fail. It's not a Godot bug, but either an Ubuntu or a toolchain one. I'll have to give it another look one day and see if I can make a bug report against Ubuntu or the involved toolchain (not sure if it's GCC, binutils, or ld-linux; for some reason LLVM also seems to be involved in my stacktrace above).
Status: Issue closed
|
bukowskis/perimeter | 831124195 | Title: Question: What is project status?
Question:
username_0: I'm looking for a gem to adopt / a reference implementation of the repository/entity pattern to enforce boundaries in an existing Rails project.
Perimeter looks like exactly what I'm looking for (thank you for sharing it!), but since there has been no activity in this repo for a long time, I'm curious about the project status.
Could you please share your experience: is this something that just works for you and doesn't need any updates, or do you not use it anymore? |
infinum/eightshift-libs | 991326262 | Title: Deprecated functions in WP 5.8
Question:
username_0: ## Describe the bug
Functions that are no longer valid in WP 5.8 and can cause compatibility issues.
## Your Environment
* PHP version: 7.4
* Libs version: 4.0
## Additional context
<!-- Add any other context about the problem here. -->
PHP Deprecated: `block_categories` is <strong>deprecated</strong> since version 5.8.0! Use `block_categories_all` instead. in .../functions.php on line 5458
PHP Deprecated: `allowed_block_types` is <strong>deprecated</strong> since version 5.8.0! Use `allowed_block_types_all` instead. in .../functions.php on line 5458
PHP Deprecated: `block_categories` is <strong>deprecated</strong> since version 5.8.0! Use `block_categories_all` instead. in .../functions.php on line 5458
PHP Deprecated: `block_editor_settings` is <strong>deprecated</strong> since version 5.8.0! Use `block_editor_settings_all` instead. in .../functions.php on line 5458
Answers:
username_1: These were supposed to be fixed in https://github.com/infinum/eightshift-libs/pull/223.
There should be
```php
// Only allow custom-built blocks.
if (\is_wp_version_compatible('5.8')) {
\add_filter('allowed_block_types_all', [$this, 'allowOnlyCustomBlocks'], 21, 2);
} else {
\add_filter('allowed_block_types', [$this, 'allowOnlyCustomBlocksOld'], 21, 2); // @phpstan-ignore-line
}
```
and
```php
// Create new custom category for custom blocks.
if (\is_wp_version_compatible('5.8')) {
\add_filter('block_categories_all', [$this, 'getCustomCategory'], 10, 2);
} else {
\add_filter('block_categories', [$this, 'getCustomCategoryOld'], 10, 2);
}
```
hooks in your `Blocks.php` file.
username_2: This is fixed.
Status: Issue closed
|
JuliaDiff/ChainRulesCore.jl | 1028031092 | Title: Dep/failure on v1.7
Question:
username_0: ```julia
ERROR: LoadError: Wrapping `Vararg` directly in UnionAll is deprecated (wrap the tuple instead).
Stacktrace:
[1] UnionAll(v::TypeVar, t::Any)
@ Core .\boot.jl:255
[2] top-level scope
@ C:\Users\accou\.julia\packages\ChainRulesCore\bxKCw\src\projection.jl:134
[3] include(mod::Module, _path::String)
@ Base .\Base.jl:417
[4] include(x::String)
@ ChainRulesCore C:\Users\accou\.julia\packages\ChainRulesCore\bxKCw\src\ChainRulesCore.jl:1
[5] top-level scope
@ C:\Users\accou\.julia\packages\ChainRulesCore\bxKCw\src\ChainRulesCore.jl:31
[6] include
@ .\Base.jl:417 [inlined]
[7] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt64}}, source::String)
@ Base .\loading.jl:1318
[8] top-level scope
@ none:1
[9] eval
@ .\boot.jl:373 [inlined]
[10] eval(x::Expr)
@ Base.MainInclude .\client.jl:453
[11] top-level scope
@ none:1
in expression starting at C:\Users\accou\.julia\packages\ChainRulesCore\bxKCw\src\projection.jl:134
in expression starting at C:\Users\accou\.julia\packages\ChainRulesCore\bxKCw\src\ChainRulesCore.jl:1
ERROR: LoadError: Failed to precompile ChainRulesCore [d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4] to C:\Users\accou\.julia\compiled\v1.7\ChainRulesCore\jl_BEA4.tmp.
```
pointing to `const _PZ_Tuple = Tuple{_PZ,Vararg{<:_PZ}} # 1 or more ProjectTo{<:AbstractZeros}`
Answers:
username_1: For my reference this was introduced in
https://github.com/JuliaLang/julia/pull/38136
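For reference, the deprecation-free spelling simply drops the `<:` (a sketch; tuple types are covariant in their element types, so this still matches 1 or more `_PZ` elements):
```julia
const _PZ_Tuple = Tuple{_PZ, Vararg{_PZ}}  # 1 or more ProjectTo{<:AbstractZero}
``` |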
carpentries/assessment | 320865671 | Title: Programmatically rename downloaded images
Question:
username_0: When images are exported (via download as HTML or other formats) they are given generic file names (e.g. `output_19_2.png`). Is there a way to specify the filename in Jupyter?
Answers:
username_1: This issue was moved to username_1/assessment-new#10
Status: Issue closed
|
pavelneznanov/netology-js | 368421591 | Title: Use the spread operator
Question:
username_0: https://github.com/pavelneznanov/netology-js/blob/be4e89c21da0642dd0aeb9a255c0ca23636738cd/game.js#L208
The spread operator lets you convert a string to an array of characters more elegantly: instead of `row.split('')` you can write `[...row]`.
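A quick illustration (hypothetical value; both forms produce the same array):
```js
const row = "abc";
console.log(row.split('')); // [ 'a', 'b', 'c' ]
console.log([...row]);      // [ 'a', 'b', 'c' ]
```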
Status: Issue closed |
TWiStErRob/TWiStErRob-env | 1159402301 | Title: Split SVN monorepo
Question:
username_0: For how, see https://github.com/username_0/username_0-env/tree/master/svn2git-migration
- [ ] Documents\ -> empty, never had contents, delete
- [ ] Examples\
- [ ] Hacks\
- [ ] Interviews\
- [ ] Libraries\android_google-play-services_lib\
- [ ] Libraries\android_google-play-services_lib_froyo\
- [ ] Libraries\android_support-v7_appcompat\
- [ ] Libraries\TWiStEr.Annotations\
- [ ] Libraries\TWiStEr.BitTorrent\
- [ ] Libraries\TWiStEr.Controls.WPF\
- [ ] Libraries\TWiStEr.Converters\
- [ ] Libraries\TWiStEr.DataStructures\
- [ ] Libraries\TWiStEr.Extensions\
- [ ] Libraries\TWiStEr.FormalLanguages\
- [ ] Libraries\TWiStEr.Fuzzy\
- [ ] Libraries\TWiStEr.NeuralNetworks\
- [ ] Libraries\twister-lib-android\
- [ ] Libraries\twister-lib-java\
- [x] Libraries\twister-plugin-gradle\ -> https://github.com/username_0/net.twisterrob.gradle/issues/95
- [ ] Projects\Android Color Filters\ -> https://github.com/username_0/net.twisterrob.colorfilters/issues/16
- [ ] Projects\Better London Travel\ -> https://github.com/username_0/net.twisterrob.travel/issues/2
- [ ] Projects\Cashier\
- [x] Projects\Cinema\ -> [**Publish & Maintenance 2021**](https://github.com/username_0/net.twisterrob.cinema/milestone/1) net.twisterrob.cinema/M1
- [ ] Projects\Color Schema Manager\
- [ ] Projects\Extra Refactorings\
- [ ] Projects\Family Tree\
- [ ] Projects\Git History Builder\
- [ ] Projects\GPS Track Analyzer\
- [ ] Projects\Inventory\ -> https://github.com/username_0/net.twisterrob.inventory/issues/171
challenge: twister-lib-android & twister-lib-java
- [ ] Projects\Language Cross Reference\
- [ ] Projects\ListMatcher\
- [ ] Projects\Minesweeper Multiplayer\
- [ ] Projects\Mosaic\
- [ ] Projects\Office Translator\
- [ ] Projects\Path Finder\
- [ ] Projects\PRSysMon2\
- [ ] Projects\Quick Public Transport\
- [ ] Projects\Regex Testbed\
- [ ] Projects\Sensor3D\
- [x] Projects\Sun\ -> https://github.com/username_0/net.twisterrob.sun/issues/5
- [ ] Projects\Syntax Highlight Editor\
- [ ] Projects\WebTwister\
- [ ] School\
- [ ] Utilities\Calendar Filler\
- [ ] Utilities\Console Highlighter\
- [ ] Utilities\Direct File Transfer\
- [ ] Utilities\Graph Energy\
- [ ] Utilities\GridMaker\
- [ ] Utilities\Paint.NET\
- [ ] Utilities\Pre-Commit Hook\
- [ ] Utilities\Processing\
- [ ] Utilities\Regex Color Editor\
- [ ] Utilities\Series Manager\
- [ ] Utilities\Simple Utilities\
- [ ] Utilities\Solvers\
- [ ] Utilities\SpecialGraph\
- [ ] Utilities\StereoViewer\
- [ ] Utilities\TimeShifter\
- [ ] Utilities\ViewModelGenerator\
- [ ] Utilities\WhatsUp\
- [ ] Utilities\XMLReorder\
- [ ] Web\Amoeba\
- [ ] Web\auroville.hu\
- [ ] Web\Browser Scripts\
- [ ] Web\Google Calendar Printer\
- [ ] Web\Google Maps Icons\
- [ ] Web\Portable Apache Server\
- [ ] Web\QuickJSONFormatterLib\
- [ ] Web\ScreenJunkies Plus\
- [x] Web\Test -> https://github.com/username_0/net.twisterrob.healthcheck/commit/02b48c69ae30bbed9cfae33cffbec1c5fde30957
- [ ] Web\twisterrob.net\
- [ ] Web\VotingTwister\
- [ ] Web\www.twisterrob.net\ -> https://github.com/username_0/twisterrob.github.io
Need to figure out how to store private files. |
Zulu-Inuoe/jzon | 925385567 | Title: [FR] Add to `quicklisp`
Status: Issue closed
Answers:
username_1: Hi @username_0. I appreciate the support; unfortunately, I'd like to avoid putting this on Quicklisp until it's more complete. At a minimum I wish to have:
1. A roundtripping reader/writer (output JSON exactly as it was read)
2. Object coercion mechanism for reading (eg read JSON as a CL class/structure)
Until these are done, I don't want to pollute QL with yet another JSON library.
username_2: +1 for adding to Quicklisp.
IMHO, jzon is already the best JSON library for rendering JSON views in web app.
So, I'd like to switch from Jonathan in my libraries, but can't because jzon is not in Quicklisp.
username_1: @username_2 Thanks for the support.
Unfortunately I haven't been doing much programming in my free time as of late.
But if you think the library is useful as-is, then it's fair to add it to quicklisp, and I can add my other desired features over time as I can.
I just don't want to continue to contribute to the 100 JSON library meme on Quicklisp unless I offer some substantial benefit
username_3: @username_1 I think that jzon is a new-generation JSON parser that takes into account the advantages and disadvantages of the previous generation of parsers. And it has full coverage of the JSON standard, verified by a third-party test suite. These are already good reasons for releasing v1 on Quicklisp.
username_4: Yes, can you please add this to quicklisp, the pretty print feature is exceptionally useful :)
username_5: Throwing my hat into the pile. I'd like to start using this. |
yyuueexxiinngg/BepInEx-Plugins | 810194281 | Title: Idea/Suggestion
Question:
username_0: Hello. It would be great if this mod instead is able to request items from ILS where the slot is set to Remote Supply. Then send a Vessel with the goods to your mecha. I know thats may hard, but it would make much more sense <3
Status: Issue closed
Answers:
username_1: Thanks for your great idea, but I'm not planned to do it, maybe you can share your idea in DSP modding discord server, which can be found here: [DSP Modding](https://dsp.thunderstore.io/) |
SublimeText/PackageDev | 421963294 | Title: Showing empty default theme settings
Question:
username_0: 1. 
1. 
Editing the default Color Scheme worked (but note, it is installed directly in the loose `Packages` directory):

Answers:
username_1: Cannot reproduce. Both commands open a respective file in the User package with the intended template for me.
username_0: Thanks for looking into it. I noted this, so later I can investigate and try to fix it.
Status: Issue closed
|
ColquhounAudio/AxiomBuild | 359551304 | Title: Network error LED (red) not working correctly
Question:
username_0: I am seeing this LED flash during boot, which is not a big deal even though it should not appear. When I entered an incorrect password for a WiFi network, the system falls back to hotspot mode correctly and the blue WiFi LED begins to flash, but I don't get a flashing red error LED at the same time.
Status: Issue closed |
cisc3130/2-list | 263767702 | Title: Allocating header node to a Node pointer
Question:
username_0: I am wondering if it is actually legal to do `header = new Node<T>*()`.
Allocating a pointer to a pointer?
This is my code:
```cpp
DList() {
    head = new Node*();
    head->next = head;
    head->prev = head;
}
```
Status: Issue closed
Answers:
username_1: Yes, the asterisk should be removed:
```cpp
head = new Node();
```
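For context, a minimal sketch of the corrected setup (assuming a templated `Node` with `next`/`prev` members, as in the question):
```cpp
template <typename T>
struct Node {
    T data{};
    Node* next = nullptr;
    Node* prev = nullptr;
};

template <typename T>
class DList {
public:
    DList() {
        head = new Node<T>();  // allocate the node itself, not a pointer to one
        head->next = head;     // circular list: the sentinel points to itself when empty
        head->prev = head;
    }

private:
    Node<T>* head;
};
```
`new Node<T>*()` would allocate a pointer rather than a node, which is why the asterisk has to go. |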
ErikEJ/SqlCeToolbox | 351278723 | Title: Impossible to configure data set after creating new table
Question:
username_0: Hi
Thanks for your tool.
In the new Visual Studio Express for Desktop 2015, when I installed your tool, I can now see my local database and I can add tables and manage them.
The problem is that, in VB Studio Express, after doing that, you need to configure the dataset before dragging it onto the form where you want it. In the configuration wizard, you check the little box to the left of your new table and so... your table is ready to be used (dragged) onto the form where you need it.
1. Can you tell me how to do this, please? I really need to work on my projects from 2015 and before, and I cannot anymore.
2. Also, do you have the old ISO of Visual Studio 2015 Express for Desktop that was made to work with local databases?
Answers:
username_1: 1: DataSets are not supported and not recommended; use SqlCeResultSet or LINQ to SQL.
2: No, I do not, sorry
Status: Issue closed
username_0: Hi
thanks for your answer.
I need to figure something out, please.
I copy-pasted the Visual Studio 2015 folder from a Windows 8 computer that uses the old-fashioned dataset wizard onto a new Windows 10 computer, knowing that some bugs would occur when I ran it.
When I run the exe file from this copy-pasted Visual Studio 2015 folder on the new Windows 10 computer, I still see the new dataset wizard behavior: it is impossible to configure the new table added with your tool, and impossible to drag it onto the form.
My question is: if this new way of managing databases in Visual Community appears even when I run the copy-pasted Visual Studio 2015 on the Windows 10 computer, it means that this part of the software does not take this management from the Visual Studio 14.0 folders/files...
1. So where is the source of the dataset wizard management?
Where does a copy-pasted old version of Studio 2015 take this new dataset management behavior from, if not from its own folders?
If I can't change it back to the old style...
2. Do you have a tutorial to replace the dataset wizard, meaning that I will no longer drag the dataset table onto my form but will program it instead, or something else that you know of?
thanks
username_1: I am not sure I understand what you are asking - I have a guide for LINQ to SQL here:
http://erikej.blogspot.com/2013/10/sql-server-compact-4-desktop-app-with.html
username_0: thanks for your answer
I'll make it simple
before.. we had to configure the dataset in the Data source after creating a new table so that the dataset in data source is updated with the new table and then we slide it on the chosen form
1.i understand I cannot do that anymore so do you have a tutorial to do instead of that after creating a new table
2. do I have to recode all my projects with local databases? It would be incredable... hours and hours of recoding passing from :
```
For x = 0 To Form1.Database1DataSet1.database.Rows.Count - 1
Next
```
to this:
```
Dim cn As New SqlConnection()
Dim DS As New DataSet()
Dim da As SqlDataAdapter
Dim dr As DataRow
Dim cmdBuilder As SqlCommandBuilder
cn.ConnectionString = "Data Source=xxxxxx;User Id=xxxxxx;Password=<PASSWORD>;"
cn.Open()
'Initialize the SqlDataAdapter object by specifying a Select command
'that retrieves data from the sample table.
da = New SqlDataAdapter("SELECT * FROM database order by no", cn)
'Initialize the SqlCommandBuilder object to automatically generate and initialize
'the UpdateCommand, InsertCommand and DeleteCommand properties of the SqlDataAdapter.
cmdBuilder = New SqlCommandBuilder(da)
'Populate the dataset by running the Fill method of the SqlDataAdapter.
da.Fill(DS, "database")
Dim dateX As String = Today.Month.ToString + Today.Year.ToString
Dim nom As String = "Home"
Dim verificateur As Boolean = False
For x = 0 To DS.Tables("database").Rows.Count - 1
Next
``` |
dotnet/aspnetcore | 624795082 | Title: OAuth authentication options requires client secret even when using PKCE
Question:
username_0: ### Describe the bug
When using OAuth authentication in an ASP.NET Core app, the [OAuthOptions](https://github.com/dotnet/aspnetcore/blob/master/src/Security/Authentication/OAuth/src/OAuthOptions.cs) class fails validation if no client secret has been configured, even when the `UsePkce` flag has been set to `true`.
### To Reproduce
Add OAuth authentication to an ASP.NET Core app. Configure the OAuth options so that a client secret is not provided but the PKCE flag is enabled e.g.
```csharp
services.AddAuthentication().AddOAuth(options => {
options.AuthorizationEndpoint = "some_endpoint_url";
options.TokenEndpoint = "some_endpoint_url";
options.UserInformationEndpoint = "some_endpoint_url";
options.CallbackPath = new PathString("/auth/callback");
options.ClientId = "some_client_id";
options.UsePkce = true;
});
```
At startup, an exception is thrown when the OAuth options are validated:
```
System.ArgumentException: The 'ClientSecret' option must be provided. (Parameter 'ClientSecret')
at Microsoft.AspNetCore.Authentication.OAuth.OAuthOptions.Validate()
at Microsoft.AspNetCore.Authentication.RemoteAuthenticationOptions.Validate(String scheme)
at Microsoft.AspNetCore.Authentication.AuthenticationBuilder.<>c__DisplayClass4_0`2.<AddSchemeHelper>b__1(TOptions o)
at Microsoft.Extensions.Options.ValidateOptions`1.Validate(String name, TOptions options)
at Microsoft.Extensions.Options.OptionsFactory`1.Create(String name)
at Microsoft.Extensions.Options.OptionsMonitor`1.<>c__DisplayClass11_0.<Get>b__0()
at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
at System.Lazy`1.CreateValue()
at System.Lazy`1.get_Value()
at Microsoft.Extensions.Options.OptionsCache`1.GetOrAdd(String name, Func`1 createOptions)
at Microsoft.Extensions.Options.OptionsMonitor`1.Get(String name)
at Microsoft.AspNetCore.Authentication.AuthenticationHandler`1.InitializeAsync(AuthenticationScheme scheme, HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationHandlerProvider.GetHandlerAsync(HttpContext context, String authenticationScheme)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
```
The issue can be worked around by setting a dummy value for the client secret e.g.
```csharp
services.AddAuthentication().AddOAuth(options => {
options.AuthorizationEndpoint = "some_endpoint_url";
options.TokenEndpoint = "some_endpoint_url";
options.UserInformationEndpoint = "some_endpoint_url";
options.CallbackPath = new PathString("/auth/callback");
options.ClientId = "some_client_id";
options.ClientSecret = "PKCE";
options.UsePkce = true;
});
```
### Further technical details
```
dotnet --info
.NET Core SDK (reflecting any global.json):
[Truncated]
Microsoft.AspNetCore.App 2.1.11 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.12 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.11 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.12 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
Answers:
username_1: PKCE does not replace client secrets.
username_0: Understood, but if an app is only using authorization code + PKCE, no client secret is required, and the default behaviour when issuing a challenge using the OAuth middleware is to kick off the authorization code flow.
Status: Issue closed
|
likecoin/likecoin | 623575558 | Title: The sample demo in codesandbox get error
Question:
username_0: 1. https://codesandbox.io/s/likecoin-api-badge-demo-4mcc4

2. https://codesandbox.io/s/likecoin-oauth-demo-3u9qq
After clicking the button, the browser showed the error below.

Answers:
username_1: For 1, please browse to the path `/badge/stat/liker.svg` as defined in Express.
2. Please run the demo in a new window instead of an iframe; the error is due to the `X-Frame-Options` security setting.
username_0: Thanks, I'll close this issue.
Status: Issue closed
username_1: maybe we can keep this issue open to improve unclear instructions
username_1: updated with https://github.com/likecoin/likecoin/commit/82942db693b3a93818ee3185fe60a866a920d9f0 and https://github.com/likecoin/likecoin/commit/639db9d8a6592c96f8d7eb4263686ebac8dfc7dd to solve the issue
Status: Issue closed
|
oppia/oppia | 248827709 | Title: Adding triangle to the message bubbles
Question:
username_0: Hi,
Per our discussion on issue #3675, we've decided to add the triangles back to the message bubbles (for now). Learners normally see this view when they navigate to an already-answered card within each exploration, where their answers are grayed out.
Thanks!

<img width="260" alt="screen shot 2017-08-08 at 12 54 35 pm" src="https://user-images.githubusercontent.com/12159451/29091744-c321adde-7c38-11e7-876d-3c0e1e507382.png">
Answers:
username_1: Hi, I added the triangle at the top because, if the answer is long, the message bubble grows in size while the user's avatar remains at the top of the bubble.


username_2: Thanks, @username_1!
@username_3, @username_0: any thoughts on the UI? Also, @username_1, I recommend opening a PR as well so that we can give you a code review simultaneously.
username_0: Thanks, @username_1! I think it looks good. One thought: is it possible to align the avatar picture with the message bubble? Currently, the avatar is slightly higher than the message bubble.

username_3: Actually I think the height is OK because it looks like the triangle is centered on the avatar, which is appropriate. It could maybe go a tiny bit higher but I wouldn't move it up more than a bit. Otherwise, LGTM!
username_1: Hi @username_3, yup, the triangle can go a bit higher, but it might get too high for a one-line answer.

Should I give it a try ?
username_3: hi @username_1 I'm sorry, I misspoke. I think the arrow is fine where it is (though when it existed originally it was actually flush to the top of the bubble). I was referring to moving the avatar in response to @username_0 's comment, not the arrow. I should have said, if anything moves at all (and I'm not sure it needs to), maybe cheat the avatar down very slightly so that the arrow is centered on the avatar. But really, I think it's fine as is.
Status: Issue closed
|
quicwg/ops-drafts | 1062286211 | Title: Clarificaton on any transport- and application-layer delay in RTT measurements
Question:
username_0: https://github.com/quicwg/ops-drafts/blob/3893028c41567d89a6f416049cff901ce87a5c21/draft-ietf-quic-manageability.md?plain=1#L753. Why are we so specific about TLS crypto operations here? It could be anything, like the later-mentioned waiting for a response to be sent, or simply workload in the endpoints. Overall, the inclusion of transport- and application-layer delay in RTT measurements has been described in different ways, without explaining why one matters more than the others in a certain context while pointing to the same thing. This needs to be fixed.
Answers:
username_1: The section you noted is about the handshake. As there is no application data during the handshake, the main app-layer delay will be the crypto. Yes, there can always be additional random delays due to processing, also on the NIC, but these are usually smaller and therefore less relevant. Not sure what kind of change would be needed here.
username_0: It would be good to refer to the same events that may cause application-layer delay, rather than giving two different examples of the same application-layer delay.
username_1: Still not sure what you would like to see here. Can you make a proposal?
username_2: Alternate proposal: remove the example?
username_0: That would actually work, but there are examples of application-layer delay in that later section; maybe you want to bring that up here and only use one set of potential reasons for application-layer delay for the whole section.
Status: Issue closed
|
thoughtbot/factory_bot_rails | 302275870 | Title: Release v4.9.0 to rubygems
Question:
username_0: Hi,
5 days ago you released version v4.9.0 (at least - by tagging) but it's not pushed to rubygems, where version `4.8.2` is still the latest one.
cc @username_1
Answers:
username_1: Read more in https://github.com/thoughtbot/factory_bot_rails/issues/260 and https://github.com/thoughtbot/factory_bot_rails/issues/255
4.8.2 is currently the latest `factory_bot_rails` release. 4.9.0 is the latest (and likely last) `factory_girl_rails` release.
Status: Issue closed
|
tue-robotics/dashboard | 48077942 | Title: Fix the status of the wireless runstop
Question:
username_0: If the physical runstop is pressed, the wireless runstop state is unknown (tue-robotics/amigo_hardware#2). Make the button grey out.
Answers:
username_0: ```yaml
---
header:
seq: 588
stamp:
secs: 0
nsecs: 0
frame_id: ''
status:
-
level: 0
name: Wired
message: ''
hardware_id: ''
values: []
-
level: 1
name: Wireless
message: ''
hardware_id: ''
values: []
---
```
Status: Issue closed
|
carbon-design-system/gatsby-theme-carbon | 500587665 | Title: can't use multiple classes + syntax highlighting in code component
Question:
username_0: I'd expect to be able to apply more than one class to a `Code` component and retain syntax highlighting, e.g.:
```jsx
<Code className="language-js foobar">
'some js here';
</Code>
```
However, the contents of the `className` prop are passed directly into the `Highlight` component; it does not recognize `js foobar` (`language-` is stripped) as a known language, and does not highlight the code.
Two suggestions to address this:
1. Use a discrete `language` or `lang` prop
1. _Require_ (right now, it's optional) the language class name to have a `language-` prefix, and assign _only the matched substring_ to the `language` prop (to be passed into the `Highlight` component).
Out of these two, I'd prefer the first, as it's more explicit. Other ideas?
I can send a PR.
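To illustrate the second option, the extraction could be as simple as this (a sketch; `getLanguage` is a hypothetical helper, not existing theme code):
```js
const getLanguage = (className = '') => {
  const match = className.match(/language-(\w+)/);
  return match ? match[1] : undefined; // "js" from "language-js foobar"
};
```
The remaining classes in `className` would then pass through to the rendered element untouched.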
* * *
This is an x/y problem. Why use a `Code` component directly?
I'd like to limit the vertical size of the code block (mainly, I just want the "copy" button to work; it's a big file).
To do this, I have to use a `className` referencing a CSS class with a `height` property.
- I can't use a `style` prop in the JSX, because the component does nothing with it.
- Wrapping the `Code` component in e.g. `<div style={height: '250px'}>` doesn't work; the code block overflows into the footer, etc.
- It's not possible to reference a class name imported from a `.module.scss` in the markdown params (whatever those are called) of a fenced code block, afaict.
- I could just shadow the `Code` component and bend it to my will, but figured someone else might have this use case.
Answers:
username_1: Hi! Thanks for opening this issue. I think being able to add a custom `className` is a fair ask. I definitely think that the current workaround would be shadowing the component until we add this prop. Also, related to your issue: https://github.com/carbon-design-system/gatsby-theme-carbon/issues/463 The show more/less functionality might be useful for you in the future. As always, feel free to open a PR for this as well. :) |
Elite-Four/jDollarX | 143392692 | Title: Sounded useful, but it's not installable
Question:
username_0: ```shell
npm install --save jdollarx
fnpm ERR! Darwin 15.3.0
npm ERR! argv "/Users/Eric/.nvm/versions/node/v5.6.0/bin/node" "/Users/Eric/.nvm/versions/node/v5.6.0/bin/npm" "install" "--save" "jdollarx"
npm ERR! node v5.6.0
npm ERR! npm v3.6.0
npm ERR! code E404
```
Answers:
username_1: Sorry about that, but it was a work in progress.
It's published at present.
Status: Issue closed
username_0: :+1: Thanks! |
red/VScode-extension | 747996916 | Title: There is missing char `R` in the name of month `February`
Question:
username_0: Check this line: https://github.com/red/VScode-extension/blob/ef58e239253f80325f271f17760a34b01f63b322/syntaxes/Red.tmLanguage.json#L395
There is `|Feb(u(a(ry?)?)?)?` but there should be: `|Feb(r(u(a(ry?)?)?)?)?`
Answers:
username_1: @username_0 Thanks for pointing it out.
Status: Issue closed
|
gbif/portal-feedback | 401824476 | Title: Problem with photo of specimen
Question:
username_0: **Problem with photo of specimen**
Problem with photo of specimen
-----
System: Firefox 64.0.0 / Windows 10.0.0
User: [See in registry](https://www.gbif.org/api/feedback/user/3db87c38f5a4c7a316b8819b9bff7e62:d417aab3dff55f59974e119a149f6a4a063156c2c6f8e9797da12deed73035c2f64011c1fd82fcfec6ce5edebc21e92799b782013ff38c37db34e2b5267af926)
Referer: https://www.gbif.org/occurrence/1257562641
Window size: width 1920 - height 944
[API log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2019-01-22T15:35:59.763Z',mode:absolute,to:'2019-01-22T15:41:59.763Z'))&_a=(columns:!(_source),index:'prod-varnish-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc)))
[Site log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2019-01-22T15:35:59.763Z',mode:absolute,to:'2019-01-22T15:41:59.763Z'))&_a=(columns:!(_source),index:'prod-portal-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc)))
System health at time of feedback: OPERATIONAL
Answers:
username_1: Forwarded feedback via Tropicos feedback system. |
ayoolaolafenwa/PixelLib | 913124115 | Title: Error visualizing in custom dataset
Question:
username_0: Trying to train custom instance segmentation using the Colab example; when trying to visualize a sample image before training, I'm getting the error below:
```
<__array_function__ internals> in amin(*args, **kwargs)

/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
     85         return reduction(axis=axis, out=out, **passkwargs)
     86
---> 87     return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
     88
     89

ValueError: zero-size array to reduction operation minimum which has no identity
```
The size of my dataset is less than 300 photos; do you think this could be the reason?
Also, image sizes are W 4032 x H 1816.
Can you advise, please?
Answers:
username_1: @username_0 this is caused by an empty array. Check the json files generated for your dataset and check if any is empty.
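If it helps, a quick check along these lines (a sketch; assumes labelme-style JSON with a top-level `shapes` list, and the folder path is yours to adjust) will flag empty annotation files:
```python
import glob
import json

for path in glob.glob("dataset/train/*.json"):  # adjust to your dataset folder
    with open(path) as f:
        data = json.load(f)
    if not data.get("shapes"):  # no polygons annotated in this file
        print("empty annotation:", path)
```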
username_2: Hello,
I was having an issue trying to visualize images.
I am using Jupyter and am using mac os 11.4.
Whenever I run the code below the import statements (starting with vis_image = instance_custom_training), the kernel dies.
I am unsure as to what is causing the problem, but I sort of wonder if it is an incompatibility issue. I was hoping someone might be able to help me. I included a screenshot of my notebook at the bottom of this post.
<img width="1085" alt="Screen Shot 2021-06-14 at 11 12 06 PM" src="https://user-images.githubusercontent.com/85923220/122002003-0d64cc00-cd66-11eb-96fa-d9d008e7b825.png">
username_2: @username_1
username_2: nvm, resolved the issue
username_0: I checked all my JSON files and they seemed to be OK. I'm suspecting it happened when I turned on auto save in labelme.
Now I'm getting this new error when visualizing: 'cv2.UMat' object is not subscriptable
username_1: @username_0 can you share the dataset and the annotated json files with me?
username_0: Of course , here is a sample of the annotated images
https://drive.google.com/file/d/16TnwjAqC88LOX-ptKDQhJ57YXiZe0A9m/view?usp=sharing |
projectcalico/calico | 219985458 | Title: Update kubernetes etcdless to KDD
Question:
username_0: We need to update the etcdless install to use the Kubernetes Datastore Driver, as well as update the page to match the features of the latest version:
- Update self-hosted manifest to enable BGP
- Update kubeadm + flannel section to remove flannel
- Update limitations section of "etcdless" doc
- Rename from "etcdless" to "Kubernetes datastore driver"
Status: Issue closed |
StompMarket/helpdesk | 538290925 | Title: Error Reported By |<EMAIL>
Question:
username_0: Partner : F_PAYTM, User : <EMAIL>
Problem : Error : Http failure response for https://stompapimain.azurewebsites.net/package/packageSlipByPackageId/1813407/packageSlip?auth=<KEY>&cachebuster=1576488778086: 400 Bad Request
Message : undefined
StatusText : undefined
User Comment : undefined
Router Link : /fulfillment/fcs/294/packstations/batchpackages/1646883/orderdetail/packageId/2172218 |
rust-lang/rust | 793727053 | Title: Platform-specific function ABIs are not validated
Question:
username_0: This code builds (and runs) fine on stable Rust when targeting x86_64-unknown-linux-gnu:
```rust
extern "aapcs" fn f() {}
fn main() {
f();
}
```
Other nonsensical ABIs are also accepted, including `win64`, `avr-interrupt` (with feature gate). I expected all of these to result in a compilation error, since they are unsupported on x86_64-unknown-linux-gnu.
Answers:
username_1: cc #57182 #86231
username_2: Duplicate of #57182
Status: Issue closed
|
Freyj/CSDungeon | 216982911 | Title: Strcat problems
Question:
username_0: strcat causes bad access errors (according to valgrind).
Answers:
username_1: Resolved by:
- initializing buffers with `strcpy(buffer, "aString");` before any concatenation, and
- when we need to concatenate the message with a literal, copying the literal into a buffer first: `strcpy(bufferGenMessage, "aString"); message = strcat(message, bufferGenMessage);`
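For illustration, a minimal sketch of the pattern (buffer contents are hypothetical): `strcat` appends to an existing NUL-terminated string, so the destination must be initialized first, e.g. with `strcpy`; otherwise it reads (and writes past) uninitialized memory, which is what valgrind reports.
```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char message[64];
    strcpy(message, "You hit the ");  /* initializes the buffer and its '\0' */
    strcat(message, "goblin");        /* safe now: the destination is a valid string */
    printf("%s\n", message);          /* prints: You hit the goblin */
    return 0;
}
```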
Status: Issue closed
|
Depado/goploader | 160883126 | Title: support for subdirs & https
Question:
username_0: Hi!
To protect the goploader page via a reverse proxy, I would like it to work in a subfolder. This does not work, because goploader does not support relative links / subdirs. Is it possible to add a parameter to configure a "base url"?
Same thing using HTTPS. The link presented by goploader is always HTTP. If you want to use HTTPS, the link does not work. It would be great to have a parameter to support HTTPS, too.
Have a nice weekend!!!
Answers:
username_1: In order to use goploader with a reverse proxy and for it to serve HTTPS links, you have to set the `X-Forwarded-Proto` header in your proxy configuration. Otherwise goploader can't determine whether or not the request was HTTP or HTTPS (because it has no knowledge of your proxy and your proxy's configuration).
As for the subdirectory, I'm sorry but I don't get it. Could you give me an example of what you're trying to achieve?
username_1: Related to #38
username_0: Hi,
I wanted to have a proxy using a subdir (virtual) like `https://mydomain/goploader` and forward this to the goploader server via http. I think the issue concernig the subdir was a configureation fault on my site. But the link produced by goploader is http, because it is requested via http by the proxy. Can I change this without using https between the proxy and goploader?
Thanks in advance
Ronny
username_1: Hi,
I don't think this is related to goploader itself. It's more of a configuration problem on your proxy. It is not related on how your proxy talks to goploader, it's about the information sent by your proxy.
For example, here is my Caddy configuration :
```
gpldr.in up.depado.eu {
proxy / 127.0.0.1:8002 {
proxy_header Host {host}
proxy_header X-Real-IP {remote}
proxy_header X-Forwarded-Proto {scheme}
}
gzip
}
```
This is actually quite simple: simply forward the protocol your proxy receives to the goploader server, and it will send back a link with the appropriate protocol. (Meaning, if someone sends an HTTP request, goploader will send back an HTTP link; the same applies if someone sends an HTTPS request.)
**nginx**
`proxy_set_header X-Forwarded-Proto $scheme;`
or
`proxy_set_header X-Forwarded-Proto "https";`
**Apache**
`RequestHeader set X-Forwarded-Proto "https"`
**Caddy**
`proxy_header X-Forwarded-Proto {scheme}`
Hope this helps
username_0: Hi,
the `RequestHeader` directive for Apache works. Thanks!
The other issue with subdirs still does not work. Because I will protect this site using a username and password, this is no showstopper ;-)
In case you are interested: the index.html has to be changed to link the .js files to the subfolder. But if you want to upload a file, jQuery uses the absolute path (/) and so it does not work. I did not find where to configure that...
username_1: So the second issue is about the way static files are handled. I'll have a look later. Thanks for the feedback :)
username_2: I got it to work with a subdir, e.g. http://my.server.com/gop/
First of all I had to replace all `/static/` href and script src values in `index.html` with `/gop/static/`.
Then, to make the upload script work, in `custom.js` change the url (line 106) to '/gop/'.
And then, to have the download links displayed correctly, append `/gop` to the `name_server` variable in the config file.
username_1: Thanks for your input @username_2 !
It's been a while since I last worked on goploader, though I intend to work on it again soon.
There are multiple issues and enhancements that need some work. |
pghysels/STRUMPACK | 743466337 | Title: MPI Abort on BLR Matrix factorization.
Question:
username_0: I compiled STRUMPACK with intel MPI and am trying to run the `testBLRMPI.cpp` example. However, it fails with MPI abort and the following errors:
```
$ mpirun -n 1 ./testBLRMPI
# compressing 1000 x 1000 Toeplitz matrix, with relative tolerance 0.0001
# ProcessorGrid2D: [1 x 1]
# from_ScaLAPACK done!
rank = 0 ABORTING!!!!!
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 88883 RUNNING AT g0125
= KILLED BY SIGNAL: 6 (Aborted)
===================================================================================
```
Here's the stacktrace when running it through GDB:
```
$ gdb --args ./testBLRMPI
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-110.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/acb10922qh/gitrepos/useful-tsubame-benchmarks/hmatrix-benchmarks/STRUMPACK-5.0.0/build/examples/testBLRMPI...(no debugging symbols found)...done.
(gdb) r
Starting program: /home/acb10922qh/gitrepos/useful-tsubame-benchmarks/hmatrix-benchmarks/STRUMPACK-5.0.0/build/examples/./testBLRMPI
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
warning: File "/bb/apps/gcc/7.4.0/lib64/libstdc++.so.6.0.24-gdb.py" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load:/usr/bin/mono-gdb.py".
To enable execution of this file add
add-auto-load-safe-path /bb/apps/gcc/7.4.0/lib64/libstdc++.so.6.0.24-gdb.py
line to your configuration file "/home/acb10922qh/.gdbinit".
To completely disable this security protection add
set auto-load safe-path /
line to your configuration file "/home/acb10922qh/.gdbinit".
For more information about this security protection see the
"Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
info "(gdb)Auto-loading safe path"
Detaching after fork from child process 89850.
# compressing 1000 x 1000 Toeplitz matrix, with relative tolerance 0.0001
# ProcessorGrid2D: [1 x 1]
# from_ScaLAPACK done!
rank = 0 ABORTING!!!!!
Program received signal SIGABRT, Aborted.
0x00002aaab5a96277 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.17-222.el7.x86_64 libibcm-41mlnx1-OFED.4.1.0.1.0.44100.x86_64 libibverbs-41mlnx1-OFED.4.4.1.0.0.44100.x86_64 libmlx4-41mlnx1-OFED.4.1.0.1.0.44100.x86_64 libmlx5-41mlnx1-OFED.4.4.0.1.7.44100.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.78-1.el7.x86_64 librdmacm-41mlnx1-OFED.4.2.0.1.3.44100.x86_64 librxe-41mlnx1-OFED.4.1.0.1.7.44100.x86_64 numactl-devel-2.0.9-7.el7.x86_64 ucx-1.4.0-1.44100.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) where
#0 0x00002aaab5a96277 in raise () from /lib64/libc.so.6
#1 0x00002aaab5a97968 in abort () from /lib64/libc.so.6
#2 0x0000000000409526 in abort_MPI(int*, int*, ...) ()
#3 0x00002aaaab7709fc in MPIR_Err_return_comm (comm_ptr=0x15ee0, fcname=0x15ee0 <Address 0x15ee0 out of bounds>, errcode=604577797) at ../../src/mpi/errhan/errutil.c:312
#4 0x00002aaaab3ca12a in PMPI_Bcast (buffer=0x15ee0, count=89824, datatype=6, root=-1, comm=-1431306624) at ../../src/mpi/coll/bcast/bcast.c:437
#5 0x000000000046d6e2 in void strumpack::MPIComm::broadcast_from<int>(std::vector<int, std::allocator<int> >&, int) const ()
#6 0x0000000000456869 in strumpack::BLR::BLRMatrixMPI<double>::factor(strumpack::DenseMatrix<bool> const&, strumpack::BLR::BLROptions<double> const&) ()
#7 0x0000000000456666 in strumpack::BLR::BLRMatrixMPI<double>::factor(strumpack::BLR::BLROptions<double> const&) ()
#8 0x0000000000409897 in main ()
```
Answers:
username_1: Thank you for reporting.
I think it should now be fixed in the latest commit on the master branch.
This BLR code is still under active development. We will change the API, documentation, examples, tests, etc. soon.
Status: Issue closed
username_0: Works. |
bwaldvogel/mongo-java-server | 558004653 | Title: Support for Collation (from 3.4) feature
Question:
username_0: There is a Collation feature as of version 3.4:
https://docs.mongodb.com/manual/reference/collation/
Some of our unit tests rely on indexes, and some of those indexes use collation.
Right now, when you try to create any index with a collation option using mongo-java-server (memory backend), you get this exception:
```
java.lang.IllegalArgumentException: Collation not supported by server version: ServerVersion{versionList=[3, 0, 0]}
at com.mongodb.operation.OperationHelper.validateCollation(OperationHelper.java:127)
at com.mongodb.operation.OperationHelper.validateCollation(OperationHelper.java:122)
at com.mongodb.operation.OperationHelper.validateIndexRequestCollations(OperationHelper.java:197)
at com.mongodb.operation.CreateIndexesOperation$1.call(CreateIndexesOperation.java:176)
at com.mongodb.operation.CreateIndexesOperation$1.call(CreateIndexesOperation.java:172)
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:530)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:492)
at com.mongodb.operation.CreateIndexesOperation.execute(CreateIndexesOperation.java:172)
at com.mongodb.operation.CreateIndexesOperation.execute(CreateIndexesOperation.java:72)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:206)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:175)
at com.mongodb.DBCollection.createIndex(DBCollection.java:1698)
```
Answers:
username_1: Use `MongoServer server = new MongoServer(new MemoryBackend().version(ServerVersion.MONGO_3_6));` to emulate a server version that supports collation and avoid the error above.
username_2: @username_1: Thanks for the quick clarification!
@username_0: Does it work for you?
We need to at least improve the README, but I'm actually thinking of making it the new default.
username_0: @username_2 it works! I guess we can close this ticket.
Status: Issue closed
|
nsurampu/Book-Image-Reader | 607612972 | Title: Cache update during Localhost Tunneling
Question:
username_0: There is an observed issue where the cache gets corrupted if the images are first loaded on localhost and another device then connects to it through localhost tunnelling.
**Temporary Fix**: Tunnel localhost and connect the device that will be used for reading before uploading the images. |
ontohub/ontohub-frontend | 274508057 | Title: Check passwords against leaked passwords
Question:
username_0: When registering or changing the password, check if the password is insecure and has been leaked. Use a leaked password database such as [PasswordPing](https://www.passwordping.com/docs-passwords-api/).
If the chosen password is leaked, display a warning before allowing registration. Then the user may choose to keep the insecure password, but they are aware of it.
Answers:
username_1: Is PasswordPing free? Wondering because of the `Free Trial` button and the auth header the docs mention.
username_1: https://haveibeenpwned.com/API/v2#PwnedPasswords seems to be free
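For reference, here is a minimal sketch of the k-anonymity range query supported by the Pwned Passwords API (shown in Python for brevity; only the first five characters of the password's SHA-1 hash ever leave the client):
```python
import hashlib

import requests


def is_pwned(password):
    """Return True if the password appears in the Pwned Passwords corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # The API returns every leaked hash suffix sharing this 5-char prefix.
    resp = requests.get("https://api.pwnedpasswords.com/range/" + prefix)
    resp.raise_for_status()
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())
```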
username_0: Good point. I seem to have missed that. |
renehernandez/obsidian-readwise | 955128188 | Title: Adding action tag (header and concatenating)
Question:
username_0: Hello,
I didn't find a way to do that. It seems really important, as it delivers a lot of context to the highlights.
Thanks
Answers:
username_1: Wouldn't the use/rendering of action tags be on the Readwise side, prior to this plugin pulling the rendered capture from Readwise?
username_2: There are new endpoints for handling tags in the Readwise API, which I haven't explored yet. I'll look into it |
pangaea-data-publisher/fuji | 1177909971 | Title: Document expectations towards a metadata endpoint. Here: RDF/SPARQL
Question:
username_0: Hi,
https://www.f-uji.net/?action=test is able to query endpoints providing OAI-PMH, SPARQL, CSW.
There should be some docs somewhere about what is expected from those endpoints, or which ones have been successfully tested. For SPARQL, they should mention the actual endpoints/triplestores (I'd expect most/all would work?).
But more importantly, they should specify what RDF triples FUJI expects there. It seems the expected data and attributes appear in queries such as:
https://github.com/pangaea-data-publisher/fuji/blob/3e2f6d20e5620fae813de4d450403179f78dc218/fuji_server/helper/metadata_mapper.py#L212
or specific queries:
https://github.com/pangaea-data-publisher/fuji/blob/3e2f6d20e5620fae813de4d450403179f78dc218/fuji_server/helper/metadata_mapper.py#L142
executed in this class:
https://github.com/pangaea-data-publisher/fuji/blob/49c99ef4bdf1a0578dc61bc9e7eb555693887a05/fuji_server/helper/metadata_provider_sparql.py#L30
and available to FUJI through this class:
https://github.com/pangaea-data-publisher/fuji/blob/3fbd229b65e8c85d94673c553ea20edef59e6cd4/fuji_server/helper/metadata_collector_rdf.py#L44
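To make the expectation concrete, here is what a probe against such a SPARQL endpoint might look like (an illustrative sketch using SPARQLWrapper; the endpoint URL and the PID are placeholders, not values F-UJI ships with):
```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/sparql")  # placeholder endpoint
# Ask what the endpoint can say about a given PID (placeholder DOI URI).
sparql.setQuery("""
    SELECT ?p ?o
    WHERE { <https://doi.org/10.1234/example> ?p ?o }
    LIMIT 25
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```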
So in the absence of the requested documentation, this issue can serve in its place :-)
Yours, Steffen
Answers:
username_1: Dear Steffen,
I know this is confusing, but these endpoints are not used to retrieve metadata itself but to check which metadata formats and vocabularies are supported by a given data provider. Therefore they are only used in e.g. FsF-I1-02M, FsF-R1.3-01M and FsF-I1-01M.
The reason is that, for example, it is impossible to retrieve metadata for a given DOI using OAI-PMH, and at least the few SPARQL endpoints I have seen and tried for testing astonishingly didn't contain PIDs. Maybe this can be improved in the future.
Robert |
xu-kj/sc1bot | 852017171 | Title: create a test map
Question:
username_0: Create a test map that divides the map into 2 sections separated by impassable terrain. This allows the bot to build buildings and units without having to worry about being attacked by the AI.
Answers:
username_1: Map attached. Player 1 has access to resources while player 2 does not.
[emptyTestLand.zip](https://github.com/username_0/sc1bot/files/6275892/emptyTestLand.zip)
Status: Issue closed
username_1: StarCraft 1's clunky map editor doesn't let you fix the starting positions; they can only be assigned randomly. I placed resources on both sides, roughly enough to sustain a build-up. |
ColorlibHQ/AdminLTE | 496828891 | Title: admin lte theme is not working in mvc 5 asp.net c#
Question:
username_0:
```
Severity Code Description Project File Line Suppression State
Error Build:Module '"C:/Users/Rajesh/source/repos/WebApplication4/Content/AdminLTE-3.0.0-rc.1/plugins/filterizr/ActiveFilter"' has no exported member 'Filter'. WebApplication4 C:\Users\Rajesh\source\repos\WebApplication4\Content\AdminLTE-3.0.0-rc.1\plugins\filterizr\Filterizr.d.ts 4
Error Build:Module '"C:/Users/Rajesh/source/repos/WebApplication4/Content/AdminLTE-3.0.0-rc.1/plugins/filterizr/FilterizrOptions/defaultOptions"' has no exported member 'RawOptions'. WebApplication4 C:\Users\Rajesh\source\repos\WebApplication4\Content\AdminLTE-3.0.0-rc.1\plugins\filterizr\Filterizr.d.ts 5
Error Build:Module '"C:/Users/Rajesh/source/repos/WebApplication4/Content/AdminLTE-3.0.0-rc.1/plugins/filterizr/FilterizrOptions/defaultOptions"' has no exported member 'RawOptionsCallbacks'. WebApplication4 C:\Users\Rajesh\source\repos\WebApplication4\Content\AdminLTE-3.0.0-rc.1\plugins\filterizr\FilterContainer.d.ts 1
Error Build:Cannot find name 'Parameters'. WebApplication4 C:\Users\Rajesh\source\repos\WebApplication4\Content\AdminLTE-3.0.0-rc.1\plugins\fullcalendar\main.d.ts 136
```
Answers:
username_1: Your error looks like you are using AdminLTE in TypeScript, and I guess you converted the js files to ts files. If I guessed right, you need to update them manually.
AdminLTE doesn't support TypeScript right now.
username_2: Sorry, but did converting the ts files to js work for you? I'm having the same problem: I literally just downloaded the template, implemented it in an ASP.NET Core layout, and got the same error.
Status: Issue closed
|
johnwdubois/rezonator | 830203565 | Title: Find in GridView
Question:
username_0: **What to do**
1. Make the "Find" function (a.k.a. "Find next") work in GridView. The default behavior should be:
- The Search text box for "Find" is pre-populated with the value of the cell the user is currently focused on.
- Pressing "F" for "Find" moves the cursor to the next instance of this value.
- by default, "Find" searches by column, searching only within the column where the cursor currently is.
2. If the user is focused on a Chain:
- in the "Find" dialogue box, include a checkbox with the following message: [ X ] "Search in the current Chain only"
- if the box is checked, the search looks only in the same Chain (or Stack)
- by default, the box is checked
3. If the user is not focused on any Chain or Stack, don't show the checkbox (because it's irrelevant).
4. If the user is focused on both a Track Chain and a Rez Chain, search in the Track Chain. Show the message: [ X ] "Search in the current Track Chain only"
5. If the user is focused on both a Chain and a Stack, search in the Chain.
If the user is focused on a Stack only, search in the Stack.
6. Provide a checkbox as follows: [ X ] "Find previous (reverse)". By default the box is not checked. If the user checks it, search up rather than down.
**Related to**
#632 #295 |
tstack/lnav | 1027996698 | Title: docker-compose logs broken
Question:
username_0: If I run `docker-compose logs --tail 100 | lnav` I get a blank screen. (Update: if I press `t` I can see the file content in text mode.)
If I add `-t` to `lnav` then I do see messages.
This is consistent with lnav v0.10.0 release (and it worked in v0.9)
I see that with several versions of `compose` and on several platforms. An example docker-compose is in this folder -> https://github.com/sqlpad/sqlpad/tree/master/docker-examples/mariadb
`docker-compose logs --tail 100 | lnav -C` finished without any output.
`docker-compose logs --tail 100 > /tmp/1 && cat /tmp/1 | lnav` works OK, with the mode shown on the top right of the window `plain text:: TEXT`
Adding `--no-log-prefix` to `docker-compose` "solves" this (at the cost of hiding the name of the container that generated each log message). Then the mode shown on the top right of the window is: `generic_log:: LOG`
Answers:
username_0: So I could not find out everything in the documentation, but I'm now under the impression that the `module` field is used to do just that, i.e., use a different parser for different parts of the log file?
I'm not quite there yet. So far I have created a rudimentary log format for mariadb and copied the pino_json one from a different ticket, and independently they seem to work OK:
`docker-compose logs --no-log-prefix --no-color --tail 20 mariadb | lnav`

`docker-compose logs --no-log-prefix --no-color --tail 20 sqlpad | lnav`

And I have a "docker compose" format file which can now recognize the different "modules", but I can't get it to parse the inner message:
`docker-compose logs --no-color --timestamp --tail 20 | lnav`

[log.log](https://github.com/tstack/lnav/files/7357593/log.log)
```json
{
"$schema": "https://lnav.org/schemas/format-v1.schema.json",
"docker_compose": {
"title": "Docker-compose logs",
"description": "docker-compose logs output. use 'docker-compose logs --no-color --timestamp'",
"url": "https://docs.docker.com/compose/reference/logs/",
"regex": {
"anyline": {
"pattern": "^(?<container>[a-zA-Z_]\\S+)\\s*[|]\\s*(?<timestamp>[-.0-9:TZ]+)\\s+(?<body>.*)$"
}
},
"body-field": "body",
"module-field": "container",
"value": {
"body": {
"identifier": false,
"description": "body"
},
"container": {
"kind": "string",
"identifier": true,
"description": "The name of the container that generated the message"
}
},
"sample": [
{
"line": "mariadb_1 | 2021-10-16T07:05:20.019142100Z 2021-10-16 7:05:20 0 [Warning] 'proxies_priv' entry '@% root@8be6efc8284c' ignored in --skip-name-resolve mode."
},
{
"line": "sqlpad_1 | 2021-10-16T07:05:12.381842300Z {\"level\":30,\"time\":\"2021-10-16T07:05:12.381Z\",\"pid\":1,\"hostname\":\"sqlpad\",\"name\":\"sqlpad-app\",\"msg\":\"Loading cache\"}"
}
]
}
}
{
"$schema": "https://lnav.org/schemas/format-v1.schema.json",
"mariadb_log": {
"title": "mariadb log",
[Truncated]
"time": { "kind": "integer" },
"msg": { "kind": "string" },
"v": { "kind": "integer" },
"responseTime": { "kind": "integer" }
},
"timestamp-field": "time",
"timestamp-divisor": 1000,
"convert-to-local-time" : true,
"body-field": "msg",
"opid-field": "pid",
"line-format" : [
{ "field" : "__timestamp__" },
" ",
{ "field" : "msg", "default-value": "" }
]
}
}
```
help appreciated,
username_0: I can confirm that `docker-compose logs --timestamp` is detected correctly in 0.10 too. Thanks, closing.
Status: Issue closed
|
Azure/azure-sdk-for-python | 843144748 | Title: Event Grid readme and samples issues
Question:
username_0: 1.
Section [link](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventgrid/azure-eventgrid#send-events-as-dictionaries):

Suggestion:
Add the missing imports:
```
import uuid
import datetime as dt
from msrest.serialization import UTC
```
2.
Section [link](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventgrid/azure-eventgrid/samples/sync_samples/sample_publish_events_to_a_topic_using_sas_credential.py#L21):

Reason:
azure.core.exceptions.HttpResponseError: (BadRequest) Invalid key in aeg-sas-token. Valid format :r=UrlEncodedResource&e=UrlEncodedUtcExpiration&s=UrlEncodedSignature. Input sas string: lrPUpeSQUJEMj3paatULJ1ejvtgMvU4PIy3o+sypQ0w=.
Suggestion:
`EVENTGRID_SAS` is not an access key, but a shared access signature.
However, we are unable to find a SAS (shared access signature) for an Event Grid topic in the Azure Portal; we can only find the `access key`.
Maybe we should consider this method to generate the SAS: `sas = generate_sas(endpoint, topic_key, expiration_date_utc)`.
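For illustration, a hedged sketch of generating and then using a SAS with the Python SDK (the endpoint, key, and expiration values are placeholders):
```python
from datetime import datetime, timedelta

from azure.core.credentials import AzureSasCredential
from azure.eventgrid import EventGridPublisherClient, generate_sas

endpoint = "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events"  # placeholder
topic_key = "<topic access key>"  # placeholder

# Build a short-lived SAS from the topic's access key.
expiration_date_utc = datetime.utcnow() + timedelta(hours=1)
sas = generate_sas(endpoint, topic_key, expiration_date_utc)

# Authenticate the publisher client with the SAS instead of the raw key.
client = EventGridPublisherClient(endpoint, AzureSasCredential(sas))
```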
@jongio for notification.
Answers:
username_1: Thanks for the issue!
For section 2, it is intentional that we did not add a `generate_sas` call to that sample.
The reason is that it is highly unlikely that someone generates a SAS and uses it for themselves. So, we have two different examples: one for using a SAS and one for generating a SAS (this is cross-language feedback that was received at the arch board).
point 1 sounds fair to me - thank you!
username_1: I've updated the description of `EVENTGRID_SAS` to make it more clear and fixed the first one. Thanks again
https://github.com/Azure/azure-sdk-for-python/pull/17641/files
Status: Issue closed
|
electron/electron | 491492503 | Title: bug with min-height: 100vh vs height: 100vh
Question:
username_0:
### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:**
* 6.0.8
* **Operating System:**
* Windows 10 (1809)
### Expected Behavior
Setting a `min-height` value greater than an element's computed height should have the same outcome as setting that same value as a plain `height` attribute.
### Actual Behavior
If `min-height: 100vh` is set on a body element that has no margins and that contains a single element with a non-zero margin-bottom, the height of the body element will be `100vh` plus the margin of the inner element. Setting the `height` attribute to `100vh` works as expected. Setting the height to **any** value earlier in the cascade also makes `min-height: 100vh` behave as expected.
### To Reproduce
Open this in Electron Fiddle, then go to dev tools and toggle the `height` attribute of `body` on and off.
https://gist.github.com/username_0/a23fe8b15e290999386c5448c8b219f2
### Additional Information
In v7.0.0-beta.4 this code behaves as expected.
Answers:
username_1: should this be closed, since we're on version 8 at the time of writing?
Status: Issue closed
username_2: The Electron version reported on this issue is no longer supported. See our [supported versions documentation](https://www.electronjs.org/docs/tutorial/support#supported-versions).
If this is still reproducible on a supported version, please open a new issue with any other new information that a maintainer should know.
Thank you for taking the time to report this issue and helping to make Electron better! Your help is appreciated. |
Sitefinity/feather-widgets | 119521134 | Title: incorrect login forgot password action link
Question:
username_0: This site is a multilingual website using the stock loginform.cshtml view.. This is the link its generating:
http://beta.mcmasteroptimalaging.org/tp:/beta.mcmasteroptimalaging.org/login/ForgotPassword/
 |
Azure/azure-cli | 536123455 | Title: test bot
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
Status: Issue closed |
uthackathon/iWanna | 156894758 | Title: Freeze on inAppBrowser launch (iOS)
Question:
username_0: 2016-05-26 12:09:50.472 myApp[237:9365] {"provider":"password","uid":"2db42d9a-e83c-4410-8be0-316b9e853939","token":"<KEY>","password":{"email":***********<EMAIL>","isTemporaryPassword":false,"profileImageURL":"https://secure.gravatar.com/avatar/a555fb27ec1f8bb8958558101e1448d?d=retro"},"auth":{"provider":"password","uid":"2db<PASSWORD>-e83c-4410-8be0-<PASSWORD>","token":{"email_verified":false,"email":"**********<EMAIL>","exp":1464318541,"iat":1464232141,"sub":"2db42d9a-e83c-4410-8be0-316b9e853939","auth_time":1464232141,"firebase":{"identities":{"email":["************<EMAIL>"]}}}},"expires":1464318541}
2016-05-26 12:10:02.243 myApp[237:9365] Resetting plugins due to page load.
2016-05-26 12:10:10.131 myApp[237:9365] Finished load of: http://ut-hackathon.strikingly.com/
Answers:
username_0: I'm posting the behavior in the iOS native app for reference. (Parts of the log have been edited.)
username_1: After switching the inAppBrowser package to the official, latest one, it's working so far.
Status: Issue closed
|
fedora-modularity/depchase | 249003053 | Title: depchase tracebacks when a non-existent binary rpm name is given on the commandline
Question:
username_0:
```
$ ./depchase -a x86_64 -c repos.cfg resolve vim
Traceback (most recent call last):
  File "./depchase", line 495, in <module>
    cli(obj={})
  File "/usr/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "./depchase", line 469, in resolve
    binary, source = solve(solver, pkgnames, selfhost=selfhost)
  File "./depchase", line 360, in solve
    assert not sel.isempty(), "Could not find package for {}".format(n)
AssertionError: Could not find package for vim
```
I know that vim isn't a binary rpm package, but this should be a little more user friendly.
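For illustration, a friendlier failure might look something like this (a hypothetical sketch using the click API already visible in the traceback; `sel` and `n` are the selection object and package name from `solve()` above):
```python
import click


def require_nonempty(sel, name):
    """Fail with a clean CLI error instead of an assertion traceback."""
    if sel.isempty():
        raise click.ClickException(
            "could not find a package providing '{}'".format(name))
```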
Answers:
username_1: so isn't this enough? ;) |
dominno/django-moderation | 2338354 | Title: moderation crashes on models with DecimalField
Question:
username_0: I'm trying to moderate the creation of a django-shop Product model. This model has a field 'unit_price', which is a CurrencyField, which is just a fancy DecimalField (see https://github.com/divio/django-shop/blob/master/shop/util/fields.py#L6)
When I create a new product, I get the following error:
Traceback:
```
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response
  response = middleware_method(request, callback, callback_args, callback_kwargs)
File "/home/martin/Envs/marketto/src/ajaxmiddleware/ajaxmiddleware/middleware.py" in process_view
  return new_callback(request, *callback_args, **callback_kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/views/generic/base.py" in view
  return self.dispatch(request, *args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/utils/decorators.py" in _wrapper
  return bound_func(*args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
  return view_func(request, *args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/utils/decorators.py" in bound_func
  return func(self, *args2, **kwargs2)
File "/home/martin/Projects/aquasys/marketto/vivitz/vehicle_shop/views/crud.py" in dispatch
  return super(VehicleCreateView, self).dispatch(*args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/views/generic/base.py" in dispatch
  return handler(request, *args, **kwargs)
File "/home/martin/Envs/marketto/src/ajaxmiddleware/ajaxmiddleware/views.py" in post
  return super(HybridView, self).post(self, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/views/generic/edit.py" in post
  return super(BaseCreateView, self).post(request, *args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/views/generic/edit.py" in post
  return self.form_valid(form)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/views/generic/edit.py" in form_valid
  self.object = form.save()
File "/home/martin/Projects/aquasys/marketto/vivitz/vehicle_shop/forms.py" in save
  return super(VehicleForm, self).save()
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/shop_simplecategories/admin.py" in save
  product.save()
File "/home/martin/Projects/aquasys/marketto/vivitz/vehicle_shop/models.py" in save
  super(Vehicle, self).save(*args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/polymorphic/polymorphic_model.py" in save
  return super(PolymorphicModel, self).save(*args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/db/models/base.py" in save
  self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/db/models/base.py" in save_base
  created=(not record_exists), raw=raw, using=using)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/dispatch/dispatcher.py" in send
  response = receiver(signal=self, sender=sender, **named)
File "/home/martin/Envs/marketto/src/moderation/src/moderation/register.py" in post_save_handler
  moderator.inform_moderator(instance)
File "/home/martin/Envs/marketto/src/moderation/src/moderation/moderator.py" in inform_moderator
  recipient_list=MODERATORS)
File "/home/martin/Envs/marketto/src/moderation/src/moderation/moderator.py" in send
  'moderated_object': content_object.moderated_object,
File "/home/martin/Envs/marketto/src/moderation/src/moderation/register.py" in get_moderated_object
  '_relation_object').get()
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/db/models/manager.py" in get
  return self.get_query_set().get(*args, **kwargs)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/db/models/query.py" in get
  num = len(clone)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/db/models/query.py" in __len__
  self._result_cache = list(self.iterator())
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/db/models/query.py" in iterator
  obj = model(*row[index_start:aggregate_start])
File "/home/martin/Envs/marketto/src/moderation/src/moderation/models.py" in __init__
[Truncated]
  signals.post_init.send(sender=self.__class__, instance=self)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/dispatch/dispatcher.py" in send
  response = receiver(signal=self, sender=sender, **named)
File "/home/martin/Envs/marketto/src/moderation/src/moderation/fields.py" in post_init
  self._deserialize(value))
File "/home/martin/Envs/marketto/src/moderation/src/moderation/fields.py" in _deserialize
  for parent in obj_generator:
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/core/serializers/json.py" in Deserializer
  for obj in PythonDeserializer(simplejson.load(stream), **options):
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/core/serializers/python.py" in Deserializer
  data[field.name] = field.to_python(field_value)
File "/home/martin/Envs/marketto/lib/python2.6/site-packages/django/db/models/fields/__init__.py" in to_python
  return decimal.Decimal(value)
File "/usr/lib/python2.6/decimal.py" in __new__
  "First convert the float to a string")

Exception Type: TypeError at /shop/products/create/
Exception Value: Cannot convert float to Decimal. First convert the float to a string
```
I really have no clue why there is a Float involved at all... Are there known issues with DecimalFields?
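For context, here is a hedged illustration of the failure mode on Python 2.6 (the interpreter in the traceback): JSON has no decimal type, so a serialized DecimalField value comes back from the deserializer as a float, and the `Decimal` constructor only accepts floats as of Python 2.7.
```python
from decimal import Decimal

price = 9.99  # what the JSON deserializer hands back for a serialized decimal
try:
    Decimal(price)  # raises TypeError on Python 2.6
except TypeError:
    price = Decimal(str(price))  # the documented workaround: go via str
```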
Status: Issue closed
Answers:
username_1: Closed because of inactivity.
Please feel free to open a new issue. |
Nidheesh-Panchal/CineClick | 381931961 | Title: Email security issue
Question:
username_0: Hey Nidheesh,
I just walked through your project and noticed that, for the email service, you are asking the user to directly put in his email and password, which is bad from a security standpoint. Please use environment variables instead of putting your email and password directly in the code base.
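For example, a minimal sketch of the idea (the environment-variable names are hypothetical, and it uses Python's standard smtplib rather than CineClick's actual mail code):
```python
import os
import smtplib

# Read the credentials from the environment instead of the code base.
user = os.environ["EMAIL_USER"]          # hypothetical variable name
password = os.environ["EMAIL_PASSWORD"]  # hypothetical variable name

with smtplib.SMTP("smtp.gmail.com", 587) as server:
    server.starttls()
    server.login(user, password)
```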
Correct me if I am wrong!
Thanks
Answers:
username_1: Hey Pankaj. You are right and we knew about this but because of the limited amount of time provided for the project, we had to leave it at this point. I don't see any reason now to fix it. But thanks for going through the code and pointing it out. |
suttacentral/suttacentral | 753368752 | Title: Search autocomplete
Question:
username_0: Native HTMl provides autocomplete, but it doesn't work for us and it currently is not working or in spec for mwc either.
https://github.com/material-components/material-components-web-components/issues/1976
https://github.com/material-components/material-components-web/issues/45
I think it's a useful feature and we should support it.
Answers:
username_0: Using native `<input type ="search">` this works now.
Status: Issue closed
|
arunthampi/raelsconf.com | 313426441 | Title: Re-architect site in JavaScript
Question:
username_0: Have we considered a more modern flux based architecture for this site? :trollface:
Answers:
username_1: PRs welcome!
username_2: the real Raels community would have tests
username_1: we're firm believers in the #nocode movement https://github.com/kelseyhightower/nocode |
msmexplorer/msmexplorer | 224420884 | Title: Interpretation of plot_pop_resids
Question:
username_0: I'm building an MSM of the internal dynamics of a ligand, which I think should be well sampled within microseconds of simulation. I can see 'clean' jumps in my tIC time evolution, but the pop_resids plot looks very different from the one in your documentation.

What kind of information can I extract out of [`msme.plot_pop_resids`](http://msmbuilder.org/msmexplorer/development/examples/plot_pop_resids.html)? I've never seen this plot in a publication.
Answers:
username_1: - It's a proxy for how much the MSM is perturbing the raw MD population of a microstate. In the large-sampling limit, they should be close/almost exact. If they are different, that can still be okay, and I would trust the MSM population more, provided you do a bit of rigorous bootstrapping/central-limit tricks for the Poisson process to ascertain the modeling error.
- In this case, even if you have a lot of sampling if your ligand has few A<->B transitions, then the MSM populations might still get skewed because the model is parameterized by the transitions and not the raw counts.
- To get an idea whether or not there is enough sampling to study a process, try to compute the MFPT or exchange timescales between the states. The aggregate sampling should be >> the exchange timescales.
username_0: Hi @username_1
thanks for your answer. So if I understood you correctly, you would expect to see a completely decorrelated cloud of points (in the case of large sampling where the MSM populations and the MD populations match)? What exactly are the residuals? From my plot above, it looks like the Raw Populations axis is fairly homogeneously distributed. But with a strong correlation on the Residuals (?)
username_1: Exactly, for any given microstate, its population would ~ msm population, leading to a gaussian cloud around 0
I think the residual is np.log10(MSM) - np.log10(raw counts), so a difference of 1 means something like 10x more. However, it's important to note that for lowly sampled populations this might not be significant: 0.003 vs 0.03 is 10x but is hardly worth worrying about. Similarly, 0.003 vs 0.0003 is the same in the opposite direction, but again nothing to worry about.
@username_2 do i have the residual formula right?
username_1: hmm, maybe we should incorporate @jadeshi's code for doing bootstrapping here somehow.
username_2: Yup!
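In code, the residual described above is simply (an illustrative sketch; `msm_pop` and `raw_pop` are hypothetical per-microstate population arrays):
```python
import numpy as np

msm_pop = np.array([0.50, 0.30, 0.20])  # MSM stationary populations
raw_pop = np.array([0.45, 0.35, 0.20])  # raw MD counts, normalized

resid = np.log10(msm_pop) - np.log10(raw_pop)  # +1 means the MSM says ~10x more
```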
Status: Issue closed
|
spacemeshos/go-spacemesh | 502009003 | Title: Hare forced termination
Question:
username_0: If some instance of the Hare protocol never terminates, we would like to force it to finish.
The Hare result is only practical as long as we are (at most) hdist away from the running instance. So if there is a Hare instance running in layer x, and we are in layer x+hdist, we can terminate the protocol without any output.
In that case, we need to determine, while producing blocks in layer x+hdist, how to vote on blocks from layer x.
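A minimal sketch of the forced-termination rule described above (illustrative Python, not go-spacemesh's actual API):
```python
def should_force_terminate(instance_layer, current_layer, hdist):
    """Terminate a Hare instance without output once its result can no
    longer be used, i.e., once we are hdist or more layers past it."""
    return current_layer >= instance_layer + hdist
```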
Status: Issue closed |
pgbackrest/pgbackrest | 216179707 | Title: Differential/Incremental backup errors
Question:
username_0: During my testing of pgBackRest (v1.17) on a demo PostgreSQL cluster I am receiving the errors below. The errors occur when performing an Incremental or Differential backup after a Full backup has been completed and I have inserted, updated or deleted data for a table. The error stack is the same whether it's an Incremental or Differential backup.
I can successfully perform a Full backup at all times and successful Differential and Incremental backups after the Full backup if no DML has been completed for a table.
```
2017-03-22 13:55:28.228 P01 INFO: backup file /app/data/xxxxxxxxx/global/pg_control (8KB, 99%) checksum dd7ae49879aa58480b7e019c3d885f9251d57bc7
ERROR [199]: remote process terminated on local-1 host: unknown error: Undefined subroutine &pgBackRest::BackupFile::pageChecksumBufferTest called at /usr/share/perl5/pgBackRest/BackupFile.pm line 116.
at /usr/bin/pgbackrest line 15
main::__ANON__('Undefined subroutine &pgBackRest::BackupFile::pageChecksumBuf...') called at /usr/share/perl5/pgBackRest/BackupFile.pm line 116
pgBackRest::BackupFile::backupChecksumPage('HASH(0x3223410)', 'SCALAR(0x2a01488)', 8192, 0, 'HASH(0x321f280)') called at /usr/share/perl5/pgBackRest/Protocol/Common.pm line 504
pgBackRest::Protocol::Common::binaryXfer('pgBackRest::Protocol::Common=HASH(0x2c38608)', 'GLOB(0x321eed8)', 'GLOB(0x2c37ff0)', 'out', 0, 1, 0, 'CODE(0x31a4040)', 'HASH(0x3223410)', ...) called at /usr/share/perl5/pgBackRest/File.pm line 1366
pgBackRest::File::copy(undef, 'db:absolute', '/app/data/xxxxxxxxx/base/13241/16415', 'backup:tmp', 'pg_data/base/13241/16415.gz', 0, 1, 1, 1490204938, ...) called at /usr/share/perl5/pgBackRest/BackupFile.pm line 229
pgBackRest::BackupFile::backupFile('pgBackRest::File=HASH(0x2c38428)', '/app/data/xxxxxxxxx/base/13241/16415', 'pg_data/base/13241/16415', 8192, undef, 1, 1, 1490204938, 1, ...) called at /usr/share/perl5/pgBackRest/Protocol/LocalMinion.pm line 76
pgBackRest::Protocol::LocalMinion::__ANON__('ARRAY(0x32234a0)') called at /usr/share/perl5/pgBackRest/Protocol/CommonMinion.pm line 174
eval {...} called at /usr/share/perl5/pgBackRest/Protocol/CommonMinion.pm line 198
eval {...} called at /usr/share/perl5/pgBackRest/Protocol/CommonMinion.pm line 206
pgBackRest::Protocol::CommonMinion::process('pgBackRest::Protocol::LocalMinion=HASH(0x2c38068)') called at /usr/bin/pgbackrest line 118
eval {...} called at /usr/bin/pgbackrest line 292
2017-03-22 13:55:28.239 P00 DEBUG: Common::Exit::exitSafe(): iExitCode = [undef], oException = [object], strSignal = [undef]
2017-03-22 13:55:28.239 P00 DEBUG: Protocol::Protocol::protocolDestroy(): bComplete = false, iRemoteIdx = [undef], strRemoteType = [undef]
2017-03-22 13:55:28.240 P00 DEBUG: Protocol::Protocol::protocolDestroy=>: iExitStatus = 0
2017-03-22 13:55:28.240 P00 DEBUG: Common::Lock::lockRelease(): bFailOnNoLock = false
2017-03-22 13:55:28.241 P00 INFO: backup command end: aborted with exception [199]
2017-03-22 13:55:28.241 P00 DEBUG: Common::Exit::exitSafe=>: iExitCode = 199
```
Answers:
username_1: Do you have page checksums enabled on the target cluster? You can use `show data_checksums` in psql to check.
Was this a manual install or did you install from packages? Also, what's your OS/version?
username_0: This was a manual installation using the source.
The OS/Version are:
```
postgres> lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-ia32:base-4.0-noarch:core-4.0-amd64:core-4.0-ia32:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-ia32:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-ia32:printing-4.0-noarch
Distributor ID: RedHatEnterpriseServer
Description: Red Hat Enterprise Linux Server release 6.8 (Santiago)
Release: 6.8
Codename: Santiago
```
```
postgres>uname -a
Linux 2.6.32-642.11.1.el6.x86_64 #1 SMP Wed Oct 26 10:25:23 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
```
I did not have data_checksums enabled on the first demo cluster, however, I recreated the cluster with data_checksums enabled. I performed the same test and experienced the same error stack as previously posted.
```
postgres=# show data_checksums;
data_checksums
----------------
on
(1 row)
Time: 1.174 ms
postgres=#
```
username_1: One thing you can try is to specify `--no-checksum-page` on the command line or `checksum-page=n` in `pgbackrest.conf`. This will force page checksums off. It seems that somehow page checksums are being enabled even though the required C library is not present. Currently this library is only provided via packages.
It would also be helpful if you could provide the `backup.manifest` file from a successful full backup. You can find this in `$REPO_PATH/backup/$STANZA/latest`.
Thanks!
username_0: I have attached a backup.manifest from $REPO_PATH/backup/$STANZA/latest for a successful Full backup.
[backup.manifest.zip](https://github.com/pgbackrest/pgbackrest/files/864979/backup.manifest.zip)
I ran a Differential and Incremental backup using ```--no-checksum-page``` on the command line and a Differential and Incremental backup with ```checksum-page=n ``` in ```pgbackrest.conf``` They all failed with the error stack from my first comment.
I also see the warning below during the Differential or Incremental backup indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```. This warning is only present during Differential and Incremental backups. This warning is present during Differential and Incremental backups even when using ```--no-checksum-page``` on the command line or ```checksum-page=n ``` in ```pgbackrest.conf```
```
I2017-03-23 09:59:45.557 P00 INFO: backup start archive = 000000010000000000000 01A, lsn = 0/1A000028
WARN: diff backup cannot alter 'checksum-page' option to 'false', reset to 'true' from 20170323-090619F
2017-03-23 09:59:46.593 P01 INFO: backup file /app/data/xxxxxxxxx/gprof/21219/gmon.out (2MB, 25%) checksum e1635116a981e03b36e9ca288b9e9887c53ba7be
```
username_1: The warning above does not reference the same backup.manifest you sent (`20170323-090619F` in warning above, `20170323-100655F` in manifest).
Can you check the `20170323-100655F` manifest and see if `option-checksum-page=true`.
To use the `--no-checksum-page` option successfully, I think you will need to start using it from the full backup. It looks like you may have done that in the later backup?
username_0: The warning was just one of a few I received while executing some tests this morning with your recommendations. I did run the backups Full, Differential and Incremental consistently with either checksum enabled or disabled from start to finish.
For clarity and completeness I have performed 6 tests which are outlined below. I have also included the backup.manifest for the Full Backup from each of the tests.
I did notice in the backup.manifest for all tests, the [backup:option] section contained the same boolean value for each option. _**option-checksum-page**_ was always _**false.**_
Tests 1-3 are with checksum enabled and Tests 4-6 are with checksum disabled.
Overall, whether I disabled or enabled checksum, the results were the same. I could not perform a successful Differential or Incremental backup after inserting data into a table after the successful Full backup and successful Differential backup or Incremental backup.
Full Backups as in previous tests were all successful.
**Test 1 - BACKUP checksum - Full, Differential, Incremental, Insert Data, Differential, Incremental**
Successful Full backup with checksum enabled. backup.manifest from $REPO_PATH/backup/$STANZA/latest for this successful Full backup
[backup.manifest.Test1.FullBackup.zip](https://github.com/pgbackrest/pgbackrest/files/866226/backup.manifest.Test1.FullBackup.zip)
Successful Differential backup with checksum enabled. Warning message indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```
```
2017-03-23 15:32:43.090 P00 INFO: backup start archive = 000000010000000000000022, lsn = 0/22000028
WARN: diff backup cannot alter 'checksum-page' option to 'false', reset to 'true' from 20170323-152816F
2017-03-23 15:32:44.619 P01 INFO: backup file /app/data/ppgolot30/gprof/29700/gmon.out (2MB, 50%) checksum 795aa28afe8bcaf4d2175ff1739577530299fd81
```
Successful Incremental backup with checksum enabled. Warning message indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```
```
2017-03-23 15:36:02.129 P00 INFO: backup start archive = 000000010000000000000024, lsn = 0/24000028
WARN: incr backup cannot alter 'checksum-page' option to 'false', reset to 'true' from 20170323-152816F_20170323-153245D
2017-03-23 15:36:03.606 P01 INFO: backup file /app/data/ppgolot30/gprof/30372/gmon.out (2MB, 50%) checksum 00ce735cff961a1460f99b2d60e2ca93a959e4f6
```
**_Insert Data into table_**
Failed Differential backup with checksum enabled with the error stack from my first comment. Warning message indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```
```
2017-03-23 15:38:51.617 P00 INFO: backup start archive = 000000010000000000000026, lsn = 0/26000028
WARN: diff backup cannot alter 'checksum-page' option to 'false', reset to 'true' from 20170323-152816F
2017-03-23 15:38:52.598 P01 INFO: backup file /app/data/ppgolot30/gprof/30807/gmon.out (2MB, 20%) checksum f4ff686fe43435dafbced706279813491ea1663e
```
Failed Incremental backup with checksum enabled with the error stack from my first comment. Warning message indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```. Warning message indicating an aborted backup exists which does not match the new backup type.
```
2017-03-23 15:41:04.266 P00 INFO: backup start archive = 000000010000000000000028, lsn = 0/28000028
WARN: incr backup cannot alter 'checksum-page' option to 'false', reset to 'true' from 20170323-152816F_20170323-153604I
WARN: aborted backup exists, but cannot be resumed (new backup-type 'incr' does not match aborted backup-type 'diff') - will be dropped and recreated
```
**Test 2 - BACKUP checksum - Full, Differential, Insert Data, Differential**
Successful Full backup with checksum enabled. backup.manifest from $REPO_PATH/backup/$STANZA/latest for this successful Full backup
[backup.manifest.Test2.FullBackup.zip](https://github.com/pgbackrest/pgbackrest/files/866229/backup.manifest.Test2.FullBackup.zip)
Successful Differential backup with checksum enabled. Warning message indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```
[Truncated]
[backup.manifest.Test6.FullBackup.zip](https://github.com/pgbackrest/pgbackrest/files/866233/backup.manifest.Test6.FullBackup.zip)
Successful Incremental backup with checksum disabled. Warning message indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```
```
2017-03-23 17:01:39.703 P00 INFO: backup start archive = 000000010000000000000049, lsn = 0/49000028
WARN: incr backup cannot alter 'checksum-page' option to 'false', reset to 'true' from 20170323-165917F
2017-03-23 17:01:41.629 P01 INFO: backup file /app/data/ppgolot30/gprof/8564/gmon.out (2MB, 50%) checksum 0b68c94ddba1cdc38d0df66ccf788e65444b0d96
```
**_Insert Data into table_**
Failed Incremental backup with checksum disabled with the error stack from my first comment. Warning message indicating ```checksum-page``` cannot be altered to ```false``` so it is reset to ```true```.
```
2017-03-23 17:03:01.323 P00 INFO: backup start archive = 00000001000000000000004B, lsn = 0/4B000028
WARN: incr backup cannot alter 'checksum-page' option to 'false', reset to 'true' from 20170323-165917F_20170323-170142I
2017-03-23 17:03:02.629 P01 INFO: backup file /app/data/ppgolot30/gprof/8899/gmon.out (2MB, 33%) checksum b64d6109a7f17b3e4e03a10de0f52a38cb12f2e0
```
username_1: Thanks for the very detailed tests. I have not been able to determine the cause of the error, but I was able to eliminate a lot of possibilities.
Somehow, `option-checksum-page` is being reported as `true` even though it is clearly `false` in `backup.manifest`. I can't reproduce this and tests around this code all work as expected.
Is there any chance you have an older copy of pgBackRest that was not properly removed? The libraries migrated around a bit earlier on, so it might not be in the same location. See removal instructions here: http://www.pgbackrest.org/user-guide.html#installation
You might also try `perl -v` and make sure that some strange Perl was not installed on your system. You should have `5.10`.
You could also try installing from packages. I know yum.postgresql.org has RHEL 7 packages but I'm not sure about RHEL 6.
Of course, this may still be a bug, but currently I'm not seeing it. If it's as pervasive as your tests indicate this should be blowing up all over the place. I'm hoping that it's environmental, or caused by some old libraries lying around.
username_0: It looks like there may have been a previous version installed as some of the file timestamps were from 1 year ago.
I performed the steps outlined in online documentation http://www.pgbackrest.org/user-guide.html#installation to remove pgBackRest.
I then performed an installation as outlined in the same documentation.
The version of Perl installed is 5.10.1 as indicated from the output below.
```
postgres> perl -v
This is perl, v5.10.1 (*) built for x86_64-linux-thread-multi
Copyright 1987-2009, <NAME>
Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.
Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl". If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
```
I executed 2 tests, performing a Full Backup, Differential Backup, Insert Data and a Differential Backup. I performed 1 test with checksum enabled and 1 test with checksum disabled. As with all the prior tests, the Final Differential Backup failed.
I'll try to install using packages, however, this may not be possible due to restrictions of internet access to most machines. I'll perform an uninstall before installing via packages.
username_0: Looking again at the error below from the error stack, it seems to indicate that the subroutine pageChecksumBufferTest cannot be found.
It seems to exist in the extracted source within file pageChecksum.c in ~/pgbackrest-release-1.17/libc. I do not see where this directory or file is copied in the installation instructions. Could this be the reason it is not found?
```
ERROR [199]: remote process terminated on local-1 host: unknown error: Undefined subroutine &pgBackRest::BackupFile::pageChecksumBufferTest called at /usr/share/perl5/pgBackRest/BackupFile.pm line 116. at /usr/bin/pgbackrest line 15
```
username_1: Yes, there are no instructions for compiling the C library manually. It is only distributed with packages at the moment.
However, the C library only hit the packages recently and the code has been around since December. It should gracefully handle a missing C library and has up until now. I ran it for months without the C library present on many systems.
Building the C Library might fix your immediate problem, but might also mask others.
username_0: I installed pgBackRest from source on a different server which did not have a previous installation of pgBackRest.
I executed 2 tests, performing a Full Backup, Differential Backup, Insert Data and a Differential Backup. I performed 1 test with checksum enabled and 1 test with checksum disabled. As with all the prior tests on the previous server, the Final Differential Backup failed.
I'm trying to determine if it will be possible to install via package but have not yet received an answer from my colleagues.
It seems odd that the subroutine pageChecksumBufferTest is always being referenced, and that the error is not being handled as you previously mentioned, whether or not I'm using --no-checksum-page.
username_1: We are still unable to reproduce the error. The only thing we can think of at this point is to get debug output. Can you with run with `--log-level-file=debug` for the successful full, successful diiff, and failed diff and attach the resulting log file?
In a default install you'll find the log files at `/var/log/pgbackrest`. Probably best to clear out any existing logs before running the tests.
Thanks!
username_0: I have attached debug logs with tests for checksum enabled(**POC-backup-checksum.log**) and checksum disabled(**POC-backup-nochecksum**).
[POC-backup.zip](https://github.com/pgbackrest/pgbackrest/files/873202/POC-backup.zip)
username_0: I have attached logs with tests for checksum enabled and checksum disabled.
[POC-backup-debug.zip](https://github.com/pgbackrest/pgbackrest/files/873375/POC-backup-debug.zip)
The debug output was not placed in the logs located in /var/logs/pgbackrest, therefore, I redirected STDOUT and STDERR to logs named POC-backup-checksum-debug* and POC-backup-nochecksum-debug*
The logs created by pgBackRest have been named POC-backup-checksum.log and POC-backup-nochecksum.log
username_0: I uninstalled the installation via source and installed pgBackRest from pgbackrest-1.17-1.rhel6.noarch.rpm which was downloaded from https://yum.postgresql.org/9.5/redhat/rhel-6-x86_64/
I am still experiencing the error stack posted in my initial comment if I perform a Full Backup, Differential Backup, Insert Data and Differential Backup.
username_0: One final test was to install version 1.09 and perform a Full Backup, Differential Backup, Insert Data, Differential Backup.
All backups completed successfully and did not encounter the initial posted error after inserting data and performing a Differential Backup.
username_1: Sorry for the late reply. I have been through the logs but don't see anything unusual. There's not a lot of logging around the manifest primitives since they have always performed reliably.
I have an open item to document how to manually build the C library. You can see the steps here: https://github.com/pgbackrest/pgbackrest/blob/master/test/test.pl#L446
It boils down to:
```
cd libc
perl Makefile.PL INSTALLMAN1DIR=none INSTALLMAN3DIR=none
make
make test
make install
```
Of course, you'll need to have various build packages installed: `gcc make perl-ExtUtils-MakeMaker perl-Test-Simple`.
The reason I haven't written the docs yet is I don't think it's appropriate to install these tools on a prod server and moving the the binaries to another server depends a lot on the OS/Version. It's complex enough that I feel like working from packages is best, but the RHEL/CentOS packages do not provide the C Library yet. This is out of my control but I'm hoping it will happen soon.
username_0: Thank you for your reply. I performed the test to complete a Full Backup, Differential Backup, Insert Data and Differential Backup using source versions 1.09 through 1.17.
The test was successful for versions 1.09 through 1.11, however, the test failed for versions 1.12 through 1.17 with the error in my original post.
I'll continue to do more testing with version 1.11 in my environment and wait for the C libraries to be included with the RHEL/CentOS package.
I may do some additional testing with versions newer than 1.11 after building the C libraries if time is approved for my environment to test this configuration.
username_1: v1.11 is a perfectly good release to test with. However, I would hold off on using asynchronous archiving until you are able to use the improved implementation introduced in v1.13.
username_1: I haven't had any inspiration on this issue so I'm going to close it. I still believe it is environmental as this is the only field report of the issue and I have not been able to reproduce it.
I think the solution is to build the C library or use a package the provides it. In the not-to-distant future the C library will be required anyway.
Status: Issue closed
|
tensorflow/models | 574723288 | Title: ImportError: No module named absl
Question:
username_0:
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g., Pixel 4, Samsung Galaxy 10) if the issue happens on mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
**Please provide the entire URL of the model you are using?**
https://github.com/tensorflow/models/blob/master/official/recommendation/run.sh
**Describe the current behavior**

**Describe the expected behavior**
**Code to reproduce the issue**
**Other info / logs**
Answers:
username_1: Hi @username_0, do you have all the required packages?
`pip3 install --user -r official/requirements.txt`
To my knowledge, absl is required by tensorflow + related repos.
I would suggest trying: https://abseil.io/docs/python/quickstart
username_0: I did `pip3 install -r official/requirements.txt`.
Moreover, I installed absl externally using `pip3 install absl-py`.
Status: Issue closed
|