repo_name | issue_id | text
---|---|---|
SwissDataScienceCenter/renku | 625568331 | Title: Underscore in username causes the image build to fail
Question:
username_0: Underscores (`_`) seem to be allowed in email usernames; they are not, however, allowed in Docker image names. If a username has underscores, the image build will fail. We should handle this somehow, but it's not obvious how, given that the user is allowed to pick that username.
Answers:
username_1: I cannot reproduce this problem. Underscores in Docker image names seem to be accepted; try
```
docker pull registry.renkulab.io/test_unserscore/public_test:d24424b
```
for example.
username_0: Did you try changing your username?
username_0: This definitely failed in May for a student in the SIB course. It's possible something has changed in the meantime in the Docker registry and/or GitLab.
username_1: Closing this issue since the problem cannot be reproduced at the moment. Let's keep it in the back of our heads though, and reopen the issue if we see a similar problem.
Status: Issue closed
|
gabriel-vasile/mimetype | 512987709 | Title: Add support for Microsoft Access file format
Question:
username_0: 1) Specify the MIME type and extension for which to add support
application/x-msaccess, .accdb
2) Share an example file
can be found online
3) Optionally, add a reference to the specification of the file format.
[magic](https://github.com/file/file/blob/c97b6d3cf7c738f566f6eb313da08c7e18fbe98e/magic/Magdir/database#L458)
Answers:
username_1: Can I give this a try?
Status: Issue closed
|
renyuzhuo/GitHub-Top | 1024725725 | Title: Workers' lives matter
Question:
username_0: {"name":"WorkerLivesMatter","full_name":"WorkerLivesMatter/WorkerLivesMatter","owner":{"login":"WorkerLivesMatter","type":"User"},"description":"null","url":"https://api.github.com/repos/WorkerLivesMatter/WorkerLivesMatter"} |
googleapis/repo-automation-bots | 486082082 | Title: New Bot: Delete merged branches
Question:
username_0: This bot would look for branches used by PRs that have been merged to master, and delete them
Answers:
username_1: Can we use https://probot.github.io/apps/delete-merged-branch/?
username_0: It also looks like we can enable the "Delete head branches on merge" setting for our repositories
Status: Issue closed
username_1: @bcoe @username_0 is there a plan to enable this setting broadly then? |
process-analytics/bpmn-visualization-js | 685479866 | Title: [BUG] extra blank zone above the diagrams in the demo page
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Open the demo page and load a file
**Screenshots**
_Using the MIWG A.2.0 file_

We can see an extra svg block above the svg node that holds the painted BPMN diagram.
_With the `non-regression events` diagram_

**Desktop (please complete the following information):**
Reproduced on Ubuntu Firefox and Windows Chrome
**Additional context**
The issue is not present on the `lib-integration` and `non-regression` pages
Answers:
username_0: ### Analysis
In `demo/index.ts`, the code starts with
```typescript
let bpmnVisualization = new BpmnVisualization(window.document.getElementById('graph'));
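// note: this first instance is never used again; the svg container it creates is the extra blank zone described above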
```
Then in the demo initialization function
```typescript
bpmnVisualization = new BpmnVisualization(window.document.getElementById(container));
```
So whatever the page (`demo` or `non-regression`), there are two `BpmnVisualization` instances. The first one is only instantiated and never touched after init.
In `non-regression`, the first one is linked to a 'null' container, as there is no 'graph' div. So no svg block is created in the page.
In `demo`, it is linked to the graph div, so a first svg block is created by mxGraph at `BpmnVisualization` instance creation.
Then, the `startBpmnVisualization` function instantiates a new `BpmnVisualization` instance linked to the 'graph' id in the demo (in `non-regression`, it uses the dedicated id of this page), and this new instance is assigned to the `bpmnVisualization` variable that is used by the rest of the demo code (in particular, when loading a BPMN diagram).
So a second svg block is created within the graph div. This svg block is then updated when the BPMN diagram is loaded.
The first svg block remains untouched as there is no access to the first created `BpmnVisualization` instance (it never loads BPMN diagrams).
### Origin
This was introduced by the PR I created when managing big BPMN files for tests (see #543): I put the shared code in place and forgot to remove the instantiation on the line where we declare the `bpmnVisualization` variable.
Status: Issue closed
|
google/recaptcha | 195939743 | Title: Traps keyboard: Can not tab past the checkbox element
Question:
username_0: A user using the keyboard cannot tab away from the checkbox, whether it is checked or not. After checking the checkbox via the space button, the user is able to tab past the checkbox and hit the next focusable item, the "privacy" link. But as soon as the user tabs back to the checkbox, they are trapped again and unable to tab in either direction.
This fails [WCAG 2.1.2](https://www.w3.org/WAI/WCAG20/quickref/#qr-keyboard-operation-trapping) No Keyboard Trap.
Steps to reproduce:
1. Visit [https://www.google.com/recaptcha/api2/demo](https://www.google.com/recaptcha/api2/demo)
2. Press tab until you reach the checkbox
3. Attempt to tab past the checkbox and notice you cannot
Answers:
username_1: Closing super old issues. Please re-raise if still relevant.
Status: Issue closed
|
IMP1/lojik | 180010855 | Title: modules need to be reusable
Question:
username_0: a module (definition) cannot have gates, only gate definitions, and some connections.
(connections will need to be thought about - list of gate-input pairs?)
creating an instance of a module using a module definition should create gate instances from the module definition's gate definitions.
Status: Issue closed |
Hapaxia/SelbaWard | 705763820 | Title: Unable to make a rotating cube in more that 2 axes
Question:
username_0: There seems to be some limitation with the axes. No matter how hard I try, I'm unable to create a rotating 3-face cube.
In theory I should be able to do it by just setting the z origin for all faces and incrementing the angles with a 90deg offset in X for the 2nd face and in X for the 3rd face. The faces float in random directions instead.
Answers:
username_0: I've managed to figure out the angles for 3 faces
```
img1.rotation3d(45, 45, 0)
img2.rotation3d(-45, 45, 0)
img3.rotation3d(35, -30, -54.5)
```
It seems that for the 3rd face an additional transformation is required. Do you have any idea how to calculate that?
username_0: 
username_1: I wasn't sure exactly what the problem was that you were describing, as you only provided an image of the correct result, so I experimented to see what you meant. I found that forming a full cube with Sprite 3D is pretty simple, but transforming the cube as a whole isn't as easy.
There are two transformations here: transforming the cube and transforming each face. Sprite 3D can only transform the face, and it does so in a specific order along the axes. If I remember correctly, the matrix was formed by pitch being transformed first and then yaw. It's worth noting that the roll is simply the flat 2D rotation and is applied afterwards on the 2D plane.
Then, when it comes to a cube, these transformations are no longer sufficient. To transform faces into their initial positions and then transform them again as a cube would require an additional transformation.
Remember that Sprite 3D isn't a sprite within 3D space, it's a sprite with 3D rotation. The allowance of a z value for the origin was added as an afterthought.
You would probably notice this if you moved a cube made with Sprite 3D around - the perspective doesn't change.
That said, I've been considering ideas about a new version of Sprite 3D (or a new drawable entirely) that would be fully within 3D space and would also - hopefully - allow usage of Elastic Sprite's shader instead of relying on Sprite 3D's subdivision to make the texture look "good enough".
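For reference, in matrix terms the per-face order described above (assuming that recollection is right: pitch applied first, then yaw, with the flat 2D roll last) acts on each vertex $v$ as:

$$v' = R_{\text{yaw}}(\beta)\, R_{\text{pitch}}(\alpha)\, v, \qquad R_{\text{pitch}}(\alpha) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}, \quad R_{\text{yaw}}(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix}$$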
username_0: Hi, thanks for your reply. It's exactly what I figured out: when I added a full second transformation matrix, I was able to rotate the camera around the cube. Unfortunately, the order of the faces is no longer easy to track with two transformations, so I started experimenting with pure OpenGL in perspective projection + z-buffer instead of relying on SFML transformables in ortho projection, though this has its drawbacks. I can't wrap my head around how to position and scale the quads in 3D space (while changing the far and near values, or the FOV) so that the quads keep a constant size in pixels and nothing gets clipped on rotation.
username_1: Indeed. Once you go into 3D, most people just jump into OpenGL directly, and that's the reason I haven't really done the more 'expanded' 3D stuff. Generally, people use OpenGL for 3D - even within SFML - so Sprite 3D / Elastic Sprite tend to be used for effects more than full-on 3D. Effects like the fake spinning cube in the Selba Ward demo (it uses only 2 sprites), for example.
As you mentioned, though, OpenGL does tend to take a lot of technical knowledge, especially for 3D. That said, anything made in SFML would probably require a lot of that knowledge to use it...
username_0: The reason I stumbled across your repo was trying to figure out how, in our Attract Mode game launcher, I could give proper depth to the game thumbnail/logo cards in the games selector. Currently we only skew/pinch with broken perspective in UV mapping. Your class gives more freedom in transformations, but I would also require the ability to position the elements on the z axis, or to look at all the elements with a camera at an angle. The cube was just a test of the limitations. I'm still thinking about whether there is even a way of achieving an angled stack of cards with the centre one facing forward, or a row of cards wrapped around part of a circle, while preserving the screen coordinate system at the same time.
username_1: For simple "cards", Sprite 3D should be able to handle most tasks, albeit not necessarily accurately represented within 3D space, since they don't share that space with others. It looks like you are intending to move multiple cards as if they were one object; this would be a similar situation to the cube: multiple faces of one object being transformed together.
That said, cards can be done relatively well with Sprite 3D; it's just dependent on your requirements for camera angles.
For example, here's a video showing a short (actual) card animation:
https://www.youtube.com/watch?v=yNSEmSMfbgQ
This was actually done with Spinning Card but would be much simpler to do with Sprite 3D.
username_0: Yes, I've seen that video; it's a very nice effect showing the power of Sprite 3D, but you're right, I won't be able to achieve what I need without a camera transform. I'm leaning towards doing it all in OpenGL, as I then have no problem with draw priorities with the z-buffer enabled, and I would not need tessellation. Just those screen-space coordinates in a 3D space are still bothering me.
username_1: You could calculate the vertices in 3D manually. It would basically be taking into account multiple 3D transforms. Once you have the vertices for each card, you could draw these faces using Elastic Sprite (also in Selba Ward) with perspective interpolation, with its corners in the positions you calculated; it automatically draws the texture as if its perspective fits those points.
[wiki/Elastic-Sprite](url)
username_0: I was considering that as well, but Elastic Sprite uses vertex shaders, doesn't it? I won't be able to use it until I strip Attract Mode of its legacy GLES 1.1 backend.
username_1: Elastic Sprite doesn't use vertex shaders, but it does use a fragment shader. If you're unable to use a fragment shader, then yes, Elastic Sprite wouldn't be suitable.
Status: Issue closed
|
twisted/klein | 182665609 | Title: Add a doc page for application composition
Question:
username_0: As noted in #73, we can already structure Klein apps into parts that can be composed together.
Let's add an example to the docs, since it's not super obvious how to do that.
I tweaked @username_3's example code into something I think may work:
```python
from klein import Klein


class HelloApplication(object):
    app = Klein()

    @app.route("/")
    def hello(self, request):
        return "Hello!"


class MathApplication(object):
    app = Klein()

    @staticmethod
    def numberify(string):
        if "." in string:
            return float(string)
        else:
            return int(string)

    @app.route("/")
    def root(self, request):
        return "Math happens here."

    @app.route("/add/<a>/<b>")
    def add(self, request, a, b):
        return "{}".format(self.numberify(a) + self.numberify(b))

    @app.route("/subtract/<a>/<b>")
    def subtract(self, request, a, b):
        return "{}".format(self.numberify(a) - self.numberify(b))

    @app.route("/multiply/<a>/<b>")
    def multiply(self, request, a, b):
        return "{}".format(self.numberify(a) * self.numberify(b))

    @app.route("/divide/<a>/<b>")
    def divide(self, request, a, b):
        return "{}".format(self.numberify(a) / self.numberify(b))

    @app.handle_errors(ValueError)
    def valueError(self, request, failure):
        return "Invalid inputs provided."


class Application(object):
    app = Klein()

    @app.route("/")
    def root(self, request):
        return "This is a web application composed from multiple applications."

    @app.route("/hello/", branch=True)
    def hello(self, request):
        return HelloApplication().app.resource()

    @app.route("/math/", branch=True)
    def math(self, request):
        return MathApplication().app.resource()


if __name__ == '__main__':
    application = Application()
    application.app.run("localhost", 8080)
```
Answers:
username_1: If I get it right, HelloApplication and MathApplication objects are recreated for each request. What is the best pattern to avoid rebuilding the objects each time and to store some state? Module-level functions? Static methods?
username_2: One thing that I don't see an obvious way to handle with this way of composition is sharing error handlers between `Klein` instances composed together.
username_3: Aah, that is a great callout @username_2 ! Do we have any other similarly registered metadata besides error handlers? Anything within werkzeug? |
shalzuth/RiskOfShame | 433942206 | Title: Not working after patch 4/16/2019
Question:
username_0: hope you can update the mod its one of the better ones right now
Answers:
username_1: Updated. You might need to update the loader; I added C# 6 features, which require different compilation.
username_0: awesome well done
username_0: https://github.com/username_1/RiskOfShame/releases/download/alpha/RiskOfShame.Loader.exe
The link is broken atm, or just not updated on Thunderstore.
username_1: Tagged it beta and updated on thunderstore
Status: Issue closed
|
ArkEcosystem/desktop-wallet | 401144801 | Title: Not able to vote with the ARK wallet on other ARK forks/networks.
Question:
username_0: **Temporary fix for our wallet**
Change the following files with the config from our own network.
node_modules/@arkecosystem/crypto/lib/crypto/crypto.js
node_modules/@arkecosystem/client/lib/peers.js
node_modules/@arkecosystem/client/dist/index.cjs.js
and published the package under a different name on the npm registry.
After that, I replaced all references to @arkecosystem/crypto and @arkecosystem/client with our own.
Answers:
username_1: There was a missing change in the ARK wallet release which currently doesn't impact mainnet or devnet nodes (since devnet has been told to not update to the current develop branch HEAD). We'll likely do a small release just to get this out there. The core version, assuming you based it off of the develop branch, will be looking for a different header than what the wallet is sending.
Perhaps there's a bug with the v1 endpoints and vote transactions.
I'll leave this open to confirm the above, but also look into the peers issue you're having.
username_0: I think you can close this one. Already fixed in 2.2.1 :)
Nice job.
Status: Issue closed
|
MichalLytek/type-graphql | 1157142510 | Title: Update Apollo Federation Code Example
Question:
username_0: **Describe the issue**
A clear and concise description of what is wrong or what feature is missing.
You may ask here for guides, e.g. "How to run TypeGraphQL on AWS Lambda?" if nobody helped you on GitHub Discussions or StackOverflow.
**Are you able to make a PR that fix this?**
If you can, it would be great if you create a pull request that fixes the docs, fills the gap with new chapter or new code example.
**Additional context**
Add any other context about the problem here.
Status: Issue closed
Answers:
username_1: The work has been done on the `v16` branch:
https://github.com/username_1/type-graphql/blob/v16/examples/apollo-federation/helpers/buildFederatedSchema.ts
However, I cannot test that because apollo federation did not yet support GraphQL v16.
username_1: Ah, it's done on the branch on my local, not pushed yet, sorry 😅 |
andrew-raphael-lukasik/RawTextureDataProcessingExamples | 606469000 | Title: Not seeing RGB24 type
Question:
username_0: On Unity 2019.1.0f2 I'm getting the following error when I include this code:
Assets\Editor\GaussianBlurRGB24Job.cs(31,29): error CS0246: The type or namespace name 'RGB24' could not be found (are you missing a using directive or an assembly reference?)
Is this a data type only included in a newer version of Unity? The Burst compiler package was available for this version, but I can't find anything on the net for `RGB24`.
Answers:
username_1: Make sure to include this file (and its directory): [RawTextureDataProcessingExamples/Structures/RGB24.cs](https://github.com/username_1/RawTextureDataProcessingExamples/blob/master/Structures/RGB24.cs)
Status: Issue closed
|
ropensci/rgbif | 491909819 | Title: Error in curl::curl_fetch_memory(x$url$url, handle = x$url$handle) : schannel: next InitializeSecurityContext failed: SEC_E_INVALID_TOKEN (0x80090308) - The token supplied to the function is invalid
Question:
username_0: I am trying to download data from the popler database using www.github.com/ropensci/popler. The package functioned well until a few days ago, upon which I got the error message reported in the subject line.
The code that got me the error message:
`devtools::install_github('ropensci/popler')
pplr_get_data( proj_metadata_key == 1 )`
Regarding the session:
<details> <summary><strong>Session Info</strong></summary>
```r
R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18362)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252
[2] LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets
[6] methods base
other attached packages:
[1] popler_0.2.0
loaded via a namespace (and not attached):
[1] Rcpp_1.0.2 pillar_1.4.2 compiler_3.6.1
[4] dbplyr_1.4.2 tools_3.6.1 digest_0.6.20
[7] jsonlite_1.6 evaluate_0.14 tibble_2.1.3
[10] gtable_0.3.0 pkgconfig_2.0.2 rlang_0.4.0
[13] DBI_1.0.0 rstudioapi_0.10 crul_0.8.4
[16] curl_4.0 xfun_0.9 dplyr_0.8.3
[19] stringr_1.4.0 knitr_1.24 triebeard_0.3.0
[22] grid_3.6.1 tidyselect_0.2.5 glue_1.3.1
[25] httpcode_0.2.0 R6_2.4.0 rmarkdown_1.15
[28] ggplot2_3.2.1 purrr_0.3.2 tidyr_0.8.3
[31] magrittr_1.5 urltools_1.7.3 scales_1.0.0
[34] htmltools_0.3.6 assertthat_0.2.1 colorspace_1.4-1
[37] stringi_1.4.3 lazyeval_0.2.2 munsell_0.5.0
[40] crayon_1.3.4
```
</details>
I have tried to re-install an old version and the development version of the R package curl, but the error persists.
I have opened an issue here, thinking that the issue originates with curl, but of course I might be wrong.
Any help on this will be greatly appreciated, so thank you in advance!
Status: Issue closed
Answers:
username_1: curious why'd you open it here, ah, closed it 👍 |
privly/privly-firefox | 58650486 | Title: Add Continuous Integration System to Forks
Question:
username_0: Your fork does not run on TravisCI by default so you don't run all the integration tests on every "push".
This can be very confusing. This issue is for helping people set up TravisCI with SauceLabs. Comment if the documentation for continuous integration is not sufficient and you need help. We will then translate any issues or misconceptions into changes in the documentation.
Answers:
username_0: Privly-Jetpack is replacing this old Xul version. See the new version nearing completion [here](https://github.com/privly/privly-jetpack).
(closing all outstanding issues on this repository since it is now deprecated)
Status: Issue closed
|
nodeschool/nodeschool.github.io | 173850074 | Title: Navigation among exercises is not working
Question:
username_0: Hi, I have node v5.0 and npm v3.10.7 on Windows 10. Navigation among the exercises is not working in any of the packages I've tried. It used to work before upgrading to Windows 10 (i.e. on Windows 8.1).
Any workaround?
Thanks!
Status: Issue closed
Answers:
username_1: Hi @username_0. For issues related to workshops, please open an issue on the [discussions](https://github.com/nodeschool/discussions/issues) repo. Closing this. |
jlippold/tweakCompatible | 414464744 | Title: `libmoorecon` working on iOS 12.1.2
Question:
username_0: ```
{
"packageId": "libmoorecon",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "libmoorecon",
"deviceId": "iPhone8,2",
"url": "http://cydia.saurik.com/package/libmoorecon/",
"iOSVersion": "12.1.2",
"packageVersionIndexed": true,
"packageName": "libmoorecon",
"category": "Tweaks",
"repository": "tateu's repo",
"name": "libmoorecon",
"installed": "0.9.9.0~beta4-1",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "libmoorecon",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Based almost 100% on libstatusbar by phoenix3200. This version is for my personal use and removes the upper limit CoreFoundation (iOS version) restriction and attempts to keep it from loading inside of non Application processes. Compiled for iOS9.x, armv7 and arm64.",
"latest": "0.9.9.0~beta4-1",
"author": "(null)",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
mapswipe/mapswipe | 452105089 | Title: "Download for later" doesn't seem to work
Question:
username_0: I recently wanted to download a chunk of tiles to help me pass the time on a flight. When I select "Download for later" under a project, it first asks me for permission to download without WiFi. After giving the permission, the "Download for later" button is greyed out and nothing seems to happen. It doesn't seem to matter which package size I select. I kept the phone and MapSwipe open for more than 10 minutes and still nothing happened.
I have never been able to download for later.
I have just reinstalled mapswipe and it didn't help.
Answers:
username_1: Hello @username_0, thanks for the bug report! We're very much aware of this issue; it's been reported by dozens of people outside this bug tracker. Weirdly enough, there doesn't seem to be a ticket for it, so I'll make this ticket the official one for "offline support" :)
I have a fix in the pipeline, that is part of https://github.com/orgs/mapswipe/projects/2
I will track progress on this ticket, so stay tuned!
username_0: Hi @username_1, thanks for replying. I wondered why I couldn't find a ticket on the issue :)
I look forward to testing the fix. Is there a beta group on Google Play, or is it released as an APK?
username_1: We're working on getting the update into the playstore, but for now, you can test the current `dev` branch build from https://github.com/mapswipe/mapswipe/releases You're very welcome to try it out, and report back here :) (note that this build works with a dev server, so your contributions will be discarded randomly, and you'll need to create a different account if you want to use it).
username_2: Hello, what is the status of this? The project for 2.0 is closed; this seems to have ended up unfinished in the todo list.
username_1: @username_2 you're right that it's not done yet. I think this now also relates to https://github.com/mapswipe/mapswipe/issues/157 as it could amend the way we fix this particular issue. Any thoughts welcome!
username_2: OK, if giving all data to Google is the way this app is going, uninstalling it is the way I prefer.
inviqa/ansible-jumpcloud | 515016193 | Title: Import_Tasks throws error
Question:
username_0: https://github.com/inviqa/ansible-jumpcloud/blob/ad562ba141047a052516820631d9d81e44dc1f86/tasks/main.yml#L13 throws an error when trying to run the playbook. I went with include, which seems to do the trick.
Status: Issue closed
Answers:
username_0: I messed up; I was using a very old version of Ansible.
dlsc-software-consulting-gmbh/CalendarFX | 624687876 | Title: TimeScaleLabels overlap each other for bigger data range
Question:
username_0: 
If the number of visible hours is too big, the labels start to overlap each other, as shown in the attached image. I would like to hide some of them, but I could not find anything that could do that.
Is there any option that could achieve that? If not, could it be implemented?
Regards,
<NAME>, PSI
Answers:
username_1: This feature is not available, yet, but it can be added to TimeScaleViewSkin. Currently this feature is out-of-scope of our current agreement, so you have the following options:
a) you pay the second half of the invoice, which will make me very happy and then I will be inclined to support this feature free of charge
b) PSI enters into the maintenance contract we discussed for a while now or
c) you dig into the code and check for overlapping labels and submit a pull request, which I will gladly merge
To clarify: you ordered a specific set of new functionality, which I implemented. The timescale view always had this limitation; this could have been known to you, as everything is out there in the open and you had ample opportunity/time to verify the calendar framework. None of your specs/mock-ups indicated that you want to display such small hours.
A slightly annoyed ….
Dirk
username_1: How do you want to proceed here?
Status: Issue closed
username_1: Implemented.
<img width="427" alt="Bildschirmfoto 2020-08-19 um 15 07 06" src="https://user-images.githubusercontent.com/9534301/90639504-0c8b0880-e22f-11ea-8933-7d5d4d455243.png"> |
kubernetes-client/java | 615109894 | Title: how to update deployment
Question:
username_0: Hello:
I want to change the k8s deployment configuration. My key code:
**(1)AppsV1Api config:**
```java
@Configuration
public class K8sConfig {

    @Autowired
    private K8sProperties k8sProperties;

    @Bean
    public AppsV1Api appsV1Api() throws IOException {
        ClassPathResource resource = new ClassPathResource(k8sProperties.getK8sConfigPath());
        ApiClient client = Config.fromConfig(resource.getInputStream());
        return new AppsV1Api(client);
    }
}
```
**(2)service:**
```java
@Service
public class TestService {

    @Autowired
    CoreV1Api coreV1Api;

    @Autowired
    AppsV1Api appsV1Api;

    public void test01() {
        System.out.println("00");
    }

    public void test02() {
        // deployment name
        String name = "operate-demo";
        String nameSpace = "extdev";
        V1Patch v1Patch = new V1Patch("[{\"op\": \"replace\", \"path\": \"spec/replicas\", \"value\":2}]");
        String pretty = "true";
        String dryRun = null;
        Boolean force = null;
        try {
            V1Deployment v1Deployment = appsV1Api.patchNamespacedDeployment(name, nameSpace, v1Patch, pretty, dryRun, null, force);
            System.err.println("#######" + v1Deployment);
        } catch (ApiException e) {
            e.printStackTrace();
        }
    }
}
```
**(3)k8s config:**
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: operate-demo
  namespace: extdev
  selfLink: /apis/apps/v1/namespaces/extdev/deployments/operate-demo
  uid: a85a2aa1-9ac6-4b30-bc65-03d2a6054fc1
  resourceVersion: '29332392'
  generation: 2
  creationTimestamp: '2020-04-29T11:48:07Z'
  labels:
    app: operate-demo
  annotations:
    deployment.kubernetes.io/revision: '2'
spec:
  replicas: 1
  ......
```
**Description of the phenomenon:**
I want to change replicas to 2, but it doesn't take effect; replicas is still 1.
Any guidance would be appreciated.
Answers:
username_1: Patch is complicated to use; I would recommend starting with replace rather than patch.
Also using `kubectl --v=10 ...` will print a bunch of JSON and HTTP information that will help you learn what is going on.
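One detail worth checking in the snippet above: a JSON Patch path must begin with a slash (RFC 6902), so the patch body string would need to be:
```json
[{ "op": "replace", "path": "/spec/replicas", "value": 2 }]
```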
username_1: Closing as I believe the question is answered and no further requests. Please use `/reopen` if you need anything further.
Status: Issue closed
|
DataBiosphere/topmed-workflows | 320995402 | Title: create `checkertool` for WDL aligner
Question:
username_0: [~wshands] has created a WDL version of the original University of Michigan aligner workflow. To test and validate that workflow, and in order to post it on Dockstore, we need to include a `checkertool` for that workflow.
Status: Issue closed |
ng-bootstrap/ng-bootstrap | 193084739 | Title: Pagination: prevent focusing disabled selectors
Question:
username_0: When a page selector is disabled, i.e. the current page is the first or last, the user should not be able to focus the disabled element. It should have the attribute `tabindex` set to -1.
[Plunker](http://plnkr.co/edit/ZEM3hswj9YhMvFKp69yr?p=preview)
[Bootstrap](http://v4-alpha.getbootstrap.com/components/pagination/#disabled-and-active-states)
Status: Issue closed |
ikedaosushi/tech-news | 476942895 | Title: Killing a process and all of its descendants
Question:
username_0: Killing a process and all of its descendants<br>
Killing processes in a Unix-like system can be trickier than expected. Last week I was debugging an odd issue related to job stopping on Semaphore. More specifically, an issue related to the killing of a running process in a job. Here are the highlights of what I learned:<br>
https://ift.tt/2T6zy9g |
matplotlib/matplotlib | 437201791 | Title: bar plot yerr lines/caps should respect zorder
Question:
username_0: ### Bug report
**Bug summary**
Bar plot error bars break when zorder is greater than 1.
```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
xm1 = [-2, -1, 0]
x = [1, 2, 3]
x2 = [4, 5, 6]
x3 = [7, 8, 9]
y = [1,2,3]
yerr = [0.5, 0.5, 0.5]
ax.bar(x=xm1, height=y, yerr=yerr, capsize=5, zorder=-1)
ax.bar(x=x, height=y, yerr=yerr, capsize=5, zorder=1)
ax.bar(x=x2, height=y, yerr=yerr, capsize=5, zorder=2)
ax.bar(x=x3, height=y, yerr=yerr, capsize=5, zorder=3)
fig.show()
```
**Actual outcome**

**Matplotlib version**
* Operating system: Arch Linux
* Matplotlib version: 2.2.3
* Matplotlib backend (`print(matplotlib.get_backend())`): module://ipykernel.pylab.backend_inline
* Python version: 3.6
* Jupyter version (if applicable): 5.7.0
* Conda default channel
Possible related issue: #1622
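A possible interim workaround, assuming the `error_kw` dict that `bar` forwards to `errorbar` (not verified against the 2.2.3 version above):
```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
x, y, yerr = [1, 2, 3], [1, 2, 3], [0.5, 0.5, 0.5]
# give the error lines/caps an explicit zorder above the bars
ax.bar(x=x, height=y, yerr=yerr, capsize=5, zorder=3,
       error_kw={"zorder": 4})
fig.show()
```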
Status: Issue closed |
aws-amplify/amplify-js | 854755993 | Title: DataStore can't delete items on belong-to relation
Question:
username_0: ### Before opening, please confirm:
I have [searched for duplicate or closed issues](https://github.com/aws-amplify/amplify-js/issues?q=is%3Aissue+) and [discussions](https://github.com/aws-amplify/amplify-js/discussions).
I have read the guide for [submitting bug reports](https://github.com/aws-amplify/amplify-js/blob/main/CONTRIBUTING.md#bug-reports).
I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
### JavaScript Framework
React
### Amplify APIs
GraphQL API, DataStore
### Amplify Categories
api
### Environment information
<details>
```
# Put output below this line
System:
OS: macOS 11.2.3
CPU: (12) x64 Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz
Memory: 166.14 MB / 32.00 GB
Shell: 3.2.1 - /usr/local/bin/fish
Binaries:
Node: 12.18.3 - ~/n/bin/node
Yarn: 1.22.10 - ~/npm/bin/yarn
npm: 7.5.2 - ~/npm/bin/npm
Browsers:
Chrome: 89.0.4389.114
Firefox: 87.0
Safari: 14.0.3
npmGlobalPackages:
@aws-amplify/cli: 4.46.1
@username_0/vite-plugin-pug: 1.0.3
@wethegit/preact-stickerbook: 1.0.2
@wethegit/sweet-potato-components: 0.0.3
@wethegit/sweet-potato-cooker: 0.9.0
@wethegit/sweet-potato-peeler: 0.3.8
node-gyp: 7.1.2
npm: 7.5.2
twosg: 0.1.0
tpci-tcgportal: 1.0.0
yarn: 1.22.10
```
</details>
### Describe the bug
Schema:
```graphql
[Truncated]
### Mobile Device
_No response_
### Mobile Operating System
_No response_
### Mobile Browser
_No response_
### Mobile Browser Version
_No response_
### Additional information and screenshots
_No response_
Answers:
username_1: https://docs.amplify.aws/lib/datastore/relational/q/platform/js#deleting-relations
It does not mention Belongs to relationships.
I have reached out to the team for clarification if this behavior is to be expected.
In the meantime, if you were to change your schema to this, you should see the Emails being deleted along with the related Project
```
type Project @model {
id: ID!
name: String!
emails: [Email] @connection(keyName: "byProject", fields: ["id"])
}
type Email @model @key(name: "byProject", fields: ["projectID"]) {
id: ID!
title: String!
language: String!
body: String!
projectID: ID!
}
```
username_1: I'm able to reproduce this consistently. Labeled this as a bug and have made the team aware.
username_0: Thanks for investigating it @username_1 !
username_2: Hi @username_0
Can you upgrade to the latest version of Amplify and let us know if you are still experiencing this issue?
username_3: Hey @username_0, I am going to resolve this issue for now as we have not heard from you in over 7+ days. Please let us know if you are still experiencing this issue on the latest version and we can reopen this issue. Thanks!
Status: Issue closed
username_4: Hi @username_1
We are also experiencing the same issue with the following schema when attempting to delete a parent object (workspace), while deleting the child objects (entity) does work:
```
type Workspace
@model
@auth(
rules: [
{ allow: private }
{ allow: public } # Allows access with API key
]
)
@key(name: "UserWorkspaces", fields: ["userID"]) {
id: ID!
....
entities: [EntityAI] @connection(keyName: "WorkspaceEntities", fields: ["id"])
users: [ID]
}
type Entity
@model
@auth(
rules: [
{ allow: private }
{ allow: public } # Allows access with API key
]
)
@key(name: "WorkspaceEntities", fields: ["workspaceID"]) {
id: ID!
owner: String
....
workspaceID: ID!
}
```
Here are the packages I'm using:
```
"@aws-amplify/core": "4.2.2",
"@aws-amplify/datastore": "3.3.0",
```
username_4: Never mind, the issue was a misspelling in the configuration of the key for the connection between the workspace and another entity.
In case it helps anyone: this issue occurs if you declare the connection between two entities incorrectly (as in my case):
```
type Workspace
  @model
  @auth(
    rules: [
      { allow: private }
      { allow: public } # Allows access with API key
    ]
  )
  @key(name: "UserWorkspaces", fields: ["userID"]) {
  id: ID!
  //////
  trainingLogs: [TrainingLog]
    @connection(key: "WorkspaceTrainingLogs", fields: ["id"]) # issue was here: `key` rather than `keyName`
  entities: [EntityAI] @connection(keyName: "WorkspaceEntities", fields: ["id"])
  users: [ID]
}

type TrainingLog
  @model
  @auth(
    rules: [
      { allow: private }
      { allow: public } # Allows access with API key
    ]
  )
  @key(name: "WorkspaceTrainingLogs", fields: ["workspaceID"]) {
  id: ID!
  /////
  workspaceID: ID!
}
```
FabyTapia/SCL008-md-links | 438098873 | Title: Link validation
Question:
username_0: A function was created to validate links; it shows the url, status, and status text.
A new dependency on node-fetch was also added.
Answers:
username_0: In the process of changing to an asynchronous function using promises.
username_0: The test structure was created, plus a test file.
username_0: Links are now retrieved with promises; it returns the url plus the statusText, and the test passes.
username_0: A second test `it` block was added.
shuichiro-makigaki/mendeley_cli | 1010618018 | Title: Does not work after upgarding to the latest version
Question:
username_0: After upgrading, it seems that it doesn't work. Here is what I get when I run:
```
MENDELEY_CLIENT_ID=10209 MENDELEY_CLIENT_SECRET='p6TmD0blabla' MENDELEY_REDIRECT_URI='http://localhost:8888' mendeley get token
Opening in existing browser session.
[416803:416803:0929/085051.338460:ERROR:broker_posix.cc(43)] Invalid node channel message
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 49276)
Traceback (most recent call last):
File "/usr/lib/python3.8/socketserver.py", line 316, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/lib/python3.8/socketserver.py", line 347, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python3.8/socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python3.8/socketserver.py", line 747, in __init__
self.handle()
File "/usr/lib/python3.8/http/server.py", line 427, in handle
self.handle_one_request()
File "/usr/lib/python3.8/http/server.py", line 415, in handle_one_request
method()
File "/home/AE3X/.local/lib/python3.8/site-packages/mendeley_cli/__init__.py", line 50, in do_GET
mendeley_session = auth.authenticate(f'{mendeley_client.redirect_uri}{self.path}')
File "/home/AE3X/.local/lib/python3.8/site-packages/mendeley/auth.py", line 65, in authenticate
token = self.oauth.fetch_token(self.token_url,
File "/home/AE3X/.local/lib/python3.8/site-packages/requests_oauthlib/oauth2_session.py", line 199, in fetch_token
self._client.parse_request_body_response(r.text, scope=self.scope)
File "/home/AE3X/.local/lib/python3.8/site-packages/oauthlib/oauth2/rfc6749/clients/base.py", line 409, in parse_request_body_response
self.token = parse_token_response(body, scope=scope)
File "/home/AE3X/.local/lib/python3.8/site-packages/oauthlib/oauth2/rfc6749/parameters.py", line 376, in parse_token_response
validate_token_parameters(params)
File "/home/AE3X/.local/lib/python3.8/site-packages/oauthlib/oauth2/rfc6749/parameters.py", line 386, in validate_token_parameters
raise MissingTokenError(description="Missing access token parameter.")
oauthlib.oauth2.rfc6749.errors.MissingTokenError: (missing_token) Missing access token parameter.
```
Answers:
username_1: Could you double-check your `MENDELEY_CLIENT_SECRET`? I could reproduce your problem using the wrong secret. Your secret seems to be short... It may be a copy & paste mistake?
Also, the secret should not be public. **Please regenerate your secret**, and copy & paste it in full.
username_0: Thank you,
Before posting this I had tried to update the secret several times. Now I have regenerated it as per your recommendation, and it works!
Sorry, I guess this is a side effect of staying up too late at night.
Thanks
username_0: Maybe these mistakes could be added to the README as an FAQ?
Status: Issue closed
|
the-turing-index/web-app | 728826070 | Title: Cleaner svg file for map display
Question:
username_0: ### Feature Categories:
- [ ] Angular
- [ ] SCSS / Styling
- [ ] Testing
- [ ] UI
- [ ] BugFix
- [x] Adobe Illustrator
**Describe the feature/bug you're going to work on**
- Modernize our current map to match design of logos presented by team.
- Create rooms as a path removing line paths for easier programming of svg
- Mock up a couple options to present
**Additional context**
Add any other context or screenshots about the feature.
Status: Issue closed |
SparkFund/google-apps-clj | 731586132 | Title: Unhandled java.net.SocketTimeoutException despite setting :read-timeout and :connect-timeout
Question:
username_0: Read timeout error for a large sheet (3k records, 120 columns). Setting :read-timeout and :connection-timeout in google-ctx does not change anything. Values are set to 560000, but it still times out after about 15s.
1. Unhandled java.net.SocketTimeoutException
Read timed out
NioSocketImpl.java: 283 sun.nio.ch.NioSocketImpl/timedRead
NioSocketImpl.java: 309 sun.nio.ch.NioSocketImpl/implRead
NioSocketImpl.java: 350 sun.nio.ch.NioSocketImpl/read
NioSocketImpl.java: 803 sun.nio.ch.NioSocketImpl$1/read
Socket.java: 982 java.net.Socket$SocketInputStream/read
SSLSocketInputRecord.java: 469 sun.security.ssl.SSLSocketInputRecord/read
SSLSocketInputRecord.java: 463 sun.security.ssl.SSLSocketInputRecord/readHeader
SSLSocketInputRecord.java: 70 sun.security.ssl.SSLSocketInputRecord/bytesInCompletePacket
SSLSocketImpl.java: 1421 sun.security.ssl.SSLSocketImpl/readApplicationRecord
SSLSocketImpl.java: 1033 sun.security.ssl.SSLSocketImpl$AppInputStream/read
BufferedInputStream.java: 244 java.io.BufferedInputStream/fill
BufferedInputStream.java: 284 java.io.BufferedInputStream/read1
BufferedInputStream.java: 343 java.io.BufferedInputStream/read
HttpClient.java: 754 sun.net.www.http.HttpClient/parseHTTPHeader
HttpClient.java: 689 sun.net.www.http.HttpClient/parseHTTP
HttpURLConnection.java: 1623 sun.net.www.protocol.http.HttpURLConnection/getInputStream0
HttpURLConnection.java: 1528 sun.net.www.protocol.http.HttpURLConnection/getInputStream
HttpURLConnection.java: 527 java.net.HttpURLConnection/getResponseCode
HttpsURLConnectionImpl.java: 308 sun.net.www.protocol.https.HttpsURLConnectionImpl/getResponseCode
NetHttpResponse.java: 37 com.google.api.client.http.javanet.NetHttpResponse/<init>
NetHttpRequest.java: 94 com.google.api.client.http.javanet.NetHttpRequest/execute
HttpRequest.java: 981 com.google.api.client.http.HttpRequest/execute
AbstractGoogleClientRequest.java: 419 com.google.api.client.googleapis.services.AbstractGoogleClientRequest/executeUnparsed
AbstractGoogleClientRequest.java: 352 com.google.api.client.googleapis.services.AbstractGoogleClientRequest/executeUnparsed
AbstractGoogleClientRequest.java: 469 com.google.api.client.googleapis.services.AbstractGoogleClientRequest/execute
google_sheets_v4.clj: 391 google-apps-clj.google-sheets-v4/get-cells
google_sheets_v4.clj: 375 google-apps-clj.google-sheets-v4/get-cells
google_sheets_v4.clj: 410 google-apps-clj.google-sheets-v4/get-cell-values
google_sheets_v4.clj: 404 google-apps-clj.google-sheets-v4/get-cell-values
Answers:
username_1: Having same issue.
I believe it is related with this dissoc:
https://github.com/SparkFund/google-apps-clj/blob/ffe53b4d07db153df0749f1c263c04a0226a185f/src/google_apps_clj/google_sheets.clj#L43
```clojure
;; TODO hilariously, drive and sheets java objects specify timeouts
;; by apparently completely incompatible mechanisms. Here, we just
;; explicitly ignore any timeouts while I pause and reflect on my
;; increasingly bad life choices
(let [google-ctx (if (map? google-ctx)
                   (dissoc google-ctx :read-timeout :open-timeout)
                   google-ctx)
      cred (cred/build-credential google-ctx)
      service (SpreadsheetService. "Default Spreadsheet Service")]
  (doto service
    (.setOAuth2Credentials cred)))
```
(.setOAuth2Credentials cred)))` |
siimon/prom-client | 187121402 | Title: process_cpu_seconds_total increasing faster than 1/s
Question:
username_0: I recently upgraded my services to Node 6 in order to start collecting process_cpu_seconds_total. I'm now graphing the results and seeing strange values. The stats seem to claim that Node is using 125k seconds per second, according to this Prometheus query:
```irate(process_cpu_seconds_total{job="bt-actions"}[1m]) * 60```
Screenshot of Grafana plot attached. Any idea what's up? Am I querying this wrong or using the wrong units? Do you have reports from people successfully using the process_cpu_seconds_total stat?
Thanks,
Jacob

Answers:
username_1: I haven't deployed it myself since I'm on parental leave, but it might happen if you have multiple cores, though it sounds a bit much with 125k seconds in a second :)
@username_2 are you using that metric?
username_2: We're using it, yeah. I won't have access to the query until Monday, but I can check it out then
username_0: Yeah, I don't have that many cores. :-) Also, Node shouldn't use more than one core per process since it's single-threaded. Will look forward to your results @username_2, thanks.
username_2: Ok, took a look now, and it's definitely a bug.
Trying to figure out why these print different diffs:
```js
'use strict';
let prevTotal = 0;

setInterval(() => {
  const usage = process.cpuUsage();
  const total = usage.user + usage.system;
  const diff = total - prevTotal;
  console.log(diff, 'curr diff');
  prevTotal = total;
  // console.log(total, 'total');
  console.log();
}, 250);

let previousUsage = process.cpuUsage();
let totalUsage = 0;

setInterval(() => {
  const diffUsage = process.cpuUsage(previousUsage);
  previousUsage = diffUsage;
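  // note: process.cpuUsage(prev) diffs against a previous *cumulative*
  // reading, but here the previous *diff* is stored as the new baseline,
  // so each subsequent diff is computed against the wrong value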
  const diff = diffUsage.user + diffUsage.system;
  console.log(diff, 'diff with prev');
  totalUsage += diff;
  // console.log(totalUsage, 'total with prev');
  console.log();
}, 250);
```
username_1: @Blystad's fix is merged and pushed in 6.1.1. @username_0, can you confirm that the fix resolves your issue?
username_2: This resolves our issue.

Status: Issue closed
username_1: Closing this issue then :) |
tlaplus/tlaplus | 390822822 | Title: IllegalArgumentException on symmetry set with one element
Question:
username_0: ```
TLC threw an unexpected exception.
This was probably caused by an error in the spec or model.
See the User Output or TLC Console for clues to what happened.
The exception was a java.lang.IllegalArgumentException
java.lang.IllegalArgumentException
at util.Set.<init>(Set.java:86)
at util.Set.<init>(Set.java:100)
at tlc2.value.MVPerm.permutationSubgroup(MVPerm.java:95)
at tlc2.tool.Tool.getSymmetryPerms(Tool.java:3100)
at tlc2.tool.TLCStateMutSource.init(TLCStateMutSource.java:67)
at tlc2.tool.Tool.init(Tool.java:114)
at tlc2.tool.AbstractChecker.<init>(AbstractChecker.java:127)
at tlc2.tool.ModelChecker.<init>(ModelChecker.java:97)
at tlc2.tool.ModelChecker.<init>(ModelChecker.java:77)
at tlc2.TLC.process(TLC.java:928)
at tlc2.TLC.main(TLC.java:247)
```
Answers:
username_0: Show a meaningful error message.
username_1: Closed with 0b9f474
Status: Issue closed
username_1: Reopening while improvements afoot.
username_1: (same stack trace as above)
Fix: Show a meaningful error message.
Status: Issue closed
username_1: Improved messaging and handling of unions provided with 5544517 |
thudugala/Plugin.LocalNotification | 1016129964 | Title: Notification not working
Question:
username_0: Notifications stopped working in my application. They were working before 8.0.2, but then I realised that they are not being set, because the following error occurs:
`The type initializer for 'System.Text.Json.JsonSerializer' threw an exception.`
I have the latest version of Xamarin.Forms, along with the latest versions of Xamarin.Essentials and the Toolkit. Could some conflict be causing this?
Answers:
username_0: Closing, as I forgot I had the same issue open before.
Status: Issue closed
|
simple-salesforce/simple-salesforce | 327782465 | Title: Not an issue, more of a question: Have you used simple-salesforce in an AWS Lambda?
Question:
username_0: Hi, I was wondering if you can use simple-salesforce in an AWS Lambda script. I was trying to use API Gateway to pass a customer ID to a Lambda script and have that Lambda script use simple-salesforce to grab the customer info.
Thanks in advance for any help
Answers:
username_1: @username_0 I haven't tried this personally but I have used Lambda for other things. The approach you described sounds like it would work just fine. If you hit any issues with this setup, just let us know and we can help troubleshoot.
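For what it's worth, a minimal sketch of that setup could look like the following; the handler name, environment variable names, payload shape, and the `Contact` query are all illustrative assumptions, not from this thread:
```python
import os

from simple_salesforce import Salesforce


def handler(event, context):
    # credentials from environment variables, purely for brevity here;
    # see the Secrets Manager discussion further down this thread
    sf = Salesforce(
        username=os.environ["SF_USERNAME"],
        password=os.environ["SF_PASSWORD"],
        security_token=os.environ["SF_TOKEN"],
    )
    # assumed payload shape: API Gateway passes {"customer_id": "..."}
    customer_id = event["customer_id"]
    # for real use, sanitize/escape the id rather than formatting it in
    result = sf.query(
        "SELECT Id, Name FROM Contact WHERE Id = '{}'".format(customer_id)
    )
    return result["records"]
```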
username_0: I'm having trouble with adding simple-salesforce to AWS. I'm using CodeCommit as my git repo, CodeBuild for my unit testing, and CodePipeline for continuous deployment. I got CodeBuild to pip install simple-salesforce, but it did not push that library up to AWS Lambda. Should I be zipping it up first and then committing to CodeCommit?
username_2: You could always upload your code manually in a zip file too, remembering to install the library in the same folder.
Anyway, did you manage to do this?
username_3: I am using serverless framework to do the deploy of the function and all its dependencies
username_1: ^ I would recommend this approach too. Another serverless framework you might consider is Zappa.
username_4: I'm using `simple-salesforce` in a lambda based ETL with results that I'm satisfied with. Rather than using [serverless](https://serverless.com/) as @username_3 & @username_1 suggested, I've built the lambda using [sam-local](https://github.com/awslabs/aws-sam-cli) based off of this wonderful [cookiecutter-aws-sam-python template](https://github.com/aws-samples/cookiecutter-aws-sam-python).
The only limitations I've seen thus far are with respect to:
1. making sure you're handling your Salesforce credentials in a secure manner
1. bulk processes that take longer than 5 mins (the max execution time of a lambda function)
1. bulk upserts using `sf.bulk.opportunity.upsert()`
There are fairly comfortable workarounds to each of these limitations:
1. we store Salesforce credentials securely in [aws secrets manager](https://aws.amazon.com/secrets-manager/) and get all the benefits of the service: permissioned access, auditing, etc. (see the sketch after this list)
1. chunking data processed by the lambda to a reasonable limit
1. we've actually had to use [heroku/salesforce-bulk](https://github.com/heroku/salesforce-bulk) to get around this limitation
Other than that, it's been pretty dreamy.
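To illustrate point 1 above, here is a rough sketch of pulling Salesforce credentials from Secrets Manager; the secret name and its JSON shape are assumptions:
```python
import json

import boto3


def get_salesforce_credentials(secret_id="salesforce/etl"):
    # the secret is assumed to be a JSON object with
    # username / password / security_token keys
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])
```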
username_1: ^ First time seeing the SAM CLI. Looks like a pretty cool approach.
username_5: @username_0 It looks like the above answers have put you on the right path to resolving your question. Please re-open and provide further clarification if these answers do not meet your needs.
Status: Issue closed
username_6: Has anyone tried storing Salesforce sessions for use by subsequent Lambda calls? I have an API Gateway service that gets a lot of traffic, and we're hitting our Salesforce login limit because we're creating a new session every time the function gets called.
username_7: Hi. I am trying to use simple_salesforce with an AWS Glue ETL job. However, when I run the job it gives me the following error: "ModuleNotFoundError: No module named 'cryptography.hazmat.bindings._padding'". I have tried various options, like manually installing the dependent packages in sequence, but nothing worked. Any help on this will be greatly appreciated. Thank you.
username_7: Thanks @username_8 for your help. I will dig into it further.
username_7: Hello again.
I was able to make it work using a Python-only library. I am now facing another issue, with a date field in Salesforce. I read that we need to pass UTC format to Salesforce from the API. However, I tried multiple options and none of the formats worked; I get an error saying "malformed request" while doing an insert operation. My date is in the format "YYYY/MM/DD" and I need to convert it to the format accepted by Salesforce. I tried the strptime function as mentioned in the documentation, but that gives me a formatting error. Any help will be appreciated!
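A small sketch of the conversion, assuming a plain date field; Salesforce's REST API expects ISO 8601, i.e. `YYYY-MM-DD` for dates (and a full ISO 8601 timestamp for datetimes):
```python
from datetime import datetime

raw = "2021/07/15"  # example input in YYYY/MM/DD
parsed = datetime.strptime(raw, "%Y/%m/%d")
sf_date = parsed.strftime("%Y-%m-%d")  # -> "2021-07-15"
```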
natenho/Mockaco | 439067391 | Title: Support configuration for default mock response
Question:
username_0: The current implementation returns HTTP 501 Not Implemented for requests not matching any existing template.
It would be nice to support a "default" template as a fallback to any unmatched request.
Status: Issue closed
Answers:
username_0: In the current version, there is no need to configure this kind of fallback anymore, because the default response is returned with a (configurable) HTTP status code (default is 501 Not Implemented) and the body is a standardized JSON error object with the message "Incoming request didn't match any mock". |
YeOldeDM/tunnels-of-todog | 466150984 | Title: 8-direction movement
Question:
username_0: Migrate movement actions from arrow keys to the Keypad, allowing 8 options for movement.
Status: Issue closed |
airbnb/lottie-web | 974991220 | Title: Lottie files not Rendering Correctly - Edits and Gradients don't work
Question:
username_0: <!--
This template is for bug reports. If you are reporting a bug, please answer all the questions below.
If you are here for another reason (feature request, question, etc) please delete this template before continuing.
Note that leaving sections blank will make it difficult for us to troubleshoot bugs, causing delays in our response,
or result in closing this issue.
Please include screenshots where applicable.
-->
**Tell us about your environment**
* **Browser and Browser Version:**
* Google Chrome(Version 92.0.4515.159 (Official Build) (64-bit))
* **After Effects Version:**
* 18.4.1
**What did you do? Please explain the steps you took before you encountered the problem.**
(To give a bit of context: I was hired by my employer to make Halloween edits of existing Lottie emojis that he had commissioned from a different team. The file was not originally mine and it's incredibly unorganized, so the solution has been difficult to find.)
I would select an animated emoji to edit, then click on the body to try and adjust the gradients in After Effects. I simply tried to fix the colors of the emoji to how I wanted them to be, using the in-program gradient editor the previous team used.
Sometimes they would use a large folder of stacked gradients for the body, while other times they used a png for the body of the emoji. I took one of said pngs out of the After Effects group it was in, edited it in Photoshop, and placed it back into the group.
**What did you expect to happen?**
I expected my edits and work to be rendered properly and show up in the bodymovin preview.
**What actually happened? Please include as much _relevant_ detail as possible.**
What would happen is that the color edits I made (specifically the gradients) would show up either as the colors the emoji had BEFORE I made the edits, as completely black and white, or as completely broken. I have included before and after pictures below, where you can see I had tried to make the character green, but in the Bodymovin render preview the hands are the yellow they were before I edited them, and the green png I imported for the body does not load at all.


**Please provide a download link to the After Effects file that demonstrates the problem.**
(The composition is called Ghoul in the project, that will show you the issue. I sincerely hope someone can help me solve this issue, I have been trying to fix it on my own for nearly a month now with no success.)
https://drive.google.com/drive/folders/1O27cDAmesMoafozQI-tWVqtUB8PjbvTl?usp=sharing
Answers:
username_1: I've played around with the .AE file a bit, but I can't seem to figure out where the 'yellow' in the hands is coming from. There are 'shy' layers, so maybe turn that off and see what's what (there are hand and hand 2 layers, but even deleting the unused layer resulted in the 'yellow hands').
The 'missing image asset' you're seeing rendered from the body is because you need to turn on "include images in json" under the assets tab of the export settings (since the body is a .png, you need to include all rasterized elements).

These damn hands... quite frustrating, to be honest, haha. I think you could perhaps re-draw the hand shapes, use a gradient with fewer 'steps', and parent the new hand layer to the existing animation. If you turn the gradient fill to a solid, the color works, which is why I think it may have something to do with the number of 'stops' in the existing gradient.
Alas, that's all I can help with currently.
Good luck!
username_0: Thank you so much for the help! You just saved me a massive headache; I was worried I'd have to redo a large part of the assets myself.
And yeah, those damn hands. It's a bit comforting to know I'm not the only one who can't figure out what's going on, I thought I was going insane haha. Redoing them myself should be easy enough though, seeing as they are just circles.
Thank you so much again! You're a lifesaver. |
ayugioh2003/today-i-learned | 688502955 | Title: Danfo.js, a math library similar to Python's pandas
Question:
username_0: ## Links
- [Introducing Danfo.js, a Pandas-like Library in JavaScript — The TensorFlow Blog](https://blog.tensorflow.org/2020/08/introducing-danfo-js-pandas-like-library-in-javascript.html?linkId=98080391)
- [Danfo.js Documentation - Danfo.js](https://danfo.jsdata.org/)
- [opensource9ja/danfojs: danfo.js is an open source, JavaScript library providing high performance, intuitive, and easy to use data structures for manipulating and processing structured data.](https://github.com/opensource9ja/danfojs)
## Why I'm noting this
I wonder whether JS will one day have the same analysis capabilities as Python and R XD
## Concepts
## Learning points
## Related material
MAIF/izanami | 344063993 | Title: Jdk 10 and ++
Status: Issue closed
Question:
username_0: Izanami is running with jdk 10 in production with redis and elastic.
Did you run into trouble running it with jdk 10? Reopen this issue if so.
Answers:
username_1: Why was this issue closed?
username_0: Izanami is running with jdk 10 in production with redis and elastic.
Did you run into trouble running it with jdk 10? Reopen this issue if so.
username_1: Oh, nice. I thought the issue was closed because of obstructions, i am glad to hear it is working 👍 So the server is running with jdk 10 in the docker image on dockerhub?
username_1: Is it just the client that supports jdk 10? I see the Base image is jdk 8 for the server here https://github.com/MAIF/izanami/blob/master/izanami-server/build.sbt
`dockerBaseImage := "openjdk:8"`
username_1: The client works great on jdk 10 btw.
username_2: I created a pr for upgrading the server base image https://github.com/MAIF/izanami/pull/180 |
bountysource/core | 182428619 | Title: Take GitHub "reactions" into account for "thumbs up" in issues
Question:
username_0: Issues on Bountysource have a "thumbs-up count", which (as described by rappo) is a combination of "1) vote counts (if applicable, eg Jira or Launchpad) 2) thumbsup comments in the thread itself (eg `:+1:` in the github thread) 3) using our browser extension, or 4) clicking the thumbs up button on the front end".
GitHub now has a way to attach "reactions" (a fixed set of emojis) to issue comments, notably including the description of the issue itself. A :+1: reaction on the issue description could be considered a vote, and fall under 1) in the list above, but this is not currently taken into account by Bountysource and it should be.
Note that [retrieving reaction information from GitHub](https://developer.github.com/changes/2016-05-12-reactions-api-preview/) is still in developer preview, which means the API isn't yet stable and may break in the future.
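For reference, a minimal sketch of what consuming that preview looks like; the repo and issue number below are placeholders, and the custom `Accept` header is what opts a request into the preview:
```python
import requests

# Reactions on the issue description count as reactions on the issue itself,
# so tallying ':+1:' votes is one GET plus a filter.
resp = requests.get(
    'https://api.github.com/repos/some-owner/some-repo/issues/1/reactions',
    headers={'Accept': 'application/vnd.github.squirrel-girl-preview+json'})
thumbs_up = sum(1 for r in resp.json() if r['content'] == '+1')
```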
Answers:
username_1: This is a solid idea. Incorporating into the roadmap. |
eyounx/ZOOpt | 297995216 | Title: pip installation
Question:
username_0: I tried to install using `pip`. It seems the `paretoopt` module is not included in the installation, as I keep getting this error:
```
from paretoopt import ParetoOpt
ModuleNotFoundError: No module named 'paretoopt'
```
I used the following command: `pip install zoopt` on Windows 10.
Answers:
username_1: I think you want to use ParetoOpt method (POSS). An example can be found [here](https://github.com/eyounx/ZOOpt/blob/master/example/sparse_regression/poss_opt.py).
username_1: Do I solve your problem? If there are some more questions, please feel free to let me know.
username_0: Sorry for only responding now. I think my initial code snippet is somewhat misleading. I'm trying to use the normal Zoopt optimization function. I upgraded from v0.1 to v0.2 and that error was raised when I tried to run the optimization function.
username_1: Could you post your specific code, especially the definitions of the `Objective` and `Parameter` objects?
In addition, please check whether the error is raised when you run this [example](https://github.com/eyounx/ZOOpt/blob/master/example/sparse_regression/poss_opt.py) as well.
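For reference, typical definitions look roughly like this; a toy sketch based on the README, where the sphere objective, dimension count, and budget are all made up for the example:
```python
from zoopt import Dimension, Objective, Parameter, Opt

def sphere(solution):
    # toy objective: sum of squares, minimized at the origin
    return sum(x ** 2 for x in solution.get_x())

dim = Dimension(10, [[-1, 1]] * 10, [True] * 10)  # 10 continuous dims in [-1, 1]
objective = Objective(sphere, dim)
solution = Opt.min(objective, Parameter(budget=1000))
print(solution.get_x(), solution.get_value())
```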
username_0: I will give it a try tomorrow and get back to you.
username_0: I installed v0.2 of ZOOpt and ran the example you suggested.
The following error was raised:
```
File "C:\Users\****\Anaconda3\lib\site-packages\zoopt\algos\opt_algorithms\paretoopt\pareto_optimization.py", line 11, in <module>
from paretoopt import ParetoOpt
ModuleNotFoundError: No module named 'paretoopt'
```
For reference, I'm using the Anaconda (v5.1) distribution of Python on Windows 10.
username_1: Thank you for your issue. We have fixed this bug and will update the release version in a while. To use the latest version, please:
1. pip uninstall zoopt
2. download this repository, and sequentially run following commands in your terminal/command line.
```
$ python setup.py build
$ python setup.py install
```
I think the latest version will work well.
Status: Issue closed
|
opencv/opencv | 255194605 | Title: Boosting CV_FOLDS Segfault
Question:
username_0: - OpenCV => 3.1
- Operating System / Platform => Ubuntu 14.04
- Compiler => gcc
Segfault when using `k_fold` greater than 1; values of 0 and 1 work.
```cpp
cv::Ptr<cv::ml::Boost> boost_ = cv::ml::Boost::create();
boost_->setCVFolds(k_fold);
boost_->train(data);
```
Answers:
username_1: I've looked at the `Boost` code and noticed that cross-validation is actually not implemented.
`Boost` uses `DTrees` internally, where CV can be used for better tree pruning.
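Until that is implemented, the practical workaround is to keep cross-validation switched off. A minimal sketch via the Python bindings, with synthetic data standing in for a real training set:
```python
import numpy as np
import cv2

samples = np.random.rand(100, 4).astype(np.float32)
responses = (samples[:, 0] > 0.5).astype(np.int32)

boost = cv2.ml.Boost_create()
boost.setCVFolds(0)  # values > 1 take the unimplemented CV code path and crash
boost.train(cv2.ml.TrainData_create(samples, cv2.ml.ROW_SAMPLE, responses))
```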
Status: Issue closed
|
miguelgrinberg/Flask-SocketIO | 211936413 | Title: update session in a background thread
Question:
username_0: I want to store some data in a cookie or session. The data I want to store is coming from a background task and I really want to update the session or cookie in the background thread. I don't want to do it in the foreground because I don't know how to do it without blocking. I'm not sure if this is possible and if I'm wasting time. I'm also not a Flask expert so I may be missing something. I'm trying to do the following for example
```python
import eventlet
from flask import copy_current_request_context, session

@socketio.on('event')
def foo():
    def bar():
        eventlet.sleep(10)
        return {'key': 'value'}
    bar = copy_current_request_context(bar)
    thread = eventlet.spawn(bar)
    ret = thread.wait()
    if isinstance(ret, dict):
        session.update(ret)
```
The above works, but I don't like it because `thread.wait()` blocks. I tried to think of some clever solution with eventlet but nothing comes to mind. The following doesn't block, but I also don't update the original session object.
```python
@socketio.on('event')
def foo():
    evt = eventlet.event.Event()
    def bar():
        eventlet.sleep(10)
        evt.send({'key': 'value'})
    bar = copy_current_request_context(bar)
    thread = eventlet.spawn(bar)
    def waiter(session):
        ret = evt.wait()
        if isinstance(ret, dict):
            session.update(ret)
    waiter = copy_current_request_context(waiter)
    eventlet.spawn(waiter, session)
```
I'd appreciate any advice, even if it's just to say that this isn't possible.
Answers:
username_1: It doesn't seem to me that the session is the proper place to store this information. For starters, the Flask session is stored in a cookie, so the only way to make a change is to set the cookie again, and this is not possible when using WebSocket because there are no HTTP responses. When you set the session, you are only setting it locally, the changes are not going out to the client, so they are not going to be visible on HTTP routes that rely on a cookie to restore the session.
Why don't you instead set up some server-side storage that you have full access to? Maybe something like Redis or memcache is what I'm thinking.
username_0: That makes sense, I would use Redis to store this data normally. Maybe some backstory might explain why I'm thinking about doing this.
I'm building a [tool](https://github.com/username_0/bowtie) for data scientists to quickly generate dashboards. If it takes a while to compute something for a plot, then if possible, it might be nice to shove it in a cookie? If they can get this feature without provisioning a database, I think it would enable developing faster responding dashboards and deploying them quicker.
I was hoping to use sessions as a quick and dirty key-value store to hold expensive computations that don't generate much data. Sorry, I'm a little inexperienced so I don't quite understand what it means to set a session locally but it doesn't go out to the client. I'm basically trying to pick things up as I go along.
Hmm, I just had an idea, that maybe I could store some data in a dummy React component.
I don't know if there's any more you can say, but regardless thanks for helping 👍
username_1: Using cookies to store data is totally fine, but you need to keep in mind you are using WebSocket, so there is no vehicle for the server to send cookies to the client, like there is for HTTP. So using the user session for this becomes complicated, unless you switch to server-side sessions, in which case the rules change. If you wanted to use cookies for this, what you can do instead is send the data to the client over the Socket.IO transport, and then have the client set a cookie by itself.
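For what it's worth, here is a minimal sketch of that idea on the server side; the event names and payload are invented for the example, and it assumes Flask-SocketIO's own background task helper rather than raw eventlet:
```python
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('event')
def foo():
    sid = request.sid  # remember which client asked

    def bar():
        socketio.sleep(10)  # stand-in for the slow computation
        # Deliver the result over the Socket.IO transport; the client-side
        # JS handler can then persist it, e.g. document.cookie = 'key=value'.
        socketio.emit('computation_done', {'key': 'value'}, room=sid)

    socketio.start_background_task(bar)
```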
username_0: The last suggestion of yours sounds good. I'll give it a shot. Thanks again for your insight. I think I'll mark this closed now.
Status: Issue closed
|
brianc/node-postgres | 33986981 | Title: How do I get the error message out of the error object?
Question:
username_0: I'm a little confused about the errors node-postgres sends back.
I get an error object like below, but it is misformed json and I can't access the message inside the first array. I wan't to handle the error to save out the situation.
{ [error: null value in column "title" violates not-null constraint]
name: 'error',
length: 287,
severity: 'ERROR',
code: '23502',
detail: 'Failing row contains (36284, null, null, 0, 2014-05-21 17:11:56.119877+03, null, o7d3qkV7Dk, null, null, 0, null, null, null, Mx0IyT, f, null).',
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
file: 'execMain.c',
line: '1611',
routine: 'ExecConstraints' }
Answers:
username_1: as @Rauno56 suggests, that's my solution:
```js
function getErrorMessage(e) {
  return JSON.parse(JSON.stringify(e, ['message'])).message;
}
```
Status: Issue closed
|
dart-lang/sdk | 168436820 | Title: Incorrect type inference when swapping type parameters in a constructor call
Question:
username_0: The following code (inspired by HashBiMap.inverse from https://github.com/google/quiver-dart/blob/master/lib/src/collection/bimap.dart) produces strong mode errors:
```dart
class Pair<T, U> {
  T t;
  U u;
  Pair(this.t, this.u);
  Pair<U, T> get reversed => new Pair(u, t);
}
```
The errors are:
- At `new Pair(u, t)`: `The return type 'Pair<U, U>' is not a 'Pair<U, T>', as defined by the method 'reversed'.`
- At the `t` in `new Pair(u, t)`: `The argument type 'T' cannot be assigned to the parameter type 'U'.`
It seems like incorrect type parameters are being inferred for the call to the `Pair` constructor.
This behavior started happening in 8d2c6d033afd3427a28b3e0dc2d53649db404db8.
Answers:
username_1: @username_2
username_2: good catch!
it looks like an incorrect capture bug.
I wonder if we have a similar bug with generic functions. We certainly tried very hard to avoid this class of bug in them :)
If not, there's a few possibilities on what could be going on. Constructors have some notable differences:
- their downward inference uses matchType instead of the type variable constraint solver used for inference on generic functions/methods. (This is currently a TODO in the code)
- they go through some contortions to get back to the "uninstantiated" constructor function type, which generic functions/methods don't need to worry about (this one might be a TODO as well :) )
- we never fully refactored interface type generics to avoid capture like we did with function types
Anyway I'll dig in, but that's kind of a scary bug. Hopefully it's something little and not "omg this is fundamentally broken" :)
username_2: Couldn't reproduce this with generic functions, so it seems specific to constructors (which makes sense)
username_2: got it. Our substitution for the "extends" clause w/ recursive bounds was incorrectly affecting inference for this. Fix on the way
username_2: https://codereview.chromium.org/2205993002/
Status: Issue closed
username_3: Possibly related: https://codereview.chromium.org/2376213003 |
nengo/nengo.github.io | 543520123 | Title: "nengo.exceptions.ConfigError: stateful is not a valid config parameter"
Question:
username_0: Is there a repo for the tutorials to make a PR/issue? I was running [this tutorial](https://www.nengo.ai/nengo-dl/examples/lmu.html) and I get the error above when running the following code.
```
nengo_dl.configure_settings(
trainable=None, stateful=False, keep_history=False,
)
```
Answers:
username_0: nvm it looks like I need to update nengo-dl
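For anyone landing here later: `stateful` and `keep_history` are only recognized by newer nengo-dl releases, so a quick version check is the first thing to try.
```python
# The LMU tutorial assumes a recent nengo-dl; if this prints an old version,
# `pip install --upgrade nengo-dl` and re-run the notebook.
import nengo_dl
print(nengo_dl.__version__)
```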
Status: Issue closed
|
coreos/go-systemd | 650078052 | Title: Add support for transient timers
Question:
username_0: `systemd-run` supports creating units that run on a schedule instead of immediately.
This is implemented by creating a timer unit alongside the service that executes the command.
While creating persistent timers and services is certainly an option, they create manual cleanup overhead.
It would be very useful to support such a feature in go-systemd, similar to `systemd-run`'s `--timer-property=` option.
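For reference, this is the CLI behaviour in question, driven from Python purely for illustration; the calendar spec and command are placeholders, while both flags are real `systemd-run` options:
```python
import subprocess

# systemd-run creates a transient timer unit plus a matching transient
# service unit; the timer fires the service on the given schedule and
# systemd cleans both up, with no persistent unit files to delete.
subprocess.run([
    'systemd-run',
    '--on-calendar=*-*-* 03:00:00',     # schedule for the transient timer
    '--timer-property=AccuracySec=1s',  # extra property applied to the timer
    '/usr/bin/true',                    # placeholder command for the service
], check=True)
```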
Similarly, systemd-run also supports creating transient socket and path units that launch the command service when triggered. These would also be useful. |
xtne6f/EDCB | 600331498 | Title: 番組表上右クリックでの予約追加で
Question:
username_0: When I right-click on the program guide to add a reservation, the "Add reservation (A)" item appears; when its "Show dialog (X) default" submenu opens on the right-hand side,
I can move the mouse cursor onto it and make a selection.
But when it opens on the left-hand side (https://i.imgur.com/h7EWM8z.jpg),
the submenu disappears the moment the mouse cursor leaves "Add reservation (A)", so it cannot be selected.
With the EpgTimer program guide placed at the far left edge of the desktop,
the submenu for the two leftmost channels opens on the right, but from the third channel onward it opens on the left and cannot be selected.
Is there a setting that fixes this?
Status: Issue closed |
JetBrains/kotlin-native | 349838156 | Title: Task compileKonanGreetingIos_x64 FAILURE
Question:
username_0:
```
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 9s
9 actionable tasks: 2 executed, 7 up-to-date
```
Answers:
username_1: Probably, the same issue as #1846 since you're using Java 10.
username_0: I tried installing java8
The error is the same
username_2: Did you fiddle with `-miphoneos-version-min`?
Error is pretty clear: `ld: targeted OS version does not support use of thread local variables in _kfun:kotlin.text.checkRadix$stdlib(kotlin.Int)ValueType for architecture x86_64`.
username_3: I have the same error when compiling with Xcode 10.
I had to uninstall it completely (switching to Command Line Tools 9 did not solve the problem) ... and go back to Xcode 9.
username_0: @username_3 Thanks, you've solved my problem
Status: Issue closed
username_4: Hi guys, rolling back to Xcode 9 works fine indeed, however do you happen to have a solution for Xcode 10? Thanks! |
tessgi/TessGiWebsite | 638385209 | Title: New items hyperlink does not appear correctly
Question:
username_0: A hyperlink in the news item that we [recently posted](https://tessgi.github.io/TessGiWebsite/tess-weekly-bulletin-june-8th.html) does not render correctly.
I will assign this to myself and fix, but **please do not push pages to the master branch that are have errors**, or at least check that the site works and correct those errors. This very nearly ended up on the live website.<issue_closed>
Status: Issue closed |
mhenrixon/active_campaign | 185078454 | Title: JSON::ParserError: A JSON text must at least contain two octets!
Question:
username_0: I'm using active_campaign version 0.1.14 and I have an issue:
```
@client = ::ActiveCampaign::Client.new(api_endpoint: ENV['ACTIVE_CAMPAIGN_ENDPOINT'], api_key: ENV['ACTIVE_CAMPAIGN_KEY'])
params = {
:query => { :listid => 1 },
:id => 1,
:email=>'<EMAIL>',
:ip4 => '127.0.0.1'
}
@client.contact_add(params)
```
Error log:
```
JSON::ParserError: A JSON text must at least contain two octets!
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/json_pure-1.8.3/lib/json/common.rb:155:in `initialize'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/json_pure-1.8.3/lib/json/common.rb:155:in `new'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/json_pure-1.8.3/lib/json/common.rb:155:in `parse'
from /Users/julian/.rvm/gems/ruby-2.3.0@skilltracker/gems/active_campaign-0.1.14/lib/active_campaign/client.rb:77:in `request'
from /Users/julian/.rvm/gems/ruby-2.3.0@skilltracker/gems/active_campaign-0.1.14/lib/active_campaign/client.rb:51:in `get'
from /Users/julian/.rvm/gems/ruby-2.3.0@skilltracker/gems/active_campaign-0.1.14/lib/active_campaign/method_creator.rb:28:in `block in define_api_method'
from /Users/julian/Documents/vinova/skilltracker-backend/app/services/service/active_campaign.rb:17:in `contact_add'
from (irb):14
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/railties-4.2.5.1/lib/rails/commands/console.rb:110:in `start'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/railties-4.2.5.1/lib/rails/commands/console.rb:9:in `start'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/railties-4.2.5.1/lib/rails/commands/commands_tasks.rb:68:in `console'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/railties-4.2.5.1/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/railties-4.2.5.1/lib/rails/commands.rb:17:in `<top (required)>'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:274:in `require'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:274:in `block in require'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:240:in `load_dependency'
... 2 levels...
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `load'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `block in load'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:240:in `load_dependency'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/activesupport-4.2.5.1/lib/active_support/dependencies.rb:268:in `load'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/commands/rails.rb:6:in `call'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/command_wrapper.rb:38:in `call'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/application.rb:191:in `block in serve'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/application.rb:161:in `fork'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/application.rb:161:in `serve'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/application.rb:131:in `block in run'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/application.rb:125:in `loop'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/application.rb:125:in `run'
from /Users/julian/.rvm/gems/ruby-2.3.0@global/gems/spring-1.7.1/lib/spring/application/boot.rb:19:in `<top (required)>'
from /Users/julian/.rvm/rubies/ruby-2.3.0/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /Users/julian/.rvm/rubies/ruby-2.3.0/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
```
Answers:
username_1: @username_0 What does your endpoint URL look like? I just got this and as it turns out you can't simply copy and paste the URL from the ActiveCampaign settings, you have to stick `/admin/api.php` onto the end of it.
username_2: This worked for me as well. Is it required? If so I'd be happy to open a PR updating the documentation.
username_1: @username_2 Yes, it does seem to be required, otherwise it doesn't work.
Status: Issue closed
|
theCrag/website | 464410321 | Title: Grade Distribution Chart: Japan does not show grade distribution (Firefox)
Question:
username_0: **What happened?**
https://www.thecrag.com/climbing/japan does not show a grade distribution chart - it seems to be there (hover over is shown) - it works for other countries though and it works on mobile and on Chrome but not on Firefox.
<img width="1164" alt="Screenshot 2019-07-04 at 18 03 31" src="https://user-images.githubusercontent.com/19162952/60689724-0ea4d300-9e86-11e9-972a-3c8bb5f41dca.png">
**What you expected:**
Grade Distribution Chart as for all other countries.
Answers:
username_0: Here's another one, for Europe: https://www.thecrag.com/climbing/europe
<img width="895" alt="Screenshot 2019-07-31 at 18 01 25" src="https://user-images.githubusercontent.com/19162952/62254998-ffe80800-b3c0-11e9-9172-0f96d94b8148.png">
username_0: and one for Zillertal: https://www.thecrag.com/climbing/austria/area/142177053
<img width="971" alt="Screenshot 2019-07-31 at 18 29 02" src="https://user-images.githubusercontent.com/19162952/62255034-2148f400-b3c1-11e9-9b0b-35184125997f.png">
username_1: I confirm this issue.
username_2: @username_1 it is only happening with some browsers. Do you have access to other browser versions? We do not know why some browsers do not display it properly.
username_1: - Does not work with Firefox 60.6.3esr and 68.0.1
+ Works with Internet Explorer 11.0.9600.19377
username_1: Why do you use the `<b>` tag to set the background and not the `<span>` tag that contains the grade text?

username_3: Fixed:
https://brendan.thecrag.com/climbing/japan
Status: Issue closed
|
NREL/SAM | 749025616 | Title: Battery can charge from fuel cell shows up on FOM UI options without a Fuel Cell
Question:
username_0: PV Battery Single Owner should not have "Battery can charge from fuel cell" displayed:

Ensure only proper UI options are displayed.
@cpaulgilman or @sjanzou Does "Is DC-connected for show-hide" belong in the release?
Answers:
username_0: More issues on this UI page:
AC connected allows clipped system power as a UI option, but not grid charging. This seems backwards.
Cycle degradation penalty and other automated dispatch options should be greyed out for "manual dispatch" and "dispatch to custom time series"
username_0: Similarly, manual dispatch is not greyed out when choosing an automated option.
username_0: Looks like a bad merge added the characters "28" to the front of a line in the callback script, which introduced this issue. Fixed in https://github.com/NREL/SAM/commit/b5e139882b6b6ff416d22ae7645ba3cd45693b4c
Status: Issue closed
|
release-it/release-it | 798368550 | Title: Unable to implement custom version format.
Question:
username_0: I am trying to implement a custom version format, but the call to "parseVersion" in tasks.js attempts semver parsing, which fails in my case.
Answers:
username_0: My implementation is also based on a plugin. I think `release-it-calver-plugin` works because the resulting version string can be parsed to a semver via the call to `semver.coerce`.
I would suggest extending the plugin interface so that it returns `{version, isPreRelease, preReleaseId}`. parseVersion can then move to Version.js as the default.
Plugins that want to implement a completely custom versioning scheme can then be fully functional.
username_1: Now I look at the implementation of the calver plugin I'm surprised it works. E.g.:
```
❯ semver -c 21.1.0.0-dev.0
21.1.0
```
Which "works", but not as intended. Now `npm publish` or `git tag` will use `21.1.0` as (sem) version.
In any case, feel free to open a PR with your ideas for release-it and I'm happy to look into it. |
moriazat/stitcher | 77106973 | Title: Wrong file size when overwriting on oversized file
Question:
username_0: Overwriting a file that is larger than the expected stitching result leads to the file keeping its original (larger) size.
For example, if the result of a stitching task is 1 MB and the program overwrites a 100 MB file, the resulting file will still be 100 MB.
juliuscanute/qr_code_scanner | 972334234 | Title: [BUG] : WEB LateInitializationError: Field '_channel' has not been initialized
Question:
username_0: Uncaught (in promise) Error: LateInitializationError: Field '_channel' has not been initialized.
at Object.throw_ [as throw] (errors.dart:251)
at qr_code_scanner._QRViewState.new.get [_channel] (qr_code_scanner.dart:61)
at qr_code_scanner._QRViewState.new.updateDimensions (qr_code_scanner.dart:91)
at updateDimensions.next (<anonymous>)
at runBody (async_patch.dart:84)
at Object._async [as async] (async_patch.dart:123)
at qr_code_scanner._QRViewState.new.updateDimensions (qr_code_scanner.dart:89)
at qr_code_scanner._QRViewState.new.onNotification (qr_code_scanner.dart:96)
at NotificationListener.new.[_dispatch] (notification_listener.dart:206)
at size_changed_layout_notifier.SizeChangedLayoutNotification.new.visitAncestor (notification_listener.dart:122)
at framework.SingleChildRenderObjectElement.new.visitAncestorElements (framework.dart:4133)
at size_changed_layout_notifier.SizeChangedLayoutNotification.new.dispatch (notification_listener.dart:138)
at size_changed_layout_notifier._RenderSizeChangedWithCallback.new.onLayoutChangedCallback (size_changed_layout_notifier.dart:64)
at size_changed_layout_notifier._RenderSizeChangedWithCallback.new.performLayout (size_changed_layout_notifier.dart:92)
at size_changed_layout_notifier._RenderSizeChangedWithCallback.new.layout (object.dart:1858)
at layoutChild (layout_helper.dart:56)
at stack.RenderStack.new.[_computeSize] (stack.dart:570)
at stack.RenderStack.new.performLayout (stack.dart:597)
at stack.RenderStack.new.layout (object.dart:1858)
at layoutChild (layout_helper.dart:56)
at stack.RenderStack.new.[_computeSize] (stack.dart:570)
at stack.RenderStack.new.performLayout (stack.dart:597)
at stack.RenderStack.new.layout (object.dart:1858)
at proxy_box.RenderDecoratedBox.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderDecoratedBox.new.layout (object.dart:1858)
at proxy_box.RenderConstrainedBox.new.performLayout (proxy_box.dart:277)
at proxy_box.RenderConstrainedBox.new.layout (object.dart:1858)
at shifted_box.RenderPadding.new.performLayout (shifted_box.dart:233)
at shifted_box.RenderPadding.new.layout (object.dart:1858)
at scaffold$._ScaffoldLayout.new.layoutChild (custom_layout.dart:171)
at scaffold$._ScaffoldLayout.new.performLayout (scaffold.dart:1097)
at scaffold$._ScaffoldLayout.new.[_callPerformLayout] (custom_layout.dart:240)
at custom_layout.RenderCustomMultiChildLayoutBox.new.performLayout (custom_layout.dart:404)
at custom_layout.RenderCustomMultiChildLayoutBox.new.layout (object.dart:1858)
at material._RenderInkFeatures.new.performLayout (proxy_box.dart:116)
at material._RenderInkFeatures.new.layout (object.dart:1858)
at proxy_box.RenderPhysicalModel.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderPhysicalModel.new.performLayout (proxy_box.dart:1388)
at proxy_box.RenderPhysicalModel.new.layout (object.dart:1858)
at proxy_box.RenderSemanticsAnnotations.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderSemanticsAnnotations.new.layout (object.dart:1858)
at proxy_box.RenderRepaintBoundary.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderRepaintBoundary.new.layout (object.dart:1858)
at proxy_box.RenderIgnorePointer.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderIgnorePointer.new.layout (object.dart:1858)
at proxy_box.RenderRepaintBoundary.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderRepaintBoundary.new.layout (object.dart:1858)
at routes._RenderFocusTrap.new.performLayout (proxy_box.dart:116)
at routes._RenderFocusTrap.new.layout (object.dart:1858)
at proxy_box.RenderSemanticsAnnotations.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderSemanticsAnnotations.new.layout (object.dart:1858)
at proxy_box.RenderOffstage.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderOffstage.new.performLayout (proxy_box.dart:3422)
at proxy_box.RenderOffstage.new.layout (object.dart:1858)
at proxy_box.RenderSemanticsAnnotations.new.performLayout (proxy_box.dart:116)
at proxy_box.RenderSemanticsAnnotations.new.layout (object.dart:1858)
at overlay$._RenderTheatre.new.performLayout (overlay.dart:745)
at overlay$._RenderTheatre.new.layout (object.dart:1858)
at proxy_box.RenderSemanticsAnnotations.new.performLayout (proxy_box.dart:116)
[Truncated]
at overlay._RenderTheatre.new.performLayout (overlay.dart:741)
at overlay._RenderTheatre.new.layout (object.dart:1858)
at view.RenderView.new.performLayout (view.dart:165)
at view.RenderView.new.[_layoutWithoutResize] (object.dart:1713)
at object$.PipelineOwner.new.flushLayout (object.dart:885)
at binding$5.WidgetsFlutterBinding.new.drawFrame (binding.dart:464)
at binding$5.WidgetsFlutterBinding.new.drawFrame (binding.dart:878)
at binding$5.WidgetsFlutterBinding.new.[_handlePersistentFrameCallback] (binding.dart:330)
at binding$5.WidgetsFlutterBinding.new.[_invokeFrameCallback] (binding.dart:1143)
at binding$5.WidgetsFlutterBinding.new.handleDrawFrame (binding.dart:1080)
at binding$5.WidgetsFlutterBinding.new.[_handleDrawFrame] (binding.dart:996)
at Object.invoke (platform_dispatcher.dart:974)
at _engine.EnginePlatformDispatcher.__.invokeOnDrawFrame (platform_dispatcher.dart:151)
at engine.dart:445
Environment: Google Chrome is up to date
Version 92.0.4515.159 (Official Build) (64-bit)
Permissions have been granted
Status: Issue closed
Answers:
username_1: This is an issue with the flutter framework and a fix has already been merged in the master channel. See https://github.com/flutter/flutter/issues/83618 |
rParslow/TeamWhisky | 234422019 | Title: URBAN BAR Julep Cup
Question:
username_0: URBAN BAR Julep Cup<br>
http://ift.tt/2rY3AR2<br>
#TeamWhisky URBAN BAR Julep Cup Cup Angleterre LMDW http://ift.tt/2rY3AR2 12,90 € <img src="http://ift.tt/2sVVLZf"><br><br>
via Fishing Reports http://ift.tt/2dm5cfF<br>
June 08, 2017 at 08:12AM |
i3/i3 | 197719261 | Title: feature request : show empty workspaces
Question:
username_0: Is there a way to show all workspaces, even empty ones?
Answers:
username_1: Empty workspaces don't exist unless they're the active workspace on an output. i3 discards empty workspaces and creates them when needed.
This is a core part of the i3 concept and we've discussed before that we're not looking to change it.
Status: Issue closed
username_0: https://github.com/username_1/i3/issues/117
If somebody catches this and wants to try a pull request for it 👍
cinghie/yii2-user-extended | 215198777 | Title: advanced admin forbidden
Question:
username_0: Hello, I added this code in backend/config/main.php:
```php
'controllerMap' => [
'admin' => 'username_1\yii2userextended\controllers\AdminController',
'settings' => 'username_1\yii2userextended\controllers\SettingsController',
],
```
Now the admin page is forbidden. Without the controllerMap I can access the page and manage users, but then without the extended module, which means without name, surname, etc.
What can I do?
This is the whole backend module config:
```php
'user' => [
'as backend' => 'dektrium\user\filters\BackendFilter',
'class' => 'dektrium\user\Module',
'cost' => 34,
'admins' => ['admin'],
'enableRegistration' => false,
'controllerMap' => [
'admin' => 'username_1\yii2userextended\controllers\AdminController',
'settings' => 'username_1\yii2userextended\controllers\SettingsController',
],
// Yii2 User Models Overrides
'modelMap' => [
'Profile' => 'username_1\yii2userextended\models\Profile',
'SettingsForm' => 'username_1\yii2userextended\models\SettingsForm',
'User' => 'username_1\yii2userextended\models\User',
],
],
'userextended' => [
'class' => 'username_1\yii2userextended\Module',
'avatarPath' => '@webroot/img/users/', // Path to your avatar files
'avatarURL' => '@web/img/users/', // Url to your avatar files
'showTitles' => false, // Set false in adminLTE
],
'rbac' => 'dektrium\rbac\RbacWebModule',
```
Answers:
username_1: Hi @username_0,
check whether your user has the 'admin' role, which is required to access the view
Status: Issue closed
|
Possseidon/dang-lib | 756537569 | Title: Use Doxygen comments instead of verbose Xml Doc Comments.
Question:
username_0: Currently Xml Doc Comments are used for code documentation. They are extremely verbose and annoying to edit and maintain. Visual Studio uses them by default and generates them automatically for an entity (function, class, ...) when you type `///` in front of it.
There is a variety of different less verbose styles you can get Visual Studio to use:
## Xml Doc Comments (current)
```cpp
/// <summary>
/// Adds two numbers.
/// </summary>
/// <param name="a">The first operand.</param>
/// <param name="b">The second operand.</param>
/// <returns>The sum of both operands.</returns>
int add(int a, int b);
```
## Doxygen (///)
```cpp
/// @brief Adds two numbers.
/// @param a The first operand.
/// @param b The second operand.
/// @return The sum of both operands.
int add(int a, int b);
```
Alternatively `//!` can also be used.
There are also two options using multi-line comments (namely `/**` and `/*!`), but they don't auto-generate, unfortunately.
That leaves the Doxygen styles `///` and `//!` as the remaining possibilities, of which I have chosen `///` because it's easier to type.
Status: Issue closed
Answers:
username_0: All comments have been converted to the Doxygen `///` style. |
Multiverse/Multiverse-Core | 51852337 | Title: Spigot 1.8 /mv remove farm
Question:
username_0: Since the Spigot 1.8 update, every time I try to remove the farm world it says: "[Multiverse-Core] World 'farm' could not be unloaded. Is it a default world?" This happens no matter which world I try to remove... I also tried removing the world from the config, but if I regenerate the world and try to remove it again, the same error appears. Maybe you could update it.
Btw. sorry for my bad english, im a stupid german guy :D<issue_closed>
Status: Issue closed |
solgenomics/sgn | 220579222 | Title: trial tree doesnt load in cassavabase
Question:
username_0: <img width="512" alt="image" src="https://cloud.githubusercontent.com/assets/12039071/24853867/5f37771c-1dd4-11e7-833f-a208104a0855.png">
Answers:
username_1: It's in all databases. I've seen it in Yambase as well.
username_2: It works now in cassavabase.org, I refreshed it.
username_0: Same here, thanks @username_2
Status: Issue closed
username_1: Still not working in yam test. |
lazywithclass/winston-cloudwatch | 496346723 | Title: CloudWatch LogGroups Not Getting Created
Question:
username_0: My designated CloudWatch groups are not getting created. I see the logs coming through in the console log correctly, but they never show up in CloudWatch. This is my configuration.
```
const env = process.env.NODE_ENV || 'local'
const config = require('../../config/env.json')[env]
const winston = require('winston')
const WinstonCloudwatch = require('winston-cloudwatch')
const crypto = require('crypto')
let startTime = new Date().toISOString()
const logger = winston.createLogger({
exitOnError: false,
level: 'info',
transports: [
new winston.transports.Console({
json: true,
colorize: true,
level: 'info'
}),
new WinstonCloudwatch({
awsAccessKeyId: config.aws.accessKeyId,
awsSecretKey: config.aws.secretAccessKey,
logGroupName: 'my-api-' + env,
logStreamName: function () {
// Spread log streams across dates as the server stays up
let date = new Date().toISOString().split('T')[0]
return 'key-collector-requests-' + date + '-' +
crypto.createHash('md5')
.update(startTime)
.digest('hex')
},
awsRegion: 'us-east-1',
jsonMessage: true
})
]
})
const winstonStream = {
write: (message, encoding) => {
// use the 'info' log level so the output will be picked up by both transports
logger.info(message)
}
}
module.exports.logger = logger
module.exports.winstonStream = winstonStream
```
Then within my express app.
```
const morgan = require('morgan')
const { winstonStream } = require('./providers/loggers')
app.use(morgan('combined', { stream: winstonStream }))
```
Answers:
username_1: What version are you using?
Also to help in debugging the problem reduce your configuration to the simplest form possible, so for example remove express and just run the bit that sends logs to AWS.
username_0: I'm running the following versions.
```
"winston": "^3.2.1",
"winston-cloudwatch": "^2.0.9"
```
I'm not sure how I would remove express and still test the logging? The logging works when I run express locally so I'm assuming this has something to do with AWS permissions and is probably not even appropriate for this area, but I figured I would see if someone had an idea and figured it would be a good place to have it documented if someone else had the same issue in the future.
username_0: I'm not positive as I haven't implemented it yet on this project, but I believe the issue has to do with the Lambda being in a VPC. When a lambda is in a VPC it loses access to the internet. In order to allow it to have access to the internet you must perform the tasks outlined in this article.
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
When I run this locally it has access to the internet and is thus able to create the logs. Hopefully once I implement this it will work correctly.
username_1: I suggested removing express to start narrowing things down: if the code works outside your deployment environment, then you know where the problem is.
So from your second comment I take it you saw logs flowing in outside the VPC, right?
I have very little experience with VPCs, but I remember this project being used successfully in the past by someone who was using Lambda; `ktxhbye` was added to support a need they had.
username_0: Confirming that the problem was related to the lambda function being in a VPC and not granted public access to the internet through Subnets, Route Tables, NAT and Internet Gateways as described within this post. https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7
username_1: Great!
Status: Issue closed
|
nyu-devops-fall19-wishlists/wishlists | 528328361 | Title: Add Travis Slack notification hook
Question:
username_0: **As a** developer
**I need** to integrate Travis checks with Slack
**So that** my team is aware of the status of the code at any time
**Assumptions:**
* (None)
**Acceptance Criteria:**
```
Given a new branch
When a Pull Request is done
Then I should see a notification in Slack assessing whether the tests are passing
```<issue_closed>
Status: Issue closed |
commitizen/cz-cli | 358371144 | Title: git-cz doesn't finish, doesn't return the prompt
Question:
username_0: Hi!
Since one of the recent updates (sadly I don't know which one exactly), when I run git-cz the commit is created as expected, and the pre-commit hook also runs (if there is one), but commitizen never exits after it's done. I also tried `git-cz --retry` and it works fine; there is no issue there.
There was a ticket addressing the same issue, but its creator claims that he solved it by updating his node version: https://github.com/commitizen/cz-cli/issues/545
I'm using node 10.10.0, npm 6.4.1, windows 7, commitizen 2.10.1 and cz-convencional-changelog 2.1.0.
I've tried using it through visual studio code's integrated terminal and windows command line.
There are no error messages, so this is how far I got.
Answers:
username_0: Since `git-cz --retry` doesn't display options and it works fine, I have a hunch that the real culprit might be the [Inquirer](https://github.com/SBoudrias/Inquirer.js) package.
username_0: Possible related tickets on Inquirer side: https://github.com/SBoudrias/Inquirer.js/issues/667
username_0: I've just realized that commitizen uses Inquirer version `1.2.3`, while the latest version is `6.2.0`. `1.2.3` was released on Nov 13, 2016. It might be worth trying to update it.
username_0: On mac it works fine.
username_0: Now the battery on my windows laptop is about to die on low charge and commitizen finished without any issue whatsoever. Is there a race condition somewhere in the code that causes this behavior?
username_0: The power is back on and commitizen doesn't halt again.
username_0: And now with a shorter commit, it finished correctly...
username_0: Thank you for the information! I'll recheck this as soon as I can and I'll give feedback on whether the issue still present or not.
username_1: Cheers.
username_0: Updated commitizen to `3.0.2` and all works fine! Closing ticket.
Status: Issue closed
|
gaenseklein/slidenotes | 351141513 | Title: "_" im Markdown-Spec is parsed like "*"
Question:
username_0: ...to be able to write
Answers:
username_1: We'll get into deep trouble if we start making the syntax context-dependent. But if you absolutely want it, then I would extend the syntax so that instead of a plain "`_`" we would in the future need " \_" (with a leading space), and "\_ " (with a trailing space) to close.
Are you sure you want that, though? What comes next? `*` handled exactly the same way? What about "`__`"? Or the characters for code? "\`"
And do we actually want to build in character escaping? It isn't implemented at all yet. The escaping standard in GitHub Markdown is \\\* \\\_ \\\` etc.
username_0: I just wanted to throw the _ character out of the spec. Nothing about interpreting character sequences context-dependently.
>
username_1: OK, so "\_" gets thrown out completely? I can live with that. But it only helps for this one use case and annoys users who wanted to use it because they're used to it from GitHub etc.
username_0: We can live with that for now, I think. That's how it is with Markdown: multiple specs exist, each optimized for its own use cases. I'm almost surprised that GitHub Flavored Markdown uses _
>
username_1: I wouldn't remove it outright, but would rather make it selectable, i.e. whether you want to use \* or \_. Then users can decide for themselves what works better for them. I also think that using both is nonsense, but if you use \* a lot, for example, you don't want to escape it all the time. E.g. with gendered language: Lehrer\*In
username_0: Good point. Let's do that.
>
username_1: Solved via a plugin. Asterisks and underscores can now be deselected under globalOptions when the "switch" plugin is enabled. The GUI side can of course still be improved, but that doesn't matter here.
For now I have only made asterisks and underscores deselectable globally, meaning: if asterisks are deselected, double asterisks don't work either, and so on.
If you'd rather have them deselectable individually, let me know; otherwise I'll close the issue.
username_1: Well, if you'd like that, then I'd rather program it in now and finish it off than put it on the back burner and forget about it.
username_0: Nah, I would treat later rework or further development of this area as a feature request and only touch it again on user request.
>
username_1: OK, then I'll close the issue now. It works as it is.
Status: Issue closed
|
pandas-dev/pandas | 235482614 | Title: Convert ndarray to datafram
Question:
username_0: #### Description
Consider the 3D array
```
a = array([[[ 0, 1, 2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]]])
```
which I wish to convert into a pandas DataFrame, so that each dimension of the ndarray above gets mapped onto a column name. Viz,
```
A B C val
0 0 0 0 0
1 0 0 1 1
2 0 0 2 2
3 0 1 0 3
4 0 1 1 4
5 0 1 2 5
6 1 0 0 6
7 1 0 1 7
8 1 0 2 8
9 1 1 0 9
10 1 1 1 10
11 1 1 2 11
```
What's the "pandas" way to do this? At the moment this is what I'm doing:
```
def ndarray2dataframe(arr, dim_names=None, dim_values=None,
target_dim_name="val", keep_dtypes=False):
"""Converts an ndarray of shape (d_1,...,d_n) into a dataframe with n + 1
columns, containing the same number of rows as there are elements in the
input array, and for all index tuples (i_1, ..., i_n) in
[0...d_1] x ... x [0...d_n], we have
arr[i_1,...,i_n] == df.loc[mask].iloc[0]["val"],
where mask := (df[df.columns[0]] == i_1) & ... & (df[df.columns[n-1]] == i_n)
Examples
--------
In [50]: utils.ndarray2dataframe(arange(2 * 2 * 3).reshape((2, 2, 3)))
Out[50]:
0 1 2 val
0 0 0 0 0
1 0 0 1 1
2 0 0 2 2
3 0 1 0 3
4 0 1 1 4
5 0 1 2 5
6 1 0 0 6
7 1 0 1 7
8 1 0 2 8
9 1 1 0 9
10 1 1 1 10
11 1 1 2 11
[Truncated]
# Form meshgrid with matricial indexing="ij" (N.B.: "xy" indexing screws
# everything up)
grid = np.meshgrid(*dim_values, indexing="ij", copy=False)
# Explode everything
data = np.reshape(arr, grid[0].shape).ravel().tolist()
grid = list(map(np.ravel, grid))
data = grid + [data]
df = pd.DataFrame(np.transpose(data), columns=columns)
# Pandas thinks it's smarter than us :)
df[target_dim_name] = df[target_dim_name].astype(arr.dtype)
if keep_dtypes:
for column, these_dim_values in zip(dim_names, dim_values):
df[column] = df[column].astype(type(these_dim_values[0]))
return df
```
I just wanted to be sure I wasn't reinventing the wheel.
Answers:
username_1: ```
In [6]: Series(a.ravel(), pd.MultiIndex.from_product([ range(n) for n in a.shape ]))
Out[6]:
0 0 0 0
1 1
2 2
1 0 3
1 4
2 5
1 0 0 6
1 7
2 8
1 0 9
1 10
2 11
dtype: int64
```
though not sure if this actually generalizes to your problem. SO might be a better forum for things like this.
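If the flat column layout from the question is what's needed, the same series expands back out with `reset_index`; the index names below are chosen to match the example:
```python
import numpy as np
import pandas as pd

a = np.arange(12).reshape(2, 2, 3)
s = pd.Series(a.ravel(),
              index=pd.MultiIndex.from_product([range(n) for n in a.shape],
                                               names=['A', 'B', 'C']))
df = s.rename('val').reset_index()  # columns A, B, C, val as in the question
```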
Status: Issue closed
|
department-of-veterans-affairs/vets-website | 196983720 | Title: Refills left column- 0 should be red
Question:
username_0: Rx Refill->Refill prescriptions tab->List view
0 refills: the color should change to red; it currently still shows as black.
<issue_closed>
Status: Issue closed |
cgeo/cgeo | 86186263 | Title: Requests to GK(M) for unsupported caches
Question:
username_0: 2015.06.03-RC1:
I had several GPX imports on my device for lab caches and custom-made caches (not referring to a known geocode).
I would not expect these caches to be checked against GK or GKM for inventory, as these geocodes are not supported for carrying GKs.
```
15:51:39.327 Debug cgeo 11463 [ModernAsyncTask #5] GeokretyConnector.searchTrackables: wpt=LABYRINTH-FESTUM
15:51:39.327 Debug cgeo 11463 [ModernAsyncTask #5] GET http://api.geokretymap.org/export2.php?wpt=LABYRINTH-FESTUM
15:51:39.437 Debug cgeo 11463 [main] Loading LABYRINTH-FESTUM Labyrinth-Festum (41) from DB
15:51:39.657 Debug cgeo 11463 [ModernAsyncTask #5] 200 (352 ms) GET http://api.geokretymap.org/export2.php?wpt=LABYRINTH-FESTUM
15:51:39.767 Debug cgeo 11463 [main] Loading LABYRINTH-FESTUM Labyrinth-Festum (41) from DB
```
Answers:
username_0: Duplicate of #4929
Status: Issue closed
|
mapbox/earcut.hpp | 109732643 | Title: Don't discard unused vertices
Question:
username_0: Discarding unused vertices makes earcut.hpp output unworkable for mapbox-gl-native due to the following:
* Fill outlines use `GL_LINES` with indexed vertices
* This requires knowing which vertex indexes correspond to which polygon rings
* We want to share the same vertex indices with fill triangles
* Therefore the indexes used by earcut need to correspond to the input vertex order -- no removal of vertices unused by triangulation
The JS implementation does not remove unused vertices.
cc @username_1 @kkaefer to double check my logic.
Answers:
username_1: Yes, after I switched the JS implementation to indexed output, the output indices now reference the original input array. Unused vertices are a rare occurrence anyway, and reusing the input array makes performance better.
Status: Issue closed
username_0: Fixed by #10. |
kaleidopop/Development_1 | 117867822 | Title: Pushed
Question:
username_0: Hey guys,
I pushed some code and a data.frame to GitHub. Let me know if you can see it. The thing is messy; my laptop still crashes RStudio when GitHub is up =( Hope whatever I pushed makes sense and you can work with it.
keithballinger/ToDoList | 1165622852 | Title: Stop war in Ukraine Текст:
Question:
username_0: While Ukraine is under missile attacks GitHub could be used by Russians to develop apps and platforms aiming to destabilize Ukrainian web resources.
Please, prevent these actions and don't stay on the same side with invaders! All information about war can be found at: https://war.ukraine.ua/
We urge you to close GitHub for Russia and its developers! We value your support and we are in need for your actions! |
spring-cloud/spring-cloud-commons | 615002155 | Title: Beans declared in Bootstrap configuration are not destroyed
Question:
username_0: When defining a bean in the bootstrap configuration, its destroy method is not called at shutdown.
Sample:
```
@Configuration
public class BootstrapConfiguration {
@Bean(destroyMethod = "destroy")
public Amazing amazing() {
return new Amazing();
}
public static class Amazing {
public void destroy() {
System.out.println("Destroyed!");
}
}
}
```
spring.factories
```
org.springframework.cloud.bootstrap.BootstrapConfiguration=org.github.test.BootstrapConfiguration
```
I've found that the shutdown hook was disabled for the bootstrap context via this commit: https://github.com/spring-cloud/spring-cloud-commons/commit/96ec2d32ea0467abab26b33776667cb7c37533d3#diff-ffd4ed2e59b54e6acb4b4f3a088f9e31. I suppose this is the one that breaks the destroy mechanism.
Answers:
username_1: My guess is this will only be handled with our rework of bootstrap for 3.0
username_0: Okay.
Could you please check whether this could be a good workaround then?
https://github.com/testcontainers/testcontainers-spring-boot/pull/292/files
username_1: It seems like an OK workaround. In 3.0.x, bootstrap is no longer enabled by default.
Status: Issue closed
|
AgoraIO-Community/Agora-Flutter-Quickstart | 430083329 | Title: can we use this example to puild Video Broadcasting
Question:
username_0: Hi,
can we use this example to build Video Broadcasting?
thank you
Answers:
username_1: Yes,
`AgoraRtcEngine.setChannelProfile(ChannelProfile.LiveBroadcasting)` before join channel,
and `AgoraRtcEngine.setClientRole(ClientRole.Broadcaster)` for broadcaster and
`AgoraRtcEngine.setClientRole(ClientRole.Audience)` for audience.
username_2: does it support screen sharing ?
username_1: Screen sharing is not supported for mobile SDK yet, it depends ReplayKit on iOS and Media Projection on Android.
username_2: So is it possible to pass external video in flutter?
Like i have screen sharing code in flutter which provides live video frames how to pass this to agora?
thank you
username_1: Sorry but external media source is not supported.
username_3: Hello, thank you for this Flutter SDK Sample!
I'm trying to make live broadcast work,
here is the gist https://gist.github.com/username_3/e7228253734ba11cd6cecc5cc50488f3
It is simplified to join directly to a broadcast,
The Broadcaster seems to work (I can see the preview video) on the Audience side I only see black screen.
This is how I initialise the Audience side
```
Future<void> _initAgoraRtcEngine() async {
AgoraRtcEngine.create('xxxxxxxxxxxxxxxxxx');
AgoraRtcEngine.enableVideo();
AgoraRtcEngine.setChannelProfile(ChannelProfile.LiveBroadcasting);
AgoraRtcEngine.setClientRole(ClientRole.Audience);
AgoraRtcEngine.enableWebSdkInteroperability(true);
}
```
And then I render with
```
AgoraRtcEngine.createNativeView(1, (viewId) {
AgoraRtcEngine.setupRemoteVideo(viewId, VideoRenderMode.Fit, 1);
AgoraRtcEngine.joinChannel(null, 'flutter', null, 1);
})
```
And here is how I initialise the Host side,
```
Future<void> _initAgoraRtcEngine() async {
AgoraRtcEngine.create('xxxxxxxxxxxxxxxxxxxx');
AgoraRtcEngine.enableVideo();
AgoraRtcEngine.setChannelProfile(ChannelProfile.LiveBroadcasting);
AgoraRtcEngine.setClientRole(ClientRole.Broadcaster);
AgoraRtcEngine.enableWebSdkInteroperability(true);
}
```
and render
```
AgoraRtcEngine.createNativeView(0, (viewId) {
AgoraRtcEngine.setupLocalVideo(viewId, VideoRenderMode.Fit);
AgoraRtcEngine.startPreview();
AgoraRtcEngine.joinChannel(null, 'flutter', null, 0);
})
```
Any ideas why I'm not getting video in the Audience side ?
Thank you ❤️
username_1: Hi @username_3
Audience has to setupRemoteVideo with uid of broadcaster, which you can get in the `onUserJoined` callback.
username_1: ```Dart
AgoraRtcEngine.joinChannel(null, 'flutter', null, audienceUid);
AgoraRtcEngine.createNativeView(broadcasterUid, (viewId) {
AgoraRtcEngine.setupRemoteVideo(viewId, VideoRenderMode.Fit, broadcasterUid);
})
```
username_3: onUserJoined doesn't trigger on the Audience side
```
AgoraRtcEngine.setChannelProfile(ChannelProfile.LiveBroadcasting);
AgoraRtcEngine.setClientRole(ClientRole.Audience);
AgoraRtcEngine.onUserJoined = (int uid, int elapsed) {
_log("onUserJoined: uid: $uid elapsed: $elapsed");
};
AgoraRtcEngine.enableVideo();
_log('to join channel flutter with uid $_uid');
AgoraRtcEngine.leaveChannel();
AgoraRtcEngine.joinChannel(null, 'flutter', null, 0);
```
When is the event supposed to trigger?
I'm testing on a real device, in debug mode.
username_3: But it doesn't seem to work with the Flutter SDK
username_3: sorry it is working now, for some reason `onUserJoined` wasn't triggering but it is working.
thanks!
username_4: Hello username_3, in main.dart, should I call Host.dart or Audience.dart?
username_5: Could you provide a full example? I'm still getting a black screen as an audience member.
username_6: I tried setting up mine the way @username_3 did, but when I try to broadcast I get a black screen. @username_1, could you help?
username_7: @username_6 please follow instructions [here](https://github.com/AgoraIO-Community/Agora-Flutter-Quickstart#reporting-an-issue) to report an issue.
username_7: @username_6 i'll close this one. please provide details in the new ticket you just opened.
Status: Issue closed
username_8: @username_7 I tried to add AgoraRtcEngine.createNativeView but it doesn't set the render view.
username_9: Do you have any example of that? Please, I need to make a live broadcasting module in my app.
username_7: @username_9 is the sample within this repository not working for you?
username_9: where it support only one parameter.. if i remove the broadcaster or audience ID which is here 0 in example.. the code works fine but audience cannot see the breadcaster..
please help me
username_7: Sorry, I don't quite get it. Could you please show me how you changed the code, or share a sample project with me?
Um-Mitternacht/Bewitchment | 666989714 | Title: Juniper Chests cannot be used or moved?
Question:
username_0: Juniper Chests should be able to be moved, and items should stay in the chest when holding the Juniper Key that comes with it, right? Holding the key, and holding a key ring made with multiple keys for other Juniper chests, both failed to let items stay inside the chests. Breaking the chest repeatedly does nothing, as the chest stays a permanent block. I'm not sure if there is a certain method I'm missing or if it is a bug, but I'd appreciate any insight on this issue. Thanks!
### Version 1.12.2, Bewitchment 1.12.2-0.0.22.19, 1.12.2 Forge, Patchouli 1.0-21, and Baubles 1.12-1.15.2 used.
Answers:
username_1: The juniper chest acts like a normal chest, but is different in that if you don't have the key, you can't open or break the chest. Otherwise, it acts just like a normal chest.
username_0: Hi! Thanks for replying. I know theoretically this should be the case, but even when I am holding the key in my hand, it won’t let me to break the chest or even put items in it. It lets me see inside it but that’s about it. I’m just not sure why 😅
username_1: Make sure you have the right key, it's based off coordinates so, if you moved it and didn't get a new key, that can be an issue.
username_0: I do have the right key in hand while I try to use the chest. I am able to open the chest with the key, but anything I place inside it spawns back in my inventory immediately. I also cannot move/break the chest at all since first placing it, even with the key directly in my hand. I can take a video if that helps as well.
username_2: i personally cannot recreate this, could you perhaps demonstrate what you did? (and update to the latest version of bewitchment just in case)
Status: Issue closed
|
jlippold/tweakCompatible | 623583805 | Title: `VolumePercent` working on iOS 13.5
Question:
username_0: ```
{
"packageId": "com.gilshahar7.volumepercent",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.gilshahar7.volumepercent",
"deviceId": "iPhone9,3",
"url": "http://cydia.saurik.com/package/com.gilshahar7.volumepercent/",
"packageName": "VolumePercent",
"packageVersionIndexed": false,
"iOSVersion": "13.5",
"category": "Tweaks",
"repository": "Packix",
"name": "VolumePercent",
"installed": "1.0",
"packageInstalled": true,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.gilshahar7.volumepercent",
"commercial": false,
"packageIndexed": false,
"tweakCompatVersion": "tweakcompatible-zebra-1.1.5",
"shortDescription": "Displays the volume percentage on the stock volume HUD.",
"latest": "1.0",
"author": "gilshahar7",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
pichillilorenzo/flutter_inappbrowser | 430963439 | Title: Compatibility with other plugins
Question:
username_0: I am using the flutter_inappbrowser plugin in my project, as well as audioservice.
If I open a browser view before ever starting the audio service, all is okay, but if I start the audio service, then open the browser, the app crashes with the error:
```
08:54:03.581 145 info flutter.tools E/AndroidRuntime(31364): Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String android.content.Context.getPackageName()' on a null object reference
```
The call stack stops around here:
```
08:54:03.583 148 info flutter.tools E/AndroidRuntime(31364): at com.username_1.flutter_inappbrowser.InAppBrowserActivity.show(InAppBrowserActivity.java:309)
08:54:03.583 149 info flutter.tools E/AndroidRuntime(31364): at com.username_1.flutter_inappbrowser.InAppBrowserActivity.prepareView(InAppBrowserActivity.java:98)
08:54:03.583 150 info flutter.tools E/AndroidRuntime(31364): at com.username_1.flutter_inappbrowser.InAppBrowserActivity.onCreate(InAppBrowserActivity.java:69)
```
I suspect this could be related to audioservice requiring a custom MainApplication class, but I don't have much Android development experience, so any insight/help would be much appreciated.
audioservice plugin: https://github.com/ryanheise/audio_service
Status: Issue closed
Answers:
username_1: This should be fixed with the new version **1.2.0** |
fitzgen/bumpalo | 1097203848 | Title: `drain_filter` is private
Question:
username_0: The `Vec::drain_filter` method is private, even though the `DrainFilter` struct it returns is public (and its docs try to link to `Vec::drain_filter`, which fails because it is private). I assume this is an oversight? It is also missing documentation. |
pac4j/pac4j | 282035898 | Title: GenericOAuth20ProfileDefinition Can not set Id
Question:
username_0: https://github.com/apereo/cas/blob/master/support/cas-server-support-pac4j/src/main/java/org/apereo/cas/support/pac4j/config/support/authentication/Pac4jAuthenticationEventExecutionPlanConfiguration.java#L304
The client should be able to set the profile id.
Status: Issue closed |
visit-dav/visit | 414874320 | Title: Improve anti-aliasing support
Question:
username_0: Cyrus did a short test of aa features in VTK-6 and found some good results (at the expense of extra renders). See ticket #74 for more details. Cyrus also observed that our current aa appears to affect only annotations. We should consider what improved aa support we can now offer with VTK-6. It may be that most users kinda sorta handle this manually already by rendering to 4k x 4k images and then resizing to smaller. So, I honestly don't know how common a use case it would be to use anti-aliasing. Given that I don't recall seeing a single email on the topic, I have rated expected use as exceptional.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1719
Status: Pending
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Improve anti-aliasing support
Assigned to: -
Category: -
Target version: -
Author: <NAME>
Start: 01/29/2014
Due date:
% Done: 0%
Estimated time:
Created: 01/29/2014 12:53 pm
Updated: 02/10/2014 08:02 pm
Likelihood:
Severity:
Found in version: 2.12.3
Impact: 3 - Medium
Expected Use: 1 - Exceptional
OS: All
Support Group: Any
Description:
Comments: |
Alkantou/TicTacToe | 619857439 | Title: Install GTK for graphic library
Question:
username_0: Instructions are here:
https://python-gtk-3-tutorial.readthedocs.io/en/latest/install.html
Read carefully, and if you run into issues, don't give up; try to resolve them by googling.
- Do not install from source.
Success criteria:
Be able to run the "Hello World" example:
https://pygobject.readthedocs.io/en/latest/ |
amatsuda/traceroute | 374983307 | Title: NameError: uninitialized constant ActionDispatch::Routing::PathRedirect
Question:
username_0: When I run the `rake traceroute` command in my application, I get this error (Ruby 2.1.10):
NameError: uninitialized constant ActionDispatch::Routing::PathRedirect
Answers:
username_1: @username_0 Hmm. What's your Rails version?
username_0: Rails v3.2.13. I had to downgrade the gem version to fix the issue. I tried every version down to 0.5.0; the issue persisted in all of them until 0.5.0, which fixed it.
username_2: The `ActionDispatch::Routing::PathRedirect` class was added to Rails in 4.0; see https://github.com/rails/rails/commit/ec774983514d4ce1b593585ae14a17b730ee2c46 and https://apidock.com/rails/v4.0.2/ActionDispatch/Routing/PathRedirect.
In our case, we're using a custom constraint like the one described here: https://guides.rubyonrails.org/v3.2.13/routing.html#advanced-constraints; I suspect that's part of the cause, though I haven't been able to build a failing test case yet.
Status: Issue closed
username_1: @username_0 @username_2 Thank you for the hints! I guess I fixed this issue via e4b45c3e665a3403e65ebe15155898f8c5dfd8c6.
I'm not sure if I'd say we're still supporting Rails 3 though. I wrote this patch not because we still support Rails 3 but just because I could write the patch for this particular case.
If you're really maintaining an application on that legacy Rails, I recommend migrating to a newer supported version as soon as possible.
username_1: Just made a version 0.8.1 release with a fix for this issue. HTH
username_2: Thanks, that's great! I'm looking to use this gem as part of a "migrate off rails3" project :-)
username_1: Sounds great! Good luck on your project :) |
aws/aws-cli | 632781584 | Title: IAM database authentication with a global conditional context key:aws:CurrentTime can not allow only within Specific dates
Question:
username_0: Hello everyone,
I want to limit the time period during which IAM database authentication connections can be made.
Therefore, I connected to an Aurora PostgreSQL cluster with IAM database authentication enabled from an EC2 instance with the following IAM policy permissions, but I was unable to connect.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": "*",
"Condition": {
"DateGreaterThan": {
"aws:CurrentTime": "2020-06-06T00:00:00Z"
},
"DateLessThan": {
"aws:CurrentTime": "2020-06-07T23:59:59Z"
}
}
}
]
}
```
Of course, the current time falls within the specified time frame.
The execution of the command is as follows.
```
$ RDSHOST=【DB cluster ID】.cluster-XXXXXXXX.ap-northeast-1.rds.amazonaws.com
$ export PGPASSWORD="$( aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --username jane_doe )"
$ psql "host=$RDSHOST dbname=postgres user=jane_doe"
psql: FATAL: PAM authentication failed for user "jane_doe"
FATAL: pg_hba.conf rejects connection for host "10.0.0.202", user "jane_doe", database "postgres", SSL off
```
The documentation states that aws:CurrentTime is always included in the request context.
https://docs.aws.amazon.com/ja_jp/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-currenttime
`Availability – This key is always included in the request context.`
But it appears that the aws:CurrentTime context key is not actually included in the request context here.
By the way, using aws:TokenIssueTime instead of aws:CurrentTime did allow me to limit the time period.
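For reference, the variant that did work looks roughly like this; it is the same policy as above with the condition key swapped to aws:TokenIssueTime (the dates are the same illustrative window):
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": "*",
            "Condition": {
                "DateGreaterThan": {
                    "aws:TokenIssueTime": "2020-06-06T00:00:00Z"
                },
                "DateLessThan": {
                    "aws:TokenIssueTime": "2020-06-07T23:59:59Z"
                }
            }
        }
    ]
}
```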
I hope that you can fix it. Thanks
Answers:
username_1: Hi @username_0,
I apologize for the delay in response. This does look like an issue with the IAM policy, which I'm not the best person to help you with. If you can provide the output of the command with `--debug` turned on, so that the request and response can be viewed, I may be able to assist. Please be sure to sanitize the output and remove any sensitive information. Thanks! |
3box/3box-comments-react | 635979022 | Title: Demo is broken
Question:
Answers:
username_1: Hey, Can you please provide some more information? In what way is it broken? Which steps did you take to test it out?
username_0: Sorry, I thought it was obvious. This is what I see

Plus I have this in the console `TypeError: window.ethereum is undefined`
username_1: Ok, you need Metamask or a similar wallet installed in order for the demo to work.
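Something like this guard in the demo would make that requirement explicit instead of crashing; a TypeScript sketch (the warning text is just an example):
```typescript
// Guard sketch: detect an injected Ethereum provider before initializing
// the demo, so users without MetaMask get a hint instead of a TypeError.
const ethereum = (window as { ethereum?: unknown }).ethereum;

if (typeof ethereum === 'undefined') {
  console.warn('No Ethereum provider found. Install MetaMask (or a similar wallet) to use the demo.');
} else {
  // Safe to initialize 3Box and render the comments component here.
}
```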
username_0: Hmm, ok. Shouldn't this be documented somewhere? Or at least mentioned in the demo?
By the way, it still does not work; I have the following error showing up:
```
NotFoundError: IDBDatabase.transaction: 'orbitdb/QmNdj9kibVdkQARjhNaxCZ2CqLV4qGox7PdBGEUTVqPQVr/122060c892833b44700be8061cf3e01cdc2b9c38b9caeb65b44c0b9a64d372532626.root' is not a known object store name
``` |
dotnet/dotnet-api-docs | 850131921 | Title: Is it possible to change the max value of a named semaphore without reboot?
Question:
username_0: I found no methods to directly change the max value of a named semaphore. Even when I disposed of one and reconstructed it, the new semaphore still inherited the first configuration; i.e., the new max value given at reconstruction is ignored.
The only way I found to change the max value is to reboot the OS.
Is this by design or a bug? Any workarounds? |
LiHongyao/Blogs | 286475997 | Title: Gulp Guide
Question:
username_0: When a syntax error or other exception occurs while compiling Less, the watch task is terminated, and you usually only find out by checking the command prompt window. That is not what we want, so we need to handle exceptions without terminating the watch task (gulp-plumber) and get a notification that an error occurred (gulp-notify).
```javascript
var gulp = require('gulp'),
less = require('gulp-less'),
// Notify on errors when an exception occurs; make sure gulp-notify and gulp-plumber are installed locally
notify = require('gulp-notify'),
plumber = require('gulp-plumber');
gulp.task('less', function () {
gulp.src(['src/less/*.less'])
.pipe(plumber({errorHandler: notify.onError('Error: <%= error.message %>')}))
.pipe(less())
.pipe(gulp.dest('src/css'));
});
gulp.task('watch', function () {
gulp.watch('src/**/*.less', ['less']); // When any .less file changes, run the 'less' task
});
```
## 2. Running gulp tasks with WebStorm
Overview: WebStorm can run gulp tasks from a visual task list.
How to use: import the project into WebStorm, right-click gulpfile.js and choose "Show Gulp Tasks" to open the Gulp window. If "No task found" appears, right-click and choose "Reload tasks", then double-click a task to run it.
## A gulp-based front-end static site structure
[See this article](http://www.ydcss.com/archives/570) |
sebhildebrandt/systeminformation | 953352079 | Title: Improper expected types when using bluetoothDevices()
Question:
username_0: **Describe the bug**
I am attempting to pull the bluetooth device description into my logger. However, when attempting `systemInfo.bluetoothDevices()[0].device`, I get a type error saying the property does not exist. Digging deeper, I find that the declared return type is `BlockDevicesData` instead of `BluetoothDeviceData`.
Is this to be expected? If so, how can I pull `systemInfo.bluetoothDevices()[0].device` properly?
**To Reproduce**
Steps to reproduce the behavior:
1. used function 'bluetoothDevices()'
2. code snippet
```
const bluetoothDeviceInfo = await systemInfo.bluetoothDevices();
logger.debug('BLUETOOTH DEVICE INFO', bluetoothDeviceInfo[0].device); // throws type error
// cannot pull device.
```
3. start app / code
4. See output/error:
```
Property 'device' does not exist on type 'BlockDevicesData'.ts(2339)
```
**Expected behavior**
As stated in the docs, we should be able to pull device directly:
https://github.com/username_1/systeminformation#15-bluetooth
**Environment (please complete the following information):**
- systeminformation package version: 5.7.6, 5.7.8, 5.7.10
- OS: MacOS 10.15.7
- Hardware: MacBook Pro (16-inch, 2019)
**Additional context**
Should this https://github.com/username_1/systeminformation/blob/master/lib/index.d.ts#L941
be using the BluetoothDeviceData instead?
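For now I'm working around it with a cast; a sketch, where `BluetoothDeviceShape` is just a hypothetical local interface for the fields I use (not the library's full type):
```typescript
import * as systemInfo from 'systeminformation';

// Minimal local shape for the fields I actually use; the real
// BluetoothDeviceData type in the library has more fields.
interface BluetoothDeviceShape {
  device: string;
  name: string;
}

async function logBluetoothDevice(): Promise<void> {
  // Cast through unknown until the declaration file returns the right type.
  const devices = (await systemInfo.bluetoothDevices()) as unknown as BluetoothDeviceShape[];
  if (devices.length > 0) {
    console.debug('BLUETOOTH DEVICE INFO', devices[0].device);
  }
}
```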
Answers:
username_0: Introduced here:
https://github.com/username_1/systeminformation/commit/143b1abe86632a3fa652a471cf532938ca7e0de3#diff-cfc7807f6b174e1c3705c5c2747f56e49a889a52999dc96e4080a3af0131a534R839
username_1: @username_0 my fault!!! Sorry ... will provide a fix later today ...
username_1: @username_0 ... should be fixed. Version 5.7.11 just released.
Status: Issue closed
username_0: Great! Thank you for the fast fix @username_1. |
kentcdodds/kcd-discord-bot | 714755763 | Title: Don't auto close `?private-chat` created by `moderators`
Question:
username_0: **Problem description**:
Moderators can open a private chat with a user to talk about issues. Because private chats are closed after a few minutes of inactivity, it's possible that the chat is closed before the invited user reads the message, resulting in the message never being read.
**Suggested solution**:
When a moderator starts a private chat, the chat should not be closed automatically. Instead, the moderator should request the bot explicitly to close & remove it.
Answers:
username_1: This is a great point. I don't think the rule should be "if it's created by a mod, don't auto-close it" though, because mods may create private chats for other reasons as well. I think we just need better control over how auto-closures work. Especially in regards to triaging CoC issues. Like, maybe we'll give the user 24 hours to acknowledge they've seen the message and if they don't then the bot will ping them again about it. If they don't acknowledge it again, then the bot can kick them (they can rejoin if they decide to later).
Just an idea. There are a few things to consider here... I'll noodle on it.
username_0: That might not be a bad idea at all. It even makes me consider a more "extreme" way of moderation that's a bit less personal.
Imagine, the following scenario:
```
user: posts CoC violating message
mod: flags message
bot: removes post & kicks user
# user now only sees a single channel, in which they need to reply to the bot they understand the CoC.
user: acknowledges
bot: restores user access
```
That sounds extreme. But... this way the `user` won't have hard feelings toward a `mod`. They also don't need to spend "10 minutes" talking to a `mod` explaining the situation. And they can't do more harm while the `mod` is awaiting a response. All they need to do is tell the `bot` they want back in.
This can later be expanded with `cool-downs`. Think of:
```
kick 1: access restores directly
kick 2: access restores after 15 minutes
kick 3: ... 60 minutes
kick 4: ... 24 hours
kick 5: ... 72 hours
kick 6: ban
```
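A minimal sketch of how that escalation could be encoded (the durations and the ban threshold are just the illustrative values above):
```typescript
// Minutes before access is restored, indexed by kick count; null = ban.
const COOL_DOWNS_MINUTES: Array<number | null> = [0, 15, 60, 24 * 60, 72 * 60, null];

function coolDownMinutes(kickCount: number): number | null {
  if (kickCount <= 0) return 0;
  // Anything past the end of the table falls through to a ban.
  if (kickCount > COOL_DOWNS_MINUTES.length) return null;
  return COOL_DOWNS_MINUTES[kickCount - 1];
}
```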
And in addition to the one bot channel, a second `talk to mods` channel could be added, so that the user has a way to talk to the mods in case they think they're being treated unfairly.
Needless to say, this should only be used for clear violations! Not for an innocent misplaced meme that shouldn't have been posted.
Just another thought I found worth sharing.
username_1: I'm a fan of all the ideas you're sharing @username_0 👍 And yes, this would only be for actual CoC violations.
For some small stuff like misplaced memes/posts, what if a moderator could add a reaction to the post which would trigger the bot to delete it and send the user a private message, just to let them know they're not in trouble or anything; the post was just deleted because some may find it offensive. Thoughts?
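Roughly what I'm imagining, as a discord.js sketch; the emoji, the role name, and the wording are all placeholders, nothing is decided:
```typescript
import Discord from 'discord.js';

const client = new Discord.Client();
const REMOVE_EMOJI = '🗑️'; // placeholder emoji

client.on('messageReactionAdd', async (reaction, user) => {
  if (reaction.emoji.name !== REMOVE_EMOJI) return;
  const guild = reaction.message.guild;
  if (!guild) return;
  // Only moderators should be able to trigger the deletion.
  const member = guild.members.cache.get(user.id);
  if (!member || !member.roles.cache.some(role => role.name === 'Moderator')) return;
  const author = reaction.message.author;
  if (!author) return;
  await reaction.message.delete();
  // Friendly heads-up rather than a reprimand.
  await author.send(
    `Hi! One of your posts was removed because some may find it off-topic or offensive. ` +
      `You're not in trouble or anything, just a heads-up.`,
  );
});
```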
username_0: Sounds good to me. So a `delete + private message` for small stuff, and a `delete + kick + re-acknowledge understanding of CoC` for clear violations of the CoC.
I think we can fix those with commands that can be posted in the `#🔒-moderators-room`, and I would also propose making the commands effective only there. That way other mods know what's going on, and users don't.
How about:
```
?warn {link-to-message}
?kick {link-to-message}
```
username_1: Yeah, I think let's do reactions rather than commands and the bot will just post to the moderators channel when an action like that is taken.
To be clear, `kick` in discord terms means the user is no longer a member of the server and has to rejoin. I was thinking "penalty box" is more like what that is, but I don't like how punitive that sounds 😅 It's also kinda like a "time out" but that also sounds punitive 🙃 Not sure how to refer to this, but yeah, I think emoji would probably be best.
username_2: I really like the idea of automating all of this so there won't be any negative feelings towards the mods.
I also like the cool-down idea @username_0 explained.
Although I think we should also have a way of decreasing the cool-down counter, so if people respond in the `talk to mods` channel and a mod decides the kick was indeed unnecessary, that kick won't count toward extra cool-down time.
I also like the emoji approach.
And of course we need to make sure these actions will only be triggered by mods/admins.
So if non-mods/admins react with the chosen emojis, it won't trigger the kick. |
lopelex/node-red-contrib-harmony | 710552647 | Title: At any time the Observe Node doesn't send any command
Question:
username_0: Heya, I have a small problem. After updating Node-RED, for example, it is restarted completely, and then it seems that the Observe node doesn't recognize any button pressed on the Harmony...
So is it possible to set the IP with an inject node after Node-RED starts? Maybe this can help to fix it.
Answers:
username_1: Duplicate of #15
Status: Issue closed
|
pauldotknopf/pauldotknopf.github.io | 496850028 | Title: The argument against Entity Framework, and for micro-ORMs
Question:
username_0: See this: [Why not OData?](https://docs.servicestack.net/why-not-odata)
Also this: [AutoQuery](https://docs.servicestack.net/autoquery)
Answers:
username_1: Thank you for not advocating building queries from strings :).
The change-tracking downside of Entity Framework has an easy enough workaround: use a new DbContext for applying changes; then the effects are more obvious.
What doesn't have an easy workaround: the startup time of Entity Framework, which is quite noticeable in desktop apps.
Right now is not really a great time to look at the EF bug list, or to jump into EF, for that matter. The query translator got rewritten but hasn't really stabilized, as the issue list rightfully indicates. Their test coverage for new/rewritten parts is also not great. "Please try again with nightly builds" has been the standard response for over a month now. Let's hope that by the time 3.1 releases in November, things have stabilized.
username_2: I stumbled into this article on reddit, and really liked it. I agree with a lot of what you're saying. 😊
However, the quote you've included in the "Change Tracking" paragraph feels like a straw man. I agree that it can be dangerous to throw change-tracking entities around to every corner of your system, and thus lose sight of what's actually happening in your system, but I feel like most people know this shouldn't be done, and that if you have this problem in your code base, it might be a code smell.
Currently I like to implement a command-handler approach in my system, where a command corresponds to a particular business concern and has exactly one handler, in which I centralize the preparation and execution of my business logic.
I agree that the change tracking gets in the way when all you need is to load read-only data for display. Not only that, but if you're eager loading and have only a couple of nested collections in your entities, the amount of data EF queries the database for can get huge. We recently had to debug this problem where I work. A request for ~1000 entities could result in result sets of well over 100,000 rows. Surely this could be optimized in EF, but when I encounter something like this, I like to take a step back and evaluate the tool we're using.
If you're querying read-only data from the database, what exactly do you need an ORM for? I'd advocate for introducing a micro-ORM (Dapper is my current favorite, but I might check out ServiceStack.OrmLite soon 😉). Nobody says you can't use more than one data-access library in your solution.
This approach of using full-fledged ORMs to retrieve and save business entities in your command handlers, and using bare-metal ADO.NET or a micro-ORM to very efficiently query for read-only data, goes very well with the architectural pattern of CQRS, and is why I recommend it a lot to developers who are dealing with large systems and complex domains.
Anyway, the blog post was great, and I generally agree with you. 😊 Just don't write ORMs off right off the bat, and consider them as a tool as you would any other dependency in your system.
Hope this brain dump makes sense to anybody. 😅
username_3: This blog post is good up to a certain level of application complexity. Most large enterprise projects are going to be dealing with data sets of millions of rows, with billions of relational outcomes. Advocating eager loading in such a scenario is nonsense. EF allows you to cherry-pick relationships to eager load while using lazy loading by default. If you are using a web server and constantly eager loading large graphs, then you are placing a huge burden on the server to load data that gets dumped in the garbage when the controller returns. If it's a desktop app, you may get away with it for a while, but eventually your app will be holding a couple of gigs of data in RAM and the user experience will suffer. On mobile, you should be just as strict with memory usage as on a REST server.
EF was designed to work in all these scenarios and when used correctly, does so admirably. Enterprise code bases need consistency and reliability as their top concerns for maintainability. Designing a system under the assumption that only you or someone of your skill level will be maintaining code is both arrogant and dangerous.
username_4: There are other options for ORMs.
For example, Tortuga Chain (which I work on) uses database reflection. Rather than just assuming the class exactly matches the table or doing everything using SQL string literals, it compares the table and class definitions at runtime. This dramatically reduces the boilerplate, especially when you don't want every column.
username_4: Another is SQLAlchemy, which allows you to build complex SQL expressions using an object model. Unfortunately, it is Python-only at this time.
username_4: Regarding boilerplate, consider this line:
`dataSource.Update("dbo.Person", new { ID = personId, Name = "<NAME>" }).Execute();`
Why can't all ORMs do this? Why do they usually require manually dealing with connections/contexts and an extra round trip to the database just to perform a simple update?
In my opinion, the only time I should see a `using` statement in my DB code is when I actually need a transaction. And that should only be needed if I'm updating multiple records.
username_5: This is the same Update query in OrmLite:
```csharp
db.UpdateOnly(() => new Person { Id = personId, Name = "Another Name" });
```
Which if you prefer you could also update from an anonymous object or untyped Dictionary, e.g:
```csharp
db.Update<Person>(new { Id = personId, Name = "Another Name" });
db.Update<Person>(new Dictionary<string,object> {
["Id"] = personId,
["Name"] = "Another Name"
});
```
There are other [Update Examples in OrmLite](https://github.com/ServiceStack/ServiceStack.OrmLite#update).
username_6: Having read this, many of the points here are exactly where I'm at right now: the hidden technical debt, abstraction issues I can't solve, complexity with no explanation, poor performance in scenarios that often seem trivial at face value.
I'm currently looking into an alternative to EF Core to resolve a ton of issues I have that the EF Core team seems to be either confused about, not interested in, or simply willing to pick at my description of rather than focusing on the issue at hand.
The fact is, since EF6, each and every subsequent release has removed or broken functionality that my stack depends on, and I'm sick of swallowing that with the reasoning being "this is the cost of progress".
So here's where I'm at.
I'm trying to find an alternative, which as this blog post states should not be an issue, because "there shouldn't be a problem that EF solves that another framework couldn't also solve".
So here are my core functionality requirements from the EF functionality that I currently use:
- Mapping LINQ to SQL
- Mapping SQL query results to Entities
- Managing / migrating the DB (ideally without having to manually crank out SQL myself)
- Complex filtration when "nested questions" happen.
That last point appears to be the sticking point for most "micro-ORMs": the "micro" prefix usually means, as with say Dapper, that it does the SQL-results-to-entity mapping but won't do the step before that, getting from the LINQ expression tree to SQL.
Assuming this ORM can handle that, I'm looking for examples of doing things like applying set filters, so I can achieve something like this:
```cs
var results = Db.GetAll<BaseT>()
    .Include(t => t.Property)
    .ThenInclude(i => i.SubProperty)
    .ToArray();
```
The key things to note here, which EF solves and which I can't seem to find a solution to in other ORMs: the Include, and then the sub-Include, are both filtered by the relationship, but also by filter conditions applied to the table regardless of the context in which that table is queried.
This seems to be a feature missing in all but EF.
Does this ORM support this scenario?
username_5: OrmLite supports ["POCO References"](https://github.com/ServiceStack/ServiceStack.OrmLite#reference-support-poco-style), where it lets you load nested references with its [Load* APIs](https://github.com/ServiceStack/ServiceStack.OrmLite#querying-pocos-with-references), whereas all of OrmLite's other APIs won't load any nested references, so you can't pick and choose which references you want to load (i.e. without defining different classes with different references); but as all APIs return POCOs, you also have the option to [stitch & merge multiple POCO resultsets together](https://github.com/ServiceStack/ServiceStack.OrmLite#merge-disconnected-poco-result-sets).
Like all OrmLite APIs, they return "disconnected POCOs", so it's not possible to have hidden N+1 queries that could occur when traversing the result set; i.e. you get back your exact POCOs as you've defined them, and not some proxy subclass with injected decorated behavior.
OrmLite also includes typed APIs for inspecting and [altering existing schemas](https://github.com/ServiceStack/ServiceStack.OrmLite). Paul [outlines a good migration solution](https://pknopf.com/post/2019-09-22-the-argument-against-entity-framework-and-for-micro-orms/#migrations) on his blog, and there are also a few different migration solutions in this [Customer Forums Thread](https://forums.servicestack.net/t/any-recommendations-for-maintaining-database-schemas/3304).
I'd recommend reading the [OrmLite docs on its home page](https://github.com/ServiceStack/ServiceStack.OrmLite) which documents its many features.
username_6: This requires you to know all the possible combinations of questions that you might want to ask the API up front, or manually wiring up a second model so that the traversal is possible with sub-queries.
That's never a desired result when you have a DB with potentially billions of rows to manage and you want a joined set of maybe 1,000 of them from the results of a single dynamically generated SQL query.
Consider putting an OrmLite-managed DB behind an OData API, where the questions are virtually limitless but every possible combination of scenarios has to be considered and handled.
With EF this is trivial, as I can tell EF that the Set has a filter, and then any time EF sees any portion of a LINQ query hitting a given table, it applies the filter and then the requested query to the SQL query.
People often forget about the complexity that EF is solving; stating that something is a "micro-ORM" instead of a "full ORM" seemingly just declares "this solves half your issue, now go find something else that solves the other half, but it's fast, ok".
I've not yet found anything that can match this type of EF-solved scenario that didn't require a crap ton of "workarounds" or "patching stuff together", and it's the one thing that keeps my solutions sat on it... which is frustrating, because I both hate it and have to use it at the same time, as there is seemingly no alternative.
Other things of note ...
- For some reason Code generation is seen as bad
- Designer tooling focus is seen as bad
... all features that I often use to solve dynamic scenarios that would otherwise be unsolvable, or for situations where I don't want to sit around writing boilerplate stuff like the SQL statement for building a table when I have a class that exactly matches its structure.
Also, the current version of EF core will never return "some proxy sub class with injected decorated behavior", as that functionality was ripped out as part of the rebuild when EF6 became EF Core 1.0.
My current entity model has a DbContext, as you might expect, with entity sets, none of which I pull from the DB in such a manner that I use things like lazy loading, or proxies, or in any way require the resulting entities to be "attached" to the context; this is basically the same setup as is explained here.
My ideal ORM would allow me to do something like ...
```
using (var db = factory.GetConnection("db name"))
{
IQueryable<T> results = new Query<T>().Where(...).Select(...).Include(...).OrderBy();
}
```
... in this situation I would be constructing a simple ADO.NET connection to the DB and then telling the framework "build me a SQL query of type T"; large complex "models" feel like overkill for me, since the type metadata for the query you're building should tell you all you need to know.
I continue my composition on that to construct the full query, including notifying it (as per my previous comment) of what "sub-sets" I want in the results; then performing a .ToArray(), a .ToList(), or simply iterating over it would actually execute it.
The results would be disconnected from the DB (simple POCOs) and be appropriately secured, unless I specifically asked the framework to track changes for me to make saving easier later.
If a calling user asks for "select * from Table", configuration of a secure where clause should be a key feature of any ORM, "micro" or not.
My issue tends to boil down to the fact that all these ORMs claiming to be better than EF appear to be so at face value, but as soon as you start drilling into the complex scenarios they fall short on features, resulting in me having to extend or build tons of framework around the ORM.
So the article makes the comparison of OrmLite's 89k cloc to EF's 514k cloc in the chart at the beginning, but it's forgetting that there's a bunch of stuff in that extra 400k cloc that OrmLite can't do.
Unless I'm missing something?
I recently asked the Dapper guys how implementing a Dapper-based back end for OData might look, and I simply got a one-liner: "you'd have to write your own version of LINQ to SQL as Dapper doesn't do that".
So Dapper can execute a query and map the results to an object graph, but it can't build the query in the first place. That's half the work, is it not?
**In short**
I'd love to see a "micro-ORM" implementation of an OData or GraphQL or similar API model as those sorts of API's really push the limits of what ORM's can do.
username_0: See this: [Why not OData?](https://docs.servicestack.net/why-not-odata)
Also this: [AutoQuery](https://docs.servicestack.net/autoquery)
username_5: @username_6 You're living under the misguided assumption that EF's complexity is necessary; it's absolutely not. You have more flexibility and freedom with a typed ORM which gives you direct, clean access to RDBMS features, not some app-level object mapping framework that you're a slave to. It doesn't look like you've noticed that none of the new & modern NoSQL data persistence technologies have anywhere near the complexity, tracking and impedance mismatch of heavy ORMs. This is something other thriving OSS platforms are aware of, as their popular ORMs do not suffer EF's complexity tax. They provide straightforward APIs to access their underlying features, essentially the goal of most API libraries, which provide typed APIs to a networked resource; for some reason heavy ORMs are the only exception which some ("enterprise" .NET/Java) devs seem to cling on to as the only true way they could ever envisage code interacting with a database.
Your ideal ORM API is not my ideal ORM API which is to provide a clean typed API over RDBMS queries so you know exactly the SQL that's going to be executed - i.e. exactly what OrmLite does where its abstraction is only concerned about offering cross-RDBMS compatible implementation for portability + simplified UX, not some hidden N+1 + tracking magic that EF deems appropriate.
Your OData is a great example of unnecessary complexity, pure unnecessary bloat, resulting in slow, implementation coupled APIs that you haven't got a dream of feasibly being able refactor & replace its implementation without breaking external clients - effectively breaking the golden rule of API design. Read Paul's links if you want to see how you can build a simpler, cleaner & faster data-driven services using a Micro ORM (OrmLite) without any of the pitfalls of OData.
Happy that you're productive with EF, but you've got very little chance of convincing the many devs that have made the switch to micro ORMs that heavy ORMs are still the ultimate sophistication of how an ORM should be designed and implemented - many of us came from there, and we're not going back.
username_6: I don't agree with the bulk of the points in that article; for example, the supposed "tight coupling of internals" problem highlighted is not present in my stack, but that's a far more complex and different discussion. I was using OData as an example mainly because of the complexity in scenarios that it exposes us to; even Microsoft, who pushes OData heavily, states that best practice is to have N-tier separation, and promotes the use of both an API model and a DB model. The mapping for that is an entirely different issue, though, that's not worth discussing here.
The article seems to suggest that AutoQuery is an alternative to OData, which it just isn't.
The fact is, as the article points out, "the OData query-space can reference any table and any column that was exposed." That's not quite true, but the premise is: you have a model (that model doesn't have to be the same as your data model), and you can build any question on any part of the API model.
Essentially all I'm asking is that an ORM should be able to handle exactly that, and when I ask it a question I should be able to tell it "I want my question asked in this business context", which for the bulk of queries in enterprise applications boils down to "based on what the user making the call has access to", which in my case is a small cut of every table in the DB.
If the answer, it seems, is to just avoid asking the question, then it's not really an answer, is it?
This comes back to my Dapper point above: can Dapper really claim it's faster than EF when it only solves half the problem?
username_5: You're conflating EF features with features every ORM should aspire to have, given you're under the assumption that every other data access technology is just a partial (half!) implementation of EF - a consequence of relying on working within EF's feature set, where you'd likely struggle if you had to develop apps on different platforms or against different DB technologies. Don't bother looking for features relying on modification tracking to ever appear in micro ORMs; they're an example of unnecessary complexity that will never be implemented. Quite simply, if you need EF features, don't switch; EF will be around forever - as the catchphrase goes, you'll never get fired for choosing it.
But to answer your question, yeah you can claim to be faster if you're accomplishing the same results, faster.
username_7: I am interested in collaborating and sharing. I am also the author of a micro-ORM named [RepoDb](https://github.com/username_7/RepoDb). Can you have a look?
It is a hybrid ORM which will allow you to do the things of both micro-ORMs and macro-ORMs. You will get a lot of benefits while using it; I can explain and support this further.
You may have a different experience with it, as it has been baked differently.
username_6: ... OData isn't dead; it IS RESTful and returns JSON by default, and my entire point to you, centered around my testing efforts on ServiceStack, is literally that convention-over-configuration point (I have that already and don't want to give it up); ServiceStack doesn't seem to do anything by convention, it requires explicit definition literally everywhere.
When I say "I should be able to tell it 'I want my question asked in this business context'"... I'm not joking. I run a transactional platform, and the context of a question is important... I see a billion euros a week worth of invoice data through the system, and users getting back the wrong rowset is not an option; that is by design a complex question, not because M$ said so but simply because it has to be.
As for the "misappropriation of features"... I don't use any of the extended features that didn't get ported to .NET Core, for that exact reason; I saw the headache coming and avoided it.
The one feature I'm struggling with is expansions when pulling entities from the DB with their children; other ORMs seem to be able to do this, but it's a pain by comparison and lacks flexibility.
I've had DBAs hand-crank queries to answer some of the simpler questions, and the way EF handles some of the scenarios actually beats that (it's rare, but it happens).
The most complex of the queries I've hit run a 1MB select query, and it comes back in incredibly short times; that's how complex the questions are.
That's a requirement imposed on me because of the nature of our DB, not due to the framework imposing it; replacing EF with OrmLite or Dapper will not change that, it's been tested extensively.
username_6: Good place to start. I know that here on the ServiceStack side, OData is seen as some sort of anti-pattern due to the way that M$ documents and recommends using it; I definitely don't use OData as documented, so I usually don't hit the downsides (like tight binding to the DB structure).
I actually get a lot of flak from the M$ dev teams about my abuse of their tech, but my implementations are cleaner, faster, and more secure than their documented examples.
When discussing it here, though, it pays to appreciate the complexity of questions that you can answer with the OData + EF stack WITHOUT having to specifically write any code at all beyond building the class that matches the table, by default.
With other stacks, like ServiceStack shown here, I've had some interesting conversations with @username_5 (sorry mate, I do like to ask the complex questions) in this area, and the choice to jump boils down to a few key points for me:
1. I should be able to implement what I want as a "base" and extend for specific cases where needed (and only where needed).
2. I shouldn't have to handle every possible use case of my API explicitly (because my API is consumed in situations I don't have any visibility of).
3. My clients build a solution using my framework, so I can't "build the complete API they want" (meaning it needs to answer questions I haven't thought of).
4. I can't hand-crank blocks of SQL (nor do I have any interest in that / time to do so).
5. It would be nice if it's free (I can't test my code stack's conversion to ServiceStack without buying a licence, which is frustrating).
@username_5 I was going to ask you about point 5, actually... I've converted a few thousand lines of code over to ServiceStack, but obviously, because my model is more than 10 tables (or whatever the limit is), I can't test it. I'm actually seriously interested in at least spinning it up to see if that performance gain is really there (although I do have a query generation problem to solve), as OrmLite only solves some of the scenarios I have.
Some points of discussion on Stack Overflow:
- https://stackoverflow.com/questions/60686237/servicestack-is-context-based-routing-specified-in-the-url-possible
- https://stackoverflow.com/questions/60678495/understanding-the-request-lifecycle-and-routing-mechanism-in-service-stack
The key thing here is that, as the technical lead on my own stack, I should be able to pick the pieces that work for me (EF admittedly doesn't give me that; it's all of that half million lines or nothing-ish)... but then, having picked my pieces, I should be able to build solutions around them as needed.
When I talk about my ideal ORM, I currently don't think it exists, but then I'm very picky.
Key features I would like to see in an ORM which would make building my own API easy are:
- CRUD<T> without a model (much like OrmLite here, or Dapper: the ability to just grab a connection and do stuff with T's on it)
- Query generation, from either some form of string source or an expression tree.
- T-based filters on the DB (which is why I don't want to get involved with the SQL construction).
- The ability to point the ORM at a context class which defines the model and generates migrations for me (EF does this bit incredibly well).
- No forced patterns or architecture design.
That last point, alongside the lack of SQL generation from LINQ, is where I feel both Dapper and OrmLite fall short, but this is why I think they are sold as "micro-ORMs", at least in part: they deal with ONLY the problem of talking SQL to SQL servers.
That's not a bad thing, as it keeps the framework light, but maybe what's missing here is a LINQ-to-SQL abstraction (like the one in EF) that is not tied to any ORM, one that's pluggable so consumers can override certain behaviour (probably on a type-by-type basis).
Consider this sort of query example:
```cs
// build the query
var query = new Query<T>()
.Where(...)
.Where(...)
.Select(...)
.Expand(...)
.ThenExpand(...)
.GroupBy(...)
.OrderBy(...)
.ToSql();
// then with Dapper I could do ...
var results = connection.Query<ResultType>(query).ToList();
```
... the key thing to note about this example is that I'm mapping questions presented as OData parameters into this framework, basically allowing the user to build the query they want the API to run. But not only that: the base set is filtered by a preconfigured filter for T based on the user's access to rows in the DB, and then when they expand into the subsets, those also have a filter applied to them; all of this is automatically injected into the query.
Now I can see the response here... "yeah, you can do all that with x-ORM"... you're right, I can... but I don't want to hand-crank all that functionality; EF already handles it for me. The only issue is that I have to take all of EF to get it.
If I could take the query building as a feature and plug that into any ORM, then I'm free to choose to use OData on top of that if I so please.
If I take ServiceStack, I can't do this; I have to pre-think all the possible queries the user might want to ask and build them into my API layer, providing a pairing of at least 1 DTO + a service method for each possible question.
For CRUD on a single OData endpoint, I only require a single generic controller; with a filter on the DB table (a one-off LINQ expression, a one-liner) I can filter the table for any user that logs in by applying my own "app role logic or whatever" to the table, and I'm done.
Assuming I follow a convention, I would then have 1 controller / service and one context class representing my DB, then the simple POCO that represents the table (all things I have with ServiceStack)... The key difference with the OData + EF stack is that if I want a new endpoint, I simply add a new POCO and I'm done: full CRUD implemented "by convention".
Is this slower than a hand-cranked query for each CRUD operation on every possible endpoint query? Yup. Do I care that it costs a few extra CPU cycles? Nope; servers are cheap to rent, but cloud solutions architects and the dev teams to maintain complex codebases aren't.
username_5: ```csharp
var q = db.From<Customer>()
.Join<Customer, CustomerAddress>()
.Join<Customer, Order>()
.Select("*");
using (var multi = db.QueryMultiple(q.ToSelectStatement()))
{
var results = multi.Read<Customer, CustomerAddress, Order,
Tuple<Customer,CustomerAddress,Order>>(Tuple.Create).ToList();
foreach (var tuple in results)
{
Customer customer = tuple.Item1;
CustomerAddress custAddress = tuple.Item2;
Order custOrder = tuple.Item3;
}
}
```
The equivalent API for selecting multiple tables from a single query in OrmLite looks like:
```csharp
var q = db.From<Customer>()
.Join<Customer, CustomerAddress>()
.Join<Customer, Order>()
.Where(x => x.CreatedDate >= new DateTime(2016,01,01))
.And<CustomerAddress>(x => x.Country == "Australia");
var results = db.SelectMulti<Customer, CustomerAddress, Order>(q);
foreach (var tuple in results)
{
Customer customer = tuple.Item1;
CustomerAddress custAddress = tuple.Item2;
Order custOrder = tuple.Item3;
}
```
I hope you realize you can't just have a generic SQL builder that generates SQL for all queries and works with all ORMs and across all RDBMSs like you're proposing in your example, right? There are subtle differences between each RDBMS which need to be abstracted so that it generates the right SQL for each specific RDBMS dialect. There are also differences between RDBMS versions in what features they support, which you have to account for (i.e. some of the value typed ORMs provide). So when you call `db.From<T>()` you're getting back a Typed Expression Builder tied to an RDBMS Dialect. Each ORM also handles how it maps from RDBMS result-sets to .NET models differently, so you're not going to be able to even share POCO models when you start making use of ORM-specific features; e.g. all complex types in OrmLite that aren't [marked as POCO References](https://github.com/ServiceStack/ServiceStack.OrmLite#reference-support-poco-style) are blobbed using a [pluggable Complex Type Serializer](https://github.com/ServiceStack/ServiceStack.OrmLite#pluggable-complex-type-serializers) that can be configured per RDBMS provider. I don't know of another ORM which supports that, which is going to inhibit model reuse.
So it doesn't sound like your utopia of mixing ORMs' data models and their typed SQL builders is going to come to pass, but if you're just after a SQL query builder then you may want to check out [SqlKata](https://github.com/sqlkata/querybuilder).
But if you want to keep using OData then you should only be considering EF; no-one in .NET outside MS is going to be wasting their efforts trying to support it. You're going to forever use whatever MS provides, as they are the only ones investing any resources and effort in advancing OData.
username_6: OData is repeatedly compared to WCF SOAP services for some reason, by both you and @username_0 in his article (which I find odd, as they have literally nothing in common) EXCEPT... SOAP had a WSDL description; you could use the tools to generate your client code.
OData doesn't require tools; it has an XML-based description, and as stated above, the schema for that is well documented, so anything that can read XML can understand OData schemas and thus consume the service with strong typing.
I go a step further and expose metadata relevant to each endpoint on the endpoint itself, to avoid the caller having to rely on a large blob of meta of which they want only a small portion (IMO this should be the standard).
I could go on, but my "opinions" (however documented and fact-based they may be) regarding OData aren't wanted here.
Again... the reason I pointed at OData was that it generates complex "real world questions" I have to build an API to answer.
Your examples are interesting and do solve the problem in the event that I handle the question or use AutoQuery to do this for me... but can AutoQuery do this with my own business logic in the middle, something like this (taking the OrmLite example)?
```cs
var q = db.From<Customer>()
.Where(...)
.Join<Customer, CustomerAddress>()
.Where(...)
.Join<Customer, Order>()
.Where(...)
.Where(x => x.CreatedDate >= new DateTime(2016,01,01))
.And<CustomerAddress>(x => x.Country == "Australia");
var results = db.SelectMulti<Customer, CustomerAddress, Order>(q);
foreach (var tuple in results)
{
Customer customer = tuple.Item1;
CustomerAddress custAddress = tuple.Item2;
Order custOrder = tuple.Item3;
}
```
... if I understand this correctly, this is the equivalent of a query that returns an expanded subset of properties too.
- Does this support filtered joins "in this manner"?
- Can I configure AutoQuery / OrmLite to apply the nested Where clauses ANY time a table of that type is used anywhere in any query?
Essentially the reasoning here is that, from the user's information in the request (like an auth token, for example), I have to filter the DB down to the stuff they can see in every table, then execute my question on what's left (a standard multi-tenancy issue, basically).
From there the logic is only as complex as the user question asked, which it sounds like AutoQuery might be able to limit to a problem domain that's already coded for, which is perfect!
username_0: @username_6 quick question, since you are considering AutoQuery, I take it that the OData solution isn't deployed yet? You are still in the research phase? Is this a new solution?
username_5: #### Gist Desktop App
You can run & play around with this App locally as it's published as a [Gist Desktop App](https://sharpscript.net/docs/gist-desktop-apps) which you can run with our [app dotnet tool](https://docs.servicestack.net/netcore-windows-desktop):
$ dotnet tool install -g app
Then run the Gist Desktop App with:
$ app open rockwind
Where it will download the Gist and run it in the app's local Chromium Desktop Browser.
#### Local Sharp App
Or if you want you can download [sharp-apps/rockwind](https://github.com/sharp-apps/rockwind) and launch it with:
$ app
Where you'll be able to make changes in real-time and get instant feedback while the app's running.
#### Deployed Gist Desktop App
This Gist App is also [deployed as a .NET Core App](https://sharpscript.net/docs/deploying-sharp-apps) so you'll also be able to try it out at [rockwind-sqlite.web-app.io](http://rockwind-sqlite.web-app.io):
So you can use it to query any of your RDBMS tables using the nice page-routing based API, e.g:
- [/uberdata/customer](http://rockwind-sqlite.web-app.io/uberdata/customer)
- [/uberdata/employee](http://rockwind-sqlite.web-app.io/uberdata/employee)
- [/uberdata/product](http://rockwind-sqlite.web-app.io/uberdata/product)
- [/uberdata/category](http://rockwind-sqlite.web-app.io/uberdata/category)
It's available in all registered formats, if viewed from a browser it will return the [Auto HTML5 Report Format](https://docs.servicestack.net/html5reportformat), you can use the `?format` queryString to specify the content type you want it in, e.g:
- [/uberdata/customer?format=json](http://rockwind-sqlite.web-app.io/uberdata/customer?format=json)
- [/uberdata/customer?format=csv](http://rockwind-sqlite.web-app.io/uberdata/customer?format=csv)
CSV is particularly nice because you can save it in a `.csv` file and open up directly in a spreadsheet.
Or if JSON is the one true format you [force it to always return a specific Content-Type](https://sharpscript.net/docs/sharp-apis#user-content-hello-api-page) with:
|> return({ format: 'json' })
Since it uses parameterized SQL, it's safe from SQL injection (the free-text `?orderBy` is also validated), so you can safely query any of your columns:
- [/uberdata/customer?country=Germany](http://rockwind-sqlite.web-app.io/uberdata/customer?country=Germany)
- [/uberdata/product?categoryId=1](http://rockwind-sqlite.web-app.io/uberdata/product?categoryId=1)
Including Paging & Order By:
- [/uberdata/customer?country=Germany&offset=5&limit=5&orderBy=CompanyName+DESC](http://rockwind-sqlite.web-app.io/uberdata/customer?country=Germany&offset=5&limit=5&orderBy=CompanyName+DESC)
So it's a pretty flexible "uberdata" API. A nice quality about it is that, because it uses a clean, technology-agnostic URL and queryString API and returns a flat tabular data structure, it can later be easily re-implemented by a typed Service when you know which features you want to allow consumers to query and want to formalize it in a typed Service Contract.
#### bye uberdata :(
Moving back to AutoQuery, yes you can define the AutoQuery Service with joins, where it's joined using an [implicit JOIN Reference](https://github.com/ServiceStack/ServiceStack.OrmLite#typed-sqlexpression-support-for-joins), e.g:
```csharp
class QueryJoinedCustomers : QueryDb<Customer>,
IJoin<Customer,CustomerAddress>, IJoin<Customer,Order>
[Truncated]
.And<Customer,CustomerAddress>((c,a) => ...)
.And(c => x.CreatedDate >= new DateTime(2016,01,01))
.And<CustomerAddress>(a => a.Country == "Australia");
var results = db.SelectMulti<Customer, CustomerAddress, Order>(q);
return new CustomerJoinResponse {
Results = results.Map(x => new CustomerJoinResult {
Customer = x.Item1,
Address = x.Item2,
Order = x.Item3,
})
}
}
}
```
You can also add the joins in your Custom implementation but then the users wouldn't be able to query the joined tables.
Anyway, I've spent too much time here; that's about all from me, back to work...
username_6: @username_0 I have an existing solution implemented with an OData-based API layer; my issue generally isn't with OData. I think @username_5 here has issues with "ugly URLs" in OData (not unreasonable, to be honest; if you look at them encoded, they can appear pretty ugly).
**The background for my "problem domain"**
The reason we use OData is that it allows the client to specify the question they want to ask, instead of me stating to the client "these are the questions you can ask", which is key here.
In @username_5's example from the last comment, for example, in order to achieve the join result in AutoQuery I have to provide the endpoint with a method that implements that particular question, joined in that particular way.
My issue is that I don't know that that particular join is something the client wants at the time I'm writing the code, and I don't want a support call to implement a new API method every time they have a new question to ask the API.
I know you guys are highly against OData, but the key thing it offers is that, "within the confines of the type safety as defined by the contract metadata which defines the typed sets that can be questioned", the user can "build a question in a URL to conform to even the most complex of business scenarios". And yes, the nature of the question they can ask CAN get complex, but it's on them to decide that, not me, and forcing them to only ask "pre-built questions" won't cut it.
The issue is that our clients are Fortune 500 companies with big, complex, "poorly designed" systems like SAP implementations, and they are often constrained by having to work to a standard that that system implements, and OData is one of those standards.
This is where OData excels, because the provider (in this case SAP / IBM / Siemens) that delivers the platform to our client will provide functionality to allow them to interact with systems through expensive (like half a million $) "connectors" which are specifically designed to a given spec; and whilst AutoQuery looks great, I can't tell a client "sorry, this is how we work because it's better for us" - I have to conform to the provisions that their system can handle.
With the Netflix example, Netflix can decide how people communicate with them; with our platform, we offer business services that connect between such systems and are forced to interact with those systems in the way that they support, so I'm not dealing with an "in an ideal world" scenario.
**With that in mind**
Given that the client's system works a given way, the bulk of my questions are around this area and making my API layer fit my clients' requirements.
This is arguably not a "normal" API delivery scenario that most companies have where they can essentially dictate to partners how their systems work.
username_5: My issue with OData is its unnatural complexity, poor encapsulation, slow implementation, and poor promotion of API design & practices; basically everything you should avoid having exposed beyond your Service layer, whose goal is supposed to be providing an interoperable layer that [encapsulates your system's complexity](https://docs.servicestack.net/service-complexity-and-dto-roles#services) - all goals OData fails hard at.
Allowing clients to construct ad-hoc joins to system tables is even worse tight coupling, with which you have even less hope of being able to make changes without breaking existing clients. You might as well give out an RDBMS connection string and give them maximum flexibility.
Apart from promoting poor practices ,I have a [high intolerance against complexity](https://docs.servicestack.net/service-complexity-and-dto-roles#softwares-biggest-enemy) and will go for the simplest, most elegant solution that accomplishes my needs every time. Apart not having to spend any time fighting my tools and libraries, my creations enjoy a rare longevity since I've been building systems with the same libraries for over a decade, which have survived many generations of alternative solutions and even new .NET runtimes. What does change over time is the [best UI stack to use of the day](https://github.com/NetCoreApps/TechStacks#recommended-net-spa-stack) and even the platform it runs on, but my backend technologies and existing clean, well-defined ServiceStack APIs remain unchanged, it'll just gain more services + functionality.
[servicestack.net](https://servicestack.net/) is another example. It was originally hosted on Mono/Linux (AFAIK the first .NET web framework that supported Linux/Mono). Mono had a number of instability issues which required restarting nginx every couple of months, and when Novell dumped the Mono/Ximian team in 2011 they lost interest/focus on ASP.NET and created Xamarin to focus on iOS/Android, where ASP.NET bugs were no longer being fixed. So when ServiceStack went commercial in 2013 I decided to abandon Mono and just run it on AWS/Windows/IIS/ASP.NET, since a commercial website shouldn't have to put up with random instability/downtime. Then when ServiceStack [added support for .NET Core](https://twitter.com/ServiceStack/status/789142311326326784) I moved back to Linux/nginx/.NET Core, where it was now rock solid and much faster. Through the years my backend technology and Services remained the same, because the APIs were designed correctly: they capture the essence of what's required and are implemented in the simplest way, with intuitive focused libraries that provide a thin typed wrapper over the underlying DB functionality needed, not some heavyweight artificial performance-sucking high-level abstraction that loses flavor over time. So I have zero desire to consider overarchitected complex technologies that expose an unintuitive surface area where I have no idea what it's doing without debugging in a profiler and wasting time reading abstraction designer docs.
But hey, if EF/OData works for you and you're productive with it, keep using it. I won't dissuade you from using something that you deliver value with, which in the end should be everyone's goal in spite of our technology choices, so just use what makes you productive & happy.
username_6: Or in my case ...
Allowing clients to construct ad hoc joins to different sets in an API model which isn't the DB model.
My service layer deals with the translation of questions, but much like the layer above it, it can generate some "interesting" questions.
Believe it or not, I carry much the same ethos as you, but I'm often not in a position to make the "ideal choice" due to external concerns (as described above). I offset a ton of the concerns you have about the nature of the "over-complexity" and "over-engineering" by putting all my business logic behind interfaces and using IoC, so whether I have OData controllers, WebAPI controllers, or ServiceStack services I'm always insulated from that complexity, but it does make it tricky to answer some of the problems that such implementations introduce.
I've also deliberately put EF behind an interface, though having migrated the stack on to ServiceStack I'm seeing that I did lapse in a couple of places where I exposed IQueryables when I should have exposed IEnumerables (that's on me to fix, and trivial to do so).
That said, having taken your advice on board, my plan is to update the code until those "leaks" are plugged and then re-migrate the code again; as it only took me a couple of days this time round, it shouldn't be too bad next time.
I have another major demand on my time this week but hopefully I can get to looking at that stuff next week. The upside is that my business logic architecture is entirely interface / IoC driven, so it's mostly a lift and shift operation (I have said I don't use that stuff like most people do).
I do appreciate your advice @username_5 and you do make some great points, points that I intend to raise with Microsoft too in places because ultimately you're right and it's on them to provide good advice for the technology stacks that so many use.
I also feel in places you over-generalise the problem of bad platform design and imply that I, as a result, have a bad stack that will ultimately need to be rebuilt because of my "assumption OData is a necessary complexity" or "misguided impressions". That's not how I work, and for that reason my OData API deliberately doesn't support the full spec, and in places actively ignores it.
I would still be interested in looking at ServiceStack but the current feedback from our board of directors is the following ...
1. Who are you / how big is the company as support is a must have for the work we do?
2. Is there a way to test our complete migrated stack without the cost of a licence fees until we know it works for us?
3. Is it compatible with the demands our existing clients have?
That last point is of course the one I've been trying to address here, for the most part.
My current understanding is that it works the way it works and that clients should fall in line with that, which might not be something our clients are willing to accept; but it's extremely flexible, so with a bit of work in some cases I should be able to make ServiceStack fit their needs.
username_5: @username_6 If you have to convince a whole board of directors, I'd say keep to Microsoft's supported technology stack; they're going to have concerns adopting an alternative .NET technology, esp. since most of the .NET ecosystem just uses whatever MS puts out. Esp. if they're non-technical, IMO the only way they'll ever consider it is if you have the freedom to evaluate and compare prototype solutions side-by-side. But if you still want to consider evaluating SS you can request an unrestricted trial license key from <EMAIL>.
username_6: I do happen to have that freedom, and as the technical lead here for everything we do, the board leans on my guidance to make its technology spending calls. Generally speaking the calls made are "because it's right", not out of some misguided impression that M$ puts out about how things should be done (hopefully I've shown you that much at least).
username_6: Again, many thanks for the feedback. When I get back to it I'll definitely ping them an email; I'm actually curious to see how the two solutions work side by side because, as they say, "the proof is in the pudding" ... right!
username_6: I completely agree @username_8. I'm trying to be polite and promote a strong technical discussion about the differences and why they exist; instead I'm just being told "accept it, drop some opinions you have and move away from what years of experience has shown you because this is better", and that's not how a responsible architect delivers good architecture to a business at scale.
Myths and Paul are clearly very attached to the Service Stack design and feel very strongly about it, and standing by your creation is commendable, but there really is no need for the aggressive stance here; I'm not here to insult / knock anything.
As it happens I've got a wide variety of experience using different approaches to the "API layer problem" and the "N-Tier stack problem"; having worked in the industry for 20 years I've seen a lot of stuff claimed as the "best answer", with anything else "simply broken due to <insert some reason here>". Frameworks always die eventually and something better always comes along; no doubt at some point that will be the case with Service Stack too.
In order to shield myself from a particular stack's flaws I've essentially built a business layer that sits behind and depends entirely on interfaces, so the specifics of a particular stack / framework design don't really matter much to me; I just need an IoC/DI implementation to wrap it all up.
That said, what I've learnt from this discussion is that both Myths and Paul make some valid points and hold some misconceptions that they will defend to the very end, not accepting any adverse information to the contrary, which makes it hard to get advice on how to fit complex scenarios into Service Stack. Given the price tag on Service Stack, it really doesn't matter how good or bad OData / EF are, only how well Service Stack solves the problems I have already solved with those competing technologies. If you're gonna charge for something, then as a potential buyer I'm gonna be damn sure I'm putting my money into value-add for the business before I pull out the company credit card.
For example ...
Valid point:
OData's way of handling and serving up metadata requires heavy models and parsing of large blocks of metadata.
Misconception:
The WCF model is the only one which requires "special tools" to generate client proxies that clients should interact with in order to retain any form of contract or type safety.
I have overcome this pitfall oddly enough by working not that differently to how AutoQuery works.
I've also discussed above some of my reasoning for not agreeing with many of the documented points that Paul makes in his article (both are above).
That doesn't mean I disagree with the approach taken by Service Stack; it just means I don't agree with the particular use case / implementation detail they have chosen to pick fault with, or the faulty reasoning. I'm just looking for the parallels in order to make informed choices.
**Why I use OData + EF + .Net Core**
I am in a unique position in that, unlike API providers like Netflix (as this seems to be the example used above), I have to provide an API layer that allows users to construct a UI to support a business process that they design on top of a connected web of systems, with mine as the middleware platform that ties them all together.
I don't have my own business process design I can force on others, nor do I have control over how my clients may choose to interact with my system in order to facilitate their business.
This means I have to consider problems like ...
1. The client needs to show a grid of data with columns from multiple tables in my DB and allow paging, sorting, filtering and grouping (and in some cases aggregation) on that grid, regardless of my underlying data structure.
2. The exposed sets can be mapped, to cover complex edge cases, to multiple entity sets (using DTOs) for complex joins or specific "common" scenarios I choose to optimise as known common paths.
3. The data model and the API model are separated, allowing me to inject my own business rules in the middle that apply to the platform (e.g. a multi-tenancy rule so that no instance of T is returned to a user without them having access to it, no matter how they ask a given question).
4. I don't have to expose all of / any of my DATA MODEL in my API MODEL as doing so is often considered bad practice.
5. My clients are Fortune 500 companies with massive, complex systems that talk to standards-based endpoints; some won't even use HTTP calls because they refuse to spend £1 million on a "plugin" or module for a massive ERP solution.
6. The nature of API-provided questions is "complicated" in some scenarios beyond my control; typical things like aggregation, joins, and projections kill most APIs ... not ours.
7. It must be secure, because it's financial data I'm dealing with.
8. It must be reasonably maintainable (by-convention implementations, type safe, etc.).
9. Support for us is important, as our contracts often come with heavy fines if we mess up / have downtime.
**What this means**
If a user wishes to design a query that pulls data from 10 tables, and filters on 3 of them, as a flat set "source" for a datagrid in their UI, I can support that with 0 coding, 0 deploys, 0 changes to the system at all. That's the key point here: I can't roll out a deploy every time any client finds some new question they wish to ask the API, as I would be forever deploying.
I've been here asking exactly that sort of context-specific question because it's the sort of question I've not really seen answered in other so-called "better" frameworks, but these are the sorts of questions that, if a framework can handle them, mean it can literally handle anything in API circles.
**The bottom line**
There's a key point I'm trying to get understood here, which boils down to ...
Can I deliver "OData-like" features without OData (as it's deemed such bad design), or is my complex API situation that depends on those features (for good reasons) just bad design because "it's not simple enough" ...?
If it's the latter, that suggests that Service Stack isn't ready for real-world situations as complex as the one I have to face every day, but works in situations where there's a known, finite set of questions that can be asked on discrete unrelated DTO sets.
Regardless of people's opinions, there's a technical fact that comes with all this discussion which determines the viability, and I would be negligent not to ask these awkward questions.
I apologise if anything I have said has come across as offensive here.
username_8: @username_6 I don't think you've been offensive at all. You've laid out your case, and although I don't know the details of the problem you're trying to solve, you've described it well enough that I can understand the essence.
In the mix of all the words during this thread, I've seen two viewpoints presented:
1. My queries can be dynamically composed, constructed by the user. The exact shape of the resulting model is unknown, and could be based on joins from any number of tables -- it's all up to the user. There's flexibility here, but complexity comes with it.
2. Data models should be "known" beforehand, because _most_ APIs are essentially answering a set of known questions.
That could be a simplistic over-generalization of the two views. However, I can at least envision how EF can help solve #1, while micro-ORMs may have difficulty supporting it.
On the other hand, for simpler scenarios like #2, you could argue that EF is overkill. When I say "simpler" here, it is relative. I'm by no means suggesting that micro-ORMs are only for simple solutions.
However, I'll admit I haven't spent time in ServiceStack, or done much work with micro-ORMs. But I have asked myself the same question as @username_6 as I've looked at Dapper. I started writing a dynamic query builder using EF that, at least from what I could tell, would be far more difficult to build using Dapper.
username_6: Exactly @username_8, it sounds like you understand the nature of my problem ...
It boils down to a need to map from a URL to an expression tree, which then gets manipulated in my business logic layer and then translated into the final SQL in the data layer.
Putting all these pieces together is essentially a standard web stack, but some stacks use more strict / restricted API layer capabilities under the guise of "complexity is bad"; I don't have that option due to the operational requirements of the problems I'm solving.
We could make a case for not using expression trees and instead just manipulating strings sourced from the API layer's "query", then directly translating those to SQL, but having the expression tree in the middle gives us type safety in the business logic and an interception point that doesn't require things like reflection, which can be slow.
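To make that interception point concrete, here is a minimal hypothetical sketch (the `ITenantScoped` interface and `TenantRules` name are illustrative, not code from my actual stack): a rule applied centrally to whatever `IQueryable` the API layer produced, which simply becomes part of the expression tree that EF later translates to SQL.
```csharp
using System.Linq;

public interface ITenantScoped
{
    int TenantId { get; }
}

public static class TenantRules
{
    // Applied once in the business layer, regardless of how the client
    // composed its question; the predicate is AND-ed into the expression
    // tree before the data layer translates it to SQL.
    public static IQueryable<T> ForTenant<T>(this IQueryable<T> source, int tenantId)
        where T : ITenantScoped
        => source.Where(e => e.TenantId == tenantId);
}
```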
I'd be really interested in a demo Service Stack project that replicated some of the more complex capabilities of OData "by convention", avoiding the need to write out a lot of DTOs specific to each use case, but my gut feeling is that the point Myths keeps coming back to about the key design elements in Service Stack forces this, as the DTOs explicitly define the contract information.
Sure, OData may be bad for some valid reasons, but it's a great way to point at complex API layer functionality and be like "can your API layer do this" as a point of discussion; it's certainly not the holy grail though.
Things like aggregation or sub-selection don't appear to be possible as a user-defined scenario without me having to pre-define those in Service Stack, which contradicts the "over-posting" and "over-responding" best practices I've come to like for both security and perf reasons.
**background for this**
If I have a business object on my back end (a simple POCO) with 20 properties and I need a result set with 10 of them, filtered on some child tables' values, with a system-derived business rule injected into the query for each table hit, I'm not losing any type safety by asking for only those 10.
Nor should I be forced to declare a second POCO with only those 10 for that scenario, else I now have to consider (in my situation at least) all the possible combinations of those 20 that may be needed and implement a POCO for each to avoid returning excessive data loads to the client.
This is a common problem in API layers. Netflix, for example, assumes I want all of the fields for a movie, and I have no choice but to request them all; I can't just ask for a subset, like say a key and a name, if I'm building a drop-down list of them. In my API layer, 1MB responses could turn into 10MB responses. OData, whilst ugly, does at least give me the flex to define exactly what I want it to do on the back end and exactly what I want it to pull from the DB for me.
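For concreteness, this is the kind of standard OData v4 request I mean (the `Movies` entity set is hypothetical); the client projects, filters and sorts server-side without any new endpoint being written:
```
GET /odata/Movies?$select=Id,Name&$filter=contains(Name,'star')&$orderby=Name
```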
hyb1996-guest/AutoJsIssueReport | 321953810 | Title: [163]com.stardust.autojs.runtime.exception.ScriptInterruptedException: java.io.IOException: Cannot run program "su": error=13, Permission denied
Question:
username_0: Description:
---
com.stardust.autojs.runtime.exception.ScriptInterruptedException: java.io.IOException: Cannot run program "su": error=13, Permission denied
at com.stardust.autojs.runtime.api.ProcessShell.execCommand(ProcessShell.java:206)
at com.stardust.autojs.runtime.api.ProcessShell.execCommand(ProcessShell.java:236)
at com.stardust.scriptdroid.tool.AccessibilityServiceTool.enableAccessibilityServiceByRoot(AccessibilityServiceTool.java:59)
at com.stardust.scriptdroid.tool.AccessibilityServiceTool.enableAccessibilityServiceByRootAndWaitFor(AccessibilityServiceTool.java:63)
at com.stardust.scriptdroid.ui.main.SideMenuFragment$2.run(SideMenuFragment.java:145)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
Caused by: java.io.IOException: Cannot run program "su": error=13, Permission denied
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:692)
at java.lang.Runtime.exec(Runtime.java:525)
at java.lang.Runtime.exec(Runtime.java:422)
at com.stardust.autojs.runtime.api.ProcessShell.execCommand(ProcessShell.java:189)
... 7 more
Caused by: java.io.IOException: error=13, Permission denied
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:133)
at java.lang.ProcessImpl.start(ProcessImpl.java:128)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 11 more
Device info:
---
<table>
<tr><td>App version</td><td>2.0.16 Beta2</td></tr>
<tr><td>App version code</td><td>163</td></tr>
<tr><td>Android build version</td><td>G9508ZMU2CRD4</td></tr>
<tr><td>Android release version</td><td>8.0.0</td></tr>
<tr><td>Android SDK version</td><td>26</td></tr>
<tr><td>Android build ID</td><td>R16NW.G9508ZMU2CRD4</td></tr>
<tr><td>Device brand</td><td>samsung</td></tr>
<tr><td>Device manufacturer</td><td>samsung</td></tr>
<tr><td>Device name</td><td>dreamqltecmcc</td></tr>
<tr><td>Device model</td><td>SM-G9508</td></tr>
<tr><td>Device product name</td><td>dreamqltezm</td></tr>
<tr><td>Device hardware name</td><td>qcom</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
</table> |
scala-kansai/2018.scala-kansai.org | 355191453 | Title: Web information update for グリーンリボンを応援する会
Question:
username_0: Bronze plan: グリーンリボンを応援する会
Updated: 2018-08-29T14:35:30.450Z
## Listing information
```
gold:
- name: グリーンリボンを応援する会
url: ""
image:
introduction:
job:
title:
image:
message: |
buttonTitle:
url:
silver:
- name: グリーンリボンを応援する会
url: ""
image:
job:
image:
message: |
buttonTitle:
url: ""
bronze:
- name: グリーンリボンを応援する会
url: "https://eiel.info/green-ribbon/"
image:
```
## Logo image
https://drive.google.com/open?id=1gA7b2vox3weZrRWnJT01ews3YvmFEs3l
## Notes on publication
Under confirmation
## To-do list
- [ ] Confirm the notes on publication
- [ ] Prepare the logo image for publication
- [ ] Reflect in the YAML<issue_closed>
Status: Issue closed |
sendgrid/sendgrid-nodejs | 825396789 | Title: Invalid json
Question:
username_0: ### Issue Summary
When using @sendgrid/client and calling `/v3/marketing/contacts`, I was hit with a `400` error and a message of **Invalid json**
### Steps to Reproduce
1. Set up a nodejs application
2. Add the code snippet (as below) to a route of your choice. Mine is `subscribe` in this case.
3. Deploy and run the nodejs application and pass the following data
```
{
"email": "<EMAIL>"
}
```
### Code Snippet
```node
app.post("/subscribe", function (req, res) {
// Add your code here
// res.json({success: 'post call succeed!', url: req.url, body: req.body})
// don't need to json.parse req.body
const { email } = req.body;
const request = {
method: "PUT",
url: "/v3/marketing/contacts",
data: {
list_ids: [process.env.SENDGRID_LISTID],
contacts: [
{
email: email,
},
],
},
};
sgClient
.request(request)
.then(([response, body]) => {
console.log(`statusCode: ${response.statusCode}`);
console.log(body);
res.json({ statusCode: response.statusCode, body: body });
})
.catch(([response, body]) => {
      console.log(body);
res.statusCode = response.statusCode;
res.json({ statusCode: response.statusCode, body: body });
});
});
```
### Exception/Log
```
{
"statusCode": 400,
"error": {
"field_id": "",
"message": "invalid json"
}
}
```
### Technical details:
* sendgrid-nodejs version: 6.0.0
* node version: 10.23.0
### The Fix
As discovered in `packages/client/src/classes/client.js`, it is expecting a property `body` instead of `data` for the payload you are passing to SendGrid.
```
createRequest(data) {
let options = {
url: data.uri || data.url,
baseUrl: data.baseUrl,
method: data.method,
data: data.body,
params: data.qs,
headers: data.headers,
};
```
If you go back to the code snippet above, just swap `data` to `body` and it should fix this issue for you.
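So the request object from the snippet above should look like this (a minimal sketch; only the `data` key changes to `body`):
```node
const request = {
  method: "PUT",
  url: "/v3/marketing/contacts",
  body: {
    // `body` is the property the client serializes; a `data` key here is
    // ignored, which produces the empty payload behind "invalid json"
    list_ids: [process.env.SENDGRID_LISTID],
    contacts: [{ email: email }],
  },
};
```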
Status: Issue closed
Answers:
username_0: I've no idea how to submit a PR to add this to the document but leaving this here for future reference lol
username_1: I am having the same issue even when using `body` |
epistemik-co/staple-api | 676508555 | Title: Log folder path prevent deploy on Heroku
Question:
username_0: Hi,
Thanks a lot for this amazing library. I am using it for a couple of my side projects and for one of them I'd like to deploy an API using staple-api on Heroku.
The problem comes from this line in the file `config/winston.js`, line 9:
`filename: "../logs/app.log"`
On Heroku a project needs to be self-contained in its root folder, thus I believe something like this should work:
`filename: "./logs/app.log"` |
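For reference, a winston 3.x style sketch (an assumed setup, not the actual staple-api config) that anchors the log file inside the app's own folder:
```js
const path = require("path");
const winston = require("winston");

const logger = winston.createLogger({
  transports: [
    new winston.transports.File({
      // resolve relative to the process working directory so the log
      // stays inside the project root on platforms like Heroku
      filename: path.join(process.cwd(), "logs", "app.log"),
    }),
  ],
});
```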
firebase/firebase-ios-sdk | 1025684565 | Title: Storage SDK will crash when Swizzle URLSession.dataTask(with:completionHandler:)
Question:
username_0: <!-- DO NOT DELETE
validate_template=true
template_path=.github/ISSUE_TEMPLATE/bug_report.md
-->
### [REQUIRED] Step 1: Describe your environment
* Xcode version: 12
* Firebase SDK version: 8.8.0
* Installation method: CocoaPods
* Firebase Component: Storage
### [REQUIRED] Step 2: Describe the problem
#### Steps to reproduce:
Add some Swift code to swizzle URLSession.dataTask(with:completionHandler:), then make sure the function `swizzleURLSessionDataTask` is called to initialize the method swizzling.
Then try to call the Firebase Storage API to upload a file to Firebase.
The crash will happen.
#### Relevant Code:
```
//Method swizzling in Swift
extension URLSession {
@objc func swizzledMethod(with request: URLRequest, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask {
// Will only Intercept report endpoint
if let url = request.url?.absoluteString, url.contains("report") {
// do something here for this endpoint
}
return swizzledMethod(with: request, completionHandler: completionHandler)
}
    private static let swizzleDescriptionImplementation: Void = {
let originalMethod = class_getInstanceMethod(URLSession.self, #selector((URLSession.dataTask(with:completionHandler:)) as (URLSession) -> (URLRequest, @escaping (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask))
let swizzledMethod = class_getInstanceMethod(URLSession.self, #selector(swizzledMethod(with:completionHandler:)))
if let originalMethod = originalMethod, let swizzledMethod = swizzledMethod {
method_exchangeImplementations(originalMethod, swizzledMethod)
}
}()
static func swizzleURLSessionDataTask() {
        _ = self.swizzleDescriptionImplementation
}
}
```
The function used to upload data with the Firebase Storage API:
```
@objc func save(){
let storage = Storage.storage(url:"gs://xxxx.appspot.com")
// Create a root reference for result.json
let storageRef = storage.reference()
// Create a reference to the file we want to upload
let resultRef = storageRef.child("iOS/testpath/" + "result.json")
let data = Data(base64Encoded: "data")!
let _ = resultRef.putData(data, metadata: nil) { (metadata, error) in
guard let _ = metadata else {
return
}
}
}
```
The crash happens on iOS 11.x; please see the screenshot:
<img width="1040" alt="Screen Shot 2021-10-13 at 4 49 47 PM" src="https://user-images.githubusercontent.com/66632259/137210703-852e6a1d-8a1d-4307-ae47-6a5ac1c43b2e.png">
Answers:
username_1: @username_0 At first glance it looks like this is most likely a bug in the swizzling implementation. I'm pretty sure that Firebase Storage is not the only URLSession user that gets broken, but rather the first one discovered. If the Firebase Storage SDK works for you without swizzling, then I don't see any reasonable action to take on the Firebase side. Please let us know if that is the case.
A general suggestion: I would recommend avoiding swizzling when possible, because it is almost impossible to implement without breaking other users of the class.
username_1: @username_0 Some swizzling has been used in Firebase, and we keep discovering different issues where our implementation breaks the host app's code and assumptions. Currently we are trying to get rid of all swizzling to avoid such issues in the future. So my suggestion comes from our poor experience with swizzling.
If you are interested, there is an implementation in Firebase Performance that seems somewhat similar to what you are trying to do (see the [FPRNSURLSessionInstrument.m](https://github.com/firebase/firebase-ios-sdk/blob/master/FirebasePerformance/Sources/Instrumentation/Network/FPRNSURLSessionInstrument.m) file). This implementation works fine in most cases, but we keep discovering corner cases where it doesn't work as expected.
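For completeness, one commonly suggested alternative to swizzling is a `URLProtocol`-based interceptor. The sketch below is hypothetical (the `ReportInterceptor` name and the "report" URL check just mirror the snippet above), and it only affects sessions whose configuration registers it:
```
import Foundation

final class ReportInterceptor: URLProtocol {
    override class func canInit(with request: URLRequest) -> Bool {
        // only intercept the report endpoint
        return request.url?.absoluteString.contains("report") == true
    }

    override class func canonicalRequest(for request: URLRequest) -> URLRequest {
        return request
    }

    override func startLoading() {
        // inspect `request` here; a real interceptor would forward it with
        // its own URLSession and relay the response/data back to `client`
        client?.urlProtocol(self, didFailWithError: URLError(.cancelled))
    }

    override func stopLoading() {}
}

// Registration (sketch):
// let config = URLSessionConfiguration.default
// config.protocolClasses = [ReportInterceptor.self] + (config.protocolClasses ?? [])
// let session = URLSession(configuration: config)
```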
username_1: I am going to close the ticket as it doesn't look like an issue on the Firebase SDK side. Feel free to file another if you have other problems with Firebase SDKs.
Status: Issue closed
|
matplotlib/matplotlib | 313833279 | Title: legend(bbox_transform) don't work with tight layout in 2.2.x
Question:
username_0: The following code worked as expected in matplotlib 2.1.x but doesn't work in matplotlib 2.2.x:
```
import matplotlib.pyplot as plt

fig=plt.figure(1)
fig.clf()
ax=[fig.add_subplot(2,2,i+1) for i in range(4)]
for j in range(3):
for i in range(2):
ax[j].plot([0,i])
ax[0].legend(['1','2'],
bbox_to_anchor=(0.1, 1),
bbox_transform=ax[-1].transAxes, loc="upper left")
ax[-1].axis("off")
fig.set_tight_layout(True)
```
The subplots are in the wrong place and have the wrong size (mostly outside the figure window in my real plotting code ...) and it gets even worse if you interact with them.
python 3.6
matplotlib 2.2.0 and 2.2.2 have been tested.
Answers:
username_1: `ax.legend` is now counted as part of the bbox for the axes when calculating tight_layout. In this case, the axes is `ax[0]`, and so `tight_layout` tries (and fails) to provide enough room for the legend in the spacing between `ax[0]` and the other axes. The relevant PR is #9164.
The solution is to use `fig.legend`. For some cases, that will mean you will need to cart around the lines that you want as part of the legend. I acknowledge that is a bit of an inconvenience, but the converse is that legends are not equal artists in `tight_layout` or `constrained_layout` and you end up with axes and legends overlapping. For this layout, I think the idea that this is a legend that belongs to the figure makes more sense than the idea that this is the legend for `ax[0]`.
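For this specific example, that would look something like (an untested sketch with the same placement as the original):
```python
import matplotlib.pyplot as plt

fig = plt.figure(1)
ax = [fig.add_subplot(2, 2, i + 1) for i in range(4)]
for j in range(3):
    for i in range(2):
        ax[j].plot([0, i])
ax[-1].axis("off")
# attach the legend to the figure instead of to ax[0]
fig.legend(ax[0].lines, ['1', '2'],
           bbox_to_anchor=(0.1, 1),
           bbox_transform=ax[-1].transAxes, loc="upper left")
fig.set_tight_layout(True)
plt.show()
```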
username_2: Would it in general be an idea to let artists decide whether they want to take part in geometry management or not?
I could imagine something similar to the `animated` property. Say an artist has a property `geomanageable` (which is set to True for axes, ticks, titles, legends etc by default), the geometry manager could query that property and only include it if it is set.
For the cases where you don't want that, you set it to False. I guess this would also solve the recent issue about long titles destroying the layout.
username_1: Right now all both layout managers do is query the tight_bbox of the axes objects, and other figure-level artists. So if you added such a flag to individual artists, it would affect the tight_bbox of the axes, and other uses of tight_bbox (`savefig(bbox_inches='tight')` being the major one that comes to mind) would similarly ignore the artists that are not included.
I'm not wholesale against this, but it's pretty fiddly. I'd rather we were more disciplined in where in the figure hierarchy we place artists. To my mind, saying a legend is a member of axes[0] but drawing it in axes[3] is basically asking for automatic layout not to work too well. That it worked before is, to my mind, an oversight in not including legends in the bbox for the axes.
Similarly, I'd be mildly against letting people have axes titles that are wildly too wide for the figure. Taking the title out of the bbox is possible, but then the automatic layout would no longer make room for it vertically. Better just to note that automatic layout doesn't work anymore, fix the title, or turn automatic layout off.
But we can keep an eye on it. If there are lots of users trying to attach artists to an axis but have the artist plotted far off on another axis, maybe it's worth adding the extra layer of bookkeeping. `ConnectionPatch` is one instance where I don't really know what tight_bbox does with the artists, and it has a legitimate need to plot off the axes.
Finally, another option is to do all your layout, call the layout manager, turn off the layout manager, and add the extra doohickeys.
```python
fig=plt.figure(1)
fig.clf()
ax=[fig.add_subplot(2,2,i+1) for i in range(4)]
for j in range(3):
for i in range(2):
ax[j].plot([0,i])
ax[-1].axis("off")
fig.set_tight_layout(True)
fig.canvas.draw()
fig.set_tight_layout(False)
ax[0].legend(['1','2'],
bbox_to_anchor=(0.1, 1),
bbox_transform=ax[-1].transAxes, loc="upper left")
plt.show()
```
works fine.
username_1: BTW, see #10682 where I propose making this worse :wink:
username_2: Yep, I suppose this is the correct thing to do. I was rather wondering if there was an easy option to opt out of this - which may still be useful in certain cases. I guess the main argument here is simply that an automatic layout system cannot guess what you're trying to do, so certain cases cannot be handled reasonably by it. E.g. the [annotation bbox example](https://matplotlib.org/examples/pylab_examples/demo_annotation_box.html) - I suppose it's not actually clear what a user expects when calling tight_layout on that one. In such cases it would probably be useful to have the option to get the annotation box included or excluded on a per-case basis.
username_0: I think my use case was kind of natural. I have many subplots that should have the same legend, and the legend is too large to be shown nicely in a single subplot. So I used the plot data from one of the subplots and put the legend in another, cleared subplot. The workaround is not that difficult, but it is definitely more complicated and more error-prone than my old code.
I doubt that it is that easy to implement, but it would be nice if the legend in my use case was handled together with the ax/subplot that it actually exists in and not together with the ax that it got the data from.
The problem with turning tight layout on and off is that it doesn't work well when resizing the figure.
username_0: The following example doesn't work as expected:
```
fig=plt.figure(100)
fig.clf()
ax=[fig.add_subplot(2,2,i+1) for i in range(4)]
for j in range(3):
for i in range(3):
ax[j].plot([0,i])
n=35
legendString=['1'*n,'2'*n,'3'*n]
ax[-1].legend(ax[0].lines, legendString,
bbox_to_anchor=(0, 1), loc="upper left")
ax[-1].axis("off")
fig.set_tight_layout(True)
```
The subplots are too small.
username_1: Again, the legend spilling over the axes makes the bounding box larger. That makes the space between subplots smaller. It's working as designed; the difference is the API change between 2.1 and 2.2.
username_0: Ok, I understand that now. I think that it would be good to have a parameter that sets whether a legend (or other artist) should be included in the tight_layout calculation or not. I believe that would be useful when you work with data and want to have reasonably good plots that work with different figure sizes and different legend lengths. It is better to show nice, large plots and a cut legend or overlapping titles instead of a good-looking legend and small plots. tight_layout is good if you want larger plots in many applications, and not just to avoid overlaps.
dankamongmen/notcurses | 1046624359 | Title: eliminate recursion in ncplane_polyfill_yx()
Question:
username_0: We earlier found `ncvisual_polyfill_yx()` to be blowing up due to recursing too deep. I'd forgotten all about `ncplane_polyfill_yx()`'s existence. It has the same issue. Clean it up. Make sure there's a unit test or two on it, also.
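A minimal sketch of the usual fix (generic, with a hypothetical signature; not the actual notcurses internals): keep pending cells on an explicitly managed, heap-allocated stack so the fill depth is no longer bounded by the call stack.
```c
#include <stdbool.h>
#include <stdlib.h>

typedef struct { int y, x; } coord;

// match() decides whether a cell still needs filling; fill() paints it and
// must change state so that match() later returns false for that cell.
static int polyfill_iterative(int starty, int startx, int rows, int cols,
                              bool (*match)(int y, int x, void* ctx),
                              void (*fill)(int y, int x, void* ctx), void* ctx){
  size_t cap = 64;
  size_t n = 0;
  coord* stack = malloc(cap * sizeof(*stack));
  if(stack == NULL){
    return -1;
  }
  stack[n++] = (coord){ starty, startx };
  while(n){
    const coord c = stack[--n];
    if(c.y < 0 || c.y >= rows || c.x < 0 || c.x >= cols){
      continue;
    }
    if(!match(c.y, c.x, ctx)){
      continue;
    }
    fill(c.y, c.x, ctx);
    if(n + 4 > cap){ // grow before pushing the four neighbors
      coord* tmp = realloc(stack, cap * 2 * sizeof(*stack));
      if(tmp == NULL){
        free(stack);
        return -1;
      }
      stack = tmp;
      cap *= 2;
    }
    stack[n++] = (coord){ c.y - 1, c.x };
    stack[n++] = (coord){ c.y + 1, c.x };
    stack[n++] = (coord){ c.y, c.x - 1 };
    stack[n++] = (coord){ c.y, c.x + 1 };
  }
  free(stack);
  return 0;
}
```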
Answers:
username_0: it also needs diagnostics.
username_0: resolved in the `username_0/negativecursor` branch
Status: Issue closed
|
SAP/spartacus | 728213075 | Title: Extract schematics update mechanism
Question:
username_0: With Spartacus 2.0 we started using schematics to help customers to perform an automatic upgrade.
With the upcoming 3.0 we should re-use the same mechanism, and just write scripts for 2.0 -> 3.0 upgrade.<issue_closed>
Status: Issue closed |
exoscoriae/eXoWin3x | 757070404 | Title: Diva X - Ariana (1995)
Question:
username_0: krc solved the no-sound-in-large-window issue we were having. Set cycles to 19000 and the problem goes away.
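For reference, assuming the game's standard DOSBox .conf, the change looks like:
```
[cpu]
cycles=19000
```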
Status: Issue closed
Answers:
username_1: changed cycles
username_0: You changed the cycles but left the user message in. The message is no longer needed now that the cycles fixed the issue.
username_1: there was no mention of the note in the initial bug report, so I didn't even bother scrolling down that far. Fixed now.
Status: Issue closed
|
AnySoftKeyboard/AnySoftKeyboard | 1163585893 | Title: Critical bug in revertLastWord
Question:
username_0: ### Steps to reproduce
1. Type words until you get next-word candidates.
2. Pick a next-word candidate.
3. Press backspace.
### Expected behaviour
Candidates of the last typed word should reappear.
### Actual behaviour
The last word changes to be the previous word :-o
**Android OS version:**
9.0
**Device manufacturer and model:**
Samsung note 8
**List of installed add-ons (like languages, or themes):**<issue_closed>
Status: Issue closed |