Dataset columns:
  repo_name: string (lengths 4 to 136)
  issue_id: string (lengths 5 to 10)
  text: string (lengths 37 to 4.84M)
SantaClaws91/ENSL-GatherBOT
199270725
Title: !help should display all possible commands? Question: username_0: !help displays: "Use !info to request gather information or wait for the automated responses." But it should also mention something like !msgconditions. So let !help display:

Use:
!info to request gather information or wait for the automated responses.
!msgconditions to change for which Steam friends status it should message you.

And on !msgconditions, display:

!msgconditions [option]
Online : Only announce if your personastatus on steam is set to "Online" (default setting)
Away : Announce if your personastatus on steam is set to "Online" or "Away"
Busy : Announce if your personastatus on steam is set to "Online" or "Busy"
All : Announce if your personastatus on steam is set to anything other than "Offline"
Non : Disable gather announcing

Answers: username_1: This is resolved now. Thanks. Status: Issue closed
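As a tiny illustration of the requested behavior, here is a hedged Python sketch (not the bot's actual code; the command names come from the thread, everything else is invented): generating !help from a command registry, so any new command such as !msgconditions shows up automatically.

```python
# Hypothetical sketch: a registry of commands drives the !help output.
COMMANDS = {
    "!info": "request gather information or wait for the automated responses",
    "!msgconditions": "change for which Steam friends status the bot should message you",
}

def help_text():
    lines = ["Use:"]
    for name, description in COMMANDS.items():
        lines.append(f"{name} to {description}.")
    return "\n".join(lines)

print(help_text())
```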
serverless/examples
212111517
Title: Example for receiving a file through API Gateway and uploading it to S3 Question: username_0: Hey, I'm wondering if there is any good example which could be added to the list of examples, where a file (image, pdf, whatever) could be received through the API Gateway in a POST request and then uploaded into S3. I think it would be great to have it, since it is a rather common use case for Lambdas. Best regards, username_0 Answers: username_1: Why not have the client upload the file directly to S3? No need to pay for Lambda execution time just to forward a file to S3. username_0: Hey, my idea was that the Lambda function could include something like manipulation of the file, or use data of the file somewhere else. This could also be done as an S3 event trigger (so when a file gets uploaded to S3 it would trigger the Lambda), but in some cases it would be handier to upload the file through the API Gateway & Lambda function. username_2: @username_1 @username_0 is this the recommended pattern? I'm pretty new to building RESTful APIs (serverless is awesome), so I'm not exactly sure if I should be accepting a base64-encoded string via the create method, or first creating an object via one RESTful call and then putting the base64-encoded string (image) in a second call. Any examples would be greatly appreciated :) I know that there are examples for S3 upload and post-processing, but there is no example used with a RESTful/DynamoDB setup. username_2: @username_0 , @username_1 also, should this issue be assigned the question label? username_1: For uploading files, the best way would be to return a [pre-signed URL](http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html), then have the client upload the file directly to S3. Otherwise you'll have to implement uploading the file in chunks. username_3: @username_2 @username_0 @username_1 @rupakg did anybody try to create it? Because I'm also working on that. username_2: This worked for me with runtime: nodejs6.10 and the dependencies installed. Let me know if you have any questions.

```js
"use strict";
const uuid = require("uuid");
const dynamodb = require("./dynamodb");
const AWS = require("aws-sdk");
const s3 = new AWS.S3();
var shortid = require('shortid');

module.exports.create = (event, context, callback) => {
  const timestamp = new Date().getTime();
  const data = JSON.parse(event.body);
  if (typeof data.title !== "string") {
    console.error("Validation Failed");
    callback(null, {
      statusCode: 400,
      headers: { "Content-Type": "text/plain" },
      body: "Couldn't create the todo item due to missing title."
    });
    return;
  }
  if (typeof data.subtitle !== "string") {
    console.error("Validation Failed");
    callback(null, {
      statusCode: 400,
      headers: { "Content-Type": "text/plain" },
      body: "Couldn't create the todo item due to missing subtitle."
    });
    return;
  }
  if (typeof data.description !== "string") {
    console.error("Validation Failed");
    callback(null, {
      statusCode: 400,
      headers: { "Content-Type": "text/plain" },
      body: "Couldn't create the todo item due to missing description."
    });
    return;
  }
  if (typeof data.sectionKey !== "string") {
    console.error("Validation Failed");
    callback(null, {
      statusCode: 400,
      headers: { "Content-Type": "text/plain" },
      body: "Couldn't create the todo item due to missing section key."
    });
    return;
  }
  if (typeof data.sortIndex !== "number") {
    console.error("Validation Failed");
    callback(null, {
      statusCode: 400,
      headers: { "Content-Type": "text/plain" },
[Truncated]
      body: JSON.stringify(params.Item),
    };
    callback(null, response);
  });
}).catch(function(err) {
  console.log(err);
  // create a response
  const s3PutResponse = {
    statusCode: 500,
    body: JSON.stringify({ "message": "Unable to load image to S3" }),
  };
  callback(null, s3PutResponse);
});
};
```

username_3: @username_0 hi, I need help to create a CloudFormation JSON file to upload a file directly to S3 using API Gateway and a Lambda function. Is it possible to do it? username_0: @username_3 It seems to be possible, however I never finished my implementation. You might want to check this blog, which has quite simple instructions on how to do it! http://blog.stratospark.com/secure-serverless-file-uploads-with-aws-lambda-s3-zappa.html username_3: @username_0 thank you, but I want to do it with only Lambda and API Gateway, with S3 in CloudFormation, but it is showing HTML. Can you help me with how to link the Lambda function and API Gateway? username_0: This question is not really related to this thread. Status: Issue closed username_0: Hey, I'm wondering if there is any good example which could be added to the list of examples, where a file (image, pdf, whatever) could be received through the API Gateway in a POST request and then uploaded into S3. I think it would be great to have it, since it is a rather common use case for Lambdas. Best regards, username_0 username_0: @aemc I think you should be able to set the file name when creating the presigned URL. There you can add it with the extension included, if you wish to. username_4: I'm quite new to serverless myself, but isn't the most widely seen approach more costly than what was asked by OP for a file upload + processing in a Lambda? What we usually see is to send the file to S3, or to ask a Lambda for a signed URL and then upload to S3 (like in https://www.netlify.com/blog/2016/11/17/serverless-file-uploads/). However, if the file needs to be processed, that means that we access the file from S3 when we could access it directly in the Lambda (and then store it in S3 if needed). Which means that we pay for an access we don't really need. Am I wrong in thinking that the approach asked by OP (uploading in chunks then saving to S3) would be more cost-efficient than uploading to S3 when there's some processing involved? username_5: Has anyone who worked with the signed-URL approach found a way to bundle the upload in a transaction? We are currently handling files up to 50 MB, so using a Lambda (or even API Gateway) is not an option due to the current limits. Whenever a file is uploaded, we have to make a database entry at the same time. If the database entry is made in a separate request (e.g. when creating the signed upload link), we run into trouble if the client calls the Lambda but then loses internet access and cannot finish the file upload; then there is inconsistent state between the database and S3. What is the serverless way to run transactions including a database and S3? username_1: You can create a database record when the signed URL is created, and then update it from a Lambda triggered by an S3 event when the object has been created. If you want to handle aborted uploads, you could trigger a Lambda from a CloudWatch schedule that handles (e.g. removes) records for files that have not been uploaded within the validity of the signed URL.
Or if using DynamoDB, you could set a TTL on the records for pending uploads. username_5: Hmm, this would still leave me with an inconsistent state. I could set a flag on the record that states whether the file is already confirmed or not. Sounds like monkey-patching a transaction system, though. If there is no better solution I will stay off serverless for these uploads for a while longer. username_1: Yes, you could store the state of the upload (e.g. initiated/completed). The serverless approach often requires you to think asynchronously. The advantage is that you don't have to pay compute time while you are just waiting for the client to upload data over a potentially slow connection. username_6: @username_5 have you figured this out? I'm facing a similar situation. I thought to create the database record when creating the signed URL, but I'm not sure how to handle my database state in case something goes wrong or in case the user just gives up uploading the file. @username_1 in cases like this, wouldn't it be better to handle the upload using a Lambda function? username_1: @username_6 As mentioned before, just store the state of the upload in your database. You could use Lambda, S3 triggers and DynamoDB TTL to implement a flow like:

- client calls API to get upload URL
- backend creates database record with `state: initiated` and `ttl: 3600`
- backend creates signed URL and returns it to the client
- client receives URL, and starts uploading file directly to S3
- when upload is complete, S3 triggers lambda
- lambda updates database record: `state: complete` (remove `ttl` field)

All records in the DB with `state: complete` are available in S3. Records with `state: initiated` are either uploading (and will turn to `state: complete`), or abandoned, and will be removed automatically when the TTL expires. username_6: I've ended up doing something pretty close to what you described. What I did was:

1. Client calls the API to get an upload URL
2. Client uploads the file to the provided URL
3. When the file is uploaded to S3, a Lambda of mine listens to this event and then inserts the data into my database

Thanks for your help @username_1 username_7: Both approaches are valid. With the presigned S3 URLs, you have to implement uploading logic on both backend and frontend. PUT requests can not be redirected. As an additional note to the [aws-node-signed-uploads example](https://github.com/serverless/examples/tree/master/aws-node-signed-uploads), it is _better_ to sign the content-type and file size together with the filename (to make S3 check for those as well, in case an attacker wants to send some `.exe` file instead). But receiving a file and processing it (even without S3 involved) is also a valid use case. Thanks @username_2, the code looks interesting. Looks like API Gateway is base64-encoding the octet streams. I'll keep looking, though.
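username_1's flow above maps onto two small handlers. Here is a minimal sketch in Python with boto3 rather than the thread's Node.js; the bucket name `my-uploads`, the table name `uploads`, and the `ttl` attribute are all assumptions, so treat it as an illustration of the pattern, not code from this repository.

```python
import time
import uuid

import boto3  # assumes AWS credentials are configured

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("uploads")  # hypothetical table with TTL on "ttl"

def create_upload_url(event, context):
    key = str(uuid.uuid4())
    # Record the pending upload; DynamoDB TTL removes abandoned ones.
    table.put_item(Item={"id": key, "state": "initiated", "ttl": int(time.time()) + 3600})
    # Signed PUT URL so the client uploads directly to S3, bypassing Lambda.
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-uploads", "Key": key},
        ExpiresIn=3600,
    )
    return {"statusCode": 200, "body": url}

def mark_complete(event, context):
    # Triggered by the S3 ObjectCreated event once the upload finishes.
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        table.update_item(
            Key={"id": key},
            UpdateExpression="SET #s = :c REMOVE #t",
            ExpressionAttributeNames={"#s": "state", "#t": "ttl"},
            ExpressionAttributeValues={":c": "complete"},
        )
```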
Jermolene/TiddlyWiki5
94587000
Title: Extend syslink.js to linkify prefix "$:/_" and a suffixing number Question: username_0: In #1767 @pmario explains regarding automatic SystemTiddlerLink detection.

```
There is a rule syslink.js that detects system tiddler links. It basically applies these rules:
- starting with a literal $:
- then any number of characters that are not a whitespace, < or |
- closing with anything that is, again, not a whitespace, < or |

As Jeremy wrote ^^ and basically everything except whitespace < | is allowed for system tiddlers to be recognized as automatic WikiLinks.
```

...but in my TWaddle site, I typically use the prefix `$:/_` (specifically, I use `$:/_TWaddle/`). This is however not recognized as a WikiLink. Can it please be made to accept this? Suffixed digits or numbers are not recognized either. Pretty serious IMO. `$:/foo2` Answers: username_1: I don't understand. `[[$:/_TWaddle/]]` works for me: <img width="986" alt="screen shot 2015-07-13 at 09 30 49" src="https://cloud.githubusercontent.com/assets/70075/8650619/0be29e8a-2942-11e5-8d3d-3b836d7f28fc.png"> username_2: I think it makes sense to extend the automatic system tiddler link rules to include underscore and digits. Status: Issue closed username_0: @username_1 You see the difference if you type in the following two:

$:/.foobar
$:/_foobar

The first is recognized, the second isn't. People insert the period or the underscore to make e.g. all their own system tiddlers list together. @username_2 - great. username_1: Oh I guess because the auto linking doesn't work for terms that are not CamelCased, I just always got into the practice of putting brackets around all my links.
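For illustration, here is a rough Python approximation of the rule quoted above, extended the way username_2 suggests so that `$:/_` prefixes and trailing digits linkify too. This is a sketch under those assumptions, not the actual syslink.js source.

```python
import re

# Approximate rule: "$:/" followed by anything that is not whitespace, < or |.
SYSLINK = re.compile(r"\$:/[^\s<|]+")

for text in ("$:/.foobar", "$:/_TWaddle/config", "$:/foo2"):
    print(text, bool(SYSLINK.fullmatch(text)))  # all three match under this rule
```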
StraboSpot/strabo-mobile
87458436
Title: images Question: username_0: Did some work with this in the branch https://github.com/StraboSpot/strabo-mobile/tree/imageUpload but still not working. Answers: username_0: Did some work with this in the branch https://github.com/StraboSpot/strabo-mobile/tree/imageUpload but still not working. username_1: Fixed in 864333c6fe5262ca22a708a7099f87ab4b017f54. I like the idea of "images" as a sibling to "properties". If it were a child of "properties", I feel that it would get too crowded. username_0: @username_1 Can you see if you can get the download working too? I added code for it in the branch, but I'm not sure about the format in which we need to actually save the image in the spot locally. username_1: Corrected image download in c457b04788231b6e7b6a22a1aa0d1363fc94a007. Problem was due to XHR coming back as a string and not as a blob. Then needed to convert the blob back to a base64-encoded string. username_0: @username_1 Great, this is looking good and I merged it in with master. Still have a problem with the server overwriting images since they all have the same filename, but Jason says he'll fix it server-side tomorrow. See https://github.com/StraboSpot/strabo-server/issues/11 Status: Issue closed
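The string-versus-blob fix username_1 describes translates to any client: request the image as raw bytes, then base64-encode it for local storage. A minimal sketch in Python (the app itself is JavaScript, and the URL and media type here are placeholders):

```python
import base64

import requests  # third-party; pip install requests

# Fetch the image as raw bytes (not text), then store it as a base64 string.
resp = requests.get("https://example.com/spot-image.jpg")
resp.raise_for_status()
encoded = base64.b64encode(resp.content).decode("ascii")
data_url = "data:image/jpeg;base64," + encoded  # a form many apps persist locally
```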
atk4/ui
301359142
Title: Grid can not handle $g->setModel(new MyModel($this->db), FALSE); Question: username_0: Right from the demo ( https://github.com/atk4/ui/blob/develop/demos/grid.php ), I wanted to use the code like this:

```php
<?php

require 'init.php';
require 'database.php';

$g = $app->add(['Grid']);
$g->setModel(new Country($db), FALSE);
$g->addQuickSearch();
$g->menu->addItem(['Add Country', 'icon' => 'add square'], new \atk4\ui\jsExpression('alert(123)'));
$g->menu->addItem(['Re-Import', 'icon' => 'power'], new \atk4\ui\jsReload($g));
$g->menu->addItem(['Delete All', 'icon' => 'trash', 'red active']);
$g->addColumn('name');
$g->addColumn(null, ['Template', 'hello<b>world</b>']);
//$g->addColumn('name', ['TableColumn/Link', 'page2']);
$g->addColumn(null, 'Delete');

$g->addAction('Say HI', function ($j, $id) use ($g) {
    return 'Loaded "'.$g->model->load($id)['name'].'" from ID='.$id;
});

$g->addModalAction(['icon'=>'external'], 'Modal Test', function ($p, $id) {
    $p->add(['Message', 'Clicked on ID='.$id]);
});

$sel = $g->addSelection();
$g->menu->addItem('show selection')->on('click', new \atk4\ui\jsExpression(
    'alert("Selected: "+[])', [$sel->jsChecked()]
));

$g->ipp = 10;
```

I only changed lines 7 and 12 (added a FALSE to have the columns added myself, and added one column). Returns a "calling function on NULL" exception in line 234 of Grid.php... It seems like Grid can not handle it like this. It works with the normal table and forms though. Answers: username_1: It's not because of `setModel()`, it's because you have `null` as a column name and it calls `Table->addColumn(null, 'whatever', null)`, and here https://github.com/atk4/ui/blob/develop/src/Table.php#L151 it tries to create a model field with name `null` :) ``` $field = $this->model->addField($name); // $name = null here ``` Solution / workaround - use a meaningful column name (which is not a model field name). username_1: Well actually it was because of setModel :) Here is the real issue why this happened: https://github.com/atk4/ui/issues/390 and the solution: https://github.com/atk4/ui/pull/391 username_2: Table should support null as a column name. Status: Issue closed
Meragon/Unity-WinForms
205099157
Title: I created C# code in Unity3D implementing Control and copied your code, but when I run it, only a mouse icon shows. Why? Thanks! Question: username_0: I created C# code in Unity3D implementing Control and copied your code, but when I run it, only a mouse icon shows. Why? Thanks! Answers: username_1: Make sure your control is placed on a form. Like

```
public class MyControl : Control
{
    public MyControl()
    {
        Size = new Size(64, 64);
    }
    // Full signature assumed from the WinForms API this library mirrors;
    // the original comment sketched it as "...OnPaint(e)".
    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.FillRectangle(Color.Red, 0, 0, Width, Height);
    }
}

public class MyForm : Form
{
    public MyForm()
    {
        var c = new MyControl();
        c.Location = new Point(32, 32);
        Controls.Add(c);
    }
}

public class AppBeh : MonoBehaviour
{
    private void Start()
    {
        new MyForm();
    }
}
```

username_1: Or just upload your project and I'll take a look. username_0: This is my project, thank you very much. username_0: I sent it to your email. Status: Issue closed username_0: I created C# code in Unity3D implementing Control and copied your code, but when I run it, only a mouse icon shows. Why? Thanks! username_0: Could you tell me your email? I'll send my project to you, thank you very much. username_1: <EMAIL> Status: Issue closed
paketo-buildpacks/paketo-website
679245331
Title: Update docs sidebar Question: username_0: We should update the docs sidebar nav to be more flexible and organized. The CFF web team has already made some changes to the Hugo config to support a more "tree-like" nav, but we'll need to make some changes to wire everything up. GitHub branch - https://github.com/paketo-buildpacks/paketo-website/tree/docs_sidebar_enhancement The eventual structure for this should look something like this: ![image](https://user-images.githubusercontent.com/37810054/90267514-a879d600-de23-11ea-9488-b29d5c14c9ac.png) We don't have some of these pages yet, so for now we should just make the sidebar nav flexible and nest existing content around the proposed nav. Answers: username_0: This looks great. This has been implemented in the `redesign` branch and will be merged in once we're closer to shipping all of our website changes. Status: Issue closed
getsentry/sentry-python
466024993
Title: Django Integration Leaks Memory Question: username_0: I expended around 3 days trying to figure out what was leaking in my Django app and I was only able to fix it by disabling sentry Django integration (on a very isolated test using memory profiler, tracemalloc and docker). To give more context before profiling information, that's how my memory usage graph looked on a production server (killing the app and/or a worker after a certain threshold): ![image](https://user-images.githubusercontent.com/15985195/60928150-38cd0b00-a261-11e9-8c55-86f6f2f5e0bf.png) Now the data I gathered: By performing 100.000 requests on this endpoint: ```python class SimpleView(APIView): def get(self, request): return Response(status=status.HTTP_204_NO_CONTENT) ``` A [tracemalloc ](https://docs.python.org/3/library/tracemalloc.html) snaphost, grouped by filename, showed sentry django integration using 9MB of memory after a 217 seconds test with 459 requests per second. (using NGINX and Hypercorn with 3 workers): ``` /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:0: size=8845 KiB (+8845 KiB), count=102930 (+102930), average=88 B /usr/local/lib/python3.7/site-packages/django/urls/resolvers.py:0: size=630 KiB (+630 KiB), count=5840 (+5840), average=110 B /usr/local/lib/python3.7/linecache.py:0: size=503 KiB (+503 KiB), count=5311 (+5311), average=97 B /usr/local/lib/python3.7/asyncio/selector_events.py:0: size=465 KiB (+465 KiB), count=6498 (+6498), average=73 B /usr/local/lib/python3.7/site-packages/sentry_sdk/scope.py:0: size=325 KiB (+325 KiB), count=373 (+373), average=892 B ``` tracemalloc probe endpoint: ```python import tracemalloc tracemalloc.start() start = tracemalloc.take_snapshot() @api_view(['GET']) def PrintMemoryInformation(request): current = tracemalloc.take_snapshot() top_stats = current.compare_to(start, 'filename') for stat in top_stats[:5]: print(stat) return Response(status=status.HTTP_204_NO_CONTENT) ``` I have performed longers tests and the sentry django integration memory usage only grows, never releases, this is just a scaled-down version of the tests I've been performing to identify this leak. 
This is how my sentry settings looks like on settings.py: ![image](https://user-images.githubusercontent.com/15985195/60928374-ff48cf80-a261-11e9-80d8-e3700680bf79.png) Memory profile after disabling the Django Integration (same test and endpoint), no sentry sdk at top 5 most consuming files: ``` /usr/local/lib/python3.7/site-packages/django/urls/resolvers.py:0: size=1450 KiB (+1450 KiB), count=15123 (+15123), average=98 B /usr/local/lib/python3.7/site-packages/hypercorn/protocol/h11.py:0: size=1425 KiB (+1425 KiB), count=8868 (+8868), average=165 B /usr/local/lib/python3.7/site-packages/channels/http.py:0: size=1398 KiB (+1398 KiB), count=14848 (+14848), average=96 B /usr/local/lib/python3.7/site-packages/h11/_state.py:0: size=1242 KiB (+1242 KiB), count=13998 (+13998), average=91 B /usr/local/lib/python3.7/site-packages/h11/_connection.py:0: size=1226 KiB (+1226 KiB), count=15957 (+15957), average=79 B ``` settings.py for the above profile: ![image](https://user-images.githubusercontent.com/15985195/60928761-6d41c680-a263-11e9-848c-e75d67ccce5d.png) Memory profile grouped by line number (more verbose): ``` /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:272: size=4512 KiB (+4512 KiB), count=33972 (+33972), average=136 B /usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/django/__init__.py:134: size=4247 KiB (+4247 KiB), count=67945 (+67945), average=64 B [Truncated] Twisted==19.2.1 txaio==18.8.1 typed-ast==1.4.0 typing-extensions==3.7.4 Unidecode==1.1.1 urllib3==1.25.3 uvloop==0.12.2 vine==1.3.0 wcwidth==0.1.7 websockets==7.0 whitenoise==4.1.2 wrapt==1.11.2 wsproto==0.14.1 zipp==0.5.2 zope.interface==4.6.0 ``` I used the [official python docker image](https://hub.docker.com/_/python) with the label 3.7, meaning latest 3.7 version. Hope you guys can figure the problem with this data, I'm not sure if I'll have the time to contribute myself! Answers: username_0: Identifying a memory leak makes me a contributor? :/ username_1: Could you try a different wsgi server? I cannot reproduce any of this with uwsgi or django's devserver. username_0: That might be the thing then, I'm actually using ASGI, tried the three options available, Daphne, Hypercorn and Gunicorn+Uvicorn. username_1: Oh you're using Django from a development branch and run on ASGI? That might be not the same issue @username_3 is seeing at all then. username_0: I'm absolutely not using Django from a development branch, I'm using Django Channels. username_1: got it. we never tested with channels, but have support/testing on our roadmap. the memory leak still shouldn't happen. we'll investigate but it could take some time. username_0: Feel free to reach me for any further clarifications, I'm happy to help and appreciate your efforts nonetheless 👍 username_1: Yeah, I can repro it with a random channels app that just serves regular routes via ASGI + hypercorn (not even any websockets configured) So far no luck with uvicorn though. Did you observe faster/slower leaking when comparing servers? username_1: I understand the problem now, and can see how we leak memory. I have no idea why this issue doesn't show with uvicorn but I also don't care. It's very simple: For resource cleanup we expect every Django request to go through `WSGIHandler.__call__`. This doesn't happen for channels. I think you will see more of the same issue if you try to set tags with `configure_scope()` in one request. They should not persist between requests, but do with ASGI. 
This issue should not be new in 0.10, but it should've been there since forever. @username_3 if you saw a problem when upgrading to 0.10 then this is an entirely separate issue, and I would need more information about the setup you're running. I think this becomes a duplicate of #162 then (or it will get #162 as a dependency at least). cc @username_2. The issue is that last time I looked into this I saw a not-quite-stable specification, which is why I was holding off on it. You might be able to find a workaround by installing https://github.com/encode/sentry-asgi in addition to the Django integration username_2: ASGI 3 is baked and done. You’re good to go with whatever was blocked there. username_1: I don't think this helps in this particular situation because the latest version of channels still appears to use a previous version (had to pin down sentry-asgi) username_1: @username_2 I want to pull sentry-asgi into sentry. Do you think it would be possible to make a middleware that behaves as polyglot ASGI 2 and ASGI 3? username_2: Something like this would be a "wrap ASGI 2 or ASGI 3, and return an ASGI 3 interface" middleware...

```python
import inspect

def asgi_2_or_3_middleware(app):
    if len(inspect.signature(app).parameters) == 1:
        # ASGI 2
        async def compat(scope, receive, send):
            instance = app(scope)
            await instance(receive, send)
        return compat
    else:
        # ASGI 3
        return app
```

Alternatively, you might want to test if the app is ASGI 2 or 3 first, and just wrap it in a 2->3 middleware if needed.

```python
def asgi_2_to_3_middleware(app):
    async def compat(scope, receive, send):
        instance = app(scope)
        await instance(receive, send)
    return compat
```

Or equivalently, this class-based implementation that Uvicorn uses: https://github.com/encode/uvicorn/blob/master/uvicorn/middleware/asgi2.py username_3: The main problem I observed was an increase in memory usage over time of a long-running (tens of minutes) Celery task (which has lots of Django ORM usage). I haven't had a chance to do any traces, but I'll create a new issue when I do. username_1: @username_0 When 0.10.2 comes out, you can do this to fix your leaks: https://github.com/getsentry/sentry-python/blob/ce3b49f8f0f76939d972868f417dcf9ef78758aa/tests/integrations/django/myapp/asgi.py#L19 username_0: Hey! That sounds great @username_1, I will try it eventually, thanks for your time!!! username_1: 0.10.2 is released with the new ASGI middleware! No documentation yet, will write tomorrow username_2: Nice one. 👍 username_1: Please watch this PR, when it is merged the docs are automatically live: https://github.com/getsentry/sentry-docs/pull/1118 username_1: Docs are deployed, this is basically fixed. New docs for ASGI are live on https://docs.sentry.io/platforms/python/asgi/ Status: Issue closed username_4: @username_1 I'm currently using channels==1.1.8 and raven==6.10.0 and experiencing memory leak issues very similar to this exact issue. I wanted to inquire whether there is a solution for Django Channels 1.x? I see that Django Channels 2.0 and sentry-sdk offer a solution via SentryAsgiMiddleware. I wanted to see if this also works with Django Channels 1.0? username_1: I'm not aware of any issues on 1.x, and for that version sentry-sdk does not do anything differently, but I believe you would have to try for yourself.
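For reference, the wiring that the linked asgi.py demonstrates looks roughly like this (sentry-sdk >= 0.10.2; `myapp.routing` is a placeholder for wherever your Channels application lives):

```python
import sentry_sdk
from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(dsn="...", integrations=[DjangoIntegration()])

from myapp.routing import application  # the Channels ASGI application (placeholder)

# Wrapping the ASGI app gives the SDK a per-request entry/exit point,
# so scope data is cleaned up between requests instead of leaking.
application = SentryAsgiMiddleware(application)
```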
AAChartModel/AAChartKit-Swift
688827059
Title: moveOverEventMessage.x is nil when tapping any node - AAChartViewDelegate Question: username_0: Hi, I am using a line chart in my app and I want to show a certain view when any node (x, y) value in the graph is tapped. Answers: username_1: Can you post your configuration code of username_1 or AAOptions? username_0:

```swift
self.chartModel = username_1()
    .chartType(AAChartType.line)
    .colorsTheme(self.currentSelectedGraphData.getSelectedColorTheme())//Colors theme
    .dataLabelsEnabled(true)
    .stacking(.none)
    .touchEventEnabled(true)
    .tooltipEnabled(false)
    .yAxisLabelsEnabled(true)
    .legendEnabled(false)
    .yAxisVisible(true)
    .yAxisMin(Float(self.currentSelectedGraphData.minYValue))
    .yAxisMax(Float(self.currentSelectedGraphData.maxYValue))
    .yAxisAllowDecimals(true)
    .xAxisVisible(false)
    .xAxisLabelsEnabled(false)
    .xAxisTickInterval(1.0)
    .animationType(.linear)
    .markerRadius(5)
    .markerSymbolStyle(.borderBlank)
    .markerSymbol(.circle)
    .series(self.currentSelectedGraphData.getSelectedGraphSeries())
```

username_0: By the way, I have observed that this is even happening in the Swift sample project as well. username_1: I think it may be a bug in Highcharts. However, the index value always appears, so you can get the x and y values by index:

```swift
x = moveOverEventMessage.index
y = aaSeriesElement.data[moveOverEventMessage.index]
```

username_0: Actually it's not that simple. My X and Y axes are being generated at run time according to the data model, and an X value could also be 0.1. So I can't really rely on the above approach. username_1: I found the reason now. Because JavaScript is a dynamic language, the types of the x and y values are not very stable; sometimes a value is a `String`, sometimes a `Float`, just like this:

* `String` type y value

```js
user finger moved over!!!,get the move over event message:
{
    category = "C++";
    index = 8;
    name = 2020;
    offset = {
        plotX = "259.95833333333";
        plotY = "196.772604709689";
    };
    x = 8;
    y = "14.2";
}
```

* `Float` type y value

```js
user finger moved over!!!,get the move over event message:
{
    category = C;
    index = 6;
    name = 2020;
    offset = {
        plotX = "198.79166666667";
        plotY = "156.3052309904728";
    };
    x = 6;
    y = 17;
}
```

Status: Issue closed username_1: It was fixed in this commit b72b82b username_1:

--------------------------------------------------------------------------------
 🎉  Congrats
 🚀  AAInfographics (5.0.3) successfully published
 📅  August 31st, 21:43
 🌎  https://cocoapods.org/pods/AAInfographics
 👍  Tell your friends!
--------------------------------------------------------------------------------
huanghunbieguannoMac:AAChartKit-Swift anan$
Mygod/VPNHotspot
462312927
Title: Neither user 10109 nor current process has android.permission.MANAGE_USB Question: username_0: How to fix USB tethering? Error: "Neither user 10109 nor current process has android.permission.MANAGE_USB" "pm grant be.mygod.vpnhotspot android.permission.MANAGE_USB" not working :-( Android 9.0 (API 28), LineageOS 16.0 Answers: username_1: #14 Status: Issue closed
haskell-distributed/network-transport-tcp
89090421
Title: Network.Transport.TCP may reject incoming connection request Question: username_0: Suppose A and B are connected, but the connection breaks. When A realizes this immediately and sends a new (heavyweight) connection request to B, then it /might/ happen that B has not yet realized that the current connection has broken and will therefore reject the incoming request from A as invalid. This is low priority because

* the window of opportunity for the problem to occur is small, especially because in the case of a true network failure it will take some time before a new connection can be established
* even if the problem _does_ arise, A can simply try to connect again (A might have to do that anyway, to find out if the network problem has been resolved).

Answers: username_1: From @username_1 on June 16, 2015 22:23 _Copied from original issue: haskell-distributed/distributed-process#31_ This may refer to the same problem we are observing when resolving connection crossing [1]. A can retry for a long time if B doesn't have any stimulus to test the connection health. TCP does not detect connection failures unless B is trying to send something, and moreover, detection won't happen promptly unless the user cares to tweak the TCP user timeout or the amount of retries. [1] https://cloud-haskell.atlassian.net/browse/NTTCP-9
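As a side note to username_1's point about detection: at the socket level, the keepalive and user-timeout knobs mentioned above look like the sketch below. This is an illustration in Python (the library under discussion is Haskell), and the option names are Linux-specific.

```python
import socket

# Make a peer notice a dead connection sooner, even when it is mostly idle.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10)   # first probe after 10s idle
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 5)   # then probe every 5s
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # give up after 3 failed probes
# Fail pending sends that stay unacknowledged for 15 seconds:
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 15000)
```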
openssl/openssl
375850505
Title: Cannot compile OpenSSL 1.1.1's Windows 32-bit static library Question: username_0: First of all, currently, is there an official OpenSSL 1.1.1 Windows 32-bit static library? Where can I download it? Besides, I referred to the Chinese link: https://blog.csdn.net/qq_22000459/article/details/82968171. But as many of the posts on the web say, the nmake tool compiled incorrectly and could not be fixed. Using the dmake.exe under perl to compile also fails with an error: "Error: -- Expecting macro or rule defn, found neither." I tried many times and can't fix it. So far, it is easy for me to compile the static library of OpenSSL 1.1.1 under a Linux system, but on Windows systems I cannot solve the problem. Is there anyone who can help me? Thanks a lot. Answers: username_1: How about you show us exactly what you did, and what errors you get. Start with your configuration command, and then a log of what goes wrong. username_0: ![image](https://user-images.githubusercontent.com/33438500/47779490-2bf03300-dd34-11e8-98c7-0e462f4968f1.png) username_2: You're not supposed to use dmake. You are supposed to use nmake. What happens when you attempt to use that? username_0: ![image](https://user-images.githubusercontent.com/33438500/47779774-e41ddb80-dd34-11e8-9357-474688da1967.png) I referred to posts on many websites in China, all of which say that nmake compilation will have problems, so I used dmake. I tried it; there's also a problem. A post on the website seems to say that the makefile supports nmake by default, and that it is easily buggy to compile with dmake. But nmake compilation also has problems, so it's hard to fix. username_2: Are you running this from a *developer* command prompt? Running nmake from the command line requires your environment to be properly set up. Using the developer command prompt should do that for you. Make sure you use the 32-bit developer command prompt with the VC-WIN32 OpenSSL Configuration target. If you really want 64-bit, then use a 64-bit developer command prompt and the VC-WIN64A OpenSSL Configuration target. username_1: From the output, it looks like the environment is set up correctly. @username_0, would you mind translating that error message? Most of us don't read Chinese. username_1: It's the translation of the `cl` error message that I'm interested in. Among others, it should say on what line the error occurred. username_3: Although I know zero about Windows, I can provide the translation Richard was interested in. That `cl` error message said `'cl' is not either an external or internal command or an executable program`... I guess you can then figure out the 'native' wording of Windows... username_3: the `0x1` and `0x2` are return values of `cl` and `nmake.exe` in the 2 `fatal error U1017` lines, respectively. username_1: Ah, in that case I was wrong earlier, the environment isn't set up right. See @mattcasswell's comment... username_1: Thanks for the translation, @username_3 Status: Issue closed username_0: Since the VS2015 x86 native tools command prompt was not installed by default, I needed to rerun the VS2015 installer to add it. Using the VS2015 x86 native tools command prompt to run nmake works. Thanks a lot. Problem solved. @username_1 @username_2 @username_3
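For anyone landing here: the resolution, per the thread, is to build from the 32-bit developer prompt. A sketch of the command sequence (the `no-shared` option for a static-only build is my addition based on OpenSSL's INSTALL notes, not something stated in the thread):

```
perl Configure VC-WIN32 no-shared
nmake
nmake test
```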
melvinsalas/tucurrique-website
951006467
Title: New Post - 178492562722259_996093277628846 Question: username_0: ![Image](https://external-iad3-2.xx.fbcdn.net/safe_image.php?d=AQGhw_ARU5q2eLBe&url=https%3A%2F%2Ftucurrique.super.site%2F_next%2Fimage%3Furl%3D%252Fimages%252Fsuper-icon.svg%26w%3D640%26q%3D75&ccb=3-5&_nc_hash=AQFQdDZZ8fRxSOpr) Link: https://www.facebook.com/178492562722259/posts/996093277628846/
rynazavr/goit-js-hw-01
623454488
Title: bind Question: username_0: https://learn.javascript.ru/bind https://stackoverflow.com/questions/2236747/what-is-the-use-of-the-javascript-bind-method https://medium.com/@stasonmars/%D0%BF%D0%BE%D0%B4%D1%80%D0%BE%D0%B1%D0%BD%D0%BE-%D0%BE-%D0%BC%D0%B5%D1%82%D0%BE%D0%B4%D0%B0%D1%85-apply-call-%D0%B8-bind-%D0%BD%D0%B5%D0%BE%D0%B1%D1%85%D0%BE%D0%B4%D0%B8%D0%BC%D1%8B%D1%85-%D0%BA%D0%B0%D0%B6%D0%B4%D0%BE%D0%BC%D1%83-javascript-%D1%80%D0%B0%D0%B7%D1%80%D0%B0%D0%B1%D0%BE%D1%82%D1%87%D0%B8%D0%BA%D1%83-ddd5f9b06290 Answers: username_0: https://www.javascripttutorial.net/javascript-bind/
OpenSourceBrain/StochasticityShowcase
199661869
Title: Can't run LEMS_NoisyCurrentInput.xml Question: username_0: @username_1 @justasb I'm running this through pyNeuroML (which I don't think is the problem): ``` from pyneuroml import pynml pynml.run_lems_with_jneuroml('LEMS_NoisyCurrentInput.xml') ``` which tells me that things failed with (abbreviated): ``` pyNeuroML >>> *** Command: java -Xmx400M -jar ".../site-packages/pyNeuroML-0.2.0-py3.4.egg/pyneuroml/lib/jNeuroML-0.8.0-jar-with-dependencies.jar" "LEMS_NoisyCurrentInput.xml" *** ``` So I run that on the command line to get the specific error: ``` $ java -Xmx400M -jar ".../site-packages/pyNeuroML-0.2.0-py3.4.egg/pyneuroml/lib/jNeuroML-0.8.0-jar-with-dependencies.jar" "LEMS_NoisyCurrentInput.xml" jNeuroML v0.8.0 Loading: /.../StochasticityShowcase/NeuroML2/LEMS_NoisyCurrentInput.xml with jLEMS... org.lemsml.jlems.core.run.RuntimeError: Error at PathDerivedVariable eval(): tgt: null; sin: IF_curr_exp[IF_curr_exp]; tgtvar: I; path: synapses[*]/I at org.lemsml.jlems.core.run.PathDerivedVariable.eval(PathDerivedVariable.java:128) at org.lemsml.jlems.core.run.StateType.initialize(StateType.java:373) at org.lemsml.jlems.core.run.StateInstance.initialize(StateInstance.java:177) at org.lemsml.jlems.core.run.StateInstance.initialize(StateInstance.java:160) at org.lemsml.jlems.core.run.MultiInstance.initialize(MultiInstance.java:63) at org.lemsml.jlems.core.run.StateInstance.initialize(StateInstance.java:165) at org.lemsml.jlems.core.run.MultiInstance.initialize(MultiInstance.java:63) at org.lemsml.jlems.core.run.StateInstance.initialize(StateInstance.java:165) at org.lemsml.jlems.core.sim.Sim.run(Sim.java:276) at org.lemsml.jlems.core.sim.Sim.run(Sim.java:152) at org.lemsml.jlems.core.sim.Sim.run(Sim.java:143) at org.neuroml.export.utils.Utils.loadLemsFile(Utils.java:391) at org.neuroml.export.utils.Utils.runLemsFile(Utils.java:364) at org.neuroml.JNeuroML.main(JNeuroML.java:306) Caused by: org.lemsml.jlems.core.run.RuntimeError: Problem while trying to return a value for variable I: NaN ``` and then a bunch of INFO messages. Do you know what I am missing to be able to run this example successfully? Answers: username_1: @username_0 That looks like an old version of pyNeuroML... I'd pull the latest from github (or merge from master), install it and try again. username_1: I assume it's ok to close this now @username_0? username_0: Yes. Status: Issue closed
gbif/portal-feedback
233392811
Title: OccurrenceID search doesn't return results Question: username_0: **OccurrenceID search doesn't return results** Copying the occurrenceID from this record: https://demo.gbif.org/occurrence/575175144 does not return results when you add it to an occurrence search: https://demo.gbif.org/occurrence/search?dataset_key=bf2a4bf0-5f31-11de-b67e-b8a03c50a862&occurrence_id=http:%2F%2Fdata.rbge.org.uk%2Fherb%2Fe00070244&taxon_key=3097538&advanced=1 ----- fbitem-4cd51ab3c35a7b3305c9ec1422d26bc93a21a482 System: Chrome 58.0.3029 / Mac OS X 10.11.5 Referer: https://demo.gbif.org/occurrence/search?dataset_key=bf2a4bf0-5f31-11de-b67e-b8a03c50a862&occurrence_id=http:%2F%2Fdata.rbge.org.uk%2Fherb%2Fe00070244&taxon_key=3097538&advanced=1 Window size: width 1344 - height 799 [API log](http://elk.gbif.org:5601/app/kibana#/discover/UAT-Varnish-403s?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2017-06-03T20:10:29.214Z',mode:absolute,to:'2017-06-03T20:16:29.214Z'))&_a=(columns:!(request,response,clientip),filters:!(),index:'prod-varnish-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499%20AND%20(request:%22%2F%2Fapi.gbif.org%22)')),sort:!('@timestamp',desc))&indexPattern=uat-varnish-*&type=histogram) [Site log](http://elk.gbif.org:5601/app/kibana#/discover/UAT-Varnish-403s?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2017-06-03T20:10:29.214Z',mode:absolute,to:'2017-06-03T20:16:29.214Z'))&_a=(columns:!(request,response,clientip),filters:!(),index:'prod-varnish-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E399%20AND%20(request:%22%2F%2Fdemo.gbif.org%22)')),sort:!('@timestamp',desc))&indexPattern=uat-varnish-*&type=histogram) Answers: username_1: Thanks for reporting. This looks like an API issue and is the same behaviour as the current production site: Current site: http://www.gbif.org/occurrence/search?ORGANISM_ID=http%3A%2F%2Fdata.rbge.org.uk%2Fherb%2FE00070244 No results for the non-encoded URL: http://api.gbif.org/v1/occurrence/search?occurrence_id=http://data.rbge.org.uk/herb/e00070244 No results for the encoded one: http://api.gbif.org/v1/occurrence/search?occurrence_id=http%3A%2F%2Fdata.rbge.org.uk%2Fherb%2Fe00070244 But the occurrence is in SOLR: https://demo.gbif.org/occurrence/search?q=e00070244&basis_of_record=PRESERVED_SPECIMEN&country=CL&taxon_key=3097538 Occurrence: https://demo.gbif.org/occurrence/575175144 username_2: Hi. The above referenced duplicated issue was mine (for some reason my login info was not reported; perhaps I started writing the issue before I logged in). [https://github.com/gbif/portal-feedback/issues/235](https://github.com/gbif/portal-feedback/issues/235) But I have to say the behaviour is not the same. In my case, the API works well: http://api.gbif.org/v1/occurrence/search?occurrence_id=SANT:SANT-Lich:8850-B username_1: @username_2 I see that it is an issue of casing - looks like the API and front end don't agree on whether case is important, and it varies per field. I'll look into it. Thank you for reporting. As for the login info, that is to be expected (and leaving feedback without contact info is just fine btw). We have deliberately left the option to the user to provide contact details, but the idea is to have the option to set a default (e.g. your GitHub handle). But we haven't implemented that yet username_1: The e is lowercased following the procedure for other fields. This is not how the API works. Seems reasonable that ids are case sensitive, so the front end should reflect that.
This works: http://api.gbif.org/v1/occurrence/search?occurrence_id=http%3A%2F%2Fdata.rbge.org.uk%2Fherb%2FE00070244 Status: Issue closed
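A quick way to confirm the case-sensitivity point against the API directly; a sketch in Python, with the parameter name and the uppercase "E" taken from the working link above (and `requests` handling the percent-encoding). The `count` field is assumed to be present in the search response.

```python
import requests  # third-party; pip install requests

resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={"occurrence_id": "http://data.rbge.org.uk/herb/E00070244"},
)
resp.raise_for_status()
print(resp.json()["count"])  # non-zero with the exact stored casing
```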
iBotPeaches/Apktool
195789562
Title: Failed to decompile the apk Question: username_0: ### Information 1. **Apktool Version (`apktool -version`)** - 2.2.1 2. **Operating System (Mac, Linux, Windows)** - Mac Sierra, and JDK 1.8 3. **APK From? (Playstore, ROM, Other)** - Local apk ### Stacktrace/Logcat Exception in thread "main" brut.androlib.AndrolibException: brut.directory.DirectoryException: file must be a directory: client-xxxx at brut.androlib.res.AndrolibResources.decodeManifestWithResources(AndrolibResources.java:225) at brut.androlib.Androlib.decodeManifestWithResources(Androlib.java:137) at brut.androlib.ApkDecoder.decode(ApkDecoder.java:106) at brut.apktool.Main.cmdDecode(Main.java:166) at brut.apktool.Main.main(Main.java:81) Caused by: brut.directory.DirectoryException: file must be a directory: client-16.5.33-qa-phone-debug at brut.directory.FileDirectory.<init>(FileDirectory.java:38) at brut.androlib.res.AndrolibResources.decodeManifestWithResources(AndrolibResources.java:205) ... 4 more ### Steps to Reproduce 1. Decompile the apk with the following command: apktool d xxxx.apk ### Questions to ask before submission 1. Have you tried `apktool d`, `apktool b` without changing anything? - Answer: I have just tried to decompile the apk with this command: apktool d xxxx.apk Answers: username_1: Haven't seen this problem before. Almost like the manifest couldn't be located or the structure of the application is invalid. Unfortunately, I will need the APK to look at this any further. If the application is private, you can email me (ibotpeaches) (at) gmail (dot) com, with the subject - [PRIVATE] Bug 1381 - Apktool; otherwise just upload it here. Status: Issue closed username_1: Closing. 4+ years later, no application.
deeplearning4j/deeplearning4j
139298426
Title: Word2Vec:loadTxtVectors using input Stream Question: username_0: Hi all, It would be convenient to be able to load a Word2Vec model from an input stream instead of a File only. Status: Issue closed Answers: username_1: Implemented: https://github.com/deeplearning4j/deeplearning4j/pull/1237 Will be merged into master a bit later.
xylagbx/Picture
548459944
Title: Programmers, Take Note: 10 Practical VS Code Plugins Question: username_0: Tip: all of these plugins can be found for free on the Visual Studio Marketplace. <https://marketplace.visualstudio.com/>

**0. Visual Studio IntelliCode**

With more than 3.2 million downloads, Visual Studio IntelliCode is one of the most-downloaded plugins on the VS Marketplace, and in my opinion one of the most useful ones you can get. The plugin was built to give developers intelligent code-completion suggestions and comes with pre-built support for several programming languages. Using machine learning and patterns found across many open-source GitHub projects, it offers suggestions while you code.

![](https://mmbiz.qpic.cn/mmbiz_png/Pn4Sm0RsAuh8ZO85XianrZ3MOerVDJO4Ria1ibr1w0CicENH2oogicic03wFicjU2ckzyv8zKUptw1MvGQWE8Wmbooycw/640?wx_fmt=png)

**1. Git Blame**

Sometimes you need to know who wrote a certain piece of code. Git Blame comes to the rescue: it tells you who last touched a line of code and, best of all, you can see in which commit that happened. This is very useful information, especially when you use things like feature branches. With feature branches you can reference tickets via the branch name; because Git Blame tells you which commit (and therefore which branch) changed a line of code, you know which ticket caused the change. That helps you better understand the reasoning behind it.

![](https://mmbiz.qpic.cn/mmbiz_gif/Pn4Sm0RsAuh8ZO85XianrZ3MOerVDJO4RWrPsAPcUpHVMq5oufyBMD7w5NmdHywuxKQfGhOo0z37yyKOaGyqMew/640?wx_fmt=gif)

**2. Prettier**

Prettier is one of the best plugins for developers who want to follow a good set of rules while developing. It is a compelling plugin that lets you use the Prettier package, a powerful, opinionated code formatter that allows developers to format their code in a structured way. Prettier works with JavaScript, TypeScript, HTML, CSS, Markdown, GraphQL, and other modern tools, letting you format your code properly.

**3. JavaScript (ES6) Code Snippets**

Almost every reasonably up-to-date web developer has used some JavaScript stack. Whichever framework you choose, typing the same generic code across different projects slows down your workflow. JavaScript (ES6) Code Snippets is a handy plugin that provides a set of very useful JavaScript snippets, binding standard JavaScript calls to simple shortcuts. Once you get the hang of it, your productivity will improve considerably.

**4. Sass**

As you may have guessed, this plugin helps developers who are working with stylesheets. Once you start creating stylesheets for your application, be sure to use the Sass plugin. It supports the indented Sass syntax with syntax highlighting, autocompletion, and formatting. When it comes to styling, you will definitely want this tool in your toolset.

**5. Path Intellisense**

Path Intellisense is one of those Visual Studio Code plugins that gives your development a guaranteed productivity boost. If you work on many projects at once, using lots of different technologies, you will certainly need a handy tool that remembers path names for you. This plugin saves you a lot of the time you would otherwise waste looking for the right directory. Path Intellisense started as a simple extension for autocompleting filenames, but it has since proven to be a valuable asset in most developers' toolsets.

**6. Debugger for Chrome**

If you need to debug JavaScript, you don't have to leave Visual Studio Code. Debugger for Chrome, released by Microsoft, lets you debug your source files directly in Visual Studio Code.

![](https://mmbiz.qpic.cn/mmbiz_png/Pn4Sm0RsAuh8ZO85XianrZ3MOerVDJO4RicVTFH2Z5prQ1uFlBokm1iaS0on3WMUe2BcUxdPiaC5OibiaGSnYdnn2JbA/640?wx_fmt=png)

**7. ESLint**

The ESLint plugin integrates ESLint into Visual Studio Code. If you are not familiar with it, ESLint is a tool that statically analyzes code to find problems quickly. Most problems ESLint finds can be fixed automatically, and its fixes are syntax-aware, so you won't run into errors introduced by traditional find-and-replace algorithms. Best of all, ESLint is highly customizable.

**8. SVG Viewer**

[Truncated]

There are plenty of customization plugins that can change the sidebar's color scheme and icons. Some popular themes are free, for example: One Monokai, One Dark Pro, and Material Icon.

English original: medium.com/better-programming/10-extremely-helpful-visual-studio-code-plugins-for-programmers-c8520a3dc4b8

Recommended reading (click a title to jump):

[So good! An Emacs heavyweight publicly switches to VS Code](http://mp.weixin.qq.com/s?__biz=MjM5OTA1MDUyMA==&mid=2655445262&idx=1&sn=03bc6c911bd1582ac4febd64f790a1db&chksm=bd732d798a04a46f2af183273b94c9d42ed98541357f0af2407ff657eac387d0d81285aa131d&scene=21#wechat_redirect)

[Why did Facebook choose VS Code as its internal development tool?](http://mp.weixin.qq.com/s?__biz=MjM5OTA1MDUyMA==&mid=2655447693&idx=1&sn=7dc4f973287e18a9b46b0a5b4e8a1d77&chksm=bd7326fa8a04afecf9bcdc59fa06a41c51acea966c57abdd727b8f5e7e662a8d4c13a8a4c89b&scene=21#wechat_redirect)

Source: https://mp.weixin.qq.com/s/JgZpsqJScGY9LLffTskWzQ
MrTJP/ProjectRed
442356151
Title: Project Bench does not drop when broken Question: username_0: I did mine with a pickaxe. Answers: username_1: It drops fine. You need to use a pickaxe. You cannot harvest by hand. I will make it break slower if you do it by hand to make this more clear. username_0: I did mine with a pickaxe. username_1: No longer relevant in 1.15 port Status: Issue closed
asmith4299/asmith4299
216401778
Title: Jappont J6.7 Waterproofing Paint ("Sơn Chống Thấm Jappont J6.7") | Good waterproofing inside and outside the house | SONJAPPONT.COM | 0167 266 6789 Question: username_0: Jappont J6.7 Waterproofing Paint ("Sơn Chống Thấm Jappont J6.7") | Good waterproofing inside and outside the house | SONJAPPONT.COM | 0167 266 6789
http://www.youtube.com/watch?v=qYzOVKsL2tQ

via Sơn Nhà Đẹp - Sơn Jappont http://www.youtube.com/channel/UC-GPvET-eDZMBwFU6pzWqyg
March 23, 2017 at 05:34PM
yiisoft/yii2
245510418
Title: new installation - getting error /vendor/bower/jquery/dist Question: username_0: ### What steps will reproduce the problem? Installing a new instance ### What is the expected result? The file or directory to be published does not exist: /media/sf_Web_share/www/solidcoinz.local/vendor/bower/jquery/dist ### What do you get instead? It's the first time I have seen this myself, but the DevOps at my company have been telling me for about 2 months that I should find the problem. I don't think I'm experienced enough in this framework, though. ### Additional info Any server: CentOs/Ubuntu/Debian - detected PHP: 7.0 or 7.1 Composer: 1.3.1

| Q | A |
| ---------------- | --- |
| Yii version | 2.0.12 |
| PHP version | 7.0/7.1 |
| Operating system | CentOs/Ubuntu/Debian |

Answers: username_0: Global fxp/composer-asset-plugin was old, sorry! username_0: Close it please Status: Issue closed username_2: and the solution was... ¿? username_3: @username_2 the solution is not to use `fxp/composer-asset-plugin`. Use the `asset-packagist.org` repository instead username_4: I am getting this behavior on a yii2-app-basic template project with a brand-new composer.json and fxp/composer-asset-plugin removed. username_4: What a mess. For whoever else finds this post, if you are upgrading an existing app by overwriting an old composer.json with the latest one (as I did), you need to add aliases to your config/web.php for the following: ``` 'aliases' => [ '@bower' => '@vendor/bower-asset', '@npm' => '@vendor/npm-asset', ] ``` https://github.com/yiisoft/yii2/issues/14324#issuecomment-311260179 username_5: I'm surprised that this problem has been coming up reliably for about four years, but every time the issue gets shut (and this must be the 7th thread I've read on the yii issues list alone) and the solution is always to refer to someone else's solution, although the solution is followed by posts suggesting it didn't work. Everyone appears very hung up on the difference between 'bower' and 'bower-asset', which I get, but my problem (I now have both directories, aliases, fxp-whatever and asset-packagist blah) is that the error still says I'm missing `jquery` (which I am). How on earth do I get `jquery` into either of the directories? username_5: I guess to be fair, I now have it working, but with so many solutions having been offered to me, and so many (probably conflicting) changes, I'm really sorry but I can't actually say what change helped. My best advice would be that it involves:

- updating to the latest fxp thing (no idea how you find out what number it is, or what it does; I just randomly kept going up until it didn't work (1.4.2 was the last one)),
- add the alias suggestion above into the top level of the config array,
- don't use the "replace" suggestion in one of the other accepted answers,
- delete the root vendor directory (it never came back, so not sure where it came from in the first place),
- rename web.php to main.php,
- enable fxp and asset-packagist,
- rm your existing vendor directory,
- rm composer.lock,
- get a GitHub key, and
- download from the world's slowest download server

chuck in a few apachectl graceful and you might just get there.
I was going to attach a composer.json file in case that helped anyone, but apparently GitHub forums don't allow it - really 😞 so here it is in text - and please, I'm happy for constructive feedback, but it turned itself into a dog's breakfast:

```json
{
    "name": "yiisoft/yii2-app-basic",
    "description": "Yii 2 Basic Project Template",
    "keywords": ["yii2", "framework", "basic", "project template"],
    "homepage": "http://www.yiiframework.com/",
    "type": "project",
    "license": "BSD-3-Clause",
    "support": {
        "issues": "https://github.com/yiisoft/yii2/issues?state=open",
        "forum": "http://www.yiiframework.com/forum/",
        "wiki": "http://www.yiiframework.com/wiki/",
        "irc": "irc://irc.freenode.net/yii",
        "source": "https://github.com/yiisoft/yii2"
    },
    "minimum-stability": "stable",
    "require": {
        "php": ">=5.4.0",
        "yiisoft/yii2": "~2.0.14",
        "yiisoft/yii2-bootstrap": "~2.0.0",
        "yiisoft/yii2-jui": "*",
        "aws/aws-sdk-php": "^3.2"
    },
    "require-dev": {
        "yiisoft/yii2-debug": "~2.0.0",
        "yiisoft/yii2-gii": "~2.0.0",
        "yiisoft/yii2-faker": "~2.0.0"
    },
    "config": {
        "process-timeout": 1800,
        "fxp-asset": {
            "enabled": true
        }
    },
    "scripts": {
        "post-install-cmd": [
            "yii\\composer\\Installer::postInstall"
        ],
        "post-create-project-cmd": [
            "yii\\composer\\Installer::postCreateProject",
            "yii\\composer\\Installer::postInstall"
        ]
    },
    "extra": {
        "yii\\composer\\Installer::postCreateProject": {
[Truncated]
            {
                "runtime": "0777",
                "web/assets": "0777",
                "yii": "0755"
            }
        ]
    },
        "yii\\composer\\Installer::postInstall": {
            "generateCookieValidationKey": [
                "config/web.php"
            ]
        }
    },
    "repositories": [
        {
            "type": "composer",
            "url": "https://asset-packagist.org"
        }
    ]
}
```

username_3: To make it work you needed just `asset-packagist.org` in the `repositories` section of `composer.json` username_3: It is the best option until the bower-asset dependencies (like jQuery) are removed from the Yii core. And the core team is already working on it. Seems like `yiisoft/yii2 >= 2.1` will be separated from those dependencies, and we'll be able to create REST APIs without pulling in all those frontend libraries
Ragee23/f1-3-c2p1-colmar-academy
342590363
Title: Notes on CSS Question: username_0: Below I've highlighted a few areas where we can improve the CSS: **Indentation** Throughout the file there are a few areas where indentation is a little off. To get started, take a look at the very beginning of the file where we open a new media query but do not indent the things that it contains. **More CSS breakpoints (media queries)** Right now, the site gets a little jumbled as the size of the window changes. We can combat this by setting more breakpoints at different max-widths so as to adjust the sizes/arrangement of our elements as the size of the window grows or shrinks. For example, one might want to have breakpoints at `450px`, `700px`, `900px`, and `1100px`. **Class selector overuse** As mentioned in #1, we overuse classes that could otherwise be consolidated into one class that styles more than one element. This will keep us from having to repeat a lot of the same selectors throughout the CSS, making our life a bit easier.
ocornut/imgui
95922827
Title: OpenGL3 example broken on OSX Question: username_0: The commit e3b9a618839bda97e5bec814172185f3667e2324 seems to have broken rendering on OS X. On Linux everything works as expected, but OS X throws `GL_INVALID_OPERATION` for the `glDrawElements` on L91 and therefore nothing is rendered. I suspect this is due to the OpenGL driver being more strict on OS X, because with a GL core context, client-side arrays for the `indices` parameter of `glDrawElements` and `glDrawElementsInstanced` are explicitly disallowed. Answers: username_1: Do you know what would be a good fix? I am not really familiar with OSX or OpenGL. Maybe the iOS example provides a better base? username_0: Just did some more testing, and the specified commit is not the problem; the problem was introduced by the use of indices. The solution would be to load the indices into a buffer as well and bind it to `GL_ELEMENT_ARRAY_BUFFER`. So essentially the procedure of streaming data to the vertex buffers has to be done for an index buffer as well. username_0: Not much time to spare just now, but I can try to get a pull request done in the next couple of days if necessary. Status: Issue closed
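The fix username_0 describes, expressed as a sketch in Python with PyOpenGL (the example itself is C++, so this is only an illustration of the element-buffer idea): put the indices in a `GL_ELEMENT_ARRAY_BUFFER` and pass an offset to `glDrawElements` instead of a client-side array, which core profiles on OS X forbid. It assumes a live core-profile context and a bound VAO.

```python
import ctypes

import numpy as np
from OpenGL.GL import *  # third-party; pip install PyOpenGL

ibo = glGenBuffers(1)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo)
indices = np.array([0, 1, 2], dtype=np.uint16)
# Stream the index data into the buffer, just like the vertex buffers.
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.nbytes, indices, GL_STREAM_DRAW)
# The last argument is now a byte offset into the bound buffer, not a pointer.
glDrawElements(GL_TRIANGLES, indices.size, GL_UNSIGNED_SHORT, ctypes.c_void_p(0))
```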
probonopd/go-appimage
1118456043
Title: Says "AppImage Added", but that doesn't seem to be the case Question: username_0: I'm running a fresh install of Ubuntu 21.10, I downloaded AppImaged (-650-aarch64.AppImage) and, as soon as the download completed, I got a notification that said, "AppImage Added" (Paraphrasing) and then I moved it to ~/.local/bin/ and I got a notification saying, "AppImage Removed" and then "AppImage Added". The same thing happens with any other AppImages I'm downloading and moving, BUT then when I try to find it in the app search, these Apps don't come up. I made sure they're set to executable, and if I run them, they seem to work, but I thought they were supposed to show up in the App Menu... isn't that the point? Any advice is appreciated! Answers: username_1: Yes, they should show up in the menu. Try logging out from your desktop and logging in again. Also try without moving the AppImages around. username_0: OK, so I rebooted and downloaded another AppImage, but got no "AppImage added" notification when the download completed. I tried moving it to ~/.local/bin/ again and got no removed/added notifications. So I went to ~/.local/bin/ and deleted appimaged-650-aarch64.AppImage and re-downloaded it. This time, no notification when that download completed either, or upon moving it even though I got them the first time. 😕 username_0: I'm an idiot. I first tried the x86_64 version and mistakenly read the notification, "You're not running one of the approved live versions...." as an error message, so I figured I had the architecture wrong. Then I tried the aarch64 version, but in reality, it was the first version still working (kinda) until I rebooted. I just reinstalled the x86_64 version and all is working as it should. Sorry f Status: Issue closed username_1: :100: glad it worked out!
red-hat-storage/ocs-ci
528564373
Title: Installation fails on missing Local Storage CSV Question: username_0: ) E AssertionError: There are more than one local storage CSVs: [] ocs_ci/ocs/resources/ocs.py:178: AssertionError ================ 1 failed, 265 deselected in 2704.84s (0:45:04) ================<issue_closed> Status: Issue closed
inception-project/inception
413364751
Title: Optimize PNG images Question: username_0: **Describe the refactoring action** Some of the PNG images are unnecessarily large in terms of file size and could be optimized. **Expected benefit** The checkout size of the website which redundantly includes the documentation would be smaller. Also the artifact size would be reduced.<issue_closed> Status: Issue closed
DDVTECH/mistserver
69690359
Title: sourcery usage in the (old) Makefile Question: username_0: In commit 9b6312c on April 2nd (switch to CMake), sourcery was changed from using stdout to taking an explicit output argument. The (old) Makefile, however, was not updated accordingly, so now a clean project build with good-old-Make fails due to an empty src/controller/server.html.h file (which should have been generated by sourcery). In the same commit, the file embed.js.h was moved from src/ to src/output. The fix is to remove the output redirect ">" from both sourcery invocations and substitute src/embed.js.h -> src/output/embed.js.h in a few places in the Makefile ;-) Also, in the Makefile at line 159, add src/io.cpp to the input list (this problem was introduced by commit d370ef4). Status: Issue closed Answers: username_1: Thanks for looking into all this! The old Makefile was just kept there for backup purposes initially. We've since completely removed it as more and more things started breaking on it, and the cmake method is now the only way to compile. :-)
lingow/BusWatch
75897758
Title: Add a service method to get the tracked units of each route Question: username_0: I added the method to the interface and to the implementation, but it still needs to actually be implemented. Thinking it over, I'm going to request updates of where the buses of each route are periodically. In that case, the client already has the routes, but the information about the tracked units changes, which is why I decided that unitPoints will no longer be part of the Route class; instead, it will be obtained through a call to the service. What the server will have to do is, on check-ins and their check-in updates, store the most recent position for that check-in instance. When a getUnitPoints request is made, the server will look up all the check-ins corresponding to the provided routeId and return a list of LatLng with the most recent positions for the active check-ins. Status: Issue closed
Javacord/Javacord
950029534
Title: Let methods that require a populated cache throw exceptions Question: username_0: As already discussed here https://github.com/Javacord/Javacord/pull/816#issuecomment-884374895 we should throw an exception when someone tries to use methods like `getMembers` while the cache is disabled, either because the GUILD_MEMBERS intent is missing or because the cache has not been activated through `new DiscordApiBuilder().setUserCacheEnabled(true)`
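A rough sketch of what such a guard might look like; the helper names and the choice of `IllegalStateException` are placeholders for illustration, not API decided in this issue:

```java
// Hypothetical names: api, memberCache, and isUserCacheEnabled() stand in
// for Javacord internals and are not real API surface.
public Collection<User> getMembers() {
    if (!api.getIntents().contains(Intent.GUILD_MEMBERS) || !api.isUserCacheEnabled()) {
        throw new IllegalStateException(
                "getMembers() requires the GUILD_MEMBERS intent and an enabled "
                + "user cache; see DiscordApiBuilder#setUserCacheEnabled(true).");
    }
    return memberCache.values();
}
```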
cognoma/machine-learning
242043050
Title: Selecting the number of components returned by PCA Question: username_0:
```python
sss = StratifiedShuffleSplit(n_splits=100, test_size=0.1, random_state=0)
```
I'm thinking the next step should be creating a dataset that provides good coverage of the different query scenarios (#11), and performing `GridSearchCV` on these datasets, searching over a range of `n_components` to see how changing `n_components` affects performance (AUROC). @username_3, @username_2, @username_1 feel free to comment now or we can discuss at tonight's meetup.
Answers:
username_1: Thanks Ryan, this sounds like a good plan. Because of the increased speed of the pipeline, I'm in favor of just upping the number of components we search over. One thing that I will add is that we haven't optimized for the regularization strength yet (`alpha`). My guess is that the interplay between it and `n_components` is meaningful. I think it makes sense to optimize for them jointly. The process would be the same:
- select several balanced and unbalanced genes
- optimize over a large range of values for `n_components` and `alpha`
- determine the max range for each parameter
- decide if we should limit the range based on a heuristic around class balance

I won't be there tonight, but I should be free to work on something this weekend.
username_0: Agreed. I would assume the interplay between `n_components` and `l1_ratio` would also be very important (an `l1_ratio` of greater than 0 will include fewer features and likely have a large impact on the optimal `n_components`). I'll put a reference to #56 here as well, as that talks about what `l1_ratio` to use.
username_2: @username_0 Thanks for opening this issue! I also think this is a must-do if we want to enable search over the optimal `n_components` for the Cognoma web interface. It is in line with my experience that unbalanced mutations usually have a smaller optimal `n_components`. In order to limit the range of the search for the optimal `n_components`, we may need to experiment with a large variety of genes (including many balanced and many unbalanced) and a lot of values of `n_components`. Thus, we may be able to select a smaller common set of `n_components` values for the search when users query our datasets. Although I'm not sure, it would be great if we can select a particular range of `n_components` based on the pos/neg ratio of each gene. I will play around with Dask-SearchCV and see how much time / RAM it can save for the pipeline.
username_0: Sounds good. I evaluated the speed increase with dask-searchCV [here](https://github.com/cognoma/machine-learning/blob/master/explore/dask-searchCV/dask-searchCV.ipynb).
username_3: For the time being, before we get a better indication of the ideal PCA n_components by number of positives and negatives, I'd suggest expanding to something like `[10, 20, 50, 100, 250]`. As per #56, I suggest sticking with the default `l1_ratio` but expanding the range of `alpha`. You can use `numpy.logspace` or `numpy.geomspace` to generate the `alpha` range. I also think we may want to switch to:
```python
sss = StratifiedShuffleSplit(n_splits=100, test_size=0.1, random_state=0)
```
since non-repeated cross-validation is generally going to be too noisy for our purposes.
username_1: @username_3 when you say stick to the default `l1_ratio`, do you mean the `sklearn` default of 0.15? I made a comment in #56 mentioning that we may want to consider just using ridge regression since we have switched from `SelectKBest` to `PCA`.
The number of features we are using has been reduced significantly, so I'm not sure what we get from sparsity in the classifier.
username_3: That's what I was thinking (`l1_ratio = 0.15`). But your logic for using ridge (`l1_ratio = 0`) when we're using PCA makes sense to me. I don't have a strong opinion since I can see some principal components being totally irrelevant to the classification task and hence truly zero. I think we should pick whichever value gives us the most usable / user-friendly results. For example, with elastic net or lasso you may get all-zero coefficient models which will likely confuse the user as to what's going on.
username_1: I'm starting to run into RAM bloating issues again with the switch to `StratifiedShuffleSplit` and the increased parameter search space. It balloons up quickly even with just 10 splits. I think we are going to need to move this to EC2.
username_0: Just as an FYI - @dcgoss is getting [ml-workers](https://github.com/cognoma/ml-workers) pretty close to up and running. This is the repo where the notebooks will be run in production... on EC2 :smile: I'm also working on creating a benchmark dataset that provides good coverage of the different query scenarios. I'll post or PR this once it's further along.
username_1: That is with `l1_ratio=0` and using dasksearch. With respect to (2), it may not be a data leak but it could bias the classifier. There is a significant difference in performance between models with different `n_components`, so this could alter which model is chosen.
username_0: True, but I'm not suggesting we wouldn't still search over a range of `n_components`... I'm thinking we would perform PCA on the training set, the PCA components would be our `X Train`, and we would use something like `SelectKBest` to search over a range of features (PCA components) to include in the model; but instead of performing PCA within CV, it's already been performed on the whole training set.
username_1: Using `SelectKBest` is in essence doing the same thing as iterating over a variety of `n_components` after running the decomposition. I need to think about it a little more, but I believe you might still be leaking information here -- from the train partition of the CV fold to the test partition of the CV fold. You would get different principal components if you perform the decomp within cross-validation, so it may be a little less robust.
username_0: I agree, but this may not have much of an effect, and I don't think it's a big deal if our training `AUROC` score is slightly inflated as long as the testing `AUROC` score is still accurate. I also agree that the cross-validation will be less robust this way, but if it lets us search over a larger range of `n_components`, I think that will far outweigh the small loss of robustness. P.S. I know this would be a fairly large departure from the way the notebook is currently set up, so I'm not saying we _should_ do it this way; I'm just thinking about ways to get this thing to run.
username_2: I also agree that we should fit a `PCA` instance for each CV fold. A fitted PCA instance is part of the final model, as is the `SGDClassifier`. I did PCA together with scaling before Grid Search in [#71](https://github.com/cognoma/machine-learning/blob/master/algorithms/RIT1-PCA-username_2.ipynb), just because the former was not feasible given the computational resources I had. Still, we can have a separate notebook evaluating the loss of robustness.
username_3: A few notes.
The case @username_1 describes where `n_components = 640` (or more) is optimal will be relatively uncommon, since TP53 is the most mutated gene. In general, there will be fewer positives. I'm okay with not always getting the best model in these extreme cases. The researcher could always iterate by adding larger `n_components` values. We used to perform option 2 mentioned by @username_0, but switched to the more correct option 3. I agree that in most cases option 2 will cause little damage with great speedup. Furthermore, the testing values will still be correct, just not the cross-validation scores. However, I'd prefer to use the theoretically correct method, since we don't want to teach users incorrect methods. We'll have to find the delicate balance of a grid search that evaluates enough parameter combinations... but not too many.
username_0: Closed by #113 We will be using a function to select the number of components based on the number of positive samples in the query (or the number of negatives if it is a rare instance with more positives than negatives). The current function looks like:
```python
if min(num_pos, num_neg) > 500:
    n_components = 100
elif min(num_pos, num_neg) > 250:
    n_components = 50
else:
    n_components = 30
```
Status: Issue closed
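As a footnote for readers, a minimal sketch of the "theoretically correct" setup settled on above: PCA re-fit inside each CV fold via a pipeline, with `n_components` and `alpha` searched jointly. The parameter ranges are illustrative, not the project's final values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit
from sklearn.pipeline import Pipeline

# PCA lives inside the pipeline, so each CV fold fits its own decomposition
# on that fold's training partition only; there is no train/test leakage.
pipeline = Pipeline([
    ('pca', PCA()),
    ('classify', SGDClassifier(penalty='elasticnet', l1_ratio=0.15, random_state=0)),
])

param_grid = {
    'pca__n_components': [10, 20, 50, 100],
    'classify__alpha': np.logspace(-3, 1, 5),
}

sss = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=0)
search = GridSearchCV(pipeline, param_grid, cv=sss, scoring='roc_auc')
# search.fit(X, y)
```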
bojanrajkovic/pingu
237095798
Title: Implement alpha separation Question: username_0: Section 4.3.2 of the standard says that encoders should, as part of encoding, implement alpha separation as follows: "If all alpha samples in a reference image have the maximum value, then the alpha channel may be omitted, resulting in an equivalent image that can be encoded more compactly."
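A minimal sketch of that check, shown in Python for brevity rather than pingu's own types, assuming 8-bit RGBA samples:

```python
def alpha_is_separable(pixels):
    """True when every alpha sample is fully opaque (255 at 8-bit depth),
    so the alpha channel can be omitted per section 4.3.2."""
    return all(a == 255 for (_r, _g, _b, a) in pixels)

# When True, an encoder can emit color type 2 (truecolor) instead of
# color type 6 (truecolor with alpha) for an equivalent, smaller image.
```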
ElinamLLC/SharpVectors
277615084
Title: GDI broken? Question: username_0: Been looking at the GDI renderer and hit a problem, so I tried with the samples and they give the same error. Inheritance security rules violated while overriding member: 'SharpVectors.Renderers.Gdi.GdiGraphicsRenderer.get_Window()'. Security accessibility of the overriding method must match the security accessibility of the method being overriden. In the GdiTextSVGViewer sample, the error occurs when creating the SvgPictureBox control. #### This work item was migrated from CodePlex. CodePlex work item ID: '1749' Vote count: '1' Answers: username_0: [drmcw@2014/05/16] Well, I dropped the build from .NET 4.5 down to 3.5 and had to comment out ``` settings.DtdProcessing = DtdProcessing.Parse; ``` in SVGDocument as it's a 4.5 feature, and it works now. I would like to know if DTD parsing is necessary and what security change in 4.5 was causing the problem though! Status: Issue closed
lerna/lerna
874199630
Title: Unable to update workspace package-lock.json after `version` on npm@7 Question: username_0: I manage two mono-repo projects ([PixiJS](https://github.com/pixijs/pixi.js) and [PixiJS Filters](https://github.com/pixijs/filters)) which are both experiencing flavors of the same issue after upgrading to npm@7 workspaces. The package-lock file is not updated after calling `lerna version`. This creates an unclean git environment when CI attempts to publish, resulting in a blocked publish ([see example](https://github.com/pixijs/filters/runs/2251151629?check_suite_focus=true#step:13:19)).
## Expected Behavior
After doing `lerna version`, the package-lock.json should be updated to reflect the version number bumps in the packages and included in the tag.
## Current Behavior
package-lock.json is not updated, and version numbers on workspace packages reflect the previous versions. Subsequent `npm install`s create a change in the package-lock.json.
## Possible Solution
I tried unsuccessfully to add a hook to bump the lock: `"postversion": "npm i --package-lock-only"` But this errored with `Failed to exec postversion script`. Maybe having a `pretag` lifecycle hook would help to add this?
## Steps to Reproduce (for bugs)
1. `git clone [email protected]:pixijs/filters.git`
2. `npm install`
3. `npm run release -- --no-push --force-publish` (notice no package-lock.json changes in tag)
4. `npm install` (notice package-lock.json is updated)
## Your Environment
| Executable | Version |
| ---: | :--- |
| `lerna --version` | v4.0.0 & v3.13.4 |
| `npm --version` | v7.11.2 |
| `yarn --version` | n/a |
| `node --version` | v12.18.1 |

| OS | Version |
| --- | --- |
| macOS Big Sur | 11.2.3 |
Answers:
username_0: I have a workaround that is okay in case this is helpful for anyone. The idea is to amend the release commit after updating the package-lock. Previous `release` script:
```json
"release": "lerna version",
```
Updated `release` script:
```json
"release": "lerna version --no-push",
"postrelease": "npm i --package-lock-only && git commit -a --amend --no-edit && git push && git push --tags",
```
username_0: Sharing my workaround. This, however, should probably be Lerna's responsibility to play well with npm 7.
### Before
```shell
lerna version
```
### After
```shell
# Ignore Lerna's tag and push
lerna version --no-push --no-git-tag-version
# Get the current tag version
tag=v$(node -e "process.stdout.write(require('./lerna.json').version)");
# Update the lock file here
npm i --package-lock-only
# Auto-tag and auto-commit like usual
git commit --all -m ${tag}
git tag -a ${tag} -m ${tag}
git push --tags
git push
```
username_1: @username_0 so I realised I'm not hitting your error because I'm not (yet) including a private package as a devDependency. Which seemed like a thing I'll wish to do shortly, so I took a look at your repro repo.
Notwithstanding that it indeed looks like an npm regression you've hit, it looks like everything(?) works if you use a relative `file:` specifier instead of the version string when including your private devDependency. I.e. in your example `packages-a/package.json`: `"devDependencies": { "@tools/a": "file:../tools-a" }`
- 👎  creates a hardcoded dependency on package folder structure
- 👍  smaller release diffs with fewer version lines to update
- 🤷  a versioned dependency for a private package in the same repo doesn't seem meaningful anyhow

Does this make sense to you? Anything I'm missing re: utilising relative `file:` version specifiers for private devDependencies?
username_0: Nice @username_1. All this makes sense, and it seems like `file:` sounds like a workaround. I know in the past when I've used internal versioning like `*`, Lerna blew that away with the fuzzy/exact versions. Do you know if Lerna leaves `file:` paths intact?
username_1: Huzzah! Yes, it appears Lerna does leave `file:` paths intact; in fact I see it listed as a [highlighted feature for devDependencies (`lerna link convert`)](https://github.com/lerna/lerna/blob/6cb8ab2d4af7ce25c812e8fb05cd04650105705f/README.md#common-devdependencies) - and in fact [recommended](https://github.com/lerna/lerna/issues/1679#issuecomment-461544321).
DevExpress/testcafe
896574001
Title: TestCafé hangs & fails when a button click triggers a download in a separate browser window Question: username_0: ### What is your Test Scenario? As part of a test, we click on a button which leads to a document being downloaded. The download url is being supplied by our backend, and the frontend basically calls a window.open(documentUrl, "_blank") on it. TestCafe opens a blank window (about:blank), where the download is being completed in. ### What is the Current behavior? When a button is clicked that triggers a download, TestCafè opens a separate browser window and seems not to be able to switch to this download-window. Sooner or later the test runs into an error with the message "Cannot switch to the window.". ### What is the Expected behavior? TestCafé is able to continue with the test after clicking a button that triggers a download in a separate browser window. ### What is your web application and your TestCafe test code? A minimal example to reproduce the mentioned problem: The HTML-Code: ```html <html> <head> <script> function download() { window.open("testcafe-download.zip"); } </script> </head> <body> <h1>TestCafe Download</h1> <button onclick="javascript:download();">Download Test File</button> </body> </html> ``` The TestCafé/TypeScript code to reproduce the problem: ```typescript import { Selector, ClientFunction } from "testcafe"; fixture`My download tests`.page("file://C:/Users/path/to/testcafe-test.html"); test("First download test", async (t) => { const downloadButton = Selector("button"); await t.expect(Selector("button").exists).ok("Seems like the page is missing key components!"); await t.click(downloadButton); }); ``` The TestCafé/TypeScript code including a workaround that I came up with: ```typescript import { Selector, ClientFunction } from "testcafe"; fixture`My download tests`.page("file://C:/Users/path/to/testcafe-test.html").beforeEach(async (t) => { console.log("Another test starts.."); }); const overrideWindowOpen = ClientFunction(() => { // @ts-ignore window.open = function (url) { // @ts-ignore window.__lastWindowOpenUrl = url; }; }); // @ts-ignore [Truncated] const downloadUrl = await getLastWindowOpenUrl(); await t.navigateTo(downloadUrl); // navigate to the download-url within the same browser window to trigger the download in there }); ``` Calling the ClientFunction "overrideWindowOpen" right before clicking the button solved the issue for me as a workaround. ### Steps to Reproduce: 1. Save the given HTML code as well as zip folder with the name "testcafe-download.zip" together in one folder 2. Make sure that the page() call in the TestCafé/TypeScript code points correctly to the HTML file saved locally 3. Run your test command `testcafe chrome src/to/my/testfile.ts` (has to be adjusted accordingly) 4. See the blank window that pops up in which the download completes. TestCafé hangs from there and can not finish the test properly. ### Your Environment details: * testcafe version: 1.14.2 * node.js version: 14.16.1 * command-line arguments: testcafe chrome src/to/my/testfile.ts * browser name and version: Chrome 90.0.4430.93 * platform and version: Windows 10 Answers: username_1: I also caught the same issue when upgraded testcafe from "1.7.0" to "1.15.0". 
I checked what's different by running testcafe with `--live` flag and in 1.15 a new window is opened to download a file and it hangs the test: ![image](https://user-images.githubusercontent.com/4989157/127287104-6f2f04a5-d3f0-47d5-83ce-1c80cb129bc2.png) While in 1.7.0 no new browser windows are opened, the downloaded file appears at the bottom of the browser window and the same test doesn't fail: ![image](https://user-images.githubusercontent.com/4989157/127287848-05ca4c2c-afba-45ee-948e-af3f25963c7b.png) username_2: Hello, Thank you for your input. We have reproduced the problem and need some time to investigate it, please stay tuned.
icerockdev/moko-widgets
546165221
Title: open screen with result handler Question: username_0: To support cases like photo capture, photo selection, and item selection, something like [startActivityForResult](https://developer.android.com/training/basics/intents/result) is required. Answers: username_0: implemented and merged Status: Issue closed
nathanvda/cocoon
156158664
Title: 'Before remove' fade effect doesn't work on dynamically added items, works on existing items. Question: username_0: Hello. Can someone help me figuring out the issue. ``` // _form.html.erb <%= simple_form_for @institute_profile, url: url, method: method do |f| %> <div id="courses"> <%= f.simple_fields_for :courses do |s| %> <%= render 'course_fields', f: s %> <% end %> <%= link_to_add_association f, :courses do %> <span class="creat-file">Add Course</span> <%= fa_icon "plus-circle" %> <% end %> </div> <% end%> ``` ``` // _course_fields.html.erb <div class="nested-fields"> <%= f.input :name, label: "Course Name" %> <%= f.input :description, label: "Course Description", input_html: {:rows => 3} %> <%= link_to_remove_association "Remove Course", f %> </div> ``` ``` // Before adding new item, animate. This is working. $('#courses').on('cocoon:before-insert', function(e, course) { course.fadeIn('slow'); }); ``` ``` // Before removing an item, animate. This is working on existing items on edit page, but when I add new item and remove it, the fade effect isn't working $('#courses').on('cocoon:before-remove', function(e, course) { $(this).data('remove-timeout', 1000); course.fadeOut('slow'); }); ``` Answers: username_1: Mmmm did you check the obvious: maybe newly created items are not inside the `#courses` div? username_0: @username_1, when on the edit page, the existing items are directly under the '#courses' div. When a new item is added, the hierarchy is #courses > .links > newly-added-item. Is this normal? username_1: A bit unclear, because I do not see a `.links` class in the code you showed? Check out * the wiki page with ERB examples: https://github.com/username_1/cocoon/wiki/ERB-examples * the sample project: https://github.com/username_1/cocoon_simple_form_demo (which includes the remove animation as well) username_0: Sorry @username_1, I did not copy it the _form.html.erb correctly. Found out the issue. The link_to_add_association was wrapped in another div. Removed that div and now it's working fine. The previous hierarchy was #courses > .links > .another-div > newly-added-item Now, the hierarchy is #courses > .links > newly-added-item username_1: Ok, great you solved it. Still a bit weird, because events just bubble up, and as long as you define the event-handler on a outer-most level (e.g. `#courses`) it should always be handled correctly (no matter how deep). So only if the new item is _not_ contained in `#courses` or maybe you have another event-handler which blocks further propagation could explain this behaviour imho. Anyway :+1: :smile: Status: Issue closed username_0: Not sure @username_1 :) Thanks for the support.
awslabs/amazon-kinesis-producer
96357075
Title: Exception during updateCredentials Question: username_0: Is it possible to avoid this error when calling producer.destroy() to end the program? It doesn't seem to cause any issues, but I always see this:
[pool-1-thread-6] WARN com.amazonaws.services.kinesis.producer.Daemon - Exception during updateCredentials
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at com.amazonaws.services.kinesis.producer.Daemon$5.run(Daemon.java:316)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Answers:
username_1: Yea that's an expected part of the termination procedure. It doesn't indicate any problems. We'll try to get rid of the warning in the next release.
username_2: Is this issue addressed yet?
username_3: If anyone is still experiencing this issue, this fixed it for me: `kinesisProducerConfiguration.setCredentialsRefreshDelay(100)` As I see it, the [thread running updateCredentials](https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer/src/main/java/com/amazonaws/services/kinesis/producer/Daemon.java#L320) gets interrupted. The default delay is 5 seconds, but during shutdown the executor only waits [1 second](https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer/src/main/java/com/amazonaws/services/kinesis/producer/Daemon.java#L511) before interrupting the thread. So setting the delay to something smaller than 1 second should fix the issue.
username_4: Thanks for reporting this. We are investigating and prioritizing the fix for this. For others affected by this, please comment and add reactions to assist us in prioritizing the change.
username_5: @username_3
```scala
kinesisProducerConfiguration.setCredentialsRefreshDelay(100)
```
worked for me.
username_3: @username_5 I'm glad it worked!
username_6: Bug still impacting me.
username_7: +1 to fix this issue
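For readers applying username_3's workaround, a minimal Java sketch; `setCredentialsRefreshDelay` is the setter quoted above, while the wrapper class and the absence of other settings are illustrative:

```java
import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;

public final class ProducerSetup {
    public static KinesisProducer create() {
        KinesisProducerConfiguration config = new KinesisProducerConfiguration();
        // Keep the refresh delay below the ~1 second the daemon's executor
        // waits during shutdown, so destroy() no longer interrupts a
        // sleeping credentials-refresh thread.
        config.setCredentialsRefreshDelay(100);
        return new KinesisProducer(config);
    }
}
```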
openshift/openshift-docs
630376138
Title: [enterprise-4.4] Edit suggested in file authentication/identity_providers/configuring-oidc-identity-provider.adoc Question: username_0:
### Which section(s) is the issue in?
Creating a ConfigMap. Not only OpenID Connect but also the following sections need to be fixed:
- Keystone
- LDAP
- Basic Authentication
- Request Header
- GitHub or GitHub Enterprise
- GitLab
### What needs fixing?
The instruction below is not clear enough, and it is hard to understand what kind of certificate authority bundle is needed when creating a ConfigMap. Does the CA bundle refer to some of the CAs described here [1], or to the CA for the identity provider?
_Identity providers use OpenShift Container Platform ConfigMaps in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider. Define an OpenShift Container Platform ConfigMap containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap._
In the Sample OpenID Connect CRs section, it says the CA bundle is optional, but this needs to be elaborated on; for example, when is the CA bundle required?
_Optional: Reference to an OpenShift Container Platform ConfigMap containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL._
[1] https://docs.openshift.com/container-platform/4.4/authentication/certificate-types-descriptions.html<issue_closed>
Status: Issue closed
type-challenges/type-challenges
1118046522
Title: 3060 - Unshift Question: username_0:
```ts
// your answers
type Unshift<T extends readonly any[], U> = [U, ...T]
```
Benedicht/BestHTTP-Issues
912841207
Title: parsing error Question: username_0: {"tid":1,"div":"WebSocketTransport","msg":"OnMessage Packet parsing","ex": [{"msg": "Can't add a value here", "stack": " at BestHTTP.JSON.LitJson.JsonWriter.DoValidation (BestHTTP.JSON.LitJson.Condition cond) ...... server: socket:io 4 unity 2020.3.x Answers: username_1: Could you share: 1. Version of the plugin ([HTTPManager.UserAgent](https://besthttp-documentation.readthedocs.io/en/latest/#7.GlobalTopics/GlobalSettings/#useragent)) 2. Full log entry 3. If possible the server code that causes this error username_0: 2.5.1 ` public class Competition { public string roomCode { get; set; } public List<Users> users { get; set; } } var options = new SocketOptions {AutoConnect = false}; options.ConnectWith = TransportTypes.WebSocket; options.AdditionalQueryParams = new PlatformSupport.Collections.ObjectModel.ObservableDictionary<string, string>(); options.AdditionalQueryParams.Add("token", token); manager = new SocketManager(new Uri(URL), options); manager.Socket.On<Competition>("competition", OnCompetition); manager.Socket.On("ready", OnReady); manager.Socket.On<int, string, string>("words", OnWords); manager.Socket.On("start", OnStart); manager.Open(); void OnCompetition(Competition competition) { Debug.Log("OnCompetition"); } ` Unity logs: `{"tid":1,"div":"WebSocketTransport","msg":"OnMessage Packet parsing","ex": [{"msg": "Index was out of range. Must be non-negative and less than the size of the collection.\r\nParameter name: index", "stack": " at System.ThrowHelper.ThrowArgumentOutOfRangeException (System.ExceptionArgument argument, System.ExceptionResource resource) [0x00029] in <695d1cc93cca45069c528c15c9fdd749>:0 \r\n at System.ThrowHelper.ThrowArgumentOutOfRangeException () [0x00000] in <695d1cc93cca45069c528c15c9fdd749>:0 \r\n at System.Collections.Generic.List`1[T].get_Item (System.Int32 index) [0x00009] in <695d1cc93cca45069c528c15c9fdd749>:0 \r\n at BestHTTP.SocketIO3.Parsers.DefaultJsonParser.ReadParameters (BestHTTP.SocketIO3.Socket socket, BestHTTP.SocketIO3.Events.Subscription subscription, System.Collections.Generic.List`1[T] array, System.Int32 startIdx) [0x00168] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\SocketIO.3\\Parsers\\DefaultJsonParser.cs:277 \r\n at BestHTTP.SocketIO3.Parsers.DefaultJsonParser.ReadData (BestHTTP.SocketIO3.SocketManager manager, BestHTTP.SocketIO3.IncomingPacket packet, System.String payload) [0x00130] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\SocketIO.3\\Parsers\\DefaultJsonParser.cs:240 \r\n at BestHTTP.SocketIO3.Parsers.DefaultJsonParser.Parse (BestHTTP.SocketIO3.SocketManager manager, System.String from) [0x00237] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\SocketIO.3\\Parsers\\DefaultJsonParser.cs:123 \r\n at BestHTTP.SocketIO3.Transports.WebSocketTransport.OnMessage (BestHTTP.WebSocket.WebSocket ws, System.String message) [0x00059] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\SocketIO.3\\Transports\\WebSocketTransport.cs:133 "}],"stack":" at SocketIO3.Transports.WebSocketTransport.OnMessage (WebSocket.WebSocket ws, System.String message) [0x000a0] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\SocketIO.3\\Transports\\WebSocketTransport.cs:142 \r at WebSocket.OverHTTP1.<OnInternalRequestUpgraded>b__15_0 (WebSocket.WebSocketResponse ws, System.String msg) [0x00013] in 
C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\WebSocket\\Implementations\\OverHTTP1.cs:283 \r at WebSocket.WebSocketResponse.Core.IProtocol.HandleEvents () [0x000a4] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\WebSocket\\WebSocketResponse.cs:537 \r at Core.ProtocolEventHelper.ProcessQueue () [0x00087] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\Core\\ProtocolEvents.cs:69 \r at HTTPManager.OnUpdate () [0x0000d] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\HTTPManager.cs:416 \r at HTTPUpdateDelegator.Update () [0x00020] in C:\\Users\\Wiggle\\Desktop\\Unity\\Freelance\\Can\\LetsDraw2\\Assets\\Best HTTP\\Source\\HTTPUpdateDelegator.cs:171 ","ctxs":[{"TypeName": "SocketManager", "Hash": -793657216}],"t":637586662811783002,"ll":"Exception","bh":1} UnityEngine.Debug:LogError (object) BestHTTP.Logger.UnityOutput:Write (BestHTTP.Logger.Loglevels,string) (at Assets/Best HTTP/Source/Logger/UnityOutput.cs:22) BestHTTP.Logger.ThreadedLogger:ThreadFunc () (at Assets/Best HTTP/Source/Logger/ThreadedLogger.cs:130) BestHTTP.PlatformSupport.Threading.ThreadedRunner/<>c_DisplayClass5_0:<RunLongLiving>b_0 (object) (at Assets/Best HTTP/Source/PlatformSupport/Threading/ThreadedRunner.cs:98) System.Threading.ThreadHelper:ThreadStart (object)` Also iOS debug logs for server-side data: `2021-06-06 14:27:25.643598+0300 SocketTest[31306:16941170] LOG SocketEngine: Got message: 42["competition","{\"roomCode\":\"UKIROF\",\"users\":[{\"id\":83,\"fullName\":\"Emre33333\",\"isPremium\":0,\"countryCode\":\"tr\",\"packetIds\":[1,2,3],\"isBot\":false,\"isOnline\":true}]}"] 2021-06-06 14:27:25.643771+0300 SocketTest[31306:16940941] LOG SocketParser: Parsing 2["competition","{\"roomCode\":\"UKIROF\",\"users\":[{\"id\":83,\"fullName\":\"Emre33333\",\"isPremium\":0,\"countryCode\":\"tr\",\"packetIds\":[1,2,3],\"isBot\":false,\"isOnline\":true}]}"] 2021-06-06 14:27:25.643781+0300 SocketTest[31306:16941170] LOG SocketEngine: Switching to WebSockets 2021-06-06 14:27:25.643917+0300 SocketTest[31306:16941170] LOG SocketEngineWebSocket: Sending ws: as type: 5 2021-06-06 14:27:25.644021+0300 SocketTest[31306:16940941] LOG SocketParser: Decoded packet as: SocketPacket {type: 2; data: [competition, {"roomCode":"UKIROF","users":[{"id":83,"fullName":"Emre33333","isPremium":0,"countryCode":"tr","packetIds":[1,2,3],"isBot":false,"isOnline":true}]}]; id: -1; placeholders: -1; nsp: /}` username_1: I think the `competition` event is ok, one of your other event handlers might have more arguments than what the server sends. username_0: OnCompetition is not working username_1: I used the following code and it works completely fine. 
Client: ```cs public sealed class Users { public int id; public string fullName; public int isPremium; public string countryCode; public int[] packetIds; public bool isBot; public bool isOnline; } public sealed class Competition { public string roomCode { get; set; } public List<Users> users { get; set; } } manager.Socket.On<Competition>("competition", OnCompetition); private void OnCompetition(Competition comp) { Debug.Log($"OnCompetition: {comp.roomCode}, {comp.users.Count}"); } ``` Server: ```js socket.emit('competition', { roomCode: "UKIROF", users: [ { id: 83, fullName: "Emre33333", isPremium: 0, countryCode: "tr", packetIds: [1, 2, 3], isBot: false, isOnline: true } ] }); ``` username_0: ![image](https://user-images.githubusercontent.com/3260696/121180544-a462ed00-c869-11eb-8473-2dc22dfe751a.png) username_0: OK we found the problem. Server was sending with JSON.stringify. We fixed by sending as objects. Status: Issue closed
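For anyone hitting the same parser error, the resolution above boils down to this server-side change; `payload` is a placeholder for the competition object:

```js
// Broken: the client's JSON parser receives a string where it expects an
// object, which triggers the LitJson/DefaultJsonParser exceptions above.
socket.emit('competition', JSON.stringify(payload));

// Fixed: emit the object itself and let Socket.IO serialize it.
socket.emit('competition', payload);
```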
sopra-fs21-group-05/group-05-client
889992326
Title: Create Separate Winner Page Question: username_0:
- [ ] The winners should be declared on a separate page, not the scoreboard
- [ ] Once this page is loaded, the scores should not change
- [ ] Redirects to this page should be triggered by the GameRoom Creator

Time estimate: 3h<issue_closed>
Status: Issue closed
allegro/turnilo
350328849
Title: E2E smoke tests to avoid regressions on popular browsers Question: username_0: As a Selenium runtime I recommend https://www.browserstack.com/open-source
Browsers: latest Chrome, Firefox, Safari, and Edge
Systems: Windows 10 and latest Mac
Scenarios: main page and report pages for every visualisation type (using the wikipedia example)
Assertions: no JS errors, the most important components rendered
Answers:
username_0: One more tool for the testing toolbox: https://github.com/cypress-io/cypress
Status: Issue closed
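Building on the Cypress suggestion, one possible shape for such a smoke test; the base URL, cube name, and selector are hypothetical placeholders, and Cypress already fails a test on uncaught page exceptions, which covers the no-JS-errors assertion:

```ts
describe('Turnilo smoke test', () => {
  it('renders the wikipedia example cube', () => {
    cy.visit('/');                             // hypothetical: served Turnilo root
    cy.contains('wikipedia').click();          // hypothetical: example cube link
    cy.get('.visualization').should('exist');  // hypothetical: chart container
  });
});
```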
Angryquacker/f1-3-c2p1-colmar-academy
251495885
Title: SUMMARY Question: username_0: **Grade**: Needs Improvement **Summary**: Your code has some large issues that need fixing, mostly with the core functionality of resizing. For example your header, try breaking the mobile and desktop items into their own separate divs, and activate / deactivate as needed in your media only section. Then you can work to make the transitions smoother by using more auto margins, or insert an in between "tablet" step, whatever ends up working best for you. Once your resizing kinks are worked out, make sure to try adding some personal flair to the site. You can check out this site to get some ideas of modern web design patterns. http://www.webdesign-inspiration.com/web-designs/style/modern
QualiSystems/vCenterShell
138787687
Title: Error when trying to deploy 2 apps from template connected by CVC Question: username_0: Error when trying to deploy 2 apps from template connected by CVC. Watch the attached gif. This is unstable, but should be reproduced after 2-3 attempts. Answers: username_1: It happens because of a race condition. We should have an internal discussion about this... username_1: fixed Status: Issue closed username_2: Duplicate #416 Closing...
jesus2099/konami-command
312909707
Title: Some MBS links are becoming relative (again?) Question: username_0: Some MBS links are becoming relative (again?). Spotted by @username_1 in https://chatlogs.metabrainz.org/brainzbot/musicbrainz/msg/4157956/ It’s only broken in `beta.` now so I’ll wait for live deploy before fixing as it has already came back and forth in the past (#155, #156). Answers: username_1: Yvanzo says it's because there's a new beta release with some stuff changed to react... <username_2> Current beta contains a lot of template rewrite from TT to React, thus the small display bugs like the one spotted by mfeulenbelt and username_1 (MBS-9672). username_1: username_2: MajorLurker: updated beta username_2: However, it still contains small layout changes that might possibly break some userscripts. Such breakage should be reported to userscript authors. username_2: I confirm that internal links will be turned into relative links as we rewrite UI to React/JSX. Beyond that, there might be unintentional changes. Please ping me if you encounter any other glitch. username_0: Thanks. :) username_3: I believe this [has gone live now](https://blog.musicbrainz.org/2018/04/23/server-update-2018-04-23/). username_0: https://community.metabrainz.org/t/script-super-mind-control-x-turbo-isnt-working-anymore/376885?u=username_0
rajaramtt/ng7-dynamic-breadcrumb
604853122
Title: .updateBreadcrumb() not working after first call Question: username_0: I am developing a website with the i18n module to translate between two languages. This module doesn't support translation, so I tried to work around it using the .updateBreadcrumb() method. When I enter a new page, updateBreadcrumb is called and updates the breadcrumb fine. But when I change the language in the menu I made, a language change event is triggered and I call the .updateBreadcrumb method again (with the labels in the other language), but this time the breadcrumb doesn't change and keeps the old language. If I change to another page, the breadcrumb updates to the new language selected before. But if I change the language again through the menu and call .updateBreadcrumb again, nothing happens.
Status: Issue closed
Answers:
username_2: This is still happening in the latest version; will you include a fix at any point?
starwing/lua-protobuf
294996430
Title: Error when deserializing a binary stream coming from C# Question: username_0:
```
type mismatch at offset 5, varint expected for type int32, got bytes
```
What direction should I take to debug this error?
Answers:
username_0: I roughly know the cause now. In proto3, `repeated` fields default to `[packed=true]`, but lua-protobuf doesn't handle this. Manually adding [packed=true] in the proto works around it.
username_1: Fixed in HEAD.
Status: Issue closed
username_2: I tried adding `[packed=true]` in the proto and ran into a new problem:
![image](https://user-images.githubusercontent.com/40486759/117826499-c71dd800-b2a2-11eb-98b7-32a29ed6f576.png)
username_1: @username_2 Check your lua-protobuf version and use the latest if possible; the latest no longer needs packed.
username_3: @username_1 Do you have any idea about this? What should I do?
username_1: @username_3 If it works on Windows, it will definitely work on Android too, unless the versions differ, because the code is fully cross-platform. I suggest checking the diff of these three files -- pb.c, pb.h, and protoc.lua -- against HEAD.
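For anyone pinned to an older lua-protobuf, the workaround described above looks like this in the .proto; the message and field names are illustrative:

```proto
syntax = "proto3";

message Payload {
  // Declare the packed wire format explicitly so older lua-protobuf
  // decodes what a proto3 C# serializer already emits by default.
  repeated int32 values = 1 [packed = true];
}
```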
aboustati/at_gp
450186683
Title: About your code in final part Question: username_0: Hi Ayman, Thanks for sharing the code online. I really like your code. I am new to TensorFlow and GPflow. After reading your code, I am a bit confused about the final part of it:

    Knn = self.kern.K(Xnew) if full_cov else self.kern.Kdiag(Xnew)
    f_mean, f_var = base_conditional(K_new, C, Knn, y, full_cov=full_cov, white=False)
    return f_mean + self.mean_function(Xnew), f_var

When reading the paper Adaptive Transfer Learning (http://www3.ntu.edu.sg/home/sinnopan/publications/[AAAI10]Adaptive%20Transfer%20Learning.pdf), the author may add some variance to the Knn in your code. I slightly changed your code to add the variance:

    Knn = self.kern.K(Xnew) + tf.eye(tf.shape(Xnew)[0], dtype=settings.float_type)\
        * self.likelihood.variance if full_cov else self.kern.Kdiag(Xnew)\
        + self.source_likelihood.variance
    f_mean, f_var = base_conditional(K_new, C, Knn, y, full_cov=full_cov, white=False)
    return f_mean + self.mean_function(Xnew), f_var

I am not sure how GPflow works in detail, so I am not sure whether we need to add the variance in this way. Could you please clarify it? Thanks in advance.
Answers:
username_1: Hi Chunchao, Are you referring to eq (7) in the paper? This is the predictive distribution `p(y* | y_t, y_s) = \int p(y* | f*) p(f* | y_t, y_s) df*`. In the code, `build_predict` builds the tensor for `p(f* | y_t, y_s)`, not the predictive distribution. You have to integrate the output with the likelihood to get the predictive distribution. You can do this by calling `predict_y`. Let me know if this is helpful/answers your question. Best wishes, Ayman
username_0: Hi Ayman, Great! Thanks for your help. :) Yes, I am referring to eq (7). I previously considered build_predict to be the predictive distribution. Thanks for your clarification. Best wishes, Chunchao
Status: Issue closed
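To make the distinction concrete, a small sketch using GPflow-style calls; `model` and `Xnew` are placeholders:

```python
# Posterior over the latent function, p(f* | y_t, y_s):
# this is what build_predict constructs.
f_mean, f_var = model.predict_f(Xnew)

# Predictive distribution over observations, p(y* | y_t, y_s), eq (7):
# predict_y integrates the likelihood in, so for a Gaussian likelihood
# y_var equals f_var plus the likelihood's noise variance.
y_mean, y_var = model.predict_y(Xnew)
```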
flutter/flutter
962183597
Title: flutter conductor should pretty print its state file Question: username_0: Currently, the conductor's persistent state file is JSON that is serialized compactly, without extraneous whitespace. This makes it difficult to edit should a manual change be needed. The file should be pretty-printed, with 2-space indentation.<issue_closed> Status: Issue closed
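One way to produce the 2-space-indented form with `dart:convert`; a sketch, not necessarily how the conductor serializes its state:

```dart
import 'dart:convert';

/// Pretty-prints the conductor's state map with 2-space indentation.
String prettyPrintState(Map<String, Object?> state) {
  const encoder = JsonEncoder.withIndent('  ');
  return encoder.convert(state);
}
```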
spring-cloud/spring-cloud-function
726601220
Title: Spring Cloud Function can not deserialize multipart file types Question: username_0: **Describe the bug** Trying to write a function like the following:
```
@Bean
public Function<MultipartFile, String> uppercase() {
    return value -> value.getOriginalFilename();
}
```
This function will be deployed as an AWS Lambda, which will be triggered by API Gateway. The API Gateway endpoint exposes a POST request of type multipart/form-data. I am getting the following error:
```
Caused by: com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of org.springframework.web.multipart.MultipartFile (no Creators, like default constructor, exist): abstract types either need to be mapped to concrete types, have custom deserializer, or contain additional type information
```
Answers:
username_0: Hi, Is there any update on this issue?
Status: Issue closed
username_1: The support for MultipartFile has been fixed. You can get more details from the two test cases that are part of the fix. Please let us know if you believe something is missing.
A7ocin/FaceAnimator
249358983
Title: Maya side? Question: username_0: Hi! This looks like a very interesting project. Sending dlib landmarks to maya is cool. :) What do you have on the maya side? Is there an already rigged face? A python script to connect to the stream? Thanks! Answers: username_1: Hello there! Thank you for your interest. I just updated the README in order to make everyone capable of testing the tool. I included the rigged face model as well as the MEL script for the connection. Again, thank you for your interest, please let me know if you succeed executing the tool or if you encounter any issue. Any suggestion is really appreciated. Nicola. username_0: Thank you again! I have it working. :) Once I have used it a bit more I will report back. Status: Issue closed
nhmo-feedback/o_herpetiles
752524258
Title: Monthly VertNet data use report for 2020-7, resource o_herpetiles Question: username_0: Your monthly VertNet data use report is ready! You can see the HTML rendered version of this report at: http://tools-usagestats.vertnet-portal.appspot.com/reports/0d8f74b3-42b9-488f-9ec1-1faefaff1d1c/202007/ Raw text and JSON-formatted versions of the report are also available for download from this link. A copy of the text version has also been uploaded to your GitHub repository under the "reports" folder at: https://github.com/nhmo-feedback/o_herpetiles/tree/master/reports A full list of all available reports can be accessed from: http://tools-usagestats.vertnet-portal.appspot.com/reports/0d8f74b3-42b9-488f-9ec1-1faefaff1d1c/ You can find more information on the reporting system, along with an explanation of each metric, at: http://www.vertnet.org/resources/usagereportingguide.html Please post any comments or questions to: http://www.vertnet.org/feedback/contact.html Thank you for being a part of VertNet.
Canadensys/vascan-data
67715206
Title: authority of synonym Question: username_0: [Originally posted on GoogleCode (id 943) on 2012-01-16] <b>(This is the template to report a data issue for Vascan. If you want to report another issue, please change the template above.)</b> <b>What is the URL of the page where the problem occurs?</b> http://data.canadensys.net/vascan/taxon/5063 <b>What data are incorrect or missing?</b> Carex rupestris var. drummondiana L.H. Bailey <b>What data are you expecting instead?</b> Carex rupestris var. drummondiana (Dewey) L.H. Bailey <b>If applicable, please provide an authoritative source.</b> Unless this is a different taxon, Dewey's name precedes Bailey's: Carex drummondiana Dewey (1836), C. rupestris var. drummondiana (Dewey) L.H.Bailey (1884). Tropicos has C.r. var. drummondiana L.H.Bailey, but it's not in FNA or IPNI, so I'm not sure this is the same taxon, but it likely is, given that C. drummondiana Dewey is listed as a syn. of C. rupestris in other sources.<issue_closed> Status: Issue closed
googleapis/google-cloud-go
1032307803
Title: bigquery: How to run a simple query without a filter like '_PARTITION_LOAD_TIME', '_PARTITIONDATE', '_PARTITIONTIME'? Question: username_0: Hi, sorry for asking this question here, I am new to BigQuery. I am trying to run a simple query in Go as below:
```
q := bq.BQClient.Query(`select count(*) from projectId.datasetId.container_event`)
it, err := q.Read(ctx)
if err != nil {
    fmt.Println(err)
}
var values []bigquery.Value
err = it.Next(&values)
if err != nil {
    fmt.Println(err)
}
fmt.Println(values)
```
but I always get this error: `googleapi: Error 400: Cannot query over table 'projectId.datasetId.container_event' without a filter over column(s) '_PARTITION_LOAD_TIME', '_PARTITIONDATE', '_PARTITIONTIME' that can be used for partition elimination, invalidQuery`
Actually, I don't want to use any filter; is it mandatory to use filters like `'_PARTITION_LOAD_TIME', '_PARTITIONDATE', '_PARTITIONTIME'`? Thanks in advance
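For readers hitting the same 400: the message means the table was created with `require_partition_filter=true`, so a filter on a partition (pseudo-)column is mandatory until that table option is changed. A sketch of both routes, reusing the client from the question:

```go
// Option 1: satisfy the requirement with an ingestion-time filter, which
// also lets BigQuery prune partitions and scan less data.
q := bq.BQClient.Query(
    "SELECT COUNT(*) FROM `projectId.datasetId.container_event` " +
        "WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)")
it, err := q.Read(ctx)
// ...iterate as in the original snippet...

// Option 2 (run once as DDL, if a full unfiltered scan is really wanted):
//   ALTER TABLE `projectId.datasetId.container_event`
//   SET OPTIONS (require_partition_filter = false);
```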
NetAppDocs/ontap
1113842214
Title: Facts not in evidence: creating audit directories in ONTAP Question: username_0: Page: [How the ONTAP auditing process works](https://docs.netapp.com/us-en/ontap/nas-audit/auditing-process-concept.html) This page refers casually to an audit directory needing to exist before auditing can be created but doesn't discuss how to create one, let alone if there are issues about choosing where to create one. That seems worth a hyperlink to an article on those nuances. Answers: username_1: Thanks for pointing this out -- I'll look into it ASAP. username_1: We have updated the topic mentioned above, and one additional one. Planning an auditing configuration https://docs.netapp.com/us-en/ontap/nas-audit/plan-auditing-config-concept.html#parameters-common-to-all-auditing-configurations From the latter topic: "Log destination path ... Specifies the directory where the converted audit logs are stored, typically a dedicated volume or qtree." Your suggestion was welcome; apparently others have had questions about this before. From a community discussion: https://community.netapp.com/t5/ONTAP-Discussions/How-to-create-a-destination-for-audit-logging-in-clustermode-NetApp-Release-8-3/m-p/125416 Thanks again for contacting us. Status: Issue closed
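Until the docs add that link, a sketch of the usual CLI flow: create a dedicated volume, then point the audit configuration's log destination at its junction path. The names, size, and aggregate are placeholders; verify the exact parameters against the ONTAP command reference rather than treating this as authoritative:

```
cluster1::> volume create -vserver vs1 -volume audit_vol -aggregate aggr1 -size 2GB -junction-path /audit_log
cluster1::> vserver audit create -vserver vs1 -destination /audit_log
cluster1::> vserver audit enable -vserver vs1
```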
asecurityteam/awsconfig-transformerd
1053126391
Title: Does not work on M1 Mac or ARM based machines Question: username_0: make dep gives warning: ``` WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested ... make test fails with: ``` WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested # crypto/x509/pkix qemu: uncaught target signal 11 (Segmentation fault) - core dumped # github.com/asecurityteam/serverfull qemu: uncaught target signal 11 (Segmentation fault) - core dumped ? github.com/asecurityteam/awsconfig-transformerd [no test files] ? github.com/asecurityteam/awsconfig-transformerd/pkg/domain [no test files] ? github.com/asecurityteam/awsconfig-transformerd/pkg/handlers [no test files] === RUN TestMissingRequiredFields === RUN TestMissingRequiredFields/missing-accountID === RUN TestMissingRequiredFields/missing-region === RUN TestMissingRequiredFields/missing-time === RUN TestMissingRequiredFields/missing-resource-type === RUN TestMissingRequiredFields/empty_tags === RUN TestMissingRequiredFields/valid --- PASS: TestMissingRequiredFields (0.01s) --- PASS: TestMissingRequiredFields/missing-accountID (0.00s) --- PASS: TestMissingRequiredFields/missing-region (0.00s) --- PASS: TestMissingRequiredFields/missing-time (0.00s) --- PASS: TestMissingRequiredFields/missing-resource-type (0.00s) --- PASS: TestMissingRequiredFields/empty_tags (0.00s) --- PASS: TestMissingRequiredFields/valid (0.00s) === RUN TestTransformELB === RUN TestTransformELB/elb-created === RUN TestTransformELB/elb-deleted === RUN TestTransformELB/elbv2-created === RUN TestTransformELB/elbv2-deleted === RUN TestTransformELB/elbv2-created-notags === RUN TestTransformELB/elb-malformed === RUN TestTransformELB/elbv2-malformed-tags --- PASS: TestTransformELB (0.12s) --- PASS: TestTransformELB/elb-created (0.03s) --- PASS: TestTransformELB/elb-deleted (0.02s) --- PASS: TestTransformELB/elbv2-created (0.01s) --- PASS: TestTransformELB/elbv2-deleted (0.01s) --- PASS: TestTransformELB/elbv2-created-notags (0.01s) --- PASS: TestTransformELB/elb-malformed (0.02s) --- PASS: TestTransformELB/elbv2-malformed-tags (0.01s) === RUN TestTransformELBTagUpdate === RUN TestTransformELBTagUpdate/elb-updated === RUN TestTransformELBTagUpdate/elbv2-updated --- PASS: TestTransformELBTagUpdate (0.02s) --- PASS: TestTransformELBTagUpdate/elb-updated (0.01s) --- PASS: TestTransformELBTagUpdate/elbv2-updated (0.01s) === RUN TestELBTransformerCreate === RUN TestELBTransformerCreate/elb-unmarshall-error === RUN TestELBTransformerCreate/elb-missing-value --- PASS: TestELBTransformerCreate (0.00s) --- PASS: TestELBTransformerCreate/elb-unmarshall-error (0.00s) --- PASS: TestELBTransformerCreate/elb-missing-value (0.00s) === RUN TestELBTransformerUpdate === RUN TestELBTransformerUpdate/elb-missing-value --- PASS: TestELBTransformerUpdate (0.00s) [Truncated] --- PASS: Test_extractTagChanges/malformed (0.00s) --- PASS: Test_extractTagChanges/both_nil (0.00s) --- PASS: Test_extractTagChanges/tags_included (0.00s) PASS coverage: 95.0% of statements in github.com/asecurityteam/awsconfig-transformerd/pkg/domain, github.com/asecurityteam/awsconfig-transformerd/pkg/handlers, github.com/asecurityteam/awsconfig-transformerd/pkg/handlers/v1, github.com/asecurityteam/awsconfig-transformerd/pkg/logs ok github.com/asecurityteam/awsconfig-transformerd/pkg/handlers/v1 0.668s coverage: 95.0% of statements in 
github.com/asecurityteam/awsconfig-transformerd/pkg/domain, github.com/asecurityteam/awsconfig-transformerd/pkg/handlers, github.com/asecurityteam/awsconfig-transformerd/pkg/handlers/v1, github.com/asecurityteam/awsconfig-transformerd/pkg/logs ? github.com/asecurityteam/awsconfig-transformerd/pkg/logs [no test files] FAIL make: *** [test] Error 2 ``` make run fails with: ``` Attaching to awsconfig-transformerd_app_1, awsconfig-transformerd_gateway_1 app_1 | panic: dial udp: lookup statsd on 127.0.0.11:53: no such host app_1 | app_1 | goroutine 1 [running]: app_1 | main.main() app_1 | /go/src/github.com/asecurityteam/awsconfig-transformerd/main.go:31 +0x315 ```
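A possible way to sidestep the qemu crash (not discussed in the thread) is to run the Go toolchain natively on arm64 instead of inside the amd64 image the Makefile targets use; a sketch that ignores any extra environment the Makefile sets:

```sh
# Native arm64 run; the standard Go toolchain supports darwin/arm64, so the
# compiler no longer segfaults the way it does under qemu emulation.
go mod download
go test ./...
go build ./...
```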
ContinuumIO/anaconda-issues
247608863
Title: Navigator Error Question: username_0: Navigator Error An unexpected error occurred on Navigator start-up Report Please report this issue in the anaconda issue tracker Main Error byte indices must be integers or slices, not str Traceback Traceback (most recent call last): File "D:\Anaconda3\lib\site-packages\anaconda_navigator\exceptions.py", line 75, in exception_handler return_value = func(*args, **kwargs) File "D:\Anaconda3\lib\site-packages\anaconda_navigator\app\start.py", line 115, in start_app window = run_app(splash) File "D:\Anaconda3\lib\site-packages\anaconda_navigator\app\start.py", line 58, in run_app window = MainWindow(splash=splash) File "D:\Anaconda3\lib\site-packages\anaconda_navigator\widgets\main_window.py", line 160, in __init__ self.api = AnacondaAPI() File "D:\Anaconda3\lib\site-packages\anaconda_navigator\api\anaconda_api.py", line 1205, in AnacondaAPI ANACONDA_API = _AnacondaAPI() File "D:\Anaconda3\lib\site-packages\anaconda_navigator\api\anaconda_api.py", line 65, in __init__ self._conda_api = CondaAPI() File "D:\Anaconda3\lib\site-packages\anaconda_navigator\api\conda_api.py", line 1622, in CondaAPI CONDA_API = _CondaAPI() File "D:\Anaconda3\lib\site-packages\anaconda_navigator\api\conda_api.py", line 340, in __init__ self.set_conda_prefix() File "D:\Anaconda3\lib\site-packages\anaconda_navigator\api\conda_api.py", line 489, in set_conda_prefix self.ROOT_PREFIX = info['root_prefix'] TypeError: byte indices must be integers or slices, not str Status: Issue closed Answers: username_1: **See Issue #1837 for more information on how to fix this.** --- Closing as duplicate of #1837 --- Please remember to update to the latest version of Navigator to include the latest fixes. Open a terminal (on Linux or Mac) or the Anaconda Command Prompt (on windows) and type: ``` $ conda update anaconda-navigator ```
LBNL-UCB-STI/beam
583252673
Title: CaccSpec: Exception during the validation of the run Question: username_0: https://github.com/LBNL-UCB-STI/beam/blob/develop/src/test/scala/beam/sflight/CaccSpec.scala fails with ``` The line does not contain 'car' as TravelTimeMode java.lang.IllegalStateException: The line does not contain 'car' as TravelTimeMode at beam.sflight.CaccSpec$.$anonfun$avgCarModeFromCsv$3(CaccSpec.scala:103) at scala.Option.getOrElse(Option.scala:189) at beam.sflight.CaccSpec$.avgCarModeFromCsv(CaccSpec.scala:103) at beam.sflight.CaccSpec.runSimulationAndReturnAvgCarTravelTimes(CaccSpec.scala:61) at beam.sflight.CaccSpec.$anonfun$new$2(CaccSpec.scala:79) at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85) at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83) at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104) at org.scalatest.Transformer.apply(Transformer.scala:22) at org.scalatest.Transformer.apply(Transformer.scala:20) at org.scalatest.WordSpecLike$$anon$3.apply(WordSpecLike.scala:1075) at org.scalatest.TestSuite.withFixture(TestSuite.scala:196) at org.scalatest.TestSuite.withFixture$(TestSuite.scala:195) at beam.sflight.CaccSpec.withFixture(CaccSpec.scala:19) at org.scalatest.WordSpecLike.invokeWithFixture$1(WordSpecLike.scala:1073) at org.scalatest.WordSpecLike.$anonfun$runTest$1(WordSpecLike.scala:1085) at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286) at org.scalatest.WordSpecLike.runTest(WordSpecLike.scala:1085) at org.scalatest.WordSpecLike.runTest$(WordSpecLike.scala:1067) at beam.sflight.CaccSpec.runTest(CaccSpec.scala:19) at org.scalatest.WordSpecLike.$anonfun$runTests$1(WordSpecLike.scala:1144) at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393) at scala.collection.immutable.List.foreach(List.scala:392) at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381) at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:370) at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:407) at scala.collection.immutable.List.foreach(List.scala:392) at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381) at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376) at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458) at org.scalatest.WordSpecLike.runTests(WordSpecLike.scala:1144) at org.scalatest.WordSpecLike.runTests$(WordSpecLike.scala:1143) at beam.sflight.CaccSpec.runTests(CaccSpec.scala:19) at org.scalatest.Suite.run(Suite.scala:1124) at org.scalatest.Suite.run$(Suite.scala:1106) at beam.sflight.CaccSpec.org$scalatest$WordSpecLike$$super$run(CaccSpec.scala:19) at org.scalatest.WordSpecLike.$anonfun$run$1(WordSpecLike.scala:1189) at org.scalatest.SuperEngine.runImpl(Engine.scala:518) at org.scalatest.WordSpecLike.run(WordSpecLike.scala:1189) at org.scalatest.WordSpecLike.run$(WordSpecLike.scala:1187) at beam.sflight.CaccSpec.org$scalatest$BeforeAndAfterAll$$super$run(CaccSpec.scala:19) at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213) at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210) at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208) at beam.sflight.CaccSpec.run(CaccSpec.scala:19) at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45) at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1349) at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1343) at scala.collection.immutable.List.foreach(List.scala:392) at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1343) at 
org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:1033) at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:1011) at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1509) at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1011) at org.scalatest.tools.Runner$.run(Runner.scala:850) at org.scalatest.tools.Runner.run(Runner.scala) at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2(ScalaTestRunner.java:133) at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:27) ``` Answers: username_1: Closed by https://github.com/LBNL-UCB-STI/beam/pull/2422 Status: Issue closed
ropensci/RNeXML
40859616
Title: Other options for `nexml_publish()` Question: username_0: Seems that figshare is the only option right now. Correct @username_2 ? * Perhaps we can include a options for github, either to repo, gist, or both? * I imagine Dryad doesn't accept submissions programmatically? Are there other places folks may want to share nexml files? Answers: username_1: Just as an FYI to everyone, [DataONE issue 6068](https://redmine.dataone.org/issues/6068) has been implemented and closed. Status: Issue closed username_2: Closing as NeXML is now a recognized type in DataONE. I think it would make sense that a user would upload to DataONE directly via the `dataone` package, I don't believe that the uploading process should really be in the scope of RNeXML.
ankane/ahoy
352369037
Title: Authenticate hook unexpectedly wrapping user param in hash Question: username_0: Hi, I encountered a bewildering issue when using the ahoy authenticate hook as specified in my initializer: ``` def authenticate(user) if visit.present? && visit.user.nil? visit.update(user: user) end super end ``` This failed due to the user I passed into the method being wrapped in a hash like so: https://github.com/username_1/ahoy/blob/master/lib/ahoy/tracker.rb#L89 I was able to resolve the issue by changing my initializer to the following: ``` def authenticate(user_hash) if visit.present? && visit.user.nil? visit.update(user_id: user_hash[:user_id]) end super end ``` May I ask why the user param given is wrapped? If this is necessary, it would be helpful if the documentation included this gotcha. Thanks! Answers: username_1: The `authenticate` method was designed to pass a model, not a hash. Are you manually calling `authenticate` with a hash? username_0: I am manually calling the `authenticate` method that I have defined in an initializer, passing a `User` model. When I call `ahoy.authenticate(user)` from my application_controller it seems [this `authenticate` method](https://github.com/username_1/ahoy/blob/master/lib/ahoy/tracker.rb#L83) is receiving the call and wrapping the user in a `data` hash before forwarding the call to the `authenticate` method I have defined in an initializer. username_1: Ah, the store method takes a `data` hash, not `user`, to be consistent with the other store methods and allow it accept more data, like the visit token.
ethercreative/seo
366016754
Title: Global description
Question: username_0: I think there should be an option in the settings for a global meta description value. This value would be used if no description is provided for a specific entry.
Answers: username_1: +1 for this. I'd assume this is possible with a custom meta template, but I'm not sure exactly how to get that working as it's not detailed very well in the readme.
username_2: Hi, I have used a custom meta template for this. I added the following folder/file in `templates`: `_seo/meta.twig`

Then copied the default template from here:
[https://github.com/ethercreative/seo/blob/v3/src/templates/_seo/meta.twig](https://github.com/ethercreative/seo/blob/v3/src/templates/_seo/meta.twig)

And added the following if statement around the description (you can do the same around the facebook and twitter ones too):

```
{% if seo.description == '' %}
	<meta name="description" content="I am the global description" />
{% else %}
	<meta name="description" content="{{ seo.description }}" />
{% endif %}
```

Then in the settings add `_seo/meta.twig` to the _SEO Meta Template_ field
username_3: This is resolved with our upcoming dynamic meta feature. It's currently in beta, so if you get a chance we'd really appreciate some testers! See [[Beta] Dynamic Meta](#167) for more info.
Status: Issue closed
jitpack/jitpack.io
479182040
Title: A GARBAGE APPLICATIOOOOON, IT DOESN'T WOOOOOOORK!!!!!!!!!!!!
Question: username_0: Please provide:
- Link to build log from https://jitpack.io
- Does the project build on your machine with **the same** commands (e.g. ./gradlew install)?
- What error are you seeing?

Thank you!
Answers: username_0: Ok, it's not that big a deal, although it doesn't work and isn't good for anything.
Status: Issue closed
Collegeville/VirtualTeams
654803889
Title: The Impact of Virtual Reality Experience on Accessibility of Cultural Heritage (2019) Question: username_0: This [Uploading IMPACT_OF_VIRTUAL_REALITY_EXPERIENCE_ON_ACCESSIBIL.pdf…](article) was a pretty interesting read. Using VR, in this case, for a cultural heritage site showed that it's participants "experienced relevant increase in their interest in the temple and in the recognition of the importance of conserving it". The study shed's light on how to use VR in creating awareness, and possibly how it could create awareness in the culture of different members in a virtual team. Though the study did not particularly talk about this (awareness of cultural teams with VR), the use of VR could improve awareness of other team members culture, or places they go on the weekend. Imagine being able to wear a VR set with virtual team members and show them where you went during the weekend. This could help build relationships and improve team trust, since it build's on the dimensions of affective trust.
dotnet/wcf
340839452
Title: NetSuite WSDL Not Working Question: username_0: I've tried using this tool (Add Connected Service, Microsoft WCF Web Service Reference Provider into a .Net Core 2.1 console application) but it does not bring in the NetSuite WSDL service correctly. https://webservices.netsuite.com/wsdl/v2017_2_0/netsuite.wsdl Importing this in a .NET Framework WinForm application via the 'Add Service Reference, Advanced, Add Web Reference' works fine and I can access the NetSuite service with the following line of code. `public NetSuiteService service = new NetSuiteService();` Any idea what is going on here? Answers: username_1: Hi @username_0 Thank you for reporting this issue. Can you please provide more details about the problem? what is not correct? Also, please let us know the version of Visual Studio 2017 you are using. thanks, username_1: Observe that the generated client name is NetSuitePortTypeClient and not NetSuiteService ... Status: Issue closed username_0: Hi Miguel, sorry I missed the name change. I have found this now and will do some testing. Thanks for your clarification on this! username_2: I have a related problem. On the old NetSuiteService type there was an add method that had a Record parameter and returned a WriteResponse object. This is in line with NetSuite's documentation. Consuming the service using WCF, I now see the new NetSuitePortTypeClient object and it has an add method, but it now has this signature: public DocumentInfo add(Passport passport, TokenPassport tokenPassport, ApplicationInfo applicationInfo, PartnerInfo partnerInfo, Preferences preferences, Record record, out WriteResponse writeResponse) Is there any way to have WCF generate objects and methods in the way that old style web references did?
processing/p5.js-website
257329788
Title: Custom library build download
Question: username_0: Now that a modular build of the library can be done on the command line (https://github.com/processing/p5.js/pull/2051), would this be the time to look into integrating that into the website itself (i.e. [Modernizr](https://modernizr.com/download#setclasses) style)? That should greatly help adoption of modularized p5.js. Unless, of course, there is still more work to be done on the modularization first?
Answers: username_1: yes definitely! I think the first step is to mock up and refine the website interface and determine the backend functionality hooks we need based on that
gluster/project-infrastructure
794080390
Title: Gluster regression jobs are not running with IPv6 - missing deps (libtirpc-devel)
Question: username_0: Perhaps we don't test IPv6 yet (we should, but that's a separate topic), but right now we are not even building it properly. From https://build.gluster.org/view/Regressions/job/regression-test-burn-in/5493/consoleText:
```
Please proceed with configuring, compiling, and installing.

features requiring zlib enabled: yes
configure: WARNING:
---------------------------------------------------------------------------------
libtirpc (and/or ipv6-default) were enabled but libtirpc-devel is not installed. Disabling libtirpc and ipv6-default and falling back to legacy glibc rpc headers. This is a transitional warning message. Eventually it will be an error message.
---------------------------------------------------------------------------------
```
Status: Issue closed
Answers: username_1: Ok, so the closure of the ticket was automatic: efficient, but not very polite. If the fix is not working, feel free to reopen it.

As for testing with IPv6: we can enable IPv6, but it was explicitly disabled as it was breaking the test suite. I would welcome a change here, especially since we have had more and more problems getting rid of IPv6 for tests.
username_1: So, we think this change is the cause of the georep failure we have been seeing:
- the change occurred at the same time as the failure
- the only builder that is not causing trouble is the one where package deployment failed
- the softserve instance didn't have that package
- debugging the whole test suite showed errors around RPC, for example:
```
17:12:32 [2021-02-02 17:12:32.174458 +0000] W [xdr-rpcclnt.c:68:rpc_request_to_xdr] 0-rpc: failed to encode call msg
```
So I am going to revert the change, remove the package, and reopen the bug.
ipfs/fs-repo-migrations
238347584
Title: Release blocker: test suite broken
Question: username_0: The merge of #53 was the first one breaking the test suite. I fixed the first error in f1d82ebab2adc55723c8673684802265824e4fce, but there are more, and they are more complicated :( This blocks any further fs-repo-migrations releases.
Status: Issue closed
Puterism/TheDiaryDot-Client
374646037
Title: Improve animation handling in the Message component
Question: username_0: https://kr.vuejs.org/v2/guide/transitions.html
Rework the animation handling using Vue transitions.
Answers: username_0: Finished at https://github.com/username_0/TheDiaryDot-Client/commit/b178aa353a6a53d0cd0b77a4180e73c629992bd2
TODO: remove the setTimeout calls
username_0: https://github.com/username_0/TheDiaryDot-Client/commit/98411cbab690a0ebfbd2a23d82b9d88ed8142cc4
Resolved by applying the Velocity library
Status: Issue closed
google/j2objc
66129159
Title: String length: Parse Issue: Expected identifier or '(' Question: username_0: Version 0.9.6.1 has trouble processing this file: https://github.com/myabc/markdownj/blob/master/core/src/main/java/org/markdownj/CharacterProtector.java Specifically, this: private String longRandomString() { StringBuilder sb = new StringBuilder(); final int CHAR_MAX = GOOD_CHARS.length(); for (int i = 0; i < 20; i++) { sb.append(GOOD_CHARS.charAt(rnd.nextInt(CHAR_MAX))); } return sb.toString(); } is turned into this: NSString *OrgMarkdownjCharacterProtector_longRandomString(OrgMarkdownjCharacterProtector *self) { JavaLangStringBuilder *sb = [[JavaLangStringBuilder alloc] init]; jint CHAR_MAX = ((jint) [((NSString *) nil_chk(OrgMarkdownjCharacterProtector_GOOD_CHARS_)) length]); for (jint i = 0; i < 20; i++) { (void) [sb appendWithChar:[OrgMarkdownjCharacterProtector_GOOD_CHARS_ charAtWithInt:[((JavaUtilRandom *) nil_chk(self->rnd_)) nextIntWithInt:CHAR_MAX]]]; } return [sb description]; } This results in a `Parse Issue: Expected identifier or '('` at this line: jint CHAR_MAX = ((jint) [((NSString *) nil_chk(OrgMarkdownjCharacterProtector_GOOD_CHARS_)) length]); Answers: username_0: This is due to `CHAR_MAX` being defined in `limits.h` username_1: Added symbols from limits.h and i386/limits.h, awaiting code review. If you need to get past this before this gets pushed, just add "CHAR_MAX" to the reservedNames list in translator/src/main/java/com/google/devtools/j2objc/util/NameTable.java. Status: Issue closed username_1: Fixed in public source.
home-assistant/frontend
809409238
Title: Color thermostat card not customizable Question: username_0: **Checklist** - [X] I have updated to the latest available Home Assistant version. - [X] I have cleared the cache of my browser. - [X] I have tried a different browser to see if it is related to my browser. **Describe the issue you are experiencing** The color of the slider of the thermostat card is not customizable by a theme or setting. **Describe the behavior you expected** I expect the slider color to be customizable. **Steps to reproduce the issue** 1. 2. 3. ... **What version of Home Assistant Core has the issue?** core-2021.2.3 **What was the last working version of Home Assistant Core?** _No response_ **In which browser are you experiencing the issue with?** _No response_ **Which operating system are you using to run this browser?** _No response_ **State of relevant entities** ```yaml # Paste your state here. ``` **Problem-relevant frontend configuration** ```yaml # Paste your YAML here. ``` **Javascript errors shown in your browser console/inspector** ```txt # Paste your logs here. ``` Answers: username_1: Haven't tried it myself yet, but from looking at the code that should be possible. For a fixed/static override `round-slider-bar-color` should work. By default it is assigned to use `mode-color` which is set depending on the thermostat mode. See the different mode variables in the coding: https://github.com/home-assistant/frontend/blob/09e7600d8639b81600f3917d7678ebd93779a936/src/panels/lovelace/cards/hui-thermostat-card.ts#L451-L477 Can you try if that works? username_2: I tried both **heat-color** and **round-slider-bar-color** - neither work. username_0: Same here. username_1: I can confirm. The `ha-card` seems to be overwriting the theme values here. username_2: Also, on this card, the unselected modes are forced to var(--disabled-text-color), which doesn't seem right since they are not really disabled. https://github.com/home-assistant/frontend/blob/09e7600d8639b81600f3917d7678ebd93779a936/src/panels/lovelace/cards/hui-thermostat-card.ts#L563 As one counter example, the simple thermostat card uses var(--secondary-text-color) instead for unselected modes. username_0: Shouldn't it be something like this? ``` ha-card { height: 100%; position: relative; overflow: hidden; --name-font-size: 1.2rem; --brightness-font-size: 1.2rem; --rail-border-color: transparent; } .content { --auto-color: green; --eco-color: springgreen; --cool-color: #2b9af9; --heat-color: #ff8100; --manual-color: #44739e; --off-color: #8a8a8a; --fan_only-color: #8a8a8a; --dry-color: #efbd07; --idle-color: #8a8a8a; --unknown-color: #bac; } .auto, .heat_cool { --mode-color: var(--auto-color); } .cool { --mode-color: var(--cool-color); } .heat { --mode-color: var(--heat-color); } .manual { --mode-color: var(--manual-color); } .off { --mode-color: var(--off-color); } .fan_only { --mode-color: var(--fan_only-color); } .eco { --mode-color: var(--eco-color); } .dry { --mode-color: var(--dry-color); } .idle { --mode-color: var(--idle-color); } .unknown-mode { --mode-color: var(--unknown-color); } ``` username_3: Thanks, also looking for a solution here :) username_4: Hi, still looking for a solution here. Any news guys ? :) username_5: After inspecting the card the browser showed me that adding the following to a theme yaml will control the color of the slider and the heat icon of the thermostat card. state-climate-heat-color: "#7716F6" username_3: Thanks for your help @username_5 ! Could you please tell how to use those values though ? 
:) I'm not getting them to work...
username_5: Have a look at [https://www.home-assistant.io/integrations/frontend/#defining-themes](https://www.home-assistant.io/integrations/frontend/#defining-themes). I also noticed that all the climate entries are listed in the file linked on that page: [ha-style.ts](https://github.com/home-assistant/home-assistant-polymer/blob/master/src/resources/ha-style.ts). So the answer was in the documentation as well.
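To make username_5's hint concrete: theme variables like this go under a theme definition in the `frontend` section of `configuration.yaml` (or an included themes file), and the theme is then selected in the user profile or via the `frontend.set_theme` service. A minimal sketch, where the theme name `my_theme` and the color value are just placeholders:

```yaml
# configuration.yaml (sketch): a theme that overrides the thermostat
# card's heat color; apply it from your user profile after a reload.
frontend:
  themes:
    my_theme:
      state-climate-heat-color: "#7716F6"
```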
dotnet/roslyn
90741964
Title: Unable to customize the generated InternalsVisibleTo attributes.
Question: username_0: For doing custom, private builds of Roslyn, it is sometimes required to have access to the internals of various assemblies. One example of this need is for code completion. It is possible to manually edit the requisite project files to inject InternalsVisibleTo attributes, but doing so is tedious and error-prone. It must also be done after every pull from upstream.

My proposed solution to this is to add an extension point to VSL.Settings.targets. This extension point would load a local project file that contains `<InternalsVisibleTo/>` elements, which would be injected into targeted projects during build.

The proposed change to VSL.Settings.targets:
```xml
<Import Project="$(MSBuildThisFileDirectory)..\..\..\..\..\LocalInternalsVisibleTo.props" Condition="Exists('$(MSBuildThisFileDirectory)..\..\..\..\..\LocalInternalsVisibleTo.props')" />
```
And an example LocalInternalsVisibleTo.props:
```xml
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <InternalsVisibleTo Include="MyCustomAssembly" Condition=" '$(MSBuildProjectName)' == 'Features' OR '$(MSBuildProjectName)' == 'Workspaces.Desktop'" />
    <InternalsVisibleTo Include="MyOtherCustomAssembly" Condition=" '$(MSBuildProjectName)' == 'Features'" />
  </ItemGroup>
</Project>
```
Status: Issue closed
Answers: username_1: Based on this https://github.com/dotnet/roslyn/pull/3669, we're not going to do this.
llvm/circt
892718966
Title: [firrtl] Flow checking rejects firrtl/regress/Rob.fir Question: username_0: I'm not sure if this is correct or not, but we're currently rejecting this testcase from the firrtl regression suite with: ``` rob.scala:934:41: error: 'firrtl.partialconnect' op has invalid flow: the left-hand-side has source flow (expected sink or duplex flow). rob.scala:934:41: note: see current operation: "firrtl.partialconnect"(%T_23566, %T_23562) : (!firrtl.uint<37>, !firrtl.uint<40>) -> () regress/Rob.fir:46:9: note: the left-hand-side was defined here. infer mport T_23566 = T_23558[T_23565], clk ^ ``` @username_1 Answers: username_1: I think the default flow is "source" and flow computation is incorrectly using the default case for `mport` (`mport` isn't handled at all). Should be a quick fix. Thanks for finding and reporting this. Status: Issue closed
econ-ark/HARK
935061562
Title: removing handcrafted terminal solutions; better pseudoterminal/afterlife handling Question: username_0: Currently, both handcrafted terminal and pseudo-terminal models are supported. The handcrafted terminal functionality should be deprecated and all models should have better, more robust "pseudo-terminal", "afterlife" functionality for the last period.
bigtestjs/bigtest
541835549
Title: Scrollable does not error when unable to scroll
Question: username_0: The `scrollable` property creator and `scroll` method have no check in place to verify that they are able to scroll the element. We need to add something like the following

```js
.when(element => {
  // fail fast when the element cannot actually be scrolled that far
  if (scrollTo.top && element.scrollHeight < scrollTo.top) {
    throw new Error(`unable to scroll "${selector}", not enough vertical overflow`);
  } else if (scrollTo.left && element.scrollWidth < scrollTo.left) {
    throw new Error(`unable to scroll "${selector}", not enough horizontal overflow`);
  }
})
.do(element => {
  // current scrollable behavior
})
```
GordeyChernyy/DrawingTool
120612545
Title: The text symbol crashes app
Question: username_0: The app crashes when the string contains this symbol ' (\342).

```C++
const FTGlyph* const FTGlyphContainer::Glyph(const unsigned int charCode) const
{
    unsigned int index = charMap->GlyphListIndex(charCode);
    return glyphs[index]; // EXC_BAD_ACCESS here
}
```
Status: Issue closed
lowRISC/opentitan
978588245
Title: [prim_diff_decode] Failing assertion `LevelCheck0_A` needs a fix
Question: username_0: This assertion fails occasionally at the chip level in the CSR RW test. Here's the link:
https://github.com/lowRISC/opentitan/blob/94b47186f82294c2923196d4f54b7c9ea65a200a/hw/ip/prim/rtl/prim_diff_decode.sv#L245

In one particular case, a write to an `alert_test` register in an IP (running at 100MHz) resulted in the diff alert pair toggling very close to the edge of the 24MHz clock that the alert receiver in the `alert_handler` is on. This results in a case where the synchronized output appears to change within a single cycle, instead of the 2 cycles imposed by this assertion check. Discussed with @msfschaffner already offline.
Answers: username_0: Higher priority because this makes the chip-level CSR test, which is run in CI, flaky.
paymill/paymill-documentation
107506683
Title: Connect of PAYMILL Mobile SDK via website
Question: username_0: On this page: https://developers.paymill.com/guides/integration/mobile-sdk.html

There is a block that appears twice with a button "Install the mobile App" in it. From my understanding, both should lead to our Connect feature, where you then connect your account with our mobile app. Clicking the first, I get to the start page of the developer centre. Clicking the second, I get a 404 error.

+++ Update +++
mobile-app-success & mobile-app-failure have been switched off with the old documentation. So the 2 pages have to be implemented in the Dev Center.

The button link should switch to: https://connect.paymill.com/authorize?client_id=app_3306946e6e1e6cb593f996f437d6081c054f51ae9&scope=transactions_w+preauthorizations_w+clients_w+payments_w+webhooks_w+subscriptions_w+offers_w+refunds_w&response_type=code
sanity-io/sanity
436411386
Title: Can not run sanity studio on custom path in custom express server Question: username_0: **Describe the bug** When I use the modules from `@sanity/server` I can not run the studio in dev mode under another path. _I added comment to the offending commit, but I lost that comment, so here is the proper issue report_ **To Reproduce** _warning: this is messy code, it's just an evaluation experiment_ Directory layout: ``` |_ package.json (no sanity) |_ serve.js |_ admin |_ package.json (sanity) |_ ... (all other sanity files) |_ getExpressAdminApp.js ``` ``` // serve.js const getAdminApp = require('./admin/getExpressAdminApp') const app = require('express')() getAdminApp() .then(adminApp => { app.use((req, res, next) => { if (req.path.startsWith('/admin')) { adminApp(req, res, next) } else { next() } }) app.use('*', (req, res) => res.status(200).send('okidoki')) app.listen(8888) }) .catch(e => console.error(e)) ``` ``` // getExpressAdminApp.js const isProduction = process.env.NODE_ENV === 'production' const reinitializePluginConfigs = require('@sanity/core/lib/actions/config/reinitializePluginConfigs') const getAdminApp = require(`@sanity/server/lib/${isProduction ? 'prodServer' : 'devServer'}`).default const getConfig = require("@sanity/util/lib/getConfig").default const path = require('path') module.exports = async () => { const config = getConfig(__dirname) await reinitializePluginConfigs.tryInitializePluginConfigs({ workDir: __dirname, output: { print: (...args) => console.log(...args) } }) const x = getAdminApp({ basePath: __dirname, staticPath: path.resolve(__dirname, config.get('server.staticPath')), project: config.get('project'), }) x.locals.compiler.plugin('done', stats => { // report webpack warnings and errors [Truncated] features: { customProperties: { preserve: false } } } }) } }; ``` Why do we want to have the studio in a custom local development on a sub path? I am looking for suitable replacements of our current stack. In order to sell this to my co-developers I want to create a nice 'starter kit' that has next js for the main front-office and sanity for the backoffice. I want them to have an awesome experience where every edit they make directly has effect. I also want to add some basic stuff: - next js pages appear in the studio where content can be added for those pages - for each next js they can add some SEO info - automatically render the appropriate SEO info with next js - add some localization options and show the studio in dutch Oops, maybe too much info, anyway, it would be nice if this minor change would be made. Answers: username_1: We were hoping for a solution to this as well. Any hacky workarounds to achieve this for now?
open-austin/iced-coffee
133563461
Title: AC4D Partnered Event
Question: username_0: We've talked about a co-hosted event with AC4D, "Designing for Social Impact Workshop"
https://open-austin.slack.com/archives/eventplanning/p1455379930000061

Here are some todos:

- [ ] Figure out a rough date.
  - End of April so as not to conflict with other pre-ATXH4C workshops?
- [ ] Decide what problem statement we want to focus on.
  - [Safety & Justice](http://www.codeforamerica.org/focus/safety-justice/), [Health](http://www.codeforamerica.org/focus/health/), [Economic Development](http://www.codeforamerica.org/focus/economic-development/), [Communications & Engagement](http://www.codeforamerica.org/focus/communications-engagement/).
- [ ] Have a convo, maybe coffee with Eric, to talk details. (@username_0 setting this up)
- [ ] Book a venue (here's a good [list](https://github.com/open-austin/iced-coffee/issues/88) to start)
Answers: username_1: Hey @username_0, is that the best list of venues we have? If so, I'd like to create a spreadsheet of that. If not, can you point me to another list?
username_0: Yes, it's the best we've got, but I'm sure there are some missing.
username_0: Here are a few more: https://library.austintexas.gov/sites/default/files/u20/meeting_spaces_2015.pdf
username_0: Got an email response from For the City Center (we had reached out to them regarding CodeAcross). Their venue rental fees for nonprofits are reasonable.
[2015 FTCC Rental Information - NonProfit & General Use.pdf](https://github.com/open-austin/iced-coffee/files/135333/2015.FTCC.Rental.Information.-.NonProfit.General.Use.pdf)
username_1: That does seem reasonable. If you find/have any more venue docs, can you put them in here: [https://drive.google.com/drive/u/1/folders/0BwKb2oghmQldMDlLVnNIQk9tTEU](https://drive.google.com/drive/u/1/folders/0BwKb2oghmQldMDlLVnNIQk9tTEU)
Status: Issue closed
username_0: So this event has spawned several other issues. Going to close this one for now, but I added it under the Design Workshop Milestone.
EliotVU/UnrealScript-Language-Service
506193742
Title: [0.4.3] "const ref" argument confusing the parser Question: username_0: ![image](https://user-images.githubusercontent.com/2865341/66702900-4046f900-ed15-11e9-93c3-5b60608bddd7.png) ```uc static native function bool IsUnitInRange(const ref XComGameState_Unit SourceUnit, const ref XComGameState_Unit TargetUnit, float MinRange = 0.0f, float MaxRange = 0.0f, float MaxAngle = 360.0f); static native function bool IsUnitInRangeFromLocations(const ref XComGameState_Unit SourceUnit, const ref XComGameState_Unit TargetUnit, const out TTile SourceTile, const out TTile TargetTile, float MinRange = 0.0f, float MaxRange = 0.0f, float MaxAngle = 360.0f); ``` ``` Encountered an error while constructing the body for function 'XComGame.Helpers.IsUnitInRange' Error: The specified node does not exist at Tt.getChild (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:1695) at Tt.getRuleContext (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:2402) at Tt.functionBody (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:277874) at e.DocumentASTWalker.visitFunctionDecl (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:596:22518) at Tt.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:278381) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at O.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:252506) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at R.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:251720) at e.DocumentASTWalker.visit (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:513) Encountered an error while constructing the body for function 'XComGame.Helpers.XComGameState_Unit' Error: The specified node does not exist at Tt.getChild (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:1695) at Tt.getRuleContext (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:2402) at Tt.functionBody (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:277874) at e.DocumentASTWalker.visitFunctionDecl (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:596:22518) at Tt.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:278381) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at O.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:252506) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at R.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:251720) at e.DocumentASTWalker.visit (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:513) Encountered an error while constructing the body for function 'XComGame.Helpers.IsUnitInRangeFromLocations' Error: The specified node does not exist at Tt.getChild (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:1695) at Tt.getRuleContext (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:2402) at Tt.functionBody (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:277874) at 
e.DocumentASTWalker.visitFunctionDecl (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:596:22518) at Tt.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:278381) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at O.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:252506) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at R.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:251720) at e.DocumentASTWalker.visit (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:513) Encountered an error while constructing the body for function 'XComGame.Helpers.XComGameState_Unit' Error: The specified node does not exist at Tt.getChild (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:1695) at Tt.getRuleContext (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:2402) at Tt.functionBody (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:277874) at e.DocumentASTWalker.visitFunctionDecl (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:596:22518) at Tt.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:278381) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at O.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:252506) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at R.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:251720) at e.DocumentASTWalker.visit (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:513) Encountered an error while constructing the body for function 'XComGame.Helpers.TTile' Error: The specified node does not exist at Tt.getChild (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:1695) at Tt.getRuleContext (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:40:2402) at Tt.functionBody (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:277874) at e.DocumentASTWalker.visitFunctionDecl (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:596:22518) at Tt.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:278381) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at O.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:252506) at e.DocumentASTWalker.visitChildren (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:658) at R.accept (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:256:251720) at e.DocumentASTWalker.visit (c:\Users\xyman\.vscode\extensions\eliotvu.uc-0.4.3\server\out\server.js:600:513) Helpers: Walking time 17.128762990236282 Helpers: linking time 2.1040569841861725 [Helpers]: post linking time 6.587505012750626 ``` Answers: username_1: Thanks! This has been fixed in the next update(0.5) :) I'm assuming this is a feature shared between some of the UE3 licensees. Status: Issue closed
palantir/policy-bot
437481640
Title: Commit listing issues in 1.7.x and 1.8.x Question: username_0: The new commit listing logic introduced in 1.7.0 also introduced several significant bugs that break various parts of the application. As a result, we don't recommend deploying the 1.7.0, 1.7.1, 1.8.0, and 1.8.1 releases. This issue serves as a summary of the problems and a tracking issue for the final fix. In 1.7.0, commit listing was switched to read the history of a commit from the head repository instead of from the pull request to populate the `pushedDate` field and fix `invalidate_on_push` for pull requests from forks. This relied on the `commits` field existing in the pull request object, which was not true for objects included in `pull_request_review` event payloads. The server would crash when processing these events. This was fixed in 1.8.0 and revealed a more significant problem. By listing the history of a single commit, commits already on the target branch but included in the PR via a merge commit were also considered by the bot. This broke collaborator detection and could break file predicates, since the commit list is truncated to the number of commits GitHub reports as being part of the pull request. Both of these issues were caused by bad assumptions made about the GitHub API that were not adequately tested before release. In addition to fixing the bug, we'll also be looking at better integration testing strategies. Status: Issue closed Answers: username_0: I believe this is fully fixed in the upcoming 1.9.0 release: - For PRs that are from the same repository, commits are listed from the pull request - For PRs that are from forks, commits are listed from the pull request and then a second request is made against the fork repository to load the `pushedDate` field for all commits included in the pull request
GwtMaterialDesign/gwt-material-table
323834956
Title: Standard Table - Clicking the Maximize / Stretch button does not do anything Question: username_0: ![image](https://user-images.githubusercontent.com/39285900/40150850-b6bfe2be-59ae-11e8-8235-d78ea9fa2d01.png) Is this intended behavior? Demo : https://gwtmaterialdesign.github.io/gwt-material-demo/2.1/#infinite_datatable
chronos-tachyon/googletest-bazel
64427054
Title: gtest_prod_test is not linking for Windows CE platform.
Question: username_0:
```
IMPORTANT NOTE: PLEASE send issues or requests to
http://groups.google.com/group/googletestframework *instead of here*.
This issue tracker is NOT regularly monitored.

If you really need to create a new issue, please provide the information
asked for below.

What steps will reproduce the problem?
1. Convert gtest and gtest_main to compile both of them for CE. I changed the
preprocessor settings to
"WIN32;NDEBUG;_LIB;_i386_;UNDER_CE=$(CEVER);_WIN32_WCE=$(CEVER);$(CePlatform);UNICODE;_UNICODE;_X86_;x86;_WIN32_WCE_CEPC;".
With this setting, gtest & gtest_main compile. Change the gtest_prod_test
preprocessor setting to
"WIN32;NDEBUG;_i386_;UNDER_CE=$(CEVER);_WIN32_WCE=$(CEVER);$(CePlatform);UNICODE;_UNICODE;_X86_;x86;_WIN32_WCE_CEPC;_CONSOLE".
Then compile.
2. It's giving the following error.
1>LINK : error LNK2001: unresolved external symbol _mainCRTStartup
3.

What is the expected output? What do you see instead?
It should compile and I should be able to see the results in the example
given which comes along with gtest 1.6.0

What version of Google Test are you using? On what operating system?
gtest 1.6.0

Please provide any additional information below, such as a code snippet.
```
Original issue reported on code.google.com by `<EMAIL>` on 29 Sep 2011 at 3:41
Status: Issue closed
department-of-veterans-affairs/caseflow
709216785
Title: Add Actions to CavcRemandProcessedLetterResponseWindowTask
Question: username_0: ## User or job story
User story: As a Litigation Support user, I need actions to be available on the CavcRemandProcessedLetterResponseWindowTask, so that I can add actions if applicable or complete the task.

## Acceptance criteria

- [ ] Please put this work behind the feature toggle: cavc_remand
- [ ] This feature should be accessible to the following user groups: Litigation Support
- [ ] Include screenshot(s) in the Github issue if there are front-end changes
- [ ] Update documentation: Cavc Remand
- [ ] Verify the actions drop down has the following available:
  - Place Task on Hold - opens current on hold modal
  - Send to VLJ for Extension Request grant/denial - workflow outside of this ticket
  - Charge to VSO - Creates IHP task under distribution task for VSO on file (existing task)
    - Closes self & parent
  - Schedule Hearing - Opens schedule hearing task (auto opens parent hearing task under distribution task)
    - Closes self & parent
  - Mark task complete
    - Closes self and parent to be ready for distribution

Answers: username_0: @geronimoramos @username_1 to confirm we no longer want the user to place this task on hold for any reason?
username_1: @username_0 @geronimoramos Correct, MDRs get placed on hold. At this point for JMR/JMPR, the equivalent of a hold would be an extension request that they would want to track
Status: Issue closed
posva/vue-promised
350750520
Title: Type check failed for prop "promise". Expected Promise, got Promise.
Question: username_0: Hello, thank you very much for the very useful project. The component works as expected; however, I keep getting a strange error in the console.

`[Vue warn]: Invalid prop: type check failed for prop "promise". Expected Promise, got Promise.`

I have no idea how this can become a problem since it's exactly what the component expects, right? I am using Nuxt, if that matters, and here's the code of my view:

```
<Promised :promise="checkImage(resto.pic)">
  <div slot="pending">Loading...</div>
  <v-card-media slot="then" slot-scope="data" :src="data" height="150px"/>
  <v-card-media slot="catch" slot-scope="error" :src="error" height="150px"/>
</Promised>
```

Here's the method which returns the promise:

```
checkImage(src) {
  return new Promise((resolve, reject) => {
    let img = new Image()
    img.src = src
    img.onload = () => resolve(src)
    img.onerror = () => reject("/images/resto-placeholder.png")
  })
}
```

Is there anything wrong on my end? Any help and feedback is greatly appreciated. Thank you.
Answers: username_1: This is because you are using a polyfill for Promises and it's being included after VuePromised is included, making the type differ. The solution is making sure the polyfill is included before VuePromised. It had bit me before, so I will probably just use a custom validator that checks for `then` and `catch` methods
username_0: Thank you for pointing that out. I... am still clueless about polyfills, so I'm going to get some reading done first and I'll get back if I have something specific to ask. Thanks again :)
Status: Issue closed
username_2: @username_1, I'm having the same problem in a wrapper component for `<Promised>`. Is there an easy way to control the import order of the polyfill in a `vue-cli` application? Otherwise I will have to remove the type checks for `Promise`.
username_1: This was fixed so what error are you seeing?
username_2: @username_1, sorry if this might be a little off-topic. I'm using the `promise: { type: Promise }` specification for a prop in my wrapper component and this leads to the same error as reported above. You solved this by changing the prop definition, but said it was actually caused by the Promise polyfill being imported only after `vue-promised`. So I was wondering if you had a solution in mind, where one would control the import order of the polyfill.
username_1: oh, I'm sorry I don't have a different solution in mind apart from the one I implemented in vue-promised
ARCANEDEV/Localization
235936687
Title: Cannot install 1.1 with Laravel 5.4
Question: username_0:
```
The compiled services file has been removed.
Loading composer repositories with package information
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.

Problem 1
- Installation request for arcanedev/localization ^1.1 -> satisfiable by arcanedev/localization[1.1.0].
- Conclusion: remove arcanedev/support 3.21.4
- Conclusion: don't install arcanedev/support 3.21.4
- arcanedev/localization 1.1.0 requires arcanedev/support ~4.1 -> satisfiable by arcanedev/support[4.1.0, 4.1.1, 4.1.2, 4.1.3, 4.1.4, 4.1.5, 4.1.6, 4.1.7, 4.1.8, 4.1.9].
- Can only install one of: arcanedev/support[4.1.0, 3.21.4].
- Can only install one of: arcanedev/support[4.1.1, 3.21.4].
- Can only install one of: arcanedev/support[4.1.2, 3.21.4].
- Can only install one of: arcanedev/support[4.1.3, 3.21.4].
- Can only install one of: arcanedev/support[4.1.4, 3.21.4].
- Can only install one of: arcanedev/support[4.1.5, 3.21.4].
- Can only install one of: arcanedev/support[4.1.6, 3.21.4].
- Can only install one of: arcanedev/support[4.1.7, 3.21.4].
- Can only install one of: arcanedev/support[4.1.8, 3.21.4].
- Can only install one of: arcanedev/support[4.1.9, 3.21.4].
- Installation request for arcanedev/support (locked at 3.21.4) -> satisfiable by arcanedev/support[3.21.4].

Installation failed, reverting ./composer.json to its original content.
```
Answers: username_1: Can you show me your `composer.json`? Try to update any other arcanedev packages before installing `arcanedev/localization`.
Status: Issue closed
username_0: I see your point, the issue may come from another package (such as laravel-translation-manager):
```json
{
    "name": "laravel/laravel",
    "description": "The Laravel Framework.",
    "keywords": ["framework", "laravel"],
    "license": "MIT",
    "type": "project",
    "require": {
        "php": ">=5.5.9",
        "laravel/framework": "5.4.*",
        "intervention/image": "^2.3",
        "illuminate/html": "^5.0",
        "vsch/laravel-translation-manager": "~2.4",
        "dimsav/laravel-translatable": "^5.3",
        "spatie/laravel-medialibrary": "~3.10",
        "guzzle/guzzle": "^3.9",
        "guzzlehttp/guzzle": "^6.1",
        "anhskohbo/no-captcha": "^2.1",
        "laravel/tinker": "^1.0"
    },
    "require-dev": {
        "fzaninotto/faker": "~1.4",
        "mockery/mockery": "0.9.*",
        "phpunit/phpunit": "~5.7",
        "phpspec/phpspec": "~2.1",
        "laravel/homestead": "dev-master",
        "symfony/dom-crawler": "3.1.*",
        "symfony/css-selector": "3.1.*"
    },
    "autoload": {
        "classmap": [
            "database"
        ],
        "psr-4": {
            "App\\": "app/"
        },
        "files": [
            "app/Helpers.php"
        ]
    },
    "autoload-dev": {
        "classmap": [
            "tests/TestCase.php"
        ]
    },
    "scripts": {
        "post-install-cmd": [
            "php artisan clear-compiled",
            "php artisan optimize"
        ],
        "pre-update-cmd": [
            "php artisan clear-compiled"
        ],
        "post-update-cmd": [
            "php artisan vendor:publish --provider=\"Vsch\\TranslationManager\\ManagerServiceProvider\" --tag=public",
            "php artisan optimize"
        ],
        "post-root-package-install": [
            "php -r \"copy('.env.example', '.env');\""
        ],
        "post-create-project-cmd": [
            "php artisan key:generate"
        ]
    },
    "config": {
        "preferred-install": "dist"
    }
}
```
username_1: You need to check whether the packages you're using have a Laravel `5.4` compatible version. Like `dimsav/laravel-translatable`: you must update it to `v7.*`. [Check this link](https://github.com/dimsav/laravel-translatable#laravel-compatibility)
username_0: That's exactly what I was doing when I first hit this issue. I did update every version, removed composer.lock and my vendor directory. Everything is installed flawlessly, thank you!
routetopa/spod
181914095
Title: Add more info about the users (e.g. data analyst, student, journalist)
Question: username_0: `(Utrecht, SPOD (v.1.8), September 26 2016)`

The members page is a good feature and with the new interface it also looks nice. However, it would be good if you could see what a person's background is (e.g. data analyst, student, journalist), where they are from, and what topics they are interested in. This way you could connect more easily with the people you should be in contact with for your project. See the screenshot below.

![image](https://cloud.githubusercontent.com/assets/22379520/19223879/91cc089e-8e7a-11e6-8ad3-14d9b3e33fcf.png)

Add more info about the users. For example: <NAME> (Italy / Data Analyst / Circular Economy) or something like that.
Answers: username_1: Moved to the UX&UI project.
Status: Issue closed
username_2: We decided, in agreement with project partners/users, not to show personal information for privacy reasons.
blackboard/BBDN-BlackboardData-Queries
1057640588
Title: Careful with deleted items
Question: username_0: There are several queries posted here that don't take into account that Blackboard Data retains records that have been deleted from the source systems. If you want to make sure you are only counting things that are currently on the platform, be sure to include a filter based on **row_deleted_time** being null (e.g. adding a `WHERE row_deleted_time IS NULL`, or an extra `AND row_deleted_time IS NULL` condition, to the query).
Miahelloworld/awesome-wechat-account-verification-outside-of-China
805337503
Title: wechat scan qr code sign up | I currently get 13 awesome solutions on google | you can get a Wechat account via the following methods.
Question: username_0: I currently have 13 solutions found on Google; you can get a WeChat account via the following methods.

#1. Can you find a Chinese restaurant near you? You can ask the owners to do this job for you. Of course, you need to buy some Chinese food first; while you eat your food, ask them to fix this issue for you. Sounds funny, but it works 90% of the time.
#2. Can you find an online marketplace platform? I know of eBay and the kiwikiwifly marketplace, etc. Why not search the WeChat account keyword, or buy a WeChat account, on these two websites? It will save you lots of time. Many freelancers can do this job, I think, and they are happy to help you address this issue. After they have helped you, you can give them around a $30 tip or buy them a coffee.
#3. Do you know the Chinese language? Do you know the website Taobao? It is very famous in China. Don't understand Chinese? Can you use the Google Translate extension?
#4. Can you use the PayPal function? 14 days of transaction protection; find a Chinese seller and buy it via PayPal.
#5. Can you post a job on a job-seeking website? Just try to hire someone, and leave your contacts.
#6. Can you watch some YouTube video tutorials? But only a few tutorials work; most of them waste time.
#7. You can submit a ticket to the WeChat official team and add your issue. Just google the WeChat help center.
#8. Some sellers on YouTube offer WeChat security verification services. But you have to use a marketplace escrow payment to secure your transaction.
#9. Just search the "how to activate Wechat account 2021" keyword on Google; you will get the latest methods.
#10. You can search for WeChat accounts in Discord server groups. There are some users there who can help you verify it.
#11. A Facebook group is the last possible solution to get WeChat account verification. But the con is that there is so much fake info and spam.
#12. I found 2 answers on GitHub about how to fix your issue: wechat verification services|buy wechat accounts from 10 professional freelancers
#13. Another one is here: Wechat account verification discord server|The #1 wechat users group on discord https://discord.gg/pNExxP7

**Please note:** You need a second person to activate your WeChat account. That person needs to meet the following requirements:

- Account age has to be longer than 6 months
- Has not verified anyone within the last month
- Has not done more than 2 verifications within the last 6 months
- Has not done more than 3 verifications within the last year
- The account is from the same country/region
- The account cannot be blocked
jamesls/fakeredis
330680169
Title: Unsupported keyword argument 'score_cast_func' to zrangebyscore() Question: username_0: In the latest version of `redis-py`, [zrangebyscore() accepts a keyword argument called score_cast_func.](http://redis-py.readthedocs.io/en/latest/#redis.StrictRedis.zrangebyscore) Fakeredis [does not support it yet](https://github.com/jamesls/fakeredis/blob/master/fakeredis.py#L1630-L1631). I'll do a PR ASAP. Answers: username_1: There are two PRs already, but they're incomplete. You can probably save some time/effort by finishing what they started. username_1: Closing because it's a duplicate of #44 (a complete PR would still be welcome though). Status: Issue closed username_0: Sorry, I didn't check the open PRs. I ended up by making my own complete PR in #194 , with one test. Not sure if it's enough to be merged username_1: Thanks, I'll take a look next week.
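For anyone picking this up, the gist of the redis-py behavior being matched: when `withscores=True`, each returned score is passed through `score_cast_func` (default `float`) before being handed back. A minimal sketch of the idea as a standalone helper, not actual fakeredis internals (`_apply_withscores` is a hypothetical name):

```python
def _apply_withscores(items, withscores=False, score_cast_func=float):
    """Mimic redis-py: cast each score with score_cast_func when requested.

    `items` is a list of (member, raw_score) pairs as stored internally.
    """
    if not withscores:
        return [member for member, _ in items]
    return [(member, score_cast_func(score)) for member, score in items]

# Example: return scores as raw strings instead of floats.
pairs = [(b"a", "1.5"), (b"b", "2.5")]
print(_apply_withscores(pairs, withscores=True, score_cast_func=str))
# [(b'a', '1.5'), (b'b', '2.5')]
```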
MicrosoftDocs/azure-docs
450378089
Title: Azure Storage lifecycle management - Json schema
Question: username_0: I haven't found any information in the docs on whether a JSON schema exists for the Azure Storage lifecycle management policy syntax.

---
#### Document Details

⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*

* ID: 9a558a9e-10d0-ea34-8780-d9084f87f952
* Version Independent ID: 60dda4e0-7630-f358-aa3a-6a992eae61e2
* Content: [Managing the Azure Storage lifecycle](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts)
* Content Source: [articles/storage/blobs/storage-lifecycle-management-concepts.md](https://github.com/Microsoft/azure-docs/blob/master/articles/storage/blobs/storage-lifecycle-management-concepts.md)
* Service: **storage**
* Sub-service: **common**
* GitHub Login: @username_2
* Microsoft Alias: **mhopkins**

Answers: username_1: @username_0 Thanks for the feedback! I have assigned the issue to the content author to evaluate and update as appropriate.
username_2: @username_0 - Does the [Policy](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts#policy) section of this article answer your questions?
username_0: Uhm... No. That's why I created this feedback issue. Yes, the article is quite detailed, so I'm managing for now, but it's still not a JSON schema.
username_3: @username_0 We don't publish the schema. Do you find the documentation here insufficient?
username_0: I can live without the schema; I was just asking whether it exists, not stating that it's something mandatory. :)

If you are curious why I was asking: the documentation is OK, and is definitely more important than a schema, but having something that can be used for automated validation helps. Most Azure management API JSON structures are covered by schemas, so I was expecting that this would also be the case here.
username_3: @username_0 Thanks for the clarification. I think it is a valid request to publish the schema, and I will work with my API engineering team on that.
username_2: #reassign:username_3
username_4: @username_0 - Thanks again for your feedback. I'm closing this one out now as there's no further work required from the content team. #please-close
Status: Issue closed
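For reference, the policy document that such a schema would validate looks roughly like this (shape per the linked article; the rule name and prefix are placeholders):

```json
{
  "rules": [
    {
      "name": "sample-rule",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["container1/prefix1"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```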
WasmEdge/WasmEdge
1117484142
Title: [Docs] Translate the crun.md to zh Question: username_0: ### The MD file link you want to translate (must be in English)
https://github.com/WasmEdge/WasmEdge/blob/master/docs/book/en/src/kubernetes/container/crun.md
### Target language
zh: https://github.com/WasmEdge/WasmEdge/blob/master/docs/book/zh/src/kubernetes/container/crun.md
Answers: username_1: Please assign it to me.
username_2: Could you please assign it to me?
username_0: Thank you! Since @username_2 is new here, I will assign this issue to him/her. @username_1 Thanks for your contribution again!
Status: Issue closed
spatie/flysystem-dropbox
275070536
Title: The Dropbox API v2 does not support mimetypes Question: username_0: Hi, first of all, thanks for your work on this piece of software. I am using it with Laravel 5.4 as a provider for Flysystem, integrated with elFinder. I found a weird issue; I'm not sure it is related to your package, but maybe you have run into this before. I can see the folders, but then I cannot access folders with files: ![2017-11-18 002318](https://user-images.githubusercontent.com/26375917/32979422-9e5b08e8-cc87-11e7-8093-8f01d1c665f6.png) Any idea? Thanks.
Answers: username_1: Can you provide a PHP code sample that results in this error?
username_2: I just ran into this problem as well. The exception is thrown by any call to getMimetype() when using the Dropbox adapter. https://github.com/spatie/flysystem-dropbox/blob/master/src/DropboxAdapter.php#L232 I've worked around the issue for now by editing the Dropbox adapter to calculate the mimetype from the filename, which is how the FTP Flysystem adapter works: https://github.com/thephpleague/flysystem/blob/master/src/Adapter/Ftp.php#L415 Would you accept a PR which adds the behaviour from the FTP driver to this package?
username_1: Yup! 👍
Status: Issue closed
username_1: Fixed in #30
username_0: Just to confirm, the fix is working fine. Thank you very much!
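The workaround username_2 describes, deriving the MIME type from the file name instead of asking the Dropbox API, is language-agnostic; the sketch below uses Python's standard library purely for illustration, since the actual change merged in #30 is PHP (see the linked FTP adapter for the real implementation):

```python
# Illustration only: guess a MIME type from the file name without asking
# the remote API, which is the approach the linked FTP adapter takes.
import mimetypes

def mimetype_from_path(path: str) -> str:
    guessed, _encoding = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"

print(mimetype_from_path("photos/2017-11-18.png"))  # image/png
print(mimetype_from_path("notes.weird-ext"))        # application/octet-stream
```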
DMTF/python-redfish-library
486268385
Title: ILO5 Remote end closed connection without response Question: username_0: Executing commands against iLO 5 fails with 'Remote end closed connection without response'. When checking the sessions, I can see that a session is successfully created and is still active; it appears that any connections after that are failing:
```
redfish_obj = redfish.redfish_client(base_url=host_url, username=svc_account, \
password=<PASSWORD>, max_retry=1, timeout=120, default_prefix='/redfish/v1')
redfish_obj.login(auth='session')
redfish_obj.get('/redfish/v1')
```
Any guidance on how to debug this? The same script works fine against iLO 4.
```
Traceback (most recent call last):
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\redfish\rest\v1.py", line 814, in _rest_request
    resp = self._conn.getresponse()
  File "C:\Program Files (x86)\Python37-32\lib\http\client.py", line 1321, in getresponse
    response.begin()
  File "C:\Program Files (x86)\Python37-32\lib\http\client.py", line 296, in begin
    version, status, reason = self._read_status()
  File "C:\Program Files (x86)\Python37-32\lib\http\client.py", line 265, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\redfish\rest\v1.py", line 617, in get
    headers=headers)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\redfish\rest\v1.py", line 1013, in _rest_request
    args=args, body=body, headers=headers)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\redfish\rest\v1.py", line 878, in _rest_request
    raise_from(RetriesExhaustedError(), cause_exception)
  File "<string>", line 3, in raise_from
redfish.rest.v1.RetriesExhaustedError
```
Answers: username_0: It appears that running the script from a server within the same site (therefore < 1 ms latency) works, but running it from outside (around 71 ms latency) does not. Is there a latency requirement for this module or for specific Redfish services that someone is aware of? Running against iLO 4/BMC is fine from both locations...
Status: Issue closed
username_0: Can't believe I didn't try this... Weirdly, the second attempt works fine, and therefore so do the connections. Thanks!
username_1: Great. Glad to hear that helped.
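Since the resolution here amounts to "retry the request", a sketch of that pattern follows. The host and credentials are placeholders, `max_retry` and `RetriesExhaustedError` come straight from the snippet and traceback above, and the `get_with_retry` helper is hypothetical:

```python
import time
import redfish
from redfish.rest.v1 import RetriesExhaustedError

# Placeholder connection details for illustration
host_url = "https://ilo5.example.com"
svc_account = "svc-user"
svc_password = "not-a-real-password"

# Allow more internal retries than the original max_retry=1
client = redfish.redfish_client(base_url=host_url, username=svc_account,
                                password=svc_password, max_retry=5,
                                timeout=120, default_prefix='/redfish/v1')
client.login(auth='session')

def get_with_retry(client, path, attempts=3, delay=2):
    """Retry a GET, since per this thread iLO 5 may drop the first
    request after login on a high-latency link and then succeed."""
    for attempt in range(1, attempts + 1):
        try:
            return client.get(path)
        except RetriesExhaustedError:
            if attempt == attempts:
                raise
            time.sleep(delay)

print(get_with_retry(client, '/redfish/v1').status)
client.logout()
```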
LarsSjogreen/CalKul
220201132
Title: The == operator only handles doubles for the moment Question: username_0: This operator should also work with other datatypes, for example booleans and strings. Status: Issue closed
Answers: username_0: The == operator now handles bool, string, and double. It could also handle other types by extending the SupportedTypes property. Something like this should probably be either a base class or an interface (ITypesafeOperator) so that the mechanism can be used in other operators too.
username_0: Oops. Didn't check in.
username_0: This operator should also work with other datatypes, for example booleans and strings.
username_0: Fixed, checked in, closed.
Status: Issue closed
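A minimal sketch of the supported-types mechanism described above, written in Python purely for illustration: CalKul's actual language isn't shown in this thread, and `SupportedTypes`/`ITypesafeOperator` are names taken from the comment, so the real shapes may differ:

```python
class TypesafeOperator:
    """Base class standing in for the proposed ITypesafeOperator."""
    supported_types = (bool, str, float)  # extend to support more types

    def check_operands(self, left, right):
        if not isinstance(left, self.supported_types):
            raise TypeError(f"Unsupported operand type: {type(left).__name__}")
        if type(left) is not type(right):
            raise TypeError("Operands must be of the same type")

class EqualsOperator(TypesafeOperator):
    def apply(self, left, right):
        self.check_operands(left, right)
        return left == right

op = EqualsOperator()
print(op.apply(1.0, 2.0))      # False
print(op.apply("abc", "abc"))  # True
print(op.apply(True, True))    # True
```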
emergenzeHack/covid19italia_segnalazioni
655930474
Title: Il Cerchio Tondo: online shows and workshops for children Question: username_0: <pre><yamldata>Azione: sollievo/intrattenimento
Comune: <NAME>
Descrizione: 'Il Cerchio Tondo: spettacoli e laboratori per bambini online '
Destinatari: minori
Ente: Organizzazione
Fonte: CSV Lombardia
Link: http://it-it.facebook.com/pages/Il-Cerchio-Tondo-Teatro-di-Marionette-e-Burattini/202544286433691
Posizione: 9.316666666666666 9.316666666666666 0 0
</yamldata></pre>
ethereum/ethereum-org-website
991722465
Title: Add BitKeep Wallet request Question: username_0: Before suggesting a wallet, make sure you've read [our listing policy](https://www.ethereum.org/en/contributing/adding-products/). Only continue with the issue if your wallet meets the criteria listed there. If it does, complete the following information, which we need to accurately list the wallet.
**Is your wallet security tested? Please explain security measures, e.g. a security audit, an internal security team, or some other method.**
Yes, our wallet has been security tested by PeckShield and the Amors team, but the reports have not been published. As the most popular wallet in China, BitKeep has been live for almost 3 years, and there has been no security incident.
**When did your wallet go live to users?**
In May 2018.
**Does your wallet have an active development team?**
Yes, always.
**Is your wallet open-source?**
Not yet. We are preparing to open-source part of the core code.
**Is your wallet globally accessible?**
Yes, any time, anywhere.
**Is your wallet custodial, non-custodial, or a hardware wallet?**
Non-custodial; the private key is controlled by the user and stored on the user's device, like MetaMask.
**Please describe the measures taken to ensure the wallet's security and provide documentation wherever possible**
1. System architecture
The software adopts a standard client/server architecture. Besides interface interaction and data storage, the client also performs transaction signing for all currencies locally. The server mainly implements reading and broadcasting data on blockchains, as well as other basic data services. HTTPS is used for all of the software's requests. Each request carries a unique authentication token, which prevents a user's operations from being simulated by a bot and greatly enhances security. Cloud services are deployed in a distributed way, with 12 nodes around the world; they can handle at least one million concurrent connections and can be scaled further.
2. DESM double encryption algorithm
The Double Encryption Storage Mechanism (DESM) is a set of algorithms customized by the BitKeep wallet to encrypt and store mnemonics and private keys. BitKeep adopts a combination of sha256, aes256, and cloud authentication. The DESM algorithm can be understood as follows: 1) when the user sets the transaction password, the client stores sha256(password + seed) in the cloud, which returns a new seed based on the BitKeep account and password; 2) the key needed for symmetric encryption is computed via sha256(password + account seed + specific rules); 3) the mnemonic or private key is then encrypted via aes256(mnemonic or private key, key). Users follow the same principle when entering mnemonic words. This fundamentally solves the storage-security problem, for hackers and internal staff alike: the only way to crack it would be to obtain the user's phone data while also knowing the user's transaction password, account password, and the encryption rules in the BitKeep cloud, which is extremely unlikely.
**Does the wallet have fiat on-ramps?**
Not yet.
**Does the wallet allow users to explore dapps?**
Yes, we have a Dapps Store for exploring dapps. It is a dapp browser that supports WalletConnect and other connection methods, like MetaMask.
**Does the wallet have integrated defi/financial tools?**
Not yet.
**Can a user withdraw to their card?**
Not yet.
**Does the wallet offer limits protection?**
[Truncated]
**Wallet title**
BitKeep
**Wallet description**
BitKeep is one of the most popular digital currency wallets, supporting all mainstream public chains and Layer 2 networks such as BTC, ETH, BSC, HECO, TRON, OKExChain, Polkadot, Polygon, EOS, etc. It has provided reliable digital currency asset management services to tens of millions of users around the world.
**Wallet logo**
https://github.com/bitkeepcom/download/blob/main/bitkeep.svg
**Background colour for brand logo**
#FFFFFF
**URL**
https://bitkeep.com
Answers: username_1: Closed by #3850
Status: Issue closed
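To make the DESM description above concrete, here is a minimal sketch of the sha256-derive-then-aes256-encrypt layering it describes. This illustrates only the stated scheme, not BitKeep's actual code: the seed values and the "specific rules" string are placeholders, the AES mode (GCM here) is an assumption since the post doesn't name one, and a production design would use a slow KDF such as scrypt rather than a single sha256:

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(password: str, account_seed: str, rules: str) -> bytes:
    # Step 2 as described: sha256(password + account seed + specific rules)
    # yields a 32-byte digest, which doubles as an AES-256 key.
    return hashlib.sha256((password + account_seed + rules).encode()).digest()

def encrypt_secret(secret: str, key: bytes) -> bytes:
    # Step 3 as described: aes256(mnemonic or private key, key).
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, secret.encode(), None)

def decrypt_secret(blob: bytes, key: bytes) -> str:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None).decode()

key = derive_key("trading-pass", "seed-returned-by-cloud", "app-specific-rules")
blob = encrypt_secret("abandon ability able about ...", key)
assert decrypt_secret(blob, key) == "abandon ability able about ..."
```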
symfony/symfony
320849218
Title: dump() breaks web debug toolbar Question: username_0:

| Q | A |
| ---------------- | ----- |
| Bug report? | yes |
| Feature request? | no |
| BC Break report? | no |
| RFC? | no |
| Symfony version | 4.1 |

When calling `dump('foo')` in my code, the web debug toolbar doesn't load and I get this error in my console:
```
Uncaught ReferenceError: Sfdump is not defined
    at eval (eval at load.maxTries ((index):1), <anonymous>:1:9)
    at load.maxTries ((index):23)
    at (index):23
    at XMLHttpRequest.xhr.onreadystatechange ((index):23)
```
Answers: username_1: A quick git bisect indicates that this seems to happen since e4e591b46bcb81a436a8837d5cf5d5f335047da0
username_0: Thanks @username_1, I hadn't had a chance to dig in yet.
username_1: Here is a fix: https://github.com/symfony/symfony/pull/27189
Status: Issue closed
simplefoc/Arduino-FOC
689217692
Title: esp32_position_control example does not compile for ESP32 Wrover Module Question: username_0: Hello, I was using Arduino-FOC with an STM32, which worked nicely. Now I have set it up with an ESP32 Wrover (board package version 1.0.4), trying both the current master branch and the dev branch (31.08.20), and it does not compile. The example in question is ...\examples\hardware_specific_examples\ESP32\magnetic_sensor\esp32_position_control. Arduino throws the following error message:
`C:\Users\Josefin\Documents\Arduino\libraries\Arduino-FOC-dev\src\FOCutils.cpp: In function 'void _setPwmFrequency(int, int, int)':
C:\Users\Josefin\Documents\Arduino\libraries\Arduino-FOC-dev\src\FOCutils.cpp:108:55: error: 'MCPWM_SELECT_SYNC_INT0' was not declared in this scope
mcpwm_sync_enable(m_slot.mcpwm_unit, MCPWM_TIMER_0, MCPWM_SELECT_SYNC_INT0, 0);`
How can I fix this? Thanks for your help! Status: Issue closed
Answers: username_1: @username_0 Thanks!