repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M)
---|---|---|
mradkov/solidity-examples | 296852867 | Title: Purchasable Service Feedback
Question:
username_0: Hello Milen,
You've done an amazing job :)
Here are a few points to improve. Keep in mind that this is my opinion, and you can choose whether or not to adopt them in your work :)
- It is good practice to name your events starting with `Log` (although not all libraries follow this). This helps you avoid naming an executable function the same as an event, which can otherwise cost you countless hours of debugging to figure out what is going on (I speak from experience here)
- Can you create a modifier that can later be used by inherited contracts to mark a certain method as a service and require some value to be paid when calling these methods?
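A minimal Solidity sketch of both suggestions (contract, event, and function names here are illustrative, not taken from the reviewed code):
```solidity
pragma solidity ^0.4.24;

contract Purchasable {
    // The "Log" prefix keeps the event name distinct from any function name.
    event LogServicePaid(address indexed payer, uint256 amount);

    // Inherited contracts can mark any method as a paid service.
    modifier costs(uint256 price) {
        require(msg.value >= price, "insufficient payment for this service");
        _;
        emit LogServicePaid(msg.sender, msg.value);
    }
}

contract MyService is Purchasable {
    function premiumAction() public payable costs(0.01 ether) {
        // service logic goes here
    }
}
```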
Answers:
username_1: Thank you for your feedback. I've taken your suggestions into account and fixed the issues. You can check again when you get a chance :) |
GoogleChrome/lighthouse | 437188012 | Title: DevTools Error: PAGE_HUNG
Question:
username_0: **Initial URL**: https://regencycigar.com/
**Chrome Version**: 73.0.3683.103
**Error Message**: PAGE_HUNG
**Stack Trace**:
```
LHError: PAGE_HUNG
at eval (chrome-devtools://devtools/remote/serve_file/@e<KEY>82a23/audits2_worker/audits2_worker_module.js:1063:1407)
at async Driver._waitForFullyLoaded (chrome-devtools://devtools/remote/serve_file/@e8<KEY>660f9bb00b4a82a23/audits2_worker/audits2_worker_module.js:1063:1720)
at async Driver.gotoURL (chrome-devtools://devtools/remote/serve_file/@e82a<KEY>00b4a82a23/audits2_worker/audits2_worker_module.js:1072:974)
at async Function.loadPage (chrome-devtools://devtools/remote/serve_file/@e<PASSWORD>58d8<KEY>bb00b4a82a23/audits2_worker/audits2_worker_module.js:1116:58)
at async Function.pass (chrome-devtools://devtools/remote/serve_file/@e82a<KEY>82a23/audits2_worker/audits2_worker_module.js:1124:614)
at async Function.run (chrome-devtools://devtools/remote/serve_file/@e<PASSWORD>58d8<KEY>/audits2_worker/audits2_worker_module.js:1135:60)
at async Function._gatherArtifactsFromBrowser (chrome-devtools://devtools/remote/serve_file/@<KEY>/audits2_worker/audits2_worker_module.js:1591:151)
at async Function.run (chrome-devtools://devtools/remote/serve_file/@<KEY>/audits2_worker/audits2_worker_module.js:1583:11)
```
Answers:
username_1: This error means that the page stopped responding. The solution here is to re-run. If the error repeats, the page (or something on your machine) is creating an infinite loop and needs to be fixed.
Status: Issue closed
|
w3c-social/social-web-protocols | 165611724 | Title: Aligning ActivityPub and LDN
Question:
username_0: ## Discovery
- [ ] Namespace of `inbox`
- LDN is deciding between ldp, solid or ? [#13](https://github.com/csarven/ldn/issues/13)
- AP has its own mini vocabulary [#30](https://github.com/w3c-social/activitypub/issues/30)
- **Proposal 1:** LDN uses AP's `inbox`
- **Proposal 2:** AP uses LDN's `inbox` (whatever that is decided to be)
## Sending
- [x] Media type of sent payload
- LDN requires receivers to accept `application/ld+json` POSTs (`profile` optional)
- AP senders will send plain JSON with the `application/activity+json` media type
- **[Resolved](https://github.com/csarven/ldn/commit/a68f3fc8eb9e21636c8f6ab3db773b325698d3e4):** LDN receivers SHOULD treat incoming POSTs with `application/activity+json` as equivalent to `application/ld+json` with the AS2 profile
- [ ] The canonical id of notifications
- LDN has the notification created at the domain of the receiver, and added as a value of `ldp:contains`.
- AP has the notification created at the domain of the sender, and a copy of that sent to the receiver, (to be added to the inbox `Collection`).
- **Proposal:** If a notification is received which already has a URI then an LDN receiver should use this URI as the subject and add it to `ldp:contains` rather than creating a new one.
- Require the sender is authenticated and the URI is on their domain? (AP does)
- Is LDP okay with this? `ldp:contains` can contain pointers to resources elsewhere (according to @sandhawke) but I don't know if they can get there with a POST to the Container..
- Can AP handle incoming notification payloads which do not yet exist elsewhere..?
## Consuming
- [ ] Relation between inbox and contents
- LDN uses `ldp:contains` to allow LDP servers to act as senders without implementation changes.
- AP inboxes are an `as:Collection` with `as:items` as the relation
- **Proposal:** ???
- [ ] Returning AS2 payload of individual notifications
- Plain JSON AP consumers doing a GET on the inbox are going to say `Accept: application/activity+json`
- LDN receivers are only required to honour requests for `ld+json`. We don't say what they should do if there's an accept header with a profile, and I think it's unreasonable to ask them to try to translate all inbox contents to the vocabulary in the profile, and thus if `activity+json` is requested, we can't ask them to attempt to return anything as AS2.
- **Proposal 1:** Sorry AP consumer, you can only have things if you ask for `ld+json`, and you have to filter out the ones that aren't AS2 on your end (return 415).
- **Proposal 2:** Treat `activity+json` as `ld+json`+profile and just return JSON-LD. Tough luck if the consumer can't parse it.
- **Proposal 3:** Treat `activity+json` as `ld+json`+profile and return JSON-LD, but it MUST be in compacted form, so an AP consumer should be able to parse it as JSON.
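For illustration only, a hypothetical AS2 notification in compacted JSON-LD (the URIs are invented); under Proposal 3 a plain-JSON AP consumer could read it with an ordinary JSON parser while it remains valid JSON-LD:
```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://sender.example/notifications/1",
  "type": "Announce",
  "actor": "https://sender.example/profile",
  "object": "https://receiver.example/articles/7"
}
```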
Answers:
username_0: @csarven @cwebber @tsyesika, your expertise on your specs would be valuable here. Please look through this when you can!
username_0: I think we got as far as we could with this one. See also [inboxes](https://w3c-social.github.io/social-web-protocols/#inbox-interop) for consuming bridging.
Status: Issue closed
|
dask/distributed | 462667156 | Title: LocalCluster does not respect security configuration
Question:
username_0: Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/distributed/deploy/local.py", line 219, in __init__
self.start(ip=ip, n_workers=n_workers)
File "/usr/local/lib/python2.7/site-packages/distributed/deploy/local.py", line 257, in start
self.sync(self._start, **kwargs)
File "/usr/local/lib/python2.7/site-packages/distributed/deploy/local.py", line 250, in sync
return sync(self.loop, func, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/distributed/utils.py", line 331, in sync
six.reraise(*error[0])
File "/usr/local/lib/python2.7/site-packages/distributed/utils.py", line 316, in f
result[0] = yield future
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/usr/local/lib/python2.7/site-packages/tornado/concurrent.py", line 261, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/usr/local/lib/python2.7/site-packages/distributed/deploy/local.py", line 283, in _start
self.scheduler.start(address)
File "/usr/local/lib/python2.7/site-packages/distributed/scheduler.py", line 1199, in start
self.listen(addr_or_port, listen_args=self.listen_args)
File "/usr/local/lib/python2.7/site-packages/distributed/core.py", line 320, in listen
connection_args=listen_args,
File "/usr/local/lib/python2.7/site-packages/distributed/comm/core.py", line 258, in listen
loc, handle_comm, deserialize, **(connection_args or {})
File "/usr/local/lib/python2.7/site-packages/distributed/comm/tcp.py", line 530, in get_listener
return self._listener_class(loc, handle_comm, deserialize, **connection_args)
File "/usr/local/lib/python2.7/site-packages/distributed/comm/tcp.py", line 402, in __init__
self._check_encryption(address, connection_args)
File "/usr/local/lib/python2.7/site-packages/distributed/comm/tcp.py", line 339, in _check_encryption
"refusing communication from/to %r" % (self.prefix + address,)
RuntimeError: encryption required by Dask configuration, refusing communication from/to 'tcp://127.0.0.1'
```
Answers:
username_1: Hmm what's the expected behavior here? I'm not familiar with our TLS stuff, but using TLS for LocalCluster doesn't seem likely to work. What options do we have?
username_2: I'm curious, have you set up TLS somewhere in your configuration perhaps?
username_0: So, in my use case, I do indeed have TLS set up in the dask configuration file (with require-encryption and default-scheme=tls). The motivation for me to do this was to ensure TLS was used for Client-Scheduler communication. I actually don't care what happens Scheduler-Worker, but this approach will make that go over TLS too.
If I create a LocalCluster, then (unlike many other distributed objects) LocalCluster does not default to having its security parameter set from the configuration; it defaults to security=None. When LocalCluster starts the Scheduler, the Scheduler does read the configuration. It is this mismatch that is causing the problem.
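A minimal sketch of the workaround this implies, until `LocalCluster` reads the configuration by default (certificate paths are placeholders):
```python
from distributed import Client, LocalCluster
from distributed.security import Security

# Build the Security object explicitly so LocalCluster and the Scheduler
# agree on the TLS settings instead of LocalCluster defaulting to None.
security = Security(
    require_encryption=True,
    tls_ca_file="ca.pem",
    tls_client_cert="client-cert.pem",
    tls_client_key="client-key.pem",
)

cluster = LocalCluster(security=security)
client = Client(cluster, security=security)
```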
username_2: cc'ing @username_3 who knows more about how we're supposed to handle Security
objects. I personally don't have much experience or opinions here.
username_3: Yeah, local cluster should default to reading in the configuration as the scheduler does. Would you care to submit a PR fixing this?
username_0: I can write a PR to fix the issue, but I'm afraid I don't have time to write a test.
Status: Issue closed
|
tpolecat/tut | 453220960 | Title: Incompatible with `sbt-1.3.0-RC1`
Question:
username_0: Hi Rob,
I experienced a (binary?) issue when upgrading `sbt`. Did it happen to you by any chance?
```
[tut] compiling: /workspace/oss/fs2-rabbit/site/src/main/tut/examples/sample-mult-connections.md
[error] (run-main-2) java.lang.VerifyError: class scala.tools.nsc.Global overrides final method isDeveloper.()Z
[error] java.lang.VerifyError: class scala.tools.nsc.Global overrides final method isDeveloper.()Z
[error] at java.lang.ClassLoader.defineClass1(Native Method)
[error] at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
[error] at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
[error] at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
```
Here's the full CI build: https://circleci.com/gh/profunktor/fs2-rabbit/32
Thanks!
Answers:
username_1: Ugh, no, I haven't seen that. @username_2 any ideas?
username_0: The same happens with `sbt-1.3.0-RC2`. /cc @username_2
username_2: Does this happen on both JDK 8 and 11?
Do we know who's doing the overriding?
username_0: I only tried with JDK 8.
username_2: I wonder if this is to do with Scala version mismatch. Scala library is designed for binary compatibility, but Scala compiler is not.
username_0: That's a good question. Hopefully Rob can provide more insights.
As a side note, this happened sometime ago with `tut` and `scribe` but until this day the cause remains a mystery... https://github.com/outr/scribe/issues/80
username_1: It's not me, it's something in the compiler. I'm at Scala Days so I'm not going to have a chance to look at this for a while. The workaround is to just run your docs in 2.12 … everyone using 2.13 is cross-compiling anyway so it should be ok for most people for now, I hope.
username_0: No problems, enjoy the conference!
username_0: Mmm I'm using Scala 2.12.8, not sure I understand this.
```
[info] Running tut.TutMain /home/circleci/repo/site/src/main/tut /home/circleci/repo/site/target/scala-2.12/ ....
```
Status: Issue closed
username_0: Thanks @username_2 ! I can confirm this problem is solved with the latest `tut` and `sbt`. |
elastic/elasticsearch-net | 681615046 | Title: Compatibility Upgrade while using NEST
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Hello, we are a big NEST user, however since NEST only has a compatibility between a single ElasticSearch version we do wonder how to safely upgrade our ElasticSearch cluster **and** NEST.
Currently it's more like a doc question on how to do things or a wish for better compat for NEST.
What exactly would be the solution to upgrade ElasticSearch while using NEST?
Our steps would be:
1. upgrade ElasticSearch 6.8 to 7.X via rolling upgrade
2. hope that NEST 6.8 somehow works against 7.X? (it doesn't)
3. upgrade NEST to 7.X
4. reindex the ElasticSearch index
However, since NEST 6.8 does not work against 7.X, how exactly would an upgrade look? Is it better to upgrade NEST to 7 and use it against Elasticsearch 6.8?
**Describe the solution you'd like**
Better Compatibility or at least some guide on how to upgrade ElasticSearch **with** NEST.
Answers:
username_1: Hi @username_0
It's somewhat of a chicken and egg scenario at the moment; the client major versions implement changes in line with changes to the Elasticsearch REST API layer in major versions. We've had previous discussions about maintaining some form of compatibility in the client, but have erred on not doing so. There are [plans to have a better major version compatibility story in going from 7.x to 8.x and beyond](https://github.com/elastic/elasticsearch/issues/51816), so it is something we're actively looking at improving going forward.
Have you seen [the blog post about using our CI versioned packages in conjunction with the nuget released package, to allow you to upgrade the cluster?](https://www.elastic.co/blog/nest-and-elasticsearch-net-upgrading-your-codebase). In using the versioned packages, you can use two different versions of the client within the same application, to allow you to update the application first, and then the cluster. You would likely need to
1. Write some wrapper/adapter around them as the types will be different in versioned packages compared to the nuget package.
2. Have some way of switching the application over from using 6x to 7x e.g. App config setting
3. Removing the adapter and old version once Elasticsearch is upgraded.
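A sketch of what step 1 might look like; the type and namespace names are placeholders, since the CI-versioned packages expose differently named types than the `NEST` NuGet package:
```csharp
// One interface in front of both client versions, selected via app config.
public interface ISearchClient
{
    bool IndexDocument<T>(T document) where T : class;
}

public class NestV7Client : ISearchClient
{
    private readonly Nest.ElasticClient _client; // 7.x NuGet package

    public NestV7Client(Nest.ElasticClient client) => _client = client;

    public bool IndexDocument<T>(T document) where T : class =>
        _client.IndexDocument(document).IsValid;
}

// A NestV6Client wrapping the versioned 6.x package implements the same
// interface; remove the adapter and old package once the cluster is upgraded.
```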
Status: Issue closed
|
rails/sprockets | 79255970 | Title: Improve performance on server with multiple cores or multiple cpus
Question:
username_0: My Rails project has a lot of asset files. It takes almost 7 minutes to compile assets. I used `htop` to see how much of the machine the asset-compiling process uses: it maxes out only one CPU at 100%, while the other CPUs are almost idle.
Answers:
username_1: You think that's bad? This situation is exacerbated for @discourse, which can't really do anything about it other than rewrite the Rails asset pipeline; it takes a good 20 minutes to generate assets on our 4-core Xeon server.
username_2: We need to parallelize here, but that's a long way off; it will have to wait until Sprockets 4 stabilizes. Going to close this for now.
Status: Issue closed
|
PaddlePaddle/Paddle | 279631156 | Title: Cluster training always stuck at the beginning
Question:
username_0: I started a 10-node job with 10 pservers and 10 trainers. Using the [DeepFM model](https://github.com/PaddlePaddle/models/tree/develop/deep_fm), the job always gets stuck before the trainer can run batches.
pserver log:
```
commandline: ./paddle_pserver2 --num_gradient_servers=10 --nics=xgbe0 --port=7164 --ports_num=1 --ports_num_for_sparse=1 --rdma_tcp=tcp --comment=paddle_cluster_job
W1206 12:25:42.604581 29267 CpuId.h:112] PaddlePaddle wasn't compiled to use avx instructions, but these are available on your machine and could speed up CPU computations via CMAKE .. -DWITH_AVX=ON
I1206 12:25:42.605259 29267 ParameterServerController.cpp:83] number of parameterServer instances: 2
I1206 12:25:42.605267 29267 ParameterServerController.cpp:87] Starting parameterServer[0]
I1206 12:25:42.605304 29267 ParameterServerController.cpp:87] Starting parameterServer[1]
I1206 12:25:42.605325 29267 ParameterServerController.cpp:96] Waiting parameterServer[0]
I1206 12:25:42.605643 29288 LightNetwork.cpp:273] tcp server start
I1206 12:25:42.605721 29287 LightNetwork.cpp:273] tcp server start
I1206 12:36:39.935209 23676 LightNetwork.cpp:326] worker started, peer = 10.104.100.14
I1206 12:36:39.960119 23677 LightNetwork.cpp:326] worker started, peer = 10.104.100.14
I1206 12:40:23.655452 4484 LightNetwork.cpp:326] worker started, peer = 10.102.196.43
I1206 12:40:23.683399 4485 LightNetwork.cpp:326] worker started, peer = 10.102.196.43
I1206 12:40:25.714017 4485 ParameterServer2.cpp:256] pserver: setParameter
I1206 12:40:25.714077 4485 ParameterServer2.cpp:302] pserver: new cpuvector: size=1196032
I1206 12:40:25.737254 4484 ParameterServer2.cpp:256] pserver: setParameter
I1206 12:40:25.743959 4484 ParameterServer2.cpp:302] pserver: new cpuvector: size=30000
I1206 12:40:25.766773 4686 LightNetwork.cpp:326] worker started, peer = 10.102.196.43
I1206 12:40:25.810925 4688 LightNetwork.cpp:326] worker started, peer = 10.102.196.43
I1206 12:40:26.050133 23677 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:48:30.094723 8188 LightNetwork.cpp:326] worker started, peer = 10.102.196.37
I1206 12:48:30.119384 8189 LightNetwork.cpp:326] worker started, peer = 10.102.196.37
I1206 12:48:32.120713 8189 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:50:31.704797 18232 LightNetwork.cpp:326] worker started, peer = 10.102.196.39
I1206 12:50:31.731204 18233 LightNetwork.cpp:326] worker started, peer = 10.102.196.39
I1206 12:50:33.527307 18356 LightNetwork.cpp:326] worker started, peer = 10.102.196.41
I1206 12:50:33.564779 18363 LightNetwork.cpp:326] worker started, peer = 10.102.196.41
I1206 12:50:33.732800 18233 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:50:35.555511 18363 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:51:13.538336 21095 LightNetwork.cpp:326] worker started, peer = 10.104.100.12
I1206 12:51:13.562628 21097 LightNetwork.cpp:326] worker started, peer = 10.104.100.12
I1206 12:51:15.564131 21097 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:51:33.749547 22697 LightNetwork.cpp:326] worker started, peer = 10.102.196.44
I1206 12:51:33.776844 22701 LightNetwork.cpp:326] worker started, peer = 10.102.196.44
I1206 12:51:35.778558 22701 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:55:58.762526 4722 LightNetwork.cpp:326] worker started, peer = 10.102.196.38
I1206 12:55:58.790271 4723 LightNetwork.cpp:326] worker started, peer = 10.102.196.38
I1206 12:56:00.792819 4723 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:56:01.233184 4860 LightNetwork.cpp:326] worker started, peer = 10.102.196.42
I1206 12:56:01.260675 4861 LightNetwork.cpp:326] worker started, peer = 10.102.196.42
I1206 12:56:03.409593 4861 ParameterServer2.cpp:564] pserver: getParameter
I1206 12:57:06.424779 9664 LightNetwork.cpp:326] worker started, peer = 10.104.100.11
```
trainer log:
```
./paddle_trainer --num_gradient_servers=10 --trainer_id=5 --pservers=10.102.196.43,10.102.196.42,10.102.196.41,10.104.100.12,10.102.196.37,10.104.100.11,10.102.196.44,10.102.196.38,10.104.100.14,10.102.196.39 --rdma_tcp=tcp --nics=xgbe0 --port=7164 --ports_num=1 --local=0 --job=train --dot_period=10 --saving_period=1 --log_period=20 --trainer_count=1 --num_passes=1 --ports_num_for_sparse=1 --config=conf/trainer_config.conf --save_dir=./output --use_gpu=0
W1206 12:57:04.976786 9582 CpuId.h:112] PaddlePaddle wasn't compiled to use avx instructions, but these are available on your machine and could speed up CPU computations via CMAKE .. -DWITH_AVX=ON
I1206 12:57:05.292944 9582 Trainer.cpp:166] trainer mode: SgdSparseCpuTraining
I1206 12:57:05.292971 9582 TrainerInternal.cpp:239] Sgd sparse training can not work with ConcurrentRemoteParameterUpdater, automatically reset --use_old_updater=true
I1206 12:57:05.498909 9582 PyDataProvider2.cpp:243] loading dataprovider dataprovider::process_deep
I1206 12:57:05.553319 9582 GradientMachine.cpp:94] Initing parameters..
I1206 12:57:06.414937 9582 GradientMachine.cpp:101] Init parameters done.
I1206 12:57:06.415112 9621 ParameterClient2.cpp:113] pserver 0 10.102.196.43:7165
I1206 12:57:06.415400 9621 ParameterClient2.cpp:113] pserver 1 10.102.196.42:7165
I1206 12:57:06.415513 9621 ParameterClient2.cpp:113] pserver 2 10.102.196.41:7165
I1206 12:57:06.415627 9621 ParameterClient2.cpp:113] pserver 3 10.104.100.12:7165
I1206 12:57:06.415748 9621 ParameterClient2.cpp:113] pserver 4 10.102.196.37:7165
I1206 12:57:06.415866 9621 ParameterClient2.cpp:113] pserver 5 10.104.100.11:7165
I1206 12:57:06.415930 9621 ParameterClient2.cpp:113] pserver 6 10.102.196.44:7165
I1206 12:57:06.416048 9621 ParameterClient2.cpp:113] pserver 7 10.102.196.38:7165
I1206 12:57:06.416160 9621 ParameterClient2.cpp:113] pserver 8 10.104.100.14:7165
I1206 12:57:06.416251 9621 ParameterClient2.cpp:113] pserver 9 10.102.196.39:7165
I1206 12:57:06.439986 9582 ParameterClient2.cpp:113] pserver 0 10.102.196.43:7164
I1206 12:57:06.440148 9582 ParameterClient2.cpp:113] pserver 1 10.102.196.42:7164
I1206 12:57:06.440269 9582 ParameterClient2.cpp:113] pserver 2 10.102.196.41:7164
I1206 12:57:06.440351 9582 ParameterClient2.cpp:113] pserver 3 10.104.100.12:7164
I1206 12:57:06.440423 9582 ParameterClient2.cpp:113] pserver 4 10.102.196.37:7164
I1206 12:57:06.440512 9582 ParameterClient2.cpp:113] pserver 5 10.104.100.11:7164
I1206 12:57:06.440629 9582 ParameterClient2.cpp:113] pserver 6 10.102.196.44:7164
I1206 12:57:06.440716 9582 ParameterClient2.cpp:113] pserver 7 10.102.196.38:7164
I1206 12:57:06.440809 9582 ParameterClient2.cpp:113] pserver 8 10.104.100.14:7164
I1206 12:57:06.440871 9582 ParameterClient2.cpp:113] pserver 9 10.102.196.39:7164
```
Answers:
username_0: Found the cause: I set batch_size=1024, which causes this; setting it to 512 solves the problem. This must be a bug where a large batch_size can cause remote communication to hang.
Status: Issue closed
|
Open-Source-Studio-at-ITP/Final-Projects | 389801512 | Title: Diasporadical Radio
Question:
username_0: This is an issue for feedback and discussion on final project presentations and documentation. It will be closed after documentation is submitted and the semester has completed.
Answers:
username_0: @username_1 is there a link to your final presentation / write-up? I'm looking at: https://github.com/username_1/radio Thanks!
username_1: Yes! My final write-up is here:
https://wp.nyu.edu/luna/2018/12/11/oss-final-presentation/
username_0: Thanks, added in #90 which can close this issue when merged!
Great job this semester on Diasporadical Radio! You worked very hard to solve a variety of difficult technical problems and it was wonderful to see a fully functioning version of the project at your final presentation! The mission and goals of your project were clearly stated and presented well. If you are to continue the project further I think a good next step would be to reach out to communities that might like to participate in expanding the dataset. User testing the interaction around adding new music may reveal missing features or points of confusion. A small thing re: project documentation - I might suggest renaming `How-to.md` to `CONTRIBUTING.md` and `Code-of-ethics` to `CODE_OF_CONDUCT.md`, which is a bit more standard and will be detected by GitHub for featuring (but perhaps you have reasons for your naming?) More importantly, however, a useful next step might be to separate the guide for contributing to the codebase from the one for contributing music. A video walk-through about adding new music would probably be useful for non-technical contributors. I hope you continue this work!
username_0: Merged #90 and closing, great work!
Status: Issue closed
|
pbs-assess/herringsr | 954058218 | Title: Decide which data to include for A10
Question:
username_0: Are we going to include the same data as the minor SARs? If so, do we want separate tables and figures for A10, or add new columns/panels for A10 in the existing ones?
Answers:
username_0: I am going to go ahead and add the same data we show for the minor SARs, and I will add it as separate/new figures and tables.
Status: Issue closed
username_0: Data is all there except catch (there is none from 1978 to now). |
chr15m/bugout | 604901919 | Title: Automatically JSON.stringify b.send
Question:
username_0: This is just a suggestion. There's probably some reason this doesn't make sense but: whenever I'm using `b.send` to send a non-string message I want to stringify it so I end up doing e.g.
`b.send(JSON.stringify([message_type,[cool_data,cooler_data]]));`
And then obviously I end up using `JSON.parse` at the receiving end.
Is there some reason `b.send` and `b.on('message',...` couldn't/shouldn't do this under the hood?
Answers:
username_1: @username_0 I wanted to make sure people could use whatever encoding they wanted - json or msgpack etc. but what you are saying also makes sense. What if there was a convenience method called `sendjson()`?
username_2: I just started using this, so take my opinion with a grain of salt - but I would personally opt for configuration or some sort of middleware over growing the API for convenience functions like that. By extension, if half of people like JSON and the other half like msgpack, does the API grow to have send, sendjson, and sendmsgpack? Seems cleaner to let the user specify their preferred serialize/deserialize functions when they instantiate the Bugout instance. This would keep the API small, while still allowing people to use whatever serialization they prefer. Personally, I suspect most people will want to use JSON, so even making JSON.stringify/JSON.parse the default functions would probably be fine.
username_0: Gotcha, didn't know about msgpack, I'm a js noob. I was going to say what about having the encoding as a parameter (defaulting to "JSON", if you're ok with assuming ES6) on send but then you also need to specify that for on("message",...). Maybe that's ok? Can one auto-detect JSON vs msgpack encoding?
username_0: Ryan - I missed your suggestion of config. The bugout instance remembering the preferred encoding sounds v reasonable to me - hopefully people don't mix JSON and msgpack in one app?
username_2: I doubt they would mix serializations (it would be unnecessarily complex), but even if they did they could use different Bugout instances (one for each serialization format). That, or it would be a config when defining the RPC (but at that point, you might as well just do the manual serialization/deserialization anyways...you could always write a wrapper function to act as middleware for that). At a minimum, it doesn't make sense to mix serialization formats for the same call (whether that is to send/message or rpc/register). Personally, I think doing it for the instance is sufficiently granular.
Regarding auto-detection: While you likely could automatically detect on receive (I don't think you can craft a msgpack that is also valid json and vice versa), it would probably be unnecessarily expensive, and it wouldn't help when sending. The pair of send/recieve (whether that is rpc/register or send/message) need to use the same serialization. If you just pass a JS object/array/etc, Bugout would have no idea how you want it serialized, so it couldn't determine automatically whether to send msgpack/json/anything else. Thus, you're better off knowing ahead of time what the convention is for that send/receive pair rather than trying to detect at runtime.
username_1: Settable encoder/decoder at instantiation time sounds good to me. Defaulting to JSON will require updating all of the examples. I don't like auto-detect at all as it is bound to frustrate people (computers are dumber than humans). What do we do with decoding errors? Same as what happens with decryption errors (silent drop)?
username_2: Personally I never like silent anything...it would be nice to have an event emitted when something goes wrong (and I can choose whether to set up a handler for that event or not). Even if all errors were sent to the same handler (for decrypt and decode), that would be nice. For simple cases like JSON, where we can expect users to use stringify/encode, it probably isn't a big deal. However, it could be useful when somebody does something more esoteric (say, with msgpack, if they have custom types that aren't available at both ends for whatever reason), having the error would make debugging easier.
username_1: You can debug at the moment using the debug logging facility, but emitting an error event is a great idea so that people can catch errors at runtime. Thank you.
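A hypothetical sketch of what that could look like; the `encode`/`decode` options and the "error" event are proposals here, not existing Bugout API:
```javascript
var b = new Bugout("example-room", {
  encode: JSON.stringify, // or msgpack.encode
  decode: JSON.parse      // or msgpack.decode
});

b.on("error", function (err) {
  // decode/decrypt failures surface here instead of being dropped silently
  console.error("bugout error:", err);
});
```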
username_1: @username_0 I've just realized that data is in fact already serialized and de-serialized to and from JSON format at both ends: https://github.com/username_1/bugout/blob/master/index.js#L170
I guess you are double-serializing in your own code with that extra JSON call.
username_0: Oh ha, good to know!
username_1: Closing this. If somebody really wants messagepack or other transport encoding, we can open a new ticket and add it.
Status: Issue closed
|
MicrosoftDocs/azure-docs | 665233248 | Title: Logic App returning error on HTTP Post
Question:
username_0: After following all the steps, the Logic App returns the following error on the HTTP POST:
BadRequest. The provided 'Http' action URI '"CliXml": "<Objs Version=\"1.1.0.1\" xmlns=\"http://schemas.microsoft.com/powershell/2004/04\">\r\n <S>https://e15a9a62-09b7-4632-a267-0d303ba9559a.webhook.wus2.azure-automation.net/webhooks?token=<PASSWORD>%<PASSWORD></S>\r\n</Objs>"' is not valid. The URI must be a well formed absolute URI not referencing local host or UNC path.
Is there any specific guidance here?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 24a6c4e2-702f-2b17-3e2d-c86f33b13beb
* Version Independent ID: 1e7811d8-668a-db98-34ca-eaf4d7ee8c1f
* Content: [Scale session hosts Azure Automation - Azure](https://docs.microsoft.com/en-us/azure/virtual-desktop/set-up-scaling-script)
* Content Source: [articles/virtual-desktop/set-up-scaling-script.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-desktop/set-up-scaling-script.md)
* Service: **virtual-desktop**
* GitHub Login: @Heidilohr
* Microsoft Alias: **helohr**
Status: Issue closed
Answers:
username_0: I've found a typo in my logic app. Apologies.
username_1: Thanks for the update. |
typelevel/general | 612101090 | Title: New project: case-insensitive
Question:
username_0: I spun off a [case-insensitive string microlibrary](https://github.com/username_0/case-insensitive/) from http4s. I think Typelevel would be a better long-term home for it. If there is any interest, I'll transfer the GitHub repo and move the package to `org.typelevel`.
Answers:
username_1: We definitely need something like this 👍
username_2: Seems that there is consensus on this. You know the drill (PR to the website, transfer the repo) :wink:
Status: Issue closed
|
vueComponent/ant-design-vue | 1078094935 | Title: prefixCls changes the class name prefix, but it does not take effect for the Rate component
Question:
username_0: - [ ] I have searched the [issues](https://github.com/vueComponent/ant-design-vue/issues) of this repository and believe that this is not a duplicate.
### Version
3.0.0-alpha.14
### Environment
3.0.0-alpha.14
### Reproduction link
[https://github.com/vueComponent/ant-design-vue](https://github.com/vueComponent/ant-design-vue)
### Steps to reproduce
Modify the class name prefix via prefixCls
### What is expected?
The Rate class name should be modified along with the rest
### What is actually happening?
The Rate class name is not modified; currently the other components work correctly
<!-- generated by issue-helper. DO NOT REMOVE --><issue_closed>
Status: Issue closed |
haskell/haskell-language-server | 797975133 | Title: No ghc 8.8.3 support in the haskell-language-server package in nixpkgs
Question:
username_0: <!--
If you encounter a bug or you have a support question, please try to fill out some of the information below.
Generally speaking, the information below is meant to help debugging issues but is no prerequisite for opening an issue.
-->
### Your environment
Output of `haskell-language-server --probe-tools` or `haskell-language-server-wrapper --probe-tools`:
```sh
haskell-language-server version: 0.8.0.0 (GHC: 8.10.3) (PATH: /nix/store/3c6i9fmv0s93q2jymvxd37nb3y2f8bzg-haskell-language-server-0.8.0.0/bin/haskell-language-server-wrapper)
Tool versions found on the $PATH
cabal: 3.0.0.0
stack: 2.5.1.1
ghc: 8.8.3
```
Which lsp-client do you use: Neovim
Describe your project (alternative: link to the project): package.yaml
Contents of `hie.yaml`:
```yaml
cradle:
cabal:
- path: "./src"
component: "lib:mwb"
- path: "./test"
component: "test:test"
- path: "./app"
component: "exe:mwb"
- path: "./one-off-task"
component: "exe:one-off-tasks"
```
I am getting an error that ghcide was compiled with 8.8.4 when my project uses 8.8.3. This is to be expected, but I am wondering if the pre-compiled binaries are missing 8.8.3 support?
running `haskell-language-server-wrapper` shows this:
```sh
Project GHC version: 8.8.3
haskell-language-server exe candidates: ["haskell-language-server-8.8.3","haskell-language-server-8.8","haskell-language-server"]
Launching haskell-language-server exe at:/etc/profiles/per-user/jonathanl/bin/haskell-language-server-8.8
haskell-language-server version: 0.8.0.0 (GHC: 8.8.4) (PATH: /nix/store/c8709i89qymkcpi6c2b1s3nlmj72b8gz-haskell-language-server-0.8.0.0/bin/haskell-language-server)
```
Answers:
username_1: There is support for ghc-8.8.3 in the official release artifacts set: https://github.com/haskell/haskell-language-server/releases/tag/0.9.0
Not sure why it can't be available for nix though. @fendor and @pepeiborra are using nix IIRC and maybe can help better than me
username_2: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/tools/haskell/haskell-language-server/withWrapper.nix
This nixpkgs repo file lists the supported GHC versions.
You can change the `supportedGhcVersions` argument to whatever you want to make it built with different versions of GHC.
username_0: We could add more defaults, or add a section to the README explaining how to add support for different compiler versions. I can do either!
username_0: I think I might have also found another related issue. I _think_ you can only have 1 minor version for every major ghc release. When I do this overlay hls fails to build:
```nix
(self: super: {
haskell-language-server =
super.haskell-language-server.override {
supportedGhcVersions = [
"865"
"883"
"884"
"8102"
];
};
})
```
with this error:
```sh
error: --- Error ---------------------------------------------------------------------------------- nix
builder for '/nix/store/bvbz959l8jvh0j264b9id1z9hyxzs8sg-haskell-language-server-0.8.0.0.drv' failed with exit code 1; last 1 log lines:
ln: failed to create symbolic link '/nix/store/811v2z81is5h02cc8066v0ja338viysj-haskell-language-server-0.8.0.0/bin/haskell-language-server-8.8': File exists
```
but when I comment out one of the `88x` versions everything builds just fine.
username_2: Yes, since when you install both "883" and "884", it will try to install `haskell-language-server-8.8` twice, as both of them are under minor version `8.8`. I don't know what kind of *preference* should be given for such a case though...
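A sketch of the overlay above adjusted to that constraint, keeping at most one minor version per major series (version strings assumed):
```nix
(self: super: {
  haskell-language-server = super.haskell-language-server.override {
    # only one entry per x.y series, so the haskell-language-server-8.8
    # symlink is created exactly once
    supportedGhcVersions = [ "865" "884" "8102" ];
  };
})
```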
username_3: Yep, it is! We should have an issue for this on nixpkgs. Mention me, as I am the author of that expression.
I think I ignored the problem because I couldn't think of a good solution. But I think the best solution is to let the newest version win.
Is there anything else to be fixed here? If not I suggest migrating the issue to nixpkgs and closing it here?
username_3: The reason is simply that I didn't think a lot of people use outdated minor versions. But if they do, they have already opted into building a lot of stuff themselves. Every version we add by default will be built on Hydra and increase the closure of the hls package, which is a lot of unnecessary ballast for most users. So that's the compromise we settled on. Should we reconsider it?
username_1: It seems there is nothing left to do here on the hls side; feel free to reopen otherwise.
Status: Issue closed
|
avniproject/JSSCP | 607258529 | Title: [JSSCP] Child PNC
Question:
username_0: https://docs.google.com/spreadsheets/d/1FFMRLGWfnRlZ6jxIft7cBHOjdMdhcKLbkomaUdrE1a8/edit#gid=1896792459
Answers:
username_1: QA Notes -
- Question - "TIme of Breastfeeding", we are taking only time but what if Breastfeeding was not done on the same day as birth? We should take date also. Check once.
- Question - "How was breastfeeding", in logic it says show counselling point but we do not have any counselling points and also when to show counselling points.
- Question - "Temperature of baby", Unit is (F) but everywhere else we have taken unit Celsius. check once. I guess they use electronic one and it gives the temperature in celsius.
- And in temperature, we do not have option to say "Don't know".
- "General counselling points" are not showing at the end.
- Also, there are some counselling points given with the question.
Status: Issue closed
|
angcyo/DslTabLayout | 1071813027 | Title: configTabLayoutConfig#onSelectItemView gets the wrong index
Question:
username_0: 麻烦有时间看一下。在回调里面通过DslTabLayout 获取索引获取的是不对的,但是回调的值 `index`是对的。
代码如下,获取索引不对:
```kotlin
desLayout.configTabLayoutConfig {
    onSelectItemView = { _, index, select, _ ->
        Log.d("desLayout", "titles2DslTabLayoutChildren: " + select + "," + index)
        if (select) {
            // The index obtained here is wrong.
            val currentItemIndex = dslLayout.currentItemIndex
        }
        false
    }
}
```
Answers:
username_1: This callback is only triggered when an `ItemView` needs to be selected.
You probably need the `onSelectViewChange` callback instead.
Status: Issue closed
|
appbaseio/appbase-js | 109299589 | Title: ReferenceError on using bulk() method
Question:
username_0: This is similar to the https://github.com/appbaseio/appbase-js/issues/2 issue.
```js
appbaseObj.bulk({
type: "tweet",
body: [
// action#1 description
{ index: { _id: 2 } },
// the JSON data to index
{ "msg": "writing my second tweet!",
"by": "Ev",
"using": ["appbase.io", "javascript", "streams"],
"test": true
},
// action#2 description
{ delete: { _id: 2 } },
// deletion doesn't need any further input
]
}).on('data', function(res) {
console.log("successful bulk: ", res);
}).on('error', function(err) {
console.log("bulk failed: ", err);
})
```
Stacktrace:
```
Uncaught ReferenceError: id is not defined(…)
bulkService @ appbase.js:15
bulk @ appbase.js:436
(anonymous function) @ VM989:2
InjectedScript._evaluateOn @ VM931:904
InjectedScript._evaluateAndWrap @ VM931:837
InjectedScript.evaluate @ VM931:693
```<issue_closed>
Status: Issue closed |
r-lib/waldo | 601419068 | Title: Expose ignore_environments?
Question:
username_0: Just since `all.equal()` does; this would provide a simple out for many failures
Answers:
username_0: Hmmm, `all.equal()` does compare environments so I need to do a bit more investigation here.
username_0: ```
`actual` (opm(y ~ x, d, n.samp = 10)) not equal to `expected` (`o`).
`attr(actual$terms, '.Environment')` is <env:0x558fa8099768>
`attr(expected$terms, '.Environment')` is <env:0x558fa6f34da0>`
`deparse(actual$terms)` equals `deparse(expected$terms)`, but AST non-identical
```
```
`actual` (factorFormula(...)) not equal to `expected` (death ~ ilogit((treat == 0) + (angio == 2))).
`attr(actual, '.Environment')` is <env:0x55c121178638>
`attr(expected, '.Environment')` is <env:0x55c1210805c0>`
`deparse(actual)` equals `deparse(expected)`, but AST non-identical
```
```
`actual` (result$UnStandardize) not equal to `expected` (Standardize(data[, 1])$UnStandardize).
`environment(actual)` is <env:0x55fc857d3a20>
`environment(expected)` is <env:0x55fc85c89470>`
```
username_0: Minimal reprex:
``` r
mod1 <- lm(mpg ~ wt, data = mtcars)
mod2 <- local(lm(mpg ~ wt, data = mtcars))
all.equal(mod1, mod2)
#> [1] TRUE
waldo::compare(mod1, mod2)
#> `attr(x$terms, '.Environment')` is <env:global>
#> `attr(y$terms, '.Environment')` is <env:0x7fa408822a90>`
#>
#> `deparse(x$terms)` equals `deparse(y$terms)`, but AST non-identical
#>
#> `attr(attr(x$model, 'terms'), '.Environment')` is <env:global>
#> `attr(attr(y$model, 'terms'), '.Environment')` is <env:0x7fa408822a90>`
#>
#> `deparse(attr(x$model, 'terms'))` equals `deparse(attr(y$model, 'terms'))`, but AST non-identical
```
<sup>Created on 2020-04-18 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)</sup>
username_0: ``` r
x <- lm(mpg ~ wt, data = mtcars)
y <- local(lm(mpg ~ wt, data = mtcars))
waldo::compare(x, y)
#> `attr(x$terms, '.Environment')` is <env:global>
#> `attr(y$terms, '.Environment')` is <env:0x7ff5cd72de00>`
#>
#> `attr(attr(x$model, 'terms'), '.Environment')` is <env:global>
#> `attr(attr(y$model, 'terms'), '.Environment')` is <env:0x7ff5cd72de00>`
```
<sup>Created on 2020-04-18 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)</sup>
username_0: I think the hardest part is going to be figuring out a name, because it will ignore both function and formula environments. OTOH, given that we already have an internal `ignore_function_env`, maybe it would be fine to add `ignore_formula_env`?
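For illustration, usage of such an argument might look like this (hypothetical at the time of this discussion):
```r
waldo::compare(x, y, ignore_formula_env = TRUE)
```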
Status: Issue closed
|
PomeloProductions/easy-products | 207297450 | Title: Add Reorder admin function
Question:
username_0: The admin needs to have an AJAX function built in that will allow us to run an action to reorder all products. This will entail receiving an array of product ids and then setting that as the main order for the entire list of products.<issue_closed>
Status: Issue closed |
DevExpress/testcafe-hammerhead | 139601672 | Title: Health Monitor - js errors on the https://www.fiverr.com/ site
Question:
username_0: @inikulin I found out that the site works heavily with prototypes (`Document`, `XMLHttpRequest`...). We cannot fix this inexpensively at the moment. I propose to wait for another site with a similar problem.
Status: Issue closed
|
SCIInstitute/SCIRun | 426151133 | Title: Regular internal build channel
Question:
username_0: Let's formalize the private build process. @SCIInstitute/butson-lab has a private Github repo. I post builds to Slack channels occasionally (which disappear very quickly nowadays). There are also SCI server locations that are web-accessible. What is everyone's favorite? @SCIInstitute/cibc-users @SCIInstitute/ceg_team @SCIInstitute/scirun-users
Answers:
username_0: Related: #1795 will help streamline the release build process.
username_1: Hey Dan,
I am partial to 'ceg_team'. Thanks for doing this.
Cheers,
Wilson
username_0: Oops, I was just tagging those groups of users to respond; those aren't poll choices listed there. Please indicate where the binaries should be placed (Slack, GitHub, SCI server), and how often builds should come (weekly, monthly, etc.)
username_2: Hi Dan,
This is awesome and I would greatly appreciate this.
As for specifics, is there a way to create a private repo under the SCI Institute organization? Then give access to that repo as needed? I would love to stay on Github considering viewing release notes, etc.
As for timing, I would leave that up to you. I am not an expert in that but would be happy to update as frequently as possible.
Thanks,
Brian
>
username_0: The SCI github org is close to its limit of private repos right now (4 remaining). If I grab one just for binary releases with a certain GPLed meshing library, that might be a bit greedy. I don't control the distribution of those either. Although we did go from 10 to 20 at some point, so perhaps those 4 are just there for the taking.
username_0: @jessdtate informs me we got an educational license, so we have as many private repos as we want. So no greed applicable.
username_2: Hi Dan,
Great! Let’s do that then! Are there any objections from the gallery?
Brian
>
username_0: Let's finalize this at tomorrow's sim-est.
username_0: PEOPLE LOVE IT. I will close this issue
Status: Issue closed
|
tracking-exposed/experiments-data | 274888900 | Title: Keywords Analysis
Question:
username_0: The first step in getting deeper into the data is to look at some specific keywords and understand their behaviour over the entire period of the experiment.
Has that specific keyword recorded a constantly ascending/descending/stable trend? Or has something influenced its trend, making it rapidly ascend or descend?
Looking for the reason behind those changes can help us understand the behaviour of the algorithm: it shows where the main cause lies, whether inside the social network or outside, and in the latter case how the algorithm reacts to inputs from outside.
To get even deeper into the analysis, it is also possible to study the terms used by sources to describe and report a specific event. This kind of study can show how a specific word, with a fixed meaning, can be preferred to others, enabling us to determine how the algorithm filters some terms, itself deciding the impact of the event on the community.
The aim right here is the visualization of a semantic trend over the period of the experiment. How can we represent the evolution of the use of a term in both a qualitative and a quantitative way?
A sentiment analysis is also provided through dandelion.eu
Answers:
username_1: The script `filtermerge.py` works in this way
- the variable `lookFor` has to match a `label` from the semantics file
- all the posts which contain that variable are selected and extended with their semantic values
- all the posts are saved to a file; the name defaults to **merged.json** but can be specified on the command line ( `python filtermerge.py outputname.json` creates **outputname.json** )
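A rough Python sketch of the behaviour described above; the field names (`label`, `postId`, `id`) and input file names are assumptions, not taken from the actual script:
```python
import json
import sys

lookFor = "some-label"  # has to match a `label` in the semantics file

with open("semantics.json") as f:
    semantics = json.load(f)
with open("posts.json") as f:
    posts = json.load(f)

# index the semantic entries that carry the label we are looking for
by_post = {s["postId"]: s for s in semantics if s.get("label") == lookFor}

# keep only matching posts and extend each with its semantic values
merged = [dict(post, semantics=by_post[post["id"]])
          for post in posts if post["id"] in by_post]

outname = sys.argv[1] if len(sys.argv) > 1 else "merged.json"
with open(outname, "w") as f:
    json.dump(merged, f)
```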
username_0: 
username_0: To see the trend, we analyse the distribution of the terms over the period of the experiment. From a quantitative measure such as the 'number of records per day' it is possible to see both the increasing trend of the records and the changes in the terms used to report the story.












username_1: nice, well done @username_0, three things:
1. It is probably better to remove the "Number of Records" gradient on the right and, with Tableau, write the actual number in a small font inside the boxes.
2. Why do the tables not all cover the 16 days?
3. Probably we should try to observe whether any pattern arises when considering only the top 10-15 most productive sources, because I don't get any story from that. (Maybe there is no story at all, but...)
username_0: The distribution of terms across the days allows us to see how the trend can change due to a specific event. In particular, here we focus on 17-10-17, the day when Maldonado's body was found.
These are the most used terms to report the news during the days just before the discovery, when few sources were publishing stories about Maldonado.

To better understand, here we have the distribution of feeds:

And the distribution of sources
 |
rancher/dashboard | 712425341 | Title: Fleet issues
Question:
username_0: - [ ] Typing in Key/Value gives error in console; vue.runtime.esm.js:1888 TypeError: Cannot read property 'byId' of undefined
- [ ] Cluster Group selector is saving but not showing up in Edit or on details
- [ ] Cluster group menus not working even though it says it's active (happens after first creation?)
- [ ] All clusters showing in cluster group details
- [ ] Check and fix ui issues on this screen (tani later)

- [ ] Repo create - Can't deploy to cluster group
- [ ] Repo create - Advanced doesn't work
- [ ] Repo - Deploying to a single cluster not working because the label changed from name to cluster-name
- [ ] Repo - If the path is incorrect, or the repo is empty or not found, we don't get an error
- [ ] Repo - "/" at beginning of path is bad. (need fix from Darren)
Answers:
username_0: Fixed
Status: Issue closed
|
uber/AutoDispose | 393753362 | Title: Lint Check with RxBinding Conflict
Question:
username_0: I don't require the lint check.
How do I disable it?
Answers:
username_1: There technically isn't any conflict since RxBinding also exposes Observables whose subscriptions should be handled safely.
If you're already handling those subscriptions by capturing a `Disposable` like:
```kotlin
fun bindViews() {
val disposable = RxView.clicks(button).subscribe()
}
```
consider enabling `lenient` mode for AutoDispose by adding in your `gradle.properties` file.
```
autodispose.lenient=true
```
If you want to disable lint fully, there are a few options as prescribed in the [Android documentation](https://developer.android.com/studio/write/lint#gradle). You can choose to disable certain checks also.
username_2: Yep, if you feel it's wrong then please file a bug with an example code that's wrong.
username_0: thank you
Status: Issue closed
|
JuliaGPU/CUDA.jl | 625417577 | Title: Pitched pointers
Question:
username_0: Further to discussion about porting `CUFFT.jl` to use `CUDAdrv` (JuliaGPU/CUFFT.jl#12), it would be useful to provide support for CUDA pitched pointers, in addition to standard memory types.
How might such an implementation best be approached, given longer-term plans for this package (e.g., is there still a plan to move to the `CuArray` implementation)?
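For context, a minimal CUDA C sketch of what a pitched allocation means at the runtime-API level; how best to expose this on the Julia side is exactly the open question:
```c
#include <cuda_runtime.h>

int main(void) {
    size_t width = 500, height = 100, pitch;
    float *devPtr;

    // Each row is padded out to `pitch` bytes so that row starts stay aligned.
    cudaMallocPitch((void **)&devPtr, &pitch, width * sizeof(float), height);

    // Row i begins at byte offset i * pitch, not i * width * sizeof(float).
    size_t i = 3;
    float *row_i = (float *)((char *)devPtr + i * pitch);
    (void)row_i;

    cudaFree(devPtr);
    return 0;
}
```
|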
typeorm/typeorm | 759510563 | Title: cascade: ["insert"] duplicates the entry if it already exists
Question:
username_0: ## Issue Description
_**DISCLAIMER:** I am not completely sure if this is a bug or a feature.
If the issue described is a feature, I would appreciate any help to achieve the behavior I am willing to achieve (which is a very common use case, IMO).
If it is a bug, well - then I know that I need to find a workaround._
Let `A` and `B` be two tables/repositories in the same database, with a many-to-one relationship from A to B.
Let the many-to-one relationship have the option `{ cascade: ["insert"] }`, so that whenever an entity
is inserted in table `A`, the corresponding entity is inserted in table `B` if it does not exist.
(Find a much more detailed explanation in the Readme file of the reproducer: https://gitlab.com/username_0/typeorm-cascade-issue)
### Expected Behavior
If I insert an entity in table `A`, and the corresponding entity already
exists in table `B`, no new entry is created in table `B`.
### Actual Behavior
If I insert an entity in table `A`, and the corresponding entity already
exists in table `B`, the entry in table `B` is duplicated.
### Steps to Reproduce
1. Download the repository
```shell
git clone https://gitlab.com/username_0/typeorm-cascade-issue.git
```
2. Install the dependencies
```shell
yarn install # alternatively, npm install
```
3. Run tests
```shell
yarn test # alternatively, npm run test
```
**Expected result:** The test is green
**Actual result:** The test is red
### My Environment
| Dependency | Version |
| --- | --- |
| Operating System | 5.9.11-3-MANJARO |
| Node.js version | v15.0.1 |
| Typescript version | 4.1.2 |
| TypeORM version | 0.2.29 |
| yarn version | 1.22.10 |
| npm version | 6.14.9 |
### Additional Context
The "issue" is 100% reproducible using the example provided.
### Relevant Database Driver(s)
- [ ] `aurora-data-api`
- [ ] `aurora-data-api-pg`
- [ ] `better-sqlite3`
- [ ] `cockroachdb`
- [ ] `cordova`
- [ ] `expo`
- [ ] `mongodb`
- [ ] `mysql`
- [ ] `nativescript`
- [ ] `oracle`
- [ ] `postgres`
- [ ] `react-native`
- [ ] `sap`
- [x] `sqlite`
- [ ] `sqlite-abstract`
- [ ] `sqljs`
- [ ] `sqlserver`
### Are you willing to resolve this issue by submitting a Pull Request?
- [ ] Yes, I have the time, and I know how to start.
- [ ] Yes, I have the time, but I don't know how to start. I would need guidance.
- [ ] No, I don't have the time, although I believe I could do it if I had the time...
- [x] No, I don't have the time and I wouldn't even know how to start.
Answers:
username_1: Most likely it can only use the existing entity in `B` if you provide its primary keys in the object. If you are not providing the primary keys, it would be equally misleading for it to assume that, just because some other columns are now duplicated, it should use the existing value.
While I'm not sure it would have made a difference in your example it's also worth noting that your two `favouriteColor` values are in fact two distinct objects, so even if their IDs were updated after the first `save()` it wouldn't get passed on to the second `save()`.
Probably the best solution would be to save a `color` first, then reference the object (with its now updated primary columns) in the `user`s.
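A sketch of that flow, with entity constructors assumed from the reproducer:
```typescript
async function seed() {
  // Save the Color first so its primary key is populated...
  const red = await colorRepository.save(new Color("red"));

  // ...then reference the same saved instance from every User.
  await userRepository.save(new User("Alice", red));
  await userRepository.save(new User("Bob", red)); // reuses the existing Color row
}
```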
username_0: Hi @username_1 ! That is indeed the answer!
The problem was: if `color` is not the primary key, how should TypeORM know that we are referencing the same color? It simply cannot tell, hence it produces a new entry (which I wrongly called a "duplicate").
The solution we are working on right now is using the unique value (in this case, `color`) as a primary key. Our example is quite more convoluted than the example, and we have to see if it works correctly. I will update the thread once we have a working solution at hand, but feel free to close it already if you feel like doing so :) |
vercel/next.js | 668076629 | Title: Next export site Javascript not working at all
Question:
username_0: # Bug report
## Describe the bug
I am building a statically exported site with GraphQL hydrating the page at build time. The data come in fine, but after export the entire site loads with no working JavaScript: carousels, countdowns, and popups are all non-responsive.
I'm wondering if there's anything I'm missing for a static exported site; I checked the docs and followed every step.
## To Reproduce
Logs on Netlify on deploy:
```
2:24:07 PM: Build ready to start
2:24:12 PM: build-image version: ca811f47d4c1cbd1812d1eb6ecb0c977e86d1a1d
2:24:12 PM: build-image tag: v3.3.20
2:24:12 PM: buildbot version: be8ecf2af866e16fa4301cc5c14de2ccbbb21cf4
2:24:12 PM: Fetching cached dependencies
2:24:12 PM: Starting to download cache of 95.0MB
2:24:13 PM: Finished downloading cache in 848.069378ms
2:24:13 PM: Starting to extract cache
2:24:17 PM: Finished extracting cache in 4.248923864s
2:24:17 PM: Finished fetching cache in 5.123868348s
2:24:17 PM: Starting to prepare the repo for build
2:24:17 PM: Preparing Git Reference refs/heads/master
2:24:19 PM: Starting build script
2:24:19 PM: Installing dependencies
2:24:19 PM: Python version set to 2.7
2:24:19 PM: Started restoring cached node version
2:24:22 PM: Finished restoring cached node version
2:24:23 PM: v12.18.0 is already installed.
2:24:23 PM: Now using node v12.18.0 (npm v6.14.4)
2:24:23 PM: Started restoring cached build plugins
2:24:23 PM: Finished restoring cached build plugins
2:24:24 PM: Attempting ruby version 2.7.1, read from environment
2:24:25 PM: Using ruby version 2.7.1
2:24:25 PM: Using PHP version 5.6
2:24:25 PM: 5.2 is already installed.
2:24:25 PM: Using Swift version 5.2
2:24:25 PM: Started restoring cached node modules
2:24:25 PM: Finished restoring cached node modules
2:24:25 PM: Started restoring cached yarn cache
2:24:25 PM: Finished restoring cached yarn cache
2:24:26 PM: Installing NPM modules using Yarn version 1.22.4
2:24:26 PM: yarn install v1.22.4
2:24:26 PM: warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
2:24:26 PM: [1/4] Resolving packages...
2:24:27 PM: success Already up-to-date.
2:24:27 PM: Done in 0.59s.
2:24:27 PM: NPM modules installed using Yarn
2:24:27 PM: Started restoring cached go cache
2:24:27 PM: Finished restoring cached go cache
2:24:27 PM: go version go1.14.4 linux/amd64
2:24:27 PM: go version go1.14.4 linux/amd64
2:24:27 PM: Installing missing commands
2:24:27 PM: Verify run directory
2:24:28 PM:
2:24:28 PM: ┌─────────────────────────────┐
2:24:28 PM: │ Netlify Build │
2:24:28 PM: └─────────────────────────────┘
[Truncated]
2:24:54 PM: Minifying css bundle
2:24:58 PM: Post processing - redirect rules
2:24:58 PM: Post processing - header rules
2:24:58 PM: Post processing done
2:24:59 PM: Site is live
2:25:06 PM: Finished processing build request in 54.116591215s
```
## Expected behavior
Site should work with javascript (carousels, popups etc.)
## System information
Next version is 9.4.4
## Additional context
I'm deploying on Netlify; the site works fine in `yarn dev` mode, but when deployed it loads the page but no JavaScript is working.
Status: Issue closed
Answers:
username_1: Assuming the application was configured incorrectly on netlify. Did you try vercel.com?
username_0: @username_1 I tried vercel and it worked, not sure what's the difference between how netlify vs. vercel handles the static exported files.
I followed this guide: https://community.netlify.com/t/using-next-js-as-a-static-site-generator-for-netlify/3391
The build commands and settings are identical on both Vercel and Netlify. Honestly puzzled.
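One thing worth double-checking in cases like this is the publish directory: `next export` writes the static site, including the `_next` JavaScript assets, to `out/`. A sketch of a typical `netlify.toml` (script names assumed):
```toml
[build]
  # assumes package.json defines "build": "next build && next export"
  command = "npm run build"
  publish = "out"
```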
username_2: I'm having the same issue. Did you find a fix @username_0 ?
username_3: This issue has been automatically locked due to no recent activity. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you. |
metafacture/metafacture-fix | 1120702784 | Title: Add flatten-function
Question:
username_0: `flatten` is not available as a function.
It can be helpful if we have an array of keywords where some keywords are joined and some are tokenized.
in:
```
{
"keywords": [
"dog",
"cat",
"horse,bird,dragon"
]
}
```
If we split the third keyword (which we cannot do at the moment), we would get an array in an array. Then `flatten` would be nice.
result (does not work because of the asterisk problem with `split_field`):
```
{
"keywords": [
"dog",
"cat",
["horse" , "bird" , "dragon"]
]
}
```
should:
```
{
"keywords": [
"dog",
"cat",
"horse",
"bird",
"dragon"
]
}
```
See test-scenario:
https://github.com/username_0/fix-FunctionalReview-Testing/tree/master/data/testing/flattening
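For illustration, the intended usage might eventually look like this (hypothetical Fix code: it assumes both a working wildcard in `split_field` and the requested `flatten` function, neither of which works yet):
```
split_field("keywords.*", ",")
flatten("keywords")
```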
Answers:
username_1: Related to #100.
username_2: Will revisit after https://github.com/metafacture/metafacture-fix/issues/121. |
diazrenata/ldats-sandbox | 507972528 | Title: Recapture qualitative dynamics
Question:
username_0: * No temporal change in proportions, but sampling error
* Species change in simple ways, topics may emerge
* Each species changes in a unique way; best way might be to fit each species independently |
rix1337/docker-ripper | 898670712 | Title: Completely wrong disk info?
Question:
username_0: I just ran a few disks through and some of them are coming back with the completely wrong data, what's the process of fixing this?
Answers:
username_1: No idea. That's not a known issue
username_0: Have you any idea where the issue may lie? I haven't dug into things, but I'm going to guess that it's getting the metadata online instead of from the disk?
username_1: From the type of disk you are trying to rip we could try and infer the reason for your issue.
Is it data, video, audio..?
username_0: It’s an audio CD. It’s also new enough that it should contain some metadata on the disk.
It ripped all three songs fine; they're just all labeled as a different CD, as none of the details match.
Status: Issue closed
username_1: The software responsible for ripping and labeling audio is called `ripit`.
You will need to create an issue there to get this resolved.
jschneier/django-storages | 463072816 | Title: Retry policy for Azure Blob stores?
Question:
username_0: I am using the following dependencies (Django and Azure Blob storage):
django==1.11.15
django-storages==1.7.1
django-storages-azure==1.6.8
azure==4.0.0
It appears whenever I save an image using the `.save` method on the model's attribute I receive an exception from inside `'/site-packages/azure/storage/common/storageclient.py'` that says `The specified blob does not exist. ErrorCode: BlobNotFound` and specifies that `Retry policy did not allow for a retry`.
The exception also specifies the full URL to the Blob - it's always retrievable if I manually visit the URL. From what I can tell this error is completely benign, and has to do with some sort of write/read consistency issue where metadata is trying to be read back too quickly after the object was written (via a HEAD request against Azure's server). It looks to me that perhaps a retry policy would be the way to fix this, although I do not see any support for retry policies in django-storages. I feel like this is a bug but I am not sure - it definitely clouds up our Sentry logs. Is this perhaps a known issue? I did not find anything in the issue history or on google that suggests other people have this problem.
Reference: https://azure.microsoft.com/en-us/blog/azure-storage-client-library-retry-policy-recommendations/
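For what it's worth, the legacy `azure-storage` SDK (the one pulled in by `azure==4.0.0`) does expose configurable retry policies on the service client. A minimal sketch, assuming you can reach or construct the underlying `BlockBlobService` that django-storages uses (hypothetical credentials):
```python
from azure.storage.blob import BlockBlobService
from azure.storage.common.retry import ExponentialRetry

service = BlockBlobService(account_name='myaccount', account_key='...')
# Retry transient failures (like a HEAD request arriving before the blob is
# readable) a few times with exponential back-off instead of failing at once.
service.retry = ExponentialRetry(initial_backoff=1, increment_base=2, max_attempts=3).retry
```
Whether django-storages 1.7.1 exposes a hook to set this on its internal service object is exactly the open question here.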
Answers:
username_1: @username_0 any solutions found? i have the same issue
username_0: @username_1 No, I have not resolved this yet. I've actually revisited it a few times. Honestly I'd be fine with just pruning the sentry log and calling it a day but I've had a difficult time doing that API-side as well.
username_0: I finally got sentry working with this. Here's the config I used to filter the events.
```python
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration


def filter_unwanted_events(event, hint):
    '''
    This is the before_send hook for sentry. It is explicitly referenced in the sentry_sdk.init call.
    '''
    # https://github.com/jschneier/django-storages/issues/720
    if 'logentry' in event:
        if 'message' in event['logentry']:
            if event['logentry']['message'] == '%s Retry policy did not allow for a retry: %s, HTTP status code=%s, Exception=%s.':
                print("Skipping sending of 'Retry policy' to Azure.")
                return None
    print("Sending event to Sentry.")
    return event


# `env` is assumed to come from django-environ, as in the rest of this settings module
sentry_sdk.init(
    integrations=[DjangoIntegration()],
    dsn=env('DJANGO_SENTRY_DSN'),
    environment=env('AZURE_ENVIRONMENT'),
    before_send=filter_unwanted_events,
    debug=env.bool('DJANGO_SENTRY_DEBUG', default=False),
    # Set traces_sample_rate to 1.0 to capture 100%
    # of transactions for performance monitoring.
    # We recommend adjusting this value in production,
    traces_sample_rate=1.0,
    # Associate users to errors
    send_default_pii=True
)
```
hubmapconsortium/portal-ui | 768080280 | Title: 502s
Question:
username_0: In the docker logs we see this set of three lines repeated, with small variations:
Error from uwsgi:
```
invalid request block size: 4143 (max 4096)...skip
```
Internal error from nginx:
```
recv() failed (104: Connection reset by peer) while reading response header from upstream,
client: 172.18.0.6, server: , request: \"GET /favicon.ico HTTP/1.1\",
upstream: \"uwsgi://unix:///tmp/uwsgi.sock:\",
host: \"portal.hubmapconsortium.org\",
referrer: \"https://portal.hubmapconsortium.org/search?entity_type[0]=Donor\"\n"
```
Error response from nginx:
```
"GET /favicon.ico HTTP/1.1" 502 560
```
Status: Issue closed |
h4cc/awesome-elixir | 178115165 | Title: Evaluate Package "base62_uuid"
Question:
username_0: Evaluate "base62_uuid" to see if it's awesome, and possibly include in the list.
Link: https://hex.pm/packages/base62_uuid
Description:
A library for creating Base62-encoded UUIDs
This is an autogenerated issue, because the package was added on hex.pm.
geneontology/go-ontology | 412520929 | Title: Similar classes
Question:
username_0: Hello,
What is the difference between GO:0022838 substrate-specific channel activity and GO:0015267
channel activity? They have same definition
Answers:
username_1: You are correct. I think this was intended to group substrate-specific terms, as opposed to transporters with a broad specificity. However, this would be practically impossible, because not all substrates are tested, and we have put "ion channel activity" under here, which covers a broad range of substrates. It isn't really appropriate information to encode in the ontology. I suggest a merge, @username_2?
username_2: Right, I will merge GO:0022838 substrate-specific channel activity (42 annotations/4 EXP) and GO:0015267 channel activity (many annotations)
Status: Issue closed
|
autozimu/LanguageClient-neovim | 584167276 | Title: Do not send textDocument/codeLens while textDocument/didOpen is still in process
Question:
username_0: ## Describe the bug
Some language servers such as [fsautocomplete](https://github.com/fsharp/FsAutoComplete/) take some time to parse the source after `textDocument/didOpen` is called.
LC-neovim tries to call `textDocument/codeLens` right after calling `textDocument/didOpen` [here](https://github.com/autozimu/LanguageClient-neovim/blob/next/src/language_server_protocol.rs#L2109) and if the server is not fast enough the code lens information is not yet available and it will return an error.
If that was the first time `textDocument/didOpen` is called, LC-neovim considers the error as that the server has failed to start. This means LC-neovim cannot be used with this kind of servers at all.
```
18:44:47 INFO unnamed src/language_server_protocol.rs:2015 Begin textDocument/codeLens
18:44:47 INFO writer-Some("fsharp") src/rpcclient.rs:215 => Some("fsharp") {"jsonrpc":"2.0","method":"textDocument/codeLens","params":{"textDocument":{"uri":"file:///private/tmp/foo/Program.fs"}},"id":2}
18:44:47 INFO reader-Some("fsharp") src/rpcclient.rs:169 <= Some("fsharp") {"jsonrpc":"2.0","id":2,"error":{"code":-32603,"message":"File '/private/tmp/foo/Program.fs' not parsed"}}
18:44:47 WARN unnamed src/language_server_protocol.rs:2794 Failed to start language server automatically. Error: Failure { jsonrpc: Some(V2), error: Error { code: InternalError, message: "File \'/private/tmp/foo/Program.fs\' not parsed", data: None }, id: Num(2) }
```
My suggestion is LC-neovim should not send `textDocument/codeLens` there and instead should do so once (LC-neovim believes) the server has finished loading the source (for example, wait until the server sends `textDocument/publishDiagnostics`).
## Environment
- neovim/vim version: arbitrary
- This plugin version: tag 0.1.156
- Language server link and version: [fsautocomplete](https://github.com/fsharp/FsAutoComplete/), latest (for example)
## To Reproduce
Open an document with a "slow" language server.
## Current behavior
LC-neovim fails to start the server.
## Expected behavior
LC-neovim starts the server successfully.
## Additional context
Ionide-vim, F# plugin for (neo)vim using LC-neovim under the hood, has stopped working after version 0.1.156 (https://github.com/ionide/Ionide-vim/issues/22).
It is confirmed that https://github.com/autozimu/LanguageClient-neovim/commit/1c7a3b0d6 is the very commit causing the issue, and it introduces a call to `self.textDocument_codeLens` into the `textDocument_didOpen` functions.
Answers:
username_1: First of all apologies for the very very late response. I've seen this before but I wasn't as involved in the project as I am now and I didn't have a way (or didn't know how to) test this to confirm, but those things have changed now so happy to pick this up again.
Sadly, I don't think waiting for a `publishDiagnostics` event is gonna cut it, as there might be servers which don't have support for that notification and we'd be left waiting forever for it. Of course we could wait for a different one, but same logic applies.
I'm honestly not sure what the protocol says about this type of scenario, but considering how the protocol is async I would be surprised if the server was supposed to finish processing anything before receiving another request. As I said, I'm not sure if this is correct, and I'm not sure what the correct way of dealing with this would be, but I don't think the client should be tracking ordering or timing of the requests.
Maybe it's worth opening an issue on the LSP repo to see if anyone had to deal with a similar situation before?
username_0: I think you can just ignore the result of `textDocument/codeLens`. The problem is that LC-neovim is assuming `textDocument/didOpen` has failed even when the server is just too slow to respond to `textDocument/codeLens`.
username_1: Oh I see the confusion now. The message you are getting there is because the `codeLens` request fails, but the language start has actually been successful. The fact that we're calling `text_document_did_open` and `text_document_did_change` inside `server_start` makes that message a little misleading, as one of those two may have failed and we show that the server hasn't started correctly. Which I believe is what's happening here.
I was able to correctly use `fsautocomplete` with a fix I'm about to PR here and this settings.json:
```json
{
"initializationOptions": {
"AutomaticWorkspaceInit": true
}
}
```
username_1: @username_0 #1124 should fix that warning message.
Status: Issue closed
username_1: Closing via #1124 |
spree/spree | 40279997 | Title: Possible To Skip Confirmation Step In Checkout State Machine?
Question:
username_0: From what I can tell, it's not possible to skip the confirmation step even if `confirmation_required?` returns `false`. Is this expected behavior? I'd expect this to go back to the `payment` state and not to `confirm`.
```
[5] pry(main)> Spree::Order.find_by_number("R025482011").state
=> "payment"
[6] pry(main)> Spree::Order.find_by_number("R025482011").checkout_steps
=> ["address", "delivery", "payment", "complete"]
[7] pry(main)> Spree::Order.find_by_number("R025482011").confirmation_required?
=> false
# INVALID PAYMENT SUBMITTED TO STRIPE, ORDER FAILS THE FOLLOWING GOING TO COMPLETE
# https://github.com/spree/spree/blob/master/core/app/models/spree/order/checkout.rb#L73-L80
[8] pry(main)> Spree::Order.find_by_number("R025482011").state
=> "confirm"
[9] pry(main)> Spree::Order.find_by_number("R025482011").checkout_steps
=> ["address", "delivery", "payment", "confirm", "complete"]
[10] pry(main)> Spree::Order.find_by_number("R025482011").confirmation_required?
=> true
[11] pry(main)> Spree::Order.find_by_number("R025482011").payments.first
=> #<Spree::Payment id: 4, amount: #<BigDecimal:7fe225b77398,'0.3599E2',18(18)>, order_id: 4, source_id: 3, source_type: "Spree::CreditCard", payment_method_id: 6, state: "failed", response_code: nil, avs_response: nil, created_at: "2014-08-14 17:41:58", updated_at: "2014-08-14 17:41:59", identifier: "ZRQRCCSB", cvv_response_code: nil, cvv_response_message: nil>
[12] pry(main)> Spree::Order.find_by_number("R025482011").payments.first.source
=> #<Spree::CreditCard id: 3, month: nil, year: nil, cc_type: "visa", last_digits: nil, address_id: nil, gateway_customer_profile_id: "cus_4ZaGpwsdAslBPc", gateway_payment_profile_id: "ASDASDASDASDASDASD", created_at: "2014-08-14 17:41:58", updated_at: "2014-08-14 17:41:58", name: nil, user_id: nil, payment_method_id: 6>
```
Status: Issue closed |
osmlab/osm-community-index | 377167350 | Title: `npm run test` crashes
Question:
username_0: internal/modules/cjs/loader.js:589
throw err;
^
Error: Cannot find module 'colors/safe'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:587:15)
at Function.Module._load (internal/modules/cjs/loader.js:513:25)
at Module.require (internal/modules/cjs/loader.js:643:17)
at require (internal/modules/cjs/helpers.js:22:18)
at Object.<anonymous> (/Users/stereo/Documents/code/osm-community-index/build.js:2:16)
at Module._compile (internal/modules/cjs/loader.js:707:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:718:10)
at Module.load (internal/modules/cjs/loader.js:605:32)
at tryModuleLoad (internal/modules/cjs/loader.js:544:12)
at Function.Module._load (internal/modules/cjs/loader.js:536:3)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `node build.js && rollup -c`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/stereo/.npm/_logs/2018-11-04T16_05_57_200Z-debug.log
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] test: `npm run build && npm run lint && tap test/*.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] test script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/stereo/.npm/_logs/2018-11-04T16_05_57_221Z-debug.log
```
Answers:
username_0: building data
Features:✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓/Users/stereo/Documents/code/osm-community-index/node_modules/geojson-precision/index.js:6
return p.map(function(e) {
^
TypeError: p.map is not a function
at point (/Users/stereo/Documents/code/osm-community-index/node_modules/geojson-precision/index.js:6:16)
at Array.map (<anonymous>)
at multi (/Users/stereo/Documents/code/osm-community-index/node_modules/geojson-precision/index.js:12:16)
at Array.map (<anonymous>)
at poly (/Users/stereo/Documents/code/osm-community-index/node_modules/geojson-precision/index.js:16:16)
at geometry (/Users/stereo/Documents/code/osm-community-index/node_modules/geojson-precision/index.js:38:29)
at feature (/Users/stereo/Documents/code/osm-community-index/node_modules/geojson-precision/index.js:52:22)
at parse (/Users/stereo/Documents/code/osm-community-index/node_modules/geojson-precision/index.js:72:16)
at /Users/stereo/Documents/code/osm-community-index/build.js:60:23
at Array.forEach (<anonymous>)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `node build.js && rollup -c`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/stereo/.npm/_logs/2018-11-04T16_10_38_428Z-debug.log
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] test: `npm run build && npm run lint && tap test/*.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] test script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/stereo/.npm/_logs/2018-11-04T16_10_38_489Z-debug.log
```
username_1: I'd like to make this error message friendlier - what did you put into the geojson that caused the issue? I think you might have gotten a different error message if it didn't validate.
username_0: It was like the Croatia polygon except I had it as a `Polygon` instead of a `LineString`.
username_1: Ok, yeah I switched it back to a Polygon and realized there needs to be an additional array nesting. Our current GeoJSON schema validator doesn't catch this as an error, that's why you end up with an error later in the code.
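For anyone hitting the same trap: a `LineString` takes a flat list of positions, while a `Polygon` wraps its ring(s) in one more array level. A minimal illustration (the coordinates are made up):
```json
{ "type": "LineString", "coordinates": [[15.0, 45.0], [16.0, 46.0]] }
```
```json
{ "type": "Polygon", "coordinates": [[[15.0, 45.0], [16.0, 46.0], [15.5, 44.5], [15.0, 45.0]]] }
```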
We can look into replacing the GeoJSON validator with something better..
* [geojson/schema](https://github.com/geojson/schema) I think is a generator for the json-schema doc that this project currently uses.
* [yagajs/geojson-schema](https://github.com/yagajs/geojson-schema) looks promising, may validate more things with code.
username_0: In that case, Croatia is wrong too, and I picked the wrong file to copy :)
username_1: stale
Status: Issue closed
|
topcoder-platform/community-app | 349964627 | Title: Profile - Change skills implementation in settings page
Question:
username_0: <img width="1402" alt="screen shot 2018-08-13 at 12 59 28" src="https://user-images.githubusercontent.com/4612921/44025412-c07a42e2-9ef8-11e8-9e93-cc98a8388fa6.png">
Currently, the app saves the skills as `traits`.
You need to update the logic of that page to save the skills in the same way as the legacy app (https://github.com/appirio-tech/topcoder-app/).
Answers:
username_0: @topcoder-platform/topcodercompetitors new ticket available for pickup
username_1: @username_0 pls add me to this ticket
username_2: @username_0
if you could add me to one of the tickets you are not adding @username_1 to :)
username_0: @username_1 it's yours
@username_2 which one do you want?
username_0: Contest https://www.topcoder.com/challenges/30069326 has been created for this ticket.
username_0: Contest https://www.topcoder.com/challenges/30069326 has been updated - the new changes has been updated for this ticket.
username_0: Contest https://www.topcoder.com/challenges/30069326 has been updated - it has been assigned to diazz.
username_2: @username_0
only two tickets have open-for-pickup label ...not sure if they are actually available!
username_0: Does it work if you run the app against the production API?
My guess is that the dev db is not populated properly so the skills are missing there.
username_1: I checked in the project https://github.com/appirio-tech/topcoder-app/; it uses the same API for getting the list but with a different parameter, and it returns empty:
https://api.topcoder-dev.com/v3/tags/?filter=domain%3DSKILLS%26status%3DAPPROVED
username_0: This one https://api.topcoder-dev.com/v3/tags/ returns all skills
username_1: @username_0 the production work. Can i do base on production api?
username_0: yes
username_1: @username_0 fixed in https://github.com/topcoder-platform/community-app/pull/1169, pls check with production api
username_0: Contest https://www.topcoder.com/challenges/30069326 has been updated - the new changes has been updated for this ticket.
username_0: Thanks @username_1!
I've also increased the payment a bit.
Status: Issue closed
username_0: Payment task has been updated: https://software.topcoder.com/review/actions/ViewProjectDetails?pid=30069326 |
PouleR/facebook-messenger-bundle | 423728502 | Title: Symfony 4.2 deprecation on the tree builder
Question:
username_0: A tree builder without a root node is deprecated since Symfony 4.2 and will not be supported anymore in 5.0.
```
/var/www/spinninplatform/vendor/symfony/config/Definition/Builder/TreeBuilder.php:30
  › if (null === $name) {
  ›     @trigger_error('A tree builder without a root node is deprecated since Symfony 4.2 and will not be supported anymore in 5.0.', E_USER_DEPRECATED);
  › } else {
/var/www/spinninplatform/vendor/pouler/facebook-messenger-bundle/DependencyInjection/Configuration.php:18
  › {
  ›     $treeBuilder = new TreeBuilder();
  ›     $rootNode = $treeBuilder->root('pouler_facebook_messenger');
```
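For reference, the non-deprecated construction passes the root name to the `TreeBuilder` constructor. A minimal sketch with this bundle's root name (assuming a recent Symfony 4.x where `getRootNode()` is available):
```php
$treeBuilder = new TreeBuilder('pouler_facebook_messenger');
$rootNode = $treeBuilder->getRootNode(); // replaces the deprecated $treeBuilder->root(...)
```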
Status: Issue closed |
vuetifyjs/vuetify | 248572796 | Title: VToolbarSideIcon is not working well in mobile to open VNavigationDrawer
Question:
username_0: ### Steps to reproduce
### Versions
"vuetify": "^0.14.8", "vue": "^2.4.2" chrome 59.0.3071.125, Firefox 54.0.1, Amdroid 7.0.0,
<!-- Which versions of Vue, Vuetify, OS, browsers are affected? -->
### What is expected ?
Opening VNavigationDrawer when I touch VToolbarSideIcon
<!-- The behavior you would expect to see -->
### What is actually happening ?
It does not open from time to time
<!-- Is there anything else we should know? -->
### Reproduction Link
<!-- Any issues without a reproduction link will be closed -->
Answers:
username_1: Please provide a link to a reproduction of the problem using the codepen template or equivalent.
username_0: Hello Nekosaur
https://codepen.io/username_0/pen/zdwvjp
I made this.
However even in https://vuetifyjs.com has same problem.
In mobile opening navigation is not working well once in 10times.
For understanding I made a video. Please watch this : https://www.youtube.com/watch?v=5uiYyVtyoPM&feature=youtu.be
Additionally a navigation in Vue Material is runing well
Thanks
username_1: @username_0 I've tried replicating this on vuetifyjs docs but am unable to (both desktop mobile and an iphone). It always opens the drawer when clicking the icon.
Looking at your video it's hard to know exactly when and where you click since there's no visible cursor, but a few times a small white ball shows up at the top of the window, which seems to indicate it has interpreted your touch as a swipe instead of a click. Could that be the problem?
username_0: Yes, it's hard to tell. Sorry.
I think the touchstart/swipe conflict is part of it.
To solve it, the button should fire its event on touchstart.
And I guess there is another error.
For example, Vue Material works well; it opens the navigation like this:
`<md-sidenav class="md-left md-fixed" ref="sidebar">`
`<md-list-item @click="$refs.sidebar.toggle()"... `
However, Vuetify uses data (state); perhaps changing that could solve the problem.
username_1: I do not agree that the navigation drawer should activate on touchstart. Touchstart is a mobile only event, while click works for both desktop and mobile.
I'm not sure what you mean by the button style changing.
username_0: hello,
I mean that opening the navigation should use a function on the navigation element, not watch for data changes.
As far as I know, Vue uses ticks to refresh data state. But...
I am not sure that is the cause of the opening problem. I am trying to find the cause;
after finding it, I will tell you. :)
Thank you for helping.
username_0: Hi! I found the error! Please check this: https://youtu.be/AAF6x_oWjR0. I am trying to fix it,
and Vuetify must be fixed as well. Have a good day!
Status: Issue closed
|
mesonbuild/meson | 219258278 | Title: Attach test setup to tests
Question:
username_0: The idea is to put all the test invocation in meson.build, so I can simply run `mesontest` without any external script (that way, I need to sync between meson.build and the script).
Answers:
username_1: What is missing from [`test()`](https://github.com/mesonbuild/meson/wiki/Reference-manual#test) that you need? You should be able to do everything except set `exe_wrapper` there. We could of course add that too, if that's what you need.
username_0: I have test cases that use Valgrind to make sure there are no memory access issues.
Currently, I `add_test_setup()` and `test()`, then run `mesontest --setup=` to run the test cases with Valgrind. This way, I need another script to tell `mesontest` how to pair tests/test suites with test setups, so I have to maintain the test cases (which are written in .c), meson.build, and a shell script.
username_2: If you want to run only a subset of all tests when running with Valgrind, give them a unique suite name, say `vgtests` and then you can run them with:
mesontest --setup=valgrind --suite=vgtests
username_0: The point is that this way I have to maintain an extra script to make sure the test cases are run with the right test setup.
username_2: How are your tests invoked currently?
username_0: `meson --setup=test-setup-name`
username_2: No, I mean what starts that? Is it invoked by CI or something else?
username_0: Currently, by hand, plan to integrate with Travis CI.
username_2: In Travis set your test command to `ninja test && mesontest --setup=valgrind <other flags>` and you are set.
If you have a test that must always be run with Valgrind (and only with Valgrind) you can instead do this:
vg = find_program('valgrind')
exe = executable('vg_test', ...)
test('with vg', vg, args : [exe])
But note that this test can then never be run without Valgrind.
username_0: OK, thank you, @username_2. Although I still think it would be better if Meson could bind a test setup (as a test fixture?) to test cases and let the user simply choose which test case/test suite to run.
username_2: That hardcodes tests and suites. Most people want to run under a variety of suites. |
cakephp/bake | 343843977 | Title: Unknown command cake bake
Question:
username_0: I have problems creating the model with `bin/cake bake name`.
I get:
`Exception: unknown command 'cake bake'. Run 'cake --help' to get the list of valid commands in /var/www/html/congress/vendor/cakephp/cakephp/src/console/CommandRunner.php line 321`
Answers:
username_1: Have you ensured that bake is installed? Try running `composer require cakephp/bake` which will install bake.
username_0: At first, `bin/cake bake model` worked correctly. I just had to make some changes to the database, and now I want to re-run bake model so that the database changes are reflected. But now I get that error and I don't understand the problem, since it doesn't recognize the command. I already tried composer update and composer install, and nothing.
Status: Issue closed
username_2: You need to run `bin/cake ...`
This is not a bug of the plugin, but a user error. Closing as such.
If you are
looking for help on how to implement a feature or to better understand
how to use the framework correctly, please visit one of the following:
[The CakePHP Manual](https://book.cakephp.org)
[The CakePHP online API](https://api.cakephp.org)
[The CakePHP Forum](https://discourse.cakephp.org)
[Stackoverflow](https://stackoverflow.com/questions/tagged/cakephp)
or the #cakephp channel on irc.freenode.net, where we will be more than
happy to help answer your questions.
Thanks!
username_3: I just did a test using latest app skeleton installing it using `composer create-project cakephp/app myapp`.
After that running `bin/cake bake` from the folder gives me expected output:
```
The following commands can be used to generate skeleton code for your application.
Available bake commands:
- all
- behavior
- cell
- command
- component
- controller
- fixture
- form
- helper
- mailer
- middleware
- migration
- migration_diff
- migration_snapshot
- model
- plugin
- seed
- shell
- shell_helper
- task
- template
- test
- twig_template
By using `cake bake [name]` you can invoke a specific bake task.
```
username_4: Experiencing same problem
Updated CakePHP 3.6.11. From CakePHP 3.5.*. using composer update.
Installed as expected no errors.
bin/cake **bake** .... Not available:
**ERROR:** _Exception: Unknown command `cake bake`. Run `cake --help` to get the list of valid commands_.
bin/cake -h
Available Commands:
- cache
- completion
- console
- help
- i18n
- orm_cache
- plugin
- routes
- schema_cache
- server
- version
username_4: UPDATE - **Using username_3's suggestion above** - Created a new project from scratch with `composer create-project cakephp/app myapp` ... this worked; bin/cake bake is available.
username_2: Make sure the plugin is installed via composer and loaded properly as documented.
Then it will also work for upgraded apps.
username_4: Cheers Mark .... Appreciate quick response.
username_5: Hello,
I have the same problem, and I can't find out how to resolve it.
When I run `cake --help`, bake isn't displayed.
I've tried removing and re-requiring it via composer, but that did not resolve the problem.
Do you have any other suggestions?
thanks
username_2: Did u load the plugin as documented? e.g. in Application.php?
username_5: No, I didn't.
Where is it documented? In other projects, I haven't needed to configure it.
username_5: I found the solution after creating a new project; in the Application.php file:
```
public function bootstrap()
{
// Call parent to load bootstrap from files.
parent::bootstrap();
if (PHP_SAPI === 'cli') {
try {
$this->addPlugin('Bake');
} catch (MissingPluginException $e) {
// Do not halt if the plugin is missing
}
$this->addPlugin('Migrations');
}
/*
* Only try to load DebugKit in development mode
* Debug Kit should not be installed on a production system
*/
if (Configure::read('debug')) {
$this->addPlugin(\DebugKit\Plugin::class);
}
}
```
This code didn't exist in my project, so when I copied it over, the problem was resolved.
Thanks for your help.
Regards. |
intellij-rust/intellij-rust | 1128331740 | Title: Сonstructor generation puts cursor in weird place
Question:
username_0: <!--
Hello and thank you for the issue!
If you would like to report a bug, we have added some points below that you can fill out.
Feel free to remove all the irrelevant text to request a new feature.
-->
## Environment
* **IntelliJ Rust plugin version:** 0.4.166.4426-221-nightly
(stable 0.4.164.4409-213 same problem )
* **Rust toolchain version:** 1.60.0-nightly (5e57faa78 2022-01-19) x86_64-apple-darwin
* **IDE name and version:** CLion 2022.1 EAP (CL-221.3427.90)
* **Operating system:** macOS 11.4
* **Macro expansion engine:** new
* **Name resolution engine:** new
## Problem description
https://user-images.githubusercontent.com/49211026/153175648-8d221b93-a494-42f4-a085-795adefbc647.mov
Empty lines do not matter.
## Steps to reproduce
```rust
struct S {
    a: u8, // generate constructor
b: bool
}
impl S {
fn foo(&self){
println!("foo")
}
}
struct A { // cursor will be inside the word `struct` o_O
}
fn main() {}
```
## Expected:
```rust
struct S {
    a: u8, // cursor here
b: bool
}
impl S {
fn foo(&self){
println!("foo")
}
    pub fn new(a: u8, b: bool) -> Self { // or on this line
Self { a, b }
    } // maybe here is also ok
}
struct A {
}
fn main() {}
```
<!--
Please include as much of your codebase as needed to reproduce the error.
If the relevant files are large, please provide a link to a public repository or a [Gist](https://gist.github.com/).
--> |
jashkenas/coffeescript | 1113698 | Title: Adding plugins to coffeescript (i.e. preprocessor for logging)
Question:
username_0: I would like to see coffeescript have a plugin system that would allow users to develop enhancements that can be inserted into the coffeescript compile process.
For instance, I scatter `console.log` output throughout my application, but there are times that I want to have loglevel features, such as console.debug or console.trace. To make this work cross-browser, I create my own logging functions. The problem is that console output shows the file name and line number of where console.log was called, not where my wrapper function is called. This makes it more time consuming to debug in a browser.
Furthermore, I want all of my logging code to be removed when in production mode. It seems to me that coffeescript would be an ideal place to handle custom logging features, and it could easily choose to include or exclude logging code in production mode. I would love to see coffeescript enhanced to allow users to build preprocessors that would be run on the code before the compiler is run.
Imagine a preprocessor that defines some custom functions. These functions would inline code in the output based on some condition. For instance, my preprocessor defines a `logger` object and adds some methods to it. This could be used with coffeescript like this:
logger.loglevel 'debug'
logger.info 'Info'
logger.debug 'Debug'
logger.trace 'Trace'
This would generate:
console.log('Info');
console.log('Debug');
Note that 'Trace' isn't output because the loglevel excludes it. The values of `logger.loglevel` could be kept simple, or could allow lots of levels: `none`, `fatal`, `error`, `warn`, `info`, `debug`, and `trace`.
The output would be dynamically included depending upon the function called and the current log level. It would be great if the command line compiler could somehow take parameters for preprocessors so that something like `--loglevel none` could be specified when generating production code.
It would be extremely nice if each file could have its own loglevel scope. That way, if I wanted to only see trace-level logging in a certain module of my application, I wouldn't have to see the trace output from everything else.
Anyway, does this seem like a useful idea? Since coffeescript is already compiling code, why not plug into it for features like this? Could this be added without changing coffeescript core, or would it need to go into coffeescript core?
Answers:
username_1: I know that the original problem this thread set out to solve has been resolved, but I'm looking for information about customizing the coffee compiler.
Say, for example, that I wanted to augment the functionality of a all arrays and objects with underscore functions, allowing me to do something like `testArray.first()` and have it compile to `_.first(testArray)`
This is something which would be very dangerous to do in plain javascript, as I would have to extend the Array.prototype and might break functionality of the array in other libraries. It seems like it would be safe and fun to do with coffee-script, though!
It would be great if there was a way to do this in a way which is:
1. Modular - I could add multiple pre-processing steps, pulling from different sources which make programming more easy and elegant in different ways.
2. Integrated - I want to be able to change something in the configuration files of coffeescript so that I don't have to use a custom binary and replace the command in every single development tool which is calling the coffee compiler (for live compilation, etc).
Does this exist built into coffeescript now? If not, it seems like it should. I realize this opens up a can of worms and would probably lead to some horrible misuse. But, hey, with great power comes great responsibility!
username_2: @username_1 leaving aside whether that's a good idea, I don't see a way of doing that without runtime checks. Check out #3171 for a stab at hygenic macros, which may be what you're looking for.
username_3: The desire to avoid runtime checks seems to have caused an unhelpful aversion to runtime in general. It doesn't really matter how much we sugar a user's invocations. It's not a runtime check to compile `2px` to `px(2)` or `with _ then @first array` to `_.first(array)`.
Invocation sugar seems to get conflated with runtime checking.
username_2: But think of how much slower your app would be if you had to wrap every member access call to `.first()` to see if the variable is an array. It's also surprising behavior that some method invocations would act differently depending on what method was called. This is a poor idea.
username_3: Agreed. I wouldn't want this feature as suggested either.
I was just saying that having more sugar for expressing invocations (the two I mentioned are just for example) is totally different to wrapping stuff in checks.
It just seems like we could allow for a lot more of what people wanted to do if there were more ways to write an expression that invokes a user defined function on the expression's operands. Instead of exploring how to do that best, we tend to dismiss it for sounding like it introduces runtime checks.
You are totally correct though @username_2: We can't fix methods without expensive checks all over the place.
Status: Issue closed
username_4: Closing this issue as it appears that a `--logLevel` flag or something like it isn’t currently in the plans. However there definitely is need for making the compiler extensible, so if someone wants to propose a feasible way for implementing a plugin architecture that would be greatly appreciated. Please make such suggestions as new issues. |
149203/developersvegas2 | 412740233 | Title: Capture video at Demo Day
Question:
username_0: Something like 80% of the links that devs submit for their demos are broken within 4 months. We want to offer videos on developers.vegas because [video lives forever](https://www.mikezetlow.com/how-to-live-forever-on-the-internet/) (or long enough).
We want the video to have 3 areas:
1. A screen capture of their presentation
2. An angle filmed on a tripod
3. A sponsor's watermark
In sum, it should look like this:

I will provide a camera and tripod and video output (HDMI or whatever the Canon 650D outputs).
We need to take the video from the tripod camera, the video from the screencast, the audio from the presentation, and the graphic of the logo and compile a video in realtime. We will save the video to a directory on the computer. The next step is to automatically post the video to YouTube and developers.vegas - but that's another issue. This issue only deals with capturing the video.
I use [OBS Studio](https://obsproject.com/). I know there is other software that can do this. Perhaps look at what Twitch users do.
Answers:
username_0: Video should save in MP4 format.
username_0: We are going to capture video from the Canon 650D / Canon T4i at 1920x1080 and 24fps.
We should screen capture the video of the presentation at the same resolution and fps.
I made a graphic that goes "under" the videos here:

When the 2 video sources and the graphic are combined they should look like:

The graphic is layer 1, covers the screen, and is 1920w x 1080h.
The screen capture of the presentation is layer 2, goes in the upper left corner, and is 1600w x 900h.
The video of the presenter is layer 3, goes in the lower right corner, and is 640w x 360h.
username_0: Our Whole Stack Developer friend Chad is bringing the following:
https://www.ebay.com/itm/Dell-Latitude-7280-12-5-Laptop-i5-Gen-7-8GB-RAM-128GB-SSD-/202603110138?oid=283396876949
https://www.ebay.com/itm/292975999948
https://www.ebay.com/itm/163132241881
We are also operating with a Canon T4i camera and whatever outputs we can get from Innevation.
username_0: We need this:
https://www.amazon.com/CCYC-Version-Adpater-Coupler-Replacement/dp/B076ZR4GVV/
Status: Issue closed
|
ultravideo/kvazaar | 470980984 | Title: Can we set the QP of cu ?
Question:
username_0: I can only find a way to change the QP of an LCU (using the ROI method). How can we control the QPs of CUs?
Answers:
username_1: Hi,
Sorry for the delay. As far as I know, setting QP at CU level is not completely implemented so we are only able to adjust the QP at LCU level at the moment.
Status: Issue closed
|
GsDevKit/zinc | 54541524 | Title: configuration error ... apparently the base WebSocket support classes are not brought in
Question:
username_0: ```
GsDeployer deploy: [
Metacello new
baseline: 'ZincHTTPComponents';
repository: 'github://GsDevKit/zinc:gs_master/repository';
onLock: [:ex | ex honor ];
get: load: 'Tests'; load:'Zinc-WebSocket-Tests' ].
```
Answers:
username_0: With this stuff in mind, it is worth revisiting the load instructions ....
BTW, this is the winning load expression:
```smalltalk
GsDeployer deploy: [
Metacello new
baseline: 'ZincHTTPComponents';
repository: 'github://GsDevKit/zinc:gs_master/repository';
get;
load: #( 'Core' 'WebSocket' ) ].
```
At the end of the day, `Zinc-WebSocket-Tests` does not require `Zinc-GemStone-Server-Tools`, and it should ...
keplergl/kepler.gl | 518237186 | Title: [Bug] Exported map does not render hexbin colorscale correctly
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here. |
angular/angular | 222240764 | Title: Query params containing commas result in a redirect
Question:
username_0: **I'm submitting a ...** (check one with "x")
```
[x] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```
**Current behavior**
When visiting an ng2 URL with a query parameter value containing a comma, ng2 redirects to the same path with all query parameters removed.
**Expected behavior**
Parameters should not disappear, and the router should properly report the value for the query param.
**Minimal reproduction of the problem with instructions**
Steps to reproduce:
1. Download https://github.com/username_0/ngupgrade-example/tree/ng2
2. npm start
3. npm install
4. Visit http://localhost:3000/teams?foo=bar,baz
**What is the motivation / use case for changing the behavior?**
Our product currently has some parameter values containing commas, and we need to support them.
**Please tell us about your environment:**
* **Angular version:** 4.0.2
* **Browser:** Chrome
* **Language:** TS
Answers:
username_1: The first navigation is triggered by the Angular 2 router.
Subsequent navigations are initiated by the interop, most specifically:
```js
function setUpLocationSync(ngUpgrade) {
if (!ngUpgrade.$injector) {
throw new Error("\n RouterUpgradeInitializer can be used only after UpgradeModule.bootstrap has been called.\n Remove RouterUpgradeInitializer and call setUpLocationSync after UpgradeModule.bootstrap.\n ");
}
var router = ngUpgrade.injector.get(_angular_router.Router);
var url = document.createElement('a');
ngUpgrade.$injector.get('$rootScope')
.$on('$locationChangeStart', function (_, next, __) {
url.href = next;
router.navigateByUrl(url.pathname);
});
}
```
from the router-upgrade
username_1: It seems like changing `router.navigateByUrl(url.pathname);` to `router.navigateByUrl(url.pathname + url.search);` fixes the issue.
username_0: While query parameters aren't dropped anymore, it looks like they're still not being handled properly. Now I see a brief flicker in the location bar, which appears to correspond to the URL quickly changing to http://localhost:3000/teams?foo=bar2%2baz, then back to http://localhost:3000/teams?foo=bar,baz.
The browser's back button also doesn't successfully navigate back in the URL history for parameters containing a comma. To demonstrate this, try the original repro case, but add queryParamsHandling="merge" to the link in team_list.component.html. If you click on a team link, then attempt to use the browser's "back" functionality, the page is not updated.
username_2: re-opening this one. @username_1 @mhevery @username_4
username_3: One addition from me: the `setUpLocationSync` function has another issue: it drops the hash part of the URL.
So the fix probably should be `router.navigateByUrl(url.pathname + url.search + url.hash);`
cc @username_1 @mhevery @username_4 @username_2
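Putting the two fixes together, the synced handler would look like this (a sketch against the interop code quoted above):
```js
ngUpgrade.$injector.get('$rootScope')
    .$on('$locationChangeStart', function (_, next, __) {
        url.href = next;
        // Preserve the query string and hash, not just the path
        router.navigateByUrl(url.pathname + url.search + url.hash);
    });
```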
username_4: @username_3 Yes, looks like you're correct. I'll add that in as well.
Status: Issue closed
|
VKCOM/vk-android-sdk | 648468027 | Title: where is VKShareDialogBuilder ?
Question:
username_0: Hello, when I read the docs at https://vk.com/dev/android_sdk, they use `VKShareDialogBuilder builder = new VKShareDialogBuilder();`, but I can't find any implementation of `VKShareDialogBuilder` on master. Where is `VKShareDialogBuilder`, or what should be used instead?
Answers:
username_1: Hello! That documentation is about the old version of the SDK: https://github.com/VKCOM/vk-android-sdk/tree/version1.6.9
It does not exist in the new 2.0+ version.
Status: Issue closed
|
pwndbg/pwndbg | 970124775 | Title: ModuleNotFoundError: No module named 'elftools'
Question:
username_0: <!--
Before reporting a new issue, make sure that we do not have any duplicates already open.
If there is one it might be good to take part in the discussion there.
Please make sure you have checked that the issue persists on LATEST pwndbg version.
Below is a template for BUG REPORTS.
Don't include it if this is a FEATURE REQUEST.
-->
### Description
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
### Steps to reproduce
<!--
What do we have to do to reproduce the problem?
If this is connected to particular C/asm code,
please provide the smallest C code that reproduces the issue.
-->
### My setup
<!--
Show us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).
NOTE: We are currently supporting only Ubuntu installations.
It is known that pwndbg is not fully working e.g. on Arch Linux (the heap stuff is not working there).
If you would like to change this situation - help us improving pwndbg and supporting other distros!
This can be displayed in pwndbg through `version` command.
If it is somehow unavailable, use:
* `show version` - for gdb
* `py import sys; print(sys.version)` - for python
* pwndbg version/git commit id
-->
Answers:
username_0: ./setup.sh runs fine. However, when I run gdb, this error appears. I installed elftools with pip, but it did not work. How can I solve this? Please help me.
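A common cause for this kind of report (an assumption here, since the template above was left empty) is that gdb embeds its own Python interpreter, so `pip install pyelftools` into a different interpreter does not help. You can check which interpreter gdb actually uses from within gdb:
```
py import sys; print(sys.executable); print(sys.path)
```
If that differs from the Python your pip installs into, install the package for gdb's Python instead.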
Status: Issue closed
username_1: Hey, since there is not much information from your side, I am going to close this issue. Please provide more information so the issue can be reproduced next time (OS, steps you did, etc; it would be neat if this can be easily reproduced e.g. in a standard docker container, but if not, its also ok). |
matrix-org/synapse | 710047040 | Title: Unable to deactivate users when identity server is disable
Question:
username_0: ### Unable to deactivate users when identity server is disable
I have selfhosted Synapse server.
Some of them used ` vector.im ` identity server.

When I tried to deactivate users I've got an error
```
{
"errcode": "M_UNKNOWN",
"error": "Failed to remove threepid from ID server"
}
```
As I understand it, it is impossible to deactivate a user on a self-hosted Synapse if the identity server is disabled or not working.
Other users who were not connected to `vector.im` were deactivated successfully.
Esri/distance-direction-addin-dotnet | 263245087 | Title: Static Code Analyzer : Missing check against null
Question:
username_0: - add null checks
Answers:
username_1: Moving this to impeded just until we can see the original report.
I did not see an instances of this in the compile warnings
username_1: Addressed in PR #502 and can be verified with #456 (see: https://github.com/Esri/distance-direction-addin-dotnet/issues/456#issuecomment-366808136).
username_2: This has been verified. There are no longer any references to this build warning on the latest build.
Status: Issue closed
|
mschlenstedt/Loxberry | 1056495150 | Title: Linfo updaten für LB 3.0
Question:
username_0: The old version apparently doesn't know about Debian Bullseye yet... ;-)

Answers:
username_0: No update necessary. It is possibly an online lookup; in any case, it is correct now.

Status: Issue closed
|
chetbox/SafariCam | 963310569 | Title: Use Motion's movie_passthrough to reduce CPU usage
Question:
username_0: [`movie_passthrough`](https://motion-project.github.io/motion_config.html#movie_passthrough)
V4L info of the RPi camera:
```
# v4l2-ctl --list-devices
mmal service 16.1 (platform:bcm2835-v4l2):
/dev/video0
root@7<PASSWORD>:/safaricam/media# v4l2-ctl -d /dev/video0 --list-ctrls
User Controls
brightness 0x00980900 (int) : min=0 max=100 step=1 default=50 value=50 flags=slider
contrast 0x00980901 (int) : min=-100 max=100 step=1 default=0 value=0 flags=slider
saturation 0x00980902 (int) : min=-100 max=100 step=1 default=0 value=0 flags=slider
red_balance 0x0098090e (int) : min=1 max=7999 step=1 default=1000 value=1000 flags=slider
blue_balance 0x0098090f (int) : min=1 max=7999 step=1 default=1000 value=1000 flags=slider
horizontal_flip 0x00980914 (bool) : default=0 value=0
vertical_flip 0x00980915 (bool) : default=0 value=0
power_line_frequency 0x00980918 (menu) : min=0 max=3 default=1 value=1
sharpness 0x0098091b (int) : min=-100 max=100 step=1 default=0 value=0 flags=slider
color_effects 0x0098091f (menu) : min=0 max=15 default=0 value=0
rotate 0x00980922 (int) : min=0 max=360 step=90 default=0 value=0 flags=modify-layout
color_effects_cbcr 0x0098092a (int) : min=0 max=65535 step=1 default=32896 value=32896
Codec Controls
video_bitrate_mode 0x009909ce (menu) : min=0 max=1 default=0 value=0 flags=update
video_bitrate 0x009909cf (int) : min=25000 max=25000000 step=25000 default=10000000 value=10000000
repeat_sequence_header 0x009909e2 (bool) : default=0 value=0
h264_i_frame_period 0x00990a66 (int) : min=0 max=2147483647 step=1 default=60 value=60
h264_level 0x00990a67 (menu) : min=0 max=11 default=11 value=11
h264_profile 0x00990a6b (menu) : min=0 max=4 default=4 value=4
Camera Controls
auto_exposure 0x009a0901 (menu) : min=0 max=3 default=0 value=0
exposure_time_absolute 0x009a0902 (int) : min=1 max=10000 step=1 default=1000 value=1000
exposure_dynamic_framerate 0x009a0903 (bool) : default=0 value=0
auto_exposure_bias 0x009a0913 (intmenu): min=0 max=24 default=12 value=12
white_balance_auto_preset 0x009a0914 (menu) : min=0 max=10 default=1 value=1
image_stabilization 0x009a0916 (bool) : default=0 value=0
iso_sensitivity 0x009a0917 (intmenu): min=0 max=4 default=0 value=0
iso_sensitivity_auto 0x009a0918 (menu) : min=0 max=1 default=1 value=1
exposure_metering_mode 0x009a0919 (menu) : min=0 max=2 default=0 value=0
scene_mode 0x009a091a (menu) : min=0 max=13 default=0 value=0
JPEG Compression Controls
compression_quality 0x009d0903 (int) : min=1 max=100 step=1 default=30 value=30
``` |
kubernetes/kubernetes | 454483952 | Title: Kubectl cp command over writes the running process executable without warning
Question:
username_0: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via https://kubernetes.io/security/
-->
**What happened**: While performing a kubectl cp of an executable from the host to a container in a pod, it copied successfully and changed the permissions of the existing file. The running process crashed.
**What you expected to happen**: kubectl cp should not directly overwrite an executable that is in use by a running process. Instead, it should report an error.
**How to reproduce it (as minimally and precisely as possible)**:
1. Run an instance of a program, i.e. run a process in a container in a Kubernetes pod. While the process is still running, perform a kubectl cp of the executable from the host to the container.
2. Check whether the process is still running.
**Anything else we need to know?**: The cp command usually reports an error when a copy is performed onto a running executable; it prevents the direct copy.
**Environment**:
- Kubernetes version (use `kubectl version`): 1.12
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`): rhel 7.2
- Kernel (e.g. `uname -a`): Linux 5.0.2-1.el7.elrepo.x86_64
- Install tools: kubernetes cluster
- Network plugin and version (if this is a network-related bug):
- Others:
Answers:
username_1: kubectl cp uses tar under the covers. I would be surprised if tar detected/handled that scenario the way you are requesting
username_1: if the filesystem allows writing a file, cp should allow it as well. this is working as intended
/close |
WolfFire404/Plslwerk | 240328246 | Title: [serailizefield] weer + benaming
Question:
username_0: You have received feedback from **username_0**
on:
```c
public Transform biem;
```
URL: https://github.com/WolfFire404/Plslwerk/blob/master/Assets/Scripts/EnemyRelated/Explosion.cs
Feedback: Think carefully about your naming, because when other people look at your code it can be unclear to them[](http://www.studiozoetekauw.nl/codereview-in-het-onderwijs/ '#cr:{"sha":"e17758cf7e52e67402038478b2d8b0475d785f17","path":"Assets/Scripts/EnemyRelated/Explosion.cs","reviewer":"username_0"}')
dresden-elektronik/deconz-rest-plugin | 807122756 | Title: MOES 2 gang switch (tuya)
Question:
username_0: <!--
- Before requesting a device, please make sure to search the open and closed issues for any requests in the past.
- Sometimes devices have been requested before but are not implemented yet due to various reasons.
- If there are no hits for your device, please proceed.
- If you're unsure whether device support was already requested, please ask for advise in our Discord chat: https://discord.gg/QFhTxqN
-->
## Device
- Product name: MOES 2 gang wireless scene switch 22/44
- Manufacturer: TZ3000_arfwfgoa
- Model identifier: TS0042
- Device type:
- Switch
<!--
Please refer to https://github.com/dresden-elektronik/deconz-rest-plugin/wiki/Request-Device-Support
on how the Basic Cluster attributes are obtained.
-->
## Screenshots
<!--
Screenshots help to identify the device and its capabilities. Please refer to:
https://github.com/dresden-elektronik/deconz-rest-plugin/wiki/Request-Device-Support
for examples of the required screenshots.
Required screenshots:
- Endpoints and clusters of the node

- Node Info panel

- Basic Cluster attributes in the Cluster Info panel.

In the Cluster Info panel press "read" button to retreive the values. Please note that at least "Manufacturer Name" and "Model Identifier" must be populated with data (therefore, must not be empty), otherwise that information will not be usable. For battery powered devices, after pressing read it is required to wake-up the device by pressing a button or any other means of interaction.
-->
<!--
If available add screenshots of other clusters.
Relevant clusters are: Simple Metering, Electrical Measurement, Power Configuration, Thermostat, etc. You can typically spare Identify, Alarms, Device Temperature, On/Off. Please ensure data has been read prior to taking any screenshots.
-->
Answers:
username_0: 
username_0: 
username_0: 
username_0: 

username_1: @username_2
username_0: it's this one fyi:
https://nl.aliexpress.com/item/1005001504737652.html?spm=a2g0s.9042311.0.0.6ff84c4dhC00ci
username_2: Added in the list https://github.com/dresden-elektronik/deconz-rest-plugin/pull/4157
Status: Issue closed
|
topcoder-platform/community-app | 561750387 | Title: [Common] Most of the thrive article contents are not properly formatted
Question:
username_0: 
**Target URL:** https://www.topcoder.com/thrive/articles/Five%20Books%20Every%20Designer%20Should%20Read
1. Open the application and log in as a valid user
2. Go to https://www.topcoder.com/thrive/articles/Five%20Books%20Every%20Designer%20Should%20Read
3. Check the content
**Actual:** Most of the thrive article contents are not properly formatted
**Expected:** Should have proper, consistent paragraph spacing, section spacing, bullet or numbering indentation, etc.
**Environment:** HP Pavilion x360 14 inch; Windows 10
**Browser:** Google Chrome 80.0.3987.87 | FF 72.0.2
Status: Issue closed
Answers:
username_1: out of scope |
don585/TPPS_EuroDiffusion | 394383378 | Title: Constants inside the class namespace
Question:
username_0: https://github.com/username_1/TPPS_EuroDiffusion/blob/78593a4dfc01b1c3a1eb8cef0ccdd433868bbffa/EuroDiffusion/CoinDistribution.cs#L11-L18
Do you really need all these consts inside the class?
Answers:
username_1: In the previous issue I explained why all these const variables are inside the class.
Status: Issue closed
|
redcamel/RedGPU | 537867365 Title: A multi-texture loader is needed...
Question:
username_0: A texture loader is needed...
When each texture finishes loading individually and the material update is processed ---> the pipeline has to be changed
If there are 5 textures, worst case... the cost of 6 pipeline changes is incurred
All resources should finish loading at once before the screen is drawn...
Just like a game loading screen
Answers:
username_0: Basic implementation complete
Cube textures also need to be supported
username_0: resolved
Status: Issue closed
username_0: In the future, this should also be manageable by key
Status: Issue closed
|
Tfarcenim/crafting-station | 462371664 | Title: Bottom of the crafting station is missing texture
Question:
username_0: **Issue description:**
Bottom of the crafting station is missing texture

* Minecraft: 1.12.2
* Forge: 1.12.2-14.23.5.2838
* Mantle: Not installed
* Tinkers Construct: Not installed
* This mod: 0.0.4
Answers:
username_1: resolved in 0.1.0
Status: Issue closed
|
lukechilds/docker-bitcoind | 537393177 | Title: Add Tests
Question:
username_0: Test that the Docker image works as expected with a GitHub Action.
- Test syncing to n blocks
- Test bitcoin-cli can be called from docker exec
- Test JSON-RPC can be called via cURL
- ETC
Answers:
username_0: - Test process isn't run as root
- Test volume data is preserved
- Test container can read files written by host |
MiCode/Xiaomi_Kernel_OpenSource | 219460738 | Title: To all xiaomi devs release android 7.1 for Redmi 3s/3sprime phones so that developers can take useful blobs from miui and make custom roms.
Question:
username_0: To all xiaomi devs: release android 7.1 for Redmi 3s/3sprime phones so that developers can take useful blobs from miui and make custom roms. |
raven4752/huabei | 503305266 Title: The entire test set is predicted as negative
Question:
username_0: Hello, I trained on the Huabei dataset with your model, but the predictions I got classify every data pair as not synonymous. Do you know what might cause this?
Answers:
username_1: Hi, there could be many reasons for this. If you provide reproducible code and data, I can help you analyze it.
username_0: Thank you so much! I am learning from this code of yours with very few changes. This is the dataset I used: https://pan.baidu.com/s/13FdPQlY8AeFJU4qxRKweQw , which is also Huabei data.
But the final result is an F1 of 0.0, and I couldn't find the cause myself. Could you please take a look? Thank you very much!
username_1: I took a look; it may be a problem with the word vectors. You can try switching to these word vectors: https://pan.baidu.com/s/1DjIGENlhRbsVyHW-caRePg
username_0: OK, OK! I'll give it a try, thanks! |
ofrohn/d3-celestial | 681836549 | Title: How to prevent animation on initial load?
Question:
username_0: I can't seem to figure out how to prevent the animation that occurs on the initial load? It seems to happen on every example provided as well.
There are a number of comments on the site that are asking this question as well: https://armchairastronautics.blogspot.com/p/skymap.html
No matter what I set as the initial center, the map always seems to animate on load
Answers:
username_1: When I set the third value (orientation) of the config parameter center to 0, for example `center:[0,0,0]`, my sky doesn't animate. Or you could try setting the ANIMDISTANCE variable in celestial.js (line 9) to some big number, for example 7
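A minimal sketch of that first option (assuming the usual `Celestial.display(config)` entry point; only the relevant key is shown):
```js
// third value of center (orientation) set to 0 suppresses the initial animation
Celestial.display({
  center: [0, 0, 0],
  // ...the rest of your existing configuration
});
```
The ANIMDISTANCE route requires editing celestial.js itself. |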
loiste-interactive/infra-issues | 114478987 | Title: Generator 1+2 puzzle
Question:
username_0: When done, sparks from the other screen are shown through walls (white round spots)
Answers:
username_0: 
Managed to capture white spot.
username_1: I can confirm that this happens.
username_1: Also I don't have a screenshot of this, but the fire that happens soon after this is invisible, but the particles are not.
username_2: 
username_3: This also happens in the hydro plant when the generators are turned on
Status: Issue closed
username_4: Should be fixed in the next update.
username_2: no it's not
username_5: The sparks in the flooded dam room are no longer visible through the wall in the December 19th update.
Status: Issue closed
|
hexojs/hexo | 668320312 | Title: Injector doesn't seem to work.
Question:
username_0: ## Check List
Please check the following before submitting a new issue.
- [x] I have already read [Docs page](https://hexo.io/docs/) & [Troubleshooting page](https://hexo.io/docs/troubleshooting)
- [x] I have already searched existing issues and they are not help to me
- [x] I examined error or warning messages and it's difficult to solve
- [x] Using [the latest](https://github.com/hexojs/hexo/releases) version of Hexo (run `hexo version` to check)
- [x] Node.js is higher than 8.6.0
## Question
<!-- Question description -->
## Environment & Settings
**Node.js & npm version**
```
node --version
v12.18.1
npm --version
6.14.5
```
**Your site `_config.yml`** (Optional)
```
```
**Your theme `_config.yml`** (Optional)
```
```
**Hexo and Plugin version(`npm ls --depth 0`)**
```
+-- @hapi/[email protected]
+-- @hapi/[email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
+-- [email protected]
[Truncated]
The [injector](https://hexo.io/zh-cn/api/injector) doesn't seem to work. `hexo.extend.injector` doesn't exist. I use the following code in the plug-in or script directory
```js
const css = hexo.extend.helper.get('css');
hexo.extend.injector.register('head_end', () => {
  return css('/css/hexo-admonition.css');
}, 'default');
```
Returned error:
```
...
TypeError: Cannot read property 'register' of undefined
...
```
Dumping in the code, no `injector` object was found:
```
console.log(hexo.extend);
```
Answers:
username_1: Upgrade hexo to 5.0.0 first.
Status: Issue closed
|
securego/gosec | 410409391 | Title: G105 issue cannot be fixed?
Question:
username_0: ### Summary
I get an error G105 - Use of math/big.Int.Exp function should be audited for modulus == 0 (Confidence: HIGH, Severity: LOW) but I can't see any way to fix it?
### Steps to reproduce the behavior
code sample:
```go
import "math/big"
lower := new(big.Int)
lower.Exp(big.NewInt(2), big.NewInt(255), nil)
```
### gosec version
1.2.0
### Go version (output of 'go version')
go version go1.11.2 linux/amd64
### Operating system / Environment
ubuntu 18
### Expected behavior
Have a way to get rid of this error; I'm not sure if it's even valid, as setting m == nil explicitly ignores m according to the exported func
### Actual behavior
no way to get rid of warning
Answers:
username_1: Thanks for reporting this. I believe this rule was originally introduced to detect https://github.com/golang/go/issues/15184. We can certainly do some better analysis of arguments here. I'm wondering if this rule is still applicable since this should've been addressed in the go runtime now? @username_2 thoughts?
username_2: I need to check if this rule is still relevant.
username_3: Checking in on the status of this? We have to do `// nolint:gosec` on this line for a pretty standard operation.
As far as I know the below is sound, and the only way to do `x**y` with `big.Int`.
```go
var x, two, zero = big.NewInt(9), big.NewInt(2), big.NewInt(0)
x.Exp(x, two, zero)
```
See [playground.](https://play.golang.org/p/6mV65OZB79V)
Status: Issue closed
|
netlify/build | 968696137 | Title: Allow some configuration file paths to point outside of the build directory
Question:
username_0: See background at https://github.com/netlify/zip-it-and-ship-it/issues/609
We do not allow the `build.base` configuration property to point to a directory outside of the repository root. This makes sense to me.
Additionally, we do not allow the following configuration properties to point to a directory outside of the build directory (i.e. either the repository root or the `base` directory, if any): `build.publish`, `build.functions`, `build.edge_handlers`, `functions.included_files`.
Do we have a good reason to do this? This appears to be a bug. While I can see why we would not want those to point to files outside of the repository root, pointing to files outside of the base directory but still inside the repository root might make sense in a monorepo setup.
I suggest we fix this by changing the validation to check the repository root directory instead of the build directory.
https://github.com/netlify/build/blob/eb94887298428ca27c28131439cfaf5284f609f8/packages/config/src/validate/validations.js#L229
https://github.com/netlify/build/blob/eb94887298428ca27c28131439cfaf5284f609f8/packages/config/src/validate/helpers.js#L32
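For illustration, the relaxed check could look something like this (a sketch, not the actual code behind those links):
```js
const { relative, isAbsolute } = require('path')

// True when `childPath` is inside `parentDir`; the proposal is to call this
// with the repository root instead of the build directory.
const isInsideDir = (parentDir, childPath) => {
  const rel = relative(parentDir, childPath)
  return rel !== '' && !rel.startsWith('..') && !isAbsolute(rel)
}
```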
Answers:
username_0: Started at https://github.com/netlify/build/pull/3596
username_0: Fixed by https://github.com/netlify/build/pull/3598
username_0: Humio queries:
- [Error messages](https://cloud.us.humio.com/netlify-us-production/search?query=%22must%20be%20inside%20the%20root%20directory.%22%20%7C%20not%20%22username_0%22&live=false&start=30d&fullscreen=false) before the change
- [Error messages](https://cloud.us.humio.com/netlify-us-production/search?query=%22must%20be%20inside%20the%20repository%20root%20directory.%22%20%7C%20not%20%22username_0%22&live=false&start=30d&fullscreen=false) after the change
username_0: [Netlify Community update post](https://answers.netlify.com/t/using-files-outside-of-a-monorepos-base-directory/43947). |
asus4/tf-lite-unity-sample | 786510552 | Title: Show 3D model when object is detected
Question:
username_0: I am trying to show a 3D model in the real world as AR when a specified object is detected, using the TensorFlow Lite SSD scene.
Do you have any idea how to achieve this? Do I also need to use ARCore, ARKit, or AR Foundation with TFLite? If yes, how do I integrate their cameras?
Answers:
username_1: To achieve this, I guess:
- Get input texture from ARCore / ARKit
- There are different texture formats on each platform. Especially ARKit uses the YCbCr texture format. Maybe you need to write a new shader for AR.
- Should use a background thread for good performance.
Let me know if it works! |
blitz-research/monkey2 | 266019467 | Title: Can't handle german umlaute
Question:
username_0: First added as IDE issue:
https://github.com/username_0/Ted2Go/issues/59
To reproduce:
`SaveString( "test data","löss.txt" )`
note the filename `ö`.
I got `lГ¶ss.txt` saved on disk.
More info:
https://en.wikipedia.org/wiki/Germanic_umlaut
Status: Issue closed
Answers:
username_1: Working now?
username_0: It works now. |
serilog/serilog-aspnetcore | 783339949 | Title: ASP.NET 5 stops logging in Postgresql; but worked on 3.1
Question:
username_0: Hello
I have an ASP.NET Core 5 solution with PostgreSQL database.
Installed nuget packages:
Serilog.AspNetCore 3.4.0
Serilog.Sinks.PostgreSQL 2.1.0
Recently I updated from a working 3.1 version to 5.0, when suddenly, without any change, the logger stopped writing to the database.
```csharp
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseServiceProviderFactory(new AutofacServiceProviderFactory())
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseIISIntegration()
                .UseSerilog((hostingContext, loggerConfiguration) =>
                    loggerConfiguration
                        .MinimumLevel.Information()
                        .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
                        .MinimumLevel.Override("System", LogEventLevel.Warning)
                        .Enrich.WithProperty("application", "MY APP")
                        .WriteTo.PostgreSQL(
                            connectionString: CONNECTION,
                            tableName: "application_log",
                            columnOptions: SerilogColumnWriters.ColWriters,
                            needAutoCreateTable: false
                        ))
                .UseStartup<Startup>();
        });
```
Any ideas how to fix this?
Answers:
username_1: @username_0 I have exactly the same issue after upgrading to DotNet5. Logging to file works ok, but logging to DB (PostgreSQL) stopped working - without exception.
Anyone else with this issue?
Status: Issue closed
username_2: Hi! Unfortunately this repo is not the correct one to report this - please open an issue at: https://github.com/b00ted/serilog-sinks-postgresql. Thanks! |
jlippold/tweakCompatible | 545327373 | Title: `Activator` partial on iOS 13.3
Question:
username_0: ```
{
"packageId": "libactivator",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "libactivator",
"deviceId": "iPhone9,4",
"url": "http://cydia.saurik.com/package/libactivator/",
"iOSVersion": "13.3",
"packageVersionIndexed": true,
"packageName": "Activator",
"category": "System",
"repository": "rpetrich repo",
"name": "Activator",
"installed": "1.9.13~beta5",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Not working based on feedback from users in the community. The current positive rating is 25% with 1 working reports.",
"id": "libactivator",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Centralized gestures, button and shortcut management for iOS",
"latest": "1.9.13~beta5",
"author": "<NAME>",
"packageStatus": "Not working"
},
"base64": "<KEY>
"chosenStatus": "partial",
"notes": "Requires ActivatorHomeButtonCrashFix tweak"
}
```
Status: Issue closed |
mgarber93/hiking-app | 756749310 | Title: Bug: handle username is taken
Question:
username_0: Creating a user that already exists doesn't warn the user of any errors.
Steps to reproduce:
- go to https://limitless-dusk-10336.herokuapp.com/login
- create a user with the username 'test'
Actual:
- No error is shown
Expected:
- An error stating that the username is taken
ajaxorg/ace | 300191095 | Title: Editor is not useable and will not render in shadow DOM
Question:
username_0: Plenty of people are using shadow DOM these days, with the introduction of web components and what not.
Unfortunately, ace seems to be tied into `document` and makes many assumptions which it shouldn't (such as its owning document, where to query nodes from, etc).
Would it be possible to modernise this a little so ace will detect its shadow root at least rather than assuming `document` is the containing document?
FYI you can quite easily use `Node#getRootNode()` to return the shadow root or `document`.
In places like [this](https://github.com/ajaxorg/ace/blob/fac1081f74afd445c10b72e8097de397b0298d77/lib/ace/virtual_renderer.js#L49) it's very important because all the styling is lost at this point due to shadow DOM scoping rules.
Answers:
username_1: How is shadow dom normally used?
For something like:
```js
var shadow = document.body.attachShadow({mode: 'open'});
editor = ace.edit(null, {maxLines: 2})
shadow.appendChild(editor.container)
```
there does not seem to be a way to automatically detect when new shadow dom is added.
One way to solve the issue is to add a method `editor.addStylesToShadowRoot()`, which will have to be called every time the editor is added to a new shadow root. Would that work for you?
username_0: Maybe we could do it on initialisation?
```js
const container = this.shadowRoot.querySelector('.editor');
ace.edit(container, options);
```
so on initialisation here, we check if `getRootNode` exists (backwards compatibility) and if the result of it `!== document`. in which case, we (somehow) check if the styles have already been added to this root and add them if they haven't. but if it _is_ `document` or this browser doesn't support that method, we just append it to the `head` once.
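something like this rough sketch is what I mean (names are illustrative, not real ace internals):
```js
// rough sketch: decide where ace's <style> elements should go
const root = typeof container.getRootNode === 'function'
  ? container.getRootNode()
  : document;
const styleTarget = root === document ? document.head : root;
// append the styles to styleTarget (once per root)
```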
if I'm missing something or that's too large of a change, though, your suggestion would be a good workaround. AFAIK you always do DOM selections and whatnot relative to a node, so we should be safe there.
one thing we may encounter (which im also in the process of fixing in quill) is a problem with selection ranges, too. each root has its own selection from what i remember, but i would have to double check that.
username_0: @username_1 another thing is that the completion container will always be appended to the document body (for good reason). so the editor css must be in document scope **and** shadow scope anyway.
so your solution makes more sense in fact. because we want the current behaviour and the ability to add styles to a shadow root simultaneously. unless there's some non-css issues with being in a shadow tree, that should be enough.
username_1: Luckily we do not use selection, so the attached pr should fix this issue
Status: Issue closed
username_2: This "fix" is causing our code to get duplicate ace <style> chunks and our editors use the duplicate css. Not sure if this is only happening on Angular apps or in other frameworks too. All our code that has projections or conditional statements (e.g. *ngIf) is causing local copies of the styles in "random" places in our code, in addition to styles being added to the document header. This because we have multiple ace editors in our code. None of our code use web components. Just plain Angular. I wish there was a way to force all styles in document header so we don't end up with all these duplicates. |
plotly/plotly.js | 610392644 | Title: Security warnings in downstream packages
Question:
username_0: We need to resolve https://github.com/plotly/jupyterlab-chart-editor/issues/47
Answers:
username_0: Current strategy detailed in https://github.com/scijs/cwise/pull/25#issuecomment-642972416
username_0: @username_1 what's our strategy for https://www.npmjs.com/advisories/1179 ?
username_1: @username_0 Thanks for the question. That would be fixed as part of the `cwise` patch.
Status: Issue closed
|
d3/d3-delaunay | 692950471 | Title: Error with es6 worker + es6 modules
Question:
username_0: Hi,
I'm getting the following error when trying to include your lib in a JS worker (a paint worklet, to be exact).
```
Uncaught (in promise) ReferenceError: self is not defined
```
Can you either chain and/or change the `self` to `globalThis`? (It is supported by all relevant browsers as far as I am aware.)
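For reference, the portable detection I mean is roughly this (a sketch, not any bundler's exact output):
```js
// works in window, worker, and worklet scopes
const globalObject =
  typeof globalThis !== 'undefined' ? globalThis :
  typeof self !== 'undefined' ? self : this;
```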
Or even better, if I can bother you with this job, create a delaunay.min.esm so I can just use es6 modules as is :D
Thanks in advance!!!
Answers:
username_0: btw, this is how I import the code, the docs tell me that a `d3` global is created, so to get that I have
```
import 'https://unpkg.com/[email protected]/dist/d3-delaunay.js';
console.log(d3);
```
but I never reach my console log due to the error
username_1: this works for me:
~~~
cat test.html
<script type=module>
import 'https://unpkg.com/[email protected]/dist/d3-delaunay.js';
console.log(d3);
</script>
~~~
username_0: test.html
```html
<script type="module">
await CSS.paintWorklet.addModule('gradient.js');
</script>
```
gradient.js
```js
import 'https://unpkg.com/[email protected]/dist/d3-delaunay.js';
console.log(d3);
```
Throws the error, because workers and/or worklets do not have a global variable `self`
username_0: aah correction, worklets specifically, workers work fine.
this creates a worker, and works as intended
```html
<script type="module">
const w = new Worker('gradient.js', { type: 'module' });
</script>
```
where as my previous comment creates a worklet, which gives the error of the `self` variable being `undefined`
username_1: hmmm I wonder if upgrading rollup would suffice?
Compare the boilerplate at the top of https://unpkg.com/[email protected]/dist/d3.js and https://unpkg.com/[email protected]/dist/d3.js ; the difference was introduced by our upgrade to rollup:"2" and specifically https://github.com/rollup/rollup/releases/tag/v2.23.0
It's something that's yet to deploy in all the modules.
username_0: can I possibly bother you with the question of quickly making a build with the new rollup? or is that not feasible?
username_1: I have a simpler test:
~~~js
import 'https://unpkg.com/d3@6';
console.log(d3); // {the d3 object} 👍
~~~
~~~js
import 'https://unpkg.com/d3@5';
console.log(d3); // ReferenceError: self is not defined 💣
~~~
So I guess the next releases will work. In the meantime you might want to load d3@6
username_1: Related: https://github.com/d3/d3/pull/3366
username_0: Thanks!!! works like a charm!
Status: Issue closed
username_1: leaving open so I'll be happier to close it with the proper dependency upgrade :)
username_0: ok. :D
username_0: if by any chance you are curious what I need this lib for: I'm trying to replicate adobe illustrator's freeform gradient in the browser, so as not to limit the designers' options.

As you can see, I still need to find a way to actually blend between the polygons :D
username_1: Should be fixed by https://github.com/d3/d3-delaunay/pull/113
username_0: sorry for the late response, I seem to have overlooked this on my vacation.
I don't believe I would use the tactic used in your link, as the 2d context provided by a paint worklet, while intended to be similar, is not the same as a canvas. One of the differences is that there is no ability to draw pixel by pixel, meaning in this case that I would need some nasty workaround to calculate each pixel's color based on a polygon somehow.
my current thinking is to use what I have above and then somehow place some linear gradients on the edges.
username_1: Your approach made me curious about this spec and I wrote a helper function to create and update paintworklets https://observablehq.com/@fil/hello-paintworklet
username_0: nice! thanks for the mention :D
Maybe I'll make a post if / when I actually manage to make this gradient shader without per-pixel calculations :P
username_1: fixed in d3-delaunay@6
Status: Issue closed
|
GSS-Cogs/airtable-utils | 827371869 | Title: update the main.py created by airtable-sync to match current practice
Question:
username_0: this function: https://github.com/GSS-Cogs/airtable-utils/blob/39be41df7f7b7a987493dcd072c78930733b4adf/misc/createRepoUtils.py#L173
should probably be using something like
```
from gssutils import *
cubes = Cubes("info.json")
scrape = Scraper(seed="info.json")
scrape
```
rather than the legacy approach currently being created as the default code.
NOTE - the above is just a quick sketch, have a proper think when you do it.
Answers:
username_0: closing this. We're moving away from having a single starting template.
Status: Issue closed
|
python-trio/pytest-trio | 343948092 | Title: Switch from pytest_runtest_call → pytest_runtest_setup?
Question:
username_0: Right now we have to jump through hoops to make sure our `pytest_runtest_call` runs as early as possible, because it doesn't actually want to run anything, it just wants to patch the test object before the real `pytest_runtest_call` hook runs.
There's also a hook called `pytest_runtest_setup` which is... kind of designed for exactly this. It seems like we should use it?
Unfortunately when I tried this, I got 1 test failure: in `test_async_test_as_class_method`, for some reason the test method `TestInClass.test_base` is not being detected as a trio test. Pytest is very mysterious sometimes.
At some point it would be good to dig into this and figure out what's going on. |
porsager/bss | 416491992 | Title: Simple integration width vuejs.
Question:
username_0: Simple integration with vuejs.
https://flems.io/#0=N4Igtgl<KEY>s<KEY>Hl4B+<KEY>
Answers:
username_1: Hey @username_0 .. Sorry, I completely missed this issue, and just found it now.
Looks great that bss is easy to use with Vue as well, how would you compare the experience of using bss vs. vue single file components? (of course there's no build step needed which is one plus).
Status: Issue closed
|
Justineo/github-hovercard | 1062111047 | Title: Add support for GitHub's new Light High Contrast theme
Question:
username_0: Echoing from #166, #176 and #178, there's now a new Light High Contrast theme for GitHub—and it's exactly the same as GitHub's Light theme, but with better contrast. This feature is enabled by default in **user menu** > **Feature Settings** for those who would like to test.

I would like to see whether this browser extension works well with this theme. |
urbanware-org/salomon | 198340581 | Title: Remove argument allows to remove the filter pattern
Question:
username_0: When using a filter pattern, the `-r` (`--remove`) argument allows removing the filter pattern from the output.
Example:
`./salomon.sh -a monitor -i /tmp/logfile --highlight --filter "foo" -r "foo"`
Status: Issue closed
Answers:
username_0: **Fixed** in version **1.7.1** (when released). |
osmcode/pyosmium | 817997406 | Title: Are Prerequisites Libraries needed for building or also running ?
Question:
username_0: Hello,
I was wondering if the prerequisite libraries are needed for building only, or whether they need to be kept for running and thus need to be `"make install"`-ed.
That's all :)
Thanks!
Answers:
username_1: You can use the same command as in here: https://github.com/osmcode/osmium-tool/issues/208
Status: Issue closed
username_0: Yes, I was doing that, but the follow-up question was: does everything in contrib need to be compiled before the pyosmium install?
CUAHSI/HISWebClientIssues | 141260585 | Title: (average) Workspace: Time Series Viewer: the "Launch Tool" button enabled if time series with 3 different Variable Units are checked
Question:
username_0: Steps to reproduce:
1. Navigate to http://qa-webclient-solr.azurewebsites.net/
2. Search for any time series
3. Add any 3 time series with 3 different Variable Units to Workspace
4. Click "Open Workspace"
5. Check all time series
6. Select "Time Series Viewer" from "Select Tool" menu
7. Note the "Launch Tool" button
Result:
The "Launch Tool" button enabled if time series with 3 different Variable Units are checked (see att.)

Expected result:
The "Launch Tool" button enabled if 1-5 time series with 1-2 different Variable Units are checked
Note: error of loading time series is displayed if click "Launch Tool" in this case (see att.)

Status: Issue closed
Answers:
username_1: Changes in Timeseries Viewer alert user to variable units restrictions... |
earthref/MagIC | 421990043 | Title: Back button on plots should close the plots screen, not go back in the background on the browser
Question:
username_0: Back button on plots should close the plots screen, not go back in the background on the browser. People (I still do) will expect that back closes the plot screen and goes back to the main one (in their mind, the previous screen). Pressing the "x" should not be necessary (if easily done).
Answers:
username_1: This is a bit tricky, but not impossible. Showing the plots is actually just a change in the state of the plot thumbnail in the summary you clicked on. It's the same problem as changing tabs in the search interface and expecting the back button to go to the previous tab instead of leaving the search interface. I can see this would be preferable, though.
username_0: I guess this is expected in the world of web apps. People will be used to it by now hopefully.
Status: Issue closed
|
strapi/strapi | 1041880900 | Title: ctx.state.user is not populated if the route isn't configured with at least an empty policy array
Question:
username_0: <!--
Hello 👋 Thank you for submitting an issue.
Before you start, please make sure your issue is understandable and reproducible.
To make your issue readable make sure you use valid Markdown syntax.
https://guides.github.com/features/mastering-markdown/
Please ensure you have also read and understand the contributing guide.
https://github.com/strapi/strapi/blob/master/CONTRIBUTING.md#reporting-an-issue
-->
## Bug report
### Describe the bug
In Strapi v3.6.8, ctx.state.user is not populated if you do not define
```json
"config": {
"policies": []
}
```
on the route object, even though in this case I have no use for policies on this route. This makes it impossible to see the currently logged-in user who is trying to access that route.
### Steps to reproduce the behavior
1. Create a controller (any content type)
2. Define a route for that controller but omit the above JSON property
3. Try to access `ctx.state.user` in the controller. This will return `undefined`
4. Include the above properties in the route definition and the query will return the desired result.
### Expected behavior
To my understanding, not specifying a policy config shouldn't prevent a route from populating the currently logged-in user. `ctx.state.user` must always be populated.
### System
- Node.js version: 14
- NPM version: 6
- Strapi version: 3.6.8
- Database: MySQL
- Operating system: Ubuntu 20.04
Answers:
username_1: Intended, as this doesn't allow the users-permissions plugin to inject its policy. It is an undocumented way to disable auth on a route. We decided not to document this in v3 as it wasn't recommended, and in v4 it has been replaced with a boolean option in the routes to disable auth (e.g. `auth: false` to disable it).
It's documented in v4 I believe so going to close this as it's intended.
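For anyone finding this later, a minimal sketch of the v4 option (the route path and handler here are made-up examples):
```js
// ./src/api/article/routes/custom-article.js (hypothetical file)
module.exports = {
  routes: [
    {
      method: 'GET',
      path: '/articles/mine',
      handler: 'article.findMine',
      config: {
        auth: false, // the v4 boolean that replaces the undocumented v3 behaviour
      },
    },
  ],
};
```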
Status: Issue closed
|
scrapinghub/scrapyrt | 760628824 | Title: Search Page returns empty through scrapyrt only
Question:
username_0: I hope this is the right place to ask this.
I created a spider that can scrape a page in an e-commerce site and gather the data on the different items.
The spider works fine with specific pages of the site (www.sitedomain/123-item-category), as well as with the search page (www.sitedomain/searchpage?controller?search=keywords+item+to+be+found).
But, when I run it through scrapyrt, the specific page works fine, but the search page returns 0 items. No errors, just 0 items. This occurs on 2 different sites with 2 different spiders.
Is there something specific to search pages that has to be taken into account when using scrapyrt?
Answers:
username_1: Can you post your spider code? I don't see a way to reproduce it without the spider code. Try to pinpoint the problem so that there is a small code sample of a spider running in raw ScrapyRT (without any middlewares, pipelines, or other stuff from your project interfering). This way we can see whether this is a problem on the ScrapyRT side.
username_0: yes, sure.
so, my spider, stripped of all other stuff, looks like this:
```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "minimal"

    def start_requests(self):
        urls = [
            "https://www.dungeondice.it/ricerca?controller=search&s=ticket+to+ride",
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        print("Found ", len(response.css("article")), " items")
        for article in response.css("article"):
            print("Item: ", article.css("img::attr(title)").get())
```
and I set Obey_robots = False
when I do
`scrapy crawl minimal`
I get 20 items in the response, but if I go
`curl "http://localhost:9081/crawl.json?spider_name=minimal&url=https://www.dungeondice.it/ricerca?controller=search&s=ticket+to+ride"`
I get 0 items, no error, just 0 items.
I wonder if, in some way, it returns the results before the page gets completely loaded?
(sorry couldn't get the markup to work correctly)
username_2: Seems that it happens when there is a '&' in the url.
scrapyrt splits it right before the &.
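In other words, the target URL needs to be percent-encoded before it is passed as the `url` parameter, otherwise its `&`-separated pieces become parameters of `crawl.json` itself. A sketch of building the request URL safely:
```js
// percent-encode the spider's target URL before embedding it in the API call
const target = 'https://www.dungeondice.it/ricerca?controller=search&s=ticket+to+ride';
const api = 'http://localhost:9081/crawl.json'
  + '?spider_name=minimal&url=' + encodeURIComponent(target);
// the inner ? and & become %3F and %26, so they reach ScrapyRT intact
```
With the target encoded this way, the search page request reaches the spider unchanged. |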
godotengine/godot | 775018790 | Title: Vulkan still does not work on multiple graphics cards
Question:
username_0: **Issue description:**
Godot looks like this on all of my systems (details below):

**Godot version:**
build fb16b1e39
**OS/device including version:**
Radeon R9 290 - Ubuntu 18.04 with amdgpu-pro/amdgpu
Radeon R9 290 - Ubuntu 20.04 with radeon/amdgpu with oibaf drivers
Radeon R9 290 - Ubuntu 20.10 LiveCD (radeon/amdgpu)
Radeon R9 290 - Windows 7 x64
Radeon RX 550 - Ubuntu 20.10 LiveCD (radeon/amdgpu)
GeForce GTX 670 - Windows 7 x64
**Steps to reproduce:**
- Boot the Ubuntu LiveCD (20.10, 20.04, any...)
- Download Godot from Calinou's builds
- Launch Godot, create an empty project and open it
**Minimal reproduction project:**
Just create a new empty project
I've never once seen Godot 4 working, so I assumed it's just this early in development. I've tried every few months, but it was always the same. But apparently it does work for some people?.. Just what kind of hardware or software do you have?! :) Am I doing something wrong?..
I've tried building it with GCC and Clang, with various options, debug and release_debug - it's all the same. Sometimes there are no errors in the console, and sometimes it says something about validation layers, but this appears to be unrelated because that's not always the case. (Is it?)
There was #38410, which seems to solve it for some people, but not for me. (There is also a mention of Godot not working on Debian Buster, but that's not surprising - #43231.) I don't think the R9 290 and especially the RX 550 are so old that they don't support Vulkan properly?..
Answers:
username_1: It seems to work just fine. You have to add a light source if you want to see colors :)
Status: Issue closed
username_0: O_O So it's just that by default there is no sky light?..
OK, added light, now I can see things. It was just very unexpected to see this instead of what we usually saw. :)
username_1: There is sky light but there is no default sky, you have to add an environment.
This will likely change before the stable release though, see https://github.com/godotengine/godot-proposals/issues/1599. |
MicrosoftDocs/azure-docs | 368027217 | Title: Please apply Japanese language for this document
Question:
username_0: I believe this section is really important for Japanese users; please consider adding a Japanese translation for this page.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 95e671ca-1844-d3e4-e437-44496409109f
* Version Independent ID: d100fae1-349f-0382-8f52-2d5c9cf0dd60
* Content: [Quickstart - Grant access for a user using RBAC and the Azure portal](https://docs.microsoft.com/ja-jp/azure/role-based-access-control/quickstart-assign-role-user-portal)
* Content Source: [articles/role-based-access-control/quickstart-assign-role-user-portal.md](https://github.com/Microsoft/azure-docs/blob/master/articles/role-based-access-control/quickstart-assign-role-user-portal.md)
* Service: **role-based-access-control**
* GitHub Login: @rolyon
* Microsoft Alias: **rolyon**
Answers:
username_1: Thanks for the feedback! We are currently investigating and will update you shortly.
username_2: Assigning to @username_3 to assist with this localization request - Japanese.
username_3: @username_2, please help assign localization issue to @DavidPrio.
@SunnyDeng, FYI.
username_4: Hi @username_0
Thank you for your feedback.
We implemented your feedback and published the updated article on the Docs site at the following page: https://docs.microsoft.com/ja-jp/azure/role-based-access-control/quickstart-assign-role-user-portal
We truly appreciate your contribution to our articles and we encourage you to continue providing your valuable feedback.
Kind regards,
Microsoft DOCS International Team
@SunnyDeng
#please-close
Status: Issue closed
|
JimmyLv/reading | 368202177 | Title: Integrating and Building All the Things For My Website!
Question:
username_0: ## Integrating and Building All the Things For My Website!<br>
This article is the second of a two part series, on the engineering behind my website To have beyond the best user experience possible, but also the most…<br>
<br>
October 9, 2018 at 09:09PM<br>
via Instapaper https://jiahao.codes/blog/integrating-and-building-all-the-things/ |
SeleniumHQ/docker-selenium | 602259764 | Title: W3C: false, not working with selenium hub (works with standalone)
Question:
username_0: ## 🐛 Bug Report
Error in tests running in docker selenium/hub with selenium/node-chrome, all tests using
`driver.actions()` or `driver.manage().window().setSize()`
fail with errors:
`UnknownCommandError: unknown command: Cannot call non W3C standard command while in W3C mode` and
`unknown command: unknown command: session/90d9a40cf2534e448ae6a2ad5f38c1bf/window/size`
After I switch to selenium/standalone-chrome, the same tests pass.
<!--
Please be sure to include an SSCCE (Short, Self Contained, Correct [compilable] example) http://sscce.org/
-->
<!-- NOTE
FIREFOX 48+ IS ONLY COMPATIBLE WITH GECKODRIVER.
If the issue is with Google Chrome consider logging an issue with chromedriver instead:
https://sites.google.com/a/chromium.org/chromedriver/help
If the issue is with Firefox GeckoDriver (aka Marionette) consider logging an issue with Mozilla:
https://bugzilla.mozilla.org/buglist.cgi?product=Testing&component=Marionette
-->
## To Reproduce
1. Capabilities:
Set chrome option W3C: false
2. Script: any tests using
`driver.actions()` or `driver.manage().window().setSize()`
Steps to reproduce the behavior (including the command to start the containers):
docker-compose file:
```yaml
version: '3.4'
services:
  selenium-hub:
    image: selenium/hub:latest
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:latest
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
```
## Expected behavior
[Truncated]
OS: <!-- Windows 10? OSX? -->
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
Docker-Selenium image version: <!-- 3, 3.141, 3.141.59-20200409 etc
Also provide the docker image id
-->
selenium/node-chrome latest c3454acccbb6 8 days ago 908MB
selenium/hub latest fc6accc9b9e2 8 days ago 263MB
Docker version:
Docker version 19.03.8, build afacb8b7f0
Docker-Compose version (if applicable):
docker-compose version 1.24.1, build 4667896b
Exact Docker command to start the containers (if using docker-compose, provide
the docker-compose file as well):
docker-compose up -d
Answers:
username_1: Can you please provide a complete self-contained script that reproduces the issue?
username_0: NVM I found the cause:
my protractor config had:
```js
'chromeOptions': {
  prefs: {
    download: {
      prompt_for_download: false,
      directory_upgrade: true,
      default_directory: path.join(process.cwd(), 'testData', 'downloads')
    }
  },
  w3c: false
},
```
fixed by changing to:
```js
'goog:chromeOptions': {
  prefs: {
    download: {
      prompt_for_download: false,
      directory_upgrade: true,
      default_directory: path.join(process.cwd(), 'testData', 'downloads')
    }
  },
  w3c: false
},
```
Status: Issue closed
|
department-of-veterans-affairs/caseflow | 418959806 | Title: BUG: Veteran shows Schedule hearing task on hold, no actions
Question:
username_0: Sal reported this Veteran needs their hearing request withdrawn, but actions aren't appearing for the Schedule Veteran task. It appears on hold. Need to investigate why this is happening.
appeals/3349547
Answers:
username_1: This appeal does not have a closest regional office. My theory is that this is one that was flagged with a "ValidateAddress" admin action before there was a zip code fallback. That action was deleted, but the status was not updated. May need to investigate if this affected other appeals.
username_1: Updated veteran's RO and AHLs and changed status to `assigned`
Status: Issue closed
|
symfony/symfony | 270899736 | Title: Report an asset issue
Question:
username_0: As I said here https://github.com/twigphp/Twig/issues/2577#issuecomment-341540954, that's really an issue. Assets should not add a leading slash; the path must be exactly the same as provided in code, otherwise the assets are not found.
```
$namedPackages = array(
'img' => new PathPackage('path/to/images', $versionStrategy),
);
```
Answers:
username_0: Thanks for the review. When I use {{ asset() }} it prints something like "/twig/css/styles.css"; that leading / is a problem and the CSS is not found. I defined the path like "twig/css". Assets should not add a leading /. The CSS path must stay intact without a leading /, something like "twig/css/styles.css". When I hardcode this path it works fine.
username_1: IMO that's expected and the base path passed to the `PathPackage` constructor must be given from the path of your entry point. You can optionally pass a context if your application is hosted under a subpath of your web root.
username_0: Please give an example of both ways. Thanks.
username_1: You just have to pass an object that implements the `ContextInterface` as the third argument when constructing the `PathPackage`. This interface requires a `getBasePath()` method where you return the base path to your application. If you use Symfony or any other application that makes use of the HttpFoundation component, you can make use of the `RequestStackContext` class which allows to use the current `Request` instance.
username_0: As you said that RequestStackContext implements ContextInterface with a getBasePath method, I did so and passed the instance as the third parameter to PathPackage. But there is no change yet in the CSS path; there is still a leading / that causes the CSS to not be found. What did I do wrong? Please advise.
```
use Symfony\Component\Asset\Context\RequestStackContext;
use Symfony\Component\HttpFoundation\RequestStack;
$namedPackages = array(
'css' => new PathPackage('themes/'.$theme_name.'/css', $versionStrategy, new RequestStackContext(new RequestStack())),
);
```
Status: Issue closed
username_1: Well, you not only need to pass an empty request stack to the context, but you need to put the current request onto the stack. Anyway, this is something you should solve using [one of the support channels](https://symfony.com/support). I am going to close here as we do not use GitHub for support. Thank you for understanding.
username_0: Ok, here I posted on SO would you please advise there? I appreciate your time.
https://stackoverflow.com/questions/47084035/how-to-remove-symfony-asset-leading-slash
@username_1
username_0: I did a print debug as in the code below and noticed it returns nothing. I don't know why. I created my own class implementing ContextInterface using HttpFoundation and it works fine now! Thank you. But now the question is: why does the code below return nothing?
```
use Symfony\Component\Asset\Context\RequestStackContext;
use Symfony\Component\HttpFoundation\RequestStack;
$con = new RequestStackContext(new RequestStack);
print($con->getBasePath());
```
@username_1
username_1: Well, you just pass an empty request stack to the context. So there is no request that could be used to determine the base path.
username_0: How to pass request stack? Please give example.
username_0: I did so
```
$stack = new RequestStack();
$stack->push(Request::createFromGlobals());
$context = new RequestStackContext($stack);
```
Then pass $context as the third parameter. It works fine now; I just wanted to ask whether this is correct usage and good practice, or whether it needs improvements?
@username_1 |
g0vhk-io/g0vhk_legco_web | 232830135 | Title: Data Parsing Util and Data Warehouse
Question:
username_0: May I know whether there is any data warehouse (or database) to store all of the LegCo documents (first maybe the parsable data)? Because in the `xmls` part, all the documents are downloaded from LegCo.
I would like to propose that we deploy a database (e.g. Mongo/Cassandra, MySQL, etc.) to store all the data parsed from `xmls`.
If so, I would be happy to help develop this part.
Answers:
username_1: It is currently stored in MySQL.
username_1: @username_0 shall I close this issue, or do you want to discuss further?
Status: Issue closed
|
ExpressionEngine/ExpressionEngine-User-Guide | 407354748 | Title: Duplicate search link names
Question:
username_0: **Description of the problem**
When searching for "global variables" you get three identical results called "Global Variables", two of the links are for forum variables and the other for normal template variables.
A bit confusing!
**Additional context**
Suggest renaming the titles used for the search links, say "Template Global variables" and "Forum Global Variables", anything to avoid confusion.
There may be other duplicate titles but I haven't found any so far.
Answers:
username_0: Found another duplicate: searching for "file" you get identical link names which go to different pages.
username_1: Thanks, @username_0. We'll get that fixed.
username_0: Hi Jordan
Good stuff, keep up the good work!
Rob
Status: Issue closed
|
JasonPuglisi/emmental | 407811915 | Title: Add brute force prevention
Question:
username_0: ### Acceptance Criteria
1. Users should not be able to make repeated unsuccessful login attempts.
### Notes
We should discuss how to best handle this. Probably by using failed login logs from the database per IP address.
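A rough sketch of what the per-IP check could look like (the `failed_logins` table and `db.query` interface are assumptions, not existing code):
```js
// Hypothetical helper: has this IP failed too often in the recent window?
async function isLockedOut(db, ip, { windowMs = 15 * 60 * 1000, maxAttempts = 5 } = {}) {
  const since = new Date(Date.now() - windowMs);
  const [row] = await db.query(
    'SELECT COUNT(*) AS failures FROM failed_logins WHERE ip = ? AND created_at > ?',
    [ip, since]
  );
  return row.failures >= maxAttempts;
}
```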
Status: Issue closed |
swapmyvote/swapmyvote | 527560721 | Title: Make swap expiry time configurable
Question:
username_0: Can be configured for M0, and parameterised for M1
Answers:
username_1: @username_0 can you please explain what you mean by the difference between 'configure' and 'parameterise'? I'm thinking they both mean 'make an environment variable that controls this' but they seem to be different things for you.
username_0: This came out of a conversation with @username_2 and he can probably explain better, but AIUI, we can easily configure it right now by changing a variable in a file, which is an M0 issue (we want it set to 48 hours for launch)
But I think we also want, for M1, to be able to set it 'within the system' - I guess higher up the stack somewhere
username_2: No we want it to be an environment variable from the beginning.
username_1: @username_2 picking this up unless I hear from you that it's already under way
username_2: Go for it!
username_1: @username_2 @username_0
https://github.com/swapmyvote/swapmyvote/pull/180#issuecomment-559493003
Status: Issue closed
|
Vasfed/flot-rails | 171013837 | Title: Bump version and cut a release
Question:
username_0: Can you bump the version to 0.0.7 and release the new version to rubygems.org? Alternatively, I'm happy to help if you want to grant me access to this repo and the gem. Thanks!
Answers:
username_0: Here's why I want a new release: https://dependencyci.com/github/projecthydra/sufia/builds/2
username_1: Released 0.0.7
Status: Issue closed
username_0: Thanks, @username_1! |