repo_name | issue_id | text
---|---|---
GoogleCloudPlatform/nodejs-docs-samples | 700182135 | Title: iot manager list command with null information
Question:
username_0: Running `node manager.js listDevices my-node-registry`, I get a list of my devices and other information; however, some of the information is null. For example:
```
Current devices in registry:
...
Device 3: { credentials: [],
metadata: {},
id: 'pcTest',
name: '',
numId: '123456789123456',
lastHeartbeatTime: null,
lastEventTime: null,
lastErrorTime: null,
lastErrorStatus: null,
config: null,
lastConfigAckTime: null,
state: null,
lastConfigSendTime: null,
blocked: false,
lastStateTime: null,
logLevel: 'LOG_LEVEL_UNSPECIFIED',
gatewayConfig: null }
Device 4: { credentials: [],
metadata: {},
id: 'pcTest2',
name: '',
numId: '123456789123456',
lastHeartbeatTime: null,
lastEventTime: null,
lastErrorTime: null,
lastErrorStatus: null,
config: null,
lastConfigAckTime: null,
state: null,
lastConfigSendTime: null,
blocked: false,
lastStateTime: null,
logLevel: 'LOG_LEVEL_UNSPECIFIED',
gatewayConfig: null }
```
Why are some parameters null when the information (like heartbeat time) exists on Google Cloud Platform? Is it a problem of IAM permissions or a limit of the API?
Thanks
Answers:
username_1: Hi there! Would you mind sharing your code to triage the issue? Thanks!
username_0: Hi:
The code is the example code from iot/manager in this repository. I just ran the command with my own registry parameters.
username_1: Hi @username_0,
Without knowing how many devices or requests you've made, I can't say for sure if it's a quota issue. I've linked the documentation [here](https://cloud.google.com/iot/quotas) just in case.
A couple of ideas I have off the top of my head are: (i) have you confirmed the devices are correctly recording the information, and are storing it? (ii) perhaps you needed to wait for the information to become available?
I'm also involving @sgreenberg who may know more!
username_2: @username_0 It is working as intended and has nothing to do with IAM. It just indicates that these variables are not initialized.
username_0: Hi:
Attached are 3 images: the first is the listDevices command (and the reason for this ticket); the second is getDevice, which shows information about a device and does return the data, for example lastHeartbeatTime; and finally a screenshot from GCP where I can confirm that the data exists.
I need an API that checks whether all my devices are alive (without sending any additional state or telemetry message, just the MQTT keep-alive ping, i.e. the heartbeat), and listDevices is my best option; my second option is to use listDevices to get all devices and check them one by one with getDevice. I checked the example code and it is fine, so I think the problem may be on the GCP or Google API side.
What do you think: could the Google API be the problem?
(screenshots omitted)
username_2: @username_0 Hmmm, you are right. Let me investigate a bit more to see whether it is service-related or sample-related.
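For reference, one likely explanation (not confirmed in this thread): the Cloud IoT `devices.list` API returns only the `id` and `numId` fields unless a `fieldMask` is supplied, which would produce exactly the nulls shown above. A minimal sketch of requesting the extra fields, written against the Python `google-cloud-iot` client as an assumption (the sample in question uses Node, which takes an equivalent `fieldMask` option):
```python
from google.cloud import iot_v1

client = iot_v1.DeviceManagerClient()
parent = client.registry_path("my-project", "us-central1", "my-node-registry")

# Without a field mask, list_devices populates only id and numId;
# ask for the timestamp fields explicitly.
devices = client.list_devices(
    request={
        "parent": parent,
        "field_mask": {"paths": ["last_heartbeat_time", "last_event_time"]},
    }
)
for device in devices:
    print(device.id, device.last_heartbeat_time)
```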
Status: Issue closed
username_3: Looks like you were able to resolve the issue. Closing, please re-open if needed. |
Multiverse/Multiverse-Core | 648043617 | Title: Individual chats per world
Question:
username_0: Suggestion:
I've looked for other plugins that would do this. It's self-explanatory: each world has its own chat, and you can't see the entire server chat. I've seen a plugin that does this but it isn't maintained, and I think it would be a great addition to MV core.
Answers:
username_0: So I've tried the unmaintained plugin: https://www.spigotmc.org/resources/perworldchat.29897/ . It half works: you can list the worlds that should share a single chat, but it doesn't work when it comes to connected worlds (a world with an End and a Nether). If you're going to use their source code, take this into consideration.
username_1: There are many chat plugins out there that have this feature, ranging from plugins like Chat Control, Chat Manager, DeluxeChat, VentureChat, and probably more. So there isn't much use for MV to create a chat plugin.
Status: Issue closed
|
ContinuumIO/anaconda-issues | 211012278 | Title: cv2.imshow() is not working on ipython console !!!
Question:
username_0: /Users/travis/build/skvark/opencv-python/opencv/modules/highgui/src/window.cpp:583: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage
Answers:
username_1: Same problem. I have tried two methods:
1. Recompile OpenCV with GTK
2. `conda install -c https://conda.anaconda.org/menpo opencv3`
Neither works for me.
username_0: @username_1 doesn't work
username_2: Mine is the same. Can anyone help me, please?
error: /io/opencv/modules/highgui/src/window.cpp:668: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvStartWindowThread
username_1: I have found the reason: I had multiple OpenCV installations. Use `cv2.__file__` to confirm which OpenCV is being used, then uninstall the one that is not used.
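A quick way to run that check, using only standard OpenCV introspection calls:
```python
import cv2

# Confirm which OpenCV build Python is importing; a pip wheel built
# without GUI support can shadow a GTK-enabled build and trigger the
# cv2.imshow() error above.
print(cv2.__file__)
print(cv2.getBuildInformation())  # look for GTK under the "GUI" section
```
|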
canjs/bit-docs-html-canjs | 436428947 | Title: Clean up the design
Question:
username_0: - [ ] Get rid of Lato
- [ ] Either change the direction of the vertical shadow between the left sidebar & content, or make it a solid gray line?
- [ ] Add spacing to the bottom of the left sidebar to avoid this:
<img width="300" alt="Screen Shot 2019-04-23 at 3 45 39 PM" src="https://user-images.githubusercontent.com/10070176/56620768-e9226000-65de-11e9-94c8-23be2fa20d96.png">
- [ ] Fix the dropdown at 1000px wide
<img width="572" alt="Screen Shot 2019-04-23 at 3 15 59 PM" src="https://user-images.githubusercontent.com/10070176/56620821-153de100-65df-11e9-932f-8acdc1fb1ae9.png">
- [ ] Vertical alignment: make the logo, version, & links share the same baseline? [And same thing with the other stuff in the nav?]
<img width="246" alt="Screen Shot 2019-04-23 at 3 48 59 PM" src="https://user-images.githubusercontent.com/10070176/56620918-6a79f280-65df-11e9-988b-ccf81533db6b.png">
- [ ] Horizontal alignment: tighten the space between the main nav links, use padding instead of margin so the link target area is bigger, & make the spacing between things more cohesive (right now it’s _30px | logo | 15px | version | 60px | nav links_ on the left and _search input | 20px | By Bitovi | 10px_ on the right)
- [ ] Make comps for what the “By Bitovi” logo would look like with black text (just By black, By gray and Bitovi black, & both black)
- [ ] The X in the search input feels huge to me but it looks like it’s the same size as the magnifying glass. Maybe consider making it smaller, or less thick, or…?
- [ ] Consider how we could make the hover & selected states more consistent
<img width="134" alt="Screen Shot 2019-04-23 at 4 03 15 PM" src="https://user-images.githubusercontent.com/10070176/56621459-63ec7a80-65e1-11e9-96ce-4d995c8266e0.png">
Answers:
username_1: RE: sidebar remove shadow and add gray line
<img width="1920" alt="Screen Shot 2019-04-24 at 7 35 49 AM" src="https://user-images.githubusercontent.com/4571457/56656587-e3bb2900-6663-11e9-88d9-6ed33fcf7dad.png">
Option 2 - make sidebar background a light gray color
<img width="1920" alt="Screen Shot 2019-04-24 at 7 36 13 AM" src="https://user-images.githubusercontent.com/4571457/56656597-e87fdd00-6663-11e9-91d5-0fc4102eedb5.png">
username_1: By Bitovi mods
(comp images omitted)
Status: Issue closed
|
BabylonJS/Spector.js | 925905860 | Title: Seeing only the last frame with Chrome extension
Question:
username_0: Hey, when using Chrome's SpectorJS it shows only the last frame, and empty ones before it.
Any suggestions for solving it?
(screenshot omitted)
Answers:
username_1: The frame buffers are in this case not from a renderable type.
Do you have a repro to confirm?
username_0: @username_1
How do you know that?
As can be seen in the image, the last frame is created from several meshes, but it was not possible to see, within the frames, the process that created it (for example, which mesh was rendered before the others).
username_1: This is one of the only reasons we would not capture a frame buffer. It might happen for a shadow map, for instance. You could look into the frame buffer setup for the draw call to confirm.
Status: Issue closed
|
andreamazza89/amaze | 573275525 | Title: Take a step from within the maze rather than looking at a map of it
Question:
username_0: At the moment, the `takeAStep` api is from the point of view of someone looking down at the maze map.
This issue is to expose another endpoint, which allows a player to take a step from within the maze; this will require the app to track the player's direction.
In summary, after this story we should have something like:
- `takeAStepOnTheMap` -> which takes an absolute direction of `North | West | South | East`
- `takeAStepInTheMaze` -> which takes a relative direction of `Forwards | Backwards | Right | Left`
Answers:
username_0: Could be cool for this to be the same Mutation, but with a parameter that specifies which way around you want to communicate the direction. |
joferkington/mplstereonet | 896062939 | Title: Problem with ax.set_azimuth_ticks()
Question:
username_0: I was updating some stereonet subplots in Jupyter (that used to look fine) and after running the cell the degrees around the stereonets did not plot correctly as they used to. Instead of plotting around the edge of the stereonets, they plotted all around the frame of the four subplots. I then tried on a single stereonet and it was better, but the degrees were positioned quite far away from the edge of the stereonet and I could not adjust that position.
Answers:
username_1: I encountered exactly the same issue today, and I am using Spyder. The last time I used mplstereonet was in December 2020 and back then the error did not occur. Here is an example of the ticks and labels:

username_2: Ok, took me a while to figure out what was going on. I can suggest two workarounds to fix this issue:
1. If "ax" is your stereonet axis, add the line "ax._polar.set_position(ax.get_position())" before calling plt.show()
2. Downgrade matplotlib to version <3.4.0
mplstereonet makes a hidden polar axis within the stereonet axis as a bit of a kludge to add azimuth tick labels.
The issue is in mplstereonet/stereonet_axes.py line 291. Prior to matplotlib 3.4.0, fig.add_axes() used to detect if you were attempting to create Axes with the same keyword arguments as already-existing Axes in the current Figure, and if so, it would return the existing Axes. Now it will create a new axis. So changes to the size of the stereonet axis aren't matched by changes in the polar axis anymore. Workaround 1 manually matches any size changes that have occurred in the parent axis.
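A minimal sketch of workaround 1 in context (the plotted plane is arbitrary example data):
```python
import matplotlib.pyplot as plt
import mplstereonet  # registers the 'stereonet' projection

fig, ax = plt.subplots(subplot_kw={'projection': 'stereonet'})
ax.plane(315, 30)  # example strike/dip
ax.grid()

# Re-sync the hidden polar axis (used for the azimuth tick labels)
# with the stereonet axis before rendering.
ax._polar.set_position(ax.get_position())
plt.show()
```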
The matplotlib functionality change is documented here:
https://matplotlib.org/stable/users/prev_whats_new/whats_new_3.4.0.html#changes-to-behavior-of-axes-creation-methods-gca-add-axes-add-subplot |
ZJONSSON/node-etl | 429391888 | Title: ElasticSearch should retry with backoffs if transport error
Question:
username_0: If too much data gets written to ES, it will sometimes fail with this error:
https://stackoverflow.com/questions/37855803/rejected-execution-of-org-elasticsearch-transport-transportservice-error
It'd be nice if this were handled by the node-etl library and automatically retried (with backoff).
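The requested behavior is the standard exponential-backoff retry pattern. A generic sketch follows, written in Python purely for illustration (node-etl itself is a Node library, and the broad exception handler is a placeholder for the specific transport error):
```python
import random
import time

def with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry an operation with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:  # in practice, catch only the ES transport error
            if attempt == max_attempts - 1:
                raise
            # Waits 1s, 2s, 4s, ... plus up to 1s of jitter.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 1))
```
|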
Azure/azure-functions-host | 754232401 | Title: proxies route value = 'null' when running inside kubernetes
Question:
username_0: I am using Azure Functions proxies to handle CORS requests in a Kubernetes (K8s) cluster, and my `proxies.json` looks like this.
```
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"proxy1": {
"matchCondition": {
"methods": [
"OPTIONS"
],
"route": "{*path}"
},
"responseOverrides": {
"response.statusCode": "200",
"response.headers.Access-Control-Allow-Origin": "*",
"response.headers.Access-Control-Allow-Methods": "GET, PUT, POST, DELETE, HEAD, OPTIONS",
"response.headers.Access-Control-Allow-Headers": "*",
"response.headers.Access-Control-Allow-Credentials": "true"
},
"debug": true
}
}
}
```
I have built my Docker container, and when I run the container locally the proxy works as expected when the OPTIONS calls are made; I see the following output:
```
dbug: Microsoft.AspNetCore.Routing.RouteBase[1]
Request successfully matched the route with name 'proxy1' and template '{*path}'
info: Function.proxy1[1]
Executing 'Functions.proxy1' (Reason='This function was programmatically called via the host APIs.', Id=27a6aa40-ca1f-4a56-b1b5-23b9721d1793)
info: Function.proxy1.User[0]
Executing request via Azure Function Proxies
info: Function.proxy1[2]
Executed 'Functions.proxy1' (Succeeded, Id=27a6aa40-ca1f-4a56-b1b5-23b9721d1793, Duration=2ms)
dbug: Microsoft.AspNetCore.Routing.RouteConstraintMatcher[1]
```
However, the same container running in my K8s cluster gives this output when the OPTIONS request is made:
```
dbug: Microsoft.Azure.WebJobs.Script.Workers.Rpc.RpcFunctionInvocationDispatcher[0]
FUNCTIONS_WORKER_RUNTIME=dotnet. Will shutdown all the worker channels that started in placeholder mode
info: Host.General[316]
Host lock lease acquired by instance ID '0000000000000000000000003FA04801'.
dbug: Microsoft.AspNetCore.Routing.RouteConstraintMatcher[1]
Route value '(null)' with key 'httpMethod' did not match the constraint 'Microsoft.AspNetCore.Routing.Constraints.HttpMethodRouteConstraint'
dbug: Microsoft.AspNetCore.Routing.RouteConstraintMatcher[1]
Route value '(null)' with key 'httpMethod' did not match the constraint 'Microsoft.AspNetCore.Routing.Constraints.HttpMethodRouteConstraint'
dbug: Microsoft.AspNetCore.Routing.RouteConstraintMatcher[1]
Route value '(null)' with key 'httpMethod' did not match the constraint 'Microsoft.AspNetCore.Routing.Constrai
```
You can see that the route value is '(null)' in the output. I am guessing this must be something to do with the load balancer? Does anyone know why the route value would be null when running inside Kubernetes?
Answers:
username_0: @pragnagopa any update on this?
username_1: Hi @username_0, apologies for the delayed response. Were you able to find a solution? Do let us know if a follow-up is required. The issue was somehow lost in the triage.
asrob-uc3m/robotDevastation-developer-manual | 218310809 | Title: Introduce programming style conventions
Question:
username_0: This started as an idea while taking a closer look at Emacs modelines ([robotDevastation#96](https://github.com/asrob-uc3m/robotDevastation/issues/96#issuecomment-290517327)); it could be expanded to encompass further coding style rules.
Answers:
username_0: I find the [official documentation](https://git-scm.com/docs/gitattributes) a bit too sparing of details in this respect, but it might be possible to handle/enforce the indentation thing in a `.gitattributes` file (see [YARP/.gitattributes](https://github.com/robotology/yarp/blob/v2.3.68/.gitattributes)).
username_0: Consider adding header inclusion guidelines:
* asrob-uc3m/robotDevastation@be2a7de
* asrob-uc3m/robotDevastation@e109550
username_1: Maybe it's okay to point to https://github.com/roboticslab-uc3m/best-practices for now. |
PaddlePaddle/Paddle | 269882299 | Title: cnn_output_size Calculation error using dilation in python api
Question:
username_0: The original function in the Python API is as follows:
```
def cnn_output_size(img_size, filter_size, padding, stride, caffe_mode):
output = (2 * padding + img_size - filter_size) / float(stride)
if caffe_mode:
return 1 + int(math.floor(output))
else:
return 1 + int(math.ceil(output))
```
This doesn't take dilation into account.
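For reference, a sketch of the helper with dilation folded in via the standard effective-kernel-size formula; the added `dilation` parameter and its default are assumptions, not the actual fix that landed:
```python
import math

def cnn_output_size(img_size, filter_size, padding, stride, caffe_mode,
                    dilation=1):
    # Dilation inflates the receptive field of the kernel:
    # effective_filter = dilation * (filter_size - 1) + 1
    effective_filter = dilation * (filter_size - 1) + 1
    output = (2 * padding + img_size - effective_filter) / float(stride)
    if caffe_mode:
        return 1 + int(math.floor(output))
    return 1 + int(math.ceil(output))
```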
Status: Issue closed |
radicaled/dart-tools | 79513531 | Title: Error reporting's red is difficult to read
Question:
username_0: In styles/dart-tool.less for error reporting, you are using `@text-color-error`, but this color is difficult to read on Atom's dark UI theme. Maybe change it to something lighter like `#FD4848`; I've found that easier on my eyes.
Answers:
username_1: 0.9.18 uses the linter package, which has a different linting UI. Is the color still an issue? |
stephanstapel/ZUGFeRD-csharp | 667657723 | Title: ram:BasisAmount in comfort profile
Question:
username_0: The validation of the XML says that the ram:BasisAmount (in SpecifiedSupplyChainTradeAgreement) will not be evaluated in the COMFORT profile.
In previous versions the BasisAmount was not written either.
Status: Issue closed |
ampproject/amphtml | 717728457 | Title: [amp-story] Allow players to control system UI controls
Question:
username_0: For the amp-story-player, we would like to control the position & visibility of UI elements of the amp-story's system layer (speaker, sharing...).
Since amp-story knows best where to position them in different scenarios (page-attachment open, tweet expanded, etc...), it's appropriate for the player to send a message to the story with the required configuration, and the story takes care of the layout logic.
#### Player to story message
`player.sendRequest('updateSystemUI', config);`
#### Config API
(based on https://github.com/ampproject/amphtml/issues/30031)
```
“header” (optional)
“buttons”: array of objects, each one containing:
“name”: a string with the name of the button. If the name is part of the default set of buttons (close / (un)mute / share) the following fields are optional but overridable.
“icon”: (required when not part of the default set of buttons) accepts URLS and `data:image` URIs like base64 images and svgs.
“event”: (required when not part of the default set of buttons) custom event that will be dispatched when clicked.
“visibility”: “hidden” or “visible”
“position”: “end” or “start” string values
“footer” (coming soon - not available for dev preview)
“logo” (coming soon - not available for dev preview)
```
Example:
```
{
"header": { // (Optional)
"buttons": [ // Required.
{
"name": "mute", // If name is part of default set: the default event/behavior and icon will be used, but can be overridden.
"icon": "icon.jpg", // Supports URLS and `data:image` URIs like base64 images and svgs.
},
{
"name": "close",
"position": "start", // Values: "start" or "end".
},
{
"name": "customButton", // If not part of default set, the following fields will be required.
"icon": "data:image/svg+xml;charset=utf-8,<svg ... ",
"event": "customEvent", // Event that will be dispatched when clicked.
}
]
}
}
```
#### Listening to Custom Events
Players could then subscribe to events coming from the buttons using the existing viewer message `documentStateUpdate` and checking if it is a player UI event.
```
playerMessaging.registerHandler('documentStateUpdate', (event, data) => {
if (data.state === 'PLAYER_UI_EVENT') {
console.log(data.value); // `amp-story-player-close` (Close button was clicked!)
}
});
```
@ampproject/wg-stories
Answers:
username_1: nit: should we do the listening for `documentStateUpdate` and re-trigger a javascript event? That way the only code necessary is standard event listening (either vanilla JS or from the publisher's framework of choice)
If the `documentStateUpdate` event doesn't match one specified by the UI Controls API, we can just continue to propagate it.
username_0: Yep, that's what I had in mind for the `amp-story-player`. I didn't mention it in this issue because I think @username_2 wanted this to be something that any publisher could reference and implement in their own viewer/player.
username_0: Sounds good. Landed on controls, will update all the APIs.
username_2: That all sounds good!
We can make the footer decision later once we know more about our plans but it's great that this API will let us pass its configuration as well, if needed.
username_0: Closed by #30502
Status: Issue closed
|
BEEmod/BEE2-items | 198389141 | Title: Cube Dropper compile failure
Question:
username_0: Looks like something with the Cube Colorizer. What does that even do?
```
[ERROR] cond.core.check_all(): Error in condition:
Traceback (most recent call last):
File "F:\Git\BEE2.4\src\conditions\__init__.py", line 463, in check_all
File "F:\Git\BEE2.4\src\conditions\__init__.py", line 348, in test
File "F:\Git\BEE2.4\src\conditions\__init__.py", line 515, in check_flag
ValueError: "coloredcube" is not a valid condition flag!
```
Answers:
username_1: It lets you colour cubes, so they have a coloured stripe on them to make it possible to tell different droppers and cubes apart (for puzzles where you need to fizzle cubes strategically).
Status: Issue closed
username_0: But what if I don't want that? The puzzle only has 1 cube.
username_1: It only does something when you have that item attached to the cube - except that your compiler version doesn't have the code for it, so it errors out. |
socrata/opendatanetwork.com | 123473065 | Title: idea: use stackexchange for support and to build community
Question:
username_0: I feel like the site needs a way for people to a) ask questions as well as b) provide feedback.
For a few reasons, I think (a) can be covered by StackExchange much like `openfda` has leveraged it. We can also start to try to build a community of people to answer each other's questions and drive organic traffic to the ODN site. I, personally, am willing to monitor the `opendatanetwork` tag on SE if we decide to do this.
For (b), I think a Github Issues or email would be better.
Answers:
username_1: What's the status of this idea now that ODN is open source and Github issues is now used on dev.socrata.com?
Status: Issue closed
|
wfxr/forgit | 896882085 | Title: when using "ga" to git add: zsh:11: parse error near `else', no diff in right box
Question:
username_0: <!-- ISSUES NOT FOLLOWING THIS TEMPLATE WILL BE CLOSED AND DELETED -->
<!-- Check all that apply [x] -->
## Check list
- [x] I have read through the [README](https://github.com/username_2/forgit/blob/master/README.md)
- [x] I have the latest version of forgit
- [x] I have searched through the existing issues
## Environment info
- OS
- [x] Linux
- [ ] Mac OS X
- [ ] Windows
- [ ] Others:
- Shell
- [ ] bash
- [ ] zsh
- [x] fish
## Problem / Steps to reproduce
In any git repo that I've tried, when I run `ga`, I don't see a diff in the right box, just
```
zsh:11: parse error near `else'
```
I installed with the `omf` method
Answers:
username_1: will you try typing `fish` before you try `ga` again, just to be sure you're running it in fish? It's strange to see a zsh error while running the fish version of this plugin! I suspect a configuration error.
Is this just with `ga`, or with other commands too?
username_0: I am running fish for sure, but I see that the SHELL variable is set to /bin/zsh, and if I try to set it with fish as a universal variable (that persists over reboots), it tells me it's overshadowed by a global variable (that is set for all sessions but doesn't persist) of the same name. So somewhere in my config something is setting that variable. If I just set the variable for that session only I can override the global value and `ga` works without any issues. I'll hunt that configuration down on my side and I'll close this issue
Status: Issue closed
username_1: @username_2 Have you ever seen anything like that?
Good luck @username_0, be sure to post a comment if you figure it out, may help someone in the future :)
username_2: I haven't seen this error. The error message looks like the `fish` plugin was sourced in the `zsh` shell
for some reason.
username_1: Very strange, for sure!!
username_0: I couldn't find why the SHELL variable was set as a global variable anywhere. I have a setup at work where I can't chsh (it gets reset on reboots) so I tell alacritty and guake to run fish instead of the default shell. Maybe zsh still gets called and sets itself as SHELL, or maybe that's just set at login because my default shell is zsh.
My solution was to put `set -gx SHELL /usr/bin/fish` in `~/.config/fish/config.fish` to overwrite it, and it works well. Maybe forgit or something called by forgit reads SHELL at some point to know what it's running on, even if it has been installed via omf? I imagine my case of having SHELL not match the shell that's running is not very common.
username_1: Very strange! Well, glad you got to the bottom of it. Don't hesitate to reach out if you find any more issues
username_1: @username_0, interestingly enough, this just started happening with me too when I switched to a new job!
I will dig in at some point and see if we can fix this on the forgit side. I can't figure out why I can't update my shell with `chsh`. I figured that would fix it. I'm going to use your hack for now.
Super strange
username_1: @username_0 I think I have found the fix! It seems like you have to do a little legwork for fish to be identified as a shell
First, make sure fish is listed as a shell in /etc/shells. Here is a quick one-liner that can help you do that.
```
which fish | sudo tee -a /etc/shells
```
Then you need to change the shell for your user
```
chsh -s (which fish)
```
You then need to close out of your terminal session, but when you open a new one, everything should be good; it was for me!
Will you confirm this works and get back to me?
username_0: I tried but it didn't work for me. I don't think I can change my shell at my work, or rather the shell is always overwritten when I reboot. I don't have that issue on my personal computer though, and fish is recognized as a shell all on its own. I'll just keep this little hack for now
Status: Issue closed
|
agda/agda | 1007379268 | Title: The Agda mode’s version (2.6.3) does not match that of agda (2.6.2).
Question:
username_0: Hey guys,
I've installed the agda2-mode from melpa. Looking inside `/home/username_0/.emacs.d/elpa/agda2-mode-20210903.1114/agda2-mode.el` I see this: `(defvar agda2-version "2.6.3"` The Agda version available via cabal (and listed on Hackage) is 2.6.2.
How can I make sure that I am using matching versions? Is there a way to downgrade agda2-mode to 2.6.2?
Thanks,
Vladimir
Answers:
username_1: Installing agda should also install agda-mode. You shouldn't need to use melpa for that.
username_2: I'd like to ask `melpa` to take off `agda2-mode`, since `agda-mode` is always installed together with `agda`.
@username_0: Does @username_1's reply solve your problem?
username_0: I am not sure what you mean by that. Are you saying that installing agda with cabal will also install agda-mode for Emacs? How does it know about the peculiarities of my Emacs setup? Which version of agda-mode does it pick then?
What I don't understand is why the master branches of agda and agda-mode would use different versions.
username_2: The stable version of Agda on Hackage is 2.6.2. So if you really need an experimental version of Agda, then you need to `git clone` this repo and compile inside the repo.
username_0: I see, thanks, guys. That MELPA alternative is quite confusing; maybe it could be mentioned somewhere in the docs? I do think, though, that the MELPA package just grabs master and installs it, so if the master version of agda-mode matched the master agda version, it would all work as expected.
username_2: Yeah, it does not make sense to install `agda-mode` from `melpa` at all, since it just downloads the `*.el` files directly from `master`. I have asked them to take it off.
username_0: What is the reason for having different versions on the master branch between agda and agda-mode? Agda master is 2.6.2, while agda-mode on the same branch is 2.6.3. Shouldn't they be the same version, with 2.6.3 being "experimental" or something, but not "master"?
username_1: Agda master is 2.6.3: https://github.com/agda/agda/blob/master/Agda.cabal#L2
username_0: Oh, I see. Maybe the issue then is that it's not been updated on hackage yet:

username_3: Hackage does not have Agda 2.6.3 as it has not yet been released. The `master` branch always corresponds to the version of Agda that's currently under development.
username_0: I understand. I guess the MELPA installer is then making some wrong assumptions: it should not pick the latest (master) but match the version on Hackage, which is a bit harder to do, I can imagine...
username_3: This is exactly why installing `agda` also automatically installs `agda-mode`, so you do not have to bother making sure you have the right version.
username_1: And would still be wrong if people install an older version of Agda
(via their OS's package manager for instance).
username_0: If everybody stuck to a convention that is easiest to maintain across many standards, a custom installer would not be necessary; for example, if master always matched the version on Hackage. I can imagine that there are reasons not to do so, which resulted in yet another standard (a custom installer). I am not saying it's wrong; it's commonplace. But it's not particularly good either, from the point of view of modularity and separation of concerns, for example.
username_4: When the Emacs mode was made available via MELPA @username_5 wrote that he was "available for any questions or concerns on this" (#3360).
username_5: If I understand correctly, the issue now is that:
1. melpa fetches agda-mode from master.
2. the user installs stable agda from hackage (maybe cabal install or something).
3. master agda-mode is incompatible with the hackage version.
So a solution would be to change the recipe on MELPA not to use master, but instead to use a stable version as determined by the Agda project; for example, the release tarball may work (we'd need to filter out nightlies).
I'll have to look into the recipe system to see if that's possible.
It may even be possible to modify the recipe to fetch from Hackage (although I suspect the releases on GitHub are easier).
Please let me know if you have any other questions or concerns I may have forgotten to address here :slightly_smiling_face:
username_1: Users may install an older version than the latest available on Hackage
e.g. because they are working with a textbook that is pinned to a specific
version. AFAIU whatever decision is made on melpa's side will be incompatible
with that scenario.
I'm not sure agda-mode needs to be on melpa at all: as we have seen, it is
tied to the corresponding version of Agda & automatically installed when Agda is.
username_5: First of all I'd like to point out that this hasn't been an issue to date.
The issue opened here is for a difference in minor versions,
which will show up in the output path but, aside from being a little confusing, shouldn't cause issues.
These patches have landed since I put it on melpa:
+ https://github.com/agda/agda/commit/c79e052a984b5775f2652867ad14c164c89cfdf8#diff-e77962711f1ff4164544af64cc9f08419e5ffc8e33911d84c186998beb215e65
+ https://github.com/agda/agda/commit/98d7f27e8bc5fde716aad7aac62c57080f5564ef#diff-e77962711f1ff4164544af64cc9f08419e5ffc8e33911d84c186998beb215e65
+ https://github.com/agda/agda/commit/a38bf00d7192aad854eb7f434ddd76eca476f647#diff-e77962711f1ff4164544af64cc9f08419e5ffc8e33911d84c186998beb215e65
+ https://github.com/agda/agda/commit/997ca3a842ab743cce69f6aeaef604605644b904#diff-e77962711f1ff4164544af64cc9f08419e5ffc8e33911d84c186998beb215e65 <-- only this one would be noticed
+ https://github.com/agda/agda/commit/1f75261df2dc085dcb0235f6dff346ec62120836#diff-e77962711f1ff4164544af64cc9f08419e5ffc8e33911d84c186998beb215e65
It seems a bit extreme to remove agda-mode from melpa for a hypothetical
issue which hasn't caused any reported problems.
---
But, if you're writing a textbook,
I suggest you use a dedicated tool for software pinning,
like nix, docker or vagrant.
I don't think it's reasonable to assume a version conflict will happen
to *just* the emacs agda-mode and agda.
Agda, like any other great software project, is built on the shoulders of many giants.
For example if using the haskell backend you depend on whatever version of ghc,
and then whatever version of libc for the runtime executable, dynamically linked.
I just named some of many of such dependencies,
You can have conflicts on all of these.
Unless of course if agda is statically linked binary,
but it's [not](https://github.com/agda/agda/blob/master/src/full/Agda/Compiler/MAlonzo/Compiler.hs#L208)
cabal will not really help you with this because it's not designed to pin a software system.
cabal is a tool for developers to build haskell software.
It assumes you know how to resolve version conflicts and puts all of
the burden on the user.
For example there is no way for cabal to express a version of libc to be used,
or a version for emacs.
If you're speaking in terms of decades, I wouldn't consider the emacs
elisp api that stable
(which is why the melpa maintainer insisted on adding emacs versions).
On wayland for example the [elisp api changes](https://lwn.net/Articles/843896/).
nix, docker and vagrant can pin software in such a way that this isn't an issue.
I'd be happy to help you setup a nix shell for a book if you're writing
that.
Nixos simply abosrbs the entire melpa package archive,
*and* the hackage archive to provide a consistent world state.
Removing the agda-mode package from melpa would break that.
But it would also not really protect users from version
conflicts because some of the many native dependencies may
become incompatible.
Besides most emacs users are sort off aware of potential
versions mismatches when using melpa,
which is why this particular issue was opened up in the first place
(why would you otherwise check?).
Trying to protect this class of users by making it more
difficult to install agda-mode seems ineffective.
I didn't mean to write an essay,
but let me know if I missed some concerns. 🙂
username_1: I suggest you use a dedicated tool for software pinning,
like nix, docker or vagrant.
The tool we use is `cabal` and it installs both `agda` and `agda-mode`.
I don't know of anyone advising users to install `agda-mode` from melpa
instead of using the one provided by the install.
username_4: When you write "`agda-mode`" above, do you mean the Emacs Agda mode or the Haskell program called `agda-mode`? I assume that MELPA is only used to install the Emacs Agda mode, and not the Haskell program.
Is it not possible to have several (released) versions of the Agda Emacs mode in MELPA?
username_6: I'd be happy to learn about a convention that names the branch for the latest release. Usually `master` isn't the latest release, but the _head_, the latest development version. At least in the Haskell universe, this has been commonly adopted practice.
The difference is only when to bump the version:
1. Some leave the version number on `master` the same as the latest release until the next release. This does not reflect reality and is confusing for all that install from `master`: The version number promises a feature set described in the notes of the latest release, but the feature set has actually moved on.
2. Thus, we bump the version number on `master` immediately after the release. This reflects the reality that the version on `master` might differ in its feature set from the released version.
Whenever I encountered a project doing 1., the developers never had strong arguments for their practice and were easily convinced to switch to style 2.
I acknowledge that there is no convention to name the branch with the latest release. What often works is to look for the `vN.M.L` tags and pick the one with the highest version number `N.M.L`.
username_6: I am not familiar with `nix`, so help is appreciated!
Recently @username_7 contributed a nix flake, but it does not include the `agda-mode`:
https://github.com/agda/agda/blob/78ea6d259b10d4751f96924e418dbc1c8cd8b638/flake.nix#L12
I think it would be nice if we maintained a nix-description of the environment that Agda assumes. Then you could restore it when you fetch an older version of Agda from the repo (or maybe from stackage/hackage).
username_7: For the most part, this is just the cabal file. If you want something more concrete then, assuming we test the flake build (ideally including the test suite) on CI, the combination of flake.nix + flake.lock would suffice for this.
mvandersteen/ha-jemenaoutlook | 357891253 | Title: Error while setting up platform energyeasy
Question:
username_0: Getting the following stack trace.
My sensor data is as per your example with my username and password added. My username is an email address so I've enclosed the username with single quotes (').
I'm using this for https://energyeasy.ue.com.au/ which is using the same platform as https://electricityoutlook.jemena.com.au/
The only thing I've modified is `HOST = 'https://energyeasy.ue.com.au'`.
```
2018-09-07 11:54:44 ERROR (MainThread) [homeassistant.components.sensor] Error while setting up platform energyeasy
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/homeassistant/helpers/entity_platform.py", line 128, in _async_setup_platform
SLOW_SETUP_MAX_WAIT, loop=hass.loop)
File "/usr/local/lib/python3.6/asyncio/tasks.py", line 358, in wait_for
return fut.result()
File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/config/custom_components/sensor/energyeasy.py", line 148, in setup_platform
jemenaoutlook_data.get_data()
File "/config/custom_components/sensor/energyeasy.py", line 236, in get_data
self._fetch_data()
File "/config/custom_components/sensor/energyeasy.py", line 228, in _fetch_data
self.client.fetch_data()
File "/config/custom_components/sensor/energyeasy.py", line 502, in fetch_data
self._data.update(self._get_tariffs())
File "/config/custom_components/sensor/energyeasy.py", line 326, in _get_tariffs
"controlled_load_cost": self._strip_currency(data["controlledLoadCost"]),
KeyError: 'controlledLoadCost'
```
Answers:
username_1: How did you solve this? I am currently facing a similar problem.
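Since the crash is a plain `KeyError` on a tariff field that some retailers evidently don't expose, a defensive lookup would avoid it. A minimal runnable sketch; every name except `controlledLoadCost` (which appears in the traceback) is hypothetical:
```python
def parse_tariffs(data):
    """Parse tariff fields defensively: not every retailer exposes a
    controlled load, so default the missing key instead of raising."""
    def strip_currency(value):
        return float(value.replace("$", "").replace(",", ""))

    return {
        "controlled_load_cost": strip_currency(
            data.get("controlledLoadCost", "$0.00")),
    }

print(parse_tariffs({}))  # -> {'controlled_load_cost': 0.0}
```
|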
fhstp-mfg/mobilot | 169108851 | Title: Reposition StationComponent info button
Question:
username_0: We should reposition the StationComponent info button, since it takes up quite some space and the WYSIWYG editor, for example, doesn't fit anymore:
Any volunteers? 😄
Screenshot taken on iPhone 5s:

Answers:
username_1: I'll do it!
username_0: @username_1 thank you 😃
Status: Issue closed
username_1: Done! but we should redesign the editor anyway! |
ropensci/iheatmapr | 395793228 | Title: add_row_dendro details
Question:
username_0: Hello, can we color the dendrogram just like those in these pictures?

<img width="1005" alt="skaermbillede-2017-09-22-kl -11 10 06" src="https://user-images.githubusercontent.com/24263046/50671699-77fa7500-100e-11e9-851d-b6332f6c8b1c.png">
Answers:
username_1: Nope not at the moment! I suspect the first style of coloring would be easier to achieve. I think this would be a good capability to add, but am unlikely to do that myself anytime soon (but would be happy to accept PR or give guidance to someone wanting to try implementing). |
klarna/klarna-mobile-sdk | 760614341 | Title: LICENSE file is missing but still specified in podspec
Question:
username_0: **Describe the bug**
My company uses the `cocoapods-acknowledgements` plugin. After pod update to 2.0.30, I see this warning:
```
[!] Unable to read the license file `LICENSE` for the spec `KlarnaMobileSDK (2.0.30)`
```
The podspec still specifies a LICENSE file, but the file appears to have been deleted sometime after 2.0.28.
**To Reproduce**
Steps to reproduce the behavior:
1. `pod update KlarnaMobileSDK`
**Expected behavior**
LICENSE file should be available. Or `license` section in podspec should be removed or fixed.
Answers:
username_1: Hi @username_0. Thanks for reaching out with this warning. We have uploaded the LICENSE file again; sorry for any inconvenience. Please try again and let us know how it went.
Steps to update the pod:
- pod deintegrate
- pod cache clean --all
- pod install --repo-update
Status: Issue closed
username_0: Worked! |
pytorch/glow | 500189463 | Title: Can anyone provide complete detailed information of installing Glow in Ubuntu 16.0.4
Question:
username_0: Please provide installation guide for Glow
Ubuntu 16.0.4
I have installed LLVM 8.0.1 and Clang 8.0.1, but I am unable to configure Glow on my system. It would be very helpful to get detailed info!
Answers:
username_1: Hi @username_0, we have instructions [here](https://github.com/pytorch/glow#ubuntu), though they may be a bit out of date. If you can follow those instructions and provide more information/logs on what is causing you issues we can help try to debug. Thanks!
username_0: Hi @username_1,
I followed the same instructions but the build hangs on my system:
```
/Desktop$ cd LLVM/
~/Desktop/LLVM$ cd glow/
~/Desktop/LLVM/glow$ cd build_Debug/
~/Desktop/LLVM/glow/build_Debug$ sudo ninja all
[10/107] Linking CXX executable bin/lenet-loader
```
At this step the system stops responding. Can you guide me through this?
Previously I got many errors like `ERROR: without a user defined constructor` and `ninja subcommand failed`.
ninja subcommand failed.
username_1: Hm, I don't think you need sudo to run ninja. Can you try building a specific binary, for example `ninja InterpreterOperatorTest`, and then post the build log?
username_0: I finally installed Glow. Can you tell me how to run programs using Glow?
username_1: Great! We have a wiki section on how to run things [here](https://github.com/pytorch/glow#testing-and-running). Going to close this issue as it has been resolved.
Status: Issue closed
|
staudt/Genius | 177609712 | Title: Error found
Question:
username_0: I do not press a button when the game starts, but it thinks I do. HELP PLZZ. On the home menu it's fine...
Answers:
username_1: Are you using ports 1 or 2? If so, in many Arduinos they are also used for the internal LED, which can send extra signals, triggering button presses. Try ONLY using ports 3 and higher for both digital and analog. Let me know if that works ;)
username_0: I'm using ports 9, 10, 11. I am using a smaller chip than an Arduino Uno, so I'm thinking there is not enough memory. Also, I found a simpler version and that worked. :)
username_2: Hi @username_0, if you are using a protoboard like the one in the image, try joining the negative rails of the two boards.

Status: Issue closed
|
abarnes26/quantified-self-express-backend | 308384060 | Title: Create Endpoint (get all foods for a meal)
Question:
username_0: a get request to "/meals/:meal_id/foods" should return a list of all foods associated with that meal
Status: Issue closed
|
schulz3000/deepstreamNet | 258213069 | Title: Set an array as a new record
Question:
username_0: Hi,
It looks like you cannot set an array as a record. The DeepStreamRecordBase must be a JContainer. Should it be a JToken so it can support arrays? Alternatively, throw an exception if you try to set an array.
When you try to set an array, this line does not merge anything:
`Data.Merge(item, jsonMergeSettings);`
Here is a test
```
[Test]
public async Task TestSetArray()
{
var name = Guid.NewGuid().ToString();
var key = @"my/testrecord/" + name;
var obj = new JArray();
obj.Add("hello");
using (var client1 = await GetConnection("chris", "test"))
{
var record1 = await client1.Records.GetRecordAsync(key);
var res = await client1.Records.SetWithAckAsync(record1, obj);
Assert.IsTrue(res);
var json = JsonConvert.SerializeObject(record1);
Assert.IsTrue(json.StartsWith("["), "SHould be an array");
}
}
``` |
kangwonlee/reposetman | 383699785 | Title: update subtrees in parallel
Question:
username_0: * now a857554 can process chapters in subtrees
* Update is currently in single process
* multiple users at once as subtree pull may not work for multiple chapters of one participant in this case
Status: Issue closed
Answers:
username_0: 4b91090adfc970b8f4e9acb28b3231287795d70a resolves this issue |
emar10/uweather | 378395282 | Title: Swift optional chaining
Question:
username_0: Look into how you could potentially use optional chaining to prevent having to nest `if let`s and `if <var> != nil`s
https://github.com/emar10/uweather/blob/d477062e1c38dfaf2a5a4af9c90cb2db2f405014/uWeather/WeatherModel.swift#L43 |
agehring/CxFlow-Demo | 860075043 | Title: CX Heap_Inspection @ web/Areas/Identity/Pages/Account/Manage/ManageNavPages.cs [agehring-alt]
Question:
username_0: **Heap_Inspection** issue exists @ **web/Areas/Identity/Pages/Account/Manage/ManageNavPages.cs** in branch **username_0-alt**
*Method "ChangePassword"; at line 13 of web\Areas\Identity\Pages\Account\Manage\ManageNavPages.cs defines ChangePassword, which is designated to contain user passwords. However, while plaintext passwords are later assigned to ChangePassword, this variable is never cleared from memory.*
Severity: Medium
CWE:244
[Vulnerability details and guidance](https://cwe.mitre.org/data/definitions/244.html)
[Internal Guidance](https://checkmarx.atlassian.net/wiki/spaces/AS/pages/79462432/Remediation+Guidance)
[Checkmarx](http://CXSAST-9/CxWebClient/ViewerMain.aspx?scanid=1000035&projectid=15&pathid=9)
[Training](https://cxa.codebashing.com/courses/)
[Recommended Fix](http://CXSAST-9/CxWebClient/ScanQueryDescription.aspx?queryID=3772&queryVersionCode=94864652&queryTitle=Heap_Inspection)
Lines: [13](https://github.com/username_0/CxFlow-Demo/blob/username_0-alt/web/Areas/Identity/Pages/Account/Manage/ManageNavPages.cs#L13)
---
[Code (Line #13):](https://github.com/username_0/CxFlow-Demo/blob/username_0-alt/web/Areas/Identity/Pages/Account/Manage/ManageNavPages.cs#L13)
```
public static string ChangePassword => "<PASSWORD>";
```
---
Status: Issue closed |
maciejtarmas/AlleBlock | 261870570 | Title: Credit info on OLX
Question:
username_0: e.g. here: https://www.olx.pl/oferta/dzialka-inwestycyjna-gospodarstwo-CID3-IDoEAhU.html#f56ce9749e

Status: Issue closed
Answers:
username_1: Thanks, removed :) |
kevin-brown/blog.kevin-brown.com | 185128531 | Title: What is the licence of code snippets on your blog?
Question:
username_0: Hello!
What is the license of the code snippets on your blog? Could it be MIT or GPLv3? :-)
Thank you for your answers!
Answers:
username_1: Which snippets specifically are you looking for the license for?
I'll go back through the posts I've done and determine what the license needs to be. And then I'll add a note in the footer about licensing.
username_0: Hello,
Right now I'm very interested in https://blog.username_1.com/programming/2014/09/24/combining-autotools-and-setuptools.html . I would like to use it in a GPLv3 project called [FreeIPA](http://www.freeipa.org/).
Thank you for considering this!
Status: Issue closed
|
pondokit/deJacetre | 423557428 | Title: ENV Example Missing
Question:
username_0: untuk push ke github, memang .env tidak dianjurkan karena ada data, sebaliknya .env.example bisa dipush karena ini data kosong.
jika ada yang pertama kali menggunakan pasti bingung, karena ketika download sourcenya, tidak ada .env.example |
log2timeline/plaso | 436236406 | Title: Crash in image_export.py with missing file_entry
Question:
username_0: Traceback, from an older version of Plaso:
```
Traceback (most recent call last):
File "/usr/bin/image_export.py", line 57, in <module>
if not Main():
File "/usr/bin/image_export.py", line 36, in Main
tool.ProcessSources()
File "/usr/lib/python2.7/dist-packages/plaso/cli/image_export_tool.py", line 770, in ProcessSources
skip_duplicates=self._skip_duplicates)
File "/usr/lib/python2.7/dist-packages/plaso/cli/image_export_tool.py", line 338, in _ExtractWithFilter
skip_duplicates=skip_duplicates)
File "/usr/lib/python2.7/dist-packages/plaso/cli/image_export_tool.py", line 279, in _ExtractFileEntry
for data_stream in file_entry.data_streams:
AttributeError: 'NoneType' object has no attribute 'data_streams'
```
This looks to be because OpenFileEntry in dfvfs can return None.
Answers:
username_0: Also an issue in https://github.com/log2timeline/plaso/blob/master/plaso/filters/file_entry.py#L398 as we check for False in the results, and not for None.
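A minimal guard for the `OpenFileEntry` case might look like the following; the function name, logging, and structure are illustrative, not the actual patch:
```python
from dfvfs.resolver import resolver

def extract_file_entry(path_spec):
  """Safely opens a file entry before touching its data streams."""
  file_entry = resolver.Resolver.OpenFileEntry(path_spec)
  if file_entry is None:
    # OpenFileEntry returns None e.g. for broken symbolic links, so the
    # entry is skipped instead of dereferenced.
    print('Unable to open file entry: {0:s}'.format(path_spec.comparable))
    return
  for data_stream in file_entry.data_streams:
    print(data_stream.name)
```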
username_0: Ran into another crash while trying to debug this:
```
Traceback (most recent call last):
File "./tools/image_export.py", line 57, in <module>
if not Main():
File "./tools/image_export.py", line 36, in Main
tool.ProcessSources()
File "/usr/lib/python2.7/dist-packages/plaso/cli/image_export_tool.py", line 791, in ProcessSources
skip_duplicates=self._skip_duplicates)
File "/usr/lib/python2.7/dist-packages/plaso/cli/image_export_tool.py", line 350, in _ExtractWithFilter
extraction_engine.knowledge_base, artifact_filters, filter_file)
File "/usr/lib/python2.7/dist-packages/plaso/engine/engine.py", line 328, in BuildFilterFindSpecs
artifact_filter_names, environment_variables=environment_variables)
File "/usr/lib/python2.7/dist-packages/plaso/engine/artifact_filters.py", line 97, in BuildFindSpecs
definition, environment_variables)
File "/usr/lib/python2.7/dist-packages/plaso/engine/artifact_filters.py", line 129, in _BuildFindSpecsFromArtifact
self._knowledge_base.user_accounts)
File "/usr/lib/python2.7/dist-packages/plaso/engine/artifact_filters.py", line 218, in _BuildFindSpecsFromFileSourcePath
path_glob, path_separator, user_accounts):
File "/usr/lib/python2.7/dist-packages/plaso/engine/path_helper.py", line 232, in ExpandUsersVariablePath
path_segments, path_separator, user_accounts)
File "/usr/lib/python2.7/dist-packages/plaso/engine/path_helper.py", line 99, in _ExpandUsersVariablePathSegments
path_segments, path_separator, user_accounts)
File "/usr/lib/python2.7/dist-packages/plaso/engine/path_helper.py", line 64, in _ExpandUsersHomeDirectoryPathSegments
if cls._IsWindowsDrivePathSegment(user_path_segments[0]):
IndexError: list index out of range
```
username_0: It turns out the file causing the original crash is a broken symbolic link, so this was probably triggered by the behavior change in https://github.com/log2timeline/dfvfs/issues/388
username_1: Changes merged, closing issue.
Status: Issue closed
|
hasura/graphql-engine | 1016567074 | Title: Increase in Hasura idle connections kill database
Question:
username_0: ### Version Information
Server Version: 2.0.7
### Environment
Docker
### What is the expected behaviour?
Do not create 15+ connections if we don't need them.
### What is the current behaviour?
Hasura is creating too many connections and is crashing the database.
### How to reproduce the issue?
I’m not sure
<img width="1366" alt="PNG image" src="https://user-images.githubusercontent.com/52385564/136070008-ef199916-2c50-4477-a6e0-008016afdad7.png">
Answers:
username_0: Update: when you do a BIG query, the query itself works; it's afterwards that things fail and blow up.
username_1: The number of connections you see may be because of the connection pool idle timeout. Hasura has a default idle timeout of 180s, which means that after a connection is created, it will hold the connection for at most 180s so that it may be reused by another query. You can reduce it if you want by editing the source configuration.
The database unresponsiveness may be due to some heavy query. What are the CPU/memory specs of your Postgres DB?
username_2: @username_0 I am currently facing the exact same issue. Database sessions (opened from the Hasura IP) increase until a complete freeze. Have you made any progress identifying the root cause of this issue? Any "patch" along the lines of what @username_1 suggests?
username_1: @username_2 Database sessions usually increase in the following situations:
1. Load increases on Hasura because of requests (this will spawn new connections limited by the connection pool setting per instance, on Cloud there can be auto-scaling as well). This can also be because of a burst in event trigger deliveries.
2. Some query is taking up lots of locks and blocking other queries (this is usually the case when some long running query or DDL is performed)
3. It's normal to see many IDLE sessions after a burst of traffic because of connection idle timeout (for reuse of sessions).
The best way to diagnose the root cause is to see the activity on the database during this burst. Do you have any info on this?
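A quick way to capture that activity is to sample `pg_stat_activity` during a spike. A sketch using psycopg2 (the DSN is a placeholder):
```python
import psycopg2

# Summarize live sessions: many 'idle' rows inside the pool's idle-timeout
# window are expected; long-running 'active' rows point at a heavy query.
conn = psycopg2.connect("postgresql://user:password@host:5432/dbname")
with conn.cursor() as cur:
    cur.execute("""
        SELECT state, count(*) AS sessions,
               max(now() - query_start) AS longest
        FROM pg_stat_activity
        GROUP BY state;
    """)
    for state, sessions, longest in cur.fetchall():
        print(state, sessions, longest)
```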
username_2: @username_1 Thanks, it was indeed right after a large number of events were triggered. However, is there any way we can prevent this from happening? I set the idle timeout to 60 seconds, though I'm not sure that will fix the issue. Our app was unusable after this, because the database was unreachable. |
slms4redd/portal | 97498707 | Title: Build error
Question:
username_0: Skipping the execution of the mvn tests at install time (skipping them in order not to fall into issue #2) leads you a little further, into the following mvn error:
```
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Internal error in the plugin manager executing goal 'ro.isdc.wro4j:wro4j-maven-plugin:1.7.6:run': Unable to load the mojo 'ro.isdc.wro4j:wro4j-maven-plugin:1.7.6:run' in the plugin 'ro.isdc.wro4j:wro4j-maven-plugin'. A required class is missing: org/codehaus/plexus/util/Scanner
org.codehaus.plexus.util.Scanner
[INFO] ------------------------------------------------------------------------
```
Answers:
username_0: The reason is related to the Maven version used; Maven 3.0 at least is required. For further information please refer to: http://stackoverflow.com/questions/13181322/wro4j-maven-plugin-required-class-is-missing
Status: Issue closed
|
goatfungus/NMSSaveEditor | 1074427873 | Title: After opening save my object count for bases is minimal
Question:
username_0: After touching my nms with the save editor just by opening the save file it caused this on every slot. I can make a base but I get about 45 objects before I can't place anymore which before that the count was 3000 objects. So save editor basically broke my game for now hopefully you guys can fix it so I can fix my save. I actually only copied slot 1 edited and save as slot 2 not sure why just reading the slot broke it but it did. |
ably/ably-java | 137648765 | Title: Library doesn't seem to serialise Map objects properly
Question:
username_0: I'm using the following code to send a presence update:
```java
Map<String,Boolean> payload = new HashMap<>();
payload.put("isTyping", false);
try {
    sessionChannel.presence.update(payload, null);
} catch (AblyException e) {
    e.printStackTrace();
}
```
This translates to the following MsgPack hex:
`89A6616374696F6E0EA26964AD4D3754536F6A4A5A35532D3331A5636F756E7415A76368616E6E656CAB6D6F62696C653A63686174AD6368616E6E656C53657269616CB0396335333838303037353736313A3733AC636F6E6E656374696F6E4964AA4D3754536F6A4A5A3553B0636F6E6E656374696F6E53657269616C2CA974696D657374616D70CF000001533354B1D2A870726573656E63659184A8636C69656E744964A573616E6479AC636F6E6E656374696F6E4964AA4D3754536F6A4A5A3553A6616374696F6E04A464617461B07B6973547970696E673D66616C73657D`
Which translates to:
`{"action":14,"id":"M7TSojJZ5S-31","count":21,"channel":"mobile:chat","channelSerial":"9c53880075761:73","connectionId":"M7TSojJZ5S","connectionSerial":44,"timestamp":null,"presence":[{"clientId":"sandy","connectionId":"M7TSojJZ5S","action":4,"data":"{isTyping=false}"}]}
`.
However, when ably.js does a similar thing (I'm testing via the Heroku app), the MsgPack hex is:
`89A6616374696F6E0EA26964AD6E554158777A664E33522D3232A5636F756E7412A76368616E6E656CAB6D6F62696C653A63686174AD6368616E6E656C53657269616CB0396335333838303037353736313A3737AC636F6E6E656374696F6E4964AA6E554158777A664E3352B0636F6E6E656374696F6E53657269616C30A974696D657374616D70CF000001533359AAE1A870726573656E63659185A8636C69656E744964A27363AC636F6E6E656374696F6E4964AA6E554158777A664E3352A6616374696F6E04A8656E636F64696E67A46A736F6EA464617461B27B226973547970696E67223A66616C73657D`
Which translates to:
`{"action":14,"id":"nUAXwzfN3R-22","count":18,"channel":"mobile:chat","channelSerial":"9c53880075761:77","connectionId":"nUAXwzfN3R","connectionSerial":48,"timestamp":null,"presence":[{"clientId":"sc","connectionId":"nUAXwzfN3R","action":4,"encoding":"json","data":"{\"isTyping\":false}"}]}
`
The Java lib doesn't seem to create valid JSON (JSON doesn't contain equals signs) that the JS lib can successfully parse. I set breakpoints in the JS presence handler and they do not trigger.
As a consequence of these differences, the Cordova sample app does not recognise presence messages sent from the Android sample app.
The inverse is true - the Android lib successfully parses and displays "isTyping" presence updates coming from JS.
@username_1 @username_2 @username_4
Answers:
username_1: Thanks @username_0 we'll get Gokhan to look at this shortly.
@username_3 this reconfirms our fear that some libraries are not as interoperable as we expect. The related issue is https://github.com/ably/wiki/issues/22, we should consider prioritising this.
@username_4 in the mean time, can you try and figure out why the Hashmap is being encoded as `{isTyping=false}` please and provide a fix?
username_2: Yes, but this case is simply a deviation from the spec, not a difference in encoding that we would only detect by testing different libraries back-to-back.
username_3: So I'm very far from being a java dev, but gson can apparently serialise a HashMap into JSON [just fine](http://stackoverflow.com/a/12156646). Why not just call `gson.toJson` on anything we get that isn't a `byte[]` or a `String`, wrap it in a try-catch, and let gson throw an exception if it's fed something it can't serialize?
username_2: That's possible but that same approach doesn't work when deserialising - we would have to pick something canonical to deserialise to, and the end result would not be the same as the original value.
I think it is more useful on the decode side to allow clients to define their own Gson-serialisable types, and have a way for them to decode specifically to that type, instead of to a generic collection.
username_0: So I tried using a JSONObject like this:
```java
JSONObject payload = new JSONObject();
payload.put("isTyping", false);
```
That generates a proper JSON as the data: `{"action":14,"id":"zu4ekVHDa6-6","count":21,"channel":"mobile:chat","channelSerial":"e478831912432:13","connectionId":"zu4ekVHDa6","connectionSerial":11,"timestamp":null,"presence":[{"clientId":"sa1","connectionId":"zu4ekVHDa6","action":4,"data":"{\"isTyping\":false}"}]}`
However, my JS presence handler sees the data as a string, instead of an object :/
I noticed that the JS lib sends an additional `"encoding":"json"` member in the presence message, but it's not present in the message that the Java lib sends.
Is this behavior OK?
username_4: @username_0 [this](https://github.com/ably/ably-java/blob/bcde67a28a3b9a5547ac1763e78fa7c54ff88981/lib/src/main/java/io/ably/lib/types/BaseMessage.java#L218) is the root of your problem. We are packing the Java `Object.toString()` representation into the data field when we have an arbitrary subclass of `Object`. As @username_2 pointed out, we only properly serialise JSONObject instances to a JSON string. For now, using JSONObject is the safest choice with our library.
@username_2, would you like me to update public method signature to accept only JSONObject as a data argument?
username_2: No.
The library supports raw `data` - which is either a `String` or `byte[]`. This is simply passed unchanged into the structure that gets packed.
The library also supports JSON-encoding of complex types, if it recognises that they can be JSON-encoded. The current implementation requires that these are instances of `JSONElement`, meaning that they are encodable using the gson library: then it encodes the value to the JSON text and also sets the `encoding` member of the message to indicate that it is JSON (and must therefore be decoded by the client).
In the current implementation, if the `data` is not either a `String` or `byte[]` or `JSONElement` then it is coerced to a `String` (in the line linked above) and sent **as a String** to other clients. The `encoding` member is not set, and so it is not decoded by any recipient, and so it is delivered as a `String`.
So @username_4 it is definitely not valid to limit `data` to just a `JSONObject`. The correct behaviour, as I indicated above, is not to coerce values to `String` but to reject them if they are not a `String`, a `byte[]`, or naturally JSON-encodable.
And @username_0 the demo needs to pass an object that the library will JSON-encode explicitly, not just implicitly coerce to a `String`.
username_1: @username_2 to avoid this unexpected co-erced string behaviour, I definitely vote we reject them. @username_4 can you prioritise getting that to work and issue a PR for this?
username_0: OK, I got the presence updates working.
In my last comment I was using Android's native [org.json.JSONObject](http://developer.android.com/reference/org/json/JSONObject.html) which gets coerced to string as the library checks against `com.google.gson.JsonObject`.
Once I switched over to using GSON's Json objects everything started working fine.
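For reference, the working version as a minimal sketch (same field and channel names as above):
```java
import com.google.gson.JsonObject;

JsonObject payload = new JsonObject();
payload.addProperty("isTyping", false);

try {
    // Gson's JsonObject is recognised by the library and serialised as JSON,
    // so the "encoding" member is set and the JS client receives an object.
    sessionChannel.presence.update(payload, null);
} catch (AblyException e) {
    e.printStackTrace();
}
```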
username_1: Ok, that's good @username_0. It would be good to prevent other developers from hitting this snag @username_4 if you can help
username_5: Just bumping this as a user just had problems with this. Maybe this should be prioritized? See ably/wiki#22.
username_2: @username_5 I'm not sure what there is to fix other than a documentation issue? I don't think the java lib should be supporting all available JSON libraries; we picked one (GSON). So if you want to pass `data` values to the library and you want it serialised as JSON, your `data` must implement the relevant interface (eg `com.google.gson.JsonObject`).
username_2: @username_5 ok, agreed
username_1: @username_5 can you help ensure that we reject invalid objects that don't match the expected type then?
username_5: @username_1 Yep, I'll give it a try.
Status: Issue closed
|
jgthms/bulma | 244796601 | Title: Navbar center?
Question:
username_0: <!-- PLEASE READ THE FOLLOWING INSTRUCTIONS -->
Is it about Bulma or about the Docs?
Yes
<!-- Is it a bug/feature/question or do you need help? -->
No
<!-- If it's a bug, is it a browser bug? -->
### Overview of the problem
<!-- UNCOMMENT THE APPROPRIATE LINES -->
This is about the Bulma **CSS framework**
This is about the Bulma **Docs**
I'm using Bulma **version** [0.4.3]
My **browser** is: Chrome
<!-- This is a **Sass** issue: I'm using version [x.x.x] -->
<!-- I am sure this issue is **not a duplicate**? -->
### Description
<!-- Description of the bug, enhancement, or question -->
### Steps to Reproduce
N/A
<!--
1. First Step
2. Second Step
3. and so on...
-->
### Expected behavior
<!-- What you expected to happen -->
### Actual behavior
<!-- What actually happened -->
My issue is I used nav-center in previous versions of bulma and loved it for logo use cases as well as content display. Is there an equivalent in the newest version of bulma with the new navbar?
Status: Issue closed
Answers:
username_1: #839 |
commercialhaskell/stack | 97290775 | Title: docker: stack runghc should pass stdin/stdout through
Question:
username_0: ```
$ cat echo.hs
main = interact id
$ echo 'hello' | runghc echo.hs
hello
$ echo 'hello' | stack runghc echo.hs
```
If stack uses its default (non-docker) configuration, then this will work as expected, with output `hello`. If stack is configured to be inside of a docker container, then there will be no output.
Stack version: 6adb449f1083a0c4768924d59956b46486b31be7
Answers:
username_1: This is an unfortunate case where we can't get a Docker container to behave like a normal process in all cases, and we're stuck between two non-ideal solutions when stdin/stdout/stderr are not connected to a terminal:
1) Use `docker run -i` (but not `-t`), in which case piping standard input works (also nice for running in an Emacs shell) but docker processes never exit when run in Bamboo and other Java-based tools.
2) Use `docker run` (without `-i` or `-t`), in which case Bamboo works but piping standard input doesn't.
So we ended up switching to option (2). We never did do any deep investigation into why Bamboo triggers this problem, so it would be better to have a way to correct the underlying issue
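For illustration, the two behaviours differ only in the `-i` flag on the container invocation (a sketch; the image name is hypothetical):
```shell
# Option 1: stdin attached - piping works, but hangs under Bamboo
echo 'hello' | docker run -i --rm example/stack-build runghc echo.hs

# Option 2 (current behaviour): no stdin - Bamboo works, piping does not
echo 'hello' | docker run --rm example/stack-build runghc echo.hs
```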
Status: Issue closed
username_2: I'm reading this as wontfix, closing. |
cs-education/classTranscribe | 394540171 | Title: [a11y] Landmarks - Courses
Question:
username_0: Courses URL: https://classtranscribe.ncsa.illinois.edu/courses
All elements should be contained in landmarks. This supports assistive technologies in organizing a page’s content for users who can’t see/scan the page’s layout (see [WCAG 2.0 SC 2.4.1 Bypass Blocks][1]).
Can be implemented either with [HTML5][2] (`<main>`, `<nav>`, `<header>`, `<footer>`, etc.) or [ARIA][3] (`role=main`, `role=navigation`, `role=banner`, `role=contentinfo`, etc.). There should at least be a “main” landmark at a minimum.
Related: #56, #59
[1]: https://www.w3.org/TR/UNDERSTANDING-WCAG20/navigation-mechanisms-skip.html
[2]: http://www.w3.org/TR/html5/sections.html
[3]: http://www.w3.org/TR/WCAG20-TECHS/ARIA11 |
vaadin/flow | 439981946 | Title: Generic DnD is broken in Firefox starting drag inside shadow dom
Question:
username_0: FF issue: https://bugzilla.mozilla.org/show_bug.cgi?id=1521471
Plan: investigate a workaround
Answers:
username_0: The workarounds did not help enough to be worth adding to Flow.
Thus the generic DnD feature has been moved to its own module `flow-dnd`, which will be released with Flow 2.0 but will not be added to the platform in 14.0.
Once the ^ mentioned FF issue has been fixed, we can then include the `flow-dnd` module in the platform in a new minor release of 14. So if you don't need Firefox support for your Flow app using DnD, you can take the `flow-dnd` module and add it to your Vaadin 14 project.
The only workaround for FF that worked was to force the shadow dom polyfill to be used instead of native support with implementing `PageConfigurator` in the main layout and:
```
public void configurePage(InitialPageSettings settings) {
if (settings.getBrowser().isFirefox()) {
settings.addInlineWithContents(InitialPageSettings.Position.PREPEND,
" if (window.customElements) window.customElements.forcePolyfill = true;"
+ " ShadyDOM = { force: true };",
InitialPageSettings.WrapMode.JAVASCRIPT);
}
}
```
It should be noted that this can reduce the performance of the application severely.
Keeping this issue open until the FF bug has been fixed.
username_1: To test with latest Firefox, which contains the fix.
Status: Issue closed
|
kubeflow/manifests | 520230848 | Title: e2e presubmit needs to be updated to use kfctl from kubeflow/kfctl
Question:
username_0: The presubmit for kubeflow/manifests tries to build kfctl and deploy Kubeflow.
This is now failing because kfctl has moved to kubeflow/kfctl but the test hasn't been updated yet.
We need to update the test. We should model it on
https://github.com/kubeflow/kfctl/blob/6c3fbeaf081a481d4324ea1ac1748eb7c6edae82/prow_config.yaml#L20
We probably want to replace the test with a py_func workflow that is imported from a different repo rather than duplicating the test.
As a short term fix we should remove the existing e2e presubmit test
Related to kubeflow/kfctl#7
Answers:
username_0: #660 has been merged.
Here's a presubmit indicating the E2E tests were triggered and run.
https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/kubeflow_manifests/675/kubeflow-manifests-presubmit/1206529100970201088/
Status: Issue closed
|
RelativeMedia/blog.relative.media | 120758927 | Title: letsencrypt simp_le needs to be updated
Question:
username_0: Notice the cd, the account_key.json and the email field. All of those need to be added to the script for it to work.
````
DOMAIN=%%DOMAIN%%;
sudo mkdir /etc/nginx/ssl/${DOMAIN};
sudo chmod 700 /etc/nginx/ssl/${DOMAIN};
cd /etc/nginx/ssl/${DOMAIN};
sudo simp_le -d ${DOMAIN}:/tmp/letsencrypt -f account_key.json -f key.pem -f cert.pem -f fullchain.pem --email "<EMAIL>" && sudo service nginx reload;
sudo chmod -R 400 /etc/nginx/ssl/${DOMAIN}/;
````
Answers:
username_1: email is optional; account_key.json is only needed if you don't cd into the ssl dir. I do know that you'd need to drop into a root shell in order to cd into `/etc/nginx/ssl/${DOMAIN}` .. might add that instead, otherwise all of those flags need to have the full path.
Status: Issue closed
username_0: Actually, if you check, none of them accept a full path anymore; they only work with relative paths now.
I spent about an hour trying to get it working with the full path and in the end gave up and used the relative as it expects certain params.
username_1: Yeah good point, I opted to drop into root shell to run the commands.. Should be sufficient, I updated the post. |
rlwhitcomb/utilities | 1122380953 | Title: Calc `===` operator is basically unusable with numbers
Question:
username_0: With recent changes to make numbers BigInteger as much as possible, it seems that code like:
`loop over 0..4 { if $_ === 0 ... }`
never works because somehow the loop variables are never EXACTLY equal to the constants.<issue_closed>
Status: Issue closed |
ionic-team/ionic-framework | 675724489 | Title: feat: root config for each component to target platform
Question:
username_0: <!-- Before submitting an issue, please consult our docs (https://ionicframework.com/docs/). -->
<!-- Please make sure you are posting an issue pertaining to the Ionic Framework. If you are having an issue with the Ionic Appflow services (Ionic View, Ionic Deploy, etc.) please consult the Ionic Appflow support portal (https://ionic.zendesk.com/hc/en-us) -->
<!-- Please do not submit support requests or "How to" questions here. Instead, please use one of these channels: https://forum.ionicframework.com/ or http://ionicworldwide.herokuapp.com/ -->
<!-- ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. -->
# Feature Request
**Ionic version:**
[x] **5.x**
**Describe the Feature Request**
I would like to change mode for each component through config.
ex
```
IonicModule.forRoot({
IonToolBarMode: 'ios',
IonSearchBarMode: 'ios',
IonBackButtonMode: 'ios'
})
```
**Describe Preferred Solution**
option1.
```
IonicModule.forRoot({
IonToolBarMode: 'ios',
IonSearchBarMode: 'ios',
IonBackButtonMode: 'ios'
})
```
option2.
```
IonicModule.forRoot({
iosModeComponent: ['IonToolBarMode', 'IonSearchBarMode'],
mdModeComponent: ['IonBackButtonMode']
})
```
Answers:
username_1: Thanks for the feature request. I am going to close this as per-platform config is already possible in Ionic Framework. Please see: https://ionicframework.com/docs/angular/config#per-platform-config
Status: Issue closed
username_0: The config options do not include an ionToolbar option for setting its mode to "ios" on all platforms
username_1: Can you please describe your use case for this feature? Typically we recommend setting the config globally in this instance.
username_0: In my application I want to use mode="ios" for every toolbar. It is almost done, but I hate having to go into each page and component to add mode="ios" on ion-toolbar (or basically any ion component).
The reason I am using mode="ios" is that with it my ion-title can be centered even if I add more icons at the start and end, since this is not possible in the MD spec.
open-mpi/mtt | 229187791 | Title: default_check_profile.ini BAT test checks for hardcoded kernel version
Question:
username_0: While testing out the Dockerized Jenkins console, the default_check_profile.ini test was failing due to checking for the wrong kernel version, which was hardcoded into the INI file.
Fix is to parameterize kernel version within the INI and within the console and require users to change this parameter to match their machine.<issue_closed>
Status: Issue closed |
orlp/slotmap | 381895339 | Title: Make hash function of SparseSecondaryMap configurable
Question:
username_0: Thanks a lot for this really nice and clean library! One suggestion though:
It would be nice if one could (optionally) change the hash function that SparseSecondaryMap uses since Rust's HashMap uses a somewhat slow one by default (SipHash), and especially for small key sizes (I believe SlotMap Keys are 8 bytes?) other hashers like [FNV](https://crates.io/crates/fnv) or [FxHash](https://crates.io/crates/fxhash) should be quite a bit faster.
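A sketch of what the proposed API might look like (the `S` hasher parameter and `with_hasher` constructor are the requested addition, not the current API):

```rust
use fnv::FnvBuildHasher;
use slotmap::{DefaultKey, SlotMap, SparseSecondaryMap};

fn main() {
    let mut sm: SlotMap<DefaultKey, &str> = SlotMap::new();
    let k = sm.insert("widget");

    // Hypothetical: SparseSecondaryMap parameterised over a BuildHasher,
    // mirroring std::collections::HashMap::with_hasher.
    let mut positions: SparseSecondaryMap<DefaultKey, (f32, f32), FnvBuildHasher> =
        SparseSecondaryMap::with_hasher(FnvBuildHasher::default());
    positions.insert(k, (1.0, 2.0));
    assert_eq!(positions[k], (1.0, 2.0));
}
```
|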
dresden-elektronik/deconz-rest-plugin | 585682225 | Title: Force network reconfiguration
Question:
username_0: **Feature request / question**
Force full reconfiguration of whole network or specific devices only
**Scenario**
Some devices get disconnected from network (or resetted to factory defaults) for e.g. firmware updates w/ external gateways.
When they are reconnected and get the old name reassigned, the group associations seem to be adapted and transmitted to the device but all the *scenes are lost* and not to be reapplied.
Due to many usability lacks in PWA its not easily possible to restore the scenes but to completely reconfigure all the scenes in which the affected devices are used **manually**.
Since there is even no option for textual inputs but sliders only its impossible to restore the old values exactly or to apply excatly those from other bulbs.
_Summarized:_ It's quite a horror for the user.
**Feature request**
Give an option for single bulbs or whole gateway/network to reconfigure with locally stored deCONZ configuration. |
Automattic/Edit-Flow | 26403354 | Title: iCalendar feed doesn't use gmt date for start/end
Question:
username_0: Publish times are reported incorrectly when subscribing to the iCalendar feed in Google Calendar or Apple Calendar.
Times appear to be adjusted by both the local time on the server and local time in the user's calendar. iCalendar start/end times should use UTC times.<issue_closed>
Status: Issue closed |
mattboldt/typed.js | 187533211 | Title: Without jQuery
Question:
username_0: I didn't want to include jQuery just for this plugin, so I created my own version of this.
It's quite scaled down; Typed.js had too many options IMO, so I only did what was necessary for me.
Right now it only works in Chrome, Opera & FF 52
But with babel it will work like a charm...
For anyone interested here is a [demo](https://jsfiddle.net/2jgvftf2/1)
Answers:
username_1: I just visited this repository looking to see if it was usable with React. Using jQuery with React is a no-go, which renders this project unusable for those using React.
Perhaps an official port without jQuery for use with React would be a useful addition here, as the current options for typing animations usable with React are not as popular as Typed.js.
username_0: I tried React before... never liked it. They change the HTML syntax to camelCase and even changed some attribute names in ways that didn't make sense to me, like using `className` instead of just `class`. And coding HTML in JSX was something else...
When you have worked with the web and Node for such a long time, you know that frameworks come and go, but one thing will always stay: HTML, CSS & JavaScript.
So today I mostly only do vanilla coding
But if you want a react port of this then it's fine by me. But you have to do it without my contribution
username_0: Now when i think of it i would maybe even have extracted out the html part out of my version so it can be used for inputs, objects Data-binding and other DOM elements
```javascript
typed({
strings: ['Hangout', 'Facebook', 'Web App', 'your email', 'Messenger', 'anything!!!'],
set value(val) {
document.querySelector('.element').innerText = val
}
})
```
username_2: I created a library for me, with no jQuery dependency, if someone can take a look: https://ityped.surge.sh
Thanks!
username_3: @username_2
Very sexy, and it works in my React Project!
username_2: @username_3 Yeah, nice! ;)
username_4: A jQuery-free typed.js has been released under v1.1.6 thanks to this guy: @nikrolls
Status: Issue closed
|
lairworks/nas2d-core | 231264837 | Title: EventHandler::warpMouse() doesn't raise a mouse motion event
Question:
username_0: This is a basic oversight but it results in incorrect behavior. When the ```warpMouse()``` function is called, no event for a mouse position (mouse motion) event is raised.
To get correct behavior, when this function is called the ```EventHandler``` should raise a Mouse Motion event with the current coordinates and a ```{ 0, 0 }``` relative coordinate change.<issue_closed>
Status: Issue closed |
brunosardinepi/pagefund | 257576455 | Title: search - optimization
Question:
username_0: consider using jquery to add/remove the search results to the page instead of doing a full query again. it would potentially help with the server load, and would definitely help with the page refresh. i think it's the best solution but it needs testing
Answers:
username_0: complete, merged to master
Status: Issue closed
|
cfpb/hmda-platform-ui | 213040076 | Title: Recommended update to ui - parsing error language
Question:
username_0: - [ ] Recommend updating the wording in the red banner to:
Your file has format errors. Update your file and select the "Update a new file" button, or return to the Institutions page.

- [ ] Recommend updating the error message to:
XX Formatting Errors
The uploaded file is not formatted according to the requirements specified in the [Filing Instructions Guide for data collected in 2017](https://www.consumerfinance.gov/data-research/hmda/static/for-filers/2017/2017-HMDA-FIG.pdf). Make corrections in your file and then upload your file **here**.

Status: Issue closed
Answers:
username_0: **Backlog for now**:
- [ ] Recommend updating the wording in the red banner to:
Your file has format errors. Update your file and select the "Update a new file" button, or return to the Institutions page.

- [ ] Recommend updating the error message to:
XX Formatting Errors
The uploaded file is not formatted according to the requirements specified in the [Filing Instructions Guide for data collected in 2017](https://www.consumerfinance.gov/data-research/hmda/static/for-filers/2017/2017-HMDA-FIG.pdf). Make corrections in your file and then upload your file **here**.

username_0: Minor update to match FFVT:
Recommend updating the first sentence in the red banner to (adding the "ting" to format):
Your file has formatting errors.
Status: Issue closed
|
xtf-cz/xtf | 558053657 | Title: Evaluate possibility of on-demand XTF releases
Question:
username_0: XTF is used by a number of teams with different time frames for test development and testing. When new XTF functionality (or a modification) is required for test development and testing, there is a long process to get it into a new release. As a workaround, teams rather wrap XTF classes or add possibly useful tooling into their product test suites.
Purpose of this issue is to evaluate a way how to provide XTF releases on demand.
Prerequisites/Requirements(WIP):
- XTF must have stable test suite which will be part of standard PR review (https://github.com/xtf-cz/xtf/issues/325)
- Script which automatically releases XTF (tag + push to maven repo) |
MyEtherWallet/MEWconnect-web-client | 819358655 | Title: remove keepalive net_version calls ?
Question:
username_0: remove keepalive net_version calls
My company uses this package as a dependency to support MewConnect. The number of "net_version" calls coming from this package is quite significant and takes over 50% of requests to our Infura account. No other wallet connection dependency we have like WalletConnect and WalletLink makes this many keep-alive requests.
I haven't looked into it too much but it looks like MEW site removed these calls. Does the MEW site use this package or just a copied pasted version of it?
https://github.com/MyEtherWallet/MyEtherWallet/pull/2009/files
Answers:
username_1: Will add this in, it is there so some ws providers don't disconnect the connection. But it turns out you can always send an empty msg
username_1: fixed in ff2059f7 |
artsy/README | 583213648 | Title: Cloud provider ethics and values
Question:
username_0: ## Proposal:
We hold our cloud provider to our same values. After reading about Amazon's [reaction to Coronavirus encouraging employees to trade sick days](https://www.commondreams.org/news/2020/03/13/grotesque-level-greed-owned-worlds-richest-man-jeff-bezos-whole-foods-wants-workers) I think we should realign our provider once we get onto Kubernetes and have the tooling necessary to do all of our Postgres migrations.
## Reasoning
We want to have secure, reliable cloud infrastructure at a reasonable cost. But we also want to make sure our providers align with our values and goals.
## Exceptions:
I previously brought looking into green hosting via https://www.thegreenwebfoundation.org/ - AWS claims to be tackling sustainability but it's not the entire facet we should evaluate our cloud provider or datacenter on. We should consider sustainability as well as good labor practices.
## Additional Context:
https://artsy.slack.com/archives/C03J4L2KK/p1584192238002500
## How is this RFC resolved?
I would like to converge on a provider we are happy with that could satisfy our needs while upholding our values. then plan a migration accordingly.
Answers:
username_1: This is a hard thing to respond to, but I really want to add my voice to the conversation. I feel very vulnerable replying to this, so please understand that I just want to help the team have a healthy conversation around this topic.
Let me start by saying I in no way agree with how Whole Foods/Amazon handled this situation. I think it's important that companies take care of their workers and folks shouldn't have to be worried if they will be able to take time off if they get sick during a pandemic.
That said, I don't agree with moving cloud providers with that alone as a justification. I firmly don't believe Google is any more moral of a company. Microsoft has been building their good will with devs but largely everyone would've said they were toxic 10 years ago.
I have to imagine that any cloud provider move is going to have a _significant_ cost to Artsy. Even after we're fully migrated to kubernetes, we're always going to have to learn more about the underlying services. That knowledge transfer has to happen across the _entire_ team. We'll always have to deal with abstractions of the cloud provider. We _can't_ escape that.
I'm not saying we shouldn't use our principles as a factor when picking technologies. Let's definitely do so. That's part of what makes Artsy so special. But we have to balance that with the needs and cost to Artsy. Our mission is to help expand the art market to support more artists and art in the world. If we succeed more artists can devote their life to doing what they love as their full time gig. And because we care and have such strong principles we can continue to use our voice to help shape this company into something that people can look to as a model of what it means the be a business that adds values to people's lives.
If we consider moving cloud providers, I'd want to see how it helps us get closer to our goals. Will in be cheaper? Will it be easier for our developers to use? More stable? Can we do it in a reasonable amount of time with a limited investment?
I just don't want us to be in a position that we spend a lot of time migrating and the company associated to our new cloud provider does some unethical thing in the name of capitalism and we're back to this conversation. I hope saying that doesn't make me sound like I'm apathetic or a bad person. I _do_ care. I just don't think moving cloud providers will do any thing to help us fix a broken system or in any way absolve us from participating in it.
username_2: I agree with Justin. I don't think we'll find a totally-ethical cloud provider. This kind of move would be _really_ costly to Artsy in the short-term, and would not really help apply pressure on ~Amazon~ Whole Foods to change their behaviour.
That's not to say we shouldn't oppose those behaviours – I just think that we should oppose them as human beings and citizens, rather than as consumers.
username_0: I hear your concerns @username_1 and @username_2 - it would incur significant engineering cost particularly for platform team but I would point out that there is likely a cost benefit for us as well if we don't stay locked into one single provider. Perhaps I should clarify I opened this proposal in order to plan our roadmap for the medium to long-term future and am not advocating for taking on a huge amount of short-term risk or work... we will in any case incur costs upgrading Kubernetes and performing other maintenance tasks anyway. There could be other benefits for us as well if we are more flexible. See cost comparisons for Kubernetes hosting last year https://www.presslabs.com/blog/kubernetes-cloud-providers-2019/
username_0: In terms of ease to use, I don't think it matters for developers who are interacting with Kubernetes via Hokusai / the k8s Dashboard. What we could save on is the Kubernetes provisioning tooling via kops and instead adopt a fully-hosted solution.
Additionally, we would need to research stability and features offered by each Kubernetes provider, so it's possible GKE offers better overall stability and ease of maintenance as well as cost.
That said, I don't think there is a 100% ethical cloud provider and I agree that there will be no perfect solution here. (I don't think there is in reality any sense of perfectly ethical consumption.) But I wouldn't want that fact to restrict our decisions.
username_0: P.S. following up with looking at all facets like cost and reliability as well, take a look at Digital Ocean's pricing in the above comparison - almost half of either EKS or GKE. This could save us significant cost we could then reinvest in other ares, including but not limited to further engineering hires #PeopleAreParamount
username_3: Choosing cloud services isn't that different from other [technology choices](https://github.com/artsy/README/blob/master/playbooks/technology-choices.md). There are a _lot_ of considerations like features, community, compatibility, support, cost, and engineering time (the big one). One could ask similar questions about any of our other choices.
If we attempted that, though, I doubt that _any_ of our technology choices would stand up to the scrutiny. Could we use design patterns documented by a bigot? Contribute to an open-source library authored by a chauvinist? Subscribe to a service partially owned by a Private Equity firm that has mistreated employees? It's overwhelming [and I think impossible] to navigate it all. Even more so as events evolve, or leadership/ownership change, or our team's preferences change. Instead, I suggest a set of principles like:
* Assume that _no one_ and certainly _no organization_ would stand up to comprehensive ethical scrutiny.
* Assume that we have something to potentially learn or adopt from _everyone_ anyway.
* Strive to make choices according to [our framework](https://github.com/artsy/README/blob/master/playbooks/technology-choices.md) and the best interests of the company and team.
* Where applicable, continue to improve that framework. E.g., some ethical or macroeconomic concerns might truly present some product or reputation risk and warrant careful evaluation.
The primary reason we're all assembled in this organization is to have a positive impact on the world through the company's goals. If we do that well, I'd hope the psychic and financial rewards lead to _lots_ of further opportunities to contribute to meaningful causes in individual, discretionary ways.
username_4: Thanks Isac for bringing up this difficult and important topic.
I honestly don't have a prescriptive answer to this, but just want to be in dialogue with some of the points raised so far:
- I agree that corporate reputations are labile things, that change over the years in response to their most visible actions. Bezos/Amazon's reputation may be deservedly lousy right now, but I wouldn't put stock in any other tech behemoth standing the test of time.
- I also agree that no corporation, perhaps not even ours, would survive a high level of scrutiny into all of its practices, or board membership, or funders. But I don't think that entirely absolves us from considering any particular company's behavior before giving them our money — or taking money from them, or being associated with them. Consider that when we found out our banner ads were appearing on certain disreputable websites after the 2016 US elections that [we acted](https://artsy.slack.com/archives/C02531TUD/p1480701037000748) to put a stop to that quickly. Certainly there is some threshold for misconduct or misanthropy that compels action on our part. I think it is at least reasonable to ask if Amazon meets it.
- The question of what it costs us — in friction, lost productivity, engineering time — is a complicated one and I don't have any special insight into what would be a reasonable cost to bear, especially under the present outlandish world conditions. But I would expect the cost to be non-zero and that we would need to collectively embrace that as an organization, simply because any meaningful solidarity would be expected to have such a cost.
- And I guess this that leads to my final thought. In order for this to actually *be* an act solidarity I think it would need to be part of a campaign, one that somehow connects to the affected workers (at [their](https://www.coworker.org/petitions/global-retail-worker-sick-out) [stores](http://www.nottinghammd.com/2020/03/25/attorneys-general-call-on-amazon-whole-foods-to-provide-paid-leave-to-employees-during-covid-19-pandemic/), at [their](https://www.vox.com/recode/2020/3/19/21186322/amazon-warehouse-coronavirus-covid-19-worker-tested-positive-delivery-station-queens-new-york) [warehouses](https://www.businessinsider.com/amazon-workers-strike-coronavirus-2020-3)). Otherwise it remains an isolated act of consumer activism, whose shortcomings you've already acknowledged above.
Anyhoo... to bring this back down to earth. Is there some small thing we could to (on whatever timeline we'd deem sensible at this moment) to even validate the portability of our infrastructure? What's an incremental step that you could envision?
Whatever the outcome of this, I commend you for raising it.
username_0: Thanks for the comments everyone - I think @username_4 you make a salient point, and will take this away as the outcome, to _ validate the portability of our infrastructure_
Status: Issue closed
|
andrewpenland/CAExperiment | 299973073 | Title: CAExperimenter class changes
Question:
username_0: Some basic functionalities (see the sketch below):
- convert the rule table lookup to be based on a dictionary (so we don't have to call bin_to_dec repeatedly, only once when the lookup table is created)
- have an option for hex
- extend the current functions to arbitrary alphabets
- take a function in addition to a rule table or rule number
- read more options (.csv file, etc.) as input configurations
- documentation
- create a corresponding .tex template that matches the experiment format for consistent and quick lab reports, i.e. a person can run an experiment just by filling out a form
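A sketch of the dictionary-based lookup idea (function and parameter names here are illustrative, not the actual class API):
```python
from itertools import product

def make_rule_dict(rule_number, radius=1, alphabet=(0, 1)):
    """Build the neighborhood -> new-state lookup table once, up front."""
    width = 2 * radius + 1
    k = len(alphabet)
    table = {}
    n = rule_number
    # Neighborhoods in ascending lexicographic order get the base-k digits
    # of the rule number, least significant first (Wolfram convention).
    for hood in product(alphabet, repeat=width):
        table[hood] = alphabet[n % k]
        n //= k
    return table

def step(config, table, radius=1):
    """Apply one synchronous update with periodic boundary conditions."""
    size = len(config)
    return [table[tuple(config[(i + d) % size] for d in range(-radius, radius + 1))]
            for i in range(size)]

# Example: one step of elementary CA rule 110 on a small ring.
rule110 = make_rule_dict(110)
print(step([0, 0, 0, 1, 0, 0, 0], rule110))  # -> [0, 0, 1, 1, 0, 0, 0]
```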
Status: Issue closed
Answers:
username_1: We are going to address this by taking these as separate issues. |
Mod4JobFinder/FrontEnd | 891044353 | Title: PR Template
Question:
username_0: # PR JobFinder
**<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.**
### What does this do?
- [ ] Fix
- [ ] Feature
- [ ] Refactor
- [ ] Style
- [ ] Testing
### Are there any known bugs?
- [ ] No
- [ ] Yes (if so please disclose in Notes for Reviewer & Next Iterations)
### Summary of the changes
### What ticket(s) do the changes resolve?
Please Link Issues to this PR as necessary.
### Notes for the Reviewer
### How to Test Changes?
### Next Iterations |
KnpLabs/KnpPaginatorBundle | 309999129 | Title: The service "twig" has a dependency on a non-existent service "translator".
Question:
username_0: steps to reproduce (Symfony 4):
- composer create-project symfony/skeleton my_project; cd my_project
- composer require twig
- composer require knplabs/knp-paginator-bundle
composer require knplabs/knp-paginator-bundle
Using version ^2.7 for knplabs/knp-paginator-bundle
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 2 installs, 0 updates, 0 removals
- Installing knplabs/knp-components (1.3.7): Loading from cache
- Installing knplabs/knp-paginator-bundle (v2.7.2): Loading from cache
Writing lock file
Generating autoload files
Symfony operations: 1 recipe (aad0540f3ff3dcb75819683e2a626f4c)
- Configuring knplabs/knp-paginator-bundle (>=v2.7.2): From auto-generated recipe
ocramius/package-versions: Generating version class...
ocramius/package-versions: ...done generating version class
Executing script cache:clear [KO]
[KO]
Script cache:clear returned with error code 1
!!
!! In CheckExceptionOnInvalidReferenceBehaviorPass.php line 32:
!!
!! The service "twig" has a dependency on a non-existent service "translator".
!!
!!
!!
Installation failed, reverting ./composer.json to its original content.
Answers:
username_0: from composer.json:
"require": {
"php": "^7.1.3",
"ext-iconv": "*",
"symfony/console": "^4.0",
"symfony/flex": "^1.0",
"symfony/framework-bundle": "^4.0",
"symfony/lts": "^4@dev",
"symfony/twig-bundle": "^4.0",
"symfony/yaml": "^4.0"
},
username_1: The problem is that one service requires the translator as a dependency:
https://github.com/KnpLabs/KnpPaginatorBundle/blob/master/Resources/config/paginator.xml#L44
username_2: Same issue here, but this is easily solved by adding `symfony / translation` to your project.
`composer require symfony/translation`
username_3: Yep, this is legitimate. The bundle needs to either make that dependency optional, or explicitly require `symfony/translator`.
Status: Issue closed
|
weekendesk/nodegate | 710047730 | Title: [Feature request] New worker to just pass parameters
Question:
username_0: Right now if we want to pass some outpuf from a call as input or another, we need to use aggregate + remove.
This is cumbersome and requries revisiting the remove if changes on the called method happen. |
danielaparker/jsoncons | 367166963 | Title: Illegal codepoint (>= 0xd800 && <= 0xdfff)
Question:
username_0: I catch an exception on a "big" JSON with Cyrillic symbols. If you cut this JSON down, the exception goes away.
```shell
terminate called after throwing an instance of 'jsoncons::parse_error'
what(): Illegal codepoint (>= 0xd800 && <= 0xdfff) at line 308 and column 39
```
for this json
```json
{
"Xml": {
"SLMS": [
{
"-idSLMS": "1",
"LMS": [
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
[Truncated]
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
},
{
"PostTh": "Главный специалист эксперт 4 отдела(Яяяяяяяя яяяяяя яяяяяяяяяяяя)",
"PostSh": "Гл.сп.эксп.4 отд.(Яяяяяяяя Я.Я.)",
"KodeMod": "$x",
"PostTabNum": "0000000059"
}
]
}
]
}
}
```
Answers:
username_0: I try to parse this json with this code
```c++
std::ifstream f(fileName);
jsoncons::json jsonSchema = jsoncons::json::parse_stream(f);
```
username_1: I'm unable to replicate the error by copying the text to a file and parsing, possibly something lost in the paste. Can you send me the file that failed as an attachment to <EMAIL>?
Thanks,
Daniel
username_0: Done
username_1: It's a bug, thanks very much for reporting it. The fix is on master.
The json parser loads from the stream in chunks; in your case it read the bytes 0xD0 0xA3 0xD1 before exhausting the chunk, and the parser validated those three bytes before receiving the byte 0x87 that completed the UTF-8 multi-byte sequence. The validation has been changed to only occur when the complete string has been received.
Thanks again,
Daniel
username_0: I confirm that the issue is resolved
username_1: Just FYI,
```c++
std::ifstream f(fileName);
jsoncons::json jsonSchema = jsoncons::json::parse(f);
```
is preferred, `parse_stream` has been deprecated for some time and may be removed in a later release.
Daniel |
lixxu/sanic-jinja2 | 956534113 | Title: How to pass a variable globally to a template?
Question:
username_0: For example: after I log in successfully, I need to hide the login button and show the logged-in user's information
Answers:
username_1: you can refer to [https://sanicframework.org/en/guide/basics/middleware.html#attaching-middleware](https://sanicframework.org/en/guide/basics/middleware.html#attaching-middleware), then use `request.ctx.user` in template.
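For example, a minimal sketch (the `get_current_user` helper is hypothetical):
```python
from sanic import Sanic

app = Sanic("demo")

@app.middleware("request")
async def attach_user(request):
    # Look up the logged-in user once per request and stash it on the context.
    request.ctx.user = await get_current_user(request)  # hypothetical helper
```
Then in the template:
```
{% if request.ctx.user %}Hello, {{ request.ctx.user.name }}{% else %}<a href="/login">Log in</a>{% endif %}
```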
username_1: closed as long time feedback.
Status: Issue closed
|
devssa/onde-codar-em-salvador | 729783822 | Title: [JAVA] [SALVADOR] Analista Java na [ADN]
Question:
username_0: <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Descrição da vaga
- Analista Java
## Local
- Salvador
## Benefícios
- Informações diretamente com o responsável/ recrutador da vaga
## Requisitos
**Obrigatórios:**
- Desenvolvimento em projetos Java
- Conhecimento em banco de dados Oracle
- Experiência em Frameworks Spring Boot
- Conhecimento de GIT
- Facilidade em relacionamentos com clientes, c/ boa dicção
- Iniciativa, trabalho em equipe
**Desejáveis:**
- Experiência em GITHUB ou GITLAB
- Conhecimento do banco MongoDB
- Desenvolvimento baseado em testes
- Arquitetura REST
- Conhecimento Avançado de banco de dados Oracle
- Conhecimento em Docker
## Contratação
- a combinar
## Nossa empresa
- A ADN foi fundada em Salvador em 1991 com o objetivo de oferecer ao mercado do Polo Petroquímico de Camaçari uma oferta de serviços integrados de TI. A viabilização da empresa se deu através da contratação do projeto de downsizing e full outsourcing da TI da CPC – Companhia Petroquímica de Camaçari, hoje unidade de vinílicos da Braskem.
- Por conta dessa origem, obteve diversos contratos e mais de 100 projetos para as principais empresas do Polo Petroquímico de Camaçari, entre elas: Copene, CPC, Estireno do Nordeste, Ceman, Cetrel, Ciquine, Deten, Politeno, Polialden e Grupo Unigel.
- Em 1995 a empresa abriu escritório em São Paulo para atender aos contratos com a DM9 e Serveng-Civilsan, ainda hoje clientes da empresa. As operações em São Paulo cresceram e parte da diretoria foi transferida para São Paulo em 2002, ficando em Salvador as áreas administrativo-financeira e desenvolvimento de sistemas.
- Em 2009 a ADN passou por uma recomposição societária com a saída de dois sócios fundadores, <NAME> e <NAME>, e a entrada do Amazontech Group. A partir deste momento a empresa revisitou seu planejamento estratégico com o objetivo de buscar um novo ritmo de crescimento e de um novo posicionamento de mercado.
- Em 2014, os sócios fundadores remanescentes, responsáveis por toda a operação da empresa, <NAME>r e <NAME> compram a participação da Amazontech Group.
- Com 24 anos de atuação, a empresa se encontra com uma posição consolidada e uma base de Clientes fiel, especialmente focada em empresas de marketing, comunicação e mídia, de concessões rodoviárias e de engenharia, construção e incorporação.
## Como se candidatar
- Por favor, enviar currículo, com pretensão salarial, para o e-mail: <EMAIL> |
karma9874/AndroRAT | 684204218 | Title: App not installed.
Question:
username_0: When I try to install the APK on my phone to test it, it says "App not installed". I am sure it is not because I am out of storage space. I have Android 10.
Answers:
username_1: Strange, I didn't get any issue with Android 10. Can I know your mobile model?
Status: Issue closed
|
sap-tutorials/Tutorials | 1097428279 | Title: Tutorial Page fiori-ios-scpms-starter-mission-03.md Issue. DEV GREEN
Question:
username_0: Tutorial issue found: [https://github.com/sap-tutorials/Tutorials/blob/master/tutorials/fiori-ios-scpms-starter-mission-03/fiori-ios-scpms-starter-mission-03.md](https://github.com/sap-tutorials/Tutorials/blob/master/tutorials/fiori-ios-scpms-starter-mission-03/fiori-ios-scpms-starter-mission-03.md) contains invalid primary tag.
Your tutorial was not created. Please double-check primary tag property.
Each tutorial md-file shall have primary tag provided above. Example:
```
---
title: Text Bundles within Node.js SAP HANA applications
description: Working with text bundles in Node.js
primary_tag: products>sap-hana
tags: [ tutorial>intermediate, products>sap-hana, products>sap-hana--express-edition ]
---
```
Affected server: DEV GREEN<issue_closed>
Status: Issue closed |
Audentio/xf2addon-issues | 285732187 | Title: [TH] Nodes 2, ability to style and layout categories
Question:
username_0: Was requested in ticket #9629.
Answers:
username_1: If this is the feature you've mentioned before, I don't think it's worth adding. We had it in UI.X 1, but it was rarely used (once or twice, if ever) and it would require refactoring the inheritance system quite a bit.
username_0: It was from a ticket so I figured I'd officially add it here. They were also requesting, basically, the ability to style the category strip and maybe some of the category itself, similarly to how nodes can be styled. The styling part might actually be more useful, arguably.
username_1: The category strip styling can likely be handled with the system we already have in place, so it'd just be a matter of having it build the css for category strips. Will look into it
Status: Issue closed
|
SketchUp/sketchup-developer-tools | 8238763 | Title: IE interrupt eval?
Question:
username_0: I was running a snippet that took a while to complete, and I suddenly got a message box from IE about a script taking longer to complete than expected.
It halted all other operations.
I'll see if I can create a reproducible test case. Wonder if that is possible to suppress....<issue_closed>
Status: Issue closed |
DewGew/Domoticz-Google-Assistant | 563067676 | Title: ERROR - 400 Client Error: Bad Request for url
Question:
username_0: hi everybody,
I have been struggling with the following issue. Acutually everything works fine, lights go on and off but every request is followed by the message that an unknown error has accured. I see in the dzga log the information below. Any one any suggestion what to do next ? Thank you in advance.
Andre
2020-02-11 10:02:39 - INFO - Request {
"inputs": [
{
"context": {
"locale_country": "NL",
"locale_language": "nl"
},
"intent": "action.devices.EXECUTE",
"payload": {
"commands": [
{
"devices": [
{
"id": "ColorSwitch2436"
}
],
"execution": [
{
"command": "action.devices.commands.OnOff",
"params": {
"on": true
}
}
]
}
]
}
}
],
"requestId": "16069518951766576407"
}
2020-02-11 10:02:39 - ERROR - 400 Client Error: Bad Request for url: https://accounts.google.com/o/oauth2/token
2020-02-11 10:02:39 - INFO - Error handling message {'inputs': [{'context': {'locale_country': 'NL', 'locale_language': 'nl'}, 'intent': 'action.devices.EXECUTE', 'payload': {'commands': [{'devices': [{'id': 'ColorSwitch2436'}], 'execution': [{'command': 'action.devices.commands.OnOff', 'params': {'on': True}}]}]}}], 'requestId': '16069518951766576407'}: {'errorCode': 'unknownError'}
2020-02-11 10:02:39 - INFO - Response {
"requestId": "16069518951766576407",
"payload": {
"errorCode": "unknownError"
}
}<issue_closed>
Status: Issue closed |
appveyor/ci | 212841140 | Title: Unresolved environment variables should be empty string for deployment
Question:
username_0: Currently they are being resolved to `$(variable_name)`. This is correct behavior during a build (because it resembles how CMD behaves and output is visible), but during deployment `$(variable_name)` is passed to deployment code as variable value, which result in unexpected behavior. |
getsentry/sentry-python | 617552993 | Title: First exception not flushed on Celery+Python until second occurrence
Question:
username_0: Here is an issue that's a little hard to understand. We run Sentry on Django+Celery.
```
Django==2.2.12
celery==4.4.2
sentry-sdk==0.14.3
```
We run many packages on that project, so I suspect it's a conflict with another one but I do not know where to start.
### Details
1. Exceptions are reported to Sentry as expected on the Django wsgi
2. Exceptions from code running on Celery will not be reported immediately
3. Triggering the same exception (same fingerprint, different message) a second time will "flush" both and they will both appear on Sentry
4. Triggering 2 different exceptions on a single Celery task will not report either of them to Sentry
5. Calling `Hub.current.client.flush()` doesn't change any of it
### Celery task
```python
@app.task
def sentry_logging_task(what):
try:
raise Exception("Sentry say what? {}".format(what))
except Exception as e:
logger.exception(str(e))
```
### Sentry init
```python
sentry_sdk.init(
dsn=settings.DNS,
integrations=[DjangoIntegration(), CeleryIntegration()],
environment=settings.ENV
)
ignore_logger('django.security.DisallowedHost')
```
Answers:
username_1: How are you launching the celery workers? Is there anything special about the way you are configuring celery?
username_0: Thanks for the follow up @username_1:
Nothing special. I can reproduce the issue 100% of the time locally and I launch it with `pipenv` so it loads a .env file as environment variables
```
pipenv run celery worker -A app -n app
```
On production it runs in a docker container.
username_2: I've confirmed that the bug can still be reproduced with the steps outlined above with the latest release of sentry-sdk (v1.3.1).
<details>
<summary>logs</summary>
```
[2021-08-04 18:48:04,389: INFO/MainProcess] Scaling down 1 processes.
[2021-08-04 18:48:04,391: DEBUG/ForkPoolWorker-1] Flushing HTTP transport
[2021-08-04 18:48:04,392: DEBUG/ForkPoolWorker-1] background worker got flush request
[2021-08-04 18:48:04,494: DEBUG/ForkPoolWorker-1] 2 event(s) pending on flush
[2021-08-04 18:48:06,395: ERROR/ForkPoolWorker-1] flush timed out, dropped 2 events
[2021-08-04 18:48:06,396: DEBUG/ForkPoolWorker-1] background worker flushed
```
</details>
@username_1 Would you mind removing the `needs-information` and `question` labels from this issue?
username_2: I've confirmed that this bug is still present in sentry-sdk 1.5.1 and master although the behaviour has changed due to commit a6cc9718fe398acee134e6ee9297e0fddea9b359.
There is still the issue that the first logged error doesn't get processed immediately and could stay pending on the queue indefinitely. However:
- Prior to a6cc9718fe398acee134e6ee9297e0fddea9b359 when the Celery worker was terminated the pending event would be dropped and never reported.
- From a6cc9718fe398acee134e6ee9297e0fddea9b359 onwards when the Celery worker is terminated the pending event does get reported to Sentry.
I still consider this an issue because the Celery worker may not be terminated for quite some time and so the error may not be reported until it is too late.
The code in https://github.com/getsentry/sentry-python/issues/687#issuecomment-837738001 can still be used to replicate the issue. By default Celery autoscales down inactive workers after 30 seconds so the error will be reported after 30 seconds. This can be changed with the [`AUTOSCALE_KEEPALIVE`](https://github.com/celery/celery/blob/527458d8d419cb41b74c5b05aaa8ddf957704f84/celery/worker/autoscale.py#L28) environment variable. For example, setting `AUTOSCALE_KEEPALIVE=600` will demonstrate the error doesn't get reported for 10 minutes. |
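For example, a sketch using the worker invocation from earlier in this thread (autoscaling must be enabled for the scale-down to occur):
```shell
# Default keepalive (~30s): the pending event is flushed when the idle worker scales down.
# Raising it shows the event staying unreported for 10 minutes:
AUTOSCALE_KEEPALIVE=600 pipenv run celery worker -A app --autoscale=10,1
```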
wix/react-native-navigation | 586259164 | Title: [v6] bottomBar icons cut off on Android
Question:
username_0: On Android the icons in the bottomBar are cut off.
On iOS it works fine:


This is a part of the code I use to build the navigation:
```
export function startLoggedInNavigation(user) {
Icon.getImageSource('home', 25, ctColor.neutral.regular, FA5Style.solid).then(source => {
eventsIcon = source;
Icon.getImageSource('search', 25, ctColor.neutral.regular, FA5Style.solid).then(source => {
searchIcon = source;
Icon.getImageSource('cog', 25, ctColor.neutral.regular, FA5Style.solid).then(source => {
profileIcon = source;
Icon.getImageSource('comments', 25, ctColor.neutral.regular, FA5Style.solid).then(source => {
chatIcon = source;
Navigation.setDefaultOptions(defaultNavOptions());
const children = [];
children.push(
createNavigationChildren('churchtoolsmobile.Start', {
bottomTab: {
text: t('start.tabbarName'),
icon: eventsIcon
},
topBar: {
title: {
text: t('start.tabbarName')
}
}
})
);
children.push(
createNavigationChildren('churchtoolsmobile.Discover', {
bottomTab: {
text: t('discover.tabbarName'),
icon: searchIcon
},
topBar: searchTopBarOptions()
})
);
children.push(
createNavigationChildren('churchtoolsmobile.ChatRoster', {
bottomTab: {
text: t('chat.tabbarName'),
icon: chatIcon,
badge: ''
},
topBar: {
title: {
text: t('chat.tabbarName')
}
}
})
);
children.push(
createNavigationChildren('churchtoolsmobile.Settings', {
bottomTab: {
text: t('profile.tabbarName'),
[Truncated]
root: {
bottomTabs: {
children: children
}
}
});
});
});
});
});
}
```
`Icon` is an import from the project `react-native-vector-icons`. The import looks like this: `import Icon, { FA5Style } from 'react-native-vector-icons/FontAwesome5';`
---
### Environment
* React Native Navigation version: 6.3.0
* React Native version: 0.61.5
* Platform(s) (iOS, Android, or both?): Android
* Device info (Simulator/Device? OS version? Debug/Release?): Simulator and Real Device
Answers:
username_1: Hi @username_0 ,
This is a known issue in react-native-vector-icons; in fact, I created an issue there myself as I ran into the same problem. Please give that a thumbs up. In the meanwhile, a possible solution is to patch-package the commit mentioned in the comments.
https://github.com/oblador/react-native-vector-icons/issues/1054
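If you go the patch-package route, the flow is roughly (a sketch): apply the commit's change by hand inside `node_modules/react-native-vector-icons`, then persist it with:
```shell
npx patch-package react-native-vector-icons
```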
Status: Issue closed
username_1: Duplicate of: https://github.com/wix/react-native-navigation/issues/5539 and https://github.com/wix/react-native-navigation/issues/4951 |
Nuitka/Nuitka | 447774427 | Title: --recompile-c-only mentionning it "resume" | "continue" the process
Question:
username_0: I suggest that `--recompile-c-only` be aliased to either `--continue` or `--resume`.
Or that, alternatively, the documentation of `--recompile-c-only` contains **continue** and/or **resume** since these are likely to be searched for when one experiences an interrupted compilation.
Answers:
username_1: Hello,
that is not their purpose really, although I agree, that is what they can be used for. Right now it's a development hack, where generated code will not be written, but instead manual modifications will survive.
Need to think of this, you are right, this is very nearly such a feature.
Yours,
Kay
username_1: After some thought, I think I am not in favor of this. Rather, I want to get Nuitka to cache so heavily that this is no longer needed. For development, the edit-and-recompile feature is very useful. I will make the help text clearer that it's not for end users at all.
Status: Issue closed
|
DataBiosphere/azul | 1049195134 | Title: Allow for sorting by `submissionDate` and `updateDate`
Question:
username_0: Sort `/index/foo` by `hits[].foo[].submissionDate`, and sort `/index/foo` by `hits[].foo[].updateDate`, with `foo` being one of (projects | samples | files) but not `bundles` (links.json doesn't have a `provenance` property).
Answers:
username_1: @hannes-ucsc: "Triage: we already support this and the data-browser already does this, why is this ticket still open?"
username_0: https://github.com/DataBiosphere/azul/issues/3501 added sorting by `aggregateSubmissionDate` and `aggregateUpdateDate` to the `projects` endpoint.
This ticket is for sorting by `submissionDate` or `updateDate` on the `projects`, `samples`, and `files` endpoints.
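For illustration, requests against those endpoints would presumably look like this (the `sort`/`order` parameter names are assumed to match the existing `projects` support):

```
GET /index/files?sort=submissionDate&order=desc
GET /index/samples?sort=updateDate&order=asc
```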
I believe this ticket was created after a PL discussion regarding:
https://ucsc-gi.slack.com/archives/C705Y6G9Z/p1636485993219100?thread_ts=1636482135.215900&cid=C705Y6G9Z |
MicrosoftDocs/azure-docs | 422063624 | Title: The error "Failed to update connection settings for resource xxx to Log Analytics" occurs when try connect an azure function to azure log analytics.
Question:
username_0: I have an Azure function named "xxx".
In the Azure Log Analytics workspace -> Azure resources, I can see the Azure function "xxx" there. When I click it, fill in any fields, and then click "save", I see this error: Failed to update connection settings for resource xxx to Log Analytics.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 521c75f5-5d71-10b5-df71-a8a6c0204a2e
* Version Independent ID: 4343bd59-d34e-8d99-a723-292fc27b60bd
* Content: [Log Analytics FAQ](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/log-faq)
* Content Source: [articles/azure-monitor/platform/log-faq.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-monitor/platform/log-faq.md)
* Service: **log-analytics**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
Answers:
username_1: @username_0 Thanks for the comment! We will investigate this issue and update you soon.
Status: Issue closed
username_2: Hi @username_0
Sorry for the issue you are experiencing. We are happy to help but this channel is reserved for documentation issues/bugs.
For the best possible response, we recommend posting your issue on either our [Log Analytics](https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=opinsights) or [Azure Function](https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=AzureFunctions) forums where one of our capable engineers will engage asap.
Your other option is to open a service request with our technical support team using [these steps](https://docs.microsoft.com/en-us/azure/azure-supportability/how-to-create-azure-support-request).
We will now close this issue on Github but hope to engage with you further via one of the support channels mentioned above.
Cheers. |
department-of-veterans-affairs/caseflow | 321942421 | Title: Discussion: Should we store person information in Caseflow?
Question:
username_0: # Problem statement
VACOLS currently stores a lot of information about people. Specifically contact info. This is done for the veteran's power of attorney (representative table) and for the appellant (correspondent table). We likely need access to the same information in our system. The board currently sends decisions to both POA and appellants, and the board needs to communicate with POAs throughout the decision process.
Do we want to store this information? Or can we leverage BGS to store it?
# We store the information
Advantages:
- Reduces our reliance on an external dependency BGS
- Allows us to develop more quickly
- Makes handling non-VBA business lines easier
Disadvantages:
- Two sources of truth. What happens when they're out of sync?
- How will we keep our information up to date?
Implementation:
- Could create a contact table which appellants and veterans would have a foreign key into (sketched below).
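A rough sketch of what that could look like as a Rails migration (table and column names here are hypothetical, not the actual Caseflow schema):

```ruby
class CreateContacts < ActiveRecord::Migration[5.1]
  def change
    create_table :contacts do |t|
      t.string :name
      t.string :address
      t.string :phone_number
      t.timestamps
    end
    # Both appellants and veterans point at a shared contact record.
    add_reference :appellants, :contact, foreign_key: true
    add_reference :veterans, :contact, foreign_key: true
  end
end
```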
# We let BGS store the information
Advantages:
- One source of truth
- Often information will already exist for the person
Disadvantages:
- The process to be allowed to write to BGS might be long
- Is BGS okay with us writing contact information for people who come up through non-VBA business lines (i.e. NCA)
- We are beholden to BGS. What if they go down? What if they decide to remove some of our data?
Implementation:
- Use BGS's person service which has both read and write endpoints to get and add people.
# We use BGS when the information exists, but never write to BGS
Advantages:
- Don't have to go through the process of writing to BGS
- Might make dealing with non-VBA business lines easier
Disadvantages:
- If someone we store is eventually added to BGS then there are two sources of truth. What happens when they're out of sync?
- How will we keep our information up to date?
- Super complicated model (sometimes go to BGS, sometimes go to our DB)
Implementation:
- Could create a contact table which appellants and veterans would have a foreign key into. But only store it when BGS doesn't return anything.
Answers:
username_1: I am curious to hear how complicated the process to be allowed to write to BGS would be. Do they already have API endpoints that would allow us to do that, or do they have to implement them?
username_2: I would advocate for whatever path gets us the most up-to-date information.
In AMA, the Board will take over all correspondence (letters) to Veterans and POAs for scheduling hearings (right now ROs do it). It's essential that they have the most up-to-date contact info so they can receive the letter telling them when to show up for a hearing. Today, 30% of Veterans don't show up for the hearing and we don't want this getting worse.
username_3: A few questions:
- Is it correct to say that contact info for POAs and appellants is the only duplicated info across VACOLS and BGS?
- Can we clarify what information exactly we mean when we say "contact info": names? addresses? phone numbers? emails? all of the above?
- Also, POAs aren't necessarily people; they can be firms or organizations, right?
username_2: Does anyone know about the Vets 360 initiative and how that might relate to this? My info is limited: this is something Vets.gov will be interfacing with in the near future. Vets 360 was supposed to launch soon. It's not one centralized database, but the goal is that changing a Veteran's address in one place changes it in all places.
username_4: So I've been chatting with AMO about what to do here for higher level reviews and supplemental claims:
- In most cases, the appellant/claimant will already exist as a related dependent of the veteran in the VBA corporate DB.
- There is a BGS endpoint for pulling relationships for the veteran. There is a chance that this endpoint is already available in prod since we use other endpoints on the `PersonWebService`.
- In the case where the appellant/claimant isn't in Corp DB, the CA will be asked to add it (in SHARE, I think). For BVA, we should consider whether this is possible in VBMS, because they don't have access to SHARE.
So I am very in favor of this option, since it's what we'll be doing for HLR and SC.
## We use BGS when the information exists, but never write to BGS
username_0: I do not have a good understanding of this.
--------
After Shane and I talked yesterday, I'm also in favor of `We use BGS when the information exists, but never write to BGS`. We can have our users write contact information in Share when it doesn't exist. In the future if we decide we have time to implement it, we could integrate with the insert/update person endpoints to allow users to change and insert appellants right in caseflow.
username_5: It seems like it would be more consistent than that, and likely look like:
1. Call BGS
2. If there is a BGS record, check if we have stored a separate Caseflow record for a prior instance when there was no BGS record, and if so, update Caseflow to reflect BGS data and archive the Caseflow record.
3. If there is no BGS record, use Caseflow data.
username_6: Actions:
- try pulling Veteran name from BGS (with VACOLS as backup) for some users to test it out and implement a datadog monitor @username_0
- async: think about caching strategy; @username_0 to start an issue about this
- investigate if Board employees can update in SHARE (short term) @username_5
- investigate if Board employees can update in VBMS (short term) @username_5
- from Jebby: Board does not update addresses in BGS or VBMS - think they don't have the ability to. Might be able to get permissions to do so in VBMS. Only update POA in both systems.
- investigate writes to BGS (long term) - @username_5 to determine who we can talk to about this
- <NAME>?
- can we write to BGS with person info
Status: Issue closed
|
aquawicket/DigitalKnob | 175182145 | Title: Between 8659 and 8660, proper CEF died
Question:
username_0: Apps like DKSDLBrowser, DKYoutube and DKFacebook have died.
The issue happened between revisions 8659 and 8660.
The file at fault seems to be DKWidget.cpp
Answers:
username_0: Maybe DKRocketToRml::PostProcess is hitting an element twice and, ya know.
username_0: working around this with CEF and javascript for now... closing
Status: Issue closed
|
intlify/vue-i18n-next | 654309075 | Title: defineComputed inside t, d and other composer methods is not needed
Question:
username_0: Looks like the `defineComputed` call is not needed, because the `computed` is immediately unwrapped and only a primitive string value is returned.
https://github.com/intlify/vue-i18n-next/blob/d0ed0a5b9924ce000d630e9f11ea00c608ff81fb/src/composer.ts#L505
Status: Issue closed |
TrySound/rollup-plugin-terser | 532421614 | Title: Using with "require"
Question:
username_0: I'm trying to use this with rollup plugin for gulp. However, gulp imports everything like this:
```javascript
const terser = require('rollup-plugin-terser')
```
This throws an error saying that gulp can't find anything in the plugin. I suppose the code is not compatible with CommonJS require statements.
Using an es6 `import` won't work. Any suggestions?
Answers:
username_1: Same as for es modules. There is no magic here as with default exports.
```
import { terser } from 'rollup-plugin-terser'
const { terser } = require('rollup-plugin-terser')
```
Status: Issue closed
|
saltstack/salt | 75712611 | Title: As of 2015.5.0 image is required with docker.running
Question:
username_0: Prior to salt 2015.5.0, image was not required for the `docker.running` state. So a state that looked like this worked just fine:
```
example-container-running:
  require:
    - docker: example-container-installed
  docker.running:
    - container: example-container
```
Now that same state will produce an error that the required `image` key is missing. This breaks backwards compatibility with the previous versions.
Ideally, if the container being started already exists (created from the docker.installed state, for example), the container's associated image is assumed.
Answers:
username_1: @username_0, thanks for the report.
username_2: @username_0 Do you have a current work around for this? Just provide the image as defined in `example-container`?
username_0: @username_2 Correct! If you specify the `image` key your states will work - it's just a pain to update/test all your formulas.
So, to be clear, to work around the problem in the example I posted, one would use something like:
```
example-container-installed:
  docker.installed:
    - name: example-container
    - image: some/image

example-container-running:
  require:
    - docker: example-container-installed
  docker.running:
    - container: example-container
    - image: some/image
```
(A related warning though, if your images use a tag specifier _do not_ use the `tag` key in the docker.installed or the docker.running states. They won't work properly if you do. You have to specify the tag as part of the `image` value in the traditional docker way of 'image/path:TAG'. That's another issue I need to find time to document)
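For example, a sketch of the inline-tag form (image name and tag are placeholders):

```
example-container-installed:
  docker.installed:
    - name: example-container
    - image: some/image:1.2.3 # tag goes inside the `image` value; no separate `tag` key
```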
username_2: @username_0 Awesome, thanks for the quick and clear response!
username_3: @username_0 I have a problem with this solution:
https://gist.github.com/username_3/3fd556c87bb2de9725d6
saltstack: 2015.0
docker: 1.6.2
docker-py: 0.5.3
Do you have an idea for a workaround for this workaround ;)
username_4: @username_3
Unfortunately, it looks like you're running into a different issue altogether. It turns out that the entire `dockerio` module is pretty much just a wrapper that calls `docker-py` methods. Unfortunately, `dockerio` doesn't do any version checking of `docker-py` to detect supported/unsupported features, so you get errors like the one you posted. I know, because I ran into _a lot_ of them :-).
Anyway, I took a peek at the [0.5.3 docker-py source](https://github.com/docker/docker-py/blob/0.5.3/docker/client.py) and, sure enough, create_container doesn't support `cpu_set` in that version. It looks like it wasn't introduced [until 0.6.0](https://github.com/docker/docker-py/blob/0.6.0/docker/client.py).
__TL;DR__ This is an unrelated issue. If you upgrade to 0.6.0 (or later) of `docker-py`, it should work for ya, good luck!
username_5: also, `port_bindings` have been renamed to `ports` in 2015.5 in `docker.running`
username_4: @username_1
I think we should close this issue. The docker module/state support in salt has undergone some pretty large changes since this issue was written and it's no longer relevant.
username_6: @username_4 I agree. Thank you!
Status: Issue closed
|
jlippold/tweakCompatible | 339202243 | Title: `Activator` working on iOS 11.3.1
Question:
username_0: ```
{
"packageId": "libactivator",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "libactivator",
"deviceId": "iPhone9,2",
"url": "http://cydia.saurik.com/package/libactivator/",
"iOSVersion": "11.3.1",
"packageVersionIndexed": true,
"packageName": "Activator",
"category": "System",
"repository": "rpetrich repo",
"name": "Activator",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 92% with 13 working reports.",
"id": "libactivator",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.7",
"shortDescription": "Centralized gestures, button and shortcut management for iOS",
"latest": "1.9.13~beta2",
"author": "<NAME>",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
facebookresearch/Detectron | 320920830 | Title: MIN_KEYPOINT_COUNT_FOR_VALID_MINIBATCH and IMS_PER_BATCH
Question:
username_0: Hi, I tried to train my network with IMS_PER_BATCH: 1 for keypoint detection.
I realized that the finalize_keypoint_minibatch function in lib/roi/keypoint_rcnn.py uses MIN_KEYPOINT_COUNT_FOR_VALID_MINIBATCH for the RoIs in a "batch". Hence, I think I have to modify the MIN_KEYPOINT_COUNT_FOR_VALID_MINIBATCH value, which is originally defined for IMS_PER_BATCH: 2. Would it be enough if I just divide it by 2?
ChrisCummins/ProGraML | 645353669 | Title: Refactor Ggnn.__init__(vocabulary,node_y_dimensionality,graph_y_dimensionality,graph_x_dimensionality,use_selector_embeddings,test_only,name)
Question:
username_0: I've selected [**Ggnn.__init__(vocabulary,node_y_dimensionality,graph_y_dimensionality,graph_x_dimensionality,use_selector_embeddings,test_only,name)**](https://github.com/username_0/ProGraML/blob/5fbdd912cf0927bcda36afb17f2f7622e1356f6b/programl/models/ggnn/ggnn.py#L192-L282) for refactoring, which is a unit of **72** lines of code and **7** parameters. Addressing this will make our codebase more maintainable and improve [Better Code Hub](https://bettercodehub.com)'s **Keep Unit Interfaces Small** guideline rating! 👍
Here's the gist of this guideline:
- **Definition** 📖
Limit the number of parameters per unit to at most 4.
- **Why**❓
Keeping the number of parameters low makes units easier to understand, test and reuse.
- **How** 🔧
Reduce the number of parameters by grouping related parameters into objects. Alternatively, try extracting parts of units that require fewer parameters. (A sketch of the first option follows below.)
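For illustration only, a minimal Python sketch of grouping the related dimensionality parameters (the type and field names here are hypothetical, not part of the ProGraML codebase):

```python
from typing import NamedTuple

class GgnnDimensionalities(NamedTuple):
    """Hypothetical value object bundling the three related dimensionality args."""
    node_y: int
    graph_y: int
    graph_x: int

# Ggnn.__init__ could then accept (vocabulary, dims, use_selector_embeddings,
# test_only, name), cutting the parameter count from 7 to 5.
dims = GgnnDimensionalities(node_y=2, graph_y=0, graph_x=0)
```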
You can find more info about this guideline in [Building Maintainable Software](http://shop.oreilly.com/product/0636920049159.do). 📖
----
ℹ️ To know how many _other_ refactoring candidates need addressing to get a guideline compliant, select some by clicking on the 🔲 next to them. The risk profile below the candidates signals (✅) when it's enough! 🏁
----
Good luck and happy coding! :shipit: :sparkles: :100:
Status: Issue closed |
SeleniumHQ/docker-selenium | 505954343 | Title: Selenium Grid Error - Error forwarding the new session cannot find : Capabilities {browserName: chrome, platform: VISTA, version: }
Question:
username_0: Hi,
My test cases are failed with an error **(Error forwarding the new session cannot find : Capabilities {browserName: chrome, platform: VISTA, version: })** message from the selenium-hub container
Here is my docker compose file -
**docker-compose.yaml file:**
```yml
version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.141.59-vanadium
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    ports:
      - "5555:5555"
    image: selenium/node-chrome-debug:3.141.59-vanadium
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
```
**I receive the below error response when requesting this URL in the browser - http://localhost:4444/wd/hub/static/resource/hub.html**
{
"sessionId": null,
"status": 13,
"value": {
"class": "org.openqa.grid.common.exception.GridException",
**"error": "unknown error",
"message": "Session [(null externalkey)] not available and is not among the last 1000** terminated sessions.\nActive sessions are[]",
"stackTrace": [
{
"className": "org.openqa.grid.internal.ActiveTestSessions",
"fileName": "ActiveTestSessions.java",
"lineNumber": 120,
"methodName": "getExistingSession"
},
{
"className": "org.openqa.grid.internal.DefaultGridRegistry",
"fileName": "DefaultGridRegistry.java",
"lineNumber": 387,
"methodName": "getExistingSession"
},
{
"className": "org.openqa.grid.web.servlet.handler.RequestHandler",
"fileName": "RequestHandler.java",
"lineNumber": 241,
"methodName": "getSession"
},
{
"className": "org.openqa.grid.web.servlet.handler.RequestHandler",
"fileName": "RequestHandler.java",
"lineNumber": 123,
"methodName": "process"
},
[Truncated]
{
"className": "org.seleniumhq.jetty9.util.thread.QueuedThreadPool",
"fileName": "QueuedThreadPool.java",
"lineNumber": 765,
"methodName": "runJob"
},
{
"className": "org.seleniumhq.jetty9.util.thread.QueuedThreadPool$2",
"fileName": "QueuedThreadPool.java",
"lineNumber": 683,
"methodName": "run"
},
{
"className": "java.lang.Thread",
"fileName": "Thread.java",
"lineNumber": 748,
"methodName": "run"
}
],
"stacktrace": "org.openqa.grid.common.exception.GridException: Session [(null externalkey)] not available and is not among the last 1000 terminated sessions.\nActive sessions are[]\n\tat org.openqa.grid.internal.ActiveTestSessions.getExistingSession(ActiveTestSessions.java:120)\n\tat org.openqa.grid.internal.DefaultGridRegistry.getExistingSession(DefaultGridRegistry.java:387)\n\tat org.openqa.grid.web.servlet.handler.RequestHandler.getSession(RequestHandler.java:241)\n\tat org.openqa.grid.web.servlet.handler.RequestHandler.process(RequestHandler.java:123)\n\tat org.openqa.grid.web.servlet.DriverServlet.process(DriverServlet.java:85)\n\tat org.openqa.grid.web.servlet.DriverServlet.doGet(DriverServlet.java:63)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:687)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:790)\n\tat org.seleniumhq.jetty9.servlet.ServletHolder.handle(ServletHolder.java:865)\n\tat org.seleniumhq.jetty9.servlet.ServletHandler.doHandle(ServletHandler.java:535)\n\tat org.seleniumhq.jetty9.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat org.seleniumhq.jetty9.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat org.seleniumhq.jetty9.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.seleniumhq.jetty9.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat org.seleniumhq.jetty9.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat org.seleniumhq.jetty9.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat org.seleniumhq.jetty9.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)\n\tat org.seleniumhq.jetty9.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat org.seleniumhq.jetty9.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat org.seleniumhq.jetty9.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat org.seleniumhq.jetty9.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat org.seleniumhq.jetty9.server.handler.ContextHandler.doScope(ContextHandler.java:1242)\n\tat org.seleniumhq.jetty9.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat org.seleniumhq.jetty9.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.seleniumhq.jetty9.server.Server.handle(Server.java:503)\n\tat org.seleniumhq.jetty9.server.HttpChannel.handle(HttpChannel.java:364)\n\tat org.seleniumhq.jetty9.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat org.seleniumhq.jetty9.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)\n\tat org.seleniumhq.jetty9.io.FillInterest.fillable(FillInterest.java:103)\n\tat org.seleniumhq.jetty9.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat org.seleniumhq.jetty9.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat org.seleniumhq.jetty9.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat org.seleniumhq.jetty9.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n\tat org.seleniumhq.jetty9.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)\n\tat org.seleniumhq.jetty9.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)\n\tat org.seleniumhq.jetty9.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)\n\tat org.seleniumhq.jetty9.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)\n\tat java.lang.Thread.run(Thread.java:748)\n"
Answers:
username_1: `platform: VISTA`
The docker containers are based on Ubuntu/Linux, therefore the Grid cannot find the requested capabilities. Please use Linux as Platform.
If there are more questions, please send them to the [selenium user group](https://groups.google.com/forum/#!forum/selenium-users), to [StackOverflow](https://stackoverflow.com/questions/tagged/selenium), or join us in the [IRC/Slack channel](https://goo.gl/9o4J3Y) where the community can help you as well.
Status: Issue closed
username_1: What do your capabilities look like?
username_1: Aside from that, it is not clear what the error is now. There is a template for issues that is really helpful, so I would recommend filing a new issue and filling it out completely.
username_0: Here is the capabilities setup -
```
ChromeOptions Options = new ChromeOptions();
Options.AddAdditionalCapability("platform", "Linux", true);
Options.AddAdditionalCapability("version", "latest", true);
driver = new RemoteWebDriver(new Uri("http://host.docker.internal:4444/wd/hub"), Options.ToCapabilities());
```
username_1: `version`: `latest` won't work
Please use the exact version as in the docker images or remove it.
Also, not sure if the url will work.
username_1: Have a look at https://github.com/SeleniumHQ/docker-selenium/wiki/Getting-Started-with-Docker-Compose#step-4-running-tests
username_0: But these capabilities and URL work perfectly with the **selenium/standalone-chrome** and **selenium/standalone-chrome-debug** docker images.
Do I need to modify them for the **selenium hub** docker images?
username_0: You were right, Diemol: **_version: latest_** is not working for the **hub**. If I specify the Chrome browser version (which is defined in the hub image) or keep it blank, then it works. Thanks for your help. :-)
Anyway, the desired capabilities below (lines of code) work for **standalone** but not for **hub**. Any idea?
```
DesiredCapabilities desiredCapabilities = new DesiredCapabilities("chrome", $"", Platform.CurrentPlatform);
driver = new RemoteWebDriver(new Uri("http://host.docker.internal:4444/wd/hub"), desiredCapabilities);
```
username_2: I am running on macOS and got the same error.
Here is my docker-compose.yml file
```yml
version: '3'
services:
selenium-hub:
image: selenium/hub:3.141.59-xenon
container_name: selenium-hub
privileged: true
ports:
- 4444:4444
selenium-chrome-standalone:
image: selenium/standalone-chrome:3.141.59-xenon
# ports:
# - 4444:4444
volumes:
- shared:/home/protractor-test
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
selenium-firefox-standalone:
image: selenium/standalone-firefox:3.141.59-xenon
# ports:
# - 4444:4444
volumes:
- shared:/home/protractor-test
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
protractor:
build: .
environment:
- SELENIUM_ADDRESS=http://selenium-hub:4444/wd/hub
volumes:
- shared:/home/protractor-test
links:
- selenium-hub
volumes:
shared:
```
```
Creating network "protractor-ecs-fargate-example_default" with the default driver
Creating selenium-hub ... done
Creating protractor-ecs-fargate-example_selenium-chrome-standalone_1 ... done
Creating protractor-ecs-fargate-example_protractor_1 ... done
Creating protractor-ecs-fargate-example_selenium-firefox-standalone_1 ... done
Attaching to selenium-hub, protractor-ecs-fargate-example_protractor_1, protractor-ecs-fargate-example_selenium-firefox-standalone_1, protractor-ecs-fargate-example_selenium-chrome-standalone_1
selenium-hub | 2019-11-12 10:45:45,687 INFO Included extra file "/etc/supervisor/conf.d/selenium-hub.conf" during parsing
selenium-hub | 2019-11-12 10:45:45,689 INFO supervisord started with pid 7
selenium-hub | 2019-11-12 10:45:46,692 INFO spawned: 'selenium-hub' with pid 10
[Truncated]
protractor_1 | [10:53:39] I/testLogger -
protractor_1 |
protractor_1 | [10:53:39] E/launcher - Runner process exited unexpectedly with error code: 1
protractor_1 | [10:53:39] I/launcher - 0 instance(s) of WebDriver still running
protractor_1 | [10:53:39] I/launcher - chrome #01 failed with exit code: 1
protractor_1 | [10:53:39] I/launcher - firefox #11 failed with exit code: 1
protractor_1 | [10:53:39] I/launcher - overall: 2 process(es) failed to complete
protractor_1 | [10:53:39] E/launcher - Process exited with error code 100
protractor_1 | npm ERR! code ELIFECYCLE
protractor_1 | npm ERR! errno 100
protractor_1 | npm ERR! [email protected] test:headless: `env-cmd protractor dist/protractor.config.js --headless`
protractor_1 | npm ERR! Exit status 100
protractor_1 | npm ERR!
protractor_1 | npm ERR! Failed at the [email protected] test:headless script.
protractor_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
protractor_1 |
protractor_1 | npm ERR! A complete log of this run can be found in:
protractor_1 | npm ERR! /root/.npm/_logs/2019-11-12T10_53_39_407Z-debug.log
protractor-ecs-fargate-example_protractor_1 exited with code 100
``` |
reactor/reactor-netty | 896472745 | Title: Spring Boot 2.4.5 (gateway) and reactor netty stopped working
Question:
username_0: After upgrading to Spring Boot 2.4.5 with Reactor Netty 1.0.6 we are not able to use IPv6 anymore. The problem is that strict decoding was added that throws an exception (#1589).
For now we had to disable the Forwarded filter using:
spring.cloud.gateway.forwarded.enabled=false
## Expected Behavior
DefaultHttpForwardedHeaderHandler should fall back to parseXForwardedInfo in case strict decoding fails. Otherwise a server can stop working just because of an incorrect implementation by any gateway vendor in an IPv6 environment. As fixing these gateways takes time, I do not think strict decoding of forwarded information is a good idea.
## Actual Behavior
```
java.lang.IllegalArgumentException: Invalid IPv4 address 0:0:0:0:0:0:0:1:50532
at reactor.netty.transport.AddressUtils.parseAddress(AddressUtils.java:132)
at reactor.netty.http.server.DefaultHttpForwardedHeaderHandler.parseForwardedInfo(DefaultHttpForwardedHeaderHandler.java:66)
at reactor.netty.http.server.DefaultHttpForwardedHeaderHandler.apply(DefaultHttpForwardedHeaderHandler.java:47)
```
The root cause is that Spring's org.springframework.cloud.gateway.filter.headers.ForwardedHeadersFilter is not encoding "Forwarded" according to the standard, but there can be many more gateways like this.
## Steps to Reproduce
Use Spring Cloud Gateway with Spring Boot 2.4.5 + Reactor Netty 1.0.6. Then use curl with an IPv6 address against the Spring Cloud Gateway (or any other broken gateway) that points to the final application.
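For example, a request like this against the gateway should trigger it (host and port are placeholders; `-g` stops curl from interpreting the IPv6 brackets):

```sh
curl -g "http://[::1]:8080/"
```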
## Possible Solution
```java
@Override
public ConnectionInfo apply(ConnectionInfo connectionInfo, HttpRequest request) {
String forwardedHeader = request.headers().get(FORWARDED_HEADER);
if (forwardedHeader != null) {
try {
return parseForwardedInfo(connectionInfo, forwardedHeader);
} catch (IllegalArgumentException e) {
// Fallback to xforwarded info
}
}
return parseXForwardedInfo(connectionInfo, request);
}
```
Or make strict configurable and not hardcoded as "true".
## Your Environment
* Spring Boot 2.4.5
* Spring Cloud Gateway 3.0.2
* Reactor netty 1.0.6
Answers:
username_0: @username_1 @simonbasle Please take a look as you were involved in the PR
username_1: @username_0 What about providing your own handler for the Forwarded headers?
You can do it with:
```
@Component
public class MyNettyWebServerCustomizer
implements WebServerFactoryCustomizer<NettyReactiveWebServerFactory> {
@Override
public void customize(NettyReactiveWebServerFactory factory) {
factory.addServerCustomizers(httpServer ->
httpServer.forwarded((connectionInfo, httpRequest) -> {...}));
}
}
```
username_0: @username_1 Yes, but then I would need to "copy" the class and maintain it. This handler is also the default, so everyone using Spring Boot will hit this problem, and everyone will need to provide their own copy of this class. We should not break the contract (it was not failing with an IllegalArgumentException before).
I was thinking about wrapping the class, but it is package-private and also final. Having many implementations across the world for the same thing does not seem to be the best solution to me in the long term. Yes, in the short term I can copy it and use "false".
username_0: @username_2 any comment? As you were part of review.
username_1: @username_0 There wasn't a fallback to `parseXForwardedInfo`, the source prior `1.0.6` version:
https://github.com/reactor/reactor-netty/blob/v1.0.5/reactor-netty-http/src/main/java/reactor/netty/http/server/DefaultHttpForwardedHeaderHandler.java#L45-L51
Both the decoding of the `Forwarded` header and the result were not correct prior to 1.0.6, and I do not see how we can provide something less strict.
My opinion is that the source of the problem should be fixed and not to make compromises with the validation in Reactor Netty.
For a workarounds there is an API for providing another interpretation of the decoding of `Forwarded` header.
username_0: I know there was no fallback, but before, it did not break the whole communication channel. As it was not required before, all upstreams could have been broken (like Spring Gateway is right now). That means until someone fixes Spring Gateway, or potentially other types of gateways, your library will not work. That is a pretty strong behavior change.
I know I can work around it. And I did. And everyone else will need to. But for a common library like this one, you should give time to accommodate such a big change. Another option is to provide a fallback which at least does not break the server: forwarded headers simply would not be processed in case they do not follow the standard. Everyone would understand that.
The Internet is full of legacy software that maybe does not follow standards, but it works right now, like yours was working. Putting assertions in a path that can easily be misused by third-party gateways is not a good approach IMHO.
username_1: @simonbasle @username_2 The only possibility that I see is to provide a configuration so that if we are not able to parse the `Forwarded` header, we log an error and continue as if the `forwarded` functionality were not enabled. Wdyt?
username_2: Perhaps as a system property, to be able to enable the previous behavior, to be phased out eventually. It's also worth pointing out that the previous behavior isn't deterministic and that's never a good thing also from a security standpoint.
username_1: @username_0 I'm gonna add a system property that can be used to disable the strict decoding. Wdyt?
username_0: I am not fan of these global properties as it makes it harder in complex (mixed) deployments. Why do you think having "strict" parameter exposed as wrong one? Simply if validation do not pass and strict is true filter simply works as headers are not available (because these are not valid). It will just not throw an exception.
We can create another variant of "forwarded" in http server:
```
class HttpServer {
public final HttpServer forwarded(boolean forwardedEnabled, boolean strict);
```
username_1: @username_0 This system property should be used as a workaround for a short period of time; as such, it is easier to enable a system property for a given existing solution. Having an API implies that you have to change the existing solution in order to enable this workaround.
username_1: @username_2 @simonbasle @username_0 PTAL #1640
username_0: Sure, I am fine with that. I just keep in mind that it took me two hours to find out why something that worked stopped working during a micro version upgrade. More people will hit this issue. We are covered already; I was just thinking about other people, as they will not know about this magic property.
Status: Issue closed
|
ampproject/amphtml | 140303802 | Title: *_HOST URL substitutions use hostname, omit port number
Question:
username_0: Currently (at least as of v1457112743399) the `SOURCE_HOST`, `CANONICAL_HOST` and `AMPDOC_HOST` URL substitutions rely on the `hostname` property of the anchor element, not `host`. Because of this, the port number is omitted. This can cause problems when referencing hosts running on non-standard ports, for example in pre-production environments.
I'd like to propose that these `*_HOST` vars should be changed to rely on `host`, and that new variables be introduced for `*_HOSTNAME` that would preserve the existing behavior. This would of course be a breaking change, but would be more consistent going forward.
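To illustrate the difference on an anchor element (values are just examples):

```js
const a = document.createElement('a');
a.href = 'https://example.com:8080/page';
console.log(a.hostname); // "example.com"
console.log(a.host);     // "example.com:8080"
```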
Answers:
username_1: I have no problem with this. @cramforce what do you think?
username_2: Maybe combined with vendors.js updates we can do this?
username_1: I think we should just go ahead and do it. /to @username_3
username_2: @username_3 when you start on this, please update the milestone. Thanks!
username_3: This sounds good. This hopefully won't break anyone's current usage as I'd imagine people don't use ports in their prod-setup.
Status: Issue closed
|
crsh/prereg | 336718139 | Title: (Not) possible to set `keep_md: true`
Question:
username_0: In addition to knitting the PDF, I would like to keep the intermediate markdown file. I tried to add the `keep_md: true` option, but with no success (see the MRE below). (I hope it's not because I messed up the whitespace.) If `prereg` does not support this option, would it be cumbersome to add it?
Thanks!
```rmarkdown
---
title : "My preregistration for the COS Preregistration Challenge"
shorttitle : "My preregistration"
date : "`r Sys.setlocale('LC_TIME', 'C'); format(Sys.time(), '%d\\\\. %B %Y')`"
author:
  - name : First Author
    affiliation : 1
  - name : <NAME>
    affiliation : "1,2"
affiliation:
  - id : 1
    institution : Wilhelm-Wundt-University
  - id : 2
    institution : Konstanz Business School
output:
  prereg::cos_prereg:
    keep_md: true
---

# Study Information

Lorem Ipsum
```
gives
```console
Error in (function (toc = FALSE, toc_depth = 2, number_sections = FALSE, :
unused argument (keep_md = TRUE)
Calls: <Anonymous> ... create_output_format -> do.call -> <Anonymous> -> do.call -> <Anonymous>
Execution halted
```
Answers:
username_1: Hey Stefan, I realize it's been a while (sorry). I just tried to reproduce the problem, and it appears it has been resolved in the meantime. Is this still an issue for you?
username_0: @username_1 : Thanks for taking this up again! I've just run the above MRE again and didn't get an error, but I also cannot find the markdown document. Maybe I've just missed it. If you ran the MRE, can you find the markdown file?
Below are some session infos.
```
R version 4.1.1 (2021-08-10)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Catalina 10.15.7
prereg_0.5.0
workflowr_1.6.2
rmarkdown_2.11
yaml_2.2.1
knitr_1.34
```
username_1: That's odd. It did retain the markdown file for me.
username_0: @username_1 : I just tried it again. The HTML of the Rmd file appears in `docs`, but I can't find the `.md` file. Where did you find your `.md` file? Can you share your MRE with me? That may help me find out whether there's something else/additional in my setup that is creating the problem. |
elim322/Instanews | 346348122 | Title: Unintended CSS affecting loading gif
Question:
username_0: Even though you've set a width and a height for your loading gif, it looks like there's an issue with the following line. Try making your CSS more specific: e.g., if the `img` tag selector is needed to change something in the header, like the logo, try giving the logo a class name and using that instead of the `img` tag.
https://github.com/username_1/Instanews/blob/d55462b6ee56860dd37240c579af834542e7523d/sass/style.scss#L38
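For example, a sketch of the more specific approach (the class name is hypothetical; add it to the logo's markup first):

```scss
// Style only the logo instead of every img on the page.
.header-logo {
  width: 50px;
  height: 50px;
}
```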
Status: Issue closed |
socallinuxexpo/scale-chef | 600039938 | Title: update SPF records
Question:
username_0: reminder to update SPF records for lists when we cutover
Status: Issue closed
Answers:
username_1: we're still going through mailgun, so this shouldn't be an issue. reopen if I missed something
Status: Issue closed
|
symfony/maker-bundle | 753538247 | Title: Maker bundle is creating files in the wrong directory
Question:
username_0:
```
created: home/src/Entity/User.php
created: home/src/Repository/UserRepository.php
updated: home/src/Entity/User.php
updated: config/packages/security.yaml
Success!
Next Steps:
- Review your new App\Entity\User class.
- Use make:entity to add more fields to your User entity and then run make:migration.
- Create a way to authenticate! See https://symfony.com/doc/current/security.html
```
It's creating the files in home/, instead of the already existing src/ folder.
How could I fix this?
Answers:
username_1: Howdy @username_0! - Just to get quick info from you:
1) Which version of PHP are you using?
2) What operating system are you using?
3) Is this a docker setup?
4) Is there anything customized / unusual about the environment you are running `make:user` in?
username_1: Oh, and I almost forgot: can you confirm that after running `make:user` the generated `User.php` file _does not exist_ in `src/Entity`? Or are the files generated in the `src/Entity` folder, but the output is showing `home/src/Entity`?
username_0: The file does not exist under the normal src/Entity sadly. Only under the newly created home/src/Entity.
Thanks in advance! |
frontendr/python-rendertron | 1161099720 | Title: default_settings is null.
Question:
username_0: version: 0.2.1
<img width="677" alt="image" src="https://user-images.githubusercontent.com/50765823/157000518-fa7ece76-5e71-403d-a598-97391b2e187a.png">
<img width="677" alt="image" src="https://user-images.githubusercontent.com/50765823/157000659-ffedab75-ab99-4a15-b633-ba7656edc564.png">
default_settings is null. |
mikeyhew/shopify_graphql_client | 480872847 | Title: Bundle gem error: Could not find a valid gem 'shopify_graphql_client'
Question:
username_0: When trying to install the gem I get this error:
ERROR: Could not find a valid gem 'shopify_graphql_client' (>= 0) in any repository
ERROR: Possible alternatives: shopify-graphql_proxy, graphql-client, graphql_client, github-graphql-client, shopify_client
Is this gem no longer in use?
Answers:
username_1: Oh sorry, this isn't on rubygems so you'll have to put this in your Gemfile:
```ruby
gem 'shopify_graphql_client', github: 'username_1/shopify_graphql_client'
```
The documentation on this in the Readme is auto-generated by bundler and is wrong.
Status: Issue closed
username_0: You the man. I set it up and it works great, solved the memory issue I was facing. Thanks for creating this 👍 |
welaika/wordmove | 163955377 | Title: Push can't write dump.sql Errcode2
Question:
username_0: Naturally, I replaced all sensitive details with X's.
I'm running VVV + VV. I have a public SSH key setup without a passphrase.
And, here's the Movefile:
```
local:
  vhost: "http://vhost.local"
  wordpress_path: "/srv/www/wordmove/htdocs" # use an absolute path here

  database:
    name: "wordmove"
    user: "wp"
    password: "wp"
    host: "localhost"

staging:
  vhost: "http://XXXXXX.com"
  wordpress_path: "~/domains/XXXXXX.com/html" # use an absolute path here

  database:
    name: "dbXXXXXX_1clk_wordpress_BXJgBXmJK8xKTgxC"
    user: "wordpress_XXXXXX"
    password: "<PASSWORD>"
    host: "internal-db.sXXXXXX.gridserver.com"
    # port: "3308" # Use just in case you have exotic server config
    # mysqldump_options: "--max_allowed_packet=1G" # Only available if using SSH

  exclude:
    - ".git/"
    - ".gitignore"
    - ".sass-cache/"
    - "node_modules/"
    - "bin/"
    - "tmp/*"
    - "Gemfile*"
    - "Movefile"
    - "wp-config.php"
    - "wp-content/*.sql"

  # paths: # you can customize wordpress internal paths
  #   wp_content: "wp-content"
  #   uploads: "wp-content/uploads"
  #   plugins: "wp-content/plugins"
  #   mu_plugins: "wp-content/mu-plugins"
  #   themes: "wp-content/themes"
  #   languages: "wp-content/languages"

  ssh:
    host: "sXXXXXX.gridserver.com"
    user: "XXXXXX.com"
    # password: "<PASSWORD>" # password is optional, will use public keys if available.
    # port: 22 # Port is optional
    # rsync_options: "--verbose" # Additional rsync options, optional
    # gateway: # Gateway is optional
    #   host: "host"
    #   user: "user"
    #   password: "<PASSWORD>" # password is optional, will use public keys if available.
[Truncated]
  #   user: "user"
  #   password: "<PASSWORD>"
  #   host: "host"
  #   passive: true
  #   scheme: "ftps" # default "ftp"

# staging: # multiple environments can be specified
#   [...]
```
Connecting to the remote server via SSH and running the following dumps the file into dump.sql perfectly fine:
`mysqldump -h internal-db.sXXXXXX.gridserver.com -u wordpress_XXXXXX -p XXXXXX dbXXXXXX_1clk_wordpress_BXJgBXmJK8xKTgxC > dump.sql`
The MediaTemple documentation utilizes the '>' operator as opposed to the '--result-file' option, but I'm not sure if that has anything to do with it, really.
This project is stuck in the mud until this is resolved, so I'd appreciate any assistance.
Thank you!
Answers:
username_0: For those looking to investigate in the future, it helps to run the individual command directly on the server.
Doing a 'pwd' in SSH revealed the proper path to where the dumped file needed to go.
Despite it not updating the database, at least I no longer receive the original errors.
Status: Issue closed
username_1: So the problem was just the `~` in the `wordpress_path` key?
And the subsequent errors were the ones reported here: #326?
Just for easier reference for other members
username_0: The problem was Media Temple's HTML path differed from running a 'pwd' command, so the path wasn't correct.
E.g. I tried using:
`wordpress_path: "/home/000000/domains/wordmove.com/html"`
but this resolved it:
`wordpress_path: "/home/000000/users/.home/domains/wordmove.com/html"`
username_1: Yep. Got it. Thanks for the explanation.
protegeproject/cellfie-plugin | 172475846 | Title: mm:uuidEncode not always resolving to same value for same reference
Question:
username_0: In the following example, the same reference expression using <tt>mm:uuidEncode</tt> (<tt>@B*(mm:uuidEncode))</tt> resolves to different values in two expressions:
<img width="982" alt="screen shot 2016-08-22 at 7 29 36 am" src="https://cloud.githubusercontent.com/assets/1731323/17859325/ff79871c-683d-11e6-94fc-9100eda72fd8.png">
Oddly, in the following slightly modified example it works correctly:
<img width="978" alt="screen shot 2016-08-22 at 7 31 41 am" src="https://cloud.githubusercontent.com/assets/1731323/17859332/055e72a0-683e-11e6-8318-e209957f0da7.png"> |
lauraW522/defenses | 767737364 | Title: Missing Illusions 'Reflection'
Question:
username_0: Hi first of all - great work! This is really awesome. However I was messing around with this yesterday and noticed you had not included reflection (though perhaps this is due to it being removed after every hit?)
That said, this is really really awesome work! Saved me so much time making triggers on my own and even caught some defences I didn't even know I had!
Answers:
username_1: Thank you! I can add that one in, though it won't be one that's attempted to be unkept. Sounds like just an oversight; I rarely play classes that use it.
Status: Issue closed
username_1: This has been added, you can download the latest version or download the release I made, they're identical.
username_0: Wow! Thank you, that was an amazing turnaround! I'm glad that I don't have to put it in myself (haha), though I'll definitely do a diff to see exactly how you added a new defense. If I find a new defense that isn't included, I'll be happy to add it myself and do a merge request.
Thank you!
username_1: Absolutely! Adding new defenses is simple. Create a new entry in the defense_database script with the name of the defense and all the other information filled out like the other entries, i.e.:
```lua
ablution = {
  name = "ablution",
  active = false,
  auto = false,
  pow = 2,
  enabled = true,
  req = {"eq","bal"},
  use = {"eq"},
  skill = "sacraments",
  initial = "combat",
  cmd = "starchant ablution"
},
```
And put it in alphabetical order. Then add the trigger for raising the defense and for losing it, if one exists. `updateDefenses("ablution",true)` means the defense is raised, while `false` means it's lost. Log out, log back in, and everything else should work fine.
username_0: Wow thanks Laura! Again, exactly what I need :) |
denisdanielyan/as3-Application-Only-Twitter | 49374764 | Title: Security Sandbox Violation
Question:
username_0: Hi mate,
thanks for suck a great work but when deploying i'm getting the following error:
Error #2044: Unhandled securityError:. text=Error #2048: Security sandbox violation
is it just me or is a known error?
thanks!
Status: Issue closed
Answers:
username_1: First, sorry for the late reply.
What you are seeing is caused by a missing cross-domain-policy file at Twitter.
Which means you are doing something wrong, as Twitter will never post such a file.
To be honest, I never tested the module as a Web project and probably never will.
So I am sorry but I am not able to help you out here. |
OpenNTF/Bootstrap4XPages | 26789408 | Title: Create new control to create a Bootstrap form
Question:
username_0: As described at http://getbootstrap.com/css/#forms, a Bootstrap form is created using a combination of divs with specific classes. If done properly, this will automatically make the form responsive. The current (standard XPages extlib) implementation, which uses a `<table>` structure, isn't.
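For reference, the div-based structure from the Bootstrap docs looks roughly like this (the field names are just an example):

```html
<form class="form-horizontal">
  <div class="form-group">
    <label for="inputEmail" class="col-sm-2 control-label">Email</label>
    <div class="col-sm-10">
      <input type="email" class="form-control" id="inputEmail" placeholder="Email">
    </div>
  </div>
</form>
```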
Answers:
username_1: I have a control that does this, will add it in soon |
MPDL/KEEPER | 406853690 | Title: Tune usage of /tmp dir on background node
Question:
username_0: 1. clean up files older then a week to avoid an disk overflow
1. move `/tmp` into `/run/tmp` for better performance by virus scanning and document preview generation
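A minimal sketch for the cleanup part, assuming a daily cron job is acceptable:

```sh
# Delete files under /tmp not modified for more than 7 days
# (retention and path are adjustable; -xdev keeps find on one filesystem).
find /tmp -xdev -type f -mtime +7 -delete
```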
Answers:
username_1: Fixed under: https://github.com/MPDL/KEEPER/commit/09cfc0b73ac1c4b5f17a3a069c81d7b2d1c317b7
Status: Issue closed
|
tbranyen/diffhtml | 225450600 | Title: Add `createNode` to Transaction#internals
Question:
username_0: For my purposes, I needed to reimplement the `patchNode` function, and the Transaction object currently provides all of the internals needed to do so, rather than importing them from `diffHTML` from a non-public path. `createNode` is the only internal function that isn't accessible this way, so I'm currently pulling it in from `diffhtml/lib/node`, which I'd rather avoid. Can that be provided on the Transaction object?
Answers:
username_1: @username_0 I agree that digging into the module structure is probably not ideal. What we could think about doing is deprecating the `internals` concept in core and instead maintain a package that digs into the module structure and exposes the internals safely since we can maintain compatibility safely with the wrapper.
Could look like:
``` js
exports.createTree = require('diffhtml/lib/tree/create');
exports.makeTree = require('diffhtml/lib/tree/make');
/* etc... */
```
Thoughts?
username_1: I'm okay with adding `createNode` to the `internals` object, but I don't think it should be directly attached to the transaction object.
username_0: There is an `internals` object already attached to the Transaction object, which is why I thought that was an acceptable location. Whatever mechanism you think is most maintainable from your end works for me; just looking for a "blessed" way of accessing some of those internals, especially because once the module is published, you can easily run into weird bugs if you import the main entry from the `dist` folder and some internals through `lib` and end up references different instances of the various caching mechanisms.
username_1: @username_0 we're talking about the same thing when we say `internals` object. I'm okay with putting it there. Feel free to PR that, I might have time during lunch today to do it otherwise.
username_0: Ah ok. Looking at the code now, it looks like the `internals` object is just a namespace import of the `util` folder. Should we re-export from that module's `index.js`? And how should it be re-exported? `export * as Node from '../node';` or each function individually? And should we re-export the `Tree` namespace in this way?
And not to go too far down the rabbit hole, but feasibly, the built-in tasks could use the `internals` object themselves so they could work as examples for anyone looking to implement their own tasks.
username_1: I'd like to mess with the idea of a separate `internals` package before we jump too fair down the rabbit hole. Ideally the internal tasks will use the same exact API as a consumer would to dogfood the concept :-p
username_0: Ok, I'm happy to handle a PR if you set the direction. Let me know what you'd prefer.
username_1: I'm experimenting with a `diffhtml-shared-internals` module that exposes the internals from diffHTML in a safe way. The reason why I'm moving slow on this, is that I'd like to try and suss out enough of the API to avoid needing any kind of rewrites later on. I don't have it figured out yet, it's going to take time. I'm focusing on the React Like Component implementation and I want to have that polished with all lifecycle methods bound with the most minimal integration necessary. Right now some use cases are a bit hairy and need more thought.
I'll open up a PR soon so you can see what I have. Luckily this first implementation won't really require much changes on your end other than an imports change.
username_0: Cool looking forward to it.
Status: Issue closed
username_0: Resolved by #129. |
codeforkansascity/kc_311dailybrief | 57854950 | Title: Display Watch Cases as Pins on the map
Question:
username_0: The watched cases are stored in a cookie and are available through WatchList in watch-list.js.
* As pins are displayed in add_yesterdays_markers, if the case is in the property list, display it as a black pin (a sketch follows after this list).
* Display any watched cases not already displayed.
* Query for cases not already displayed - will need to use OR
* Display pins on map. |
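A possible shape for the pin-color check (sketch only; the accessor on `WatchList` is an assumption, so adapt it to the actual API in watch-list.js):

```js
// Hypothetical helper used while adding yesterday's markers.
function pinColorFor(caseId) {
  var watched = WatchList.getCases ? WatchList.getCases() : [];
  return watched.indexOf(caseId) !== -1 ? 'black' : 'blue';
}
```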
coreos/fedora-coreos-tracker | 374152973 | Title: no cloud agents: gce
Question:
username_0: In #12 we decided that we'd like to try to not ship cloud agents. This ticket will document investigation and strategy for shipping without a cloud agent on the **google compute engine cloud platform**.
See also #41 for a discussion of how to ship cloud specific bits using ignition.
Answers:
username_1: GCE wanted google-oslogin added to CL for it to remain a primary-tier OS, so we implemented that. It shouldn't be needed for networking, and the rest of the GCE stuff we can do in a container like we do on CL (although it might be worth revisiting that, since we haven't in a while).
oslogin itself, though, needs to be implemented in the OS since it messes with nsswitch, pam, and sshd. We'll need to conditionally enable it on GCE (could be done with Ignition, and 3.0.0 will make it easier to optionally disable it).
username_0: FYI: rpm package reviews for oslogin rpm: https://pagure.io/fedora-server/issue/5#comment-538460
The current discussion in today's meeting was that we would possibly include the oslogin rpm and just conditionally enable it on GCE.
username_1: A little background on oslogin:
On mutable distros there's [this script](https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_compute_engine_oslogin/bin/google_oslogin_control) which the agent uses to toggle oslogin on and off. For CL we decided we didn't want to ship that script (seems somewhat brittle if a user modifies those files themselves) and instead enable via a systemd oneshot that runs early on first boot.*
"Normal" fedora probably wants the `google_oslogin_control` script. I don't know if we want that for FCOS though (for similar reasons to why we don't ship it in CL). This means we'd need two seperate rpms unless dnf/rpm has something like gentoos `INSTALL_MASK` functionality.
*We should be able to do it all with Ignition with spec 3.0.0 (no systemd unit necessary). Trying to do it with the 2.x.y spec is what led me to discover that [files, directories, and links are not declarative](https://github.com/coreos/ignition/issues/608).
username_0: :(
so how do we implement that functionality without the `google_oslogin_control` script? are we going to have to continuously manage our version of the implementation? Could we somehow convince google to change the script to be more compatible with what we need?
I guess it's worth asking.. Do we need to ship google_oslogin at all or can we get by without it (which is the topic of this ticket anyway, right?)?
username_1: That's something we need to discuss with the GCE folks. For CL they said it was a requirement to be a first tier OS.
username_2: Bodhi updates submitted:
* Fedora 28: https://bodhi.fedoraproject.org/updates/FEDORA-2018-ba5030068d
* Fedora 29: https://bodhi.fedoraproject.org/updates/FEDORA-2018-a8c791535d
username_3: The biggest blocker I've hit for use with OpenShift so far is forwarded IPs (set from instance metadata: https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_compute_engine/distro_lib/ip_forwarding_utils.py#L78). We have to use NLB for our front ends for masters, and so without the route being read from instance metadata and then set, NLB health checks never go green.
username_4: Enabling OS Login requires modifying several monolithic files (`nsswitch.conf`, `/etc/pam.d/sshd`, `sshd_config`) only on GCP, which is inconvenient.
The `sshd_config` changes are specifically to add an `AuthorizedKeysCommand`. The current plan for #139 is to implement our own `AuthorizedKeysCommand` to read `authorized_keys.d` fragments, and that command could chain to a script which conditionally runs the OS Login `AuthorizedKeysCommand` when enabled on GCP.
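A rough sketch of such a chaining script (the enable-flag path is purely hypothetical; `google_authorized_keys` is the OS Login helper shipped by compute-image-packages):

```sh
#!/bin/bash
# AuthorizedKeysCommand wrapper: emit authorized_keys.d fragments first,
# then defer to the OS Login helper only when it was enabled on GCP.
user="$1"
cat "/home/${user}/.ssh/authorized_keys.d/"* 2>/dev/null
if [ -e /run/oslogin-enabled ]; then
  /usr/bin/google_authorized_keys "${user}"
fi
```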
username_5: What about the google-startup-script (and shutdown-script) "agents"? I have an external integrated VM management service I depend on that uses it. My service downloads and starts its own agent by injecting runtime-specific details through the google-startup-script service. It can't work through cloud-init or similar; it's specific to that google-startup-script API. Is this use-case covered?
username_4: @username_5 We don't plan to support those startup/shutdown scripts. Fedora CoreOS should be configured by passing an Ignition config in userdata. That config can download and install your agent. Or, if you'd prefer to continue using your existing script, the Ignition config can install a `ConditionFirstBoot` systemd unit which runs the script.
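A minimal sketch of that pattern with an Ignition spec 3.0.0 config; the script body, path, and unit name are placeholders:

```json
{
  "ignition": { "version": "3.0.0" },
  "storage": {
    "files": [
      {
        "path": "/usr/local/bin/my-startup.sh",
        "mode": 493,
        "contents": { "source": "data:,%23!/bin/sh%0Aecho%20placeholder%20startup%20work%0A" }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "my-startup.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=Run legacy startup logic once\nConditionFirstBoot=yes\n\n[Service]\nType=oneshot\nRemainAfterExit=yes\nExecStart=/usr/local/bin/my-startup.sh\n\n[Install]\nWantedBy=multi-user.target\n"
      }
    ]
  }
}
```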
username_5: You're assuming an open-source service, and that they care about supporting a one-off special case for a non-standard OS on a major cloud platform. As a user of a third-party service like this, that makes my choice effectively:
* Ask them nicely and pray
* Re-implement my entire stack for the sake of a "new OS"
There's nearly zero incentive for third parties to add the required special-case support when every other OS happily plays along, whether or not agent services are a good idea. It doesn't even have to be GCE-specific; there are plenty of other third parties which require host agents. If the OS makes it difficult to integrate, the OS will simply be placed on the "not supported" list and lose out in the long run. I don't think that's the desire of the community here, whatever the specific philosophy over "agents" is.
username_4: @username_5 Fedora CoreOS is continuing the Container Linux philosophy of providing an opinionated, minimal, and reasonably legacy-free OS for running containers. Part of being opinionated is that not everyone will agree with our opinions, and that's okay. If you want flexibility beyond what Fedora CoreOS is prepared to provide, other distros (including Fedora Cloud Base) could be a great choice.
Fedora CoreOS doesn't support cloud-init, whose design has unfixable race conditions. It strongly discourages installing software in the host, in favor of running all user software in containers. It favors immutable infrastructure and reprovisioning rather than configuration management. So existing tooling will already need to be adapted to work well with Fedora CoreOS.
As to this bug, the principle is that provisioning setup for a machine should always be encapsulated into the Ignition config, rather than passed via a platform-specific agent.
username_5: Thanks for the details and explanation. For me that means I won't use this for testing container runtimes and related tooling... ironic as that is. That said, being SO strongly opinionated seems (IMHO) to make this OS overly difficult to use. History provides plenty of examples where difficult-to-use inventions are simply not used and therefore ultimately fail. I think the principles here are "cool", and I would like to see it be successful. IMHO, that probably necessitates additional flexibility of opinions.
username_4: I hope you'll give Fedora CoreOS a try when we're a little further along; it's easier to use than you might think. :smiley: (We don't have much documentation right now, which is a problem, but we're working on that.)
Fedora CoreOS's opinions are pretty closely aligned with Container Linux, and they've served that community pretty well for several years now. We're trying to make things easier, not harder, honest.
username_5: I know you are. I'm just thinking of all the "agents" out there which must run with privileges on the host, and may not be conducive to being written into container images. This will especially be a problem in cases where the software or service is closed-source/proprietary. The sad fact is, many environments are like this, especially in government and health care, requiring little bits of "internal malware", if you will, because management always knows best.
Possibly not an issue for Fedora itself, but as that rolls down into CentOS and elsewhere it will become a monumental obstacle to adoption. As in my case, the user's choice may literally be: "Ask nicely and pray" :disappointed: We have exactly zero control or influence over what third parties do, especially with cloud APIs that we also have no control over.
username_6: I recently did a deep dive on the GCE agent for some work getting OpenShift to run on GCE. The use case that I needed to solve was L4 (aka Network Load Balancer) support.
I came up with a proof-of-concept [1] which _only_ runs the Network Configuration and the Clock Skew daemon.
`podman run -d --privileged=true --net=host quay.io/behoward/gce-container` works as expected.
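To run that PoC container persistently instead of detached, one could wrap the same command in a unit along these lines (a sketch; the unit and container names are assumptions):

```ini
# /etc/systemd/system/gce-agent-container.service (hypothetical)
[Unit]
Description=Containerized GCE guest agent (network + clock skew only)
After=network-online.target
Wants=network-online.target

[Service]
# Clean up any stale container from a previous run; "-" ignores failure.
ExecStartPre=-/usr/bin/podman rm -f gce-agent
ExecStart=/usr/bin/podman run --name gce-agent --rm --privileged=true \
    --net=host quay.io/behoward/gce-container
Restart=on-failure

[Install]
WantedBy=multi-user.target
```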
username_4: For the record, the new Go-based GCP agent is [here](https://github.com/GoogleCloudPlatform/guest-agent) and the new OSLogin repo is [here](https://github.com/GoogleCloudPlatform/guest-oslogin).
username_2: Yeah, I found out they were reworking this, so I had stopped my rebase work. I guess it's now ready to be put into Fedora...
Not that I like Go at all for this (I really, really, really don't), but at least this means it's shippable for FCOS.
username_0: I broke the OS Login part out into #648. I'm going to close this ticket since we've got a GCP image now and no agent seems to be going fine. We can start new discussions in new tickets.
Status: Issue closed
|