repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M)
---|---|---|
project-koku/koku-ui | 374503923 | Title: Group by dropdown looks inconsistent with other dropdown component
Question:
username_0: The group by dropdown component found on both the Cloud Cost and OpenShift Charge pages is a PF3 component. Until we get a PF4 component to replace it, we would like the current component styled as if it were PF4, more in line with what the Sort by dropdown looks like.
Steps to reproduce the behavior:
1. Go to the Cloud Cost or OpenShift Charge page
2. In the header of the page, underneath the title, is a line that allows the user to group the contents of the page by the various options in the dropdown.

We would like the group by line (label and control) to match whatever we are doing for PF4, which I am told is close to the style of the Sort by dropdown on the page.
Answers:
username_1: Instead of trying to style the select to look like PF4, it would be best to simply replace it with the PF4 component.
@username_0 Does this need to be a dropdown? Both the select and dropdown components are available in PF4 now.
http://patternfly-react.surge.sh/patternfly-4/components/select
http://patternfly-react.surge.sh/patternfly-4/components/dropdown |
jim-b/KMS | 1115718196 | Title: Decryption Delay and Memory Exceptions
Question:
username_0: Hello, thanks for the code. I use it in my project; it works great and solves a big problem for me.
Here are two small problems:
1. Parsing the SSV from the ciphertext message (MIKEY I-MESSAGE) is time-consuming: it takes 600 ms on my platform. The time is spent in the `for (; N != 0; --N)` loop of `sakke_computeTLPairing` in sakke.c. Is this loop implemented as specified in the protocol, and is there any way to optimize it? Thank you.
2. `main()` in KmsMenu.c includes a pair of initialization (ms_initParameterSets/community_initStorage) and release (ms_deleteParameterSets/community_deleteStorage) calls; if I call this pair more than once in a program, it reliably crashes. It looks like a memory-corruption issue. Could you please confirm and fix it? Thank you. |
WazeDev/WME-Place-Harmonizer | 260362278 | Title: Force URL update in certain situations
Question:
username_0: 1) Force URL override for a chain, but only if the mods agree that every place should receive that URL
2) Force URL override for a chain if the existing URL = xxx (e.g. when the chain's URL changes and the old URL goes to a 404 error page).
This would likely require adding a ph_speccase option. I would suggest something like
**ph_forceURL** (forces new URL on all places)
**ph_forceURL<>www.mysite.com/old-page\www.mysite.com/other-old-page<>** (forces new URL only if old URL matches one of the URLs in the list)
Note the backslash as the separator. Not the only possible separator we could use, but commas or pipes shouldn't be used.
WMEPH should ignore https:// or http://, and also ignore a trailing forward slash.
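The normalization rules above (ignore the scheme, ignore a trailing slash, split the list on backslashes) can be sketched as follows. WMEPH itself is JavaScript; this Python sketch only illustrates the matching logic, and the function names are made up:
```python
import re

def normalize_url(url):
    """Drop the http(s):// scheme and any trailing slash, as requested above."""
    url = re.sub(r'^https?://', '', url.strip(), flags=re.IGNORECASE)
    return url.rstrip('/')

def url_matches(existing_url, old_url_list):
    """True if the place's existing URL matches one of the backslash-separated
    old URLs from a hypothetical ph_forceURL<>...<> list."""
    target = normalize_url(existing_url)
    return any(normalize_url(u) == target for u in old_url_list.split('\\'))
```
With this, `url_matches("https://www.mysite.com/old-page/", r"www.mysite.com/old-page\www.mysite.com/other-old-page")` would be true, so the forced URL would be applied.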
If URL doesn't match, still show the prompt that states the URL doesn't match.
If a URL update is forced, show a banner message stating that it was updated. |
rotorgames/Rg.Plugins.Popup | 663082726 | Title: Common Setting in App Resources
Question:
username_0: Hello, firstly thank you for your very useful and important plugin. I would like to set a common style for all of my project's popups with the following code, but it is not working. Is there any way to apply a style to all popup pages? Thank you in advance.
```
<Style x:Key="PopupPageStyle" TargetType="popup:PopupPage">
    <Setter Property="popup:PopupPage.Animation">
        <animations:ScaleAnimation
            DurationIn="250"
            DurationOut="250"
            EasingIn="Linear"
            EasingOut="Linear"
            PositionIn="Center"
            PositionOut="Center"
            ScaleIn="1.2"
            ScaleOut="1.0" />
    </Setter>
</Style>
```
Answers:
username_1: Hey @username_0, would using a staticResource animation in the application Resource be an okay compromise?
eg
```
<Application.Resources>
    <ResourceDictionary>
        <animations:FadeAnimation x:Key="Fader"
                                  DurationIn="150"
                                  DurationOut="150"
                                  EasingIn="Linear"
                                  EasingOut="Linear"
                                  HasBackgroundAnimation="False" />
    </ResourceDictionary>
</Application.Resources>
```
with the popup page using
```
<pages:PopupPage x:Class="blahblah"
                 xmlns="http://xamarin.com/schemas/2014/forms"
                 xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
                 xmlns:animations="clr-namespace:Rg.Plugins.Popup.Animations;assembly=Rg.Plugins.Popup"
                 xmlns:pages="clr-namespace:Rg.Plugins.Popup.Pages;assembly=Rg.Plugins.Popup"
                 Animation="{x:StaticResource Key=Fader}"
                 HasKeyboardOffset="False">
```
username_2: Can you make a PR?
Status: Issue closed
|
monero-project/monero-gui | 716024699 | Title: Bug report: Application icon does not display properly on Ubuntu
Question:
username_0: I have installed the latest version of the Monero GUI v0.17.0.1 on Ubuntu version 20.04.1. In the application tray the Monero icon does not display properly. Here is a screenshot of what it looks like: https://imgur.com/PTroRVp
The application does appear correctly on the dock once Monero GUI is launched and running.
I checked the .desktop file, and under Icon it just says 'monero'. I tried replacing this field with a file path to the Monero logo, and that fixed the problem; however, once you launch the application, any changes made to the .desktop file get overwritten and the application icon stops displaying properly again.
Any suggestions on how to fix this would be appreciated, thanks.
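For context, the relevant piece is the `Icon=` key in the desktop entry. A bare name like `monero` is resolved through the icon theme search path (e.g. `~/.local/share/icons/`), so it only works if an icon by that name is installed there; an absolute path, as tried above, bypasses that lookup but gets overwritten on launch. The fragment below is illustrative only, with placeholder paths:
```
[Desktop Entry]
Name=Monero GUI
# A bare name is resolved via the icon theme search path:
Icon=monero
# The manual fix above used an absolute path instead, e.g.:
# Icon=/home/user/monero-gui/images/appicons/monero.png
```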
Answers:
username_1: Yea, I see there were several attempts, starting from #2292, but none of them got merged. :disappointed:
username_2: @username_1 https://github.com/monero-project/monero-gui/pull/3251
On first startup it should ask you if you want to install desktop file.
username_1: @username_2 Yes, it does, but it shows only a generic icon, as in the OP's screenshot. There's no Monero icon under `.local/share/icons/`, where most other apps put their icons. I have Fedora with GNOME, so maybe it's somehow related to GNOME or its themes, if you believe the icon should be there? However, I don't see where it could take the icon from. |
ktbyers/netmiko | 181324836 | Title: timing issue with find_prompt on Palo Alto - netmiko 1.0.0
Question:
username_0: _[**Summary**: in netmiko v1.0.0, the delay_factor in paloalto_panos_ssh.py needs to be raised from its current value of 3 to 7 to get it to work with Palo Altos as it did with netmiko v0.5.6]_
All was fine connecting to my Palo Alto 5060 with 0.5.6, but with 1.0.0 I get:
```
Traceback (most recent call last):
File "/home/fav/get_arp_tables.py", line 241, in <module>
net_connect = ConnectHandler(**dev)
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/ssh_dispatcher.py", line 94, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/base_connection.py", line 89, in __init__
self.session_preparation()
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/paloalto/paloalto_panos_ssh.py", line 21, in session_preparation
self.set_base_prompt(delay_factor=3)
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/base_connection.py", line 478, in set_base_prompt
raise ValueError("Router prompt not found: {0}".format(prompt))
ValueError: Router prompt not found: Welcome root.
```
The Palo Alto returns the following on login:
```
Last login: Thu Oct 6 11:58:08 2016 from x.y.z
Welcome root.
root@fw(active)>
```
Turning on debugging in find_prompt() in base_connection.py (extra debugging for prompt4 and the "end of prompt" markers added by me; prompt4 is the result after processing the multi-line response), I get:
```
prompt1:
^^^^^ end of prompt1 ^^^^^
prompt2a: 'Welcome root.'
^^^^^ end of prompt2a ^^^^^
prompt2b: Welcome root.
^^^^^ end of prompt2b ^^^^^
prompt3: Welcome root.
^^^^^ end of prompt3 ^^^^^
prompt4: Welcome root.
^^^^^ end of prompt4 ^^^^^
Traceback (most recent call last):
File "test_netmiko.py", line 7, in <module>
net_connect = ConnectHandler(**dev)
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/ssh_dispatcher.py", line 94, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/base_connection.py", line 89, in __init__
self.session_preparation()
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/paloalto/paloalto_panos_ssh.py", line 23, in session_preparation
self.set_base_prompt(delay_factor=3)
File "/home/fav/anaconda3/lib/python3.5/site-packages/netmiko/base_connection.py", line 478, in set_base_prompt
raise ValueError("Router prompt not found: {0}".format(prompt))
ValueError: Router prompt not found: Welcome root.
```
So despite sending a newline in find_prompt(), it doesn't get past seeing "Welcome root" as the response.
[Truncated]
```
prompt1: Welcome root.
root@fw(active)>
root@fw(active)>
^^^^^ end of prompt1 ^^^^^
prompt3: Welcome root.
root@fw(active)>
root@fw(active)>
^^^^^ end of prompt3 ^^^^^
prompt4: root@fw(active)>
^^^^^ end of prompt4 ^^^^^
```
Hope this helps, and many thanks for the module,
Mark
Answers:
username_1: @username_0 @username_2
I assume that you are modifying the delay_factor here?
```
def session_preparation(self):
"""
Prepare the session after the connection has been established.
Disable paging (the '--more--' prompts).
Set the base prompt for interaction ('>').
"""
self.set_base_prompt(delay_factor=3)
```
username_0: Yes, that's correct.
username_0: The other curious thing I've just noticed is that v1.0.0 seems to double up the prompt lines. If, after establishing a connection to the Palo Alto via:
`net_connect = ConnectHandler(**dev)`
I then call
`prompt = net_connect.find_prompt()`
with debugging turned on in find_prompt() in base_connection.py, I get (with the initial connection find_prompt() debug output removed, leaving just the net_connect.find_prompt() debug output):
```
prompt1: root@fw(active)>
root@fw(active)>
^^^^^ end of prompt1 ^^^^^
prompt3: root@fw(active)>
root@fw(active)>
^^^^^ end of prompt3 ^^^^^
prompt4: root@fw(active)>
^^^^^ end of prompt4 ^^^^^
```
so the prompt processed by find_prompt() is initially two lines. With v0.5.6, the prompt is initially just a single line:
```
prompt1: root@fw(active)>
prompt3: root@fw(active)>
prompt4: root@fw(active)>
```
Maybe not a problem, but it is a different behaviour.
username_0: My script logins in to the Palo Alto once every 10 minutes; even with increasing the delay_factor from 3 to 7, it still fails to get the prompt every now and again (1-2 times per day). I've now increased the delay_factor to 8 and will see how that goes.
username_0: Meant to add that with whatever delay Netmiko 0.5.6 was using it was rock solid as far as this script was concerned.
username_1: Let me know how the delay factor of 8 works.
Unfortunately, reverse engineering what it used to be is probably not straightforward. We had issues with this earlier as well.
username_2: Unfortunately it seems it can vary from device to device. I've just tested on a virtual one and the first good value was 13 :\
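The behaviour being tuned in these comments — keep re-reading the channel with a growing delay until the output stabilizes into a prompt — can be sketched independently of netmiko. All names below are hypothetical, not netmiko API:
```python
import time

def wait_for_prompt(read_channel, delay_factor=1, max_loops=10, base_delay=0.1):
    """Poll read_channel() with an increasing delay; return the last
    non-empty line once output stops arriving, else raise ValueError.
    Mirrors the idea behind set_base_prompt(delay_factor=...)."""
    output = ""
    for i in range(1, max_loops + 1):
        time.sleep(base_delay * delay_factor * i)  # delay grows each loop
        new_data = read_channel()
        if new_data:
            output += new_data
        elif output:
            # No new data and we already have some: assume the prompt is complete.
            return output.strip().splitlines()[-1]
    raise ValueError("Router prompt not found: {0}".format(output.strip()))
```
The tradeoff discussed above is exactly the choice of `delay_factor`: too small and a slow device (or "Welcome root." banner) is mistaken for the prompt; too large and every connection pays the full wait.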
username_1: Okay, made this change in dev_1_1 branch:
https://github.com/username_1/netmiko/commit/891fc42f1d7f1ca51d8790a617bdd80dd71b0a56
That results in a 2 second delay which isn't that long.
Can either of you test the dev_1_1 branch and see if this fixes it? @username_0 @username_2
username_2: It worked for me.
username_0: I've made the delay_factor change (to 20) in my copy of v1.0, and it's been working so far for most of the day, running every 10 minutes. I haven't tried using the full dev_1_1 branch yet...
username_1: Okay, I am going to close this issue.
If you see problems in the future, just open a new issue.
Status: Issue closed
|
aio-libs/aiohttp-cors | 1107431293 | Title: Regex domains
Question:
username_0: Hi all!
I'm attempting to integrate this in a project that needs to allow origins that match a regex. From my reading of the README and some of the code, I don't _see_ this functionality, but I wanted to ask to make sure I wasn't overlooking it.
An example of when this functionality would be useful is for some front-end deployments that we have on Netlify, which get unique hashes in the subdomain, such as `<hash>-my-slug.mydomain.com`. Ideally, I'd like to be able to set up rules that match any domain fitting that pattern.
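For reference, the kind of matching being asked for is straightforward to express as a plain regular expression (the hash character set and slug below are assumptions based on the example in this issue, not aiohttp-cors API):
```python
import re

# Anchor the pattern so only <hash>-my-slug.mydomain.com origins match;
# the [0-9a-f]+ hash charset is an assumption about the deploy hashes.
NETLIFY_ORIGIN = re.compile(r'^https://[0-9a-f]+-my-slug\.mydomain\.com$')

def origin_allowed(origin):
    """True if the request's Origin header matches the deploy pattern."""
    return NETLIFY_ORIGIN.match(origin) is not None
```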
If I'm not overlooking the functionality, is this something that you'd either consider supporting or would consider a PR for? |
wjcnr/issues | 849467124 | Title: [BUG] Spawn House
Question:
username_0: **Bug Description**
A player who holds the key to someone else's house, but does not own a house themselves, cannot access /spawn > house
**Steps to Reproduce**
Steps to make the bug visible:
1. Add a player who does not own a house as a key holder for a house
2. As that player (who holds a key but owns no house), run /spawn > house
3. The dialog "Anda tidak mempunyai rumah" ("You do not have a house") appears
**Expected Behavior**
The player should be able to access /spawn > house because they hold the key to someone else's house
**Screenshots**
**Server Version**
3.0.7
**Additional Notes**<issue_closed>
Status: Issue closed |
SwissDataScienceCenter/renku-python | 518386405 | Title: Treat local Git repos as normal files
Question:
username_0: At the moment, when adding files from a local Git repo, Renku treats them as Git repositories and keeps references to them (e.g. to check for updates). This allowed users to see lineage for local projects, but with recent changes this is no longer possible. Moreover, this special treatment of local files causes some problems when adding them to other projects (see https://github.com/SwissDataScienceCenter/renku-python/issues/798).
Renku should treat local Git repos like normal files/directories and warn users that, to have lineage information and updates, they must add files from the repo's remote instead.<issue_closed>
Status: Issue closed |
basst314/ngx-webcam | 331091669 | Title: Demo-Page broken on mobile device
Question:
username_0: A bug introduced with release 0.1.6 prevents the demo page from working on at least one of my mobile devices.
Will look into it and keep everyone here updated.
Answers:
username_0: Issue fixed. An over-restrictive/mismatching mediaTrackConstraint on the demo page resulted in 0 matching video tracks on the device. I removed those constraints from the demo page to resolve the issue.
Status: Issue closed
|
pandas-dev/pandas | 343637529 | Title: Subtraction with UInt64 series resulting in negative values gives TypeError
Question:
username_0: With the extension integer array, the operation below raises an error, while with the numpy uint dtype it does not:
```
In [68]: pd.Series([1, 1, 1]) - pd.Series([1, 2, 3], dtype='UInt64')
...
TypeError: cannot safely cast non-equivalent float64 to uint64
In [69]: pd.Series([1, 1, 1]) - pd.Series([1, 2, 3], dtype='uint64')
Out[69]:
0 0.0
1 -1.0
2 -2.0
dtype: float64
```
Answers:
username_1:
```
                    "the 'integer_array' function instead")
E       TypeError: values should be integer numpy array. Use the 'integer_array' function instead
```
But that is because of what happens in _maybe_mask_result.
https://github.com/pandas-dev/pandas/blob/0480f4c183a95712cb8ceaf5682c5b8dd02e0f21/pandas/core/arrays/integer.py#L532
That code has some logic for detecting when it should be outputting floats. Does anyone know why we do that instead of just checking the dtype of result?
If we don't want to rely on the dtype of result, then we would have to add operations between uint64 and int64 to the list of cases where we get floats back from numpy
```
In [10]: (np.array([1], dtype='uint64') - np.array([1], dtype='int64')).dtype
Out[10]: dtype('float64')
In [11]: (np.array([1], dtype='uint32') - np.array([1], dtype='int64')).dtype
Out[11]: dtype('int64')
```
username_0: I can't directly think of cases where numpy will return float, but where we want to convert again to an integer dtype. But since it is written that way, there might be cases (@jreback do you remember?)
You can maybe do the change and see if there are tests failing then
username_1: If I just change it to return the float array (applying the NaN mask) whenever the result dtype is float, all tests pass.
https://github.com/username_1/pandas/tree/remove_float_logic
https://github.com/username_1/pandas/blob/remove_float_logic/pandas/core/arrays/integer.py#L532
Seems OK - but I don't completely understand why the original logic was there to begin with.
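username_1's proposed simplification amounts to something like the following sketch (illustrative Python, not the actual pandas internals):
```python
import numpy as np

def maybe_mask_result(result, mask):
    """If numpy already promoted the op result to float (e.g. uint64 - int64),
    just apply the NA mask and return the float ndarray; only integer results
    need to be rewrapped in an IntegerArray (that branch is elided here)."""
    if result.dtype.kind == "f":
        result = result.copy()
        result[mask] = np.nan
        return result
    return result  # integer path: would rebuild a masked IntegerArray
```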
username_2: Apparently the bug also exists for addition:
```python
[ins] In [1]: import pandas as pd
[ins] In [2]: left = pd.Series([1, 1, 1])
[ins] In [3]: right = pd.Series([1, 2, 3], dtype="UInt64")
[ins] In [4]: left + right
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-ded06e34d5c6> in <module>
----> 1 left + right
~/pandas/pandas/core/ops/common.py in new_method(self, other)
63 other = item_from_zerodim(other)
64
---> 65 return method(self, other)
66
67 return new_method
~/pandas/pandas/core/ops/__init__.py in wrapper(left, right)
341 lvalues = extract_array(left, extract_numpy=True)
342 rvalues = extract_array(right, extract_numpy=True)
--> 343 result = arithmetic_op(lvalues, rvalues, op)
344
345 return left._construct_result(result, name=res_name)
~/pandas/pandas/core/ops/array_ops.py in arithmetic_op(left, right, op)
184 if should_extension_dispatch(lvalues, rvalues) or isinstance(rvalues, Timedelta):
185 # Timedelta is included because numexpr will fail on it, see GH#31457
--> 186 res_values = op(lvalues, rvalues)
187
188 else:
~/pandas/pandas/core/arrays/integer.py in __array_ufunc__(self, ufunc, method, *inputs, **kwargs)
400
401 # for binary ops, use our custom dunder methods
--> 402 result = ops.maybe_dispatch_ufunc_to_dunder_op(
403 self, ufunc, method, *inputs, **kwargs
404 )
~/pandas/pandas/_libs/ops_dispatch.pyx in pandas._libs.ops_dispatch.maybe_dispatch_ufunc_to_dunder_op()
89 else:
90 name = REVERSED_NAMES.get(op_name, f"__r{op_name}__")
---> 91 result = getattr(self, name, not_implemented)(inputs[0])
92 return result
93 else:
~/pandas/pandas/core/ops/common.py in new_method(self, other)
63 other = item_from_zerodim(other)
64
---> 65 return method(self, other)
66
67 return new_method
~/pandas/pandas/core/arrays/integer.py in integer_arithmetic_method(self, other)
659 )
660
[Truncated]
~/pandas/pandas/core/arrays/integer.py in _maybe_mask_result(self, result, mask, other, op_name)
588 return result
589
--> 590 return type(self)(result, mask, copy=False)
591
592 @classmethod
~/pandas/pandas/core/arrays/integer.py in __init__(self, values, mask, copy)
359 def __init__(self, values: np.ndarray, mask: np.ndarray, copy: bool = False):
360 if not (isinstance(values, np.ndarray) and values.dtype.kind in ["i", "u"]):
--> 361 raise TypeError(
362 "values should be integer numpy array. Use "
363 "the 'pd.array' function instead"
TypeError: values should be integer numpy array. Use the 'pd.array' function instead
[ins] In [5]: pd.__version__
Out[5]: '1.2.0.dev0+520.gdca6c7f43'
``` |
statamic/ideas | 569626552 | Title: Feature Request: Responsive Breakpoint Field
Question:
username_0: This may be possible already through customizing live preview controls, but I frequently use a series of dropdown or array fields to allow users to set section padding based on breakpoints.
It would be great to have a "Breakpoint" dropdown field that could be used as a conditional control in the editor, that would also sync up with Live Preview's breakpoints dropdown.
This way we could easily set up fields that are visible based on breakpoints — when in live preview mode, setting the breakpoint using either field (editor or live preview), would change both live preview and show the appropriate conditional fields.
Answers:
username_1: Transferred this to the ideas repo 👍🏼 |
raguay/favorites | 382162875 | Title: Adding a network folder causes issues with the address
Question:
username_0: I have added a network path as a favorite, e.g. `\\fileserver\public` when I use the shortcut to go to this folder, the address that is shown is `C:\Users\user\AppData\Local\fman\Versions\1.4.4\network:\fileserver\public`. From this point, any operation I try to perform fails (as expected).
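The mangled address suggests the UNC prefix is being joined as if it were a relative path. Python's own `ntpath` module shows how such a path should be interpreted (illustration only, not the plugin's code):
```python
import ntpath

unc = r"\\fileserver\public\docs"

# A UNC path is absolute, and its "drive" is the \\host\share prefix.
# If a tool treats it as relative and joins it onto another directory,
# you get exactly the kind of C:\...\network:\fileserver\public mangling
# reported above.
drive, rest = ntpath.splitdrive(unc)
assert drive == r"\\fileserver\public"
assert ntpath.isabs(unc)
```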
Answers:
username_1: I'm sorry, I did not see this post before now.
I can't reproduce this since all of my windows systems are currently dead. I have a vm of XP, but I'm having a hard time getting it to connect to a network drive for testing. But, I do have issues with other alternate file systems. I'll keep checking, but I have very limited time.
Sorry for the inconvenience.
username_2: @username_0 I have a Windows machine. Maybe I can help.
Can you give me step my step instructions on adding a network drive?
I can follow, replicate and debug if any issues come up.
username_0: Hi @username_2 and @username_1,
username_2: What I have done was just navigate to a network folder (on a Windows network). To do it, I used `CTRL+P` to open the navigation panel and entered the network path (`\\fileserver\public`). Then I just followed the instructions to add this folder as a favorite (using `SHIFT+f`). When trying to move back to the folder later (using `CTRL+0`), I had the issue I mentioned.
Thank you for your help!
@username_1: I will also try to take a look at the plugin code to try to understand what is the issue.
Cheers.
username_1: I found an article on mapping a drive on windows 10: https://www.groovypost.com/howto/map-network-drive-windows-10/
This should work the same as the "C:/"-type drives, just with a higher drive letter. But the note above shows a network drive that isn't mounted. If your intent is to use a non-mounted network drive, the fman API for setting the directory will not work; it needs to be a mounted drive first. I should have noticed this at first.
Try mounting the drive first and then saving it as a favorite. It would need to be remounted as the same drive each time you try to go to it as a favorite. You can set the mount to automatically remount on each boot.
Let me know if this helps. Also, you can post your `~/.favoritedirs` file here to see what fman is saving as the directory.
Status: Issue closed
|
cloud-hypervisor/cloud-hypervisor | 702605429 | Title: CI is flaky due to `test_boot_from_vhost_user_blk_self_spawning`
Question:
username_0: Both https://cloud-hypervisor-jenkins.westus.cloudapp.azure.com/blue/organizations/jenkins/cloud-hypervisor/detail/master/5509/pipeline and https://cloud-hypervisor-jenkins.westus.cloudapp.azure.com/blue/organizations/jenkins/cloud-hypervisor/detail/master/5512/pipeline are examples showing that the test `test_boot_from_vhost_user_blk_self_spawning` fails with `--libc musl`.
Answers:
username_0: Just a note this might be caused by the same issue reported by @likebreath on #1707.
username_0: After some investigation, I think the problem might come from the fact that, under high load, it can take some time for the self-spawned backend to start listening on the socket.
Or the second option would be that the backend is terminated by a `SIGSYS` because we would be missing a syscall in the seccomp list.
Problem is, it's not reproducible :(
username_0: @username_1 suggested that we could have a dedicated `fd=` option added to the `--block-backend` and `net-backend` parameters. This means the VMM process would be passing a file descriptor instead of a vhost-user socket path. This `fd` would refer to the socket without requiring the backend to open the given path and start listening on it.
This would prevent from concurrency issues where the backend might not be listening yet.
Before jumping into any implementation, we must validate the problem is solved if we increase the wait time or if we don't run this test under high load.
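The suggestion is the standard pre-bound-listener trick: the parent binds and listens before spawning, so the child can never observe a not-yet-listening socket. A minimal Python sketch under that assumption (the real change would live in cloud-hypervisor's Rust code, and the `fd=` option name comes from this discussion, not a released CLI):
```python
import os
import socket
import tempfile

def make_vhost_listener():
    """Bind and listen on the vhost-user socket in the VMM process,
    then hand the backend an already-listening fd instead of a path."""
    path = os.path.join(tempfile.mkdtemp(), "vhost.sock")
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(path)
    sock.listen(1)
    os.set_inheritable(sock.fileno(), True)  # let a spawned backend inherit the fd
    return sock

listener = make_vhost_listener()
# The backend side would then wrap the inherited fd directly,
# with no race on the socket path:
backend_side = socket.socket(fileno=os.dup(listener.fileno()))
```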
username_1: Self spawning has been removed.
Status: Issue closed
|
openshift/openshift-docs | 532427636 | Title: [enterprise-4.2] Edit suggested in file installing/installing_bare_metal/installing-bare-metal.adoc
Question:
username_0: <!--
Please submit only documentation-related issues with this form, or follow the
Contribute to OpenShift guidelines (https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/contributing.adoc) to submit a PR.
-->
### Which section(s) is the issue in?
Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines by PXE or iPXE booting
https://docs.openshift.com/container-platform/4.2/installing/installing_bare_metal/installing-bare-metal.html#installation-user-infra-machines-pxe_installing-bare-metal
### What needs fixing?
At a customer site on older Dell M610 blades, we saw an issue where the PXE menu entries for bootstrap, master and worker, when created as the doc suggests (using HTTP URLs for the kernel and initramfs files), would cause the PXE menu to just reload itself and take no action.
We believe that instead, the kernel and initramfs files should be put directly on the TFTP server. We believe the cause of the failure is that pxelinux.0 does not have drivers for some physical NICs, which makes it unable to pull down the kernel and initramfs files over HTTP.
The examples should be changed to have the user place those files directly on the TFTP server and then use paths relative to the TFTP root in the PXE menu configs.
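For illustration, a PXE menu entry that loads the images from the TFTP root rather than over HTTP might look like the fragment below; the file names and kernel arguments are placeholders, not the exact values from the install docs:
```
LABEL worker
  KERNEL rhcos-installer-kernel
  APPEND initrd=rhcos-installer-initramfs.img coreos.inst=yes coreos.inst.install_dev=sda
```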
Answers:
username_1: The Dell M610 blades have Broadcom-based NICs. |
mendix/cf-mendix-buildpack | 679205701 | Title: Offline Section shows jre/jdk 8, when it should show 11 (for Mendix 8+)
Question:
username_0: Offline Section shows jre/jdk 8, when it should show 11 (for Mendix 8+)
Answers:
username_1: Thanks for letting us know and apologies for the delay in responding.
We'll pick this documentation change up in a future change - I'll close this issue when that's done.
username_1: Fixed in `develop`, will be part of upcoming release.
Status: Issue closed
|
gurry/efi | 340056558 | Title: Create a more ergonomic model for protocols and handles
Question:
username_0: Currently we have to call the raw UEFI code to create device handles and install protocols. Create a model that's more ergonomic and safe. For example, we could have a type called `Handle` with methods like `InstallProtocol()` and `UninstallProtocol()`. These methods would take any type that implements a trait called `Protocol`. The trait will have an associated function called `guid()` which the `Handle` can use internally to register the protocol with UEFI.
The `Handle` will take ownership of all protocol instances passed to it and keep (perhaps `dyn`) references to them in a vector. A `Protocol` implementation will be needed per protocol and will have methods specific to that protocol. The `Handle` will support querying for a protocol by GUID. It will also uninstall all protocols it is currently holding in its `Drop`.
To make things more strongly typed, we could find a way to have multiple different `Handle` types such as `DeviceHandle`, `ImageHandle` etc. In that case maybe `Handle` will be a trait rather than a struct. |
AlecTroemel/quickxml_to_serde | 731052183 | Title: Converting boolean values
Question:
username_0: JSON boolean values accept only two values, `true` or `false`, and they are case sensitive. See https://json-schema.org/understanding-json-schema/reference/boolean.html#boolean
XML Example:
```xml
<user disabled="True" validated="false" />
```
is converted into
```json
{ "user": { "disabled": "True", "validated": false } }
```
We even have these test cases for it:
```rust
assert_eq!(false, parse_text("false", false, &JsonType::Infer));
assert_eq!(true, parse_text("true", true, &JsonType::Infer));
assert_eq!("True", parse_text("True", true, &JsonType::Infer));
```
In my particular case the XML values are inconsistent so some JSON properties come through as strings and others as bools. It causes problems with deserialization.
## Proposal
Add another type enforcement rule for boolean types per node to list possible TRUE values and treat everything else as false.
The resulting JSON type will always be bool.
This will be a non-breaking change.
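The proposed rule could behave like this sketch (Python pseudocode for the crate's logic; the function and parameter names are invented):
```python
def parse_bool(text, true_values=("true", "True", "TRUE", "1")):
    """Per-node boolean enforcement as proposed: any value in the configured
    TRUE list becomes true, everything else becomes false, so the resulting
    JSON type is always bool."""
    return text in true_values
```
This resolves the inconsistency in the report above: `"True"` and `"false"` both come out as bools instead of a mix of strings and bools.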
Answers:
username_0: It's pretty much done and tested, but not in production: https://github.com/username_0/quickxml_to_serde/tree/bool_conversion
Not a big change. Nothing breaking.
Will submit a PR after closing PR #10
Status: Issue closed
username_0: Apart from the built-in tests it's been running in production for a couple of weeks without any problems. |
facebook/hermes | 627172548 | Title: Building assemble release through command showing warnings
Question:
username_0: I enabled the Hermes engine in my app. Everything is working fine, except that I am getting warnings while generating a release build of the application with the command `./gradlew assembleRelease`.
[Click to see logs1](https://hastebin.com/ehefikuyag.coffeescript)
[Click to see logs2](https://hastebin.com/apokafacal.coffeescript)
Please let me know what could be the possible solution to fix the warning.
Answers:
username_1: The pastes look like minified JS. I don't see any warnings in them.
Can you be more specific about exactly which warnings you are seeing and would like to fix?
username_0: These are the warnings; you can see them in the log2 file.
[warnings](https://hastebin.com/uborumimof.coffeescript)
Right now after enabling Hermes, I am getting these logs while running the app in debug mode using the command `react-native run-android`. This is not happening every time, getting these logs only in the first run.
[logs](https://hastebin.com/yoyoyusepu.coffeescript)
This doesn't seem like normal logs. So I thought I should ask, although the app is running fine in both debug and assemble release.
Let me know if there is more info required. Thanks :)
username_1: Ah, I didn't scroll down past all the whitespace to see the errors. Sorry about that.
Those warnings come from React Native JavaScript code: https://github.com/facebook/react-native/blob/master/Libraries/Network/fetch.js
This happens because that code references some symbols as globals, and Hermes does not include them in its list of standard JavaScript global properties. An earlier polyfill does define these, but in strict mode, use of unknown globals causes a warning to be emitted which you see here.
If you want to just disable warning generation, you can follow the instructions here https://github.com/facebook/hermes/issues/216#issuecomment-615429942
If you like, you could also report the underlying bug to the React Native team so they can fix it.
Status: Issue closed
|
zaproxy/zaproxy | 105200468 | Title: Two Cookie line in the header when adding a cookie with an httpsender script and doing an active scan
Question:
username_0: I use an httpsender script that adds cookies to every request:
var has_js_cookie = new org.parosproxy.paros.network.HtmlParameter(COOKIE_PARAM_TYPE, HAS_JS_NAME, "1");
cookieParams.add(has_js_cookie);
var toolbar_cookie = new org.parosproxy.paros.network.HtmlParameter(COOKIE_PARAM_TYPE, TOOLBAR_NAME, "0");
cookieParams.add(toolbar_cookie);
If I use it during an active scan with a context that creates a session based on cookies, ZAP first adds the session cookie in the header:
Cookie: SESS8bcff576a76286b5afa8392e2841f07f=9344OUeBNCCDF8YKkTni2SKSpFuzx5mNulec1n_sXnA
Then it adds the two other cookies on another line:
Cookie: toolbar.collapsed=0; has_js=1
Instead of adding them to the Cookie line already created, this results in a header with two Cookie lines:
Cookie: toolbar.collapsed=0; has_js=1
Connection: keep-alive
Content-Length: 0
Host: 172.17.0.3
Cookie: SESS8bcff576a76286b5afa8392e2841f07f=9344OUeBNCCDF8YKkTni2SKSpFuzx5mNulec1n_sXnA
Which Cookie line will be taken into account by the server is unpredictable.
Answers:
username_1: How are you obtaining (i.e. cookieParams) and setting the cookies?
For which "initiator" are you changing the message? Active Scanner? All?
Status: Issue closed
username_0: I obtain the cookie params inside the function sendingRequest(msg, initiator, helper):
function sendingRequest(msg, initiator, helper) {
...
var cookieParams = msg.getCookieParams();
....
}
I use it at the moment for all the initiators, but the goal is the active scanner.
username_0: I use an httpsender script that add cookie to every request.
var has_js_cookie = new org.parosproxy.paros.network.HtmlParameter(COOKIE_PARAM_TYPE, HAS_JS_NAME, "1");
cookieParams.add(has_js_cookie);
var toolbar_cookie = new org.parosproxy.paros.network.HtmlParameter(COOKIE_PARAM_TYPE, TOOLBAR_NAME, "0");
cookieParams.add(toolbar_cookie);
If I use it during an active scan with a context that creates a session based on cookies, first ZAP adds the session cookie to the header
Cookie: SESS8bcff576a76286b5afa8392e2841f07f=9344OUeBNCCDF8YKkTni2SKSpFuzx5mNulec1n_sXnA
Then it adds the two other cookies in another line
Cookie: toolbar.collapsed=0; has_js=1
Instead of adding them to the cookie line already created, this results in a header with two lines of cookies.
Cookie: toolbar.collapsed=0; has_js=1
Connection: keep-alive
Content-Length: 0
Host: 172.17.0.3
Cookie: SESS8bcff576a76286b5afa8392e2841f07f=9344OUeBNCCDF8YKkTni2SKSpFuzx5mNulec1n_sXnA
The cookie line that will be taken into account by the server is unpredictable.
I use ZAP 2.4.1
username_0: Sorry, I didn't mean to close the issue.
username_1: OK, the following is a workaround:
```JavaScript
if (msg.getRequestingUser() && msg.getRequestingUser().getCorrespondingHttpState()) {
var cookie = new org.apache.commons.httpclient.Cookie(/* domain */ msg.getRequestHeader().getHostName(), HAS_JS_NAME, "0");
cookie.setPath("/");
msg.getRequestingUser().getCorrespondingHttpState().addCookie(cookie);
cookie = new org.apache.commons.httpclient.Cookie(msg.getRequestHeader().getHostName(), TOOLBAR_NAME, "1");
cookie.setPath("/");
msg.getRequestingUser().getCorrespondingHttpState().addCookie(cookie);
}
```
You might need to tweak the domain and path of the cookies.
The option "Single Cookie Request Header" [1] must be selected too.
[1] https://github.com/zaproxy/zap-core-help/wiki/HelpUiDialogsOptionsConnection#single-cookie-request-header
username_0: The script does not get past `if (msg.getRequestingUser()...)` because msg.getRequestingUser() is null
But now I have a much simpler use case
If I resend this request using the request button with the Forced User Mode enabled (note that the cookie line already exists: Cookie: Drupal.toolbar.collapsed=0; has_js=1)
GET http://172.17.0.3/ HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Iceweasel/31.6.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Cookie: Drupal.toolbar.collapsed=0; has_js=1
Connection: keep-alive
Cache-Control: max-age=0
Host: 172.17.0.3
Then ZAP adds the session cookie at the end of the request header
GET http://172.17.0.3/ HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Iceweasel/31.6.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Cookie: Drupal.toolbar.collapsed=0; has_js=1
Connection: keep-alive
Cache-Control: max-age=0
Content-Length: 0
Host: 172.17.0.3
Cookie: SESS8bcff576a76286b5afa8392e2841f07f=EN-_01dPRO2fGDIQ0YDo972Q70JzSANgm9i_4b7REL4
And if I click send again, it duplicates the "Cookie: Drupal.toolbar.collapsed=0; has_js=1" line
GET http://172.17.0.3/ HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Iceweasel/31.6.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Cookie: Drupal.toolbar.collapsed=0; has_js=1
Connection: keep-alive
Cache-Control: max-age=0
Content-Length: 0
Cookie: Drupal.toolbar.collapsed=0; has_js=1
Host: 172.17.0.3
Cookie: SESS8bcff576a76286b5afa8392e2841f07f=EN-_01dPRO2fGDIQ0YDo972Q70JzSANgm9i_4b7REL4
If I do it again, ZAP creates the same third line again.
username_1: That means that there's no user. When are you enabling the "HTTP sender" script?
Is "Forced User" mode enabled? Or, are you specifying a user when active scanning?
Sorry, should have been more explicit, the issue is acknowledged. I was able to reproduce the issue with the indications you gave earlier.
username_0: I have to be more explicit myself also.
I describe two cases now in the issue:
1. Requests sent by the active scanner
2. Request sent by the "Resend ..." dialog
For both cases :
- The authentication is a form based authentication
- Regex pattern identified in logged in or log out response message is empty
- I have one user (michael) which is enabled
- Forced user is michael
- Force user mode is enabled
- Single cookie request header is checked
In the active scanner case, the httpsender script is enabled (and it was enabled before starting the scanner)
In the "Resend ..." dialog the httpsender script is disabled
Hope my explanations were not too messy.
Thanks.
username_0: Hi, I've made a pull request to solve this problem: #2081
Status: Issue closed
|
rapidsai/cuml | 910069907 | Title: [BUG] CUDA lineinfo is not generated by current cmake build
Question:
username_0: **Describe the bug**
`cpp/CMakeLists.txt` has an option to enable generation of CUDA kernel lineinfo on this line:
https://github.com/rapidsai/cuml/blob/42297b59e5cd9fde97ca31026cd15b330f887a92/cpp/CMakeLists.txt#L66
However, enabling it does not generate required lineinfo, which is useful for debugging and profiling.
**Steps/Code to reproduce bug**
Enable the `LINE_INFO` option in the CMakeLists.txt file and build in verbose mode. Observe that the CUDA compile lines have no `-lineinfo` option.
**Expected behavior**
Line information should be generated when enabled.
**Environment details (please complete the following information):**
- Environment location: I tested on Bare-metal, but should be reproducible on Docker, Cloud etc.
- Linux Distro/Architecture: Not relevant
- GPU Model/Driver: Not relevant
- CUDA: Not relevant
- Method of cuDF & cuML install: from source
- `cmake` version `3.20.2` & `gcc/g++` version 9.3.2
**Additional context**
This issue is probably because the module https://github.com/rapidsai/cuml/blob/42297b59e5cd9fde97ca31026cd15b330f887a92/cpp/cmake/modules/ConfigureCUDA.cmake#L37 looks for the variable `CUDA_ENABLE_LINEINFO`, while `cpp/CMakeLists.txt` sets `LINE_INFO`. On my local setup, changing this produced the line information.
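As a sketch of one possible fix (not the actual patch), the user-facing option could simply be forwarded under the name the module checks for:

```cmake
# Sketch only: forward the user-facing LINE_INFO option under the name that
# cmake/modules/ConfigureCUDA.cmake actually checks, per the mismatch above.
set(CUDA_ENABLE_LINEINFO ${LINE_INFO})
```

An analogous bridge (or a rename of the option itself) would presumably be needed for any other debug option with the same kind of name mismatch.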
Answers:
username_1: I see the same problem with `KERNEL_INFO` [here](https://github.com/rapidsai/cuml/blob/42297b59e5cd9fde97ca31026cd15b330f887a92/cpp/CMakeLists.txt#L65)
The only other place that the variable `KERNEL_INFO` seems to be used in the file is here:
https://github.com/rapidsai/cuml/blob/42297b59e5cd9fde97ca31026cd15b330f887a92/cpp/CMakeLists.txt#L84 |
Metastruct/garrysmod-chatsounds | 141978667 | Title: Freeman's Mind Chatsounds (Help me choose the best ones)
Question:
username_0: Now, for obvious reasons with this being an effing huge series, I'm not going to add everything from it, just specific ones famous from it, and the ones people choose. This is where the people who actually browse here get some choice. List any quotes with a timestamp + the episode of the Freeman's Mind series the quote is from, and I'll add it.
Examples of three that will be added for sure:
the "Bassheads" scene https://youtu.be/J9C78b_FhKA?t=3m36s
"Fuck, my leg!" https://youtu.be/Ex-Da3NC83o?t=3m42s
The "DIPLOMACY SUCKS" scene https://youtu.be/Z5YZTHfr1P4?t=1m48s<issue_closed>
Status: Issue closed |
python-adaptive/adaptive | 392690130 | Title: make BalancingLearner work with the live_plot
Question:
username_0: ## ([original issue on GitLab](https://gitlab.kwant-project.org/qt/adaptive/issues/113))
_opened by <NAME> ([@username_0](https://gitlab.kwant-project.org/username_0)) at 2018-10-12T17:48:13.845Z_
The following would need to work:
```
dmap = hv.DynamicMap(lambda X, Y: hv.Scatter([(X,Y)]), kdims=['X', 'Y']).redim.values(X=[0, 1, 2], Y=[2, 3, 4])
hv.HoloMap(dmap[:, :])
```
Depends on https://github.com/ioam/holoviews/issues/3085.<issue_closed>
Status: Issue closed |
ajenti/ajenti | 162662983 | Title: Signature by key XXX uses weak digest algorithm (SHA1)
Question:
username_0: Hi
`http://repo.ajenti.org/debian/dists/main/InRelease: Signature by key XXX+ uses weak digest algorithm (SHA1)`
This is a security issue, and should be corrected immediately.
A new key is probably needed.
Answers:
username_1: I second this, having the same issue too
username_2: third; has there been any updates to support SHA2 on the ajenti side?
username_3: Count me in. This message bothers me every time. This is a security issue.
username_4: will this be fixed?
username_5: Should work now.
Status: Issue closed
|
cloudfoundry/diego-release | 228231191 | Title: How to get postgres db host endpoint of existing postgres runnning in cf to use in diego ?
Question:
username_0: I want to use
Answers:
username_1: Hi, @username_0,
The Diego BBS's connection to its SQL database is configured with the BOSH properties under `diego.bbs.sql` that are listed in the BBS job spec: https://github.com/cloudfoundry/diego-release/blob/v1.15.3/jobs/bbs/spec#L100-L129. If you are using the manifest-generation scripts in the diego-release repository, those can be configured via a stub file passed to the `-s` argument. As an example, the manifest-generation for BOSH-Lite selects either the MySQL or the Postgres stub file in https://github.com/cloudfoundry/diego-release/blob/v1.15.3/scripts/generate-bosh-lite-manifests#L14-L37, with the Postgres stub file at https://github.com/cloudfoundry/diego-release/blob/v1.15.3/manifest-generation/bosh-lite-stubs/postgres/diego-sql.yml.
If you need to know the values to configure in that stub, you should be able to get those from the existing CF manifest if CC and UAA are using that Postgres instance for their own databases. It may end up being simply a single IP address, in which case you could also get it from the `bosh instances` output.
Best,
Eric, CF Diego PM
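For illustration, a stub along the lines described above might look like the following. Note that every property name and value here is hypothetical — check them against the linked BBS job spec and the bosh-lite Postgres stub before using.

```yaml
# Hypothetical -s stub for a single-instance Postgres; verify the exact
# property names against jobs/bbs/spec in diego-release before using.
property_overrides:
  bbs:
    sql:
      db_host: 10.244.0.30      # IP of the existing postgres_z1 instance
      db_port: 5432
      db_schema: diego
      db_username: diego
      db_password: REPLACE_ME
      db_driver: postgres
```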
username_0: Hi @username_1
Thanks for providing the links. Currently I am running CF in only 1 zone and have only 1 postgres sql in our vsphere environment. I can provide the postgres_z1 IP address, but what about when I need to deploy CF on 2 zones? In that case, which address should I provide?
Is there any link which I can use to deploy diego-release on a vsphere environment? I have cf running on my vsphere env.
username_1: Hi, @username_0,
The postgres job in cf-release does not support any sort of clustered deployment, so even if you expand your CF deployment to 2 or more availability zones you'll still only have one instance of that postgres job. You can then continue to supply the IP of that instance as the host for the Diego BBS database connection.
Alternately, the [cf-mysql release](https://github.com/cloudfoundry/cf-mysql-release) supports a highly available clustered database deployment that is compatible with the Diego BBS and the other SQL-database-based components in CF. Its proxy instances are capable of being registered as Consul services and hence would be addressable via Consul under a domain name such as `mysql.service.cf.internal`.
If you are using the manifest-generation script and templates from diego-release, all of the infrastructure-specific configuration will be specified in the "IaaS-settings" stub file that is the argument to the `-i` flag on the generate-deployment-manifest script. This configuration includes networks and resource pools for the different job types in the generated manifest. An example for BOSH-Lite is located at https://github.com/cloudfoundry/diego-release/blob/v1.15.3/manifest-generation/bosh-lite-stubs/iaas-settings.yml.
Finally, if you are already using the new v2 BOSH CLI, you may be interested in [cf-deployment](https://github.com/cloudfoundry/cf-deployment) as an alternative to the separate manifest-generation script/template systems in cf-release and diego-release that uses Diego instead of the DEAs and takes advantage of new BOSH features such as cloud-config, links, and variable generation and interpolation. It's not quite ready for all production deployments, but reaching that milestone is the next major goal for the Release Integration team that is responsible for developing it.
Best,
Eric
username_2: Hi @username_1 ,
Thanks for the explanation. I got it.
Regarding cf-deployment, can I deploy windows-cells directly to my cf-release instead of going for diego-release first? Currently I am deploying diego-release over cf-release and then will go for windows-garden.
Here I am confused about these values; I am not sure what to do, as I am getting errors for PEM certs when updating the database_z1 vm.
```yaml
bbs:
  active_key_label: key1
  encryption_keys:
  - label: key1
    passphrase: "<PASSWORD>"
```
Error: Error: 'database_z1/0 (6595b96a-845e-4a4f-b33c-1335e9757054)' is not running after update. Reviewed logs for failed jobs: bbs
goroutine 1 [running]:
panic(0xc14620, 0xc42025e600)
/var/vcap/data/packages/golang/b2f1508f98daf236581e0146f0af18f55b24067e/src/runtime/panic.go:500 +0x1a1
code.cloudfoundry.org/lager.(*logger).Fatal(0xc420156180, 0xd42aa4, 0x18, 0x10e85e0, 0xc42025e600, 0x0, 0x0, 0x0)
/var/vcap/packages/bbs/src/code.cloudfoundry.org/lager/logger.go:152 +0x41c
main.main()
/var/vcap/packages/bbs/src/code.cloudfoundry.org/bbs/cmd/bbs/main.go:235 +0x313f
panic: failed to load keypair: tls: failed to find any PEM data in key input
username_0: In addition to @username_2, we got the below error while running the diego deploy.
In the task error, it is:
E, [2017-05-16 16:59:39 #27717] [] ERROR -- DirectorJobRunner: Worker thread raised exception: 'cell_z1/0 (d81b6ec3-0655-4269-9ca3-ef5f621e3e4a)' is not running after update. Review logs for failed jobs: rep - /var/vcap/packages/director/gem_home/ruby/2.3.0/gems/bosh-director-261.4.0/lib/bosh/director/instance_updater/state_applier.rb:48:in `post_start'
In rep errors, we encountered this
{"timestamp":"1494953958.998428106","source":"rep","message":"rep.executor-fetching-containers-to-destroy","log_level":1,"data":{}}
{"timestamp":"1494953959.001061440","source":"rep","message":"rep.executor-fetched-containers-to-destroy","log_level":1,"data":{"num-containers":0}}
{"timestamp":"1494953959.042432547","source":"rep","message":"rep.initial-capacity","log_level":1,"data":{"capacity":{"memory_mb":3951,"disk_mb":6595,"containers":249}}}
{"timestamp":"1494953959.042812347","source":"rep","message":"rep.instance-identity-enabled","log_level":1,"data":{}}
{"timestamp":"1494953959.045436382","source":"rep","message":"rep.failed-to-initialize-executor","log_level":2,"data":{"error":"asn1: structure error: tags don't match (2 vs {class:0 tag:6 length:9 isCompound:false}) {optional:false explicit:false application:false defaultValue:\u003cnil\u003e tag:\u003cnil\u003e stringType:0 timeType:0 set:false omitEmpty:false} @2"}}
{"timestamp":"1494953999.149849415","source":"rep","message":"rep.creating-grpc-client","log_level":1,"data":{"address":"localhost:3458"}}
{"timestamp":"1494953999.152936935","source":"rep","message":"rep.wait-for-garden.ping-garden","log_level":1,"data":{"initialTime:":"2017-05-16T16:59:59.151594178Z","session":"2","wait-time-ns:":1332316}}
{"timestamp":"1494953999.154919863","source":"rep","message":"rep.wait-for-garden.ping-garden-success","log_level":1,"data":{"initialTime:":"2017-05-16T16:59:59.151594178Z","session":"2","wait-time-ns:":3322762}}
Seems interesting error.
username_0: Resolved. After correcting the certificate values, it got resolved.
Status: Issue closed
|
sandybradley/HarmonicTrader | 591067390 | Title: Could you please add a telegram bot
Question:
username_0: This would be amazing with a telegram bot to alert you when there is a new one.
Is there a setting to edit so you can see only the last pattern? It shows me the last 100 patterns from the last 3 months.
Answers:
username_1: Hi. Not sure how I would make a bot alert.
The actual trading script (deribit_harmonics.py) checks at specified intervals only if there is a current pattern.
```python
def checkHarmonic():
    global i, current_idx, current_pat, start, end, sign, price, prices
    i = len(prices) - 1
    current_idx, current_pat, start, end = peak_detect(prices.values[:i], order=7)
    XA = current_pat[1] - current_pat[0]
    AB = current_pat[2] - current_pat[1]
    BC = current_pat[3] - current_pat[2]
    CD = current_pat[4] - current_pat[3]
    moves = [XA, AB, BC, CD]
    # gart = is_gartley(moves, err_allowed)
    # butt = is_butterfly(moves, err_allowed)
    # bat = is_bat(moves, err_allowed)
    # crab = is_crab(moves, err_allowed)
    shark = is_shark(moves, err_allowed)
    trio = is_trio(moves, err_allowed)
    harmonics = np.array([shark, trio])
    labels = ['Shark', 'Trio']
    # harmonics = np.array([shark])
    # labels = ['Shark']
    start = np.array(current_idx).min()
    end = np.array(current_idx).max()
    price = prices.values[end]
    if delta == 0.0:
        if np.any(harmonics == 1) or np.any(harmonics == -1):
            for j in range(0, len(harmonics)):
                if harmonics[j] == 1 or harmonics[j] == -1:
                    sense = 'Bearish ' if harmonics[j] == -1 else 'Bullish '
                    label = sense + labels[j]
                    print(label)
                    sign = harmonics[j]
                    date = data.iloc[end].name
                    trade_dates = np.append(trade_dates, date)
                    if harmonics[j] == -1:
                        sell_limit(price)
                    else:
                        buy_limit(price)
    else:
        if delta < 0.0:
            sign = -1
        else:
            sign = 1
        walk(price, sign)
```
maillouxc/git-rekt | 270857358 | Title: Implement viewing of current staff accounts
Question:
username_0: We need to populate the tableview on the staff accounts management screen with a list of all the current employee accounts.
This task should be very easy once the employee class is fully implemented and mapped to the database; it cannot be done until then.
Get with me before you begin work, in case anything changes, and to get additional help.
Status: Issue closed
Answers:
username_0: Done |
zulip/zulip | 279184189 | Title: Add reCAPTCHA to realm creation form
Question:
username_0: On a server on the public Internet, particularly one with open realm creation, people may try to send spam through Zulip. We have to prevent them from doing so.
One key step we can take is to make it hard for the spammers to create a lot of realms. Every anti-spam measure we take that notices patterns within a realm, or limits the amount of potential spam that can be sent through a given realm, gets more effective the harder it is for a would-be spammer to make new realms to evade it.
To do this, we want to use [reCAPTCHA](https://developers.google.com/recaptcha/). I think we want to use the ["Invisible reCAPTCHA"](https://developers.google.com/recaptcha/docs/invisible), out of the several options they provide.
I think the pieces involved to do this will look something like
* Add a setting like `RECAPTCHA_SITEKEY` and a secret like `recaptcha_secret`. I'd put them near `GOOGLE_OAUTH2_CLIENT_ID` and `GOOGLE_OAUTH2_CLIENT_SECRET` in `settings.py`.
* Sign up for an API key for your own use in development, with domains like `zulipdev.com` and `localhost`. Write down what you did, to put in the documentation.
* Follow the instructions you get after signup, to add the reCAPTCHA to the `/create_realm/` page in the simplest possible way. Get it working.
* Look at the [more detailed docs](https://developers.google.com/recaptcha/docs/invisible) for useful ways to improve it further.
* Make sure the reCAPTCHA settings are optional -- if the server admin doesn't set them, there just won't be a CAPTCHA.
We've had at least one spammer decide to try using zulipchat.com for their spam, so this is something we really want to put in place soon.
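On the server side, the "settings are optional" behaviour could be sketched roughly like this. `verify_recaptcha` is a hypothetical helper (the real integration would live in the realm-creation view); the siteverify endpoint is reCAPTCHA's documented server-side verification URL.

```python
import json
import urllib.parse
import urllib.request

RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_recaptcha(response_token, secret):
    """Return True if the CAPTCHA passes, or if no secret is configured."""
    if not secret:
        # The server admin didn't set recaptcha_secret: no CAPTCHA required.
        return True
    data = urllib.parse.urlencode(
        {"secret": secret, "response": response_token}
    ).encode()
    with urllib.request.urlopen(RECAPTCHA_VERIFY_URL, data=data) as resp:
        return json.load(resp).get("success", False)
```

The important property is the early return: a deployment that never configures `RECAPTCHA_SITEKEY`/`recaptcha_secret` sees no change in behaviour.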
Answers:
username_1: @zulipbot claim
username_1: Waiting for a review.
username_2: @username_0 can i take this issue forward?
username_2: @zulipbot claim |
petergardfjall/garminexport | 749413875 | Title: Age and Weight fields request
Question:
username_0: Hi @username_1! Amazing work, the backup works very well. Is there any chance to get Age and Weight too? Thank you!
Answers:
username_1: I have no such plans at this time, sorry. I'd like to keep the library focused on activities. I suppose you could use the client from this library to write such a tool if you feel up for it.
Status: Issue closed
username_0: Ok thanks and keep up the good work! |
spree/spree | 153971937 | Title: Can we add Tagging module in spree for products
Question:
username_0: Can we add a tagging module for products?
If yes, I'll be ready to send a PR for same.
Answers:
username_1: Hey @username_0 - how would this differ from `Taxons`? :)
username_0: @username_1 Taxons are different from tags. A tag is a non-hierarchical keyword or term assigned to a piece of information (such as an Internet bookmark, digital image, or computer file).
You can call it a structure of non-hierarchical taxonomies.
Taxons are nothing but categories of products, correct me if I'm wrong. But tags can be used to group data, not only products; it could be anything in the system.
username_1: @username_0 please submit a PR then :)
username_0: yes :+1:
username_2: This could be nice, since other platforms have this feature too
Status: Issue closed
|
tinyworlds/tinyworldsBackup | 196504286 | Title: Improvement meme 2012 - Now
Question:
username_0: Improvement meme 2012 - Now<br>
http://ift.tt/2hSlBc1<br><div class="paragraph">
<span style="color: rgb(41, 47, 51);">Here is my deviantART</span><span style="color: rgb(41, 47, 51);"> photography improvement meme From 2012 to now. Enjoy! :)</span>
</div>
<div>
<div class="wsite-image wsite-image-border-none" style="padding-top: 10px; padding-bottom: 10px; margin-left: 0; margin-right: 0; text-align: center;">
<a><img src="http://ift.tt/2h3cxTa" alt="Picture" style="width: 727;"></a>
<div style="display: block; font-size: 90%;"></div>
</div>
</div>
tailwindlabs/tailwindcss | 864546753 | Title: [Bug]: In CRA projects, Breakpoints not working when [JIT] mode is on
Question:
username_0: ### What version of Tailwind CSS are you using?
v2.1.0
### What build tool (or framework if it abstracts the build tool) are you using?
Create React App
### What version of Node.js are you using?
v12.14.1
### What browser are you using?
Chrome
### What operating system are you using?
macOS
### Reproduction repository
https://github.com/username_0/tailwind-jit-bug
### Describe your issue
In the CRA bootstrapped project, when `JIT` mode is turned on, the breakpoints of Tailwind CSS are not working. However, when `JIT` mode is off, the breakpoints work just fine.
See the bug details at: https://github.com/username_0/tailwind-jit-bug
Answers:
username_1: Same problem in Nuxt 2.15.3.
Breakpoints not working even without jit turned on.
username_2: I have the same issue. When JIT is on, it creates the class but it doesn't put the class inside a @media breakpoint. When I turn off JIT it's working fine.
username_0: **Updates:**
Just tested on the newly released **2.1.2 version** of tailwindcss, media query still not working.
You can repro the issue with: https://github.com/username_0/tailwind-jit-bug/tree/2.1.2
username_3: +1 facing the same issue.
username_4: I'm using Next.js and facing the same issue.
Especially when deploying via Vercel, the media queries are not working.
username_5: Hey @username_0. You didn't quite update `tailwindcss` correctly. You can see this if you run `yarn list --pattern tailwindcss`:
```
yarn list v1.22.4
├─ @tailwindcss/[email protected]
└─ [email protected]
```
I would recommend running the following commands:
```
yarn remove tailwindcss @tailwindcss/postcss7-compat
yarn add --dev tailwindcss@npm:@tailwindcss/postcss7-compat
```
The installation documentation previously said to install both `tailwindcss@npm:@tailwindcss/postcss7-compat` _and_ `@tailwindcss/postcss7-compat`, but you should only need the former.
For anyone else having a similar issue: please check that the version you are using is actually `2.1.2`. Again, I would recommend reinstalling the `@tailwindcss/postcss7-compat` package, aliased to `tailwindcss` (see above)
username_0: Hi @username_5, thank you for pointing out my mistake! I just tried on the "real" **tailwindcss v2.1.2**, and media query worked great!
I have updated the README of my repo: https://github.com/username_0/tailwind-jit-bug.
Close the issue.
Status: Issue closed
|
rubytune/perf_check | 477832780 | Title: PerfCheck should spawn commands in clean environment
Question:
username_0: Using `Process.spawn` will create a process in the same process group as the parent (i.e. the Sidekiq process) and inherit most of its environment. A contaminated environment causes Bundler, Webpacker, and other tools to bleed over their settings into the new process.
A solution is to use `Process.spawn` options to decouple the new process as much as possible when creating it, and to re-use this for `rails server`, `bundle install`, and `rails db:migrate`.<issue_closed>
Status: Issue closed |
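A sketch of what that decoupling could look like. The helper name `spawn_clean` is made up, but `unsetenv_others:` and `pgroup:` are real `Process.spawn` options.

```ruby
# Hypothetical helper: run a command in its own process group with an
# explicitly whitelisted environment, instead of inheriting the parent's
# (e.g. Sidekiq's) contaminated environment.
def spawn_clean(*cmd, env: {})
  pid = Process.spawn(
    env,                    # only what the caller explicitly passes through
    *cmd,
    unsetenv_others: true,  # drop the parent's environment entirely
    pgroup: true            # detach into a new process group
  )
  Process.wait(pid)
  $?.exitstatus
end
```

With this, each of `rails server`, `bundle install`, and `rails db:migrate` would be invoked through the same helper, passing through only the variables each command actually needs.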
netliferesearch/craft-starter | 1051055164 | Title: RFC: Strømlinjeforme deploy oppsett og mappestruktur
Question:
username_0: _RFC: Request for comments._
Craft CMS has switched from `public/` to `web/` as the web root in its [recommended directory structure](https://craftcms.com/docs/3.x/directory-structure.html#web).
@username_1 has suggested that the starter should reflect this change. I think that sounds good, but it also means we need to change a few other things related to deploy.
[Servebolt](https://servebolt.com/) [currently uses public/ as the web root](https://servebolt.com/help/article/what-is-the-web-root-folder-and-site-root-folder/), so if we start using a `web/` folder for local development, we must make sure the deploy step automatically papers over this difference.
## Proposed changes
1. [Write a buddy.yml file](https://buddy.works/docs/yaml/yaml-gui) that defines the build pipeline. This is in line with the [infrastructure as code](https://en.wikipedia.org/wiki/Infrastructure_as_code) approach.
1. Rename public/ to web/ and make sure to update both the documentation and the code related to local development and live reloading.
2. Update buddy.yml so that the build step takes into account that a symlink is created from public/ to web/ in the server environment. Feel free to suggest a different solution if something else fits better.
2. Write a guide, `docs/deployment-servebolt.md`, that enables users to set up the deploy flow themselves, and link to it in `readme.md`.
Answers:
username_1: Thanks, @username_0, for picking this up!
I pretty much agree with everything across the board :)
Re point 1) I have to admit that I gave up on my previous attempt at configuring Buddy works via the yaml file, but it probably just takes practice, and with good ENV variables in the Buddy environment the YAML file should in principle be usable nearly unchanged between projects.
Re point 3): I would suggest that we also, while we're at it, consider setting things up for atomic deployments. Buddy works has a [template](https://buddy.works/blog/introducing-atomic-deployments) for this. Among the advantages are true zero-downtime deploys, being able to keep a few revisions around for easy rollback, explicitly defining what should be reused (persisted) between deploys, and that we then _never_ rsync against the root folder on the server (which is perhaps the most important thing for me, because it feels so unsafe).
The directory structure then becomes roughly like this
```
current -> releases/e68b1d257c4c443f41a4b9084e4dff8b5c5ad218
public -> current/web
releases
storage
```
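As a sketch, the one-time provisioning step for this layout could look something like the following. The function name `link_webroot` is made up, and the paths assume the atomic-deploy structure above.

```shell
# Hypothetical one-time provisioning step for the layout above:
# point "current" at the newest release and "public" at its web/ folder.
link_webroot() {
  site_root=$1
  cd "$site_root" || return 1
  latest=$(ls -1t releases | head -n 1)
  ln -sfn "releases/$latest" current   # atomic switch of the live release
  ln -sfn current/web public           # Servebolt web root -> Craft web/
}
```

On subsequent deploys, only the `current` symlink would need to be flipped; `public` keeps pointing at `current/web`.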
Example:

[The symlink from the public folder](https://servebolt.com/help/article/change-the-webroot-of-your-site/) really only needs to be set when the server is first provisioned, and maybe that is better than redefining it on every deploy? Or alternatively, the deploy routine could verify that it is there?
@torgeirbeyer - very interested in your take on this!
hyb1996-guest/AutoJsIssueReport | 238350719 | Title: java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String android.content.Context.getPackageName()' on a null object reference
Question:
username_0: Description:
---
java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String android.content.Context.getPackageName()' on a null object reference
at android.content.ComponentName.<init>(ComponentName.java:77)
at com.stardust.view.accessibility.AccessibilityServiceUtils.isAccessibilityServiceEnabled(AccessibilityServiceUtils.java:22)
at com.stardust.scriptdroid.service.AccessibilityWatchDogService.isEnable(AccessibilityWatchDogService.java:47)
at com.stardust.scriptdroid.ui.main.SideMenuFragment$1$1.run(SideMenuFragment.java:91)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
Device info:
---
<table>
<tr><td>App version</td><td>2.0.12 Beta</td></tr>
<tr><td>App version code</td><td>137</td></tr>
<tr><td>Android build version</td><td>1491811758</td></tr>
<tr><td>Android release version</td><td>5.1</td></tr>
<tr><td>Android SDK version</td><td>22</td></tr>
<tr><td>Android build ID</td><td>LMY47I release-keys</td></tr>
<tr><td>Device brand</td><td>vivo</td></tr>
<tr><td>Device manufacturer</td><td>vivo</td></tr>
<tr><td>Device name</td><td>PD1501D</td></tr>
<tr><td>Device model</td><td>vivo X6Plus D</td></tr>
<tr><td>Device product name</td><td>PD1501D</td></tr>
<tr><td>Device hardware name</td><td>mt6752</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
</table> |
watson-developer-cloud/unity-sdk | 493889394 | Title: IBM Watson SDK for Unity version Error while importing Watson core sdk for unity in unity 2018.2.9f1
Question:
username_0: Remember, an issue is not the place to ask questions. You can use [Stack Overflow](http://stackoverflow.com/questions/tagged/ibm-watson) for that, or you may want to start a discussion on the [dW Answers](https://developer.ibm.com/answers/questions/ask/?topics=watson).
Before you open an issue, please check if a similar issue already exists or has been closed before.
### When reporting a bug, please be sure to include the following:
- [ ] Start the title with the service name in brackets: `[speech-to-text] websockets...`
- [ ] Steps to reproduce
- [ ] Expected behavior
- [ ] Actual behavior
- [ ] IBM Watson SDK for Unity version
### When you open an issue for a feature request, please add as much detail as possible:
- [ ] A descriptive title starting with the service name
- [ ] A description of the problem you're trying to solve
- [ ] A suggested solution if possible
Answers:
username_0: Need your help guys
Thanks,
Hari
username_1: @Mohsina11 Can you provide more info about the issue, you can use the issue template and fill in the required info.
username_1: @username_0 Can you provide more info about the issue, you can use the issue template and fill in the required info.
username_0: Thanks for the reply, I am not getting the error anymore.
Status: Issue closed
|
dPexxIAM/Momentum-surf-zones | 404840635 | Title: surf_me only has one of the bonuses zoned instead of the main map
Question:
username_0: Also, other stage maps only have start and end zones but no stage zones. That doesn't matter right now because you can't get stage records yet, but it ought to be fixed eventually.
Answers:
username_1: Yeah, the majority of these zones were made by Wayne a while back. He did say he planned to go back and re-zone the maps that were zoned poorly, however I doubt we will see that happen now.
Are bonus zones available now? Sadly I am on 0.6.2, so I am not sure if they are included.
davedotluebke/old-skool-text-game | 484991737 | Title: Disallow default names from containing spaces
Question:
username_0: Currently, when you pass in a `default_name` with a space character in it, the code accepts it. It also uses this name to generate an ID string. However, all calls to `Player.perceive()` fail with an ID with a space, and the user cannot possibly use the name to reference the object. We should disallow `default_names` containing spaces.
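A guard along these lines could enforce that up front (a hypothetical Python sketch, not the game's actual code; `make_id` is an illustrative helper name):

```python
def make_id(default_name):
    """Hypothetical helper: build an ID string from a default name,
    refusing names with spaces since the parser cannot reference them."""
    if " " in default_name:
        raise ValueError(
            "default_name %r contains a space and would produce an "
            "unusable ID" % default_name
        )
    return default_name.lower()

print(make_id("Lantern"))  # -> lantern
```

Rejecting the name at construction time keeps `Player.perceive()` from ever seeing an ID with a space in it.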
Answers:
username_0: We have fixed the id string problem, so closing this. Default names with spaces are still impractical in the parser.
Status: Issue closed
|
steve-gray/semantique | 338709659 | Title: Failing due to modified package-lock.json
Question:
username_0: Hello,
I'm having an issue using `semantique` in my drone pipeline:
```
pipeline:
tests:
image: node
commands:
- npm install
- npm run lint
- npm test
update-semantic-versioning:
image: eventualconsistency/semantique
pull: true
secrets: [ git_pass, git_user ]
when:
branch: [ master ]
event: [ push ]
npm-publish:
image: plugins/npm
registry: https://registry.npmjs.itcs.hpecorp.net
secrets: [ npm_token, github_token ]
when:
branch: [ master ]
event: [ tag ]
```
It seems to be failing due to the fact that npm is writing out a modified `package-lock.json`:
```
===========================================================================================
_____ ______ __ __ _ _ _______ _____ ____ _ _ ______
/ ____| | ____| | \/ | /\ | \ | | |__ __| |_ _| / __ \ | | | | | ____|
| (___ | |__ | \ / | / \ | \| | | | | | | | | | | | | | | |__
\___ \ | __| | |\/| | / /\ \ | . ` | | | | | | | | | | | | | | __|
____) | | |____ | | | | / ____ \ | |\ | | | _| |_ | |__| | | |__| | | |____
|_____/ |______| |_| |_| /_/ \_\ |_| \_| |_| |_____| \___\_\ \____/ |______|
===========================================================================================
-> Checking workspace for uncommitted changes.
The workspace contains 1 non-committed changes.
-> CHANGED: package-lock.json
```
Do you have a suggestion for how to handle this? |
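For what it's worth, a generic npm-side workaround (my own suggestion, not something proposed in this thread) is to install with `npm ci`, which installs exactly what `package-lock.json` specifies and fails instead of rewriting it:

```yaml
pipeline:
  tests:
    image: node
    commands:
      - npm ci       # unlike `npm install`, never rewrites package-lock.json
      - npm run lint
      - npm test
```

The rest of the pipeline would stay unchanged.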
dotnet/maui | 901298922 | Title: TreeView missing from wiki/Status
Question:
username_0: Where is the TreeView control, or something that can simulate its behavior?
Answers:
username_1: Hi @username_0 we don't have a TreeView at this time. For platforms that have a platform-native equivalent you could use those, but I would recommend check with UI component vendors that have a Xamarin.Forms TreeView that will likely be ported to .NET MAUI.
We'll keep this as an enhancement request to add a TreeView in .NET 7.
username_2: I have a TreeView control in TemplateUI https://github.com/username_2/TemplateUI and I am porting it to .NET MAUI. After that, I think it needs some features (virtualization etc.) to be a complete control, but it could be a way to have it.
HelgeCPH/cypher_kernel | 564507174 | Title: Which yaml module is required by cypher_kernel?
Question:
username_0:
```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-vmumfhnr/cypher-kernel/setup.py", line 2, in <module>
    import cypher_kernel
  File "/tmp/pip-install-vmumfhnr/cypher-kernel/cypher_kernel/__init__.py", line 3, in <module>
    from .kernel import CypherKernel
  File "/tmp/pip-install-vmumfhnr/cypher-kernel/cypher_kernel/kernel.py", line 2, in <module>
    import yaml
ModuleNotFoundError: No module named 'yaml'
----------------------------------------
```
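For reference, the `yaml` module imported in `cypher_kernel/kernel.py` is provided by the third-party PyYAML distribution, so running `pip install PyYAML` first should resolve the error. A small illustrative check (not part of cypher_kernel):

```python
import importlib.util

def yaml_install_hint():
    """Return an install hint if the `yaml` module (PyPI package
    "PyYAML") is not importable in the current environment."""
    if importlib.util.find_spec("yaml") is None:
        return "pip install PyYAML"
    return "yaml already available"

print(yaml_install_hint())
```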
mosbth/cimage | 160162649 | Title: using Cimage to output image on browser
Question:
username_0: Hello.
First of all my compliments for your amazing work.
Can you give an example of how to use the class without using imgp.php?
e.g.
```php
$img = new CImage();
// ... what should I do here? ...
$img->output(); // output the image to the browser
```
Answers:
username_1: Have you seen the [last part of img.php](https://github.com/username_1/cimage/blob/master/webroot/img.php#L1211-L1269)? There is how `img.php` is (finally) using `CImage.php`.
Doing a find on `$img->` in `img.php` will give you 15 more places where `img.php` uses `CImage.php`.
The [docs on the preliminary website](https://cimage.se/doc/cimage-api) might give some more info. That manual page does not seem to be complete though, all 15 places seems not to be there, yet.
I haven't myself used the `CImage` class outside of `img.php`, but feel free to give it a try, and let me know how it goes.
username_0: OK, I thought maybe you already exposed an API to process an image using only the class.
Anyway, I will test it and let you know.
Thanks
Status: Issue closed
|
node-influx/node-influx | 194172524 | Title: cannot connect to my influxdb cloud instance
Question:
username_0: I assume it uses self-signed SSL certificates. I get the following error:
```bash
Error: write EPROTO 140735755973568:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:../deps/openssl/openssl/ssl/s23_clnt.c:794:
at exports._errnoException (util.js:1022:11)
at WriteWrap.afterWrite (net.js:801:14)
```
How do I get around this? I am using Node v6.9.2 and node-influx v5.0.3.
Answers:
username_0: Nvm, it was because I was using `hostname` instead of `host`. It would be nice if the options conformed to `require('url')` standards.
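To spell the fix out (a sketch of the options object only; the real constructor call is commented out because it needs the `influx` package, and the hostname shown is made up):

```javascript
// Sketch of node-influx v5 connection options: the key is `host`, not `hostname`.
const options = {
  host: 'my-instance.example.com', // hypothetical hostname, for illustration
  port: 8086,
  protocol: 'https',
};
// const influx = new Influx.InfluxDB(options); // requires the `influx` package

console.log('host' in options && !('hostname' in options)); // -> true
```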
Status: Issue closed
|
webpack-contrib/webpack-bundle-analyzer | 285519801 | Title: Server errors are unhandled
Question:
username_0: I cannot share that.
Answers:
username_1: Hi! Thanks for the detailed issue, it's highly appreciated :relaxed:
Are you still able to use the tool anyway, though? So that nothing is fundamentally broken, albeit the UX is definitely suboptimal here.
username_0: I can, yes, and it's already proven to be very helpful: discovered code splitting to not work, fixed, shaved off 300 KB of initial load of 1.41 MB app.
Due to the large number of modules to build, though, the development hot reloading server crashing is quite inconvenient because it needs to rebuild all the modules (I didn't go with caches across builds for reliability).
username_1: Great, thanks for the additional info!
Would you perhaps have a chance at trying out editing this code to add some resiliency to the quick reloads? I don't use the dev server myself, so I am not able to start digging into this without someone else looking into it first.
https://github.com/webpack-contrib/webpack-bundle-analyzer/blob/a9282dec4fdb87e3ccdc2b996bcf3b7f5172b39f/src/viewer.js#L25-L99
username_2: Looks like this error can be caused by [this issue](https://github.com/websockets/ws/issues/1256) in `ws`.
Can you check your `ws` version using `npm ls --depth 0 | grep ws@`?
username_1: This should be fixed by #140. I published a new version with the fix, v2.9.2
Status: Issue closed
username_3: Sorry to dig up this topic.
I was having the same problem, and it was because my webpack is configured to compile multiple apps with plugins in each app. The BundleAnalyzerPlugin was duplicated in each app, so several web apps were trying to be opened, hence the error.
```javascript
let apps = [
  'app1',
  'App2',
  // ...
];
// getConfig adds plugins and other common stuff to each app
module.exports.configs = apps.map(getConfig);
```
I'm now using the same instance of BundleAnalyzerPlugin; the error disappeared, but only the result of the last build is available in the web view.
So I ended up adding the plugin to only the app I need to analyse.
Another solution (not tested) would be to instantiate the plugin with a different port for each app:
bundleAnalyzerPlugin = new BundleAnalyzerPlugin({analyzerPort: incrementedPort});
If it can help someone.
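The per-app port idea could be sketched like this (hypothetical and, as noted above, untested; `basePort` and the plugin instantiation are illustrative, and the real `new BundleAnalyzerPlugin(...)` call is commented out since it needs webpack):

```javascript
// Hypothetical sketch: derive a distinct analyzer port for each app config.
const basePort = 8888; // assumed starting port
const apps = ['app1', 'App2'];

const analyzerPorts = apps.map((name, i) => ({
  app: name,
  analyzerPort: basePort + i,
}));
// e.g. new BundleAnalyzerPlugin({ analyzerPort: analyzerPorts[i].analyzerPort })

console.log(analyzerPorts.map((o) => o.analyzerPort).join(',')); // -> 8888,8889
```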
username_2: @username_3 you can use the `analyzerPort: 'auto'` configuration option.
pysal/pysal.github.io | 342927048 | Title: update submodule contract
Question:
username_0: the submodule contracts have diverged between the site & the wiki.
we need to:
1. update the site's [submodule contract from the wiki](https://github.com/pysal/pysal/wiki/Submodule-Contract)
2. delete the submodule contract in the wiki
3. update any links to submodule contract coming in from elsewhere (maybe the migrating directions?)
Answers:
username_1: 1. The site has been updated http://pysal.org/getting_started.html#submodule-contract
2. I am going to delete the submodule contract wiki and further editing of the contract should be made in the site
3. The links to submodule contract in migrating directions is updated in https://github.com/pysal/pysal.github.io/pull/57
Status: Issue closed
|
Blackhandpl/bugtracker | 201213766 | Title: Quest: Wisps of the Woods
Question:
username_0: Description:
Current behaviour: Using the Swiftwind Switch causes no reaction - the quest item does not work
Expected behaviour:
Steps to reproduce the problem:
Wowhead link: http://www.wowhead.com/quest=28383/wisps-of-the-woods |
BuildFire/placesPlugin | 111319851 | Title: Link UI is messed up
Question:
username_0: Link UI is messed up. Text should be to the right of the image/icon.
<img width="750" alt="screen shot 2015-10-13 at 9 03 06 pm" src="https://cloud.githubusercontent.com/assets/6145369/10474636/dcfa4c72-71ed-11e5-932a-e063b06c0c07.png">
Answers:
username_1: 
It seems like it's breaking on Mac devices. @JDM555 Can you please look into it?
PowerShell/vscode-powershell | 539527062 | Title: Preview 2019.12.00 crashes at startup with "Invalid URI: The hostname could not be parsed."
Question:
username_0: ## Issue Description
PowerShell Extension just crashes at startup with the following error:
* The terminal process terminated with exit code: 3762504530
<details><summary>Error message from terminal</summary><p>
```powershell
DEBUG: Logging started
DEBUG: Beginning EndProcessing block
VERBOSE: Removing PSReadLine
DEBUG: Creating host configuration
DEBUG: Determining REPL kind
DEBUG: REPL configured as PSReadLine
DEBUG: Configuring LSP transport
DEBUG: Configuring debug transport
DEBUG: Session file writer created
VERBOSE: Adding AssemblyResolve event handler for new AssemblyLoadContext dependency loading
VERBOSE: Loading EditorServices
DEBUG: Logging host information
VERBOSE:
== Build Details ==
- Editor Services version: <development-build>
- Build origin: VSTS
- Build time: 13.12.2019 01:18:26
VERBOSE:
== Host Startup Configuration Details ==
- Host name: Visual Studio Code Host
- Host version: 2019.12.0
- Host profile ID: Microsoft.VSCode
- PowerShell host type: System.Management.Automation.Internal.Host.InternalHost
- REPL setting: PSReadLine
- Session details path: c:\Users\OlavRønnestadBirkela\.vscode\extensions\ms-vscode.powershell-preview-2019.12.0\sessions\PSES-VSCode-16556-573187
- Bundled modules path: c:\Users\OlavRønnestadBirkela\.vscode\extensions\ms-vscode.powershell-preview-2019.12.0\modules
- Additional modules: PowerShellEditorServices.VSCode
- Feature flags:
- Log path: c:\Users\OlavRønnestadBirkela\.vscode\extensions\ms-vscode.powershell-preview-2019.12.0\logs\1576657201-5b389b1d-fcff-486f-8429-b659b95083701576656134103\EditorServices.log
- Minimum log level: Diagnostic
- Profile paths:
+ AllUsersAllHosts: C:\Program Files\PowerShell\7-preview\profile.ps1
+ AllUsersCurrentHost: C:\Program Files\PowerShell\7-preview\Microsoft.VSCode_profile.ps1
+ CurrentUserAllHosts: C:\Users\OlavRønnestadBirkela\OneDrive - Ironstone\Documents\PowerShell\profile.ps1
+ CurrentUserCurrentHost: C:\Users\OlavRønnestadBirkela\OneDrive - Ironstone\Documents\PowerShell\Microsoft.VSCode_profile.ps1
DEBUG: Assembly resolve event fired for System.Text.Encoding.CodePages.resources, Version=4.1.3.0, Culture=en-GB, PublicKeyToken=<KEY>
DEBUG: Assembly resolve event fired for System.Text.Encoding.CodePages.resources, Version=4.1.3.0, Culture=en, PublicKeyToken=<KEY>
DEBUG: Loaded into load context "Default" System.Runtime.Loader.DefaultAssemblyLoadContext #0: Microsoft.PowerShell.Commands.Diagnostics, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
DEBUG: Loaded into load context "Default" System.Runtime.Loader.DefaultAssemblyLoadContext #0: Microsoft.PowerShell.Commands.Utility, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
DEBUG: Loaded into load context "Default" System.Runtime.Loader.DefaultAssemblyLoadContext #0: Microsoft.PowerShell.MarkdownRender, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
DEBUG: Loaded into load context "Default" System.Runtime.Loader.DefaultAssemblyLoadContext #0: System.Net.Http, Version=4.2.2.0, Culture=neutral, PublicKeyToken=<KEY>
DEBUG: Loaded into load context "Default" System.Runtime.Loader.DefaultAssemblyLoadContext #0: Microsoft.PowerShell.Commands.Management, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
DEBUG: Loaded into load context "Default" System.Runtime.Loader.DefaultAssemblyLoadContext #0: Microsoft.WSMan.Management, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35DEBUG: Loaded into load context "Default" System.Runtime.Loader.DefaultAssemblyLoadContext #0: Microsoft.WSMan.Runtime, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
DEBUG: Assembly resolve event fired for Microsoft.PowerShell.Security.resources, Version=7.0.0.0, Culture=en-GB, PublicKeyToken=31bf3856ad364e35
DEBUG: Assembly resolve event fired for Microsoft.PowerShell.Security.resources, Version=7.0.0.0, Culture=en, PublicKeyToken=31bf3856ad364e35
[Truncated]
<details><summary>Visual Studio Code Extensions(Click to Expand)</summary>
|Extension|Author|Version|
|---|---|---|
|armview|bencoleman|0.3.3|
|azure-account|ms-vscode|0.8.7|
|azure-pipelines|ms-azure-devops|1.157.4|
|material-theme|zhuangtongfa|3.2.1|
|powershell-preview|ms-vscode|2019.12.0|
|pretty-formatter|mblode|0.1.7|
|shell-launcher|Tyriar|0.4.0|
|theme-dracula|dracula-theme|2.19.0|
|vsc-material-theme|Equinusocio|30.0.0|
|vscode-azurefunctions|ms-azuretools|0.20.1|
|vscode-markdownlint|DavidAnson|0.33.0|
|vscode-pull-request-github|GitHub|0.13.0|
|vscode-sort-json|richie5um2|1.18.0|
</details>
Answers:
username_0: The stable 2019.12.00 extension also crashes.
Diagnostic logs:
[1576657663-fa20ebda-3a8f-4dec-a30f-3b8c09c9f5661576657576950.zip](https://github.com/PowerShell/vscode-powershell/files/3977374/1576657663-fa20ebda-3a8f-4dec-a30f-3b8c09c9f5661576657576950.zip)
username_0: Found the cause. I thought "powershell.powerShellDefaultVersion" should match the "versionName" of one of the "powershell.powerShellAdditionalExePaths" entries, but no.
Better error messages or smarter logic around handling the "powershell.powerShellDefaultVersion" setting would be highly appreciated.
Fixed:
* "powershell.powerShellDefaultVersion": "PowerShell 7-preview (x64)",
username_0: Nope, that did not fix it. If I open a Visual Studio Code window with three PS1 files open, it always crashes. If I open an empty VSCode window, the extension loads successfully.
username_0: Got it working now, with:
* "powershell.powerShellDefaultVersion": "PowerShell Core 7 Preview (x64)"
Not a pleasant experience wasting 50 minutes of my work day troubleshooting this bizarre issue. I hope the default shell can be handled more smoothly in future versions.
username_0: Now it started crashing again. Didn't touch anything.
[1576659301-ab90fdf1-2019-43f1-8c7e-0fd773c463971576659162452.zip](https://github.com/PowerShell/vscode-powershell/files/3977460/1576659301-ab90fdf1-2019-43f1-8c7e-0fd773c463971576659162452.zip)
username_0: Long paths for files in the editor seem to affect the stability of the extension. This might be visible in the logs.
Tried with a shorter path for a PS1 file; it seems to not crash anymore. But why doesn't even basic IntelliSense work at all?

username_1: This is a dupe of #2324 because your username contains a non-ASCII character. This is my top priority right now but please be aware that the holidays are approaching so progress will be delayed. |
dwightwatson/validating | 57581672 | Title: How are the merged rulesets implemented with isValid()?
Question:
username_0: The documentation provides an example:
$mergedRules = $post->mergeRulesets('saving', 'creating');
How would that be implemented with the following line (but using the merged rulesets instead of the 'my_custom_rules' ruleset)?
$post->isValid('my_custom_rules', false);
Answers:
username_1: Hey, thanks for the ping on Twitter, had a hectic week and haven't had a chance to get to my emails yet.
I suppose it makes sense that the first argument of the method should accept the string name of a ruleset or a set of rulesets to validate with.
$post->isValid($mergedRules);
I'd be happy to accept a PR that introduces this functionality otherwise I'll try and get to it when things settle down.
username_0: Thanks for the quick response to the tweet. I'll be looking forward to added functionality.
How are the merged rulesets supposed to be used currently if you can't validate against them? Just want to make sure I understand its functionality as it stands.
username_1: If my memory serves me correctly the `mergeRulesets()` method was originally protected and a later PR made it public. It is just used to perform the merging of rulesets behind the scenes, as it would if you used a custom ruleset for example.
So yeah, effectively that method is useless on its own.
username_0: Sweet. Thanks for the explanation. All making sense now.
Any chance the feature will be added to the 4.2 branch? Haven't upgraded to 5.0 just yet. :)
username_1: Yeah it will only be added to 4.2. I took rulesets out of 5 because it has form requests (and because they're an absolute pain to maintain).
username_1: Cool, this should be available in `0.10.7` now.
Status: Issue closed
username_0: I got an answer relating to this on stackoverflow and the suggested answer on http://stackoverflow.com/questions/28497111/in-laravel-4-how-are-the-merging-rulesets-implemented works.
$post = new Post(Input::all());
$post->setRules($post->mergeRulesets('set_up_all', 'set_up_property_room', 'set_up_property'));
if($post->isValid()) {
///
}
So the new way is now $post->isValid($mergedRules)?
username_1: Yeah, either option would work, but passing the intended ruleset to `isValid()` now won't override the rules already set on the model, plus it's a much nicer syntax.
username_0: Perfect! I appreciate the quick addition! |
yiisoft-contrib/yiiframework.com | 66000136 | Title: Make sure contributors list is OK when GitHub is offline
Question:
username_0: Looks perfect.
Answers:
username_1: I just committed a command `./yii contributors/generate` that will generate (do'h) a *contributors.json* in the @app/data directory and generate thumbnails of the user avatars and put them in @app/data/avatars.
In 493f1748311c74c40c6221b14e8c800b42e3da66
I am working on stage 2 which is using Gulp to generate a sprite image using the avatar thumbnails and spit out a sprite image and a Sass file that can be (and will be) included and processed by our styles Gulp task.
That should take care of that, I think :)
username_1: 
username_1: Do you have any size preferences?
24px is good, but maybe too tiny ?
username_1: So, 48 :

username_1: And with padding:

Now, all that remains is to marry the sprite with the generated CSS :)
username_1: Btw, the sprites are brought to you by [gulp.spritesmith](https://github.com/twolfson/gulp.spritesmith)
username_0: Looks perfect.
username_0: Can you add it to the readme so we don't forget to set up cron for it?
username_1: I will when it's done :)
username_1: 
Moved the Github API code out of the SiteController and into the ContributorsController - for some reason Curl only returned a smaller list.
username_1: Added a note.
This probably needs some code review, which is why I leave this issue as open. ;)
username_1: The team page now loads almost instantaneous :speedboat:
username_2: the reason is pagination.
username_1: The result, however, is that I got to reuse the existing code for grabbing the contributor list - it just moved from one controller to another :p
username_2: yeah, seen that.
username_2: looks great to me, I think this can be closed. Thank you @username_1 !
Status: Issue closed
username_1: Awesome! :+1:
Now I can stop holding my breath - the new team page sure is speedy. :) |
duncan3dc/dusk | 425000844 | Title: Support for dusk v5
Question:
username_0: Hi,
Are you planning on supporting the latest version of Dusk (^5.0)?
Best,
Answers:
username_1: Yes, I wasn't aware there'd been a new release, I'll take a look as soon as I can
username_1: This is available in [1.0.0](https://github.com/username_1/dusk/releases/tag/1.0.0)
Status: Issue closed
|
arresteddevops/ado-hugo | 177469556 | Title: change episode images from png to jpeg
Question:
username_0: Similar to #233
A few steps are needed here:
- [ ] Convert all existing episode images to JPEG (leaving the PNG files in place)
- [ ] Update the code to look for `.jpg` instead of `.png` when pulling up an episode image
- [ ] Update the README to tell people to make a JPEG instead of a PNG
- [ ] Update the README to specify a larger size for the square episode image for use in cover art
- [ ] Check the use of the square cover art in the homepage code to constrain the image size so it doesn't blow up
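The conversion step could be scripted roughly like this (a sketch: the name mapping is plain shell, while the actual conversion assumes ImageMagick's `convert` and is left commented out as a dry run; the file names are made up):

```shell
# Map an episode image name from .png to .jpg.
to_jpg() {
  printf '%s\n' "${1%.png}.jpg"
}

for f in episode-001.png episode-002.png; do   # hypothetical file names
  echo "$f -> $(to_jpg "$f")"
  # convert "$f" "$(to_jpg "$f")"   # real conversion, if ImageMagick is installed
done
```

Run against the real image directory, this would print one `old -> new` line per episode image (e.g. `episode-001.png -> episode-001.jpg`), leaving the PNG files in place as the first checklist item asks.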
Answers:
username_0: With the new theme, this will work going forward using whichever type of image you want.
Status: Issue closed
|
mosdef-hub/gmso | 730429647 | Title: Incorporate Ele
Question:
username_0: I'd like to replace the `element.py` with `ele`. The functionality is basically identical. The only problem I see is that `ele` doesn't use `unyt` (to keep things lightweight), so I'm not sure how we can add that information in. Thoughts?
Answers:
username_0: Duplicate of #435.
Status: Issue closed
|
lanj/MDX-CrowdParking | 148649694 | Title: Readme.md
Question:
username_0: Updating Readme.MD file. Trying to reconcile main repository with GitBooks
Answers:
username_0: Readme.md updated; further information will be added as required.
Status: Issue closed
username_0: will add information about how to run server and provide sample code. |
osmlab/name-suggestion-index | 444236222 | Title: Ignore historical logos in Wikidata
Question:
username_0: [The Wikidata item for Popeyes](https://www.wikidata.org/wiki/Q1330910#P154) has a logo statement, but that statement has an [end time](https://www.wikidata.org/wiki/Property:P582) qualifier saying that the brand stopped using that logo in 2008. The index should omit historical logos in favor of other, more current statements in the Wikidata item.
I just added the current logo to the Wikidata item. A Wikidata constraint detected that there were multiple logo statements and forced me to mark the current logo as having “preferred” rank. So that’s another way to ensure that we’re pulling in the current logo.
Answers:
username_0: [Wendy’s](https://www.wikidata.org/wiki/Q550258) is another example. In this case, no Commons image is available to serve as a current logo, so the constraint doesn’t apply and the rank doesn’t identify the statement as being outdated. The qualifier is the only way to determine that the logo should be omitted.
username_1: Yes, when choosing a claim value, we are already skipping "deprecated" and prioritizing "preferred" ranked claims. I was hoping that might be sufficient for our purposes?
https://github.com/osmlab/name-suggestion-index/blob/2b70a7241cd92bf469ead75698145dc5d17a0ba4/build_wikidata.js#L179-L187
username_0: That’s good enough for items that have multiple logo statements, because the constraint requires one of them to be preferred. However, in the case of Wendy’s (and previously Popeyes), there’s only a single statement with historical qualifiers, so the rank is normal.
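A qualifier check along these lines would also catch the single-statement case, by skipping any logo claim that carries a P582 end-time qualifier (a hypothetical sketch, not the project's actual build code):

```javascript
// Hypothetical sketch: pick a current logo claim from Wikidata statements.
// A claim is skipped if it carries an end time (P582) qualifier, even when
// it is the only statement and therefore has "normal" rank.
function currentLogo(claims) {
  const preferred = claims.find(
    (c) => c.rank === 'preferred' && !hasEndTime(c)
  );
  if (preferred) return preferred;
  return claims.find((c) => c.rank === 'normal' && !hasEndTime(c)) || null;
}

function hasEndTime(claim) {
  return Boolean(claim.qualifiers && claim.qualifiers.P582);
}

const claims = [
  { rank: 'normal', value: 'Old_logo.svg', qualifiers: { P582: '2008' } },
  { rank: 'preferred', value: 'New_logo.svg' },
];
console.log(currentLogo(claims).value); // -> New_logo.svg
```

With only the first (historical) claim present, the function would return `null` and the index could fall back to other statements.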
Status: Issue closed
username_1: Ok, I did the thing in 5821c3e |
webpack-contrib/compression-webpack-plugin | 756664882 | Title: Compressed file is not emitted in webpack 5
Question:
username_0: <!--
Issues are so 🔥
If you remove or skip this template, you'll make the 🐼 sad and the mighty god
of Github will appear and pile-drive the close button from a great height
while making animal noises.
👉🏽 Need support, advice, or help? Don't open an issue!
Head to StackOverflow or https://gitter.im/webpack/webpack.
-->
- Operating System: macOS 10.15.7
- Node Version: 14.15.0
- NPM Version: 6.14.9
- webpack Version: 5.9.0
- compression-webpack-plugin Version: 7.0.0
### Expected Behavior
The file generated by compression-webpack-plugin should be emitted
<!-- Remove this section if not reporting a bug or modification request. -->

### Actual Behavior
The file generated by compression-webpack-plugin is not emitted in Webpack v5

### Code
```js
// webpack.config.js
const { CleanWebpackPlugin } = require('clean-webpack-plugin');
const Compression = require('compression-webpack-plugin');
const Path = require('path');
const { WebpackManifestPlugin } = require('webpack-manifest-plugin');
module.exports = {
entry: {
input: './input.js',
},
mode: 'development',
output: {
filename: '[name].js',
path: Path.join(__dirname, 'dist'),
publicPath: '',
},
plugins: [
new CleanWebpackPlugin(),
new Compression(),
new WebpackManifestPlugin(),
],
};
```
### How Do We Reproduce?
<!--
Remove this section if not reporting a bug.
If your webpack config is over 50 lines long, please provide a URL to a repo
for your beefy 🍖 app that we can use to reproduce.
-->
https://repl.it/@username_0/manifest-plugin-repro#webpack.config.js
It was working before in webpack v4: https://repl.it/@username_0/manifest-plugin-repro-1#webpack.config.js
Answers:
username_1:
```
./input.js 58 bytes {input} [depth 0] [built] [code generated]
  [used exports unknown]
```
Don't use `webpack-nano`, you don't need it, please use `webpack-cli` (official CLI for webpack)
In webpack@5, related assets are hidden from stats to reduce memory usage; you can use `--stats=detailed`
Status: Issue closed
|
benjcunningham/govtrackr | 128413240 | Title: Missing LICENSE template
Question:
username_0: From Travis-CI [Build #6](https://travis-ci.org/username_0/govtrackr/builds/104389997):
```
* checking DESCRIPTION meta-information ... NOTE
License components which are templates and need '+ file LICENSE':
MIT
```
Status: Issue closed |
enthought/traits | 1188295191 | Title: Deprecate acceptance of lists by BaseTuple and Tuple traits
Question:
username_0:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/username_0/Enthought/ETS/traits/traits/base_trait_handler.py", line 74, in error
    raise TraitError(
traits.trait_errors.TraitError: The 't' trait of an A instance must be a tuple of the form: (an integer, a string), but a value of [2, '2'] <class 'list'> was specified.
```
I propose getting rid of the inconsistency and only ever accepting tuples for both `Tuple` and `BaseTuple` traits. This would be a backwards incompatible change, so we'd at least need a deprecation warning period. |
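The deprecation period could look roughly like this (a hypothetical sketch, not Traits' actual validation code): keep accepting lists for now, but emit a `DeprecationWarning` and coerce to a tuple:

```python
import warnings

def validate_tuple_value(value):
    """Hypothetical validator sketch: tuples pass through; lists are
    accepted for now with a DeprecationWarning; anything else fails."""
    if isinstance(value, tuple):
        return value
    if isinstance(value, list):
        warnings.warn(
            "Passing a list where a tuple is expected is deprecated; "
            "pass a tuple instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return tuple(value)
    raise TypeError("expected a tuple, got %r" % (value,))

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    print(validate_tuple_value([2, "2"]))  # -> (2, '2')
    print(len(caught))                     # -> 1
```

After the deprecation period, the list branch would simply be removed, making both `Tuple` and `BaseTuple` behave the same.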
paritytech/ink | 1048852620 | Title: How to log inside runtime chain extension impl?
Question:
username_0:
```rust
impl ChainExtension<Runtime> for CustomChainExtension {
    fn call<E: Ext>(
        func_id: u32,
        env: Environment<E, InitState>,
    ) -> Result<RetVal, DispatchError>
    where
        <E::T as SysConfig>::AccountId: UncheckedFrom<<E::T as SysConfig>::Hash> + AsRef<[u8]>,
        <E as Ext>::T: orml_tokens::Config,
    {
        match func_id {
            0 => {
                // deposit
                let mut env = env.buf_in_buf_out();
                let input_data = env.read(56)?;
                log::debug!("hello ~~ deposit ~~");
```
I try to log `hello ~~` but it doesn't show up in the console. What should I config to log inside chain extension runtime?
I also tried `RUST_LOG=debug RUST_BACKTRACE=1 ./target/release/polkadex-node --dev --tmp -lruntime=debug` but it doesn't work
Answers:
username_0: @username_1 Could you take a quick look here, please? Thanks
username_1: @username_0 You can use e.g.
```
info!(
target: "runtime",
"[ChainExtension]|call|func_id:{:}", func_id
);
```
inside your runtime. If you then run the node via `substrate-contracts-node --tmp --dev` it will show up on the console.
I think in your code above the issue is that you use `-lruntime=debug`, but don't use `log::debug!(target: "runtime", …);`.
Status: Issue closed
username_0: @username_1 it still doesn't work for me.
username_1: @username_0 Have you tried starting without the `-l`? Maybe try panicking in your custom chain extension in the runtime, just to verify that the function is even hit?
username_0: Yes, I tried to panic in the entry and it panics there, so the function is called. You mean `runtime=debug`?
lectri/Snake | 809041764 | Title: Batch Problem
Question:
username_0: Sprites and labels don't show up when put into a batch; maybe separate the batch depending on whether it's a label or a sprite.
Answers:
username_0: Fixed! For the time being, the fix is that the batch objects have to be global variables instead of properties in a class.
Status: Issue closed
|
Yubico/libfido2 | 495947459 | Title: fido2-assert does not support hmac-secret as opposed to examples/assert
Question:
username_0: It would be great if `fido2-assert` supported the hmac-secret extension the same way `examples/assert` does, so one could integrate this standard tool into their workflow instead of building their own solution.
Answers:
username_1: tentative code pushed to [fido2-hmac](https://github.com/Yubico/libfido2/tree/fido2_hmac); apologies for the delay!
username_0: Can you please provide `fido2-cred` and `fido2-assert` command lines so I can test?
Maybe these can become an example in manpages for fido2-cred/assert. Currently we have only usage without `-h` there.
username_0: Can you please provide examples of how to use `fido2-cred`/`fido2-assert` for the hmac-secret extension scenario? Ideally this would be added to the `fido2-cred`/`fido2-assert` manpages as the second example.
username_1: Sure. The steps are almost the same as the documented ones, with the difference that -h and a HMAC salt need to be present:
- Making a credential:

  ```
  echo credential challenge | openssl sha256 -binary | base64 > cred_param
  echo relying party >> cred_param
  echo user name >> cred_param
  dd if=/dev/urandom bs=1 count=32 | base64 >> cred_param
  fido2-cred -M -hi cred_param /dev/hidraw7 | fido2-cred -V -ho cred
  ```

- Getting an assertion:

  ```
  echo assertion challenge | openssl sha256 -binary | base64 > assert_param
  echo relying party >> assert_param
  head -1 cred >> assert_param
  tail -n +2 cred > pubkey
  dd if=/dev/urandom bs=1 count=64 | base64 -w0 >> assert_param   # HMAC salt
  fido2-assert -G -hi assert_param /dev/hidraw7 > /tmp/assert
  fido2-assert -V -hi /tmp/assert pubkey es256
  tail -1 /tmp/assert | base64 -d | xxd   # HMAC secret
  ```
(Please note I have just amended the branch to add a -h option to fido2-assert -V, which was missing.)
username_0: I tested the commands above (commit 0d7d1511d7e8ce97ebd58cff97344e3045afd430) with Trezor Model T and it fails on this line
```
fido2-assert -G -hi assert_param /dev/hidraw0 > /tmp/assert
```
with
```
fido2-assert: output error
```
username_1: Thanks, I will take a look.
username_1: Reverting #60 (9d44af2) fixes it for me. What are the device's expectation of the COSEAlgorithmIdentifier in this case?
username_0: @username_2 can you have a look please?
username_2: It works fine with the trezor-core master branch. The reason is that https://github.com/trezor/trezor-firmware/commit/9537bc40a58fb4e7c428fb7b7137e1fc2d004e6b never made it into the release, because the change was made too late after the code freeze. This commit is the equivalent of 9d44af2 in libfido2.
Note that the next day after further research I decided to disable the check completely to avoid compatibility issues, even though it's against the spec: https://github.com/trezor/trezor-firmware/commit/18998ff42f53cc022eefb619320cbdd19a956770
username_0: Ah, thank you for the explanation Andrew!
I confirm that trezor-firmware master works with 0d7d1511d7e8ce97ebd58cff97344e3045afd430 (fido2-hmac branch HEAD) correctly!
We can close this issue after the fido2-hmac branch is merged to master. Thank you @username_1 for getting this done, much appreciated!
username_1: Implemented in 183b556; thanks!
Status: Issue closed
|
vaadin/vaadin-element-mixin | 521503867 | Title: Prepare for a new minor version
Question:
username_0: Once vaadin/vaadin-usage-statistics#47 is released as 2.1.0 we should
- [ ] create `14.0` branch from the current master
- [ ] bump `vaadin-usage-statistics` to `^2.1.0` on master
- [ ] release `vaadin-element-mixin` 2.2.0
- [ ] pin `vaadin-usage-statistics` to `~2.0.11` for `14.0` branch
- [ ] release `vaadin-element-mixin` 2.1.5<issue_closed>
Status: Issue closed |
godotengine/godot | 115754208 | Title: Add Metadata to Tileset Tiles to determine behaviour on contact with specific tiles (grates, oneway platforms, vertical stairs, etc)
Question:
username_0: Example: when in the tileset setup scene and creating a tile StaticBody2D or KinematicBody2D, be able to set a script with something like:
########## grate.gd
extends KinematicBody2D
func _ready():
    set_meta("collider_type", "grate")
##########
this way, in the character collision event, we can do something like this:
########## player.gd
extends KinematicBody2D
func _fixed_process(delta):
    if(is_colliding()):
        if(get_collider().has_meta("collider_type")):
            if(get_collider().get_meta("collider_type") == "grate"):
                print("you can go through if your character has acquired a mist form")
#######################################
right now this can be done with a separated KinematicBody2D on the scene, but it would be simpler to define this just one time on the Tileset scene and then be reusable as a tile instead of creating many separated objects on the scene and setting each one of them as something different, like a grate, platforms that you can dropdown from, vertical stairs, etc...
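To make the idea concrete, here is a small language-neutral sketch (hypothetical names, not actual Godot API) of how per-tile metadata could drive behavior:

```python
# Hypothetical sketch of the proposal: a single metadata table per tileset
# drives passability and footstep sounds, instead of separate bodies per scene.
TILE_META = {
    "grate": {"mist_passable": True, "step_sound": "metal"},
    "grass": {"mist_passable": False, "step_sound": "grass"},
    "wood": {"mist_passable": False, "step_sound": "wood"},
}

def on_tile_collision(tile_type, player_has_mist_form):
    # Tiles without metadata fall back to being solid.
    meta = TILE_META.get(tile_type, {})
    if meta.get("mist_passable") and player_has_mist_form:
        return "pass_through"
    return "blocked"

def footstep_sound(tile_type):
    return TILE_META.get(tile_type, {}).get("step_sound", "default")
```

Here `TILE_META`, `on_tile_collision`, and `footstep_sound` are all invented for illustration; in Godot the table would live in the tileset resource and the lookup in the collision callback.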
Thanks in advance
Regards
Answers:
username_1: I think this will help to set up stages of a block puzzle game also.
:+1:
username_0: Just updated the sample a little to give more examples of possible functionality (this is thinking in a Platformer/Metroidvania case, but I think the possibilities are endless)
username_0: Updated again to demonstrate the idea of different walking sounds according to the Tile Meta Data (grass, wood, metal, etc.; there are so many possibilities)
username_2: This is a brilliant idea; the possibilities for this feature are just beyond endless. : )
username_3: Duplicate of #12634 (more recent but it has more visibility).
Status: Issue closed
|
mboskamp/MMM-PIR | 683953528 | Title: MMM-PIR doesn't work with NODE_MODULE_VERSION 73
Question:
username_0: Hello,
Can you update this module? I am unable to install it because the Node.js version it was built against is out of date.
My Node.js requires NODE_MODULE_VERSION 73, but the module was compiled against NODE_MODULE_VERSION 64.
this my pm2 logs after [Electron] displayed on my screen :
` 0|mm | [2020-08-22 09:47:51.986] [ERROR] WARNING! Could not load config file. Starting with default configuration. Error found: Error: The module '/home/pi/MagicMirror/modules/MMM-PIR/node_modules/epoll/build/Release/epoll.node'
0|mm | was compiled against a different Node.js version using
0|mm | NODE_MODULE_VERSION 64. This version of Node.js requires
0|mm | NODE_MODULE_VERSION 73. Please try re-compiling or re-installing
0|mm | the module (for instance, using `npm rebuild` or `npm install`).`
Thanks you
Answers:
username_1: same issue here
username_1: same here
username_2: same here
username_3: I solved this problem using the command below:
`./node_modules/.bin/electron-rebuild`
Steps:
1. If you have not installed electron-rebuild, install it with `npm i -D electron-rebuild`.
2. Remove the `serialport` and `@serialport` folders from the `node_modules` folder.
3. Remove the file `package-lock.json`.
4. Run `npm i` to install the missing modules.
5. Finally, run `./node_modules/.bin/electron-rebuild`.
It is very important to run `./node_modules/.bin/electron-rebuild` directly after `npm i`.
Source: https://stackoverflow.com/a/52796884 |
Zerui18/XenLive-Issues-Tracker | 853564356 | Title: Can't get rsync to work
Question:
username_0: # Can't get Rsync to work
When I enable XenLive in Visual Studio code I get an error from Rsync.
# Device Information
* iPhone 12 mini
* 14.3 with Taurine
* Tweak Version 1.0.0
# Expected Behaviour
Rsync to sync my files to my iPhone.
# Current Behaviour
Visual studio code gives me the following error:
_XenLive Edit: Failed to sync with device: Error: Command failed: C:\Users\remon\Documents\rsync\bin\rsync -rl -o --usermap="*:mobile" --delete -e 'C:\Users\remon\Documents\rsync\bin\ssh' //localhost/C$/Users/remon/Desktop/Widget/ '[email protected]:/var/mobile/Library/Widgets/Universal/Mining Tracker/' Permission denied, please try again. Permission denied, please try again. '[email protected]: Permission denied (publickey,password,keyboard-interactive). rsync: connection unexpectedly closed (0 bytes received so far) [sender] rsync error: error in rsync protocol data stream (code 12) at io.c(228) [sender=3.2.3]_
# Additional
I added the SSH public key to my iPhone, when I do `ssh [email protected]` it connects without a password.
I also added the widgets folder to the Widgets/Universal/ directory and checked the permissions.
Status: Issue closed
Answers:
username_0: Found out what it was.
You can't use a space in the widgetname.
Thanks for the awesome work!
Keep it up! |
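For anyone hitting the same thing: the root cause is that a remote path containing a space must be quoted for the remote shell that rsync spawns over ssh. A minimal Python sketch (hypothetical helper, not XenLive's actual code) of building the command safely:

```python
import shlex

def build_rsync_command(local_dir, user, host, remote_dir):
    # shlex.quote wraps paths containing spaces in single quotes, so the
    # remote shell sees ".../Mining Tracker/" as one argument, not two.
    remote = f"{user}@{host}:{shlex.quote(remote_dir)}"
    return ["rsync", "-rl", "--delete", local_dir, remote]

cmd = build_rsync_command(
    "./Widget/", "root", "192.168.1.2",
    "/var/mobile/Library/Widgets/Universal/Mining Tracker/",
)
```

The host, user, and flags above are placeholders; the point is only that the destination path gets quoted once for the remote side.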
guywhorley/guywhorley.github.io | 152018967 | Title: Changed menu widget to dropdown menu
Question:
username_0: Change the jQuery ui menu widget (in left nav container) to a drop-down list.
Answers:
username_0: I've completed the change over on my profile page. I'm using the default bootstrap3 dropdown menu.
Status: Issue closed
username_0: Considering this issue closed. Any future work will be filed under separate issues. |
infor-design/enterprise-ng | 448100362 | Title: Datagrid group is not sortable - sorting throws error
Question:
username_0: **Describe the bug**
Datagrid with the `groupable` option breaks sorting because it expects the dataset to have a `values` property that is an array
**To Reproduce**
Steps to reproduce the behavior:
1. Go to http://localhost:4210/ids-enterprise-ng-demo/datagrid-custom-formatter
2. Add this option to gridOptions
```
groupable: {
aggregator: '',
expanded: true,
fields: ['status']
},
```
3. Change sortable attribute to true for productId in datagrid-paging.data.ts
4. See error
```
errors.ts:35 ERROR TypeError: Cannot read property 'length' of undefined
at Datagrid.syncDatasetWithSelectedRows (sohoxi.js:79408)
at Datagrid.sortDataset (sohoxi.js:79346)
at Datagrid.setSortColumn (sohoxi.js:79298)
at HTMLTableCellElement.<anonymous> (sohoxi.js:75297)
at HTMLDivElement.dispatch (jquery.js:5237)
at HTMLDivElement.elemData.handle (jquery.js:5044)
at ZoneDelegate.push../node_modules/zone.js/dist/zone.js.ZoneDelegate.invokeTask (zone.js:423)
at Object.onInvokeTask (ng_zone.ts:262)
at ZoneDelegate.push../node_modules/zone.js/dist/zone.js.ZoneDelegate.invokeTask (zone.js:422)
at Zone.push../node_modules/zone.js/dist/zone.js.Zone.runTask (zone.js:195)
```
**Expected behavior**
Datagrid should not assume every dataset has a "values" attribute
**Version**
- ids-enterprise-ng: 5.3.0
**Additional context**
workaround available -> just add an attribute "values": [] to your dataset
Answers:
username_1: @clepore Could you please move this issue to the EP project. I can reproduce this with the following example. Save this as app/views/components/datagrid/example-grouping-multiselect.html
```
<div class="row">
<div class="twelve columns">
<div id="datagrid">
</div>
</div>
</div>
<script>
$('body').one('initialized', function () {
var grid,
columns = [],
data = [];
//Define Columns for the Grid.
columns.push({ id: 'selectionCheckbox', sortable: false, resizable: false, formatter: Formatters.SelectionCheckbox, align: 'center'});
columns.push({ id: 'id', name: 'Customer Id', field: 'id'});
columns.push({ id: 'type', name: 'Type', field: 'type'});
columns.push({ id: 'location', name: 'Location', field: 'location', formatter: Formatters.Hyperlink});
columns.push({ id: 'firstname', name: '<NAME>', field: 'firstname'});
columns.push({ id: 'lastname', name: '<NAME>', field: 'lastname'});
columns.push({ id: 'phone', name: 'Phone', field: 'phone'});
columns.push({ id: 'purchases', name: 'Purchases', field: 'purchases'});
//Get some data via ajax
var url = '{{basepath}}api/accounts';
$.getJSON(url, function(res) {
$('#datagrid').datagrid({
columns: columns,
dataset: res,
selectable: 'multiple',
groupable: {
fields: ['type'],
expanded: true
},
toolbar: {title: 'Accounts', results: true, personalize: true, actions: true, rowHeight: true, keywordFilter: false}
}).on('click', function (e, args) {
console.log('Row Click ',args)
});
});
});
</script>
```
Then load the page and hit sort and check the console. |
jekyll/jekyll | 243564197 | Title: jekyll install leads to immediate errors
Question:
username_0: Just starting out with Jekyll. Ruby is up to date. I ran
`sudo gem install jekyll`
and jekyll installs fine.
But when I run a test command, like, `jekyll -v`, I get this error:
```
WARN: Unresolved specs during Gem::Specification.reset:
listen (< 3.1, ~> 3.0)
WARN: Clearing out unresolved specs.
Please report a bug if this causes problems.
/usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.2/lib/bundler/runtime.rb:317:in `check_for_activated_spec!': You have already activated kramdown 1.14.0, but your Gemfile requires kramdown 1.13.2. Prepending `bundle exec` to your command may solve this. (Gem::LoadError)
from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.2/lib/bundler/runtime.rb:32:in `block in setup'
from /usr/local/Cellar/ruby/2.4.1_1/lib/ruby/2.4.0/forwardable.rb:229:in `each'
from /usr/local/Cellar/ruby/2.4.1_1/lib/ruby/2.4.0/forwardable.rb:229:in `each'
from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.2/lib/bundler/runtime.rb:27:in `map'
from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.2/lib/bundler/runtime.rb:27:in `setup'
from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.2/lib/bundler.rb:101:in `setup'
from /usr/local/lib/ruby/gems/2.4.0/gems/jekyll-3.5.1/lib/jekyll/plugin_manager.rb:48:in `require_from_bundler'
from /usr/local/lib/ruby/gems/2.4.0/gems/jekyll-3.5.1/exe/jekyll:9:in `<top (required)>'
from /usr/local/bin/jekyll:22:in `load'
from /usr/local/bin/jekyll:22:in `<main>'
```
What gives?
Answers:
username_1: Something that's not clearly stated in our docs:
### Running Jekyll commands with recent versions, is highly dependent on the presence of a `Gemfile` in the location (directory) from which the commands are called..
- If the current location contains a `Gemfile`, ***always*** prepend jekyll commands with `bundle exec`. i.e. `bundle exec jekyll -v` or `bundle exec jekyll serve` . **`Note: This requires Bundler gem to be installed as well.`**
- Otherwise, bare commands work just fine.
- If not, some other things are conflicting with Jekyll. Get more info by *append*-ing `--trace` to the command.
username_2: * Don’t ever install Jekyll with `sudo`
username_0: `bundle update` doesn't help. `--trace` returns the same error message.
installed with sudo because it didn't work the way prescribed by the quick-start guide: https://jekyllrb.com/docs/quickstart/
username_1: - Don't run jekyll from the `/home` directory
username_1: Please elaborate on this. What was the exact error?
Permission Issues? Perhaps [this link](https://jekyllrb.com/docs/troubleshooting/#jekyll--mac-os-x-1011) might help..
Or other sections might hold an answer..
username_3: @username_1 in Linux, if you're not the root user and you're using the system's ruby (instead of using version managers like `rvm`), and you try to run `gem install jekyll bundler` without `sudo`, you get this error
``` text
ERROR: While executing gem ... (Errno:EACCES)
Permission denied - /var/lib/gems
```
[This Stack Overflow question](https://stackoverflow.com/q/11496591/1743811) has more details.
---
@username_0 Looking at this part of the error message
```
You have already activated kramdown 1.14.0, but your Gemfile requires kramdown 1.13.2. Prepending `bundle exec` to your command may solve this. (Gem::LoadError)
```
It is telling you to add `bundle exec` in front of all your Jekyll commands. For example, use `bundle exec jekyll serve` instead of `jekyll serve`. I get the impression from your post that you seem to be running `bundle exec` alone by itself. I apologize in advance if I misunderstood your post.
username_1: @username_3 thanks for letting me know :+1:
username_4: I experienced the exact same issue. Prepending 'bundle exec' to each jekyll command (such as `bundle exec jekyll serve`) indeed seems to work. Weird stuff. It's voodoo, I guess.
username_5: Does https://jekyllrb.com/docs/troubleshooting/#installation-problems need to be updated?
I'm no Linux user but @jekyll/documentation would certainly appreciate a contribution here.
username_6: I suppose that it could be divided either according to error message or according to the Linux distribution. I would rather choose the "by error" aproach, but I don't know well it would scale. Anyway, I could add some information on installing on Archlinux, but again, the section is already well "populated"...
Another point: the section starts describing *Linux* related stuff, from "Rubygems on Gentoo" it jumps onto *Windows*, *Android*, *MacOS* etc, and then is subdivided (by headers) on Mac OS etc. Any thoughts on that?
username_7: @username_3 What flavor of Linux are you referring to?
For all I know, jekyll wouldn't run as executable unless the path is set. And I've always tried to stay away from appending `bundle exec` to jekyll. Usually `bundle update` does the trick, or even gem update. But appending a `bundle exec` seems a stretch
username_5: Closing as https://jekyllrb.com/docs/installation/ is now more detailed.
Status: Issue closed
username_1: Don't do this. It may lead to other bugs. The *proper* way is to just run `bundle update` |
kabouzeid/phonograph-issue-tracker | 150097178 | Title: Lock screen player
Question:
username_0: Somehow I can't get a player on my lock screen...
Using:
Huawei P8 Lite
Android 5.0.1
EMUI 3.1
Answers:
username_1: what do you mean exactly?
username_0: I don't see the music player on my lockscreen while there is music playing.
This makes me NEED to unlock my phone before I can go to another song, etc
username_1: There should be a notification with playback controls.
username_0: There is a notification in the notificationscreen, but not on the lockscreen
username_1: Well notifications should show up at the lockscreen on 5.0+
Status: Issue closed
username_0: I'm on Android 5.0.1 and it's not showing up, so I wouldn't say that it's fixed, but that might be because the Huawei P8 Lite uses a custom rom (EMUI)
username_1: It's an issue with your ROM. |
topcoder-platform/community-app | 597811198 | Title: Error message displayed while submitting the file.
Question:
username_0: After the user selects and submits the file, an error message is displayed.
** Oh, that’s embarrassing! The file couldn’t be uploaded, I’m so sorry.
Cannot read property 'allPhases' of undefined **
Attached video/screenshot for reference:

[Error_message_for_file_Submission.zip](https://github.com/topcoder-platform/community-app/files/4461043/Error_message_for_file_Submission.zip)
Answers:
username_1: @username_2 @lakshmiathreya , This issue also happens occasionally.
username_2: Not relevant for the submission processor |
SirRolf/IkBenEenWalvis | 376725571 | Title: Cleanup comments
Question:
username_0: Je hebt feedback gekregen van **username_0**
op:
```c
//don't fuck with the time
```
URL: https://github.com/SirRolf/IkBenEenWalvis/blob/master/Assets/Code/GameObject/GameManager/GameManagerObstackleSpawner.cs
Feedback: Try to remove comments that do not add any information before submitting your products.[](http://www.studiozoetekauw.nl/codereview-in-het-onderwijs/ '#cr:{"sha":"68931c995d29379a185fd06ba6cdd90d3abe71ec","path":"Assets/Code/GameObject/GameManager/GameManagerObstackleSpawner.cs","reviewer":"username_0"}')
ajency/binary-flux | 539035327 | Title: add button - needs rounded edges
Question:
username_0: **expected:**

**actual:**
<issue_closed>
Status: Issue closed |
kubernetes/website | 496633762 | Title: There's a merge conflict on the website
Question:
username_0: **Problem:**
Merge conflict visible here:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-first-control-plane-node
**Proposed Solution:**
Resolve the conflict.
It seems this has happened before - #13941 - perhaps consider `pre-commit` ([`check-merge-conflict`](https://pre-commit.com/hooks.html)) or similar tooling to prevent it in future?
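For reference, enabling that hook is a small config file; the `rev` pin below is an assumption, use whatever release is current:

```yaml
# Sketch of a .pre-commit-config.yaml that rejects committed conflict markers
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0  # assumed pin
    hooks:
      - id: check-merge-conflict
```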
**Page to Update:**
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-first-control-plane-node
Answers:
username_1: /kind bug
/close
Fixed in #16497 (I think) |
firesim/firesim | 740279821 | Title: Default Autocounter Annotations
Question:
username_0: Include some annotations that are always present in rocket and boom for:
- Instructions Retired
- LI Cache Hits, Misses
- L2 Cache Hits, Misses
Maybe some BP stats too?
Answers:
username_0: Better as a doc in CY? Not part of a FireSim release.
Status: Issue closed
|
ooni/explorer | 1168672646 | Title: measurement page crashes when accessing missing measurement from search interface
Question:
username_0: When I attempt to access a missing measurement from the search listing, the page crashes with an error:

When doing the same thing, but not by navigating to the view, I instead get a missing measurement page as expected:
 |
ianstormtaylor/slate | 194742378 | Title: remove the need for `split_node` and `join_node` operations
Question:
username_0: This is just an idea, I'm not totally sure if it's possible... if you do know, let me know!
Right now we've got `split_node` and `join_node` as operation primitives (in addition to being available as in transforms), but it's a bit of a weird case, and the splitting we do generally leads to trouble in places.
I wonder if we even need it, or if that "split" could be expressed purely in terms of `move_node`, `insert_node`, `remove_text`, `insert_text` operations. It seems like it should be possible, and that we wouldn't lose any "information" that is required for collaborative use cases, since the indexes of those other operations would just be shifted as other users made edits.
If we were able to get rid of it, that would be great for simplifying things.
The only thing I'm not sure of is if the split/join "intent" is critical to maintain collaborative edits that best approximate the user's intentions when resolving conflicts.
Answers:
username_0: Ah, never mind. I think without knowing whether a node was "split" you can't update users' cursors that came after the split point to be in their new equivalent nodes. Damn.
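A sketch of why the explicit op matters (illustrative Python, not Slate's actual data model): with a recorded `split_node` at `(node, offset)`, a remote user's cursor can be re-homed deterministically, while the same mapping can't be recovered from a generic remove/insert sequence.

```python
def transform_cursor(cursor, split):
    """Map a (node_index, offset) cursor across a split_node operation."""
    node, offset = cursor
    split_node_index, split_offset = split
    if node > split_node_index:
        return (node + 1, offset)  # later siblings shift down by one
    if node == split_node_index and offset >= split_offset:
        # the cursor was past the split point, so it lands in the new node
        return (node + 1, offset - split_offset)
    return cursor
```

The tuple representation here is invented for the sketch; the point is only that the split coordinates are exactly the information the decomposed operations would lose.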
Status: Issue closed
|
dfinity/motoko | 1115960590 | Title: Provide syntax for functional record update operations
Question:
username_0: Sadly, the lack of these shows in the Motoko code, we have to copy fields a lot. With the record features the thing would be much more compact.
@krpeacock also asked for this in the past.
This all needs some design work, e.g. should record concatenation be biased to the left when duplicated fields occur, etc.
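To make the bias question concrete, here is a purely illustrative Python sketch (not Motoko) of the two possible semantics for merging records with duplicated fields:

```python
def merge_left_biased(r1, r2):
    out = dict(r2)
    out.update(r1)  # fields from r1 win on duplicate labels
    return out

def merge_right_biased(r1, r2):
    out = dict(r1)
    out.update(r2)  # fields from r2 win on duplicate labels
    return out

r1 = {"x": 1, "y": 2}
r2 = {"y": 20, "z": 30}
```

The design question is which of these two behaviors (if either) a record concatenation syntax should pick, or whether duplicated labels should simply be a type error.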
Answers:
username_1: Ah, I knew we had something about this already somewhere:
See https://forum.dfinity.org/t/motoko-design-evolution/9761
https://github.com/dfinity/motoko/blob/master/design/WhitePaper.md#objects
username_1: The thing that bothers me about @Rossberg's proposal (as it stands)
https://github.com/dfinity/motoko/blob/master/design/WhitePaper.md#formalisation-6
is that it immediately breaks subject reduction.
In the the presence of subtyping, you need some information about the static domain of the two records that you are merging to do the right thing when the dynamic types of the records are richer than the static ones (as will happen with reduction).
It's very much the same problem as with SML's open construct on modules. I guess we could give up on subject reduction (we probably already have), but it would be nice to void breaking it even further.
I guess it would be enough to require record type annotations on the two expressions:
`{ exp1 : R1 and exp2 : R2 }` but then we lost the concision...
username_0: @username_1 the problem doesn't occur when the `exp1` and `exp2` are label-disjoint, right?
username_1: Failure of subject reduction (if you don't disallow mutable fields):
```
(fun (x: {}) { {l = 1 in x} }) (x:{var f : Nat});
{l = 1 in {var f : Nat}};
``` |
osirrc/terrier-docker | 458220552 | Title: Problems with run (exception, missing values, duplicate values)
Question:
username_0: I've noticed a couple of things when doing the official runs:
I'm seeing these errors in some of the runs:
```
Error in org.terrier.matching.dsms.DFRDependenceScoreModifier java.lang.ClassCastException: org.terrier.structures.postings.bit.BasicIterablePosting cannot be cast to org.terrier.structures.postings.BlockPosting
java.lang.ClassCastException: org.terrier.structures.postings.bit.BasicIterablePosting cannot be cast to org.terrier.structures.postings.BlockPosting
at org.terrier.matching.dsms.DependenceScoreModifier.scoreFDSD(DependenceScoreModifier.java:441)
at org.terrier.matching.dsms.DependenceScoreModifier.calculateDependence(DependenceScoreModifier.java:357)
at org.terrier.matching.dsms.DependenceScoreModifier.doDependency(DependenceScoreModifier.java:310)
at org.terrier.matching.dsms.DependenceScoreModifier.modifyScores(DependenceScoreModifier.java:191)
at org.terrier.matching.dsms.DFRDependenceScoreModifier.modifyScores(DFRDependenceScoreModifier.java:87)
at org.terrier.matching.BaseMatching.finalise(BaseMatching.java:299)
at org.terrier.matching.daat.Full.match(Full.java:182)
at org.terrier.querying.LocalManager$ApplyLocalMatching.process(LocalManager.java:459)
at org.terrier.querying.LocalManager.runSearchRequest(LocalManager.java:845)
at org.terrier.applications.batchquerying.TRECQuerying.processQuery(TRECQuerying.java:720)
at org.terrier.applications.batchquerying.TRECQuerying.processQueryAndWrite(TRECQuerying.java:631)
at org.terrier.applications.batchquerying.TRECQuerying.processQueries(TRECQuerying.java:830)
at org.terrier.applications.batchquerying.TRECQuerying.processQueries(TRECQuerying.java:743)
at org.terrier.applications.batchquerying.TRECQuerying$Command.run(TRECQuerying.java:257)
at org.terrier.applications.AbstractQuerying$AbstractQueryingCommand.run(AbstractQuerying.java:160)
at org.terrier.applications.CLITool$CLIParsedCLITool.run(CLITool.java:155)
at org.terrier.applications.CLITool.main(CLITool.java:316)
```
The run files and MAP numbers don't seem to perfectly align with the table in the README. For example, pl2 is missing the "+prox" and "+prox +qe" from the README table.
There also appear to be some duplicates. For example, "./run.robust04.pl2.txtmap all 0.2241" and "./run.robust04.pl2_prox.txtmap all 0.2241" as well as "./run.robust04.bm25.txtmap all 0.2363" and "./run.robust04.bm25_prox.txtmap all 0.2363" are the same in the run files, but are different in the README table.
`robust04` results:
```
ryan@thinkpad ~/sync/git/jig/azure/output/terrier [azure-script]× $ find . -type f -name "*robust04*" -printf "%p" -exec ~/sync/git/jig/trec_eval/trec_eval -m map ~/sync/git/jig/qrels/qrels.robust04.txt {} \; [16:16:52]
./run.robust04.pl2.txtmap all 0.2241
./run.robust04.dph_prox_qe.txtmap all 0.2821
./run.robust04.pl2_qe.txtmap all 0.2538
./run.robust04.dph_prox.txtmap all 0.2479
./run.robust04.dph_qe.txtmap all 0.2821
./run.robust04.dph.txtmap all 0.2479
./run.robust04.bm25.txtmap all 0.2363
./run.robust04.bm25_qe.txtmap all 0.2762
./run.robust04.pl2_prox_qe.txtmap all 0.2538
./run.robust04.pl2_prox.txtmap all 0.2241
./run.robust04.bm25_prox_qe.txtmap all 0.2762
./run.robust04.bm25_prox.txtmap all 0.2363
```
`core18` results:
```
ryan@thinkpad ~/sync/git/jig/azure/output/terrier [azure-script]× $ find . -type f -name "*core18*" -printf "%p" -exec ~/sync/git/jig/trec_eval/trec_eval -m map ~/sync/git/jig/qrels/qrels.core18.txt {} \; 1 [16:16:45]
./run.core18.dph_prox_qe.txtmap all 0.3055
./run.core18.bm25.txtmap all 0.2326
./run.core18.bm25_prox.txtmap all 0.2326
./run.core18.bm25_prox_qe.txtmap all 0.2975
./run.core18.bm25_qe.txtmap all 0.2975
./run.core18.dph.txtmap all 0.2427
./run.core18.dph_prox.txtmap all 0.2427
./run.core18.dph_qe.txtmap all 0.3055
./run.core18.pl2.txtmap all 0.2225
./run.core18.pl2_prox.txtmap all 0.2225
./run.core18.pl2_prox_qe.txtmap all 0.2787
./run.core18.pl2_qe.txtmap all 0.2787
```
Full logs + run files: https://drive.google.com/file/d/1qze83_WPEvOHdsHl2dMbOxfcOcyFZ7xz/view?usp=drivesdk
Lots of info here, so in summary:
- exceptions in the logs
- missing values from readme table (pl +prox and pl +prox + qe)
- duplicate values for [method] and [method]_prox (i.e., bm25 and bm25_prox are both 0.2326 for core18)
Are you guys able to take a look? Could you do a run through with the commands you've provided and ensure all the results align? I can provide the copy of core18 and robust04 we're working with via Slack...
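To make the duplicate pattern easier to see, here is a small sketch that buckets the core18 runs above by MAP score (scores copied from the listing; every base run ties with its `_prox` variant, and every `_qe` run ties with `_prox_qe`):

```python
from collections import defaultdict

# MAP scores copied from the core18 trec_eval output above.
core18_map = {
    "bm25": 0.2326, "bm25_prox": 0.2326, "bm25_prox_qe": 0.2975, "bm25_qe": 0.2975,
    "dph": 0.2427, "dph_prox": 0.2427, "dph_prox_qe": 0.3055, "dph_qe": 0.3055,
    "pl2": 0.2225, "pl2_prox": 0.2225, "pl2_prox_qe": 0.2787, "pl2_qe": 0.2787,
}

def duplicate_scores(scores):
    # Group run names by identical score values.
    by_value = defaultdict(list)
    for run, value in sorted(scores.items()):
        by_value[value].append(run)
    return {v: runs for v, runs in by_value.items() if len(runs) > 1}
```

Every score here appears exactly twice, which is consistent with the proximity configs silently falling back to non-proximity scoring when no block index exists.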
Answers:
username_1: Ryan, the scripts we used are in https://github.com/osirrc/terrier-docker/tree/master/bin
In particular, you need a block index in order to do proximity runs.
username_0: Thanks for pointing me to that!
There's still a disconnect between what's in the table in the README. For example, `config=ql` and `config=ql_qe` are in the [robust04 script](https://github.com/osirrc/terrier-docker/blob/master/bin/robust04.sh) but not in the README. Likewise, `config=pl2_prox` and `config=pl2_prox_qe` are present as examples in the README but are missing entries in the README table with MAP and the robust04 script.
Which conditions should I include in the official run? Can you sync up the robust04 script with the README and I can use that script as the source of truth to build my run script?
Any idea about the exception?
username_1: IMHO, I think I'm happy with what's in the README, so we can comment out the QL runs. @username_2 let us know your thoughts?
The exception is precisely due to not having a block index. I agree the exception message isn't informative, I'll address that in the next Terrier release.
Craig
username_2: I think we are good with BM25, PL2 and DPH + prox, qe.
username_0: Alright, this is what I'll run for terrier as it's what I can validate from the README: https://github.com/osirrc/jig/blob/azure-script/azure/osirrc2019/robust04.json#L100
username_0: All of the numbers except `./run.core18.pl2_qe.txtmap all 0.2787` look good, maybe a typo in the README?
username_1: I concur, 0.2787, so have updated README. Can you verify the commit version of the README @username_0 ?
Status: Issue closed
|
babel/babel-sublime | 335093776 | Title: Multiline arguments in arrow functions
Question:
username_0: When using arrow functions both the function name and the arguments lose syntax highlighting when the arguments are on multiple lines:

Answers:
username_1: This is a limitation of Sublime's parsing engine. See https://github.com/SublimeTextIssues/Core/issues/2241.
Status: Issue closed
username_1: Closing as a duplicate of #340. |
aws/aws-cdk | 602604283 | Title: Invalid Error Message in ICluster
Question:
username_0: <!--
description of the bug:
-->
The error message [here](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-ecs.ICluster.html) does not match the actual API of `Cluster`: https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-ecs.ICluster.html
There is no `addXXXCapacity()` method.
### Reproduction Steps
In my case, this presented itself in using the `ApplicationLoadBalancedEc2Service`, which _should_ work out of the box, but as with earlier attempts, it does not.
```
const service = new ecsPatterns.ApplicationLoadBalancedEc2Service(
this,
"ApplicationService",
{
cpu: 512,
memoryLimitMiB: 1024,
desiredCount: 1,
healthCheckGracePeriod: cdk.Duration.seconds(30),
publicLoadBalancer: true,
taskImageOptions: {
enableLogging: true,
image: ecs.AssetImage.fromAsset("../api"),
containerPort: 4000,
},
vpc,
}
);
```
Seems reasonable that capacity must be created and assigned, but there are no callouts for that in the [docs](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-ecs-patterns.ApplicationLoadBalancedEc2Service.html), where the word `capacity` is nowhere on the page.
### Error Log
As shown in linked code
### Environment
- **CLI Version :**
- **Framework Version:**
- **OS :**
- **Language :**
### Other
---
This is :bug: Bug Report
Answers:
username_1: Hi @username_0,
The error message link in the description doesn't seem to lead to the correct location. Would you mind sharing the error again?
However, I think the general flow for an `ApplicationLoadBalancedEc2Service` would be like this:
```typescript
const vpc = new ec2.Vpc(this, 'MyVpc', { maxAzs: 2 });
const cluster = new ecs.Cluster(this, 'Cluster', { vpc });
cluster.addCapacity("HostFleet", {
instanceType: ec2.InstanceType.of(
ec2.InstanceClass.T3,
ec2.InstanceSize.SMALL
),
machineImage: new ec2.AmazonLinuxImage(),
});
const service = new ecsPatterns.ApplicationLoadBalancedEc2Service(
this,
"ApplicationService",
{
cpu: 512,
memoryLimitMiB: 1024,
desiredCount: 1,
healthCheckGracePeriod: cdk.Duration.seconds(30),
publicLoadBalancer: true,
vpc,
taskImageOptions: {
enableLogging: true,
image: ecs.AssetImage.fromAsset("./api"),
containerPort: 4000,
},
cluster,
}
);
```
So the cluster needs to be declared first. I can look into updating the docs saying that a cluster with enough capacity must be associated with the service first. |
RasaHQ/rasa | 803693851 | Title: Rasa validator shows empty error message when text: null for a response
Question:
username_0: ```
Rasa Version : 2.2.2
Rasa SDK Version : 2.2.0
Rasa X Version : None
Python Version : 3.8.6
Operating System : macOS-10.16-x86_64-i386-64bit
```
**Issue**:
Originally reported in [our forum](https://forum.rasa.com/t/yamlvalidationexception-failed-to-validate-c-users-domain-yml-please-make-sure-the-file-is-correct-and-all-mandatory-parameters-are-specified-here-are-the-errors-found-during-validation/39970/14), training/validating threw an error but didn't show anything in the file to fix. After looking at the file, one of the responses had `text: null`. Changing this to a random string fixed the problem. We should look into why this error wasn't showing up in the traceback
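A minimal sketch of the kind of check that would have caught this (illustrative Python over a parsed domain dict, not Rasa's actual validator):

```python
def responses_with_null_text(domain):
    # Flag response variations where "text" is present but null
    # (YAML `text:` / `text: null`), the case behind the empty error above.
    bad = []
    for name, variations in domain.get("responses", {}).items():
        for variation in variations:
            if "text" in variation and variation["text"] is None:
                bad.append(name)
    return bad

domain = {
    "responses": {
        "utter_greet": [{"text": "Hello!"}],
        "utter_broken": [{"text": None}],
    }
}
```

The `domain` dict here is a made-up example; a real check would run over the parsed `domain.yml` and report the offending response name in the traceback.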
**Error (including full traceback)**:
```
(e2e) chris@ChristophersMBP e2e % rasa data validate -d original-domain.yml
The configuration for pipeline was chosen automatically. It was written into the config file at 'config.yml'.
YamlValidationException: Failed to validate '/Users/chris/rasa/e2e/original-domain.yml'. Please make sure the file is correct and all mandatory parameters are specified. Here are the errors found during validation
```
**Command or request that led to error**:
```
rasa train
```
```
rasa validate
```
**Content of domain file (domain.yml)** (if relevant):
[File is here](https://forum.rasa.com/t/yamlvalidationexception-failed-to-validate-c-users-domain-yml-please-make-sure-the-file-is-correct-and-all-mandatory-parameters-are-specified-here-are-the-errors-found-during-validation/39970/15?u=username_0)
Answers:
username_0: there have been a few more cases in this forum thread. The latest one was due to still including the `MappingPolicy` format, and to a response formatted with an extra tick:
```
utter_cheer_up:
- image: https://i.imgur.com/nGF1K8f.jpg
- text: Here is something to cheer you up
```
[Here's the domain file](https://forum.rasa.com/t/yamlvalidationexception-failed-to-validate-c-users-domain-yml-please-make-sure-the-file-is-correct-and-all-mandatory-parameters-are-specified-here-are-the-errors-found-during-validation/39970/24?u=username_0)
username_1: PR merged to `2.6.x`
Status: Issue closed
|
nulpoet/mjkey | 752698072 | Title: Where to buy taxi receipts in Baoding - Where to buy taxi receipts in Baoding
Question:
username_0: Where to buy taxi receipts in Baoding [WeChat: ff181一加一⒍⒍⒍][QQ: 249⒏一加一357⒌⒋0] With little schooling plus her looks, many jobs were out of reach. In the end she found work as a customer-service rep at a telecom company, dealing with people by voice only, never face to face. An optimistic nature since childhood, plus hard work, made my cousin a popular figure at her workplace.
Some warm-hearted aunts always like to play matchmaker for
https://github.com/nulpoet/mjkey/issues/550
https://github.com/nulpoet/mjkey/issues/551
https://github.com/nulpoet/mjkey/issues/552 |
epicodus-lessons/volunt33r_track3r | 626131372 | Title: Inconsistent CRUD Requirements.
Question:
username_0: For the Ruby Rails week three project "[Database Basics](https://epicenter.epicodus.com/courses/514/code_reviews/2199)", it says the `volunteer` class doesn't need full CRUD functionality (only create and read are needed), but the tests students are required to use include a test for updating `volunteer`.
We either need to update the project requirements in the future so students have the update functionality or we can remove lines 61-63 [here](https://github.com/epicodus-lessons/volunt33r_track3r/blob/master/spec/volunteer_integration_spec.rb) and change line 64 to `expect(page).to have_content('Jasmine')`.<issue_closed>
Status: Issue closed |
kach/nearley | 541877102 | Title: Unexpected Token, "I was expecting to see..." is blank
Question:
username_0: Firstly, thanks for Nearley! This will save me a ton of work if I can get it going. Since I'm new to Nearley, it's probably something simple that I'm missing.
Repo with sources, lexer output, tests: https://github.com/username_0/nearley-sandbox
I've got a Moo lexer and it seems to be working properly. The output is what I expect to see. However, when passing the tokens to Nearley, I get the following error:
```
PS C:\usr\GitHub\nearley-sandbox> node test_parser.js
C:\usr\GitHub\nearley-sandbox\node_modules\nearley\lib\nearley.js:317
throw err;
^
Error: Syntax error at line 5 col 1:
SUBROUTINE
^
Unexpected identifier token: "SUBROUTINE". Instead, I was expecting to see one of the following:
at Parser.feed (C:\usr\GitHub\nearley-sandbox\node_modules\nearley\lib\nearley.js:314:27)
at Object.<anonymous> (C:\usr\GitHub\nearley-sandbox\test_parser.js:7:8)
at Module._compile (module.js:652:30)
at Object.Module._extensions..js (module.js:663:10)
at Module.load (module.js:565:32)
at tryModuleLoad (module.js:505:12)
at Function.Module._load (module.js:497:3)
at Function.Module.runMain (module.js:693:10)
at startup (bootstrap_node.js:191:16)
at bootstrap_node.js:612:3
```
My intuition says that the error has something to do with whitespace processing, but I really can't see what's wrong. Comments are processing just fine. Here's my Nearley grammar:
```
@{%
const lexer = require('lexer');
%}
@lexer lexer
# Base token types
comment -> _ %commentStart %commentText:* %commentEnd _ {% id %}
number -> %number {% id %}
string -> %string {% id %}
oparithmetic -> %arithmeticop {% id %}
oplogical -> %logicalop {% id %}
lparen -> %lparen {% id %}
rparen -> %rparen {% id %}
colon -> %colon {% id %}
identifier -> %identifier {% id %}
keyword -> %keyword {% id %}
equals -> %equals {% id %}
_ -> null | %whitespace {% function(d) { return null; } %}
```
Any hints will be much appreciated! |
hyb1996-guest/AutoJsIssueReport | 285413134 | Title: java.lang.NullPointerException: Attempt to invoke virtual method 'void com.stardust.scriptdroid.ui.main.script_list.MyScriptListFragment.importFile(java.lang.String)' on a null object reference
Question:
username_0: Description:
---
java.lang.NullPointerException: Attempt to invoke virtual method 'void com.stardust.scriptdroid.ui.main.script_list.MyScriptListFragment.importFile(java.lang.String)' on a null object reference
at com.stardust.scriptdroid.ui.main.MainActivity$5.onFileSelection(MainActivity.java:218)
at com.stardust.scriptdroid.ui.main.script_list.ScriptFileChooserDialogBuilder$1.onClick(ScriptFileChooserDialogBuilder.java:56)
at com.stardust.scriptdroid.ui.main.script_list.ScriptAndFolderListRecyclerView$1.onClick(ScriptAndFolderListRecyclerView.java:92)
at android.view.View.performClick(View.java:5646)
at android.view.View$PerformClick.run(View.java:22459)
at android.os.Handler.handleCallback(Handler.java:761)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:156)
at android.app.ActivityThread.main(ActivityThread.java:6523)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:942)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:832)
Device info:
---
<table>
<tr><td>App version</td><td>2.0.10b Beta</td></tr>
<tr><td>App version code</td><td>127</td></tr>
<tr><td>Android build version</td><td>C00B205</td></tr>
<tr><td>Android release version</td><td>7.0</td></tr>
<tr><td>Android SDK version</td><td>24</td></tr>
<tr><td>Android build ID</td><td>PRA-AL00C00B205</td></tr>
<tr><td>Device brand</td><td>HONOR</td></tr>
<tr><td>Device manufacturer</td><td>HUAWEI</td></tr>
<tr><td>Device name</td><td>HWPRA-H</td></tr>
<tr><td>Device model</td><td>PRA-AL00</td></tr>
<tr><td>Device product name</td><td>PRA-AL00</td></tr>
<tr><td>Device hardware name</td><td>hi6250</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
</table> |
cosmos/gaia | 519587437 | Title: Can't propagate MsgSend through /tx LCD endpoint
Question:
username_0: I am writing a custom wrapper around the LCD and am trying to propagate a MsgSend.
First, I sign:
`{"account_number":120836,"chain_id":"gaia-13006","fee":{"amount":[{"amount":"50000","denom":"muon"}],"gas":"200000"},"memo":"Hello World","msgs":[{"type":"cosmos-sdk/MsgSend","value":{"amount":[{"amount":"1","denom":"muon"}],"from_address":"cosmos1amfwp3cv3u3tg2h33tafpsgn4ullup5thd9cfl","to_address":"cosmos1amfwp3cv3u3tg2h33tafpsgn4ullup5thd9cfl"}}],"sequence":1} `
Then POST:
```
{"mode":"block","tx":{"msg":[{"type":"cosmos-sdk/MsgSend","value":{"amount":[{"amount":"1","denom":"muon"}],"from_address":"cosmos1amfwp3cv3u3tg2h33tafpsgn4ullup5thd9cfl","to_address":"cosmos1amfwp3cv3u3tg2h33tafpsgn4ullup5thd9cfl"}}],"fee":{"amount":[{"amount":"50000","denom":"muon"}],"gas":"200000"},"signatures":[{"pub_key":{"type":"tendermint/PubKeySecp256k1","value":"<KEY>"},"signature":"v9+cn9GEIc8FX11MKlWAYOvms/BrP3dtUbZEFXqm4TEelRgL1V/0GCcdwGM8qcY7GVFbLQx4jLTdtzxHaM4nxQ=="}],"memo":"Hello World"}}
```
Unfortunately as the response I am getting following error:
`{"codespace":"sdk","code":4,"message":"signature verification failed; verify correct account sequence and chain-id"}`
The sequence is correct; I copied it straight from the account info. What am I doing wrong here? Any clues?
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
Answers:
username_0: OK, I figured it out: the documentation here is incorrect and misleading:
https://cosmos.network/docs/spec/auth/03_types.html#stdtx
In StdSignDoc, AccountNumber and Sequence are strings and must be signed as such.
Status: Issue closed
|
f4exb/sdrangel-docker | 686408901 | Title: Option to use UDP multicast to/from containers
Question:
username_0: As reported by many sources, multicast is not supported in Docker networks, so the only possible solution for dealing with multicast is to use the host network.
With multicast support in the Remote Input plugin of SDRangel, it becomes desirable to be able to send I/Q data with the Remote Sink to a multicast group. This makes it possible to have one container connected to the receiver serving I/Q to multiple SDRangel (GUI or server) clients. Pretty cool! Alas, it will not work presently since the containers use a Docker network.
However nice this is, it should be made an option, since using the host network may not be desirable in all use cases.
Status: Issue closed
Answers:
username_0: Implemented and effective with v5.9.3 and v4.15.3 SDRangel images. |
toni-moreno/influxdb-srelay | 493366504 | Title: [Bug] queryRouterEndpointAPI not working, does not seem to be populated
Question:
username_0: Hi,
While trying to redirect /query to active endpoints, I see that it is always redirected to the first specified backend.
On this line [https://github.com/username_1/influxdb-srelay/blob/0dab7079b642d9ad11e963c218480c2ee425d936/cluster/influxcluster.go#L322](https://github.com/username_1/influxdb-srelay/blob/0dab7079b642d9ad11e963c218480c2ee425d936/cluster/influxcluster.go#L322),
I see that the variable `queryRouterEndpointAPI` is an empty list, and does not seem to be populated anywhere.
Could you please go through the code and explain how this variable works?
Answers:
username_1: Hi @username_0, in order to redirect to an active influxdb you need to configure a list of HTTP endpoints like this one:
https://github.com/username_1/influxdb-srelay/blob/0dab7079b642d9ad11e963c218480c2ee425d936/examples/rwha-sample.influxdb-srelay.conf#L53
Srelay expects, for each endpoint, a list of active influxdb servers with their IDs in influxdb-srelay.conf:
````bash
curl http://my_check_active_server:4090/api/queryactive
[
"myinfluxdb01",
"myinfluxdb02"
]
````
If more than one ID is returned, srelay will choose the first one.
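For illustration, the selection behaviour described above ("choose the first one") can be sketched in Python using the sample response from the curl call (this is not srelay's actual Go implementation):

```python
import json

def pick_active_backend(response_body):
    # The queryactive endpoint returns a JSON array of influxdb IDs that
    # hold complete data; when more than one is active, the first is used.
    active = json.loads(response_body)
    return active[0] if active else None

# Sample body from `curl http://my_check_active_server:4090/api/queryactive`
body = '["myinfluxdb01", "myinfluxdb02"]'
```

An empty array means no backend is currently safe to query.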
This URL could be served by any HTTP server you want, or, if needed, you can use the syncflux tool (https://github.com/username_1/syncflux), which has an embedded active-monitoring thread for an HA cluster. If you choose syncflux, remember to use the same IDs in both config files.
You could build the HA setup as in the next picture:

username_0: I did configure it according to the diagram shown as shown above (as shown in issue #9 as well),
and I am able to curl my active syncflux instance on `/api/queryactive` to get the list of active backends, which gives me a list of 1 node.
However, as I mentioned, my `/query` API on srelay does not get redirected to that active node; instead it is directed to the first specified backend.
To verify, I also tried looking up usages of `c.queryRouterEndpointAPI`, but I could not find anywhere where that array is being populated.
username_1: Did you name the influxdb instance IDs exactly the same in both tools? Could you upload both config files?
username_1: Hi @username_0, did you fix your issue?
username_0: Hi @username_1 ,
I have not been able to fix my issue yet.
**influxdb-srelay1.conf**
```toml
###############################
##
## InfluxDB Single instances Config
##
###############################
# InfluxDB Backend InfluxDB01
[[influxdb]]
name = "influxdb01"
location = "http://172.16.3.18:8086/"
timeout = "10s"
# InfluxDB Backend InfluxDB02
[[influxdb]]
name = "influxdb02"
location = "http://172.16.3.72:8086/"
timeout = "10s"
#################################
##
## InfluxDB Cluster Configs as a set
## of influxdb Single Instances
##
#################################
# Cluster for linux Metrics
[[influxcluster]]
# name = cluster id for route configs and logs
name = "ha_cluster"
# members = array of influxdb backends
members = ["influxdb01","influxdb02"]
# where to write logs for all operations in the cluster
log-file = "ha_cluster.log"
# log level could be
# "panic","fatal","Error","warn","info","debug"
log-level = "info"
# mode = of query and send data
# * HA :
# input write data will be sent to all members
# queries will be sent on the active node with
# data if query-router-endpoint-api config exist, else the first
# * Single:
# input write data will be sent on the first member ( should be only one)
# query will be sent on the only first member (should be only one)
# * LB: // NOT IMPLEMENTED YET //
type = "HA"
# query-router-endpoint-api:
# List of API url which give us the name of the influxdb backend available with all available data (when recovery process)
# use any available sync tool as in https://github.com/username_1/syncflux if needed
#
query-router-endpoint-api = ["http://172.16.3.18:4090/api/queryactive","http://172.16.3.72:4090/api/queryactive"]
[Truncated]
match=".*"
## Send to PMEREASP15 cluster
[[http.endpoint.route.rule]]
name="route_all"
action="route"
key="db"
match=".*"
to_cluster="ha_cluster"
```
These are my config files; as can be seen, the names of the influxdb instances are the same in both files.
I would once again ask you to please explain to me how the variable `c.queryRouterEndpointAPI` in
https://github.com/username_1/influxdb-srelay/blob/0dab7079b642d9ad11e963c218480c2ee425d936/cluster/influxcluster.go#L322
gets populated, as I am unable to find where this array gets filled.
Thanks.
username_2: Hi @username_0, we have been reviewing this and we think you are right: the array is not being filled from the config.
Would you make a PR fixing it please? We are quite busy right now
Thanks,
Regards
Status: Issue closed
|
dask/distributed | 189372957 | Title: Failures on HDFS
Question:
username_0: I'm noticing intermittent HDFS failures in dask/distributed now. It looks like the bytes refactor introduced some issues. Our solution to introduce locks around the hdfs3 library appears to be insufficient (or else we aren't locking everything perfectly.)
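The "locks around the hdfs3 library" approach mentioned above amounts to serializing every call through a single lock. A minimal generic sketch of that pattern (not the actual dask/hdfs3 code; the class names here are made up) shows both the idea and why it can be insufficient: any path that reaches the wrapped object directly bypasses the lock.

```python
import threading

class Locked:
    """Proxy that serializes every method call on `obj` through one lock."""
    def __init__(self, obj):
        self._obj = obj
        self._lock = threading.Lock()

    def __getattr__(self, name):
        attr = getattr(self._obj, name)
        if not callable(attr):
            return attr
        def locked_call(*args, **kwargs):
            # Only calls made through the proxy hold the lock.
            with self._lock:
                return attr(*args, **kwargs)
        return locked_call

class _FakeFS:
    """Stand-in for an hdfs3-like client."""
    def read_block(self, n):
        return b"x" * n

fs = Locked(_FakeFS())
```

If the library hands out secondary objects (file handles, for example) that are used without the proxy, those calls race with the locked ones.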
Answers:
username_0: This appears to be resolved
Status: Issue closed
|
fvonhoven/fvonhoven | 764212819 | Title: Readme Pics
Question:
username_0: 
 |
pyriell/gs2-bugfixes | 276715992 | Title: Incorrect text entry on Richmond's investigation for Stallion
Question:
username_0: Hi,
I noticed this incorrect entry in Richmond's investigation (see screenshot). I'm not sure if it's caused by the patch, but if I remember correctly it wasn't like this before.

Answers:
username_1: Huh, I wasn't aware that the rune name appeared in any text that wasn't a copy of the rune list itself. I'll probably have to sub in a whitespace character in those instances. Otherwise it's more of a task for a translation/retranslation effort to ensure every occurrence is perfect. In this case, the real solution would be to change the name and shift everything after that back one byte, but it looks like doing that might also lead to issues with fitting the text in the window properly.
username_1: I searched all the places where the rune name appears, and the only one that isn't a copy of the rune list is this one. The name is immediately followed by a newline character in the string, so formatting/space in the text box isn't an issue. The process has been changed to shift the remaining text up, so this should be fixed by reapplying the patch.
I verified the change is working in the data files, but I haven't had an opportunity to play-test, so I'm closing this a bit provisionally for now. I can't see any reason the fix shouldn't work just fine, but it'll take quite a few hours for me to confirm in-game.
Status: Issue closed
username_0: That's great, thanks for doing this. I still have the save file or save states in case you need it, I use mednafen.
username_1: Thanks. I found the saves from my old QA archives, though. It looks good to me.
 |
openjfx/javafx-gradle-plugin | 750986589 | Title: Support for gradle 6.7
Question:
Answers:
username_1: Is it also necessary for toolchain support?
username_0: I think so... All my projects built with javafx gradle plugin won't run. So, they are likely toys...
username_0: There is a PR, #79; anyway, things here seem abandoned. I thought it would be available in 0.0.10-SNAPSHOT.
username_0: Guess the only way out is to download and build it myself.
Status: Issue closed
|
ropensci/ckanr | 90590742 | Title: tag_show tests fail after CKAN API change
Question:
username_0: The CKAN's `tag_show` as of early June 2015 [defaults](http://docs.ckan.org/en/latest/api/#ckan.logic.action.get.tag_show) to omit datasets from the result. `ckanr`'s tests assume `include_datasets=TRUE`.
Suggestion: include explicit parameter `include_datasets=FALSE` in `tag_show.R` and adjust tests to use `include_datasets=TRUE`.
Status: Issue closed
Answers:
username_1: closed via #42 |
filestack/filestack-js | 550490363 | Title: Can't resolve `fs` in 3.11.0 `filestack-js\build\module\lib\api\upload`
Question:
username_0: After updating to the latest version, our webpack build is failing with:
```
Module not found: Error: Can't resolve 'fs' in '...\node_modules\filestack-js\build\module\lib\api\upload'
```
Downgrading to 3.10.1 resolves the issue.
Answers:
username_1: Same error and fix here
username_2: have the same error today
username_3: Hi
Sorry, we are working to resolve this issue. For now, please use a previous version of filestack-js.
username_4: It looks like downgrading to 3.10.1 is not enough.
https://github.com/filestack/filestack-js/compare/v3.10.0...master
For me downgrade to `"3.10.0"` resolved the issue.
Status: Issue closed
username_0: Just to confirm, upgrading to the newest release, `3.11.1`, still seems to have the issue.
username_5: I have run into the same issue when building the webpack too. Can you guys share the version number which has no "fs" error? Thanks.
username_0: `3.10.1` is the latest version that works for us, though some have reported having to downgrade to `3.10.0`.
username_6: I have also encountered this issue, rolling back to 3.10.1 worked for me.
username_3: Hi, sorry, there was a problem with local package linking. The new release 3.11.2 should fix the problem. Tested with webpack. @username_0 can you confirm?
username_0: I can confirm 3.11.2 is working fine for us now. Thanks @username_3!
Status: Issue closed
|
rossfuhrman/_why_the_lucky_markov | 665823911 | Title: [You hit with 28 points of damage! Paij-ree carved a flute from the phone company dropped in to winning Risky Rosco’s Original Homestyle Country Medallion.
Question:
username_0: Toot: [You hit with 28 points of damage! Paij-ree carved a flute from the phone company dropped in to winning Risky Rosco’s Original Homestyle Country Medallion.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
t9md/atom-vim-mode-plus | 259348719 | Title: Is there a way to delete without register?
Question:
username_0: # Check list
- Atom version info
```
Atom : 1.20.1
Electron: 1.6.9
Chrome : 56.0.2924.87
Node : 7.4.0
```
- vim-mode-plus version: 1.7.0
- Your OS: Windows 10
- You disabled vim-mode? Yes
Typically when I use `x`, `d`, or `c`, I will pipe my result into the black hole register (or any non-default register):
```
nnoremap d "_d
nnoremap dd "_dd
vnoremap d "_d
vnoremap p "_dP
nnoremap c "_c
vnoremap c "_c
```
is there a way to do that in `vim-mode-plus`?
Sorry, I am not familiar with Atom or CoffeeScript. I tried to figure it out myself, but failed...
Answers:
username_1: similar (solved): https://github.com/t9md/atom-vim-mode-plus/issues/473
Status: Issue closed
|
ddradar/ddradar | 740654160 | Title: [New Songs] GOLDEN LEAGUE PLUS #4
Question:
username_0: ## Going Hypersonic
- **Song Id:** [SONG_ID]
- **Artist / アーティスト:** Mameyudoufu
- **Furigana / 読み仮名:** [FURIGANA]
- **Series Folder / シリーズ:** DanceDanceRevolution A20 PLUS
- **BPM:** 156
### Charts
#### Single
|Difficulty|Lv|Notes|FA|SA|Str|Vol|Air|Fre|Cha|
|:---------|--:|--:|--:|--:|--:|--:|--:|--:|--:|
|BEGINNER|4|117|8|0|???|???|???|???|???|
|BASIC|7|201|9|0|???|???|???|???|???|
|DIFFICULT|12|348|15|0|???|???|???|???|???|
|EXPERT|15|472|14|0|???|???|???|???|???|
#### Double
|Difficulty|Lv|Notes|FA|SA|Str|Vol|Air|Fre|Cha|
|:---------|--:|--:|--:|--:|--:|--:|--:|--:|--:|
|BASIC|7|???|??|0|???|???|???|???|???|
|DIFFICULT|12|355|8|0|???|???|???|???|???|
|EXPERT|15|470|12|0|???|???|???|???|???|
-----
### Reference Data/参考データ
- https://p.eagate.573.jp/game/ddr/ddra20/p/info/index.html#info68a<issue_closed>
Status: Issue closed |
solana-labs/solana | 673709601 | Title: set-solana-release-tag.sh does not stick to beta branch
Question:
username_0: #### Problem
For non-release docs publishing to [beta|edge].docs.solana.com, we are substituting in the latest-in-time release version. If we make a release on the stable branch last, this value gets subbed in, when it should only be the latest release on beta branch, that is, the release with the greatest numerical version number.
https://github.com/solana-labs/solana/blob/master/docs/set-solana-release-tag.sh#L10
#### Proposed Solution
- Set release tag based on numerical precedence, rather than newest
- Make sure we don't accidentally set a tag for a pre-release build.<issue_closed>
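A sketch of the intended selection logic (illustrative only; the function name and tag list are hypothetical, and real Solana tags may carry other suffixes): pick the tag with the greatest numeric version, skipping pre-releases, instead of the most recently created one.

```python
import re

def latest_release_tag(tags):
    """Pick the numerically greatest vX.Y.Z tag, ignoring pre-releases
    (anything with a suffix after the patch number)."""
    stable = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")
    versions = []
    for tag in tags:
        m = stable.match(tag)
        if m:
            versions.append((tuple(int(x) for x in m.groups()), tag))
    return max(versions)[1] if versions else None
```

Comparing parsed integer tuples avoids both the "newest in time" problem and lexicographic traps like `v1.9.0` sorting above `v1.10.0`.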
Status: Issue closed |
tretapey/raisincss | 555215024 | Title: Use normalize css instead of a self built version
Question:
username_0: https://necolas.github.io/normalize.css/
Answers:
username_1: @username_0 once installed.. should we remove _setup.scss?
username_0: I feel we should advise devs to add it themselves, either by adding the CDN or with `yarn add`
username_0: It seems I'm a little behind the times on this one; NormalizeCSS has not changed, and most CSS frameworks use their own version.
Personally, I'm in favor of renaming the file `_setup.scss` to something more like `normalize` or `base`.
Status: Issue closed
username_0: To implement this, a major version bump may be required, as some websites may break.
username_1: @username_0 I agree that _setup.scss should be renamed to _base.scss; let's keep this on hold for the moment.
Status: Issue closed
|
mozilla/sphinx-js | 629954753 | Title: Enable the '.' character in names paths when parsing them?
Question:
username_0: Hi,
as a first-time user of sphinx-js, I ran across an error when trying to use it with TypeScript code.
Typescript version: 3.9.3
Typedoc version: 0.17.7
The error is like this:
`parsimonious.exceptions.ParseError: Rule 'path' didn't match at '../../docs/abstractS'`
The error is triggered by this path input value, which contains a '.' character in the name:
`../../docs/abstractSigner.module:abstractSigner`
Allowing the '.' character in names in the parser's grammar seems to solve the issue:
`name = ~r"(?:[^(/#~\\]|\\.)+"`
instead of
`name = ~r"(?:[^(/#~.\\]|\\.)+"`
Could you change the rule for names like this without raising new problems?
TIA
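The effect of the proposed one-character change can be checked directly with Python's `re` module (parsimonious terminals use Python regex syntax; the segment below is taken from the failing path above):

```python
import re

# The grammar's 'name' terminal before and after the proposed change.
old_name = re.compile(r"(?:[^(/#~.\\]|\\.)+")  # '.' excluded from names
new_name = re.compile(r"(?:[^(/#~\\]|\\.)+")   # '.' allowed in names

# Name segment from ../../docs/abstractSigner.module:abstractSigner
segment = "abstractSigner.module"
```

With the original rule the match stops at the dot, so the whole segment never parses; with the relaxed rule it does.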
Answers:
username_1: I ran into the same issue, and this change seems to fix it.
username_2: Should be fixed in 63ec39950798140490815c348e5d25bfc4f265de.
Status: Issue closed
username_0: Great, Thank you very much :-) |