repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
PeterJCLaw/test | 1115497028 | Title: Get envelopes & card for the prize ceremony
Question:
username_0: We like to put the results on pieces of card, inside envelopes, to add a sense of drama to the prize ceremony. We need to source that card and envelopes and then print the SR logo onto them.
These will need to be kept safe until the knockouts, during which they will have the awards written onto them.
### Original
[comp/prizes/envelopes](https://github.com/srobo/recurring-tasks/blob/master/comp/prizes/envelopes.yaml) |
CS839/is-differential-privacy-fairly-robust | 588795182 | Title: robust training
Question:
username_0: For robust training, try also code from <NAME>'s group.
DiffAI can be a bit too heavyweight.
Answers:
username_1: We are using [ERAN](https://github.com/eth-sri/eran) for verification, and it works -- see commit #1ec777b.
Current scope only has DP training.
Status: Issue closed
|
tensorflow/federated | 628297392 | Title: tf_computation returns garbage when used with `**` operator
Question:
username_0: **Describe the bug**
```python
@tff.tf_computation(tf.int32) # or tf.float32
def pow10(n):
return n ** 10
pow10(10) # returns garbage (different wrong values)
```
The code above does not raise any error, but quietly returns garbage
(even if `**` is not supported, it should raise an error).
**Environment (please complete the following information):**
* OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
* Python package versions (e.g., TensorFlow Federated, TensorFlow):
[email protected] [email protected]
* Python version:
3.7
Answers:
username_1: This seems to be integer overflow unrelated to TFF -- using `tf.int64` works as I would expect in this case.
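For reference, the overflow is reproducible in plain TensorFlow, outside TFF; a minimal sketch (`10 ** 10` exceeds the int32 range, so the result silently wraps):
```python
import tensorflow as tf

print(tf.constant(10, dtype=tf.int32) ** 10)  # wrapped, "garbage" value
print(tf.constant(10, dtype=tf.int64) ** 10)  # 10000000000, as expected
```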
Status: Issue closed
username_0: 🤦 sorry |
randombit/botan | 940453773 | Title: Can't locate system certificates on Android systems
Question:
username_0: Hello,
I'd like to validate x509 certificates (and check the signature of other messages with them) with Botan in my Android native library.
The library will be compiled with Android NDK and used in another Android App.
Unfortunately, when I tried to use the system certificate store (`Botan::System_Certificate_Store`),
I encounter the following error message.
```
E/libc++abi: terminating with uncaught exception of type
Botan::Stream_IO_Error: I/O error: DataSource: Failure opening file /etc/ssl/cert.pem
```
FYI, I generated botan_all.cpp / h with the following commands.
`./configure.py --os=android --cc=clang --amalgamation --cpu=arm64 --disable-shared --disable-modules=pkcs11,aes,aes_armv8,sha1_armv8,sha2_32_armv8,pmull`
Is there anything I've missed?
Any comments or suggestions related to this issue will be appreciated.
Answers:
username_1: There is a bug here wrt cross-compilation: the configure script guesses the location of the certificate bundle based on your build machine rather than the target machine. Probably `/etc/ssl/cert.pem` is a valid bundle on your machine, but on Android it is not.
I am not sure in fact if such a bundle file exists on Android at all. It is possible that we have to use some Android specific API in order to access the trust store (done analogously to the macOS and Windows certificate stores).
One workaround would be to create (or copy from a trusted local machine) your own bundle, embed it into your `apk`, and then load the trust roots from there (rough sketch below).
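For illustration, loading such a bundled file into an in-memory store could look roughly like this (a sketch; the bundle path is an assumption for wherever your app extracts the file):
```cpp
#include <botan/certstor.h>
#include <botan/data_src.h>
#include <botan/exceptn.h>
#include <botan/x509cert.h>

// Load a PEM bundle (concatenated certificates) shipped inside the APK into
// an in-memory store, instead of Botan::System_Certificate_Store.
Botan::Certificate_Store_In_Memory load_bundled_roots(const std::string& bundle_path)
   {
   Botan::Certificate_Store_In_Memory store;
   Botan::DataSource_Stream bundle(bundle_path);
   while(!bundle.end_of_data())
      {
      try
         {
         store.add_certificate(Botan::X509_Certificate(bundle));
         }
      catch(const Botan::Exception&)
         {
         break; // trailing whitespace at the end of the bundle
         }
      }
   return store;
   }
```
|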
richdizz/Microsoft-Teams-Tab-Themes | 304179788 | Title: Latest css
Question:
username_0: Hi. I've just tested your code and it works fine, but it looks like the sample is not referencing the latest css. The colors are a bit off from the theme of the current Teams theme. Do you know how to get the latest? Thanks!
Answers:
username_1: These are the latest css urls
```
https://statics.teams.microsoft.com/hashedcss/stylesheets.theme-default.min-284aea21.css
https://statics.teams.microsoft.com/hashedcss/stylesheets.theme-dark.min-0d6d7ffd.css
https://statics.teams.microsoft.com/hashedcss/stylesheets.theme-contrast.min-ea8ce3a6.css
```
If you visit MS Teams in the browser and have your browser dev tools open, you will see console messages mentioning the new theme css when switching the theme in the settings.

Also you can get the URL of the current theme by manually executing
`document.getElementById('themed-stylesheet').href`
But indeed it's sad Microsoft doesn't provide a better way to get the themed css (e.g. using the context). |
dglo/domhub-tools-python | 621150741 | Title: [jkelley on 2017-11-18 19:51:45] : hubmoni: add top-level integration testing
Question:
username_0: Supporting libraries have unit tests, but tests for the hubmoni process itself are also needed. This should spawn a dummy LiveControl listener and receive the ZMQ messages from hubmoni run in simulation mode (i.e. using the procfile snapshot in the tests directory).
Answers:
username_0: [jkelley on 2017-11-23 03:04:41]
The new tests are basically working, but for some reason they still fail as pdaq@access. They seem to work as me...
username_0: [jkelley on 2017-11-27 02:43:25]
Fixed some PYTHONPATH issues with the test.
username_0: [master cd99551] 13 changed, 13 inserted, 13 deleted
Status: Issue closed
|
peeringdb/peeringdb | 191037625 | Title: Sync data inconsistent with PeeringDB data
Question:
username_0: Hi,
On https://www.peeringdb.com/ix/1249 I see the LAN IPv4 as 172.16.17.32/24 and IPv6 as 2001:504:61::/64; however, in the database synced with the sync tool I see IPv4 as 192.168.127.12/24 and IPv6 as 2001:7f8:18:210::/64.
I deleted all the tables and re-ran the sync, but I see the same happen again.
mysql> select * from peeringdb_ixlan_prefix where ixlan_id=1249;
+-----+--------+---------------------+---------------------+---------+-------+----------+----------------------+----------+
| id | status | created | updated | version | notes | protocol | prefix | ixlan_id |
+-----+--------+---------------------+---------------------+---------+-------+----------+----------------------+----------+
| 730 | ok | 2016-11-22 10:47:20 | 2016-11-22 10:47:20 | 0 | | IPv4 | 192.168.127.12/24 | 1249 |
| 731 | ok | 2016-11-22 10:47:20 | 2016-11-22 10:47:20 | 0 | | IPv6 | 2001:7f8:18:210::/64 | 1249 |
+-----+--------+---------------------+---------------------+---------+-------+----------+----------------------+----------+
2 rows in set (0.00 sec)
Is that a known issue?
Status: Issue closed
Answers:
username_1: the ixlan for ix 1249 has the id 1241; the ids of ix and ixlan are not always identical (actually, the only identical ones are from the initial import from v1 to v2):
sqlite> select * from peeringdb_ixlan where ix_id=1249;
1241|ok|2016-03-22 15:40:48|2016-04-14 09:41:17|0|DE-CIX Dallas Peering LAN||1500||0|0||1249
sqlite> select * from peeringdb_ixlan_prefix where ixlan_id=1241;
724|ok|2016-03-22 15:41:02|2016-03-22 15:41:02|0||IPv4|172.16.17.32/24|1241
725|ok|2016-03-22 15:42:03|2016-04-14 20:32:36|0||IPv6|2001:504:61::/64|1241 |
dotnet/project-system | 221153818 | Title: Spell checker doesn't run on explicit interfaces implementations
Question:
username_0: ``` C#
using System.Collections;
using System.Threading.Tasks;
namespace ConsoleApp282
{
public interface IProjectConfigurationsService
{
void Method();
}
class Program : IProjectConfigurationsService
{
void IProjectConfigurationService.Method()
{
}
}
}
```
1. Attempt to CTRL+. on IProjectConfigurationService in the interface implementation
Expected: To get spell checker to change `IProjectConfigurationService` to `IProjectConfigurationsService`
Actual: Don't get spell checker results
Note I do get add using results in the same position if the problem is just that the namespace is not imported.
Answers:
username_0: This issue was moved to dotnet/roslyn#18626
Status: Issue closed
|
ipython/ipykernel | 363765033 | Title: Incorrect sideplots in inline matplotlib backend
Question:
username_0: The inline backend produces images which differ from other matplotlib backends when using sideplots with axes made using `mpl_toolkits.axes_grid1`:
```python
%matplotlib inline
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
np.random.seed(1)
ax = plt.gca()
x = y = np.linspace(0,10,10)
z = np.random.random((10,10))
plt.pcolor(x,y,z)
sidex = make_axes_locatable(ax).append_axes("top", 1)
sidex.plot(x, np.random.random(10))
```
Expected Output (with qt backend):

Actual Output

Examples on [matplotlib docs](https://matplotlib.org/mpl_toolkits/axes_grid/users/overview.html) produce similar results.
ipykernel: 4.9.0
Python 3.6.6, 3.7.0
matplotlib 3.0.0
numpy 1.15.2
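A possible workaround, assuming the discrepancy comes from the inline backend saving figures with `bbox_inches='tight'` by default (an assumption; other backends don't do this):
```python
# Make the inline backend render figures like the other backends do.
%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}
```
|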
andmorefine/since-co | 646287220 | Title: <NAME>
Question:
username_0: Hello,
My Name is Ash and I Run Tech Know Space https://techknowspace.com We are your Premium GO-TO Service Centre for All Logic Board & Mainboard Repair
When other shops say "it can't be fixed" WE CAN HELP!
ALL iPHONE 8 & NEWER
BACK GLASS REPAIR - 1 HOUR
Devices We Repair
Audio Devices Audio Device Repair
Bluetooth Speakers - Headphones - iPod Touch
Computers All Computer Repair
All Brands & Models - Custom Built - PC & Mac
Game Consoles Game Console Repair
PS4 - XBox One - Nintendo Switch
Laptops All Laptop Repair
All Brands & Models - Acer, Asus, Compaq, Dell, HP, Lenovo, Toshiba
MacBooks All MacBook Repair
All Series & Models - Air, Classic, Pro
Phones All Phone Repair
All Brands & Models - BlackBerry, Huawei, iPhone, LG, OnePlus, Samsung, Sony
Smart Watches Apple Watch Repair
Apple Watch - Samsung Gear - Moto 360
Tablets All Tablet Repair
All Brands & Models - iPad, Lenovo Yoga, Microsoft Surface, Samsung Tab
Drone Repair
Call us and tell us your issues today!
Toll Free: (888) 938-8893
https://techknowspace.com
<NAME>
<EMAIL>
https://twitter.com/techknowspace
https://www.linkedin.com/company/the-techknow-space<issue_closed>
Status: Issue closed |
sinanazemi/global-hiring | 178672957 | Title: Full name text box capitalization
Question:
username_0: Can you make the full name text box to auto cap the first letter of words
<NAME> instead of <NAME>
<img width="1360" alt="screen shot 2016-09-22 at 10 33 18 am" src="https://cloud.githubusercontent.com/assets/10181777/18759162/0e3481b4-80b0-11e6-8cde-a628ec262d93.png"><issue_closed>
Status: Issue closed |
botpress/botpress | 556893137 | Title: https://help.botpress.io
Question:
username_0: Is it possible to read the headers in the `onBotMount` event?
I'm trying to write a custom authentication strategy. Since I'm working on microservices, it would be better for me to only validate the JWT (with some custom code; the existing JWT strategy in Botpress won't work for my case).
I'm looking for a way to read the request headers on every request and update the context user details.
Answers:
username_1: I too would like to read headers sent by the frontend/client, however I would like to do that in a hook.
It would be really awesome if we could access `event.request` in hooks and find the entire express request object there for inspection, if the event was caused by an incoming http request.
username_2: There's a metadata field that you can use in the converse API; you can pass in anything JSON-serializable (e.g. token, auth data, whatever).
Then you would create a before_incoming hook and capture that metadata straight from the event. From there you can perform any authentication routine you need, as sketched below.
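A rough sketch of such a hook (hypothetical code, not the official Botpress API; where the metadata surfaces on the event and the key name are assumptions):
```js
// before_incoming hook: validate a JWT passed via the converse metadata.
const jwt = require('jsonwebtoken') // assumed to be available to the hook

const token = event.payload && event.payload.metadata && event.payload.metadata.token
if (token) {
  try {
    const claims = jwt.verify(token, process.env.JWT_SECRET) // your own key
    event.state.user = Object.assign({}, event.state.user, { userId: claims.sub })
  } catch (err) {
    bp.logger.warn('JWT validation failed, ignoring auth metadata')
  }
}
```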
.
Closing this as it's doable today (as described) |
facebook/react | 191869609 | Title: Error messages swallowed in certain edge cases
Question:
username_0: I would like to report a bug.
Currently in certain cases (like the one reproduced below) critical error messages are not displayed in the developer console. Sometimes the view does not render at all and it's left to our guesswork to identify the culprit.
Here's the relevant jsfiddle:
https://jsfiddle.net/b1co3uex/9/
You can see that the only error being thrown is the one deliberately put in line 6. It blocks proper rendering of the view, yet no trace of it is shown in the developer console.
The expected behaviour would be to catch the error and show it in developer console.
My current workaround is to wrap `setState` calls in `try/catch`, which is ugly - but it works.
I suppose this is not by design, as React usually has excellent error reporting capabilities.
I am aware that the issue might be caused by imperfections in my code. If there's a way to improve it while maintaining its functionality, I'm all ears :)
I've been struggling with this issue for quite some time now. In the above jsfiddle you can see the error being reproduced in React 15.3.2.
Answers:
username_1: I've also noticed this error. +1
Status: Issue closed
username_2: I might be wrong but I think that the problem is in the `qwest` library you are using. It doesn't seem to report unhandled rejections. Maybe it's because it uses [`pinkyswear`](https://github.com/timjansen/PinkySwear.js) under the hood which probably doesn't support this feature.
If you switch to a native Promise implementation, like [in this fiddle](https://jsfiddle.net/b1co3uex/11/) where I wrap pinky's Promise in a native Promise with `Promise.resolve()`, you will see the error in console in browsers that log unhandled rejections.
<img width="564" alt="screen shot 2016-11-27 at 17 25 11" src="https://cloud.githubusercontent.com/assets/810438/20650447/7d690d6c-b4c6-11e6-97f3-d8d880d0ea6b.png">
There is nothing we can do here. My advice is that if you use Promises, pick a library that supports printing unhandled rejections in development, or at least pick a library that uses native Promises when they are available. This is [the one we're using at Facebook](https://github.com/then/promise). It's also included by default in Create React App.
I hope this helps!
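(For illustration, the wrapping trick from that fiddle is roughly the following; a sketch assuming a `qwest.get` call like the original code:)
```js
// Assimilate qwest's non-standard promise into a native Promise so that
// unhandled rejections -- including errors thrown inside setState -- are
// reported by the browser.
Promise.resolve(qwest.get('/api/data'))
  .then(response => this.setState({ data: response }));
```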
username_0: I understand. Thank you for your response. It's time to review the way we do ajax requests :) |
albertjan/PipeToCom | 324280220 | Title: "Com Port" doesn't show up on Windows Terminal Server
Question:
username_0: As you can see, the option "Com Port 1" doesn't show up in Hyper-V Manager for any of our Windows terminal servers.
On all other servers, the option is available.

Answers:
username_0: Update: the option won't show up on any Server 2016 machine.
username_1: FYI, you have to use generation 1. Can't be a GEN 2 Hyper-V.
Status: Issue closed
|
EBISPOT/efo | 1037606686 | Title: EVA-EFO import 10/27
Question:
username_0: Hi
The new spreadsheet for EFO import has now been prepared, with **30** terms to be **IMPORTED** and **22 NEW** terms to be created:
https://docs.google.com/spreadsheets/d/1NRTs0jXXxbVtJB64WsHgYdvbUuL1Sg_iy8GjQh80pDg/edit?usp=sharing
A couple of things to note:
Disorder of sex development - The MONDO cross reference for this term points to a different term string. If this is not suitable, the Orphanet reference can be used and the MONDO can be dropped altogether.
Basel-Vanagaite-Smirin-Yosef syndrome - Similar situation with the MONDO cross reference. MONDO can be dropped and Orphanet & OMIM can be used.
-----------------------------------
For any NEW terms, the Medgen cross reference was used (additional column W). However in many cases they do not provide much information. If a new term can not be created please let me know and I will discuss how to handle these with my team.
Answers:
username_1: @username_0 Thank you for your reply! In that case I will create new terms for the remaining three terms.
username_0: @username_1
Great, thank you for your help.
username_1: The new terms have been created and will be available with our 15th November release.
Status: Issue closed
|
magicmai/learn-C | 320448329 | Title: Using the make command with C
Question:
username_0: ## 安装 make程序
因为已安装的 MinGW/bin 中没有make.exe 文件,首先进行安装:
```
mingw-get install gcc g++ mingw32-make
```
After installation, go to D:\MinGW\bin and copy `mingw32-make`, renaming the copy to `make` (or just rename the original).
Notes:
MinGW download URL:
Environment variable (PATH): D:\MinGW\bin
## makefile format
```
<target>: <dependency 1> <dependency 2> <...>
[TAB]<recipe>
```
## Creating the makefile
To use the make command on the command line, first create a makefile in the project directory:
- Step 1: create a new .txt file
- Step 2: write the rules into it
- Step 3: save and exit, then delete the file name together with its extension and rename the file to makefile
Suppose we have the following three files, which compute the maximum of two numbers:
```c
/* max.h */
int max(int a, int b);
/* max.c */
#include "max.h"
int max(int a, int b)
{
return a > b ? a : b;
}
/* max_num.c */
#include <stdio.h>
#include <stdlib.h>
#include "max.h"
int main(void)
{
printf("The bigger one of 3 and 5 is %d\n", max(3, 5));
system("pause");
return 0;
}
```
Then the makefile is (note that make builds the first target by default, so the executable is listed first, and each recipe line must start with a TAB):
```
max_num.exe: max_num.o max.o
	gcc -o max_num.exe max_num.o max.o
max.o: max.c max.h
	gcc -c max.c
max_num.o: max_num.c max.h
	gcc -c max_num.c
```
## Running make
Open a command window in the project directory (hold Shift, right-click, and choose "Open command window here"), then enter:
```
make
```
and the executable is built automatically.
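For reference, with the makefile above, running `make` should produce output roughly like:
```
gcc -c max_num.c
gcc -c max.c
gcc -o max_num.exe max_num.o max.o
```
|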
topher-chris/RunApp | 602604346 | Title: Create a 3D data chart
Question:
username_0: -The 3D chart is visually pleasing compared to the 2D chart. I need to do research on a 3D chart that can be easily integrated into my project.
-Acceptance criteria: 2D chart is the first priority. The 3D chart can be on a separate tab named "Data".
-Check out Chart.js and YouTube beginner tutorials for JavaScript; data visualization involves JavaScript (see the sketch below).
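For orientation, a minimal 2D Chart.js line chart looks roughly like this (element id and data are placeholders):
```js
// Assumes Chart.js is loaded and <canvas id="runChart"></canvas> exists.
const ctx = document.getElementById('runChart').getContext('2d');
new Chart(ctx, {
  type: 'line',
  data: {
    labels: ['Mon', 'Tue', 'Wed'],
    datasets: [{ label: 'Distance (km)', data: [3.2, 5.0, 4.1] }]
  }
});
```
|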
sassoftware/saspy | 755532846 | Title: The system cannot find the file specified
Question:
username_0: import saspy
sas = saspy.SASsession()
Using SAS Config named: winlocal
The OS Error was:
The system cannot find the file specified
SAS Connection failed. No connection established. Double check your settings in sascfg_personal.py file.
Attempted to run program C:\Program Files\SASHome\SASPrivateJavaRuntimeEnvironment\9.4\jrin\java.exe with the following parameters:['C:\\Program Files\\SASHome\\SASPrivateJavaRuntimeEnvironment\\9.4\\jre\x08in\\java.exe', '-classpath', 'C:\\Users\\Jun\\anaconda3\\lib\\site-packages\\saspy\\java\\saspyiom.jar;C:\\Users\\Jun\\anaconda3\\lib\\site-packages\\saspy\\java\\iomclient\\log4j.jar;C:\\Users\\Jun\\anaconda3\\lib\\
.......
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-97095405f39b> in <module>
----> 1 sas = saspy.SASsession()
~\anaconda3\lib\site-packages\saspy\sasbase.py in __init__(self, **kwargs)
535 # validate encoding
536 try:
--> 537 self.pyenc = sas_encoding_mapping[self.sascei]
538 except KeyError:
539 print("Invalid response from SAS on inital submission. printing the SASLOG as diagnostic")
KeyError: 'No SAS process attached. SAS process has terminated unexpectedly.'
-----------------------------------------------
windows 10
SAS 9.4M6
latest saspy
-------------------------------------------------
sascfg_personal.py
SAS_config_names=['winlocal']
winlocal = {'java' : 'C:\\Program Files\\SASHome\\SASPrivateJavaRuntimeEnvironment\\9.4\\jre\\bin\\java.exe',
'encoding' : 'windows-1252',
}
-----------------------------------------------
C:\Program Files\SASHome\SASFoundation\9.4>sas.exe/regserver
C:\Program Files\SASHome\SASFoundation\9.4>path
PATH=C:\Program Files (x86)\Common Files\Oracle\Java\javapath;
.........
C:\Program Files\Java\jdk1.8.0_221\bin;C:\Program Files\Java\jdk1.8.0_221\jre\bin;D:\hive\b
in;C:\Program Files\SASHome\SASFoundation\9.4\core\sasexe;C:\Program Files\SASHo
me\SASFoundation\9.4\ets\sasexe;C:\Program Files\SASHome\Secure\ccme4;C:\Program
Files\SASHome\SASFoundation\9.4\core\sasext;**C:\Program Files\SASHome\SASPrivate
JavaRuntimeEnvironment\9.4\jre\bin**;C:\Users\Jun\AppData\Local\GitHubDesktop\bin;
C:\Users\Jun\anaconda3\Scripts;C:\Program Files\SASHOME\SASFoundation\9.4\core\s
asext
Answers:
username_1: Hey @username_0, that's an easy one. On Windows you can't simply code single backslashes in file system paths, because Python treats them as escape sequences when followed by a number of other characters. The error is that it can't find Java, and that's because the path is wrong: backslash-b and other sequences turn into other characters, making the path wrong.
So, either use double backslashes in your windows paths, or add the 'r' prefix to the string to make it not use backslashes as an escape character.
You can see the error with what you specified:
C:\Program Files\SASHome\SASPrivateJavaRuntimeEnvironment\9.4\jrin\java.exe
C:\Program Files\SASHome\SASPrivateJavaRuntimeEnvironment\9.4\jr[e\\b]in\java.exe
See how the \b (backspace), removed the 'r' before it, as well as itself, leaving jrin instead of jre\\bin?
```
#So, you can code either of these to fix this error in python:
winlocal = {'java' : r'C:\Program Files\SASHome\SASPrivateJavaRuntimeEnvironment\9.4\jre\bin\java.exe',
winlocal = {'java' : 'C:\\Program Files\\SASHome\\SASPrivateJavaRuntimeEnvironment\\9.4\\jre\\bin\\java.exe',
```
Tom
username_0: Thanks
username_1: Was that all that was wrong? Are you up and running?
username_1: I assume you've fixed that string and you're up and running. That's all that was needed for this error. If you have any other issue, just let me know!
Thanks,
Tom
Status: Issue closed
|
MovingBlocks/Terasology | 123766579 | Title: Extra entity events when loading game
Question:
username_0: Here is a sample output from my test game (with extra info added by system to expose the issue):
```
22:31:20.685 [main] INFO o.t.e.m.l.InitialiseWorld - World seed: "hkNG4i9RrZnxArmtteZvybL2IDEOoHsD"
22:31:21.650 [main] ERROR o.t.n.e.s.EntityTransportAuthoritySystem - Activated component for block: (3, 41, 0)
22:31:21.679 [main] ERROR o.t.n.e.s.EntityTransportAuthoritySystem - Activated component for block: (4, 41, 0)
22:31:21.768 [main] WARN o.t.l.nameTags.NameTagClientSystem - Can't create player based name tag for character as owner has no client component
22:31:21.770 [main] INFO o.t.r.world.RenderableWorldImpl - New Viewing Distance: Moderate
22:31:21.777 [main] ERROR o.t.n.e.s.EntityTransportAuthoritySystem - Deactivated component for block: (3, 41, 0)
22:31:21.799 [main] ERROR o.t.n.e.s.EntityTransportAuthoritySystem - Activated component for block: (3, 41, 0)
22:31:21.801 [main] ERROR o.t.n.e.s.EntityTransportAuthoritySystem - Deactivated component for block: (4, 41, 0)
22:31:21.802 [main] ERROR o.t.n.e.s.EntityTransportAuthoritySystem - Activated component for block: (4, 41, 0)
```
The first two events are correct; however, the remaining ones are just a waste of CPU.
Answers:
username_1: Heya @username_0 good to see you :-)
I'm using the Blocker label as things to review before v1.0.0 in case it would result in a backwards-incompatible change, so we should complete that before releasing. Do you think this qualifies or is it just poor performance that wouldn't impact game functionality (or API) to fix at some point?
Pinging @immortius and @flo
username_0: Due to the second case happening sometimes, this is definitely a blocker.
username_1: Sure, it is a serious bug, I get that :-) But does the likely fix require resulting fixes in modules or stuff wouldn't compile? In any case I hope Immortius and/or Florian have ideas or comments, and I'm happy to have you poking at the holes again! We need to fix em.
I'm on vacation in NYC and heading off for today, but I'll be around from time to time.
username_0: No, it's just that for every test I have to recreate the world, because the existing blocks might be incorrectly initialized. Quite annoying.
The result for the player is that stuff might not work after loading a game.
username_2: By chance, is this in a multiplayer game, and are you seeing the events for the temp client entity and then the server-sent entity afterwards?
Status: Issue closed
|
nasa/cFS | 935080620 | Title: Add RTEMS build and test to CI
Question:
username_0: **Checklist (Please check before submitting)**
* [x] I reviewed the [Contributing Guide](https://github.com/nasa/cFS/blob/main/CONTRIBUTING.md).
* [x] I reviewed the [cFS README.md file](https://github.com/nasa/cFS/blob/main/README.md) to see if the feature is in the major future work.
* [x] I performed a cursory search to see if the feature request is relevant, not redundant, nor in conflict with other tickets.
**Is your feature request related to a problem? Please describe.**
Currently the build, test, and run GitHub Actions workflow only tests building and running on native Linux. This won't catch any errors that only affect running in RTEMS or when cross-compiling.
**Describe the solution you'd like**
Add GitHub Actions workflow to cross-compile and run unit tests in both RTEMS 5 and RTEMS 4.11 using QEMU
**Describe alternatives you've considered**
None
**Requester Info**
<NAME> - GSFC 582 Intern<issue_closed>
Status: Issue closed |
rancher/rancher | 349659453 | Title: Server logs have too many "TLS handshake errors"
Question:
username_0: **Rancher versions:**
rancher/server or rancher/rancher: v2.0.7-rc5
1. Create a rancher server v2.0.7-rc5.
2. Create one or more clusters
The server logs are full of "TLS handshake errors"
```
2018/08/10 21:07:41 [INFO] 2018/08/10 21:07:41 http: TLS handshake error from abc:13758: remote error: tls: bad certificate
2018/08/10 21:07:41 [INFO] 2018/08/10 21:07:41 http: TLS handshake error from def:34470: remote error: tls: bad certificate
2018/08/10 21:07:41 [INFO] 2018/08/10 21:07:41 http: TLS handshake error from abc: remote error: tls: bad certificate
2018/08/10 21:07:41 [INFO] 2018/08/10 21:07:41 http: TLS handshake error from
2018/08/10 21:07:43 [INFO] 2018/08/10 21:07:43 http: TLS handshake error from abc:15573: remote error: tls: bad certificate
```
Status: Issue closed
Answers:
username_0: The above errors happen when the nodes used in a cluster are not deleted from DO/AWS cloud providers and the same server is used to create a new cluster
Tested on a new setup with v2.0.7-rc5. Created a new cluster and ran some tests on the cluster.
Did not see the "TLS handshake errors" |
smartdevicelink/sdl_javascript_suite | 943661202 | Title: Choice Set Present followed directly by a Delete can have undefined behavior
Question:
username_0: ### Bug Report
In certain cases, a present can happen when the choices are not available on the head unit. The present operation should double-check when it starts that all cells are preloaded on the head unit. When a present is requested, the operation should be created along with the preload and updated when the preload finishes. That way the delete will be slotted after the present.
##### Reproduction Steps
1. Send a present with a large number of choices
2. Immediately send a delete for several of those choices
##### Expected Behavior
The delete happens after the present
##### Observed Behavior
Undefined error behavior
Answers:
username_1: Closed by https://github.com/smartdevicelink/sdl_javascript_suite/pull/477
Status: Issue closed
|
OpenXBL/OpenXBL-PHP | 732286081 | Title: Unable to get Player Achievements for Xbox 360 Games, empty response- /achievements/player/{xuid}/title/{titleid}
Question:
username_0: `/achievements/player/{xuid}/title/{titleid} `
The above request works perfectly fine for Xbox One games. However, if I use an Xbox 360 Title ID, the response is:
```
{
"achievements": [],
"pagingInfo": {
"continuationToken": null,
"totalRecords": 0
}
}
```
Answers:
username_1: I'm getting the same thing for an iOS title that has Xbox Live achievements.
username_0: I recommend joining the Discord, more responsive https://discordapp.com/channels/310973589178417153/310973589178417153.
username_2: The official Xbox Live API has a header called x-xbl-contract-version which, as far as I know, allows you to access different forms of data from the same endpoint. With the achievements endpoint, using version 4 will return current-gen titles whereas using version 3 will return old-gen titles. Unfortunately, it looks like with the xbl.io API you can't actually specify which version you want to use; I may be wrong.
username_0: Thanks, yeah I doubt it's an issue on Microsoft's end. My workaround has been using [this](https://xapi.us/) API for 360 titles, although it's not ideal because the rate limits are lower.
username_1: Looks like this API might not work for my needs then if it doesn't support older games. Thanks for the suggestions!
username_0: I can't use the official API as you have to be a Xbox Developer to be granted access.
username_2: If you check out the package it will show you how to self authenticate without using Xbox App authentication.
username_2: You may have to check your accounts sign in history and authorise the location
username_0: Got it working, thanks. I didn't realise its asking for **Xbox Live Account Email Address**. Now to see what I can do with the API, maybe my project isn't dead after all 😄 |
zalando-stups/fullstop | 125634073 | Title: [fullstop-instance-plugin-support] exception when parsing app_version from yml
Question:
username_0: ```
ERROR [pool-7-thread-1] o.z.s.f.PluginEventsProcessor - java.lang.Double cannot be cast to java.lang.String
java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.String
at org.zalando.stups.fullstop.plugin.impl.EC2InstanceContextImpl.lambda$getVersionId$5(EC2InstanceContextImpl.java:135) ~[fullstop-instance-plugin-support-1.2.0-SNAPSHOT.jar!/:?]
at java.util.Optional.map(Optional.java:215) ~[?:1.8.0_66-internal]
at org.zalando.stups.fullstop.plugin.impl.EC2InstanceContextImpl.getVersionId(EC2InstanceContextImpl.java:135) ~[fullstop-instance-plugin-support-1.2.0-SNAPSHOT.jar!/:?]
at org.zalando.stups.fullstop.plugin.taupageyaml.TaupageYamlPlugin.process(TaupageYamlPlugin.java:52) ~[fullstop-taupage-yaml-plugin-1.2.0-SNAPSHOT.jar!/:?]
at java.util.ArrayList.forEach(ArrayList.java:1249) ~[?:1.8.0_66-internal]
at org.zalando.stups.fullstop.plugin.AbstractEC2InstancePlugin.processEvent(AbstractEC2InstancePlugin.java:36) ~[fullstop-instance-plugin-support-1.2.0-SNAPSHOT.jar!/:?]
at org.zalando.stups.fullstop.PluginEventsProcessor.doProcess(PluginEventsProcessor.java:52) ~[fullstop-processing-1.2.0-SNAPSHOT.jar!/:?]
at org.zalando.stups.fullstop.PluginEventsProcessor.doProcess(PluginEventsProcessor.java:43) ~[fullstop-processing-1.2.0-SNAPSHOT.jar!/:?]
at java.util.ArrayList.forEach(ArrayList.java:1249) [?:1.8.0_66-internal]
at org.zalando.stups.fullstop.PluginEventsProcessor.process(PluginEventsProcessor.java:33) [fullstop-processing-1.2.0-SNAPSHOT.jar!/:?]
at com.amazonaws.services.cloudtrail.processinglibrary.reader.EventReader.emitEvents(EventReader.java:229) [aws-cloudtrail-processing-library-1.0.2.jar!/:?]
at com.amazonaws.services.cloudtrail.processinglibrary.reader.EventReader.processSource(EventReader.java:156) [aws-cloudtrail-processing-library-1.0.2.jar!/:?]
at com.amazonaws.services.cloudtrail.processinglibrary.AWSCloudTrailProcessingExecutor$ScheduledJob$1.run(AWSCloudTrailProcessingExecutor.java:178) [aws-cloudtrail-processing-library-1.0.2.jar!/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_66-internal]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66-internal]
```
An exception is thrown, when `application_version` is parsed, because if it is a number, it will be recognized as double and we then try to cast it to string.<issue_closed>
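A hypothetical sketch of the kind of defensive conversion that avoids the cast (illustration only, not the actual fullstop fix; the map access is an assumption):
```java
// The YAML parser may return a String, Double, Integer, ... for this field,
// so convert instead of casting.
Object rawVersion = taupageYaml.get("application_version");
String versionId = (rawVersion == null) ? null : String.valueOf(rawVersion);
```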
Status: Issue closed |
facebook/folly | 220601398 | Title: compiled error for FBVector.h in release v2017.04.10.00 on centos
Question:
username_0: centos6.5 g++ 3.8.1, folly version is release v2017.04.10.00.
compiling error as follows:
FBVector.h:1432:49: error:parameter package can not be expand in‘...’:
M_construct(start, std::forward<Args>(args)...);
thanks for your help.
Answers:
username_1: same error in centos 7, gcc 4.8.5
username_2: Support for GCC 4.8 in Folly is in the process of being dropped. I'll look into this if I get a chance, but there are significant changes being worked on that will definitely make Folly not compile on GCC 4.8, so I'd strongly recommend getting things upgraded to GCC 4.9 (devtoolset-3 is GCC 4.9 and is available for both centos 6 & 7)
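For reference, installing and activating the devtoolset compiler on CentOS typically looks like this (exact repo/package names may vary by setup):
```
sudo yum install -y centos-release-scl        # enables the SCL repositories
sudo yum install -y devtoolset-3-gcc devtoolset-3-gcc-c++
scl enable devtoolset-3 bash                  # shell with GCC 4.9 first in PATH
```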
username_1: @username_2
Using gcc 4.9 from devtoolset-3 solved my issue. Thank you.
username_0: @username_2 Thanks, I'd udpdated the GCC 4.8.5 to 4.9.3, then this problem solved.
Status: Issue closed
|
amjadafanah/Hotels-App-Test-Project | 337191630 | Title: Spring-REST-Example-4 : example_v1_hotels_@randominteger_get_auth_invalid
Question:
username_0: Project : Spring-REST-Example-4
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Sat, 30 Jun 2018 06:30:56 GMT]}
Endpoint : http://192.168.3.11:8090/example/v1/hotels/example/v1/hotels/1899344009
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404] |
johnsonandjohnson/Bodiless-JS | 815563799 | Title: Touts: Placeholder title for asToutOverlayTitle isn't fully visible
Question:
username_0: ## Description
Touts: Placeholder title for asToutOverlayTitle isn't fully visible
### Steps to reproduce:
1. in Edit env, open /touts/,
2. scroll to a tout that has a Title on an image (asToutOverlayTitle)
### Actual result
Placeholder title isn't fully visible

3. switch to Edit mode
### Actual result
Placeholder title isn't fully visible

### Expected Results
Placeholder title is fully visible
reproduced in Master
Answers:
username_1: Closing since has been fixed and tested on #934 .
Status: Issue closed
|
jhen0409/react-native-debugger | 191599886 | Title: redux devtool doesn't get any data
Question:
username_0: Hi...
For some reason my Redux devtool doesn't receive any data. The JS developer tools and React DevTools work fine.
I am using the Apollo GraphQL redux middleware. It seems like that is the problem, but I can't figure out how to fix or debug it. Could the choice of middleware be a problem for the redux messages in devtools?
I use composeWithDevTools() around the middleware.
Cheers
Peter
Answers:
username_1: Is it the composeWithDevTools of the `redux-devtools-extension` package? Make sure your RNDebugger version is 0.5.x.
username_0: Sure,
Code example:
```js
import { createStore, combineReducers, applyMiddleware, compose } from 'redux';
import { composeWithDevTools } from 'remote-redux-devtools';
import thunk from 'redux-thunk';
import { persistStore, autoRehydrate } from 'redux-persist'
import { ApolloProvider } from 'react-apollo';
import ApolloClient, { createNetworkInterface } from 'apollo-client';
...
this.networkInterface = createNetworkInterface({ uri: APOLLO_SERVER_URL });
this.client = new ApolloClient({
  networkInterface: this.networkInterface,
});
console.log(window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__)
this.store = createStore(
  combineReducers({
    player,
    user,
    podcasts,
    feed,
    apollo: this.client.reducer(),
  }),
  {},
  composeWithDevTools(
    applyMiddleware(this.client.middleware()),
    applyMiddleware(thunk),
    autoRehydrate(),
  )
);
```
In my console inside the React Native Debugger (Electron app) I get this log:

So the console log debugger is running, but the redux isn't. And I have a store running, but the state is undefined inside React Native Debugger:

Am I doing something wrong?
username_2: @username_0, `remote-redux-devtools` by default sends data via `remotedev.io`, while `react-native-debugger` uses its local server and already includes `remote-redux-devtools` inside. Just in case, you can open http://remotedev.io/local/ to see if you get your data there.
Basically, you should just use:
```diff
this.store = createStore(
combineReducers({
player,
user,
feed,
apollo: this.client.reducer(),
}),
{},
- composeWithDevTools(
+ window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__(
applyMiddleware(this.client.middleware()),
applyMiddleware(thunk),
autoRehydrate(),
)
);
```
username_0: @username_2 Thanks... that did the trick... removed the remote-redux-devtools from import and added, window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ as the compose function :)
Status: Issue closed
username_3: Thanks @username_2 -- worked for me and I got to remove a dependency.
username_1: Hmmm, I've pointed that out in the [README](https://github.com/username_1/react-native-debugger#redux-devtools-and-remotedev-on-local-even-mobx); what is the reason for misunderstanding it? 🤔
username_4: @username_1 cuz that should have been explicitly mentioned on the remote-redux-devtools front page. @username_2 please correct me if not. Or at least a link to [subREADME](https://github.com/username_1/react-native-debugger/blob/master/docs/redux-devtools-integration.md)
username_5: I just had a hard time to find this too. @username_1, let me try to explain the path I followed:
- Here you show the correct form, to use "redux-devtools-extension": https://github.com/username_1/react-native-debugger/blob/master/docs/getting-started.md#use-redux-devtools-extension-api
- To install it, I went to Redux Devtools Extension docs: https://github.com/username_2/redux-devtools-extension
There on "Installation::4" and on "1.5 For React Native,..." it says I should actually use the REMOTE-redux-devtools. So I went there thinking "well, that first info told me to use redux-devtools-extension, but what I really need is remote-redux-devtools" and followed the instructions in there. And I did exactly as shown [there](https://github.com/username_2/remote-redux-devtools). The wrong way.
So, IMHO, you should say that the person needs to use remote-redux-devtools and that that form (window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__) is the one that should be used.
BTW, thanks for this great work!
username_5: In fact window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ will only be available if you start remote debugging. If it's stopped an error will be thrown. So checking is needed unless you want a red screen:
```
export default (undefined === window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__)
  ? createStore(reducers, middleware)
  : createStore(
      reducers,
      window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__(middleware)
    )
```
username_1: We could update the documentation to eliminate such misunderstanding. Maybe remove RNDebugger from `remote-redux-devtools`, and add a link to `redux-devtools-extension`.
@username_5 I guess you missed reading [`redux-devtools-integration.md`](https://github.com/username_1/react-native-debugger/blob/master/docs/redux-devtools-integration.md)? The [`Getting Started`](https://github.com/username_1/react-native-debugger/blob/master/docs/getting-started.md#use-redux-devtools-extension-api) guide mentions the link.
username_6: Just curious... if I'm getting `undefined` from my `console.log(window.__REDUX_DEVTOOLS_EXTENSION__);`... am I missing some configuration?
The console and react dev tools are working fine.
username_1: @username_6, what version of RNDebugger are you using? I just saw the screenshot; it looks like an old version. A reminder that it must be `>= 0.5.0`. (current: `0.7.11`)
If you're using `< 0.5.0` I must also say sorry, it didn't have the auto-update feature until `0.5.0`. :(
username_6: @username_1
Thank you. It looked like there were some issues with my RNDebugger. :( I reinstalled via Homebrew (installing the newest version) and it worked.
username_7: This is the solution. |
bshuster-repo/logrus-logstash-hook | 474028062 | Title: New release needed
Question:
username_0: Hi,
We need a new release to fix the bug with persistent fields added with WithField method
Thanks! :)
Answers:
username_1: Hi @username_0
The stable release doesn't support the `WithField` methods. Nevertheless, version 0.4.1 supports it and is available ([branch 0.4](https://github.com/bshuster-repo/logrus-logstash-hook/tree/0.4)).
If you still want to use the stable release, instead of doing:
```golang
hook, err := logrustash.NewHookWithFields("tcp", "172.17.0.2:9999", "myappName", logrus.Fields{})
hook.WithFields(logrus.Fields{
"hostname": os.Hostname(),
"serviceName": "myServiceName",
})
```
do this instead:
```golang
log := logrus.New()
conn, err := net.Dial("tcp", "logstash.mycompany.net:8911")
if err != nil {
log.Fatal(err)
}
hook := logrustash.New(conn, logrustash.DefaultFormatter(logrus.Fields{"type": "myappName"}))
log.Hooks.Add(hook)
ctx := log.WithFields(logrus.Fields{
"hostname": os.Hostname(),
"serviceName": "myServiceName",
})
```
username_2: @username_1 `logrustash.DefaultFormatter` is not actually present in v0.4.1 - I got bitten by this when moving to Go 1.13 + modules as the module system by default uses the latest release which, as opposed to master, does not have the DefaultFormatter. Would be great to see a new release!
In the meantime `go get github.com/bshuster-repo/logrus-logstash-hook@b3d898b5138adbc4c4da62fa6d889f18a587566b` has done the trick for me :)
username_1: That's true, because master has the new API.
I guess I should mention that in the README.md file.
Thanks for the feedback though.
username_1: I am closing this as the master branch was released [here](https://github.com/bshuster-repo/logrus-logstash-hook/releases/tag/1.0).
If there is a problem with this, feel free to re-open this.
Thanks.
Status: Issue closed
|
quarkusio/quarkus | 705103794 | Title: ReflectiveHierarchyStep does not handle classes generated during
Question:
username_0: **Describe the bug**
ReflectiveHierarchyStep does not handle classes generated during the build, which results in failures on REST resources because the classes are not allowed to be used for serialisation.
When an extension generates model classes (POJOs) during the build that are referenced from REST resources, they are attempted to be added as reflective classes, but they are not in the combined index, which makes them unindexed even though they are in the BeanArchiveIndex.
**Expected behavior**
All classes that are in the bean archive index should be found by ReflectiveHierarchyStep.
**Actual behavior**
Beans that are generated during the build but do not exist in the main source folders are not processed by ReflectiveHierarchyStep and are marked as unindexed.
**To Reproduce**
Steps to reproduce the behavior:
1. Generate class in extension that is used in rest resource class
**Environment (please complete the following information):**
- Output of `uname -a` or `ver`: Darwin 19.5.0 Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64 x86_64
- Output of `java -version`: openjdk version "11.0.2" 2019-01-15
- GraalVM version (if different from Java): OpenJDK Runtime Environment GraalVM CE 20.1.0 (build 11.0.7+10-jvmci-20.1-b02)
- Quarkus version or git rev: 1.17 and 1.18
- Build tool (ie. output of `mvnw --version` or `gradlew --version`): maven-3.6.3
**Additional context**
An attempt to fix this by introducing a new index build item in the core, to avoid moving BeanArchiveIndexBuildItem, can be found in this draft PR: https://github.com/quarkusio/quarkus/pull/12217
I confirm that this solves the problem but not sure if using new BuildItem for that is the best approach.
Answers:
username_0: Closing; waiting for the implementation of #12574.
Status: Issue closed
|
halaei/2048 | 113995050 | Title: suggestions of towc
Question:
username_0: I have a few suggestions for you ;)
1) use ternary, just to be different
2) the circle is way too close to the title, add at least 20px of padding
3) put the new game button where it's easy to click (maybe under the triangle, still in the circle)
4) consider using the same color palette as 2048, the colors you've chosen right now look pretty bad together, some numbers are also hard to read because of that (including the circle)
5) nobody likes crammed space, leave something for the eye to rest, so remove the red lines completely
and in the end, you don't really need the circle. It doesn't help. Just remove it to get more space and make the triangle a bit bigger
Answers:
username_0: 🇨🇲 |
staxrip/staxrip | 952517911 | Title: New VCEEnc features
Question:
username_0: Hello.
The amount of work and support for AMD's HW encoder has definitely increased in StaxRip (and Rigaya's VCEEnc), so I would like to request just a few more features / parameters:
a) Latest version is 6.12 (no difference than 6.11)
b) New functions / switches have been added (they are probably old but appeared in help since v6.11)
--vbaq enable VBAQ
--pe enable Pre Encode
--pa enable Pre Analysis
--pa-sc <string> sensitivity of scenechange detection
- none, low, medium(default), high
--pa-ss <string> sensitivity of static scene detection
- none, low, medium(default), high
--pa-activity-type <string> block activity calcualtion mode
- y (default, yuv)
--pa-caq-strength <string> Content Adaptive Quantization (CAQ) strength
- low, medium(default), high
--pa-initqpsc <int> initial qp after scene change
--pa-fskip-maxqp <int> threshold to insert skip frame on static scene
c) New --vpp-delogo
d) The --vpp-smooth filter is a denoiser, so it could probably be moved from the "Misc 2" to the "Denoise" section of StaxRip
BR,
Nikos
Answers:
username_1: 1. Done ... but don't forget you can update it also by yourself, every time 😉
2. Such implementations are time consuming and it seems like not many users are using VCEEnc. Nevertheless I've implemented some params for the next release.
3. Not at the moment... same for NVEnc.
4. The effect might be similar, but it's definitely not a pure denoiser.
Status: Issue closed
|
odedolive/RedditPurge | 606578169 | Title: Help me pls
Question:
username_0: Hi, I'm just learning to program. Can I contact you to talk? I found your article on the Internet about how to log in to Facebook and make posts, and I would like to ask how you can like any post. Write me.
Answers:
username_1: Hi, what can I do for you?
Probably not the best medium to chat, so I might delete it later... but what do you need?
username_0: don’t know English, I will write using google translate, you have done several lessons how to use C # and selenium to log in to facebook and make posts. I would like you to say how I could like to post on Facebook, the Like button has no id. I do not know how to work with xpath and selector.
thanks for writing to me
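(For reference, clicking an element without an id is usually done via an XPath or CSS selector; a rough C# Selenium sketch, where the XPath below is hypothetical since Facebook's markup changes often:)
```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

var driver = new ChromeDriver();
// ... log in and navigate to the post first ...

// Find the Like button by its accessible label instead of an id.
IWebElement likeButton = driver.FindElement(
    By.XPath("//div[@aria-label='Like' and @role='button']"));
likeButton.Click();
```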
username_0: if you can write me an email <EMAIL>
Status: Issue closed
|
ccyang/ysx-stackdriver-alerts | 404678514 | Title: [ALERT] CPU request utilization on dada-cloud dada-ipc-app
Question:
username_0: Date: January 30, 2019 at 05:26PM
<EMAIL>
[Google Stackdriver alert email; recoverable content follows.]
**Alert firing: CPU request utilization**
CPU request utilization for dada-cloud dada-ipc-app is above the threshold of 1.3 with a value of 1.559.
Summary:
- Start time: Jan 30, 2019 at 9:22AM UTC (~3 min, 45 sec ago)
- Project: dada-cloud ([Cloud Console](https://console.cloud.google.com/?project=dada-cloud) | [Stackdriver](https://app.google.stackdriver.com/?project=dada-cloud))
- Policy: [CPU request utilization (containers)](https://app.google.stackdriver.com/policy-advanced/6212262078520598160?project=dada-cloud)
- Condition: CPU request utilization
- Metric: kubernetes.io/container/cpu/request_utilization
- Threshold: above 1.3
- Observed: 1.559
[View details](https://app.google.stackdriver.com/incidents/0.l3d78cg0s3cw?project=dada-cloud)
|
josdejong/mathjs | 145530929 | Title: Incorrect LaTeX for some predefined functions when passing the wrong number of parameters
Question:
username_0: I just noticed a bug in the LaTeX output. Let's say you parse `sin(a,b) = a+b; sin(1, 2)` and run `toTex()` on it, the result is `'\\mathrm{sin}\\left(x,\\mathrm{y}\\right):= x+ y;\\;\\;\n\\sin\\left(1\\right)'`. The last occurence of `sin` misses the second parameter.
This happens because `sin` provides a template for LaTeX generation that only uses one parameter: `'\\sin\\left\\(${args[0]\\}\\right)'`.
I have no idea how to fix this without doing one of the following:
* keeping track of redefined functions and disabling the LaTeX-Templates for them (ugly)
* change every single function that supplies it's own LaTeX-String to support all numbers of arguments. (much work + ugly)
* Let the parser modify the `.toTex` properties of functions once they are redefined (can't do this because of its side effects on the global mathjs instance)
Or we could just ignore this issue and hope that no one actually redefines one of the functions that provide a custom LaTeX output and still expects it to behave correctly with `toTex()`.
Answers:
username_1: wow, that's a tricky one. I doubt whether this would ever occur in practice but it is not impossible.
I think that your second option is best: change the LaTeX template of all functions to display all arguments (`'\\sin\\left(${args}\\right)'`) instead of just one or two. I don't think this is a particularly ugly solution; what do you find ugly about it?
It would be a relatively simple refactoring though we need to be careful to replace only the right occurrences. I can do the refactoring, this is easy to do with WebStorm.
username_0: It's ugly in two regards:
1. It needs to be done for every single function that defines it's own LaTeX output, even ones that will be added in the future.
2. It makes the template language I introduced quite pointless for most functions, because you would suddenly need to distinguish between different numbers of arguments, thereby requiring a dedicated handler function that handles those cases separately.
username_1: Thanks for your clear arguments. Yes for functions we will have to use `${args}` instead of `${args[0]}`, but I don't see a problem there. What I find more important is a solution which does not silently hide information from you, even if the information is partly invalid like `sin(1,2,3)`. It's up to the user to fix that but if you hide this from the user, he has no way to know about it and fix it.
The template language is and will be essential for all operators and some special functions like `nthRoot` and `combinations`. But we should not use the template stuff for the sake of using it, but where it's appropriate and needed.
username_0: I don't understand what you mean by "hiding information".
But reading the code, I just noticed that the solution to the problem I described doesn't actually require a handler function. It should be as easy as doing this for all functions (all other numbers of arguments will automatically fall back to the default template):
```js
sin.toTex = {
  // LaTeX template for 1 argument
  1: '\\sin\\left\\(${args[0]\\}\\right)'
}
```
```
See [FunctionNode line 354](https://github.com/username_1/mathjs/blob/26e1e26555b81b753d84eeb2bf6b99047e5682e6/lib/expression/node/FunctionNode.js#L354).
I didn't even remember that I wrote this, but now that I see it, I kind of remember using it for the LaTeX output of some of the functions, not sure which though.
username_0: I'm not sure why this isn't documented. We'd have to take a look at the discussion we had at the time. My guess is that it shouldn't be considered part of the API so it wasn't documented, but I'm not sure.
username_1: Ah, indeed :). I don't know why it wasn't documented.
With "hiding information" I just mean that if you input say `sin(1,2,3)`, and call toTex, the tex representation shows `sin(1)` and hides the last two arguments from you, which can give confusion since these two arguments are actually there.
So there are two options to replace the current templates:
```js
sin.toTex = '\\sin\\left\\(${args[0]\\}\\right)';
```
- option 1
```js
sin.toTex = {
  '1': '\\sin\\left\\(${args[0]\\}\\right)'
}
```
- option 2
```js
sin.toTex = '\\sin\\left\\(${args\\}\\right)';
```
Option 1 is most correct, but it contains a bit more magic. I have a slight preference for option 2 since that's simpler to understand and we're talking about an edge case only. I don't have a strong opinion here. What has your preference?
username_0: I strongly prefer option 1, because most LaTeX templates that were defined manually don't make too much sense if you extend them to support a variable number of arguments, and users will probably expect their overwritten functions to have the default LaTeX output.
Also many of the templates can't even be extended to support variable numbers of arguments because the LaTeX output would simply break or make no sense at all. Like `'\\left(${args[0]}' + latex.operators['xor'] + '${args[1]}\\right)'` or `'\\binom{${args[0]}}{${args[1]}}'`.
username_1: Ok lets go for option 1 then :)
username_0: I won't have the time to do this refactoring in the near future. You would either have to do it yourself or you'd have to wait a while until I find the time (which might be OK since this bug doesn't seem too critical).
username_1: I think I can do it, maybe this weekend. It's indeed not critical.
username_1: I've gone through all functions and made the `toTex` templates more strict, reckoning with the number of arguments.
I've left the functions which had the default template (`'\\mathrm{${name}}\\left(${args}\\right)'`) as they are, since some of them accept different numbers of arguments and it doesn't really help to make the `toTex` conditional since it falls back to the default template anyway.
Just thinking, maybe we should change the functions that use the default template:
```js
format.toTex = '\\mathrm{${name}}\\left(${args}\\right)';
```
to something like:
```js
format.toTex = undefined; // use default template
```
That way you get information that this function is using the default template, and it probably saves a few bytes of the bundle.
username_0: Will this get stripped during the minification? Otherwise maybe remove the line entirely.
But I like the idea of explicitly stating that the default template was used. Thereby it is possible to distinguish between cases where a proper `toTex` is still missing and cases where the default template was intended.
username_1: yes exactly, and we shouldn't lose that.
This shouldn't be stripped when minifying: the minifier cannot just decide to drop a property of an object because its (initial) value is undefined. That would alter the behavior of, for example, `Object.keys(obj)`.
Ok I will refactor the places where the default toTex is used to `format.toTex = undefined`.
username_1: Done in https://github.com/username_1/mathjs/commit/b2066e53f64ff0ccfcdabd31ca9476ba68e28c86
Status: Issue closed
|
jlippold/tweakCompatible | 303365650 | Title: `NonceSet` working on iOS 10.2
Question:
username_0: ```
{
"packageId": "com.julioverne.nonceset",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.julioverne.nonceset",
"deviceId": "iPhone6,1",
"url": "http://cydia.saurik.com/package/com.julioverne.nonceset/",
"iOSVersion": "10.2",
"packageVersionIndexed": false,
"packageName": "NonceSet",
"category": "Utilities",
"repository": "julioverne's Repo",
"name": "NonceSet",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.julioverne.nonceset",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.6",
"shortDescription": "Manage boot-nonce easy.",
"latest": "0.4",
"author": "julioverne",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
aws-amplify/amplify-cli | 1070775563 | Title: [blogUserPoolId] do not exist in the template
Question:
username_0: ### Before opening, please confirm:
- [X] I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
- [X] I have [searched for duplicate or closed issues](https://github.com/aws-amplify/amplify-cli/issues?q=is%3Aissue+).
- [X] I have read the guide for [submitting bug reports](https://github.com/aws-amplify/amplify-cli/blob/master/CONTRIBUTING.md#bug-reports).
- [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
### How did you install the Amplify CLI?
npm
### If applicable, what version of Node.js are you using?
_No response_
### Amplify CLI Version
7.6.2
### What operating system are you using?
Mac
### Amplify Categories
function
### Amplify Commands
push
### Describe the bug
The pull works fine, but when I make any change (even just a comment in the code) and push, it fails.
```
UPDATE_FAILED functionBlogbackendBlogTable AWS::CloudFormation::Stack Fri Dec 03 2021 21:34:59 GMT+0530 (India Standard Time) Parameters: [authBlogbackendUserPoolId] do not exist in the template
UPDATE_FAILED functionBlogbackendUserTable AWS::CloudFormation::Stack Fri Dec 03 2021 21:34:59 GMT+0530 (India Standard Time) Parameters: [authBlogbackendUserPoolId] do not exist in the template
```
### Expected behavior
It should push the local changes to backend.
### Reproduction steps
1. I upgraded amplify cli to 7.6.2 earlier I think it was 7.5.2.
2. amplify pull
3. amplify push
4. ERROR
### GraphQL schema(s)
<details>
```graphql
# Put schemas below this line
[Truncated]
</details>
### Log output
<details>
```
# Put your logs below this line
```
</details>
### Additional information
_No response_
Answers:
username_1: Hey @username_0 :wave: thanks for raising this! To clarify, are these functions custom resolvers for a GraphQL API, which have access to the auth resource?
username_0: These functions are not custom resolvers; they are DynamoDB CRUD trigger functions, which have auth access.
username_0: Thanks @username_1 for the hint. First I removed auth access from both of these functions, then I tried `amplify push`, and it worked.
Status: Issue closed
|
eigoninaritai-naokichi/SQLiteDatabaseOperator | 348347486 | Title: Add the ability to SELECT data from multiple tables using JOIN
Question:
username_0: First, design a specification for how data from multiple tables can be returned.
Answers:
username_0: First, wrap the arguments of SQLiteTableOperator.selectDataList in a class.
The join function will receive a list of these argument classes; the first table in the list becomes the main table, and the remaining tables become the join targets.
Each retrieved table is stored in its mapped class and returned.
Also handle the case where columns outside the mapped class need to be retrieved, by providing a function that returns a Cursor as well.
Since mainly LEFT OUTER JOIN and INNER JOIN will be used, create this function with those in mind.
ujamii/prometheus-sentry-exporter | 736053814 | Title: Some labels aren't idiomatic
Question:
username_0: Labels like `issue_first_seen` and `issue_last_seen` are a poor use of labels, because
1. Very high label cardinality is a Prometheus antipattern.
2. You never want to graph an issue "where issue_first_seen is 2020-03-23T12:00:06.430Z", so it shouldn't be reported as a label.
Instead, this should be a separate metric (e.g. the age since first/last seen, in seconds), or be dropped entirely, or at least be reduced to a "year-month" format, for example if someone wants to know how often old issues resurface.
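To make that concrete, here is a rough sketch in the Prometheus exposition format of what separate age metrics could look like; the metric and label names are hypothetical, not the exporter's actual output:
```
# HELP sentry_issue_age_seconds Seconds since the issue was first seen.
# TYPE sentry_issue_age_seconds gauge
sentry_issue_age_seconds{project="example-project"} 864000

# HELP sentry_issue_last_seen_age_seconds Seconds since the issue was last seen.
# TYPE sentry_issue_last_seen_age_seconds gauge
sentry_issue_last_seen_age_seconds{project="example-project"} 3600
```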
Answers:
username_1: Yes, you are right. There has already been some discussion about this in #5. You are more than welcome to propose a PR for this change.
username_0: Ah, noted. Sadly, I don't dabble in PHP anymore, so I'll try to come up with another way to fix this on our side. Thanks for replying so quickly though!
Status: Issue closed
|
Students-of-the-city-of-Kostroma/Student-timetable | 529533915 | Title: Develop and automate test scenarios for the Insert(Model model) method of the Нагрузка (Workload) entity
Question:
username_0: [Scenarios](https://docs.google.com/spreadsheets/d/114F1wKsHoGB75gmF2p_XUR5zgbUb6IeQNX1ziO_BSIw/edit#gid=1813879881)
Answers:
username_1: Good afternoon. There has been no activity in branch task-1240 for a long time.
username_2: 
For the report
username_1: Good afternoon. There has been no activity in branch task-1240 for a long time.
username_2: 
Fixed the code markup for the future report
username_1: Good afternoon. There has been no activity in branch task-1240 for a long time.
username_2: Please don't take the task away from me; I will set the priorities within a day.
username_1: If you don't have a valid reason for the inactivity, the task will be taken away from you.
username_2: The new changes will be uploaded within a day.
username_1: All intermediate results must be reflected in commits; your message carries no useful information.
username_2: There will be commits this evening.
username_1: The task has been taken away from you, since there has been no activity on it for a long time.
username_2: 
можно вернуть задачу?
username_1: Задача с Вас снята, так как по ней давно не было активности. Активность по основной задаче #1240 составляет 18 дней. Ветка отсутствует. Запрос отсутствует. Нет запросов, которые должен проверить преподаватель
username_1: Задача с Вас снята, так как по ней давно не было активности. Активность по основной задаче #1240 составляет 13 дней. Активность в ветке issue-1240 составляет 31 дней. Активность по запросу #1367 составляет 19 дней. Нет запросов, которые должен проверить преподаватель
Status: Issue closed
|
woodruffw/SimpleSession | 119106362 | Title: Option to close session (window) when a session is saved
Question:
username_0: Also, if it's the only window open, just empty it.
Nice package :+1:
Answers:
username_1: Thank you!
This is something I could add, but it might be better suited to a compound keybinding (through something like [Chain of Command](https://packagecontrol.io/packages/Chain%20of%20Command)).
I've been considering adding a configuration file - when I do, I'll probably add a `clear_on_save` option that can be toggled to achieve this.
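In the meantime, a chained binding with Chain of Command could look roughly like this in a user keymap; the `save_session` command name is a guess here, so check the package's actual command names:
```json
{
    "keys": ["ctrl+alt+s"],
    "command": "chain",
    "args": {
        "commands": [
            ["save_session"],
            ["close_window"]
        ]
    }
}
```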
username_0: I would like to have them as separate menu entries; it's probably not always something I'd want.
username_1: I've implemented your request in 96d5cb3 (tagged as 1.1.0) - it should be available on Package Control in a few hours.
Cheers,
William
Status: Issue closed
username_0: :+1: |
aleksanderwozniak/table_calendar | 436478349 | Title: forcedCalendarFormat is not working
Question:
username_0: Thanks for this awesome package. I want to update the calendarFormat using another button, but when updating the forcedCalendarFormat, the TableCalendar UI doesn't change.
Sample code:
```
import 'package:flutter/material.dart';
import 'package:table_calendar/table_calendar.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
CalendarFormat _calendarFormat = CalendarFormat.month;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: Column(
children: <Widget>[
TableCalendar(
forcedCalendarFormat: this._calendarFormat,
),
Text(this._calendarFormat.toString())
],
),
floatingActionButton: FloatingActionButton(
onPressed: () {
setState(() {
this._calendarFormat = (this._calendarFormat == CalendarFormat.month)
? CalendarFormat.twoWeeks
: CalendarFormat.month;
});
},
tooltip: 'Toggle',
child: Icon(Icons.add),
),
);
}
}
```
Answers:
username_1: Thanks for the report!
I am quite busy atm, but will look into it once I have some free time
username_2: Looking at the source code, seems like this feature was never actually implemented.
Will look into adding it in a fork, since I need it as well, and creating a pull request afterwards.
username_3: @username_2 - thanks for looking into this! Wondering if this was implemented yet?
username_1: After trying out many different solutions, I finally came up with one that is both efficient and easy to use. There was a big code refactor involved and I still need to polish it, but you can expect an update next week!
Status: Issue closed
username_1: Added with #84.
Update the package to version `2.0.0`, then follow [these steps](https://github.com/username_1/table_calendar#installation).
Finally, use this:
```dart
setState(() {
_calendarController.setCalendarFormat(CalendarFormat.week);
});
``` |
JTFouquier/mark2cure | 144087092 | Title: investigate how to enable intercom for anonymous users in training 1
Question:
username_0: ----------------------------------------
- Bitbucket: https://bitbucket.org/sulab/mark2cure/issue/37
- Originally reported by: [<NAME>](http://bitbucket.org/asu)
- Originally created at: 2014-12-10T23:27:36.759
Answers:
username_0: obsolete
----------------------------------------
Original comment by: [<NAME>](http://bitbucket.org/asu)
Status: Issue closed
|
KnpLabs/KnpPaginatorBundle | 465749245 | Title: Autoloader not loading bundle
Question:
username_0: I am updating old code to use the latest version of Symfony; however, I suddenly get the error
```
Class "Knp\Component\Pager\Event\Subscriber\Paginate\PaginationSubscriber" used for service "knp_paginator.subscriber.paginate" cannot be found.
```
I was able to trace similar errors in other bundles to a failure of composer to load anything using psr-0. Considering that knp-components currently uses psr-0 is there a known workaround?
The project currently uses knp-components 1.2.4 and knp-paginator-bundle 2.8.0
Answers:
username_1: Symfony version?
username_0: 2.8
username_1: That's an old version, currently suported only for security fixes.
This bundle currently requires Symfony 3.4 at least
username_0: I'm upgrading to 3.4, but I'm stuck with 2.8 until the project compiles without errors, and right now it doesn't, because nothing that uses psr-0 gets loaded.
username_1: Check your `vendor/composer/autoload_namespaces.php` file, it should contains something like this:
```php
<?php
// autoload_namespaces.php @generated by Composer
$vendorDir = dirname(dirname(__FILE__));
$baseDir = dirname($vendorDir);
return array(
'Twig_Extensions_' => array($vendorDir . '/twig/extensions/lib'),
'Twig_' => array($vendorDir . '/twig/twig/lib'),
'Knp\\Component' => array($vendorDir . '/knplabs/knp-components/src'),
);
```
username_0: This is where I noticed the problem. Only bundles that use psr-4 appear in the array. For whatever reason bundles with psr-0 get left out.
username_1: This is not what I see in my projects.
Vendors with psr-4 are in `autoload_psr4.php` file, while mentioned file contains psr-0 ones.
username_1: Also, check your composer version
username_0: composer version 1.8.6
I just realized I was looking in the wrong file; however, not all of the installed psr-0 bundles appear in the namespace file.
username_1: There's now an [issue on library repo](https://github.com/KnpLabs/knp-components/issues/227).
Anyway, if solved, it will merged in branch 2
username_1: A new release of knp-components is just out, with PSR-4
Status: Issue closed
|
microsoft/fluentui | 739678899 | Title: On Combobox Narrator is not reading as required even if aria-required="true" is set ,as aria-required attribute goes to parent hierarchy instead of <input> control
Question:
username_0: On Combobox, Narrator does not announce the field as required even when aria-required="true" is set, because the aria-required attribute ends up on the parent hierarchy instead of the `<input>` control.
### Environment Information
OS: Win10
Browser: Edge chromium Version 87.0.634.0 (Official build) dev (64-bit),
### Describe the issue:
On Combobox, Narrator does not announce the field as required even when aria-required="true" is set, because the aria-required attribute ends up on the parent hierarchy instead of the `<input>` control.
### Please provide a reproduction of the issue in a codepen:
combo_box_codepen_link
#### Actual behavior:
On Combobox, Narrator does not announce the field as required even when aria-required="true" is set, because the aria-required attribute ends up on the parent hierarchy instead of the `<input>` control.
#### Expected behavior:
On Combobox, Narrator should announce the field as required when aria-required="true" is set, and it should be possible to set the aria-required attribute on the `<input>`.
Answers:
username_1: @username_0 Thanks for filing this issue. We'll take a look.
username_0: @username_1 Could you please let us know when we could get the fix for it.
username_0: any update on this ...?
username_0: @username_1 Could you please let us know when we could get the fix for it.
username_2: @username_0 I ran into a similar issue and was able to get around this by setting `required={true}` on the Combo Box element. The `required` tag automatically added `aria-required` to the element itself and was read by the screen reader.
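As a rough sketch of that workaround (the `@fluentui/react` import path may differ per Fluent UI version):
```jsx
import { ComboBox } from '@fluentui/react';

// `required` puts aria-required on the inner <input>, so Narrator announces it.
const FruitPicker = () => (
  <ComboBox
    label="Fruit"
    required={true}
    options={[
      { key: 'apple', text: 'Apple' },
      { key: 'pear', text: 'Pear' },
    ]}
  />
);
```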
(My team was using a custom element that inherited from Combo Box, so I also had to add `required={this.props.required}` to the custom element so that the props would work) |
dcloudio/uni-app | 599977690 | Title: Bug report: returning data to the previous page fails on an nvue page
Question:
username_0: `back() {
//把结果返回到上一菜单
var pages = getCurrentPages();
pages[pages.length - 2].setData({
id: this.id
});
uni.navigateBack({
delta: 1
});
}`
This code works fine on a vue page, passing the id to the previous page, but on an nvue page the `pages[pages.length - 2]` statement reports undefined; this should be a bug.
Answers:
username_1: 1. The setData method is not cross-platform; do not use it to modify data.
2. For cross-page communication, you can use uni.$on and uni.$emit: https://ask.dcloud.net.cn/article/36010
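A minimal sketch of that pattern; the event name `'page-result'` is arbitrary here, and it assumes a uni-app version that provides `uni.$once`:
```js
// In the previous page's options: subscribe once for the result.
onLoad() {
  uni.$once('page-result', (data) => {
    this.id = data.id;
  });
}

// In the nvue page's methods: emit the result, then navigate back.
back() {
  uni.$emit('page-result', { id: this.id });
  uni.navigateBack({ delta: 1 });
}
```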
Status: Issue closed
|
orange-cloudfoundry/paas-templates | 910352528 | Title: mongodb - quota-enforcer not working
Question:
username_0: ### Expected behavior
* As a paas-templates operator
* In order to limit space usage
* I need to have quota-enforcer process working
### Observed behavior
When the master node is not the first mongodb node (index 0), the user-update process does not work.
The whole mongodb topology needs to be available in the context, so the program can use the real master node.
### Affected releases
* 50
* earlier versions
### Traces and logs
```
2021-06-03 06:53:29.735 ERROR 14737 --- [ scheduling-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
com.mongodb.MongoNotPrimaryException: Command failed with error 10107 (NotMaster): 'not master' on server 10.118.127.233:27017. The full response is { "operationTime" : { "$timestamp" : { "t" : 1622703205, "i" : 66 } }, "ok" : 0.0, "errmsg" : "not master", "code" : 10107, "codeName" : "NotMaster", "$clusterTime" : { "clusterTime" : { "$timestamp" : { "t" : , "i" : 66 } }, "signature" : { "hash" : { "$binary" : "=", "$type" : "00" }, "keyId" : { "$numberLong" : "" } } } }
```
Answers:
username_1: Fixed in mongodb v15 and will be delivered with v52
https://github.com/orange-cloudfoundry/mongodb-boshrelease/releases/tag/15
https://github.com/orange-cloudfoundry/mongodb-boshrelease/issues/80 |
clear-code/redmine_full_text_search | 558789860 | Title: Search results differ between top level and project level
Question:
username_0: Thank you for this valuable plugin. After the #79 fix was applied, another issue has been identified.
Issue description:
Projects are defined as follows, and the same file1 is attached to the sub-projects:
Project1 (parent)
Project1-1 (child of Project1) test file1 attached to this project
Project1-2 (child of Project1) test file1 attached to this project
If the search is requested from the Project1 level, the results are as expected (file1 hits for each sub-project).
If the search is requested from the top level (without project selection), the results are also as expected (file1 hits for each sub-project).
If the search is requested from the Project1-1 or Project1-2 level, there are no hits for the same search word.
(A file attached to a ticket in Project1-1 is found fine both from the parent levels and from the sub-project level.)
Could you please confirm the issue?

Status: Issue closed
Answers:
username_1: Thanks for your report.
I've fixed it. But the drilldown count isn't consistent... the "All" count doesn't equal the sum of the other drilldown counts...
username_0: Thank you for your kind attention, and my apologies for the delayed response.
I confirmed the issue has been fixed.
google/digitalbuildings | 862325652 | Title: Enhancement on ontology_match_lib.py
Question:
username_0: ontology_match_lib.py is great for finding the closest type with a proximity level, e.g. NONE, CLOSE, INCOMPLETE and EXACT. May I suggest enhancing the tool so that it can also display the closest set of abstract types?
For example, an FCU consists of the following points:
```python
real_fields = {
    "chilled_water_valve_percentage_command",
    "chilled_water_valve_percentage_sensor",
    "discharge_fan_run_command",
    "discharge_fan_run_status",
    "schedule_run_command",
    "zone_air_temperature_sensor",
    "zone_air_temperature_setpoint",
}
fit = ont.find_best_fit_type(real_fields, 'HVAC', 'FCU')
```
When I ran the tool, it would display the following result:
```
MATCH COMPLETENESS: 'INCOMPLETE'
MATCHED TYPE: 'FCU_DFSS_DFVSC_ZTC_CHWZTC_DTC_CO2C'

ACTUAL FIELDS                            TYPE FIELDS                              REQUIRED
=======================================  =======================================  ========
chilled_water_valve_percentage_command   chilled_water_valve_percentage_command   True
chilled_water_valve_percentage_sensor    chilled_water_valve_percentage_sensor    False
                                         discharge_air_temperature_sensor         True
                                         discharge_air_temperature_setpoint       True
discharge_fan_run_command                discharge_fan_run_command                True
discharge_fan_run_status                 discharge_fan_run_status                 True
                                         discharge_fan_speed_percentage_command   True
schedule_run_command                     schedule_run_command                     False
                                         zone_air_co2_concentration_sensor        True
                                         zone_air_co2_concentration_setpoint      True
zone_air_temperature_sensor              zone_air_temperature_sensor              True
zone_air_temperature_setpoint            zone_air_temperature_setpoint            True
```
The closest type it returned was FCU_DFSS_DFVSC_ZTC_CHWZTC_DTC_CO2C (a new type I created for another FCU device). Since the match completeness was 'INCOMPLETE', it still did not fulfil the requirement. Looking for the most appropriate abstract types, I could use CHWVM, DFSS and ZTC to create the following new type:
```yaml
FCU_DFSS_CHWVM_ZTC:
  id: "8815310973633036290"
  description: "tbc"
  is_canonical: true
  opt_uses:
    - schedule_run_command
  implements:
    - CHWVM
    - DFSS
    - ZTC
```
May I know whether you could enhance the tool to display the following extra table? From the example below, I could easily tell that I need to create a new type with the abstract types listed in the table.
```
ACTUAL FIELDS                            TYPE FIELDS                              ABSTRACT
=======================================  =======================================  ========
chilled_water_valve_percentage_command   chilled_water_valve_percentage_command   CHWVM
chilled_water_valve_percentage_sensor    chilled_water_valve_percentage_sensor    CHWVM
discharge_fan_run_command                discharge_fan_run_command                DFSS
discharge_fan_run_status                 discharge_fan_run_status                 DFSS
schedule_run_command                     schedule_run_command
zone_air_temperature_sensor              zone_air_temperature_sensor              ZTC
zone_air_temperature_setpoint            zone_air_temperature_setpoint            ZTC
```
Answers:
username_1: I will take a look next week and see if I can add it. It's a bit more complex than the regular type match, but there may be a way.
username_2: @username_1 bump
Status: Issue closed
username_1: This is fairly complex, and the new ontology explorer should cover most of the need for type matching.
robinwhittleton/freeman-wills-crofts_the-pit-prop-syndicate | 268202089 | Title: Remove quotes around ship name in ToC
Question:
username_0: Generally items are _either_ italicized _or_ quoted, but not both. Ship names are italicized so quotes need to be removed in the chapter titles within the files themselves, and also in the ToC.
Answers:
username_0: You can also remove the quotes from `<title>` tags, even though we can't put italics in `<title>`.
username_1: Please explain more, thanks
username_2: No problem, fixed in https://github.com/username_2/freeman-wills-crofts_the-pit-prop-syndicate/commit/f84d3510d46b5660861018021b342f8d4b376f56 .
Status: Issue closed
|
FISCO-BCOS/FISCO-BCOS | 523272708 | Title: 编译错误:error: catching polymorphic type ‘class std::exception’ by value [-Werror=catch-value=]
Question:
username_0: 你好,Ubuntu 18 下编译出错:
2019-11-15 编译源码出错,错误信息如下:
[[ 97%] Built target eventfilter
Scanning dependencies of target initializer
[ 98%] Building CXX object libinitializer/CMakeFiles/initializer.dir/BoostLogInitializer.cpp.o
[ 98%] Building CXX object libinitializer/CMakeFiles/initializer.dir/GlobalConfigureInitializer.cpp.o
[ 99%] Building CXX object libinitializer/CMakeFiles/initializer.dir/Initializer.cpp.o
[ 99%] Building CXX object libinitializer/CMakeFiles/initializer.dir/LedgerInitializer.cpp.o
[ 99%] Building CXX object libinitializer/CMakeFiles/initializer.dir/P2PInitializer.cpp.o
[100%] Building CXX object libinitializer/CMakeFiles/initializer.dir/RPCInitializer.cpp.o
[100%] Building CXX object libinitializer/CMakeFiles/initializer.dir/SecureInitializer.cpp.o
[100%] Linking CXX static library libinitializer.a
[100%] Built target initializer
Scanning dependencies of target fisco-bcos
[100%] Building CXX object fisco-bcos/main/CMakeFiles/fisco-bcos.dir/main.cpp.o
In file included from /~/~/app/FISCO-BCOS/FISCO-BCOS/fisco-bcos/main/main.cpp:25:
/~/~/app/FISCO-BCOS/FISCO-BCOS/libdevcore/FileSignal.h: In static member function ‘static void dev::FileSignal::callIfFileExist(const string&, std::function<void()>)’:
/~/~/app/FISCO-BCOS/FISCO-BCOS/libdevcore/FileSignal.h:51:31: error: catching polymorphic type ‘class std::exception’ by value [-Werror=catch-value=]
catch (std::exception _e)
^~
cc1plus: all warnings being treated as errors
make[2]: *** [fisco-bcos/main/CMakeFiles/fisco-bcos.dir/build.make:63:fisco-bcos/main/CMakeFiles/fisco-bcos.dir/main.cpp.o] 错误 1
make[1]: *** [CMakeFiles/Makefile2:2417:fisco-bcos/main/CMakeFiles/fisco-bcos.dir/all] 错误 2
make: *** [Makefile:152:all] 错误 2
Answers:
username_1: 编译器把warning当成错误了。编译时设置一下参数,忽略告警
username_0: OK 问题解决。
Status: Issue closed
|
WayneKeenan/python-vrzero | 395551009 | Title: what to do to run on x86?
Question:
username_0: hi, thanks for the awesome work!
I successfully ran your package on a Raspberry Pi; now I'm trying to run it on x86.
To make things simple (or more compatible?), I installed the x86 version of Raspbian Stretch on my x86 machine.
I went through your instructions and it seemed to me that the only ARM-dependent package is OpenHMD, so I installed the x86 version like so:
```
wget http://ftp.us.debian.org/debian/pool/main/libo/libopenhmd/libopenhmd0_0.2.0-5_i386.deb
cd ..
sudo dpkg -i install/libopenhmd0_0.2.0-5_i386.deb
sudo apt-get install -f
sudo ldconfig
```
When running the abbey.py demo, I get:
```
Traceback (most recent call last):
  File "./abbey.py", line 7, in <module>
    from vrzero import engine
  File "/usr/local/lib/python3.5/dist-packages/vrzero-0.0.1-py3.5.egg/vrzero/__init__.py", line 2, in <module>
    from .vrzero import (
  File "/usr/local/lib/python3.5/dist-packages/vrzero-0.0.1-py3.5.egg/vrzero/vrzero.py", line 6, in <module>
    from .hmd import OpenHMD
  File "/usr/local/lib/python3.5/dist-packages/vrzero-0.0.1-py3.5.egg/vrzero/hmd.py", line 18, in <module>
    lib = cdll.LoadLibrary('libopenhmd.so')
  File "/usr/lib/python3.5/ctypes/__init__.py", line 425, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.5/ctypes/__init__.py", line 347, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libopenhmd.so: cannot open shared object file: No such file or directory
```
"whereis libopenhmd.so" doesn't deliver anything...
any clue, what to do?
Answers:
username_0: Actually, changing the hmd.py file to load `.so.0` allowed me to start the demo without errors, as the OP of #7 suggested. However, now both the HMD and the monitor turn black and I can't get out of there anymore. I did a remote login to reboot the machine. Should I try to use the fork you made in #7?
username_1: It's been a long time and this project isn't my main focus anymore so all I can suggest is give it a try.
Status: Issue closed
username_0: thanks for that project, it was fun. |
udacity/cn-dlnd-issue-reports | 351834981 | Title: 循环神经网络>实验:实现RNN和LSTM>1.RNN入门
Question:
username_0: ## 课程导航+地址
循环神经网络>实验:实现RNN和LSTM>[1.RNN入门](https://classroom.udacity.com/nanodegrees/nd101-cn-advanced/parts/3a7742cb-4b38-425b-b53c-d85f8da494c9/modules/9c1e2e43-479b-4747-ad67-505961e7d7e9/lessons/a8c4d04d-282c-44de-a42e-b14981451264/concepts/a10e7178-8abe-44bf-8c0a-6cc35dccb61a#)
## 截图

## Description
1. At 0:24 in the video, the subtitles do not match the audio.
Status: Issue closed |
FreeUKGen/MyopicVicar | 296177223 | Title: Add documents to Help for Syndicate Coordinators
Question:
username_0: Obtaining approval of the County Records Office.
Obtaining approval of the Vicar.
Obtaining approval of other sources.
Answers:
username_1: IMO this is NOT a responsibility of the SC but one for the County Coordinator.
These documents used to exist in the FR1 help materials.
username_2: Many coordinators have tried sending letters of various formats to CROs over the years with little success (they are generally ignored). Rather than giving them another letter to send, they are looking for an official approach to the CRO with some kind of legal backup showing that we are entitled to have access to images.
username_0: Sure, I have changed the title to reflect the changes needed (i.e. the process we now have for me contacting record offices unless a CC contact is well established).
username_2: No programming requirement, this is only a documentation update.
username_1: Does @username_5 know what is required?
username_2: I'm assuming @username_0 will tell her.
username_0: Nothing for Alison to do at the moment. Just me.
Text last edited in 2008, so complete re-write is needed, which I've begun.
username_0: I've now updated the 2008 version.
Things to be done noted in the story at the top. Anything else to be added? Example of letter of permission to photograph / transcribe from a CRO?
username_0: Moved bulk to the "Information for the SWAT team" page (story #2319).
The text at Information for Coordinators DRAFT has been updated and needs review. Possibly it also needs a direct link to the Information for the SWAT team page, between the links to the Image Server and Behind the Scenes?
username_0: Mick, Steve and Eric to take a look.
username_3: I suggest that CCs should NOT approach CROs while Mick is finding out the best approach. My experience is that CROs just say NO to avoid any extra work, without any real reason behind it. So we need to set a precedent, and then we can approach other CROs to say that XYZ county has given permission, and for what reason, so please do the same. They will then know that what we are doing is legal and approved. It may need TNA's endorsement.
username_0: In line with Eric's comment on 17 Sept,
I have put a section between 'The Image Server' and 'Acquiring transcriptions from other projects' in the Information for Coordinators page on test3.
Text:
Obtaining images to transcribe has proven to be very difficult. FreeREG has therefore set up a team of volunteers who will take the lead in working with potential partners to secure access to images to transcribe. Please contact <EMAIL> with any specific requests, or if you are aware of a source which might be available to us. Under no circumstances can a Free UK Genealogy volunteer transcribe from an image which has been downloaded "for personal use only" (or similar condition), e.g. from a commercial genealogical site.
I have replaced point 3 in the Coordinator's responsibilities with this:
Liaise with existing sources of transcriptions (e.g. the relevant County Record Office / Archive). Where there is no existing relationship, support the SWAT Team in obtaining permission, if requested.
Line 2 of transcription workflow now:
Inform the SWAT team
Feedback on the above is requested @username_3 @username_2 (also Mick - will inform by email)
username_3: Pat,
I can't see your new section in Test3.
In "Acquiring data from other transcription projects" the CC is asked to contact the source. I suggest that they do not but contact the SWAT team so that a formal approach can be made and proper permission obtained.
Also "Obtaining the approval of The Church of England" suggests that the CC does it. No mention of the SWAT team.
Eric
username_0: 
Changed the 'acquiring data' section and removed 'obtaining approval' link.
Do we ask the SWAT team to upload donated transcriptions, or the Coord? @username_1 is implementing a role on FreeCEN2 to do something a bit similar, so it may not need to be a DataManager who needs to do this.
I've removed the link to Church of England approval.
I have made a copy (DRAFT) of the Other Sources and removed links no longer needed.
username_3: I suggest that whoever has the images should upload them. Even the donor if they are prepared to register and be given permission. It is much better than having to transfer huge amounts of data over the internet only to transfer it again to the image server. But whoever uploads it needs to understand about Image Groups or the CC needs to set them up in advance.
username_0: Transcriptions should be uploaded by the coord. Ditto for images.
username_0: Added: Please continue to work as normal with the Record Office / Archives for your County, but do not make any new approaches at this time.
Added: Only the County Coordinator should upload images to the server. You can use the Free UK Genealogy Dropbox account to transfer images to the appropriate County Coordinator.
Added: Transcriptions received from others, such as a community project, church project, museum or archive, should only be uploaded by the County Coordinator. Please use the name of the project or institution (e.g. "St James' Local History Group" or "Little Wittering Museum") rather than the name of the person you have corresponded with (e.g. "<NAME>") as the name of the transcriber.
Please let me know if further changes are needed.
username_0: Pat to ask Mick to take a look
username_0: Contacted Mick 6 Jan 2021
username_4: Pat to check for reply
username_0: Thanks all, now needs final proofing
username_0: @username_0 to check which page and let @username_5 know
username_5: The draft page, "Information for Coordinators DRAFT" promoted to "Information for Coordinators". Page is sync-ready.
Status: Issue closed
|
fuse-box/fuse-box | 254814755 | Title: Quantum and wildcard import
Question:
username_0: When importing with wildcards (`import * as stuffs from '~/stuffs/*'`), quantum doesn't resolve the file-names as "file numbers". Though, it could do it as it is sure all the files are bundled already.
Instead of letting the `require` statement as is, it could develop the wildcard in the bundle.
So, instead of - at run-time - listing all the files that match the path, having in the quantum-bundle something like :
```js
var stuffs = {'stuffs/blung': $fsx.r(19), 'stuffs/bling': $fsx.r(7), ...};
```
Answers:
username_1: Yeah I though about the same, perhaps in the next iteration
username_2: Hi @username_1
Any progress on this issue?
username_1: @username_2, unfortunately, no, due to a very limited amount of requests on this issue. You can, however, keep using the basic build, nothing wrong with it, there is a very non-significant overhead at runtime tho. Depends on the size of your project.
Status: Issue closed
username_1: ### FuseBox v4 Major Upgrade
As of today, the latest major version of FuseBox `v4` is available in `fuse-box@next`. FuseBox has undergone a full re-write from scratch, learning from its mistakes in the previous versions.
There are a lot of improvements, speed and config wise.
Get started [here](https://github.com/fuse-box/fuse-box/blob/master/docs/getting-started/get-started.md)
We are still working on the documentation and polishing before the major release. We are getting very close. And we need help testing.
```bash
npm install fuse-box@next
```
Most of the issues that I am closing have been resolved, however, if you find that your issue is still relevant in `v4` don't hesitate to re-open or (better) create a new one with `v4` configuration. |
mni0000/TFG_InterpretacionEscalas_UBU | 393701700 | Title: Modify the screens that generate the charts.
Question:
username_0: These screens must allow the user to select a date range that limits which of each student's evaluations ultimately appear in the chart.
It is also worth considering the use of HTML for creating the chart, since it offers more functionality.
Status: Issue closed |
EmbeddedRPC/erpc | 359370707 | Title: Python inout fn parameters
Question:
username_0: What's the right way to use inout params in Python?
What I do is create a Reference class, but it doesn't seem to work.
The issue is:
1. At the top of a function there is an assert checking for Reference class:
```
{% for p in fn.outParameters if not p.serializedViaMember %}
assert type({$p.name}) is erpc.Reference, "{$p.direction} parameter must be a Reference object"
{% endfor -- outParams %}
```
2. A few lines below, the parameters are serialized:
`{% for p in fn.inParameters if not p.serializedViaMember %}`
which in
`{% def encodeValue(info, name, codec, indent, depth) %}`
resolves to:
`{$name}._write({$codec}){%>%}`
but `{$name}` has no member `_write`, as it is of class `Reference`.
It probably should be something like:
`{$name}.value._write({$codec}){%>%}`
in this case, but not for plain `in` function parameters, only for `inout` ones.
Status: Issue closed |
ember-learn/guides-source | 410721373 | Title: Remove Positional Params from Guides
Question:
username_0: In #456 we added a note to the Positional Params section explaining that they are only available with classic invocation syntax.
At some point we should consider removing the Positional Params section from the guides entirely, since they aren't considered a best practice, and aren't available with Angle Bracket syntax.
Status: Issue closed
Answers:
username_1: This has since been addressed. Thank you! |
pingcap/tidb | 958636760 | Title: select return error:ERROR 1105 (HY000): runtime error:“ integer divide by zero occasionally” or ”index out of range“
Question:
username_0: ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
```sql
CREATE TABLE `tbl_35` (
  `col_244` smallint(6) NOT NULL DEFAULT '-12567',
  `col_245` bigint(20) unsigned DEFAULT NULL,
  `col_246` tinyint(4) NOT NULL,
  `col_247` tinyint(1) NOT NULL,
  `col_248` mediumint(8) unsigned DEFAULT NULL,
  `col_249` bigint(20) NOT NULL,
  `col_250` tinyint(3) unsigned NOT NULL DEFAULT '52',
  `col_251` int(10) unsigned NOT NULL DEFAULT '3672190216',
  PRIMARY KEY (`col_251`,`col_249`) /*T![clustered_index] CLUSTERED */,
  KEY `idx_82` (`col_250`,`col_248`,`col_251`,`col_244`),
  UNIQUE KEY `idx_83` (`col_248`,`col_250`,`col_247`,`col_246`,`col_244`,`col_249`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
PARTITION BY HASH( `col_249` )
PARTITIONS 6;

insert into tbl_35 values(-12567,2039475165784309258, 12,1, 295431, 5108568534753042951, 52,3672190216);
insert into tbl_35 values( 10921,7383050426504683878, -24,1, 575644,-8481892470407815658, 69,1178171067);
insert into tbl_35 values(-12567, 31055430965522696, 89,1,15938351,-5305129692297068462, 52,3672190216);
insert into tbl_35 values( 16990,6364109974099788726, 45,1, 6085727, 4046315604437132474, 52,3672190216);
insert into tbl_35 values(-30627, 6587933123738203, -59,1, 4818297, 219835403999226661,195, 571380197);
insert into tbl_35 values(-12567, 420855916349948304, 4,0, 5333525,-7854584778698567131, 52,3672190216);
insert into tbl_35 values( 1261,8560887231683292749,-112,0, 1845818,-7456880248953153031, 52,3672190216);
insert into tbl_35 values(-12567,4101176173470815398, -20,0, 7736327, -879859309190038219, 52,3672190216);
insert into tbl_35 values(-12567,4710177654909059880, 11,1, 2925302, 1284439958228355577, 52,3672190216);
insert into tbl_35 values(-23584,4718069391352853238, -89,1, 451974, 1507620968110739479, 52,3672190216);
insert into tbl_35 values(-12567,1990284829886584698, -89,0, 99459, 1875855323515394959, 52,3672190216);
insert into tbl_35 values( 24508,6778324720546396927, -92,1, NULL,-2450297623516827509, 52,3672190216);
insert into tbl_35 values( 23555,3290440220320110711, 68,1,14307176, -165079195802779661, 52,3672190216);
insert into tbl_35 values(-12567,6976609217105510486, -32,1,12696332, 2135258411092839407, 52,3672190216);
insert into tbl_35 values(-26628,2510884000352962361, -30,0, 7800286, 3287043294638418065, 52,3672190216);
insert into tbl_35 values(-12567,4408923544517225002, 20,1, 194353, 3445749248693394665, 52,3672190216);
insert into tbl_35 values(-12567,5083946274055385841, -24,0,13950880, 7463945667013917749, 52,3672190216);
insert into tbl_35 values(-12567,6230179212097425583, 88,0, 8035271, 7555444438016890289, 52,3672190216);
insert into tbl_35 values(-19438, 937153351609097690, 22,0, 7800522, -524450175229525540,126,2658864355);
insert into tbl_35 values(-12567,7854999488979625038,-109,1,15493020,-3866114964040071826, 52,3672190216);
insert into tbl_35 values(-12567,5880030640634864989, 28,0, 2719798, 1208590981860196858, 52,3672190216);
insert into tbl_35 values(-15088,5015524994767041146, 60,1, 1943533, 2233557247694087902, 52,3672190216);
insert into tbl_35 values(-12567,7670541236326225813, -83,1, 8537665, 2252578468814218504, 52,3672190216);
insert into tbl_35 values(-12567,3916467909459577005, 37,1,14079968, 4391363789796645970, 52,3672190216);
insert into tbl_35 values(-12567, 339507939220463717, 20,1, 4486826, 5394720489477460606, 52,3672190216);
insert into tbl_35 values(-12567,8041435190777307590, 1,0,11817423,-8797651559769414378, 52,3672190216);
insert into tbl_35 values( 9001,3021671509253755664, 17,0,14495370,-6457616006985138726, 52,3672190216);
insert into tbl_35 values(-12567,1763716571886143420, 123,0, 3832916,-5230555791873807348, 52,3672190216);

select /*+ agg_to_cop() stream_agg() */ approx_percentile( col_250 , 100 ) aggCol from (select * from tbl_35 where not( IsNull( tbl_35.col_249 ) ) or tbl_35.col_246 not in ( -93 ) or not( tbl_35.col_245 between 1861738122396168665 and 6503630781418033805 ) or tbl_35.col_245 <= 3420550844659577896 order by col_251,col_249 ) ordered_tbl order by aggCol;
```
### 2. What did you expect to see? (Required)
```sql
mysql> select /*+ agg_to_cop() stream_agg() */ approx_percentile( col_250 , 100 ) aggCol from (select * from tbl_35 where not( IsNull( tbl_35.col_249 ) ) or tbl_35.col_246 not in ( -93 )
    -> or not( tbl_35.col_245 between 1861738122396168665 and 6503630781418033805 ) or tbl_35.col_245 <= 3420550844659577896 order by col_251,col_249 ) ordered_tbl order by aggCol;
+--------+
| aggCol |
```
[Truncated]
tidb log:
1. index out of range [28] with length 28:
[2021/08/03 01:33:09.963 +00:00] [INFO] [conn.go:995] ["command dispatched failed"] [conn=9057] [connInfo="id:9057, addr:192.168.122.1:5823 status:10, collation:utf8_general_ci, user:root"] [command=Query] [status="inTxn:0, autocommit:1"] [sql="select /*+ agg_to_cop() stream_agg() */ approx_percentile( col_250 , 100 ) aggCol from (select * from t where not( IsNull( t.col_249 ) ) or t.col_246 not in ( -93 ) or not( t.col_245 between 1861738122396168665 and 6503630781418033805 ) or t.col_245 <= 3420550844659577896 order by col_251,col_249 ) ordered_tbl order by aggCol"] [txn_mode=PESSIMISTIC] [err="runtime error: index out of range [28] with length 28\ngithub.com/pingcap/tidb/executor.(*recordSet).Next.func1\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/adapter.go:141\nruntime.gopanic\n\t/usr/local/go/src/runtime/panic.go:965\nruntime.goPanicIndex\n\t/usr/local/go/src/runtime/panic.go:88\ngithub.com/pingcap/tidb/executor/aggfuncs.partialResult4PercentileInt.Swap\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggfuncs/func_percentile.go:76\ngithub.com/pingcap/tidb/util/selection.partitionIntro\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:137\ngithub.com/pingcap/tidb/util/selection.medianOfMedians\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:74\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:40\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.Select\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:27\ngithub.com/pingcap/tidb/executor/aggfuncs.percentile\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggfuncs/func_percentile.go:43\ngithub.com/pingcap/tidb/executor/aggfuncs.(*percentileOriginal4Int).AppendFinalResult2Chunk\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggfuncs/func_percentile.go:172\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).appendResult2Chunk\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go
:1376\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).consumeCurGroupRowsAndFetchChild\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go:1359\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).consumeOneGroup\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go:1296\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go:1266\ngithub.com/pingcap/tidb/executor.Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/executor.go:285\ngithub.com/pingcap/tidb/executor.(*SortExec).fetchRowChunks\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/sort.go:228\ngithub.com/pingcap/tidb/executor.(*SortExec).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/sort.go:112\ngithub.com/pingcap/tidb/executor.Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/executor.go:285\ngithub.com/pingcap/tidb/executor.(*recordSet).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/adapter.go:145\ngithub.com/pingcap/tidb/server.(*tidbResultSet).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/driver_tidb.go:305\ngithub.com/pingcap/tidb/server.(*clientConn).writeChunks\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1993\ngithub.com/pingcap/tidb/server.(*clientConn).writeResultset\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1941\ngithub.com/pingcap/tidb/server.(*clientConn).handleStmt\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1835\ngithub.com/pingcap/tidb/server.(*clientConn).handleQuery\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1681\ngithub.com/pingcap/tidb/server.(*clientConn).dispatch\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1215\ngithub.com/pingcap/tidb/server.(*clientConn).Run\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:978"]
2. runtime error: integer divide by zero:
[2021/08/03 01:30:08.685 +00:00] [INFO] [conn.go:995] ["command dispatched failed"] [conn=9057] [connInfo="id:9057, addr:192.168.122.1:5823 status:10, collation:utf8_general_ci, user:root"] [command=Query] [status="inTxn:0, autocommit:1"] [sql="select /*+ agg_to_cop() stream_agg() */ approx_percentile( col_250 , 100 ) aggCol from (select * from t where not( IsNull( t.col_249 ) ) or t.col_246 not in ( -93 ) or not( t.col_245 between 1861738122396168665 and 6503630781418033805 ) or t.col_245 <= 3420550844659577896 order by col_251,col_249 ) ordered_tbl order by aggCol"] [txn_mode=PESSIMISTIC] [err="runtime error: integer divide by zero\ngithub.com/pingcap/tidb/executor.(*recordSet).Next.func1\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/adapter.go:141\nruntime.gopanic\n\t/usr/local/go/src/runtime/panic.go:965\nruntime.panicdivide\n\t/usr/local/go/src/runtime/panic.go:191\ngithub.com/pingcap/tidb/util/selection.randomPivot\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:85\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:43\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.introselect\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:50\ngithub.com/pingcap/tidb/util/selection.Select\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/util/selection/selection.go:27\ngithub.com/pingcap/tidb/executor/aggfuncs.percentile\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggfuncs/func_percentile.go:43\ngithub.com/pingcap/tidb/executor/aggfuncs.(*percentileOriginal4Int).AppendFinalResult2Chunk\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggfuncs/func_percentile.go:172\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).appendResult2Chunk\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go:1376\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).consumeCurGroupRowsAndFetchChild\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go:1359\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).consumeOneGroup\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go:1296\ngithub.com/pingcap/tidb/executor.(*StreamAggExec).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/aggregate.go:1266\ngithub.com/
pingcap/tidb/executor.Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/executor.go:285\ngithub.com/pingcap/tidb/executor.(*SortExec).fetchRowChunks\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/sort.go:228\ngithub.com/pingcap/tidb/executor.(*SortExec).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/sort.go:112\ngithub.com/pingcap/tidb/executor.Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/executor.go:285\ngithub.com/pingcap/tidb/executor.(*recordSet).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/executor/adapter.go:145\ngithub.com/pingcap/tidb/server.(*tidbResultSet).Next\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/driver_tidb.go:305\ngithub.com/pingcap/tidb/server.(*clientConn).writeChunks\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1993\ngithub.com/pingcap/tidb/server.(*clientConn).writeResultset\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1941\ngithub.com/pingcap/tidb/server.(*clientConn).handleStmt\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1835\ngithub.com/pingcap/tidb/server.(*clientConn).handleQuery\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1681\ngithub.com/pingcap/tidb/server.(*clientConn).dispatch\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:1215\ngithub.com/pingcap/tidb/server.(*clientConn).Run\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/conn.go:978\ngithub.com/pingcap/tidb/server.(*Server).onConn\n\t/home/jenkins/agent/workspace/build_tidb_multi_branch_master/go/src/github.com/pingcap/tidb/server/server.go:485\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371"]
### 4. What is your TiDB version? (Required)
Release Version: v5.2.0-alpha-440-g17523d3da
Edition: Community
Git Commit Hash: 17523d3da8fe863979ae93277424f4ac5f0e9aa6
Git Branch: master
UTC Build Time: 2021-07-30 09:50:40
GoVersion: go1.16.4
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
Answers:
username_1: @leiysky PTAL it seems caused by https://github.com/pingcap/tidb/pull/19799
username_1: /cc @leiysky |
PointCloudLibrary/pcl | 272654028 | Title: OctreeIteratorBase bug in comparison operators ==/!=
Question:
username_0: <!--- Provide a general summary of the issue in the Title above -->
There is a bug in octree_iterator.h in operator==() and operator!=(), which can be found between lines 136 and 151.
The boolean output of operator==() is not equal to the negation of the boolean output of operator!=().
## Your Environment
* Operating System and version: All
* Compiler: All
* PCL Version: Latest ( and most likely all previous ones )
## Expected Behavior
(itA==itB) == !(itA!=itB);
## Current Behavior
(itA==itB) != !(itA!=itB);
## Possible Solution
A quick fix might be this ( at least with that the code below will work ).
```
bool operator==(const OctreeIteratorBase& other) const
{
return !this->operator!=(other);
}
```
However, the problem seems to be more deeply nested inside the implementation of OctreeIteratorBase. It appears wrong that with octree.breadth_end() we get an iterator which is not related at all to the underlying octree. In other words
```
auto aIt = octreeA.breadth_end();
auto bIt = octreeB.breadth_end();
```
The members of iterator aIt and bIt are absolutely identical.
## Code to Reproduce
```
// works as expected
auto it = octree.breadth_begin();
const auto it_end = octree.breadth_end();
for (it; it != it_end; it++)
{
cout << "Hello" << endl;
}
```
```
// stuck in an endless loop
auto it = octree.breadth_begin();
const auto it_end = octree.breadth_end();
while (true)
{
if (it == it_end)
break;
else
it++;
}
```
Answers:
username_1: You tripped into something. The issue is that the all end iterators are being created with the default constructor, when they should mimic the behavior of "some next element that should not be dereferenced".
username_1: @username_2 Can I count on you to have a look at this one then?
username_2: I think that I will be able to deal with this issue in the beginning of the next week. If I fail to get this thing done, I'll let you know :wink:
username_2: Ok, finally this issue is fixed in the branch `octree_iterator_fix`:
https://github.com/username_2/pcl/tree/octree_iterator_fix
I'll wait for pull request #1983 to be finished before proposing a new one for this issue.
Status: Issue closed
|
evanw/esbuild | 780875496 | Title: Access inline sourcemap produced using Transform API in TransformResult
Question:
username_0: It'd be handy to be able to access a sourcemap that's generated and stored inline in a `.transform`'d file from the `TransformResult` as well. Right now, if `sourcemap: "inline"` is set, there's no sourcemap in the returned `TransformResult`. If it's not a major performance hit, it'd be awesome to just have access to the map when using `sourcemaps: "inline"` by default, but if not, a `"both"` option a la [babel](https://babeljs.io/docs/en/options#sourcemaps) might make sense too.
My use case is this: when using esbuild in a development setting, sometimes there are multiple downstream consumers, each expecting to find sourcemaps in their own strange way. In my case, I'm using `esbuild-jest` to transform TypeScript files for Jest, and then using VSCode to edit those source files. VSCode accesses sourcemaps on disk or via the inline `data:` URLs, so if I want to debug my code with breakpoints from the editor, I want to pass `sourcemap: "inline"` to give VSCode the sourcemap. But Jest also consumes sourcemaps in order to provide rich error messages that show the source around an assertion error, and it uses the sourcemap to do that. Its API expects to be passed the sourcemap and the code from anything transforming code (like `esbuild-jest`), so I want to pass `sourcemap: true` to `esbuild` to get a sourcemap to hand to Jest. I can teach one of the tools in the chain to find the sourcemap in one place and move it to the other, but it'd be handy if esbuild produced it in both places, since it's the most performant place to do that.
Status: Issue closed
Answers:
username_1: This has been added as a `--sourcemap=both` option in version 0.8.31. |
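A small usage sketch via the JS API (assuming esbuild ≥ 0.8.31, where `'both'` is supported):
```js
const esbuild = require('esbuild');

async function transformWithBothMaps(source) {
  const result = await esbuild.transform(source, {
    loader: 'ts',
    sourcemap: 'both',
  });
  // result.code ends with an inline //# sourceMappingURL=data:... comment
  // (what editors like VSCode read), while result.map holds the same map as
  // a separate JSON string (what consumers like Jest expect to be handed).
  return { code: result.code, map: result.map };
}
```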
sass/node-sass | 394795078 | Title: auditjs vulnerability warnings
Question:
username_0: Hello,
I use auditjs (https://www.npmjs.com/package/auditjs) in my CI build scripts.
This generates a vulnerability report for the package dependencies my project uses.
When the audit command is executed, it reports several warnings about lodash referenced by node-sass package.
The issue is mainly about node-sass using older/vulnerable versions of the lodash packages.
My question is if node-sass could be updated with a newer version of lodash (4.17.5 or newer), so that these audit warnings could be eliminated.
Here is the output of auditjs:
------------------------------------------------------------
[158/1242] lodash.clonedeep 4.5.0 [VULNERABLE] 2 known vulnerabilities affecting installed version
[CVE-2018-3721] lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutabl...
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
ID: 12e63c9c-b3f9-42d3-8541-dca1b72cad69
Details: https://ossindex.sonatype.org/vuln/12e63c9c-b3f9-42d3-8541-dca1b72cad69
Dependency path: /node-sass/lodash.clonedeep
CWE-471: Modification of Assumed-Immutable Data (MAID)
The software does not properly protect an assumed-immutable element from being modified by an attacker.
ID: 0f23ff35-235f-404f-8118-bc1580673fd0
Details: https://ossindex.sonatype.org/vuln/0f23ff35-235f-404f-8118-bc1580673fd0
Dependency path: /node-sass/lodash.clonedeep
------------------------------------------------------------
------------------------------------------------------------
[769/1242] lodash.assign 4.2.0 [VULNERABLE] 2 known vulnerabilities affecting installed version
[CVE-2018-3721] lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutabl...
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
ID: 12e63c9c-b3f9-42d3-8541-dca1b72cad69
Details: https://ossindex.sonatype.org/vuln/12e63c9c-b3f9-42d3-8541-dca1b72cad69
Dependency path: /node-sass/lodash.assign
CWE-471: Modification of Assumed-Immutable Data (MAID)
The software does not properly protect an assumed-immutable element from being modified by an attacker.
ID: 0f23ff35-235f-404f-8118-bc1580673fd0
Details: https://ossindex.sonatype.org/vuln/0f23ff35-235f-404f-8118-bc1580673fd0
Dependency path: /node-sass/lodash.assign
------------------------------------------------------------
------------------------------------------------------------
[770/1242] lodash.mergewith 4.6.1 [VULNERABLE] 2 known vulnerabilities affecting installed version
[CVE-2018-3721] lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutabl...
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
ID: 12e63c9c-b3f9-42d3-8541-dca1b72cad69
Details: https://ossindex.sonatype.org/vuln/12e63c9c-b3f9-42d3-8541-dca1b72cad69
Dependency path: /node-sass/lodash.mergewith
CWE-471: Modification of Assumed-Immutable Data (MAID)
The software does not properly protect an assumed-immutable element from being modified by an attacker.
ID: 0f23ff35-235f-404f-8118-bc1580673fd0
Details: https://ossindex.sonatype.org/vuln/0f23ff35-235f-404f-8118-bc1580673fd0
Dependency path: /node-sass/lodash.mergewith
------------------------------------------------------------
Status: Issue closed
Answers:
username_1: All the lodash dependencies are marked with `^` so they should all be updated to the latest 4.x release if you clear your dependencies and reinstall (or call `npm update`)
https://github.com/sass/node-sass/blob/7c1dd8ea212473f7eb8c8fc998c122b956a35383/package.json#L63-L65
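For anyone hitting this, a quick sketch of refreshing just the flagged subpackages in an existing checkout (package names taken from the report above):
```
# refresh the lodash.* subpackages flagged by the audit, then inspect what resolved
npm update lodash.assign lodash.clonedeep lodash.mergewith
npm ls lodash.assign lodash.clonedeep lodash.mergewith
```
|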
HRNet/HRNet-Semantic-Segmentation | 620612433 | Title: How to use the `elif 'test' in config.DATASET.TEST_SET:` branch to visualize the result?
Question:
username_0: If I set the sv_pred flag to true, do I need to write additional code to display the segmentation mask? I'm sorry to ask such a basic question; I'm just starting to learn deep learning.
Answers:
username_1: We save the segmentation mask in the testing process.
If you want to visualize the results, you can set the sv_pred flag to true.
https://github.com/HRNet/HRNet-Semantic-Segmentation/blob/f7dab6fb6344bbe9fee6606955c526f2e969931b/lib/core/function.py#L103
https://github.com/HRNet/HRNet-Semantic-Segmentation/blob/f7dab6fb6344bbe9fee6606955c526f2e969931b/lib/core/function.py#L154
username_1: You can find the segmentation mask under the given sv_dir without any extra code.
username_0: Thank you, sir. I found the dir.
username_2: Hi @username_0, how did you run inference on your own images? |
cakephp/cakephp | 61314716 | Title: entity's dirty being reset before afterSave is called
Question:
username_0: is there a reason for resetting the dirty properties before the afterSave is called?
I want to use the data that were successfully saved in my afterSave logic, but
```
$newData = $entity->extract($entity->visibleProperties(), true);
```
returns empty.
Answers:
username_1: Because the entity has been saved, and is no longer dirty?
Status: Issue closed
username_0: How do I access the saved data in the afterSave function?
username_1: Well the entity contains all the saved data. If you need to know which specific columns were changed, you can track them in a beforeSave and then act on that data in the afterSave.
username_2: You also have access to getOriginal() and its friends. |
mapbox/mapbox-gl-js | 141294572 | Title: Mapbox GL - Can we add animation while filtering layers
Question:
username_0: I would like to do some animation while filters are being applied on layers. For example, I have 2 layers with circles drawn of different radius on some geo points. While I switch from one layer to other layer by setting filters using setFilter(), I want the transition on circle opacity for a smooth transition.
Please assist
Answers:
username_1: I don't foresee us getting around to this feature any time soon but I love the idea! As always, external contributions are welcome.
username_2: Closing; implementing this in core would be awkward given our rendering and worker architecture, and you can implement this effect in user code by transitioning opacity manually, and then changing the filter.
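A rough sketch of that user-code approach, assuming `map` is your Map instance; the layer id `'points'`, the filter expression, and the 300 ms duration are placeholders:
```js
const layer = 'points';

// Fade the layer out, swap the filter while it is invisible, then fade back in.
map.setPaintProperty(layer, 'circle-opacity-transition', { duration: 300 });
map.setPaintProperty(layer, 'circle-opacity', 0);

setTimeout(() => {
  map.setFilter(layer, ['==', 'group', 'b']);        // placeholder filter
  map.setPaintProperty(layer, 'circle-opacity', 1);  // fades back in over 300 ms
}, 300);
```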
Status: Issue closed
|
andresgc1213/semana-1-126 | 757184229 | Title: Vue.js Implementation
Question:
username_0: Perform the installation, development, and invocation of the Vue.js component to load the data of the team members
Answers:
username_0: The Vue component is implemented in the App.js file; the loading of the team-member information is implemented so that it is rendered in index.html
Status: Issue closed
|
Hacker0x01/react-datepicker | 232607222 | Title: Time selection is not working
Question:
username_0: Instead of the time set in the field, I get the current time.
```
handleDateTimeChange(date) {
this.setState({
startDate: date
});
}
```
The state receives the date correctly, but not the time: if I change the time in the form to 5 pm, it still returns 6 if it's currently 6. Here is the component:
```
<DatePicker
dateFormat="YYYY-MM-DD HH:mm"
selected={this.state.startDate}
todayButton="Today"
onChange={this.handleDateTimeChange}
/>
```
Answers:
username_1: Is there any support for changing the date as well as the time from the date picker itself?
username_2: `this.state.startDate` is a `moment` object. You can always rewrite it in `onChange`, or you can use the `onSelect` callback if you need to catch each select. DatePicker is not a TimePicker; it just cares about dates, but you can track the current time in the `onSelect` callback.
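To illustrate, a small sketch of carrying the time-of-day across date picks; the helper name is made up, so adapt it to your state shape:
```js
// Merge the newly picked calendar date with the time-of-day already in state.
// `prev` and `picked` are moment objects, matching the thread above.
function mergeDateAndTime(prev, picked) {
  return picked.clone().hour(prev.hour()).minute(prev.minute());
}

// Usage inside the component from the question (illustrative):
// handleDateTimeChange(date) {
//   this.setState({ startDate: mergeDateAndTime(this.state.startDate, date) });
// }
```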
Status: Issue closed
username_3: Closing due to inactivity. |
kiddo-capstone/kiddo-frontend | 812268690 | Title: Component: Modal Message
Question:
username_0: A Modal Message communicates various messages and renders upon some contextual action (example: #7).
A Modal Message maintains a theme, but may alter the color of the text/button depending on the use case.
Example warning before taking photo:
<issue_closed>
Status: Issue closed |
remote-job-boards/software-engineering | 845531554 | Title: ReCharge Payments: Director of Engineering, Platform
Question:
username_0: **Published on:** March 10, 2021
**Original Job Post:** https://weworkremotely.com/remote-jobs/recharge-payments-director-of-engineering-platform

**Overview**

Reporting to the VP of Engineering, we are seeking a Director of Engineering to lead our Platform group. The ideal candidate will possess a balance of technology and people management skills and lead with empathy. Cross-functional agility is essential and this leader will work closely with engineering leadership to evangelize ReCharge and create an exceptional platform experience.

Our stack includes: Python, Flask, Jinja, ES6, Vue.js, Sass, Webpack, Redis, Docker, GCP, Terraform, Ansible, Memcached, Nginx

**What You’ll Do**

- Live by and champion our values: #day-one, #ownership, #empathy, #humility.
- Hire and grow your teams with A+ engineering talent; build and maintain a culture of speed, excellence, collaboration, mentorship, and open feedback in engineering.
- Develop and drive forward a strategic roadmap in collaboration with VP of Engineering and CTO.
- People manage and mentor platform group development managers and provide career growth guidance for a team org of ~25+ engineers.
- Dive deep into technology and be on the forefront of the latest tools, technologies, and strategies and help evaluate, prototype, and introduce them to our teams.
- Measure and report on team execution; coach and improve the performance of existing engineers in your group.

**What You’ll Bring**

- Typically, 12+ years of experience in engineering, preferably at a SaaS or e-commerce company
- Typically, 6+ years at minimum managing agile development teams of 20+ in fast paced environments, preferably distributed teams
- Strong people leadership and role-model for technical excellence; exceptional delegation and people management skills
- Experience in a modern web stack and cloud-native environments such as GCP, AWS, or Azure
- Track record of delivery in complex, high growth technical environments
- Desire and ability to work remote-first in a high growth company
- Bachelor’s degree or equivalent experience

<issue_closed>
Status: Issue closed |
aws/aws-sdk-java-v2 | 552593741 | Title: Inconsistent behavior between ListObjects and ListObjectVersions APIs with respect to encoding key field
Question:
username_0: ## Expected Behavior
When encodingType of url is specified as a request parameter, with respect to the key field in Contents section of ListObjects (v1 + v2), and key fields in DeleteMarker and Versions sections of ListObjectVersions, either one of the following should be true:
1) They are all encoded after unmarshalling
2) They are all decoded after unmarshalling
## Current Behavior
What happens instead is that the key field is decoded in Contents section ListObjects apis (v1 + v2), but NOT the key field in neither DeleteMarker nor Versions section in ListObjectVersions apis. This results in having to handle the unmarshalled response differently when using the different apis, even though they both have the same encoding-type request parameter.
## Possible Solution
In https://github.com/aws/aws-sdk-java-v2/blob/master/services/s3/src/main/java/software/amazon/awssdk/services/s3/internal/handlers/DecodeUrlEncodedResponseInterceptor.java#L51-L57, have another if block to check if the response is `instanceof` ListObjectVersionsResponse, and have corresponding `modifyListObjectVersionsResponse` method that does decoding on keys in the delete marker and versions sections. This method should work similarly to `modifyListObjectsResponse`. Also, it should decode the KeyMarker, NextKeyMarker, Prefix, and Delimiter fields as well.
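For illustration, a rough sketch of what that companion method might look like, mirroring the naming of the existing `modifyListObjectsResponse`; here `urlDecode` stands in for whatever decoding helper the interceptor already uses, so treat the details as assumptions:
```java
private static ListObjectVersionsResponse modifyListObjectVersionsResponse(
        ListObjectVersionsResponse r) {
    // Decode the paging/grouping fields, then the Key inside each
    // ObjectVersion and DeleteMarkerEntry (sketch only, untested).
    return r.toBuilder()
            .keyMarker(urlDecode(r.keyMarker()))
            .nextKeyMarker(urlDecode(r.nextKeyMarker()))
            .prefix(urlDecode(r.prefix()))
            .delimiter(urlDecode(r.delimiter()))
            .versions(r.versions().stream()
                    .map(v -> v.toBuilder().key(urlDecode(v.key())).build())
                    .collect(java.util.stream.Collectors.toList()))
            .deleteMarkers(r.deleteMarkers().stream()
                    .map(d -> d.toBuilder().key(urlDecode(d.key())).build())
                    .collect(java.util.stream.Collectors.toList()))
            .build();
}
```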
## Steps to Reproduce (for bugs)
Send ListObjects request with encoding-type "url" request parameter and ensure that it returns results that have XML special characters in the key name. Unmarshall into the Java object and when looking at the key field in the contents object, it is decoded.
Send ListObjectVersions request with encoding-type "url" request parameter and ensure that it returns results that have XML special characters in the key name. Unmarshall into the Java object and when looking at the key field in the deleteMarkers or versions objects, it is still encoded. (Same for KeyMarker, NextKeyMarker, etc as mentioned above)
## Context
Usage of ListObjects api and ListObjectVersions api (to ensure consistency between them)
## Your Environment
<!--- Include as many relevant details about the environment where the bug was discovered -->
* AWS Java SDK version used: 2.9.10 (but I still see it in latest master version)
* JDK version used: Java 8
* Operating System and version: Mac OS Mojave.
Thanks!
Answers:
username_1: Thank you for the detailed report @username_0, it helped a lot in expediting the repro process.
I can confirm I'm seeing this, marking as a bug.
Status: Issue closed
username_2: Fixed via #1630 and the fix has been released as part of `2.10.57` https://github.com/aws/aws-sdk-java-v2/blob/master/CHANGELOG.md#21057-2020-02-04
username_0: Great, thank you! |
Activiti/Activiti | 276341077 | Title: MessageProducerCommandContextCloseListener is not using the CommandContext received as parameter
Question:
username_0: The` MessageProducerCommandContextCloseListener` should use the `CommandContext` received as parameter instead of getting the current one from the `Context`.
Answers:
username_0: PR: Activiti/activiti-cloud-runtime-bundle-service#20
Status: Issue closed
|
ampproject/amphtml | 126014660 | Title: Disallow nested scrolling elements
Question:
username_0: Enforced through validation. Seems like it would be a nice user experience guarantee.
Answers:
username_1: Can you provide a list of the scrolling elements you are aware of, or at least a couple examples.
username_2: This is mostly CSS. `overflow:auto` and `overflow:scroll`.
This would be for #1356
username_3: Also `overflow-x: scroll` and `overflow-y: scroll`.
username_4: Search for `overflow` in this spec file: https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md
username_2: Ha, I missed it the other day :)
Did we implement it, though? It would be interesting to see if we can still make the change without breaking many docs.
CC @username_1
username_1: Those aren't implemented. We haven't had time to finish up the css work, so what's actually active are essentially a few blacklist regexps at the moment. `overflow` is not one of them.
username_5: Currently, having author-defined scrollable subviews (using overflow) results in buggy behaviour, as this is not something the runtime code handles properly, but that does not stop page authors from doing it (for example, see https://github.com/ampproject/amphtml/pull/4081#issuecomment-234655415). IMO, we should get this in the validator soon to avoid setting the wrong expectations.
username_6: I've pointed it out in the aforementioned comments on #4081, but would like to elaborate that unlike a nested vertical scrolling container, horizontal scrolling is not a bad user experience. This is to contrast to the original scope of this issue as @username_0 has defined it.
Buggy layout for custom elements on horizontal scrolling can be fixed with minimal effort and without performance tolls.
username_5: Good point @username_6. If we decide to only disable vertical scrollables and allow horizontal (which seems reasonable) we have some work to do to bring layout scheduling to arbitrary sub-layers (such as finding all scrollable areas and attaching scroll handlers to detect user actions, etc..) or introduce wrappers such as `<amp-scrollable>`. https://github.com/ampproject/amphtml/issues/3434 would be a good start.
username_6: @username_5, that sounds reasonable. But maybe then entirely disabling the horizontal axis in #4081 should not be rushed out the door?
username_5: @username_6 #4081 makes the lack of support for nested scrollables more obvious but the behaviour without it is also bad and against AMP principles. Without #4081, **all** resources inside a horizontally scrollable area will start loading after page load **regardless** of whether these resources are in the view or about to come to the view. There was no resource scheduling or pre-loading happening before #4081, it simply loaded everything, which isn't good (*see example below)
If this is not causing big production issues for you, I'd rather keep #4081 to prevent similar usages until we have a proper solution.
*To see the issue in action, disable [Dev Channel](https://cdn.ampproject.org/experiments.html) and visit https://amphtml-ae5fc.firebaseapp.com/examples.build/empty.amp.html, but do not scroll and keep your Network tab open. You will notice that after a while (about 8-9 seconds) all 10 image resources are loaded despite most of them not even being close to come to view port.
cc @username_4
username_6: @username_5, I've actually thought that it was normal that after a delay (to allow clear path for highest priority content), 6-10 would load. The only part of it that I considered as a bug in my work was the fact that when you start scrolling, 6 and 7 don't preload in time before they get in the view.
You guys obviously have a better view into the usage of AMP across different publishers, whereas I can only assume that horizontal scrolling is not the most common issue at the moment.
No, it certainly does not cause big production issues, but prevents a feature deployment. What worries me is if the only place where horizontal scrolling is allowed will be `<amp-carousel>`, the possibilities for horizontal scrolling will stay limited due to declarative nature of the tag. For example, one of the uses that we would like to employ is being able to switch from carousel view (where a little more than one element is visible, like in your example), to the full screen view, where only one element is in the view). When you switch between the views, the currently visible item in the carousel stays visible. This is possible with minimal CSS, however is not possible with `<amp-carousel>` + CSS or with `<amp-rousel>` + `<amp-lightbox>`. Since the only performance toll is coming from the layout and preloading, I've been hoping it would be possible to stall a blanket moratorium, and instead couple it together with the release of `<amp-scrollable>` instead, which would give a meaningful alternative within AMP principles.
username_6: In addition, I see that disabling horizontal scrolling was originally slated for v1, as per #1356
username_7: Is this still relevant?
username_1: @username_0 @username_4 @username_2
I did a quick check and it looks like this would invalidate about 20% of docs. I presume this is not something that we want to do?
Status: Issue closed
username_1: I'm going to close this. Feel free to reopen if this is the incorrect conclusion. |
helm/helm-www | 805950708 | Title: Helm Rollback Documentation - Explicitly define 0
Question:
username_0: I was going through the documentation for `helm rollback` and it seems like providing `0` as the second argument is also equivalent to omitting the second argument. Would it be valuable to update the documentation to specify the `0` option being the same as omitting the second argument?
https://github.com/helm/helm-www/blob/83e7f82a9f0f6ca959da8131b1b3701b1b501fa5/content/en/docs/helm/helm_rollback.md
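If that equivalence is confirmed, a short example could make the docs concrete (the release name `myrelease` is illustrative):
```
helm rollback myrelease 0   # 0 means "roll back to the previous release"
helm rollback myrelease     # omitting the revision does the same thing
```
|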
appirio-tech/connect-app | 213206276 | Title: text area auto-size issues
Question:
username_0: ### Expected behavior
The text area grows with text, but when deleting text it doesn't shrink back. Currently the `<textarea>` tag has the property `rows="3"`, but it should be initialized with only 1 row and allowed to grow.
### Screenshot/screencast



Expected

We should initialize the specification textarea with 1 row. The CSS should set min-height: 45px (line-height is fixed at 25px).
--
#### Environment
- OS:
- Browser (w/version):
- User role (client, copilot or manager):
- Account used:
Answers:
username_1: @username_2 How do I make a change in TextArea.jsx, which is present in node_modules, as it is included in the gitignore?
username_1: https://github.com/appirio-tech/connect-app/pull/831/
username_1: @username_2 On which branch do I need to create the pull request for the repo appirio-tech/react-components? The changes to the row number will be made in that repo.
username_2: see https://github.com/appirio-tech/connect-app/wiki/Bug-Bash-Rules
it's feature/connectv2
username_1: @username_2 Unable to push branch cut out of feature/connect/v2
fatal: unable to access 'https://github.com/appirio-tech/react-components.git/': The requested URL returned error: 403
username_2: @username_1 Have you forked the react-components repository? If not, fork it, create a branch in your repository, and create a pull request.
username_1: Ok Sure.
username_1: @username_2 Pull requests for this issue: https://github.com/username_1/react-components/pull/133 and
https://github.com/appirio-tech/connect-app/pull/831.
Status: Issue closed
|
JesusPaz/DeepSalsa | 453820953 | Title: Comparing three subjects' annotations takes too long
Question:
username_0: The first attempt at the code works, but takes 30 seconds to match one annotation against the other two within a closeness threshold. More efficient code is easily written: since time is monotonically increasing, it is better to use a while loop that advances a cursor than a nested for loop.
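A minimal sketch of that idea, assuming each subject's annotations are already sorted lists of timestamps and `threshold` is the closeness bound (all names are illustrative):
```python
def match_annotations(ref, other, threshold=0.05):
    """For each timestamp in `ref`, find a timestamp in `other` within
    `threshold` seconds, advancing a single cursor instead of rescanning
    `other` for every reference point (O(n + m) instead of O(n * m))."""
    matches = []
    j = 0
    for t in ref:
        # advance the cursor while the candidate is too far behind
        while j < len(other) and other[j] < t - threshold:
            j += 1
        if j < len(other) and abs(other[j] - t) <= threshold:
            matches.append((t, other[j]))
    return matches
```
|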
Aamirofficiall/API | 587528838 | Title: Missing Documentation
Question:
username_0: Add variables to the URL documentation: indicate each variable in the URL and its type, along with any restrictions, like this:
http://127.0.0.1:8000/register/id/{id}
Also add the body syntax for POST requests (what needs to be sent in the body).
Add a sample for each request.<issue_closed>
Status: Issue closed |
cpitclaudel/alectryon | 859942353 | Title: alectryon.py --help does not explain how to suppress warnings
Question:
username_0: I'd like to be able to suppress warnings for long lines. But there don't seem to be any relevant flags?
Answers:
username_1: Indeed, there's only a python-level option ATM. Is that enough for your case or do you want a command-line flag?
username_0: I would very much prefer a command-line flag; the HoTT library uses one-line comments and relies on editors to linewrap them, so we get hundreds of useless "line very long" warnings from alectryon. (Alternatively, perhaps what we really want is a way to set the maximum line width from the command line, and a toggle to tell alectryon that it's allowed to line-wrap comments if it doesn't already. (And if it does already line-wrap comments, then perhaps this warning should be disabled by default on lines where the default comment-wrapping behavior will display things nicely for us.))
username_0: There is no reason, IMO, to complain about the comment on the first line of https://hott.github.io/HoTT/alectryon-html/HoTT.Basics.Notations.html, for example.
username_1: More generally, I think we shouldn't complain at all about long comments — only about long code lines, if anything.
username_1: I'd like to do this eventually, but it's a bit of a pain to handle lines that mix code and comments, so I'll leave this open for now.
Status: Issue closed
|
InternationalScratchWiki/ScratchWikiSkin2 | 679623188 | Title: Manage translation in one place
Question:
username_0: There are currently two ways to submit translations, one here and one at the Transifex - https://www.transifex.com/international-scratch-wiki/
The Transifex organization was not made by Ken or me, though, and it is not clear who made it.
We need to discuss which platform we use - before getting started with Transifex.
Answers:
username_1: The Transifex org is not official and shouldn't be considered a second way to submit translations. @username_3 please don't create things with official names if you are not the representative. You represent the French wikis, not all of the wikis.
username_1: I've edited the OP after new information has come to light. I don't know enough about any translation platform to be anything but indifferent, though in my hazy memory I seem to recall that Transifex costs money?
username_2: Well, the Transifex org was taken down, but yes, it costs a lot of money.

username_2: However Crowdin has a much cheaper plan

username_3: Transifex:
- [x] **is free of charge for open source projects**, like the ones hosted at the International Scratch Wiki org
- [x] has support for all the languages we support at the Test Wiki
- [x] has a reviewing system
- [x] has a large user-base (don't forget about all the voluteers already signed up for the Scratch projects)
- [x] has a team management system
- [x] has full automation support (built-in pull & push to GitHub from new translations, + API for custom system)
- [x] has login via GitHub support, so no need for additional account (since already have to have a GH account to push tranlsations with the current system)
- [x] has a glossary, to re-use the same translations between strings
- [x] has a comments feature, to let comments on some strings
- [x] can be managed by mutilple people at the time
- [x] has a dashboard to see the overall progress for each language
- [x] has a notification system when new strings are added
username_1: @username_2 Was it? The link in the OP still works fine and I have access to the org.
username_1: Looking at Crowdin's website, and comparing with Transifex, Crowdin:
- [x] is also free of charge for open source projects
- [x] [supports every language we do](https://api.crowdin.com/api/supported-languages) (I assume, I couldn't be bothered to check the whole list)
- [x] also has a "proofreading" system, which is reviewing under a different name
- [ ] though it does have a large userbase, it's admittedly not one that's familiar with the Wikis
- [ ] doesn't have team management, but teams aren't ideal - anyone should be allowed to translate anything to their language
- [x] is endorsed by Electron
- [x] also has full automation support
- [x] also supports logging in with GH
- [x] also supports JSON as a format
- [x] also has a glossary
- [x] also has comments
- [x] also supports multiple managers
- [x] also has a dashboard
- [x] also has notifications for new strings
- [x] also has priorities for files
I can't tell if Crowdin has support for custom variables; that being said, Transifex's support for MediaWiki parameter syntax is dodgy because you can only specify things that are delimited (e.g. `{variable}`) or constant (e.g. `%s`). The former doesn't work at all, and for the latter we'd have to manually specify every `$1`, `$2`, etc as a separate constant variable...
Basically, Crowdin looks like a legitimate competitor to Transifex. However, I have a different suggestion.
username_1: I personally am leaning towards the [MediaWiki Language Extension Bundle](https://www.mediawiki.org/wiki/MediaWiki_Language_Extension_Bundle), which is used by MediaWiki themselves to translate MediaWiki itself, in particular using the [Translate extension](https://www.mediawiki.org/wiki/Extension:Translate). This probably makes the most sense for translating wiki-related projects. The bundle:
- [x] is already used by Wikimedia for MediaWiki extensions and skins they maintain - *our exact situation*
- [x] makes use of `qqq.json` so that we don't have to copy all documentation over to another platform
- [x] is installed on a wiki, which in our case can either be the Test wiki or a new thing like translate.scratch-wiki.info
There are other features which I'm researching at the moment. This is the most logical choice to me, because it doesn't even take us to another website - just use the Wiki!
username_3: It would mean **only the users with a Test Wiki account would be able to translate** these strings, so **a large number of potential translators are left out** (all the ones with a GH account who are not part of the Test Scratch Wiki and who just wanted to help with translation).
username_1: and this is exactly what the `qqq.json` file is for! The strings in there are shown to explain the context for the string and what goes in the parameters.
My only qualm with the whole idea at the moment is that it's taking me quite a while to figure out how to set it up. I'm playing around with it on my local installation and things seem to be going well, but it'll take some time. Transifex is more out-of-the-box, but it's still not expressly designed with MediaWiki in mind.
(To be fair, the example in `scratchlogin-userlogin-link` is quite egregious - that formatting really should be hardcoded. Go ahead and open an issue for it, or I'll change that at some point. However, your example change shows your ignorance - Special page names are *not* translated, and string keys are hyphenated-lowercase, not camelCase or underscore_case.)
username_3: You should have a look at [this](https://stackoverflow.com/questions/5543490/json-naming-convention) at some point.
---
👉 Anyway, that being said, keep in mind we don't have very specific needs, so a generic solution would work like a charm.
Whatever we end up choosing, be it Transifex, Crowdin or anything else, it should be easily usable and user friendly. The goal is to get enough volunteers, not necessarily the best-integrated system.
username_1: MediaWiki convention overrides JSON convention. It's [MediaWiki:Requestaccount-text](https://en.scratch-wiki.info/wiki/MediaWiki:Requestaccount-text), not RequestaccountText or Requestaccount_text.
----
I should say that I'm not opposed to Transifex - I'm just currently more enthusiastically in favor of ext:Translate. If it turns out more interwiki admins than just you would prefer it, then I'm happy to use it.
username_2: This is a very valid point ^^
username_1: Not when you consider my preference for wikitext knowledge. I'm happy having the iw admins on Transifex, but someone who knows nothing about wikis, even in general? nah, they wouldn't be able to understand enough context even if it was given.
username_2: You have to be accepted on Transifex iirc...
username_4: I think that MW translate extension is a better idea, especially for references and templates. However, you need to setup a whole server for each language or edit the extension for each language's prefix
username_2: I think that ext:Translate is better since we're updating anyway, and the points Ken made outweigh the Transifex points. |
milvus-io/milvus | 508353758 | Title: [BUG] Some troubleshooting messages in Milvus do not provide enough information
Question:
username_0: **Describe the bug**
Some troubleshoot messages in the Milvus software do not provide enough information. Users still need to refer to the documentation for details. This might have a negative effect on user experience. We need to improve these messages in the software.
**Expected behavior**
Please refer to the suggested text below:
Topic | Old Message | New Message
------------ | ------------- | ---------------
General | Invalid table name: xxx | Invalid table name: xxx. A table name can only contain numbers, letters, and underscores. The first character of a table name must not be a number. The length of a table name must be less than 255 characters.
General | Table xxx not exist | Table xxx does not exist. Use milvus.has_table to verify whether the table exists. You also can check if the table name exists.
CreateTable | Invalid table dimension: xxx | Invalid table dimension: xxx. The table dimension must be within the range of 1 ~ 16384.
CreateTable | Invalid index file size: xxx | Invalid index file size: xxx. The index file size must be within the range of 1 ~ 4096.
CreateTable | Invalid index metric type: xxx | Invalid index metric type: xxx. Make sure the metric type is either MetricType.L2 or MetricType.IP.
CreateIndex | Invalid index type: xxx | Invalid index type: xxx. Make sure the index type is among FLAT, IVFLAT, and IVF_SQ8.
CreateIndex | Invalid index nlist: xxx | Invalid index nlist: xxx. The index nlist must be greater than 0.
Insert | Row record array is empty | The row record array is empty. Make sure you have entered vector records.
Insert | Size of vector ids is not equal to row record array size | The size of vector ID array must be equal to the size of the vector.
Insert | Table vector ids are user defined, please provide id for this batch | Table vector IDs are user-defined. Please provide IDs for all vectors of this table.
Insert | Table vector ids are auto generated, no need to provide id for this batch | Table vector IDs are auto-generated. All vectors of this table must use auto-generated IDs.
Insert | Row record float array is empty | The row record float array must not be empty.
Insert | Invalid row record dimension: xxx vs. table dimension: xxx | The row record dimension must be equal to the table dimension.
Search | Invalid topk: xxx | Invalid topk: xxx. The topk must be within the range of 1~2048.
Search | Invalid nprobe: xxx | Invalid nprobe: xxx. The nprobe must be within the range of 1 ~ index nlist.
Search | Query record float array is empty | The query record float array is empty. Make sure the vectors you want to search have values.
Search | Invalid query record dimension: xxx vs. table dimension: xxx | The vector dimension must be equal to the table dimension.
Refer to [Milvus Documentation](https://milvus.io/docs/en/userguide/troubleshoot/) for all troubleshooting messages documented in the current release.
Note that the original descriptions are based on the current Milvus documentation. If there are any software changes that are not captured by the documentation or the suggested text is not technically accurate, please let me know.
Answers:
username_1: After discussion, we redefined the error messages:
ERROR CODE | ERROR MESSAGE | API | REASON | ACTION
-- | -- | -- | -- | --
9 | Table name should not be empty. | most of sdk api | should not be empty | table name should not be empty
9 | Invalid table name: xxx. The length of a table name must be less than 255 characters. | most of sdk api | table name is illegal | ensure character less than 255
9 | Invalid table name: xxx. The first character of a table name must be an underscore or letter. | most of sdk api | table name is illegal | first character should be underscore or character
9 | Invalid table name: xxx. Table name can only contain numbers, letters, and underscores. | most of sdk api | table name is illegal | table name only contains underscore/alphanumber
7 | Invalid table dimension: xxx. The table dimension must be within the range of 1 ~ 16384. | CreateTable | table dimension is illegal | dimension range: 1 ~ 16384
5 | Invalid index file size: xxx. The index file size must be within the range of 1 ~ 4096. | CreateTable | index file size is illegal | range: 1 ~ 4096
23 | Invalid index metric type: xxx. Make sure the metric type is either MetricType.L2 or MetricType.IP. | CreateTable | metric type is illegal | 1: L2 2:Inner Product
4 | Table xxx not exist. Use milvus.has_table to verify whether the table exists. You also can check if the table name exists. | most of sdk api | table not found | make sure the table is exist and table name is correct
8 | Invalid index type: xxx. Make sure the index type is in IndexType list. | CreateIndex | index type is illegal | 1:FLAT 2:IVFLAT 3:IVF_SQ8 4:MIX_NSG
22 | Invalid index nlist: xxx. The index nlist must be greater than 0. | CreateIndex | nlist value is illegal | value > 0
11 | The row record array is empty. Make sure you have entered vector records. | Insert/Search | insert empty vectors into milvus | insert vector must has value
11 | The row record float array must not be empty. | Insert/Search | vector data is empty |
12 | The size of vector ID array must be equal to the size of the vector. | Insert | vector id array size not equal to vector size | if you want to use user-custom id for insert vectors, make sure vector id array size equal to vector size. if you didn't provide user-custom ids, keep the id array empty and milvus will generate id for these vectors
12 | Table vector IDs are user-defined. Please provide IDs for all vectors of this table. | Insert | some vectors of this table use user-custom id, all vectors must use user-custom id | provide user-custom id to insert vector
12 | Table vector IDs are auto-generated. All vectors of this table must use auto-generated IDs. | Insert | some vectors of this table use auto-generated id, all vectors must use auto-generated id | don't provide user-custom id to insert vector
7 | The row record dimension must be equal to the table dimension. | Insert/Search | vector dimension is not equal to table dimension | make sure vector dimension is equal to table dimension
10 | Invalid topk: xxx. The topk must be within the range of 1 ~ 2048. | Search | search parameter topk is illegal | range: 1 ~2048
5 | Invalid nprobe: xxx. The nprobe must be within the range of 1 ~ index nlist. | Search | search parameter nprobe is illegal | range: 1 ~ index nlist
11 | The query record float array is empty. Make sure the vectors you want to search have values. | Search | vector data is empty | for example, dimension=512, you must provide 512 float values for each vector
7 | The vector dimension must be equal to the table dimension. | Search | vector dimension is not equal to table dimension | make sure vector dimension is equal to table dimension
username_1: Already implemented
Status: Issue closed
|
DLR-SC/tixi | 62079670 | Title: Open xml file with local external data fails with relative paths
Question:
username_0: ```
What steps will reproduce the problem?
1. create xml file with relative reference to some local file
2. open file using tixiOpenDocumentRecursive and OPENMODE_RECURSIVE
What is the expected output? What do you see instead?
The referenced file should be opened. Instead we get an error, because tixi
searches inside the current directory, not inside the directory of the original
file.
Please use labels and text to provide additional information.
We should memorize the location of the original file. That should be passed to
openExternalFiles.
The whole function openExternalFiles seems to be broken though. Lots of
valgrind errors.
Check, if that function is still required!!!
```
Original issue reported on code.google.com by `<EMAIL>` on 14 May 2013 at 3:26
Answers:
username_1: The memory problems should now be resolved. We only need to adjust the paths relative to the xml file.
Status: Issue closed
|
UCF/Athena-Framework | 228709212 | Title: Navbars - consider applying state classes to navbars when they affix
Question:
username_0: Because affixed navbars now use pure CSS to handle affixing (e.g. `.sticky-top`), affixing logic is now technically stateless--meaning, we don't currently have a way to tell when a navbar has affixed or not. Because of this, we are unable to add style changes to the navbar between affixed and unaffixed states (e.g. the heavy drop shadow we've been adding to affixed navbars on the main site). We _might_ be able to tack on some state class toggling with additional JavaScript, as sketched below.
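One possible sketch of that JavaScript, using an IntersectionObserver on a sentinel element (requires a modern browser or a polyfill; the selector and the `is-stuck` class name are illustrative, not existing Athena classes):
```js
const navbar = document.querySelector('.navbar.sticky-top');
const sentinel = document.createElement('div');
navbar.parentNode.insertBefore(sentinel, navbar);

const observer = new IntersectionObserver(([entry]) => {
  // Once the sentinel scrolls out of the viewport, the navbar has affixed.
  navbar.classList.toggle('is-stuck', !entry.isIntersecting);
});
observer.observe(sentinel);
```
|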
phusion/passenger-docker | 173238436 | Title: Stop nginx with SIGQUIT instead of docker stop's SIGTERM signal
Question:
username_0: As suggested in the [issue I opened in phusion/baseimage-docker](https://github.com/phusion/baseimage-docker/issues/335#issuecomment-242425175), I decided to repost the issue here.
I'd like to see nginx stopped gracefully with the `SIGQUIT` signal when `docker stop` is invoked, so there's at least an attempt to drain connections rather than reset them. See my referenced issue for more information on references and the suggestion of a wrapper script around `nginx`.
Answers:
username_0: Finally got around to this, here is my `run` script implementation of a signal handler for remapping to Nginx properly:
```bash
#!/bin/sh
# `/sbin/setuser memcache` runs the given command as the user `memcache`.
# If you omit that part, the command will be run as root.
# We're omitting that part.
set -x
pid=0
# SIGHUP handler
hup_handler() {
if [ $pid -ne 0 ]; then
echo "Sending SIGHUP to nginx pid $pid" >> /var/log/nginx.log
kill -s HUP "$pid"
echo "Successfully reloaded nginx process $pid" >> /var/log/nginx.log
else
echo "Nginx was not reloaded because pid was $pid"
fi
}
# SIGKILL handler
kill_handler() {
if [ $pid -ne 0 ]; then
echo "Sending SIGKILL to nginx pid $pid" >> /var/log/nginx.log
kill -s KILL "$pid"
wait "$pid"
echo "Successfully killed the Nginx process $pid"
else
echo "Exiting without SIGQUIT because pid was $pid" >> /var/log/nginx.log
fi
exit 137; # 128 + 9 == SIGKILL
}
# SIGTERM handler
term_handler() {
if [ $pid -ne 0 ]; then
echo "Sending SIGQUIT to nginx pid $pid" >> /var/log/nginx.log
kill -s QUIT "$pid"
wait "$pid"
echo "Successfully exited the Nginx process $pid" >> /var/log/nginx.log
else
echo "Exiting without SIGQUIT because pid was $pid" >> /var/log/nginx.log
fi
exit 143; # 128 + 15 == SIGTERM
}
trap 'kill ${!}; hup_handler' SIGHUP
trap 'kill ${!}; term_handler' SIGTERM
trap 'kill ${!}; kill_handler' SIGKILL
# run application
/opt/nginx/sbin/nginx & # >> /var/log/nginx.log 2>&1 &
pid="$!"
echo "Kicking off wait, nginx pid is $pid" >> /var/log/nginx.log
# wait forever
while true
do
tail -f /dev/null & wait ${!}
done
```
This allows `stop`, `kill` and `reload` to all work correctly, although it is possible for a hung nginx process to become detached from the script if you send a `SIGTERM` and `SIGKILL` in succession. I haven't found a graceful workaround but this is adequate enough for me in the meantime.
username_1: Did you try [`STOPSIGNAL`](https://docs.docker.com/engine/reference/builder/#stopsignal) on your Dockerfile?
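For reference, the Dockerfile side of that would be a one-liner; note that in this image PID 1 is my_init, so whether the signal actually reaches nginx still depends on how my_init/runit forward it:
```Dockerfile
# Hypothetical fragment: make `docker stop` send SIGQUIT to PID 1
# instead of the default SIGTERM.
STOPSIGNAL SIGQUIT
```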
username_2: Just FYI, your script is terminating itself incorrectly.
You must not `exit 128+$code`, that's just how the shell represents it.
Programs terminating in response to a signal must instead terminate themselves using the same signal, meaning you have to do `kill -TERM $$` instead.
This is so that a calling program can determine whether a program exited on its own accord, or in response to a signal (`WIFEXITED` vs `WIFSIGNALED`). This matters for instance for Ctrl+C handling in a loop, see https://mywiki.wooledge.org/SignalTrap#Special_Note_On_SIGINT_and_SIGQUIT
Remember you then also have to reset the traps first in your traps (`trap - SIGNAL`), otherwise you get an infinite loop.
P.S. What's the purpose of `kill $!` in each trap?
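Putting those corrections together, the TERM handler from the script above would look roughly like this (sketch only, reusing the script's variable names):
```sh
term_handler() {
  trap - TERM                  # reset first, so re-raising can't loop
  if [ "$pid" -ne 0 ]; then
    kill -s QUIT "$pid"        # graceful nginx shutdown
    wait "$pid"
  fi
  kill -s TERM $$              # re-raise: parent sees WIFSIGNALED, not exit 143
}
```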
username_3: My team solved this issue by creating a `sv` control file for the term command e.g. `/etc/service/nginx/control/t`:
```sh
#!/bin/bash
set -e
sv q nginx
```
With that, there's no need for an extra script as it would "translate" sigterm to sigquit for NGINX. For more info, check the [runsv documentation](http://smarden.org/runit/runsv.8.html). |
iz7iz7iz/eKDG.css | 276778720 | Title: On translating class names into Turkish
Question:
username_0: I think that instead of translating the class names into Turkish, preparing them in English and adding Turkish comments would be much more suitable in terms of workload. Since most users use the classes via copy & paste, they will easily find the class suited to what they want to do by looking at the descriptions. In any case, there will already be descriptions for the classes.
Answers:
username_1: A really nice idea, and it can easily be implemented.
Those who have already learned the classes in English also won't have to relearn them.
Status: Issue closed
|
velo/manifest-validator-maven-plugin | 302828265 | Title: Release 0.2
Question:
username_0: @username_1 release, tag=`0.2`
Answers:
username_1: @username_0 OK, I will release it now. Please check the progress [here](http://www.username_1.com/t/14063-1)
username_1: @username_0 Done! FYI, the full log is [here](http://www.username_1.com/t/14063-1) (took me 5min)
Status: Issue closed
|
Combitech/codefarm | 207546025 | Title: Fix hang of exec job with artifact publish
Question:
username_0: slave_cli command create_artifact with --file option does the following:
1. Creates type artifactrepo.artifact
2. Uploads artifact to created artifact
Somehow when response from step 2 above is written to command socket it appears to be closed.
Status: Issue closed
|
flutter/flutter | 638461021 | Title: "Could not build the precompiled application for the device."
Question:
username_0: Been working in Flutter going on two years now. Love it.
Admittedly, however, I'm coming from the Android side and have never worked on the iOS side until today.
Ok, and so I tried running my app on a Macbook today and got the following [results](https://pastebin.com/AMGUGXpq).
I haven't a clue what to do next. Any insight would be appreciated.
Going back to school on this one--took me a half-hour to work the Macbook's Finder.
<!-- `flutter doctor -v`-->
```
[✓] Flutter (Channel stable, v1.17.3, on Mac OS X 10.15.5 19F101, locale en-CA)
• Flutter version 1.17.3 at /Users/gregperry/Projects/flutter
• Framework revision b041144f83 (10 days ago), 2020-06-04 09:26:11 -0700
• Engine revision ee76268252
• Dart version 2.8.4
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/gregperry/Library/Android/sdk
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.2, Build version 11B52
• CocoaPods version 1.8.4
[✓] Android Studio (version 4.0)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 46.0.2
• Dart plugin version 193.7361
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
[✓] IntelliJ IDEA Community Edition (version 2019.1)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 34.0.4
• Dart plugin version 191.6183.88
[✓] VS Code (version 1.32.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 2.25.0
[✓] Connected device (1 available)
• Greg’s iPhone • 790231423ae297eb332cfbb9d02ec90f023e1588 • ios • iOS 13.1.2
• No issues found!
```
Answers:
username_1: Hi @username_0
This platform is not meant for assistance on personal code.
You may want to look at this [docs](https://flutter.dev/docs/development/platform-integration/platform-channels)
Please see https://flutter.dev/community for resources and for asking questions like this.
You may also get some help if you post on Stack Overflow, and if you need help with your code, please see https://www.reddit.com/r/flutterhelp/
Closing, as this isn't an issue with Flutter itself. If you disagree, please write in the comments and I will reopen it.
Thank you
Status: Issue closed
|
OpenSpace/Ghoul | 301537793 | Title: Increase the size of UniformCache variables
Question:
username_0: The execution of the main atmosphere fragment shader requires the use of 38 uniforms. So, the UniformCache macros must be updated to support at least 38 cached uniform variables.
Answers:
username_0: Let's use more than one cache variable. :-)
Status: Issue closed
|
rosuH/AndroidFilePicker | 702427268 | Title: Problem loading XML
Question:
username_0: me.rosuh.filepicker.FilePickerActivity}: android.view.InflateException: Binary XML file line #76: Binary XML file line #76: Error inflating class androidx.swiperefreshlayout.widget.SwipeRefreshLayout
Answers:
username_1: @username_0 Is your project using AndroidX? If so, you need to use the AndroidX version.
username_0: // File picker https://github.com/username_1/AndroidFilePicker/blob/master/README_CN.md
implementation 'me.rosuh:AndroidFilePicker:0.6.5-x'. I'm a bit confused about this too.
username_1: @username_0 I created a new project myself and it works fine. Alternatively, you can manually add the [swiperefreshlayout](https://developer.android.com/jetpack/androidx/releases/swiperefreshlayout) dependency.
username_0: OK, thanks. I created a new project and indeed there was no problem there. But the original project also works now after adding the swiperefreshlayout dependency. What could be the reason?
username_1: @username_0 I'm not sure about the exact reason either; it may be related to how the library is packaged. Check whether the new project has any dependencies your old project doesn't, or whether there's a version difference.
username_0: I suspected there was some difference at first too, but the old project was only created yesterday and the Gradle version is the same, so the differences are minimal. What a headache.
username_1: Try the new version 0.6.6-x. The problem is probably that the library didn't declare the dependency. @username_0
username_0: OK, I tried it. It indeed works now.
Status: Issue closed
|
manga-download/hakuneko | 1080382950 | Title: [manga livre] Connector not working
Question:
username_0: **Did you read [the troubleshooting guide](https://hakuneko.download/docs/troubleshoot/)**
Yes
**Is the website of the connector working properly / are you able to see the manga within your browser**
Yes
**Describe the bug**
not working, no download
**To Reproduce**
Steps to reproduce the behavior:
1. manga livre
**Screenshots**


**HakuNeko (please complete the following information):**
- Version [e.g. 6.17] |
filaraujo/gulp-i18n-localize | 432836773 | Title: Translation for the main index.html
Question:
username_0: Say I have 3 different languages for a site: `lang-1`, `lang-2`, and `lang-3`. Is it possible to make `lang-1` the translation for the main index.html? Like, when the user goes to example.com they'll see the translation from `lang-1`.
Right now, my main index.html is showing the `${{ translation.content.example }}$`
```
<url>/index.html (the main index file with lang-1)
<url>/lang-2/index.html
<url>/lang-3/index.html
```
Thank you.
Answers:
username_1: Sounds like your best bet is to have another gulp process after translation
that moves the lang-1 folder into the root.
Something similar to this: `gulp.src($LANG-1-PATH).pipe(gulp.dest(ROOT));`
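Expanded into a runnable task, that could look something like this; `dist/lang-1` and `dist` are placeholder paths for your build layout:
```js
const gulp = require('gulp');

// After localization, promote the default locale's output to the site root.
gulp.task('promote-default-locale', function () {
  return gulp.src('dist/lang-1/**/*')
    .pipe(gulp.dest('dist'));
});
```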
Status: Issue closed
username_0: I was thinking of doing the same too. Thanks man! |
spacetelescope/hstcal | 380395008 | Title: Clean up WFC3 Regression files for Jenkins testing
Question:
username_0: This issue is specifically to address any and all issues with WFC3 regression tests so all the files run successfully in Jenkins. At this time some of the tests are set to xfail. These settings are due to the need to address outdated truth files.
Answers:
username_0: Duplicate
Status: Issue closed
|
hgrecco/pint | 61674290 | Title: Quantity operator overloading does not return NotImplemented on unknown types
Question:
username_0: Unless I'm missing something, it seems that, for example, Quantity.__add__ should return NotImplemented instead of raising a DimensionalityError? This would allow pint to be used together with other libraries that use operator overloading. In [gpkit](https://github.com/convexopt/gpkit) we use a [kind of awful hack](https://github.com/convexopt/gpkit/blob/master/gpkit/__init__.py#L61) to get around this, though I'm potentially just doing it wrong.
Answers:
username_1: Thanks for the support. The problem is how to deal with user defined numerical classes. Imagine that you do `X+Y` where `X` is a Pint quantity and `Y` is something else.
(A) If `Y is a number, then
- if `X` is dimensionless, you want to return `X.to('') + Y`
- if `X` is not dimensionless, you want to raise a `DimensionalityError`.
(B) If `Y` is not a number, then you would like to return `NotImplemented`
Because we do not have good way to say if something is a number or not we decided to implement only A. We are open to suggestions in this regard.
But tell us more about your use case, maybe we can find a way to support it.
username_0: Huh, interesting! You're saying that if `1 + units.m` is run, you want the error to read `DimensionalityError: Cannot convert from 'meter' to 'dimensionless'` instead of `TypeError: unsupported operand type(s) for +: 'int' and 'pint.unit.Quantity'`, presumably because the latter is less clear about what the actual problem is?
I wonder, how do integers / floats in python determine if something is a number?
My use case in gpkit is making a symbolic math object (e.g. `x**2 + y`) that has units (but is not a Quantity), and that is overloaded to allow adding to numbers and other symbolic math objects.
username_1: I think the usual way is that they return `NotImplemented` and let the other class (e.g. numpy's ndarray) handle it. The problem with that approach is that we will need to make a list of all the things that could be handled by Pint. This includes all Python numerical types and all numpy numerical types (we have something like this in `compat.NUMERIC_TYPES`). But the other problem still is that this won't work with user-defined numerical classes.
A few questions about your case:
- Can you derive from Quantity?
- How do Pint's quantities get into the equation? Something like `q * x**2 + y` where `q` is a quantity?
username_0: I don't think we can derive from Quantity while still working for users who do not have pint installed.
That is one way for pint's quantities to get in; another way is in variable declaration, e.g.
```python
x = gpkit.Variable("x", "meters")
```
How about looking for an "I'm not a numerical class" flag that's specific to pint? E.g., `if hasattr(other, "__pintimnotanumber"): return NotImplemented`
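A toy, self-contained sketch of that opt-out protocol; the attribute name and both classes are illustrative, not real pint or gpkit API:
```python
class Quantity:
    """Toy stand-in for pint.Quantity illustrating the idea."""
    def __init__(self, magnitude):
        self.magnitude = magnitude

    def __add__(self, other):
        # Defer when `other` declares itself "not a plain number";
        # Python will then try other.__radd__(self).
        if getattr(other, "__pint_defer__", False):
            return NotImplemented
        return Quantity(self.magnitude + other)

class Symbolic:
    __pint_defer__ = True  # the marker flag discussed above

    def __radd__(self, other):
        return ("sym-add", other)  # symbolic handling of quantity + expression

print(Quantity(2) + 3)           # the normal numeric path: a Quantity of 5
print(Quantity(2) + Symbolic())  # -> ('sym-add', <Quantity ...>)
```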
Status: Issue closed
username_1: I am closing this now. Feel free to reopen.
username_2: I think this issue is the cause of #278. It appears that Quantity is forcing all other objects to go through its math. This makes sense in most cases since you want units to propagate. However, what it does is force all of the right-hand operations to return Quantities, and it prevents left-hand operator overloads from working properly.
As for a fix, what about making it so that the Quantity math operators check for the existence of a units property (built off of Pint) in the other object? This would allow other users to build off of Pint's units without messing up their code or forcing them to use their own hacked version of Pint's automatic unit conversion or forcing them to inherit from Quantity if they don't need to. Perhaps that would go against the philosophy of the pint package. If so, it would be nice to get clarification on this issue.
username_3: Putting in a vote for reopening this! |
ClickInsight/community-visualizations | 952363040 | Title: Expose control over tick mark font and backround color
Question:
username_0: Hello...
Would it be possible to expose the ability to change the font and background color for the tick marks? In my case, I want to use a transparent chart background (on a dark report page background) and given that the only font color option is light blue (with a white background) I can't use them and have to hide them.
Thanks,
Daniel |
wekan/wekan | 739060430 | Title: Screenshots in comments
Question:
username_0: I tried to take a screenshot (print screen) and paste it into a comment (Ctrl+V), but it doesn't work. Can it be fixed?
I used different permissions for the user, but nothing has changed.
My task looks like this:

Status: Issue closed
Answers:
username_1: On some operating systems, it may be possible to add an image with Ctrl-V to the Attachments / Ctrl-V part of a card.
If you'd like to view that attachment in a comment as well, you can use the image URL in an image tag, with optional width and height, like this: `<img src='https...' width="100" height="100">`. However, it's not currently possible to paste or attach an image directly to a comment.
backend-br/vagas | 927126591 | Title: [Remote] Infrastructure Architect at Qintess
Question:
username_0: ## EMPRESA
Com 30 anos no mercado, somos uma multinacional brasileira entre as maiores integradoras de TI do Brasil.
Valorização dos talentos, seriedade, compromisso no atendimento aos clientes, agilidade e qualidade nos projetos são algumas das premissas que nos diferenciam.
São mais de 2000 clientes – grandes empresas nacionais e globais – que nos reconhecem pela excelência e inovação, o que só é possível pelo trabalho de uma equipe altamente qualificada e engajada. Reunimos diferentes programas de qualidade de vida e oportunidades de carreira, além de ações para proporcionar um ambiente de trabalho motivador e colaborativo, com incentivo à diversidade e à inovação. Tudo isso, baseado nos princípios ESG – Environmental, Social and Governance (em português: Ambiental, Social e de Governança), uma prática global que tem ganhado força no mundo dos negócios e que é composta por três fatores centrais que atuam na medição da sustentabilidade e do impacto social de uma empresa.
Mais de 3.500 profissionais altamente capacitados compõem a nossa equipe em mais de 151 cidades no Brasil e na América Latina. Estamos na era da mudança permanente e, nesse cenário, trabalhamos para suportar as empresas à alcançarem um novo patamar de desempenho e alavancarem à transformação digital.
Somos uma fonte de geração de ideias, que trabalha de forma colaborativa, que transforma pessoas e estas transformam o jeito de fazer negócios.
Para mais informações, acesse: www.qintess.com
## JOB DESCRIPTION
Infrastructure Architect
## LOCATION
Home Office (remote)
## BENEFITS
• AMIL Apto health plan
• Dental plan - <NAME>
• Life insurance
• Meal voucher
• Childcare allowance
• Pharmacy benefit
## REQUIREMENTS
**Required knowledge of:**
- API Gateway/MGMT;
- Azure Functions;
- SMTP;
- SFTP;
- Storage Account;
- Blob Storage;
- vNets;
- EventHub;
- NAT Gateways;
- Open VPN;
- Virtual Servers;
- AD Azure para RBAC e AD on premisse;
- Azure Data Factory;
- DataBricks;
- CMS Drupal;
- Application Gateway;
- Elastic Search;
- Kibana;
- NAT Proxy;
- Office 365;
- Git Repository/Git LAB/Git Runner;
- Nexus;
- WAF;
- Design and deployment with Terraform.
**Behavioral**
- Collaborative profile
- Works closely with devs
- Ownership mindset
- Has an active voice
## How to apply
Please send an email to <EMAIL> with your CV attached; subject line: [VAGA XX] [Your full name]
### Contract type:
- CLT
jgehrcke/gipc | 318281252 | Title: [Windows] Can't run example on Windows 10, Python 3.6
Question:
username_0: Example from [synchronization.py](https://github.com/username_1/gipc/blob/master/examples/synchronization.py). Can't seem to use a pipe as a parameter for a spawned process.
Error traceback:
```
File "test.py", line 33, in <module>
main()
File "test.py", line 13, in main
p = gipc.start_process(writer_process, args=(cend,))
File "C:\Anaconda\lib\site-packages\gipc\gipc.py", line 284, in start_process
p.start()
File "C:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle gevent._semaphore.Semaphore objects
```
System: Windows 10, Python 3.6, gevent 1.2 (same failure on latest though), greenlet 4.1.2
Answers:
username_1: ```
TypeError: can't pickle gevent._semaphore.Semaphore objects
```
The lock on a `GIPCHandle` is meant for intra-process synchronization. It lives on the heap of the current process and cannot even help with synchronizing across multiple processes.
That is, we don't even need to try to pickle it to the child process.
One way to fix that could be to use the `_winapi_childhandle_after_createprocess_parent/child()` hook to remove the semaphore before starting the child, and to re-create a fresh one in the child.
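The same effect can be sketched with standard pickle hooks (the `_lock` attribute name and class shape here are assumptions for illustration, not the actual gipc patch):
```python
# Hedged sketch of the idea, not gipc's actual code: drop the
# process-local gevent semaphore before the handle is pickled for
# the child, then hand the child a fresh one of its own.
import gevent.lock

class GIPCHandle(object):
    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("_lock", None)  # gevent semaphores cannot be pickled
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # Intra-process lock only; each process needs its own.
        self._lock = gevent.lock.Semaphore(value=1)
```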
Need to get CI set up on Windows again and then we can test this.
username_1: This was indeed solved with https://github.com/username_1/gipc/pull/66! Good news. I'll leave this open until we have a release.
Status: Issue closed
|
sstsimulator/sst-macro | 982643638 | Title: Algorithm implementation for MPI_Wtime?
Question:
username_0: Dear All,
I have a question about the SST-Macro implementation of MPI_Wtime.
In the following example, if MPI_Wtime is called twice in a row, both calls return the same time (unless other MPI functions are called in between, which produces different times).
Some questions:
1. Is it correct that two consecutive calls to MPI_Wtime return the same result?
```c
#include <stdlib.h>
#include <mpi.h>

#define DIM 25

static float **a;
static float *x;
static float *y;

void allocate_host_arrays()
{
    int i = 0, j = 0;
    a = (float **)malloc(DIM * sizeof(float *));
    for (i = 0; i < DIM; i++) {
        a[i] = (float *)malloc(DIM * sizeof(float));
    }
    x = (float *)malloc(DIM * sizeof(float));
    y = (float *)malloc(DIM * sizeof(float));
    for (i = 0; i < DIM; i++) {
        x[i] = y[i] = 1.0f;
        for (j = 0; j < DIM; j++) {
            a[i][j] = 2.0f;
        }
    }
}

void compute_on_host()
{
    int i = 0, j = 0;
    for (i = 0; i < DIM; i++)
        for (j = 0; j < DIM; j++)
            x[i] = x[i] + a[i][j] * a[j][i] + y[j];
}

static inline void do_compute_cpu(double target_seconds)
{
    double t1 = 0.0, t2 = 0.0;
    double time_elapsed = 0.0;
    while (time_elapsed < target_seconds) {
        t1 = MPI_Wtime();
        compute_on_host();
        t2 = MPI_Wtime();
        time_elapsed += (t2 - t1); /* time_elapsed always stays zero */
    }
}
```
Thanks to anyone who can shed some light on this.
-sirui
Answers:
username_1: Hi Sirui,
This is expected behavior. Doing actual computation in a skeleton (such as in compute_on_host()) doesn't actually advance simulator time. Accounting for computation needs to be done explicitly. There are a number of ways to do this. The most straightforward examples would be sleep() and compute() in operating_system.h. We've been experimenting with more sophisticated compiler-based approaches, but the toolchain is pretty tricky to get working.
--Joe
username_1: This paper: https://www.osti.gov/servlets/purl/1513073
And Chapter 6 of the sst-macro manual talk about the source-to-source approach.
These are also good background reading for computation modeling in sst-macro.
--Joe
username_0: Hi username_1,
Thank you very much for your reply! I will use the nanosleep() function from time.h, since I want a nanosecond-level time delay.
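For reference, a minimal sketch of that approach (the helper name and delay handling are illustrative, and whether the simulator intercepts nanosleep depends on how the skeleton is built):
```c
#include <time.h>

/* Sleep for roughly `ns` nanoseconds. Sketch only: it ignores the
 * remaining-time output and does not retry if interrupted (EINTR). */
static void delay_ns(long ns)
{
    struct timespec req;
    req.tv_sec  = ns / 1000000000L;
    req.tv_nsec = ns % 1000000000L;
    nanosleep(&req, NULL);
}
```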
Thanks,
--Sirui
Status: Issue closed
|
RehanSaeed/Serilog.Exceptions | 530570513 | Title: Please upload a new nuget package of Serilog.Exceptions.EntityFrameworkCore
Question:
username_0: At the moment the NuGet version of Serilog.Exceptions.EntityFrameworkCore is 5.3.1, which is the version with the incorrect namespace. This is fixed in version 5.3.2, but that version isn't on NuGet.
I thought I had accidentally installed the wrong package when I needed the SQLServer namespace inside my project.
Answers:
username_1: Released. Thank you for pointing that out.
Status: Issue closed
|
syedjawadakhtar/ETHZ-ROS | 472994356 | Title: scan.py execution
Question:
username_0: In scan.py in ```~/ETHZ-ROS/src/husky_high_level_controller/src/```
When executing it with - ```python scan.py```
it gives an error message with ```rosparam: ~topic``` unammed or not identified
But when running the launch file and RViz it works fine |
basharinandre/learning | 783997000 | Title: Similar problem
Question:
username_0: https://github.com/basharinandre/learning/blob/f38586f44aa43257f68af9c4d376baff27306e0d/index.html#L46-L47
list-wrapper__main-navigation is not allowed: the parent has no list-wrapper block. It should be
**header__main-navigation**
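A minimal illustration of the BEM rule in question (the markup is hypothetical): an element's class must be prefixed with the block it actually lives in:
```html
<header class="header">
  <!-- parent block is "header", so the element is header__main-navigation -->
  <nav class="header__main-navigation">...</nav>
</header>
```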
Status: Issue closed |