repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M)
---|---|---|
RespiraWorks/pcbreathe | 604544208 | Title: Triple Check Nucleo Physical Pin Mapping
Question:
username_0: It's kind of hard to tell by the way you did the part definition, making a Nucleo symbol U1 AND the 19x2 header symbols P2/P3 (but NC). I think it would be more straightforward to just have the Nucleo symbol as the two 19x2 headers; at least that's how I would have done it.
Regardless, at first glance, it seems like maybe you have the assignments mirrored/rotated? Again, it's hard to tell, I'm assuming U1A maps to P2, U1B to P3, 1:1 pin number mapping, etc.
I think a couple of people should look at this and maybe produce a clearer representation of the symbol and how it relates to the physical headers.
Answers:
username_1: yeah, those are in there to make them appear on the BOM for parts ordering. What I've done on other designs is to make the module itself just a series of anchoring points, and to have the connectors themselves be the actual connection points. I'll swap that design in, stand by for updates.
username_1: *that's how it's done for the pi, as well
username_1: swap has been made, nucleo board footprint is now mech guide with separately placed connectors (like the Rpi). Please verify pinouts.
username_2: I've verified the pinout based on https://www.st.com/resource/en/schematic_pack/nucleo_64pins_sch.zip
But it would be nice to have a physical unit in hand. I should have one today.
username_2: @username_0 if you can provide the triple check I would be grateful :)
username_0: Yep - Can do it tonight. Also have the physical one in hand.
username_0: Seems like you nailed it - orientation and connector assignment matches the Nucleo!
Status: Issue closed
username_1: many thanks @username_0 |
kasamikona/ConsoleChat | 815845638 | Title: Question
Question:
username_0: Can you DM with this?
Answers:
username_1: Hi, I didn't realize anyone was using this. It was a messy tool that was only really meant for me and a few friends and I made it public for convenience, but it's now an abandoned project. I don't plan to update it in the future if a major discord.js update breaks it, so please don't rely on it.
I don't think it's able to do DMs explicitly, and I don't plan to add that feature or any other new features. Sorry if this impacts your plans. You're welcome to try and add it yourself, but I know the code is a big mess so I wouldn't personally recommend it.
Status: Issue closed
username_0: oh ok, i guess i could try to add it myself. my main plan was to have a way to access discord whenever it's blocked and so far this does just that
username_0: @username_1 if there were to be a major update to discord.js and I updated the code, would you add it here if I opened a pull request? Just wondering since I've heard about discord.js v13
username_0: Can you DM with this?
username_0: also one last question, can i set a server and channel to connect to by default with their ids?
username_1: I'll probably archive this repo so I won't be able to accept PRs, if you want to maintain your own updated fork that's fine with me. I might link to yours if you're okay with that.
There's no direct way to set a default channel but you can just use
```js
let defaultConsoleChannel = client.channels.fetch(yourDefaultChannelId);
consoleChat.setChannel(defaultConsoleChannel);
```
when your bot starts. No server id is necessary when using channel ids directly.
username_0: Ah ok, thanks. And yes I'll be fine with you linking to my [fork](https://github.com/username_0/ConsoleChat).
username_0: ```
let defaultConsoleChannel = client.channels.fetch(yourDefaultChannelId);
consoleChat.setChannel(defaultConsoleChannel);
```
did not work, got this error
```
node index.js
Running ConsoleChat.js v1.2.3
readline.js:1147
throw err;
^
TypeError: consoleMsgChannel.send is not a function
at sendMessage (/home/runner/ConsoleChat/consoleChat.js:102:23)
at processCommand (/home/runner/ConsoleChat/consoleChat.js:452:44)
at Interface.<anonymous> (/home/runner/ConsoleChat/consoleChat.js:201:23)
at Interface.emit (events.js:315:20)
at Interface._onLine (readline.js:329:10)
at Interface._line (readline.js:658:8)
at Interface._ttyWrite (readline.js:1003:14)
at ReadStream.onkeypress (readline.js:205:10)
at ReadStream.emit (events.js:315:20)
at emitKeys (internal/readline/utils.js:335:14)
exit status 1
```
username_0: yes i did change it to the channel id
username_1: My bad, channels.fetch returns a channel promise, not the channel itself.
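A corrected sketch of the earlier snippet (added for clarity; it assumes an async context, and `yourDefaultChannelId` remains a placeholder):
```js
// Await the fetch so setChannel receives the resolved channel, not a Promise.
let defaultConsoleChannel = await client.channels.fetch(yourDefaultChannelId);
consoleChat.setChannel(defaultConsoleChannel);
```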
Status: Issue closed
|
sympy/sympy | 867879476 | Title: Plot3d function does not work properly
Question:
username_0: The example given in plot.py, `plot3d(x*y, (x, -5, 5), (y, -5, 5))`, produces this error:
```
ValueError                                Traceback (most recent call last)
<ipython-input-19-45d58399c482> in <module>
      1 var('x y')
----> 2 plot3d(x*y, (x, -5, 5), (y, -5, 5))

~/opt/miniconda3/lib/python3.9/site-packages/sympy/plotting/plot.py in plot3d(show, *args, **kwargs)
   2184     plots = Plot(*series, **kwargs)
   2185     if show:
-> 2186         plots.show()
   2187     return plots
   2188

~/opt/miniconda3/lib/python3.9/site-packages/sympy/plotting/plot.py in show(self)
    220             self._backend.close()
    221         self._backend = self.backend(self)
--> 222         self._backend.show()
    223
    224     def save(self, path):

~/opt/miniconda3/lib/python3.9/site-packages/sympy/plotting/plot.py in show(self)
   1443                 'The TextBackend supports only one graph per Plot.')
   1444             elif not isinstance(self.parent._series[0], LineOver1DRangeSeries):
-> 1445                 raise ValueError(
   1446                     'The TextBackend supports only expressions over a 1D range')
   1447             else:
ValueError: The TextBackend supports only expressions over a 1D range
```
This error also occurs for other attempted surface graphs. Prior to updating to this version of SymPy, all of these plots worked fine. Is this due to some undocumented change in the functionality of plot3d, or is this a bug?
Answers:
username_0: Additional information: this bug still occurs in a virtual environment using the previous SymPy release. The functions I have tested work fine on Windows 10 and Linux, but for some reason do not work on macOS Big Sur through a Jupyter notebook.
username_1: Do you have matplotlib installed?
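(Side note, not from the original thread: SymPy's plotting module falls back to `TextBackend` when matplotlib cannot be imported, which is exactly what the traceback shows. A quick check:)
```python
# If this import fails, sympy.plotting silently falls back to TextBackend.
import matplotlib
print(matplotlib.__version__)
```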
Status: Issue closed
username_0: I did not have the correct version of matplotlib installed, thank you for the help. |
IntelRealSense/realsense-ros | 646621688 | Title: Frame Rate fluctuating
Question:
username_0: Hey,
Following are the details of my setup:
```
Device: Intel Realsense D415
Device Fw Version: 05.12.05.00
Realsense ROS: v2.2.14
LibRealSense: v2.33.1
```
output of lsusb -v:
```
Bus 004 Device 002: ID 8086:0ad3 Intel Corp.
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 3.20
bDeviceClass 239 Miscellaneous Device
bDeviceSubClass 2 ?
bDeviceProtocol 1 Interface Association
bMaxPacketSize0 9
idVendor 0x8086 Intel Corp.
idProduct 0x0ad3
bcdDevice 50.c5
iManufacturer 1 Intel(R) RealSense(TM) Depth Camera 415
iProduct 2 Intel(R) RealSense(TM) Depth Camera 415
iSerial 3 928223020142
bNumConfigurations 1
```
In realsense-viewer I get 30 fps without any throttling or fluctuations. But when I launch realsense2_camera rs_camera.launch, the output of /diagnostics is as below:
```
header:
seq: 54
stamp:
secs: 1593238538
nsecs: 325000661
frame_id: ''
status:
-
level: 1
name: "camera/realsense2_camera_manager_color: Frequency Status"
message: "Frequency too low."
hardware_id: "925322061947"
values:
-
key: "Events in window"
value: "85"
-
key: "Events since startup"
value: "505"
-
key: "Duration of window (s)"
value: "5.120828"
-
key: "Actual frequency (Hz)"
value: "16.598878"
-
[Truncated]
value: "702"
-
key: "Duration of window (s)"
value: "5.131224"
-
key: "Actual frequency (Hz)"
value: "30.012335"
-
key: "Target frequency (Hz)"
value: "30.000000"
-
key: "Minimum acceptable frequency (Hz)"
value: "27.000000"
-
key: "Maximum acceptable frequency (Hz)"
value: "33.000000"
---
```
And this continues as long as the node is up. For the first 5-6 seconds the framerate is maintained at 30 fps, and then it starts falling, until eventually hitting the 14-15 fps mark. The depth and IR stream framerates are consistent. I've tried changing the camera resolution parameters (color_width, color_height), but this doesn't affect the problem at hand.
Answers:
username_1: Hi @username_0 There was a similar case recently in which a RealSense ROS user's frame rate was being halved but they did not experience this in the RealSense Viewer. In that case they were using **align_depth:=true** in their roslaunch statement. Are you using align_depth in your roslaunch too, please?
https://github.com/IntelRealSense/realsense-ros/issues/1251
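(For reference, a hedged example of launching without depth-to-color alignment, which avoids the frame-rate drop described in that issue:)
```
roslaunch realsense2_camera rs_camera.launch align_depth:=false
```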
username_0: Hey @username_1, yes that was the case. Problem solved. Thanks.
Status: Issue closed
username_1: You are very welcome. Thanks for the update! |
mateuszbaran/CovarianceEstimation.jl | 391986099 | Title: [bug] centercols! fails if given Ints
Question:
username_0: * mechanism is: inplace centering of columns
* if entries are Integer, the mean may be float, in which case it tries to feed floats in place for an array of int.
e.g.
```julia
B = [1 2; 3 4];
CovarianceEstimation.centercols!(B)
# InexactError: Int64
```
I think this is an easy fix with no impact on performance. Before we were doing (in `cov`)
1. copy X into Xc,
2. then modify Xc in place
Now we should do
1. form `Xc` directly as output of `centercols`
## fix
* drop `centercols!`
* replace by a `centercols` with implementation
```julia
centercols(X::AbstractMatrix) = (X .- mean(X, dims=1))
```
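A quick usage sketch of the proposed non-mutating version (assuming `Statistics` provides `mean`):
```julia
using Statistics

centercols(X::AbstractMatrix) = X .- mean(X, dims=1)

B = [1 2; 3 4]
Bc = centercols(B)  # element type promotes to Float64, so no InexactError
```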
Status: Issue closed |
CMakePP/CMakeTest | 980098450 | Title: `Execution count of nested functions #21` introduced a regression
Question:
username_0: See https://github.com/CMakePP/CMakeTest/issues/21
It seems this fix creates a regression.
```cmake
ct_add_test(NAME test_a)
function(${test_a})
ct_add_section(NAME test_b) # <<< Gets executed
function(${test_b})
ct_assert_equal(BUILDSERVER OFF)
endfunction()
ct_add_section(NAME test_c) # <<< Doesn't get executed; Regression Bug
function(${test_c})
ct_assert_equal(RELEASEBUILD OFF)
endfunction()
endfunction()
```
Status: Issue closed |
eonum/drg-search | 154652124 | Title: Documentation
Question:
username_0: - how to obtain catalogue data
- structure of the data folder
- general information about Rails
- data model visualised
- source code comments
Answers:
username_0: * Compile both READMEs and all the information above into a document
* Include usability test concept and results
Status: Issue closed
|
pytorch/pytorch | 705265240 | Title: about benchmark issue
Question:
username_0: Under the condition benchmark=true, I repeated the training twice, each run with epoch=50, and the two trained models differ noticeably in their results on the test set. I know that with benchmark=true, training is not reproducible, but the result is a bit worse than expected: the recall rate dropped from 40.3% to 39.5%. Is this within the expected range?
Answers:
username_1: Can you please give a bit more detail, namely the exact command you are trying to run, what version of PyTorch you are using, and so on.
username_0: In the script: 'torch.backends.cudnn.benchmark = True', 'torch.backends.cudnn.deterministic = True'
pytorch1.5+cu92
Distributed training is used:
```
import torch.distributed as dist
def main():
cfg = configurations[1]
ngpus_per_node = torch.cuda.device_count()
world_size = cfg['WORLD_SIZE']
cfg['WORLD_SIZE'] = ngpus_per_node * world_size
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, cfg))
def main_worker(gpu, ngpus_per_node, cfg):
SEED = cfg['SEED'] # random seed for reproduce results
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = True
```
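As a side note, `torch.backends.cudnn.benchmark = True` and `deterministic = True` pull in opposite directions; a minimal sketch of the usual reproducible setup (not from the thread):
```python
import torch

SEED = 42  # placeholder seed

torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
# Disable cuDNN auto-tuning and restrict it to deterministic algorithms.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
```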
username_0: I know that when cudnn.benchmark=true, different convolution implementation algorithms are selected for the computation, so the training process cannot be reproduced. But I think the results obtained by different convolution implementation algorithms, while different, should not differ very much. I found in testing that the recall rate on the test set from two repeated training runs dropped from 40.2% to 39.5%, and the error exceeded expectations. So I want to know whether this error may be caused by other reasons when benchmark=true, which could be avoided. In addition, is this error within the expected range?
username_2: That's a valid assumption.
Could you update to the latest stable PyTorch version (`1.6`) with CUDA10.2, which would ship with cudnn7.6.5.32 and rerun the script, please? (your cudnn version should be 7.6.3.30, if I'm not mistaken)
Would it be possible to post the model definition, data shapes as well as the used GPU, so that we can check the called kernels?
username_0: I also tried it in cudnn8.0.2 and had the same problem. Sorry I cannot provide our model definition. If this is a definite problem, then I think the problem should be reproduced well.
In addition, I plan to train several times to find the best result, then find the corresponding convolution operator, modify the PyTorch source code, and pin that convolution operator. Is this method reliable?
shlinkio/shlink | 589559444 | Title: Update to infection 0.16.0
Question:
username_0: https://infection.github.io/2020/03/28/whats-new-in-0.16.0/
Status: Issue closed |
Roll20/roll20-character-sheets | 1141758146 | Title: [Shadowrun 5e Advanced]
Question:
username_0: The second weapon mount on a Drone has not rolled any dice for either simple or complex attacks for the last 2 days (matching the last update to the sheet seen on GitHub).
I think I see why, but I am not sure:
This is on weapon 1 for a drone:
<button type="action" class="droneroll autopilotroll" name="act_rollweapon" title="Fire the Weapon" value="@{gmroll}
However, this is not in weapon 2 for a drone.
<button type="action" class="droneroll autopilotroll" name="roll_droneweaponrollautopilot2" title="Fire the Weapon" value="@{gmroll}
It appears "act_rollweapon" is missing on the second Drone slot.
Status: Issue closed |
the-spice-mice/web-app | 399006749 | Title: Logo Sketches
Question:
username_0: Each team member is to create a minimum of 10 logo sketches each.
Answers:
username_0: 
Holli's Logo Sketches
username_1: 
username_1: 
Status: Issue closed
|
sequelize/sequelize | 911694779 | Title: Migrate database with RDS IAM authentication correctly
Question:
username_0: ## Migrate database with RDS IAM authentication correctly
### What was unclear/insufficient/not covered in the documentation
The docs do not describe how to restore the db connection if it is lost in the RDS IAM authentication case.
I am using RDS PostgreSQL with IAM authentication. In my application, the token is generated at runtime before it connects to the RDS cluster. Based on the AWS documentation,
```An authentication token is a string of characters that you use instead of a password. After you generate an authentication token, it's valid for 15 minutes before it expires. If you try to connect using an expired token, the connection request is denied.```
the token is only valid for 15 minutes. My question is: how can I reconnect to the db when my token has already expired, so that the migration can resume?
## Issue Template Checklist
### Is this issue dialect-specific?
- [x] No. This issue is relevant to Sequelize as a whole.
- [ ] Yes. This issue only applies to the following dialect(s): XXX, YYY, ZZZ
- [ ] I don't know.
### Would you be willing to resolve this issue by submitting a Pull Request?
- [ ] Yes, I have the time and I know how to start.
- [ ] Yes, I have the time but I don't know how to start, I would need guidance.
- [ ] No, I don't have the time, although I believe I could do it if I had the time...
- [x] No, I don't have the time and I wouldn't even know how to start.
Answers:
username_0: It turns out Sequelize does not close and re-open the connection, so after 15 minutes no new token generation is required.
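A minimal sketch of one way to handle this anyway, using Sequelize's `beforeConnect` hook so every new connection gets a fresh 15-minute token (`generateAuthToken` is a hypothetical helper wrapping the AWS SDK's RDS signer; host and credentials are placeholders):
```js
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize('mydb', 'iam_user', '', {
  host: 'my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com', // placeholder endpoint
  dialect: 'postgres',
  dialectOptions: { ssl: true },
});

// Regenerate the IAM auth token right before each new connection is opened.
sequelize.beforeConnect(async (config) => {
  config.password = await generateAuthToken(); // hypothetical helper
});
```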
Status: Issue closed
|
SpongePowered/SpongeForge | 333898363 | Title: Movement Module crash with other mods
Question:
username_0: **I am currently running**
- SpongeForge version:
- Forge version:
- Java version:
- Operating System:
Windows 10
- Plugins/Mods:
spongeforge-1.12.2-2705-7.1.0-BETA-3212
plethora-1.12.2-1.1.11
cc-tweaked-1.80pr1.5
**Issue Description**
Just go into global.conf and enable the movement module, and you will crash.
[crash-2018-06-19_22.00.07-server.txt](https://github.com/SpongePowered/SpongeForge/files/2117515/crash-2018-06-19_22.00.07-server.txt)
Answers:
username_1: @phit: I wouldn't say that this is 'high priority', since this mixin just disables some annoying Vanilla messages (and can be disabled in the config). However, I'll take a look.
username_1: This crash happens without either of those mods installed. It appears to happen only in production - it seems to be a problem with the `@ModifyConstant` injector.
username_1: Slightly unrelated: Debugging this issue lead me to submit https://github.com/MinecraftForge/MinecraftForge/pull/4989, since I was having trouble running Plethora in my IDE. |
MCLD/greatreadingadventure | 866332987 | Title: Upgrade fails from 4.1.1 to 4.2.0 if pages exist
Question:
username_0: If pages are created in 4.1.1 and then an upgrade is attempted to 4.2.0, the database migrations will fail.
## Steps to reproduce
1. Install 4.1.1
2. Create a page
3. Upgrade to 4.2.0
This bug was reported by @Jason-42 in discussion #749.
## Expected behavior
Automatic functioning with language enhancements
## Actual behavior
Inability to perform migration, error in the log:
```
2021-04-23 11:06:38.073 -07:00 [Error] Failed executing DbCommand ("3"ms) [Parameters=[""], CommandType='Text', CommandTimeout='30']"
""SELECT TOP(2) [u].[Id], [u].[AchievedAt], [u].[Age], [u].[BranchId], [u].[CanBeDeleted], [u].[CardNumber], [u].[CreatedAt], [u].[CreatedBy], [u].[Culture], [u].[DailyPersonalGoal], [u].[Email], [u].[FirstName], [u].[HouseholdHeadUserId], [u].[IsActive], [u].[IsAdmin], [u].[IsDeleted], [u].[IsEmailSubscribed], [u].[IsFirstTime], [u].[IsHomeschooled], [u].[IsLockedOut], [u].[IsNewsSubscribed], [u].[IsSystemUser], [u].[LastAccess], [u].[LastActivityDate], [u].[LastBroadcast], [u].[LastName], [u].[LockedOutAt], [u].[LockedOutFor], [u].[PasswordHash], [u].[PhoneNumber], [u].[PointsEarned], [u].[PostalCode], [u].[ProgramId], [u].[SchoolId], [u].[SchoolNotListed], [u].[SiteId], [u].[SystemId], [u].[UnsubscribeToken], [u].[Username]
FROM [Users] AS [u]
WHERE [u].[IsSystemUser] = CAST(1 AS bit)"
```
## Technical details
After creating the language table (migration `20190312210309_add-base-i18n`) we don't insert a language, relying on running the software to insert locales that are present. If the software is not run between these migrations this error will occur (which is why it was not caught during development). When the migration which adds i18n to pages occurs (migration `20190329202403_add-page-i18n`) the `Pages` table can't be modified due to the lack of languages since `LanguageId` has a foreign key constraint.
## Mitigation strategies
1. Running the following SQL Query against the database inserts a default language so that the migration can proceed successfully:
```tsql
IF(NOT EXISTS(SELECT 1 FROM [Languages]))
BEGIN
INSERT INTO [Languages] ([CreatedBy], [CreatedAt], [Description], [IsActive], [IsDefault], [Name])
VALUES (1, GETDATE(), 'English (United States)', 1, 1, 'en-US')
END
```
2. Remove all pages and re-insert them after the migration occurs.
3. Once 4.2.1 is released, upgrade to it as the migration should be patched to include that query.
Status: Issue closed
Answers:
username_0: This issue is fixed in the develop branch but will not be fixed in a release until 4.2.1. |
floodsk/portfolio | 271121865 | Title: Title text alignment/line breaks
Question:
username_0: 1) On the homepage the categories text should be left aligned.
2) Also, can we widen their columns to maybe a min of 75px (not sure what % that would be)? Mostly I just want to avoid text wrapping with shorter titles (e.g. "about me" should take one line; "user experience design" should take up 2 lines, not three).
3) long category names overlap page titles
Affecting laptop and larger browser-widths.

Answers:
username_1: @username_0 This has been addressed.
#### FYI:
Using the terminal app, do the following (summarized as a shell snippet after this list):
* Type `cd ` followed by the path to where you downloaded the portfolio repo. No worries if you don't know the path: if you find the repo in the Finder app, you can drag the folder named portfolio into the terminal app and it will insert the path for you. **NOTE**: Remember to type `cd` + a space character.
* Hit enter when you have done the above.
* You can now run the `git pull origin master` command to get the latest changes.
* When `git pull` is done you can then run `php -S localhost:8080` to start a PHP development server.
* You should now be able to enter `localhost:8080` in a browser URL bar and see your local website running.
* Type `ctrl + c` in the terminal where you ran `php -S localhost:8080` when you are done; this will stop the development server.
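The same steps as a single shell snippet (the repo path is a placeholder):
```
cd /path/to/portfolio
git pull origin master
php -S localhost:8080
# open localhost:8080 in a browser; press ctrl + c here to stop the server when done
```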
Status: Issue closed
|
emory-libraries/dlp-curate | 505884841 | Title: Show User UI: Works Created link returns no matches
Question:
username_0: In the Show Users page, if you click on the Works Created link, the query returns no results. If I search by my user name, it will return items I have deposited. The Activity Tab displays differently formatted but functional links.
Broken query link:
https://curate.library.emory.edu/catalog?f%5Bgeneric_type_sim%5D%5B%5D=Work&locale=en&q=%22eporter%40emory.edu%22&search_field=depositor

Answers:
username_1: @username_0 Hyrax is configured out of the box to map depositor to the User's email. Our depositor field receives `uid`, so I will customize the Hyrax behavior to reflect that.
username_1: So, User Activity is dynamically formatted by the Actor Stack and the formats vary by Works' and Collections' attributes created. I'm not sure what you meant by the quote above. Is there some action you wanted us to take on the Activity List of items?
username_1: PR made: https://github.com/emory-libraries/dlp-curate/pull/1547
No screenshots necessary.
Status: Issue closed
username_2: This occurs both in our local Curate as well as Nurax, so it may be a Hyrax bug.
In the [Show Users page](https://curate-test.library.emory.edu/users), if you click on the _Works Created_ link for a user with activity noted, the query returns no results.
Also, the _Activity Tab_ on an individual user's profile displays differently formatted but functional links.

username_2: @username_1 this looks good to me.
username_0: Working as expected, thanks!
Status: Issue closed
|
dmlc/gluon-cv | 1056351768 | Title: Pip install ocr.utils is not working
Question:
username_0: <img width="1350" alt="Screen Shot 2021-11-17 at 16 41 22" src="https://user-images.githubusercontent.com/40887316/142243723-8fb2b23a-180b-42e7-9c09-5ec37511f1b0.png">
Status: Issue closed
Answers:
username_1: I don't think GluonCV has or uses this ocr module. Close this issue for now, but feel free to reopen it. |
jlippold/tweakCompatible | 574518896 | Title: `Himiko 13` not working on iOS 12.4
Question:
username_0: ```
{
"packageId": "com.platykor.himiko13",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.platykor.himiko13",
"deviceId": "iPhone6,1",
"url": "http://cydia.saurik.com/package/com.platykor.himiko13/",
"iOSVersion": "12.4",
"packageVersionIndexed": false,
"packageName": "Himiko 13",
"category": "Tweaks",
"repository": "Packix",
"name": "Himiko 13",
"installed": "",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.platykor.himiko13",
"commercial": true,
"packageInstalled": false,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Kill all apps and lock the screen with a tapâ£ï¸(iOS 13)",
"latest": "1.4.5",
"author": "Plat-Ykor",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": "For iOS 13 only"
}
```
Status: Issue closed |
mozilla/neqo | 830303117 | Title: H3 client - Cannot create stream Err
Question:
username_0: I've tried building `neqo` from source on two machines to rule out a potential environment issue, but on both machines the `h3` `neqo-client` fails with the error below.
```
./target/release/neqo-client --resume --output-read-data http://www.facebook.com:443/
H3 Client connecting: [::]:35855 -> [fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b]:443
Cannot create stream Err(Unavailable)
Cannot create stream Err(Unavailable)
Cannot create stream Err(Unavailable)
Cannot create stream Err(Unavailable)
Cannot create stream Err(Unavailable)
```
Do I need to set `--ciphers` to have this work against TLS targets?
The test against the local server does work
```
./target/release/neqo-client http://127.0.0.1:12345/
H3 Client connecting: 0.0.0.0:38415 -> 127.0.0.1:12345
Cannot create stream Err(Unavailable)
Successfully created stream id 0 for http://127.0.0.1:12345/
Unhandled event ResumptionToken(ResumptionToken { token: [7, 1, 128, 0, 64, 0, 7, 10, 34, 64, 70, 8, 1, 16, 12, 0, 5, 4, 128, 16, 0, 0, 6, 4, 128, 16, 0, 0, 0, 8, 105, 2, 125, 0, 173, 219, 119, 67, 14, 1, 8, 106, 178, 0, 15, 10, 34, 75, 241, 26, 75, 154, 29, 205, 14, 129, 9, 1, 16, 1, 4, 128, 0, 117, 48, 7, 4, 128, 16, 0, 0, 4, 8, 255, 255, 255, 255, 255, 255, 255, 255, 43, 173, 154, 139, 141, 134, 1, 0, 207, 75, 114, 134, 32, 48, 169, 42, 102, 165, 132, 86, 159, 116, 202, 190, 98, 26, 27, 115, 103, 169, 145, 9, 157, 45, 252, 71, 163, 128, 84, 2, 115, 239, 32, 102, 2, 0, 5, 189, 89, 236, 37, 229, 163, 0, 5, 189, 130, 39, 212, 165, 163, 0, 5, 189, 89, 236, 37, 229, 163, 0, 2, 163, 0, 0, 0, 0, 1, 249, 233, 139, 156, 255, 255, 255, 255, 0, 1, 73, 48, 130, 1, 69, 48, 129, 236, 160, 3, 2, 1, 2, 2, 5, 0, 176, 13, 245, 143, 48, 10, 6, 8, 42, 134, 72, 206, 61, 4, 3, 2, 48, 15, 49, 13, 48, 11, 6, 3, 85, 4, 3, 19, 4, 116, 101, 115, 116, 48, 30, 23, 13, 49, 57, 48, 49, 50, 55, 49, 50, 50, 54, 52, 55, 90, 23, 13, 49, 57, 48, 52, 50, 55, 49, 50, 50, 54, 52, 55, 90, 48, 15, 49, 13, 48, 11, 6, 3, 85, 4, 3, 19, 4, 116, 101, 115, 116, 48, 89, 48, 19, 6, 7, 42, 134, 72, 206, 61, 2, 1, 6, 8, 42, 134, 72, 206, 61, 3, 1, 7, 3, 66, 0, 4, 42, 240, 199, 17, 152, 64, 127, 175, 32, 106, 156, 147, 147, 171, 185, 13, 157, 177, 225, 249, 112, 141, 249, 175, 72, 224, 44, 119, 74, 249, 38, 109, 45, 217, 239, 119, 190, 34, 246, 151, 97, 85, 39, 175, 182, 174, 5, 16, 183, 139, 81, 228, 52, 245, 172, 45, 183, 68, 82, 214, 10, 3, 114, 4, 163, 53, 48, 51, 48, 25, 6, 3, 85, 29, 17, 4, 18, 48, 16, 130, 14, 115, 101, 114, 118, 101, 114, 46, 101, 120, 97, 109, 112, 108, 101, 48, 9, 6, 3, 85, 29, 19, 4, 2, 48, 0, 48, 11, 6, 3, 85, 29, 15, 4, 4, 3, 2, 7, 128, 48,10, 6, 8, 42, 134, 72, 206, 61, 4, 3, 2, 3, 72, 0, 48, 69, 2, 32, 118, 227, 238, 3, 17, 159, 222, 78, 215, 173, 203, 63, 51, 101, 41, 145, 17, 156, 179, 18, 64, 146, 197, 54, 64, 212, 255, 201, 133, 92, 244, 58, 2, 33,0, 148, 138, 31, 164, 15, 1, 107, 82, 152, 245, 77, 127, 251, 227, 229, 183, 33, 162, 111, 169, 222, 240, 171, 167, 99, 81, 10, 183, 76, 80, 130, 65, 0, 0, 0, 9, 49, 50, 55, 46, 48, 46, 48, 46, 49, 0, 0, 0, 0, 0, 0, 0,0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 3, 4, 0, 5, 189, 89, 236, 37, 229, 163, 0, 4, 0, 0, 1, 0, 0, 4, 0, 0, 0, 255, 0, 0, 29, 0, 4, 3, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,0, 0, 0, 0, 0, 0, 0, 0, 0, 19, 1, 1, 48, 234, 253, 247, 124, 20, 118, 204, 134, 39, 28, 56, 70, 249, 164, 223, 29, 166, 166, 235, 34, 94, 141, 241, 71, 54, 161, 136, 201, 4, 86, 155, 35, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 1, 50, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 2, 104, 51, 0, 242, 28, 217, 236, 94, 68, 248, 61, 174, 125, 131, 122, 164, 0, 0, 0, 0, 63, 185, 136, 167, 167, 94, 221, 65, 167, 63, 86, 244, 7, 92, 58, 140, 0, 176, 15, 78, 74, 118, 47, 100, 153, 156, 158, 7, 143, 9, 211, 63, 1, 212, 165, 192, 143, 19, 125, 219, 130, 206, 131, 226, 81, 63, 113, 205, 165, 92,82, 16, 45, 252, 104, 71, 38, 90, 157, 194, 11, 6, 128, 76, 136, 140, 107, 236, 221, 231, 21, 136, 18, 97, 76, 93, 4, 222, 182, 206, 198, 230, 120, 173, 123, 181, 132, 90, 219, 9, 119, 39, 198, 37, 188, 170, 43, 94, 29, 129, 232, 90, 86, 183, 55, 131, 86, 184, 185, 198, 231, 233, 34, 217, 168, 200, 171, 245, 196, 214, 120, 54, 64, 251, 104, 61, 31, 1, 223, 165, 223, 110, 220, 48, 215, 194, 241, 208, 70, 226, 74, 122, 180, 221, 96, 205, 27, 70, 56, 192, 241, 102, 88, 99, 14, 188, 71, 65, 
210, 29, 102, 221, 224, 44, 183, 176, 187, 215, 246, 92, 91, 206, 105, 159, 3, 108, 115, 147, 239, 143, 126, 132, 103, 127, 67, 6, 98, 141, 107, 198, 197, 236, 91, 175, 183, 127, 158, 118, 176, 120, 119, 80, 34, 187, 44, 249, 122, 201, 252, 214, 184, 5, 8, 90, 113, 176, 211, 18, 128, 141, 175, 8, 126, 191, 177, 20], expiration_time: Instant { tv_sec: 1625465, tv_nsec: 823177287 } })
READ HEADERS[0]: fin=false [(":status", "200"), ("content-length", "11")]
READ[0]: 11 bytes
<FIN[0]>
```
Answers:
username_1: Try adding `-a h3-29`. I don't think that Facebook has deployed the final version of QUIC and HTTP/3 yet. Our current client defaults to the final version for testing purposes.
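For reference, the full invocation with that flag (assembled from the command quoted above):
```
./target/release/neqo-client -a h3-29 --resume --output-read-data http://www.facebook.com:443/
```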
Status: Issue closed
username_0: @username_1 that works which is great. You are right that the sites below are not yet advertising `h3` in their `alt-svc` headers:
- Facebook - `alt-svc: h3-29=":443"; ma=3600,h3-27=":443"; ma=3600`
- Google - `alt-svc: h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"`
- Cloudflare - `alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400`
And it is why https://mew.org/ works. Thanks again. |
mattermost/docs | 489877895 | Title: Request for Documentation: Update Config setting description for Team Name Display
Question:
username_0: Mattermost user `katie.wiersgalla` from https://community-daily.mattermost.com has requested the following be documented:
```
Right now there is both a config setting and a user setting.
I don't think that the config setting restricts people from changing it, I think the config setting is used for the default and for push notifications. Documentation should be updated to clarify exactly how the setting works though as it's ambiguous - https://docs.mattermost.com/administration/config-settings.html#teammate-name-display
```
See the original post [here](https://community-daily.mattermost.com/_redirect/pl/npc6xw7c9j8u3cg7495jbh43kr).
_This issue was generated from [Mattermost](https://mattermost.com) using the [Doc Up](https://github.com/jwilander/mattermost-plugin-docup) plugin._
Status: Issue closed |
rerost/issue-creator | 483186129 | Title: [08/21/2019-08/28/2019] Sample
Question:
username_0: issue-creator
Create new issue from this issue
```
issue-creator create https://github.com/username_0/issue-creator/issues/1
```
Create new issue from this issue by every monday
```
issue-creator schedule apply '0 0 * * 1' https://github.com/username_0/issue-creator/issues/1
```
last issue: https://github.com/username_0/issue-creator/issues/9
_created from https://github.com/username_0/issue-creator/issues/1 by [issue-creator](https://github.com/username_0/issue-creator) _
Status: Issue closed |
phpstan/phpstan | 292281294 | Title: Incorrect variable may not be defined warning
Question:
username_0: ### Summary of a problem or a feature request
phpstan incorrectly reports that a variable might not be defined. Specifically, the value target of a foreach is reported as might not be defined after the loop, even when there is sufficient information to show that the loop _must_ have iterated at least once for the code to be reached.
### Code snippet that reproduces the problem
1. https://phpstan.org/r/6762e042fa172646410eca60449b7354
2. https://phpstan.org/r/5735aae73a39f48bd76c3a75e5bbf127
3. https://phpstan.org/r/3b98070636c0974658810607a3dfec06
4. https://phpstan.org/r/d2ccf447a0b83e8280a7a28e81855c4b
### Expected output
There should be no error reported. This error is new in phpstan 0.9.2. It did not appear in 0.9.1.
In all four cases, phpstan reports incorrectly that the value variable of a foreach may not be defined after the loop, even though it must be if the code is reached.
In case 1, the value is always defined, because a non-empty array will always result in the foreach value variable being set. In case 2, the array is explicitly checked for non-empty status, but the error is still displayed. (The error is still displayed if the `if($arr)` test is moved to cover both the foreach and the reference to is variable.)
Cases 3 and 4 are slightly more complicated, because there is a proxy used to determine if the array was empty. In 3, a variable is incremented in each loop iteration, and if non-zero, would indicate the presence of the variable. Case 4 (which most closely resembles my actual code), uses a !== test between two initially null variables, one of which is used as a loop counter, and the other which (in production code) may be changed to obtain the value of the loop counter at some point. In any case, given the initial values of the variables, the test only succeeds if there has been at least one loop iteration, which means the value variable must be present.
I'm uncertain if this is one issue, or multiple issues; if multiple, I can split this issue up as necessary.
Answers:
username_1: Hi, thanks for the report! The thing that got fixed in 0.9.2 is:
https://phpstan.org/r/5faa326a7fcea698ee5e7b695beb6a7d
```php
public function sayHello(array $arr): void
{
foreach ($arr as $i) {
}
var_dump($i);
}
```
No error is reported on that 0.9.1, hiding potential bugs.
All your examples have in common the fact that there's no logic in PHPStan's engine that would tell "this loop ran at least once, the variable must exist then!". That never worked and will be an objective of some future release.
I'd recommend wrapping access to the variable in an `if (isset($i))` check; PHPStan will understand that. Anything indirect like `count` won't work in the current version.
Let's leave this issue open - it contains useful examples that will serve as test cases once this logic is implemented.
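A sketch of that workaround:
```php
<?php

function sayHello(array $arr): void
{
    foreach ($arr as $i) {
    }
    if (isset($i)) { // PHPStan narrows $i to "defined" inside this branch
        var_dump($i);
    }
}
```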
username_1: Second example is now solved: https://github.com/phpstan/phpstan-src/commit/4f7157096c4706d7e3a6a4924740258ba63f0cac
I don't feel like examples involving `$cnt` are very valuable, so I'm gonna skip those.
Status: Issue closed
|
BCDevOps/devops-requests | 757312646 | Title: Please remove the following namespaces
Question:
username_0: Please remove the following namespaces:
agehlers-sandbox agehlers-sandbox Active
ak73dc-sandbox Next Gen Security Project (sandbox) Active
jjrkby-dev DevSecOps Workshop (dev) Active
jjrkby-tools DevSecOps Workshop (tools) Active
All workload is scaled to zero. Thx.
Status: Issue closed |
kalimatas/php-liquid | 307103797 | Title: for loop working incorrect with "in deep variable"
Question:
username_0: If I use for with a "deep variable", the for loop works incorrectly.
Example
```
{% for product in collections[13].products %}
<li>
<span><a href="{{ product.url }}">{{ product.id }}: {{ product.title }}</a></span>
</li>
{% endfor %}
```
I mean to loop over all the products in the collection at index 13, but it loops over collections instead, same as this case:
```
{% for product in collections %}
<li>
<span><a href="{{ product.url }}">{{ product.id }}: {{ product.title }}</a></span>
</li>
{% endfor %}
```
---
When I use the code below, it works well
```
{% assign products = collections[13].products %}
{% for product in products %}
<li>
<span><a href="{{ product.url }}">{{ product.id }}: {{ product.title }}</a></span>
</li>
{% endfor %}
```
Please check!
Sorry for my bad English
Answers:
username_1: No worries! Thanks for the report.
Please stick to `{% assign ... %}` for now. We'll see if we can fix this by the next release.
username_0: Thank you so much |
zyedidia/micro | 354297887 | Title: perl syntax highlight is broken?
Question:
username_0: ## Description of the problem or steps to reproduce

_s/1.4.1-git/1.4.2-git/_
1. `qw()` should be highlighted in a different color, and the content between `()` should be in a different color
2. `my $answer =` breaks something, but this is how it should be written
## Specifications
Version: 1.4.2-dev.19
Commit hash: 9cbe2c6
Compiled on August 27, 2018
OS: Arch Linux
Terminal: Tilix
Answers:
username_0: Oh, very sorry. I had an old file inside my home directory, probably very very old (I forgot about it). But the syntax highlighting can be better. Lightly modified file: [perl.yaml.gz](https://github.com/zyedidia/micro/files/2324290/perl.yaml.gz)
cloudmesh/client | 138706267 | Title: tests: use a unique basename
Question:
username_0: we get lots of errors such as this when i run py.test
ERROR collecting tests/cm_basic/test_CloudmeshDatabase.py
import file mismatch:
imported module 'tests.cm_basic.test_CloudmeshDatabase' has this __file__ attribute:
/Users/big/github/cloudmesh-new/client/build/lib/tests/cm_basic/test_CloudmeshDatabase.py
which is not the same as the test file we want to collect:
/Users/big/github/cloudmesh-new/client/tests/cm_basic/test_CloudmeshDatabase.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
Answers:
username_0: must be called with py.test tests
added make pytest
Status: Issue closed
|
humhub/humhub-prosemirror | 1104035110 | Title: prosemirror-dev-tools/toolkit not working
Question:
username_0: I tried to implement [prosemirror-dev-tools](https://github.com/d4rkr00t/prosemirror-dev-tools) / [prosemirror-dev-toolkit](https://github.com/TeemuKoivisto/prosemirror-dev-toolkit) in order to make debugging and testing the prosemirror editor much more efficient.
I was able to dodge one error message by editing the Gruntfile but was left with another.
<hr>
1. Install via npm
2. Change `humhub-prosemirror/Gruntfile.js` the following line:
``` javascript
rollupPluginBuble(),
```
with these
``` javascript
rollupPluginBuble({
objectAssign: "Object.assign",
transforms: {
modules: false,
dangerousForOf: true,
},
}),
```
3. Wrap the EditorView instance in the applyDevTools method:
from repository ReadMe:
``` js
import applyDevTools from "prosemirror-dev-tools";
const view = new EditorView /*...*/();
applyDevTools(view);
```
Done that in `humhub-prosemirror/src/editor/index.js`
- Add import statement to line 9:
``` javascript
import { applyDevTools } from 'prosemirror-dev-toolkit'
```
- Add following line after this.view declaration in line 107:
``` javascript
applyDevTools(this.view);
```
4. run `grunt rollup`
Somehow afterwords the editor is not loaded properly and I get the following error messages:

Interestingly the error also occurs without `applyDevTools(this.view);` inserted. The import itself is enough.
Any idea how to make it work?
Answers:
username_0: See branch for commited changes: https://github.com/username_0/humhub-prosemirror/tree/add_prosemirror_dev_tools |
PaddlePaddle/Paddle | 333532320 | Title: Optimized user API about checkpoint
Question:
username_0: Currently, we have exposed too many checkpoint APIs to users, such as:
```
save_checkpoint
load_checkpoint
clean_checkpoint
load_persist_vars_without_grad
load_lookup_table_vars
save_persist_vars_without_grad
get_latest_checkpoint_serial
```
Many of them are not for users but for developers, so I will extract ```save_checkpoint```, ```load_checkpoint```, ```clean_checkpoint``` from them, and make the others private functions.
Status: Issue closed
|
baomidou/mybatis-plus | 489511803 | Title: Question about the file-overwrite setting
Question:
username_0: ### 当前使用版本(必须填写清楚,否则不予处理)
3.1.2
### 该问题是怎么引起的?**([最新版](https://search.maven.org/search?q=g:com.baomidou%20a:mybatis-*)上已修复的会直接**close**掉)**
GlobalConfig gc = new GlobalConfig();
gc.setFileOverride(false); // whether to overwrite files
This is a single on/off switch covering dao, domain, and xml together. It should be possible to set them separately. For example, I may only want to regenerate the domain classes, because I may have written my own code in the dao and xml files.
### Steps to reproduce
### Error message
Answers:
username_1: Configuring each one individually is not supported for now.
username_2: It is supported, just not that obviously, and it can be done: add an InjectionConfig and call setIFileCreate with your own IFileCreate implementation. Inside your own IFileCreate implementation you have full control over the overwrite behavior you described.
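A hedged sketch of that approach for the 3.x generator (class and method names follow the comment above; exact signatures may differ between versions, and the overwrite policy shown, regenerate entities while keeping existing dao/xml, is just an example):
```java
import java.io.File;

import com.baomidou.mybatisplus.generator.InjectionConfig;
import com.baomidou.mybatisplus.generator.config.rules.FileType;

// Inside your generator setup:
InjectionConfig injectionConfig = new InjectionConfig() {
    @Override
    public void initMap() {
        // no extra template parameters needed
    }
};
// Decide per file type whether a file may be (re)written.
injectionConfig.setFileCreate((configBuilder, fileType, filePath) -> {
    if (FileType.ENTITY == fileType) {
        return true; // always regenerate domain classes
    }
    return !new File(filePath).exists(); // never overwrite existing dao/xml files
});
```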
username_1: up
Status: Issue closed
|
tensorflow/tensorboard | 263916761 | Title: AttributeError: 'SummaryMetadata' object has no attribute 'display_name' ( OS:windows, Python36, tensorflow (1.3.0rc0) tensorflow-tensorboard (0.1.7)
Question:
username_0: My laptop OS is Windows 10 (64-bit), with tensorflow (1.3.0rc0) installed in Anaconda Python 3.6 along with tensorflow-tensorboard (0.1.7). When I run the command `tensorboard --logdir="path//to//logs"`, I get the error below.
```
Exception in thread Reloader:
Traceback (most recent call last):
File "c:\soft_app\anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "c:\soft_app\anaconda3\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "c:\soft_app\anaconda3\lib\site-packages\tensorboard\backend\application.py", line 327, in _reload_forever
reload_multiplexer(multiplexer, path_to_run)
File "c:\soft_app\anaconda3\lib\site-packages\tensorboard\backend\application.py", line 301, in reload_multiplexer
multiplexer.Reload()
File "c:\soft_app\anaconda3\lib\site-packages\tensorboard\backend\event_processing\plugin_event_multiplexer.py", line 195, in Reload
accumulator.Reload()
File "c:\soft_app\anaconda3\lib\site-packages\tensorboard\backend\event_processing\plugin_event_accumulator.py", line 189, in Reload
self._ProcessEvent(event)
File "c:\soft_app\anaconda3\lib\site-packages\tensorboard\backend\event_processing\plugin_event_accumulator.py", line 335, in _ProcessEvent
value = data_compat.migrate_value(value)
File "c:\soft_app\anaconda3\lib\site-packages\tensorboard\data_compat.py", line 57, in migrate_value
return handler(value) if handler else value
File "c:\soft_app\anaconda3\lib\site-packages\tensorboard\data_compat.py", line 69, in _migrate_histogram_value
display_name=value.metadata.display_name or value.tag,
AttributeError: 'SummaryMetadata' object has no attribute 'display_name'
```
Answers:
username_1: The `display_name` property was patched into TF 1.3 after rc0. Does installing the release of 1.3.0 fix the problem? I.e., `pip install --upgrade tensorflow` (or `tensorflow-gpu`)?
username_0: Thank you. Using the release version (1.3.0), no issue.
username_0: Thanks. Release version of 1.3.0 got no issue.
Status: Issue closed
|
KsGin/ksgin.github.io | 296183662 | Title: Intersection-of-Two-Arrays | KsGin Blog
Question:
username_0: https://blog.ksgin.com/2017/12/15/intersection-of-two-arrays/
LeetCode#349 Intersection of Two Arrays Given two arrays, write a function to compute their intersection. Example:Given nums1 = [1, 2, 2, 1], nums2 = [2, 2], return [2]. Note: Each element in the res |
sculptor/sculptor | 565541913 | Title: Support for custom SQL functions in ConditionalCriteria
Question:
username_0: We have to enhance the ConditionalCriteria definition and allow calling more than one custom function together with the existing ones. Maybe we can get rid of our custom Dialects, which implement standard methods like week() and hour() for each specific database ... and do this in ConditionalCriteria.
Answers:
username_0:
```
col[0] = 102
col[1] = 306
col[2] = 102
col[3] = 102 * 1 = 102
col[4] = 102
col[5] = 2008
col[6] = Gandhi
col[7] = <NAME>
col[8] = <NAME>
col[9] = Gandhi Mahatutma 1
col[10] = Gandhi_AHATUTMA
col[11] = %%%%%Gandhi#-#-#-#-#
col[12] = ndh
col[13] = 6
col[14] = 2 <- index of 'a' in "Gandhi"
col[15] = ################################################################################
col[16] = Gandhi::Mahatutma
col[17] = andhi Mahatutma_andhi
col[18] = t
```
Any ideas for improvements or design changes are welcome.
username_0: New examples with different expressions in [PersonServiceTest](https://github.com/sculptor/sculptor/blob/00b5182e99069d812829d254418eaa2d148ffe20/sculptor-examples/DDDSample/src/test/java/org/sculptor/dddsample/relation/serviceapi/PersonServiceTest.java#L205).
- Custom SQL functions [testFindByConditionExpr()](https://github.com/sculptor/sculptor/blob/00b5182e99069d812829d254418eaa2d148ffe20/sculptor-examples/DDDSample/src/test/java/org/sculptor/dddsample/relation/serviceapi/PersonServiceTest.java#L306)
- new having() part for ConditionalCriteria in [testFindByConditionHaving()](https://github.com/sculptor/sculptor/blob/00b5182e99069d812829d254418eaa2d148ffe20/sculptor-examples/DDDSample/src/test/java/org/sculptor/dddsample/relation/serviceapi/PersonServiceTest.java#L246)
- Improved autocompletion for expressions
- OR now has higher priority than AND
Missing:
- case/when/otherwise
Status: Issue closed
username_0: Now we have comprehensive support for case/when/then conditions. For more details, check out the examples in [PersonServiceTest.java](https://github.com/sculptor/sculptor/blob/efc4bf7ab226a48203b72771f99abd49a2d554a2/sculptor-examples/DDDSample/src/test/java/org/sculptor/dddsample/relation/serviceapi/PersonServiceTest.java#L495).
suriyun-production/turnbase-rpg-docs | 590463099 | Title: Clan and Friend search problem.
Question:
username_0: Clan and friend search function does not work. I can't find the player or clan I'm looking for.
Answers:
username_1: Try the changes in this commit:
https://github.com/username_1/rpg-php-service/commit/cd84303b61cc394ece4c5ad4ffbcb717dd352179
username_0: Yes, the problem is solved. Thank you.
username_0: It's ridiculous that the search includes yourself. Can you hide yourself from the results?
Status: Issue closed
username_1: https://github.com/username_1/rpg-php-service/commit/e68b707934e8<PASSWORD>
Try this. |
balena-io/etcher | 645193533 | Title: I am not able to restore bootable usb
Question:
username_0: - **Etcher version:**
1.5.100
- **Operating system and architecture:**
Mac OS
- **Image flashed:**
ubuntu 18.0
- **Do you see any meaningful error information in the DevTools?**
<!-- You can open DevTools by pressing `Ctrl+Shift+I` (`Ctrl+Alt+I` for Etcher before v1.3.x), or `Cmd+Opt+I` if you're on macOS. -->
it made my USB show up as only about 2 MB. I tried formatting it on Windows but it is the same, and on macOS the USB is not detected.
Answers:
username_1: https://github.com/balena-io/etcher/blob/master/docs/USER-DOCUMENTATION.md#recovering-broken-drives ?
username_2: You can follow the guide linked above, there are plenty of other resources on the subject of formatting a drive so that the OS can read it. Closing
Status: Issue closed
username_3: thanks. |
Chingu-cohorts/voyage-wiki | 301341630 | Title: @jen is a free-agent looking for a new team timezone C, Bear
Question:
username_0: **_About You_**
- Team Name:
- Slack Name: @jen
- Timezone: Asia
**_Issue Description & Expected Outcome:_**
Team is inactive after one member left. Jen could use a new team.
**_Symptoms:_**
**_Steps to Recreate:_**
**_Resolution:_**
Answers:
username_0: Sent a recommendation for someone to contact to join a new team. Waiting to see how it pans out. 🙏
Status: Issue closed
|
MicrosoftDocs/microsoft-365-docs | 532789247 | Title: Provide a list of all account types and a list of all accounts used by Microsoft
Question:
username_0: Hello,
for compliance reasons it makes sense to provide:
- a list of all account types that exist (e. g. on some external, non-Microsoft sites there are resource and system account types mentioned but no documentation about this can be found in the official docs; see this as one of those sources: https://kb.wisc.edu/office365/page.php?id=32951)
- a list of all accounts that Microsoft does use for services and internal tasks, for example the 'app@sharepoint' user.
This information makes sense to know as it is necessary to be able to differentiate within the audit log between usual activities of an Office 365 tenant and what kind of activities are suspicious/malicious/should rise an alert.
Thank you!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3494b782-92ad-e365-3398-bd2d760d10c6
* Version Independent ID: fa4bca98-3975-23dd-dfac-f23049c71d01
* Content: [Search the audit log in the Security & Compliance Center](https://docs.microsoft.com/en-us/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance#feedback)
* Content Source: [microsoft-365/compliance/search-the-audit-log-in-security-and-compliance.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance.md)
* Service: **o365-seccomp**
* GitHub Login: @username_1
* Microsoft Alias: **username_1**
Answers:
username_1: @username_0 Thanks for the suggestion. Your reasons for wanting an authoritative list of service accounts are understandable. However, given the number of O365 services and the number of different tasks these services perform continuously on behalf of tenants, it is too difficult to compile and maintain an authoritative list.
You might try exporting audit records and transforming the AuditData JSON object to view, filter, and sort additional information contained in audit records. For info, see https://docs.microsoft.com/en-us/microsoft-365/compliance/export-view-audit-log-records.
We will also update the audit topic with information about the "Search query performed" SP activity and the default app@sharepoint SPO user account (which typically performs the search on behalf of other features like auto-applying labels and retention policies).
Status: Issue closed
username_0: Thank you for your detailed reply @username_1!
Are you aware of typical/common patterns in such accounts of Microsoft? Like if there are more accounts like app@sharepoint which have no fully qualified domain name as username.
What about the list of existing account types? I think that list should be quite short in relation to the list of all Microsoft's accounts.
username_1: @username_0 I assume you are already familiar with the various [UserType enum values](https://docs.microsoft.com/office/office-365-management-api/office-365-management-activity-api-schema#enum-user-type---type-edmint32). I assume that's what you mean by "list of existing account types." If you're asking about a common format/pattern for the UserID field (when UserType equals "service account"), then I don't think there is one. In researching my response to this issue, it was very instructive to export a large number of audit records, parse the AuditData JSON object as I mentioned above, then filter on the UserType column (for example, UserType = 4, to display only the records that were "triggered" by system accounts). I saw that the values in the UserID column were, in terms of "typical/common patterns", pretty varied. Doing this might also help you compile a list of the different service account values, at least in your tenant.
As an aside/anecdote and probably not that interesting or important, but I discovered that the UserType for the app@SharePoint account is "Regular User" (UserType enum = 0). That surprised me. I also discovered that this account (and SHAREPOINT\system) is returned when you run Get-SPOUser against the top-level SP site in my tenant. Maybe this is why it's regular user in the audit record.
username_1: Run Get-SPOUser in SPO PowerShell, that is
username_0: I appreciate the information you did provide and I will look into this in detail within the next days. I do appreciate any information provided and added to the docs and I think this will bring me quite some steps forward. Thank you for taking this topic seriously! |
CTFd/CTFd | 441814453 | Title: Enable CORS for CTFd's challenge submit API
Question:
username_0: Using the UI to submit flag codes is generally a good thing.
But sometimes an auto-submit feature could be useful (e.g. in connection with Juice Shop) in order to track the participant's success in real-time.
During implementation of this feature within Juice Shop we ran into an issue with the CORS policy.
How could we whitelist cross-origin requests to the API endpoint for submitting a challenge?
A colleague and I tried the Flask-CORS package but we failed to get it running.
Generally it would be useful to add an option in the config in order to enable submitting solutions from another site.
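For reference, a minimal Flask-CORS sketch of the kind of setup we attempted (the endpoint path and origin here are illustrative assumptions, not CTFd configuration):
```python
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
# Allow cross-origin requests only on the challenge-submission endpoint.
CORS(app, resources={r"/api/v1/challenges/attempt": {"origins": "https://juice-shop.example.com"}})
``` |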
neo-one-suite/neo-one | 338803406 | Title: neo-one cli fails to automatically start server on node:10.5.0-stretch
Question:
username_0: Seems to get stuck on the waitReachable - https://github.com/neo-one-suite/neo-one/blob/master/packages/neo-one-server-client/src/ServerManager.ts#L49
Switching from waitForReady to just calling a method did not seem to help.
Answers:
username_0: Fixed by moving to grpc-js
Status: Issue closed
|
tensorflow/tensorflow | 237282491 | Title: No way to freeze fused BN stats
Question:
username_0: When fine-tuning networks trained with BN, we sometimes want to freeze and use the accumulated moving averages while still allowing gradients to be backpropagated through the BN layer. Currently there is no way of doing so with fused BN, since with is_training = False the layer gives erroneous gradients. Of course, we could use the batch statistics from the new task to accumulate the stats, but that isn't possible in the case of batch_size = 1.
I understand that due to the nature of the cuDNN kernel it might be hard to implement such a feature, but a fused Batch Renorm layer could be a decent compromise, as it uses the moving averages during training as well as during inference.
Answers:
username_1: @username_2 can you comment on this? Thanks!
username_2: Setting is_training to False and at the same time doing gradient computation is expected to get erroneous gradients, because the second and third outputs of FusedBatchNorm are only applicable/valid when is_training is True.
Could you try train with fused batch norm, and fine-tuning with non-fused batch norm?
username_0: In object detection and semantic segmentation sometimes the batch_size is set to 1 for the original network, and unfortunately our use case fall under this category. Non-fused batch norm for NCHW is around 6x slower https://github.com/tensorflow/tensorflow/issues/7551#issuecomment-280421351.
username_2: Not sure if I understand correctly: are you saying fine tune is done with a batch size of 1 and non-fused batch norm doesn't work with a batch size of 1? And even if it works, it is 6x slower?
How long do you expect fine tune currently takes (on the scale of hours or days? say on GPU)?
username_0: Yes, fine tune is done with a batch size of 1 (on each GPU). I am training on a 4x 1080 machine, and fine tuning would take around ~20 hours. Non-fused batch norm does work (gives correct gradients), but it is way too slow and will push the fine tuning time beyond 100 hours. It would be faster to simply retrain the original network in NHWC and fine tune there with non-fused batch norm, which is ironic, considering NCHW is the CuDNN canonic format.
username_2: It may make sense to simply forward the population mean and variance to the second and third outputs of FusedBatchNorm when fused=True. There is no performance penalty for this, since it is a forwarding instead of a copy.
Before this feature is implemented, you can modify the code [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/nn_impl.py#L815) to set batch_mean and batch_variance to mean and variance when is_training is False. Note that doing this will introduce actual copies (a slight performance penalty), so it is not as ideal as making this change on the C++ side; but this should get you most of the GPU performance.
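A rough sketch of that local patch (paraphrased; the surrounding variable names in nn_impl.py are approximate):
```python
# Inside fused_batch_norm(...) in nn_impl.py, after the kernel call:
if not is_training:
    # Forward the population statistics so the gradient computation
    # sees valid values in outputs 2 and 3 instead of garbage.
    batch_mean, batch_variance = mean, variance
```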
username_2: I marked this as contribution welcome for now. Anyone on the Tensorflow team or external developers are welcome to contribute if interested. Please let me know if you have any questions.
username_2: I'm closing this issue, as the freeze mode fused batch norm has been supported now: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/nn_grad.py#L674
Status: Issue closed
username_0: Thank you!
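For reference, a minimal sketch of freeze-mode fused batch norm with gradients, now that the gradient is implemented (TF1-style graph code; shapes and names are illustrative):
```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 64, 64, 32])
scale = tf.get_variable("scale", shape=[32])
offset = tf.get_variable("offset", shape=[32])
# Accumulated moving averages from the original training run:
pop_mean = tf.get_variable("pop_mean", shape=[32], trainable=False)
pop_var = tf.get_variable("pop_var", shape=[32], trainable=False)

y, _, _ = tf.nn.fused_batch_norm(
    x, scale, offset, mean=pop_mean, variance=pop_var,
    epsilon=1e-5, is_training=False)

loss = tf.reduce_sum(y)
# Gradients now flow through the frozen BN layer back to x, scale and offset.
grads = tf.gradients(loss, [x, scale, offset])
```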
username_3: @username_2 Thanks for providing the implementation for backward. However, I noticed that the performance of FusedBatchNormGrad when is_training=False is very poor.
I didn't benchmark the op alone, but switching is_training on and off can affect my total training speed by 2.5x. My estimate is that the op itself would have at least 5x difference in speed.
I guess the reason is that the implementation now is not based on fused kernels. This makes it as slow as the non-fused kernels.
username_3: @username_2 I used the kernel `BatchNormWithGlobalNormalizationGrad` for the backward implementation when is_training=False. This improves the speed a lot (my overall training time is then close to the fused case when is_training=True).
To really use this op there are still some issues:
1. The op `BatchNormWithGlobalNormalizationGrad` was marked deprecated. I unmarked it to test but not sure if that's desired. It's a helpful op in this case.
2. The op doesn't support backward. This will break "grad of grad" unit tests. I think "grad of grad" is less important but it could be implemented if really needed.
username_3: I'll send a PR on this.
Status: Issue closed
username_5: Hi @username_3 ,
I notice that currently only NHWC is supported for FusedBatchNormFreezeGrad, and tf.transpose is needed in Python if the input is NCHW. This can take an enormous amount of time if batch norm is used at large scale and the data format is NCHW.
Do you consider adding support for NCHW?
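For reference, a sketch of that transpose workaround (NCHW input, NHWC-only frozen-BN kernel; shapes are illustrative):
```python
import tensorflow as tf

x_nchw = tf.placeholder(tf.float32, [None, 32, 64, 64])
scale = tf.get_variable("scale", shape=[32])
offset = tf.get_variable("offset", shape=[32])
pop_mean = tf.get_variable("pop_mean", shape=[32], trainable=False)
pop_var = tf.get_variable("pop_var", shape=[32], trainable=False)

x_nhwc = tf.transpose(x_nchw, [0, 2, 3, 1])  # NCHW -> NHWC
y_nhwc, _, _ = tf.nn.fused_batch_norm(
    x_nhwc, scale, offset, mean=pop_mean, variance=pop_var,
    data_format='NHWC', is_training=False)
y_nchw = tf.transpose(y_nhwc, [0, 3, 1, 2])  # NHWC -> NCHW
```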
username_3: I don't have time for this.
In fact a thorough benchmark is needed because the performance of the normal reduction ops has been improved a lot since last year. It's quite likely that my naive cuda kernel isn't very performant in NHWC or NCHW compared to a non-fused multi-op implementation.
username_5: Thanks for your prompt response.
I have done a quick benchmark using my network. For fused BN it takes ~1.43 seconds/batch, while for non-fused BN it takes ~1.32 seconds/batch, regardless of whether NCHW or NHWC is used. It seems that for frozen BN, non-fused is faster.
username_3: A quick test shows that fused & non-fused BN have similar backward speed when frozen:
```python
#!/usr/bin/env python
import tensorflow as tf
import time
import os
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
GPU_MODE = len([x.name for x in local_device_protos if x.device_type == 'GPU'])
N = 64
C = 128
H, W = 128, 128
print("N, C, H, W:", [N, C, H, W])
def benchmark_all(fuse, format):
shape4d = [N, C, H, W] if format == 'NCHW' else [N, H, W, C]
tf.reset_default_graph()
input = tf.get_variable('input', shape=shape4d, dtype=tf.float32)
scale = tf.get_variable('scale', shape=[C])
offset = tf.get_variable('offset', shape=[C])
mean = tf.random_normal(shape=[C])
var = tf.random_normal(shape=[C]) + 1.
if fuse:
output, _, _ = tf.nn.fused_batch_norm(
input, scale, offset, mean, var, epsilon=1e-5, data_format=format, is_training=False)
else:
if format == 'NCHW':
newshape = [1, C, 1, 1]
scale = tf.reshape(scale, newshape)
offset = tf.reshape(offset, newshape)
mean = tf.reshape(mean, newshape)
var = tf.reshape(var, newshape)
output = (input - mean) * (tf.rsqrt(var + 1e-5) * scale) + offset
forward_op = output.op
cost = tf.reduce_sum(output)
backward_op = tf.train.GradientDescentOptimizer(0.1).minimize(cost)
def benchmark(op, nr_iter=200, nr_warmup=10):
if not GPU_MODE:
nr_iter = nr_iter // 10
nr_warmup = nr_warmup // 5
for k in range(nr_warmup):
op.run()
start = time.perf_counter()
for k in range(nr_iter):
op.run()
end = time.perf_counter()
itr_per_sec = nr_iter * 1. / (end - start)
return itr_per_sec
sess = tf.Session()
with sess.as_default():
sess.run(tf.global_variables_initializer())
[Truncated]
formats = ['NHWC', 'NCHW']
else:
formats = ['NHWC']
for format in formats:
for fuse in [True, False]:
benchmark_all(fuse, format)
```
Outputs on GTX1080Ti:
```
N, C, H, W: [64, 128, 128, 128]
Fuse=True, Format=NHWC, Forward: 100.62081027587591 itr/s
Fuse=True, Format=NHWC, Backward: 61.392495723794156 itr/s
Fuse=False, Format=NHWC, Forward: 100.51494045143419 itr/s
Fuse=False, Format=NHWC, Backward: 60.875319893753236 itr/s
Fuse=True, Format=NCHW, Forward: 261.46341501158184 itr/s
Fuse=True, Format=NCHW, Backward: 44.5398623696861 itr/s
Fuse=False, Format=NCHW, Forward: 71.64600813406135 itr/s
Fuse=False, Format=NCHW, Backward: 48.042794214933586 itr/s
``` |
DotNetAnalyzers/ReflectionAnalyzers | 378963376 | Title: Validate DefaultMemberAttribute
Question:
username_0: Should validate that the member specified by the DefaultMemberAttribute actually exists.
```csharp
[DefaultMember("Bar")]
public class Foo
{
public int Value { get; set; }
}
```
Answers:
username_1: Makes me think of https://docs.microsoft.com/dotnet/api/system.componentmodel.defaultpropertyattribute and https://docs.microsoft.com/dotnet/api/system.componentmodel.defaulteventattribute which are not in the Reflection namespace.
username_0: @username_2, It's used by InvokeMember when the empty string is provided for the member name.
@username_1, I though of those too, but I'm not sure if we should start branching out into non-reflection attributes that happen to specify members by name, as that list could get quite large and diverse (which reminds me, I should create an issue in WpfAnalyzers for the [DependsOnAttribute](https://docs.microsoft.com/en-us/dotnet/api/system.windows.markup.dependsonattribute?view=netframework-4.7.2) :))
Status: Issue closed
|
CityofToronto/bdit_flashcrow | 908556530 | Title: "Requested by me" filter not applied after login
Question:
username_0: **What Happened**
When you first log in and load the Track Requests page, the filter "Requested by me" is displayed but is not applied to the list of requests.
**What Should Happen**
Only requests submitted by me should be displayed if the filter "Requested by me" is automatically selected.
**To Reproduce**
Steps to reproduce the bug:
1. From the sidebar, click on the "Track Requests" icon
2. Enter credentials and log in
3. MOVE redirects you to the Track Requests page
4. See error (screenshot below)
**Screenshots**

**Additional Notes**
I haven't submitted any studies from my account (Maddy!)
Tested in prod (v1.4.0)
Tested on Chrome
Answers:
username_0: Also seeing this in `dev`.
username_1: This is a _race condition_ (-ish) bug. Here's what happens, in order, for a non-Data Collection user:
1. the user loads Track Requests, which by default does not set the `userOnly` flag.
2. the `created()` hook in `FcMixinRouteAsync` fetches requests without `userOnly`.
3. the `created()` hook in `<FcRequestsTrack>` sets `userOnly = true`.
4. this triggers another fetch for requests, this time with `userOnly`.
5. the request with `userOnly` often fetches a much smaller result set...so it finishes first, and updates the table!
6. ...and then the request without `userOnly` finishes, and updates the table!
This is why, if you look closely, you'll see the correct set of requests flash before being replaced by the entire request list. (As further evidence: if you clear the "Requested by me" filter, then re-apply it, it now works properly.)
Currently working on a fix here.
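The usual guard for this kind of race is to tag each fetch with a token and drop responses from superseded requests. MOVE's frontend is JavaScript, but the pattern is language-agnostic; a minimal sketch in Python (`query_backend` and `render` are hypothetical stand-ins, not MOVE's actual functions):
```python
class RequestsTable:
    def __init__(self):
        self._latest_token = 0  # token of the most recent fetch

    async def fetch(self, user_only: bool):
        self._latest_token += 1
        token = self._latest_token
        rows = await query_backend(user_only=user_only)  # hypothetical API call
        if token != self._latest_token:
            return  # a newer fetch superseded this one: drop the stale result
        render(rows)  # hypothetical UI update
```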
Status: Issue closed
|
frederic-bousefsaf/ippg-3dcnn | 1140919924 | Title: How to use the result.mat?
Question:
username_0: After executing predict/main.py, I can get the file "result.mat", but I don't know how to use it. Can it be transformed into the prediction map?
Answers:
username_1: Hello,
You can load this file with MATLAB
username_0: Hello,
Thanks for your reply. So if I input an image sequence, can I get the iPPG version of the image sequence as output by updating part of this code?
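For reference, the file can also be opened from Python with SciPy (a minimal sketch; the variable names stored inside result.mat depend on the script that wrote it):
```python
from scipy.io import loadmat

data = loadmat("result.mat")
# List the stored variables, skipping MATLAB's metadata keys.
print([k for k in data if not k.startswith("__")])
# e.g. predictions = data["some_variable_name"]  # hypothetical key name
```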
facebookresearch/detectron2 | 920380668 | Title: ModuleNotFoundError: No module named 'torchvision.ops'
Question:
username_0: ## Instructions To Reproduce the 🐛 Bug:
1. Full runnable code or full changes you made:
```
```
2. What exact command you run: python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ --input input1.jpg input2.jpg
3. __Full logs__ or other relevant observations:
```
Traceback (most recent call last):
File "/home/sunjeet/detectron2/demo/demo.py", line 14, in <module>
from detectron2.data.detection_utils import read_image
File "/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/detectron2/data/__init__.py", line 4, in <module>
from .build import (
File "/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/detectron2/data/build.py", line 12, in <module>
from detectron2.structures import BoxMode
File "/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/detectron2/structures/__init__.py", line 7, in <module>
from .masks import BitMasks, PolygonMasks, polygons_to_bitmask
File "/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/detectron2/structures/masks.py", line 9, in <module>
from detectron2.layers.roi_align import ROIAlign
File "/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/detectron2/layers/__init__.py", line 3, in <module>
from .deform_conv import DeformConv, ModulatedDeformConv
File "/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/detectron2/layers/deform_conv.py", line 9, in <module>
from torchvision.ops import deform_conv2d
ModuleNotFoundError: No module named 'torchvision.ops'
```
4. please simplify the steps as much as possible so they do not require additional resources to
run, such as a private dataset.
## Expected behavior:
If there are no obvious error in "full logs" provided above,
please tell us the expected behavior.
## Environment:
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-10.2'
--------------------- ----------------------------------------------------------------------------------
sys.platform linux
Python 3.9.4 | packaged by conda-forge | (default, May 10 2021, 22:13:33) [GCC 9.3.0]
numpy 1.20.3
detectron2 0.4 @/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/detectron2
Compiler GCC 7.3
CUDA compiler not available
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.8.1 @/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/torch
PyTorch debug build False
GPU available False
Pillow 8.2.0
torchvision 0.2.2 @/home/sunjeet/anaconda3/envs/opencv/lib/python3.9/site-packages/torchvision
fvcore 0.1.3.post20210317
cv2 4.5.2
--------------------- ----------------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2021.2-Product Build 20210312 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=0, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON,
Answers:
username_1: Uninstall torchvision, and then don't install it from pip.
Build it from source instead, or install a specific version such as torchvision 0.9.0 or 0.8.2.
username_0: This is what I get after using the command you mentioned:
Requirement already satisfied: torchvision in ./anaconda3/envs/opencv/lib/python3.9/site-packages (0.2.2)
Requirement already satisfied: numpy in ./anaconda3/envs/opencv/lib/python3.9/site-packages (from torchvision) (1.20.3)
Requirement already satisfied: six in ./anaconda3/envs/opencv/lib/python3.9/site-packages (from torchvision) (1.16.0)
Requirement already satisfied: torch in ./anaconda3/envs/opencv/lib/python3.9/site-packages (from torchvision) (1.8.1)
Requirement already satisfied: pillow>=4.1.1 in ./anaconda3/envs/opencv/lib/python3.9/site-packages (from torchvision) (8.2.0)
Requirement already satisfied: typing_extensions in ./anaconda3/envs/opencv/lib/python3.9/site-packages (from torch->torchvision) (3.10.0.0)
username_1: On an armv7l Raspberry Pi 4 it installed that version when I ran that command.
For your purpose, try removing everything and building from source.
username_2: According to torchvision: https://github.com/pytorch/vision#installation
PyTorch 1.8 requires torchvision 0.9.
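A quick way to confirm the mismatch before reinstalling (a sanity-check sketch; `deform_conv2d` is what detectron2 tries to import, and it does not exist in torchvision 0.2.2):
```python
import torch
import torchvision

# Should be a matching pair, e.g. 1.8.x / 0.9.x:
print(torch.__version__, torchvision.__version__)

# Fails with ModuleNotFoundError / ImportError on old torchvision builds:
from torchvision.ops import deform_conv2d
```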
Status: Issue closed
|
squidfunk/mkdocs-material | 169740772 | Title: Syntax Hightlighting
Question:
username_0: It would be awesome to have syntax hightlighting in the code blocks.
If not always then at least configurable in the settings.
Thx for the great theme!
Answers:
username_1: There should be. Have you activated the codehilite extension?
``` yaml
markdown_extensions:
- codehilite(css_class=code)
```
username_0: Oh - I didn't know this was needed. It worked without this option with a normal theme.
Thanks!!
username_1: The other themes use JavaScript-based Syntax Highlighting. I have it on my internal list for the next release as a fallback, if codehilite is not installed.
Status: Issue closed
|
ShadowCraft/shadowcraft-ui-react | 278729029 | Title: Item stats are being changed improperly, even when selection the same item
Question:
username_0: https://cdn.discordapp.com/attachments/219866789067096065/386652610054062100/unknown.png
Answers:
username_0: The equipped item and the selected item appear to have the correct stats.
The wrong stats are being passed to the action, somehow.
Status: Issue closed
username_0: should have been fixed with https://github.com/ShadowCraft/shadowcraft-ui-react/commit/40ff0f950a0fd395ddbbc20b39848f3b852d63b5 |
ccyang/ysx-stackdriver-alerts | 408787760 | Title: [ALERT] CPU request utilization on dada-cloud dada-ipc-app
Question:
username_0: Date: February 11, 2019 at 09:36PM
<EMAIL>

Google Stackdriver - Alert firing ([view details](https://app.google.stackdriver.com/incidents/0.l3tzl8dw9jri?project=dada-cloud))

CPU request utilization
CPU request utilization for dada-cloud dada-ipc-app is above the threshold of 1.3 with a value of 1.477.

Summary
- Start time: Feb 11, 2019 at 1:32PM UTC (~3 min, 45 sec ago)
- Project: dada-cloud
- Policy: [CPU request utilization (containers)](https://app.google.stackdriver.com/policy-advanced/6212262078520598160?project=dada-cloud)
- Condition: CPU request utilization
- Metric: kubernetes.io/container/cpu/request_utilization
- Threshold: above 1.3
- Observed: 1.477
MicrosoftDocs/azure-devops-docs | 775023393 | Title: Order of execution of test cases in Visual Studio Test task using Test Plan
Question:
username_0: Visual Studio Test task - Azure Pipelines
While using the Visual Studio Test task, when there are 'n' tests received and discovered to execute, how is the order of execution decided?
Out of those 'n' tests identified to execute, which one is picked up first, and what is the logic behind this?
Is this sorted by test case ID, GUID, or some other algorithm?
Is there a way I can choose the order of execution in a single Visual Studio Test task executed with a Test Plan (version 2.*)?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 16bd8dd1-7650-8c18-2553-38a8bc54e892
* Version Independent ID: f9fa517a-7adc-e327-d12e-256295a1ed52
* Content: [Visual Studio Test task - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/vstest?view=azure-devops)
* Content Source: [docs/pipelines/tasks/test/vstest.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/tasks/test/vstest.md)
* Product: **devops**
* Technology: **devops-cicd-tasks**
* GitHub Login: @shashban
* Microsoft Alias: **shashban**
Status: Issue closed
Answers:
username_1: @username_0 Thanks for your question. It looks like you're asking a product question, rather than an issue with the documentation. Here are a couple of options where you might consider asking your question:
- [Azure DevOps Services](https://azure.microsoft.com/support/devops/)
- [Azure DevOps Support Bot](https://azuredevopsvirtualagent.azurewebsites.net/)
- [Azure DevOps on Stack Overflow](https://stackoverflow.com/questions/tagged/azure-devops) |
robertdebock/ansible-development-environment | 1102697699 | Title: "no space left on device" error
Question:
username_0: At least with the libvirt Vagrantfile, the tmpfs /tmp folder in fedora*-cloud is too small (2 GB) - tested with fedora35-cloud.
The task that installs pip modules for molecule currently consumes around 4 GB of temporary files.
https://github.com/robertdebock/ansible-development-environment/blob/4f792f71ebc529c50ccd8ee5a380241cd266d1eb/roles/molecule/tasks/main.yml#L12 |
redux-observable/redux-observable | 315361493 | Title: Redux 4 Support
Question:
username_0: This issue will track overall progress and issues for supporting redux v4. I have almost all the tests passing now and a WIP PR is imminent once I split off some changes that I also made that are related to synchronous emission on start up, but I think need to be done as follow ups as they might need community input to be intuitive.
Feel free to file individual tickets if you find something, just know tha I imagine a majority are already covered by our existing tests and so are known.
Related: #449
@username_2
Answers:
username_1: Hi, I have tried to run tests using Redux 4 and got 6 errors, I have investigated the issues.
When you doing "deep.equal" using "chai" module, you also add this in every test
`{ type: '@@redux/INIT' }`
But in log you can find redux init action value like this
"type": "@@redux/INITh.f.7.9.g"
"@@redux/INITh.f.7.9.g"
and etc ...
so test will fail because "@@redux/INITh.f.7.9.g" not equal to '@@redux/INIT'
I have added two functions which filtering the Redux init value.
```
const isReduxInitAction = (action) => !!action.type.match(/@@redux\/INIT/);
const removeReduxInit = (actions) =>
actions.filter(action =>
!isReduxInitAction(action)
);
```
Actually, not all the errors are like this; using these functions you can fix some of them.
username_1: Please look this changes in my forked repo
https://github.com/username_1/redux-observable/commit/dc799a4570696946f28b60d49af8cb4655eb61c3
username_0: Hey @username_1, I super appreciate your help, though you definitely should hold off as I've already got everything working, and there were also changes that had to be made to behavior.
username_1: thanks @username_0, I will wait for new releases of redux-observable which will work with redux 4
username_2: @username_1 btw for tests it would be fine to `import { __DO_NOT_USE__ActionTypes } from 'redux'` imo
username_1: yes @username_2, cool, I did not know about it.
username_3: I've been using the current release with Redux 4; the only bug I noticed is about emitting actions during middleware initialisation. So I added an `INIT` action that I dispatch at startup, and use a `pipe(first(), switchMap(...))` for these cases to ensure the emitted actions are delayed until after middleware initialisation.
username_4: @username_0 any news on progress?
username_2: I've done some initial testing with https://github.com/redux-observable/redux-observable/pull/501 and I think it's ready for a new prerelease build.
I've also worked on docs for a bit, that'll need more work. Big thing I didn't touch yet is all the jsbin examples. Is there an easy or right way to update those?
username_0: redux v4 is supported as of [1.0.0-alpha.3](https://github.com/redux-observable/redux-observable/blob/master/CHANGELOG.md#100-alpha3-2018-06-01), with the latest release now being 1.0.0-beta.1
I'm not sure if #449 is still an issue, but for any other issues please create new tickets 😄
Status: Issue closed
|
MicrosoftDocs/azure-docs | 412897854 | Title: EventSchema missing
Question:
username_0: When a new Event Grid Topic is created, there is now a drop-down selection of Event Schema.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: aa9fcb9e-8d13-c4d1-a5d0-7eeb706c2b51
* Version Independent ID: 642cbcbe-ed7d-1181-fc9a-ee3538446996
* Content: [Send custom events to web endpoint - Event Grid, Azure portal](https://docs.microsoft.com/en-us/azure/event-grid/custom-event-quickstart-portal)
* Content Source: [articles/event-grid/custom-event-quickstart-portal.md](https://github.com/Microsoft/azure-docs/blob/master/articles/event-grid/custom-event-quickstart-portal.md)
* Service: **event-grid**
* GitHub Login: @username_2
* Microsoft Alias: **username_2**
Answers:
username_1: @username_0 Thanks for the feedback! We have assigned this issue to the content author to review and update accordingly.
username_2: https://github.com/MicrosoftDocs/azure-docs-pr/pull/71157
username_2: #in-progress
username_2: I have updated the article.
#please-close
username_3: @username_0
Thanks for bringing this to our attention. We will now close this issue. If there are further questions regarding this matter, please tag me in a comment. I will reopen it and we will gladly continue the discussion.
Status: Issue closed
|
vishvananda/netlink | 127522771 | Title: Parsing other messages [attrs]
Question:
username_0: Hi, can someone point me to how to parse messages other than route?
I'm trying to map another kind of message (an nl generic message with attributes), but it's really hard to tell whether that's even possible with this library.. (also, there is no documentation)
Answers:
username_1: If it is using special structures they need to be defined in nl along with serialize and deserialize methods and tests. See for example nl/xfrm_linux.go
Then you make the request and loop through the messages, calling your deserialize method for each one. Finally you need to convert it into some friendly format for users of the library to use (xfrm.go:Policy) See the XfrmPolicyList in xfrm_policy_linux.go for an example.
Some netlink messages have nested attributes, which can generally be teased out using ParseRouteAttr and the data of the parent attr.
username_0: Hi @username_1, thanks for the quick response.
For some reason nested attributes get the wrong type (32776 instead of 7). Do you have any idea?
username_0: ```
// Strip the NLA_F_NESTED and NLA_F_NET_BYTEORDER flag bits from an
// attribute type, e.g. 32776 (0x8008) becomes 8.
func normalizeNestedType(t uint16) uint16 {
	return t & ^uint16(syscall.NLA_F_NESTED | syscall.NLA_F_NET_BYTEORDER)
}

// Re-apply the NLA_F_NESTED flag bit before serializing a nested attribute.
func deNormalizeNestedType(t uint16) uint16 {
	return t | uint16(syscall.NLA_F_NESTED)
}
```
username_0: @username_1 Do you think we could add these methods to the repo as utils? I think they'd be very useful.
I'll also try to write a PR for the documentation. I think it's worth showing that it's possible to implement other protocol messages with this project.
postor/react-wechat-provider | 400143288 | Title: Error: Cannot find module '../config' ?
Question:
username_0: Hi,
I followed the steps in your readme:
yarn dev
then visit http://localhost:3000
but it errors out:
```
{ Error: Cannot find module '../config'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:603:15)
at Function.Module._load (internal/modules/cjs/loader.js:529:25)
at Module.require (internal/modules/cjs/loader.js:657:17)
at require (internal/modules/cjs/helpers.js:22:18)
at Object.<anonymous> (/home//project/unzip/example/pages/index.js:5:1)
at Module._compile (internal/modules/cjs/loader.js:721:30)
at Module._compile (/home//project/unzip/example/node_modules/source-map-support/source-map-support.js:492:25)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:732:10)
at Module.load (internal/modules/cjs/loader.js:620:32)
at tryModuleLoad (internal/modules/cjs/loader.js:560:12)
at Function.Module._load (internal/modules/cjs/loader.js:552:3)
at Module.require (internal/modules/cjs/loader.js:657:17)
at require (internal/modules/cjs/helpers.js:22:18)
at _callee$ (/home//project/unzip/example/node_modules/next/dist/server/require.js:33:46)
at tryCatch (/home//project/unzip/example/node_modules/babel-runtime/node_modules/regenerator-runtime/runtime.js:62:40)
at Generator.invoke [as _invoke] (/home//project/unzip/example/node_modules/babel-runtime/node_modules/regenerator-runtime/runtime.js:296:22)
at Generator.prototype.(anonymous function) [as next] (/home//project/unzip/example/node_modules/babel-runtime/node_modules/regenerator-runtime/runtime.js:114:21)
at step (/home//project/unzip/example/node_modules/babel-runtime/helpers/asyncToGenerator.js:17:30)
at /home//project/unzip/example/node_modules/babel-runtime/helpers/asyncToGenerator.js:28:13 code: 'MODULE_NOT_FOUND' }
```
What should I check?
Answers:
username_1: It requires the appid and secret config for the WeChat app; you can get them from the WeChat app admin page https://open.weixin.qq.com
```
mv config-example.json config.json
# fill your wechat appid secret .etc | 填写微信appid secret等
vi config.json
```
refer https://github.com/username_1/react-wechat-provider#run-example |
react-brasil/vagas | 610190406 | Title: [São Paulo] Front-end Developer at 5A Attiva
Question:
username_0: ## 5A Attiva
Hello, we are looking for a Front End Developer, from Junior to Senior level. We want a professional who wants to grow professionally and who enjoys working in a Squad model.
Here you will be heard and will have a voice throughout the project. You need to know how to work in a team and, above all, enjoy working in a team.
## Hiring process
The hiring process will be 100% remote.
## Location
Due to the pandemic, after approval the professional will start out working fully from home. - São Paulo - <NAME>
## Requirements
Work as a Front-End developer, and enjoy working with front-end
Front-end: React, HTML5, JavaScript, SASS, Less, CSS3.
Experience with style preprocessors (Sass or Less)
Consuming web services
JSON handling
Experience working on agile projects
Self-organization is essential
JavaScript
Code versioning with git
Experience with agile methodologies such as Kanban and Scrum.
## Hiring
PJ (contractor), terms to be agreed
## How to apply
Please send an email to <EMAIL> with your CV attached.
- 🏢 Flexible
- 🏢 Remote
- 👦 Junior
- 👴 Senior
- ⚖️ PJ
Answers:
username_1: Hi @username_0, how's it going? Is it remote only during the pandemic, or can it be fully remote? I'm based in Salvador-BA.
username_2: Email sent
username_3: Email sent
apache/airflow | 899654861 | Title: Running tasks marked as 'orphaned' and killed by scheduler
Question:
username_0: **Apache Airflow version**:
2.0.2, 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-eks-7684af", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2020-10-20T22:57:40Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
**Environment**:
- **Cloud provider or hardware configuration**:
AWS EKS
- **Others**:
Helm chart - 8.0.8, 8.1.0
Executor - CeleryExecutor
**What happened**:
When DAG is paused, and long PythonOperator tasks triggered manually (with "Ingnore all deps" - "run"), they are failing with error:
```
[2021-05-24 08:49:02,166] {logging_mixin.py:104} INFO - hi there, try 6, going to sleep for 15 secs
[2021-05-24 08:49:03,808] {local_task_job.py:188} WARNING - State of this instance has been externally set to None. Terminating instance.
[2021-05-24 08:49:03,810] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 172
[2021-05-24 08:49:03,812] {taskinstance.py:1265} ERROR - Received SIGTERM. Terminating subprocesses.
```
And in the scheduler logs there's this message:
```
[2021-05-24 08:48:59,471] {scheduler_job.py:1854} INFO - Resetting orphaned tasks for active dag runs
[2021-05-24 08:48:59,485] {scheduler_job.py:1921} INFO - Reset the following 2 orphaned TaskInstances:
<TaskInstance: timeout_testing.run_param_all 2021-05-23 13:46:13.840235+00:00 [running]>
<TaskInstance: timeout_testing.sleep_well 2021-05-23 13:46:13.840235+00:00 [running]>
```
**What you expected to happen**:
These tasks are alive and well, and shouldn't be killed :)
Looks like something in `reset_state_for_orphaned_tasks` is wrongly marking running tasks as abandoned...
**How to reproduce it**:
```
import os
import time
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

dag = DAG(os.path.basename(__file__).replace('.py', ''),
          start_date=datetime(2021, 5, 11),
          schedule_interval=timedelta(days=1))

def sleep_tester(time_out, retries):
    for i in range(retries):
        print(f'hi there, try {i}, going to sleep for {time_out}')
        time.sleep(time_out)
    print("Aaah, good times, see ya soon")

sleeping = PythonOperator(task_id="sleep_well",
                          python_callable=sleep_tester,
                          op_kwargs={'time_out': 15, 'retries': 50},
                          dag=dag)
```
Create DAG with task above, verify it paused, trigger dag run manually from UI, then trigger the task manually. The task should fail after several tries.
**Anything else we need to know**:
It might only happen if the DAG was never unpaused ("ON"), though I couldn't verify that.
Answers:
username_1: We were experiencing a similar issue, but it turned out to be due to the SchedulerJob being marked as failed due to a slow heartbeat (due to adding a few thousand DAGs).
Can you look for this line in your scheduler logs?
`Marked %d SchedulerJob instances as failed`
In our case, the solution was to increase the value of `scheduler.scheduler_health_check_threshold` in the config, which then prevented the schedulerJob from being killed.
Here's the part in the source code that deals with resetting tasks with missing SchedulerJobs:
https://github.com/apache/airflow/blob/9c94b72d440b18a9e42123d20d48b951712038f9/airflow/jobs/scheduler_job.py#L1803
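For reference, a sketch of raising that threshold without editing airflow.cfg directly, using Airflow's standard `AIRFLOW__{SECTION}__{KEY}` environment-variable override (the value below is illustrative and in seconds):
```python
import os

# Equivalent to setting scheduler_health_check_threshold under [scheduler]
# in airflow.cfg; must be set in the scheduler's environment before it starts.
os.environ["AIRFLOW__SCHEDULER__SCHEDULER_HEALTH_CHECK_THRESHOLD"] = "120"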
username_2: Just to add that I'm also seeing this behaviour on 2.1.0, running on bare metal. Have a paused DAG and I am trying to run a task within the DAG manually, but upon the next check for orphaned tasks by the scheduler (on a 5min interval), the task is reset and receives a `SIGTERM`.
No sign of `Marked %d SchedulerJob instances as failed` entries in logs. Given the original issue shows MWE, I don't see how it can be related to issues caused by a complex environment.
username_3: @username_4 Would this be fixed by one of your PRs? (They haven't made it to a release yet.)
username_4: Unfortunately not. My latest bugfix PR was to prevent scheduled tasks from being picked up and to make sure cleared tasks wouldn't be seen as orphans.
username_4: Is it possible you are restarting your webserver frequently?
I think the problem is that it shouldn't go through this filter, but it somehow is.
https://github.com/apache/airflow/blob/db6acd9e8a91e0eca9e12cace72edc57b2667d25/airflow/jobs/scheduler_job.py#L1185
Makes me wonder, what do you have for your [worker_refresh_interval](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#worker-refresh-interval) in the airflow.cfg?
username_4: @username_3 I don't mind creating a PR but do you have an idea on how we can fix this?
I am having a bit of trouble figuring out how we can get the `external_executor_id` in the current flow without delaying the response of the request noticeably.
username_0: @username_4 Hi, it might be helpful to know that I couldn't reproduce the issue on a DAG that was previously active, but it reproduced like a charm on a brand new DAG with the same setup. Both were paused at the time of running the tasks.
I can't find anything related to either `Setting external_id for` or `SchedulerJob instances as failed` in the logs for either instance (on DEBUG)
Here's a `grep -A 5 -B 5` from logs related to the task name (there wasn't related logs on webserver...):
```
Scheduler:
[2021-07-11 08:28:08,961] {dag_processing.py:385} DEBUG - Received message of type DagParsingStat
[2021-07-11 08:28:08,972] {scheduler_job.py:1854} INFO - Resetting orphaned tasks for active dag runs
[2021-07-11 08:28:08,972] {scheduler_job.py:1862} DEBUG - Running SchedulerJob.adopt_or_reset_orphaned_tasks with retries. Try 1 of 3
[2021-07-11 08:28:08,972] {scheduler_job.py:1864} DEBUG - Calling SchedulerJob.adopt_or_reset_orphaned_tasks method
[2021-07-11 08:28:09,003] {scheduler_job.py:1921} INFO - Reset the following 1 orphaned TaskInstances:
<TaskInstance: timeout_testing.sleep_operator_task 2021-07-11 08:02:31.293769+00:00 [running]>
[2021-07-11 08:28:09,020] {scheduler_job.py:1399} DEBUG - Next timed event is in 0.645959
[2021-07-11 08:28:09,020] {scheduler_job.py:1401} DEBUG - Ran scheduling loop in 0.13 seconds
[2021-07-11 08:28:09,205] {settings.py:292} DEBUG - Disposing DB connection pool (PID 1178)
[2021-07-11 08:28:09,223] {scheduler_job.py:310} DEBUG - Waiting for <ForkProcess(DagFileProcessor921-Process, stopped)>
[2021-07-11 08:28:09,469] {settings.py:292} DEBUG - Disposing DB connection pool (PID 1182)
Worker:
[2021-07-11 08:23:47,559: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:23:52,559: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:23:57,559: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:24:02,558: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:24:07,132: INFO/MainProcess] Received task: airflow.executors.celery_executor.execute_command[ad28b4dc-6a78-4821-9a02-998aff8156b2]
[2021-07-11 08:24:07,133: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x7feff474d3b0> (args:('airflow.executors.celery_executor.execute_command', 'ad28b4dc-6a78-4821-9a02-998aff8156b2', {'lang': 'py', 'task': 'airflow.executors.celery_executor.execute_command', 'id': 'ad28b4dc-6a78-4821-9a02-998aff8156b2', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'group_index': None, 'retries': 0, 'timelimit': [None, None], 'root_id': 'ad28b4dc-6a78-4821-9a02-998aff8156b2', 'parent_id': None, 'argsrepr': "[['airflow', 'tasks', 'run', 'timeout_testing', 'sleep_operator_task', '2021-07-11T08:02:31.293769+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/timeout_testing.py']]", 'kwargsrepr': '{}', 'origin': 'gen268@airflow-dev-web-6d79645c68-tzbnv', 'reply_to': 'a3aa91fd-479b-3d76-893e-7a8e8d23c454', 'correlation_id': 'ad28b4dc-6a78-4821-9a02-998aff8156b2', 'hostname': 'celery@airflow-dev-worker-0', 'delivery_info': {'exchange': '', 'routing_key': 'default', 'priority': 0, 'redelivered': None}, 'args': [['airflow', 'tasks', 'run', 'timeout_testing', 'sleep_operator_task',... kwargs:{})
[2021-07-11 08:24:07,134: DEBUG/MainProcess] Task accepted: airflow.executors.celery_executor.execute_command[ad28b4dc-6a78-4821-9a02-998aff8156b2] pid:45
--
[2021-07-11 08:24:07,134: DEBUG/MainProcess] Task accepted: airflow.executors.celery_executor.execute_command[ad28b4dc-6a78-4821-9a02-998aff8156b2] pid:45
[2021-07-11 08:24:07,186: INFO/ForkPoolWorker-15] Executing command in Celery: ['airflow', 'tasks', 'run', 'timeout_testing', 'sleep_operator_task', '2021-07-11T08:02:31.293769+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/timeout_testing.py']
[2021-07-11 08:24:07,329: DEBUG/ForkPoolWorker-15] Calling callbacks: [<function default_action_log at 0x7ff0008deb90>]
[2021-07-11 08:24:07,350: DEBUG/ForkPoolWorker-15] Setting up DB connection pool (PID 47)
[2021-07-11 08:24:07,351: DEBUG/ForkPoolWorker-15] settings.prepare_engine_args(): Using NullPool
[2021-07-11 08:24:07,353: INFO/ForkPoolWorker-15] Filling up the DagBag from ...dags_dev/timeout_testing.py
[2021-07-11 08:24:07,354: DEBUG/ForkPoolWorker-15] Importing ...dags_dev/timeout_testing.py
--
--
[2021-07-11 08:24:07,357: DEBUG/ForkPoolWorker-15] Loaded DAG <DAG: timeout_testing>
[2021-07-11 08:24:07,399: DEBUG/ForkPoolWorker-15] Loading plugins
[2021-07-11 08:24:07,399: DEBUG/ForkPoolWorker-15] Loading plugins from directory: /opt/airflow/plugins
[2021-07-11 08:24:07,399: DEBUG/ForkPoolWorker-15] Loading plugins from entrypoints
[2021-07-11 08:24:07,473: DEBUG/ForkPoolWorker-15] Integrate DAG plugins
[2021-07-11 08:24:07,501: WARNING/ForkPoolWorker-15] Running <TaskInstance: timeout_testing.sleep_operator_task 2021-07-11T08:02:31.293769+00:00 [None]> on host airflow-dev-worker-0.airflow-dev-worker.airflow-dev.svc.cluster.local
[2021-07-11 08:24:07,559: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:24:12,562: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:24:17,559: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:24:22,563: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
[2021-07-11 08:24:27,559: DEBUG/MainProcess] pidbox received method enable_events() [reply_to:None ticket:None]
--
```
I cannot share the full logs as I'm testing in real environment and there are many S3 requests that need hashing, but I saved the logs and happy to provide any part of them as needed. Thank you!
username_4: Yes this sort of goes hand in hand with what I expected. Once the scheduler kicked off tasks and you re-run them from the web interface, the tasks will keep their external_executor_id that you can't clear with Airflow 2.1.1.
Thanks for providing the logs and looking into it!
username_4: We are actually running into the same issue here as well.
Sometimes our tasks aren't running long enough before they fail and don't get an external_executor_id. Then once we restart them in the webserver (when they don't have an `external_executor_id`), they still get killed.
username_5: I ran the above provided DAG in LocalExecutor. Before I ran it, I did `airflow db reset`.
The first 16 tasks were successful but the next 17 tasks failed.
In scheduler log, I found the following:
```
[2021-08-20 21:31:41,056] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-04 00:00:00+00:00: scheduled__2021-06-04T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:41,144] {scheduler_job.py:1143} INFO - Resetting orphaned tasks for active dag runs
[2021-08-20 21:31:41,152] {scheduler_job.py:1165} INFO - Marked 17 SchedulerJob instances as failed
[2021-08-20 21:31:41,177] {scheduler_job.py:1209} INFO - Reset the following 10 orphaned TaskInstances:
<TaskInstance: long_running_pyoperator.sleep_well 2021-05-27 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-05-28 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-02 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-03 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-06 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-07 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-08 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-09 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-10 00:00:00+00:00 [running]>
<TaskInstance: long_running_pyoperator.sleep_well 2021-06-11 00:00:00+00:00 [running]>
[2021-08-20 21:31:42,225] {dag.py:2691} INFO - Setting next_dagrun for long_running_pyoperator to 2021-06-13T00:00:00+00:00
[2021-08-20 21:31:42,307] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-06 00:00:00+00:00: scheduled__2021-06-06T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,324] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-07 00:00:00+00:00: scheduled__2021-06-07T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,333] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-08 00:00:00+00:00: scheduled__2021-06-08T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,352] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-09 00:00:00+00:00: scheduled__2021-06-09T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,374] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-10 00:00:00+00:00: scheduled__2021-06-10T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,383] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-11 00:00:00+00:00: scheduled__2021-06-11T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,400] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-05-27 00:00:00+00:00: scheduled__2021-05-27T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,422] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-05-28 00:00:00+00:00: scheduled__2021-05-28T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,429] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-05-29 00:00:00+00:00: scheduled__2021-05-29T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,443] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-05-30 00:00:00+00:00: scheduled__2021-05-30T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,463] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-05-31 00:00:00+00:00: scheduled__2021-05-31T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,484] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-01 00:00:00+00:00: scheduled__2021-06-01T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,503] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-02 00:00:00+00:00: scheduled__2021-06-02T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,511] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-03 00:00:00+00:00: scheduled__2021-06-03T00:00:00+00:00, externally triggered: False> failed
[2021-08-20 21:31:42,519] {dagrun.py:431} ERROR - Marking run <DagRun long_running_pyoperator @ 2021-06-05 00:00:00+00:00: scheduled__2021-06-05T00:00:00+00:00, externally triggered: False> failed
```
In the scheduler tasks log for dag processing, I found the following:
```log
[2021-08-20 21:31:41,007] {processor.py:618} INFO - DAG(s) dict_keys(['long_running_pyoperator']) retrieved from /files/dags/long_running_pyoperator.py
[2021-08-20 21:31:41,051] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-04 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,082] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-05-29 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,097] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-05 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,116] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-05-30 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,139] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-01 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,164] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-05-31 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,189] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-05-28 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,201] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-03 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,212] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-05-27 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,224] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-02 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,236] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-09 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,246] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-11 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,255] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-07 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,265] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-08 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,276] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-10 00:00:00+00:00 [failed]> in state failed
[2021-08-20 21:31:41,285] {processor.py:575} INFO - Executed failure callback for <TaskInstance: long_running_pyoperator.sleep_well 2021-06-06 00:00:00+00:00 [failed]> in state failed
```
That could be a result of the SchedulerJob being marked as failed.
Then the task log:
```log
[2021-08-20, 21:31:46 UTC] {local_task_job.py:209} WARNING - State of this instance has been externally set to failed. Terminating instance.
[2021-08-20, 21:31:46 UTC] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 5738
[2021-08-20, 21:31:46 UTC] {taskinstance.py:1369} ERROR - Received SIGTERM. Terminating subprocesses.
[2021-08-20, 21:31:46 UTC] {process_utils.py:66} INFO - Process psutil.Process(pid=5738, status='terminated', exitcode=0, started='21:24:33') (5738) terminated with exit code 0
```
Status: Issue closed
username_5: Fixed in #19375, @username_0 can you test in 2.2.2 and reopen if it still happens
username_0: Yep, not reproducing on 2.2.2, thank you!! |
popolo-project/popolo-spec | 39219132 | Title: Area "bounding box" or "extent" property
Question:
username_0: It therefore is not obvious how it can be used. I prefer WKT (which already is used by OpenGovLD) to GML.
Answers:
username_1: Noting that OCDS or its draft extractives and land extension has some relevant content for this. |
google/gapid | 302871617 | Title: The gapis server has exited with an error code of: 2
Question:
username_0: GAPID Version: 1.1.0:76966beae5023f52e2f0dd3878c34a24d0428054
OS: linux amd64
Set my Vulkan app to trace 10 frames. I think that worked, then it killed my app. I hit Stop Tracing and then got this error:
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x68 pc=0x177352b]
goroutine 11 [running]:
github.com/google/gapid/core/app/crash.Crash(0x225fce0, 0x3eda4b0)
core/app/crash/crash.go:89 +0xb4
github.com/google/gapid/core/app/crash.handler()
core/app/crash/crash.go:56 +0x52
panic(0x225fce0, 0x3eda4b0)
GOROOT/src/runtime/panic.go:491 +0x283
github.com/google/gapid/gapis/api/vulkan.(*ImageObject).IsResource(...)
gapis/api/vulkan/resources.go:40
github.com/google/gapid/gapis/api/vulkan.(*ImageObject).OnAccess(0x0, 0xc4202b41e0, 0x558b00000000)
bazel-out/k8-opt/genfiles/gapis/api/vulkan/api.go:114011 +0x2b
github.com/google/gapid/gapis/api/vulkan.(*VkDestroyDevice).mutate(0xc4204b9200, 0x3f329e0, 0xc42067d140, 0x7fffffffffffffff, 0xc4202b41e0, 0xc422f60f00, 0x24f3da0, 0x0)
bazel-out/k8-opt/genfiles/gapis/api/vulkan/mutate.go:5391 +0x1c7
github.com/google/gapid/gapis/api/vulkan.(*VkDestroyDevice).Mutate(0xc4204b9200, 0x3f329e0, 0xc42067d140, 0x7fffffffffffffff, 0xc4202b41e0, 0xc422f60f00, 0x524538, 0x40)
gapis/api/vulkan/custom_replay.go:333 +0x7f
github.com/google/gapid/gapis/replay.(*adapter).MutateAndWrite(0xc422c75090, 0x3f329e0, 0xc42067d140, 0x7fffffffffffffff, 0x3f69520, 0xc4204b9200)
gapis/replay/batch.go:213 +0xcf
github.com/google/gapid/gapis/api/vulkan.(*destroyResourcesAtEOS).Flush(0x48c40d0, 0x3f329e0, 0xc42067d140, 0x3f01460, 0xc422c75090)
gapis/api/vulkan/replay.go:443 +0x1948
github.com/google/gapid/gapis/api/transform.Transforms.Transform(0xc4210dc880, 0x6, 0x8, 0x3f329e0, 0xc42067d140, 0x48c40d0, 0x0, 0x0, 0x3f01460, 0xc422c75090)
gapis/api/transform/transforms.go:44 +0x376
github.com/google/gapid/gapis/api/vulkan.API.Replay(0x3f329e0, 0xc42067d140, 0xc4225fe258, 0xc4225fe250, 0x24f5a80, 0xc42034c390, 0xc42018e0f0, 0xa, 0xa, 0x3fc39c0, ...)
gapis/api/vulkan/replay.go:638 +0xef6
github.com/google/gapid/gapis/api/vulkan.(*API).Replay(0x48c40d0, 0x3f329e0, 0xc42067d140, 0xc4225fe258, 0xc4225fe250, 0x24f5a80, 0xc42034c390, 0xc42018e0f0, 0xa, 0xa, ...)
<autogenerated>:1 +0xef
github.com/google/gapid/gapis/replay.(*Manager).execute.func1()
gapis/replay/batch.go:149 +0x18c
github.com/google/gapid/core/app/benchmark.(*DurationCounter).Time(0xc4201b9820, 0xc42036c250)
core/app/benchmark/counter.go:163 +0x7d
github.com/google/gapid/gapis/replay.(*Manager).execute(0xc420286b80, 0x3f329e0, 0xc42067d140, 0x3f05a60, 0xc420047e60, 0x9dbd4029ba59403c, 0x2dbc2b775850d44d, 0x4de19af19bf321b4, 0x7084a179d7b0bacf, 0xe30f8cf18b052b1, ...)
gapis/replay/batch.go:148 +0x9fb
github.com/google/gapid/gapis/replay.(*Manager).batch.func1(0x3f05a60, 0xc420047e60, 0xc42036c5e0, 0xd7b0bacf4de19af1, 0x18b052b17084a179, 0xba59403c0e30f8cf, 0x5850d44d9dbd4029, 0x9bf321b42dbc2b77, 0x24f5a80, 0xc42034c390, ...)
gapis/replay/batch.go:83 +0x487
github.com/google/gapid/gapis/replay.(*Manager).batch(0xc420286b80, 0x3f329e0, 0xc42067ced0, 0xc422f60dc0, 0xa, 0xa, 0x245f540, 0x29122e0, 0x249d700, 0xc4231af770, ...)
gapis/replay/batch.go:84 +0x33a
github.com/google/gapid/gapis/replay.(*Manager).(github.com/google/gapid/gapis/replay.batch)-fm(0x3f329e0, 0xc42037c120, 0xc422f60dc0, 0xa, 0xa, 0x245f540, 0x29122e0, 0x249d700, 0xc4231af770, 0x1)
gapis/replay/manager.go:126 +0x93
github.com/google/gapid/gapis/replay/scheduler.(*bin).exec(0xc420051180, 0x3f329e0, 0xc42037c120, 0xc420047e90)
gapis/replay/scheduler/scheduler.go:243 +0x2e6
github.com/google/gapid/gapis/replay/scheduler.(*Scheduler).run(0xc420286ec0, 0x3f329e0, 0xc42037c120)
gapis/replay/scheduler/scheduler.go:176 +0x36c
github.com/google/gapid/gapis/replay/scheduler.New.func1()
gapis/replay/scheduler/scheduler.go:73 +0x3c
github.com/google/gapid/core/app/crash.Go.func1(0xc420286ee0)
core/app/crash/crash.go:65 +0x43
created by github.com/google/gapid/core/app/crash.Go
core/app/crash/crash.go:63 +0x3f
Answers:
username_1: Fixed in #1727
Status: Issue closed
|
orangeui/orange | 498940256 | Title: Card with image on top
Question:
username_0: Add an option to place an image before or after the header, based on where it sits in the HTML structure. This would get us one step closer to using cards for landing pages, blogs, etc., without any need for extra customization of the HTML and CSS.
Status: Issue closed |
yt-dlp/yt-dlp | 1077902415 | Title: DeprecationWarning with Pornhub
Question:
username_0: ### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2021.12.01**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Global
### Description
Downloading videos from the Pornhub website triggers a DeprecationWarning about the deprecated `format_id` sort alias that does not appear with YouTube or other platforms.
This happens even if `format_id` is not used in the command.
Example below with a random video (can be any one on the platform)
### Verbose log
```shell
[debug] Command-line config: ['-Uv', 'http://www.pornhub.com/view_video.php?viewkey=<KEY>']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8
[debug] yt-dlp version 2021.12.01 [91f071af6] (zip)
[debug] Python version 3.8.10 (CPython 64bit) - Linux-5.10.60.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4
[debug] Optional libraries: keyring, sqlite
[debug] Proxy map: {}
Latest version: 2021.12.01, Current version: 2021.12.01
yt-dlp is up to date (2021.12.01)
[debug] [PornHub] Extracting URL: http://www.pornhub.com/view_video.php?viewkey=<KEY>
[PornHub] ph61b5c0413feee: Downloading pc webpage
[PornHub] ph61b5c0413feee: Downloading m3u8 information
[PornHub] ph61b5c0413feee: Downloading m3u8 information
[PornHub] ph61b5c0413feee: Downloading m3u8 information
[PornHub] ph61b5c0413feee: Downloading m3u8 information
[PornHub] ph61b5c0413feee: Downloading JSON metadata
DeprecationWarning: Format sorting alias format_id is deprecated and may be removed in a future version. Please use id instead
[debug] Sort order given by extractor: height, width, fps, format_id
[debug] Formats sorted by: hasvid, ie_pref, height, width, fps, id, lang, quality, res, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source
[debug] Default format spec: bestvideo*+bestaudio/best
[info] ph61b5c0413feee: Downloading 1 format(s): hls-2827
[debug] Invoking downloader on "https://ev-h.phncdn.com/hls/videos/202112/12/399529021/,1080P_4000K,720P_4000K,480P_2000K,240P_1000K,_399529021.mp4.urlset/index-f1-v1-a1.m3u8?validfrom=1639337720&validto=1639344920&ipa=192.168.127.12&hdl=-1&hash=XxE1diseKsgKdxnTR%2FGcAQdPTdU%3D"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 145
[... (download happens)]
```<issue_closed>
Status: Issue closed |
tarkal/highchart-lambda-export-server | 1172643007 | Title: Not running on Nodejs 12 or 14 (Error: write EPIPE)
Question:
username_0: Node.js 10 is not supported anymore on AWS Lambda. I used the following Docker image to build the zip file and deployed it to AWS Lambda for both Node.js 12 and 14.
```Dockerfile
FROM amazon/aws-lambda-nodejs:12
ARG HIGHCHARTS_VERSION=latest
ENV ENVIRONMENT=dev
ENV ACCEPT_HIGHCHARTS_LICENSE="YES"
ENV PHANTOMJS_PLATFORM="linux"
ENV PHANTOMJS_ARCH="x64"
# Install yarn and serverless
RUN npm install yarn -g
RUN yarn global add [email protected]
# Install required dependencies - without 'fontconfig' there will be errors in highcharts server
RUN yum install -y tar fontconfig yum-utils rpmdevtools libuuid-devel
WORKDIR /tmp
RUN yumdownloader fontconfig.x86_64 freetype.x86_64 expat.x86_64
RUN rpmdev-extract *.rpm
# Create the project and folders called lib and fonts inside it
RUN mkdir -p /highchart_export_server/lib
RUN mkdir -p /highchart_export_server/fonts
# Copy the installed dependencies to the lib folder
RUN cp /tmp/*/usr/lib64/* /highchart_export_server/lib
# Download the ttf fonts and unzip the fonts into the fonts dir
# Original source: https://github.com/tarkal/highchart-lambda-export-server/raw/master/resources/fonts.zip
ADD container_files/fonts /highchart_export_server/fonts
# Download the updated fonts.conf file and place it in the libs
# This seems to have no effect
# Original source: https://raw.githubusercontent.com/tarkal/highchart-lambda-export-server/master/src/lib/fonts.conf
ADD container_files/fonts.conf /highchart_export_server/lib
# Init the project and install highcharts-export-server
WORKDIR /highchart_export_server
RUN npm install highcharts-export-server
# Download the basic index.js
# Original source: https://raw.githubusercontent.com/tarkal/highchart-lambda-export-server/master/src/index.js
ADD container_files/index.js /highchart_export_server/index.js
EXPOSE 8080
ENTRYPOINT ["/app/docker-entrypoint.sh"]
# Local development
CMD ["node_modules/.bin/highcharts-export-server", "--enableServer", "1", "--port", "8080"]
```
The following error occurs:
```
2022-03-17T17:19:02.619Z 8b73cea3-4b09-41bb-9451-c1351dfacaed ERROR Uncaught Exception
errorType: Error
errorMessage: write EPIPE
code: EPIPE
errno: -32
syscall: write
stack:
  Error: write EPIPE
    at afterWriteDispatched (internal/stream_base_commons.js:156:25)
    at writeGeneric (internal/stream_base_commons.js:147:3)
    at Socket._writeGeneric (net.js:798:11)
    at Socket._write (net.js:810:8)
    at writeOrBuffer (internal/streams/writable.js:358:12)
    at Socket.Writable.write (internal/streams/writable.js:303:10)
    at Object.worker.work (/var/task/node_modules/highcharts-export-server/lib/phantompool.js:264:34)
    at Object.postWork (/var/task/node_modules/highcharts-export-server/lib/phantompool.js:360:13)
    at exec (/var/task/node_modules/highcharts-export-server/lib/chart.js:176:11)
    at doExport (/var/task/node_modules/highcharts-export-server/lib/chart.js:228:5)
```
Anyone got it working on AWS Lambda Node 12 or 14? |
ant-design/ant-design | 135289112 | Title: Carousel: after auto-switching for a long time on the current page, the carousel goes blank and only reappears after the page is refreshed.
Question:
username_0: **Problem description**
As described in the title.
**Environment**
- antd version: 0.12.1
- OS and version:
- Browser and version:
- Online demo URL:
**Steps to reproduce**
1. ...
2. ...
**Online reproduction demo**
http://codepen.io/anon/pen/pgdXYp?editors=001
Answers:
username_1: Try: 0.12.3
And see: https://github.com/ant-design/ant-design/issues/1009
Status: Issue closed
|
STEllAR-GROUP/hpxMP | 359075194 | Title: HPXMP hangs when creating more than one task under a parent task
Question:
username_0: This example hangs when `OMP_NUM_THREADS=1`:
```c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
#pragma omp parallel
{
#pragma omp task
{
#pragma omp task
{
printf("this is task 1\n");
}
#pragma omp task
{
printf("this is task 2\n");
}
}
}
return 0;
}
```<issue_closed>
Status: Issue closed |
custom-cards/button-card | 1173821319 | Title: Cut GithubRelease of 3.4.2
Question:
username_0: **Is your feature request related to a problem? Please describe.**
I believe HACS uses GitHub releases to determine the latest version available. Currently there is a github tag for 3.4.2 but no release. We should cut a new release so HACS users can have the latest version.
**Describe the solution you'd like**
A github release for 3.4.2
**Describe alternatives you've considered**
I currently pull in master, but would like to avoid that if possible.
**Additional context**
N/A
Answers:
username_1: There's no 3.4.2, but there's a beta release. If you enable beta for button-card in HACS you'll see it. Master is the same as 3.4.1
username_0: #429 indicates that the issue is fixed in version 3.4.2 (I linked the tag below).
https://github.com/custom-cards/button-card/releases/tag/v3.4.2
username_2: The beta releases for this card don't seem to work anymore in HACS; the latest you can select is 3.4.1. This has been the case for a few weeks now (I did mention it on the HA forums, but I guess mentioning it here might be better :P )
Yes I can still download it manually and install it, but there is absolutely no way to get past 3.4.1 via HACS unless you already had it. (I have done quite a few clean HA installs over the past few weeks). Selecting `Show beta versions` will also show 3.4.1 as the latest.
You might want to take notice of it @username_1
username_1: Nothing I can do about that if hacs doesn't show the versions properly. It has worked well so far and there hasn't been any change on this repo for 1y :shrug:
I'd open a bug on HACS' side.
username_1: Actually, it's github which has changed something in how they order the releases... I'll have a look.
username_1: It seems fixed now. Please close the issue if it's the case. |
bowdenk7/lab1-spa | 370847419 | Title: images
Question:
username_0: <img width="718" alt="image" src="https://user-images.githubusercontent.com/820883/46748497-90a1fb80-cc68-11e8-9de3-f60029f7866a.png">
<img width="391" alt="image" src="https://user-images.githubusercontent.com/820883/46748625-d3fc6a00-cc68-11e8-98c7-b8febbff43e8.png">
<img width="605" alt="image" src="https://user-images.githubusercontent.com/820883/46748737-158d1500-cc69-11e8-84fc-1ad22fdeaeac.png">
<img width="954" alt="image" src="https://user-images.githubusercontent.com/820883/46750203-4d498c00-cc6c-11e8-9ae2-efa9cb371a44.png">
<img width="463" alt="image" src="https://user-images.githubusercontent.com/820883/46750293-7a963a00-cc6c-11e8-9181-1698e114e7c7.png">
<img width="1186" alt="image" src="https://user-images.githubusercontent.com/820883/46752696-bcc27a00-cc72-11e8-939b-b264cb0f1846.png">
<img width="393" alt="image" src="https://user-images.githubusercontent.com/820883/46754332-0ca34000-cc77-11e8-9d26-c6a84e17c4cc.png">
<img width="465" alt="image" src="https://user-images.githubusercontent.com/820883/46754588-ad91fb00-cc77-11e8-9ec0-6a145b17256a.png">
<img width="556" alt="image" src="https://user-images.githubusercontent.com/820883/46754668-da461280-cc77-11e8-9e01-0c16da5b0e0f.png">
<img width="717" alt="image" src="https://user-images.githubusercontent.com/820883/46756803-247dc280-cc7d-11e8-9abf-29b35ce1a3f8.png">
<img width="390" alt="image" src="https://user-images.githubusercontent.com/820883/46758888-d66bbd80-cc82-11e8-88d4-64931452f486.png"> |
SpiderLabs/owasp-modsecurity-crs | 160507901 | Title: 933130 Stricter Siblings
Question:
username_0: The use of the ARGS_NAMES value within 933130 causes extensive issues with WordPress due to its parameter names. The suggestion is to make a version with ARGS_NAMES at paranoia level 2.
Is the use of such words common within attacks using register_globals? @username_1

Answers:
username_1: @username_0 Good find. To be honest there's a lot in the `php-variables.data` file that I don't overly care about. About the top half of the data file is interesting from an attacker standpoint (though easy to evade). Starting at `AUTH_TYPE` the data file gets kinda meh. The `PHP_` strings would hopefully not turn up so often in FP though, so they could be nice to keep for now.
We could split off the losers to a higher paranoia level if you like to keep them. Maybe PL3 since they kinda attract FP and are not overly important in my opinion? Just by educated guess I think they'll be a lot worse than many other PL2 rules. But if you guys think it's worth it for PL2, that's fine with me.
To recap I'd propose moving these terms to PL3.
```
AUTH_TYPE
HTTP_ACCEPT
HTTP_ACCEPT_CHARSET
HTTP_ACCEPT_ENCODING
HTTP_ACCEPT_LANGUAGE
HTTP_CONNECTION
HTTP_HOST
HTTP_KEEP_ALIVE
HTTP_REFERER
HTTP_USER_AGENT
HTTP_X_FORWARDED_FOR
ORIG_PATH_INFO
PATH_INFO
PATH_TRANSLATED
QUERY_STRING
REQUEST_URI
```
username_0: I'm inclined to agree with you but perhaps these make more sense while trying to detect code. As you said the FP rate should be low (within parameters). I think it is just the locations that will lead to increased false positives.
username_1: @username_0 That's true, but stuff also ends up in `ARGS_NAMES` when it's just passed as a query string (e.g. `/foo?PAYLOAD_HERE`), which is effective in a lot of vulnerabilities, so I think checking that variable is pretty much a must for most rules. There's a lot of options, but I would rather start with moving around the less important terms and see how we fare then.
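For context, a simplified sketch of the rule shape under discussion (abbreviated, not the verbatim CRS rule text):
```
SecRule ARGS_NAMES|ARGS "@pmFromFile php-variables.data" \
    "id:933130,\
    phase:2,\
    block,\
    t:none,t:urlDecodeUni,\
    msg:'PHP Injection Attack: Variables Found'"
```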
username_0: That sounds pretty reasonable - I can make a PR with the modified files ... is that ok with you?
username_1: @username_0 Sure!
username_2: ACK
Status: Issue closed
username_1: Closing this, PR is in #473 |
denis-sokolov/chrome-custom-css | 372223861 | Title: 1000 character limit per domain
Question:
username_0: Because of the simple way we store CSS in Chrome synced storage, we only get 1000 characters per domain. We could solve this with some effort, e.g. by chunking the CSS across multiple storage keys (see the sketch below).
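A hedged sketch of one possible fix, chunking each domain's CSS across several sync-storage items (the key-naming scheme here is illustrative):
```js
const CHUNK_SIZE = 1000;

// Store the CSS for a domain in fixed-size chunks plus a chunk count.
function saveCss(domain, css) {
  const count = Math.max(1, Math.ceil(css.length / CHUNK_SIZE));
  const items = { [`${domain}#count`]: count };
  for (let i = 0; i < count; i++) {
    items[`${domain}#${i}`] = css.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
  }
  chrome.storage.sync.set(items);
}

// Read the chunk count, fetch all chunks, and reassemble the CSS.
function loadCss(domain, callback) {
  chrome.storage.sync.get(`${domain}#count`, (res) => {
    const count = res[`${domain}#count`] || 0;
    const keys = Array.from({ length: count }, (_, i) => `${domain}#${i}`);
    chrome.storage.sync.get(keys, (parts) => {
      callback(keys.map((k) => parts[k] || '').join(''));
    });
  });
}
```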
[Chrome extension quota documentation](https://developer.chrome.com/apps/storage#properties) |
ministryofjustice/cloud-platform | 329904874 | Title: Add issue and pr templates to all cloud platform repos
Question:
username_0: ## Background
<!-- Describe background of the story -->
## Proposed user journey
<!-- Describe user journey and needs for better understanding of the work -->
## Approach
<!-- Describe proposed approach -->
## Questions / Assumptions
<!-- Additional information to explain approach taken -->
## Definition of done
<!-- Checklist for definition of done and acceptance criteria, for example: -->
- [ ] must compile
- [ ] must pass tests
- [ ] must address all the steps of user journey
## Reference
[How to write good user stories](https://www.gov.uk/service-manual/agile-delivery/writing-user-stories)<issue_closed>
Status: Issue closed |
betheluniversity/cascade | 168984286 | Title: Group permissions
Question:
username_0: I created a new group for the Business Office, with <NAME> as the only member. She has access to what she should, but can't create any new assets (only Messages is in her New drop down list).
Andrea and I have compared the group to others and it looks identical
The issue looks to be with the group, and not her account, because when I also added her to the ITS group, she could then create new assets like normal.
Does anyone have any ideas?
Answers:
username_1: @username_0 I believe you resolved this?
Status: Issue closed
username_0: Yep! Thanks |
cakephp/cakephp | 162776416 | Title: Problems with I18n using translation functions
Question:
username_0: ```
... 'You have traveled {0,number,decimal} kilometers in {1,number,integer} weeks',
... [5423.344, 5.1]
... )
=> "You have traveled decimal5423 kilometers in 5 weeks"
```
The problem here is the `{0,number,decimal}` style. Integer, currency and percent formats work fine.
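A working alternative sketch (assuming the standard `__()` translation helper): ICU accepts a custom DecimalFormat pattern as the number style, unlike the non-existent `decimal` style:
```php
echo __(
    'You have traveled {0,number,#,##0.###} kilometers in {1,number,integer} weeks',
    [5423.344, 5.1]
);
// => "You have traveled 5,423.344 kilometers in 5 weeks"
```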
Answers:
username_1: Looks like a documentation issue only. As per the table shown on http://icu-project.org/apiref/icu4j/com/ibm/icu/text/MessageFormat.html#patterns `string` doesn't seems to be a valid `argType` nor is `decimal` an inbuilt `argStyle`.
Status: Issue closed
username_0: Ok, i'll reopen in the issue on the doc repo. |
sudheerj/reactjs-interview-questions | 947354338 | Title: Question 33: React 16.4+ had some changes wrt getDerivedStateFromProps
Question:
username_0: `getDerivedStateFromProps` static method is now called on every re-render including any changes in the internal state.

@username_1
Status: Issue closed
Answers:
username_1: Thanks @username_0. I updated the image. |
daliansky/XiaoMi-Pro-Hackintosh | 811451886 | Title: Open core VS Clover ? Newbie is here 😎
Question:
username_0: Hello
I have the Xioami notebook pro for a 3 years for now and I just found out that I can install Mac OSX on it - what a great job guys , so first of all thanks a lot for the hard work.
I have some questions and I hope it’s ok to ask .
1. In what way should I choose ? The clover one or the Open-Core one? What is the benefit of one on the other ?
2. I’ve purchased a Mac compatible WiFi and BT card for perfection installation- this one : BCM943602CS[Card] (https://h5.m.taobao.com/awp/core/detail.htm?id=532197802538&ft=t&toSite=main) , do I need to install something special for that card? Will it impact the WiFi and BT connection on the windows (I want to make a dual boot ) and what takes me straight to my third question
3. What Operation system should I install first the windows 10 or the macOS ?
4. can I install the latest Mac OS version ?
Thanks
Answers:
username_1: Yes, covered in README.
Please browse previous issues, and post questions on the [Discussions](https://github.com/daliansky/XiaoMi-Pro-Hackintosh/discussions) page instead of the issue page.
Status: Issue closed
|
Fraunhofer-AISEC/cpg | 550627313 | Title: Missing CPP/Java Features
Question:
username_0: 1 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTAliasDeclaration`
Example: eval_expression.cpp
1 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTLinkageSpecification`
Example: tbbproxy.cpp
3 `com.github.javaparser.ast.expr.SuperExpr`
Example: ConnPoolByRoute.java
3 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTNamespaceAlias`
Example: pcl_video.cpp
5 `com.github.javaparser.ast.stmt.LocalClassDeclarationStmt`
Example: TestGroup.java
5 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTASMDeclaration`
Example: xptcstubs_amd64_openbsd.cpp
6 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTStaticAssertionDeclaration`
Example: nsDocument.cpp
7 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTUsingDirective`
Example: btSoftBodySolver_DX11SIMDAware.cpp
9 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTLambdaExpression`
Example: eval_expression.cpp
11 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTExplicitTemplateInstantiation`
Example: TypedArrayObject.cpp
13 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTTypeIdInitializerExpression`
Example: libx264.c
18 `com.github.javaparser.ast.expr.LambdaExpr`
Example: CommunityPojo.java
28 `com.github.javaparser.ast.body.AnnotationDeclaration`
Example: RatingFile.java
32 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTTemplateSpecialization`
Example: TypedArrayObject.cpp
96 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTTemplateDeclaration`
Example: TypedArrayObject.cpp
98 `org.eclipse.cdt.internal.core.dom.parser.cpp.CPPASTUsingDeclaration`
Example: TypedArrayObject.cpp
Answers:
username_0: [files.zip](https://github.com/Fraunhofer-AISEC/cpg/files/4069846/files.zip) |
graphprotocol/graph-ts | 955208868 | Title: Please update the compiler
Question:
username_0: see: https://github.com/jtenner/as-pect/issues/364
I would make a PR but I don't know how to update the compiler so if anyone knows, feel free.
Status: Issue closed
Answers:
username_1: Hi @username_0, we have been working hard on this and will soon have an alpha release with an updated AS compiler; follow https://github.com/graphprotocol/graph-ts/pull/185 for the progress.
quarkusio/quarkus | 664255437 | Title: Panache docs should have a full REST resource before the advanced query section
Question:
username_0: **Expected behavior**
We need to add an example of a REST resource before the advanced query section, as sketched below.
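Something along these lines could serve as the example (a sketch only; it assumes a `Person` entity extending `PanacheEntity`):
```java
import java.net.URI;
import java.util.List;

import javax.transaction.Transactional;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/persons")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class PersonResource {

    @GET
    public List<Person> list() {
        // Panache active-record style query
        return Person.listAll();
    }

    @GET
    @Path("/{id}")
    public Person get(@PathParam("id") Long id) {
        return Person.findById(id);
    }

    @POST
    @Transactional
    public Response create(Person person) {
        person.persist();
        return Response.created(URI.create("/persons/" + person.id)).build();
    }
}
```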
**Actual behavior**
See docs
**To Reproduce**
N/A
**Configuration**
N/A - Docs
**Screenshots**
NA
**Environment (please complete the following information):**
- Output of `uname -a` or `ver`:
Darwin Josephs-MBP.homenet.telecomitalia.it 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64
- Output of `java -version`:
```
openjdk version "11.0.2" 2019-01-15
OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
```
- GraalVM version (if different from Java):
- Quarkus version or git rev:
latest docs at
- Build tool (ie. output of `mvnw --version` or `gradlew --version`):
**Additional context**
(Add any other context about the problem here.)
Answers:
username_1: I want to work on this; I would be extremely thankful if someone could guide me on how to get started with it.
username_2: @username_1 if you're new to contributing to Quarkus, you can read the [Contributing for the first time](https://quarkus.io/blog/contributing-for-the-first-time/) blog post.
This issue is to add a REST example to the Panache guides (Hibernate, MongoDB, Hibernate Reactive and the Kotlin variants).
You can take as example other guides.
The REST example can comes from the quickstarts.
username_3: Hi @username_2 If someone is not working on this issue can you please assign this to me. Many thanks!
username_2: @username_1 you ask for advise on this one two weeks ago, do you plan to work on it ? Otherwise I'll assign it to @username_3 thats also volonteer for it.
username_1: I wanted to work on it but being a beginner I'm not able to understand a lot of things.
Status: Issue closed
|
gkralik/php7-sapnwrfc | 172545778 | Title: Extra spaces being added to all return values it seems
Question:
username_0: I'm noticing that spaces are being added to the end of nearly all values, both scalar and in table results.
Example:
Call function:
BAPI_CUSTOMER_GETDETAIL1
With import params:
CUSTOMERNO = 2000xxxxx customer number
PI_SALESORG = 33xx
Look at field 'NAME' in result table PE_COMPANYDATA
The value you'll see seems to be padded with spaces (possibly to match the datatype of the field): "ABC Company Inc. ".
Unfortunately this forces us to trim every value (both scalar and in tables) coming back, which is undesirable from a performance, coding, and bug-management perspective (sometimes we may forget to do it somewhere and the bug will slip through).
Answers:
username_1: This is the returned value from SAP (I guess). That was also the behaviour in the PHP5 extension.
If I'm not wrong, SAP always fills the fields with whitespace to the defined length of the column.
username_2: @username_1 I also think you are right.
@username_0 Think of the following scenario: you receive a GUID in an SAP result that is always 16 bytes long. After applying a recursive trim over the whole array (I always did this to remove unused spaces), sometimes the last character of a GUID is a space, end of line, or carriage return (it happens), and you mess up the data. I lost many hours trying to figure out where I was wrong, and it was the trimming.
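For reference, the kind of recursive trim helper being described looks roughly like this (illustrative only; as noted above, blind trimming can corrupt fixed-length fields such as GUIDs):
```php
function trimResult($value) {
    if (is_array($value)) {
        return array_map('trimResult', $value);
    }
    return is_string($value) ? rtrim($value) : $value;
}
```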
username_0: Yeah, these 90/10 scenarios are tough. Previously we were using the saprfc connector, which auto-trimmed, and we detected this difference when comparing old versus new results. However, since it seems that 90% of the time trimming the values is more helpful than harmful, would you be ok adding a configuration setting that would turn trimming on? Say, as part of the call function method, or maybe as a global setting you can turn on/off?
username_1: :-1: for a configuration - this could lead to accidental errors like those described by @username_2
Better an extra function that does this job.
username_3: I'll look into this after my holiday...
username_3: Back from holiday...
Initially, I planned to provide an option to automatically trim returned values. But the more I think about it, the less I think this would be good or the right way.
The developer should know what kind of values/data types to expect to be returned from SAP and therefore should decide whether to trim or not on a per item base. Letting the extension do the trimming just hides the fact that SAP has different data types, padding, etc. - and in my experience, such magic causes bugs.
Also, I would not be sure where to add an option for auto-trimming:
1. Setting a global option does not feel right, as it might not be obvious which setting is effective at the time of calling an RFC.
2. Adding a parameter to `invoke()` might work, though.
Using 2) might work, up until the time where you have RFCs that return complex and/or nested structures/tables where you would want one member to be trimmed, but not the other. So you end up doing the trimming manually anyways.
Personally, I use `trim()` manually on every item that I want to be trimmed. That way, I make the decision an explicit one and I always know the data I am handling.
Status: Issue closed
username_0: Ok, agreed. Not the end of the world. I've added code to trim all values coming back. |
lvgl/lvgl | 870833363 | Title: lv_example_get_started_2 not working
Question:
username_0: <!--
IMPORTANT
Issues that don't use this template will be ignored and closed.
-->
### Perform all steps below and tick them with [x]
- [ x] Check the related part of the [Documentation](https://docs.lvgl.io/)
- [ x] Update lvgl to the latest version
- [ x] Reproduce the issue in a [Simulator](https://docs.lvgl.io/latest/en/html/get-started/pc-simulator.html)
### Describe the bug
lv_obj_remove_style_all breaks the program
### To Reproduce
run lv_example_get_started_2 in lv_sim_eclipse_sdl
### Expected behavior
Get 2 buttons. This works if lv_obj_remove_style_all is commented
### Screenshots or video
with lv_obj_remove_style_all

without lv_obj_remove_style_all

Status: Issue closed
Answers:
username_1: Thanks, fixed.
See the added comment for the explanation of the issue.
username_0: Ok, understood. Thanks! The Python version works correctly as well now.
username_1: Great!
It's awesome that you created the Python examples! :heart:
NetApp/trident | 592738756 | Title: PVC pending on new 20.01.1 install
Question:
username_0: **Describe the bug**
After the install of Trident 20.01.1 and creating a (NAS-)Backend and a basic storage class, I tried to create a test pv/pvc to check if everything works. But the pvc is pending. Tridentctl and the pvc shows me the errors
`time="2020-04-02T15:32:19Z" level=info msg="Found PVC for requested volume pvc-63e8998f-df1f-404c-b242-fc0a7cc0675b." UID=63e8998f-df1f-404c-b242-fc0a7cc0675b name=basic namespace=default size="{{1073741824 0} {<nil>} 1Gi BinarySI}" storageClass=basic-netapp
time="2020-04-02T15:32:19Z" level=info msg="Found storage class for requested volume pvc-63e8998f-df1f-404c-b242-fc0a7cc0675b." name=basic-netapp
time="2020-04-02T15:32:19Z" level=error msg="ONTAP-NAS pool serv0914_aggr_daten02/serv0914_aggr_daten02; error creating volume prod_pvc_63e8998f_df1f_404c_b242_fc0a7cc0675b: API status: failed, Reason: Ruleset default not found. Reason: entry doesn't exist. , Code: 13001"
time="2020-04-02T15:32:19Z" level=warning msg="Failed to create the volume on this backend." backend=NetappNAS backendUUID=fecff922-117f-47ba-a9ad-7f1232e6e4d6 error="backend cannot satisfy create request for volume prod_pvc_63e8998f_df1f_404c_b242_fc0a7cc0675b: (ONTAP-NAS pool serv0914_aggr_daten02/serv0914_aggr_daten02; error creating volume prod_pvc_63e8998f_df1f_404c_b242_fc0a7cc0675b: API status: failed, Reason: Ruleset default not found. Reason: entry doesn't exist. , Code: 13001)" pool=serv0914_aggr_daten02 volume=pvc-63e8998f-df1f-404c-b242-fc0a7cc0675b
time="2020-04-02T15:32:19Z" level=error msg="GRPC error: rpc error: code = Unknown desc = encountered error(s) in creating the volume: [Failed to create volume pvc-63e8998f-df1f-404c-b242-fc0a7cc0675b on storage pool serv0914_aggr_daten02 from backend NetappNAS: backend cannot satisfy create request for volume prod_pvc_63e8998f_df1f_404c_b242_fc0a7cc0675b: (ONTAP-NAS pool serv0914_aggr_daten02/serv0914_aggr_daten02; error creating volume prod_pvc_63e8998f_df1f_404c_b242_fc0a7cc0675b: API status: failed, Reason: Ruleset default not found. Reason: entry doesn't exist. , Code: 13001)]"`
For this environment we created a new SVM with its own set of credentials.
**Environment**
- Trident version: 20.01.1
- Trident installation flags used: -n trident --use-custom-yaml
- Container runtime: Docker 1.13.1 (RHEL 7)
- Kubernetes version: 1.16.3
- Kubernetes orchestrator: Rancher v2.3.3
- Kubernetes enabled feature gates:
- OS: RHEL 7.7
- NetApp backend types: [ONTAP 9.3 P11]
- Other:
**To Reproduce**
- Install Trident 20.01.1 with custom config (internal registry)
- Create backend
```
{
"version": 1,
"storageDriverName": "ontap-nas",
"backendName": "NetappNAS",
"managementLIF": "x.x.x.x",
"dataLIF": "x.x.x.x",
"svm": "xxx",
"username": "vsadmin",
"password": "<PASSWORD>",
"storagePrefix": "prod_",
}
```
- Create storage class
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: basic-netapp
provisioner: netapp.io/trident
parameters:
backendType: "ontap-nas"
```
The provisioner gets changed to the new CSI provisioner.
- Create PVC
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: basic
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: basic-netapp
```
**Expected behavior**
PV gets provisioned immediately
**Additional context**
Maybe this could be an issue with the new SVM we created?
Answers:
username_1: The provisioner in your storage class should be `csi.trident.netapp.io`
username_0: Yes, if I apply this I get this result
```
$ k describe sc basic-netapp
Name: basic-netapp
IsDefaultClass: No
Annotations: <none>
Provisioner: csi.trident.netapp.io
Parameters: backendType=ontap-nas
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
```
In the logs I can see the provisioner gets "updated".
```
time="2020-04-02T15:25:45Z" level=info msg="Replaced storage class so it works with CSI Trident." name=basic-netapp newPr
ovisioner=csi.trident.netapp.io oldProvisioner=netapp.io/trident
```
username_1: `Ruleset default not found. Reason: entry doesn't exist.` sounds like there may not be a default export policy in your SVM?
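For anyone hitting the same error, a hedged pointer: the SVM's export policies can be listed from the ONTAP CLI (exact syntax and output vary by ONTAP version):
```
vserver export-policy show -vserver <your-svm>
```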
username_0: I'll have to check with my colleague
Status: Issue closed
username_0: Ok, that was it. The default export policy has been renamed. We reverted it, and now the pv gets created without problems. Thanks! |
quasarframework/quasar-framework.org | 318840341 | Title: Mark Old Docs as DEPRECATED
Question:
username_0: Many users have accidentally stumbled onto older versions of the docs without realizing that many of the methods described just won't work anymore. We should look into a way to remind them that they are looking at outdated reference material and guide them to the newest docs. |
wso2/product-apim | 306330376 | Title: Application subscription page is not functioning properly
Question:
username_0: **Description:**
Applications subscription page is not listing the subscribed apis.
**2.x**
<img width="1394" alt="screen shot 2018-03-19 at 9 58 23 am" src="https://user-images.githubusercontent.com/3424539/37578057-45859402-2b5c-11e8-8493-ba0b21ecf517.png"><issue_closed>
Status: Issue closed |
ProyectoIntegrador2018/patrones_hermosos_backend | 408483066 | Title: S14 Log in to the system (site admin)
Question:
username_0: # S14
Priority 9
As a site administrator, I will be able to log in to the system to access the application and the site-administrator views.
## Conversation
There must be feedback that the login was successful; otherwise, access must be denied and the user alerted.
## Acceptance criteria
- Upon entering their credentials, the system recognizes the user as a site administrator and redirects them to their home screen with the appropriate permissions.
- Upon entering incorrect credentials, an error message must be displayed.
offa/scope-guard | 278804303 | Title: Review handling of LValue references and reference wrapper
Question:
username_0: Review the handling of lvalue references and reference wrapper. This has changed and the implementation is no longer required to use a reference wrapper. Although most of the specification is already handled by the wrapper class, it still needs some review.
Answers:
username_0: **Ref.** #101
username_0: Those cases are already handled by `Wrapper`.
Status: Issue closed
|
munafio/chatify | 757143530 | Title: messages:331 Uncaught ReferenceError: $ is not defined
Question:
username_0: I followed the documentation but am getting the error `messages:331 Uncaught ReferenceError: $ is not defined`.
The Chatify page is accessible but I am not able to access any of its functionality.
I am using laravel 8.
Status: Issue closed
Answers:
username_1: Either you need to publish the assets, or jQuery, which is required for this version of Chatify, is not installed in your app.
hoffstadt/DearPyGui | 1186383569 | Title: Can textures be unloaded?
Question:
username_0: **My Improvement**
I was digging through the documentation but I could not find a clear statement: **Can textures be unloaded?** I.e. can I free their memory and remove them from a texture registry? If yes, how?
I am considering using DearPyGui for an app that would dynamically load and visualize a large quantity of images, likely exceeding available memory, so a clean way of managing memory is highly desired.
**Necessary Assets**
None.
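For reference, the call discussed in the answer below is a one-liner (a sketch; the texture tag is illustrative):
```python
import dearpygui.dearpygui as dpg

# Removes the texture from its registry; the memory is freed once the
# last item using the texture is gone.
dpg.delete_item("my_texture")
```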
Answers:
username_1: You just need to "delete_item(...)". This will remove them from the texture registry. Once the last item using the texture is deleted, it will unload the texture completely. |
compodoc/compodoc | 272854106 | Title: feature: exporting html with locale option
Question:
username_0:
##### **Overview of the issue**
exporting HTML with a locale option, so we can export a better-localized API doc for users

##### **Operating System, Node.js, npm, compodoc version(s)**
not a bug
##### **Angular configuration, a `package.json` file in the root folder**
not a bug
##### **Compodoc installed globally or locally ?**
not a bug
##### **Motivation for or Use Case**
not a bug
##### **Reproduce the error**
not a bug
##### **Related issues**
none
##### **Suggest a Fix**
add a locale option, like:
```
compodoc -p tsconfig.json --locales zh_CN -d my-doc
```
`--locales zh_CN` means our docs are written in Chinese, so compodoc should export the HTML with the Chinese locale
Status: Issue closed
|
lucidrains/stylegan2-pytorch | 650957594 | Title: Crash when resuming from previous checkpoint
Question:
username_0: Pretty easy to reproduce. Use any dataset, train for a few steps, resume from previous checkpoint.
```
Traceback (most recent call last):
File "/usr/local/bin/stylegan2_pytorch", line 90, in <module>
fire.Fire(train_from_folder)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/usr/local/bin/stylegan2_pytorch", line 87, in train_from_folder
model.print_log()
File "/usr/local/lib/python3.6/dist-packages/stylegan2_pytorch/stylegan2_pytorch.py", line 936, in print_log
print(f'G: {self.g_loss:.2f} | D: {self.d_loss:.2f} | GP: {self.last_gp_loss:.2f} | PL: {self.pl_mean:.2f} | CR: {self.last_cr_loss:.2f} | Q: {self.q_loss:.2f}')
TypeError: unsupported format string passed to NoneType.__format__
```
Answers:
username_1: Oops, fixed in the latest! thanks for reporting!
Status: Issue closed
|
PyCQA/flake8-bugbear | 545302648 | Title: B012 false positive when break is in a loop in a finally block
Question:
username_0: #99 and #100 added a new check B012 warning about `return`/`continue`/`break` inside `finally` blocks. The implementation raises a false positive when the `continue` or `break` is in a loop in the `finally` block.
Here's some [code in Werkzeug](https://github.com/pallets/werkzeug/blob/ea0d6b95fb118f33025163b7441577ee5755d971/src/werkzeug/formparser.py#L126-L142) demonstrating the issue.
```python
def exhaust_stream(f):
"""Helper decorator for methods that exhausts the stream on return."""
def wrapper(self, stream, *args, **kwargs):
try:
return f(self, stream, *args, **kwargs)
finally:
exhaust = getattr(stream, "exhaust", None)
if exhaust is not None:
exhaust()
else:
while 1:
chunk = stream.read(1024 * 64)
if not chunk:
break
return update_wrapper(wrapper, f)
```
Perhaps the code itself could be written differently regardless, I haven't gone down that path yet. But it doesn't seem like the code would cause exceptions to be silenced, which is what B012 is warning about.
Answers:
username_1: Thanks for the great report. We’ll look in to it. I’m not sure if we want to detect if it’s in a loop or not.
username_2: Ah, nice catch. I'll gladly fix it.
username_1: Will close when I push a new version to PyPI.
username_1: 20.1.1 should have this fixed. Please re-open if not.
Status: Issue closed
username_0: Seems to be working, thanks. Working so well, in fact, that I can't get the original error to trigger after downgrading flake8. I'm sure it was happening though! :man_shrugging: |
feathersjs-ecosystem/feathers-mongodb | 1140188460 | Title: upsert does not create new record
Question:
username_0: ### Steps to reproduce
1. Create a fresh service with the `mongodb` adapter.
2. Enable `multi:true`
3. Run the upsert:
```js
const data = {
  address: 'abc',
  tags: ['a', 'b']
};
const params = {
query: { address: data.address },
mongodb: { upsert: true }
};
await feathers.service('fresh-service').patch(null, data, params);
```
4. Get an empty Array in return.
5. Wonder what is going on? Maybe a miss-configured db?
### Expected behavior
`create` works with this data.
`patch` also works with the above code when the address already exists in a record in the db.
I would expect the `upsert` to work when there is no data in the db.
### Actual behavior
I just get an empty array in return, meaning nothing is patched.
### System configuration
Tell us about the applicable parts of your setup.
**Module versions** (especially the part that's not working):
"@feathers-plus/cache": "^1.4.0",
"@feathersjs/authentication": "^4.5.12",
"@feathersjs/authentication-client": "^4.5.13",
"@feathersjs/authentication-local": "^4.5.12",
"@feathersjs/authentication-oauth": "^4.5.12",
"@feathersjs/configuration": "^4.5.12",
"@feathersjs/errors": "^4.5.12",
"@feathersjs/express": "^4.5.12",
"@feathersjs/feathers": "^4.5.12",
"@feathersjs/socketio": "^4.5.13",
"@feathersjs/socketio-client": "^4.5.13",
"@feathersjs/transport-commons": "^4.5.12",
"feathers-hooks-common": "^5.0.6",
"feathers-memory": "^4.1.0",
"feathers-mongodb": "^6.4.1",
"feathers-profiler": "^0.1.5",
"feathers-reactive": "^0.8.2",
"feathers-shallow-populate": "^2.5.1",
"mongodb": "^3.7.3",
"mongodb-core": "^3.2.7"
**NodeJS version**:
v14.18.2
**Operating System**:
Linux
**Browser Version**:
Firefox
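A hedged aside that may be relevant here: over REST/socket transports only `params.query` reaches the server, so a client-set `params.mongodb` is lost in transit. One illustrative workaround (the `$upsert` flag name is invented for this sketch) is to re-attach it in a server-side hook:
```js
app.service('fresh-service').hooks({
  before: {
    patch: [
      context => {
        const { query = {} } = context.params;
        if (query.$upsert) {
          // Move the transport-safe flag into the adapter-specific param
          delete query.$upsert;
          context.params.mongodb = { upsert: true };
        }
        return context;
      }
    ]
  }
});
```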
Answers:
username_0: Upon further inspection it seems `mongodb: { upsert: true }` gets stripped from the params.
I've logged the `params` as the first hook in the `before.all`. |
TablePlus/TablePlus | 629050515 | Title: [UI improvement] Ability to navigate filter fields, menus and buttons with keyboard
Question:
username_0: 1. Which driver are you using and version of it (Ex: PostgreSQL 10.0):
MYSQL
2. Which TablePlus build number are you using (the number on the welcome screen, Ex: build 81):
3.4.0 (304)
It would be great to be able to use common keyboard shortcuts to manipulate the filter panel.
* Tab / shift+tab to select the next / previous field
* arrow up down to select in the drop down menu
* space to validate checkbox, drop down selection or press a button
* when multi criteria, tabs goes to next line also
* enter to run the whole filter thing

(Love TablePlus! thanks for the amazing product)
Answers:
username_1: There are some shortcuts listed right below the field @username_0 but unfortunately, there is no shortcut key for column and operator dropdowns. |
thunky-monk/kawhi | 222861952 | Title: Example fails with ResponseTimeout
Question:
username_0: When I run the example, this is what I see:
```
~/kawhi/example$ stack exec kawhi-example
kawhi-example: HttpExceptionRequest Request {
host = "stats.nba.com"
port = 80
secure = False
requestHeaders = []
path = "/stats/teamdashboardbygeneralsplits"
queryString = "?Conference&DateFrom&DateTo&Division&GameScope&GameSegment&LastNGames=0&LeagueID=00&Location&MeasureType=Advanced&Month=0&OpponentTeamID=0&Outcome&PaceAdjust=N&PerMode=PerGame&Period=0&PlayerExperience&PlayerPosition&PlusMinus=N&PORound=0&Rank=N&Season=2015-16&SeasonSegment&SeasonType=Regular%20Season&ShotClockRange&StarterBench&TeamID=1610612759&VsConference&VsDivision"
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
ResponseTimeout
```
Any idea what could be causing this?
Answers:
username_1: Yea, the stats.nba.com API is a bit unstable. Looks like it changed to require some headers. See https://github.com/username_1/kawhi/pull/3
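For anyone reading along, the gist of such a fix is attaching browser-like headers to the request. Roughly (a sketch using http-client's `Request` record; header values illustrative):
```haskell
{-# LANGUAGE OverloadedStrings #-}
import Network.HTTP.Client (Request, requestHeaders)

-- stats.nba.com rejects requests that don't look like a browser.
withNbaHeaders :: Request -> Request
withNbaHeaders req = req
  { requestHeaders =
      [ ("User-Agent", "Mozilla/5.0")
      , ("Referer", "http://stats.nba.com")
      ]
  }
```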
Status: Issue closed
|
DonBruce64/MinecraftTransportSimulator | 396280947 | Title: Bug: Propellers produce less thrust the more upwards they are pointed
Question:
username_0: This causes issues with tailwheel planes that are pointed substantially upwards, as they do not produce enough thrust to accelerate the plane to takeoff velocity.
Status: Issue closed
Answers:
username_0: Closing this issue because it was my fault, not the code's fault. |
dotnet/aspnetcore | 643532100 | Title: @functions block in .razor file loses all language support if preceded by an unmatched <
Question:
username_0: Steps to reproduce:
1. Open a .razor file (e.g. from a Razor Components project)
2. Add an @functions block if not already present, put something into it
3. Before the @functions block, start typing new markup
Expected:
When typing < nothing weird should happen
Actual:
Each time a < is typed without a closing >, the @functions block loses all C# stuff.
gif attached.

Answers:
username_1: This is because of Blazor's requirement to have all tags well-formed. While this is by design, maybe we can be clever about when to terminate the parse of a start tag. This might require significant changes to our parser. |
seb-jones/website | 503009043 | Title: Serve webp images with jpeg fallback
Question:
username_0: - [ ] Add jpeg-to-webp conversion to build script
- [ ] Replace img tags with `<picture>` tags
- [ ] Modify stylesheet to work with `<picture>` tag
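For reference, the fallback markup the checklist describes would look roughly like this (file names illustrative):
```html
<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="Example image">
</picture>
```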
Answers:
username_0: JPEG optimization was good enough that this is not needed
Status: Issue closed
|
erlang/otp | 1038980203 | Title: inet_tcp_dist / inet_tls_dist ignores custom EPMD name resolution mechanism
Question:
username_0: **Describe the bug**
When using a custom EPMD module with address_please/3 defined, IP resolution of a dist connection should always go through this function. However, right now inet_tcp_dist:select/1 and inet_tls_dist:select/1 can end up calling inet:gethostbyname_tm, which is not desired.
**To Reproduce**
Terminal 1:
erl -sname "a@a" -epmd_module my_epmd
Terminal 2:
erl -sname "b@a" -epmd_module my_epmd
net_kernel:connect_node('a@a').
This will return false
However, if we do this, it will succeed:
Terminal 1:
erl -sname "a@localhost" -epmd_module my_epmd
Terminal 2:
erl -sname "b@localhost" -epmd_module my_epmd
net_kernel:connect_node('a@localhost').
true
[my_epmd.zip](https://github.com/erlang/otp/files/7437488/my_epmd.zip)
**Expected behavior**
inet_tcp_dist:select/1 and inet_tls_dist:select/1 should respect EpmdModule:address_please/3 if it is defined
net_kernel:connect_node('a@a') should succeed
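For reference, a minimal sketch of the custom EPMD callback in question (return values illustrative; the attached my_epmd.zip contains the real module):
```erlang
-module(my_epmd).
-export([address_please/3]).

%% Resolve dist connections ourselves instead of falling back to DNS.
address_please(_Name, _Host, inet) ->
    {ok, {127,0,0,1}, 4370, 5}.
```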
**Affected versions**
Answers:
username_1: I did a quick attempt at a fix in #5337, can you take a look and see if it solves the problem for you?
username_0: Thanks for the fix!
I think the inet_tcp_dist/inet_tls_dist:select part looks good to me, but there is an additional one (see previous comment): inet_tls_dist:do_setup_connect uses Address (it actually is the string hostname), instead of IP in ssl:connect
Should that be done in a separate commit, or would you like to put them in one place?
username_1: I fixed the problem in tls dist and added a testcase for it. So it should work now.
username_0: Yeah I verified it works now. Thanks a lot!
Status: Issue closed
|
tstack/lnav | 982809842 | Title: ssh mode failures
Question:
username_0: **lnav version**
v0.10.0, macos 11.5.2, apple M1
**Describe the bug**
There are 2 issues with the ssh mode i run into with almost every longer use:
1. the connection is lost/terminated and the log window becomes empty and reports 0 Lines.
2. when reconnecting after the lost connection, for a couple of minutes there is no activity in the window at all when clearly the file is changing on the server.
**To Reproduce**
Steps to reproduce the behavior:
open a logfile through ssh and wait...<issue_closed>
Status: Issue closed |
akleemans/fourcolors | 149047284 | Title: Edge cases (adjacency problems)
Question:
username_0: Not working properly when it's not clear what is adjacent to each other:

[Comment on HN](https://news.ycombinator.com/item?id=11502628) |
sphinx-doc/sphinx | 157344216 | Title: test-autosummary warns "nested tables are not yet implemented."
Question:
username_0: `test-autosummary`; a testcase for `sphinx.ext.autosummary` causes warning on building LaTeX document at stable branch:
```
$ sphinx-build -b latex tests/roots/test-autosummary/ output
Running Sphinx v1.5a0
making output directory...
loading pickled environment... not yet created
[autosummary] generating autosummary for: contents.rst, sphinx.rst
[autosummary] generating autosummary for: /Users/tkomiya/work/sphinx/tests/roots/test-autosummary/dummy_module.rst, /Users/tkomiya/work/sphinx/tests/roots/test-autosummary/generated/sphinx.application.Sphinx.rst
building [mo]: targets for 0 po files that are out of date
building [latex]: all documents
updating environment: 4 added, 0 changed, 0 removed
reading sources... [100%] sphinx
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
processing Python.tex... contents dummy_module sphinx generated/sphinx.application.Sphinx
resolving references...
writing...
Markup is unsupported in LaTeX:
contents:: nested tables are not yet implemented.
```
Status: Issue closed
Answers:
username_0: It seems the errors gone away now. Closing.
```
tkomiya@dhcp67> LC_ALL=C .tox/py37/bin/sphinx-build -b latex tests/roots/test-autosummary/ output
Running Sphinx v2.3.0+/7d0aa9594
making output directory... done
[autosummary] generating autosummary for: dummy_module.rst, generated/sphinx.application.Sphinx.rst, index.rst, sphinx.rst, underscore_module_.rst
building [mo]: targets for 0 po files that are out of date
building [latex]: all documents
updating environment: [new config] 5 added, 0 changed, 0 removed
reading sources... [100%] underscore_module_
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
processing python.tex... index dummy_module underscore_module_ sphinx generated/sphinx.application.Sphinx
resolving references...
done
writing... done
copying TeX support files... copying TeX support files...
done
build succeeded.
The LaTeX files are in output.
Run 'make' in that directory to run these through (pdf)latex
(use `make latexpdf' here to do that automatically).
```
username_1: How was it fixed? I can't see anything related to this issue in recent master commits.
username_0: I don't know why. But it seems it got resolved during these 2 years.
alexcorvi/anchorme.js | 197118323 | Title: Feature: Adding target = "_blank" support and other customizations to linkified text
Question:
username_0: Currently there is no config which allows one to customise the created `a` tags. It would be great to have support for adding class and other attributes like `target`, which are supported on `a` tags.
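A hedged sketch of what such an option could look like (option names here are assumptions and may differ between anchorme versions; check the docs):
```js
const linked = anchorme(input, {
  attributes: [
    { name: "target", value: "_blank" },
    { name: "class", value: "linkified" }
  ]
});
```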
Status: Issue closed
Answers:
username_0: Realised that this already exists after surfing a bit in the doc. Closing the Issue. |
crystal-community/icr | 296094518 | Title: Icr shows errors next instruction after calling Dir.mkdir(), Dir.mkdir_p() and Dir.rmdir()
Question:
username_0: When I try to use `Dir.mkdir()` in `icr`, the next instructions will always fail with an error complaining that the directory exists. From the source code, it's just trying to run `mkdir()` on every recompilation, and that's why the error shows.
The same goes for `Dir.rmdir()` and `Dir.mkdir_p()`.
```bash
username_0@crap:~[130]$ icr
icr(0.24.1) > Dir.mkdir("bla bla")
=> 0
icr(0.24.1) > 1 + 1
Unable to create directory 'bla bla': File exists (Errno)
from Dir::mkdir<String, Int32>:Int32
from Dir::mkdir<String>:Int32
from __icr_exec__:Int32
from __crystal_main
from _crystal_main<Int32, Pointer(Pointer(UInt8))>:Nil
from Crystal::main_user_code<Int32, Pointer(Pointer(UInt8))>:Nil
from Crystal::main<Int32, Pointer(Pointer(UInt8))>:Int32
from main
```
Answers:
username_1: This is interesting. It seems this works fine for me in version 0.4.0, but I do get this error in 0.5.0. I'm not sure what changed exactly between the two, but that may be worth investigating.
username_0: Strange that it could work before, because the code re-evaluates the whole command stack whenever a new command is issued, so the second rmdir("xx") evaluation was probably always an error, because the dir was removed during the previous instruction's evaluation. https://github.com/crystal-community/icr/blob/6dabbef34e3223fe08ff337ded13e873cd369c60/src/icr/executer.cr#L20
username_2: Please see https://github.com/crystal-community/icr/pull/91#issuecomment-372580944
Status: Issue closed
|
Creators-of-Create/Create | 1118673307 | Title: Brass Funnels filtered to Lapis, Redstone, Emerald will not pick up Lapis, Redstone, and Emerald off of mechanical belt
Question:
username_0: ### Describe the Bug
A brass funnel on a mechanical belt, filtered to lapis, redstone, and/or emerald, will not pick up that item off the belt.
NOTE: Version is actually 0.4d instead of 0.4c.
### Reproduction Steps
1. Put brass funnel on mechanical belt filtered to either Lapis, redstone, or emerald
2. Put item on belt
3. Funnel will not pick up that item.
...
### Expected Result
Funnel will pick up item.
### Screenshots and Videos
https://youtu.be/7_GdVjqxeHo
### Crash Report or Log
_No response_
### Operating System
Windows 10
### Mod Version
0.4.0c
### Minecraft Version
1.18.1
### Forge Version
39.0.63
### Other Mods
No Crash
Xaero's Minimap
Xaero's World map
Compact Machines
Create
Create Crafts & Additions
Create Deco
Flywheel
Just Enough Items (JEI)
Mod Name Tooltip
Mouse Tweaks
Placebo
Toast Control
### Additional Context
NOTE: Version is actually 0.4d instead of 0.4c.
Answers:
username_1: The arrows on the side of funnels are not cosmetic, they show whether a funnel is in input or output mode.
The funnels that are not taking items in your video are in output mode, right click them with a wrench to change the mode. |
ManageIQ/manageiq-providers-kubernetes | 279734646 | Title: Pod conditions fields != Node conditions fields
Question:
username_0: We use one table and one `parse_conditions()` method for both pod & node `status.conditions`.
The fields we take match node conditions but not exactly pods. Specifically:
- nodes have `lastHeartbeatTime` which we parse & store;
- pods instead have `lastProbeTime` which we discard.
https://kubernetes.io/docs/api-reference/v1.8/#nodecondition-v1-core
https://kubernetes.io/docs/api-reference/v1.8/#podcondition-v1-core
Example from node:
```
conditions:
- lastHeartbeatTime: 2017-12-06T12:06:44Z
lastTransitionTime: 2017-12-06T10:03:09Z
message: kubelet has sufficient disk space available
reason: KubeletHasSufficientDisk
status: "False"
type: OutOfDisk
```
example from pod:
```
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2017-12-06T10:19:15Z
status: "True"
type: Initialized
```
(but according to doc, may also have `message` and `reason`)
Answers:
username_1: @username_0 will adding lastProbeTime solve our problem?
(and in the UI show what is available) where do we show/use these?
username_0: I have no idea if there are any practical consequences :-)
Just noticed it when looking at code vs API response...
Yes, adding last_probe_time is the only missing thing.
And perhaps it'd be fine to stick it into last_heartbeat_time (perhaps renamed) — it's one OR the other, and both essentially mean "when this was checked".
atla5/csci320_rc | 335952505 | Title: controller: generate initial stubs for R2 behavior
Question:
username_0: ### Background
- stubs are the interface between the front end and the back end
- they are the means by which we read/write/update/delete content in the database
- would be excellent to be able to test these directly and hook it up to the suite
### Success Criteria
method stubs have been created in `src/.../controllers/` that allow the UI to do the following
- get a list of all customers (for the dropdown)
- get a list of all stores
- get a list of all items in a given store
- create and/or validate a purchase of any number of items for any given product<issue_closed>
Status: Issue closed |