repo_name stringlengths 4-136 | issue_id stringlengths 5-10 | text stringlengths 37-4.84M |
---|---|---|
vim-jp/issues | 119821618 | Title: Reduce flicker when narrowing completion candidates
Question:
username_0: This is a port of Neovim patch https://github.com/neovim/neovim/pull/2674.
Please review it.
This patch affects the narrowing (filtering) of completion candidates.
I expect it may also eliminate the flicker in built-in completion, but I have not verified that.
https://gist.github.com/username_0/344a41fcd0bf488c2527
Steps to reproduce:
* vim -Nu test.vim
* In insert mode, type "Ju"
* Before the patch there is flicker while narrowing; after the patch the flicker is gone
test.vim :
```vim
fun! CompleteMonths(findstart, base)
  if a:findstart
    " locate the start of the word
    let line = getline('.')
    let start = col('.') - 1
    while start > 0 && line[start - 1] =~ '\a'
      let start -= 1
    endwhile
    return start
  else
    " find months matching with "a:base"
    let ret = []
    for m in split("January February March April May June July August September October November December")
      if m =~ '^' . a:base
        call add(ret, m)
      endif
    endfor
    echom "return words"
    sleep 400m " simulate searching for next match
    return {'words':ret, 'refresh':'always'}
  endif
endfun

set completefunc=CompleteMonths
set omnifunc=CompleteMonths

aug mytest
  au!
  au CursorMovedI <buffer> echom 'moved' | call feedkeys("\<C-x>\<C-u>\<C-p>", 'n')
aug end
```
Related issue:
https://github.com/vim-jp/issues/issues/597
Answers:
username_1: I tried it, but I don't really see much of a difference.
username_0: Vim seems to destroy the completion window every time it narrows the candidates.
However, Vim's own narrowing is very fast, so unless the machine is slow
or the refresh:always feature makes the narrowing take a long time, the flicker is imperceptible.
I suspect that's also why @username_1 sees little difference.
If there is no problem with the patch itself, I plan to submit it to vim_dev.
username_0: I am very sorry, but for personal reasons I am closing this issue.
Thank you for everything.
Status: Issue closed
|
ASRCsoft/lidar_noise | 539463765 | Title: Only store intermediate results in database
Question:
username_0: The full-blown lidar database is gigantic and unwieldy, and requires installing and setting up Postgres, TimescaleDB, and other extensions.
Instead we can store intermediate summary statistics by day (same as the Mesonet text files). That would require vastly less disk space, which means we can use SQLite and keep things much simpler.
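As a rough illustration of that idea, daily summaries in SQLite could look something like this (the table, columns, and statistics here are hypothetical, not the project's actual schema):
```python
import sqlite3

conn = sqlite3.connect("lidar_summaries.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS daily_summary (
           day      TEXT NOT NULL,  -- ISO date, e.g. '2019-12-18'
           variable TEXT NOT NULL,  -- hypothetical variable name
           mean REAL, minimum REAL, maximum REAL,
           PRIMARY KEY (day, variable)
       )"""
)
# One row per (day, variable); re-running a day simply replaces it.
conn.execute(
    "INSERT OR REPLACE INTO daily_summary VALUES (?, ?, ?, ?, ?)",
    ("2019-12-18", "noise", 0.42, 0.01, 1.97),
)
conn.commit()
conn.close()
```
|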
GEOLYTIX/xyz | 296445933 | Title: Reports
Question:
username_0: Update reporting to allow for a list of selected features being shown.
Answers:
username_0: We first need to decide on a format. Should be either PDF or ODF.
username_0: We now have a single report template. Once we get a second meaningful report we need to think about where to store the templates.
username_0: In addition to the location report there should be a layer and locale report.
The layer report being defined for a layer and will be called in the interface from the layer drawer.
The locale report being defined in the locale and will be called in the interface from the button column.
We need to have default reports which will work with minimum parameter (layer for layer reports; layer + table + id for location reports).
username_1: how about a draft layer/locale report?
username_0: Default report (locale)

Default format, width A4. Then add formats to set the width & height (portrait / landscape).
Layer report (same as the locale report but a layer is provided as param).

Location table (same as the locale/layer report, but a locale and an optional layer are provided as params).

username_0: The report endpoint will either load a local report template or load one from a private GitHub repository when the KEY_GITHUB entry is defined in the environment settings.
A local report template can be requested like so:
```
"infoj": [
{
"type": "report",
"class": "_report",
"report": {
"template": "/views/report.html"
}
},
```
A template hosted on github can be requested like so:
```
"infoj": [
{
"type": "report",
"class": "_report",
"report": {
"template": "https://api.github.com/repos/GEOLYTIX/xyz_resources/contents/shneame/report.html&source=GITHUB"
}
},
```
```
async function view(req, res, token = { access: 'public' }) {
  let _tmpl;

  if (req.query.source === 'GITHUB') {
    const response = await nodefetch(`${req.query.template}?access_token=${env.keys.GITHUB}`);
    const b64 = await response.json();
    const buff = Buffer.from(b64.content, 'base64');
    _tmpl = buff.toString('utf8');
  } else {
    const response = await nodefetch(env.desktop || `${env.http || 'https'}://${req.headers.host}${env.path}${req.query.template}`);
    if (response.status !== 200) return res.type('text/plain').send('Failed to retrieve report template');
    _tmpl = await response.text();
  }

  // Build the template with jsrender and send to client.
  const tmpl = jsr.templates('tmpl', _tmpl);

  res.type('text/html').send(tmpl.render({
    dir: env.path,
    token: req.query.token || token.signed || '""'
  }));
};
```
username_0: Reports CSS should be defined in the report template.
Report scripts should be requested through the cdn proxy endpoint. |
ogham/exa | 720453361 | Title: Switch from make to just breaks Homebrew formula when installing from HEAD
Question:
username_0: The Homebrew formula still expects to use make to build exa. We'll either need to update the formula to use a different install command based on the version/commit or re-add a Makefile that just invokes just.
Answers:
username_0: I opened https://github.com/Homebrew/homebrew-core/pull/62812 to have Homebrew just use cargo directly rather than going through make/just.
username_1: Ah, thank you for raising this issue with the Homebrew team! I didn't realise that the formula was still dependent on `make`.
Looking at your change, I think using `cargo install` with Homebrew's arguments is a better idea than using `make` anyway — it means the package manager knows how to install exa, rather than exa knowing how to install itself for all possible OSes (which is something I've been wary about getting wrong).
Thanks again.
username_0: That update is merged, but now the `contrib` -> `completions` change is breaking completion installation, which causes the install to fail. I'll see if I can find some time to fix that today.
username_0: https://github.com/Homebrew/homebrew-core/pull/62876
username_0: Merged and looking good. |
martinciu/fuubar-cucumber | 76714238 | Title: Missing failure output: Steps completed and Scenario Outlines
Question:
username_0: Ideally, when a test fails, it would be great if it printed the full "pretty" step sequence that was completed and the name of the step that it failed on as well as the failure message.
Also, tests that fail running a Scenario Outline with Examples don't print error messages, but do show up in the failure reports after tests run.
I'm not sure I'll have any time to address these issues (or maybe there's a formatting option I'm missing) but I thought I'd drop this here in case anyone has an easy fix for it. |
microsoft/botframework-sdk | 639077503 | Title: [BF DOCS] Document APIs
Question:
username_0: From CSS Engineer:
From my own perspective I have a more general, broader-scope request, which is to more fully document the API beyond the swagger stuff (or whatever autogen tool you guys use). For example, things like this: https://docs.microsoft.com/en-us/python/api/botbuilder-ai/botbuilder.ai.qna.qnamaker_options.qnamakeroptions?view=botbuilder-py-latest. I spent a long time a couple weeks ago just trying to figure out how to simply change the QnaMaker threshold to a lower value for a python bot for a customer and was never able to confidently find out how to do it in our docs. This is more general feedback in that it would be nice to go in and fill in more of the missing pieces in the reference docs.
Answers:
username_0: Update: Initial Pythons docs for 4.9 released yesterday after addressing publishing issues. Further updates will continue based on the work Code Quality Pillar is doing.
username_0: Closing the issue as complete for R10.
Status: Issue closed
|
metoppv/improver | 224442723 | Title: Wind downscaling unit tests: tests for RoughnessCorrectionUtilities.roughness_correction_sub
Question:
username_0: As a wind downscaling developer I want unit tests for RoughnessCorrectionUtilities.roughness_correction_sub so that I can be more confident in the code and in future maintenance.
Acceptance criteria:
* Write unit tests for this method
* Read up on or scrutinise the method so that you can be confident in the unit test results
Status: Issue closed |
wellle/targets.vim | 115145352 | Title: use matchpairs for g:targets_argClosing/Opening
Question:
username_0: How about using matchpairs to set `g:targets_argClosing/Opening` by default ? Over here
```vim
" let g:targets_argOpening = '[({[]'
" let g:targets_argClosing = '[]})]'
let g:targets_argOpening = '['
let g:targets_argClosing = '['
let mps = split(&g:matchpairs, ',')
for mp in mps
  let mpOC = split(mp, ':')
  let g:targets_argOpening = g:targets_argOpening . mpOC[0]
  let g:targets_argClosing = g:targets_argClosing . mpOC[1]
endfor
unlet mps
unlet mp
unlet mpOC
let g:targets_argOpening = g:targets_argOpening . ']'
let g:targets_argClosing = g:targets_argClosing . ']'
```
does that.
Answers:
username_0: The comparison with `&g:matchpairs` also suggests a buffer local version `b:targets_argClosing/Opening`.
username_1: That's an interesting idea.
But it would change the current default to include curly braces. Also I believe the closing `]` needs to be escaped inside `[]` unless it's the first character (that's why I put it like this).
username_0: There might be good reasons to differ from `setglobal matchpairs&` but as a dilettante I see none.
True that `]` must be escaped. Another reason to prefer the syntax of `matchpairs` ? Again, as an outsider, the enclosing `[ ]` do not seem necessary.
username_0: My bad, now I understand that `[..]` is a regex. Still, perhaps the regex could by default coincide with that given by `&g:matchpairs`.
username_0: Here is an updated version of the above script converting `&matchpairs` into an opening and closing regex pattern as expected by `g:targets_argOpening/Closing`:
```vim
let g:targets_argOpening = '['
let g:targets_argClosing = '['
let mps = split(&g:matchpairs, ',')
for mp in mps
  let mpOC = split(mp, ':')
  let g:targets_argOpening = g:targets_argOpening . escape(mpOC[0], ']^-\')
  let g:targets_argClosing = g:targets_argClosing . escape(mpOC[1], ']^-\')
endfor
unlet mps
unlet mp
unlet mpOC
let g:targets_argOpening = g:targets_argOpening . ']'
let g:targets_argClosing = g:targets_argClosing . ']'
``` |
parallaxinc/PropLoader | 223192885 | Title: Add logic and message to alert when EEPROM checksum failed
Question:
username_0: [This request created in response to Issue #27]
In much the same way that the added "Propeller not found" error message clarifies download failures, we need an EEPROM checksum failure message.
- Add fatal error message, "EEPROM checksum failed" and give it the next available error code.
- Add logic to emit that message when an EEPROM checksum response is received and doesn't match the proper value.
This should apply to both wired and wireless downloads and should appear before the "ERROR: Download failed: -1" message so that instead of this:
```
001-Opening file '../myfile.binary'
002-Downloading file to port ...
009-... bytes sent
003-Verifying RAM
004-Programming EEPROM
102-ERROR: Download failed: -1
```
we see this (assuming the error code is 125):
```
001-Opening file '../myfile.binary'
002-Downloading file to port ...
009-... bytes sent
003-Verifying RAM
004-Programming EEPROM
125-ERROR: EEPROM checksum failed
102-ERROR: Download failed: -1
```
Answers:
username_1: I've added this to the fast loader code. It is not yet present in the single-stage loader.
username_1: done
Status: Issue closed
username_0: Verified in v1.0-37. EEPROM Checksum error code is 126. |
sapcc/vrops-exporter | 1011886084 | Title: Collecting stats from UserDesktop resource?
Question:
username_0: Hello,
Thanks for a great project.
We're in the process of setting this up and I'm struggling to get back the UserDesktop stats, specifically the ones around PCoIP.
Ideally I'm after the `pcoip|round_trip_latency` metric, however if I can grab all of the metrics that are under the `UserDesktop` namespace that would be great.
I seem to already have access to most, it's just the PCoIP ones that seem to be missing at the moment - what do I need to do in order to retrieve them?
Answers:
username_1: @username_0 sorry for not getting back to you earlier.
Do you still have the problem? Can you point out, where to find your requested metric exactly? Please take it from the [current vmware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.4/com.vmware.vcom.metrics.doc/GUID-9DB18E49-5E00-4534-B5FF-6276948D5A09.html)
username_0: @username_1 - The metrics come from a V4V plugin and are not documented in the current VMWare documentation.
We ended up writing additional `get_adapter` functions to talk directly to the V4V API running one request for every VM discovered.
We've got a ticket on our backlog to upstream it at some point in the future, however there's a lot of client-specific code in there so it may take a good while to achieve.
username_1: Ok, the problem is that you will not be benefiting from the latest updates we are making. If you have an idea how to bring this upstream, feel free to open a PR for discussion.
I just opened #188 to make it a little bit more generic too, but of course it is hard to tell what other people need without knowing.
Status: Issue closed
|
lima-vm/lima | 1153803107 | Title: Same IP for multiple instances
Question:
username_0: ### Description
```
[N] ❯ limactl list
NAME STATUS SSH ARCH CPUS MEMORY DISK DIR
vm1 Running 127.0.0.1:51426 aarch64 4 4GiB 100GiB /Users/jun/.lima/vm1
vm2 Running 127.0.0.1:51487 aarch64 4 4GiB 100GiB /Users/jun/.lima/vm2
[N] ❯ limactl shell vm1 ip route
default via 192.168.5.2 dev eth0 proto dhcp src 192.168.5.15 metric 100
192.168.5.0/24 dev eth0 proto kernel scope link src 192.168.5.15
192.168.5.2 dev eth0 proto dhcp scope link src 192.168.5.15 metric 100
[N] ❯ limactl shell vm2 ip route
default via 192.168.5.2 dev eth0 proto dhcp src 192.168.5.15 metric 100
192.168.5.0/24 dev eth0 proto kernel scope link src 192.168.5.15
192.168.5.2 dev eth0 proto dhcp scope link src 192.168.5.15 metric 100
```
Answers:
username_1: It is actually the same IP for _every_ instance. The "user" network is not seen by the others.
https://wiki.qemu.org/Documentation/Networking
The current alternative is VDE, but that is currently _only_ available on macOS (`vde_vmnet`)
It would be possible to set up a "socket" network, that would give them simple connectivity.
----
I will add a separate issue about it, but basically the first VM "listens" to a port on the host.
And then the second (and third...) VM "connects" to this port, so they each get an `eth1`...
Setting up tun/tap (or vde) also for Linux would be more powerful, but also requires root.
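To make the listen/connect idea concrete, here is a rough sketch using raw QEMU flags (illustrative only; the port and device model are arbitrary, and this is not lima's actual configuration):
```
# First VM listens on a host port:
qemu-system-aarch64 ... -netdev socket,id=vnet0,listen=:8010 -device virtio-net-pci,netdev=vnet0

# Subsequent VMs connect to that port and get an eth1 on the shared segment:
qemu-system-aarch64 ... -netdev socket,id=vnet0,connect=127.0.0.1:8010 -device virtio-net-pci,netdev=vnet0
```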
username_1: This was the story about adding Linux networking:
* https://github.com/lima-vm/lima/issues/358 |
jpmsilva/jsystemd | 381459389 | Title: The documentation does not indicate this
Question:
username_0: This link has an example of a service:
https://username_1.github.io/jsystemd-site/versions/1.0.1/howto.html
```ini
[Unit]
Description=MyService
Requires=network.target
After=network.target
After=syslog.target
[Service]
Type=notify
WorkingDirectory=/opt/myservice
ExecStart=/opt/jdk1.8.0/bin/java -XX:+ExitOnOutOfMemoryError -Xms256M -Xmx512M -XX:+UseG1GC -jar /opt/myservice/myservice.jar
SuccessExitStatus=143
KillMode=mixed
TimeoutStopSec=10
TimeoutStartSec=30
[Install]
WantedBy=multi-user.target
```
But the important parameter is not specified, without which nothing worked for me.
Namely this:
```ini
NotifyAccess=all
```
Please correct it in the documentation so that people like me do not waste time searching for the reason why it does not work.
:)
Answers:
username_1: Hi @username_0
First of all, thanks for reporting this. I'm not sure why, but GitHub did not send me any notifications on your issue.
As documented in https://username_1.github.io/jsystemd-site/versions/1.0.1/native-library.html you should only need to add `NotifyAccess=all` to your service unit when the native library (JNA) fails to load for some reason.
It would be preferable to use the native library instead, as the fallback implementation has some limitations.
Could you provide a log of the startup sequence of your Spring Boot application?
Thanks and best regards
username_0: @username_1
I am no longer working on the project where I had the problem, and I can't take the time to recreate it and send you a log file.
I'm sorry, I don't have time for this.
You can try to recreate the problem yourself or just close the task.
username_1: No problem, that's fair.
I'll close this since I'm unable to replicate.
Status: Issue closed
|
linkedin/avro-util | 1125496609 | Title: performance benchmarks after 2017
Question:
username_0: https://techblog.rtbhouse.com/2017/04/18/fast-avro/ shows performance improvements with fast serde. Can there be some benchmarks published here to compare how this performs with the recent version of AVRO ? |
htmlacademy/yomoyo | 413537025 | Title: [PHP Intensive, level 1, mysql_helper.php]
Question:
username_0: I suggest adding error handling to this helper function so that errors in queries are surfaced.
When an aggregate function is used without GROUP BY, no errors are displayed.
```php
function db_get_prepare_stmt($link, $sql, $data = []) {
    $stmt = mysqli_prepare($link, $sql);

    if (false === $stmt) {
        throw new mysqli_sql_exception($link->error, $link->errno);
    }

    if ($data) {
        $types = '';
        $stmt_data = [];

        foreach ($data as $value) {
            $type = null;

            if (is_int($value)) {
                $type = 'i';
            }
            else if (is_string($value)) {
                $type = 's';
            }
            else if (is_double($value)) {
                $type = 'd';
            }

            if ($type) {
                $types .= $type;
                $stmt_data[] = $value;
            }
        }

        $values = array_merge([$stmt, $types], $stmt_data);

        $func = 'mysqli_stmt_bind_param';
        $func(...$values);
    }

    return $stmt;
}
```
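For illustration, this is how the proposed helper would surface such an error at the call site (a sketch only; `$link` and the `lots` table are hypothetical):
```php
<?php
// $link is an open mysqli connection. The query below uses an aggregate
// function without GROUP BY, which fails under ONLY_FULL_GROUP_BY.
try {
    $stmt = db_get_prepare_stmt($link, 'SELECT category, COUNT(*) FROM lots');
    mysqli_stmt_execute($stmt);
} catch (mysqli_sql_exception $e) {
    // Without the throw, mysqli_prepare() would just return false
    // and the error would pass silently.
    print('Query error: ' . $e->getMessage());
}
```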
Status: Issue closed |
openshift/release | 528676518 | Title: prow bumps: Cookie secret should be exactly 32 bytes [deck]
Question:
username_0: This error/warning seems to occur during Prow bumps.
```
"level":"warning",
"msg":"Cookie secret should be exactly 32 bytes. Consider truncating the existing cookie to that length"
```
Not 100% sure about the impact, probably benign, but maybe we have some config to fix.
/area triage
Answers:
username_0: This is probably related to this secret: https://github.com/openshift/release/blob/master/core-services/prow/03_deployment/deck.yaml#L177, which indeed isn't 32 bytes. I'm not sure what this cookie exactly is, so hard to say whether we should/can truncate it to the length the warning suggests.
username_0: Oh, this is a XSRF token... So I guess it should be safe to just generate a 32 byte secret here and get rid of this warning?
https://github.com/kubernetes/test-infra/blob/c7a125031726b6dc75d090aad74f8ff97e363811/prow/cmd/deck/main.go#L407-L409
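If so, generating the file is trivial; for example (the file name is arbitrary, and this assumes deck reads the secret file as raw bytes):
```python
import secrets

# Write exactly 32 random bytes to the secret file.
with open("cookie-secret", "wb") as f:
    f.write(secrets.token_bytes(32))
```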
username_0: I'm surprised we do not see this also for the `deck-internal` instance which is using a 43-character long generated string as this value: https://github.com/openshift/release/blob/master/core-services/prow/03_deployment/deck.yaml#L7
:confused:
username_1: Appeared today.
```json
{
"component": "deck",
"file": "prow/cmd/deck/main.go:359",
"func": "main.main",
"level": "warning",
"msg": "Cookie secret should be exactly 32 bytes. Consider truncating the existing cookie to that length",
"time": "2020-01-02T21:30:38Z"
}
``` |
aztfmod/rover | 984601184 | Title: Support auto nonlock feature during plan with rover
Question:
username_0: Implement automatically disabling the state lock (`-lock=false`) on the tfstate when running `-a plan`, so that only minimal permissions need to be granted at the container/blob level to perform a plan.
This should apply only to the plan action, NOT apply.
Answers:
username_1: Thanks for the suggestion. This applies well to the caf_platform_contributors who can work on improving the deployed platform and check their modifications without being able to apply their changes.
caf_platform_contributors has "Storage Blob Data Reader"
Status: Issue closed
|
mroderick/PubSubJS | 562872367 | Title: Advantages over standard node events library?
Question:
username_0: Colleagues are asking whether to use pubsub-js or https://github.com/Gozala/events for a project.
I know pubsub-js is more extensive, but could the README clarify what pubsub-js can do that `events` can't? I understand that to use `events` similarly, you'd need to create a singleton instance, and from various places in your code, call the subscription and publishing/emitting functions on that instance.
Answers:
username_1: If you choose to use `EventEmitter` or similar as a singleton, you've basically created an implementation of publish/subscribe, very similar to PubSubJS. I think that both options are quite stable, so either would be a decent choice.
If you choose to go with `EventEmitter`, then I'd recommend creating the singleton in a file that can then be imported where needed, so you don't have to explicitly pass the singleton around. That's just a matter of convenience really.
There might be minor difference in features that make one solution more appealing than the other. Maybe one has a feature/method that you need, and the other doesn't.
If you feel like it, then a PR comparing PubSubJS with other solutions in a table would be a nice addition to the README.
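To make the singleton suggestion concrete, a minimal sketch (file and event names are illustrative):
```js
// bus.js: create the emitter once and export the instance.
const EventEmitter = require('events');
module.exports = new EventEmitter();

// elsewhere.js: every importer receives the same instance.
const bus = require('./bus');
bus.on('user:created', (user) => console.log('created', user));
bus.emit('user:created', { id: 1 });
```
|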
laravel-shift/blueprint | 620877484 | Title: DBALException on ENUM type when running blueprint:trace on existing project
Question:
username_0: 441| }
442|
443| return $this->doctrineTypeMapping[$dbType];
444| }
Exception trace:
1 Doctrine\DBAL\Platforms\AbstractPlatform::getDoctrineTypeMapping("enum")
\vendor\doctrine\dbal\lib\Doctrine\DBAL\Schema\MySqlSchemaManager.php:128
2 Doctrine\DBAL\Schema\MySqlSchemaManager::_getPortableTableColumnDefinition()
\vendor\doctrine\dbal\lib\Doctrine\DBAL\Schema\AbstractSchemaManager.php:810
3 Doctrine\DBAL\Schema\AbstractSchemaManager::_getPortableTableColumnList("messages", "db")
\vendor\doctrine\dbal\lib\Doctrine\DBAL\Schema\AbstractSchemaManager.php:167
4 Doctrine\DBAL\Schema\AbstractSchemaManager::listTableColumns("messages", "db")
\vendor\laravel-shift\blueprint\src\Commands\TraceCommand.php:123
5 Blueprint\Commands\TraceCommand::extractColumns(Object(App\Models\Message))
\vendor\laravel-shift\blueprint\src\Commands\TraceCommand.php:55
6 Blueprint\Commands\TraceCommand::handle()
\vendor\laravel\framework\src\Illuminate\Container\BoundMethod.php:32
7 call_user_func_array([])
\vendor\laravel\framework\src\Illuminate\Container\BoundMethod.php:32
8 Illuminate\Container\BoundMethod::Illuminate\Container\{closure}()
\vendor\laravel\framework\src\Illuminate\Container\Util.php:36
9 Illuminate\Container\Util::unwrapIfClosure(Object(Closure))
\vendor\laravel\framework\src\Illuminate\Container\BoundMethod.php:90
10 Illuminate\Container\BoundMethod::callBoundMethod(Object(Illuminate\Foundation\Application), Object(Closure))
\vendor\laravel\framework\src\Illuminate\Container\BoundMethod.php:34
11 Illuminate\Container\BoundMethod::call(Object(Illuminate\Foundation\Application), [])
\vendor\laravel\framework\src\Illuminate\Container\Container.php:590
12 Illuminate\Container\Container::call()
\vendor\laravel\framework\src\Illuminate\Console\Command.php:134
13 Illuminate\Console\Command::execute(Object(Symfony\Component\Console\Input\ArgvInput), Object(Illuminate\Console\OutputStyle))
\vendor\symfony\console\Command\Command.php:255
14 Symfony\Component\Console\Command\Command::run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Illuminate\Console\OutputStyle))
\vendor\laravel\framework\src\Illuminate\Console\Command.php:121
15 Illuminate\Console\Command::run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
\vendor\symfony\console\Application.php:1001
16 Symfony\Component\Console\Application::doRunCommand(Object(Blueprint\Commands\TraceCommand), Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
\vendor\symfony\console\Application.php:271
17 Symfony\Component\Console\Application::doRun(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
\vendor\symfony\console\Application.php:147
18 Symfony\Component\Console\Application::run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
\vendor\laravel\framework\src\Illuminate\Console\Application.php:93
19 Illuminate\Console\Application::run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
\vendor\laravel\framework\src\Illuminate\Foundation\Console\Kernel.php:131
20 Illuminate\Foundation\Console\Kernel::handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
\artisan:37
```
Answers:
username_1: So you end up doing stuff like:
```php
DB::statement('ALTER TABLE some_table CHANGE COLUMN some_enum some_enum your-data-type NOT NULL DEFAULT your-default;');
```
And you do that for all instances of enums in your database.
Additionally, I would recommend reading up on how other developers handle enums, like the folks at Spatie do:
* https://stitcher.io/blog/php-enums
* https://github.com/spatie/enum
* https://github.com/spatie/laravel-enum
Status: Issue closed
username_2: @username_0 there should be basic support for this now if you'd like to try it out by updating to the latest version. |
go-gitea/gitea | 610914485 | Title: bug: Attachments are not displayed on issues
Question:
username_0: <!-- NOTE: If your issue is a security concern, please send an email to <EMAIL> instead of opening a public issue -->
<!--
1. Please speak English, this is the language all maintainers can speak and write.
2. Please ask questions or configuration/deploy problems on our Discord
server (https://discord.gg/gitea) or forum (https://discourse.gitea.io).
3. Please take a moment to check that your issue doesn't already exist.
4. Please give all relevant information below for bug reports, because
incomplete details will be handled as an invalid report.
-->
- Gitea version (or commit ref): 1.12.0+dev-247-g1bdffefc0
- Can you reproduce the bug at https://try.gitea.io:
- [x] Yes: https://dev-git.epginsurance.com/username_0/test/issues/1
- [ ] No
- [ ] Not relevant
- Log gist:
## Description
Attachments added to an issue as part of comments are not being displayed. If an attachment is added by itself, the comment on the issue reads "Attached file".
## Screenshots

Answers:
username_1: Will be resolved by #11272
username_2: should be closed now - @username_0 can you test if it works now on latest master build?
username_0: Sure thing man, as soon as CI builds the docker image:

Thanks for the quick action! 👍
username_0: Ok, so try.gitea.io has been updated, and the attachments "work" to an extent. Picture attachments seem to be working OK, but uploading other formats does not work. For example, I am trying to upload a `docx` file. It does not show up, nor is there any trace of it being uploaded. I know it is supported:


username_1: Why did you think try.gitea.io supports pdf and docx? I think it only supports images and zip files.
username_0: I sent you a screenshot of what try.gitea.io says it supports.
username_0: And yes, I can see there is a setting in app.ini for this:
username_0: OK, I can verify the attachment works on my environment (because of my allowed types setting). I think you can close this issue. However, on `try.gitea.io` you might want to look at that upload component, I am not sure it is in sync with what's actually allowed (because it says it allows `docx`). 😄
Thanks again!
Status: Issue closed
|
Leostorm3/IMD3900_EDL_REP | 403321878 | Title: think of a game title
Question:
username_0: we need to think of a good game title
Answers:
username_1: Deliverance is a good title dont @ me
username_2: Hear no evil
username_1: How about Disclosure
due to the communication between the player in game and on mobile devices
username_2: anomalous
username_0: how about Claustrophobia
username_0: How about Breakout: a co-op experience
username_1: Distress
username_0: i like distress |
sofastack/sofa-common-tools | 571316407 | Title: LoggerSpaceFactory4LogbackBuilder#doBuild should use spaceClassloader load LogbackLoggerSpaceFactory class
Question:
username_0: ```java
public AbstractLoggerSpaceFactory doBuild(String spaceName, ClassLoader spaceClassloader,
URL url) {
return new LogbackLoggerSpaceFactory(getSpaceId(), new LoggerContext(), getProperties(),
url, getLoggingToolName());
}
```
should use spaceClassloader to load the LogbackLoggerSpaceFactory class.
Status: Issue closed
Answers:
username_1: fix in https://github.com/sofastack/sofa-common-tools/pull/52 |
erdl/survey_display | 416338999 | Title: Resize buttons so that 7 can fit horizontally on the screen
Question:
username_0: Right now, 7 survey options will wrap around the screen (we want everything to show up inline, but one button ends up on the next line). The survey option buttons need to be resized so that up to 7 fit inline.
Answers:
username_1: @username_0 should we close this?
Status: Issue closed
username_0: @username_1, yup we can close this one. (pr was accepted a while back, so i will close now) |
valdisiljuconoks/localization-provider-core | 503413054 | Title: Exception of type 'System.StackOverflowException' was thrown
Question:
username_0: Hi,
I'm using your library in my new Asp.Net Core project and I like it so far. However, I have an issue. I have a model class named Projects which uses attribute [LocalizedModel]. Whenever I start debugging I get "The application is in break mode" with the exception "Exception of type 'System.StackOverflowException' was thrown." If I comment out the attribute everything works fine.
I'm using this attribute on other model classes, but without exception. Do you have any suggestions on how to solve this issue, or what could cause this problem?
Thanks in advance.
Answers:
username_1: Hi,
Can you paste whole class here?
username_1: Also small description how you are using class layer in project would help. There are differences what is model and what is resource. But agree that exception should be handled properly.
Btw, which version is it?
username_0: Here is the class:
```csharp
using DbLocalizationProvider;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace SmartM.Web.Models
{
    //[LocalizedModel]
    public partial class Projects
    {
        public Projects()
        {
            InverseCustomer = new HashSet<Projects>();
            ProjectItems = new HashSet<ProjectItems>();
            ProjectOfferItems = new HashSet<ProjectOfferItems>();
        }

        [Display(Name = "Project Id")]
        public int? ProjectId { get; set; }

        [Display(Name = "Project name")]
        [Required]
        public string ProjectName { get; set; }

        [Display(Name = "Date")]
        [Required]
        public DateTime? ProjectDate { get; set; }

        [Display(Name = "Customer")]
        [Required]
        public int? CustomerId { get; set; }

        [Display(Name = "Status")]
        public int? Status { get; set; }

        [Display(Name = "Description")]
        public string Description { get; set; }

        [Display(Name = "Project total")]
        public decimal? ProjectTotal { get; set; }

        [Display(Name = "Currency")]
        public string CurrencyCode { get; set; }

        [Display(Name = "Deadline")]
        public DateTime? Deadline { get; set; }

        public byte[] Timestamp { get; set; }

        public Projects Customer { get; set; }
        public ICollection<Projects> InverseCustomer { get; set; }
        public ICollection<ProjectItems> ProjectItems { get; set; }
        public ICollection<ProjectOfferItems> ProjectOfferItems { get; set; }
    }
}
```
username_0: The version is 5.7.1.
username_1: property `Customer` is causing problems. Can you try to add `[Ignore]` attribute to see if it helps?
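For clarity, the suggested change looks roughly like this (a sketch only; the exact namespace providing `[Ignore]` depends on your DbLocalizationProvider version):
```csharp
[LocalizedModel]
public partial class Projects
{
    // ...display properties unchanged...

    [Ignore] // assumption: excludes this navigation property from model scanning,
             // breaking the Projects -> Projects self-reference behind the overflow
    public Projects Customer { get; set; }
}
```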
username_0: Great, it's working now. Thank you for your excelent support. I have another issue that bothers me from the very beginning of my project. I'll post it in another question.
Status: Issue closed
|
postmanlabs/postman-app-support | 675031131 | Title: SSL Certificate Problem: Unable to get Local Issuer Certificate
Question:
username_0: We have a shared resource collection that is being used by our team. Until now we were able to hit our applications from Postman, but suddenly one resource after another started getting the error message below while hitting our applications.
**Error: Unable to get local issuer certificate.**
We need to activate SSL validation for hitting our applications.
**Image 1: even when we have added the valid certificate and correct configuration, getting the below error:**
<img width="673" alt="postman2" src="https://user-images.githubusercontent.com/22025315/89652369-028a1100-d8e3-11ea-9d1c-e492d67fb668.PNG">
**Image 2: When we diable the SSL validation, getting the below error while hitting our application API's**
<img width="665" alt="postman1" src="https://user-images.githubusercontent.com/22025315/89652371-03bb3e00-d8e3-11ea-8ca7-d1c1cbb6552f.PNG">
Note: we need a certificate as part of our request. We have added the necessary certificate details in our local Postman and it was working as expected until 2 days ago. Suddenly around 4-5 resources started facing the above-mentioned error while hitting our APIs. For the remaining resources with a similar setup it works as expected, and we have tried multiple options found on the Postman forum with no luck in resolving the issue. Now we are worried that all the resources might be impacted by the same issue.
Looking for a fix for the above request.
Answers:
username_0: @username_2 any update on the requested details?
username_0: @username_1 Any help here? Now 2 additional resources are facing the same issue. We are worried it might impact the whole team; your inputs would help us.
username_1: @username_0 this error occurs if the issuer certificate is untrusted. Can you share the following details:
* What's the app's version where request fails
* What's the app's version where request pass
* What's the type of client certificate (CRT + KEY or PFX)
* Does it require a CA certificate
username_0: @username_1
What's the app's version where request fails -- v7.29.0 and tried till the current latest version v7.30.1
What's the app's version where request pass -- v7.29.0 and tried till the current latest version v7.30.1
What's the type of client certificate (CRT + KEY or PFX) -- Combination of CRT+KEY+PFX
Does it require a CA certificate -- No
Just to confirm: it is happening for only a few resources; for the others, on both the same version and the new version, it works in all cases. For now, 7 resources are facing this problem, and for the remaining ones it works properly.
username_1: @username_0 This is weird that on the same version (say `v7.30.1`) a few of your resources are working fine and the rest are facing this issue. This seems like some kind of misconfiguration in either **certificate** or **proxy** configuration (does the request require a proxy setup?).
I recommend you compare the request sent (passed and failed) using the Postman Console to see what's the difference.

username_0: @username_1, settings-wise we think it was initialized correctly. As mentioned earlier, the resources who are currently facing the issue previously had Postman working as expected and then suddenly started getting the error.
As suggested, we tried validating the request in the Postman Console; everything seems to match, and the only difference we see is the error message "Error: Unable to get local issuer certificate." mentioned in the original description.
Attaching a sample screenshot for reference.
**1. The user who is facing the issue postman console.**
<img width="946" alt="image1error" src="https://user-images.githubusercontent.com/22025315/90255771-5efcab80-de62-11ea-8ad9-ae243950cf6b.PNG">
**2. The user who is not facing the issue postman console:**

If we check both the images, request wise and certificate wise everything matching. Request you to just have a cross-check once from your end as well.
username_2: The differences I can see are the file paths to the certificate locations - both in terms of names and `/` or `\` 🤔
Would that cause an issue @username_1? Are these paths related to the directory set in the Working Directory?
username_2: @username_0 I removed the images as the _masking_ you used, was not really masking any of those details like the client_id and client_secret. 😬
username_0: @username_2 Sure. Thank you for deleting.
we are selecting the same certificate, the path would be different but the certificate is the same.
username_3: @username_0 @username_2
The file path you see with "/" is a Postman-normalized file path that we store in order to maintain compatibility between different operating systems for collaboration; this shouldn't be a problem. Please refer to the following thread for more information: https://github.com/postmanlabs/postman-app-support/issues/8852#issuecomment-665603793
username_2: Hey 👋🏻
I'm going to close this issue due to inactivity - If you're still facing the issue in the latest version we can reopen this and investigate further.
Status: Issue closed
|
typora/typora-issues | 572524147 | Title: OS X Catalina: Typora Freezes Indefinitely (2)
Question:
username_0: **Typora Version**: 0.9.9.32.1 (4191)
**OS**: macOS Catalina 10.15.3
Same symptom as #3279.
I insert a few pictures, and with a certain probability Typora freezes when saving.
On average, 3-5 saves will trigger a freeze.
CPU usage is not very high (0.6%), and memory usage is low.
Answers:
username_1: @username_0 Is it same with #2895 that save panel will never show?
username_0: I haven't try what #2895 described. I can try. So far it is just the normal save that triggers the freeze.
username_2: I just downloaded the latest Typora (Version 0.9.9.32.1 (4191)) and it hung for a long time upon opening. I took a spindump (attached). When the spindump was done, Typora had become responsive again.
I'm running macOS 10.15.4 (19E287) Catalina.
[typora_spindump.txt](https://github.com/typora/typora-issues/files/4631702/typora_spindump.txt)
username_1: Does this still happens in latest version?
Status: Issue closed
username_1: **Typora Version**: 0.9.9.32.1 (4191)
**OS**: macOS Catalina 10.15.3
Same symptom as #3279.
I insert a few pictures, and with a certain probability Typora freezes when saving.
On average, 3-5 saves will trigger a freeze.
CPU usage is not very high (0.6%), and memory usage is low.
username_1: And what is your file system, is it same as #3496?
Status: Issue closed
|
aoudiamoncef/apollo-client-maven-plugin | 598380276 | Title: Unable to generate classes for recursive tree structures
Question:
username_0: Hi,
I am having issues generating classes with below error.
**Can't query `Info` on type `Info`**
Sample
**Info.graphqls**
extend type Query {
listSomeInfo(userId: String!): [Info!]
}
type Info {
id: ID!
name: String!
desription: String!
children: [Info!]
parent: Info
}
**Info.graphql**
```graphql
query (
  $userId: String!
) {
  listSomeInfo(
    userId: $userId
  ) {
    id
    name
    desription
    children {
      id
      name
      children {
        id,
        name
      }
      parent {
        id,
        name
      }
    }
  }
}
```
Answers:
username_1: Hi @me-hek ,
I think that you have to report this issue to the main project, [Apollo Android](https://github.com/apollographql/apollo-android/issues).
Status: Issue closed
|
whitecube/nova-flexible-content | 610403879 | Title: Allow casts to return a layout collection within Nova endpoints
Question:
username_0: In regards to the discussion in issue #169, it seems that it could be beneficial to allow casting to a Layout Collection within Nova endpoints to facilitate usage with Laravel Scout.
This is currently possible but not easily done, as the casting has a check that always returns the raw value when within Nova, thus requiring a workaround.
Discussion regarding a possible solution to this is encouraged!
Thanks |
jlippold/tweakCompatible | 514318356 | Title: `DateUnderTimeX-A12` working on iOS 12.4
Question:
username_0: ```
{
"packageId": "com.hackyouriphone.dateundertimex",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.hackyouriphone.dateundertimex",
"deviceId": "iPhone11,8",
"url": "http://cydia.saurik.com/package/com.hackyouriphone.dateundertimex/",
"iOSVersion": "12.4",
"packageVersionIndexed": false,
"packageName": "DateUnderTimeX-A12",
"category": "HYI - Tweaks",
"repository": "HackYouriPhone",
"name": "DateUnderTimeX-A12",
"installed": "1.2.2",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.hackyouriphone.dateundertimex",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Adds the date under the status bar on Devices with the Modern Statusbar",
"latest": "1.2.2",
"author": "NeinZedd9",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
HeapsIO/domkit | 770991904 | Title: HL bytecode hot reload and DomKit
Question:
username_0: Is it possible to use HL's bytecode hot reload functionality to reload changes to the SRC markup at runtime?
I am currently able to do the reload, clear the children of a component, and recreate it from the DomkitML; however, it doesn't seem to pick up the actual changes made to the SRC variable: they stay the same. Is this even possible? How does Shiro test quick iterations on the markup?
Cheers
Answers:
username_1: Hi ! HL bytecode hot reload is experimental and we haven't yet deployed it at Shiro. I need to look into it further to see if there's a specific problem wrt Domkit although it should be generic because SRC changes will trigger generation of initComponent() method. ATM at Shiro markup changes are tested by restarting the game.
Status: Issue closed
username_0: Thanks for the insights! Reloading the markup through hot reload would be a massive speed-up during UI development. Loading markup from a file (XML-like) at dev time would also be a nice option, but maybe that's more difficult with inlining and such. I'll look into it further.
Zrips/CMI | 620637536 | Title: Using /cheque with no arguments gives a permission error even if you have cmi.command.cheque
Question:
username_0: **Description of issue:**
Players who have the cmi.command.cheque permission get a no permission error instead of the "correct usage: /cheque <amount>" message. They can correctly create cheques with /cheque <amount>.
---
**Cmi Version (using `/cmi version`): 192.168.127.12**
**Server Type (Spigot/Paperspigot/etc): Paper 283**
**Server Version (using `/ver`): 1.15.2**
Answers:
username_1: Should be resolved
Status: Issue closed
|
dotnet/docs | 299838673 | Title: .NET Core 2.1 System.IO API changes
Question:
username_0: # .NET Core 2.1 System.IO API changes
There are a number of behavior changes to System.IO APIs in .NET Core 2.1 and new APIs. This issue summarizes the high level changes and links to relevant documents.
# General
APIs have been modified to better support cross platform code writing and substantially improve performance. Here is a summary of the API changes:
1. Path validation has been simplified
2. Span overloads have been added for a number of `System.IO.Path` APIs
3. There is a new overload for `Path.GetFullPath()` that allows specifying a base path for resolving the path
4. Directory enumeration results are more consistent cross-plat
5. New enumeration options have been added
6. A new extensible enumeration API has been added
Initial details are below. I'll be adding more links and details shortly.
## Path Validation
To facilitate writing cross platform code, System.IO.Path APIs have had their preemptive error checking simplified. Notably:
- `Path.GetFullPath()` only checks for embedded nulls, null strings, and empty strings
- No IO APIs check for invalid characters
- `Path.GetDirectoryName()` returns null for empty strings, instead of throwing
- Search patterns are no longer validated beyond null check and rooting (they cannot return true from `Path.IsPathRooted()`)
## Span overloads
https://github.com/dotnet/corefx/issues/25539
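For example, a small sketch (the full list of span-based overloads is in the linked issue):
```csharp
using System;
using System.IO;

class SpanPathDemo
{
    static void Main()
    {
        ReadOnlySpan<char> path = "/logs/2018-02/report.pdf".AsSpan();
        // The span overloads avoid allocating intermediate strings:
        ReadOnlySpan<char> ext = Path.GetExtension(path);  // ".pdf"
        ReadOnlySpan<char> name = Path.GetFileName(path);  // "report.pdf"
        Console.WriteLine($"{name.ToString()} {ext.ToString()}");
    }
}
```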
## Path.GetFullPath overload
https://github.com/dotnet/corefx/issues/25539
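A quick sketch of the new overload, which resolves a relative path against an explicit base instead of the current directory:
```csharp
using System;
using System.IO;

class FullPathDemo
{
    static void Main()
    {
        // New in .NET Core 2.1: the second argument is the base path.
        string full = Path.GetFullPath(@"data\logs.txt", @"C:\app");
        Console.WriteLine(full); // C:\app\data\logs.txt on Windows
    }
}
```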
## Enumeration API changes
https://github.com/dotnet/corefx/issues/25873
Answers:
username_1: Thank you @username_0! I'll be adding this to our .NET Core 2.1 project.
username_0: I put together some blog posts covering the details that can be used as a starting point:
https://blogs.msdn.microsoft.com/jeremykuhne/2018/03/08/system-io-in-net-core-2-1-sneak-peek/
https://blogs.msdn.microsoft.com/jeremykuhne/2018/03/09/custom-directory-enumeration-in-net-core-2-1/
username_0: When we do these updates we should clarify legacy behavior. https://github.com/dotnet/corefx/issues/26008 brought up the fact that we didn't do whitespace trimming on Unix platforms. Please sync with me to ensure that you've got accurate details across the platforms/versions. :)
username_1: Moving this to the backlog until we're actually ready to start working on this |
jlippold/tweakCompatible | 420272879 | Title: `Hyperion` working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.spark.hyperion",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.spark.hyperion",
"deviceId": "iPhone10,3",
"url": "http://cydia.saurik.com/package/com.spark.hyperion/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": true,
"packageName": "Hyperion",
"category": "Tweaks",
"repository": "SparkDev",
"name": "Hyperion",
"installed": "0.4.2-1",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.spark.hyperion",
"commercial": true,
"packageInstalled": true,
"tweakCompatVersion": "0.1.4",
"shortDescription": "OLED notification tweak",
"latest": "0.4.2-1",
"author": "Spark",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
ThibautGrx/thp2 | 348078355 | Title: better logs
Question:
username_0: ### Why ?
Are the heroku logs easy to read ? Is sending email with Sendgrid fast ? Do you have any idea of what is happening ?
### Must have
- [ ] Better log formating (Yay JSON) with [lograge](https://github.com/roidrage/lograge)
- [ ] Store them into a LogDNA
### To Do
- [ ] Read [lograge](https://github.com/roidrage/lograge) documentation and [logstash](https://github.com/dwbutler/logstash-logger) too;
- [ ] Try to make it work with json in production;
- [ ] Cry a bit;
- [ ] Take [this](https://github.com/denispasin/turtle_family/blob/master/config/environments/production.rb#L85), it's dangerous to go alone;
- [ ] Add the logDNA add-on on heroku. |
SafetyGraphics/safetyGraphics | 1021093185 | Title: Migrate metadata storage to safetyCharts
Question:
username_0: Since our intent is to add many new charts across new data domains, I'm thinking it makes more sense to store the default metadata in a more modular way in safetyCharts. Process might look like this:
- Delete `safetyGraphics::meta`
- Save domain-level metadata (`aes_meta`, `dm_meta` etc) in the `/data` folder in safetyCharts. Keep the same columns as `safetyGraphics::meta`
- Create a new `makeMeta.R` function in safetyGraphics. The function will identify the metadata domains specified in the current charts and stack them up for use in `meta` parameter in `safetyGraphicsApp()`
Answers:
username_0: There are a few more complex use cases that we might want to consider in this refactor as well:
- Charts can be used on any data domain (e.g. codebooks like {datadigest}
- Charts that have optional data domains (e.g. a patient profile chart where components are added or removed based on what data is available)
username_0: Thinking that adding an optional `meta` parameter to the charts might be a good idea. If `meta` is provided then `makeMeta` would look for a data.frame with that name in the package specified in the chart. If it's missing, `makeMeta` would look for domain-level metadata as described above.
username_0: After re-reading #562, I'm thinking that `makeMeta` should really be applied to each chart (probably as part of `prepareChart()`). The goal would be for each chart to have its own `meta` object, which could be used both to generate the mapping tab for the whole app and to 'validate' individual charts.
Not sure the best way to actually derive `chart.meta`. Maybe something that combines:
- `meta_{domain}` - domain-level metadata tibbles saved in safetyCharts (like safetyGraphics::meta)
- `meta_{chart}` - chart-specific metadata tibbles
- metadata defined in the `chart` object - Specifications saved directly in the chart config. Maybe something like what was proposed in #562:
```
env: safetyGraphics
label: Outlier Explorer - widget
type: htmlwidget
domain:
- labs
- id_col: character
- measure_col: character
- value_col: numeric
- ...
package: safetyCharts
export: true
...
```
username_0: Going to scale back the proposal above for v2.1, and avoid `chart.meta` for now. Instead, `makeMeta` will produce an app-level metadata file after the `charts` list is finalized by stacking all `{package}::meta_{domain}` and `{package}::meta_{chart}` objects found for the current charts. `makeMeta` will throw an error if duplicate `domain`+`text_key` combinations are found.
We can revisit adding chart-level requirements in a future release.
username_0: I think this is pretty much done, and I did add support for metadata saved to `chart$meta` as a data.frame. This allows for easier incorporation of custom charts as shown in the updated `hello world` example below. To keep things simple, each source is expected to be fully independent - that is no duplicate rows of metadata are allowed.
```
helloMeta <- tribble(
  ~text_key, ~domain, ~label,        ~standard_hello, ~description,
  "x_col",   "hello", "x position",  "x",             "x position for points in hello world chart",
  "y_col",   "hello", "y position",  "y",             "y position for points in hello world chart"
) %>% mutate(
  col_key = text_key,
  type = "column"
)

detectStandard(domain = 'hello', data = iris, meta = helloMeta)

helloData <- data.frame(x = runif(50, -1, 1), y = runif(50, -1, 1))

helloWorld <- function(data, settings){
  plot(-1:1, -1:1)
  text(data[[settings$x_col]], data[[settings$y_col]], "Custom Hello Domain!")
}

helloChart <- prepareChart(
  list(
    env = "safetyGraphics",
    name = "HelloWorld",
    label = "Hello World!",
    type = "plot",
    domain = "hello",
    workflow = list(
      main = "helloWorld"
    ),
    meta = helloMeta
  )
)

charts <- c(makeChartConfig(), helloChart) # Easy to combine default and custom charts

data <- list(
  labs = safetyData::adam_adlbc,
  aes = safetyData::adam_adae,
  dm = safetyData::adam_adsl,
  hello = helloData
)

# No need to specify meta since makeMeta() will generate the correct list by default.
safetyGraphicsApp(
  domainData = data,
  charts = charts
)
```
Status: Issue closed
|
isawnyu/pleiades-gazetteer | 140228127 | Title: radio button groups and combo boxes positioned oddly on edit forms
Question:
username_0: Some examples:




Answers:
username_1: Fixed (also fixed field help text styles)
Status: Issue closed
|
quic/aimet | 614939222 | Title: Documentation Related Issue
Question:
username_0: Does the library support RNN-based models? You haven't mentioned anything about it, and there are no benchmarks on an RNN-based model.
Answers:
username_1: Thanks for the query. Currently AIMET does not support RNN based models – both for model quantization and model compression. We will update the documentation to make this more clear.
More information on the following would be useful
• Which particular feature were you planning to use with RNN models?
• Which RNN-based models are you interested in trying AIMET features on?
We are considering adding support for RNN (including LSTM/GRU) quantization in the future. Please let us know if you are interested in contributing to AIMET.
username_0: I would like to try AIMET on the Tacotron 2 (TTS) model (both pruning and quantization).
As for contributing, I currently don't think I have enough knowledge to contribute, but I definitely will in the future.
Status: Issue closed
|
psanford/wormhole-william | 491848222 | Title: Error building on Raspberry Pi
Question:
username_0: I successfully built wormhole-william on AMD64, but it fails on my Raspberry Pi.
go version go1.13 linux/arm
```
# github.com/username_1/wormhole-william/wormhole
wormhole/file_transport.go:140:28: constant 4294967295 overflows int
```
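For context, on 32-bit targets Go's `int` is 32 bits, so the untyped constant 4294967295 (`math.MaxUint32`) cannot be stored in an `int`; the usual fix is an explicitly sized type. A tiny illustration (not the project's actual code):
```go
package main

import "fmt"

func main() {
	// const bad int = 4294967295 // fine on amd64, but on 32-bit arm:
	//                            // "constant 4294967295 overflows int"
	const good uint32 = 4294967295 // explicitly sized, builds everywhere
	fmt.Println(good)
}
```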
Status: Issue closed
Answers:
username_1: Thanks for the bug report! It should be fixed now on master. |
DemocracyClub/EveryElection | 246310808 | Title: Can't attach Notice of Election PDF to Peterborough Park by-election
Question:
username_0: Steps to reproduce:
* Go to https://elections.democracyclub.org.uk/elections/local.peterborough.park.by.2017-08-17/
* Paste in
https://www.peterborough.gov.uk/upload/www.peterborough.gov.uk/council/elections/NoticeOfElection-ParkWard.pdf
or
https://www.peterborough.gov.uk/upload/www.peterborough.gov.uk/council/elections/NoticeOfElection-ParkWard.pdf?inline=true
* On submit, `archive_document()` will raise an unhandled `HTTPError`
Answers:
username_0: Solution: Notice of Election form should also include a file upload dialog so we can upload a PDF instead of linking to it if we need to
username_0: Also
* https://elections.democracyclub.org.uk/elections/local.trafford.2017-09-14/
* http://www.trafford.gov.uk/about-your-council/elections/docs/Notice-Of-Election-bucklow-st-martins-aug-2017.pdf
username_0: Also:
* https://elections.democracyclub.org.uk/elections/local.trafford.2018-05-03/
* http://www.trafford.gov.uk/about-your-council/elections/docs/Notice-of-Election.pdf
this may be something particular to Trafford (e.g: checking for a cookie before allowing download, or something)
username_1: `500`
username_0: Good spot. I reckon my preferred solution to this is still to just add the ability to upload a file to cover this small handful of cases rather than build in specific edge case handling like spoofing the user agent. That would also give us something that covers a situation where someone had to (for example) take a photo of a bit of paper :)
username_1: `200` |
MHumm/DelphiEncryptionCompendium | 1051773365 | Title: Add cipher demo that uses a KDF and a currently unbroken cipher
Question:
username_0: The simple cipher demo does not use a KDF to generate a key and instead uses a password that has the right length to be accepted by the used cipher. Since developers (myself included) often start with copying code from the demo, it would be cool, if there was a demo, that uses the library in a safe way with an unbroken cipher.
The goal here is, that developers, which are new to cryptography or this library, won't end up with weak encryption in they're projects.
Answers:
username_1: KDF is for generating a key from a password. It's not really about generating encryption keys, but it can be used for this purpose though. I'll think about it. KDF would bring in hashes and up to now I wanted to have separate demos for cipher related things and hash related ones in order to not make things too complicated for new users. But a new demo might make sense. Maybe...
username_1: I changed the cipher of the console demo now, but not to AES as I might change AES handling a bit in 6.5 (but not in an incompatible way) and I added another console demo using KDF1 to improve key security. You can check it out in development branch already. But most likely 6.4.1 will be released this afternoon already.
username_1: A new demo was added to 6.4.1 which was just released. I hope it's good enough for now.
Status: Issue closed
|
mruby/mruby | 284269990 | Title: "register" keyword is not supported anymore in C++17
Question:
username_0: When compiling with "enable_cxx_abi" and setting the compiler options to use "-std=c++17", I get errors because there are still a handful of locations where this keyword is used.
I think we can safely remove them. No C or C++ compiler is using these hints anymore.
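For context, a tiny illustration (not code from mruby) of why removal is safe: the keyword was only ever an optimization hint.
```cpp
int main() {
    // register int counter = 0; // rejected by -std=c++17: 'register' was removed
    int counter = 0;             // identical behavior; the keyword was only a hint
    return counter;
}
```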
Answers:
username_0: I just learned the basics of GitHub in under 30 minutes and created a pull request for my changes.
username_1: Closed by #3913
Status: Issue closed
|
moaxaca/async-redis | 733283250 | Title: NOAUTH Authentication required (How to handle passwords?)
Question:
username_0: How to simply include the redis password when connecting?
Answers:
username_1: You can pass the same configuration object that you would provide to the plain redis library. It is documented [here](https://github.com/NodeRedis/node-redis#rediscreateclient).
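For example, a minimal sketch, assuming async-redis forwards these options to node-redis's `createClient` as the linked docs describe (host, port, and password values are placeholders):
```js
const asyncRedis = require('async-redis');

const client = asyncRedis.createClient({
  host: '127.0.0.1',
  port: 6379,
  password: 'your-redis-password', // the same option node-redis accepts; avoids NOAUTH
});

client.set('greeting', 'hello')
  .then(() => client.get('greeting'))
  .then(console.log);
```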
Status: Issue closed
username_2: Answer from above works |
rerun-modules/bintray | 267189564 | Title: support retrieving package file names
Question:
username_0: Similar to #3, support retrieving the list of package files:
```
# bintray: package-file (specific package)
rerun bintray: package-file \
[ --extension .tar.gz ] \
--org rerun \
--repo rerun-bin \
--package rerun
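
# If implemented, this could presumably wrap Bintray's "get package
# files" REST endpoint (an assumption about the implementation, based
# on the Bintray API of the time), e.g.:
curl -u "$BINTRAY_USER:$BINTRAY_API_KEY" \
  "https://api.bintray.com/packages/rerun/rerun-bin/rerun/files"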
``` |
seenthis/seenthis_squelettes | 468795521 | Title: Duplicate posts from the Mediapart feed
Question:
username_0: I've noticed this several times while browsing /all looking for spam accounts: all the Mediapart posts are duplicated. For example:
https://seenthis.net/messages/792985
https://seenthis.net/messages/792993
The URL contained in each post is slightly different:
https://www.mediapart.fr/journal/france/160719/parafoudres-et-vague-de-cancers-orange-vise-par-une-une-plainte
https://www.mediapart.fr/journal/france/160719/parafoudres-et-vague-de-cancers-orange-vise-par-une-plainte
As are the tags associated with the posts.
Another example:
https://seenthis.net/messages/792415
https://seenthis.net/messages/792396
Can we do something to avoid this in the seenthis_importer_flux plugin, or should we consider it not that important?
Answers:
username_0: It looks like this also affects the Cuisine Libre feed; cf. https://seenthis.net/people/cuisinelibre :
https://seenthis.net/messages/801838
https://seenthis.net/messages/801943
https://seenthis.net/messages/801895
https://seenthis.net/messages/801935
username_1: We should be able to unify them when the difference is http vs. https, but not in the case of a corrected URL.
username_0: For the record, Cuisine Libre declares its feed over http while the URL of the SPIP site in question is https, and the feed is reachable over both http and https (instead of forcing a redirect from the former to the latter). That doesn't help us here, but let's be nice and try to handle this kind of blunder on our side :p
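For illustration (sketched in Python; the importer itself is a SPIP plugin), the canonicalization username_1 describes could look like the following. It folds http/https together for duplicate detection but, as noted, cannot catch URLs whose path was later corrected:

```python
from urllib.parse import urlsplit, urlunsplit

def dedup_key(url):
    # Fold scheme differences (http vs. https) into one canonical form,
    # used only as a duplicate-detection key.
    parts = urlsplit(url.strip())
    return urlunsplit(("https", parts.netloc.lower(), parts.path,
                       parts.query, ""))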
username_0: Possibly related to #130
Status: Issue closed
|
radiasoft/sirepo | 901073564 | Title: Prod Release 2021-05-25 16:39:48 UTC
Question:
username_0: - #3610 Beta Release 2021-05-24 17:15:32 UTC
- #3607 Alpha Release 2021-05-22 14:59:34 UTC
- Fix #3606 was not complete https://github.com/radiasoft/download/commit/81c73fd3b480665019b75758a630999222b69e43
- #3605 Alpha Release 2021-05-21 20:46:05 UTC
- Fix #3606 add libgfortran4 back into beamsim
- #3600 Alpha Release 2021-05-20 21:56:44 UTC
- fix flashcap #39 added flash unit test #3601
- Fix #3581: Create jupyterhub user given email that may not be Sirepo … #3588
- Fix: #3564: Implement statelessCompute API #3565
- #3591 Alpha Release 2021-05-19 21:20:47 UTC
- Issue/flashcap 31 new flash app structure #3599
- #3577 Alpha Release 2021-05-18 16:03:59 UTC
- Fix #3578: b is not defined #3580
- Issue/3511 OPAL 3D plot axes and ticks and element name fix #3585
- Issue/3586 fix lib_file_name_without_type regex #3587
Answers:
username_0: Still having login issues at BNL, so I'll get those sorted and push the current prod whenever that is.
Status: Issue closed
|
IHTSDO/snomed-owl-toolkit | 871233692 | Title: Unable to run snomed-owl-toolkit-2.10.1-executable.jar
Question:
username_0: Hello,
I'm trying to run the RF2-to-OWL toolkit from the command line but I keep getting a set of exceptions. Can you advise on this?
Details:
I am trying to convert the following SNOMED releases. Both of them have Full, Snapshot and Delta folders. Both of them generate the same error.
- SnomedCT_InternationalRF2_PRODUCTION_20200731T120000Z.zip
- UK_SNOMEDCT2_30.0.0_20200805000001.zip (UK Clinical extension for the same period)
I set those files in the same folder as the .jar and run the following command line:
`java -Xms4g -jar snomed-owl-toolkit-2.10.1-executable.jar -rf2-to-owl -rf2-snapshot-archives SnomedCT_InternationalRF2_PRODUCTION_20200731T120000Z.zip -version 20200731T12`
Then I receive the following errors:
```
Creating Ontology using the following options:
Snapshot archives: [SnomedCT_InternationalRF2_PRODUCTION_20200731T120000Z.zip]
Delta archive: -none-
Ontology URI: http://snomed.info/sct/900000000000207008
Ontology Version: 20200731T12
Include Description Annotations: true
2021-04-29 12:20:57,633 [INFO ] [main] org.snomed.otf.owltoolkit.conversion.RF2ToOWLService - Loading RF2 files
Exception in thread "main" java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: java.lang.ExceptionInInitializerError
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
at com.google.inject.spi.Elements.getElements(Elements.java:110)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.semanticweb.owlapi.apibinding.OWLManager.createInjector(OWLManager.java:89)
at org.semanticweb.owlapi.apibinding.OWLManager.instatiateOWLOntologyManager(OWLManager.java:97)
at org.semanticweb.owlapi.apibinding.OWLManager.createOWLOntologyManager(OWLManager.java:58)
at org.snomed.otf.owltoolkit.taxonomy.AxiomDeserialiser.<init>(AxiomDeserialiser.java:33)
at org.snomed.otf.owltoolkit.taxonomy.SnomedTaxonomyLoader.<init>(SnomedTaxonomyLoader.java:54)
at org.snomed.otf.owltoolkit.taxonomy.SnomedTaxonomyLoader.<init>(SnomedTaxonomyLoader.java:63)
at org.snomed.otf.owltoolkit.taxonomy.SnomedTaxonomyBuilder.build(SnomedTaxonomyBuilder.java:117)
at org.snomed.otf.owltoolkit.taxonomy.SnomedTaxonomyBuilder.build(SnomedTaxonomyBuilder.java:81)
at org.snomed.otf.owltoolkit.conversion.RF2ToOWLService.convertRF2ArchiveToOWL(RF2ToOWLService.java:45)
at org.snomed.otf.owltoolkit.Application.rf2ToOwl(Application.java:145)
at org.snomed.otf.owltoolkit.Application.run(Application.java:88)
at org.snomed.otf.owltoolkit.Application.main(Application.java:53)
... 8 more
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @39ba5a14
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
at com.google.inject.internal.cglib.core.$ReflectUtils$2.run(ReflectUtils.java:56)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:312)
at com.google.inject.internal.cglib.core.$ReflectUtils.<clinit>(ReflectUtils.java:46)
... 39 more
```
Initially I thought the problem was due to trying to work with the UK Extension, but since the International release also presents problems, I'm at a loss as to what I should do.
Thank you for your time,
Rafael
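For context: the `InaccessibleObjectException` in the trace is the strong-encapsulation error that newer JDKs (16+) raise when Guice's bundled cglib tries to reflect into `java.lang`. A likely workaround (an assumption, not something confirmed in this thread) is to run on Java 8 or 11, or to open the package explicitly:

```
java --add-opens java.base/java.lang=ALL-UNNAMED -Xms4g \
  -jar snomed-owl-toolkit-2.10.1-executable.jar -rf2-to-owl \
  -rf2-snapshot-archives SnomedCT_InternationalRF2_PRODUCTION_20200731T120000Z.zip \
  -version 20200731T12
```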
Status: Issue closed |
YousefED/typescript-json-schema | 321177962 | Title: Optional array / tuple elements
Question:
username_0: I have defined a tuple type with an optional element like this:
```
type tuple23 = [ number, string, string | undefined ];
```
The translation I get is:
```
{
"$schema": "http://json-schema.org/draft-06/schema#",
"additionalItems": {
"anyOf": [
{
"type": "number"
},
{
"type": "string"
},
{
"type": "string"
}
]
},
"items": [
{
"type": "number"
},
{
"type": "string"
},
{
"type": "string"
}
],
"minItems": 3,
"type": "array"
}
```
The problem is that the `undefined` has been lost: `minItems` should be 2. What I expect is:
```
{
"$schema": "http://json-schema.org/draft-06/schema#",
"items": [
{
"type": "number"
},
{
"type": "string"
},
{
"type": "string"
}
],
"minItems": 2,
"maxItems": 3,
"type": "array"
}
```
Answers:
username_1: Same thing happens for me: When using type Array<SomeInterface | null>, typescript-json-schema ignores the null type in the "items" property.
username_2: same issue for me
username_3: +1 To this. I would like Array<string | null> to turn into `"items": {"type": ["string", "null"] }`
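Spelled out as a complete schema, that is:

```json
{
  "type": "array",
  "items": { "type": ["string", "null"] }
}
```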
username_4: Happy to review PRs for this bug.
username_5: Same issue here |
serde-rs/serde | 362903322 | Title: #[serde(skip)] in tuple enum variants silently ignored
Question:
username_0: ```rust
#[macro_use]
extern crate serde_derive; // 1.0.78
extern crate serde; // 1.0.78
struct S;
#[derive(Serialize)]
enum E {
V(#[serde(skip)] S),
}
```
```
error[E0277]: the trait bound `S: serde::Serialize` is not satisfied
--> src/lib.rs:7:10
|
7 | #[derive(Serialize)]
| ^^^^^^^^^ the trait `serde::Serialize` is not implemented for `S`
|
= note: required by `serde::Serializer::serialize_newtype_variant`
error: aborting due to previous error
```
[Playground](https://play.rust-lang.org/?gist=52468306cc60f55fd62d11eba4a1ca45&version=stable&mode=debug&edition=2015)
The attribute should either work (thus serializing like `enum E { V() }`) or give an error/warning that it has no meaning in this position.
Answers:
username_1: Just ran into this today. Changing the variant into
```rust
enum E {
V { #[serde(skip)] v: S }
}
```
makes it work as expected. I think it should just work instead of producing an error. This is likely a bug.
username_2: This also applies to tuple structs (e.g. `struct Ignore(#[serde(skip)] Foo);`).
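By analogy with the enum workaround above, switching the struct to a named field presumably sidesteps it too (an untested sketch):

```rust
#[derive(Serialize)]
struct Ignore {
    #[serde(skip)]
    foo: Foo,
}
```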
username_3: Might be related to why `#[serde(default)]` is also ignored: https://github.com/serde-rs/serde/issues/1418.
Status: Issue closed
|
deep-compute/funcserver | 210509055 | Title: funcserver client prints a msgpack exception when calling invalid function
Question:
username_0:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 705, in __call__
return self.parent._call(self.prefix, args, kwargs)
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 709, in _call
return self._do_single_call(fn, args, kwargs)
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 730, in _do_single_call
res = self.DESERIALIZER(req.content)
File "msgpack/_unpacker.pyx", line 146, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:2231)
msgpack.exceptions.UnpackValueError: Unpack failed: error = 0
```
The logs on the server are accurate. We need to transmit them to client.
```
2017-02-27T15:01:00.972946Z [error ] RPC failed _={'ln': 325, 'file': '/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py', 'name': 'funcserver.funcserver', 'fn': '_handle_single_call'} args=[] fn='xyz' kwargs={}
Traceback (most recent call last):
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 308, in _handle_single_call
fn = self._get_apifn(fn_name)
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 281, in _get_apifn
obj = getattr(obj, part)
AttributeError: 'MyAPI' object has no attribute 'xyz'
2017-02-27T15:01:00.974387Z [error ] RPC failed _={'ln': 429, 'file': '/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py', 'name': 'funcserver.funcserver', 'fn': '_handle_call_wrapper'} args=[] fn='xyz' kwargs={}
Traceback (most recent call last):
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 426, in _handle_call_wrapper
return self._handle_call(request, fn, m, protocol)
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 351, in _handle_call
r = self._handle_single_call(request, m)
File "/home/anto/github.com/username_0/funcserver/funcserver/funcserver.py", line 344, in _handle_single_call
fn=fn_name, args=args, kwargs=kwargs,
UnboundLocalError: local variable 'args' referenced before assignment
```
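The second trace also shows why the report is lost: the `except` block logs `args`/`kwargs` that are only bound later inside the `try`. A minimal sketch of the shape of a fix (names and the logging call are illustrative, not funcserver's exact code):

```python
def _handle_single_call(self, request, m):
    # Bind these before anything can raise, so the error logger
    # below can always reference them.
    fn_name = m.get("fn")
    args = m.get("args", [])
    kwargs = m.get("kwargs", {})
    try:
        fn = self._get_apifn(fn_name)
        return fn(*args, **kwargs)
    except Exception:
        self.log.exception("RPC failed", fn=fn_name, args=args, kwargs=kwargs)
        raise
```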
Answers:
username_1: Not maintaining funcserver, as `kwikapi` is now the preferred option. Closing this.
Status: Issue closed
|
wso2/product-is | 722923189 | Title: Missing instructions related to CORS configuration in 5.10.0 and 5.9.0
Question:
username_0: **Is your suggestion related to a missing or misleading document? Please describe.**
The following documentation has the configuration related to IS versions before IS 5.9.0. This needs to be updated after templating the CORS configurations in web.xml:
https://is.docs.wso2.com/en/latest/administer/invoking-an-endpoint-from-a-different-domain/#applying-cors-filter-to-another-web-application
Answers:
username_0: Related configuration issue: https://github.com/wso2/product-is/issues/9917
username_1: The configurations are now available in IS version 5.10.
The steps on how to add the configuration are available in: https://github.com/wso2-support/carbon-kernel/pull/1127
ryan-hoffman/R_projects | 733725993 | Title: simplified cocinaVocab app
Question:
username_0: Hi @ryan-hoffman,
I started out small in order to figure out the problem. This "small start" strategy may be familiar to you, but just in case, may I describe it? I observed that the existing code was a little complex, too hard for me to understand at first sight. I noticed that one observeEvent was nested inside another. Too hard to grasp that! So I commented out all the code except for the few lines needed to
- choose a new Spanish word at random whenever the Seleccionar button was clicked
- print that word to the console
Nothing else. I got that much to work, and I understood those few moving parts.
Next up, I wanted to
- save the new word in a reactive value ("put it in the safe deposit box" was my metaphor from last week)
- re-enable your one line of code which makes output$palabraOutput dependent on that reactive value
Reload the program, see that this works.
Then I moved on to the translate button, keeping it dead simple at first:
- in the observeEvent handler for that button, I just printed out "traducir clicked"
- nothing more. checked to make sure that the seleccionar button still worked
- only then, with that assurance, added a line to print to the console ```palabraRe()```
- found that, yes, the safety deposit box can be read
- added some protection: don't want to use that value if it is empty ("") or NULL
- do this with ```req(palabraRe())``` which functions as a guard: require a meaningful value, guard against further execution of this observeEvent handler
Next up, still being pessimistic (things will break given any chance), and still using print to the console:
- ```print(cocinaVoca[[palabraRe()]])```
- does this work? do I always get a legit translation
- finding that I could not make this fail, only then assign to the traducirRe safe deposit box
- and only then enable your one line which states that traducirOutput depends on traducirRe()
The golden rule of pessimistic programming has these elements
- get one tiny aspect of your project to work
- exercise it with all possible inputs
- when it works, you now stand on solid ground
- there are many swampy places you can step next: choose just one
- expect that that step will land you in confusion, in the mud up to your knees
- revise the step till you have it working: you now have slightly larger but equally tested solid ground
- repeat until done
- expect trouble and confusion at EVERY STEP!
hydroshare/hydroshare | 168329519 | Title: Raster zipped VRT upload issue
Question:
username_0: When attempting to create a Raster resource from a zipped folder containing a VRT files and its associated TIFs, I kept getting the following error:
'The .tif files provided are inconsistent (e.g. missing or extra) with the references in the .vrt file.'
I checked my VRT file and it seemed that everything was fine, although I noted that each SourceFilename had a filename path relative to the current directory (i.e. './filename'). To fix the problem, I deleted the './' in front of each SourceFilename in the VRT, and re-attempted to create the resource. This time it worked.
It seems to me that the raster_file_validation function in receivers.py should be modified to overlook the './' since that is how the VRT was automatically generated through gdalbuildvrt.
Here is the line that triggered the error:
https://github.com/hydroshare/hydroshare/blob/master/hs_geo_raster_resource/receivers.py#L88
Here is the line that would need to be modified:
https://github.com/hydroshare/hydroshare/blob/master/hs_geo_raster_resource/receivers.py#L86
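For illustration, the normalization could be as small as comparing `os.path.normpath`-ed names on both sides (a sketch; the variable names here are not the ones in receivers.py):

```python
import os

def normalize_ref(name):
    # "./foo.tif" and "foo.tif" normalize to the same string
    return os.path.normpath(name)

vrt_refs = {normalize_ref(p) for p in source_filenames_in_vrt}
tif_names = {normalize_ref(p) for p in uploaded_tif_names}
files_are_consistent = (vrt_refs == tif_names)
```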
I wouldn't mind making the change, however since you, @username_1, are already the primary developer of the Raster resource, would this be a fix that you could slip in to other work you are already doing?
Answers:
username_1: @username_0 Thanks a lot for reporting the bug. I will fix it.
username_1: @username_0 Would you please test the fix on playground.hydroshare.org. If you find any problem, please let me know. Otherwise, I will make a pull request. Thank you.
username_0: @username_1 It appears to be fixed. Good work.
Status: Issue closed
|
markallie/Pauls-Curve-Tracer | 579002349 | Title: Update on Progress
Question:
username_0: Hello Mark,
Following your project.
Any update on progress?
Got your parts lists already but eagerly await PCB details.
Will you issue these at some point or do you intend to have some made and sell them?
Best Regards
<NAME>.
<EMAIL>
Answers:
username_1: I have made 1 set of circuit boards for Paul’s curve tracer. I have not tested them yet. I will test them and publish my findings, I know the PCB’s will change a small amount. Not significantly. I will verify functionality. If I encounter any problems the new PCB will correct the trouble. Progress soon.
<NAME>
username_0: Hello Mark,
Thank you to take the time to update me on the progress.
I shall keep a watch on github for any new posts.
Best Regards
<NAME>.
The views, comments and suggestions contained within this mail are attributed solely to the sender of this mail and can not be inferred to any individual, company or organisation with whom he may be associated.
On Thursday, 12 March 2020, 03:47:46 GMT+7, username_1 <<EMAIL>> wrote:
I have made 1 set of circuit boards for Paul’s curve tracer. I have not tested them yet. I will test them and publish my findings, I know the PCB’s will change a small amount. Not significantly. I will verify functionality. If I encounter any problems the new PCB will correct the trouble. Progress soon.
<NAME>
username_2: Have you gotten a chance to test the boards? Any idea when PCB files will be avalible?
Thanks. |
NitramLegov/Musicbox | 327009238 | Title: Bluetooth and mopidy don´t work in parallel
Question:
username_0: Currently, Mopidy only works if the bt_speaker service is turned off, since both require exclusive access to the sound device.
Possible solutions:
- Use PulseAudio --> the BT command needs to be changed
- Use JACK --> the BT command needs to be changed
- Use the ALSA dmix plugin (https://wiki.debianforum.de/Erweiterte_ALSA-Funktionen)
Answers:
username_1: Regarding dmix, have a look at
http://www.brain-dump.org/blog/entry/45/Sound_mixing_with_the_ALSA_Dmix_plugin_instead_of_a_soundserver
username_0: It does work if I make the following changes:
1. Create file /etc/asound.conf with the following content:
```
### Everything on a line after the # character is a comment and is ignored by ALSA.
# Define the dmix plugin.
pcm.dmixer {
    type dmix
    ipc_key 1024
    ipc_perm 0666  # other users may use dmix at the same time
    slave.pcm "hw:1,0"
    slave {
        ### buffer_size can be adjusted if the card in question has problems.
        period_time 0
        period_size 1024
        buffer_size 4096
        ### if there are glitches, conversion to a 44100 rate can be enabled.
        # rate 44100
        ### some sound cards need the exact data format (e.g. ice1712)
        # format S32_LE
        ### Available formats: S8 U8 S16_LE S16_BE U16_LE U16_BE S24_LE S24_BE U24_LE U24_BE
        ### S32_LE S32_BE U32_LE U32_BE FLOAT_LE FLOAT_BE FLOAT64_LE FLOAT64_BE
        ### IEC958_SUBFRAME_LE IEC958_SUBFRAME_BE MU_LAW A_LAW IMA_ADPCM MPEG GSM
        ### the number of channels must match the bindings
        channels 2
    }
    bindings {
        0 0
        1 1
    }
}
# The dsnoop plugin, which allows several programs to record at the same time.
pcm.dsnooper {
    type dsnoop
    ipc_key 2048
    ipc_perm 0666
    slave.pcm "hw:1,0"
    slave {
        period_time 0
        period_size 1024
        buffer_size 4096
        # if there are glitches, conversion to a 44100 rate can be enabled.
        # rate 44100
        # some sound cards need the exact data format (e.g. ice1712)
        # format S32_LE
        ### the number of channels must match the bindings
        channels 2
    }
    bindings {
        0 0
        1 1
    }
}
# This defines our full-duplex plugin as the default for all ALSA programs.
pcm.duplex {
    type asym
    playback.pcm "dmixer"
    capture.pcm "dsnooper"
}
pcm.!default {
    type plug
    slave.pcm "duplex"
}
```
2. In /etc/mopidy/mopidy.conf, change the `output` setting in the audio section:
```
output = alsasink device=default
```
3. In /etc/bt_speaker/config.ini, change the `play_command` setting in the bt_speaker section:
```
play_command = aplay -D default -N -f cd -
```
I will reflect this in the installation procedure accordingly. |
codeforbtv/cvoeo-app | 491672720 | Title: Fix registration message
Question:
username_0: To test, log out of app and click the 'Register' link and enter your email address.
<img src="https://user-images.githubusercontent.com/1809882/64617325-43fcfe00-d3ac-11e9-8290-fc51014143a0.png" width=250/> |
Automattic/vip-cli | 196697524 | Title: Cannot sandbox sites
Question:
username_0: When I try to sandbox a site I get an error:
```
$ vip sandbox start 68
Warning: There are more than 5 total sandbox containers on this host. Consider deleting some unused ones with `vip sandbox delete <site>`
Waiting for sandbox to start...
{ [Error: ENOENT: no such file or directory, open '/home/vipdev/.vip-cli/sbox']
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/home/vipdev/.vip-cli/sbox' }
```
I can see the sandbox running when I run `vip sandbox list`.
Answers:
username_1: Dupe of #95. Potential fix in #96.
Status: Issue closed
|
Adyen/adyen-ios | 668900317 | Title: [BUG] adyen-ios fails to build via Carthage on Xcode 12 beta 3
Question:
username_0: **Describe the bug**
When trying to add adyen-ios as a dependency to a project using `Xcode 12 beta 3` via `Carthage`, the compilation fails with the following error:
```bash
⇒ carthage bootstrap --platform ios --use-ssh
*** No Cartfile.resolved found, updating dependencies
*** Fetching adyen-ios
*** Fetching adyen-3ds2-ios
*** Checking out adyen-3ds2-ios at "2.1.0-rc.5"
*** Checking out adyen-ios at "3.6.0"
*** xcodebuild output can be found in /<KEY>carthage-xcodebuild.VDLNeQ.log
*** Downloading adyen-3ds2-ios.framework binary at "2.1.0-rc.5"
*** Building scheme "Adyen" in Adyen.xcodeproj
Build Failed
Task failed with exit code 65:
/usr/bin/xcrun xcodebuild -project /Users/einternicola/Code/watches/CarthageAydenExample/Carthage/Checkouts/adyen-ios/Adyen.xcodeproj -scheme Adyen -configuration Release -derivedDataPath /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8169g/adyen-ios/3.6.0 -sdk iphoneos ONLY_ACTIVE_ARCH=NO CODE_SIGNING_REQUIRED=NO CODE_SIGN_IDENTITY= CARTHAGE=YES archive -archivePath /<KEY>/adyen-ios SKIP_INSTALL=YES GCC_INSTRUMENT_PROGRAM_FLOW_ARCS=NO CLANG_ENABLE_CODE_COVERAGE=NO STRIP_INSTALLED_PRODUCT=NO (launched in /Users/einternicola/Code/watches/CarthageAydenExample/Carthage/Checkouts/adyen-ios)
This usually indicates that project itself failed to compile. Please check the xcodebuild log for more details: /var/folders/<KEY>T/carthage-xcodebuild.VDLNeQ.log
```
**To Reproduce**
Steps to reproduce the behavior:
1. Download and install Xcode12 beta 3
2. Use xcode select to set this as your xcode: `xcode-select --switch /Applications/Xcode-beta.app`
3. Create a new Xcode project
4. Create a Cartfile with the following contents: `github "Adyen/adyen-ios" == 3.6.0`
5. Execute the following command: `carthage bootstrap --platform ios --use-ssh`
6. Observe the failure (similar to what I noted above)
**Expected behavior**
I expect Carthage to build adyen-ios successfully.
**Screenshots**

**Smartphone (please complete the following information):**
- Device: MacBook Pro (16-inch, 2019)
- OS: macOS Catalina 10.15.5 (19F101)
- Xcode: 12.0 beta 3 (12A8169g)
- SDK Version 3.6.0
**Additional context**
N/A
Answers:
username_1: Hi @username_0
Thanks for the feedback!
Could you also print out `xcodebuild` log?
username_0: Full log file:
[carthage-xcodebuild.VDLNeQ.log](https://github.com/Adyen/adyen-ios/files/5002419/carthage-xcodebuild.VDLNeQ.log)
username_0: In doing some more research, this might not be an Adyen-specific issue: https://github.com/Carthage/Carthage/issues/3019
username_1: Could you verify if it works for you?
username_0: Yes, definitely. I'm downloading beta 2 now.
username_0: I get the same error with Xcode 12 beta 2:
```
Undefined symbols for architecture armv7:
"type metadata for Swift._StringObject.Variant", referenced from:
outlined init with take of Swift._StringObject.Variant in IssuerListComponent.o
ld: symbol(s) not found for architecture armv7
```
See the attached log file:
[carthage-xcodebuild.NqG6pg.log](https://github.com/Adyen/adyen-ios/files/5017168/carthage-xcodebuild.NqG6pg.log)
username_0: This might also be of some help: https://developer.apple.com/forums/thread/649918
username_1: Not all heroes wear capes! 🦸♂️
Thanks a lot @username_0
From what I can see, this is a temporary (I hope so) issue that should be fixed with a stable/next Xcode build.
The issue is caused by using armv7 functionality on the new arm64 architectures (it looks like `String` is the main suspect).
It could be fixed in two ways:
1) by stripping out the armv7 architecture or rebuilding the .framework with target version iOS 13. [source1](https://github.com/Carthage/Carthage/issues/3019#issuecomment-665136323) [source2](https://developer.apple.com/forums/thread/649918?answerId=615750022#615750022) [source3](https://developer.apple.com/forums/thread/649918?answerId=624371022#624371022)
2) by dirty magic with Strings [source1](https://github.com/airbnb/lottie-ios/pull/1215/files) [source2](https://developer.apple.com/forums/thread/649918?answerId=614633022#614633022)
Unfortunately, we can't promise to fix it on our end quickly enough (not before it is done by Carthage, at least :D).
I'll keep my eye on Carthage.
username_0: @username_1 thanks for looking into it. I'm hopeful that beta 4 drops soon with a fix. 🤞
username_0: Update - different error on Xcode 12 beta 4:
```
Ld /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/Binary/AdyenCard normal arm64 (in target 'AdyenCard' from project 'Adyen')
cd /Users/einternicola/Code/watches/CarthageAydenExample/Carthage/Checkouts/adyen-ios
/Applications/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -target arm64-apple-ios10.0-simulator -dynamiclib -isysroot /Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator14.0.sdk -L/Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Products/Release-iphonesimulator -F/Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Products/Release-iphonesimulator -FCarthage/Build/iOS -filelist /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard.LinkFileList -install_name @rpath/AdyenCard.framework/AdyenCard -Xlinker -rpath -Xlinker /usr/lib/swift -Xlinker -rpath -Xlinker @executable_path/Frameworks -Xlinker -rpath -Xlinker @loader_path/Frameworks -dead_strip -Xlinker -object_path_lto -Xlinker /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard_lto.o -Xlinker -objc_abi_version -Xlinker 2 -fobjc-arc -fobjc-link-runtime -L/Applications/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/iphonesimulator -L/usr/lib/swift -Xlinker -add_ast_path -Xlinker /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard.swiftmodule -framework Adyen3DS2 -framework Adyen -compatibility_version 1 -current_version 3.6.0 -Xlinker -dependency_info -Xlinker /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard_dependency_info.dat -o /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8179i/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/Binary/AdyenCard
ld: building for iOS Simulator, but linking in dylib built for iOS, file 'Carthage/Build/iOS/Adyen3DS2.framework/Adyen3DS2' for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
See attached log file:
[carthage-xcodebuild.VpZWEA.log](https://github.com/Adyen/adyen-ios/files/5024769/carthage-xcodebuild.VpZWEA.log)
username_2: Issue persists on Xcode 12 beta 6
username_3: Have the same issue here. The problem seems to be related to `adyen/adyen-3ds2-ios` framework, any plans on uploading the update binary?
username_4: Confirming that this issue still exists on Xcode 12 beta 6. I am using [this Carthage workaround](https://github.com/Carthage/Carthage/issues/3019#issuecomment-692291631) and I get this output when running `carthage update` in a new project:
```
Build Failed
Task failed with exit code 1:
/usr/bin/xcrun lipo -create /Users/rachel.hyman/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8189n/adyen-ios/3.6.0/Build/Intermediates.noindex/ArchiveIntermediates/Adyen/IntermediateBuildFilesPath/UninstalledProducts/iphoneos/Adyen.framework/Adyen /Users/rachel.hyman/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A8189n/adyen-ios/3.6.0/Build/Products/Release-iphonesimulator/Adyen.framework/Adyen -output /Users/rachel.hyman/Desktop/AdyenTest/Carthage/Build/iOS/Adyen.framework/Adyen
This usually indicates that project itself failed to compile. Please check the xcodebuild log for more details: /<KEY>carthage-xcodebuild.O3s0Nd.log
```
However, the log does indicate that the build succeeded.
Any updates on a fix or workaround for this issue?
username_1: This is peculiar...
username_0: I have just tried building again using the Xcode 12 GM and it is still failing. It provides even less information in the log though:
Command: `carthage update --platform ios`
Result:
```
*** Fetching adyen-ios
*** Fetching adyen-3ds2-ios
*** Checking out adyen-ios at "3.6.0"
*** Checking out adyen-3ds2-ios at "2.2.0"
*** xcodebuild output can be found in /<KEY>carthage-xcodebuild.HiwYek.log
*** Downloading adyen-3ds2-ios.framework binary at "2.2.0"
*** Building scheme "AdyenCard" in Adyen.xcodeproj
Build Failed
Task failed with exit code 65:
/usr/bin/xcrun xcodebuild -project /Users/einternicola/Code/watches/CarthageXcode12Test/Carthage/Checkouts/adyen-ios/Adyen.xcodeproj -scheme AdyenCard -configuration Release -derivedDataPath /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0 -sdk iphonesimulator -destination platform=iOS\ Simulator,id=E1C7E02B-05ED-433C-94D3-C02045DF8640 -destination-timeout 3 ONLY_ACTIVE_ARCH=NO CODE_SIGNING_REQUIRED=NO CODE_SIGN_IDENTITY= CARTHAGE=YES build (launched in /Users/einternicola/Code/watches/CarthageXcode12Test/Carthage/Checkouts/adyen-ios)
```
Log (excerpt):
```
Ld /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/Binary/AdyenCard normal arm64 (in target 'AdyenCard' from project 'Adyen')
cd /Users/einternicola/Code/watches/CarthageXcode12Test/Carthage/Checkouts/adyen-ios
/Applications/Xcode-12-GM.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -target arm64-apple-ios10.0-simulator -dynamiclib -isysroot /Applications/Xcode-12-GM.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator14.0.sdk -L/Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Products/Release-iphonesimulator -F/Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Products/Release-iphonesimulator -FCarthage/Build/iOS -filelist /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard.LinkFileList -install_name @rpath/AdyenCard.framework/AdyenCard -Xlinker -rpath -Xlinker /usr/lib/swift -Xlinker -rpath -Xlinker @executable_path/Frameworks -Xlinker -rpath -Xlinker @loader_path/Frameworks -dead_strip -Xlinker -object_path_lto -Xlinker /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard_lto.o -Xlinker -objc_abi_version -Xlinker 2 -fobjc-arc -fobjc-link-runtime -L/Applications/Xcode-12-GM.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/iphonesimulator -L/usr/lib/swift -Xlinker -add_ast_path -Xlinker /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard.swiftmodule -framework Adyen3DS2 -framework Adyen -compatibility_version 1 -current_version 3.6.0 -Xlinker -dependency_info -Xlinker /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/AdyenCard_dependency_info.dat -o /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/Binary/AdyenCard
ld: building for iOS Simulator, but linking in dylib built for iOS, file 'Carthage/Build/iOS/Adyen3DS2.framework/Adyen3DS2' for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
** BUILD FAILED **
The following build commands failed:
Ld /Users/einternicola/Library/Caches/org.carthage.CarthageKit/DerivedData/12.0_12A7208/adyen-ios/3.6.0/Build/Intermediates.noindex/Adyen.build/Release-iphonesimulator/AdyenCard.build/Objects-normal/arm64/Binary/AdyenCard normal arm64
(1 failure)
```
username_5: The same error occurs for me when using CocoaPods.
It only happens when building for iOS Simulator but it is possible to fix this temporarily by adding `arm64` to the `Excluded Architectures` in Adyen targets. However this results in being unable to build for a device.
username_1: Hi @username_5
What version of CocoaPods are you using?
username_5: Hi @username_1
I'm using `1.9.3` but I've also tried with `1.10.0.rc.1`
username_5: Hey @username_1
I think that the issue with Cocoapods could be fixed by excluding `arm64` for simulator builds in `.podspec` file.
It was done with Intercom as you can see here: https://github.com/intercom/intercom-ios/pull/385
username_6: I'm also getting an error on Xcode 12, I tried with version 3.7.0 using cocoapods and get the following `ld: building for iOS Simulator, but linking in dylib built for iOS, file '/.../Pods/Adyen3DS2/Dynamic/Adyen3DS2.framework/Adyen3DS2' for architecture arm64`
username_5: @username_6, I've discovered recently that if you set `ONLY_ACTIVE_ARCH` parameter to `YES` in your target and pods then it compiles and archives without this error.
You can add this to your Podfile and see if it helps:
```
post_install do |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings["ONLY_ACTIVE_ARCH"] = "YES"
end
end
end
```
username_6: @username_5 thank you! this works like a charm 😃
username_7: @username_1 here is a Cocoapods thread on the podspec workaround to exclude arm64: https://github.com/CocoaPods/CocoaPods/issues/10065
Adyen will need to add this to both Adyen3DS2 and Adyen podspecs, since Adyen depends on Adyen3DS2. But as noted in that thread and the Intercom example posted above, this is a temporary workaround. As soon as people start using Apple Silicon Macs for development, this is going to cause big problems. Migrating to XCFramework or fixing the binary is the real solution.
The issue I am facing is that I develop a private Cocoapod for my employer, and our pod depends on Adyen. We are unable to publish our pod to our private Cocoapods spec repo because `pod spec lint` fails due to this architecture issue. The workaround we have found is to privately host a copy of the Adyen and Adyen3DS2 podspecs with the workaround added:
```
# workaround for binary dependencies, see https://github.com/CocoaPods/CocoaPods/issues/10065
s.pod_target_xcconfig = { 'EXCLUDED_ARCHS[sdk=iphonesimulator*]' => 'arm64' }
s.user_target_xcconfig = { 'EXCLUDED_ARCHS[sdk=iphonesimulator*]' => 'arm64' }
```
and we have to add this workaround to our private podspec as well.
username_1: 3.8.0 is released!
This issue should be fixed now!
username_1: Thanks everyone for feedback and support!
We have added a note in [README](https://github.com/Adyen/adyen-ios#carthage).
I am going to close it for now.
Feel free to provide any additional feedback in this thread or create a new one if necessary.
Status: Issue closed
username_8: @username_1 Hi, I am having an issue running unit tests while using the "Adyen" pod; the issue I am facing is this:
<img width="480" alt="Screenshot 2021-03-03 at 6 19 14 PM" src="https://user-images.githubusercontent.com/73534982/109811696-fd106180-7c4c-11eb-825c-fb36727b8a6b.png">
The build is successful, but when the unit tests start running, they fail. |
intel/pmem-csi | 712573739 | Title: operator: stop creating CRD
Question:
username_0: The approach taken by OLM (= aka installation from Operator Hub) is to create the CRD as part of the installation, i.e. the operator doesn't need to create it and also shouldn't delete it. This is more predictable.
We should do the same when installing the operator via YAML and remove the extra code for managing the CRD.
Answers:
username_0: In the "update dependencies" PR (= PR #764 ) I just got a strange error that I cannot reproduce locally:
https://cloudnative-k8sci.southcentralus.cloudapp.azure.com/blue/rest/organizations/jenkins/pipelines/pmem-csi/branches/PR-764/runs/1/nodes/90/steps/142/log/?start=0
```
create deployment error: the server could not find the requested resource, will retry...
```
Tests like "operator-lvm-production driver runs" fail because the CRD is not found?
Status: Issue closed
|
zculp1292/Project-Boost | 785282409 | Title: Follow Camera
Question:
username_0: Work to implement a Follow-Camera that can follow the Rocket through the level and can allow the creation of vertical levels.
Answers:
username_0: Follow Camera prototype created. Tested in Sandbox Level.
Functions as intended, however further refinements to positioning and level design are needed before incorporating into main game. |
MaksymFilypchuk/homepage | 785082142 | Title: Create patch/content/primary branch and add primary content (e.g. avatar, name, job, contacts, intro, etc.)
Question:
username_0: ```
<div class="sixteen wide column">
<h1 class="name"><NAME> (リュウ Ryū)</h1>
<div class="job">Fighter into series games "Street Fighter"</div>
<ul class="contacts">
<li>
<a href="https://www.facebook.com/profile.php?id=100004808958030" rel="author" class="facebook">
<svg class="icon" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 448 512">
<path
d="M448 56.7v398.5c0 13.7-11.1 24.7-24.7 24.7H309.1V306.5h58.2l8.7-67.6h-67v-43.2c0-19.6 5.4-32.9 33.5-32.9h35.8v-60.5c-6.2-.8-27.4-2.7-52.2-2.7-51.6 0-87 31.5-87 89.4v49.9h-58.4v67.6h58.4V480H24.7C11.1 480 0 468.9 0 455.3V56.7C0 43.1 11.1 32 24.7 32h398.5c13.7 0 24.8 11.1 24.8 24.7z" />
</svg>
<span>Ryu Hoshi</span>
</a>
</li>
<li>
<a href="<EMAIL>" rel="author" class="mail">
<svg class="icon" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512">
<path
d="M502.3 190.8c3.9-3.1 9.7-.2 9.7 4.7V400c0 26.5-21.5 48-48 48H48c-26.5 0-48-21.5-48-48V195.6c0-5 5.7-7.8 9.7-4.7 22.4 17.4 52.1 39.5 154.1 113.6 21.1 15.4 56.7 47.8 92.2 47.6 35.7.3 72-32.8 92.3-47.6 102-74.1 131.6-96.3 154-113.7zM256 320c23.2.4 56.6-29.2 73.4-41.4 132.7-96.3 142.8-104.7 173.4-128.7 5.8-4.5 9.2-11.5 9.2-18.9v-19c0-26.5-21.5-48-48-48H48C21.5 64 0 85.5 0 112v19c0 7.4 3.4 14.3 9.2 18.9 30.6 23.9 40.7 32.4 173.4 128.7 16.8 12.2 50.2 41.8 73.4 41.4z">
</path>
</svg>
<span><EMAIL></span>
</a>
</li>
<li>
<a href="https://twitter.com/denjinryu" rel="author" class="twitter">
<svg class="icon" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 496 512">
<path
d="M459.37 151.716c.325 4.548.325 9.097.325 13.645 0 138.72-105.583 298.558-298.558 298.558-59.452 0-114.68-17.219-161.137-47.106 8.447.974 16.568 1.299 25.34 1.299 49.055 0 94.213-16.568 130.274-44.832-46.132-.975-84.792-31.188-98.112-72.772 6.498.974 12.995 1.624 19.818 1.624 9.421 0 18.843-1.3 27.614-3.573-48.081-9.747-84.143-51.98-84.143-102.985v-1.299c13.969 7.797 30.214 12.67 47.431 13.319-28.264-18.843-46.781-51.005-46.781-87.391 0-19.492 5.197-37.36 14.294-52.954 51.655 63.675 129.3 105.258 216.365 109.807-1.624-7.797-2.599-15.918-2.599-24.04 0-57.828 46.782-104.934 104.934-104.934 30.213 0 57.502 12.67 76.67 33.137 23.715-4.548 46.456-13.32 66.599-25.34-7.798 24.366-24.366 44.833-46.132 57.827 21.117-2.273 41.584-8.122 60.426-16.243-14.292 20.791-32.161 39.308-52.628 54.253z" />
</svg>
<span><NAME>(@DenjinRyu)</span>
</a>
</li>
<li>
<a href="https://www.instagram.com/sworn_ryu/" rel="author" class="instagram">
<svg class="icon" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 448 512">
<path fill="currentColor"
d="M224.1 141c-63.6 0-114.9 51.3-114.9 114.9s51.3 114.9 114.9 114.9S339 319.5 339 255.9 287.7 141 224.1 141zm0 189.6c-41.1 0-74.7-33.5-74.7-74.7s33.5-74.7 74.7-74.7 74.7 33.5 74.7 74.7-33.6 74.7-74.7 74.7zm146.4-194.3c0 14.9-12 26.8-26.8 26.8-14.9 0-26.8-12-26.8-26.8s12-26.8 26.8-26.8 26.8 12 26.8 26.8zm76.1 27.2c-1.7-35.9-9.9-67.7-36.2-93.9-26.2-26.2-58-34.4-93.9-36.2-37-2.1-147.9-2.1-184.9 0-35.8 1.7-67.6 9.9-93.9 36.1s-34.4 58-36.2 93.9c-2.1 37-2.1 147.9 0 184.9 1.7 35.9 9.9 67.7 36.2 93.9s58 34.4 93.9 36.2c37 2.1 147.9 2.1 184.9 0 35.9-1.7 67.7-9.9 93.9-36.2 26.2-26.2 34.4-58 36.2-93.9 2.1-37 2.1-147.8 0-184.8zM398.8 388c-7.8 19.6-22.9 34.7-42.6 42.6-29.5 11.7-99.5 9-132.1 9s-102.7 2.6-132.1-9c-19.6-7.8-34.7-22.9-42.6-42.6-11.7-29.5-9-99.5-9-132.1s-2.6-102.7 9-132.1c7.8-19.6 22.9-34.7 42.6-42.6 29.5-11.7 99.5-9 132.1-9s102.7-2.6 132.1 9c19.6 7.8 34.7 22.9 42.6 42.6 11.7 29.5 9 99.5 9 132.1s2.7 102.7-9 132.1z" />
</svg>
<span>Ryu Hoshi [リュホシ]</span>
</a>
</li>
</ul>
</div>
<div class="sixteen wide mobile only column">
<div class="divider"></div>
</div>
<div class="sixteen wide column">
<h2>Résumé</h2>
<p>
— I do not compete with others, the only competition that challenges me is myself. As a
fighter, friend and an employee I am known as contemplative, tortured and driven.
</p>
</div>
</div>
</div>
</div>
<div class="row">
<div class="sixteen wide column">
<div class="fat divider"></div>
[Truncated]
<img class="mini-img" alt="heart" src="https://github.githubassets.com/images/icons/emoji/unicode/2764.png">
</li>
<li>
#2 in "Top 20 Street Fighter Characters of All Time" by GameDaily, same council found me worth of #6
place in "Top 25 Capcom Characters of All Time", which I proudly share with my partner Ken
</li>
<li>
I was named "5th Most Powerful Street Fighter Character" by Screen Rant-san
</li>
<li>
#71 in "Top 100 Heroes of All Time" by UGO Networks, also recognized #2 in their list of "Top 50
Street Fighter Characters"
</li>
<li>
In a survey of 4000 online matches for Super Street Fighter IV, I got to be the most popular
character, with 16.6% of players choosing my side
</li>
</ul>
</div>
```
Status: Issue closed |
aarons22/homebridge-bond | 1046847413 | Title: Bond Online, IP Ping is good, Bond Plugin Says IP not found
Question:
username_0: **Describe the bug**
The Bond is online. The Bond works through the Bond app to control devices. In Homebridge, the log says the IP address is not found. However, the IP address is good and responds to a ping. I found the IP via the Bond app.
**Information (please complete the following information):**
- everything is on the latest. That is the first thing I checked.
**Logs**
[11/7/2021, 3:50:47 PM] [Bond] Unable to find Bond for IP Address: 192.168.1.22. Skipping this Bond.
[11/7/2021, 3:50:47 PM] [Bond] No valid Bonds available.
The bond is definitely available and working, just not in the plugin.
Answers:
username_1: I ran into this issue as well. Renaming the device from "Fan" to "Dining Room Fan" seemingly resolved the problem. I restarted Homebridge and the device showed up again.
username_2: Same issue for me as the original poster. I am running as the homebridge user and using the Homebridge UI. I am seeing this same issue on a fresh install strictly through the UI.
Previously I installed without the UI and had success reaching Bond when I launched Homebridge as the pi user.
username_3: I am getting the same issue.
[1/26/2022, 1:38:51 PM] [Bond] Unable to find Bond for IP Address: 192.168.x.xx. Skipping this Bond.
[1/26/2022, 1:38:51 PM] [Bond] No valid Bonds available.
However, it works in the app.
username_2: I had some success by downgrading a few items.
I was on a Raspberry Pi Zero, which isn't fully supported in Node v16, so I downgraded to v14.18.3.
I'm also currently running homebridge-bond v3.2.6 (which I downgraded prior to dropping Node to v14). |
mrdoob/three.js | 203255212 | Title: EXR Loader
Question:
username_0:
##### Description of the problem
Feature request - support for reading in basic EXR files
Any update on supporting EXR file format in the official three.js?
exr/hdr loader #6274 touched on this but drifted off the rgbe, etc.
It would certainly be worthwhile since EXR is more than simply another HDR format. Floating point data, multiple channels, etc.
File size is very dependent on content, and it's better to be thinking ahead than trying to chase it later.
Answers:
username_1: Sounds good to me! Hopefully someone gives it a go.
username_2: EXR is a complex format which will require a lot of JavaScript to load.
I'd recommend against loading it directly. Convert it into an RGBM16/RGBE PNG or something like that.
username_0: So far I've had to write my own code to convert to 16-bit and then split that into two 8-bit PNG files, since all (most?) web readers for things like gl-react can only handle 8 bits. This then requires everything related to this in the code to deal with two files, not just one, and all shaders have to be written to handle this band-aid approach of combining the two textures to recreate a floating-point value in the correct range.
So now for an EXR I've had to strip it down to 16 bits and then create two files for every one. And most PNG readers apply gamma and other manipulations, so it's an incredible challenge to try to get the data through the pipeline correctly.
Suggestion:
Build a restricted EXR reader to start. 4 channels, simplified compression, rather than include every compression and format variation.
At the very least provide a consistent 16bit image reader.
username_2: @username_0 pleased to chat with you. My background is VFX as well, but mostly as a software provider to VFX firms...
There is quite a bit of flexibility here. The first is that the format of textures in Three.JS can be one byte per channel, fp16 per channel or fp32 per channel. The renderer doesn't change at all. Thus you just need to allocate your texture as FP16 or FP32 as appropriate and then just set it as the material map. This is a pre-decoded texture workflow.
Three.JS also supports inline decoding of HDR textures in various formats. I contributed that in this PR -
https://github.com/username_1/three.js/pull/8117 For select texture slots in materials, particularly map, emissiveMap and envMap (for IBL), you can specify the texture encoding as RGBE, RGBM16, RGBM7, LogLUV, etc. This allows one to use a very small size texture, RGBA8, but get dynamic range out of it.
We use RGBM16 for https://Clara.io 's stuff because it can be encoded in a PNG (which has native decoder support) - and we prefer RGBM16 because it's linear nature doesn't cause artifacts when using HW-based texture interpolation of the pre-decoded values:
https://clara.io/player/v2/9d11969e-eef5-483f-be63-b702a66bd7bf
https://clara.io/player/v2/cf2e9e9a-d643-4e77-a981-92f1ca1848e6
Be warned that fp16 and fp32 have various limitations across different devices -- they are extensions in WebGL 1 and thus support is inconsistent. Fp16 and FP32 will perform slower than RGBA8 textures because of increased GPU memory bandwidth requirements. But fp16 and fp32 textures that are pre-decoded do not have any artifacts introduced by HW texture interpolation.
My recommendation would be to use RGBM16 in pngs, unless there is a concrete reason not to.
username_2: BTW, because Three.JS just seamlessly supports FP16/FP32 textures, just add your own EXR loader and write to FP16/FP32 textures and it will work. Have a look at the HDRCubeTextureLoader for an example of how to load FP16/FP32 into Three.JS: https://github.com/username_1/three.js/blob/dev/examples/js/loaders/HDRCubeTextureLoader.js
To understand how PNG is handled in the browser, refer to this guide: http://jonathannicol.com/blog/2006/12/01/fixing-png-gamma/ I would love to further improve the PNG handling in Three.JS in the context of HDR representations.
The only downside to PNGs that I've found is that IE for some reason premultiplies A into RGB and then unpremultiplies it, thus reducing the fidelity of the resulting RGBA when A is low. I've complained to the IE team about this a few years ago but it hasn't been fixed. Our solution when encountering IE, is to load the HDR equivalent of the RGBM16 PNG.
username_0: Thanks for the additional info. I am using a special UV map for displacement (STMap) and other purposes.
As someone getting up to speed on all of this:
What tools are you using to read/write RGBM16? (i.e. Photoshop, etc)
What are the routines in three.js to read in this file format?
(Are these in three.js or just clara.io?)
If I'm dealing directly with WebGL or something like gl-react, can I use these same libraries, or is this strictly a capability of three.js/clara.io?
username_2: All the tools for converting between Linear/sRGB/RGBM/RGBD/RGBE/LogLUV, etc. are in Three.JS. On textures you can set the encoding that it will use when reading that texture. On render targets you can set the encoding it will use when writing to that render target. To do a conversion between any encoding just load the source into a texture with its current encoding set properly, and then render it to a render target with the target encoding set. Then read the render target back and save as a bitmap and you've converted your stuff.
I do not have a separate JavaScript-based workflow or command line tools at this time. But if you were to write one you can port across the glsl encoders/decoders pretty easily - I linked to them above: https://github.com/username_1/three.js/blob/dev/src/renderers/shaders/ShaderChunk/encodings_pars_fragment.glsl
So if you can load an EXR into a FP32 texture you can convert it to RGBM16 in the above way and then save it as a PNG. I should say that RGBM16 has a specific range limitation of 0.0 - 16.0, RGBM7 has a range limitation of 0.0 - 7.0 (hence the names.) It isn't hard to modify Three.JS to use an RGBMX format where X is the range you want.
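A rough sketch of that round trip, using the three-argument `render` signature of the time (it assumes `renderer`, a fullscreen-quad `scene`/`camera`, and a `quad` with a basic material already exist; `quad` and the file names are illustrative):

```js
var src = new THREE.TextureLoader().load('input_rgbe.png');
src.encoding = THREE.RGBEEncoding;              // how the source pixels are stored

var target = new THREE.WebGLRenderTarget(width, height);
target.texture.encoding = THREE.RGBM16Encoding; // encoding to write out

quad.material.map = src;
renderer.render(scene, camera, target);         // decode -> linear -> re-encode

var pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(target, 0, 0, width, height, pixels);
// `pixels` now holds RGBM16 data ready to be saved as a PNG
```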
username_2: I should mention that Three.JS is designed to work in linear space -- like any sane renderer. Thus the workflow is to decode the texture from whatever format it is in to linear, do the rendering, and then encode into the desired format. Thus to change encodings, convert from source to linear, then linear to your target format.
username_2: I am using the *.png file format for storing RGBM16, thus the encoding is just a convention on that file format. This Three.JS example shows HDR, RGBM16 and LDR IBL maps working: https://threejs.org/examples/?q=hdr#webgl_materials_envmaps_hdr If you look at the source you can see it loading PNGs for the RGBM16 data.
username_3: @username_1 I've written an EXRLoader (only supports uncompressed EXRs right now).
Is there a style guide doc I can read, to get things up to par, before I make an official PR?
https://github.com/username_1/three.js/compare/dev...username_3:dev
Status: Issue closed
username_4: Closing. [EXRLoader](https://github.com/username_1/three.js/blob/master/examples/js/loaders/EXRLoader.js) is available since `R90` (thanks to @username_3).
username_2: Thanks @username_3!!
username_5: > What tools are you using to read/write RGBM16? (i.e. Photoshop, etc)
> What are the routines in three.js to read in this file format?
> (Are these in three.js or just clara.io?)
I just came across an issue when using hdr/exr on Samsung mobiles because they lack OES_texture_float. A bit late to the party, but here are the tools I came across to convert hdr to png. Hope this helps someone.
https://github.com/plepers/hdr2png
Need to handle the decoding in your shader of course. Here is a great example of that:
http://pierrelepers.com/lab/jthree/
pfnet-research/go-menoh | 339659162 | Title: Copy twice, 2nd. copy can be reduced
Question:
username_0: A runner (the VGG16 example) sets up and attaches input images with the following logic:
1. load image file
1. crop and resize
1. convert to float array (copy)
1. copy the array to attached buffer
There are two copying processes, which should be reduced to improve performance. By merging steps 3 and 4, the runner can avoid one copy, but the current API makes this hard to implement.
Just an idea: the runner could provide an input updater like `io.Writer` or `io.Reader`, so users can copy input data into the buffer with an ad hoc function.
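A sketch of what that could look like (the interface and names here are hypothetical, not the actual go-menoh API):
```go
package menoh

// InputWriter is a hypothetical view over the attached Menoh buffer that
// lets callers write values in place instead of building a temporary slice.
type InputWriter interface {
	// SetFloat32 writes a single value directly into the attached buffer.
	SetFloat32(index int, v float32)
}

// fillFromImage converts pixel bytes straight into the buffer, merging the
// "convert to float array" and "copy to attached buffer" steps into one pass.
func fillFromImage(w InputWriter, pixels []uint8) {
	for i, p := range pixels {
		w.SetFloat32(i, float32(p)/255.0)
	}
}
```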
Status: Issue closed |
weibocom/motan | 154187569 | Title: loadbalance LocalFirstLoadBalance
Question:
username_0: 1. `LocalFirstLoadBalance.searchLocalReferer`
```java
long local = ipToLong(localhost);
```
Isn't it a bit wasteful to have this inside the loop?
2. `LocalFirstLoadBalance.doSelectToHolder`
```java
if (localReferers.isEmpty()) {
    Collections.sort(localReferers, new LowActivePriorityComparator<T>());
    refersHolder.addAll(localReferers);
}
```
This also looks wrong: sorting and adding when the list is empty is a no-op, so the condition should presumably be `!localReferers.isEmpty()`.
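A sketch of the two fixes (the loop shape is guessed from the method name, not taken from the actual motan source):
```java
// 1. searchLocalReferer: hoist the conversion out of the loop,
//    since localhost does not change between iterations.
long local = ipToLong(localhost);
for (Referer<T> referer : referers) {
    // ... compare each referer's host (converted to long) against `local` ...
}

// 2. doSelectToHolder: only sort and add when there are local referers.
if (!localReferers.isEmpty()) {
    Collections.sort(localReferers, new LowActivePriorityComparator<T>());
    refersHolder.addAll(localReferers);
}
```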
Answers:
username_1: Fixed in #46
Status: Issue closed
|
scijava/scijava-scripts | 98278446 | Title: melting-pot: Report which dependency versions are overridden for each component
Question:
username_0: When a melting pot build succeeds, it is nice to know, for a given component, which dependency versions were different than the release build was.
This might be nice so that e.g. authors can be notified that their component's dependencies are safe to update to newer versions. In practice this will often likely work well, but we do need to be careful not to report such things to authors with already-outdated versions of their components—we are not testing against the latest `master` here, after all.
/cc @axtimwalde |
UniversityOfNottingham/devbot | 185661059 | Title: Remove unnecessary environment variables from dockerfile
Question:
username_0: I'm pretty sure slack integration doesn't require half the environment variables anymore, so we should determine which ones we can remove, and chuck them from the dockerfile.
Answers:
username_0: The shell script I'm using atm to run the container sets the following environment variables:
- HUBOT_NAME
- HUBOT_SLACK_TOKEN
- HUBOT_SLACK_TEAM
But I think even some of those are unnecessary, as the integration nowadays specifies the name, and is added to a team. If I had to guess I'd say only the Slack token is needed.
username_0: confirmed only HUBOT_SLACK_TOKEN is required.
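For reference, a minimal run command under that assumption (the image name and token are placeholders):
```sh
docker run -d --name devbot \
  -e HUBOT_SLACK_TOKEN="xoxb-placeholder" \
  devbot:latest
```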
Status: Issue closed
|
docksal/docksal | 396010289 | Title: RVM doesn't install
Question:
username_0: When using the script provided in the documentation, the key used to validate the RVM download isn't installed correctly, so RVM fails to install.
Answers:
username_1: This is very little information about the issue, I must say
username_0: Give me a minute
Status: Issue closed
username_0: Sorry, realized the script I was thinking of was actually from a blog post and gist
https://gist.github.com/username_1/3bbd65a8265168f1261a70245da0e6e4#gistcomment-2801358
username_1: @username_0 thanks I've updated the script. Anyways rvm will be in `cli` with the next version so the issue should go off the table altogether. |
JeffKersting/refactor-tractor-whats-cookin | 778639423 | Title: Fetch-API POST requests (add and remove ingrds. to/from pantry)
Question:
username_0: As a User,
When I have decided to cook a recipe (call method adding recipe to user.toCook array),
I want to be able to add and subtract relevant ingredients / amounts from my pantry,
So that the data reflected in my pantry is up-to-date reflecting used/depleted and restocked ingredient amounts.
- If the currently displayed user has an ID of 50, and you want to add 3 units of an ingredient with an ID of 123, you would want to send a JSON object through with your POST request that looks like:
```
{
"userID": 50,
"ingredientID": 123,
"ingredientModification": 3
}
```
If you wanted to remove 3 units of that ingredient, you’d want to send a JSON that looks like this:
```
{
"userID": 50,
"ingredientID": 123,
"ingredientModification": -3
}
```
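A minimal `fetch` sketch for sending one of these modifications (the endpoint URL is a placeholder; substitute the project's actual route):
```js
// Hypothetical endpoint; adjust to the real API route.
const API_URL = 'http://localhost:3001/api/v1/users';

function modifyPantry(userID, ingredientID, ingredientModification) {
  return fetch(API_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ userID, ingredientID, ingredientModification })
  }).then(response => {
    if (!response.ok) throw new Error(`POST failed: ${response.status}`);
    return response.json();
  });
}

// Remove 3 units of ingredient 123 from user 50's pantry:
modifyPantry(50, 123, -3);
```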
Answers:
username_1: Created an apis file for this with a dynamic function that gets users or ingredients or recipes.
Status: Issue closed
|
sympy/sympy | 944158709 | Title: DomainMatrix Jordan Normal Form
Question:
username_0: I've drafted a Jordan Normal Form computation for DomainMatrix based on the
"<NAME>, Matrix Analysis and Applied Linear Algebra Ch 7.7 ~ Ch 7.8"
I think that it uses a different algorithm and could be less efficient in nature, but as long as there are no references about the mathematical details of the `jordan_form` sympy was using, I don't think that it would be easy to develop that algorithm further; still, it can be worth trying to find a proof of it.
There are still lots of open places to improve this algorithm.
The first is to avoid lots of repetitive computations of `pow`, `nullspace`, and `rref`,
and the second is to improve `DomainMatrix.convert_to`, since I think that it doesn't work well with `FiniteExtension` over certain domains like Gaussian rationals and fraction fields, which can be a more general problem, not limited to this usage.
However, I'm afraid that the first optimization can make the code more difficult to comprehend, so I'll keep the inefficient version for now to see how this works and how it improves the decidability problems.
The method works as follows:
$R(A)$ denotes the range of a matrix $A$
$N(A)$ denotes the nullspace of a matrix $A$
For each eigenvalue $\lambda$ of the matrix $A$:
1. Set $M = A - \lambda I$
2. Find the nilpotent index $k$ where $N(M^k) = N(M^{k+1})$ is satisfied.
3. Find the list of bases $B_{k-1}, \ldots, B_1, B_0$ where each $B_i$ satisfies
- $B_{k-1}$ is a basis of $R(M^{k-1}) \cap N(M)$
- $B_{k-1} \cup B_{k-2}$ is a basis of $R(M^{k-2}) \cap N(M)$
- ...
- $B_{k-1} \cup \cdots \cup B_1$ is a basis of $R(M) \cap N(M)$
- $B_{k-1} \cup \cdots \cup B_0$ is a basis of $N(M)$
The size of each $B_i$ is the number of Jordan blocks of size `i + 1`
4. For each $b \in B_i$, find a particular solution $x$ of $M^i x = b$ (which is guaranteed to exist since each $b \in R(M^i)$); $x$ is then the generalized eigenvector that starts the Jordan chain $x, Mx, M^2 x, \ldots, M^i x$
5. Now each Jordan chain can be horizontally stacked to build `P`
```python3
from sympy import *
from sympy.polys.matrices import DomainMatrix
from sympy.polys.factortools import dup_factor_list
from sympy.polys.agca.extensions import FiniteExtension
from functools import reduce
def basic_cols(A):
"""Find basic columns of $A$ that spans $R(A)$ by gaussian elimination"""
_, pivots = A.rref()
basic_cols = [A[:, p] for p in pivots]
return reduce(DomainMatrix.hstack, basic_cols)
def basic_cols_independent(A, B):
"""Find basic columns of $B$ that are linearly independent of the columns of $A$"""
temp = A.hstack(B)
_, pivots = temp.rref()
basic_cols = [temp[:, p] for p in pivots if p >= A.shape[1]]
if basic_cols:
return reduce(DomainMatrix.hstack, basic_cols)
return DomainMatrix.zeros((A.shape[0], 0), A.domain)
def particular_solution(A, b):
"""Find a particular solution $x$ of $A x = b$ by gaussian elimination if $b \in R(A)$"""
aug = A.hstack(b)
[Truncated]
A = A.to_field()
algebraic_jordan_structure = jordan_form(A)
P, J = jordan_form_to_sympy(algebraic_jordan_structure)
return P, J
```
Usage:
```
A = Matrix([
[-4, -5, -3, 1, -2, 0, 1, -2],
[4, 7, 3, -1, 3, 0, -1, 2],
[0, -1, 0, 0, 0, 0, 0, 0],
[-1, 1, 2, -4, 2, 0, -3, 1],
[-8, -14, -5, 1, -6, 0, 1, -4],
[4, 7, 4, -3, 3, -1, -3, 4],
[2, -2, -2, 5, -3, 0, 4, -1],
[6, 7, 3, 0, 2, 0, 0, 3]])
P, J = DOM_jordan_form(A)
```
Answers:
username_1: This looks good.
Some of these things could be added to `DomainMatrix` e.g. `rank`. Probably `hstack` and `vstack` should be made to be classmethods with `*args` signature. The `SDM` class already has a `particular` method for getting a particular solution but that isn't exposed by `DomainMatrix` yet.
I wonder if it's better to use `CRootOf` even where `roots` can find radical expressions for cubics and quartics because the expressions can be too complicated to be useful.
username_1: Also `connected_components` could be used here. |
github/codeql | 1167162246 | Title: Ran with database overwriting enabled, but the directory does not appear to be a CodeQL database or database cluster
Question:
username_0: I'm creating a java database but maven fails:
```
codeql database create log4j-database -l=java -c="mvn clean install -file pom.xml -Dmaven.test.skip=true" --overwrite
```
When I run it again I got:
```
A fatal error occurred: Ran with database overwriting enabled, but the directory does not appear to be a CodeQL database or database cluster. Please check you do indeed wish to delete it, and do so manually.
```
Looks like codeql failed to determine if it's a codeql database. Here's the contents of log4j-database:
```
%> ls log4j-database/ -R
log4j-database/:
./ ../ log/ trap/
```
Answers:
username_1: Can't reproduce this if I use `-c false` for a failing command. I'm using CodeQL CLI 2.8.2. What version are you using, and can you reproduce the problem using `-c false`? If not, what is the nature of the Maven failure that does reproduce this?
username_2: I also tried to reproduce, and it works fine for me locally. Does the same thing happen if you first manually remove `log4j-database`, and then run the `database create` command twice?
username_0: This happens every time in my environment:
1. Install jdk-17.0.2 on Mac
2. Download log4j source code: https://github.com/apache/logging-log4j2/archive/refs/tags/rel/2.11.0.zip
3. Run this command: codeql database create log4j-database -l=java -c="mvn clean install -file pom.xml -Dmaven.test.skip=true" --overwrite
And to reproduce the problem:
1. mkdir -p log4j-database/{log,trap}
2. codeql database create log4j-database -l=java -c="mvn clean install -file pom.xml -Dmaven.test.skip=true" --overwrite |
kubeflow/testing | 486684825 | Title: Continuous build of docker images and updating kustomize manifests
Question:
username_0: We need a good way to continuously build our docker images and then update our kustomize manifests to use the updated images.
This is critical for maintaining velocity. One of the big problems we are seeing with releases is that changes are piling up and not getting exercised until we start cutting releases because we haven't updated our kustomize manifests.
Also as the number of applications scale the toil around building docker images and then updating manifests becomes significant. This is especially true during releases as we try to rapidly push out fixes.
There is a POC based on the jupyter web app here.
https://github.com/kubeflow/kubeflow/tree/master/releasing/auto-update
We'd like to make it super easy for people to define new workflows to auto-build their application. In an ideal world they would just check in a YAML file with a couple of configurations e.g.
* Location of their Dockerfile
* Location of their kustomization.yaml file
A couple things we are missing
1. A good solution for triggering workflows on pre/postsubmits/cron jobs
1. A good solution for monitoring/alerting
1. A good story for reusability around common tasks (e.g. building images, creating a PR, etc...)
/cc @username_2 @animeshsingh @username_1 @jinchihe
Answers:
username_0: GCB now has direct integration via GitHub App Triggers
https://cloud.google.com/cloud-build/docs/create-github-app-triggers
So if we install that GitHub App in our project then we can trigger GCB builds in response to PRs. The GCB build could then create K8s resources.
This is very similar to how our prow infra works today. We use Prow to trigger Prow jobs which run run_e2e_workflow.py which in turn submits a bunch of Argo workflows based on prow_config.yaml.
We could do something similar but use GCB to invoke run_e2e_workflow.py
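For context, this is the kind of build config such a trigger would run; a generic `cloudbuild.yaml` sketch, not the actual kubeflow one:
```yaml
steps:
  # Build the application image; $PROJECT_ID and $COMMIT_SHA are
  # substituted by Cloud Build at run time.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'
```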
username_0: With kubeflow/kubeflow#4029 we have a pretty good POC for CD of the jupyter web app image.
The next step is probably to generalize this to a 2nd image which should probably be the central dashboard; kubeflow/kubeflow#3781.
1. Parameterizing [update_jupyter_web_app.py](https://github.com/kubeflow/kubeflow/blob/master/py/kubeflow/kubeflow/ci/update_jupyter_web_app.py) into a reusable script should be pretty easy
* It looks like there are basically a couple arguments
1. The image location
1. Kustomization location
1. Location of source and the command to build the image
* We could make certain assumptions; i.e. that there is a Makefile and the name of the build rule and variables parameterizing where it gets pushed
1. Make it easy for people to add new jobs to periodically build and push their image.
username_0: I have created the [CI/CD for Kubeflow Applications Card in the Engprod Project](https://github.com/orgs/kubeflow/projects/13#column-7268759) to track this.
username_1: @username_0 thanks. I should be able to get to do some work on this in the next few days
username_0: Design doc is here: [bit.ly/kfcd](http://bit.ly/kfcd)
It looks like its a bit outdated. It would be good to update it and then socialize our thinking at the community meeting.
username_2: @username_0 I don't know if it's just me but that design doc link redirects to http://www.thelaptop-computers.info/2009/11/watauga-county-sheriff%E2%80%99s-office-arrests-two-suspected-burglars-go-blue-ridge/ which is not relevant.
username_1: I believe the design doc is https://docs.google.com/document/d/1oaBBJerOkKIuAAn_Swbu8uVBPtaYgvAVbQV1NxTsSSg/edit?ts=5d714796#heading=h.9g4gb5dvlquq
username_1: @username_0 I'll update the doc as you've [suggested](https://docs.google.com/document/d/1oaBBJerOkKIuAAn_Swbu8uVBPtaYgvAVbQV1NxTsSSg/edit?disco=AAAADpU3-Rs) to just focus on #1 (A doc focused on continuous delivery of our applications)
username_0: Status Update:
* kubeflow/kubeflow#4568 - I made some changes so I could run the profile controller on the KF release cluster
* The PR has instructions for how to setup the KF release cluster
* Running the pipeline created kubeflow/manifests#669
Next steps
* Refactor the kustomize layout of the tekton scripts
* I think we want to make it easier to fire off PipelineRun's for all the different applications we need to update
* I think as part of that we want to move all the scripts/pipeline resources into kubeflow/testing since
that's our main engprod repo
* We need kubeflow/manifests#665 to be submitted so that we can regenerate manifest tests for only changed files
username_0: @username_1 I wrote up my current thinking in this doc:
http://bit.ly/kfappscd-201912
PTAL
username_1: @username_0
I had commented on restructuring the PipelineRun to embed a pipelineSpec and resourceSpec rather than a pipelineRef and resourceRefs here: https://github.com/kubeflow/testing/issues/544#issuecomment-565694843
I'll comment on the doc as well
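Roughly the difference in question (a v1alpha1-era sketch; the task name and repo URL are placeholders):
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: update-app-example
spec:
  pipelineSpec:        # embedded, instead of pipelineRef: {name: ...}
    tasks:
      - name: build-and-push
        taskRef:
          name: build-push-task   # placeholder task name
  resources:
    - name: source
      resourceSpec:    # embedded, instead of resourceRef: {name: ...}
        type: git
        params:
          - name: url
            value: https://github.com/kubeflow/kubeflow.git
```
Embedding keeps each run self-contained, so a run can be fired without pre-installing Pipeline and PipelineResource objects on the cluster.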
username_0: Bump to P0 because this is required for 1.0.
Here's an update
* We have successfully created an example Tekton PipelineRun for the profile controller
* https://github.com/kubeflow/testing/blob/master/apps-cd/runs/profile_controller_v1795828.yaml
* Our release infra cluster can successfully run this pipeline
The next steps are
1. Extend this to all applications in scope for KF 1.0
1. Continuously run the pipelines
Here's how I think we will do this
* We will define a YAML config file to list all the parameters for each application (a sketch follows this list) e.g.
* Path to Dockerfile
* Path to kustomize file etc
* We will create a simple python script to autogenerate PipelineRuns for all these applications
* We will update the python script to have a mode where it only creates a PipelineRun if the image
is out of date
* For reference see https://github.com/kubeflow/kubeflow/blob/master/py/kubeflow/kubeflow/ci/update_jupyter_web_app.py
* Run this script continuously on the release cluster
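A hypothetical shape for that config file (field names and paths are illustrative, not final):
```yaml
# One entry per application to build continuously (illustrative only).
applications:
  - name: profile-controller
    dockerfile: components/profile-controller/Dockerfile
    context: components/profile-controller
    image: gcr.io/kubeflow-images-public/profile-controller
    kustomization: profiles/base/kustomization.yaml
  - name: jupyter-web-app
    dockerfile: components/jupyter-web-app/Dockerfile
    context: components/jupyter-web-app
    image: gcr.io/kubeflow-images-public/jupyter-web-app
    kustomization: jupyter/jupyter-web-app/base/kustomization.yaml
```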
/assign @username_0
username_0: After #564 is merged the next steps would be
* Verify that Tekton pipelines are running and apps are getting updated
* Modify the application configuration to enable builds for one of the release branches (e.g. 0.7-branch)
* We want to verify that when we cut 1.0 branches we can just update application.yaml in order to
build docker images and update the kustomize manifests.
username_0: Update
* kubeflow/testing#572 was merged to update release branches
* Next missing piece is closing old PRs
* kubeflow/testing#571 is first PR for that
* After that its just a matter of ensuring everything is working as expected
username_0: This is working. Here's a list of PRs indicating several PRs updating 1.0 applications which were successfully merged
https://github.com/kubeflow/manifests/pulls?utf8=%E2%9C%93&q=+is%3Aclosed+author%3Akubeflow-bot+
Only remaining thing to do before updating this PR is updating the instance of the release infrastructure in the prod namespace.
Status: Issue closed
username_0: Closing this issue.
Filed #593 to setup a prod instance |
kyma-project/kyma | 624648895 | Title: Update knative dependencies
Question:
username_0: **Description**
Whitesource scans revealed vulnerabilities of the knative serving and networking packages
Files:
- `knative.dev/serving/pkg/apis/serving/v1-v0.12.1`
- `knative.dev/serving/pkg/apis/networking-e4922e8a9ec460e78ab02ef598e042650131d6fe`
**Steps to exploit**
We have to find and update the dependencies of serving and networking in the following folders:
- `tests/knative-serving`
- `tests/function-controller`
- `components/function-controller`
Status: Issue closed |
Zensavona/elixtagram | 161900062 | Title: Unauthorised endpoints?
Question:
username_0: Hey, new to the Elixir ecosystem, so not sure what I'm doing wrong? I'm using your example project and I though the tag command didn't need an access token?
Horus:instagram-phoenix-example username_0$ iex -S mix phoenix.server
Erlang/OTP 18 [erts-7.2.1] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]
[info] Running InstagramPhoenixExample.Endpoint with Cowboy on http://localhost:4000
Interactive Elixir (1.2.2) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Elixtagram.configure "XXX", "YYY","ZZZ.com"
{:ok, []}
iex(2)> Elixtagram.authorize_url!
"https://api.instagram.com/oauth/authorize/?client_id=XXX&redirect_uri=http%3A%2F%2FZZZ>com&response_type=code"
iex(3)> Elixtagram.tag("lifeisaboutdrugs")
** (Elixtagram.Error) OAuthAccessTokenException: The access_token provided is invalid.
(elixtagram) lib/elixtagram/api/base.ex:44: Elixtagram.API.Base.handle_response/1
(elixtagram) lib/elixtagram/api/tags.ex:13: Elixtagram.API.Tags.tag/2
```
Cheers
Answers:
username_1: Hey there @username_0 - since the time of writing the README, Instagram has severely locked down their API for those who don't already have an old key (I believe those old keys are being revoked soon also). There are no unauthenticated endpoints anymore.
See [here](https://www.instagram.com/developer/) for more info.
Status: Issue closed
username_0: Thanks for the info :) |
libnet/libnet | 527407357 | Title: Msys2 issues
Question:
username_0: Hello !!
I've installed MSYS2 with all needed packages.
Ran this command in the MinGW32 shell:
```
BSaidus@UCCEN MINGW32 ~/libnet/build1/libnet-1.2
$ CFLAGS="-O2 -Wall -I$(pwd)/win32/wpdpack/Include" LDFLAGS="-L$(pwd)/win32/wpdpack/Lib/" ./configure --prefix=/mingw32
```
And the result is all OK:
```
-=-=-=-=-=-=-=-=-=-= libnet Configuration Complete =-=-=-=-=-=-=-=-=-=-
Version ....................... 1.2
Host .......................... i686-w64-mingw32
Operating System .............. mingw32
Host CPU ...................... i686
Host Vendor ................... w64
Host OS ....................... mingw32
Prefix ........................ /mingw32
Cross-compiling ............... no
Compiler is GCC ............... yes
CC ............................ gcc
CFLAGS ........................ -O2 -Wall -I/home/BSaidus/libnet/build1/libnet-1.2/win32/wpdpack/Include -march=i686 -mwin32
LD ............................ C:/GNU/Msys32/mingw32/i686-w64-mingw32/bin/ld.exe
LDFLAGS ....................... -L/home/BSaidus/libnet/build1/libnet-1.2/win32/wpdpack/Lib/
LIBS .......................... -lwpcap -lpacket -lws2_32 -liphlpapi
Link Layer .................... win32
Shared Libraries .............. yes
Static Libraries .............. yes
PIC ........................... yes
Build Sample Programs ......... no
Rebuild docs .................. yes
To override options
./configure --help
Report bugs to https://github.com/libnet/libnet/issues
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
To disable silent build and print the full command line of every stage
make V=1
To compile shared libraries on MinGW use the bundled WinPcap libraries
in ./win32/. GCC can NOT produce x64 compatible images with official
WinPcap Developer Pack. See README.win32 for more info.
To build/update the documentation
make doc
```
But when doing
`make`
it fails.
```
C:/GNU/Msys32/home/BSaidus/libnet/build1/libnet-1.2/win32/wpdpack/Include/pcap/pcap.h:357:28: warning: 'struct bpf_program' declared inside parameter list will not be visible outside of this definition or declaration
357 | void bpf_dump(const struct bpf_program *, int);
| ^~~~~~~~~~~
In file included from ../include/libnet.h:107,
from common.h:63,
from libnet_asn1.c:56:
../include/./libnet/libnet-structures.h:52:5: error: unknown type name '__int64_t'
52 | __int64_t packets_sent; /* packets sent */
| ^~~~~~~~~
../include/./libnet/libnet-structures.h:53:5: error: unknown type name '__int64_t'
53 | __int64_t packet_errors; /* packets errors */
| ^~~~~~~~~
../include/./libnet/libnet-structures.h:54:5: error: unknown type name '__int64_t'
54 | __int64_t bytes_written; /* bytes written */
| ^~~~~~~~~
make[1]: *** [Makefile:549: libnet_asn1.lo] Error 1
make: *** [Makefile:552: all-recursive] Error 1
```
Answers:
username_1: I hope you don't have `libnet/win32/stdint.h` flying around in `libnet/include/`.
username_0: no !!
just replaced `__int64_t` by `uint64_t` and it works
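In other words, the change amounts to using the C99 fixed-width type (a sketch; the field list is taken from the error messages above):
```c
#include <stdint.h>

/* MinGW does not define glibc's internal __int64_t, so use the portable
   C99 type for the counters in libnet-structures.h instead. */
struct libnet_stats {
    uint64_t packets_sent;   /* packets sent   */
    uint64_t packet_errors;  /* packet errors  */
    uint64_t bytes_written;  /* bytes written  */
};
```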
Status: Issue closed
|
net-dragon/draconian2d | 176267836 | Title: Sprites Needed
Question:
username_0: Idle sprite animation
Running sprite
Jumping sprite
Flying sprite
Dashing animation
* charging
* animation
Answers:
username_0: [Player sprite](https://trello.com/c/HCq5hbGw/10-player-sprite)
username_0: [Guard sprites](https://trello.com/c/sg6CHX0t/20-guard-sprites)
great-expectations/great_expectations | 480859012 | Title: `data_context.util.safe_mmkdir` barfs in python 2 if it gets a path instead of a string
Question:
username_0: Quick note to track down later
`safe_mmkdir(os.path.dirname(self.base_directory))` Fails
`safe_mmkdir(str(os.path.dirname(self.base_directory)))` Succeeds
Proposed resolution: raise an informative error.
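One possible shape for that guard (a sketch, not necessarily what PR #615 does; python 2 str/unicode handling is glossed over here):
```python
import errno
import os


def safe_mmkdir(directory, exist_ok=True):
    """Simple wrapper for os.makedirs that fails loudly on non-string input."""
    if not isinstance(directory, str):
        raise TypeError(
            "directory must be a string, got %r; "
            "wrap pathlib.Path objects with str()" % type(directory)
        )
    try:
        os.makedirs(directory)
    except OSError as e:
        if not (exist_ok and e.errno == errno.EEXIST):
            raise
```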
PR #615 should address this when finished.
Status: Issue closed |
rheinzler/PointCloudDeNoising | 535549110 | Title: Release of the data!
Question:
username_0: Hi, Thanks for your amazing work!
The Adverse Weather has been a troubling question in the 3D field. Experiments from your paper give reasonable results for these scenes, which made me eager to try this task. Would you please release the dataset, or give us a schedule of planned releases?
Any help would be greatly appreciated!
Answers:
username_1: Also looking forward.
username_2: Thank you very much for your messages. We plan to publish the data set at the beginning of the 2nd quarter 2020.
username_2: The dataset is available :-)
Status: Issue closed
|
theopenlab/openlab | 611166972 | Title: Terraform-provider-openstack: Enable Port Forwarding extension
Question:
username_0: We have https://github.com/terraform-providers/terraform-provider-openstack/pull/940 that adds OpenStack Neutron port forwarding.
New test is failing with the following error:
```
2020-05-02 10:53:53.560290 | ubuntu-bionic | 2020/05/02 10:53:53 [DEBUG] OpenStack Response Body: {
2020-05-02 10:53:53.560326 | ubuntu-bionic | "NeutronError": {
2020-05-02 10:53:53.560361 | ubuntu-bionic | "detail": "",
2020-05-02 10:53:53.560422 | ubuntu-bionic | "message": "The resource could not be found.",
2020-05-02 10:53:53.560479 | ubuntu-bionic | "type": "HTTPNotFound"
2020-05-02 10:53:53.560512 | ubuntu-bionic | }
2020-05-02 10:53:53.560535 | ubuntu-bionic | }
```
I see that this extension was enabled manually for Gophercloud environment: https://github.com/theopenlab/openlab/issues/351
Can you please also enable it for the Terraform-provider-openstack.
Thanks!
Answers:
username_1: @bzhaoopenstack
Status: Issue closed
username_0: Hello. Could you please help enable this extension? As far as I understand, I need to add `port_forwarding` here: https://github.com/theopenlab/openlab-zuul-jobs/blob/master/roles/create-devstack-local-conf/tasks/main.yml#L85 ?
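If it helps, at the devstack level enabling the extension usually amounts to something like the following local.conf fragment (a sketch; the role's actual variable names may differ):
```
[[local|localrc]]
Q_SERVICE_PLUGIN_CLASSES+=,port_forwarding

[[post-config|$NEUTRON_L3_CONF]]
[agent]
extensions = port_forwarding
```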
NaikSoftware/StompProtocolAndroid | 655145344 | Title: android cant send message
Question:
username_0: Android can't send a message to Spring Boot 2.2.7.RELEASE.
The session gets connected, but when sending the message I do not get any error/callback, and the websocket server does not get invoked.
Tried passing the following map during Stomp.over connection call
```kotlin
val map = hashMapOf<String, String?>()
map.put("accept-version","1.2");
map.put("version","1.2");
```
but that has no effect
`mStompClient?.send("/topic/activity1", jsonObject.toString())?.subscribe()`: this call does not send the message to the `@MessageMapping` handler.
Any help is appreciated.
SublimeLinter/SublimeLinter-flake8 | 55500164 | Title: Show error code in popup box
Question:
username_0: Would make it much easier to know which codes to ignore. An example of the popup menu I'm talking about:

I think dreadatour/Flake8Lint has this built in.
Answers:
username_1: :+1:
The old SL did this and it made finding the correct code to ignore a lot easier. It's also easier to find similar errors or warnings in the panel when you can just search for it by typing.
This should happen in both the popup and the statusbar text.
username_2: This has been merged in, sorry it took so long!
Status: Issue closed
|
aws-amplify/amplify-js | 824560726 | Title: Failed to execute 'transaction' on 'IDBDatabase': One of the specified object stores was not found.
Question:
username_0: **Describe the bug**
I'm trying to use DataStore and getting this error. This is my code:
```
import { DataStore } from '@aws-amplify/datastore';
import { Election } from '../../models';
DataStore.query(Election)
```
Here is the screenshot:

It's coming from DataStore's `getAll` method.
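One general first check for this error is whether the locally persisted IndexedDB schema is stale relative to the current models; clearing the local store (the standard `DataStore.clear()` API) forces the object stores to be recreated:
```js
import { DataStore } from '@aws-amplify/datastore';
import { Election } from '../../models';

async function resetAndQuery() {
  // Wipe local storage so DataStore recreates its IndexedDB object
  // stores from the current schema, then retry the query.
  await DataStore.clear();
  return DataStore.query(Election);
}
```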
Answers:
username_1: Please post your schema, as well as the output from `npx envinfo --system --binaries --browsers --npmPackages --npmGlobalPackages`
username_0: schema:
```
export const schema = {
models: {
Org: {
name: 'Org',
fields: {
id: {
name: 'id',
isArray: false,
type: 'ID',
isRequired: true,
attributes: [],
},
name: {
name: 'name',
isArray: false,
type: 'String',
isRequired: true,
attributes: [],
},
address: {
name: 'address',
isArray: false,
type: 'String',
isRequired: true,
attributes: [],
},
lon: {
name: 'lon',
isArray: false,
type: 'String',
isRequired: false,
attributes: [],
},
lat: {
name: 'lat',
isArray: false,
type: 'String',
isRequired: false,
attributes: [],
},
elections: {
name: 'elections',
isArray: true,
type: {
model: 'Election',
},
isRequired: false,
attributes: [],
isArrayNullable: true,
association: {
connectionType: 'HAS_MANY',
associatedWith: 'org',
},
},
},
syncable: true,
pluralName: 'Orgs',
[Truncated]
prettier: ^2.0.5 => 2.2.1
react: ^17.0.0 => 17.0.1
react-bootstrap: ^1.5.0 => 1.5.0
react-date-range: ^1.1.3 => 1.1.3
react-dom: ^17.0.0 => 17.0.1
react-input-mask: ^3.0.0-alpha.2 => 3.0.0-alpha.2
react-phone-number-input: ^3.1.16 => 3.1.16
react-router: ^5.2.0 => 5.2.0
react-router-dom: ^5.2.0 => 5.2.0
react-select: ^4.2.1 => 4.2.1
redux: ^4.0.5 => 4.0.5
rollup: ^2.39.1 => 2.39.1
snowpack: ^3.0.13 => 3.0.13
snowpack-plugin-svgr: ^0.1.2 => 0.1.2
typescript: ^4.0.0 => 4.2.2
valtio: ^0.7.0 => 0.7.0
npmGlobalPackages:
npm: 7.0.15
```
username_1: Thanks. Can you also share the `schema.graphql`?
username_0: ```
{
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"mutationType": {
"name": "Mutation"
},
"subscriptionType": {
"name": "Subscription"
},
"types": [
{
"kind": "OBJECT",
"name": "Query",
"description": null,
"fields": [
{
"name": "getOrg",
"description": null,
"args": [
{
"name": "id",
"description": null,
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "ID",
"ofType": null
}
},
"defaultValue": null
}
],
"type": {
"kind": "OBJECT",
"name": "Org",
"ofType": null
},
"isDeprecated": false,
"deprecationReason": null
},
{
"name": "listOrgs",
"description": null,
"args": [
{
"name": "filter",
"description": null,
"type": {
"kind": "INPUT_OBJECT",
"name": "ModelOrgFilterInput",
"ofType": null
},
"defaultValue": null
},
[Truncated]
],
"onOperation": false,
"onFragment": false,
"onField": false
},
{
"name": "aws_lambda",
"description": "Tells the service this field/object has access authorized by a Lambda Authorizer.",
"locations": ["OBJECT", "FIELD_DEFINITION"],
"args": [],
"onOperation": false,
"onFragment": false,
"onField": false
}
]
}
}
}
```
username_1: I'm talking about the file where you defined your models and relationships. It'll be in `./amplify/backend/api/{your app}/schema.graphql`.
username_1: Thanks! Could you try removing the `org` field from your Election model, then `amplify push`, `amplify codegen models` and see if you're still getting the same error?
In other words, try this schema:
```gql
type Org @model @auth(rules: [{ allow: owner }]) {
id: ID!
name: String!
address: String!
lon: String
lat: String
elections: [Election] @connection(keyName: "byOrg", fields: ["id"])
}
type Election @key(name: "byOrg", fields: ["orgId"]) @model {
id: ID!
title: String!
date: AWSDate!
result: String!
lon: String
lat: String
orgId: ID!
}
``` |
tlhackque/BlockCountries | 133367906 | Title: centos 5.11 only issue
Question:
username_0: hi
I am using this script on two ubuntu systems with no issue, but I installed it on contos, and all is working, even the list, but I do get this error:
~]# /etc/init.d/BlockCountries start -update
Starting blocked countries IP filter:
No new IP data available from apnic
No new IP data available from lacnic
No new IP data available from afrinic
Updated IP zone data from ripe
No new IP data available from arin ip6tables-restore v1.3.5: ip6tables-restore: unable to initializetable 'filter'
Error occurred at line: 1
Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
Answers:
username_1: That says that the ip6tables filter that BlockCountries generated doesn't load.
The 'filter' table is internal to your kernel and provided by netfilter.
Is netfilter configured in your kernel?
Is IPv6 running? What does ifconfig report?
Is netfilter (iptables) working? What does ip6tables -nvL report?
redirect those to a file and post them -- as ATTACHMENTs.
If you just start BlockCountries on IPv4, does it work?
If not, what are the errors? What does iptables -nvL report?
I don't know anything about 'contos'. If you mean 'centos', that's the same as fedora, which I use for most of my machines. BlockCountries runs here on kernels as old as 2.6.17, Fedora Core 4. (though IPv6 has issues on kernels that old.)
What are the versions of your OS, iptables, ip6tables?
uname -a
cat /proc/sys/kernel/osrelease
iptables --version
ip6tables --version
username_1: Also: post, again as an ATTACHMENT, the output of
`BlockCountries start -d -6 -no4 2>ipv6.table`.
This will be large, but it includes exactly what is fed to `ip6tables-restore`
username_0: hi, sorry I have not had the chance to look into it yet as today is very busy, but thanks for your post as it will give me good ideas of what to check. From the research I already did, it seems that I may be missing (xtables-addons-common); this may not be the case, but when I get a chance later I will reply with your requested outputs.
username_0: **uname -a**
Linux ams 2.6.18-408.el5 #1 SMP Tue Jan 19 09:14:52 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
**cat /proc/sys/kernel/osrelease**
2.6.18-408.el5
**iptables --version**
iptables v1.3.5
**ip6tables --version**
ip6tables v1.3
**/etc/init.d/BlockCountries start -d -6 -no4 2>ipv6.table**
Starting blocked countries IP filter:
Read /root/blockips/cn.cdb
Read /root/blockips/tr.cdb[root@ams ~]
username_1: 2.6.18 is a very old kernel. I'm not sure it's new enough for reliable operation with IPv6, especially IPv6 with ip6tables. I know 2.6.17 has IPv6-related bugs in netfilter - but it does get further than you report.
Do you actually have IPv6 connectivity with this machine? If not, remove the -6 from your configuration file and BlockCountries will ignore IPv6, which should solve your problem.
If you do have IPv6 connectivity, you should update your kernel to something much more current in any case.
You didn't attach ipv6.table.
You didn't include the output of ifconfig
You didn't attach the output of ip6tables -nvL and iptables -nvL
You didn't tell me if IBlockCountries works with just IPv4. (BlockCountries start -no6 -4)
username_0: Yes, maybe the issue is because of the old kernel. As I don't use IPv6 on that system, I have disabled it in the config like you said, and the issue is gone. Thanks for your help. I have your script running on three systems now, two Ubuntu and one CentOS.
Status: Issue closed
username_1: If you don't use iPv6, disabling it is a good idea in any case. The BlockCountries processing costs something - more importantly, the iptables rules occupy kernel memory. No point in that if you have no traffic.
You could consider adding an iptables rule that blocks all IPv6 traffic (except ::1) just in case IPv6 connectivity is added unexpectedly.
I'm closing this issue since you are running and haven't provided the additional data.
By the way, if you're going to install on more machines, get the latest release. bcinstall has been updated to be more helpful...
username_0: [root@ams ~]# /etc/init.d/BlockCountries stop
Removing blocked countries IP filteriptables-restore: line 1511 failed
Table update failed: 256
[root@ams ~]# /etc/init.d/BlockCountries start -no6 -4
Starting blocked countries IP filter: iptables-restore: line 7065 failed
[root@ams ~]#
[root@ams ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:97:A5:35
inet addr:172.16.31.10 Bcast:172.16.17.32 Mask:255.255.255.248
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17688 errors:0 dropped:0 overruns:0 frame:0
TX packets:27649 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1799956 (1.7 MiB) TX bytes:3449579 (3.2 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2513 errors:0 dropped:0 overruns:0 frame:0
TX packets:2513 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3769658 (3.5 MiB) TX bytes:3769658 (3.5 MiB)
[root@ams ~]# ip6tables -nvL
ip6tables v1.3.5: can't initialize ip6tables table `filter': Address family not supported by protocol
Perhaps ip6tables or your kernel needs to be upgraded.
iptables -nvL = **too big to copy but looks good to me**
ipv6.table = **don't know how to get this data, sorry**
username_1: You created `ipv6.table` in your current directory when you executed:
`/etc/init.d/BlockCountries start -d -6 -no4 2>ipv6.table`
If you're administering Unix systems, you ought to take a course on the shell. Or see `man sh` or `man bash`
ip6tables -nvL
ip6tables v1.3.5: can't initialize ip6tables table `filter': Address family not supported by protocol
Perhaps ip6tables or your kernel needs to be upgraded.
This rules out BlockCountries as an issue. It says that ip6tables by itself can't even list IPv6 filters. That means that either your startup doesn't include ip6tables, or your kernel doesn't support it.
This is **NOT good**:
```
/etc/init.d/BlockCountries start -no6 -4
Starting blocked countries IP filter: iptables-restore: line 7065 failed
```
This means the filter was not installed.
If you ever see a message that says 'error', 'failed', 'warning' or the like, something is wrong. Report it.
One thing to try first - you are running an old iptables, but it might have been patched by redhat.
So if you're using `-conntrack`, remove it and see if you get the error. Otherwise, add it ...
If that doesn't solve the problem, I need the result of `/etc/init.d/BlockCountries start -d -no6 -4 2>ipv4.table`. Which, as you've figured out by now, is the file ipv4.table. Please post this as an attachment.
I may need `iptables -nvL`. `iptables -nvL >ipv4.iptables` would capture that in a file (`ipv4.iptables`), which you would attach. But let's hold off until I can see `ipv4.table`
username_0: [root@ams ~]# /etc/init.d/BlockCountries start -d -6 -no4 2>ipv6.table
Starting blocked countries IP filter:
Read /root/blockips/cn.cdb
Read /root/blockips/tr.cdb[root@ams ~]# /etc/init.d/BlockCountries start -d -no6 -4 2>ipv4.table
Starting blocked countries IP filter:
Read /root/blockips/cn.cdb
Read /root/blockips/tr.cdb[root@ams ~]#
**result of /etc/init.d/BlockCountries start -d -no6 -4 2>ipv4.table**
[tr.txt](https://github.com/username_1/BlockCountries/files/129419/tr.txt)
[cn.txt](https://github.com/username_1/BlockCountries/files/129420/cn.txt)
username_0: -conntrack = no change when added with -ipv6 on config
username_0: **with -ipv4 only in config**
[root@ams ~]# /etc/init.d/BlockCountries start -update
Starting blocked countries IP filter:
No new IP data available from apnic
No new IP data available from lacnic
No new IP data available from afrinic
No new IP data available from ripe
No new IP data available from arin iptables-restore: line 7065 failed
Table update failed: 256
username_1: I asked for **ipv4.table** and you attached `tr.txt` and `cn.txt`. They appear to be renamed versions of `cn.cdb` and `tr.cdb`, which are compiled binary data for Turkey and China from apnic. This is not helpful at this time.
`-conntrack` is something to try with `-no6 -4` (that is, IPv4-only). The `iptables-restore: line 7065 failed`
is more important than the IPv6 mystery. It means the IPv4 filter is not loading.
For some reason your `iptables-restore` doesn't like the data. `-conntrack` mismatch is one possibility.
Unfortunately, `iptables-restore` provides no useful diagnostics - even the line number is useless. So I need the data file. If I can reproduce your issue here, I can diagnose it. If not, you'll have to help.
username_0: /etc/init.d/BlockCountries start -conntrack = same line 7065 error on output
**Had to rename to be able to be allowed to upload, as when i was trying .zip it still failed, but file are untouched so you can rename again if you like.**
[ipv4.table.txt](https://github.com/username_1/BlockCountries/files/129425/ipv4.table.txt)
[ipv6.table.txt](https://github.com/username_1/BlockCountries/files/129426/ipv6.table.txt)
username_1: Renaming Is fine, sending another file isn't. Next time we'll create the file with a .txt extension.
Both files look like reasonable data. Some is missing, but that may be due to your version of `iptables-restore` interfering with my logging.
I've looked at the IPv4 table in some detail.
It appears that iptables-restore is failing to delete a rule chain, probably because there is a reference to that chain that shouldn't exist.
If BlockCountries is the only software touching its chains, this is impossible - at least in theory.
First, do this:
```
iptables -nL | grep -nP 'BLOCKCC.-I '
```
You should get 2 lines of output. The numbers at the beginning may be different, and you may see either `BLOCKCC0` or `BLOCKCC1`. Everything else should match exactly:
```
27:Chain BLOCKCC1-I (1 references)
23841:BLOCKCC1-I all -- 0.0.0.0/0 0.0.0.0/0
```
**If what you see matches**, I need you to do these commands exactly in this order, with nothing in between:
```
iptables -nL >ipv4-tables.txt
/etc/init.d/BlockCountries start -no6 -4 -d 2>ipv4-bc.txt
```
Post the terminal log and the two files.
**Otherwise (you got different output):**
If you see anything else:
```
iptables -nL >ipv4-tables-grep.txt
```
Then restart `iptables`, `BlockCountries`, and any other software that uses iptables. I think I saw that you're using fail2ban, for example. (If it's easier, you can reboot the machine).
```
service iptables stop
service iptables start
# If you're using it
service fail2ban start
...
# the next command should return nothing.
iptables -nL | grep -nP 'BLOCKCC.-I '
# manually start BlockCountries
/etc/init.d/BlockCountries start
/etc/init.d/BlockCountries start
/etc/init.d/BlockCountries start
```
If any of the commands fail:
```
iptables -nL >ipv4-before.txt
/etc/init.d/BlockCountries start -d -no6 -4 2>ipv4-bc-table.txt
ls -l ipv4-*.txt
```
Post all three files (`ipv4-tables-grep.txt`, `ipv4-before.txt`, and `ipv4-bc-table.txt` with the terminal log.
Yes, I really need you to run start exactly three times.
username_1: I decided that it's too complicated for you to selectively gather the data that I need to debug this.
I have added data collection to `BlockCountries` that should simplify gathering what I need to debug this.
Please download and install the latest version. (2.13)
Run its bcinstall as some new modules are required.
```
/etc/init.d/BlockCountries start -d -z 2>debug.zip
```
Post the terminal output, and the debug.zip file that's created.
Depending on your configuration, the (unzipped) debug log may be several MB in size. It contains some data about your system; I don't think it's sensitive. It's all text; you are welcome to review its contents.
You can post it somewhere else if you prefer.
The new version also logs start and stop to syslog. I recommend it, but -nosyslog will turn it off.
username_1: FYI, I found my note on IPv6 - the minimum kernel version required for stateful IPv6 filters is 2.6.20.
Although BlockCountries will start on earlier kernels, IPv6 connections will not pass the firewall reliably.
This is a kernel restriction, not a BlockCountries issue.
This is not to say that you should run a kernel as old as 2.6.20 or that netfilter in that version is bug-free. Just that prior versions, which include the one you reported running, are known not to work with IPv6.
2.6.20 was released in 2007. 2.6.18 was released in 2006, 10 years ago. Both are EOL.
If you can collect the necessary data (use the latest BlockCountries with `-d -z 2>debug.zip`, I still want to get to the bottom of the issue with iptables-restore. The latest version collects more data than my previous (deleted) post requested, but requires much less work from you.
username_0: I've been away for few days and won't be back home for another 36 hours, but will be sure to send the data you have requested.
username_0: # /etc/init.d/BlockCountries start -update
Starting blocked countries IP filter:
Updated IP zone data from apnic
Updated IP zone data from lacnic
No new IP data available from afrinic
Updated IP zone data from ripe
Updated IP zone data from ariniptables-restore: line 7068 failed
Rules update failed: Broken pipe at line 0
[ FAILED ]
[root@ams BlockCountries]# /etc/init.d/BlockCountries start -d -z 2>debug.zip
Debug log will be written as a .zip archive
Starting blocked countries IP filter:
Read /root/blockips/cn.cdb
Read /root/blockips/tr.cdb
iptables-restore: line 7068 failed
Rules update failed: Broken pipe at line 0
[ FAILED ]
**Again the zip will not upload for some reason**, please download the file from my website: [link](http://morganmultimediagroup.com/debug.zip)
username_1: Thanks for the data.
.zip should upload per https://help.github.com/articles/file-attachments-on-issues-and-pull-requests/, contact github for help on that.
I've downloaded it from your website & will analyze it shortly.
username_1: The good news:
With this data, I can reproduce your issue. Should have a solution shortly.
username_0: Ok please remember that the data does not include `-IPv6` in the config
username_1: Would you please post your /etc/sysconfig/iptables-config?
username_0: **iptables-config**
```
# Load additional iptables modules (nat helpers)
# Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IPTABLES_MODULES="ip_conntrack_netbios_ns"
# Unload modules on restart and stop
# Value: yes|no, default: yes
# This option has to be 'yes' to get to a sane state for a firewall
# restart or stop. Only set to 'no' if there are problems unloading netfilter
# modules.
IPTABLES_MODULES_UNLOAD="yes"
# Save current firewall rules on stop.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
# (e.g. on system shutdown).
IPTABLES_SAVE_ON_STOP="no"
# Save current firewall rules on restart.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
# restarted.
IPTABLES_SAVE_ON_RESTART="no"
# Save (and restore) rule and chain counter.
# Value: yes|no, default: no
# Save counters for rules and chains to /etc/sysconfig/iptables if
# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
# SAVE_ON_RESTART is enabled.
IPTABLES_SAVE_COUNTER="no"
# Numeric status output
# Value: yes|no, default: yes
# Print IP addresses and port numbers in numeric format in the status output.
IPTABLES_STATUS_NUMERIC="yes"
# Verbose status output
# Value: yes|no, default: yes
# Print info about the number of packets and bytes plus the "input-" and
# "outputdevice" in the status output.
IPTABLES_STATUS_VERBOSE="no"
# Status output with numbered lines
# Value: yes|no, default: yes
# Print a counter/number for every rule in the status output.
IPTABLES_STATUS_LINENUMBERS="yes"
# Reload sysctl settings on start and restart
# Default: -none-
# Space separated list of sysctl items which are to be reloaded on start.
# List items will be matched by fgrep.
#IPTABLES_SYSCTL_LOAD_LIST=".ip_conntrack .bridge-nf"
```
username_0: [root@ams ~]# locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
username_0: **ip6tables-config**
```
# Load additional ip6tables modules (nat helpers)
# Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IP6TABLES_MODULES=""
# Unload modules on restart and stop
# Value: yes|no, default: yes
# This option has to be 'yes' to get to a sane state for a firewall
# restart or stop. Only set to 'no' if there are problems unloading netfilter
# modules.
IP6TABLES_MODULES_UNLOAD="yes"
# Save current firewall rules on stop.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets stopped
# (e.g. on system shutdown).
IP6TABLES_SAVE_ON_STOP="no"
# Save current firewall rules on restart.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets
# restarted.
IP6TABLES_SAVE_ON_RESTART="no"
# Save (and restore) rule and chain counter.
# Value: yes|no, default: no
# Save counters for rules and chains to /etc/sysconfig/ip6tables if
# 'service ip6tables save' is called or on stop or restart if SAVE_ON_STOP or
# SAVE_ON_RESTART is enabled.
IP6TABLES_SAVE_COUNTER="no"
# Numeric status output
# Value: yes|no, default: yes
# Print IP addresses and port numbers in numeric format in the status output.
IP6TABLES_STATUS_NUMERIC="yes"
# Verbose status output
# Value: yes|no, default: yes
# Print info about the number of packets and bytes plus the "input-" and
# "outputdevice" in the status output.
IP6TABLES_STATUS_VERBOSE="no"
# Status output with numbered lines
# Value: yes|no, default: yes
# Print a counter/number for every rule in the status output.
IP6TABLES_STATUS_LINENUMBERS="yes"
# Reload sysctl settings on start and restart
# Default: -none-
# Space separated list of sysctl items which are to be reloaded on start.
# List items will be matched by fgrep.
#IP6TABLES_SYSCTL_LOAD_LIST=".ip_conntrack .bridge-nf"
```
username_1: Thanks for the configuration & locale. That rules out two possibilities.
I have spent several hours on this.
The problem, as I said earlier, is that `iptables-restore` is unable to delete an old ruleset because
netfilter thinks it's still in use. However, `iptables-restore` accepts a command to delete the only user. So it can't be in use.
I have used the oldest available machine - which is not as old as yours.
I am using exactly your config file (which is in the data that you sent).
I am setting up `iptables` exactly as they are before the start command executes. (That's in the data too.)
I am using the output of the iptables command as captured on your system.
It turns out that I can not reproduce the problem unless I make a mistake in the process. I did, which is why I was optimistic earlier.
I can successfully put the data that `BlockCountries` is sending to `iptables-restore` into `iptables-restore` here. So BlockCountries is generating good data.
The only difference that I can see is that you are using `iptables` version **1.3.5**, and the oldest version that I can build here is **1.4.15**. I have 1.3.0, but that version does not support `BlockCountries`. I think the latest version is something like **1.6.0**.
At this point, I think you should update your system.
* Clearly, it is facing the internet.
* The old kernel that you have is likely vulnerable to known bugs.
* We know that `netfilter` is broken for IPv6
* `Iptables-restore` appears to be broken with IPv4 & I can not debug it here.
* Even if we got `BlockCountries` working, other problems are sure to follow.
You can try to just update `iptables`, but I think you would do better to update the entire OS.
If you really want to try to make `iptables` work:
* `iptables` sources are available at ftp://ftp.netfilter.org/pub/iptables/
* There's a procedure that includes a more recent RPM that may work on your machine at
http://www.squldvision.info/2012/05/29/update-iptables-centos5/ 1.4.14 is probably new enough to work, but old enough to run on your old kernel.
But I do think that updating this machine - or moving its functions to another one - would be a much better use of your time.
username_0: thanks for your feedback I will look into updating iptables to 1.4, and see how it goes from there.
username_0: not had a good time, the link you posted should of worked for me but didn't as build worked be with restore area issues as the kernel effected the build, in the end i reinstalled stock version of iptables, which seem to fix issues with that error, as (start -update) had no issues in output, i think their may still be another with table-restore but its dose seem to be running OK, anyway just in-case their is better data i have added an updated debug file for you.
check the [link](http://morganmultimediagroup.com/debug.zip)
username_1: It's **not** running OK. `BlockCountries` uses `iptables-restore` to load the rules that it generates.
"Rules update failed" means just that. `BlockCountries` is unable to update the firewall with the latest rules. **Do not ignore the error.**
You are still/again running iptables 1.3.5. It's clearly broken. I can not fix `iptables`. **You have to run a version that works.**
The `start command` will work the first time the rules are loaded, because there is no old ruleset to remove. **Thereafter, it will fail. and you will run a stale ruleset.** This is not a `BlockCountries` problem. It's `iptables`.
This is easy to see in the debug data.
Before `BlockCountries` starts, we have a set of `BlockCountreis` rules installed. (as reported by `iptables -n -L`):
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
BLOCKCC0-I all -- 0.0.0.0/0 0.0.0.0/0
Chain BLOCKCC0-I (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 25,587,465,53
```
This shows that the `BLOCKCC0-I` chain has (as expected) exactly one reference, from the `INPUT` chain.
`BlockCountries` generates a new set of rules, based on the latest data. It needs to install the new rules, and remove the old ones.
To do this, `BlockCountries` includes these commands (among thousands, but only these matter):
```
-D INPUT -j BLOCKCC0-I
-F BLOCKCC0-I
-X BLOCKCC0-I
```
These commands
* delete the rule in `INPUT` that refers to the chain (references now should be zero)
* delete all the rules in the `BLOCKCC0-I` chain, and
* delete the `BLOCKCC0-I` chain.
The third command fails. That's the one at line 7074.
The only reasons for chain deletion to fail are a rule referencing it, or rules in the chain. But the preceding two commands ensure that neither should be true.
Thus, either `iptables-restore` is broken, or the `netfilter` service in the kernel is. They're part of the same package. `iptables` startup installs `netfilter`.
So `netfilter/iptables` is broken. You need to update. **You need to update.**
**I can't spend any more time chasing a broken environment.**
What I will do is add a check to `bcinstall` to verify that a recent version of iptables is installed. That should prevent anyone else from stumbling across this issue.
The oldest version of `iptables` that I'm running here is 1.14.10.
As I've said before, an internet-facing machine should be running more current software. But that's your business.
Status: Issue closed
username_0: OK, I have upgraded the OS to the latest version that I must use for production reasons (CentOS 6.7).
This uses iptables version 1.4.7 and it installed with no issues at all. The only bad message I got was from bcinstall saying it wants version 1.14.10, but CentOS 6 will be supported until Sept 2021 I think, and it's not even that old, so I think your script is being a bit unkind with the requirements. Anyway, if you have any questions please let me know.
username_1: I set the required version based on what I have here and knew works.
Based on your feedback, I'll update the requirement.
To make sure that the rest of the checks pass, you can change the value of IPTV on line 9 of bcinstall to 1.4.7 in the meantime.
If you're not using IPv6, you can run bcinstall with -S to disable the IPv6 checks.
Use -v if you want to see the analysis (and -h for all the options).
I'm glad that you've updated and that it solved your problems. |
DIYgod/RSSHub | 565517135 | Title: More than 20 items per feed
Question:
username_0: ### What feature is it?
Currently, when subscribing to, say, a Twitter feed, you only get the 20 most recent tweets.
### What problem does this feature solve?
Getting the history when subscribing to new people.
Answers:
username_1: RSS is used to get updates, not history
Status: Issue closed
|
ziglang/zig | 372356165 | Title: Syntax flaw: Block statements and terminating semicolon
Question:
username_0: Currently, block expressions (`for`, `if`, `while`) are only terminated with `;` if the body of that expression is not a block. This makes the grammar very hard (maybe even impossible) to make context-free. My best bet at formalising a grammar for this has been something along these lines:
```
Statement
: IfStatement
...
| Expr Semicolon
IfStatement
: "if" GroupedExpr option(Payload) Block
| "if" GroupedExpr option(Payload) Expr "else" option(Payload) Block
IfExpr
: "if" GroupedExpr option(Payload) Expr option("else" option(Payload) Expr)
```
Sadly, this grammar requires `N` token lookahead at best, and at worst it is ambiguous, because a `Block` is also an `Expr`.
Answers:
username_0: Consider this example:
```rust
test "" {
if (true)
if (true)
if (true)
if (true)
if (true)
if (true)
return;
}
```
username_0: I think the solution to this is related to #760 #114 ([this comment](https://github.com/ziglang/zig/issues/114#issuecomment-302235077)) and maybe even #1676.
We can change the syntax of the language so that syntactic constructs that will never pass semantic analysis will give a parse error. Consider this piece of code:
```
test "" {
1 + 1; // error: expression value is ignored
}
```
No matter what the LHS and RHS are for this operator, this code will never pass semantic analysis, because `+` always returns an expression whose value shouldn't be ignored. We can, therefore, split expressions into categories, where only some are valid as statements. This is the same solution as [this](https://github.com/ziglang/zig/issues/760#issuecomment-430938743). With a system like this, we can disallow the `if` expressions at the statement level and the ambiguity is solved.
@kyle-github We can then consider enforcing blocks, but it is not necessary to solve this issue (I think).
username_1: I'm not an expert on formal language theory, so I don't understand the problem in this issue. Does the syntax flaw manifest in any way other than purely theoretical?
username_0: @username_1 Well, if we want to have a formal grammar, it should be unambiguous, so that differently implemented parsers don't vary in behavior. Otherwise, what is the point of having a grammar?
We could ask the same question for #760. The stage1 compiler makes a choice, and always follows that choice, so in the implementation there is no ambiguity. We still consider it a problem though, because ambiguities make code harder to reason about and read (even if the parser is consistent).
username_0: Also, it makes using parser generators to parse the Zig language trivial, which helps with specifying and testing the grammar itself. If we have a grammar that can actually parse code through a generated parser, then, should compilers vary in what syntax they accept, you can point to the grammar and its generated parser and say "well, the parser that does the same as this generated one is correct". (This will help when new syntax has to be added to stage1 and stage2.)
username_0: I've also heard that different C++ compilers disagree on what syntax is valid. It'd be nice if we could avoid such a mess. :)
username_0: 3rdly! Having a grammar we can generate from allows for quick prototyping of syntax. We still have #114, and that is a big change. We should probably ensure we get it right before rewriting a 2000-line parser :)
username_1: Here are some excerpts from the grammar at the bottom of the langref docs:
```
Block = option(Symbol ":") "{" many(Statement) "}"
Statement = LocalVarDecl ";"
| Defer(Block)
| Defer(Expression) ";"
| BlockExpression(Block)
| Expression ";"
| ";"
Defer(body) = ("defer" | "errdefer") body
BlockExpression(body) = Block
| IfExpression(body)
| IfErrorExpression(body)
| TestExpression(body)
| WhileExpression(body)
| ForExpression(body)
| SwitchExpression
| CompTimeExpression(body)
| SuspendExpression(body)
```
I'm not totally clear on what a context-free grammar is, but I'm assuming that this parametric `(body)` pattern doesn't qualify for that.
I don't think this is ambiguous. You're supposed to match the patterns in highest-to-lowest precedence as listed. So if `Defer(Block)` matches, then don't bother trying to match `Defer(Expression) ";"`. Is this not good enough?
username_1: We don't need the rules in #114 to be encoded in the grammar. Those rules can be enforced after parsing by examining the AST. I'm more optimistic about that approach generating more helpful error messages anyway. I understand that that will result in sloppy implementations failing to reject invalid Zig programs, but I think that's a minor concern compared to implementations failing to parse correct Zig programs.
username_2: That's not true, there are a couple. Here's one from `std.mem`.
```
pub fn set(comptime T: type, dest: []T, value: T) void {
for (dest) |*d|
d.* = value;
}
```
username_0: I don't think we have to give up the shorthand versions of the block expressions to resolve this issue.
Also, we have the dangling `else` problem :)
```
Statement = IfStatement | Block | Expr ;
IfStatement = if ( Expr ) Block | if ( Expr ) Expr else IfStatement
Expr = 1 | 1 + Expr | IfExpr | Block
IfExpr = if ( Expr ) Expr | if ( Expr ) Expr else Expr
Block = { Statement }
./cfg-checker grammar.cfg
Found a sentential form with two different derivations:
if ( Expr ) if ( Expr ) Expr else Expr ;
Derivation 1:
0: Statement
1: Expr ;
2: IfExpr ;
3: if ( Expr ) Expr else Expr ;
4: if ( Expr ) IfExpr else Expr ;
5: if ( Expr ) if ( Expr ) Expr else Expr ;
Derivation 2:
0: Statement
1: Expr ;
2: IfExpr ;
3: if ( Expr ) Expr ;
4: if ( Expr ) IfExpr ;
5: if ( Expr ) if ( Expr ) Expr else Expr ;
```
username_0: Grammars are ambiguous as long as there is one input that has two ways of expanding the grammar. Some parsing techniques do not have the concept of "highest-to-lowest", so they will parse grammars differently from parsers that do if the grammar is ambiguous (LALR parsers expand the grammar into states, where a state can handle `N` rules at the same time).
username_0: More fun and ambiguous grammar:
```rust
async<if (true) A else B> fn()void {}
async<comptime A> fn()void {}
```
username_1: @username_0 is your work on the flex/bison parser online anywhere? That sounds like it might be a good starting point for overhauling the grammar specification.
username_0: @username_1 I've pushed my work to [here](https://github.com/username_0/zig-grammar2). It can parse most of Zig's std (except async calls and a few statements). Currently, I'm working on identifying which syntactic constructs bison emits conflict warnings for, and seeing if I can restructure the grammar to get rid of them.
username_3: ```
test "" {
if (true)
if (true)
if (true)
if (true)
if (true)
if (true)
return;
}
```
I don't see how this requires N token lookahead. I think lookahead refers to reading extra tokens to decide what to do with the already read tokens, for example you must read more before you reduce "a + b". Lookahead will always be required in some cases, so I don't see what's so bad about lookahead.
When you've seen `if (expr) { }` (in the general case; I know nothing about Zig), you can immediately reduce this to, say, an IfStmt. The parser does not have to look at anything after '}'. The same is true for ';' in the code above. No other token needs to be looked at to collapse the whole stack of if stmts. No token after the semicolon could possibly alter how this is parsed.
username_0: I have a solution to this, but it required changing when `if`s can be expressions.
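For the record, the direction of that change can be sketched in the notation used above (an illustration only, not the exact rules that landed): a statement-level `if` either takes a block body, or its expression body must be followed immediately by `;` or `else`.
```
Statement   = IfStatement | Block | AssignExpr ";"
IfStatement = "if" "(" Expr ")" Block [ "else" Statement ]
            | "if" "(" Expr ")" AssignExpr ( ";" | "else" Statement )
```
Because the expression-bodied form must be closed by `;` or `else` before the statement ends, the dangling-`else` inputs shown earlier no longer have two derivations.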
username_0: Fixed: #1685
Status: Issue closed
|
Zrips/CMI | 646910881 | Title: editctext completely broke,
Question:
username_0: **Description of issue:**
/editctext seems to be broken; we are unable to edit custom texts in game.
Auto pagination seems to be broken as well. Even if it's enabled, it won't auto-paginate.
---
**ERROR (DELETE IF YOU HAVE NO ERROR):**
```
[14:35:02 INFO]: Kristouffe issued server command: /cmi editctext test
[14:35:02 WARN]: java.lang.reflect.InvocationTargetException
[14:35:02 WARN]: at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
[14:35:02 WARN]: at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
[14:35:02 WARN]: at java.lang.reflect.Method.invoke(Unknown Source)
[14:35:02 WARN]: at com.username_1.CMI.Modules.RawMessages.RawMessageManager.send(RawMessageManager.java:109)
[14:35:02 WARN]: at com.username_1.CMI.Modules.RawMessages.RawMessage.show(RawMessage.java:689)
[14:35:02 WARN]: at com.username_1.CMI.Modules.RawMessages.RawMessage.show(RawMessage.java:673)
[14:35:02 WARN]: at com.username_1.CMI.Modules.RawMessages.RawMessage.show(RawMessage.java:719)
[14:35:02 WARN]: at com.username_1.CMI.Modules.CustomText.CTextManager.showCTextEditor(CTextManager.java:308)
[14:35:02 WARN]: at com.username_1.CMI.commands.list.editctext.perform(editctext.java:209)
[14:35:02 WARN]: at com.username_1.CMI.commands.CommandsHandler.onCommand(CommandsHandler.java:323)
[14:35:02 WARN]: at org.bukkit.command.PluginCommand.execute(PluginCommand.java:45)
[14:35:02 WARN]: at org.bukkit.command.SimpleCommandMap.dispatch(SimpleCommandMap.java:159)
[14:35:02 WARN]: at org.bukkit.craftbukkit.v1_16_R1.CraftServer.dispatchCommand(CraftServer.java:792)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.PlayerConnection.handleCommand(PlayerConnection.java:1908)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.PlayerConnection.a(PlayerConnection.java:1719)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.PacketPlayInChat.a(PacketPlayInChat.java:47)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.PacketPlayInChat.a(PacketPlayInChat.java:5)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.PlayerConnectionUtils.lambda$ensureMainThread$0(PlayerConnectionUtils.java:23)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.TickTask.run(SourceFile:18)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IAsyncTaskHandler.executeTask(IAsyncTaskHandler.java:136)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IAsyncTaskHandlerReentrant.executeTask(SourceFile:23)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IAsyncTaskHandler.executeNext(IAsyncTaskHandler.java:109)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.MinecraftServer.aZ(MinecraftServer.java:1136)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.MinecraftServer.executeNext(MinecraftServer.java:1129)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IAsyncTaskHandler.awaitTasks(IAsyncTaskHandler.java:119)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.MinecraftServer.sleepForTick(MinecraftServer.java:1090)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.MinecraftServer.v(MinecraftServer.java:1004)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.MinecraftServer.lambda$a$0(MinecraftServer.java:177)
[14:35:02 WARN]: at java.lang.Thread.run(Unknown Source)
[14:35:02 WARN]: Caused by: com.google.gson.JsonParseException: Unexpected empty array of components
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IChatBaseComponent$ChatSerializer.deserialize(IChatBaseComponent.java:229)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IChatBaseComponent$ChatSerializer.deserialize(IChatBaseComponent.java:148)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IChatBaseComponent$ChatSerializer.deserialize(IChatBaseComponent.java:101)
[14:35:02 WARN]: at com.google.gson.internal.bind.TreeTypeAdapter.read(TreeTypeAdapter.java:69)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.ChatDeserializer.a(SourceFile:493)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.ChatDeserializer.a(SourceFile:517)
[14:35:02 WARN]: at net.minecraft.server.v1_16_R1.IChatBaseComponent$ChatSerializer.a(IChatBaseComponent.java:358)
[14:35:02 WARN]: ... 29 more
[14:35:02 INFO]: [CMI] Failed to show json message with packets, using command approach
[14:35:02 INFO]: Invalid chat component: Unexpected empty array of components
[14:35:02 INFO]: ...istouffe" ["",{"text":"","extra":[{"text":"[+]"}],"hoverEvent":{"action":"show_text","value":{"text":"","extra":[{"text":"Page automatique: Vrai"}]}},"clickEvent":{"action":"run_command","value":"/cmi editctext test autopage false"}},{"text":"","extra":[ ]},{"text":"","extra":[{"text":"[+]"}],"hoverEvent":{"action":"show_text","value":{"text":"","extra":[{"text":"Alias automatique: Vrai"}]}},"clickEvent":{"action":"run_command","value":"/cmi editctext test autoalias false"}},{"text":"","extra":[ ]},{"text":"","extra":[{"text":"[-]"}],"hoverEvent":{"action":"show_text","value":{"text":"","extra":[{"text":"Permission requise: Faux\ncmi.command.ctext.test"}]}},"clickEvent":{"action":"run_command","value":"/cmi editctext test permreg true"}},{"text":"","extra":[ ]},{"text":"","extra":[{"text":"<NouvelleLigne>"}],"hoverEvent":{"action":"show_text","value":{"text":"","extra":[{"text":"Ajouter une nouvelle ligne"}]}},"clickEvent":{"action":"run_command","value":"/cmi editctext test 1 add 0"}},{"text":"","extra":[{"text":" "}]},{"text":"","extra":[{"text":"<NouvellePage>"}],"hoverEvent":{"action":"show_text","value":{"text":"","extra":[{"text":"Créer une nouvelle page"}]}},"clickEvent":{"action":"run_command","value":"/cmi editctext test 1 newpage"}},{"text":"","extra":[{"text":" "}]},{"text":"","extra":[{"text":"<RetirerUnePage>"}],"hoverEvent":{"action":"show_text","value":{"text":"","extra":[{"text":"Retirer une page"}]}},"clickEvent":{"action":"run_command","value":"/cmi editctext test 1 removepage"}}]<--[HERE]
```
**Cmi Version (using`/cmi version`):**
[14:37:55 INFO]: --------------------------------------------------
[14:37:55 INFO]: CMI: 8.7.0.2 SqLite
[14:37:55 INFO]: Server: Paper(3) 1.16.1-R0.1-SNAPSHOT
[14:37:55 INFO]: CMI economy: Vrai Vault: 1.7.3-b CMI Chat: Vrai
[14:37:55 INFO]: Modules -> 47 enabled 4 disabled: coloredArmor, votifier, holograms, dynamicSigns
[14:37:55 INFO]: --------------------------------------------------
**Server Type (Spigot/Paperspigot/etc):**
paper 1.16.1
**Server Version (using `/ver`):**
[14:41:52 INFO]: This server is running Paper version git-Paper-10 (MC: 1.16.1) (Implementing API version 1.16.1-R0.1-SNAPSHOT)
[14:41:52 INFO]: Checking version, please wait...
[14:41:52 INFO]: Previous version: git-Paper-3 (MC: 1.16.1)
[14:41:52 INFO]: You are running the latest version
**Relevant plugins (Delete if this isn't needed):**
only cmi atm
Status: Issue closed
Answers:
username_1: Will be fixed with next update |
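For reference, the JSON being sent (visible in the log above) contains empty component arrays like the excerpt below, and that is exactly the shape 1.16's chat-component parser rejects with "Unexpected empty array of components". Presumably the fix replaces these empty spacers with non-empty components such as `{"text":" "}`, which already appear elsewhere in the same payload.
```json
{"text":"","extra":[ ]}
```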
IonicaBizau/coins-ph | 318694994 | Title: topup sell order is unauthorized
Question:
username_0: Hi. Thanks for this amazing package. I'm working with the coins-ph topup API, but I keep getting a **404 error** using your npm package 1.1.8. Crypto exchange and balance checking are working fine. API credentials are properly set.
Here is my code:
```
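// Note: "client" is assumed to be an already-authenticated coins-ph client instance.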
client.createSellorder( {
"payment_outlet": "load-globe",
"btc_amount": 0.000022,
"currency": "PHP",
"currency_amount_locked": 10,
"pay_with_wallet": "PBTC",
"phone_number_load": "+639054044313"
}, (err, data) => {
console.log(err || data);
});
```
Here is the result: https://ibb.co/nhQgwx
Answers:
username_0: The CoinsPh team said the API is for business accounts only.
OAuth authentication works though, so I'm closing this.
Status: Issue closed
|
delphi-hub/delphi | 334066864 | Title: Infrastructure: Install update hooks so this repository gets updated when the component repos do
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently the submodule dependencies have to be updated manually. This is not so much a problem for the master branch as this will be a release step, but since we introduced the develop branch with regular updates of the submodule components, this has become a burden.
**Describe the solution you'd like**
Please implement a GitHub-based mechanism that updates the submodule references whenever there is a push to the respective branch of a component repository.
**Describe alternatives you've considered**
Since we build the components using TravisCI, it might also be possible to trigger this from a component build.
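One possible shape for the CI-triggered variant, sketched as a post-build step run in a component repository (repository URL, branch, component path, and authentication are placeholders/omitted, as shown below):
```sh
# Sketch: bump this component's submodule pointer in the umbrella repository.
git clone --branch develop https://github.com/delphi-hub/delphi.git umbrella
cd umbrella
git submodule update --init --remote -- path/to/component
git commit -am "Bump component submodule to latest develop"
git push origin develop
```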
**Additional context**
None
Answers:
username_0: Closed by #12
Status: Issue closed
|
paulholden/moodle-local_cohortrole | 993064785 | Title: usernotconfirmed exception thrown checking require_active_user()
Question:
username_0: Hi, I'm using Moodle 3.9 with the latest version of your local_cohortrole plugin in conjunction with [local_profilecohort](https://moodle.org/plugins/local_profilecohort). The latter defines users' cohort membership using specific rules, firing an event that is then handled by your plugin.
The issue is that `core_user::require_active_user($user)` throws an Exception reporting a `usernotconfirmed` error.
This is understandable because the $user, who is registering with Moodle for the first time, isn't confirmed yet. In fact, when the $user confirms their registration (e.g. by email link), no events are handled by your plugin and the user isn't assigned to the cohort.
Thank you for your attention,
Alessandro |
nodejs/node-gyp | 853635799 | Title: npm i not running !
Question:
username_0:
```
aditya@aditya-ASUS-Gaming-FX570UD:~/Desktop/stuff/gsoc/wikipedia-preview$ npm i
npm ERR! code 1
npm ERR! path /home/aditya/Desktop/stuff/gsoc/wikipedia-preview/node_modules/canvas
npm ERR! command failed
npm ERR! command sh -c node-gyp rebuild
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using [email protected]
npm ERR! gyp info using [email protected] | linux | x64
npm ERR! gyp info find Python using Python version 3.8.5 found at "/usr/bin/python3"
npm ERR! gyp info spawn /usr/bin/python3
npm ERR! gyp info spawn args [
npm ERR! gyp info spawn args '/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
npm ERR! gyp info spawn args 'binding.gyp',
npm ERR! gyp info spawn args '-f',
npm ERR! gyp info spawn args 'make',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args '/home/aditya/Desktop/stuff/gsoc/wikipedia-preview/node_modules/canvas/build/config.gypi',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args '/usr/local/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args '/home/aditya/.cache/node-gyp/15.12.0/include/node/common.gypi',
npm ERR! gyp info spawn args '-Dlibrary=shared_library',
npm ERR! gyp info spawn args '-Dvisibility=default',
npm ERR! gyp info spawn args '-Dnode_root_dir=/home/aditya/.cache/node-gyp/15.12.0',
npm ERR! gyp info spawn args '-Dnode_gyp_dir=/usr/local/lib/node_modules/npm/node_modules/node-gyp',
npm ERR! gyp info spawn args '-Dnode_lib_file=/home/aditya/.cache/node-gyp/15.12.0/<(target_arch)/node.lib',
npm ERR! gyp info spawn args '-Dmodule_root_dir=/home/aditya/Desktop/stuff/gsoc/wikipedia-preview/node_modules/canvas',
npm ERR! gyp info spawn args '-Dnode_engine=v8',
npm ERR! gyp info spawn args '--depth=.',
npm ERR! gyp info spawn args '--no-parallel',
npm ERR! gyp info spawn args '--generator-output',
npm ERR! gyp info spawn args 'build',
npm ERR! gyp info spawn args '-Goutput_dir=.'
npm ERR! gyp info spawn args ]
npm ERR! Package pixman-1 was not found in the pkg-config search path.
npm ERR! Perhaps you should add the directory containing `pixman-1.pc'
npm ERR! to the PKG_CONFIG_PATH environment variable
npm ERR! No package 'pixman-1' found
npm ERR! gyp: Call to 'pkg-config pixman-1 --libs' returned exit status 1 while in binding.gyp. while trying to load binding.gyp
npm ERR! gyp ERR! configure error
npm ERR! gyp ERR! stack Error: `gyp` failed with exit code: 1
npm ERR! gyp ERR! stack at ChildProcess.onCpExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:351:16)
npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:369:20)
npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
npm ERR! gyp ERR! System Linux 5.4.0-52-generic
npm ERR! gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
npm ERR! gyp ERR! cwd /home/aditya/Desktop/stuff/gsoc/wikipedia-preview/node_modules/canvas
npm ERR! gyp ERR! node -v v15.12.0
npm ERR! gyp ERR! node-gyp -v v7.1.2
npm ERR! gyp ERR! not ok
```
Answers:
username_1: Happens for me too.
username_0: How do you fix this?
username_1: For me it was a mismatch between npm, node and the packages in the project.
username_2: Does the issue still persist?
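For anyone else hitting this: the key line in the gyp output is "Package pixman-1 was not found in the pkg-config search path", i.e. the native libraries that the `canvas` module compiles against are missing. On Debian/Ubuntu they can typically be installed with something like the following (package names vary by distribution):
```sh
sudo apt-get install build-essential libcairo2-dev libpango1.0-dev \
    libpixman-1-dev libjpeg-dev libgif-dev librsvg2-dev
```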
npm/registry | 216902943 | Title: problem publishing uglify-js 2.8.16
Question:
username_0: When I did `npm publish`, it seemed to retry quite a number of times with some "error parsing JSON" messages. Then it claims to have published `uglify-js@2.8.16`.
But when I visit https://www.npmjs.com/package/uglify-js or try `npm install uglify-js`, I only get `2.8.15`, which is the previous version.
Trying `npm publish` again results in an error complaining `2.8.16` has already been published.
Can somebody confirm the latest version of `uglify-js` has been published on `npm` properly?
Answers:
username_0: Just tried to install and got this weird `404` once:
```
npm install uglify-js
npm ERR! fetch failed https://registry.npmjs.org/uglify-js/-/uglify-js-2.8.16.tgz
npm WARN retry will retry, error on last attempt: Error: fetch failed with status code 404
`-- [email protected]
+-- [email protected]
+-- [email protected]
`-- [email protected]
+-- [email protected]
+-- [email protected]
| +-- [email protected]
| | +-- [email protected]
| | | +-- [email protected]
| | | | `-- [email protected]
| | | +-- [email protected]
| | | `-- [email protected]
| | `-- [email protected]
| +-- [email protected]
| `-- [email protected]
+-- [email protected]
`-- [email protected]
```
Status: Issue closed
|
kedacore/keda | 606114763 | Title: kubectl get scaledobject should show related trigger authentication
Question:
username_0: As of today, `kubectl get scaledobject` shows the following information:
```
NAME                         DEPLOYMENT        TRIGGERS
order-processor-autoscaler   order-processor   azure-servicebus
```
However, it would be good if it also showed the related trigger authentication:
```
NAME                         DEPLOYMENT        TRIGGERS           AUTHENTICATION
order-processor-autoscaler   order-processor   azure-servicebus   trigger-auth-service-bus-orders
sales-processor-autoscaler   order-processor   azure-servicebus   <none>
```
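For context, the extra kubectl columns come from `additionalPrinterColumns` in the ScaledObject CRD, so a fix would presumably add an entry along these lines (a sketch; the exact JSONPath KEDA ends up using may differ):
```yaml
additionalPrinterColumns:
  - name: Authentication
    type: string
    jsonPath: .spec.triggers[*].authenticationRef.name
```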
Answers:
username_1: fixed in #794
Status: Issue closed
username_0: Just verifying - @username_1 I presume this still works now that we have multiple triggers
username_1: Yeah, it should work :) |
difrad/truman_esl_empathy | 342444660 | Title: Double Post
Question:
username_0: **Describe the bug**
When making a new post, it posts twice.
**To Reproduce**
Steps to reproduce the behavior:
Make a new post
**Expected behavior**
Only posts to feed once
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [iOS]
- Browser [Chrome]
- Version [10.12]
**Smartphone (please complete the following information):**
N/A
**Additional context**
Add any other context about the problem here.
Answers:
username_1: Fixed this issue in the latest push
Status: Issue closed
|
andydandy74/ClockworkForDynamo | 533760337 | Title: Need an alternative to Solid.ByUnion
Question:
username_0: Since Solid.ByUnion often fails when merging larger numbers of solids (e.g. due to coinciding edges), I need a node that tries to create a minimal number of solids from a list. If something can't be unioned, create another solid.
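A rough sketch of that fallback strategy, e.g. as it might look inside a Dynamo Python node (the `union` call stands in for whatever pairwise union the geometry API actually exposes):
```python
def union_with_fallback(solids):
    """Greedily union solids; keep a solid separate whenever no union succeeds."""
    results = []
    for solid in solids:
        for i, merged in enumerate(results):
            try:
                results[i] = merged.union(solid)  # assumed pairwise union API
                break  # merged successfully, continue with the next solid
            except Exception:
                continue  # this union failed, try the next partial result
        else:
            results.append(solid)  # could not be unioned anywhere: new solid
    return results
```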
Answers:
username_0: - [ ] Migrate to 2.x
- [ ] Add to node list
- [ ] Add to samples
- [ ] Add to version history
Status: Issue closed
|
GoogleContainerTools/skaffold | 854965305 | Title: Premature end of Content-Length delimited message body
Question:
username_0: Seen on Travis: note that it's fetching from the GCP Maven mirror.
```
=== RUN TestDebugEventsRPC_StatusCheck
time="2021-04-09T17:11:46Z" level=info msg="Running [skaffold build --default-repo gcr.io/k8s-skaffold] in testdata/jib"
helper.go:234: skaffold build: exit status 1, Generating tags...
- skaffold-jib -> gcr.io/k8s-skaffold/skaffold-jib:v1.21.0-35-g9045293
Checking cache...
- skaffold-jib: Error checking cache.
panic.go:617: getting hash for artifact "skaffold-jib": getting dependencies for "skaffold-jib": could not fetch dependencies for workspace .: initial Jib dependency refresh failed: failed to get Jib dependencies: running [/skaffold/integration/testdata/jib/mvnw jib:_skaffold-fail-if-jib-out-of-date -Djib.requiredVersion=1.4.0 --no-transfer-progress --non-recursive jib:_skaffold-files-v2 --quiet --batch-mode]
- stdout: "[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:3.0.0:_skaffold-fail-if-jib-out-of-date (default-cli) on project hello-java: Execution default-cli of goal com.google.cloud.tools:jib-maven-plugin:3.0.0:_skaffold-fail-if-jib-out-of-date failed: Plugin com.google.cloud.tools:jib-maven-plugin:3.0.0 or one of its dependencies could not be resolved: Could not transfer artifact org.apache.maven:maven-core:jar:3.6.3 from/to google-maven-central (https://maven-central.storage-download.googleapis.com/maven2/): GET request of: org/apache/maven/maven-core/3.6.3/maven-core-3.6.3.jar from google-maven-central failed: Premature end of Content-Length delimited message body (expected: 633,028; received: 309,904) -> [Help 1]\n[ERROR] \n[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.\n[ERROR] Re-run Maven using the -X switch to enable full debug logging.\n[ERROR] \n[ERROR] For more information about the errors and possible solutions, please read the following articles:\n[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException\n"
- stderr: ""
- cause: exit status 1
``` |
python-amazon-mws/python-amazon-mws | 688708409 | Title: How to upload PDF invoice?
Question:
username_0: Hi,
I'm trying to upload a PDF invoice.
The feed submission works, but I get this error:
"Feed Processing Summary:
Number of records processed 1
Number of records successful 0
79503 Error
**Invoice uploaded is not pdf for shipmentId** N/A, orderId 405-example-4561315 and InvoiceNumber 11-2020-2-1"
It seems that MWS does not accept my PDF, but when I check the Base64 string, it is correct.
This is my request code:
```python
feeds_api = mws.Feeds(region="UK")
base64_invoice = base64.b64encode(pdf_invoice.read())
response = feeds_api.submit_feed(
    feed=base64_invoice,
    feed_options={"OrderId": order_id, "InvoiceNumber": invoice_id},
    feed_type="_UPLOAD_VAT_INVOICE_",
    marketplaceids=["Example"]
)
```
Am I doing something wrong with the base64?
Thanks for your help
Answers:
username_1: The file is sent as bytes, not base64. Try `.read().encode('iso-8859-1')`.
username_0: Great! It worked with just `pdf_invoice.read()` for me.
Thanks!
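For anyone landing here later, a minimal sketch of the working call: send the PDF as raw bytes, not base64 (credentials are omitted as in the original post, and the IDs are placeholders).
```python
from mws import mws

feeds_api = mws.Feeds(region="UK")  # credentials omitted for brevity

with open("invoice.pdf", "rb") as pdf_invoice:
    response = feeds_api.submit_feed(
        feed=pdf_invoice.read(),  # raw bytes, no base64.b64encode()
        feed_type="_UPLOAD_VAT_INVOICE_",
        feed_options={"OrderId": "405-example-4561315", "InvoiceNumber": "11-2020-2-1"},
        marketplaceids=["Example"],
    )
```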
Status: Issue closed
jwenjian/ghiblog | 483158276 | Title: [From Instapaper] : Bootstrapping with Alt-svc · HTTP/3 explained
Question:
username_0: Bootstrapping with Alt-svc · HTTP/3 explained<br>
The Alt-svc alternative service (Alt-Svc:) header and its corresponding ALT-SVC HTTP/2 frame were not designed specifically for QUIC or HTTP/3. They were designed so that a server can tell a client: "look, I offer the same service on this host, on this port, over this protocol". See RFC 7838 for details.…<br>
<br>
--- August 21, 2019 at 09:49AM<br>
via Instapaper https://ift.tt/2P5fUvW |
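For reference, an Alt-Svc header advertising an HTTP/3 endpoint looks like this (syntax per RFC 7838; `ma` is the advertisement's lifetime in seconds):
```
Alt-Svc: h3=":443"; ma=86400
```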
xamarin/Xamarin.Forms | 740712421 | Title: Xamarin Tizen for Samsung SMART Signage Platform
Question:
username_0: Does xamarin support for Samsung SMART Signage Platform?
.tpk format is not supported by it.
How do create app and install on Signage TV?
Link: https://displaysolutions.samsung.com/solutions/partner-solution/sssp
Answers:
username_1: cc @username_2
username_2: I'm not sure, but as far as I know SSSP supports web apps (*.wgt, not *.tpk) only. For more detail, please refer to the link below. http://samsungdforum.com/B2B/Introduction/Sssp_fea
username_0: How can i generate Tizen build in *.wgt format ?
username_2: Refer to https://docs.tizen.org/application/web/get-started/overview/
username_0: Meaning that there is no support for Xamarin.Forms right now.
Is there any plan to support it in the future?
username_3: @username_2 can this be closed?
username_2: @username_3 Yes. :-)
Status: Issue closed
|
flutter/flutter | 389620045 | Title: Unable To Install Release
Question:
username_0: ## Steps to Reproduce
Using Android Studio on Linux
Perform the codelab steps in:
https://flutter.io/docs/get-started/codelab
Write your first Flutter app, part 1
Plug in phone with USB cable.
Add Additional Arguments to Configuration:
--release --no-track-widget-creation flags
(This step was suggested due to another bug).
Got confusing message:
Target file "flags" not found.
## Logs
```
bessermt@bessermt-Latitude-E6440 ~ $ flutter doctor -v
[✓] Flutter (Channel stable, v1.0.0, on Linux, locale en_US.UTF-8)
• Flutter version 1.0.0 at /opt/flutter
• Framework revision 5391447fae (11 days ago), 2018-11-29 19:41:26 -0800
• Engine revision 7375a0f414
• Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
• Android SDK at /home/bessermt/Android/Sdk
• Android NDK at /home/bessermt/Android/Sdk/ndk-bundle
• Platform android-28, build-tools 28.0.3
• Java binary at: /opt/android-studio/jre/bin/java
• Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1136-b06)
• All Android licenses accepted.
[✓] Android Studio (version 3.2)
• Android Studio at /opt/android-studio
• Flutter plugin version 31.1.1
• Dart plugin version 181.5656
• Java version OpenJDK Runtime Environment (build
1.8.0_152-release-1136-b06)
[✓] Connected device (2 available)
• ONEPLUS A5000 • 31acfaea • android-arm64 • Android 8.1.0
(API 27)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API
28) (emulator)
• No issues found!
bessermt@bessermt-Latitude-E6440 ~ $
```
Answers:
username_1: Can you please post the full console output?
username_0: Cut and pasted right from the console:
**Target file "flags" not found.**
username_0: Yes, that was all that was there. I did install into /opt/flutter since that's one of the two suggested locations according to the Android Studio install rules. I followed the suggested fix for that which was to make myself both owner and group for permissions.
The suggestion of tar xf ~/Downloads/flutter_linux_v1.0.--stable.tar.xz isn't a good suggestion. Besides, the instructions don't actually say to extract there, it says "Extract the file in the desired location, for example:"... I don't have a development folder and my dev folder is only temporary. My install is for multiple users and so I chose the most standard way I could find.
I don't see any path for the file "flags", so I'm extra confused.
username_0: Thanks. I think it should install like Android Studio as root, but for now the workaround seems to be making yourself the owner and group. BTW, tar isn't good for installs since my understanding is that it was designed to be a backup utility and therefore keeps the user id of the original owner. This user won't be on the install computer, so it defaulted to the id of 1024. Hope that helps.
username_2: The original issue for which this was filed is a confusing command in the codelab, which has since been fixed. In general, we need better installation instructions for our users, but that is out of the scope of this bug.
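For readers of this thread: only the options themselves belong in the run configuration's Additional Arguments field. The trailing word "flags" in the codelab sentence was prose, not part of the command, so the field should contain just:
```
--release --no-track-widget-creation
```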
Status: Issue closed
|
opensalt/opensalt | 304015978 | Title: The create framework form should use a modal dialog and not reference "LsDoc"
Question:
username_0: Login as an editor and click create a framework. The form is on the page not in a dialog. Also it include "LsDoc Creation" which is no longer correct in case and not user friendly either. Should simply say Create New Framework Package |
square/wire | 1147064587 | Title: Gradle DSL does not work with version catalogs
Question:
username_0: You cannot do `srcJar(libs.whatever)`, for example. This returns a `Provider` of type [`MinimalExternalModuleDependency`](https://docs.gradle.org/current/javadoc/org/gradle/api/artifacts/MinimalExternalModuleDependency.html).
Would be nice to help completely eliminate dealing with string coordinates.
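Until the DSL accepts providers directly, one workaround is to unwrap the catalog entry yourself. A Gradle Kotlin DSL sketch (`libs.whatever` is a placeholder; assumes the version is declared directly in the catalog):
```kotlin
wire {
  sourcePath {
    val dep = libs.whatever.get()  // MinimalExternalModuleDependency
    srcJar("${dep.module.group}:${dep.module.name}:${dep.versionConstraint.requiredVersion}")
  }
}
```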
Answers:
username_1: Dup of #1946
Status: Issue closed
|
richardltyler/vueniverse | 819595556 | Title: Deployment
Question:
username_0: **Describe the solution you'd like**
- Deploy website on heroku
- [here is a link](https://www.youtube.com/watch?v=GBYmAMBuoc0&feature=youtu.be) to a tutorial that I pulled from the mod3 calendar on Thursday 2/25 |