colymba/GridFieldBulkEditingTools (issue 129301984)
Title: GridFieldBulkUpload_Request getDisplayFolderName
Question:
username_0: When I try to select files from the ones I've already uploaded, the form does not load because GridFieldBulkUpload_Request can't find the function getDisplayFolderName. I've fixed it by adding the function like this:
```php
public function getDisplayFolderName()
{
    /*
    $uploadField = $this->getUploadField();
    return $uploadField->handleSelect($request);
    */
    // delegate to the wrapped UploadField, which does implement it
    $uploadField = $this->getUploadField();
    return $uploadField->getDisplayFolderName();
}
```
in the file GridFieldBulkUpload_Request.php.
Trace:
```
Object->__call(getDisplayFolderName,Array)
UploadField.php:1537
GridFieldBulkUpload_Request->getDisplayFolderName()
UploadField.php:1537
UploadField_SelectHandler->Form()
ViewableData.php:402
ViewableData->obj(Form,,,1)
ViewableData.php:475
ViewableData->XML_val(Form,,1)
call_user_func_array(Array,Array)
SSViewer.php:179
SSViewer_Scope->__call(XML_val,Array)
SSViewer.php:550
```
Status: Issue closed
---
NeverSinkDev/NeverSink-Filter (issue 1040375161)
Title: Rare tiering customizations are not loaded properly from save state
Question:
username_0: I have customized rare tiers for Two-Handed Maces, Warstaves, and (Armour) body armour base types in the "ENDGAME - RARE ITEMS (LVL 65+)" section.
The filter save state seems to save properly, but when loaded again my tiering customizations are not loaded.
Customizations I have made:
- Astral Plate T1
- Glorious Plate T2
- Coronal Maul, Terror Maul, Imperial Maul, Karui Maul, T1
- Meatgrinder, Piledriver, Colossus Mallet T2
- Judgement Staff, Maelström Staff, Ezomyte Staff T1
- Foul Staff T2
After reloading the save state you can see that my Warstaves customizations are gone and are returned to the filter's defaults:

I had previously saved and re-loaded this save state multiple times without experiencing this issue.
Shared save state:
https://www.filterblade.xyz/?profile=octeris&saveState=OOIUIBWYR827LM&platform=pc
Answers:
username_1: A very recent update introduced a bug where BaseType-Matrix changes were not saved correctly. I just deployed a hotfix for this. Can you verify that it resolves your issue as well? Sadly, I presume your previous changes can't be fully recovered, since they were still saved incorrectly.
username_0: @username_1 Yep, it looks like it is fixed now. Thank you!
Status: Issue closed
---
manga-download/hakuneko (issue 874663727)
Title: [Site Request] HolyManga
Question:
username_0: **Name of the website**
HolyManga
**Website urls (examples below)**
- Site: https://w26.holymanga.net/
- Manga List: https://w26.holymanga.net/manga-list/
- Manga example: https://w26.holymanga.net/jujutsu-kaisen/
- Chapter online viewer example: https://w26.holymanga.net/jujutsu-kaisen-chap-1/
**Languages**
English
**Website relationship**
If applicable describe the relation with any other website (eg : "alternative domain, copy of ...").
**Additional details**
Add any other context details that you may have found (like template to reuse)
Answers:
username_1: Already supported, please check before creating tickets
Status: Issue closed
---
dart-lang/sdk (issue 509179076)
Title: Output divergence with nightly dartfuzz
Question:
username_0: ```dart
NO-FP NO-FFI FLAT : AOT-ReleaseX64 - KBC-MIX-noVFP-ReleaseSIMARM: !DIVERGENCE! 1.62:859021245 (output)
-- BEGIN REPRODUCE --
dartfuzz.dart --no-fp --no-ffi --flat --seed 859021245 /b/s/w/itjVmpLy/dart_fuzzMDHVSD/fuzz.dart
-- RUN 1 --
DART_CONFIGURATION=ReleaseX64 /b/s/w/ir/pkg/vm/tool/precompiler2 /b/s/w/itjVmpLy/dart_fuzzMDHVSD/fuzz.dart /b/s/w/itjVmpLy/dart_fuzzMDHVSD/snapshot
/b/s/w/ir/pkg/vm/tool/dart_precompiled_runtime2 /b/s/w/itjVmpLy/dart_fuzzMDHVSD/snapshot
-- RUN 2 --
/b/s/w/ir/pkg/vm/tool/gen_kernel --gen-bytecode --platform=/b/s/w/ir/out/ReleaseSIMARM/vm_platform_strong.dill -o /b/s/w/itjVmpLy/dart_fuzzMDHVSD/out.dill /b/s/w/itjVmpLy/dart_fuzzMDHVSD/fuzz.dart
/b/s/w/ir/out/ReleaseSIMARM/dart --enable-interpreter --no-use-vfp --old_gen_heap_size=128 /b/s/w/itjVmpLy/dart_fuzzMDHVSD/out.dill
-- END REPRODUCE --
```
Answers:
username_0: [fuzz859021245.dart.txt](https://github.com/dart-lang/sdk/files/3745858/fuzz859021245.dart.txt)
Status: Issue closed
username_0: Ran this many times on master with all recent fixes. No longer reproduces.
---
scikit-image/scikit-image (issue 526239325)
Title: io documentation is incomplete
Question:
username_0: ## Description
`skimage.io.collection.alphanumeric_key` is a useful function (my second addition to skimage! :smile:), but it is currently not documented (https://scikit-image.org/docs/dev/api/skimage.io.html), it is importable from `skimage.io.collection` instead of `skimage.io` (is this how we want it?), and it cannot be found by `skimage.lookfor`.
## Way to reproduce
```python
import skimage
skimage.lookfor('alphanumeric_key')
from skimage.io.collection import alphanumeric_key # works
```
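(If it should be importable from `skimage.io` directly, I presume a re-export in the subpackage `__init__` would do; a sketch, where the exact contents of that `__init__` are my assumption:)
```python
# sketch for skimage/io/__init__.py: re-export so the function is public
from .collection import alphanumeric_key

__all__ = ['alphanumeric_key']  # merged with the existing entries in practice
```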
Answers:
username_1: @username_0 I recommend renaming this function to something that hints at what it does, and moving it from io to utilities.
username_2: Replaced by #4513
Status: Issue closed
---
fuzzitdev/javafuzz (issue 526581258)
Title: ClassNotFoundException jacoco
Question:
username_0: I have followed the steps in the readme multiple times but keep getting this exception; am I missing something obvious here?
```
java.lang.ClassNotFoundException: org.jacoco.agent.rt.RT
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
at org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass(ClassRealm.java:271)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:247)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:239)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at dev.fuzzit.javafuzz.core.Fuzzer.<init>(Fuzzer.java:25)
at org.fuzzitdev.javafuzz.maven.FuzzGoal.execute(FuzzGoal.java:62)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:192)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
hello mojo
```
Answers:
username_1: Hey @username_0 :) Try the following to reproduce one of the examples:
```bash
docker run -it maven:3.6.2-jdk-11 /bin/bash
git clone https://github.com/fuzzitdev/javafuzz.git
cd javafuzz
mvn install
cd examples
wget -O jacocoagent.jar https://github.com/fuzzitdev/javafuzz/raw/master/javafuzz-maven-plugin/src/main/resources/jacocoagent-exp.jar
MAVEN_OPTS="-javaagent:jacocoagent.jar" mvn javafuzz:fuzz -DclassName=dev.fuzzit.javafuzz.examples.FuzzYaml
```
There were indeed a few typos in the tutorial, thanks for pointing them out. (Specifically, it should be `MAVEN_OPTS` and not `MAVEN_OPTIONS`.)
Status: Issue closed
username_0: Thanks, got it working now, but I have some feedback if you're interested, @username_1:
1) It's named "core" on Maven Central; most projects would use something like "javafuzz-core" as the artifact id, which makes a lot more sense when it shows up in reports etc.
2) Your readme has the snapshot version in the dependency, which isn't available on Maven Central.
3) <plugin>...</plugins> is misspelled in the readme ;)
4) The readme is ordered a bit strangely: you say "The first step is to implement the following function" before the dependency is even installed. It's a minor thing, but it's nice if you can follow it step by step.
5) It doesn't make much sense for AbstractFuzzTarget to be an abstract class when it doesn't have any code; use an interface so we can still have inheritance in the test class if we want to.
6) The plugin step "fuzz" uses <phase>initialize</phase>, which means it tries to run before compile; that's a bit annoying when trying to configure it to run from our pom.xml. I think the test phase makes more sense.
username_1: Thanks for the detailed feedback! I'll try to implement those soon. Pull requests are welcome as well :)
---
ikedaosushi/tech-news (issue 539246599)
Title: I tried making a fire ant identification service with AutoML
Question:
username_0: I tried making a fire ant identification service with AutoML.
As a disclaimer: if you find an ant that looks like a fire ant, please see the Ministry of the Environment's website below.
https://ift.tt/36EkJjB
---
simpligility/android-maven-plugin (issue 200147222)
Title: ResourceClassGenerator always generates final fields
Question:
username_0: It seems that the old version of SymbolWriter from the android builder always writes the fields as final fields.
As of 2.2.3, the SymbolWriter class allows specifying whether to use `final` fields.
This should probably be fixed for aar libraries as their tests currently fail with robolectric 3.2.1 (see https://github.com/robolectric/robolectric/issues/2847)
Answers:
username_0: Tried updating to the newer 2.2.3 but I don't know how to fix ManifestMergerMojo, since that class seems to depend heavily on classes which are gone in 2.2.3
username_1: I can help you with that class. Do you have a branch with the ongoing changes?
username_0: Not yet, my "fix" was to always call the SymbolWriter in a way that does not use final fields, but that would be wrong. I'm not sure how I can detect if the generation is run during test lifecycle (and inside an aar library).
username_0: Pushed my minimal changes: https://github.com/sprylab/android-maven-plugin/tree/final_ids
username_0: I've created a WIP PR: #757
username_2: Hi guys.
First of all thank you for your great work!
I've noticed that you are currently working on [PR:#757](https://github.com/simpligility/android-maven-plugin/pull/757)
Do you know if there is a pre-release version available for getting and testing the fixed state?
Thanks
username_1: You can build it locally and try it out. It would be good to know how it works for your use cases
Status: Issue closed
username_0: Fixed with #757
---
mockery/mockery (issue 603967010)
Title: Arg matching with assoc arrays - order of elements
Question:
username_0: Hi
I am mocking a method and comparing the args it receives - like this;
```php
$this->mock('Class')
->shouldReceive('Method')
->once()
->withArgs([$entry]);
```
However, I have an odd issue: the (associative) array elements are sometimes in a slightly different order on different machines (i.e. in Bitbucket Pipelines they differ from my home computer), and if they are not in exactly the same order, Mockery throws an exception saying it can't find the given args. This is super annoying, especially because I really don't care about the ordering (since it's an assoc array).
Is there a way to compare arrays without caring about the order of elements when they are associative? Any other recommendations?
Thanks!
Answers:
username_1: Hey, you could use [Mockery::on](http://docs.mockery.io/en/latest/cookbook/mockery_on.html) to do this.
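(A minimal sketch of that suggestion, assuming the same mock and `$entry` as in the question; `Mockery::on` hands the actual argument to a closure, and PHP's loose array comparison `==` already ignores key order for associative arrays:)
```php
$this->mock('Class')
    ->shouldReceive('Method')
    ->once()
    ->with(Mockery::on(function ($actual) use ($entry) {
        // == compares key/value pairs regardless of order;
        // === would also require identical key ordering
        return $actual == $entry;
    }));
```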
Status: Issue closed
username_0: Thanks @username_1, I think I figured out something that works: essentially, calling assertEquals with canonicalization inside the callback and then returning true achieves this effect.
---
stetre/moonglfw (issue 551747996)
Title: Compile-time issues
Question:
username_0: Been trying to compile moongl, moonglfw and moonnuklear (after having given up on glut) with into my own (so I can follow full debug stack) and a number of errors popped up (the first lot I forgot to note down but were relatively easy to fix) but now I've gotten stuck at this point:
```
make --no-print-directory (in directory: /mnt/DATA/github/ffxv_cheats)
clang -I cloned/lua -I cloned/moonnuklear -I cloned/moonglfw -I cloned/moongl -Wall -std=c99 -DLUA_USE_LINUX -DLUA_USE_READLINE -DLINUX -o cloned/moonglfw/src/utils.c.o -c cloned/moonglfw/src/utils.c
cloned/moonglfw/src/utils.c:345:28: warning: declaration of 'struct timespec' will not be visible outside of this function [-Wvisibility]
static void sectots(struct timespec *ts, double seconds)
^
cloned/moonglfw/src/utils.c:347:7: error: incomplete definition of type 'struct timespec'
ts->tv_sec=(time_t)seconds;
~~^
cloned/moonglfw/src/utils.c:345:28: note: forward declaration of 'struct timespec'
static void sectots(struct timespec *ts, double seconds)
^
cloned/moonglfw/src/utils.c:348:7: error: incomplete definition of type 'struct timespec'
ts->tv_nsec=(long)((seconds-((double)ts->tv_sec))*1.0e9);
~~^
cloned/moonglfw/src/utils.c:345:28: note: forward declaration of 'struct timespec'
static void sectots(struct timespec *ts, double seconds)
^
cloned/moonglfw/src/utils.c:348:44: error: incomplete definition of type 'struct timespec'
ts->tv_nsec=(long)((seconds-((double)ts->tv_sec))*1.0e9);
~~^
cloned/moonglfw/src/utils.c:345:28: note: forward declaration of 'struct timespec'
static void sectots(struct timespec *ts, double seconds)
^
cloned/moonglfw/src/utils.c:359:20: error: variable has incomplete type 'struct timeval'
struct timeval tv;
^
cloned/moonglfw/src/utils.c:359:12: note: forward declaration of 'struct timeval'
struct timeval tv;
^
cloned/moonglfw/src/utils.c:360:8: warning: implicit declaration of function 'gettimeofday' is invalid in C99 [-Wimplicit-function-declaration]
if(gettimeofday(&tv, NULL) != 0)
^
cloned/moonglfw/src/utils.c:383:5: warning: implicit declaration of function 'usleep' is invalid in C99 [-Wimplicit-function-declaration]
usleep((useconds_t)(seconds*1.0e6));
^
cloned/moonglfw/src/utils.c:383:13: error: use of undeclared identifier 'useconds_t'
usleep((useconds_t)(seconds*1.0e6));
^
3 warnings and 5 errors generated.
make: *** [makefile:121: cloned/moonglfw/src/utils.c.o] Error 1
Compilation failed.
```
Answers:
username_1: Are you using the makefile included in the projects, or something else?
username_0: When I compiled it normally, yeah, but an issue cropped up in my project where I couldn't load the libraries despite using your example backend files, so I'm trying to compile a debug build to see what happened. Unfortunately your makefile doesn't seem to have a debug target, so rather than fiddle with the original I did a wildcard search on the src directory and compiled everything under the same flags as my project and the flags I give Lua. Lua compiled without issue (once I gave the correct defines), but I'm struggling to fully compile yours, and I'm thinking that perhaps this has something to do with why I couldn't load it. Either way, I need to be able to follow the full stack when issues crop up, so I can diagnose the source of whatever issue I find (otherwise it's sometimes just blindly fumbling around until I fix it).
username_1: Can you please report the issues you had when loading the libraries built with their original makefiles? Or when building them with the original makefiles? I can't debug a customized build system I do not have access to.
username_0: Ran this:
```
make --no-print-directory moon_libs_original
cd cloned/moongl && make
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o getuniform.o getuniform.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o pipeline.o pipeline.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o enums.o enums.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o getvertex.o getvertex.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o transform.o transform.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o query.o query.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o sync.o sync.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o renderbuffer.o renderbuffer.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o texture.o texture.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o perfragment.o perfragment.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o init.o init.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o framebuffer.o framebuffer.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o object.o object.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o nongl.o nongl.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o uniform.o uniform.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o subroutine.o subroutine.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o draw.o draw.c
draw.c: In function ‘MultiDrawElementsBaseVertex’:
draw.c:282:54: warning: passing argument 4 of ‘__glewMultiDrawElementsBaseVertex’ from incompatible pointer type [-Wincompatible-pointer-types]
282 | glMultiDrawElementsBaseVertex(mode, count, type, (const void* const*)indices, drawcount,basevertex);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| const void * const*
draw.c:282:54: note: expected ‘void **’ but argument is of type ‘const void * const*’
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o bitfield.o bitfield.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o pixel.o pixel.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o utils.o utils.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o proginterface.o proginterface.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o sampler.o sampler.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o capabilities.o capabilities.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o vertex_array.o vertex_array.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o hint.o hint.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o main.o main.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o buffer.o buffer.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o getstring.o getstring.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o whole_framebuffer.o whole_framebuffer.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o program.o program.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o debug.o debug.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o teximage.o teximage.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o raster.o raster.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o getintformat.o getintformat.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o get.o get.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o func.o func.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include -I/usr/include/lua5.3 -I/usr/local/include/lua5.3 -c -o shader.o shader.c
cd cloned/moonglfw && make
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o enums.o enums.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o hint.o hint.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o window.o window.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o callbacks.o callbacks.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o vulkan.o vulkan.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o id.o id.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o monitor.o monitor.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o win.o win.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o mon.o mon.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o native.o native.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I/usr/include/lua5.3 -c -o context.o context.c
[Truncated]
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o cursor.o cursor.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o objects.o objects.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o atlas.o atlas.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o versions.o versions.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o utils.o utils.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o context.o context.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o main.o main.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o edit.o edit.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o buffer.o buffer.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o font.o font.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o widgets.o widgets.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o input.o input.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o tracing.o tracing.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o canvas.o canvas.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o structs.o structs.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o udata.o udata.c
cc -O2 -Wall -Wextra -Wpedantic -std=gnu99 -DLUAVER=5.3 -fpic -DLINUX -I./nuklear -I/usr/include/lua5.3 -c -o style.o style.c
Compilation finished successfully.
```
There may be some messages missing, since I'd already corrected a few mistakes before it occurred to me to post about the errors I was getting.
username_0: The main difference I can see is that I used the -std=**c99** flag while you used -std=**gnu99**
username_1: You should try with fresh clones then. However, I see that the compilation succeeded, apart from that warning that you can ignore (it may depend on the version of GLEW you have on your system: that argument type is correct, as you can check in the OpenGL manual page for the glMultiDrawElementsBaseVertex function).
Now try installing the three libraries somewhere, run the examples to see if they work, and let me know if they produce some errors.
If you install the libraries in the default location `usr/local`, then Lua will find them at runtime without having to set the `LUA_PATH` and `LUA_CPATH` variables. If you don't want (or can not) install them there, then follow [these instructions](https://github.com/username_1/moonlibs#install-location) (the instructions use moonfltk as example, but moongl, moonglfw, etc. are supposed to work in the same way).
username_0: Well, I tried that, and it doesn't seem to have complained about missing object files at the end. Some issues I managed to resolve by including extra headers (sys/time.h and sys/types.h), but the implicit usleep error wouldn't go away until I shoved this in:
```
#elif (_BSD_SOURCE) ||\
((_XOPEN_SOURCE) >= 500 ||\
(_XOPEN_SOURCE) && (_XOPEN_SOURCE_EXTENDED)) &&\
!((_POSIX_C_SOURCE) >= 200809L || (_XOPEN_SOURCE) >= 700)
usleep((useconds_t)(seconds*1.0e6));
#else
sleep(seconds);
#endif
```
Still getting the warnings. I tried looking through the headers and code to see if anything was not closed properly or closed too early, but didn't find anything. I would still like to compile under c99 without warnings instead of gnu99, for the sake of portability, so can you try compiling yours under the c99 switch instead and resolve what errors you can, please? (I'm sure that when I get hold of the changes you've made, they will resolve the majority of my issues.) In the meantime I'm gonna test some other code for what I was originally intending to do with a GUI. I really don't want to rely on ncurses by default, rather leave that as a fallback, and I'll also have to look for a Lua mapping for that too ⎢⎢⎢⎻_⎻
username_1: Is it to address some specific problem that you want to use c99 instead of gnu99, or is it just for a generic 'sake of portability'?
username_0: For the generic sake of portability. It's fine to treat the GNU-specific parts as extensions to the library, but where possible one should keep to portable code, especially when the library relies on OpenGL for the backend stuff.
username_1: May I ask why? I mean, what has OpenGL to do with this and how is it affected? Just curious.
BTW, the libraries are not that far away from c99 compatibility, since they just use a few POSIX calls here and there, mainly for time-related stuff. I should investigate better but, honestly, I'd rather address issues when they reveal themselves to be real issues. Do you foresee using platforms where gnu99 is not an option?
username_0: Correct, I'm trying to write my project in a way that allows porting to, say, Windows or OSX. Normally I use IUP from Tecgraf for my GUI stuff, but for some reason that won't compile on the system I'm on. I'd use Ogre, but that's C++ with no C API; I would have expected a library that uses C++ as an extension to a C API, but from what I've seen that's not done in Ogre or many of the other GUI libraries I've looked into. So now I'm trying a combination of OpenGL, GLFW and Nuklear, which all use a C API, and since you've already done the hard work of plugging everything into Lua, I figured it made more sense to rely on your library and just code from there. Ah, btw, the project I'm working on is something like a basic version of Cheat Engine for Linux. I'm getting impatient with PINCE, and GameConqueror looks like it will never support what I want, so I'm just gonna do it myself in full C/Lua. I don't really care about matching the Cheat Engine API though; since it's a new project, why not just let peops play with their own ideas, and fob off whatever I can to external libraries like yours.
username_1: As far as I know, these libraries already work both on Windows (building them with MSYS2) and on OSX. I've not tried them on OSX myself, but some time ago a user did and claimed they work fine. I don't know the current status of OpenGL on OSX, though (I read that Apple has deprecated it on its systems...).
If you use them in the canonical way, i.e. build and install them as they are, and then in your project `require()` them from Lua code as in the examples, I don't expect gnu99 to be an issue.
username_0: Er, yeah, about that: the whole reason I started trying to compile manually is that they refuse to load, something about moongl.make being missing or something. I didn't get to see its attempt on moonglfw.
username_1: That may be caused by the `LUA_PATH` variable not being properly set. Could you try again and report here the exact error? For example, try running `demo.lua` in `moonnuklear/examples`.
username_0: Found a hack for now
```
win32_defines=_WIN32
linux_defines=_GNU_SOURCE LINUX LUA_USE_LINUX LUA_USE_READLINE
defines:=$(if $(IS_LINUX),$(linux_defines),$(win32_defines))
```
I think you should implement a common header that checks for system defines and then defines anything that ought to have been defined, before including anything else. I will point out that my end goal is compiling under c89 (full ANSI) if possible, so if you can make your library do the same, that would be great; not gonna hold out on that one, though. Anyway, I'll try the GUI again once I fix a separate issue in my own code.
username_1: May I ask again why you impose such a strict requirement on yourself? Even Lua itself, which claims to have ANSI C as its "platform", by default compiles with gnu99 and avoids including so-called "batteries" precisely to meet that requirement (implying that it's a reasonable requirement for the core lua library, but not for extension libraries).
username_0: Simply because it's the most portable version of C there is. That's not to say that the default can't be c99 or gnu99 or another flavor; it's merely about providing the option. Also, if I remember rightly, until recently Microsoft Visual C didn't support C99 anyway, so to support those older variants to a degree I'd like to be able to compile under c89, even if the API has to be trimmed down in that circumstance.
username_1: Are you sure you are not confusing "being able to compile everywhere" with "portability"? They are not the same thing.
Again, take core Lua as an example of library that provides that option. It does so for a good reason and accepting to pay a price. The good reason is not a mere "sake of portability" or "being able to compile everywhere" (which are not reasons at all, let alone good ones) but because it is meant to be used - and is actually used - on a variety of systems including embedded ones, where you can only expect ANSI C. The price it accept to pay is not being allowed to implement any feature that can not be implemented using only what ANSI C provides (with the only exception of library loading, if I recall correctly). The assumption is that the task of implementing any such additional feature, when needed, is left to external "batteries" such as LuaSocket and these ones we are talking about, precisely because they are not expected to have such a strict requirement. To extend the requirement to batteries you need good reasons. Do you have any?
username_0: Well, since my project is for cheating on games and is being written with portability in mind, it's likely there will be fellows who want to port it to platforms like DOS. The bits that need system APIs would be the extension; in the case of DOS, OpenGL would be swapped out for ncurses or something, and the module would be loaded in Lua normally, just with a separate GUI handler, e.g.:
```
if dofile('glGUI') == false then
dofile('shGUI')
end
```
username_1: In such case you wouldn't need to compile the libraries you don't use, but only their replacements for the particular platform, and from within your project you would load one or the other depending on which you discover being available, which you can do at runtime.
For example, say that in addition to moonnuklear on the platforms where it is available, you have a 'luancurses' binding library to ncurses on DOS, and a 'luawhatevergui' library available on another platform. In your project you could do something like:
```lua
local nk, nc, wg
nk = pcall(require, 'moonnuklear')
if not nk then -- not available here, try ncurses:
   nc = pcall(require, 'luancurses')
   if not nc then -- not available here, try whatever GUI:
      wg = pcall(require, 'luawhatevergui')
      if not wg then
         error("cannot find any GUI library")
      end
   end
end
-- ... use the available one among nk, nc, and wg ...
```
username_1: PS: The above example code is wrong because pcall returns a boolean followed by either the library table or an error message. I just scratched it down in a hurry, but I hope the point is clear nonetheless: you don't need to be able compile all the alternatives on all the platforms, you just need (at least) one of them for each platform.
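(For completeness, a corrected sketch of that pattern capturing both of pcall's return values; the 'luancurses' and 'luawhatevergui' names are hypothetical, as above:)
```lua
local ok, gui
ok, gui = pcall(require, 'moonnuklear')
if not ok then -- not available here, try ncurses:
   ok, gui = pcall(require, 'luancurses')
end
if not ok then -- still nothing, try whatever GUI:
   ok, gui = pcall(require, 'luawhatevergui')
end
if not ok then
   error("cannot find any GUI library")
end
-- on success, 'gui' holds whichever module table loaded first
```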
username_0: I only tried to compile the libraries myself because of the need for debugging ability, so I can trace errors back to their source, and because Lua wouldn't load one or more of your libraries and I'm trying to diagnose whether that is a fault of mine or yours, which needs a complete stack tree to see what actually happens when it fails. Had it succeeded, I wouldn't even have thought to compile them manually. Either way, I think if I can get my project to run successfully and cheat on FFXV like I originally intended, then the project itself should bring more attention to your project when peops decide it's lacking something on the GUI front.
username_1: What has c99 to do with this? Can't you just use gnu99 with the proper flags for debugging? But before diving into the debugger, why don't you report the error(s) you get? Most times error messages and reasoning are enough to find the cause of a library not loading, or at least to delineate the problem.
username_0: Well, I figured if I'm gonna include it in my makefile I might as well go for a standard all modern compilers are supposed to support by now. As for not reporting the error yet: I wanted to check whether it was a problem with my code first. I haven't plugged anything into Lua yet, but I did just dump the example files from your repository and copy over the *.so files, to see if I could get a GUI using code you had already confirmed to work at one point. The *.so files are indeed found, but Lua complains about something being missing. Here's the output from what I just ran now (using the official libs):
```
make --no-print-directory
Already up to date.
Already up to date.
Already up to date.
Already up to date.
clang -fPIC -ldl -lm -llua -o gasp.elf gasp.c.o
./gasp.elf
Opening backend...
Loading moongl...
Failed:
[string "require('moongl.make')"]:1: module 'moongl.make' not found:
no field package.preload['moongl.make']
no file '/usr/share/lua/5.3/moongl/make.lua'
no file '/usr/share/lua/5.3/moongl/make/init.lua'
no file '/usr/lib/lua/5.3/moongl/make.lua'
no file '/usr/lib/lua/5.3/moongl/make/init.lua'
no file './moongl/make.lua'
no file './moongl/make/init.lua'
no file '/usr/lib/lua/5.3/moongl/make.so'
no file '/usr/lib/lua/5.3/loadall.so'
no file './moongl/make.so'
no module 'moongl.make' in file './moongl.so'
main Error: 0x00000016, System Description 'Invalid argument', Process Description 'Test failed'
make: *** [makefile:112: gasp] Error 1
Compilation failed.
```
username_0: Ignore that invalid argument at the bottom; that's a separate issue in my own code.
username_1: Ok. This is a typical error you may encounter when loading Lua modules. A debugger won't help here. The message simply tells you that either the module is missing, or the path to the module is not in the `package.path` or the `package.cpath` variables, so the Lua interpreter can not find it.
You should be able to solve the problem by following [these instructions](https://github.com/username_1/moonlibs#install-location) (as I [already suggested 2 days ago](https://github.com/username_1/moonglfw/issues/2#issuecomment-576605794), by the way...)
username_0: Didn't I say I don't want to install?
I tried this at least:
```
setenv("LUA_PATH","./?.lua",0);
setenv("LUA_CPATH","./?.so",0);
```
But no change to the lack of module **inside** the library
username_0: Take a look at this line from my earlier post:
```
no module 'moongl.make' in file './moongl.so'
```
Please tell me in what way that indicates to you that this is a problem with the path or cpath variables?
username_1: "Install" here means nothing else that copying the `.so` and `.lua` files into a location where the Lua interpreter can find them. There is nothing special to it, it's just copying them from the source directory into a more convenient directory. You can as well "install" them into a subdirectory of your project, if you want to. I highly suggest you to choose an install directory and install all the modules there, so to have only one path to add to `LUA_PATH` and one to `LUA_CPATH`.
To clarify your ideas, try installing in `PREFIX=/tmp/local`, for example, and then go to that location and see what is "installed" (e.g. enter the `/tmp/local` directory and run the `tree` command there).
About `moongl/make.lua`: this is a Lua script that is loaded by `moongl.so` when from the application you execute `gl = require("moongl")`. MoonGL, like most of my libraries, is mostly implemented in C, but a few parts are implemented directly in Lua, because it is more convenient. That is, the module is composed of a shared object (`.so`) and a few Lua scripts (`.lua`) which the Lua interpreter must be able to find at runtime.
username_1: That said, if you are developing a project that depends on third-party Lua modules, or if you want to write Lua modules yourself, I highly recommend you to find some time to familiarize on how Lua's `require()` works: if and how `LUA_PATH` and `LUA_CPATH` must be set, the standard `package` library, etc. Really. You can find all this info in the [Lua manual](http://www.lua.org/manual/5.3/manual.html#6.3).
(Especially if you plan to release your project, it is very likely that some of your users will report problems on this matter. Even if you strive to keep things simple, stick to standard practices, and make your efforts to document things as clearly as you can, you'll always find some user that'll run into self-inflicted problems and will thus demand your support. Trust me, I speak by experience ;-) ).
username_0: Well, I will look into that after getting this to work in a portable manner, but thank you for clearing up that issue with the missing make for me. I'll have a look in the repository when I get home to see if it's there, copy that, and check that I can get a GUI before I start experimenting on that end.
username_1: Ok.
(BTW, if you find a more portable way than putting the modules in the standard location where Lua looks for them, let me know.)
Status: Issue closed
---
rust-lang/cargo (issue 57708126)
Title: Allow specifying crates.io packages in a dependency section
Question:
username_0: Something like this should work:
```toml
[dependencies.rustc_serialize]
crate = "rustc-serialize"
```
At the moment, only `path` and `git` attributes are supported.
Answers:
username_1: What's the intent of this?
username_2: This is actually currently possible as:
```toml
[dependencies.foo]
version = "..."
```
Although if you mean renaming crates that's a separate issue :)
Status: Issue closed
username_0: @username_2 I do mean renaming crates, please re-open :P
@username_1 It would create a crate named `rustc_serialize`, which is imported as `extern crate rustc_serialize`.
username_2: (updated with what I believe to be a more accurate title)
username_0: Sure, this title works. But a side effect of this is to permit the specification of crates.io as the origin of a dependency named in its own subsection (like `[dependencies.foo]`), as an alternative to `git` or `path`, the two currently available fields, no?
username_3: This is a wanted feature for the following scenario:
Your crate provides an optional feature named "serde" that enables serde integration.
In the next version you want to depend on two crates for the "serde" feature: serde itself and another crate. Renaming the serde dependency would keep the "serde" feature name available, making this transition smooth.
username_2: @username_3 that may be better served in the long run by https://github.com/rust-lang/cargo/issues/1286 perhaps?
username_3: Yes, it's considering the same kinds of questions I have (but in greater depth).
username_4: Someone was trying to depend on both `uuid-sys` and `uuid` (both on crates.io), where the name of the library for `uuid-sys` is actually `uuid`. This results in two libraries with the same name (but different hash), so doing `extern crate uuid;` becomes ambiguous. Having a way to rename a dependency would allow for the user to depend on both crates.
username_5: Another example: a program that compared gif implementations might want to use both https://github.com/Geal/gif.rs/blob/master/Cargo.toml and https://github.com/PistonDevelopers/image-gif/blob/master/Cargo.toml, both of which claim the name "gif".
username_6: I've been having a think about this lately, and wondering the best way to do this... it seems the crate name (the one used in `extern crate`, that is) is taken from the crate metadata. Thus, a sensible solution may be to extend rustc with CLI args that specify the renaming map for dependencies? The checking for well-formed names would additionally be done on the rustc side, I presume. Thoughts?
username_2: FWIW I have a PR [open for this](https://github.com/rust-lang/cargo/pull/4953), and any comments would be much appreciated!
username_6: @username_2 Doh, I wish I had known sooner! I just implemented it here too. Oh well, at least someone has done it. Thanks.
username_6: @username_2 I think this should be closed now that https://github.com/rust-lang/cargo/pull/4953 is merged, no?
Status: Issue closed
username_2: Indeed!
username_7: The example in #4953 fails with:
```
Dependency 'foo' has different source paths depending on the build target. Each dependency must have a single canonical source path irrespective of build target.
```
username_7: cc #5413
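(For later readers: the renaming support that grew out of #4953 eventually stabilized around a `package` key in the dependency table, along these lines:)
```toml
[dependencies.rustc_serialize]
package = "rustc-serialize"  # the name on crates.io
version = "..."              # used as `extern crate rustc_serialize` locally
```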
---
coursier/coursier (issue 219208462)
Title: cassandra / netty resolution failure
Question:
username_0: when switching from ivy to coursier, this app fails (`sbt test:run`)
https://github.com/username_0/coursier-play-bug
```
[info] Caused by: java.lang.NoSuchMethodError: io.netty.util.UniqueName.<init>(Ljava/lang/String;)V
[info] at io.netty.channel.ChannelOption.<init>(ChannelOption.java:136) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
[info] at io.netty.channel.ChannelOption.valueOf(ChannelOption.java:99) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
[info] at io.netty.channel.ChannelOption.<clinit>(ChannelOption.java:42) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
[info] at org.apache.cassandra.transport.Server.start(Server.java:131) ~[cassandra-all-3.5.jar:3.5]
[info] at java.util.Collections$SingletonSet.forEach(Collections.java:4767) ~[na:1.8.0_121]
[info] at org.apache.cassandra.service.NativeTransportService.start(NativeTransportService.java:128) ~[cassandra-all-3.5.jar:3.5]
```
presumably because the classpaths are different.
With ivy
```
[info] * Attributed(/home/username_0/Projects/coursier-play-bug/target/scala-2.11/test-classes)
[info] * Attributed(/home/username_0/Projects/coursier-play-bug/target/scala-2.11/classes)
[info] * Attributed(/home/username_0/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.9.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-persistence-cassandra_2.11/jars/akka-persistence-cassandra_2.11-0.14.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.datastax.cassandra/cassandra-driver-core/bundles/cassandra-driver-core-3.0.0.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/io.netty/netty-handler/jars/netty-handler-4.0.33.Final.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/io.netty/netty-buffer/jars/netty-buffer-4.0.33.Final.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/io.netty/netty-common/jars/netty-common-4.0.33.Final.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/io.netty/netty-transport/jars/netty-transport-4.0.33.Final.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/io.netty/netty-codec/jars/netty-codec-4.0.33.Final.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/io.dropwizard.metrics/metrics-core/bundles/metrics-core-3.1.2.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.7.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-persistence_2.11/jars/akka-persistence_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-actor_2.11/jars/akka-actor_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe/config/bundles/config-1.3.0.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.scala-lang.modules/scala-java8-compat_2.11/bundles/scala-java8-compat_2.11-0.7.0.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-protobuf_2.11/jars/akka-protobuf_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-cluster-tools_2.11/jars/akka-cluster-tools_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-cluster_2.11/jars/akka-cluster_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-remote_2.11/jars/akka-remote_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/io.netty/netty/bundles/netty-3.10.3.Final.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.uncommons.maths/uncommons-maths/jars/uncommons-maths-1.2.2a.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-persistence-query-experimental_2.11/jars/akka-persistence-query-experimental_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-stream_2.11/jars/akka-stream_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe/ssl-config-akka_2.11/bundles/ssl-config-akka_2.11-0.2.1.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe/ssl-config-core_2.11/bundles/ssl-config-core_2.11-0.2.1.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.scala-lang.modules/scala-parser-combinators_2.11/bundles/scala-parser-combinators_2.11-1.0.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.reactivestreams/reactive-streams/jars/reactive-streams-1.0.0.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-testkit_2.11/jars/akka-testkit_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.typesafe.akka/akka-stream-testkit_2.11/jars/akka-stream-testkit_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.apache.cassandra/cassandra-all/jars/cassandra-all-3.5.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.xerial.snappy/snappy-java/bundles/snappy-java-1.1.1.7.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/net.jpountz.lz4/lz4/jars/lz4-1.3.0.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.ning/compress-lzf/bundles/compress-lzf-0.8.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.google.guava/guava/bundles/guava-18.0.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/commons-cli/commons-cli/jars/commons-cli-1.1.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/commons-codec/commons-codec/jars/commons-codec-1.2.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.apache.commons/commons-lang3/jars/commons-lang3-3.1.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.apache.commons/commons-math3/jars/commons-math3-3.2.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru/jars/concurrentlinkedhashmap-lru-1.4.jar)
[info] * Attributed(/home/username_0/.ivy2/cache/org.antlr/antlr/jars/antlr-3.5.2.jar)
[Truncated]
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/com/typesafe/akka/akka-cluster_2.11/2.4.4/akka-cluster_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/com/boundary/high-scale-lib/1.0.6/high-scale-lib-1.0.6.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/com/typesafe/akka/akka-stream_2.11/2.4.4/akka-stream_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/org/slf4j/jcl-over-slf4j/1.7.7/jcl-over-slf4j-1.7.7.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/org/apache/cassandra/cassandra-all/3.5/cassandra-all-3.5.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/io/netty/netty-codec/4.0.33.Final/netty-codec-4.0.33.Final.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/org/scala-lang/modules/scala-java8-compat_2.11/0.7.0/scala-java8-compat_2.11-0.7.0.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/net/mintern/primitive/1.0/primitive-1.0.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/javax/validation/validation-api/1.0.0.GA/validation-api-1.0.0.GA.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/com/carrotsearch/hppc/0.5.4/hppc-0.5.4.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/com/github/rholder/snowball-stemmer/1.3.0.581.1/snowball-stemmer-1.3.0.581.1.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/it/unimi/dsi/fastutil/6.5.7/fastutil-6.5.7.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/org/antlr/antlr/3.5.2/antlr-3.5.2.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/com/ning/compress-lzf/0.8.4/compress-lzf-0.8.4.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/org/apache/thrift/libthrift/0.9.2/libthrift-0.9.2.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/net/jpountz/lz4/lz4/1.3.0/lz4-1.3.0.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/com/typesafe/akka/akka-actor_2.11/2.4.4/akka-actor_2.11-2.4.4.jar)
[info] * Attributed(/home/username_0/.coursier/cache/v1/https/repo1.maven.org/maven2/org/codehaus/jackson/jackson-core-asl/1.9.2/jackson-core-asl-1.9.2.jar)
```
Answers:
username_0: Cleaned up, I can see no diff between the classpaths!
```
ch.qos.logback/logback-classic/jars/logback-classic-1.1.3.jar
ch.qos.logback/logback-core/jars/logback-core-1.1.3.jar
com.addthis.metrics/reporter-config-base/jars/reporter-config-base-3.0.0.jar
com.addthis.metrics/reporter-config3/jars/reporter-config3-3.0.0.jar
com.boundary/high-scale-lib/jars/high-scale-lib-1.0.6.jar
com.carrotsearch/hppc/jars/hppc-0.5.4.jar
com.clearspring.analytics/stream/jars/stream-2.5.2.jar
com.datastax.cassandra/cassandra-driver-core/bundles/cassandra-driver-core-3.0.0.jar
com.github.jbellis/jamm/jars/jamm-0.3.0.jar
com.github.rholder/snowball-stemmer/jars/snowball-stemmer-1.3.0.581.1.jar
com.google.guava/guava/bundles/guava-18.0.jar
com.googlecode.concurrent-trees/concurrent-trees/jars/concurrent-trees-2.4.0.jar
com.googlecode.concurrentlinkedhashmap/concurrentlinkedhashmap-lru/jars/concurrentlinkedhashmap-lru-1.4.jar
com.googlecode.json-simple/json-simple/jars/json-simple-1.1.jar
com.lmax/disruptor/jars/disruptor-3.0.1.jar
com.ning/compress-lzf/bundles/compress-lzf-0.8.4.jar
com.thinkaurelius.thrift/thrift-server/jars/thrift-server-0.3.7.jar
com.typesafe.akka/akka-actor_2.11/jars/akka-actor_2.11-2.4.4.jar
com.typesafe.akka/akka-cluster-tools_2.11/jars/akka-cluster-tools_2.11-2.4.4.jar
com.typesafe.akka/akka-cluster_2.11/jars/akka-cluster_2.11-2.4.4.jar
com.typesafe.akka/akka-persistence-cassandra_2.11/jars/akka-persistence-cassandra_2.11-0.14.jar
com.typesafe.akka/akka-persistence-query-experimental_2.11/jars/akka-persistence-query-experimental_2.11-2.4.4.jar
com.typesafe.akka/akka-persistence_2.11/jars/akka-persistence_2.11-2.4.4.jar
com.typesafe.akka/akka-protobuf_2.11/jars/akka-protobuf_2.11-2.4.4.jar
com.typesafe.akka/akka-remote_2.11/jars/akka-remote_2.11-2.4.4.jar
com.typesafe.akka/akka-stream-testkit_2.11/jars/akka-stream-testkit_2.11-2.4.4.jar
com.typesafe.akka/akka-stream_2.11/jars/akka-stream_2.11-2.4.4.jar
com.typesafe.akka/akka-testkit_2.11/jars/akka-testkit_2.11-2.4.4.jar
com.typesafe/config/bundles/config-1.3.0.jar
com.typesafe/ssl-config-akka_2.11/bundles/ssl-config-akka_2.11-0.2.1.jar
com.typesafe/ssl-config-core_2.11/bundles/ssl-config-core_2.11-0.2.1.jar
commons-cli/commons-cli/jars/commons-cli-1.1.jar
commons-codec/commons-codec/jars/commons-codec-1.2.jar
commons-logging/commons-logging/jars/commons-logging-1.1.1.jar
de.jflex/jflex/jars/jflex-1.6.0.jar
io.dropwizard.metrics/metrics-core/bundles/metrics-core-3.1.2.jar
io.netty/netty-all/jars/netty-all-4.0.23.Final.jar
io.netty/netty-buffer/jars/netty-buffer-4.0.33.Final.jar
io.netty/netty-codec/jars/netty-codec-4.0.33.Final.jar
io.netty/netty-common/jars/netty-common-4.0.33.Final.jar
io.netty/netty-handler/jars/netty-handler-4.0.33.Final.jar
io.netty/netty-transport/jars/netty-transport-4.0.33.Final.jar
io.netty/netty/bundles/netty-3.10.3.Final.jar
it.unimi.dsi/fastutil/jars/fastutil-6.5.7.jar
javax.validation/validation-api/jars/validation-api-1.0.0.GA.jar
joda-time/joda-time/jars/joda-time-2.4.jar
junit/junit/jars/junit-4.8.1.jar
net.java.dev.jna/jna/jars/jna-4.0.0.jar
net.jpountz.lz4/lz4/jars/lz4-1.3.0.jar
net.mintern/primitive/jars/primitive-1.0.jar
org.antlr/ST4/jars/ST4-4.0.8.jar
org.antlr/antlr-runtime/jars/antlr-runtime-3.5.2.jar
org.antlr/antlr/jars/antlr-3.5.2.jar
org.apache.ant/ant-launcher/jars/ant-launcher-1.7.0.jar
org.apache.ant/ant/jars/ant-1.7.0.jar
org.apache.cassandra/cassandra-all/jars/cassandra-all-3.5.jar
org.apache.cassandra/cassandra-thrift/jars/cassandra-thrift-3.5.jar
[Truncated]
org/apache/thrift/libthrift/0.9.2/libthrift-0.9.2.jar
org/caffinitas/ohc/ohc-core/0.4.3/ohc-core-0.4.3.jar
org/codehaus/jackson/jackson-core-asl/1.9.2/jackson-core-asl-1.9.2.jar
org/codehaus/jackson/jackson-mapper-asl/1.9.2/jackson-mapper-asl-1.9.2.jar
org/eclipse/jdt/core/compiler/ecj/4.4.2/ecj-4.4.2.jar
org/fusesource/sigar/1.6.4/sigar-1.6.4.jar
org/hibernate/hibernate-validator/4.3.0.Final/hibernate-validator-4.3.0.Final.jar
org/jboss/logging/jboss-logging/3.1.0.CR2/jboss-logging-3.1.0.CR2.jar
org/mindrot/jbcrypt/0.3m/jbcrypt-0.3m.jar
org/reactivestreams/reactive-streams/1.0.0/reactive-streams-1.0.0.jar
org/scala-lang/modules/scala-java8-compat_2.11/0.7.0/scala-java8-compat_2.11-0.7.0.jar
org/scala-lang/modules/scala-parser-combinators_2.11/1.0.4/scala-parser-combinators_2.11-1.0.4.jar
org/scala-lang/scala-library/2.11.9/scala-library-2.11.9.jar
org/slf4j/jcl-over-slf4j/1.7.7/jcl-over-slf4j-1.7.7.jar
org/slf4j/log4j-over-slf4j/1.7.7/log4j-over-slf4j-1.7.7.jar
org/slf4j/slf4j-api/1.7.12/slf4j-api-1.7.12.jar
org/uncommons/maths/uncommons-maths/1.2.2a/uncommons-maths-1.2.2a.jar
org/xerial/snappy/snappy-java/1.1.1.7/snappy-java-1.1.1.7.jar
org/yaml/snakeyaml/1.12/snakeyaml-1.12.jar
```
username_0: ooooh, but the order changed and I've noticed there is a mismatch in the netty jar versions.
username_0: workaround is
```
dependencyOverrides ++= Set(
"io.netty" % "netty-all" % "4.0.33.Final"
)
```
but I guess the question is: should coursier try to match ivy here?
Status: Issue closed
username_1: Discovered this too. So to be continued in #1466. |
CTeX-org/forum | 627718175 | Title: Odd/even page headers and chapter numbering
Question:
username_0: ## Checks
- [x] Already searched the issues (including closed ones)
## Build environment
- Operating system
  - [x] Windows 10
  - [ ] Windows 8/8.1
  - [ ] Windows 7
  - [ ] Earlier versions of Windows
  - [ ] macOS
  - [ ] Linux (please include the distribution)
- TeX distribution
  - [x] TeX Live 2020
  - [ ] MiKTeX
  - [ ] CTeX suite 2.9.2.164
  - [ ] Earlier versions of the CTeX suite
## Problem description
My requirements are:
- Different headers on odd and even pages: odd pages show the chapter title in the center, even pages show the thesis title in the center.
- Page numbers on the left of odd pages and on the right of even pages. The difficulty is that the **abstract and the table of contents must be unnumbered and must not appear in the TOC**, subsequent chapters must be numbered normally, and the **bibliography and acknowledgements must be unnumbered but must appear in the TOC**.
## Problems with my approaches
### Approach 1
Use `\chapter*{}` and build the headers and bookmarks by hand. A plain `\chapter*{}` leaves the chapter unnumbered, but **the header information is not generated correctly** and **no PDF bookmark is created**. A TOC entry can be added manually with `\addcontentsline`, but then the `hyperref` link target is inaccurate. The open questions for this approach: how to **create accurately targeted PDF bookmarks** and **headers that contain the chapter title**?
### Approach 2
Use the numbered `\chapter{}` environment but suppress the number. `\setcounter{secnumdepth}{-1}` suppresses the numbers and solves the link-target and header problems, but **the abstract then appears in the TOC, which is unwanted**. The open question for this approach: how to **keep a numbered chapter out of the TOC**?
## Minimal working example (MWE)
```latex
%!TEX program = xelatex
\documentclass[a4paper,zihao=-4,UTF8]{ctexbook}
\usepackage[inner=3cm,outer=2cm,top=3cm,bottom=2cm,showframe]{geometry}% page layout
\usepackage{fancyhdr} % headers and footers
\pagestyle{fancy}
\fancyhead[EL,OR]{\songti\zihao{-5}\thepage}
[Truncated]
\section{总结}
\chapter{理论}
\zhlipsum[1]
\end{document}
```
## Links
"How can LaTeX show unnumbered chapter titles in the table of contents?" - answer by Liu Haiyang on Zhihu
https://www.zhihu.com/question/29413517/answer/44358389
The link sketches some ideas but does not elaborate on them.
## Attachments

Answers:
username_1: ```tex
%!TEX program = xelatex
\documentclass[a4paper,zihao=-4,UTF8]{ctexbook}
\usepackage[colorlinks,bookmarksnumbered]{hyperref}%书签
\usepackage{zhlipsum,lipsum}%假文
\usepackage{bookmark}
\begin{document}
\chapter*{摘要}
\bookmark[dest=\HyperLocalCurrentHref, level=1]{摘要}
\chapter*{\rmfamily Abstract}
\bookmark[dest=\HyperLocalCurrentHref, level=1]{Abstract}
\tableofcontents
\chapter{绪论}
\section{意义}
\zhlipsum[1]
\section{现状}
\zhlipsum
\section{总结}
\chapter{理论}
\zhlipsum[1]
\end{document}
```
For the underlying idea, see "[[LaTeX] Adding unnumbered chapters to the TOC and the PDF bookmarks](https://zhuanlan.zhihu.com/p/66434387)".
username_0: This solves the bookmark and inaccurate-link problems for unnumbered chapters, and the code is fairly concise, but the headers of the unnumbered chapters still do not meet my requirements.
username_1: One suggestion: don't use `\pagenumbering`; use `\frontmatter` and `\mainmatter` instead.
username_2: So the chapter title should appear in the header for every chapter, numbered or not; is that right?
username_0: Yes. At the moment I'm leaning towards using unnumbered chapters and building the bookmarks and headers by hand.
username_0: I'll give it a try.
username_1: That suggestion only makes the document structure clearer; it probably doesn't help with the actual problem.
username_0: Yes, the chapter title should appear in the header for all chapters, numbered or not. I'm currently leaning towards unnumbered chapters with hand-made bookmarks and headers.
username_0: But then I can no longer customize the page number format.
username_2: ### Manual
```tex
% manually add a `\chapter*` to the header, the TOC and the PDF bookmarks
\chapter*{标题}
\chaptermark{标题}
\addcontentsline{toc}{chapter}{标题}
% manually add a `\chapter*` to the header and the PDF bookmarks, but not to the TOC
\chapter*{标题}
\chaptermark{标题}
\bookmark[dest=\HyperLocalCurrentHref, level=0]{标题}
```
### A semi-automatic attempt
```tex
\documentclass{ctexbook}
\usepackage{fancyhdr}
\usepackage{lipsum}
\usepackage{tocbibind}
\usepackage[colorlinks,bookmarksnumbered]{hyperref}
\usepackage{bookmark}
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\fancyhead[EL,OR]{\thepage}
\fancyhead[OC]{\leftmark}
\fancyhead[EC]{Title}
\makeatletter
\ctexset{
% make every \chapter* appear in the page header
chapter/aftertitle+={%
\CTEXifname{}{\chaptermark{\@currentlabelname}}%
}
}
\newif\if@addtoc
\newcommand{\addTocAndBookmark}{%
\@addtoctrue
\expandafter\@getHrefName\@currentHref\@nil
}
\newcommand{\addBookmark}{%
\@addtocfalse
\expandafter\@getHrefName\@currentHref\@nil
}
\def\@getHrefName#1.#2\@nil{%
\edef\@tempa{\detokenize{#1}}%
\edef\@tempb{\detokenize{chapter*}}%
\ifx\@tempa\@tempb
\if@addtoc
\edef\@tempa{\noexpand\addcontentsline{toc}{chapter}{\@currentlabelname}}%
\else
% use "level=0" or "level=chapter"
[Truncated]
\fi
\@tempa
\fi
}
\makeatother
\begin{document}
\chapter*{Unnumbered, no TOC entry, shown in header, with PDF bookmark}
\addBookmark
text\newpage text\newpage text
\chapter*{Unnumbered, with TOC entry, shown in header, with PDF bookmark}
\addTocAndBookmark
text\newpage text\newpage text
\tableofcontents
\end{document}
```
username_2: @username_1 `\chapter` corresponds to level 0 and `\section` to level 1. The Zhihu article uses `\section*` as its example, which is why it uses `level=1`. This issue needs bookmarks for `\chapter*`, so `\bookmark[..., level=0]{}` should be used. The `bookmark` package also accepts `level=[section|chapter]` directly.
username_1: Ah, I didn't look closely; the compiled bookmarks were all at the same level, so I didn't notice. It's probably because a `\section` cannot form a second-level bookmark on its own without a preceding `\chapter`.
username_0: I feel it's still better to write the header, bookmark, and TOC entries manually.
username_2: To simplify the final input (semi-automatic or automatic), the internal implementation inevitably has to be somewhat more complex.
username_3: You're all overcomplicating this; simply setting `tocdepth` appropriately inside the TOC is enough.
```TeX
%!TEX program = xelatex
\documentclass[a4paper,zihao=-4,UTF8]{ctexbook}
\usepackage[inner=3cm,outer=2cm,top=3cm,bottom=2cm,headsep=1cm,showframe]{geometry}
\usepackage{fancyhdr}
\usepackage{zhlipsum,lipsum}
\usepackage[colorlinks]{hyperref}
\usepackage{bookmark}
\bookmarksetup {
open = true ,
numbered = true ,
depth = subparagraph ,
}
\fancyhead{}
\fancyfoot{}
\fancyhead[OC]{\zihao{5}{\leftmark}}
\fancyhead[EC]{\zihao{5}天线设计}
\renewcommand{\headrulewidth}{0.75pt}
\renewcommand{\footrulewidth}{0pt}
\fancypagestyle{abstract}{%
\fancyhead[EL,OR]{}}
\fancypagestyle{main}{%
\fancyhead[EL,OR]{\zihao{-5}\thepage}}
\ctexset {
chapter = {
beforeskip = 36pt ,
afterskip = 28pt ,
pagestyle = fancy
}
}
\makeatletter
\protected\def\SETTOCDEPTH#1{%
\setcounter{tocdepth}{#1}}
\g@addto@macro\frontmatter{%
\addtocontents{toc}{\SETTOCDEPTH{-10}\protected@file@percent}}
\let\abstractmatter\frontmatter
\g@addto@macro\abstractmatter{%
\pagestyle{abstract}}
\g@addto@macro\frontmatter{%
\pagestyle{main}%
\pagenumbering{Roman}}
\g@addto@macro\mainmatter{%
\pagestyle{main}%
\addtocontents{toc}{\SETTOCDEPTH{2}\protected@file@percent}}
\renewcommand\tableofcontents{%
\chapter{\contentsname}%
\@starttoc{toc}}
\makeatother
\begin{document}
\abstractmatter
[Truncated]
\lipsum[1]
\frontmatter
\tableofcontents
\mainmatter
\chapter{绪论}
\section{意义}
\zhlipsum[1]
\section{现状}
\zhlipsum
\section{总结}
\chapter{理论}
\zhlipsum[1]
\end{document}
```
username_0: Thanks, the problem is solved. Is the idea behind this that the numbered chapters are simply kept out of the TOC?
Status: Issue closed
|
schmittjoh/JMSSerializerBundle | 363423557 | Title: Wrong boolean deserialization
Question:
username_0: I use FosRestBundle together with JMS Serializer.
I have the following property in my Entity:
```php
/**
* @var string $ecoOption
* @Serializer\Type("boolean")
* @Assert\NotBlank(payload={"error_code"="ERR_ECO_OPTION_EMPTY"})
*/
protected $ecoOption;
```
If I send "ecoOption": true, everything is OK. But if I send "ecoOption": false, the error ERR_ECO_OPTION_EMPTY appears.
Where is my error?
Answers:
username_1: From what I know, NotBlank is a string-type constraint and should not be used on booleans. Maybe you want NotNull?
Status: Issue closed
username_0: NotNull is a workaround, but what I actually need is a boolean-only constraint, because the value should only ever be true or false.
username_1: If you typehint the setter to ensure booleans, it will work.
Do not confuse normalization and validation; suggested reading:
https://www.username_1.com/blog/deserialization-normalization-validation-and-the-jms-serializer/
username_1: The validation rules you define should work for the type of the property
you have (in this case boolean). Avoiding to have strings there is a
different responsibility. |
mopidy/mopidy | 35978267 | Title: Is it possible to use a non-default card for alsa hardware mixer ?
Question:
username_0: I have a USB sound card which I don't want to make the system default; I use it only for music playback.
Audio output works fine with
```
[Audio]
output = alsasink device=hw:2,0
```
The problem is that by default alsamixer tries to use card 0, not card 2.
Is there a way to pass card number to mixer from mopidy conf ?
From command line, if I want to choose the USB mixer I need to type:
`amixer -c 2`
Setting the volume with amixer looks like:
`amixer -c 2 sset PCM <value>`
Answers:
username_1: This should definitely go into the documentation, preferably into a quick start guide. This is very basic information and I had to search much too long for it. |
bcgov/entity | 788412631 | Title: Correction UI: incorrect verb tense in certify section
Question:
username_0: 
Answers:
username_0: @username_2 @username_1 What do you think?
username_1: Sorry, is this for Alterations or Corrections? For alterations, I had:
![Uploading Screen Shot 2021-01-18 at 9.34.56 AM.png…]()
username_2: Remove the "I, ". I think staff was concerned about the "I" statement since it was staff completing the filing. So it should read "[Legal Name], certified that he/she has relevant knowledge of the BC Benefit Company and is authorized to make this filing."
username_0: I bumped up the estimate as I think there will need to be 2 variants of this text (one for correction, another for alteration).
username_3: Note, this is a shared component
username_0: This will be resolved by #5475. |
Azure/secrets-store-csi-driver-provider-azure | 848856626 | Title: Add test for helm release
Question:
username_0: **Describe the solution you'd like**
- Add pipeline to run e2e tests against helm release charts
**Anything else you would like to add:**
**Environment:**
- Secrets Store CSI Driver version: (use the image tag):
- Azure Key Vault provider version: (use the image tag):
- Kubernetes version: (use `kubectl version`):
- Cluster type: (e.g. AKS, aks-engine, etc):<issue_closed>
Status: Issue closed |
Tamaized/AoV | 440815330 | Title: Change I(nteger):"Recharge Delay" to Float
Question:
username_0: My private pack's play group almost never sleeps, which makes recharging AoV spells very impractical. With the current integer-only options, recharge happens very quickly, making spell use virtually unlimited.
Not sure what implications this would have for the rest of the code, so if that's not practical, perhaps add a "Prayer rug" item that could be used in the field that would make the player have to stop for a few seconds to recharge spells but isn't dependent upon time, location, etc.
Answers:
username_1: The config option is in ticks. Making that a float doesn't make any sense.
username_0: How about we reword the description in the config, then? Currently it reads:
"Sets the recharge rate per tick, -1 disables this" This led me to believe that this was how many charges I was regenerating per tick, not the other way around.
Fantastic mod, my players all love it... just looking for some QoL love is all.
Status: Issue closed
|
force11/force2017 | 233711406 | Title: Create SVG versions of FORCE2017 logo
Question:
username_0: The FORCE2017 logos are in the FORCE2017 folder under LOGOS https://goo.gl/7g4QoC (cf. https://github.com/force11/force2017/issues/66#issuecomment-306300687 ) but don't currently have SVG versions.
Answers:
username_1: Do you want them all in SVG? If not, which one(s)?
username_0: All of them, as we don't know which ones are more likely to be reused.
Status: Issue closed
|
Ecotrust/forestplanner | 43289389 | Title: Calculate NPV Baseline scenario [1 week]
Question:
username_0: As a first cut, we could just calculate (_not optimize for_) the max NPV as a default scenario. This would ignore realistic constraints such as riparian buffers and other objectives that may come into play with a "real" NPV scenario. But it would be a first cut.<issue_closed>
Status: Issue closed |
libp2p/rust-libp2p | 327398031 | Title: Implement other encryption schemes than RSA in secio
Question:
username_0: IPFS uses RSA, unfortunately the ring library doesn't support generating RSA keys.
The most pragmatic approach is to implement alternatives.
Answers:
username_0: Problem: `ring` doesn't allow serializing/deserializing ECDH private keys.
Status: Issue closed
|
AlphaWallet/alpha-wallet-ios | 719829864 | Title: Update of ERC20 token balance seems to lag after Activity tab by at least a few seconds or more
Question:
username_0: Could be a matter of refresh cycles ("ticks"), but when we pick up a new activity which is one of the known transfer types (ERC20/ERC721 send/receive event/activities), we should just manually trigger a refresh for that token balance.<issue_closed>
Status: Issue closed |
keris2020/hackathon | 740887841 | Title: CUDA out of memory error
Question:
username_0: 
With a batch size of 256 the error above occurs; reducing it to 128 makes the error go away.
From what I could find, it seems this may be because the job exceeds what the platform provides for free by default... I'm wondering whether there is a way to resolve this.
Answers:
username_1: It's probably because the deep learning model has many layers or the tensors are large... you'll likely need to reduce them a bit.
cmsdaq/DAQExpert | 310015330 | Title: add support for defining threshold for 'no trigger rate'
Question:
username_0: When data taking and the trigger are configured to record splashes (like now), there are typically extended periods with no triggers at all. Even the calibration sequence is turned off, both to make sure the splashes are (almost) the only triggers which fire (so the shift crew can immediately check the event display) and so that splashes are not lost because triggers are inhibited during the calibration sequence.
However, this leads to a somewhat annoying rate of sound alerts about 'no rate when expected' in the control room :-)
As discussed with @dinyar on the phone, a simple solution for this case would be to introduce an easily configurable parameter defining what should be considered 'zero' in the class `NoRate`. This parameter could temporarily be set to minus one during splash data taking.
----
As a bonus, the DAQExpert could have a special 'splashes' mode which would generate a sound every time the number of triggers increases by a (configurable) number between snapshots (see also https://github.com/cmsdaq/DAQExpert/issues/29#issuecomment-271580418 and #50) but that should be a discussed in a separate issue.
Answers:
username_0: a run with splashes is run 313133, see e.g. http://daq-expert.cms/DAQExpert/?start=2018-03-30T07:59:40.560Z&end=2018-03-30T10:00:44.813Z
username_0: About the automatic detection of 'splashes' mode: the level Zero function manager publishes the value of `L1_HLT_TRIGGER_MODE` in the `levelZeroFM_dynamic` flashlist (outside of `running` state it is however `N/A`). During data taking for LHC beam splashes, this key typically contains the word `splashes` (at the time of writing, there are at least two keys containing the word `splashes`).
So a proposal for detecting splashes mode (which would then allow to make splash-specific conclusions in the logic modules) would be to introduce a configurable parameter corresponding to a regular expression (e.g. `.*splashes.*`) which would be matched against the value of `L1_HLT_TRIGGER_MODE` of the toppro instance.
However, since it is unlikely that we will get splashes before LHC Run 3 (apart from those in the coming few days), there is no urgency to implement this now.
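For illustration, the proposed check could look like this (sketched in Python for brevity; DAQExpert itself is Java, and all names and example values here are hypothetical):
```python
# Illustrative sketch only; DAQExpert is Java and these names are hypothetical.
import re

# Configurable parameter, e.g. read from the expert's properties file.
SPLASHES_TRIGGER_MODE_PATTERN = re.compile(r".*splashes.*")

def is_splashes_mode(l1_hlt_trigger_mode: str) -> bool:
    # Outside of the 'running' state the flashlist publishes "N/A",
    # which this pattern correctly rejects.
    return bool(SPLASHES_TRIGGER_MODE_PATTERN.fullmatch(l1_hlt_trigger_mode))

assert is_splashes_mode("cosmics_splashes2018")  # hypothetical key name
assert not is_splashes_mode("N/A")
```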
username_0: The corresponding code changes were deployed as a hotfix and merged back to `master` and `dev` (see also #171).
Status: Issue closed
|
wmjtxt/muxicomment | 598245427 | Title: A Brief Analysis of Soft Links and Hard Links | 木夕木火のBlog
Question:
username_0: https://username_0.github.io/2019/11/05/ln/
Background: a file on a Linux system consists of two parts: user data and metadata. The user data holds the file's contents, while the metadata stores the file's attributes. The inode number in the metadata, not the file name, is the file's unique identifier. In fact, one inode number can correspond to one or more file names; this is what a hard link is. A hard link can therefore be understood as one of several aliases for the same file. A soft link, in contrast, is a file whose user data stores the path pointing to another file.
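To see the inode behaviour described above in practice, here is a minimal sketch using Python's standard library (Linux/macOS only):
```python
# Minimal sketch: hard links share an inode; soft links get their own.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "file")
    with open(src, "w") as f:
        f.write("data")

    hard = os.path.join(d, "hard")
    soft = os.path.join(d, "soft")
    os.link(src, hard)     # hard link: another file name for the same inode
    os.symlink(src, soft)  # soft link: a new file that stores a path to src

    assert os.stat(src).st_ino == os.stat(hard).st_ino   # same inode
    assert os.lstat(soft).st_ino != os.stat(src).st_ino  # its own inode
```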
barbacenasmc/armis-issues | 96486730 | Title: View Record Properties: Retention period is missing
Question:
username_0: Steps to replicate:
1. Open an existing non-electronic record to view.
2. Observe the properties section on the right side of the viewing page and look for retention period.
Actual result: Retention period is missing.
Expected result: Retention period should display.

Answers:
username_1: fixed
username_0: Issue fix was already verified.
Status: Issue closed
|
adamchainz/apig-wsgi | 476956794 | Title: Ability to send binary response with `text/` or `application/json` content type
Question:
username_0: Problem
--------
`Response.as_apig_response` is hardcoded to never return a binary response when the content type of the response is either `text/*` or `application/json`.
https://github.com/username_1/apig-wsgi/blob/b0ce56cbb3ff67586c398f57bc13590bf0961940/apig_wsgi.py#L105-L109
This becomes a problem for use cases like returning a gzip response from the application with the above content types.
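For illustration, a minimal WSGI app that hits this limitation might look like the following (a hypothetical example, not code from apig-wsgi); the content type says `application/json`, but the gzipped body is binary and must be base64-encoded for API Gateway:
```python
# Hypothetical minimal WSGI app demonstrating the gzip use case.
import gzip
import json

def application(environ, start_response):
    body = gzip.compress(json.dumps({"hello": "world"}).encode("utf-8"))
    start_response("200 OK", [
        ("Content-Type", "application/json"),  # treated as non-binary today
        ("Content-Encoding", "gzip"),          # ...but the payload is binary
        ("Content-Length", str(len(body))),
    ])
    return [body]
```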
Possible Solution
-----------------
**Check if the `Content-Encoding` header has `gzip` in it or not, and return the binary response based on that.**
This fix would keep the hardcoded values, something like this should work:
```python
def _should_send_binary(self):
    """Determines if binary response should be sent to API Gateway"""
    non_binary_content_types = ("text/", "application/json")
    content_type = self._get_content_type() or ''
    content_encoding = self._get_content_encoding() or ''
    supports_binary = self.binary_support
    is_binary_content_type = not content_type.startswith(non_binary_content_types)
    if supports_binary and is_binary_content_type:
        return True
    # Content type is non-binary but the content encoding is.
    elif supports_binary and not is_binary_content_type:
        return 'gzip' in content_encoding.lower()
    return False
```
**Allow `Response` class to be extended so that users can handle this logic on their own.**
This can be done by introducing the same `Response._should_send_binary` method which defaults to the current behaviour. The `make_lambda_handler` function can then be allowed to have an argument like `response_class` to facilitate this.
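A rough sketch of how that could look from the user's side (hypothetical; neither `response_class` nor an overridable `_should_send_binary` exists in apig-wsgi today, and `_get_content_encoding` is the helper proposed above):
```python
# Hypothetical sketch of option 2; `response_class` and the overridable
# `_should_send_binary`/`_get_content_encoding` hooks are proposals only.
from apig_wsgi import make_lambda_handler, Response  # Response is internal today

class GzipAwareResponse(Response):
    def _should_send_binary(self):
        # Treat gzip-encoded bodies as binary, even for text/JSON content types.
        content_encoding = self._get_content_encoding() or ""
        if "gzip" in content_encoding.lower():
            return self.binary_support
        return super()._should_send_binary()

handler = make_lambda_handler(app, response_class=GzipAwareResponse)  # `app` is your WSGI app
```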
I can create a PR for either of the fixes which seem more appropriate to you.
What do you think about this?
Answers:
username_1: Hi @username_0 - I can't remember why I excluded those content types, but I *think* it might have been because API Gateway didn't support them being binary encoded when I tested.
Since `apig-wsgi` is one file, can you modify the logic and test yourself? Also are you using API Gateway or an ALB?
Very happy to go down either route. I think allowing Response to be extended is useful whatever.
username_2: I gave the method @username_0 specified a go and it works well with API Gateway (_Binary Media Types_: `*/*` and _Content Encoding_ `enabled` with `10000` minimum body bytes).
I'd ideally prefer the new binary check to the default if Response extensibility is added. I'm using a copied version of this library with the new binary check in what I'm working on at the moment.
username_0: @username_1 We've been using the modified version of this library in production, it is working fine.
We use it with API Gateway. It works with and without `BinaryMediaTypes` being enabled.
We currently use this on per-API/endpoint basis by using the `contentHandling=CONVERT_TO_BINARY` option in integration requests/responses.
More on this: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings.html
username_1: @username_2 , @username_0 Can one/both of you make PR's? I think making `Response` extensible is a separate issue right now btw, so a "hard coded" approach is fine.
username_0: Sure, I'll do it this weekend.
Status: Issue closed
|
mehdisadeghi/react-mathjax-preview | 592367081 | Title: Render problem in the React app
Answers:
username_1: Could you provide a minimal example that demonstrate the bug please?
username_2: @username_1 I am also facing the same issue.
A simple React component that receives the math text to display as props from a parent component needs a page refresh to show the MathJax preview.
username_1: @username_2 what is the difference between the your react app and the demo on this repository? Does it work with older versions than 1?
I can only work on the issue if I can reproduce it.
username_3: That's because when you re-render, loadingState is still 'loading', and the script tag's onload only triggers once.
Quick fix:
```js
useEffect(() => {
  if (loadingState !== 'loaded' && !window.MathJax) {
    return
  }
  previewRef.current.innerHTML = sanitizedMath
  window.MathJax.Hub.Queue([
    'Typeset',
    window.MathJax.Hub,
    previewRef.current
  ])
}, [sanitizedMath, loadingState, previewRef])
```
@username_0 @username_2
username_1: @username_3 thanks! Can you make your quick fix into a quick PR?
username_2: @username_3 - Is this quick fix to be done inside react-mathjax-preview library?
Status: Issue closed
|
convergencelabs/convergence-project | 898816693 | Title: Convergence user auto select allows non-existent users
Question:
username_0: **Versions**
- Convergence Version: 1.0.0-rc.7
**Describe the Bug**
When adding members to a domain the convergence user autocomplete allows the user to type in a user that does not exist.
**Expected Behavior**
The input control should not allow you to select a user that does not exist.<issue_closed>
Status: Issue closed |
larq/zookeeper | 598503281 | Title: RFC: Caching field values
Question:
username_0: ### Feature description and motivation
At the moment, field values on component instances behave much like instance attributes of generic Python class instances. One value exists per instance, and if it is mutable then an access after modification will return the same, modified value, e.g.:
```python
from typing import List

from zookeeper import Field, component


@component
class A:
    foo: List[int] = Field(lambda: [1, 2, 3])


class B:
    def __init__(self):
        self.foo = [1, 2, 3]


a = A()
b = B()
assert a.foo == b.foo == [1, 2, 3]

a.foo.append(4)
b.foo.append(4)
assert a.foo == b.foo == [1, 2, 3, 4]
```
We could change this behaviour so that field values instead behave much more like `@property` values, i.e. the value is not 'cached' on the instance and instead re-generated on every access. See discussion here for a motivation of this different behaviour: https://github.com/larq/zoo/issues/148#issuecomment-612523975.
### Current implementation
For a full explanation of how components access field values, see the docstring of the `_wrap_getattribute` method in `component.py`: https://github.com/larq/zookeeper/blob/2b8812aa61f4f96d7de5512ca1f4f4c78d3117b6/zookeeper/core/component.py#L116-L144
### New implementation
It would be straightforward to implement `@property`-esque behaviour for default values which are passed into fields, as mutable default values are already generated from lambdas, and there's no issue with immutable default values being cached.
However, it would be much more difficult to implement for values passed in through the CLI. Consider the configuration CLI argument `foo=[1, 2, 3]`. We receive this as a string, and parse it into a Python value (in this case a list) to be used as the value for the field `foo`. If we wanted to return a new instance of this list on each access of `foo`, we would either need to be able to deep-clone generic mutable objects, or we would have to hold on to the configuration value as a string, and re-parse it into a Python value each time.
It's an open question whether we are happy for the behaviour of default values vs CLI-overridden values to be different.
Answers:
username_1: In general I'm usually in favor of having a bit of internal implementation ugliness as a way to support more intuitive/robust/harder-to-break behavior for the end user, so these options don't sound too bad to me. I'd probably go for the latter, since it sounds easiest.
(Same reasoning as favoring giving up some cleanliness/minimalism in model code to enable larq/zoo#61, for example.)
username_0: This is a very valid point. I guess then the question becomes which behaviour we actually want. We can also support both (we already have a keyword arg to `Field` - `allow_missing` - and we could add `cache_value` as well, in which case we would need to decide what the default is).
username_2: I thought about introducing something like a `PartialField`, but I think having a keyword argument is cleaner.
In general having
```python
@Field
def input_quantizer(self):
    return lq.quantizers.SteSign(clip_value=1.25)
```
behave like `@property` is very intuitive for me, though the same argument could be made the other way around.
username_0: If we have a single mutable object then wrapping it inside a lambda won't solve the issue because each call to the lambda will return a reference to the same object. I think we would have to do the CLI string parsing inside the lambda.
username_0: I completely agree. I think having `cache_value` default to false is a reasonable thing to do.
username_2: What about using [`copy.deepcopy`](https://docs.python.org/3.8/library/copy.html#copy.deepcopy) in the lambda?
username_0: Cool, I hadn't come across that before but it looks like it would work. Especially as the values that can be passed in through the CLI (i.e. parsed from a string) are a very small subset of possible Python objects, and no doubt exclude any 'difficult' ones which would be hard to clone.
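Concretely, the CLI path could then hold a factory rather than the parsed value itself (a sketch, not the actual Zookeeper internals):
```python
# Sketch only; this is not the actual Zookeeper implementation.
import copy

def make_cli_value_factory(parsed_value):
    # `parsed_value` is the Python object parsed once from the CLI string;
    # deepcopy ensures each field access gets an independent instance.
    return lambda: copy.deepcopy(parsed_value)

factory = make_cli_value_factory([1, 2, 3])
first = factory()
first.append(4)
assert factory() == [1, 2, 3]  # later accesses are unaffected by the mutation
```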
username_0: Okay, so the concrete proposal is that `Field` gains a `cache_value` keyword argument, defaulting to `false`. When set to `true`, `Field` accesses have their current behaviour. When set to `false`, `Field` accesses behave like `@property` in the way specified above.
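In usage terms, the proposal looks like this (a sketch; `cache_value` is the proposed name and does not exist yet):
```python
# Sketch of the proposed API; `cache_value` is not implemented yet.
from typing import List

from zookeeper import Field, component

@component
class A:
    # Default: re-generated on every access, like @property.
    fresh: List[int] = Field(lambda: [1, 2, 3])
    # Opt-in: generated once, then cached on the instance.
    stored: List[int] = Field(lambda: [1, 2, 3], cache_value=True)

a = A()
a.fresh.append(4)
assert a.fresh == [1, 2, 3]      # the mutation did not stick
a.stored.append(4)
assert a.stored == [1, 2, 3, 4]  # the cached value was mutated in place
```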
Are we all happy to move forward with this?
username_2: :+1:
username_0: Unless we require the immutable defaults to be passed in with lambdas we can't really not cache immutable values, but I don't think that matters?
username_2: I agree, just wanted to make sure that this applies only to values we pass via lambda functions.
username_0: Makes sense, and yes to be clear this will apply to mutable defaults passed with lambda functions and we'll use the `copy.deepcopy` module to do the same for things like lists that are passed in through the CLI.
username_0: Apologies to come back to this, but I have some further questions after partially implementing this.
It seems to me that behaviour could be quite confusing when you mix cached and not cached fields with parent/child components.
```python
from typing import List

from zookeeper import ComponentField, Field, component, configure


@component
class Child:
    foo: List[int] = Field(cache_value=False)


@component
class Parent:
    child: Child = ComponentField(Child)
    foo: List[int] = Field(lambda: [1, 2, 3], cache_value=True)


p = Parent()
configure(p, {})

assert p.foo == [1, 2, 3]
assert p.child.foo == [1, 2, 3]

p.foo.append(4)
assert p.foo == [1, 2, 3, 4]
assert p.child.foo == ???  # What should this be?
```
It seems to me that either behaviour here would be confusing. We either don't respect value inheritance, or we don't respect the fact that the child field is `cache_value=False`.
The solution would probably be to disallow `cache_value=False` if any 'parent field' has `cache_value=True`. But what happens if the parent is a super-class defined in another package, that can't be modified (especially as the default for a field is `cache_value=True`).
username_2: @username_0 What about not supporting `cache_value=True` at all, or am I missing something?
username_0: I think we need to support caching values. For example, in Zoo it would be very weird (and potentially buggy) if a new optimizer was created for every access of [`self.optimizer`](https://github.com/larq/zoo/blob/18b7473cf9e054495b76d010291685e6b43b3bc0/larq_zoo/training/basic_experiments.py#L46-L56).
[Here](https://github.com/larq/zoo/blob/01579c33ee84efa64d301ee9fe3091a4155789d4/larq_zoo/training/train.py#L35-L43) is another Zoo example where having caching is important.
username_2: I agree, for the case of the optimizer it is easy to get around by assigning it to a temporary variable, but for other cases where you might want to access the same value from different methods it makes sense to still have this possibility.
What do you think about adhering, by default, to the way Python handles mutable class attributes?
```python
class Parent:
    foo = [1, 2, 3]


class Child(Parent):
    pass


c = Child()
p = Parent()

assert c.foo == [1, 2, 3]
assert p.foo == [1, 2, 3]

c.foo.append(4)

assert c.foo == [1, 2, 3, 4]
assert p.foo == [1, 2, 3, 4]
```
username_0: I've gone back and forth on this several times now, API design is not easy :)
We all agree that having the ability to cache values is important. The proposal is to also have the ability to have values which aren't cached. The problem then lies in how the two interact, especially when you mix in field value inheritance and CLI-overridden values. There are a lot of corner cases, some of which we can resolve by disallowing some combinations of behaviour (e.g. disallowing `cache_value=False` if any 'parent field' has `cache_value=True`), but in general it is hard to cover all of them. I am concerned that by allowing both behaviours we are introducing more points of confusion into an API that's already quite confusing.
I now feel like the best course of action is probably to keep the existing behaviour, where we always cache values, and discourage the use of `Field`s for anything other than the 'built-in' types of `int/float/str/bool/...` plus sets, dicts, and lists of those types. These are, coincidentally, the subset of Python values which can be parsed by the CLI. I've actually always been uncomfortable with the use of `Field`s for more complicated values such as functions and the like; that's what `@component`s and `ComponentField`s are for!
Regarding larq/zoo#148, I think we can go with @username_2's initial fix, in PR larq/zoo#149, and not use `Field`s for setting quantisers.
username_1: This sounds good to me. I've merged #149.
I think maybe it'd be good to have some simple examples of how these Zookeeper concepts (`Field`s, `@component`s, and `ComponentField`s) can/should be used in the context of an experiment/`@task`, perhaps alongside or replacing the more abstract child/parent examples we have in `README.md` now. I've still learned this mostly from pattern matching what I've seen others do across Zoo and other places, and it'd be nice to have a reference of the most ~Pythonic~ Zookeperic ways of doing different things. Shall I make an issue for this?
username_0: Yes, that's a good idea! There is an [example experiment](https://github.com/larq/zookeeper/blob/master/examples/larq_experiment.py) thing which I think is a good start, but we should make that more prominent and give more examples, you're right. |
glasklart/hd | 195770806 | Title: Hook Champ
Question:
username_0: **App Name:** Hook Champ
**Bundle ID:** com.rocketcat.HookChamp
**iTunes ID:** <a target="_blank" href="http://getart.username_1.at?id=334626134">334626134</a>
**iTunes URL:** <a target="_blank" href="https://itunes.apple.com/us/app/hook-champ/id334626134?mt=8&uo=4">https://itunes.apple.com/us/app/hook-champ/id334626134?mt=8&uo=4</a>
**App Version:** 1.60
**Seller:** Rocketcat LLC
**Developer:** <a target="_blank" href="https://itunes.apple.com/us/developer/rocketcat-llc/id305576988?uo=4">© Rocketcat LLC</a>
**Supported Devices:** iPhone, iPod-touch, iPod-touch-with-mic, iPhone-3G, iPhone-3GS, iPadWifi, iPad3G, iPodTouchThirdGen, iPhone4, iPodTouchFourthGen, iPad2Wifi, iPad23G, iPhone4S, iPadThirdGen, iPadThirdGen4G, iPhone5, iPodTouchFifthGen, iPadFourthGen, iPadFourthGen4G, iPadMini, iPadMini4G, iPhone5c, iPhone5s, iPhone6, iPhone6Plus, iPodTouchSixthGen
**Original Artwork:**
<img src="http://is4.mzstatic.com/image/thumb/Purple71/v4/b3/d9/cc/b3d9ccbe-cf79-4b42-5ecd-4f7830491215/source/1024x1024bb.png" width="180" height="180" />
**Accepted Artwork:**
\#\#\# THIS IS FOR GLASKLART MAINTAINERS DO NOT MODIFY THIS LINE OR WRITE BELOW IT. CONTRIBUTIONS AND COMMENTS SHOULD BE IN A SEPARATE COMMENT. \#\#\#
Answers:
username_1: ?

https://cloud.githubusercontent.com/assets/2068130/21292482/8bdf4cfc-c507-11e6-9a7a-c5c3c8483770.png
--- ---
Source:
https://cloud.githubusercontent.com/assets/2068130/21292483/9d33837e-c507-11e6-99f1-abe4cc15b4da.png
username_0: I think that looks fine, thanks.
Status: Issue closed
|
saltstack/salt | 93903005 | Title: [2015.8.0rc1] salt-bootstrap fails to properly install a minion
Question:
username_0: `curl -s -L https://bootstrap.saltstack.com | sudo sh -s -- -U -P -p python-jinja2 -A master git v2015.8.0rc1`
results in the following output:
```
* INFO: Running daemons_running()
* DEBUG: Waiting 3 seconds for processes to settle before checking for them
* ERROR: salt-minion was not found running
* ERROR: Failed to run daemons_running()!!!
```
Which makes sense because the init script is missing.
```
bjackson@dev-kmr01:~$ ls /etc/init.d/salt*
ls: cannot access /etc/init.d/salt*: No such file or directory
```
Not entirely sure if the error is in salt-bootstrap or salt (or some other place).
```
bjackson@dev-kmr01:~$ salt-call --versions
Salt Version:
Salt: 2015.8.0rc1
Dependency Versions:
Jinja2: 2.6
M2Crypto: 0.21.1
Mako: Not Installed
PyYAML: 3.10
PyZMQ: 13.1.0
Python: 2.7.3 (default, Mar 13 2014, 11:03:55)
RAET: Not Installed
Tornado: 4.2
ZMQ: 3.2.3
ioflo: Not Installed
libnacl: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.1.10
pycrypto: 2.6
System Versions:
dist: debian 7.8
machine: x86_64
release: 3.16.0-0.bpo.4-amd64
system: debian 7.8
```
Let me know what other information you need.
Answers:
username_0: `curl -s -L https://bootstrap.saltstack.com | sudo sh -s -- -U -P -p python-jinja2 -A master git v2015.5.3`
works fine
username_1: @username_0, thanks for the report.
username_2: Can you please paste the script's full output while passing `-D` to it? ... probably in a gist ...
username_0: I can't. When I try to select the output to copy it, my terminal dies. But it did it again and there was (again) no /etc/init.d/salt-minion
username_1: @username_2, @username_0: [without `-D`](https://gist.github.com/username_1/19622cef4e25e20956e9), [with `-D`](https://gist.github.com/username_1/2e8b692579e16e129481).
username_0: There was no /etc/init.d/salt-minion in either case. But with -D the bootstrap script seems to start the salt-minion in the foreground at the end (which may make it appear to be working correctly).
username_2: @username_0 in debug mode is runs the minion in the foreground to see if there's any complaint with salt starting(missing deps, bad code, etc would show up)
username_0: Okay, in that sense, the minion starts correctly with -D. But there's still no init script installed.
Are you having trouble reproducing this issue or something? I mean #25456 is open as well that deals with this same bug but in the master. Are you not seeing the same thing when you install the rc with the bootstrap script?
username_2: @username_1 is starting up another VM so we can get our hands even dirtier trying to find the source if this problem...
username_2: I know what the problem is. We removed the debian directory from our repository, so there's no init script to copy from...
Taking care of if...
username_2: https://github.com/saltstack/salt/pull/23219
Status: Issue closed
|
evan-goode/duolibre | 957554480 | Title: _RSAobj object has no 'export_key' attribute
Question:
username_0: Soooo close! I've got py3 and all the pip3'd dependencies installed.
Then I run to the Duo device registration and get the URL:
https://m-d9c5afcf.duosecurity.com/android/2xVRQjwT0kszoz******
Then I post the command to my Ubuntu 20.04 LTS box at Azure:
username_0@myserver:~/duolibre/duolibre$ ./duolibre.py https://m-d9c5afcf.duosecurity.com/android/2xVRQjwT0kszoz******
Result:
```
Traceback (most recent call last):
  File "./duolibre.py", line 78, in <module>
    duolibre()
  File "/usr/lib/python3/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "./duolibre.py", line 59, in duolibre
    secret = get_secret(activation_uri)
  File "./duolibre.py", line 37, in get_secret
    "pubkey": RSA.generate(2048).publickey().export_key().decode(),
  File "/usr/lib/python3/dist-packages/Crypto/PublicKey/RSA.py", line 126, in __getattr__
    raise AttributeError("%s object has no %r attribute" % (self.__class__.__name__, attrname,))
AttributeError: _RSAobj object has no 'export_key' attribute
```
Any ideas? |
cibuilds/drupal | 458814980 | Title: 403 Forbidden
Question:
username_0: I'm getting 403 for all sites even after I create new virtual hosts or change permissions of directories.
response from `curl http://localhost`
```
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /
on this server.<br />
</p>
<hr>
<address>Apache/2.4.10 (Debian) Server at localhost Port 80</address>
</body></html>
```
example VirtualHost conf file
```
<VirtualHost *:80>
DocumentRoot "/root/project/web"
ServerName localhost
<Directory /root/project/web >
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
``` |
astarte-platform/astartectl | 744626842 | Title: Allow restricting the end timestamp of the active device list
Question:
username_0: See #118: this means that we want to be able to add an `--active-until` flag. Right now this is not possible, since only the last connection is saved, which rules out moving the end of the activity window.
This issue will track the implementation of the feature when Astarte APIs allow to do so. |
cgeo/cgeo | 40603330 | Title: Replace screenshots after actionbar has been released
Question:
username_0: This is mainly a reminder for myself that some of the PlayStore screenshots have to be updated after we released the actionbar version.
Same applies for crowdin screenshots, which will be quite some work.
Assigning myself.
- [x] English (US)
- [x] English (UK)
- [x] German
- [x] French
- [ ] Italian
- [ ] Japanese
- [ ] Netherlands
- [ ] Polish
- [ ] Portuguese
- [ ] Swedish
- [ ] Czech
- [ ] Hungarian
- [ ] Slovenian
Status: Issue closed
Answers:
username_0: We do now use a completely redesigned screenshot series commonly (english) for all languages.
Lets leave it like this: It looks good and maintaining screenshots for all supported languages is a nightmare without much benefit. |
pythonindia/wye | 226791799 | Title: Add new user role : Student
Question:
username_0: We need to add a new user role, student, to our system.
If a user selects student in their profile, we need to ask them to select a college from our database.
For now, we will only allow the user to pick from colleges already registered in our database.<issue_closed>
Status: Issue closed |
davidteather/TikTok-Api | 1055102618 | Title: ERROR: The request could not be satisfied
Question:
username_0: The TikTok API isn't working,
ERROR:root:Tiktok response:
```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>403 ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Bad request.
We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
<BR clear="all">
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: YDg97PShlKvMCvORQQRIBUZ6DGm5DhyGtMNTZvQ3_q2yRs___8dpLA==
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>
```
Answers:
username_1: I'm getting the same error.
Is it because I can't authenticate with cookies anymore?
username_0: this problem happens when running api.by_username
username_2: I also started getting this error
username_3: Having same issue as well.
username_4: having same issue @username_14
username_5: any advice to solve this issue @username_14
username_6: I have the same issue as well.
username_7: @username_14 Getting same issue please look into this.
username_8: Same issue for me too! Can't get past the Captcha.
"TikTok blocks this request displaying a Captcha
Tip: Consider using a proxy or a custom_verifyFp as method parameters"
I am already using custom_verifyFp.
```python
from TikTokApi import TikTokApi
import logging

api = TikTokApi.get_instance(logging_level=logging.INFO, custom_verifyFp="cookie")
print(api.by_username("username", count=1, custom_verifyFp="cookie"))
```
username_9: until david updates it, use @username_11 's pull request guys, fixed it for me
username_10: As already pointed out, the call is failing due to the Captcha request.
In fact, TikTok has been very active in recent months in blocking these kinds of requests from bots (and we should not expect things to get better).
username_0: @username_11's version works! Great job.
username_11: @username_10 That is correct but without my fix none of the requests work on my machine because CloudFront does not like the host header.
username_8: Can someone help me out here. I'm still getting the same error, but I don't think I grabbed the new branch correctly (new to this). I'm working in a VS Code Terminal on Windows 10.
`python -m pip uninstall TikTokApi`
`git clone https://github.com/username_11/TikTok-Api.git`
`cd TikTok-API`
`python setup.py install`
username_12: @username_8, you are missing checking out to the right branch:
...
`cd TikTok-API`
`git checkout fix/403-error-cloudfront`
`python setup.py install`
...
username_3: Pulling from fix/403-error-cloudfront won't solve the issue with api.by_username().
TikTok has changed the API params for getting user posts, so api.by_username can't be used anymore. Besides, if you try this https://try.playwright.tech/?e=screenshot-async&l=python and pass any TikTok URLs, you might get a timeout error. Somehow Playwright can't access the TikTok domain anymore.
username_13: just a comment so i get notified
username_14: Might have been fixed in V4.1.0, but cloudfront workaround in #768
username_2: For me it is fixed
username_13: Fixed for me as well, thanks !
Status: Issue closed
|
payu-intrepos/iOS-SDK-Sample-App | 164064406 | Title: None
Question:
username_0: Please drop an email to <EMAIL> so we can assist you.
Answers:
username_1: Same problem. Did you fix it?
username_2: @username_1 : Are you using any third party library for keyboard related view handling?
username_3: Same problem. @username_2, I am using an external keyboard library.
username_3: @username_1, @jenkinsdafftest Were you able to fix this?
username_2: If possible disable third party keyboard handling libraries when presenting stand-alone UI which you are not controlling.
Status: Issue closed
username_4: closed. |
postmanlabs/postman-app-support | 232013935 | Title: Postman interceptor missing in win10 app
Question:
username_0: I just installed the latest version of Chrome (58.0.3029.110, 64-bit) and Postman (4.10.7), then I installed the Chrome Interceptor extension, but it does not appear in Postman; there is a "proxy settings" option instead.
The Interceptor is not working anyway with the embedded Chrome extension.
OS version Windows 10

Answers:
username_1: Hi @username_0, Postman Interceptor is available only on the [Chrome apps](https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop), since cookie handling flows are built into the native apps.
I'd recommend sticking to the native apps though, since Google will end support for Chrome apps soon.
Status: Issue closed
|
jlippold/tweakCompatible | 482492941 | Title: `MTerminal` working on iOS 12.4
Question:
username_0: ```json
{
  "packageId": "mterminal",
  "action": "working",
  "userInfo": {
    "arch32": false,
    "packageId": "mterminal",
    "deviceId": "iPhone9,2",
    "url": "http://cydia.saurik.com/package/mterminal/",
    "iOSVersion": "12.4",
    "packageVersionIndexed": true,
    "packageName": "MTerminal",
    "category": "Utilities",
    "repository": "Bingner/Elucubratus",
    "name": "MTerminal",
    "installed": "1.4-6",
    "packageIndexed": true,
    "packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.",
    "id": "mterminal",
    "commercial": false,
    "packageInstalled": true,
    "tweakCompatVersion": "0.1.5",
    "shortDescription": "A MobileTerminal fork",
    "latest": "1.4-6",
    "author": "lordscotland",
    "packageStatus": "Working"
  },
  "base64": "<KEY>
  "chosenStatus": "working",
  "notes": ""
}
```<issue_closed>
Status: Issue closed |
Oksana999/familytree | 479285030 | Title: Relationship creation page.
Question:
username_0: Form:
- parent (a dropdown list of persons)
- child person (a dropdown list of persons)
- relationship type.
Answers:
username_0: https://www.w3schools.com/tags/tag_select.asp
username_0: Keep in mind that the parent always takes precedence over the child.
The husband (parentId) takes precedence over the wife (childId).
Implement a Comparator!!
Status: Issue closed
|
operasoftware/devopera | 115399048 | Title: Article idea: The concept of Web Components
Question:
username_0: Web components are an old concept and soon a new standard for the Open Web. In this article I retrace the reason why we need components in web applications, show the difference between web components implemented in different libraries and frameworks, and show the code of a game written using only web standards.
The article will be based on the following slides of a tech talk a gave a couple of weeks ago at an event here in London:
http://lanyrd.com/2015/web-components-in-practice/sdtytt/#link-ccrfc
What are your thoughts? |
explosion/spaCy | 571742275 | Title: Upgrades for spacy 1.x
Question:
username_0: I just wanted to ask if there are any plans to upgrade spacy 1 with any of the features introduced in spacy 2; e.g. improvements to the lemmatiser and custom token attributes.
I mainly use spacy for the POS tagger, but can't justify moving to spacy 2 for such small gains at such a significant cost to speed (cf. [your benchmarks](https://spacy.io/usage/facts-figures#spacy-models), which are consistent with my own experiments).
While I know spacy 2 is your focus, it would be great if there were occasional updates to spacy 1, since I'm sure some of the changes are backwards compatible.
Answers:
username_1: Hi Chris,
It's too hard to port the changes back, because there are too many differences from V1, especially with the stateful stringstore. What we'd be doing instead is porting the v1 model to work with v3: we can have a linear model that works like the v1 model, and should be similar in speed, but working with the new version.
Status: Issue closed
|
kamilsk/egg | 547673101 | Title: invalid vendor mode detection
Question:
username_0: if a parent dir has a Gopkg.toml, then gex defines the current manager as dep instead of the current go mod
```
//go:generate go build -v -o=${ROOT}bin/mockgen ./vendor/github.com/golang/mock/mockgen
//go:generate go build -v -o=${ROOT}bin/golangci-lint ./vendor/github.com/golangci/golangci-lint/cmd/golangci-lint
//go:generate go build -v -o=${ROOT}bin/egg ./vendor/github.com/username_0/egg
//go:generate go build -v -o=${ROOT}bin/easyjson ./vendor/github.com/mailru/easyjson/easyjson
//go:generate go build -v -o=${ROOT}bin/goimports ./vendor/golang.org/x/tools/cmd/goimports
```<issue_closed>
Status: Issue closed |
princeton-vl/RAFT | 694414402 | Title: Which released model is more suitable for real-world videos
Question:
username_0: Hi,
I want to use your model as an optical flow extractor on real-world videos. May I know which of the released model is more suitable?
Thanks!
Answers:
username_1: I would recommend trying both the sintel.pth model and the things.pth model. My guess would be sintel.pth since it is trained on the largest number of dataset.
Status: Issue closed
username_0: Thanks for your suggestion! |
tc39/ecma402 | 421706897 | Title: Provide a method for formatting numbers not represented by Number
Question:
username_0: It is fairly common for applications to handle numeric data that cannot be represented as a JavaScript number.
- One enormous application uses 64-bit integers (represented by a triple of Smi values) with an implicit 6 decimal digit scaling, aka 'MicroMoney'. It is necessary to format these values without intermediate rounding or truncation from conversion to Number.
- It should be possible to format the recently added JavaScript BigInt type with a NumberFormat. Note that BigInt by design provides no conversion to Number to prevent accidental truncation. ToNumber conversions throw.
- There are numerous libraries for monetary or 'decimal' values that could benefit from consistent formatting.
- There are extended precision libraries that could benefit from consistent formatting.
In the case of BigInt and extended precision libraries, the bounds on the allowable formatting parameters don't make much sense. A BigInt could have 100 digits, but should still be formatted with the localized digit grouping. An extended floating-point library should not be limited to 21 significant digits.
I'd like to see NumberFormat be capable of formatting these inputs. It is reasonable to expect these inputs to provide an API for digit generation, but Intl should handle everything that is localized.
As it currently stands applications contain an approximation to Intl.NumberFormat written in (or compiled to) JavaScript. The quality of the approximation varies, but the good ones are quite expensive, with measurable application startup latency due to effectively cloning part of ICU in the JavaScript code.
Answers:
username_1: For BigInt, see a proposal in https://github.com/tc39/ecma402/pull/236 . For your decimal use cases, well, I'd like to add a Decimal datatype to JavaScript. I hope it won't take too long! Let's consider adding ECMA-402 features which don't need to reference the Decimal data type if it seems unlikely to be standardized in the future.
username_0: @username_1 The main issue for my customers is MicroMoney. While I would welcome a Decimal type, the 'enormous' apps in question are so large as to be 'locked in' to the current types. We could use NumberFormat on Decimal only if there was a reasonably efficient conversion from the current representation to the Decimal type.
A useful option would be a **scale**, indicating the value is already scaled by 10^scale, for both positive and negative values. (I'm not sure if **scale** should be an option or an argument to **format**.) If I could format a `BigInt` with **scale: 6** then it would solve the first bullet point. There is already scaling implicit in '%', albeit in the other direction (-2). **scale** could also be used for fixing a mismatch with units in the format (the application stores a weight in Kg and wants 0.05 displayed as 50 g without doing division itself).
username_1: @username_0 Thanks for explaining your use case. I think there should be a reasonably efficient operation to create a Decimal value of the kind you need here, if we create a Decimal type.
username_2: #407 was a duplicate issue filed for this feature.
@username_1 Do you think it is wise to move forward with a `{ scale }` option to Intl.NumberFormat, which, combined with BigInt, can allow the formatting of arbitrary precision numbers, or is it better to wait for the Decimal proposal?
username_2: Okay. From a discussion with @username_1, my understanding is that there are basically three options; we could do one, two, or all three:
1. Allow strings to be interpreted as decimal numbers. May or may not be web-compatible. Requires reifying the syntax for decimal number strings.
- Q: Do we allow grouping separators in decimal number syntax? Probably not.
```javascript
const nf = new Intl.NumberFormat("en-US");
nf.format("1.23E4"); // => "12,300"
```
2. Add a `{ scale }` option to multiply the input number by a power of 10. We already do this when style="percent".
- Note: covers use case where you already have a BigInt representing currency micros. You can easily format the BigInt using `{ scale: -6 }`.
```javascript
const nf = new Intl.NumberFormat("en-US", { scale: -3 });
nf.format(1230n); // ==> "1.23"
```
3. Wait for the [Decimal proposal](https://github.com/tc39/proposal-decimal) for formatting things not supported by Number or BigInt.
username_2: I plan to address this as part of my new proposal Intl.NumberFormat V3.
https://github.com/username_2/proposal-intl-numberformat-v3
username_1: The current status is, we're working towards allowing strings to be interpreted as decimal numbers, but holding off on the `scale` option, due to the difficulty in finding a good solution to https://github.com/tc39/proposal-intl-numberformat-v3/issues/1 .
username_2: Good question; can you open an issue on https://github.com/username_2/proposal-intl-numberformat-v3?
username_3: @username_2, https://github.com/tc39/proposal-intl-numberformat-v3/issues/34 |
pytorch/pytorch | 305712451 | Title: Crash in `torch.tensor(ndarray, dtype=torch.cuda.float32)`
Question:
username_0: - OS: Linux
- PyTorch version: 0.4.0a0+eeb90d9
- Python version: 3.5
The following segfaults:
```python
import torch
import numpy as np
data = np.array([1.0])
torch.tensor(data, dtype=torch.float32)
torch.tensor(data, dtype=torch.cuda.float32)
```
The first tensor call with `dtype=torch.float32` seems necessary to trigger the crash on the second line.
(Reported by @Balandat)
cc @gchanan
Answers:
username_1: Per #5850 this issue should be closed
Status: Issue closed
|
xynxynxyn/terminal-discord | 305319543 | Title: Publish to npm
Question:
username_0: Is there a reason you haven't published to npm? If you have instructions for making the config file it should work by installing through npm.
Answers:
username_1: As a workaround, you can actually install it by running `npm i -g username_2/terminal-discord`, since `npm` supports installing packages straight from GitHub.
Status: Issue closed
username_2: Implemented a simple default config creation dialogue and published to npm [here.](https://www.npmjs.com/package/terminal-discord). |
elixir-protobuf/protobuf | 1096097365 | Title: lowercase/uppercase mismatch
Question:
username_0: When generating Elixir code, the generated defmodule name starts with an uppercase letter, but the reference to the type keeps a lowercase one:
in the proto file:
```proto
enum valueType {
  VALUE_TYPE_UNDEFINED = 0; // used for null, meaning: in the given timeframe CAN data was unavailable
  VALUE_TYPE_INTEGER = 1;
  VALUE_TYPE_FLOAT = 2;
  VALUE_TYPE_STRING = 3;
}
....
```
The generated defmodule:
```elixir
defmodule Police.Protos.ValueType do
  @moduledoc false
  use Protobuf, enum: true, syntax: :proto2

  @type t ::
          integer
          | :VALUE_TYPE_UNDEFINED
          | :VALUE_TYPE_INTEGER
          | :VALUE_TYPE_FLOAT
          | :VALUE_TYPE_STRING

  field :VALUE_TYPE_UNDEFINED, 0
  field :VALUE_TYPE_INTEGER, 1
  field :VALUE_TYPE_FLOAT, 2
  field :VALUE_TYPE_STRING, 3
end

defmodule Police.Protos.DataSeries do
  @moduledoc false
  use Protobuf, syntax: :proto2

  @type t :: %__MODULE__{
          id: non_neg_integer,
          type: Police.Protos.valueType().t(),
          ...
        }

  defstruct id: nil,
            type: nil,
            samplingPeriod: nil,
            lastSampleOffset: nil,
            valueSet: []

  field :id, 1, optional: true, type: :uint32
  field :type, 2, optional: true, type: Police.Protos.valueType(), enum: true
  ...
end
```
I think the uppercase in the defmodule is correct, but `type: Police.Protos.valueType()` should be `type: Police.Protos.ValueType()`.
Answers:
username_1: @username_0 I believe this has been fixed in #251. Can you give the latest `main` branch of this repo a try please?
username_0: checked out main (32d8f) as dependency in my elixir project, but I have the same issue.
```
defp deps do
  [
    {:protobuf, git: "https://github.com/elixir-protobuf/protobuf"}
  ]
end
```
Status: Issue closed
username_0: Thx so much |
home-assistant/frontend | 927577467 | Title: HA refreshes every minute
Question:
username_0: Can you advise what changes you made, please? I'm experiencing this issue with all of the 2021.6.x releases.
My frontend is refreshing every minute or so. Started around the upgrade to 2021.06.x
I am behind HAproxy on pfSense. I have added the "timeout tunnel 1h" to the Backend pass thru portion of the backend for HA.
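For reference, a minimal sketch of the relevant HAProxy backend section with that timeout applied (the backend name and server address here are placeholders, not taken from my setup):
```
backend homeassistant
    # keep long-lived WebSocket connections open for up to an hour
    timeout tunnel 1h
    server ha 192.168.1.10:8123 check
```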
### Describe the behavior you expected
The frontend should not be refreshing constantly.
### Steps to reproduce the issue
1. Just go to homeassistant:8123 from internet --> pfsense firewall --> HAProxy --> <HA ip>:8123
### What version of Home Assistant Core has the issue?
core-2021.6.1 through core-2021.6.6
### What was the last working version of Home Assistant Core?
core-2021.5.x
### In which browser are you experiencing the issue with?
Google Chrome 91.0.4472.114 (Official Build) (64-bit)
### Which operating system are you using to run this browser?
Windows 10 Pro 21H1 Build 19043.1081
### State of relevant entities
_No response_
### Problem-relevant frontend configuration
_No response_
### Javascript errors shown in your browser console/inspector
_No response_
### Additional information
_No response_
Answers:
username_1: Please check the comments from https://github.com/home-assistant/frontend/issues/9398. Seems to be a similar issue, so the questions that bramkragten asked there would also apply to this ticket, esp. reg. console entries. |
uchicago-cs/chiventure | 875969233 | Title: Design what to do in DSL to WDL stage
Question:
username_0: We will design the stage of extracting information from DSL and loading that information into an internal Python representation before being converted to WDL. The design process will involve discussing how to represent DSL information with Python data structures.
Answers:
username_0: We had a team meeting and we decided that this task would involve designing skeleton functions that will interpret the output of lark and load them into python data structures that we can, in later stages, manipulate into WDL.
After reading through this [tutorial](http://blog.erezsh.com/how-to-write-a-dsl-in-python-with-lark/) on how to write and interpret a DSL with Lark, I have learned the following:
1. Lark takes an EBNF grammar file and outputs a **syntax tree** that represents the structure of the rules and terminals (tokens)
2. We can access the rule name and child nodes of a tree node `t` with `t.data` and `t.children`
The above information should be useful when determining how we are going to access the parsed data and load them into corresponding data structures.
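A tiny runnable sketch of those two points (the grammar here is illustrative only, not the actual DSL grammar):
```
from lark import Lark

# one-rule grammar: a ROOM keyword followed by an identifier
parser = Lark('''
    start: "ROOM" ID
    ID: /[a-z_]+/
    %ignore " "
''')

tree = parser.parse("ROOM kitchen")
print(tree.data)      # 'start': the rule that produced this node
print(tree.children)  # [Token('ID', 'kitchen')]
```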
username_0: We had a meeting where we discussed Jack and Tessie's implementation (commit [3e9aca0](https://github.com/uchicago-cs/chiventure/commit/3e9aca074257363608bfdfb15d0d4a2621b8af13)) that parses a DSL file into something that looks like the WDL format. In this sandbox implementation of the parser, methods corresponding to items, rooms, games, connections, etc. are contained inside a TreeToDict class. These functions will be called correspondingly when lark transforms the syntax tree and loads the information in the tree into Python data structures. Then, json.dumps() is called to transform the Python data structures into a JSON format.
However, there are still major differences between the sandbox output and the desired [WDL format](https://github.com/uchicago-cs/chiventure/blob/dev/src/wdl/sandbox/connected_rooms.jwdl). In the next stage of the sprint, John and I will work on designing modifications to the sandbox functions so that they may output the correct WDL format.
username_0: Although this way of manipulating the syntax tree is relatively straightforward, recursing through the syntax tree may be a better and more flexible way of loading data from the tree into Python data structures that can later be transformed into valid WDL. That way, our output will not be confined by the order of branches in the tree.
username_0: After our team meeting today, we decided that it's better to use the `Transformer` pattern rather than the `Visitor` pattern (documentation for both can be found [here](https://lark-parser.readthedocs.io/en/latest/visitors.html)). The `transformer` can do everything the `visitor` can do, but it replaces the branches of the tree with the return values of each method.
The decision to use `Transformer` is to make the code more intuitive and readable. Currently, the implementation of visitor (commit <PASSWORD>) only has two methods, game and room, and accesses connections, items, etc. under the room method via if statements and helper functions. This design is to keep track of the properties, connections, and items in a given room. However, the design can can become cumbersome as chiventure advances to include more and more things in rooms. Therefore, we decided that using Transformer is a better option because it reconstructs the tree with intermediate Python data structures but still retains the order of the branches. Since we know the order of the branches, we can later go through the data structures again and extract the items that are sub-branches to certain rooms and load them into a separate item dictionary. The idea is to have a design as such:
1. feed the syntax tree to the `transformer`
2. the `transformer` replaces the branches of the tree with intermediate Python data structures
Next, we have functions that extract information from the data structures in the reconstructed tree:
3. function to group all free-floating `property:value` pairs (#981) into "free-floating" data structures
4. function to extract items from rooms and put items in separate dictionary
5. function to place all free-floating properties in their respective place
username_0: I wrote some skeleton functions based on the design outlined above and the current implementation of the sandbox Transformer parser (commit <PASSWORD>):
the transformer class:
```
class TreeToDict(Transformer):
```
we have several objects of the form `('ROOM', (<room id>, <room properties>))` and
we want to group all rooms into their own dict of the form `{<room properties>}`:
```
def game(self, s)
```
replace room branch with dictionary of properties
we have several objects of the form `('ITEM', item dict)` and
we want to group all items into their own list:
```
def room(self, s)
```
replace connections with a dictionary that would automatically contain connection `tuples`
```
def connection(self, s)
```
replace item branch with `dictionary`:
```
def item(self, s)
```
input is of the form `[('id':value),<properties>]`
output is of the form `("actions”, <properties>)`:
```
def action(self, s)
```
replace escaped characters with unicode characters:
```
def ESCAPED_STRING(self, s)
```
replace single property with `tuple`
if property is specified to belong to a specific item or room, load into dictionary of the form `{<item/room id>: (property, value)}`:
```
def property(self, s)
```
replace start branch with tuple `(start, <room id>)`:
```
def start_g(self, s)
```
replace end branch with tuple `(end, <room id>)`:
```
def end_g(self, s)
```
replace id branch with tuple `(id, <room id>)`:
```
def id(self, s)
```
replace location branch with tuple `(location, <room id>)`:
```
def location(self, s)
```
replace phrase branch with the phrase as a joined `string`:
```
def phrase(self, s)
[Truncated]
```
load free-floating item/room property-value pair into an intermediate free-floating properties `dictionary`
input: single property-value `dictionary` of the form `{<item/room id>: (property, value)`}:
```
def load_free_property(property)
```
extract item from room and load item in separate `ITEMS` dictionary
input: a list of items:
```
def extract_items(transformed_tree)
```
place free-floating properties into their appropriate place
input:
- a `dictionary` of free-floating property-value pairs
- syntax tree transformed by `TreeToDict Transformer`:
```
def place_free_properties(free_properties, transformed_tree)
```
username_1: Using the skeleton design that Michelle wrote about above, it can be more easily visualized with all the functions next to each other, and I updated the function headers to a more refined design by referencing the current implementation of the Transformer sandbox.
```
import sys
import json

from lark import Lark, Transformer
from lark.lexer import Token

from vars_parser import evalVars

grammar_f = open("dsl_grammar.lark")
dsl_grammar = grammar_f.read()
grammar_f.close()


class TreeToDict(Transformer):
    def game(self, s):
        return None

    def room(self, s):
        return None

    def connection(self, s: list[tuple[str, str]]) -> tuple[str, dict]:
        return ("connections", dict(s))

    def item(self, s: list) -> tuple[str, dict]:
        return ("ITEM", dict(s))

    def action(self, s):
        return None

    def ESCAPED_STRING(self, s):
        return None

    def property(self, s):
        return None

    def start_g(self, s: list[str]) -> tuple[str, str]:
        return ("start", s[0])

    def end_g(self, s: list[str]) -> tuple[str, str]:
        return ("end", s[0])

    def id(self, s: list[str]) -> tuple[str, str]:
        return ("id", s[0])

    def location(self, s: list[str]) -> tuple[str, str]:
        return ("location", s[0])

    def phrase(self, s: list[Token]) -> str:
        return ' '.join(s)

    # replaces both a single connection and a property with a tuple
    connection = tuple
    property = tuple


# reform the tree to fit the WDL syntax, calls the helper functions below
def reform_tree(transformed_tree):
    return None


# load a free-floating item/room property-value pair into an intermediate
# free-floating properties dictionary
def load_free_property(property):
    return None


# extract items from rooms and load them into a separate ITEMS dictionary
def extract_items(transformed_tree):
    return None


# place free-floating properties into their appropriate place
def place_free_properties(free_properties, transformed_tree):
    return None
```
The simple functions already have their bodies filled in, while the more complex ones are just skeletons with a generic `return None` placeholder. This design is very similar to the current implementation with the exception of a few functions specific to the actions using the transformed tree of basic features. A more detailed description of the functions is found in Michelle's explanation above.
username_1: Michelle and I met for the last time today and confirmed the design to be that of the comment above. The headers of the functions were slightly altered to accommodate the input and output specification. For example, `def start_g(self, s)` to `def start_g(self, s: list[str]) -> tuple[str, str]:`. Apart from that, we both agreed on the design to have those functions mentioned above in the `TreetoDict()` class and to have four functions that manipulate the transformed tree.
The implementation will come after this, building around the skeleton design. Documentation of these functions will be done as it is implemented, in case there are any slight changes to the design throughout the process. Lastly, a function that loads up the JSON file to the directory should be included once the intermediate stage is fully implemented.
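As a hedged sketch of how the pieces above would be wired together once implemented (the file name is a placeholder, and this assumes the grammar's start rule is named `start`):
```
# parse a DSL file, transform the tree, and dump the reformed result as JSON
parser = Lark(dsl_grammar)
with open("example.dsl") as f:
    tree = parser.parse(f.read())

intermediate = TreeToDict().transform(tree)
print(json.dumps(reform_tree(intermediate), indent=2))
```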
Status: Issue closed
username_2: Issue Score: ✔️++
Comments:
Great work on this issue! The documentation created is excellent, as well as the status updates! Be sure to associate issues with their proper milestone so they can be picked up by the grading script.
<!-- {"score": "check-plus-plus"} --> |
Iterable/swift-sdk | 429547118 | Title: Bitcode not working for Archive
Question:
username_0: Issue archiving with bitcode (for both main framework and extensions):
```
ld: bitcode bundle could not be generated because '/Frameworks/IterableAppExtensions.framework/IterableAppExtensions' was built without full bitcode. All frameworks and dylibs for bitcode must be generated from Xcode Archive or Install build file '/Users/max/Dropbox (Personal)/Developer/Salvo/Frameworks/IterableAppExtensions.framework/IterableAppExtensions' for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
Answers:
username_1: Can you please list steps to repro this issue?
username_0: Steps to reproduce:
1. Download the frameworks from the "releases" page on GitHub
2. Drag the frameworks into Xcode (as instructed, embedded binaries etc.)
3. Build in development mode; everything works fine
4. Archive for iOS App Store distribution (with Bitcode enabled)
5. See the above error
username_1: Thanks. We will take a look at this issue. In the meantime would you be able to try Cocoapods or Carthage?
username_0: Yeah, will try Carthage
username_1: I have attached different files to the release. Can you please check if this solves the problem?
username_1: I have attached new binary files to the `6.0.8` version release and I have validated that archiving is working using these binaries. I am closing this issue. Please reopen if the issue is not resolved for you.
Status: Issue closed
|
dougmoscrop/serverless-plugin-include-dependencies | 866291256 | Title: New Serverless Package Pattern Seems to Break Plugin
Question:
username_0: Hello,
I updated to a newer version of Serverless (2.38) in one of my projects recently, and received deprecation warnings advising about the change to packaging includes/excludes (replacing them with the `pattern` method) - https://www.serverless.com/framework/docs/deprecations#new-way-to-define-packaging-patterns
I try to stay on top of these deprecations, so I updated my repo to use the new method.
**Original Package Section**
```
package:
exclude: # Specify the directories and files which should be excluded in the deployment package
- ./**
include:
- package.json
- package-lock.json
- src/**
- '*.js'
- '*.html'
excludeDevDependencies: false # Recommended setting for serverless-plugin-include-dependencies
```
**New (Not-Working) Package Section**
```
package:
patterns:
- '!./**' #Excludes (!) all by default before including the files below
- 'package.json'
- 'package-lock.json'
- 'src/**'
- '*.js'
- '*.html'
excludeDevDependencies: false # Recommended setting for serverless-plugin-include-dependencies
```
Based on the deprecation warning this new `patterns` section should be the equivalent of the old `exclude` / `include`.
However, when I deployed with serverless, my `node_modules` folder disappeared entirely. With the "old" include/exclude the plugin worked successfully and included the right modules, but with the change to `patterns` it appears that this plugin is no longer functioning as expected. I was able to "fix" this and have my node_modules included again by disabling the plugin entirely, and changing my packaging section to the bwlo:
**New (Working) Package Section**
package:
patterns:
- '!./**' #Excludes (!) all by default before including the files below
- 'package.json'
- 'package-lock.json'
- 'src/**'
- '*.js'
- '*.html'
- 'node_modules/**'
excludeDevDependencies: true
Obviously this includes the full `node_modules` directory now, so I lose some of the space-saving benefits of the plugin.
Does the plugin need to be updated to account for the new `package` method from the framework?
Thank you |
unizar-30226-2019-08/FrontendMobile | 420244605 | Title: Search sliders
Question:
username_0: **Descripción**
Controladores deslizables para ajustar entero. Deben contemplar:
- [ ] Ajuste de un entero en un rango
- [ ] Callback en variación
- [ ] Título
- [ ] Unidades
**Complejidad**
Media
**Wireframe**

**Dependencies on other widgets**
None
**Useful Flutter widgets**
- [Slider](https://docs.flutter.io/flutter/material/Slider-class.html)
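A minimal Dart sketch of a widget meeting these requirements (the names and layout are assumptions, not the final design):
```dart
import 'package:flutter/material.dart';

class LabeledSlider extends StatelessWidget {
  final String title; // e.g. "Distance"
  final String units; // e.g. "km"
  final int value;
  final int min;
  final int max;
  final ValueChanged<int> onChanged; // callback on change

  const LabeledSlider({
    Key? key,
    required this.title,
    required this.units,
    required this.value,
    required this.min,
    required this.max,
    required this.onChanged,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        Text('$title: $value $units'),
        Slider(
          value: value.toDouble(),
          min: min.toDouble(),
          max: max.toDouble(),
          divisions: max - min, // integer steps (assumes max > min)
          onChanged: (v) => onChanged(v.round()),
        ),
      ],
    );
  }
}
```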
Status: Issue closed |
dotnet/roslyn | 176174136 | Title: Proposal: Simple code generation with textual, "preprocessor" macros
Question:
username_0: First, let me preface this by saying that I am a proponent of hygienic macros to eventually come to Roslyn, as well as powerful metaprogramming capabilities. I know that this is not that. But I also am trying to come up with a small, focused idea that could still solve real-world problems.
Many people think macros are only good for pseudo-obfuscating header trickery in C, etc. But this was inspired by a problem I had recently. Let's say that I have these two entities in a database layer of some sort.
```c#
public class Post {
[Key]
public Guid PostID { get; set; }
public string Title { get; set; }
}
public class Comment {
[Key]
public Guid CommentID { get; set; }
public Guid PostID { get; set; }
public string Text { get; set; }
}
```
If I want to do something similar for every such entity, the usual answers are interface/base class polymorphism and reflection.
Let's say I wanted to write this method for every entity:
```c#
public IReadOnlyDictionary<Guid, Post> GetAllPostsByID(DbContext ctx)
=> ctx.Posts.ToDictionary(p => p.PostID);
```
Interfaces and base classes won't let you do that - `PostID` is different from `CommentID`. Even if I invented this interface...
```c#
public interface IKeyed {
Guid ID { get; }
}
public class Post : IKeyed {
public Guid ID => PostID;
// ...
}
```
...it would end up doing the wrong thing - assuming Entity Framework, `ID` is not visible to the data model, and not translated to the right thing. Either the query fails, or Entity Framework ends up pulling down the whole contents and doing the `ToDictionary` locally.
Reflection is doable, but in this case it involves writing a method to dig through the type, find the property with the `KeyAttribute` and construct the lambda expression tree. This is many more minutes out of my time than just writing everything manually.
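For concreteness, roughly what that reflection approach looks like (a hedged sketch; the class and method names are mine, not from any library):
```c#
using System;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

static class KeyedQueries
{
    // Builds e => e.<KeyProperty> by locating the [Key]-annotated property at runtime.
    public static Expression<Func<T, Guid>> KeySelector<T>()
    {
        var keyProp = typeof(T).GetProperties()
            .Single(p => p.GetCustomAttribute<KeyAttribute>() != null);
        var param = Expression.Parameter(typeof(T), "e");
        return Expression.Lambda<Func<T, Guid>>(
            Expression.Property(param, keyProp), param);
    }
}
```
Returning an expression tree (rather than a compiled delegate) keeps the selector translatable by Entity Framework, but it is still extra runtime machinery for what the macro writes out at compile time.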
Writing manually is what we all end up doing. But little connector/adapter things like this are a big part of many programs. Those that have powerful tools for code generation will use them for this. But what is a simple subset of that, maybe enough to cover 80% of what you'd need a full tool for, would be in the box with C# too? So here is my proposal:
```c#
#macro ImplementGetAllByID(one, set, key)
public IReadOnlyDictionary<Guid, #{one}> GetAll#{set}ByID(DbContext ctx)
=> ctx.#{set}.ToDictionary(_ => _.#{key});
#endmacro
public partial class Wherever {
#expand ImplementGetAllByID(one:Post, set:Posts, key:PostID)
#expand ImplementGetAllByID(one:Comment, set:Comments, key:CommentID)
}
```
The compiler does not have any preprocessing stage, but at the stage just before `#if` and friends are evaluated, the macros are expanded too, by substituting the placeholders (`#{one}`; curlies stolen from string interpolation and `#` from "preprocessor", but if you call it a tribute to Ruby I won't disagree) with the values. Instead of paying a reflection tax for simple things like this, it is as if you just wrote the right code from the beginning - you just didn't have to do it manually.
[Truncated]
In fact, macros could be implemented with other macros:
```c#
#macro ImplementGetAllByIDSimple(name)
#expand ImplementGetAllByID(one:#{name}, set:#{name}s, key:#{name}ID)
#endmacro
public partial class Wherever {
#expand ImplementGetAllByIDSimple(name:Post)
#expand ImplementGetAllByIDSimple(name:Comment)
}
```
Expansion is exactly equal to replacing each placeholder `#{foo}` with the text given for the `foo` parameter. If it is a literal, it becomes a literal; if it is part of an identifier, it becomes an identifier. Parameter names are *mandatory*, to give good error messages when someone changes something.
Having something like this would allow many people to fall into "the pit of success" for this sort of problem. Here, success isn't doing reflection, which is running unnecessary code at run-time at a performance hit (reflection is the right tool where you need the dynamism); for the cases where abstractions are hard to make using the tools in C#'s playbook, success is code generation.
I want code generation to be a tool you can reach for to solve smaller problems, and not just something you break out for the really big ones. Imagine typing the macros in the editor and seeing it fill out the implementation for you in a collapsed `#region`. It's not meant to be a secret to you, it's just meant to save you the labor of typing and the danger of copying and pasting.
Even if nothing else, this proposal would be a very cool thing to see built as a tool you could plug in before the compiler runs.
Answers:
username_1: FWIW, this kind of scenario can be covered by T4 templates. While not directly inline with your code, they're designed to perform just this kind of template-based code generation. And there's ongoing work to make sure T4 works well with .NET Core.
(Shameless plug) If T4 isn't your thing, there's also [Scripty](https://github.com/username_1/Scripty). This lets you do code generation via C# scripts using the Roslyn Workspaces API and/or MSBuild Project API for project introspection. Similar idea to T4, but not template based and using C# code to drive the generation.
Granted, both of the above tools place your generation instructions inside separate files so they're not quite what you're describing here. Just wanted to make sure you were aware of these alternate approaches.
username_0: I was very impressed by Scripty last I saw it, much more so than by T4. The point of proposing this isn't that Scripty and tools like it aren't great for those problems, it's that it may feel like cognitive overkill for some other problems. As a way of comparison: C# 7 introduces local functions, even though you could already do most of that already. But making something more approachable and involve less ceremony definitely is worth doing for its own sake. (Scripty's ceremony isn't ceremony when it's used as intended, but there are cases where it would feel "too big", as you point out.)
Status: Issue closed
username_2: We are now taking language feature discussion in other repositories:
- https://github.com/dotnet/csharplang for C# specific issues
- https://github.com/dotnet/vblang for VB-specific features
- https://github.com/dotnet/csharplang for features that affect both languages
Features that are under active design or development, or which are "championed" by someone on the language design team, have already been moved either as issues or as checked-in design documents. For example, the proposal in this repo "Proposal: Partial interface implementation a.k.a. Traits" (issue 16139 and a few other issues that request the same thing) are now tracked by the language team at issue 52 in https://github.com/dotnet/csharplang/issues, and there is a draft spec at https://github.com/dotnet/csharplang/blob/master/proposals/default-interface-methods.md and further discussion at issue 288 in https://github.com/dotnet/csharplang/issues. Prototyping of the compiler portion of language features is still tracked here; see, for example, https://github.com/dotnet/roslyn/tree/features/DefaultInterfaceImplementation and issue 17952.
In order to facilitate that transition, we have started closing language design discussions from [the roslyn repo](https://github.com/dotnet/roslyn) with a note briefly explaining why. When we are aware of an existing discussion for the feature already in the new repo, we are adding a link to that. But we're not adding new issues to the new repos for existing discussions in this repo that the language design team does not currently envision taking on. Our intent is to eventually close the language design issues in the Roslyn repo and encourage discussion in one of the new repos instead.
Our intent is not to shut down discussion on language design - you can still continue discussion on the closed issues if you want - but rather we would like to encourage people to move discussion to where we are more likely to be paying attention (the new repo), or to abandon discussions that are no longer of interest to you.
If you happen to notice that one of the closed issues has a relevant issue in the new repo, and we have not added a link to the new issue, we would appreciate you providing a link from the old to the new discussion. That way people who are still interested in the discussion can start paying attention to the new issue.
Also, we'd welcome any ideas you might have on how we could better manage the transition. Comments and discussion about closing and/or moving issues should be directed to https://github.com/dotnet/roslyn/issues/18002. Comments and discussion about this issue can take place here or on an issue in the relevant repo.
-----------------
I have not moved this feature request to the csharplang repo because I don't believe it would ever rise in priority over other requests to be something we would ever do in any particular release. Moreover, it rubs against the C# philosophy which explicitly excluded macros. It is possible your use cases could be solved by https://github.com/dotnet/csharplang/issues/107 and if so, you are welcome to continue discussion there. |
DSpace/dspace-angular | 879300969 | Title: [Deque Analysis] Avoid using "text_muted", as it has insufficient color contrast on a white background
Question:
username_0: ## Deque Analysis Summary
The `text_muted` style is used in a number of scenarios, and its contrast is insufficient on a white background. A few of the scenarios where it is used include:
* Displays of objects in lists...often the displayed non-Title metadata (e.g. date, authors, etc) uses `text_muted`.
* Placeholder text in input fields uses `text_muted`.
The contrast error is common in all these scenarios:
"Element has insufficient color contrast of 2.99 (foreground color: `#959595`, background color: `#ffffff`). Contrast must be 4.5 or more."
1. Search Results (9 tickets)
* https://demo7.dspace.org/search?spc.page=1&query=test&spc.sf=score&spc.sd=DESC
* **Locations:**
* Date/Author metadata displayed in results uses `text_muted`
* Placeholder text in all input boxes (in filters sidebar & in search box itself) uses `text_muted`
* Deque issue ticket IDs: 468512, 468513, 468514, 468515, 468516, 468517, 468518, 468519, 468520, 468521, 468522, 468928
2. Header Search box / Login module (2 ticket)
* (on all pages)
* **Locations:** Placeholder text in search box & in email/password of login field use `text_muted`
* Deque issue ticket IDs: 469254, 469332
3. Browse by Comm/Coll (1 ticket)
* https://demo7.dspace.org/community-list
* **Locations:** Community/Collection descriptions use `text_muted`
* Deque issue ticket IDs: 470068
4. Homepage (3 tickets)
* https://demo7.dspace.org/home
* **Locations:** Placeholder text in search box, and Community/Collection descriptions use `text_muted`
* Deque issue ticket IDs: 469553, 469470, 469471
5. Submission (16 tickets)
* https://demo7.dspace.org/submit
* **Locations:**
* Drag & drop files text (at top)
* Hint text under all fields
* Placeholder text in input fields
* Labels/hints for edit file name/description (after uploading a file)
* Deque issue ticket IDs: 469504, 469507, 469509, 469511, 469513, 469515, 469517, 469520, 469522, 469523, 469525, 469527, 469529, 469530, 469977, 470006
6. Collection homepage (5 tickets)
* https://demo7.dspace.org/collections/0f58a9e3-254a-4a38-a6ea-8c1922762475
* **Locations:**
* Date/Author information in all Item lists
* Placeholder text in input fields (of Browse by Date, Title, Author, Subject
* Deque issue ticket IDs: 469638, 469639, 469640, 469641, 469642
7. Community homepage (2 tickets)
* https://demo7.dspace.org/communities/8b632938-77c2-487c-81f0-e804f63e68e6
* **Locations:**
* In the Settings dropdown, the "section header" for "Results Per Page" and "Sort Options" (this same settings dropdown appears other places as well & was also reported in "Browse by Author")
* Deque issue ticket IDs: 470092, 469381
8. Edit Collection (1 ticket )
* https://demo7.dspace.org/collections/0f58a9e3-254a-4a38-a6ea-8c1922762475/edit/metadata
* **Locations:** Text in the drag & drop box "Drop a Collection Logo to upload"
* Deque issue ticket IDs: 470081
9. MyDSpace (18 tickets)
* https://demo7.dspace.org/mydspace?configuration=workspace&spc.sf=score&spc.sd=DESC
* **Locations:** "No Title" text in item list, along with date/authors in item list
* Deque issue ticket IDs: 470125, 470126, 470127, 470128, 470129, 470130, 470131, 470132, 470133, 470134, 470135, 470136, 470137, 470138, 470139, 470140, 470141, 470142
## Recommended Fix
As discussed in recent meetings, we may want to move away from using `text_muted` at all (and recommend _against it_ in our style guide).
* For text which is secondary (e.g. dates, author text), we may wish to use the secondary color & ensure it has proper color contrast on a white background (must be 4.5 or greater). This means that if we want the secondary color to remain gray, it may need to minimally be `#757575`. However, other secondary colors are welcome, provided it meets AA compliance (AAA is not required) using this tool: https://dequeuniversity.com/rules/axe/4.1/color-contrast?application=axeAPI
* For placeholder text, Deque recommended considering making it _italics_ rather than making it appear lighter via `text_muted`. However, obviously we should ensure once text is typed in the box, it is no longer in _italics_.
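To illustrate the placeholder recommendation above, a hedged CSS sketch (the selector scope and exact color are assumptions, not decided values):
```css
/* Italicize placeholder text instead of relying on a light gray;
   #757575 is the minimal AA-compliant gray mentioned above.
   ::placeholder only styles the hint, so typed text stays upright. */
::placeholder {
  font-style: italic;
  color: #757575;
}
```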
## More Information / Tools
* As necessary, see mentioned Issue IDs in our assessment results (requires login): https://axeauditor.dequecloud.com/test-run/0856438a-a19a-11eb-bc31-b7d5be387c86/issues
* Consider using [Deque's free Chrome plugin](https://chrome.google.com/webstore/detail/axe-devtools-web-accessib/lhdoppojpmngadmnindnejefpokejbdd?hl=en-US) to check your work
* Deque's color contrast tool: https://dequeuniversity.com/rules/axe/4.1/color-contrast?application=axeAPI
Answers:
username_1: Something to think about for the future: perhaps we have this problem because the class is *incomplete*. IMHO "muted text" should define both the foreground and the background color, and should not be a class itself but a consideration when designing a class. This suggests to me that we should be thinking in terms of the basic functional blocks of a page and defining classes that style everything which goes into them.
username_0: @username_1 : `text_muted` isn't a concept we (DSpace) created. It's something we are using from Bootstrap: https://getbootstrap.com/docs/4.0/utilities/colors/#color So, I'm just noting here that *Bootstrap's* `text_muted` class shouldn't be used, as it isn't sufficient on a white background (which we use). While I get your point, this isn't a problem we can really solve ourselves...Bootstrap only has one class of this name, and ideally, it would have separate classes for dark vs light backgrounds.
username_0: @username_2 : Assigning this to you for you/your team to investigate (as it sounded like you had an interest in replacing our usages of `text-muted` with something like `text-secondary` or similar).
Please also keep in mind that, fixing this ticket will require updating the style of "placeholder" text in form fields (as it widely uses `text-muted`), but you should not worry about fixing the color of the form fields themselves (as that will be handled by #1151).
username_1: I won't drag this thread out, but I did understand that `text-muted` is from Bootstrap. "Muted text" is not; it's a styling concept, and its meaning is dependent on the styling of adjacent and related elements. In the long term we *can* solve this ourselves, at the cost of less dependence on Bootstrap. (But not in this ticket, so I'll quiet down now.)
username_2: The default value Bootstrap uses for $gray-600 (which is what text-muted uses) does have enough contrast against a white background to pass. I propose we stick to Bootstrap's default gray values in the base and dspace themes. That will fix this issue and likely a few of the other contrast tickets as well.
This can be done in 1 hour.
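A minimal sketch of what that could look like in a theme's Sass variables (the file location is assumed; #6c757d is Bootstrap 4's default $gray-600, which passes 4.5:1 on white):
```scss
// drop any lighter custom override and fall back to Bootstrap's default
$gray-600: #6c757d;
$text-muted: $gray-600;
```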
username_0: Sounds good @username_2 and thanks for digging into that. That sounds like a clean solution. Adding 1 hour estimate & assigning to you for your team to fix.
Status: Issue closed
|
amcharts/amcharts5 | 1072151947 | Title: Category Axes last line missing
Question:
username_0: When either the X or Y axis is set to a category axis, it loses its last grid line.
This is how it looks with both axes of value type:

When one of them is a category axis, this is how it starts to look:

And when the other one is a category axis too:

As you can see, from the Marketing category to the end of the axis there is no line, unlike when this axis was a value axis.
Is there any property I can set to bring this line back?
Answers:
username_1: There are a few ways to go about it.
1) Add an [axis range](https://www.amcharts.com/docs/v5/charts/xy-chart/axes/axis-ranges/):
```JavaScript
var xAxis = chart.xAxes.push(am5xy.CategoryAxis.new(root, {
categoryField: "category",
renderer: am5xy.AxisRendererX.new(root, {})
}));
var rangeDataItem = xAxis.makeDataItem({
category: data[data.length - 1].category
});
var range = xAxis.createAxisRange(rangeDataItem);
rangeDataItem.get("grid").setAll({
stroke: am5.color(0x000000),
strokeOpacity: 0.15,
location: 1
});
```
https://codepen.io/team/amcharts/pen/554ecde6988fe5f4d065d4e1d3a71c04?editors=0010
2) Add an "empty" data point / category to data and make + use `endLocation` on the axis:
```JavaScript
var data = [{
category: "2021",
value: 2.5
}, {
category: "2022",
value: 2.6
}, {
category: "2023",
value: 2.8
}, {
category: ""
}]
var xAxis = chart.xAxes.push(am5xy.CategoryAxis.new(root, {
categoryField: "category",
renderer: am5xy.AxisRendererX.new(root, {}),
endLocation: 0
}));
xAxis.data.setAll(data);
```
https://codepen.io/team/amcharts/pen/25494cb869606b5f806a3a172540f9ce?editors=0010 |
kendraio/kendraio-app | 500901801 | Title: ACL
Answers:
username_1: https://github.com/tensult/role-acl#readme
Example policy form based on the ACL format from the above library. Attempts to merge the best parts of RBAC and ABAC. It uses Role-based access control with added attributes and conditions.
https://dev.app.kendra.io/form-builder?data=<KEY>
username_1: Mock up of form for editing/creating ACL policy
https://dev.app.kendra.io/form-builder?data=<KEY>Ki<KEY>
username_1: Casbin as interoperable access control library?
https://casbin.org/docs/en/overview
Example app in js:
https://github.com/Jarvie8176/casbin-example/blob/master/docs/guide.md
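For a quick sense of the API, a hedged sketch of an enforcement check with node-casbin (the model/policy file names are placeholders):
```javascript
const { newEnforcer } = require('casbin');

// load an access model and policy, then ask whether a subject may act on an object
async function canEdit(user) {
  const enforcer = await newEnforcer('model.conf', 'policy.csv');
  return enforcer.enforce(user, 'article', 'edit'); // resolves to true/false
}
```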
username_1: part of #84
username_1: @username_0 I'm unassigning myself for now, as this is no longer in scope for the work on the Bloomen adapter. |
pillarjs/path-to-regexp | 832962533 | Title: Global route syntax causes parse error
Question:
username_0: | ^
158 | };
159 |
160 | const consumeText = (): string => {
at mustConsume (src/index.ts:157:11)
at parse (src/index.ts:228:5)
at stringToRegexp (src/index.ts:495:25)
at Object.pathToRegexp (src/index.ts:617:10)
at Suite.<anonymous> (src/index.spec.ts:2804:33)
at src/index.spec.ts:2802:7
at Array.forEach (<anonymous>)
at Suite.<anonymous> (src/index.spec.ts:2799:11)
at Suite.<anonymous> (src/index.spec.ts:2798:3)
at Object.<anonymous> (src/index.spec.ts:2705:1)
```
See Express documentation for global/wildcard route definitions here:
https://expressjs.com/en/4x/api.html#router.methods
A PR was opened here to demo the bug: https://github.com/pillarjs/path-to-regexp/pull/245
Answers:
username_1: The wildcard is not a supported feature for the version you’re testing. Are you sure it’s the same error as express? Express uses the 0.1.x branch of code, so a test on master isn’t related. If it’s from express, have you done something to force a more recent version of this package by changing resolution somewhere?
username_1: See also: https://github.com/pillarjs/path-to-regexp#compatibility-with-express--4x
username_0: Thanks for the info. I didn't see the compatibility docs. I'm using these routes in an Express v4.x app, and was expecting this library to mirror the functionality.
I see now that this wildcard route is not supported here. Curious if you have any recommendations for me -- should I use the Express v5-alpha, or avoid using route syntax that is incompatible with this lib?
Thanks for the quick response ❤️
username_0: Also, has Express stated this behavior has a bug, or only this library?
For example, in their docs they show this route:
```js
router.all('/api/*', requireAuthentication)
```
username_1: This isn't a bug, just a feature change. It might be flagged somewhere, and it's something that could be added back. Opting in to express 4.x compatible parsing probably wouldn't be possible though, there's some really iffy behavior such as regexp characters being valid anywhere in the string that would cause problems. |
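For later readers, a hedged sketch of expressing the Express-4-style wildcard against the current API (an explicit regex group replaces the bare `*`; exact behavior may vary by version):
```javascript
const { pathToRegexp } = require('path-to-regexp');

// "/api/*" from Express 4 is not parsed here; a capture group matches the same paths
const re = pathToRegexp('/api/(.*)');
console.log(re.test('/api/users/42')); // true
```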
angular-eslint/angular-eslint | 752115512 | Title: [feat] angular templates: lint expressions in html
Question:
username_0: Hi,
I added the default eslint eqeqeq rule to the *.component.html override as stated below
```js
{
  files: ['*.component.html'],
  parser: '@angular-eslint/template-parser',
  plugins: ['@angular-eslint/template'],
  rules: {
    eqeqeq: ['error', 'smart'],
    '@angular-eslint/template/banana-in-box': 'error',
    '@angular-eslint/template/no-negated-async': 'error',
  },
},
```
Doing so, I thought I could lint things like `<div class="col-4" *ngIf="isNewEmployee == 0">` => `<div class="col-4" *ngIf="isNewEmployee === 0">`
Unfortunately, this doesn't work. I guess that expressions in Angular templates cannot be linted by ESLint.
Is this a feature that could be added? For TS files everything is completely set up in ESLint, so it is unfortunate that TS/JS code in Angular templates can slip past all the quality checks we have set.
Thank you in advance.
Answers:
username_1: Is this possible today in TSLint/Codelyzer?
username_0: Hi James,
From the top of my head, I would say "No, it is not included in TSLint", though my experience with TSLint is limited. Is existence in TSLint/Codelyzer a prerequisite for asking about a feature?
Kind regards
<NAME>gel
username_1: No, but naturally parity with the existing solution is the top priority so it’s useful context
username_2: I'm not sure if this is related but...
I'm trying to get `eslint-plugin-jsx-a11y` to work but no luck yet (even after configuring ESLint in VSCode to validate `.html` files).
Do you think what I'm doing is related to this issue?
```json
{
"files": ["*.html"],
"plugins": ["jsx-a11y"],
"extends": ["plugin:@angular-eslint/template/recommended", "plugin:jsx-a11y/recommended"]
}
```
username_1: @username_2 JSX and HTML are really very different things and so rules intended to run on JSX will not work on HTML.
username_2: 😅 First time working on Angular and I totally missed that... though it's in the name of the package.
Thanks.
username_1: Hey folks, I'm going to close this one as it is simply not feasible to implement.
The language supported in Angular HTML templates is not the same as the JavaScript/TypeScript we know and love in our other source files. This means entirely bespoke tooling would need to be created to analyze it separately from what we do today for Components, Services etc - the existing tooling simply cannot cover this.
Additionally, the custom subset of the ECMAScript language that Angular supports within its templates is an ever-evolving thing and it would be essentially impossible for folks outside the Angular compiler team to keep up with, never mind build tooling for:
https://github.com/angular/angular/issues/43485
Just look at the number of things that the compiler currently doesn't support in templates but may or may not support soon.
We will all need to rely on Angular compiler (and by extension, the Angular language service in IDEs) checks (which it is getting better and better at providing) for protecting us against potential issues and inconsistencies in our Angular template expressions.
Status: Issue closed
username_1: FWIW even though we cannot do this in a generic way using existing rules - we will definitely consider various specific rules that can be built for Angular HTML template ASTs, for example this one which has been proposed: https://github.com/angular-eslint/angular-eslint/issues/599 |
mehtadone/PTFeeder | 299913776 | Title: [REQ] Buy strategy override enhancement
Question:
username_0: Allow for an "AND" condition to combine offset groupings for a buy strategy override.
For example: [If Price is < 0.1 AND Volume < 10000 AND PriceTrendChange < 5], apply buy strategy override.
For a given buy strategy override, have an option to disable applying existing BuyValue offsets and/or be able to apply a different set of BuyValue offsets (i.e. user may want to adjust offsets when a buy strategy changes from EMAGAIN to LOWBB for example.)
Status: Issue closed
Answers:
username_0: Is this still possible from a development perspective? Overrides would be much more useful if checking for two or more criteria/groupings before applying a buy strategy change.
For example, if short-term price trend AND long-term price trend are above/below set levels, apply a more aggressive or more conservative buy strategy.
username_1: Undocumented, but try this on Feeder 1.5:
```
"MTradeGrouping": {
"Condition": "[pair.LongerTermTrendPercentageChange] > [config.MinPriceChange] && [pair.PercentageChange] > [config.MinPriceChangeLongTerm]",
"Configs": [
{
"MinPriceChange": "0.5",
"MinPriceChangeLongTerm": "0.5", //rsi
"ASellValueOffset": "-50", // gain
"Override": {
"ABuyStrategy": "SMACROSS",
"Weight": "60"
}
}
]
},
```
Feel free if to message if you want more info on how this works.
username_0: Thanks, I will experiment with this.
Are the "MinPriceChange" and "MinPriceChangeLongTerm" terms just created ad hoc within this config?
username_1: Yes, in the condition, anything that is config.XXXX, the XXXX needs to be in the config
username_0: Thanks, the logic appears to be functioning.
Is there any way to ignore all offsets when applying a buy strategy override?
When applying a new strategy based on specific conditions like this I don't necessarily want any of the offsets I have setup for EMAGAIN or EMASPREAD to be applicable to a LOWBB or STOCH buy strategy for example.
username_1: There is no way to do that but I'd probably suggest using overrides for anything buy value related in this case, not offsets. Would that work?
username_0: I'm not sure it would. I have PTFeeder setup to behave under normal conditions with a set strategy, applying various offsets to buy_value depending on market conditions, price trends and volatility levels, relying on their 'interactions' with each other.
The idea here was to apply a completely different entry strategy if certain set, specific criteria/conditions are met, and so most or all of the defined offsets for the original strategy would not be necessary or applicable. (e.g. I wouldn't want to apply the volatility offsets defined for the original strategy to the override strategy)
I will think of a workaround. |
serde-rs/serde | 472256155 | Title: Serializing variant types requires static variant string
Question:
username_0: I'm trying to implement [luxem](https://gitlab.com/username_0/luxem) and having an intermediate type-less representation like Value in JSON is useful in various workflows. luxem has a "type" construct to represent variants and this is also explicitly representable in Value.
However, passing Value back into a serializer is a problem because the serialize_*_variant methods require the variant name be a string with static lifetime, but the `Value::Typed(String, Value)` element which contains the variant name is not generally static. I presume the original intent is that the variant types would only refer to actual Rust type names which would be generated as static constants, but I'm not sure how to handle this in a way that doesn't force some un-collectable global storage.
Is there any way the variant string lifetime could be reduced to that of the serializer perhaps? Or is there another solution that you could suggest here?
Answers:
username_1: This is intentional in the serde data model. There is a tension between what we require Serialize impls to guarantee and what we permit Serializer impls to assume. Data that is "tagged" with a non-static string does not qualify as a serde enum and its Serialize impl would need to pick a different serde data model representation for that data.
Status: Issue closed
username_0: Thanks for taking a look! This explanation seems circuitous, but perhaps I missed some nuance. Would you mind elaborating? Is this a just sort of "this would be a breaking change and our design is finalized at this point" thing?
username_1: That isn't it. To elaborate slightly, when I say "this is intentional in the serde data model" it means those APIs require static strings on purpose. Whether there were breaking changes being made or not, we would want static strings there. When I say "there is a tension between what we require Serialize impls to guarantee and what we permit Serializer impls to assume" it means that in this aspect of the API adding freedom to Serialize impls by removing the static bounds implies taking away freedom from Serializer impls which makes some kinds of Serializers impossible to write.
username_0: First of all, thanks for engaging with me on this.
Okay, right, maybe I complicated things by suggesting reducing the lifetime "to that of the serializer." It still seems strange to me that you'd need to hold on to the variant name for longer than it takes to write it to the stream or copy to some internal storage.
I thought you might be referring to FlatBuffers which for zero copy would require the data to live until the kernel's finished putting it on the wire, but the FlatBuffers impl copies the data to internal storage so that doesn't seem to be the case. Do you have an example of a class of Serializer that relies on this?
I'm assuming by impossible you mean "at current speeds" because any implementation that needed a longer lifetime for the variant name could always create an owned copy.
username_1: Here is one example I found of a Serializer that leverages the static bound:
https://docs.rs/oauth2/3.0.0/oauth2/helpers/fn.variant_name.html
Your comment about "at current speeds" is reflective of the tradeoff between what Serialize impls must guarantee and what Serializer impls may assume. While it's possible to make different sets of choices which result in Serializer impls being required to run slower, or being required to drop support for no_std by allocating, but those choices wouldn't be better overall.
username_0: So basically this optimization choice is part of the design and that part of the design is final? If that's the case, fine. I'm not going to twist your hand.
My insistence on this issue mainly revolves around the fact that rejecting a class of (already primarily supported) use cases in favor of an optimization that is not actually used seems a little strange.
Dynamic data (via `Value`) is a feature of almost every major Serde crate. The YAML and RON crates have fairly visible omissions from their `Value` types due to this restriction, and I'm sure there are other data formats with explicit types similarly restricted. I can't imagine you'd say "we intentionally support all first class dynamic data _except_ variants."
Similarly it's not like serializers can't already deal with non-static strings. Variant names are just another string data and every Serde crate I'm aware of is already able to handle non-static strings as a primary data type... using static buffers or global allocators or whatever. So I don't believe `no_std` is an obstacle here either.
I realize that this _is_ in fact an entrenched decision at this point, but I'd appreciate it if this would be seriously considered for the next major release or whatever the next opportunity for breaking design changes presents itself. |
satijalab/seurat | 470113921 | Title: Gene expression difference of one cluster between two conditions
Question:
username_0: I want to find the expression difference between two conditions, "STIM" vs "CTRL", of one cluster named `0` or "CD14 Mono". The official pipeline is [there](https://satijalab.org/seurat/v3.0/immune_alignment.html). I need the significantly different expression of RNA between two conditions in only one clusters (rather than comparing to the rest clusters).
The function I think Seurat provided is the following:
"
_# Find protein markers for all clusters, and draw a heatmap
adt.markers <- FindAllMarkers(cbmc.small, assay = "ADT", only.pos = TRUE)
DoHeatmap(cbmc.small, features = unique(adt.markers$gene), assay = "ADT", angle = 90) + NoLegend()_
"
However, as I tried many ways such as subset(), SplitObject(), and then merge(), I cannot perform the FindAllMarkers() functions. Look forward to your early reply.
Status: Issue closed
Answers:
username_1: You can perform differential expression between any two groups of cells using the `FindMarkers` function and setting the `ident.1` and `ident.2` arguments. As shown in the [immune alignment](https://satijalab.org/seurat/v3.0/immune_alignment.html) vignette, you can combine the cluster and treatment information to create a new set of cell identities, and then find differentially expressed genes within a cluster between treatment groups.
Relevant lines from the vignette:
```r
immune.combined$celltype.stim <- paste(Idents(immune.combined), immune.combined$stim, sep = "_")
Idents(immune.combined) <- "celltype.stim"
b.interferon.response <- FindMarkers(immune.combined, ident.1 = "B_STIM", ident.2 = "B_CTRL")
```
username_0: Thanks for your quick reply.
I changed the code for my own program, but I get this error from the FindMarkers() function:
```
Error in FindMarkers.default(object = data.use, slot = data.slot, counts = counts, :
  No features pass logfc.threshold threshold
```
Original:
```r
immune.combined$celltype.stim <- paste(Idents(immune.combined), immune.combined$stim, sep = "_")
immune.combined$celltype <- Idents(immune.combined)
Idents(immune.combined) <- "celltype.stim"
b.interferon.response <- FindMarkers(immune.combined, ident.1 = "B_STIM", ident.2 = "B_CTRL", verbose = FALSE)
head(b.interferon.response, n = 15)
```
Changed:
```r
ME.subset.combined$celltype.Sex <- paste(Idents(ME.subset.combined), ME.subset.combined$Sex, sep = "_")
ME.subset.combined$celltype <- Idents(ME.subset.combined)
Idents(ME.subset.combined) <- "celltype.Sex"
Cluster.Sex.Difference <- FindMarkers(ME.subset.combined, ident.1 = "0_ME_MALE", ident.2 = "0_ME_FEMALE", verbose = FALSE)
#> Error in FindMarkers.default(object = data.use, slot = data.slot, counts = counts, :
#>   No features pass logfc.threshold threshold

head(ME.subset.combined$celltype.Sex)
#> AAACCCAAGGTTACCT_1 AAACCCACAATAGTGA_1 AAACCCACATCACGGC_1
#>        "0_ME_MALE"        "0_ME_MALE"        "0_ME_MALE"
#> AAACGAAAGGTAAACT_1 AAACGAACACGTCATA_1 AAACGAAGTCGAGTTT_1
#>        "0_ME_MALE"        "0_ME_MALE"        "0_ME_MALE"

head(Cluster.Sex.Difference, n = 15)
```
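As a hedged troubleshooting note (not from the thread): the error means no genes cleared the default fold-change pre-filter, so relaxing it confirms whether the comparison itself runs:
```r
# logfc.threshold defaults to 0.25; setting it (and min.pct) to 0 disables
# the pre-filtering so the test is run on all genes
Cluster.Sex.Difference <- FindMarkers(
  ME.subset.combined,
  ident.1 = "0_ME_MALE",
  ident.2 = "0_ME_FEMALE",
  logfc.threshold = 0,
  min.pct = 0
)
```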
keptn/keptn | 504681646 | Title: Keptn CLI: install option to roll-out a minimum set of services required for quality gates standalone
Question:
username_0: The `keptn install` command has to be extended to roll-out a minimum set of services required for quality gates standalone.
Answers:
username_1: We decided in the sprint planning that we will adapt the installer at the very end of the release (e.g., remove gatekeeper, remove jmeter). The quality-gates use case should work as it is right now (obviously the new evaluation service needs to be installed).
Status: Issue closed
username_1: Closed as of Sprint Planning on Oct 24th.
We will use a different behaviour (delete the distributor) |
JoeMayo/LinqToTwitter | 382079782 | Title: When I used https://github.com/JoeMayo/LinqToTwitter for user auth, I am not getting any webhooks
Question:
username_0: I created app on account @ etechice
app name = ice_dev
environment = dev
I used the [account-activity-dashboard](https://github.com/twitterdev/account-activity-dashboard) for webhook registration and user auth.
Whenever I authorize a user with that app, I receive all the webhook events (direct messages).
But when I authorize the user with LinqToTwitter in .NET using the same app credentials, I am not getting any webhook events for that user.
In both cases the webhook endpoint is the same.
Please help me out here.
Answers:
username_0: https://twittercommunity.com/t/when-i-used-https-github-com-joemayo-linqtotwitter-for-auth-user-i-m-not-getting-any-webhook/116835
username_0: Issue resolved.
I was missing one step: `POST account_activity/all/:env_name/subscriptions`
[https://developer.twitter.com/en/docs/accounts-and-users/subscribe-account-activity/api-reference/aaa-premium#post-account-activity-all-env-name-subscriptions](https://developer.twitter.com/en/docs/accounts-and-users/subscribe-account-activity/api-reference/aaa-premium#post-account-activity-all-env-name-subscriptions)
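For reference, a hedged sketch of that subscription call (the environment name and the OAuth header are placeholders; the request must be signed with the subscribing user's access token):
```
curl --request POST \
  --url "https://api.twitter.com/1.1/account_activity/all/dev/subscriptions.json" \
  --header "authorization: OAuth <user-context credentials>"
```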
Status: Issue closed
|
prestodb/presto | 934010369 | Title: Flaky test TestHiveDistributedJoinQueriesWithDynamicFiltering.testLimitWithJoin
Question:
username_0: Failed run: https://github.com/prestodb/presto/pull/16357/checks?check_run_id=2947455396
Error: Failures:
Error: TestHiveDistributedJoinQueriesWithDynamicFiltering>AbstractTestJoinQueries.testLimitWithJoin:96 expected row missing: [null, 44995]
All 15000 rows:
[37505, 37505]
[37507, 37507]
[37542, 37542]
[37570, 37570]
[37572, 37572]
[37574, 37574]
[37600, 37600]
[37602, 37602]
[37604, 37604]
[37606, 37606]
[37632, 37632]
[38438, 38438]
[38466, 38466]
[38470, 38470]
[38501, 38501]
[38503, 38503]
[38531, 38531]
[38533, 38533]
[38594, 38594]
[38598, 38598]
[38624, 38624]
[38626, 38626]
[38628, 38628]
[38630, 38630]
[38656, 38656]
[38693, 38693]
[38695, 38695]
[38721, 38721]
[38723, 38723]
[38727, 38727]
[38753, 38753]
[38755, 38755]
[38757, 38757]
[38785, 38785]
[38820, 38820]
[38822, 38822]
[38848, 38848]
[38850, 38850]
[38852, 38852]
[38854, 38854]
[38880, 38880]
[38887, 38887]
[38919, 38919]
[38945, 38945]
[38947, 38947]
[38951, 38951]
[38977, 38977]
[38981, 38981]
[39008, 39008]
[39009, 39009]
Expected subset 10 rows:
[null, 44995]
[null, 45025]
[null, 45191]
[null, 45223]
[null, 45281]
[null, 45314]
[null, 45379]
[null, 45381]
[null, 45570]
[null, 45607] |
wlwl2/time-converter | 588204142 | Title: Time zone heuristics
Question:
username_0: - US Central Time
- UTC
Answers:
username_0: The current time zones are from tzdb-2019c -> zone1970.tab.
(https://www.iana.org/time-zones)
username_0: See #3.
username_0: Custom US time zones have been added. Options like UTC or country names are possibly next in line to be added.
username_0: UTC, GMT and Greenwich Mean Time. |
FasterXML/jackson-module-scala | 11474992 | Title: Case Classes with Option[Long] parameters don't deserialize properly
Question:
username_0: It seems that for small numbers, an Option[Long] in a case class actually gets deserialized with a runtime type of int. This can then manifest in a couple ways. Maps using these values as keys may not behave as expected, and conversions to Java Longs will throw a ClassCastException:
[info] java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
[info] at scala.runtime.BoxesRunTime.unboxToLong(Unknown Source)
This only seems to affect case classes. Option[Long] by itself seems to behave correctly.
Answers:
username_1: What is the workaround if your case class is generated for you? I don't have access to apply the annotation.
username_2: @christophercurrie is it possible to create an optional module that uses what finatra has to work around this? It looks like they use the scala signature parsing https://github.com/twitter/finatra/tree/develop/jackson/src/main/scala/com/twitter/finatra/json/internal/caseclass/jackson
Having to annotate works fine, but it's problematic when you are trying to distribute a model class (for example) which now pulls in all of jackson-databind. Had the annotation been in jackson-annotations it would be less problematic, but leaking a dependency like that really sucks.
I'd be really open to having a separate module (if we don't want to include that in the main one) that provided for option[long] parsing via signature parsing in the interim, until the deprecation comes around for 2.10> |
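For later readers, a sketch of the annotation workaround being discussed (the field names here are illustrative):
```scala
import com.fasterxml.jackson.databind.annotation.JsonDeserialize

// forces the Option's content to be boxed as java.lang.Long,
// so small values are not materialized as Integer
case class Payload(@JsonDeserialize(contentAs = classOf[java.lang.Long]) id: Option[Long])
```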
ucb-bar/chipyard | 1056710479 | Title: Difficulties punching out a signal to tie to an LED
Question:
username_0: <!--
this type of issue is more for "how-tos", understanding chipyard, etc.
if you find an error or issue with chipyard, please use the "Bug Report Issue".
-->
I have this signal I'd like to tie to an LED. In my trait, I connect the signal to an `IO(Output(Bool()))`.
```scala
val myModuleOpt = p(myModuleKey).map { k =>
// More code here
val led = InModuleBody {
val led = IO(Output(Bool()))
led := myLazyModule.module.led
led
}
led
}
```
I have the harness pin tied to led0 in my harness implementation:
```scala
val led_io = IO(Output(Bool()))
_outer.xdc.addPackagePin(led_io, "AT32")
_outer.xdc.addIOStandard(led_io, "LVCMOS12")
```
I then try to connect the two using io and harness binders:
```scala
class WithLEDIO extends ComposeIOBinder({
(system: CanHaveMyModule) => {
val led : Seq[Bool] =
system.myModuleOpt.map({ n =>
val led = IO(Output(Bool()))
led := n
led
}).toSeq
(led, Nil)
}
})
```
```scala
class WithLED extends ComposeHarnessBinder({
(system: CanHaveMyModule, th: BaseModule with HasHarnessSignalReferences, led: Seq[Bool] ) => {
th match { case vcu118th: myVCU118FPGATestHarnessImp => {
vcu118th.led_io := led.head // also tried led(0)
}}
}
})
```
But this causes a null pointer exception in the harness binder. I've managed to punch out signals like this before, but only when placed in a bundle and wrapped in ClockedIO. Is it possible to punch out a single signal like this? Do I need to use `ComposeIOLazyBinder`? I'll keep looking over the ComposeIOBinder implementation and see if I can piece together what I need to do. Any help is appreciated!
Answers:
username_1: Try 'led.foreach { l => vcu118th.led := l }'
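Spelled out against the binder from the original post, that might look like this (a rough sketch reusing the names above; untested):
```scala
class WithLED extends ComposeHarnessBinder({
  (system: CanHaveMyModule, th: BaseModule with HasHarnessSignalReferences, led: Seq[Bool]) => {
    th match { case vcu118th: myVCU118FPGATestHarnessImp =>
      // foreach is a no-op on an empty Seq, so nothing is driven when
      // myModuleOpt is not defined in the config
      led.foreach { l => vcu118th.led_io := l }
    }
  }
})
```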
username_0: I think it has to do with the fact that
```scala
_outer.topDesign match { case d: HasTestHarnessFunctions =>
d.harnessFunctions.foreach(_(this))
}
```
in `VCU118FPGATestHarnessImp` evaluates before `led_io` is instantiated...
username_1: Hmm, can you paste the exception message?
There is a code smell here though, in that the IOBinder/HarnessBinder stuff expects that the diplomatic graph should not extend beyond the DUT, but the VCU118 TH violates that. Not sure if this is the cause of what you are seeing though.
username_0: ```scala
[error] java.lang.NullPointerException
[error] ...
[error] at chipyard.fpga.vcu118.Prototype.WithLED$$anonfun$$lessinit$greater$12.$anonfun$new$34(HarnessBinders.scala:159)
[error] at chipyard.fpga.vcu118.Prototype.WithLED$$anonfun$$lessinit$greater$12.$anonfun$new$34$adapted(HarnessBinders.scala:157)
[error] at chipyard.harness.ComposeHarnessBinder$$anonfun$$lessinit$greater$4.$anonfun$new$1(HarnessBinders.scala:73)
[error] at chipyard.harness.ComposeHarnessBinder$$anonfun$$lessinit$greater$4.$anonfun$new$1$adapted(HarnessBinders.scala:71)
[error] at chipyard.harness.HarnessBinder$$anonfun$$lessinit$greater$2$$anonfun$apply$2.$anonfun$applyOrElse$1(HarnessBinders.scala:56)
[error] at chipyard.harness.HarnessBinder$$anonfun$$lessinit$greater$2$$anonfun$apply$2.$anonfun$applyOrElse$1$adapted(HarnessBinders.scala:49)
[error] at chipyard.harness.ApplyHarnessBinders$.$anonfun$apply$1(HarnessBinders.scala:40)
[error] at chipyard.harness.ApplyHarnessBinders$.$anonfun$apply$1$adapted(HarnessBinders.scala:39)
[error] at scala.collection.Iterator.foreach(Iterator.scala:941)
[error] at scala.collection.Iterator.foreach$(Iterator.scala:941)
[error] at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
[error] at scala.collection.IterableLike.foreach(IterableLike.scala:74)
[error] at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
[error] at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
[error] at chipyard.harness.ApplyHarnessBinders$.apply(HarnessBinders.scala:39)
[error] at chipyard.fpga.vcu118.VCU118FPGATestHarnessImp.<init>(TestHarness.scala:137)
[error] at chipyard.fpga.vcu118.Prototype.AXI4PCIeVCU118FPGATestHarnessImp.<init>(TestHarness.scala:227)
[error] at chipyard.fpga.vcu118.Prototype.AXI4PCIeVCU118FPGATestHarness.module$lzycompute(TestHarness.scala:224)
[error] at chipyard.fpga.vcu118.Prototype.AXI4PCIeVCU118FPGATestHarness.module(TestHarness.scala:224)
[error] at chipyard.fpga.vcu118.Prototype.AXI4PCIeVCU118FPGATestHarness.module(TestHarness.scala:170)
[error] at freechips.rocketchip.stage.phases.PreElaboration.$anonfun$transform$1(PreElaboration.scala:38)
[error] ... (Stack trace trimmed to user code only, rerun with --full-stacktrace if you wish to see the full stack trace)
```
If I make `led_io` a `lazy val` or move it to before the Harness Binders are applied in `TestHarness.scala` in `vcu118` (which I don't really want to do for maintainability reasons), I instead get:
```scala
[error] LazyModule.scala:278: Unable to name port Bool(IO <UNNAMED> in AXI4VCU118DigitalTop) in chipyard.fpga.vcu118Prototype.AXI4DigitalTopModule@23368e41, try making it a public field of the Module in class freechips.rocketchip.diplomacy.LazyModuleImpLike
[error] There were 1 error(s) during hardware elaboration.
[error] chisel3.internal.ChiselException: Fatal errors during hardware elaboration
[error] ...
[error] ... (Stack trace trimmed to user code only, rerun with --full-stacktrace if you wish to see the full stack trace)
Exception: sbt.TrapExitSecurityException thrown from the UncaughtExceptionHandler in thread "run-main-0"
```
I'd add `--full-stacktrace`, but I'm having a hard time passing that to the `run_sbt` stuff in the makefile.
username_0: My feeling is bundle bridge sources might resolve this, but I'm still quite unfamiliar with them at this time.
username_1: Are you sure `myModuleOpt.isDefined`? Are you building a config where it is set to `None`?
What happens if you do
```
if (led.size > 0) vcu118th.led_io := led.head
```
username_0: I'm certain myModuleOpt is defined. `led.size` is greater than 1. I think `led.head` isn't the null pointer. It's `vcu118th.led_io` that's throwing the exception (unless `led_io` is lazy, which works, but then there's the naming issue)
username_1: It is strange for `vcu118th.led_io` to be causing the null pointer exception, since it is just normal Chisel hardware in the vcu118th LazyModuleImp definition.
Did you place the declaration of `led_io` before the call to `ApplyHarnessBinders`, which should also be in the body of the vcu118th LazyModuleImp definition?
username_1: And making it led_io lazy does not strike me as the right solution here.
username_0: I'm extending `VCU118TestHarnessImp` which calls `ApplyHarnessBinders`, so `led_io` is after that. `Bringup`'s extension of `VCU118` does a similar trick of extending this harness. I can make it non-lazy if I put it directly in `vcu118/TestHarness.scala` before `ApplyHarnessBinders`, but I once again run into the "unable to name port" issue.
username_1: Oh I see, I didn't realize you were extending the VCU118TestHarnessImp. I don't believe we support that flow currently, although it seems like a reasonable thing to add support for in the future.
Does just applying the following patch cause the naming error? It seems to work for me, and is how I expect a new IO to be implemented (obviously missing the HarnessBinder to drive it, but I don't think the driver of the IO affects the naming issue?).
```diff
diff --git a/fpga/src/main/scala/vcu118/TestHarness.scala b/fpga/src/main/scala/vcu118/TestHarness.scala
index ab6897c9..a0b51d72 100644
--- a/fpga/src/main/scala/vcu118/TestHarness.scala
+++ b/fpga/src/main/scala/vcu118/TestHarness.scala
@@ -101,6 +101,11 @@ class VCU118FPGATestHarnessImp(_outer: VCU118FPGATestHarness) extends LazyRawMod
_outer.xdc.addPackagePin(reset, "L19")
_outer.xdc.addIOStandard(reset, "LVCMOS12")
+ val led_io = IO(Output(Bool()))
+ _outer.xdc.addPackagePin(led_io, "AT32")
+ _outer.xdc.addIOStandard(led_io, "LVCMOS12")
+ led_io := false.B
+
val resetIBUF = Module(new IBUF)
resetIBUF.io.I := reset
```
Status: Issue closed
username_0: The naming issue seems to have been resolved with `suggestName` in my module option.
```scala
val myModuleOpt = p(myModuleKey).map { k =>
// More code here
val led = InModuleBody {
val led = IO(Output(Bool())).suggestName("led_out")
led := myLazyModule.module.led
led
}
led
}
```
It also seems I can place this in an extension of `VCU118FPGATestHarnessImp` if I make it a `lazy val`. I'm still new to Scala and Chisel so not sure on the repercussions of that, but thanks for helping me debug! |
JuliaDynamics/RecurrenceAnalysis.jl | 401972359 | Title: Return Type of rqa()
Question:
username_0: Why is the return type of rqa a Dict?
With a NamedTuple you still have the clear connection between the values and the name of the metric.
Furthermore, the return values are ordered, and it is a little bit cheaper because it needs fewer allocations.
See https://discourse.julialang.org/t/namedtuples-vs-dict/13119
I ran some tests with NamedTuples, with the code on my branch.
For a 100 x 100 RecurrenceMatrix, the number of allocations is reduced from 257 to 209 and the running time is reduced from 112.5 microseconds to 107.8 microseconds.
For me this is important, because I want to apply rqa to a large number of pixels (at least a million) with around 100 timesteps.
Answers:
username_1: Thanks for the concern.
Are you willing to open a Pull Request so we can see your code and suggestion, and thus understand it further?
To answer your question, there is no specific reason for the return type to be a dictionary.
username_1: I just have to warn you that as the trajectory size increases, the dictionary-induced allocations become insignificant (as they are a flat constant). So this shouldn't affect the performance of large matrices in any significant manner, but it still has a measurable impact on small matrices (and this also affects the `@windowed` implementation).
Status: Issue closed
|
nodeschool/hamburg | 65292798 | Title: JSUnconf Nodeschool Anyone?
Question:
username_0: The JSUnconf is on the horizon, so probably it would be nice to do a nodeschool then. i know about the js code retreat on friday before the event. so any suggestions @nodeschool/hamburg-mentors ?
Answers:
username_1: I can help regarding promo :)
username_1: sorry for not mentoring again, but i am organizing the unconf
username_2: I don't have a ticket yet. But I wanted to come to Hamburg for it. Would be nice to have a nodeschool.
username_2: Okay I am in. I would prefer Fri-Sun as a date, but maybe I can also arrange Thursday or Monday. Not sure yet.
username_0: screw it, i can't do it this time
Status: Issue closed
|
kubernetes/org | 376768913 | Title: REQUEST: New membership for @hardikdr
Question:
username_0: ### GitHub Username
@username_0
### Organization you are requesting membership in
@kubernetes-sigs
@kubernetes
### Requirements
- [x] I have reviewed the community membership guidelines (https://git.k8s.io/community/community-membership.md)
- [x] I have enabled 2FA on my GitHub account (https://github.com/settings/security)
- [x] I have subscribed to the kubernetes-dev e-mail list (https://groups.google.com/forum/#!forum/kubernetes-dev)
- [x] I am actively contributing to 1 or more Kubernetes subprojects
- [x] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
- @roberthbailey
- @username_1
### List of contributions to the Kubernetes project
- PRs reviewed / authored
* https://github.com/kubernetes/kubernetes/pull/54385
* https://github.com/kubernetes/website/pull/9321
* https://github.com/kubernetes-sigs/cluster-api/pull/483
* https://github.com/kubernetes-sigs/cluster-api/pull/488
* https://github.com/kubernetes-sigs/cluster-api/pull/511
* https://github.com/kubernetes-sigs/cluster-api/pull/531
- Issues responded to
* https://github.com/kubernetes/kubernetes/issues/12455
* https://github.com/kubernetes-sigs/cluster-api/issues/490
* https://github.com/kubernetes-sigs/cluster-api/issues/47
* https://github.com/kubernetes/kubernetes/issues/12455
- SIG projects I am involved with https://github.com/kubernetes-sigs/cluster-api
/area github-membership
Answers:
username_1: +1 from me!
@username_0 Thank you for all your work so far, and happy to have you as part of the org 😍
username_2: Hey there @username_0. Unfortunately, @username_1 cannot sponsor you for membership in the @kubernetes org, but CAN sponsor you for the @kubernetes-sigs org.
An approver/reviewer in @kubernetes, may sponsor someone for the @kubernetes org or any of the related organizations (@kubernetes-sigs, @kubernetes-incubator etc); as long as its a project they're involved with.
However, this does not work the other way around. A sponsor that is only an approver/reviewer in @kubernetes-sigs cannot sponsor someone for membership in @kubernetes. They are scoped just to the org they're associated with.
If you'd like to amend your request to just @kubernetes-sigs, we can proceed with getting your membership processed. Otherwise, if you could, please find another sponsor from the @kubernetes org.
If you have any questions don't be afraid to ask.
Thanks!
username_3: +1
I am happy to act as a sponsor for @username_0. He has made numerous contributions to the Cluster API subproject and is an active participant in meetings. He has also written a [proposal](https://docs.google.com/document/d/12TsBPn1lfMk50_yydzbXNZ9PT-8-88o4tSNi7eqPCVg/edit#heading=h.fafx2l7zb1b4) for a feature.
Also, just for complete clarity, I am listed in the following OWNERS files: https://cs.k8s.io/?q=username_3&i=fosho&files=OWNERS&repos=
username_0: Thanks a lot @username_3, for being a sponsor 🙂
@username_2 Thank you for clarification. I updated the Issue to include @username_3 as a sponsor. Please let me know if there is anything required yet. 🙂
username_4: Invite will be sent out as soon as #220 merges. Welcome to @kubernetes! 🚀 |
cargomedia/puppet-packages | 95625107 | Title: Fix lvm spec
Question:
username_0: Secondary disk in virtualbox (`/dev/sdb`?) was removed here: https://github.com/cargomedia/puppet-packages/pull/916
(The problem with the previous approach was that the `Vagrantfile` is declarative, so we can't check if the disk's file already exists with ruby, and then call `vb.customize ['createhd' ...` if not, because it will run this customization multiple times.)
@username_1 maybe there's a way to refactor the LVM spec to work differently? Could we create a "pseudo disk" device on the fly?
Answers:
username_1: Now using a loop disk backed by a temp file. The minimum size must be around 40MB for an xfs filesystem; 10MB did not work. Using 100MB; did not fully investigate.
@username_0 please review
username_0: Syntax check is failing.
Rest looks good.
username_1: @username_0 please review
username_1: These checks were failing:
```
modules/lvm/spec/default/manifests.pp - WARNING: double quoted string containing no variables on line 10
modules/lvm/spec/default/manifests.pp - ERROR: single quoted string containing a variable found on line 9
```
username_0: The first line has *bash* variables I guess, so needs to use single quotes *or* escaping?
username_1: Changes bash variable expansion to alternative syntax
@username_0 please review
username_0: lgtm |
hotosm/tasking-manager | 898161554 | Title: Add the possibility to see project instructons on the project detail page
Question:
username_0: Some solutions:
- a button that enables the expand the project description section with the instructions
- a button that opens a modal with the instructions content
Preference to the first option, as it's a best practice to reduce the use of modals as much as possible. |
nigatou/FirstAndroidStudio | 752639263 | Title: Installation did not succeed
Question:
username_0: 
I expect a working application, but instead I get this error and a blank screen.
Android 11
Answers:
username_1: https://medium.com/@aedwin905/solved-failure-calling-service-package-broken-pipe-32-3b860c7e04bb
Status: Issue closed
username_0: nice |
react-navigation/react-navigation | 570086991 | Title: createMaterialTopTabNavigator initial route not working with tabBarComponent
Question:
username_0: ### Current behaviour
When using "createMaterialTopTabNavigator" and "tabBarComponent", the TabNavigator does not stay on its initial route. Instead it quickly switches to the first route.
This only seems to reproduce in a react-native project (not using Expo), on Android only.
This works fine on Expo (Tested on snack.expo.io).
This works fine without "tabBarComponent" property.
This used to work fine on my other project on react-native 0.57.x
### Expected behaviour
TabNavigator should stay on initial route.
### Code sample
```javascript
// App.js
import React from "react";
import { Text } from "react-native";
import { createAppContainer } from "react-navigation";
import { createMaterialTopTabNavigator } from "react-navigation-tabs";
const TabNavigator = createMaterialTopTabNavigator(
{
test1: () => <Text>test1</Text>,
test2: () => <Text>test2</Text>,
},
{
tabBarComponent: () => null,
initialRouteName: "test2",
}
);
const AppContainer = createAppContainer(TabNavigator);
export default () => {
return <AppContainer />;
};
```
This displays "test2" for a very short time (a few frames), then switches to "test1" without any action.

### Your Environment
"react": "16.9.0",
"react-native": "0.61.1",
"react-native-gesture-handler": "^1.4.1",
"react-native-reanimated": "^1.3.0",
"react-native-screens": "^1.0.0-alpha.23",
"react-navigation": "^4.0.10",
"react-navigation-tabs": "^2.5.5"
Answers:
username_1: Closing since no one provided a repro. If you have a minimal repro, feel free to open an issue.
Status: Issue closed
|
pulibrary/approvals | 566958178 | Title: Reporting exceptions
Question:
username_0: Should we have exceptions to the hierarchy view of the reports for department heads? Right now you can only see your reports and below, so @jpstroop would not have access to other departments' requests. Should we add an exception?
Status: Issue closed |
youzan/vant-weapp | 423557647 | Title: 关于VantComponent内置的computed和watch的一些困惑
Question:
username_0: **Describe the problem**
Why are the variables inside computed evaluated three times?
**Screenshot**

Answers:
username_1: In vant, `computed` is implemented so that it runs once on every `setData`.
Status: Issue closed
|
NearHuscarl/flutter_login | 673727938 | Title: AnimationControllers should be disposed before calling super.dispose()
Question:
username_0: https://github.com/NearHuscarl/flutter_login/blob/823d79f9feaf194e794c89062d06c8617f6af899/lib/flutter_login.dart#L332
```
The following assertion was thrown while finalizing the widget tree:
AuthCardState#d6475(tickers: tracking 2 tickers) was disposed with an active Ticker.
AuthCardState created a Ticker via its TickerProviderStateMixin, but at the time dispose() was called on the mixin, that Ticker was still active. All Tickers must be disposed before calling super.dispose().
```<issue_closed>
Status: Issue closed |
onflow/cadence | 1113105825 | Title: Include static type information of containers in JSON encoding
Question:
username_0: ## Context
At a high level, containers (arrays, dictionaries, optionals) have two kinds of types associated with them:
- The run-time type / concrete types of the values in the container
- The static type of the container
For example, the program `let dict: {String: Number} = {"foo": 0 as UInt256}` creates a dictionary value with static type `{String: Number}`, and the first element has type `String`/`UInt256`.
The JSON Cadence spec was created when we did not have static types for containers.
Currently, the encoding of the example is as follows and lacks the static type:
```json
{
"type": "Dictionary",
"value": [{
"key": { "type": "String", "value": "foo" },
"value": { "type": "UInt256", "value": "0" }
}]
}
```
When we added static types to containers, we didn’t update the encoding (Cadence value -> JSON) to include the static type, so when decoding (JSON -> Cadence value), Cadence values end up without a static type.
For backwards compatibility and this lack of static type information, the JSON argument decoding in Cadence actually infers the static type from the element values that are sent. This works in most cases, but it is implicit, and it would be better if it were explicit and covered all values.
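For illustration, the encoding could carry the static type explicitly, along these lines (a sketch only; the `staticType` field name is an assumption, not part of the current spec):
```json
{
  "type": "Dictionary",
  "staticType": "{String: Number}",
  "value": [{
    "key": { "type": "String", "value": "foo" },
    "value": { "type": "UInt256", "value": "0" }
  }]
}
```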
## Definition of Done
- Extend the JSON Cadence spec to have a notion for static types in containers (dictionaries, arrays, optionals)
- Update encoding to include static type in JSON
- Update decoding to use static type if provided in JSON |
DataKind-DC/capital-nature-ingest | 456006813 | Title: ENH: New event source - Nova Parks
Question:
username_0: ## Expected Behavior
Get events from https://www.novaparks.com/things-to-do
## Current Behavior
We don't have it yet!
Answers:
username_1: Looking into it
username_0: Hey @username_1! Sorry I didn't see your comment earlier. I've just assigned this to you so that others are aware. Feel free to reach out here or in Slack if you've got any questions or need help.
username_0: Hey @username_1. Any progress on this? Otherwise I'll unassign you so others can work on it.
username_2: It looks like NOVA Parks is using eventbrite
username_0: Do you want to take this one @username_2 ?
username_2: Sure 👍
Status: Issue closed
username_0: Closed by d4f6acd |
BALKANGraph/OrgChartJS | 480568772 | Title: how to add node
Question:
username_0: Hello, how do I add a node only under "<NAME>"?
This is my example:
https://jsfiddle.net/jaafar/kz54ucbL/62/
Answers:
username_1: See the "Using addNode function" section of [this](https://balkangraph.com/OrgChartJS/Docs/CreateOrgChartProgrammatically) help article.
Status: Issue closed
|
phoenixframework/phoenix_html | 102614422 | Title: Showing errors for fields inside `inputs_for` doesn't work
Question:
username_0: I get this -> `could not generate inputs for :profile. Check the field exists and it is one of embeds_* or has_*`
Steps to reproduce the error:
1) Pull this -> https://github.com/username_0/phoenix-test-app
2) Go to `/registration`
3) Create any user and remember it's username
4) Open the form again
5) Enter some other email, password, password confirmation and the same username you used in the previous form (so you'd get an error that this username is already taken)
6) Submit
Answers:
username_1: I can't reproduce this as well. Using the same versions as you.
username_1: Nevermind, I forgot to migrate. :D
username_0: @username_1 it's ok :)
username_1: Fixed in Ecto master! :D
Status: Issue closed
|
OperationNT414C/FakeCamera | 1077542499 | Title: ur0 partition
Question:
username_0: I know the project is abandoned; however, I would like to know if I'm doing it wrong. The plugins now work on the ur0 partition instead of ux0. The Tearaway game started working after I installed the FakeCamera plugin, but I can't load the image into the game, even though I tried every possible way to reference the file. Maybe it doesn't work, or I have to try to use it on ux0; or is the colleague who opened a poll about ioplus correct?
Answers:
username_1: Hello, long time I did not started my PlayStation TV : I get back the BMP image I used to check this feature.
You can download it (with some additional setup files, like "kuio" because I am not sure it can be found anymore) here:
[TestImageAndKernelSetup.zip](https://github.com/username_1/FakeCamera/files/7697561/TestImageAndKernelSetup.zip)
Please check this image first, and don't forget that, even if you put FakeCamera in "ur0", the path where the plugin tries to get the image is always "ux0:/data/FakeCamera/".
Please also check with the "fakecamerakbmp.suprx" version of the plugin (by adding "kuio.skprx" in your "config.txt", "*KERNEL" part). With this configuration, you must allow "unsecured homebrews".
Once you validate that you can still at least see this image, you can replace it with other BMP images you generate yourself and try to find the "good format".
username_0: Thank you so much for answering me. Yes, I tried to use it on both ux0 and ur0, but nothing conclusive. I'll check the file you sent me and, with your guidance, see if I can make it work here on my PS TV. One more question: to load the image for the game, do I put the line in the config.txt of the tai folder, or in the data folder?
username_0: Moving on here just to update you: I tried it after you instructed me, and unfortunately I can't get the image to appear in the game (I even tried using the pro camera to help me in the test). But I'm glad that at least the game is working. A question: will it be possible one day to connect a webcam to the PS TV to simulate the camera like on the Vita?
username_1: It would require developing webcam drivers (including the fact that some cameras don't have a generic protocol). It would be a very long and complex project and unfortunately, I do not have the time to do it.
Status: Issue closed
username_0: Yes, I installed the plugin (dsmotion). But regarding the part I asked you about, calling the image in the game: where do I put the command line? In tai (config.txt) or elsewhere?
username_1: Can you share your "config.txt" file?
There is no "in-game command" to activate an image with the plugin, only logic in the plugin that looks at successive paths described in the readme:
"ux0:data/FakeCamera/TITLEID00_Front.bmp" or "ux0:data/FakeCamera/TITLEID00_Back.bmp" (depends on front or back camera use)
"ux0:data/FakeCamera/TITLEID00.bmp"
"ux0:data/FakeCamera/ALL_Front.bmp" or "ux0:data/FakeCamera/ALL_Back.bmp" (depends on front or back camera use)
"ux0:data/FakeCamera/ALL.bmp" |
DocRaptor/docraptor-ruby | 380790659 | Title: [Question] Webhook for async PDF generation
Question:
username_0: Hello. I was wondering if there are any plans to POST to a webhook when generating a PDF via `async_doc`. That would simplify a lot of error handling and looping on the client side.
Answers:
username_1: We have that already, or should. Check out the [`callback_url` attribute](https://github.com/DocRaptor/docraptor-ruby/blob/b50a8ce235a5742afd729c7f24c9e30da6876f62/lib/docraptor/models/doc.rb#L100). The DocRaptor API documentation is [here](https://docraptor.com/documentation/api#api_callback_url).
username_0: Ah, perfect! Thank you.
Status: Issue closed
|
ecederstrand/exchangelib | 420485848 | Title: ExchangeService UseDefaultCredentials Property
Question:
username_0: Within the Microsoft Exchange Web Services API v2.2 there is a Property within the `Microsoft.Exchange.WebServices.Data.ExchangeService` class called `UseDefaultCredentials`. The purpose of this is to get or set a value that indicates whether the credentials of the user who is currently logged on to Windows should be used to authenticate with Exchange Web Services (EWS).
Reading through the documentation, I don't see that this feature is available. Am I missing something or is there another way to achieve the above result?
Answers:
username_1: A quick googling suggests that this feature is only available in the .Net-based EWS Managed API, not the XML-based SOAP API?
username_0: Hmm, that's too bad. I did a bit of a dive into the XML elements and I'm not seeing much of anything to achieve this. There's probably a valid reason, which I just don't understand, why it's not available.
Thanks for responding!
Status: Issue closed
username_1: At least if you use Kerberos auth, you should be able to achieve single sign-on. See #96 |
jlippold/tweakCompatible | 413715463 | Title: `Alkaline` working on iOS 12.1
Question:
username_0: ```
{
"packageId": "com.fortysixandtwo.alkaline",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.fortysixandtwo.alkaline",
"deviceId": "iPhone9,4",
"url": "http://cydia.saurik.com/package/com.fortysixandtwo.alkaline/",
"iOSVersion": "12.1",
"packageVersionIndexed": true,
"packageName": "Alkaline",
"category": "Tweaks",
"repository": "ModMyi (Archive)",
"name": "Alkaline",
"installed": "1.3",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 2 working reports.",
"id": "com.fortysixandtwo.alkaline",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Themeable status bar battery replacement for iOS 7.",
"latest": "1.3",
"author": "magn2o",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
giansalex/peru-consult-api | 817525554 | Title: error consulta ruc
Question:
username_0: 
Answers:
username_1: Could you check in the `var/log` folder whether there is a log file?
username_0: The URL keeps working, but when the RUC query is made on the SUNAT page, it redirects to another site:
https://e-consultaruc.sunat.gob.pe/cl-ti-itmrconsruc/frameCriterioBusqueda.jsp

username_2: The same thing has been happening to me since around last night. It continues today.
username_1: I've noticed some instability since they added reCAPTCHA to their pages; if you know of other URLs where the same information can be obtained, that could be an alternative.
username_1: @username_0 @username_2 the URLs have been updated in [peru-consult#36](https://github.com/username_1/peru-consult/pull/36); it seems that was causing the instability.
Status: Issue closed
|
jlippold/tweakCompatible | 286582617 | Title: `RocketBootstrap` working on iOS 10.3.3
Question:
username_0: ```
{
"packageId": "com.rpetrich.rocketbootstrap",
"action": "working",
"userInfo": {
"packageStatus": "Unknown",
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"author": "<NAME>",
"iOSVersion": "10.3.3",
"url": "http://cydia.saurik.com/package/com.rpetrich.rocketbootstrap/",
"latest": "1.0.5",
"name": "RocketBootstrap",
"category": "Tweaks",
"packageIndexed": true,
"iOSVersionAllowed": true,
"commercial": false,
"depiction": "Support library allowing tweaks to communicate with sandboxed processes",
"packageInstalled": true,
"id": "com.rpetrich.rocketbootstrap",
"packageName": "RocketBootstrap",
"repository": "BigBoss",
"deviceId": "iPhone5,3",
"packageVersionIndexed": false,
"packageCategoryAllowed": true,
"packageId": "com.rpetrich.rocketbootstrap"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
square/javapoet | 351390040 | Title: TypeName.rawClass / toRawClass / toClassName
Question:
username_0: For codegen I often have a `TypeName` that I'm carrying around where I want to generate a peer class. This requires some ugly `instanceof` checks and conditional unwrapping to get a `ClassName` which I can then `peerClass` off of.
It would be nice to have a method which returned a `ClassName` and threw an exception if the name was a primitive.
Answers:
username_0: What do array types do? Throw?
username_1: I would like to know: is your requirement to get a ClassName from a TypeName? I also want to know the reason for throwing an exception when a primitive is encountered.
libgdx/libgdx | 895404807 | Title: Gdx.graphics.getDensity() returns infinity inside VirtualBox
Question:
username_0: #### Issue details
Calling `Gdx.graphics.getDensity()` inside VirtualBox (Linux guest/host) returns `Infinity` (because the display size reported by LWJGL is 0). I understand that on some platforms the display density cannot be computed, but I think there should be a special value UNKNOWN or an exception thrown to force client code to handle this case. I got an exception in the JNI code of FreeType (because I use density for font size scaling), and getDensity() doesn't indicate it can return a completely invalid value.
It returns an estimate in any case, so if the value is too low / too high it can just default to 1 or something.
#### Version of LibGDX and/or relevant dependencies
GDX version `1.10.0` (latest at time of writing)
#### Please select the affected platforms
- [ ] Android
- [ ] iOS
- [ ] HTML/GWT
- [ ] Windows
- [x] Linux
- [ ] MacOS
Answers:
username_1: Could you please try whether the issue also happens with the [LWJGL 3 backend](https://gist.github.com/username_1/eb37cb4f7a03d006b3a0ecad27292a2d)?
username_0: This is with Lwjgl3. It is not trivial to switch to 2 at this point :)
username_2: Good point that it already returns an estimate; I think you're right in your interpretation of the docs that it does not indicate it can return a non-finite value, and an estimate shouldn't generally be infinite or NaN. A density of 0 would clearly indicate that something is wrong with the estimation, since there's no way you could have a screen with 0 pixels per inch/cm that actually works. Same for a negative number. I'm not sure if it's better to just return 1, indicating "some normal screen," if the screen is actually unusable at that point in time. A quick workaround might be to use Float.isFinite() as your check for an invalid result, since that would pick up both infinities and also NaN (and NaN is tricky to check for normally). I don't know if getDensity() can return NaN, but if it's dividing a positive number by 0.0f already to get Infinity, it's not unreasonable that it could divide 0 by 0.0f and get NaN.
username_1: I think you should also report this to the [GLFW project](https://github.com/glfw/glfw), since the bug seems to be [on their end](https://github.com/libgdx/libgdx/blob/add6031b113d9e255f791656581bdffbbff83f89/backends/gdx-backend-lwjgl3/src/com/badlogic/gdx/backends/lwjgl3/Lwjgl3Graphics.java#L253).
username_3: Hi there, the following is a possible fix:
```
AbstractGraphics.java, lines 10-20
/**
* Facade method that estimates the pixel density of the underlying Graphics implementation.
* @return The approximated pixel density, or 1 if this could not be determined
*/
@Override
public float getDensity () {
float ppiX = getPpiX();
// Guard against invalid values
if (ppiX <= 0 || Float.isNaN(ppiX)) return 1f;
return ppiX / 160f;
}
```
The solution posted by @username_2 sounds good here, but unfortunately, `Float.isFinite()` is only available from Java 1.8 and up. I did not have the rights to push to a new branch, and open up a Pull Request. So here you go 😊
username_2: @username_3 That won't help with the original problem, which is a density of Infinity. Infinity wouldn't be considered `<= 0` and `isNaN()` returns false. NaN is an oddity, but looking at the OpenJDK source code gives a solution:
```java
public float getDensity () {
float ppiX = getPpiX();
return (ppiX > 0 && ppiX <= Float.MAX_VALUE) ? ppiX / 160f : 1f;
}
```
The important thing here is that all valid finite floats are `<= Float.MAX_VALUE` (NaN is not, so it gets ruled out), and we only want positive and non-zero floats, so the check for `> 0` also rules out negative infinity. The check against MAX_VALUE is used by isFinite() in Java 8. I changed `getDensity()` to single-return form because I think some JDKs can analyze that form a little easier, or at least it won't be harder.
There's still the issue that an invalid PPI is now indistinguishable from a common one, `1f`. I think in the cases where we would get an invalid PPI, the display density doesn't really matter as long as it doesn't cause problems, so `1f` is probably fine; users could check on their own if `getPpiX()` returns a value they can handle, also.
username_3: @username_2 Great, indeed this improves it. I don't think that the indistinguisibility is bad, because `getDensity()` returns an approximate anyways. To me, `1f` (one pixel per unit) is true always, except when the screen is better than that.
Anyways, that's my two cents 🙂
username_2: That's true, it is approximate, so 1f as a general guess is probably fine. It could maybe do something like 0.015625f density as an indicator of something very strange happening, but still a technically possible resolution. That's 1/64, which would correspond to 2.5 pixels per inch or less than 1 pixel per centimeter, and that seems unlikely to appear. Maybe a monome, or some other similar hardware, would have that approximate resolution, but I doubt one of those is a valid display for LWJGL3. But yeah, that's not needed because `getPpiX()` is still available and can be used to check if the backing information is something strange, even if `getDensity()` returns 1f for valid and invalid situations. Thanks for hopping on to the Discord, by the way; if you encounter any issues with adding a PR, feel free to ask in the `contributing` channel there (there are several "gotcha" problems that can affect PRs, chief among them formatting being automatically changed).
username_1: This has been fixed in #6539, however, it would still be appreciated if you reported this issue (i.e. `glfwGetMonitorPhysicalSize` returns the wrong display size) to the [GLFW project](https://github.com/glfw/glfw) with some information about your specific setup.
Status: Issue closed
username_0: @username_1 I will report to GLFW. The setup is not too crazy -- virtualbox + linux (Arch). In any case I'm glad you worked around it. There is always a chance that GLFW will return wrong data. |
felangel/bloc | 1027995642 | Title: docs: https://bloclibrary.dev/#/testing and https://pub.dev/packages/replay_bloc#using-a-replaybloc
Question:
username_0: ## Tested
dart 2.14.2
dev_dependencies:
bloc_test: ^8.3.0
test: ^1.16.0
dependencies:
bloc: ^7.2.1
bloc_concurrency: ^0.1.0
hydrated_bloc: ^7.1.0
replay_bloc: ^0.1.0
## https://bloclibrary.dev/#/testing
```
import 'package:test/test.dart';
import 'package:bloc_test/bloc_test.dart';
void main() {
group('CounterBloc', () {
CounterBloc? counterBloc; // <<< should be nullable as it's initialized in setUp()
setUp(() {
counterBloc = CounterBloc();
});
test('initial state is 0', () {
expect(counterBloc!.state, 0);
});
blocTest<CounterBloc, int>(// <<< `<CounterBloc, int>` seems to be required
'emits [1] when CounterEvent.increment is added',
build: () => counterBloc!,//<<< ! should be added
act: (bloc) => bloc.add(Increment()),
expect: () => [1],
);
blocTest<CounterBloc, int>( // <<< `<CounterBloc, int>` seems to be required
'emits [-1] when CounterEvent.decrement is added',
build: () => counterBloc!, //<<< ! should be added
act: (bloc) => bloc.add(Decrement()),
expect: () => [-1],
);
});
}
```
## https://pub.dev/packages/replay_bloc#using-a-replaybloc
```
Future<void> main() async { // <<< main must be async to use `await` below
// trigger a state change
final bloc = CounterBloc()..add(Increment());
// wait for state to update
await bloc.stream.first; // <<< should use stream.first
print(bloc.state); // 1
// undo the change
bloc.undo();
print(bloc.state); // 0
// redo the change
bloc.redo();
print(bloc.state); // 1
}
``` |
kaitai-io/kaitai_struct_formats | 184522032 | Title: Document list of supported encodings
Question:
username_0: https://github.com/kaitai-io/kaitai_struct/wiki/Expressions has this example:
```
seq:
- id: filename_len
type: u4
- id: filename
type: str
size: filename_len
encoding: UTF-8
```
Other than `UTF-8`, which encodings are supported? Would be useful to have this list documented somewhere :)
Answers:
username_1: Unfortunately, there's no list of encodings per se in Kaitai Struct. Most of the time, the encoding ID is passed as-is to the target language, so it ultimately depends on the target language.
For example:
* [Java encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html)
* Many languages (i.e. our C++ implementation, PHP, may be Python / Ruby) rely on system-wide iconv, which has [something like this list](https://gist.github.com/hakre/4188459).
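As a rough illustration, in a JVM target the encoding string from the `.ksy` spec typically ends up handed to the runtime's charset machinery more or less verbatim (a hedged sketch, not the actual runtime code):
```scala
import java.nio.charset.Charset

// Whether an encoding name "works" is decided by the target platform's
// Charset.forName lookup, not by Kaitai Struct itself.
def decodeString(bytes: Array[Byte], encoding: String): String =
  new String(bytes, Charset.forName(encoding))
```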
It would be a good idea to make a KS "standard" of encoding names and do a hard mapping table for every supported language, but it's a fair amount of work and, as of now, it still works in 99% of cases, I guess.
Status: Issue closed
username_1: Closing this one to close issues for this repository for good. Issue reopened at https://github.com/kaitai-io/kaitai_struct/issues/116 |
Macyrate/Macyrate.github.io | 509536097 | Title: Hello World | 博丽吹笛分社
Question:
username_0: https://hakurei.red/2019/10/18/hello-world/
Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub. Quick
Answers:
username_0: test comment
username_0: test comment 2
Status: Issue closed
|
jajman/ColonMSI | 1050709571 | Title: segment the whole silde image
Question:
username_0: Thanks for your excellent work! But I wonder, does the code contain the segmentation part?
The sentence in your paper says "we segmented the WSIs into non-overlapping 360 x 360 pixel patches at 20x magnification". If not, could you please tell me how to segment the WSI into small patches?
klausahrenberg/WThermostatBeca | 613833279 | Title: Should I proceed with 2.22.03.00712
Question:
username_0: Hi
Just got my bht-002GB-B branded Minco Heat with sticker on the box saying BHT-002GBLwifi. The board says bac-002-wifi-2019-0409.
I want to get the floor temp which is not available via the HA tuya integration so your MQTT option is what I want.
I am nervous as the version discussed in another issue thread referred to versions which sound newer.
I don't want to brick it, what do you think?
Answers:
username_0: So far I've tried reading it and get either "timed out waiting for packet header" or "invalid head of packet (0x00)".
I've tried a range of baud rates.
Any thoughts?
username_1: I would not do the soldering method; I would update it using Tuya Convert instead.
Just be aware you may brick the device if it is not compatible.
Tuya Convert will backup the original firmware, but you would need to be able to upload it if this firmware doesn't work
username_2: I agree, try with Tyua-convert, if possible. If not and you want to try the soldering way, please follow my Flashing instruction. The first thing is to make a backup of the original firmware. This backup is without risk and if successful you know, that the flashing of new firmware is possible. So the risk for bricking the device is quite low, if the backup worked well.
username_2: The timeout means something with your connection is wrong, or the ESP is not in programming mode. Remove the touch unit from the relay part for flashing. Power the ESP for flashing only over the USB programmer. Connect GPIO0 to GND when powering up the programmer to get the ESP into program mode. Try the backup first. If the backup runs well, the flashing should be possible.
username_0: Removing the display? That's worth a try.
I explored that gently and found it a bit resistant to coming apart. Is there a clue on how to do it?
username_0: Ok, the display is soldered to the board on this model, it's not on a connector. So tuya-convert is the next plan.
Status: Issue closed
|
prolike/prolike.io | 552905350 | Title: Get statuses
Question:
username_0: Restructure the GET of statuses.
I will probably use some of the old code, but in light of having a new API I can't predict whether it is going to be quick. This will also need to be fetched from the GitHub API, therefore one more request :(
@username_1 Prio?
Answers:
username_1: Hmmm - what does "get statuses" mean? What statuses? @username_0
username_0: @username_1 Getting the statuses we write for our customers and label as status.
 |
bitcoin/bitcoin | 858261637 | Title: feature_notifications.py cirrus test failure: `IndexError: string index out of range`
Question:
username_0: https://cirrus-ci.com/task/5589966861369344
```
self.expect_wallet_notify([(bump2, blockheight2, blockhash2), (tx2, -1, UNCONFIRMED_HASH_STRING)])
File "/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/
test/functional/feature_notifications.py", line 172, in expect_wallet_notify
assert_equal(text[-1], '\n')
IndexError: string index out of range
```
Could not reproduce locally.
This is a recent change (March 15, 2021) from 06e1fb0b170 in #21141.
Answers:
username_0: cc @username_1
username_1: Thank you for the notification, I'll fix it.
username_0: Hit this again today https://cirrus-ci.com/task/6401787486797824
Status: Issue closed
|
songqsh/foo1 | 462899612 | Title: instability on value iteration for hjb_1d_dirichlet
Question:
username_0: If the mesh number is chosen to be 6, the computation is good. However, if the mesh number is larger (for example 20), it gives a bad result. See the implementation at https://github.com/username_0/foo1/blob/master/src/value_iter_dirichlet_1d_v01.ipynb
Answers:
username_0: If one wants an even more accurate result, one should in general set a bigger NUM, as well as a smaller TOL and a bigger MAX_ITER. Only making NUM bigger without changing the other parameters results in a bad computational result.
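One plausible mechanism: on a finer mesh each sweep changes the solution less, so a fixed TOL can stop the iteration before it has actually converged. A generic sketch of the loop structure (not the notebook's actual solver; the update rule below is only a placeholder):
```scala
// Hypothetical fixed-point value iteration; NUM/TOL/MAX_ITER mirror the
// notebook's parameters, but the update is a stand-in for the HJB scheme.
def valueIteration(num: Int, tol: Double, maxIter: Int): Array[Double] = {
  var v = Array.fill(num + 1)(0.0)
  var iter = 0
  var diff = Double.MaxValue
  while (diff > tol && iter < maxIter) {
    val vNew = Array.tabulate(num + 1) { i =>
      if (i == 0 || i == num) 0.0 // Dirichlet boundary values
      else 0.5 * (v(i - 1) + v(i + 1)) + 1.0 / (num * num) // placeholder update; per-sweep change shrinks as num grows
    }
    diff = v.indices.map(i => math.abs(v(i) - vNew(i))).max
    v = vNew
    iter += 1
  }
  v
}
```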
Status: Issue closed
|